How far do you think personalization of AI assistants to align with a user's tastes and preferences should go?

Tag your arguments with these indicators by copying and pasting them from here. Please use proper indentation for sub-arguments.

 * Argument in favor of the position
 * Argument against the position
 * Objection to the argument
 * Objection to the objection

Note that an argument for one position is usually an argument against another position. You do not need to duplicate your arguments; add each one once, in the most relevant section.

Position: AI assistants should be highly personalized to align with a user's tastes and preferences.
 * User gets information that is more relevant and useful
 * Helps individuals and organizations reach relevant audiences
 * Fosters more diverse content & perspectives
 * Keeps users engaged
 * Makes it harder to communicate those perspectives - Larry
 * Creates echo chambers
 * Can lead to predatory targeted advertising that is detrimental to a user or society
 * Impedes our shared understanding of reality - Larry
 * Can lead to addiction

Position: No, AI assistants should not be highly personalized to align with a user's tastes and preferences.

 * If a user's taste / preferences are toxic, AI should not further lead them down that path
 * Who determines what constitutes a "toxic" taste/preference?

Position: AI assistants should only be highly personalized to align with a user's tastes and preferences if the PII is stored locally on the user's device.

 * A centralized database of detailed psychological data on people is too much of a security risk.
 * That's basically what most people's phones are already.

Notes and references
Tastes and preferences overlap with goals and values, but not completely. Teachers, counselors, and coaches know that it is important to respect the tastes and preferences of their clients while simultaneously looking for ways to "up-level" their beliefs about what it is possible for them to achieve. This is a potential example of AI instilling one level of values while accepting and respecting another level of values.