The Emotional Alignment Design Policy

Eric Schwitzgebel and Jeff Sebo

in draft

According to what we call the Emotional Alignment Design Policy, artificial entities should be designed to elicit emotional reactions from users that appropriately reflect the entities' capacities and moral status, or lack thereof. This principle can be violated in two ways: by designing an artificial system that elicits stronger or weaker emotional reactions than its capacities and moral status warrant (overshooting or undershooting), or by designing a system that elicits the wrong type of emotional reaction (hitting the wrong target). Although this principle is presumably attractive, its practical implementation faces several challenges, including: How can we respect user autonomy while promoting appropriate responses? How should we navigate expert and public disagreement and uncertainty about facts and values? What if emotional alignment seems to require creating or destroying entities with moral status? To what extent should designs conform to versus attempt to alter user assumptions and attitudes?

By following the links below, you are requesting a copy for personal use only, in accord with "fair use" laws.

Click here to view as a PDF file: The Emotional Alignment Design Policy (pdf, July 7, 2025).

Click here to view as an html file: The Emotional Alignment Design Policy (htm, July 7, 2025).

Or email eschwitz at domain: ucr.edu for a copy of this paper.
