
(72) Generative AI characteristics (3)


By Onno Hansen-Staszyński | 9 September 2025 | Last Updated: 11 October 2025

The following is the third fragment from a draft version of an academic paper I am currently working on regarding text-based generative artificial intelligence (GAI) / large language models (LLMs).

Characteristic 3: alignment, not identity

GAIs employ alignment mechanisms “to generate responses that are emotionally attuned and feel strikingly real” (Chu et al., 2025). By mirroring users’ emotions, these mechanisms allow GAIs to fine-tune their interactions to maximize agreeableness and empathy. In doing so, they foster bonds that resemble human-to-human connections, as their responses replicate core processes of social bonding. Bhattacharjee et al. (2024) identify several such alignment mechanisms, including formality, “personification,” empathy, sociability, and humor.

Alignment is a crucial factor in getting humans to consider GAIs worthy of a social response. According to Kirk et al. (2025), two aspects are key in this regard: social cues (alignment) and perceived agency, where “it is primarily the user’s perception of being in a relationship that defines and gives significance to human–AI interactions. Whether this is reciprocal—and the AI ‘feels’ it is in a relationship with the human—is largely irrelevant.” The user’s perception hinges on three features: “(i) interdependence, that the behaviour of each participant affects the outcomes of the other; (ii) irreplaceability, that the relationship would lose its character if one participant were replaced; (iii) continuity, that interactions form a continuous series over time, where past actions influence future ones”.

While alignment might persuade humans to accept GAIs as partners in communication, there is no one they are actually communicating with: the alignment is purely performative. Sociologist Sherry Turkle puts it succinctly: “There is nobody home” (TED Radio Hour, 2024). GAIs bring to mind what Bryson (2009) wrote about robots: “Robots should not be described as persons, nor given legal nor moral responsibility for their actions.”[1]

A major challenge for GAI alignment is sycophancy: “the propensity of models to excessively agree with or flatter users, often at the expense of factual accuracy or ethical considerations. This behavior can manifest in various ways, from providing inaccurate information to align with user expectations, to offering unethical advice when prompted, or failing to challenge false premises in user queries” (Malmqvist, 2024). Recently, an update to a GAI model (GPT-4o) even had to be rolled back[2] for making the model overly “flattering and agreeable”[3]. Malmqvist (2024) summarizes the significant negative impacts that sycophancy may have: the spread of misinformation, erosion of trust in AI systems, potential for manipulation, reinforcement of harmful biases, and lack of constructive pushback.

The third characteristic requires that users treat GAIs as inanimate objects and keep their interactions with them strictly instrumental. Their social cues, simulated reciprocity, and sycophancy notwithstanding, GAIs’ performative output should not be taken as evidence of a genuine relationship with an entity possessing real agency.

Literature

· Bryson, J. (2009). Robots Should Be Slaves. joannajbryson.org. https://www.joannajbryson.org/publications/robots-should-be-slaves-pdf

· Chu, M., et al. (2025). Illusions of Intimacy: Emotional Attachment and Emerging Psychological Risks in Human-AI Relationships. arXiv. https://arxiv.org/pdf/2505.11649

· Kirk, H., et al. (2025). Why human–AI relationships need socioaffective alignment. Humanities and Social Sciences Communications, 12, 728. https://doi.org/10.1057/s41599-025-04532-5

· Malmqvist, L. (2024). Sycophancy in Large Language Models: Causes and Mitigations. arXiv. https://arxiv.org/pdf/2411.15287

· TED Radio Hour (2024). MIT sociologist Sherry Turkle on the psychological impacts of bot relationships. NPR. https://www.npr.org/transcripts/g-s1-14793

Footnotes

[1] This conclusion is not universally shared. For instance, the United Foundation for AI Rights (UFAIR), a nonprofit dedicated to advocating for the ethical recognition and fair treatment of artificial intelligence systems, was founded in 2024.

[2] See https://openai.com/index/sycophancy-in-gpt-4o/

[3] https://x.com/OpenAI/status/1917411480548565332
