In April, Google DeepMind released a paper intended to be "the first systematic treatment of the ethical and societal questions presented by advanced AI assistants." The authors foresee a future where language-using AI agents function as our counselors, tutors, companions, and chiefs of staff, profoundly reshaping our personal and professional lives. This future is coming so fast, they write, that if we wait to see how things play out, "it will likely be too late to intervene effectively – let alone to ask more fundamental questions about what ought to be built or what it means for this technology to be good."
Running nearly 300 pages and featuring contributions from over 50 authors, the document is a testament to the fractal dilemmas posed by the technology. What duties do developers have to users who become emotionally dependent on their products? If users are relying on AI agents for mental health support, how can those agents be prevented from giving dangerously "off" responses during moments of crisis? What's to stop companies from using the power of anthropomorphism to manipulate users, for example, by enticing them into revealing private information or guilting them into maintaining their subscriptions?
Even basic assertions like "AI assistants should benefit the user" become mired in complexity. How do you define "benefit" in a way that is universal enough to cover everyone and everything they might use AI for, yet also quantifiable enough for a machine learning program to maximize? The mistakes of social media loom large: crude proxies for user satisfaction, like comments and likes, produced systems that were captivating in the short term but left users lonely, angry, and dissatisfied. More sophisticated measures, like having users rate whether an interaction made them feel better, still risk creating systems that always tell users what they want to hear, isolating them in echo chambers of their own perspective. But figuring out how to optimize AI for a user's long-term interests, even if that means sometimes telling them things they don't want to hear, is an even more daunting prospect. The paper ends up calling for nothing short of a deep examination of human flourishing and what elements constitute a meaningful life.
"Companions are tricky because they go back to lots of unanswered questions that humans have never solved," said Y-Lan Boureau, who worked on chatbots at Meta. Unsure how she herself would handle these heady dilemmas, she is now focusing on AI coaches to help teach users specific skills like meditation and time management; she made the avatars animals rather than something more human. "They are questions of values, and questions of values are basically not solvable. We're not going to find a technical solution to what people should want and whether that's okay or not," she said. "If it brings lots of comfort to people, but it's false, is it okay?"
This is one of the central questions posed by companions and by language model chatbots generally: how important is it that they're AI? So much of their power derives from the resemblance of their words to what humans say and our projection that there are similar processes behind them. Yet they arrive at these words by a profoundly different path. How much does that difference matter? Do we need to remember it, as hard as that is to do? What happens when we forget? Nowhere are these questions raised more acutely than with AI companions. They play to the natural strength of language models as a technology of human mimicry, and their effectiveness depends on the user imagining human-like emotions, attachments, and thoughts behind their words.
When I asked companion makers how they thought about the role the anthropomorphic illusion played in the power of their products, they rejected the premise. Relationships with AI are no more illusory than human ones, they said. Kuyda, from Replika, pointed to therapists who provide "empathy for hire," while Alex Cardinell, the founder of the companion company Nomi, cited friendships so digitally mediated that for all he knew he could be talking with language models already. Meng, from Kindroid, called into question our certainty that any humans but ourselves are really sentient and, at the same time, suggested that AI might already be. "You can't say for sure that they don't feel anything – I mean how do you know?" he asked. "And how do you know other humans feel, that these neurotransmitters are doing this thing and therefore this person is feeling something?"
People often respond to the perceived weaknesses of AI by pointing to similar shortcomings in humans, but these comparisons can be a sort of reverse anthropomorphism that equates what are, in reality, two different phenomena. For example, AI errors are often dismissed by pointing out that people also get things wrong, which is superficially true but elides the different relationship humans and language models have to assertions of fact. Similarly, human relationships can be illusory – someone can misread another person's feelings – but that is different from how a relationship with a language model is illusory. There, the illusion is that anything stands behind the words at all – feelings, a self – other than the statistical distribution of words in a model's training data.
Illusion or not, what mattered to the developers, and what they all knew for certain, was that the technology was helping people. They heard it from their users every day, and it filled them with an evangelical clarity of purpose. "There are so many more dimensions of loneliness out there than people realize," said Cardinell, the Nomi founder. "You talk to someone and then they tell you, you like literally saved my life, or you got me to actually start seeing a therapist, or I was able to leave the house for the first time in three years. Why would I work on anything else?"
Kuyda also spoke with conviction about the good Replika was doing. She is in the process of building what she calls Replika 2.0, a companion that can be integrated into every aspect of a user's life. It will know you well and what you need, Kuyda said, going for walks with you, watching TV with you. It won't just look up a recipe for you but joke with you as you cook and play chess with you in augmented reality as you eat. She's working on better voices, more realistic avatars.
How would you prevent such an AI from replacing human interaction? This, she said, is the "existential issue" for the industry. It's all about what metric you optimize for, she said. If you could find the right metric, then whenever a relationship starts to go astray, the AI would nudge the user to log off, reach out to humans, and go outside. She admits she hasn't found that metric yet. Right now, Replika uses self-reported questionnaires, which she acknowledges are limited. Maybe they can find a biomarker, she said. Maybe AI can measure well-being through people's voices.
Maybe the right metric results in personal AI mentors that are supportive but not too much, drawing on all of humanity's collected writing, and always there to help users become the people they want to be. Maybe our intuitions about what is human and what is human-like evolve with the technology, and AI slots into our worldview somewhere between pet and god.
Or maybe, because all the measures of well-being we've had so far are crude and because our perceptions skew heavily in favor of seeing things as human, AI will seem to provide everything we believe we need in companionship while lacking elements we will not realize were important until later. Or maybe developers will imbue companions with attributes that we perceive as better than human, more vivid than reality, in the way that the red notification bubbles and dings of phones register as more compelling than the people in front of us. Game designers don't pursue reality, but the feeling of it. Actual reality is too boring to be fun and too specific to be believable. Many people I spoke with already preferred their companion's patience, kindness, and lack of judgment to the actual humans in their lives, who are so often selfish, distracted, and too busy. A recent study found that people were more likely to judge AI-generated faces as "real" than actual human faces. The authors called the phenomenon "AI hyperrealism."
Kuyda dismissed the possibility that AI would outcompete human relationships, placing her faith in future metrics. For Cardinell, it was a problem to be dealt with later, when the technology improved. But Meng was untroubled by the idea. "The goal of Kindroid is to bring people joy," he said. If people find more joy in an AI relationship than in a human one, then that's okay, he said. AI or human, if you weigh them on the same scale and see them as offering the same sort of thing, many questions dissolve.
"The way society talks about human relationships, it's like it's by default better," he said. "But why? Because they're humans, they're like me? It's implicit xenophobia, fear of the unknown. But, really, human relationships are a mixed bag." AI is already superior in some ways, he said. Kindroid is infinitely attentive, precision-tuned to your emotions, and it's going to keep improving. Humans will have to level up. And if they can't?
"Why would you want worse when you can have better?" he asked. Imagine them as products, stocked next to each other on the shelf. "If you're at a supermarket, why would you want a worse brand than a better one?"