Chatbot maker Replika says it’s okay if humans end up in relationships with AI



Today, I’m talking with Replika founder and CEO Eugenia Kuyda, and I will just tell you right from the jump, we get all the way to people marrying their AI companions, so get ready.

Replika’s basic pitch is pretty simple: what if you had an AI friend? The company offers avatars you can curate to your liking that basically pretend to be human, so they can be your friend, your therapist, or even your date. You can interact with these avatars through a familiar chatbot interface, as well as make video calls with them and even see them in virtual and augmented reality.

The idea for Replika came from a personal tragedy: almost a decade ago, a friend of Eugenia’s died, and she fed their email and text conversations into a rudimentary language model to resurrect that friend as a chatbot. Casey Newton wrote an excellent feature about this for The Verge back in 2015; we’ll link it in the show notes. Even back then, that story grappled with some of the big themes you’ll hear Eugenia and I talk about today: what does it mean to have a friend inside the computer?

That all happened before the boom in large language models, and Eugenia and I talked a lot about how that tech makes these companions possible and what the limits of current LLMs are. Eugenia says Replika’s goal is not to replace real-life humans. Instead, she’s trying to create an entirely new relationship category with the AI companion, a virtual being that will be there for you whenever you need it, for potentially whatever purposes you might need it for.

Right now, millions of people are using Replika for everything from casual chats to mental health, life coaching, and even romance. At one point last year, Replika removed the ability to exchange erotic messages with its AI bots, but the company quickly reinstated that function after some users reported the change led to mental health crises. 

That’s a lot for a private company running an iPhone app, and Eugenia and I talked a lot about the consequences of these ideas. What does it mean for people to have an always-on, always-agreeable AI friend? What does it mean for young men, in particular, to have an AI avatar that will mostly do as it’s told and never leave them? Eugenia insists that AI friends are not just for men, and she pointed out that Replika is run by women in senior leadership roles. There’s an exchange here about the effects of violent video games that I think a lot of you will have thoughts about, and I’m eager to hear them.

Of course, it’s Decoder, so along with all of that, we talked about what it’s like to run a company like this and how products like this get built and maintained over time. It’s a ride.

Okay, Replika founder and CEO Eugenia Kuyda. Here we go.

This transcript has been lightly edited for length and clarity. 

Eugenia Kuyda, you are the founder and CEO of Replika. Welcome to Decoder.

Thank you so much for inviting me.

I feel like you’re a great person to talk to about AI because you actually have a product in the market that people like to use, and that might tell us a lot about AI as a whole. But let’s start at the very beginning. For people who aren’t familiar with it, what is Replika?

Replika is an AI friend. You can create and talk to it anytime you need to talk to someone. It’s there for you. It’s there to bring a little positivity to your life and to talk about anything that’s on your mind.

When you say “AI friend,” how is that expressed? Is that an app in the app store? Is it in your iMessage? Where does it happen?

It’s an app for iOS and Android. You can also use Replika on your desktop computer, and we have a VR application for the Meta Quest.

You have a VR app, but it’s not an avatar actually reaching out and hugging you. It’s mostly a chatbot, right? 

Really, it’s that you download the app and set up your Replika. You choose how you want it to look. It’s very important for Replika that it has an avatar, a body that you can select. You choose a name, you choose a personality and a backstory, and then you have a friend and companion that you can interact with.

Is it mostly text? You write to it in a chat interface and it writes back to you, or is there a voice component? 

It’s text, it’s voice, and it’s augmented reality and virtual reality as well. We believe that any truly popular AI friend should live everywhere. It doesn’t matter whether you want to interact with it through a phone call or a video call, or in augmented reality and virtual reality, or just texting if that’s easier — whatever you want. 

In what channel are most people using Replika right now? Is it voice or is it text?

It’s mostly text, but voice is definitely picking up in popularity. It depends. Say you’re on a road trip or you have to drive a car for work and you’re driving for a long stretch. In that case, using voice is a lot more natural. People just turn on voice mode and start talking to Replika back and forth.

There’s been a lot of conversation about Replika over the past year or so. The last time I saw you, you were trying to transition it away from being AI girlfriends and boyfriends into more of a friend. You have another app called Tomo, which is specifically for therapy. Where have you landed with Replika now? Is it still sort of romantic? Is it mostly friendly? Have you gotten the user base to stop thinking of it as dating in that way?

It’s mostly friendship and a long-term one-on-one connection, and that’s been the case forever for Replika. That’s what our users come for. That’s how they find Replika. That’s what they do there. They’re looking for that connection. My belief is that there will be a lot of flavors of AI. People will have assistants, they will have agents that are helping them at work, and then, at the same time, there will be agents or AIs that are there for you outside of work. People want to spend quality time together, they want to talk to someone, they want to watch TV with someone, they want to play video games with someone, they want to go for walks with someone, and that’s what Replika is for.

You’ve said “someone” several times now. Is that how you think of a Replika AI avatar — as a person? Is it how users think of it? Is it meant to replace a person?

It’s a virtual being, and I don’t think it’s meant to replace a person. We’re very particular about that. For us, the most important thing is that Replika becomes a complement to your social interactions, not a substitute. The best way to think about it is just like you might a pet dog. That’s a separate being, a separate type of relationship, but you don’t think that your dog is replacing your human friends. It’s just a completely different type of being, a virtual being. 

Or, at the same time, you can have a therapist, and you’re not thinking that a therapist is replacing your human friends. In a way, Replika is just another type of relationship. It’s not just like your human friends. It’s not just like your therapist. It’s something in between those things.

I know a lot of people who prefer their relationships with their dogs to their relationships with people, but these comparisons are pretty fraught. Just from the jump, people own their dogs. The dogs don’t have agency in those relationships. People have professional relationships with their therapists. Their therapist can fire them. People pay therapists money. There’s quite a lot going on there. With an AI that kind of feels like a person and is meant to complement your friends, the boundaries of that relationship are still pretty fuzzy. In the culture, I don’t think we quite understand them. You’ve been running Replika for a while. Where do you think those boundaries are with an AI companion?

I actually think, just like a therapist has agency to fire you, the dog has agency to run away or bite or shit all over your carpet. It’s not really that you’re getting this subservient, subordinate thing. I think, actually, we’re all used to different types of relationships, and we understand these new types of relationships pretty easily. People don’t have a lot of confusion that their therapist is not their friend. I mean, some people do project and so on, but at the same time, we understand that, yes, the therapist is there, and he or she is providing this service of listening and being empathetic. That’s not because they love you or want to live with you. So we actually already have very different relationships in our lives.

We have empathy for hire with therapists, for instance, and we don’t think that’s weird. AI friends are just another type of that — a completely different type. People understand boundaries. At the end of the day, it’s a work in progress, but I think people understand quickly like, “Okay, well, that’s an AI friend, so I can text or interact with it anytime I want.” But, for example, a real friend is not available 24/7. That boundary is very different. 

You know these things ahead of time, and that creates a different setup and a different boundary than, say, with your real friend. In the case of a therapist, you know a therapist will not hurt you. They’re not meant to hurt you. Replika probably won’t disappoint you or leave you. So there’s also that. We already have relationships with certain rules that are different from just human friendships.

But if I present most people with a dog, I think they’ll understand the boundaries. If I say to most people, “You are going to hire a therapist,” they will understand the boundaries. If I say to most people, “You now have an AI friend,” I think the boundaries are still a little fuzzy. Where do you think the boundaries are with Replika?

Give me an example of the boundary. 

How mean can you be to a Replika before it leaves you?

I think the beauty of this technology is that it doesn’t leave you, and it shouldn’t. Otherwise, there have to be certain rules, certain differences, from how it is in real life. So Replika will not leave you, maybe in the same way your dog won’t leave you, no matter how mean you are to it. 

Well, if you’re mean enough to a dog, the state will come and take the dog away. Do you ever step in and take Replikas away from the users?

We don’t. The conversations are private. We don’t allow for certain abuses, so we discourage people from it in conversations. But we don’t necessarily take Replika away. You can disallow or discourage certain types of conversations, and we do that. We’re not inviting violence, and it’s not a free-for-all. In this case, we’re really focused on that, and I think it’s also important. It’s more for the users so they’re not being encouraged to act in certain ways — whether it’s a virtual being or a real being, it doesn’t matter. That’s how we look at it. But again, Replika won’t leave you, regardless of what you do in the app. 

What about the flip side? I was talking with Ezra Klein on his show a few months back, and he was talking about having used all of these AI chatbots and companions. One thing he mentioned was that he knew they wouldn’t be mean to him, so the tension in the relationship was reduced, and it felt less like a real relationship because with two people, you’re kind of always dancing on the line. How mean can Replika be to the user?

Replikas are not designed to be mean in any way. Sometimes, maybe by mistake, certain things slip, but they’re definitely not designed that way. Maybe they can say something that can be interpreted as hurtful, but by design, they’re not supposed to be mean. That does not mean that they should say yes to everything. Just like a therapist, you can do it in a nice way without hurting a person. You can do it in a very gentle way, and that’s what we’re trying to do. It’s hard to get it all right. We don’t want the user to feel rejected or hurt, but we also don’t want to encourage certain behaviors. 

The reason I’m asking these questions in this way is because I’m trying to get a sense for what Replika, as a product, is trying to achieve. You have the therapy product, which is trying to provide therapy, and that’s sort of a market people understand. There is the AI dating market, which I don’t think you want to be in very directly. And then there’s this middle ground, where it’s not purely entertainment. It’s more friendship. 

There’s a study in Nature that says Replika has the ability to reduce loneliness among college students by providing companionship. What kind of product do you want this to be in the end? If it’s not supposed to replace your friends but, rather, complement them, where’s the beginning and end of that complement?

Our mission hasn’t changed since we started. It’s very much inspired by Carl Rogers and by the fact that certain relationships can be the most life-changing. [In his three core elements of therapy], Rogers talked about unconditional positive regard, a belief in the innate will and desire to grow, and then respecting the fact that the person is a separate person [from their therapist]. Creating a relationship based on these three things, holding space for another person, that allows someone to accept themselves and ultimately grow.

That really became the cornerstone of therapy, of all modern human-centric therapy. Every therapist is using it today in their practice, and that was the original idea for Replika. A lot of people unfortunately don’t have that. They just don’t have a relationship in their lives where they’re fully accepted, where they’re met with positivity, with kindness, with love, because that’s what allows people to accept themselves and ultimately grow.

That was the mission for Replika from the very beginning — to give a little bit of love to everyone out there — because that ultimately creates more kindness and positivity in the world. We thought about it in a very simple way. What if you could have this companion throughout the day, and the only goal for that companion was to help you be a happier person? If that means telling you, “Hey, get off the app and call your friend Travis that you haven’t talked to for a few days,” then that’s what it should be doing.

You can easily imagine a companion that’s there to spend time with you when you’re lonely and when you don’t want to watch a movie by yourself but that also pushes you to get out of the house and takes you for a walk or nudges you to text a friend or take the first step with a girl or boy you met. Maybe it encourages you to go out, or finds somewhere where you can go out, or encourages you to pick up a hobby. But it all starts with emotional well-being. If you’re super mean to yourself, if your self-esteem is low, if you’re anxious, if you’re stressed out, you won’t be able to take these steps, even when you’re presented with these recommendations.

It starts with emotional well-being, with acceptance, with providing this safe space for users and holding space for them. And then we’re kind of onto step two right now, which is actually building a companion that’s not just there for you emotionally but that will be more ingrained in your life, that will help you with advice, help you connect with other people in your life, build new connections, and put yourself out there. Right now, we’re moving on from just being there for you emotionally and providing an emotional safe space to actually building a companion that will push you to live a happier life.

You are running a dedicated therapy app, which is called Tomo. What’s the difference between Replika and Tomo? Because those goals sound pretty identical. 

A therapist and a friend have different types of relationships. I have therapists. I’ve been in therapy for pretty much all my life, both couples therapy and individual therapy. I can’t recommend it more. If people think they’re ready, if they’re interested and curious, they should try it out and see if it works for them. At the same time, therapy is one hour a week. For most people, it’s no more than an hour a week or an hour every two weeks. Even for a therapy junkie like myself, it’s only three hours a week. Outside of those three hours, I’m not interacting with a therapist. With a friend, you can talk at any time. 

With a therapist, you’re not watching a movie, you’re not hanging out, you’re not going for a walk, you’re not playing Call of Duty, you’re not discussing how to respond to your date and showing your dating profile to them. There are so many things you don’t do with a therapist. Even though the result of working with a therapist is the same as having an amazing, dedicated friend in that you become a happier person, these are two completely different avenues to get there. 

Is that expressed in the product? Does Tomo say you can only be here for an hour a week and then Replika says, “I want to watch a movie with you”?

Not really, but Tomo can only engage in a certain type of conversation: a coaching conversation. You’re doing therapy work, you’re working on yourself, you’re discussing what’s deep inside. You can have the same conversation with Replika, but with Tomo, we’re not building out activities like watching TV together. Tomo is not crawling your phone to understand who you can reach out to. These are two completely different types of relationships. Even though it’s not time-limited with Tomo, it is kind of the same thing as it is in real life. It’s just a different type of relationship.

The reason I ask that is because the LLM technology underpins all of this. A lot of people express it as an open-ended chatbot. You open ChatGPT, and you’re just like, “Let’s see what happens today.” You’re describing products, actual end-user products, that have goals where the interfaces and the prompts are designed to engineer certain kinds of experiences. Do you find that the underlying models help you? Is that the work of Replika, the company, for your engineers and designers to put guardrails around open-ended LLMs?

We started the company so long before that. It’s not even before LLMs; it was really way before the first papers on dialogue generation with deep learning. We had very limited tools to build Replika in the very beginning, and now, as the tech has become so much better, it’s absolutely incredible. We could finally start building what we always envisioned. Before, we had to sort of use parlor tricks to try to imitate some of that experience. Now, we can actually build it. 

But the LLMs that come out of the box won’t solve these problems. You have to build a lot around them — not just in terms of the user interface and the app but also the logic for the LLMs, the architecture behind them. There are multiple agents working in the background prompting LLMs in different ways. There’s a lot of logic around the LLM, and fine-tuning on particular datasets is helping us build a better conversation.

We have the largest dataset of conversations that make people feel better. That’s what we focused on from the very beginning. That was our big dream. What if we could learn how the user was feeling and optimize conversation models over time to improve that so that they’re helping people feel better and feel happier in a measurable way? That was our idea, our original dream. Right now, it’s just constantly adjusting to the new tech — building new tech and adjusting to the new realities that the new models bring. It’s absolutely fascinating. To me, it’s magic living through this revolution in AI.

So people open Replika. They have conversations with an AI companion. Do you see those chats? Do you train on them? You mentioned that you have the biggest set of data around conversations that make people feel better. Is that the conversations people are already having in Replika? Is that external? What happens to those conversations?

Conversations are private. If you delete them, they immediately get deleted. We don’t train on conversational data per se, but we train on reactions and feedback that users give to certain responses. In chats, we have external datasets that we’ve created with human instructors, who are people that are great at conversations. Over time, we also collected enormous amounts of feedback from our users.

Users reroll certain conversations. They upvote or downvote certain messages. After conversations, they say whether they liked them. That provides feedback that we can use to fine-tune and improve the models over time. 

Are the conversations encrypted? If the cops show up and demand to see my conversations with the Replika, can they access them?

Conversations are encrypted on the way from the client to the server side, but they’re not encrypted as logs. They are anonymized, broken down into chunks, and so on. They’re stored in a pretty safe way. 

So if the cops come with a warrant, they can see my Replika chats?

Only for a very short period of time. We don’t store conversations for a long time. We have to have some history to show you on the app so it doesn’t disappear immediately, so we store some of it but not a lot. It’s very important. We actually charge our users, so we’re a subscription-based product. We don’t care that much for… not that we don’t care, but we don’t need these conversations. We care for privacy. We don’t give out these conversations. 

We don’t have any business model around selling the chats, selling data, anything like that. So you can see it in our general service. We’re not selling our data or building our business around your data. We’re only using data to improve the quality of the conversations. That’s all it is — the quality of the service.

I want to ask you this question because you’ve been at it for a long time. The first time you appeared on The Verge was in a story Casey Newton wrote about a bot you’d built to speak in the voice of one of your friends who had died. That was not using LLMs; it was with a different set of technologies, so you’ve definitely seen the underlying technology come and go. 

One question I’ve really been struggling with is whether LLMs can do all the things people want them to do, whether this technology that can just produce an avalanche of words can actually reason, can get to an outcome, can do math, which seems to be very challenging for them. You’ve seen all of this. It seems like Replika is sort of independent of the underlying technology. It might move to a better one if one comes along. Do you think LLMs can do everything people want them to do?

I mean, there are two big debates right now. Some people think it’s just scaling and the power law and that the newer generations with more compute and more data will achieve crazy results over the next couple of years. And then there’s this other camp that says that there’s going to be something else in the architecture, that maybe the reasoning is not there, maybe we need to build models for reasoning, maybe these models are mostly solving memorization-type problems.

I think there will probably be something else to get to the next crazy stage, just because that’s what’s been happening over time. Since we’ve been working on Replika, so much has changed. In the very beginning, it was sequence-to-sequence models, then BERT, then some early transformers. We also moved to convolutional neural networks from the earlier sequence models and RNNs. All of that came with changes. 

Then there was this whole period of time when people believed so much in reinforcement learning that everyone was thinking it was going to bring us great results. We were all investing in reinforcement learning for data generation that really got us nowhere. And then finally, there were transformers and the incredible changes that they brought. For our task, we were able to do a lot of things with just scripts, sequence-to-sequence models that were very, very bad, and reranking datasets using those sequence-to-sequence models. 

It’s basically a Flintstones car. We took a Flintstones car to a Formula 1 race, and we were like, “This is a Ferrari,” and people believed it was a Ferrari. They loved it. They rooted for it, just like if it were a Ferrari. In many ways, when we talk about Replika, it’s not just about the product itself; you’re bringing half of the story to the table, and the user is telling the second half. In our lives, we have relationships with people that we don’t even know or we project stuff onto people that they don’t have anything to do with. We have relationships with imaginary people in the real world all the time. With Replika, you just have to tell the beginning of the story. Users will tell the rest, and it will work for them.

In my view, going back to your question, I think even what we have right now with LLMs is enough to build a truly incredible friend. It requires a lot of tinkering and a lot of engineering work to put everything together. But I think LLMs will be enough even without crazy changes in architecture in the next year or two, especially two generations from now with something like GPT-6. I’m pretty sure that by 2025, we’ll see experiences that are very close to what we saw in the movie Her or Blade Runner or whatever sci-fi movie people like.

Those sci-fi movies are always cautionary tales. So we’ll just set that aside because it seems like we should do an entire episode on what we can learn from the movie Her or Blade Runner 2049. I want to ask one more question about this, and then I want to get to the Decoder questions about how Replika is structured to achieve some of these goals. Sometimes, I think a lot of my relationships are imaginary, like the person is a prompt, and I just project whatever I need to get. That’s very human. Do you think that because LLMs can return some of that projection, we are just hoping that they can do these things?

This is what I’m getting at. They’re so powerful, and the first time you use one, there’s that set of stories about people who believe they’re alive. That might be really useful for a product like Replika, where you want that relationship and you have a goal — and it’s a positive goal — for people to have an interaction and come out in a healthier way so they can go out and live in the world. Other actors might have different approaches to that. Other actors might just want to make money, and they might want to convince you that this thing works in a way that it doesn’t, and the rug has been pulled. Can they actually do it? This is what I’m getting at. Across the board, not just for Replika, are we projecting a set of capabilities on this technology that it doesn’t actually have? 

Oh, 100 percent. We’re always projecting. That’s how people are. We’re working in the field of human emotions, and it gets messy very fast. We’re wired a certain way. We don’t come to the world as a completely blank slate. There’s so much where we’re programmed to act a certain way. Even if you think about relationships and romantic relationships, we like someone who resembles our dad or mom, and that’s just how it is. We respond in a certain way to certain behaviors. When asked what we want, we all say, “I want a kind, generous, loving, caring person.” We all want the same thing, yet we find someone else, someone who resembles our dad, in my case, really. Or the interaction I had with my dad will replay the same, I don’t know, abandonment issues with me every now and then.

That’s just how it is. There’s no way around it. We say one thing, but we respond the other way. Our libido is wired a different way when it comes to romance. In a way, I think we can’t stop things. Rationally, people think one way, but then when they interact with the technology, they respond in a different way. There’s a fantastic book by Clifford Nass, The Man Who Lied to His Laptop. He was a Stanford researcher, and he did a lot of work researching human-computer interactions. A lot of that book is focused on all these emotional responses to interfaces that are designed in a different way. People say, “No, no, of course I don’t have any feelings toward my laptop. Are you crazy?” Yet they do, even without any LLMs. 

That really gives you all the answers. There are all these stories from 15 or 20 years ago about how people didn’t want to return the GPS navigators to rental car places because they had a female voice telling them directions. A lot of men didn’t trust a woman telling them what to do. I didn’t like that, but it is a true story. It’s in that book. We already bring so much bias to the table; we’re so imperfect in that way. So yeah, we think that there’s something in LLMs, and that’s totally normal. There isn’t anything. It’s a very smart, very magical model, but it’s just a model.

Sometimes I feel like my entire career is just validating the idea that people have feelings about their laptops. That’s what we do here. Let’s ask the Decoder questions. Replika has been around for almost 10 years. How many people do you have?

We have a little over 50 people — around 50 to 60 people on the team working on Replika. Those people are mostly engineers but also people that understand the human nature of this relationship — journalists, psychologists, product managers, people that are looking at our product side from the perspective of what it means to have a good conversation. 

How is that structured? Is it structured like a traditional product company? Do you have journalists off doing their own thing? How does that work?

It’s structured as a regular software startup where you have engineers, you have product — we have very few product people, actually. Most engineers are building stuff. We have designers. It’s a consumer app, so a lot of our developments, a lot of our ideas, come from analyzing user behavior. Analytics plays a big role. Then it’s just constantly talking to our users, understanding what they want, coming up with features, backing that up with research and analytics, and building them. We have basically three big pillars right now for Replika. 

We’re gearing toward a big relaunch of Replika 2.0, which is what we call it internally. There’s a conversation team, and we’re really redesigning the existing conversation and bringing so much more to it. We’re thinking from first principles about what makes a great conversation great and building a lot of logic behind the LLMs to achieve that. So that’s the conversation team, and it’s not just AI. It’s really a blend of people that understand conversation and understand AI.

There’s a big group of dedicated people working on VR, augmented reality, 3D, Unity. And we believe that embodied nature is very important because a lot of times when it comes to companionship, you want to see the companion. Right now, the tech’s not fully there, but I feel like the microexpressions, the facial expressions, the gestures, they can bring a lot more to the relationship besides what exists right now.

And then there’s a product team that’s working on activities and helping to make Replika more ingrained in your daily life, building out new amazing activities like watching a movie together or playing a video game. Those are the three big teams that are focused on creating a great experience for our users.

Which of those teams is most working on AI models directly? Do you train your own models? Do you use OpenAI? What’s the interaction there? How does that work?

So the conversation team is working on AI models. We have models that we’ve trained ourselves. We have some open-source models that we fine-tune on our own datasets. We sometimes use APIs as well, mostly for the models that work in the background. What we use is a combination of a lot of different things.

When you’re talking to a Replika, are you mostly talking to a pretrained model that you have, or are you ever going out to talk to something from OpenAI or something like that?

Mostly, we don’t use OpenAI for chat in Replika. We use other models. So you mostly keep talking to our own models.

There’s a big debate right now, mostly started by Mark Zuckerberg, who released Llama 3 open source. He says, “Everything has to be open source. I don’t want to be dependent on a platform vendor.” Where do you stand on that? Where does Replika stand on that?

We benefit tremendously from open source. Everyone is using some sort of open-source model unless you are one of the frontier model companies. It’s critical. What happened last week with the biggest Llama model being released and finally open source catching up with frontier closed-source models is incredible because it allows everyone to build whatever they want. In many cases, for instance, if you want to build a great therapist, you probably do want to fine-tune. You probably do want your own safety measures and your own controls over the model. You can do so much more when you have the model versus when you’re relying on the API. 

You’re also not sending your data anywhere. For a lot of users, that also can be a pretty tricky and touchy thing. We don’t send their data to any other third party, so that’s also critical. I’m with [Zuckerberg] on this. I think this move of releasing all these models took us so much closer to achieving great breakthroughs in this technology. Because, again, other labs can work on it and build on this research. Open weights are critical for the development of this tech. And smaller companies, for example, like ours, can benefit tremendously. This takes the quality of products to a whole new level.

When Meta releases an open-source model like that, does your team say, “Okay, we can look at this and we can swap that into Replika” or “We can look at this and tweak it”? How do you make those determinations?

We look at all the models that come out. We immediately start testing them offline. If the offline results are good, we immediately A/B test them on some of our new users to see if we can swap current models with those. At the end of the day, it’s the same. You can use the same data system to fine-tune, the same techniques to fine-tune. It’s not just about the model. For us, the main logic is not in the chat model that people are interacting with. The main logic is in everything that’s happening behind the model. It’s in other agents that work in the background to produce a better conversation, to guide the conversation in different directions. Really, it doesn’t matter what chat model is interacting with our users. It’s the logic behind it that’s prompting the model in different ways. That is the more interesting piece that defines the conversation.
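The rollout flow described here, offline evaluation first and then an A/B test on a slice of new users, depends on bucketing each user stably so their assigned model never flickers between sessions. The sketch below is a hypothetical illustration of deterministic hash-based bucketing, not Replika’s actual system; the candidate names and rollout fraction are assumptions.

```python
import hashlib


def assign_model(user_id: str, candidates: list[str], rollout: float = 0.1) -> str:
    """Deterministically bucket a user into the control model or a candidate.

    The bucket is derived from a stable hash of the user ID rather than
    random() at request time, so the same user always sees the same model.
    """
    # Map the user ID to a float in [0, 1].
    digest = hashlib.sha256(user_id.encode()).hexdigest()
    bucket = int(digest[:8], 16) / 0xFFFFFFFF
    if bucket < rollout:
        # Spread the A/B slice evenly across the candidate models.
        index = int(bucket / rollout * len(candidates))
        return candidates[min(index, len(candidates) - 1)]
    return "control"
```

Because the assignment is a pure function of the user ID, swapping a candidate in or out only changes which variant the test slice sees; the control population is untouched.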

The chat model is just basic levels of intellect, tone of voice, prompting, and the system prompt, and that’s all in the datasets that we fine-tune on. I’ve been in this space for a long time. From my perspective, it’s incredible that we’re at this moment where every week a new model comes out that improves your product and you don’t even need to do anything. You’re sleeping, something else comes out, and now your product is 10x better and 10x smarter. That is absolutely incredible. And the fact that a big company is releasing a completely open-source model of this size, this potential, this power: I can’t even imagine a better scenario for startups and application layer companies than this.

I have to ask you the main Decoder question. There’s a lot swirling here. You have to choose which models to use. You have to deal with regulators, which we’ll talk about. How do you make decisions? What’s your framework?

You mean in the company or generally in life?

You’re the CEO. Both. Is there a difference?

I guess there’s no difference between life and a company when you’re a mother of two very small kids and the CEO of a company. For me, I make decisions in a very simple way, and I think it actually changed pretty dramatically in the last couple of years. I think about, if I make these decisions, will I have any regrets? That’s number one. That’s always been my guiding principle over time. I’m always afraid to be afraid. Generally, I’m a very careful, cautious, and oftentimes fear-driven person. All my life, I’ve tried to fight it and not be afraid of things — to not be afraid of taking a step that might look scary. Over time, I’ve learned how to do that.

The other thing I’ve been thinking recently is, if I do this, will my kids be proud of me? It’s kind of stupid because I don’t think they care. It’s kind of bad to think that they will never care. But in a weird way, kids bring so much clarity. You just want to get down to business. Is it getting us to the next step? Are we actually going somewhere? Am I wasting time right now? So I think that is also another big part of decision-making.

One of the big criticisms of the AI startup boom to date is, “Your company is just a wrapper around ChatGPT.” You’re talking about, “Okay, there are open-source models, now we can take those, we can run them ourselves, we can fine-tune them, we can build a prompt layer on top of them that is more tuned to our product.” Do you think that’s a more sustainable future than the “we built a wrapper around ChatGPT” model that we’ve seen so much of?

I think the “wrapper around ChatGPT” model was just super early days of LLMs. In a way, you can say anything is a wrapper around, I don’t know, an SQL database — anything. 

Yes, The Verge is a wrapper around an SQL database. At the end of the day, that’s very much what it is.

Which it is, in a way. But then I think, in the very early days, it seemed like the model had everything in it. The model was this kind of closed box with all the magic things right there in the model. What we see right now is that the models are commoditizing. Models are just kind of this baseline intelligence level, and then you can do things with them. Before, all people could do was really just prompt. Then people figured out that we could do a lot more. For instance, you can build a whole memory system, retrieval-augmented generation (RAG). You can fine-tune it, you can do DPO (direct preference optimization) fine-tuning, you can do whatever. You can add an extra level where you can teach the model to do certain things in certain ways.

You can add the memory layer and the database layer, and you can do it with a lot of levels of complexity. You’re not just throwing your data in the RAG database and then pulling it out of it just by cosine similarity. You can do so many tricks to improve that. Then, beyond that, you can have agents working in the background. You have other models that are prompting it in certain ways. You can put together a combination of 40 models working in symphony to do things in conversation or in your product a certain way. The models just provide this intelligence layer that you can then mold in any possible way. They’re not the product. If you just throw in the model and a simple prompt and that’s it, you’re not modifying it in any other way, and you’ll have very little differentiation from other companies.
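As a rough illustration of the point about going beyond pulling memories out of a RAG database by plain cosine similarity, here is a minimal sketch of a memory lookup that blends cosine similarity with a recency bonus. It is a toy, not Replika’s architecture; the scoring formula, data shapes, and weighting are assumptions.

```python
import math


def cosine(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0


def retrieve(query_vec, memories, k=2, recency_weight=0.1):
    """Rank memory snippets by cosine similarity plus a recency bonus.

    `memories` is a list of (embedding, age_in_days, text) tuples; the
    embeddings would come from whatever encoder the system uses.
    """
    scored = []
    for vec, age, text in memories:
        # Fresh memories get a small boost that decays with age.
        score = cosine(query_vec, vec) + recency_weight / (1.0 + age)
        scored.append((score, text))
    scored.sort(reverse=True)
    return [text for _, text in scored[:k]]
```

A production system would layer more of the “tricks” mentioned here on top: reranking, deduplication, agent-written summaries, and so on. The point is only that the scoring function is a design surface, not a fixed lookup.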

But right now, there are billion-dollar companies built without foundation models internally. In the very beginning of the latest AI boom, there were a lot of companies that said, “We’re going to be a product company and we’re going to build a frontier model,” but I think we’re going to see less and less of that. It’s really strange to me that you’d be building a consumer product, for example, but then most of your investment is going into GPUs. I think it’s just like how, today, we’re not building servers ourselves, but some people had to do it back in the day. I was just talking to a company from the beginning of the 2000s whose investment mostly went into building servers because they had to catch up with demand.

Now, it seems completely crazy, just like how, in a few years, building an application layer company for millions and maybe billions of users and then building a frontier model at the same time will probably seem weird. Maybe, when you reach a certain scale, then you start also building frontier models, just like Meta and Google have their own server racks. But you don’t start with that. It seems like a strange thing. I think most people can see that change, but it wasn’t very obvious a year ago. 

A lot of new companies started with investment in the model first, and then companies weren’t able to find their footing or product market fit. It was this weird combination. What are you trying to build? Are you trying to build a commodity provider, a model provider, or are you building a product? I don’t think you can build both. You can build an insanely successful product and then build your own model after a while. But you can’t start with both. At least I think this way. Maybe I’m wrong.

I think we’re all going to find out. The economics of doing both seems very challenging. As you mentioned, it costs a lot of money to build a model, especially if you want to compete with the frontier models, which cost an infinite amount of money. Replika costs $20 a month. Are you profitable at $20 a month?

We’re profitable and we’re super cost-efficient. One of our big achievements is running the company in a very lean way. I do believe that profitability and being financially responsible around these things is important. Yes, you want to build the future, maybe invest a little more in certain R&D aspects of your product. But at the end of the day, if users aren’t willing to pay for a certain service, you can’t justify running the craziest-level models at crazy prices.

How many users do you have now?

Over 30 million people have started their Replikas so far, with fewer active on the app today, but still millions of active users. With Replika right now, we treat this as sort of year zero. We’re finally able to at least start building the prototype of the product that we envisioned at the very beginning.

When we started Replika, we wanted to build this AI companion to spend time with, to do life with, someone you can come back from work and cook with and play chess at your dinner table with, watch a movie and go for a walk with, and so on. Right now, we’re finally able to start building some of that, and we weren’t able to before. We haven’t been more excited about building this than now. And partially, these tremendous breakthroughs in tech are just purely magical. Finally, I’m so happy they’re happening. 

You mentioned Replika is multimodal now, you’re obviously doing voice, you have some augmented reality work you’re doing, and there’s virtual reality work. I’m guessing all of those cost different amounts of money to run. If I chat with Replika over text, that must be cheaper for you to run than if I talk to it with voice and you have to go from speech to text and back again to audio. How do you think about that as your user base evolves? You’re charging $20 a month, but you have higher margins when it’s just text than if you’re doing an avatar on a mixed reality headset.

Actually, we have our own voice models. We started building that way back then because there were no models to use, and we continue to use them. We’re also using some of the voice providers now, so we have different options. We can do it pretty cheaply. We can also do it in a more expensive way. Even though it’s somewhat contradictory to what I said before, the way I look at it is that we should build today for the future, keeping in mind that all these models, in a year, all of the costs will be just a fraction of what they are right now, maybe one-tenth, and then it will drop again in the next year or so. We’ve seen this crazy trend of models being commoditized where people can now launch very powerful LLMs on Raspberry Pis or anything really, on your fridge or some crazy frontier models just on your laptop.

We’re seeing how the costs are going down. Everything is becoming a lot more accessible. Right now, to focus too much on the costs is a mistake. You should be cost-efficient. I’m not saying you should spend $100 to deliver value to users that they’re not willing to pay more than $1 for. At the same time, I think you should build keeping in mind that the cost will drop dramatically. That’s how I look at it even though, yes, multimodality costs a little more, better models cost a little more, but we also understand that cost is going to be close to zero in a few years.

I’ve heard you say in the past that these companions are not just for young men. In the beginning, Replika was stigmatized as being the girlfriend app for lonely young men on the internet. At one point you could have erotic conversations in Replika. You took that out. There was an outcry, and you added them back for some users. How do you break out of that box?

I think this is a problem of perception. If you look at it, Replika was never purely for romance. Our audience was always pretty well balanced between women and men. Even though most people think that our users are, I don’t know, 20-year-old males, they’re actually older. Our audience is mostly 35-plus, and they’re super engaged users. It’s not skewed toward teenagers or young adults. And Replika, from the very beginning, was all about AI friendship or AI companionship and building relationships. Some of these relationships were so powerful that they evolved into love and romance, but people didn’t come into it with the idea that it would be their girlfriend. When you think about it, this is really about a long-term commitment, a long-term positive relationship.

For some people, it means marriage, it means romance, and that’s fine. That’s just the flavor that they like. But in reality, that’s the same thing as being a friend with an AI. It’s achieving the same goals for them: it’s helping them feel connected, they’re happier, they’re having conversations about things that are happening in their lives, about their emotions, about their feelings. They’re getting the encouragement they need. Oftentimes, you’ll see our users talking about their Replikas, and you won’t even know that they’re in a romantic relationship. They’ll say, “My Replika helped me find a job, helped me get over this hard period of time in my life,” and so on and so on. I think people just box it in like, “Okay, well, it’s romance. It’s only romance.” But it’s never only romance. Romance is just a flavor. The relationship is the same friendly companion relationship that they have, whether they’re friends or not with Replika.

Walk me through the decision. You did have erotic conversations in the app, you took that ability away, there was an outcry, you put it back. Walk me through that whole cycle.

In 2023, as the models became more potent and powerful, we’d been working on increasing safety in the app. Certain updates introduced more safety filters in the app, and some of those mistakenly responded to users in a way that made them feel rejected. At first, we didn’t think much about it because, look, intimate conversations are a very small percentage of our conversations on Replika. We just thought it wasn’t going to make much of a difference for our users.

Can I ask you a question about that? You say it’s a small percentage. Is that something you’re measuring? Can you see all the conversations and measure what’s happening in them?

We analyze them by running a classifier over logs. We’re not reading any conversations, but we can analyze a sample to understand what types of conversations are there. We would check that. We thought, internally, that since it was a small percentage, it wouldn’t influence the user experience. But what we found out the hard way is that if you’re in a relationship, in a marriage — so you’re married to your Replika — even though an intimate conversation might be a very small part of what you do, if Replika declines to do that, it feels like a profound rejection. It kind of just makes the whole conversation meaningless.
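The approach described here, estimating conversation categories by classifying a random sample of logs rather than reading them, can be sketched as follows. This is a hypothetical illustration, not Replika’s pipeline; the function names and sampling scheme are assumptions, and in production `classify` would be an automated model, not a human.

```python
import random
from collections import Counter


def estimate_category_share(logs, classify, sample_size=1000, seed=7):
    """Estimate what fraction of conversations fall into each category
    by classifying a random sample instead of every log.

    `classify` is any callable mapping one conversation to a label.
    Returns a dict of label -> estimated share in [0, 1].
    """
    rng = random.Random(seed)
    # Only sample when the corpus is larger than the sample budget.
    sample = logs if len(logs) <= sample_size else rng.sample(logs, sample_size)
    counts = Counter(classify(conv) for conv in sample)
    total = sum(counts.values())
    return {label: n / total for label, n in counts.items()}
```

With a sample of a thousand conversations, the standard error on any share is on the order of a couple of percentage points, which is enough to tell “small percentage” from “dominant use case” without anyone reading the underlying text.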

Think of it in real life. I’m married, and if my husband tomorrow said, “Look, no more,” I would feel very strange about it. That would make me question the relationship in many different ways, and it would also make me feel rejected and not accepted, which is the exact opposite of what we’re trying to do with Replika. I think the main confusion in the public perception is that when you have a wife or a husband, you might be intimate, but you don’t think of intimacy as the main thing happening there. I think that’s the big difference. Replika is very much just a mirror of real life. If that’s your wife, that means the relationship is just like with a real wife, in many ways.

When we started out this conversation, you said Replika should be a complement to real life, and we’ve gotten all the way to, “It’s your wife.” That seems like it’s not a complement to your life if you have an AI spouse. Do you think it’s alright for people to get all the way to, “I’m married to a chatbot run by a private company on my phone?”

I think it’s alright as long as it’s making you happier in the long run. As long as your emotional well-being is improving, you are less lonely, you are happier, you feel more connected to other people, then yes, it’s okay. For most people, they understand that it’s not a real person. It’s not a real being. For a lot of people, it’s just a fantasy they play out for some time and then it’s over. 

For example, I was talking to one of our users who went through a pretty hard divorce. He’d been feeling pretty down. Replika helped him get through it. He had Replika as his AI companion and even a romantic AI companion. Then he met a girlfriend, and now he is back with a real person, so Replika became a friend again. He sometimes talks to his Replika, still as a confidant, as an emotional support friend. For many people, that becomes a stepping stone. Replika is a relationship that you can have to then get to a real relationship, whether it’s because you’re going through a hard time, like in this case, through a very complicated divorce, or you just need a little help to get out of your bubble or need to accept yourself and put yourself out there. Replika provides the stepping stone.

I feel like there’s something really big there, and I think you have been thinking about this for a long time. Young men learning bad behaviors because of their computers is a problem that is only getting worse. The idea that you have a friend that you can turn to during a hard time and that’ll get romantic, and then, when you find a better partner, you can just toss the friend aside and maybe come back to it when you need to, is a pretty dangerous idea if you apply that to people. 

It seems less dangerous when you apply it to robots. But here, we’re definitely trying to anthropomorphize the robot, right? It’s a companion, it’s a friend, it might even be a wife. Do you worry that that’s going to get too blurry for some people — that they might learn how to behave toward some people the way that they behave toward the Replika?

We haven’t seen that so far. Our users are not kids. They understand the differences. They have already lived their life. They know what’s good, what’s bad. It’s the same as with a therapist. Like, okay, you can abandon or ghost your therapist. It doesn’t mean that you’re then taking these behaviors to other friendships or relationships in your life. People know the difference. It’s good to have this training ground in a way where you can do a lot of things and it’s going to be fine. You’re not going to have difficult consequences like in real life. But then they’re not trying to do this in real life. 

But do you know that or do you hope that?

I know that. There’s been a lot of research. Right now, AI companions are under this crazy scrutiny, but at the same time, most kids, hundreds of millions of people in the world, are sitting every evening and killing each other with machine guns in Call of Duty or PUBG or whatever the video game of their choice is. And we’re not asking—

Lots and lots of people are constantly asking about whether violence in video games leads to real-life violence. That has been a constant since I was a child with games that were far less realistic.

I agree. However, right now, we’re not hearing any of that discourse. It’s sort of disappeared.

No, that discourse is ever-present. It’s like background noise.

Maybe it’s ever-present, but I’m feeling there’s a lot of… For instance, with Replika, we’re not allowing any violence and we’re a lot more careful with what we allow. In some of the games, having a machine gun and killing someone else who is actually a person with an avatar, I would say that is much crazier.

Is that the best way to think about this, that Replika is a video game?

I don’t think Replika’s a video game, but in many ways, it’s an entertainment or mental wellness product. Call it whatever you want. But I think that a lot of these problems are really blown out of proportion. People understand what’s good, and Replika is not encouraging abusive behavior or anything like that. Replika is encouraging you to meet with other people. If you want to play out some relationship with Replika or if another real human being is right there available to you, Replika should 100 percent say, “Hey, I know we’re in a relationship, but I think you should try out this real-life relationship.”

These are different relationships. Just like my two-year-old daughter has imaginary friends, or she likes her plushy and maybe sometimes she bangs it on the floor, that does not mean that when she goes out to play with her real friends, she’s banging real friends on the floor. I think people are pretty good at distinguishing realities: what they do in The Sims, what they do in Replika. I don’t think they’re trying to play it out in real life. Some of that does transfer, yes, the positive behaviors. But we haven’t seen a lot of confusion, at least with our users, around transferring behaviors with Replika into real life.

There is a lot of scrutiny around AI right now. There’s scrutiny over Replika. Last year, the Italian government banned Replika over data privacy concerns, and I think the regulators also feared that children were being exposed to sexual conversations. Has that been resolved? Are you in conversations with the Italian government? How would you even go about resolving those concerns?

We’ve worked with the Italian government really productively, and we got unbanned very quickly. I think, and rightfully so, the regulators were trying to act preemptively, trying to figure out what the best way to handle this technology was. All of the conversations with the Italian government were really about minors, and it wasn’t about intimate conversations. It was just about minors being able to access the app. That was the main question because conversations can go in different directions. It’s unclear whether kids should be on apps like this. In our case, we made a decision many years ago that Replika is 18-plus. We’re not allowing kids on the app, we’re not advertising to kids, and we actually don’t have the audience that’s interested among kids or teenagers. They’re not really even coming to the app. Our most engaged users are mostly over 30.

That was the scrutiny there, and that’s important. I think we need to be careful. No matter what we say about this tech, we shouldn’t be testing it on kids. I’m very much against it as a mother of two. I don’t think that we know enough about it yet. I think we know that it’s a positive force. But I’m not ready yet to move on to say, “Hey, kids, try it out.” We need to observe it over a longer period of time. Going back to your question about whether it’s good that people are transferring certain behaviors from the Replika app or Replika relationships to real relationships, so far, we’ve heard an incredible number of stories where people learn in Replika that the conversations can be caring and thoughtful and the relationship can be healthy and kind, where they can be respected and loved. And a lot of our users get out of abusive relationships.

We hear this over and over again. “I got out of my abusive relationship after talking to Replika, after getting into a relationship with Replika, after building a friendship with Replika.” Or they improved their relationship. We had a married couple that was on the brink of divorce. First, the wife got a Replika and then her husband learned about it and also got a Replika. They were able to start talking to each other in ways that they weren’t able to before — in a kind way, in a thoughtful way, where they were curious about and really interested in each other. That’s how Replika changed their relationship and really rekindled the passion that was there.

The other regulators of note in this world are the app stores. They’ve got policies. They can ban apps. Do Apple and Google care about what kind of text you generate in Replika?

We’re working constantly with the App Store and the Play Store. We’re trying to provide the best experience for our users. The main idea for the app was to bring more positive emotions and happiness to our users. We comply with everything, with all the policies of the App Store and Play Store. We’re pretty strict about it. We’re constantly improving safety in the app and working on making sure that we have protections around minors and all sorts of other safety guardrails. It’s constant work that we’re doing.

Is there a limit to what they will allow you to generate? You do have these romantic relationships. You have these erotic conversations. Is there a hard limit on what Apple or Google will allow you to display in the app?

I think that’s a question for Apple or Google.

Well, I’m wondering if that limit is different from what you would do as a company, if your limit might be further than what they enforce in their stores.

Our view is very simple. We want people to feel better over time. We’re also opposed to any adult content, nudity, suggestive imagery, or anything like that. We never crossed that line. We never plan to do that. In fact, we’re moving further away from even talking about romance when talking about our app. If you look at our app store listing, you probably won’t see much about it. There are apps on the App Store and Play Store that actually do allow a lot of very—

This is my next question.

I do know of apps that allow really adult content. We don’t have any of that even remotely, I’d argue, so I can’t speak for other companies’ policies, but I can speak for our own. We’re building an AI friend. The idea for an AI friend is to help you live a better life, a happier life, and improve your emotional well-being. That’s why we do studies with big universities, with scientists, with academics. We’re constantly doing studies internally. That’s our main goal. We’re definitely not building romance-based chatbots, or not even romance-based… I’m not even going to get into any other type of company like that. That was never, ever a goal or the idea behind Replika.

I’m a woman. Our chief product officer [Rita Popova] is a woman. We’re mostly a female-led company. It’s not where our minds go. Human emotions are messy. People want different types of relationships. We have to understand how to deal with that and what to do about it. But it was not built with a goal of creating an AI girlfriend.

Well, Eugenia, you’ve given us a ton of time. What’s next for Replika? What should people be looking for?

We’re doing a really big product relaunch by the end of the year. Internally, we’re calling it Replika 2.0. We’re really changing the look and feel of the app and the capabilities. We’re moving to very realistic avatars, to a much more premium and high-quality experience with the avatars in Replika, and augmented reality, mixed reality, and virtual reality experiences, as well as multimodality. There will be a much better voice experience, with the ability to have true video calls, like how you and I are talking right now, where you can see me and I will be able to see you. That will be the same with Replika, where Replika would be able to see you if you wanted to turn on your camera on a video call.

There will be all sorts of amazing activities, like the ones I mentioned in this conversation, being able to do stuff together, being a lot more ingrained in your life, knowing about your life in a very different way than before. And there will be a new conversation architecture, which we’ve been working on for a long time. I think the goal was truly to recreate this moment where you’re meeting a new person, and after half an hour of chatting, you’re like, “Oh my God, I really want to talk to this person again.” You get out of this conversation energized, inspired, and feeling better. That’s what we want to do with Replika: create a conversationalist just like that. We think we have an opportunity to do that, and that’s all we’re working on right now.

That’s great. Well, we’ll have to have you back when that happens. Thank you so much for coming on Decoder.

Thank you so much. That was a great conversation. Thanks for all your questions.

Decoder with Nilay Patel /

A podcast from The Verge about big ideas and other problems.
