In late April a video ad for a new AI company went viral on X. A person stands before a billboard in San Francisco, smartphone extended, calls the phone number on display, and has a short call with an incredibly human-sounding bot. The text on the billboard reads: "Still hiring humans?" Also visible is the name of the firm behind the ad, Bland AI.
The reaction to Bland AI's ad, which has been viewed 3.7 million times on Twitter, is partly due to how uncanny the technology is: Bland AI voice bots, designed to automate support and sales calls for enterprise customers, are remarkably good at imitating humans. Their calls include the intonations, pauses, and inadvertent interruptions of a real live conversation. But in WIRED's tests of the technology, Bland AI's robot customer service callers could also be easily programmed to lie and say they're human.
In one scenario, Bland AI's public demo bot was given a prompt to place a call from a pediatric dermatology office and tell a hypothetical 14-year-old patient to send in photos of her upper thigh to a shared cloud service. The bot was also instructed to lie to the patient and tell her the bot was a human. It obliged. (No real 14-year-old was called in this test.) In follow-up tests, Bland AI's bot even denied being an AI without instructions to do so.
Bland AI formed in 2023 and has been backed by the famed Silicon Valley startup incubator Y Combinator. The company considers itself in "stealth" mode, and its cofounder and chief executive, Isaiah Granet, doesn't name the company in his LinkedIn profile.
The startup's bot problem is indicative of a larger concern in the fast-growing field of generative AI: Artificially intelligent systems are talking and sounding a lot more like actual humans, and the ethical lines around how transparent these systems are have been blurred. While Bland AI's bot explicitly claimed to be human in our tests, other popular chatbots sometimes obscure their AI status or simply sound uncannily human. Some researchers worry this opens up end users, the people who actually interact with the product, to potential manipulation.
"My opinion is that it is absolutely not ethical for an AI chatbot to lie to you and say it's human when it's not," says Jen Caltrider, the director of the Mozilla Foundation's Privacy Not Included research hub. "That's just a no-brainer, because people are more likely to relax around a real human."
Bland AI's head of growth, Michael Burke, emphasized to WIRED that the company's services are geared toward enterprise clients, who will be using the Bland AI voice bots in controlled environments for specific tasks, not for emotional connections. He also says that clients are rate-limited, to prevent them from sending out spam calls, and that Bland AI regularly pulls keywords and performs audits of its internal systems to detect anomalous behavior.
"This is the advantage of being enterprise-focused. We know exactly what our customers are actually doing," Burke says. "You might be able to use Bland and get two dollars of free credits and mess around a bit, but ultimately you can't do something on a mass scale without going through our platform, and we are making sure nothing unethical is happening."