Cosmos can also generate tokens about each avatar movement that act like time stamps, which will be used to label brain data. Labeling data enables an AI model to accurately interpret and decode brain signals and then translate those signals into the intended action.
All of this data will be used to train a brain foundation model, a large deep-learning neural network that can be adapted to a wide range of uses rather than needing to be trained on each new task.
"As we get more and more data, these foundation models get better and become more generalizable," Shanechi says. "The issue is that you need a lot of data for these foundation models to actually become foundational." That is difficult to achieve with invasive technology that few people will receive, she says.
Synchron's device is less invasive than many of its competitors'. Neuralink and other companies' electrode arrays sit in the brain or on the brain's surface. Synchron's array is a mesh tube that's inserted at the base of the neck and threaded through a vein to read activity from the motor cortex. The procedure, which is similar to implanting a heart stent in an artery, doesn't require brain surgery.
"The big advantage here is that we know how to do stents in the millions around the globe. In every part of the world, there's enough talent to go do stents. A normal cath lab can do this. So it's a scalable procedure," says Vinod Khosla, founder of Khosla Ventures, one of Synchron's investors. As many as 2 million people in the United States alone receive stents every year to prop open their coronary arteries to prevent heart disease.
Synchron has surgically implanted its BCI in 10 subjects since 2019 and has collected several years' worth of brain data from those people. The company is getting ready to launch a larger clinical trial that is needed to seek commercial approval of its device. There have been no large-scale trials of implanted BCIs because of the risks of brain surgery and the cost and complexity of the technology.
Synchron's goal of creating cognitive AI is ambitious, and it doesn't come without risks.
"What I see this technology enabling more immediately is the possibility of more control over more in the environment," says Nita Farahany, a professor of law and philosophy at Duke University who has written extensively about the ethics of BCIs. In the longer term, Farahany says, as these AI models get more sophisticated, they could go beyond detecting intentional commands to predicting or making suggestions about what a person might want to do with their BCI.
"To enable people to have that kind of seamless integration or self-determination over their environment, it requires being able to decode not just intentionally communicated speech or intentional motor commands, but being able to detect that earlier," she says.
This gets into sticky territory: how much autonomy does a user have, and is the AI acting consistently with the individual's desires? It also raises the question of whether a BCI could shift someone's own perception, thoughts, or intentionality.
Oxley says those concerns are already arising with generative AI. Using ChatGPT for content creation, for instance, blurs the line between what a person creates and what the AI creates. "I don't think that problem is particularly special to BCI," he says.
For people with the use of their hands and voice, correcting AI-generated material, like autocorrect on your phone, is no big deal. But what if a BCI does something a user didn't intend? "The user will always be driving the output," Oxley says. But he recognizes the need for some kind of option that would allow humans to override an AI-generated suggestion: "There's always going to have to be a kill switch."