AI Is Decoding Oinks to See if the Pigs Are Alright



There’s a lot, as usual, to call out in the fast-moving world of generative AI, from the Biden administration’s new rules on the government’s use of AI to the Federal Trade Commission banning fake, AI-generated customer reviews to questions about whether an AI chatbot modeled on a Game of Thrones character led to the tragic suicide of a 14-year-old boy.


I’ll get to some of that news below. But in the interest of thinking positively about how gen AI can advance our understanding of, well, just about anything, let’s start off with this: European scientists have come up with an AI algorithm that interprets the sounds made by pigs, to serve as a sort of early warning system for farmers when their animals need to be cheered up. 

Under the headline “AI decodes oinks and grunts to keep pigs happy,” Reuters reported that the algorithm “could potentially alert farmers to negative emotions in pigs” so they can intervene and improve the animals’ well-being. The news outlet spoke with Elodie Mandel-Briefer, a behavioral biologist at the University of Copenhagen, who’s been co-leading the effort.

Scientists from universities in Denmark, the Czech Republic, France, Germany, Norway and Switzerland “used thousands of recorded pig sounds in different scenarios, including play, isolation and competition for food, to find that grunts, oinks, and squeals reveal positive or negative emotions,” Reuters said. Mandel-Briefer told the news agency that though a good farmer will have a sense of how their pigs are faring just by watching them in their pens, much of the emphasis with current tools is on an animal’s physical condition.

“Emotions of animals are central to their welfare, but we don’t measure it much on farms,” she told Reuters.
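The Reuters report doesn’t describe the team’s actual model, but for the curious, here’s a rough, purely illustrative sketch in Python of how a sound-based “mood classifier” could be put together. It assumes you already have short audio clips labeled “positive” or “negative” and leans on generic MFCC features with an off-the-shelf classifier; every file name, feature choice and parameter below is a stand-in for illustration, not the Copenhagen group’s method.

```python
# Illustrative sketch only: classify pig vocalizations as "positive" or "negative".
# Assumes a list of (wav_path, label) pairs; the researchers' real pipeline isn't public here.
import numpy as np
import librosa
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

def extract_features(wav_path: str) -> np.ndarray:
    """Load one clip and summarize it as mean MFCCs, a common audio feature."""
    audio, sr = librosa.load(wav_path, sr=16000)
    mfcc = librosa.feature.mfcc(y=audio, sr=sr, n_mfcc=20)
    return mfcc.mean(axis=1)

def train_emotion_classifier(clips):
    """clips: list of (path, label) tuples, label in {"positive", "negative"}."""
    X = np.array([extract_features(path) for path, _ in clips])
    y = np.array([label for _, label in clips])
    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
    model = RandomForestClassifier(n_estimators=200, random_state=0)
    model.fit(X_train, y_train)
    print(classification_report(y_test, model.predict(X_test)))  # how well it separates moods
    return model
```

In practice, a farm-ready “early warning” system would also need to run continuously on barn audio and flag clips scored as negative, but the toy version above captures the core idea: turn each grunt or squeal into numbers, then learn which patterns track good or bad moods.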

If you’re a fan of Babe and Charlotte’s Web, like I am, you’re probably nodding and saying that of course we should be paying attention to the feelings of animals. And you may also be thinking, Isn’t it remarkable that AI may help us — if not exactly talk to animals, like Doctor Dolittle or Eliza Thornberry — at least give farmers a helpful bead on a creature’s inner life?

Given that researchers have also been using AI to decode the sounds elephants make, leading them to conclude that elephants call each other by name just like we humans do, I’m optimistic that a Dolittle/Thornberry-like translation chatbot isn’t that far away. 

Here are the other doings in AI worth your attention.

Apple Intelligence is sort of, kind of heading our way

Apple users will soon start to see some of the gen AI tools promised as part of the company’s Apple Intelligence rollout, when the tech giant releases software updates this week for the iPhone, the iPad and the Mac.

iOS 18.1 includes a handful of Apple Intelligence features, “like AI-suggested writing tools that pop up in documents or emails, photo tools including Clean Up to remove unwanted parts of an image, and a number of Siri changes,” CNET reviewers Scott Stein and Patrick Holland reported. “The most conspicuous changes to Siri include a new voice designed to sound more natural, the ability to understand the context of conversations, a new glowing border around the display when Siri is running, and a new double-tap gesture on the bottom of the screen to type to Siri.”

There’s a caveat, they add: “While some of Apple’s AI features sound genuinely useful, the limited rollout to only certain iPhones, iPads and Macs later this year (iPhone 15 Pro models or later, and Macs and iPads with M-series chips) means they won’t be used by everyone.”

What’s with the slow, limited AI rollout by Apple, which is seen as being behind its rivals including Microsoft and Google when it comes to gen AI tools? Apple’s software chief, Craig Federighi, told The Wall Street Journal’s Joanna Stern that the company is taking a measured approach to gen AI because Apple is focused on privacy and the responsible use of artificial intelligence.

“You could put something out there and have it be sort of a mess,” Federighi told Stern. “Apple’s point of view is more like, ‘Let’s try to get each piece right and release it when it’s ready.'” 

Or maybe Apple is just behind its rivals, as some people speculate.

I, for one, am waiting to play around with Genmoji, which I believe may be a creative way to get Apple users comfortable with writing gen AI prompts. Details here.

FTC bans fake — and AI-generated — reviews, testimonials

There have been many, many stories questioning whether you should believe reviews and customer testimonials purporting to be written by average people on sites like Amazon and Yelp. Now the US Federal Trade Commission aims to save consumers time and money with a new rule that, among other things, bans “fake or false consumer reviews … that misrepresent that they are by someone who does not exist, such as AI-generated fake reviews,” the FTC said in a release.


“Fake reviews not only waste people’s time and money, but also pollute the marketplace and divert business away from honest competitors,” FTC Chair Lina Khan said in the August release, which focused on rules that kicked in last week. The rules also ban the sale or purchase of online reviews.

CNET writer Samantha Kelly noted that the new rule applies to future reviews. “About 90% of people rely on reviews when shopping online, according to marketing platform Uberall,” Kelly wrote. “Although it’s unclear how the FTC will enforce the rule, it could choose to pursue a few high-profile cases to set an example. It can seek fines up to $51,744 per violation.”

If you think a review is fake, you can report it to the FTC here.

FYI, CNET’s reviews are all done by our human staff, and in line with our AI policy, we’re not using AI tools to do any of the hands-on, product-based testing that informs our reviews and ratings — “except in cases where we’re reviewing and rating AI tools themselves and need to generate examples of their output, as we do within AI Atlas, our human-created compendium of AI news and information.”

For Elon Musk, an AI imitation may be the sincerest form of theft, lawsuit alleges

The production company that brought us the film Blade Runner 2049 isn’t thrilled with Elon Musk. Alcon Entertainment alleges that while showcasing Tesla’s robotaxi during the vehicle’s October debut, the billionaire used AI-generated imagery that it says too closely copies images from the 2017 movie.

Alcon, which is suing Tesla, CEO Musk and the film’s distributor, Warner Bros., said it had been asked about the use of an “iconic still image” from the movie to promote Tesla’s new Cybercab, according to Alcon’s 41-page lawsuit (which you can read here).

“Alcon refused all permissions and adamantly objected to Defendants suggesting any affiliation between BR2049 and Tesla, Musk or any Musk-owned company,” the suit says. “Defendants then used an apparently AI-generated faked image to do it all anyway.”

You can see the images in question in this story by the BBC, which reported that Musk has mentioned the original Blade Runner film in the past and had hinted “at one point that it was a source of inspiration for Tesla’s Cybertruck.” 

Tesla and Warner Bros. haven’t responded to various media requests for comment. “Musk opted for troll mode in responding to news of the lawsuit — saying [in an X post] ‘That movie sucked‘ — rather than addressing specifics of the complaint,” Variety reported. The Washington Post noted that Musk mentioned Blade Runner during the Cybercab rollout. “I love Blade Runner, but, uh, I don’t know if we want that future,” Musk said. “I think we want that duster [coat] he’s wearing, but, uh, but not the bleak apocalypse. We want to have a fun, exciting future.”

Tesla’s robotaxi event — called We, Robot — also prompted filmmaker Alex Proyas to call out similarities to designs for I, Robot, his 2004 film based on Isaac Asimov’s stories. “Hey Elon, can I have my designs back please,” Proyas wrote in a post on X that’s been viewed 8.1 million times. 

OpenAI whistleblower cites copyright concerns, Perplexity sued

Publishers and AI companies are actively facing off over whether the makers of the large language models that power gen AI chatbots (like OpenAI’s ChatGPT, Anthropic’s Claude and Perplexity) can scrape content off the internet, including copyrighted materials, to train their models. Publishers say no, with The New York Times notably suing OpenAI and Microsoft. AI companies say they’re operating under fair use guidelines and don’t have to compensate copyright holders or ask their permission.

Last week, the Times reported that a former OpenAI researcher involved in gathering material off the internet to feed ChatGPT believes OpenAI’s use of copyrighted data breaks the law. During a series of interviews, Suchir Balaji, who worked at OpenAI for four years, shared his concerns with the paper. “ChatGPT and other chatbots, he said, are destroying the commercial viability of the individuals, businesses and internet services that created the digital data used to train these A.I. systems,” the Times reported.

“This is not a sustainable model for the internet ecosystem as a whole,” Balaji told the paper.

In response to Balaji’s assertions, OpenAI reiterated that it’s collecting content from the internet “in a manner protected by fair use.” And coincidentally (or not), OpenAI and Microsoft announced they would contribute $5 million each in cash and tech services to fund news projects focused on AI adoption at five big city daily news organizations.

Meanwhile, Perplexity AI was slapped with a lawsuit by Dow Jones and the New York Post, both owned by media mogul Rupert Murdoch’s News Corp. The media companies said the AI startup has been engaging in a “massive amount of illegal copying” of their copyrighted work. Reuters also noted that the NYT sent Perplexity a cease and desist notice earlier this month “demanding it stop using the newspaper’s content for generative AI purposes.”

In response to the Dow Jones suit, the CEO of Perplexity, Aravind Srinivas, told Reuters that he was “surprised” and that the company is open to talking to publishers about licensing their content. That comes after Wired and Forbes accused Perplexity of plagiarizing their content, prompting the AI search engine to start a revenue-sharing program with publishers.

Also worth knowing…

When asked to share his views on tech, director Spike Lee said he’s “scared” of AI. Lee spoke as part of a lecture series hosted by the Gibbes Museum of Art. “I was in my hotel room, on Instagram, and they have these things [with] a lower third saying it’s AI, but the things have people saying the exact opposite of who these people are. So, you don’t know what’s what, and it’s scary. It’s scary,” said Lee, the creative force behind classic films including Do the Right Thing and Malcolm X. “I just think that … sometimes technology can go too far.” Video of the event is available on YouTube here. Lee’s AI comments are just before the 57-minute mark.

If you’re looking for some lessons in how to use gen AI tools, CNET contributor Carly Quellman offers up her review of MasterClass’ three-part series on how to embrace AI.

The Biden administration put out a memorandum outlining how “the Pentagon, the intelligence agencies and other national security institutions should use and protect artificial intelligence technology, putting ‘guardrails’ on how such tools are employed in decisions varying from nuclear weapons to granting asylum,” The New York Times reported. You can read the memorandum here.

Researchers at the University of California, Los Angeles, say they’ve developed an AI deep-learning system that “teaches itself quickly to automatically analyze and diagnose MRIs and other 3D medical images — with accuracy matching that of medical specialists in a fraction of the time.”




