Gemini is a powerful artificial intelligence (AI) model from Google that can understand text, images, videos, and audio. According to Google, the multimodal model can complete complex tasks in math, physics, and other areas, as well as understand and generate high-quality code in various programming languages.
It is currently available through the Gemini chatbot (formerly Google Bard) and on some Google Pixel devices, and it will gradually be folded into other Google services. During Google I/O 2024, the company announced new Gemini features, including a 'Live' mode and integrations with Project Astra. Gemini also powers AI Overviews in Google Search.
“Gemini is the result of large-scale collaborative efforts by teams across Google, including our colleagues at Google Research,” said Demis Hassabis, CEO and co-founder of Google DeepMind, when announcing Gemini.
“It was built from the ground up to be multimodal, which means it can generalize and seamlessly understand, operate across, and combine different types of information including text, code, audio, image, and video.”