Google on Wednesday, December 6, presented an artificial intelligence model called Gemini, which the company described as its most flexible model to date because it will be released in several sizes.
The Internet search giant announced that one version of the new system is designed to run directly on smartphones, a feature the company considers an important competitive advantage.
The model will come in three versions: Gemini Ultra, Gemini Pro, and Gemini Nano. Eli Collins, vice president of products at Google DeepMind, said that the model will be able to run both on mobile devices and in large-scale data centers. According to him, the technology giant has long wanted to create an artificial intelligence model inspired by the way humans understand the world and interact with the space around them.
Eli Collins also noted that the company was striving to make its AI model genuinely useful to people, not just smart software. He stated that Gemini brings the company a step closer to this vision of artificial intelligence as a functional system that is aware of, and can interact with, the world around it.
Before releasing the model, Google ran a series of benchmarks and reported that Gemini Pro outperformed OpenAI's GPT-3.5 in six of eight studies. The tech giant also said that its model demonstrated a higher level of problem-solving quality than GPT-4, OpenAI's advanced artificial intelligence model.
Google has calculated that AlphaCode 2, its newest AI-based product that can explain and generate code, surpasses 85% of participants in competitive programming.
Starting December 6, developers who want to build Gemini-based apps for smartphones and tablets can sign up for the Nano version of the technology giant's model. Google also announced that it will ship the AI model on its flagship Pixel 8 Pro smartphone, where owners will have access to features such as summarizing the key points of a recorded phone conversation.
Next week, the tech giant will open access to Gemini Pro for cloud customers through the Vertex AI and AI Studio platforms.
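For developers getting access through AI Studio, a first call to Gemini Pro goes through the public generateContent REST endpoint. The sketch below is a minimal, hypothetical illustration of assembling such a request body in Python; the endpoint path and payload shape follow Google's publicly documented format at launch, and an AI Studio API key is assumed for an actual call.

```python
import json

# Public REST endpoint for single-turn Gemini Pro text queries
# (an API key from Google AI Studio would be appended as a query
# parameter for a real request; none is included here).
API_ENDPOINT = (
    "https://generativelanguage.googleapis.com/"
    "v1beta/models/gemini-pro:generateContent"
)


def build_request(prompt: str) -> dict:
    """Assemble the JSON body for a single-turn text prompt."""
    return {"contents": [{"parts": [{"text": prompt}]}]}


# Example: the kind of summarization task mentioned for Pixel users.
body = build_request("Summarize the key points of this phone call.")
print(json.dumps(body))
```

Sending this body as a POST request to `API_ENDPOINT` (with a valid key) would return a JSON response containing the model's generated candidates; the request-building step is shown separately here since it needs no credentials.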
The most capable version of Google's model is Gemini Ultra. It will initially be available through an Early Access program open to selected developers and enterprise customers; more detailed information will be made public next week. The tech giant will expand access to Gemini Ultra in early 2024.
The new artificial intelligence model can also be integrated into a variety of Google apps and services via Bard, the company's conversational chatbot competing with OpenAI's ChatGPT. Before Gemini's launch, Bard ran on PaLM 2, a large language model announced at the company's developer conference in May this year.
Over the past year, the tech giant has recognized the need to rethink its core search business and respond to the rapid rise of artificial intelligence programs that can create content. Google was for years a pioneer of machine intelligence research, yet it has repeatedly been criticized for bringing AI-based products to market too slowly. That dissatisfaction intensified after the debut of ChatGPT, a sensational event that attracted worldwide attention.
Against the background of these accusations, Gemini can be described as the company's response to market pressure. Google has stated that its artificial intelligence model is natively multimodal: Gemini has been trained to process user queries that combine text and images.
At the same time, Google representatives warn that the new model is susceptible to so-called hallucinations, the generation of false information. Eli Collins described this as an unsolved research problem. He noted that Gemini has undergone the most comprehensive safety evaluation of any of the tech giant's artificial intelligence models, including testing of its propensity to generate hate speech and politically biased content.
The aforementioned integration of Gemini into Bard will allow the model to connect to Google services, including Gmail, Maps, Docs, and YouTube. The chatbot already runs on Gemini Pro, which gives Bard enhanced reasoning, planning, and understanding capabilities. Bard with the integrated model will work in English in 170 countries, but not yet in the EU or the UK, where the company has not cleared its new product with regulators.
In early 2024, the technology giant plans to release Bard Advanced, which will run on the more capable version of the model, Gemini Ultra.
Sissie Hsiao, Google’s vice president of products for Bard, said that thanks to Gemini, Bard will receive its biggest and best update to date, giving users new opportunities for creativity, collaboration, and interaction.
As we have reported earlier, Google’s AI Chatbot Answers Questions About YouTube Videos.