Google has recently announced a new family of language models called “Gemma,” based on the recent Gemini series that has drawn widespread publicity and acclaim in the AI space. The tech giant has been careful in describing Gemma AI and has termed it an “open-weight” model. In practice, this means the models can be modified and fine-tuned to user requirements, while the terms of use are still dictated by Google. Because open-source AI models and LLMs have recently raised concerns about misuse, companies like Google have been wary of calling their models open-source. Even so, open models still give developers and other enthusiasts access to an advanced LLM that they can fine-tune for their own use cases.
Like Google, other firms have launched a variety of open-source and open-weight LLMs. These include tech behemoths such as Meta and Microsoft, who together launched Llama 2, a large open-source LLM, as well as French startup Mistral, which has also given the tech space a capable open-source large language model. Other firms like Hugging Face have had open-source alternatives on the market for nearly a year, creating considerable competition even in the open-source and open-weight LLM domains. The upcoming sections discuss Google’s Gemma AI in greater detail.
Google Gemma’s Key Features
Google Gemma has been launched in two variants: a 2-billion-parameter edition and a 7-billion-parameter alternative. Since the firm has not published a paper detailing the technology’s key features and facets, concrete details about the nuances of the two Gemma AI models are scarce. However, it is known that prospective users can download the models and run them on their personal computers, workstations, and other supported devices. Google Gemma uses a decoder-only architecture, which the firm has also implemented in major LLMs such as Gemini and the older PaLM 2 models. Because Gemma is an open model, it can be adapted to a variety of use cases, and Google is entering the space partly to engage developers and give them an environment in which to build on the model.
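As a rough illustration of what running the downloadable weights locally could look like, the sketch below uses the Hugging Face transformers library; the model identifier, precision, and device settings shown here are assumptions for illustration rather than details confirmed in Google’s announcement.

```python
# A minimal sketch of loading a downloaded Gemma checkpoint locally.
# Assumes the `transformers`, `accelerate`, and `torch` packages are installed
# and that the weights are published under the assumed model id below.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "google/gemma-2b"  # assumed identifier for the 2B variant

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(
    MODEL_ID,
    torch_dtype=torch.float16,  # half precision to fit consumer GPUs
    device_map="auto",          # place weights on GPU if available, else CPU
)

prompt = "Explain the difference between open-weight and open-source models."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```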
Without revealing too much, Google called its new generative AI model “state-of-the-art” and stated that it is capable of beating other models in its class. That said, Gemma AI’s 2B and 7B variants are only the initial entrants in the series, and larger successors are bound to be added to the open model line-up. Gemma’s training data has been filtered for personal and private information to ensure the large language model adheres to the principles of responsible AI. In addition to Google’s existing support resources, the firm has also provided a Responsible Generative AI Toolkit that developers can use when creating AI-generated content for their respective needs.
Gemma AI’s Technical Attributes
Gemma AI can be used to create chatbots, AI writing tools, and other general content-generation applications. There is no concrete information on the data Google used to train the models; however, the firm’s CEO, Sundar Pichai, did state that Gemma builds on the same foundations as the larger Gemini models that currently power Google’s other AI offerings. NVIDIA has also collaborated with the tech giant to optimize how the LLMs run on its chips and hardware. Gemma AI is built to function best on Google Cloud, and the firm has made that option more attractive for new users by adding up to $300 in credits for such accounts. The Gemma LLMs can also be accessed for free via data science communities like Kaggle and collaborative development platforms such as Colab notebooks.
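Building on the earlier sketch, the snippet below outlines how a minimal chatbot loop might look in a Kaggle or Colab notebook; the instruction-tuned model identifier and the use of the tokenizer’s chat template are assumptions for illustration, not details from Google’s announcement.

```python
# A hedged sketch of a simple chatbot loop on top of Gemma, e.g. in a notebook.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "google/gemma-2b-it"  # assumed id for an instruction-tuned variant

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(
    MODEL_ID, torch_dtype=torch.float16, device_map="auto"
)

history = []
while True:
    user_msg = input("You: ")
    if not user_msg:
        break
    history.append({"role": "user", "content": user_msg})
    # Format the running conversation with the tokenizer's chat template.
    input_ids = tokenizer.apply_chat_template(
        history, add_generation_prompt=True, return_tensors="pt"
    ).to(model.device)
    output_ids = model.generate(input_ids, max_new_tokens=256)
    # Decode only the newly generated tokens as the assistant's reply.
    reply = tokenizer.decode(
        output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True
    )
    history.append({"role": "assistant", "content": reply})
    print("Gemma:", reply)
```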
Besides efficiency and a lightweight design, Google has also trained these generative models to perform better on AI safety. Security vulnerabilities have become a major concern for AI firms, alongside the copyright issues that have persistently plagued the industry. As numerous firms, regulatory authorities, and governments wade through these ethically complex areas, Google has tried to take the most responsible approach possible. Because Google has not completely opened up the platform to independent developers and third parties, the company retains considerable control over the model’s overall direction and its use, which also adds credibility to the model in the long term.
The Scope for Google AI’s Open LLMs
As Google’s rivalry with OpenAI’s ChatGPT has steadily intensified, the firm is looking to expand its reach in the AI market. While other companies like Meta have also entered the open-source market, Google has taken a slightly different approach with similar goals of increasing third-party engagement with its offerings. Given that Gemini’s launch has been fairly successful and experimental services like the Search Generative Experience remain on track, Google is continuing to implement its “AI first” policy, now in the open domain. Since demand for open-source AI and LLM technologies remains high, Google will continue to innovate and integrate new developments and discoveries within its frameworks, albeit with more control over its offerings than truly open-source LLMs allow.
FAQs
1. Is Google’s Gemma AI free to use?
Yes, Google Gemma AI is free and can be accessed on Kaggle, Google Colab Notebooks, or Google Cloud.
2. How many parameters does Gemma have?
Gemma AI is currently available in two variants, with two billion and seven billion parameters respectively. Google plans to introduce options with a greater number of parameters shortly.
3. Is Gemma an open-source LLM?
Gemma is not open-source since Google still controls the trajectory of the overall development of the LLM and what it can be used for. However, users can download the models and augment them for specific use cases so long as they adhere to Google’s policies.