Tuesday, April 9, 2024

Tune Gemini Pro in Google AI Studio or with the Gemini API



Posted by Cher Hu, Product Manager and Saravanan Ganesh, Software Engineer for Gemini API

The following post was originally published in October 2023. Today, we’ve updated the post to share how you can easily tune Gemini models in Google AI Studio or with the Gemini API.

Last year, we launched Gemini 1.0 Pro, our mid-sized multimodal model optimized for scaling across a wide range of tasks. And with 1.5 Pro this year, we demonstrated the possibilities of what large language models can do with an experimental 1M context window. Now, to quickly and easily customize the generally available Gemini 1.0 Pro model (text) for your specific needs, we’ve added Gemini Tuning to Google AI Studio and the Gemini API.

What is tuning?

Developers often require higher quality output for custom use cases than what can be achieved through few-shot prompting. Tuning improves on this technique by further training the base model on many more task-specific examples—so many that they can’t all fit in the prompt.
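To make the difference concrete, here's a rough sketch (the review strings and record shape are purely illustrative): with few-shot prompting, the task examples ride along inside every prompt, while with tuning the same kind of examples are supplied once as training data, so they no longer need to fit in the context window.

```python
# Few-shot prompting: the examples are packed into every prompt,
# consuming context window and adding latency on each request.
few_shot_prompt = """Classify the sentiment of the review as positive or negative.

Review: "Great battery life." -> positive
Review: "Screen cracked on day one." -> negative
Review: "Exactly what I needed." ->"""

# Tuning: the same kind of examples are provided once, as training data,
# so the customized model can answer a short prompt with no examples in it.
training_examples = [
    {"text_input": 'Review: "Great battery life."', "output": "positive"},
    {"text_input": 'Review: "Screen cracked on day one."', "output": "negative"},
    # ...hundreds more pairs, far more than would fit in any prompt
]

prompt_for_tuned_model = 'Review: "Exactly what I needed."'
```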

Fine-tuning vs. Parameter Efficient Tuning

You may have heard about classic “fine-tuning” of models. This is where a pre-trained model is adapted to a particular task by training it on a smaller set of task-specific labeled data. But with today’s LLMs and their huge number of parameters, fine-tuning is complex: it requires machine learning expertise, lots of data, and lots of compute.

Tuning in Google AI Studio uses a technique called Parameter Efficient Tuning (PET) to produce higher-quality customized models with lower latency compared to few-shot prompting and without the additional costs and complexity of traditional fine-tuning. In addition, PET produces high quality models with as little as a few hundred data points, reducing the burden of data collection for the developer.

Why tuning?

Tuning enables you to customize Gemini models with your own data to perform better for niche tasks while also reducing the context size of prompts and latency of the response. Developers can use tuning for a variety of use cases including but not limited to:

  • Classification: Run natural language tasks like classifying your data into predefined categories, without needing tons of manual work or tools.
  • Information extraction: Extract structured information from unstructured data sources to support downstream tasks within your product.
  • Structured output generation: Generate structured data, such as tables, quickly and easily.
  • Critique models: Use tuning to create critique models to evaluate output from other models.

Get started quickly with Google AI Studio

1. Create a tuned model

It’s easy to tune models in Google AI Studio. This removes any need for engineering expertise to build custom models. Start by selecting “New tuned model” in the menu bar on the left.

moving image showing how to create a tuned model in Google AI Studio by opening 'New Tuned Model' from the menu

2. Select data for tuning

You can tune your model from an existing structured prompt or import data from Google Sheets or a CSV file. You can get started with as few as 20 examples; for the best performance, we recommend providing a dataset of at least 100 examples.

moving image showing how to select data for tuning in Google AI Studio by importing data
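If you import from a CSV file, the dataset is simply one input/output pair per row. The snippet below is a hypothetical way to prepare such a file; the column names and review strings are illustrative, and you can map the columns to input and output when you import them into Google AI Studio.

```python
import csv

# Hypothetical sentiment-classification dataset: one input/output pair
# per example. Column names are illustrative; they are mapped to input
# and output when the file is imported into Google AI Studio.
examples = [
    {"input": 'Review: "Battery lasts all week."', "output": "positive"},
    {"input": 'Review: "Stopped working after two days."', "output": "negative"},
    {"input": 'Review: "Does exactly what it says."', "output": "positive"},
    # ...aim for at least 100 examples for the best results
]

with open("tuning_data.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=["input", "output"])
    writer.writeheader()
    writer.writerows(examples)
```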

3. View your tuned model

View your tuning progress in your library. Once the model has finished tuning, you can view the details by clicking on your model. Start running your tuned model through a structured or freeform prompt.

moving image showing how to view your tuned model and its tuning progress in Google AI Studio

4. Run your tuned model anytime

You can also access your newly tuned model by creating a new structured or freeform prompt and selecting your tuned model from the list of available models.

moving image demonstrating how to run your tuned model in Google AI Studio from a new structured or freeform prompt

Tuning with the Gemini API

Google AI Studio is the fastest and easiest way to start tuning Gemini models. You can also access the feature via the Gemini API by passing the training data in the API request when creating a tuned model. Learn more about how to get started in the Gemini API documentation.
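As a rough sketch of what that looks like with the google-generativeai Python SDK (the base model name, dataset, tuned-model id, and hyperparameters below are illustrative, so check the Gemini API tuning guide for current details):

```python
# A minimal sketch of tuning via the Gemini API using the google-generativeai
# Python SDK (pip install google-generativeai). The base model name, dataset,
# tuned-model id, and hyperparameters are illustrative.
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")

# The training data is passed directly in the request that creates the
# tuned model, as input/output pairs.
training_data = [
    {"text_input": 'Review: "Battery lasts all week."', "output": "positive"},
    {"text_input": 'Review: "Stopped working after two days."', "output": "negative"},
    # ...at least ~100 examples is recommended
]

# Kick off the tuning job; this returns a long-running operation.
operation = genai.create_tuned_model(
    source_model="models/gemini-1.0-pro-001",
    training_data=training_data,
    id="my-sentiment-classifier",  # illustrative tuned-model id
    epoch_count=5,
    batch_size=4,
    learning_rate=0.001,
)

# Wait for tuning to finish (this can take a while).
tuned_model = operation.result()
print(tuned_model.name)

# Use the tuned model like any other Gemini model.
model = genai.GenerativeModel(model_name="tunedModels/my-sentiment-classifier")
response = model.generate_content('Review: "Exactly what I needed."')
print(response.text)
```

Once the job completes, the tuned model also appears in your Google AI Studio library, so you can run it from either surface.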

We’re excited about the possibilities that tuning opens up for developers and can’t wait to see what you build with the feature. If you’ve got some ideas or use cases brewing, share them with us on X (formerly known as Twitter) or LinkedIn.


