Starting April 9, 2025, these models will no longer be accessible. You will have to migrate to a newer model to avoid any service disruption.
We've included resources below on how to migrate to newer models.
## What you need to know
On October 9, 2024, we will make the following changes to the legacy models:
- Block use of these models in newly created projects.
- Reject new quota increase requests.
- Lower the default quota to 60 queries per minute (QPM).
  - If you have previously requested a quota increase, you will not be impacted.
- Block new tuning jobs on these models.
  - You can still use models you have already tuned.
The PaLM models listed below will be available until the new extended date of April 9, 2025:
| Code | Text | Chat |
|---|---|---|
| code-bison@001<br>codechat-bison@001<br>code-gecko@001<br>code-bison@002<br>code-bison-32k@002<br>codechat-bison@002<br>codechat-bison-32k@002<br>code-gecko@002 | text-bison@001<br>text-bison@002<br>text-bison-32k@002<br>textembedding-gecko@002<br>textembedding-gecko@001<br>text-unicorn@001 | chat-bison@001<br>chat-bison@002<br>chat-bison-32k@002 |
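If you want to audit your codebase ahead of the shutdown, the table above can be turned into a simple lookup. The sketch below is illustrative, not part of any Google SDK; it only encodes the model IDs listed in this notice.

```python
# Helper for auditing code ahead of the April 9, 2025 shutdown: flags any
# model ID that appears in the deprecation table above.
# NOTE: is_deprecated is an illustrative helper, not an SDK function.

DEPRECATED_PALM_MODELS = {
    # Code
    "code-bison@001", "codechat-bison@001", "code-gecko@001",
    "code-bison@002", "code-bison-32k@002", "codechat-bison@002",
    "codechat-bison-32k@002", "code-gecko@002",
    # Text
    "text-bison@001", "text-bison@002", "text-bison-32k@002",
    "textembedding-gecko@002", "textembedding-gecko@001", "text-unicorn@001",
    # Chat
    "chat-bison@001", "chat-bison@002", "chat-bison-32k@002",
}

def is_deprecated(model_id: str) -> bool:
    """Return True if model_id is scheduled for shutdown on April 9, 2025."""
    return model_id in DEPRECATED_PALM_MODELS
```

Running this check over the model IDs referenced in your configuration files is a quick way to find call sites that still need migration.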
## What you need to do
We strongly encourage you to migrate to Gemini 1.5 Flash or Gemini 1.5 Pro for improved performance across most tasks, a substantially larger context window of over 1 million tokens, and native multimodality. You will also see substantial cost savings alongside these improvements.
Additionally, you can use the Vertex AI evaluation service to compare model performance on your own evaluation datasets.
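As a sketch, scoring a Gemini candidate on your own prompts might look like the following. This assumes the `google-cloud-aiplatform` Python SDK; the `EvalTask` import path and metric names may differ by SDK version, `compare_on_gemini` is an illustrative helper, and running it requires an initialized Vertex AI project.

```python
# Hedged sketch: scoring a Gemini candidate on your own prompts with the
# Vertex AI evaluation service before migrating off a PaLM model.
# ASSUMPTIONS: the EvalTask import path and metric names below are based on
# recent google-cloud-aiplatform releases and may vary by SDK version;
# compare_on_gemini is an illustrative helper, not an SDK function.

CANDIDATE_MODEL = "gemini-1.5-flash"  # a suggested PaLM replacement

def compare_on_gemini(prompts):
    """Run pointwise quality metrics over your evaluation prompts."""
    import pandas as pd
    from vertexai.evaluation import EvalTask          # assumed import path
    from vertexai.generative_models import GenerativeModel

    dataset = pd.DataFrame({"prompt": prompts})
    task = EvalTask(dataset=dataset, metrics=["fluency", "coherence"])
    result = task.evaluate(model=GenerativeModel(CANDIDATE_MODEL))
    return result.summary_metrics  # aggregate scores per metric
```

Running the same prompt set against each candidate model lets you compare the aggregate scores before committing to a replacement.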
Please review our full guide on migrating from the PaLM API to the Gemini API in Vertex AI.
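As a minimal before/after sketch of the migration, assuming the `google-cloud-aiplatform` Python SDK and an already-initialized project; the replacement model chosen here is illustrative, not an official one-to-one mapping:

```python
# Minimal before/after sketch of moving one text-generation call from a
# PaLM model to Gemini on Vertex AI. The replacement model is illustrative.

LEGACY_MODEL = "text-bison@002"         # shuts down April 9, 2025
REPLACEMENT_MODEL = "gemini-1.5-flash"  # one suggested successor

def generate_legacy(prompt: str) -> str:
    """BEFORE: PaLM text model via the language_models API."""
    from vertexai.language_models import TextGenerationModel
    model = TextGenerationModel.from_pretrained(LEGACY_MODEL)
    return model.predict(prompt).text

def generate(prompt: str) -> str:
    """AFTER: Gemini via the generative_models API."""
    from vertexai.generative_models import GenerativeModel
    model = GenerativeModel(REPLACEMENT_MODEL)
    return model.generate_content(prompt).text
```

Beyond the call-site change, check the migration guide for differences in safety settings, response structure, and prompt formatting between the two APIs.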