Migrate from PaLM API to Gemini API on Vertex AI

This guide shows how to migrate Vertex AI SDK for Python code from using the PaLM API to using the Gemini API. With Gemini, you can generate text, multi-turn conversations (chat), and code. Because Gemini output might differ from PaLM output, check your responses after you migrate. For more information, see Introduction to multimodal classes in the Vertex AI SDK.
Gemini differences from PaLM
The following are some differences between Gemini and PaLM models:
Their response structures are different. To learn about the Gemini response structure, see the Gemini API model reference response body.
Their safety categories are different. To learn about the differences between Gemini and PaLM safety settings, see Key differences between Gemini and other model families.
Gemini can't perform code completion. If you need to build a code completion application, use the code-gecko model. For more information, see Codey code completion models.
For code generation, Gemini has a higher recitation block rate.
The confidence score that Codey code generation models return to indicate how confident the model is in its response isn't exposed in Gemini.
Update PaLM code to use Gemini models
The methods on the GenerativeModel class are mostly the same as the methods on the PaLM classes. For example, use GenerativeModel.start_chat to replace the PaLM equivalent, ChatModel.start_chat. However, because Google Cloud is always improving and updating Gemini, you might run into some differences. For more information, see the Python SDK reference.
To migrate from the PaLM API to the Gemini API, the following code modifications are required:
For all PaLM model classes, use the GenerativeModel class in Gemini.
To use the GenerativeModel class, run the following import statement:
from vertexai.generative_models import GenerativeModel
To load a Gemini model, use the GenerativeModel constructor instead of the from_pretrained method. For example, to load the Gemini 2.0 Flash model, use GenerativeModel("gemini-2.0-flash-001").
To generate text in Gemini, use the GenerativeModel.generate_content method instead of the predict method that's used on PaLM models. For example:
model = GenerativeModel("gemini-2.0-flash-001")
response = model.generate_content("Write a short poem about the moon")
Gemini and PaLM class comparison
Each PaLM model class is replaced by the GenerativeModel class in Gemini. The following table shows the classes used by the PaLM models and their equivalent class in Gemini.
PaLM model     | PaLM model class    | Gemini model class
text-bison     | TextGenerationModel | GenerativeModel
chat-bison     | ChatModel           | GenerativeModel
code-bison     | CodeGenerationModel | GenerativeModel
codechat-bison | CodeChatModel       | GenerativeModel
Common setup instructions
For both the PaLM API and the Gemini API in Vertex AI, the setup process is the same. For more information, see Introduction to the Vertex AI SDK for Python.
The following is a short code sample that installs the Vertex AI SDK for Python:

pip install google-cloud-aiplatform

import vertexai
vertexai.init(project="PROJECT_ID", location="LOCATION")

In this sample code, replace PROJECT_ID with your Google Cloud project ID, and replace LOCATION with the location of your Google Cloud project (for example, us-central1).
Gemini and PaLM code samples
Each of the following pairs of code samples includes PaLM code and, next to it, Gemini code that's been migrated from the PaLM code.
Text generation: basic
The following code samples show the differences between the PaLM API and the Gemini API for creating a text generation model.
PaLM
Gemini
from vertexai.language_models import TextGenerationModel

model = TextGenerationModel.from_pretrained("text-bison@002")
response = model.predict(prompt="The opposite of hot is")
print(response.text)  # 'cold.'
from vertexai.generative_models import GenerativeModel

model = GenerativeModel("gemini-2.0-flash-001")
responses = model.generate_content("The opposite of hot is", stream=True)
for response in responses:
    print(response.text)
Text generation with parameters
The following code samples show the differences between the PaLM API and the Gemini API for creating a text generation model with optional parameters.
PaLM
Gemini
from vertexai.language_models import TextGenerationModel

model = TextGenerationModel.from_pretrained("text-bison@002")

prompt = """You are an expert at solving word problems.

Solve the following problem:

I have three houses, each with three cats.
each cat owns 4 mittens, and a hat. Each mitten was
knit from 7m of yarn, each hat from 4m.
How much yarn was needed to make all the items?

Think about it step by step, and show your work.
"""

response = model.predict(
    prompt=prompt,
    temperature=0.1,
    max_output_tokens=800,
    top_p=1.0,
    top_k=40,
)
print(response.text)
from vertexai.generative_models import GenerativeModel

model = GenerativeModel("gemini-2.0-flash-001")

prompt = """You are an expert at solving word problems.

Solve the following problem:

I have three houses, each with three cats.
each cat owns 4 mittens, and a hat. Each mitten was
knit from 7m of yarn, each hat from 4m.
How much yarn was needed to make all the items?

Think about it step by step, and show your work.
"""

responses = model.generate_content(
    prompt,
    generation_config={
        "temperature": 0.1,
        "max_output_tokens": 800,
        "top_p": 1.0,
        "top_k": 40,
    },
    stream=True,
)
for response in responses:
    print(response.text)
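As a sanity check on the word problem in the prompt above, the expected answer can be computed directly. This is a reference calculation for verifying model output, not part of either API; the function name and parameter defaults are illustrative:

```python
def total_yarn_m(houses=3, cats_per_house=3, mittens_per_cat=4,
                 yarn_per_mitten_m=7, hats_per_cat=1, yarn_per_hat_m=4):
    # 3 houses x 3 cats = 9 cats; each cat needs 4 mittens (7 m each)
    # and 1 hat (4 m each), so each cat needs 4*7 + 1*4 = 32 m of yarn.
    cats = houses * cats_per_house
    return cats * (mittens_per_cat * yarn_per_mitten_m + hats_per_cat * yarn_per_hat_m)

print(total_yarn_m())  # 9 * 32 = 288 m of yarn
```

A correct model response should conclude that 288 m of yarn is needed.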
Chat
The following code samples show the differences between the PaLM API and the Gemini API for creating a chat model.
PaLM
Gemini
from vertexai.language_models import ChatModel

model = ChatModel.from_pretrained("chat-bison@002")
chat = model.start_chat()

print(chat.send_message("""
Hello! Can you write a 300 word abstract for a research paper I need to write about the impact of AI on society?
"""))

print(chat.send_message("""
Could you give me a catchy title for the paper?
"""))
from vertexai.generative_models import GenerativeModel

model = GenerativeModel("gemini-2.0-flash-001")
chat = model.start_chat()

responses = chat.send_message("""
Hello! Can you write a 300 word abstract for a research paper I need to write about the impact of AI on society?
""", stream=True)
for response in responses:
    print(response.text)

responses = chat.send_message("""
Could you give me a catchy title for the paper?
""", stream=True)
for response in responses:
    print(response.text)
Code generation
The following code samples show the differences between the PaLM API and the Gemini API for generating a function that predicts whether a year is a leap year.
Codey
Gemini
from vertexai.language_models import CodeGenerationModel

model = CodeGenerationModel.from_pretrained("code-bison@002")
response = model.predict(prefix="Write a function that checks if a year is a leap year.")
print(response.text)
from vertexai.generative_models import GenerativeModel

model = GenerativeModel("gemini-2.0-flash-001")
response = model.generate_content("Write a function that checks if a year is a leap year.")
print(response.text)
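For spot-checking either model's output, a correct implementation of the requested function looks like the following. This is a reference sketch of the Gregorian leap-year rule, not generated model output:

```python
def is_leap_year(year: int) -> bool:
    # A year is a leap year if it's divisible by 4, except century
    # years (divisible by 100), unless also divisible by 400.
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

print(is_leap_year(2024))  # True
print(is_leap_year(1900))  # False (century year not divisible by 400)
print(is_leap_year(2000))  # True  (divisible by 400)
```

If the generated function disagrees with these cases (especially the century years), the model's answer is incorrect.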
Migrate prompts to Gemini models
If you have sets of prompts that you previously used with PaLM 2 models, you can optimize them for use with Gemini models by using the Vertex AI prompt optimizer (Preview).
Next steps
See the Google models page for more details on the latest models and features.
Last updated: 2025-07-09 (UTC)