Generative AI overview
Google Cloud provides a range of products and tools that support the entire lifecycle of building generative AI applications.
Model exploration and hosting
Google Cloud provides a set of state-of-the-art foundation models through Vertex AI, including Gemini. You can also deploy third-party models to Vertex AI Model Garden or self-host them on GKE or Compute Engine.
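As a minimal sketch of what calling a hosted Gemini model looks like with the Vertex AI Python SDK, the snippet below assumes the google-cloud-aiplatform package is installed and application default credentials are configured; the project ID, region, and model name are placeholders.

```python
# Minimal sketch: call a Gemini foundation model hosted on Vertex AI.
# Assumes `pip install google-cloud-aiplatform` and configured Google Cloud credentials.
import vertexai
from vertexai.generative_models import GenerativeModel

# Placeholders -- replace with your own project and region.
vertexai.init(project="your-project-id", location="us-central1")

model = GenerativeModel("gemini-1.5-flash")  # model name may vary by availability
response = model.generate_content("Summarize the benefits of managed model hosting.")
print(response.text)
```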
Prompt design and engineering
Prompt design is the process of authoring prompt and response pairs to give language models additional context and instructions. After you author prompts, you feed them to the model as a prompt dataset for pretraining. When a model serves predictions, it responds with your instructions built in.
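For illustration, the hedged sketch below assembles a few authored prompt-and-response pairs into a few-shot prompt and sends it to the model; the example pairs are invented, and the GenerativeModel call reuses the Vertex AI SDK setup shown above.

```python
# Sketch: build a few-shot prompt from authored prompt/response pairs.
from vertexai.generative_models import GenerativeModel

# Illustrative pairs -- in practice these come from your own prompt dataset.
examples = [
    ("Classify the sentiment: 'The onboarding was painless.'", "positive"),
    ("Classify the sentiment: 'Support never replied to my ticket.'", "negative"),
]

instructions = "You are a sentiment classifier. Answer with 'positive' or 'negative'.\n\n"
shots = "".join(f"Input: {p}\nOutput: {r}\n\n" for p, r in examples)
query = "Input: Classify the sentiment: 'Setup took five minutes.'\nOutput:"

model = GenerativeModel("gemini-1.5-flash")
print(model.generate_content(instructions + shots + query).text)
```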
Grounding and RAG
Grounding connects AI models to data sources to improve the accuracy of responses and reduce hallucinations. RAG, a common grounding technique, searches for relevant information and adds it to the model's prompt, ensuring output is based on facts and up-to-date information.
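The sketch below outlines the basic RAG pattern: retrieve relevant snippets and prepend them to the prompt so the model answers from that context. The search_knowledge_base function is hypothetical; in practice you would query Vertex AI Search, AlloyDB, or another vector store.

```python
# Sketch of the RAG pattern: ground the prompt with retrieved context.
from vertexai.generative_models import GenerativeModel

def search_knowledge_base(query: str, top_k: int = 3) -> list[str]:
    """Hypothetical retriever -- replace with a real vector or keyword search."""
    return ["<relevant snippet 1>", "<relevant snippet 2>", "<relevant snippet 3>"][:top_k]

question = "What is our refund policy for annual plans?"
context = "\n".join(search_knowledge_base(question))

grounded_prompt = (
    "Answer using only the context below. If the answer is not in the context, say so.\n\n"
    f"Context:\n{context}\n\nQuestion: {question}"
)

model = GenerativeModel("gemini-1.5-flash")
print(model.generate_content(grounded_prompt).text)
```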
Agents and function calling
Agents make it easy to design a conversational interface and integrate it into your mobile app, while function calling extends the capabilities of a model.
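A hedged sketch of function calling with the Vertex AI SDK follows; the create_reservation declaration is a hypothetical example of a function your application exposes, and the exact response fields may differ by SDK version.

```python
# Sketch: declare a function so the model can respond with a structured call.
from vertexai.generative_models import FunctionDeclaration, GenerativeModel, Tool

# Hypothetical function the application makes available to the model.
create_reservation = FunctionDeclaration(
    name="create_reservation",
    description="Book a restaurant reservation.",
    parameters={
        "type": "object",
        "properties": {
            "date": {"type": "string", "description": "Date in YYYY-MM-DD format"},
            "party_size": {"type": "integer"},
        },
        "required": ["date", "party_size"],
    },
)

model = GenerativeModel(
    "gemini-1.5-flash",
    tools=[Tool(function_declarations=[create_reservation])],
)
response = model.generate_content("Book a table for 4 on 2025-03-07.")

# The model may answer with a function_call part instead of plain text;
# if it returns text instead, this field is empty.
part = response.candidates[0].content.parts[0]
print(part.function_call.name, dict(part.function_call.args))
```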
Model customization and training
Specialized tasks, such as training a language model on specific terminology, might require more training than you can achieve with prompt design or grounding alone. In that scenario, you can use model tuning to improve performance, or train your own model.
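Below is a hedged sketch of starting a supervised tuning job with the Vertex AI SDK's vertexai.tuning.sft module; the base model name and the Cloud Storage dataset URI are placeholders, and the exact module surface may vary by SDK version.

```python
# Sketch: launch a supervised fine-tuning job on Vertex AI.
# Assumes a JSONL training dataset already uploaded to Cloud Storage.
import vertexai
from vertexai.tuning import sft

vertexai.init(project="your-project-id", location="us-central1")  # placeholders

tuning_job = sft.train(
    source_model="gemini-1.5-flash-002",                # base model to tune (placeholder)
    train_dataset="gs://your-bucket/train_data.jsonl",  # placeholder dataset URI
    tuned_model_display_name="terminology-tuned-model",
)
print(tuning_job.resource_name)
```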
Set up LangChain
LangChain is an open source framework for generative AI apps that lets you build context into your prompts and take action based on the model's response.
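A minimal sketch of connecting LangChain to a Vertex AI model is shown below, assuming the langchain-google-vertexai integration package; the model name, project ID, and prompt are illustrative.

```python
# Sketch: use LangChain with a Vertex AI chat model.
# Assumes `pip install langchain-google-vertexai` and configured Google Cloud credentials.
from langchain_google_vertexai import ChatVertexAI

llm = ChatVertexAI(model="gemini-1.5-flash", project="your-project-id")  # placeholders
reply = llm.invoke("List three ways to ground an LLM in enterprise data.")
print(reply.content)
```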
Last updated (UTC): 2024-12-22.
[[["易于理解","easyToUnderstand","thumb-up"],["解决了我的问题","solvedMyProblem","thumb-up"],["其他","otherUp","thumb-up"]],[["很难理解","hardToUnderstand","thumb-down"],["信息或示例代码不正确","incorrectInformationOrSampleCode","thumb-down"],["没有我需要的信息/示例","missingTheInformationSamplesINeed","thumb-down"],["翻译问题","translationIssue","thumb-down"],["其他","otherDown","thumb-down"]],["最后更新时间 (UTC):2024-12-22。"],[[["\u003cp\u003eGoogle Cloud provides comprehensive tools and products for every stage of building generative AI applications, from model exploration to deployment.\u003c/p\u003e\n"],["\u003cp\u003eVertex AI allows users to access, test, tune, and deploy Google's large generative AI models, including Gemini, for use in AI-powered applications.\u003c/p\u003e\n"],["\u003cp\u003ePrompt design and engineering, including using Vertex AI Studio, are crucial for shaping model responses and optimizing their effectiveness.\u003c/p\u003e\n"],["\u003cp\u003eGrounding techniques, like RAG, connect AI models to data sources to improve accuracy and reduce hallucinations, using tools like Google Search, AlloyDB, Cloud SQL, and more.\u003c/p\u003e\n"],["\u003cp\u003eDevelopers can customize and train models, using tools like Cloud TPU, and evaluate performance with Vertex AI to enhance model effectiveness on specialized tasks.\u003c/p\u003e\n"]]],[],null,["# Generative AI\n=============\n\nDocumentation and resources for building and implementing generative AI\napplications with Google Cloud tools and products.\n[Get started for free](https://console.cloud.google.com/freetrial) \n\n#### Start your proof of concept with $300 in free credit\n\n- Get access to Gemini 2.0 Flash Thinking\n- Free monthly usage of popular products, including AI APIs and BigQuery\n- No automatic charges, no commitment \n[View free product offers](/free/docs/free-cloud-features#free-tier) \n\n#### Keep exploring with 20+ always-free products\n\n\nAccess 20+ free products for common use cases, including AI APIs, VMs, data warehouses,\nand more.\n\nLearn about building generative AI applications\n-----------------------------------------------\n\n### [Generative AI on Vertex AI](/vertex-ai/generative-ai/docs/overview)\n\nAccess Google's large generative AI models so you can test, tune, and deploy them for use in your AI-powered applications. \n\n### [Gemini Quickstart](/vertex-ai/generative-ai/docs/start/quickstarts/quickstart-multimodal)\n\nSee what it's like to send requests to the Gemini API through Google Cloud's AI-ML platform, Vertex AI. \n\n### [AI/ML orchestration on GKE](/kubernetes-engine/docs/integrations/ai-infra)\n\nLeverage the power of GKE as a customizable AI/ML platform featuring high performance, cost effective serving and training with industry-leading scale and flexible infrastructure options. \n\n### [When to use generative AI](/docs/ai-ml/generative-ai/generative-ai-or-traditional-ai)\n\nIdentify whether generative AI, traditional AI, or a combination of both might suit your business use case. \n\n### [Develop a generative AI application](/docs/ai-ml/generative-ai/develop-generative-ai-application)\n\nLearn how to address the challenges in each stage of developing a generative AI application. \n\n### [Code samples and sample applications](/docs/generative-ai/code-samples)\n\nView code samples for popular use cases and deploy examples of generative AI applications that are secure, efficient, resilient, high-performing, and cost-effective. 
\n\n### [Generative AI glossary](/docs/generative-ai/glossary)\n\nLearn about specific terms that are associated with generative AI.\n\nGen AI tools\n------------\n\nGen AI development flow\n-----------------------\n\nModel exploration and hosting\n-----------------------------\n\nGoogle Cloud provides a set of state-of-the-art foundation models through Vertex AI, including Gemini. You can also deploy a third-party model to either Vertex AI Model Garden or self-host on GKE or Compute Engine. \n\n### [Google Models on Vertex AI (Gemini, Imagen)](/vertex-ai/generative-ai/docs/learn/models)\n\nDiscover test, customize, and deploy Google models and assets from an ML model library. \n\n### [Other models in the Vertex AI Model Garden](/vertex-ai/generative-ai/docs/model-garden/explore-models)\n\nDiscover, test, customize, and deploy select OSS models and assets from an ML model library. \n\n### [Text generation models via HuggingFace](/vertex-ai/generative-ai/docs/open-models/use-hugging-face-models)\n\nLearn how to deploy HuggingFace text generation models to Vertex AI or Google Kubernetes Engine (GKE). \n\n### [GPUs on Compute Engine](/compute/docs/gpus/about-gpus)\n\nAttach GPUs to VM instances to accelerate generative AI workloads on Compute Engine.\n\nPrompt design and engineering\n-----------------------------\n\nPrompt design is the process of authoring prompt and response pairs to give language models additional context and instructions. After you author prompts, you feed them to the model as a prompt dataset for pretraining. When a model serves predictions, it responds with your instructions built in. \n\n### [Vertex AI Studio](/vertex-ai/generative-ai/docs/start/quickstarts/quickstart)\n\nDesign, test, and customize your prompts sent to Google's Gemini and PaLM 2 large language models (LLM). \n\n### [Overview of Prompting Strategies](/vertex-ai/generative-ai/docs/learn/prompts/prompt-design-strategies)\n\nLearn the prompt-engineering workflow and common strategies that you can use to affect model responses. \n\n### [Prompt Gallery](/vertex-ai/generative-ai/docs/prompt-gallery)\n\nView example prompts and responses for specific use cases.\n\nGrounding and RAG\n-----------------\n\n*Grounding* connects AI models to data sources to improve the accuracy of responses and reduce hallucinations. *RAG*, a common grounding technique, searches for relevant information and adds it to the model's prompt, ensuring output is based on facts and up-to-date information. \n\n### [Vertex AI grounding](/vertex-ai/generative-ai/docs/grounding/overview)\n\nYou can ground Vertex AI models with Google Search or with your own data stored in Vertex AI Search. \n\n### [Ground with Google Search](/vertex-ai/generative-ai/docs/multimodal/ground-gemini#web-ground-gemini)\n\nUse Grounding with Google Search to connect the model to the up-to-date knowledge available on the internet. \n\n### [Vector embeddings in AlloyDB](/alloydb/docs/ai/work-with-embeddings)\n\nUse AlloyDB to generate and store vector embeddings, then index and query the embeddings using the pgvector extension. \n\n### [Cloud SQL and pgvector](https://github.com/pgvector/pgvector?tab=readme-ov-file#pgvector)\n\nStore vector embeddings in Postgres SQL, then index and query the embeddings using the pgvector extension. 
\n\n### [Integrating BigQuery data into your LangChain application](https://cloud.google.com/blog/products/ai-machine-learning/open-source-framework-for-connecting-llms-to-your-data)\n\nUse LangChain to extract data from BigQuery and enrich and ground your model's responses. \n[description](/firestore/docs/vector-search) \n\n### [Vector embeddings in Firestore](/firestore/docs/vector-search)\n\nCreate vector embeddings from your Firestore data, then index and query the embeddings. \n\n### [Vector embeddings in Memorystore (Redis)](/memorystore/docs/redis/about-vector-search)\n\nUse LangChain to extract data from Memorystore and enrich and ground your model's responses.\n\nAgents and function calling\n---------------------------\n\nAgents make it easy to design and integrate a conversational user interface into your mobile app, while function calling extends the capabilities of a model. \n\n### [AI Applications](/generative-ai-app-builder/docs/introduction)\n\nLeverage Google's foundation models, search expertise, and conversational AI technologies for enterprise-grade generative AI applications. \n\n### [Vertex AI Function calling](/vertex-ai/generative-ai/docs/multimodal/function-calling)\n\nAdd function calling to your model to enable actions like booking a reservation based on extracted calendar information.\n\nModel customization and training\n--------------------------------\n\nSpecialized tasks, such as training a language model on specific terminology, might require more training than you can do with prompt design or grounding alone. In that scenario, you can use model tuning to improve performance, or train your own model. \n\n### [Evaluate models in Vertex AI](/vertex-ai/generative-ai/docs/models/evaluation-overview)\n\nEvaluate the performance of foundation models and your tuned generative AI models on Vertex AI. \n\n### [Tune Vertex AI models](/vertex-ai/generative-ai/docs/models/tune-models)\n\nGeneral purpose foundation models can benefit from tuning to improve their performance on specific tasks. \n\n### [Cloud TPU](/tpu/docs)\n\nTPUs are Google's custom-developed ASICs used to accelerate machine learning workloads, such as training an LLM.\n\nRelated guides and sites\n------------------------\n\n[description](/architecture/gen-ai-rag-vertex-ai-vector-search) \nIntermediate\n\n### [Infrastructure for a RAG-capable generative AI application using Vertex AI and Vector Search](/architecture/gen-ai-rag-vertex-ai-vector-search)\n\nReference architecture for a RAG-capable generative AI application using Vertex AI and Vector Search. \n[description](/architecture/rag-capable-gen-ai-app-using-vertex-ai) \nIntermediate\n\n### [Infrastructure for a RAG-capable generative AI application using Vertex AI and AlloyDB for PostgreSQL](/architecture/rag-capable-gen-ai-app-using-vertex-ai)\n\nReference architecture for a RAG-capable generative AI application using Vertex AI and AlloyDB for PostgreSQL. 
\n[description](/architecture/rag-capable-gen-ai-app-using-gke) \nIntermediate\n\n### [Infrastructure for a RAG-capable generative AI application using GKE and Cloud SQL](/architecture/rag-capable-gen-ai-app-using-gke)\n\nReference architecture for a RAG-capable generative AI application using GKE, Cloud SQL, and open source tools like Ray, Hugging Face, and LangChain.\n\nStart building\n--------------\n\n### Set up your development environment for Google Cloud\n\n- [C# and .NET](/dotnet/docs/setup)\n- [C++](/cpp/docs/setup)\n- [Go](/go/docs/setup)\n- [Java](/java/docs/setup)\n- [JavaScript and Node.js](/nodejs/docs/setup)\n- [Python](/python/docs/setup)\n- [Ruby](/ruby/docs/setup)\n\n### Set up LangChain\n\nLangChain is an open source framework for generative AI apps that allows you to build context into your prompts, and take action based on the model's response.\n\n- [Python (LangChain)](https://python.langchain.com/docs/integrations/llms/google_vertex_ai_palm)\n- [JavaScript (LangChain.js)](https://js.langchain.com/docs/integrations/platforms/google)\n- [Java (LangChain4j)](https://docs.langchain4j.dev/integrations/language-models/google-palm/)\n- [Go (LangChainGo)](https://tmc.github.io/langchaingo/docs/)\n\n### View code samples and deploy sample applications\n\nView [code samples for popular use cases and deploy examples of generative AI applications](/docs/generative-ai/code-samples) that are secure, efficient, resilient, high-performing, and cost-effective."]]