Run LLM inference on Cloud Run GPUs with Hugging Face TGI (services)
The following example shows how to run a backend service on Cloud Run GPUs using Hugging Face Text Generation Inference (TGI), a toolkit for deploying and serving Large Language Models (LLMs), with Llama 3 as the model.
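Once such a service is deployed, clients can send prompts to TGI's /generate endpoint. The following is a minimal sketch in Python; the service URL is a hypothetical placeholder, and a real Cloud Run service typically requires an authenticated request unless it allows unauthenticated invocations:

```python
# Minimal sketch of querying a deployed TGI service.
# SERVICE_URL is a hypothetical placeholder, not a value from this guide.
import requests

SERVICE_URL = "https://tgi-service-example-uc.a.run.app"

response = requests.post(
    f"{SERVICE_URL}/generate",
    json={
        "inputs": "What is Cloud Run?",
        # TGI generation parameters; values here are illustrative.
        "parameters": {"max_new_tokens": 128, "temperature": 0.7},
    },
    timeout=300,
)
response.raise_for_status()

# Non-streaming /generate responses contain a "generated_text" field.
print(response.json()["generated_text"])
```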
[[["Easy to understand","easyToUnderstand","thumb-up"],["Solved my problem","solvedMyProblem","thumb-up"],["Other","otherUp","thumb-up"]],[["Hard to understand","hardToUnderstand","thumb-down"],["Incorrect information or sample code","incorrectInformationOrSampleCode","thumb-down"],["Missing the information/samples I need","missingTheInformationSamplesINeed","thumb-down"],["Other","otherDown","thumb-down"]],["Last updated 2024-12-19 UTC."],[],[]]