Download profile data
This document describes how you can download your profile data to your
local system, and how you can programmatically retrieve profile data by
using a Go application.
Download profiles by using the Google Cloud console
To download the profile displayed in the flame graph, click Download.
Profiler uses the following naming convention for the
downloaded file:
Download profiles programmatically
To retrieve profile data, use the ListProfiles API method. The following
sample Go program demonstrates the use of this API.
The sample program creates a folder in the directory from which it is run, and
generates a set of numbered pprof files. Each file follows a naming
convention similar to profile000042.pb.gz. The folder also contains a metadata
file, metadata.csv, with information about the downloaded files.
// Sample export shows how the ListProfiles API can be used to download
// existing pprof profiles for a given project from GCP.
package main

import (
    "bytes"
    "context"
    "encoding/csv"
    "encoding/json"
    "flag"
    "fmt"
    "io"
    "log"
    "os"
    "time"

    cloudprofiler "cloud.google.com/go/cloudprofiler/apiv2"
    pb "cloud.google.com/go/cloudprofiler/apiv2/cloudprofilerpb"
    "google.golang.org/api/iterator"
)

var project = flag.String("project", "", "GCP project ID from which profiles should be fetched")
var pageSize = flag.Int("page_size", 100, "Number of profiles fetched per page. Maximum 1000.")
var pageToken = flag.String("page_token", "", "PageToken from a previous ListProfiles call. If empty, the listing will start from the beginning. Invalid page tokens result in error.")
var maxProfiles = flag.Int("max_profiles", 1000, "Maximum number of profiles to fetch across all pages. If this is <= 0, will fetch all available profiles")

const ProfilesDownloadedSuccessfully = "Read max allowed profiles"

// downloadProfiles reads profiles for a given project and stores them in locally created files.
// The profile metadata is stored in a 'metadata.csv' file, while an individual pprof file
// is created per profile.
func downloadProfiles(ctx context.Context, w io.Writer, project, pageToken string, pageSize, maxProfiles int) error {
    client, err := cloudprofiler.NewExportClient(ctx)
    if err != nil {
        return err
    }
    defer client.Close()

    log.Printf("Attempting to fetch %v profiles with a pageSize of %v for %v\n", maxProfiles, pageSize, project)

    // Initial request for the ListProfiles API.
    request := &pb.ListProfilesRequest{
        Parent:    fmt.Sprintf("projects/%s", project),
        PageSize:  int32(pageSize),
        PageToken: pageToken,
    }

    // Create a folder for storing profiles and metadata.
    profilesDirName := fmt.Sprintf("profiles_%v", time.Now().Unix())
    if err := os.Mkdir(profilesDirName, 0750); err != nil {
        return err
    }

    // Create a file for storing profile metadata.
    metadata, err := os.Create(fmt.Sprintf("%s/metadata.csv", profilesDirName))
    if err != nil {
        return err
    }
    defer metadata.Close()

    writer := csv.NewWriter(metadata)
    defer writer.Flush()
    writer.Write([]string{"File", "Name", "ProfileType", "Target", "Duration", "Labels"})

    profileCount := 0
    // Keep calling the ListProfiles API until all profile pages are fetched or the maximum is reached.
    profilesIterator := client.ListProfiles(ctx, request)
    for {
        // Read an individual profile - the client will automatically make API calls to fetch the next pages.
        profile, err := profilesIterator.Next()
        if err == iterator.Done {
            log.Println("Read all available profiles")
            break
        }
        if err != nil {
            return fmt.Errorf("error reading profile from response: %w", err)
        }
        profileCount++

        // Write the raw, gzip-compressed pprof bytes to a numbered file.
        filename := fmt.Sprintf("%s/profile%06d.pb.gz", profilesDirName, profileCount)
        if err := os.WriteFile(filename, profile.ProfileBytes, 0640); err != nil {
            return fmt.Errorf("unable to write file %s: %w", filename, err)
        }
        fmt.Fprintf(w, "deployment target: %v\n", profile.Deployment.Target)

        // Record the profile metadata as a CSV row.
        labelBytes, err := json.Marshal(profile.Labels)
        if err != nil {
            return err
        }
        err = writer.Write([]string{filename, profile.Name, profile.ProfileType.String(), profile.Deployment.Target, profile.Duration.String(), string(labelBytes)})
        if err != nil {
            return err
        }

        if maxProfiles > 0 && profileCount >= maxProfiles {
            fmt.Fprintf(w, "result: %v", ProfilesDownloadedSuccessfully)
            break
        }
        if profilesIterator.PageInfo().Remaining() == 0 {
            // This signifies that the client will make a new API call internally.
            log.Printf("next page token: %v\n", profilesIterator.PageInfo().Token)
        }
    }
    return nil
}

func main() {
    flag.Parse()

    // Validate the project ID.
    if *project == "" {
        log.Fatalf("No project ID provided, please provide the GCP project ID via the '-project' flag")
    }

    var writer bytes.Buffer
    if err := downloadProfiles(context.Background(), &writer, *project, *pageToken, *pageSize, *maxProfiles); err != nil {
        log.Fatal(err)
    }
    log.Println("Finished reading all profiles")
}
The sample program accepts the following command line arguments:
project: The project from which the profiles are retrieved. Required.
page_size: The maximum number of profiles retrieved per API call. The
maximum value of page_size is 1000. When not specified, this field is set
to 100.
page_token: A string token generated by a previous run of the
program to resume downloads. Optional.
max_profiles: The maximum number of profiles to retrieve. If a non-positive
integer is provided, then the program attempts to retrieve all profiles.
Optional.
Change to the directory that contains the sample program:
cd golang-samples/profiler/export
Run the program after you replace YOUR_GCP_PROJECT with the ID of your
Google Cloud project:
go run main.go -project YOUR_GCP_PROJECT -page_size 1000 -max_profiles 10000
The program might take considerable time to complete. After it retrieves each
page, it outputs the token for the next page. If the program is interrupted,
you can use that token to resume the download.
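For example, to resume an interrupted download, pass the most recently printed
token by using the -page_token flag. NEXT_PAGE_TOKEN is a placeholder for the
token value that the program printed:
go run main.go -project YOUR_GCP_PROJECT -page_size 1000 -max_profiles 10000 -page_token NEXT_PAGE_TOKEN
Each downloaded file is a gzip-compressed, serialized pprof profile. The
following minimal sketch, which is not part of the sample program and assumes
the open source github.com/google/pprof/profile package, parses one downloaded
file and prints its duration and sample types. The file path is a placeholder
for one of the files that the sample created:
// inspect.go is an illustrative sketch, not part of the sample above. It parses one
// downloaded profile by using the open source pprof library and prints basic information.
package main

import (
    "fmt"
    "log"
    "os"
    "time"

    "github.com/google/pprof/profile"
)

func main() {
    // Placeholder path; substitute one of the files that the sample program created.
    f, err := os.Open("profiles_1700000000/profile000001.pb.gz")
    if err != nil {
        log.Fatal(err)
    }
    defer f.Close()

    // profile.Parse transparently handles gzip-compressed pprof data.
    p, err := profile.Parse(f)
    if err != nil {
        log.Fatal(err)
    }

    fmt.Printf("profile duration: %v\n", time.Duration(p.DurationNanos))
    for _, st := range p.SampleType {
        fmt.Printf("sample type: %s (%s)\n", st.Type, st.Unit)
    }
}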
[[["Easy to understand","easyToUnderstand","thumb-up"],["Solved my problem","solvedMyProblem","thumb-up"],["Other","otherUp","thumb-up"]],[["Hard to understand","hardToUnderstand","thumb-down"],["Incorrect information or sample code","incorrectInformationOrSampleCode","thumb-down"],["Missing the information/samples I need","missingTheInformationSamplesINeed","thumb-down"],["Other","otherDown","thumb-down"]],["Last updated 2024-11-22 UTC."],[],[]]