 JSON representation
 MetricValueSet
 MetricValue
 Distribution
 LinearBuckets
 ExponentialBuckets
 ExplicitBuckets
 LogEntry
 LogSeverity
 Importance
Represents information regarding an operation.
JSON representation  

{
  "operationId": string,
  "operationName": string,
  "consumerId": string,
  "startTime": string,
  "endTime": string,
  "labels": {
    string: string,
    ...
  },
  "metricValueSets": [
    {
      object(MetricValueSet)
    }
  ],
  "logEntries": [
    {
      object(LogEntry)
    }
  ],
  "importance": enum(Importance)
}
Fields  

operationId 
Identity of the operation. This must be unique within the scope of the service that generated the operation. If the service calls services.check() and services.report() on the same operation, the two calls should carry the same id. UUID version 4 is recommended, though not required. In scenarios where an operation is computed from existing information and an idempotent id is desirable for deduplication purposes, UUID version 5 is recommended. See RFC 4122 for details. 
operationName 
Fully qualified name of the operation. Reserved for future use. 
consumerId 
Identity of the consumer who is using the service. This field should be filled in for operations initiated by a consumer, but not for service-initiated operations that are not related to a specific consumer. This can be in one of the following formats: project:<project_id>, project_number:<project_number>, api_key:<api_key>. 
startTime 
Required. Start time of the operation. A timestamp in RFC3339 UTC "Zulu" format, accurate to nanoseconds. Example: "2014-10-02T15:01:23.045123456Z". 
endTime 
End time of the operation. Required when the operation is used in services.report(). A timestamp in RFC3339 UTC "Zulu" format, accurate to nanoseconds. Example: "2014-10-02T15:01:23.045123456Z". 
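The startTime and endTime fields expect RFC3339 UTC "Zulu" strings. A minimal Python helper (illustrative only, not part of any client library) can produce them; note that Python datetimes carry microsecond precision, so the fractional part has 6 digits rather than the full 9 (nanoseconds) the API accepts:

```python
from datetime import datetime, timezone

def rfc3339_zulu(dt: datetime) -> str:
    """Format an aware datetime as an RFC3339 UTC "Zulu" timestamp.

    Microsecond precision only; the API itself accepts up to nanoseconds.
    """
    return dt.astimezone(timezone.utc).strftime("%Y-%m-%dT%H:%M:%S.%fZ")

start_time = rfc3339_zulu(datetime(2014, 10, 2, 15, 1, 23, 45123, tzinfo=timezone.utc))
# "2014-10-02T15:01:23.045123Z"
```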
labels 
Labels describing the operation. An object containing a list of "key": value pairs. Example: { "name": "wrench", "mass": "1.3kg", "count": "3" }. 
metricValueSets[] 
Represents information about this operation. Each MetricValueSet corresponds to a metric defined in the service configuration. The data type used in the MetricValueSet must agree with the data type specified in the metric definition. Within a single operation, it is not allowed to have more than one MetricValue instance with the same metric name and identical label value combination. If a request has such duplicated MetricValue instances, the entire request is rejected with an invalid argument error. 
logEntries[] 
Represents information to be logged. 
importance 
DO NOT USE. This is an experimental field. 
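The duplicate-MetricValue rule described for metricValueSets[] can be checked client-side before sending a request. A minimal Python sketch (the helper name and the operation dict shape are illustrative, not part of any client library):

```python
def find_duplicate_metric_values(operation: dict) -> list:
    """Return (metricName, sorted label items) pairs that occur more than once.

    The API rejects the entire request with an invalid argument error if any
    metric name + label value combination is duplicated within one operation.
    """
    seen, dups = set(), []
    for value_set in operation.get("metricValueSets", []):
        for mv in value_set.get("metricValues", []):
            # Labels are order-insensitive, so normalize them by sorting.
            key = (value_set["metricName"],
                   tuple(sorted(mv.get("labels", {}).items())))
            if key in seen:
                dups.append(key)
            seen.add(key)
    return dups
```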
MetricValueSet
Represents a set of metric values in the same metric. Each metric value in the set should have a unique combination of start time, end time, and label values.
JSON representation  

{
  "metricName": string,
  "metricValues": [
    {
      object(MetricValue)
    }
  ]
}
Fields  

metricName 
The metric name defined in the service configuration. 
metricValues[] 
The values in this metric. 
MetricValue
Represents a single metric value.
JSON representation  

{
  "labels": {
    string: string,
    ...
  },
  "startTime": string,
  "endTime": string,

  // Union field value can be only one of the following:
  "boolValue": boolean,
  "int64Value": string,
  "doubleValue": number,
  "stringValue": string,
  "distributionValue": {
    object(Distribution)
  }
  // End of list of possible types for union field value.
}
Fields  

labels 
The labels describing the metric value. An object containing a list of "key": value pairs. 

startTime 
The start of the time period over which this metric value's measurement applies. The time period has different semantics for different metric types (cumulative, delta, and gauge). See the metric definition documentation in the service configuration for details. A timestamp in RFC3339 UTC "Zulu" format, accurate to nanoseconds. Example: "2014-10-02T15:01:23.045123456Z". 

endTime 
The end of the time period over which this metric value's measurement applies. A timestamp in RFC3339 UTC "Zulu" format, accurate to nanoseconds. Example: "2014-10-02T15:01:23.045123456Z". 

Union field value . The value. The type of value used in the request must agree with the metric definition in the service configuration, otherwise the MetricValue is rejected. value can be only one of the following: 

boolValue 
A boolean value. 

int64Value 
A signed 64-bit integer value. 

doubleValue 
A double precision floating point value. 

stringValue 
A text string value. 

distributionValue 
A distribution value. 
Distribution
Distribution represents a frequency distribution of double-valued sample points. It contains the size of the population of sample points plus additional optional information:
- the arithmetic mean of the samples
- the minimum and maximum of the samples
- the sum of squared deviations of the samples, used to compute variance
- a histogram of the values of the sample points
JSON representation  

{
  "count": string,
  "mean": number,
  "minimum": number,
  "maximum": number,
  "sumOfSquaredDeviation": number,
  "bucketCounts": [
    string
  ],

  // Union field bucket_option can be only one of the following:
  "linearBuckets": {
    object(LinearBuckets)
  },
  "exponentialBuckets": {
    object(ExponentialBuckets)
  },
  "explicitBuckets": {
    object(ExplicitBuckets)
  }
  // End of list of possible types for union field bucket_option.
}
Fields  

count 
The total number of samples in the distribution. Must be >= 0. 

mean 
The arithmetic mean of the samples in the distribution. If count is zero then this field must be zero. 

minimum 
The minimum of the population of values. Ignored if count is zero. 

maximum 
The maximum of the population of values. Ignored if count is zero. 

sumOfSquaredDeviation 
The sum of squared deviations from the mean: Sum[i=1..count]((x_i - mean)^2), where each x_i is a sample value. If count is zero then this field must be zero. 
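The summary fields above (count, mean, minimum, maximum, sumOfSquaredDeviation) can be computed directly from a batch of samples. A minimal Python sketch (the function name is hypothetical, for illustration):

```python
def summarize(samples):
    """Compute the Distribution summary fields for a list of samples.

    When count is zero, mean and sumOfSquaredDeviation must be zero,
    and minimum/maximum are ignored, so they are omitted here.
    """
    count = len(samples)
    if count == 0:
        return {"count": "0", "mean": 0.0, "sumOfSquaredDeviation": 0.0}
    mean = sum(samples) / count
    ssd = sum((x - mean) ** 2 for x in samples)  # Sum[i=1..count]((x_i - mean)^2)
    return {
        "count": str(count),  # int64 fields are encoded as JSON strings
        "mean": mean,
        "minimum": min(samples),
        "maximum": max(samples),
        "sumOfSquaredDeviation": ssd,
    }
```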

bucketCounts[] 
The number of samples in each histogram bucket. The buckets are defined by the bucket_option union field below. Any suffix of trailing zeros may be omitted. 

Union field bucket_option. Buckets are numbered in the range [0, N], with a total of N+1 buckets. There must be at least two buckets (a single-bucket histogram gives no information that isn't already provided by count). The first bucket is the underflow bucket, which has a lower bound of -inf. The last bucket is the overflow bucket, which has an upper bound of +inf. All other buckets (if any) are called "finite" buckets because they have finite lower and upper bounds. As described below, there are three ways to define the finite buckets: (1) buckets with constant width; (2) buckets with exponentially growing widths; (3) buckets with arbitrary user-provided widths. In all cases, the buckets cover the entire real number line (-inf, +inf). Bucket upper bounds are exclusive and lower bounds are inclusive. The upper bound of the underflow bucket is equal to the lower bound of the smallest finite bucket; the lower bound of the overflow bucket is equal to the upper bound of the largest finite bucket. bucket_option can be only one of the following: 

linearBuckets 
Buckets with constant width. 

exponentialBuckets 
Buckets with exponentially growing width. 

explicitBuckets 
Buckets with arbitrary userprovided width. 
LinearBuckets
Describing buckets with constant width.
JSON representation  

{
  "numFiniteBuckets": number,
  "width": number,
  "offset": number
}
Fields  

numFiniteBuckets 
The number of finite buckets. With the underflow and overflow buckets, the total number of buckets is numFiniteBuckets + 2. 
width 
The i'th linear bucket covers the interval [offset + (i-1) * width, offset + i * width), where i ranges from 1 to numFiniteBuckets, inclusive. Must be strictly positive. 
offset 
The i'th linear bucket covers the interval [offset + (i-1) * width, offset + i * width), where i ranges from 1 to numFiniteBuckets, inclusive. 
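The interval rule above determines which bucket a sample falls into. A small Python sketch (hypothetical helper, for illustration) maps a sample to its bucket number, with bucket 0 as the underflow bucket and numFiniteBuckets + 1 as the overflow bucket:

```python
import math

def linear_bucket_index(x, num_finite_buckets, width, offset):
    """Bucket number of sample x in a LinearBuckets histogram.

    Finite bucket i (1 <= i <= num_finite_buckets) covers
    [offset + (i-1) * width, offset + i * width).
    """
    if x < offset:
        return 0  # underflow bucket
    i = int(math.floor((x - offset) / width)) + 1
    return min(i, num_finite_buckets + 1)  # clamp to overflow bucket
```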
ExponentialBuckets
Describing buckets with exponentially growing width.
JSON representation  

{
  "numFiniteBuckets": number,
  "growthFactor": number,
  "scale": number
}
Fields  

numFiniteBuckets 
The number of finite buckets. With the underflow and overflow buckets, the total number of buckets is numFiniteBuckets + 2. 
growthFactor 
The i'th exponential bucket covers the interval [scale * growthFactor^(i-1), scale * growthFactor^i), where i ranges from 1 to numFiniteBuckets, inclusive. Must be larger than 1.0. 
scale 
The i'th exponential bucket covers the interval [scale * growthFactor^(i-1), scale * growthFactor^i), where i ranges from 1 to numFiniteBuckets, inclusive. Must be > 0. 
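The exponential interval rule can likewise be sketched as a bucket-index helper (hypothetical, for illustration), using a logarithm to invert scale * growthFactor^(i-1):

```python
import math

def exponential_bucket_index(x, num_finite_buckets, growth_factor, scale):
    """Bucket number of sample x in an ExponentialBuckets histogram.

    Finite bucket i (1 <= i <= num_finite_buckets) covers
    [scale * growth_factor^(i-1), scale * growth_factor^i).
    """
    if x < scale:
        return 0  # underflow: finite buckets start at scale * growth_factor^0
    i = int(math.floor(math.log(x / scale, growth_factor))) + 1
    return min(i, num_finite_buckets + 1)  # clamp to overflow bucket
```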
ExplicitBuckets
Describing buckets with arbitrary userprovided width.
JSON representation  

{
  "bounds": [
    number
  ]
}
Fields  

bounds[] 
'bound' is a list of strictly increasing boundaries between buckets. Note that a list of length N-1 defines N buckets because of fenceposting considerations. See comments on the bucket_option union field for details. The i'th finite bucket covers the interval [bound[i-1], bound[i]), where i ranges from 1 to bound_size() - 1. Note that there are no finite buckets at all if 'bound' only contains a single element; in that special case the single bound defines the boundary between the underflow and overflow buckets.

bucket number                 lower bound    upper bound
i == 0 (underflow)            -inf           bound[i]
0 < i < bound_size()          bound[i-1]     bound[i]
i == bound_size() (overflow)  bound[i-1]     +inf
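Because the bounds are strictly increasing and lower bounds are inclusive, the bucket number of a sample is simply the count of bounds less than or equal to it, which a binary search gives directly. A Python sketch (hypothetical helper, for illustration):

```python
from bisect import bisect_right

def explicit_bucket_index(x, bounds):
    """Bucket number of sample x given ExplicitBuckets bounds.

    bisect_right returns the number of bounds <= x, which is exactly the
    bucket number: 0 (underflow) when x < bounds[0], len(bounds)
    (overflow) when x >= bounds[-1].
    """
    return bisect_right(bounds, x)
```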
LogEntry
An individual log entry.
JSON representation  

{
  "name": string,
  "timestamp": string,
  "severity": enum(LogSeverity),
  "insertId": string,
  "labels": {
    string: string,
    ...
  },

  // Union field payload can be only one of the following:
  "protoPayload": {
    "@type": string,
    field1: ...,
    ...
  },
  "textPayload": string,
  "structPayload": {
    object
  }
  // End of list of possible types for union field payload.
}
Fields  

name 
Required. The log to which this log entry belongs. Examples: "syslog", "book_log". 

timestamp 
The time the event described by the log entry occurred. If omitted, defaults to the operation start time. A timestamp in RFC3339 UTC "Zulu" format, accurate to nanoseconds. Example: "2014-10-02T15:01:23.045123456Z". 

severity 
The severity of the log entry. The default value is LogSeverity.DEFAULT. 

insertId 
A unique ID for the log entry used for deduplication. If omitted, the implementation will generate one based on operationId. 

labels 
A set of user-defined (key, value) data that provides additional information about the log entry. An object containing a list of "key": value pairs. 

Union field payload . The log entry payload, which can be one of multiple types. payload can be only one of the following: 

protoPayload 
The log entry payload, represented as a protocol buffer that is expressed as a JSON object. An object containing fields of an arbitrary type. An additional field "@type" contains a URI identifying the type. Example: { "id": 1234, "@type": "types.example.com/standard/id" }. 

textPayload 
The log entry payload, represented as a Unicode string (UTF-8). 

structPayload 
The log entry payload, represented as a structure that is expressed as a JSON object. 
LogSeverity
The severity of the event described in a log entry, expressed as one of the standard severity levels listed below. For your reference, the levels are assigned the listed numeric values. The effect of using numeric values other than those listed is undefined.
You can filter for log entries by severity. For example, the following filter expression will match log entries with severities INFO, NOTICE, and WARNING:
severity > DEBUG AND severity <= WARNING
If you are writing log entries, you should map other severity encodings to one of these standard levels. For example, you might map all of Java's FINE, FINER, and FINEST levels to LogSeverity.DEBUG. You can preserve the original severity level in the log entry payload if you wish.
Enums  

DEFAULT 
(0) The log entry has no assigned severity level. 
DEBUG 
(100) Debug or trace information. 
INFO 
(200) Routine information, such as ongoing status or performance. 
NOTICE 
(300) Normal but significant events, such as start up, shut down, or a configuration change. 
WARNING 
(400) Warning events might cause problems. 
ERROR 
(500) Error events are likely to cause problems. 
CRITICAL 
(600) Critical events cause more severe problems or outages. 
ALERT 
(700) A person must take an action immediately. 
EMERGENCY 
(800) One or more systems are unusable. 
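The numeric values above make both the filter comparison and the severity mapping straightforward to implement. A Python sketch (the table of Java level mappings is an assumption for illustration; choose whatever mapping fits your logs):

```python
# Numeric values assigned to the LogSeverity enum.
SEVERITY = {"DEFAULT": 0, "DEBUG": 100, "INFO": 200, "NOTICE": 300,
            "WARNING": 400, "ERROR": 500, "CRITICAL": 600,
            "ALERT": 700, "EMERGENCY": 800}

# One possible mapping of Java's finer-grained levels (illustrative).
JAVA_TO_LOG_SEVERITY = {"FINEST": "DEBUG", "FINER": "DEBUG", "FINE": "DEBUG",
                        "CONFIG": "INFO", "INFO": "INFO",
                        "WARNING": "WARNING", "SEVERE": "ERROR"}

def matches_filter(severity: str) -> bool:
    """Evaluate the example filter: severity > DEBUG AND severity <= WARNING."""
    s = SEVERITY[severity]
    return SEVERITY["DEBUG"] < s <= SEVERITY["WARNING"]
```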
Importance
Defines the importance of the data contained in the operation.
Enums  

LOW 
The API implementation may cache and aggregate the data. The data may be lost when rare and unexpected system failures occur. 
HIGH 
The API implementation doesn't cache and aggregate the data. If the method returns successfully, it's guaranteed that the data has been persisted in durable storage. 