Operation

Represents information regarding an operation.

JSON representation
{
  "operationId": string,
  "operationName": string,
  "consumerId": string,
  "startTime": string,
  "endTime": string,
  "labels": {
    string: string,
    ...
  },
  "metricValueSets": [
    {
      object(MetricValueSet)
    }
  ],
  "logEntries": [
    {
      object(LogEntry)
    }
  ],
  "importance": enum(Importance),
}
Fields
operationId

string

Identity of the operation. This must be unique within the scope of the service that generated the operation. If the service calls services.check() and services.report() on the same operation, the two calls should carry the same id.

UUID version 4 is recommended, though not required. In scenarios where an operation is computed from existing information and an idempotent id is desirable for deduplication purposes, UUID version 5 is recommended. See RFC 4122 for details.
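As a sketch of this recommendation in Python, uuid4 produces a fresh random id while uuid5 derives a stable, idempotent id from existing information (the namespace and seed string below are illustrative, not part of the API):

```python
import uuid

# Fresh, random id for a new operation (UUID version 4).
operation_id = str(uuid.uuid4())

# Idempotent id derived from existing information (UUID version 5),
# useful when the same logical operation may be reported more than once.
# The namespace and seed string are illustrative assumptions.
dedup_key = "request-12345"
idempotent_id = str(uuid.uuid5(uuid.NAMESPACE_URL, dedup_key))
```

Because uuid5 is deterministic, retries of the same logical operation produce the same id and can be deduplicated server-side.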

operationName

string

Fully qualified name of the operation. Reserved for future use.

consumerId

string

Identity of the consumer who is using the service. This field should be filled in for operations initiated by a consumer, but not for service-initiated operations that are not related to a specific consumer.

This can be in one of the following formats: project:<project_id>, projectNumber:<project_number>, apiKey:<api_key>.

startTime

string (Timestamp format)

Required. Start time of the operation.

A timestamp in RFC3339 UTC "Zulu" format, accurate to nanoseconds. Example: "2014-10-02T15:01:23.045123456Z".

endTime

string (Timestamp format)

End time of the operation. Required when the operation is used in ServiceController.Report, but optional when the operation is used in ServiceController.Check.

A timestamp in RFC3339 UTC "Zulu" format, accurate to nanoseconds. Example: "2014-10-02T15:01:23.045123456Z".
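A sketch of producing such a timestamp from Python. The format allows up to nanosecond precision; datetime carries microseconds, which RFC3339 accepts:

```python
from datetime import datetime, timezone

def rfc3339_utc(dt: datetime) -> str:
    """Format an aware datetime as an RFC3339 UTC "Zulu" timestamp.

    Microsecond precision only; the API's format allows up to nanoseconds
    but does not require them.
    """
    return dt.astimezone(timezone.utc).strftime("%Y-%m-%dT%H:%M:%S.%fZ")

start_time = rfc3339_utc(datetime(2014, 10, 2, 15, 1, 23, 45123, tzinfo=timezone.utc))
```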

labels

map (key: string, value: string)

Labels describing the operation. Only the following labels are allowed:

  • Labels describing monitored resources as defined in the service configuration.
  • Default labels of metric values. When specified, labels defined in the metric value override these defaults.
  • The following labels defined by Google Cloud Platform:
    • cloud.googleapis.com/location describing the location where the operation happened,
    • servicecontrol.googleapis.com/userAgent describing the user agent of the API request,
    • servicecontrol.googleapis.com/service_agent describing the service used to handle the API request (e.g. ESP),
    • servicecontrol.googleapis.com/platform describing the platform where the API is served (e.g. GAE, GCE, GKE).

An object containing a list of "key": value pairs. Example: { "name": "wrench", "mass": "1.3kg", "count": "3" }.

metricValueSets[]

object(MetricValueSet)

Represents information about this operation. Each MetricValueSet corresponds to a metric defined in the service configuration. The data type used in the MetricValueSet must agree with the data type specified in the metric definition.

Within a single operation, there must not be more than one MetricValue instance with the same metric name and identical label value combination. If a request contains such duplicate MetricValue instances, the entire request is rejected with an invalid-argument error.
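This uniqueness rule can be checked client-side before sending a request; a sketch, with the helper name being ours (field names follow the JSON representation above):

```python
def has_duplicate_metric_values(metric_value_sets) -> bool:
    """Return True if any (metric name, label combination) appears more than once."""
    seen = set()
    for value_set in metric_value_sets:
        name = value_set["metricName"]
        for value in value_set.get("metricValues", []):
            # Labels are an unordered map, so normalize to a sorted tuple of pairs.
            key = (name, tuple(sorted(value.get("labels", {}).items())))
            if key in seen:
                return True
            seen.add(key)
    return False
```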

logEntries[]

object(LogEntry)

Represents information to be logged.

importance

enum(Importance)

DO NOT USE. This is an experimental field.
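Putting the fields together, a minimal Operation payload might look like the following sketch; the operation name, consumer, and label values are illustrative assumptions, not prescribed by the API:

```python
import json
import uuid

operation = {
    "operationId": str(uuid.uuid4()),
    "operationName": "example.googleapis.com/GetItem",  # illustrative name
    "consumerId": "project:example-project",            # illustrative consumer
    "startTime": "2014-10-02T15:01:23.045123456Z",
    "endTime": "2014-10-02T15:01:23.045123456Z",
    "labels": {
        # One of the Google Cloud Platform labels listed above.
        "cloud.googleapis.com/location": "us-central1",
    },
}
payload = json.dumps(operation)
```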

MetricValueSet

Represents a set of metric values in the same metric. Each metric value in the set should have a unique combination of start time, end time, and label values.

JSON representation
{
  "metricName": string,
  "metricValues": [
    {
      object(MetricValue)
    }
  ],
}
Fields
metricName

string

The metric name defined in the service configuration.

metricValues[]

object(MetricValue)

The values in this metric.

MetricValue

Represents a single metric value.

JSON representation
{
  "labels": {
    string: string,
    ...
  },
  "startTime": string,
  "endTime": string,

  // Union field value can be only one of the following:
  "boolValue": boolean,
  "int64Value": string,
  "doubleValue": number,
  "stringValue": string,
  "distributionValue": {
    object(Distribution)
  },
  // End of list of possible types for union field value.
}
Fields
labels

map (key: string, value: string)

The labels describing the metric value. See comments on google.api.servicecontrol.v1.Operation.labels for the overriding relationship.

An object containing a list of "key": value pairs. Example: { "name": "wrench", "mass": "1.3kg", "count": "3" }.

startTime

string (Timestamp format)

The start of the time period over which this metric value's measurement applies. The time period has different semantics for different metric types (cumulative, delta, and gauge). See the metric definition documentation in the service configuration for details.

A timestamp in RFC3339 UTC "Zulu" format, accurate to nanoseconds. Example: "2014-10-02T15:01:23.045123456Z".

endTime

string (Timestamp format)

The end of the time period over which this metric value's measurement applies.

A timestamp in RFC3339 UTC "Zulu" format, accurate to nanoseconds. Example: "2014-10-02T15:01:23.045123456Z".

Union field value. The value. The type of value used in the request must agree with the metric definition in the service configuration, otherwise the MetricValue is rejected. value can be only one of the following:
boolValue

boolean

A boolean value.

int64Value

string (int64 format)

A signed 64-bit integer value.

doubleValue

number

A double precision floating point value.

stringValue

string

A text string value.

distributionValue

object(Distribution)

A distribution value.
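Because value is a union field, exactly one of the five value fields may be set on each MetricValue. A small client-side check (the helper name is ours):

```python
_VALUE_FIELDS = ("boolValue", "int64Value", "doubleValue",
                 "stringValue", "distributionValue")

def validate_metric_value(metric_value: dict) -> None:
    """Raise ValueError unless exactly one union value field is present."""
    set_fields = [f for f in _VALUE_FIELDS if f in metric_value]
    if len(set_fields) != 1:
        raise ValueError(f"expected exactly one value field, got {set_fields}")
```

Note that int64Value is carried as a string in JSON, since 64-bit integers exceed the safe integer range of JSON numbers.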

Distribution

Distribution represents a frequency distribution of double-valued sample points. It contains the size of the population of sample points plus additional optional information:

  • the arithmetic mean of the samples
  • the minimum and maximum of the samples
  • the sum-squared-deviation of the samples, used to compute variance
  • a histogram of the values of the sample points
JSON representation
{
  "count": string,
  "mean": number,
  "minimum": number,
  "maximum": number,
  "sumOfSquaredDeviation": number,
  "bucketCounts": [
    string
  ],

  // Union field bucket_option can be only one of the following:
  "linearBuckets": {
    object(LinearBuckets)
  },
  "exponentialBuckets": {
    object(ExponentialBuckets)
  },
  "explicitBuckets": {
    object(ExplicitBuckets)
  },
  // End of list of possible types for union field bucket_option.
}
Fields
count

string (int64 format)

The total number of samples in the distribution. Must be >= 0.

mean

number

The arithmetic mean of the samples in the distribution. If count is zero then this field must be zero.

minimum

number

The minimum of the population of values. Ignored if count is zero.

maximum

number

The maximum of the population of values. Ignored if count is zero.

sumOfSquaredDeviation

number

The sum of squared deviations from the mean: Sum[i=1..count]((x_i - mean)^2), where each x_i is a sample value. If count is zero then this field must be zero; otherwise, validation of the request fails.

bucketCounts[]

string (int64 format)

The number of samples in each histogram bucket. bucketCounts are optional. If present, they must sum to the count value.

The buckets are defined below in bucket_option. bucketCounts[0] is the number of samples in the underflow bucket, bucketCounts[1] to bucketCounts[N-1] are the numbers of samples in each of the finite buckets, and bucketCounts[N] is the number of samples in the overflow bucket, for a total of N+1 buckets. See the comments on bucket_option below for more details.

Any suffix of trailing zeros may be omitted.

Union field bucket_option. Defines the buckets in the histogram. bucket_option and bucket_counts must be both set, or both unset.

Buckets are numbered in the range [0, N], with a total of N+1 buckets. There must be at least two buckets (a single-bucket histogram gives no information that isn't already provided by count).

The first bucket is the underflow bucket which has a lower bound of -inf. The last bucket is the overflow bucket which has an upper bound of +inf. All other buckets (if any) are called "finite" buckets because they have finite lower and upper bounds. As described below, there are three ways to define the finite buckets.

(1) Buckets with constant width. (2) Buckets with exponentially growing widths. (3) Buckets with arbitrary user-provided widths.

In all cases, the buckets cover the entire real number line (-inf, +inf). Bucket upper bounds are exclusive and lower bounds are inclusive. The upper bound of the underflow bucket is equal to the lower bound of the smallest finite bucket; the lower bound of the overflow bucket is equal to the upper bound of the largest finite bucket. bucket_option can be only one of the following:

linearBuckets

object(LinearBuckets)

Buckets with constant width.

exponentialBuckets

object(ExponentialBuckets)

Buckets with exponentially growing width.

explicitBuckets

object(ExplicitBuckets)

Buckets with arbitrary user-provided width.
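The summary fields above (count, mean, minimum, maximum, sumOfSquaredDeviation) can be computed from raw samples; a minimal sketch (the helper name is ours):

```python
def summarize(samples):
    """Compute Distribution summary fields from double-valued samples.

    count is emitted as a string, matching the int64 JSON format.
    When count is zero, mean and sumOfSquaredDeviation must be zero,
    and minimum/maximum are ignored.
    """
    count = len(samples)
    if count == 0:
        return {"count": "0", "mean": 0, "minimum": 0, "maximum": 0,
                "sumOfSquaredDeviation": 0}
    mean = sum(samples) / count
    return {
        "count": str(count),
        "mean": mean,
        "minimum": min(samples),
        "maximum": max(samples),
        "sumOfSquaredDeviation": sum((x - mean) ** 2 for x in samples),
    }
```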

LinearBuckets

Describes buckets with constant width.

JSON representation
{
  "numFiniteBuckets": number,
  "width": number,
  "offset": number,
}
Fields
numFiniteBuckets

number

The number of finite buckets. With the underflow and overflow buckets, the total number of buckets is numFiniteBuckets + 2. See comments on bucket_option for details.

width

number

The i'th linear bucket covers the interval [offset + (i-1) * width, offset + i * width) where i ranges from 1 to numFiniteBuckets, inclusive. Must be strictly positive.

offset

number

The i'th linear bucket covers the interval [offset + (i-1) * width, offset + i * width) where i ranges from 1 to numFiniteBuckets, inclusive.
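Given the interval definition above, the bucket index for a sample value can be computed directly; in this sketch, index 0 is the underflow bucket and numFiniteBuckets + 1 the overflow bucket:

```python
import math

def linear_bucket_index(x, num_finite_buckets, width, offset):
    """Bucket index for LinearBuckets: 0 = underflow, num_finite_buckets + 1 = overflow.

    Finite bucket i covers [offset + (i-1)*width, offset + i*width).
    """
    if x < offset:
        return 0
    i = int(math.floor((x - offset) / width)) + 1
    return min(i, num_finite_buckets + 1)
```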

ExponentialBuckets

Describes buckets with exponentially growing width.

JSON representation
{
  "numFiniteBuckets": number,
  "growthFactor": number,
  "scale": number,
}
Fields
numFiniteBuckets

number

The number of finite buckets. With the underflow and overflow buckets, the total number of buckets is numFiniteBuckets + 2. See comments on bucket_option for details.

growthFactor

number

The i'th exponential bucket covers the interval [scale * growthFactor^(i-1), scale * growthFactor^i) where i ranges from 1 to numFiniteBuckets inclusive. Must be larger than 1.0.

scale

number

The i'th exponential bucket covers the interval [scale * growthFactor^(i-1), scale * growthFactor^i) where i ranges from 1 to numFiniteBuckets inclusive. Must be > 0.
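The exponential bucket index follows the same pattern, using a logarithm to invert the interval definition above (index 0 is the underflow bucket, numFiniteBuckets + 1 the overflow bucket):

```python
import math

def exponential_bucket_index(x, num_finite_buckets, growth_factor, scale):
    """Bucket index for ExponentialBuckets: 0 = underflow, num_finite_buckets + 1 = overflow.

    Finite bucket i covers [scale * growth_factor^(i-1), scale * growth_factor^i).
    Values exactly on a bucket boundary may be sensitive to floating-point
    rounding in the logarithm.
    """
    if x < scale:
        return 0
    i = int(math.floor(math.log(x / scale, growth_factor))) + 1
    return min(i, num_finite_buckets + 1)
```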

ExplicitBuckets

Describes buckets with arbitrary user-provided width.

JSON representation
{
  "bounds": [
    number
  ],
}
Fields
bounds[]

number

bounds is a list of strictly increasing boundaries between buckets. Note that a list of length N-1 defines N buckets because of fenceposting. See comments on bucket_option for details.

The i'th finite bucket covers the interval [bounds[i-1], bounds[i]) where i ranges from 1 to bound_size() - 1. Note that there are no finite buckets at all if bounds contains only a single element; in that special case the single bound defines the boundary between the underflow and overflow buckets.

bucket number                  lower bound     upper bound
i == 0 (underflow)             -inf            bounds[0]
0 < i < bound_size()           bounds[i-1]     bounds[i]
i == bound_size() (overflow)   bounds[i-1]     +inf
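This mapping corresponds to a binary search over the bounds; Python's bisect_right implements the inclusive-lower / exclusive-upper convention directly:

```python
import bisect

def explicit_bucket_index(x, bounds):
    """Bucket index for ExplicitBuckets: 0 = underflow, len(bounds) = overflow.

    bounds must be strictly increasing. Lower bounds are inclusive and
    upper bounds are exclusive, so a value equal to a boundary falls into
    the bucket above it.
    """
    return bisect.bisect_right(bounds, x)
```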

LogEntry

An individual log entry.

JSON representation
{
  "name": string,
  "timestamp": string,
  "severity": enum(LogSeverity),
  "insertId": string,
  "labels": {
    string: string,
    ...
  },

  // Union field payload can be only one of the following:
  "protoPayload": {
    "@type": string,
    field1: ...,
    ...
  },
  "textPayload": string,
  "structPayload": {
    object
  },
  // End of list of possible types for union field payload.
}
Fields
name

string

Required. The log to which this log entry belongs. Examples: "syslog", "book_log".

timestamp

string (Timestamp format)

The time the event described by the log entry occurred. If omitted, defaults to operation start time.

A timestamp in RFC3339 UTC "Zulu" format, accurate to nanoseconds. Example: "2014-10-02T15:01:23.045123456Z".

severity

enum(LogSeverity)

The severity of the log entry. The default value is LogSeverity.DEFAULT.

insertId

string

A unique ID for the log entry used for deduplication. If omitted, the implementation will generate one based on operationId.

labels

map (key: string, value: string)

A set of user-defined (key, value) data that provides additional information about the log entry.

An object containing a list of "key": value pairs. Example: { "name": "wrench", "mass": "1.3kg", "count": "3" }.

Union field payload. The log entry payload, which can be one of multiple types. payload can be only one of the following:
protoPayload

object

The log entry payload, represented as a protocol buffer that is expressed as a JSON object. You can only pass protoPayload values that belong to a set of approved types.

An object containing fields of an arbitrary type. An additional field "@type" contains a URI identifying the type. Example: { "id": 1234, "@type": "types.example.com/standard/id" }.

textPayload

string

The log entry payload, represented as a Unicode string (UTF-8).

structPayload

object (Struct format)

The log entry payload, represented as a structure that is expressed as a JSON object.
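A minimal LogEntry carrying a text payload might look like the following sketch; the log name, insert id, and message are illustrative. As with MetricValue, exactly one payload field may be set:

```python
log_entry = {
    "name": "book_log",                        # example log name from the docs
    "timestamp": "2014-10-02T15:01:23.045123456Z",
    "severity": "INFO",
    "insertId": "entry-0001",                  # illustrative deduplication id
    "labels": {"customer": "example"},         # illustrative user-defined label
    # Union field payload: exactly one of protoPayload, textPayload,
    # structPayload may be present.
    "textPayload": "checked out book 'wrench manual'",
}
```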

LogSeverity

The severity of the event described in a log entry, expressed as one of the standard severity levels listed below. For your reference, the levels are assigned the listed numeric values. The effect of using numeric values other than those listed is undefined.

You can filter for log entries by severity. For example, the following filter expression will match log entries with severities INFO, NOTICE, and WARNING:

severity > DEBUG AND severity <= WARNING

If you are writing log entries, you should map other severity encodings to one of these standard levels. For example, you might map all of Java's FINE, FINER, and FINEST levels to LogSeverity.DEBUG. You can preserve the original severity level in the log entry payload if you wish.

Enums
DEFAULT (0) The log entry has no assigned severity level.
DEBUG (100) Debug or trace information.
INFO (200) Routine information, such as ongoing status or performance.
NOTICE (300) Normal but significant events, such as start up, shut down, or a configuration change.
WARNING (400) Warning events might cause problems.
ERROR (500) Error events are likely to cause problems.
CRITICAL (600) Critical events cause more severe problems or outages.
ALERT (700) A person must take an action immediately.
EMERGENCY (800) One or more systems are unusable.
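Using the numeric values above, the example filter severity > DEBUG AND severity <= WARNING can be sketched as:

```python
# Numeric values assigned to the standard severity levels.
SEVERITY_VALUES = {
    "DEFAULT": 0, "DEBUG": 100, "INFO": 200, "NOTICE": 300,
    "WARNING": 400, "ERROR": 500, "CRITICAL": 600,
    "ALERT": 700, "EMERGENCY": 800,
}

def matches_filter(severity: str) -> bool:
    """Implements: severity > DEBUG AND severity <= WARNING."""
    v = SEVERITY_VALUES[severity]
    return SEVERITY_VALUES["DEBUG"] < v <= SEVERITY_VALUES["WARNING"]

matched = [s for s in SEVERITY_VALUES if matches_filter(s)]
```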

Importance

Defines the importance of the data contained in the operation.

Enums
LOW The API implementation may cache and aggregate the data. The data may be lost when rare and unexpected system failures occur.
HIGH The API implementation doesn't cache and aggregate the data. If the method returns successfully, it's guaranteed that the data has been persisted in durable storage.