The Pub/Sub to Splunk template is a streaming pipeline that reads messages from a Pub/Sub subscription and writes the message payload to Splunk using Splunk's HTTP Event Collector (HEC). The most common use case of this template is to export logs to Splunk. To see an example of the underlying workflow, see Deploying production-ready log exports to Splunk using Dataflow.
Before writing to Splunk, you can also apply a JavaScript user-defined function to the message payload. Any messages that experience processing failures are forwarded to a Pub/Sub unprocessed topic for further troubleshooting and reprocessing.
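To inspect failed messages, you can attach a subscription to the unprocessed topic and pull from it. The following is a minimal sketch; the topic and subscription names are placeholders:

```
# Sketch only; topic and subscription names are placeholders.
# Attach a subscription to the unprocessed (dead-letter) topic so that
# failed messages are retained and can be inspected.
gcloud pubsub subscriptions create your-deadletter-sub --topic=your-deadletter-topic

# Pull a few failed messages (without acknowledging them) to troubleshoot.
gcloud pubsub subscriptions pull your-deadletter-sub --limit=10
```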
As an extra layer of protection for your HEC token, you can also pass in a Cloud KMS key and supply the HEC token parameter as a base64-encoded string encrypted with that key. See the Cloud KMS API encryption endpoint for additional details on encrypting your HEC token parameter.
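For example, you might encrypt and encode the token with the gcloud CLI before launching the template. This is a sketch; the key ring, key, and token values are placeholders, and it assumes the key already exists:

```
# Sketch only; all names and the token value are placeholders.
# Encrypt the HEC token with Cloud KMS, then base64-encode the ciphertext so
# it can be passed as the template's `token` parameter with tokenSource=KMS.
echo -n "YOUR_HEC_TOKEN" | gcloud kms encrypt \
    --location=global \
    --keyring=your-keyring \
    --key=your-key-name \
    --plaintext-file=- \
    --ciphertext-file=- \
  | base64 -w 0
```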
Pipeline requirements
- The source Pub/Sub subscription must exist prior to running the pipeline.
- The Pub/Sub unprocessed topic must exist prior to running the pipeline.
- The Splunk HEC endpoint must be accessible from the Dataflow workers' network.
- The Splunk HEC token must be generated and available.
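For example, you can create the required Pub/Sub resources and check that the HEC endpoint accepts events before you run the pipeline. The following is a minimal sketch; the topic, subscription, host, and token values are placeholders:

```
# Sketch only; all names and the token are placeholders.
# Create the source subscription and the unprocessed (dead-letter) topic.
gcloud pubsub topics create your-input-topic
gcloud pubsub subscriptions create your-subscription-name --topic=your-input-topic
gcloud pubsub topics create your-deadletter-topic

# Verify that the Splunk HEC endpoint is reachable and the token is valid
# by sending a test event.
curl -s "https://splunk-hec-host:8088/services/collector/event" \
    -H "Authorization: Splunk YOUR_HEC_TOKEN" \
    -d '{"event": "dataflow connectivity test"}'
```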
Template parameters
Required parameters
- inputSubscription: The Pub/Sub subscription to read the input from. For example, `projects/your-project-id/subscriptions/your-subscription-name`.
- url: The Splunk HEC URL. The URL must be routable from the VPC that the pipeline runs in. For example, `https://splunk-hec-host:8088`.
- outputDeadletterTopic: The Pub/Sub topic to forward undeliverable messages to. For example, `projects/<PROJECT_ID>/topics/<TOPIC_NAME>`.
Optional parameters
- token: The Splunk HEC authentication token. Must be provided if the `tokenSource` parameter is set to `PLAINTEXT` or `KMS`.
- batchCount: The batch size for sending multiple events to Splunk. Defaults to `1` (no batching).
- disableCertificateValidation: Disable SSL certificate validation. Default `false` (validation enabled). If `true`, the certificates are not validated (all certificates are trusted) and the `rootCaCertificatePath` parameter is ignored.
- parallelism: The maximum number of parallel requests. Defaults to `1` (no parallelism).
- includePubsubMessage: Include the full Pub/Sub message in the payload. Default `false` (only the data element is included in the payload).
- tokenKMSEncryptionKey: The Cloud KMS key to use to decrypt the HEC token string. This parameter must be provided when `tokenSource` is set to `KMS`. If the Cloud KMS key is provided, the HEC token string must be passed in encrypted. For example, `projects/your-project-id/locations/global/keyRings/your-keyring/cryptoKeys/your-key-name`.
- tokenSecretId: The Secret Manager secret ID for the token. This parameter must be provided when `tokenSource` is set to `SECRET_MANAGER`. For example, `projects/your-project-id/secrets/your-secret/versions/your-secret-version`.
- tokenSource: The source of the token. The following values are allowed: `PLAINTEXT`, `KMS`, and `SECRET_MANAGER`. You must provide this parameter when Secret Manager is used. If `tokenSource` is set to `KMS`, `tokenKMSEncryptionKey` and an encrypted `token` must be provided. If `tokenSource` is set to `SECRET_MANAGER`, `tokenSecretId` must be provided. If `tokenSource` is set to `PLAINTEXT`, `token` must be provided. For a sketch of the Secret Manager flow, see the example after this list.
- rootCaCertificatePath: The full URL to the root CA certificate in Cloud Storage. The certificate provided in Cloud Storage must be DER-encoded and can be supplied in binary or printable (Base64) encoding. If the certificate is provided in Base64 encoding, it must be bounded at the beginning by `-----BEGIN CERTIFICATE-----` and at the end by `-----END CERTIFICATE-----`. If this parameter is provided, the private CA certificate file is fetched and added to the Dataflow worker's trust store in order to verify the Splunk HEC endpoint's SSL certificate. If this parameter is not provided, the default trust store is used. For example, `gs://mybucket/mycerts/privateCA.crt`.
- enableBatchLogs: Specifies whether logs should be enabled for batches written to Splunk. Default: `true`.
- enableGzipHttpCompression: Specifies whether HTTP requests sent to Splunk HEC should be compressed (gzip content encoded). Default: `true`.
- javascriptTextTransformGcsPath: The Cloud Storage URI of the `.js` file that defines the JavaScript user-defined function (UDF) to use. For example, `gs://my-bucket/my-udfs/my_file.js`.
- javascriptTextTransformFunctionName: The name of the JavaScript user-defined function (UDF) to use. For example, if your JavaScript function code is `myTransform(inJson) { /*...do stuff...*/ }`, then the function name is `myTransform`. For sample JavaScript UDFs, see UDF Examples (https://github.com/GoogleCloudPlatform/DataflowTemplates#udf-examples).
- javascriptTextTransformReloadIntervalMinutes: The interval, in minutes, at which workers check for JavaScript UDF changes and reload the file. Defaults to `0` (reloading disabled).
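For example, to use the Secret Manager token source, you might store the token as a secret and pass its version ID to the template. This is a sketch; the secret name and token value are placeholders:

```
# Sketch only; the secret name and token value are placeholders.
# Store the HEC token in Secret Manager.
printf 'YOUR_HEC_TOKEN' | gcloud secrets create hec-token --data-file=-

# Then launch the template with:
#   tokenSource=SECRET_MANAGER
#   tokenSecretId=projects/your-project-id/secrets/hec-token/versions/1
```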
User-defined function
Optionally, you can extend this template by writing a user-defined function (UDF). The template calls the UDF for each input element. Element payloads are serialized as JSON strings. For more information, see Create user-defined functions for Dataflow templates.
Function specification
The UDF has the following specification:
- Input: the Pub/Sub message data field, serialized as a JSON string.
- Output: the event data to be sent to the Splunk HEC events endpoint. The output must be a string or a stringified JSON object.
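For example, the following is a minimal sketch of a UDF that parses the message data, adds a field, and returns the stringified event. The `environment` field added here is a hypothetical enrichment, not part of the template:

```
/**
 * A minimal sketch of a UDF. Adapt the added field to your own event schema.
 * @param {string} inJson the Pub/Sub message data field, serialized as a JSON string
 * @return {string} the event data to send to the Splunk HEC events endpoint
 */
function myTransform(inJson) {
  var obj = JSON.parse(inJson);
  obj.environment = 'production'; // hypothetical field, not part of the template
  return JSON.stringify(obj);
}
```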
Run the template
Console
- Go to the Dataflow Create job from template page.
- In the Job name field, enter a unique job name.
- Optional: For Regional endpoint, select a value from the drop-down menu. The default region is `us-central1`. For a list of regions where you can run a Dataflow job, see Dataflow locations.
- From the Dataflow template drop-down menu, select the Pub/Sub to Splunk template.
- In the provided parameter fields, enter your parameter values.
- Optional: To switch from exactly-once processing to at-least-once streaming mode, select At Least Once.
- Click Run job.
gcloud
In your shell or terminal, run the template:
```
gcloud dataflow jobs run JOB_NAME \
    --gcs-location gs://dataflow-templates-REGION_NAME/VERSION/Cloud_PubSub_to_Splunk \
    --region REGION_NAME \
    --staging-location STAGING_LOCATION \
    --parameters \
inputSubscription=projects/PROJECT_ID/subscriptions/INPUT_SUBSCRIPTION_NAME,\
token=TOKEN,\
url=URL,\
outputDeadletterTopic=projects/PROJECT_ID/topics/DEADLETTER_TOPIC_NAME,\
javascriptTextTransformGcsPath=PATH_TO_JAVASCRIPT_UDF_FILE,\
javascriptTextTransformFunctionName=JAVASCRIPT_FUNCTION,\
batchCount=BATCH_COUNT,\
parallelism=PARALLELISM,\
disableCertificateValidation=DISABLE_VALIDATION,\
rootCaCertificatePath=ROOT_CA_CERTIFICATE_PATH
```
Replace the following:
- `JOB_NAME`: a unique job name of your choice
- `REGION_NAME`: the region where you want to deploy your Dataflow job, for example, `us-central1`
- `VERSION`: the version of the template that you want to use. You can use the following values:
  - `latest` to use the latest version of the template, which is available in the non-dated parent folder in the bucket: gs://dataflow-templates-REGION_NAME/latest/
  - the version name, like `2023-09-12-00_RC00`, to use a specific version of the template, which is nested in the respective dated parent folder in the bucket: gs://dataflow-templates-REGION_NAME/
- `STAGING_LOCATION`: the location for staging local files (for example, `gs://your-bucket/staging`)
- `INPUT_SUBSCRIPTION_NAME`: the Pub/Sub subscription name
- `TOKEN`: Splunk's HTTP Event Collector token
- `URL`: the URL path for Splunk's HTTP Event Collector (for example, `https://splunk-hec-host:8088`)
- `DEADLETTER_TOPIC_NAME`: the Pub/Sub topic name
- `JAVASCRIPT_FUNCTION`: the name of the JavaScript user-defined function (UDF) that you want to use. For example, if your JavaScript function code is `myTransform(inJson) { /*...do stuff...*/ }`, then the function name is `myTransform`. For sample JavaScript UDFs, see UDF Examples.
- `PATH_TO_JAVASCRIPT_UDF_FILE`: the Cloud Storage URI of the `.js` file that defines the JavaScript user-defined function (UDF) that you want to use, for example, `gs://my-bucket/my-udfs/my_file.js`
- `BATCH_COUNT`: the batch size to use for sending multiple events to Splunk
- `PARALLELISM`: the number of parallel requests to use for sending events to Splunk
- `DISABLE_VALIDATION`: `true` if you want to disable SSL certificate validation
- `ROOT_CA_CERTIFICATE_PATH`: the path to the root CA certificate in Cloud Storage (for example, `gs://your-bucket/privateCA.crt`)
API
To run the template using the REST API, send an HTTP POST request. For more information on the API and its authorization scopes, see projects.templates.launch.
```
POST https://dataflow.googleapis.com/v1b3/projects/PROJECT_ID/locations/LOCATION/templates:launch?gcsPath=gs://dataflow-templates-LOCATION/VERSION/Cloud_PubSub_to_Splunk
{
   "jobName": "JOB_NAME",
   "environment": {
       "ipConfiguration": "WORKER_IP_UNSPECIFIED",
       "additionalExperiments": []
   },
   "parameters": {
       "inputSubscription": "projects/PROJECT_ID/subscriptions/INPUT_SUBSCRIPTION_NAME",
       "token": "TOKEN",
       "url": "URL",
       "outputDeadletterTopic": "projects/PROJECT_ID/topics/DEADLETTER_TOPIC_NAME",
       "javascriptTextTransformGcsPath": "PATH_TO_JAVASCRIPT_UDF_FILE",
       "javascriptTextTransformFunctionName": "JAVASCRIPT_FUNCTION",
       "batchCount": "BATCH_COUNT",
       "parallelism": "PARALLELISM",
       "disableCertificateValidation": "DISABLE_VALIDATION",
       "rootCaCertificatePath": "ROOT_CA_CERTIFICATE_PATH"
   }
}
```
Replace the following:
- `PROJECT_ID`: the Google Cloud project ID where you want to run the Dataflow job
- `JOB_NAME`: a unique job name of your choice
- `LOCATION`: the region where you want to deploy your Dataflow job, for example, `us-central1`
- `VERSION`: the version of the template that you want to use. You can use the following values:
  - `latest` to use the latest version of the template, which is available in the non-dated parent folder in the bucket: gs://dataflow-templates-LOCATION/latest/
  - the version name, like `2023-09-12-00_RC00`, to use a specific version of the template, which is nested in the respective dated parent folder in the bucket: gs://dataflow-templates-LOCATION/
- `STAGING_LOCATION`: the location for staging local files (for example, `gs://your-bucket/staging`)
- `INPUT_SUBSCRIPTION_NAME`: the Pub/Sub subscription name
- `TOKEN`: Splunk's HTTP Event Collector token
- `URL`: the URL path for Splunk's HTTP Event Collector (for example, `https://splunk-hec-host:8088`)
- `DEADLETTER_TOPIC_NAME`: the Pub/Sub topic name
- `JAVASCRIPT_FUNCTION`: the name of the JavaScript user-defined function (UDF) that you want to use. For example, if your JavaScript function code is `myTransform(inJson) { /*...do stuff...*/ }`, then the function name is `myTransform`. For sample JavaScript UDFs, see UDF Examples.
- `PATH_TO_JAVASCRIPT_UDF_FILE`: the Cloud Storage URI of the `.js` file that defines the JavaScript user-defined function (UDF) that you want to use, for example, `gs://my-bucket/my-udfs/my_file.js`
- `BATCH_COUNT`: the batch size to use for sending multiple events to Splunk
- `PARALLELISM`: the number of parallel requests to use for sending events to Splunk
- `DISABLE_VALIDATION`: `true` if you want to disable SSL certificate validation
- `ROOT_CA_CERTIFICATE_PATH`: the path to the root CA certificate in Cloud Storage (for example, `gs://your-bucket/privateCA.crt`)
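After replacing the placeholders, you can send the request with curl, for example. This is a sketch; it uses gcloud for credentials and assumes the request body above is saved locally as request.json:

```
# Sketch only; assumes the request body shown above is saved as request.json
# and that the placeholders in the URL have been replaced.
curl -X POST \
    -H "Authorization: Bearer $(gcloud auth print-access-token)" \
    -H "Content-Type: application/json" \
    -d @request.json \
    "https://dataflow.googleapis.com/v1b3/projects/PROJECT_ID/locations/LOCATION/templates:launch?gcsPath=gs://dataflow-templates-LOCATION/VERSION/Cloud_PubSub_to_Splunk"
```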
What's next
- Learn about Dataflow templates.
- See the list of Google-provided templates.