
Installing the Connector app

This page describes how to install the Connector example application package for Cloud Security Command Center (Cloud SCC). The Connector app ingests security findings stored in a Cloud Storage bucket that a partner populates. Findings ingestion is triggered when a new file finishes uploading to the bucket.

This guide was written for tools version 3.3.0. If you're using a different version, please see the README file included with the tools version you downloaded. As of May 22, 2019, the most recent release version is 4.0.1.

Before you begin

Before you start this guide, you must complete the prerequisites and installation setup in Setting up Cloud SCC tools.

To install and run the Connector package, you will also need the following:

  • An active GCP Organization
  • An active Cloud Billing account
  • Python version 3.5.3
  • A user with the following Cloud Identity and Access Management (Cloud IAM) role at the organization level:
    • Pub/Sub Publisher - roles/pubsub.publisher

Setting up environment variables

  1. Go to the Google Cloud Platform Console.
    Go to the GCP Console page
  2. Click Activate Cloud Shell.
  3. Run the following commands to set environment variables. Use the tools release version you downloaded during setup. This guide was written for tools version 3.3.0. For other tools versions, see the README included with the files you downloaded.

    # Release version you downloaded during setup
    export VERSION=[RELEASE_VERSION]
    
    # Directory to unzip the installation files
    export WORKING_DIR=${HOME}/scc-tools-install
    
    # Organization ID where the script will run
    export ORGANIZATION_ID=[YOUR_ORG_ID]
    
    # Project ID to be created
    export CONNECTOR_PROJECT_ID=[YOUR_CONNECTOR_PROJECT_ID]
    
    # A valid billing account ID
    export BILLING=[YOUR_BILLING_ACCOUNT_ID]
    
    # One Compute Engine region listed in Regions and Zones:
    # https://cloud.google.com/compute/docs/regions-zones
    export REGION=[YOUR_REGION]
    
    # A location selected from the App Engine locations list:
    # https://cloud.google.com/appengine/docs/locations
    export GAE_LOCATION=[YOUR_LOCATION]
    
    # A Cloud Storage bucket to upload the partner findings file. This will be
    # created if it doesn't exist
    # See: https://cloud.google.com/storage/docs/creating-buckets
    export CONNECTOR_BUCKET=[YOUR_CONNECTOR_BUCKET]
    
    # A Cloud Storage bucket to upload the Cloud Functions source code. This will
    # be created if it doesn't exist
    # See: https://cloud.google.com/storage/docs/creating-buckets
    export CONNECTOR_CF_BUCKET=[YOUR_CLOUD_FUNCTION_BUCKET]
    
    # Absolute path to the service account file for your Cloud SCC API project
    export SCC_SA_FILE=[ABSOLUTE_PATH_TO_SERVICE_ACCOUNT_FILE]
    
    # Comma separated values for custom roles that can be added to the deployer
    # service account
    export CUSTOM_ROLES=custom.gaeAppCreator
    
  4. On the Cloud Shell menu bar, open the More devshell settings menu, and then click Upload file.

  5. Upload the scc-connector-${VERSION}.zip file you downloaded during the installation setup.

  6. Unzip the file you uploaded by running:

    unzip -qo scc-connector-${VERSION}.zip -d ${WORKING_DIR}
    
  7. Go to the installation working directory:

    cd ${WORKING_DIR}
    

Installing the Connector app package

In any of the following sections, you can simulate a command's execution by using the --simulation option.
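
For example, a dry run of the API-enablement command from Step 2 might look like the following. This assumes the setup scripts accept --simulation in place of --no-simulation, as the Cloud Functions update example later in this guide does:

(cd setup; \
pipenv run python3 enable_apis.py \
  --project_id ${CONNECTOR_PROJECT_ID} \
  --connector-apis \
  --simulation)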

Step 1: Creating the project

Create the project in which you'll install the Connector app, and then link it to your billing account by running:

gcloud projects create ${CONNECTOR_PROJECT_ID} \
  --organization ${ORGANIZATION_ID}

gcloud beta billing projects link ${CONNECTOR_PROJECT_ID} \
  --billing-account ${BILLING}

Step 2: Enabling Google APIs

The Connector app uses Google APIs to function. Enable the APIs needed by the project by running:

(cd setup; \
pipenv run python3 enable_apis.py \
  --project_id ${CONNECTOR_PROJECT_ID} \
  --connector-apis \
  --no-simulation)

Deploying the Connector app package

When you deploy the Connector app package, you will do the following:

  • Create the custom role needed to create App Engine apps, if it doesn't already exist.
  • Create the service account that will be used to deploy.
  • Create the partner project.
  • Create the Cloud Storage buckets in the project for the partner findings and for the Cloud Functions function deployment.
  • Create the Cloud Pub/Sub topics in the project.
  • Deploy the Cloud Functions functions that are part of the Connector app in the project.
  • Deploy a minimal App Engine application in the project to enable Cloud Datastore.
  • Create a Cloud Pub/Sub notification on the partner Cloud Storage bucket in the project.

During deployment, you will be prompted to select the translator you want to use. The translator works as an adapter that converts security findings discovered by a partner from the partner's format to the Cloud SCC SourceFinding format. For more information, see the findings.create API reference.

When you select a translator option, the corresponding .yaml file is used to map the incoming JSON. You can change this later by updating the translator Cloud Functions function.

Step 1: Creating custom roles

To make sure that you have the necessary custom roles created in your organization, run the following:

(cd setup; \
pipenv run python3 create_custom_role.py \
  --custom_role_name custom.gaeAppCreator \
  --project_id ${CONNECTOR_PROJECT_ID} \
  --organization_id ${ORGANIZATION_ID} \
  --deployment_name gae-creator-custom-role \
  --template_file templates/custom_gae_creator_role.py \
  --no-simulation)

Step 2: Creating a service account

To create the service account that will be used to deploy the application, run the following:

(cd setup; \
pipenv run python3 create_service_account.py \
  --name deployer \
  --project_id ${CONNECTOR_PROJECT_ID} \
  --organization_id ${ORGANIZATION_ID} \
  --roles_file roles/connector.txt \
  --custom_roles ${CUSTOM_ROLES} \
  --no-simulation)

Step 3: Running the setup script

Deploy the Connector app by running the following:

(cd setup; \
 export DEPLOY_CREDENTIALS=./service_accounts/${CONNECTOR_PROJECT_ID}_deployer.json;
 pipenv run python3 run_setup_connector.py \
  --organization_id ${ORGANIZATION_ID} \
  --key_file ${DEPLOY_CREDENTIALS} \
  --billing_account_id ${BILLING} \
  --region ${REGION} \
  --gae_region ${GAE_LOCATION} \
  --connector_project ${CONNECTOR_PROJECT_ID} \
  --connector_bucket ${CONNECTOR_BUCKET} \
  --cf_bucket ${CONNECTOR_CF_BUCKET} \
  --connector_sa_file ${SCC_SA_FILE} \
  --no-simulation)

Configuring the application

You can set the Connector app's operation mode to prod or demo by publishing a message to a configuration topic:

  • prod: All files added to the configured Cloud Storage bucket are read and their findings are ingested and sent to Cloud SCC.
  • demo: When a file is added to the configured Cloud Storage bucket, its full Cloud Storage location is stored in Cloud Datastore. New files overwrite the existing entry in Cloud Datastore. You can post a message to a Cloud Pub/Sub topic to flush this cache and force processing of the file.

Step 1: Setting demo mode for validation

gcloud pubsub topics publish projects/${CONNECTOR_PROJECT_ID}/topics/configuration \
  --message "{\"mode\": \"demo\"}"

Step 2: Processing the last loaded file

In demo mode, force processing of the last file loaded into the application bucket by running:

gcloud pubsub topics publish projects/${CONNECTOR_PROJECT_ID}/topics/flushbuffer \
  --message "{}"

Step 3: Setting production mode

gcloud pubsub topics publish projects/${CONNECTOR_PROJECT_ID}/topics/configuration \
  --message "{\"mode\": \"prod\"}"

Reading findings files

The findings file provided by a partner must be encoded in UTF-8 without a BOM, and it must be a valid JSON, CSV, or TSV file. Findings file processing is driven by a YAML configuration file, which maps a known set of attributes and properties from the findings file.

Attributes are a set of fields that are common to all findings from all partners:

  • Id: the unique identifier of the finding.
  • Source ID: the partner identity, which currently must be one of:
    • GOOGLE_ANOMALY_DETECTION
    • CLOUDFLARE
    • CROWDSTRIKE
    • DOME9
    • FORSETI
    • PALO_ALTO_NETWORKS
    • QUALYS
    • REDLOCK
  • Category: the security finding category according to the partner's classification.
  • Asset Ids: A list of GCP asset IDs, usually in one of the following forms:
    • organization/<organization_id>
    • <project_id>/<asset_type>/<id>
  • Event Time: the date and time when the finding was identified.
  • URL: a URL with additional information about the finding in the partner's original system.

Properties are partner-specific information that is stored as a set of <key, value> pairs, such as severity, solution, remediation, or summary.

Property values are stored as strings. If the key in the findings JSON file points to a nested JSON object, the Connector app stringifies the JSON object so that it can be sent to Cloud SCC without losing any information.

The YAML file that is used to guide findings ingestion has the following structure:

  • A metadata section, with information about:
    • type: the file format:
      • json: for JSON files with a single JSON object or a jsonArray of objects.
      • csv: for delimiter-separated files (CSV or TSV). For these files, the delimiter field is used to identify the separator: "," or the TAB character.
    • org_name: the organization name.
    • root_element, with one of the following values:
      • null if the finding object is flat.
      • The key in the JSON file for the JSON object that contains the information to be used.
    • deep_levels: used to map an inner JSON object related to the root_element to search for the data to be parsed.
    • mapped_ips: a custom key-value map that the Connector app can use to link partner info to the corresponding info in GCP, like an external IP linked to a Compute Engine instance asset ID.
  • fixed_fields: a fixed-value section with forced values to be ingested. This is mostly used for sourceId and URL.
  • api_to_fields: the actual mapped-values section with fields to be read from the findings file. This includes the attributes not yet mapped and all of the partner properties in the properties subsection.

Below is a sample YAML file for GOOGLE_ANOMALY_DETECTION findings:

type: json
org_name: organizations/[YOUR_ORGANIZATION_ID]
root_element: null
deep_levels: !!seq [ assetIds ]
fixed_fields:
  sourceId: GOOGLE_ANOMALY_DETECTION
api_to_fields:
  id:
    transform: concat_organization_id
    path: id
  category: category
  assetIds:
    transform: to_array_asset_ids
    path: assetIds
  eventTime:
    transform: time_to_millis
    unit: 1000
    path: eventTime
  properties:
    action: properties.action
    serviceAccount: properties.serviceAccount
    storageBucket: properties.storageBucket
    product: properties.product
    summary: customSummary
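
For reference, a hypothetical findings file that this sample mapping could process might look like the following. The field names follow the paths in the YAML above; the values and identifiers are illustrative only and aren't taken from any real partner format:

{
  "id": "finding-001",
  "category": "anomalous_activity",
  "assetIds": "organizations/[YOUR_ORGANIZATION_ID]",
  "eventTime": 1558500000,
  "customSummary": "Anomalous activity detected on a storage bucket.",
  "properties": {
    "action": "alert",
    "serviceAccount": "example-sa@[YOUR_PROJECT_ID].iam.gserviceaccount.com",
    "storageBucket": "example-bucket",
    "product": "Example Anomaly Detection"
  }
}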

If a field isn't mapped in the YAML file, it won't be ingested. Any field can have a transform that preprocesses the field value before it's ingested. The following transforms are currently supported:

  • Converting date/time fields
  • Concatenating the organization ID to the finding ID
  • Converting a single field value to a JSON array on the object that will be sent to Cloud SCC
  • A from -> to transformation for the mapped_ips meta field based on the original field value.
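
For example, in the sample mapping above, the eventTime field uses the time_to_millis transform with unit: 1000, which presumably converts a timestamp expressed in seconds to milliseconds before it's sent to Cloud SCC.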

Examples

The directory ./connector/dm/mapper_samples contains the default example for each of the current partners.

Cloudflare, Crowdstrike, Dome9, Palo Alto, and Redlock are examples that show how to convert partner findings when the partner finding doesn't have the ID of the Google Cloud Platform asset in the format that the Cloud SCC API needs. These examples show where you can define a map from values in the partner finding to valid asset IDs for the Cloud SCC API. The Cloudflare example shows how to set a single fixed asset ID for all the findings.

Updating findings files

If you update a findings YAML file, you need to redeploy the Cloud Functions function.

Following is an example of a simulated Cloud Functions update:

(cd setup; \
 pipenv run python3 update_cloud_function.py \
    --project_id ${CONNECTOR_PROJECT_ID} \
    --bucket_name ${CONNECTOR_CF_BUCKET} \
    --cloud_function [CONNECTOR_CF_NAME] \
    --sa_file ${SCC_SA_FILE} \
    --simulation)

Where [CONNECTOR_CF_NAME] is one of the following:

  • configuration
  • flushbuffer
  • forwardfilelink
  • translation

For example, if you changed the translation function, you would use --cloud_function translation. To run an actual update, use the --no-simulation flag, as shown below.
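
For instance, a non-simulated update of the translation function would look like the following:

(cd setup; \
 pipenv run python3 update_cloud_function.py \
    --project_id ${CONNECTOR_PROJECT_ID} \
    --bucket_name ${CONNECTOR_CF_BUCKET} \
    --cloud_function translation \
    --sa_file ${SCC_SA_FILE} \
    --no-simulation)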
