You can target a private endpoint for HTTP calls from your workflow execution by using Service Directory's service registry with Workflows. By creating a private endpoint within a Virtual Private Cloud (VPC) network, the endpoint can be VPC Service Controls-compliant.
VPC Service Controls provides an extra layer of security defense that is independent of Identity and Access Management (IAM). While IAM enables granular identity-based access control, VPC Service Controls enables broader context-based perimeter security, including controlling data egress across the perimeter.
Service Directory is a service registry that stores information about registered network services, including their names, locations, and attributes. You can automatically register services and capture their details, regardless of their infrastructure. This lets you discover, publish, and connect services at scale across all your service endpoints.
A VPC network provides connectivity for your virtual machine (VM) instances, and allows you to create private endpoints within your VPC network by using internal IP addresses. HTTP calls to a VPC network resource are sent over a private network while enforcing IAM and VPC Service Controls.
VPC Service Controls is a Google Cloud feature that allows you to set up a service perimeter and create a data transfer boundary. You can use VPC Service Controls with Workflows to help protect your services, and to reduce the risk of data exfiltration.
This document shows you how to register a VM in a VPC network as a Service Directory endpoint. This allows you to provide your workflow with a Service Directory service name. Your workflow execution uses the information retrieved from the service registry to send the appropriate HTTP request, without egressing to a public network.
At a high level, you must do the following:
- Grant permissions to the Cloud Workflows service agent so that the service agent can view Service Directory resources and access VPC networks using Service Directory.
- Create a VPC network to provide networking functionality.
- Create a VPC firewall rule so that you can allow or deny traffic to or from VM instances in your VPC network.
- Create a VM instance in the VPC network. A Compute Engine VM instance is a virtual machine that is hosted on Google's infrastructure. The terms Compute Engine instance, VM instance, and VM are synonymous and are used interchangeably.
- Deploy an application on the VM. You can run an app on your VM instance and confirm that traffic is being served as expected.
- Configure Service Directory so that your workflow execution can invoke a Service Directory endpoint.
- Create and deploy your workflow. The private_service_name value in your workflow specifies the Service Directory endpoint that you registered in the previous step.
Grant permissions to the Cloud Workflows service agent
Some Google Cloud services have service agents that allow the services to access your resources. If an API requires a service agent, then Google creates the service agent after you activate and use the API.
When you first deploy a workflow, the Cloud Workflows service agent is automatically created with the following format:
service-PROJECT_NUMBER@gcp-sa-workflows.iam.gserviceaccount.com
You can also manually create the service agent in a project that doesn't yet have any workflows:

gcloud beta services identity create \
    --service=workflows.googleapis.com \
    --project=PROJECT_ID
Replace PROJECT_ID with your Google Cloud project ID.

To view Service Directory resources, grant the Service Directory Viewer role (roles/servicedirectory.viewer) on the project to the Workflows service agent:

gcloud projects add-iam-policy-binding PROJECT_ID \
    --member=serviceAccount:service-PROJECT_NUMBER@gcp-sa-workflows.iam.gserviceaccount.com \
    --role=roles/servicedirectory.viewer
Replace PROJECT_NUMBER with your Google Cloud project number. You can find your project number on the Welcome page of the Google Cloud console or by running the following command:

gcloud projects describe PROJECT_ID --format='value(projectNumber)'
To access VPC networks using Service Directory, grant the Private Service Connect Authorized Service role (roles/servicedirectory.pscAuthorizedService) on the project to the Workflows service agent:

gcloud projects add-iam-policy-binding PROJECT_ID \
    --member=serviceAccount:service-PROJECT_NUMBER@gcp-sa-workflows.iam.gserviceaccount.com \
    --role=roles/servicedirectory.pscAuthorizedService
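Optionally, you can confirm that both roles are in place by inspecting the project's IAM policy. This is only a quick sanity check; adjust the filter if your service agent email differs:

gcloud projects get-iam-policy PROJECT_ID \
    --flatten="bindings[].members" \
    --filter="bindings.members:service-PROJECT_NUMBER@gcp-sa-workflows.iam.gserviceaccount.com" \
    --format="table(bindings.role)"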
Create a VPC network
A VPC network is a virtual version of a physical network that is implemented inside of Google's production network. It provides connectivity for your Compute Engine VM instances.
You can create an auto mode or custom mode VPC network. Each new network that you create must have a unique name within the same project.
For example, the following command creates an auto mode VPC network:
gcloud compute networks create NETWORK_NAME \
    --subnet-mode=auto
Replace NETWORK_NAME with a name for the VPC network.
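Optionally, you can verify the new network and note that auto mode has created one subnet in each region. This check isn't required for the rest of this guide:

gcloud compute networks describe NETWORK_NAME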
For more information, see Create and manage VPC networks.
Create a VPC firewall rule
VPC firewall rules let you allow or deny traffic to or from VM instances in a VPC network based on port number, tag, or protocol.
VPC firewall rules are defined at the network level, and only apply to the network where they are created; however, the name you choose for a rule must be unique to the project.
For example, the following command creates a firewall rule for a specified VPC network and allows ingress traffic from any IPv4 address (0.0.0.0/0). The --rules flag value of all makes the rule applicable to all protocols and all destination ports.
gcloud compute firewall-rules create RULE_NAME \
    --network=projects/PROJECT_ID/global/networks/NETWORK_NAME \
    --direction=INGRESS \
    --action=ALLOW \
    --source-ranges=0.0.0.0/0 \
    --rules=all
Replace RULE_NAME with a name for the firewall rule.
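If you want to confirm the rule's configuration after creating it, you can describe it:

gcloud compute firewall-rules describe RULE_NAME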
For more information, see Use VPC firewall rules.
Create a VM instance in the VPC network
VM instances include Google Kubernetes Engine (GKE) clusters, App Engine flexible environment instances, and other Google Cloud products built on Compute Engine VMs. To support private network access, a VPC network resource can be a VM instance, Cloud Interconnect IP address, or a Layer 4 internal load balancer.
Compute Engine instances can run public images for Linux and Windows Server that Google provides, as well as private custom images that you can create or import from your existing systems. You can also deploy Docker containers.
You can choose the machine properties of your instances, such as the number of virtual CPUs and the amount of memory, by using a set of predefined machine types or by creating your own custom machine types.
For example, the following command creates a Linux VM instance from a public image with a network interface attached to the VPC network you created previously.
Create and start a VM instance:
gcloud compute instances create VM_NAME \
    --image-family=debian-11 \
    --image-project=debian-cloud \
    --machine-type=e2-micro \
    --network-interface network=projects/PROJECT_ID/global/networks/NETWORK_NAME
Replace VM_NAME with a name for the VM.

If you are prompted to confirm the zone for the instance, type y.

After you create the VM instance, note the INTERNAL_IP address that is returned.
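If you need to look up the internal IP address again later, one way is to query the instance directly (a sketch; ZONE is the zone you chose for the VM):

gcloud compute instances describe VM_NAME \
    --zone=ZONE \
    --format='get(networkInterfaces[0].networkIP)'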
In the Google Cloud console, go to the VM instances page.

In the Name column, click the name of the appropriate VM instance.
If the VM is running, to stop the VM, click Stop.

To edit the VM, click Edit.

In the Networking > Firewalls section, to permit HTTP or HTTPS traffic to the VM, select Allow HTTP traffic or Allow HTTPS traffic. For this example, select the Allow HTTP traffic checkbox.

Compute Engine adds a network tag to your VM which associates the firewall rule with the VM. It then creates the corresponding ingress firewall rule that allows all incoming traffic on tcp:80 (HTTP) or tcp:443 (HTTPS).

To save your changes, click Save.
To restart the VM, click Start/Resume.
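If you want to confirm that an ingress rule allowing HTTP traffic now exists, you can list the firewall rules and look for one that allows tcp:80 (the exact rule and tag names that the console assigns can vary):

gcloud compute firewall-rules list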
For more information, see Create and start a VM instance.
Deploy an application on the VM
To test the network configuration and to confirm that traffic is being served as expected, you can deploy a simple app on your VM that listens on a port.
For example, the following commands create a Node.js web service that listens on port 3000.
Establish an SSH connection to your VM instance.
Update your package repositories:
sudo apt update
Install NVM, Node.js, and npm.
For more information, see Setting up a Node.js development environment.
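One common way to install them on the Debian VM is to use the NVM install script (the version number below is an assumption; check the NVM repository for the current command):

curl -o- https://raw.githubusercontent.com/nvm-sh/nvm/v0.39.7/install.sh | bash
export NVM_DIR="$HOME/.nvm"
[ -s "$NVM_DIR/nvm.sh" ] && \. "$NVM_DIR/nvm.sh"
nvm install --lts

The last command installs the latest LTS release of Node.js together with npm.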
Interactively create a package.json file:

npm init
For example:
{ "name": "test", "version": "1.0.0", "description": "", "main": "index.js", "scripts": { "test": "hello" }, "author": "", "license": "ISC" }
Install Express, a web application framework for Node.js:
npm install express
Write the code for the test app:
vim app.js
The following sample creates an app that responds to GET requests to the root path (/) with the text "Hello, world!".
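A minimal sketch using Express, assuming port 3000 (which matches the rest of this guide):

const express = require('express');

const app = express();
const PORT = 3000;

// Respond to GET requests to the root path with a plain-text greeting.
app.get('/', (req, res) => {
  res.send('Hello, world!');
});

// Listen on the port that you will register as the Service Directory endpoint port.
app.listen(PORT, () => {
  console.log(`Listening on port ${PORT}`);
});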
Note the port that the app is listening on. The same port number must be used when configuring the endpoint for the Service Directory service.

Confirm that the app is listening on port 3000:
node app.js
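While the app is running, you can send a request to it from a second SSH session to check the response (assuming curl is available on the VM):

curl http://localhost:3000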
Compute Engine offers a range of deployment options. For more information, see Choose a Compute Engine deployment strategy for your workload.
Configure Service Directory
To support invoking a private endpoint from a workflow execution, you must set up a Service Directory namespace, register a service in the namespace, and add an endpoint to the service.
For example, the following commands create a namespace, a service, and an endpoint that specifies the VPC network and internal IP address of your VM instance.
Create a namespace:
gcloud service-directory namespaces create NAMESPACE \
    --location=REGION
Replace the following:
- NAMESPACE: the ID of the namespace or fully qualified identifier for the namespace.
- REGION: the Google Cloud region that contains the namespace; for example, us-central1.
Create a service:
gcloud service-directory services create SERVICE \
    --namespace=NAMESPACE \
    --location=REGION
Replace SERVICE with the name of the service that you are creating.

Configure an endpoint:
gcloud service-directory endpoints create ENDPOINT \
    --namespace=NAMESPACE \
    --service=SERVICE \
    --network=projects/PROJECT_NUMBER/locations/global/networks/NETWORK_NAME \
    --port=PORT_NUMBER \
    --address=IP_ADDRESS \
    --location=REGION
Replace the following:
- ENDPOINT: the name of the endpoint that you are creating.
- PORT_NUMBER: the port that the endpoint is running on; for example, 3000.
- IP_ADDRESS: the IPv6 or IPv4 address of the endpoint; this is the internal IP address that you noted previously.
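To confirm the registration, you can describe the endpoint and check that the network, port, and address match what you expect:

gcloud service-directory endpoints describe ENDPOINT \
    --service=SERVICE \
    --namespace=NAMESPACE \
    --location=REGION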
For more information, see Configure Service Directory and Configure private network access.
Create and deploy your workflow
Calling or invoking a private endpoint from Workflows is done through an HTTP request. The most common HTTP request methods have a call shortcut (such as http.get and http.post), but you can make any type of HTTP request by setting the call field to http.request and specifying the type of request using the method field. For more information, see Make an HTTP request.
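For example, a step that uses a method without a shortcut might look like the following sketch of a steps-list entry (the DELETE method and the /items/123 path are purely illustrative):

- deleteItem:
    call: http.request
    args:
      method: DELETE
      url: http://IP_ADDRESS/items/123
      private_service_name: "projects/PROJECT_ID/locations/REGION/namespaces/NAMESPACE/services/SERVICE"
    result: res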
Create a source code file for your workflow:
touch call-private-endpoint.JSON_OR_YAML
Replace JSON_OR_YAML with yaml or json depending on the format of your workflow.

In a text editor, copy the following workflow (which in this case uses an HTTP protocol for the url value) to your source code file:

YAML
main:
  steps:
    - checkHttp:
        call: http.get
        args:
          url: http://IP_ADDRESS
          private_service_name: "projects/PROJECT_ID/locations/REGION/namespaces/NAMESPACE/services/SERVICE"
        result: res
    - ret:
        return: ${res}
JSON
{ "main": { "steps": [ { "checkHttp": { "call": "http.get", "args": { "url": "http://IP_ADDRESS", "private_service_name": "projects/PROJECT_ID/locations/REGION/namespaces/NAMESPACE/services/SERVICE" }, "result": "res" } }, { "ret": { "return": "${res}" } } ] } }
The private_service_name value must be a string that specifies a registered Service Directory service name with the following format:

projects/PROJECT_ID/locations/LOCATION/namespaces/NAMESPACE_NAME/services/SERVICE_NAME
Deploy the workflow. For test purposes, you can attach the Compute Engine default service account to the workflow to represent its identity:
gcloud workflows deploy call-private-endpoint \
    --source=call-private-endpoint.JSON_OR_YAML \
    --location=REGION \
    --service-account=PROJECT_NUMBER-compute@developer.gserviceaccount.com
Execute the workflow:
gcloud workflows run call-private-endpoint \
    --location=REGION
You should see a result similar to the following:
argument: 'null'
duration: 0.650784403s
endTime: '2023-06-09T18:19:52.570690079Z'
name: projects/968807934019/locations/us-central1/workflows/call-private-endpoint/executions/4aac88d3-0b54-419b-b364-b6eb973cc932
result: '{"body":"Hello, world!","code":200,"headers":{"Connection":"keep-alive","Content-Length":"21","Content-Type":"text/html; charset=utf-8","Date":"Fri, 09 Jun 2023 18:19:52 GMT","Etag":"W/\"15-NFaeBgdti+9S7zm5kAdSuGJQm6Q\"","Keep-Alive":"timeout=5","X-Powered-By":"Express"}}'
startTime: '2023-06-09T18:19:51.919905676Z'
state: SUCCEEDED
What's next
- Learn more about Private Service Connect.
- Set up a service perimeter using VPC Service Controls.
- Invoke a private on-prem, Compute Engine, GKE, or other endpoint by enabling IAP.