Learn about troubleshooting steps that you might find helpful if you run into problems using Cloud Data Fusion.

Pipeline is stuck

In Cloud Data Fusion versions before 6.2, there is a known issue where pipelines get stuck in the Starting or Running state. Stopping the pipeline fails with the following error: Malformed reply from SOCKS server.

To fix this issue, delete the Dataproc cluster. To prevent the pipeline from getting stuck on the next run, update the compute profile:

  • Required: Increase the Dataproc master node size to at least 2 CPUs and at least 8 GB of memory.
  • Optional: Migrate to Cloud Data Fusion 6.2 or later. Starting in version 6.2, pipeline executions are submitted through the Dataproc Job API, which avoids heavy memory usage on the master node. However, master nodes with 2 CPUs and 8 GB of memory are still recommended for production jobs.

Changing cluster size

To change the cluster size, export the Compute Profile, and then use the REST API to update the memory settings:

  1. Export the Compute Profile. It gets saved locally to a JSON file.
  2. Edit the following memory settings in the JSON file: set masterCPUs to at least 2, and masterMemoryMB to at least 8192 (that is, 8 GB).

     "name": "masterCPUs",
     "value": "2",
     "isEditable": true
     "name": "masterMemoryMB",
     "value": "8192",
     "isEditable": true
  3. Use the REST API to update the Compute Profile. You can use either cURL or the HTTP Executor in the Cloud Data Fusion UI.

    For cURL, use the following command:

    curl -H "Authorization: Bearer $(gcloud auth print-access-token)" https://<data-fusion-instance-url>/api/v3/profiles/<profile-name> -X PUT -d @<path-to-json-file>

    Replace <data-fusion-instance-url> with your instance's API endpoint, <profile-name> with the name of the exported profile, and <path-to-json-file> with the path to the edited JSON file.
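Steps 2 and 3 can also be scripted. The following is a minimal Python sketch, not an official client: it assumes the exported profile keeps its settings as name/value objects under a provisioner.properties list (as in the snippet in step 2; verify against your own export), and it targets the same /api/v3/profiles endpoint as the cURL command. The instance URL, profile name, and access token are placeholders you must supply.

```python
import json
import urllib.request

# Master-node minimums recommended above.
MINIMUMS = {"masterCPUs": 2, "masterMemoryMB": 8192}


def raise_master_minimums(profile: dict) -> dict:
    """Raise masterCPUs/masterMemoryMB to the recommended minimums.

    Assumes the exported profile stores its settings as a list of
    {"name": ..., "value": ..., "isEditable": ...} objects under
    provisioner.properties; verify this against your own export.
    """
    for prop in profile.get("provisioner", {}).get("properties", []):
        minimum = MINIMUMS.get(prop["name"])
        if minimum is not None and int(prop["value"]) < minimum:
            prop["value"] = str(minimum)  # profile values are strings
    return profile


def build_update_request(instance_url: str, profile_name: str,
                         profile: dict, token: str) -> urllib.request.Request:
    """Build the PUT request from step 3; send it with urllib.request.urlopen.

    token is the output of `gcloud auth print-access-token`.
    """
    return urllib.request.Request(
        url=f"https://{instance_url}/api/v3/profiles/{profile_name}",
        data=json.dumps(profile).encode("utf-8"),
        method="PUT",
        headers={"Authorization": f"Bearer {token}",
                 "Content-Type": "application/json"},
    )
```

To use the sketch, load the exported JSON file with json.load, pass the result through raise_master_minimums, and send the request built by build_update_request with urllib.request.urlopen.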