You can use the BigQuery connector to enable programmatic read/write access to Google BigQuery from Hadoop. This is ideal for processing data that is already stored in BigQuery. The connector does not expose command-line access to BigQuery.
The BigQuery connector for Hadoop downloads data into your Google Cloud Storage bucket before running a Hadoop job, and deletes that data from Cloud Storage after the job completes successfully. You are charged for this storage according to Cloud Storage pricing. To avoid excess charges, check your Cloud Storage account and make sure that unneeded temporary files are removed; a failed job, for example, can leave staged data behind. By downloading the BigQuery connector for Hadoop, you acknowledge and accept these additional terms.
When using the connector you will also be charged for any associated BigQuery usage fees.
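To follow the advice above about removing unneeded temporary files, you can inspect and clean the bucket with gsutil. This is a minimal sketch: the bucket name foo-bucket matches the bdutil example below, but the staging path shown is hypothetical and depends on your job configuration.

```shell
# List all objects in the bucket passed to bdutil --bucket, to spot
# temporary files left behind by the connector.
gsutil ls -r gs://foo-bucket/

# Remove a leftover temporary directory once you have confirmed the
# data is no longer needed (the path shown here is hypothetical).
gsutil -m rm -r gs://foo-bucket/hadoop/tmp/
```

Because you are billed for this storage until it is deleted, it is worth checking the bucket after any job that did not complete successfully.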
Getting the connector
Download the BigQuery connector. A javadoc reference for the connector is also available.
Using the connector
To configure and enable BigQuery access, modify bigquery_env.sh and pass it to bdutil with the --env_var_files flag:
./bdutil --bucket foo-bucket -n 5 -P my-cluster --env_var_files bigquery_env.sh deploy
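After the deployment finishes, you can confirm that the connector was installed by opening a shell on the cluster's master node. This is a sketch, not an authoritative check: the jar location shown is an assumption and may differ between bdutil and connector versions.

```shell
# Open an SSH session on the cluster's master node via bdutil.
./bdutil --bucket foo-bucket -P my-cluster shell

# On the master, look for the connector jar on the Hadoop classpath.
# The install directory below is an assumption; adjust for your setup.
ls /home/hadoop/hadoop-install/lib/bigquery-connector-*.jar
```

If the jar is present, Hadoop jobs on the cluster can read from and write to BigQuery through the connector's input and output formats.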