Create a simple application with the API

This tutorial shows how to build a simple command-line application (in C#, Go, Java, Node.js, PHP, Python, or Ruby) using the Google BigQuery client libraries. The application queries the Stack Overflow public dataset and displays the results.

Objectives

Use the BigQuery client libraries to:

  • Authenticate to the BigQuery API
  • Run a query
  • Read the query results

Costs

The code sample in this tutorial executes a query that processes about 500 MB of data. Refer to the pricing reference for more details on BigQuery pricing.
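On-demand query pricing is billed per byte processed, so you can estimate the cost of this tutorial's query with simple arithmetic. The sketch below assumes a hypothetical on-demand rate of $5 per TB for illustration; check the pricing reference for current rates and the monthly free tier.

```python
# Rough on-demand cost estimate for a BigQuery query.
# The $5/TB rate is an illustrative assumption; see the pricing
# reference for current on-demand rates and the free tier.

TB = 10 ** 12  # on-demand pricing is billed per terabyte processed


def estimate_query_cost(bytes_processed, usd_per_tb=5.0):
    """Return the estimated on-demand cost, in USD, for a query."""
    return bytes_processed / TB * usd_per_tb


if __name__ == "__main__":
    # This tutorial's query processes about 500 MB.
    print(estimate_query_cost(500 * 10 ** 6))
```

In practice, you can find the exact number of bytes a query will process before running it by submitting the query as a dry run.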

Before you begin

  1. An understanding of BigQuery concepts and terminology.

    Try the BigQuery Quickstart to get familiar with common BigQuery tasks.

  2. A project with the BigQuery API enabled.

    Applications that use BigQuery must be associated with a Google Cloud Platform Console project with the BigQuery API enabled.

  3. A local development environment.

    C#

    To set up your local development environment, see Setting up a .NET development environment.

    Go

    Ensure you have a recent version of Go installed.

    Java

    Ensure you have a recent version of Maven or Gradle installed.

    Node.js

    To set up your local development environment, see Setting up a Node.js development environment.

    PHP

    Ensure you have a recent version of PHP and Composer installed.

    Python

    To set up your local development environment, see Setting up a Python development environment.

    Ruby

    Ensure you have a recent version of Ruby and Bundler installed.

Download the sample code

Download the code for the sample command-line application and navigate into the app directory:

  1. Clone the samples repository to your local machine.

    C#

    git clone https://github.com/GoogleCloudPlatform/dotnet-docs-samples

    Alternatively, you can download the sample as a zip file and extract it.

    Go

    go get -u -d github.com/GoogleCloudPlatform/golang-samples/bigquery/simpleapp

    Java

    git clone https://github.com/GoogleCloudPlatform/java-docs-samples

    Alternatively, you can download the sample as a zip file and extract it.

    Node.js

    git clone https://github.com/googleapis/nodejs-bigquery.git

    Alternatively, you can download the sample as a zip file and extract it.

    PHP

    git clone https://github.com/GoogleCloudPlatform/php-docs-samples

    Alternatively, you can download the sample as a zip file and extract it.

    Python

    git clone https://github.com/GoogleCloudPlatform/python-docs-samples

    Alternatively, you can download the sample as a zip file and extract it.

    Ruby

    git clone https://github.com/GoogleCloudPlatform/ruby-docs-samples

    Alternatively, you can download the sample as a zip file and extract it.

  2. Change to the directory that contains the sample code:

    C#

Double-click dotnet-docs-samples\bigquery\api\BigquerySample.sln to open the solution in Visual Studio 2017.

    Go

    cd $GOPATH/src/github.com/GoogleCloudPlatform/golang-samples/bigquery/simpleapp

    Java

    cd java-docs-samples/bigquery/cloud-client

    Node.js

    cd nodejs-bigquery/samples

    PHP

    cd php-docs-samples/bigquery/stackoverflow

    Python

    cd python-docs-samples/bigquery/cloud-client

    Ruby

    cd ruby-docs-samples/bigquery

Set up application dependencies

This sample uses the Google Cloud client libraries to make calls to the BigQuery API.

C#

For more on installing and creating a BigQuery client, refer to BigQuery Client Libraries.

The solution file for the sample includes the needed dependencies using NuGet.

Import the BigQuery libraries:

using System;
using Google.Cloud.BigQuery.V2;

Go

For more on installing and creating a BigQuery client, refer to BigQuery Client Libraries.

The go get command used to download the sample also downloads any needed dependencies.

Import the BigQuery libraries:

import (
	"context"
	"fmt"
	"io"
	"log"
	"os"

	"cloud.google.com/go/bigquery"
	"google.golang.org/api/iterator"
)

Java

For more on installing and creating a BigQuery client, refer to BigQuery Client Libraries.

The pom.xml file defines the dependencies, which Maven downloads when you build the sample.

<dependency>
  <groupId>com.google.cloud</groupId>
  <artifactId>google-cloud-bigquery</artifactId>
  <version>1.22.0</version>
</dependency>

If you use Gradle, run gradle init to convert the pom.xml file to a Gradle build file.

Import the BigQuery libraries:

import com.google.cloud.bigquery.BigQuery;
import com.google.cloud.bigquery.BigQueryOptions;
import com.google.cloud.bigquery.FieldValueList;
import com.google.cloud.bigquery.Job;
import com.google.cloud.bigquery.JobId;
import com.google.cloud.bigquery.JobInfo;
import com.google.cloud.bigquery.QueryJobConfiguration;
import com.google.cloud.bigquery.QueryResponse;
import com.google.cloud.bigquery.TableResult;
import java.util.UUID;

Node.js

For more on installing and creating a BigQuery client, refer to BigQuery Client Libraries.

The package.json file defines the dependencies for the sample.

{
  "name": "nodejs-docs-samples-bigquery",
  "version": "0.0.1",
  "private": true,
  "license": "Apache-2.0",
  "author": "Google LLC",
  "repository": "googleapis/nodejs-bigquery",
  "engines": {
    "node": ">=4"
  },
  "scripts": {
    "test": "repo-tools test run --cmd npm -- run cover",
    "ava": "ava -T 3m --verbose test/*.test.js system-test/*.test.js",
    "cover": "nyc --reporter=lcov --cache ava -T 3m --verbose test/*.test.js system-test/*.test.js && nyc report"
  },
  "dependencies": {
    "@google-cloud/bigquery": "1.2.0",
    "@google-cloud/storage": "1.5.1",
    "yargs": "10.0.3"
  },
  "devDependencies": {
    "@google-cloud/nodejs-repo-tools": "2.1.3",
    "ava": "0.24.0",
    "nyc": "11.3.0",
    "proxyquire": "1.8.0",
    "sinon": "4.1.3",
    "uuid": "3.1.0"
  }
}

Install the dependencies:

npm install

Import the BigQuery libraries:

// Imports the Google Cloud client library
const BigQuery = require('@google-cloud/bigquery');

PHP

For more on installing and creating a BigQuery client, refer to BigQuery Client Libraries.

The composer.json file defines the dependencies for the sample.

{
    "require": {
        "google/cloud-bigquery": "^1.0"
    },
    "require-dev": {
        "phpunit/phpunit": "~4.8"
    }
}

Install the dependencies:

composer install

Import the BigQuery libraries:

use Google\Cloud\BigQuery\BigQueryClient;

Python

For more on installing and creating a BigQuery client, refer to BigQuery Client Libraries.

Install the dependencies with pip.

pip install -r requirements.txt

The dependencies used in this sample are defined in the requirements.txt file.

google-cloud-bigquery==0.31.0
google-auth-oauthlib==0.2.0
pytz==2018.3

Conda users may wish to use the community-supported BigQuery package for Conda in the conda-forge channel.

Import the BigQuery libraries:

from google.cloud import bigquery

Ruby

For more on installing and creating a BigQuery client, refer to BigQuery Client Libraries.

The Gemfile file defines the dependencies for the sample.

# Copyright 2016 Google, Inc
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

source "https://rubygems.org"

gem "google-cloud-bigquery"

group :test do
  gem "rspec"
  gem "rspec-retry"
  gem "google-cloud-storage"
end

Install the dependencies:

bundle install

Import the BigQuery libraries:

require "google/cloud/bigquery"

Create a BigQuery service object

Use application default credentials to authenticate and make authorized requests to BigQuery.

C#

Use the BigQueryClient.Create() function to create the BigQuery service object.

// By default, the Google.Cloud.BigQuery.V2 library client will authenticate 
// using the service account file (created in the Google Developers 
// Console) specified by the GOOGLE_APPLICATION_CREDENTIALS 
// environment variable. If you are running on
// a Google Compute Engine VM, authentication is completely 
// automatic.
var client = BigQueryClient.Create(projectId);

Go

Use the bigquery.NewClient() function to create the BigQuery service object.

ctx := context.Background()

client, err := bigquery.NewClient(ctx, proj)
if err != nil {
	return nil, err
}

Java

Use the BigQueryOptions.getDefaultInstance() function to use the default authentication options. Use the BigQueryOptions.getService() function to create the BigQuery service object.

BigQuery bigquery = BigQueryOptions.getDefaultInstance().getService();

Node.js

Instantiate the BigQuery class to create the BigQuery service object.

/**
 * TODO(developer): Uncomment the following lines before running the sample.
 */
// const projectId = "your-project-id";

// Creates a client
const bigquery = new BigQuery({
  projectId: projectId,
});

PHP

Instantiate the BigQueryClient class to create the BigQuery service object.

$bigQuery = new BigQueryClient([
    'projectId' => $projectId,
]);

Python

Instantiate the bigquery.Client class to create the BigQuery service object.

client = bigquery.Client()

Ruby

Use the Google::Cloud::Bigquery.new function to create the BigQuery service object.

# This uses Application Default Credentials to authenticate.
# @see https://cloud.google.com/bigquery/docs/authentication/getting-started
bigquery = Google::Cloud::Bigquery.new

Refer to the authentication guide for other ways to authenticate to the BigQuery API.

Running queries

Query the Stack Overflow public dataset to find the most viewed questions tagged with google-bigquery.

SELECT
  CONCAT(
    'https://stackoverflow.com/questions/',
    CAST(id as STRING)) as url,
  view_count
FROM `bigquery-public-data.stackoverflow.posts_questions`
WHERE tags like '%google-bigquery%'
ORDER BY view_count DESC
LIMIT 10
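
The CONCAT and CAST expressions in the SELECT list only build each question's URL from its numeric id. The equivalent logic in plain Python is:

```python
def question_url(question_id):
    """Mirror the query's
    CONCAT('https://stackoverflow.com/questions/', CAST(id AS STRING))
    expression: format a question id as a Stack Overflow URL."""
    return "https://stackoverflow.com/questions/" + str(question_id)


print(question_url(12345))
# https://stackoverflow.com/questions/12345
```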

This query uses standard SQL syntax, which is described in the query reference guide. The client libraries default to standard SQL syntax. See Enabling standard SQL to change SQL dialects.

Running the query

Query using the authenticated BigQuery client.

C#

For more on installing and creating a BigQuery client, refer to BigQuery Client Libraries.

string query = @"SELECT
    CONCAT(
        'https://stackoverflow.com/questions/',
        CAST(id as STRING)) as url, view_count
    FROM `bigquery-public-data.stackoverflow.posts_questions`
    WHERE tags like '%google-bigquery%'
    ORDER BY view_count DESC
    LIMIT 10";
var result = client.ExecuteQuery(query, parameters: null);

Go

Use the bigquery.Query() function to define a query and Query.Read() function to submit the query and get the results.

query := client.Query(
	`SELECT
		CONCAT(
			'https://stackoverflow.com/questions/',
			CAST(id as STRING)) as url,
		view_count
	FROM ` + "`bigquery-public-data.stackoverflow.posts_questions`" + `
	WHERE tags like '%google-bigquery%'
	ORDER BY view_count DESC
	LIMIT 10;`)
return query.Read(ctx)

Java

Define the query with a QueryJobConfiguration instance. Start the query job with the BigQuery.create() method.

QueryJobConfiguration queryConfig =
    QueryJobConfiguration.newBuilder(
      "SELECT "
          + "CONCAT('https://stackoverflow.com/questions/', CAST(id as STRING)) as url, "
          + "view_count "
          + "FROM `bigquery-public-data.stackoverflow.posts_questions` "
          + "WHERE tags like '%google-bigquery%' "
          + "ORDER BY view_count DESC LIMIT 10")
        // Use standard SQL syntax for queries.
        // See: https://cloud.google.com/bigquery/sql-reference/
        .setUseLegacySql(false)
        .build();

// Create a job ID so that we can safely retry.
JobId jobId = JobId.of(UUID.randomUUID().toString());
Job queryJob = bigquery.create(JobInfo.newBuilder(queryConfig).setJobId(jobId).build());

// Wait for the query to complete.
queryJob = queryJob.waitFor();

// Check for errors
if (queryJob == null) {
  throw new RuntimeException("Job no longer exists");
} else if (queryJob.getStatus().getError() != null) {
  // You can also look at queryJob.getStatus().getExecutionErrors() for all
  // errors, not just the latest one.
  throw new RuntimeException(queryJob.getStatus().getError().toString());
}

Node.js

Use the BigQuery.query() method to start the query.

// The SQL query to run
const sqlQuery = `SELECT
  CONCAT(
    'https://stackoverflow.com/questions/',
    CAST(id as STRING)) as url,
  view_count
  FROM \`bigquery-public-data.stackoverflow.posts_questions\`
  WHERE tags like '%google-bigquery%'
  ORDER BY view_count DESC
  LIMIT 10`;

// Query options list: https://cloud.google.com/bigquery/docs/reference/v2/jobs/query
const options = {
  query: sqlQuery,
  useLegacySql: false, // Use standard SQL syntax for queries.
};

// Runs the query
bigquery
  .query(options)
  .then(results => {
    const rows = results[0];
    printResult(rows);
  })
  .catch(err => {
    console.error('ERROR:', err);
  });

PHP

Create a query configuration and use the BigQueryClient.startQuery() method to start the query.

$query = <<<ENDSQL
SELECT
  CONCAT(
    'https://stackoverflow.com/questions/',
    CAST(id as STRING)) as url,
  view_count
FROM `bigquery-public-data.stackoverflow.posts_questions`
WHERE tags like '%google-bigquery%'
ORDER BY view_count DESC
LIMIT 10;
ENDSQL;
$queryJobConfig = $bigQuery->query($query);
$queryResults = $bigQuery->runQuery($queryJobConfig);

Python

Use the Client.query() method to start the query.

query_job = client.query("""
    SELECT
      CONCAT(
        'https://stackoverflow.com/questions/',
        CAST(id as STRING)) as url,
      view_count
    FROM `bigquery-public-data.stackoverflow.posts_questions`
    WHERE tags like '%google-bigquery%'
    ORDER BY view_count DESC
    LIMIT 10""")

results = query_job.result()  # Waits for job to complete.

Ruby

Use the Google::Cloud::Bigquery::Project.query function to start a query and wait for the results.

sql     = "SELECT " +
          "CONCAT('https://stackoverflow.com/questions/', " +
          "       CAST(id as STRING)) as url, view_count " +
          "FROM `bigquery-public-data.stackoverflow.posts_questions` " +
          "WHERE tags like '%google-bigquery%' " +
          "ORDER BY view_count DESC LIMIT 10"
results = bigquery.query sql

For more examples of running BigQuery queries, see the BigQuery documentation.

Displaying the query result

Wait for the query to complete and display the results.

C#

For more on installing and creating a BigQuery client, refer to BigQuery Client Libraries.

Console.Write("\nQuery Results:\n------------\n");
foreach (var row in result)
{
    Console.WriteLine($"{row["url"]}: {row["view_count"]} views");
}

Go

Use the RowIterator.Next() function to load each row into a struct pointer.

type StackOverflowRow struct {
	URL       string `bigquery:"url"`
	ViewCount int64  `bigquery:"view_count"`
}

// printResults prints results from a query to the Stack Overflow public dataset.
func printResults(w io.Writer, iter *bigquery.RowIterator) error {
	for {
		var row StackOverflowRow
		err := iter.Next(&row)
		if err == iterator.Done {
			return nil
		}
		if err != nil {
			return err
		}

		fmt.Fprintf(w, "url: %s views: %d\n", row.URL, row.ViewCount)
	}
}

Java

Iterate over the TableResult to get all the rows in the results. The iterator automatically handles pagination. Each FieldValueList exposes the columns by numeric index or column name.

// Get the results.
TableResult result = queryJob.getQueryResults();

// Print all pages of the results.
for (FieldValueList row : result.iterateAll()) {
  String url = row.get("url").getStringValue();
  long viewCount = row.get("view_count").getLongValue();
  System.out.printf("url: %s views: %d%n", url, viewCount);
}

Node.js

Query results are returned as an array of rows, where each row is an object keyed by column name.

console.log('Query Results:');
rows.forEach(function(row) {
  let url = row['url'];
  let viewCount = row['view_count'];
  console.log(`url: ${url}, ${viewCount} views`);
});

PHP

Use the QueryResults.isComplete() method to check that the query has finished. Each row in the query results is an associative array.

if ($queryResults->isComplete()) {
    $i = 0;
    $rows = $queryResults->rows();
    foreach ($rows as $row) {
        printf('--- Row %s ---' . PHP_EOL, ++$i);
        printf('url: %s, %s views' . PHP_EOL, $row['url'], $row['view_count']);
    }
    printf('Found %s row(s)' . PHP_EOL, $i);
} else {
    throw new Exception('The query failed to complete');
}

Python

Iterate over the RowIterator to get all the rows in the results. The iterator automatically handles pagination. Each Row exposes the columns by numeric index, column name, or as Python attributes.

for row in results:
    print("{} : {} views".format(row.url, row.view_count))

Ruby

The Google::Cloud::Bigquery::Data class exposes each row as a hash.

results.each do |row|
  puts "#{row[:url]}: #{row[:view_count]} views"
end

Learn more about working with data rows in the BigQuery documentation.

Complete source code

Here is the complete source code for the sample.

C#

For more on installing and creating a BigQuery client, refer to BigQuery Client Libraries.

using System;
using Google.Cloud.BigQuery.V2;

namespace GoogleCloudSamples
{
    public class BigquerySample
    {
        const string usage = @"Usage:
BigquerySample <project_id>";

        private static void Main(string[] args)
        {
            string projectId = null;
            if (args.Length == 0)
            {
                Console.WriteLine(usage);
            }
            else
            {
                projectId = args[0];
                // By default, the Google.Cloud.BigQuery.V2 library client will authenticate 
                // using the service account file (created in the Google Developers 
                // Console) specified by the GOOGLE_APPLICATION_CREDENTIALS 
                // environment variable. If you are running on
                // a Google Compute Engine VM, authentication is completely 
                // automatic.
                var client = BigQueryClient.Create(projectId);
                string query = @"SELECT
                    CONCAT(
                        'https://stackoverflow.com/questions/',
                        CAST(id as STRING)) as url, view_count
                    FROM `bigquery-public-data.stackoverflow.posts_questions`
                    WHERE tags like '%google-bigquery%'
                    ORDER BY view_count DESC
                    LIMIT 10";
                var result = client.ExecuteQuery(query, parameters: null);
                Console.Write("\nQuery Results:\n------------\n");
                foreach (var row in result)
                {
                    Console.WriteLine($"{row["url"]}: {row["view_count"]} views");
                }
            }
            Console.WriteLine("\nPress any key...");
            Console.ReadKey();
        }
    }
}

Go

For more on installing and creating a BigQuery client, refer to BigQuery Client Libraries.

package main

import (
	"context"
	"fmt"
	"io"
	"log"
	"os"

	"cloud.google.com/go/bigquery"
	"google.golang.org/api/iterator"
)

func main() {
	proj := os.Getenv("GOOGLE_CLOUD_PROJECT")
	if proj == "" {
		fmt.Println("GOOGLE_CLOUD_PROJECT environment variable must be set.")
		os.Exit(1)
	}

	rows, err := query(proj)
	if err != nil {
		log.Fatal(err)
	}
	if err := printResults(os.Stdout, rows); err != nil {
		log.Fatal(err)
	}
}

// query returns a slice of the results of a query.
func query(proj string) (*bigquery.RowIterator, error) {
	ctx := context.Background()

	client, err := bigquery.NewClient(ctx, proj)
	if err != nil {
		return nil, err
	}

	query := client.Query(
		`SELECT
			CONCAT(
				'https://stackoverflow.com/questions/',
				CAST(id as STRING)) as url,
			view_count
		FROM ` + "`bigquery-public-data.stackoverflow.posts_questions`" + `
		WHERE tags like '%google-bigquery%'
		ORDER BY view_count DESC
		LIMIT 10;`)
	return query.Read(ctx)
}

type StackOverflowRow struct {
	URL       string `bigquery:"url"`
	ViewCount int64  `bigquery:"view_count"`
}

// printResults prints results from a query to the Stack Overflow public dataset.
func printResults(w io.Writer, iter *bigquery.RowIterator) error {
	for {
		var row StackOverflowRow
		err := iter.Next(&row)
		if err == iterator.Done {
			return nil
		}
		if err != nil {
			return err
		}

		fmt.Fprintf(w, "url: %s views: %d\n", row.URL, row.ViewCount)
	}
}

Java

For more on installing and creating a BigQuery client, refer to BigQuery Client Libraries.

import com.google.cloud.bigquery.BigQuery;
import com.google.cloud.bigquery.BigQueryOptions;
import com.google.cloud.bigquery.FieldValueList;
import com.google.cloud.bigquery.Job;
import com.google.cloud.bigquery.JobId;
import com.google.cloud.bigquery.JobInfo;
import com.google.cloud.bigquery.QueryJobConfiguration;
import com.google.cloud.bigquery.QueryResponse;
import com.google.cloud.bigquery.TableResult;
import java.util.UUID;

public class SimpleApp {
  public static void main(String... args) throws Exception {
    BigQuery bigquery = BigQueryOptions.getDefaultInstance().getService();
    QueryJobConfiguration queryConfig =
        QueryJobConfiguration.newBuilder(
          "SELECT "
              + "CONCAT('https://stackoverflow.com/questions/', CAST(id as STRING)) as url, "
              + "view_count "
              + "FROM `bigquery-public-data.stackoverflow.posts_questions` "
              + "WHERE tags like '%google-bigquery%' "
              + "ORDER BY view_count DESC LIMIT 10")
            // Use standard SQL syntax for queries.
            // See: https://cloud.google.com/bigquery/sql-reference/
            .setUseLegacySql(false)
            .build();

    // Create a job ID so that we can safely retry.
    JobId jobId = JobId.of(UUID.randomUUID().toString());
    Job queryJob = bigquery.create(JobInfo.newBuilder(queryConfig).setJobId(jobId).build());

    // Wait for the query to complete.
    queryJob = queryJob.waitFor();

    // Check for errors
    if (queryJob == null) {
      throw new RuntimeException("Job no longer exists");
    } else if (queryJob.getStatus().getError() != null) {
      // You can also look at queryJob.getStatus().getExecutionErrors() for all
      // errors, not just the latest one.
      throw new RuntimeException(queryJob.getStatus().getError().toString());
    }

    // Get the results.
    TableResult result = queryJob.getQueryResults();

    // Print all pages of the results.
    for (FieldValueList row : result.iterateAll()) {
      String url = row.get("url").getStringValue();
      long viewCount = row.get("view_count").getLongValue();
      System.out.printf("url: %s views: %d%n", url, viewCount);
    }
  }
}

Node.js

For more on installing and creating a BigQuery client, refer to BigQuery Client Libraries.

function printResult(rows) {
  console.log('Query Results:');
  rows.forEach(function(row) {
    let url = row['url'];
    let viewCount = row['view_count'];
    console.log(`url: ${url}, ${viewCount} views`);
  });
}

function queryStackOverflow(projectId) {
  // Imports the Google Cloud client library
  const BigQuery = require('@google-cloud/bigquery');

  /**
   * TODO(developer): Uncomment the following lines before running the sample.
   */
  // const projectId = "your-project-id";

  // Creates a client
  const bigquery = new BigQuery({
    projectId: projectId,
  });

  // The SQL query to run
  const sqlQuery = `SELECT
    CONCAT(
      'https://stackoverflow.com/questions/',
      CAST(id as STRING)) as url,
    view_count
    FROM \`bigquery-public-data.stackoverflow.posts_questions\`
    WHERE tags like '%google-bigquery%'
    ORDER BY view_count DESC
    LIMIT 10`;

  // Query options list: https://cloud.google.com/bigquery/docs/reference/v2/jobs/query
  const options = {
    query: sqlQuery,
    useLegacySql: false, // Use standard SQL syntax for queries.
  };

  // Runs the query
  bigquery
    .query(options)
    .then(results => {
      const rows = results[0];
      printResult(rows);
    })
    .catch(err => {
      console.error('ERROR:', err);
    });
}

PHP

For more on installing and creating a BigQuery client, refer to BigQuery Client Libraries.

require __DIR__ . '/vendor/autoload.php';

use Google\Cloud\BigQuery\BigQueryClient;


// get the project ID as the first argument
if (2 != count($argv)) {
    die("Usage: php stackoverflow.php YOUR_PROJECT_ID\n");
}

$projectId = $argv[1];

$bigQuery = new BigQueryClient([
    'projectId' => $projectId,
]);
$query = <<<ENDSQL
SELECT
  CONCAT(
    'https://stackoverflow.com/questions/',
    CAST(id as STRING)) as url,
  view_count
FROM `bigquery-public-data.stackoverflow.posts_questions`
WHERE tags like '%google-bigquery%'
ORDER BY view_count DESC
LIMIT 10;
ENDSQL;
$queryJobConfig = $bigQuery->query($query);
$queryResults = $bigQuery->runQuery($queryJobConfig);

if ($queryResults->isComplete()) {
    $i = 0;
    $rows = $queryResults->rows();
    foreach ($rows as $row) {
        printf('--- Row %s ---' . PHP_EOL, ++$i);
        printf('url: %s, %s views' . PHP_EOL, $row['url'], $row['view_count']);
    }
    printf('Found %s row(s)' . PHP_EOL, $i);
} else {
    throw new Exception('The query failed to complete');
}

Python

For more on installing and creating a BigQuery client, refer to BigQuery Client Libraries.

from google.cloud import bigquery


def query_stackoverflow():
    client = bigquery.Client()
    query_job = client.query("""
        SELECT
          CONCAT(
            'https://stackoverflow.com/questions/',
            CAST(id as STRING)) as url,
          view_count
        FROM `bigquery-public-data.stackoverflow.posts_questions`
        WHERE tags like '%google-bigquery%'
        ORDER BY view_count DESC
        LIMIT 10""")

    results = query_job.result()  # Waits for job to complete.

    for row in results:
        print("{} : {} views".format(row.url, row.view_count))


if __name__ == '__main__':
    query_stackoverflow()

Ruby

For more on installing and creating a BigQuery client, refer to BigQuery Client Libraries.

require "google/cloud/bigquery"

# This uses Application Default Credentials to authenticate.
# @see https://cloud.google.com/bigquery/docs/authentication/getting-started
bigquery = Google::Cloud::Bigquery.new

sql     = "SELECT " +
          "CONCAT('https://stackoverflow.com/questions/', " +
          "       CAST(id as STRING)) as url, view_count " +
          "FROM `bigquery-public-data.stackoverflow.posts_questions` " +
          "WHERE tags like '%google-bigquery%' " +
          "ORDER BY view_count DESC LIMIT 10"
results = bigquery.query sql

results.each do |row|
  puts "#{row[:url]}: #{row[:view_count]} views"
end
