Install and run a Jupyter notebook
Install, run, and access a Jupyter notebook on a Cloud Dataproc cluster.
Monte Carlo methods using Cloud Dataproc and Apache Spark
Run Monte Carlo simulations in Python and Scala with Cloud Dataproc and Apache Spark.
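The tutorial's own simulation code isn't reproduced here, but the core idea can be sketched in plain Python with no Spark dependency: estimate pi by sampling random points in the unit square and counting how many land inside the quarter circle. Function and parameter names below are illustrative, not from the tutorial.

```python
import math
import random

def estimate_pi(num_samples, seed=42):
    """Monte Carlo estimate of pi: sample points uniformly in the
    unit square and count those inside the unit quarter circle."""
    rng = random.Random(seed)  # seeded for a reproducible estimate
    inside = 0
    for _ in range(num_samples):
        x, y = rng.random(), rng.random()
        if x * x + y * y <= 1.0:
            inside += 1
    # area of quarter circle / area of square = pi/4
    return 4.0 * inside / num_samples

print(estimate_pi(100_000))
```

In the tutorial, the same sampling loop is distributed across a Dataproc cluster with Spark, which parallelizes the independent samples across workers.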
Write and run Spark Scala jobs
Create and submit Spark Scala jobs with Cloud Dataproc.
Use the Python client library
Use the Google API Python client library to programmatically interact with Cloud Dataproc, including creating clusters and jobs.
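As a rough sketch of what "programmatically creating a cluster" looks like, the code below builds the JSON request body a `clusters.create` call expects and shows (commented out, since it needs credentials and a real project) how it would be submitted with the client library. The project, region, and cluster names are placeholders, not values from the tutorial.

```python
def build_cluster_config(project_id, region, cluster_name):
    """Return a request body for the Dataproc clusters.create API.
    Machine types and instance counts here are illustrative."""
    zone_uri = (
        "https://www.googleapis.com/compute/v1/projects/"
        f"{project_id}/zones/{region}-a"
    )
    return {
        "projectId": project_id,
        "clusterName": cluster_name,
        "config": {
            "gceClusterConfig": {"zoneUri": zone_uri},
            "masterConfig": {"numInstances": 1,
                             "machineTypeUri": "n1-standard-2"},
            "workerConfig": {"numInstances": 2,
                             "machineTypeUri": "n1-standard-2"},
        },
    }

cluster = build_cluster_config("my-project", "us-central1", "demo-cluster")

# With application default credentials configured, the request would be
# submitted roughly like this:
# from googleapiclient import discovery
# dataproc = discovery.build("dataproc", "v1")
# dataproc.projects().regions().clusters().create(
#     projectId="my-project", region="us-central1", body=cluster
# ).execute()
```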
Google BigQuery and Spark ML
Use Cloud Dataproc, BigQuery, and Apache Spark ML for machine learning.
Write a MapReduce job with the BigQuery Connector
Follow example code that shows you how to write a MapReduce job with the BigQuery Connector for Apache Hadoop.
Use the BigQuery Connector with Apache Spark
Follow example code that uses the BigQuery Connector for Apache Hadoop with Apache Spark.