As part of a set of technologies that contribute to a machine learning solution, AI Platform Prediction requires a development environment with carefully configured prerequisites and dependencies. This page describes the pieces that make up your development environment and the issues that go with them.
Python version support
AI Platform Prediction can run Python 2.7 or Python 3. You can set the Python version for your training jobs in a configuration file or with gcloud commands.
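For example, a training job submission can pin the Python version with the --python-version flag. This is only a sketch; the job name, package path, module name, and Cloud Storage bucket are placeholders:

```
# Submit a training job that runs on Python 3.7 with runtime version 2.1.
gcloud ai-platform jobs submit training my_training_job \
  --region=us-central1 \
  --runtime-version=2.1 \
  --python-version=3.7 \
  --package-path=./trainer \
  --module-name=trainer.task \
  --job-dir=gs://my-bucket/training-output
```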
Online and batch prediction work with trained models, regardless of whether they were trained using Python 2 or Python 3.
If you need to port your code between Python 2 and Python 3, you can use compatibility libraries like six to help. Six is included in the AI Platform Prediction runtime images by default.
Root access
If you are configuring your base development environment, you may need to use sudo to run your pip installation on macOS or Linux. However, if you use a virtual environment, you won't need root access, because installation happens outside of OS-protected system directories.
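For instance, a system-wide pip install may need sudo, while the same install inside a virtual environment does not. This sketch uses Python's built-in venv module; the environment name and package are arbitrary examples:

```
# System-wide install on macOS/Linux may require root access:
sudo pip install google-api-python-client

# Inside a virtual environment, no root access is needed:
python3 -m venv my-env
source my-env/bin/activate
pip install google-api-python-client
```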
Runtime environment
The runtime version that you select defines the configuration of the virtual machines that run your jobs in the cloud.
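For example, you typically choose a runtime version when you deploy a model version for prediction. The model name, version name, and bucket path below are placeholders:

```
# Deploy a model version on a specific runtime version and Python version.
gcloud ai-platform versions create v1 \
  --model=my_model \
  --origin=gs://my-bucket/model-dir/ \
  --runtime-version=2.1 \
  --python-version=3.7
```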
Python virtual environments
Python configuration can be complicated, especially if you develop other Python applications using different technologies on the same computer. You can simplify your package and version management by using a virtual environment to do your Python development.
A Python virtual environment manages a Python interpreter and packages that are isolated from your computer's default environment and dedicated to your project. You can use virtual environments to configure separate environments for each Python project you work on, each with its own version of Python and the modules you need.
There are several options for Python virtual environments. We recommend Anaconda (or its lighter-weight counterpart, Miniconda); both include the Conda package and environment manager. Anaconda is a popular suite of packages and tools that is commonly used by data scientists.
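A Conda-based setup might look like the following; the environment name and Python version are arbitrary choices:

```
# Create an isolated environment with its own Python interpreter, then activate it.
conda create -n ai-platform-env python=3.7
conda activate ai-platform-env
```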
Machine learning frameworks
AI Platform Training and AI Platform Prediction support the following frameworks:
- TensorFlow for training, online prediction, and batch prediction. See the guide to training and prediction with TensorFlow Estimator on AI Platform.
- scikit-learn and XGBoost for training and online prediction. See tutorials on using scikit-learn and XGBoost with AI Platform Prediction.
Google Cloud Platform account
You must have a Google Cloud account with billing enabled and a project with the AI Platform Training and Prediction API enabled to use any of the cloud functionality of AI Platform Prediction. If you are new to Google Cloud, read the overview of projects for more information.
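Assuming you have the gcloud CLI installed, you can point it at your project and enable the AI Platform Training and Prediction API from the command line; the project ID is a placeholder:

```
# Set the active project and enable the AI Platform Training and Prediction API.
gcloud config set project my-project-id
gcloud services enable ml.googleapis.com
```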
Cloud Compute regions
Processing resources are allocated by region and zone, which correspond to the data centers where the resources are physically located. You should typically run your one-off jobs, like model training, in the region closest to your physical location (or the physical location of your intended users), but note the following points:
- Note the available regions for AI Platform Prediction services, including model training on GPUs and other hardware, and online/batch prediction.
- You should always run your AI Platform Prediction jobs in the same region as the Cloud Storage bucket that you're using to read and write data for the job.
- You should use the Standard Storage class for any Cloud Storage buckets that you're using to read and write data for your AI Platform Prediction job (see the example after this list).
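For example, you might create a regional, Standard-class bucket in the same region where you plan to run your jobs; the bucket name and region here are placeholders:

```
# Create a Standard Storage bucket in the region where your jobs will run.
gsutil mb -l us-central1 -c standard gs://my-ai-platform-bucket
```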
What's next
- Work through the getting started guide for TensorFlow Estimator on AI Platform Prediction.
- Work through the getting started guide for scikit-learn and XGBoost on AI Platform Prediction.