Running Django on Kubernetes Engine

It's easy to get started developing Django apps running on Google Kubernetes Engine, and because the apps you create run on the same infrastructure that powers all of Google's products, you can be confident that they will scale to serve all of your users, whether there are a few or millions of them.

This tutorial assumes you are familiar with Django web development. It walks you through deploying the official Django tutorial app.

It's a good idea to work through that tutorial before this one, especially if you are new to Django development. The app's models represent polls that contain questions, and you can interact with the models using the Django admin console.

This tutorial requires Python 3.4 or later. You also need to have Docker installed.

Before you begin


  1. Create a project in the Google Cloud Platform Console.
    If you haven't already created a project, create one now. Projects enable you to manage all Google Cloud Platform resources for your app, including deployment, access control, billing, and services.
    1. Open the GCP Console.
    2. In the drop-down menu at the top, select Create a project.
    3. Click Show advanced options. Under App Engine location, select a United States location.
    4. Give your project a name.
    5. Make a note of the project ID, which might be different from the project name. The project ID is used in commands and in configurations.
  2. Enable billing for your project, and sign up for a free trial.

    If you haven't already enabled billing for your project, enable billing now, and sign up for a free trial. Enabling billing allows the app to consume billable resources such as running instances and storing data. During your free trial period, you won't be billed for any services.

  3. Install the Cloud SDK.

    If you haven't already installed the Cloud SDK, install and initialize the Cloud SDK now. The Cloud SDK contains tools and libraries that enable you to create and manage resources on GCP.

  4. Enable APIs for your project.

    Enable the APIs that this tutorial uses: the Cloud SQL Admin API and the Google Kubernetes Engine API.

Log in to gcloud

Acquire new credentials to use the Cloud SQL Admin API:

gcloud auth application-default login
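This command writes Application Default Credentials to a well-known location that the client libraries search at runtime. A rough sketch of that lookup order in Python (the environment-variable override and the gcloud default path are documented behavior; the helper name is ours, and the path shown is the Linux/macOS one, Windows uses %APPDATA% instead):

```python
import os
from pathlib import Path

def adc_path():
    """Return where client libraries look for Application Default Credentials.

    The GOOGLE_APPLICATION_CREDENTIALS environment variable takes precedence;
    otherwise, the file written by `gcloud auth application-default login`
    under ~/.config/gcloud is used, if it exists.
    """
    explicit = os.environ.get("GOOGLE_APPLICATION_CREDENTIALS")
    if explicit:
        return Path(explicit)
    default = Path.home() / ".config" / "gcloud" / "application_default_credentials.json"
    return default if default.exists() else None
```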

Downloading and running the app

After you've completed the prerequisites, download and deploy the Django sample app. The following sections guide you through configuring, running, and deploying the sample.

Cloning the Django app

The code for the Django sample app is in the GCP Python Samples repository on GitHub.

Clone the repository to your local machine:

git clone https://github.com/GoogleCloudPlatform/python-docs-samples.git

Go to the directory that contains the sample code:

cd python-docs-samples/kubernetes_engine/django_tutorial

Alternatively, you can download the sample as a zip and extract it.

Setting up your local environment

When deployed, your app uses the Cloud SQL Proxy, which runs as a sidecar container in your GKE deployment, to communicate with your Cloud SQL instance. However, to test your app locally, you must install and use a local copy of the proxy in your development environment.

Learn more about the Cloud SQL Proxy.

To perform basic admin tasks on your Cloud SQL instance, you can use the PostgreSQL client.

Enable the Cloud SQL Admin API

Before using Cloud SQL, you must enable the Cloud SQL Admin API:

gcloud services enable sqladmin

Installing the Cloud SQL Proxy

Download and install the Cloud SQL Proxy. The Cloud SQL Proxy connects to your Cloud SQL instance when running locally.

Linux 64-bit

  1. Download the proxy:
wget https://dl.google.com/cloudsql/cloud_sql_proxy.linux.amd64 -O cloud_sql_proxy
  2. Make the proxy executable:
    chmod +x cloud_sql_proxy

Linux 32-bit

  1. Download the proxy:
wget https://dl.google.com/cloudsql/cloud_sql_proxy.linux.386 -O cloud_sql_proxy
  2. Make the proxy executable:
    chmod +x cloud_sql_proxy

macOS 64-bit

  1. Download the proxy:
curl -o cloud_sql_proxy https://dl.google.com/cloudsql/cloud_sql_proxy.darwin.amd64
  2. Make the proxy executable:
    chmod +x cloud_sql_proxy

macOS 32-bit

  1. Download the proxy:
curl -o cloud_sql_proxy https://dl.google.com/cloudsql/cloud_sql_proxy.darwin.386
  2. Make the proxy executable:
    chmod +x cloud_sql_proxy

Windows 64-bit

Download https://dl.google.com/cloudsql/cloud_sql_proxy_x64.exe and rename the file to cloud_sql_proxy.exe.

Windows 32-bit

Download https://dl.google.com/cloudsql/cloud_sql_proxy_x86.exe and rename the file to cloud_sql_proxy.exe.
If your operating system isn't included here, you can also compile the proxy from source.

Creating a Cloud SQL instance

  1. Create a Cloud SQL for PostgreSQL instance.

    Name the instance polls-instance or similar. It can take a few minutes for the instance to be ready. When the instance is ready, it's visible in the instances list.

  2. Use the Cloud SDK to run the following command where [YOUR_INSTANCE_NAME] represents the name of your Cloud SQL instance:
    gcloud sql instances describe [YOUR_INSTANCE_NAME]

    In the output, note the value shown for [CONNECTION_NAME].
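The [CONNECTION_NAME] value has the form [PROJECT_ID]:[REGION]:[INSTANCE_NAME]. If you want to script against it, a minimal sketch of splitting it into its parts (the helper name is illustrative):

```python
def parse_connection_name(connection_name):
    """Split a Cloud SQL connection name of the form
    project:region:instance into its three components."""
    project, region, instance = connection_name.split(":")
    return {"project": project, "region": region, "instance": instance}
```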


Initializing your Cloud SQL instance

  1. Start the Cloud SQL Proxy by using the [CONNECTION_NAME] value from the previous step.

    Linux/macOS:

    ./cloud_sql_proxy -instances="[YOUR_INSTANCE_CONNECTION_NAME]"=tcp:5432

    Windows:

    cloud_sql_proxy.exe -instances="[YOUR_INSTANCE_CONNECTION_NAME]"=tcp:5432

    Replace [YOUR_INSTANCE_CONNECTION_NAME] with the [CONNECTION_NAME] value that you recorded in the previous step.

    This step establishes a connection from your local computer to your Cloud SQL instance for local testing purposes. Keep the Cloud SQL Proxy running the entire time you test your app locally.

  2. Create a Cloud SQL user and database:

    GCP Console

    1. Create a new database by using the GCP Console for your Cloud SQL instance polls-instance. For example, you can use the name polls.
    2. Create a new user by using the GCP Console for your Cloud SQL instance polls-instance.

    Postgres client

    1. In a separate command-line tab, install the Postgres client.
      sudo apt-get install postgresql
    2. Use the Postgres client or similar program to connect to your instance. When prompted, use the root password you configured.
psql --host 127.0.0.1 --user postgres --password
    3. Create the required database, user, and access permissions in your Cloud SQL instance by using the following commands. Replace [POSTGRES_USER] and [POSTGRES_PASSWORD] with the username and password you want to use.
      CREATE DATABASE polls;
      CREATE USER [POSTGRES_USER] WITH PASSWORD '[POSTGRES_PASSWORD]';
      GRANT ALL PRIVILEGES ON DATABASE polls TO [POSTGRES_USER];
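While the proxy from step 1 is running, it listens on local TCP port 5432, so you can sanity-check the connection with nothing more than the Python standard library before reaching for psql (the function name is ours):

```python
import socket

def proxy_is_listening(host="127.0.0.1", port=5432, timeout=1.0):
    """Return True if something (e.g. the Cloud SQL Proxy) accepts TCP
    connections on host:port; False on refusal or timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False
```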

Creating a service account

The proxy requires a service account with Editor privileges for your Cloud SQL instance. For more information about service accounts, see the GCP Auth Guide.

  1. Go to the Service accounts page of the Google Cloud Platform Console.


  2. If needed, select the project that contains your Cloud SQL instance.
  3. Click Create service account.
  4. In the Create service account dialog, provide a descriptive name for the service account.
  5. For Role, select one of the following roles:
    • Cloud SQL > Cloud SQL Client
    • Cloud SQL > Cloud SQL Editor
    • Cloud SQL > Cloud SQL Admin
  6. Change the Service account ID to a unique value that you will recognize so you can easily find this service account later if needed.
  7. Click Furnish a new private key.
  8. The default key type is JSON, which is the correct value to use.
  9. Click Create.

    The private key file is downloaded to your machine. You can move it to another location. Keep the key file secure.

Configuring the database settings

  1. Set environment variables for database access for local testing.


Linux/macOS:

export DATABASE_USER=<your-database-user>
export DATABASE_PASSWORD=<your-database-password>

Windows:

set DATABASE_USER=<your-database-user>
set DATABASE_PASSWORD=<your-database-password>
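The sample's settings read these variables with os.getenv. As a hedged sketch (not part of the sample code), you could build the Django DATABASES entry from the environment and fail fast when a variable is missing, which gives a clearer error than a failed database connection later:

```python
import os

def database_settings():
    """Build a Django DATABASES['default']-style dict from the environment,
    raising immediately if the credentials were not exported."""
    user = os.getenv("DATABASE_USER")
    password = os.getenv("DATABASE_PASSWORD")
    if not user or not password:
        raise RuntimeError("Set DATABASE_USER and DATABASE_PASSWORD first")
    return {
        "ENGINE": "django.db.backends.postgresql",
        "NAME": "polls",
        "USER": user,
        "PASSWORD": password,
        "HOST": "127.0.0.1",  # the local Cloud SQL Proxy
        "PORT": "5432",
    }
```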

Setting up your GKE configuration

  1. This application is represented in a single Kubernetes configuration, called polls. In polls.yaml replace <your-project-id> with your project ID.

  2. In polls.yaml replace <your-cloudsql-connection-string> with the connectionName value output by the following command:

    gcloud sql instances describe [YOUR_INSTANCE_NAME]
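Both substitutions above are plain string replacements, so if you prefer to script them rather than edit polls.yaml by hand, a minimal sketch (the helper name is ours):

```python
def fill_template(text, project_id, connection_name):
    """Substitute the two polls.yaml placeholders, same effect as editing
    the file by hand."""
    return (text
            .replace("<your-project-id>", project_id)
            .replace("<your-cloudsql-connection-string>", connection_name))
```

For example, `fill_template(open("polls.yaml").read(), ...)` produces the filled manifest, which you can write back out before running kubectl.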

Running the app on your local computer

  1. To run the Django app on your local computer, set up a Python development environment, including Python, pip, and virtualenv.

  2. Create an isolated Python environment, and install dependencies. If your Python 3 installation has a different name, use that in the first command:

    virtualenv env -p python3
    source env/bin/activate
    pip install -r requirements.txt
  3. Run the Django migrations to set up your models:

    python manage.py makemigrations
    python manage.py makemigrations polls
    python manage.py migrate
  4. Start a local web server:

    python manage.py runserver
  5. In your browser, go to http://localhost:8000.

You should see a simple webpage with the following text: "Hello, world. You're at the polls index." The sample app pages are delivered by the Django web server running on your computer. When you're ready to move forward, press Ctrl+C to stop the local web server.

Using the Django admin console

  1. Create a superuser:

    python manage.py createsuperuser
  2. Run the main program:

    python manage.py runserver
  3. In your browser, go to http://localhost:8000/admin.

  4. Log in to the admin site using the username and password you created when you ran createsuperuser.

Deploying the app to GKE

  1. When the app is deployed to Google Cloud Platform, it uses the Gunicorn server. Gunicorn doesn't serve static content, so the app uses Cloud Storage to serve static content.

    Create a Cloud Storage bucket and make it publicly readable. Replace <your-gcs-bucket> with a bucket name of your choice. For example, you could use your project ID as a bucket name:

    gsutil mb gs://<your-gcs-bucket>
    gsutil defacl set public-read gs://<your-gcs-bucket>
  2. Gather all the static content locally into one folder:

    python manage.py collectstatic
  3. Upload the static content to Cloud Storage:

    gsutil rsync -R static/ gs://<your-gcs-bucket>/static
  4. In mysite/settings.py, set the value of STATIC_URL to this URL, replacing <your-gcs-bucket> with your bucket name: https://storage.googleapis.com/<your-gcs-bucket>/static/
  5. To initialize GKE, go to the GCP Console. Wait for the "Kubernetes Engine is getting ready. This may take a minute or more" message to disappear.

  6. Create a GKE cluster:

    gcloud container clusters create polls \
      --scopes "https://www.googleapis.com/auth/userinfo.email","cloud-platform" \
      --num-nodes 4 --zone "us-central1-a"

    If you get the error "Project [PROJECT_ID] is not fully initialized with the default service accounts", visit the GCP Console to initialize GKE in your project:

    Go to the GKE page

    Wait for the "Kubernetes Engine is getting ready. This can take a minute or more" message to disappear.

  7. After the cluster is created, use the kubectl command-line tool, which is integrated with the gcloud tool, to interact with your GKE cluster. Because gcloud and kubectl are separate tools, make sure kubectl is configured to interact with the right cluster:

    gcloud container clusters get-credentials polls --zone "us-central1-a"
  8. You need several secrets to enable your GKE app to connect with your Cloud SQL instance. One is required for instance-level access (connection), while the other two are required for database access. For more information about the two levels of access control, see Instance Access Control.

    1. To create the secret for instance-level access, provide the location of the key you downloaded when you created your service account:

      kubectl create secret generic cloudsql-oauth-credentials --from-file=credentials.json=[PATH_TO_CREDENTIAL_FILE]
    2. Create the secrets needed for database access:

      kubectl create secret generic cloudsql --from-literal=username=[PROXY_USERNAME] --from-literal=password=[PASSWORD]
  9. Retrieve the public Docker image for the Cloud SQL proxy.

    docker pull gcr.io/cloudsql-docker/gce-proxy:1.16
  10. Build a Docker image, replacing <your-project-id> with your project ID.

    docker build -t gcr.io/<your-project-id>/polls .
  11. Configure docker to use gcloud as a credential helper, so that you can push the image to Google Container Registry:

    gcloud auth configure-docker
  12. Push the Docker image. Replace <your-project-id> with your project ID.

    docker push gcr.io/<your-project-id>/polls
  13. Create the GKE resource:

    kubectl create -f polls.yaml
  14. After the resources are created, there should be three polls pods on the cluster. Check the status of your pods:

    kubectl get pods

    Wait a few minutes for the pod statuses to change to Running. If the pods are not ready or if you see restarts, get the logs for a particular pod to find the issue:

    kubectl logs <your-pod-id>
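A note on the secrets created in step 8: Kubernetes stores Secret values base64-encoded (encoding, not encryption), which is why `kubectl get secret cloudsql -o yaml` shows unreadable-looking strings. The sketch below mimics what kubectl does with each --from-literal value:

```python
import base64

def encode_secret_literals(values):
    """Base64-encode each value the way Kubernetes stores Secret data.
    This is transport encoding, not encryption; anyone with read access
    to the Secret can decode it."""
    return {k: base64.b64encode(v.encode()).decode() for k, v in values.items()}
```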

Seeing the app run in GCP

After the pods are ready, you can get the public IP address of the load balancer:

kubectl get services polls

Navigate to the EXTERNAL-IP address in your browser to see the Django basic landing page and access the admin console.

Understanding the code

The Django sample app was created using the standard Django tooling. These commands create the project and the polls app:

django-admin startproject mysite
python manage.py startapp polls

The mysite/settings.py file contains the configuration for your SQL database:

DATABASES = {
    'default': {
        # If you are using Cloud SQL for MySQL rather than PostgreSQL, set
        # 'ENGINE': 'django.db.backends.mysql' instead of the following.
        'ENGINE': 'django.db.backends.postgresql',
        'NAME': 'polls',
        'USER': os.getenv('DATABASE_USER'),
        'PASSWORD': os.getenv('DATABASE_PASSWORD'),
        'HOST': '127.0.0.1',
        'PORT': '5432',
    }
}

The polls.yaml file specifies two Kubernetes resources. The first is a Service, which defines a consistent name and IP address for the Django web app and, because its type is LoadBalancer, provisions an HTTP load balancer with a public-facing external IP address.

# The polls service provides a load-balancing proxy over the polls app
# pods. By specifying the type as a 'LoadBalancer', Kubernetes Engine will
# create an external HTTP load balancer.
apiVersion: v1
kind: Service
metadata:
  name: polls
  labels:
    app: polls
spec:
  type: LoadBalancer
  ports:
  - port: 80
    targetPort: 8080
  selector:
    app: polls

The service provides a network name and IP address, and GKE pods run the application code behind the service. The polls.yaml file also specifies a deployment that provides declarative updates for GKE pods. The service directs traffic to the deployment's pods by matching the service's selector against the pods' labels; in this case, the selector app: polls matches pods labeled app: polls.

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: polls
  labels:
    app: polls
spec:
  replicas: 3
  template:
    metadata:
      labels:
        app: polls
    spec:
      containers:
      - name: polls-app
        # Replace <your-project-id> with your project ID or use `make template`
        image: gcr.io/<your-project-id>/polls
        # This setting makes nodes pull the docker image every time before
        # starting the pod. This is useful when debugging, but should be turned
        # off in production.
        imagePullPolicy: Always
        env:
        - name: DATABASE_USER
          valueFrom:
            secretKeyRef:
              name: cloudsql
              key: username
        - name: DATABASE_PASSWORD
          valueFrom:
            secretKeyRef:
              name: cloudsql
              key: password
        ports:
        - containerPort: 8080

      - image: gcr.io/cloudsql-docker/gce-proxy:1.16
        name: cloudsql-proxy
        command: ["/cloud_sql_proxy", "--dir=/cloudsql",
                  "-instances=<your-cloudsql-connection-string>=tcp:5432",
                  "-credential_file=/secrets/cloudsql/credentials.json"]
        volumeMounts:
        - name: cloudsql-oauth-credentials
          mountPath: /secrets/cloudsql
          readOnly: true
        - name: ssl-certs
          mountPath: /etc/ssl/certs
        - name: cloudsql
          mountPath: /cloudsql
      volumes:
      - name: cloudsql-oauth-credentials
        secret:
          secretName: cloudsql-oauth-credentials
      - name: ssl-certs
        hostPath:
          path: /etc/ssl/certs
      - name: cloudsql
        emptyDir: {}
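The selector-to-label matching described above is plain key/value subset matching: a Service targets a pod when every pair in the Service's selector appears among the pod's labels. A minimal sketch of that rule (illustrative, not the Kubernetes implementation):

```python
def selector_matches(selector, labels):
    """Return True when every key/value pair in the Service's selector
    appears among the pod's labels; extra pod labels are ignored."""
    return all(labels.get(key) == value for key, value in selector.items())
```

So a pod labeled both app: polls and tier: web is still selected by the selector app: polls, while a pod labeled app: web is not.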