Running Django on Google Kubernetes Engine

Django apps that run on Google Kubernetes Engine (GKE) scale well because they run on the same infrastructure that powers all of Google's products.

This tutorial assumes you are familiar with Django web development. If you are new to Django development, it's a good idea to work through writing your first Django app before continuing. In that tutorial, the app's models represent polls that contain questions, and you can interact with the models by using the Django admin console.
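
If it helps to picture that data model, the models in that tutorial look roughly like the following sketch (illustrative only; the sample app used in this tutorial defines its own models):

    # Illustrative sketch of the poll models from the introductory Django
    # tutorial; the sample app below ships with its own versions.
    from django.db import models

    class Question(models.Model):
        question_text = models.CharField(max_length=200)
        pub_date = models.DateTimeField('date published')

    class Choice(models.Model):
        question = models.ForeignKey(Question, on_delete=models.CASCADE)
        choice_text = models.CharField(max_length=200)
        votes = models.IntegerField(default=0)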

This tutorial requires Python 2.7, or Python 3.4 or later. You also need to have Docker installed.

Before you begin

  1. Create a project in the Google Cloud Platform Console.
    If you haven't already created a project, create one now. Projects enable you to manage all Google Cloud Platform resources for your app, including deployment, access control, billing, and services.
    1. Open the GCP Console.
    2. In the drop-down menu at the top, select Create a project.
    3. Click Show advanced options. Under App Engine location, select a United States location.
    4. Give your project a name.
    5. Make a note of the project ID, which might be different from the project name. The project ID is used in commands and in configurations.
  2. Enable billing for your project, and sign up for a free trial.

    If you haven't already enabled billing for your project, enable billing now, and sign up for a free trial. Enabling billing allows the app to consume billable resources such as running instances and storing data. During your free trial period, you won't be billed for any services.

  3. Install the Cloud SDK.

    If you haven't already installed the Cloud SDK, install and initialize the Cloud SDK now. The Cloud SDK contains tools and libraries that enable you to create and manage resources on GCP.

  4. Enable APIs for your project.

    Enable the APIs that this tutorial uses: the Cloud SQL Admin API and the Compute Engine API. You can enable them from the APIs & Services page in the GCP Console.

Downloading and running the app

After you've completed the prerequisites, download and run the Django sample app. The following sections guide you through configuring, running, and deploying the app.

Cloning the Django app

The code for the Django sample app is in the GoogleCloudPlatform/python-docs-samples repository on GitHub.

  1. You can either download the sample as a zip file and extract it or clone the repository to your local machine by using the following command:

    git clone https://github.com/GoogleCloudPlatform/python-docs-samples.git
    
  2. Change to the directory that contains the sample code:

    cd python-docs-samples/kubernetes_engine/django_tutorial
    

Setting up your local environment

When deployed, your app uses the Cloud SQL Proxy, which runs as a sidecar container alongside the app in each pod (configured later in polls.yaml), to communicate with your Cloud SQL instance. However, to test your app locally, you must install and use a local copy of the proxy in your development environment.

Learn more about the Cloud SQL Proxy.

To perform basic admin tasks on your Cloud SQL instance, you can use the PostgreSQL client.

Enable the Cloud SQL Admin API

Before using Cloud SQL, you must enable the Cloud SQL Admin API:

gcloud services enable sqladmin.googleapis.com

Installing the Cloud SQL Proxy

Download and install the Cloud SQL Proxy. The Cloud SQL Proxy connects to your Cloud SQL instance when running locally.

Linux 64-bit

  1. Download the proxy:
    wget https://dl.google.com/cloudsql/cloud_sql_proxy.linux.amd64 -O cloud_sql_proxy
    
  2. Make the proxy executable:
    chmod +x cloud_sql_proxy
    

Linux 32-bit

  1. Download the proxy:
    wget https://dl.google.com/cloudsql/cloud_sql_proxy.linux.386 -O cloud_sql_proxy
    
  2. Make the proxy executable:
    chmod +x cloud_sql_proxy
    

macOS 64-bit

  1. Download the proxy:
    curl -o cloud_sql_proxy https://dl.google.com/cloudsql/cloud_sql_proxy.darwin.amd64
    
  2. Make the proxy executable:
    chmod +x cloud_sql_proxy
    

macOS 32-bit

  1. Download the proxy:
    curl -o cloud_sql_proxy https://dl.google.com/cloudsql/cloud_sql_proxy.darwin.386
    
  2. Make the proxy executable:
    chmod +x cloud_sql_proxy
    

Windows 64-bit

Right-click https://dl.google.com/cloudsql/cloud_sql_proxy_x64.exe and select Save Link As to download the proxy. Rename the file to cloud_sql_proxy.exe.

Windows 32-bit

Right-click https://dl.google.com/cloudsql/cloud_sql_proxy_x86.exe and select Save Link As to download the proxy. Rename the file to cloud_sql_proxy.exe.

If your operating system isn't included here, you can also compile the proxy from source.

Creating a Cloud SQL instance

  1. Create a Cloud SQL for PostgreSQL instance.

    Name the instance polls-instance or similar. It can take a few minutes for the instance to be ready. When the instance is ready, it's visible in the instances list.

  2. Use the Cloud SDK to run the following command where [YOUR_INSTANCE_NAME] represents the name of your Cloud SQL instance:
    gcloud sql instances describe [YOUR_INSTANCE_NAME]

    In the output, note the value shown for [CONNECTION_NAME].

    The [CONNECTION_NAME] value is in the format [PROJECT_NAME]:[REGION_NAME]:[INSTANCE_NAME].

Initializing your Cloud SQL instance

  1. Start the Cloud SQL Proxy by using the [CONNECTION_NAME] value from the previous step:

    Linux/macOS

    ./cloud_sql_proxy -instances="[YOUR_INSTANCE_CONNECTION_NAME]"=tcp:5432

    Windows

    cloud_sql_proxy.exe -instances="[YOUR_INSTANCE_CONNECTION_NAME]"=tcp:5432

    Replace [YOUR_INSTANCE_CONNECTION_NAME] with the [CONNECTION_NAME] value that you recorded in the previous step.

    This step establishes a connection from your local computer to your Cloud SQL instance for local testing purposes. Keep the Cloud SQL Proxy running the entire time you test your app locally.

  2. Create a Cloud SQL user and database:

    GCP Console

    1. In the GCP Console, create a new database for your Cloud SQL instance polls-instance. For example, you can use the name polls.
    2. In the GCP Console, create a new user for your Cloud SQL instance polls-instance.

    Postgres client

    1. In a separate command-line tab, install the Postgres client.
      sudo apt-get install postgresql-client
    2. Use the Postgres client or similar program to connect to your instance. When prompted, enter the password that you configured for the default postgres user when you created the instance.
      psql --host 127.0.0.1 --user postgres --password
    3. Create the required databases, users, and access permissions in your Cloud SQL database by using the following commands. Replace [POSTGRES_USER] and [POSTGRES_PASSWORD] with the username and password you want to use.
      CREATE DATABASE polls;
      CREATE USER [POSTGRES_USER] WITH PASSWORD '[POSTGRES_PASSWORD]';
      GRANT ALL PRIVILEGES ON DATABASE polls TO [POSTGRES_USER];
      GRANT ALL PRIVILEGES ON ALL TABLES IN SCHEMA public TO [POSTGRES_USER];
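
With the proxy still running, you can optionally verify the new database and user from Python, which is the same path the Django app uses locally. This is a minimal sketch; it assumes the psycopg2 driver is installed in your environment (for example, via pip install psycopg2-binary), and the placeholders are the credentials you just created.

    # Optional connectivity check through the local Cloud SQL Proxy (sketch).
    # Replace the placeholders with the user and password created above.
    import psycopg2

    conn = psycopg2.connect(
        host='127.0.0.1',
        port=5432,
        dbname='polls',
        user='[POSTGRES_USER]',
        password='[POSTGRES_PASSWORD]',
    )
    with conn.cursor() as cur:
        cur.execute('SELECT version();')
        print(cur.fetchone()[0])
    conn.close()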
      

Creating a service account

The proxy requires a service account with access to your Cloud SQL instance (one of the Cloud SQL roles listed below). For more information about service accounts, see the GCP authentication overview.

  1. Go to the Service accounts page of the Google Cloud Platform Console.

  2. Select the project that contains your Cloud SQL instance.
  3. Click Create service account.
  4. In the Create service account dialog, provide a descriptive name for the service account.
  5. For Role, select one of the following roles:
    • Cloud SQL > Cloud SQL Client
    • Cloud SQL > Cloud SQL Editor
    • Cloud SQL > Cloud SQL Admin
  6. Change the Service account ID to a unique, easily recognizable value.
  7. Click Furnish a new private key and confirm that the key type is JSON.
  8. Click Create.

    The private key file is downloaded to your machine. You can move it to another location. Keep the key file secure.

Configuring the database settings

Use the following commands to set environment variables for database access. These environment variables are used for local testing.

Linux/macOS

export DATABASE_USER=<your-database-user>
export DATABASE_PASSWORD=<your-database-password>

Windows

set DATABASE_USER=<your-database-user>
set DATABASE_PASSWORD=<your-database-password>
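
The sample's settings.py reads these variables with os.getenv (see Understanding the code), so they must be set in the same shell session in which you run the app. As an optional sanity check, you can confirm that Python sees them, as in the following sketch:

    # Optional check (not part of the sample): confirm the variables are set
    # in the current shell before starting the local server.
    import os

    for var in ('DATABASE_USER', 'DATABASE_PASSWORD'):
        print(var, 'is set' if os.getenv(var) else 'is NOT set')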

Setting up your GKE configuration

  1. This application is represented in a single Kubernetes configuration, called polls. In polls.yaml, replace <your-project-id> with your GCP project ID.

  2. Run the following command and note the value of connectionName:

    gcloud sql instances describe [YOUR_INSTANCE_NAME]
    
  3. In the polls.yaml file, replace <your-cloudsql-connection-string> with the connectionName value.

Running the app on your local computer

  1. To run the Django app on your local computer, set up a Python development environment, including Python, pip, and virtualenv.

  2. Create an isolated Python environment and install dependencies. If your Python 3 installation has a different name, use that in the first command:

    virtualenv --python python3 env
    source env/bin/activate
    pip install -r requirements.txt
    
  3. Run the Django migrations to set up your models:

    python manage.py makemigrations
    python manage.py makemigrations polls
    python manage.py migrate
    
  4. Start a local web server:

    python manage.py runserver
    
  5. In your browser, go to http://localhost:8000.

    You see a page with the following text: "Hello, world. You're at the polls index." The Django web server running on your computer delivers the sample app pages.

  6. Press Control+C to stop the local web server.

Using the Django admin console

  1. Create a superuser. You need to specify a username and password.

    python manage.py createsuperuser
    
  2. Run the main program:

    python manage.py runserver
    
  3. In your browser, go to http://localhost:8000/admin.

  4. Log in to the admin site using the username and password you used when you ran createsuperuser.

Deploying the app to GKE

When the app is deployed to Google Cloud Platform, it uses the Gunicorn server. Gunicorn doesn't serve static content, so the app serves static assets from Cloud Storage.

Collect and upload static resources

  1. Create a Cloud Storage bucket and make it publicly readable. Replace [YOUR_GCS_BUCKET] with a bucket name of your choice. For example, you could use your project ID as a bucket name.

    gsutil mb gs://[YOUR_GCS_BUCKET]
    gsutil defacl set public-read gs://[YOUR_GCS_BUCKET]
    
  2. Gather all the static content locally into one folder:

    python manage.py collectstatic
    
  3. Upload the static content to Cloud Storage:

    gsutil rsync -R static/ gs://[YOUR_GCS_BUCKET]/static
    
  4. In mysite/settings.py, set the value of STATIC_URL to the following URL, replacing [YOUR_GCS_BUCKET] with your bucket name:

    http://storage.googleapis.com/[YOUR_GCS_BUCKET]/static/
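
    For example, the updated line in mysite/settings.py looks like the following, with your bucket name substituted for the placeholder:

    # mysite/settings.py
    STATIC_URL = 'http://storage.googleapis.com/[YOUR_GCS_BUCKET]/static/'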
    

Set up GKE

  1. To initialize GKE, go to the Clusters page.

    When you use GKE for the first time in a project, you need to wait for the "Kubernetes Engine is getting ready. This may take a minute or more" message to disappear.

  2. Create a GKE cluster:

    gcloud container clusters create polls \
      --scopes "https://www.googleapis.com/auth/userinfo.email","cloud-platform" \
      --num-nodes 4 --zone "us-central1-a"
    

    If you get the error "Project [PROJECT_ID] is not fully initialized with the default service accounts.", go to the Clusters page in the Google Cloud Platform Console to initialize GKE in your project, and then wait for the "Kubernetes Engine is getting ready. This can take a minute or more" message to disappear.

  3. After the cluster is created, use the kubectl command-line tool, which is integrated with the gcloud tool, to interact with your GKE cluster. Because gcloud and kubectl are separate tools, make sure kubectl is configured to interact with the right cluster.

    gcloud container clusters get-credentials polls --zone "us-central1-a"
    

Set up Cloud SQL

  1. You need two Kubernetes secrets to enable your GKE app to connect with your Cloud SQL instance. One secret is required for instance-level access (the connection), and the other is required for database access (the database user's name and password). For more information about the two levels of access control, see Instance access control.

    1. To create the secret for instance-level access, provide the location ([PATH_TO_CREDENTIAL_FILE]) of the JSON service account key you downloaded when you created your service account (see Creating a service account):

      kubectl create secret generic cloudsql-oauth-credentials --from-file=credentials.json=[PATH_TO_CREDENTIAL_FILE]
      
    2. To create the secret for database access, use [PROXY_USERNAME] and [PASSWORD], the database username and password that you created in step 2 of Initializing your Cloud SQL instance:

      kubectl create secret generic cloudsql --from-literal=username=[PROXY_USERNAME] --from-literal=password=[PASSWORD]
      
  2. Retrieve the public Docker image for the Cloud SQL proxy.

    docker pull gcr.io/cloudsql-docker/gce-proxy:1.05
    
  3. Build a Docker image, replacing <your-project-id> with your project ID.

    docker build -t gcr.io/<your-project-id>/polls .
    
  4. Configure Docker to use gcloud as a credential helper, so that you can push the image to Container Registry:

    gcloud auth configure-docker
    
  5. Push the Docker image. Replace <your-project-id> with your project ID.

    docker push gcr.io/<your-project-id>/polls
    
  6. Create the GKE resource:

    kubectl create -f polls.yaml
    

Deploy the app to GKE

After the resources are created, there are three polls pods on the cluster. Check the status of your pods:

    kubectl get pods

Wait a few minutes for the pod statuses to display as Running. If the pods aren't ready or if you see restarts, you can get the logs for a particular pod to figure out the issue. [YOUR_POD_ID] is part of the output returned by the previous kubectl get pods command.

    kubectl logs [YOUR_POD_ID]

Seeing the app run in GCP

After the pods are ready, you can get the public IP address of the load balancer:

kubectl get services polls

Go to the EXTERNAL-IP address in your browser to see the Django basic landing page and access the admin console.

Understanding the code

The Django sample app was created using the standard Django tooling. These commands create the project and the polls app:

django-admin startproject mysite
python manage.py startapp polls

The settings.py file contains the configuration for your SQL database:

DATABASES = {
    'default': {
        # If you are using Cloud SQL for MySQL rather than PostgreSQL, set
        # 'ENGINE': 'django.db.backends.mysql' instead of the following.
        'ENGINE': 'django.db.backends.postgresql',
        'NAME': 'polls',
        'USER': os.getenv('DATABASE_USER'),
        'PASSWORD': os.getenv('DATABASE_PASSWORD'),
        'HOST': '127.0.0.1',
        'PORT': '5432',
    }
}

The polls.yaml file specifies two Kubernetes resources: a Service and a Deployment. The Service defines a consistent name and IP address for the Django web app. Because the service's type is LoadBalancer, GKE also provisions an external load balancer with a public-facing IP address that routes traffic to the app.

# The polls service provides a load-balancing proxy over the polls app
# pods. By specifying the type as a 'LoadBalancer', Container Engine will
# create an external HTTP load balancer.
# For more information about Services see:
#   https://cloud.google.com/container-engine/docs/services/
# For more information about external HTTP load balancing see:
#   https://cloud.google.com/container-engine/docs/load-balancer
apiVersion: v1
kind: Service
metadata:
  name: polls
  labels:
    app: polls
spec:
  type: LoadBalancer
  ports:
  - port: 80
    targetPort: 8080
  selector:
    app: polls

The service provides a network name and IP address, and GKE pods run the app's code behind the service. The polls.yaml file also specifies a deployment that provides declarative updates for the GKE pods. The service directs traffic to the deployment by matching the service's selector to the labels on the deployment's pod template. In this case, the selector app: polls matches the label app: polls.

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: polls
  labels:
    app: polls
spec:
  replicas: 3
  template:
    metadata:
      labels:
        app: polls
    spec:
      containers:
      - name: polls-app
        # Replace <your-project-id> with your project ID or use `make template`
        image: gcr.io/<your-project-id>/polls
        # This setting makes nodes pull the docker image every time before
        # starting the pod. This is useful when debugging, but should be turned
        # off in production.
        imagePullPolicy: Always
        env:
            - name: DATABASE_USER
              valueFrom:
                secretKeyRef:
                  name: cloudsql
                  key: username
            - name: DATABASE_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: cloudsql
                  key: password
        ports:
        - containerPort: 8080

      - image: gcr.io/cloudsql-docker/gce-proxy:1.05
        name: cloudsql-proxy
        command: ["/cloud_sql_proxy", "--dir=/cloudsql",
                  "-instances=<your-cloudsql-connection-string>=tcp:5432",
                  "-credential_file=/secrets/cloudsql/credentials.json"]
        volumeMounts:
          - name: cloudsql-oauth-credentials
            mountPath: /secrets/cloudsql
            readOnly: true
          - name: ssl-certs
            mountPath: /etc/ssl/certs
          - name: cloudsql
            mountPath: /cloudsql
      volumes:
        - name: cloudsql-oauth-credentials
          secret:
            secretName: cloudsql-oauth-credentials
        - name: ssl-certs
          hostPath:
            path: /etc/ssl/certs
        - name: cloudsql
          emptyDir: