You can run a Docker container on a machine running Container-Optimized OS in much the same way as you would on most other node image distributions, by using the docker run command. For example:
docker run --rm busybox echo "hello world"
The following output appears:
Unable to find image 'busybox:latest' locally
latest: Pulling from library/busybox
. . .
Status: Downloaded newer image for busybox:latest
hello world
When managing SSH keys in metadata, all user accounts managed by Compute Engine on the cos image are added to the docker group by default. This lets any logged-in user run docker commands without root privileges. When managing SSH keys using OS Login, the user account must be added to the docker group manually; otherwise, the user has to prefix each docker command with sudo.
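With OS Login, a minimal sketch for adding the current user to the docker group might look like the following (usermod availability on the image, and logging in again for the group change to take effect, are assumptions):

sudo usermod -aG docker ${USER}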
Accessing public images in Container Registry or Artifact Registry
Container Registry support is built in to the cos node image. To start a container from Container Registry, run:
docker run --rm gcr.io/google-containers/busybox echo "hello world"
The following output appears:
Unable to find image 'gcr.io/google-containers/busybox:latest' locally
Pulling repository gcr.io/google-containers/busybox
. . .
Status: Downloaded newer image for gcr.io/google-containers/busybox:latest
hello world
Accessing private images in Artifact Registry or Container Registry
Starting with milestone 60 releases, docker-credential-gcr is pre-installed in Container-Optimized OS images. It's the recommended way to access private images in Artifact Registry or Container Registry. To use docker-credential-gcr, run the following command:
Artifact Registry
docker-credential-gcr configure-docker --registries LOCATION-docker.pkg.dev
Replace LOCATION with the location of your repository.
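For example, for a repository in us-central1 (the location here is only an assumption for illustration):

docker-credential-gcr configure-docker --registries us-central1-docker.pkg.dev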
Container Registry
docker-credential-gcr configure-docker
The following output appears:
/home/username/.docker/config.json configured to use this credential helper
To run an image from the registry, use the following command:
Artifact Registry
docker run --rm LOCATION-docker.pkg.dev/your-project/repository/your-image
Replace LOCATION with the location of your repository.
Container Registry
docker run --rm gcr.io/your-project/your-image
Supported Container Registry hostnames are:
- us.gcr.io
- eu.gcr.io
- asia.gcr.io
Alternatively, you can fetch appropriate OAuth access tokens from Compute Engine metadata and use them with the docker login command manually, as shown in the following example:
METADATA=http://metadata.google.internal/computeMetadata/v1
SVC_ACCT=$METADATA/instance/service-accounts/default
ACCESS_TOKEN=$(curl -H 'Metadata-Flavor: Google' $SVC_ACCT/token | cut -d'"' -f 4)
docker login -u oauth2accesstoken -p $ACCESS_TOKEN https://gcr.io
docker run … gcr.io/your-project/your-image
Using cloud-init with Container Registry
The following cloud-init example uses the Cloud Config format to start a Docker container from an image stored in Container Registry:
#cloud-config

write_files:
- path: /etc/systemd/system/cloudservice.service
  permissions: 0644
  owner: root
  content: |
    [Unit]
    Description=Start a simple docker container
    Wants=gcr-online.target
    After=gcr-online.target

    [Service]
    Environment="HOME=/home/cloudservice"
    ExecStartPre=/usr/bin/docker-credential-gcr configure-docker
    ExecStart=/usr/bin/docker run --rm --name=mycloudservice gcr.io/google-containers/busybox:latest /bin/sleep 3600
    ExecStop=/usr/bin/docker stop mycloudservice
    ExecStopPost=/usr/bin/docker rm mycloudservice

runcmd:
- systemctl daemon-reload
- systemctl start cloudservice.service
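After the instance boots, you can verify that the unit and its container started; the following is a sketch using the service and container names from the example above:

sudo systemctl status cloudservice.service
sudo docker ps --filter name=mycloudservice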
Configuring Docker daemon to pull images from registry cache
You can configure the Docker daemon to pull images from a registry cache by using registry mirrors. Configure the daemon to use the registry-mirror option in one of the following ways:

- In the /etc/default/docker file, add the registry-mirror option for the registry (for example, https://mirror.gcr.io):

  echo 'DOCKER_OPTS="--registry-mirror=https://mirror.gcr.io"' | tee /etc/default/docker

- In the /etc/default/docker file, append "--registry-mirror=https://mirror.gcr.io" to the existing DOCKER_OPTS:

  sed -i -e 's|"$| --registry-mirror=https://mirror.gcr.io"|' /etc/default/docker
After adding the registry mirror, restart the Docker daemon for the changes to take effect:
sudo systemctl daemon-reload
sudo systemctl restart docker
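To confirm that the mirror is in effect, you can inspect the configuration reported by docker info (the exact output format can vary between Docker versions):

sudo docker info | grep -A1 "Registry Mirrors"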
A configuration added to /etc/default/docker does not persist across reboots. To ensure that your Docker configuration remains persistent across reboots, add the commands either in the cloud-init script of the instance's metadata (in the cloud-config format) or in a startup script.
The following example uses the cloud-config format to configure a registry-mirror:
#cloud-config
runcmd:
- echo 'DOCKER_OPTS="--registry-mirror=https://mirror.gcr.io"' | tee /etc/default/docker
- systemctl daemon-reload
- systemctl restart docker
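Alternatively, a startup script can reapply the same configuration on each boot. The following is a minimal sketch of such a script, intended to be set as the instance's startup-script metadata value:

#! /bin/bash
echo 'DOCKER_OPTS="--registry-mirror=https://mirror.gcr.io"' > /etc/default/docker
systemctl daemon-reload
systemctl restart docker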
For more on configuring an instance with cloud-init, see Using cloud-init with the Cloud config format.
Running a Kubernetes cluster
The recommended approach to running a Kubernetes cluster on Google Cloud is to use Google Kubernetes Engine (GKE). However, if you want to run a self-managed version of open-source Kubernetes, follow these instructions.
- Make sure that your Kubernetes master can be reached by opening port 443 in your firewall (see the firewall rule sketch after the commands below).
- Download Kubernetes release binaries, unpack, and bring up the cluster as follows:
# Download and extract the latest kubernetes release.
cd <empty-dir>
KUBERNETES_VERSION="v1.4.6"
curl -sSL -o kubernetes.tar.gz https://github.com/kubernetes/kubernetes/releases/download/${KUBERNETES_VERSION}/kubernetes.tar.gz
tar xzf kubernetes.tar.gz
cd kubernetes

# Configure environment to use Container-Optimized OS
export KUBE_OS_DISTRIBUTION=cos

# Start up a cluster and verify that it is running:
cluster/kube-up.sh
cluster/kubectl.sh get nodes
cluster/kubectl.sh get pods --namespace=kube-system
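For the first step above, a firewall rule opening port 443 could look like the following sketch (the rule name and target tag are assumptions; adjust them to match your master instance):

gcloud compute firewall-rules create kube-master-https --allow tcp:443 --target-tags kubernetes-master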
Now, you can run your application on the cluster. For example, the following commands start a Redis cluster:
cluster/kubectl.sh create -f \
    examples/guestbook/all-in-one/guestbook-all-in-one.yaml
cluster/kubectl.sh get pods
cluster/kubectl.sh describe pods redis-master-pod-name
For instances running as part of a GKE cluster, Docker and
Kubelet logs are also automatically exported to Google Cloud's operations suite logging.
Logs for Docker, Kubelet, and kube-proxy
are available in Google Cloud's operations suite
under Compute Engine VM Instance when using the Google Cloud console.
Once your cluster is no longer needed, you can tear it down:
cluster/kube-down.sh
Troubleshooting
Resolving option conflicts between Docker daemon.json and flags
When configuring the Docker daemon, if the same option is set both in a daemon.json file and with a flag, Docker fails to start with an error similar to:
unable to configure the Docker daemon with file /etc/docker/daemon.json:
the following directives are specified both as a flag and in the configuration file:
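For example, this conflict occurs if /etc/docker/daemon.json sets an option such as log-driver while the same option is also passed as a flag through DOCKER_OPTS (the values below are illustrative only):

# /etc/docker/daemon.json
{"log-driver": "json-file"}

# /etc/default/docker
DOCKER_OPTS="--log-driver=json-file"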
The recommended solution to resolve this conflict is to modify the default daemon.json, which is located at /etc/docker/daemon.json. Modifying this file allows for changing only the affected options, while maintaining the other default options. This can be done using cloud-init, for example using a cloud-config similar to:
#cloud-config

write_files:
- path: /tmp/modify_docker_daemon_opts.py
  permissions: 0744
  owner: root
  content: |
    import json, os, logging
    DAEMON_OPTS_FILE = '/etc/docker/daemon.json'
    opts = {}
    if os.path.exists(DAEMON_OPTS_FILE):
      with open(DAEMON_OPTS_FILE) as f:
        try:
          # Start from the existing daemon options so the defaults are preserved.
          opts = json.load(f)
        except ValueError:
          logging.info("json parsing failed, starting with empty config.")
    # Add your daemon option modifications here
    # For example,
    # opts.setdefault('log-opts', {})['max-size'] = '100m'
    with open(DAEMON_OPTS_FILE, 'w') as f:
      json.dump(opts, f)

runcmd:
- python /tmp/modify_docker_daemon_opts.py
- rm -f /tmp/modify_docker_daemon_opts.py
- systemctl restart docker.service
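For example, if the modification section above sets opts.setdefault('log-opts', {})['max-size'] = '100m', the script merges an entry similar to the following into daemon.json, while leaving the image's other default keys untouched:

{
  "log-opts": {
    "max-size": "100m"
  }
}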