Compute Engine Management with Puppet, Chef, Salt, and Ansible - Appendix

The following sections provide step-by-step instructions to help you get started with the cloud management tools discussed in this article. The listed commands are valid as of the time of this writing (February 2014). Please check the websites for Google Compute Engine and the individual tools for the most up-to-date information.

Getting started with Puppet on Compute Engine

Standalone

  1. Install Puppet on your workstation.
  2. Install the gce_compute module on your workstation.

    puppet module install puppetlabs-gce_compute

  3. Install and configure gcutil on your workstation.
  4. Set up a device.conf file. This typically goes into the $HOME/.puppet/ directory.

[my_project]
  type gce
  url [/dev/null]:gce_project_id

Where:

  • my_project is any name you choose.
  • gce_project_id is the Project ID of your Compute Engine project.
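
For example, with a hypothetical project ID of my-gce-project, you could write the file from a shell as follows (a sketch; the heredoc style matches the Master/Agent setup later in this appendix):

mkdir -p ~/.puppet
cat > ~/.puppet/device.conf << EOF
[my_project]
  type gce
  url [/dev/null]:my-gce-project
EOF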

  5. Create a manifest file for creating Compute Engine resources. Call it puppet-www.pp.

$zone = 'us-central1-a'

gce_firewall { 'puppet-www-http':
 ensure      => present,
 network     => 'default',
 description => 'allows incoming HTTP connections',
 allowed     => 'tcp:80',
}

gce_disk { 'puppet-www-boot':
  ensure       => present,
  source_image => 'debian-7',
  size_gb      => '10',
  zone         => "$zone",
}

gce_instance { 'puppet-www':
  ensure       => present,
  description  => 'Basic web server',
  machine_type => 'n1-standard-1',
  disk         => 'puppet-www-boot,boot',
  zone         => "$zone",
  network      => 'default',

  require   => Gce_disk['puppet-www-boot'],

  puppet_master => "",
  manifest     => '
    class apache ($version = "latest") {
      package {"apache2":
        ensure => $version,
      }
      file {"/var/www/":
        ensure  => present,
        content => "<html>\n<body>\n\t<h2>Hi, this is $hostname ($gce_external_ip).</h2>\n</body>\n</html>\n",
        require => Package["apache2"],
      }
      service {"apache2":
        ensure => running,
        enable => true,
        require => File["/var/www/"],
      }
    }

    include apache',
}

  6. Apply the manifest file.

puppet apply --certname my_project puppet-www.pp
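
To verify the result, you can list your instances and fetch the new page; a sketch, where the external IP address placeholder comes from the gcutil output:

gcutil --project=gce_project_id listinstances
curl http://[instance external IP]/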

Master/Agent

In Master/Agent mode, managing Compute Engine resources and managing the software running on the associated instances are separated. Software management is under the domain of the Puppet Master. Compute Engine instances can be managed on the master instance or from any other workstation running Puppet. The instructions below are for performing Compute Engine resource management from the master instance.

  1. Create a Compute Engine instance for the Puppet Master. This can be done from the Google Cloud Platform Console or with gcutil addinstance from a workstation (or with Puppet and gce_compute).
  2. Connect to the Puppet Master instance via gcutil ssh.
  3. Install the Puppet Master software.
  4. (Optional) Configure the Puppet Master service for autosigning (see Puppet certificate management below).

    echo "*.$(hostname --domain)" | sudo tee /etc/puppet/autosign.conf

  5. Create a site manifest file to specify instance software and services (/etc/puppet/manifests/site.pp).

class apache ($version = "latest") {
  package {"apache2":
    ensure => $version,
  }
  file {"/var/www/":
    ensure  => present,
    content => "<html>\n<body>\n\t<h2>Hi, this is $hostname.</h2>\n</body>\n</html>\n",
    require => Package["apache2"],
  }
  service {"apache2":
    ensure => running,
    enable => true,
    require => File["/var/www/"],
  }
}
node 'puppet-child-www' {
  include apache
}
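
If you later add more web nodes, node definitions can also match a regular expression instead of a single hostname. A minimal sketch, appended to the same site manifest on the master (the node-name pattern is hypothetical):

sudo tee -a /etc/puppet/manifests/site.pp << 'EOF' > /dev/null
node /^puppet-child-www\d+$/ {
  include apache
}
EOF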

The Puppet Master is now ready.

  1. Install the gce_compute module.

    puppet module install puppetlabs-gce_compute

  2. Set up device.conf.

PROJECT=$(/usr/share/google/get_metadata_value project-id)

cat > ~/.puppet/device.conf << EOF
[my_project]
  type gce
  url [/dev/null]:${PROJECT}
EOF

  3. Create a manifest file for creating Compute Engine instances and associated resources (gce_www_up.pp).

$master = $fqdn
$zone = 'us-central1-a'

gce_firewall { 'allow-http':
  ensure       => present,
  description  => 'Allow HTTP',
  network      => 'default',
  allowed      => 'tcp:80',
  allowed_ip_sources => '0.0.0.0/0',
}

gce_disk { 'puppet-child-www':
  ensure       => present,
  description  => 'Boot disk for puppet-child-www',
  size_gb      => 10,
  zone         => "$zone",
  source_image => 'debian-7',
}

gce_instance { 'puppet-child-www':
  ensure       => present,
  description  => 'Basic web node',
  machine_type => 'n1-standard-1',
  zone         => "$zone",
  disk         => 'puppet-child-www,boot',
  network      => 'default',

  require      => Gce_disk['puppet-child-www'],

  puppet_master  => "$master",
  puppet_service => present,
}

  4. Apply the gce_www_up.pp manifest file.

puppet apply --certname=my_project gce_www_up.pp

  5. If you did not configure autosigning, then, on the Master, monitor the certificate requests from the new agent nodes and sign them as they arrive.

sudo puppet cert list
sudo puppet cert sign [agent-hostname]

Puppet certificate management

Before an agent can receive its resource catalog from the master, the agent's SSL certificate must be accepted and signed by the master. Enabling autosigning can be done securely:

  1. Ensure that the master port (8140 by default) is not accessible on the master's public IP address. (Default: the initial configuration of the default Compute Engine network opens only the SSH port publicly.)
  2. Ensure that the master port is accessible on the master's private IP address. (Default: the initial configuration of the default Compute Engine network opens all TCP ports internally.)
  3. Specify *.c.<project>.<domain>.internal[3] in the master's autosign.conf file. This limits autosigning to requests from instances on your project's internal network.

Enabling autosigning will allow you to get instances up and provisioned with their software catalog more quickly.

Be aware that when an agent instance is restarted, a new SSL certificate is generated and sent to the master if one cannot be found on the instance. This can result in an authentication failure, because the master will already have accepted a different SSL certificate from an instance of the same name.

To handle this, on instance restart, make sure that either the Puppet SSL certificate is maintained on durable storage available to the instance, or ensure that references to the original certificate are removed from the master. The former can most easily be achieved by booting from the instance's original persistent disk, the latter by executing puppet cert clean <agent node>.
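
For example, if you delete and recreate the puppet-child-www instance from this appendix, you could first clear its stale certificate on the master (the FQDN shown is hypothetical; confirm the certname with the list command):

sudo puppet cert list --all
sudo puppet cert clean puppet-child-www.c.my-project.internal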

Getting started with Chef on Compute Engine

Master/Agent

Overview


You will set up four Compute Engine instances:

  • A chef server (chef-server), which will be used to store the cookbooks.
  • A chef workstation (chef-workstation), which will be used to execute commands.
  • Two chef client nodes, which will run the application code.

These instructions assume that you have gcutil installed. Each step below states whether to run its commands on your local workstation, on the Chef server, or on the Chef workstation.

Project

Run the following commands from your local workstation.

  1. Use the Google Cloud Platform Console to create a new project, then set it as the active project with the gcloud command-line tool.

gcloud config set project <new-project-id>

  2. Add a firewall rule that will allow relevant traffic to your nodes. You can do this from the Cloud Platform Console (under Networks) or from the command line with gcutil:

gcutil addfirewall http-firewall --allowed=tcp:80
gcutil addfirewall https-firewall --allowed=tcp:443

 
Ideally, restrict the firewall rules to specific VMs by assigning tags to the VMs and using those tags in the firewall rules.
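
A sketch of that approach with gcutil, using a hypothetical http-server tag and instance name:

gcutil addfirewall tagged-http-firewall --allowed=tcp:80 --target_tags=http-server
gcutil addinstance www-node --tags=http-server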


Chef Server
  1. Create a Google Compute Engine instance and name it chef-server.
  2. SSH to chef-server and run the following commands to set it up as a Chef Server:

wget https://opscode-omnibus-packages.s3.amazonaws.com/el/6/x86_64/chef-server-11.0.8-1.el6.x86_64.rpm

sudo apt-get update && sudo apt-get upgrade -y

sudo apt-get install alien
sudo alien --scripts -i chef-server-11.0.8-1.el6.x86_64.rpm

sudo chef-server-ctl reconfigure && sleep 30 && sudo chef-server-ctl test

Chef Workstation
  1. Create a Compute Engine instance and name it chef-workstation.
  2. SSH to chef-workstation and run the following commands to set it up as a Chef workstation.

sudo apt-get update && sudo apt-get upgrade -y
curl -L https://www.opscode.com/chef/install.sh | sudo bash
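
You can confirm the installation succeeded before continuing:

chef-client --version
knife --version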

  3. Make sure the $USER environment variable is set on your VM.

echo $USER # check if already set
export USER=[your user name] # if not already set

  4. Copy the .pem files from chef-server to chef-workstation. You must make those files accessible on chef-server so that they can be copied.

On your local workstation, run:

eval `ssh-agent`
ssh-add ~/.ssh/google_compute_engine

SSH to chef-server and run:

sudo chmod 644 /etc/chef-server/admin.pem
scp /etc/chef-server/admin.pem $USER@chef-workstation:~
sudo chmod 600 /etc/chef-server/admin.pem
sudo chmod 644 /etc/chef-server/chef-validator.pem
scp /etc/chef-server/chef-validator.pem $USER@chef-workstation:~
sudo chmod 600 /etc/chef-server/chef-validator.pem

  5. Back on chef-workstation, put the .pem files into the right place.

sudo mkdir /etc/chef-server
sudo mv admin.pem /etc/chef-server
sudo chmod 600 /etc/chef-server/admin.pem
sudo mv chef-validator.pem /etc/chef-server
sudo chmod 600 /etc/chef-server/chef-validator.pem
sudo apt-get install git
git clone git://github.com/opscode/chef-repo.git

  6. Configure Knife. When you execute the knife command below, it will ask you a number of questions.
    1. For the server URL in the command, use the external IP address of your chef-server instance, which you can find in the Cloud Platform Console. The full server URL will be of the form https://[server's external IP address]:443.
    2. Set the cookbook_path to ~/chef-repo.
    3. Enter a password of at least 6 characters, or else the knife configure command will exit with an error.

On chef-workstation, run:

knife configure -i  # server: https://[server's external IP address]:443,  cookbook_path = ~/chef-repo

  7. Verify that Knife is now working on chef-workstation.

knife client list
knife user list

  8. Install knife-google on chef-workstation.

sudo /opt/chef/embedded/bin/gem install knife-google

  9. Set up OAuth 2.0 through the Cloud Platform Console.
  10. Run the following commands on chef-workstation.

knife google setup # use Project ID, Client ID, and Client Secret from above
knife google server list -Z us-central1-a
gcutil ssh `hostname -s`  # generates ~/.ssh/google_compute_engine keys if they do not already exist

knife google server create google-compute-engine-1 \
 -m n1-standard-1 -I [image of your choosing] \
 -Z [zone of your choosing] -x $USER -i ~/.ssh/google_compute_engine

knife google server create google-compute-engine-2 \
 -m n1-standard-1 -I [image of your choosing] \
 -Z [zone of your choosing] -x $USER -i ~/.ssh/google_compute_engine
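
Once both create commands complete, the new nodes should be registered with the Chef server; you can confirm from chef-workstation:

knife node list
knife google server list -Z [zone of your choosing]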


Setting up the cookbook

  1. The easiest way to get started is by using a pre-existing open source cookbook. Run the following commands on chef-workstation.

knife cookbook site install apt
knife cookbook site install apache2
knife cookbook upload apt apache2

  2. To run the cookbooks on your nodes, you must upload them to the server and add them to your nodes' run lists. To do this, run the following commands from chef-workstation.

knife cookbook upload apt apache2 -o ~/chef-repo/cookbooks/
knife node run_list add google-compute-engine-1 'apt' 'apache2'
knife node run_list add google-compute-engine-2 'apt' 'apache2'

  3. You can manually execute the recipes by running chef-client on each of the nodes. This command contacts the server, gets the latest recipes, and executes them. To do this, ssh into each of the nodes and run the following command.

sudo chef-client
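
After chef-client completes on both nodes, each node should serve the default Apache page. From your local workstation, you can check with curl, substituting each node's external IP address from the Cloud Platform Console:

curl http://[node external IP address]/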

Getting started with Salt on Compute Engine

Set up the Salt Master

This section assumes you will be running the Salt Master within Compute Engine. The v2014.1.0 (Hydrogen) release of Salt and v0.14.1 of libcloud are required.

Set up Compute Engine credentials
  1. Make sure you have a Service Account client ID. You will need the Service Account's generated email address (e.g. 'long-hash-address@developer.gserviceaccount.com') and the private key file. You can use the Google Cloud Platform Console to create the Service Account.
  2. Create a copy of the private key in a format allowed by the Python cryptographic library. You can use the openssl command (http://www.openssl.org/) to convert the format. (Note that when prompted, the default password is notasecret.)

    openssl pkcs12 -in long-hash.p12 -nodes -nocerts \
       | openssl rsa -out /path/to/your/pkey.pem
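
    You can sanity-check the converted key before using it:

    openssl rsa -in /path/to/your/pkey.pem -check -noout
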
Set up the Salt Master and configure Salt-Cloud

Salt is normally run as the root super-user. Unless otherwise noted, all commands below assume you have already become the root user.

  1. Create a Compute Engine instance for the Salt Master. This can be done in the Cloud Platform Console or with gcutil addinstance. When creating the new instance, it is recommended that you use salt as the instance's name. This is the expected default and will seamlessly work with Compute Engine's internal DNS resolution.
  2. Connect to the Salt Master with gcutil ssh.
  3. Install Salt with the bootstrap script.
    curl -o bootstrap.sh -L http://bootstrap.saltstack.org
    sh bootstrap.sh -M -N git v2014.1.0

  4. Create the main "cloud" configuration file for Salt. You need:
  1. Project ID
  2. Service Account email address (ends with @developer.gserviceaccount.com)
  3. Location of your converted private key
    cat > /etc/salt/cloud <<EOF
    providers:
     gce-config:
       project: 'YOUR_PROJECT_ID'
       service_account_email_address: '...@developer.gserviceaccount.com'
       service_account_private_key: '/path/to/your/pkey.pem'
       provider: gce
    EOF
  5. Create the cloud profile config file. This example specifies a profile for minions that report to the master named salt.

    cat > /etc/salt/cloud.profiles <<EOF
    salt_minion:
     minion:
       master: salt
     image: debian-7-wheezy-v20131120
     size: n1-standard-1
     location: us-central2-a
     make_master: False
     deploy: True
     tags: '["minion", "salt"]'
     provider: gce-config
    EOF
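
    Before creating minions, you can verify that salt-cloud can reach Compute Engine with these credentials; for example, list what the provider exposes (a sketch using standard salt-cloud query options):

    salt-cloud --list-locations gce-config
    salt-cloud --list-images gce-config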


  6. Make sure your local root user has a Compute Engine ssh key defined with the metadata server.
    gcutil ssh --permit_root_ssh $(hostname -s)
  1. When prompted, create a passphrase for the new key.
  2. When the command completes, it will have logged you in to the instance again.
  3. Exit the gcutil ssh session to get back to your original login session.
  7. Create and provision three minions with the salt_minion profile. The command will create the Compute Engine instances, generate certificates, and execute a deploy script on the new instances to provision them as Salt minions.
    salt-cloud -p salt_minion minion{1..3}
  8. On the master, verify that the minions can be reached with a simple ping test. You should see a True response for each minion.
    salt 'minion*' test.ping
Managing minions from the master

Now that the master and minions have been created, all minion management will be performed from the master. You will create the configuration files to install an Apache web server on each minion along with a custom index.html page.

  1. Log into the master as root.
  2. Create the directory tree for the configuration files.
    mkdir -p /srv/salt/webserver
  3. Create a basic top.sls state file to match your webserver configuration to your minions’ hosts.
    cat > /srv/salt/top.sls <<EOF
    base:
     'minion*':
       - webserver
    EOF

  4. Create the webserver state file, which is used to ensure the apache2 Debian package is installed, the service is running, and the custom index.html is deployed.
    cat > /srv/salt/webserver/init.sls <<EOF
    apache2:
     pkg:
       - installed
     service:
       - running

    /var/www/index.html:
     file.managed:
       - source: salt://webserver/index.html
       - user: root
       - group: root
       - mode: 644
       - template: jinja
       - require:
         - pkg: apache2
    EOF
  5. Write out the custom index.html file as a Jinja template that utilizes the minion's local grains.
    cat > /srv/salt/webserver/index.html <<EOF
    <html>
    <title>Welcome to '{{ grains.id }}'</title>
    <body>
    Here are some facts auto-generated from local grains.
    <pre>
       hostname: {{ grains.id }}
       eth0 ip: {{ grains.fqdn_ip4[0] }}
       my master: {{ grains.master }}
       manufacturer: {{ grains.manufacturer }}
    </pre>
    </body>
    </html>
    EOF

  6. Apply the configuration across all minions.
    salt 'minion*' state.highstate
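
    To spot-check the result from the master, you can run an ad hoc command on every minion over the Salt transport; for example, fetch the generated page locally on each minion:

    salt 'minion*' cmd.run 'curl -s http://localhost/'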

Getting started with Ansible on Compute Engine

Ansible depends on libcloud version 0.14.1 or greater.

Set up Compute Engine credentials

  1. Make sure you have a Service Account client ID. You will need the Service Account's generated email address (e.g. 'long-hash-address@developer.gserviceaccount.com') and the private key file. You can use the Google Cloud Platform Console to create the Service Account.
  2. Create a copy of the private key in a format allowed by the Python cryptographic library. You can use the openssl command (http://www.openssl.org/) to convert the format. (Note that when prompted, the default password is notasecret.)
    openssl pkcs12 -in long-hash.p12 -nodes -nocerts \
       | openssl rsa -out /path/to/your/pkey.pem

Set up Ansible

  1. The steps below are based on the instructions for Running from Source.
    git clone git://github.com/ansible/ansible.git
    cd ./ansible
    source ./hacking/env-setup
  2. Create a static inventory file and set your environment.
    cat > ~/inv.ini << EOF
    [local]
    localhost

    [gce_instances]
    www1
    www2
    EOF
    export ANSIBLE_HOSTS=~/inv.ini
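
    A quick sanity check that the inventory and your source checkout are working is to ping the local group over a local connection:

    ansible local -m ping --connection=local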

  3. Create a sample playbook file named gce.yml. The playbook creates two Debian Compute Engine instances, installs the Apache web server on each instance along with a custom index.html page, and sets a firewall rule to open up HTTP traffic. Note that the Compute Engine instance names and the "play" (a grouping of tasks within the playbook) that installs software on them match the [gce_instances] inventory section in the static inventory file created in the previous step.

- name: Create Compute Engine instances
  hosts: local
  gather_facts: no
  vars:
    names: www1,www2
    machine_type: n1-standard-1
    image: debian-7
    zone: us-central1-a
    pid: YOUR_PROJECT_ID
    email: YOUR_SERVICE_ACCOUNT_EMAIL
    pem: /path/to/your/pkey.pem
  tasks:
    - name: Launch instances
      local_action: gce instance_names="{{ names }}"
                    machine_type="{{ machine_type }}"
                    image="{{ image }}" zone="{{ zone }}"
                    project_id="{{ pid }}" pem_file="{{ pem }}"
                    service_account_email="{{ email }}"
      register: gce
    - name: Wait for SSH to come up
      local_action: wait_for host="{{ item.public_ip }}" port=22 delay=10
                    timeout=60 state=started
      with_items: gce.instance_data

- name: Install apache, set a custom index.html
  hosts: gce_instances
  sudo: yes
  tasks:
    - name: Install apache
      apt: pkg=apache2 state=present
    - name: Create custom index.html
      copy: dest=/var/www/index.html content='Hi, I am {{ ansible_hostname }}'
    - name: set file stats on index.html
      file: path=/var/www/index.html owner=root group=root mode=0644
    - name: Start apache
      service: name=apache2 state=started

- name: Create a firewall rule to allow HTTP
  hosts: localhost
  gather_facts: no
  vars:
    pid: YOUR_PROJECT_ID
    email: YOUR_SERVICE_ACCOUNT_EMAIL
    pem: /path/to/your/pkey.pem
  tasks:
    - name: Allow HTTP
      local_action: gce_net fwname=all-http name=default allowed=tcp:80
                    project_id="{{ pid }}" pem_file="{{ pem }}"
                    service_account_email="{{ email }}"

  4. For testing purposes, you may wish to disable SSH host key verification with an environment variable.
    export ANSIBLE_HOST_KEY_CHECKING=False
  5. Now execute the ansible-playbook command to perform the plays specified in the gce.yml file. If you are using the default Compute Engine ssh keys, specify the key with the --private-key argument:
    ansible-playbook --private-key=~/.ssh/google_compute_engine ~/gce.yml
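
    When the playbook finishes, www1 and www2 should each serve the custom page. Look up their external IP addresses and fetch the page (the IP address shown is a placeholder):

    gcutil listinstances
    curl http://[instance external IP]/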

[3] If you are uncertain of your internal domain, execute hostname --domain on the Puppet master instance.
