Tidbits from creating a Chef Delivery application pipeline

This post contains an assortment of things I learned whilst creating a Chef Delivery pipeline for a nodejs application, presented in no particular order. It’s really the post that never grew up. I’m assuming that you have at least worked through the Chef Delivery tutorial before trying to read this.

First, some overall context

The examples I use in this post come from the Delivery pipeline for a nodejs application with a backend MongoDB database. Build is done with grunt, test with jasmine, deployment with Chef.

We manage a number of application-specific cookbooks in the same git repo as our application code.  These cookbooks and the application code travel down the same delivery pipeline. Other more general cookbooks that we use have their own pipelines. At some point, we might specify these as delivery dependencies for the application pipeline, but we have not yet done so.

Our git repo looks something like this:

.delivery/                 # build cookbook & delivery config
    build-cookbook/
    config.json
cookbooks/                 # our application-specific cookbooks             
public/                    # our client code (javascript)
server/                    # our server code (nodejs)
spec/                      # application unit and func tests (jasmine)
topologies/                # knife-topo JSON files describing deployments
Gruntfile                  # our grunt build scripts

We use delivery-truck to manage the cookbooks through the pipeline. The Chef Delivery tutorial uses delivery-truck with a single-cookbook repo, but it also handles multi-cookbook repos like ours (as long as the cookbooks are in a top-level cookbooks directory).

We wrote our own build cookbook logic (not surprisingly) to manage our application code through the pipeline. In general, that logic consists of (1) what is needed to set up the build tools and (2) a thin wrapper around existing build and test scripts.

An example of one of our phase recipes is the following, for publish:

include_recipe 'build-cookbook::_run_config'
include_recipe 'delivery-truck::publish'
include_recipe 'topology-truck::publish'
include_recipe 'build-cookbook::_publish_app'

The _run_config recipe does some standard setup in each phase.
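
We won't reproduce our full _run_config here, but a minimal sketch of that kind of shared per-phase setup recipe might look like the following. This is illustrative, not our actual recipe; the use of run_state is just one way to share values:

# A sketch of a shared setup recipe included by every phase recipe.
# One place you could include the set_log_level recipe described
# later in this post.
include_recipe 'build-cookbook::set_log_level'

# Stash a few common values so later resources and recipes can reuse them
node.run_state['repo_path']  = node['delivery']['workspace']['repo']
node.run_state['build_user'] = node['delivery_builder']['build_user']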

The delivery-truck::publish recipe looks for any changed cookbooks, and if there are any, it uploads them to the Chef Server.

The topology-truck cookbook is a work-in-progress that we use to provision and configure our application deployments: its topology-truck::publish recipe looks for changes to the topology JSON files, and if there are any, it uploads them to the Chef Server (as data bag items) using the knife-topo plugin.

The _publish_app recipe packages up the application code, and publishes the package to an internal repository so that it can be used in the later deploy phase.
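
The detail of _publish_app is specific to our application, but the general shape is roughly as follows. This is a simplified sketch, not our actual recipe: the bucket, package name and use of the AWS CLI are illustrative, and it assumes the AWS credentials have been written under the workspace as shown later in this post.

repo_path  = node['delivery']['workspace']['repo']
build_user = node['delivery_builder']['build_user']
sha        = node['delivery']['change']['sha']
pkg        = "myapp-#{sha}.tar.gz"

# Package up the client and server code
execute 'package application' do
  command "tar -czf /tmp/#{pkg} public server"
  cwd repo_path
  user build_user
end

# Publish the package so later phases can retrieve it
# (assumes the aws CLI is installed by the default recipe, and that
# HOME points at the directory containing the .aws credentials)
execute 'publish application package' do
  command "aws s3 cp /tmp/#{pkg} s3://myapp-packages/#{pkg}"
  user build_user
  environment('HOME' => node['delivery']['workspace_path'])
end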

In this pipeline, we do not literally push to production in the ‘Delivered’ stage. Instead, the output is a build of the application that is ready to be deployed in production. For us, the ‘deploy’ actions in this stage are more a final ‘publish’.

Default recipe – install build tools for use in multiple stages/phases

In each stage, Chef Delivery runs the build cookbook default recipe with superuser privileges, and then runs the specific phase recipe as a specific user (dbuild). The dbuild user has write access to the filesystem in the project workspace, but cannot install software globally.

Quite often, there will be some build tools that you either have to install with superuser access, or that it makes sense to install once for use across multiple phases (in which case, you probably do not want to share build nodes across projects). You will need to use the default recipe to set up these tools.

Here’s an example from our default recipe, where we install some prerequisite packages, nodejs and the grunt client:

# include apt as it's important we get up to date git
include_recipe 'apt'

%w(g++ libkrb5-dev make git libfontconfig).each do |pkg|
  package pkg
end

include_recipe 'nodejs::default'

# enable build to run grunt by installing client globally
nodejs_npm 'grunt-cli' do
  name 'grunt-cli'
  version node['automateinsights']['grunt-cli']['version']
end

If you used the above, you would discover that phases can take some considerable time to complete, even if there is no work to be done. What’s going on?

It turns out that the file cache path is being set to a path in the unique workspace for that particular project, stage and phase (e.g. /var/opt/delivery/workspace/33.33.33.11/test/test/mvt/master/acceptance/provision/cache). In each stage and phase the default recipe (via the ark resource) downloads the nodejs package to that unique directory. That can be a lot of slow downloads, even if the correct version of nodejs is already installed.

We prevent this by changing the cache path, for the default recipe only:

# Force cache path to a global path so default recipe downloads
# e.g. nodejs don't get put in each local cache
Chef::Config[:file_cache_path] = '/var/chef/cache'

Phase recipes – beware community cookbooks needing superuser

Most community cookbooks assume they will be run with superuser privileges. You may encounter issues if you try to use them in phase recipes rather than in the default recipe.

For example, we usually use the nodejs_npm resource from the nodejs cookbook to install npm packages for our application. It should be fine to do this in the phase recipe, because we’re installing the npm packages locally. Unfortunately, the nodejs_npm resource always checks whether nodejs is installed by running the install recipe. This action fails without superuser privileges, even if nodejs is already installed.

Basically, you will often end up using the execute resource in the phase recipes e.g.:

repo_path = node['delivery']['workspace']['repo']
build_user = node['delivery_builder']['build_user']

# Install test dependencies. We cannot use nodejs_npm here because
# it would try to install nodejs and fail due to lack of permissions
execute 'install dependencies' do
  command 'npm install'
  cwd repo_path
  user build_user
end

A little bit of sugar

The delivery-sugar cookbook has some useful helpers for your recipes. Include it in your build cookbook’s metadata.rb to use it.
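
As a point of reference, a build cookbook metadata.rb that declares the cookbooks used in this post would look roughly like this (the version is illustrative):

name    'build-cookbook'
version '0.1.0'

depends 'delivery-truck'
depends 'delivery-sugar'
depends 'topology-truck'
depends 'nodejs'
depends 'chef-vault'
depends 'apt'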

change.stage is useful if you need conditional logic depending on the stage (e.g. your logic for provision or deploy may vary in Acceptance, Union, Rehearsal, Delivered).

change.changed_cookbooks and change.changed_files are useful if you need logic dependent on what has changed in the commit.
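
For example, here is a sketch of the kind of conditional logic we mean, using the helpers as named above (the _smoke_tests recipe and the file patterns are hypothetical):

# Run an extra recipe only in the Acceptance stage
include_recipe 'build-cookbook::_smoke_tests' if change.stage == 'acceptance'

# Only publish the application package if application code
# (not just cookbooks) changed in this commit
app_changed = change.changed_files.any? do |f|
  f.start_with?('public/', 'server/')
end
include_recipe 'build-cookbook::_publish_app' if app_changed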

Probably the most useful sugar is with_server_config, illustrated in the next section.

Accessing the Chef Server from phase recipes

The build cookbook recipes are run using chef-client local mode. If you try to do a node search or access a data bag in the build cookbook recipes, it will not work as you intend: the query runs against the local chef-zero instance rather than the Chef Server. Luckily delivery-sugar can help here, too. The with_server_config helper will let you do what you want. It temporarily changes the Chef::Config to point to the Chef Server, and then switches it back to the local context.

Here’s a recipe where we use Chef Vault to set up AWS credentials on a build node. The with_server_config block executes at compile time in the context of the Chef Server to retrieve the credentials. The rest of the recipe executes in the local context.

# setup credentials to push package to S3
include_recipe 'chef-vault'

root_path = node['delivery']['workspace_path']
aws_dir = File.join(root_path, '.aws')

creds = {}
with_server_config do
  creds = chef_vault_item('test', 'aws_creds')
end
build_user = node['delivery_builder']['build_user']

directory aws_dir do
  owner build_user
  mode '0700'
end

template File.join(aws_dir, 'config') do
  source 'awsconfig.erb'
  owner build_user
  mode '0400'
  variables(
    user: build_user,
    region: node['myapp']['download_bucket_region']
  )
end

template File.join(aws_dir, 'credentials') do
  source 'awscredentials.erb'
  owner build_user
  sensitive true
  mode '0400'
  variables(
    user: build_user,
    access_id: creds['aws_access_key_id'],
    secret_key: creds['aws_secret_access_key']
  )
end

If you want to do something similar to the above using Chef Vault, remember to add the build nodes to the search query for the vault; otherwise they will not be able to decrypt the vault item.

Set your own log level

By default, Chef Delivery displays output from each phase at the WARN log level. You may well want to use a different level. In your build cookbook, create a recipe, e.g. recipes/set_log_level.rb:

# Set config for all runs
Chef::Log.level(node['delivery']['config']['myapp']['log_level'].to_sym)

Set the default log level that you want in an attribute file in the build cookbook, e.g. attributes/default.rb:

default['delivery']['config']['myapp']['log_level'] = :info

Include the set_log_level recipe in all of the phase recipes (unit.rb, lint.rb, syntax.rb, etc.) and in the default recipe:

include_recipe 'build-cookbook::set_log_level'
...

You can then override the default value for log level in the .delivery/config.json file:

{
  "version":"2",
  "build_cookbook":{
    "path":".delivery/build-cookbook",
    "name":"build-cookbook"
  },
  "myapp" : {
    "log_level": "debug"
  },
  ...
}

Node attributes available to phase recipes

The following is an example of the node attributes passed into a Delivery build run and thus made available to build cookbook recipes. node['delivery_builder']['build_user'] and node['delivery']['workspace']['repo'] are particularly useful, as is the ability to pass in attributes via the delivery config.json and have them appear under node['delivery']['config'], as shown in the previous section.

{
    "delivery":{
        "workspace_path":"/var/opt/delivery/workspace",
        "workspace":{
            "root":"/var/opt/delivery/workspace/33.33.33.11/test/test/mvt/master/acceptance/provision",
            "chef":"/var/opt/delivery/workspace/33.33.33.11/test/test/mvt/master/acceptance/provision/chef",
            "cache":"/var/opt/delivery/workspace/33.33.33.11/test/test/mvt/master/acceptance/provision/cache",
            "repo":"/var/opt/delivery/workspace/33.33.33.11/test/test/mvt/master/acceptance/provision/repo",
            "ssh_wrapper":"/var/opt/delivery/workspace/33.33.33.11/test/test/mvt/master/acceptance/provision/bin/git_ssh"
        },
        "change":{
            "enterprise":"test",
            "organization":"test",
            "project":"mvt",
            "pipeline":"master",
            "change_id":"590651e5-5c5e-4762-be32-900840ba373f",
            "patchset_number":"latest",
            "stage":"acceptance",
            "phase":"provision",
            "git_url":"ssh://builder@test@33.33.33.11:8989/test/test/mvt",
            "sha":"8403835f4fb718e504a5bffd265afcf640807fcb",
            "patchset_branch":""
        },
        "config":{
                ... as specified in config.json...
        }
    },
    "delivery_builder":{
        "workspace":"/var/opt/delivery/workspace/33.33.33.11/test/test/mvt/master/acceptance/provision",
        "repo":"/var/opt/delivery/workspace/33.33.33.11/test/test/mvt/master/acceptance/provision/repo",
        "cache":"/var/opt/delivery/workspace/33.33.33.11/test/test/mvt/master/acceptance/provision/cache",
        "build_id":"deprecated",
        "build_user":"dbuild"
    }
}

 
