This guide covers:
- Baking in a Pipeline
- Advanced Options
- Bake and Copy vs Multi-Region Bake
- Custom Bake Scripts
- Baking Using chroot
- Caching Bakes
Definition: The term ‘Baking’ is used within Spinnaker to refer to the process of creating machine images.
Prerequisites and assumptions:
- You are familiar with creating applications and pipelines
- You are deploying to Amazon Web Services (AWS)
Baking in a Pipeline
First let’s go through an example of baking, then we’ll dig into the details.
In this example we will bake an image containing a Debian package created by a Jenkins job. If you like, you can check out the working with Jenkins guide for more information on how Jenkins and Spinnaker can work together. We also have a guide on creating Debian packages.
First let’s look at the Jenkins job that builds our package.
As you can see, the last run archived a package named
Now let’s go to Spinnaker and create a new pipeline. I named mine
On the configuration stage, I scroll down and add a new Jenkins automated trigger, select my master, and then select the Jenkins job shown above (armory/job/armory-hello-deploy/job/master). For this example, there is no need to worry about setting a properties file.
Next I add a bake stage (Add Stage -> Type: Bake) and select the us-west-2 region. For the Package field, I enter the base name of my package. In my case, the entire package filename is
armory-hello-deploy_0.5.0-h5.c4baff4_all.deb, so I input
armory-hello-deploy. I also know that my package is built for Ubuntu 14, so I make sure to select the ‘trusty (v14.04)’ option.
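Debian package filenames follow the `name_version_arch.deb` convention, which is how the base name is separated from the version you see flowing through the bake. Here is a minimal sketch of that split in Python (illustrative only, not Spinnaker’s actual parser):

```python
def parse_deb_filename(filename):
    """Split a Debian package filename (name_version_arch.deb)
    into its (name, version, architecture) components."""
    if not filename.endswith(".deb"):
        raise ValueError("not a .deb filename: %s" % filename)
    stem = filename[: -len(".deb")]
    # Underscores separate the three parts of a Debian package filename.
    name, version, arch = stem.split("_")
    return name, version, arch

# The package from the example above:
name, version, arch = parse_deb_filename("armory-hello-deploy_0.5.0-h5.c4baff4_all.deb")
assert name == "armory-hello-deploy"
assert version == "0.5.0-h5.c4baff4"
assert arch == "all"
```

The base name (`armory-hello-deploy`) is what goes in the Package field; the version travels with the Jenkins artifact.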
At this point if I want to see more details about my bake, I can click the ‘View Bakery Details’ box. A new window will open up with the bakery logs. In my case, the first few lines look like:
Under the hood, Spinnaker is leveraging HashiCorp’s Packer tool to create images. The above is a Packer log file.
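To give a sense of what Packer consumes, here is a minimal amazon-ebs template in Packer’s JSON format. All values here (AMI ID, package name, usernames) are hypothetical, and this is not the template Spinnaker ships; it only illustrates the shape of a Packer build:

```json
{
  "builders": [{
    "type": "amazon-ebs",
    "region": "us-west-2",
    "source_ami": "ami-00000000",
    "instance_type": "t2.micro",
    "ssh_username": "ubuntu",
    "ami_name": "armory-hello-deploy-{{timestamp}}"
  }],
  "provisioners": [{
    "type": "shell",
    "inline": [
      "sudo apt-get update",
      "sudo apt-get install -y armory-hello-deploy"
    ]
  }]
}
```

The builder spins up an instance from the source AMI, the provisioner installs the package, and Packer snapshots the result into a new AMI, which matches the log output you see in the bakery details.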
The key point to note here is that the correct version of the package is passed down to the bakery.
After the bake is successful, I see:
Notice that the AMI name and ID are shown in the blue box in the lower right - in this example it is “armory-hello-deploy-all-20170329204459-trusty (ami-c78410a7)”.
If I press ‘Start Manual Execution’ again, since the package version hasn’t changed, it will reuse the same image rather than rebaking. The screen for that looks like:
Notice the whole pipeline ran for only ‘00:00’, and in the lower right Spinnaker says ‘No changes detected; reused existing bake’.
Advanced Options
You can do additional things like use a specific base AMI, specify your baked AMI’s name, use a custom Packer script, or pass variables to a Packer script.
If you would like to change the name of your AMI in AWS, you can do so by selecting the ‘Show Advanced Options’ checkbox in the Bake Stage Configuration. Continuing from our example above, I see:
What do all of these fields mean? Great question!
Changing an AMI’s name
If I were to instead input ‘mycustomname’ into the ‘AMI Name’ field, like:
After re-running the pipeline, I see that Spinnaker named the AMI
Then I add ‘mycustomsuffix’ to the ‘AMI Suffix’ field:
Repeating the bake, I see that Spinnaker named the AMI
Often you will want to specify a base image for use in your bake. In that case you will use the ‘Base AMI’ field, not to be confused with the ‘Base Name’ field. As an example, I have specified
In this situation, the base OS selection (ubuntu/trusty/windows) will be ignored.
You can also select a base AMI more dynamically by combining the ‘Bake’ stage type with the ‘Find Image’ stage type. For more details check out the Find Images Guide.
Adding Debian Repositories
It is common practice to use a base image throughout your team or organization. Usually this base image will be kept up to date with security patches and will contain common tools (DataDog, Splunk, etc.). It is also a good place to register your Debian repository’s GPG keys.
If you need to add repositories on a per-bake basis, you can use the ‘Extended Attributes’ within the ‘Advanced Options’ section. You can add a key/value pair where the key is labeled ‘repository’ and the value is a space-separated list of repository URLs. For example:
This will add Armory’s bintray debian repository to the bake.
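As an illustration of the key/value shape (the URLs here are hypothetical placeholders, not Armory’s actual repository):

```
Key:   repository
Value: https://apt.example.com/repo-one https://apt.example.com/repo-two
```

Each URL in the space-separated list is made available to the bake as an additional Debian repository.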
By selecting a region, you choose where the bake will take place. When a bake happens, an instance is spun up, the desired packages are installed, and then a snapshot is taken. The instance spins up in the region you select. Multiple regions may be selected at once.
When a bake step executes, Spinnaker looks for a previously created image before baking a new one. It uses the set of packages (and their versions) specified in the bake stage configuration to determine whether the bake is necessary.
You can force Spinnaker to always bake by selecting the ‘Rebake: Rebake image without regard to the status of any existing bake’ checkbox on the bake stage configuration screen. You also have the option to force rebaking when manually executing a pipeline.
Bake and Copy vs Multi-Region Bake
There are two options for getting an image to multiple regions in AWS. A common practice outside of Spinnaker is to create your AMI and then copy it to the regions you need. However, Spinnaker by default will do a multi-region bake. This means if you select more than one region, it will go through the process of creating an image in each region (spin up an instance, install the packages, etc.).
There are trade-offs to each approach. Generally, Spinnaker’s default multi-region bake approach is faster than the bake and copy approach. However, if you need to limit all baking activities to one region then there isn’t much of a choice.
Custom Bake Scripts
If you would like to use a custom Packer script to bake your AMI you will need to contact your Spinnaker Administrator. The script will have to be installed on your Spinnaker instances.
Baking Using chroot
The chroot builder for Packer allows you to bake an AMI without having to spin up a new instance. Instead, a new EBS volume is attached to the running Spinnaker instance, chroot is executed on the new volume, Packer installs the required packages on the volume, a snapshot is taken, and then the volume is cleanly detached. To enable chroot-style baking, you’ll need to configure rosco with some additional properties. Add the following to your docker-compose configuration:
```yaml
version: "2.1"
services:
  rosco:
    privileged: true
    volumes:
      - /dev:/dev
```
Armory Spinnaker comes with a default chroot template named
aws-chroot.json, stored with your other Packer templates in
/opt/spinnaker/config/packer. Make sure to use this template (or a user-provided template) when configuring your bake stage.
Note: There are a few “gotchas” with chroot builders.
Caching Bakes
To save time, Spinnaker caches bakes and will not re-run a bake if it finds the bake key in its cache. When Spinnaker bakes a package it creates a unique key based on the following components: Cloud Provider Type, Base OS, Base AMI, AMI Name, Packer Template Filename, Var Filename, Package Name, and Package Version. If any of those components change at the time of bake, Spinnaker will rebake; otherwise it will use the cached AMI.
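The cache behavior can be thought of as a deterministic key built from those components: identical inputs yield the same key (cache hit), and changing any one component yields a new key (rebake). A small illustrative sketch in Python, not Rosco’s actual implementation:

```python
import hashlib

def bake_key(cloud_provider, base_os, base_ami, ami_name,
             template_file, var_file, package_name, package_version):
    """Build a deterministic cache key from a bake's inputs.
    If any component changes, the key changes and a rebake occurs."""
    components = [cloud_provider, base_os, base_ami, ami_name,
                  template_file, var_file, package_name, package_version]
    # Join with a separator so empty/None fields still produce a stable key.
    joined = "|".join(c or "" for c in components)
    return hashlib.sha256(joined.encode("utf-8")).hexdigest()

key1 = bake_key("aws", "trusty", None, None, "aws-ebs.json", None,
                "armory-hello-deploy", "0.5.0-h5.c4baff4")
key2 = bake_key("aws", "trusty", None, None, "aws-ebs.json", None,
                "armory-hello-deploy", "0.5.1-h6.0000000")
assert key1 != key2  # a new package version forces a rebake
```

This is why re-running the example pipeline earlier reused the existing bake: none of the key’s components had changed.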
Package Name and Version
By default, Spinnaker looks for an artifact from the Jenkins build that triggered the bake to parse out version information. Below are examples of valid artifact names:
subscriberha-1.0.0-h150
subscriberha-1.0.0-h150.586499
subscriberha-1.0.0-h150.586499/WE-WAPP-subscriberha/150
This allows you to specify just
subscriberha as the
package in your bake stage, and Spinnaker will determine which version to bake based on the Jenkins trigger chosen at execution time.
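The matching described above can be sketched as a simple prefix match that pulls the version out of the artifact name. This is illustrative only, not Spinnaker’s actual matcher:

```python
import re

def match_version(package, artifact):
    """Given the base package name from the bake stage, extract the
    version from a Jenkins artifact name such as
    'subscriberha-1.0.0-h150.586499'. Returns None on no match."""
    # Match '<package>-' at the start, then capture up to the first '/'
    # (trailing '/job/build' style suffixes are not part of the version).
    m = re.match(re.escape(package) + r"-([^/]+)", artifact)
    return m.group(1) if m else None

assert match_version("subscriberha", "subscriberha-1.0.0-h150") == "1.0.0-h150"
assert match_version(
    "subscriberha",
    "subscriberha-1.0.0-h150.586499/WE-WAPP-subscriberha/150"
) == "1.0.0-h150.586499"
assert match_version("subscriberha", "otherpackage-2.0.0") is None
```

All three valid artifact names listed above resolve to the same base package with progressively more specific version strings.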
When you have a failing bake step and you do not know why, a good place to start is the bakery log. You can find a link to the bakery log in the Details of your bake step on the pipeline execution screen.
Click the link that says ‘View Bakery Details’. It can be helpful to track down the last command that the bakery executed.
Another source of information is the pipeline’s source link. You can find this link in small writing in the far bottom right of the pipeline execution detail screen. The ‘source’ is a JSON representation of the data generated by Spinnaker when running your pipeline (not just the bake step).