Installing Armory in GKE using the Armory Operator

Note: This guide is a work in progress.

This guide contains instructions for installing Armory on a GKE Cluster using the Armory Operator. Refer to the Armory Operator Reference for manifest entry details.


This document is written with the following workflow in mind:

  • You have a machine configured to use the gcloud CLI tool and a recent version of the kubectl tool
  • You have logged into the gcloud CLI and have permissions to create GKE clusters and a service account

Installation summary

Installing Armory using the Armory Operator consists of the following steps:

  • Create a cluster where Armory and the Armory Operator will reside
  • Set up the Armory Operator CRDs (custom resource definitions)
  • Deploy Armory Operator pods to the cluster
  • Create a GCS service account
  • Create a Kubernetes service account
  • Create a GCS storage bucket
  • Modify the Armory Operator Kustomize files for your installation
  • Deploy Armory through the Armory Operator

Create GKE cluster

This creates a minimal GKE cluster in your default region and zone.

gcloud container clusters create spinnaker-cluster
export KUBECONFIG=kubeconfig-gke
gcloud container clusters get-credentials spinnaker-cluster

Check that namespaces have been created:

kubectl --kubeconfig kubeconfig-gke get namespaces

Output is similar to:

NAME              STATUS   AGE
default           Active   2m24s
kube-node-lease   Active   2m26s
kube-public       Active   2m26s
kube-system       Active   2m26s

Set up Armory Operator CRDs

Fetch the Armory Operator manifests:

# To fetch a specific release instead of the latest, set RELEASE, for example:
# RELEASE=v0.3.2 bash -c 'curl -L https://github.com/armory-io/spinnaker-operator/releases/download/${RELEASE}/manifests.tgz | tar -xz'

bash -c 'curl -L https://github.com/armory-io/spinnaker-operator/releases/latest/download/manifests.tgz | tar -xz'

Apply the custom resource definitions:

kubectl apply -f deploy/crds/

Output shows a `created` line for each CRD (the SpinnakerService and SpinnakerAccount definitions).

Deploy Armory Operator

This step creates the spinnaker-operator namespace and deploys the Armory Operator pods.

kubectl create ns spinnaker-operator

kubectl -n spinnaker-operator apply -f deploy/operator/cluster

Output is similar to:

deployment.apps/spinnaker-operator created
configmap/halyard-config-map created
serviceaccount/spinnaker-operator created

along with `created` lines for the RBAC objects the manifests define.

Create GCS service account

export SERVICE_ACCOUNT_NAME=<name-for-your-service-account>
export SERVICE_ACCOUNT_FILE=<name-for-your-service-account.json>
export PROJECT=$(gcloud info --format='value(config.project)')

gcloud --project ${PROJECT} iam service-accounts create ${SERVICE_ACCOUNT_NAME} \
    --display-name ${SERVICE_ACCOUNT_NAME}

SA_EMAIL=$(gcloud --project ${PROJECT} iam service-accounts list \
    --filter="displayName:${SERVICE_ACCOUNT_NAME}" \
    --format='value(email)')
gcloud --project ${PROJECT} projects add-iam-policy-binding ${PROJECT} \
    --role roles/storage.admin --member serviceAccount:${SA_EMAIL}

mkdir -p $(dirname ${SERVICE_ACCOUNT_FILE})

gcloud --project ${PROJECT} iam service-accounts keys create ${SERVICE_ACCOUNT_FILE} \
    --iam-account ${SA_EMAIL}

Create Kubernetes service account

CONTEXT=$(kubectl config current-context)

# This service account uses the ClusterAdmin role -- this is not necessary;
# more restrictive roles can be applied.
curl -s | \
  sed "s/spinnaker-service-account/${SERVICE_ACCOUNT_NAME}/g" | \
  kubectl apply --context $CONTEXT -f -

TOKEN=$(kubectl get secret --context $CONTEXT \
   $(kubectl get serviceaccount ${SERVICE_ACCOUNT_NAME} \
       --context $CONTEXT \
       -n spinnaker \
       -o jsonpath='{.secrets[0].name}') \
   -n spinnaker \
   -o jsonpath='{.data.token}' | base64 --decode)

kubectl config set-credentials ${CONTEXT}-token-user --token $TOKEN

kubectl config set-context $CONTEXT --user ${CONTEXT}-token-user

Create GCS bucket

Use the Cloud Console to create your bucket. If you plan to put secrets in the bucket, create a secrets directory in it. Also, ensure the bucket is accessible from the service account you created.
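If you prefer the CLI, the same setup can be sketched with gsutil. The bucket name is a placeholder, and ${PROJECT} and ${SA_EMAIL} are the variables defined in the service account section above:

```shell
# Create the bucket in your project (bucket names are globally unique)
gsutil mb -p ${PROJECT} gs://<your-bucket-name>

# GCS has no real directories; the secrets/ prefix appears once an object uses it
echo placeholder | gsutil cp - gs://<your-bucket-name>/secrets/.placeholder

# Give the service account access to the bucket
gsutil iam ch serviceAccount:${SA_EMAIL}:roles/storage.admin gs://<your-bucket-name>
```
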

Customize the Kustomize Files


  • Update the Armory version to deploy

  • Set the persistent storage type, bucket, rootFolder, project, and jsonPath (pick something unique)

  • Add `gcs` to the config patch

      version: 2.19.8
      persistentStorage:
        persistentStoreType: gcs
        gcs:
          bucket: <your-bucket-name>
          rootFolder: front50
          project: <your-project-name>
          jsonPath: <your-unique-gcs-account.json>


Under files, add an entry for your-unique-gcs-account.json. Its content comes from the GCS service account key you created above. The file is named gcs-account.json in the following example:

  gcs-account.json: |
    {
      "type": "service_account",
      "project_id": "cloud-project",
      "private_key_id": "cf04d5d545bOTHERSTUFFHERE9f9d134f",
      "private_key": "-----BEGIN PRIVATE KEY-----\nSTUFF HERE\n-----END PRIVATE KEY-----\n",
      "client_email": "<your-client-email>",
      "client_id": "<your-client-id>",
      "auth_uri": "",
      "token_uri": "",
      "auth_provider_x509_cert_url": "",
      "client_x509_cert_url": "<your-cert-url>"
    }

Add the Kubernetes provider account

There are a few ways to do this with the Armory Operator. This guide uses the typical approach of configuring the account under config. The Account CRD is likely how this will be done in the future.

Update the config-patch.yml with the provider accounts:

      providers:
        kubernetes:
          enabled: true
          accounts:
          - name: spinnaker
            kubeconfigFile: gke-kubeconfig
            providerVersion: V2
            serviceAccount: false
            onlySpinnakerManaged: false
          primaryAccount: spinnaker

There are a few ways to get that kubeconfig file into the config. From least to most secure, you can put the kubeconfig:

  • In the files-patch.yml (as the example below shows)
  • In a Kubernetes secret for the Spinnaker namespace
  • In a bucket
  • In Vault

For the first option, the gke-kubeconfig file is then added into the files-patch.yml like this:

gke-kubeconfig: |
  apiVersion: v1
  kind: Config
  clusters:
  - cluster:
      certificate-authority-data: LS0tLSSTUFFo=
    name: gke_cloud-armory_us-central1-c_spinnaker-cluster

For the third option, the gke-kubeconfig file is copied to a bucket. Then the config-patch.yml references the location of that file for the kubeconfig file key like this:

kubeconfigFile: encryptedFile:gcs!b:bucketname!f:secrets/kubeconfig-gke
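For example, assuming the bucket name from the config snippet above, the copy can be sketched as:

```shell
# Copy the kubeconfig into the bucket's secrets/ prefix
gsutil cp kubeconfig-gke gs://bucketname/secrets/kubeconfig-gke
```
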

Install Kustomize (optional)

You can run kubectl apply -k to deploy Kustomize templates, but it can be more helpful to install the standalone Kustomize binary so that you can run kustomize build and inspect the YAML first. Note that a version of Kustomize is built into kubectl.

curl -s "https://raw.githubusercontent.com/kubernetes-sigs/kustomize/master/hack/install_kustomize.sh" | bash
sudo mv kustomize /usr/local/bin/

Deploy Armory using Kustomize

kubectl create ns <spinnaker-namespace>
kustomize build deploy/spinnaker/kustomize | kubectl -n <spinnaker-namespace> apply -f -
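To verify that the rollout is progressing, you can watch the pods come up in the namespace you created (the namespace name is a placeholder):

```shell
# Watch the Spinnaker services (clouddriver, deck, gate, etc.) start up
kubectl -n <spinnaker-namespace> get pods --watch
```
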

Configure ingress

The SpinnakerService.yml file contains an expose section that defines how a LoadBalancer object is set up to publicly expose Deck and Gate. See spec.expose for details.
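A minimal expose section looks roughly like the following sketch; this assumes the operator's service-based exposure and omits annotations and per-service overrides:

```yaml
spec:
  expose:
    type: service
    service:
      type: LoadBalancer
```
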

Configure authentication

To enable basic form authentication in Armory, as described in this KB article, you need to understand how your kustomization.yml file is configured. If you include the profiles-patch.yml, you are telling Kustomize to overwrite the profiles section of the config with entries for each of the components (clouddriver, deck, gate, etc.), so you can put all of the entries for those profile files into profiles-patch.yml.

Here is an example profiles-patch.yml with Kustomize turned on and basic form authentication configured.


apiVersion: spinnaker.armory.io/v1alpha2
kind: SpinnakerService
metadata:
  name: spinnaker  # name doesn't matter since this is a patch
spec:
  # spec.spinnakerConfig - This section is how to specify Spinnaker configuration
  spinnakerConfig:
    # spec.spinnakerConfig.profiles - This section contains the YAML of each service's profile
    profiles:
      clouddriver: {} # is the contents of ~/.hal/default/profiles/clouddriver.yml
      # deck has a special key "settings-local.js" for the contents of settings-local.js
      deck:
        # settings-local.js - contents of ~/.hal/default/profiles/settings-local.js
        # Use the | YAML symbol to indicate a block-style multiline string
        settings-local.js: |
          window.spinnakerSettings.feature.kustomizeEnabled = true;
          window.spinnakerSettings.feature.artifactsRewrite = true;
          window.spinnakerSettings.authEnabled = true;
      echo: {}    # is the contents of ~/.hal/default/profiles/echo.yml
      fiat: {}    # is the contents of ~/.hal/default/profiles/fiat.yml
      front50: {} # is the contents of ~/.hal/default/profiles/front50.yml
      gate:
        security:
          basicform:
            enabled: true
          user:
            name: spin
            password: spin4u99
      igor: {}    # is the contents of ~/.hal/default/profiles/igor.yml
      kayenta: {} # is the contents of ~/.hal/default/profiles/kayenta.yml
      orca: {}    # is the contents of ~/.hal/default/profiles/orca.yml
      rosco: {}   # is the contents of ~/.hal/default/profiles/rosco.yml

Alternatively, you can include a separate patch file for each component and then reference each one in the kustomization.yml file. See the Kustomize docs for details.

Configure Dinghy

Create a patch-dinghy.yml file with the following contents:

apiVersion: spinnaker.armory.io/v1alpha2
kind: SpinnakerService
metadata:
  name: spinnaker
spec:
  spinnakerConfig:
    config:
      armory:
        dinghy:
          enabled: true
          templateOrg: yourorg
          templateRepo: yourrepo
          githubToken: yourtoken
          dinghyFilename: dinghyfile
          autoLockPipelines: true
              enabled: false

Now add an entry to the end of kustomization.yml to include patch-dinghy.yml.
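Assuming the patches are applied with patchesStrategicMerge (the standard Kustomize mechanism for this kind of patch), the list would look roughly like this, with the file names taken from the examples above:

```yaml
patchesStrategicMerge:
- config-patch.yml
- files-patch.yml
- profiles-patch.yml
- patch-dinghy.yml
```
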

Other patch files

You can add additional patch files to turn on functionality. Examples are in the minnaker repository.

Set Up TLS