Configure Armory Continuous Deployment Using a Manifest File

This guide describes the fields in the SpinnakerService manifest that the Armory Operator uses to deploy Armory Continuous Deployment or the Spinnaker Operator uses to deploy Spinnaker.

This guide applies to both the Armory Operator and the Spinnaker Operator. Configuration is the same for Armory Continuous Deployment and Spinnaker, except for features that exist only in Armory Continuous Deployment; those features are marked Proprietary.

Before you begin

  • This guide assumes you want to expand the manifest file used in the Quickstart.
  • You know how to deploy Armory Continuous Deployment using a Kubernetes manifest file. See the Quickstart’s Single manifest file section. A reference apply command follows this list.
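
For reference, applying the manifest is a standard kubectl apply. The file name spinnakerservice.yml matches this guide; the spinnaker namespace is an assumption, so substitute your own:

$ kubectl -n spinnaker apply -f spinnakerservice.yml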

Kubernetes manifest file

The structure of the manifest file is the same whether you are using the Armory Operator or the Spinnaker Operator. The value of certain keys, though, depends on whether you are deploying Armory Continuous Deployment or Spinnaker. The following snippet is the first several lines from a spinnakerservice.yml manifest that deploys Armory Continuous Deployment.

apiVersion: spinnaker.armory.io/v1alpha2
kind: SpinnakerService
metadata:
  name: spinnaker
spec:
  spinnakerConfig:
    config:
      version: <version>
      persistentStorage:
        persistentStoreType: s3
        s3:
          bucket: <s3-bucket-name>
          rootFolder: front50
  • Line 1: apiVersion is the CRD version of the SpinnakerService custom resource.
    • If you are deploying Armory Continuous Deployment, the value is spinnaker.armory.io/v1alpha2; if you change this value, the Armory Operator won’t process the manifest file.
    • If you are deploying Spinnaker, the value is spinnaker.io/v1alpha2; if you change this value, the Spinnaker Operator won’t process the manifest file.
  • Line 8: spec.spinnakerConfig.config.version is the version of Armory Continuous Deployment or Spinnaker that you want to deploy.

The following skeleton manifest file is from the public armory/spinnaker-operator repo. You can use it to configure and deploy Spinnaker. Note that the apiVersion is the SpinnakerService CRD version used by the Spinnaker Operator.

apiVersion: spinnaker.io/v1alpha2
kind: SpinnakerService
metadata:
  name: spinnaker
spec:
  # spec.spinnakerConfig - This section is where you specify the Spinnaker configuration
  spinnakerConfig:
    # spec.spinnakerConfig.config - This section contains the contents of a deployment found in a halconfig .deploymentConfigurations[0]
    config:
      version: 1.28.1   # the version of Spinnaker to be deployed
      persistentStorage:
        persistentStoreType: s3
        s3:
          bucket: <change-me> # Change to a unique name. Spinnaker stores application and pipeline definitions here
          rootFolder: front50

    # spec.spinnakerConfig.profiles - This section contains the YAML of each service's profile
    profiles:
      clouddriver: {} # is the contents of ~/.hal/default/profiles/clouddriver.yml
      # deck has a special key "settings-local.js" for the contents of settings-local.js
      deck:
        # settings-local.js - contents of ~/.hal/default/profiles/settings-local.js
        # Use the | YAML symbol to indicate a block-style multiline string
        settings-local.js: |
                    window.spinnakerSettings.feature.kustomizeEnabled = true;
      echo: {}    # is the contents of ~/.hal/default/profiles/echo.yml
      fiat: {}    # is the contents of ~/.hal/default/profiles/fiat.yml
      front50: {} # is the contents of ~/.hal/default/profiles/front50.yml
      gate: {}    # is the contents of ~/.hal/default/profiles/gate.yml
      igor: {}    # is the contents of ~/.hal/default/profiles/igor.yml
      kayenta: {} # is the contents of ~/.hal/default/profiles/kayenta.yml
      orca: {}    # is the contents of ~/.hal/default/profiles/orca.yml
      rosco: {}   # is the contents of ~/.hal/default/profiles/rosco.yml

    # spec.spinnakerConfig.service-settings - This section contains the YAML of the service's service-setting
    # see https://www.spinnaker.io/reference/halyard/custom/#tweakable-service-settings for available settings
    service-settings:
      clouddriver: {}
      deck: {}
      echo: {}
      fiat: {}
      front50: {}
      gate: {}
      igor: {}
      kayenta: {}
      orca: {}
      rosco: {}

    # spec.spinnakerConfig.files - This section allows you to include any other raw string files not handled above.
    # The KEY is the filepath and filename of where it should be placed
    #   - Files here will be placed into ~/.hal/default/ on halyard
    #   - __ is used in place of / for the path separator
    # The VALUE is the contents of the file.
    #   - Use the | YAML symbol to indicate a block-style multiline string
    #   - We currently only support string files
    #   - NOTE: Kubernetes has a manifest size limitation of 1MB
    files: {}
  #      profiles__rosco__packer__example-packer-config.json: |
  #        {
  #          "packerSetting": "someValue"
  #        }
  #      profiles__rosco__packer__my_custom_script.sh: |
  #        #!/bin/bash -e
  #        echo "hello world!"

  # spec.expose - This section defines how Spinnaker should be publicly exposed
  expose:
    type: service  # Kubernetes LoadBalancer type (service/ingress), note: only "service" is supported for now
    service:
      type: LoadBalancer

      # annotations to be set on Kubernetes LoadBalancer type
      # they will only apply to spin-gate, spin-gate-x509, or spin-deck
      annotations:
        service.beta.kubernetes.io/aws-load-balancer-backend-protocol: http
        # uncomment the line below to provide an AWS SSL certificate to terminate SSL at the LoadBalancer
        #service.beta.kubernetes.io/aws-load-balancer-ssl-cert: arn:aws:acm:us-west-2:9999999:certificate/abc-123-abc

      # provide an override to the exposing KubernetesService
      overrides: {}
      # Provided below is the example config for the Gate-X509 configuration
#        deck:
#          annotations:
#            service.beta.kubernetes.io/aws-load-balancer-ssl-cert: arn:aws:acm:us-west-2:9999999:certificate/abc-123-abc
#            service.beta.kubernetes.io/aws-load-balancer-backend-protocol: http
#        gate:
#          annotations:
#            service.beta.kubernetes.io/aws-load-balancer-ssl-cert: arn:aws:acm:us-west-2:9999999:certificate/abc-123-abc
#            service.beta.kubernetes.io/aws-load-balancer-backend-protocol: https  # X509 requires https from LoadBalancer -> Gate
#        gate-x509:
#          annotations:
#            service.beta.kubernetes.io/aws-load-balancer-backend-protocol: tcp
#            service.beta.kubernetes.io/aws-load-balancer-ssl-cert: null
#          publicPort: 443

  validation: {}

  # Patching of generated service or deployment by Spinnaker service.
  # Like in Kustomize, several patch types are supported. See
  # https://github.com/armory/spinnaker-operator/blob/master/doc/options.md#speckustomize
  kustomize: {}
    # An example to change Gate's image name using a strategic merge patch
#    gate:
#      deployment:
#        patchesStrategicMerge:
#          - |
#            spec:
#              template:
#                spec:
#                  containers:
#                  - name: gate
#                    image: gate:1.0.0

Manifest sections

metadata.name

apiVersion: spinnaker.armory.io/v1alpha2
kind: SpinnakerService
metadata:
  name: spinnaker

metadata.name is the name of your Armory Continuous Deployment service. Use this name to view, edit, or delete Armory Continuous Deployment. The following example uses the name prod:

$ kubectl get spinsvc prod

Note that you can use spinsvc for brevity. You can also use spinnakerservices.spinnaker.armory.io (Armory Continuous Deployment) or spinnakerservices.spinnaker.io (Spinnaker).
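
The same name works with other kubectl verbs and with the long resource names; the prod name and spinnaker namespace below are examples, so substitute your own:

$ kubectl -n spinnaker edit spinsvc prod
$ kubectl -n spinnaker delete spinsvc prod
$ kubectl -n spinnaker get spinnakerservices.spinnaker.armory.io prod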

spec.spinnakerConfig

Contains the same information as the deploymentConfigurations entry in a Halyard configuration.

For example, given the following ~/.hal/config file:

currentDeployment: default
deploymentConfigurations:
- name: default
  version: 2.17.1
  persistentStorage:
    persistentStoreType: s3
    s3:
      bucket: mybucket
      rootFolder: front50

The equivalent of that Halyard configuration is the following spec.spinnakerConfig:

apiVersion: spinnaker.armory.io/v1alpha2
kind: SpinnakerService
metadata:
  name: spinnaker
spec:
  spinnakerConfig:
    config:
      version: 2.17.1
      persistentStorage:
        persistentStoreType: s3
        s3:
          bucket: mybucket
          rootFolder: front50

In addition to config, spec.spinnakerConfig contains the following sections:

spec.spinnakerConfig.profiles

Configuration for each service profile. This is the equivalent of ~/.hal/default/profiles/<service>-local.yml. For example, the following profile is for Gate:

spec:
  spinnakerConfig:
    config:
    ...
    profiles:
      gate:
        default:
          apiPort: 8085

Note that for Deck, the profile is a string under the key settings-local.js:

spec:
  spinnakerConfig:
    config:
    ...
    profiles:
      deck:
        settings-local.js: |
                    window.spinnakerSettings.feature.artifactsRewrite = true;

spec.spinnakerConfig.service-settings

Settings for each service. This is the equivalent of ~/.hal/default/service-settings/<service>.yml. For example, the following settings are for Clouddriver:

spec:
  spinnakerConfig:
    config:
    ...
    service-settings:
      clouddriver:
        kubernetes:
          serviceAccountName: spin-sa

spec.spinnakerConfig.files

Contents of any local files that should be added to the services. For example, to reference the contents of a kubeconfig file:

spec:
  spinnakerConfig:
    config:
      providers:
        kubernetes:
          enabled: true
          accounts:
          - name: cluster-1
            kubeconfigFile: cluster1-kubeconfig
            ...
    files:
      cluster1-kubeconfig: |
                <FILE CONTENTS HERE>
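
Keep in mind that Kubernetes limits a manifest to about 1 MB, so very large file contents may not fit in this section.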

A double underscore (__) in the file name is translated to a path separator (/). For example, the following keys place custom Packer templates under ~/.hal/default/profiles/rosco/packer/:

    files:
      profiles__rosco__packer__example-packer-config.json: |
        {
          "packerSetting": "someValue"
        }        
      profiles__rosco__packer__my_custom_script.sh: |
        #!/bin/bash -e
        echo "hello world!"        

spec.expose

Optional. Controls how Armory Continuous Deployment gets exposed. If you omit this section, no load balancer gets created. If you remove this section after deployment, the existing load balancer is not deleted.

Use the following configurations:

  • spec.expose.type: How Armory Continuous Deployment gets exposed. Currently, only service is supported, which uses Kubernetes services to expose Spinnaker.
  • spec.expose.service: Service configuration.
  • spec.expose.service.type: Must be a valid Kubernetes service type (LoadBalancer, NodePort, or ClusterIP).
  • spec.expose.service.annotations: Map containing annotations to be added to the Gate (API) and Deck (UI) services.
  • spec.expose.service.overrides: Map keyed by Armory Continuous Deployment service name (gate or deck) for overriding the service type and specifying extra annotations on that service. By default, all services receive the same annotations; use this map to override annotations for the Deck (UI) or Gate (API) service. See the example manifests later in this guide.

spec.validation

Currently these configurations are experimental. By default, the Operator always validates Kubernetes accounts when applying a SpinnakerService manifest.

The following validation options apply to all validations that the Operator performs; a short example follows the list:

  • spec.validation.failOnError: Boolean. Defaults to true. If false, the validation runs and the results are logged, but the service is always considered valid.
  • spec.validation.failFast: Boolean. Defaults to false. If true, validation stops at the first error.
  • spec.validation.frequencySeconds: Optional. Integer. Define a grace period before a validation runs again. For example, if you specify a value of 120 and edit the SpinnakerService without changing an account within a 120 second window, the validation on that account does not run again.
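
A minimal sketch of these top-level settings, with illustrative values:

spec:
  validation:
    failOnError: true
    failFast: false
    frequencySeconds: 120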

Additionally, the following settings are specific to Kubernetes, Docker, AWS, AWS S3, CI tools, metric stores, persistent storage, or notification systems:

  • spec.validation.providers.kubernetes
  • spec.validation.providers.docker
  • spec.validation.providers.aws
  • spec.validation.providers.s3
  • spec.validation.providers.ci
  • spec.validation.providers.metricStores
  • spec.validation.providers.persistentStorage
  • spec.validation.providers.notifications

Supported settings are enabled (set to false to turn off validations), failOnError, and frequencySeconds.

The following example disables all Kubernetes account validations:

spec:
  validation:
    providers:
      kubernetes:
        enabled: false
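
As a further sketch, the provider-level blocks accept the same kind of settings; the values below are illustrative:

spec:
  validation:
    providers:
      kubernetes:
        enabled: true
        failOnError: false
        frequencySeconds: 300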

spec.accounts

Support for SpinnakerAccount CRD (Experimental):

  • spec.accounts.enabled: Boolean. Defaults to false. If true, the SpinnakerService uses all SpinnakerAccount objects that are enabled.
  • spec.accounts.dynamic (experimental): Boolean. Defaults to false. If true, SpinnakerAccount objects become available to Armory Continuous Deployment as soon as the account is applied, without redeploying any service. A minimal example follows this list.
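
A minimal sketch of this section, with illustrative values:

spec:
  accounts:
    enabled: true
    dynamic: false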

Example Manifests for exposing Armory Continuous Deployment

Load balancer Services

spec:
  expose:
    type: service
    service:
      type: LoadBalancer
      annotations:
        "service.beta.kubernetes.io/aws-load-balancer-backend-protocol": "http"
        "service.beta.kubernetes.io/aws-load-balancer-ssl-ports": "80,443"
        "service.beta.kubernetes.io/aws-load-balancer-ssl-cert": "arn:aws:acm:us-west-2:xxxxxxxxxxxx:certificate/xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"

The preceding manifest generates these two services:

spin-deck

apiVersion: v1
kind: Service
metadata:
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-backend-protocol: http
    service.beta.kubernetes.io/aws-load-balancer-ssl-ports: 80,443
    service.beta.kubernetes.io/aws-load-balancer-ssl-cert: arn:aws:acm:us-west-2:xxxxxxxxxxxx:certificate/xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
  labels:
    app: spin
    cluster: spin-deck
  name: spin-deck
spec:
  ports:
  - name: deck-tcp
    nodePort: xxxxx
    port: 9000
    protocol: TCP
    targetPort: 9000
  selector:
    app: spin
    cluster: spin-deck
  sessionAffinity: None
  type: LoadBalancer

spin-gate

apiVersion: v1
kind: Service
metadata:
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-backend-protocol: http
    service.beta.kubernetes.io/aws-load-balancer-ssl-ports: 80,443
    service.beta.kubernetes.io/aws-load-balancer-ssl-cert: arn:aws:acm:us-west-2:xxxxxxxxxxxx:certificate/xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
  labels:
    app: spin
    cluster: spin-gate
  name: spin-gate
spec:
  ports:
  - name: gate-tcp
    nodePort: xxxxx
    port: 8084
    protocol: TCP
    targetPort: 8084
  selector:
    app: spin
    cluster: spin-gate
  sessionAffinity: None
  type: LoadBalancer

Different service types for Deck (UI) and Gate (API)

spec:
  expose:
    type: service
    service:
      type: LoadBalancer
      annotations:
        "service.beta.kubernetes.io/aws-load-balancer-backend-protocol": "http"
        "service.beta.kubernetes.io/aws-load-balancer-ssl-ports": "80,443"
        "service.beta.kubernetes.io/aws-load-balancer-ssl-cert": "arn:aws:acm:us-west-2:xxxxxxxxxxxx:certificate/xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"
      overrides:
        gate:
          type: NodePort

The preceding manifest generates these two services:

spin-deck

apiVersion: v1
kind: Service
metadata:
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-backend-protocol: http
    service.beta.kubernetes.io/aws-load-balancer-ssl-ports: 80,443
    service.beta.kubernetes.io/aws-load-balancer-ssl-cert: arn:aws:acm:us-west-2:xxxxxxxxxxxx:certificate/xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
  labels:
    app: spin
    cluster: spin-deck
  name: spin-deck
spec:
  ports:
  - name: deck-tcp
    nodePort: xxxxx
    port: 9000
    protocol: TCP
    targetPort: 9000
  selector:
    app: spin
    cluster: spin-deck
  sessionAffinity: None
  type: LoadBalancer

spin-gate

apiVersion: v1
kind: Service
metadata:
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-backend-protocol: http
    service.beta.kubernetes.io/aws-load-balancer-ssl-ports: 80,443
    service.beta.kubernetes.io/aws-load-balancer-ssl-cert: arn:aws:acm:us-west-2:xxxxxxxxxxxx:certificate/xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
  labels:
    app: spin
    cluster: spin-gate
  name: spin-gate
spec:
  ports:
  - name: gate-tcp
    nodePort: xxxxx
    port: 8084
    protocol: TCP
    targetPort: 8084
  selector:
    app: spin
    cluster: spin-gate
  sessionAffinity: None
  type: NodePort

Different annotations for Deck (UI) and Gate (API)

spec:
  expose:
    type: service
    service:
      type: LoadBalancer
      annotations:
        "service.beta.kubernetes.io/aws-load-balancer-backend-protocol": "http"
        "service.beta.kubernetes.io/aws-load-balancer-ssl-ports": "80,443"
        "service.beta.kubernetes.io/aws-load-balancer-ssl-cert": "arn:aws:acm:us-west-2:xxxxxxxxxxxx:certificate/xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"
      overrides:
        gate:
          annotations:
            "service.beta.kubernetes.io/aws-load-balancer-internal": "true"

The preceding manifest file generates these two services:

spin-deck

apiVersion: v1
kind: Service
metadata:
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-backend-protocol: http
    service.beta.kubernetes.io/aws-load-balancer-ssl-ports: 80,443
    service.beta.kubernetes.io/aws-load-balancer-ssl-cert: arn:aws:acm:us-west-2:xxxxxxxxxxxx:certificate/xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
  labels:
    app: spin
    cluster: spin-deck
  name: spin-deck
spec:
  ports:
  - name: deck-tcp
    nodePort: xxxxx
    port: 9000
    protocol: TCP
    targetPort: 9000
  selector:
    app: spin
    cluster: spin-deck
  sessionAffinity: None
  type: LoadBalancer

spin-gate

apiVersion: v1
kind: Service
metadata:
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-backend-protocol: http
    service.beta.kubernetes.io/aws-load-balancer-ssl-ports: 80,443
    service.beta.kubernetes.io/aws-load-balancer-ssl-cert: arn:aws:acm:us-west-2:xxxxxxxxxxxx:certificate/xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
    service.beta.kubernetes.io/aws-load-balancer-internal: "true"
  labels:
    app: spin
    cluster: spin-gate
  name: spin-gate
spec:
  ports:
  - name: gate-tcp
    nodePort: xxxxx
    port: 8084
    protocol: TCP
    targetPort: 8084
  selector:
    app: spin
    cluster: spin-gate
  sessionAffinity: None
  type: LoadBalancer

X509

spec:
  spinnakerConfig:
    profiles:
      gate:
        default:
          apiPort: 8085
  expose:
    type: service
    service:
      type: LoadBalancer

      annotations:
        service.beta.kubernetes.io/aws-load-balancer-backend-protocol: http

      overrides:
      # Provided below is the example config for the Gate-X509 configuration
        deck:
          annotations:
            service.beta.kubernetes.io/aws-load-balancer-ssl-cert: arn:aws:acm:us-west-2:9999999:certificate/abc-123-abc
            service.beta.kubernetes.io/aws-load-balancer-backend-protocol: http
        gate:
          annotations:
            service.beta.kubernetes.io/aws-load-balancer-ssl-cert: arn:aws:acm:us-west-2:9999999:certificate/abc-123-abc
            service.beta.kubernetes.io/aws-load-balancer-backend-protocol: https  # X509 requires https from LoadBalancer -> Gate
        gate-x509:
          annotations:
            service.beta.kubernetes.io/aws-load-balancer-backend-protocol: tcp
            service.beta.kubernetes.io/aws-load-balancer-ssl-cert: null
          publicPort: 443
