Use SQL with Orca

By default, Orca (the task orchestration service) uses Redis as its backing store. You can now configure Orca to use a relational database to store its pipeline executions instead. The main advantages are improved performance and the removal of Redis as a single point of failure.

Base Configuration

You can find a complete description of the options in the open source documentation.

SQL is not currently supported in Halyard’s main configuration, but it can be set up in <HALYARD>/<DEPLOYMENT>/profiles/orca-local.yml:

sql:
  enabled: true
  connectionPool:
    jdbcUrl: jdbc:mysql://<DB CONNECTION HOSTNAME>:<DB CONNECTION PORT>/<DATABASE NAME>
    user: orca_service
    password: <orca_service password>
    connectionTimeout: 5000
    maxLifetime: 30000
    maxPoolSize: 50
  migration:
    jdbcUrl: jdbc:mysql://<DB CONNECTION HOSTNAME>:<DB CONNECTION PORT>/<DATABASE NAME>
    user: orca_migrate
    password: <orca_migrate password>

# Ensure we're only using SQL for accessing execution state
executionRepository:
  sql:
    enabled: true
  redis:
    enabled: false

monitor:
  activeExecutions:
    redis: false

Initial Run

Once you’ve provisioned your RDBMS and ensured connectivity from Spinnaker, you’ll need to create the database. You can skip this step if the database was created during provisioning, for instance with Terraform:

CREATE SCHEMA `orca` DEFAULT CHARACTER SET utf8mb4 COLLATE utf8mb4_unicode_ci;
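
If the orca_service and orca_migrate accounts don’t exist yet, create them first. The statements below are a minimal sketch: the passwords are placeholders, and the host part (%) matches the grants shown next.

CREATE USER 'orca_service'@'%' IDENTIFIED BY '<orca_service password>';
CREATE USER 'orca_migrate'@'%' IDENTIFIED BY '<orca_migrate password>';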

Then we’ll grant the necessary privileges to the orca_service and orca_migrate users:

  GRANT
    SELECT, INSERT, UPDATE, DELETE, EXECUTE, SHOW VIEW
  ON `orca`.*
  TO 'orca_service'@'%';

  GRANT
    SELECT, INSERT, UPDATE, DELETE, CREATE, DROP, REFERENCES, INDEX, ALTER, LOCK TABLES, EXECUTE, SHOW VIEW
  ON `orca`.*
  TO 'orca_migrate'@'%';

The grants above allow connections from any host (%). You can restrict access to the cluster in which Spinnaker runs by replacing % with the address or address pattern from which the Orca pods connect to MySQL, as in the example below.
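
For instance, assuming the Orca pods reach MySQL from the 10.100.0.0/16 range (this range is a placeholder; substitute the one your cluster actually uses), the service account grant could be scoped like this:

  -- The account must already exist for this host pattern
  -- (e.g. CREATE USER 'orca_service'@'10.100.%' IDENTIFIED BY '...';).
  GRANT
    SELECT, INSERT, UPDATE, DELETE, EXECUTE, SHOW VIEW
  ON `orca`.*
  TO 'orca_service'@'10.100.%';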

Keeping existing execution history

The above configuration points Orca to your database. However, it won’t migrate your existing execution history to the new database. To keep that history available, you can run a dual repository with the following:

executionHistory:
  dual:
    enabled: true
    primaryClass: com.netflix.spinnaker.orca.sql.pipeline.persistence.SqlExecutionRepository
    previousClass: com.netflix.spinnaker.orca.pipeline.persistence.jedis.RedisExecutionRepository

Remove these settings once all the execution history you care about is in your database.

Database Maintenance

Each new version of Orca may migrate the database schema. Migrations run with the orca_migrate user defined above.

Pipeline executions are saved to the database. Each execution can add anywhere from a few KB to hundreds of KB of data, depending on the size of your pipeline. This means that over time the data will grow, and you’ll likely want to purge older executions.

Note: We recommend saving past executions to a different data store for auditing purposes. You can do this in a variety of ways:

  • During the purge, by marking, exporting, then deleting older records (see the sketch after this list).
  • By saving execution history from Echo’s events and simply deleting older records from your database.
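
As a rough sketch of the first approach, the statements below export older executions and then delete them. The table and column names (pipelines, updated_at) and the assumption that updated_at stores epoch milliseconds are guesses about the Orca schema, not a confirmed reference; verify them against your database, and keep in mind that related tables (such as stage data) may need the same treatment:

  -- Export executions older than 90 days for auditing
  -- (the output path and the 90-day cutoff are examples).
  SELECT * INTO OUTFILE '/var/lib/mysql-files/pipelines_archive.csv'
  FROM `orca`.`pipelines`
  WHERE updated_at < UNIX_TIMESTAMP(NOW() - INTERVAL 90 DAY) * 1000;

  -- Once the export is safely stored elsewhere, delete the same rows.
  DELETE FROM `orca`.`pipelines`
  WHERE updated_at < UNIX_TIMESTAMP(NOW() - INTERVAL 90 DAY) * 1000;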