Version: 5.8.x

Continuous Delivery

We model Continuous Delivery as the step of the pipeline where built artifacts are uploaded from the build machine to a well-known repository location. This could be a container image registry like Docker Hub, a blob store like AWS S3, or even a database.

This makes a clear separation of responsibilities between CI/CD and Deployment:

  • The CI/CD pipeline, configured only with BUILD.bazel files, should upload only the artifacts that are
    • green: we can prove that all relevant tests are passing
    • changed from a previous build: we don't want to waste time and resources uploading the same artifact repeatedly. Also, we don't want release engineers sorting through a massive list of duplicates when choosing a release.
  • The deployment system
    • locates and "promotes" artifacts to the next environment, such as "dev", "staging", or "prod".

What is "deliverable"

A deliverable artifact is one that contains both the binary or files to push, as well as the "pushing" logic that knows how to perform the upload. In Bazel terms, this means a deliverable should be an executable program that can be bazel run.

For example, a *_push rule such as oci_push or container_push can be executed with bazel run to push a container image to a registry like Docker Hub, so it is already "deliverable".

See the examples section below for more ways to create deliverable Bazel targets.

Git Push

Sometimes your artifacts belong in a separate code repository. For example, you might publish an SDK built from the API definitions in a monorepo.

See an example git_push executable here:
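While the linked example is not reproduced here, a minimal sketch of such a pusher might look like this. The function name, commit identity, and layout are our own placeholders, not the actual git_push implementation:

```shell
#!/usr/bin/env bash
# Hypothetical sketch of a git_push-style executable: clone the
# destination repository, copy the built files in, and push a commit
# only when the content actually changed.
set -euo pipefail

publish_to_repo() {
  local src_dir="$1" repo_url="$2" msg="$3"
  local work
  work="$(mktemp -d)"
  git clone --quiet "$repo_url" "$work/checkout"
  cp -R "$src_dir"/. "$work/checkout"/
  git -C "$work/checkout" add -A
  # Commit and push only if the staged tree differs from the remote.
  if ! git -C "$work/checkout" diff --cached --quiet; then
    git -C "$work/checkout" \
      -c user.name="release-bot" -c user.email="release-bot@example.com" \
      commit --quiet -m "$msg"
    git -C "$work/checkout" push --quiet origin HEAD
  fi
  rm -rf "$work"
}
```

Skipping the commit when nothing changed mirrors the "changed from a previous build" rule above: re-running the delivery is a no-op rather than a duplicate release.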

S3 upload

Coming Soon

We'll publish an example of an s3_cp macro.


Which targets to deliver

By default, we deliver all targets tagged with tags = ["deliverable"]. You could add this tag on each deliverable target, or write a macro that adds this tag to all targets of that rule kind.

You can customize this behavior with a Bazel query expression that identifies deliverable targets. For example, to deliver all container_push targets:

deliverable: 'kind("container_push rule", //...)'

Deliverable targets must be executable. This allows you to encode your upload logic in a program you control in the monorepo. For example, you might write a simple sh_binary that uploads to S3.
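A minimal sketch of such an sh_binary script follows. The bucket name and `releases/` key layout are our own placeholders:

```shell
#!/usr/bin/env bash
# Hypothetical S3 "pusher", suitable as the srcs of an sh_binary.
# Invoked as: <artifact path> <bucket name>.
set -euo pipefail

# Build the destination key from the artifact's file name.
make_dest() {
  local artifact="$1" bucket="$2"
  echo "s3://${bucket}/releases/$(basename "$artifact")"
}

upload() {
  local artifact="$1" bucket="$2"
  aws s3 cp "$artifact" "$(make_dest "$artifact" "$bucket")"
}

if [[ $# -ge 2 ]]; then
  upload "$1" "$2"
fi
```

Because the script lives in the monorepo, changing the upload logic (say, the key layout) changes the executable's hash and triggers a fresh delivery, as described in the next section.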

See below for examples of "pusher" binaries.

Which changes to deliver

To determine whether an executable target should be delivered on a particular commit, we first hash it. This uses the aspect outputs command with a special pseudo-mnemonic "ExecutableHash". For example:

$ aspect outputs 'attr("tags", "\bdeliverable\b", //...)' ExecutableHash
//cli:release h1:cj8OUC3l3fIr3Zxnffk6y7gukLOJmiWRCAQoqadg66Y=
//workflows/rosetta:release h1:kjHVajw+Nta2kh3Epcd32DkZxTE1NHA8b5N7hCNFNSM=

If the hash value matches one previously seen, then we skip delivery of that target.
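The skip-if-unchanged check can be sketched as follows. The hashes-seen file and its one-hash-per-line format are our own invention for illustration, not the actual Workflows storage:

```shell
# Hypothetical sketch: keep a file of hashes already delivered, and
# skip any target whose current hash appears in it.
set -euo pipefail

already_delivered() {
  local hash="$1" seen_file="$2"
  # -x: match the whole line; -F: treat the hash as a literal string.
  grep -qxF "$hash" "$seen_file"
}

record_delivery() {
  local hash="$1" seen_file="$2"
  echo "$hash" >> "$seen_file"
}
```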

You can run this command locally to understand whether a given change to a source file will result in a new executable. Sometimes the result may surprise you. For example, if you change a comment in a .go source file, the compiler produces the same .a file as a result, so the hash we see on the uploader executable is unchanged.

Another scenario that won't change the executable is if you change some production configuration. For example, you might have Helm charts to deploy to Kubernetes. If these aren't included inside the image, then changes to these files won't cause a new delivery.

To make it easy to diagnose issues, we upload this list of targets as a "delivery manifest" found in the artifacts uploaded by the CI pipeline.

Which branch(es) to deliver

We deliver only when running on a "release branch". The default is ["main", "master"].

Configure this in the branches property. You can use glob patterns in this property.

For example:

branches: ['main', 'hotfix-*']

When to deliver

By default, delivery is manual. A Release Engineer triggers the Delivery workflow step by logging into the CI system and running the workflow.

Set always_deliver in the config to always run the delivery. Any green release build will trigger a Delivery workflow step.


Versioning

Aspect Workflows relies on the Bazel stamping setup in your repository. If you build the artifact with --stamp (or another Bazel flag that includes it, such as --config=release), this should create release artifacts that satisfy your deployment system.

The version used is up to you. Read our blog article for more about how you might choose to version your artifacts.

By default, we run the delivery with bazel run --stamp. If you need different stamping flags, use the stamp_flags property in the config, like so:

stamp_flags: ['--config=release']

Break glass (deliver on red)

Rarely, the main branch is red and the Buildcop isn't able to quickly resolve it. During this time, a product team believes that the breakage is unrelated to their application and feels strong pressure to ship.

In this case, the release engineer can navigate to the CI webpage and trigger the delivery pipeline manually, providing special parameters:

  • delivery_commit: what commit to check out and deliver.
  • delivery_targets: override the affected targets, and deliver this comma-separated list of targets instead.

In the future we plan a more auditable option for this, where the release engineer can trigger the delivery with a GitHub comment on a commit.


Deployment

Deploying the artifacts is out-of-scope for Aspect Workflows. We assume you have some system that promotes releases from one environment to another.