Version: 5.4.x

Setup on Google Cloud Platform


Experimental support is now available for Buildkite. CircleCI and GitHub Actions are coming soon.

We recommend creating a new project so that GCP billing gives easy insight into the costs, and so our engineers don't get overly broad access to your other infrastructure.
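For example, a fresh project can be created from the CLI (the project ID and name here are placeholders):

gcloud projects create aspect-workflows-ci --name="Aspect Workflows"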

Aspect will need the following information about your project in order to share our Terraform module and base runner images:

  1. Your GCP project number. To get your project number, run gcloud projects list and look under the PROJECT_NUMBER column for your Workflows project (see the example after this list).
  2. The email addresses of any users or service accounts who will run operations like plan and apply. We can optionally enable access for your entire company's domain.
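For example, the following prints the ID and number of each project visible to you (the --format fields are standard gcloud resource keys):

gcloud projects list --format="table(projectId, projectNumber)"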

Your project's storage transfer service account needs to be initialized. GCP lazily creates this service account the first time it's used. Trigger its creation by visiting this page, entering your project ID on the right-hand side, then clicking Execute. A JSON blob will appear below showing the account's info and confirming that it has been created.
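If you prefer the command line, the same underlying REST endpoint can be called directly; a sketch using curl with gcloud credentials (replace PROJECT_ID with your project ID):

curl -H "Authorization: Bearer $(gcloud auth print-access-token)" \
  "https://storagetransfer.googleapis.com/v1/googleServiceAccounts/PROJECT_ID"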

Enable required services

Enable the following APIs in your project:

You may need to wait a short while for the changes to propagate.
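APIs can be enabled in the console or from the CLI; for example (the service names here are illustrative, not the full required list):

gcloud services enable compute.googleapis.com container.googleapis.com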

Set up a NAT

Your project's network requires a NAT so that our runners can reach the CI provider. If you don't already have one, you can use this Terraform to set one up:

data "google_compute_network" "default" {
name = "default"

resource "google_compute_router" "router" {
name = "router"
network =

bgp {
asn = 64514

resource "google_compute_router_nat" "nat" {
name = "router-nat"
router =
nat_ip_allocate_option = "AUTO_ONLY"
source_subnetwork_ip_ranges_to_nat = "ALL_SUBNETWORKS_ALL_IP_RANGES"
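After applying, you can confirm the NAT was created (the region shown is a placeholder; use the region your router lives in):

gcloud compute routers nats list --router=router --region=us-central1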

Create a GCP Image (optional)

Aspect Workflows runs on GCE instances, not Kubernetes pods, so the runner base image is a GCP Compute Image. We provide one that includes our dependencies; if your build is fully hermetic, it will work fine.

This bit of Terraform can be used to locate our image at plan/apply time:

# Find the most recent image shared from the Aspect account
data "google_compute_image" "runner_image" {
  project = "aspect-workflows-images"
  # Replace XXX with one of gha, cci, buildkite
  family  = "aspect-ci-XXX-worker"
}

If your build is not hermetic, for example if a dynamically linked C++ library must be present on the machine, you can use Packer to make a reproducible build of a custom image.


More docs on Packer coming soon.
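In the meantime, here is a minimal sketch of a Packer googlecompute build layered on the Aspect base image; the project ID, zone, image names, and provisioner contents are placeholders to adapt to your environment:

packer {
  required_plugins {
    googlecompute = {
      source  = "github.com/hashicorp/googlecompute"
      version = ">= 1.0.0"
    }
  }
}

source "googlecompute" "custom_runner" {
  project_id              = "my-gcp-project"   # placeholder
  # Start from the Aspect-provided image family (XXX as above)
  source_image_family     = "aspect-ci-XXX-worker"
  source_image_project_id = ["aspect-workflows-images"]
  zone                    = "us-central1-a"    # placeholder
  image_family            = "my-ci-worker"     # placeholder
  ssh_username            = "packer"
}

build {
  sources = ["source.googlecompute.custom_runner"]

  # Install the non-hermetic dependency, e.g. a shared C++ library
  provisioner "shell" {
    inline = ["sudo apt-get update", "sudo apt-get install -y libstdc++6"]
  }
}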

Add the terraform module

Our Terraform module is currently delivered in a GCS bucket. Add it to your existing Terraform setup.

Here's an example:
module "aspect_workflows" {
# Replace 5.x.x with an actual version:
source = "gcs::"
network =
# subnetwork = ... (Optional)

# Replace XXX with one of gha, cci, buildkite
hosts = ["XXX"]

cluster_standard_node_count = 3
remote = {
cache_shards = 1
cache_size_gb = 100
replicate_cache = false
load_balancer_replicas = 2

resource_types = {
"default" = {
machine_type = "n1-standard-4"
image_id =
use_preemptible = true

bk_runner_groups = {
default = {
min_runners = 0
max_runners = 10
resource_type = "default"
agent_idle_timeout_min = 90


Run terraform apply, or use whatever automation you already have for your infrastructure-as-code, such as Atlantis.
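If you're running it by hand, the usual sequence applies:

terraform init
terraform plan
terraform apply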

You'll get a resulting infrastructure like the following:


Infrastructure diagram coming soon.

Increase quotas


New GCP projects start with a quota on the maximum number of CPUs, which is usually too low and will prevent more than a handful of runners from starting.

To request an increase, visit the quotas page for your project and request an increase on the number of CPUs allowed in your region.

How many CPUs you request depends on the runner machine_type you use. For example, if you use n1-standard-4 (4 vCPUs) runners and have a maximum of 10 agents, you'll want at least 40 CPUs just for the runners.

Leave an additional buffer for nodes that run in the Kubernetes cluster. Currently the cluster uses e2-medium instances (2 vCPUs), and the number of nodes is controlled by cluster_standard_node_count.
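Putting the example numbers together: 10 runners × 4 vCPUs + 3 cluster nodes × 2 vCPUs = 46 vCPUs, so a regional CPU quota of at least 48 leaves a little headroom. You can check your current limit and usage from the CLI (the region is a placeholder):

gcloud compute regions describe us-central1 \
  --flatten="quotas[]" --format="table(quotas.metric, quotas.limit, quotas.usage)"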

Next steps

Continue by choosing which CI platform you plan to interact with, and follow the corresponding installation steps.