Version: 5.11.x

Setup on Google Cloud Platform

Prerequisites

These instructions assume you have Terraform installed on your machine, or that you use automation such as Atlantis to run apply for you.

Minimum supported versions for the Workflows Terraform code:

  • Terraform >= 1.4
  • Kubernetes Provider >= 2.0.1
  • Helm Provider >= 2.9.0
  • Google Provider >= 4.63.1 (>= 6.9 as of Workflows version 5.12)
  • Kubectl Provider (alekc/kubectl) >= 2.0

Enable required services

Enable the following APIs in your project:

You may need to wait a short while for the changes to propagate.
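If you prefer to manage the enabled services alongside the rest of your Terraform, a google_project_service resource is one way to do it. This is only a sketch: the API names below are assumptions based on the GCE runners and Kubernetes cluster configured later on this page, not the authoritative list for Workflows.

```hcl
# Sketch only: the API names below are assumptions, not the
# authoritative list required by Workflows.
resource "google_project_service" "workflows_apis" {
  for_each = toset([
    "compute.googleapis.com",   # GCE runner instances
    "container.googleapis.com", # Kubernetes cluster
  ])

  service            = each.key
  disable_on_destroy = false
}
```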

Set up a NAT

Your project's network requires a NAT so that our runners can talk to the CI provider. If you don't already have one, you can use this Terraform to set one up:

data "google_compute_network" "default" {
  name = "default"
}

resource "google_compute_router" "router" {
  name    = "router"
  network = data.google_compute_network.default.id

  bgp {
    asn = 64514
  }
}

resource "google_compute_router_nat" "nat" {
  name                               = "router-nat"
  router                             = google_compute_router.router.name
  nat_ip_allocate_option             = "AUTO_ONLY"
  source_subnetwork_ip_ranges_to_nat = "ALL_SUBNETWORKS_ALL_IP_RANGES"
}

Choose a GCP Image

Aspect Workflows CI runners are GCE instances, not Kubernetes pods, so the runner base image is a GCP image rather than a container image. To help you get started with Workflows, Aspect provides a number of starter images.

Aspect Workflows starter GCP images are currently built on the following Linux distributions,

  • Debian 11 "bullseye" (debian-11)
  • Debian 12 "bookworm" (debian-12)
  • Ubuntu 23.04 "Lunar Lobster" (ubuntu-2304)

for both amd64 (x86_64) and arm64 (aarch64) and come in the following variants,

  • minimal (just the bare minimum Workflows deps for fully hermetic builds)
  • gcc (minimal + gcc)
  • docker (minimal + docker)
  • kitchen-sink (minimal + gcc, docker and other misc tools)

Select an image from Aspect's pre-built GCP starter images.

This bit of Terraform can be used to locate the image at plan/apply-time:

data "google_compute_image" "runner_image" {
  project = "aspect-workflows-images"
  name    = "aspect-workflows-<debian-11|debian-12|ubuntu-2304>-<minimal|gcc|docker|kitchen-sink>-<amd64|arm64>-<version>"
}
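If you're unsure which image names and versions are currently available, you can list them with gcloud. The filter below is only an illustration (it narrows to the debian-12 images); adjust it to the distribution and variant you chose above.

```shell
# List Aspect's pre-built starter images (the filter is illustrative)
gcloud compute images list \
  --project=aspect-workflows-images \
  --filter="name~aspect-workflows-debian-12"
```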

The Packer scripts we use to build Aspect Workflows starter images are open source and can be found at https://github.com/aspect-build/workflows-images.

note

In a later setup stage, we'll discuss how to build a custom image if required to green up your build.

Add the Terraform module

Our Terraform module is currently delivered as a zip archive in a GCS bucket. Add it to your existing Terraform setup.

Here's an example:

main.tf
module "aspect_workflows" {
  # Replace x.x.x with an actual version:
  source = "https://static.aspect.build/aspect/x.x.x/terraform-gcp-aspect-workflows.zip"

  # Ask us to generate a customer ID for your organization and input it here
  customer_id = "MyCorp"

  network    = data.google_compute_network.default.id
  subnetwork = data.google_compute_subnetwork.default.id

  # Replace xxx with one of gha, cci, bk
  hosts = ["xxx"]

  k8s_cluster = {
    standard_nodes = {
      min_count    = 1
      max_count    = 20
      machine_type = "e2-standard-4"
    }
    remote_cache_nodes = {
      count        = 2
      machine_type = "c3-standard-4-lssd"
      num_ssds     = 1 # 1 x 375GB = 375GB per shard
    }
  }

  remote = {
    cache = {
      shards = 2
    }
    frontend = {
      min_scaling = 1
      max_scaling = 20
    }
  }

  resource_types = {
    "default" = {
      machine_type = "c2d-standard-4"
      image_id     = data.google_compute_image.runner_image.id
    }
  }

  # Replace xxx with one of gha, cci, bk
  xxx_runner_groups = {
    default = {
      max_runners            = 10
      resource_type          = "default"
      agent_idle_timeout_min = 90
    }
  }
}
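The example above references data.google_compute_subnetwork.default, which isn't defined elsewhere on this page. A minimal sketch of that data source follows; the name and region values are assumptions for a default auto-mode network, so substitute your own VPC details.

```hcl
# Sketch: the name and region values are assumptions --
# adjust them to match your VPC.
data "google_compute_subnetwork" "default" {
  name   = "default"
  region = "us-central1"
}
```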

Apply

Run terraform apply, or use whatever automation you already have for your infrastructure as code, such as Atlantis.

Increase quotas

caution

New GCP projects start with a quota on the maximum number of CPUs that is usually too low and will prevent more than a handful of runners from starting up.

To request an increase, visit the quotas page for your project and request an increase on the number of CPUs allowed in your region.

How many CPUs you request depends on the runner machine_type you use. For example, if you use n1-standard-4 (4 vCPUs) runners and have a maximum of 10 agents, you'll want at least 40 CPUs just for the runners.

Leave an additional buffer for the nodes that run in the Kubernetes cluster. The machine type and node counts for the cluster are controlled by the k8s_cluster settings shown above (e2-standard-4 nodes, each with 4 vCPUs, in the example).
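One way to keep the quota math next to your configuration is a locals block computed from the same numbers used in the module example above. This is only a sketch: the vCPU counts are the published sizes of c2d-standard-4, e2-standard-4, and c3-standard-4-lssd, and the multipliers come from the example's max_runners, max_count, and count values.

```hcl
# Sketch: minimum regional CPU quota implied by the example configuration
locals {
  runner_vcpus  = 4 * 10 # c2d-standard-4 × max_runners
  cluster_vcpus = 4 * 20 # e2-standard-4 × standard_nodes max_count
  cache_vcpus   = 4 * 2  # c3-standard-4-lssd × remote_cache_nodes count

  min_cpu_quota = local.runner_vcpus + local.cluster_vcpus + local.cache_vcpus # 40 + 80 + 8 = 128
}
```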

Next steps

Continue by choosing which CI platform you plan to interact with, and follow the corresponding installation steps.