Version: 5.8.x

Setup on Google Cloud Platform

We recommend creating a new project so that GCP billing gives easy insight into the costs, and so that our engineers don't have overly broad access to your other infrastructure.

Aspect will need the following information about your project in order to share our terraform module and base runner images:

  1. Your GCP project number. To get your project number, run gcloud projects list and look under the PROJECT_NUMBER column for your Workflows project.
  2. The email addresses of any users or service accounts who will run operations like plan and apply. We can optionally enable access for your entire company's domain.
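The project number lookup in step 1 can also be scripted; for example (PROJECT_ID is a placeholder for your Workflows project's ID):

```shell
# Print only the numeric project number for a single project.
gcloud projects describe PROJECT_ID --format='value(projectNumber)'

# Or filter the full listing described above:
gcloud projects list --filter='projectId:PROJECT_ID' --format='value(projectNumber)'
```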

Your project's storage transfer service account needs to be initialized. GCP lazily creates this service account the first time it's used. Trigger its creation by visiting this page, entering your project ID on the right-hand side, then clicking Execute. A JSON blob will appear below, showing the account's info and confirming that it has been created.
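If you prefer the CLI to the API explorer, the same googleServiceAccounts.get call can be made with curl (a sketch; it assumes gcloud is authenticated, and PROJECT_ID is a placeholder):

```shell
# Calling googleServiceAccounts.get lazily creates the storage transfer
# service account and returns its info as JSON.
curl -s \
  -H "Authorization: Bearer $(gcloud auth print-access-token)" \
  "https://storagetransfer.googleapis.com/v1/googleServiceAccounts/PROJECT_ID"
```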

Enable required services

Enable the following APIs in your project:

You may need to wait a short while for the changes to propagate.
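APIs can also be enabled from the CLI with gcloud services enable. The service names below are illustrative only, not the definitive list for Workflows (though Compute Engine and Cloud Storage are typically among those required):

```shell
# Enable services in the current project; pass --project=PROJECT_ID to
# target another one. These service names are examples, not the full list.
gcloud services enable \
  compute.googleapis.com \
  storage.googleapis.com

# List what's already enabled to confirm the changes have propagated:
gcloud services list --enabled
```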

Set up a NAT

Your project's network requires a NAT in order for our runners to talk to the CI provider. If you don't already have one, you can use this terraform to set one up:

data "google_compute_network" "default" {
  name = "default"
}

resource "google_compute_router" "router" {
  name    = "router"
  network = data.google_compute_network.default.id

  bgp {
    asn = 64514
  }
}

resource "google_compute_router_nat" "nat" {
  name                               = "router-nat"
  router                             = google_compute_router.router.name
  nat_ip_allocate_option             = "AUTO_ONLY"
  source_subnetwork_ip_ranges_to_nat = "ALL_SUBNETWORKS_ALL_IP_RANGES"
}
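After applying, the router and its NAT config can be inspected from the CLI (REGION is a placeholder for your router's region):

```shell
# Describe the router created above; the NAT config appears under
# its `nats` field in the output.
gcloud compute routers describe router --region=REGION
```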

Create a GCP Image

Aspect Workflows runs on GCE instances, not Kubernetes pods, so the runner base image is a GCP Image. Follow the packer instructions to create a base image for your runners containing any non-hermetic dependencies your build requires.

This bit of terraform can be used to locate your image at plan/apply-time:

data "google_compute_image" "runner_image" {
  project = "my-project"
  name    = "my-workflows-runner-v1"
}
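You can confirm that the image is available before wiring it into Terraform (the project and image names below match the placeholder values in the data block above):

```shell
# Confirm the image exists and check its status is READY.
gcloud compute images describe my-workflows-runner-v1 --project=my-project
```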

Add the terraform module

Our terraform module is currently delivered in a GCS bucket. Add it to your existing Terraform setup.

Here's an example:

main.tf
module "aspect_workflows" {
  # Replace 5.x.x with an actual version:
  source     = "gcs::https://storage.googleapis.com/storage/v1/aspect-artifacts/5.x.x/workflows-gcp/terraform-gcp-aspect-workflows.zip"
  network    = data.google_compute_network.default.id
  subnetwork = data.google_compute_subnetwork.default.id

  # Replace XXX with one of gha, cci, buildkite
  hosts = ["XXX"]

  k8s_cluster = {
    node_count   = 3
    machine_type = "e2-standard-2"
  }

  remote = {
    cache_size_gb          = 384
    cache_shards           = 3
    replicate_cache        = false
    load_balancer_replicas = 2
  }

  resource_types = {
    "default" = {
      machine_type    = "n1-standard-4"
      image_id        = data.google_compute_image.runner_image.id
      use_preemptible = true
    }
  }

  bk_runner_groups = {
    default = {
      min_runners            = 0
      max_runners            = 10
      resource_type          = "default"
      agent_idle_timeout_min = 90
    }
  }
}

Apply

Run terraform apply, or use whatever automation you already use for your infrastructure-as-code, such as Atlantis.
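A typical manual sequence, run from the directory containing main.tf:

```shell
terraform init    # fetch providers and the module from its GCS source
terraform plan    # review the proposed infrastructure
terraform apply   # create it
```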

You'll get a resulting infrastructure like the following:

note

Infrastructure diagram coming soon.

Increase quotas

caution

New GCP projects start with a quota on the maximum number of CPUs which is usually too low and will prevent more than a handful of runners from starting up.

To request an increase, visit the quotas page for your project and request an increase on the number of CPUs allowed in your region.

How many CPUs you request depends on the runner machine_type you use. For example, if you use n1-standard-4 (4 vCPUs) runners and have a maximum of 10 agents, you'll want at least 40 CPUs just for the runners.
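Your region's current CPU quota and usage can be checked from the CLI (REGION is a placeholder, e.g. us-central1):

```shell
# Describe the region and look for the quota entry with "metric: CPUS";
# it shows both the limit and current usage.
gcloud compute regions describe REGION
```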

Leave an additional buffer for the nodes that run in the Kubernetes cluster. Currently the cluster uses e2-medium instances (2 vCPUs), and the number of nodes is controlled by cluster_standard_node_count.

Next steps

Continue by choosing which CI platform you plan to interact with, and follow the corresponding installation steps.