Version: 5.4.x

Setup on Amazon Web Services

We recommend creating a new subaccount so that AWS billing gives easy insight into the costs, and so our engineers don't have overly broad access to the rest of your infrastructure.
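If you use AWS Organizations, the subaccount itself can also be managed in Terraform. A minimal sketch; the account name and email are placeholders:

```hcl
# Create a dedicated member account for Aspect Workflows via AWS
# Organizations. The name and email here are placeholders; billing for
# this account then appears as its own line item.
resource "aws_organizations_account" "aspect_workflows" {
  name  = "aspect-workflows"
  email = "aws+aspect-workflows@example.com"
}
```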

Amazon's documentation for subaccounts is at

Aspect will need to share our terraform module and base AMI with your org. We'll need your AWS account ID, as well as the role you use to run terraform operations like plan and apply.

Granting permissions

First, we need to grant permission for Aspect engineers to perform setup and maintenance.

Create a role to hold our policies. You can do this in Terraform, or in the AWS console:

Navigate to IAM > Roles > Create role

  1. Select trusted entity
    • Trusted entity type: AWS account
    • Another AWS account: 302232432727
    • Require MFA: enable (our engineers are required to use multi-factor auth)
  2. Add permissions: search for the following policies and add them:
    • CloudWatchLogsReadOnlyAccess
    • AutoScalingConsoleReadOnlyAccess
  3. Name, review, and create
    • One choice for the name is aspect-workflows-comaintainers. We'll need to know the name in order to assume the role.
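The console steps above can equivalently be expressed in Terraform. A sketch, assuming the example role name and the standard AWS provider; adjust naming to your conventions:

```hcl
# Trust policy allowing Aspect's AWS account to assume the role,
# requiring multi-factor auth (matching the console's "Require MFA").
data "aws_iam_policy_document" "aspect_trust" {
  statement {
    effect  = "Allow"
    actions = ["sts:AssumeRole"]

    principals {
      type        = "AWS"
      identifiers = ["arn:aws:iam::302232432727:root"] # Aspect's account
    }

    condition {
      test     = "Bool"
      variable = "aws:MultiFactorAuthPresent"
      values   = ["true"]
    }
  }
}

resource "aws_iam_role" "aspect_comaintainers" {
  name               = "aspect-workflows-comaintainers"
  assume_role_policy = data.aws_iam_policy_document.aspect_trust.json
}

# Attach the two read-only managed policies listed above.
resource "aws_iam_role_policy_attachment" "aspect_policies" {
  for_each = toset([
    "arn:aws:iam::aws:policy/CloudWatchLogsReadOnlyAccess",
    "arn:aws:iam::aws:policy/AutoScalingConsoleReadOnlyAccess",
  ])
  role       = aws_iam_role.aspect_comaintainers.name
  policy_arn = each.value
}
```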

If Aspect engineers will perform the terraform apply, then we need more permissions:

Permissions required for terraform apply
autoscaling:CreateAutoScalingGroup autoscaling:DeleteAutoScalingGroup autoscaling:DescribeAutoScalingGroups autoscaling:DescribeScalingActivities autoscaling:SetInstanceProtection autoscaling:UpdateAutoScalingGroup
cloudformation:CreateStack cloudformation:DeleteStack cloudformation:DescribeStacks cloudformation:GetTemplate
cloudwatch:DeleteAlarms cloudwatch:DescribeAlarms cloudwatch:ListTagsForResource cloudwatch:PutMetricAlarm
ec2:AuthorizeSecurityGroupEgress ec2:AuthorizeSecurityGroupIngress ec2:CreateLaunchTemplate ec2:CreateSecurityGroup ec2:DeleteLaunchTemplate ec2:DeleteSecurityGroup ec2:DescribeImages ec2:DescribeLaunchTemplates ec2:DescribeLaunchTemplateVersions ec2:DescribeNetworkAcls ec2:DescribeNetworkInterfaces ec2:DescribeRouteTables ec2:DescribeSecurityGroups ec2:DescribeSubnets ec2:DescribeVpcAttribute ec2:DescribeVpcClassicLink ec2:DescribeVpcClassicLinkDnsSupport ec2:DescribeVpcs ec2:RevokeSecurityGroupEgress
elasticloadbalancing:CreateListener elasticloadbalancing:CreateLoadBalancer elasticloadbalancing:CreateTargetGroup elasticloadbalancing:DeleteListener elasticloadbalancing:DeleteLoadBalancer elasticloadbalancing:DeleteTargetGroup elasticloadbalancing:DescribeListeners elasticloadbalancing:DescribeLoadBalancerAttributes elasticloadbalancing:DescribeLoadBalancers elasticloadbalancing:DescribeTags elasticloadbalancing:DescribeTargetGroupAttributes elasticloadbalancing:DescribeTargetGroups elasticloadbalancing:DescribeTargetHealth elasticloadbalancing:ModifyLoadBalancerAttributes elasticloadbalancing:ModifyTargetGroupAttributes
events:DeleteRule events:DescribeRule events:ListTagsForResource events:ListTargetsByRule events:PutRule events:PutTargets events:RemoveTargets
iam:AddRoleToInstanceProfile iam:AttachRolePolicy iam:CreateInstanceProfile iam:CreatePolicy iam:CreateRole iam:DeleteInstanceProfile iam:DeletePolicy iam:DeleteRole iam:DetachRolePolicy iam:GetInstanceProfile iam:GetPolicy iam:GetPolicyVersion iam:GetRole iam:ListAttachedRolePolicies iam:ListInstanceProfilesForRole iam:ListPolicyVersions iam:ListRolePolicies iam:RemoveRoleFromInstanceProfile
lambda:AddPermission lambda:CreateFunction lambda:DeleteFunction lambda:GetFunction lambda:GetFunctionCodeSigningConfig lambda:GetPolicy lambda:ListVersionsByFunction lambda:RemovePermission
logs:CreateLogGroup logs:DeleteLogGroup logs:DescribeLogGroups logs:ListTagsLogGroup logs:PutRetentionPolicy
memorydb:CreateCluster memorydb:CreateSubnetGroup memorydb:DeleteCluster memorydb:DeleteSubnetGroup memorydb:DescribeClusters memorydb:DescribeSubnetGroups memorydb:ListTags
s3:CreateBucket s3:DeleteBucket s3:DeleteBucketPolicy s3:DeleteObject s3:GetAccelerateConfiguration s3:GetBucketAcl s3:GetBucketCors s3:GetBucketLogging s3:GetBucketPolicy s3:GetBucketPublicAccessBlock s3:GetBucketRequestPayment s3:GetBucketTagging s3:GetBucketVersioning s3:GetBucketWebsite s3:GetEncryptionConfiguration s3:GetLifecycleConfiguration s3:GetObject s3:GetObjectAttributes s3:GetObjectTagging s3:GetObjectVersion s3:GetObjectVersionAttributes s3:GetReplicationConfiguration s3:ListAllMyBuckets s3:ListBucket s3:ListObjects s3:PutBucketLogging s3:PutBucketPolicy s3:PutBucketPublicAccessBlock s3:PutEncryptionConfiguration s3:PutLifecycleConfiguration s3:PutObject
ssm:DeleteParameter ssm:DescribeParameters ssm:GetParameter ssm:ListTagsForResource ssm:PutParameter
sts:AssumeRole sts:GetCallerIdentity
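If you manage the role in Terraform, the list above can be granted as an inline policy. A sketch, assuming the example role name from the previous section; the actions array is abbreviated here and must contain the full list:

```hcl
# Sketch: grant the apply permissions as an inline policy on the role
# Aspect assumes. The role name is the example from the previous section.
data "aws_iam_policy_document" "aspect_apply" {
  statement {
    effect = "Allow"
    actions = [
      "autoscaling:CreateAutoScalingGroup",
      # ... paste the remaining actions from the list above ...
      "sts:GetCallerIdentity",
    ]
    resources = ["*"]
  }
}

resource "aws_iam_role_policy" "aspect_apply" {
  name   = "aspect-workflows-apply"
  role   = "aspect-workflows-comaintainers"
  policy = data.aws_iam_policy_document.aspect_apply.json
}
```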

Create an Amazon Machine Image (AMI) (optional)

Aspect Workflows runs on EC2 instances, not Kubernetes pods, so the base image is an AMI. We provide one that includes our dependencies; if your build is fully hermetic, this will work fine.

This bit of terraform can be used to locate our AMI at plan/apply-time:

# Find the AMI shared from the Aspect account.
data "aws_ami" "aspect_worker_ami" {
  most_recent = true

  # Owner is the aspect-build AWS org
  owners = ["302232432727"]

  filter {
    name = "name"
    # Replace XXX with one of gha, cci, bk
    values = ["aspect-ci-XXX-worker-*"]
  }
}
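To confirm that the share is visible to your account before wiring it into the module, you can surface the resolved AMI as an output. A small sketch; the output name is arbitrary:

```hcl
# Expose the AMI the data source resolves so that a `terraform plan` or
# `apply` confirms the image shared by Aspect is visible to your account.
output "aspect_worker_ami_id" {
  value = data.aws_ami.aspect_worker_ami.id
}
```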

If your build is not hermetic, for example if some dynamically linked C++ library needs to be present on the machine, you can use Packer to make a reproducible build of a custom AMI.

More docs on Packer coming soon.

Add the terraform module

Our terraform module is currently delivered in an S3 bucket. You add it to your existing Terraform setup.

Here's an example:
module "aspect-workflows" {
  # Replace 5.x.x with an actual version:
  source = "s3::"

  customer_id = "MyCorp"
  vpc_id      = data.terraform_remote_state.circleci_vpc.outputs.vpc_id
  vpc_subnets = [data.terraform_remote_state.circleci_vpc.outputs.private_subnets[0]]

  # Replace XXX with one of gha, cci, bk
  hosts = ["XXX"]

  # Define Bazel states we know how to warm up
  warming_sets = {
    default = {}
  }

  resource_types = {
    "default" = {
      instance_type = "i4i.xlarge"
      # Use the AMI located by the data source above
      image_id = data.aws_ami.aspect_worker_ami.id
    }
  }

  # Replace XXX with one of gha, cci, bk
  XXX_runner_groups = {
    default = {
      max_runners   = 10
      min_runners   = 0
      resource_type = "default" # Corresponds to a resource_types entry above
      warming       = true
      warming_set   = "default" # Corresponds to a warming_sets entry above
    }
    default-warming = {
      max_runners   = 1
      min_runners   = 0
      resource_type = "default" # Corresponds to a resource_types entry above
      policies = {
        # The "default" key in warming_management_policies corresponds to a warming_sets entry above
        warming_manage : module.aspect-workflows.warming_management_policies["default"].arn
      }
    }
  }
}

Applying custom security groups to runners

You may need to add custom security groups to the runners managed by Aspect Workflows. To do so, set the security_groups attribute on the runners or queue configuration object. This is a map from string to AWS Security Group ID (the name is not currently used by Workflows).

runners = {
  default = {
    security_groups = {
      # The key is a label of your choosing; the value must be a security group ID
      vpn_access : "<security-group-id>"
    }
  }
}
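If the security group already exists and you know it only by name, a data source can resolve its ID. A sketch, assuming an existing group named vpn-access:

```hcl
# Look up an existing security group by name (the "vpn-access" name is
# an assumption) so its ID can be passed to the runner configuration.
data "aws_security_group" "vpn" {
  name = "vpn-access"
}
```

The runner configuration above would then use `vpn_access : data.aws_security_group.vpn.id`.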

Allowing Aspect read-only support access

The Workflows module exposes an IAM policy document that provides read-only access to key logs, metrics, and configuration values. This policy document can be used to create a new IAM policy and role attachment that give Aspect engineers read-only access.

Specifically, the policy defined in this document allows:

  • Read / List on all /aw SSM parameter store keys
  • Describe on all ASGs and their associated instances and the scaling activity
  • Get on log streams and log events with the aw_ prefix
  • SSM access to running instances and port forwarding for Grafana

For example, it could be extended via:

data "aws_iam_policy_document" "support" {
  source_policy_documents = [
    module.aspect-workflows.support_policy.json,
  ]

  statement {
    # Add further statements here to extend the policy
  }
}
Or attached directly to an IAM policy:

resource "aws_iam_policy" "aspect_support" {
  name        = "AspectSupportReadPolicy"
  description = "Allows read only access to areas of Aspect Workflows required for support"
  policy      = module.aspect-workflows.support_policy.json
}
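To complete the grant, the policy can then be attached to the role Aspect engineers assume. A sketch, assuming the example role name from the Granting permissions section:

```hcl
# Attach the support policy to the role Aspect engineers assume.
# The role name here is the example used earlier in this guide.
resource "aws_iam_role_policy_attachment" "aspect_support" {
  role       = "aspect-workflows-comaintainers"
  policy_arn = aws_iam_policy.aspect_support.arn
}
```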

Note that SSM access must be enabled on the aspect-workflows module before the support policy can grant it; see "Allowing SSM access" below.

Allowing SSM access

By default, SSM access to all running instances is disabled. To enable access, set the following on the aspect-workflows module.

enable_ssm_access = true


Setting the PagerDuty routing key

  1. Navigate to AWS Console > AWS Secrets Manager > Secrets.
  2. Locate the secret whose name starts with aw_pd_routing_key_.
  3. Set the value to a PagerDuty Integration Key provided by Aspect.

Alternatively, you can supply the value using Terraform. We expose the AWS Secrets Manager Secret Id via an output from the Workflows terraform module.

resource "aws_secretsmanager_secret_version" "this" {
  secret_id     = module.aspect-workflows.pagerduty_routing_secret_id
  secret_string = "my-value"
}

The secret_string value should be supplied using whatever mechanism you already use for managing secrets.

Cost allocation tagging

To tag all resources created by the Workflows module with cost allocation tags, default tags can be set on the AWS provider that is passed to the module. Workflows also supports overriding the default cost allocation tag, and its value.

provider "aws" {
  alias = "workflows"

  default_tags {
    tags = {
      (module.workflows.cost_allocation_tag) = module.workflows.cost_allocation_tag_value
    }
  }
}

module "workflows" {
  providers = {
    aws = aws.workflows
  }

  # ... other module arguments ...

  # To override the values of the cost allocation tag and tag value
  cost_allocation_tag       = "MyCustomCostAllocationTag"
  cost_allocation_tag_value = "MyCustomCostAllocationTagValue"
}

To apply additional tags to build resources, see the section below.

Adding custom tags to build resources

You may need to add custom tags to the instances that run builds, for example for security auditing or cost tracking. To add additional tags to the EC2 resources that Workflows creates from the ASGs, set the tags attribute on the resource type:

resource_types = {
  "default" = {
    tags = {
      CustomTag : "CustomValue"
    }
  }
}

Adding additional tags to build resources is not yet supported on Buildkite agents.

These tags will always propagate to the runners, in addition to the existing cost allocation tag settings.


Run terraform apply, or use whatever automation you already use for your infrastructure-as-code, such as Atlantis.

You'll get a resulting infrastructure like the following:

[Architecture diagram]

Next steps

Continue by choosing which CI platform you plan to interact with, and follow the corresponding installation steps.