Building Project Canary: A Canary Deployment Environment on AWS

Brandon · April 16, 2026
cloud · terraform · aws · devops · project
View on GitHub

Problem

When deploying a new version of an application, you want to verify that it works as expected before routing all traffic to it. If you push bad code straight to production, every user is affected immediately.

Solution

A canary deployment solves this problem by routing a small percentage of traffic to the new version of the application while the rest continues to hit the stable version. Project Canary is my implementation of this pattern: a full canary deployment environment on AWS, built with Terraform, from VPC networking and security group chaining to ALB weighted routing that splits traffic 90/10 between the stable and canary versions. The goal was to demonstrate the infrastructure decisions behind a real deployment pattern.
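The 90/10 split is expressed directly in the ALB listener's forward action. Here's a minimal sketch using the Terraform AWS provider's weighted target group syntax; the resource names (`aws_lb.main`, `aws_lb_target_group.stable`, `aws_lb_target_group.canary`) are placeholders, not the project's actual identifiers:

```hcl
resource "aws_lb_listener" "http" {
  load_balancer_arn = aws_lb.main.arn
  port              = 80
  protocol          = "HTTP"

  default_action {
    type = "forward"

    forward {
      # 90% of requests go to the stable target group...
      target_group {
        arn    = aws_lb_target_group.stable.arn
        weight = 90
      }
      # ...and 10% to the canary target group.
      target_group {
        arn    = aws_lb_target_group.canary.arn
        weight = 10
      }
    }
  }
}
```

Promoting the canary is then just a matter of shifting the weights (e.g. 50/50, then 0/100) and re-applying.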

Architecture Overview

[Diagram: Project Canary architecture overview]

The focus of this architecture was to build a resilient system that routes traffic to two separate application versions. To ensure high availability, I deployed across two availability zones, so that if one AZ goes down, the stable version continues serving from the surviving AZ. Using ALB weighted target groups, 90% of traffic routes to the stable version and 10% to the canary. This allows a controlled rollout where the new version can be validated against real traffic before a full promotion.

The EC2 instances sit in private subnets with no direct internet access. Security group chaining ensures they only accept traffic from the ALB on port 80 and SSH from the bastion host. Responses flow back through the ALB to the end user.
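Security group chaining means the application instances' ingress rules reference other security groups rather than CIDR ranges, so only traffic originating from the ALB or the bastion is ever accepted. A sketch of the idea (the names `aws_vpc.main`, `aws_security_group.alb`, and `aws_security_group.bastion` are illustrative placeholders, not the project's actual identifiers):

```hcl
# App-tier security group: no CIDR-based ingress at all.
resource "aws_security_group" "app" {
  vpc_id = aws_vpc.main.id

  # HTTP only from the ALB's security group.
  ingress {
    from_port       = 80
    to_port         = 80
    protocol        = "tcp"
    security_groups = [aws_security_group.alb.id]
  }

  # SSH only from the bastion host's security group.
  ingress {
    from_port       = 22
    to_port         = 22
    protocol        = "tcp"
    security_groups = [aws_security_group.bastion.id]
  }

  # Allow outbound responses and package downloads via a NAT gateway.
  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }
}
```

Because the rules point at security group IDs, the chain holds even as instances are replaced or the ALB's IPs change; nothing outside those two groups can reach the app tier.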

Design Decisions and Trade-offs

What I’d do differently