I’m Charles Fort, a developer at Woot who specializes in deployments and developer experience. Woot is the original daily deals website. It was founded in 2004 and acquired by Amazon in 2010 – https://www.woot.com

We just moved our web front-end deployments from Troop, a deployment agent we developed ourselves, to AWS CodeDeploy and AWS CodePipeline. This migration involved launching a new customer-facing EC2 web server fleet with the CodeDeploy agent installed and creating and managing our CodeDeploy and CodePipeline resources with AWS CloudFormation. Immediately after we completed the migration, we observed a ~50% reduction in HTTP 500 errors during deployment.

In this blog post, I’m going to show you:

  • Why we chose AWS deployment tools.
  • An architectural diagram of Woot’s systems.
  • An overview of the migration project.
  • Our migration results.

 

The old and busted

We wanted to replace our in-house deployment system with something we could build automation on top of, and something that we didn’t have to own or maintain. We already own and maintain a build system, which is bad enough. We didn’t want to run additional infrastructure for our deployment pipeline.

In short, we wanted a cloud service. Because all of our infrastructure is in AWS, CodeDeploy was a natural fit to replace our deployment agent. CodePipeline acts as the automation orchestrator, our wizard in the cloud telling CodeDeploy what to deploy and when.

 

Architecture overview

Here’s a look at the architecture of a Woot web front end:

Woot architecture overview

Project overview

Our project involved migrating five web front ends, which together handle an average of 12 million requests per day, to CodeDeploy and CodePipeline while keeping the site live for our customers.

Here are the steps we took:

  • Wrote some new deployment scripts.
  • Launched a new fleet of EC2 web servers with CodeDeploy support.
  • Created a deployment pipeline for our CloudFormation-defined CodeDeploy and CodePipeline configuration.
  • Introduced our new fleet to live traffic. Hello, customers!

 

Deployment scripts

Our old deployment system didn’t stop or start our web server. Instead, it tried to swap the build artifacts out from under the server while it was running. That’s definitely a bad idea.

We wrote some deployment scripts in PowerShell that are run by CodeDeploy to stop and start our IIS web servers. These scripts work in conjunction with the Elastic Load Balancing (ELB) support in CodeDeploy because we certainly didn’t want to stop the web server while it was serving customer traffic.

 

New fleet

Because our fleet is running on Amazon EC2, we built an Amazon Machine Image (AMI) for our web fleet with the CodeDeploy agent already installed. From a fleet perspective, this was most of what we had to do. With the agent installed, CodeDeploy can use our deployment scripts to deploy our web projects.
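The agent gets baked in at AMI-build time. As a rough sketch of that step (the bucket name is the region-specific one AWS documents; the local paths are illustrative choices, and this assumes the AWS Tools for PowerShell are available on the build host), installing the agent on a Windows instance looks something like this:

```powershell
# Download the CodeDeploy agent installer from the region-specific
# AWS bucket and run it silently. (Illustrative sketch; C:\temp and
# the log file path are arbitrary.)
New-Item -Path "C:\temp" -ItemType Directory -Force
Read-S3Object -BucketName "aws-codedeploy-us-east-1" `
              -Key "latest/codedeploy-agent.msi" `
              -File "C:\temp\codedeploy-agent.msi"
Start-Process -Wait -FilePath "C:\temp\codedeploy-agent.msi" `
              -ArgumentList "/quiet", "/l", "C:\temp\codedeploy-agent-install.log"
```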

 

AWS CloudFormation deployment pipeline

Because we needed a deployment pipeline and several pieces of CodeDeploy configuration (a CodeDeploy application and at least one CodeDeploy deployment group) for each web project we want to deploy, we decided to use AWS CloudFormation to version this configuration. Our build system (TeamCity) can read from our version control system and write to Amazon S3. We made a simple build in TeamCity to push an AWS CloudFormation template to S3, which triggers a pipeline that deploys to AWS CloudFormation. This creates the CodePipeline and CodeDeploy resources. Now we can do code reviews on our infrastructure changes. More eyes is more safety! We can also trace infrastructure changes over time, just like we can with code changes.
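The TeamCity build step itself doesn’t have to be fancy. A minimal sketch of that push, assuming the AWS Tools for PowerShell and a hypothetical bucket and key layout (our actual names differ):

```powershell
# Push the CloudFormation template to the S3 location the bootstrap
# pipeline watches. (Hypothetical bucket and key; the pipeline's S3
# source action picks up the new object version and hands the
# template to CloudFormation.)
Write-S3Object -BucketName "woot-infra-templates" `
               -Key "Woot.Example/infrastructure.yml" `
               -File ".\infrastructure.yml"
```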

 

Live traffic time!

Our web fleets run behind Classic Load Balancers. By choosing CodeDeploy, we can use its sweet new ELB features. For example, through an elastic load balancer, CodeDeploy can prevent internet traffic from being routed to an instance while it is being deployed to. After the deployment to that instance is complete, it then makes the instance available for traffic.

We launched new hosts with the CodeDeploy agent and deployed to them without ELB support turned on. Then we slowly, manually introduced them into our fleet while monitoring stats. After we had the new machines in, we slowly removed the old ones from the load balancer until our fleet was fully supported by CodeDeploy. Doing the migration this way resulted in 0 downtime for our sites.

One fun detail: When we had 2/3 of the new fleet in our load balancer, we triggered a CodeDeploy deployment to the fleet, but this time with ELB support turned on. This caused CodeDeploy to place the rest of the machines into the load balancer (coexisting with the old fleet), and there were slightly fewer buttons to press.
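That 2/3 figure lines up with the MinimumHealthyHosts setting (a FLEET_PERCENT of 66) in our deployment configuration. As I understand the CodeDeploy docs, the percentage is converted to a host count, rounding up, so the arithmetic works out like this:

```powershell
# How many instances CodeDeploy can take offline at once while
# honoring a FLEET_PERCENT minimum-healthy setting.
function Get-MaxHostsOffline([int]$FleetSize, [int]$MinHealthyPercent) {
    $minHealthy = [math]::Ceiling($FleetSize * $MinHealthyPercent / 100)
    return $FleetSize - $minHealthy
}

Get-MaxHostsOffline -FleetSize 9 -MinHealthyPercent 66  # 6 must stay healthy, so 3 at a time
```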

 

AWS CloudFormation example

This is a simplified example of the AWS CloudFormation template we use to manage the AWS configuration for one of our web projects. It is deployed in a deployment pipeline, much like the web projects themselves.

Parameters:
  CodePipelineBucket:
    Type: String
  CodePipelineRole:
    Type: String
  CodeDeployRole:
    Type: String
  CodeDeployBucket:
    Type: String

Resources:
  ### Woot.Example deployment configuration ###
  ExampleDeploymentConfig:
    Type: 'AWS::CodeDeploy::DeploymentConfig'
    Properties:
      MinimumHealthyHosts:
        Type: FLEET_PERCENT
        Value: '66' # Let's keep 2/3 of the fleet healthy at any point

  # Woot.Example CodeDeploy application
  WootExampleApplication:
    Type: "AWS::CodeDeploy::Application"
    Properties:
      ApplicationName: "Woot.Example"

  # Woot.Example CodeDeploy deployment groups
  WootExampleDeploymentGroup:
    DependsOn: "WootExampleApplication"
    Type: "AWS::CodeDeploy::DeploymentGroup"
    Properties:
      ApplicationName: "Woot.Example"
      DeploymentConfigName: !Ref "ExampleDeploymentConfig" # use the deployment configuration we created
      DeploymentGroupName: "Woot.Example.Main"
      AutoRollbackConfiguration:
        Enabled: true
        Events:
          - DEPLOYMENT_FAILURE # this makes the deployment rollback when the deployment hits the failure threshold
          - DEPLOYMENT_STOP_ON_REQUEST # this makes the deployment rollback if you hit the stop button
      LoadBalancerInfo:
        ElbInfoList:
          - Name: "WootExampleInternal" # this is the ELB the hosts live in; they will be added and removed from here
      DeploymentStyle:
        DeploymentOption: "WITH_TRAFFIC_CONTROL" # this tells CodeDeploy to actually add/remove the hosts from the ELB
      Ec2TagFilters:
        - Key: "Name"
          Value: "exampleweb*" # deploy to all machines named like exampleweb001.yourdomain.com, etc.
          Type: "KEY_AND_VALUE"
      ServiceRoleArn: !Sub "arn:aws:iam::${AWS::AccountId}:role/${CodeDeployRole}" # this is the IAM role CodeDeploy in your account should use

  # Woot.Example CodePipeline
  WootExampleDeploymentPipeline:
    DependsOn: "WootExampleDeploymentGroup"
    Type: "AWS::CodePipeline::Pipeline"
    Properties:
      RoleArn: !Sub "arn:aws:iam::${AWS::AccountId}:role/${CodePipelineRole}" # this is the IAM role CodePipeline in your account should use
      Name: "Woot.Example" # name of the pipeline
      ArtifactStore:
        Type: S3
        Location: !Ref "CodePipelineBucket" # the bucket CodePipeline uses in your account to shuffle artifacts around
      Stages:
        - Name: Source # one S3 source stage
          Actions:
            - Name: SourceAction
              ActionTypeId:
                Category: Source
                Owner: AWS
                Version: 1
                Provider: S3
              OutputArtifacts:
                - Name: SourceOutput
              Configuration:
                S3Bucket: !Ref "CodeDeployBucket" # the S3 bucket your builds go into (needs to be versioned)
                S3ObjectKey: "Woot.Example/Woot.Example-Release.zip" # build artifact path for this application
              RunOrder: 1
        - Name: Deploy # one deploy stage that triggers the CodeDeploy deployment
          Actions:
            - Name: DeployAction
              ActionTypeId:
                Category: Deploy
                Owner: AWS
                Version: 1
                Provider: CodeDeploy
              InputArtifacts:
                - Name: SourceOutput
              Configuration:
                ApplicationName: "Woot.Example"
                DeploymentGroupName: "Woot.Example.Main"
              RunOrder: 2

Appspec.yml example

This is the appspec.yml file we use for our main web front end (Woot.Web.Retail). The appspec.yml file tells CodeDeploy where to put files and when to run our deployment scripts.

version: 0.0
os: windows
files:
  - source: /
    destination: C:\Woot\Woot.Web.Retail\Initial
hooks:
  BeforeInstall:
    - location: bin\Deployment\stopWebsite.ps1
    - location: bin\Deployment\clearWebsiteDeployment.ps1
  AfterInstall:
    - location: bin\Deployment\startWebsite.ps1

Deployment scripts

Because our server-launching infrastructure doesn’t use CodeDeploy for initial placement of the build artifacts, CodeDeploy won’t overwrite the files. (The service has no knowledge of them.) This is both good and bad: good because CodeDeploy won’t overwrite files it didn’t write, and bad because it means we have to have a deployment script like clearWebsiteDeployment.ps1:

clearWebsiteDeployment.ps1:

$appName = $env:APPLICATION_NAME
Remove-Item "C:\Woot\$appName\Initial\*" -recurse -force

 

stopWebsite.ps1:

# restart script as 64-bit PowerShell if it's 32-bit
if ($PSHOME -like "*SysWOW64*") {
    & (Join-Path ($PSHOME -replace "SysWOW64", "SysNative") powershell.exe) -File `
        (Join-Path $PSScriptRoot $MyInvocation.MyCommand) @args
    Exit $LastExitCode
}

Import-Module WebAdministration
Stop-Website -Name $env:APPLICATION_NAME

# sleep for 10 seconds to give IIS a chance to stop
Start-Sleep -s 10

$website = Get-Website -Name "*$env:APPLICATION_NAME*"
if ($website.State -ne 'stopped') {
    throw "The website cannot be stopped"
}

 

startWebsite.ps1:

# restart script as 64-bit PowerShell if it's 32-bit
if ($PSHOME -like "*SysWOW64*") {
    & (Join-Path ($PSHOME -replace "SysWOW64", "SysNative") powershell.exe) -File `
        (Join-Path $PSScriptRoot $MyInvocation.MyCommand) @args
    Exit $LastExitCode
}

Import-Module WebAdministration
Start-Website -Name $env:APPLICATION_NAME

# sleep for 10 seconds to give IIS a chance to start
Start-Sleep -s 10

$website = Get-Website -Name "*$env:APPLICATION_NAME*"
if ($website.State -ne 'started') {
    throw "The website cannot be started"
}

You can see we used a CodeDeploy environment variable ($env:APPLICATION_NAME) in our scripts. The name of the CodeDeploy application is also the name of the IIS website. This way, we can use the same deployment scripts for multiple websites.

 

The new hotness

Now that we’re running CodeDeploy in production, we are extremely pleased with the results. Our old deployment agent, Troop, did not give us much control over the way releases went out. Now we can check on a deployment at a per-instance level, and the opportunities for automation are impressive.

After our migration, we saw a 50% reduction in HTTP 500 errors served to customers during deployments. We looked at the one-hour time slices during a deployment and compared the average count before and after our migration to CodeDeploy. These numbers show our old deployment system was hella busted (really broken).

This graph shows summed deployment errors over time and compares CodeDeploy to our legacy in-house deployment agent (Troop).

 

CodeDeploy vs Troop

We plan to implement a full cross-account release process on AWS deployment tools. We will have a single AWS account with a pipeline that controls CodeDeploy in our various environments, triggering tests and promoting to the next environment as they pass. Building something like that with our own tooling would take a lot of work. Thanks to CodeDeploy, CodePipeline, and AWS CloudFormation for making our lives easier.

 
