Terraform is an open-source infrastructure as code tool that lets you safely and predictably create, change, and improve infrastructure. This is a slightly longer blog post, but Terraform is becoming more essential in developers’ tech stacks, so let’s start with some basics and slowly branch out to more complex use cases of Terraform and Travis CI. Let’s do this.
Let’s take a look at my terraform_config file:
# Montana's Terraform config
provider "aws" {
  region = "eu-west-1"
  # access_key = "PUT-YOUR-ACCESS-KEY-HERE"
  # secret_key = "PUT-YOUR-SECRET-KEY-HERE"
  version = "~> 2.55.0"
}

terraform {
  required_providers {
    aws = {
      source = "hashicorp/aws"
    }
  }
  required_version = ">= 0.13"
}
A couple of things you’ll notice in my config file: I selected AWS as my provider, the credentials come from environment variables, and I pin both the provider version and the Terraform version I want to enforce.
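Since the access_key and secret_key lines are commented out, the AWS provider will pick the credentials up from the standard environment variables; a minimal sketch (the values are placeholders, swap in your own):
export AWS_ACCESS_KEY_ID="PUT-YOUR-ACCESS-KEY-HERE"
export AWS_SECRET_ACCESS_KEY="PUT-YOUR-SECRET-KEY-HERE"
export AWS_DEFAULT_REGION="eu-west-1"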
We are going to assume this project lives in a shared setting, so local state will not be good enough; we’re going to have to use remote state. In short, with remote state Terraform writes the state data to a remote data store, in this case cloud storage, which can then be shared between all members of a team. Think of it as a Google Doc where you can pick and choose who has access to it.
Before we move to the architecture section, remember this list of Terraform CLI configuration settings (a sample config file using a couple of them follows the list):
credentials
credentials_helper
disable_checkpoint
disable_checkpoint_signature
plugin_cache_dir
provider_installation
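These settings live in the Terraform CLI configuration file, ~/.terraformrc on Linux and macOS (terraform.rc on Windows). Here is a small sample using a couple of them; the cache path is just an example:
# ~/.terraformrc
plugin_cache_dir   = "$HOME/.terraform.d/plugin-cache"
disable_checkpoint = true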
So you as a developer decide to use Travis CI, the best tool for building on the internet, so first off, good choice! Here’s roughly what you’re going to need to get this off the ground: a Travis CI account hooked up to your repository, AWS credentials, and the Terraform files we’re about to write. We know what Travis does, but in case you don’t, in this use case Travis is going to build the website artifacts, deploy the infrastructure, and push the artifacts to production instead of a staging environment.
In this example I’m going to be using an Amazon S3 backend with DynamoDB for Terraform; I’ve found DynamoDB is the most user friendly with Terraform. Terraform will store the state (remember: not local, but remote) within S3 and use DynamoDB to acquire a lock while performing changes. The lock is important to prevent two Terraform binaries from modifying the same state concurrently; if two Terraform instances were doing this, you can imagine the trouble and headache it would cause.
Let’s start with the S3 backend config file:
terraform {
  required_version = ">= 0.12"
}

provider "aws" {}

data "aws_caller_identity" "current" {}

locals {
  account_id = data.aws_caller_identity.current.account_id
}

resource "aws_s3_bucket" "terraform_state" {
  bucket = "${local.account_id}-terraform-states"

  versioning {
    enabled = true
  }

  # I strongly encourage enabling server side encryption by default - Montana
  server_side_encryption_configuration {
    rule {
      apply_server_side_encryption_by_default {
        sse_algorithm = "AES256"
      }
    }
  }
}
Now let’s create the DynamoDB Terraform config file:
resource "aws_dynamodb_table" "terraform_lock" {
name = "terraform-lock"
billing_mode = "PAY_PER_REQUEST"
hash_key = "LockID"
attribute {
name = "LockID"
type = "S"
}
}
You can see the ‘lock’ mechanism in place to ensure only one instance of Terraform touches the state at a time. Next you’re going to want some visibility into this process, so let’s set up some Terraform outputs:
output "s3_bucket_name" {
value = aws_s3_bucket.terraform_state.id
description = "Montana's Terraform S3 Bucket"
}
output "s3_bucket_arn" {
value = aws_s3_bucket.terraform_state.arn
description = "Montana's ARN Bucket"
}
output "s3_bucket_region" {
value = aws_s3_bucket.terraform_state.region
description = "Montana's S3 Region of the Bucket"
}
output "dynamodb_table_name" {
value = aws_dynamodb_table.terraform_lock.name
description = "Montana's ARN of the DynamoDB table"
}
output "dynamodb_table_arn" {
value = aws_dynamodb_table.terraform_lock.arn
description = "The ARN of the DynamoDB table"
}
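Once the resources are applied you can read any of these back on the command line, which is handy in scripts; for example:
terraform output s3_bucket_name
terraform output dynamodb_table_name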
You may now be asking yourself: how can we use Terraform to set up the S3 bucket and DynamoDB table we want to use for the remote state backend? There are a few ways we can do this. Whichever route you pick, it involves creating the remote state resources using local state first. Remember that Terraform state might contain secrets. In the case of only the Amazon S3 bucket and the DynamoDB table there is only one value which might be problematic: the AWS access key. If you are working with a private repository, this might not be a huge issue. When working on open source code it can be useful to encrypt the state file, as I described earlier, before committing it to GitHub or whatever VCS you use. You can do this with OpenSSL or even more specialized tools.
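Here’s a quick sketch of the OpenSSL route; the STATE_PASSPHRASE environment variable is just a name I picked, and the -pbkdf2 flag needs OpenSSL 1.1.1 or newer:
# Encrypt the state file before committing it
openssl enc -aes-256-cbc -pbkdf2 -salt \
  -in terraform.tfstate -out terraform.tfstate.enc \
  -pass env:STATE_PASSPHRASE

# Decrypt it again when you need to work with it
openssl enc -d -aes-256-cbc -pbkdf2 \
  -in terraform.tfstate.enc -out terraform.tfstate \
  -pass env:STATE_PASSPHRASE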
Let’s create two workspaces in this scenario: state and prod. The state workspace will manage the remote state resources, so the S3 bucket and the DynamoDB table. The prod workspace will manage the production environment of our website. You can add more workspaces for staging or testing later, but that is beyond the scope of this post.
Let’s create three folders containing Terraform files. This is now the outline of the project in question; note that the arrows are symlinks (I’ll sketch how to create them right after the tree):
.
├── locals.tf
├── providers.tf
├── backend
│ ├── backend.tf
│ ├── backend.tf.tmpl
│ ├── locals.tf -> ../locals.tf
│ ├── providers.tf -> ../providers.tf
│ └── state.tf -> ../bootstrap/state.tf
├── bootstrap
│ ├── locals.tf -> ../locals.tf
│ ├── providers.tf -> ../providers.tf
│ └── state.tf
└── website
├── backend.tf -> ../backend/backend.tf
├── locals.tf -> ../locals.tf
├── providers.tf -> ../providers.tf
└── website.tf
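The symlinks mean the shared locals.tf and providers.tf only exist once at the project root. One way to create them, run from the project root:
ln -s ../locals.tf ../providers.tf backend/
ln -s ../locals.tf ../providers.tf bootstrap/
ln -s ../locals.tf ../providers.tf website/
ln -s ../bootstrap/state.tf backend/state.tf
ln -s ../backend/backend.tf website/backend.tf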
Let’s take a look at the contents of bootstrap/state.tf:
# Montana's config files /bootstrap/state.tf
locals {
  state_bucket_name = "${local.project_name}-${data.aws_caller_identity.current.account_id}-${data.aws_region.current.name}"
  state_table_name  = "${local.state_bucket_name}"
}

resource "aws_dynamodb_table" "locking" {
  name           = "${local.state_table_name}"
  read_capacity  = "20"
  write_capacity = "20"
  hash_key       = "LockID"

  attribute {
    name = "LockID"
    type = "S"
  }
}

resource "aws_s3_bucket" "state" {
  bucket = "${local.state_bucket_name}"
  region = "${data.aws_region.current.name}"

  versioning {
    enabled = true
  }

  server_side_encryption_configuration {
    rule {
      apply_server_side_encryption_by_default {
        sse_algorithm = "AES256"
      }
    }
  }

  tags = {
    Name        = "terraform-state-bucket"
    Environment = "global"
    project     = "${local.project_name}"
  }
}

output "BACKEND_BUCKET_NAME" {
  value = "${aws_s3_bucket.state.bucket}"
}

output "BACKEND_TABLE_NAME" {
  value = "${aws_dynamodb_table.locking.name}"
}
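state.tf leans on the symlinked locals.tf and providers.tf for local.project_name and the two data sources. Here’s a minimal sketch of what those shared files could contain; the project name is just an example:
# locals.tf
locals {
  project_name = "montanas-website"
}

# providers.tf
provider "aws" {
  region  = "eu-west-1"
  version = "~> 2.55.0"
}

data "aws_caller_identity" "current" {}
data "aws_region" "current" {}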
Above I’ve defined the S3 bucket and enabled encryption as well as versioning. Encryption is important because Terraform state more often than not contains secrets. You’ll also notice the attribute called LockID: Terraform’s S3 backend expects the DynamoDB locking table to have a primary key with exactly that name, which again keeps two instances of Terraform from running against the state at once.
Let’s create the state workspace by running these commands:
terraform workspace new state
terraform init bootstrap
terraform apply bootstrap
After this, the S3 bucket and DynamoDB table are created and we can migrate the local state. Let’s look at the backend/backend.tf.tmpl file; this is the template the real backend configuration will follow. You can export the environment variables by hand, or, as in my case, set them from key/value pairs in a .env file. You can use this crafty bash script I’ve provided if you don’t want to go down the traditional route:
#!/bin/bash
# Load key/value pairs from .env into the environment, exporting them all.
dotenv () {
  set -a
  [ -f .env ] && . .env
  set +a
}

dotenv

# Re-run dotenv every time we change directory.
cd () {
  builtin cd "$@"
  dotenv
}
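With that helper in place, cd-ing into the project sources a .env file automatically. The keys mirror the outputs of the state workspace; the values below are made up, yours come from terraform output:
# .env - keep this out of version control
BACKEND_BUCKET_NAME=montanas-website-123456789012-eu-west-1
BACKEND_TABLE_NAME=montanas-website-123456789012-eu-west-1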
So backend/backend.tf.tmpl should look like this:
terraform {
  backend "s3" {
    bucket         = "${BACKEND_BUCKET_NAME}"
    key            = "terraform.tfstate"
    region         = "eu-central-1"
    dynamodb_table = "${BACKEND_TABLE_NAME}"
  }
}
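One way to render the template into the real backend/backend.tf is to pull the values straight from the state workspace outputs and pipe the template through envsubst (it ships with GNU gettext). A sketch, assuming you’re in the project root and on Terraform 0.13 (on 0.14 or newer add -raw to terraform output):
# Make sure the state workspace, which holds the outputs, is selected
terraform workspace select state

export BACKEND_BUCKET_NAME=$(terraform output BACKEND_BUCKET_NAME)
export BACKEND_TABLE_NAME=$(terraform output BACKEND_TABLE_NAME)

# envsubst fills in the ${...} placeholders from the environment
envsubst < backend/backend.tf.tmpl > backend/backend.tf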
Let’s initialize the backend, run:
terraform init backend
So you should have the core assets in place; now we have to specify the HTML and CSS files. This is a bit cumbersome, as we cannot simply point Terraform at a whole folder the way you might with wget or curl, but that’s Terraform’s structure. Let’s look at website.tf:
locals {
  site_root  = "website/static/_site"
  index_html = "${local.site_root}/index.html"
  about_html = "${local.site_root}/about/index.html"
  post_html  = "${local.site_root}/jekyll/update/2018/06/30/welcome-to-jekyll.html"
  error_html = "${local.site_root}/404.html"
  main_css   = "${local.site_root}/assets/main.css"
}

resource "aws_s3_bucket_object" "index" {
  bucket       = "${aws_s3_bucket.website.id}"
  key          = "index.html"
  source       = "${local.index_html}"
  etag         = "${md5(file(local.index_html))}"
  content_type = "text/html"
}

resource "aws_s3_bucket_object" "post" {
  bucket       = "${aws_s3_bucket.website.id}"
  key          = "jekyll/update/2018/06/30/welcome-to-jekyll.html"
  source       = "${local.post_html}"
  etag         = "${md5(file(local.post_html))}"
  content_type = "text/html"
}

resource "aws_s3_bucket_object" "about" {
  bucket       = "${aws_s3_bucket.website.id}"
  key          = "about/index.html"
  source       = "${local.about_html}"
  etag         = "${md5(file(local.about_html))}"
  content_type = "text/html"
}

resource "aws_s3_bucket_object" "error" {
  bucket       = "${aws_s3_bucket.website.id}"
  key          = "error.html"
  source       = "${local.error_html}"
  etag         = "${md5(file(local.error_html))}"
  content_type = "text/html"
}

resource "aws_s3_bucket_object" "css" {
  bucket       = "${aws_s3_bucket.website.id}"
  key          = "assets/main.css"
  source       = "${local.main_css}"
  etag         = "${md5(file(local.main_css))}"
  content_type = "text/css"
}

output "url" {
  value = "http://${local.website_bucket_name}.s3-website.${aws_s3_bucket.website.region}.amazonaws.com"
}
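As an aside, if you’re on Terraform 0.12.8 or newer, fileset() plus for_each can cut down on the copy-pasted aws_s3_bucket_object blocks. This is only a rough sketch, not a drop-in replacement, since the content types and the error.html key would still need handling:
resource "aws_s3_bucket_object" "site" {
  # One object per HTML file found under the Jekyll output directory
  for_each = fileset(local.site_root, "**/*.html")

  bucket = aws_s3_bucket.website.id
  key    = each.value
  source = "${local.site_root}/${each.value}"
  etag   = filemd5("${local.site_root}/${each.value}")
}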
Now let’s run some of these final commands:
terraform workspace new prod
terraform init website
cd website/static && jekyll build && cd -
terraform apply website
We’ve now created a static website using Terraform, S3, and DynamoDB. Let’s now implement Travis to catch any bugs! As you know, we need to make our .travis.yml, which, simply put, tells the build server which commands to execute. Let’s take a look at the .travis.yml file:
---
language: generic # this can also be left out

install:
  - gem install bundler jekyll

script:
  - ./build.sh
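One thing the file above doesn’t show: the apply step in build.sh needs AWS credentials on the build machine. You can add them as encrypted environment variables with the Travis CLI (or in the repository settings); the values here are placeholders:
travis encrypt AWS_ACCESS_KEY_ID="PUT-YOUR-ACCESS-KEY-HERE" --add env.global
travis encrypt AWS_SECRET_ACCESS_KEY="PUT-YOUR-SECRET-KEY-HERE" --add env.global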
We need that build.sh file, but remember to make it executable (chmod +x build.sh). Here are its contents:
#!/bin/bash
cd website/static
bundle install
bundle exec jekyll build
cd -

./terraform-linux init
./terraform-linux validate website

if [[ $TRAVIS_BRANCH == 'master' ]]
then
  ./terraform-linux workspace select prod
  ./terraform-linux apply -auto-approve website
fi
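build.sh also calls a ./terraform-linux binary sitting next to it. If you’d rather fetch it during the build than commit it, something along these lines works; the version is pinned purely as an example:
curl -fsSL -o terraform.zip \
  https://releases.hashicorp.com/terraform/0.13.7/terraform_0.13.7_linux_amd64.zip
unzip -o terraform.zip
mv terraform terraform-linux
chmod +x terraform-linux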
If possible, I would probably go for local state only and store it directly within the repository, which would make a lot of these steps easier. Luckily, though, I’ve made a bash script specifically for Terraform and Travis to make life a bit easier. This is meant for a testing environment, so before you use it, make sure you’re running it in staging or some other sort of testing env:
#!/bin/bash
set +x
# Check the creds, made sure by Montana Mendy.
# DO_PAT, ansible_repo, shell_script_repo and cloud_init_repo are expected
# to be set in the environment before this script runs.
cred="-var do_token=${DO_PAT}"
tf=$(which terraform)

init() {
  echo ""
  echo "Init the provider"
  echo ""
  $tf init
  echo "Formatting files"
  echo ""
  $tf fmt
  echo ""
  echo "Validating files"
  echo ""
  $tf validate
}
clone() {
  if [ -d ansible ]; then
    rm -rf ansible
    git clone --depth=1 $ansible_repo
    echo ""
  else
    git clone --depth=1 $ansible_repo
    echo ""
  fi
  if [ -d shell_scripts ]; then
    rm -rf shell_scripts
    git clone --depth=1 $shell_script_repo
    echo ""
  else
    git clone --depth=1 $shell_script_repo
    echo ""
  fi
  if [ -d cloud_init ]; then
    rm -rf cloud_init
    git clone --depth=1 $cloud_init_repo
    echo ""
  else
    git clone --depth=1 $cloud_init_repo
    echo ""
  fi
}
# Clean up shell scripts.
clean() {
  if [ -d shell_scripts ]; then
    rm -rf shell_scripts
  fi
  if [ -d ansible ]; then
    rm -rf ansible
    if [ -f inventory ]; then
      rm inventory
    fi
  fi
  if [ -d cloud_init ]; then
    rm -rf cloud_init
  fi
  if [ -f terraform.tfstate ]; then
    rm -rf terraform*
    rm -rf .terraform
  fi
}
# Run Terraform commands.
plan() {
  clone
  init
  $tf plan $cred
}

apply() {
  clone
  init
  $tf apply -auto-approve $cred
}
case $1 in
  -i | init)
    $tf init
    ;;
  clone)
    clone
    ;;
  plan)
    plan
    ;;
  apply)
    apply
    ;;
  show)
    $tf show
    ;;
  list)
    $tf state list
    ;;
  # Run 'terraform destroy'.
  destroy)
    $tf destroy $cred
    clean
    ;;
  clean)
    clean
    ;;
  -f | fmt)
    $tf fmt
    ;;
  -v | validate)
    $tf validate
    ;;
  *)
    echo "$0 [options]"
    echo ""
    echo "Options are: init | clone | plan | apply | show | list | destroy | clean | fmt | validate"
    echo ""
    ;;
esac
Well, there you have it: we used Terraform, Travis CI, Amazon S3, and DynamoDB to create and deploy a Jekyll website, with some of my tips and tricks attached that I think you may find useful when provisioning with these tools.
If you have any questions at all, please email me at montana@travis-ci.org.
As always, happy building!