
Infrastructure as Code with Terraform

 
 

Architecture | Vlad Cenan |
25 February 2019

Infrastructure as Code with Terraform | The WHY and the HOW TO

THE WHY

Over the last few years, we’ve started to use Terraform on our projects at Endava, and in this article we’ll explain why we find it to be a great tool for cloud infrastructure management and provide some pointers to help you get started with it yourself.

Let’s start by understanding what Terraform is and why we’ve chosen to use it in our projects. Terraform is a tool for building, changing, and versioning infrastructure safely and efficiently. It can manage existing and popular service providers as well as custom in-house solutions, covering low-level components such as compute instances, storage, and networking, as well as high-level components such as DNS entries, SaaS features, etc. In the big picture, Terraform stands behind the idea of IaC (Infrastructure as Code), where you treat all operations as software in order to create, update, and deploy your infrastructure.

To solve our automation needs for building infrastructure, we started looking at the options available for provisioning/orchestration and configuration management tools.

Unlike cloud-specific tools, like AWS CloudFormation, that can only be used with a single cloud provider, Terraform acts as an abstraction layer on top of a cloud platform and so can manage infrastructure across a range of cloud providers.

In Terraform we found many advantages that fit our needs for automating infrastructure. Here are some of the key strong points:

■ Open source

■ Planning phase (dry run – since we specify the end state, we can preview the actions that will actually be performed)

■ Simple syntax (HCL or JSON)

■ Parallelism (Terraform builds independent resources across providers in parallel)

■ Multiple providers (cloud platforms)

■ Cloud agnostic (allows you to automate infrastructure stacks from multiple cloud service providers simultaneously and integrate other third-party services)

■ Well-documented

■ Immutable infrastructure (once deployed, infrastructure is changed by replacing it rather than modifying it in place, which improves stability and predictability)

■ Extensible – you can write Terraform plugins to add new advanced functionality to the platform

■ Avoids ad-hoc scripts and non-idempotent code

Rather than individual infrastructure resources, Terraform focuses on a higher-level abstraction of the data centre and its associated services, and is very powerful when combined with a configuration management tool such as Chef or Ansible. It would be ideal to be able to manage infrastructure with a single tool, but each has its own strengths and they complement each other well. Terraform has over 60 providers, and the AWS provider alone has over 90 resources, for example. Using Terraform and Chef together can solve the complicated problem of provisioning full infrastructures. In our project we used both to manage the immutable infrastructure for a web application, shown in the images below.

Fig.1 Immutable infrastructure diagram (demo)

Fig.2 Project Structure

THE HOW

After seeing what Terraform is and the advantages of using it, let’s see how simple it is to get started.

Terraform code is written in HCL (HashiCorp Configuration Language) in files with the ".tf" extension, where your goal is to describe the infrastructure you want.

The list of providers for Terraform can be found at https://www.terraform.io/docs/providers/. Providers are cloud platforms and can be configured by adding the following to a main.tf file:

	provider "aws" {
		region = "us-east-1"
	}

This means you will use the AWS provider and deploy into the us-east-1 region. Each provider offers different kinds of resources. Credentials that allow creating and destroying resources can be supplied here inside the provider block; alternatively, the tool will use the default credentials in the '~/.aws/credentials' file.
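As a sketch, you can also point the provider at a named profile from the shared credentials file rather than relying on the default (the profile name below is our own example; profile and shared_credentials_file are standard AWS provider arguments):

```hcl
# Use a named profile from the shared credentials file instead of the
# default credentials. Avoid hard-coding real secrets in version control.
provider "aws" {
	region                  = "us-east-1"
	profile                 = "demo"                 # hypothetical profile name
	shared_credentials_file = "~/.aws/credentials"
}
```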

By adding the following to main.tf you will deploy an ec2 instance named example in your region:

	resource "aws_instance" "example" {
		ami = "ami-2d39803a"
		instance_type = "t2.micro"
	}
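To inspect an attribute of the created resource after an apply, you can also declare an output block (public_ip is an attribute exposed by the aws_instance resource; the output name here is our own choice):

```hcl
# Expose the instance's public IP after "terraform apply";
# read it back later with "terraform output public_ip".
output "public_ip" {
	value = "${aws_instance.example.public_ip}"
}
```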

Having installed Terraform, to try it out for yourself, type the following commands in a terminal in the directory where you created your HCL file:

~]$ terraform init
(prepares the working directory for use; it is safe to run multiple times to update the working directory configuration)

~]$ terraform plan
(performs a dry run so you can see what Terraform will do before running it; in the output, + marks resources to be created, - resources to be destroyed, and ~ resources to be modified)

~]$ terraform apply
(applies the plan and provisions the resources)

In your main.tf file, change the instance type value from "t2.micro" to "t2.medium" and run the "terraform apply" command again to see how easy it is to modify infrastructure with Terraform. The prefix -/+ means that Terraform will destroy and recreate the resource. While some attributes can be updated in place (shown with ~), changing the instance_type or the ami for an EC2 instance requires recreating it. Terraform handles these details for you, and the execution plan makes it clear what Terraform will do.

	-/+ aws_instance.realdoc_vm (new resource required)
	instance_type: "t2.micro" => "t2.medium"
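Instead of editing main.tf each time, the instance type can be parameterised with an input variable – a common pattern, sketched below with a variable name of our own choosing:

```hcl
# Parameterise the instance type; override at apply time with
#   terraform apply -var="instance_type=t2.medium"
variable "instance_type" {
	default = "t2.micro"
}

resource "aws_instance" "example" {
	ami           = "ami-2d39803a"
	instance_type = "${var.instance_type}"
}
```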

Once again, Terraform prompts for approval of the execution plan before proceeding; answer yes to execute the planned steps. As indicated by the execution plan, Terraform first destroys the existing instance and then creates a new one in its place. You can use terraform show again to see the new values associated with the instance, and the destroy command to tear down the infrastructure:

	~]$ terraform destroy

The greatest advantage of using Terraform is automating the provisioning of new servers and other resources. This both saves time and reduces the possibility of human error.

Using Terraform to specify infrastructure as code has been a huge productivity boost for us. We can create deployments for new customers much more quickly and with more consistency than before, and I strongly recommend it.

Vlad Cenan

DevOps Engineer

Vlad is a DevOps engineer with close to a decade of experience across release and systems engineering. He loves Linux, open source, sharing his knowledge and using his troubleshooting superpowers to drive organisational development and system optimisation. And running, he loves running too.

 
