Putting your code onto Azure using Terraform and Azure DevOps Pipelines

This article is about using two tools – Terraform from HashiCorp and Azure DevOps Pipelines (DOP) – to get code into Azure.  It won’t go into details of either tool, but will describe the problem they’re trying to solve, and how they work together to solve it.  Hopefully this will give you the necessary context for when you learn the details of either tool.

There are four areas of interest here – cloud hosting, configuring a cloud host, getting code ready to deploy (building it, unit testing it etc.) and deploying code.  There are alternatives in each of the areas – I’m not saying that you should use this combination rather than other options, but this is how you can use this combination if you want to.

Some example alternatives: AWS or Google Cloud rather than Azure for cloud hosting; ARM templates, Bicep or Pulumi rather than Terraform for configuring the host; and Jenkins, GitHub Actions or Octopus Deploy rather than Azure DevOps Pipelines for building and deploying code.

What is the problem?

Even though it’s easy to think of the problem as “I want to get my code running in Azure”, it’s more complicated than that in reality.  At the risk of seeming nit-picky, there’s a lot of important detail behind both “my code” and “Azure”, plus some other less obvious stuff to do with many worlds in Azure.

The problem with “Azure”

For a start, it’s not just “Azure” it’s something like:

  • App service X
  • Configured with parameters A, B and C (such as the service plan, the location and whether it does or doesn’t have hot and cold slots)
  • That’s part of subscription S

You might have two different lumps of code that each go to their own app service.  Then there’s the database, web server etc.

There is probably enough complexity in the configuration of things that it would be useful to have tools that help you manage that complexity.  This would be via things like:

  • Hiding details of something away until you really want to look at them, and the rest of the time dealing with a simpler alternative e.g. just a text string that points to the details;
  • Making relationships between things explicit, such as
    • If A and B want their own e.g. app service that are basically the same as each other but have a different name, make this similarity explicit by saying something like “AppService(name1) … AppService(name2)” rather than having two separate nearly identical blocks of configuration details that happen to define nearly identical app services;
    • If A is e.g. an important text string (such as the name of a database) that’s the concatenation of other important text strings B, C and D (environment names etc.) plus some other stuff such as punctuation, make it clear that A is built up of B, C and D rather than it happens to have the same value as B, C and D plus some other stuff.
  • Making it easier to change things in the future when requirements change.  The previous two things will help with that, but it’s also things like allowing the user to split stuff into folders and files in a helpful way, allowing comments that explain things that good structure and names can’t explain on their own, etc.

The problem with “my code”

“My code” almost certainly doesn’t mean “exactly what is in version control” (such as GitHub).  It’s likely that the stuff in version control needs building into executable code, running through automated testing, and then packaging ready for deployment.

The problem with many worlds

Fortunately, “many worlds” here doesn’t refer to the many worlds theory of quantum physics.  Rather, you probably have what are effectively many worlds or environments in parallel in Azure because you have different jobs to do relating to hosting and running code:

  1. Integrate the contributions of all developers to check that things hang together OK (via building and tests)
  2. Prepare a release candidate and test that (if you use trunk-based development, this is the same step as the previous one)
  3. Host production code in a way that is as stable and fast as money allows and customers need.

In practice this means that you want a given lump of code to be deployed to app service X and talk to database A during development, but also be deployed to app service Y and talk to database B in production.  While there are unavoidable differences between development and production, you want to minimise these, make them explicit and gather them together in one place as much as possible.  This will make your life easier in the long run.
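As a sketch of gathering the differences together in one place (variable names and values are invented, not from this article), you could have one set of Terraform config files with per-environment values supplied via .tfvars files:

```hcl
# One set of config files; only the values differ between environments.
variable "environment" {
  type = string # e.g. "dev" or "prod"
}

variable "app_service_name" {
  type = string # e.g. "app-x" in dev, "app-y" in prod
}

variable "database_name" {
  type = string # e.g. "db-a" in dev, "db-b" in prod
}

# You would then run e.g.:
#   terraform apply -var-file="dev.tfvars"
#   terraform apply -var-file="prod.tfvars"
```

The differences between environments are then explicit, minimal, and all in one small file per environment.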

Big picture

[Diagram: the major components of the system involving Terraform and Azure DevOps Pipelines. The cloud side includes Azure hosting, a Terraform server, the DOP config, a DOP server and GitHub; the user's laptop has the Terraform client and the Terraform config.]

Terraform is a tool that configures Azure (or other hosting platforms) based on configuration that you give it.  It doesn’t put any of your code into Azure – it just sets up Azure ready to receive your code.  I’ll go into more detail below.

DOP is a tool that takes code from GitHub (or other version control systems), builds it, runs automated tests on it and deploys it to Azure, which it assumes is already set up and ready to receive the code.

Terraform overview

The heart of Terraform is the config file or files that you write, to describe how Azure should be configured.  In the config file you describe how you want the world to be and don’t bother with any instructions of how to achieve it.  I.e. you say:

  1. Resource group X
  2. Database D that belongs to resource group X
  3. App Service Y that belongs to resource group X and uses the connection strings from database D
  4. Etc.

Rather than which steps are needed, in what order etc.
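The list above could be sketched in HCL like this (all names are invented, and the blocks are simplified). Note there are no instructions, just desired state; the references between blocks express the relationships, such as the app service using the database's details:

```hcl
variable "db_password" {
  type      = string
  sensitive = true
}

# Resource group X
resource "azurerm_resource_group" "x" {
  name     = "example-resources"
  location = "westeurope"
}

resource "azurerm_mssql_server" "s" {
  name                         = "example-sql-server"
  resource_group_name          = azurerm_resource_group.x.name
  location                     = azurerm_resource_group.x.location
  version                      = "12.0"
  administrator_login          = "adminuser"
  administrator_login_password = var.db_password
}

# Database D that belongs to resource group X
resource "azurerm_mssql_database" "d" {
  name      = "example-db"
  server_id = azurerm_mssql_server.s.id
}

resource "azurerm_service_plan" "p" {
  name                = "example-plan"
  resource_group_name = azurerm_resource_group.x.name
  location            = azurerm_resource_group.x.location
  os_type             = "Windows"
  sku_name            = "B1"
}

# App Service Y that belongs to resource group X and uses database D
resource "azurerm_windows_web_app" "y" {
  name                = "example-app"
  resource_group_name = azurerm_resource_group.x.name
  location            = azurerm_resource_group.x.location
  service_plan_id     = azurerm_service_plan.p.id
  site_config {}

  connection_string {
    name  = "Database"
    type  = "SQLAzure"
    value = "Server=${azurerm_mssql_server.s.fully_qualified_domain_name};Database=${azurerm_mssql_database.d.name}"
  }
}
```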

You then point Terraform at both your config file and the hosting environment (e.g. your development environment in Azure).  It compares the two, works out the differences, and then works out a plan for how to deal with the differences – creating things that are missing, changing things that have drifted, and deleting things that aren’t needed.  It then works through the plan, to make your desired state a reality.

This is a declarative way of working (I declare how I want the world to be) rather than an imperative way (I give you a list of instructions I want you to follow).  It’s similar to interacting with a database via SQL – you specify what information you want from which tables, and the database prepares an execution plan with the details of which low-level operations to do, in which order, using which indexes etc.

The general term that Terraform gives to things that it can create or destroy is resources.  There are many types of resource, such as SQL Server database, app service, Azure function etc.

The config files are written in a language called HCL (HashiCorp Configuration Language).  It allows for the helpful things I mentioned above – abstraction, re-use with parameters for tweaking things, variable substitution, decomposition into folders and files, comments etc.
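A sketch of some of those features (all names are invented): variables, strings visibly built out of other strings, and several nearly identical resources stamped out from a single block rather than copied and pasted:

```hcl
variable "project" {
  type = string
}

variable "environment" {
  type = string
}

variable "team_names" {
  type    = list(string)
  default = ["alpha", "beta"]
}

locals {
  # The database name is explicitly built from its parts, rather than
  # happening to contain the same characters as them.
  database_name = "${var.project}-${var.environment}-db"
}

# One block, several nearly identical service plans - the similarity
# between them is explicit.
resource "azurerm_service_plan" "team" {
  for_each            = toset(var.team_names)
  name                = "plan-${each.value}"
  resource_group_name = "example-resources"
  location            = "westeurope"
  os_type             = "Linux"
  sku_name            = "B1"
}
```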

Terraform has an important feature that acts like a safety catch. Imagine that you have been using Azure for a while and so have already created databases etc. by hand. You then start using Terraform to automate things, and start with an empty set of Terraform config files. If you pointed Terraform at your empty files and your pre-existing databases etc., should it say “The files say there shouldn’t be anything in the cloud, so I’ll delete everything”? Fortunately, it doesn’t do this.

If there are pre-existing databases etc., they need to be explicitly imported into Terraform’s view of the cloud resources. If you create a resource via Terraform, it’s automatically added to this view of the cloud resources. That means:

  1. Pre-existing resources are ignored until you explicitly tell Terraform to start paying attention to them;
  2. If you create a resource via Terraform, or import a pre-existing resource into Terraform, when you later delete the relevant bits of the config files, that resource will be deleted.
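As a sketch of an explicit import (the subscription ID is a placeholder): recent versions of Terraform (1.5 onwards) let you declare the import in the config itself, while older versions use the `terraform import` command instead.

```hcl
# Tell Terraform to start paying attention to a resource group that was
# created by hand, by matching it up with a resource block.
import {
  to = azurerm_resource_group.existing
  id = "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/existing-rg"
}

resource "azurerm_resource_group" "existing" {
  name     = "existing-rg"
  location = "westeurope"
}
```

Once imported, the resource is treated like any other: deleting its block from the config will delete the resource itself.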

DOP overview

DOP is part of the Azure DevOps service, and provides tools that let you define and run pipelines, either through the Azure GUI or as YAML files kept alongside your code in version control.  These pipelines connect to e.g. GitHub, compile code, run automated tests, and deploy code to Azure.  You generally don’t have to worry about the operating systems, file systems etc. behind the scenes that make this happen, just the series of operations that you want.
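A sketch of what such a pipeline could look like as an azure-pipelines.yml file (the service connection name, app name and package path are all invented): build, test, then deploy to an app service that is assumed to already exist.

```yaml
trigger:
  - main

pool:
  vmImage: ubuntu-latest

steps:
  - task: DotNetCoreCLI@2
    displayName: Build
    inputs:
      command: build

  - task: DotNetCoreCLI@2
    displayName: Run unit tests
    inputs:
      command: test

  - task: AzureWebApp@1
    displayName: Deploy
    inputs:
      azureSubscription: my-service-connection # invented name
      appName: example-app                     # must already exist in Azure
      package: $(System.DefaultWorkingDirectory)/**/*.zip
```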

How the tools co-operate

The tools are independent of each other, and could be run at different times and/or by different kinds of people.  You would run Terraform when you want to make sure that the environment is in a particular state.  This would be necessary when:

  1. You want that state to change, e.g. you’re introducing a new App Service
  2. You want to make sure that the environment hasn’t drifted away from your desired state, i.e. the desired state hasn’t changed, and you’re worried that the environment has changed.

There is currently no support from Azure to help you connect Terraform and DOP.  You need to manually ensure that the (resource type, resource name) pairs in Terraform config files match the deployment target types and names in the relevant bits of pipeline definitions in DOP.

It is only because the two tools happen to use the same (resource type, resource name) pairs that Terraform can create an ‘empty’ resource and DOP can then fill it with your code.  So, if you edit the types or names of resources in Terraform config files, you will need to remember to edit the DOP config to keep it in sync (and vice versa).
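A sketch of that manual link (service connection and app names invented): the appName in the DOP deploy step must exactly match the name of the corresponding resource in the Terraform config, and nothing checks this for you.

```yaml
# The Terraform side would contain a matching block such as:
#   resource "azurerm_windows_web_app" "x" {
#     name = "example-app"
#     ...
#   }
- task: AzureWebApp@1
  inputs:
    azureSubscription: my-service-connection # invented name
    appName: example-app # must match the name argument in the Terraform config
    package: $(Build.ArtifactStagingDirectory)/app.zip
```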

Increasing the number of instances of code

There’s an interesting grey area between Terraform and DOP to do with changing the number of instances of some code.  There are two ways I can think of doing this – one uses Terraform and the other uses both Terraform and DOP.

The simpler approach is where you are just turning a tap up or down.  Imagine you have an Azure function that reads from queue A and writes to queue B.  You currently have one instance of this, and it works through its input messages at a particular rate.  A recent marketing campaign has been successful so there’s a bigger rate of incoming traffic and hence a growing amount of work on queue A.  One instance of the function isn’t enough to keep up, so you want to increase this to three.

The three instances don’t have meaningful separate identities – there is just a set of interchangeable instances.  In this case, if you were doing things manually you would go into Azure and change the Azure function’s configuration, so it has up to three instances running rather than up to one.  No new code needs deploying, as the code and configuration for the current single instance can support the two additional instances too (without needing to be changed).

Therefore, this is a job for just Terraform.  You change the Terraform config file so that it specifies three instances rather than one, tell Terraform to compare the config file with reality, it finds reality is different, so it fixes reality by increasing the number of instances.
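A sketch of that change (names invented; exactly which argument controls the instance count depends on the kind of plan the function runs on, so this assumes a dedicated plan). Changing 1 to 3 and re-running terraform apply is the whole change:

```hcl
resource "azurerm_service_plan" "functions" {
  name                = "example-plan"
  resource_group_name = "example-resources"
  location            = "westeurope"
  os_type             = "Linux"
  sku_name            = "P1v3"
  worker_count        = 3 # was 1
}
```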

The more complicated approach is where you are creating a new thing with its own identity.  For instance, imagine that you have the Azure function as at the beginning of the previous example.  It can currently take all kinds of messages off the input queue, but you’d like to change this so that there are three instances of the function, where one instance takes only messages of type A, and the other two instances take only messages of type B.

In this case there is a meaningful difference between the instances – they do (slightly) different jobs, based on (slightly) different configuration.  (The configuration is largely the same, but there’s a difference in what filter to apply to input messages.)

An Azure function slot can’t apply different configuration to different instances of the function, so there will need to be different slots.  This means code will need deploying to two slots rather than to one (it will be the same code, but with different configuration).  So the pipeline in DOP will need to change to deploy to the new slot as well, alongside Terraform’s config changing to configure the new slot.
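One way to sketch the “separate identities” idea (here modelled as separate function apps stamped out from one block, rather than the slot approach described above, and with the filter setting, storage account and plan all invented or assumed to be defined elsewhere in the config):

```hcl
locals {
  # One entry per identity; the only difference is the filter value.
  filters = {
    "type-a" = "A"
    "type-b" = "B"
  }
}

resource "azurerm_linux_function_app" "worker" {
  for_each                   = local.filters
  name                       = "func-${each.key}"
  resource_group_name        = azurerm_resource_group.x.name
  location                   = azurerm_resource_group.x.location
  service_plan_id            = azurerm_service_plan.functions.id
  storage_account_name       = azurerm_storage_account.sa.name
  storage_account_access_key = azurerm_storage_account.sa.primary_access_key

  site_config {}

  app_settings = {
    # Hypothetical setting that the function code reads to decide which
    # message types to take off the input queue.
    MESSAGE_TYPE_FILTER = each.value
  }
}
```

The DOP pipeline would then need a deploy step per app, which is why this change spans both tools.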


This (like my article on Jenkins and Octopus) is a good example of two different automation tools working together to solve a problem. Terraform makes sure that a hosting environment such as Azure is set up ready to receive code. Azure DevOps Pipelines takes software raw materials such as source code from a version control system like GitHub, turns them into a form ready to deploy, and then does the deployment. Together they can get your code running in Azure.
