An introduction to Octopus Deploy

Introduction

This article aims to answer two questions:

  1. What is Octopus Deploy?
  2. Why should I use it?

It won’t go into the details of how to configure it, nor cover all the alternatives to Octopus Deploy.

[Image: Paul the Octopus deploying a large football. By Christophe95 – Own work, CC BY-SA 3.0, https://commons.wikimedia.org/w/index.php?curid=27535997]

What is Octopus Deploy?

Octopus Deploy (which I will call just Octopus from now on) is a deployment tool for .Net development.  It bridges the gap between a successful build and unit-test run of your code on e.g. a build server, and having that code running on other servers such as testing, staging and production.

You package up the various build outputs (DLLs, config files etc.) into a single file, and send it to the Octopus server.  This is the hub of a hub and spoke model, and all the target machines you want to deploy your code to are the spokes.  You install a local agent, called a tentacle, on each target machine and these are trusted by and trust the server.

There is a surprisingly large amount of configuration on the server, which can get a bit confusing, and on top of that I still find the GUI a bit awkward at times.

You can define the end-to-end pipeline (e.g. testing -> staging -> production).  You label the target machines with their purpose (e.g. database, web server etc.) and also with their stage in the pipeline, so a given machine might be labelled as e.g. the testing database.

Octopus has a concept called a package, which is a deployable thing.  You might have a set of code that needs to run on the database machine, and then two sets of code that need to run on the web server, and you’d like to be able to deploy these two sets of code separately.  You would therefore have three packages.  Each time you do a fresh build of a given set of code and push the results to Octopus, this results in a new version of one of the packages.  You label each package with which kind of deployment target it is for (database, web server etc.)

Tailoring to the deployment environment

When you push a package to a target machine, the tentacle will open up the package and copy the contents to the correct place.  This is what you want, but only part of the battle.

If some code is on e.g. a test web server, it will probably need to connect to the test database and maybe use a test bit of the file system.  The same code running on staging will need to connect to the staging database and use the staging bit of the file system.

In order to not invalidate testing at earlier stages in the pipeline, you don’t want to recompile the code for each environment to which it’s deployed.  (There is too great a risk that you aren’t testing exactly the same code at each stage if you recompile it before each one.)  Also, recompiling often takes too long.

So it makes sense to have environment-specific things separated out into some kind of configuration file, such as web.config (for an ASP.NET web app, for instance).  You can then change the configuration file and leave the code itself alone; you will get the behaviour changes you want (e.g. connecting to a different database) and won’t have to stomach the cost of lots of recompiling.

Octopus offers two main ways of tailoring configuration:

  1. Applying a config file transform
  2. Substituting an individual variable’s value

There is also a more general-purpose tool, which you could use for config changes, that I will come to shortly.

Config file transforms

If you use config file transforms then you create a web.config with default values in, and then a transform for each environment where you want those default values to be changed.  The name of the transform has to follow a standard pattern that includes the environment’s name.  As part of deploying to a target machine, Octopus will see if there is a transform for that machine’s environment, and if so will apply it.

The transform is a bit like an XSLT file – you define a series of actions, and for each one you pick out which bit of the base file the action applies to and what to do if a match is found.
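As an illustration, a transform for a hypothetical Testing environment might look like the following.  This uses the standard XDT (XML Document Transform) syntax; the connection string name and server are made up:

```xml
<?xml version="1.0"?>
<!-- Web.Testing.config: a hypothetical transform applied when deploying
     to the Testing environment.  XDT attributes say what to match and
     what to do with the match. -->
<configuration xmlns:xdt="http://schemas.microsoft.com/XML-Document-Transform">
  <connectionStrings>
    <!-- Find the entry in Web.config with the same name attribute,
         then overwrite its attributes with the values given here. -->
    <add name="MainDb"
         connectionString="Server=test-db;Database=MyApp;Integrated Security=true"
         xdt:Locator="Match(name)"
         xdt:Transform="SetAttributes" />
  </connectionStrings>
</configuration>
```

The xdt:Locator attribute controls which element in the base file is matched, and xdt:Transform controls what happens to it (other actions include Replace, Insert and Remove).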

Variable substitution

You can define substitution rules using the Octopus GUI, which live in the Octopus server’s database rather than as files in your normal code repository.  These will look for a specific variable in e.g. a web.config file, and then substitute in a value that depends on which target machine Octopus is deploying to.
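For instance, with Octopus’s configuration-variables feature enabled, an appSettings entry whose key matches the name of an Octopus variable has its value replaced at deployment time.  "DatabaseServer" below is a made-up variable name:

```xml
<!-- Web.config fragment.  If an Octopus variable called "DatabaseServer"
     exists, its value (as scoped to the environment being deployed to)
     replaces "localhost" during deployment. -->
<appSettings>
  <add key="DatabaseServer" value="localhost" />
</appSettings>
```

In the Octopus GUI you would define DatabaseServer once, with different values scoped to e.g. the Testing, Staging and Production environments.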

You can combine both approaches in a single package’s deployment, which can get confusing.

Post-deployment scripts

There is a general-purpose mechanism which can be used for all kinds of things, one of which is changing configuration.  You can specify a PowerShell script or similar that is to be run after deployment.  This should be a plan B, as you have to do all the work yourself rather than relying on a framework built into Octopus.
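On Windows targets these scripts are typically PowerShell, and Octopus looks for conventionally named scripts such as Deploy.ps1 and PostDeploy.ps1 inside the package.  A minimal sketch of a post-deployment step (the directory path is made up):

```powershell
# PostDeploy.ps1 - a hypothetical post-deployment step; runs on the
# target machine after the package contents have been copied into place.
$logDir = "D:\Logs\MyApp"            # made-up path
if (-not (Test-Path $logDir)) {
    New-Item -ItemType Directory -Path $logDir | Out-Null
}
# Octopus exposes its variables to scripts via the built-in
# $OctopusParameters dictionary of variable names to values.
Write-Host "Deployed to environment: $($OctopusParameters['Octopus.Environment.Name'])"
```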

When you want to do stuff after deploying

So far I have described how you can get code deployed to another machine, and have its configuration tailored to its environment.  This is great, but might not be the end of the story.

The code might have been installed on a test machine, and you now want to run some automated tests against it.  Or it might have been installed on a production machine, and you want to run a script that helps to warm the caches that the code uses so that it’s running at top speed when real users get to it.

In these cases you could use Octopus’s command line tools in conjunction with an orchestration tool like Jenkins.  You still need to configure the machines, packages and so on, which I do using the Octopus GUI.

However, when I want to actually make something happen, I prod Jenkins.  This runs a normal Jenkins Groovy script that, for instance:

  1. Pulls the code from Git
  2. Builds it
  3. Runs unit tests
  4. Packages it (see below)
  5. Sends it to the Octopus server
  6. Tells Octopus to deploy it on a test machine
  7. Runs automated system tests against the code on the test machine

In this way, Jenkins is in charge of the end-to-end flow, but it delegates the details of deployment to Octopus.  As usual, if any step fails then Jenkins reports the failure and stops, rather than attempting later stages that I know will fail.
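The steps above can be sketched as a Jenkins pipeline.  All names, URLs and credential references below are placeholders, and octo is the Octopus command line tool:

```groovy
// Jenkinsfile sketch: Jenkins orchestrates the flow, Octopus does the
// deployment.  Project names, paths and environment variables are made up.
pipeline {
    agent any
    stages {
        stage('Build and unit test') {
            steps {
                bat 'nuget restore MyApp.sln'
                bat 'msbuild MyApp.sln /p:Configuration=Release'
                bat 'vstest.console.exe MyApp.Tests\\bin\\Release\\MyApp.Tests.dll'
            }
        }
        stage('Package and push to Octopus') {
            steps {
                bat 'octo pack --id=MyApp --version=1.0.%BUILD_NUMBER% --basePath=MyApp\\bin\\Release'
                bat 'octo push --package=MyApp.1.0.%BUILD_NUMBER%.nupkg --server=%OCTOPUS_URL% --apiKey=%OCTOPUS_API_KEY%'
            }
        }
        stage('Deploy to test') {
            steps {
                // --deployto creates a release and immediately deploys it;
                // --waitfordeployment makes the step fail if the deployment fails
                bat 'octo create-release --project=MyApp --deployto=Testing --waitfordeployment --server=%OCTOPUS_URL% --apiKey=%OCTOPUS_API_KEY%'
            }
        }
        stage('System tests') {
            steps {
                bat 'vstest.console.exe SystemTests\\bin\\Release\\SystemTests.dll'
            }
        }
    }
}
```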

More on the package and packaging

Octopus supports several package formats, one of which is nupkg.  It’s the easiest format for normal .Net things, and is easy to create by adding the Octopus extensions to your .csproj files.  A .nupkg file is just a .zip file that also contains a table of contents: an XML file in a specific format, whose file name ends in .nuspec.

You can create a .nupkg for an arbitrary set of files by creating the table of contents file and then running the nuget command line tool.
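A minimal .nuspec for such a hand-rolled package might look like this (the id, version and paths are made up); running nuget pack MyApp.nuspec then produces MyApp.1.0.0.nupkg:

```xml
<?xml version="1.0"?>
<!-- MyApp.nuspec: the table-of-contents file.  Names are made up. -->
<package xmlns="http://schemas.microsoft.com/packaging/2010/07/nuspec.xsd">
  <metadata>
    <id>MyApp</id>
    <version>1.0.0</version>
    <authors>MyTeam</authors>
    <description>Deployable files for MyApp</description>
  </metadata>
  <files>
    <!-- Include everything under bin\Release in the package -->
    <file src="bin\Release\**\*.*" />
  </files>
</package>
```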

What it doesn’t do, or doesn’t do well

Octopus can be very useful but it doesn’t do everything, and there are some things that it doesn’t do as well as the things I’ve already described.

It doesn’t set up the machines that you deploy to – you need to set them up first.

I don’t think it will scale well to hundreds of machines, as the GUI assumes everything will fit on the screen in a simple linear list, rather than paging or doing anything else like that.

The configuration you set up isn’t under source control (in contrast to e.g. a Jenkins pipeline, which can be defined in a Groovy file held in your normal code repository).

If you are deploying to PaaS, e.g. deploying an API to Azure PaaS, things aren’t as straightforward to configure or even understand as when you’re deploying to a physical machine or a VM.  For a start, there’s no tentacle on the deployment target.  Also, the Octopus concepts and GUI are designed for a VM / physical machine world, and don’t seem to fit a PaaS world as well.

I’m not a Docker expert, but I don’t think that Octopus is a good alternative to specialist container orchestration tools like Kubernetes.

Summary

Octopus can help you get built .Net code onto all the machines where you need it, with the configuration details handled fairly painlessly.  You can include it in a bigger pipeline such as a Jenkins pipeline, by using the Octopus command line tools.

It has some rough edges to its GUI and information architecture, and you need to remember which jobs it doesn’t do (for which you would need other tools).
