A beginner’s guide to practical quantum computing

I recently started dabbling with quantum computing, and this post is kind of the introduction I wish I’d found before I started.  It’s not intended to be anything other than a very small tip of a very big iceberg.  Hopefully, this will give you enough to decide whether quantum computing is worth your time, what to expect if you start, and some links to resources I’ve found useful.  Also, “beginner’s” has a double meaning – it’s for a beginner, by a beginner – I’m probably in ‘knows enough to be dangerous’ territory.

Also, “practical” is there on purpose, because this will be about what’s involved in writing code on your own computer.  I won’t be going into theory much, or its consequences for society etc.  I’ve played with only the Microsoft Quantum Computing stack (details below), so this post will be in that context.  If you search for Quantum Development Kit online you can find alternatives, but I don’t know how they compare to Microsoft’s.

I was inspired to start dabbling by the podcast Impact Quantum, which is made by the same people (Frank La Vigne and Andy Leonard) who do the podcast Data Driven.  This blog is also partly because of the Ministry of Testing “I wish I knew” blogging challenge (more below).

Getting started – the short version

Even though quantum computers aren’t generally available, you can write quantum computing code today on normal hardware (such as your laptop) with the aid of emulators.  These help your laptop to present a quantum-computing-like interface, but the lack of proper quantum computing hardware means that the emulator (and hence your code) will run more slowly than the real thing. 

This will probably limit the scope of what you can realistically tackle, as big things will simply take too long.  The good news is that when quantum computers do eventually become generally available, the code you’ve been running against emulators should run unchanged (and much faster) on the real thing.

With all that out of the way, here’s the short version of how to get started.  (There’s a longer and less simple-sounding version, and also links for where to download things, below.)

  1. Download Visual Studio Code (not Visual Studio) if you don’t already have it.  It can run on Windows, OS X and Linux.
  2. Download the Microsoft Quantum Development Kit, which is an add-on to Visual Studio Code.  This includes a quantum computing emulator, and support for the language Q#.
  3. Write programs in Q# and run them.

This assumes an awful lot of important details, some of which I’ll go into below.
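To make step 3 slightly more concrete, here’s roughly what a first Q# program looks like – a one-qubit random bit generator, which the rest of this post should help to unpick.  Treat it as a sketch rather than gospel: the namespace and operation names are just ones I made up, and the exact syntax (for example use vs the older using) has changed between QDK versions.

```qsharp
namespace Beginner.Demo {
    open Microsoft.Quantum.Intrinsic;

    @EntryPoint()
    operation SampleRandomBit() : Result {
        use q = Qubit();    // allocate a qubit; it starts in state |0>
        H(q);               // put it into an equal superposition of 0 and 1
        let result = M(q);  // measure: the superposition collapses to Zero or One
        Reset(q);           // qubits must be handed back in state |0>
        return result;
    }
}
```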

Quantum code uses matrices to rotate points representing complex probabilities around the surface of a Bloch sphere, like rotating the faces of a Rubik’s cube – easy! Image credit: Acdx / CC BY-SA

Quantum computing programs

Before I go into some of the lower-level details, I ought to describe quantum computer code in general because it’s not like any normal code that you’ve already written.  (There’s a section below where I draw comparisons with other bits of computing I’ve already used, but they’re more helpful when you’ve already got the idea than in trying to introduce it.)

Pretty much all the quantum code I’ve seen relies on the quantum theory phenomenon of superposition.  (Some code uses entanglement or quantum tunnelling, but these seem rare enough to ignore, at least for now.)  Without going into the brain-melting details, what this means for your code is that instead of variables having exactly one value at a time (which could be null or a random unassigned value, or which could change often), special variables called qubits can have all possible values at the same time.  This is quantum computing’s main secret sauce – the ability to evaluate many alternatives very quickly, i.e. in parallel.

Yes, this is very weird and takes some getting used to.  You can have normal variables that have only one value at a time (as you’d expect), but the power comes from the qubits.  These have all possible values at once until a point in the code where you decide to measure their value.  At this point, the multiple possibilities and related weirdness evaporate, and you have a single value as per normal computing.  From here on there’s no more quantum secret sauce, so you tend to do this as late as possible.

This doesn’t sound very useful, because it seems like variables start off with all possible values, and then magically pick one possible value at random.  That isn’t quite what happens, though – the reality involves more weirdness and detail.

Before the point of measurement, a qubit is an array of N values (for a single qubit N is 2: one element for 0 and one for 1).  The values in the array aren’t the candidate values that measurement picks from.  Instead, the value in element X of the array is the probability that measurement will pick that element’s value.  I.e. element 0 is the probability that you will end up with 0, and element 1 is the probability you will end up with 1.  (Strictly speaking, each element holds an amplitude – a complex number whose squared size is the probability – but I’ll mostly just say “probability” below.)

To initialise a qubit, you set the correct probabilities for the different elements in the array.  The operations that make up your code then modify the probabilities, including the relationship between the probabilities inside qubit A and the probabilities inside qubit B.  Then, at measurement time, the computer effectively rolls dice to decide which value to pick per qubit, based on the probabilities as they stand at that point.  I tend to think of this as: quantum code deals in metadata rather than data.  The numbers that slosh around in the code are probabilities associated with values, rather than the values themselves.
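To make that more concrete, here’s the standard textbook way of writing it down (nothing Microsoft-specific): a single qubit’s “array” is a two-element vector of amplitudes, and the probability of each outcome is the squared size of the corresponding amplitude.

$$
|\psi\rangle = \begin{pmatrix} \alpha \\ \beta \end{pmatrix} = \alpha|0\rangle + \beta|1\rangle,
\qquad P(0) = |\alpha|^2,\quad P(1) = |\beta|^2,\quad |\alpha|^2 + |\beta|^2 = 1.
$$

For example, the Hadamard operation H applied to a qubit that starts as $|0\rangle$ gives

$$
H|0\rangle = \frac{1}{\sqrt{2}}\begin{pmatrix} 1 & 1 \\ 1 & -1 \end{pmatrix}\begin{pmatrix} 1 \\ 0 \end{pmatrix}
= \frac{1}{\sqrt{2}}\begin{pmatrix} 1 \\ 1 \end{pmatrix},
$$

so measuring it gives 0 or 1 each with probability $(1/\sqrt{2})^2 = 1/2$ – which is exactly what the H(q) line in the earlier code sketch was doing.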

Note this adds yet another bit of weirdness – quantum code is probabilistic rather than deterministic.  Correctly constructed code should produce the correct result most of the time.  I.e. if you ran the same correctly constructed code many times with the same inputs you might get different outputs per run, but most of the time the outputs will be correct.

Getting started – the much longer version

You can probably see now why the shorter version of getting started was a bit on the unhelpfully optimistic side.  The good news is that it doesn’t appear that you need to understand quantum theory beyond a fairly superficial level to write quantum computing code.  However, there are probably still quite a few barriers to overcome.

The first potential barrier is the fact that qubits and operations on them are expressed in terms of matrices.  You might already be familiar with matrices, you might have dim memories from long ago, or they might be completely new.  This means things like matrix multiplication, tensor products, transposing and so on.
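For instance (this is ordinary linear algebra rather than anything quantum-specific): the X or “NOT” operation is a 2×2 matrix that swaps a qubit’s two amplitudes, and a tensor product is how the 4-element state of two qubits is built out of two 2-element states.

$$
X = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix},\qquad
X\begin{pmatrix} \alpha \\ \beta \end{pmatrix} = \begin{pmatrix} \beta \\ \alpha \end{pmatrix},\qquad
\begin{pmatrix} \alpha \\ \beta \end{pmatrix} \otimes \begin{pmatrix} \gamma \\ \delta \end{pmatrix}
= \begin{pmatrix} \alpha\gamma \\ \alpha\delta \\ \beta\gamma \\ \beta\delta \end{pmatrix}.
$$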

The second potential barrier is the probabilistic nature of the code – probability is another thing you need to know about and be comfortable with, at least a bit.

The third potential barrier is the fact that the numbers in the qubits and matrices (the amplitudes from earlier) are complex numbers, i.e. numbers that might include a multiple of i – the square root of -1.  This also comes with its own set of notation, such as Dirac notation.  This means you’ll see things like |a> (pronounced “ket a”), <a| (“bra a”), <a|a> (“bra-ket a”), |+>, |->, |0> and |1>.
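In case it helps, here’s the same notation written out as vectors (standard textbook definitions, not something of mine):

$$
|0\rangle = \begin{pmatrix} 1 \\ 0 \end{pmatrix},\quad
|1\rangle = \begin{pmatrix} 0 \\ 1 \end{pmatrix},\quad
|+\rangle = \frac{1}{\sqrt{2}}\begin{pmatrix} 1 \\ 1 \end{pmatrix},\quad
|-\rangle = \frac{1}{\sqrt{2}}\begin{pmatrix} 1 \\ -1 \end{pmatrix},
$$

and $\langle a|$ is the conjugate-transposed row vector of $|a\rangle$, so $\langle a|b\rangle$ is an inner product – for example $\langle 0|0\rangle = 1$ and $\langle 0|1\rangle = 0$.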

Complex numbers mean that each number will have two parts.  In general, a number will be of the form x + iy, where x is called the real part and y the imaginary part.  (I’ve also written a post on complex numbers, including multiplying them.)  You might think that you will therefore be dealing with diagrams that have numbers on a 2D number plane rather than a 1D number line.  Unfortunately, this isn’t always true – you will also see a single qubit’s state displayed as a point on the surface of a sphere (called a Bloch sphere).

Points on the surface of a sphere have only two degrees of freedom (e.g. longitude and latitude), even though they exist in 3D space, so you might be asking why a 3D representation of a complex probability helps.  I.e. imagine a sphere with x, y and z axes, where the y axis goes through the North and South poles.  If you have a point on the surface and you increase longitude while keeping latitude constant, the point draws a circle through space.  The circle will be on a plane of constant y, and x and z will trade size as the point goes around the circumference of the circle.

The short and possibly not very helpful answer is that different useful operations rotate a point (a particular probability) around the x, y and / or z axes, like when you twist the different faces of a Rubik’s cube.  Just as combining a pattern of twists of a Rubik’s cube about the different axes can produce useful outputs (e.g. unscrambling it), quantum operations combine patterns of rotations about all three axes to do useful work.

A slightly longer answer is that the probability of ending up with 0 or 1 is determined by how far up or down the sphere the point sits – its latitude.  The point’s longitude, known as its phase, is an extra property that you can effectively put in the point’s back pocket, and then use later to affect the probabilities.  (Rotations about the different axes can move phase into latitude and back again, which is how a value you stashed away earlier can end up changing what you eventually measure.)
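If you want the formula behind the sphere picture, a single qubit’s state can always be written using just two angles, which is why two degrees of freedom are enough.  (Note that textbooks usually put the poles on the z axis rather than the y axis I used above.)

$$
|\psi\rangle = \cos\tfrac{\theta}{2}\,|0\rangle + e^{i\varphi}\sin\tfrac{\theta}{2}\,|1\rangle,
$$

where $\theta$ (the latitude, measured down from the $|0\rangle$ pole) sets the probabilities $P(0) = \cos^2\tfrac{\theta}{2}$ and $P(1) = \sin^2\tfrac{\theta}{2}$, and $\varphi$ (the longitude) is the phase, which doesn’t show up in the probabilities until later operations mix it back into the latitude.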

Once you’ve understood enough quantum theory concepts (superposition on its own seems to take you a long way), matrices, probability and complex numbers, you then have the relatively simple task of learning Q#.  It’s a Domain-Specific Language (DSL) for quantum computing that’s a bit like C# and a bit like F# (apparently).
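To connect the language back to the rotations above, here’s a sketch of what a few of those operations look like in Q#.  The built-in operations (H, Rz, M and Reset) are real, but the namespace and operation names are my own, and the syntax is from the version of the QDK I’ve been using, so treat it as illustrative rather than definitive.

```qsharp
namespace Beginner.Demo {
    open Microsoft.Quantum.Intrinsic;
    open Microsoft.Quantum.Math;

    operation RotateAndMeasure() : Result {
        use q = Qubit();      // starts at the |0> pole of the Bloch sphere
        H(q);                 // rotate to the equator: equal chance of 0 and 1
        Rz(PI() / 2.0, q);    // rotate about the z axis: changes the phase only,
                              // the probabilities are untouched for now
        H(q);                 // rotate again: the stashed phase now feeds back
                              // into the probabilities
        let r = M(q);         // collapse to a single classical value
        Reset(q);
        return r;
    }
}
```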

You are now in a position to understand someone else’s Q# code.  You might be able to write new Q# code, which is likely to involve working out how to re-frame the problem at hand in terms of problems that already have quantum computing solutions, such as graph colouring.  You might even be able to solve new kinds of problems using Q# code if you’re much cleverer than me.

Q# code is one way of creating qubits and using the standard quantum operators, but the other quantum development kits are alternative ways to interact with qubits and quantum operators, just as Angular, React and Vue are all alternative ways of interacting with the DOM in web programming.  Knowing about qubits and quantum operators is the important part, and Q# is just the way of expressing those in code.

Comparisons with other bits of computing

I found myself reminded of my youth, doing machine code on a ZX Spectrum.  Since then, I’ve got used to high level languages with their seemingly infinite number of variables.  But, back in the good old days, you had to worry about a very restricted number of machine registers as you wrote your code.  You had only 7 general-purpose registers, and each register was 8 bits, so you had 56 bits to play with.  I realise that this sounds like ridiculously scant resources to modern ears, but in quantum code you must worry about individual [qu]bits.  You can have many qubits, but the more you have the more the emulator will struggle.

I was also reminded of Prolog programming at college.  This also had a division into what were in practice two different activities, often done in separate phases.  First, you would set up constraints or relationships between variables, without the variables having any concrete values (similar to the relationships between the probabilities in a set of qubits).  For instance:

  1. C = A + B
  2. A = 2 * B

You don’t know what the values of A, B or C are yet, but you do know these relationships.  Then, somehow, the code resolves these sets of possibilities down into concrete values.

The flow of (meta) data through a series of processing steps, often drawn in a diagram, reminded me of ETL.  Instead of an ETL pipeline, the operations in quantum code form what’s called a quantum circuit.

Testing

It’s possible to write unit tests for quantum code, which is weird but good.  Don’t forget the probabilistic nature of quantum code, as this will make it hard to have definitive and repeatable tests.

For instance, one of the simple getting-started examples I’ve read is code that generates a random integer between 1 and N.  A couple of tests you could write would check that the result is always in the range 1 to N.  However, how can you test for randomness?

If the code were correct, randomness would mean that the chance of getting a particular value in the range 1-N would be the same as the chance of getting any other value in the range.  I.e. the probability of getting a particular value would be 1/N.  This would mean that if you got many random numbers and counted how many times you got each value, you would expect each value to occur the same number of times as all the others.

However, this is how the code will work in theory, which isn’t the same as how it will work in practice, due to how probability works.  If N were 5, and the results were 123451234512345… then the frequencies of the different values would be correct, but the output would be predictable rather than random.  You can say the same for any repeating pattern – 11223344551122334455… would be just as bad.

If you imagine your code producing an infinitely long series of numbers, you are just taking a finite-length slice out of this series.  You start the series and keep track of how many of each value you’ve seen.  It could be that the numbers so far are a bit short on the number 4, but if you could somehow see into the future you’d see that a denser patch of 4s is coming up.  Without super-human sight, you could easily cut the finite-length series off before this clump of 4s, and so the output would have too few 4s.  This is perfectly fine from a probability point of view, but would break a simplistic test.

In fact, being a few 4s short is just the thin end of the wedge.  What does “a few” mean?  It could be that there are 10% fewer 4s than you’d expect.  It could be that there are 60% fewer 4s – this is less likely, but still possible.  Or it could be that the output is just 4s.  This is even less likely, but still possible.  The further the series gets from equal numbers of the values, the lower the probability is that you’d see it from correct code, but it never drops to 0.  You’re then in the realm of significance levels and so on.
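For what it’s worth, here’s roughly what a “roughly fair” test could look like in Q#.  The @Test attribute and the Fact assertion come with the QDK’s testing support, but the names, the number of shots and the pass/fail margin are all my own finger-in-the-air choices – a proper test would pick the margin from a significance level rather than out of thin air.

```qsharp
namespace Beginner.Demo.Tests {
    open Microsoft.Quantum.Intrinsic;
    open Microsoft.Quantum.Diagnostics;

    @Test("QuantumSimulator")
    operation RandomBitLooksRoughlyFair() : Unit {
        let shots = 1000;
        mutable ones = 0;
        for _ in 1 .. shots {
            use q = Qubit();
            H(q);                       // 50/50 chance of Zero or One
            if M(q) == One {
                set ones += 1;
            }
            Reset(q);
        }
        // Expect about 500 ones, but allow a generous margin because the
        // output is probabilistic and will wobble from run to run.
        Fact(ones > 350 and ones < 650,
             $"Suspiciously skewed: {ones} ones out of {shots} shots");
    }
}
```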

It might be that your code doesn’t output random numbers but instead finds the solution to a problem e.g. solves Sudoku.  The probabilistic nature of the code means that the wrong answer will be produced by correct code some of the time.  In something like Sudoku, you can check that it’s wrong quite simply, because there will be a repeated number in a row, column or square.  However, this doesn’t get away from the fact that you will need to accept a certain percentage of wrong results.  I haven’t yet worked out how to judge this percentage.

I wish I knew

When I heard Frank and Andy on the Impact Quantum podcast say: “It’s here now – just have a go”, my first thought was “I wish I knew more about that”, and then I thought: OK, why not have a dabble?  I’d got a back-of-an-envelope understanding of quantum theory and even less of quantum computing, but that was enough for Frank and Andy’s exhortation to get me going.  I have an enormous list of things to learn and a general sense of falling behind the industry and needing to catch up.  Unusually for me, I didn’t use this list as an excuse to put off looking at quantum computing until “one day”.  I realised that I was motivated enough to get stuck into this, which was very useful.

After reading lots of things several times, I’m now at the point where I can describe things in general terms as in this article.  Complex probability is probably (no pun intended) the hardest bit for me – I still need to work on that a lot – followed by getting comfortable with Dirac notation, and I’ll feel more comfortable generally once I’ve properly got my head around testing quantum code.

I’m gradually exploring the set of already solved problems in the documentation, which is slow progress given the level of skill I’ve got with the general quantum computing machinery.  As with many things, it’s a case of repetition and breaking out the code editor to get my hands dirty with the exercises.

I say all this because it’s sometimes easy (at least for me) to read people’s blogs that explain stuff, particularly stuff you don’t know, and think that the author has got it all sorted.  They’re a much better programmer / tester / data scientist / user experience expert / whatever than you.  I really hope that you don’t get that feeling from my blog.  Part of why I blog is that it’s scratching an itch, but another part is that it helps me to understand things better.  There is so much that I don’t know, that maybe you do, and the To Learn list seems to grow rather than shrink over time.  Ho hum, at least it keeps life interesting, even if it’s daunting sometimes.

I hope that people who actually know this stuff take pity on me, and gently correct the mistakes I’ve no doubt made in this.

Resources I’ve found useful

Where to download Visual Studio Code

QDK getting started

Quantum computing primer on Code Project

QDK katas

Stack Overflow for quantum computing

Twisted Oak explanation of Grover’s algorithm

PDF on quantum algorithms

Testing – testing introduction and full testing namespace documentation

Alice in Quantumland (using Alice in Wonderland as a way to introduce quantum theory)
