The idea that programmers and testers are different kinds of people with different kinds of skills is sometimes helpful, but not always. It can help to match people to jobs or show where people have different strengths. But it can also lead to tribalism – you’re different from me so you’re worse than me. In this article I’ll go into a few areas where I think programmers use skills in their normal day-to-day activities that overlap with the skills that testers often use in theirs. I hope that this helps to break down unhelpful barriers between the groups, and improves mutual appreciation for the skills others have.
This was inspired by a conversation with Ben Dowen, who is a tester and active member of the wider testing community. He was looking for a tame developer to work with on some blogging, I volunteered, and this is a result of the conversation that followed. It’s part of a bigger project he’s running, that’s trying to help bring developers and testers closer together.
A myth about programmers
First, I’d like to tackle a myth that can get in the way. The myth is that all a programmer does is repeatedly hew a working mechanism from a fresh block of thought-stuff. This is programmer as creator, imposing their will on the universe by launching their brainchild into it. This is all very heroic and romantic, but not the full picture.
As a programmer you will eventually have to modify code that someone else wrote (or you wrote long enough ago that you’ve forgotten the details). Even with good structure and good names, it can take some effort to understand what the code’s doing before you change it. It’s important that you do this, so that you make the right change in the right place.
Code reading is a surprisingly complex thing. There are all sorts of questions it can bring up, such as:
- What does this bit of code do?
- Where does this job get done?
- Why are these lines in this order?
- Where does this value get initialised / changed / used?
You are trying to understand the static structure of the code, and its more dynamic behaviour, both often over several levels of abstraction. Documentation and automated tests might be available and might help, but there is usually an unavoidable bit of looking at the source code.
By reading the code you are trying to construct a mental model that will help you predict the future (if I make change X it will produce outcome Y, which is hopefully the outcome I want). You are reverse engineering that from the details presented to you in the source code.
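To make the questions above concrete, here is a toy function (entirely invented for this article – the names and the shipping rule are not from any real system). Even at this small scale, reading it raises the same questions: where does `total` get changed, and why are these lines in this order?

```python
def flat_shipping(subtotal):
    # A business rule you only discover by reading the code:
    # shipping is free for orders of 50 or more.
    return 0.0 if subtotal >= 50 else 4.99

def order_total(items, discount_rate):
    # Where does `total` get initialised / changed / used?
    total = 0.0
    for price, quantity in items:
        total += price * quantity

    # Why are these lines in this order? Applying the discount
    # *after* adding shipping would give a different result, so the
    # ordering encodes a decision the reader has to reverse engineer.
    total *= (1 - discount_rate)
    total += flat_shipping(total)
    return total
```

The mental model you build by reading this lets you predict the future in the sense described above: you can say what changing the discount line would do before you change it.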
Code reading and code review are similar but different. By code reading I mean reading code that’s not been changed recently, so that you understand how it is now. By code review I mean reading a recent change to the code, so that you can see if it’s a good change or not. I.e. you’re trying to assess the quality of the change.
Code review is code reading plus some more stuff. Normally the change is meant to fulfil some purpose, e.g. to implement some requirements. Also, the code usually needs to clear some quality bars that are always in place in the background, such as how well tested, readable, easily changed, fast, efficient, and accessible to many kinds of user it is, and so on.
So, the kinds of problems that code review can throw up are:
- The change duplicates some existing code for no good reason;
- The change has missed some of the requirements;
- The change has stopped the code meeting some of the requirements it was already supposed to meet (i.e. you’ve broken something);
- The changed code fails some existing quality criteria such as readability, efficiency etc.
To do this, the reviewer needs a good understanding of the system as it was before the change was made – what it was supposed to do and how it did that. They also need to know the specific requirements behind the change, and the general requirements that are always in force.
The reviewer also needs to be able to spot things that aren’t there – I was expecting to see a change in file X because that’s the most obvious place to implement a particular requirement, but there’s no change to that file. Or a requirement has been missed out or only partially implemented.
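As a tiny illustration of the first problem in the list above – duplicating existing code for no good reason – here is a hypothetical change under review. The functions are invented for this example, but the shape of the review comment is the point:

```python
# Existing code, already in the system before the change.
def is_adult(age):
    return age >= 18

# --- the change being reviewed ---
def can_vote(age):
    # Review comment: this duplicates the is_adult() check for no
    # good reason. Calling is_adult() instead would keep the age
    # threshold in one place, so a future change to it can't miss a copy.
    return age >= 18
```

Spotting this needs exactly the understanding described above: the reviewer has to know `is_adult` already exists, which no amount of staring at the diff alone will tell them.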
With debugging, you have a particular bit of information, e.g. a log message or change to the system after certain events appear to have occurred, and you’re trying to work out why this happened. It’s like a doctor trying to diagnose an illness from symptoms – they are both forms of abductive reasoning.
The general structure is often a zooming in. Of all possible paths through all possible bits of code, you try to discount as many as possible as quickly as possible, so that you can limit the area where you’re searching more and more. Sometimes you end up on a wild goose chase, zooming in on the wrong thing, and realise you need to backtrack and start again. At other times you have little idea what’s going on, so you just gather more information by following your gut, hoping that you will eventually spot a pattern that is the beginning of a trail to follow.
You hope that the outcome of this zooming in is a solid understanding of the cause of the perceived symptoms. This understanding usually leads to the code being changed so that the symptoms go away, i.e. the desired behaviour is what you get.
The process might start when an automated check fails. Or it might start with someone having a reaction to the system such as:
- “Hmm, that’s a bit odd. I don’t think that should happen”
- “You stupid computer! Why did you do that?”
and then a more formal and structured process of exploration and investigation begins.
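One mechanical form of the zooming in described above is bisection: repeatedly discount half of the search space. As a sketch (all names invented; it assumes exactly one bad record and a stateless `process` function), here is how you might isolate the one record in a batch that makes a processing step blow up, without inspecting every record:

```python
def find_failing_record(records, process):
    """Zoom in on the single record that makes `process` raise."""
    lo, hi = 0, len(records)
    while hi - lo > 1:
        mid = (lo + hi) // 2
        try:
            for record in records[lo:mid]:
                process(record)
        except Exception:
            hi = mid   # the first half fails: zoom in on it
        else:
            lo = mid   # the first half is clean: discount it, look right
    return records[lo]
```

Each iteration halves the territory, which is exactly the "discount as many paths as possible as quickly as possible" move, just made systematic.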
A good tester also uses the skills I’ve described above (plus others). They spot things that are odd or missing. They explore and investigate based on skill and experience, which means they sometimes follow their (well-trained) curiosity and develop and use heuristics to guide their work. They need a knowledge of the system as a whole, so that they can test a small part of it.
They keep in mind what’s important to the customer and the business, so they can spend their energy wisely. Expressing this more formally, they attune themselves to the risks and other business context that are in play, and test accordingly.
They try to synthesise a pattern from separate details. They worry about both structure and behaviour. They use tools and formal process as an aid to human creativity, skill and experience.
It makes sense for bits of a programmer’s job to be automated – for instance, coding standards should be enforced via linting tools rather than taking up human effort in code reviews. However, just because some parts can and should be automated, this doesn’t mean that all of it should be.
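To show how small and mechanical such a check can be, here is a minimal sketch of the idea (a toy linter, not any real tool): a style rule – function names must be snake_case – enforced by a script instead of by a human reviewer’s attention.

```python
import re

# Matches a def whose name is lowercase snake_case, e.g. "def fine_name(".
SNAKE_CASE = re.compile(r"^def [a-z_][a-z0-9_]*\(")

def lint_function_names(source):
    """Return (line_number, line) for each def that breaks the rule."""
    problems = []
    for number, line in enumerate(source.splitlines(), start=1):
        stripped = line.strip()
        if stripped.startswith("def ") and not SNAKE_CASE.match(stripped):
            problems.append((number, stripped))
    return problems
```

Once a rule like this is written down as code, it never gets tired or distracted – freeing the human reviewer for the judgement calls listed earlier that can’t be reduced to a regex.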
The same is true of testers. It makes sense to me that automation be used to reduce drudgery and increase efficiency, but I feel that human skill, creativity, pattern-recognition, curiosity, connection-making etc. should also be brought to bear on the issue of assessing a system’s quality, via a tester interacting with it. It’s important that this interaction isn’t simply a poor version of automation – a person slavishly following a detailed script. There needs to be space for the tester to be a human, just as programmers are given space to be more than typists taking dictation.
I hope that this helps to give programmers an entry point into what decent testers can offer, by showing the overlap in the skills between the two disciplines.