I guess if I had to define my role at work it would be: programmer. However, I have learned a lot from people who wouldn’t call themselves programmers, such as testers (Michael Bolton, Jerry Weinberg, the Ministry of Testing community etc.), user experience experts (Paul Boag, Jared Spool, Don Norman etc.), and data people of various kinds (Data Driven and Linear Digressions podcasts, SQLserverCentral etc.).
I haven’t had the pleasure of meeting these people in person, unfortunately, but they have all helped me do my job better, appreciate their job better, and generally enriched my work life. (I also learned about the importance of good writing at work from my Dad.)
I don’t consider myself a tester, but I appreciate testers and testing. All this waffle is to say that what follows are the ramblings of a friendly amateur, so please read it in that context. I’ve been struck by some analogies that have helped me to understand what testing is about – what it’s trying to achieve and how it tries to do that. I don’t claim any of this is original or particularly wise or insightful. I’m sharing it in case it’s helpful.
Before I say what I think testing and testers are, I ought to say what I think they’re not. Testing certainly shouldn’t be reduced to just programmers writing automated tests that are run automatically. These are necessary but not sufficient. It’s also not about breaking things, other than people’s illusions that the software’s perfect.
Testers should not be gatekeepers. There’s a management function to be done (which could be performed by a professional manager, or collectively by a development team) that gets to decide whether software should be released. Testers provide information as input to that management function. If you want your testers to make management decisions as part of their role, please give them the title, salary, clout etc. that go with that.
Investigative journalism
I’ve realised that testing and testers aren’t about bug hunting; they’re about information hunting. That made me see a similarity between software testing and investigative journalism. You need to hunt out information rather than expecting it to come to you. You often need to put together a fuller picture from many smaller parts, and things aren’t always how they seem initially. Then you need to communicate your findings in a way that your audience will understand.
Science
I read somewhere that key moments in science aren’t always marked by someone saying “Eureka!” They’re just as likely to be marked by someone saying “Huh, that looks odd…” I think that testing is a bit like that too. You use some software and it makes you react with “Huh, that’s odd…” Some of the skill of testing is getting to that point (by having an idea about how to probe the software, and being alert to your reaction), and then there’s more skill involved in following up that reaction.
You start trying to gather evidence that will help you form a hypothesis about the software, which people who can see its source code (programmers) can confirm or deny. Maybe the job of the tester stops with the evidence gathered and well-presented, or maybe it extends into coming up with the hypothesis too. I think that this will depend on the context far too much for me to make a big pronouncement about it.

Proof-reading and editing
I’ve heard of testers getting berated by managers when bugs appear in production: “How did you let this happen?” It’s at times like this that I’m reminded of the acknowledgements sections of books I’ve read. The author says something like: “I’d like to thank my editor / proof-reader X, for catching so many mistakes – the book is much better for your hard work. Any that remain are my fault.”
This touches on part of what makes a tester’s job tricky – being the bearer of bad news. How many times do you say: “This still has a problem”? It can lead to awkward conversations if the fix/test or write/proof loop goes around several times for the same thing. Do you let small things slide so that you can concentrate the creator’s attention on bigger things? There are automated tools that can help, but automation on its own isn’t enough.
I think it’s important to not let the author/proof-reader distinction imply a binary separation of concerns between programmers and testers. Programmers should review and test their own work, and get other people (including testers) to help. Programmers should review other people’s work – code and tests written by programmers, test artefacts written by testers etc.
Banging out some code and throwing it over the fence to testers without reviewing or testing it yourself is a false economy. You’ll be most efficient at fixing bugs while the important information is still in your mental cache, not a day or so (or more) later, when you’ve moved on to something else and have to dump the new stuff in your mental cache and reload the old stuff relating to the bug.
Thermometer, not a heater
I remember a discussion on TV about young children being tested in schools in the UK, and many people thought the testing was excessive, a waste of time and counter-productive. Ian Hislop, who was in this discussion, summed it up well by saying: “You don’t make a pig heavier by repeatedly weighing it.”
In a similar vein, I would describe testers and testing as a thermometer and not a heater. There is a property (quality of software, ability of children) that is valuable, and people agree it should be as big as reasonably possible. It’s important to have at least a vague idea of the current state of that property. But knowing the state isn’t the same as changing (i.e. improving) that state.
In both software quality and children’s ability, improving things is a messy and complicated collaboration of skilled and experienced people. While there are some basics that need to be right, there’s a part that depends on healthy relationships and other things that are hard to produce from a set of simple rules.
Summary
I think it’s easy but wrong to reduce testing to automated tests written by programmers (these are valuable, but not the whole picture). There’s a danger that test cases specified to the Nth degree and worked through and tracked in a tick-box fashion are simply automated tests by another name.
The problem is that what I would consider to be better testing is harder to pin down or explain. I hope that these analogies help to explain it a bit. I’d like to thank Wicked Witch of the Test (Veerle) for letting me bounce ideas off her.
I like to use the analogy of a test pilot when we talk about software testing. A test pilot doesn’t get in a new aeroplane and flick the switches, saying “yes, the landing lights come on; yes, the undercarriage retracts…” Rather, they get into the aeroplane and take it up into the air to explore the “flight envelope” – how high can it fly, and how fast? How low can it fly, and how slow? (Without falling out of the sky, of course.) They try out an aeroplane to find its limits and its capabilities – what load can it carry, and what limitations do different loads place on the airframe and performance? How does the aeroplane perform in extreme conditions – of heat, or cold? Even, how easy is it to taxi, fuel or maintain?