Designing the user experience of Top Trumps
This is a kind of follow-up post to a previous analysis of a pack of Top Trumps. If you don't know what Top Trumps are, I suggest you read that post first. I was contacted via this blog by some people who had designed a pack of Top Trumps. They wanted my help with making their … Continue reading Designing the user experience of Top Trumps
Category: Testing
Tech debt as risk of friction
Tech debt is a term that’s used quite a bit in software development, and I recently realised a new way of thinking about it: the risk of future friction. I’ll explain what I mean below, starting with a brief discussion of what I mean by risk in general. Risk Before I get into risk as … Continue reading Tech debt as risk of friction
Chaos engineering and the relationship between code and teams
This article is about a few things – chaos engineering, an analogy that explains it, then digging a bit deeper into the relationship between software and the team that produced it. It was sparked by a conversation with Stuart Day, for which I’m very grateful. Chaos engineering Chaos engineering is a technique to improve the … Continue reading Chaos engineering and the relationship between code and teams
Generative AI and skills
There has been a lot in tech news and opinion recently about what generative AI will or won’t do, such as take away jobs from programmers and testers. I’ve had a long enough career in software to be able to put generative AI in a bigger context, which I think helps to understand some of … Continue reading Generative AI and skills
Tests to hold code securely in place
If you have automated tests for your code, you are doing better than some programmers. However, how good are those tests? In this article I'll explore how tests can be good (or not). Cable clip analogy Before I get into tests, I want to introduce something that will be useful as an analogy for them. … Continue reading Tests to hold code securely in place
Building computer systems via problems rather than solutions
When it comes to building computer systems, even something as simple as storing the name and address of universities can be surprisingly complicated and messy. While the mess and complication often can’t be avoided, knowing what the end users’ needs are can help you come up with the best way of tackling them. “Just” … Continue reading Building computer systems via problems rather than solutions
Reviewing requirements
Ministry of Testing kindly published an article I wrote for them on reviewing requirements. It gives some tips for doing it, and also looks a bit at the human side of things. I hope it's useful for non-testers as well as testers, whether your requirements are big documents full of UML or something much briefer … Continue reading Reviewing requirements
When a failing test might be OK
Usually, a failing test is a problem. In this article I will cover three cases where this might not always be true: performance tests, testing a data science model such as a classifier, and testing in quantum computing. In all these cases, a definitive answer about passing or failing is given by a set of … Continue reading When a failing test might be OK
Fuzzy matching – context and testing
This is the third article in a short series on fuzzy matching: introduction, example algorithms, and testing and context. In this article I will consider the difference between context-dependent and context-independent fuzziness, and think about how fuzzy matching systems can be tested. Context-dependent and context-independent fuzziness If you are trying to do fuzzy matching of strings, … Continue reading Fuzzy matching – context and testing
Staging input data to improve testability in data pipelines
In a data pipeline (an ETL or ELT pipeline, to feed a data warehouse, data science model etc.) it is often a good idea to copy input data to storage that you control as soon as possible after you receive it. This can be known as copying the data to a staging table (or other … Continue reading Staging input data to improve testability in data pipelines