Flexibility in software design is often seen as a good thing, but it can come at a cost. This isn’t surprising, because software engineering is a kind of engineering, and engineering involves making trade-offs among several good things. For instance, there is no “best” car, just the best car for a given situation, with a given set of constraints that skew the trade-offs one way rather than another. Because flexibility’s cost isn’t always obvious, I’ll go through the price of some unnecessary flexibility in code I worked with recently.

The simple way to get data from a database table in C#
Before I go into the code that I consider overly flexible, I’ll briefly describe the less flexible approach I was expecting. There is a relatively straightforward way to read data from a database table in C#. It uses LINQ, a general-purpose data-manipulation language within C#, together with LINQ to SQL, which acts as a database adapter connecting LINQ to a database.
The basic elements are:
- You create a database connection string to point your code at the database, and possibly give credentials to log into the database.
- You define a class for each database table that you want to pull data from, such that one instance of the class corresponds to one row in the table.
- You define a class (a database context) that acts as a container for all the tables that you’re interested in.
- You create an instance of the context using the database connection string, and then use the context in a LINQ statement to read the relevant rows from a table.
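As a minimal sketch of those elements (the class, table and column names here are invented, chosen to match the query below):

using System.Data.Linq;
using System.Data.Linq.Mapping;

// One instance of this class corresponds to one row in the Accounts table.
[Table(Name = "Accounts")]
public class Account
{
    [Column(IsPrimaryKey = true)]
    public int AccountNum { get; set; }

    [Column]
    public bool IsActive { get; set; }
}

// The database context: a container for the tables we're interested in.
public class BankContext : DataContext
{
    // LINQ to SQL initialises this field when the context is created.
    public Table<Account> Accounts;

    public BankContext(string connectionString) : base(connectionString) { }
}

// Then, inside a method, point the code at the database via a connection string:
var context = new BankContext(
    "Data Source=myServer;Initial Catalog=myDatabase;Integrated Security=True");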
The tedious details of copying data between database query results and instances of the correct class are handled for you by LINQ. (LINQ acts as an ORM.) As a result, the C# code you need to write, understand, change, debug etc. is simple and clear, such as:
var activeAccountNums = from account in context.Accounts
                        where account.IsActive
                        select account.AccountNum;
Behind the scenes, LINQ (including LINQ to SQL) converts the C# code (such as the statement above) into the relevant SQL statement, sends it to the database, and then processes the results, including turning them into C# data structures.
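For the statement above, the generated SQL would be something like this (the exact text, table aliases and parameter handling are up to LINQ to SQL):

SELECT [t0].[AccountNum]
FROM [Accounts] AS [t0]
WHERE [t0].[IsActive] = 1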
Using a stored procedure instead
The code I was working on recently used a different approach to read data from a table, based on a stored procedure. Stored procedures are wonderfully flexible and powerful things that can do complex database operations, implement detailed business logic and so on. In this case, all the stored procedure did was a simple select statement against one table.
There are at least two problems with this approach. First, there is more code to write, as there is little behind-the-scenes help in connecting C# to a stored procedure. Second, and more importantly, there is essentially no link between the C# and the database table.
The only handle the C# has is a text string that stores the name of the stored procedure. It would be a little odd, but I could, if I chose, have a string that held the stored procedure’s name backwards, and then reverse it just before I used it. In other words, the string is nothing more than arbitrary text that just happens to match the name of a stored procedure in the database. The stored procedure’s name doesn’t have to match the database table’s name. In fact, in the general case it’s unlikely to, because the stored procedure might be selecting from more than one table, doing some logic on the results and so on, and so the most meaningful name for it would describe some business operation rather than a database table.
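To make the comparison concrete, here is roughly what the stored-procedure route looks like using ADO.NET directly. The stored procedure name and column name are hypothetical, and the stored procedure body is assumed to be nothing more than a select against the Accounts table. Note that both names appear only as text strings:

using System.Collections.Generic;
using System.Data;
using System.Data.SqlClient;

public static List<int> GetActiveAccountNums(string connectionString)
{
    var accountNums = new List<int>();

    using (var connection = new SqlConnection(connectionString))
    // "GetActiveAccounts" is arbitrary text that happens to match a name in the database.
    using (var command = new SqlCommand("GetActiveAccounts", connection))
    {
        command.CommandType = CommandType.StoredProcedure;
        connection.Open();

        using (var reader = command.ExecuteReader())
        {
            while (reader.Read())
            {
                // The column name is another arbitrary string the compiler can't check.
                accountNums.Add(reader.GetInt32(reader.GetOrdinal("AccountNum")));
            }
        }
    }

    return accountNums;
}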
So what?
There are at least three problems I can think of as a result of using a pass-through stored procedure rather than LINQ in this case.
- First, it will take more time to write, because you have more typing to do. As I mentioned above, there is less behind-the-scenes help in connecting C# to a stored procedure.
- Second, there’s a bigger risk of bugs, because you are writing more new code rather than relying on a heavily used (and hence debugged) library, and the code you write will include arbitrary bits of text rather than e.g. class names. Please see my article on Don’t Repeat Yourself (DRY) for more about the perils of arbitrary text and numbers.
- Third, both you and your tools will have a fuzzier view of what’s going on, because there is next to no link between the C# code and the database table.
The last one is worth going into a bit more deeply. If you want to change a column in a database table because of its use in area A of the code, it’s sensible to check that it won’t mess up other areas B, C etc. that also use it. If there’s a clearer link between C# code and the database table (including its columns) then this check will be quicker and more likely to be complete.
C# takes this further. If you want to, you can use a strong link between C# code and database to get tools to do major work for you. You can define your table-backing classes, and then get a tool to change the database to reflect those classes. Or you can point a tool at the database and get it to auto-generate the corresponding classes for you.
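For example, LINQ to SQL ships with SqlMetal, a command-line tool that reads a database’s schema and generates the corresponding classes (the server and database names here are placeholders):

sqlmetal /server:myServer /database:myDatabase /code:MyDatabase.cs /namespace:MyApp.Data /pluralize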
Both human and computer understanding of the link can be helpful to you as a programmer. Adding a Turing-complete layer (stored procedures) into your ORM weakens the ability of both humans and computers to understand things in a helpful way.
A more general view
I’m not saying that flexibility is always wrong, just that it has a price. There can be consequences of using a flexible approach, some of which will be unintended: requirements or benefits that were overlooked, or that arose as by-products.
For instance, in the case of reading data from a database table, the requirements might be something like:
- It must be possible to read from any table in the database.
- It must be possible to add or change database tables, and read from the database as easily as before the change.
- The data must be read quickly (for some definition of quickly).
A stored-procedure-based approach could satisfy all these requirements. However, it wouldn’t do as well as LINQ does on the following requirements:
- There must be the minimum of arbitrary text strings and numbers involved, so that a compiler can spot mistakes more easily and effectively.
- There must be the minimum of typing that could be automated away.
- The mapping between C# and database tables must be:
  - Simple for a human to understand
  - Machine readable, so that tools can help the programmer.
I realise that these are fuzzy – they involve hand-wavy things like “minimum” and “simple to understand”.
Flexibility in database design – to normalise or not
A similar but distinct example comes from database design. Imagine that you want to store a list of people, and for each person you want to store some information about them, such as their name and their address. There are a few common ways to store this in the database.
The first is to have a single table, such as Person, that holds everything. So it will have columns such as:
- Person id (primary key)
- First name
- Last name
- Date of birth
- Address 1
- Address 2
- Address 3
- Address 4
- Post code
Other than the post code, there’s no differentiation between the different parts of the address, and the maximum number of address lines needed is a guess (four in this case).
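Expressed as a table-backing class in the style of the earlier sketch (all names invented), the first design might look like this:

[Table(Name = "Person")]
public class Person
{
    [Column(IsPrimaryKey = true)] public int PersonId { get; set; }
    [Column] public string FirstName { get; set; }
    [Column] public string LastName { get; set; }
    [Column] public DateTime DateOfBirth { get; set; }
    // The address lines are hard-coded columns; 4 is a guess at the maximum needed.
    [Column] public string Address1 { get; set; }
    [Column] public string Address2 { get; set; }
    [Column] public string Address3 { get; set; }
    [Column] public string Address4 { get; set; }
    [Column] public string PostCode { get; set; }
}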
The second way is to break out the address into a separate table. That means you have a person table that looks like this:
- Person id (primary key)
- First name
- Last name
- Date of birth
- Address id (foreign key to address table)
And then an address table that looks a bit like this:
- Address id (primary key)
- Address 1
- Address 2
- Address 3
- Address 4
- Post code
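In the same sketched style, the second design splits this into two classes:

[Table(Name = "Person")]
public class Person
{
    [Column(IsPrimaryKey = true)] public int PersonId { get; set; }
    [Column] public string FirstName { get; set; }
    [Column] public string LastName { get; set; }
    [Column] public DateTime DateOfBirth { get; set; }
    [Column] public int AddressId { get; set; } // foreign key to Address
}

[Table(Name = "Address")]
public class Address
{
    [Column(IsPrimaryKey = true)] public int AddressId { get; set; }
    [Column] public string Address1 { get; set; }
    [Column] public string Address2 { get; set; }
    [Column] public string Address3 { get; set; }
    [Column] public string Address4 { get; set; }
    [Column] public string PostCode { get; set; }
}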
You can take this further, and define a list of types of address line, and then redefine the Address table to not have hard-coded columns called Address N and Post code. The address line type table would be very simple:
- Address line type id (primary key)
- Address line type name
And then the address table changes to be something like this:
- Address line id (primary key)
- Address id (foreign key, although I’m hand waving over what it refers to – it joins together the lines of one address)
- Address line type id (foreign key)
- Line contents
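And again as a sketch, the third design’s two new tables (Person keeps its AddressId foreign key, which now groups the rows of AddressLine):

[Table(Name = "AddressLineType")]
public class AddressLineType
{
    [Column(IsPrimaryKey = true)] public int AddressLineTypeId { get; set; }
    [Column] public string AddressLineTypeName { get; set; }
}

[Table(Name = "AddressLine")]
public class AddressLine
{
    [Column(IsPrimaryKey = true)] public int AddressLineId { get; set; }
    [Column] public int AddressId { get; set; }         // joins together the lines of one address
    [Column] public int AddressLineTypeId { get; set; } // foreign key to AddressLineType
    [Column] public string LineContents { get; set; }
}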
They are all good designs, but for different requirements. The benefits of the first design are mostly simplicity and speed. To get all the data for a person is a read of one row from one table.
The second design is more complex, and to read all the data for a person requires a join of two tables – reading one row from each. However, it means that an address can be shared between people, which reduces data duplication and hence the risk of data-consistency problems. It also separates the concepts of address and person, so that you can more easily have things like separate delivery and billing addresses. And it (although not in exactly this design) makes it easier to show the history of someone’s addresses over time, rather than requiring a whole new Person record for each address change.
The third design is more complex again, and to read all the data for a person requires a join of two tables – reading one row from Person and N rows from Address. The benefits of this approach include minimal wasted space – if an address has no fourth line, it simply isn’t in the database, rather than a column being left blank – and enough flexibility to have a better chance of fitting many different countries’ address formats.
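To make that cost visible, here is roughly the LINQ needed to reassemble one person’s address under the third design, using the sketch classes above (and including the line-type lookup, which adds a further join):

var addressLines = from person in context.Persons
                   join line in context.AddressLines
                       on person.AddressId equals line.AddressId
                   join lineType in context.AddressLineTypes
                       on line.AddressLineTypeId equals lineType.AddressLineTypeId
                   where person.PersonId == personId
                   select new { lineType.AddressLineTypeName, line.LineContents };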
It might be tempting to look at the last one and think it’s always the best approach. However, if all you’re doing is running a physical mailing list for a small local books-by-post club, I think it’s overkill. The UI to add new users to the system, update existing users, display lists of users etc. will all be more complex to code. The back-end code to generate a list of names and addresses to send books will also be more complex. The complexity will manifest itself in extra time to write, to change, to debug, to understand and so on. The costs are only worth it if they are outweighed by benefits, usually benefits to users. In this case I’d argue that they’re not.
Summing up
Flexibility is a quality that can be desirable for software to have, but you rarely get it for free. In order to gain flexibility you might need to give up other things, and whether this is a sensible trade or not depends on your circumstances. There might be things that you’re unaware of, or don’t fully appreciate, that you would give up to gain flexibility – including productivity, performance, understandability, and help from tools.
Flexibility is often, but not always, achieved by adding indirection. In the person / address example above, you can see that normalisation is a form of indirection – you no longer store the address directly with the person, but instead store a pointer to where you need to go for the address. It would be wrong of me to miss this opportunity to pass on the wisdom that Butler Lampson attributed to David Wheeler:
All problems in computer science can be solved by another level of indirection.
Another interesting article.
I’m all for pragmatic code myself. Sometimes I go heavy, ensuring everything is tightly defined; but other times I like the flexibility of value objects and generics, even if just for a bit of an experiment.
What is your take on “Well, it’s done now, so we can track the pain if it comes up?”
Thank you. I try to be disciplined and pragmatic too. If I come across code I don’t like, I sometimes replace it and sometimes leave it alone – it depends. Like you I sometimes use generics and other flexible things.
My grump wasn’t so much about a particular technique. It was more a plea for people to recognise the costs as well as benefits of different solutions to a problem, some of which might not be obvious. And I recognise programmer happiness as one row in the cost / benefit table, although that might surface differently in the short and long term.