Automated testing of a website: Dealing with the database

This article proposes an approach to handling the fine-grained database parts of an automated system test of something like a website.  You would need other tools for the remaining jobs:

  • To orchestrate the test, something like SpecFlow;
  • To interact with the website, something like Selenium;
  • To do the bulk updates to the database, something like database back-ups or reset scripts.

It should be portable across different types of database, as the only interaction with the database is via SQL strings, but you may need to tweak the lower-level code to cope with fine-grained differences between flavours of SQL across databases.

What are the database bits of a system test?

One approach to automated system testing is to break it into a series of isolated test cases.  This means that a given test case can run even if the one before or after it fails, which will give you as much information as possible.  The drawback is that you need to worry about the following for each test case, when it comes to the database:

  1. Is the database in the correct state at the start of the test case, i.e. is the test worth running?
  2. Is the database in the correct state at the end of the test case, i.e. has the test passed?

I assume that the data in the database can be split into two broad categories:

  1. Data that is shared by lots of tests, and changes little if at all;
  2. Data that is specific to a test.

Unless the shared data is small, I think it makes little sense to create this in small steps for each test.  It’s more sensible to have only a few big files that create it or reset it back to a good state, and I will assume that you have basically solved this somehow.  However, for the set-up data that is specific to the test, it’s important to be precise about the details, as they can easily invalidate the test before it has even started.

Similarly, I will assume that you won’t be checking every table at the end of the test case – just those that will be altered by the test case.  Personal experience has shown me the importance of checking most or all columns in the affected table(s), to make sure that the column you assume is going to stay NULL doesn’t get a stray non-NULL value.

So, we need to set up and check a small number of rows in a small number of tables, but to worry about the details of both.  The rest of this article goes into a single approach that handles both needs.

Checking what’s inserted when I submit a form

Something I find myself doing a lot is testing that filling in a form on a web page and then clicking submit will result in the correct new row being inserted into a table in the database.  To be both robust and lazy I’d like to:

  1. Make it easy to take the data I’m submitting into the form and re-use it as part of checking the database;
  2. Not get fooled by any existing rows in the database table that happen to look like the row I’m inserting now;
  3. Keep the details of column names, which are probably hard to spell or remember, hidden away in code that needs to know about them, so that the rest of the code is more understandable and less brittle;
  4. Have the minimum of extra hassle for each new table I need to worry about.

The general approach is to have a set of key/value pairs, one pair for each column in the current database table.  These will be updated as the test case runs, and then at the end they will be used to produce an SQL string to select rows from the table.  Assuming that one row is created by the code under test, then there should be exactly one row that is returned by the SELECT statement.

Column names and values

To make sure that all columns are covered, and not just those explicitly set by the test case, at the start of the test case there is a key/value pair for each column in the table, which holds a default value for that column.  For me, much of the time this has been NULL or 0, with an acceptably small number of exceptions that are handled as a special case.

Instead of using the database column names directly, there is a mapping between them and more human-readable names.  This is like the mapping you might do in e.g. a C# class that represents a row in the database – the class’s member has an understandable name (probably Camel Cased or something similar) and this is mapped to a database column with a less understandable name.

I happen to have implemented this as a dictionary because I found it easier than classes with annotations, such as you might use with an ORM.  The bulk of the test code uses the human readable name, and the database-specific code uses the mapping to translate from this to the actual column names.
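To make this concrete, here is a sketch of the mapping and defaults in Python (my real code is C#); the table and column names are invented purely for illustration:

```python
# Hypothetical column mapping for a "holiday" table.
# Human-readable name -> actual database column name.
COLUMN_MAP = {
    "holiday duration": "hol_drtn_dys",
    "destination": "dest_cd",
    "created": "crtd_dttm",
}

# Human-readable name -> default value (None stands for SQL NULL).
DEFAULTS = {
    "holiday duration": 0,
    "destination": None,
    "created": None,
}

def new_row():
    """Start a test case with every column set to its default."""
    return dict(DEFAULTS)

row = new_row()
row["holiday duration"] = 7   # the test sets only the columns it cares about
```

The bulk of the test code only ever sees the human-readable keys; only the SQL-generating code consults COLUMN_MAP.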

Plugging this into an existing test

Because I’m already using SpecFlow and Selenium, there is already code in the test like this:

[Given(@"I have selected a holiday lasting (\d+) days")]
public void GivenHolidayDuration(int numDays)
{
    pageObject.SetHolidayDuration(numDays);
}

Assuming that I have an object called holiday which handles the relevant database table, that it has a method called SetInt(), and that the column in question has the human-readable name holiday duration, then setting the relevant column of the table takes just one extra line:

[Given(@"I have selected a holiday lasting (\d+) days")]
public void GivenHolidayDuration(int numDays)
{
    pageObject.SetHolidayDuration(numDays);
    holiday.SetInt("holiday duration", numDays);
}

I do the same kind of thing for all the other values that are set via user action, and the rest of the columns will keep their default values.

Generating SQL

The database object will have methods like SetInt, SetString, SetNull, which update the key/value pairs.  It also has a method like GetSelectSQL which will turn the key/value pair (K, V) into a string which is either

    K = V

or

    K IS NULL

and then the bits of string for each key/value pair get concatenated together using " AND " to produce something like

SELECT COUNT(*) FROM table WHERE col_A IS NULL AND col_B=17 AND col_C='abc' AND col_D='2018-03-27 13:55:34' …
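A sketch of this SQL generation in Python (my real code is C#); sql_literal and where_clause are invented names, and the escaping is deliberately naive, since the values come from the test itself rather than untrusted input:

```python
def sql_literal(value):
    """Render a Python value as a SQL literal (naive sketch: the values
    come from the test itself, not from untrusted input)."""
    if isinstance(value, str):
        return "'" + value.replace("'", "''") + "'"
    return str(value)

def where_clause(columns):
    """Turn {column: value} pairs into 'col = val AND ... AND col IS NULL'."""
    parts = []
    for name, value in columns.items():
        if value is None:
            parts.append(f"{name} IS NULL")
        else:
            parts.append(f"{name} = {sql_literal(value)}")
    return " AND ".join(parts)

def select_count_sql(table, columns):
    return f"SELECT COUNT(*) FROM {table} WHERE {where_clause(columns)}"

print(select_count_sql("holiday", {"col_A": None, "col_B": 17, "col_C": "abc"}))
```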

There are two exceptions to this general rule about generating SQL text.  The first is to do with date/times that are essentially now.  These would be used in fields that store things like when the record was created or last updated.  You can’t set this field in the key/value pairs to a particular value (specific to the second), because the row might be created a few seconds later, after a slow API has responded.  If you’re running your test just before midnight, the actual date/time might even be for the next day.

To cope with this, I have a separate list of key/value pairs to hold approximate date/times.  A pair still holds a specific date/time value, but it is used to generate SQL that is an approximate match rather than an exact one.  You will need to decide how approximate: an approximate match means something like the difference between the expected and actual values being less than a permitted maximum.
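One way to generate the approximate match, sketched in Python, is to turn the tolerance into a BETWEEN range; the column name and the 30-second tolerance here are just examples:

```python
from datetime import datetime, timedelta

def approx_datetime_condition(column, expected, tolerance_seconds=30):
    """Generate SQL matching a date/time column approximately:
    expected - tolerance <= actual <= expected + tolerance."""
    fmt = "%Y-%m-%d %H:%M:%S"
    low = (expected - timedelta(seconds=tolerance_seconds)).strftime(fmt)
    high = (expected + timedelta(seconds=tolerance_seconds)).strftime(fmt)
    return f"{column} BETWEEN '{low}' AND '{high}'"

print(approx_datetime_condition("crtd_dttm", datetime(2018, 3, 27, 13, 55, 34)))
# crtd_dttm BETWEEN '2018-03-27 13:55:04' AND '2018-03-27 13:56:04'
```

A range comparison like this also copes with the just-before-midnight problem, because the upper bound naturally rolls over into the next day.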

So that the test isn’t fooled by existing rows that happen to look like the row about to be inserted, just before the test clicks the submit button to create the row, the key/value pairs are used to create an SQL statement that will find the largest primary key value of any rows that match.  For instance:

SELECT ISNULL(MAX(pk_col), 0) FROM table WHERE col_A=1 AND col_B=2 …

Once I have the largest primary key value of rows that look like the one just being inserted (or 0 if none exists), then this value can be used as a lower bound on the primary key value of the row being created.  This value is added to the key/value pairs too, but as a lower bound rather than something to equal.  The final SQL used to look for the row inserted therefore looks like this, assuming that the most recent pre-existing row has the primary key value 63:

SELECT pk_col FROM table WHERE pk_col > 63 AND col_A=1 AND col_B=2 …

Note that instead of the list of columns being just the columns set by the test, it will include all columns.  For columns that are nullable and haven’t been given a value by the test, the SQL will be checking that all of those columns are definitely NULL, rather than allowing them to have any value by omitting them from the SQL statement.  Similarly, columns that are non-null and weren’t given a value by the test will be checked to have their expected non-null default value, e.g. 0, rather than being allowed to float to any value by being left out of the SQL string.

Lighting the blue touch paper

The example code above shows how a column’s value can be recorded at the same time as that value is entered into the corresponding part of the screen.  The method that clicks the submit button is more complicated, doing the several jobs described in the paragraphs above.

It needs to:

  1. Fill in any last-minute (or last-millisecond) values that don’t have an equivalent part of the web form, such as created date/time;
  2. Get the SQL to find the maximum primary key of any existing matching rows, then execute it;
  3. Add the maximum primary key value to the key/value pairs as a lower bound on the new primary key value;
  4. Click the submit button and wait for the system to indicate it has successfully responded;
  5. Get fresh SQL to find the new matching row, then execute it;
  6. Mark the test as failing if the SQL doesn’t find exactly one matching row.
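The steps above can be sketched like this in Python (my real code is C# using SpecFlow and Selenium); FakeDb and FakePage are stand-ins for a real database connection and page object, and all the names are invented:

```python
def submit_and_check(db, page, table, pk_col, columns):
    """Sketch of the submit flow: db.scalar(sql) runs a query returning one
    value, db.rows(sql) returns matching rows, page.submit() clicks the
    button -- all hypothetical interfaces."""
    where = " AND ".join(
        f"{c} IS NULL" if v is None else f"{c} = {v!r}"
        for c, v in columns.items()
    )
    # Step 2: largest primary key of any existing look-alike rows (0 if none).
    max_pk = db.scalar(
        f"SELECT ISNULL(MAX({pk_col}), 0) FROM {table} WHERE {where}"
    )
    # Step 4: click submit and wait for the system to respond.
    page.submit()
    # Step 5: look for the new row beyond the recorded lower bound.
    rows = db.rows(
        f"SELECT {pk_col} FROM {table} WHERE {pk_col} > {max_pk} AND {where}"
    )
    # Step 6: exactly one matching row means the test passes.
    assert len(rows) == 1, f"expected exactly 1 new row, found {len(rows)}"

# Minimal fakes to show the flow; a real test would use Selenium and a DB driver.
class FakeDb:
    def __init__(self): self.executed = []
    def scalar(self, sql):
        self.executed.append(sql)
        return 63
    def rows(self, sql):
        self.executed.append(sql)
        return [(64,)]

class FakePage:
    clicked = False
    def submit(self): self.clicked = True

db, page = FakeDb(), FakePage()
submit_and_check(db, page, "holiday", "pk_col", {"col_A": 1, "col_B": None})
```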

Getting starting data ready

Much of the code can be re-used to insert data at the start of the test.  Columns you don’t care about will keep their default values, and the ones you do care about can be set using a human-readable name.  The date/time values that were previously used in approximate comparisons can simply be inserted along with all the other values.  There is a separate method to generate an INSERT SQL statement rather than a SELECT statement.

Whether you set the non-default values via separate calls to the relevant methods or have an array or list that you loop through is up to you, but neither should be too painful because you only need to worry about columns that should have a non-default value.

As with the SELECT statement, the row inserted will be fully specified.  Every column will be assigned a value, even if that value is NULL.
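A sketch of the INSERT generation, again in Python with invented names; note that every column appears in the statement, even the NULL ones:

```python
def insert_sql(table, columns):
    """Generate a fully specified INSERT: every column gets a value,
    even if that value is NULL (naive escaping, as the values come
    from the test itself)."""
    def lit(v):
        if v is None:
            return "NULL"
        if isinstance(v, str):
            return "'" + v.replace("'", "''") + "'"
        return str(v)
    names = ", ".join(columns)
    values = ", ".join(lit(v) for v in columns.values())
    return f"INSERT INTO {table} ({names}) VALUES ({values})"

print(insert_sql("holiday", {"col_A": None, "col_B": 17, "col_C": "abc"}))
# INSERT INTO holiday (col_A, col_B, col_C) VALUES (NULL, 17, 'abc')
```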

Code structure

There is a convenient separation of concerns, which will minimise how much you need to do per database table.  There is a class per table, which all derive from a common base class.  A derived class needs to do the following for its table:

  1. Set the table’s name and primary key name;
  2. Create the mapping of human-readable name to database name for each column;
  3. Give the default value for each column.

Everything else is in the base class – holding the key/value pairs, setting the columns to their default values, the methods to update the columns, and the methods to generate the different kinds of SQL from the key/value pairs.
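Sketched in Python (my real code is C#), the split between base class and derived class might look like this; HolidayTable and its columns are made up:

```python
class TableChecker:
    """Base class: holds the key/value pairs and generates SQL.
    Derived classes only supply the per-table details."""
    table = pk_col = None
    column_map = {}    # human-readable name -> database column name
    defaults = {}      # human-readable name -> default value

    def __init__(self):
        # Start with every column set to its default value.
        self.values = dict(self.defaults)

    def set(self, name, value):
        self.values[name] = value

    def where_clause(self):
        """Fully specified match: every column appears, even NULL ones."""
        parts = []
        for name, value in self.values.items():
            col = self.column_map[name]
            parts.append(f"{col} IS NULL" if value is None
                         else f"{col} = {value!r}")
        return " AND ".join(parts)

class HolidayTable(TableChecker):
    # Hypothetical table: only these details live in the derived class.
    table, pk_col = "holiday", "pk_col"
    column_map = {"holiday duration": "hol_drtn_dys", "notes": "nts_txt"}
    defaults = {"holiday duration": 0, "notes": None}

holiday = HolidayTable()
holiday.set("holiday duration", 7)
```

Adding a new table is then just a few lines: a name, a primary key, a column mapping and some defaults.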

Conclusion

First a couple of caveats or alternatives.  People who actually know what they’re talking about when it comes to automated testing can tell you about the Testing Pyramid.  Don’t forget this when you automate – what is the cost and what is the benefit of the test, and is there anywhere else you’ll get a better ratio of the two before doing end-to-end testing?

As with any tests that involve the UI, ask yourself: am I testing the UI, or testing through the UI?  Is there a cheaper way of testing the code behind the UI (e.g. via its API) and of testing the UI itself via unit tests, leaving a smaller number of end-to-end tests that are purely for integration?

There are alternative ways of getting starting data in your database, such as the Data Builder Pattern.

Having said all that, I hope that you find this useful.  It doesn’t solve all the database problems for automated testing, but I hope that it does a good job at the problem it tackles.  Please let me know in the comments what you think, and pass on any feedback once you’ve used it.

Thanks to Richard Bradshaw for very helpful feedback on this article.  Sorry – still no pictures :-).

One thought on “Automated testing of a website: Dealing with the database”

  1. Date/time (and system-assigned serial numbers and the like) is an area where it makes sense to think about how to make the software easy to test in the original design. For date/time, it’s useful to route all requests for a date/time to a specifically developed function under the control of the developer/operator (rather than the native system date/time call), with an ability to put the software into a “test mode” where the tester can control the date/time response and how the date/time is advanced within the test cases. Many older batch-type systems have the concept of a “business date”, which helps with this problem and also helps when catching up with back-dated processing. The same goes for system-assigned control/serial numbers.

    The only time when this shouldn’t be done is recording timestamps for actions which are not related directly to the business outcome of the system, e.g. login, requesting key system operations, when tasks started/completed etc.
