In this article I’m going to talk about view models and similar behaviour-free containers of data. I’ll go over why you might want one, and then talk about testing them (yes, really). I use the name view model to make it clear that I’m not talking about models from e.g. machine learning.
What is a view model?
First, I ought to say what I mean. You have all the data you need for some purpose, lying around in existing objects. Instead of using those objects, you move at least some of the data into a new object. One reason why you might do this is to create a view model as part of an MVC website (or MVVM website etc.)
Why would you want a view model?
The first reason is because you must. The thing that will work on the data is hard or impossible to change, and it needs the data to be as it’s expecting. In practical terms this usually means the view model must derive from some base class or implement some interface.
Assuming you don’t have to, why might you want to use a view model?
The reasons can be summarised as
- Make it easier / impossible to access data;
- Get rid of unnecessary data;
- Change the name given to some data;
- Normalise / denormalise the data;
- Serialise / deserialise the data;
- Change the data’s type.
I’ll go through these in a bit more detail.
If you have collected the data from several sources, it will probably be spread across several different objects. To spare the consumer of the data the hassle of rummaging in all the separate objects, you can collate the data into a single object. The flip side is when the consumer must be prevented from accessing some of the data you’ve previously collected, and so you copy only the bits they should see into a new view model.
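Sketching this in Python (the Account and Profile classes, and all the field names, are made up for illustration – they’re not from any particular framework), a view model that collates two sources while deliberately leaving sensitive data behind might look like this. It also shows the renaming idea: the source’s AccountNum becomes the consumer’s CustomerId.

```python
from dataclasses import dataclass

# Hypothetical source objects, for illustration only.
@dataclass
class Account:
    account_num: str
    password_hash: str  # sensitive: must never reach the view

@dataclass
class Profile:
    display_name: str
    email: str

# The view model collates only what the view should see,
# under the names the view expects.
@dataclass
class AccountViewModel:
    customer_id: str    # renamed from account_num
    display_name: str

def build_view_model(account: Account, profile: Profile) -> AccountViewModel:
    # Copy just the permitted fields; password_hash is deliberately left behind.
    return AccountViewModel(customer_id=account.account_num,
                            display_name=profile.display_name)
```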
Instead of preventing the consumer from getting some data, you might want to filter out data just because it’s not needed. More data than necessary would slow down the process of getting the data to the consumer, e.g. over the internet.
It might be that the data is in a field whose name makes little sense to the caller (e.g. it’s a gibberish column name from a database) or is in some other way wrongly named. (What I call AccountNum you call CustomerId, for instance.)
You might want to normalise or denormalise the data. An example of this is the lines that make up an address. Do you have an object with separate fields called Address1, Address2 etc, or do you have a list or array of address lines, which could contain any number of lines? The Address1 etc. approach is denormalised, and the approach using a list of address lines is normalised. Sometimes the data will be in one type of structure but you want it in the other one, and so you must normalise or denormalise it.
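A minimal sketch of both directions, in Python rather than the C#-style code elsewhere in this article (the four-line limit and the field names Address1 to Address4 are assumptions for the example):

```python
from typing import List

def denormalise(lines: List[str]) -> dict:
    # Turn a variable-length list of address lines into fixed
    # Address1..Address4 fields, padding missing lines with None.
    padded = (lines + [None] * 4)[:4]
    return {f"Address{i + 1}": padded[i] for i in range(4)}

def normalise(record: dict) -> List[str]:
    # Collect Address1..Address4 back into a list, dropping empty slots.
    return [record[f"Address{i + 1}"] for i in range(4)
            if record.get(f"Address{i + 1}")]
```

Round-tripping a two-line address through denormalise and then normalise should give back the original list.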
Serialising vs. deserialising is a relative of normalising vs. denormalising, in that they both change data between two different ways of representing it. It could be that e.g. a date picker on a web page has supplied a date to the back-end code for the website as the text string “27/02/2019”. The back-end code needs to have a date object, which has separate fields for day, month and year. Converting the string to the date object is deserialising, and converting a date to the corresponding string is serialising.
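Using the date from the example, a sketch in Python (the day/month/year format string is an assumption based on “27/02/2019”):

```python
from datetime import date, datetime

def deserialise_date(text: str) -> date:
    # "27/02/2019" -> a date object with separate day, month and year.
    return datetime.strptime(text, "%d/%m/%Y").date()

def serialise_date(d: date) -> str:
    # The reverse: a date object -> "27/02/2019".
    return d.strftime("%d/%m/%Y")
```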
Sometimes you combine the two approaches, and e.g. do both deserialising and denormalising. You might be given a dictionary or list of (key, value) pairs that represent e.g. a person:
(“FirstName”, “Horatio”), (“LastName”, “Nelson”), (“DOB”, “29/09/1758”), (“NumSiblings”, “10”)
These could be e.g. the attributes you get as the payload of a SAML message during single sign on (SSO). You want this to end up as a Person object, which has string fields for FirstName and LastName, but a Date field for date of birth and an integer field for number of siblings. This means drawing out the list into separate fields (denormalising) and parsing strings into other types (deserialising).
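Putting the two steps together, a Python sketch of that Person mapping (the class shape and helper name are mine, not from any SAML library):

```python
from dataclasses import dataclass
from datetime import date, datetime

@dataclass
class Person:
    first_name: str
    last_name: str
    dob: date
    num_siblings: int

def person_from_attributes(pairs) -> Person:
    attrs = dict(pairs)  # denormalise: (key, value) pairs -> named fields
    return Person(
        first_name=attrs["FirstName"],
        last_name=attrs["LastName"],
        dob=datetime.strptime(attrs["DOB"], "%d/%m/%Y").date(),  # deserialise
        num_siblings=int(attrs["NumSiblings"]),                  # deserialise
    )
```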
Lastly, you might want to do some other kind of type conversion, one that doesn’t involve a string. It could be that data comes to you as e.g. a float, but you know it’s always a whole number, and so it can be safely converted to an integer type that uses less memory and is safer for integer arithmetic.
Risks in view models
A sensible approach to deciding what and how to test is based on value to the end user, how this is put at risk, and the cost of testing. Because there is usually little or no business logic in a view model, why bother going to the effort of testing it?
There’s a risk of copy / paste / edit errors. Because of the repetitive nature of much of the code in a view model, it’s easy to end up with something like this, caused by messing up the copy / paste / edit:
Address1 = userData["addr1"];
Address2 = userData["addr2"];
Address2 = userData["addr3"];
Address4 = userData["addr4"];
Unless you test that all input address lines are correctly transformed into output address lines, you could easily miss this kind of error. (In case you missed it: what value will end up in Address2?)
There’s a risk that you get the destination data type wrong. There are several variations here: you could assume something can’t be null when it can be, assume something is e.g. an integer when it’s actually a float, or assume a string holds at most e.g. 10 characters when it could hold up to 255.
A benefit of testing view models is that it can make tests of higher-level things, such as controllers, simpler, and so quicker to write, easier to understand, and easier to maintain. The same applies to manual tests – human nature being what it is, a human tester won’t always type values into every field on a web form, or into every column of an input file. It’s reassuring to know that at least some automated tests cover every part of the input.
You obviously can’t test all possible inputs, so I try to start by thinking about the biggest and smallest values that the input can have. There are two dimensions to this:
- What are the most and the fewest bits of the input that could be present? E.g. most / fewest fields populated on a web form, most / fewest non-null fields in a database record or line in an input file.
- For each bit of the input, what is the biggest and the smallest value this could hold? For things like numbers and dates that should be straightforward, and for strings that could mean longest or shortest.
These are often the edge cases. Then there will probably be many cases in between – how many of these you should test is up to you, as it depends so much on the circumstances.
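The two dimensions above can be written down as a small table of edge cases. This is a sketch, with a deliberately tiny made-up mapper and made-up limits (a 255-character name, sibling counts as strings), just to show the shape:

```python
def make_view_model(form: dict) -> dict:
    # Minimal hypothetical mapper: missing fields become None,
    # NumSiblings is parsed to an int when present.
    return {
        "FirstName": form.get("FirstName"),
        "NumSiblings": int(form["NumSiblings"]) if "NumSiblings" in form else None,
    }

# Edge cases along the two dimensions:
# most/fewest fields present, and biggest/smallest value per field.
edge_cases = [
    ({}, {"FirstName": None, "NumSiblings": None}),   # fewest fields populated
    ({"FirstName": "A", "NumSiblings": "0"},          # smallest values
     {"FirstName": "A", "NumSiblings": 0}),
    ({"FirstName": "X" * 255, "NumSiblings": "10"},   # biggest values
     {"FirstName": "X" * 255, "NumSiblings": 10}),
]

for form, expected in edge_cases:
    assert make_view_model(form) == expected
```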
I hope that this gives you an idea of why you might need things like view models. For all their simplicity, it is often still worth testing them. They might not contain complex business logic, but you still don’t want them to be wrong. Being aware of the risks involved, the benefits to other tests, and starting points for your testing should help you know what and how to test.
2 thoughts on “Testing your view models”
The reason I’d cite for using a view model is to recognise that a specific view may apply transformations, presenting and receiving a representation that is unique or shared but not 1:1 with the storage and processing representation.
A common thing we all use that shares this property is currency. We exchange currency using many different view models, but it’s backed by a very different representation, which might make decisions and user interaction more difficult without interstitial mappings at several levels.
As always a lovely article.