Syntactive Blog

Software Development blogs and guides


Most common testability issues in applications

It's been done before

There is a wide variety of products and services out there, from tools that solve very specific problems to apps that simply improve the daily life of everyday users. But no matter what issue your product tackles, at its core it is likely very similar to what already exists. Most programs and applications typically have:

  • A (graphical) user interface
  • A (user) database
  • Network connectivity
  • An API

As a consequence, testing them is also bound to be pretty similar. This is what allows us to recognize common patterns and mistakes that make testing harder and less reliable than it should be.

Typical testing principles

When writing test automation for a typical application, you will mainly verify that the input given to your application matches the output returned by another component of the software. For example, in a typical end-to-end test, the username you registered with earlier has to be shown again on your user profile page. The data went into your registration form, was processed and stored in your database by the backend, then reloaded by the backend and shown by the frontend on your profile page. If anything went wrong in any of these steps, you should detect the discrepancy in the data.

So you mainly want to control what goes in and what comes out, using the various interfaces of the app. The more control you have over the input and output, the more testable your application will be. Thanks to the controlled input, you can create very specific test data (also called seed data) inside your application to use during your test execution.

Imagine you want to test the login feature of your web application. You will always need a registered user first to log in with. If you were to test this once, manually, sure, you could use the GUI to register a user and then log in with it. This however gets old pretty fast and does not scale well at all. A better approach would be an automated script that creates your user through the UI, using a web browser. Excellent, you've automated the registration flow and as a bonus can test this too! Yes, but no.


While a lot of automated testing is done this way, it's actually very inefficient to create every dataset like this. Sure, you might be testing your registration process every time as a precondition to your login test, but repeating this slow process over and over will cost you a lot of execution time and add additional steps on top of what you actually want to test. Besides, if there is a blocking issue in the registration process, it will prevent you from testing the login at all.

So what would be a better way to go about this? Being able to inject the test data directly into the product database. You create the proper dataset once and apply it as you like, instantly, inside the database. One important note: avoid creating impossible data. For example, if your registration flow does not allow usernames shorter than 3 characters, you shouldn't be using shorter names in your test data. Ideally your product's database schema should also enforce these rules to keep the data as sanitized as possible. It can however be useful to bypass some application restrictions, like setting dates in the past, to test expiration mechanisms (more on that later).

Another way to apply datasets efficiently is to use an API, if the product exposes one and allows you to create data with it. APIs are faster and more reliable for creating a bulk dataset than any graphical user interface will ever be.
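As a minimal sketch of direct injection, here is what seeding users into a test database could look like. The table layout, column names, and `seed_users` helper are invented for illustration; a real product would have its own schema and migrations:

```python
# Hypothetical sketch: injecting seed data straight into a SQLite test
# database instead of driving the registration UI over and over.
import sqlite3

def seed_users(conn, users):
    """Insert test users directly, bypassing the slow UI flow."""
    conn.execute(
        "CREATE TABLE IF NOT EXISTS users (username TEXT PRIMARY KEY, password_hash TEXT)"
    )
    for username, password_hash in users:
        # Enforce the same rule the registration form applies:
        # no usernames shorter than 3 characters in the seed data.
        if len(username) < 3:
            raise ValueError(f"invalid seed user: {username!r}")
        conn.execute(
            "INSERT INTO users (username, password_hash) VALUES (?, ?)",
            (username, password_hash),
        )
    conn.commit()

conn = sqlite3.connect(":memory:")
seed_users(conn, [("alice", "hash1"), ("bob", "hash2")])
count = conn.execute("SELECT COUNT(*) FROM users").fetchone()[0]
```

The validation step is the point made above: the seed data obeys the same constraints the application enforces, so the injected users are indistinguishable from real ones.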

Data volatility

Often your application is going to process data in a certain way, and this processing can create unavoidable side effects. Imagine you are trying to log in with an incorrect password to ensure the login feature checks the password correctly. This is an extremely common test scenario, but it can already cause you some trouble. A lot of login features have a locking or timeout mechanism which prevents anyone from brute-forcing passwords to get inside an account. It's a tried and true defense mechanism that works well, but it makes testing more difficult once more. Every time you log in incorrectly, you risk locking your user and invalidating your test data.


Now you could enhance your automation to call the unlock procedure, or recreate the user after every attempt, but again, depending on how much control you have over the input, this can be slow and inefficient. An alternative approach is to have special users or rules that prevent these side effects when required. Perhaps you can designate special test users which do not have a lock counter, and whose password never expires or is never forced to be changed. Sure, they theoretically increase the code complexity of the app slightly, but the saved time and effort in testing is often worth it.
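Such an exemption can stay very small. Below is a hypothetical sketch of a lock-counter rule with a carve-out for designated test accounts; the names (`MAX_ATTEMPTS`, `TEST_USERS`, `record_failed_login`) are all invented for illustration:

```python
# Hypothetical sketch: a failed-login counter that exempts designated
# test users, so repeated wrong-password tests never invalidate the data.
MAX_ATTEMPTS = 3
TEST_USERS = {"qa_robot"}  # accounts exempt from locking

failed_attempts = {}

def record_failed_login(username):
    """Count a failed login; return True if the account is now locked."""
    if username in TEST_USERS:
        return False  # test users never lock, so the test data stays valid
    failed_attempts[username] = failed_attempts.get(username, 0) + 1
    return failed_attempts[username] >= MAX_ATTEMPTS
```

The exemption set is the only extra complexity, and it keeps the production locking rule untouched for every normal account.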

Traceability and context

As you find your way through the application and execute specific flows, you're bound to encounter glitches and defects. This happens with even the best of products, especially in the earlier stages of development. It is therefore imperative that when, inevitably, an error occurs, it can be root-caused quickly and efficiently. This means:

  • Proper error messages or error codes
  • Proper logging and tracing within the app
  • Consistent repeatable/predictable app behavior
  • Clear, identifiable app versioning
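The first two points can be sketched together: a stable error code paired with a descriptive message and a log line gives a failing test something exact to assert on. The `AppError` class, the code string, and `load_profile` are invented for illustration:

```python
# Hypothetical sketch: stable error codes plus logging for traceability.
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("app")

class AppError(Exception):
    """Carries a machine-readable code alongside the human message."""
    def __init__(self, code, message):
        super().__init__(f"[{code}] {message}")
        self.code = code

def load_profile(user_id):
    log.info("loading profile for user_id=%s", user_id)  # tracing
    if user_id is None:
        # A stable code lets automation pinpoint the exact failure
        raise AppError("ERR_PROFILE_001", "user_id missing")
    return {"id": user_id}
```

When the test fails, the code in the exception and the trace in the log answer "what broke and where" without a debugging session.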

Keep it simple

Reduce the product complexity (simplicity is king). It's very tempting to add a lot of features as early as possible in the product, however this can easily clutter up the main flows and add unneeded complexity. These additional features, whose intentions can be unclear, make life difficult for both developers and testers. Therefore they are prone to being implemented or tested incorrectly. And if your in-house developers and testers don't know how the product should behave, then how is the client supposed to use it correctly? In conclusion: build a solid basic workflow with a minimum of additions, and make the product self-explanatory so it does not need extensive documentation.

Allow mocking and stubbing

By decoupling the various services and components of your application, and reducing the number of states and state transitions inside your software, you allow for mocking and stubbing. These are techniques for testing separate components of your application in isolation, speeding up the testing of specific parts.
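A minimal sketch of the idea, using Python's standard `unittest.mock`: the `fetch_username` function and its client interface are invented for illustration, but because the dependency is injected rather than hard-wired, it can be replaced by a stub with no real network involved:

```python
# Sketch: stubbing an injected dependency with unittest.mock, so the
# component under test never touches a real backend.
from unittest import mock

def fetch_username(client, user_id):
    """Return the username from an injected HTTP-like client."""
    response = client.get(f"/users/{user_id}")
    return response["username"]

# Stub the client instead of calling a real service
fake_client = mock.Mock()
fake_client.get.return_value = {"username": "alice"}

name = fetch_username(fake_client, 42)
fake_client.get.assert_called_once_with("/users/42")
```

Because `client` is a parameter, the test controls both the input and the "network" output, which is exactly the control over input and output described earlier.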


It is therefore imperative to keep your functions as stateless as possible. When a function relies on internal state, mocking can become extremely tedious. Let's take a simple example with a function that performs an action based on the time:

# Example that prevents testing different expiration dates
from datetime import date, timedelta

def is_expired(d):
    expiration_date = date.today()  # hidden dependency on the system clock
    if d >= expiration_date:
        return True
    else:
        return False

# Example that allows testing different expiration dates easily
def is_expired(d, expiration_date):
    if d >= expiration_date:
        return True
    else:
        return False

# Play with the expiration date without having to change the date on your system
expiration_date_today = date.today()
expiration_date_tomorrow = date.today() + timedelta(days=1)

As you can see in the code above, when you can provide the parameters to the function yourself, it gives you far greater freedom to cover multiple scenarios during testing.
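For instance, with a parameterised function like the second version above, a test can pin "today" to a fixed value and cover past, present, and future expirations in one run, without ever touching the system clock (the function is restated here so the snippet is self-contained):

```python
# Sketch: covering several expiration scenarios with a fixed "today".
from datetime import date, timedelta

def is_expired(d, expiration_date):
    return d >= expiration_date

today = date(2024, 1, 15)           # pinned date for a repeatable test
tomorrow = today + timedelta(days=1)

on_the_day = is_expired(today, today)        # expires exactly today
not_yet = is_expired(today, tomorrow)        # expiration is still ahead
past_due = is_expired(tomorrow, today)       # already past expiration
```

Each scenario is just another pair of arguments, so the whole expiration matrix becomes a table of inputs rather than a sequence of clock changes.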

Custom test selectors

Make component selectors that are independent of the actual product code. When they are not used in the application's structure and styling, they are very unlikely to be changed by a developer. This makes test automation that relies on these specific data-testid attributes much more reliable and easier to implement. It's also not mandatory that the main app developers add these tags: when testers shift left, they can easily get access to the product code and add these selectors themselves, without interfering with the application behavior in any way.

<div data-testid="name_me" />
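To show the selector in action without a full browser setup, here is a sketch using Python's standard `html.parser` to locate an element by its data-testid; a real UI test would use a browser driver, but the lookup principle is the same (the `TestIdFinder` class is invented for illustration):

```python
# Sketch: finding an element by data-testid with the stdlib HTML parser.
from html.parser import HTMLParser

class TestIdFinder(HTMLParser):
    """Flags whether any tag carries the requested data-testid."""
    def __init__(self, target):
        super().__init__()
        self.target = target
        self.found = False

    def handle_starttag(self, tag, attrs):
        if dict(attrs).get("data-testid") == self.target:
            self.found = True

finder = TestIdFinder("name_me")
finder.feed('<div data-testid="name_me"></div>')
```

Because the attribute carries no styling or behavior, renaming a CSS class or restructuring the markup leaves this lookup intact.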