To test or not to test? How much to test? (Part 1)

To test or not to test? No, really, let’s be real. I’m seeing more and more software projects with hardly any tests. No unit tests, no end-to-end tests. Almost none! Why is that? Isn’t it against the Agile Manifesto and Continuous Integration?

Sometimes people start a new project and write tests for most or all functions, but abandon the practice in the middle of the project. And then, when the deadline comes, they stop writing tests completely.

And it’s not that so many people can’t write tests, or are too lazy. There are several reasons for it. In many cases the project manager or business owner of the project is looking for a way to cut costs or time because a deadline is approaching fast or the budget is running low. Some discover that development tasks can be assigned to QA people, who used to only write and run tests. Or simply that developers can spend more time writing actual code rather than writing tests. Many people also think their software is good enough that they don’t need any tests, or that TDD is a nice idea but is counterproductive and tiresome when applied to the real world. Management doesn’t help in this matter either: management usually asks developers how many items from the sprint were done/burnt, not how many tests they wrote. In the end, project managers don’t ask about the quality of the software (they assume it to be the highest possible in any event), they ask: WHEN !?!?

Another interesting conflict appears when somebody is really into testing and starts testing everything for millions of possible technical failures and bad inputs. Or writes a generator that produces random text for every input field in the user interface and tests how the software behaves on every possible screen. Or simulates millions of file system, network, power or interface failures. And then you realize that you spend all of your time writing tests, and that after writing the millionth test there are still another ten million tests that could be written, for an almost indefinite number of strange things that may happen.

So what should we do? How do we find the right balance? And should we let the business decide whether we write tests or not?

So first of all, technical stuff is our stuff. The business should not interfere too much. As software professionals we are solely responsible for choosing the right tools and methodologies to do our job, and for doing it as well as possible. If a bug happens, we are to be blamed anyway. A bug may be merely annoying, but it may also kill a company (a security bug, a bug leaking money, low quality of service driving customers away), or put somebody’s life in danger (cars, planes, medical equipment). If a bug is serious, you can lose your job, your career, or even face legal problems. Or you can simply lose the professional reputation you built over many years. And then people will ask: how did that happen, if you claimed to be a professional? You can’t just reply: management asked me to stop writing tests and only write what’s in the backlog. People count on your expertise, and you have to take responsibility for your work.

Anyway, you are going to test your software. Even if you don’t write automated tests, you are going to test manually, or let somebody else test manually. Even if you say you only write a few small unit tests for the complicated functions, somebody will test most of your code manually: you, the QA team, your boss, your boss’s boss, or in the worst case, your customer. Ask yourself: which is better, you running the tests and finding the bugs before anybody else, or the customer becoming your beta tester and filing complaints? Also ask yourself: which is faster and can be repeated thousands of times a day at very little cost, manual testing or automated testing?

OK, but we need the right balance between the two opposing extremes: writing hardly any tests, or writing millions of tests to cover 99.99% of the code (100% coverage is not possible anyway). My proposal is this. We do the absolute 20% minimum that provides 80% of the value (without asking permission from anybody who is not an expert in development, like a project manager or business project owner):

  • VALIDATE INPUT. Validate input using the best techniques offered by the framework you use. Data in your system must be absolutely clean and in scope. It’s better to throw exceptions than to accept s*itty data and base calculations on it. This way you don’t have to test how the software behaves for every kind of bad input. Use the same validation code for both frontend and backend, even if you have to run validation two or even three times (if CPU time is not an issue) before the data is stored.
  • VALIDATE INPUT FOR ACTIONS, ROLES AND STATES. Build a single piece of code that also takes care of the validity of data in the different states of your objects, and link it with user roles (“what can a user with role A do with object B when it’s in state C”). You don’t want validation and authorization “ifs” spread across thousands of files and layers (too many times I’ve seen “if” statements in controllers, services and cshtml view files, all computing the same authorization logic).
  • USE WELL-ESTABLISHED SECURITY FRAMEWORKS. Use solid authorization and authentication frameworks that cover your butt and the important data of your customers, including protection against Cross-Site Scripting and SQL Injection. This way you don’t have to write additional security tests.
  • UNIT TESTS FOR ESSENTIAL FUNCTIONS. Write unit tests at least for the really complicated functions, those which are essential for the module you build, like the salary calculation in an HR module or the interest rate for a bank customer.
  • E2E TESTS FOR BUSINESS ACCEPTANCE TESTS. Write end-to-end tests that reflect the acceptance tests (definition of done) given by the product owner, typically a business person, in a story/task. Do not invent your own business tests if you don’t have to; rather ask the business side to extend the acceptance tests if you feel they are incomplete. You must test these happy-path scenarios as a minimum, meaning you should test the real-life scenarios that will cover 80% (or more) of the everyday use of the system.
  • TEST INTERFACES. Write short tests that check the interfaces to other systems, both on the test environment and in production. You don’t have to test all APIs of the other platform (like you don’t have to test every available SMTP command each time you write code that uses an SMTP server, right?). Testing one simple function of every interface is the minimum you have to do; it will catch most configuration issues.
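The “validate input” rule above can be sketched in a few lines. This is a minimal illustration, not code from any real project; the field names and the amount limit are invented for the example. The idea is to reject bad data at the boundary and throw, so the rest of the code never has to handle it:

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class Transfer:
    """A money transfer request that is only constructed from clean data."""
    account_id: str
    amount_cents: int


def parse_transfer(raw: dict) -> Transfer:
    """Fail fast on bad input instead of basing calculations on garbage."""
    account_id = raw.get("account_id")
    if not isinstance(account_id, str) or not account_id.strip():
        raise ValueError("account_id must be a non-empty string")
    amount = raw.get("amount_cents")
    if not isinstance(amount, int) or isinstance(amount, bool):
        raise ValueError("amount_cents must be an integer")
    if not 1 <= amount <= 100_000_000:  # illustrative business limit
        raise ValueError("amount_cents out of allowed range")
    return Transfer(account_id=account_id.strip(), amount_cents=amount)
```

Because the whole check lives in one function, the same code can run on the frontend pass and again on the backend pass before the data is stored.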
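The “actions, roles and states” bullet can also be made concrete. A rough sketch, with roles, states and actions invented for illustration: one table and one function answer “what can role A do with an object in state C”, instead of the same “ifs” repeated across controllers, services and views:

```python
# The single source of truth for authorization decisions.
# (role, object state) -> set of allowed actions; all names are made up.
PERMISSIONS = {
    ("editor", "draft"): {"edit", "submit"},
    ("approver", "submitted"): {"approve", "reject"},
    ("admin", "draft"): {"edit", "submit", "delete"},
    ("admin", "submitted"): {"approve", "reject", "delete"},
}


def can(role: str, state: str, action: str) -> bool:
    """Answer 'what can a user with role A do with an object in state C'."""
    return action in PERMISSIONS.get((role, state), set())


def require(role: str, state: str, action: str) -> None:
    """Call this from every controller/service instead of scattering 'ifs'."""
    if not can(role, state, action):
        raise PermissionError(
            f"role {role!r} may not {action!r} an object in state {state!r}"
        )
```

With this in place there is exactly one piece of logic to unit-test, and changing a business rule means changing one table entry.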
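For the “unit tests for essential functions” point, here is what the minimum looks like for an interest-rate calculation. The function is a standard compound-interest formula chosen for the example, not taken from any specific banking module:

```python
def interest_cents(principal_cents: int, annual_rate: float, years: int) -> int:
    """Compound interest earned, rounded to whole cents (illustrative)."""
    if principal_cents < 0 or annual_rate < 0 or years < 0:
        raise ValueError("inputs must be non-negative")
    final = principal_cents * (1 + annual_rate) ** years
    return round(final) - principal_cents


def test_interest_cents():
    # A handful of focused cases on the essential function is the 20%
    # effort that catches the bugs that matter.
    assert interest_cents(100_00, 0.05, 1) == 5_00    # 5% of 100.00 for 1 year
    assert interest_cents(100_00, 0.05, 2) == 10_25   # compounding, not 10.00
    assert interest_cents(0, 0.05, 10) == 0           # edge case: no principal
```

Note the second assertion: it pins down the compounding behavior, which is exactly the kind of detail a manual tester would miss.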
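And for the last bullet, an interface smoke test can be as small as this sketch: one generic helper that runs a single trivial call against an external system, plus an example probe. The SMTP host and port below are placeholders, and the probe function is an illustration of “one simple command per interface”, not a complete integration test:

```python
def smoke_test(name: str, probe) -> None:
    """Run one trivial call against an external interface and fail loudly.

    `probe` is any zero-argument callable that raises on failure.
    """
    try:
        probe()
    except Exception as exc:
        raise AssertionError(
            f"interface {name!r} is unreachable or misconfigured: {exc}"
        ) from exc


def smtp_probe():
    # Placeholder host/port: one NOOP is enough to catch most
    # configuration problems (wrong host, blocked port, TLS mismatch).
    import smtplib
    with smtplib.SMTP("mail.example.com", 587, timeout=5) as server:
        server.noop()
```

Running `smoke_test("smtp", smtp_probe)` on both the test and production environments, on every deploy, covers the configuration issues the article warns about.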

That’s it for today!

See Part 2 of this article to read about how to write tests that give your code context and meaning.

Dominik Steinhauf

CEO, in IT for over 20 years, .NET developer, software architect at Creative Yellow Solutions (formerly Indesys), trainer and software development consultant for the banking and energy sectors

If you need help with your software project, or need customized software for your company, contact me at:
dominik.steinhauf ( at)