Tests help you understand the code: each one is a minimal, self-explanatory example of how an atomic action is used.
When a bug appears, a new test replicating the problem should appear too. It should fail; once you fix the problem it should pass and stay in the suite. This is a powerful and useful technique: if you find a bug, write a test that reveals it, then fix the bug quickly by debugging against that test. The test then remains as a regression test.
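A minimal sketch (the `parse_price` function and its bug are hypothetical): suppose parsing a price with a leading currency symbol used to raise `ValueError`. This test was written to reproduce that, and it stays in the suite after the fix:

```python
def parse_price(text):
    # Fixed implementation: the original crashed on a leading currency symbol.
    return float(text.lstrip("$"))


def test_parse_price_accepts_currency_symbol():
    # Replicates the reported bug; it failed before the fix and now
    # guards against the bug ever coming back.
    assert parse_price("$4.20") == 4.20
```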
Tests are first and foremost a design tool. Code that is hard to test is likely hard to read, reason about, and maintain.
When a test is hard to write, or when a test is brittle (breaking often when the code changes), you've probably written bad code. Tests are a code-quality metric, not a code-correctness metric.
You should be able to take your unit tests and run them on someone else's computer, even when it's not connected to the internet.
Follow the AAA rule (Arrange, Act, Assert). This is a general pattern that makes tests more readable and useful. (1) Arrange: set up the variables, properties, and everything else the test needs to run and produce the expected result. (2) Act: call the method you are testing. (3) Assert: verify the given result.
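A minimal sketch of the pattern (the `Cart` class is a hypothetical stand-in):

```python
class Cart:
    # Hypothetical class used only to illustrate the AAA structure.
    def __init__(self):
        self.items = []

    def add(self, name, price):
        self.items.append((name, price))

    def total(self):
        return sum(price for _, price in self.items)


def test_cart_total_sums_item_prices():
    # Arrange: build the object and state the test needs.
    cart = Cart()
    cart.add("apple", 1.50)
    cart.add("bread", 2.00)

    # Act: call the method under test.
    total = cart.total()

    # Assert: verify the result.
    assert total == 3.50
```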
First write the happy-path tests; those are the easy ones. Then, when they pass, look for the edges and boundaries of your code.
If possible, cover every code path.
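For example, a parametrized test can start from the happy path and then walk the boundaries; the `clamp` function here is a hypothetical illustration:

```python
import pytest


def clamp(value, low, high):
    """Constrain value to the inclusive range [low, high]."""
    return max(low, min(value, high))


@pytest.mark.parametrize(
    "value, expected",
    [
        (5, 5),    # happy path: value already inside the range
        (-1, 0),   # below the lower boundary
        (11, 10),  # above the upper boundary
        (0, 0),    # exactly on the lower boundary
        (10, 10),  # exactly on the upper boundary
    ],
)
def test_clamp_covers_boundaries(value, expected):
    assert clamp(value, 0, 10) == expected
```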
Name your tests clearly; don't be afraid of long names. It's also good practice to add comments/documentation explaining why the test exists and what is being tested.
Test that every exception your code is supposed to raise is actually raised.
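With pytest this is a one-liner via `pytest.raises`; the `withdraw` function is hypothetical:

```python
import pytest


def withdraw(balance, amount):
    # Hypothetical function: refuses to overdraw an account.
    if amount > balance:
        raise ValueError("insufficient funds")
    return balance - amount


def test_withdraw_rejects_overdraft():
    with pytest.raises(ValueError, match="insufficient funds"):
        withdraw(balance=100, amount=150)
```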
Set up a clean environment for each test. Previously run tests can leave residual data that makes the current tests fail; even worse, your tests may end up depending on that data.
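Fixtures are one way to get a fresh environment per test. A minimal sketch using pytest's built-in `tmp_path` fixture, which hands each test its own empty directory:

```python
def test_writes_report_to_disk(tmp_path):
    # tmp_path is a fresh, empty directory created for this test alone,
    # so no leftovers from previous runs can leak in or out.
    report = tmp_path / "report.txt"
    report.write_text("ok")
    assert report.read_text() == "ok"
```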
If it takes too much code to set up a test, it’s likely that the unit you want to test is too complicated and would benefit from refactoring. The more code you need to write in the setup part of the unit test, the tighter the coupling between the test and its unit, which leads to maintenance headaches when the unit code starts changing.
Unit tests have to be FIRST: Fast, Independent, Repeatable, Self-validating, and Timely.
Write tests when you know the answers, before writing the complicated code.
Try to test pure functions.
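Pure functions are the easiest units to test: same input, same output, no setup or teardown. A trivial sketch (the `slugify` function is illustrative):

```python
def slugify(title):
    # Pure: the output depends only on the input, with no side effects.
    return title.strip().lower().replace(" ", "-")


def test_slugify_is_trivial_to_test():
    # No fixtures, no mocks, no cleanup: just input and expected output.
    assert slugify("  Hello World ") == "hello-world"
```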
The more you have to mock out to test your code, the worse your code is. The more code you have to instantiate and put in place to be able to test a specific piece of behavior, the worse your code is. The goal is small testable units, along with higher-level integration and functional tests to test that the units cooperate correctly.
They test individual software units, which can be individual classes, whole aggregates, or whatever else fits. Since we’re only interested in the proper behavior of our unit, we should isolate external dependencies such as databases, remote systems, etc. Hence, we say that unit tests are performed in isolation.
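A minimal sketch of such isolation using the standard library's `unittest.mock`; the `Greeter` class and its repository are hypothetical stand-ins for a unit and its database dependency:

```python
from unittest.mock import Mock


class Greeter:
    # Hypothetical unit: depends on a user repository (e.g. a database)
    # that is injected, so tests can swap in a stand-in.
    def __init__(self, repository):
        self.repository = repository

    def greet(self, user_id):
        name = self.repository.get_name(user_id)
        return f"Hello, {name}!"


def test_greet_without_touching_a_real_database():
    repository = Mock()
    repository.get_name.return_value = "Ada"

    assert Greeter(repository).greet(user_id=1) == "Hello, Ada!"
```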
Tests should NOT be fragile: a test should not fail without a reason.
Other testing techniques complement unit tests:
Property-based testing: it uses a model of the system to describe the allowed inputs, outputs, and state transitions. It then randomly (but repeatably) generates a vast number of test cases to exercise the system. Instead of looking for success, property-based testing looks for failures: it detects states and values that could not have been produced according to the laws of the model, and flags those cases as failures.
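A minimal sketch with the Hypothesis library (`pip install hypothesis`), asserting a property that must hold for any input; the run-length codec is a toy example:

```python
from hypothesis import given, strategies as st


def run_length_encode(text):
    # Toy implementation used to illustrate the idea.
    encoded = []
    for char in text:
        if encoded and encoded[-1][0] == char:
            encoded[-1][1] += 1
        else:
            encoded.append([char, 1])
    return encoded


def run_length_decode(encoded):
    return "".join(char * count for char, count in encoded)


@given(st.text())
def test_decode_inverts_encode(text):
    # Property: decoding an encoding must reproduce the original input,
    # for any string Hypothesis generates.
    assert run_length_decode(run_length_encode(text)) == text
```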
Fault-injection testing: you run the system under test in a controlled environment, then force “bad things” to happen. These days, “bad things” mostly means network problems and hacking attacks. I'll focus on the network problems for now.
Run the system on a bunch of VMs, then generate load. While the load is running against the system, introduce partitions and delays into the virtual network interfaces. Introducing controlled faults and delays in the network lets us try out conditions that can happen “in the wild” and see how the system behaves.
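The full setup needs real infrastructure, but the idea scales down. As a loose, in-process analogue (all names hypothetical), a flaky wrapper around a dependency lets you test retry logic against injected failures:

```python
import random


class FlakyTransport:
    # Wraps a real transport and injects failures with a fixed probability.
    # A seeded RNG keeps the "random" faults reproducible across runs.
    def __init__(self, transport, failure_rate=0.3, seed=42):
        self.transport = transport
        self.failure_rate = failure_rate
        self.rng = random.Random(seed)

    def send(self, message):
        if self.rng.random() < self.failure_rate:
            raise ConnectionError("injected network fault")
        return self.transport.send(message)


def send_with_retries(transport, message, attempts=5):
    # Unit under test: keeps retrying until the transport succeeds.
    for _ in range(attempts):
        try:
            return transport.send(message)
        except ConnectionError:
            continue
    raise ConnectionError("gave up after retries")


class EchoTransport:
    def send(self, message):
        return message


def test_retries_survive_injected_faults():
    flaky = FlakyTransport(EchoTransport(), failure_rate=0.3, seed=42)
    assert send_with_retries(flaky, "ping") == "ping"
```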
In simulation testing, we use a traffic model to generate a large volume of plausible “actions” for the system. Instead of just running those actions, though, we store them in a database.
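A minimal, hypothetical sketch of that first step: generate actions from a traffic model and persist them (here with the standard library's sqlite3) so the same run can be replayed and inspected later:

```python
import json
import random
import sqlite3

# Hypothetical traffic model: relative frequencies of user actions.
TRAFFIC_MODEL = {"browse": 0.7, "add_to_cart": 0.2, "checkout": 0.1}


def generate_actions(n, seed=0):
    rng = random.Random(seed)  # seeded, so the run is reproducible
    kinds = list(TRAFFIC_MODEL)
    weights = list(TRAFFIC_MODEL.values())
    for i in range(n):
        yield {"id": i, "kind": rng.choices(kinds, weights)[0]}


def store_actions(db_path, actions):
    # Store the generated actions instead of running them immediately.
    con = sqlite3.connect(db_path)
    con.execute("CREATE TABLE IF NOT EXISTS actions (id INTEGER, body TEXT)")
    con.executemany(
        "INSERT INTO actions VALUES (?, ?)",
        ((a["id"], json.dumps(a)) for a in actions),
    )
    con.commit()
    con.close()


store_actions("simulation_run.db", generate_actions(10_000))
```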
When we just assert that some methods are called in a specific order with specific parameters, it feels almost like testing that the compiler/interpreter works. We've fallen into the trap of testing that the code does what the code says it does, rather than testing the functional behavior we care about.
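An illustration of the trap (all names hypothetical): the first test merely restates the implementation, while the second asserts the behavior we actually care about:

```python
from unittest.mock import Mock, call


def apply_discount(cart, pricer):
    # Unit under test: price each item, then apply a 10% discount.
    total = sum(pricer.price(item) for item in cart)
    return total * 0.9


def test_mirrors_the_implementation():  # the trap
    pricer = Mock()
    pricer.price.return_value = 10
    apply_discount(["a", "b"], pricer)
    # This just re-states the source code, call by call.
    assert pricer.price.call_args_list == [call("a"), call("b")]


def test_asserts_the_behavior():  # what we actually care about
    pricer = Mock()
    pricer.price.return_value = 10
    assert apply_discount(["a", "b"], pricer) == 18.0
```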
If you can’t make a test smaller, your code probably has too many dependencies.
The key problem with testing is that a test (of any kind) that uses one particular set of inputs tells you nothing at all about the behaviour of the system or component when it is given a different set of inputs. The huge number of different possible inputs usually rules out the possibility of testing them all, hence the unavoidable concern with testing will always be, “have you performed the right tests?” The only certain answer you will ever get to this question is an answer in the negative — when the system breaks.
By definition, unit tests catch predictable bugs, or bugs that have already been encountered. They are often reactive, as opposed to proactive.
A good planner can foresee many types of issues and write unit tests in advance to validate the code that addresses them, but by definition this is “low-hanging fruit.”
No matter how good I am, I WILL miss bugs; sometimes, really bad ones. I have been constantly surprised by the bugs that crop up in projects that I would have SWORN were “perfect.”
Code coverage tells you the percentage of code that is executed during the tests; however, executing code is not the same as testing it.
Generally that percentage reflects “the happy path”, the best-case scenario.
Thus, the code coverage value does not measure code correctness.
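A sketch of why coverage can mislead: this test executes 100% of `absolute`'s lines, yet asserts nothing, so a broken implementation still “passes”:

```python
def absolute(x):
    # Buggy on purpose: returns the wrong sign for negative input.
    if x < 0:
        return x
    return x


def test_gives_full_coverage_but_checks_nothing():
    # Both branches run, so coverage reports 100%,
    # but without assertions the bug goes unnoticed.
    absolute(-5)
    absolute(5)
```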
Unit tests should be able to run in parallel: if they are properly isolated, the order and concurrency of execution must not matter.