===== Types of testing =====
  * **Acceptance testing**. Tests that ensure a user perspective is fulfilled.
  * **Unit tests**. They test that a single piece of code (function, class, module...) fulfills its requirements (see the sketch below).
  * **Integration tests**. They test the collaboration between components.
  * **Regression tests**. They ensure that previously developed features are still working.
  * **Performance tests**. They test execution speed, memory usage, and other performance metrics.
  * **Stress tests**. They test the system response under adverse conditions (failure of components, attacks...).
  * **Load tests**. They check the performance of the system under a high workload.
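A minimal sketch of a unit test, using Python's standard ''unittest'' module; the ''apply_discount'' function is a hypothetical unit under test, not something from this wiki:

<code python>
import unittest

# Hypothetical unit under test: a single, isolated function.
def apply_discount(price: float, percent: float) -> float:
    """Return the price after applying a percentage discount."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

class ApplyDiscountTest(unittest.TestCase):
    def test_ten_percent_off(self):
        self.assertEqual(apply_discount(200.0, 10), 180.0)

    def test_invalid_percent_is_rejected(self):
        with self.assertRaises(ValueError):
            apply_discount(200.0, 150)

if __name__ == "__main__":
    unittest.main()
</code>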
----

  * **Smoke testing**: a software testing process that determines whether the deployed software build is stable or not. It is usually done when a build is released (a deployment smoke check is sketched below, after this list).
  * **Sanity testing**: performed after receiving a software build with minor changes in code or functionality, to ascertain that the bugs have been fixed and that no further issues have been introduced by these changes. The goal is to determine that the proposed functionality works roughly as expected. If the sanity test fails, the build is rejected to save the time and cost of more rigorous testing.
    * Sometimes smoke and sanity are defined as synonyms.
  * **Pre-flight check**: tests that are repeated in a production-like environment, to alleviate the "builds on my machine" syndrome. Often this is realized by running an acceptance or smoke test in a production-like environment.
  * **Functional testing**: validates the software system against the functional requirements/specifications. The purpose of functional tests is to exercise each function of the application by providing appropriate input and verifying the output against the functional requirements.
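A minimal smoke-check sketch (the staging URL is a hypothetical example): after a deployment, just verify that the build came up and answers, nothing more.

<code python>
import unittest
import urllib.request

# Hypothetical staging URL; a smoke test only checks that the deployed build
# is up and responding, not that every feature behaves correctly.
BASE_URL = "https://staging.example.com"

class SmokeTest(unittest.TestCase):
    def test_service_answers_on_its_health_endpoint(self):
        with urllib.request.urlopen(BASE_URL + "/health", timeout=5) as response:
            self.assertEqual(response.status, 200)

if __name__ == "__main__":
    unittest.main()
</code>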
===== Testing principles =====
They test individual software units, which can be individual classes, whole aggregates, or whatever else fits. Since we’re only interested in the proper behavior of our unit, we should isolate external dependencies such as databases, remote systems, etc. Hence, we say that unit tests are performed in isolation.
| + | |||
| + | Tests should NOT be fragile. Test should not fail without reasons. | ||
| + | |||
| + | Other properties that tests should follow: | ||
| + | |||
| + | - Structural independence: To be structural independent your test result should not change if the structure of the code changes. | ||
| + | - Behavioural dependence: To be behavioural dependent a test case result should change when the behavior of the code under test change. | ||
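A sketch of those points with hypothetical names: the external dependency (a repository backed by a database) is replaced by a test double, so the test runs in isolation, and the assertion checks observable behaviour rather than internal structure.

<code python>
import unittest
from unittest.mock import Mock

# Hypothetical unit under test: it depends on a repository (e.g. a database)
# that the test isolates with a double, so no real database is needed.
class OrderService:
    def __init__(self, repository):
        self.repository = repository

    def total_for(self, customer_id):
        orders = self.repository.orders_for(customer_id)
        return sum(order["amount"] for order in orders)

class OrderServiceTest(unittest.TestCase):
    def test_total_sums_all_orders_of_the_customer(self):
        repository = Mock()
        repository.orders_for.return_value = [{"amount": 10}, {"amount": 15}]
        service = OrderService(repository)
        # Behavioural assertion: the observable result, not how it is computed.
        self.assertEqual(service.total_for("customer-42"), 25)

if __name__ == "__main__":
    unittest.main()
</code>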
===== Testing strategies =====

{{ :wiki2:engineering:advanced_unit_test_part_v_-_unit_test_patterns.zip |}}

{{ :wiki2:engineering:useful_design_patterns_for_unit_testing_tdd.zip |}}

==== Better Than Unit Tests - Article ====

{{ :wiki2:engineering:better_than_unit_tests.zip |}}
=== Automated contract ===

  * The contract model should be written by a consumer to express only the parts of the service interface they care about. (If you overspecify by modeling things you don't actually use, your tests will throw false negatives.)
  * The supplier should not write the contract. Consumers write models that express their desired interface, partly to help validate their understanding of the protocol. They may also uncover cases that are in the supplier's blind spot.
  * The test double should not make any assumptions about the logical consistency of the results with respect to the parameters. You should only be testing the application code and the way it deals with the protocol.
    * E.g., if you are testing an "add to cart" interface, do not verify that the item you requested to add was the one actually added. That would couple the test to the implementation logic of the back-end service. Instead, simply verify that the service accepts a well-formed request and returns a well-formed result (see the sketch below).
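A consumer-side sketch of that idea (all names are hypothetical): the consumer declares only the response fields it actually uses, and the check verifies shape, never back-end logic.

<code python>
# Consumer-written contract model: only the fields this consumer actually uses.
EXPECTED_FIELDS = {"cart_id": str, "item_count": int}

def satisfies_contract(response: dict) -> bool:
    """True if the response is well formed with respect to the consumer's model."""
    return all(
        field in response and isinstance(response[field], expected_type)
        for field, expected_type in EXPECTED_FIELDS.items()
    )

# Against a test double we only check that a well-formed request yields a
# well-formed result; we never assert which item ended up in the cart.
fake_supplier_response = {"cart_id": "abc", "item_count": 3}
assert satisfies_contract(fake_supplier_response)
</code>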
| + | |||
| + | === Property-based test === | ||
| + | |||
| + | It uses a model of the system to describe the allowed inputs, outputs, and state transitions. Then it randomly (but repeatably) generates a vast number of test cases to exercise the system. Instead of looking for success, property-based testing looks for failures. It detects states and values that could not have been produced according to the laws of the model, and flags those cases as failures. | ||
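A small property-based sketch using the Hypothesis library (assuming it is installed, e.g. ''pip install hypothesis'', and that the test is run with pytest): instead of hand-picked inputs, it generates many random but repeatable cases and reports any input that violates the stated properties.

<code python>
from hypothesis import given, strategies as st

# Properties of sorting: the result is idempotent and preserves length.
@given(st.lists(st.integers()))
def test_sorting_is_idempotent_and_preserves_length(xs):
    once = sorted(xs)
    assert sorted(once) == once   # sorting an already sorted list changes nothing
    assert len(once) == len(xs)   # no elements appear or disappear
</code>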
| + | |||
| + | === Fault injection === | ||
| + | |||
| + | |||
| + | You run the system under test in a controlled environment, then force "bad things" to happen. These days, "bad things" mostly means network problems and hacking attacks. I'll focus on the network problems for now. | ||
| + | |||
| + | Run the system into a bunch of VMs, then generate load. While the load is running against the system, introduce partitions and delays into the virtual network interfaces. Introducing controlled faults and delays in the network, lets us try out conditions that can happen "in the wild" and see how the system behaves. | ||
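The paragraphs above describe infrastructure-level fault injection (VMs, virtual network interfaces). A minimal in-process sketch of the same idea, with hypothetical names, injects timeouts into a transport wrapper and observes how the client code copes:

<code python>
import unittest

# Fault-injecting wrapper around the real transport: it deterministically
# raises timeouts before letting calls through, simulating a bad network.
class FaultInjectingTransport:
    def __init__(self, real_send, inject_timeouts=2):
        self.real_send = real_send
        self.remaining_faults = inject_timeouts

    def send(self, request):
        if self.remaining_faults > 0:
            self.remaining_faults -= 1
            raise TimeoutError("injected network fault")
        return self.real_send(request)

def send_with_retry(transport, request, attempts=3):
    """Client code under test: retries on timeouts, gives up after `attempts`."""
    last_error = None
    for _ in range(attempts):
        try:
            return transport.send(request)
        except TimeoutError as err:
            last_error = err
    raise last_error

class FaultInjectionTest(unittest.TestCase):
    def test_client_recovers_from_two_injected_timeouts(self):
        transport = FaultInjectingTransport(lambda req: "ok", inject_timeouts=2)
        self.assertEqual(send_with_retry(transport, "ping"), "ok")

    def test_client_gives_up_when_the_network_stays_down(self):
        transport = FaultInjectingTransport(lambda req: "ok", inject_timeouts=10)
        with self.assertRaises(TimeoutError):
            send_with_retry(transport, "ping")

if __name__ == "__main__":
    unittest.main()
</code>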
| + | |||
| + | === Simulation === | ||
| + | |||
| + | In simulation testing, we use a traffic model to generate a large volume of plausible "actions" for the system. Instead of just running those actions, though, we store them in a database. | ||
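A sketch of that generate-and-store step (action names and schema are hypothetical): a seeded traffic model produces plausible actions and persists them in SQLite, so the exact same workload can later be replayed against the system and failures traced back to concrete actions.

<code python>
import random
import sqlite3

rng = random.Random(1234)  # seeded: the generated traffic is repeatable

conn = sqlite3.connect("simulation.db")
conn.execute(
    "CREATE TABLE IF NOT EXISTS actions (id INTEGER PRIMARY KEY, kind TEXT, payload TEXT)"
)

# A very simple traffic model: plausible user actions with random parameters.
ACTION_KINDS = ["view_item", "add_to_cart", "checkout"]
for _ in range(10_000):
    kind = rng.choice(ACTION_KINDS)
    payload = f"sku={rng.randint(1, 500)}"
    conn.execute("INSERT INTO actions (kind, payload) VALUES (?, ?)", (kind, payload))
conn.commit()

# Later, a driver reads the stored actions back and feeds them to the system
# under test; here we just peek at the first few.
for kind, payload in conn.execute("SELECT kind, payload FROM actions LIMIT 5"):
    print(kind, payload)
</code>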
===== Some notes =====
If you can’t make a test smaller, your code probably has too many dependencies.
| + | |||
| + | ---- | ||
| + | * http://blog.codepipes.com/testing/software-testing-antipatterns.html | ||
| + | * {{ :wiki2:engineering:software_testing_anti-patterns.zip |}} | ||
| + | |||
| + | - Having unit tests without integration tests | ||
| + | - Having integration tests without unit tests | ||
==== Test criticism ====
The key problem with testing is that a test (of any kind) that uses one particular set of inputs tells you nothing at all about the behaviour of the system or component when it is given a different set of inputs. The huge number of different possible inputs usually rules out the possibility of testing them all, hence the unavoidable concern with testing will always be, "have you performed the right tests?" The only certain answer you will ever get to this question is an answer in the negative: when the system breaks.
By definition, unit tests test predictable bugs, or bugs that have already been encountered. They are often reactive, as opposed to proactive.

A good planner can foresee many types of issues and write unit tests in advance to validate the code that addresses them, but, by definition, this is “low-hanging fruit.”

No matter how good I am, I WILL miss bugs; sometimes, really bad ones. I have been constantly surprised by the bugs that crop up in projects that I would have SWORN were “perfect.”
==== Coverage value ====
  * When a test fails it should be easy to find the reason (see the example below).
  * [[https://dzone.com/articles/10-tips-to-writing-good-unit-tests]]
  * {{ :wiki2:engineering:10_tips_to_writing_good_unit_tests.zip |}}
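A small sketch of a failure that is easy to diagnose (the ''taxed'' function is hypothetical): one behaviour per test, a descriptive test name, and an assertion message that states the expectation.

<code python>
import unittest

def taxed(price, rate=0.21):  # hypothetical unit under test
    return round(price * (1 + rate), 2)

class TaxTest(unittest.TestCase):
    def test_default_rate_adds_21_percent(self):
        result = taxed(100.0)
        # If this fails, the test name and message point straight at the reason.
        self.assertEqual(result, 121.0, msg=f"expected 21% tax on 100.0, got {result}")

if __name__ == "__main__":
    unittest.main()
</code>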
| + | |||
| + | |||
| + | ===== Notes ===== | ||
| + | |||
| + | Unit tests could run parallelized. | ||
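For this to work, tests must not share mutable state (global variables, a common database, fixed file paths). A parallel-safe sketch; assuming the ''pytest-xdist'' plugin is installed, such tests can be distributed across cores with ''pytest -n auto'':

<code python>
import tempfile
import unittest

# Parallel-safe: each test creates its own temporary resources instead of
# touching shared state, so tests can run in any order and at the same time.
class ParallelSafeTest(unittest.TestCase):
    def test_writes_to_its_own_temporary_file(self):
        with tempfile.NamedTemporaryFile(mode="w+") as tmp:
            tmp.write("hello")
            tmp.seek(0)
            self.assertEqual(tmp.read(), "hello")

if __name__ == "__main__":
    unittest.main()
</code>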