Tests are code. Writing tests is not a "step"—it is part of writing the code itself. The component is nothing without its tests.
It should be easy to verify any requirement with a test. The tests should tell the story of the requirements.
A developer can test any component in isolation (unit testing) or can test the component in the constellation in which it normally exists (integration testing).
Just so we've said it: tests are not a place to use a different coding style or different coding practices than in "regular" code. Choose your frameworks wisely. It should be easy to write powerful, elegant and easily understood tests. Build your own support code and libraries where needed. Apply the same coding principles as you would with the code being tested. You have to maintain testing code just like any other code.
We discuss below that we prefer integration tests to unit tests—that only works if you provide a way to write high-performance integrated tests without repeating a lot of code.
Unit tests are very easy to write for properly written components. With a proper infrastructure, such tests can just as easily be executed in an integrated environment. In such cases, there is generally no need to invest time (and incur maintenance debt) writing two sets of tests.
Automated tests will sometimes replace components and dependencies with fake or mocked objects, in order to isolate and test only a component's logic without incurring the costs of configuring and setting up unrelated components.
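To make that concrete, here is a minimal Python sketch (the names `PriceConverter` and `FakeRateProvider` are invented for illustration; they are not from any real library). The fake stands in for a dependency that would otherwise require network setup, so the test exercises only the component's own logic.

```python
class FakeRateProvider:
    """Test double: returns a fixed exchange rate instead of calling a real service."""
    def get_rate(self, currency: str) -> float:
        return 1.25  # fixed rate; the real provider would make a network call

class PriceConverter:
    """The component under test: its conversion logic is what we want to isolate."""
    def __init__(self, provider):
        self._provider = provider

    def convert(self, amount: float, currency: str) -> float:
        return round(amount * self._provider.get_rate(currency), 2)

def test_convert_uses_provider_rate():
    converter = PriceConverter(FakeRateProvider())
    assert converter.convert(10.0, "CHF") == 12.5

test_convert_uses_provider_rate()
```

The fake keeps the test fast and deterministic, which is exactly the trade-off described above: logic is verified, but nothing about the real provider's behavior is.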
If integration testing is too complicated or too slow, then a web of unit tests may suffice. In most cases, though, this doesn't apply and we avoid mocking entirely and test components directly in common, integrated settings.
For example, if a component is commonly used as part of a database-based application, then it is more effective to test that component in such a scenario, rather than expending effort in isolating the component in order to have a "true" unit test.
With only unit tests, there is a danger that the component works, but only as tested, not as actually used.
Often, these problems arise in component configuration. A unit test will pass in carefully prepared (and sometimes faked) dependencies and run all-green.
However, an integration test will check that the configuration code also works. That is, that the component is configured correctly for products that use it and not just in the tests that verify its behavior.
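A sketch of what "testing the configuration code" can look like, assuming a hypothetical composition root `create_app` that wires components the same way production does (all names here are invented for illustration):

```python
def create_app(config: dict):
    """Hypothetical composition root: wires components the way production does."""
    registry = {}
    registry["cache"] = {"backend": config.get("cache_backend", "memory")}
    if config.get("enable_notifications", True):
        registry["notifier"] = {"transport": config.get("mail_transport", "smtp")}
    return registry

def test_production_configuration():
    # Integration-style check: build the app through the *production* wiring
    # path, not with hand-prepared test dependencies, and verify the result.
    app = create_app({"cache_backend": "redis"})
    assert app["cache"]["backend"] == "redis"
    assert "notifier" in app  # notifications are on by default
    assert app["notifier"]["transport"] == "smtp"

test_production_configuration()
```

The point is that the test goes through the same wiring code a product would, so a misconfigured default fails here rather than in production.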
Mocks and fakes must be used judiciously; otherwise, you end up either testing only the mock or hiding certain classes of problems, as discussed in more detail below.
Imagine a UI list that validates and saves entries when the focus changes. This list might work just fine in a test, where notifications and side-effects as a result of saving are disabled with mocks.
This is no longer the real-world situation, though. What happens if one of the notifications would have led to a reload of the list or a state-change in one or more objects? What if the list only saves an object if it is marked as "changed", but a spurious event resets that status in integration? This kind of interaction—this kind of _bug_—represents exactly the kind of thing we would miss when testing the list in too isolated a manner.
Because we've mocked away too much—because we focused too tightly on a unit test of the list—we've missed a bug that will come up in production instead.
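The scenario above can be reduced to a small Python sketch (the classes `Item` and `ListView` and the handler `reload_all` are hypothetical stand-ins, not real framework code). With notifications mocked away, the list saves everything and the test is green; with the production notification handler attached, the first save resets the "changed" flags and the second item is silently never saved.

```python
class Item:
    def __init__(self):
        self.changed = True
        self.saved = False

class ListView:
    def __init__(self, items, on_saved=None):
        self.items = items
        # In a unit test, the notification hook is mocked away to a no-op.
        self.on_saved = on_saved or (lambda item: None)

    def focus_lost(self):
        for item in self.items:
            if item.changed:  # only "changed" objects are saved
                item.saved = True
                item.changed = False
                self.on_saved(item)  # in production, this fires notifications

def reload_all(items):
    """Production notification handler: reloads the list, resetting 'changed'."""
    for item in items:
        item.changed = False

# Unit-test style: notifications disabled, both items save, all green.
items = [Item(), Item()]
ListView(items).focus_lost()
assert all(i.saved for i in items)

# Integration style: the first save triggers a reload that clears 'changed'
# on the remaining item, so the second item is silently never saved.
items2 = [Item(), Item()]
ListView(items2, on_saved=lambda item: reload_all(items2)).focus_lost()
assert items2[0].saved and not items2[1].saved  # the bug the unit test missed
```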
While we don't practice strict TDD at Encodo, we do write tests from the very beginning.
It's really the only way to test the code that you're writing, isn't it? What are you going to do instead? Fire up the web server each time you want to throw data at a controller? Use a browser or Postman to fire those requests? Or are you starting a desktop UI and clicking around and typing? Or did you hack together a little console application in order to debug code?
Stop doing all of those things. Use a testing environment instead, so your product acquires a growing stable of automated, repeatable regression tests. It will become second nature to write tests to verify requirements about the components you write.
As we said above: the tests are part of the component.
A point made above is that unit tests are useful but they're often not complete. Unit tests can fool you with excellent syntactic coverage but sub-standard functional coverage. We have many tools to measure the former, but only experience to measure the latter.
Sure, you've covered all of the lines, but did you actually choose a representative set of inputs? Are you making the right assertions? Did you actually test the requirements?
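A tiny Python example of the gap between syntactic and functional coverage (the function is invented for illustration). One happy-path test executes every line, so a coverage tool reports 100%, yet an all-zero input still crashes:

```python
def normalize(values):
    """Scale a list of non-negative numbers so they sum to 1."""
    total = sum(values)
    return [v / total for v in values]

# This single test executes every line: 100% "syntactic" coverage.
assert normalize([1, 1, 2]) == [0.25, 0.25, 0.5]

# Functional coverage is another matter: an all-zero input raises
# ZeroDivisionError, a case the green coverage report never saw.
try:
    normalize([0, 0])
    crashed = False
except ZeroDivisionError:
    crashed = True
assert crashed
```

Line-coverage tools can tell you the first assertion covers everything; only a representative set of inputs reveals the second problem.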
One technique that we use a lot is expectation files (called snapshots in some frameworks). Instead of writing several (sometimes, dozens of) assertions, we format output to text and then compare it against the text produced by the previous, presumably correct test run.
The idea is to detect when something has changed. We use this in Quino to verify log output during certain operations, or to verify queries or generated SQL or model structure or lists of data. Expectation files increase the depth and robustness of tests while at the same time making it extremely efficient to write and maintain such tests.
An expectation (or snapshot) is updated automatically when it changes and shows up as a difference in source control. If the change is expected, the developer commits it.
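The mechanism can be sketched in a few lines of Python. This is not Quino's implementation, just a minimal illustration of the idea: render output to text, compare against the stored expectation, and rewrite the file on mismatch so the change surfaces as a source-control diff.

```python
from pathlib import Path
import tempfile

def check_against_expectation(name: str, actual: str, expectations_dir: Path) -> bool:
    """Compare rendered output to the stored expectation; rewrite it on mismatch.

    A mismatch rewrites the expectation file so the change shows up as a
    diff in source control; the developer reviews it and either commits
    the new expectation or investigates the regression.
    """
    expected_file = expectations_dir / f"{name}.expected.txt"
    if expected_file.exists() and expected_file.read_text() == actual:
        return True
    expected_file.write_text(actual)  # update in place; review the diff
    return False

tmp = Path(tempfile.mkdtemp())
report = "SELECT id, name FROM users ORDER BY id"
assert check_against_expectation("user_query", report, tmp) is False  # first run records
assert check_against_expectation("user_query", report, tmp) is True   # second run matches
```

One expectation file can replace dozens of hand-written assertions about generated SQL, log output, or model structure, which is where the efficiency claim above comes from.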
It takes a lot of experience to write just the right number and kind of tests. You don't want to write too many tests: it's code you have to maintain, after all. Also, it can be confusing when the same problem crops up in multiple places in different fixtures.
Some components should have unit tests as well as integration tests. For other components, unit tests are redundant because the integration tests cover everything already. Experience guides you in deciding what to write first, what to keep, and what to throw away.
It is possible to have too many tests. If you're not aware of the layer in which your code resides, you might end up running the same test in multiple scenarios even though the component behaves identically in all of them.
For example, if you're testing how expressions are mapped to a database, then that test should definitely run against every supported database. If you're testing how a high-level query composes those expressions before they get to the mapper, then you only really need to run it against one database in integration.
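In Python terms, the distinction might look like this (the mapper, composer, and dialect list are all hypothetical; a real suite would use a test framework's parameterization instead of a plain loop):

```python
SUPPORTED_DATABASES = ["postgresql", "sqlserver", "sqlite"]  # hypothetical list

def map_expression_to_sql(op: str, dialect: str) -> str:
    """Low-level mapper: its output genuinely differs per database dialect."""
    concat_operators = {"postgresql": "||", "sqlite": "||", "sqlserver": "+"}
    return concat_operators[dialect] if op == "concat" else op

def compose_query(fields):
    """High-level composer: dialect-independent, so one backend suffices."""
    return " AND ".join(f"{f} = ?" for f in fields)

# Mapper behavior varies by backend: run this test against every database.
for db in SUPPORTED_DATABASES:
    assert map_expression_to_sql("concat", db) in ("||", "+")

# Composition is the same everywhere: one representative backend is enough.
assert compose_query(["id", "name"]) == "id = ? AND name = ?"
```

Running the composition test against all three backends would triple its cost without testing anything new.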
No-one wants to admit to releasing untested software. And no-one really wants to do manual testing. Automating tests reduces turnaround time for changes and enhancements. It also increases confidence for quick turnarounds when going to manual testing or production.
Unit tests are good, but prefer coverage in integration tests so that you have the best guarantee that your tests are running your code in a way that emulates the production environment as closely as possible.