I found this interesting (in another blog post of the same author):
> Keep your integration testing for smoke tests — to make sure your database actually starts and that you haven’t missed anything basic. Only when there is no way to exercise the code except when an actual full instance of the software is running should an end-to-end test be used.
This is the complete opposite of my experience. But I guess that's because he is developing library-like software, while I mostly work on application code. I've found unit tests mostly useless and a waste of time. But I'm sure that for a database library they are absolutely key...
When I've heard people make similar claims, what I've usually found is that they're testing "glue" code: controllers, routers, etc. Personally I find this a near-total waste of time: the tests are hard to write, almost never catch an actual bug, and failures in this code are totally obvious during a smoke test - automated or not.
I write a lot of "application" code (CLI, service, and back-end) and a lot of tests. Parsing, calculations, file generation, regexes... testing those catches lots of bugs.
The value comes from keeping the complex code separate from the glue, and of course testing it. And you can easily test dozens of cases, which is usually impractical with integration tests due to their complexity and run time.
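To make that concrete, here's a minimal sketch in Go of what I mean by pulling the complex part out of the glue. `parseRange` is a hypothetical function (not from any project mentioned here): because it's pure, a table of dozens of cases is cheap to run, while the HTTP handler that calls it stays trivial glue.

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// parseRange is the "complex" logic, kept free of any framework or I/O:
// it parses a "lo-hi" string into two ints, validating the order.
func parseRange(s string) (lo, hi int, err error) {
	parts := strings.SplitN(s, "-", 2)
	if len(parts) != 2 {
		return 0, 0, fmt.Errorf("expected lo-hi, got %q", s)
	}
	if lo, err = strconv.Atoi(parts[0]); err != nil {
		return 0, 0, err
	}
	if hi, err = strconv.Atoi(parts[1]); err != nil {
		return 0, 0, err
	}
	if lo > hi {
		return 0, 0, fmt.Errorf("lo > hi in %q", s)
	}
	return lo, hi, nil
}

func main() {
	// In a real project this table would live in a *_test.go file and
	// run under `go test`; inline here just to show the shape.
	cases := []struct {
		in     string
		wantOK bool
	}{
		{"1-10", true},
		{"10-1", false},
		{"banana", false},
	}
	for _, c := range cases {
		_, _, err := parseRange(c.in)
		fmt.Printf("%q ok=%v\n", c.in, err == nil)
	}
}
```

Each new edge case is one more row in the table, not one more multi-second integration run.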
> failures in this code are totally obvious during a smoke test - automated or not.
Yes, but if your codebase is large enough then a non-automated smoke test can be a very slow process, especially if things are configurable. It would have taken 3-4 days to smoke test all functionality manually at my last workplace.
Any time I make claims like this, people look at me like I'm insane. But here I am, year after year, meeting release targets with robust software while the teams that chase test coverage and other optics don't. I would assume I was missing something if this industry hadn't given me a million reasons not to believe in the best practices typically put forward.
It really depends on the type of software. An ETL pipeline, for example, is obviously way easier to develop and maintain through tests (record real system inputs, compare with desired outputs). But that logic doesn't extend to all other types of software.
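The record-and-compare idea above is essentially golden-file testing. A minimal Go sketch, with a hypothetical `transform` step standing in for one stage of a pipeline (in practice the input and golden output would be fixture files captured from a real run):

```go
package main

import (
	"fmt"
	"strings"
)

// transform is a stand-in for one ETL step: it trims whitespace and
// upper-cases each record. Any deterministic step works the same way.
func transform(in []string) []string {
	out := make([]string, 0, len(in))
	for _, rec := range in {
		out = append(out, strings.ToUpper(strings.TrimSpace(rec)))
	}
	return out
}

func main() {
	// "Recorded" input from a real system, and the golden output we
	// reviewed once and now hold the pipeline to.
	input := []string{"  alice ", "bob"}
	golden := []string{"ALICE", "BOB"}

	got := transform(input)
	for i := range golden {
		if got[i] != golden[i] {
			fmt.Printf("mismatch at %d: got %q, want %q\n", i, got[i], golden[i])
			return
		}
	}
	fmt.Println("output matches golden")
}
```

When the logic intentionally changes, you re-record the golden output and review the diff, which is usually much cheaper than hand-writing expectations.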
I'm not developing libraries, I'm developing an entire RDBMS. In my experience -- and this is broader than rqlite -- integration and end-to-end tests seem great at the start. But as you rely more and more on them they become costly to maintain and really hard to debug. A single test failure (often due to a flaky, timing-sensitive issue) means wading through layers and layers of code to identify the problem.
Overly relying on integration and end-to-end testing (note I said over-reliance, there is absolutely a need for them) becomes exponentially more costly over time (measured in development velocity and time to debug) as the test suite grows. If you find you're having difficulty identifying a place for unit tests, it may be that you're not decomposing your software properly in the first place. All this is probably manageable if you're a solo developer, but when a team is building the software it can become really painful.
For more details see the talk I gave to CMU[1] on my testing strategy.