My current theory for why there is so much confusion about testing is because developers are not often taught the difference between specification (theorems) and implementation (proofs) and why you want to separate the two.
It seems like business and investors want us to write code, more of it, and faster. Value, value, value!
So my question is: what is valuable?
Do you value your customers' data? Do you value their time? Their safety? Your brand and reputation? If you answered yes to any of these (and the other questions I may have forgotten or elided) then you should be encouraging your developers to write specifications.
One such specification, and a weak one that developers can write and maintain on their own without involving stakeholders, is the unit test. It's a weak form of specification for a library/module/component you would like to have because it specifies properties and behaviors by example: the spec gives an example of use and the expected outcomes. A good test is a verbose Hoare triple: given some context, when this method is called, then this result is expected.
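As a sketch of that framing, a unit test can lay the Hoare triple out explicitly. The `Stack` class and test names here are invented for the example:

```python
import unittest

class Stack:
    """A minimal stack, standing in for the component under specification."""
    def __init__(self):
        self._items = []

    def push(self, item):
        self._items.append(item)

    def pop(self):
        return self._items.pop()

class StackSpec(unittest.TestCase):
    def test_pop_returns_last_pushed_item(self):
        # Given: a stack with one element pushed (the precondition)
        stack = Stack()
        stack.push(42)
        # When: the method under specification is called
        result = stack.pop()
        # Then: the expected postcondition holds
        self.assertEqual(result, 42)
```

Run with `python -m unittest` as usual; the point is only that each test states one precondition, one action, and one expected outcome.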
Your implementation of that specification is what you're after and it's the reason why you should write the tests first. Writing tests first has little to do with productivity or design. You should write them first because your implementation should prove the theorems in your specification. Theory first. Proof after.
Sometimes you have to revise your theories after attempting the proof... but that's a story for another day.
But we can write better specifications! Unit tests are weak because each test only demonstrates a single expectation; it doesn't quantify over the space of possible inputs. If you want to write a better specification for your parser or transformation, try property-based testing with a library like QuickCheck. You give it a theorem that quantifies over the input space of your function under test and it will find out for you whether your proof holds (limited only by how many examples you want to try... say 10,000). It's not a proof that your implementation is correct, but it's a much stronger guarantee than a suite of unit tests and it doesn't cost you much more to use.
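QuickCheck itself is a Haskell library; as a language-neutral sketch of the idea, here is a hand-rolled property check in Python over a hypothetical run-length encoder. Real property-based testing tools (QuickCheck, Hypothesis) also shrink failing inputs to a minimal counterexample, which this sketch omits:

```python
import random

def rle_encode(s):
    """Run-length encode a string into (char, count) pairs."""
    out = []
    for ch in s:
        if out and out[-1][0] == ch:
            out[-1] = (ch, out[-1][1] + 1)
        else:
            out.append((ch, 1))
    return out

def rle_decode(pairs):
    """Inverse of rle_encode."""
    return "".join(ch * n for ch, n in pairs)

def check_property(prop, gen, trials=10000):
    """Quantify over the input space by sampling: return the first counterexample, or None."""
    for _ in range(trials):
        x = gen()
        if not prop(x):
            return x  # the 'theorem' is falsified by this input
    return None  # the property held on every sampled input

random.seed(0)
gen = lambda: "".join(random.choice("ab") for _ in range(random.randint(0, 20)))
# Theorem: decoding an encoding gives back the original string, for all strings.
counterexample = check_property(lambda s: rle_decode(rle_encode(s)) == s, gen)
assert counterexample is None
```

Note how the test states a universally quantified property rather than one hand-picked example; the sampling is what makes it a strong-but-not-total check.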
Integration tests, though. I'm not sure I agree with the advice. You should definitely write them, but they can become a time sink and cost quite a bit to run if you need to test a non-trivial system. Where they fall short is at the level of quantification, and this is where the real errors lie in software systems with many components. Integration tests, like unit tests, are proof by example: you write a specification for how a given configuration of components should interact, supply a context, run the test, and see if your assertions hold. Be aware that they won't catch most safety errors and will never catch liveness errors.
Safety errors having to do with correctness of expected values over the lifetime of a computation.
Liveness having to do with maintaining invariants over the lifetime of a computation.
So integration tests are useful, do write them, but I wouldn't advise spending most of your time on them if you're dealing with more than 2 or 3 components.
Once you have messaging and coordination you're going to want a stronger specification, and that would probably look more like a theorem written in a language that can be verified by a model checker. Something like TLA+ is making good progress breaking into industry.
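A model checker exhaustively explores the reachable state space rather than sampling it. TLA+ specs are checked by its own tool (TLC); as a rough illustration of what that machinery does, here is a toy explicit-state checker verifying a safety invariant of a two-process lock. The transition system and all names are invented for the example:

```python
from collections import deque

# Toy transition system: two processes, each "idle", "waiting", or "critical",
# plus a shared lock guarding the critical section.
def initial_state():
    return (("idle", "idle"), False)  # (process states, lock held?)

def next_states(state):
    """Yield every successor state of the given state."""
    procs, lock = state
    for i in (0, 1):
        other = procs[1 - i]
        if procs[i] == "idle":
            yield (("waiting", other)[::1] if i == 0 else (other, "waiting"), lock)
        elif procs[i] == "waiting" and not lock:
            yield (("critical", other) if i == 0 else (other, "critical"), True)
        elif procs[i] == "critical":
            yield (("idle", other) if i == 0 else (other, "idle"), False)

def mutual_exclusion(state):
    """Safety invariant: at most one process in the critical section."""
    procs, _ = state
    return procs.count("critical") <= 1

def check(invariant):
    """Breadth-first search of every reachable state; return a violating state or None."""
    seen = {initial_state()}
    frontier = deque(seen)
    while frontier:
        state = frontier.popleft()
        if not invariant(state):
            return state  # counterexample: a reachable bad state
        for succ in next_states(state):
            if succ not in seen:
                seen.add(succ)
                frontier.append(succ)
    return None

assert check(mutual_exclusion) is None  # the invariant holds in EVERY reachable state
```

Unlike a test suite, the check is exhaustive over the (finite) model, which is exactly the guarantee that matters once components interleave.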
... this is getting long. To summarize: developers should be taught and given time to write specifications. Most errors in software arise from poor, incorrect, or missing specifications. Weak specifications are better than none. Think of tests as specifications, write them first, and prove your software meets those specifications. Then you can change your implementation as you refine your specification and write better, faster, more reliable software.
> Liveness having to do with maintaining invariants over the lifetime of a computation.
Nit: liveness is about reaching 'good' states over the lifetime of your computation. If your invariant is violated, even if it's a multistate invariant, it's still a safety error. Liveness would be something like "x is eventually true", or "the program always terminates."
It's not just specifications we need. We also need better tests. "Better" here doesn't mean "integration" or "acceptance", it means things like "fuzzing through contracts" or "comparing snapshots" or "rules-based state machines". Testing is vast and we're not very good at it.
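To sketch just one of those, a rules-based state machine test drives the implementation and a trivially correct model with the same random sequence of operations and asserts they agree (libraries like Hypothesis automate this, including shrinking). The `RingQueue` here is invented for the example:

```python
import random

class RingQueue:
    """Fixed-capacity FIFO queue on a ring buffer -- the implementation under test."""
    def __init__(self, capacity):
        self.buf = [None] * capacity
        self.head = 0
        self.size = 0
        self.capacity = capacity

    def enqueue(self, x):
        if self.size == self.capacity:
            raise OverflowError("queue full")
        self.buf[(self.head + self.size) % self.capacity] = x
        self.size += 1

    def dequeue(self):
        if self.size == 0:
            raise IndexError("queue empty")
        x = self.buf[self.head]
        self.head = (self.head + 1) % self.capacity
        self.size -= 1
        return x

def run_rules(steps=5000, seed=0, capacity=8):
    """Apply the same random rule sequence to implementation and model."""
    rng = random.Random(seed)
    impl, model = RingQueue(capacity), []
    for _ in range(steps):
        if rng.random() < 0.5 and len(model) < capacity:
            x = rng.randint(0, 99)
            impl.enqueue(x)
            model.append(x)
        elif model:
            assert impl.dequeue() == model.pop(0)  # both must agree at every step
    assert impl.size == len(model)

run_rules()
```

The model (a plain list) is the specification; the state machine explores long interleavings of operations that hand-written example tests rarely reach.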