
Anyone else sort of think that the "To TDD or not" argument eclipses the much more important argument of "To have high unit test coverage or not"?


The research I've seen on TDD seems to essentially point to two things: quality and productivity are positively correlated with the number of tests, and people who use TDD tend to write more tests. People who write the same number of tests but don't use TDD get similar benefits to people who do. So TDD is useful primarily in that it ensures you actually do write the tests. (Unfortunately I don't have any citations handy to back that up; that's just my recollection of what I've read on the subject over the years.)

There's that ever-elusive "it improves design" argument for TDD, but that one's far harder to prove either way, and I've personally seen it cut both ways: sometimes things end up more nicely decomposed, and sometimes they end up wayyyyy too decomposed.


This squares with my experiences.

As for the design argument, I think that the thing TDD actually encourages in terms of design is testability (something that I do think is desirable, in general). If you have high test coverage without using TDD, you've probably either taken testability into consideration before you wrote your code or made some changes after the fact to accommodate your tests.


I'd definitely agree that TDD leads to testable code, which is usually a good thing. If you already know how to write testable code, TDD isn't as much of a win (in my opinion), since you tend to write with testing in mind anyway. If you don't have experience with unit testing, though, doing strict TDD for a while can really help you learn what sorts of constructs make for testable code.
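
A rough sketch of the kind of construct I mean, in Python (the greeting function is invented for illustration): instead of reaching for the system clock inside the function, take it as a parameter so a test can pin it down.

    import datetime

    def greeting(now=None):
        """Return a greeting for the hour; `now` is injectable so tests can control it."""
        now = now or datetime.datetime.now()
        return "Good morning" if now.hour < 12 else "Good afternoon"

    def test_greeting_before_noon():
        assert greeting(datetime.datetime(2020, 1, 1, 9, 0)) == "Good morning"

    def test_greeting_after_noon():
        assert greeting(datetime.datetime(2020, 1, 1, 15, 0)) == "Good afternoon"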

Where I've seen TDD lead to less-than-desirable design is when some larger problem is broken into such tiny pieces, with so many interfaces, that it becomes hard to read through the code and follow the flow of control. That's certainly testable, but oftentimes I personally get as much mileage out of writing tests one level up, against some higher-level abstraction or API that those other things are all components of, and not being so religious about breaking up all the little parts. In my experience those slightly-higher-level tests (they're not even "integration" tests, since they're still strictly against one logical component) tend to provide the same amount of value in terms of preventing regressions, while requiring fewer compromises to make the little parts testable and giving more flexibility for future refactoring. Essentially, you're treating a whole bunch of classes/parts as internal implementation details.
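
As a rough illustration (names and structure are made up, in Python): test the one public entry point and leave the small helpers alone, so they can be refactored or merged without touching the tests.

    # Hypothetical module: _parse_line and _total are internal helpers that the
    # test never touches directly, so they're free to change shape later.
    def _parse_line(line):
        name, price = line.split(",")
        return name.strip(), float(price)

    def _total(items):
        return sum(price for _, price in items)

    def summarize_orders(lines):
        """Public API: parse raw order lines and return (item_count, total)."""
        items = [_parse_line(line) for line in lines]
        return len(items), _total(items)

    # One "level up" test, written only against the public API.
    def test_summarize_orders():
        count, total = summarize_orders(["widget, 2.50", "gadget, 7.50"])
        assert count == 2
        assert total == 10.0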


I think we see eye to eye here. The sort of "one level up" thing is what initially attracted me to spec testing, because it seemed not only to give you tools for performing your tests, but also to help direct you towards what would be useful to test: the behaviors of the code that you actually care about. So when I use spec testing tools, I tend not to think about making sure that the low-level function traverses the directories in a particular way, but rather that all of the directories have been renamed (for example).
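
A minimal sketch of that framing, in Python with pytest (rename_all and its suffix argument are invented for the example): the test asserts the outcome you care about, that every directory got renamed, and says nothing about the order the tree was walked in.

    import os

    # Hypothetical function under test: append a suffix to every directory name.
    def rename_all(root, suffix):
        for dirpath, dirnames, _ in os.walk(root, topdown=False):
            for name in dirnames:
                os.rename(os.path.join(dirpath, name),
                          os.path.join(dirpath, name + suffix))

    def test_every_directory_gets_renamed(tmp_path):
        (tmp_path / "a" / "b").mkdir(parents=True)
        (tmp_path / "c").mkdir()
        rename_all(tmp_path, "_old")
        names = [n for _, dirnames, _ in os.walk(tmp_path) for n in dirnames]
        # Behavior we care about: everything was renamed, regardless of traversal order.
        assert names and all(n.endswith("_old") for n in names)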


> I personally get as much mileage out of writing tests one level up

This is why I prefer the rebranded "BDD" as opposed to the original "TDD".

I find that the perspective BDD offers makes me write much more useful tests, since the ultimate reason we write objects in the first place is to use them, right? I don't care about how it works inside; I care about how it gets used. There's nothing technically different between BDD and TDD; it's a matter of philosophy, and I find BDD's take on things to be better.
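
A small sketch of that difference in emphasis, in Python (the ShoppingCart class is invented for illustration): the test is named and written from the caller's point of view, as a statement about what the object does for its user, not about which internal list or method holds the data.

    # Hypothetical class under test.
    class ShoppingCart:
        def __init__(self):
            self._items = []

        def add(self, name, price):
            self._items.append((name, price))

        def total(self):
            return sum(price for _, price in self._items)

    # Behavior-focused: reads as a sentence about how the cart gets used.
    def test_cart_totals_the_prices_of_added_items():
        cart = ShoppingCart()
        cart.add("book", 12.0)
        cart.add("pen", 3.0)
        assert cart.total() == 15.0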



