A couple of days ago, Adam Milligan posted an eponymous "law" on the Pivotal Labs blog.
It includes a corollary stating, "The full definition of correct behavior of code exists in the tests for that code."
Now, there seemed to be something fundamentally off about this proposition ... and I wanted to figure out what that was.
Surely Mr. Milligan doesn't mean this in the trivial, tautological sense (define the spec to be no more than what test cases happen to exist at a point in time) ... even though Agile dogma often borders on that view, pretending it's some kind of paradoxical path to enlightenment like a Zen koan.
The comments to his post start to touch on "spec" vs. "test," and whether 100% test coverage is practical, desirable, or conclusive.
Of course, even 100% code coverage, combined with a missing code path and a correspondingly missing test and behavior spec, equals ... test success, and wrong functionality.
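To make that concrete, here is a minimal hypothetical sketch (the function, names, and numbers are mine, not from Mr. Milligan's post): a function whose tests exercise every line and all pass, yet whose behavior is wrong for an input the unwritten spec cares about.

```python
# Hypothetical example: 100% line coverage, green tests, wrong behavior.

def shipping_cost(weight_kg):
    """Flat rate under 10 kg, per-kg rate at or above."""
    if weight_kg < 10:
        return 5.0
    return weight_kg * 0.75

# This suite executes every line of shipping_cost, so a coverage
# tool reports 100% ...
assert shipping_cost(2) == 5.0      # covers the flat-rate branch
assert shipping_cost(20) == 15.0    # covers the per-kg branch

# ... yet the (unwritten) spec says a negative weight is invalid input.
# There is no code path for that case and no test demanding one, so the
# suite stays green while the function quietly misbehaves:
assert shipping_cost(-3) == 5.0     # "passes", but this is wrong functionality
```

The missing requirement is invisible to both the code and the tests, which is exactly the point: coverage measures the code that exists, not the spec that should.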
In such a case, though, the problem has been externalized from the Agile context and put onto a faceless "business person" who "doesn't get" Agile because he or she actually wants to plan ahead and describe certain specs that persist over time.
This is a neat trick. In physics, all sorts of magical things can happen if you look only inside of one context (or frame of reference) without looking at what's happening outside, or at what's happening to the frame itself. In finance, there can certainly be a free lunch ... if you can make its cost into an externality and remove it from your model.
Once you make the tests and code-based behavior spec a part of the application you're building -- and once they're a real cost center as well as a critical deliverable, you clearly have done so -- then you are in a sense simply externalizing that troublesome human interaction (specification, functional analysis, planning). Yes, the tests match the code under test. But seen at a different level of abstraction, it's just another flavor of interface-driven development, or debugging.
Most engineers would love to realize the dream of self-describing, self-verifying code, whether that description be some kind of formal model, or a textual DSL (as with Microsoft's Oslo), or a set of tests and code-based behavior specs. And, indeed, these systems all improve the transparency of the code, propagating requirements from the outside in, and revealing when they are not met.
But even with the most "bought-in" business stakeholders, it is impossible to escape the outermost specification layer, the one with the humans in it.