How do we recognize quality? How do we know what’s good?
When experiencing the not-yet-experienced, we have an immediate right-brain, intuitive reaction. A split second later, our left brain serializes and categorizes the experience, reconciling it into comfortable intellectual constructions.
As Robert Pirsig masterfully develops in his books Zen and the Art of Motorcycle Maintenance and Lila, Quality or Value cannot be defined because it empirically precedes intellectual constructions.
Pirsig further defines the concept of Dynamic Quality as “the pre-intellectual cutting edge of reality” — Dynamic Quality is recognized before one can think about it.
Quality is a direct experience independent of and prior to intellectual abstractions.
–Robert M. Pirsig
For Pirsig, the word “good” functions just as well as a noun as it does as an adjective, his reasoning being that our concept of Quality is a philosophical measurement of goodness.
One school of thought holds that good is quantifiable, even if we can’t yet measure it scientifically. This is perhaps the antithesis of what Pirsig believed. Yet it is a commonly held assumption about quality that we often encounter in the software industry.
Do we owe this short-sighted conception of quality to pointy-headed managers who salivate over metrics dashboards? Or do we need to re-examine our own definitions of quality?
What Does Quality Mean? is a brief primer by Mark Levison that collects what others have said about software quality. In an instructive rant called Killing Quality, Mark Bria makes a valuable observation about a big pothole in commonly held assumptions about software quality:
“Quality” should be used as a measure of functional/aesthetic utility to our consumer, and not as a measure of defects. Really, it should just be assumed that defects are generally absent. This should just be implied in what it means to be a Professional.
Mark’s implication is that we’ve tainted Quality in the software world by assuming Quality means the absence of defects, rather than the presence of value.
For Camp Coiner (pictured right), the Home of Zero Defects, zero defects might be a matter of life and death – an essential definition of quality.
Zero defects in software is the price of entry — but not a particularly meaningful evaluation of quality.
Perhaps the easiest way to recognize dynamic quality, or the lack thereof, is to observe an experienced user struggle with a counter-intuitive user interface.
While not always successful, most of us try to follow an Occam’s Razor approach to user interfaces. The Occam’s Razor school teaches us to shave off the widgets, doohickeys, and Ajax-ified eye candy that are not really needed to reach the desired function. Following Occam’s Razor, there is less chance of introducing inconsistencies, ambiguities, redundancies, and unexpected behavior.
One way to experience the spectrum of recognizing quality, or value, is to compare two popular agile tracking products — the commercial product, Name Withheld, and the open source product, XPlanner.
XPlanner has a plain vanilla interface with classic HTML controls and forms with full post-backs. Not pretty. But if you’ve used it, you remember that you rarely complained, because it fulfilled your needs. Simple and intuitive. Who would not rate this product as good value or good quality?
Also on the quality spectrum is the organizationally popular Name Withheld. My initial impression, and ongoing experience, using Name Withheld is frustration. Using this product is like driving in a strange city where the streets are very tidy and the buildings are quite attractive, but I have to circle every block two or three times before finding my destination. From a user experience viewpoint, who would rate this product as good value or good quality?
Here’s hoping that Version Two of Name Withheld will be more intuitive than Version One of Name Withheld.
We have avoided the sticky wicket: how do we evaluate quality in software? In a subsequent post, we’ll tackle it by assuming very practical constraints – we’ll pretend a venture capital firm is paying us a retainer for due diligence on a software product. What range of things might we examine? How would we evaluate them?