Quality’s Okay, But Who’s Paying the Freight?

One of my favorite truisms used to be
good software ain’t cheap and cheap software ain’t good
It’s a parallel phrase that seems to ring true. I appreciate a catchy chiasmus as much as anyone, but now this aphorism seems like a marble in a rock stack.
Let's face it: sometimes cheap software is good. How often do we balance expeditiousness with quality?
Demand for our software services is throttled by the people who buy the products. It is almost always our charge to strike a balance between cost and quality constraints. A dogmatic pursuit of quality might be perceived as gold-plating. The economic reality is

selling software is a prerequisite for being paid to build software
Two observations about expeditiousness and software quality are:
  1. Often teams are faced with a simple choice: more features or higher quality (e.g., refactoring, paying down technical debt, bug fixes).
  2. Often that choice of more features or higher quality is determined by whether your product is a greenfield start-up or you’re supporting an established, public-facing product.
Start-up Product
In the context of a greenfield project or a start-up product, I have been introduced to the concept of Minimum Viable Product or MVP.
In Minimum Viable Product: a guide, Eric Ries explains that an MVP is made to elicit feedback. He defines it as

a new product which allows a team to collect the maximum amount of validated learning about customers with the least effort ~Eric Ries

Kent Beck discusses Eric's approach in Approaching a Minimum Viable Product, where he uses an MVP to address critical product assumptions, and to illuminate the unknowns, for his new product Tattlebird.
Presumably some degree of quality is sacrificed to get answers. The value of expeditious answers supersedes quality concerns like user experience or reskinning a user interface.
Established Product

On the established, public-facing product front, Anna Forss wrote about establishing the notion of business value for bug fixes to the sales organization of her customer (cf. what value does bug fixing have for sales?).
Anna poses Fred Reichheld’s The Ultimate Question
How likely is it that you would recommend product X to a friend or a colleague?
In the context of an established, public-facing product, she suggests that
…bugs create more detractors than features create promoters ~Anna Forss
In this context,
Value = Promoters – Detractors
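Anna's framing echoes Reichheld's Net Promoter Score, where respondents who answer the Ultimate Question with a 9 or 10 count as promoters and 0 through 6 count as detractors. A minimal sketch of the arithmetic (the survey scores below are made up for illustration):

```python
def net_promoter_score(scores):
    """Compute NPS from 0-10 answers to "How likely are you to recommend...?"

    Reichheld's convention: 9-10 = promoter, 0-6 = detractor, 7-8 = passive.
    NPS = %promoters - %detractors, so bug-driven detractors subtract
    directly from whatever value new features' promoters add.
    """
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return 100 * (promoters - detractors) / len(scores)

# A release that ships features but also ships bugs can tread water:
# 3 promoters and 3 detractors out of 8 respondents cancel out.
print(net_promoter_score([10, 9, 9, 8, 7, 6, 5, 3]))  # -> 0.0
```

If bugs create more detractors than features create promoters, as Anna suggests, this score falls even while the feature list grows.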

Higher quality (user experience improvements and bug fixes) provides real value; in some cases more than new features do. Sometimes error-free operation and usability trump new features.
If there’s no need to walk the goldfish, use a fishbowl. ~Bobtuse
Given the context (startup or established), I understand the need to balance expedience and quality, as long as we stay focused on who's paying the freight and on whether their goals jibe with our mode of operation.

7 thoughts on “Quality’s Okay, But Who’s Paying the Freight?”

  1. Thanks Bob – this is a great summary of what ends up being a very complex discussion, esp. when developing a new product. I love the notion of making the choice obvious up front: more features or higher quality. There's obviously a cost to each, but often I think teams chase quantity of features within a set budget and expect quality not to suffer as a result.


  2. Your 'truism' reminds me of when I was at university, where I had a business professor who used to say, “tell your customers you can be good, fast, or cheap: pick any two”.

    I've recently come to the idea that there are three types of quality to consider in software development. The scope, or number of features in a product, is one consideration, as is the defect density. I use the term defect density instead of the number of defects because, for instance, if you have 10 defects (bugs) of varying degrees spread across a product, the software may still be considered good, whereas if you have 10 defects in a single area, it would not.

    The third and final type is a hidden issue, and the one that takes the greatest care to manage: maintainability. It is possible to have code that provides the correct functionality, without defects, but is so fragile and poorly designed that even the most trivial change will break it, whether due to poor design or a lack of unit tests or comments.

    I'm quite interested in hearing others' thoughts on these three attributes of quality, and whether they have others.


  3. Gene – I like your 3 quality considerations.
    Also, thanks for mentioning maintainability.

    Earlier this week it dawned on me that maintainability standards change with time.
    I had to sift through 2000-era code (classic Active Server Pages and VB6 DLLs). I realized that what was once considered modular and maintainable back in 2000 has become a very fragile house of cards that no one on the team wants to touch because:
    1. It could break something that could take a long time to fix.
    2. So much code depends on this legacy code working flawlessly.
    3. Developers would need to load some old tools to recompile legacy DLLs if something had to be changed.
    4. “Include files”, which were once considered modular and maintainable, are now deemed a rat's nest where it's hard to find or debug anything.
    5. Old-school tools for working with legacy code don't include features we’ve become accustomed to, like object IntelliSense, right-click find usages, and step-through debugging.

    I wonder if it will be necessary to rebuild a code base every 5 years just to keep up with what is considered “maintainable”.


  4. I think there is a notion of the 'appropriateness' of quality, or, put another way, of what qualities are desirable (in a sense, what constitutes 'quality') in a given product according to its purpose.

    “Quality” doesn't always mean things like maintainable, bug-free, etc. For something that's essentially a 'prototype', quality can mean fast, cheap, and just good enough to get viable feedback. In fact, in that arena the old adage 'quantity has a quality all its own' applies: a lot more value can be gained by exploring 5 or 6 approaches to solving a problem, all of which are just good enough to use for getting feedback, than by one or two approaches that were more refined and of what we'd consider 'release quality' in a production product.

    Kent Beck (who you mentioned above) points this out pretty well (probably far better than I could manage to express it) in his blog post that likens startups (and perhaps any project) to an airplane flight, with distinct phases: http://www.threeriversinstitute.org/blog/?p=251. In the early phase the value (or, it could be said, the 'quality' you desire) is the ability to try and test as many approaches as possible, and to get as much feedback as possible, in order to find something that looks viable to move forward with.


  5. If a software release is good enough to survive one generation, i.e., its sale or trial generates enough revenue and interest to release the next generation, then we have hit the right balance between 'features' and 'quality'. Consequently, their individual proportion, importance, or isolated value does not matter; their combined value matters. This is regardless of whether sprints are for internal, intermediate, or final versions of a product. For internal versions, product managers' insight allows them to play the customers' role. Then come pre-sales, marketing exercises, promotions, and certification programs, and finally GA releases; all of which are opportunities to expose the released generation to a survival test.

    I don't believe there should ever be time that can explicitly be bought for refactoring. Refactoring must not be a 'purpose', because by itself it has no value. There must be a pragmatic revenue rationale behind the refactoring done each sprint. The purpose is to keep pace with the next sprint.


  6. Ergun – Thanks for the feedback. I hope readers know about your excellent blog Negative Matter (http://negativematter.blogspot.com/).

    To respond to your post, I’ll use the example of LinkedIn Groups. Specifically, how LinkedIn rolled out their Groups features and the balance LinkedIn’s Product Owners must have struck between features and quality.

    I was an early adopter of LinkedIn’s Groups feature. When I first created a LinkedIn Group, they didn’t yet have features for Discussion Boards, News Items, Jobs Boards, or Sub-groups. A LinkedIn group was merely a contact list I could create around an area of interest (e.g., Agile .Net Practitioners). With few features, they focused on making the user experience (quality) reasonably positive (presumably so users like me would be a “promoter” rather than a “detractor”).

    I was a promoter, albeit a fairly weak one at first. To get a discussion board feature I had to create a subgroup out on Google Groups for about the first 6 months. But I stuck with LinkedIn – in parallel with Google Groups – because LinkedIn’s core product worked reasonably well and lots of people seemed to be adopting it as a contact manager. Eventually, LinkedIn added discussion boards, but hadn’t yet made the distinction between discussion posts, news items, and job posts. Over time, as the discussion board turned into an unfocused aggregation of posts, spam, and self-promotion, LinkedIn rolled out News Items and Job Board features. LinkedIn decided on the necessary balance between features and user experience (quality) as they staged the rollout of their product; essentially following Eric Ries’ MVP. Had the first release been buggy, and had it presented a negative UX, I might not have stuck with it through sorely needed features.

