Feature Usage Statistics

What we knew before

My own experience says:

  • Some of the most important features are not requested up-front
  • Many of the features requested are seldom, if ever, used.
    • This implies MUCH lower ROI on these features
    • This implies that the most IMPORTANT features are seriously delayed by building the low-to-zero value features
  • Only a small percentage of features are both built initially AND provide high ROI

Again, I emphasize three things.

  • The ROI (regardless of time)
  • The cost of delay
    • In many cases delay is crucial – if your business is late, you go bankrupt
    • Delay can also be crucial to customers – with the Covid vaccine innovation, delay was life and death for many people.  That is an overly dramatic example, but clearly customers want the important stuff NOW.  They suffer if you are delayed.
  • Learning.
    • We think we know what the customer wants, but much experience proves we are often notably off.
    • Delay in finding out that you are off.  “The bad news does not get better with age” is one way of saying it.

New information

Over the last 10-20 years we have had some data on this problem.

Some of that data is now fairly old.

Collecting the data is hard, and there is a problem of knowing which data to collect.

For example, imagine a 6-phase waterfall.  Imagine you have feature X at the beginning but delete it after phase 3.  Clearly, for phases 1-3, that feature caused some delay.  And some cost.  But it was never completed (never actually built).  How do you account for feature X?

Here is a blog post by Mike Cohn:

https://www.mountaingoatsoftware.com/blog/are-64-of-features-really-rarely-or-never-used

Slightly comforting, in suggesting that things are better “now”.  Or does it really say that?  No, it only says that we don’t know, and that the oft-quoted old data was based on too little evidence.  But not being well validated does not mean the old data was wrong.  Nor does it suggest whether the truth is better or worse.

Then Mike Cohn shares this quote from Jim Johnson of The Standish Group:

Since then each year we do several TCO studies. We try to look at this as part of these engagements. Based on these casual observations our current estimate of features used for mission-critical applications is 20% often, 30% infrequently and half hardly ever. I am not sure the numbers are 1,000 organizations or 2,000 applications, but it could be close. Thank you, Jim Johnson, Chairman, The Standish Group. If you need further information you can e-mail jim@standishgroup.com

Note that Mr. Johnson is talking about mission-critical applications.  My guess would be that feature usage is “better” on mission-critical applications than on regular ones (or I would hope so).  That is perhaps a bias, and it may in fact be incorrect.

Again, usage and value are not (always) the same thing.  Example: the button to launch the nuclear missiles will, I hope, never be used.  But HAVING that feature does (we hope) make us safer from our enemies.  (OK, an extreme example, but I think you get the point.)

If we made four categories:

  • Often
  • Infrequently
  • Rarely
  • Never

…for usage, and we posit that usage is a rough proxy for value, THEN: how do you see the percentages across those categories for the products (projects) you have recently done?  Say, in the last two years or so.

And, if we may ask, what was the basis for your “guess”?

Here are some possible answers:

  • Usage data from the app (see the sketch after this list)
  • Customer survey
  • Some indirect data
  • I asked 5 smart people and we averaged our guesses
  • My guess (and I know very little or a lot about what the customers really want)
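
If you do have usage data from the app, the arithmetic itself is easy; the judgment calls are the thresholds and the feature list.  Here is a minimal sketch in Python, with made-up event data and hypothetical cut-offs for what counts as “often” versus “rarely” (none of this comes from the Standish numbers; it is only an illustration of the bucketing):

```python
from collections import Counter

# Hypothetical raw data: one record per feature use, e.g. exported from app analytics.
usage_events = ["search", "search", "export_pdf", "search", "bulk_edit", "search"]
all_features = ["search", "export_pdf", "bulk_edit", "admin_audit_log"]

# Hypothetical thresholds over the observation window -- tune these to your context.
OFTEN_MIN = 100       # uses per quarter, say
INFREQUENT_MIN = 10
RARE_MIN = 1

counts = Counter(usage_events)

def bucket(uses: int) -> str:
    """Map a raw usage count to one of the four categories."""
    if uses >= OFTEN_MIN:
        return "Often"
    if uses >= INFREQUENT_MIN:
        return "Infrequently"
    if uses >= RARE_MIN:
        return "Rarely"
    return "Never"

# Tally how many features fall into each category, including features never used at all.
buckets = Counter(bucket(counts.get(f, 0)) for f in all_features)

for category in ["Often", "Infrequently", "Rarely", "Never"]:
    pct = 100 * buckets.get(category, 0) / len(all_features)
    print(f"{category}: {pct:.0f}% of features")
```

The hard part is not this tally but everything around it: choosing sensible thresholds, making sure the feature list includes the features nobody ever touches, and remembering (per the missile-button example) that usage is only a rough proxy for value.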

***

Hope this discussion helped you.  Your comments?

 

 
