With summer quickly coming to a close, it’s time for fall lawn care.  Since I have around 2/3 of an acre to maintain, I have several pull-behind attachments for my lawn tractor that I use.

During the summer, I've seen an increase in both sedge and several types of broadleaf weeds. Instead of spraying separately for each weed type, I planned to spray for both at the same time. Early one morning, I mixed up 12 gallons of the dual mix in my trailer sprayer and went to work.

An hour later, I was done.  The lawn was sprayed, and I was ready to grab a coffee and watch the weeds die.

Several questions lingered in my mind as I put the sprayer away. Did I get all the weeds? Did I miss a spot? Did I calculate my mix correctly and apply it at the correct rate? I might be done, but will it accomplish the goal? Was I adding value, or was I just busy? It can take several days to see if the weeds are starting to wilt and several weeks to know if the plant got enough weed killer to kill it completely.
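The mix math itself is a simple proportion, which makes it easy to double-check after the fact. Here is a small sketch in Python; the function name, the label rate, and the sprayer coverage numbers are all hypothetical, not taken from any real product label:

```python
def ounces_needed(tank_gallons, oz_per_1000_sqft, sqft_per_gallon):
    """Concentrate needed for a full tank, given a label rate and
    the sprayer's calibrated coverage. All inputs are hypothetical."""
    area_covered = tank_gallons * sqft_per_gallon      # total sq ft the tank sprays
    return area_covered / 1000 * oz_per_1000_sqft      # oz of concentrate for that area

# Hypothetical numbers: 12-gallon tank, label rate of 1.5 oz per
# 1,000 sq ft, sprayer calibrated to cover 2,000 sq ft per gallon.
print(ounces_needed(12, 1.5, 2000))  # -> 36.0
```

Getting either the label rate or the calibration wrong throws the whole answer off, which is exactly the kind of mistake you only discover weeks later when the weeds are still standing.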

At work, we need to address some of the same questions.  We see problems that need to be solved.  We talk to our product owner or the users to see what they want.   We gather the information and set out to develop a solution.

We then kick it out the door and call it done.   But does our solution really help the user?  Did we add value or did we just get something done?

In a 2009 paper on online experimentation at Microsoft, the authors reported that only about one-third of ideas designed to add value actually added value. (They measured value through the improvement of a key metric).  They go on to say that “it is humbling to see how bad experts are at estimating the value of features.”

The only true way to know if a feature or idea is valuable is to set up a feedback loop with the users. This feedback loop might include sophisticated multivariate or A/B testing within a product. It might include surveys or net promoter scores. If you have a small user base, you can even visit the users and watch them use the product.
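If your feedback loop includes A/B testing, the core comparison is small enough to sketch. Here is a minimal two-proportion z-test in Python; the function name and the traffic numbers are hypothetical, and a real experimentation platform would add guardrails this sketch omits:

```python
import math

def ab_test_p_value(conv_a, n_a, conv_b, n_b):
    """Two-proportion z-test: does variant B's conversion rate
    differ from variant A's? Returns the two-sided p-value."""
    p_a = conv_a / n_a
    p_b = conv_b / n_b
    # Pooled proportion under the null hypothesis of no difference
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal distribution
    return math.erfc(abs(z) / math.sqrt(2))

# Hypothetical numbers: 200 of 10,000 users converted on the old
# feature (A) versus 260 of 10,000 on the new one (B).
p = ab_test_p_value(200, 10_000, 260, 10_000)
print(f"p-value: {p:.4f}")
```

A small p-value suggests the new feature really moved the metric; a large one says the difference could easily be noise, which is how you learn whether you added value or just shipped something.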

Quicker feedback is always better. When spraying my lawn, it takes a week or two before I know whether I got the mix right or missed a spot.

Getting timely feedback from the user is common sense. But ask yourself, is this common practice?  How well do you get feedback from your users?  How quickly do you get feedback on the features you are working on?  What can you do today to improve your feedback?

If it takes months to get features into the hands of our users, how open are we to making changes when the user tells us that it doesn't provide value? Are we open to going back to something we did months ago? Every organization has a different answer to these questions.

Checking items off a to-do list is needed, but adding value is the true goal.

It's been about two weeks since I sprayed my lawn. My sedge is dying, but the broadleaf weeds are just as happy as ever. After re-reading the mixing instructions, I think I under-applied that chemical. Not the value I intended to provide.

How about you? What change can you make this week to verify that you are adding value instead of just being done?