Quantitative Interviews: the importance of scoring customer feedback
Back in December, Roger Huffstetler of Zillabyte contacted us to say he'd applied some of the ideas from Lean Analytics to his startup, and wanted to fill us in on what happened. At the time, by his own admission, he was "up to my ass in alligators." Now, a few months later, here's his story.
I had just completed my 30th demo of our product, and I remember leaving the feedback session on cloud nine. As I wrapped up showing our API to the potential customer, he suggested what we were building was “amazing.” Yes, I thought giddily, this is going to work.
It took me a few minutes to come down from the clouds and realize that, as a founder, I'd been here before… and while it felt good, it could also be, unfortunately, an indication of nothing.
This phenomenon of euphoria is well documented: business founders see what they want to see, or as Ben & Alistair of Lean Analytics might say, “small lies are essential to company founders.” These ‘lies’ are what keep you going and believing through the toughest times.
The difficulty, of course, is in facing reality and discerning hard truth from praise and fluff. In particular, when you're in the midst of customer feedback interviews, hearing that your product is "great," "helpful," or "amazing," identifying reality can be incredibly difficult. Qualitative feedback, while often complimentary, can be especially confusing.
It was at this point, post-interview but realizing I'd been through this cycle before, that I stumbled upon the Pain Index section of Lean Analytics (page 170). With Ben and Alistair's scoring system, you turn customer feedback into a quantitative process. You begin with a key customer development question: how do I know whether the problem we're trying to solve is really painful enough? Think of this method as the referee on the interview field, someone on the sideline keeping you honest. [We wrote this up in a blog post in 2012. —AC&BY]
After introducing this question, Ben and Alistair provide a set of six follow-up questions and a suggested scoring framework, as follows:
- Did the interviewee successfully rank the problems you presented? [Yes (6); sort of (5); no (0)]
- Is the interviewee actively trying to solve the problems, or have they done so in the past? [Yes (10); sort of (5); no (0)]
- Was the interviewee engaged and focused throughout the interview? [Yes (8); sort of (4); no (0)]
- Did the interviewee agree to a follow-up meeting or interview? [Yes (8); yes, when asked (4); no (0)]
- Did the interviewee offer to refer others to you for interviews? [Yes (4); yes, when asked (2); no (0)]
- Did the interviewee offer to pay you immediately for the solution? [Yes (3); yes, when asked (1); no (0)]
Under their system, a score of 31 or higher (out of a possible 39) is a good score. Anything below that, and you haven't really nailed a painful problem. But here's where the real insight comes in: within all your interviews, look for a subset of scores that spike. That's your early-adopter customer segment, as the sketch below shows.
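To make the mechanics concrete, here's a minimal sketch of how you might implement the scoring and sorting in Python. The weights and the 31-point threshold come straight from the list above; everything else (the question keys, the `score_interview` helper, and the sample interviewees) is illustrative naming, not from the book.

```python
# Pain Index scoring, per Lean Analytics (p. 170).
# Weights match the six questions above; field names are illustrative.
WEIGHTS = {
    "ranked_problems":     {"yes": 6,  "sort_of": 5, "no": 0},
    "solving_actively":    {"yes": 10, "sort_of": 5, "no": 0},
    "engaged_and_focused": {"yes": 8,  "sort_of": 4, "no": 0},
    "agreed_to_follow_up": {"yes": 8,  "when_asked": 4, "no": 0},
    "offered_referrals":   {"yes": 4,  "when_asked": 2, "no": 0},
    "offered_to_pay":      {"yes": 3,  "when_asked": 1, "no": 0},
}

PAIN_THRESHOLD = 31  # 31+ out of a possible 39 suggests a genuinely painful problem


def score_interview(answers: dict[str, str]) -> int:
    """Sum the points for one interviewee's answers."""
    return sum(WEIGHTS[question][answer] for question, answer in answers.items())


# Hypothetical interviews: score each one, then sort highest to lowest
# to look for the spike that marks your early-adopter segment.
interviews = {
    "dev_at_api_shop": {
        "ranked_problems": "yes", "solving_actively": "yes",
        "engaged_and_focused": "yes", "agreed_to_follow_up": "yes",
        "offered_referrals": "when_asked", "offered_to_pay": "when_asked",
    },
    "pm_at_enterprise": {
        "ranked_problems": "sort_of", "solving_actively": "no",
        "engaged_and_focused": "sort_of", "agreed_to_follow_up": "when_asked",
        "offered_referrals": "no", "offered_to_pay": "no",
    },
}

for name, answers in sorted(interviews.items(),
                            key=lambda kv: score_interview(kv[1]),
                            reverse=True):
    score = score_interview(answers)
    verdict = "painful problem" if score >= PAIN_THRESHOLD else "not painful enough"
    print(f"{name}: {score} ({verdict})")
```

Running this prints the developer at 35 (above the threshold) and the enterprise PM at 13 (well below it), which is exactly the kind of spike-versus-slump pattern you're sorting for.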
This process worked like a charm for us where qualitative feedback had failed to move us forward. When I scored the interviewees and sorted from highest to lowest, a pattern quickly emerged: the more technical the interviewee, the higher the score. This insight, that our customers were developers, was deeply buried in the morass of qualitative interviews, but scoring and sorting surfaced it at exactly the right time.
You may quibble with the specifics (so change them!), but the overall process is genius. This forcing function made a tremendous difference for us as we worked through customer development.