Lean Analytics Book

How To Score Problem Interviews During the Lean Startup Process

August 28, 2012

If you’re a fan of Lean Startup, you know about Problem Interviews. They’re the part of a startup where you first go out and speak to people you think might be customers, and try to determine if they have the problem you want to solve.

This post isn’t about the interviews themselves—there’s a ton of good thinking on how to conduct them already out there, and for many entrepreneurs they’re a rite of passage where you realize that your worldview is radically different from the market reality, and (hopefully) adjust accordingly.

But I do want to float a somewhat controversial idea, and get some feedback.

Yes. This is where YOU participate.

I want to talk about scoring interviews. It’s something we’re working on for the book, and it’s surprisingly thorny and controversial. Consider this a sneak peek, but also an opportunity to help us run an experiment. Let’s start with the idea first.

How to score interviews

Problem Interviews are designed to collect qualitative data. They’re meant to indicate strongly (or not) that the problem(s) you’re looking to solve are worth pursuing. They’re hard to do well, and take lots of practice and discipline to master. If you do them right, you’re left with a ton of insight into your customers’ needs and thoughts.

Unfortunately, those reams and reams of notes are messy. Interpreting and sharing qualitative data is hard, and often subjective.

So we want to try scoring them. Scoring interviews is meant to help you quantify your results without getting overly scientific.

The challenge here is that you can’t beat a forest of qualitative data into a carefully manicured lawn of quantitative data. We’re not even going to try that. And we’re also not proposing that you go overboard with this method: if you’re not good at collecting and interpreting qualitative data, it’s going to be difficult to get very far at all (through the Lean process or through any startup). But our hope is that this method helps coalesce things a bit more, giving you some clarity when analyzing the results of your efforts.

During the Problem Interviews, there are a few critical pieces of information that you should be collecting. I’ll go through those below and show you how to score them.

1. Did the interviewee successfully rank the problems you presented?

Yes: 10 points
Sort of: 5 points
No: 0 points

During a Problem Interview you should be presenting multiple problems to the interviewee—let’s say 3 for the purposes of this post—and asking them to rank those problems in order of severity.

  • If they did so with a strong interest in the problems (irrespective of the ranking) that’s a good sign. Score 10 points.
  • If they couldn’t decide which problem was really painful, but they were still really interested in the problems, that’s OK, but you’d rather see a clearer ranking. Score 5 points.
  • If they struggled with this, or they spent more time talking about other problems they have, that’s a bad sign. Score 0 points.

It’s important to note that during the interview process, you’re very likely to discover different problems that interest interviewees. That’s the whole point of doing these interviews, after all. That will mean a poor score (for the problem you thought you were going to solve), but not a poor interview. You may end up discovering a problem worth solving that you’d never thought about, so stay open-minded throughout the process.

2. Is the interviewee actively trying to solve the problems, or have they done so in the past?

Yes: 10 points
Sort of: 5 points
No: 0 points

The more effort the interviewee has put into trying to solve the problems you’re discussing, the better.

  • If they’re trying to solve the problem with Excel and fax machines, you may have just hit on the Holy Grail. Score 10 points.
  • If they spend a bit of time working around the problem, but just consider it the price of doing their job, they’re tolerating it rather than actively trying to fix it. Score 5 points.
  • If they don’t really spend time tackling the problem, and are okay with the status quo, it’s not a big problem. Score 0 points.

3. Was the interviewee engaged and focused throughout the interview?

Yes: 8 points
Sort of: 4 points
No: 0 points

Ideally your interviewees were completely engaged in the process: listening, talking (being animated is a good thing), leaning forward, and so on. After enough interviews you’ll know the difference between someone who’s focused and engaged and someone who isn’t.

  • If they were hanging on your every word, finishing your sentences, and ignoring their smartphone, score 8 points.
  • If they were interested, but showed distraction or didn’t contribute comments unless you actively solicited them, score 4 points.
  • If they tuned out, looked at their phone, cut the meeting short, or generally seemed entirely detached—like they were doing you a favor by meeting with you—score 0 points.

4. Did the interviewee refer others to you for interviews?

Yes, without being asked: 4 points
Yes, when you asked them to: 2 points
No: 0 points

At the end of every interview, you should be asking all of your subjects to suggest others you should talk with. They have contacts within their market, and can give you more data points and potential customers. There’s a good chance the people they recommend are similar in demographics and share the same problems.

Perhaps more importantly at this stage, you want to see if they’re willing to help out further by referring people in their network. This is a clear indicator that they don’t feel sheepish about introducing you, and that they think you’ll make them look smarter. If they found you annoying, they likely won’t suggest others you might speak with.

  • If they actively suggested people you should talk to without being asked, score 4 points.
  • If they suggested others at the end, in response to your question, score 2 points.
  • If they couldn’t recommend people you should speak with, score 0 points (and ask yourself some hard questions about whether you can reach the market at scale).

5. Did the interviewee offer to pay you immediately for the solution?

Yes, without being asked: 4 points
Yes, when asked: 2 points
No: 0 points

Although having someone offer to pay or throw money at you is more likely during the Solution Interviews (when you’re actually walking through the solution with people), this is still a good “gut check” moment. And certainly it’s a bonus if people are reaching for their wallets.

  • If they offered to pay you for the product without being asked, and named a price, score 4 points.
  • If they offered to pay you for the product, score 2 points.
  • If they didn’t offer to buy and use it, score 0 points.

Calculating the scores

A score of 25 or higher (out of a possible 36) is a good score. Anything under is not. Try scoring all the interviews, and see how many score well. This is a decent indication of whether or not you’re onto something with the problems you want to solve. Then ask yourself what makes the high-scoring interviews different from the low-scoring ones. Maybe you’ve identified a market segment; maybe you have better results when you dress well; maybe you shouldn’t do interviews in a coffee shop. Everything is an experiment you can learn from.
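
To make the arithmetic concrete, here’s a minimal sketch of how you might tally interviews against that cutoff. It’s in Python, and the interview names and per-question scores are invented for illustration; the code itself isn’t part of the method.

```python
# Minimal sketch: tally interview scores against the rubric above.
# Interview names and per-question scores are made up for illustration.

# Each list holds the five question scores in order:
# ranking, active solving, engagement, referrals, offer to pay.
interviews = [
    {"name": "Interview A", "scores": [10, 5, 8, 2, 0]},
    {"name": "Interview B", "scores": [5, 0, 4, 0, 0]},
    {"name": "Interview C", "scores": [10, 10, 8, 4, 2]},
]

GOOD_SCORE_THRESHOLD = 25  # out of a possible 36

for interview in interviews:
    total = sum(interview["scores"])
    verdict = "good" if total >= GOOD_SCORE_THRESHOLD else "not good"
    print(f"{interview['name']}: {total} points ({verdict})")
```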

You can also tally the rankings for the problems that you presented. If you presented three problems, which one had the most first-place rankings? That’s where you’ll want to dig in further and start proposing solutions (during Solution Interviews).
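
Counting those first-place rankings is straightforward once the rankings are recorded. Here’s a small sketch with hypothetical problem names; the specific problems are placeholders, not suggestions.

```python
from collections import Counter

# Hypothetical rankings: each list is one interviewee's ordering of the
# three problems presented, most severe first.
rankings = [
    ["onboarding", "reporting", "billing"],
    ["reporting", "onboarding", "billing"],
    ["onboarding", "billing", "reporting"],
]

# Count how often each problem was ranked first.
first_place = Counter(ranking[0] for ranking in rankings)
print(first_place.most_common())  # [('onboarding', 2), ('reporting', 1)]
```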

The best-case scenario is very high interview scores within a segment of interviewees who all ranked the problems the same way (or very similarly). That should give you more confidence that you’ve found the right problem and the right market.
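
If you want to look for that pattern mechanically, one option (again just a sketch with invented data) is to group interviews by their exact ranking and compare the average score within each group:

```python
from collections import defaultdict

# Invented data: each interview has a total score and the interviewee's
# ranking of the three problems, most severe first.
interviews = [
    {"total": 32, "ranking": ("onboarding", "reporting", "billing")},
    {"total": 28, "ranking": ("onboarding", "reporting", "billing")},
    {"total": 12, "ranking": ("billing", "onboarding", "reporting")},
]

# Group interviews by identical rankings, then average the scores per group.
groups = defaultdict(list)
for interview in interviews:
    groups[interview["ranking"]].append(interview["total"])

for ranking, totals in groups.items():
    average = sum(totals) / len(totals)
    print(f"{ranking}: n={len(totals)}, average score={average:.1f}")
```

A sizable group with consistently high scores and the same ranking is the signal you’re after; scattered rankings with middling scores suggest you haven’t found the segment yet.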

The One Metric That Matters

We’ve talked about the One Metric That Matters before and it’s important to think about it even at this early stage in the Lean Startup process. The OMTM at this point is pain—specifically, the pain your interviewees feel related to the problems you’ve presented. It’s largely qualitative, but scoring interviews may put things into perspective in a more analytical way, allowing you to step back and not get lost in or fooled by all the interviews.

So are you ready to help us?

Here’s the thing: we’d love to speak with people who are currently in the middle of doing Problem Interviews, and have them try out our scoring methodology. We need feedback here to iterate and improve the concept for the book.

So if you’d like to help, please contact us or reply in the comment thread below.


  • http://twitter.com/TriKro Tristan Kromer

    Interesting concept. I would like to see reams of data backing this up as a valid scoring method. Particularly since you are suggesting a baseline that people will take as fact.

    • http://twitter.com/byosko Ben Yoskovitz

      Tristan – thank you for stopping by and commenting. We’re working on collecting data; we won’t have *reams* of it (by the time the book is published) but we wanted to present the idea and get feedback. Ultimately we know that people have a hard time analyzing qualitative data from customer interviews — our goal is to open up a discussion around how to better do that.

  • http://www.skmurphy.com/ skmurphy

    I have a lot of trouble with the approach that you are outlining. I understand the value of being able to “put some numbers” on a pile of anecdotes, but I don’t think this approach is helpful and may in fact be harmful.

    1. This may be splitting hairs but I think you should focus on one problem. You may explore facets of a problem and offer other potential problems for their consideration to make sure that the problem you are talking about is a significant one for them, but I think you should focus on one problem.

    2. I think you need to make a distinction between those who have spent effort to solve the problem and are still looking for a better solution, and those who have spent effort/dollars to solve it and feel that they have a satisfactory solution. If they believe they have a satisfactory solution they should certainly rank the problem low in #1.

    3. This one sounds more like an interviewing skill diagnostic, or may offer insight into your ability to ask the right questions. A better test is: would they agree to a second interview?

    4. I do a lot of problem interviews and it’s extremely rare that someone will refer someone else for an interview when you are still at the problem discovery phase. I would delete this question, or substitute “were they willing to refer you to other folks?” But I am not sure at the problem stage how much of an indicator this is.

    5. This one seems to have no place in a discussion of the problem. In fact I think you risk contaminating your interview if you are truly focused on understanding the problem and you start to talk about your solution vs. elements of a solution or constraints on a useful solution.

    OMTM – I look for how much pain they appear to be in with the problem as an indicator that we are targeting a worthwhile problem. You should consider putting a number on that.

    I have a lot of respect for your work and for other things that you have written in this area, but I worry you have put the cart before the horse. Instead of proposing what I have to believe is a tentative rather than well-founded solution, I would ask people how they have attempted to score interviews. I also think points may be a later refinement; I would look for binary or presence indicators. You may be trying to collapse different dimensions into a single score where it may be a multi-step, multi-phase process. At least that’s how I look at it: in the beginning, finding people with relevant experience/expertise related to the problem (people who “should have” the need) is normally the hardest problem.

    Then determining if there are refinements needed to your selection criteria so that you can reliably talk to folks who have or who have had the problem. Then solution constraint discussions, then how much they would pay for it.

    I have a B2B focus so there may be a difference between products aimed at consumer impulse purchase (where many startups seem to treat problem interviews as push polls), consumer considered purchase, and B2B complex or orchestrated sale.

  • http://www.facebook.com/perkins2099 Adam Perkins

    I think if someone is offering to toss money at you, it should be some kind of special case: an autopass. That is assuming you are asking the right questions / problems which align with your vision.

    Definitely a good idea to have concepts ranked by importance. If you aren’t getting good indicators then rethinking things is good.

  • Marina

    Actually, it was very good to read about something that is really hurting me. I’m doing a lot of interviews, not only problem interviews, and at first I was trying to put all the information in Excel, on Post-its, on a whiteboard… and there was no way to measure all the different opinions. Reading this, I said: “Great. I’m not the only person with this problem.”

    • http://www.instigatorblog.com/ Benjamin Yoskovitz

      If you do end up using this system or any scoring system, I hope you’ll share the results and your insights with us.

  • http://bookreviewsbykevinkauzlaric.com/ Kevin Kauzlaric

    I like this approach to analyzing customer problems. I’ve just finished 10 customer interviews and am now trying to analyze the results so far. What about adding a “pain” level question after the ranking question? Ranking by severity is good, but maybe there needs to be the actual level of severity assigned to each problem. I’ve been asking customers to tell me the pain level of each problem on a scale of 1-3: 1=trivial pain, 3=severe pain. However, this may be accounted for in the other questions after the ranking question, many of which act as proxies for the triviality/severity of pain, so this may be unnecessary. Maybe I should just ask for their perception of the severity of pain for the simple purpose that it opens up their thought pattern to me and leads to interesting insights.

    • http://twitter.com/byosko Ben Yoskovitz

      Kevin – thanks for stopping by and commenting. If you do use a scoring method (ours or a modified version of your own making) please let me know if it helps.
