1,000 (How we’re validating the opportunity for Lean Analytics book)

When Alistair and I first started promoting Lean Analytics, we set ourselves a target of adding 1,000 subscribers to our mailing list by the end of August. At this stage of promoting the book, the “Number of subscribers” is our One Metric That Matters, and one thousand is our line in the sand.

Some might consider this a vanity metric, but it’s not—for us, it’s a measurement of interest in the book and its subject matter. It’s also an indication of our ability to contact a number of people with survey questions, additional content, and more so that the book meets the expectations of our intended audience.

Sure, the number goes up continuously (unless we fail miserably and more people unsubscribe than sign up), and it’s not a ratio or rate (which we’ve said is important for a good metric.) But ultimately it’s a good measure for testing our initial MVP (which is basically the website, plus the content we’ve produced on the site and through the mailing list.)

So why 1,000?

Like any line in the sand, it’s a combination of guesswork, aspiration, and judgement about what’s challenging but still attainable. We also spoke with other authors about how well they did building up their mailing lists, and we have our own experience doing this for other businesses, so we had some rough benchmarks to compare against. It’s also a leading indicator of things like survey completions and pre-orders that could help us break even.

Drawing a line in the sand is really important. Without that line, you can’t tell whether you’re making progress fast enough to meet your goals. The line also tells you how much effort to invest in your current course and direction: whether to double down, or whether to try something else.

If we had hit 100,000 subscribers in a couple of months, we would have known we were onto something. We’d be able to convince our families that we really needed to skip important vacations in order to write; or start scheduling book tours in advance; or charge more for speaking engagements. Similarly, if we had hit only 100 subscribers, we would have known it was a failure.

Most startups fall into the murky middle—the metrics they’re tracking are neither home runs nor dismal failures. Picking a target helps define success and failure, and gives you the opportunity to be honest with yourself. Are things going well or not? Should I continue or not?

This is a big part of succeeding with Lean Analytics. Draw a line in the sand (and remember, it’s in the sand, so it’s moveable), run some experiments, measure your results, and learn from them. Focus on actionable goals that derive from the metrics that you are tracking.

So how did we do?

Unfortunately, we missed our target. By the end of August we had 906 subscribers. We were off by about 10%. Not bad, but disappointing nonetheless. In evaluating the result versus our target, Alistair and I asked a pretty important question, “Do we think there’s enough interest in Lean Analytics to make the book a success?” And if there is enough interest, why didn’t we hit our target or surpass it? The number alone isn’t enough to give us the answers; but it’s a good starting point for the discussion.

Alistair and I concluded the following:

  • The qualitative feedback we’ve received since announcing the book has been very strong. A lot of it has come from our peers (which you have to discount a bit), but a lot of it has come from strangers too. That means (a) we’re reaching into new audiences we haven’t tapped into yet; and (b) there’s unbiased interest there that’s meaningful.
  • Some of the attention has involved speaking engagements we’ve been invited to, which suggests that we’ll be able to spread the word through those organizations as well.
  • Our survey response rate was amazing. People who signed up really cared, and took the time to give us their thoughts. Response rates of 75% are an excellent sign of engagement.
  • We reviewed the 750+ survey responses we got (the survey pops up after you sign up for updates) and did a bit of lightweight quantitative analysis on people’s interest in Lean Analytics. This gave us good insight into what people care about (validating a number of our hypotheses.) It also helped us trim some things we didn’t want to do.
  • We could have done more to juice our subscription numbers, but we didn’t have the time. For example, we had plans for experimenting with paid advertising, which we didn’t get to (but probably will at some point.) And we also had to spend a lot of time writing the book (which takes away from marketing the book.)
  • Both of us were fairly confident we could have hit our target of 1,000 subscribers had we put more effort in. At the same time, that effort was better spent on other things: writing the book (and in many ways changing what we had written and planned to write based on good qualitative feedback from existing subscribers), debating ideas, planning future marketing campaigns, etc.
  • The number we did hit is pretty close to the target we set. Had we only hit a few hundred, panic would have set in. But we were close enough, considering the effort we put in, to justify continuing with the project.

As you can see, we’ve got a mix of qualitative and quantitative analysis going on. There’s a bit of guesswork, gut feel, and intellectual honesty thrown into the mix, and that’s common when evaluating almost any metric, particularly at an early stage. You’re not dealing solely in facts, but without any facts at all, you’re not giving yourself the right framework for learning and adapting quickly enough.

Side note: As of this moment, we’re at 945 subscribers. We’ve added 39 subscribers in September so far, which is ~3.25 subscribers per day. In August we added 142 subscribers, which is ~4.6 subscribers per day. Our subscription rate is dropping. That’s not surprising, but it’s a number we’re keeping an eye on. September will provide a decent benchmark we can compare against in future months.

How To Score Problem Interviews During the Lean Startup Process

If you’re a fan of Lean Startup, you know about Problem Interviews. They’re the stage where you first go out and speak to people you think might be customers, and try to determine whether they have the problem you want to solve.

This post isn’t about the interviews themselves—there’s a ton of good thinking on how to conduct them already out there, and for many entrepreneurs they’re a rite of passage where you realize that your worldview is radically different from the market reality, and (hopefully) adjust accordingly.

But I do want to float a somewhat controversial idea, and get some feedback.

Yes. This is where YOU participate.

I want to talk about scoring interviews. It’s something we’re working on for the book; and it’s surprisingly thorny and controversial. Consider this a sneak peek, but also an opportunity to help us run an experiment. Let’s start with the idea first.

How to score interviews

Problem Interviews are designed to collect qualitative data. They’re meant to indicate strongly (or not) that the problem(s) you’re looking to solve are worth pursuing. They’re hard to do well, and take lots of practice and discipline to master. If you do them right, you’re left with a ton of insight into your customers’ needs and thoughts.

Unfortunately, those reams and reams of notes are messy. Interpreting and sharing qualitative data is hard, and often subjective.

So we want to try and score them. Scoring interviews is designed to help you quantify your results, without getting overly scientific.

The challenge here is that you can’t beat a forest of qualitative data into a carefully manicured lawn of quantitative data. We’re not even going to try that. And we’re also not proposing that you go overboard with this method: if you’re not good at collecting and interpreting qualitative data, it’s going to be difficult to get very far at all (through the Lean process or through any startup.) But our hope is that this method helps coalesce things a bit more, giving you some clarity when analyzing the results of your efforts.

During the Problem Interviews, there are a few critical pieces of information that you should be collecting. I’ll go through those below and show you how to score them.

1. Did the interviewee successfully rank the problems you presented?

  • Yes: 10 points
  • Sort of: 5 points
  • No: 0 points

During a Problem Interview you should be presenting multiple problems to the interviewee—let’s say 3 for the purposes of this post—and asking them to rank those problems in order of severity.

  • If they did so with a strong interest in the problems (irrespective of the ranking) that’s a good sign. Score 10 points.
  • If they couldn’t decide which problem was really painful, but they were still really interested in the problems, that’s OK but you’d rather see more definitive clarity. Score 5 points.
  • If they struggled with this, or they spent more time talking about other problems they have, that’s a bad sign. Score 0 points.

It’s important to note that during the interview process, you’re very likely to discover different problems that interest interviewees. That’s the whole point of doing these interviews, after all. That will mean a poor score (for the problem you thought you were going to solve), but not a poor interview. You may end up discovering a problem worth solving that you’d never thought about, so stay open-minded throughout the process.

2. Is the interviewee actively trying to solve the problems, or have they done so in the past?

  • Yes: 10 points
  • Sort of: 5 points
  • No: 0 points

The more effort the interviewee has put into trying to solve the problems you’re discussing, the better.

  • If they’re trying to solve the problem with Excel and fax machines, you may have just hit on the Holy Grail. Score 10 points.
  • If they spend a bit of time working around the problem, but just consider it the price of doing their job, they’re not really trying to fix it. Score 5 points.
  • If they don’t really spend time tackling the problem, and are okay with the status quo, it’s not a big problem. Score 0 points.

3. Was the interviewee engaged and focused throughout the interview?

  • Yes: 8 points
  • Sort of: 4 points
  • No: 0 points

Ideally your interviewees were completely engaged in the process: listening, talking (being animated is a good thing), leaning forward, and so on. After enough interviews you’ll know the difference between someone who’s focused and engaged and someone who’s not.

  • If they were hanging on your every word, finishing your sentences, and ignoring their smartphone, score 8 points.
  • If they were interested, but showed distraction or didn’t contribute comments unless you actively solicited them, score 4 points.
  • If they tuned out, looked at their phone, cut the meeting short, or generally seemed entirely detached—like they were doing you a favor by meeting with you—score 0 points.

4. Did the interviewee refer others to you for interviews?

  • Yes, without being asked: 4 points
  • Yes, when you asked them to: 2 points
  • No: 0 points

At the end of every interview, you should be asking all of your subjects for others you should talk with. They have contacts within their market, and can give you more data points and potential customers. There’s a good chance the people they recommend are similar in demographics and share the same problems.

Perhaps more importantly at this stage, you want to see if they’re willing to help out further by referring people in their network.  This is a clear indicator that they don’t feel sheepish about introducing you, and that they think you’ll make them look smarter. If they found you annoying, they likely won’t suggest others you might speak with.

  • If they actively suggested people you should talk to without being asked, score 4 points.
  • If they suggested others at the end, in response to your question, score 2 points.
  • If they couldn’t recommend people you should speak with, score 0 points (and ask yourself some hard questions about whether you can reach the market at scale.)

5. Did the interviewee offer to pay you immediately for the solution?

  • Yes, without being asked: 4 points
  • Yes, when asked: 2 points
  • No: 0 points

Although having someone offer to pay or throw money at you is more likely during the Solution Interviews (when you’re actually walking through the solution with people), this is still a good “gut check” moment. And certainly it’s a bonus if people are reaching for their wallets.

  • If they offered to pay you for the product without being asked, and named a price, score 4 points.
  • If they offered to pay you for the product when you asked, score 2 points.
  • If they didn’t offer to buy and use it, score 0 points.

Calculating the scores

A score of 25 or higher is a good score. Anything under is not. Try scoring all the interviews, and see how many have a good score. This is a decent indication of whether or not you’re onto something with the problems you want to solve. Then ask yourself what makes the high-scoring interviews different from the low-scoring ones. Maybe you’ve identified a market segment; maybe you have better results when you dress well; maybe you shouldn’t do interviews in a coffee shop. Everything is an experiment you can learn from.

You can also tally the rankings for the problems that you presented. If you presented three problems, which one had the most first-place rankings? That’s where you’ll want to dig in further and start proposing solutions (during Solution Interviews.)

The best-case scenario is very high interview scores within a subset of interviewees who all ranked the problems the same (or very similar) way. That should give you more confidence that you’ve found the right problem and the right market.
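To make the arithmetic concrete, here’s a minimal sketch of the scoring in Python. The data structure and names are just our illustration (use a spreadsheet if you prefer); the point values and the 25-point threshold are the ones from the rubric above.

    # A minimal sketch of the scoring rubric above. The structure and names
    # are illustrative; the points and threshold come from the rubric itself.

    # Points per question, indexed as (yes, sort of / when asked, no).
    RUBRIC = {
        "ranked_problems": (10, 5, 0),   # 1. ranked the problems?
        "solving_already": (10, 5, 0),   # 2. trying to solve them already?
        "engaged":         (8, 4, 0),    # 3. engaged and focused?
        "referred_others": (4, 2, 0),    # 4. referred others?
        "offered_to_pay":  (4, 2, 0),    # 5. offered to pay?
    }
    LEVELS = {"yes": 0, "sort_of": 1, "no": 2}
    PASS_THRESHOLD = 25  # "A score of 25 or higher is a good score."

    def score_interview(answers):
        """answers maps each question to 'yes', 'sort_of', or 'no'."""
        return sum(RUBRIC[q][LEVELS[a]] for q, a in answers.items())

    # Example: a strong interview where the referral only came when asked.
    interview = {
        "ranked_problems": "yes",
        "solving_already": "yes",
        "engaged": "yes",
        "referred_others": "sort_of",  # referred others, but only when asked
        "offered_to_pay": "no",
    }
    total = score_interview(interview)
    print(total, "good" if total >= PASS_THRESHOLD else "not good")  # 30 good

Run every interview through this, and comparing the high scorers to the low scorers becomes a sorting exercise rather than a re-reading of all your notes.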

The One Metric That Matters

We’ve talked about the One Metric That Matters before and it’s important to think about it even at this early stage in the Lean Startup process. The OMTM at this point is pain—specifically, the pain your interviewees feel related to the problems you’ve presented. It’s largely qualitative, but scoring interviews may put things into perspective in a more analytical way, allowing you to step back and not get lost in or fooled by all the interviews.

So are you ready to help us?

Here’s the thing: we’d love to speak with people who are currently in the middle of doing Problem Interviews, and have them try out our scoring methodology. We need feedback here to iterate and improve the concept for the book.

So if you’d like to help, please contact us or reply in the comment thread below.

The One Metric That Matters

One of the things Ben and I have been discussing a lot is the concept of the One Metric That Matters (OMTM) and how to focus on it.

Founders are magpies, chasing the shiniest new thing they see. Many of us use a pivot as an enabler for chronic ADD, rather than as a way to iterate through ideas in a methodical fashion.

That means it’s better to run the risk of over-focusing (and miss some secondary metric) than it is to throw metrics at the wall and hope one sticks (the latter is what Avinash Kaushik calls Data Puking.)

That doesn’t mean there’s only one metric you care about from the day you wake up with an idea to the day you sell your company. It does, however, mean that at any given time, there’s one metric you should care about above all else. Communicating this focus to your employees, investors, and even the media will really help you concentrate your efforts.

There are three criteria you can use to help choose your OMTM: the business you’re in; the stage of your startup’s growth; and your audience. There are also some rules for what makes a good metric in general.

First: what business are you in?

We’ve found there are a few big business-model Key Performance Indicators (KPIs) that companies track, dictated largely by the main goal of the company. Most online businesses are transactional, collaborative, SaaS-based, media, game, or app-centric. I’ll explain each.

Transactional

Someone buys something in return for something.

Transactional sites are about shopping cart conversion, cart size, and abandonment. This is the typical transaction funnel that anyone who’s used web analytics is familiar with. To be useful today, however, it should be a long funnel that includes sources, email metrics, and social media impact. Companies like Kissmetrics and Mixpanel are championing this approach these days.
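As a rough sketch of what those numbers look like in practice, here’s the basic arithmetic on a hypothetical funnel (the stage names and counts are invented for illustration):

    # Hypothetical long-funnel counts, from visit through purchase.
    funnel = [
        ("visited",          10_000),
        ("added_to_cart",     1_200),
        ("started_checkout",    600),
        ("purchased",           420),
    ]

    visits = funnel[0][1]
    carts = dict(funnel)["added_to_cart"]
    purchases = funnel[-1][1]

    print(f"overall conversion: {purchases / visits:.1%}")     # 4.2%
    print(f"cart abandonment: {1 - purchases / carts:.1%}")    # 65.0%

    # Stage-to-stage drop-off shows where the funnel leaks most.
    for (prev, p), (stage, n) in zip(funnel, funnel[1:]):
        print(f"{prev} -> {stage}: {n / p:.1%}")

A long funnel simply extends the first stage back through campaigns, email, and social referrals, so you can attribute that overall conversion to the sources that earned it.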

Collaborative

Someone votes, comments, or creates content for you.

Collaboration is about the amount of good content versus bad, and the percent of users that are lurkers versus creators. This is an engagement funnel, and we think it should look something like Charlene Li’s engagement pyramid.

Collaboration varies wildly by site. Consider two companies at opposite ends of the spectrum. Reddit probably has a very high percentage of users who log in: it’s required to upvote posts, and the login process doesn’t demand an email confirmation loop, so anonymous accounts are permitted. On the other hand, an adult site likely has a low rate of sign-ins; the content is extremely personal, and nobody wants to share their email details with a site they may not trust.

On Reddit, there are several tiers of engagement: lurking, voting, commenting, submitting links, and creating subreddits. Each of these represents a degree of collaboration by a user, and each segment represents a different lifetime customer value. The key for the site is to move as many people into the more lucrative tiers as possible.
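As a sketch of what measuring that movement might look like, here’s the lurker-versus-creator split computed over Reddit-style tiers (the user counts are invented):

    # Hypothetical monthly active users in each engagement tier,
    # ordered from least to most engaged (Reddit-style).
    tiers = {
        "lurking":              80_000,
        "voting":               12_000,
        "commenting":            5_000,
        "submitting_links":      2_500,
        "creating_subreddits":     500,
    }

    total = sum(tiers.values())
    for tier, users in tiers.items():
        print(f"{tier:>20}: {users / total:.1%} of users")

    # The number to move: what fraction contributes anything at all?
    contributors = total - tiers["lurking"]
    print(f"contributors: {contributors / total:.1%}")  # 20.0%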

SaaS

Someone uses your system, and their productivity means they don’t churn or cancel their subscription.

SaaS is about time-to-complete-a-task, SLA, and recency of use; and maybe uptime and SLA refunds. Companies like Totango (which predicts churn and upsell for SaaS), as well as uptime transparency sites like Salesforce’s trust.salesforce.com, are examples of this. There are good studies that show a strong correlation between site performance and conversion rates, so startups ignore this stuff at their peril.
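Churn is usually the headline number here. A minimal sketch of the monthly calculation, with invented figures:

    # Monthly churn: the fraction of customers you started the month
    # with who cancelled during it. Figures are hypothetical.
    start_of_month_customers = 2_000
    cancellations = 90

    monthly_churn = cancellations / start_of_month_customers
    print(f"monthly churn: {monthly_churn:.1%}")  # 4.5%

    # Compounded over a year, even modest monthly churn bites hard.
    retained_after_year = (1 - monthly_churn) ** 12
    print(f"retained after 12 months: {retained_after_year:.1%}")  # ~57.5%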

Media

Someone clicks on a banner, pay-per-click ad, or affiliate link.

Media is about time on page, pages per visit, and clickthrough rates. That might sound pretty standard, but the variety of revenue models can complicate things. Consider Pinterest’s affiliate URL rewriting model, for example, which requires that the site take into account the likelihood someone will actually buy a thing, as well as the percentage of clickthroughs (see also this WSJ piece on the subject.)

Game

Players pay for additional content, time savings, extra lives, in-game currencies, and so on.

Game startups care about monthly Average Revenue Per User (ARPU) and lifetime ARPU. Companies like Flurry do a lot of work in this space, and many application developers roll their own code to suit the way their games are used.
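In code, those two numbers reduce to a couple of divisions. This sketch uses invented figures, and estimates player lifetime from churn (a common shortcut, not a rule from this post):

    # Monthly ARPU: revenue divided by active users. Figures are invented.
    monthly_revenue = 48_000.00       # in-game purchases this month, in $
    monthly_active_users = 60_000

    arpu_per_month = monthly_revenue / monthly_active_users
    print(f"monthly ARPU: ${arpu_per_month:.2f}")  # $0.80

    # Lifetime ARPU: monthly ARPU times the average player lifetime,
    # here approximated as 1 / monthly churn.
    monthly_churn = 0.10
    lifetime_months = 1 / monthly_churn           # ~10 months
    print(f"lifetime ARPU: ${arpu_per_month * lifetime_months:.2f}")  # $8.00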

Game developers walk a fine line between compelling content, and in-game purchases that bring in money. They need to solicit payments without spoiling gameplay, keeping users coming back while still extracting a pound of flesh each month.

App

Users buy and install your software on their device.

App is about the number of users, the percentage who have loaded the most recent version, uninstalls, sideloading versus app store installs, and ratings and reviews. Ben and I saw a lot of this with High Score House and Localmind while they were in Year One Labs. While the category is similar to SaaS, there are enough differences that it deserves its own bucket.

App marketing is also fraught with grey-market promotional tools. A large number of downloads makes an application more prominent in the App Store. Because of this, some companies run campaigns to artificially inflate download numbers using mercenaries. This gets the application some visibility, which in turn gives them legitimate users.

It’s not that simple

No company belongs in just one bucket. A game developer cares about the “app” KPI when getting users, and the “game” or “SaaS” KPI when keeping them; Amazon cares about “transactional” KPIs when converting buyers, but also “collaboration” KPIs when collecting reviews.

There are also some “blocking and tackling” metrics that are basic for all companies (and many of which are captured in lists like Dave McClure’s Pirate Metrics.)

  • Viral coefficient (how well your users become your marketers; see the sketch after this list.)
  • Traffic sources and campaign effectiveness (the SEO stuff, measuring how well you get attention.)
  • Signup rates (how often you get permission to contact people; and the related bounce rate, opt-out rate, and list churn.)
  • Engagement (how long since users last used the product) and churn (how fast someone goes away). Peter Yared did a great job explaining this in a recent post on “Little Data.”
  • Infrastructure KPIs (cost of running the site; uptime; etc.) This is important because it has a big impact on conversion rates.
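Of these, the viral coefficient is the one most worth showing as arithmetic: it’s the number of new users each existing user brings in, and anything above 1.0 means the product grows on its own. A sketch with invented figures:

    # Viral coefficient = invites sent per user x invite conversion rate.
    # All figures are hypothetical.
    users = 1_000
    invites_sent = 4_000        # invitations those users sent out
    invite_conversion = 0.12    # fraction of invites that became users

    viral_coefficient = (invites_sent / users) * invite_conversion
    print(f"viral coefficient: {viral_coefficient:.2f}")  # 0.48

    # Below 1.0, virality subsidizes acquisition; above 1.0, it replaces it.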

Second: what stage are you at?

A second way to split up the OMTM is to consider the stage that your startup is at.

Attention, please

Right away you need attention generation to get people to sign up for your mailing list, MVP, or whatever. This is usually a “long funnel” that tracks which proponents, campaigns, and media drive traffic to you; and which of those are best for your goals (mailing list enrollment, for example.)

We did quite a lot of this when we launched the book a few weeks ago using Bit.ly, Google Analytics, and Google’s URL shortener. We wrote about it here: Behind the scenes of a book launch

Spoiler alert: for us, at least, Twitter beats pretty much everything else.

What do you need?

Then there’s need discovery. This is much more qualitative, but there are quantitative metrics to track: survey completions, which fields aren’t being answered, top answers, and which messages generate more interest or discussion. For many startups, this will be something like “how many qualitative surveys did I do this week?”

On a slightly different note, there’s also the number of matching hits for a particular topic or term—for example, LinkedIn results for lawyers within 15km of Montreal—which can tell you how big your reachable audience is for interviews.

Am I satisfying that need?

There’s MVP validation—have we identified a product or service that satisfies a need? Here, metrics like amplification (how much does someone tell their friends about it?), Net Promoter Score (would you recommend it to your friends?), and Sean Ellis’ One Question That Matters (from Survey.io—”How would you feel if you could no longer use this product or service?”) are useful.
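Net Promoter Score, at least, is easy to pin down: the percentage of promoters (answers of 9 or 10 to “how likely are you to recommend us?”) minus the percentage of detractors (0 through 6). A quick sketch with invented responses:

    # NPS from 0-10 "how likely are you to recommend us?" answers.
    # The responses below are invented.
    responses = [10, 9, 9, 8, 7, 10, 6, 9, 3, 8, 10, 7]

    promoters = sum(1 for r in responses if r >= 9)    # 6
    detractors = sum(1 for r in responses if r <= 6)   # 2
    nps = 100 * (promoters - detractors) / len(responses)
    print(f"NPS: {nps:.0f}")  # (6 - 2) / 12 -> 33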

Increasingly, companies like Indiegogo and Kickstarter are ways to launch, get funding, and test an idea all at the same time, and we’ll be looking at what works there in the book. Meanwhile, Ben found this excellent piece on Kickstarter stats. We’re also talking with the guys behind Pen Type A about their experiences (and I have a shiny new pen from them sitting on the table; it’s wonderful.)

Am I building the right things?

Then there’s feature optimization. As we figure out what to build, we need to look at things like how much a new feature is being used, and whether adding the feature for a particular cohort or segment changes something like signup rates, time on site, and so on.

This is an experimentation metric—obviously, the business KPI is still the most important one—but the OMTM is the result of the test you’re running.

Is my business model right?

There’s business model optimization. When we change an aspect of the service (charging by month rather than by transaction, for example), what does that do to our essential KPIs? This is about whether you can grow or hire, and whether you’re getting the organic growth you expected.

Later, many of these KPIs become accounting inputs—stuff like sales, margins, and so on. Lean tends not to touch on these things, but they’re important for bigger, more established organizations who have found their product/market fit, and for intrapreneurs trying to convince more risk-averse stakeholders within their organization.

Third: who is your audience?

A third way to think about your OMTM is to consider the person you’re measuring it for. You want to tailor your message to your audience. Some things you share internally won’t help you in a board meeting; some metrics the media will talk about are just vanity content that won’t help you grow the business or find product/market fit.

For a startup, audiences may include:

  • Internal business groups, trying to decide on a pivot or a business model
  • Developers, prioritizing features and making experimental validation part of the “Lean QA” process
  • Marketers optimizing campaigns to generate traffic and leads
  • Investors, when we’re trying to raise money
  • Media, for things like infographics and blog posts (like what Massive Damage did.)

What makes a good metric?

Let’s say you’ve thought about your business model, the stage you’re at, and your audience. You’re still not done: you need to make sure you’ve picked a good metric. Here are some rules of thumb for choosing a number that will produce the changes you’re looking for.

  • A rate or a ratio rather than an absolute or cumulative value. New users per day is better than total users (see the sketch after this list.)
  • Comparative to other time periods, sites, or segments. Increased conversion from last week is better than “2% conversion.”
  • No more complicated than a golf handicap. Otherwise people won’t remember and discuss it.
  • For “accounting” metrics you use to report the business to the board, investors, and the media, something which, when entered into your spreadsheet, makes your predictions more accurate.
  • For “experimental” metrics you use to optimize the product, pricing, or market, choose something which, based on the answer, will significantly change your behaviour. Better yet, agree on what that change will be before you collect the data.
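To illustrate the first rule, here’s a tiny sketch (with invented numbers) of why a rate beats a cumulative value: a total-users chart only ever goes up, while the daily rate hiding inside it can be falling the whole time.

    # Hypothetical daily snapshots of total users. The cumulative series
    # looks healthy; the daily rate derived from it tells the real story.
    total_users = [1000, 1100, 1180, 1240, 1280, 1300]

    new_per_day = [b - a for a, b in zip(total_users, total_users[1:])]
    print(new_per_day)  # [100, 80, 60, 40, 20] -- growth is slowing fast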

The squeeze toy

There’s another important aspect to the OMTM. And I can’t really explain it better than with a squeeze toy.


If you optimize your business to maximize one metric, something important happens. Just like a bulging stress-relief squeeze toy, squeezing it in one place makes it bulge out in others. And that’s a good thing.

A smart CEO I worked with once asked me, “Alistair, what’s the most important metric in the business right now?”

I tried to answer him with something glib and erudite. He just smiled knowingly.

“The one that’s most broken.”

He was right, of course. That’s what focusing on the OMTM does. It squeezes that metric, so you get the most out of it. But it also reveals the next place you need to focus your efforts, which often happens at an inflection point for your business:

  • Perhaps you’ve optimized the number of enrolments in your gym—but now you need to focus on cost per customer so you turn a profit.
  • Maybe you’ve increased traffic to your site—but now you need to maximize conversion.
  • Perhaps you have the foot traffic in your coffee shop you’ve always wanted—but now you need to get people to buy several coffees rather than just stealing your wifi for hours.*

Whatever your current OMTM, expect it to change. And expect that change to reveal the next piece of data you need to build a better business faster.

(* with apologies to the excellent Café Baobab in Montreal, where I’m doing exactly that.)