Putting a Pause on the Lean Analytics Backchannel

When you run an experiment, as Alistair and I did with the Lean Analytics Backchannel, you have to be prepared for it to fail. In fact, if you run an experiment that can’t fail, it’s not really an experiment.

Alistair and I launched the Backchannel because we believe there’s an ongoing opportunity to help people with Lean Analytics. The book is great (and we know it’s helped people), but it’s not enough: it’s a reference guide to get you started, and everyone needs help once in a while. The Backchannel was our way of addressing that.

We also set up a pre-order page and decided to charge right away in an effort to de-risk things for us. Ultimately we didn’t get to our threshold of paying customers quickly enough, so we’ve decided to put the Backchannel on pause and look to new opportunities.

One of the good things about launching experiments is that you’re guaranteed to learn something. We learned that there’s a market for Lean Analytics training, help, and consulting (we’ve seen an increase in speaking and consulting opportunities, for example, and quite a few customers did pre-order), but that the form factor of the Backchannel isn’t the right fit. At least not without rethinking or rejigging it in some way.

We also recognized that the test itself had issues. The “product” (a “private community of sorts”) was somewhat vague, which honestly was on purpose, because we weren’t 100% sure how it would manifest. The price point was partly guesswork and partly calculated from the number of customers we wanted signed up and the revenue we wanted to earn from the initiative. Our landing page wasn’t fantastic either. This was all feedback we got from people who signed up and from people who didn’t; we interviewed and surveyed both groups.

So we learned a lot. And that in fact is the point of an experiment: to learn.

Alistair and I are now taking those learnings and figuring out what’s next. We believe we’ve validated the problem (people want more Lean Analytics information, practical guidance & support), but the solution was wrong. So we’re going to keep experimenting with new solutions. We have a few ideas–nothing we can share immediately–but you’re likely going to see more activity on this site in the near future as we look to find ways of helping people more.

To those of you who did pre-order: thank you very much for your support! I look forward to connecting with you all so Alistair and I can keep learning what you want and how we can serve you.

Introducing the Lean Analytics Backchannel

Lean Analytics has been out for 3 years. It still amazes me how often people reach out to let us know that they’ve enjoyed the book. More importantly, people tell us that it’s helped them. And that’s awesome. Alistair and I wrote Lean Analytics to help people. We weren’t quite sure how good a job we would do, but I think it’s turned out fairly well.

Having said that, we’ve always felt like there’s more to this story than the book. We’ve done a lot of speaking engagements (and continue to do so!) and we’ve presented lots of material, but the world of analytics and Lean Startup continues to grow and evolve. A book isn’t a great medium for shifting ideas, new concepts, more exploration and so forth.

And that’s why Alistair and I are excited to launch the Lean Analytics Backchannel.


The Lean Analytics Backchannel is a private community of sorts (which we’re going to build initially off Slack and a few other tools) to bring together Lean Startup and Lean Analytics enthusiasts and practitioners. We want to go beyond the book and continue to explore how Lean Analytics can help startups and businesses of all sizes succeed using data.

The Lean Analytics Backchannel will be a direct line of communication to Alistair and myself. We’ll be actively participating, answering questions, creating new content and more. But we also believe a community will grow and people will share and help one another in amazing ways.

So what will be included in the Lean Analytics Backchannel?

On a regular basis, Alistair and I will:

  • Run Q&A events where we unpack startup case studies through live business model development
  • Share new content, case studies, articles, and benchmarks
  • Interview best-in-class data scientists, analytics practitioners, investors, and growth engineers
  • Conduct weekly public consulting sessions in plain view of the community

Members will also get:

  • Exclusive access to new slides, decks, diagrams, and more
  • Discounts to relevant startup events
  • A weekly newsletter summarizing the best in startup and analytics content
  • A shared table of the latest baselines and metrics from across the Web

Two more important things:

  • The Lean Analytics Backchannel costs $50/month (or $550 for the year)
  • We’re doing this in true Lean fashion as an experiment. We’ve launched a landing page to take pre-orders. If we hit our threshold of orders then we’ll launch the Backchannel. We’re aiming to launch by early April.

We’re excited about taking Lean Analytics to the next level. There’s so much more to do, learn and share, and we think the Backchannel is a great way of doing it. Along the way we’re going to experiment, learn and iterate–eating our own dog food and sharing our experiences throughout.

If you’re interested, please check out our Lean Analytics Backchannel Pre-order Page and sign-up today!

Seven onboarding mistakes you don’t want to make

Recently, Bayram Annakov of App in the Air wrote to us about how he’d used analytics and Lean approaches to improve his user onboarding, with some pretty dramatic results. He was kind enough to outline them here for all of us.

We build great apps, we solve critical problems, and we help our users achieve their goals. But you know what the real problem is? Despite all that, sometimes users simply don’t use our product—because we failed to get them on board. All that hard work goes to waste.

Here’s my hit-list of top mistakes that app developers make in onboarding:

Requiring that users register too early

Don’t force the user to register. Delay registration until it’s absolutely necessary, or you risk losing them entirely. Allow them to skip it, but ask again when the user takes a meaningful action. For example, iTunes invites you to register when you click “Buy.”


And when you do ask them, be sure to explain the benefits of registration, such as backing up their data or syncing between their devices.

Explaining the obvious

Get out of the user’s way. Literally. Don’t force them to watch ten slides just to explain how to use a calculator! Instead, show the interface and make it intuitive.

Grabbing for push notification or address book permission

I know getting permission to contact users is very important for retention. I know you want to notify (okay, spam) users to make them come back to your app. But hey—if you really want the user to give you their permission, don’t ask on the app’s first screen. Explain why you need it using a custom dialog (iOS does not allow customizing the permission text—that’s why you need to implement your own screen).

Or better yet, make the user perform an action that signals her desire to give you permission. Banking apps are a great example of this: You really want to get notified when any transaction is done with your credit card, so they likely have a very high notification permission acceptance rate at this stage.

This is what we do in App in the Air: we ask for push notification permission only after a user demonstrates his/her desire to receive flight status alerts.

Look how Snapchat prepares the user for address book permission. First, they explain why they need access:


Then, with this context, they hand things over to iOS to get the authorization.

Displaying all instructions at once

Some apps have a bad habit of displaying all their instructions and hints on a single, overwhelming screen. It’s far too much information, so the user remembers none of it—and gets the impression that the app is complex or hard to use. Instead, reveal features gradually as the user explores the app. Teach one thing at a time and let the user learn by doing.


Above is an example of how one feature-rich task management application achieves this.

Not giving them an option to skip

Users sometimes re-install the app, whether they’re purchasing a new gold iPhone 6, or just recovering from a backup. Whatever the case, you want to avoid annoying an expert user with your hundred-page tutorial. Let them skip. Save them time and they’ll thank you.

Displaying empty screens

Please don’t ever display empty screens to the user. Josh Elman, former product lead for growth at Twitter, calls this the “totally awesome blank screen of death.” Provide instructions, test thoroughly with edge cases, and make sure you avoid the kinds of empty screens that alienate and frustrate users.

Not measuring onboarding

This is perhaps the most important step, and the one most closely tied to Lean Analytics. It’s not enough to measure clickthroughs, or calls to action, or downloads. Your job isn’t done until your user has reached a point in their engagement process where they’re using the application as you intended.

Measure the number of users who successfully pass onboarding. Investigate why users drop, and tirelessly optimize the experience as much as possible. Remember, if your user can’t make it through onboarding, she won’t understand the power and functionality of your application. She definitely won’t use it, and you’ll miss the key leverage in growing your app to millions of users.
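To make the measurement concrete, here’s a minimal sketch of a funnel report in Python. The step names and counts are hypothetical, not real App in the Air data:

```python
# Hypothetical onboarding steps, in order. Each count is the number of
# users who reached that step.
ONBOARDING_STEPS = ["opened_app", "saw_tutorial", "granted_permissions",
                    "completed_first_action"]

def funnel_report(step_counts):
    """Return step-to-step conversion rates plus the overall rate."""
    report = []
    for prev, cur in zip(ONBOARDING_STEPS, ONBOARDING_STEPS[1:]):
        report.append((prev, cur, step_counts[cur] / step_counts[prev]))
    overall = step_counts[ONBOARDING_STEPS[-1]] / step_counts[ONBOARDING_STEPS[0]]
    return report, overall

counts = {"opened_app": 1000, "saw_tutorial": 900,
          "granted_permissions": 630, "completed_first_action": 540}
steps, overall = funnel_report(counts)
for prev, cur, rate in steps:
    print(f"{prev} -> {cur}: {rate:.0%}")
print(f"Overall onboarding conversion: {overall:.0%}")
```

The step with the largest drop-off is where to start investigating.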

Our experience

For App in the Air, fixing these mistakes helped us move the bottom of our funnel—onboarding conversion rate—from 50% to roughly 95%.

Josh Elman agrees—he thinks Twitter’s new onboarding process, which he covers in this video, is the secret behind Twitter’s growth from 10M users to 100M+ daily users: They taught users how to use Twitter without annoying or alienating them along the way.

Here’s some extra homework: check out this great compilation to see how popular web and mobile apps handle their sign-up experiences. Snapchat, in particular, works hard to make signing up not just smooth, but fun.

Traction: A Startup Guide to Getting Customers

“To reiterate, the biggest mistake startups make when trying to get traction is failing to pursue traction in parallel with product development.”

That’s a great quote from a new book called Traction: A Startup Guide to Getting Customers by Gabriel Weinberg and Justin Mares. It emphasizes something we talk a lot about in Lean Analytics: you can’t just build a product in a vacuum, without early and frequent customer feedback and engagement. Early traction in Lean Analytics is about proving whether you’ve found a problem worth solving. It’s what we call “Empathy” in the book, and it goes from there through Stickiness, Virality, Revenue and Scale.

In Lean Analytics we went into some of the tactics for acquiring and engaging users/customers, but that wasn’t our full focus. In the book Traction, you’ll get all of the practical how-tos for finding the right customer acquisition (or traction) channels and frameworks for how to discover the best channels, prioritize growth and traction strategies and more.

The authors interviewed 40+ very successful entrepreneurs, marketers, investors and executives to learn the best practices on building traction for your startup.

Traction doesn’t happen by accident. Sure, there’s some luck involved in everything, but in my experience it involves a lot of experimentation, iteration and grinding (read: hard work). You could throw a few things on social media, email a couple of bloggers and try to get on TechCrunch…*yawn*…and the tumbleweeds will still roll on by. No one will care. The folks who win at growth are those who dig deep, try new things, learn from others, measure things (let’s not forget the analytics!) and work crazy hard. Traction, the book, will be a good guide for anyone who wants to work hard at growth.

To get you started, Gabriel and Justin sent me the first three chapters that you can download for free. Get the first 3 chapters of Traction

Or just go ahead and buy the book directly: Traction: A Startup Guide to Getting Customers

Lean Lunch with Move The Needle

Last year, Ben and I presented a workshop at the International Startup Festival with Brant Cooper and Patrick Vlaskovitz, the co-authors of The Lean Entrepreneur. It was a highlight of the festival for us, and we realized the four of us have a lot in common—and a lot to learn from one another.

Fast-forward a year, and a couple of weeks ago, I got on a Google Hangout with the team from Move The Needle. The brainchild of Brant and Aaron Eden, MTN helps companies implement Lean strategies and customer development. Since I’ve spent much of the last year talking with large organizations about innovation myself, it felt like it was time to catch up with them about the past few months.

Here’s the video of the event.

Quantitative Interviews: the importance of scoring customer feedback

Back in December, Roger Huffstetler of Zillabyte contacted us to say he’d applied some of Lean Analytics to his startup, and wanted to fill us in on what happened. At the time, by his own admission, he was “up to my ass in alligators.” Now, a few months later, here is his story.

I had just completed my 30th demo of our product, and I remember leaving the feedback session on cloud nine. As I wrapped up showing our API to the potential customer, he suggested what we were building was “amazing.” Yes, I thought giddily, this is going to work.

It took me a few minutes to come down from the clouds and realize as a founder, I’d been here before…and while it felt good, it could also be—unfortunately—an indication of nothing.

This phenomenon of euphoria is well documented: business founders see what they want to see, or as Ben & Alistair of Lean Analytics might say, “small lies are essential to company founders.” These ‘lies’ are what keep you going and believing through the toughest times.

The difficulty, of course, is in facing reality and discerning hard truth from praise and fluff. In particular, when you’re in the midst of customer feedback interviews—hearing that your product is “great,” “helpful,” or “amazing”—identifying reality can be incredibly difficult. Qualitative feedback, while often complimentary, can be especially confusing.

It was at this point—post-interview, but realizing I’d been through this cycle before—that I stumbled upon the Pain Index part of Lean Analytics (page 170). With Ben and Alistair’s scoring system, you turn customer feedback into a quantitative process. You begin with a key customer development question: how do I know if the problem (we’re trying to solve) is really painful enough? Think of this method as the referee on the interview field, someone on the sideline keeping you honest. [We wrote this up in a blog post in 2012—AC&BY]

After Ben & Alistair introduce this question, they provide you with a set of six additional questions and a suggested scoring framework, as follows:

  1. Did the interviewees successfully rank the problems you presented? [Yes (6), Sort of (5), or No (0)]
  2. Is the interviewee actively trying to solve the problems, or has he done so in the past? [Yes (10), Sort of (5), or No (0)]
  3. Was the interviewee engaged and focused throughout the interview? [Yes (8), Sort of (4), or No (0)]
  4. Did the interviewee agree to a follow-up meeting/interview? [Yes (8), Yes, when asked (4), or No (0)]
  5. Did the interviewee offer to refer others to you for interviews? [Yes (4), Yes, when asked (2), or No (0)]
  6. Did the interviewee offer to pay you immediately for the solution? [Yes (3), Yes, when asked (1), or No (0)]

Under their system, a score of 31 or higher is a good score; anything below that, and you haven’t really nailed a painful problem. But here’s where the real insight comes in. Across all your interviews, look for a subset of scores that spike: that’s your early-adopter customer segment.
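As a sketch, the scoring above is easy to automate. The weights are transcribed from the six questions; the interviewees and their answers below are invented for illustration:

```python
# Weights per question: (yes, partial / "when asked", no), matching the
# six Pain Index questions from Lean Analytics.
WEIGHTS = [
    (6, 5, 0), (10, 5, 0), (8, 4, 0), (8, 4, 0), (4, 2, 0), (3, 1, 0),
]

def pain_score(answers):
    """answers: list of 'yes', 'partial', or 'no' for the six questions."""
    idx = {"yes": 0, "partial": 1, "no": 2}
    return sum(WEIGHTS[i][idx[a]] for i, a in enumerate(answers))

interviews = {  # hypothetical interviewees
    "dev_1":      ["yes", "yes", "yes", "yes", "partial", "no"],
    "marketer_1": ["partial", "no", "partial", "no", "no", "no"],
}
scores = {name: pain_score(a) for name, a in interviews.items()}

# Sort highest to lowest and look for the subset whose scores spike:
# that cluster is your likely early-adopter segment.
for name, s in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(name, s, "painful enough" if s >= 31 else "below threshold")
```

With these made-up answers, the developer scores 34 (above the 31 bar) and the marketer scores 9, which is exactly the kind of spike-versus-morass pattern described below.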

This process worked like a charm for us where qualitative feedback had failed to move us forward. When I scored the interviewees and sorted from highest to lowest, a pattern quickly emerged: the more technical the interviewee, the higher the score. This insight, that our customers were developers, was deeply buried in the morass of qualitative interviews, but scoring and sorting surfaced it at exactly the right time.

You may quibble with the specifics (so change them!), but the overall process is genius. This forcing function made a tremendous difference for us, as we worked to get through customer development.

The Lean Enterprise Experiment Canvas

This is a guest post from Eric Klaassen of Bloom, a consulting firm that helps companies grow online. We first met Bloom late last year in South Africa, and they’ve been pushing the envelope of applying Lean Startup concepts to big, established companies.

The success of the lean start-up methodology is increasingly resonating in large enterprises. Companies like Intuit, Amazon, GE and many others are implementing key principles of the lean start-up in order to deal with the complexity of their markets and the increasing speed of disruptive innovation.

Our company, BLOOM, exists to grow companies online. In doing so, we see companies achieve great results using lean start-up principles and tools. However, one challenge that larger companies in particular face is translating lean start-up theory to an enterprise setting. As the table below depicts, many differences between start-ups and enterprises are at the root of this challenge.

Table: differences between enterprises and start-ups

One method that has proven to be very applicable in both start-ups and large enterprises that are among our clients is the Lean Analytics Cycle. This process provides the much-needed rigour in translating fundamental business problems into metrics that matter, creating hypotheses in order to test them, and driving change in the business from everything you learn.

The fact that the lean analytics cycle can also be applied effectively in large enterprises triggered us to translate it into a tool, based on Javelin’s Experiment Board.

Lean Enterprise Experiment Canvas

Although the canvas is designed to be self-explanatory, it can be helpful to have a guide when using the canvas with your team. The text below explains how to use the Lean Enterprise Experiment Canvas.

The Lean Enterprise Experiment Canvas

1. Define most important metric and draw a line in the sand

Your most important metric is the one that allows you to track how changes in your products and services impact your business goals. For a start-up this means finding product-market fit, and a sustainable business model.

For a large company, the most important metric differs between departments. Customer service might focus on customer retention, while the IT department steers on the number of roadmap items delivered. What is the one metric that helps your department contribute to the overarching goal of the company?

Three criteria help choose the one metric that matters: the business you are in, the growth stage of a company and the audience.

Once you have decided which metric to focus on, it is important to define its current value before you start experimenting (i.e. set the baseline). If you don’t know the current value, go find out. If you can’t find out, develop the instrumentation to do so.

With the metric and its baseline in mind, set a target value for the metric in order to manage expectations across the team—in other words, draw a line in the sand for everyone to see. Make sure the target is ambitious, edging on the uncomfortable. It’s better to set high goals and not fully reach them than to aim low. This doesn’t mean that small achievements (e.g. a completed iteration) shouldn’t be celebrated. Celebrate quick wins to boost team morale, but never lose sight of the work that’s still ahead. And if the goal proves impossible, remember that the line is set in sand—not stone. If achieving it turns out to be too easy or too hard, you can change it at a later stage.

Additionally, we suggest you define a control variable to keep track of. This is advisable because experimentation comes with a certain level of risk; tracking a control ensures you are not improving one KPI at the cost of business as usual. For example: removing a complaint form will bring complaints down to zero, but it surely won’t improve customer satisfaction.
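A minimal sketch of this step in Python, with entirely made-up numbers (retention as the one metric, customer satisfaction as the control, and a 5% tolerance on the control):

```python
# Did we reach the line in the sand without degrading the control metric?
# The tolerance guards against normal noise in the control variable.
def moved_the_needle(metric_after, target, control_after, control_baseline,
                     control_tolerance=0.05):
    moved = metric_after >= target
    control_ok = control_after >= control_baseline * (1 - control_tolerance)
    return moved, control_ok

# Baseline retention was 40%; the line in the sand is 50%.
# CSAT (baseline 80%) is the control metric.
moved, control_ok = moved_the_needle(metric_after=0.52, target=0.50,
                                     control_after=0.79, control_baseline=0.80)
print(moved, control_ok)
```

Here retention cleared the target and CSAT stayed within tolerance, so the experiment counts as a win on both axes.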

2. Identify and prioritize issues from your customer’s perspective

Once you start analyzing your product or service, you will often identify many different problems. Their number grows quickly with the added complexity of multiple business units, products, market segments and customers. It is therefore important to keep in mind that not every problem is equally important.

A problem should be phrased from the customer’s perspective. This not only forces the team to clarify exactly what the problem is, but also makes it easier to share with other departments.

Before a singular observation is deemed an actual systemic problem, it should first be supported by patterns in data, customer feedback or additional anecdotal evidence. When support is found, it can be identified as a (validated) problem.

So as not to waste scarce (development) time, efforts should be directed at problems with both high potential and high importance. High potential means there is a lot that can still be improved; high importance means the problem has a large business impact. Prioritising problems this way has a beneficial side effect: because you can show how important a certain problem is, it is much easier to obtain buy-in from stakeholders.
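One simple way to operationalize potential-times-importance prioritization, sketched here with invented problems and 1–10 scores on each axis:

```python
# Hypothetical problem backlog, each scored 1-10 on potential
# (how much can still be improved) and importance (business impact).
problems = [
    {"name": "checkout errors", "potential": 9, "importance": 8},
    {"name": "slow search",     "potential": 6, "importance": 7},
    {"name": "typo in footer",  "potential": 2, "importance": 1},
]
for p in problems:
    p["priority"] = p["potential"] * p["importance"]

# Highest priority first: direct scarce development time at the top items.
for p in sorted(problems, key=lambda p: -p["priority"]):
    print(f'{p["name"]}: {p["priority"]}')
```

The product of the two scores is an assumption on our part; any scheme that ranks high-potential, high-importance problems first serves the same purpose.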

3. Define possible solutions

Defining solutions should be done in cross-functional teams. All employees can provide valuable insights into why a certain problem exists and how it might be solved. The CEO has a strategic, high-level overview; the customer service representative understands the most important complaints; and a developer might know how to solve a problem from a technical point of view. Including more than one function in defining the solution limits the chance that a genius fix for your local problem has a negative impact on the business as a whole.

It is important to keep in mind that solutions can be of a more incremental or more radical nature. The low-risk incremental experiments often receive more support from the organization and its leadership. This is a common trap: by experimenting with incremental solutions, you can only climb towards a local optimum. To also allow identification of the radically more effective solutions, experiments for solutions should be a mix of iterative improvements and larger leaps. The iterative improvements help you climb to the top of the mountain that you are on, and the leaps ensure you find the highest mountains.

4. Decide on the test method that allows for maximum learning with a minimal amount of resources

A significant part of waste prevention lies in determining the minimal effort needed to validate a solution. For start-ups this is often relatively easy; they can move fast and break things.

In contrast, large enterprises need to be more careful as there is more at stake (e.g. their reputation). When building an MVP as a test method, keep in mind that the minimal version of a feature should actually solve the original problem.

In addition, the test is only useful if it enables you to act on the results. Minimum viable features that work for start-ups can also be useful in the enterprise; in some cases, more creativity and care is required.

5. Define success before actually running the test

There is a strong cognitive bias to look for positive results in a test. Define success criteria upfront to prevent yourself from being overly optimistic after the test. Try to phrase your success criterion as: “During the test, I expect a strong signal from at least X% of visitors/customers.”

6. Get out of the building

As indicated before, experimentation in start-ups and enterprises serves two different purposes. Start-ups are looking for a sustainable business model, and experiment to find product-market fit. Enterprises already have a customer base and execute a repeatable and scalable business model.

For an enterprise, the goal of getting out of the building shifts toward finding out what provides the most additional value to your existing customers.

7. Analyze the results and check if you moved the needle

This is a critical step in the process. After you have run the test, it is time to see whether you actually moved the needle. Based on the test results, you have three options:

  1. Pivot: try to solve a different problem
  2. Persevere: go to the next column and try again
  3. Declare success and implement the full solution

It would be great if every test were an amazing success, but in reality many tests will be invalidated. This is not a bad thing at all: at the very least, you’ve avoided wasting a lot of time fully implementing a solution without validating it first. If a solution is invalidated, we can choose to focus on a different problem (pivot/give up), or we can think of a different solution. Because we defined the problem as both important and high-potential, you probably want to do the latter.

If a test is successful, it is time to scale up and create a lasting solution, involving a larger team and tighter integration with the existing business.

Additional resources

The Bloom team has created a Slideshare presentation that annotates the steps described above. We’ve embedded that presentation below so you can see how they move through the Lean Experiment Canvas at each stage.

The enterprise experiment canvas was developed based on our experience applying lean in large enterprises. By publishing our tool we hope you can bring the approach to your organization as well, so feel free to download and use the template.

If you have any questions about the content, or if you want to share your experiences applying these tools in practice, use the comment form or reach out directly at any time.

Lean Analytics en français !

We’ve seen copies of the Polish and Korean versions of Lean Analytics in the wild, and spoken with a few of the other translators. We’re excited to see the book reach so many new readers. In the meantime, we’ve been doing a bit of translation of our own!

Photo of the WAQ stage from way up high, by Andréanne Beaulieu


Last week, I spoke at Web à Québec, a conference on web technologies held in Québec City. I speak French (though far from perfectly) and figured this was a good time to see if I could present some of the ideas behind the book, as well as some lessons about how larger organizations are putting it to work, in French. I translated the slides (and then got some tweaks from the organizers).

Anyway, here’s the deck, translated. So now you know your Empathie, Fidélité, Viralité, Revenu, and Échelle, whether you are a Marché biface or contenu généré par usagers.

Lean Analytics for Intrapreneurs

Later today, I’ll be speaking at the Lean Startup conference in San Francisco. It seems like only yesterday that Ben and I first taught a workshop on Lean Analytics, prior to the book’s launch. Since that time, we’ve visited a dozen countries, spoken with hundreds of founders, and found out that it’s being translated into eight languages. To say we were surprised by the progress is an understatement.

Rather than repeat last year’s content—which is widely available on Slideshare, on Udemy, and in classrooms by now—Ben and I have spent the past six months talking to Intrapreneurs. These are people within larger organizations, trying to create innovation. It’s a hard, often thankless job. After all, if a startup is an organization designed to search for a sustainable, repeatable business model, then a corporation is designed to perpetuate a business model. And in environments of rapid change and high uncertainty, perpetuating business models is deadly.

The most fundamental truth of intrapreneurship is that the difference between a special operative and a rogue agent is permission.

If you’re trying to change things but don’t have organizational backing, you can do it—but you have to be subversive and pick your battles. You’re tilting at windmills, battling on a pitched field against large, slow, old-fashioned incumbents. On the other hand, if your organization has a deliberate portfolio of innovation, then you’re going to need a program to find, test, incubate, and integrate new ideas.

Batch size changes everything

One of the biggest changes affecting companies of all shapes and sizes is the ability to do things in small batches. Once, scale mattered a lot, because the only way to get the incremental cost of a customer down was to amortize fixed costs across many sales. This worked well for a while, and gave us everything from mass production to broadcast media.

But that model is crumbling: tools like social media and on-demand printing and automation are reducing the economic order quantity. Part of this is because software is eating the world, optimizing the back-office and supplanting other channels as the dominant means of communication and delivery on the front end. Consider how quantity, cost, lead time, self-service, and customization vary across different production models.


How customer value changes as technology moves production to an economic order quantity of one.

Is it any wonder that we’re seeing most innovation in digital sectors?

Big companies are taking notice. We’ve spoken with around 30 large multinationals—GE, DHL, SCA, Motorola, Google, Time, VMWare, Metlife and so on—in the last six months. Each of them has a slightly different take on innovation:

  • Some prefer to incubate new products in-house; others favor acquisition.
  • Some source ideas through hackathons and crowdfunding; others work closely with early-adopter customers.
  • Some believe innovators need isolation, in a kind of Skunk Works model; others want innovation to live within the business units that will ultimately reap the benefits of invention.
  • All seem to differentiate between three kinds of innovation. Some use the Three Horizons model; others distinguish between core, adjacent, and transformational projects.

Despite these differences, they all agree on one thing: that innovation must happen, and that to survive, companies have to constantly reframe the business they’re in and disrupt themselves.

We’re myopic about how

For us, the real lesson of the last six months’ research is that companies spend too much time worrying about adjacent markets (who they sell to) or adjacent products (what they sell them) while ignoring how they sell. It seems that while everyone knows about product/market fit, there’s a myopia around method.

Igor Ansoff’s product/market matrix is common wisdom for business strategists. We’re taught it in business school, and it’s the basis for most strategic discussions of diversification. And yet it’s only two-dimensional, focusing on product and market. There’s no method; it assumes the how. This is the undoing of many incumbents. One of the things we’ll propose today is that intrapreneurship is about product/market/method iteration, and that innovation involves changing one (or more) of those three dimensions.


The three dimensions of innovation that Intrapreneurs can pursue.

The more dimensions you’re changing, the less your metrics resemble typical business cases and the more they focus on de-risking your assumptions through validated learning.

I should point out that a marketing purist would argue that “product” includes the pricing, distribution, and promotion as well as the product itself, and therefore encompasses the “how.” I should know; I’m a marketing purist. But it’s worth breaking out how as a separate thing, because it’s far too often overlooked. Call it go-to-market strategy, or unfair advantage, or method—it’s still the thing people forget, and yet it’s the thing that drives most of the successful innovation we’ve seen.

Amazon, at its core, sold books to readers. Neither the market nor the product was new; it was the method (e-commerce, with recommendations and a focus on logistics) that was new. Later, they were able to diversify the product (kitchenware) and the market (people with poor eyesight who could read Kindle books in large typefaces). Amazon gets this, which is why it experiments with cloud computing and drones.

Indeed, there may never have been a company as good at iterating on how as Amazon. This is the core reason why its stock price is high despite its score on traditional dimensions like profit and margin. Amazon is really good at cycle time, and while accountants don’t have a good way of measuring “how fast you learn and experiment”, capital markets do.

There’s no evidence about the future

When companies “assume the how,” they reinforce the processes, IP, and organizational structure that makes them really good at the way they work today. For decades, management theorists have urged us to grow and standardize, believing that tomorrow is the same as today, only more so. As a result, sustainable competitive advantage came from those who achieved scale and predictability.

But here’s the thing: there’s no evidence about the future.

Rita Gunther McGrath, author of The End of Competitive Advantage, says that sustainable competitive advantage allows for inertia and power to build up along the lines of an existing business model, which will soon die. Instead, she says, we should seek transient competitive advantage. And this is why Lean methods are so relevant to big, entrenched organizations.

Back to analytics

Our discussions with Intrapreneurs, CTOs, and managers of innovation labs have taken us far afield from the original scope of the book. We haven’t spent as much time talking about analytics, in part because what you measure depends on the change you’re trying to produce. In many markets—particularly those without direct customer instrumentation—intrapreneurs have to use proxy data to estimate things like virality, engagement, and conversion rate. This is hard, and full of errors.


Intrapreneurs sometimes need to settle for proxy metrics to measure their business.

There’s also a lack of comparative data across incubators and innovation programs, although this is gradually being addressed as companies formalize their programs and communicate among one another.

Today’s workshop will be our Minimum Viable Presentation. It’s the first time most of the 265 slides have seen the light of day, and I’m eager to see what works and what sucks so I can get to the next iteration.