Oct 31, 2020
Q&A on the Book Testing Business Ideas

The book Testing Business Ideas by David Bland and Alex Osterwalder provides experiments that can be used to find out whether your product ideas are desirable, viable, and feasible. Experimentation also helps to reduce risk and increase the likelihood of success for new ventures or business projects.

InfoQ readers can view an extract of the book Testing Business Ideas.

InfoQ interviewed David Bland and Alex Osterwalder about validating business ideas and dealing with risks, what an environment that enables your team to thrive looks like, business hypotheses and experiments, doing search trend analysis and using competitors’ products to gain insight, exploring what inhibits progress for your customers, experiment pitfalls, and setting up investment committees.

InfoQ: Why did you write this book?

David Bland: I found myself giving the same advice over and over to teams all around the world. They were having a tough time moving from a Business Model Canvas to running experiments. Often they’d ask me what to read and I didn’t have great recommendations. When Alex approached me about writing the book, I jumped at the chance. I felt we could address the market need and provide guidance on how to run experiments while de-risking your ideas over time. It was our opportunity to build upon and extend the tools from the previous books in a real, practical way.

Alex Osterwalder: Steve Blank launched an amazing movement with Customer Development, and then his student Eric Ries gave it even more global visibility with the Lean Startup.

However, David and I had a feeling that entrepreneurs and innovation practitioners were kind of stuck at a beginner’s level — doing interviews and not even necessarily doing those very well, and not getting to more sophisticated experiments. So we decided to build on Steve Blank’s and Eric Ries’ work and create a library of experiments to help people get more professional in the way they test their business ideas. That’s the heart of the book, Testing Business Ideas.

InfoQ: For whom is this book intended?

Bland: Corporate Innovators, Startup Entrepreneurs and Solopreneurs. We gave this a lot of thought and those three types of readers had similar jobs, pains and gains. We even built out Value Proposition Canvases for each to make sure we were adequately dialed into their world.

Osterwalder: We wanted to help people get to the next level, get a lot more professional in the way they test their business ideas, in particular in the types of experiments they do and how well they perform them. The book is for anybody who wants to test business ideas — in particular for the doers. That goes from the solopreneur who after work is thinking about a project they want to realize; to the startup entrepreneur, a funded organization with maybe a strong team of five, six, seven people; all the way to corporations that want to scale this and systematically bring it into their organizations.

InfoQ: Why should we validate business ideas, what benefits can it bring?

Osterwalder: We always say entrepreneurship and innovation is risky. It is. But you can manage that risk by testing your ideas. If you don’t, you’re going to incur the huge risk of wasting a lot of time, energy and money creating something that looks great on paper but nobody cares about, and potentially run out of money before you’ve brought anything to the market. When you test you reduce risk and the uncertainty of launching an organization that is going to fail.

Bland: Your ideas need to meet other people. If they stay inside your head, it’s a customer-free zone. Once you start to test your ideas against reality, they begin to take shape in a real way. Many successful companies initially started out as something entirely different. Your job is to turn that idea into an opportunity that is sustainable. In that regard, it makes sense to test it and bring it to life.

InfoQ: What are the main risks when designing a product?

Bland: We’ve found that risks fall into three or four themes, similar to design thinking. Desirability risk involves the customer and their jobs, pains, and gains. I’ve worked with companies who’ve already spent hundreds of thousands of dollars developing a solution but have yet to talk to a customer. They aren’t sure if their solution solves a customer need. While you’ll look like a genius if you launch it in the market and get it right, so many of us get it wrong. It’s a very slow and expensive way to fail.

Viability risk is all about cost, revenue and whether or not it can be sustainable over time. I’ve seen companies who solved for a meaningful customer problem, but never figured out the pricing model for it. They eventually failed because it wasn’t a sustainable business. Think back to how many products you loved that no longer exist. Many of them don’t exist because they weren’t viable.

Feasibility risk involves not just technology, but also infrastructure and any regulations you might encounter. There are many examples of technology companies going into hospitals with a new, innovative solution that nurses want. However, they fail because they cannot fund the FDA approval process or because they violate privacy regulations. In the book, we layer this thinking of desirable, viable, and feasible over the Business Model Canvas and Value Proposition Canvas. The tragedy is that most teams only focus on feasibility early on and then find out way too late that the idea wasn’t desirable or viable, when in reality they could have found out much earlier.

Osterwalder: What’s important to understand is that testing rarely means just building a smaller version of what you want to sell. It’s not about building, nor selling something. It’s about testing the most important assumptions to show that the idea could work. And that does not necessarily require building anything for a very long time. You first need to prove that there’s a market, that people have the jobs, pains, and gains, and that they’re willing to pay.

InfoQ: What does an environment that enables your team to thrive look like?

Osterwalder: There are three things you need to put in place.

No. 1 is autonomy. You need to allow your team to make decisions and go fast.

No. 2 is you need to give them a wild card so they can fail a lot in order to learn and adapt their idea. It’s also important to evaluate your team in the right way so they can actually fail, learn, and adapt. If you split your time between execution and innovation but are evaluated on how well you execute ideas, you’re dead, right? You need a different assessment than the one you use for people in execution. There, failure is not an option.

No. 3 is to give people access to everything they need to test their ideas. That means access to customers, to brand, to intellectual property, to the legal team — resources so they can do their job well.

Bland: The leader is responsible for keeping an eye on the environment, otherwise it may not be conducive to working in this manner. You can design a cross-functional, balanced team but that’s only a part of the equation. If the team is multi-tasking across 5 different projects and underfunded, they’ll not be successful over time. Leadership needs to take other responsibilities off their plate and allow them to focus, without micromanaging. Give them some boundaries and a goal, then check in on them at various points to see how they are progressing.

InfoQ: What’s your definition of a business hypothesis?

Osterwalder: It’s an assumption underlying your business idea made explicit.

Bland: A well-formed business hypothesis describes a testable, precise, and discrete thing you want to investigate. These characteristics are important; otherwise you write down a statement that you cannot test. The biggest difference between an assumption and a hypothesis is that the hypothesis is testable. If you are building a kids’ subscription science fair product, then you’ll have a lot of assumptions around the market and willingness to pay. Instead of writing “Parents prefer craft projects”, which is an assumption that’s difficult to test, try phrasing it as “We believe millennial parents prefer curated science projects that match their kids’ education level”. The latter statement is testable and will help you filter through the noise.

InfoQ: What does a good experiment look like?

Bland: A good experiment is precise enough that team members can replicate it and generate usable and comparable data. We often recommend running multiple experiments for a single hypothesis.

One team I worked with tried to test their value proposition with Facebook and Twitter ads pushed to a customer segment. It failed miserably. Almost no one clicked the ads, and those who did never signed up on the landing page. Before giving up, we decided to try another experiment. We served up the same ad to people on Google actively searching for a solution. The click-through rate on the search results page was much higher, and most of those visitors signed up on the landing page.

If we had not run that second experiment, we may have given up too early. Quite often you’ll need to run at least one experiment a week over 12 weeks in order to generate enough evidence to know if you are on the right track.
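The comparison Bland describes ultimately comes down to click-through rates from two channels. As a minimal sketch — the ad counts below are made up for illustration, not data from the interview — a two-proportion z-test using only Python’s standard library can suggest whether the difference between two ad experiments is likely real or just noise:

```python
from math import sqrt, erf

def two_proportion_z(clicks_a, views_a, clicks_b, views_b):
    """Two-sided z-test: is the click-through rate of B different from A?"""
    p_a = clicks_a / views_a
    p_b = clicks_b / views_b
    # Pooled rate under the null hypothesis that both channels perform the same.
    p_pool = (clicks_a + clicks_b) / (views_a + views_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / views_a + 1 / views_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal CDF (via the error function).
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return p_a, p_b, p_value

# Hypothetical numbers: social ads vs. search ads for the same value proposition.
ctr_social, ctr_search, p = two_proportion_z(12, 4000, 95, 3800)
print(f"social CTR={ctr_social:.2%}, search CTR={ctr_search:.2%}, p={p:.4f}")
```

A very small p-value here would say the search channel genuinely outperforms the social one, rather than the gap being sampling noise — the kind of evidence that justifies not giving up after the first failed experiment.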

Osterwalder: It starts with a very clear and precise hypothesis. You need to define what exactly you need to learn about to reduce uncertainty. People often say, “Oh I’m going to test this product,” and then they go build and pre-sell something. If that experiment fails you don’t know if it was because people don’t care, if it was because the solution is broken, or if they are just not willing to pay. There are too many variables. You need to clearly define what hypothesis you are testing.

Next, you need to pick and define a sound experiment, that is, the right type of experiment for a specific hypothesis at the right time. Early on you go with cheap and fast experiments, because risk and uncertainty are high. The more you know, the more sophisticated, expensive, and time-consuming your experiments can become, because you need stronger evidence.

After defining the experiment, you need to clearly define what exactly you will measure and how. You also need to be aware that the context and environment might influence your experiment.

Lastly, you need to define a clear threshold of what success looks like. You need to come up with a threshold that seems reasonable for your business idea to work, for example, how many people should actually convert to paying customers.

InfoQ: How can we do search trend analysis for market research?

Bland: Once we’ve performed a few interviews, it can be interesting but difficult to know the market size. You can be scrappy and start backing into that market size by doing a bit of online research. For example, you can go to Google Trends or Google Keyword Planner and look at search volume for jobs, pains, and gains. You can slice it by different dimensions and even look at specific geographic regions. If your business idea is local, it might be a good idea to see if anyone in your area is searching online for a solution to the problem you are solving. You can interview 15-20 local customers in person about the problem and value proposition while taking detailed notes. Then you can use what you’ve learned to go online and search for similar phrases at a regional level. Are people searching online for these same topics? How much volume is there? It’s a great way to estimate your market size.
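One way to act on Bland’s suggestion is to export regional interest data from a tool such as Google Trends and filter it for regions with meaningful relative search volume. The CSV below is a hypothetical example in that shape, not real Trends output:

```python
import csv
import io

# Hypothetical export shaped like a Google Trends "interest by region" CSV,
# where interest is a relative score from 0 to 100.
TRENDS_CSV = """region,interest
San Francisco-Oakland-San Jose CA,100
Austin TX,74
Columbus OH,31
Boise ID,12
"""

def regions_above(csv_text, minimum):
    """Return regions whose relative search interest meets a minimum score."""
    rows = csv.DictReader(io.StringIO(csv_text))
    return [r["region"] for r in rows if int(r["interest"]) >= minimum]

print(regions_above(TRENDS_CSV, minimum=50))
```

For a local business idea, a filter like this quickly shows whether your own region registers any search interest at all before you invest in more expensive experiments.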

Osterwalder: This is one type of experiment among many that you could do to test your business idea, to see if there is any interest in that specific topic. It’s a very good starting point and early experiment. It can create a foundation before you go and do interviews and investigate jobs, pains, and gains. Search trend analysis can show how strong interest is before you run other experiments.

InfoQ: How can we use a competitor’s product to gather insights on our value proposition?

Osterwalder: Use a competitor’s product or service to test the jobs, pains, and gains customers have. Start to understand the strengths that customers really appreciate or the shortcomings that bother them. What are they missing? What are they not missing?

Bland: You can learn a lot without building anything. I’ve worked with teams where we recruited customers and then pointed them at a competitor’s product with a series of tasks to complete. We measured how long it took them and what they thought of the experience. I’ve even had large corporations in San Francisco point customers at a startup they were worried about. They’d find out first hand what advantages, if any, the startup had over them. If you are worried about a startup taking you out, there’s a good chance you won’t find market research data about it until it’s too late. Therefore, why not generate the research yourselves?

InfoQ: The speed boat is a technique that teams use in agile retrospectives. It can also be used to explore what inhibits progress for your customers. How does that work?

Bland: It’s the same principle, except that you are finding out what’s preventing your customers from succeeding. Sometimes it can be hard to sort through it all with just words, so the visual process of speed boat not only helps you but it also helps customers articulate things in a way that makes sense.

Osterwalder: You’ll have a pretty good idea of different customer pains from doing free-flow interviews. However, what they don’t give you as easily are the customer’s priorities and their biggest pains. Instead of simply asking your customers to verbally express their biggest pains, use this spatial, visual representation so customers can see the different pains in relationship to each other — basically, the bigger the pain, the deeper the anchor. They are able to more clearly articulate them, and you will be able to more clearly understand the priorities.

You can also do the same thing for gains by asking them: What fuels your progress most? And then they put up sticky notes for that.

InfoQ: What are some of the experiment pitfalls and what can be done to prevent experiments from failing?

Bland: A few of the big ones are similar to what we see in agile. Teams take on too many experiments at once, and they all run for too long. You’ll notice that in the book I take inspiration from agile and lean to recommend limiting the number of experiments in progress. This helps your teams focus instead of getting overwhelmed and never completing anything. Like my co-author Alex says, business models expire like yogurt in the fridge. The same applies to the evidence generated in your experiments. If experiments take too long or you are not able to put their results into action, things change and expire. Simple tips like limiting WIP go a long way in experimentation.

Osterwalder: People build too quickly. They waste time and energy building stuff without having sufficient evidence that justifies building.

People will do some interviews and feel like they’ve validated their idea. A better approach is to start with weak evidence and then go for stronger and stronger evidence for the same hypothesis.

The really good teams test the same hypothesis with different experiments. At the beginning, you do very light experiments that produce relatively weak evidence, and you can go fast. In an interview you might ask about willingness to pay. You ask people if they have the budget for this, and they say yes. Great. That’s very quick, but then you can go deeper by running call-to-action experiments where customers have skin in the game, such as making a pre-payment. That might be the same hypothesis, but now we’re validating it with stronger evidence.

The other big pitfall we see is that people mistake opinion for fact. When you ask people, “What would you pay?” and they say $20, that’s an opinion, not a fact. If you ask people, “When’s the last time you bought this type of product and how much did you pay?”, that’s a fact.

InfoQ: What’s your advice for setting up an investment committee?

Bland: Start with one committee and a few teams, and build out from there. Also, don’t be afraid to invite an outsider to provide a neutral perspective. It’s not easy for a group of people to magically begin to think like venture capitalists. It can help to have someone who has done it before come in and model the behavior while asking probing questions. Besides, you can’t ask business-as-usual, return-on-investment-type questions about something that is entirely new. For example, you can start your committee with 3-5 members and create a working agreement amongst them. Talk about how you’ll make decisions and fund the teams based on evidence. Then, as teams meet with the committee, look for desirability and viability evidence early on. Ask questions about the customers’ jobs, pains, and gains along with their willingness to pay. If the team can provide ample evidence, then fund them to move the idea forward. It’ll take a few iterations of these meetings to get the hang of it, and that’s ok.

Osterwalder: Train them to judge evidence, not ideas that they like. That’s most important. People still judge ideas rather than judging the evidence that shows that this is a good idea. With Strategyzer we created an Innovation Project Scorecard that you can download for free. We use it to train senior leaders to better judge projects.
