AMA on Testing B2B Product Ideas

May 17, 2024

EVENT RECAP

Too many companies generate an idea and want to jump straight into building it and seeing if it can scale—without testing the risks inherent to it. David Bland is the founder of Precoil, an advisory firm helping companies find product-market fit using lean startup, design thinking, and business model innovation. He’s worked with companies like GE, Toyota, Adobe, HP, Behr, and more. David is also the coauthor of “Testing Business Ideas,” an internationally bestselling business book now available in over 20 languages. In this session, David walks through how to identify risks in product ideas, de-risk them through testing, and create a culture of experimentation throughout your company.

Ideal for: Product and Technology leaders and CEOs/Founders

Join to discuss:

  • The steps to formulating, testing, and implementing product ideas
  • The three types of risks you need to account for in your experiments (Desirability, Viability, and Feasibility)
  • Deploying metered funding to make efficient use of your capital in testing
  • How to overcome some of the hurdles of testing B2B products (smaller testing base, brand risk-aversion, etc.)
  • How to create a culture of experimentation and hand off ideas to your operating team

Video

Unnamed Speaker

All right, we are officially live with David Bland. I’m excited to talk about testing business ideas and particularly B2B business ideas, because it seems to me that there are much lower barriers to test things with consumers. There’s all kinds of stuff that you can just throw up and try. But with B2B concepts, there are often barriers, real or imagined, to why you can’t or shouldn’t test things with your actual customers or your actual prospects.

Unnamed Speaker

And a couple of years ago, David literally wrote a book with 40 different ways to test business ideas, including a couple that really do work for B2B applications. And so I’m excited to dive into some of those today. David, do you mind giving the more in-depth introduction?

Unnamed Speaker

Yeah, my name is David J. Bland. I’m out here in Northern California. I worked in Silicon Valley for about 10 years, and I really spend all my time on early-stage product, service, and business ideas. So anything with a bunch of uncertainty. I don’t really focus on a specific sector, but I’m helping people with a process to go see, hey, is this something we should invest in or not? And so I have a process that kind of blends lean startup, design thinking, and business model innovation together and helps people make informed investment decisions.

Unnamed Speaker

So I’m really usually focused on new stuff where there’s a lot of uncertainty.

Unnamed Speaker

Awesome. Let’s dive into some of the starter questions, and then feel free to add more questions into the chat as part of this AMA. David, do you mind presenting the slides that we have?

Unnamed Speaker

Yeah,

Unnamed Speaker

I can do that. All right, so first question, how do you get from a good idea to a validated business?

Unnamed Speaker

Yeah, I might have to draw through this one because it’s not a single line usually. So when you start with an idea, it’s more that, usually, my companies are trying to reimagine what they do in a different way to create new revenue. So I work with a lot of B2B companies, and they’re trying to find, hey, we have this domain expertise. How do we take that and apply it to something where we can generate new revenue? It might be in the same market in a different way. It might be in a tangential market, something that’s adjacent.

Unnamed Speaker

But it’s really messy to get to a point where we know that we have something that we can scale and something that is repeatable. And so what you normally do is go through this search and testing approach, where I try to frame it as discovery and validation. A lot of the discovery work is you’re going through, hey, do these jobs, pains and gains exist for a specific customer segment?

Unnamed Speaker

And then the validation point is more around, okay, we’ve got evidence around the jobs, pains and gains and we have what we think is a possible solution for that. And we prioritize everything. How do we test this in a very small way to get evidence that this is something we should invest in? So I try to break down the search and testing into kind of discovery activities and validation and then give yourself permission to invest more as you get more evidence that this is something that’s gonna be repeatable and something that you wanna invest in.

Unnamed Speaker

Cool. And we can keep going through these questions, and as there are relevant ones, I’ll flash them on the screen. But let’s go into that next question around risks and which you should test for first.

Unnamed Speaker

Yeah, so this comes up a lot with the companies I advise. I advise software companies, I also advise hardware companies, and there’s this idea of what you should test for. And so this idea of themes of risk. When we think about desirability, desirability is much of the risk around your value proposition, your customer’s jobs, pains and gains. Do they understand what you’re offering? Is it urgent? Is it something that they desire? So there’s a lot of desirability risk. I try to frame this as kind of like, do they want this?

Unnamed Speaker

Most of the time, this is where I’m starting when we’re doing our discovery work around a new B2B product. So we wanna make sure that it’s grounded in a need and a desire from the customer. And from there, we go on to usually viability, which is much more around the kind of, should we, is this something that’s gonna be profitable enough, even if you’re not charging directly for the offering, it should move the needle some way inside your company. A lot of B2B companies might offer like a free service or a freemium thing.

Unnamed Speaker

And that should kind of move the needle on some of the more revenue generating things inside the company. And then usually, it might sound weird, but usually last, we’re focused on feasibility because most of the companies I coach, they can build anything.

Unnamed Speaker

And so the can we do this question is much more around, okay, so there’s a desire for this and it looks viable from a financial point of view, from cost and revenue. Can we do this at scale? Is this something we can execute really well? And so it’s almost the inverse of what we’ve learned sometimes, the build-it-and-they-will-come method, where you start with all the feasibility risk and then just try to sell it. Usually I’m brought in where, yeah, we can build anything, but we’re worried about wasting time building things that nobody wants.

Unnamed Speaker

So let’s like really dig into desirability and viability. Will they, like, is there a need there and will they pay enough for it before we jump into like the details of building everything?

Unnamed Speaker

So a couple of pre- submitted questions on this one. First one being, how do you quantify the desirability metric?

Unnamed Speaker

Yeah, I would say, you know, in my world, I’m a big fan of what I call pirate metrics. So there is AARRR, like a pirate says, arrr. That’s the only way you’re gonna remember them: pirates. So this idea of acquisition, activation, retention, referral, revenue. Sometimes people put revenue before referral, but that’s more a stylistic choice. But this idea of, are they activating? Are they becoming active in the product? Is there some desire for it? Are they interacting with your value proposition?

Unnamed Speaker

So you can do sort of the quantitative and the qualitative. So the quantitative is gonna tell you the what and the qualitative is gonna tell you the why. And so if you think about sort of like how people become aware of your product, how they become hopeful, how they become satisfied with it, how they become passionate, it’s usually like a progression they go through.

Unnamed Speaker

So the good news is on the qualitative side, you can have touch points with your customers along the way to say, hey, are they becoming aware that this is something that can solve their problem? Are they becoming hopeful that it actually solves their problem? Are they becoming satisfied? Which is more around, okay, they’re interacting with it. Are they actually satisfied with it? Are they not gonna churn right away?

Unnamed Speaker

And then can you actually curate passionate customers where they go off and like evangelize you to other customers and like be a source of acquisition for you? On the quantitative side, it becomes a conversation about your business about, okay, how do we measure acquisition? What’s our cost per acquisition? How do we measure when it becomes an active customer in our system? How do we measure, you know, when they’re retaining, like what is a good retention rate supposed to be for what we’re doing? And then are they referring other people?

Unnamed Speaker

So you can kind of balance the quant and the qual there and go through even more than just desirability, but it is important to define what those things mean for your product and your org because otherwise it’s really tough to manage and have a baseline.
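The quantitative side of pirate metrics reduces to simple funnel arithmetic. Here is a minimal sketch; all stage names follow the AARRR framing above, but every count is invented for illustration and does not come from the session:

```python
# Hypothetical AARRR (pirate metrics) funnel: a count of customers at
# each stage, and the conversion rate between adjacent stages.
# All numbers are invented for illustration.
funnel = [
    ("acquisition", 1000),  # became a lead / signed up
    ("activation", 400),    # became active in the product
    ("retention", 240),     # still active after some period
    ("referral", 60),       # referred at least one other customer
    ("revenue", 48),        # converted to paying
]

def conversion_rates(stages):
    """Conversion rate from each stage to the next one down the funnel."""
    return [
        (name_a, name_b, count_b / count_a)
        for (name_a, count_a), (name_b, count_b) in zip(stages, stages[1:])
    ]

for name_a, name_b, rate in conversion_rates(funnel):
    print(f"{name_a} -> {name_b}: {rate:.0%}")
```

The arithmetic is the easy part; as David notes, the org-specific work is defining what each stage means for your product (what counts as "active," what a good retention window is) so you have a baseline to manage against.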

Unnamed Speaker

So another, this is kind of a flavor of a viability question. What if the product idea is backed by a strong reseller and they believe they can sell it?

Unnamed Speaker

Yeah, it becomes a conversation about do you trust? Like what’s the level of trust there? And also, is there any evidence you can generate as a proxy or even go to the end customer sometimes and try to figure out is the demand there and use that? It doesn’t mean you’re trying to burn the relationship, right? But this idea of can you find at least anecdotal evidence that the end demand is there and then use that as leverage in the conversation with, you know, a reseller.

Unnamed Speaker

So I think people always look on the optimistic side of things, and they kind of overpromise at times. And so I think it’s more about, well, let’s just go check. Like, let’s go see if this is something that is gonna hold up. And usually with my B2B customers that have a very complex value chain, where it’s like B2B to B2C or something like that, we go around to the C portion and we do some discovery there to see, you know, hey, are we losing kind of the narrative through the supply chain and not understanding what the real needs are?

Unnamed Speaker

So usually even with my very complex B2B clients, we still do kind of end user research. Not that we’re gonna scale that, but just enough to check. Because when you take the opinion or the point of view of somebody in the supply chain and then it turns out to be wrong, it can feel almost as if you were blindsided by that, versus, you know, trying to de-risk that a bit earlier.

Unnamed Speaker

Cool. So let’s get into my favorite slide, which is the next one with some of the specific experiments. So what, can you walk through this experiment sequence and feel free to take some time and explain what each of these are? Because I think these are really interesting.

Unnamed Speaker

Yeah, so when it comes to sequences, think about how you’re gonna need to run more than one experiment. Very rarely have I ever worked with a company where they just ran one experiment, it was wildly successful, and they just made millions of dollars. Usually it’s kind of a winding path. But what you’re trying to look for is almost like your next best test, or a test that’ll give you a little more evidence to kind of close what I would call a say-do gap.

Unnamed Speaker

So this idea of, well, customers say something, but then when it comes to them behaving, they’re behaving a very different way than what they said.

Unnamed Speaker

And so what we try to do is we build custom, like I build custom sequences for all my clients. So all my companies I work with, I try to say, okay, what are you trying to achieve? And then let me learn a little bit more about your industry. And then what’s that process of experimentation that we can use? So for example, a good place to start with anything usually new is talking to customers.

Unnamed Speaker

Now, you’re not just hanging out with them, which I find some of my B2B clients, they have repeated touch points with clients and with their customers, but it’s more just like they’re just chatting about what’s going on. Like there’s not really a structured approach to say, oh, we have some things we need to learn in this conversation that are tied back to risky assumptions that we have in this idea.

Unnamed Speaker

So what I do is I help them sort of write a script and figure out, okay, how would we talk to customers in a way where we’re learning what we need to learn? And then from there, doing something like discussion forums. You could go to your competitors’ forums. Sometimes you can go to public forums like Reddit and places like Quora and LinkedIn, and you’re really trying to find, is there any observable evidence for what we’re hearing anecdotally from customers? Are there more customers talking about this? Or is this an industry trend?

Unnamed Speaker

Or is there something where we can find jobs, pains and gains in a more quantitative way, versus just the qualitative, really small number of interviews we’re doing? From there, some of my companies do things like boomerang testing. So they do competitive user testing. So if there’s this really cool, hip startup that came on the scene and they’re eating away at your customers, understanding what the value prop of that startup is, what are some of the things they do well and don’t do well.

Unnamed Speaker

So we do competitive user testing. I call it boomerang testing, which is sort of like you’re going in and you’re basically trying to figure out, hey, how do we position ourselves against our competitors? And then from there, taking what you’ve learned and building like the smallest thing that’s interactive. And usually in software, that’s some kind of clickable thing, either in Figma or you can even do this in Keynote and PowerPoint to an extent where you just have like hot zones and you’re clicking on things.

Unnamed Speaker

There isn’t a lot of backend data; we will dummy up the data and things like that. But then we go back to people. For example, if we went back to the people we interviewed and we say, hey, we have something early stage that you can co-create with us, can you give us some feedback on this? And we have them interact with it.

Unnamed Speaker

The type of feedback you get when they’re interacting with something, especially early stage, is very different than just talking to them with a script, just words, no visual artifacts to represent other aspects of that conversation. So something clickable is usually really great there. And then if you’re getting positive input on that, you could also think of it almost like a funnel. So you’re getting people to a point where, hey, you said you thought this was really amazing and it can help a problem that you have.

Unnamed Speaker

Do you want to join us on our presale where we’re just rolling out the first version of this? You can get an exclusive discount, or whatever that might be for your org. But this idea of, are they going to pay for this? Or is it something that’s desirable, but not viable enough to move forward? And then working your way to some sort of minimum viable product. I know there are vastly different definitions for that, and it’s very much been popularized by The Lean Startup from Eric Ries.

Unnamed Speaker

But this idea of what’s the smallest thing you can build to pay down your risk, that will be a value exchange. So with the MVP, there should be something of service you’re providing to them and they’re paying for it. And so you can test all three themes at that point, desirability, viability, and feasibility, because they’re paying for something that’s maybe not scalable, but it’s a test of the value exchange. So I always think of these as sequences.

Unnamed Speaker

It doesn’t mean you can’t go back if something, if you learn something new, you can always take a step back, of course. But this idea of how do we build our next best test and get more evidence, stronger evidence, that we’re on the right track and we should invest more in the solution.

Unnamed Speaker

Awesome. So a relevant question on this one. And the funny thing is you and I were just talking about this before the call started. How do you suggest balancing the need for rapid product enhancements when there are sales opportunities and taking the time necessary to scope and analyze more thorough and sound design approaches? So I guess what happens when your customer or prospect is drawing you almost into this pre-sale test? How do you handle that situation?

Unnamed Speaker

Yeah, I see this time and time again, where the companies I coach almost want to skip the desirability part because somebody asked us to build something. I have this little sketch that I call the product death cycle that has been written about in Inc. and Forbes and kind of went viral. But it was this idea of, well, people tell us what to build, we go build the thing, and then they still don’t use it, and we’re sort of wondering why.

Unnamed Speaker

So even when they are coming to you and asking for things, what I always try to recommend is get to the job.

Unnamed Speaker

Because there might be a more elegant way to solve it that they’re not even aware of, or there might be a more standard way you could solve it, because it’s really hard to scale a company if you’re building custom one-off solutions. I’ve learned that from experience in some of the B2B startups I’ve worked at in the past. So this idea of, let’s get to the need or the job to be done behind the request, and then let’s keep this kind of co-creation environment going where they can see things before they’re fully baked and fully polished.

Unnamed Speaker

And I think that trend is starting to change in B2B. I know the old school was, hey, we can’t bother our customers, we can’t talk to them unless we have a fully polished solution to put in front of them and sell them. And what I’m seeing is a shift in that culture over the last, I’d say, five to 10 years, which is more of a, no, it’s okay, you can talk to me even if you don’t have something to pitch me. I can talk about my problems all day. And especially if it’s an existing client, you’re gonna have some frame of reference for that.

Unnamed Speaker

So I would just always try to get to, you know, what’s the job or the need behind, and don’t necessarily shy away from inviting people in to co- create and help you sort of mold and shape what your solution is going to be.

Unnamed Speaker

Another one that’s kind of in this vein, and question asker, if you’re on, feel free to clarify this. How do you do testing in B2B for a sales org that aligns with Challenger and says you should only be delivering solutions? So I think this is about not asking your clients questions or experimenting with them.

Unnamed Speaker

Yeah, it’s tough. I do a lot of sales training, actually. Not by design, I just get pulled into sales orgs at some of the biggest companies in the world. And usually when I’m doing that, I’m training them on value propositions and the jobs, pains and gains of customers. Because if you’re pitching a solution that isn’t necessarily dialed into something they really care about, you’re always going to have high churn and low adoption rates.

Unnamed Speaker

So what I’ve been teaching sales orgs and other companies around the world has been, okay, early on sales conversations, can you start mapping this stuff back to, what are the functional jobs they’re trying to do? What are the pains that they are experiencing trying to do those jobs? What are some of the gains they’re looking for? And some of the gains might be, they’re looking for a promotion, or they’re looking for peer recognition. It doesn’t have to be, oh, my gain is just, I was able to do that job or task.

Unnamed Speaker

There’s all kinds of stuff you could learn. And so what I’ve been working on with sales departments inside big companies has been, let’s have that conversation and work this into those sales conversations. And then when you’re trying to close, you’re really dialed into what we call high-value jobs.

Unnamed Speaker

So it’s something that’s really important to the customer, but it’s also something important to the business because you don’t just blindly follow what the customers are asking for because you might end up building solutions that aren’t viable. So there’s a kind of balancing act between what they really need and then what’s viable to the business.

Unnamed Speaker

Awesome, okay, one more on this question and then we can keep going. What’s your advice for testing pricing and packaging?

Unnamed Speaker

Yeah, I think there’s a lot of anxiety around price testing. And granted, there are some regulations around certain things you can do. Like you can’t have a cookie that shows one price to one user and a different price to another user, and all this stuff. But usually what I’m doing is I’m trying to find out, especially in a B2B context, what are they paying, or, since maybe they don’t even know, how much money they’re spending trying to solve for a thing.

Unnamed Speaker

So for example, with the really complex B2B companies I advise, we have to do a bit of research with the customer to see, hey, how many people have you put on this trying to solve it? How much have you spent trying to stitch together different packages, different software applications, and how’s that worked out for you? And sometimes they don’t even know; they don’t keep a running tally in their head of how much they’ve spent.

Unnamed Speaker

And so if you do that kind of research with your customers and you start to find out what they’ve actually spent trying to solve for something, then you can kind of anchor your pricing against that. And so when they say, oh, well, I don’t know how to respond to that price, you can say, but look, if you think about all the people you’ve put on this and the time and effort you’ve spent, then you have some way to justify it. And so the way I try to frame that is, do they have the problem? Are they aware of the problem they have?

Unnamed Speaker

And are they actively seeking a solution to that problem and have budget to kind of solve it? But this pricing kind of strategy, I tend to aim at the top of that pyramid. So they know they have a problem, they’ve been trying to solve it and they haven’t solved it really well. And so when I think about early adopters, I try to test pricing against what have they done and why hasn’t it worked as well as it should have.

Unnamed Speaker

And so that gives me something they can respond to, versus if you just put a number out. Or my favorite, what would you pay for this, which is like the worst question you could possibly ask, because they don’t have a frame of reference for what they would pay. Even if you did ask that question, I would say, ask what’s the most you would pay, because usually people have a ceiling on the most they would pay. But even that question can be fraught with peril sometimes.

Unnamed Speaker

So I really try to go through, hey, what are they trying to, what ways are they trying to solve it now? And then can we anchor against that pricing and start testing it that way?

Unnamed Speaker

Cool. All right, we can keep cruising into the next question.

Unnamed Speaker

So how do you think about how much of sort of share of resources you should dedicate to testing versus iterating on your core?

Unnamed Speaker

Yeah, this comes up time and time again with a lot of the big companies I coach. And what I generally recommend, and I say this as like a bare minimum, right? This idea of, okay, we’re spending quite a bit on our core because we have to keep the lights on. This is what we’ve kind of stumbled into over the years that became a repeatable business. We play in this space really well. The challenge is that in some industries, the core is declining. So they’re seeing things get more expensive in the supply chain.

Unnamed Speaker

They’re seeing, basically, their market getting, you know, I don’t want to say it’s like a race to the bottom, but they’re seeing their market slowly decline year over year. And so you don’t really have the luxury to just focus only on the core. So this might even be a bit high, but this idea of, like, you do need to focus on what you do best and keep things going.

Unnamed Speaker

Now when you think about ideas, you should look at like adolescent ideas, things that are, they might be a business, it might not be something that’s ever going to be meaningful revenue, but they need some nurture and love to find out. And so this idea of budgeting for, okay, let’s spend about 20% on this adolescent stuff that might end up being part of our core. It might replace things that we have in our core over time.

Unnamed Speaker

And then the World Economic Forum just did a research paper on this, and some videos, about how much you should spend on experimentation. I view it as at least 10%, because only one or two out of 10 ideas are even going to make it to something that’s adolescent out of that approach. I was talking to a CEO in Silicon Valley, and he said it’s like deep sea diving. You go down like 10 times, you might come up with one thing. And so you really need to work on that now and not be really conservative.

Unnamed Speaker

And then over time, when your market’s really declining on the core, then you say, okay, everyone, now we have all this anxiety about a declining market. Let’s now focus on experimentation. This is like, there’s a lot of, oh, we can test, but it can’t fail kind of pressure, which isn’t really a test.

Unnamed Speaker

So I really think all companies should be doing this no matter what your stage. You should always be trying to explore what potentially could be new, and just basically strive to find a couple of things you can bring over and then nurture into adolescent businesses. So I would say this is the bare minimum. I would love to see experimentation be even more, but 10% is normally what I’m seeing inside the B2B companies I coach.
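The split described here (most of the budget on the core, about 20% on adolescent ideas, at least 10% on experimentation) and the one-or-two-in-ten survival rate reduce to back-of-the-envelope arithmetic. A sketch, where the total budget figure is invented purely for illustration:

```python
# Back-of-the-envelope metered funding. The budget figure is invented;
# the percentages follow the rough split discussed above.
total_budget = 1_000_000  # hypothetical annual product budget, in dollars

split_pct = {"core": 70, "adolescent": 20, "experimentation": 10}
allocation = {bucket: total_budget * pct // 100 for bucket, pct in split_pct.items()}

# If only one or two out of ten experiments graduate into an adolescent
# idea, a portfolio of ten small bets yields roughly:
experiments = 10
graduation_rate = 0.15  # midpoint of "one or two out of ten"
expected_graduates = experiments * graduation_rate

print(allocation)
print(f"expected graduates from {experiments} experiments: {expected_graduates:.1f}")
```

The point of the arithmetic is the portfolio logic: if you fund only one or two experiments, the expected number of graduates rounds to zero, which is why metering small amounts across many bets beats going big on one.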

Unnamed Speaker

Here’s an interesting kind of case study example. As a leading hardware manufacturer with a great reseller network, how risky is it to transition into providing other products like software and services? So how would you think about the core of hardware versus other product lines in this framework?

Unnamed Speaker

Yeah, I have some clients that do this, and we are very thoughtful about how we approach it, because you don’t want to necessarily burn bridges with your partners, especially if you have channel partners, right, which is often a reseller. So you have to very thoughtfully kind of go through that experimentation and not necessarily create the perception that you’re going all in on this one approach.

Unnamed Speaker

So usually what we’re trying to do is we’re just trying to put kind of like feelers out and see, hey, is there something here that we can repackage in a different way? So with hardware, you can’t iterate as quickly as you can with software. But I just had HP on the podcast this week, talking about how they go about experimentation, because I’ve worked with them in the past. And so it really comes down to, can you thoughtfully test in a small way and then start to see, is there something there?

Unnamed Speaker

And maybe you can even bring your reseller network into this somehow or partner with them in a way that doesn’t feel threatening. But I think sometimes the approach is, well, we can’t do this because everything was working as it is and we can’t rock the boat, you know. But it’s not always going to be working as it is forever, right? Things happen, recessions happen, companies go bankrupt, right? So I think just being very thoughtful on it and not trying to go big right away, it helps with the, you know, testing mentality there.

Unnamed Speaker

Another question from Earl, how can you filter out which clients are excited to co-create or pilot versus wasting time on companies that don’t move or move very slowly with emerging solutions?

Unnamed Speaker

Yeah, this happens quite a bit. I think of that framework I mentioned earlier: do they have the problem, are they aware of it, and are they actively seeking a solution? It’s kind of like a little funnel, right? So there are many more that have the problem and aren’t aware. And then the people that are aware may not be seeking, and the people that are seeking are probably a small subset.

Unnamed Speaker

And so I try to find, I don’t wanna say risk takers, but who are the ones that see this coming and are trying to get out ahead of it? The ones that have always been willing to work with stuff that isn’t fully polished and fully complete. And usually, no matter what industry I’m working in, when I ask executives, who are some of your clients that would fit that bill? They can usually come up with a few. And so I usually start there.

Unnamed Speaker

And then, we were talking about this even before we jumped on this session, this idea of a customer advisory council, or this kind of trusted network of peers who are willing to test with you. I think you look at it through that frame: they have to have the problem, be aware of it, and be seeking a solution to it. And they see this coming. Don’t try to convince the ones that have the problem and aren’t aware, or the ones that are aware and aren’t seeking. Try to start with the ones that are already kind of primed for this.

Unnamed Speaker

And usually if you think through your list, there’s probably a few that stand out. Awesome.

Unnamed Speaker

Okay, we can move into our last pre- planned question, which is around the implications, the ethical implications of running tests on people. So do you want to opine a little bit on how to ethically run tests with your customers and prospects?

Unnamed Speaker

Yeah, I would say it’s less of an on and more of a with, and I’ll kind of unpack that. So I’ve been trying to frame this even since I wrote the book. I kind of pre-screened some experiments that I’d picked up in meetups in San Francisco and Silicon Valley, mostly things B2B startups were doing. And I thought, hmm, that’s more like testing on people and not necessarily with them. And so I just didn’t include them in the book.

Unnamed Speaker

And that’s why some experiments are not present in the book, even though I have 200 pages of experiments in there. So I’ve been really trying to push this idea of with: if we’re about to do this, just ask the question, am I testing with people or am I testing on them? And there are some very subtle ways that you can include them, right? So letting them know it’s a test, or if you’re price testing and you end up going with a lower price early on, discounting them the difference, right?

Unnamed Speaker

And showing them that what you’re doing is in good faith. So I just find this, I don’t know, when I started in startups, there was this big thing of vaporware, and it was always stuff that didn’t exist and people would just sell to really pump up their numbers. And then a lot of that stuff fell flat because there’s a lot of startups imploded around us in the 90s and early 2000s doing that. And so it really left kind of a bad taste in my mouth. And so that’s not really what we’re proposing with this method of working.

Unnamed Speaker

It’s more about, look, we just don’t wanna waste time building stuff nobody cares about. So how do we balance that risk versus selling them something that doesn’t exist yet? And so what I’m trying to do is a lot of, okay, do we have evidence of jobs, pains, and gains? Can we talk with our customers and start to really deeply understand that? And then, oh, can we sort them in a way where the customers are ranking with us to say, these are the things that are the most important to us?

Unnamed Speaker

And then we could test our value prop with them and saying, okay, well, here’s what we could do to solve those. And then here’s the first iteration of us trying to do that. And what do you think? And so I just want it to be a more of a co- creation, more of a with sort of attitude. And I think, and I’ve been interviewed about ethics and experimentation recently. And my response has always been, there’s a lot of stuff out there already with social sciences that we can pull from. We don’t have to reinvent everything for business.

Unnamed Speaker

We can pull from a lot of the standards that are already existing in the guidelines that are out there and just make sure we’re thinking about them in our culture. And so I do think there’s a little backlash on the whole Lean Startup movement to an extent where it felt like people were peddling things that didn’t exist. And I don’t think that was the intent of the movement. That certainly wasn’t my intent. But I do think some of the behavior that has occurred over the years has kind of burned people.

Unnamed Speaker

So I’m more of just pushing with, can we ask these questions? And usually it just comes down to that one question, is this testing with or on? If it’s on, you have to really start to take a step back and say, well, is there a better way to test this that would feel less about like testing on people? And so I do think pulling from social sciences definitely helps.

Unnamed Speaker

Awesome, cool. So I think we can move into kind of open questions now. And so I’ll start with this one. Maybe kind of an overall, maybe we’re coming back to feasibility, viability, desirability. What’s the checklist that you should encourage your team to consider when examining new product ideas?

Unnamed Speaker

Yeah. So there are different on-ramps into that conversation. When you think about desirable, viable, feasible, at the core it means the questions: do they want it, should we do it, and can we do it? So think about that at the core. But then that ends up being different things for different companies, so there are different ways I do that. One is just literally a list of questions, where I ask questions about the customer, about what their needs are, about willingness to pay, about ability to deliver, and all that.

Unnamed Speaker

You can also use canvases for that. So a lot of people use a Business Model Canvas or Value Proposition Canvas. The book I wrote is part of the Strategyzer stack, so I’m pretty familiar with all those. And so you could say, oh, we have this overall business model, what are our risks around desirability, viability, feasibility with that business model? Or we have a roadmap, or we have a backlog, and we can start looking at our risk through that lens.

Unnamed Speaker

So with a lot of my companies that use agile or some kind of iterative software development practice, we look at the big epics coming in, or the stories, and we say, well, who is this user, and what’s our risk around that? And yeah, this feature, but what do we know about this feature? And what is the benefit, and can we check to see if that’s really the benefit they need? And so there are ways you can play with those themes and make it fit with your processes.

Unnamed Speaker

And I have some things like assumptions mapping and such that I do with companies, where I just have a structured conversation around the inputs, and from there we can say, well, what kind of tests make sense for our situation? Because some of you might be a little further along in the journey. It may not have to start with a customer interview, right, it may be further down the path. But it’s this idea of the on-ramps into that.
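
As a rough sketch of the assumptions-mapping exercise David describes, sorting desirability, viability, and feasibility assumptions by importance and existing evidence to surface the riskiest ones, here is a few lines of code. The class name, 1-to-5 scales, and sample assumptions are illustrative stand-ins, not Precoil’s actual tooling:

```python
from dataclasses import dataclass

@dataclass
class Assumption:
    text: str
    theme: str        # "desirability", "viability", or "feasibility"
    importance: int   # 1 (low) to 5 (critical to the idea's success)
    evidence: int     # 1 (no evidence yet) to 5 (strong evidence)

def riskiest_first(assumptions):
    """High importance + little evidence = the ones to test first."""
    return sorted(assumptions, key=lambda a: (-a.importance, a.evidence))

# Hypothetical example inputs for one product idea
backlog = [
    Assumption("Customers will pay a per-seat price", "viability", 5, 1),
    Assumption("Buyers hit this problem weekly", "desirability", 5, 2),
    Assumption("We can deliver onboarding in a week", "feasibility", 3, 4),
]

for a in riskiest_first(backlog):
    print(f"[{a.theme}] {a.text} (importance={a.importance}, evidence={a.evidence})")
```

The point of the sort is the same as the workshop exercise: an important assumption with no evidence behind it rises to the top, and that is where the first experiments go.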

Unnamed Speaker

Just making sure you’re having people in the room that can answer those kinds of questions, or at least call out the risks you have, I think helps tremendously.

Unnamed Speaker

Here’s another one. We’re into assorted questions now. What if you’re leading in your space or industry and provide more exposure than the competition does, helping to make that product idea’s market larger? So how do you, as you experiment and expand the pie, capture more of it, versus experimenting, expanding the pie, and having your competitors capture it?

Unnamed Speaker

Hmm, that’s a good one. So I mean, there’s always this challenge of: well, if you move first, people are gonna copy it and then potentially do it better. I love the Bezos quote there, which is: if we focus on our customers and our competitors focus on us, we’re gonna win. And it’s because they’re getting that information secondhand; they can copy what you’re doing, but they don’t know the why. And this has happened.

Unnamed Speaker

I advise a lot of car companies, right, automobile companies, and if you think back through the decades, you’ve had a lot of companies say: oh well, this is how Toyota works, and we’re just gonna copy how Toyota works, and therefore we’ll be like Toyota. And they didn’t understand the why, and therefore it didn’t work out really well.

Unnamed Speaker

And so a lot of it is about having a deep customer connection and being able to grow the market in ways where other people can only see the outputs of what you’re doing, but they don’t understand the why, and they don’t understand your reasoning and where you’re getting that information. So I think in that way they’re always going to be lagging, you know. So I hope that answers your question. But I just think, if you’re leading in a space, don’t become complacent, don’t take that for granted.

Unnamed Speaker

Always keep this deep customer focus, because there’s always going to be changes you want to be ahead of, versus trying to look at your competition and see what they’re doing. I think it’s always: focus on the customer. That’d be my advice.

Unnamed Speaker

Coming back to price testing, how do you balance the overhead of deploying different prices, managing SKUs, and servicing different cohorts when testing pricing variants? Is the juice worth the squeeze, or is it better to just stick with market research?

Unnamed Speaker

Well, I think you have to be careful of the number of tests in general you’re running at one time. The more tests you run, the more complicated it gets, and if you don’t have a framework to manage that, you can end up in a very frustrating situation where a metric in the product moved, but you don’t know why. And so my favorite response is: oh, it’s just seasonal. Sometimes that happens. But I was working with a company in San Francisco where we had a supply team.

Unnamed Speaker

We had a consumer team, and we had people changing stuff on the website, but we also had people changing stuff on the back end with inventory. And so when the numbers went up, it was always people thinking: the numbers went up because of what I did. I changed something in the product, and that’s why the numbers went up. But then the supply team was like: well, we had better supply that day, and that’s why the numbers went up. And without being able to trace that thread, you don’t know.

Unnamed Speaker

And that’s the best-case scenario. When numbers go down, it’s: well, that team screwed it up, it’s not my fault that the numbers went down. So you have to really be careful about that dynamic and that culture inside your company, especially as you scale it.

Unnamed Speaker

So I think, just be mindful of how you’re tracking your experiments and how many you have in flight at any given time, because you don’t want a bunch of long-running experiments where you’re always queuing up the next one without finding out what happened in the one you’re doing. And with pricing, it’s the same kind of principle.

Unnamed Speaker

So be mindful of what you can actually handle as a company and don’t necessarily take on too many because then you’re not necessarily completing that learning loop and then using that to shape your strategy.

Unnamed Speaker

Awesome. Here’s a fun one. The highest-paid person’s opinion, the HiPPO problem. If there’s somebody important who has an idea, like a CEO or a board member, and maybe you’re unsure of the idea, or maybe you just want to have a culture of always testing, how do you insist on testing despite the fact that that’s a powerful person?

Unnamed Speaker

Yeah, that’s tough. You know, a lot of the way I facilitate is in a group, so I try to mitigate that through my facilitation. So if I’m doing assumptions mapping, right, there’s a lot of writing without talking, and then we’re talking when we’re prioritizing. What I notice is people at the top might have a broader sort of visibility across risk than maybe a team or an IC that’s really down in the weeds and doesn’t have that breadth of information. But usually what I try to do is just not speak in language

Unnamed Speaker

that’s going to completely make them defensive. And I try to get them to talk about what they’re worried about. So even if they have a great idea, it’s like, well, what are you worried about with the idea? What would have to be true for that idea to work? And there’s a good chance that whatever they say next maps back to those three themes we’ve been talking about. They’re either worried that, oh, I don’t know if there’s a big enough market for this. Or I don’t know if people would pay enough.

Unnamed Speaker

Or, I don’t know, are we really set up as a company to deliver this if it’s successful? And I try to always map those back into that little framework. And then it’s like, oh, let’s go check. How do we go test, how do we find out sooner rather than later that we’re on the right track with this? Like, I’m sure you’re right, but how do we go check? That’s something I picked up from Eric Ries when I was working with him at GE. So it’s this idea of using language that doesn’t make them defensive.

Unnamed Speaker

And then asking them what they’re worried about, because it gives them space to talk about that. They don’t always have to be right or know the perfect approach and all that, which is a lot of pressure and kind of not realistic. And then start mapping them back to those themes and saying, oh, well, how do we go check? Can we just go check quickly on that? Because I’m sure it’s a great idea, but let’s go check. That has worked out really well for me. So: speaking in words that don’t make them defensive, asking how we go check.

Unnamed Speaker

And then it’s always about reducing risk. So it’s more like, well, how do we reduce risk in that idea? What would have to be true? And so that all kind of dovetails nicely into a little testing framework.

Unnamed Speaker

Awesome. So this person submitted a very specific question. Maybe you can think about experiments that you would recommend for them. And if others want to comment their situation, we can pick David’s brain on specific experiments. So if you’re an education technology hardware company, and you’re trying to identify the next or other ed tech hardware items that should be in a next generation classroom, what types of experiments would you recommend running?

Unnamed Speaker

Oh, that’s a good one. Well, I can say what not to do first, maybe. So I still do a lot of coaching and training in Silicon Valley, and I have cohorts of people that are excited about whatever the next tech is. So it’s VR, it’s AR, it’s AI. And they always make this kind of mistake of saying, well, we’re like VR for schools, or we’re VR for math, or whatever. And they’re leading with the tech and not the value the tech provides.

Unnamed Speaker

So there are some cases where something like VR could be immensely valuable, such as if the way we teach this is kind of fundamentally flawed and this will help the concept stick in a different way. It’s not just about applying the tech to a topic or something. I would say with schools, you have to consider private versus public and all that. And I do advise schools. I advise the boards of some schooling systems inside the United States.

Unnamed Speaker

And usually it’s a really complex environment, as you all probably know, if you’re submitting this question, you live and breathe that world. And so I’m always looking for like, what kind of things are the teachers doing that are hacks around the existing process and what kind of new tech are they like bringing in and trying to use to solve a problem? And is that something that’s a big enough problem that could warrant a more sort of like repeatable solution? So I kind of look at teacher behavior.

Unnamed Speaker

So there’s probably some interviews you could do there, some observational research if they let you do that. But I would say the challenge is that teachers usually don’t have purchasing decision power, right? Authority. And so you usually have to navigate your way up, maybe even to the board level: okay, my value prop to the classroom and the teacher might be one thing, but then I have to shape that a little differently for a decision maker that has budget authority. And that’s notoriously difficult in the school system.

Unnamed Speaker

But I look around the fringes. I look at, you know, I’m always interested in how people are hacking things.

Unnamed Speaker

And so there are ways you can look around the fringes, look at trends, but I would say that’s not usually the biggest problem. The biggest problem is your value prop to the decision makers and the people with authority. And those are going to be different experiments you would run there. But yeah, it’s a fascinating space. I just think it’s notoriously difficult. I almost liken it to hospitals: nurses will love something, but they don’t have decision-making or purchasing power.

Unnamed Speaker

And then you have to go to administrator and they have a very different view on things. So it’s more about navigating that and testing your way through that.

Unnamed Speaker

What about testing strategies when you have impractical deadlines?

Unnamed Speaker

Yeah, I wish they weren’t impractical, but if they are, that usually means you have to limit your testing to what you can accomplish in the timeframe. The good news is, as long as you have access to customers, there’s stuff you can do very quickly that’s usually low cost. The bad news is it’s going to be light evidence as far as whether you should make a big investment decision in this thing.

Unnamed Speaker

So what I try to do over time is, again, close that say-do gap and get to a point where, okay, this is how people are behaving, and that gives us more confidence to invest more in it. So if you say we have a fixed date and we can’t go beyond that date, then I would reverse engineer your way back from that date and say, okay, what kind of experiments could we do in that timeframe that would give us some directional evidence?

Unnamed Speaker

But just keep in mind that hopefully you have some kind of iterative deadlines or some kind of roadmap where it gives you the option of coming back in and reevaluating things, because the challenge with that, again, it’s going to be relatively light evidence. And you might be placing a big bet on light evidence, and if you’re wrong, it’s really hard to recover from because you’ve already built something and it takes money to pivot something.

Unnamed Speaker

People say pivot all the time, but to actually pivot something, and Kate’s lived this too, is a lot of hard work. So it costs money to pivot, is basically what I’m saying. I would just say, work backwards from the date and then caveat it with: it’s going to be lighter evidence because we didn’t have the time to generate stronger evidence.

Unnamed Speaker

Awesome. How should you or should you incentivize or compensate customers and prospects when they participate in experiments?

Unnamed Speaker

Yeah, it’s pretty common to incentivize people for their time, like in an interview, for example. It’s not always required, though. People love to talk about their problems. Especially, and I don’t want to say you’re preying on their expertise, but if they’re an expert in a field and you say, I’m doing research in this, we’re not sure we’re going to build something yet, but can you tell us about these problems? Usually they’ll just tell you those for free.

Unnamed Speaker

But if you’re using a service to compensate people, let’s say you’re trying to get to doctors or dentists or somebody you’re not usually going to find in a Starbucks, right? It’s not like B2C where you just go find customers on the street. Then I’ve seen it go up to, I think recently, about $1,000 per interview. So it can get quite expensive quickly.

Unnamed Speaker

So before you know it, you’ve spent 15 to 20 grand on a small set of interviews, but these were people you would have no other way to get in front of. So it varies. If you’re running something through another platform where it’s not as hard, but still B2B, you usually comp them with an Amazon gift card or something like that.

Unnamed Speaker

Just be mindful that you don’t want to always have to compensate, because then you might fall into this trap where people are telling you what you want to hear because they’re being compensated. You never want the comp to be disproportionate to the information you’re trying to get from them, because then they’ll usually just tell you what you want to hear. But yeah, it varies widely depending on how accessible your customers are.

Unnamed Speaker

What about offering free or discounted access to the thing that you’re building?

Unnamed Speaker

You can do that. That is certainly a way. I would say as you build out that group, though, be careful of listening only to their feedback. You do want to go beyond that group, because I have advised some companies where unfortunately they always kept going back to the same group over and over again, and that wasn’t representative of the larger market.

Unnamed Speaker

And so they were kind of blindsided when they went larger, and then they realized people weren’t responding positively to what they’d created because it was narrowly built for a small group. So that is something I see as a trap. So just be mindful of not only going back to them. It’s great to have an advisory council and such, just make sure you’re going beyond them as well, and you’re not

Unnamed Speaker

locally optimizing your product for a really small sample size and then realizing, oh, this isn’t representative of the larger market and it’s actually going to be a waste of time.

Unnamed Speaker

So we have two different questions on collaboration. They’re a little different but maybe overlapping. The first is how do you have collaborative discussions with other manufacturing companies? So presumably you’re both manufacturing pieces of something that your shared customer uses or inputs for each other’s processes. And we’ve got another one on how do you leverage, how do you think about the current market versus market growth with a collaboration?

Unnamed Speaker

So how do you think about testing ideas when there are other companies involved in the potential output solution?

Unnamed Speaker

Yeah. So I have a lot of clients who are doing this now. I might be able to introduce you to them. It just depends on what your industry is and if they’re competitors. So you can always contact me through Kate.

Unnamed Speaker

We’ll share David’s contact info at the end.

Unnamed Speaker

Yeah. I could say normally what we’re doing is we’re kind of looking through the landscape and we’re seeing who are good partners. And then we’re just making sure that we’re really open and honest about how we plug and play together and then not overspending. So for example, we do a lot of 3D printing. We use cardboard. We use a lot of really quick, rapid prototype techniques to figure out is this going to work together.

Unnamed Speaker

I think with my hardware clients, it’s really easy to say, well, we need a factory run for this, and we have to spend so much… this amount of inventory we need to create to be statistically significant and all that. And so it can end up being months or years to find out if it’s worth pursuing. And so what I’ve been tasked with by a lot of the hardware companies I coach is shrinking that down to weeks and months. So how do we move faster? And the great news is there’s a lot of tech now that lets you move faster.

Unnamed Speaker

So usually what we’re doing is co-creating, and we’re trying to use rapid experimentation techniques. So a lot of, again, 3D printing, paper, things like that. Yeah, there needs to be some sort of agreement at a principal level; some kind of joint development agreement usually comes up. I’m not going to give legal advice here, but there’s usually some sort of document we sign saying we agree upon these terms. And then, what was the other collaboration question really quick? I was going to get carried away with that one.

Unnamed Speaker

This one was, I think, broader.

Unnamed Speaker

So how do we leverage the current market compared to the market growth with a collaboration?

Unnamed Speaker

How do you test what you could do by yourself versus what you could do with a collaborator and how you consider and evaluate those two different options?

Unnamed Speaker

Yeah. There are some things I’ve helped with companies where we’re trying to figure out what would the success metrics be for collaboration. Usually, if you think about why you’d bring in a partner, for example, and there’s some shared risk with partnership usually, it’s usually like they bring an activity that you can’t do or don’t want to do, or they bring a resource that you don’t have or don’t want to create yourself, or potentially they’re a channel partner, almost like a reseller in a way, where they help you get to your customers.

Unnamed Speaker

So first I try to say, well, what category do these fit into? And then how would we measure success for those areas? And then how much would that cost? And so you can do back- of- the- napkin math on that. And basically you start figuring out, okay, is this going to be a fruitful partnership? And what’s surprising to me is a lot of big companies still don’t do this. Now, startups, I can almost give a pass because they’re just like, oh, a big company wanted to partner with me. Let’s go do it. And they just squirrel and they run over that way.

Unnamed Speaker

But with bigger companies, I still see that it’s not a common practice: we’re not necessarily setting the criteria for a partnership, we don’t know where partners fit in those buckets, and we just use really vague terms. So I would say think about activities, resources, channels, and then try to understand what it is they’re bringing to the table versus what you can do yourself. And it becomes a build versus buy versus borrow conversation. But I would recommend going down that path.

Unnamed Speaker

There are some tools, but I don’t think a lot of them are publicly available, so you can probably just do the back-of-the-napkin math and go from there. I have corporate clients that say, oh, we want to do our MVP development with a startup because they can do it much quicker than us. And we think the partnership would be fruitful because they’re moving into a space that we don’t know as well, but it’s related to what we do.

Unnamed Speaker

So I see a lot of really cool stuff going on in the venture community, startup studios, MVP development with startups that wasn’t as common 5 to 10 years ago. I see that gaining a lot of steam and popularity in some of my clients.

Unnamed Speaker

Cool.

Unnamed Speaker

All right.

Unnamed Speaker

It’s been almost an hour and we haven’t talked about AI. So here’s a layup. I know you have a workshop coming up. How can you use AI in testing?

Unnamed Speaker

Yeah. I’ve been spending a lot more time on this recently. And most of it is because I saw some weirdness. I saw people trying to replace actual research with AI, which I’m still a little dubious of. But I do view it as more of an extra team member. So for example, my process is: we extract your assumptions, desirable, viable, feasible; we map those by importance and evidence; and then we run experiments to go test the really important things, like your riskiest assumptions.

Unnamed Speaker

Sometimes teams have a hard time coming up with those assumptions and prioritizing them. They might have a really small subset, or there are some questions they’re not sure about. And so what we’ve been using AI for is to augment that process. And this is what I’m running a workshop on, on the 23rd. It’s basically going through: okay, can we use, like, I use GPT, for example. So you can give it an idea and a target customer, and it starts to generate the desirable, viable, feasible assumptions. Now, some of them may overlap.

Unnamed Speaker

I just did this when I did a guest lecture at UC Berkeley, and we had one of the students give the ideas, and I just did it live, kind of in flow with them. And it came up with some assumptions that they already had, but it had some assumptions they didn’t think of. And they thought, oh, we didn’t consider this and this and this. So for example, some early stage hardware companies almost always forget about FDA compliance. You know, they’re like, oh, I just want the tech to work.

Unnamed Speaker

And I’m like, yeah, but you might need FDA compliance, like there might be something there. And so some things might come up there. And then we use it to help prioritize. So we could say how important is this to the success, and then how much evidence exists. And some teams do that, right. But we can also have another team member, which is the AI, basically say: this is what I think is important, this is where I think the evidence is.

Unnamed Speaker

And then when it comes to experiments, one of the reasons I wrote the book was that it gives you a way to kind of match. So you can say, oh, I have this kind of risk. But other than interviews and surveys, what else could I do? And the way the library is coded is it has all those themes of risk. And you can say, oh, this might work and this might work and this might work. So with AI, you could have it generate, you know, some experiments you might not have considered to run against that kind of risk. So I use it almost like an extra team member.
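
The matching David describes, an experiment library coded by risk theme so you can look up options beyond interviews and surveys, can be sketched as a simple lookup. The mapping below is an illustrative subset using experiments mentioned in this session, not the book’s actual index:

```python
# Illustrative subset of experiments tagged by the risk theme they address;
# the real library in "Testing Business Ideas" is far larger.
EXPERIMENTS_BY_THEME = {
    "desirability": ["customer interview", "landing page", "concierge"],
    "viability": ["pre-sale", "pricing landing page", "letter of intent"],
    "feasibility": ["wizard of oz", "technical spike", "3d print prototype"],
}

def suggest_experiments(theme, already_tried=()):
    """Return candidate experiments for a risk theme, skipping ones already run."""
    tried = {t.lower() for t in already_tried}
    return [e for e in EXPERIMENTS_BY_THEME[theme] if e not in tried]

print(suggest_experiments("desirability", already_tried=["Customer Interview"]))
# → ['landing page', 'concierge']
```

This is the same question the library answers: given this kind of risk, and other than what we have already done, what else could we run?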

Unnamed Speaker

What I don’t do, and I don’t recommend yet, is I just don’t think it’s there to say, well, we don’t have to talk to customers. Let’s just talk to GPT and it’ll tell us what the jobs, pains, and gains are.

Unnamed Speaker

Customers would have said, yeah.

Unnamed Speaker

Yeah. And it’s making a lot of inference there. And I do still think you need to talk to customers. What it can do, though, is you can say, oh, I have an interview script. What’s wrong with this script? What might I be missing? And it can do that. Or I might say, I have a bunch of interview notes. Can you theme this for me and tell me what the overall themes of jobs, pains, and gains are? It can do that. But I don’t use it to replace. I use it more as like an extra team member.

Unnamed Speaker

In that vein, and maybe a last question, what’s your favorite experiment or a couple of favorite experiments from your book?

Unnamed Speaker

Oh, that’s hard. I have to say Concierge and Wizard of Oz are still probably two of my favorites, because they tap into the team’s creativity. Basically with Concierge, you’re just doing it manually, and it’s pretty obvious that you’re doing it manually. It’s almost like, I’m just going to give you a service.

Unnamed Speaker

I just call that bootstrapping a business.

Unnamed Speaker

Yeah.

Unnamed Speaker

And so a lot of us are going to do things manually at first. It doesn’t scale. And that’s okay. I think it’s okay to do things that don’t scale. But you should learn from that and be able to inform your design of what needs to happen to scale based on real interactions. Wizard of Oz is very similar to that, except it’s not obvious a person’s in the loop. So a lot of the early AI startups I was advising, you know, it’d be a lot of spreadsheets and things behind the scenes. And they would manually deliver it.

Unnamed Speaker

And it was through like some kind of digital curtain. So it’s through a landing page or some kind of thing that wasn’t obvious a person was involved. People still paid for it, still loved it. It didn’t scale, but it gave them feedback on, okay, if I was to automate the backend, what might that start to look like? And will this pricing work out and everything? So I still love those because they tap into creativity. And we’re not just worried about all the tech right away. We’re worried about the value we provide to the customer.

Unnamed Speaker

And in the end, that’s what really matters. And so I think it’s okay to do things that don’t scale sometimes. I know that sounds weird, but I find that doing stuff that doesn’t scale gives you time to really deeply understand the needs. And I still think Concierge and Wizard of Oz are some of my favorites, because I’ve seen some amazing stuff come out of those that ended up being really successful products. And it started off as, like, the product manager doing something that didn’t scale.

Unnamed Speaker

Of course. Yeah. Well, David, thank you so much for your time today. This has been an awesome conversation. We will share all kinds of links. So recording links to your book, your contact information, to make sure that folks who attended who want to get in touch can do that.

Unnamed Speaker

Appreciate it.

Unnamed Speaker

Thanks for having me.

Unnamed Speaker

Have a great rest of your day.

Unnamed Speaker

Thanks, everyone. Appreciate it. Thanks for having me. Thanks, everyone.


Key Takeaways

  • Understand Customer Needs: Prioritize understanding customer needs through direct interactions and observation to inform product development effectively.
  • Value Proposition Clarity: Clearly define your value proposition and align it with the needs and desires of decision-makers and end-users to gain buy-in and adoption.
  • Navigating Decision-Making Dynamics: Acknowledge and navigate the complex dynamics of decision-making, recognizing the differing perspectives and authority levels within organizations.
  • Testing Strategies under Time Constraints: When facing impractical deadlines, prioritize testing efforts within the available timeframe, even if it means accepting lighter evidence, while allowing for iterative reevaluation.
  • Incentivizing Participation: Offer incentives for customer and prospect participation in experiments, balancing compensation with the value of the information obtained to avoid bias.
  • Diversify Feedback Sources: While offering free or discounted access can be valuable, diversify feedback sources beyond a small group to ensure representation of the broader market and avoid tunnel vision.
  • Effective Collaboration Strategies: Foster open communication and establish clear agreements when collaborating with other companies, leveraging rapid experimentation techniques to streamline the process and mitigate risks.
  • AI Augmentation in Testing: Utilize AI as an additional team member to augment idea generation, assumption prioritization, and experiment suggestion processes, while recognizing the irreplaceable value of direct customer feedback.
  • Continuous Learning and Adaptation: Embrace a mindset of continuous learning and adaptation, leveraging experimentation to inform decision-making, iterate on product development, and drive successful outcomes.

Additional Resources

Testing Business Ideas: A Field Guide for Rapid Experimentation by David Bland and Alexander Osterwalder

David’s Website

YOU MAY ALSO BE INTERESTED IN
Destiny LaLane
Leveraging AI and Automation in Recruiting
Destiny LaLane is the Founder of the Recruiting School, providing training for recruiters and embedded recruiter services. She’s worked with FAANG companies, Y Combinator and Techstars-backed companies like DrChrono, Ashby, Laudable, and Propagate, and later-stage companies like Affirm and Chainalysis. In this guide, Destiny walks through the different ways that recruiting orgs can incorporate AI across the recruiting process, from designing a role to taking it to market and evaluating candidates.
Sam Torres
Leveraging AI in SEO
Samantha Torres is the Chief Data Officer of The Gray Dot Company, an SEO and Data Consulting Company. Samantha has more than 10 years of SEO experience optimizing SEO at 90+ B2B and B2C companies. In this guide, Samantha walks through how to leverage AI to improve your SEO and content strategy, and the implications of generative AI tools on SEO.
Dave Boyce
Building a Product-Led GTM
Dave Boyce is a go-to-market-focused advisor and board member with over 20 years of experience leading SaaS companies. Dave is the author of Product That Sells Itself, the forthcoming book from Stanford Business Press. Previously, Dave was the Chief Strategy Officer at InsideSales.com, CEO of ZenPrint, and the GM Consulting and PLG Practice Lead at Winning by Design. In this guide, Dave walks through the different approaches to building product-led growth and outfitting teams and your product to support self-service sales, renewal, and onboarding.