Selecting the Right AI Solutions for Use in New Product Development
Dr. Robert Cooper
kHub Post Date: August 27, 2024
Read Time: 8 Minutes
Businesses seeking to adopt AI for new product development (NPD) are faced with a dilemma: choosing from an abundance of riches! There are over 40 different proven applications for AI in NPD, from generating novel product ideas to designing the launch strategy, and over 400 different AI solutions offered by vendors.[1] It’s overwhelming! Where does one start? With so much at stake, and so many options available, project selection becomes critical in AI deployment. When it comes to AI projects, however, project selection appears to be in trouble.
Too Many Failures
Despite remarkable results reported by the large early adopters,[1] AI adoption for new product development remains weak, particularly among U.S. firms.[2] A major barrier to adoption is the lack of a robust business case.[3] The fact is that many AI projects fail to demonstrate clear business value.[4],[5] For example, a Deloitte study found that only 18–36% of organizations achieved the expected benefits from AI.[3]
One reason for the failure to deliver results is that most AI projects fail outright, with estimates placing the AI failure rate as high as 80%.[6] Indeed, only half of AI projects make it past the piloting stage![7] One major cause is that the AI solution simply did not work well; a second, equally serious cause is that the wrong project was chosen: a poor-value project that failed to meet users’ needs or did not solve a major user problem.[8] Had more careful project choices been made, both causes of failure might have been avoided.
In NPD, a similar process where there is far more accumulated experience, making the right project-investment decisions, or “portfolio management,” has consistently been found to be lacking, especially when it comes to using the best decision-making methods.[9]
Towards Better Go/No-Go Decisions for AI Projects
Throughout an AI project, one is constantly faced with making tough choices at project reviews or gates; a recommended gating process, the RAPID process, is shown in Figure 1.[10] AI project prioritization and go/no-go decisions are made under conditions of extreme uncertainty. (The RAPID process in Figure 1 is for acquiring AI solutions from vendors; a modified version of RAPID is used for internal development of AI solutions.)
Having a robust business case that correctly quantifies the economic benefits of AI for the proposed application is ideal. But proving AI’s business value is a major challenge, cited by 37% of managers.[4] For qualitative benefits, such as superior ideation, an economic value can be imputed by using cost-benefit analysis. For benefits that are more readily quantifiable, such as improved decision-making, companies can use more traditional financial analysis.
Financial Analysis – But Not Too Useful
Figure 1. The RAPID Technology Acquisition & Deployment Process Map for AI in NPD
The NPV is the most popular project-evaluation metric in business and can be used for AI projects, but with caution. Given the extreme level of uncertainty regarding outcomes, financial estimates are likely to be highly unreliable: AI projects are quite new to most firms, and experience is limited. For example, estimating just how much engineering time an AI tool will save, and the economic impact of that saving, is difficult.
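For reference, the NPV calculation itself is standard: discount the project’s estimated net cash flows over the planning horizon,

\[ \text{NPV} = \sum_{t=0}^{T} \frac{CF_t}{(1+r)^t}, \]

where CF_t is the net cash flow in period t (negative during acquisition and deployment, positive once benefits such as saved engineering hours materialize) and r is the discount rate. The formula is not the problem; for a novel AI application, every CF_t is a rough guess.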
Certainly, one can adjust for uncertainties by using probability-based financial metrics like Expected Commercial Value.[11] But these probabilities are also tough to estimate—the costs and payoffs of various AI applications in NPD are not just uncertain, often they are unknown!
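For completeness, the ECV metric referenced above folds those probabilities into the valuation. In the form commonly used for staged projects (see reference [11] for the full treatment),

\[ \text{ECV} = \left[ (\text{PV} \times P_{cs} - C) \times P_{ts} \right] - D, \]

where PV is the present value of the project’s future benefits, P_{cs} and P_{ts} are the probabilities of commercial and technical success, C is the commercialization or rollout cost, and D is the remaining development cost. The structure is sound; the difficulty, as noted, is that for AI-in-NPD projects the two probabilities themselves are often little more than informed guesses.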
Scoring Models Work!
A useful complement to financial metrics is the scoring model. Such scoring models have proved very effective for project selection in NPD, where they are quite common. Scorecards range from the original and simple “real-win-worth” model to more realistic “value-based NPD scorecards”.[12] Less well-known is that similar scorecards have also been developed for technology development projects, where the deliverable is not a new product but a new technology or technology platform. Some technology scorecard models are research-based, developed and/or validated on actual technology development project cases.[13]
The new SPARK model (“Scoring Projects for AI R&D Knowledge”) is shown in Table 1. It features valid prioritization criteria in the form of a seven-factor scoring model: factors that distinguish winning AI projects from losers, derived from similar, proven models for technology developments,[13] together with assistance from the AI tool Perplexity.[14] This scorecard is designed to be used in combination with financial metrics at the gate meeting to evaluate, and even rank, AI application projects for use in NPD and RD&E.
Table 1: The SPARK Scoring Model for Rating AI Project Attractiveness
Why Are Scorecards So Effective?
According to Nobel laureate Daniel Kahneman’s research, most of the decisions we make are intuitive (System 1), even when we believe we are deciding rationally.[15] Careful, rational, and rigorous decision-making (System 2) takes more energy and time than following our instincts, which was a poor trade-off for an early human facing a predator in the jungle: System 1’s instinct says “run!”
The argument in support of more rigorous methods such as scorecards at gate decision points is that relying on intuition is only effective if the decision is a routine one where the decision-maker has much experience, such as a doctor diagnosing a common ailment. This is not so in the case of innovation-project go/no-go decisions, which are complex and not confronted often. Thus, “intuition must be supplemented with as much of a logical structure as possible”.[16],[17]
This logical structure is missing, however, in most project evaluation or gate meetings on innovation projects. Sadly, in the typical project review meeting, the project team presents their project results to date, managers ask questions, and discussion ensues, often into topics of little relevance; finally, with time running out, a decision to “carry on” is made… all very informal and intuitive.
By contrast, the scorecard model is built on validated criteria, as illustrated in Table 1. Following the presentation of the project, each decision-maker thinks carefully and scores the project privately on these seven criteria or factors. Results are then shared, usually on a large screen. The scores on each factor help to identify the project’s strengths and weaknesses. Discussion and debate focus on these scores, and thus on what is important, leading to a rational rather than an intuitive decision.
Research shows that the “average decision-maker is optimal,” and thus the average score across the evaluators is a strong indicator of the project’s attractiveness.[18] No single evaluator gets the right answer, of course, but the average across the group is close to being right!
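To make that mechanic concrete, the short Python sketch below averages each factor across evaluators and rolls the result up to a 0–100 project score. The factor names and equal weights are placeholders for illustration only; they are not the actual SPARK criteria or weights in Table 1.

```python
from statistics import mean

# Placeholder factor names standing in for the seven SPARK criteria in Table 1;
# each evaluator privately scores every factor on a 0-10 scale.
FACTORS = ["strategic_fit", "user_value", "feasibility", "data_readiness",
           "vendor_capability", "manageable_risk", "payoff"]

def project_score(evaluations):
    """Average each factor across evaluators, then roll up to a 0-100 score."""
    factor_avgs = {f: mean(e[f] for e in evaluations) for f in FACTORS}
    overall = sum(factor_avgs.values()) / (10 * len(FACTORS)) * 100  # equal weights
    return overall, factor_avgs

# Example: three gatekeepers score one candidate AI project.
evaluations = [
    {"strategic_fit": 8, "user_value": 7, "feasibility": 5, "data_readiness": 4,
     "vendor_capability": 7, "manageable_risk": 6, "payoff": 8},
    {"strategic_fit": 7, "user_value": 8, "feasibility": 6, "data_readiness": 5,
     "vendor_capability": 6, "manageable_risk": 5, "payoff": 7},
    {"strategic_fit": 9, "user_value": 6, "feasibility": 5, "data_readiness": 3,
     "vendor_capability": 7, "manageable_risk": 6, "payoff": 8},
]
overall, by_factor = project_score(evaluations)
print(round(overall, 1))  # the average evaluator score on a 0-100 scale
print(by_factor)          # low factor averages flag weaknesses to debate at the gate
```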
When using a scoring model, note that its value lies not just in the project’s final score, but also in the useful and focused discussion by evaluators, which is structured by the model and process, leading to more rational decision-making. Some scorecards have achieved remarkable performance results—over 80% correct decisions, almost three times better than management decisions—in the case of product innovation projects![19]
Using Scorecards at Gates and Portfolio Reviews
Scorecards are particularly effective at real-time gate reviews of innovation projects. Dynamic portfolio management requires that the project’s business case and its rationale be updated for each gate using the most recent information.[20]
At each gate in the process in Figure 1, the project is reviewed and scored by senior management. As noted above, the Scorecard Score, particularly the average evaluator’s score, is a key indicator of project attractiveness, with a score of 65 or more out of 100 usually signaling a “positive” project. A typical scoring result, displayed at the gate meeting, is shown in Figure 2.
Figure 2: Results of an AI Project Evaluation Using SPARK,
with Scorecard Results on the Left and Strengths/Weaknesses Assessment on the Right
The NPV, despite its lack of reliability, is also an input to the go/no-go decision. If both the Average Evaluator Score and the NPV are positive, management makes a “go” decision and commits resources for the next stage of the process in Figure 1. In this way, the resourcing or killing of AI projects is based on a fairly rigorous process and model, and also on real-time information. Projects that have become weaker over time are spotted and killed, thus releasing their resources for better projects.
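A minimal sketch of that gate rule, assuming the 65-out-of-100 scorecard threshold noted above and a simple positive-NPV test (both cut-offs would be set by the gatekeeping team, not by this sketch):

```python
def gate_decision(avg_scorecard_score, npv, score_threshold=65.0):
    """Go only if both the average evaluator score and the NPV clear their cut-offs."""
    if avg_scorecard_score >= score_threshold and npv > 0:
        return "GO: commit resources for the next stage"
    return "HOLD/KILL: recycle the project or release its resources"

print(gate_decision(avg_scorecard_score=72.0, npv=1.4e6))  # -> GO: commit resources ...
```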
Portfolio reviews, by contrast, are periodic, often held four times per year, and cover the entire set of AI projects. They ensure the right mix and balance of projects and their correct prioritization. Both the NPV and the Scorecard Score are useful at this review. Here the Productivity Index, which is derived from the NPV, is used as a ranking tool, along with the Scorecard Score from the most recent gate meeting.[21]
Using both criteria together, projects are ranked from best to worst until no more resources are available.[21] Some projects end up far enough down the list to fall past the resource limit, and so must be put on hold. Not only does this portfolio method ensure the best set of projects is selected, it also ensures that the AI project pipeline is not overloaded.
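A sketch of that ranking logic follows, assuming the Productivity Index is computed as NPV divided by the constraining resource still required to finish the project (one common definition; the exact formulation in reference [21] may differ), and using hypothetical project data:

```python
# Hypothetical AI projects awaiting prioritization at a portfolio review.
projects = [
    {"name": "Idea-generation copilot", "npv": 2.0e6, "person_days": 120, "score": 78},
    {"name": "Requirements summarizer", "npv": 0.9e6, "person_days": 40,  "score": 70},
    {"name": "Launch-plan optimizer",   "npv": 1.5e6, "person_days": 200, "score": 62},
]

CAPACITY = 200  # person-days available for AI projects this period

# Productivity Index assumed here as NPV per person-day still required.
for p in projects:
    p["productivity_index"] = p["npv"] / p["person_days"]

# Rank by Productivity Index, using the latest Scorecard Score as a secondary key.
ranked = sorted(projects, key=lambda p: (p["productivity_index"], p["score"]), reverse=True)

used = 0
for p in ranked:
    funded = used + p["person_days"] <= CAPACITY
    if funded:
        used += p["person_days"]
    status = "FUND" if funded else "ON HOLD"
    print(f'{p["name"]}: PI = {p["productivity_index"]:,.0f} per person-day, {status}')
```

Projects that fall below the resource line are put on hold rather than under-resourced, which is what keeps the pipeline from being overloaded.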
Conclusions
Given the importance of AI adoption decisions, the number of AI application choices at hand, and the uncertainty of the decision situation, coupled with the current high failure rate for AI projects, it is vital that management use whatever tools are available to improve the effectiveness of the go/no-go decision. The AI SPARK scorecard in Table 1, when used with the proven methodology outlined above, promises to yield much better choices regarding AI projects, and thus helps to mitigate the high failure rates.
About the Author
Dr. Robert G. Cooper is ISBM Distinguished Research Fellow at Pennsylvania State University’s Smeal College of Business Administration, Professor Emeritus at McMaster University’s DeGroote School of Business (Canada), and a Crawford Fellow of the Product Development and Management Association (PDMA).
Bob is the creator of the popular Stage-Gate® process model, now the most popular idea-to-launch NPD process globally (for physical product firms). He also developed Stage-Gate-TD for internal technology projects, and co-developed the Agile-Stage-Gate process. In terms of project selection models, Cooper developed the original NewProd™ scoring model and the Value-Based Model for NPD projects.
Bob has published 12 books, including the “bible for NPD”, Winning at New Products, and more than 160 articles on the management of new products, most in refereed journals, including seven refereed articles on “AI in NPD” in 2023–24 alone. He has won IRI’s (Innovation Research Interchange) prestigious Maurice Holland Award three times for “best article of the year”. Bob has helped hundreds of firms over the years implement best practices in product innovation, including companies such as 3M, Dow Chemical, DuPont, Bosch, Danfoss, LEGO, HP, ExxonMobil, Guinness, and P&G.
Cooper holds Bachelor’s and Master’s degrees in chemical engineering from McGill University in Canada, and a PhD in Business and an MBA from Western University, Canada.
Website: www.bobcooper.ca
Contact: robertcooper@cogeco.ca
References
[2] Robert G. Cooper and Alexander M. Brem, “The Adoption of AI in New Product Development: Results of a Multi-firm Study in the US and Europe,” Research-Technology Management 67(3) (2024): 33–54. DOI: 10.1080/08956308.2024.2324241
[3] Robert G. Cooper, “Overcoming Roadblocks to AI Adoption in New Product Development,” Research-Technology Management (forthcoming, September 2024).
[9] Mette P. Knudsen, Max von Zedtwitz, Abbie Griffin, and Gloria Barczak, “Best Practices in New Product Development and Innovation: Results From PDMA’s 2021 Global Survey,” Journal of Product Innovation Management (2023): 1–19. DOI: 10.1111/jpim.12663
[11] Robert G. Cooper, “Expected Commercial Value for New-Product Project Valuation When High Uncertainty Exists,” IEEE Engineering Management Review 51(2) (June 2023): 75–87. DOI: 10.1109/EMR.2023.3267328
[12] Robert G. Cooper and Anita F. Sommer, “Value-Based Strategy-Reward-Win Portfolio Management for New Products,” IEEE Engineering Management Review 51(1) (March 2023): 172–182. DOI: 10.1109/EMR.2023.3260319
[15] Daniel Kahneman, “A Perspective on Judgment and Choice: Mapping Bounded Rationality,” American Psychologist 58(9) (2003): 697–720. DOI: 10.1037/0003-066X.58.9.697
[16] Daniel Kahneman, Thinking, Fast and Slow. New York, NY: Farrar, Straus and Giroux, 2011. ISBN 978-0141033570
[17] Rick Mitchell, Rob Phaal, Nikoletta Athanassopoulou, Clare Farrukh, and Christian Rasmussen, “How to Build a Customized Scoring Tool to Evaluate and Select Early-Stage Projects,” Research-Technology Management 65(3) (2022): 27–38. DOI: 10.1080/08956308.2022.2026185
[19] J.J.A.M. Bronnenberg and M.L. van Engelen, “A Dutch Test With the NewProd Model,” R&D Management 18(4) (1988): 321–332.