AI-PRISM: A New Lens for Predicting New Product Success


Dr. Robert Cooper

kHUB post date: February 28, 2025
Read time: 10 minutes

The Challenge

A huge majority of new product (NP) investment decisions are wrong! Almost 80% of approved projects fail commercially or are canceled later, often after substantial resources have been spent, according to a global PDMA study [1, 2]. A toss of a coin would give better results! Another recent investigation into portfolio management practices in the U.S. found that only 41% of approved projects ultimately met their profit targets [3].

Can AI Help?

AI models, including neural networks, have not yet been deployed to autonomously make investment decisions for NPD [4]. For example, a 2024 study we conducted found that no firms in the U.S. or Europe had entrusted NPD investment decisions to AI, nor do they plan to in the foreseeable future [5]. The findings suggest that managers either lack trust in AI for these critical decisions or believe that they can outperform any automated system.

Instead, AI primarily serves as a tool to provide data to enhance human decision-making:

  1. Data: Accessing market data, customer feedback, and technical information to generate insights that inform NP investment decisions, and using predictive analytics for market and sales forecasts [6,7].
  2. Evaluation: Considering a wide range of variables, including financial metrics, market conditions, and the competitive landscape, to provide a holistic view of a project’s potential [8].
  3. Execution: Executing tasks across NPD stages—from ideation to launch—yielding better data that improves go/no-go decisions [9].

AI in Financial Decision-Making

The situation is markedly different in the financial sector, where AI plays a huge role in investment decision-making. In hedge funds, for example, AI-driven trading accounted for over 40% of trading volume in 2024 [10]. Hedge funds leveraging AI outperformed their peers by an average of 12% annually. Another study recommends “a man versus machine zero-cost strategy”, namely “buy AI-managed hedge funds and short sell human managed funds”; this strategy yields “a highly significant spread of at least 50 basis points per month" [11].

In mutual funds, a rigorous study revealed that the funds with greatest use of AI for analysis and trading outperformed non-AI funds: “A long-short portfolio, which goes long in the top 20% of funds with the highest AI ratio [usage] and short in the bottom 20% of funds with the lowest AI ratio, delivers an annual excess return of 1.56%” [12].

And even in bitcoin trading, “artificial intelligence funds utilize machine learning together with deep learning algorithms and vast data analytics [to] generate accurate, emotionless trading decisions for Bitcoin” [13]. Can we in NPD learn from the experience of AI making investment decisions in the financial sector?

An AI Model That Can Predict NP Success

Since managers remain reluctant to delegate NP investment decisions to AI, we explored how AI could play a bigger role. “One of the most promising applications of AI is to predict the success of new products based on data analysis and modeling,” notes Awasthi [14].

First, we engaged AI, specifically Perplexity Pro, to review the extensive research on NP success-versus-failure reasons, and then to develop a predictive model based on what it uncovered—to create a robust rating model that would assess an NP project and then predict its success.

The result was PRISM, a seven-factor model (with 20 optional sub-questions), shown in the Appendix [15]. The initial role for PRISM was that managers would do the scoring—but the result is still subject to managerial biases and knowledge gaps. PRISM does not make the go/no-go decision—that’s still the role for humans.

Next Step: Autonomous AI Prediction

What if AI could itself do the scoring using the PRISM model and make the success predictions autonomously? An autonomous prediction model would eliminate management biases and also access more information to enable better predictions.

We used 13 projects from various businesses to test AI and PRISM with this task.** Each project typically had a short description with information on the product, its market, technology, and relevant company resources.

We directed Perplexity Pro* to review each project and its outline, then to undertake a thorough market and technical assessment using external online sources, and finally to score the project from 0 to 10 on PRISM’s 20 sub-questions in Appendix A. Perplexity then calculated a percent probability of success.


**13 real projects but well-disguised to protect privacy.
*Perplexity Pro is an AI platform providing users access to a variety of powerful AI models, including GPT-4 Omni, Claude 3.5, Haiku, Sonar Large, DALL-E3, and now DeepSeek. Pro is the for-a-fee version; Perplexity is also free.

 

Training and refining the AI model to effectively employ PRISM required significant time before achieving reliable performance. Numerous iterative trial runs and tests were conducted.

In early tests, for instance, the model occasionally demonstrated a degree of “laziness,” relying heavily on the information summary prepared by the project team rather than sourcing additional market and technical data. This was problematic because project teams often neglect critical front-end research.

To address this, we refined the AI’s prompting instructions to ensure it supplemented the team’s deliverables with a comprehensive online search across diverse sources. Iterative testing with adjustments to the prompting methodology ultimately resolved this and other issues, improving the AI-PRISM model’s effectiveness.

Using AI-PRISM’s Predictions in the Go/No-Go Decision

The AI-PRISM model does not make the go/no-go decision. The main criterion for that decision is value to the business, where the NPV is the most popular metric [17].

The challenge is that most NP financial projections far exceed reality, typically by more than a factor of two [18]. Introducing the probability of success is one way to build reality into the financial projections, namely, the likelihood that these profit forecasts will actually materialize.

The appropriate metric is the Expected Commercial Value (ECV). Here, the probability of success is combined with the traditional financial numbers available at the project review meeting, yielding a much more valid estimate of the project’s economic value to the company:
ECV = Ps × PV − (D + C)

Where:
ECV = expected commercial value
Ps = probability of NP success from Figure 1
PV = the present value of future earnings from the project
D = development cost
C = commercialization and launch costs [19].
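The ECV calculation above can be sketched in a few lines of Python. The project numbers below are purely illustrative, not taken from any of the 13 test projects:

```python
def expected_commercial_value(ps: float, pv: float,
                              dev_cost: float, launch_cost: float) -> float:
    """ECV = Ps x PV - (D + C), as defined above.

    ps: probability of NP success (0-1), e.g. from AI-PRISM (or a blend
        of AI-PRISM and managerial scores)
    pv: present value of future earnings from the project
    dev_cost: development cost (D)
    launch_cost: commercialization and launch costs (C)
    """
    return ps * pv - (dev_cost + launch_cost)

# Hypothetical example: 65% success probability, $10M present value,
# $2M development cost, $1M launch cost.
ecv = expected_commercial_value(ps=0.65, pv=10.0, dev_cost=2.0, launch_cost=1.0)
print(f"ECV = ${ecv:.1f}M")  # ECV = $3.5M
```

Note how a project with a healthy $10M present value shrinks to a $3.5M expected value once a realistic success probability is applied, which is exactly the reality check the ECV metric is meant to provide.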

The example in Figure 1 shows that, besides AI, six managers also scored the project using PRISM’s seven factors in Table 3. Figure 2 gives the combined strengths/weaknesses assessment. Here, managers and AI-PRISM are closely aligned; this is not always the case!

The probabilities of success—a weighted average of AI-PRISM and managerial scoring in Figure 1—are used in the ECV equation to yield a realistic estimate of the project’s economic value to the firm.

 

Figure 1: Success Prediction for a NP Project (disguised), 
Showing Results From Management and AI-PRISM   


Are AI-PRISM's Results Reliable and Valid?

Are the AI-generated results from PRISM and Perplexity Pro both reliable and valid enough to use in NP investment decisions?

1. Reliability – In research, when the reliability of a metric is uncertain, multiple measurements are typically taken and the average across those readings is used. Following this principle, Perplexity Pro was instructed to evaluate the same 13 projects ten times each. Note that the scores varied across the 10 runs, as AI accessed different sources and made different assumptions each time—much like a human analyst would. Table 2 shows the relatively low standard deviations from AI-PRISM, which indicate a reasonable degree of repeatability and reliability.
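The repeated-measurement procedure reduces to averaging the ten runs and checking their spread. A minimal sketch, using illustrative run values rather than the study’s actual data:

```python
from statistics import mean, stdev

# Hypothetical predicted success probabilities (%) for ONE project across
# ten independent AI-PRISM runs (illustrative numbers, not study data).
runs = [62, 58, 64, 60, 61, 59, 63, 60, 62, 61]

avg = mean(runs)      # the averaged prediction carried forward to the decision
spread = stdev(runs)  # a low standard deviation signals repeatable results

print(f"mean = {avg:.1f}%, std dev = {spread:.2f} points")
```

Averaging across runs smooths out the run-to-run variation introduced by the AI consulting different sources each time, just as averaging across several managers smooths out individual biases.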

 

Figure 2: Strengths/Weaknesses Assessment Using Data from Managers and AI-PRISM (from Figure 1).

Compared to human evaluators, AI-PRISM’s predictions are much more consistent, with standard deviations less than half of those from managers for the same 13 projects (see Table 2). Teams of six to nine managers, depending on the project, scored each project. This result suggests that the reliability of AI-PRISM may surpass that of traditional managerial assessments.

2. Validity – While a model’s outputs may be consistent and reliable, the question remains: Are they valid and correct? To assess validity, we gave the same prediction task to a quite different AI model, DeepSeek*, and compared the predictions from DeepSeek and Perplexity Pro across ten runs for each of the 13 projects. The average predicted success probabilities for each project, as generated by the two models, are presented in Figure 3.

 

Figure 3: Prediction Results from Perplexity and DeepSeek,
Both Using PRISM, for the 13 Test Projects, 10 Runs Each.

When plotted against each other, the results show an almost perfect 1:1 relationship, with an exceptionally high correlation (R² = 96.8%). This strong alignment reveals that, despite their distinct methodologies and origins, both AI models produce nearly identical predictions. This consistency reinforces confidence in the validity of AI-PRISM’s results and shows that both models perform at an equivalent level.
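The cross-model validity check amounts to computing R² between the two models’ per-project averages. A sketch with hypothetical (not the study’s) numbers:

```python
def r_squared(xs, ys):
    """Coefficient of determination (squared Pearson r) between two series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var_x = sum((x - mx) ** 2 for x in xs)
    var_y = sum((y - my) ** 2 for y in ys)
    return cov * cov / (var_x * var_y)

# Hypothetical per-project average success probabilities (%) from the two
# models for 13 projects -- illustrative values only.
perplexity = [55, 70, 40, 62, 80, 35, 58, 66, 48, 73, 52, 60, 68]
deepseek   = [54, 72, 41, 60, 79, 37, 57, 68, 47, 74, 51, 59, 70]

print(f"R^2 = {r_squared(perplexity, deepseek):.3f}")
```

An R² near 1 indicates the two models rank and rate the projects almost identically, which is the pattern Figure 3 reports.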

We also compared AI-generated scores with management evaluations. For this analysis, for each project, we computed the average of the managers’ ratings (groups of six to nine managers rated each project). We then compared this management rating to the average AI-generated score from ten runs per project.

As previously noted in Table 2, managerial ratings for the same project exhibited considerable variability, as gauged by high standard deviations—substantially higher than those observed in the AI-generated scores—indicating weaker reliability. Given this lack of reliability, it is unsurprising that managers’ ratings plotted against AI-generated scores showed a high degree of scatter and weak correlations, with R² values hovering around 30% (figures not shown).

*DeepSeek, the new Chinese AI model, is available through Perplexity Pro. Data is hosted on servers in the US; the user’s data is not shared with the model provider or with China, and is subject to US law (California privacy regulations).

Once again, one might cautiously conclude that the AI-PRISM model outperforms managerial assessments in predicting project success and failure. This finding aligns closely with Nobel Prize winner Daniel Kahneman’s assertion that “uncertainty is poorly represented in intuition” [20, 21]. In essence, System 1 (intuitive thinking) struggles to handle uncertainty effectively, whereas AI models demonstrate superior predictive capability.

Conclusion

The PRISM model was utilized by AI to autonomously predict the success of various NP projects. Both the reliability (repeatability of results) and validity of these predictions appear satisfactory. Notably, the PRISM model itself was developed by AI specifically for forecasting new product success.

One recommendation is for project teams to leverage AI-PRISM as a self-assessment tool. Often project teams are overly optimistic about their projects, and a tool like AI-PRISM introduces a reality check. By undertaking an AI-PRISM analysis on their own project, and inviting knowledgeable outsiders to participate, teams can identify their project’s strengths and weaknesses.

Additionally, it is recommended that management integrate the AI-PRISM model into NPD go/no-go decision-making meetings. Acknowledging that this represents a significant step forward, our suggestion is to incorporate AI-PRISM as an “additional evaluator” during such decision meetings, as shown in Figure 1. To ensure reliability, one should conduct multiple AI-PRISM evaluations per project, as we did. Furthermore, human oversight during decision meetings remains essential [22].


About the Author

Dr. Robert Cooper is Professor Emeritus at McMaster University, Canada, and ISBM Distinguished Research Fellow at Penn State University. A world expert in the field of management of new-product development and product innovation, Dr. Cooper has written 10 books on the topic and more than 170 articles. Bob is the creator of the globally employed Stage-Gate (trademarked) process used to drive new products to market, and a Fellow of the Product Development & Management Association. He is a noted consultant and advisor to Fortune 500 firms, and also gives public and in-house seminars globally.


Appendix A: The PRISM Scorecard Model

Perplexity Pro, an advanced AI model, created the PRISM scoring model for predicting new product success, with guidance from the author. Perplexity analyzed dozens of success/failure research articles and existing scoring models—a thorough literature search—and identified the most important factors that separate NP winners from losers.

AI then estimated appropriate weights for these factors. Perplexity even proposed the name “PRISM”, an acronym for Product Risk and Innovation Success Model. The development and details of PRISM are the subject of an article in process at Research-Technology Management [15].

The resulting model includes seven key criteria, consistently identified as critical to new product success, plus their sub-questions, detailed in Table 3 below. PRISM was initially developed for use by humans, that is, managers or project team members do the scoring on the seven criteria or 20 sub-questions.

 

Main criteria weights add to 10. Sub-question weights add to 10 for each main criterion.
Perplexity created an algorithm for PRISM that calculates a probability of success from the scores.
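As an illustration of the scorecard mechanics only, here is a minimal sketch. The criterion names, the individual weights, and the simple score-to-probability mapping are all placeholders; the actual PRISM factors and algorithm are detailed in Table 3 and the forthcoming article [15]:

```python
# Placeholder main-criterion weights summing to 10, per the note above.
# The names and values are illustrative, NOT the actual PRISM weights.
weights = {
    "factor_1": 2.0,
    "factor_2": 1.5,
    "factor_3": 1.5,
    "factor_4": 1.5,
    "factor_5": 1.5,
    "factor_6": 1.0,
    "factor_7": 1.0,
}

def prism_probability(scores: dict) -> float:
    """Map 0-10 criterion scores to a percent probability of success.

    Assumes a simple weighted sum scaled to 0-100; the real algorithm
    created by Perplexity may use a different mapping.
    """
    # Weights sum to 10 and scores run 0-10, so the weighted sum
    # already lands on a 0-100 scale.
    return sum(weights[k] * scores[k] for k in weights)

scores = {k: 7.0 for k in weights}  # uniform illustrative scores of 7/10
print(f"predicted success: {prism_probability(scores):.0f}%")  # 70%
```

The same pattern extends to the 20 sub-questions: each sub-question’s weight (summing to 10 within its criterion) rolls up to a criterion score, which is then weighted as above.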


References

[1] Mette P. Knudsen, Max von Zedtwitz, Abbie Griffin, and Gloria Barczak. “Best Practices in New Product Development and Innovation: Results from PDMA’s 2021 Global Survey,” Journal of Product Innovation Management 40(3): (2023): 57–275. https://doi.org/10.1111/jpim.1266

[2] Gloria Barczak, Abbie Griffin, and Kenneth B. Kahn. “Trends and Drivers of Success in NPD Practices: Results of the 2003 PDMA Best Practices Study,” Journal of Product Innovation Management 26(1): (2009): 3–23. https://doi.org/10.1111/j.1540-5885.2009.00331.x

[3] Robert G. Cooper, Meetal Desai, Lee Green, and Elko J. Kleinschmidt. “Strategies to Improve Portfolio Management of New Products,” Research-Technology Management 67(1): (2024): 55–66. https://doi.org/10.1080/08956308.2023.2277992

[4] Tucker J. Marion, Mahdi Srour, and Frank Piller. “When Generative AI Meets Product Development,” MIT Sloan Management Review (July 29, 2024). Link: When Generative AI Meets Product Development

[5] Robert G. Cooper and Alexander M. Brem. “Insights for Managers About AI Adoption in New Product Development,” Research-Technology Management 67(6): (2024): 39–46. https://doi.org/10.1080/08956308.2024.2418734

[6] Robert G. Cooper. “The Artificial Intelligence Revolution in New-Product Development,” IEEE Engineering Management Review 52(1): (Feb. 2024): 195–211. https://doi.org/10.1109/EMR.2023.3336834

[7] Robert G. Cooper and Tammy McCausland. “AI and New Product Development,” Research-Technology Management 67(1): (2024): 70–75. https://doi.org/10.1080/08956308.2024.2280485

[8] Pilar Carbonell-Foulquié, Jose L. Munuera-Alemán, and Ana I. Rodrı́guez-Escudero. “Criteria Employed for Go/No-Go Decisions When Developing Successful Highly Innovative Products,” Industrial Marketing Management 33(4): (2004): 307–316. https://doi.org/10.1016/S0019-8501(03)00080-4

[9] Leeway Hertz. “AI in Product Development: Use Cases, Benefits, Solution and Implementation,” Leeway-Hackett blog, (2025). Link: AI in product development: Use cases, benefits, solution and implementation

[10] Clarigro. “AI Impact on Hedge Fund Performance and What to Expect in 2025,” Clarigro blog, (December 5, 2024). Link: https://www.clarigro.com/ai-impact-on-hedge-fund-returns-performance/

[11] Klaus Grobys, James W. Kolari, and Joachim Niang. “Man Versus Machine: On Artificial Intelligence and Hedge Funds Performance,” Applied Economics 54(40): (2022): 4632–4646. https://doi.org/10.1080/00036846.2022.2032585

[12] Yiming Zhang. “Do Mutual Funds Benefit from the Adoption of AI Technology?” HKUST Business School Research Paper No. 2024-165: (August 7, 2024). http://dx.doi.org/10.2139/ssrn.4871159 Link: Do Mutual Funds Benefit from the Adoption of AI Technology? by Yiming Zhang :: SSRN

[13] Joey Mazars. “AI-Powered Hedge Funds: How AI is Beating Humans in Bitcoin Investing,” Autogpt blog: (Feb. 13, 2025). Link: AI-Powered Hedge Funds: How AI is Beating Humans in Bitcoin Investing

[14] Jaya S. Awasthi. “How Can AI Predict the Success of New Products,” All/Manufacturing/Product R&D, Ch. 1: (2025). Link: How can AI predict the success of new products?

[15] Robert G. Cooper. “Using AI to Predict New Product Success,” Product Development Institute: (Feb. 2025), in process in Research-Technology Management, 2025. Link: UsingAIPredictNPSuccess.pdf

[16] The PDMA Handbook of Innovation and New Product Development, 4th ed., edited by Ludwig Bstieler and Charles H. Noble, Chapter 1: “New Products—What Separates the Winners From the Losers and What Drives Success.” Hoboken, NJ: John Wiley & Sons. ISBN 9781119890218.

[17] Fabio Magnacca and Ricardo Giannetti. “Management Accounting and New Product Development: A Systematic Literature Review and Future Research Directions,” Journal of Management and Governance 28: (2024): 651–685. https://doi.org/10.1007/s10997-022-09650-9

[18] Robert G. Cooper and Anita F. Sommer. “Dynamic Portfolio Management for New Product Development,” Research-Technology Management 66(3): (2023): 19–31. https://doi.org/10.1080/08956308.2023.2183004

[19] Robert G. Cooper. “Expected Commercial Value for New-Product Project Valuation When High Uncertainty Exists,” IEEE Engineering Management Review 51(2): (June 2023): 75–87. https://doi.org/10.1109/EMR.2023.3267328

[20] Daniel Kahneman, Thinking, Fast and Slow. Macmillan. (2011). ISBN 978-1-4299-6935-2

[21] Daniel Kahneman. “Maps of Bounded Rationality: A Perspective on Intuitive Judgement and Choice,” Nobel Prize Lecture (December 8, 2002). https://www.nobelprize.org/uploads/2018/06/kahnemann-lecture.pdf

[22] Joe McKendrick and Andy Thurai. “AI Isn’t Ready to Make Unsupervised Decisions,” Harvard Business Review: (Sept. 15, 2022). Link: AI Isn’t Ready to Make Unsupervised Decisions
