Monitoring the performance of R&D organizations is a notoriously difficult challenge. Billions are spent annually on corporate labs and global innovation centers, and while many company leaders would readily vouch for the importance of this spending, R&D doesn’t lend itself to easy performance measurement. In the face of anxious boards, impatient investors, and performance-driven executives, how can R&D leaders prove the value of their spending? How can they fight to protect or even grow their budgets by demonstrating how critical their role is?

These challenges will be familiar to many R&D leaders, most of whom turn to some set of standard performance metrics like the Vitality Index (the share of revenue and EBITDA generated by recently introduced products) to monitor results and demonstrate value. Traditional financial measures of business impact are often supplemented by a host of other metrics, which generally fall into four categories:

  • Strategy-Focused: Measures of how well R&D supports overall corporate strategy (e.g., ROI, sales by business lifecycle, margin improvement)
  • Execution-Focused: Measures of research productivity and efficiency (e.g., patent filings, stage gate success rate)
  • Project-Focused: Measures of individual project value (e.g., third-year EBITDA, risk-adjusted project potential, cumulative cost, or project newness to the world)
  • Talent-Focused: Measures of successful human resource management (e.g., employee-initiated departures, economic contributions of employees)
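Several of these metrics reduce to simple ratios. As one illustration, here is a minimal sketch of a Vitality Index calculation; the five-year window and product data are assumptions, and the definition of "new product" varies by company, which is exactly what makes the metric gameable:

```python
def vitality_index(revenue_by_product, launch_year, current_year, window=5):
    """Share of total revenue from products launched within the last
    `window` years. The window and the 'new product' test are policy
    choices, not standards; loosening either inflates the metric."""
    total = sum(revenue_by_product.values())
    new = sum(rev for product, rev in revenue_by_product.items()
              if current_year - launch_year[product] < window)
    return new / total

# Hypothetical product data ($M revenue and launch years)
revenue = {"Coating A": 60.0, "Coating B": 40.0}
launched = {"Coating A": 2023, "Coating B": 2015}
print(vitality_index(revenue, launched, current_year=2025))  # -> 0.6
```

Note that reclassifying a minor update to Coating B as a "new" 2024 launch would push the index to 1.0, which is precisely the gaming problem discussed below.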

While some of these assessments aim to evaluate internal effectiveness and drive behavioral changes, others (especially the strategic and execution-oriented metrics) are more often deployed as vehicles for communicating to various external audiences. These frameworks and concepts have become such an integral part of corporate vocabulary that we take for granted their capacity to express meaningful information about innovation performance.

However, in Newry’s experience working closely with technology leadership at innovative B2B companies, we’ve observed that many common business metrics have significant limitations, especially when applied to early-stage innovation activities or fundamental research. Some, like the Vitality Index, are easily gamed. Do minor product changes or updates really count as new products, for instance? Others risk driving the wrong behaviors: one company that had been tracking IP filings to gauge research productivity had to stop when they found that innovations were being split unnecessarily into multiple patents to drive up the counts.

One of the biggest challenges of applying metrics to innovation programs is their inherent uncertainty. A contact at one of our clients considered tracking the net present value of their research programs, but soon realized that “the NPV of noise is noise” – that is, at the early stages of any innovation effort, there are too many unknowns to make a reasonable or reliable prediction about specific future outcomes. At their best, metrics are helpful but don’t tell the whole story, and at their worst, they confuse more than clarify.
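The "NPV of noise is noise" point can be made concrete with a toy Monte Carlo sketch. Every parameter below (success odds, payoff range, timelines, discount rate) is an invented assumption, not client data; the takeaway is only that when inputs are this uncertain, the spread of plausible NPVs dwarfs any single point estimate:

```python
import random

def npv(cash_flows, rate):
    """Discount a series of annual cash flows back to present value."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows, start=1))

def simulate_early_stage_npv(n_trials=10_000, rate=0.10, seed=42):
    """Draw NPVs for a hypothetical early-stage program whose key inputs
    (success odds, peak cash flow, time to launch) are all highly uncertain."""
    rng = random.Random(seed)
    results = []
    for _ in range(n_trials):
        succeeds = rng.random() < 0.2   # assumed 20% chance the program pans out
        peak = rng.uniform(1, 50)       # assumed peak annual cash flow, $M
        delay = rng.randint(3, 8)       # assumed years of development before launch
        flows = [-2.0] * delay          # annual R&D spend during development
        if succeeds:
            flows += [peak] * 5         # five years of payoff if it works
        results.append(npv(flows, rate))
    return results

results = simulate_early_stage_npv()
mean = sum(results) / len(results)
spread = (sum((x - mean) ** 2 for x in results) / len(results)) ** 0.5
print(f"mean NPV ${mean:.1f}M, std dev ${spread:.1f}M")
```

Under these assumptions the standard deviation of the simulated NPVs is several times larger than the mean, so the point estimate conveys almost no information about any individual program's outcome.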

Of course, most R&D leaders can’t just opt out of using metrics – audiences like CEOs, boards of directors, and Wall Street analysts typically demand some form of hard evidence around innovation performance. Given that reality, several best practices should be considered when deciding what to measure and how to measure it:

  • Align metrics with strategic goals. Whether at the level of the individual program or the overall portfolio, R&D leaders should deploy metrics that support desired strategic outcomes. A Fortune 500 coatings manufacturer in our network uses one metric (margin expansion) for programs aimed at replacing existing products, and a different metric (the Vitality Index) for programs that target increased volumes.
  • Pick something easy to measure and simple to express. A portfolio of a dozen different metrics is hard to track and even harder to make sense of and communicate – far better to stick to one idea that is easy to grasp. For example, the leadership team at one of our clients knew that their intensive investment in R&D delivered results, but they struggled to communicate the value of this investment to the board and investors. Newry worked with the client to develop a new metric called the Innovation Premium that compares the client’s R&D spending and operating margin to those of 25 peer companies. The metric is calculated annually, and the key finding has consistently remained the same: while Newry’s client invests more than peers in R&D, it significantly outperforms its peers’ operating margins. While the metric doesn’t prove causation between R&D spend and performance, the consistent long-term trend has been a compelling defense of the client’s substantial R&D investment.
  • Monitor metrics as trends, not snapshots. Another Newry client is making a concerted effort to commercialize more innovations more quickly. They’re aiming to move past the days when they’d spend millions of dollars and years of effort on R&D programs only to realize their technology wouldn’t meet a customer need or the market had changed. We’ve worked with this client to track project “clock speed” and spend – and, most importantly, to compare these measures for current projects to those from the past to show progress over time. Presenting a metric frozen at a moment in a company’s history is meaningless – it is much more useful to know that project clock speed is accelerating than that each project takes 4 years on average.
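The trend-versus-snapshot principle is straightforward to operationalize: store the metric by cohort or year and report its direction of change rather than one frozen value. The clock-speed figures below are hypothetical:

```python
# Hypothetical project "clock speed" data: average years from kickoff to
# commercial launch, grouped by the year each project started.
avg_duration_by_cohort = {
    2018: 5.2, 2019: 4.8, 2020: 4.5, 2021: 4.1, 2022: 3.7,  # illustrative values
}

def trend_summary(series):
    """Report the direction and size of change across a yearly series,
    rather than a single frozen value."""
    years = sorted(series)
    first, last = series[years[0]], series[years[-1]]
    change = last - first
    direction = "accelerating" if change < 0 else "slowing"
    return (f"{direction}: {first:.1f}y -> {last:.1f}y "
            f"({change:+.1f}y over {years[-1] - years[0]} years)")

print(trend_summary(avg_duration_by_cohort))
# -> accelerating: 5.2y -> 3.7y (-1.5y over 4 years)
```

A single cohort's 4-year average says little on its own; the same data framed as a multi-year trend tells the progress story the article describes.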

Metrics can be effective and compelling storytelling tools when applied according to these best practices, but at the end of the day, no metric can ever replace a strong foundation of trust. Trust is what really sustains R&D organizations over the long timelines needed to realize the true value of innovation, and it can’t be achieved by crunching numbers. Rather, it depends on open communication, rigorous governance, and critical debate of fundamental assumptions.

Two of Newry’s clients accomplish these objectives through differing means. One takes a top-down approach, with a team of functional experts and senior executives who review a short list of the organization’s top innovation programs on a near-continuous basis to assess commercial viability, identify potential pitfalls, and make decisions about resource allocation. This intensely time-consuming activity is considered worthwhile because it fosters complete transparency and a culture of productive conflict. The other achieves similar ends with a more bottom-up approach, letting R&D leaders and team members decide which projects they want to dedicate their time to. While this method of “voting with feet” may look completely different from the command-and-control style of the first example, it nurtures a similar culture of truth-telling and active engagement with innovation processes.

Metrics may always be a necessary element – a necessary evil, perhaps? – of monitoring research activities, assessing progress over time, and communicating performance. With a certain amount of deep analysis and creative reframing, they can even help an R&D organization shine. But innovation leaders should be wary of relying on them as a foundation for success. Metrics may help to flesh out an annual report, but nothing beats building strong relationships with influential allies, demonstrating reliably good judgment, and fostering a culture of openness, honesty, and realism about what you’re working on and what you can achieve.

David Wylie

Over the past several years, David Wylie has advised clients on growth and development strategies and led dozens of engagements focused on technical opportunity identification and assessment. Leveraging data science techniques and deep primary research, he has surfaced thousands of potential applications for new and existing technologies and vetted hundreds, many of which have already generated or will soon generate significant revenue for our clients. David also has extensive experience in financial analysis and product commercialization strategy.
