Data Hurts: A Better Way to Measure, Analyze and Evaluate PR

At the start of my career, I assumed the primary motivation for PR research was to inform continuous improvement. It seemed natural that PR pros focused on what worked, what didn’t and what should be done about it.

As I met prospective clients, I began with research applications for objectives-setting, strategy development and evaluation. But there was always a pivot after about 10 minutes: ‘That’s all great,’ they’d say, ‘but how can you help me prove the value of PR?’

Shifting Motivations

Interest in PR measurement, research and evaluation has never been greater, for two main reasons: low-cost SaaS platforms put media analysis within reach of even the smallest organizations, and C-suites demand that PR be measured, just like every other part of the enterprise.

The underlying motivation, however, is different: PR spins data. Even though most communicators would declare spin is their least favorite ‘S word,’ there’s a lot of it happening.

In an attempt to ‘prove the value of PR,’ reports focus on selective data and elements that foster good feelings. Many reports–even in awards competitions that claim to represent the best-of-the-best–emphasize success rather than learning. Clip books and sizzle reels, ad value equivalency and ‘big-number’ PR occupy an inordinate share of reporting.

As PR research pundit Allyson Hugley says, “Data hurts.” Or at least it has the potential to cause injury. And yet, many PR pros promote win-lose thinking.

When we choose to limit analysis and reporting in favor of clip counts, audience figures and other ‘scorecard’ metrics, we invite a scorecard mentality.

Unfortunately, clip counting and media coverage ledgers are no-win games: everyone misses clips. And if you generated fewer clips or a lower audience figure than last month, you fail. Given the shrinking number of traditional media outlets and journalists, it is increasingly unlikely that you will generate bigger numbers than last year.

A Broader Framework

While scorecard metrics provide motivation to improve, they limit the number of ways you can win. In the case of media measurement, analysis and evaluation, a broader framework includes the following data streams:

  • Quantitative data: Clip counts and audience figures are important indicators, but not the only ones. Ad values are still a bad measure, not just because they relate to advertising more than PR, but also because ad rates are down. Beyond counting, track which media outlets and journalists are most and least receptive to running your stories, and prioritize them for greater efficiency.
  • Qualitative data: Message tone and sentiment help provide context for your reporting. While turmoil in the media business will diminish volume over time, quality of coverage will remain an important staple.
  • Comparative data: Executives love to beat benchmarks. Choose competitors and aspirational peers your executives care about most and compare your results. In addition, communicate how you’re performing against your past performance and established measurable objectives. Hypothetically, if you achieve a 110 against an objective of 100, versus last year’s performance of 85, and your competitors generated a 90, you win. Allow for a variety of quantitative and qualitative measures to give yourself more opportunities for success, but also to provide context on how you’re performing and what you should do to improve performance.
  • Attributive data: More and more PR people now attribute behaviors to media coverage using either surveys or attribution technology. But be prepared: of the 2 million people who subscribed to The New York Times the day your story ran, perhaps only 2,500 clicked on it. Attribution analysis upsets big-number PR. But a combination of the two provides context, aids understanding and offers another way to win…one that executives value most.
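The comparative logic above can be sketched as a quick calculation. This is a minimal illustration using the hypothetical figures from the example (the function name and structure are my own, not from any PR measurement platform):

```python
# Hypothetical comparative scoring sketch: a result "wins" on a benchmark
# when it meets or beats it. Figures mirror the example above: a score of
# 110 against an objective of 100, last year's 85 and a competitor's 90.

def compare_result(result, objective, last_year, competitor):
    """Return which benchmarks the result beats, as a dict of booleans."""
    return {
        "vs_objective": result >= objective,
        "vs_last_year": result >= last_year,
        "vs_competitor": result >= competitor,
    }

outcome = compare_result(result=110, objective=100, last_year=85, competitor=90)
print(outcome)  # beats all three benchmarks
```

Reporting against several benchmarks at once, rather than a single clip count, is what creates the multiple "ways to win" described above.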

Always try to deliver analysis and reporting while there’s time to react. Waiting for mid-year or year-end to report reduces your ability to adjust, since refinement can take time.


For PR to achieve full recognition and professional standing, we must accept that research is not so much ‘win-lose’ as it is ‘win-learn.’

Measure and be accountable for what you measure–not just for the sake of communication and PR, but also to fit in with the rest of the organization. No one likes bad news, but a win-learn approach fosters greater alignment, less fear and better business results.

Learning is difficult and failure hurts; perhaps that’s why the most enduring lessons are those we learn the hard way. To help organizations adapt, you may need to continue providing scorecards–they aren’t necessarily bad–but you can simultaneously introduce more contextual win-learn measures. The two approaches combine to reinforce your ability to take corrective action when necessary and to emphasize success where you can. What’s more, win-learn helps elevate your status from clip counter to trusted advisor.


Mark Weiner authored the book “PR Technology, Data and Insights.”