5 Lessons for Effectively Critiquing a News Media Content Analysis Program

A great deal of PR measurement focuses on describing media coverage, as though the most important metrics ever to exist were tone of coverage, prominence and reach. There are a lot of options for media measurement suppliers, and the costs can soar. Then there’s the question of what we’re supposed to do with our metrics, a hard question for people who, by and large, went into this line of work to avoid math.

But there are lessons to be learned—five, to be exact, which were culled from a hard look at one company’s measurement methods, described in an Institute for Public Relations research paper titled, “Measuring ‘Company A’: A Case Study and Critique of a News Media Content Analysis Program,” which is available for download at www.instituteforpr.org.

▶ Lesson One: Media coverage has to link to some kind of appropriate business objective—for example, evaluating and mitigating risks to organizational reputation. Tone, prominence and potential audience proved useful only when evaluated against overall brand research, reputation risks and direct mail response.

Connecting the dots for leadership is critical. This is the universe the C-suite inhabits. The organization analyzed in the IPR paper used the media metrics to facilitate strategy and formulate forward-looking plans, in addition to evaluating actions taken previously.

▶ Lesson Two: The language that describes media is not familiar to many business leaders. The interpretation of media coverage can’t be haphazard or oblique. We need to be on guard for jargon that only we understand. Too many media research suppliers use proprietary terms that make the coverage hard to explain to others, and harder still to compare one supplier against another.

For example, our marketing cousins often talk about “impressions,” and many PR professionals do the same, or paraphrase the term as “opportunities to see,” which isn’t the same as “audience.” Those terms also don’t answer the question of a media program’s effectiveness; they just purport to describe the potential audience. Our desire to show nice, big round numbers is frequently at odds with our vital need to provide correct context and analysis of news media stories.

One company showed more than 4 billion potential viewers/readers in the course of 2008, a nice, big number. But 80% of that coverage was negative, and most of it came from a handful of national news outlets. It may make the most sense to compose a short list of target media and evaluate the clippings only from that list.
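To make the arithmetic concrete, here is a minimal Python sketch of a target-list filter; the clip records and outlet names are invented to roughly echo the numbers above:

    # Invented clip records that roughly echo the example above:
    # (outlet, potential audience, tone)
    clips = [
        ("National Daily",   1_800_000_000, "negative"),
        ("National Wire",    1_400_000_000, "negative"),
        ("Business Journal",   760_000_000, "neutral"),
        ("Trade Weekly",         2_000_000, "positive"),
        ("Regional Post",        5_000_000, "neutral"),
    ]

    target_outlets = {"Trade Weekly", "Regional Post"}  # the assumed short list

    total = sum(aud for _, aud, _ in clips)
    negative = sum(aud for _, aud, tone in clips if tone == "negative")
    print(f"All coverage: {total:,} impressions, {negative / total:.0%} negative")

    targeted = [(o, a, t) for o, a, t in clips if o in target_outlets]
    print(f"Target list: {sum(a for _, a, _ in targeted):,} impressions, "
          f"{len(targeted)} clips")

The headline number shrinks dramatically, but what remains is the coverage you actually chose to care about.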

For the organization examined in the IPR research paper, the correlations to brand awareness, attitude and disposition were much stronger among some geographic publication lists than others.
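As a rough sketch of the kind of comparison involved (the monthly figures below are made up, and statistics.correlation requires Python 3.10 or later):

    from statistics import correlation  # Pearson's r; Python 3.10+

    # Hypothetical monthly net-tone scores for two publication lists,
    # alongside brand-awareness survey results for the same months.
    tone_national = [0.2, 0.1, 0.3, 0.2, 0.4, 0.3]
    tone_regional = [0.1, 0.3, 0.5, 0.4, 0.6, 0.7]
    awareness = [41, 44, 48, 47, 52, 55]  # % aware, from survey data

    print(f"national list vs. awareness: r = {correlation(tone_national, awareness):.2f}")
    print(f"regional list vs. awareness: r = {correlation(tone_regional, awareness):.2f}")

Even a strong r here signals only association, a caveat Lesson Four returns to.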

▶ Lesson Three: Whether automated or not, coding articles can be a bias-heavy exercise. All systems require intervention, the scale of which will vary. What you gain in speed with automated systems, you lose in tone accuracy. If you use people to code, it’s a subjective process that can take hours, days or weeks, depending on what you are willing to pay.

Meanwhile, you have to make choices about exactly how you want the material interpreted, and that means lists of terms, people’s names and other cues to feed into the system to help it make better choices. That’s a lot of work, and you’ll be second-guessed.
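A bare-bones Python sketch shows what term-list coding boils down to; every term in these lists is a hypothetical choice that someone will second-guess:

    # Hypothetical term lists an analyst might feed a coding system.
    POSITIVE = {"award", "growth", "innovative", "praised"}
    NEGATIVE = {"lawsuit", "recall", "layoffs", "criticized"}

    def code_tone(article_text: str) -> str:
        """Crude keyword tally; human coders and NLP models do far better."""
        words = set(article_text.lower().split())
        score = len(words & POSITIVE) - len(words & NEGATIVE)
        if score > 0:
            return "positive"
        if score < 0:
            return "negative"
        return "neutral"

    print(code_tone("Company A praised for innovative growth"))      # positive
    print(code_tone("Recall announced after lawsuit over layoffs"))  # negative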

Most systems don’t differentiate the relative value of a clip to the organization—except by making the assumption that more circulation (more eyeballs) is better than less (or fewer eyeballs), which may or may not be true for your organization. Assumptions are built into every system, automated or not. Dealing with that element of bias is important.
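One way to deal with it is to make the weighting assumption explicit rather than leave it buried in the tool. In this sketch, the per-outlet weights are pure invention for illustration:

    # Invented per-outlet weights expressing value to the organization,
    # instead of silently assuming bigger circulation is always better.
    outlet_weight = {"Trade Weekly": 3.0, "Regional Post": 2.0, "National Daily": 1.0}

    def clip_score(outlet: str, circulation: int, tone: str) -> float:
        """Weighted clip score; the weighting scheme itself is the assumption."""
        tone_value = {"positive": 1, "neutral": 0, "negative": -1}[tone]
        return outlet_weight.get(outlet, 1.0) * tone_value * circulation

    print(clip_score("Trade Weekly", 2_000_000, "positive"))        # 6000000.0
    print(clip_score("National Daily", 1_800_000_000, "negative"))  # -1800000000.0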

▶ Lesson Four: You cannot account for every factor. There will be leaders who applaud your effort to become more data-driven and offer more robust metrics. And there will be those for whom nothing is quantitative enough. Correlation is not causation, and unless you’re doing a comprehensive brand and market research audit, you might not even know what you’re leaving out of the set of behavioral influences.

As long as you are clear in your intentions, however, that’s not a fatal flaw. You’re looking to take positive steps forward, not vault from the earth to the measurement moon. Understanding the possible impact of your coverage is just such a step.

▶ Lesson Five: Social media is up and coming, but no one really knows what business impact it has, broadly. Your business needs should determine your measurement strategy in the universe of blogs, wikis, Twitter and all. At the very least, however, you should monitor what is said about you in social media.

There are plenty of low-cost/no-cost methods to give you a window into that world. Get a Twitter account. Search for your company name on blogsearch.google.com. But if you’re a big brand, you might need a solution to sift the beach sand of social media; the caveats of mainstream media measurement hold true in the Web 2.0 world. You still need to think critically about your objectives and choose the solutions that match your business.
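As one illustration of the low-cost end, a minimal monitoring loop might poll a search-backed RSS feed; the feed address below is a placeholder, and feedparser is a third-party Python package:

    import feedparser  # third-party: pip install feedparser

    BRAND = "Company A"
    FEED_URL = "https://example.com/search.rss?q=company+a"  # placeholder feed

    feed = feedparser.parse(FEED_URL)
    for entry in feed.entries:
        text = f"{entry.get('title', '')} {entry.get('summary', '')}"
        if BRAND.lower() in text.lower():
            print(entry.get("title"), "->", entry.get("link"))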

Getting your feet wet in measurement doesn’t have to mean a cold plunge to the bottom of the pool on your first step. Scared of math? It’s not a math test—it’s a strategic test, and one that all PR and communications professionals need to pass to keep their seats at the table, as well as to earn them in the first place. PRN

CONTACT:

Sean D. Williams is the owner of Communication AMMO, Inc. He is also a member of the Institute for Public Relations Commission on PR Measurement & Evaluation, where he serves on the 2009 Jack Felton Golden Ruler Award Committee. Contact Williams via e-mail at [email protected]. You can also follow him on Twitter at @CommAMMO.