This article was first published in The Measurement Standard, a publication of KDPaine & Partners, a company that delivers custom research to measure brand image, public relationships, and engagement.

For years, those of us who live on the Seacoast of New Hampshire had been hearing warnings about the dire state of the Memorial Bridge over the Piscataqua River between Portsmouth, NH, and Kittery, Maine. Last year, after lengthy delays and an exhausting exploration of replacement options, the 90-year-old local icon was finally torn down. Thousands mourned its passing. Since then, even more thousands have had to travel the long and frustrating way around to get between the two close-knit communities.

When, last week, the first section of the new bridge was floated into place, I was overcome with a feeling of wonder and awe. The new bridge is actually becoming a reality! Progress can happen! People and things can and do change!

I feel very much the same way about standards for measurement of public relations.

Just like the Memorial Bridge, the practice of PR fulfilled its basic charter, bridging the gap between organizations and their publics, the media, and their subjects. And, similar to the occasional inspections that would discover flaws and failings in the bridge, PR industry gadflies like Jack Felton, Walter Lindenmann, and me would call for introspection, asking “How do we know this stuff is really working? Where are the real metrics that show management that PR is actually delivering on its promises?”

The IPR Measurement Commission

From those gadflies came The Institute for Public Relations Measurement Commission, an organization that has been writing standards and guidelines for nearly two decades. And, like the early warnings from inspectors of the Memorial Bridge, those guidelines and standards were generally ignored.

For the world of communications, the equivalent of the bridge’s mandatory closing was the advent of social media and the adoption of The Barcelona Principles.

Social media thrust so much opportunity into the arms of marketers, advertisers, and social media “gurus” that no one knew quite what to do. So they looked back to their old ways of decision making and sought the social media equivalent of Advertising Value Equivalency (AVEs), impressions, and Gross Rating Points (GRPs). But these were just patches and temporary fixes. Under closer scrutiny, especially by CFOs and product managers who viewed all this new stuff as mostly hype, the flaws quickly became visible.

Business marketers now realize that, for them, the value of a Facebook like is very different from what it is for Coca-Cola. Brand managers realize that the purchase of 100,000 followers on Twitter doesn’t mean anyone buys anything or actually follows anything they say. Producers of high-end corporate video realize that no matter how much effort they put into a production, it is never going to be as popular as a shaky video taken with a smartphone.

The Barcelona Principles

Hence the call for standards, leading to hundreds of representatives of numerous associations, vendor firms, and clients all agreeing on seven principles of PR measurement at a 2010 industry event in Barcelona. Now that some of the largest and most respected brands in the world have adopted them, they are established as the gold standard for measurement. If nothing else, companies are at least buying into principles 5 and 6, rejecting advertising value equivalency and agreeing that social media can and should be measured.

As a result of this push for better measurement, the Coalition for PR Research Standards was formed, bringing together the Institute for Public Relations (IPR), Council of PR Firms, the Public Relations Society of America, the International Association for Measurement and Evaluation of Communication (AMEC), and the Global Alliance for PR and Communication Management.

Progress So Far, Part 1:
The Coalition Sets Standards for Measuring Traditional Public Relations in Two Papers

The Coalition has released two standards-setting papers for the PR industry. The first, “Proposed Interim Standards for Metrics in Traditional Media Analysis,” by Marianne Eisenmann (with assistance from me and several others), offers recommendations for how to calculate some of the most commonly debated data points in traditional media analysis. It covers standard definitions and approaches for:

  • Awareness: Measured by surveys, not by counting impressions.
  • Impressions: Also known as Opportunities To See (OTS), impressions are based on total audited circulation, as opposed to reach, which is the percentage of your target audience that is actually reached.
  • Tone and sentiment: Measures how a target audience is likely to feel about the individual, company, product, or topic after reading, viewing, or listening to the item. See the sidebar below.
  • Items to be included in analysis: Defines which items count (press releases do not) and what constitutes a mention (a single item can contain multiple mentions). See the sidebar below.

The paper proposes standard definitions for assessing the quality of media coverage, including visuals, placement, prominence, message penetration, and spokesperson effectiveness. And it reiterates that AVEs should not be used as a measure of media value.


Sidebar: Standard #1 — How to Calculate Impressions

  • Reach: The scope or range of distribution that a given communication product has in a targeted audience group; in broadcasting, the net unduplicated (also called “deduplicated”) radio or TV audience for programs or commercials, as measured for a specific time period.
  • Circulation: The number of copies of a publication as distributed.
  • Impressions: The number of people who might have had the opportunity to be exposed to a story that has appeared in the media (also known as opportunity to see, or OTS). See the sketch after this list.
  • Impressions do not equal awareness: Awareness must be measured using other research tools. Impressions are indicative only of the opportunity to see; consider OTS as alternative nomenclature that better clarifies what impressions really mean: the potential to see or read an item.
  • Multipliers: Should never be used.
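
To make the distinction between gross impressions and reach concrete, here is a minimal sketch in Python. The outlets, audited audience figures, and audience counts are entirely hypothetical and are not taken from the standard itself.

```python
# A minimal sketch with hypothetical outlets and numbers (not data from the standard).
# Impressions (OTS) are the gross sum of each item's audited circulation or audience;
# reach is the deduplicated share of the target audience actually exposed.

items = [
    {"outlet": "Daily Paper", "audited_audience": 250_000},
    {"outlet": "Trade Weekly", "audited_audience": 40_000},
    {"outlet": "Regional TV News", "audited_audience": 120_000},
]

# Impressions / OTS: the gross total of audited circulation or audience across all items.
impressions = sum(item["audited_audience"] for item in items)

# Reach: the deduplicated percentage of the target audience actually exposed.
# It requires unique-person data (e.g., from a survey or panel), not a gross count.
target_audience_size = 500_000
unique_people_reached = 90_000  # hypothetical, deduplicated across outlets

reach_pct = 100 * unique_people_reached / target_audience_size

print(f"Impressions (OTS): {impressions:,}")           # 410,000
print(f"Reach: {reach_pct:.1f}% of target audience")   # 18.0%
```

The key point the sketch illustrates is that reach depends on deduplicated audience data from a survey or panel; it cannot be derived by simply adding up circulation figures.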


Sidebar: Standard #2 — Items for Analysis: What Counts as a Media “Hit”? 

A story counts only if it has passed through some form of “editorial filter,” i.e., a person has made a decision to run or not run the story. An item is:

  • An article in print media (e.g., The New York Times).
  • Newswire stories from organizations such as Dow Jones, Reuters, and AP. If the wire story is updated multiple times in one day, only count the story once in a 24-hour period using the latest, most updated version.
  • An article in the online version of print media (e.g., nytimes.com).
  • An article in an online publication (e.g., huffingtonpost.com).
  • A broadcast segment (TV or radio). In the case of a broadcast segment that repeats during the day, each segment should be counted as an item because audiences change during the day. For example, a story broadcast at 1:00 PM, 2:00 PM, and 6:00 PM on cable TV news counts as three items.
  • A news item on the website of a broadcast channel or station.
  • A blog post (e.g., the WSJ Health blog, GigaOm.com, etc.).
  • An analyst report.
  • A microblog post, e.g., a Tweet.
  • A post to a forum or discussion group.
  • A video segment on YouTube or other video sharing sites.
  • A photo on a photo sharing site.
  • A comment on a blog post, online news story, or other online item.
  • Reprints or syndication: Each appearance counts as a hit because they appear in unique, individual media titles with different readerships.
  • A company-bylined feature, which counts as an article.

What does not count?

  • Press release pickups generated through “controlled vehicles,” such as posting a story on PR Newswire or Business Wire
  • Pay per post items
  • Paid bloggers
  • Public broadcast underwriting


Sidebar: Standard #3 — How to Calculate Tone or Sentiment

  • Define for what or for whom you want to determine sentiment. You may be looking to understand tone regarding an industry or sector, or sentiment around a specific product or service, individual, or organization. A single article could mention all of these, so it is necessary to define specifically which element(s) you are targeting for sentiment.
  • Define from whose perspective you are judging the sentiment. It could be the point of view of the general public, or of a specific stakeholder group such as investors, physicians, teachers, parents, etc.
  • Whatever process is defined and applied, it must be used consistently throughout any analysis.

Sentiment coding options:

  • Positive: An item leaves the reader more likely to support, recommend, and/or work with or do business with the brand.
  • Neutral: An item contains no sentiment at all; it simply reports the facts. Even if the news is negative, an article can be coded neutral if it reports the facts without editorial commentary. In an unfavorable environment, neutral may be the best you can achieve. Base your coding on whether the item makes people more or less likely to do business with your organization.
  • Negative: An item leaves the reader less likely to support, recommend, and/or work with or do business with the brand.
  • Balanced: An item includes both positive and negative sentiment, so the resulting overall tone, and the reader’s perception, is balanced.


The second paper from The Coalition, “Ethical Standards and Guidelines for Public Relations Research and Measurement,” outlines what it means to practice PR in an ethical way.

The Coalition is currently soliciting comments on the proposed standards. Both papers are published and out for review, and your comments are welcome. You can view the standards and comment on them here.

Progress So Far, Part 2:
The Conclave Sets Social Media Measurement Standards

While the Coalition has been hard at work defining standards for the more traditional aspects of public relations, it has not ignored social media. A year after the Barcelona Principles were announced, a similar group in Lisbon recommitted itself to social media measurement standards. At the same time, various other groups, including the Society for New Communications Research (SNCR), the Digital Analytics Association (DAA), and the Word of Mouth Marketing Association (WOMMA), were also working on various aspects of standards for social media. Since I seemed to be in the midst of all of these conversations, I invited them all to a free-flowing discussion in my living room in Durham, NH, in the fall of 2011. I called it “The Conclave” because every other synonym for coalition was taken.

In the 16 months since that first meeting, we have met half a dozen times, in person and by phone, and the original Conclave attendees have expanded to include a far broader array of organizations and clients, including:

  • The Coalition members, including AMEC, CPRF, IPR, PRSA, and Global Alliance;
  • Other associations, including IABC, SNCR, DAA, WOMMA, ARF, FIBEP, CIPR, and PRCA;
  • The Media Rating Council, the industry body chartered by Congress to establish standards for ratings and reach, whose membership includes the AAAA, ANA, and IAB;
  • Clients, including Dell, General Motors, SAS, McDonald’s, Ford, Procter & Gamble, Southwest Airlines, and Thomson Reuters; and
  • Agencies, including Waggener Edstrom, Porter Novelli, Ketchum, Hill & Knowlton, and Edelman Berland.

The Conclave defined six areas of confusion that we felt were most pressing to standardize:

  1. Content
  2. Reach & Impressions
  3. Engagement
  4. Influence
  5. Tone & Advocacy
  6. Value

And one by one, we have been establishing standards for them:

1. Content

Content standards were first announced at the AMEC Measurement Summit in Dublin last summer. They have been posted and commented upon, and are now published and available here.

2. Reach and Impressions

The Digital Analytics Association (DAA) has taken the lead here and proposed a set of standards at last October’s Conclave gathering. The DAA is due back to the group with its revised standards any minute now.

The standards that have been discussed include:

  • All impression numbers are flawed for a variety of reasons, and the most important consideration is consistency, which is why we created the Transparency Table.
  • Multipliers should never be used. A divider is more appropriate, because it is likely that only about 10% of what is posted is actually seen (see the sketch after this list).
  • OTS must be specific to a particular channel. For instance, for Twitter, OTS is the number of first-line followers; for Facebook, it is the number of fans of a page.
  • Consistent definitions are required:
    • Item: Refers to an item of content. The term “item” replaces “clip,” “post,” and other unclear terminology.
    • Mention: Refers to a brand, organization, campaign, or entity that is being measured. A single item can contain multiple mentions.
    • OTS: Opportunities To See is the most accurate description of “gross impressions,” because any item posted is only potentially seen by all the fans, followers, subscribers, etc. It is likely that only about 10% of what is posted is actually seen.
    • Reach: The deduplicated percentage of your targeted audience that is actually reached by a specific item.
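
As a rough illustration of how these definitions fit together, here is a minimal sketch in Python. The channels, follower counts, and audience figures are hypothetical, and the 10% divider is the rule of thumb mentioned above, not a measured constant.

```python
# A minimal sketch with hypothetical channels and counts; the 10% divider is the
# rule of thumb discussed above, not a measured constant.

channels = {
    "Twitter": 50_000,    # OTS = first-line followers
    "Facebook": 20_000,   # OTS = fans of the page
}

for channel, ots in channels.items():
    estimated_views = ots * 0.10  # divider: roughly 10% of what is posted is actually seen
    print(f"{channel}: OTS = {ots:,}, estimated actual views ~ {estimated_views:,.0f}")

# Reach is the deduplicated percentage of the *target* audience, not a gross count.
target_audience = 100_000
unique_target_members_reached = 6_500  # hypothetical, from panel or survey data
print(f"Reach ~ {100 * unique_target_members_reached / target_audience:.1f}% of target audience")
```

The counts here are placeholders; the point of the draft standard is simply that OTS overstates actual viewing, and that reach must be reported against a defined target audience.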

3. Engagement

We tackled engagement last October and drafted the following standards. The comment period ends in January, so if you have thoughts, please let us know ASAP.

  • Engagement is an action that happens after reach, and implies an interaction.
  • Engagement can be desirable or undesirable.
  • Engagement occurs in response to content, when someone engages with you (not “about” you).
  • Any measure of engagement must be tied to the goals and objectives for an individual program.
  • Engagement occurs both off- and online, and both must be considered if you intend to integrate your metrics with other marketing or communications efforts.
  • Engagement includes such actions as likes, +1s, shares, votes, comments, links, retweets, Facebook’s “people talking about this,” etc.
  • Engagement actions should be counted by the number of interactions, the percentage of people engaged (by day, week, or month), and/or the percentage of engagement per post (see the sketch after this list).
  • Engagement manifests differently by channel, but is typically measurable based on the effort required for a particular action and how the action interacts with others.
  • Engagement should be measured both for an individual (how often someone engages with your site) and for the outcome of an action or post (how many shares, likes, etc.).
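
To show what those counting approaches look like in practice, here is a minimal sketch in Python with hypothetical posts, follower counts, and engager counts; the numbers and field names are illustrative only.

```python
# A minimal sketch with hypothetical posts and counts, illustrating the three
# counting approaches above: total interactions, percentage of people engaged,
# and engagement per post.

posts = [
    {"id": 1, "likes": 120, "shares": 30, "comments": 12},
    {"id": 2, "likes": 45, "shares": 5, "comments": 3},
]
followers = 20_000           # audience for the period (hypothetical)
unique_people_engaged = 150  # deduplicated engagers this month (hypothetical)

total_interactions = sum(p["likes"] + p["shares"] + p["comments"] for p in posts)
pct_people_engaged = 100 * unique_people_engaged / followers

print(f"Total interactions: {total_interactions}")              # 215
print(f"People engaged this month: {pct_people_engaged:.2f}%")  # 0.75%
for p in posts:
    rate = 100 * (p["likes"] + p["shares"] + p["comments"]) / followers
    print(f"Post {p['id']}: engagement {rate:.2f}% of followers")
```

Whichever denominator you choose (followers, fans, or a defined target audience), the draft standard’s point is to define it up front and report it consistently.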

4. Influence and Relevance

This area has generated perhaps the liveliest conversation (see article). A subcommittee is currently working on these standards, headed up by two leading authorities on the topic: Brad Fay, a member of WOMMA, author of The Face-to-Face Book, and co-founder of the Keller Fay Group; and Philip Sheldrake, author of The Business of Influence, representing CIPR.

We sent them off from The Conclave with the following suggestions.

  • Influence is something that takes place beyond engagement.
  • Influence happens when someone is persuaded to change a behavior or opinion that would otherwise not have changed. Influence happens both online and offline, and both should be considered.
  • Any influence score requires transparency.
  • You can measure what has influenced change to happen, but that is not in the scope of this effort. Survey research measures what people say they are influenced by. Data analysis can measure the impact of a campaign on a business outcome.
  • Influence cannot be expressed in a single score or algorithm.
  • An influencer can be an outlet or an individual.
  • Influence must be tied to a specific topic, brand or issue.
  • Any assessment of influence should include some combination of the following five elements (if an individual scores a zero on any one element, they don’t count; see the sketch after this list):
    1. Reach
    2. Engagement around individual
    3. Relevance to topic
    4. Frequency of posts around the topic
    5. Audience impact as measured by the ability to get the target audience to change behavior or opinion.
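
As a minimal sketch of the “a zero on any element disqualifies” rule, the snippet below uses a hypothetical 0–10 assessment for each element (the scale, names, and scores are my own illustration, not part of the draft standard). Consistent with the point that influence cannot be expressed in a single score, it reports the element-level values side by side rather than collapsing them into one number.

```python
# A minimal sketch of the "zero on any element disqualifies" rule, using a
# hypothetical 0-10 assessment per element. Element values are reported side
# by side, not combined into a single score.

ELEMENTS = ["reach", "engagement", "relevance", "frequency", "audience_impact"]

candidates = {
    "Outlet A": {"reach": 8, "engagement": 6, "relevance": 7, "frequency": 5, "audience_impact": 4},
    "Person B": {"reach": 9, "engagement": 0, "relevance": 8, "frequency": 7, "audience_impact": 6},
}

for name, scores in candidates.items():
    if any(scores[e] == 0 for e in ELEMENTS):
        print(f"{name}: excluded (scored 0 on at least one element)")
    else:
        profile = ", ".join(f"{e}={scores[e]}" for e in ELEMENTS)
        print(f"{name}: {profile}")
```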

5. Opinion and Advocacy

The subject of the next Conclave meeting is Opinion and Advocacy. So far we have concluded that:

  • Sentiment is over-rated and over-used.
  • Sentiment reliability varies by vendor and approach, so be transparent.
  • Opinions, recommendations, and other qualitative measures are typically more valuable than raw sentiment and are increasingly measurable:
    • Opinions (“it’s a good product”)
    • Recommendations (“try it” or “avoid it”)
    • Feeling/Emotions (“That product makes me feel happy”)
    • Intended action (“I’m going to buy that product tomorrow”)
  • Coding definitions, consistency, and transparency are critical.

6. Impact and Value

  • Impact and value will always be dependent on client objectives.
  • Outcomes need to be defined in advance; they will likely span multiple business goals, especially for social media, which crosses disciplines.
  • “ROI” should be strictly limited to measurable financial impact; “total value” can be used for a combination of financial and non-financial impact (see the sketch after this list).
  • Value can be calculated in positive returns (sales, reputation, etc.) or avoided negative returns (risk mitigated, costs avoided).
  • Key performance indicators and balanced scorecards are helpful to connect social media impact to business results/language.
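
To illustrate how narrowly “ROI” is being defined here, this minimal sketch in Python computes a strictly financial return on a hypothetical program spend; the figures are invented, and anything non-financial would be reported under “total value” instead.

```python
# A minimal sketch with hypothetical figures: "ROI" kept strictly financial,
# per the point above; non-financial benefits belong under "total value."

program_cost = 50_000.0          # hypothetical program spend
attributable_revenue = 80_000.0  # hypothetical revenue traced to the program

roi_pct = 100 * (attributable_revenue - program_cost) / program_cost
print(f"ROI: {roi_pct:.0f}%")    # 60%
```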

When I am asked, “What is the current state of standards?” I think back to our beloved Memorial Bridge. Progress has definitely been made. A new span is in place, another is on its way.

But it will be a while before all the parts are completed and assembled. And even when it is all tested and finally open, life won’t be the same. It will be a very new way to get from point A to point B, and as a result it will still take a long time for people to change their habits, reroute their commutes, and make it part of their daily routines.


Katie Delahaye Paine is Chairman of KDPaine & Partners (a Salience Insight company) and Chief Marketing Officer of News Group International. KDP&P delivers custom research to measure brand image, public relationships, and engagement. Katie Paine is a dynamic and experienced speaker on public relations and social media measurement.



2 thoughts on “The State of Measurement Standards January 2013: It’s a Bridge, it’s a Bridge!”

  1. I’m only now learning more about the Barcelona Principles, and I applaud this necessary effort to introduce rigor into a field previously evaluated on the power of personality. That is to say… relationships (or, for those that remember them, the contents of your Rolodex). Building on this, I suggest we look more closely at communications measurement in the context of behavior change. Specifically, to what degree did communications influence stakeholder behaviors? The value of a “mention,” the frequency of “hits,” and the “tone” of coverage tell us about the medium’s behavior and, frankly, it feels like a legacy of the infamous clip reports we generated (are generating) not too long ago. These indicators, however, do not address the target audience’s behavior. Though I acknowledge the emphasis on “impact,” I believe more can be done in this arena. How did the audience change behavior, and is that change going to be sustained or temporary? How did they interpret key messages, and does the variation in their interpretations correlate along demographic lines? The answers to those questions would tell us more about our audiences than we ever imagined. I look forward to expanding this dialogue. Thanks for tackling this issue.

  2. This is a solid post, and I appreciate the bridge metaphor, not least because it helps to keep a large and complex topic in perspective, saving the reader from becoming mired in the minutiae. Forgive me if this is something already included here (I have not yet clicked through to all the links), but it would be great to have a quick-reference graphic representation of the foundation principles and rules discussed above. This would help not only practitioners referring to it in their daily work, but also those communicating it to corporate leaders, clients, and others who need to get the message quick and clean.
