‘The Science Beneath the Art:’ Know Your Data Types

By Frank Ovaitt

David Letterman rewards audience members for knowing their cuts of meat. Likewise, public relations practitioners can reap the rewards of smarter use of research - and, subsequently, better measurement results - by knowing their data types.

There are two distinct classes of measurement observations - categorical and continuous - says Dr. Don W. Stacks, author of "Primer of Public Relations Research" (The Guilford Press, 2002), and each class can be further broken down into two types of data. Categorical observations are those that can be sorted into "buckets" and counted. The most basic type is the nominal observation, which identifies a difference (male or female, sedan or SUV) with no inference that one category is better than another. Ordinal measurements, on the other hand, involve categories that are inherently ordered (under $100, from $101 to $1,000, or $1,001 and higher).

Continuous observations are more complex and occur along a continuum. Interval measurement deals with the distances between observations (the distance between mileposts 1 and 2 is the same as between mileposts 21 and 22). Ratio measurement is more precise (1.26 miles) and adds the requirement of an absolute zero point to allow proportional comparison ($0.00 revenue is generally accepted to mean no revenue at all).

Categorical observations are simpler to make but more difficult to interpret (just what does small, medium or large mean?). Continuous observations, besides having more explanatory power, can also be reduced downward from ratio to interval to ordinal to nominal. But it's impossible to move in the other direction, from simpler to more complex data, so you need to decide in advance what types of observations will produce all of the information you require.
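The one-way reduction described above can be sketched in Python. The dollar amounts below are invented for illustration, and the category boundaries reuse the ordinal brackets from the earlier example - once the exact figures are bucketed, they cannot be recovered:

```python
def to_ordinal(amount):
    """Bucket a ratio-level dollar amount into ordered categories.

    Boundaries follow the article's ordinal brackets; amounts are
    assumed to be whole-dollar-style figures for simplicity.
    """
    if amount < 100:
        return "under $100"
    elif amount <= 1000:
        return "$101 to $1,000"
    else:
        return "$1,001 and higher"

# Ratio-level data: an absolute zero exists, so proportions are meaningful
# ($5,000 really is twice $2,500).
revenues = [42.50, 250.00, 999.99, 5000.00]

# Ordinal-level data: only the ordering of categories survives.
buckets = [to_ordinal(r) for r in revenues]
print(buckets)
```

Going the other way - recovering 999.99 from "$101 to $1,000" - is impossible, which is why the level of measurement must be chosen before data collection, not after.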

Regardless of the type of data collected, the ability to assess reliability and validity is a cornerstone of effective research usage. These two concepts are very different: Validity refers to whether a measure actually measures what you say it does, while reliability refers to whether you can measure the same thing comparably over time.

Consider your bathroom scale. If you step on it twice and it gives you two different readings, it's not very reliable and you're probably going to be skeptical of what it tells you tomorrow morning. But if the scale is reliable, you can readily detect weight changes from week to week - even if you can't be sure your scale is exactly accurate. For that kind of validity, you may need a precisely tested scale of the type found at your doctor's office.
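The bathroom-scale distinction can be made concrete with a short sketch. The readings and the 70 kg reference weight below are invented for illustration - the scale is consistent (reliable) but carries a fixed offset (not valid):

```python
from statistics import mean, stdev

true_weight = 70.0  # kilograms, as established on a calibrated clinic scale
readings = [72.1, 72.0, 72.2, 72.1, 72.0]  # five consecutive readings, same scale

spread = stdev(readings)               # small spread -> reliable
bias = mean(readings) - true_weight    # consistent offset -> not valid

print(f"spread: {spread:.2f} kg, bias: {bias:.2f} kg")
```

The scale's readings barely vary, so week-to-week changes would show up clearly - yet every reading is about 2 kg too heavy.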

All measurement involves some error. This may be instrument error (a well-used yardstick gets worn down at the edges) or application error (you grab a meter stick when you need a yardstick). The key to maximizing reliability is to minimize random error (as opposed to systematic or known error). Thus, reliability is usually reported on a statistical basis.
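One common statistical report of reliability is test-retest correlation: the same instrument applied twice to the same respondents should yield highly correlated scores. A minimal sketch, with invented attitude scores:

```python
from math import sqrt

def pearson(x, y):
    """Pearson correlation coefficient between two equal-length score lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical 1-5 attitude scores from the same respondents, two weeks apart.
week1 = [4, 5, 3, 4, 2, 5, 3]
week2 = [4, 5, 3, 5, 2, 4, 3]

r = pearson(week1, week2)
print(f"test-retest reliability: r = {r:.2f}")
```

A coefficient near 1.0 indicates that random error is small; a low coefficient warns that the instrument's readings are not repeatable, whatever they may be measuring.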

Validity comes in four types, each more rigorous than the last. Face validity occurs when you define the measurement based on your own credibility in a given area. Content validity occurs when you ask impartial experts to review your measure and confirm its applicability. Criterion-related validity means that your measure is related to other accepted measures or successfully predicts behavior. Construct validity is obtained through observation or statistical testing of a measure in actual use.

In a public relations context, if you are conducting an attitudinal survey, reliability may depend on factors such as audience selection, sample representativeness, and whether each question is understood the same way by every respondent. Validity, on the other hand, may hinge on demonstrating that you are measuring what you say you are. For example, if you claim that a positive attitude toward your company's community programs is an important indicator of purchase intent, how do you prove that?

Stacks concludes that a measure can be reliable without being valid, but it cannot be valid without also being reliable. Every competent research report should indicate how both reliability and validity were assessed.

Contact: Frank Ovaitt, [email protected]; Don W. Stacks, [email protected]