Thursday, April 28, 2011

Advertising 101 for Race Fans, Lesson 3: Nielsen Ratings

A lot of discussion has taken place recently concerning the Nielsen ratings that the league has garnered in recent races and, more importantly, in recent years.  The news generally has not been good, and many passionate fans of the league, though novices in the world of advertising and media measurement, have taken to thrashing the ratings, stating that “they are a flawed methodology and wrong” or “they are no longer pertinent or important anymore.”
I thought I would take some space to discuss the ratings and how they are constructed.  Nielsen is the primary supplier of television ratings in the US.  There are other providers in other countries, such as TNS in Europe, but to this day Nielsen is the sole significant provider here.  In radio, Arbitron is the primary provider of listener information.  On the web, Comscore and Nielsen’s NetRatings provide browsing consumption data.  These three sources of data are examples of what is referred to as syndicated research.  Syndicated research is where the providing company collects a single data set for the purpose of selling it to many clients, which is different from primary research, where a specific research project is created for the specific needs of a single client (for the record, I work in primary research).
Nielsen provides two numbers: the rating, which is the percentage of all television households tuned to a show when it is on, and the share, which is the percentage of televisions actually in use that were tuned to the show.  The latter is a larger number than the former.  The rating is used to compute an audience size and measure how many people were reached by the ads running during that show; this number is all about the ads.  The share is used by networks to determine who is “winning” a particular time slot, since the number of TVs actually on changes from time slot to time slot.  In theory, take the rating, divide it by 100, multiply it by the number of households, then by the average number of members per household, and you get the number of people watching the show.
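If you want to see that arithmetic in one place, here is a quick back-of-the-envelope sketch in Python.  The household count, household size and rating below are numbers I made up for illustration, not official Nielsen universe estimates.

```python
# Back-of-the-envelope sketch: turning a rating into an estimated audience.
# All figures below are illustrative assumptions, not official Nielsen numbers.

us_tv_households = 115_000_000    # assumed number of US TV households
avg_people_per_household = 2.5    # assumed average viewers per household
rating = 1.2                      # hypothetical rating for a race broadcast

households_watching = (rating / 100) * us_tv_households
people_watching = households_watching * avg_people_per_household

print(f"Estimated households watching: {households_watching:,.0f}")
print(f"Estimated people watching:     {people_watching:,.0f}")
```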
Originally, data collection for the Nielsen ratings consisted of 1,000 randomly sampled households from across the US, selected to be representative of all households in the US (compared and balanced back to the census), filling out weekly paper “viewing” diaries.  The 1,000 study participants were continually recruited and replaced. 
As the television market fragmented from the four channels on the air to the hundreds of cable channels we have today, all vying for the same general number of eyeballs, the ratings numbers for the shows being measured became smaller and smaller.  As a result, the methodology had to change as well.  Now the Nielsen ratings are collected with a static sample of 25,000 participants.  The additional respondents help reliably measure shows with the smaller audiences observed since fragmentation.  In addition to the increased sample size, data collection has evolved from pencil and paper to an electronic box that sits in the homes of the families that participate in the study.  The box has a coax cable to the TV on one side and a telephone line on the other that transmits data directly to Nielsen.
The first criticism of the Nielsen ratings that casual observers have is the same one people have towards any sort of information estimated by sampling techniques:  “It’s not measuring everyone, so how can it be right?”  Unfortunately, basic education in probability and statistics is not seen as being as important to a math and science education as physics and chemistry, which is too bad.  The Central Limit Theorem, the central tenet upon which nearly all of statistics is based, is as vital as the Law of Gravity is to the physical sciences.  Unfortunately, it is not as likely to be taken on faith by science novices as the Law of Gravity, since it is not as intuitive nor as obvious in everyday life.  But it is every bit as true.
An intuitive example of how it works:  suppose the average height of men in the country was 70” (5’ 10”), and only (say) 10% of men are 6’ 5” or over.  Now suppose you measured a sample of 10 men; how likely is it that you would find an average height of 6’ 5” or higher?  For the average to come out that high, essentially every man in the sample would need to be that tall.  If the probability of finding a single man that tall is .1, then the probability of finding two is .1 x .1 = .01 (independent probabilities are multiplicative).  Further, the probability that all 10 men in a randomly drawn sample are that tall is .1^10 = a really small number. 
Let’s take this to a less extreme example:  suppose there is a .48 chance that a man is more than 5’ 11” tall.  What is the probability that a sample of 400 men will average 5’ 11” or higher, given the population truth that men average 5’ 10”?  Following the logic above (roughly, every man in the sample coming in that tall), the probability is on the order of .48^400 = again a really small number.  The point is, drawing a single observation far removed from the mean height or rating might be easy, but drawing a LARGE group whose average is far removed from the true census statistic is very unlikely unless there is a flaw in how you are drawing your sample.  Which means if you are measuring the average height of men, don’t hang out in the locker room of an NBA team. 
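If the algebra feels too hand-wavy, a quick simulation makes the same point.  The population mean and standard deviation below are assumptions I picked for illustration, not census figures.

```python
# Quick simulation of the sampling argument above.
# Assumed population: men's heights ~ Normal(mean 70", sd 3") -- illustrative numbers,
# not census data.
import random

POP_MEAN, POP_SD = 70.0, 3.0
SAMPLE_SIZE = 400
TRIALS = 10_000

extreme_samples = 0
for _ in range(TRIALS):
    sample = [random.gauss(POP_MEAN, POP_SD) for _ in range(SAMPLE_SIZE)]
    if sum(sample) / SAMPLE_SIZE >= 71.0:   # sample averaging 5'11" or more
        extreme_samples += 1

print(f"Samples of {SAMPLE_SIZE} that averaged 5'11\" or more: {extreme_samples} of {TRIALS:,}")
```

Run it and you will essentially never see a sample of 400 whose average lands a full inch above the population truth, which is exactly why a properly drawn sample can stand in for the whole country.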
So if Nielsen is collecting data from 25,000 households, they are pretty well covered, not only for measuring the total audience but also for specific regions, demos and shows.  Their 25k is balanced to census statistics across several demographics and within specific census regions.  That is as much due diligence as could be expected by industry standards.
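For a rough sense of what a 25,000-household panel buys you, the standard textbook margin-of-error formula for a sample proportion gives numbers like these (the ratings in the loop are just examples I chose):

```python
# Rough margin of error for a rating measured from a 25,000-household panel,
# using the standard formula for a sample proportion: 1.96 * sqrt(p * (1 - p) / n).
import math

n = 25_000
for rating in (1.0, 5.0, 10.0):            # example ratings, in percentage points
    p = rating / 100
    moe = 1.96 * math.sqrt(p * (1 - p) / n)
    print(f"Rating of {rating:4.1f} -> roughly +/- {moe * 100:.2f} points")
```

In other words, even a small rating like a 1.0 is pinned down to within roughly a tenth of a point at that sample size.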
The next major gripe race fans seem to have about the ratings deals with the recording of races by viewers for future viewing.  Fans say that so many people DVR the race that the real number of people who watch it is higher than the ratings indicate, because the ratings don’t include these people.  The first thing I would point out here is that the recording of races is not new.  There is new technology being used now that wasn’t available 10 years ago, but even 20 years ago VCRs were commonplace in households and the recording of broadcasts was nearly as frequent as today.  So if the ratings trend for IndyCar is down, recording of events can’t be the culprit. 
But the philosophical question remains: should delayed, recorded viewing of a show be counted in the ratings number?  To answer this we have to go back to who ultimately pays the bills…the advertisers.  Nielsen ratings exist so that media (TV) companies can justify their advertising rates to the advertisers who buy the ad space.  So follow the money, and at the end of the trail you find advertisers and sponsors. 
Advertisers have always been clear here.  NO! Recorded viewing does not count.  The problem with recorded views is that it is generally assumed that people who watch on a delay fast-forward past the commercials that have been paid for by the advertisers.  Therefore, that portion of the viewing audience delivered by the network provides no ROI for the advertiser footing the bill.  At the end of the day, nobody, and most importantly not the advertisers, really cares who watches the shows or games themselves, only that the commercials are being viewed.  Your retort?  What about people who get up and take a pee during a commercial?  Well, if you come up with a methodology that gets around that, the company I work for would love to talk to you, but no one has come up with one yet.
If you need more affirmation that the Nielsen ratings are an accurate measure of the size of the television audience being exposed to ads in an IndyCar broadcast, consider this.  Essentially, Nielsen ratings measure three audiences:  those who watch over the air, those who watch on cable, and those who watch via satellite.  For the over-the-air audience, Nielsen ratings are the only source of viewership data, but for cable and satellite there is another way to measure viewership.  If you have a cable box, satellite receiver or TiVo/DVR with a telephone line plugged into it, then you are not only getting a feed from the distribution company; what you watch is also going back upstream to the satellite/cable company and being tracked.  They know what you are watching, and they are creating massive databases to store it and mine it forever (creepy, huh?).  This data collected by cable and satellite companies is referred to as “set top” data and is incredibly accurate…Guess what: when the Nielsen panel is screened to exclude the over-the-air viewers, the ratings trend very well with the set top data.
Many will say that soon everyone will stream TV and the Nielsen ratings will go away.  I won’t debate that the streaming audience is growing, but it will be quite a while before a large enough cultural shift takes place to remove the over-the-air and cable audiences.  Part of it is technology.  Not all of the country has cable access, so what makes us think that those same portions of the country will have ample internet bandwidth to stream anytime soon?  Even for those with access to a wireless internet provider, there are usually “fair access” policies that limit how much data is delivered to a single customer during a given day.  A three hour race streaming on-line is a BUNCH of data and is sure to hit the fair access policy limits. 
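To put “a BUNCH of data” in rough numbers, assume a stream runs at something like 2 Mbps (a guess at typical stream quality on my part, not a measured figure); a three hour race then works out to a few gigabytes:

```python
# Back-of-the-envelope data usage for streaming a three-hour race.
# The bitrate is an assumption about stream quality -- adjust to taste.
bitrate_mbps = 2.0     # assumed stream bitrate, in megabits per second
hours = 3

total_megabits = bitrate_mbps * hours * 3600
total_gigabytes = total_megabits / 8 / 1000    # megabits -> megabytes -> gigabytes

print(f"Roughly {total_gigabytes:.1f} GB for a {hours}-hour stream at {bitrate_mbps:.1f} Mbps")
```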
Ultimately, on-line streams will simply be another distribution channel for people to select from, joining over the air, cable and satellite.  It is not realistic to expect that the media companies will share ratings info with each other, so a third party measurement system will be required for the future streaming audience as well.  Remember who I said provides on-line audience measurement?  One company you have probably forgotten about by now, and one familiar name: Nielsen is prepared for that day as well. 
I hope this has helped to demystify the Nielsen ratings a bit.  I could probably go on, but it’s getting late.  Good Day Sirs.

2 comments:

  1. Amazing... Thanks for all the information; had no idea it was all so accurate.

  2. Thanks - there does seem to be some concern about one thing I mention. I am tracking down a reliable source to confirm or refute...
