The Rating Game: Broadcasters Rely on Poll Numbers They Don’t Trust

From the late 1920s, when the first rival radio broadcasting networks were formed, until the present day, as network television competes for viewers with cable and satellite fare, broadcasters and advertisers have sought ways to quantify information about their audiences. The first ratings system, devised by Archibald Crossley in 1929 for the Association of National Advertisers, relied on random telephone surveys in which interviewers asked what programs and sponsors listeners remembered from the previous day. By the late 1930s, C. E. Hooper successfully challenged this method by asking respondents only about the shows they were listening to at the moment of the call. Hooper’s method dominated until the A. C. Nielsen Company began attaching Audimeters directly to television sets. The following Collier’s article from 1954 critiqued the four ratings methods in use at that time and discussed the adverse consequences of the industry’s reliance on ratings. The assessment by author Bill Davidson and the industry men he quoted betrayed a casual disparagement of ordinary American housewives, implicitly identified as the predominant audience for mass media consumption. Both the reliability of housewives as part-time interviewers and their ability as viewers to keep accurate diaries were deemed suspect in this piece.


Who Knows Who’s on Top? By Bill Davidson

. . . The whole industry is in an uproar over the ratings, which are commercial public-opinion polls that attempt to estimate, from a cross section of the population, just how many people are listening to each radio and TV program. The information is sold to sponsors, networks, individual stations, advertising agencies and others interested in the relative standings of the performers.

The results of the polls are expressed in percentages; thus, when Groucho Marx gets a 48.7 rating, he is presumably being watched on 48.7 per cent of the sets that can be reached by stations transmitting his show. For several seasons, the highest ratings have gone to the I Love Lucy program on CBS, with Lucille Ball and Desi Arnaz. Jack Webb’s NBC program, Dragnet, usually places second. After them, in varying order from week to week and from one rating service to another, come Groucho Marx and Milton Berle, both on NBC, Jackie Gleason and Ed Sullivan, on CBS, and others.

As Red Buttons discovered, however, the darling of one rating service may be the dud of another. There are a number of rating services engaged in the lucrative business of measuring the popularity of radio and TV shows. Six of the best-known are the A. C. Nielsen Company, Videodex, Inc., Trendex, Inc., the American Research Bureau, C. E. Hooper, Inc., and The Pulse, Inc.

No two of the services use identical methods of collecting information. The result is as if the major leagues were to put out widely varying sets of batting averages, all based on different criteria.

Statistics Make or Break a Program

Nevertheless, the rating services have wielded tremendous influence. As comedian Sid Caesar puts it, “We live or die by the ratings.” In short, the public is usually permitted to enjoy only those programs that safely run the poll takers’ gantlet. If your favorite program has a low rating it’s probably doomed, no matter how much you—and others who share your taste—may cherish it. Naturally, that isn’t the responsibility of the rating services; they just pass along statistics. It’s the advertising agencies, sponsors and networks which give the numbers undue emphasis.

Under the circumstances, however, the precision of the surveys is of prime importance—and until recently, the only challenge to their accuracy had come from performers, writers and others directly affected. . . .

Executives of the rating services are used to the attacks of performers and writers. They usually smile understandingly and quote the classic remark of comedian Abe Burrows: “A rating is a figure which tells you the size of your audience, and which is completely inaccurate if it is too low.” But they do not smile when they are attacked by executives and other influential figures in the broadcasting industry—and lately there has been a sharp increase in such high-level criticism.

The mounting complaints of the broadcasting executives pose the greatest challenge the rating services have ever had to face; the outcome may affect what you see and hear on TV and radio for some time to come.

Some months ago, the rating people were vigorously assailed by a distinguished committee of the National Association of Broadcasters for producing “what appears to be conflicting testimony” on the standing of shows. Last December, they were attacked by Sponsor, a trade publication for radio and TV advertisers, in a report headed What’s Wrong with the Rating Services? Among other criticisms, the Sponsor article voiced suspicion regarding relations between some rating services and certain of their customers. Said the report: “Stations, agencies, all bring pressure to bear to keep ratings high. Sponsor has seen letters from stations to rating services promising to buy the service ‘when you can show us on top.’” Currently, a second important group of broadcasters is investigating the whole ratings industry. . . .

Yet, despite the widespread criticism and suspicion of many of the ratings, they continue to wield tremendous power in the broadcasting industry. “And the damage they can do,” says George Rosen, TV and radio editor of the trade paper Variety, “is amazing.” Sometimes the fault lies in the ratings themselves, sometimes it’s in how they’re used. In either case, the audience is likely to be the loser. . . .

The Tribulations of Mr. Red Buttons

Another example is Red Buttons. He was an obscure young night-club comedian a couple of years ago when he was discovered by a CBS vice-president who gave him a chance to do his own program on TV. At the end of his first year, Buttons had a comfortable position among the first five in the Nielsen ratings. But in his second year Buttons’ ratings began to slip. They didn’t slip far—but Red’s show followed I Love Lucy, and his sponsors couldn’t understand why Lucy’s 60 rating fell off to Buttons’ 40. Buttons says, “Imagine anyone complaining about a 40 rating? That’s better than some of the most successful shows on the air.” At one point, he called to the sponsor’s attention that though he had slipped to 19th place in the Nielsen ratings, he still was listed third by another rating service.

His protest was to no avail, and the format-changers got to work. Instead of Buttons’ freewheeling style of comedy, they pinned him down to situation comedy—over his objection that there were too many situation comedies on the air already. His rating continued to drop.

Finally, in desperation, Buttons went back to his original format. By that time, it was too late. The sponsor canceled the show, and CBS, which could have renewed Buttons’ contract for another year, decided to let him go. Almost immediately, the young comedian was signed by NBC, which had scientifically analyzed the composition of Buttons’ audience and decided that he’d have great appeal for children in an early evening show.

“The pay-off,” says Variety’s George Rosen, “is that CBS learned at the end of last season that after reverting to his original format Buttons had climbed into a tie for the No. 7 spot in the Nielsen ratings with CBS’ Ed Sullivan.” . . .

The general failure to understand the proper function of the ratings can be almost catastrophic—as it was in the near demolition of the radio industry.

For years, C. E. Hooper was the kingpin of the radio ratings field. His so-called Hooperatings were based on samplings of listeners in 36 key American cities. Though Hooper never claimed he was producing accurate national ratings, most broadcasting bigwigs took it for granted that the 36 Hooper cities gave an exact picture of listening behavior all over the country.

What the Advertising Brass Forgot

But then, in the late 1940s, came the mushrooming of television. As it turned out, nearly all of the new stations sprang up in Hooper’s 36 cities, causing an inevitable decline in the radio ratings there. The advertising brass took one look at the plummeting ratings and rushed to get out of radio programming. The radio industry was dealt a blow from which it has never fully recovered. It just didn’t occur to anyone that Hooper’s city ratings bore no relation to what was going on in huge areas of the nation where television had not yet arrived.

In sum, there can be little doubt that most of the damage done by the ratings, from the viewpoint of the audience, at least, results from misuse rather than from defects in the ratings themselves. Yet in some respects, the rating services do fail—just as in others they provide a useful tool for the industry.

There are four major polling techniques: the roster-recall method, the telephone-coincidental method, the diary method, and the mechanical recorder.

Under the roster-recall method, used by The Pulse, Inc., among others, interviewers go from home to home, show the householder a list of programs, and ask what shows were listened to in the preceding few hours. This method is fast and inexpensive, and it can include more people in the sample than any other technique (Pulse interviews 67,000 families, compared with less than 1,000 for some other methods).

But the roster-recall has disadvantages, too. The person interviewed generally is the housewife, and she often has no idea what programs attracted her husband and children. Also, the memory—or the interviewer—can play strange tricks. Not long ago a rating service using the roster-recall method inexplicably came up with a complete set of ratings for the evening programs of a San Antonio radio station. The catch was that the station goes off the air daily at sunset. . . .

The second method is the telephone-coincidental. Among its users are Trendex and, in part, Hooper (who cross-checks with the third, or diary, method). They pick names in a set rotation from the telephone book and phone people to ask what program their set is tuned to at that moment. There is no memory loss, and the service is extremely fast. Trendex, with interviewers in 10 key cities, can furnish information on a TV show the morning after it has appeared. Furthermore, since it is set up only in cities where there are three or more competing stations, Trendex can provide comparative, or share-of-audience, figures overnight.
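The “comparative, or share-of-audience” figure mentioned here is distinct from the rating itself: a rating counts tuned-in homes against all the homes a station can reach, while a share counts them against only the homes actually using a set at that hour. As a reading aid, here is a minimal modern sketch of that arithmetic in Python; the figures are invented for illustration and do not come from the article.

```python
# Illustrative rating vs. share-of-audience arithmetic.
# All figures below are hypothetical, not drawn from the 1954 article.

total_tv_homes = 1_000_000       # homes that can receive the station's signal
homes_using_tv = 600_000         # homes with a set turned on at that hour
homes_tuned_to_show = 300_000    # homes tuned to the program in question

rating = 100 * homes_tuned_to_show / total_tv_homes  # percent of all reachable homes
share = 100 * homes_tuned_to_show / homes_using_tv   # percent of sets actually in use

print(f"rating: {rating:.1f}")   # 30.0 -- the kind of figure the services reported
print(f"share:  {share:.1f}")    # 50.0 -- the overnight share-of-audience figure
```

The same program can look far stronger by share than by rating when few sets are in use, which is one reason services quoting different measures could rank the same show differently.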

Hazards of Phoning Viewers at Home

But the telephone-coincidental method, too, has its disadvantages. Homes without telephones cannot be reached, nor can people with unlisted phones. In addition, Trendex’s Robert B. Rogers readily acknowledges that a man who has been watching a children’s program, say, may be ashamed to report it. Rogers also says that his interviewers can’t phone people in the morning or late at night because of the danger of irritating slumberers; as a result, unless specifically asked he produces no ratings whatever for off-hour programs. However, the biggest weakness of the system is the questionable reliability of the interviewers—who usually are untrained housewives, shut-ins or schoolteachers hired on a part-time, piece-work basis. (The average interviewer makes about $50 a month in her spare time.) . . .

The third system of producing ratings is the diary method, used mainly by Videodex and the American Research Bureau (known to the industry as ARB), and also—along with the telephone—by Hooper. Sample households are given a diary and asked to note down all the programs they see and hear over a seven-day period. On the whole, the method is cheap and permits a comparatively large sample (9,000 diaries for Videodex, 2,200 for ARB). The diaries reach all kinds of homes—rich and poor, telephone and nontelephone, urban and rural; and the method allows more detailed information to be gathered, if desired, by asking the householder to jot down such extras as the ages of the people in the audience, their attentiveness to commercials, and so on.

But the placing of diaries can be a haphazard method scientifically, since many people refuse to accept them, which could throw an entire sample out of kilter. Also, there is a tendency to neglect filling out the diary until the last day of the week. Here, too, memory is unreliable and people will put down anything that comes into their heads—including, occasionally, shows which haven’t been on the air for years, like the old Ken Murray program. As comedian Herb Shriner put it, “If you stop a woman leaving a supermarket and ask her to tell you everything she just bought, she won’t be able to. So how can she be expected to remember what she listened to a week ago?”

The fourth and most widely admired method of producing ratings is the mechanical recording device. Because it’s so costly, the only outfit using the mechanical method today is the multimillion dollar A. C. Nielsen Company, which has been a leader for years in other forms of research. The Nielsen device, called the Audimeter, is a small black box about the size of a portable typewriter case. It is attached by wire to all radios and television receivers in a household, and it records on film every station to which the set is tuned during a two-week period. The householder then mails the film to the Nielsen Company in Chicago, where electronic computers add up and analyze the data. . . .

The Mountaineer and the Black Box

But the Audimeter method is not infallible. There are mechanical breakdowns, and many people simply don’t want the black box in their homes. . . .

Another major complaint is that the Audimeter—unlike other systems—measures tuning, but not listening or viewing. A children’s program may be watched by 15 youngsters gathered around a set, yet the black box records only one viewer. Also, a housewife who had the TV or radio turned on might actually be in some other room doing her housework. . . .

A few TV people object, as well, to the fact that Nielsen keeps Audimeters in homes permanently and does not change the sample. . . .

Another concern of broadcasting executives is that the ratings are often at wide variance with sales figures—which is an indication that something must be wrong somewhere. For example, one sponsor who has had a top-rated show for years has, nevertheless, suffered from steadily declining sales since 1951. Conversely, Ray Bolger, with a consistently low rating, sold copious quantities of his sponsors’ products last year. Arthur Godfrey points out that his lowest-rated show sells more of the sponsor’s product than any of his other programs. “A rating is like a batting average,” says Godfrey. “It doesn’t mean a thing unless you score or drive in a lot of runs.” . . .

In addition, networks have found that they can virtually ensure a good rating for a show by scheduling it next to a show with proven popularity. Burns and Allen had fairly low ratings when they were on Thursday nights. Then CBS moved the show to Monday night, just ahead of Arthur Godfrey’s Talent Scouts and I Love Lucy. The rating shot up to the high 30’s, putting the program among the top shows. The reason: most TV viewers, it has been discovered, turn the dial to the channel on which they plan to see their favorite program, and leave it there most of the evening.

Ratings can be just as easily wrecked by programming. The Arthur Godfrey and His Friends show on CBS was among the top 10 in the Nielsen ratings until NBC decided to throw some potent situation-comedy competition against Godfrey, instead of giving him the audience by default. NBC, which already had I Married Joan at 8 P.M., added My Little Margie at 8:30—and in a few months, Godfrey plummeted to No. 32. . . .

On at least one occasion, two services provided ratings for the same program on the same day that were no less than 3800 per cent apart.

It was such discrepancies that finally induced the industry to act. In 1950, radio station KJBS in San Francisco became so incensed over contradictory testimony by Hooper and Pulse that it ran a full-page advertisement in the trade magazine Broadcasting-Telecasting, headed, “Two Umpires Behind the Plate Isn’t Any Good in Broadcasting, Either.” This ad led to a series of events which soon may clean up the entire ratings mess—and remove artificial controls over what you see on TV.

A committee that was formed in New York to investigate the radio station’s charges quickly expanded the scope of its inquiry to take in the entire ratings field. Its report, filed for the National Association of Broadcasters, criticized the rating services so sharply that when the Advertising Research Foundation polled its members in 1952, asking them what they most wanted the foundation to do, there was an overwhelming vote in favor of “ending confusion in radio and TV audience ratings.”

So the foundation appointed another committee, with all branches of the industry represented. The committee worked nights and weekends, and took hundreds of pages of testimony. Now, after two years, its report is just about ready to be published. The Biow agency’s Dr. E. L. Deckinger, chairman of the committee, says, “We have a feeling that this study will be the Kinsey Report of the TV industry.”

The content of the report is a secret, but I have learned that in it the investigators first offer 10 criteria for an ideal rating service—then show that none of the existing services can meet all 10. Closest is Nielsen, whose Audimeter fails on only two counts—that it can’t measure out-of-home listening, and that its very presence might psychologically affect a family so that they watch TV more or less than they ordinarily would. The mechanical device also gets only a conditional okay on two other counts: possibility of mechanical failure and the limitation on the number of homes in which the machines are placed. The report will insist on a minimum sample of 1,200 homes for accuracy. But that figure is based on selection methods so tightly controlled that the present Nielsen sample might have to be expanded to as many as 1,800 homes to meet the report’s standards.
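One way to make sense of the report’s insistence on a minimum sample is ordinary binomial sampling error, which shrinks only with the square root of the sample size. The sketch below applies the standard margin-of-error formula for a proportion; it is an illustration of the general principle, not anything disclosed from the report itself.

```python
import math

def rating_margin_of_error(rating_pct: float, sample_homes: int, z: float = 1.96) -> float:
    """Approximate 95% margin of error, in rating points, for a rating
    estimated from a simple random sample of homes. Illustrative only."""
    p = rating_pct / 100.0
    standard_error = math.sqrt(p * (1.0 - p) / sample_homes)
    return z * standard_error * 100.0

# A show with a true rating of 30, measured with samples of various sizes:
for n in (300, 1_200, 1_800):
    print(f"{n:>5} homes: +/- {rating_margin_of_error(30.0, n):.1f} rating points")
# Roughly +/-5.2 points at 300 homes, +/-2.6 at 1,200, and +/-2.1 at 1,800.
```

On these assumptions, expanding from 1,200 to 1,800 homes buys only about half a rating point of extra precision, which fits the article’s point that how the homes are selected matters as much as how many there are.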

Source: Bill Davidson, “Who Knows Who’s on Top,” Collier’s, 29 October 1954, 23–27.