In the past, once survey results were declared, analysis and comparisons would begin between the data published by the National Readership Survey (NRS) and the Indian Readership Survey (IRS). Publications that were up in one were sometimes down in the other. Agencies and media owners would take sides, depending on which study best served their purpose. But ultimately the advertiser was left in doubt, and it was his money that was on the line.
The NRS and IRS merger is a welcome change. The time has come to take an impartial and dispassionate look at both studies, with a view to understanding the merits and demerits of each and choosing the correct way forward. The responsibility on the new council is much larger this time, given that the new survey will be the sole determinant of how Rs 11,000 crore is spent. The council must ensure that the new survey is neither IRS-ish nor NRS-ish, and take a fresh perspective on survey methodology and execution rigour.
Survey design and construct
Executing a survey of this scale is inherently prone to some degree of error. And when that survey is one of the largest in the world, conducted in several languages, on a subject like 'readership' in a country where a large part of the population is still not literate, with poor population records, dated statistics and unique situations ranging from tea-stall readership to multilingual households, errors are inevitable. The challenge is to minimise those errors to acceptable levels.
To build credibility, there must be continuous, independent third-party validation that the defined survey process is being implemented in the field within acceptable tolerance limits. This will communicate that the council is determined to get things right and that the process is transparent, leading to a definite increase in sponsor and user comfort.
(Both the authors handle Media and Entertainment Practice at Ernst & Young)