By Zubair Faisal Abbasi
“If social and political development organisations and anchor-persons are acquainted with survey research methods, then they can avoid being doubly wrong”.
We usually see that whenever the results of a survey are released, people start raising questions about its sample size. It is not surprising when a layman raises an eyebrow, but it is astonishing when senior anchor-persons repeat this misplaced observation. Far too often, they try to superimpose their own perspectives, claiming that these are more accurate than the survey results. “Since we have our hands on the pulse of the people, we know what is and what is not,” they say. The sad reality, however, is that such anchor-persons are, in fact, doubly wrong. This is one of the reasons that most of the “perceptions” shown on TV screens were proven wrong by the 2013 election results.
In this article, we argue that it is not the sample size per se that matters; a host of other factors can compromise the quality of survey results. Such errors and biases make surveys weak when their reliability and validity are tested. Put simply, they create a situation in which survey results do not represent the greater scheme of things accurately.
It is common knowledge amongst statisticians that calculating the sample size for a truly random sampling technique is not rocket science. They also know that one can always increase the sample size to cater for the design effect. It is important to mention, however, that increasing the number of respondents does not by itself increase validity and reliability; it may even introduce more errors, since it adds administrative and financial overhead without any positive effect on the survey results.
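To see how little mystery there is in that arithmetic, here is a minimal sketch of the standard Cochran sample-size formula, inflated by a design effect (DEFF) for clustered designs. This is our illustration, not a calculation from the article; the margin of error, confidence level, and DEFF values below are assumed for the example.

```python
import math

def sample_size(margin_of_error=0.03, confidence_z=1.96, p=0.5, deff=1.0):
    """Cochran's formula for a simple random sample, optionally
    inflated by a design effect (DEFF) for clustered designs.
    p = 0.5 is the most conservative assumed proportion."""
    n = (confidence_z ** 2) * p * (1 - p) / (margin_of_error ** 2)
    return math.ceil(n * deff)

# A +/-3% margin at 95% confidence needs about 1,068 respondents;
# a clustered design with DEFF = 1.5 pushes that to 1,601.
print(sample_size(0.03))            # 1068
print(sample_size(0.03, deff=1.5))  # 1601
```

The point the arithmetic makes is the article's point: the required number is modest and easy to compute, so quibbling over sample size misses where surveys actually go wrong.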
So what are the other factors which matter most?
Our experience of surveys shows that there are at least five factors which influence survey outcomes more than sample size does. There are many others as well, but we will leave them out for now to avoid complexity.
Let us assume that the survey research methodology has been designed in a scientific, by-the-book fashion. The population blocks are known and the only issue at hand is to pick, according to the methodology of respondent selection, the exact person to whom the questionnaire must be administered.
Here enumerators usually introduce bias when they start using their own judgment for respondent selection. They pick people who make the easiest respondents and are available at convenient places. This creates a serious selection bias, meaning that not all people under study have an equal chance of being selected as respondents. The first dent in the spectrum of responses is made at this stage; the first distortion in the miniature painting occurs here.
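The difference between equal-probability selection and the enumerator's shortcut can be sketched in a few lines. The household list, its size, and the sample count below are all hypothetical, purely for illustration.

```python
import random

# Hypothetical listing of 200 households in one population block.
households = [f"house_{i}" for i in range(1, 201)]

rng = random.Random(42)  # fixed seed so the draw is reproducible

# Equal-probability selection: every listed household has the
# same chance of ending up in the sample.
selected = rng.sample(households, k=20)

# The biased shortcut: the enumerator simply takes the 20 most
# conveniently located households at the start of the street.
convenient = households[:20]
```

With the random draw, a household at the far end of the block is exactly as likely to be interviewed as one next to the enumerator's starting point; with the shortcut, it has no chance at all, which is precisely the bias described above.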
Moving forward, there comes another dose of bias, known as interviewer bias. This usually happens when enumerators are not trained well enough, the language of a question is convoluted, or the questionnaire tries to lead the respondent with a “leading question” so that the answer is close to what the interviewer wishes to see. A wrongly designed questionnaire may add such biases as well, through the sequence of its questions: if the questions run from general to specific, respondents may answer differently than they would if the questions ran from specific to general.
For example, if you first ask about recent crimes and personal suffering and then ask about the overall situation in the country, the likelihood of getting a pessimistic answer is higher than if you ask about the general law and order situation first and only then about recent crimes or personal suffering.
There is another battery of biases which attacks the quality of surveys, and this comes from the side of the respondent. Let us call it cognitive overload. At times there are far too many questions or response categories, and it is not possible for an average respondent to fully understand each question. The respondent lapses into what is called “satisficing”: they simply pick the initial response categories, or lose interest in the interview and start responding without making an effort to provide real answers.
Last but not least, in most surveys the trust between interviewer and interviewee is one of the most important elements, especially when sensitive information is required, for example about sexual behaviour or drug abuse.
In such surveys there is a serious question of privacy which affects the responses. It has been observed that surveys administered to campus students in the presence of their teachers or peers give different results from those obtained when respondents can answer anonymously and know that no harm will come to their person or property.
These are some examples of factors that can influence survey results much more than the sample size. We hope that if social and political development organisations and anchor-persons become acquainted with survey research methods, they can avoid being doubly wrong: first, for not using any scientifically collected evidence, and secondly, for being overzealous about their own perceptions.
[The writer is a monitoring and evaluation specialist and works with IMPACT Consulting.]