Sample Answers’ response to ESOMAR’s questions designed to help buyers of online research samples.
This document provides Sample Answers’ responses to the 26 questions recently suggested by ESOMAR as representing the primary concerns that buyers of on-line samples should raise to be sure that their suppliers have considered them. The document therefore reproduces ESOMAR’s original series of questions in full, together with the explanations provided by ESOMAR, and then gives our answer in each case.
For this purpose ESOMAR’s Professional Standards Committee has kindly granted permission for us to reproduce their questions in this document.
The ESOMAR Guide to Conducting Research on the Internet, as published in 2005, contains a section with 25 questions. The questions were designed to help researchers discuss online access panel research methodology by creating a framework and language for dialogue. Since the ESOMAR Guide was published the growth in online research has been enormous. We have seen a large increase in the numbers and types of online sampling sources and multiple sources are frequently used to provide sample for a single project. ESOMAR has therefore revised the questions to reflect current issues and to provide an explanation of the reasons why the questions should be asked.
The new revised 26 questions are not a summary of the ESOMAR Guide to Conducting Research on the Internet, nor a substitute for reading it, since the Guide covers a much broader area than sampling. However, all online sample providers should be able to answer these questions.
The notes on the context for the questions will help researchers identify issues which they should expect to be covered in the answer. These questions, in combination with additional information, will help researchers consider issues which influence whether an online sampling approach is fit for purpose in relation to a particular set of objectives; for example whether an online sample will be sufficiently representative and unbiased.
They will help the researcher ensure that they receive what they expect from an online sample provider.
The 26 questions are categorized under the following main headings:
- Company profile
- Sample source
- Panel recruitment
- Panel and sample management
- Policies and compliance
- Partnerships and multiple panel membership
- Data quality and validation
In each case the question is accompanied by a ‘guideline’ for interpretation of the reply to the question. That additional guidance follows each question in italics; we then follow each guideline with Sample Answers’ response.
Q1. What experience does your company have with providing online samples for market research? (This answer might help you to form an opinion about the relevant experience of the sample provider. How long has the sample provider been providing this service and do they have for example a market research, direct marketing or more technological background? Are the samples solely provided for third party research, or does the company also conduct proprietary work using their panels?).
Sample Answers: We have been providing on-line sample and other web services for more than 10 years. As the principal operating company of CMRGroup.com Ltd we were partly instrumental in running the series of 6 Seminars entitled “The Internet, Marketing and Research” which began in July 1996 and finished in November 1998, at which point ESOMAR began their own series of special conferences “Net Effects” the first of which was in February 1999.
Q2. Please describe and explain the types of source(s) for the online sample that you provide (are these databases, actively managed panels, direct marketing lists, web intercept sampling, river sampling or other)? (The description of the type of source a provider uses for delivering an online sample might provide insight into the quality of the sample. An actively managed panel is one which contains only active panel members – see question 11. Note that not all online samples are based on online access panels).
Sample Answers: We use every type of source to assist our clients with their on-line work, including each of the sources mentioned in the question. Of course, different sources are appropriate for different needs and we will always discuss that aspect with our clients.
Q3. What do you consider to be the primary advantage of your sample over other sample sources in the marketplace? (The answer to this question may simplify the comparison of online sample providers in the market).
Sample Answers: We consider the fact that we can offer a comprehensive range of services to be our USP – rather than seeking to claim to be better in any specific area. Since we also trade with other sample providers we believe that we are able to select the best source for our clients, although not necessarily at the best price.
Q4. If the sample source is a panel or database, is the panel or database used solely for market research? If not, please explain. (Combining panelists for different types of usage (like direct marketing) might cause survey effects).
Sample Answers: As far as we are aware most Market Research panels (including our own small panel in the UK) come from a variety of sources, which may include: a) direct ‘on-line’ recruitment to the panel web-site; b) e-mail invitations from a direct marketing source; c) ‘click through’ invitations from other web-sites (normally, specific interest websites) and d) special recruitment from telephone or mail invitations.
Coming from a number of sources, rather than just one, is more likely to increase the representation of the on-line population. Ultimately, a panel (and indeed any specific survey) is concerned with engagement of the respondent and the results are always subject to potential response bias. In the great majority of instances such bias will not affect the use that is made of the data – but there are exceptions and, to understand such exceptions, we have developed a special software package which allows clients to quickly evaluate any potential problems arising from samples from different sources (see www.risk-e.net for further information).
Q5. How do you source groups that may be hard-to-reach on the internet? (The inclusion of hard-to-reach groups on the internet – like ethnic minority groups, young people, seniors etc. – might improve the quality of the sample provided).
Sample Answers: Evidently, the inclusion of such minorities will enhance the quality of the sample in respect of anyone seeking a ‘universe’ of the population for their study.
However, when responses are ‘short’ for any important minority we assist our clients by searching across our suppliers for an additional source, as may be necessary. Often that will involve a ‘click through’ from specialist minority interest sites. It is normally possible to identify the source of response to the survey and then evaluate the effect of different sources if required.
Q6. What are people told when they are recruited? (The type of rewards and proposition could influence the type of people who agree to answer a questionnaire or join a specific panel and can therefore influence sample quality).
Q7. If the sample comes from a panel, what is your annual panel turnover / attrition /retention rate and how is it calculated? (The panel attrition rate may be an indicator of panelists’ satisfaction and, therefore, panel management, but a high turnover could also be a result of placing surveys which are too long with poor question design. The method of calculation is important because it can have a significant impact on the rate quoted).
Sample Answers: The fact is that attrition rates (however calculated) vary across different classification groups – most importantly they tend to be higher for younger people than for older. In truth, we do not see this as a significant guide to the ease with which a particular survey may be conducted at a particular cost.
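The guideline’s point that the method of calculation matters can be illustrated with a short sketch; the figures below are hypothetical, not drawn from any real panel:

```python
# Illustrative only: two common ways of quoting an annual attrition rate.
# All figures are hypothetical.
members_at_start = 50_000   # panel size on 1 January
members_at_end = 48_000     # panel size on 31 December
leavers = 12_000            # members who left during the year

# Method A: leavers as a share of the starting membership.
attrition_vs_start = leavers / members_at_start

# Method B: leavers as a share of the average membership over the year.
average_membership = (members_at_start + members_at_end) / 2
attrition_vs_average = leavers / average_membership

print(f"vs start:   {attrition_vs_start:.1%}")    # 24.0%
print(f"vs average: {attrition_vs_average:.1%}")  # 24.5%
```

Even on the same underlying data the two conventions give different rates, which is why the quoted figure means little without the calculation behind it.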
Q8. Please describe the opt-in process (The opt-in process might indicate the respondents’ relationship with the panel provider. The market generally makes a distinction between single and double opt-in. Double opt-in describes the process by which a check is made to confirm that the person joining the panel wishes to be a member and understands what to expect).
Sample Answers: Double ‘opt-in’ is our standard.
Q9. Do you have a confirmation of identity procedure? Do you have procedures to detect fraudulent respondents at the time of registration with the panel? If so, please describe. (Confirmation of identity might increase quality by decreasing multiple entries, fraudulent panelists, etc).
Sample Answers: No. We have not found a foolproof process that truly assures this. Our approach is to monitor responses and remove the data (and the respondent) if there is a concern. Our experience is that this is a minor issue compared with other aspects (e.g. children completing the questionnaires for their parents). The latter may be identified by investigating responses.
Q10. What profile data is kept on panel members? For how many members is this data collected and how often is this data updated? (Extended and up-to-date profile data increases the effectiveness of low incidence sampling and reduces pre-screening of panelists).
Sample Answers: We maintain only the standard demographics required to ‘control’ samples by that means – more detailed information, such as car ownership, is only maintained for those panels concerned with cars. Even so, it is difficult to guarantee that the information will always be ‘up to date’. We prefer the e-mail invitation to state clearly who is requested for each survey and to invite the click-through accordingly.
Q11. What is the size and/or the capacity of the panel, based on active panel members on a given date? Can you provide an overview of active panelists by type of source? (The size of the panel might give an indication of the capacity of a panel. In general terms, a panel’s capacity is a function of the availability of specific target groups and the actual completion rate. There is no agreed definition of an active panel member, so it is important to establish how this is defined. It is likely that the new ISO for access panels which is being discussed will propose that an active panel member is defined as a member that has participated in at least one survey, or updated his/her profile data, or registered to join the panel, within the last 12 months. The type and number of sources might be an indicator of source effects and source effects might influence the data quality. For example, if the sample is sourced from a loyalty program – travel, shopping, etc. – respondents may be unrepresentatively high users of certain services or products).
Sample Answers: There are no standard definitions and, indeed no way in which the ‘active panel’ size will predict response to a particular survey. Each survey needs to be judged independently. Indeed the ‘active’ panel in October was rarely that active in August!
PANEL AND SAMPLE MANAGEMENT
Q12. Please describe your sampling process including your exclusion procedures if applicable. Can samples be deployed as batches/replicates, by time zones, geography, etc? If so, how is this controlled? (The sampling processes for the sample sources used are a main factor in sample provision. A systematic approach based on market research fundamentals may increase sample quality).
Sample Answers: The sampling ‘procedures’ employed are those most appropriate for each project and depend entirely on the ‘target’ audience required for the project.
Normally, we will only exclude those who are not eligible, but other exclusions (e.g. those who have recently responded to a previous survey) will depend upon the client’s wishes. For ‘on-line’ work we see little value in using time-zone differentials (unlike telephone, of course). We do like to vary the day of the week, particularly for reminders.
Q13. Explain how people are invited to take part in a survey. What does a typical invitation look like? (Survey results can sometimes be influenced by the wording used in subject lines or in the body of an invitation).
Sample Answers: The invitation is, of course, always agreed with the client. Aside from seeking to have a simple, engaging, email we will usually include the important details of a) the subject of the survey and b) the ‘type’ of respondent who is invited to complete it, as well as any incentive that may be offered.
Q14. Please describe the nature of your incentive system(s). How does this vary by length of interview, respondent characteristics, or other factors you may consider?
(The reward or incentive system might impact on the reasons why people participate in a specific panel and these effects can cause bias to the sample).
Sample Answers: This is very much ‘horses for courses’. We have not, as yet, brought in a ‘loyalty’ scheme; what we do is vary the incentive according to the project. That incentive can be nothing, a feedback report, a prize draw, or a tangible direct money or voucher reward (or a combination of these).
Q15. How often are individual members contacted for online surveys within a given time period? Do you keep data on panellist participation history and are limits placed on the frequency that members are contacted and asked to participate in a survey? (Frequency of survey participation might increase conditioning effects whereas a controlled survey load environment can lead to higher data quality).
Sample Answers: Yes, we maintain a history of all ‘click throughs’ – but it is not always possible to obtain completion data from every client who hosts their own surveys.
Our principal ‘philosophy’ is that we are facilitating the opportunity for panelists and, occasionally, others to participate in a survey if they wish and, in the main, we are doubtful about ‘conditioning’ effects. Clearly, if a respondent has completed a survey in an exceptionally ‘quick’ time we will remove them from the sample and, if this is repeated, will remove them from the panel. If the client wishes to ensure that they only contact those who have not completed a survey recently, we can accommodate that need. In truth, we are aware that many of those who do have rules of this kind often find it necessary to ‘break’ them to fulfil a particular survey.
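The ‘exceptionally quick’ completion check described above can be sketched as follows; the one-third-of-median cutoff and the figures are illustrative assumptions, not our production rule:

```python
from statistics import median

def flag_speeders(durations_sec, fraction=1 / 3):
    """Return ids of respondents whose completion time falls below
    `fraction` of the median duration (an illustrative rule only)."""
    cutoff = median(durations_sec.values()) * fraction
    return [rid for rid, t in durations_sec.items() if t < cutoff]

# Hypothetical completion times in seconds, keyed by respondent id.
times = {"r1": 600, "r2": 540, "r3": 90, "r4": 620, "r5": 580}
print(flag_speeders(times))  # ['r3']
```

In practice the flagged records would be reviewed before removal, since a short completion time can also reflect a legitimately short questionnaire route.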
POLICIES AND COMPLIANCE
Sample Answers: We provide samples (of all types) across the world and, as members of ESOMAR, we are familiar with, and abide by, all the rules of conduct and data protection as appropriate to each country accordingly.
Q17. What data protection/security measures do you have in place? (The sample provider usually stores sensitive and confidential information on panelists and clients in databases. These need to be properly secured and backed-up, as does any confidential information provided by the client).
Sample Answers: All our databases are secured and appropriately ‘firewalled’. Sample Answers has maintained many confidential databases ‘in house’ for many years; these are maintained in accordance with the guidelines provided by the Data Protection authorities in the UK and other relevant bodies.
Q18. Do you apply a quality management system? Please describe it. (A quality management system is a system by which processes in a company are described and employees are accountable. The system should be based on continuous improvement. Certification of these processes can be done independently by auditing organizations, based, for instance, on ISO norms).
Sample Answers: Yes, but our systems are not accredited.
Q19. Do you conduct online surveys with children and young people? If so, please describe the process for obtaining permission. (The ICC/ESOMAR International Code requires special permissions for interviewing children)
Sample Answers: Yes, we have assisted with projects involving children – although that is not a ‘target’ market for us. We always seek parental permission.
PARTNERSHIPS AND MULTIPLE PANEL MEMBERSHIP
Q20. Do you supplement your samples with samples from other providers? How do you select these partners? Is it your policy to notify a client in advance when using a third party provider? Do you de-duplicate the sample when using multiple sample providers? (Many providers work with third parties. This means that the quality of the sample is also dependent on the quality of sample providers that the buyer did not select. Transparency is a key issue in this situation. Overlap between different panel providers can be significant in some cases and de-duplication removes this source of error, and frustration for respondents).
Sample Answers: Yes, we do supplement samples from other suppliers and, yes, our clients are normally informed. De-duping is a serious issue and, unfortunately, not everyone is able (or willing?) to supply the information necessary to de-dupe a sample and thus avoid the possibility of a second invitation. In fairness, it is often the case that time is of the essence for the client and there simply is not the time (or money) for such ‘proper’ procedures. Also, respondents who have been on more than one panel for any length of time are ‘wise’ to this issue and it does not worry them. Of more concern is the potential for a multi-response, and processes are in place to reduce that potential to a minimum.
Q21. Do you have a policy regarding multi-panel membership? What efforts do you undertake to ensure that survey results are unbiased given that some individuals belong to multiple panels? (It is not that uncommon for a panellist to be a member of more than one panel nowadays. The effects of multi-panel membership by country, survey topic, etc., are not yet fully known. Proactive and clear policies on how any potential negative effects are minimized by recruitment, sampling, and weighting practices is important).
Sample Answers: No, we do not have a policy for this. It is impossible to truly police without complete co-operation across the ‘industry’.
DATA QUALITY AND VALIDATION
Q22. What are likely survey start rates, drop-out and participation rates in connection with a provided sample? How are these computed? (Panel response might be a function of factors like invitation frequency, panel management (cleaning) policies, incentive systems and so on. Although not a quality measure by itself these rates can provide an indication of the way a panel is managed. A high start rate might indicate a strong relationship between the panel member and the panel. A high drop-out rate might be a result of poor questionnaire design, questionnaire length, survey topic or incentive scheme as well as an effect of panel management. The new ISO for access panels will likely propose that participation rate is defined as the number of panel members who have provided a usable response divided by the total number of initial personal invitations requesting members to participate).
Sample Answers: We are always prepared to share this information with our clients. The statistic proposed by the new ISO seems quite appropriate but (in our view) it is also necessary to consider the click-through rate as a percentage of invitations sent. Poor conversion from click-through to completion is often indicative of questionnaire difficulties rather than of unwillingness to participate.
Q23. Do you maintain individual level data such as recent participation history, date of entry, source, etc., on your panelists? Are you able to supply your client with a per job analysis of such individual level data? (This type of data per respondent increases the possibility of analysis for data quality, as described in ESOMAR’s Guideline on Access Panels).
Sample Answers: We are not always able to provide the client with such information nor, in general, do they request it (see comment below). We are able to provide clients with Risk-e to assist them in evaluating potential bias by this (or other) information, when available.
Q24. Do you use data quality analysis and validation techniques to identify inattentive and fraudulent respondents? If yes, what techniques are used and at what point in the process are they applied? (When the sample provider is also hosting the online survey, preliminary data quality analysis and validation is usually preferable).
Sample Answers: Since our own hosting of studies is relatively rare this can only be done occasionally and, normally, upon completion.
Q25. Do you measure respondent satisfaction? (Respondent satisfaction may be an indicator of willingness to take future surveys. Respondent reactions to your survey from self-reported feedback or from an analysis of suspend points might be very valuable to help understand survey results).
Sample Answers: No, we don’t measure this on a job-by-job basis. It is an additional ‘task’ for which there is often insufficient time. The response itself is a good indicator of ‘satisfaction’ and, of course, the truly dissatisfied can always ‘opt out’.
Q26. What information do you provide to debrief your client after the project has finished? (One might expect a full sample provider debrief report, including gross sample, start rate, participation rate, drop-out rate, the invitation text, a description of the field work process, and so on).
Sample Answers: It does depend upon the nature of the job and the client’s requirements.
But as a rule we expect to supply the following information whenever appropriate:
- Number of invitations sent
- Number of resultant ‘click throughs’
And, if hosting:
- Number starting the questionnaire and details of ‘fall out’ during the questionnaire
- Length of time taken
Note – questionnaire design and hosting is, now, normally conducted under our ‘On-line Answers’ trading name.
SAMPLE ANSWERS FINAL COMMENT:
Quantitative work over the Internet has grown very quickly and provided clients with a cost-effective means of gathering information that, previously, would have taken longer and been more expensive. Unfortunately, it is NOT a method that enables the scientist (statistician) to prove that the results are truly representative of a population. Indeed, a lot of ‘quick’ telephone work can be said to be equally non-verifiable. Unfortunately, with the web, the practice of paying per interview rather than per project has driven down standards in favour of speed and cost alone. This uncomfortable fact is, in many cases, not that significant insofar as the results are true unto themselves and, in the main, provide marketers with effective guidance, the implementation of which is a ‘low risk’ tactic. Whenever the decision to be taken is a high-risk one, the results need to be more carefully evaluated and there are any number of means by which that can be achieved – including use of the program Risk-e, which provides a ‘quick guide’ to the significant influences on any chosen key variable.
Note – this document has been prepared by Tony Dent, Chairman of Sample Answers; please advise him of any problems of interpretation you may have – email: [email protected]
Alternatively please feel free to contact Mark Dent: [email protected]
62 High Street
Tel. +44 (0) 20 8274 5000
Fax. +44 (0) 20 8274 5020
The 26 questions were prepared on behalf of ESOMAR by the following project team:
- Adam Phillips, Chairman, ESOMAR Professional Standards Committee
- Reg Baker, Market Strategies
- Mike Cooke, GfK
- Jonathan Jephcott, Synovate
- Kees de Jong, Survey Sampling International
- Andrew Mairon, TNS Interactive
- David Pring, Ipsos Insight
- Cyril Stern, (previously Ciao) Internet Entrepreneur
- George Terhanian, Harris Interactive (Europe)
Vondelstraat 172 1054GV Amsterdam The Netherlands
Tel +31 20 664 2141
Fax +31 20 664 2922
All ESOMAR world research codes and guidelines, including latest updates, are available online at www.esomar.org. Last revised March 2008.
© 2008 ESOMAR. All rights reserved.
No part of this publication may be reproduced or copied in any form or by any means, or translated, without the prior permission in writing of ESOMAR.
ESOMAR codes and guidelines are drafted in English and the English texts are the definitive versions.