Supervisor-to-Interviewer Ratio
This entry addresses how the number of interviewers assigned to each survey supervisor affects data quality. Many factors affect survey interviewer performance and success: individual skill set, training, experience, and questionnaire design all influence interviewers' ability to gain cooperation and gather accurate data. One ongoing factor with a significant impact on data quality in interviewer-administered surveys is the interaction between the supervisor and the telephone interviewer. Sometimes called a monitor, coach, team lead, or project/section/floor supervisor, this staff position gives management its greatest leverage over the human side of such survey projects.
The supervisors of survey interviewers may fulfill many roles in a survey project, including any or all of the following:
- Monitor the quality of the data being collected.
- Motivate the interviewers to handle a very difficult job.
- Provide insight into how a study is going and what the problem areas are.
- Provide training to improve the interviewing skills of the interviewers.
- Supply input for the interviewer performance evaluation.
It is the supervisor who is in daily contact with the interviewers and who knows each interviewer's strengths and the skills to be developed, maintained, or enhanced. The supervisor answers questions, guides interviewers through new or difficult processes, and sets the mood, pace, and outlook of the whole team. A supervisor who is positive, uplifting, professional, and supportive will increase interviewer retention, improve data quality, and lower project costs. Human resources studies have repeatedly shown that the immediate supervisor is the strongest influence on whether employees are happy in their jobs or leave them. To ensure the best results on survey projects, research management must not only select the right people to be motivating, inspiring supervisors but also determine the right supervisor-to-interviewer ratio.
The right ratio for a project is the one that provides enough supervisory coverage to meet the above goals: retain interviewers, ensure good quality, and provide cost efficiencies. A ratio that is too high for a project's needs (e.g., more than 1:20) will lead to too little interaction with interviewers, lower-quality data, and increased turnover. A ratio that is too low (e.g., 1:5 or less) will increase costs and may lower supervisory motivation through boredom. To determine the correct ratio for a project, research management can use a checklist of criteria questions.
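The staffing arithmetic these ratios imply is a simple ceiling division: a target ratio of 1:15 means one supervisor for every 15 interviewers, rounded up so no supervisor exceeds the target. The sketch below is illustrative only; the function name and the staffing numbers in the example are assumptions, not figures from this entry.

```python
import math

def supervisors_needed(interviewers: int, max_per_supervisor: int) -> int:
    """Supervisors required so the ratio never exceeds 1:max_per_supervisor."""
    return math.ceil(interviewers / max_per_supervisor)

# A hypothetical 45-interviewer phone room held to a 1:15 ratio needs
# 3 supervisors; tightening to 1:8 for high-quality supervision needs 6.
print(supervisors_needed(45, 15))  # -> 3
print(supervisors_needed(45, 8))   # -> 6
```

Because staffing is rounded up, small projects pay proportionally more for tight ratios, which is one reason the budget questions below constrain what ratio a manager can actually set.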
Some suggested questions to help develop the right ratio include the following:
- Is the study a new type of survey in its content, group to be sampled, procedures, or client/industry? If the interviewers are not experienced with the type of study being done, the supervisor-to-interviewer ratio should be lower than the normal benchmark (e.g., below 1:10).
- Is the study an ongoing survey that will change very little, or will the questions, procedures, shifts, and sample change frequently? An ongoing study with few changes can tolerate a higher supervisor-to-interviewer ratio, whereas a survey with constantly changing requirements needs proportionally more supervision.
- Does the study have very stringent or complex requirements, such as complicated sample quotas, sensitive or difficult selection criteria, or comprehensive validation? If, beyond real-time monitoring of interviewers' work and answering questions, there are other measurements that supervisors must track hourly, daily, or frequently within a shift, as in centralized telephone surveys, then a lower ratio is required.
- Is the project a business-to-business study, or a social/political study that requires respondents who are professionals or executives, like doctors, lawyers, chief executive officers, and so forth? Interviewers will have to possess more professional skills on these projects and may need more guidance from supervisors on getting through gatekeepers and other obstacles. Surveys conducted with professionals or highly affluent households or households with high-security access challenges often require a lower ratio than general consumer surveys.
- Does an ongoing project typically have a high percentage of new or inexperienced interviewers? If a survey organization experiences a steady, high level of turnover and an influx of raw interviewers, the supervisor-to-interviewer ratio will need to be lower to support the development of this staff.
- Is the study or project one that is audited by an independent group, or is there a third party that has oversight on the survey procedures and requirements, such as a government agency or industry watchdog committee? The project may need proportionally more supervisors to ensure compliance with the audit or oversight.
- Does the client funding the project represent a major portion of the survey organization's business or is otherwise a high-profile client with heavy impact on the organization's success? If so, then management may allocate more supervision to make sure all aspects of the work are going well and according to client expectations.
- What is the budget for the project? How was the bid costed when the work was contracted? How much supervisory cost was built in? The manager may be restricted in what ratio he or she can work with by the cost expectations already set by the company or the client.
Once all of these questions have been answered, project management can derive the supervisory ratio best suited to that project. There has to be a starting point, or benchmark, from which the manager develops the specific ratio. Although no single benchmark is used by all survey research data collection centers, a common supervisor-to-interviewer range in many commercial interviewing organizations is 1:15 to 1:20 (one supervisor per 15 to 20 interviewers). Starting from that range, the manager can adjust the ratio up or down against the sample questions listed previously. In contrast, since the 1980s, Paul J. Lavrakas has advised that for high-quality supervision of telephone interviewing, the ratio should be closer to 1:8 to 1:10.
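The benchmark-then-adjust process described above can be sketched as a simple heuristic: start from a benchmark inside the common 1:15 to 1:20 range and tighten the ratio for each checklist criterion that applies, never going below a floor near the 1:8 high-quality end. The function name, the per-criterion adjustment of two interviewers, and the default benchmark of 18 are all invented for illustration; the entry itself prescribes no specific weights.

```python
def recommended_ratio(benchmark: int = 18,       # interviewers per supervisor; within the 1:15-1:20 range
                      new_study_type: bool = False,
                      frequent_changes: bool = False,
                      complex_requirements: bool = False,
                      professional_respondents: bool = False,
                      high_turnover: bool = False,
                      external_audit: bool = False,
                      high_profile_client: bool = False,
                      floor: int = 8) -> int:     # near Lavrakas's 1:8 high-quality end
    """Tighten the benchmark ratio for each checklist criterion that applies.

    The 2-interviewer adjustment per criterion is an assumed weight,
    not a figure from the source entry.
    """
    per_supervisor = benchmark
    for applies in (new_study_type, frequent_changes, complex_requirements,
                    professional_respondents, high_turnover,
                    external_audit, high_profile_client):
        if applies:
            per_supervisor -= 2
    return max(per_supervisor, floor)

# A routine consumer survey keeps the benchmark; a new study type with
# complex requirements tightens 1:18 to 1:14 under these assumed weights.
print(recommended_ratio())                                          # -> 18
print(recommended_ratio(new_study_type=True,
                        complex_requirements=True))                 # -> 14
```

The floor matters: if every criterion applied, the raw adjustment would push the ratio below anything workable, so the heuristic stops at the high-supervision end of the range rather than producing an uneconomic 1:4.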