Datos agregados para corregir los sesgos de no respuesta y de cobertura en encuestas

Author: Cabrera-Álvarez, Pablo

Journal: Empiria: Revista de metodología de ciencias sociales

ISSN: 1139-5737

Year of publication: 2021

Issue: 49

Pages: 39-64

Type: Article

DOI: 10.5944/EMPIRIA.49.2021.29231

Abstract

In recent decades, the growing incidence of nonresponse and coverage biases in surveys has called into question the ability to generalize survey results to the population. A widespread way to correct nonresponse and coverage biases is the use of weights that rebalance the final sample of respondents. Building these weights requires auxiliary information: population totals that are available for both respondents and those who do not cooperate. This study uses statistical simulations to test the capacity of aggregate information to correct nonresponse bias. To that end, the adjustment based on individual-level data is compared with the aggregate-data approach, showing that aggregate data can be useful when three conditions hold: 1) the estimated variable is grouped, 2) the estimated variable and the auxiliary variable are correlated, and 3) the probability of completing the survey is related to the auxiliary variable.
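To make the aggregate-data adjustment concrete, below is a minimal simulation sketch in the spirit of the abstract: response propensity depends on a grouped auxiliary variable that is correlated with the outcome (conditions 2 and 3), and poststratification weights are built from aggregate population totals alone (group sizes), with no individual-level auxiliary records for nonrespondents. The variable names and the simulation design are illustrative assumptions, not the paper's actual setup.

```python
import numpy as np

rng = np.random.default_rng(42)

# --- Hypothetical simulated population (not the paper's exact design) ---
N = 100_000
# Grouped auxiliary variable (e.g., four age groups), known at the
# aggregate level for the whole population.
x = rng.integers(0, 4, size=N)
# Outcome correlated with the auxiliary variable (condition 2).
y = 10 + 2 * x + rng.normal(0, 2, size=N)

# Response propensity tied to the auxiliary variable (condition 3):
# higher-x groups are more likely to complete the survey.
p_respond = np.array([0.2, 0.4, 0.6, 0.8])[x]
respond = rng.random(N) < p_respond
y_r, x_r = y[respond], x[respond]

# --- Poststratification using only aggregate population totals ---
pop_totals = np.bincount(x, minlength=4)      # known group sizes N_g
resp_totals = np.bincount(x_r, minlength=4)   # respondents per group n_g
weights = (pop_totals / resp_totals)[x_r]     # weight = N_g / n_g

print(f"True population mean:    {y.mean():.3f}")
print(f"Unweighted estimate:     {y_r.mean():.3f}")  # biased by nonresponse
print(f"Poststratified estimate: {np.average(y_r, weights=weights):.3f}")
```

Under this setup the unweighted respondent mean overstates the population mean, while the estimate weighted from aggregate group totals recovers it, which is the pattern the abstract's three conditions describe.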

Funding information

The project that generated these results was supported by a fellowship from the "la Caixa" Banking Foundation (ID 100010434), fellowship code LCF/BQ/ES16/11570005.
