Meta-Analysis
AI in, Garbage Out: Is Meta-Analysis in Danger?
Meta-analysis is an important method in psychology, but “AI slop” threatens it.
Posted February 28, 2026 | Reviewed by Gary Drevitch
Key points
- Meta-analysis is crucial for integrating research findings across several psychological studies.
- "Garbage in, garbage out" is a core problem in meta-analysis.
- Fake AI slop publications are on the rise.
- If AI slop is not filtered out in meta-analysis, the quality of psychological research will decrease.
Psychological studies on the same topic sometimes yield conflicting results. To see what the evidence looks like when the data from all studies on a topic are combined, psychologists conduct so-called meta-analyses. In a meta-analysis, the evidence from a variety of sources is statistically combined into a single pooled estimate. In this way, meta-analysis is crucial for identifying the current best available evidence and optimizing healthcare decisions for mental-health problems.
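For readers curious about what "statistically combined" means in practice, the short Python sketch below pools a few hypothetical effect sizes using inverse-variance (fixed-effect) weighting, one common way studies are combined. The numbers and the fixed-effect choice are illustrative assumptions only, not the method of any particular meta-analysis.

```python
# Minimal sketch of inverse-variance (fixed-effect) pooling, the basic idea
# behind statistically combining study results in a meta-analysis.
# The effect sizes and standard errors below are hypothetical.
import math

# Hypothetical per-study results: (effect size, standard error)
studies = [
    (0.40, 0.15),  # Study 1
    (0.10, 0.20),  # Study 2
    (0.35, 0.10),  # Study 3
]

# Each study is weighted by the inverse of its variance (1 / SE^2),
# so more precise studies contribute more to the pooled estimate.
weights = [1.0 / (se ** 2) for _, se in studies]
pooled_effect = sum(w * d for (d, _), w in zip(studies, weights)) / sum(weights)
pooled_se = math.sqrt(1.0 / sum(weights))

# 95% confidence interval for the pooled effect
ci_low = pooled_effect - 1.96 * pooled_se
ci_high = pooled_effect + 1.96 * pooled_se

print(f"Pooled effect: {pooled_effect:.2f} (95% CI {ci_low:.2f} to {ci_high:.2f})")
```

The key point of the sketch is that the pooled result inherits whatever goes into it: if the individual estimates are biased or fabricated, the weighted average will be, too.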
“Garbage in, garbage out” is a core problem in meta-analysis
Meta-analysis is, however, not without problems. The quality of a meta-analysis depends critically on the quality of the studies it integrates. If the quality of the integrated studies is low, the quality of the evidence synthesis is low, too, a phenomenon aptly named GIGO, for "garbage in, garbage out" (Sotola and colleagues, 2022). The core problem with garbage in, garbage out is that readers rarely have the time to check the quality of every study included in a meta-analysis. It is therefore often difficult to distinguish a meta-analysis built on high-quality studies that yields robust findings from one built on low-quality, biased studies. This is why reporting guidelines for meta-analyses, such as the widely cited PRISMA 2020 statement (Page and colleagues, 2021), typically recommend assessing study quality and risk of bias for each study included in the evidence synthesis.

However, traditional approaches to assessing bias and study quality may not be enough for the incoming wave of so-called "AI slop" papers, that is, fraudulent, AI-generated articles. While AI can be a powerful tool to support psychological science, fake articles with AI-generated data and results pose an existential threat to psychological research.
AI slop publications are on the rise, and they threaten psychological science
A recent article in Nature highlighted how big this problem already is in computer science, a field facing a rapid increase in the volume of paper submissions to prestigious conferences, with clear indications that some submissions are entirely AI-generated and contain fabricated data (Gibney, 2026). Psychology is not computer science; it relies more heavily on patient data than on computer code. Still, it has been convincingly shown that AI can generate fraudulent clinical articles that look authentic at first glance (Májovský and colleagues, 2023).
“AI in, garbage out” could become a real problem for psychological science
AI slop can often be detected only through in-depth expert analysis of illustrations, analyses, and references for markers of AI generation. This poses a major issue for meta-analysis in psychological research. Many "AI slop" papers may never be published in traditional journals, but some may escape the scrutiny of expert reviewers and appear in established psychological journals. Others may be posted on preprint platforms with less editorial scrutiny than academic journals and still make it onto the inclusion list of a meta-analysis. Over time, this may lead to an "AI in, garbage out" (AIGO) phenomenon in which the quality of psychological meta-analyses deteriorates significantly. It is therefore crucial that psychological science finds a way to build screening for, and weeding out, "AI slop" papers into guidelines for meta-analysis. If "AI slop" papers are not systematically excluded, the results of meta-analyses that include them may be wrong. This could have devastating consequences, for example, if decisions about which therapy to use to treat patients are based on the results of a biased meta-analysis.
References
Gibney, E. (2026). How AI slop is causing a crisis in computer science. Nature. Advance online publication. https://doi.org/10.1038/d41586-025-03967-9
Májovský, M., Černý, M., Kasal, M., Komarc, M., & Netuka, D. (2023). Artificial intelligence can generate fraudulent but authentic-looking scientific medical articles: Pandora's box has been opened. Journal of Medical Internet Research, 25, e46924.
Page, M. J., McKenzie, J. E., Bossuyt, P. M., Boutron, I., Hoffmann, T. C., Mulrow, C. D., Shamseer, L., Tetzlaff, J. M., Akl, E. A., Brennan, S. E., Chou, R., Glanville, J., Grimshaw, J. M., Hróbjartsson, A., Lalu, M. M., Li, T., Loder, E. W., Mayo-Wilson, E., McDonald, S., McGuinness, L. A., … Moher, D. (2021). The PRISMA 2020 statement: An updated guideline for reporting systematic reviews. BMJ, 372, n71.
Sotola, L. K. (2022). Garbage in, garbage out? Evaluating the evidentiary value of published meta-analyses using z-curve analysis. Collabra: Psychology, 8(1).