In a new study published in the Harvard Kennedy School Misinformation Review, researchers from the University of Borås, Lund University and the Swedish University of Agricultural Sciences identified a total of 139 papers suspected of deceptive use of ChatGPT or similar large language model applications. Of these, 19 appeared in indexed journals, 89 in non-indexed journals, 19 were student papers found in university databases, and 12 were working papers, mostly in preprint databases. Papers on health and the environment made up around 34% of the sample, and 66% of those appeared in non-indexed journals.