Hallucinated Citations in the Age of AI
- Nicea Ali
- 2 days ago
- 1 min read

By: Kritika Goel
A disturbing trend has been observed in research publications: hallucinated citations, and AI has a big hand in it. It is common practice for researchers to use large language models to conduct literature searches or format bibliographies, but this practice is leading to the generation of non-existent academic references. Citation errors in publications are not uncommon; they may involve a misspelled author name or an incorrect year of publication, but the difference between inaccuracy and fabrication is huge.
To maintain the veracity of publications, the way forward would combine tools from companies like Grounded AI with manual checks. Manual checking is time-consuming and leaves room for human error, but with it the problem of fake references can be tackled.
According to the research carried out by Improve Life PLLC, iterative rounds of checking would be imperative. We have to recognize that there will be human errors as well as machine errors, and the two can often overlap. The image below from nature.com is a good reference point for avoiding hallucinated citations. It should also be kept in mind that achieving 100% accuracy would be challenging; still, the methodology shown below is a step in the right direction.
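To make the idea of iterative checking concrete, here is a minimal illustrative sketch of a first-pass, local sanity check on a reference list. The function name, the field names, and the specific checks are my own assumptions for illustration, not part of any tool mentioned above; real verification would go further and confirm each entry against an external database (for example, via the Crossref REST API or a dedicated service like Grounded AI) before a final manual pass.

```python
import re

def flag_suspect_references(references):
    """Return references that fail basic sanity checks, with reasons.

    Each reference is assumed to be a dict with 'title', 'year', and
    an optional 'doi'. An empty title, an implausible year, or a
    missing/malformed DOI earns a flag for manual follow-up.
    """
    suspects = []
    for ref in references:
        reasons = []
        if not ref.get("title", "").strip():
            reasons.append("empty title")
        year = ref.get("year")
        # Arbitrary plausibility window; adjust the bounds as needed.
        if not isinstance(year, int) or not (1800 <= year <= 2025):
            reasons.append("implausible year")
        doi = ref.get("doi", "")
        # DOIs start with a "10." prefix followed by a registrant code.
        if not re.match(r"^10\.\d{4,9}/\S+$", doi):
            reasons.append("missing or malformed DOI")
        if reasons:
            suspects.append((ref, reasons))
    return suspects
```

A check like this cannot prove a citation is real; it only narrows the list that a human (or an external lookup) must verify, which is exactly where the overlapping human and machine errors tend to hide.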
This is such a relevant topic, especially as AI continues to infiltrate academic and professional workspaces. I hope this post helps start the conversation and raises awareness of the tricks AI can play. Find your own citations!