
Ghost references: AI's citation problem

Leanna Coy, FNP-C

Updated: 5 days ago

A reality check for using AI tools to help with your writing and references.


[Image: an open laptop, the screen asking "Is this real?"]

Writers are trying to find ways to use artificial intelligence (AI) as a tool to improve their work. I see suggestions in various spaces about how AI can help writers: research, brainstorming ideas, and editing. Like anyone else, if there is a tool that can simplify my job, I'll use it. I've dabbled with using AI for idea generation, but I haven't found the results that useful. For the most part, they seem pretty basic. This week, I tried to branch out with a different approach.


Working on an article, I decided to try my hand at using AI to help source my references. It didn't go so well. Most of my writing is health content, and I strive to provide accurate information. I've heard patients quote far too much misinformation and inaccuracy as if it were sacred text. I plan to slowly chip away at some of the bad info out there with my writing.


The article I was working on was about the relationship between musculoskeletal health and hormones. I'd reviewed a few articles on the effect of declining estrogen on muscle health. With my rough draft complete, I wanted to see if AI would come up with any additional articles that would be useful.


I started with ChatGPT. The citations the bot came up with looked very legit. I recognized the names of the journals being referenced. "Cool!" I thought. Just let me review these articles…uh. Wait. Where are the articles? The links did not link. Hmm.


I asked ChatGPT to provide DOI numbers for the articles. The ever-polite bot apologized for not giving them to me initially and then spit out the digits. Now we were cooking. Except there was no gas. The DOI numbers did not link to anything either. Suspecting the worst, I went directly to the journal sites to try to find the articles using the issue and volume numbers. Nope. They did not exist. These were not the droids I was looking for.
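For the technically inclined, the same check can be done programmatically instead of clicking every link by hand. Below is a rough sketch in Python that asks doi.org whether a DOI actually resolves. The function name and the sample DOI are invented for illustration, and some publishers block automated requests, so treat the answer as a hint rather than proof.

```python
import requests  # third-party library: pip install requests

def doi_exists(doi: str) -> bool:
    """Rough check: does doi.org know about this DOI?"""
    # Registered DOIs redirect to the publisher's landing page;
    # made-up ("ghost") DOIs come back as HTTP 404.
    resp = requests.get(f"https://doi.org/{doi}", allow_redirects=True, timeout=10)
    return resp.status_code != 404

# Hypothetical DOI of the kind a chatbot might hand you:
print(doi_exists("10.1234/ghost.reference.2024"))  # False if doi.org has never heard of it
```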


I decided to switch gears. I popped the exact same request into Gemini. The bot came up with three articles. With expectations low, I checked the first one. I was pleasantly surprised to see an actual research paper pull up. The next one…nope. Not there, and neither was the third. I asked it to give me the DOI numbers. It found the DOI for the first reference but apologized for not being able to give DOI numbers for the other two. I'd been given ghost references.


I'm no unicorn, so you know my experiences were not unique. Science.org published a news brief this week about a similar issue. A researcher specializing in AI was providing expert testimony in a Minnesota court. He was there to defend a state law banning AI-generated fake content used to damage political candidates. His testimony was tossed out by the judge. Why? The judge found the references he used were false. The expert admitted to using GPT-4o to source his references and prepare his testimony. The expert got duped and didn't check his work.


AI tools are known to "hallucinate," meaning they share made-up information. Stanford University found chatbots hallucinate often. Really often. In one study, the chatbots hallucinated between 58% and 82% of the time. If the bots were human, children would taunt them for having their pants on fire.


Some of the gurus tell us that AI isn't the enemy. In my opinion, it may be more of a frenemy. These large language models have a lot of growing up to do before becoming a replacement for human knowledge. So remember, when playing with your frenemy bot, always check your work.
