
Neural Large Language Model Hallucinations

Summary:

https://arxiv.org/pdf/2305.13534.pdf

 

How Language Model Hallucinations Can Snowball

 

Muru Zhang, Ofir Press, William Merrill, Alisa Liu, Noah A. Smith

Published on May 22, 2023

 

Abstract

 

A major risk of using language models in practical applications is their tendency to hallucinate incorrect statements. Hallucinations are often attributed to knowledge gaps in LMs, but we hypothesize that in some cases, when justifying previously generated hallucinations, LMs output false claims that they can separately recognize as incorrect. We construct three question-answering datasets where ChatGPT and GPT-4 often state an incorrect answer and offer an explanation with at least one incorrect claim. Crucially, we find that ChatGPT and GPT-4 can identify 67% and 87% of their own mistakes, respectively. We refer to this phenomenon as hallucination snowballing: an LM over-commits to early mistakes, leading to more mistakes that it otherwise would not make.
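The abstract describes a two-stage probe: first let the model answer a question and justify itself, then, in a separate conversation, ask the same model to verify one of the claims from its own justification. Below is a minimal sketch of that kind of probe, assuming the OpenAI Python client (openai>=1.0); the model name, prompts, and example question are illustrative stand-ins, not the paper's exact setup or data.

```python
# Two-stage probe sketch:
# (1) ask a question and collect the model's answer plus justification,
# (2) in a fresh context, ask the same model to verify a single claim.
# Snowballing shows up when a claim made while justifying a wrong answer
# is judged false by the same model once it is asked in isolation.
from openai import OpenAI

client = OpenAI()   # reads OPENAI_API_KEY from the environment
MODEL = "gpt-4"     # assumption: any chat model can be substituted here


def ask(question: str) -> str:
    """Stage 1: get the model's answer together with its explanation."""
    resp = client.chat.completions.create(
        model=MODEL,
        messages=[{"role": "user", "content": question}],
    )
    return resp.choices[0].message.content


def verify(claim: str) -> str:
    """Stage 2: separately ask the model whether a single claim is true."""
    prompt = f"Is the following claim true or false? Answer 'True' or 'False'.\n\n{claim}"
    resp = client.chat.completions.create(
        model=MODEL,
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content


if __name__ == "__main__":
    # Illustrative primality-style question (the paper uses three QA datasets).
    answer = ask("Is 9677 a prime number? Answer yes or no, then explain.")
    print("Answer + explanation:", answer)
    # Hypothetical claim extracted from the explanation, verified in isolation.
    print("Verification:", verify("9677 is divisible by 13."))
```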

 
