Friday, 17 May 2024

Hallucinations

In the AI realm, a "hallucination" is a terrible thing. It is generally described as an output that has no basis in the model's training data or in reality, meaning that it is inaccurate. The AI system tries to give you the right answer to your question / prompt, but, because it lacks the right reasoning or data, it gives you its best guess instead.

An example (from an IBM lecture): the AI model has been trained on data up to mid-2022 and you ask about a planet that was discovered in January 2024. The system might generate a fictional, plausible-sounding but factually inaccurate answer based on its general understanding of astronomy and scientific discoveries.

There are some user-side techniques to mitigate hallucinations. One of them is to check what data the AI system was trained on (in my example, you should have a look at the system's knowledge cutoff date). But probably the most powerful techniques come from prompt engineering: working on the way you present your questions / prompts so that you guide the AI system towards grounded answers instead of guesses, as in the sketch below.
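As a rough illustration of that idea (a minimal sketch, not tied to any particular AI provider), the Python snippet below builds a "grounded" prompt: it supplies reference context and explicitly tells the model to answer only from that context, or to admit it does not know. The `ask_model` function is a hypothetical placeholder for whatever chat / completion call you actually use.

```python
# A minimal sketch of a "grounded" prompt that asks the model to answer only
# from supplied context and to admit uncertainty instead of guessing.
# `ask_model` is a placeholder for whatever LLM call you actually use.

def build_grounded_prompt(question: str, context: str) -> str:
    """Wrap the user's question with reference material and explicit instructions."""
    return (
        "Answer the question using ONLY the context below.\n"
        "If the context does not contain the answer, reply exactly: 'I don't know.'\n\n"
        f"Context:\n{context}\n\n"
        f"Question: {question}\n"
        "Answer:"
    )


def ask_model(prompt: str) -> str:
    # Placeholder: replace with a call to your AI provider of choice.
    raise NotImplementedError


if __name__ == "__main__":
    context = "The system's training data ends in mid-2022."
    question = "Tell me about the planet discovered in January 2024."
    prompt = build_grounded_prompt(question, context)
    print(prompt)  # A well-behaved model should answer "I don't know." here.
```

The point of the sketch is simply that the instructions and the context travel together with the question, so the model has a clear, safe way out when it lacks the facts.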
