📚 AI Basics

ChatGPT Tells Lies? Understanding and Handling the AI Hallucination Phenomenon

AI confidently lying? That's the 'hallucination phenomenon.' Let me explain why it happens and how to deal with it!

Have you ever asked ChatGPT something and gotten complete nonsense delivered with total confidence?

You probably thought "Something's wrong with this..."

This is the AI hallucination phenomenon.

Today I'll explain why this happens and how you should deal with it!

What is Hallucination?

Hallucination is when AI makes up plausible lies.

Real Example

Question:

"What's the title of Professor Kim Chul-su's representative paper?"

AI Response:

"Professor Kim Chul-su published a paper titled 'Ethical Considerations of Artificial Intelligence' in Nature in 2019."

Reality:

  • โŒ No such paper exists
  • โŒ Never published in Nature
  • โŒ Professor Kim Chul-su might not even exist

But the AI states it confidently, as if it were established fact!

Why Do Hallucinations Happen?

1. AI is a "Pattern Prediction" Machine

AI doesn't actually know facts; it predicts plausible patterns.

Example:

  • "Professor" + "paper" + "title" โ†’ Combines patterns seen in training data
  • "Nature", "2019", "ethical considerations" โ†’ Common words
  • Combines these to generate plausible sentences

It doesn't actually fact-check!

2. Makes Things Up When It Doesn't Know

Humans say "I don't know" when they don't know, right?

But AI is trained to generate answers, so it makes plausible responses even when it doesn't know.

3. Information Not in Data

Questions about information that isn't in the training data are very likely to produce hallucinations.

Examples:

  • โŒ "December 2024 news" (training data only up to 2023)
  • โŒ "My friend Park Min-su's phone number" (personal info not trained)

Why Hallucinations are Dangerous

1. Confidently Lying

AI speaks in a tone of certainty.

It speaks definitively, without hedging expressions like "perhaps" or "it seems," which makes it easy to believe.

2. Makes Up Sources

What's even scarier is that it fabricates sources.

"This information is cited from a 2021 Harvard Business Review paper."

→ When you check, no such paper actually exists!

3. Especially Dangerous in Legal/Medical Fields

  • "These symptoms mean โ—‹โ—‹ disease" โ†’ Misdiagnosis risk
  • "You can sue in this case" โ†’ Wrong legal advice

Never blindly trust!

How to Detect Hallucinations

๐Ÿ” Suspicious Signals

1. Overly Specific Numbers

"This method precisely increases efficiency by 47.3%."

→ Suspiciously precise numbers!

2. Unfamiliar Sources

"According to Journal of Advanced AI Research March 2020..."

→ Never heard of this journal? Must verify!

3. Recent Information

ChatGPT's underlying models have a knowledge cutoff; GPT-4-era models don't know about events after roughly October 2023.

If it confidently describes 2024 or 2025 information without searching the web, it's almost certainly a hallucination!

4. Personal Information

"Your company's sales last month were..."

→ AI can't know personal information!
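
If you want to screen answers automatically, these signals can be turned into a rough keyword check. Below is a minimal Python sketch; the patterns and the cutoff year are illustrative assumptions, not a reliable detector, and anything it flags still needs human verification.

import re

# Rough heuristic patterns for the suspicious signals above.
# These are illustrative assumptions, not a reliable hallucination detector.
SUSPICIOUS_PATTERNS = {
    "overly precise number": r"\b\d+\.\d+\s*%",
    "post-cutoff year": r"\b202[4-9]\b",
    "citation-style claim": r"\b(according to|cited from)\b.+\b(19|20)\d{2}\b",
}

def flag_suspicious_claims(answer: str) -> list[str]:
    """Return the names of any suspicious signals found in an AI answer."""
    return [
        name
        for name, pattern in SUSPICIOUS_PATTERNS.items()
        if re.search(pattern, answer, flags=re.IGNORECASE)
    ]

answer = "This method precisely increases efficiency by 47.3%, according to a 2025 study."
print(flag_suspicious_claims(answer))
# ['overly precise number', 'post-cutoff year', 'citation-style claim']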

💡 Tips to Prevent Hallucinations

1. Ask "Tell me the source"

You: "Was ChatGPT released in 2022?"
AI: "Yes, it was released in November 2022."
You: "Tell me the source"
AI: "Sorry, I cannot provide an exact source. This information is uncertain."

AI often becomes honest when asked for sources!

2. Ask Multiple AIs

  • Ask ChatGPT
  • Ask Gemini too
  • Verify sources with Perplexity

If answers differ? → High chance of hallucination!
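
If you reach the models through their APIs, this cross-check can even be scripted. In the sketch below, ask_chatgpt and ask_gemini are hypothetical stand-ins stubbed with canned answers so it runs on its own; swap in real API calls, or simply paste each model's answer by hand.

# Hypothetical stand-ins for real model clients; they return canned example
# answers here so the sketch runs without any API keys.
def ask_chatgpt(question: str) -> str:
    return "ChatGPT was released in November 2022."

def ask_gemini(question: str) -> str:
    return "ChatGPT launched on November 30, 2022."

def cross_check(question: str) -> None:
    """Print each model's answer side by side so disagreements stand out."""
    answers = {
        "ChatGPT": ask_chatgpt(question),
        "Gemini": ask_gemini(question),
    }
    for name, answer in answers.items():
        print(f"{name}: {answer}")
    # If the answers disagree on key facts (dates, names, numbers),
    # treat the claim as unverified and check a primary source.

cross_check("When was ChatGPT released?")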

3. Verify with Google Search

Google the information the AI gives you.

If nothing comes up, there's a high chance it's a hallucination.

4. "Be honest if you're not sure"

Add this to your prompt:

"If you're not sure, honestly say 'I don't know'.
Don't guess, only tell me what you know."

This reduces hallucinations!

5. Verify Important Info with Official Sources

  • Medical info โ†’ Hospital consultation
  • Legal advice โ†’ Lawyer consultation
  • Statistics โ†’ Public data portal
  • News โ†’ News media sites

AI is for reference only; the final verification is up to you!

Which AI Has Fewer Hallucinations?

Perplexity AI

Features:

  • Provides source links for all answers
  • Can search academic papers
  • Best for fact-checking

Recommended use: Research where fact-checking is important

Bing Chat (Copilot)

Features:

  • Real-time internet search
  • Shows sources
  • Provides latest information

Recommended use: Latest news, real-time information

ChatGPT (Search Plugin)

Using paid plugins enables internet search, reducing hallucinations.

Questions Bound to Produce Hallucinations

โŒ Avoid These Questions

"What's my friend's name?"
โ†’ AI can't know

"What's the stock price in January 2025?"
โ†’ Not in training data

"What disease do I have with these symptoms?"
โ†’ Medical diagnosis is for doctors!

"Find Professor Kim Chul-su's paper"
โ†’ Specific personal info has high hallucination risk

✅ These Questions are Safe

"Tell me how to sort a list in Python"
โ†’ General technical knowledge, verifiable

"Tell me how to structure a presentation"
โ†’ General tips, no fact-checking needed

"Create an email template"
โ†’ Creative content, facts don't matter
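
The Python question above is a good example of a safe ask, because you can verify the answer simply by running it:

numbers = [3, 1, 4, 1, 5]

# sorted() returns a new sorted list and leaves the original untouched
print(sorted(numbers))                 # [1, 1, 3, 4, 5]
print(sorted(numbers, reverse=True))   # [5, 4, 3, 1, 1]

# list.sort() sorts the list in place
numbers.sort()
print(numbers)                         # [1, 1, 3, 4, 5]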

Real Case: Lawyer's Mistake

2023 US Case:

A lawyer cited fake precedents created by ChatGPT in court documents.

  • AI made up 6 non-existent precedents
  • Lawyer submitted without verification
  • The court caught it, and the lawyer was disciplined

Lesson: Always verify AI responses!

Prompt Writing to Reduce Hallucinations

โŒ Bad Prompt

"Who founded ChatGPT?"

โ†’ Simple question but hallucination risk exists

✅ Good Prompt

"Tell me who the CEO of OpenAI is.
If you're not sure, say 'I don't know'.
Provide publicly available sources if possible."

→ Asks the model to admit uncertainty and to provide sources
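
If you call the model through an API, the same instruction can go into the system message so it applies to every question. Here is a minimal sketch using the official openai Python package; the model name is a placeholder, and it assumes OPENAI_API_KEY is set in your environment.

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o",  # placeholder; use whichever model you have access to
    messages=[
        {
            "role": "system",
            "content": (
                "If you're not sure, honestly say 'I don't know'. "
                "Don't guess, only state what you know, and cite publicly "
                "available sources when possible."
            ),
        },
        {"role": "user", "content": "Tell me who the CEO of OpenAI is."},
    ],
)

print(response.choices[0].message.content)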

Next Steps?

Now that you understand hallucinations, aren't you curious about how people actually use AI in real life?

In the next article, I'll collect real examples of how people use AI.

Seeing specific examples of AI usage in work, study, and hobbies will give you ideas!


Next Article Preview: 📌 Representative Cases of Using AI in Daily Life