

Exactly.
When the predictive text gives the right answer, we label it “fact.”
When the predictive text gives the wrong answer, we label it “hallucination.”
Both were arrived at by the exact same mechanism. It’s not a hallucination because “something went wrong”; good and bad outputs are produced in functionally identical ways. It’s only a hallucination because that’s what we humans, as actually thinking creatures, decided to name the output we don’t like.
Quite probably the LLM didn’t even know it was there. Just because something appears in the chat window doesn’t mean it’s part of the LLM’s chat history.
This is just Meta dumping bullshit ads in the chat in a way that is invisible to the chatbot.
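A rough sketch of what I mean (purely hypothetical, not Meta’s actual code): the transcript the UI renders and the message list that actually gets sent to the model don’t have to be the same thing, so an ad can show up in the chat window without ever entering the model’s context.

```python
# Hypothetical illustration only: separating the rendered transcript
# from the context the model actually receives.

# Messages that make up the model's real chat history / context.
model_context = [
    {"role": "user", "content": "What's a good hiking trail near me?"},
    {"role": "assistant", "content": "Try the ridge loop at the state park."},
]

# What the UI displays: the same messages plus an injected ad card.
rendered_transcript = list(model_context)
rendered_transcript.insert(
    1, {"role": "sponsored", "content": "Ad: 20% off hiking boots"}
)

# On the next turn, only model_context (plus the new user message) is sent
# to the model, so the chatbot never "sees" the ad that appeared on screen.
next_request = model_context + [
    {"role": "user", "content": "How long is that loop?"}
]
```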