It seems the machine, not a human, hallucinated the incomparable comparison.
(And I'm not picking on the machine at all here. I use it all the time. At first I treated it like an idiot intern who never should have been hired: creative and full of spirit, but untrustworthy, with every idea needing a filter. Lately it's more like a decent apprentice who has a hangover and isn't thinking straight today. The machine keeps getting better as time goes on, but it still drifts off from time to time.)
What do you mean, all LLM output is hallucination? Would you say the same about AlphaGo? That system was also initially trained to predict human play, yet it became competent enough to beat most humans at Go.
> Weird you don't have this requirement for the OP spewing his urban myths above.
It isn't my purpose to try to convince you that your apparent presumption, that the output of a human and a machine are somehow equivalent and should be treated equally, is wrong.
Why is an LLM more prone to hallucination than AlphaGo?
> It isn't my purpose to try to convince you that your apparent presumption, that the output of a human and a machine are somehow equivalent and should be treated equally, is wrong.
You should judge arguments by their merits, not by who is saying them.