Yes and no. The hallucination rate shown there is the percentage of the time the model answers incorrectly when it should instead have admitted it didn't know. Most models score very poorly on this, with a few exceptions, because they nearly always attempt an answer. It's true that 3.0 is no better than others by this measure. But given that it knows the correct answer much more often than e.g. GPT 5.2, it does in fact give hallucinated answers much less often.
In short, its hallucination rate as a percentage of unknown answers is no better than most models', but its hallucination rate as a percentage of total answers is indeed better.
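The distinction between the two rates is easy to see with made-up numbers (these are illustrative only, not real benchmark figures):

```python
# Hypothetical illustration: two models guess equally often when they
# don't know the answer, but one simply doesn't know far less often.
total_questions = 1000

# "known": questions the model genuinely knows the answer to.
# "guess_rate": fraction of unknown questions it answers anyway (hallucinates).
models = {
    "Model A": {"known": 400, "guess_rate": 0.90},
    "Model B": {"known": 800, "guess_rate": 0.90},  # knows twice as much
}

for name, m in models.items():
    unknown = total_questions - m["known"]
    hallucinated = unknown * m["guess_rate"]
    rate_of_unknown = hallucinated / unknown          # what the benchmark reports
    rate_of_total = hallucinated / total_questions    # what a user actually experiences
    print(f"{name}: {rate_of_unknown:.0%} of unknowns, "
          f"{rate_of_total:.0%} of all answers are hallucinations")
```

Both models score an identical 90% on the benchmark's metric, but Model B hallucinates on only 18% of all answers versus Model A's 54%, because it reaches the "I don't know" situation far less often.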
I'm not the person you asked, but I assume their basis is that the majority of the adult US population is overweight or obese.[1]
However, we're conflating the related problems of hunger, food insecurity, and malnutrition. Food insecurity at its most extreme will result in hunger (a lack of any food), but the affordable food that is available in food deserts (and at food banks) is often ultraprocessed and incompletely nutritious, which can lead to obesity.[2]
Largely, Americans don't seem to be affected by "hunger" as defined by the United Nations Food and Agriculture Organization[3], but are very affected by malnutrition and food insecurity (as defined by that same body).
Yeah, also it shows the comment is ignorant of history.
In the immediate aftermath of the Korean War, the North was actually more prosperous than the South. That changed with time, dramatically so, but initially it was reasonable to see the North as having better economic prospects.
https://artificialanalysis.ai/#aa-omniscience-hallucination-...
If you look at the results, 3.0 hallucinates an awful lot when it's wrong.
It's just not wrong that often.
(And it looks like 3.1 does better on both fronts)