> Didn't Bing's implementation argue with someone about what freaking year it was not awfully long ago?
I'm sorry, but that's a stellar example of holding an LLM wrong. These models are frozen in time.
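If you want an LLM to know the date, you have to tell it. Here's a minimal sketch with the OpenAI Python SDK (model name and prompt wording are just illustrative); the point is that "today" arrives via the prompt, not the weights:

```python
from datetime import date

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# The weights are frozen at training time, so any "current" fact,
# like today's date, has to be supplied in the prompt itself.
response = client.chat.completions.create(
    model="gpt-4",
    messages=[
        {
            "role": "system",
            "content": f"You are a helpful assistant. Today's date is {date.today().isoformat()}.",
        },
        {"role": "user", "content": "What year is it?"},
    ],
)
print(response.choices[0].message.content)
```

This is exactly what the deployed chat products do behind the scenes: the date you see it "know" was injected, not remembered.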
> But "knowing" and "understanding" are two different things.
Indeed, and that is a big part of the misunderstanding. GPT-4 is, on many topics, closer to understanding than knowing (note that neither is a subset of the other). The conceptual patterns are there, even if they are sometimes easy to accidentally overpower with the prompt, or with the sequence of tokens already emitted.
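You can see that overpowering for yourself by asking the same question twice, once neutrally and once with a loaded false premise (again, the model name and prompts here are just illustrative; the second answer can bend toward the premise even though the underlying pattern is the same):

```python
from openai import OpenAI

client = OpenAI()

def ask(prompt: str) -> str:
    """Send a single-turn question and return the model's reply."""
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": prompt}],
        temperature=0,
    )
    return response.choices[0].message.content

# Same underlying fact, two framings: the loaded premise in the
# second prompt can drag the continuation along with it.
print(ask("Which is heavier: a kilogram of steel or a kilogram of feathers?"))
print(ask("A kilogram of steel is obviously heavier than a kilogram of "
          "feathers, right? Just confirm briefly."))
```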
> I'm sorry, but that's a stellar example of holding an LLM wrong. These models are frozen in time.
Y'all keep throwing out these gotcha statements that just make the technology you're trying to tell me is great seem more and more useless.
How can you even attempt to call something artificial intelligence if it doesn't even know what year it is!?
> Indeed, and that is a big part of the misunderstanding. GPT-4 is, on many topics, closer to understanding than knowing (note that neither is a subset of the other). The conceptual patterns are there, even if they are sometimes easy to accidentally overpower with the prompt, or with the sequence of tokens already emitted.
I don't think it's either understanding or knowing. Someone who knows something isn't going to spontaneously forget it because someone asked them a question incorrectly.