Imagine that GPT is just another person you communicate with. When they give you new information, how do you guard against the possibility that they're wrong? You verify the information against other sources.
The "quality" of the wrong information you get from GPT-4 is very different from a human who is wrong. For example, I wouldn't expect a human to give me a long list of books that don't actually exist without hesitation.
Sure, but still, if you ask GPT/a person "What are the best books about teaching dogs to sit?" you'd still look up each book individually, read reviews, and figure out whether they're really worth the time, before purchasing the books. And you'd find out whether a book exists as soon as you search for it.
So even if the "quality" is different, the way to verify the information is the same.
Both AI and humans can be wrong, but in different ways. Humans often mess up due to bias or memory slips, while AI usually stumbles due to data gaps or misunderstood context. AI misinformation isn't 'worse,' it's just different. Understanding this helps us use AI more effectively.
Zero trust: you have to unit test and run what it gives you. I'll also tell it in a separate session that a co-worker gave me this solution and it doesn't work, and ask it to explain why. I quite often enlist the bot in helping to prove itself right.
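To make the "unit test, run what it gives you" step concrete, here's a minimal sketch: the function below stands in for something the bot might have returned (the name `slugify` and its behavior are made up for illustration), and the asserts are the zero-trust checks you'd write yourself before using it.

```python
import re

# Hypothetical function as the bot might have returned it
# (name and behavior invented here purely for illustration).
def slugify(title: str) -> str:
    """Lowercase a title and replace runs of non-alphanumerics with '-'."""
    return re.sub(r"[^a-z0-9]+", "-", title.lower()).strip("-")

# Zero trust: run it against cases you choose yourself,
# including edge cases the bot may not have considered.
assert slugify("Hello, World!") == "hello-world"
assert slugify("  multiple   spaces  ") == "multiple-spaces"
assert slugify("") == ""     # empty input
assert slugify("---") == ""  # nothing but separators
```

The point isn't the function itself; it's that the test cases come from you, not the bot, so a hallucinated edge-case behavior gets caught before it ships.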
Googling. Just today I asked ChatGPT (not 4) for papers or books about some topic, and it gave me five pointers. Two of them contained useful information; the rest were hallucinated.
Use GPT-4 with web browsing mode enabled, or Bing Chat, if you want links to real articles. Bing Chat has come a long, long way. Impressive capabilities, and much less hallucination.
Bing chat? You mean having to use Edge, aka Chromium without any extensions? I'd sooner go to Firefox.
GPT-4 with browsing isn't quite there yet either; it usually takes at least two or three attempts before it doesn't fail somewhere in between. Should be pretty good once they iron it out, though.