For some reason ChatGPT gets further from reality the deeper it gets into its response. Maybe there's some depth-of-tree limit or something.
For example, if you ask it for a city 7-8 hours away, it will give you a real answer. If you ask for another, it will give you another real answer.
But ask it for a list of 10 cities 7-8 hours away and you'll get 1-2 reasonable answers and then 8 that are completely off, like 1 or 3 hours away.
If you tell it those answers are wrong, it will correct exactly one mistake. If you call out each mistake individually, it will concede each one in hindsight.