Same. Even for technologies that it supposedly should know a lot about (e.g. Kafka), if I prompt it for something slightly non-standard, it just makes up things that aren't supported or is otherwise unhelpful.
The one time I've found ChatGPT to be genuinely useful is when I asked it to explain a bash script to me, seeing as bash is notoriously inscrutable. Still, it did get a detail wrong somehow.
Yes, it is good at summarizing things and compressing them down to labels. It's much worse at producing concrete, specific results from its corpus of abstract knowledge.
I think that's the case with every discipline, not only programming. Even when everyone was amazed it could make poetry out of anything, if you asked for a specific type of poem with specific imagery in it, it would generally fail.