This is the correct answer. I have 23 years of experience in datacenter ops, and it has been a game changer for me. Just like any tool in one's arsenal, its utility increases with practice and learning to use it correctly. ChatGPT is no different. You get out of it what you put into it. This is the way of the world.
I used to be puzzled as to why my peers were so dismissive of this tech. The same folks who would say "We don't need to learn no Kubernetes! We don't need to code! We don't need ChatGPT". They don't!
And it's fine. If their idea of a career is working at the same small co., doing the same basic Linux sysadmin tasks for a third of the salary I make, then more power to them.
The folks dismissive of AI/ML tech are effectively capping their salary and future prospects in this industry. This is good for us! More demand for experts and less supply.
You ever hire someone that uses punch cards to code?
I think it's more akin to using compilers in the early days of BCPL or C. You could expect them to produce working assembly for most code, but sometimes the output would be slower than a hand-tuned version, and sometimes a compiler bug would surface. Still, it worked well enough most of the time.
For decades there were still people who coded directly in assembly, and with good reason. And eventually the compiler bugs would be encountered less frequently (and the programmer would get a better understanding of undefined behavior in that language).
Similar to how dropping into inline assembly to speed up execution can still have its place sometimes, I think using GPT for small blocks of code to speed up developer time may make some sense (or tabbing through Copilot). But just as with the early days of higher-level programming languages, expect to come across cases where it doesn't actually save developer time, or where it introduces a bug.
These bugs can be quite costly. I've seen GPT spit out encryption code that leaves out critical pieces, like dropping required arguments to a library call or generating the same nonce or salt value on every execution. If you're not well versed in the domain, code like this is very easy to overlook, and unit tests would likely still pass.
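To make that failure mode concrete, here's a minimal sketch of what a hardcoded-nonce bug looks like next to the fix, assuming Python's cryptography library and AES-GCM. The function names are illustrative, not taken from any actual GPT output:

```python
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=256)

def encrypt_buggy(plaintext: bytes) -> bytes:
    # Bug: a constant nonce is reused for every message. With AES-GCM,
    # nonce reuse under the same key leaks relationships between plaintexts
    # and breaks authentication guarantees.
    nonce = b"\x00" * 12
    return nonce + AESGCM(key).encrypt(nonce, plaintext, None)

def encrypt_fixed(plaintext: bytes) -> bytes:
    # Fix: a fresh random 96-bit nonce per message, prepended to the ciphertext.
    nonce = os.urandom(12)
    return nonce + AESGCM(key).encrypt(nonce, plaintext, None)

def decrypt(blob: bytes) -> bytes:
    nonce, ciphertext = blob[:12], blob[12:]
    return AESGCM(key).decrypt(nonce, ciphertext, None)

# Both versions round-trip correctly, so a naive unit test can't tell them apart.
assert decrypt(encrypt_buggy(b"hello")) == b"hello"
assert decrypt(encrypt_fixed(b"hello")) == b"hello"
```

The point is that the broken version passes the obvious round-trip test; only someone who knows the domain would notice the nonce never changes.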
I think the same lesson we tell young programmers applies here: don't copy/paste any code that you do not sufficiently understand. Also, maybe avoid using this tool for critical pieces like security and reliability.
I think many folks had those early chats with friends where one side was dismissing LLMs for all sorts of reasons, when the point was to explore and test the potential.
While part of me enjoyed the early GPT much more than the polished version today, as a tool it's much more useful to the average person, should they ever find their way back to GPT somehow.