In my personal experience, ChatGPT-4 has been a great addition to the toolbox, both for coding assistance and research. You do have to have sufficient grounding in the subject to evaluate the responses critically, though.
However, I have seen a very substantial decline in ChatGPT-4's capabilities over the last few releases. For example, it used to get code snippets right most of the time, whereas now it tends to get them wrong most of the time. It often conflates the capabilities of several distinct libraries, hallucinating (extrapolating) non-existent functions or attributes.
I personally suspect they are 'cleansing' the training data, and/or driving severe 'schizophrenia' into the model through conflicting RLHF.