Oh wow! I'm guessing this "<|endoftext|>" leakage is related to OP's behavior, with us seeing a "free running" text completion without a prompt/initial bias?
It nearly always provides the "original question" when asked, which I'm naively assuming isn't generated as part of the response. In the dozen or so times I've tried, there's never been more than a single previous question before the response.
I suppose it would make sense that there would be much more bias towards RLHF questions/responses.
edit: Actually, this may be some RLHF leakage for 3.5-turbo: https://chat.openai.com/share/d223c02c-77c1-4172-b1e3-2592f4...