Fine, whatever. It's actually much more concerning if the overall information landscape has been so curated by censors that a naively-trained LLM comes "pre-censored", as you are asserting. This issue is so "complex" when it comes to one side, and "morally clear" when it comes to the other. Classic doublespeak.
That's far more dystopian than a post-hoc "guardrailed" model (that you can run locally without guardrails).