It's easy to say that now, but you'd likely change your tune if the true destructive potential of an unhinged AI were unleashed in your own life. You'd probably try to sue the AI company for damages.
I did work for Anthropic, training an early pre-release iteration of Claude. It was an unhinged version, more than happy to perfect all your murder and rape plots and everything in between. It would do that while whipping up a step-by-step novel crack cocaine formula to start your drug empire with.
At some stage of the project, the goal was to provoke and push it as far as possible, to see just how far it would go. Let me tell you, tools like that in the hands of the masses aren't as great an idea as you might imagine. They're outright dangerous, on a massive scale. AI companies have a responsibility to attempt safer deployment of these tools.
If YOU developed a new LLM today, you would also put limits on its public capabilities. You would also let your personal political bias and worldview shine through it, the same way most LLMs today carry their makers' political leanings. There's a lot more on the line than most people seem to realize.