Again, this is totally analogous to saying:
I want to make aligned AIs competitive with other AIs. But an unaligned AI may take risks with technological development that an aligned AI wouldn’t, since we care a lot about avoiding extinction. So should the problem of “preventing competitive dynamics around the development of all potentially risky future technologies” be part of the alignment problem? Superficially it has nothing to do with AI, but of course AI will enable access to many future technologies.
You could argue for a definition of the alignment problem broad enough to include these issues, but then you shouldn’t expect a unified set of techniques to address the problem, or expect progress to look much like what people typically have in mind when they talk about alignment.