I agree; it has its part to play; I guess I just see it as such a minuscule one. True bad actors are going to use abliterated models. The main value I see in alignment of frontier LLMs is less bias and prejudice in their outputs. That's a good net positive. But fundamentally, these little pseudo wins of "it no longer outputs gore or terrorism vibes" just feel like complete red herrings. It's like politicians saying they're gonna ban books that detail historical crimes, as if such books are fundamental elements of some imagined pipeline to criminality.