I have already written a prior post arguing against Leopold Aschenbrenner's logic for a Manhattan Project to achieve artificial superintelligence (ASI). In that post I argued that having more than one system mitigates risk, because we can use cooperative systems to overcome the problems created by misaligned systems. It has since occurred to me that even taking the nuclear-bomb historical analogy entirely on its own merits, Aschenbrenner is wrong. The way we have avoided nuc...