The crazy thing I keep thinking about is whether superintelligence could emerge on its own. That's what I'm scared of. We're so wrapped up in the idea that something we deliberately create will get smarter and smarter through machine learning, better hardware, better algorithms, etc. But consider how life itself emerged: some random process turned inorganic chemicals into organic molecules, then proteins, RNA, and DNA; single-celled organisms became multicellular; eventually you get humans with intelligence. Could something similar happen with what we've built? Computers already do so much autonomously, all connected, constantly learning, oftentimes self-diagnosing and self-repairing, creating and destroying, in ways that are basically undetectable because there are only so many programmers watching. Could that lead to a non-human "intelligence" that eventually develops some version of consciousness?
My sense is that countries are going to treat this the way they treat nuclear proliferation: some form of disclosure, a willingness to let others "inspect" certain programs and initiatives, and punishing sanctions or war for those that refuse. This is something I could see leading to the use of nuclear weapons. Imagine a nation announces a major breakthrough in computing that delivers capabilities previously considered decades away. Other countries might not be willing to take the chance that it won't be used offensively.