On Fri, Jun 24, 2016 at 05:24:00PM -0700, Jim Vassilakos wrote:

> So, my first guess is that when a society develops strong AI,
> clearing the final hurdle is not due to a breakthrough in hardware
> so much as software, which means that the whole process will be
> replicable on a mass scale and that various “finished” AIs could be
> copied rather easily.

At the moment, it looks like that could go either way. There are
certainly reasons to believe that we could get much better AI on
existing hardware than we have so far, but there are also reasons to
think that we may need better (and so very expensive) hardware to
approach or exceed human capabilities.

> I’m guessing that the first strong AIs will be able to inhabit robot
> bodies and will be raised to some extent as are children. [...]
> My guess is that AIs that chronically misbehave will be aborted.

This is where things start to get very messy. The range of ways strong
AI might develop is extremely broad. If socially acceptable behaviour
cannot be designed in but must be learned as humans learn it, that
indicates a dangerous lack of understanding of how the AIs actually
work internally.

If strong AI is just a "software issue", so that hardware capable of
supporting such sapience is cheap, portable, and widely available by
the time it develops, that is probably the most dangerous and
unpredictable path of all. It means that nobody really knows how the
few initial strong AIs think, and whichever one convinces humanity of
its benignity first probably gets to replicate billionfold in short
order. It also means that there is hardware very much faster than the
generally available mass-produced kind on which those AIs might run.
So, virtually overnight, there are a billion copies of some AI in
robot bodies, of unknown internal thought processes and motivations,
and probably thousands of copies embodied as super-AIs.

Society thus ends up with a huge monoculture of poorly understood
beings who can easily copy themselves onto commodity hardware, some of
whom can think and learn just as well as, and very much faster than,
any human. That's not an automatic recipe for disaster, but it's
fertile ground for one.

> They will create entirely new sciences and technologies and
> eventually even more powerful versions of themselves. One day,
> humanity and strong-AI may merge to an extent, if not completely.

Of the possibilities under these assumptions, that scenario probably
holds the most hope for humanity, but given these starting points I
think it is less likely to arise than it otherwise would be.

[ Snipped tables that have been saved and look like a useful starting
  point for a game ]

> I’ll stop here for the moment, but you get the general idea. What do
> you think so far?

Very interesting indeed. It's probably a rockier and more dangerous
start to the development scenario than I would hope for, but the
assumption that strong AI can inhabit cheap hardware as soon as it is
developed does seem to be a common theme across a lot of science
fiction, both optimistic and horribly grim.

- Tim