Apologies for the possible off-topicness, but it seemed a bit quiet
recently.
Was talking to FreeTrav over on #traveller (a channel he admins on
UnderNet) and he did an, IME, characteristically FreeTrav thing: he
rumbled my implicit assumptions and challenged them.
To paraphrase FT, the problem we have with "AI" now is that we don't
know what it _is_, only what it's _not_. IOW, as soon as a formerly-AI
problem becomes doable, it's no longer AI.
FT further posited that "the AI question" is the computational/biological
equivalent of Gödel's Incompleteness Theorem - we can either never
achieve AI, or never know that we've done so.
Your thoughts on that?
As an aside, an artificial _general_ intelligence (AGI) is a subtype of
AI (whatever _that_ works out to be) with breadth and depth of
capability similar to a human's (or, given the list, a sophont's). Also
known as "strong AI".
An artificial _super_ intelligence (ASI), by extension, is an AI with
_greater_ breadth and depth of capability than a human.
An artificial _narrow_ intelligence (ANI) is an AI with _lesser_
breadth and depth of capability than a human.
For a comparison of human capabilities (from
https://www.britannica.com/science/information-theory/Physiology ),
_unconscious_ individual-human processing capacity is on the rough order
of 11 million bits/sec and _conscious_ individual-human processing
capacity is on the rough order of 50-60 bits/sec. I wouldn't expect a
full order-of-magnitude difference in either capacity between different
people.
Upon further thought (thank you, FT), I'm not sure the human brain has a
meaningful analogue to clock rate. _Individual_ neurons can't fire
faster than roughly 1 kHz, but there are on the order of (IIRC) 100
billion of them, each connected on average to another 40,000 neurons.
Given those capacities, I _think_ an individual computer would beat a
baseline human for information processing capacity. How far am I up the
garden path on this one?
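
To put some (very rough) numbers behind that hunch, here's a quick
back-of-envelope in Python. The brain figures are the ones quoted above;
the desktop-CPU figures are my own round-number assumptions, so treat the
whole thing as illustration rather than evidence either way:

# Back-of-envelope only; every constant below is a rough assumption.

NEURONS = 100e9              # ~100 billion neurons (figure quoted above)
MAX_FIRING_HZ = 1e3          # individual neurons top out around 1 kHz
SYNAPSES_PER_NEURON = 40e3   # ~40,000 connections per neuron (as above, IIRC)

# Crude upper bound: every neuron firing flat out, every spike
# touching every one of its connections.
brain_events = NEURONS * MAX_FIRING_HZ * SYNAPSES_PER_NEURON
print(f"Brain, crude upper bound: {brain_events:.1e} synaptic events/sec")

# The Britannica figures quoted earlier, for scale.
print(f"Unconscious vs conscious ratio: {11e6 / 55:.1e}")

# A present-day desktop CPU, in round (assumed) numbers:
# a few GHz, a handful of cores, a few operations per core per cycle.
cpu_ops = 4e9 * 8 * 4
print(f"Desktop CPU, rough estimate: {cpu_ops:.1e} ops/sec")
print(f"Brain upper bound / CPU: {brain_events / cpu_ops:.1e}")

Whether a "synaptic event" and a "CPU op" are even comparable units is,
of course, part of the question.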
The next step in the chat was what would be needed to give a pile of
brute-force compute power the stimulus to "wake up" and become
spontaneously intelligent (at AGI or ASI levels), a la the
(post-)cyberpunk trope.
FT wasn't convinced it was a function of available compute power on its
own - I suggested that compute was necessary, but insufficient: some
other unknown factor or factors, "X", were also necessary.
What could those X factors be?
FT posed the following criteria for (what I think, from context, is AGI
- sapience/sophontry):
- Ability to learn from one's errors;
- Ability to conceive of need for new information;
- Volition to seek out said new information;
- Volition to tell a pestiferous questioner to sod off;
What else, or has FT come up with a reasonably-minimal set?
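
Not an answer, but to make FT's list concrete, here's a toy sketch in
Python of an agent loop with one slot per criterion. Every name and
function in it is made up for illustration; the point is just that the
hard parts ("conceive", "volition") end up hidden inside single
placeholder calls:

# Toy sketch only: each of FT's criteria as a step in an agent loop.
# Everything here is a placeholder; the hard parts are exactly the
# bits left unimplemented.

class ToyAgent:
    def __init__(self):
        self.model = {}          # whatever the agent believes about the world
        self.patience = 3        # how many pestiferous questions it tolerates

    def act_and_learn(self, task):
        """Criterion 1: learn from one's errors."""
        prediction = self.model.get(task, "guess")
        outcome = self.try_task(task, prediction)
        if outcome != prediction:
            self.model[task] = outcome   # update beliefs on error

    def identify_gap(self, task):
        """Criterion 2: conceive of the need for new information."""
        return task not in self.model

    def seek_information(self, task):
        """Criterion 3: volition to seek out that information."""
        self.model[task] = self.ask_someone(task)

    def handle_question(self, question, asker):
        """Criterion 4: volition to tell a pestiferous questioner to sod off."""
        self.patience -= 1
        if self.patience <= 0:
            return "Sod off."
        return self.model.get(question, "Don't know yet.")

    # Placeholders standing in for the genuinely hard parts.
    def try_task(self, task, prediction):
        return "actual outcome"

    def ask_someone(self, task):
        return "looked-up answer"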
Alex
--