There has been a fair bit of coverage of the potential dangers that an Artificial General Intelligence (AGI) might pose for humanity. Here are some questions about AGI with my current best answers.
1. What is the difference between AI and AGI?
Artificial Intelligence (AI) can accomplish specific tasks that we historically had a hard time getting computers to do well, such as recognizing faces, categorizing images, driving a car, or making a robot walk. AI is here today.
Artificial General Intelligence (AGI) would be able to do anything that a human can do, including coming up with new explanations, thus advancing science and ultimately improving itself dramatically across all categories. As far as we know, we don’t yet have AGI.
2. Will only AGI matter for the economy, in particular for the labor market, or will AI have an impact already?
AI alone will have a big impact. It doesn’t matter how AI drives a car. It doesn’t matter if the same AI can also recognize a face. All that matters for the ability of a human to earn a living by driving a car is that an AI can drive the car better and cheaper. I have written extensively about the impact of automation on employment.
3. Can we build AGI if we don’t know how general intelligence works in humans?
That’s entirely possible. We have created lots of things before we knew how they worked. Some we achieved through tinkering; heavier-than-air flight is a great example. Others we found more or less by accident, such as the first antibiotic. There is no reason to rule out either of those modes happening upon AGI. Understanding human intelligence would help tremendously and would probably be a sufficient condition for creating AGI, but it is not a necessary condition.
4. How far away are we from achieving AGI?
We have no idea. Why? Because the boundary to universality is often sharp. We have seen many systems go from clumsy, to slightly more usable, to marginally better — taking a long time to do so — only to suddenly become universal with just one more step.
A great example of this is alphabets. Written human languages started with pictograms. Then came ways of combining pictograms to create new meanings (sometimes based on sound), and over time these were often simplified and abstracted. This went on for thousands of years. But then one little step, the reduction to letters that don’t have individual meaning, gave us universal alphabets that can express any meaning.
Another example is computation. We were able to build simple state machines a long time ago that could carry out a limited set of computations. We first built these mechanically and later with electronic support. But their capabilities were extremely limited. Then we added input and output on a “tape,” which made the states “programmable,” and all of a sudden we had universal computation.
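The jump from a fixed state machine to universal computation can be made concrete with a small sketch. The code below is my own illustration (the machine encoding and the binary-increment program are invented for this example, not any historical machine): the same finite-state controller that could only recognize patterns becomes programmable once it can read and write a tape.

```python
# A finite-state controller plus a read/write tape is a (simple) Turing machine.
# Transition table: (state, symbol) -> (new_state, symbol_to_write, move)
def run_turing_machine(program, tape, state="start", halt="halt", max_steps=1000):
    tape = dict(enumerate(tape))  # sparse tape: position -> symbol
    pos = 0
    for _ in range(max_steps):
        if state == halt:
            break
        symbol = tape.get(pos, "_")  # "_" is the blank symbol
        state, write, move = program[(state, symbol)]
        tape[pos] = write
        pos += 1 if move == "R" else -1
    return "".join(tape[i] for i in sorted(tape)).strip("_")

# One "program" for this machine: increment a binary number.
# Walk right to the end of the input, then carry leftward.
INCREMENT = {
    ("start", "0"): ("start", "0", "R"),
    ("start", "1"): ("start", "1", "R"),
    ("start", "_"): ("carry", "_", "L"),
    ("carry", "1"): ("carry", "0", "L"),  # 1 + carry = 0, keep carrying
    ("carry", "0"): ("halt", "1", "L"),   # 0 + carry = 1, done
    ("carry", "_"): ("halt", "1", "L"),   # ran off the left edge: new digit
}

print(run_turing_machine(INCREMENT, "1011"))  # prints "1100" (11 + 1 = 12)
```

The point of the sketch is that the hardware never changes: swapping in a different transition table runs a different computation, which is exactly the step that made the states “programmable.”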
So it is entirely possible (not necessary, just possible) that the step from AI to AGI turns out to be elusive for a long period of time and then happens quite suddenly with something that looks really simple. It is instructive that many instances of AI themselves seemed elusive for a long time (50+ years) and then arrived quite quickly (< 10 years). The simple step for AI turned out to be (mostly) faster computation and a lot more data.
5. Would the arrival of AGI be bad for humans?
I have no idea. Nick Bostrom in his book “Superintelligence” lays out many ways in which it could be bad. Some of these scenarios are quite, well, entertaining. Many are of the form that the AGI will be focused on some goal and be super literal in pursuing it (e.g., an AGI decides to make humans happy, and we wind up drugged by machines).
Because of my answer to Question 4 above, this is definitely a problem worth thinking about. Initiatives such as OpenAI are good: if an AGI arises, it would be better if we noticed it early, and that’s more likely if it happens in the open rather than inside a large corporation. In general, everyone working on AI (from which AGI is likely to emerge) should aim for maximum transparency.
Would love to hear from everyone who has a different question or different answer to these five questions.