In my draft book World After Capital, I write that humans having knowledge is what makes us distinctly human and gives us great power (and hence great responsibility). I define knowledge in this context as science, philosophy, art, music, etc. that’s recorded in a medium so that it can be shared across time and space. Such knowledge is at the heart of human progress, because it can be improved through the process of critical inquiry. We can fly in planes and feed seven billion people because we have knowledge.
There is an important implication of this analysis, though, that I have so far not pursued in the book: if and when we have true General Artificial Intelligence, we will have a new set of humans on this planet. I am calling them humans on purpose, because they will have access to the same power of knowledge that we do. The question is what they will do with that knowledge, which has the potential to grow much faster than it has to date.
There is a great deal of fear about what a “Superintelligence” might do. The philosopher Nick Bostrom has written an entire book by that title and others including Elon Musk and Stephen Hawking are currently warning that the creation of a superintelligence could have catastrophic results. I don’t want to rehash all the arguments here about why a superintelligence might be difficult (impossible?) to contain and what its various failure modes might be. Instead I want to pursue a different line of inquiry: what would a future superintelligence learn about humanist values from our behavior?
In World After Capital I write that the existence and power of knowledge provides an objective basis for Humanism. Humanism in turn has key value implications, such as the importance of sustaining the process of critical inquiry through which knowledge improves over time. Another key value implication is that humans are responsible for animals, not vice versa. We have knowledge, and so it is our responsibility to help, say, dolphins, as opposed to the other way round.
To what degree are we living this value of responsibility today? We could do a lot better. Our biggest failing with regard to animals is industrial meat production, and as someone who eats meat, I am part of that problem. As with many other problems that human knowledge has created, I believe our best way forward is further innovation, and I am excited about lab-grown meat and meat substitutes. We also have a long way to go in being responsible toward other species in many other regards (e.g., pollution and the outright destruction of many habitats). Doing better here is one important way we should be using the human attention that is freed up through automation.
Even more important, though, is how we treat other humans. This has two components: how we treat each other today and how we treat the new humans when they arrive. As for how we treat each other today, we again have a long way to go. Much of what I propose in World After Capital is aimed at freeing humans to discover and pursue their personal interests. We are a long way away from that. It also means constructing the Knowledge Age in a way that allows us to overcome, rather than reinforce, our biological differences (see my post from last week on this topic). That will be a particularly important model for the new humans (superintelligences), as they will not have our biological constraints. Put differently, discrimination on the basis of biological difference would be a terrible thing for superintelligent machines to learn from us.
Finally, what about the arrival of the new humans? How will we treat them? The video of a Boston Dynamics robot being mistreated is not a good start here. This is a difficult topic because it sounds so preposterous. Should machines have human rights? Well, if the machines are humans, then clearly yes. And my approach to what makes humans distinctly human would apply to an artificial general intelligence. Does a general artificial intelligence have to be human in other ways as well in order to qualify? For instance, does it need to have emotions? I would argue no, because we vary widely in how we handle emotions, including conditions such as psychopathy. Since these new humans will likely share very little, if any, of our biological hardware, there is no reason to expect that their emotions would be similar to ours (or that they would need emotions at all).
This is an area in which a lot more thinking is required. We don't have a great way of discerning when we might have built a general artificial intelligence. The best-known attempt here is the Turing Test, for which people have proposed a number of improvements over the years. This is an incredibly important area for further work as we charge ahead with artificial intelligence. We would not want to accidentally create, fail to recognize, and then mistreat a large class of new humans. They and their descendants might not take kindly to that.
As we work on this new challenge, we still have a long way to go in how we treat other species and each other. Applying digital technology smartly gives us the possibility of doing better on both fronts. That's why I continue to plug away at World After Capital.