By now, if you are in tech and haven’t been on an extended Techmeme and Twitter break, you have heard about GPT-3, a new and massive language model developed by OpenAI. I have played around with it a bit myself and its capabilities are impressive. Here are some thoughts.
The model perfectly illustrates the ever-moving bar of what we consider artificial intelligence. Not that long ago facial recognition didn’t work at all; now computers routinely outperform humans. Putting together more than a couple of new sentences that actually make sense used to be a really challenging problem; GPT-3 routinely cranks out multiple paragraphs. “But it doesn’t understand the sentences” is the immediate objection often heard. Of course “understanding” is as poorly defined a term as “intelligence,” so that objection really doesn’t do much.
A better way of thinking about these capabilities is to consider where and how they can be deployed in place of humans. One obvious example would be customer support. A customer writes in and someone needs to write an answer. I am not suggesting that anyone should send out GPT-3-produced answers without checking them first, but the number of customer support requests that someone could handle using GPT-3 might go up tremendously. Another great example is producing UI code. This is often a labor-intensive part of projects, and a lot of engineers dread it as it’s not exactly solving fun problems but rather wrestling with the idiosyncrasies of different platforms. There are already several early demos of how GPT-3 or a model like it could be used for that. And yes, this does and should change how we think about the long-term demand for human software engineers (something I pointed out 6 years ago in a post titled “Software Is Eating Software”).
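To make the customer-support idea concrete, here is a minimal sketch of what a human-in-the-loop drafting setup might look like. All names here are illustrative assumptions, not any vendor’s actual API: the model call is passed in as a plain callable so the wiring is visible, and the draft is meant to be reviewed by a person before it is sent.

```python
# Hypothetical sketch of GPT-3-assisted customer support with a human
# in the loop. `complete` stands in for a call to a hosted language
# model; everything here is illustrative, not a real API.

def build_support_prompt(ticket, examples):
    """Assemble a few-shot prompt from past (question, answer) pairs."""
    parts = []
    for question, answer in examples:
        parts.append(f"Customer: {question}\nAgent: {answer}\n")
    # End with the open ticket so the model continues as the agent.
    parts.append(f"Customer: {ticket}\nAgent:")
    return "\n".join(parts)

def draft_reply(ticket, examples, complete):
    """Return a draft answer for human review, never for direct sending.

    `complete` is any callable mapping a prompt string to model text.
    """
    prompt = build_support_prompt(ticket, examples)
    return complete(prompt).strip()

# Usage with a stand-in completion function (no model involved):
examples = [("How do I reset my password?",
             "Use the 'Forgot password' link on the sign-in page.")]
fake_complete = lambda prompt: " Please check the billing page for your invoice."
print(draft_reply("Where can I find my invoice?", examples, fake_complete))
# → Please check the billing page for your invoice.
```

The point of the design is that the model only ever produces a draft; the review step stays with a human, which matches the caveat above about checking answers before they go out.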
A similarly faulty line of thinking has been that only humans can be creative. Again this is enabled by a completely underspecified definition of what it means to be creative. Here is an example of a poem that GPT-3 cranked out. Admittedly it is not exactly a masterwork, but if a young student turned this in, the teacher or parents might say “that’s so creative!” But it doesn’t stop here. After a bit of back and forth over Twitter about submitting GPT-3 work to a poetry contest, Joshua Schachter prompted the model for a story about using AI to submit a poem to a contest. And the resulting short story is really quite impressive.
All of this is to say that objections around intelligence and creativity are rooted in definitional problems and also obscure the extraordinary potential of this technology to change the need for human labor. Of course, this is one of the foundational premises of my book World After Capital.
Of course it is also clear how this type of model can be used for all sorts of bad things, such as automating high-quality bot attacks on social media or creating content that can be passed off as having come from a particular author (but was not in fact written by them). This will put a premium on attribution, something I have written about in the past in a post called “Sign All Things.” One key reason for having self-sovereign identity, with some probability attached that the identity belongs to a human, is to mitigate these types of attacks (as an aside, humans say plenty of terrible things even under their real names, so attribution doesn’t help with that, contrary to what people sometimes think).
Finally, a brief thought on the question of bias, which always arises. Of course models are biased by the data on which they have been trained (at this point this is well established). The same is of course true for humans, who are biased based on how they have been trained. But there are some key differences that are worth keeping in mind. The bias in a model is more measurable than in a human – the model will produce text after text after text, and it will not be strategic about its answers (well, at least not intentionally strategic; it might be implicitly strategic in as much as it has picked those strategies up from the training data). There is also a much clearer hope of reducing the bias in models than in humans.
Where does all of that leave us? GPT-3 is a major step forward in the capability set. It shows great new powers, and as we know from Spider-Man, with great power comes great responsibility. There is the responsibility of OpenAI to monitor how this model is used and to measure and reduce its biases. But just as important is our collective responsibility to create a future that lets humanity benefit broadly from these emerging powers. That is the very subject of World After Capital.