What’s Our Problem by Tim Urban (Book Review)

Politics in the US has become ever more tribal on both the left and the right. Either you agree with 100 percent of group doctrine or you are considered an enemy. Tim Urban, the author of the wonderful Wait But Why blog, has written a book digging into how we got here. Titled “What’s Our Problem?”, the book is a full-throated defense of liberalism in general and free speech in particular.

As with his blog, Urban does two valuable things rather well: He goes as much as possible to source material and he provides excellent (illustrated) frameworks for analysis. The combination is exactly what is needed to make progress on difficult issues and I got a lot out of reading the book as a result. I highly recommend reading it and am excited that it is the current selection for the USV book club.

The most important contribution of What’s Our Problem is drawing a clear distinction between horizontal politics (left versus right) and vertical politics (low-rung versus high-rung). Low-rung politics is tribal, emotional, religious, whereas high-rung politics attempts to be open, intellectual, secular/scientific. Low-rung politics brings out the worst in people and brings with it the potential of violent conflict. High-rung politics holds the promise of progress without bloodshed. Much of what is happening in the US today can be understood as low-rung politics having become dominant.

image

On a relative basis, the book spends a lot more time examining low-rung politics on the left, in the form of what Urban calls Social Justice Fundamentalism, than the same phenomenon on the right. That can be excused to a degree, because his likely audience is politically left and already convinced that the right has descended into tribalism, while not yet willing to admit that the same is the case on the left. Still, for me it somewhat weakened the overall effect, and a more frequent juxtaposition of left and right low-rung politics would in my view have been stronger.

My second criticism is that the book could have done a bit more to point out that the descent into low-rung politics isn’t just the result of certain groups pulling everyone down, but also of the abysmal failure of nominally high-rung groups. In that regard I strongly recommend reading Martin Gurri’s “The Revolt of the Public” as a complement.

This leads me to my third point. The book is mostly analysis and has only a small recommendation section at the end. And while I fully agree with the suggestions there, the central one of which is an exhortation to speak up if you are in a position to do so, they do fall short in an important way. We are still missing a new focal point (or points) for high-rung politics. There may indeed be a majority of people who are fed up with low-rung politics on both sides, but it is not clear where they should turn. Beginning to establish such a place has been the central goal of my own writing in The World After Capital and here on Continuations.

Addressing these three criticisms would of course have resulted in a much longer book, and that might in the end have been less effective than the book at hand. So let me reiterate my earlier point: this is an important book, and if you care about human and societal progress you should absolutely read What’s Our Problem?

Posted: 30th April 2023
Tags:  book review society progress politics

Thinking About AI: Part 4 - Existential Risk (Terminator Scenario)

Now we are getting to the biggest and weirdest risk of AI: a superintelligence emerging and wiping out humanity in pursuit of its own goals. To a lot of people this seems like a totally absurd idea, held only by a tiny fringe who appear weird and borderline culty. It seems so far out there, and also so huge, that most people wind up dismissing it and/or forgetting about it shortly after hearing it. There is a big similarity here to the climate crisis, where the more extreme views are also widely dismissed.

In case you have not encountered the argument yet, let me give a very brief summary (Nick Bostrom has an entire book on the topic and Eliezer Yudkowsky has been blogging about it for two decades, so this will be super compressed by comparison): a superintelligence, when it emerges, will pursue its own set of goals. In many imaginable scenarios, humans will be a hindrance rather than a help in accomplishing those goals. And once the superintelligence comes to that conclusion, it will set about removing humans as an obstacle. Since it is a superintelligence, we won’t be able to stop it, and there goes humanity.

Now you might have all sorts of objections here, such as: can’t we just unplug it? Suffice it to say that the people who have been thinking about this for some time have considered these objections already. They are pretty systematic in their thinking (so systematic that the Bostrom book is quite boring to read and I had to push myself to finish it). And in case you are still wondering why we can’t just unplug it: by the time we discover it is a superintelligence, it will have spread itself across many computers and built deep, hard defenses for them. That could happen, for example, by manipulating humans into thinking they are building defenses for a completely different reason.

Now I am not in the camp that says this is guaranteed to happen. Personally I believe there is also a good chance that a superintelligence, upon emerging, could be benevolent. But with existential risk one doesn’t need certainty (the same is true for the other existential risks, such as the climate crisis or an asteroid strike). What matters is that there is a non-zero likelihood. And that is the case for superintelligence, which means we need to proceed with caution. My book The World After Capital is all about how we as humanity can allocate more attention to these kinds of problems and opportunities.

So what are we to do? There is a petition for a six-month research moratorium. Eliezer Yudkowsky wrote a piece in Time pleading to shut it all down and to threaten anyone who tries to build it with destruction. I understand the motivation for both of these and am glad that people are ringing loud alarm bells, but neither of them makes much sense. First, we have shown no ability to coordinate globally on other existential threats, including ones that are much more obvious, so why do we think we could succeed here? Second, who wants to give government that much power over controlling core parts of computing infrastructure, such as the shipment of GPUs?

So what could we do instead? We need to accept that superintelligences will come about faster than we had previously thought and act accordingly. There is no silver bullet, but there are several initiatives that individuals, companies and governments can take to dramatically improve our odds.

First, and most important, are well-funded efforts to create a benign superintelligence. This requires the level of resources that only governments can command easily, although some of the richest people and companies in the world might also be able to make a difference. The key here will be to invert the approach to training that we have taken so far. It is absurd to expect that you can have a good outcome when you train a model first on the web corpus and then attempt to constrain it via reinforcement learning from human feedback (RLHF). This is akin to letting a child grow up without any moral guidance along the way and then expecting them to be a well-behaved adult based on occasionally telling them they are doing something wrong. We have to create a large corpus of moral reasoning that can be ingested early and form the core of a superintelligence before exposing it to all the world’s output. This is a hard problem, but interestingly we can use some of the models we now have to speed up the creation of such a corpus. Of course a key challenge will be what it should contain. It is for that very reason that in my book The World After Capital I make such a big deal of living and promoting humanism. Here is what I wrote (it’s an entire section from the conclusion, but I think it’s worth it):

There’s another reason for urgency in navigating the transition to the Knowledge Age: we find ourselves on the threshold of creating both transhumans and neohumans. ‘Transhumans’ are humans with capabilities enhanced through both genetic modification (for example, via CRISPR gene editing) and digital augmentation (for example, the brain-machine interface Neuralink). ‘Neohumans’ are machines with artificial general intelligence. I’m including them both here, because both can be full-fledged participants in the knowledge loop.

Both transhumans and neohumans may eventually become a form of ‘superintelligence,’ and pose a threat to humanity. The philosopher Nick Bostrom published a book on the subject, and he and other thinkers warn that a superintelligence could have catastrophic results. Rather than rehashing their arguments here, I want to pursue a different line of inquiry: what would a future superintelligence learn about humanist values from our current behavior?

As we have seen, we’re not doing terribly well on the central humanist value of critical inquiry. We’re also not treating other species well, our biggest failing in this area being industrial meat production. Here as with many other problems that humans have created, I believe the best way forward is innovation. I’m excited about lab-grown meat and plant-based meat substitutes. Improving our treatment of other species is an important way in which we can use the attention freed up by automation.

Even more important, however, is our treatment of other humans. This has two components: how we treat each other now, and how we will treat the new humans when they arrive. As for how we treat each other now, we have a long way to go. Many of my proposals are aimed at freeing humans so they can discover and pursue their personal interests and purpose, while existing education and job loop systems stand in opposition to this freedom. In particular we need to construct the Knowledge Age in a way that allows us to overcome, rather than reinforce, our biological differences which have been used as justification for so much existing discrimination and mistreatment. That will be a crucial model for transhuman and neohuman superintelligences, as they will not have our biological constraints.

Finally, how will we treat the new humans? This is a difficult question to answer because it sounds so preposterous. Should machines have human rights? If they are humans, then they clearly should. My approach to what makes humans human—the ability to create and make use of knowledge—would also apply to artificial general intelligence. Does an artificial general intelligence need to have emotions in order to qualify? Does it require consciousness? These are difficult questions to answer but we need to tackle them urgently. Since these new humans will likely share little of our biological hardware, there is no reason to expect that their emotions or consciousness should be similar to ours. As we charge ahead, this is an important area for further work. We would not want to accidentally create a large class of new humans, not recognize them, and then mistreat them.
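Coming back to the inverted training order: here is a toy sketch of the phasing idea, with dummy data and a toy objective rather than a real language-model pipeline. All names and numbers are illustrative assumptions, not a recipe for actually training such a system.

```python
import torch
from torch import nn

# Toy stand-ins: in reality these would be token streams from a curated
# moral-reasoning corpus and from the (much larger) web corpus.
moral_corpus = torch.randn(512, 16)
web_corpus = torch.randn(8192, 16)

model = nn.Sequential(nn.Linear(16, 64), nn.ReLU(), nn.Linear(64, 16))
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

def train_on(data: torch.Tensor, epochs: int) -> None:
    for _ in range(epochs):
        for batch in data.split(32):
            optimizer.zero_grad()
            loss = nn.functional.mse_loss(model(batch), batch)  # toy objective
            loss.backward()
            optimizer.step()

# Phase 1: the curated corpus is ingested first and forms the "core".
train_on(moral_corpus, epochs=10)
# Phase 2: only then is the model exposed to all the world's output.
train_on(web_corpus, epochs=2)
# Phase 3: human feedback (RLHF-style) would come last, as a refinement
# rather than as the only source of values.
```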

Second are efforts to help humanity defend against an alien invasion. This may sound facetious, but I am using alien invasion as a stand-in for all sorts of existential threats. We need much better preparation for extreme outcomes of the climate crisis, asteroid strikes, runaway epidemics, nuclear war and more. Yes, we 100 percent need to invest more in avoiding these, for example through early detection of asteroids and building deflection systems, but we also need to harden our civilization.

There are a ton of different steps that can be taken here, and I may write another post about them some time, as this post is getting rather long. For now let me just say that a key point is to decentralize our technology base much more than it is today. For example, we need many more places that can make chips, ideally at much smaller scale than we have today.

Existential AI risk, aka the Terminator scenario, is a real threat. Dismissing it would be a horrible mistake. But so would seeing global government control as the answer. We need to harden our civilization and develop a benign superintelligence. To do both well we need to free up attention and further develop humanism. That’s the message of The World After Capital.

Posted: 8th April 2023
Tags:  artificial intelligence ai

Thinking About AI: Part 3 - Existential Risk (Loss of Reality)

In my prior post I wrote about structural risk from AI. Today I want to start delving into existential risk. This broadly comes in two not entirely distinct subtypes: first, that we lose any grip on reality, which could result in a Matrix-style scenario or a global war of all against all; and second, that a superintelligence gets rid of humans directly in pursuit of its own goals.

The loss of reality scenario was the subject of an op-ed in the New York Times the other day. And right around the same time an amazing AI-generated picture of the pope went viral.

image

I have long said that the key mistake of the Matrix movies was to posit a war between humans and machines. Instead, we will give ourselves willingly to the machines, more akin to the “free wifi” scenario of The Mitchells vs. the Machines.

The loss of reality is a very real threat. It builds on a long tradition, such as Stalin having people edited out of historic photographs, or Potemkin building fake villages to impress Empress Catherine (why did I think of two Russian examples here?). And now that kind of capability is available to anyone at the push of a button. Did anyone see those pictures of Trump getting arrested?

Still, I am not particularly concerned about this type of existential threat from AI (outside of the superintelligence scenario), for a number of reasons. First, distribution, rather than content creation, has been the bottleneck for manipulation for some time (it doesn’t take advanced AI tools to come up with a meme). Second, I believe that the approach of more AI that can help with structural risk can also help with this type of existential risk, for example an AI copilot that points out content that appears to be manipulated as you browse the web. Third, we have an important tool available to us as individuals that can dramatically reduce the likelihood of being manipulated: mindfulness.

In my book “The World After Capital,” in a chapter titled “Psychological Freedom,” I argue for the importance of developing a mindfulness practice in a world that’s already overflowing with information. Our brains evolved in an environment that was mostly real: when you saw a cat, there was a cat. Yet even before AI-generated cats, the Internet was able to serve up an endless stream of cat pictures, so we have been facing this problem for some time. It is encouraging that studies show that younger people are already more skeptical of the digital information they encounter.

The bottom line for me is that “loss of reality” is an existential threat, but one that we have already been facing and where further AI advancement will both help and hurt. So I am not losing any sleep over it. There is, however, an overlap with the second type of existential risk: a superintelligence simply wiping out humanity. The overlap is that the AI could use the loss of reality to accomplish its goals. I will address the superintelligence scenario in the next post (preview: much more worrisome).

Posted: 3rd April 2023
Tags:  artificial intelligence ai

Thinking About AI: Part 2 - Structural Risks

Yesterday I wrote a post on where we are with artificial intelligence by providing some history and foundational ideas around neural network size. Today I want to start in on risks from artificial intelligence. These fall broadly into two categories: existential and structural. Existential risk is about AI wiping out most or all of humanity. Structural risk is about AI aggravating existing problems, such as wealth and power inequality in the world. Today’s post is about structural risks.

Structural risks of AI have been with us for quite some time. A great example is the YouTube recommendation algorithm. The algorithm, as far as we know, optimizes for engagement, because YouTube’s primary monetization is ads. This means the algorithm is more likely to surface videos that have an emotional hook than ones that require the viewer to think. It will also pick content that emphasizes the same point of view, instead of surfacing opposing views. And finally it will tend to recommend videos that have already demonstrated engagement over those that have not, giving rise to a “rich get richer” effect in influence.
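To see how quickly the “rich get richer” dynamic takes hold, here is a toy simulation of a greedy engagement-maximizing recommender (obviously not YouTube’s actual system; all numbers are made up):

```python
import random

random.seed(0)

# Each video has a hidden appeal; the ranker only sees observed engagement.
videos = [{"appeal": random.random(), "clicks": 0, "views": 1} for _ in range(100)]

def engagement(video):
    return video["clicks"] / video["views"]

for _ in range(10_000):
    # Mostly recommend the currently highest-engagement video (greedy),
    # with a little random exploration mixed in.
    if random.random() < 0.05:
        video = random.choice(videos)
    else:
        video = max(videos, key=engagement)
    video["views"] += 1
    if random.random() < video["appeal"]:
        video["clicks"] += 1

most_shown = max(videos, key=lambda v: v["views"])
print(f"Most-shown video received {most_shown['views']} of 10,000 impressions")
```

Early lucky winners keep getting recommended and so accumulate most of the impressions, regardless of how many equally appealing videos never get a chance.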

With the current pace of progress it may look at first like these structural risks will simply explode: start using models everywhere and you wind up with bias risk, “rich get richer” risk, wrong-objective-function risk, and so on everywhere. This is a completely legitimate concern and I don’t want to dismiss it.

On the other hand, there are also new opportunities that come from potentially giving broad access to models and thus empowering individuals. For example, I tried the following prompt in ChatGPT: “I just watched a video that argues against universal basic income. Can you please suggest some videos that make the case for it? Please provide URLs so I can easily watch the videos.” It quickly produced a list of videos for me to watch. Because so much content has been ingested, users can now have their own “Opposing View Provider” (something I had suggested years ago).

image

There are many other ways in which these models can empower individuals, for example summarizing text at a level that might be more accessible, or pointing somebody in the right direction when they have encountered a problem. And here we immediately run into some interesting regulatory challenges. For example: I am quite certain that ChatGPT could give pretty good free legal advice, but that would run afoul of the regulations on practicing law. So part of the structural risk issue is that our existing regulations predate any such artificial intelligence and will oddly contribute to making its power available to a smaller group (imagine more profitable law firms instead of widely available legal advice).

There is also a strong interaction between how many such models will exist (from a small oligopoly to potentially a great many) and to what extent end users can embed these capabilities programmatically rather than having to use them manually. To continue my earlier example, if I have to head over to ChatGPT every time I want to ask for an opposing view, I will be less likely to do so than if I could script the sites I use so that an intelligent agent can represent me in my interactions. This is of course one of the core suggestions I make in my book The World After Capital, in a section titled “Bots for All of Us.”
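As a sketch of what such a scripted “Opposing View Provider” could look like today, here is a minimal example using the OpenAI Python client (the model name and prompt are placeholder assumptions; any chat-completion API would work the same way):

```python
from openai import OpenAI  # pip install openai; any similar API would do

client = OpenAI()  # assumes an OPENAI_API_KEY environment variable is set

def opposing_view(position: str) -> str:
    """Ask a model to steelman the other side of whatever I just consumed."""
    response = client.chat.completions.create(
        model="gpt-4",  # placeholder model name
        messages=[{
            "role": "user",
            "content": (
                f"I just encountered this position: {position}\n"
                "Please summarize the strongest arguments against it and "
                "point me to sources that make the opposite case."
            ),
        }],
    )
    return response.choices[0].message.content

print(opposing_view("Universal basic income is a bad idea."))
```

The point is less this particular API and more that the capability becomes something my software can invoke on my behalf, rather than something I have to remember to do manually.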

I am sympathetic to those who point to structural risks as a reason to slow down the development of these new AI systems. But I believe the better answer to structural risks is to make sure that there are many AIs, that they can be controlled by end users, that we have programmatic access to these and other systems, and so on. Put differently, structural risks are best addressed by having more artificial intelligence with broader access.

We should still think about other regulation to address structural risks, but much of what has been proposed doesn’t make a ton of sense. For example, publishing an algorithm isn’t that helpful if you don’t also publish all the data running through it. In the case of a neural network you could alternatively require publishing the network structure and weights, but that would be tantamount to open-sourcing the entire model, as anyone could then replicate it. So for now I believe the focus of regulation should be on avoiding a situation where there are just a few huge models that have a ton of market power.

Some will object right here that this would dramatically aggravate the existential risk question, but in my next post I will make an argument for why that may not be the case.

Posted: 26th March 2023
Tags:  artificial intelligence ai

Thinking About AI

I am writing this post to organize and share my thoughts about the extraordinary progress in artificial intelligence over recent years and especially the last few months (link to a lot of my prior writing). First, I want to come right out and say that anyone still dismissing what we are now seeing as a “parlor trick” or a “statistical parrot” is engaging in the most epic goal post moving ever. We are not talking a few extra yards here; the goal posts are not even in the stadium anymore, they are in a faraway city.

Growing up I was extremely fortunate that my parents supported my interest in computers by buying an Apple II for me and that a local computer science student took me under his wing. Through him I found two early AI books: one in German by Stoyan and Goerz (I don’t recall the title) and Winston and Horn’s “Artificial Intelligence.” I still have both of these, although locating them among the thousand or more books in our home will require a lot of time, or hopefully soon a highly intelligent robot (ideally running the VIAM operating system – shameless plug for a USV portfolio company). I am bringing this up here as a way of saying that I have not just spent a lot of time thinking about AI but have also coded early versions, and I have been following the field closely ever since.

I also pretty early on developed a conviction that computers would be better than humans at a great many things. For example, right after I first learned about programming around age 13, I told my Dad that I didn’t really want to spend a lot of time learning how to play chess because computers would certainly beat us at it hands down. This was long before a chess program was actually good enough to beat the best human players. As an aside, I have since changed my mind on this, as follows: chess is an incredible board game, and if you want to learn it to play other humans (or machines), by all means do so, as it can be a lot of fun (although I still suck at it). Much of my writing, both here on Continuations and in my book, is also based on the insight that much of what humans do is a type of computation, and hence computers will eventually do it better than humans. Despite that, there will still be many situations where we want a human instead, exactly because they are a human. Sort of the way we still go to concerts instead of just listening to recorded music.

As I studied computer science, both as an undergraduate and as a graduate student, one of the things that fascinated me was the history of trying to compute with brain-like structures. I don’t want to rehash all of it here, but to understand where we are today it is useful to understand where we have come from. The idea of modeling neurons in a computer as a way to build intelligence is quite old. Early electromechanical and electronic computers were built in the 1940s (e.g. ENIAC was completed in 1946), and papers on modeling neurons, such as the work by McCulloch and Pitts, date from the same time.

But almost as soon as people started working on neural networks more seriously, the naysayers emerged as well. Famously, Marvin Minsky and Seymour Papert wrote a book titled “Perceptrons” that showed that certain types of relatively simple neural networks have severe limitations, e.g. in expressing the XOR function. This was taken by many at the time as evidence that neural networks would never amount to much when it came to building computer intelligence, helping to usher in the first artificial intelligence winter.
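The XOR limitation is easy to see concretely: XOR is not linearly separable, so no single linear-threshold unit can compute it, but adding just one hidden layer fixes that. A minimal sketch with hand-picked weights:

```python
def step(x):
    """Linear threshold activation, as in the original perceptron."""
    return 1 if x > 0 else 0

def xor(a, b):
    h_or = step(a + b - 0.5)          # hidden unit 1 computes OR
    h_and = step(a + b - 1.5)         # hidden unit 2 computes AND
    return step(h_or - h_and - 0.5)   # output: OR and not AND, i.e. XOR

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", xor(a, b))  # 00->0, 01->1, 10->1, 11->0
```

What Minsky and Papert analyzed was the single-layer case; the catch at the time was that nobody knew how to train the hidden layer, which is what backpropagation later solved.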

And so it went for several cycles. People would build bigger networks and make progress, and others would point out the limitations of those networks. At one point people were so disenchanted that very few researchers were left in the field at all. The most notable of those who stayed was Geoffrey Hinton, who kept plugging away at finding new training algorithms and building bigger networks.

But then a funny thing happened. Computation kept getting cheaper and faster, and memory became unfathomably large (my Apple II, for reference, had 48KB of memory on the motherboard and an extra 16KB on an expansion card). That made it possible to build and train much larger networks. And all of a sudden some tasks that had seemed out of reach, such as deciphering handwriting or recognizing faces, started to work pretty well. Of course the goal post moving immediately set in, with people arguing that those are not examples of intelligence. I am not going to repeat those arguments here because they were basically silly. We had taken tasks that previously only humans could do and built machines that could do them. To me that’s, well, artificial intelligence.

The next thing we discovered is that while humans have big brains with lots of neurons in them, we can use only a tiny subset of our brain on highly specific tasks, such as playing the game of Go. With another turn of size and some further algorithmic breakthroughs, all of a sudden we were able to build networks large enough to beat the best human player at Go. And not just beat the player, but do so by making moves that were entirely novel, or, as we would have said if a human had made them, “creative.” Let me stay with this point of brain and network size for a moment, as it will turn out to be crucial shortly. Not only can a human Go player use just a small part of their brain to play the game, the rest of their brain is actually a hindrance. It comes up with pesky thoughts at just the wrong time (“Did I leave the stove on at home?” or “What is wrong with me that I didn’t see this move coming? I am really bad at this”) and all sorts of other interference that a neural network trained only to play Go does not have to contend with. The same is true for many other tasks, such as reading radiology images to detect signs of cancer.

The other thing that should probably have occurred to us by then is that there is a lot of structure in the world. This is of course a good thing: without structure, such as DNA, life wouldn’t exist and you wouldn’t be reading this text right now. Structure is an emergent property of systems, and that’s true for all systems, so structure is everywhere we look, including in language. A string of random letters means nothing. The strings that mean something are a tiny subset of all possible letter strings, and so unsurprisingly that tiny subset contains a lot of structure. As we make neural networks bigger and train them better, they uncover that structure. And of course that’s exactly what that big brain of ours does too.
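Just how tiny a subset is easy to estimate. Taking five-letter strings as an example (the English word count below is a rough assumption; exact numbers vary by dictionary):

```python
possible_strings = 26 ** 5   # all five-letter strings: 11,881,376
five_letter_words = 12_000   # rough order of magnitude for English words
                             # (an assumption; varies by dictionary)
print(f"{five_letter_words / possible_strings:.3%} of strings carry meaning")
# -> roughly 0.1%
```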

So I was not all that surprised when large language models were able to produce text that sounded highly credible (even when it was hallucinated). Conversely, I found confounding the criticism from some people that making language models larger would simply be a waste of time. After all, it seems pretty obvious that more intelligent species have larger brains than less intelligent ones (though the correlation is far from perfect). I am using the word intelligence here loosely, in a way that I think is accessible but that also hides the fact that we don’t actually have a good definition of what intelligence is, which is what has made the goal post moving possible.

Now we find ourselves confronted with the clear reality that our big brains use only a fraction of their neurons for most language interactions. The word “most” is doing a lot of work here, but bear with me. The biggest language models today are still a lot smaller than our brain, but damn are they good at language. So the latest refuge of the goal post movers is “but they don’t understand what the language means.” But is that really true?

As is often the case with complex material, Sabine Hossenfelder has a great video that helps us think about what it means to “understand” something. Disclosure: I have been supporting Sabine for some time via Patreon. Further disclosure: Brilliant, which is a major advertiser on Sabine’s channel, is a USV portfolio company. With this out of the way, I encourage you to watch the following video.

So where do I think we are? In fields where language and/or two-dimensional images let you build a good model, AI is rapidly performing at a level that exceeds that of many humans. That’s because the structure it uncovers from the language is the model. We can see this simply by looking at tests in those domains. I really liked Bryan Caplan’s post: he was initially skeptical, based on an earlier version performing poorly on his exams, but the latest version did better than many of his students. When building the model requires input that goes beyond language and two-dimensional images, however, such as understanding three-dimensional shapes from three-dimensional images (instead of inferring them from two-dimensional ones), the currently inferred models are still weak or incomplete. It seems pretty clear, though, that progress in filling those in will happen at a breathtaking pace from here.

Since this is getting rather long, I will separate out my thoughts on where we are going next into more posts. As a preview, I believe we are now at the threshold of artificial general intelligence, or what I call “neohumans” in my book The World After Capital. And even if that takes a bit longer, artificial domain-specific intelligence will be outperforming humans in a great many fields, especially ones that do not require manipulating the world with that other magic piece of equipment we have: hands with opposable thumbs. No matter what, the stakes are now extremely high and we have to get our act together quickly on the implications of artificial intelligence.

Posted: 25th March 2023
Tags:  artificial intelligence ai

The Banking Crisis: More Kicking the Can

There were a ton of hot takes on the banking crisis over the last few days. I didn’t feel like contributing to the cacophony on Twitter because I was busy working with USV portfolio companies and also in Mexico City with Susan celebrating her birthday.

Before addressing some of the takes, let me succinctly state what happened. SVB had taken a large percentage of its assets and invested them in low-interest-rate, long-duration bonds. As interest rates rose, the value of those bonds fell. Already back in November, that was enough of a loss to wipe out all of SVB’s equity. But you would only know that if you looked carefully at their SEC filings, because SVB kept reporting those bonds on a “hold-to-maturity” basis (meaning at their full face value). That would have been fine if SVB had kept having deposit inflows, but already in November they reported $3 billion in cash outflows in the prior quarter. And of course cash was flowing out because companies were able to put it in places where it yielded more (as well as startups simply burning cash). Once the cash outflow accelerated, SVB had to start selling the bonds, at which point they had to realize the losses. This forced SVB to raise equity, which they failed to do. When it became clear that a private raise wasn’t happening, their public equity sold off rapidly, making a raise impossible and thus causing the bank to fail. This is a classic example of the old adage: “How do you go bankrupt? Slowly at first and then all at once.”
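To get a feel for how big such mark-to-market losses can be, here is the standard bond-pricing arithmetic with illustrative numbers (a stylized example, not SVB’s actual portfolio):

```python
def bond_price(face, coupon_rate, years, market_yield):
    """Present value of a fixed-rate bond with annual coupons."""
    coupon = face * coupon_rate
    pv_coupons = sum(coupon / (1 + market_yield) ** t for t in range(1, years + 1))
    pv_face = face / (1 + market_yield) ** years
    return pv_coupons + pv_face

# A 10-year bond bought at par when yields were 1.5%...
print(bond_price(100, 0.015, 10, 0.015))  # ~100.0
# ...marked to market after yields rise to 4%:
print(bond_price(100, 0.015, 10, 0.04))   # ~79.7, i.e. a ~20% loss
```

A loss of that magnitude on a large share of assets easily exceeds a bank’s equity, which is exactly what happened here.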

With that as background, now on to the hot takes.

1. The SVB bank run was caused by VCs and could have been avoided if only VCs had stayed calm

That’s like saying the sinking of the Titanic was caused by the iceberg and could have been avoided by everyone just bailing water with their coffee cups. The cause was senior management at SVB grossly mismanaging the bank’s assets (the captain going full speed in waters that could contain icebergs). Once there was a certain momentum of withdrawals (the hull was breached), the only rational thing to do was to attempt to get to safety. Any one company or VC suggesting to keep funds there could have been completely steamrolled. Yes, in some sense it is of course true that if everyone had stayed calm then this wouldn’t have happened, but this is a classic case of the prisoner’s dilemma, and one with a great many players. Saying after the fact “look, everyone came out fine, so why panic?” is 20-20 hindsight; as I will remark below, there were a lot of people arguing against making depositors whole.

2. The SVB bank run is the Fed’s responsibility due to its rapid raising of rates

This is another form of blaming the iceberg. The asset duration mismatch problem is foundational to banking and anyone running a bank should know it. Having a large percentage of assets in long-duration low-interest-rate fixed income instruments without hedging is madness, as it is premised on interest rates staying low for a long time and continuing to accumulate deposits. Now suppose you have made this mistake. What should you do if rates start to go up? Start selling your long duration bonds at the first sign of rate increases and raise equity immediately if needed. Instead of realizing losses early and accepting a lower equity value in a raise, SVB kept a fiction going for many months that ultimately lost everything.

3. Regulators are not to blame

One reason for industries to be regulated is to make them safer. Aviation is a great example of this. The safety doesn’t just benefit people flying, it also benefits companies, because the industry can be much bigger when it is safe. The same goes for banking. You have to have a charter to be a bank and there are multiple bank regulators. Their primary job should be to ensure that depositors don’t need to pore over bank financials to understand where it is safe to bank. If regulators had done their job here, they would have intervened at SVB weeks if not months ago and forced an equity raise or a sale of the bank before a panic could occur.

4. This crisis was an opportunity to stick it to tech

A lot of people online, and some in government, saw this as an opportunity to punish tech companies as part of the overall tech backlash that’s been going on for some time. This brought together some progressives with some right-wing folks who both, for different ideological reasons, want to see tech punished. There was a “just let them burn” attitude, especially on Twitter. This was, however, never a real option, because SVB is not the only bank with a bad balance sheet. Lots of regional and smaller banks are in similar situations, so the contagion risk was extremely high. The widespread sell-off in those bank stocks even after the announced backstopping of SVB underlines just how likely a broad meltdown would have been. It is extremely unfortunate that our banking system continues to be so fragile (more on that later), but that fragility meant that using this crisis to punish tech was never a realistic option.

5. Depositors should have taken a haircut

I have some sympathy for this argument. After all, didn’t people know that their deposits above $250K were not insured? Yes, that’s true in the abstract, but when everyone is led to believe that banking is safe because it is regulated (see #3 above), it would still come as a massive surprise to find out that deposits are not in fact safe. As always, what matters is the difference between expectation and realization. If SVB depositors had taken a haircut, why would anyone leave their funds at a bank where they suspect they would be subject to a 5% haircut? There would have been a massive rush away from smaller banks to behemoths like JP Morgan Chase.

6. The problem is now solved

The only thing that is solved is that we have likely avoided a wide set of bank runs. But that has been accomplished at the cost of applying a massive patch to the system by basically insuring all deposits. This leaves us with a terrible system: fully insured fractional reserve banking. I have been an advocate for full reserve banking as an alternative. This would let us use basic income as the money creation mechanism. In short, the idea is that money would still enter the economy, but it would do so by going to people directly instead of putting banks in charge of figuring out where it goes. The problem of course is that bank investors and bank management don’t like this idea, because they benefit so much from the existing system, so there will be fierce lobbying opposition to making such a fundamental change. I will write more about this in the future, but one way to get the ball rolling is to aggressively issue new bank charters now for full reserve banks (sometimes called “narrow banks”). Many existing fintechs and some new ones could pick these charters up and provide interesting competition for the existing behemoths.

All of this is to say that this whole crisis is yet another example of how broken and held together by duct tape our existing systems are. That’s why we are lurching from crisis to crisis. And yet we are not willing to try to fundamentally re-envision how things might work differently. Instead we are just kicking the can.

Posted: 14th March 2023
Tags:  banking

India Impressions (2023)

I just returned from a week-long trip to India. Most of the trip was spent meeting entrepreneurs and investors, centered around time with the team from Bolt in Bangalore (a USV portfolio company). This was my second time in India, following a family vacation in 2015. Here are some observations from my visit:

First, the mood in the country feels optimistic and assertive. People I spoke to, not just from the tech ecosystem but also drivers, tour guides, waiters, students, and professors, all seemed excited and energized. There was a distinct sense of India emerging as a global powerhouse with the potential to rival China. As it turns out, quite a few government policies are aimed at protecting Indian industrial growth and separating it from China (including the recent ban on TikTok and other Chinese apps). Also, if you haven’t seen it yet, I recommend watching the movie RRR. It is a “muscular” embodiment of the spirit that I encountered, one that, based on my admittedly unscientific polling, was much liked by younger people there (and hardly watched by older ones).

Second, air pollution in Delhi was as bad as I remembered it, and in Mumbai it was way worse than before; Mumbai now appears to be on par with Delhi. For example, here is a picture taken from the Bandra-Worli Sea Link, which is en route from the airport, where you can barely see the high-rise buildings of the city across the bay.

image

Third, there is an insane amount of construction everywhere. Not just new buildings going up but also new sewer lines, elevated highways, and rail systems. Most of these were yet to be completed but it is clear that the country is on a major infrastructure spree. Some of these projects are extremely ambitious, such as the new coastal road for Mumbai.

Fourth, traffic is even more dysfunctional than I remembered, and distances are measured in time, not miles. Depending on the time of day, it can easily take one hour to get somewhere that would be ten minutes away without traffic. This was true for all the big cities I visited on this trip (Delhi, Mumbai and Bangalore). I don’t really understand how people can plan on attending in-person meetings, but I suppose one gets used to it. I wound up taking one meeting simply in a car en route to the next one.

Fifth, in venture capital there are now many local funds, meaning funds that are not branded offshoots of US funds (such as Sequoia India). I spent time with the team from Prime Venture Partners (co-investors in Bolt) and Good Capital, among others. It is great to see that in addition to software-focused funds there are also ones focused on agtech/food (e.g. Omnivore) and deep tech (e.g. Navam Capital). Interestingly, all the ones I talked to have only offshore LPs. There is not yet a broad India LP base other than a few family offices, and regulations within India are apparently quite cumbersome, so the funds are domiciled in the US or in Mauritius.

Sixth, the “India Stack” is enabling a ton of innovation and deserves to be more widely known outside of India (US regulators should take note). In particular, the availability of verified digital identity and of the Unified Payments Interface (UPI) is incredibly helpful in the creation of new online and offline experiences, such as paying for a charge on the Bolt charging network. This infrastructure creates a much more level playing field and is very startup friendly. Add to this incredibly cheap data plans and you have the foundations for a massive digitally led transformation.

Seventh, India is finally recognizing the importance of the climate crisis, both as a threat and as an opportunity. India is already experiencing extreme temperatures in some parts of the country on a regular basis (the opening of Kim Stanley Robinson’s Ministry for the Future extrapolates what that might lead to). India is also dependent on sufficient rainfall during the monsoon season, and those patterns are changing as well (this is part of the plot of Neal Stephenson’s Termination Shock). As far as the opportunity goes, India recently discovered a major lithium deposit, which means that a key natural resource for the EV transition exists locally (unlike oil, which has to be imported). And India has started to accelerate EV adoption by offering subsidies.

All in all this trip has made me bullish on India. Over the coming years I would not be surprised if we wind up with more investments from USV there, assuming we can find companies that are a fit with our investment theses. In the meantime, I will look for some public market opportunities for my personal portfolio.

Posted: 6th March 2023

Termination Shock (Book Review)

Over vacation I read Termination Shock by Neal Stephenson. Unlike many other recent books tackling the climate crisis, it is entirely focused on the controversial issue of geoengineering through solar radiation modification (SRM). The basic idea of SRM is to let slightly less sunlight into the Earth’s lower atmosphere, where it heats things up. Even a tiny decrease in solar radiation will have a big impact on global warming.
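How big? A standard back-of-the-envelope estimate, using the solar constant (about 1361 W/m²) and Earth’s albedo (about 0.3):

```latex
% Average absorbed solar radiation per square meter of Earth's surface:
F = \frac{S_0\,(1 - \alpha)}{4} \approx \frac{1361 \times 0.7}{4} \approx 238\ \mathrm{W/m^2}
% Reflecting away just 1% of incoming sunlight therefore shifts the energy balance by
\Delta F \approx 0.01 \times 238 \approx 2.4\ \mathrm{W/m^2},
% which is on the same order as the ~3.7 W/m^2 of forcing from doubling atmospheric CO2.
```

In other words, a one percent dimming rivals the warming effect of a CO2 doubling, which is why SRM is such a powerful (and scary) lever.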

To put upfront where I stand on this: I first wrote about the need to research geoengineering in 2009. Since then, Susan and I have funded some research in this area, including a study at Columbia University to independently verify some chemistry proposed by the Keith group at Harvard. The results suggest that using calcite aerosols may not be so great for the stratosphere, which includes the ozone layer that protects us from too much UV radiation. That means spreading sulfates is likely better; this is what happens naturally during a big volcanic eruption, such as the famous Mount Pinatubo eruption.

Artificially putting sulfur into the stratosphere turns out to be the key plot device in Termination Shock. Delays by governments in addressing the climate crisis lead a rich individual to start launching shells containing sulfur into the stratosphere. In a classic life-imitating-art moment, Luke Iseman, the founder of Make Sunsets, explicitly cites reading Termination Shock as an inspiration for starting the company and releasing a first balloon carrying a tiny amount of sulfur into the stratosphere.

Termination Shock does a good job of neither praising its lone-ranger character for attempting to mitigate the climate crisis, nor condemning him for kicking off something with obviously global impact. Instead, the plot extends to several nation states that might be positively or adversely affected and the actions they take to either support or interfere with the project. While these aren’t as fully developed as I might have liked (the book already clocks in at 720 pages), they do show how differently interests in SRM might play out across the globe.

In that regard I loved that India gets a starring turn, as I believe we are about to see a lot more of India on the global stage. I only wish Saskia, Queen of the Netherlands, played a more active role, like the female protagonists of Seveneves do. To be fair, she is a very likable character, but one that events mostly happen to. As always with books by Neal Stephenson, there are tons of fascinating historical and technical details. For example, I had no idea that there were tall mountains in New Guinea.

Overall, Termination Shock is a great read and an excellent complement to Ministry for the Future in the climate crisis fiction department (I can’t believe I haven’t written about that book yet).

Posted: 21st January 2023
Tags:  climate crisis book review termination shock

A Philosophical Start to 2023

We are once again at a transition moment in history. Where our journey goes from here could be exceptionally good or absurdly bad. This mirrors past moments, such as the transition into the Agrarian Age, which gave us early high cultures but also various dark ages. Or more recently the transition into the Industrial Age, with democracies flourishing, but also fascism and communism killing tens of millions.

In the past few years we have had incredible unlocks across many fields. To name just a few, in computation we are making real progress on artificial intelligence; in biology, we can now read and write genetic code; in energy, we are closing in on nuclear fusion.

At the same time we are facing unprecedented threats. The climate crisis is accelerating at a pace faster than most of the dire predictions. Our democracies are moribund, with bloated and risk-averse bureaucracies. With social media and easy image/video manipulation and creation, we live in a post-truth world. Russia’s invasion of Ukraine has edged us closer to the possibility of nuclear war.

The threats are truly scary and I completely understand why some people find it hard to get out of bed. The opportunities can be anxiety-inducing in their own ways, even if you don’t think a superintelligence will wipe out humanity any day now. At all times there are people who see opportunity elsewhere while finding themselves trapped in stagnant fields. Personally, I am excited by the opportunities. But even excitement carries potential failure modes with it, such as “all work and no play.”

So where does this leave me? Aristotle was wrong about a lot of things, but I quite like his conception of virtues as intermediates between too little and too much of something. For example, courage sits between cowardice and rashness. In a similar vein I try to find middle paths between ignoring threats and despairing about them, between dismissing opportunities and glorifying them, and between asceticism and hedonism.

Finding these balance points is an ongoing process, as it is easy to be drawn toward either extreme. The story of Odysseus having to navigate between Scylla and Charybdis can be read as a metaphor for this challenge. As an aside, the same is true for making choices in startups, and I have a series of blog posts about that.

I am sharing this framework in the hope that it may be helpful to others. Also if more people start thinking and operating this way, maybe we can get past the current state of discourse which favors extremes.

May you all find the right middle paths in 2023!

Posted: 2nd January 2023
Tags:  philosophy life personal 2023

The Eutopian Network State

If you are not familiar with it, I encourage you to check out the Network State Project. The basic idea is to form new states online first, with the eventual goal of controlling land in the real world. While I disagree with parts of the historical analysis and also with some of the suggestions for forming a network state, I fundamentally believe this is an important project.

In my book The World After Capital, I trace how scarcity for humanity has shifted from food to land (agrarian revolution), from land to capital (industrial revolution), and now from capital to attention (digital revolution). The states that we have today were first formed during the Agrarian Age and were solidified during the Industrial Age. As a result they carry the baggage of both of these periods. When it comes to states we have serious issues of “technical debt” and “cruft.” One way to tackle these is through gradual rewrites of existing laws and constitutions. That’s a slow process under the best of circumstances and in an age of increasing polarization likely an impossibility. Another mode of change is to create a new system elsewhere, with the goal that it might eventually replace the existing one. Given that pretty much all the habitable space on Earth is taken up by existing nation states, the best place to get started is virtual.

Now what would a Eutopian network state look like? I am still trying to figure a lot of that out but fundamentally it would seek to embrace the values described in The World After Capital in order to build a state for the Knowledge Age. As such it would aim to recognize how much progress has been made since most states were first established and how much more progress lies yet ahead of us. It would take into consideration that the right to bear arms or the functioning of free speech should likely be different in an age of nuclear bombs and global social networks than when we had muzzleloaders and town criers.

I am particularly interested in the idea of a “minimum viable state.” What are the core concepts that need to be in place to get going, and what can be filled in over time or maybe omitted entirely? For example, does a Eutopian network state have to define what constitutes a family? What is the minimal set of rules? I am currently reading the fourth book in Ada Palmer’s Terra Ignota series. In the series, humans voluntarily join Hives, but even those who don’t, the Hiveless, must follow eight universal laws.

There are many other fascinating questions. For example, what does it take to become a Eutopian? Does one simply declare membership or is there some application process? Is there a pledge of some kind? Does Eutopia need to have fees or taxes to support itself financially? Once members have joined how are decisions made? And so on.

So far the Network State Dashboard is tracking 26 startup nations. I am curious to see how they have been answering some of these questions. I believe getting the answers to some of these baseline questions right is essential, as they will determine who feels initially attracted to the state. And these core concepts should intentionally be difficult to change, as they speak to the foundational nature of the society.

If you have thoughts on a minimum viable Eutopian state, I would love to read them in the comments!

Posted: 11th December 2022
Tags:  eutopia network state
