Our Common Humanity: We Still Have a Lot to Build on the Internet

There have been a number of tweets in my timeline roughly saying: “Ferguson, Gaza, Syria, Ukraine — what is going on with the world?” I don’t know that these can all be reduced to a single underlying reason although the current rate of global change likely has something to do with it. There is, however, one thing they all have in common: an abject failure to recognize each other’s humanity.

This failure is all the more frustrating at a time when we should all know that we are inhabiting a rock hurtling through space protected by a ridiculously thin atmosphere. An atmosphere that we have been collectively mistreating, together with our water and land. There is now more than ever an urgent need to correct course, if not for the sake of other species, then at least for our own.

Yet this failure shouldn’t be surprising, as we are engaged in all sorts of practices, both individually and collectively, that emphasize and reinforce otherness over shared humanity. Organized religion deserves a lot of the blame here, especially when not tempered by science (possibly modulo Buddhism, about which I am learning more), but so do everyday politics and the global economy with its vast and growing differences in wealth and income.

That’s why it is all the more important that we think critically about the role of the internet. It connects us with each other as never before. Fundamentally it has the power to let us become closer, recognize how we are all human and fight the threats we face as a species. Yet to date so many internet systems seem to further our differences (although they do have other positive sides). In the blogosphere we read predominantly what we already believe. On Facebook we mostly connect with the people we already know. On Twitter we often yell at each other because we don’t see the human behind the tweet.

So if/when you are thinking about what to build, please consider anything that has a shot at bringing us closer together and letting us see each other as human beings first and foremost. I will readily admit that I don’t know exactly what that would be, although I suspect that anything that helps people learn as well as anything that shows people in much more depth are steps in the right direction.

Posted: 21st August 2014
Tags:  humanity internet

Buzzfeed, Native Ads and Crowdfunding (Beacon Reader et al)

I will be the first to admit that I am a sucker for a good listicle. That’s probably a result of being overly prone to making lists myself. I am also not offended by native advertising per se. After all, it completely predates the internet. Thumbing through pretty much any fashion magazine or one of the glossy local publications like “Westchester” will make that instantly evident. All of this is to say that I am happy for Jonah Peretti and the folks at Buzzfeed about the recent financing led by Chris Dixon for A16Z.

But I feel very differently when it comes to original news coverage and investigative journalism. There I am completely with John Oliver: I don’t want native advertising anywhere near it. Or any other advertising. Or even an editorial board for that matter (cf the disastrous NY Times coverage of many topics). So what is the alternative funding model? Crowdfunding. That’s why I have been a big fan of Beacon Reader (where I just supported sending an independent journalist to Ferguson, Missouri). Similar efforts are underway in other parts of the world, including Contributoria (UK), De Correspondent (Netherlands) and Krautreporter (Germany).

Posted: 13th August 2014
Tags:  buzzfeed native advertising beacon reader crowdfunding

Organizing Knowledge Around Basic Concepts

This summer I have been blogging less. That’s largely because I have been reading more and also been spending more time thinking about some of my favorite topics. All of this has me more convinced than ever that we need to invest heavily in elucidating the interconnections of knowledge. Way too much of our teaching and learning occurs in highly fragmented pieces. I have written about this before but now I have an idea for what to do about it.

I want to help organize knowledge around basic concepts such as trees, waves, growth, etc. What do I mean by this? Take trees as an example. A tree is a structure that has roots, a trunk and then branches. From that basic concept one can explore knowledge in all sorts of directions. There are of course the different varieties of trees, such as deciduous and evergreen. Then there are famous trees in literature, such as the Tree of Knowledge of Good and Evil in Genesis. Tree-related expressions in language would be covered, as in “going back to one’s roots,” which links over to the use of the structure of a tree in other fields, as in a family tree. From there it is just a small step to trees as a data structure in programming.
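That last step — the tree as a data structure — can be sketched in a few lines. This is a minimal illustration, not any particular library’s API; the node values (a made-up family tree) are purely hypothetical:

```python
# A minimal tree: each node has a value and a list of branches (children).
class TreeNode:
    def __init__(self, value, children=None):
        self.value = value
        self.children = children or []

    def depth_first(self):
        """Yield every value in the tree, starting from the root."""
        yield self.value
        for child in self.children:
            yield from child.depth_first()


# The same root/branch structure as a family tree (hypothetical names):
family = TreeNode("grandparent", [
    TreeNode("parent A", [TreeNode("child 1"), TreeNode("child 2")]),
    TreeNode("parent B"),
])

print(list(family.depth_first()))
# → ['grandparent', 'parent A', 'child 1', 'child 2', 'parent B']
```

The same handful of lines describes a file system, an org chart, or a parse tree — which is exactly the kind of cross-domain reuse of a basic concept described above.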

With relatively few basic concepts like trees we can cover a lot (all?) of knowledge and make its interconnections visible. I am not yet sure what the right format for this is. Maybe it is a web site of its own but it could also be a kind of overlay on the existing web, possibly using a service such as Wayfinder. If anyone is aware of other efforts like this I would love to hear about them, as well as any thoughts on how to best organize this.

Posted: 11th August 2014
Tags:  knowledge learning

Foursquare: Personalized Recommendations Unleashed

Today’s release of the latest version of Foursquare has me super excited. It puts together all of the data and capabilities the team there has built over the last five years to provide the best local recommendations, based on lots of detailed data and on knowing your tastes. Dennis’s vision for Foursquare has always been to create the best local discovery engine — one that’s personalized just for you — by crowdsourcing bits of information from users all over the world. Today brings Foursquare a big step closer to that vision.

Unlike traditional review-based systems, Foursquare gathers lots of information on where people actually go and spend time and what they recommend. So instead of simply providing an average of a number of stars, Foursquare computes a score. Because of the many signals that go into this score, it is difficult for any one individual (such as the venue’s owner or a drive-by reviewer) to manipulate. As another important innovation, Foursquare takes your specific tastes into account when making recommendations. When you start the new app for the first time it will prompt you with possible tastes (these are derived from your past usage but you can confirm and change them). Those tastes are then highlighted in the recommendations and you can change them over time.
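Foursquare hasn’t published its scoring model, but the contrast with a plain star average can be sketched. In this hypothetical version, behavioral signals that are hard for one person to fake (visits, repeat visits) carry more weight than explicit ratings; every signal name and weight below is made up for illustration:

```python
def venue_score(signals, weights):
    """Combine several behavioral signals into a single 0-10 score.

    Each signal value is assumed to be pre-normalized to the range [0, 1].
    """
    total = sum(weights[name] * signals[name] for name in weights)
    return round(10 * total / sum(weights.values()), 1)


# Hypothetical weights: actual visits and repeat visits count for more than
# star ratings, so no single reviewer (or venue owner) can move the score much.
weights = {"visits": 4, "repeat_visits": 2, "tip_sentiment": 2, "ratings": 2}
signals = {"visits": 0.75, "repeat_visits": 0.5, "tip_sentiment": 0.25, "ratings": 0.75}

print(venue_score(signals, weights))  # → 6.0
```

Even if a venue owner posted a perfect rating (pushing `ratings` to 1.0), it would shift this score by only a fraction of a point, because the behavioral signals dominate the weighting.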

With this new release there is also a different privacy model. Everything in the new Foursquare app is explicitly public and that includes a new asymmetric follower model. It means I can follow anyone to see the tips that they are leaving and more importantly have the recommendations I receive be influenced by experts that I have personally curated. You can go ahead and follow me — I am not yet an expert on anything although I am getting close on airports, American food and Chelsea.

This completely public model became possible by splitting out checkins (the privacy sensitive part) into their own app called Swarm. Swarm retains the symmetric friends model, which means that you are sharing your location only with people you have confirmed. Better yet, with Swarm you no longer even need to check in. The app knows where you are and shares that location with your confirmed friends (only at the neighborhood level — to provide a detailed location you check in). Again, this passive location sharing has become possible without being a drain on your battery through the company’s technology and accumulated data.

Both Swarm and Foursquare will continue to improve over the coming months. That’s not just because the team has a great roadmap and will also observe user behavior and feedback but also because the system constantly gets better as more people use it. Speaking of which, I am on the road today traveling up to Massachusetts and will shortly be using Foursquare to find a lunch spot.

Posted: 6th August 2014
Tags:  foursquare local recommendations personalization

It is OK to Worry about Work (& Doesn’t Make you a Luddite or Socialist)

In venture capital we come across startups all the time that are building something that has been tried in the past and failed. It would be very easy to dismiss these opportunities based on a naive “pattern matching” approach to investing. Instead at USV we always ask ourselves “what is different now?” to understand whether the lessons from the past do in fact apply or something profound has changed.

This is important because things that failed in the past tend to eventually succeed due to changes in technology, the market or society / behavior. It is useful to keep some big examples in mind to remind oneself of this. Here are a few: Apple had an epic failure with the Newton but eventually a huge success with the iPhone (changes: further turns of Moore’s law, cellular network buildout). Car companies had many failed stabs at electric cars until Tesla came along (changes: more expensive gasoline, cheaper batteries, critical mass of affluent environmentally conscious buyers). There were attempts at digital currencies (remember Flooz, anyone?) which failed, where bitcoin appears to be succeeding (changes: technical breakthrough, growing distrust of governments).

Or how about this one? Social networks in the past failed or didn’t make much money (Friendster, MySpace) until Facebook came along (and recently inspired a whole series of tweets by Marc about failed FB predictions). Nassim Taleb in Antifragile and elsewhere has referred to the “it hasn’t happened so it won’t happen” argument as the turkey fallacy (the turkey is happy about the farmer feeding it every day until Thanksgiving).

All of this is to say that I strongly disagree with Marc’s assessment that people who worry about technology’s impact on the labor market today are either Luddites or socialists simply because this argument has been wrong in the past. There are plenty of things that have changed fundamentally since 1964. Most importantly, the cost of computing has plummeted and the power of computing has skyrocketed (see my comparison between NASA’s compute power in the 1960s and a Raspberry Pi).

During the first industrial revolution, people worried about machines replacing human workers because machines provided mechanical power. Well, it turned out that humans were still needed because we supplied brain power. This time round, though, at the dawning of the “Second Machine Age” we are worrying because machines are providing brain power. That’s a new and different set of circumstances, and so we should rightly re-examine this question and not just take a no answer for granted.

Marc has written that he is “way long human creativity” to say that even if computers replace humans for some thinking tasks we will be the ones who are creative (and come up with interesting things for ourselves to do). Even if you believe that — and I happen to share this belief — that still doesn’t mean that we shouldn’t worry about the transition period.

In fact if we want to look at history for a lesson we would do well to keep in mind that industrialization was incredibly ugly and had us go through lots of revolutions and two world wars. Agricultural jobs were lost at a far faster pace (due to mechanization) than industrial jobs appeared. And the early industrial jobs were awful involving horrid working hours and conditions.

In summary: something has changed (computing) and it is entirely appropriate to worry about existing jobs disappearing. At a minimum the worry is that they will disappear far faster than we can figure out how to be creative and at a maximum that we won’t in fact figure out what to substitute for them (I am personally less worried about the latter but not dismissive).

Posted: 4th August 2014
Tags:  work employment computing

The Unbundling of Scale

During the industrial age, economies of scale were a major source of competitive advantage. Many production processes exhibited decreasing unit costs over a very large range of output. Steel was a classic example which resulted in a few very large steel companies dominating the market (at least until the rise of mini mills which made steel from scrap).

The defensibility of scale in the information age is a lot less clear. Why? Because there are companies that are providing scale to others. Amazon AWS is a great example. No longer do you need your own data center to have servers anywhere in the world. Cloudflare is another which gives you a global edge network. Twilio connects you with carriers around the world. Sift gives you fraud detection and Firebase data synchronization.

I should be quick to point out that network effects (and the related supermodularity of information) may still give large players a big advantage but they should not count on traditional scale economies as being defensible by themselves. These will be available through service providers even to the smallest of startups.

It is also useful to think about how this has become possible if it was not possible during the industrial era. In industrial processes, many of the steps had to be tightly coupled. For instance, you could not heat up steel in one place and pour it into a shape in a different one. With information, by contrast, the cost of transfer and combination is extremely low, and so we should expect to see a much finer slicing (something I wrote about nearly 6 years ago in a post titled “Feature vs Company”).

Posted: 30th July 2014
Tags:  strategy scale unbundling competitive advantage

Strong AI: Employment Impact Doesn’t Depend on “How” Computers Think

There was a fun Twitter convo about strong AI between Patrick Collison and Marc Andreessen. I also love speculating about this topic but before I engage in that I want to point out that from an employment perspective this is a red herring. A car is not a horse. And yet cars replaced horses in transportation. A tractor is not a horse. And yet tractors replaced horses in agriculture. A tank is not a horse. And yet tanks replaced horses in war.

It is irrelevant from an employment perspective how a robot or machine intelligence solves a problem. What matters is whether or not it solves that problem at a lower unit cost than a human. In fact, even the problem statement itself is up for change, meaning that a machine may solve a slightly different problem or a sub-problem that can then be used (together with cheaper — read less trained — humans) to accomplish the same overall outcome.

Let me illustrate that last statement with two examples. In order to get a car to its destination you could either use a trained driver who knows where everything is in the city or an untrained one who simply knows how to drive together with routing software (see Semil’s blog post). Similarly you could use skilled drivers of forklift trucks in a traditionally laid out warehouse or you can dramatically change the layout so robots can move the shelves.

That’s why I think the “it will be a long time before computers/robots can do x” argument *underestimates* the labor market impact. As always, it will take a bit longer than we assume for these impacts to actually arrive, but when they do it will be more profound than we anticipate.

So now for the speculation on strong AI. Personally, I am in the same camp as Patrick. The fact that it hasn’t happened in the first 70 years of having computers barely moves my prior on whether it can happen. After all, human intelligence took millions of years to emerge. Also, I don’t really subscribe to the idea that there is anything more to the functioning of our intelligence than, well, our brains.

Here too I think we need to be careful about what exactly we expect to see. Airplanes fly but they don’t fly exactly like birds. Yet, I think we can all agree that the similarity in effect is more important than the differences in technique. The same goes for intelligence. Machines may exhibit human like intelligence and yet use a somewhat different way of getting there.

Posted: 28th July 2014
Tags:  artificial intelligence robots machine learning employment work

Is it 1880 or 1914?

I haven’t been posting much the last couple of weeks. There are a variety of reasons for that including spending extra time reading and learning new things but the main reason is that I am trying to understand better where we are today.

What do I mean by that? I have been making the argument that we are facing a transition as big as the one from agrarian society to industrial society (and from hunter-gatherer to agrarian before that). Are we near the beginning of this transition or somewhere in the middle? Or, put more starkly, are we in the 1880s or are we in 1914? That is, do we have some time to figure things out or are we on the brink of disaster?

Ultimately the answer to this will only be knowable ex post, once history has unfolded, but in the meantime here is why I think looking at the past is instructive. At around 1880 we had all the ingredients for rapid industrialization in place. We knew how to make steel in quantity, we had the beginnings of electric power, and we had more or less figured out the assembly line system of factory layout. The population had shifted from the country to the city as part of urbanization (with most cities having pretty horrid living conditions).

At the same time we had the old agrarian system defining much of the political landscape including territorial conflicts, such as the Franco-Prussian war of 1870-71. We wound up with a volatile combination of disenfranchised city dwellers (who were not yet reaping the rewards of industrialization) and leaders who were still obsessed with “land” as the primary source of wealth and power.

A lot about where we are today has similar characteristics. We have all of the ingredients for an information society in place: a global network that connects everyone, rapidly improving machine learning and automation, additive manufacturing and robotics (to name just a few key ingredients). We also have globalized many aspects of the economy with global corporations and supply chains.

Yet again the political leadership throughout the world is still largely thinking in industrial terms, including emphasizing the nation state as the geographic organizing principle (and playing up ethnic and religious differences). Once again we also have large groups of people who feel pushed around or left behind by the emerging information economy.

In combination then it seems like we have once again reached a time period of potentially dramatic change. We are using dubious tools, such as quantitative easing, to manage the economy. We are using information technology to surveil and control rather than empower people. All of these suppress short-term volatility, but likely at the cost of making the eventual transition much worse.

Posted: 23rd July 2014
Tags:  history

CPU and Memory are the New Crude Oil

The definition of a commodity is a good that is “supplied without qualitative differentiation.” You can’t charge more than others for crude oil, you have to turn it at least into gasoline. And if you really want to charge a lot more you have to turn it into a plastic product.

CPU cycles and memory are the crude oil of our times. If you want to charge more for them you have to add value. If you are a SaaS company that provides a complete solution you are adding a lot of value (plastic product). Conversely, if you are a developer platform you are just a smidgen above the commodity (gasoline).

So when thinking about margin, a SaaS business might have 80-plus percent gross margin. But cloud platforms aimed at developers will wind up being in the low double digits or possibly even in the single digits using total revenues as the denominator. Put differently, the financial metrics for the latter over time will look more like a refinery or a retailer.

From a long term investment perspective though it is important to keep in mind that ultimately what matters is total cash flow. If you are Walmart you make very little margin on any one purchase but you process a lot of them. So developer cloud platforms have to be scale businesses. It’s not a surprise then that another retailer, Amazon, is the leading cloud player and recently announced their 42nd price cut.
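The contrast between margin and total cash flow in the two paragraphs above can be put in toy numbers (all of them hypothetical, chosen only to illustrate the shape of the comparison):

```python
def gross_margin(revenue, cost_of_revenue):
    """Gross margin as a fraction of revenue."""
    return (revenue - cost_of_revenue) / revenue


# Hypothetical SaaS company: high margin, modest volume.
saas_revenue, saas_cost = 10_000_000, 2_000_000
# Hypothetical developer cloud platform: thin margin, huge volume.
cloud_revenue, cloud_cost = 500_000_000, 450_000_000

print(gross_margin(saas_revenue, saas_cost))    # → 0.8 (80%, plastic product)
print(gross_margin(cloud_revenue, cloud_cost))  # → 0.1 (10%, refinery-like)

# Yet total gross profit -- what ultimately drives cash flow -- can still
# favor the low-margin, high-volume business:
print(saas_revenue - saas_cost)    # → 8000000
print(cloud_revenue - cloud_cost)  # → 50000000
```

Which is exactly the Walmart point: a much thinner slice of a much larger pie can be the bigger absolute number, provided the platform actually reaches scale.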

You can have a great business as a SaaS company or as a cloud platform; just don’t mistake one for the other, as how you think about financial metrics will be quite different.

Posted: 15th July 2014
Tags:  saas cloud commodity
