The Unbundling of Scale

During the industrial age, economies of scale were a major source of competitive advantage. Many production processes exhibited decreasing unit costs over a very large range of output. Steel was a classic example: its scale economies resulted in a few very large steel companies dominating the market (at least until the rise of mini mills, which made steel from scrap).

The defensibility of scale in the information age is a lot less clear. Why? Because there are companies that provide scale to others. Amazon AWS is a great example: you no longer need your own data center to have servers anywhere in the world. Cloudflare is another, giving you a global edge network. Twilio connects you with carriers around the world. Sift gives you fraud detection, and Firebase gives you data synchronization.

I should be quick to point out that network effects (and the related supermodularity of information) may still give large players a big advantage but they should not count on traditional scale economies as being defensible by themselves. These will be available through service providers even to the smallest of startups.

It is also useful to think about why this has become possible now when it was not possible during the industrial era. In industrial processes, many of the steps had to be tightly coupled. For instance, you could not heat up the steel in one place and pour it into a shape in a different one. With information, by contrast, the cost of transfer and combination is extremely low, and so we should expect to see a much finer slicing (something I wrote about nearly 6 years ago in a post titled “Feature vs Company”).

Posted: 30th July 2014
Tags:  strategy scale unbundling competitive advantage

Strong AI: Employment Impact Doesn’t Depend on “How” Computers Think

There was a fun Twitter convo about strong AI between Patrick Collison and Marc Andreessen. I also love speculating about this topic but before I engage in that I want to point out that from an employment perspective this is a red herring. A car is not a horse. And yet cars replaced horses in transportation. A tractor is not a horse. And yet tractors replaced horses in agriculture. A tank is not a horse. And yet tanks replaced horses in war.

It is irrelevant from an employment perspective how a robot or machine intelligence solves a problem. What matters is whether or not it solves that problem at a lower unit cost than a human. In fact, even the problem statement itself is up for change, meaning that a machine may solve a slightly different problem or a sub-problem that can then be used (together with cheaper, read: less trained, humans) to accomplish the same overall outcome.

Let me illustrate that last statement with two examples. In order to get a car to its destination you could either use a trained driver who knows where everything is in the city or an untrained one who simply knows how to drive together with routing software (see Semil’s blog post). Similarly you could use skilled drivers of forklift trucks in a traditionally laid out warehouse or you can dramatically change the layout so robots can move the shelves.

That’s why I think the “it will be a long time before computers/robots can do x” argument *underestimates* the labor market impact. As always, it will take a bit longer than we assume for these impacts to actually arrive, but when they do they will be more profound than we anticipate.

So now for the speculation on strong AI. Personally, I am in the same camp as Patrick. The fact that it hasn’t happened in the first 70 years of having computers barely moves my prior on whether it can happen. After all, human intelligence took millions of years to emerge. Also, I don’t really subscribe to the idea that there is anything more to the functioning of our intelligence than, well, our brains.

Here too I think we need to be careful about what exactly we expect to see. Airplanes fly but they don’t fly exactly like birds. Yet, I think we can all agree that the similarity in effect is more important than the differences in technique. The same goes for intelligence. Machines may exhibit human like intelligence and yet use a somewhat different way of getting there.

Posted: 28th July 2014
Tags:  artificial intelligence robots machine learning employment work

Is it 1880 or 1914?

I haven’t been posting much the last couple of weeks. There are a variety of reasons for that including spending extra time reading and learning new things but the main reason is that I am trying to understand better where we are today.

What do I mean by that? I have been making the argument that we are facing a transition as big as the one from agrarian society to industrial society (and from hunter-gatherer to agrarian before that). Are we near the beginning of this transition or somewhere in the middle? Or, put more starkly: are we in the 1880s or are we in 1914? That is, do we have some time to figure things out, or are we on the brink of disaster?

Ultimately the answer to this will only be knowable ex post, once history has unfolded, but in the meantime here is why I think looking at the past is instructive. Around 1880 we had all the ingredients for rapid industrialization in place. We knew how to make steel in quantity, we had the beginnings of electric power, and we had more or less figured out the assembly line system of factory layout. The population had shifted from the country to the city as part of urbanization (with most cities having pretty horrid living conditions).

At the same time we had the old agrarian system defining much of the political landscape including territorial conflicts, such as the Franco-Prussian war of 1870-71. We wound up with a volatile combination of disenfranchised city dwellers (who were not yet reaping the rewards of industrialization) and leaders who were still obsessed with “land” as the primary source of wealth and power.

A lot about where we are today has similar characteristics. We have all of the ingredients for an information society in place: a global network that connects everyone, rapidly improving machine learning and automation, additive manufacturing and robotics (to name just a few key ingredients). We also have globalized many aspects of the economy with global corporations and supply chains.

Yet again the political leadership throughout the world is still largely thinking in industrial terms, including emphasizing the nation state as the geographic organizing principle (and playing up ethnic and religious differences). Once again we also have large groups of people who feel pushed around or left behind by the emerging information economy.

In combination then it seems like we have once again reached a period of potentially dramatic change. We are using dubious tools, such as quantitative easing, to manage the economy. We are using information technology to surveil and control rather than empower people. All of these suppress short term volatility, but likely at the cost of making the eventual transition much worse.

Posted: 23rd July 2014
Tags:  history

CPU and Memory are the New Crude Oil

The definition of a commodity is a good that is “supplied without qualitative differentiation.” You can’t charge more than others for crude oil, you have to turn it at least into gasoline. And if you really want to charge a lot more you have to turn it into a plastic product.

CPU cycles and memory are the crude oil of our times. If you want to charge more for them you have to add value. If you are a SaaS company that provides a complete solution, you are adding a lot of value (the plastic product). Conversely, if you are a developer platform, you are just a smidgen above the commodity (the gasoline).

So when thinking about margin, a SaaS business might have 80-plus percent gross margin. But cloud platforms aimed at developers will wind up being in the low double digits or possibly even in the single digits using total revenues as the denominator. Put differently, the financial metrics for the latter over time will look more like a refinery or a retailer.

From a long term investment perspective though it is important to keep in mind that ultimately what matters is total cash flow. If you are Walmart you make very little margin on any one purchase but you process a lot of them. So developer cloud platforms have to be scale businesses. It’s not a surprise then that another retailer, Amazon, is the leading cloud player and recently announced their 42nd price cut.
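The scale point can be made concrete with a quick back-of-envelope sketch. The margin figures below are illustrative assumptions in the spirit of the numbers above, not actuals for any particular company:

```python
# Revenue required to generate the same gross profit at SaaS-like
# vs commodity-cloud-like gross margins (illustrative numbers only).

def revenue_needed(target_gross_profit, gross_margin):
    """Revenue required to hit a gross profit target at a given margin."""
    return target_gross_profit / gross_margin

target = 100_000_000  # say, $100M of gross profit

saas_revenue = revenue_needed(target, 0.80)   # 80% margin SaaS business
cloud_revenue = revenue_needed(target, 0.10)  # 10% margin developer platform

print(saas_revenue)   # about 125 million
print(cloud_revenue)  # about 1 billion
```

At the same gross profit target, the low-margin platform needs roughly eight times the revenue, which is exactly why developer cloud platforms have to be scale businesses.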

You can have a great business as a SaaS company or as a cloud platform; just don’t mistake one for the other, as how you think about financial metrics will be quite different.

Posted: 15th July 2014
Tags:  saas cloud commodity

SeeChange: Video Will Be Everywhere (What Do We Want?)

I recently finished Dave Eggers’s “The Circle” which provided a good challenge to my baseline view that more transparency is good and that data protection is a futile effort. One of the systems in the novel is an easy to deploy camera that anyone can point at anything and provide a livestream. Called “SeeChange” the promoters argue that it will provide for reduced crime, increased safety and just additional information all around. The detractors are either smeared (with planted information) or hunted down by aggressive mobs.

SeeChange doesn’t exist in precisely that form, but cameras are rapidly becoming cheaper and eventually a bundle of camera plus networking plus battery / solar recharge will drop below $50 and millions of always on cameras will be watching us. This is not a question of if but when. And there are startups, such as Placemeter which lets individuals contribute and be paid for video feeds to help with city data (including car and foot traffic flows).

I was thinking about this a lot over the weekend as Susan and I were driving a lot to visit two of our children at different summer camps. We were on the road for over 12 hours and saw a lot of police cars that had pulled over speeders. We too were speeding most of the time. It struck me that the existing system of enforcement is both arbitrary and inefficient. Conversely a system of cameras would let us raise the overall speed limit and more importantly make it adaptive to the conditions (eg visibility, rain). Tickets can then be issued automatically on the basis of reading license plates. Similar systems are already in use in Germany.

Here then are two possible extremes: on one end, we could try to outlaw such cameras or outlaw their broad-based deployment. This seems like a completely losing proposition to me. Enforcement would be impossible (short of a dictatorship). We would wind up with police surveillance owned and controlled by the existing power structure. And if you doubt the potency of video to curtail official violence, read the grim account of officers deliberately taking inmates to areas on Rikers without video cameras before brutally beating them.

The other extreme would be to actively promote more cameras and make all the streams public at all times. In The Circle this extreme also includes cameras (not unlike Google Glass) worn around the neck that broadcast what the wearer is doing — including their meetings and conversations, which politicians are using to “go transparent.” A centralized system would of course have a huge opportunity for manipulation so the extreme would be to have lots of these streaming directly or through many different systems.

There may be viable in-between positions which impose some limits (eg requiring disclosure of cameras inside restaurants or other semi-public locations). This is a real challenge and one we should debate publicly with some urgency. Literature can and should contribute to that debate. Much as I am a fan of work like Eggers’s and Doctorow’s, it is almost too easy to write a dystopia these days. The real challenge, it seems to me, is to write a new utopia.

Posted: 14th July 2014
Tags:  video surveillance dystopia utopia

More On Basic Income (and Robots)

Marc Andreessen recently wrote a post titled “This is Probably a Good Time to Say That I Don’t Believe Robots Will Eat All the Jobs …” — like all of Marc’s posts it is full of good ideas and worth reading. I agree with many of the points, including the benefits of technology-driven deflation and being long on human creativity to find interesting things for us to do. There is, however, a critical distribution question that Marc mostly avoids but that is at the crux of the transition.

Marc writes:

Imagine 6 billion or 10 billion people doing nothing but arts and sciences, culture and exploring and learning. What a world that would be. The problem seems unlikely to be that we’ll get there too fast. The problem seems likely to be that we’ll get there too slow.

We could add other great activities to this list, such as caring for each other, our communities and the environment. I too find that a desirable state of the world and I am optimistic that we can get there in the long run.

The transition is difficult though because pretty much all of these activities are either not paid at all (eg making music, cleaning up the environment) or paid poorly (eg teaching, nursing, open source, basic research). Now we can use crowdfunding mechanisms such as Kickstarter, Patreon, Beacon, Experiment, etc to pay some people for some of these and over time that can and will grow significantly. Still the money has to come from someone. And that’s a meaningful limitation at a time of great and growing inequality with nearly half of Americans without any savings (or net in debt).

A higher minimum wage, as vigorously argued for in an interesting recent piece on Politico, can inject some short term liquidity into the economy and I am sympathetic to that but it is also a very blunt instrument and still doesn’t help with the many unpriced activities. The same goes for government mandated shorter working hours or longer vacations (although I am pretty sure that Google’s founders did not have a government mandate in mind).

Marc suggests that we “[c]reate and sustain a vigorous social safety net so that people are not stranded and unable to provide for their families.” Our present approach to that though has gotten us stuck with a large government sector and complicated entitlement programs. 

This brings me once again to the idea of a guaranteed basic income. This is a potentially attractive alternative for a number of reasons:

First, it sets human creativity free to work on whatever comes to mind. For many people that could be making music or learning something new or doing research.

Second, it does not suppress the market mechanism. Innovative new products and services can continue to emerge. Much of that can be artisanal products or high touch services (not just new technology). 

Third, it will allow crowdfunding to expand massively in scale and simultaneously permit much smaller federal, state and local government (they still have a role — I am not a libertarian and believe that market failures are real and some regulation and enforcement are needed, eg sewage, police).

Fourth, it will force us to more rapidly automate dangerous and unpleasant jobs. Many of these are currently held by people who would much rather engage in one of the activities from above.

Fifth, in a world of technological deflation, a basic income could be deflationary instead of inflationary. How? Because it could increase the amount of time that is volunteered.

I will write more about how such a system could be financed. In the meantime, suffice it to say that one of the (relatively few) roles of government should be the collection of taxes from companies and individuals (like myself) who have already benefited from technological change.

PS One way to think about a basic income is as follows: it removes a currently binding constraint on time optimization for many individuals allowing them to escape a local minimum — that in turn lets the economy as a whole adjust much faster (and with far less pain). 

Posted: 7th July 2014
Tags:  basic income work labor robots

A Basic Income Experiment I Would Like to See (Detroit)

I have referred to a basic income many times here on Continuations, so now it is time to flesh out a bit more how this could ever possibly work. Many people do simple back-of-the-envelope math: there are 319 million people in the US, so paying each of them $10K per year would cost $3.2 trillion, which exceeds the annual federal tax base (about $2.7 trillion). They then quickly conclude that the whole thing is a lost cause.

My basic contention though is that the amounts for a basic income could be significantly less and still achieve the goals of letting local activity flourish. So with that in mind here is an experiment I would like to see: have the city of Detroit recruit up to a thousand people to one of their destitute areas with a basic income set at something like $400 per month. So the monthly cost of the experiment including some initial overhead might be $500K or $6 million for a full year. I believe the funds for that could be raised from people like myself who are interested in seeing such an experiment.
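The arithmetic behind both the naive national number and the Detroit proposal is simple enough to lay out explicitly. Note that the $100K/month overhead line below is my own assumption, chosen to reconcile the stipend total with the roughly $500K/month figure mentioned above:

```python
# Back-of-envelope cost of the proposed Detroit experiment.
participants = 1_000
stipend_per_month = 400                      # dollars per person per month

stipends = participants * stipend_per_month  # 400,000 per month
overhead = 100_000                           # assumed admin/setup overhead
monthly_total = stipends + overhead          # 500,000 per month
annual_total = monthly_total * 12            # 6,000,000 per year

# Contrast with the naive national calculation criticized above:
us_population = 319_000_000
naive_annual = us_population * 10_000        # 3.19 trillion per year

print(monthly_total, annual_total, naive_annual)
```

The experiment comes out at $6 million a year, roughly five orders of magnitude cheaper than the naive national figure, which is what makes it fundable by private individuals.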

Now imagine three or four people sharing a house. They could easily afford the utility bill (unlike the current situation in Detroit, where almost half of households cannot even pay their water bill). As part of the experiment the city should also work to provide high speed internet at only slightly above cost in a utility model. By picking a relatively compact area this could be done wirelessly as a start to reduce the initial setup cost and time. Eventually, if the experiment works, the network can be expanded. There are plenty of houses in Detroit being either razed entirely or auctioned off in the low thousands of dollars, so housing should be the least of the issues. Especially because people with a basic income would be excellent credit risks on a P2P lending platform such as Lending Club.

Here are two other components of the experiment that I think would be critical. First, there should be relatively little regulation on activity, for instance to make it possible to do local farming, operate small schools, drive others around, etc., exactly the kind of activities that historically allowed people in communities to help each other. Second, I believe that participants for this experiment should be recruited and screened. I am not sure exactly what the right criteria would be, but ideally they generate some diversity in interests and backgrounds (eg include people who already know how to renovate houses). One could think of this as a colony, not in a new geographic area but in a new social arrangement. Therefore initial recruitment is essential to increase the likelihood of success.

Would love to hear from anyone who thinks this kind of experiment would be interesting. Please also provide any and all feedback on the conditions for such an experiment that you think make sense (or don’t).

Posted: 24th June 2014
Tags:  basic income experiment Detroit

Debating Disruption: Mind the Non-linear

Following the publication of Jill Lepore’s “The Disruption Machine” and Clayton Christensen’s vigorous response in an interview, there has been a healthy debate around the merits and even existence of disruption in many posts and tweets. I had been busy preparing for and then teaching a computer bootcamp for our children and some friends, so I had mostly ignored this debate. It is of course highly relevant to my claim that we are at the beginning of a massive transition from industrial to information society.

I was particularly intrigued by Nassim Taleb’s tweeted claim that

"Disruption" has to be BS as it is mathematically incompatible with the Lindy Effect. The 2 cannot coexist.

The Lindy Effect states that for many types of objects but especially for technology and ideas the best predictor of their expected future lifespan is how long they have already been around. I believe he pinpoints the crux of the debate, although he is wrong on a critical detail which is the power of non-linear change (something he certainly appreciates as it permeates his books including the fantastic Antifragile).

The reason a lot of technology persists is because in many areas change has only been incremental. We continue to use glasses, tables, silverware, etc. because these are technologies for which change has been linear/incremental (at best). But we drive cars to work instead of riding horses because cars represent a non-linear change over horses. In fact, horses turn out to have been eclipsed by non-linear technology changes in agriculture, warfare and transportation which is why we have gone from having 26 million working horses in the US in 1916 to a few thousand now.

Nowhere have we seen more exponential (ie highly non-linear) change than in information technology. We no longer use an abacus, or a slide rule, or a mechanical tabulator, or a room-filling mainframe, or a million-dollar workstation, because the phone in our pockets has more compute power than all of these! These technologies were disrupted by the exponentially better ones that came after them. And the Internet has been an exponential change for the size and scale of networks, which is why there are now networks in which hundreds of millions and even billions of people participate. That too has disrupted and will continue to disrupt existing smaller networks and hierarchies.
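It is easy to underappreciate just how different exponential change is from linear change. A quick sketch, assuming a doubling every two years (a rough Moore’s law cadence, used here purely as an illustration):

```python
# Growth factor from repeated doubling: 2 ** (years / doubling_period).
def growth_factor(years, doubling_period=2):
    return 2 ** (years / doubling_period)

print(growth_factor(10))  # 32.0 — one decade gives a 32x improvement
print(growth_factor(40))  # 1048576.0 — four decades give about a million-fold
```

A linear process improving by the same initial step would be 20x better after four decades; the exponential one is a million times better. That gap is what separates disruption from incremental improvement.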

Yet even in the world of information technology and networks, the word disruption is used all too frequently and for things that aren’t in fact disruptive. Craigslist represented a non-linear change for the classified ad business. It took ads that cost tens or even hundreds of dollars and reached thousands of people and replaced them with ads that were free and reached millions. That’s a highly non-linear change and it resulted in a disruption of the newspaper industry. New web sites that offer prettier listings, more functionality, etc. are not disrupting Craigslist so much as making incremental improvements. Hence the persistence (much to many technologists’ consternation) of Craigslist.

So yes. The Lindy Effect is real. Disruption is rare. But it does exist and it is caused by non-linear changes in technology. We happen to be in a period of such non-linear change because we have figured out how to use computers for lots and lots of things, including machine learning and robotics, DNA analysis and synthesis, and scanning and additive manufacturing of objects. As a result the exponential improvements in computers powered by Moore’s and Metcalfe’s “laws” are invading many other industries.

Posted: 23rd June 2014
Tags:  disruption lindy effect non-linearity

Computer Bootcamp Day 2: Hosting Your Website

Today was the second day of our mini computer bootcamp for friends and family. The goal was to have a functioning and hosted website up and running.

We started out by using nslookup to find the IP address of a web server, using a well-known site as the example. If you are on Mac OS you can simply do this in the Terminal (which we learned about yesterday and used again extensively today; I am not sure what the easiest way to do this is on Windows, but if you have your Raspberry Pi handy from yesterday you can use that instead). We then typed the IP address directly into the browser’s address bar to demonstrate that it does indeed load the same web site.

We then learned about the Domain Name System (DNS) and how it allows computers to turn a domain name into an IP address. I explained the structure of a domain name and what a fully formed URL looks like. We then talked about HTTP and how the request and response cycle works between the web browser and a web server on the other end, with an initial GET request potentially resulting in many more files being requested from the original and other servers.
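For readers who want to poke at the request/response cycle themselves, here is a small self-contained Python sketch (not something we used in the bootcamp) that starts a tiny web server locally and then issues a GET request against it, much as a browser would:

```python
# Demonstrate the HTTP request/response cycle entirely on the local machine:
# a tiny server answers GET requests, and a client fetches a page from it.
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

class HelloHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        body = b"<html><body><h1>Hello</h1></body></html>"
        self.send_response(200)                     # status line
        self.send_header("Content-Type", "text/html")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()                          # blank line ends headers
        self.wfile.write(body)                      # response body

    def log_message(self, *args):
        pass  # silence per-request logging

server = HTTPServer(("127.0.0.1", 0), HelloHandler)  # port 0 = any free port
threading.Thread(target=server.serve_forever, daemon=True).start()

# The "browser" side: issue a GET and read status and body.
url = f"http://127.0.0.1:{server.server_port}/"
with urllib.request.urlopen(url) as response:
    status = response.status
    html = response.read()
server.shutdown()

print(status)          # 200
print(html.decode())   # the HTML the server sent back
```

Everything a real page load does (many more requests for CSS, Javascript and images) is just this same cycle repeated.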

We then used the Developer Tools in Chrome to examine how all of these parts get loaded, starting with the HTML of a page. We went to both the Elements tab and the Network tab in the Developer Tools. In the Network tab we looked at how all the components of a page, such as CSS and Javascript files and images, get loaded. And in the Elements tab we played around with actually changing the content on the page.

We then proceeded to install the Apache web server on our RPis via sudo apt-get install apache2. To make sure that they were fully up-to-date we first ran sudo apt-get update and sudo apt-get upgrade (the latter ran for nearly 30 minutes). While we were waiting we dove a bit deeper into HTML and created sample files on our laptops, which we loaded using “File -> Open File” in our web browsers. We included at least one unordered list using the ul and li tags as well as an image using the img tag.

Once we had our RPis running Apache, we used our locally created samples to replace the contents of /var/www/index.html (use cd /var/www first and then sudo nano index.html to edit). We had great fun visiting each other’s web sites simply by typing the IP address of the RPi into the browser address bar. I explained that these IP addresses were only valid on the local network and had been assigned via DHCP. I then showed how I could make any one of the RPis appear on the public internet by adding a rule to the firewall (and drew a diagram showing how the firewall separates the public internet from the local area network we were on).
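The point that those DHCP-assigned addresses are only valid on the local network can even be checked in code: Python’s standard ipaddress module knows which ranges are reserved for private networks (the example addresses below are just illustrative):

```python
import ipaddress

# Typical DHCP-assigned home/classroom addresses fall in private ranges
# and are not routable on the public internet; 8.8.8.8 is a public address.
for addr in ["192.168.1.42", "10.0.0.7", "8.8.8.8"]:
    ip = ipaddress.ip_address(addr)
    print(addr, ip.is_private)
# 192.168.1.42 True
# 10.0.0.7 True
# 8.8.8.8 False
```

This is why the firewall rule was needed: something has to map a public address to the RPi’s private one before the outside world can reach it.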

During lunch we talked more about the HTTP request-response cycle and how there are different request methods such as POST, PUT and DELETE. Answering a question I explained how cookies are set and then included in subsequent requests to a domain and how inclusion of code from other domains, such as a Facebook like button which is served up from Facebook’s servers, means that when you visit that page, Facebook knows about it. We also talked about how web content can be cached at multiple layers such as Cloudflare, your local machine or the current browser session.

After lunch I spun up a cloud server for each student and one for myself at Digital Ocean. We used ssh to connect to our servers and once again installed Apache and started to edit index.html. We then learned how to update the zone file at our domain registrar to make a domain point to our cloud server. Thankfully Susan and I have lots of dormant domains and we let everyone in the bootcamp pick one for this exercise. Everyone was super excited about how easy it was to have their own website up and running.

We then added a CSS file to our website by including a <link rel="stylesheet" type="text/css" href="main.css"> in our HTML and started styling it with simple changes, such as background and font color and using sans-serif as the font-family. We learned the basics of CSS syntax: specifying a selector such as a tag (or a class or an id) and then adding styles.

As a next step we added a bit of Javascript to our site using a <script type="text/javascript" src="first.js"></script> tag. At first we added a simple alert("Hello World") but then learned how to attach this code to an element on the page by using the onClick event on one of the HTML tags. One of the students pointed out that I had now inserted some Javascript directly into the HTML while I had earlier said that one shouldn’t put styles there. So I wound up explaining how to use the jQuery library to add the onClick event handler from the Javascript.

At this point everyone was happy but exhausted and we called it a day. We have left the cloud servers and websites up for now so that everyone who participated can revisit theirs when at home. This seemed like a big success and I can feel a follow up coming in the fall — I certainly thoroughly enjoyed myself!

Posted: 22nd June 2014
Tags:  computer bootcamp website hosting learning
