Tech Tuesday: No Computer is an Island (Networking)

It is funny how quickly we take for granted things that didn’t exist just a few years back (obligatory reference to the Louis C.K. rant about appreciating technology).  Today the thought of using a computer that’s not connected to a network is almost unimaginable.  What would you do on that computer?  How would you install new software?  How would you send and receive email?  How would you browse the web?  And yet pervasive computer networking is a relatively recent arrival, especially wireless.  That doesn’t mean networking itself is a new idea, though: there were networks almost as far back as the earliest computers.

Not surprisingly, one of the first networks was military: the so-called Semi-Automatic Ground Environment (SAGE) from the late 1950s.  Then in the 1960s the Semi-Automatic Business Research Environment (SABRE) went online.  If SABRE sounds familiar to you it should, because SABRE is the commercial airline reservation system that is still in use today!  Another seminal network was the Dartmouth Time-Sharing System (DTSS), which was in operation from 1964 to 1999.  What many early networks had in common is that they were more or less one-off proprietary implementations.

Yet today’s ubiquitous networking can trace its ancestry almost as far back as these proprietary networks: the Advanced Research Projects Agency Network (ARPANET) was also created in the 1960s, with the first message sent on October 29, 1969.  The ideas underlying ARPANET go back to a series of memos written in the early 1960s by J.C.R. Licklider, who even then used the term “Intergalactic Computer Network” (yay for big thinking!), and to papers written around the same time by Leonard Kleinrock.

While the ARPANET innovations are too numerous to list here, there are three that are absolutely critical to understand.  First is the notion of a packet-switched network.  All communication until then had been circuit-switched.  If you wanted to talk on the phone to another person across the country, a complete electrical circuit had to be established between you and that person.  A circuit is literally the electronic equivalent of two tin cans connected by string!  Circuits have exactly the same problems as string: they don’t scale well, as each conversation requires its own circuit, and they are easily disrupted if a circuit (string) is dropped (cut).  In a packet-switched network, communication is instead cut up into small data packets, and these packets can travel along different paths between their origin and destination.
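To make the idea concrete, here is a toy sketch in Python (purely illustrative, not any actual network protocol) of cutting a message into numbered packets that can arrive in any order and still be reassembled:

```python
import random

def packetize(message, size=8):
    """Cut a message into (offset, chunk) packets of at most `size` characters each."""
    return [(i, message[i:i + size]) for i in range(0, len(message), size)]

def reassemble(packets):
    """Packets may arrive in any order; sorting by offset restores the message."""
    return "".join(chunk for _, chunk in sorted(packets))

packets = packetize("No computer is an island.")
random.shuffle(packets)  # simulate packets taking different paths, arriving out of order
assert reassemble(packets) == "No computer is an island."
```

Real packet-switched networks of course add headers with addresses, checksums and more, but the core idea is the same: the conversation survives even when individual packets take wildly different routes.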

Second is the notion of a network of networks (hence Internet), or as it was initially known, an “open-architecture network” that would connect networks that could be separately designed and maintained, recognizing that different approaches would be best for different settings (business versus military, radio transmission versus wire, etc.).  This idea was first put forth in the early 1970s by Bob Kahn.  After setting out crucial core principles of “open architecture,” such as no global network control (i.e., a distributed system) and only requiring best effort (i.e., no guarantee of delivery), Kahn worked with Vint Cerf on coming up with a protocol.  Their incredibly productive collaboration resulted in a first version of what became known as the Transmission Control Protocol (TCP), which allowed for reliable information transmission and still adhered to the core principles of an open architecture.

A short while later they realized that TCP was too comprehensive, and it was broken up into two pieces which became widely known as TCP/IP, where the IP stands simply for Internet Protocol.  The Internet Protocol defines what an address for a computer on the network looks like and how those addresses are used to route packets from one computer to another along a path of potentially many intermediary points.  Those addresses are known as IP addresses.  The current version of the Internet Protocol is IPv4 (version 4), which uses 32-bit addresses.  At the end of the 1970s that must have seemed like it would last for a very long time, as 32 bits allow for about 4 billion separate addresses!  Yet in 2011 alone about half a billion smartphones will be sold in the world, each essentially a computer on the network.  In a subsequent post I will describe how we are currently dealing with the resulting address shortage and how we hope to deal with it in the future (if you have followed the Tech Tuesday series so far you should already be able to guess the answer: more bits for the addresses, 128 bits to be precise under IPv6).
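The address arithmetic is easy to check.  This quick back-of-the-envelope snippet computes both address space sizes and shows how a 32-bit IPv4 address maps onto the familiar dotted-quad notation:

```python
# Number of distinct addresses for 32-bit (IPv4) and 128-bit (IPv6) schemes.
ipv4_space = 2 ** 32   # 4,294,967,296 -- about 4 billion
ipv6_space = 2 ** 128  # about 3.4 * 10^38 -- effectively inexhaustible

def dotted_quad(address):
    """Render a 32-bit integer as the familiar a.b.c.d notation."""
    return ".".join(str((address >> shift) & 0xFF) for shift in (24, 16, 8, 0))

print(ipv4_space)               # 4294967296
print(dotted_quad(0xC0A80001))  # 192.168.0.1
```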

If you are still reading this, you might ask: but what about “Ethernet,” how does that fit into the picture?  Or “Wifi”?  Great questions!  These are protocols at what is commonly referred to as the “Physical Layer” of networking.  They quite literally address the question of how to transmit a series of bits down a wire or through the air.  This is the third breakthrough innovation: a layered architecture of networks where each layer serves a different and well defined function.  Over the years this model has been refined into what is now known as the Open Systems Interconnection (OSI) 7-layer model.  The bottommost layer is the Physical Layer (#1), at which protocols such as Ethernet and Wifi live.  Above it is the Data Link Layer (#2), which we will skip here.  IPv4 and IPv6 live in the Network Layer (#3).  TCP is in the Transport Layer (#4).  We will also skip for the moment the Session Layer (#5) and the Presentation Layer (#6) to point out briefly that the now ubiquitous Hypertext Transfer Protocol (HTTP) which web browsers use lives at the Application Layer (#7).
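As a compact reference, here is that 7-layer model in code form, with only the example protocols mentioned in this post placed on their layers:

```python
# OSI layers from top (7) to bottom (1), with the examples from this post.
OSI_LAYERS = [
    (7, "Application",  "HTTP"),
    (6, "Presentation", None),
    (5, "Session",      None),
    (4, "Transport",    "TCP"),
    (3, "Network",      "IPv4 / IPv6"),
    (2, "Data Link",    None),
    (1, "Physical",     "Ethernet, Wifi"),
]

for number, name, example in OSI_LAYERS:
    print(f"#{number} {name:<12} {example or ''}")
```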

Subsequent posts will drill deeper into every aspect of networking.  For now, the takeaways should be that the people who created the core networking protocols had amazing foresight in two critical regards: they created a layered and open architecture.  The layering and openness has allowed for massive innovation and scaling to proceed independently at different layers which has given us huge improvements in speed, types of transmission (wireless), number of connected devices and applications, including the world wide web.


Posted: 15th November 2011
Tags:  tech tuesday networking

Work on an Amazing Community: Wattpad is Hiring Engineers

A few months ago we invested in Toronto-based Wattpad.  In introducing the investment, I wrote the following paragraph on the Union Square Ventures blog:

Wattpad is a community for telling and reading stories. Collectively, Wattpad users last quarter spent an amazing 2 billion minutes writing and reading stories on Wattpad. During that time, nearly three-quarters of a million new stories or parts of stories were uploaded and writers received almost five million comments. To learn more about how Wattpad works you may want to watch this explanatory video.

Since then Wattpad has moved into a new office and celebrated its 5th birthday.  Yes – Wattpad isn’t some kind of overnight success that came out of nowhere.  Instead, Allen, Ivan and the team have facilitated the growth of the Wattpad community since 2006.  They have published this infographic to show everything that has changed:

Wattpad Infographic

But this is just the beginning.  The team at Wattpad is working on many different initiatives to enhance how stories are created, discovered, read, commented upon, shared and more!  To do all of this they are looking for great engineers.  If this is for you, please apply, and if you know someone who would be perfect to work on Wattpad, please point them over there.

Posted: 14th November 2011

Wikipedia, Occupy Wall Street and the Possibility of an Open Congress

Whenever my kids tell me how their teachers don’t want them to use Wikipedia as a source I redouble my effort to show them why Wikipedia is important and how it works.  In particular, I make sure they understand how to look at the history of a page and to check out the discussion or “talk” page that sits behind the content page.  Two principles of those pages are critically important.  First, they are inclusive by allowing anyone to contribute to the process at least initially (on some controversial subjects there are eventual restrictions).  Second, they provide a complete historical and entirely public record of change.

The New York City General Assembly, which is the governance body for Occupy Wall Street, is successfully emulating these principles in a real-world deliberation process.  Anyone can participate in the working groups and the General Assembly (open access), and there are detailed minutes online from both the coordinators’ meetings and the General Assembly itself (public record).

The contrast between Wikipedia, Occupy Wall Street and how our Congress is run couldn’t be starker.  Instead of the bulk of the work being done in openly accessible committees and on the floor of the House and the Senate, most drafting of legislation happens behind closed doors.  Access to those drafting the bills is largely regulated by money, with access available to those who fill the campaign coffers of the politicians.

What is to be done?  We need to figure out how to use technology to bring the principles of open access and public record to government.  Together these could help overcome the influence of money in politics.

Posted: 11th November 2011
Tags:  politics occupy wallstreet government wikipedia

Emergency Broadcast System Fails Social Test

While I was driving the kids around on the weekend, one of the radio stations announced that there would be the first-ever nationwide test of the Emergency Broadcast System.  My almost immediate reaction was something like “isn’t Twitter now the Emergency Broadcast System?”  I wound up forgetting about the whole thing seconds later (probably because I arrived at whatever place I was picking the kids up from).  Then the actual test took place yesterday and wound up failing, as many TV stations simply didn’t get the proper signal.  Of course, I learned about this test failure the way I learn about all breaking news these days: on Twitter.

Now I am hoping that whoever is in charge of this system at FEMA and other federal agencies learns the real lesson from this test.  In 2011, an emergency broadcast system that is based on radio and TV might as well be based on printing the alert in the newspaper the next day.  Fixing the problem with the TV stations that didn’t get the signal will probably be hugely costly.  Instead, they should just make it a priority to connect the alert system to Twitter, Tumblr, Facebook and Skype.  All of these have alerting functionality built right into the service and alert via email and SMS.  The alerts should contain a link to a White House web site where the details can be provided.

Not only would the alert spread much more effectively, but the system would be much more resilient to failure.  After all, that’s what the Internet is superbly good at: routing around problems.  I remember well that on 9/11 the only communication that didn’t jam up for me was IM.  So folks in Washington: please bring the Emergency Broadcast system into the 21st century.


Posted: 10th November 2011
Tags:  Emergency Broadcast System

Thinking About Social Networks

This morning on Techmeme I ran across three interesting and thought-provoking pieces about social networks (all from within the last 24 hours, it seems):

1. Maciej Ceglowski’s “The Social Graph is Neither” (10 points for great title alone) is a hilarious dissection of what’s wrong with explicitly declared relationships and even with the act of declaring relationships.  It is also a blistering critique of Facebook and a bit of a paean to the message board.  Very much worth the read and I found myself agreeing with many of the points.

2. Farhad Manjoo’s piece in Slate titled “Google+ is Dead” argues that Google fatally wounded its own social efforts by ignoring important user needs.  In particular, the author points to the fight against pseudonyms and the delayed and incomplete brand pages as examples of making Google+ unwelcoming from the start.  His argument rests on comparing social networks to bars.  While I agree that these were big missteps, I think it is premature to ring the death knell for Google+.  There are parts of it, such as hangouts, that work quite well, and those are actually the most like bars.

3. A piece in Fast Company by Matt Haber on a site I hadn’t heard of before called Whosay that lets celebrities post information and keep control of it.  The article points to Twitter’s terms of service (which allow Twitter to use content with partners) as a reason that celebrities wouldn’t want to put pictures there but would rather put them on Whosay instead.  This seems like an interesting experiment to me because we clearly live in a celebrity culture (Exhibit A: the Kim Kardashian wedding, I mean divorce).  Yet networks like Twitter and YouTube are all about undermining the existing ways in which people become celebrities.

All good food for thought.  A huge takeaway for me from reading it all is that despite what may feel like a huge chokehold by Facebook, we are still in the early innings of what the Internet will mean for how our relationships work online and more generally how society is organized.


Posted: 9th November 2011
Tags:  social networks

Tech Tuesday: Storage (Oh My, How It Has Grown)

Of the building blocks for computer systems, we have so far covered processors and memory.  We have seen that processors have become massively faster and that memory has become massively cheaper.  Today we will learn about storage and find that it has become massively bigger.  Throughout this discussion we will use a somewhat eclectic reference point: the length of that great American novel “Moby Dick,” which clocks in at 1,203,686 bytes (counted off the text version on DailyLit, where you can read it via email in 260 installments) or about 1.15 MB (where 1 MB is 1 Megabyte or 1,048,576 = 2^20 bytes).

The goal of storage, as compared to memory, has always been to store many more bits, trading off slower access for higher capacity and much reduced cost.  In the early days of computers that meant punch cards, which actually go back as far as textile looms in the 18th century.  Here the trade-off is quite extreme.  The cards were very cheap – essentially the cost of paper (with some markup).  But they were also really slow, with read speeds of a few hundred cards per minute.  Since cards contained about 64 bytes each, that’s considerably less than 1 KB/second (where 1 KB is 1 Kilobyte or 1,024 bytes).  To store all of Moby Dick would have required 18,808 cards, and reading it in would take about 1 hour!

The first big advance was magnetic tape.  While much could be written about magnetic tape storage, I am highly partial to the great hack that came along with the Apple II: storage on cassette tape.  Cassette tapes were reasonably cheap at something like a few bucks per tape.  Due to a very inefficient way of “encoding” the data, a 30 minute tape held only about 300 KB.  We are down to only 4 tapes for holding Moby Dick, but we have actually slowed down by a factor of 2: it would take 2 hours to read Moby Dick in from tape!  Commercial tape storage solutions had much more capacity and were much faster, achieving transfer rates of about 10 KB/s, so that it would take only 2 minutes to read in all of Moby Dick.  The biggest problem with tape, though, was not its speed or capacity but that access was essentially sequential.  If you wanted to read data in, say, the middle of the tape, you had to forward the tape to that position first before being able to read the data.

The real breakthrough came with magnetic disk storage, which to this day holds the bulk of all data in computer systems (although so-called Solid State Disks or SSDs are making meaningful inroads).  Magnetic disks used to come in two versions: floppy and hard.  Today we no longer use the floppy kind, but I will never forget when my parents got me the Apple II 5.25-inch floppy drive for Christmas (thanks Mom and Dad!).  The drive was by today’s standards outrageously expensive: it held only 115 KB per floppy initially (later 140 KB) and cost almost $600, and way more than that in Germany.  So now we are back to actually needing 9 floppy disks for holding Moby Dick, but we can read it in at much higher speed.  Unfortunately, despite a fair bit of poking around, I haven’t been able to find out just how fast the Disk II was, but suffice it to say it was much faster than the tape!

What mostly stores our data today are no longer floppy disks but hard drives.  In a hard drive the magnetic disk is permanently mounted in place, and as a result can rotate much faster and be made to hold more information.  Over the last couple of decades the growth in capacity of these hard drives has been nothing short of astounding.  Today you can buy a 1 TB hard drive at CDW for just $59.  To put that in perspective, 1 TB = 1 Terabyte or about 1,000 GB, or about 1 million MB – in other words, nearly a million copies of Moby Dick fit onto that drive!  And the speed is blazing too.  A computer can read data from such a disk at well over a hundred MB/s.  At that speed, you can read in about a hundred copies of Moby Dick in just one second.

But our need for storage has been growing just as fast, if not faster.  For instance, YouTube is adding an amazing 50 hours of video every minute!  Now 1 hour of video can take up as much as 80 GB of storage.  Let’s assume that because of lower average quality on YouTube every hour takes up only 20 GB.  Then, even without any redundancy (meaning storing only a single copy), YouTube needs to add an extra 1 TB of disk every minute.  Using $50 as an approximation based on the CDW price above, that would be 525,600 minutes/year * $50/TB = $26 million for additional hard drives alone.  Just for fun, how many punch cards would this require in a year?  Somebody may want to check my math, but I get something like 8 quadrillion cards.  That’s an 8 followed by 15 zeros!
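For anyone who does want to check my math (using my assumed 20 GB per hour and the $50/TB price from above):

```python
HOURS_ADDED_PER_MINUTE = 50       # hours of video uploaded to YouTube per minute
GB_PER_HOUR = 20                  # assumed average storage per hour of video
DOLLARS_PER_TB = 50               # rough street price for a 1 TB drive in 2011
MINUTES_PER_YEAR = 365 * 24 * 60  # 525,600
BYTES_PER_CARD = 64

tb_per_minute = HOURS_ADDED_PER_MINUTE * GB_PER_HOUR / 1000
annual_drive_cost = MINUTES_PER_YEAR * tb_per_minute * DOLLARS_PER_TB
cards_per_year = MINUTES_PER_YEAR * tb_per_minute * 1e12 / BYTES_PER_CARD

print(tb_per_minute)      # 1.0 TB every minute
print(annual_drive_cost)  # 26280000.0 -- about $26 million per year
print(cards_per_year)     # about 8.2e15, i.e. 8 quadrillion punch cards
```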

In upcoming Tech Tuesdays, we will learn more about how hard disks work and what that implies for computer systems.  But for now the key message to take away is that we have entered the age of nearly limitless storage.


Posted: 8th November 2011
Tags:  Tech Tuesday storage

Margin Call (Movie Review)

Susan and I saw Margin Call on Saturday night.  The movie has a great cast, including Jeremy Irons, Kevin Spacey and one of my personal favorites, Paul Bettany.  My theory is that the more you know about finance, the more trouble you will have enjoying this movie.  That’s because Margin Call gets some finance language and mechanics wrong, including in a very early scene where Eric Dale, played by the always fun to watch Stanley Tucci, gets fired.  The HR rep says to him “of course you get to keep your *unvested* options,” which should almost certainly have been “vested,” and even then there would likely have been some need to exercise those options.  From observing the audience a bit, it was pretty clear that the majority did not pick up on these finance errors.

Margin Call’s strength is that it remains strictly hero-less.  Leaving out a couple of minor outside scenes, we only observe people working inside a large fictitious financial firm which is at least partially based on Lehman (including the CEO’s name John Tuld, played by Jeremy Irons, in an overly obvious reference to Lehman’s Dick Fuld).  None of these insiders makes a meaningful attempt to stop the firm’s actions, and any hesitation is based more on personal career concerns (the ability to be active in the market in the future) than on really taking responsibility.

Within that, Margin Call actually manages to provide a range of characters that are semi-credible.  There is Zachary Quinto’s Peter Sullivan, the young quant jock (ex rocket scientist) for whom the math seems more interesting than the morals.  There is Demi Moore’s Sarah Robertson, the lone female executive who early on realizes that it will be her head on the chopping block (an echo of Lehman’s Erin Callan).  Since I like Bettany, it wasn’t surprising that I most enjoyed his Will Emerson, a hedonistic bachelor who seems supremely cynical yet has his boss’s back at a key moment.

Margin Call is at its best when these different characters are interacting in a natural way and at its worst when it sets up didactic dialog.  [Spoiler alert] An example of the former is the eventual firing of Sarah Robertson (Moore) by John Tuld (Irons).  An example of the latter is a scene in which Eric Dale (Tucci) is picked up in Brooklyn by Will Emerson (Bettany) and goes on too long about a bridge he built in his former life as an engineer.

All told, Margin Call didn’t work for me as a “suspension of disbelief” kind of movie (because of the finance mistakes), but it was a solid outing in what Schiller called “theater as a moral institution” – in fact, the movie is reminiscent of a play.  I left Margin Call wondering which system I might be complicit in despite knowing about its deep problems.  More on that in my upcoming review of Larry Lessig’s new book “Republic, Lost.”


Posted: 7th November 2011
Tags:  Margin Call movie review

Startup Weekend Syracuse

I am about to get on a plane to fly up to Syracuse for Startup Weekend.  I am really looking forward to meeting the participants.  It is awesome to have many more people exploring startups as an alternative to working at an existing company!  I was also pleased to see Twilio as a sponsor – after all, the first version of GroupMe was built on top of Twilio over a weekend.

One of the many wonderful things about the Internet is that geography is no longer destiny when it comes to startups.  No city or region has a monopoly on innovation.  And if you set yourself up right from the beginning you can get talent from all over the world to work with you.  Many of our portfolio companies have important contributors in remote locations. 


Posted: 4th November 2011
Tags:  StartupWeekend

Don’t Need No VC (True for Lots of Companies)

Roger Ehrenberg has an excellent post today on the question of whether there are too many companies or not enough venture capital.  Roger’s answer, which I agree with, is “neither” – but there are too many companies that act as if they will raise venture capital.  And Roger points out correctly that most of those companies would be much better off if they never set their sights on venture capital and behaved accordingly.

I find myself giving that advice to entrepreneurs a fair bit.  I meet a lot of people who are starting perfectly good businesses that can be worth tens of millions of dollars.  If the entrepreneurs own the bulk of those businesses, the outcomes will be life-changing events.  In addition, those outcomes can provide excellent returns for angel investors.

It is really useful to provide a numeric example.  Let’s say you raise $500K for 20% of your business – that’s a $2 million pre-money and a $2.5 million post-money valuation.  Five years later you sell the company for $25 million.  Your investors will receive 20% of that, or $5 million, for a 10x return.  And you and your co-founder will split $20 million for a $10 million payday per person (assuming equal co-founders).  I consider that life changing, and if you don’t then you have a bigger problem.

Now assume instead that 1 year after your angel round you raise a $5 million venture round at a $15 million pre-money valuation and $20 million post.  Well, you have just taken the possibility of that $25 million exit off the table.  To start to get into territory that will make the investors happy and have them approve a sale of the company (and they will likely ask for such an approval right), you have to get to an exit value of $60 million or more.  And instead of owning 80% of the business you will own something like 50% (after allowing for an option pool).  And if you have participating preferred, the first $5 million will go to the VC firm.
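The arithmetic of the angel-only scenario can be sketched in a few lines (simplified on purpose: straight ownership percentages, no preferences, no option pool):

```python
def exit_split(amount_raised, post_money, exit_value, num_founders=2):
    """Split exit proceeds between investors and equal co-founders,
    assuming plain ownership percentages with no preferences."""
    investor_fraction = amount_raised / post_money
    investor_proceeds = exit_value * investor_fraction
    per_founder = exit_value * (1 - investor_fraction) / num_founders
    return investor_proceeds, per_founder

# $500K for 20% ($2.5M post-money), company sold for $25M five years later.
investors, per_founder = exit_split(500_000, 2_500_000, 25_000_000)
print(investors / 500_000)  # 10.0 -- a 10x return for the angels
print(per_founder)          # 10000000.0 -- a $10M payday per founder
```

Run the venture scenario through the same function and you can see why the required exit value jumps so dramatically once another $5 million of ownership and preferences sit above the founders.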

Worse yet, and this is the crux of Roger’s argument: let’s assume that you spend your first year building up a burn rate on the assumption that you will raise venture capital after 1 year, but you fail to do so.  Now you are most likely looking at a shutdown unless you can convince your angels to give you a bit more money.  That’s not only extremely unlikely, but even if it happens you will have to restructure the business in a painful way (usually: lay off a bunch of people so that the burn rate is low enough to make a go of it with the additional angel money).

Bottom line: you are much, much better off operating as if you will not raise venture capital.  Then, if it turns out that you are onto something big, you can always engage in a fundraising process, but you haven’t cut off any options prematurely or added big shutdown risk.  If you are not onto something big, you can get to cash flow positive on relatively small revenues and then grow from there entirely under your own control.  To provide some guidance here, I think that you should avoid taking your burn rate north of 10% of the angel funding you have raised, and it is best to initially keep it at 5% until you have some visibility into how well your service or product works.


Posted: 3rd November 2011
Tags:  Venture capital startups

Greece and Occupy Wall Street (and Startup Financing)

The European Union (EU) has been a work in progress for many years.  Since its inception there has been tremendous tension between what can be decided at the EU level and what is decided by the individual countries.  Despite an expensive European Parliament, the vast majority of the power is held by country governments.  That’s in stark contrast to the US where, thanks to the Federalists, the federal government has real clout.  As we are now experiencing, the European setup makes it difficult to maintain the cohesion required for a single currency.  Calls in Greece for a return to the drachma are growing, and a messy break-away from the EU is now a distinct possibility.

It is fascinating to see that part of what is fueling the Greek resistance to the draconian cuts demanded by the EU is not that distant from Occupy Wall Street: it is the sense that government is controlled by outside interests which represent a few large banks instead of representing the people.  It is easy to point fingers and say “you lived beyond your means” (in Greece the government, in the US the home buyers) and now have to bear the consequences.  But the flip side of that argument is that the banks too behaved as if there were no risk.  So if both lenders and borrowers are to blame, why are the policies favoring the lenders so heavily?  That is at the heart of the Greek conflict, and it is at the heart of Occupy Wall Street.

These conflicts will not go away until we collectively manage to claw government back from the control of a few large corporations.  That’s why we need more separate democracies rather than fewer.  We need more experimentation, not less, to find our way to the future.  And it is why everyone should read “Republic, Lost.”

As an important aside for startups: I wrote a while back that companies that need to raise money should do so quickly.  For a moment it seemed like I might be wrong about that as the EU appeared close to resolving the Greek debt crisis and the US markets were rallying.  But I am continuing to be very bearish on the financing environment.  Even if there is a short term resolution for Greece, the underlying structural problems are not being fixed and so the next crisis is just around the corner.


Posted: 2nd November 2011
Tags:  Greece European Union Startups
