There is something very exciting going on in the world of software: a shift toward developing new application software in-house (as opposed to relying on third-party offerings). For instance, at USV we have developed the link sharing on usv.com and our new analyst application process. Why does that make sense now? Because writing small applications has become so easy that it is today’s equivalent of writing an Excel macro.
Chris Dixon had a good post yesterday in which he describes some of the trends that have made this possible, including on-demand infrastructure, lots of API-based services, open source libraries and scripting languages. I would add to that a database layer that makes it easy to deal with semi-structured data, such as MongoDB (we are investors).
As a result, for many projects the tradeoff has shifted from using a complete third-party application to building it yourself. What you gain is fine-grained control over design, functionality and integration with the other systems you are using. What you pay is having code that you need to maintain. So if you are going that route, there is a real premium on keeping your code short and super readable.
I am fully aware that we have gone through phases like this before. For instance, that’s how I first earned money in the early days of the PC: writing in-house software for the personnel department of the local Siemens branch in my hometown.
Still, it feels like this time the in-house approach is here to stay and, if anything, will be extended to non-programmers through services such as IFTTT. One of the key drawbacks of the in-house move during the PC days was the fragmentation of data. That is largely eliminated today as the data resides in the cloud in any case.
In addition to the cost and complexity of in-house development having come down massively there is another crucial reason for the shift (which also makes me believe it will be longer lasting): software plus data is increasingly the key competitive differentiator.
PS Back to my posts on the economy tomorrow
Another topic that I have written a lot about here on Continuations is patents, and in particular software patents. While there is a lot of reform I would love to see, I have also come to appreciate that sometimes the only way to get there is in small steps. One relatively meaningful step was just introduced by Senator Schumer. The basic idea is to allow for a fast-tracked review of many of the suspect business process patents used by patent trolls to sue startups and larger tech companies. Because of changes in the interpretation of patent law, many of these fast-track reviews have a good shot at invalidating the patents. For a more detailed comment on this proposal, please read Nick Grossman's post over at USV.
Last Tech Tuesday, we learned about atomic actions as a way of dealing with the problems arising from concurrency. I ended that post by pointing to the limits of atomic actions — most notably that the operating system and/or database system cannot provide arbitrarily complex atomic actions as it cannot possibly anticipate all the needs of different programs. At the time I also raised the question as to how atomic actions can be provided in the first place!
Let’s start with the second question. One answer is that this is very easy to do if the hardware is able to guarantee that a set of instructions, or potentially a single instruction, can complete without any interference from other concurrently executing programs. As it turns out, very little is needed from the hardware in order to build more complex ways of managing concurrency in software. For instance, the ability to check and change the value of a memory location in one go (a so-called test-and-set instruction) would be enough. With some fancy footwork, such as Dekker’s algorithm, it is even possible to achieve a similar result using software only, without any explicit hardware support.
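To make the software-only approach concrete, here is a small Python sketch of Peterson’s algorithm, a close relative of Dekker’s algorithm that achieves mutual exclusion for two threads using nothing but ordinary reads and writes. The variable names and thread setup are illustrative, not from the post, and the sketch relies on CPython executing bytecode sequentially consistently:

```python
import threading

# Peterson's algorithm: mutual exclusion for two threads built from
# ordinary reads and writes, in the same spirit as Dekker's algorithm.
# (Relies on CPython's sequentially consistent bytecode execution.)
flag = [False, False]   # flag[i] is True while thread i wants the lock
turn = 0                # which thread yields when both want it
counter = 0
ITERATIONS = 5_000

def worker(me):
    global counter, turn
    other = 1 - me
    for _ in range(ITERATIONS):
        # entry protocol: announce intent, then let the other go first
        flag[me] = True
        turn = other
        while flag[other] and turn == other:
            pass  # busy-wait until it is safe to enter
        # critical section: a read-modify-write that must not interleave
        counter += 1
        # exit protocol
        flag[me] = False

threads = [threading.Thread(target=worker, args=(i,)) for i in (0, 1)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)  # 10000: no increments were lost
```

Without the entry and exit protocols, the two threads could interleave their read-modify-write of counter and lose increments.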
Once we have a primitive atomic action or mutual exclusion capability, we can use that to build up more complicated ways of managing concurrency. For instance, a so-called counting semaphore can be used to let a pre-defined number of programs (but no more) access a resource concurrently. Or we can use it to acquire a write lock on a row in a database, which we can then update without interference from any concurrently executing programs. The idea behind all of these mechanisms is essentially the same: limit access to prevent conflict. Unlike with an atomic action, this means that arbitrarily complex sequences of activity can be carried out before another program is given access.
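Here is a minimal sketch of the counting semaphore idea using Python’s threading module; the simulated resource and the instrumentation counters are stand-ins I added for illustration:

```python
import threading
import time

# A counting semaphore initialized to 3: at most three threads may
# hold the (simulated) resource at once. The extra counters are just
# instrumentation to observe the limit being enforced.
sem = threading.Semaphore(3)
meta = threading.Lock()   # protects the instrumentation counters
active = 0
max_seen = 0

def use_resource():
    global active, max_seen
    with sem:                      # blocks once all 3 slots are taken
        with meta:
            active += 1
            max_seen = max(max_seen, active)
        time.sleep(0.01)           # simulate doing work on the resource
        with meta:
            active -= 1

threads = [threading.Thread(target=use_resource) for _ in range(10)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(max_seen)  # never more than 3
```

Ten threads contend for the resource, but the semaphore guarantees that no more than three are ever inside the protected section at the same time.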
So let’s go back to our ATM problem from before and see how we can now solve it. Here is some example pseudocode:
    lock(account)
    retrieve(account, balance)
    if (balance > amount):
        balance = balance - amount
        update(account, balance)
        dispense(amount)
    unlock(account)
The call to lock() will block program execution until it has acquired the lock on the account. It guarantees that only one program can hold the lock at any one time. There goes our chance of getting rich by lining up thousands of simultaneous withdrawals at different ATMs!
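The pseudocode above can be sketched as runnable Python, with threading.Lock standing in for lock()/unlock(); the account dictionary and the dispensed counter are stand-ins, not a real banking API:

```python
import threading

# Runnable sketch of the withdrawal pseudocode. threading.Lock stands
# in for lock()/unlock(); the account and dispenser are stand-ins.
account = {"balance": 100}
account_lock = threading.Lock()
dispensed = 0

def withdraw(amount):
    global dispensed
    with account_lock:                  # lock(account) ... unlock(account)
        balance = account["balance"]    # retrieve(account, balance)
        if balance > amount:
            account["balance"] = balance - amount
            dispensed += amount         # dispense(amount)

# line up many simultaneous withdrawals: only one can succeed
threads = [threading.Thread(target=withdraw, args=(60,)) for _ in range(50)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(account["balance"], dispensed)  # 40 60: the account is never overdrawn
```

Even with fifty concurrent withdrawal attempts, the lock makes the check-and-update a single indivisible step, so exactly one withdrawal succeeds.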
Does that mean all is fine? And if so, why would anybody want to just use atomic actions instead? As it turns out having more powerful capabilities for managing concurrency gives us opportunities to mess things up in other ways (as in “with great power comes great responsibility”). Here is just one quick example — the potential for two programs to deadlock. Consider the following naive implementation for transferring money between two accounts:
    lock(account1)
    retrieve(account1, balance1)
    if (balance1 > amount):
        lock(account2)
        retrieve(account2, balance2)
        balance1 = balance1 - amount
        balance2 = balance2 + amount
        update(account1, balance1)
        update(account2, balance2)
        unlock(account2)
    unlock(account1)
Imagine now that I try to send money to Susan at the same time as she is trying to send money to me. Two transfer programs start to execute concurrently; for one of them account1 is my account and for the other account1 is Susan’s account. So each program manages to acquire the first lock. Let’s assume each of us has enough money in the account to make the transfer, so that we get inside the conditional. Well, now the program trying to execute my transfer wants a lock on account2, which is Susan’s account, but that lock is already held by the program trying to execute Susan’s transfer to me. And vice versa. Since locks block, neither program can continue to execute. That also means that neither of the initial locks will ever be released and the two programs will remain suspended forever. Any subsequent programs that need access to our accounts would also be stuck. In fact, until the processes are forced to terminate, both accounts would be inaccessible.
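For completeness, one classic remedy (my addition, not something the post itself discusses) is to have every program acquire locks in a consistent global order, which makes the circular wait impossible. A minimal Python sketch, with illustrative account names and amounts:

```python
import threading

# Transfer that cannot deadlock: every program acquires the two
# account locks in the same global order (here, sorted by name), so
# the circular wait from the example above cannot arise.
accounts = {
    "albert": {"balance": 100, "lock": threading.Lock()},
    "susan":  {"balance": 100, "lock": threading.Lock()},
}

def transfer(src, dst, amount):
    first, second = sorted((src, dst))  # consistent lock ordering
    with accounts[first]["lock"]:
        with accounts[second]["lock"]:
            if accounts[src]["balance"] > amount:
                accounts[src]["balance"] -= amount
                accounts[dst]["balance"] += amount

# the two opposing transfers from the example, running concurrently
t1 = threading.Thread(target=transfer, args=("albert", "susan", 30))
t2 = threading.Thread(target=transfer, args=("susan", "albert", 50))
t1.start(); t2.start()
t1.join(); t2.join()
print(accounts["albert"]["balance"], accounts["susan"]["balance"])  # 120 80
```

Because both programs lock "albert" before "susan", one of them simply waits for the other to finish rather than each holding a lock the other needs.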
As so often in software, by solving one problem (the limited capabilities of atomic actions) we have introduced another (deadlock) that is potentially much more severe! Next Tuesday I will wrap up this little mini series about concurrency with a brief look at asynchronous programming with non-blocking algorithms.
Last Tech Tuesday, I introduced the problem of concurrency: multiple programs executing at the same time. I used an example of a naive program for updating a vote count that could lose votes if multiple instances run concurrently. So how do we deal with this problem? There are many different approaches that people have taken over time and I will only try to cover some of them. Since I am still traveling and keeping my posts short, today I will only cover the idea of atomic actions.
An atomic action is guaranteed to complete in one go, as if it were instantaneous. How does that help with concurrency? Well, imagine that the database we are using provides atomic increment and decrement operations. Then we could rewrite our code as:

    atomic_increment(vote_count)
Now we don’t care how many copies of this program are running concurrently. We are guaranteed that every change in vote count is properly recorded in the database.
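A minimal sketch of what such an atomic increment might look like in practice, using SQLite as a stand-in for whatever database the post has in mind; the single UPDATE statement is executed atomically by the engine:

```python
import sqlite3

# Sketch of the rewritten vote counter. Instead of reading the count
# and writing it back from the application, we ask the database for
# an atomic increment: the UPDATE statement below performs the
# read-modify-write inside the database engine, in one go.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE votes (candidate TEXT PRIMARY KEY, tally INTEGER)")
conn.execute("INSERT INTO votes VALUES ('alice', 0)")

for _ in range(3):  # three votes arrive
    conn.execute("UPDATE votes SET tally = tally + 1 WHERE candidate = 'alice'")

count = conn.execute(
    "SELECT tally FROM votes WHERE candidate = 'alice'"
).fetchone()[0]
print(count)  # 3
```

Because the increment never passes through application code as a separate read and write, concurrent copies of the program cannot lose each other’s votes.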
Do atomic actions solve all our problems with concurrency? No, as the following example illustrates. Consider a naive program for dispensing money from an ATM:
    if (get_balance() > amount):
        atomic_decrement(amount)
        dispense(amount)
What is the problem? If you cloned your ATM card (don’t do this) and lined up concurrent withdrawals at many different ATMs, you might be able to withdraw much more money than is in the account! Why? Because the balance check is performed separately from the atomic decrement of the balance.
So if many of these programs run concurrently because we hit the withdraw button on thousands of ATMs at the same time, they might all see the account as containing a sufficient balance and hence all dispense money, after which the account would be massively overdrawn. But as crooks we would have our money and run.
How could we fix this problem? Well, one answer is to have a larger atomic action. If we had

    if (atomic_check_balance_and_decrement(amount)):
        dispense(amount)
there would be no concurrency problem that someone could exploit. That solution of course has a problem: for any but the easiest cases (like this one), we would need the operating system to provide us with ever more complicated atomic actions.
There is also a question that should be nagging you right now: how can the operating system provide atomic actions in the first place? In other words, using atomic actions to solve the concurrency problem is really just relying on someone else having figured it out for us. That is perfectly fine from the perspective of the application layer program, but it raises the all-important question: how does the operating system solve the concurrency problem? This seems like a case of “it’s turtles all the way down.” Next week we’ll look at some of those turtles …
It’s only two months ago that I posted that these are Apple’s glory years and that it is likely too soon to bet against Apple’s stock. As it turns out, I was wrong. When I wrote that post, the stock was at $650/share; now it is at $550/share, down 15% (and down over 20% from the top of about $700/share). One of the big drivers here is a decline in customer satisfaction and loyalty. I believe a lot of that is driven by software. A bunch of people I know have folders on their iPhones labeled something like “crap apps that Apple doesn’t let me uninstall” (I am paraphrasing).
The software problem for Apple goes further though as I discovered when I bought my new laptop. After some agonizing I caved and instead of buying a Linux laptop I went with a MacBook Air. I have been super swamped at work (as those waiting for email replies from me know) and I was nervous about how much time it might take to get a new machine to work flawlessly. So the good news for Apple is that once more I spent a lot of money with them.
But the bad news is probably more important. I got my new MacBook Air and I was up and running with it in under 10 minutes because everything I have is in the cloud. My files are on Dropbox and Google Drive, my email is in Gmail, my code sits at GitHub. And I really only use two types of software on my Mac: three browsers and Terminal. I even put my various config files (.ssh, .vimrc, etc.) on Dropbox, so those were back in seconds as well. The bottom line is that my MacBook Air is a beautiful piece of hardware but has no other meaningful ties to Apple.
Growing up in Germany, I learned a lot about a legal system in which judges only apply the law rather than also making it through their decisions. Precedent plays a very limited role in the German system. It took me some time to come to grips with the US system. But now I have come to think that an independent judiciary that has the ability to influence the law through precedent is a wonderful balance to the executive and legislative branches. In this 4th of July week, this is another independence that’s worth celebrating!
It’s been great to see independent judges at work with regard to patents in the Oracle versus Google case and now the Apple versus Motorola (now also Google) case. Even better is to read that Judge Posner is fundamentally questioning the need for patents in many industries. This is very encouraging. We have to keep in mind why patent legislation was created in the first place: as a way to encourage innovation at a time and in circumstances when the cost of innovation was high. This is essentially a way to solve a market failure. But we don’t have an issue of “under innovation” in software. With open source and the web we have found other ways to provide incentives for software innovation. And now we need to weigh the consumer cost much more heavily. That’s exactly what Posner is calling for.
Years ago I worked with the team at Tacoda on the technology for audience based targeting. Some of my work there wound up in a patent that was sold along with Tacoda to AOL. I don’t know if this patent is part of the patents recently sold by AOL to Microsoft but I wouldn’t be surprised if it were. I also worked closely with Joshua on a number of patents filed for by del.icio.us and sold along with the company to Yahoo. Yahoo of course has recently started a massive patent lawsuit against Facebook (fortunately none of the del.icio.us patents feature in it).
In both cases my work on the patents was done to help protect a startup company. But once that company exited, the patents were in someone else’s hands, and that owner could use them any way it wanted, including offensively. The bulk of the patents used by trolls (er, non-practicing entities) in lawsuits against USV portfolio companies are patents that were acquired from startups that had failed. The original innovators, the engineers who did the work, completely lose control over their work, which has resulted in a number of high-profile posts by people distancing themselves from the use of their patents.
I am therefore thrilled that our portfolio company Twitter has come up with a terrific hack of the system: a patent assignment agreement that gives the company only defensive use of the patent. The innovator retains control over any offensive use. What I love about this hack is that companies can adopt it one at a time and that it doesn’t require any change in the legal system (that’s why it is a “hack”). I also love that the company has put the agreement up on GitHub where it can spread virally among engineers. I already know that we have several other portfolio companies that are interested in adopting the same agreement.
So if you are an engineer and care about the drag of software patents, go lobby your company to adopt this agreement. And if they don’t, go work for one that has! At USV we are supporting this new agreement and are actively encouraging our portfolio companies to adopt it.
If Yahoo had any shred of credibility left with developers then it has succeeded in destroying that with its misguided patent lawsuit against Facebook. But the suit isn’t all bad. It has the potential to become a catalytic event for broader social awareness of the perils of software patents, similar to how the SOPA/PIPA battle moved copyright and its enforcement into more of a mainstream issue. That was sort of the gist of Mark Cuban’s post.
The first group of people who should really start to get engaged are engineers. After all, they are the ones whose work becomes, as Andy Baio put it, “weaponized” in the hands of corporations. A first step here might be to change how patent assignment works: engineers at a startup could require that assignment is made only for defensive purposes instead of unconditionally. This would prevent the fate that befalls so many of these patents when companies are acquired, get into trouble or fail (and their patents are acquired by a non-practicing entity, better known as a troll).
As another approach (albeit one that might take more time to construct), companies could assign their patents to a pool that would be used for defensive purposes only. RPX does something along those lines but seems to be geared toward big corporations, and in RPX’s case the patents are still available for offensive purposes as well (at least as far as I know).
Between mobilizing developers and approaches to peer producing research to invalidate patents, I believe it is possible to build enough outside pressure on the system to achieve some real change.
My partner Brad put up a great post on the USV blog yesterday, arguing for an independent invention defense against software patents. A while back, I had proposed an alternative, a change in how litigation works. In that post, I wrote that:
Some folks have suggested doing away with software patents altogether as a way of addressing this problem. That strikes me as too dramatic a solution as I don’t believe that all software patents are evil. For instance, if someone were to spend years and lots of money to develop a new and improved way of recognizing images then it is not clear to me why that is less worthy of patent protection than say a new machine or a new drug.
I have since changed my view. After a lot of digging into what has been patented in software over the years, I am now convinced that neither a change to litigation nor an independent invention defense is sufficient.
Instead, we need to hit the restart button by invalidating software patents wholesale and either not allowing them going forward or allowing them only in some incredibly restrictive form. That puts me firmly in the camp of Brad Feld, who has a post today supporting my partner Brad’s effort and trying to rally more investor support for fundamental reform.
Running around a lot at the moment, so expect a longer post in the future detailing the process of my conversion!
I love Vernor Vinge’s concept of a “programmer-archeologist” (from A Deepness in the Sky, where he also coined “programmer-at-arms”): someone who digs through the layers of existing systems to understand why and how stuff works. I was reminded of this over the weekend as I worked through some old code of my own and code from a bunch of other developers on DailyLit.
My exposure to “software archeology” goes as far back as the first time I earned money from programming. I was 17 and had just come back to Germany from a year as a high school exchange student in the US (Rochester, MN), where I was impressed by how everyone was earning their own money. Since I thought I knew how to program, I looked for a job as a programmer. My first job wound up being with a small company that was developing accounting software. They were working on a rewrite of an “inherited” system that was written entirely in an early form of Basic which allowed variable names to be only two letters long. There were no comments anywhere in the code. At the time, we wound up building a whole bunch of “archeology” tools, such as a cross-reference generator.
From there I went on to work for Siemens developing some HR software while finishing school in Germany. The job came about because the local Siemens branch in Nuremberg did not want to wait for the central IT department to get around to this piece of software and so hired a bunch of contractors instead. I wrote a big chunk of the “training” module which kept track of ongoing education courses taken by employees. This was in 1986 and I only recently found out that a heavily revised version of the training module is still in use today.
Since then I have seen many more examples of the longevity of code — and hence the need for programmer-archeologists! Many development problems (including some obvious ones I was looking over this weekend) are caused by programmers new to a project not sufficiently understanding the structure and capabilities of the existing code. As a result, they often write unnecessary code or put unnecessary kludges in place. They often do so without taking the time to figure out whether the old code is still needed at all, which means that even more code accretes for future programmers on the project to dig through.
My conviction now is that being able to read and understand code is as important a skill as being able to write it. As a result, I believe that any developer interview should contain at least a couple of code reading exercises. Ideally, these are integrated with some other problems, e.g. “extend this to do xyz.” I can’t wait until I see the first actual job posting for a “programmer-archeologist.”