There were several great comments on my post yesterday on gameplay relating to the data generated and the possible issues with keeping folks engaged. I believe the ideal application of gameplay in a site or service recognizes and works for three distinct groups: devotees, casuals, and don’t-cares (I am obviously making these names up; suggestions for better ones appreciated).

Devotees are the ones who truly care about the gameplay and are on a mission to top a leaderboard or outplay another devotee. To keep devotees happy the gameplay must be fair, and there have to be lots of distinct opportunities, with new ones being introduced with some regularity. Foursquare, for example, has the concept of being the mayor of a location, which clearly speaks to devotees.

Casuals get a kick out of the gameplay but don’t care deeply about achievement. I consider myself a casual with respect to gameplay in TheSixtyOne. When something new pops up and says “hey, you got some points” it makes me smile, and occasionally I will check something out to see if it results in points.

Don’t-cares are either first-time visitors or folks who simply want to use the service for its utility value. For that group it is essential that the gameplay doesn’t get in the way. So there should be no rules to learn or other steps required just to get going.

Most important, though, for the long-term success of a gameplay strategy is the interplay between the three groups. Ideally, the gameplay by the devotees and the casuals generates a ton of data that can be used to make the service much more useful for the don’t-cares. For instance, on TheSixtyOne the gameplay drives the music discovery. It does so with much richer signals than a simple across-the-board voting scheme would. Conversely, the much larger number of don’t-cares should drive the economics of the service. That will allow a service to get big and sustain itself in a way that would be difficult based on devotees alone (keeping in mind that these are services that aim to have utility outside the gameplay).

P.S. Sorry for lack of links - written on BlackBerry
Yesterday, TechCrunch announced that Mint was adding gameplay to its service in the form of a financial fitness score. The TechCrunch post already pointed to Foursquare as a service that is making very effective use of gameplay to motivate activity and engagement in the system. I have also found this to be true for TheSixtyOne, which awards points for many different kinds of behavior. I believe we will see many more sites adding gameplay components going forward, much like sites added social features over the last few years.
There are now several generations of users who have grown up with video games (and that will certainly be true for all coming generations). Elements such as points, quests, leaderboards, and challenges will seem completely natural to them and provide a higher level of engagement. In fact, Katie Salen, who co-authored Rules of Play, a book about game design, is starting a high school in New York where the curriculum and instruction will be organized around quests and other gameplay concepts.
Much as was the case with adding social elements, the more successful implementations will be the ones that design the gameplay deeply into the service instead of just “bolting it on.” For instance, TheSixtyOne is building its service with gameplay as an essential component from day one. Someone else, like say Amie Street, has added a rewards system to an existing service, but it is not all that playful (it also uses real money) and relates to only a single behavior (recommending a song).
The New York Times has a fascinating article today about “shanzhai phones” in China. These are knockoffs of existing brands but often come with interesting enhancements, such as support for multiple phone numbers and better cameras. Now one might think of a cell phone as something that is very costly to build and not an area in which one would find knockoffs (unlike, say, designer bags). What makes this possible is the network of parts suppliers that exists in China. Almost all of the global brands have the bulk of their manufacturing in China, and the division of labor today has suppliers delivering assembly-ready components (instead of tiny individual parts).
There is a lesson in this for how to deal with the crisis of the car industry in the US. Automotive parts suppliers too have developed extensive systems capabilities over the last two decades. While there once was a time when car companies would literally buy leather to make seats (one wonders whether anyone had their own herds?), they now get complete seat assemblies from suppliers such as Recaro. The same is true for many other components, such as brakes or headlights. As a result, it would be possible for startup car companies without any legacy to come to market fairly quickly with innovative models. Sadly, much more of that seems to be happening in other countries than in the US. I am afraid that in our efforts to save GM and Chrysler we are wasting an opportunity to help get new companies off the ground. It would be great to see some cheap loans made available specifically for startup car companies to help attract equity to this opportunity.
If you operate a site or service and want to accept identity from large third parties, you encounter a bewildering array of implementations. Some are fairly proprietary, such as Facebook’s. Others, such as Google’s, are based on a standard but with a twist. New ones come along, such as Twitter’s, and present their own versions. This makes supporting them all a real resource challenge. On top of that there is a real possibility of getting it wrong and creating a usability or, worse yet, a security problem as a result.
This challenge for site owners is a real opportunity for third-parties to provide a service. A number of folks have been working on this, but by far the farthest along appears to be JanRain with their RPX offering. RPX is a software-as-a-service solution that abstracts away the multiple authentication offerings behind a fairly simple set of calls. Given the variety of implementations in use that is no small feat. JanRain is currently pursuing a freemium model with the base version of RPX available for free.
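To make the appeal concrete, here is a rough sketch of what such an abstraction might look like. This is purely illustrative and not JanRain’s actual API; the class and method names are my own invention:

```python
# Hypothetical gateway client -- illustrative only, not the real RPX API.
class IdentityGateway:
    """One interface in front of many provider-specific auth flows."""

    def __init__(self, api_key, base_url="https://gateway.example.com"):
        self.api_key = api_key
        self.base_url = base_url

    def login_url(self, provider, return_to):
        # The gateway runs the provider-specific flow (Facebook, Google,
        # Twitter, OpenID, ...) and redirects back with a one-time token.
        return f"{self.base_url}/start?provider={provider}&return_to={return_to}"

    def auth_info(self, token):
        # Exchange the one-time token for a normalized profile,
        # regardless of which provider the user picked.
        raise NotImplementedError("server call elided in this sketch")
```

The point is that the site only ever talks to the gateway; the provider-specific quirks live on the other side of that interface.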
I believe that the benefits from using a “gateway” service can go significantly beyond ease of implementation, provided the gateway becomes popular among sites. A popular gateway will have a lot of information that is not available to any single site and could be used to provide value-added services. For instance, if all sites report back on accounts that are spamming or have otherwise been compromised, then the gateway could suspend those accounts across the network. This is why I chose the word gateway deliberately: the situation is similar to credit card processing (thanks to Johannes Ernst for first turning me on to this analogy). Any single merchant has a tiny (if any) fraud history for a particular card, but the network as a whole is efficient at detecting fraudulent activity.
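A toy version of the pooling logic, just to make the credit card analogy concrete (the threshold is an assumed policy knob, not anything any actual gateway does):

```python
from collections import defaultdict

SUSPEND_THRESHOLD = 3  # assumed policy: three independent sites reporting

reports = defaultdict(set)   # account id -> set of sites that reported it
suspended = set()

def report_abuse(account_id, site):
    """Any participating site can file a report; the gateway aggregates."""
    reports[account_id].add(site)
    # No single site sees enough history to be sure, but the network does:
    if len(reports[account_id]) >= SUSPEND_THRESHOLD:
        suspended.add(account_id)

def is_suspended(account_id):
    return account_id in suspended
```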
If the network grows big enough, it might also be possible to provide some level of service discovery. I am not sure exactly how this would be presented to end users (it could be at one of the participating sites), but with enough sites participating the gateway could identify sites or services that are often used together and then propose “sites you might also enjoy.”
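The underlying computation could be as simple as counting co-usage, along these lines (a sketch with made-up data structures):

```python
from collections import Counter
from itertools import combinations

def co_usage(accounts_to_sites):
    """accounts_to_sites: account id -> set of sites used via the gateway."""
    pairs = Counter()
    for sites in accounts_to_sites.values():
        for a, b in combinations(sorted(sites), 2):
            pairs[(a, b)] += 1
    return pairs

def sites_you_might_enjoy(site, pairs, top_n=3):
    """Rank other sites by how often they co-occur with `site`."""
    related = Counter()
    for (a, b), n in pairs.items():
        if site in (a, b):
            related[b if a == site else a] += n
    return [s for s, _ in related.most_common(top_n)]
```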
What is essential for the success of any such gateway is for the big players to support it as a way of speeding up adoption of external identity. Ideally there would be official endorsements, so that developers can be certain that relying on the gateway will be supported going forward and they won’t suddenly find themselves cut off.
This is another post in thinking out loud about the economics of abundant and digital goods. In particular, I am interested in mechanisms that allow producers to recapture some of the consumer surplus even when the price for (a version of) the good is zero.
In photography, there is a well-established difference between limited edition prints (sometimes a single one) made by the artist and reproductions of the photograph in catalogs or online. I believe it is possible to create a similar mechanism for digital music. An artist could take something like the original recording of a song, and/or a track specifically remixed for this purpose, and sell it in a limited edition model.
How would that work? What identifies the limited edition print of a photograph is usually a combination of serial number and signature by the artist. This could be recreated for digital music using a service that encodes both a serial number and the artist’s signature (which could take the form of a cryptographic signature verifiable against the artist’s public key) into the music file. Now unlike the print, someone could make a perfect copy of that file. But even for prints this is an issue, and so the added requirement for value tends to be some external documentation, such as a sales receipt. This will be critical for limited edition digital songs. The easiest way to do this would be for the distribution service itself to keep track of information about who has actually paid for a limited edition copy. One particularly elegant way of doing this would be by allowing the purchaser to specify a URL where the file will live (more on that below). It is not strictly necessary, but as a further mechanism this purchase information could also be encoded on the fly in the file itself.
It is important to note that this is completely different from DRM. The buyer could freely transfer the file between devices, etc. They could put it up on their own web site and stream it. None of this would reduce the value to the purchaser because it does not increase the number of authorized copies that have been sold, and any claims of having an authorized copy can be compared against the sales records. Having the artist’s site publish the list of valid URLs would make this especially easy to verify for copies published on the web.
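As a sketch of the mechanics (assuming the Python `cryptography` package; a real service would obviously make its own choices about formats and key handling):

```python
import hashlib
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric import ed25519

artist_key = ed25519.Ed25519PrivateKey.generate()

def sign_edition(song_bytes, serial, edition_size):
    """Bind the song's hash to a serial number with the artist's signature."""
    record = f"{hashlib.sha256(song_bytes).hexdigest()}|{serial}/{edition_size}"
    return record, artist_key.sign(record.encode())

def verify_edition(record, signature, artist_public_key):
    """Anyone can check a claimed copy against the artist's public key."""
    try:
        artist_public_key.verify(signature, record.encode())
        return True
    except InvalidSignature:
        return False

# The distribution service would additionally keep the sales record, e.g.
# registry[serial] = "https://collector.example.com/songs/limited-edition.mp3"
```

Note that the signature only proves the artist issued serial number N; the registry (or the URL list on the artist’s site) is what distinguishes the authorized copy from a perfect duplicate.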
An artist could choose to keep the edition really small and auction off only a few or make it larger and set a fixed price. All the while the artist can continue to distribute unsigned copies of the file for free! With the URL mechanism, purchasers can resell the limited edition file simply by transferring control of the associated URL. In fact, that way someone could build a collection and eventually sell the entire collection.
I have written this post in terms of music, but I believe that a similar approach could apply to any digital art (photography, design, movies, games) and possibly even to news.
The latest salvo in the fight for owning users’ identities revolves around URLs. Just earlier this week Google promoted profile search and with it URLs of the form http://www.google.com/profiles/awenger (when I signed up for Gmail a long time ago I thought shorter would be better — now I wish I had chosen albertwenger, as I have on most other services). Facebook is clearly getting ready to launch their version more broadly and is apparently asking users whether they would pay to have a specific word as their URL (they already have the capability, e.g. http://www.facebook.com/fredwilson).
It will be interesting to see how this plays out both in the near term and in the long run. In the near term Facebook and others (e.g. Twitter) need to make a trade-off call between free, which will foster fast growth and rapid adoption, and paid, which will slow things down but generate revenues and reduce squatting issues. This is not a case of a typical digital good. While it costs nothing to produce one, the cost to produce the second (identical one) is infinite within a service (which is another way of saying you can only have one instance of each name). To extract the most consumer surplus you would run an auction process for names, instead of a fixed price per name as is the case for domain names.
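For what it is worth, the textbook mechanism for this is a sealed-bid second-price (Vickrey) auction, in which bidding your true value is the best strategy. A toy version:

```python
def vickrey_auction(bids):
    """bids: bidder -> amount. Winner pays the second-highest bid."""
    if not bids:
        return None, 0
    ranked = sorted(bids.items(), key=lambda kv: kv[1], reverse=True)
    winner = ranked[0][0]
    price = ranked[1][1] if len(ranked) > 1 else ranked[0][1]
    return winner, price

# e.g. three people want the same username:
winner, price = vickrey_auction({"alice": 120, "bob": 95, "carol": 60})
# alice gets the name and pays 95
```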
In the long run I suspect that there will be real influence on people picking names for their children or changing their own names. This would not be unprecedented. Historically, people often had only a first name, which was fine when the “namespace” was your local village, because hardly anyone ever traveled beyond that. I believe that the use of surnames really got going when people started to live in larger cities and mobility increased. In the age of the Internet we all live in a global namespace.
There is also a bit of irony here around the history of identity standards. OpenID was originally developed with the premise that folks would remember and use a URL to identify themselves. That was a non-starter for mass adoption because most OpenID URLs were really long and ugly, and in any case people had been conditioned for years to think of a username or, at best, an email address as their identifier. So I was happy to see newer implementations use the protocols behind the scenes while creating a user experience that lends itself to broad adoption. Now we may be coming full circle, with meaningful and easy-to-remember URLs actually identifying people.
In Monday’s post on the Economics of Abundance, I gave “name your price” as an example of how to deal with zero marginal cost. This is a case of voluntary price discrimination, i.e. folks who value something more are more likely to name a higher price. The first question, of course, is whether such a scheme can work at all, that is, will anyone pay if paying is entirely voluntary? We know the answer to be yes from the Radiohead experiment, with about 40% of folks paying (according to comScore, though the band claims more did), and even for the much lesser known Saul Williams about 20% of folks paid (this is actually not an apples-to-apples comparison, as in the Saul Williams case folks could only choose from two prices: free and $5). There is also plenty of evidence from other situations in life where the strictly rational choice (as in economic rationality) would be to pay nothing, such as the honor-system bagel example from Steven Levitt’s “Freakonomics” book.
The second question is how best to implement a “name your price” scheme. The experiments cited above all involve one-off payments that take place prior to downloading or playing the music. Of course you could first play it for free and then, if you like it, come back and pay, but the basic setup is to put payment first. That is in direct conflict with the basic nature of music and news as experience goods: you don’t really know how much you will like a song or how informative a news article is until you have listened to the song or read the article. Any upfront “name your price” scheme will be hobbled by this.
Conversely, any scheme that relies on the consumer remembering to go back after having listened to the music and enjoyed it, or read the article and found it informative, is hobbled by the inconvenience of doing so. This may sound trivial but is a significant hurdle, because it combines with the other costs of per-song or per-article payment, such as credit card fees and the cognitive load of figuring out how much to pay, into a total transaction cost that is likely to be very high compared to the value in question. For instance, let’s say that deciding how much to pay, figuring out where to pay and getting the payment done adds up to 6 minutes of time. Then even if you value your time (implicitly) at only $20/hour you would be “incurring” $2 of cost, which totally swamps the, say, $1 you might have wanted to give (and this is not even counting any processing cost).
So an optimal scheme is likely based on some kind of automated tracking of actual consumption patterns. This is not an original insight. For instance, Fred hints at it in a post from 2005. But it is getting surprisingly little attention in the current round of discussion around pricing news and is rarely mentioned in conjunction with naming your own price. Such a scheme would let folks set a voluntary monthly value for their music listening and (separately) for their news reading. At the end of each month the system would allocate the monthly value across artists and writers based on actual consumption and possibly other signals (such as bookmarking a blog post). For folks who want to manually make changes to the allocation, it should be possible to do so.
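A sketch of the allocation step (the names and the override mechanism are just my assumptions about how this could work):

```python
def allocate(budget, play_counts, overrides=None):
    """Split a voluntary monthly budget across artists.

    play_counts: artist -> number of listens this month
    overrides:   artist -> fixed share of the budget (0..1), set manually
    """
    overrides = overrides or {}
    payout = {artist: budget * share for artist, share in overrides.items()}
    remaining = budget * (1 - sum(overrides.values()))
    auto_plays = sum(n for a, n in play_counts.items() if a not in overrides)
    for artist, plays in play_counts.items():
        if artist not in overrides and auto_plays:
            # everyone else gets a share proportional to actual listens
            payout[artist] = remaining * plays / auto_plays
    return payout

print(allocate(10.0, {"artist_a": 30, "artist_b": 10}))
# {'artist_a': 7.5, 'artist_b': 2.5}
```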
There are two fundamentally different ways such a system could come about. The easiest starting point would be a closed system implemented on a single service. For instance, TheSixtyOne could fairly easily add this to its existing site. The downside is that it would only cover music on TheSixtyOne, but I believe that is more than offset by the upside of not having to deal with labels and other intermediaries that might otherwise also have to be paid. Eventually, there might be an open system, ideally using server-side tracking (much like an ad server) but maybe getting started with a browser plug-in.
One other important aspect of such a system would be a way to show the world that you are participating in it. Talk is cheap, and I am pretty sure that a much higher fraction of people say they paid for the Radiohead album than actually did. But a name-your-price system could easily allow folks to have a badge (for their blog, MySpace page, or Twitter profile) that shows they are participating and on click-through shows their actual allocation (this could be done in percentages if people don’t want to show dollar amounts). Again, this would initially be super easy to do in a controlled environment such as TheSixtyOne. This type of external signal can help foster the right kind of social dynamic, in which participating becomes the norm rather than the exception. As more people participate there will be a lot of data in the system which can be used to further improve the level of voluntary contribution. For instance, when someone new signs up, the system could show the monthly level of contribution by others in the same ZIP code.
A while back I had argued that there was great potential for visual search now that we have a lot of images available. Well, now someone has delivered, and it is Google Labs with its Similar Images prototype. In fact, they did exactly what I had proposed in my original post, which was to make this a refinement to the regular search process, saying:
In fact, Google has had a close analogy in text search for a long time. Results from a search have a link for “similar pages.”
Well, now there is “similar images” and it works amazingly well. First, I tried a search for “ball”, as there are many different kinds of balls. Not all the results have a “similar images” link yet, but the ones that do produce great results. Next, because it is near and dear to me thanks to Etsy, I tried a search for “ring”. The results of exploring the “similar images” are nothing short of amazing. Try different ring styles and you will almost always find lots more rings with a very similar basic style but lots of small variations. I look forward to making ample use of this.
A little while ago, I read Cory Doctorow’s “Down and Out in the Magic Kingdom” (on DailyLit of course). In the book, he has the concept of Whuffie, a reputation-based currency that has replaced traditional money in a world of abundance. Since then I have been thinking about whether we might possibly be headed towards a scenario in which we have so much of everything that we can choose to or maybe even have to use mechanisms other than traditional monetary cost/price for making decisions.
I believe it is worth spending more time thinking about this for two reasons. First, when it comes to digital goods, we have essentially reached a state of negligible marginal cost (the cost of making and distributing an additional copy). This is already driving some profound changes. For instance, the current confusion over how to price something like the news is more than a failure of imagination. Second, even in the world of physical goods we are currently seeing declining prices. Now the latter could easily be dismissed as being purely the result of the current deep and global recession, but I am not so sure because of two divergent trends. Global population growth has slowed to about 1.2% annually in each of the last three years. At the same time technological progress has been accelerating. For instance even a crude measure such as global labor productivity has been growing at least 2% annually and in some years significantly more than that. I should be quick to point out that the benefits of this divergence are extremely unevenly distributed across the world population, but it is not inconceivable that we can eventually bring the marginal cost of products below their marginal benefit for many categories (the marginal benefit being the benefit derived when someone who does not yet have the product receives it).
Such a future has also been referred to as the “post-scarcity economy,” but I don’t like that term because there is still likely to be significant fixed cost involved (even for digital goods) and some things are likely to be scarce for a very long time, such as the environment (there is only one earth for us to live on) and people’s attention. So what I want to focus on instead is what happens as marginal cost falls below marginal benefit. Consider a picture somewhat like the following:
Unlike the traditional demand curve picture, which simply ends in mid-air, here the population of possible customers is eventually exhausted. At that point even a further reduction in the price of the product will not result in an increase in demand (or, as economists would put it, demand becomes perfectly inelastic). In such a situation the social optimum is achieved by everyone having the product.
Interestingly, any price below P* and above Pmin in the chart will accomplish this social optimum. Prices between Pmin and P* no longer act to allocate the product among possible consumers (everyone gets the product) but act solely as a transfer between consumers and producers (I am using these terms because they are the traditional economics terms). So here we have our first indication of how we might get to a point where price is not necessarily needed to achieve an efficient allocation, or where a range of different prices can sustain the optimal allocation. But price plays a second important role: it affects the distribution of surplus.
The area in the chart above the price line and below the demand curve is referred to as the “consumer surplus” — this is the net benefit accruing to consumers, i.e. how much more they value the product than they pay for it. The area below the price line and above the supply curve is the “producer surplus” — which captures the total contribution towards fixed cost. Shifting the price around between Pmin and P* changes the distribution of surplus between consumers and producers but leaves everything else unchanged. In order for the system to be sustainable, the price needs to be such that the producer surplus, i.e. the total contribution, covers the fixed costs of production.
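In standard notation (this is just the textbook formulation, with D and S the inverse demand and supply curves, Q* the quantity at which everyone who wants the product has it, and F the fixed cost):

$$CS = \int_0^{Q^*} \bigl(D(q) - p\bigr)\,dq, \qquad PS = \int_0^{Q^*} \bigl(p - S(q)\bigr)\,dq, \qquad \text{sustainability requires } PS \ge F.$$

Moving p anywhere between Pmin and P* leaves Q* and the total surplus CS + PS unchanged; only the split between the two shifts.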
It is easy to see that if fixed costs are high, there may not be a price that makes the social optimum possible. In fact, here is the picture redrawn for a digital good (credit to Mike Speiser’s closely related post, which also uses hand-drawn diagrams but gets the digital good case slightly wrong).
The supply curve is now completely flat at price zero and the marginal benefit is also zero (no vertical portion of the demand curve). The argument for the latter is that the demand for most digital goods is smaller than the total population and in fact you eventually get to folks who would have negative benefit from consumption (e.g. listening to a song they don’t like). The social optimum is clearly at price zero and all net benefit occurs in the form of consumer surplus. But at price zero there is no contribution towards fixed cost. This is the dilemma that we are already facing with many digital goods such as music and news today.
Creating an artificial price for such a good, for instance by forming a news oligopoly, would move us away from the social optimum. Yet we need to figure out as a society how to cover the fixed cost of news and music production or we will have less of it than would be optimal.
The historical experiments to date in trying to produce goods for everyone without relying on price for allocation were of course largely a failure, e.g. the Soviet Union. Two of the key issues are what happens to incentives for innovation and who decides what to produce among the many possible things that could be produced (keeping in mind that there are still fixed costs). So what alternatives exist?
One important alternative that is not receiving nearly enough attention is to stop charging the same price to everyone. In economics this is known as “price discrimination” and there is an extensive literature on when and how it is possible. For instance, with so-called “perfect” price discrimination everyone would pay exactly what the good is worth to them. The outcome would be efficient, but in this case all the surplus would accrue to producers. Because digital goods are so easy to redistribute they don’t meet the traditional criteria that would allow for price discrimination. But somewhat surprisingly it turns out to be possible to have voluntary differential pricing, as was shown for example by Radiohead’s pick-your-price experiment.
I hope that the logic above shows at a minimum why and how the traditional price mechanism breaks down for abundant and digital goods. There is a lot more that follows from the economics of supply and demand than makes sense to explore in a single blog post. Stay tuned for more posts exploring both the physical goods and the digital goods cases further.
Caring about the security of your site or service is a bit akin to going to the dentist on a regular basis: It’s not pleasant, doesn’t really get you any visible results and costs time and money. Hence the people who care/go regularly are the ones who have had a bad experience by not doing so. In my own case that is now (sadly) true for both my teeth and security.
My exposure to computer security issues started in the fall of 1987 when I was a freshman at Harvard. We had a bunch of VAX machines (remember those?) running BSD Unix. I logged in repeatedly with the default account password that I had been given at the beginning of class. A week or so into classes, when I logged in again, a little shell script ran saying something like “You should change your password! RTM”. RTM is of course none other than Robert T. Morris, who graduated that year and the following year created what became known as the first Internet worm, or Morris worm (btw, lest anyone think differently, I am an RTM fan). Since then I have encountered enough nefarious activity on the Internet that I take even far-fetched sounding concerns about the security of smart electricity grids seriously.
Most startups have extremely limited resources in terms of time and money and need to worry primarily about delivering a service that people will actually use. Having said that, there are a bunch of basic security items that no startup should ignore:
- Guard against SQL injection attacks by using a framework, escaping inputs, or using parameterized queries (see the sketch after this list).
- Limit the potential for XSS attacks (like the one Twitter was hit with) by sanitizing user inputs that get displayed on the site (if you are asking a user for a color code, exclude anything from the input that is not a color code).
- Limit access to your machines to traffic that is absolutely required using netfilter/iptables (in most cases that will just be http, https, ssh and maybe smtp, pop).
- Don’t just use the default configuration files for Apache, PHP (or whatever you are using) and ssh. The defaults tend to have poor security and even a few minutes of work will make them more secure.
- If you have a web-based admin console for your service (who doesn’t?), make sure that it requires strong passwords, and if it permits delete or modify operations, have scripts ready to undo them (soft delete is the way to go). Also run the web-based admin over https to make password sniffing on wifi connections harder.
- Avoid URLs based on auto-increment row ids, which make it easy for an attacker to traverse your entire database (there are also scaling reasons for avoiding these).
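To make a few of these concrete, here is a minimal sketch in Python (the table, the inputs, and the token length are all just placeholders):

```python
import re
import sqlite3
from html import escape
from secrets import token_urlsafe

# (1) A parameterized query instead of string interpolation stops SQL injection:
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (email TEXT)")
user_email = "alice@example.com"  # imagine this came straight from a form
conn.execute("SELECT * FROM users WHERE email = ?", (user_email,))  # safe
# never: conn.execute("... WHERE email = '" + user_email + "'")

# (2) Whitelist inputs where you can (the color-code example above) and
#     escape anything user-supplied before rendering it:
def clean_color(value):
    return value if re.fullmatch(r"#[0-9a-fA-F]{6}", value) else "#000000"

safe_color = clean_color("#ff0000")
safe_comment = escape("<script>alert('xss')</script>")

# (3) Random tokens instead of auto-increment row ids in URLs:
public_id = token_urlsafe(8)  # e.g. /items/3q2xCU8kQbM instead of /items/42
```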
I am sure the list could be made longer, but these strike me as must-have items even when you are just getting started. Once your site or service takes off and you have many thousands or even millions of users (or significant ecommerce transactions) there will be lots of other things you have to do (such as external security audits and hiring “paranoids”), but those are all great problems to be having!
Yesterday evening I had the pleasure of attending a wonderful concert at Carnegie Hall: the YouTube Symphony. The YouTube Symphony is an orchestra that was assembled through auditions conducted via — you guessed it — YouTube. As such, the process was open to musicians from around the world, which is entirely different from how orchestras (even youth orchestras) are traditionally assembled. The idea fits with a concept that we discussed at our hacking education conference: the Internet provides a whole new way of credentialing. No longer is it a function of which school you graduated from (or didn’t) or what degree you received (or didn’t), but rather how much you have actually learned as shown through a visible and shared record (here videos posted to YouTube, but it could be a blog, contributions to open source, etc.).
The concert itself was terrific. Michael Tilson Thomas was probably the perfect conductor for this given the range of what he has done and his passion for a broad conception of classical music, which was on ample display yesterday. The orchestra did amazingly well for a group of musicians that had only rehearsed together a few times. The sound, while not distinctive, was clear, with great timing and precision. The major orchestral pieces performed yesterday were mostly of the high-energy variety, and the enthusiasm of the performers really came through. But even on the more ethereal Villa-Lobos and Debussy they sounded really good. There were a few stunt performances, such as the piano solos by Yuja Wang, which were probably the fastest precise playing I have ever witnessed. Here is an early, more detailed review of the concert.
The other thing that worked well most of the time (and that I wish other classical concerts would do much more of) was the incorporation of video. Soloists were often projected on the wall behind the orchestra and sometimes accompanied by additional projections onto the ceilings and side walls. While some may think of this as a cheesy son et lumière show, I felt it provided a great additional dimension to the performance and only on one occasion distracted from the musical performance.
All in all, this was a great success and I was moved by attending. Many, many thanks to Margot Heiligman and her husband Bill Williams - the artistic coordinator for the YouTube Symphony - for inviting me to this inspiring event.
Last year I had suggested that this was possibly one way to create a phone that might ultimately beat the iPhone. That of course is exactly the approach that Palm is taking. So by showing how powerful a web app can be when it actually leverages local storage, Google has given that model a further boost. Now the big question shifts to where and how such apps will best be distributed, because clearly the App Store has proven to be a successful model. That is no issue for Google, since it can just promote the Gmail app when folks search Google from their iPhone. But it will be an issue for other developers and for the Palm Pre. One interesting possibility would be a Palm-sponsored but essentially peer-produced directory that is accessible from the top level of the phone. Excited to see how this will play itself out!
What you call things matters. There is a reason why “nomen est omen” goes back as far as ancient Rome. They of course believed that a person’s name determines (or at least reveals) fate, but more recently we have seen plenty of evidence of company or product names making a meaningful difference. And anyone who has done user testing will know that what label you put on a button can significantly impact how often it will be clicked.
I was therefore happy to see that the Obama administration is getting rid of the expression “war on terror.” From when it was first introduced by the Bush administration, I thought it was a name that would misguide our efforts, much as has been the case for the “war on drugs.” A war is something you can fight when you have a clearly defined enemy with a territory, an army and a military objective (e.g., kicking the Iraqi army out of Kuwait in the first Gulf War). Terrorists and drug lords have lots of weapons, but they operate as dispersed networks, employ small groups of non-uniformed fighters and have monetary or religious/worldview objectives. Framing the fight against them as a “war” starts thinking about strategy and tactics off on the wrong foot.
I was a bit aghast, though, to see a retreat from the word terrorism. To me a “man-caused disaster” is when a dam that we built breaks by accident and floods a residential area. If the same dam breaks because of a bomb, then that’s “terrorism” (an act intended to instill terror, i.e. fear). To refer to the latter as a “man-caused disaster” has a distinctly Orwellian feel to it.
So I will continue to call this what I have for a while, which is “Fight against Terror.” I believe it is terrorism that we are fighting and “fight” has been a successful term in another and closely related area, the “fight against crime.”
I got some firsthand experience with that yesterday afternoon as I was playing around with the Facebook Connect API. For instance, if you want your Connect application to publish to the feed, you must first submit story templates for approval to Facebook (there are separate templates for one-line, two-line and three-line stories). This means you need to plan in advance what messages your application wants to send and then wait for a review by Facebook. Oddly, no such constraints apply when updating someone’s status. Instead, you need to ask the user for a separate permission to update the status. If you want to be able to do any of this even when the user is not logged into Facebook at the time (for instance because it reflects an action taken via email), you need to ask for another separate permission (“offline access”).
Facebook is putting these hurdles upfront in an attempt to control the experience and prevent such things as spamming of feeds. But there is a tradeoff: hurdles like this also limit innovation. I believe the better approach would be to have fewer upfront hurdles and rely more on end-user behavior and automated filtering. For instance, when Susan rolled out Twitter integration for DailyLit, the initial version did not provide enough control and some folks promptly unlinked their accounts (and complained about it on Twitter). That resulted in a new version which provides detailed control over what will be tweeted and how many tweets will be sent. I am pretty sure that Facebook users would exercise the same type of discretion with respect to Connect apps. But even when users don’t, there is a lot that can be done automatically to suppress what would otherwise be feed spam. Twitter, for instance, already has extensive duplicate suppression in case an external app gets stuck in some kind of loop and spits out the same status repeatedly.
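The duplicate suppression piece, for instance, can start out very simple. A sketch (the one-hour window is an assumption of mine, not what Twitter actually does):

```python
import hashlib
import time

recent = {}      # hash of normalized status text -> last posted timestamp
WINDOW = 3600    # assumed policy: drop exact repeats within an hour

def should_post(status, now=None):
    """Return False if this exact status was posted within the window."""
    now = time.time() if now is None else now
    key = hashlib.sha1(status.strip().lower().encode()).hexdigest()
    if now - recent.get(key, 0.0) < WINDOW:
        return False  # looks like an app stuck in a loop
    recent[key] = now
    return True
```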
This reminds me of Clay Shirky’s “Publish First, Filter Later” chapter. In the digital age, the cost of upfront hurdles in terms of diminished innovation and adoption seems to far outweigh their benefits. We would all get less benefit from Google if it included only sites in their index that conformed to a bunch of upfront criteria, instead of relying on ex-post filtering and user behavior to identify spam. This is not to say that the latter is easy. Much work remains to be done for Google and Twitter (and everybody else), but I believe the long term results will be better.
Our portfolio company 10gen has released the core of its MongoDB database under the GNU Affero General Public License (the drivers are licensed under Apache). Some people react to the Affero GPL by saying something along the lines of “that’s a new license which I don’t want to understand or bother with.” I have some sympathy for that initial reaction (who wants to figure out legal stuff), but most of the same people would happily use GPL licensed code without checking whether it is v2 or v3.
Now the funny thing is that the GNU Affero GPL differs from the GPL v3 in one added paragraph and one changed sentence (you can easily confirm that by downloading the licenses and running them through diff — ignore the crud from the preamble, which has no legal implications). Here is the critical additional paragraph:
Notwithstanding any other provision of this License, if you modify the Program, your modified version must prominently offer all users interacting with it remotely through a computer network (if your version supports such interaction) an opportunity to receive the Corresponding Source of your version by providing access to the Corresponding Source from a network server at no charge, through some standard or customary means of facilitating copying of software.
The intent here — and that is the interpretation that 10gen is interested in — is to say that if Hosting Company X runs MongoDB to provide a “database in the cloud” service and makes changes to MongoDB they have to publish those changes. In other words, this license closes what some folks have referred to as the “ASP loophole,” i.e. the ability of application service providers to modify GPL licensed code without having to contribute the modifications back.
Another way to look at this is that before the web was huge, the predominant mode of using software required distribution of the software. The GPL is and always has been extremely clear that if you modify GPL licensed code and then distribute the modified program, you must make the modified source available. So what the added paragraph says is that making your software available over a network is a form of distribution. That seems entirely logical in a networked world.
So as to not be accused of incompleteness by anyone who runs diff, the added paragraph goes on to specify that the source that is made available must include any GPL v3 licensed code that was also used. The ability to mix GPL v3 and Affero GPL licensed code is the only other addition to the terms.
My recommendation to anyone concerned about the Affero GPL is to run diff on the two licenses. In fact, it would be a great service if the Free Software Foundation did this right on their license pages. I believe that would remove the odd amount of FUD that has somehow gathered around this license. It would of course also help if Google changed its stance and made the Affero GPL one of the available licenses on Google Code. This issue was covered well in a post by Matt Asay, which also has comments worth reading (especially the second-to-last and last comments).
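For anyone who wants to check for themselves, here is a minimal sketch (assuming the canonical plain-text copies of the licenses live at these gnu.org URLs):

```python
import difflib
import urllib.request

def fetch_lines(url):
    return urllib.request.urlopen(url).read().decode("utf-8").splitlines()

gpl = fetch_lines("https://www.gnu.org/licenses/gpl-3.0.txt")
agpl = fetch_lines("https://www.gnu.org/licenses/agpl-3.0.txt")

# Print a unified diff; outside the preamble there is remarkably little.
for line in difflib.unified_diff(gpl, agpl, "GPLv3", "AGPLv3", lineterm=""):
    print(line)
```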
I was reading an article recently about how many college savings plans are totally under water following the plunge in the stock market. These plans theoretically offer significant tax benefits and so look quite attractive. A few years ago my broker was using this as the argument for why we should put some money into them. But I had a previous bad experience with doing something primarily for tax reasons that wound up really backfiring (converting an LLC to be treated like a C-Corp), and so I made sure to focus first on other factors in looking at these plans, such as how big the fees are, who is managing the money, what restrictions apply to redemptions, etc. The upshot was that the plans didn’t stack up all that well, and given the performance of the plans my broker was suggesting, I am very glad I stayed away. It also turns out that a bunch of companies and municipalities that have gotten themselves into trouble with derivatives did so over deals that were primarily or sometimes exclusively done to avoid taxes. So now my operating principle has become to optimize for taxes last, if at all. I focus first on whether something makes sense strategically and financially. Once I have that squared away, if there are alternatives with clear tax implications, I will consider them, but even then only if I completely understand the mechanism.
Google has finally officially launched its long rumored venture fund. Corporate venture funds have a somewhat spotty history and it will be interesting to see how the Google team approaches this. My partner Brad started in venture capital with AT&T Ventures, which was set up like a real independent fund with a GP/LP structure. That is clearly the way to do it if you want to optimize for returns. The description of Google Ventures given by the principals and on the Google Ventures site makes it appear that they too are focusing on backing innovative startups and de-emphasizing any strategic value for Google. It will be interesting to see what entrepreneurs think of that and how it plays out in practice. For someone like Intel Capital there have been a lot of things to fund (e.g. in software or web services) that have a low likelihood of a future conflict between the fund’s sponsor and the startup. Given the breadth of Google’s activities (and ambition?) that may be a bit harder. In any case, I am happy to see another source of funding for startups, especially at a time when some other funds are retrenching, and look forward to meeting the Google Ventures team.