Germany’s parliament today passed the highly controversial “Leistungsschutzrecht,” an ancillary copyright that requires payment for excerpting from and linking to content in some instances. When I walked around in a slight daze at Chicago O’Hare this morning after an all-too-short red-eye flight, my Twitter feed was full of German friends and others outraged about the bill (just look for #lsr). Jeff Jarvis — who spoke out strongly against the law at this year’s DLD conference — in particular was tweeting up a storm.
I agree that the LSR is the wrong approach because it is a blunt instrument that will have significant collateral damage. Excerpting and linking are the lifeblood of the web. But it would be naive to think that there isn’t a problem here that needs addressing. It is very different when I excerpt and link here on Continuations than when Google does it. Why? Because I account for some vanishingly small fraction of Internet traffic whereas Google often has 80 percent or more of all search queries.
When Larry Page took over as CEO of Google I wrote that I hoped he would address the problem of Google’s role in the Internet marketplace by figuring out how to share some of Google’s economics. Because that’s what this is all about. It’s about trying to fix rent extraction by a near monopolist in search. And that rent extraction is a very real issue (how do you think Google is financing self-driving cars and Glass?).
Imagine for a moment a situation in which there are many competitive search providers and consumers switch easily between them. Now consider a company such as Yelp that has accumulated a lot of information about restaurants. If one of the search engines starts to publish more and more of Yelp’s data in the search results instead of finding the listing and driving traffic to it, Yelp could simply choose to not let that search engine crawl and index Yelp. That becomes a credible threat as consumers might switch to a different search engine if they can no longer find high quality restaurant data. That credible threat would over time lead to a situation where companies such as Yelp receive a licensing fee from the search engines for inclusion of the content, whenever including the content directly creates more value rather than driving traffic.
But with the market structure in search as it is, Google has been holding on to all of the value created by including ever more content in its search results, all the while adding insult to injury by building or buying competing content services. That is not sustainable. Google needs to change its practices or we all risk seeing more legislation like the LSR, which in trying to fix a market structure problem in search is likely to do more harm than good overall.
Bijan has a good post up today with “Some Thoughts on the Future of Search.” This is a topic on my mind following Facebook’s announcement of Graph Search and my upcoming appearance on Monday at DLD on a panel titled Next Generation Search, together with Philip Inghelbrecht (Rockmelt), Wolf Garbe (Faroo), Thomer Kagan (Quixey) and moderated by Henry Blodget. We have several investments that deal with search directly (Duck Duck Go) or indirectly (e.g., foursquare).
The intersecting and sometimes conflicting trends of location, mobile form factors, social context, pools of knowledge (e.g., Stack Exchange), machine learning and privacy suggest to me that the search game, far from being over, is really just getting going. In preparing for the panel I would love to hear about the most interesting new search or discovery experiences people are excited about. Please let me know!
Yesterday Facebook announced its long awaited search offering, which it is calling Graph Search. It is the logical way to make the graph data that Facebook has been accumulating on its own and through its Open Graph accessible to users. Graph Search is launching in beta with the obvious types of searches: people based on interests; places, movies, books, music, etc. based on people. This is of course what Google has been fearing and why Google has been doubling down on Google+ at every turn (and I happen to agree with Brad Feld that Google’s patience has the potential to pay off).
At this point it couldn’t be clearer that Facebook and Google are on an epic collision course. What is disappointing to me is that both are pursuing a vision that seems counter to the spirit of the internet as a collection of small pieces loosely joined. As an end user I would much prefer a rich ecosystem of competing smaller services that work on specific problems, such as foursquare and Yelp for places, Netflix and Vudu for movies, Rdio and Spotify for music, etc. The desire for a network of networks has also informed how we have invested.
I continue to believe that smaller services focused on specific areas can ultimately deliver a richer experience for end users and also avoid a concentration of power. But the individual players are up against the powerful effects of supermodularity in information-based production functions. On the margin I can make a better recommendation for, say, books if I also have information on movies. The big question then is whether the resulting complementarities doom us to face a few highly integrated players or whether it is possible for independent services to be sufficiently differentiated as to offset this advantage.
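To make the supermodularity point concrete, here is one way to write it down (my notation, not from the post): let f(S) be the recommendation quality achievable from a set S of data sources. Supermodularity says the marginal value of an additional source grows with the data you already hold:

```latex
% f(S) = recommendation quality from data sources S (hypothetical notation)
% Supermodularity: for source sets A \subseteq B and an extra source x \notin B,
f(B \cup \{x\}) - f(B) \;\geq\; f(A \cup \{x\}) - f(A)
% E.g. A = \{\text{movies}\}, B = \{\text{movies}, \text{music}\}, x = \text{books}:
% book data is worth more to a player that already has movie and music data,
% which is the compounding advantage that favors integrated players.
```

The big question in the paragraph above is whether differentiation by independent services can outweigh this inequality in practice.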
Evan Williams apparently said recently that there is an issue with all of us being stuck in a kind of “continuous present” on the web (ironically, I can’t find that quote right now). I am certainly stuck in that all-powerful present many days. There is so much new output hitting the web every day that one can barely scratch the surface of it, let alone delve into the past. Google has only aggravated this problem by tilting its search algorithm more heavily toward recency. Techmeme — one of my daily go-to sites — only aggregates the day’s output.
The power of the present is another example of a “filter bubble.” And just as I have called for an “opposing views reader,” what we need to do here is surface time explicitly. I am not a fan of Facebook by any means, but Timeline may turn out to be an important contribution to the future of the web. Similarly there is something quite magical about Timehop as a way of bringing our own past back to us. Just the other day my Timehop email reminded me that a year earlier we had picked up a dog from a shelter.
Now imagine a version of Techmeme that links today’s topics to their historical precedents using a kind of timeline view. Or think of a search engine that adds a time dimension to the results navigation — so that instead of having to explicitly ask for older content you can just “scroll” into the past. Thinking about this has given me a whole new appreciation for the importance of what Brewster Kahle and the team at the Internet Archive are working on.
Last July I predicted that Google would go all in by bundling Google+ aggressively with search, and that is exactly what was announced yesterday with Search, plus Your World. The “plus Your World” part right now refers to “your world on Google,” as only Google+ profiles, posts and shared images are included and not content from Twitter, Facebook or others. John Battelle captures this well in his aptly titled “Search, Plus Your World, As Long As It’s Our World.”
Also worth reading are Danny Sullivan’s excellent overview of what Search+ offers and his detailed analysis of whether or not Google could already include some Twitter content without a commercial arrangement with Twitter. Danny’s analysis includes actual comments from an interview with Eric Schmidt. Finally, the most scathing reaction has come from MG Siegler, who flat out titles his piece “Antitrust+.”
While it’s too early to know how all of this will play out over time (there has already been some public back and forth between Google and Twitter), two things seem fairly clear. First, in the near term this will be bad for end users. Second, the root of the problem is Google’s economics for search. The two points are intimately related.
On the first point, John Perry Barlow aptly tweeted:
“We are becoming helpless collateral casualties in the war between Google and Facebook. bit.ly/WorldWarIII” — John Perry Barlow (@JPBarlow), January 10, 2012
From an end user perspective the best web is one of little pieces loosely joined. That kind of web allows for lots of innovation and individuality. Instead, we are currently headed for big chunks of experience provided by just a couple of players. While a high degree of integration may look appealing to some under an “ease-of-use” type argument, all you have to do is look at the enterprise, where a few large vendors have dominated for years (SAP, Oracle), to know how undesirable that is.
On the second point, the root cause of all of this is search economics. Google keeps one hundred percent of the search revenue from searches on Google. The explicit quid pro quo has always been that Google sends traffic to a site in return for getting to include the site’s content among the search results. No search revenue is shared with the sources. In the days when Google was just a search engine that seemed like a reasonable quid pro quo. But two things have happened to upset this balance. First, Google has gradually entered many businesses that compete directly with providers of content. Second, we have seen the emergence and inclusion of many content “micro chunks” that will hardly ever generate traffic to the originating site, such as a restaurant rating from Yelp. I have argued before that some kind of revenue sharing will be required to break through this.
When Larry Page became Google’s CEO I had hoped that he would pursue a vision of the web as little pieces loosely joined, with Google providing a lot of the glue. It is by now amply clear that Google is going in exactly the opposite direction. That’s a shame in the near term. In the long run I agree with John Battelle that the web will find a way to route around all of this (assuming we don’t let the politicians screw it up in the meantime).
There are a lot of posts worth reading about the potential impact or lack thereof of a possible deal between Microsoft and News Corp (and maybe other news sources as well), including Danny Sullivan, Jeff Jarvis, Andrew Parker and others. But one thing that I have missed from everything I have read (and it may well be that it’s there and I missed it) is what this is truly all about: the immense competitive pull of Google’s amazing profits! The big challenge for businesses with network effects is how to split the profits pie between themselves and others.
One of the reasons that Craigslist is so hard to attack is that Craig has chosen to give the bulk of the benefits to the network itself (as “consumer surplus”) by operating most of Craigslist for free. Google made a different choice, which is to keep a ton of the economics for itself (and often in a non-transparent manner, i.e., it is unclear how much of the economics Google takes). That will eventually create openings for others based on a willingness to share the economics differently.
Microsoft took a first step in that direction with cash back for shopping. A deal with news organizations would simply be a further step in that direction. It is therefore not clear that Google is maximizing long term value by hanging on to as much of the profit as it is. That is the real heart of the ongoing conflict over news indexing.
Something big may be happening to user experience. Bing and Google will soon be showing tweets as part of their search results (they have both licensed the full Twitter firehose, and Marissa Mayer yesterday demoed a version of “Social Search”). Google is already returning videos from YouTube as search results and apparently will soon be offering music. Bing video results can be played right on the results page. Facebook just announced music gifts that can be played right inside the news feed and wall. All of these might simply be seen as UI affordances, but I am wondering whether this is a trend that will ultimately result in a convergence of user experience.
For some time now we have taken tree-like navigation as a given for the web user experience, with the root of the tree being search. But as you click on results, the experience can be jarring, as a single click transports you into a completely different color scheme, layout and set of capabilities. If you don’t find what you are looking for you hit “back” and try again, with similar potential for disruption. Put differently, the web experience is “site centric” as opposed to “user centric.” This matters very little when you are exploring and discovering — in fact it is part of the fun of doing so. But it matters a lot when you are just trying to get stuff done.
The reason I put a “re” in front of convergence in the title of this post is that of course the early walled gardens, such as AOL, provided a converged user experience. Now it seems like we might be headed back there, but built on top of APIs and through partnerships. One place where that is most likely to happen is mobile, as the overhead of switching there is even bigger (e.g., on the iPhone you have to quit one app and start another). On the web this might seem to be shaping up as a fight between the “big guys” (Facebook, Microsoft, Yahoo and Google), but I know of at least one startup that is trying to provide a “neutral” play here — you can try it out at kikin.com.
A while back, I wrote that display advertising is too complicated. Today Google announced their entry into the display ad exchange business. Both in their announcement and in this conversation with Neal Mohan from Google, the idea of bringing simplicity to the market is cited as a major driving force. Google also describes the system as if it were the first display ad exchange ever.
Of course the reality is a bit different, with Yahoo/Right Media and others having offered exchange capabilities for quite some time. For advertisers and agencies who have already been using exchanges this simply means that another exchange is available. What remains to be seen is if Google can succeed in pulling smaller advertisers into display the way they did with AdWords and AdSense.
In the meantime, however, if the Justice Department is paying attention, they should stop wasting time reviewing the Yahoo-Microsoft search deal. Google now has a complete display offering, so for there to be a credible second player that can cover both search and display, the Yahoo-Microsoft search deal is actually a competitive necessity.
The top link on Techmeme right now is yesterday’s post by Jason Calacanis titled “Yahoo committed seppuku today.” It is worth reading because it powerfully (and hilariously) lays out the position that handing over search to Microsoft is a huge mistake for Yahoo. It is a position I am very familiar with, because I made much of the same argument (although in a less entertaining fashion) last May. But it is more than a year later and I am not sure the logic still applies.
First, Yahoo has lost a ton of talent along the way. Some of that talent went to Microsoft. Most notably, Microsoft recruited Qi Lu, one of Yahoo’s key search engineers in December of last year.
Second, despite some slight gains in search share (from hovering just above 20% for most of 2008 to slightly above 21% in early parts of 2009), Yahoo’s search monetization has been falling. In Q2 they reported a 15% decline in search ad revenues at a time when Google reported a 3% increase.
Third, search may simply not be a winnable category for some time for anyone other than the leader. It is worth remembering that the arms race with the US was a key contributing factor in the demise of the Soviet Union. So if you are Yahoo and you are behind and slipping (see first two points), do you really want to keep plowing money into search? Might be smarter to let someone with deeper pockets wage the fight on your behalf and wait it out.
Fourth, the economics of the deal with Microsoft might be attractive. On the one hand, there was no upfront payment, but on the other there are a number of guarantees, such as “Microsoft will pay traffic acquisition costs (TAC) to Yahoo! at an initial rate of 88 percent of search revenue generated on Yahoo!’s O&O sites during the first five years of the agreement” and “Microsoft will guarantee Yahoo!’s O&O revenue per search (RPS) in each country for the first 18 months following initial implementation in that country.” Both of these are difficult to interpret since not enough detail is provided, but it sounds intriguing.
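To get a feel for what the 88 percent TAC rate means, here is a back-of-the-envelope sketch. Only the 88 percent rate comes from the deal terms quoted above; the query volume and revenue-per-search figures are made-up illustrative numbers, not actuals.

```python
# Back-of-the-envelope on the Microsoft/Yahoo search deal.
# Only the 88% TAC rate is from the announced terms; the volume and
# RPS figures below are hypothetical, purely for illustration.
searches_per_year = 25e9      # assumed annual searches on Yahoo O&O sites
revenue_per_search = 0.05     # assumed RPS in dollars

gross_search_revenue = searches_per_year * revenue_per_search

tac_rate = 0.88               # Microsoft pays 88% of O&O search revenue to Yahoo
yahoo_take = gross_search_revenue * tac_rate
microsoft_take = gross_search_revenue - yahoo_take

print(f"Gross O&O search revenue: ${gross_search_revenue / 1e9:.2f}B")
print(f"Yahoo keeps (88% TAC):    ${yahoo_take / 1e9:.2f}B")
print(f"Microsoft keeps:          ${microsoft_take / 1e9:.2f}B")
```

The point of the sketch is that Yahoo gives up only 12 percent of gross revenue while shedding the cost of running its own search infrastructure, which is presumably where the real savings in the deal lie.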
Bottom line: in May 2008 I shared the same point of view that Jason published yesterday. But it is now the end of July 2009, a lot has happened, and I am thinking this could actually turn out to be a smart move. The initial reaction clearly suggested the stock market did not think so, but that seems more like a knee-jerk reaction to me. I think it will be a couple of years before one can properly judge this decision.
On Friday evening the much anticipated WolframAlpha launched publicly. It is great fun to play around with, not only because it has some hilarious easter eggs, but because there is a renewed sense of exploration. At least I found myself coming up with ever new queries to see what it knows (computes) and what it doesn’t. For instance, here are two queries that show off the range of WolframAlpha:
One thing is for sure — a lot of homework will never be the same! But what about beyond that? After some pre-launch coverage calling it a potential “Google Killer” and Google’s less-than-subtle trumpeting of its own structured data efforts, there has been an unreasonable amount of expectation built up, and not surprisingly some of the initial reviews are now lukewarm.
I think that a lot of what happens next depends on how quickly WolframAlpha can open up their system for participation by third parties. The power of a web search engine derives from the fact that the underlying corpus is constantly growing through many individual additions. The additional structure required for computational search meant that WolframAlpha had much of its corpus manually prepared in a controlled fashion. That of course does not scale. If WolframAlpha can open up the system to third party contributions of data and eventually of algorithms, then it will take us a big step closer to that omniscient computer from the Star Trek bridge.
In the meantime, it is nice to see something new and differentiated in the marketplace. As Dare Obasanjo points out, WolframAlpha may fit a pattern of using different search engines for different purposes. And if nothing else, this will make sure that Google gets better faster!