So yesterday the Feds busted the guy behind Silk Road, the marketplace for drugs and other illegal goods paid for in bitcoin. The indictment reads like a screenplay for a movie or a Breaking Bad-style television series (not that I watched Breaking Bad; I am just basing this on the inevitable flurry of tweets about it in my stream). There is one key takeaway here: law enforcement is easier online, not harder, as government would generally have us believe. It is basically impossible to operate in modern life without leaving lots of digital footprints. Now admittedly “Dread Pirate Roberts” (really?) made some pretty glaring mistakes, such as apparently posting a question on Stack Overflow under his real name, then replacing it with a handle and later using that same handle in one of his keys.
Of course government seems hell-bent on squandering the very advantage that online provides. By taking a highly adversarial position toward service providers and disrupting the trust between service providers and their end users, government is fueling a spy-versus-spy arms race that is pushing both legal and illegal activities offshore and into deeper crypto. To see how counterproductive this is, consider what we are now beginning to learn about what happened at Lavabit, the encrypted email service. The court ordered a turnover of keys and wholesale access to the data, with the agency promising to filter out only the relevant data. Instead of complying, the founder shut down the service and is now helping to bring the litigation to light. This is a case that deserves to make its way to the US Supreme Court.
If we want any kind of network analysis at all (and I have argued that we might), then it has to be based on transparency and be done in a way that doesn’t pit service providers against their end users or force them to shut down. At the moment we are doing the exact opposite, which is a continuation of bad policies in past actions against Craigslist.
A few days ago I wrote about my support of the defense fund for Barrett Brown. One of the reasons regulators and legislators seem to think we need such over-the-top draconian sentences is because we have the wrong security model for payments in particular and authentication more generally. Information will continue to leak — that is its nature. It should not be possible for me to transact in your account simply by having information on your account. Instead we need to broadly embrace two factor authentication.
There will continue to be disclosures of account information both in the small and in the large. Some of it will simply be inadvertent. Others will be the result of leaks or breaches of systems. Trying to fight that with draconian penalties is simply wrong. I should be able to publish my bank account number and even credit card number on my blog or via Twitter! Instead of just requiring that information in order to transact, accounts should also require authentication via my smartphone.
This change won’t come overnight and some incentives would be helpful. Instead of pushing tougher and tougher sentencing, legislators should pass laws that require financial institutions and others (e.g. healthcare) to transition to two factor auth.
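The second factor I have in mind here is well standardized. A minimal sketch of a time-based one-time password (TOTP, RFC 6238) — the mechanism behind most smartphone authenticator apps — fits in a few lines of standard-library Python:

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32, interval=30, digits=6, at=None):
    """Time-based one-time password per RFC 6238 (HMAC-SHA1 variant)."""
    key = base64.b32decode(secret_b32, casefold=True)
    # The moving factor is the number of `interval`-second steps since the epoch.
    counter = int((time.time() if at is None else at) // interval)
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F  # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)
```

The server and the phone share the secret once (usually via QR code) and thereafter independently compute matching codes, so no information that leaks later is enough to transact.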
Last week in Tech Tuesday I asked for topics to write about in my series on technology in startups. There seemed to be a fair bit of interest in security, so here we go. First off a disclaimer. As with any general purpose advice, you need to think a lot about what it is you are trying to do. The security requirements for a bitcoin startup are vastly different from those for a social media one.
When you are just getting going you should treat security the same way as scalability: make sure you have the basics covered but don’t spend too much time on it as your bigger problem is to build something that people actually want to use. Again, please keep the disclaimer from above in mind though!
As it turns out even the basics still seem harder than they should be for a lot of folks. Here is what I consider to be included: hashed passwords, SSL for all logged-in users, safeguards against SQL injection and cross-site scripting attacks, two factor auth or a VPN requirement for web-based site administration, key-based auth for all server access (and limit dramatically who has server access), disciplined access to all cloud services.
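On the first item, “hashed” means a slow, salted, one-way derivation — never a bare MD5 or SHA-1 of the password. A minimal sketch using only the Python standard library (PBKDF2; libraries like bcrypt or scrypt are equally good choices):

```python
import hashlib
import hmac
import os

def hash_password(password, salt=None, iterations=200_000):
    """Return (salt, iterations, digest); store all three per user."""
    salt = salt or os.urandom(16)  # fresh random salt for every user
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    return salt, iterations, digest

def verify_password(password, salt, iterations, digest):
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    return hmac.compare_digest(candidate, digest)  # constant-time comparison
```

With per-user salts and a high iteration count, a leaked database cannot be cracked with precomputed rainbow tables and brute force becomes expensive.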
One way to get a lot of the basics covered is through widely used web development frameworks. That comes with a *very* important caveat: because those frameworks are widely used, lots of people are looking for exploits, and when a zero-day is found you will be vulnerable. You *must* apply all security patches immediately and generally stay up to date with the framework.
For managing cloud services access there are two promising startups: Meldium and Bitium. These are both relatively young and so might turn out to have their own security issues, but they are a lot better than emailing cloud services passwords around or keeping them in Google Docs, which is what a lot of startups are doing right now.
Bottom line: when you are just getting going be pragmatic and focus on the must have items. Once you start to grow though make sure not to neglect security — you will need to upgrade as you scale.
I was planning to write more on cyber security, but then yesterday I read this harrowing letter from a prisoner at Guantanamo Bay (Gitmo). I don’t take his claims about his lack of involvement at face value, but that is irrelevant. He has been held for a shocking 11 years and 3 months without a trial. That goes against everything we as a country should stand for.
I wrote in 2010 that “I wish we had the courage to go ahead with a shutdown of Guantanamo, even if that results in releasing people who will want to attack us.” I believe that today more than ever. If we want long term security not just for ourselves but for the world, we have to stop believing in drones and start by leading with the values that we want others to embrace.
Gitmo Must Go.
I have written previously about cyber security and cyber defense topics that have become more acute in the wake of several large scale attacks on banks and other companies. Unfortunately, lawmakers in DC are reacting the only way they seem to know how: by further broadening laws that are already overreaching and yet ineffective. In particular, the House Judiciary Committee is proposing changes to make the Computer Fraud and Abuse Act (CFAA) even more draconian. As a quick reminder, this is the act under which Aaron Swartz was charged.
Why is the CFAA ineffective? Because most of the attack activity comes from other jurisdictions. Yes, there is some of it here domestically but we have had relatively little problem tracking down folks and applying existing law.
Why is the CFAA overly broad already? Because it elevates terms of service violations to criminal offenses with significant jail penalties. And we all know that nobody reads the Terms of Service and that they tend to include the kitchen sink.
How is this about to get worse? The new draft makes this broadness much worse by adding the possibility of racketeering charges, making intent — not just actual breach — punishable, further increasing penalties and expanding the definition of “exceeding authorized access.” Here is a good summary of the changes.
Why does this matter? Because it is turning activities that many of us engage in nearly every day into crimes and putting a huge damper on important innovation. As an example of the former: when checking out a startup that has auto-increment ID numbers in its user URLs, I will frequently see how many users it actually has by trying out higher ID numbers. Under the CFAA this is punishable with jail time. In fact, any kind of manual change to a URL in the browser bar becomes basically illegal. Now imagine trying to build a new piece of technology that does web scraping or spidering or tries to interact with a site on behalf of a user. Basically, the CFAA makes this kind of innovation illegal.
Securing your site or service has become ever more important as the number of attacks is rapidly on the rise. As I have written before on Continuations, I am not a fan of overreaching security legislation as a response. If we want to keep these legislative efforts at bay, it will help if we do a better job with security. Increasingly that means you are only as secure as some of your key vendors.
In particular hosted email and DNS have proven to be big holes. If you use hosted email make sure that it has two factor auth which cannot be overridden through social engineering. A lot of damage can be done with access to email as Cloudflare discovered a while back. This should really also be a requirement for your DNS provider. If your DNS can be repointed that opens up all sorts of crazy security holes including the potential for a massive man-in-the-middle attack. Or, as BitInstant found out recently, DNS control can be used to lock you out of your own systems if you don’t have IP based access.
So what should you do? Start by making a list of all the external systems that are security relevant and put hosted email and DNS at the top of the list. Make sure all of these external systems use two factor auth wherever possible. If not, make password resets and security questions for these systems as difficult as possible (and certainly never use factual answers such as your mother’s real maiden name).
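On that last point, the simplest defense is to treat security questions as second passwords: generate a random answer, store it in your password manager, and never reuse it. A minimal sketch using Python’s `secrets` module:

```python
import secrets
import string

def random_answer(length=24):
    """High-entropy stand-in for a 'security question' answer.

    Store the result in a password manager; an attacker who knows your
    biography learns nothing about it, unlike a real maiden name.
    """
    alphabet = string.ascii_letters + string.digits
    return "".join(secrets.choice(alphabet) for _ in range(length))
```

At 24 alphanumeric characters this gives roughly 140 bits of entropy, far beyond anything guessable through social engineering.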
I wrote a blog post last week about the current Privacy Theater in the US, where the government is simultaneously pushing stricter privacy regulations and huge backdoors that would completely undermine privacy. The backdoors come in the form of the Cyber Intelligence Sharing and Protection Act or CISPA. The folks at Lumin Consulting have put together a good infographic that illustrates how CISPA undermines privacy:
I am actually sympathetic to the basic idea behind CISPA, which is to make it easier to share incident data as a way to identify and protect against attacks. But the way that CISPA goes about it is wrong on two important levels. First, it would stuff the incident information into the existing agency and vendor world instead of making it widely available on the Internet. Wide availability would let researchers, hobbyists and new vendors all work on improving security. In other words it would enable the Internet to help protect the Internet.
The second big mistake in CISPA is that it uses broad language when what we need is a tight and well specified sharing protocol. I am not suggesting that such a protocol can be devised to cover all types of attacks and attack related information but rather that by starting with something tight we can go from no public data to a lot of public data. For instance, reporting IPs involved in DDOS attacks would be a great and very precise starting point. The way the government can help here is by helping to define the reporting standard and starting small instead of shooting for some all encompassing solution.
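To make the “tight and well specified” point concrete, here is a sketch of what a minimal public DDOS source report could look like. The field names and schema are my own hypothetical illustration, not an existing standard:

```python
import json
from datetime import datetime, timezone

def ddos_report(source_ips, target, window_start, window_end):
    """Hypothetical minimal incident record: just source IPs, the target,
    and the observation window. Nothing else, so there is little to abuse."""
    return json.dumps({
        "type": "ddos-source-report",
        "version": 1,
        "target": target,
        "window": [window_start.isoformat(), window_end.isoformat()],
        "source_ips": sorted(set(source_ips)),  # deduplicated for compactness
    }, indent=2)

report = ddos_report(
    ["1.2.3.4", "1.2.3.4", "5.6.7.8"],
    "example.com",
    datetime(2013, 4, 1, 12, 0, tzinfo=timezone.utc),
    datetime(2013, 4, 1, 12, 5, tzinfo=timezone.utc),
)
```

A record this narrow could be published openly: it identifies attack traffic without disclosing anything about the victim’s users or systems, which is exactly the opposite of CISPA’s broad sharing language.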
First there was the Path address book tempest. Now there is a concern about apps being able to access photos without permission. It would be a shame if this resulted in more centralized control over apps and longer review processes. What we need instead is some kind of peer produced approach to app security. What I have in mind is something along the lines of what Chris Dixon did with SiteAdvisor for web sites. Some people will (voluntarily?) run software on their mobile handsets that monitors app activity, including which servers these apps communicate with. The results from these “monitors” are aggregated to provide security rankings for applications.
This is not meant to be a substitute for a permissions model but to complement it. I like that apps need to check with me about accessing say my location and I certainly would want the same for my address book. But that still doesn’t tell me anything about where this data goes. Admittedly monitoring what an app does won’t capture what happens once the data reaches servers. For that we will need to rely on other trust models. This is an opportunity for startups like Parse that are providing a backend for mobile apps.
If there is an initiative like this already out there, I would love to know about it. I think it will be critical to a healthy app ecosystem that doesn’t get choked by a few centralized market places.
The Internet has experienced an epic set of attacks over the last few months. This has ranged from massive compromises such as Sony’s Playstation Network to the smaller but potentially equally impactful breach of Mt. Gox (a bitcoin exchange that is still trying to recover). The affected entities have included both companies and governments, as in the recent hack of the CIA’s web site. It would be naive to believe that groups such as Anonymous or LulzSec will go away easily or even if they did would not be replaced by others. In addition, there are meaningful security threats from ruthless competitors (or individuals at those competitors) and from (quasi-)government entities.
So unless we dramatically restrict the Internet, which would be a terrible idea, we will have to assume that someone will be attacking. That means security should be a board-level issue for companies just as much as financing risk. How should a board approach this? Here are some of the key questions that I believe every board should ask of management:
- Who owns security inside the company? How qualified are they? If the answer is nobody or not qualified, then the company needs to get outside help quickly and add security to the recruiting plan.
- Has an external security audit been performed? If so, what critical vulnerabilities have been identified and when will those be closed? If not, when will it be performed?
- Even prior to or without an audit, does the company adhere to some minimal security practices? My personal short list: Password storage (one-way salted hashes), strong passwords for admin systems (ideally two factor auth), https-only for all admin systems (to prevent hijacking of wifi admin usage), rigorous input sanitizing (to guard against XSS and SQL injection attacks), DDOS preparedness.
I am writing this post in part to remind myself as a board member to go over these issues. Most startups have so many things going on that security could be perennially below the cutoff on the priority list of board topics. The last few months have made it clear that we cannot afford that going forward.
Would love to hear from other board members and from startup teams what they are doing re security!
In case you missed it, email services provider Epsilon had a massive security breach and a large number of email lists were exposed, including those of several large banks such as JP Morgan Chase and Citi. This is likely to result in more targeted phishing campaigns, as it is now publicly known who is an actual customer. As a quick aside, it is somewhat shocking how many sites leak this kind of information in the ordinary course of operation — most password resets work on the basis of entering an email address, and if the address is not in the system, this fact is displayed on screen! That of course means that someone can simply try a large number of email addresses and retain only the ones which the password reset system acknowledges as known.
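The fix for this enumeration leak is simple: return the identical response whether or not the address is on file, and only do the real work behind the scenes. A minimal sketch (the in-memory set and the send step are illustrative placeholders):

```python
REGISTERED = {"alice@example.com"}  # stands in for the real user store

def request_password_reset(email):
    if email in REGISTERED:
        # Queue the actual reset email here (hypothetical send step);
        # do it asynchronously so response timing does not leak either.
        pass
    # Identical message in both cases, so the endpoint cannot be used
    # to probe which addresses are customers.
    return "If that address is registered, a reset link has been sent."
```

The only person who learns whether the address was registered is the person who controls the inbox.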
More importantly, it highlights that when lots of information is stored in one place, that place makes for a juicy target. I am sure that we will see more breaches in the future with even more sensitive data exposed, including social security numbers. I had been thinking about this because just the week before I had filled out several online forms at financial institutions that required my social security number.
So why are we so afraid of these breaches? For two reasons: first, because single factor authentication is common on the web and is fundamentally broken. Second, because there is still a lot of “offline” authentication that can result in meaningful exposure (eg issuance of a credit card). Both of these would seem to be solvable with a bit of determination, in particular by leveraging the fact that cell phones tend to be tied reasonably well to identities.
Even with just feature phones, receiving an SMS text with a code provides a useful second factor of authentication. This is super easy to implement using Twilio. The challenge then becomes: at account creation, how do you know which phone number belongs to which individual? This is where carriers seem to have completely dropped the ball to date. They could be meaningful authentication providers (letting their users opt in), which incidentally would provide a pretty strong tie to the carrier, something that has proven difficult for carriers to maintain.
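The server side of SMS codes is genuinely small. A sketch of issuing and verifying a short-lived, single-use code (the in-memory store and phone numbers are illustrative; the actual SMS send is left as a comment where a gateway like Twilio would plug in):

```python
import hmac
import secrets
import time

CODE_TTL = 300   # seconds a code remains valid
_pending = {}    # phone number -> (code, issued_at); use a real store in production

def issue_sms_code(phone):
    code = f"{secrets.randbelow(1_000_000):06d}"  # random 6-digit code
    _pending[phone] = (code, time.time())
    # Hand `code` to your SMS gateway here (e.g. Twilio's REST API).
    return code

def verify_sms_code(phone, submitted):
    entry = _pending.pop(phone, None)  # pop makes each code single-use
    if entry is None:
        return False
    code, issued = entry
    return time.time() - issued <= CODE_TTL and hmac.compare_digest(code, submitted)
```

Because the code is random, short-lived, and consumed on first use, a leaked account number alone is no longer enough to transact.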
Google has done a good job offering two factor auth. But where we really need it is in conjunction with payments (where Google has made limited progress). Another alternative, of course, would be single sign-on providers that have strong authentication and can then be used to log into payment systems.