

A commenter over at the Small Wars Council thought my theory about the possible motive of the Iranian Metasploit hijinks would make for a good movie–but, I assume, not the most credible analysis.  First, typing commands into msfconsole is a little hard to dramatize on screen. About the closest we’ve come to making the command line sexy was having Trinity from The Matrix run an nmap scan and a fictitious SSH exploit, and Trinity did it wearing a leather outfit (see article and YouTube clip*). The real perpetrator may be doing it unshaven and in a bathrobe. At least, that’s how I do my best work. Secondly, I am, like, so totally serious about my theory of someone more interested in disrupting intelligence agencies than Iran’s nuclear program.  Here’s why:

There are certainly credible reasons why a professional intelligence agency would bang away in Iranian networks with Metasploit. If the Iranians are shutting down key parts of their network (I don’t know how vital the automation bits mentioned in Mikko’s piece are) to do forensics to figure out how the attacker is getting in, maybe blasting “Thunderstruck” is the next best thing to some fancy exploit to ruin centrifuges. Or, perhaps, some group who wants to disrupt Iran’s nuclear program is flooding them with garbage attacks to overwhelm Iranian attempts to analyze their more ‘long-term,’ targeted malware. That analysis takes time and personnel who are in short supply even in the U.S. Think of it, to borrow a phrase from one of my brilliant friends, Federico Rosario, as “a DOS attack on skilled personnel.” Others have mentioned playing “Thunderstruck” as a kind of psychological warfare, undermining trust in Iranian infrastructure.

However, these types of attacks seem every bit as likely to disrupt professional intelligence agencies’ access as to help them in some way. I am also unimpressed with the PSYOPS theory, because (1) this has already been accomplished via previous malware and (2) announcing one’s presence contradicts the IC’s modus operandi in terms of being able to discreetly collect information and disrupt systems.  That’s why I think there is another motive at work here. The reported worm and Metasploit hijinks may even be two separate actors.

* – Funnily enough, that little 1:09 clip dramatizes pretty much every policy maker’s fear of an infrastructure attack on the US.



Dave Aitel of Immunity delivered a talk at the 20th USENIX Security Symposium, which built on the in-progress talk I discussed here and here. It is worth watching and attempting to understand. This is not a 100% endorsement, but there are a few lessons here that transcend the so-called “cyber” domain and apply to strategic thinking about technology in some really profound ways.

The first is that attackers win and defenders lose not because of some inherent feature of cyber war but because the attackers have a better strategy. To start, Aitel lays out a number of defenders’ excuses for why attackers are winning: inadequate resources, attackers “only have to be right once,” users are easy targets, etc. Aitel continues, “They keep saying it’s asymmetric, because they made a strategic choice and lost.” This goes to Rupert Smith’s point about asymmetric warfare–that it is a phrase “invented to explain a situation in which conventional states were threatened by unconventional powers but in which conventional military power would be capable of both deterring the threat and responding to it.”* Rather than challenge the strategy, defenders have redefined the environment to allow for their failure.

Secondly, this poor strategy comes from cultural and technological weaknesses–and technological weaknesses are really cultural weaknesses. This is a case I have been trying to make for the last four years on everything from small arms design to cyber war, but Aitel does it with superior technological knowledge and better smart-ass commentary.** In terms of “cyber warfare,” Aitel says defenders are unwilling to say no to insecure systems or designs (e.g. most browsers and SSL VPNs, he argues). This itself is not very shocking. Spend one day as an IT consultant with an interest in security, and you will get push-back when you ask users to change their behavior. Aitel goes further and says that the whole process–the whole human process–for designing and implementing security is broken. As I wrote at the Council, “After all, when someone writes an exploit or takes advantage of some misconfiguration in a network to gain or deny access, they are attacking humans and human processes ultimately. The medium–a wireless network, an embedded device, whatever–is inconsequential.” I point this out because it is relatively easy for me to say this; my technological understanding of offensive techniques is modest at best, and attacking networks (much less making attack platforms) is not my business. Aitel is in the business of finding, writing, and selling exploits–and he’s telling you he’s winning because the way humans approach security is broken, not because of some whiz-bang widget.

On the opposite side of this human equation, attackers are, as Aitel says, “mature, self-organizing, [and] highly motivated.” Do you think the government’s recent approach to USCYBERCOM, etc., is “mature”? Government functionaries are still waiting on wonks to hand them a piece of doctrine–which will most likely be wrong–before they act. It reminds me of what Boyd said: “[I]f you have one doctrine, you’re a dinosaur.” We are standing up dinosaurs, and this is a fundamental cultural problem.

What concerns me more is how these cultural problems transcend this “cyber” domain. Do we have our money invested in the right technology in terms of engaging near-peer competitors, whether it’s another aircraft carrier vulnerable to ASBM attack or some other high-dollar system? Do we examine flaws in human processes throughout Defense, like the failures to address insider threats such as Nidal Hasan or Bradley Manning? How does, say, our strategy in Afghanistan rate in terms of maturity compared to that of the Taliban?

I would still like to see his talk written up into a larger work, but it is well worth the effort for defenders–no matter what the domain–to consider Aitel’s challenge:  “Attackers win because they have better strategy. The problem is not intractable.”  Now, as Big Boi once quipped, “Go on and marinate on that for a minute.”

* – Rupert Smith, The Utility of Force (New York: Knopf, 2007). Kindle edition.

** – For example, “Why are these browsers not written in Java? Why is that? It’s retarded.” This is the hacker equivalent of Boyd saying, “I’ve never built an airplane before, but I could fuck up and do better than this.” I’m not 100% certain about Java, but I love this comment.


In Mikko Hypponen’s fantastic TED talk, there were two big takeaways.  First, we must be prepared for those times when–not if–hackers will be able to break systems (perhaps even the system) in which we live and work.  This is not simply a matter of low-tech alternatives (although that is not a bad idea) but also of making sure our technology is resilient.  Secondly, those on the side of law and order must find those who are about to become cybercriminals–as Hypponen says, those “with the skills but without the opportunities”–and co-opt them into using their skills for good.

While I could not agree more with these two priorities, I do not share Hypponen’s optimism that they will be addressed.  In terms of resilience, the start of the rebooted Battlestar Galactica in which humanity is annihilated through an enemy exploiting vulnerabilities in complex, hypertechnological military systems seems completely plausible to me.  (The miniseries should be required viewing for RMA kool-aid drinkers.)  In terms of recruiting those on the verge of becoming cybercriminals or, indeed, cyberguerrillas like Anonymous, I see an outcome that is even less hopeful than the Cylons’ onslaught.  We are failing–miserably–at co-opting talent.

There are a lot of reasons for this, but one of the most important requires broaching an uncomfortable subject.  Earlier in the month, Robert Graham of Errata Security made a provocative claim that, while white hat hackers are on the side of the “law,” they are not on the “side of law enforcement” or, as Graham puts it, “order.”  He goes on to explain:

The issue is not “law” but “order”. Police believe their job is not just to enforce the law but also to maintain order. White-hats are disruptive. While they are on the same side of the “law”, they are on opposite sides of “order”.

During the J. Edgar Hoover era, the FBI investigated and wiretapped anybody deemed a troublemaker, from Einstein to Martin Luther King. White-hats aren’t as noble as MLK, but neither are white-hats anarchists who cause disruption for disruption’s sake. White-hats believe that cybersecurity research is like speech: short term disruption for long term benefits to society.

I have personal experience with this. In 2007, I gave a speech at the biggest white-hat conference. It was nothing special, about reverse engineering to find problems in a security product. Two days before the speech, FBI agents showed up at my office and threatened me in order to get me to stop the talk, on (false) grounds of national security. Specifically, the agents threatened to taint my FBI file so that I could never pass a background check, and thus never work for the government again. I respond poorly to threats, so I gave the talk anyway.

I point this out because it so aptly proves my point. I am not on the side of law enforcement, because law enforcement has put me on the other side. One of the requirements (from the above post) to volunteer is to pass a background check — a check that I can no longer pass (in theory). I cannot volunteer to train law enforcement because they perceive me as the enemy.

This is exactly why I am so dire about recruitment. First, there is a distinctly libertarian bent throughout hacker culture, suspicious of government and resistant to impingement on freedoms as far-flung as free speech and fair use of digital media.  This, as Graham argues, puts those inclined to respect the “law” against “order.”  Secondly, abuses do more to create cybercriminals than curtail them.

This got me thinking about David Kilcullen’s idea of “the accidental guerrilla”–that, in a counterinsurgency, even the slightest misapplication of force or failure to understand the complexities of one’s operating environment (culturally or otherwise) may lead to the exponential creation of insurgents.  Misinterpretation of this idea has led many to conclude that less force is always better, but Kilcullen does not suggest this.  Similarly, it is not simply that the U.S. has begun to project force through this crudely defined “cyber” realm but rather that it does so without any understanding of its human terrain.

I am throwing some counterinsurgency buzzwords around too flippantly; thinking about a population-centric cyberwarfare would be a useful lens, but there needs to be a long hard look at past failures in addressing those Americans previously labeled as insurgents–for example, the Civil Rights Movement, as Graham so aptly notes.  There also needs to be a look at the “short-term disruptions” that Graham touches on, within the context of cyberguerrillas as well as counterinsurgency practice at large.

I am not purporting any of this to be new or even my own; I am sure folks like John Robb have been connecting these dots for a long time.  However, I am flagging this as an issue that needs more attention.
