(Photo: wavebreakmedia/shutterstock.com)

It’s 2015, and living half your life online is unavoidable. Our personal computers have become like extensions of our brains and bodies, portals to a world where we can assume new identities, interact with strangers across the globe, and learn things that were once unimaginable.

But computers are also tools. Not only do we store our sensitive information within the nebulous web of hardware and software, but we also search. Want to know a person’s secrets? Check their Google history.

This is why data security plays into our deepest, darkest fears. The risk of exposing our personal information to the rest of the internet causes paranoia. In a post-Snowden era, security controversies dominate headlines.

We spoke with Karsten Nohl, a Berlin-based cryptography specialist, to get a better handle on these issues. Karsten views himself as an ethical hacker who exposes the security flaws of large corporations, including GSM mobile phone carriers and credit card companies, in order to better protect their customers.

And his research is fascinating. From inspiring USB “condoms” to working to help over a billion people in India connect to the internet securely, Karsten is something of a renegade, trying to make the online world a bit safer for us all.


(Photo: Courtesy of Karsten Nohl)

Is there a specific job title you would use to describe yourself?

I work at Security Research Labs in Berlin, and my title is Chief Scientist. We look at everyday technology–past, present, and future–so anything from a 20-year-old payment card to what is now being built, like autonomous cars. We want to understand whether users of these technologies are exposed to unnecessary risks.

Of course, there are risks everywhere. Credit cards can be cloned and cars can be crashed, even with no hackers involved. There is always risk, but we want to understand whether these risks are acceptable, and whether the risk ownership is in the right place. We criticize when a big organization, let’s say a telecommunications company like AT&T, or a credit card company or bank, creates risks, but then has ordinary users [the customers] suffer from the risks.

Whenever companies are holding on to customer information, sometimes even just in transit, there’s a risk. Think about the phone call that we are having right now. It’s going over Microsoft technology, so we’re trusting Microsoft with our private information, and they may or may not protect it very well. If this leaked out, and there are negative consequences, it would be us suffering from it–Microsoft doesn’t actually care whether this call is confidential or not.

So that’s what our research focuses on: finding technology weaknesses that should be publicized much more widely, because many people are affected by them.

Are you ever hired by specific corporations to test their security, or are you just playing more of a watchdog role? 

We are often in the watchdog position first, and predominantly. That’s why we started this whole operation, which began extremely non-commercially. As you can imagine, by calling out big companies, you may get publicity, but certainly not revenue.

As a follow-up, and strictly a follow-up—never as a first step—we then help some companies dig themselves out of the mess. But not everyone is interested in it. If they were aware, the problems wouldn’t exist in the first place.

But there are always a few people in the industry who want to be better than average, who actually care about their customers, or, more likely, have some marketing message that they want to emphasize, trustworthiness or whatever. 

Can you speak to some examples, or is that confidential?    

Lots of our research is around mobile security. First, we broke the encryption of GSM phones–in the U.S., this would be AT&T and T-Mobile, for instance. We then went on to find bugs in SIM cards, which affects pretty much everybody.

Last year, we found some vulnerabilities in roaming technology in 2G and 3G, and in parts, even 4G. So we keep coming up with these results that really affect most people in the world, or could potentially. 

Out of the 800 or so telecommunications companies that exist worldwide, we work with about 30 of them. So it’s a tiny fraction, but then often these companies liaise with other companies, or we help the GSMA [an umbrella organization for the telcos] to spread the word.

Every day we are reaching more companies that actually pay us money. But those who engage us are basically getting an advisory and assurance service. They understand the issues, and want help making them go away. So it’s a very fine line to walk.

If you approach it slightly differently from how we do it, it’s borderline blackmail. First, we create a problem, and then we help them solve it. Right?

But this is where we differ. We help them solve it anyway, through our publicity. If they want to fast track it, that’s when we fly out, and of course that costs the company money. 

(Photo: Ken Easter/shutterstock.com)

Have you ever received volatile or extreme reactions from people? Or are people mostly willing to improve their companies?

I’ll give you an example of somebody else in this space. These guys get results. They’ll say: ‘We can crash your mobile network.’ They’ve shown it across Europe. They can take out a mobile network, and it takes like an hour to reboot.

That would be terrible, right? So they say to the companies: ‘If you want to understand how, you can buy this database. It’ll cost you $100,000, and that’s the only way to get access to that information.’ But this is closer to blackmail than what we do. 

We say: ‘Here’s this vulnerability. And now, you can all go fix it.’

We do send the data to all the mobile networks, usually three months ahead of when we release it, but there’s no guarantee that no criminal is reading it, or that the problems can actually be fixed in three months. It’s always the one security guy who understands the issues; nobody else at the company is interested until a deadline approaches.

Why do you reveal these security holes so publicly?

Everyone is entitled to the information, including the criminals, right? If they want it, free information is free information.

We get criticized for releasing things with too much publicity pressure. At the same time, if we didn’t have the publicity, the impact would be much lower. People do need to look into the void to understand how bad their issues are. 

So the publicity is just a tool you use to leverage your case, in a sense. By going public with this information, you want to give some weight to your argument?

Exactly. In a sense, that’s even reversing the causality. Our primary goal, with all the work we are doing, is to inform about security issues and make them go away. So the publicity is very much a part of that. If there were an option where we could just report a security vulnerability to the GSMA, and the GSMA would work with all 800 networks to fix the problem so the risk goes away–that would be fine too.

But this isn’t how the world works. With no publicity whatsoever, we would maybe convince this group of 30-odd telcos that we work with intensely to fix the problems. With the publicity, we probably reach more on the order of 100 or 200 companies. Still way off target, but much, much more than would otherwise be possible.

I know you have a PhD in computer science, but what is your background? How did you get into this space, and how did you find yourself where you are today?

I started off wanting to be an inventor! Of course as a child you have very romantic ideas as to what it actually means. There’s this one figure in Donald Duck comics – you know the one with the little light bulb sitting over his shoulder?    

Yes, he was one of my heroes.

That guy is who I wanted to be. But it doesn’t really fit any real-world college degree or job description. So the closest thing to that–putting together different parts, and making something technical out of it–was electrical engineering, so that was my undergrad.

But through electrical engineering, you understand that most functionality arises from software. So it’s not so much the plugging together of electrical and mechanical parts that makes things magical–it’s the software actually running on them. That was my actual path to computer science.

(Photo: AngeloDeVal/shutterstock.com)


You’ve been working a lot in India lately. What are you doing there?

India is on the verge of finally becoming an internet-connected country. There are already millions of Indians on the internet, obviously, but it’s still a tiny proportion of the population.

They have 950 million phones–almost a billion phone lines connected–of which only 5 percent are connected to the internet. So there’s a billion people ready to jump, as soon as you give them a smartphone, and a little bit of money to pay for the internet plan. They will be the next big cohort on the internet.

But these people face challenges that we [in the West] didn’t face when first connecting to the internet. The first passwords that I chose in the ‘90s were crap, but nobody broke passwords back then!

I didn’t have to worry about phishing, and I clicked on every email because I didn’t receive that much. There was very little spam, and certainly no phishing. I grew into the internet as the internet became more and more evil. As such, I can now behave in more or less secure ways.

Somebody entering the internet for the first time right now does not have that luxury, especially somebody who’s illiterate, or mostly illiterate, who can barely use a tablet computer.

This is a huge problem, in choosing passwords, for instance. Part of what this venture in India is doing is bringing education to people. So you get onto the internet, to get your basic education, for literacy, to then use the rest of the internet to your advantage.

I’m the person responsible for the security and privacy of all these new internet users. I make sure the scammers and the phishers don’t abuse these internet virgins. 

That’s a really interesting notion. I grew up in Canada and you in Germany, and we’ve basically been on the internet since the mid-1990s. Whereas in India, they don’t have that 20 years of awareness, as you said, of the internet “growing more evil over time.”

The technology is becoming cheap enough, the access is becoming cheap enough, and they are achieving a socio-economic status where they can afford it. And then they are in the same situation.

They’ll end up receiving emails, they’ll click and respond to them, they’ll volunteer personal information to everybody. So India is in this unique situation where the benefits of the internet for those people will be infinite. They don’t all currently have access to good education, and the internet can modernize this. They don’t have very good communication tools within their workforce, and a lot of people earn far too little, or they’re working in the wrong job, so this can all be modernized. 

What are the downsides?

Imagine somebody getting onto the internet with all these promises, and then their bank account gets pillaged, and their personal information gets abused through identity theft, and then all their friends get bombarded with spam. It may well be that these people will decide the internet isn’t for them, foregoing a huge opportunity. 

When I was in Delhi last summer, I noticed that a lot of people who had never owned desktop computers had smartphones.

Most people in India have these very low-end Android smartphones that cost around $30 or $40. Those manufacturers are much worse than the Android vendors we are familiar with, like LG and HTC, which are themselves criticized for security that lags behind Apple’s or BlackBerry’s. In India, the phones are kind of fly-by-night. They have new models all the time, and some never get updated. You use them for one year, and they break anyway.

This, plus illiteracy, plus the lack of experience, with nobody to turn to… it’s just bound for disaster, but hopefully not a disaster that’s large enough to discourage people from using the internet. So now you see why I like to work over there, and how this is a really fulfilling project, being able to contribute to a few hundred million people getting onto the internet, with fewer bruises than would otherwise happen. 

It’s certainly a noble goal.

The culture there is so different from ours. It’s profound, you know, in five or so years there’s going to be a billion new people on the internet. It’s going to change things so much.

I’ll give you one example of how India is different, in both a good and a bad sense, security-wise: A few years ago, the government of India introduced a citizen registry system–the first of its kind. Until now, the government has had very little idea of who is actually living in the country. Through this government database, they have now captured about half the population, and it’s growing quickly. This database includes all ten of your fingerprints, and your iris scans. It’s a full biometrics database.

The government’s building this database, and they make the telcos (that’s how I’m involved) collect the data. In India, you cannot get a phone contract right now without giving your ten fingerprints and two iris scans to some random Vodafone shop on the corner. Your information is then transmitted in maybe secure ways, stored in maybe secure ways. Again, it’s another disaster looming.

Somebody could realistically steal all ten of your fingerprints and both of your iris scans. And they could do this for upwards of 600 million people. In a Western country, this biometrics database would not happen at such a low-quality level, on that scale, or so quickly.

To flip it around, this has the potential to save Indians from password problems in the future. If this database is used carefully, people will be able to authenticate to anything on the Indian internet with just their fingers. And the Indian government itself is vouching for these identities.

They are leapfrogging into a very different internet from ours. They never got used to passwords, so why use them in the first place?

Is that what you are trying to do–make that technology actually useful? 

Well, what’s actually more interesting to us is authenticating transactions. One of the things we’re building is a PayPal competitor–with a modest target of a few hundred million customers. Everything in India is always on a massive scale. If you could get rid of PayPal passwords and instead just have a fingerprint–if you could pay for goods at a store with just your fingerprint–that would simplify people’s lives a lot. It would also have the secondary effect of heading off some of the security problems, like phishing, that we currently encounter. And this government database is a huge enabler.
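[To make the flow concrete, here is a minimal sketch, in Python, of what such a fingerprint-authorized payment could look like. Every endpoint and field name below is hypothetical–the real Indian ID system exposes a different, far more elaborate API–but the shape is the point: the merchant holds no password, and the identity provider answers only yes or no.]

```python
# Hypothetical sketch of a fingerprint-authorized payment. The endpoint,
# field names, and response format are invented for illustration only.
import base64
import requests

AUTH_URL = "https://id-provider.example/v1/authenticate"  # placeholder

def authorize_payment(citizen_id: str, fingerprint: bytes, amount_inr: int) -> bool:
    """Ask the national ID service whether the live fingerprint matches the
    enrolled citizen, and treat that yes/no verdict as the payment credential."""
    payload = {
        "citizen_id": citizen_id,
        # A real deployment would encrypt the template to the provider's
        # public key, never ship it merely base64-encoded like this.
        "fingerprint": base64.b64encode(fingerprint).decode(),
        "context": {"type": "payment", "amount_inr": amount_inr},
    }
    resp = requests.post(AUTH_URL, json=payload, timeout=10)
    # The provider vouches for identity only: a match verdict comes back,
    # never the stored biometric itself.
    return resp.status_code == 200 and resp.json().get("match") is True
```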

If we already have a mandate to collect everybody’s fingerprints, why not use it for the customers’ benefit? The privacy risk is always there. That’s the law, and I can’t argue with that. But if the law is already creating this risk, why not create opportunity in the same step?

Just to get a larger understanding of this concept: in an ideal India, a customer could just swipe their fingerprint at a store, and that would pull up all of their payment information? And the flip side is that, with lax data security, identities could be stolen, right down to someone’s fingerprints?

Let me put it in a more pointed way. There’s a single identity authenticator in India right now, and that’s the government. The government can vouch for everyone’s identity, and certainly everyone who can afford a smartphone is already in that database. So, you don’t need to worry about passwords anymore. It’s like the Facebook password: you type your Facebook password into every single dingy website these days, and it just opens up everything. In India, it’s not Facebook, it’s the government, and it’s not passwords, it’s your fingerprints. So that’s great.

But what I’m criticizing is that they are doing it at such a massive scale, and at such a low quality level. It could very well be that large chunks of this dataset leak out at some point. But then, unlike passwords, your finger cannot be changed. So they are jumping way ahead in the biometrics game, collecting all ten fingers at the same time, and possibly burning all of them. What are you left with once they’ve taken your ten fingers?

I can see how this gets risky.   

And it’s not like they are just starting with one finger. It’s all ten, all stored in the same place, shared with the various companies that they work with. By being so ambitious, they are taking away the security foundation, kind of burning it for everybody, forever. Or at least until you can grow an eleventh finger.

At that point, it’s not just your hardware that’s compromised; it’s your fingers–a part of you. And how can you change that? 

That’s the biggest question, and that’s why we are doing this work!  

(Photo: Robert Hoehne/shutterstock.com)


I read a piece about USB system security that had some funny but unsettling quotes from you, about how you can’t be safe unless you physically block up the USB ports on your computer. Can you explain what the issues are?

That research was structurally different from everything else we have done because, in this case, we aren’t finding a vulnerability and demanding a fix for it, as we usually do. That’s because there is no easy fix.

It’s not one company’s responsibility, or even 800 companies’ responsibility to fix it for their customers. The USB issue is more of a systemic risk.

The risk is this: Everything you plug into a USB port can masquerade as any number of devices. In the good old days, you plugged a printer into the parallel port. You had to install the specific printer driver. As a user, you were in the loop as to what was happening.

But USB got rid of all that extra work. You plug anything into a USB port–be it a storage device, a keyboard, or a printer–and it works right out of the box. But in some sense, you lose control over what attaches to your computer. You plug something in, and just by its physical shape, you think, “this is a storage device,” or “this is a printer.”

What takes people by surprise is that any of these devices can pretend to also be any of the other devices. And that’s not necessarily a vulnerability, but it’s a big risk. And it’s a risk that we all accepted when we started using this plug-and-play technology.

There’s no human in the loop, no decision required. But the risks have been under-emphasized. People are not really aware of how much they are actually trusting every single USB device that they plug into their computer.
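[To see how little the host verifies, here is a minimal sketch using the pyusb library (assuming it and a libusb backend are installed) that prints what each attached device merely claims to be. The operating system binds drivers based on these self-declared class codes alone.]

```python
# Minimal sketch with pyusb (pip install pyusb): list the interface classes
# each attached USB device declares. The host trusts these self-reported
# codes; nothing stops a "thumb drive" from also declaring a keyboard.
import usb.core

# A few well-known interface class codes from the USB specification.
CLASS_NAMES = {
    0x03: "HID (keyboard/mouse)",
    0x07: "printer",
    0x08: "mass storage",
    0x09: "hub",
    0x0E: "video (webcam)",
}

for dev in usb.core.find(find_all=True):
    print(f"device {dev.idVendor:04x}:{dev.idProduct:04x}")
    for cfg in dev:          # each configuration...
        for intf in cfg:     # ...exposes one or more interfaces
            name = CLASS_NAMES.get(intf.bInterfaceClass,
                                   f"class 0x{intf.bInterfaceClass:02x}")
            # The OS loads a driver based on this claim alone.
            print(f"  declares: {name}")
```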

There’s a product that came out because of our research, called a USB Condom. So you have this great metaphor of sticking one thing into another thing. And we don’t do this with unfamiliar people’s parts, right? There must be a condom in between [laughs].

What this USB Condom achieves is that you can charge a phone without transferring any viruses or any data. The USB Condom [now called SyncStop] wires up only the charging lines, not the data transfer lines.
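[The wiring behind this is simple: USB 2.0 carries power and data on separate pins, and a charge-only adapter passes through only the power pair. An illustrative sketch:]

```python
# USB 2.0 Type-A pinout, and what a charge-only adapter like the SyncStop
# passes through. (Illustrative; some such adapters also short D+ and D-
# together on the device side so phones recognize a dedicated charger and
# draw more current.)
PINS = {
    1: ("VBUS, +5 V", "connected"),  # power: passed through
    2: ("D-",         "cut"),        # data line: blocked
    3: ("D+",         "cut"),        # data line: blocked
    4: ("GND",        "connected"),  # ground return: passed through
}

for num, (name, state) in PINS.items():
    print(f"pin {num}: {name:11s} {state}")
```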

There’s a whole STD parallel here, for sure.

We want to warn people about risks they may not be taking consciously, and the risk here is that when you plug any USB device into your computer, you are at risk of that device taking over your entire computer. And you are at an additional risk that it does so in a way you can never recover from. A virus that is aware of USB’s possibilities doesn’t necessarily have to install itself onto the hard disk.

It can just as well install itself into other USB devices, let’s say, the webcam, which is usually connected over USB. So now you have a virus living in your webcam, and you reinstall your operating system, but the virus is still there.

To make matters worse, there’s no way to know what’s running on your webcam–it’s a complete black box. You will end up in a somewhat paranoid state. You know your computer was infected once, but you’ll never know whether you can recover from it.

So the only way to recover from that would be to get totally new hardware?    

Well, you’d need a new computer entirely, but every single USB device that ever attached to your old computer could also be infected, so you’d want to get those replaced too, including your keyboard, your mouse, your thumb drives, and your printer. But then all the computers that those devices were ever plugged into could be affected as well, like your TV, or even your girlfriend’s computer. So the paranoia is… bottomless.

It’s like mono, or even chlamydia. But there’s no test for it. So you know that your girlfriend’s ex-ex was showing symptoms of chlamydia, and there’s a slight chance that it will be transmitted to you, but there’s no way of knowing until you see the symptoms. It really puts people into a paranoid state. This USB research resonated very strongly with a lot of people, and not just in the corporate world.

The way I heard it summarized best is: USB is a hole in your computer. You can read this in two ways. It’s a physical hole in the enclosure of your computer, but it’s also a gaping security hole.

Are these attacks becoming more common?

When we did this research, there was very little evidence that these attacks were actually happening. This goes for a lot of our work. We venture into unknown territory and see what we find. And oftentimes we find possibilities that criminals haven’t really used yet.

In the case of BadUSB, though, the plot is thickening, with state-sponsored adversaries doing this a lot. Out of the NSA papers–you know, the stolen papers–some evidence came out that they actually were using [USB hacks] offensively.

But then, more interestingly, even pre-dating our research from last year, there was a large government agency that got infected by a virus, and in response, they didn’t just re-image their computers [reformat the hard drives and reinstall everything]. They destroyed all their computers, including mice, keyboards, everything!

Wow.

They smashed it. They had to get new keyboards, because they were worried the virus had spread everywhere. You’d think that’s going a little bit overboard, right?

But they didn’t even want to explain why they needed to get new keyboards, even though everybody was making fun of them. Probably if you Google that, you’ll get a lot of cynical remarks about burning up taxpayers’ money.

But let’s say you are the NSA–then you know what’s going on. And if you are part of the same government, the NSA advises you that it’s probably a good idea to also throw away the keyboard if the Russians have broken into your computer.

For you and me, and probably most of your readers, it still seems like science-fiction paranoia. But it’s certainly not science fiction as in time travel. It’s already here. It’s just a matter of who your adversary is.

Adding to that thought, malicious USB attacks aren’t actually that difficult to pull off anymore. You don’t need a $10 billion per year budget to do it, like the NSA has. The source code is on the internet now, published by some U.S. researcher.

With just a few weeks of preparation, you can pull off a beautifully bad USB attack. If you do it the right way, by using a normal virus, for something that’s on a million computers already, you can make that virus infect all the USB devices connected to it, and make those infect all the computers that they are connected to. This could spread very widely, and at relatively little cost.

And like you said with the STD metaphor, it can almost take on a life of its own, even beyond the scope of what the original planners thought of, right?

Absolutely, yeah. 

I love these human metaphors with the computer network. It’s so fascinating.

And it seems like a lot of people do. This USB research we did was pretty simple, but it got huge attention. Everybody felt affected by it. And I think everybody was reminded of using USB, and maybe having a slightly bad feeling, and then running a virus scan on a new USB stick and finding that there’s no virus.

Then being told that you were doing it wrong all along, since it’s not necessarily detectable by a virus scanner. The idea that there may be viruses actually living inside the hardware, I think, resonated with people very strongly.

And, I’m sure, also terrified them.    

It’s everywhere, right? When you have one person with Ebola running around, or in an airport, or a crowded mall in the U.S., who knows what’s next? Thousands of people could die, or maybe nothing will happen. Who knows, right? But the possibility is just so immense. 

(Photo: Security Research Labs/Courtesy of Karsten Nohl)