Author Archive

Schneier on Security: Friday Squid Blogging: Squid Beard

This post was syndicated from Schneier on Security and was written by schneier.

Impressive.

As usual, you can also use this squid post to talk about the security stories in the news that I haven’t covered.

Schneier on Security: Lessons from the Sony Hack

Earlier this month, a mysterious group that calls itself Guardians of Peace hacked into Sony Pictures Entertainment’s computer systems and began revealing many of the Hollywood studio’s best-kept secrets, from details about unreleased movies to embarrassing emails (notably some racist notes from Sony bigwigs about President Barack Obama’s presumed movie-watching preferences) to the personnel data of employees, including salaries and performance reviews. The Federal Bureau of Investigation now says it has evidence that North Korea was behind the attack, and Sony Pictures pulled its planned release of “The Interview,” a satire targeting that country’s dictator, after the hackers made some ridiculous threats about terrorist violence.

Your reaction to the massive hacking of such a prominent company will depend on whether you’re fluent in information-technology security. If you’re not, you’re probably wondering how in the world this could happen. If you are, you’re aware that this could happen to any company (though it is still amazing that Sony made it so easy).

To understand any given episode of hacking, you need to understand who your adversary is. I’ve spent decades dealing with Internet hackers (as I do now at my current firm), and I’ve learned to separate opportunistic attacks from targeted ones.

You can characterize attackers along two axes: skill and focus. Most attacks are low-skill and low-focus — people using common hacking tools against thousands of networks world-wide. These low-end attacks include sending spam out to millions of email addresses, hoping that someone will fall for it and click on a poisoned link. I think of them as the background radiation of the Internet.

High-skill, low-focus attacks are more serious. These include the more sophisticated attacks using newly discovered “zero-day” vulnerabilities in software, systems and networks. This is the sort of attack that affected Target, J.P. Morgan Chase and most of the other commercial networks that you’ve heard about in the past year or so.

But even scarier are the high-skill, high-focus attacks — the type that hit Sony. This includes sophisticated attacks seemingly run by national intelligence agencies, using such spying tools as Regin and Flame, which many in the IT world suspect were created by the U.S.; Turla, a piece of malware that many blame on the Russian government; and a huge snooping effort called GhostNet, which spied on the Dalai Lama and Asian governments, leading many of my colleagues to blame China. (We’re mostly guessing about the origins of these attacks; governments refuse to comment on such issues.) China has also been accused of trying to hack into the New York Times in 2010, and in May, Attorney General Eric Holder announced the indictment of five Chinese military officials for cyberattacks against U.S. corporations.

This category also includes private actors, including the hacker group known as Anonymous, which mounted a Sony-style attack against the Internet-security firm HBGary Federal, and the unknown hackers who stole racy celebrity photos from Apple’s iCloud and posted them. If you’ve heard the IT-security buzz phrase “advanced persistent threat,” this is it.

There is a key difference among these kinds of hacking. In the first two categories, the attacker is an opportunist. The hackers who penetrated Home Depot’s networks didn’t seem to care much about Home Depot; they just wanted a large database of credit-card numbers. Any large retailer would do.

But a skilled, determined attacker wants to attack a specific victim. The reasons may be political: to hurt a government or leader enmeshed in a geopolitical battle. Or ethical: to punish an industry that the hacker abhors, like big oil or big pharma. Or maybe the victim is just a company that hackers love to hate. (Sony falls into this category: It has been infuriating hackers since 2005, when the company put malicious software on its CDs in a failed attempt to prevent copying.)

Low-focus attacks are easier to defend against: If Home Depot’s systems had been better protected, the hackers would have just moved on to an easier target. With attackers who are highly skilled and highly focused, however, what matters is whether a targeted company’s security is superior to the attacker’s skills, not just to the security measures of other companies. Often, it isn’t. We’re much better at such relative security than we are at absolute security.

That is why security experts aren’t surprised by the Sony story. We know people who do penetration testing for a living — real, no-holds-barred attacks that mimic a full-on assault by a dogged, expert attacker — and we know that the expert always gets in. Against a sufficiently skilled, funded and motivated attacker, all networks are vulnerable. But good security makes many kinds of attack harder, costlier and riskier. Against attackers who aren’t sufficiently skilled, good security may protect you completely.

It is hard to put a dollar value on security that is strong enough to assure you that your embarrassing emails and personnel information won’t end up posted online somewhere, but Sony clearly failed here. Its security turned out to be subpar. It didn’t have to leave so much information exposed, and it didn’t have to be so slow to detect the breach, giving the attackers free rein to wander about and take so much stuff.

For those worried that what happened to Sony could happen to you, I have two pieces of advice. The first is for organizations: take this stuff seriously. Security is a combination of protection, detection and response. You need prevention to defend against low-focus attacks and to make targeted attacks harder. You need detection to spot the attackers who inevitably get through. And you need response to minimize the damage, restore security and manage the fallout.

The time to start is before the attack hits: Sony would have fared much better if its executives simply hadn’t made racist jokes about Mr. Obama or insulted its stars — or if its response systems had been agile enough to kick the hackers out before they grabbed everything.

My second piece of advice is for individuals. The worst invasion of privacy from the Sony hack didn’t happen to the executives or the stars; it happened to the blameless random employees who were just using their company’s email system. Because of that, they’ve had their most personal conversations — gossip, medical conditions, love lives — exposed. The press may not have divulged this information, but their friends and relatives peeked at it. Hundreds of personal tragedies must be unfolding right now.

This could be any of us. We have no choice but to entrust companies with our intimate conversations: on email, on Facebook, by text and so on. We have no choice but to entrust the retailers that we use with our financial details. And we have little choice but to use cloud services such as iCloud and Google Docs.

So be smart: Understand the risks. Know that your data are vulnerable. Opt out when you can. And agitate for government intervention to ensure that organizations protect your data as well as you would. Like many areas of our hyper-technical world, this isn’t something markets can fix.

This essay previously appeared on the Wall Street Journal CIO Journal.

Schneier on Security: SS7 Vulnerabilities

There are security vulnerabilities in the phone-call routing protocol called SS7.

The flaws discovered by the German researchers are actually functions built into SS7 for other purposes — such as keeping calls connected as users speed down highways, switching from cell tower to cell tower — that hackers can repurpose for surveillance because of the lax security on the network.

Those skilled at the myriad functions built into SS7 can locate callers anywhere in the world, listen to calls as they happen or record hundreds of encrypted calls and texts at a time for later decryption. There also is potential to defraud users and cellular carriers by using SS7 functions, the researchers say.

Some details:

The German researchers found two distinct ways to eavesdrop on calls using SS7 technology. In the first, commands sent over SS7 could be used to hijack a cell phone’s “forwarding” function — a service offered by many carriers. Hackers would redirect calls to themselves, for listening or recording, and then onward to the intended recipient of a call. Once that system was in place, the hackers could eavesdrop on all incoming and outgoing calls indefinitely, from anywhere in the world.

The second technique requires physical proximity but could be deployed on a much wider scale. Hackers would use radio antennas to collect all the calls and texts passing through the airwaves in an area. For calls or texts transmitted using strong encryption, such as is commonly used for advanced 3G connections, hackers could request through SS7 that each caller’s carrier release a temporary encryption key to unlock the communication after it has been recorded.
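The first technique above can be pictured with a toy routing model. Nothing below is real SS7 (the `Network` class, method names, and phone labels are all invented for illustration; actual networks exchange trusted signaling messages between carrier switches), but it shows why an unauthenticated forwarding update is all an eavesdropper needs:

```python
# Toy model of the call-forwarding interception described above. This is
# not SS7; the dict-based "network" just illustrates the logic: an
# attacker who can issue forwarding updates sits invisibly in the path.

class Network:
    def __init__(self):
        self.forwarding = {}  # number -> where its incoming calls go
        self.log = []

    def set_forwarding(self, number, destination):
        # In the real attack this arrives as a signaling command that the
        # network honors without verifying who sent it.
        self.forwarding[number] = destination

    def route_call(self, caller, callee):
        hop = self.forwarding.get(callee, callee)
        self.log.append((caller, hop))
        return hop

net = Network()

# The attacker redirects the victim's incoming calls to a recorder, then
# (not modeled here) relays each call onward so the victim notices nothing.
net.set_forwarding("victim", "attacker-recorder")

print(net.route_call("alice", "victim"))  # lands at the recorder
print(net.route_call("bob", "carol"))     # unforwarded calls are untouched
```

The point of the sketch is the trust failure: the routing layer accepts the forwarding change from anyone who can speak the protocol, which is exactly the "lax security on the network" the researchers describe.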

We’ll learn more when the researchers present their results.

Schneier on Security: ISIS Cyberattacks

Citizen Lab has a new report on a probable ISIS-launched cyberattack:

This report describes a malware attack with circumstantial links to the Islamic State in Iraq and Syria. In the interest of highlighting a developing threat, this post analyzes the attack and provides a list of Indicators of Compromise.

A Syrian citizen media group critical of Islamic State of Iraq and Syria (ISIS) was recently targeted in a customized digital attack designed to unmask their location. The Syrian group, Raqqah is being Slaughtered Silently (RSS), focuses its advocacy on documenting human rights abuses by ISIS elements occupying the city of Ar-Raqah. In response, ISIS forces in the city have reportedly targeted the group with house raids, kidnappings, and an alleged assassination. The group also faces online threats from ISIS and its supporters, including taunts that ISIS is spying on the group.

Though we are unable to conclusively attribute the attack to ISIS or its supporters, a link to ISIS is plausible. The malware used in the attack differs substantially from campaigns linked to the Syrian regime, and the attack is focused against a group that is an active target of ISIS forces.

News article.

Schneier on Security: The Limits of Police Subterfuge

“The next time you call for assistance because the Internet service in your home is not working, the ‘technician’ who comes to your door may actually be an undercover government agent. He will have secretly disconnected the service, knowing that you will naturally call for help and — when he shows up at your door, impersonating a technician — let him in. He will walk through each room of your house, claiming to diagnose the problem. Actually, he will be videotaping everything (and everyone) inside. He will have no reason to suspect you have broken the law, much less probable cause to obtain a search warrant. But that makes no difference, because by letting him in, you will have ‘consented’ to an intrusive search of your home.”

This chilling scenario is the first paragraph of a motion to suppress evidence gathered by the police in exactly this manner, from a hotel room. Unbelievably, this isn’t a story from some totalitarian government on the other side of an ocean. This happened in the United States, and it was done by the FBI. Eventually — I’m sure there will be appeals — higher U.S. courts will decide whether this sort of practice is legal. If it is, the country will slide even further into a society where the police have even more unchecked power than they already possess.

The facts are these. In June, two wealthy Macau residents stayed at Caesar’s Palace in Las Vegas. The hotel suspected that they were running an illegal gambling operation out of their room. It enlisted the police and the FBI, but could not provide enough evidence for them to get a warrant. So instead they repeatedly cut the guests’ Internet connection. When the guests complained to the hotel, FBI agents wearing hidden cameras and recorders pretended to be Internet repair technicians and convinced the guests to let them in. They filmed and recorded everything under the pretense of fixing the Internet, and then used the information collected from that to get an actual search warrant. To make matters even worse, they lied to the judge about how they got their evidence.

The FBI claims that its actions are no different from any conventional sting operation. For example, an undercover policeman can legitimately look around and report on what he sees when he is invited into a suspect’s home under the pretext of trying to buy drugs. But there are two very important differences: one of consent, and the other of trust. The former is easier to see in this specific instance, but the latter is much more important for society.

You can’t give consent to something you don’t know and understand. The FBI agents did not enter the hotel room under the pretext of making an illegal bet. They entered under a false pretext, and relied on that deception for consent to their true mission. That makes things different. The occupants of the hotel room didn’t realize who they were giving access to, and they didn’t know the agents’ intentions. The FBI knew this would be a problem. According to the New York Times, “a federal prosecutor had initially warned the agents not to use trickery because of the ‘consent issue.’ In fact, a previous ruse by agents had failed when a person in one of the rooms refused to let them in.” Claiming that a person granting an Internet technician access is consenting to a police search makes no sense, and is no different than one of those “click through” Internet license agreements that you didn’t read: saying one thing while meaning another. It’s not consent in any meaningful sense of the term.

Far more important is the matter of trust. Trust is central to how a society functions. No one, not even the most hardened survivalists who live in backwoods log cabins, can do everything by themselves. Humans need help from each other, and most of us need a lot of help from each other. And that requires trust. Many Americans’ homes, for example, are filled with systems that require outside technical expertise when they break: phone, cable, Internet, power, heat, water. Citizens need to trust each other enough to give them access to their hotel rooms, their homes, their cars, their person. Americans simply can’t live any other way.

It cannot be that every time someone allows one of those technicians into their home, they are consenting to a police search. Again from the motion to suppress: “Our lives cannot be private — and our personal relationships intimate — if each physical connection that links our homes to the outside world doubles as a ready-made excuse for the government to conduct a secret, suspicionless, warrantless search.” The resultant breakdown in trust would be catastrophic. People would not be able to get the assistance they need. Legitimate servicemen would find it much harder to do their job. Everyone would suffer.

It all comes back to the warrant. Through warrants, Americans legitimately grant the police an incredible level of access into our personal lives. This is a reasonable choice because the police need this access in order to solve crimes. But to protect ordinary citizens, the law requires the police to go before a neutral third party and convince them that they have a legitimate reason to demand that access. That neutral third party, a judge, then issues the warrant when he or she is convinced. This check on the police’s power is for Americans’ security, and is an important part of the Constitution.

In recent years, the FBI has been pushing the boundaries of its warrantless investigative powers in disturbing and dangerous ways. It collects phone-call records of millions of innocent people. It uses hacking tools against unknown individuals without warrants. It impersonates legitimate news sites. If the lower court sanctions this particular FBI subterfuge, the matter needs to be taken up — and reversed — by the Supreme Court.

This essay previously appeared in The Atlantic.

Schneier on Security: How the FBI Unmasked Tor Users

Kevin Poulsen has a good article up on Wired about how the FBI used a Metasploit variant to identify Tor users.

Schneier on Security: Fake Cell Towers Found in Norway

In yet another example of what happens when you build an insecure communications infrastructure, fake cell phone towers have been found in Oslo. No one knows who has been using them to eavesdrop.

This is happening in the US, too. Remember the rule: we’re all using the same infrastructure, so we can either keep it insecure so we — and everyone else — can use it to spy, or we can secure it so that no one can use it to spy.

Schneier on Security: Understanding Zero-Knowledge Proofs

Matthew Green has a good primer.

Schneier on Security: Over 700 Million People Taking Steps to Avoid NSA Surveillance

There’s a new international survey on Internet security and trust, of “23,376 Internet users in 24 countries,” including “Australia, Brazil, Canada, China, Egypt, France, Germany, Great Britain, Hong Kong, India, Indonesia, Italy, Japan, Kenya, Mexico, Nigeria, Pakistan, Poland, South Africa, South Korea, Sweden, Tunisia, Turkey and the United States.” Amongst the findings, 60% of Internet users have heard of Edward Snowden, and 39% of those “have taken steps to protect their online privacy and security as a result of his revelations.”

The press is mostly spinning this as evidence that Snowden has not had an effect: “merely 39%,” “only 39%,” and so on. (Note that these articles are completely misunderstanding the data. It’s not 39% of people who are taking steps to protect their privacy post-Snowden, it’s 39% of the 60% of Internet users — which is not everybody — who have heard of him. So it’s much less than 39%.)

Even so, I disagree with the “Edward Snowden Revelations Not Having Much Impact on Internet Users” headline. He’s having an enormous impact. I ran the actual numbers country by country, combining data on Internet penetration with data from this survey. Multiplying everything out, I calculate that 706 million people have changed their behavior on the Internet because of what the NSA and GCHQ are doing. (For example, 17% of Indonesians use the Internet, 64% of them have heard of Snowden and 62% of them have taken steps to protect their privacy, which equals 17 million people out of its total 250-million population.)

Note that the countries in this survey only cover 4.7 billion out of a total 7 billion world population. Taking the conservative estimates that 20% of the remaining population uses the Internet, 40% of them have heard of Snowden, and 25% of those have done something about it, that’s an additional 46 million people around the world.
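The arithmetic behind these estimates is easy to sanity-check. A quick script using only the figures quoted above (reproducing the full 706-million number would require the per-country survey data):

```python
# Sanity-checking the back-of-envelope numbers above. All percentages and
# population figures are the ones quoted in the text.

# Indonesia example: 17% Internet penetration, 64% of users heard of
# Snowden, 62% of those took steps, out of a 250-million population.
indonesia = 250_000_000 * 0.17 * 0.64 * 0.62
print(round(indonesia / 1e6))  # ~17 million

# Countries outside the survey: 7 billion world population minus the
# 4.7 billion covered, with the conservative assumptions from the text
# (20% online, 40% heard of Snowden, 25% of those did something).
rest = (7_000_000_000 - 4_700_000_000) * 0.20 * 0.40 * 0.25
print(round(rest / 1e6))  # ~46 million
```

The same chain-of-percentages structure, applied country by country with each country’s own penetration and survey figures, is what produces the 706-million total.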

It’s probably true that most of those people took steps that didn’t make any appreciable difference against an NSA level of surveillance, and probably not even against the even more pervasive corporate variety of surveillance. It’s probably even true that some of those people didn’t take steps at all, and just wish they did or wish they knew what to do. But it is absolutely extraordinary that 750 million people are disturbed enough about their online privacy that they will represent to a survey taker that they did something about it.

Can you name another news story that has caused over ten percent of the world’s population to change their behavior in the past year? Cory Doctorow is right: we have reached “peak indifference to surveillance.” From now on, this issue is going to matter more and more, and policymakers around the world need to start paying attention.

Related: a recent Pew Research Internet Project survey on Americans’ perceptions of privacy, commented on by Ben Wittes.

Schneier on Security: Friday Squid Blogging: Recreational Squid Fishing in Washington State

There is year-round recreational squid fishing from the Strait of Juan de Fuca to south Puget Sound.

A nighttime sport that requires simple, inexpensive fishing tackle, squid fishing — or jigging — typically takes place on the many piers and docks throughout the Puget Sound region.

As usual, you can also use this squid post to talk about the security stories in the news that I haven’t covered.

Schneier on Security: Incident Response Webinar on Thursday

On 12/18 I’ll be part of a Co3 webinar where we examine incident-response trends of 2014 and look ahead to 2015. I tend not to do these, but this is an exception. Please sign up if you’re interested.

Schneier on Security: Who Might Control Your Telephone Metadata

Remember last winter when President Obama called for an end to the NSA’s telephone metadata collection program? He didn’t actually call for an end to it; he just wanted it moved from an NSA database to some commercial database. (I still think this is a bad idea, and that having the companies store it is worse than having the government store it.)

Anyway, the Director of National Intelligence solicited companies that might be interested in, and capable of, storing all this data. Here’s the list of companies that expressed interest. Note that Oracle is on the list — the only company I’ve heard of. Also note that many of these companies are just intermediaries that register for all sorts of things.

Schneier on Security: Comments on the Sony Hack

I don’t have a lot to say about the Sony hack, which seems to still be ongoing. I want to highlight a few points, though.

  1. At this point, the attacks seem to be a few hackers and not the North Korean government. (My guess is that it’s not an insider, either.) That we live in the world where we aren’t sure if any given cyberattack is the work of a foreign government or a couple of guys should be scary to us all.

  2. Sony is a company that hackers have loved to hate for years now. (Remember their rootkit from 2005?) We’ve learned previously that putting yourself in this position can be disastrous. (Remember HBGary.) We’re learning that again.
  3. I don’t see how Sony launching a DDoS attack against the attackers is going to help at all.
  4. The most sensitive information that’s being leaked as a result of this attack isn’t the unreleased movies, the executive emails, or the celebrity gossip. It’s the minutiae from random employees:

    The most painful stuff in the Sony cache is a doctor shopping for Ritalin. It’s an email about trying to get pregnant. It’s shit-talking coworkers behind their backs, and people’s credit card log-ins. It’s literally thousands of Social Security numbers laid bare. It’s even the harmless, mundane, trivial stuff that makes up any day’s email load that suddenly feels ugly and raw out in the open, a digital Babadook brought to life by a scorched-earth cyberattack.

    These people didn’t have anything to hide. They aren’t public figures. Their details aren’t going to be news anywhere in the world. But their privacy has been violated, and there are literally thousands of personal tragedies unfolding right now as these people deal with friends and relatives who have searched and read this stuff.

    These are people who did nothing wrong. They didn’t click on phishing links, or use dumb passwords (or even if they did, they didn’t cause this). They just showed up. They sent the same banal workplace emails you send every day, some personal, some not, some thoughtful, some dumb. Even if they didn’t have the expectation of full privacy, at most they may have assumed that an IT creeper might flip through their inbox, or that it was being crunched in an NSA server somewhere. For better or worse, we’ve become inured to small, anonymous violations. What happened to Sony Pictures employees, though, is public. And it is total.

    Gizmodo got this 100% correct. And this is why privacy is so important for everyone.

I’m sure there’ll be more information as this continues to unfold.

Schneier on Security: Not Enough CISOs to Go Around

This article is reporting that the demand for Chief Information Security Officers far exceeds supply:

Sony and every other company that realizes the need for a strong, senior-level security officer are scrambling to find talent, said Kris Lovejoy, general manager of IBM’s security service and former IBM chief security officer.

CISOs are “almost impossible to find these days,” she said. “It’s a bit like musical chairs; there’s a finite number of CISOs and they tend to go from job to job in similar industries.”

I’m not surprised, really. This is a tough job: never enough budget, and you’re the one blamed when the inevitable attacks occur. And it’s a tough skill set: enough technical ability to understand cybersecurity, and sufficient management skill to navigate senior management. I would never want a job like that in a million years.

Here’s a tip: if you want to make your CISO happy, here’s her holiday wish list.

“My first wish is for companies to thoroughly test software releases before release to customers….”

Can we get that gift wrapped?

Schneier on Security: Effects of Terrorism Fears

Interesting article: “How terrorism fears are transforming America’s public space.”

I am reminded of my essay from four years ago: “Close the Washington Monument.”

Schneier on Security: NSA Hacking of Cell Phone Networks

The Intercept has published an article — based on the Snowden documents — about AURORAGOLD, an NSA surveillance operation against cell phone network operators and standards bodies worldwide. This is not a typical NSA surveillance operation where agents identify the bad guys and spy on them. This is an operation where the NSA spies on people designing and building a general communications infrastructure, looking for weaknesses and vulnerabilities that will allow it to spy on the bad guys at some later date.

In that way, AURORAGOLD is similar to the NSA’s program to hack sysadmins around the world, just in case that access will be useful at some later date; and to the GCHQ’s hacking of the Belgian phone company Belgacom. In both cases, the NSA/GCHQ is finding general vulnerabilities in systems that are protecting many innocent people, and exploiting them instead of fixing them.

It is unclear from the documents exactly what cell phone vulnerabilities the NSA is exploiting. Remember that cell phone calls go through the regular phone network, and are as vulnerable there as non-cell calls. (GSM encryption only protects calls from the handset to the tower, not within the phone operators’ networks.) For the NSA to target cell phone networks particularly rather than phone networks in general means that it is interested in information specific to the cell phone network: location is the most obvious. We already know that the NSA can eavesdrop on most of the world’s cell phone networks, and that it tracks location data.

I’m not sure what to make of the NSA’s cryptanalysis efforts against GSM encryption. The GSM cellular network uses three different encryption schemes: A5/1, which has been badly broken in the academic world for over a decade (a previous Snowden document said the NSA could process A5/1 in real time — and so can everyone else); A5/2, which was designed deliberately weak and is even more easily broken; and A5/3 (aka KASUMI), which is generally believed to be secure. The academic world also knows of additional attacks against all A5 ciphers as they are used in the GSM system. Almost certainly the NSA has operationalized all of these attacks, and probably others as well. Two documents published by the Intercept mention attacks against A5/3 — OPULENT PUP and WOLFRAMITE — although there is no detail, and thus no way to know how much of these attacks consist of cryptanalysis of A5/3, attacks against the GSM protocols, or attacks based on exfiltrating keys. For example, GSM carriers know their users’ A5 keys and store them in databases. It would be much easier for the NSA’s TAO group to steal those keys and use them for real-time decryption than it would be to apply mathematics and computing resources against the encrypted traffic.
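For context on how weak A5/1 is: it is a stream cipher built from three short linear feedback shift registers with irregular "majority" clocking, and its entire internal state is only 19 + 22 + 23 = 64 bits. Below is a rough educational sketch of the keystream generator, using the published register lengths, tap positions, and clocking bits; the key and frame-number loading phase is omitted, and the starting states are arbitrary values chosen just to produce output:

```python
# Educational sketch of the A5/1 keystream generator (GSM).
# Register lengths, feedback taps, and clocking-bit positions follow the
# published description of the cipher; the key/frame setup is omitted and
# the initial states here are arbitrary nonzero values.

def majority(a, b, c):
    return (a & b) | (a & c) | (b & c)

class LFSR:
    def __init__(self, length, taps, clock_bit, state):
        self.length, self.taps, self.clock_bit = length, taps, clock_bit
        self.state = state & ((1 << length) - 1)

    def clocking_bit(self):
        return (self.state >> self.clock_bit) & 1

    def output_bit(self):
        return (self.state >> (self.length - 1)) & 1  # most significant bit

    def step(self):
        fb = 0
        for t in self.taps:          # feedback = XOR of the tap bits
            fb ^= (self.state >> t) & 1
        self.state = ((self.state << 1) | fb) & ((1 << self.length) - 1)

def keystream(r1, r2, r3, n):
    bits = []
    for _ in range(n):
        m = majority(r1.clocking_bit(), r2.clocking_bit(), r3.clocking_bit())
        for r in (r1, r2, r3):
            if r.clocking_bit() == m:  # "stop/go" majority clocking
                r.step()
        bits.append(r1.output_bit() ^ r2.output_bit() ^ r3.output_bit())
    return bits

r1 = LFSR(19, (13, 16, 17, 18), 8, 0x5A5A5)
r2 = LFSR(22, (20, 21), 10, 0x33333)
r3 = LFSR(23, (7, 20, 21, 22), 10, 0x49249)
print(keystream(r1, r2, r3, 16))
```

The irregular clocking was meant to add nonlinearity, but with only 64 bits of total state the cipher falls to time-memory trade-off attacks with precomputed tables, which is why real-time decryption is within reach of far more actors than the NSA.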

The Intercept points to these documents as an example of the NSA deliberately introducing flaws into global communications standards, but I don’t really see the evidence here. Yes, the NSA is spying on industry organizations like the GSM Association in an effort to learn about new GSM standards as early as possible, but I don’t see evidence of it influencing those standards. The one relevant sentence is in a presentation about the “SIGINT Planning Cycle”: “How do we introduce vulnerabilities where they do not yet exist?” That’s pretty damning in general, but it feels more aspirational than a statement of practical intent. Already there are lots of pressures on the GSM Association to allow for “lawful surveillance” on users from countries around the world. That surveillance is generally with the assistance of the cell phone companies, which is why hacking them is such a priority. My guess is that the NSA just sits back and lets other countries weaken cell phone standards, then exploits those weaknesses.

Other countries do as well. There are many vulnerabilities in the cell phone system, and it’s folly to believe that only the NSA and GCHQ exploit them. And countries that can’t afford their own research and development organizations can buy the capability from cyberweapons arms manufacturers. And remember that technology flows downhill: today’s top-secret NSA programs become tomorrow’s PhD theses and the next day’s hacker tools.

For example, the US company Verint sells cell phone tracking systems to both corporations and governments worldwide. The company’s website says that it’s “a global leader in Actionable Intelligence solutions for customer engagement optimization, security intelligence, and fraud, risk and compliance,” with clients in “more than 10,000 organizations in over 180 countries.” The UK company Cobham sells a system that allows someone to send a “blind” call to a phone — one that doesn’t ring, and isn’t detectable. The blind call forces the phone to transmit on a certain frequency, allowing the sender to track that phone to within one meter. The company boasts government customers in Algeria, Brunei, Ghana, Pakistan, Saudi Arabia, Singapore, and the United States. Defentek, a company mysteriously registered in Panama, sells a system that can “locate and track any phone number in the world…undetected and unknown by the network, carrier, or the target.” It’s not an idle boast; telecommunications researcher Tobias Engel demonstrated the same capability at a hacker conference in 2008. Criminals can purchase illicit products to let them do the same today.

As I keep saying, we no longer live in a world where technology allows us to separate communications we want to protect from communications we want to exploit. Assume that anything we learn about what the NSA does today is a preview of what cybercriminals are going to do in six months to two years. That the NSA chooses to exploit the vulnerabilities it finds, rather than fix them, puts us all at risk.

This essay has previously appeared on the Lawfare blog.

Schneier on Security: Rapiscan Full-Body Scanner for Sale

This post was syndicated from: Schneier on Security and was written by: schneier. Original post: at Schneier on Security

Government surplus. Only $8,000 on eBay. Note that this device has been analyzed before.

Schneier on Security: Regin Malware

This post was syndicated from: Schneier on Security and was written by: schneier. Original post: at Schneier on Security

Last week, we learned about a striking piece of malware called Regin that has been infecting computer networks worldwide since 2008. It’s more sophisticated than any known criminal malware, and everyone believes a government is behind it. No country has taken credit for Regin, but there’s substantial evidence that it was built and operated by the United States.

This isn’t the first government malware discovered. GhostNet is believed to be Chinese. Red October and Turla are believed to be Russian. The Mask is probably Spanish. Stuxnet and Flame are probably from the U.S. All these were discovered in the past five years, and named by researchers who inferred their creators from clues such as who the malware targeted.

I dislike the “cyberwar” metaphor for espionage and hacking, but there is a war of sorts going on in cyberspace. Countries are using these weapons against each other. This affects all of us not just because we might be citizens of one of these countries, but because we are all potentially collateral damage. Most of the varieties of malware listed above have been used against nongovernment targets, such as national infrastructure, corporations, and NGOs. Sometimes these attacks are accidental, but often they are deliberate.

For their defense, civilian networks must rely on commercial security products and services. We largely rely on antivirus products from companies such as Symantec, Kaspersky, and F-Secure. These products continuously scan our computers, looking for malware, deleting it, and alerting us as they find it. We expect these companies to act in our interests, and never deliberately fail to protect us from a known threat.

This is why the recent disclosure of Regin is so disquieting. The first public announcement of Regin was from Symantec, on November 23. The company said that its researchers had been studying it for about a year, and announced its existence because they knew of another source that was going to announce it. That source was a news site, the Intercept, which described Regin and its U.S. connections the following day. Both Kaspersky and F-Secure soon published their own findings. Both stated that they had been tracking Regin for years. All three antivirus companies found samples of it in their files dating back to 2008 or 2009.

So why did these companies all keep Regin a secret for so long? And why did they leave us vulnerable for all this time?

To get an answer, we have to disentangle two things. As near as we can tell, all the companies had added signatures for Regin to their detection databases long before last month. The VirusTotal website has had a signature for Regin since 2011. Both Microsoft security and F-Secure started detecting and removing it that year as well. Symantec has protected its users against Regin since 2013, although it had certainly added the VirusTotal signature back in 2011.
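For context, the “signatures” at issue here are just patterns (file hashes or distinctive byte sequences) that a scanner matches against files. A toy sketch in Python, using the standard EICAR antivirus test string rather than real malware; the database entries illustrate the general format, not any vendor’s actual data:

```python
import hashlib

# Toy "signature database": full-file SHA-256 hashes plus raw byte
# patterns. The single entry is the well-known EICAR test file, which
# every real antivirus product detects by convention.
HASH_SIGNATURES = {
    # SHA-256 of the complete EICAR test file
    "275a021bbfb6489e54d471899f7db9d1663fc695ec2fe2a2c4538aabf651fd0f":
        "EICAR-Test-File",
}
BYTE_SIGNATURES = {
    # Distinctive prefix of the EICAR string
    b"X5O!P%@AP[4\\PZX54(P^)7CC)7}$": "EICAR-Test-File",
}

def scan_bytes(data: bytes) -> list:
    """Return the names of any signatures matching this blob."""
    hits = []
    digest = hashlib.sha256(data).hexdigest()
    if digest in HASH_SIGNATURES:
        hits.append(HASH_SIGNATURES[digest])
    for pattern, name in BYTE_SIGNATURES.items():
        if pattern in data:
            hits.append(name)
    return hits
```

Keeping such an entry in the database while staying publicly silent is exactly the split described above: users were protected by the signature long before anyone was told what it actually detected.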

Entirely separately and seemingly independently, all of these companies decided not to publicly discuss Regin’s existence until after Symantec and the Intercept did so. Reasons given vary. Mikko Hypponen of F-Secure said that specific customers asked him not to discuss the malware that had been found on their networks. Fox-IT, which was hired to remove Regin from the Belgian phone company Belgacom’s networks, didn’t say anything about what it discovered because it “didn’t want to interfere with NSA/GCHQ operations.”

My guess is that none of the companies wanted to go public with an incomplete picture. Unlike criminal malware, government-grade malware can be hard to figure out. It’s much more elusive and complicated. It is constantly updated. Regin is made up of multiple modules — Fox-IT called it “a full framework of a lot of species of malware” — making it even harder to figure out what’s going on. Regin has also been used sparingly, against only a select few targets, making it hard to get samples. When you make a press splash by identifying a piece of malware, you want to have the whole story. Apparently, no one felt they had that with Regin.

That is not a good enough excuse, though. As nation-state malware becomes more common, we will often lack the whole story. And as long as countries are battling it out in cyberspace, some of us will be targets and the rest of us might be unlucky enough to be sitting in the blast radius. Military-grade malware will continue to be elusive.

Right now, antivirus companies are probably sitting on incomplete stories about a dozen more varieties of government-grade malware. But they shouldn’t be. We want, and need, our antivirus companies to tell us everything they can about these threats as soon as they know about them, and not wait until the release of a political story makes it impossible for them to remain silent.

This essay previously appeared in the MIT Technology Review.

Schneier on Security: Friday Squid Blogging: Squid Poaching off the Coast of Japan

This post was syndicated from: Schneier on Security and was written by: schneier. Original post: at Schneier on Security

There has been an increase in squid poaching by North Korea out of Japanese territorial waters.

As usual, you can also use this squid post to talk about the security stories in the news that I haven’t covered.

Schneier on Security: Surveillance Cartoon

This post was syndicated from: Schneier on Security and was written by: schneier. Original post: at Schneier on Security

Funny.

Schneier on Security: Corporations Misusing Our Data

This post was syndicated from: Schneier on Security and was written by: schneier. Original post: at Schneier on Security

In the Internet age, we have no choice but to entrust our data to private companies: e-mail providers, service providers, retailers, and so on.

We realize that this data is at risk from hackers. But there’s another risk as well: the employees of the companies who are holding our data for us.

In the early years of Facebook, employees had a master password that enabled them to view anything they wanted in any account. NSA employees occasionally snoop on their friends and partners. The agency even has a name for it: LOVEINT. And well before the Internet, people with access to police or medical records occasionally used that power to look up either famous people or people they knew.

The latest company accused of allowing this sort of thing is Uber, the Internet car-ride service. The company is under investigation for spying on riders without their permission. Some Uber employees have access to an internal tool, called the “god view,” that shows who is using the service and where they’re going; at least once, in 2011, an employee used it as a party trick to show off the service. A senior executive also suggested that the company hire people to dig up dirt on its critics, which would make its database of people’s rides even more “useful.”

None of us wants to be stalked — whether it’s from looking at our location data, our medical data, our emails and texts, or anything else — by friends or strangers who have access due to their jobs. Unfortunately, there are few rules protecting us.

Government employees are prohibited from looking at our data, although none of the NSA LOVEINT creeps were ever prosecuted. The HIPAA law protects the privacy of our medical records, but we have nothing to protect most of our other information.

Your Facebook and Uber data are only protected by company culture. Nothing in the license agreements you clicked “agree” to (but didn’t read) prevents those companies from violating your privacy.

This needs to change. Corporate databases containing our data should be secured from everyone who doesn’t need access for their work. Voyeurs who peek at our data without a legitimate reason should be punished.

There are audit technologies that can detect this sort of thing, and they should be required. As long as we have to give our data to companies and government agencies, we need assurances that our privacy will be protected.
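The audit technologies in question can be as simple as a layer that every read of a sensitive record must pass through, recording who looked at what and why. A minimal sketch, with hypothetical class and field names rather than any real product’s API:

```python
import logging
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO)
audit_log = logging.getLogger("record-access-audit")

class AuditedStore:
    """Wraps a record store so every read is attributed and logged.

    Reads with no stated business reason are flagged for human review,
    which is the basic mechanism for catching voyeuristic "god view"
    style accesses after the fact.
    """

    def __init__(self, records):
        self._records = records
        self.flagged = []  # accesses with no stated reason, for review

    def get(self, employee, record_id, reason=None):
        entry = {
            "when": datetime.now(timezone.utc).isoformat(),
            "who": employee,
            "record": record_id,
            "reason": reason,
        }
        audit_log.info("access %s", entry)
        if not reason:  # no ticket or business reason attached: flag it
            self.flagged.append(entry)
        return self._records.get(record_id)
```

Pairing a log like this with routine review, and with real consequences for unexplained accesses, is what turns “company culture” into an enforceable control.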

This essay previously appeared on CNN.com.

Schneier on Security: Olfactory Surveillance

This post was syndicated from: Schneier on Security and was written by: schneier. Original post: at Schneier on Security

The Denver police are using olfactometers to measure the concentration of cannabis in the air. I haven’t found any technical information about these devices, their sensitivity, range, etc.

Schneier on Security: Quantum Attack on Public-Key Algorithm

This post was syndicated from: Schneier on Security and was written by: schneier. Original post: at Schneier on Security

This talk (and accompanying paper) describes Soliloquy, a lattice-based public-key algorithm developed by GCHQ, and a quantum-computer attack on it.

News article.