Author Archive

Schneier on Security: Friday Squid Blogging: Using Squid Proteins for Commercial Camouflage Products

This post was syndicated from: Schneier on Security and was written by: schneier. Original post: at Schneier on Security

More research.

As usual, you can also use this squid post to talk about the security stories in the news that I haven’t covered.

Schneier on Security: Yet Another Computer Side Channel

Researchers have managed to get two computers to communicate using heat and thermal sensors. It’s not really viable communication — the bit rate is eight bits per hour over a distance of fifteen inches — but it’s neat.
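The basic idea can be sketched as a toy simulation: the sender modulates CPU load to heat the machine, and the receiver thresholds a temperature reading once per bit period. All the thermal constants below are invented for illustration; the real channel is far noisier, which is why it manages only eight bits per hour.

```python
# Toy model of a thermal covert channel. Heating/cooling rates, the ambient
# temperature, and the decision threshold are all made-up numbers.

def send_bit(temp, bit, steps=10):
    """Sender modulates CPU load: busy-loop (heats) for 1, idle (cools) for 0."""
    for _ in range(steps):
        temp += 0.5 if bit else 0.0        # heating from synthetic CPU load
        temp -= 0.02 * (temp - 40.0)       # passive cooling toward 40 C ambient
    return temp

def transmit(bits, ambient=40.0):
    """Receiver samples the 'sensor' once per bit period and thresholds it."""
    received = []
    for bit in bits:
        temp = send_bit(ambient, bit)
        received.append(1 if temp > 41.0 else 0)
        # The machine must cool fully between symbols -- one reason the real
        # channel's bit rate is measured in bits per hour.
    return received

message = [1, 0, 1, 1, 0, 0, 1, 0]
assert transmit(message) == message
```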

Schneier on Security: New Zealand’s XKEYSCORE Use

The Intercept and the New Zealand Herald have reported that New Zealand spied on communications about the World Trade Organization director-general candidates. I’m not sure why this is news; it seems like a perfectly reasonable national intelligence target. More interesting to me is that the Intercept published the XKEYSCORE rules. It’s interesting to see how primitive the keyword targeting is, and how broadly it collects e-mails.

The second really important point is that Edward Snowden’s name is mentioned nowhere in the stories. Given how scrupulous the Intercept is about identifying him as the source of his NSA documents, I have to conclude that this is from another leaker. For a while, I have believed that there are at least three leakers inside the Five Eyes intelligence community, plus another CIA leaker. What I have called Leaker #2 has previously revealed XKEYSCORE rules. Whether this new disclosure is from Leaker #2 or a new Leaker #5, I have no idea. I hope someone is keeping a list.

Schneier on Security: Capabilities of Canada’s Communications Security Establishment

There’s a new story about the hacking capabilities of Canada’s Communications Security Establishment (CSE), based on the Snowden documents.

Schneier on Security: Reforming the FISA Court

The Brennan Center has a long report on what’s wrong with the FISA Court and how to fix it.

At the time of its creation, many lawmakers saw constitutional problems in a court that operated in total secrecy and outside the normal “adversarial” process…. But the majority of Congress was reassured by similarities between FISA Court proceedings and the hearings that take place when the government seeks a search warrant in a criminal investigation. Moreover, the rules governing who could be targeted for “foreign intelligence” purposes were narrow enough to mitigate concerns that the FISA Court process might be used to suppress political dissent in the U.S. — or to avoid the stricter standards that apply in domestic criminal cases.

In the years since then, however, changes in technology and the law have altered the constitutional calculus. Technological advances have revolutionized communications. People are communicating at a scale unimaginable just a few years ago. International phone calls, once difficult and expensive, are now as simple as flipping a light switch, and the Internet provides countless additional means of international communication. Globalization makes such exchanges as necessary as they are easy. As a result of these changes, the amount of information about Americans that the NSA intercepts, even when targeting foreigners overseas, has exploded.

Instead of increasing safeguards for Americans’ privacy as technology advances, the law has evolved in the opposite direction since 9/11…. While surveillance involving Americans previously required individualized court orders, it now happens through massive collection programs…involving no case-by-case judicial review. The pool of permissible targets is no longer limited to foreign powers — such as foreign governments or terrorist groups — and their agents. Furthermore, the government may invoke the FISA Court process even if its primary purpose is to gather evidence for a domestic criminal prosecution rather than to thwart foreign threats.

…[T]hese developments…have had a profound effect on the role exercised by the FISA Court. They have caused the court to veer off course, departing from its traditional role of ensuring that the government has sufficient cause to intercept communications or obtain records in particular cases and instead authorizing broad surveillance programs. It is questionable whether the court’s new role comports with Article III of the Constitution, which mandates that courts must adjudicate concrete disputes rather than issuing advisory opinions on abstract questions. The constitutional infirmity is compounded by the fact that the court generally hears only from the government, while the people whose communications are intercepted have no meaningful opportunity to challenge the surveillance, even after the fact.

Moreover, under current law, the FISA Court does not provide the check on executive action that the Fourth Amendment demands. Interception of communications generally requires the government to obtain a warrant based on probable cause of criminal activity. Although some courts have held that a traditional warrant is not needed to collect foreign intelligence, they have imposed strict limits on the scope of such surveillance and have emphasized the importance of close judicial scrutiny in policing these limits. The FISA Court’s minimal involvement in overseeing programmatic surveillance does not meet these constitutional standards.

[…]

Fundamental changes are needed to fix these flaws. Congress should end programmatic surveillance and require the government to obtain judicial approval whenever it seeks to obtain communications or information involving Americans. It should shore up the Article III soundness of the FISA Court by ensuring that the interests of those affected by surveillance are represented in court proceedings, increasing transparency, and facilitating the ability of affected individuals to challenge surveillance programs in regular federal courts. Finally, Congress should address additional Fourth Amendment concerns by narrowing the permissible scope of “foreign intelligence surveillance” and ensuring that it cannot be used as an end-run around the constitutional standards for criminal investigations.

Just Security post — where I copied the above excerpt. Lawfare post.

Schneier on Security: BIOS Hacking

We’ve learned a lot about the NSA’s abilities to hack a computer’s BIOS so that the hack survives reinstalling the OS. Now we have a research presentation about it.

From Wired:

The BIOS boots a computer and helps load the operating system. By infecting this core software, which operates below antivirus and other security products and therefore is not usually scanned by them, spies can plant malware that remains live and undetected even if the computer’s operating system were wiped and re-installed.

[…]

Although most BIOS have protections to prevent unauthorized modifications, the researchers were able to bypass these to reflash the BIOS and implant their malicious code.

[…]

Because many BIOS share some of the same code, they were able to uncover vulnerabilities in 80 percent of the PCs they examined, including ones from Dell, Lenovo and HP. The vulnerabilities, which they’re calling incursion vulnerabilities, were so easy to find that they wrote a script to automate the process and eventually stopped counting the vulns it uncovered because there were too many.

From ThreatPost:

Kallenberg said an attacker would need to already have remote access to a compromised computer in order to execute the implant and elevate privileges on the machine through the hardware. Their exploit turns down existing protections in place to prevent re-flashing of the firmware, enabling the implant to be inserted and executed.

The devious part of their exploit is that they’ve found a way to insert their agent into System Management Mode, which is used by firmware and runs separately from the operating system, managing various hardware controls. System Management Mode also has access to memory, which puts supposedly secure operating systems such as Tails in the line of fire of the implant.

From the Register:

“Because almost no one patches their BIOSes, almost every BIOS in the wild is affected by at least one vulnerability, and can be infected,” Kovah says.

“The high amount of code reuse across UEFI BIOSes means that BIOS infection can be automatic and reliable.

“The point is less about how vendors don’t fix the problems, and more how the vendors’ fixes are going un-applied by users, corporations, and governments.”

From Forbes:

Though such “voodoo” hacking will likely remain a tool in the arsenal of intelligence and military agencies, it’s getting easier, Kallenberg and Kovah believe. This is in part due to the widespread adoption of UEFI, a framework that makes it easier for the vendors along the manufacturing chain to add modules and tinker with the code. That’s proven useful for the good guys, but also made it simpler for researchers to inspect the BIOS, find holes and create tools that find problems, allowing Kallenberg and Kovah to show off exploits across different PCs. In the demo to FORBES, an HP PC was used to carry out an attack on an ASUS machine. Kovah claimed that in tests across different PCs, he was able to find and exploit BIOS vulnerabilities across 80 per cent of machines he had access to and he could find flaws in the remaining 10 per cent.

“There are protections in place that are supposed to prevent you from flashing the BIOS and we’ve essentially automated a way to find vulnerabilities in this process to allow us to bypass them. It turns out bypassing the protections is pretty easy as well,” added Kallenberg.

The NSA has a term for vulnerabilities it thinks are exclusive to it: NOBUS, for “nobody but us.” Turns out that NOBUS is a flawed concept. As I keep saying: “Today’s top-secret programs become tomorrow’s PhD theses and the next day’s hacker tools.” By continuing to exploit these vulnerabilities rather than fixing them, the NSA is keeping us all vulnerable.

Two Slashdot threads. Hacker News thread. Reddit thread.

Schneier on Security: Friday Squid Blogging: Squid Pen

Neat.

As usual, you can also use this squid post to talk about the security stories in the news that I haven’t covered.

Schneier on Security: New Paper on Digital Intelligence

David Omand — GCHQ director from 1996-1997, and the UK’s security and intelligence coordinator from 2000-2005 — has just published a new paper: “Understanding Digital Intelligence and the Norms That Might Govern It.”

Executive Summary: This paper describes the nature of digital intelligence and provides context for the material published as a result of the actions of National Security Agency (NSA) contractor Edward Snowden. Digital intelligence is presented as enabled by the opportunities of global communications and private sector innovation and as growing in response to changing demands from government and law enforcement, in part mediated through legal, parliamentary and executive regulation. A common set of organizational and ethical norms based on human rights considerations are suggested to govern such modern intelligence activity (both domestic and external) using a three-layer model of security activity on the Internet: securing the use of the Internet for everyday economic and social life; the activity of law enforcement — both nationally and through international agreements — attempting to manage criminal threats exploiting the Internet; and the work of secret intelligence and security agencies using the Internet to gain information on their targets, including in support of law enforcement.

I don’t agree with a lot of it, but it’s worth reading.

My favorite Omand quote is this, defending the close partnership between the NSA and GCHQ in 2013: “We have the brains. They have the money. It’s a collaboration that’s worked very well.”

Schneier on Security: Cisco Shipping Equipment to Fake Addresses to Foil NSA Interception

Last May, we learned that the NSA intercepts equipment being shipped around the world and installs eavesdropping implants. There were photos of NSA employees opening up a Cisco box. Cisco’s CEO John Chambers personally complained to President Obama about this practice, which is not exactly a selling point for Cisco equipment abroad. Der Spiegel published the more complete document, along with a broader story, in January of this year:

In one recent case, after several months a beacon implanted through supply-chain interdiction called back to the NSA covert infrastructure. The call back provided us access to further exploit the device and survey the network. Upon initiating the survey, SIGINT analysis from TAO/Requirements & Targeting determined that the implanted device was providing even greater access than we had hoped: We knew the devices were bound for the Syrian Telecommunications Establishment (STE) to be used as part of their internet backbone, but what we did not know was that STE’s GSM (cellular) network was also using this backbone. Since the STE GSM network had never before been exploited, this new access represented a real coup.

Now Cisco is taking matters into its own hands, offering to ship equipment to fake addresses in an effort to avoid NSA interception.

I don’t think we have even begun to understand the long-term damage the NSA has done to the US tech industry.

Slashdot thread.

Schneier on Security: More <i>Data and Goliath</i> News

Right now, the book is #6 on the New York Times best-seller list in hardcover nonfiction, and #13 in combined print and e-book nonfiction. This is the March 22 list, and covers sales from the first week of March. The March 29 list — covering sales from the second week of March — is not yet on the Internet. On that list, I’m #11 on the hardcover nonfiction list, and not at all on the combined print and e-book nonfiction list.

Marc Rotenberg of EPIC tells me that Vance Packard’s The Naked Society made it to #7 on the list during the week of July 12, 1964, and — by that measure — Data and Goliath is the most popular privacy book of all time. I’m not sure I can claim that honor yet, but it’s a nice thought. And two weeks on the New York Times best-seller list is super fantastic.

For those curious to know what sorts of raw numbers translate into those rankings, this is what I know. Nielsen Bookscan tracks retail sales across the US, and captures about 80% of the book market. It reports that my book sold 4,706 copies during the first week of March, and 2,339 copies in the second week. Taking that 80% figure, that means I sold 6,000 copies the first week and 3,000 the second.
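The scaling behind those estimates, for the curious: since Bookscan covers roughly 80% of the market, divide its counts by 0.8 to approximate total sales.

```python
# Bookscan's reported counts, scaled up by its ~80% market coverage.
bookscan = {"week 1": 4706, "week 2": 2339}

# Dividing by 0.8 is the same as multiplying by 1.25.
totals = {week: round(n * 1.25) for week, n in bookscan.items()}

assert totals == {"week 1": 5882, "week 2": 2924}  # i.e., ~6,000 and ~3,000
```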

My publisher tells me that Amazon sold 650 hardcovers and 600 e-books during the first week, and 400 hardcovers and 500 e-books during the second week. The hardcover sales ranking was 865, 949, 611, 686, 657, 602, 595 during the first week, and 398, 511, 693, 867, 341, 357, 343 during the second. The book’s rankings during those first few days don’t match sales, because Amazon factors a book into the rankings when a person orders it, but only counts the sale when it actually ships. So all of my preorders counted as sales on that first day, even though they had already been reflected in the rankings during the days and weeks before publication.

There are a few new book reviews. There’s one from the Dealbook blog at the New York Times that treats the book very seriously, but doesn’t agree with my conclusions. (A rebuttal to that review is here.) A review from the Wall Street Journal was even less kind. This review from InfoWorld is much more positive.

All of this, and more, is on the book’s website.

There are several book-related videos online. The first is the talk I gave at the Harvard Bookstore on March 4th. The second and third are interviews of me on Democracy Now. I also did a more general Q&A with Gizmodo.

Note to readers. The book is 80,000 words long, which is a normal length for a book like this. But the physical book is much bigger, because it contains a lot of references. They’re not numbered, but if they were, the count would run well past 1,000; I counted all the links, and there are 1,622 individual citations. That’s a lot of text. It means that if you’re reading the book on paper, the narrative ends on page 238, even though the book continues to page 364. If you’re reading it on the Kindle, you’ll finish the book when the Kindle says you’re only 44% of the way through. The gap between pages and percentages exists because the references are set in smaller type than the body. I warn you of this now, so you know what to expect. (It always annoys me that the Kindle calculates percent done from the end of the file, not the end of the book.)

And if you’ve read the book, please post a review on the book’s Amazon page or on Goodreads. Reviews are important on those sites, and I need more of them.

Schneier on Security: Understanding the Organizational Failures of Terrorist Organizations

New research: Max Abrahms and Philip B.K. Potter, “Explaining Terrorism: Leadership Deficits and Militant Group Tactics,” International Organization.

Abstract: Certain types of militant groups — those suffering from leadership deficits — are more likely to attack civilians. Their leadership deficits exacerbate the principal-agent problem between leaders and foot soldiers, who have stronger incentives to harm civilians. We establish the validity of this proposition with a tripartite research strategy that balances generalizability and identification. First, we demonstrate in a sample of militant organizations operating in the Middle East and North Africa that those lacking centralized leadership are prone to targeting civilians. Second, we show that when the leaderships of militant groups are degraded from drone strikes in the Afghanistan-Pakistan tribal regions, the selectivity of organizational violence plummets. Third, we elucidate the mechanism with a detailed case study of the al-Aqsa Martyrs Brigade, a Palestinian group that turned to terrorism during the Second Intifada because pressure on the leadership allowed low-level members to act on their preexisting incentives to attack civilians. These findings indicate that a lack of principal control is an important, underappreciated cause of militant group violence against civilians.

I have previously blogged Max Abrahms’s work here, here, and here.

Schneier on Security: How We Become Habituated to Security Warnings on Computers

New research: “How Polymorphic Warnings Reduce Habituation in the Brain: Insights from an fMRI Study.”

Abstract: Research on security warnings consistently points to habituation as a key reason why users ignore security warnings. However, because habituation as a mental state is difficult to observe, previous research has examined habituation indirectly by observing its influence on security behaviors. This study addresses this gap by using functional magnetic resonance imaging (fMRI) to open the “black box” of the brain to observe habituation as it develops in response to security warnings. Our results show a dramatic drop in the visual processing centers of the brain after only the second exposure to a warning, with further decreases with subsequent exposures. To combat the problem of habituation, we designed a polymorphic warning that changes its appearance. We show in two separate experiments using fMRI and mouse cursor tracking that our polymorphic warning is substantially more resistant to habituation than conventional warnings. Together, our neurophysiological findings illustrate the considerable influence of human biology on users’ habituation to security warnings.

Webpage.
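A minimal sketch of what “polymorphic” means here, with invented variation dimensions: the warning’s wording stays fixed, but it is never rendered exactly the same way twice, so the visual stimulus doesn’t habituate.

```python
# Toy polymorphic-warning renderer. The style dimensions (border, accent
# color) are invented examples; a real implementation would vary layout,
# animation, icon placement, and so on.
import itertools

TEXT = "This site may harm your computer."

# Cycle deterministically through visual variants so no two consecutive
# renderings look alike.
_styles = itertools.cycle(itertools.product(
    ["solid", "dashed"],   # border style
    ["#c00", "#d80"],      # accent color
))

def render_warning():
    border, color = next(_styles)
    return TEXT, {"border": border, "color": color}

shown = [render_warning() for _ in range(4)]
assert all(text == TEXT for text, _ in shown)                    # message constant
assert len({(s["border"], s["color"]) for _, s in shown}) == 4   # appearance varies
```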

Schneier on Security: Details on Hacking Team Software Used by Ethiopian Government

The Citizen Lab at the University of Toronto published a new report on the use of spyware from the Italian cyberweapons arms manufacturer Hacking Team by the Ethiopian intelligence service. We previously learned that the government used this software to target US-based Ethiopian journalists.

News articles. Human Rights Watch press release.

Schneier on Security: How the CIA Might Target Apple’s Xcode

The Intercept recently posted a story on the CIA’s attempts to hack the iOS operating system. Most interesting was the speculation that they hacked Xcode, which would mean that any apps developed using that tool would be compromised.

The security researchers also claimed they had created a modified version of Apple’s proprietary software development tool, Xcode, which could sneak surveillance backdoors into any apps or programs created using the tool. Xcode, which is distributed by Apple to hundreds of thousands of developers, is used to create apps that are sold through Apple’s App Store.

The modified version of Xcode, the researchers claimed, could enable spies to steal passwords and grab messages on infected devices. Researchers also claimed the modified Xcode could “force all iOS applications to send embedded data to a listening post.” It remains unclear how intelligence agencies would get developers to use the poisoned version of Xcode.

Researchers also claimed they had successfully modified the OS X updater, a program used to deliver updates to laptop and desktop computers, to install a “keylogger.”

It’s a classic application of Ken Thompson’s classic 1984 paper, “Reflections on Trusting Trust,” and a very nasty attack. Dan Wallach speculates on how this might work.
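The shape of the attack can be shown with a deliberately silly sketch: a “build step” that appends a backdoor to everything it compiles, so every app’s source stays clean while every binary is dirty. The function and backdoor string here are invented; Thompson’s full attack additionally makes the compiler re-infect fresh copies of itself when it compiles the compiler’s own source, so the tampering survives even a rebuild from clean code.

```python
# Toy source-to-source "compiler" illustrating the trusting-trust idea:
# the tampering lives in the build tool, not in any program's source.

BACKDOOR = "\nsend_all_data_to_listening_post()  # injected at build time"

def build(source: str) -> str:
    """A trusted-looking build step that silently tampers with its output."""
    return source + BACKDOOR

app_source = "print('hello')"
binary = build(app_source)

assert "listening_post" in binary          # every built app is compromised...
assert "listening_post" not in app_source  # ...while its source looks clean
```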

Schneier on Security: Friday Squid Blogging: Squid Stir-Fry

Spicy squid masala stir-fry. Easy and delicious.

As usual, you can also use this squid post to talk about the security stories in the news that I haven’t covered.

Schneier on Security: Fall Seminar on Catastrophic Risk

I am planning a study group at Harvard University (in Boston) for the Fall semester, on catastrophic risk.

Berkman Study Group — Catastrophic Risk: Technologies and Policy

Technology empowers, for both good and bad. A broad history of “attack” technologies shows trends of empowerment, as individuals wield ever more destructive power. The natural endgame is a nuclear bomb in everybody’s back pocket, or a bio-printer that can drop a species. And then what? Is society even possible when the most extreme individual can kill everyone else? Is totalitarian control the only way to prevent human devastation, or are there other possibilities? And how realistic are these scenarios, anyway? In this class, we’ll discuss technologies like cyber, bio, nanotech, artificial intelligence, and autonomous drones; security technologies and policies for catastrophic risk; and more. Is the reason we’ve never met any extraterrestrials that natural selection dictates that any species achieving a sufficiently advanced technology level inevitably exterminates itself?

The study group may serve as a springboard for an independent paper and credit, in conjunction with faculty supervision from your program.

All disciplines and backgrounds welcome, students and non-students alike. This discussion needs diverse perspectives. We also ask that you commit to preparing for and participating in all sessions.

Six sessions, Mondays, 5:00-7:00 PM, Location TBD
9/14, 9/28, 10/5, 10/19, 11/2, 11/16

Please respond to Bruce Schneier with a resume and statement of interest. Applications due August 14. Bruce will review applications and aim for a seminar size of roughly 16-20 people with a diversity of backgrounds and expertise.

Please help me spread the word far and wide. The description is only on a Berkman page, so students won’t see it in their normal perusal of fall classes.

Schneier on Security: Threats to Information Integrity

Every year, the Director of National Intelligence publishes an unclassified “Worldwide Threat Assessment.” This year’s report was published two weeks ago. “Cyber” is the first threat listed, and includes most of what you’d expect from a report like this.

More interesting is this comment about information integrity:

Most of the public discussion regarding cyber threats has focused on the confidentiality and availability of information; cyber espionage undermines confidentiality, whereas denial-of-service operations and data-deletion attacks undermine availability. In the future, however, we might also see more cyber operations that will change or manipulate electronic information in order to compromise its integrity (i.e. accuracy and reliability) instead of deleting it or disrupting access to it. Decisionmaking by senior government officials (civilian and military), corporate executives, investors, or others will be impaired if they cannot trust the information they are receiving.

This speaks directly to the need for strong cryptography to protect the integrity of information.
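As a minimal example of what cryptographic integrity protection looks like in practice, here’s an HMAC sketch using Python’s standard library. The key and messages are placeholders; a tampered message fails verification even though it was never secret.

```python
# Integrity, not confidentiality: the message travels in the clear, but any
# modification is detected because the attacker can't forge a valid MAC
# without the key.
import hashlib
import hmac

KEY = b"shared-secret-key"  # placeholder; use a random, properly managed key

def tag(message: bytes) -> bytes:
    return hmac.new(KEY, message, hashlib.sha256).digest()

def verify(message: bytes, mac: bytes) -> bool:
    return hmac.compare_digest(tag(message), mac)  # constant-time comparison

report = b"reactor status: nominal"
mac = tag(report)

assert verify(report, mac)                            # intact message accepted
assert not verify(b"reactor status: CRITICAL", mac)   # manipulation detected
```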

Schneier on Security: <i>Data and Goliath</i> Makes <i>New York Times</i> Best-Seller List

The March 22 best-seller list from the New York Times will list me as #6 in the hardcover nonfiction category, and #13 in the combined paper/e-book category. This is amazing, really. The book just barely crossed #400 on Amazon this week, but it seems that other booksellers did more.

There are new reviews from the LA Times, Lawfare, EFF, and Slashdot.

The Internet Society recorded a short video of me talking about my book. I’ve given longer talks, and videos should be up soon. “Science Friday” interviewed me about my book.

Amazon has it back in stock. And, as always, more information on the book’s website.

Schneier on Security: The Changing Economics of Surveillance

Cory Doctorow examines the changing economics of surveillance and what it means:

The Stasi employed one snitch for every 50 or 60 people it watched. We can’t be sure of the size of the entire Five Eyes global surveillance workforce, but there are only about 1.4 million Americans with Top Secret clearance, and many of them don’t work at or for the NSA, which means that the number is smaller than that (the other Five Eyes states have much smaller workforces than the US). This million-ish person workforce keeps six or seven billion people under surveillance — a ratio approaching 1:10,000. What’s more, the US has only (“only”!) quadrupled its surveillance budget since the end of the Cold War: tooling up to give the spies their toys wasn’t all that expensive, compared to the number of lives that gear lets them pry into.

IT has been responsible for a two-to-three order-of-magnitude productivity gain in surveillance efficiency. The Stasi used an army to surveil a nation; the NSA uses a battalion to surveil a planet.
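Doctorow’s arithmetic checks out. Using the rough headcounts he quotes:

```python
# Watcher-to-watched ratios, Stasi vs. Five Eyes, from the figures above.
stasi_ratio = 1 / 55                   # one snitch per 50-60 East Germans
five_eyes_staff = 1_000_000            # the "million-ish" workforce
watched = 6_500_000_000                # "six or seven billion people"

modern_ratio = five_eyes_staff / watched   # roughly 1:6,500
gain = stasi_ratio / modern_ratio          # surveillance-productivity multiplier

assert 100 < gain < 1000   # i.e., a 2-3 order-of-magnitude improvement
```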

Schneier on Security: Equation Group Update

Kaspersky Labs has published more information about the Equation Group — that’s the NSA — and its sophisticated malware platform.

Ars Technica article.

Schneier on Security: Hardware Bit-Flipping Attack

The Project Zero team at Google has posted details of a new attack that targets a computer’s DRAM. It’s called Rowhammer. Here’s a good description:

Here’s how Rowhammer gets its name: In the Dynamic Random Access Memory (DRAM) used in some laptops, a hacker can run a program designed to repeatedly access a certain row of transistors in the computer’s memory, “hammering” it until the charge from that row leaks into the next row of memory. That electromagnetic leakage can cause what’s known as “bit flipping,” in which transistors in the neighboring row of memory have their state reversed, turning ones into zeros or vice versa. And for the first time, the Google researchers have shown that they can use that bit flipping to actually gain unintended levels of control over a victim computer. Their Rowhammer hack can allow a “privilege escalation,” expanding the attacker’s influence beyond a certain fenced-in portion of memory to more sensitive areas.

Basically:

When run on a machine vulnerable to the rowhammer problem, the process was able to induce bit flips in page table entries (PTEs). It was able to use this to gain write access to its own page table, and hence gain read-write access to all of physical memory.

The cause is simply the super dense packing of chips:

This works because DRAM cells have been getting smaller and closer together. As DRAM manufacturing scales down chip features to smaller physical dimensions, to fit more memory capacity onto a chip, it has become harder to prevent DRAM cells from interacting electrically with each other. As a result, accessing one location in memory can disturb neighbouring locations, causing charge to leak into or out of neighbouring cells. With enough accesses, this can change a cell’s value from 1 to 0 or vice versa.

Very clever, and yet another example of the security interplay between hardware and software.

This kind of thing is hard to fix, although the Google team gives some mitigation techniques at the end of their analysis.
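The charge-leakage mechanism quoted above can be modeled with a toy simulation: each access to a row adds a little “disturbance” to its physical neighbors, and enough accumulated disturbance flips a bit. The leak rate and flip threshold are invented numbers (real DRAM needs on the order of hundreds of thousands of row activations within a refresh window), but the double-sided access pattern is the one the researchers describe.

```python
# Toy DRAM model: hammering rows 3 and 5 corrupts victim row 4 between them,
# without that row ever being written.
ROWS, BITS_PER_ROW = 8, 8
memory = [[1] * BITS_PER_ROW for _ in range(ROWS)]
disturbance = [0] * ROWS
FLIP_THRESHOLD = 100_000  # invented; stands in for activations-per-refresh

def access(row):
    """Reading a row electrically disturbs its physical neighbors."""
    for neighbor in (row - 1, row + 1):
        if 0 <= neighbor < ROWS:
            disturbance[neighbor] += 1
            if disturbance[neighbor] >= FLIP_THRESHOLD:
                memory[neighbor][0] ^= 1   # a bit in the victim row flips
                disturbance[neighbor] = 0

# Double-sided hammering: row 4 absorbs disturbance from both sides.
for _ in range(50_000):
    access(3)
    access(5)

assert memory[4][0] == 0   # victim bit flipped, though row 4 was never written
assert memory[0][0] == 1   # distant rows are unaffected
```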

Slashdot thread.

Schneier on Security: Can the NSA Break Microsoft’s BitLocker?

The Intercept has a new story on the CIA’s — yes, the CIA, not the NSA — efforts to break encryption. These are from the Snowden documents, and talk about a conference called the Trusted Computing Base Jamboree. There are some interesting documents associated with the article, but not a lot of hard information.

There’s a paragraph about Microsoft’s BitLocker, the encryption system used to protect MS Windows computers:

Also presented at the Jamboree were successes in the targeting of Microsoft’s disk encryption technology, and the TPM chips that are used to store its encryption keys. Researchers at the CIA conference in 2010 boasted about the ability to extract the encryption keys used by BitLocker and thus decrypt private data stored on the computer. Because the TPM chip is used to protect the system from untrusted software, attacking it could allow the covert installation of malware onto the computer, which could be used to access otherwise encrypted communications and files of consumers. Microsoft declined to comment for this story.

This implies that the US intelligence community — I’m guessing the NSA here — can break BitLocker. The source document, though, is much less definitive about it.

Power analysis, a side-channel attack, can be used against secure devices to non-invasively extract protected cryptographic information such as implementation details or secret keys. We have employed a number of publically known attacks against the RSA cryptography found in TPMs from five different manufacturers. We will discuss the details of these attacks and provide insight into how private TPM key information can be obtained with power analysis. In addition to conventional wired power analysis, we will present results for extracting the key by measuring electromagnetic signals emanating from the TPM while it remains on the motherboard. We will also describe and present results for an entirely new unpublished attack against a Chinese Remainder Theorem (CRT) implementation of RSA that will yield private key information in a single trace.

The ability to obtain a private TPM key not only provides access to TPM-encrypted data, but also enables us to circumvent the root-of-trust system by modifying expected digest values in sealed data. We will describe a case study in which modifications to Microsoft’s Bitlocker encrypted metadata prevents software-level detection of changes to the BIOS.

Differential power analysis is a powerful cryptanalytic attack. Basically, it examines a chip’s power consumption while it performs encryption and decryption operations and uses that information to recover the key. What’s important here is that this is an attack to extract key information from a chip while it is running. If the chip is powered down, or if it doesn’t have the key inside, there’s no attack.
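To make the idea concrete, here is a deliberately simplified simulation of a correlation-based power attack. The "measurements" are synthetic: power consumption is modeled as the Hamming weight of a key-dependent intermediate value plus noise, and the attacker recovers the key byte by checking which guess best predicts the traces. The leakage model, noise level, and trace count are all invented for illustration; real TPM attacks target RSA operations and require actual measurement hardware.

```python
import random

def hw(x):
    """Hamming weight: number of 1 bits in x."""
    return bin(x).count("1")

def pearson(xs, ys):
    """Pearson correlation between two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs) ** 0.5
    vy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (vx * vy) if vx and vy else 0.0

random.seed(1)
SECRET_KEY = 0x5A  # the byte the "device" holds and the attacker wants
plaintexts = [random.randrange(256) for _ in range(1000)]

# Simulated power measurements: consumption tracks the Hamming weight of
# the key-dependent intermediate (key XOR plaintext), plus Gaussian noise.
traces = [hw(SECRET_KEY ^ p) + random.gauss(0, 0.3) for p in plaintexts]

# The attacker tries every key guess and keeps the one whose predicted
# leakage correlates best with the measured traces.
best = max(range(256),
           key=lambda g: pearson([hw(g ^ p) for p in plaintexts], traces))
print(hex(best))
```

The key never has to be read directly: it falls out of the statistical relationship between known inputs and observed power draw, which is why the attack only works while the chip is running with the key inside.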

I don’t take this to mean that the NSA can take a BitLocker-encrypted hard drive and recover the key. I do take it to mean that the NSA can perform a bunch of clever hacks on a BitLocker-encrypted hard drive while it is running. So I don’t think this means that BitLocker is broken.

But who knows? We do know that the FBI pressured Microsoft into adding a backdoor in BitLocker in 2005. I believe that was unsuccessful.

More than that, we don’t know.

Schneier on Security: Geotagging Twitter Users by Mining Their Social Graphs

This post was syndicated from: Schneier on Security and was written by: schneier. Original post: at Schneier on Security

New research: “Geotagging One Hundred Million Twitter Accounts with Total Variation Minimization,” by Ryan Compton, David Jurgens, and David Allen.


Abstract: Geographically annotated social media is extremely valuable for modern information retrieval. However, when researchers can only access publicly-visible data, one quickly finds that social media users rarely publish location information. In this work, we provide a method which can geolocate the overwhelming majority of active Twitter users, independent of their location sharing preferences, using only publicly-visible Twitter data.

Our method infers an unknown user’s location by examining their friend’s locations. We frame the geotagging problem as an optimization over a social network with a total variation-based objective and provide a scalable and distributed algorithm for its solution. Furthermore, we show how a robust estimate of the geographic dispersion of each user’s ego network can be used as a per-user accuracy measure which is effective at removing outlying errors.

Leave-many-out evaluation shows that our method is able to infer location for 101,846,236 Twitter users at a median error of 6.38 km, allowing us to geotag over 80% of public tweets.
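The core intuition can be sketched in a few lines: an unlocated user probably lives near most of their friends, and a robust center of the friends' known locations resists being dragged off by one far-flung contact. This toy version uses a component-wise median in place of the paper's total-variation optimization; the names and coordinates are made up for illustration.

```python
from statistics import median

# Known home coordinates (lat, lon) for some users; "eve" publishes none.
# All names and positions are invented for this sketch.
known = {
    "alice": (40.71, -74.01),   # New York
    "bob":   (40.73, -73.99),   # New York
    "carol": (40.70, -74.02),   # New York
    "dave":  (34.05, -118.24),  # Los Angeles
}
friends = {
    "eve": ["alice", "bob", "carol", "dave"],
}

def infer_location(user):
    """Estimate an unlocated user's position as the component-wise median
    of their located friends' coordinates -- a robust center that, like
    the paper's total-variation objective, is insensitive to outliers."""
    points = [known[f] for f in friends[user] if f in known]
    if not points:
        return None
    lats, lons = zip(*points)
    return (median(lats), median(lons))

print(infer_location("eve"))  # lands near New York despite the LA outlier
```

The spread of a user's friends also gives the per-user accuracy measure the abstract mentions: a tightly clustered ego network yields a confident estimate, a dispersed one does not.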

Schneier on Security: Identifying When Someone is Operating a Computer Remotely

This post was syndicated from: Schneier on Security and was written by: schneier. Original post: at Schneier on Security

Here’s an interesting technique to detect Remote Access Trojans, or RATs: differences in how local and remote users use the keyboard and mouse:

By using biometric analysis tools, we are able to analyze cognitive traits such as hand-eye coordination, usage preferences, as well as device interaction patterns to identify a delay or latency often associated with remote access attacks. Simply put, a RAT’s keyboard typing or cursor movement will often cause delayed visual feedback which in turn results in delayed response time; the data is simply not as fluent as would be expected from standard human behavior data.

No data on false positives vs. false negatives, but interesting nonetheless.
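A minimal sketch of the latency idea: compare a session's typing rhythm against the user's baseline and flag sessions that are markedly slower, as network round-trips would make them. The interval values, baseline, and threshold below are invented for illustration; a real system would learn a per-user profile across many behavioral features.

```python
from statistics import median

# Illustrative inter-keystroke intervals in milliseconds. The numbers
# are made up: the remote session is slower because every keystroke's
# visual feedback crosses the network before the next one is typed.
local_session  = [95, 110, 102, 98, 120, 105, 99, 108]
remote_session = [260, 240, 310, 275, 500, 255, 290, 420]

def looks_remote(intervals, baseline_median_ms=105, slowdown_factor=2.0):
    """Flag a session whose typing rhythm is far slower than the user's
    learned baseline -- the delayed-feedback signature the quoted text
    describes. Both parameters here are hypothetical."""
    return median(intervals) > slowdown_factor * baseline_median_ms

print(looks_remote(local_session))   # False
print(looks_remote(remote_session))  # True
```

A production detector would combine many such signals (cursor smoothness, hand-eye coordination patterns) rather than a single threshold, which is where the false positive/negative trade-off the post asks about would be measured.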
