Author Archive

Schneier on Security: New Pew Research Report on Americans’ Attitudes on Privacy, Security, and Surveillance

This post was syndicated from: Schneier on Security and was written by: schneier. Original post: at Schneier on Security

This is interesting:

The surveys find that Americans feel privacy is important in their daily lives in a number of essential ways. Yet, they have a pervasive sense that they are under surveillance when in public and very few feel they have a great deal of control over the data that is collected about them and how it is used. Adding to earlier Pew Research reports that have documented low levels of trust in sectors that Americans associate with data collection and monitoring, the new findings show Americans also have exceedingly low levels of confidence in the privacy and security of the records that are maintained by a variety of institutions in the digital age.

While some Americans have taken modest steps to stem the tide of data collection, few have adopted advanced privacy-enhancing measures. However, majorities of Americans expect that a wide array of organizations should have limits on the length of time that they can retain records of their activities and communications. At the same time, Americans continue to express the belief that there should be greater limits on government surveillance programs. Additionally, they say it is important to preserve the ability to be anonymous for certain online activities.

Lots of detail in the reports.

Schneier on Security: The Logjam (and Another) Vulnerability against Diffie-Hellman Key Exchange

Logjam is a new attack against the Diffie-Hellman key-exchange protocol used in TLS. Basically:

The Logjam attack allows a man-in-the-middle attacker to downgrade vulnerable TLS connections to 512-bit export-grade cryptography. This allows the attacker to read and modify any data passed over the connection. The attack is reminiscent of the FREAK attack, but is due to a flaw in the TLS protocol rather than an implementation vulnerability, and attacks a Diffie-Hellman key exchange rather than an RSA key exchange. The attack affects any server that supports DHE_EXPORT ciphers, and affects all modern web browsers. 8.4% of the Top 1 Million domains were initially vulnerable.

Here’s the academic paper.

One of the problems with patching the vulnerability is that it breaks things:

On the plus side, the vulnerability has largely been patched thanks to consultation with tech companies like Google, and updates are available now or coming soon for Chrome, Firefox and other browsers. The bad news is that the fix rendered many sites unreachable, including the main website at the University of Michigan, which is home to many of the researchers that found the security hole.

This is a common problem with version downgrade attacks; patching them makes you incompatible with anyone who hasn’t patched. And it’s the vulnerability the media is focusing on.

Much more interesting is the other vulnerability that the researchers found:

Millions of HTTPS, SSH, and VPN servers all use the same prime numbers for Diffie-Hellman key exchange. Practitioners believed this was safe as long as new key exchange messages were generated for every connection. However, the first step in the number field sieve — the most efficient algorithm for breaking a Diffie-Hellman connection — is dependent only on this prime. After this first step, an attacker can quickly break individual connections.
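The key point is that the expensive number-field-sieve step depends only on the shared prime, while breaking each individual connection is then cheap. A toy finite-field Diffie-Hellman sketch (with a deliberately tiny prime; real deployments reused fixed, published 512-bit and larger primes):

```python
# Toy finite-field Diffie-Hellman. P and G are the shared, published
# group parameters; only the per-connection secrets are fresh. This
# tiny prime is purely illustrative.
import secrets

P = 0xFFFFFFFB  # 2**32 - 5, prime; stand-in for the shared group prime
G = 5

def dh_keypair():
    """Fresh ephemeral secret per connection, but P and G never change."""
    x = secrets.randbelow(P - 2) + 1
    return x, pow(G, x, P)

# Two independent "connections" using the same group:
for _ in range(2):
    a, A = dh_keypair()
    b, B = dh_keypair()
    assert pow(B, a, P) == pow(A, b, P)  # both sides derive the same key

# The number-field-sieve precomputation the researchers describe is a
# function of P alone, so an attacker pays that cost once and then
# attacks each connection's discrete log cheaply.
```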

The researchers believe the NSA has been using this attack:

We carried out this computation against the most common 512-bit prime used for TLS and demonstrate that the Logjam attack can be used to downgrade connections to 80% of TLS servers supporting DHE_EXPORT. We further estimate that an academic team can break a 768-bit prime and that a nation-state can break a 1024-bit prime. Breaking the single, most common 1024-bit prime used by web servers would allow passive eavesdropping on connections to 18% of the Top 1 Million HTTPS domains. A second prime would allow passive decryption of connections to 66% of VPN servers and 26% of SSH servers. A close reading of published NSA leaks shows that the agency’s attacks on VPNs are consistent with having achieved such a break.

Remember James Bamford’s 2012 comment about the NSA’s cryptanalytic capabilities:

According to another top official also involved with the program, the NSA made an enormous breakthrough several years ago in its ability to cryptanalyze, or break, unfathomably complex encryption systems employed by not only governments around the world but also many average computer users in the US. The upshot, according to this official: “Everybody’s a target; everybody with communication is a target.”

[…]

The breakthrough was enormous, says the former official, and soon afterward the agency pulled the shade down tight on the project, even within the intelligence community and Congress. “Only the chairman and vice chairman and the two staff directors of each intelligence committee were told about it,” he says. The reason? “They were thinking that this computing breakthrough was going to give them the ability to crack current public encryption.”

And remember Director of National Intelligence James Clapper’s introduction to the 2013 “Black Budget”:

Also, we are investing in groundbreaking cryptanalytic capabilities to defeat adversarial cryptography and exploit internet traffic.

It’s a reasonable guess that this is what both Bamford’s source and Clapper are talking about. It’s an attack that requires a lot of precomputation — just the sort of thing a national intelligence agency would go for.

But that requirement also speaks to its limitations. The NSA isn’t going to put this capability at collection points like Room 641A at AT&T’s San Francisco office: the precomputation table is too big, and the sensitivity of the capability is too high. More likely, an analyst identifies a target through some other means, and then looks for data by that target in databases like XKEYSCORE. Then he sends whatever ciphertext he finds to the Cryptanalysis and Exploitation Services (CES) group, which decrypts it if it can using this and other techniques.

Ross Anderson wrote about this earlier this month, almost certainly quoting Snowden:

As for crypto capabilities, a lot of stuff is decrypted automatically on ingest (e.g. using a “stolen cert”, presumably a private key obtained through hacking). Else the analyst sends the ciphertext to CES and they either decrypt it or say they can’t.

The analysts are instructed not to think about how this all works. This quote also applied to NSA employees:

Strict guidelines were laid down at the GCHQ complex in Cheltenham, Gloucestershire, on how to discuss projects relating to decryption. Analysts were instructed: “Do not ask about or speculate on sources or methods underpinning Bullrun.”

I remember the same instructions in documents I saw about the NSA’s CES.

Again, the NSA has put surveillance ahead of security. It never bothered to tell us that many of the “secure” encryption systems we were using were not secure. And we don’t know which other national intelligence agencies independently discovered and used this attack.

The good news is now that we know reusing prime numbers is a bad idea, we can stop doing it.

Schneier on Security: Research on Patch Deployment

New research indicates that it’s very hard to completely patch systems against vulnerabilities:

It turns out that it may not be that easy to patch vulnerabilities completely. Using WINE, we analyzed the patch deployment process for 1,593 vulnerabilities from 10 Windows client applications, on 8.4 million hosts worldwide [Oakland 2015]. We found that a host may be affected by multiple instances of the same vulnerability, because the vulnerable program is installed in several directories or because the vulnerability is in a shared library distributed with several applications. For example, CVE-2011-0611 affected both the Adobe Flash Player and Adobe Reader (Reader includes a library for playing .swf objects embedded in a PDF). Because updates for the two products were distributed using different channels, the vulnerable host population decreased at different rates, as illustrated in the figure on the left. For Reader patching started 9 days after disclosure (after patch for CVE-2011-0611 was bundled with another patch in a new Reader release), and the update reached 50% of the vulnerable hosts after 152 days.

For Flash patching started earlier, 3 days after disclosure, but the patching rate soon dropped (a second patching wave, suggested by the inflection in the curve after 43 days, eventually subsided as well). Perhaps for this reason, CVE-2011-0611 was frequently targeted by exploits in 2011, using both the .swf and PDF vectors.
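The diverging curves the researchers describe come down to measuring, for each day after disclosure, what fraction of hosts remain unpatched. A minimal sketch (the per-host patch times below are invented, loosely echoing the quoted numbers):

```python
# Invented per-host patch times (days after disclosure; None = never
# patched during the observation window). The curve in the paper is
# this fraction plotted over time, per distribution channel.
def fraction_vulnerable(patch_days, day):
    """Fraction of hosts still unpatched `day` days after disclosure."""
    still = sum(1 for d in patch_days if d is None or d > day)
    return still / len(patch_days)

reader_hosts = [9, 20, 150, 152, 200, None, None, 12, 160, 152]
flash_hosts  = [3, 5, 43, 44, 50, None, 4, 6, 45, 300]

for name, hosts in [("Reader", reader_hosts), ("Flash", flash_hosts)]:
    half = next(d for d in range(400)
                if fraction_vulnerable(hosts, d) <= 0.5)
    print(name, "reached 50% patched around day", half)
```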

Paper.

Schneier on Security: Spy Dust

Used by the Soviet Union during the Cold War:

A defecting agent revealed that powder containing both luminol and a substance called nitrophenyl pentadien (NPPD) had been applied to doorknobs, the floor mats of cars, and other surfaces that Americans living in Moscow had touched. They would then track or smear the substance over every surface they subsequently touched.

Schneier on Security: More on Chris Roberts and Avionics Security

Last month I blogged about security researcher Chris Roberts being detained by the FBI after tweeting about avionics security while on a United flight:

But to me, the fascinating part of this story is that a computer was monitoring the Twitter feed and understood the obscure references, alerted a person who figured out who wrote them, researched what flight he was on, and sent an FBI team to the Syracuse airport within a couple of hours. There’s some serious surveillance going on.

We know a lot more of the backstory from the FBI’s warrant application. He was interviewed by the FBI multiple times previously, and was able to take control of at least some of the planes’ controls during flight.

During two interviews with F.B.I. agents in February and March of this year, Roberts said he hacked the inflight entertainment systems of Boeing and Airbus aircraft, during flights, about 15 to 20 times between 2011 and 2014. In one instance, Roberts told the federal agents he hacked into an airplane’s thrust management computer and momentarily took control of an engine, according to an affidavit attached to the application for a search warrant.

“He stated that he successfully commanded the system he had accessed to issue the ‘CLB’ or climb command. He stated that he thereby caused one of the airplane engines to climb resulting in a lateral or sideways movement of the plane during one of these flights,” said the affidavit, signed by F.B.I. agent Mike Hurley.

Roberts also told the agents he hacked into airplane networks and was able “to monitor traffic from the cockpit system.”

According to the search warrant application, Roberts said he hacked into the systems by accessing the in-flight entertainment system using his laptop and an Ethernet cable.

Wired has more.

This makes the FBI’s behavior much more reasonable. They weren’t scanning the Twitter feed for random keywords; they were watching his account.

We don’t know if the FBI’s statements are true, though. But if Roberts was hacking an airplane while sitting in the passenger seat…wow is that a stupid thing to do.

From the Christian Science Monitor:

But Roberts’ statements and the FBI’s actions raise as many questions as they answer. For Roberts, the question is why the FBI is suddenly focused on years-old research that has long been part of the public record.

“This has been a known issue for four or five years, where a bunch of us have been stood up and pounding our chest and saying, ‘This has to be fixed,'” Roberts noted. “Is there a credible threat? Is something happening? If so, they’re not going to tell us,” he said.

Roberts isn’t the only one confused by the series of events surrounding his detention in April and the revelations about his interviews with federal agents.

“I would like to see a transcript (of the interviews),” said one former federal computer crimes prosecutor, speaking on condition of anonymity. “If he did what he said he did, why is he not in jail? And if he didn’t do it, why is the FBI saying he did?”

The real issue is that the avionics and the entertainment system are on the same network. That’s an even stupider thing to do. Also last month I wrote about the risks of hacking airplanes, and said that I wasn’t all that worried about it. Now I’m more worried.

Schneier on Security: United Airlines Offers Frequent Flier Miles for Finding Security Vulnerabilities

Vulnerabilities on the website only, not in airport security or in the avionics.

Schneier on Security: Friday Squid Blogging: NASA’s Squid Rover

NASA is funding a study for a squid rover that could explore Europa’s oceans.

As usual, you can also use this squid post to talk about the security stories in the news that I haven’t covered.

Schneier on Security: Microbe Biometric

Interesting:

Franzosa and colleagues used publicly available microbiome data produced through the Human Microbiome Project (HMP), which surveyed microbes in the stool, saliva, skin, and other body sites from up to 242 individuals over a months-long period. The authors adapted a classical computer science algorithm to combine stable and distinguishing sequence features from individuals’ initial microbiome samples into individual-specific “codes.” They then compared the codes to microbiome samples collected from the same individuals at follow-up visits and to samples from independent groups of individuals.

The results showed that the codes were unique among hundreds of individuals, and that a large fraction of individuals’ microbial “fingerprints” remained stable over a one-year sampling period. The codes constructed from gut samples were particularly stable, with more than 80% of individuals identifiable up to a year after the sampling period.
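The paper's code construction is more sophisticated, but the idea can be sketched as a greedy search for a small feature set that no other individual's sample contains in full (the algorithm and sample data below are illustrative, not the authors'):

```python
# Toy sketch of the "code" idea: greedily pick features from an
# individual's first sample until no other individual's sample
# contains all of them; re-identification then checks whether a
# later sample still contains the code.
def build_code(own, others):
    """Greedy: choose features of `own` until every sample in
    `others` is missing at least one chosen feature."""
    code, unhit = set(), list(others)
    while unhit:
        # feature of ours absent from the most remaining confounders
        f = max(own, key=lambda x: sum(x not in s for s in unhit))
        if all(f in s for s in unhit):
            break  # nothing left that discriminates
        code.add(f)
        unhit = [s for s in unhit if f in s]
    return code

alice_t0 = {"m1", "m2", "m7"}           # initial microbiome features
bob      = {"m1", "m3"}
carol    = {"m2", "m3", "m7"}
code = build_code(alice_t0, [bob, carol])

alice_t1 = {"m1", "m2", "m7", "m9"}     # follow-up sample, mostly stable
assert code <= alice_t1                 # Alice still matches her code
assert not code <= bob and not code <= carol
```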

Schneier on Security: Eighth Movie-Plot Threat Contest Semifinalists

On April 1, I announced the Eighth Movie Plot Threat Contest: demonstrate the evils of encryption.

Not a whole lot of good submissions this year. Possibly this contest has run its course, and there’s not a whole lot of interest left. On the other hand, it’s heartening to know that there aren’t a lot of encryption movie-plot threats out there.

Anyway, here are the semifinalists.

  1. Child pornographers.
  2. Bombing the NSA.
  3. Torture.
  4. Terrorists and a vaccine.
  5. Election systems.

Cast your vote by number here; voting closes at the end of the month.

Contest.

Previous contests.

Schneier on Security: In Which I Collide with Admiral Rogers

Universe does not explode.

Photo here.

Schneier on Security: Admiral Rogers Speaking at the Joint Service Academy Cyber Security Summit

Admiral Mike Rogers gave the keynote address at the Joint Service Academy Cyber Security Summit today at West Point. He started by explaining the four tenets of security that he thinks about.

First: partnerships. This includes government, civilian, everyone. Capabilities, knowledge, and insight of various groups, and aligning them to generate better outcomes for everyone. Ability to generate and share insight and knowledge, and to do that in a timely manner.

Second: innovation. It’s about much more than just technology. It’s about ways to organize, values, training, and so on. We need to think about innovation very broadly.

Third: technology. This is a technologically based problem, and we need to apply technology to defense as well.

Fourth: human capital. If we don’t get people working right, all of this is doomed to fail. We need to build security workforces inside and outside of military. We need to keep them current in a world of changing technology.

So, what is the Department of Defense doing? They’re investing in cyber, both because it’s a critical part of future fighting of wars and because of the mission to defend the nation.

Rogers then explained the five strategic goals listed in the recent DoD cyber strategy:

  1. Build and maintain ready forces and capabilities to conduct cyberspace operations;
  2. Defend the DoD information network, secure DoD data, and mitigate risks to DoD missions;
  3. Be prepared to defend the U.S. homeland and U.S. vital interests from disruptive or destructive cyberattacks of significant consequence;
  4. Build and maintain viable cyber options and plan to use those options to control conflict escalation and to shape the conflict environment at all stages;
  5. Build and maintain robust international alliances and partnerships to deter shared threats and increase international security and stability.

Expect to see more detailed policy around these goals in the coming months.

What is the role of the US CyberCommand and the NSA in all of this? The CyberCommand has three missions related to the five strategic goals. They defend DoD networks. They create the cyber workforce. And, if directed, they defend national critical infrastructure.

At one point, Rogers said that he constantly reminds his people: “If it was designed by man, it can be defeated by man.” I hope he also tells this to the FBI when they talk about needing third-party access to encrypted communications.

All of this has to be underpinned by a cultural ethos that recognizes the importance of professionalism and compliance. Every person with a keyboard is both a potential asset and a threat. There need to be well-defined processes and procedures within DoD, and a culture of following them.

What’s the threat dynamic, and what’s the nature of the world? The threat is going to increase; it’s going to get worse, not better; cyber is a great equalizer. Cyber doesn’t recognize physical geography. Four “prisms” to look at threat: criminals, nation states, hacktivists, groups wanting to do harm to the nation. This fourth group is increasing. Groups like ISIL are going to use the Internet to cause harm. Also embarrassment: releasing documents, shutting down services, and so on.

We spend a lot of time thinking about how to stop attackers from getting in; we need to think more about how to get them out once they’ve gotten in — and how to continue to operate even though they are in. (That was especially nice to hear, because that’s what I’m doing at my company.) Sony was a “wake-up call”: a nation-state using cyber for coercion. It was theft of intellectual property, denial of service, and destruction. And it was important for the US to acknowledge the attack, attribute it, and retaliate.

Last point: “Total force approach to the problem.” It’s not just about people in uniform. It’s about active duty military, reserve military, corporations, government contractors — everyone. We need to work on this together. “I am not interested in endless discussion…. I am interested in outcomes.” “Cyber is the ultimate team sport.” There’s no single entity, or single technology, or single anything, that will solve all of this. He wants to partner with the corporate world, and to do it in a way that benefits both.

First question was about the domains and missions of the respective services. Rogers talked about the inherent expertise that each service brings to the problem, and how to use cyber to extend that expertise — and the mission. The goal is to create a single integrated cyber force, but not a single service. Cyber occurs in a broader context, and that context is applicable to all the military services. We need to build on their individual expertises and contexts, and to apply it in an integrated way. Similar to how we do special forces.

Second question was about values, intention, and what’s at risk. Rogers replied that any structure for the NSA has to integrate with the nation’s values. He talked about the value of privacy. He also talked about “the security of the nation.” Both are imperatives, and we need to achieve both at the same time. The problem is that the nation is polarized; the threat is getting worse at the same time trust is decreasing. We need to figure out how to improve trust.

Third question was about DoD protecting commercial cyberspace. Rogers replied that the DHS is the lead organization in this regard, and DoD provides capability through that civilian authority. Any DoD partnership with the private sector will go through DHS.

Fourth question: How will DoD reach out to corporations, both established and start-ups? Many ways. By providing people to the private sector. Funding companies, through mechanisms like the CIA’s In-Q-Tel. And some sort of innovation capability. Those are the three main vectors, but more important is that the DoD mindset has to change. DoD has traditionally been very insular; in this case, more partnerships are required.

Final question was about the NSA sharing security information in some sort of semi-classified way. Rogers said that there are a lot of internal conversations about doing this. It’s important.

In all, nothing really new or controversial.

These comments were recorded — I can’t find them online now — and are on the record. Much of the rest of the summit was held under Chatham House Rules. I participated in a panel on “Crypto Wars 2015” with Matt Blaze and a couple of government employees.

Schneier on Security: License Plate Scanners Hidden in Fake Cactus

The city of Paradise Valley, AZ, is hiding license plate scanners in fake cactus plants.

Schneier on Security: German Cryptanalysis of the M-209

This 1947 document describes a German machine to cryptanalyze the American M-209 mechanical encryption machine. I can’t figure out anything about how it works.

Schneier on Security: Amateurs Produce Amateur Cryptography

Anyone can design a cipher that he himself cannot break. This is why you should uniformly distrust amateur cryptography, and why you should only use published algorithms that have withstood broad cryptanalysis. All cryptographers know this, but non-cryptographers do not. And this is why we repeatedly see bad amateur cryptography in fielded systems.

The latest is the cryptography in the Open Smart Grid Protocol, which is so bad as to be laughable. From the paper:

Dumb Crypto in Smart Grids: Practical Cryptanalysis of the Open Smart Grid Protocol

Philipp Jovanovic and Samuel Neves

Abstract: This paper analyses the cryptography used in the Open Smart Grid Protocol (OSGP). The authenticated encryption (AE) scheme deployed by OSGP is a non-standard composition of RC4 and a home-brewed MAC, the “OMA digest.”

We present several practical key-recovery attacks against the OMA digest. The first and basic variant can achieve this with a mere 13 queries to an OMA digest oracle and negligible time complexity. A more sophisticated version breaks the OMA digest with only 4 queries and a time complexity of about 2^25 simple operations. A different approach only requires one arbitrary valid plaintext-tag pair, and recovers the key in an average of 144 message verification queries, or one ciphertext-tag pair and 168 ciphertext verification queries.

Since the encryption key is derived from the key used by the OMA digest, our attacks break both confidentiality and authenticity of OSGP.
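To make "key recovery through digest-oracle queries" concrete, here is a deliberately weak toy MAC (not the actual OMA digest) where a single chosen query leaks the entire key:

```python
# A deliberately weak toy MAC -- NOT the actual OMA digest -- to show
# what "key recovery via digest-oracle queries" means in practice.
# Here the tag is just message XOR key, so one chosen query to the
# oracle recovers the whole key; breaking OSGP's digest took a
# handful of queries rather than one.
import secrets

KEY = secrets.token_bytes(8)  # the secret the attacker wants

def toy_mac(msg: bytes) -> bytes:
    """Home-brewed 'MAC': XOR each message byte with a key byte."""
    return bytes(m ^ k for m, k in zip(msg, KEY))

tag = toy_mac(bytes(8))  # attacker's single oracle query: all-zero message
assert tag == KEY        # x ^ 0 == x, so the tag IS the key
```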

My still relevant 1998 essay: “Memo to the Amateur Cipher Designer.” And my 1999 essay on cryptographic snake oil.

ThreatPost article. BoingBoing post.

Note: That first sentence has been called “Schneier’s Law,” although the sentiment is much older.

Schneier on Security: More on the NSA’s Capabilities

Ross Anderson summarizes a meeting in Princeton where Edward Snowden was “present.”

Third, the leaks give us a clear view of an intelligence analyst’s workflow. She will mainly look in Xkeyscore which is the Google of 5eyes comint; it’s a federated system hoovering up masses of stuff not just from 5eyes own assets but from other countries where the NSA cooperates or pays for access. Data are “ingested” into a vast rolling buffer; an analyst can run a federated search, using a selector (such as an IP address) or fingerprint (something that can be matched against the traffic). There are other such systems: “Dancing oasis” is the middle eastern version. Some xkeyscore assets are actually compromised third-party systems; there are multiple cases of rooted SMS servers that are queried in place and the results exfiltrated. Others involve vast infrastructure, like Tempora. If data in Xkeyscore are marked as of interest, they’re moved to Pinwale to be memorialised for 5+ years. This is one function of the MDRs (massive data repositories, now more tactfully renamed mission data repositories) like Utah. At present storage is behind ingestion. Xkeyscore buffer times just depend on volumes and what storage they managed to install, plus what they manage to filter out.

As for crypto capabilities, a lot of stuff is decrypted automatically on ingest (e.g. using a “stolen cert,” presumably a private key obtained through hacking). Else the analyst sends the ciphertext to CES and they either decrypt it or say they can’t. There’s no evidence of a “wow” cryptanalysis; it was key theft, or an implant, or a predicted RNG or supply-chain interference. Cryptanalysis has been seen of RC4, but not of elliptic curve crypto, and there’s no sign of exploits against other commonly used algorithms. Of course, the vendors of some products have been coopted, notably skype. Homegrown crypto is routinely problematic, but properly implemented crypto keeps the agency out; gpg ciphertexts with RSA 1024 were returned as fails.

[…]

What else might we learn from the disclosures when designing and implementing crypto? Well, read the disclosures and use your brain. Why did GCHQ bother stealing all the SIM card keys for Iceland from Gemalto, unless they have access to the local GSM radio links? Just look at the roof panels on US or UK embassies, that look like concrete but are actually transparent to RF. So when designing a protocol ask yourself whether a local listener is a serious consideration.

[…]

On the policy front, one of the eye-openers was the scale of intelligence sharing — it’s not just 5 eyes, but 15 or 35 or even 65 once you count all the countries sharing stuff with the NSA. So how does governance work? Quite simply, the NSA doesn’t care about policy. Their OGC has 100 lawyers whose job is to “enable the mission”; to figure out loopholes or new interpretations of the law that let stuff get done. How do you restrain this? Could you use courts in other countries, that have stronger human-rights law? The precedents are not encouraging. New Zealand’s GCSB was sharing intel with Bangladesh agencies while the NZ government was investigating them for human-rights abuses. Ramstein in Germany is involved in all the drone killings, as fibre is needed to keep latency down low enough for remote vehicle pilots. The problem is that the intelligence agencies figure out ways to shield the authorities from culpability, and this should not happen.

[…]

The spooks’ lawyers play games saying for example that they dumped content, but if you know IP address and file size you often have it; and IP address is a good enough pseudonym for most intel / LE use. They deny that they outsource to do legal arbitrage (e.g. NSA spies on Brits and GCHQ returns the favour by spying on Americans). Are they telling the truth? In theory there will be an MOU between NSA and the partner agency stipulating respect for each others’ laws, but there can be caveats, such as a classified version which says “this is not a binding legal document.” The sad fact is that law and legislators are losing the capability to hold people in the intelligence world to account, and also losing the appetite for it.

Worth reading in full.

Schneier on Security: Friday Squid Blogging: Squid Chair

Squid chair.

As usual, you can also use this squid post to talk about the security stories in the news that I haven’t covered.

Schneier on Security: Cybersecurity Summer Camps

For high-school kids.

Schneier on Security: Stealing a Billion

It helps if you own the banks:

The report said Shor and his associates worked together in 2012 to buy a controlling stake in three Moldovan banks and then gradually increased the banks’ liquidity through a series of complex transactions involving loans being passed between the three banks and foreign entities.

The three banks then issued multimillion-dollar loans to companies that Shor either controlled or was connected to, the report said.

In the end, over $767 million disappeared from the banks in just three days through complex transactions.

A large portion of this money was transferred to offshore entities connected to Shor, according to the report. Some of the money was then deposited into Latvian bank accounts under the names of various foreigners.

Moldova’s central bank was subsequently forced to bail out the three banks with $870 million in emergency loans, a move designed to keep the economy afloat.

It’s an insider attack, where the insider is in charge.

What’s interesting to me is not the extent of the fraud, but how electronic banking makes this sort of thing easier. And possibly easier to investigate as well.

Schneier on Security: Online Dating Scams

Interesting research:

We identified three types of scams happening on Jiayuan. The first one involves advertising of escort services or illicit goods, and is very similar to traditional spam. The other two are far more interesting and specific to the online dating landscape. One type of scammers are what we call swindlers. For this scheme, the scammer starts a long-distance relationship with an emotionally vulnerable victim, and eventually asks her for money, for example to purchase the flight ticket to visit her. Needless to say, after the money has been transferred the scammer disappears. Another interesting type of scams that we identified are what we call dates for profit. In this scheme, attractive young ladies are hired by the owners of fancy restaurants. The scam then consists in having the ladies contact people on the dating site, taking them on a date at the restaurant, having the victim pay for the meal, and never arranging a second date. This scam is particularly interesting, because there are good chances that the victim will never realize that he’s been scammed — in fact, he probably had a good time.

Schneier on Security: Another Example of Cell Phone Metadata Forensic Surveillance

Matthew Cole explains how the Italian police figured out how the CIA kidnapped Abu Omar in Milan. Interesting use of cell phone metadata, showing how valuable it is for intelligence purposes.

Schneier on Security: An Example of Cell Phone Metadata Forensic Surveillance

In this long article on the 2005 assassination of Rafik Hariri in Beirut, there’s a detailed section on what the investigators were able to learn from the cell phone metadata:

At Eid’s request, a judge ordered Lebanon’s two cellphone companies, Alfa and MTC Touch, to produce records of calls and text messages in Lebanon in the four months before the bombing. Eid then studied the records in secret for months. He focused on the phone records of Hariri and his entourage, looking at whom they called, where they went, whom they met and when. He also followed where Adass, the supposed suicide bomber, spent time before he disappeared. He looked at all the calls that took place along the route taken by Hariri’s entourage on the day of the assassination. Always he looked for cause and effect. How did one call lead to the next? “He was brilliant, just brilliant,” the senior U.N. investigator told me. “He himself, on his own, developed a simple but amazingly efficient program to set about mining this massive bank of data.”

The simple algorithm quickly revealed a peculiar pattern. In October 2004, just after Hariri resigned, a certain cluster of cellphones began following him and his now-reduced motorcade wherever they went. These phones stayed close day and night, until the day of the bombing, when nearly all 63 phones in the group immediately went dark and never worked again.

[…]

The investigators now turned their full attention to the cellphone records. Building on Eid’s work, they determined that the assassins worked in groups, each with a leader and each adhering to specific procedures. Everyone in the group called the leader, and he called everyone in the group, but the lower-level operatives never called one another.

The investigators gave each group a color. The green group consisted of 18 Alfa phones, purchased with fake identification from two shops in South Beirut in July and August 2004. The purpose of the fake IDs was not to defraud Alfa out of payment; every month from September 2004 to May 2005, someone went to an Alfa office and paid all 18 bills in cash, without leaving any clue to his identity. The total phone bill for the green network, including activation fees, was $7,375, a prodigious amount, considering that 15 of the green group’s 18 phones went almost entirely unused.

The first spike in call activity occurred in September 2004, immediately after Hariri announced his resignation. The investigators contend that the green group was at the center of the conspiracy. The phone number 3140023 belonged to the top leader, and the numbers 3159300 and 3150071 belonged to his two deputies. (He called them and they called him, but with those phones, they never called each other.) The two deputies carried phones belonging to other groups, through which they passed on instructions to the other participants in the operation. When a member of one group would call a group leader, the group leader would often follow up by switching to a green phone and calling the supreme leader, who was nearly always in South Beirut, where Hezbollah keeps its headquarters.

On Oct. 20, 2004, the day Hariri left office and his security detail was significantly reduced, the blue group went into operation. It originally worked according to the same rules as the green group, but its active membership increased from three phones to 15, with seven connected to Alfa and eight to MTC Touch. All of the blue phones were prepaid. Some were acquired as early as 2003 and had seen little or no use. The people who bought them also gave false identification, and again money seemed to be in plentiful supply. The minutes that expired each month went largely unused, but the phones were loaded again and again. When the blue group went dark, the phones still had unused minutes worth $4,287.

The prosecutors say the blue group followed Hariri’s movements. On the morning of Oct. 20, its members were already deployed around Quraitem Palace. At 10:30 a.m., Hariri set out toward Parliament and then to the presidential palace, where Lahoud was waiting to receive his resignation. The cell towers picked up the blue group’s members moving with him and calling their chief. From then on, the blue phones trailed Hariri nearly everywhere: to Parliament, to meetings with political leaders, to long lunches at the Saint-Georges Yacht Club & Marina. When Hariri was at his home, so were they. When he flew abroad, they moved with him to the airport and then stopped operating until he returned, when they would pick up the trail again.

Eventually, the yellow group was added….

There’s a lot more. It’s section 6 of the article.
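The core of Eid’s analysis — finding phones whose movements track a target’s and that all go silent on the same day — can be sketched in a few lines. This is purely illustrative; the record format, field names, and thresholds below are assumptions, not details from the actual investigation.

```python
from collections import defaultdict

def co_travel_suspects(records, target_phone, dark_day, min_overlap=10):
    """records: iterable of (phone, day, tower) tuples.

    Returns phones that repeatedly shared cell towers with the target
    and were never seen again after dark_day, sorted by overlap."""
    # Build each phone's set of (day, tower) sightings and its last active day.
    sightings = defaultdict(set)
    last_seen = {}
    for phone, day, tower in records:
        sightings[phone].add((day, tower))
        if phone not in last_seen or day > last_seen[phone]:
            last_seen[phone] = day

    target_trail = sightings[target_phone]
    suspects = []
    for phone, trail in sightings.items():
        if phone == target_phone:
            continue
        overlap = len(trail & target_trail)
        # Flag phones that co-travel with the target and then go dark.
        if overlap >= min_overlap and last_seen[phone] <= dark_day:
            suspects.append((phone, overlap))
    return sorted(suspects, key=lambda p: -p[1])
```

With enough records, a cluster like the green or blue group would surface as a set of phones with high overlap scores and an identical last-seen date.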

Schneier on Security: The NSA’s Voice-to-Text Capabilities

New article from the Intercept based on the Snowden documents.

Schneier on Security: Easily Cracking a Master Combination Lock

Impressive.

Kamkar told Ars his Master Lock exploit started with a well-known vulnerability that allows Master Lock combinations to be cracked in 100 or fewer tries. He then physically broke open a combination lock and noticed the resistance he observed was caused by two lock parts that touched in a way that revealed important clues about the combination. (He likened the Master Lock design to a side channel in cryptographic devices that can be exploited to obtain the secret key.) Kamkar then made a third observation that was instrumental to his Master Lock exploit: the first and third digit of the combination, when divided by four, always return the same remainder. By combining the insights from all three weaknesses he devised the attack laid out in the video.
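The same-remainder observation alone cuts the combination space by a factor of four, which a quick enumeration confirms. (The numbers here just check the keyspace math; Kamkar's full attack combines this with physical measurements to get the count far lower.)

```python
# On a standard 40-position dial, count how many three-digit combinations
# satisfy the observed constraint: first % 4 == third % 4.
DIAL = 40

total = DIAL ** 3
constrained = sum(
    1
    for first in range(DIAL)
    for second in range(DIAL)
    for third in range(DIAL)
    if first % 4 == third % 4
)

print(total, constrained)  # 64000 16000
```

For any first digit, only 10 of the 40 possible third digits share its remainder mod 4, so the space drops from 64,000 to 16,000 before any physical probing of the lock.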

Schneier on Security: Detecting QUANTUMINSERT

Fox-IT has a blog post (and has published Snort rules) on how to detect man-on-the-side Internet attacks like the NSA’s QUANTUMINSERT.

From a Wired article:

But hidden within another document leaked by Snowden was a slide that provided a few hints about detecting Quantum Insert attacks, which prompted the Fox-IT researchers to test a method that ultimately proved to be successful. They set up a controlled environment and launched a number of Quantum Insert attacks against their own machines to analyze the packets and devise a detection method.

According to the Snowden document, the secret lies in analyzing the first content-carrying packets that come back to a browser in response to its GET request. One of the packets will contain content for the rogue page; the other will be content for the legitimate site sent from a legitimate server. Both packets, however, will have the same sequence number. That, it turns out, is a dead giveaway.

Here’s why: When your browser sends a GET request to pull up a web page, it sends out a packet containing a variety of information, including the source and destination IP address of the browser as well as so-called sequence and acknowledge numbers, or ACK numbers. The responding server sends back a response in the form of a series of packets, each with the same ACK number as well as a sequential number so that the series of packets can be reconstructed by the browser as each packet arrives to render the web page.

But when the NSA or another attacker launches a Quantum Insert attack, the victim’s machine receives duplicate TCP packets with the same sequence number but with a different payload. “The first TCP packet will be the ‘inserted’ one while the other is from the real server, but will be ignored by the [browser],” the researchers note in their blog post. “Of course it could also be the other way around; if the QI failed because it lost the race with the real server response.”

Although it’s possible that in some cases a browser will receive two packets with the same sequence number from a legitimate server, they will still contain the same general content; a Quantum Insert packet, however, will have content with significant differences.
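The detection logic described above reduces to a simple rule: within one TCP flow, the same sequence number arriving twice with different payloads is suspicious. Here is a minimal sketch of that check; a real detector (like Fox-IT's published Snort rules) operates on live reassembled streams, and the packet representation below is an assumption for illustration.

```python
def find_inserts(packets):
    """packets: iterable of dicts with 'flow' (a src/dst tuple),
    'seq' (TCP sequence number), and 'payload' (bytes).

    Returns (flow, seq) pairs where the same sequence number arrived
    twice with different payloads -- the QUANTUMINSERT giveaway."""
    seen = {}
    alerts = []
    for pkt in packets:
        key = (pkt["flow"], pkt["seq"])
        if key in seen and seen[key] != pkt["payload"]:
            # Legitimate retransmissions repeat the same content;
            # a raced-in packet carries a different payload.
            alerts.append(key)
        seen.setdefault(key, pkt["payload"])
    return alerts
```

In practice a detector would compare payloads fuzzily (checksums, prefix matching) to tolerate benign differences, but the principle is exactly the duplicate-sequence-number test the researchers describe.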

It’s important we develop defenses against these attacks, because everyone is using them.

Schneier on Security: Friday Squid Blogging: Ceramic Squid Planters

Nice.