Author Archive

Schneier on Security: AT&T Does Not Care about Your Privacy

This post was syndicated from: Schneier on Security and was written by: schneier. Original post: at Schneier on Security

AT&T’s CEO believes that the company should not offer robust security to its customers:

But tech company leaders aren’t all joining the fight against the deliberate weakening of encryption. AT&T CEO Randall Stephenson said this week that AT&T, Apple, and other tech companies shouldn’t have any say in the debate.

“I don’t think it is Silicon Valley’s decision to make about whether encryption is the right thing to do,” Stephenson said in an interview with The Wall Street Journal. “I understand [Apple CEO] Tim Cook’s decision, but I don’t think it’s his decision to make.”

His position is extreme in its disregard for the privacy of his customers. If he doesn’t believe that companies should have any say in what levels of privacy they offer their customers, you can be sure that AT&T won’t offer any robust privacy or security to you.

Does he have any clue what an anti-market position this is? He says that it is not the business of Silicon Valley companies to offer product features that might annoy the government. The “debate” about what features commercial products should have should happen elsewhere — presumably within the government. I thought we all agreed that state-controlled economies just don’t work.

My guess is that he doesn’t realize what an extreme position he’s taking by saying that product design isn’t the decision of companies to make. More likely, AT&T is so deep in bed with the NSA and FBI that he’s simply saying things he believes justify his position.

Here’s the original, behind a paywall.

Schneier on Security: 10,000-Year-Old Warfare

Evidence of primitive warfare from Kenya’s Rift Valley.

Schneier on Security: The 2016 National Threat Assessment

It’s National Threat Assessment Day. Published annually by the Director of National Intelligence, the “Worldwide Threat Assessment of the US Intelligence Community” is the US intelligence community’s one time to publicly talk about the threats in general. The document is the result of weeks of work and input from lots of people. For Clapper, it’s his chance to shape the dialog, set priorities, and prepare Congress for budget requests. The document is an unclassified summary of a much longer classified document. And the day also includes Clapper testifying before the Senate Armed Services Committee. (You’ll remember his now-famous lie to the committee in 2013.)

The document covers a wide variety of threats, from terrorism to organized crime, from energy politics to climate change. Although the document clearly says “The order of the topics presented in this statement does not necessarily indicate the relative importance or magnitude of the threat in the view of the Intelligence Community,” it does. And like 2015 and 2014, cyber threats are #1 — although this year it’s called “Cyber and Technology.”

The consequences of innovation and increased reliance on information technology in the next few years on both our society’s way of life in general and how we in the Intelligence Community specifically perform our mission will probably be far greater in scope and impact than ever. Devices, designed and fielded with minimal security requirements and testing, and an ever-increasing complexity of networks could lead to widespread vulnerabilities in civilian infrastructures and US Government systems. These developments will pose challenges to our cyber defenses and operational tradecraft but also create new opportunities for our own intelligence collectors.

Especially note that last clause. The FBI might hate encryption, but the intelligence community is not going dark.

The document then calls out a few specifics like the Internet of Things and Artificial Intelligence — no surprise, considering other recent statements from government officials. This is the “…and Technology” part of the category.

More specifically:

Future cyber operations will almost certainly include an increased emphasis on changing or manipulating data to compromise its integrity (i.e., accuracy and reliability) to affect decisionmaking, reduce trust in systems, or cause adverse physical effects. Broader adoption of IoT devices and AI, in settings such as public utilities and health care, will only exacerbate these potential effects. Russian cyber actors, who post disinformation on commercial websites, might seek to alter online media as a means to influence public discourse and create confusion. Chinese military doctrine outlines the use of cyber deception operations to conceal intentions, modify stored data, transmit false data, manipulate the flow of information, or influence public sentiments, all to induce errors and miscalculation in decisionmaking.

Russia is the number one threat, followed by China, Iran, North Korea, and non-state actors:

Russia is assuming a more assertive cyber posture based on its willingness to target critical infrastructure systems and conduct espionage operations even when detected and under increased public scrutiny. Russian cyber operations are likely to target US interests to support several strategic objectives: intelligence gathering to support Russian decisionmaking in the Ukraine and Syrian crises, influence operations to support military and political objectives, and continuing preparation of the cyber environment for future contingencies.

Comments on China refer to the cybersecurity agreement from last September:

China continues to have success in cyber espionage against the US Government, our allies, and US companies. Beijing also selectively uses cyberattacks against targets it believes threaten Chinese domestic stability or regime legitimacy. We will monitor compliance with China’s September 2015 commitment to refrain from conducting or knowingly supporting cyber-enabled theft of intellectual property with the intent of providing competitive advantage to companies or commercial sectors. Private-sector security experts have identified limited ongoing cyber activity from China but have not verified state sponsorship or the use of exfiltrated data for commercial gain.

Also interesting are the comments on non-state actors, which discuss propaganda campaigns from ISIL, criminal ransomware, and hacker tools.

Schneier on Security: Large-Scale FBI Hacking

As part of a child pornography investigation, the FBI hacked into over 1,300 computers.

But after Playpen was seized, it wasn’t immediately closed down, unlike previous dark web sites that have been shuttered by law enforcement. Instead, the FBI ran Playpen from its own servers in Newington, Virginia, from February 20 to March 4, reads a complaint filed against a defendant in Utah. During this time, the FBI deployed what is known as a network investigative technique (NIT), the agency’s term for a hacking tool.

While Playpen was being run out of a server in Virginia, and the hacking tool was infecting targets, “approximately 1300 true internet protocol (IP) addresses were identified during this time,” according to the same complaint.

The FBI seems to have obtained a single warrant, but it’s hard to believe that a legal warrant could allow the police to hack 1,300 different computers. We do know that the FBI is very vague about the extent of its operations in warrant applications. And surely we need actual public debate about this sort of technique.

Also, “Playpen” is a super-creepy name for a child porn site. I feel icky just typing it.

Schneier on Security: Data and Goliath Published in Paperback

Today, Data and Goliath is being published in paperback.

Everyone tells me that the paperback version sells better than the hardcover, even though it’s a year later. I can’t really imagine that there are tens of thousands of people who wouldn’t spend $28 on a hardcover but are happy to spend $18 on the paperback, but we’ll see. (Amazon has the hardcover for $19, the paperback for $11.70, and the Kindle edition for $14.60, plus shipping, if any. I am still selling signed hardcovers for $28 including domestic shipping — more for international.)

I got a box of paperbacks from my publisher last week. They look good. Not as good as the hardcover, but good for a trade paperback.

Schneier on Security: Exploiting Google Maps for Fraud

The New York Times has a long article on fraudulent locksmiths. The scam is a basic one: quote a low price on the phone, but charge much more once you show up and do the work. But the method by which the scammers get victims is new. They exploit Google’s crowdsourced system for identifying businesses on its maps. The scammers convince Google that they have a local address, which Google displays to its users who are searching for local businesses.

But they involve chicanery with two platforms: Google My Business, essentially the company’s version of the Yellow Pages, and Map Maker, which is Google’s crowdsourced online map of the world. The latter allows people around the planet to log in to the system and input data about streets, companies and points of interest.

Both Google My Business and Map Maker are a bit like Wikipedia, insofar as they are largely built and maintained by millions of contributors. Keeping the system open, with verification, gives countless businesses an invaluable online presence. Google officials say that the system is so good that many local companies do not bother building their own websites. Anyone who has ever navigated using Google Maps knows the service is a technological wonder.

But the very quality that makes Google’s systems accessible to companies that want to be listed makes them vulnerable to pernicious meddling.

“This is what you get when you rely on crowdsourcing for all your ‘up to date’ and ‘relevant’ local business content,” Mr. Seely said. “You get people who contribute meaningful content, and you get people who abuse the system.”

The scam is growing:

Lead gens have their deepest roots in locksmithing, but the model has migrated to an array of services, including garage door repair, carpet cleaning, moving and home security. Basically, they surface in any business where consumers need someone in the vicinity to swing by and clean, fix, relocate or install something.

What’s interesting to me are the economic incentives involved:

Only Google, it seems, can fix Google. The company is trying, its representatives say, by, among other things, removing fake information quickly and providing a “Report a Problem” tool on the maps. After looking over the fake Locksmith Force building, a bunch of other lead-gen advertisers in Phoenix and that Mountain View operation with more than 800 websites, Google took action.

Not only has the fake Locksmith Force building vanished from Google Maps, but the company no longer turns up in a “locksmith Phoenix” search. At least not in the first 20 pages. Nearly all the other spammy locksmiths pointed out to Google have disappeared from results, too.

“We’re in a constant arms race with local business spammers who, unfortunately, use all sorts of tricks to try to game our system and who’ve been a thorn in the Internet’s side for over a decade,” a Google spokesman wrote in an email. “As spammers change their techniques, we’re continually working on new, better ways to keep them off Google Search and Maps. There’s work to do, and we want to keep doing better.”

There was no mention of a stronger verification system or a beefed-up spam team at Google. Without such systemic solutions, Google’s critics say, the change to local results will not rise even to the level of superficial.

And that’s Google’s best option, really. They’re not the ones losing money from these scammers, so they’re not incentivized to fix the problem. Unless it rises to the level of affecting user trust in the entire system, they’re just going to do superficial things.

This is exactly the sort of market failure that government regulation needs to fix.

Schneier on Security: Friday Squid Blogging: Squid Knitting Pattern

Surprisingly realistic for a knitted stuffed animal.

Schneier on Security: NSA Reorganizing

The NSA is undergoing a major reorganization, combining its attack and defense sides into a single organization:

In place of the Signals Intelligence and Information Assurance directorates (the organizations that historically have spied on foreign targets and defended classified networks against spying, respectively) the NSA is creating a Directorate of Operations that combines the operational elements of each.

It’s going to be difficult, since their missions and culture are so different.

The Information Assurance Directorate (IAD) seeks to build relationships with private-sector companies and help find vulnerabilities in software, most of which officials say wind up being disclosed. It issues software guidance and tests the security of systems to help strengthen their defenses.

But the other side of the NSA house, which looks for vulnerabilities that can be exploited to hack a foreign network, is much more secretive.

“You have this kind of clash between the closed environment of the sigint mission and the need of the information-assurance team to be out there in the public and be seen as part of the solution,” said a second former official. “I think that’s going to be a hard trick to pull off.”

I think this will make it even harder to trust the NSA. In my book Data and Goliath, I recommended separating the attack and defense missions of the NSA even further, breaking up the agency. (I also wrote about that idea here.)

And missing in the reorg is how US Cyber Command’s offensive and defensive capabilities relate to the NSA’s. That seems pretty important, too.

Schneier on Security: Tracking Anonymous Web Users

This research shows how to track e-commerce users better across multiple sessions, even when they do not provide unique identifiers such as user IDs or cookies.

Abstract: Targeting individual consumers has become a hallmark of direct and digital marketing, particularly as it has become easier to identify customers as they interact repeatedly with a company. However, across a wide variety of contexts and tracking technologies, companies find that customers cannot be consistently identified, which leads to a substantial fraction of anonymous visits in any CRM database. We develop a Bayesian imputation approach that allows us to probabilistically assign anonymous sessions to users, while accounting for a customer’s demographic information, frequency of interaction with the firm, and activities the customer engages in. Our approach simultaneously estimates a hierarchical model of customer behavior while probabilistically imputing which customers made the anonymous visits. We present both synthetic and real data studies that demonstrate our approach makes more accurate inference about individual customers’ preferences and responsiveness to marketing, relative to common approaches to anonymous visits: nearest-neighbor matching or ignoring the anonymous visits. We show how companies who use the proposed method will be better able to target individual customers, as well as infer how many of the anonymous visits are made by new customers.
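The core trick, probabilistically assigning an anonymous visit to known users, can be caricatured with a simple Bayes’ rule computation. This is my own minimal sketch, not the authors’ hierarchical model; the user profiles and the “category” feature are invented for illustration:

```python
# Toy sketch: assign an anonymous session to known users via
# P(user | feature) proportional to P(user) * P(feature | user).
# The profiles below are invented; the paper estimates a far richer
# hierarchical Bayesian model over demographics and behavior.

def posterior_over_users(profiles, observed_category):
    """Return a normalized posterior over users for one anonymous visit."""
    unnorm = {
        user: p["visit_rate"] * p["category_prefs"].get(observed_category, 1e-6)
        for user, p in profiles.items()
    }
    total = sum(unnorm.values())
    return {user: w / total for user, w in unnorm.items()}

profiles = {
    # visit_rate acts as the prior; category_prefs as the likelihood
    "user_a": {"visit_rate": 0.7, "category_prefs": {"books": 0.8, "games": 0.2}},
    "user_b": {"visit_rate": 0.3, "category_prefs": {"books": 0.1, "games": 0.9}},
}

posterior = posterior_over_users(profiles, "games")
# user_a: 0.7 * 0.2 = 0.14; user_b: 0.3 * 0.9 = 0.27 -> user_b more likely
```

Even this crude version shows why “ignoring the anonymous visits” loses information: the visit’s observed features shift probability mass toward the user who plausibly made it.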

Schneier on Security: The Internet of Things Will Be the World’s Biggest Robot

The Internet of Things is the name given to the computerization of everything in our lives. Already you can buy Internet-enabled thermostats, light bulbs, refrigerators, and cars. Soon everything will be on the Internet: the things we own, the things we interact with in public, autonomous things that interact with each other.

These “things” will have two separate parts. One part will be sensors that collect data about us and our environment. Already our smartphones know our location and, with their onboard accelerometers, track our movements. Things like our thermostats and light bulbs will know who is in the room. Internet-enabled street and highway sensors will know how many people are out and about, and eventually who they are. Sensors will collect environmental data from all over the world.

The other part will be actuators. They’ll affect our environment. Our smart thermostats aren’t collecting information about ambient temperature and who’s in the room for nothing; they set the temperature accordingly. Phones already know our location, and send that information back to Google Maps and Waze to determine where traffic congestion is; when they’re linked to driverless cars, they’ll automatically route us around that congestion. Amazon already wants autonomous drones to deliver packages. The Internet of Things will increasingly perform actions for us and in our name.

Increasingly, human intervention will be unnecessary. The sensors will collect data. The system’s smarts will interpret the data and figure out what to do. And the actuators will do things in our world. You can think of the sensors as the eyes and ears of the Internet, the actuators as the hands and feet of the Internet, and the stuff in the middle as the brain. This makes the future clearer. The Internet now senses, thinks, and acts.
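The sensors/brain/actuators split maps directly onto a classic sense-think-act control loop. Here is a minimal sketch using the essay’s own thermostat example; the setpoint, readings, and command names are invented for illustration:

```python
# Sense-think-act: sensor readings in, decision logic ("the brain"),
# actuator command out. Values here are illustrative only.

def thermostat_step(temp_c, room_occupied, setpoint_c=20.0):
    """Decide a heater command from the current sensor readings."""
    if room_occupied and temp_c < setpoint_c:
        return "heat_on"
    return "heat_off"

# One tuple per time step: (ambient temperature, occupancy sensor)
readings = [(18.5, True), (21.0, True), (15.0, False)]
commands = [thermostat_step(t, occ) for t, occ in readings]
print(commands)  # ['heat_on', 'heat_off', 'heat_off']
```

The point of the essay is that loops like this, trivial in isolation, become consequential when billions of them are networked and the “brain” lives in the cloud.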

We’re building a world-sized robot, and we don’t even realize it.

I’ve started calling this robot the World-Sized Web.

The World-Sized Web — can I call it WSW? — is more than just the Internet of Things. Much of the WSW’s brains will be in the cloud, on servers connected via cellular, Wi-Fi, or short-range data networks. It’s mobile, of course, because many of these things will move around with us, like our smartphones. And it’s persistent. You might be able to turn off small pieces of it here and there, but in the main the WSW will always be on, and always be there.

None of these technologies are new, but they’re all becoming more prevalent. I believe that we’re at the brink of a phase change around information and networks. The difference in degree will become a difference in kind. That’s the robot that is the WSW.

This robot will increasingly be autonomous, at first in simple ways and then increasingly using the capabilities of artificial intelligence. Drones with sensors will fly to places that the WSW needs to collect data. Vehicles with actuators will drive to places that the WSW needs to affect. Other parts of the robot will “decide” where to go, what data to collect, and what to do.

We’re already seeing this kind of thing in warfare; drones are surveilling the battlefield and firing weapons at targets. Humans are still in the loop, but how long will that last? And when both the data collection and resultant actions are more benign than a missile strike, autonomy will be an easier sell.

By and large, the WSW will be a benign robot. It will collect data and do things in our interests; that’s why we’re building it. But it will change our society in ways we can’t predict, some of them good and some of them bad. It will maximize profits for the people who control the components. It will enable totalitarian governments. It will empower criminals and hackers in new and different ways. It will cause power balances to shift and societies to change.

These changes are inherently unpredictable, because they’re based on the emergent properties of these new technologies interacting with each other, us, and the world. In general, it’s easy to predict technological changes due to scientific advances, but much harder to predict social changes due to those technological changes. For example, it was easy to predict that better engines would mean that cars could go faster. It was much harder to predict that the result would be a demographic shift into suburbs. Driverless cars and smart roads will again transform our cities in new ways, as will autonomous drones, cheap and ubiquitous environmental sensors, and a network that can anticipate our needs.

Maybe the WSW is more like an organism. It won’t have a single mind. Parts of it will be controlled by large corporations and governments. Small parts of it will be controlled by us. But writ large its behavior will be unpredictable, the result of millions of tiny goals and billions of interactions between parts of itself.

We need to start thinking seriously about our new world-spanning robot. The market will not sort this out all by itself. By nature, it is short-term and profit-motivated, and these issues require broader thinking. University of Washington law professor Ryan Calo has proposed a Federal Robotics Commission as a place where robotics expertise and advice can be centralized within the government. Japan and Korea are already moving in this direction.

Speaking as someone with a healthy skepticism for another government agency, I think we need to go further. We need to create a new agency, a Department of Technology Policy, that can deal with the WSW in all its complexities. It needs the power to aggregate expertise and advise other agencies, and probably the authority to regulate when appropriate. We can argue the details, but there is no existing government entity that has either the expertise or the authority to tackle something this broad and far-reaching. And the question is not about whether government will start regulating these technologies, it’s about how smart it will be when it does.

The WSW is being built right now, without anyone noticing, and it’ll be here before we know it. Whatever changes it means for society, we don’t want it to take us by surprise.

This essay originally appeared on Forbes.com, which annoyingly blocks browsers using ad blockers.

EDITED TO ADD: Kevin Kelly has thought along these lines previously, calling the robot “Holos.”

Schneier on Security: Security vs. Surveillance

Both the “going dark” metaphor of FBI Director James Comey and the contrasting “golden age of surveillance” metaphor of privacy law professor Peter Swire focus on the value of data to law enforcement. As framed in the media, encryption debates are about whether law enforcement should have surreptitious access to data, or whether companies should be allowed to provide strong encryption to their customers.

It’s a myopic framing that focuses only on one threat — criminals, including domestic terrorists — and the demands of law enforcement and national intelligence. This obscures the most important aspects of the encryption issue: the security it provides against a much wider variety of threats.

Encryption secures our data and communications against eavesdroppers like criminals, foreign governments, and terrorists. We use it every day to hide our cell phone conversations from eavesdroppers, and to hide our Internet purchasing from credit card thieves. Dissidents in China and many other countries use it to avoid arrest. It’s a vital tool for journalists to communicate with their sources, for NGOs to protect their work in repressive countries, and for attorneys to communicate with their clients.

Many technological security failures of today can be traced to failures of encryption. In 2014 and 2015, unnamed hackers — probably the Chinese government — stole 21.5 million personal files of U.S. government employees and others. They wouldn’t have obtained this data if it had been encrypted. Many large-scale criminal data thefts were made either easier or more damaging because data wasn’t encrypted: Target, TJ Maxx, Heartland Payment Systems, and so on. Many countries are eavesdropping on the unencrypted communications of their own citizens, looking for dissidents and other voices they want to silence.

Adding backdoors will only exacerbate the risks. As technologists, we can’t build an access system that only works for people of a certain citizenship, or with a particular morality, or only in the presence of a specified legal document. If the FBI can eavesdrop on your text messages or get at your computer’s hard drive, so can other governments. So can criminals. So can terrorists. This is not theoretical; again and again, backdoor accesses built for one purpose have been surreptitiously used for another. Vodafone built backdoor access into Greece’s cell phone network for the Greek government; it was used against the Greek government in 2004-2005. Google kept a database of backdoor accesses provided to the U.S. government under CALEA; the Chinese breached that database in 2009.

We’re not being asked to choose between security and privacy. We’re being asked to choose between less security and more security.

This trade-off isn’t new. In the mid-1990s, cryptographers argued that escrowing encryption keys with central authorities would weaken security. In 2011, cybersecurity researcher Susan Landau published her excellent book Surveillance or Security?, which deftly parsed the details of this trade-off and concluded that security is far more important.

Ubiquitous encryption protects us much more from bulk surveillance than from targeted surveillance. For a variety of technical reasons, computer security is extraordinarily weak. If a sufficiently skilled, funded, and motivated attacker wants in to your computer, they’re in. If they’re not, it’s because you’re not high enough on their priority list to bother with. Widespread encryption forces the listener — whether a foreign government, criminal, or terrorist — to target. And this hurts repressive governments much more than it hurts terrorists and criminals.

Of course, criminals and terrorists have used, are using, and will use encryption to hide their planning from the authorities, just as they will use many aspects of society’s capabilities and infrastructure: cars, restaurants, telecommunications. In general, we recognize that such things can be used by both honest and dishonest people. Society thrives nonetheless because the honest so outnumber the dishonest. Compare this with the tactic of secretly poisoning all the food at a restaurant. Yes, we might get lucky and poison a terrorist before he strikes, but we’ll harm all the innocent customers in the process. Weakening encryption for everyone is harmful in exactly the same way.

This essay previously appeared as part of the paper “Don’t Panic: Making Progress on the ‘Going Dark’ Debate.” It was reprinted on Lawfare. A modified version was reprinted by the MIT Technology Review.

Schneier on Security: Paper on the Going Dark Debate

I am pleased to have been a part of this report, part of the Berkman Center’s Berklett Cybersecurity project:

Don’t Panic: Making Progress on the “Going Dark” Debate

From the report:

In this report, we question whether the “going dark” metaphor accurately describes the state of affairs. Are we really headed to a future in which our ability to effectively surveil criminals and bad actors is impossible? We think not. The question we explore is the significance of this lack of access to communications for legitimate government interests. We argue that communications in the future will neither be eclipsed into darkness nor illuminated without shadow.

In short, our findings are:

  • End-to-end encryption and other technological architectures for obscuring user data are unlikely to be adopted ubiquitously by companies, because the majority of businesses that provide communications services rely on access to user data for revenue streams and product functionality, including user data recovery should a password be forgotten.

  • Software ecosystems tend to be fragmented. In order for encryption to become both widespread and comprehensive, far more coordination and standardization than currently exists would be required.
  • Networked sensors and the Internet of Things are projected to grow substantially, and this has the potential to drastically change surveillance. The still images, video, and audio captured by these devices may enable real-time intercept and recording with after-the-fact access. Thus an inability to monitor an encrypted channel could be mitigated by the ability to monitor from afar a person through a different channel.
  • Metadata is not encrypted, and the vast majority is likely to remain so. This is data that needs to stay unencrypted in order for the systems to operate: location data from cell phones and other devices, telephone calling records, header information in e-mail, and so on. This information provides an enormous amount of surveillance data that was unavailable before these systems became widespread.
  • These trends raise novel questions about how we will protect individual privacy and security in the future. Today’s debate is important, but for all its efforts to take account of technological trends, it is largely taking place without reference to the full picture.

New York Times coverage. Lots more news coverage here. Slashdot thread. BoingBoing post.

Schneier on Security: More Details on the NSA Switching to Quantum-Resistant Cryptography

The NSA is publicly moving away from cryptographic algorithms vulnerable to cryptanalysis using a quantum computer. It just published a FAQ about the process:

Q: Is there a quantum resistant public-key algorithm that commercial vendors should adopt?

A: While a number of interesting quantum resistant public key algorithms have been proposed external to NSA, nothing has been standardized by NIST, and NSA is not specifying any commercial quantum resistant standards at this time. NSA expects that NIST will play a leading role in the effort to develop a widely accepted, standardized set of quantum resistant algorithms. Once these algorithms have been standardized, NSA will require vendors selling to NSS operators to provide FIPS validated implementations in their products. Given the level of interest in the cryptographic community, we hope that there will be quantum resistant algorithms widely available in the next decade. NSA does not recommend implementing or using non-standard algorithms, and the field of quantum resistant cryptography is no exception.

[…]

Q: When will quantum resistant cryptography be available?

A: For systems that will use unclassified cryptographic algorithms it is vital that NSA use cryptography that is widely accepted and widely available as part of standard commercial offerings vetted through NIST’s cryptographic standards development process. NSA will continue to support NIST in the standardization process and will also encourage work in the vendor and larger standards communities to help produce standards with broad support for deployment in NSS. NSA believes that NIST can lead a robust and transparent process for the standardization of publicly developed and vetted algorithms, and we encourage this process to begin soon. NSA believes that the external cryptographic community can develop quantum resistant algorithms and reach broad agreement for standardization within a few years.

Lots of other interesting stuff in the Q&A.

Schneier on Security: NSA and GCHQ Hacked Israeli Drone Feeds

This post was syndicated from: Schneier on Security and was written by: schneier. Original post: at Schneier on Security

The NSA and GCHQ have successfully hacked Israel’s drones, according to the Snowden documents. The story is being reported by the Intercept and Der Spiegel. The Times of Israel has more.

Schneier on Security: NSA’s TAO Head on Internet Offense and Defense

This post was syndicated from: Schneier on Security and was written by: schneier. Original post: at Schneier on Security

Rob Joyce, the head of the NSA’s Tailored Access Operations (TAO) group — basically the country’s chief hacker — spoke in public earlier this week. He talked both about how the NSA hacks into networks, and what network defenders can do to protect themselves. Here’s a video of the talk, and here are two good summaries.

Intrusion Phases

  • Reconnaissance
  • Initial Exploitation
  • Establish Persistence
  • Install Tools
  • Move Laterally
  • Collect Exfil and Exploit

The event was the USENIX Enigma Conference.

The talk is full of good information about how APT attacks work and how networks can defend themselves. Nothing really surprising, but all interesting. Which brings up the most important question: why did the NSA decide to put Joyce on stage in public? It surely doesn’t want all of its target networks to improve their security so much that the NSA can no longer get in. On the other hand, the NSA does want the general security of US — and presumably allied — networks to improve. My guess is that this is simply a NOBUS issue. The NSA is, or at least believes it is, so sophisticated in its attack techniques that these defensive recommendations won’t slow it down significantly. And the Chinese/Russian/etc state-sponsored attackers will have a harder time. Or, at least, that’s what the NSA wants us to believe.

Wheels within wheels….

More information about the NSA’s TAO group is here and here. Here’s an article about TAO’s catalog of implants and attack tools. Note that the catalog is from 2007. Presumably TAO has been very busy developing new attack tools over the past ten years.

BoingBoing post.

Schneier on Security: Friday Squid Blogging: Polynesian Squid Hook

This post was syndicated from: Schneier on Security and was written by: schneier. Original post: at Schneier on Security

From 1909, for squid fishing.

As usual, you can also use this squid post to talk about the security stories in the news that I haven’t covered.

Schneier on Security: Encryption Backdoor Comic

This post was syndicated from: Schneier on Security and was written by: schneier. Original post: at Schneier on Security

Support our Snoops.”

Schneier on Security: Integrity and Availability Threats

This post was syndicated from: Schneier on Security and was written by: schneier. Original post: at Schneier on Security

Cyberthreats are changing. We’re worried about hackers crashing airplanes by hacking into computer networks. We’re worried about hackers remotely disabling cars. We’re worried about manipulated counts from electronic voting booths, remote murder through hacked medical devices and someone hacking an Internet thermostat to turn off the heat and freeze the pipes.

The traditional academic way of thinking about information security is as a triad: confidentiality, integrity, and availability. For years, the security industry has been trying to prevent data theft. Stolen data is used for identity theft and other frauds. It can be embarrassing, as in the Ashley Madison breach. It can be damaging, as in the Sony data theft. It can even be a national security threat, as in the case of the Office of Personnel Management data breach. These are all breaches of privacy and confidentiality.

As bad as these threats are, they seem abstract. It’s been hard to craft public policy around them. But this is all changing. Threats to integrity and availability are much more visceral and much more devastating. And they will spur legislative action in a way that privacy risks never have.

Take one example: driverless cars and smart roads.

We’re heading toward a world where driverless cars will automatically communicate with each other and the roads, automatically taking us where we need to go safely and efficiently. The confidentiality threats are real: Someone who can eavesdrop on those communications can learn where the cars are going and maybe who is inside them. But the integrity threats are much worse.

Someone who can feed the cars false information can potentially cause them to crash into each other or nearby walls. Someone could also disable your car so it can’t start. Or worse, disable the entire system so that no one’s car can start.

This new rise in integrity and availability threats is a result of the Internet of Things. The objects we own and interact with will all become computerized and connected to the Internet. But it’s actually more complicated than that.

What I’m calling the “World-Sized Web” is a combination of these Internet-enabled things, cloud computing, mobile computing and the pervasiveness that comes from these systems being always on. Together this means that computers and networks will be much more embedded in our daily lives. Yes, there will be more need for confidentiality, but there is a newfound need to ensure that these systems can’t be subverted to do real damage.

It’s one thing if your smart door lock can be eavesdropped on to learn who is home. It’s another thing entirely if it can be hacked to prevent you from opening your door or to allow a burglar to open it.

In separate testimonies before different House and Senate committees last year, both the Director of National Intelligence James Clapper and NSA Director Mike Rogers warned of these threats. They both consider them far larger and more important than the confidentiality threat and believe that we are vulnerable to attack.

And once the attacks start doing real damage — once someone dies from a hacked car or medical device, or an entire city’s 911 services go down for a day — there will be a real outcry to do something.

Congress will be forced to act. They might authorize more surveillance. They might authorize more government involvement in private-sector cybersecurity. They might try to ban certain technologies or certain uses. The results won’t be well-thought-out, and they probably won’t mitigate the actual risks. If we’re lucky, they won’t cause even more problems.

I worry that we’re rushing headlong into the World-Sized Web, and not paying enough attention to the new threats that it brings with it. Again and again, we’ve tried to retrofit security in after the fact.

It would be nice if we could do it right from the beginning this time. That’s going to take foresight and planning. The Obama administration just proposed spending $4 billion to advance the engineering of driverless cars.

How about focusing some of that money on the integrity and availability threats from that and similar technologies?

This essay previously appeared on CNN.com.

Schneier on Security: Psychological Model of Selfishness

This post was syndicated from: Schneier on Security and was written by: schneier. Original post: at Schneier on Security

This is interesting:

Game theory decision-making is based entirely on reason, but humans don’t always behave rationally. David Rand, assistant professor of psychology, economics, cognitive science, and management at Yale University, and psychology doctoral student Adam Bear incorporated theories on intuition into their model, allowing agents to make a decision either based on instinct or rational deliberation.

In the model, there are multiple games of prisoner’s dilemma. But while some have the standard set-up, others introduce punishment for those who refuse to cooperate with a willing partner. Rand and Bear found that agents who went through many games with repercussions for selfishness became instinctively cooperative, though they could override their instinct to behave selfishly in cases where it made sense to do so.

However, those who became instinctively selfish were far less flexible. Even in situations where refusing to cooperate was punished, they would not then deliberate and rationally choose to cooperate instead.
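The dual-process setup described above can be illustrated with a toy sketch. This is not the authors’ actual model code; the agent fields (`intuition`, `max_cost`), the payoff values, and the two stylized agent types are illustrative assumptions chosen to mirror the paper’s two winning strategies:

```python
COOPERATE, DEFECT = "C", "D"

def play(agent, is_one_shot, deliberation_cost):
    """Return the agent's move and the cost it paid to decide."""
    if deliberation_cost <= agent["max_cost"]:
        # Deliberate: tailor the strategy to the game type,
        # cooperating only when reciprocity makes it pay off.
        move = DEFECT if is_one_shot else COOPERATE
        return move, deliberation_cost
    # Otherwise act on generalized intuition, blind to the game type.
    return agent["intuition"], 0.0

# Payoffs: b = benefit of receiving cooperation, c = cost of giving it.
b, c = 4.0, 1.0

def payoff(my_move, their_move):
    gain = b if their_move == COOPERATE else 0.0
    cost = c if my_move == COOPERATE else 0.0
    return gain - cost

# The two strategies the paper finds selection favoring:
intuitive_defector = {"intuition": DEFECT, "max_cost": 0.0}   # never deliberates
dual_process = {"intuition": COOPERATE, "max_cost": 0.5}      # sometimes deliberates

move, paid = play(dual_process, is_one_shot=True, deliberation_cost=0.3)
print(move, paid)  # -> D 0.3: pays to deliberate, then defects in the one-shot game
```

In a repeated game the same dual-process agent would cooperate, while the intuitive defector defects everywhere and never pays a deliberation cost, matching the inflexibility described above.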

The paper:

Abstract: Humans often cooperate with strangers, despite the costs involved. A long tradition of theoretical modeling has sought ultimate evolutionary explanations for this seemingly altruistic behavior. More recently, an entirely separate body of experimental work has begun to investigate cooperation’s proximate cognitive underpinnings using a dual-process framework: Is deliberative self-control necessary to rein in selfish impulses, or does self-interested deliberation restrain an intuitive desire to cooperate? Integrating these ultimate and proximate approaches, we introduce dual-process cognition into a formal game-theoretic model of the evolution of cooperation. Agents play prisoner’s dilemma games, some of which are one-shot and others of which involve reciprocity. They can either respond by using a generalized intuition, which is not sensitive to whether the game is one-shot or reciprocal, or pay a (stochastically varying) cost to deliberate and tailor their strategy to the type of game they are facing. We find that, depending on the level of reciprocity and assortment, selection favors one of two strategies: intuitive defectors who never deliberate, or dual-process agents who intuitively cooperate but sometimes use deliberation to defect in one-shot games. Critically, selection never favors agents who use deliberation to override selfish impulses: Deliberation only serves to undermine cooperation with strangers. Thus, by introducing a formal theoretical framework for exploring cooperation through a dual-process lens, we provide a clear answer regarding the role of deliberation in cooperation based on evolutionary modeling, help to organize a growing body of sometimes-conflicting empirical results, and shed light on the nature of human cognition and social decision making.

Very much in line with what I wrote in Liars and Outliers.

Schneier on Security: Horrible Story of Digital Harassment

This post was syndicated from: Schneier on Security and was written by: schneier. Original post: at Schneier on Security

This is just awful.

Their troll — or trolls, as the case may be — have harassed Paul and Amy in nearly every way imaginable. Bomb threats have been made under their names. Police cars and fire trucks have arrived at their house in the middle of the night to respond to fake hostage calls. Their email and social media accounts have been hacked, and used to bring ruin to their social lives. They’ve lost jobs, friends, and relationships. They’ve developed chronic anxiety and other psychological problems. More than once, they described their lives as having been “ruined” by their mystery tormenter.

We need to figure out how to identify perpetrators like this without destroying Internet privacy in the process.

Schneier on Security: Data-Driven Policing

This post was syndicated from: Schneier on Security and was written by: schneier. Original post: at Schneier on Security

Good article from The Washington Post.

Schneier on Security: Shodan Lets You Browse Insecure Webcams

This post was syndicated from: Schneier on Security and was written by: schneier. Original post: at Schneier on Security

There’s a lot out there:

The feed includes images of marijuana plantations, back rooms of banks, children, kitchens, living rooms, garages, front gardens, back gardens, ski slopes, swimming pools, colleges and schools, laboratories, and cash register cameras in retail stores….

Slashdot thread.

Schneier on Security: Friday Squid Blogging: North Coast Squid

This post was syndicated from: Schneier on Security and was written by: schneier. Original post: at Schneier on Security

North Coast Squid is a local writing journal from Manzanita, Oregon. It’s going to publish its fifth edition this year.

As usual, you can also use this squid post to talk about the security stories in the news that I haven’t covered.