Long article about a very lucrative squid-fishing industry that involves bribing the Cambodian Navy.
As usual, you can also use this squid post to talk about the security stories in the news that I haven’t covered.
It’s the Internet, which means there must be cute animal videos on this blog. But this one is different. Watch a mother rabbit beat up a snake to protect her children. It’s impressive the way she keeps attacking the snake until it is far away from her nest, but I worry that she doesn’t know enough to grab the snake by the neck. Maybe there just aren’t any venomous snakes around those parts.
This is really clever:
Enigma’s technique — what cryptographers call “secure multiparty computation” — works by mimicking a few of the features of bitcoin’s decentralized network architecture: It encrypts data by splitting it up into pieces and randomly distributing indecipherable chunks of it to hundreds of computers in the Enigma network known as “nodes.” Each node performs calculations on its discrete chunk of information before the user recombines the results to derive an unencrypted answer. Thanks to some mathematical tricks the Enigma creators implemented, the nodes are able to collectively perform every kind of computation that computers normally do, but without accessing any other portion of the data except the tiny chunk they were assigned.
To keep track of who owns what data — and where any given data’s pieces have been distributed — Enigma stores that metadata in the bitcoin blockchain, the unforgeable record of messages copied to thousands of computers to prevent counterfeit and fraud in the bitcoin economy.
It’s not homomorphic encryption. But it is really clever. Paper here.
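Enigma's actual protocol is far more elaborate, but the core idea of nodes computing on data they cannot read can be illustrated with additive secret sharing, the simplest building block of secure multiparty computation (my sketch, not Enigma's scheme):

```python
import secrets

PRIME = 2**61 - 1  # work in a finite field so shares wrap around cleanly

def share(secret, n):
    """Split `secret` into n random shares that sum to it mod PRIME.

    Any n-1 shares are uniformly random and reveal nothing on their own.
    """
    shares = [secrets.randbelow(PRIME) for _ in range(n - 1)]
    shares.append((secret - sum(shares)) % PRIME)
    return shares

def reconstruct(shares):
    return sum(shares) % PRIME

# Each "node" holds one share of each input; adding share-wise computes
# the sum without any single node ever seeing either input.
a_shares = share(123, 5)
b_shares = share(456, 5)
sum_shares = [(x + y) % PRIME for x, y in zip(a_shares, b_shares)]
print(reconstruct(sum_shares))  # 579
```

Addition is the easy case; multiplication and general computation require the extra "mathematical tricks" the article alludes to.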
So much to digest. Please post anything interesting you notice in the comments.
So much to digest. Please post anything interesting you notice in the comments.
I don’t have much to say about the recent hack of the US Office of Personnel Management, which has been attributed to China (and seems to be getting worse all the time). We know that government networks aren’t any more secure than corporate networks, and might even be less secure.
I agree with Ben Wittes here (although not the imaginary double standard he talks about in the rest of the essay):
For the record, I have no problem with the Chinese going after this kind of data. Espionage is a rough business and the Chinese owe as little to the privacy rights of our citizens as our intelligence services do to the employees of the Chinese government. It’s our government’s job to protect this material, knowing it could be used to compromise, threaten, or injure its people — not the job of the People’s Liberation Army to forbear collection of material that may have real utility.
Former NSA Director Michael Hayden says much the same thing:
If Hayden had had the ability to get the equivalent Chinese records when running CIA or NSA, he says, “I would not have thought twice. I would not have asked permission. I’d have launched the star fleet. And we’d have brought those suckers home at the speed of light.” The episode, he says, “is not shame on China. This is shame on us for not protecting that kind of information.” The episode is “a tremendously big deal, and my deepest emotion is embarrassment.”
My question is this: Has anyone thought about the possibility of the attackers manipulating data in the database? What are the potential attacks that could stem from adding, deleting, and changing data? I don’t think they can add a person with a security clearance, but I’d like someone who knows more than I do to assess the risks.
Normally I wouldn’t mind, but the unofficial blog fails intermittently. Also, @Bruce_Schneier follows people who then think I’m following them. I’m not; I never log in to Twitter and I don’t follow anyone there.
So if you want to read my blog on Twitter, please make sure you’re following @schneierblog. If you are the person who runs the @Bruce_Schneier account — if anyone is even running it anymore — please e-mail me at the address on my Contact page.
And if anyone from the Twitter fraud department is reading this, please contact me. I know I can get the @Bruce_Schneier account deleted, but I don’t want to lose the 27,300 followers on it. What I want is to consolidate them with the 67,700 followers on my real account. There’s no way to explain this on the form to report Twitter impersonation. (Although maybe I should just delete the account. I didn’t do it 18 months ago when there were only 16,000 followers on that account, and look what happened. It’ll only be worse next year.)
Abstract: September 11 created a natural experiment that enables us to track the psychological effects of a large-scale terror event over time. The archival data came from 8,070 participants of 10 ABC and CBS News polls collected from September 2001 until September 2006. Six questions investigated emotional, behavioral, and cognitive responses to the events of September 11 over a five-year period. We found that heightened responses after September 11 dissipated and reached a plateau at various points in time over a five-year period. We also found that emotional, cognitive, and behavioral reactions were moderated by age, sex, political affiliation, and proximity to the attack. Both emotional and behavioral responses returned to a normal state after one year, whereas cognitively-based perceptions of risk were still diminishing as late as September 2006. These results provide insight into how individuals will perceive and respond to future similar attacks.
There’s a new paper on a low-cost TEMPEST attack against PC cryptography:
We demonstrate the extraction of secret decryption keys from laptop computers, by nonintrusively measuring electromagnetic emanations for a few seconds from a distance of 50 cm. The attack can be executed using cheap and readily-available equipment: a consumer-grade radio receiver or a Software Defined Radio USB dongle. The setup is compact and can operate untethered; it can be easily concealed, e.g., inside pita bread. Common laptops, and popular implementations of RSA and ElGamal encryptions, are vulnerable to this attack, including those that implement the decryption using modern exponentiation algorithms such as sliding-window, or even its side-channel resistant variant, fixed-window (m-ary) exponentiation.
We successfully extracted keys from laptops of various models running GnuPG (popular open source encryption software, implementing the OpenPGP standard), within a few seconds. The attack sends a few carefully-crafted ciphertexts, and when these are decrypted by the target computer, they trigger the occurrence of specially-structured values inside the decryption software. These special values cause observable fluctuations in the electromagnetic field surrounding the laptop, in a way that depends on the pattern of key bits (specifically, the key-bits window in the exponentiation routine). The secret key can be deduced from these fluctuations, through signal processing and cryptanalysis.
Researchers at Tel Aviv University and Israel’s Technion research institute have developed a new palm-sized device that can wirelessly steal data from a nearby laptop based on the radio waves leaked by its processor’s power use. Their spy bug, built for less than $300, is designed to allow anyone to “listen” to the accidental radio emanations of a computer’s electronics from 19 inches away and derive the user’s secret decryption keys, enabling the attacker to read their encrypted communications. And that device, described in a paper they’re presenting at the Workshop on Cryptographic Hardware and Embedded Systems in September, is both cheaper and more compact than similar attacks from the past — so small, in fact, that the Israeli researchers demonstrated it can fit inside a piece of pita bread.
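The paper's attack on windowed exponentiation is subtle, but the underlying principle, that key-dependent branches in the exponentiation loop produce key-dependent physical signals, can be shown with a toy square-and-multiply trace. This is an illustration of the general leak, not the paper's technique (which defeats the windowed variants this naive version doesn't have):

```python
def modexp_with_trace(base, exp, mod):
    """Left-to-right square-and-multiply, recording one trace entry per key bit."""
    result, trace = 1, []
    for bit in bin(exp)[2:]:
        result = (result * result) % mod      # square: happens for every bit
        if bit == "1":
            result = (result * base) % mod    # multiply: only for 1-bits
            trace.append("SM")                # longer burst of EM activity
        else:
            trace.append("S")                 # shorter burst
    return result, trace

secret_exponent = 0b101101
result, trace = modexp_with_trace(7, secret_exponent, 1000003)

# An observer of the "emanations" reads the key straight off the trace.
recovered = int("".join("1" if t == "SM" else "0" for t in trace), 2)
print(recovered == secret_exponent)  # True
```

Real countermeasures (constant-time code, blinding) aim to make the physical profile independent of the key bits.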
While some of the unit’s activities are focused on the claimed areas, JTRIG also appears to be intimately involved in traditional law enforcement areas and U.K.-specific activity, as previously unpublished documents demonstrate. An August 2009 JTRIG memo entitled “Operational Highlights” boasts of “GCHQ’s first serious crime effects operation” against a website that was identifying police informants and members of a witness protection program. Another operation investigated an Internet forum allegedly “used to facilitate and execute online fraud.” The document also describes GCHQ advice provided “to assist the UK negotiating team on climate change.”
Particularly revealing is a fascinating 42-page document from 2011 detailing JTRIG’s activities. It provides the most comprehensive and sweeping insight to date into the scope of this unit’s extreme methods. Entitled “Behavioral Science Support for JTRIG’s Effects and Online HUMINT [Human Intelligence] Operations,” it describes the types of targets on which the unit focuses, the psychological and behavioral research it commissions and exploits, and its future organizational aspirations. It is authored by a psychologist, Mandeep K. Dhami.
Among other things, the document lays out the tactics the agency uses to manipulate public opinion, its scientific and psychological research into how human thinking and behavior can be influenced, and the broad range of targets that are traditionally the province of law enforcement rather than intelligence agencies.
On Monday, the Intercept published a new story from the Snowden documents:
The spy agencies have reverse engineered software products, sometimes under questionable legal authority, and monitored web and email traffic in order to discreetly thwart anti-virus software and obtain intelligence from companies about security software and users of such software. One security software maker repeatedly singled out in the documents is Moscow-based Kaspersky Lab, which has a holding registered in the U.K., claims more than 270,000 corporate clients, and says it protects more than 400 million people with its products.
British spies aimed to thwart Kaspersky software in part through a technique known as software reverse engineering, or SRE, according to a top-secret warrant renewal request. The NSA has also studied Kaspersky Lab’s software for weaknesses, obtaining sensitive customer information by monitoring communications between the software and Kaspersky servers, according to a draft top-secret report. The U.S. spy agency also appears to have examined emails inbound to security software companies flagging new viruses and vulnerabilities.
Wired has a good article on the documents:
The documents…don’t describe actual computer breaches against the security firms, but instead depict a systematic campaign to reverse-engineer their software in order to uncover vulnerabilities that could help the spy agencies subvert it.
An NSA slide describing “Project CAMBERDADA” lists at least 23 antivirus and security firms that were in that spy agency’s sights. They include the Finnish antivirus firm F-Secure, the Slovakian firm Eset, Avast software from the Czech Republic, and Bit-Defender from Romania. Notably missing from the list are the American anti-virus firms Symantec and McAfee as well as the UK-based firm Sophos.
But antivirus wasn’t the only target of the two spy agencies. They also targeted their reverse-engineering skills against CheckPoint, an Israeli maker of firewall software, as well as commercial encryption programs and software underpinning the online bulletin boards of numerous companies. GCHQ, for example, reverse-engineered both the CrypticDisk program made by Exlade and the eDataSecurity system from Acer. The spy agency also targeted web forum systems like vBulletin and Invision Power Board, used by Sony Pictures, Electronic Arts, NBC Universal and others, as well as CPanel, a software used by GoDaddy for configuring its servers, and PostfixAdmin, for managing the Postfix email server software. But that’s not all. GCHQ reverse-engineered Cisco routers, too, which allowed the agency’s spies to access “almost any user of the internet” inside Pakistan and “to re-route selective traffic” straight into the mouth of GCHQ’s collection systems.
Kaspersky recently announced that it was the victim of Duqu 2.0, probably from Israel.
Wikileaks has published some NSA SIGINT documents describing intercepted French government communications. This seems not to be from the Snowden documents. It could be one of the other NSA leakers, or it could be someone else entirely.
As leaks go, this isn’t much. As I’ve said before, spying on foreign leaders is the kind of thing we want the NSA to do. I’m sure French Intelligence does the same to us.
In May, Admiral James A. Winnefeld, Jr., vice-chairman of the Joint Chiefs of Staff, gave an address at the Joint Service Academies Cyber Security Summit at West Point. After he spoke for twenty minutes on the importance of Internet security and a good national defense, I was able to ask him a question (32:42 mark) about security versus surveillance:
Bruce Schneier: I’d like to hear you talk about this need to get beyond signatures and the more robust cyber defense and ask the industry to provide these technologies to make the infrastructure more secure. My question is, the only definition of “us” that makes sense is the world, is everybody. Any technologies that we’ve developed and built will be used by everyone — nation-state and non-nation-state. So anything we do to increase our resilience, infrastructure, and security will naturally make Admiral Rogers’s both intelligence and attack jobs much harder. Are you okay with that?
Admiral James A. Winnefeld: Yes. I think Mike’s okay with that, also. That’s a really, really good question. We call that IGL. Anyone know what IGL stands for? Intel gain-loss. And there’s this constant tension between the operational community and the intelligence community when a military action could cause the loss of a critical intelligence node. We live this every day. In fact, in ancient times, when we were collecting actual signals in the air, we would be on the operational side, “I want to take down that emitter so it’ll make it safer for my airplanes to penetrate the airspace,” and they’re saying, “No, you’ve got to keep that emitter up, because I’m getting all kinds of intelligence from it.” So this is a familiar problem. But I think we all win if our networks are more secure. And I think I would rather live on the side of secure networks and a harder problem for Mike on the intelligence side than very vulnerable networks and an easy problem for Mike. And part of that — it’s not only the right thing to do, but part of that goes to the fact that we are more vulnerable than any other country in the world, on our dependence on cyber. I’m also very confident that Mike has some very clever people working for him. He might actually still be able to get some work done. But it’s an excellent question. It really is.
It’s a good answer, and one firmly on the side of not introducing security vulnerabilities, backdoors, key-escrow systems, or anything that weakens Internet systems. It speaks to what I have seen as a split in the Second Crypto War, between the NSA and the FBI on building secure systems versus building systems with surveillance capabilities.
I have written about this before:
But here’s the problem: technological capabilities cannot distinguish based on morality, nationality, or legality; if the US government is able to use a backdoor in a communications system to spy on its enemies, the Chinese government can use the same backdoor to spy on its dissidents.
Even worse, modern computer technology is inherently democratizing. Today’s NSA secrets become tomorrow’s PhD theses and the next day’s hacker tools. As long as we’re all using the same computers, phones, social networking platforms, and computer networks, a vulnerability that allows us to spy also allows us to be spied upon.
We can’t choose a world where the US gets to spy but China doesn’t, or even a world where governments get to spy and criminals don’t. We need to choose, as a matter of policy, communications systems that are secure for all users, or ones that are vulnerable to all attackers. It’s security or surveillance.
NSA Director Admiral Mike Rogers was in the audience (he spoke earlier), and I saw him nodding at Winnefeld’s answer. Two weeks later, at CyCon in Tallinn, Rogers gave the opening keynote, and he seemed to be saying the opposite.
“Can we create some mechanism where within this legal framework there’s a means to access information that directly relates to the security of our respective nations, even as at the same time we are mindful we have got to protect the rights of our individual citizens?”
Rogers said a framework to allow law enforcement agencies to gain access to communications is in place within the phone system in the United States and other areas, so “why can’t we create a similar kind of framework within the internet and the digital age?”
He added: “I certainly have great respect for those that would argue that the most important thing is to ensure the privacy of our citizens and we shouldn’t allow any means for the government to access information. I would argue that’s not in the nation’s best long-term interest, that we’ve got to create some structure that should enable us to do that mindful that it has to be done in a legal way and mindful that it shouldn’t be something arbitrary.”
Does Winnefeld know that Rogers is contradicting him? Can someone ask JCS about this?
If somebody would come up to me and say, “Look, Hayden, here’s the thing: This Snowden thing is going to be a nightmare for you guys for about two years. And when we get all done with it, what you’re going to be required to do is that little 215 program about American telephony metadata — and by the way, you can still have access to it, but you got to go to the court and get access to it from the companies, rather than keep it to yourself.” I go: “And this is it after two years? Cool!”
The thing is, he’s right. And Peter Swire is also right when he calls the law “the biggest pro-privacy change to U.S. intelligence law since the original enactment of the Foreign Intelligence Surveillance Act in 1978.” I supported the bill not because it was the answer, but because it was a step in the right direction. And Hayden’s comments demonstrate how much more work we have to do.
Encryption protects our data. It protects our data when it’s sitting on our computers and in data centers, and it protects it when it’s being transmitted around the Internet. It protects our conversations, whether video, voice, or text. It protects our privacy. It protects our anonymity. And sometimes, it protects our lives.
This protection is important for everyone. It’s easy to see how encryption protects journalists, human rights defenders, and political activists in authoritarian countries. But encryption protects the rest of us as well. It protects our data from criminals. It protects it from competitors, neighbors, and family members. It protects it from malicious attackers, and it protects it from accidents.
Encryption works best if it’s ubiquitous and automatic. The two forms of encryption you use most often — https URLs on your browser, and the handset-to-tower link for your cell phone calls — work so well because you don’t even know they’re there.
Encryption should be enabled for everything by default, not a feature you turn on only if you’re doing something you consider worth protecting.
This is important. If we only use encryption when we’re working with important data, then encryption signals that data’s importance. If only dissidents use encryption in a country, that country’s authorities have an easy way of identifying them. But if everyone uses it all of the time, encryption ceases to be a signal. No one can distinguish simple chatting from deeply private conversation. The government can’t tell the dissidents from the rest of the population. Every time you use encryption, you’re protecting someone who needs to use it to stay alive.
It’s important to remember that encryption doesn’t magically convey security. There are many ways to get encryption wrong, and we regularly see them in the headlines. Encryption doesn’t protect your computer or phone from being hacked, and it can’t protect metadata, such as e-mail addresses that need to be unencrypted so your mail can be delivered.
But encryption is the most important privacy-preserving technology we have, and one that is uniquely suited to protect against bulk surveillance, the kind done by governments looking to control their populations and criminals looking for vulnerable victims. By forcing both to target their attacks against individuals, we protect society.
Today, we are seeing government pushback against encryption. Many countries, from states like China and Russia to more democratic governments like the United States and the United Kingdom, are either talking about or implementing policies that limit strong encryption. This is dangerous, because it’s technically impossible, and the attempt will cause incredible damage to the security of the Internet.
There are two morals to all of this. One, we should push companies to offer encryption to everyone, by default. And two, we should resist demands from governments to weaken encryption. Any weakening, even in the name of legitimate law enforcement, puts us all at risk. Even though criminals benefit from strong encryption, we’re all much more secure when we all have strong encryption.
This originally appeared in Securing Safe Spaces Online.
To support the findings contained in the Special Rapporteur’s report, Privacy International, the Harvard Law School’s International Human Rights Law Clinic and ARTICLE 19 have published an accompanying booklet, Securing Safe Spaces Online: Encryption, online anonymity and human rights which explores the impact of measures to restrict online encryption and anonymity in four particular countries – the United Kingdom, Morocco, Pakistan and South Korea.
As we’re all gearing up to fight the Second Crypto War over governments’ demands to be able to back-door any cryptographic system, it pays for us to remember the history of the First Crypto War. The Open Technology Institute has written the story of those years in the mid-1990s.
The act that truly launched the Crypto Wars was the White House’s introduction of the “Clipper Chip” in 1993. The Clipper Chip was a state-of-the-art microchip developed by government engineers which could be inserted into consumer hardware telephones, providing the public with strong cryptographic tools without sacrificing the ability of law enforcement and intelligence agencies to access unencrypted versions of those communications. The technology relied on a system of “key escrow,” in which a copy of each chip’s unique encryption key would be stored by the government. Although White House officials mobilized both political and technical allies in support of the proposal, it faced immediate backlash from technical experts, privacy advocates, and industry leaders, who were concerned about the security and economic impact of the technology in addition to obvious civil liberties concerns. As the battle wore on throughout 1993 and into 1994, leaders from across the political spectrum joined the fray, supported by a broad coalition that opposed the Clipper Chip. When computer scientist Matt Blaze discovered a flaw in the system in May 1994, it proved to be the final death blow: the Clipper Chip was dead.
Nonetheless, the idea that the government could find a palatable way to access the keys to encrypted communications lived on throughout the 1990s. Many policymakers held onto hopes that it was possible to securely implement what they called “software key escrow” to preserve access to phone calls, emails, and other communications and storage applications. Under key escrow schemes, a government-certified third party would keep a “key” to every device. But the government’s shift in tactics ultimately proved unsuccessful; the privacy, security, and economic concerns continued to outweigh any potential benefits. By 1997, there was an overwhelming amount of evidence against moving ahead with any key escrow schemes.
The Second Crypto War is going to be harder and nastier, and I am less optimistic that strong cryptography will win in the short term.
Last weekend, the Sunday Times published a front-page story (full text here), citing anonymous British sources claiming that both China and Russia have copies of the Snowden documents. It’s a terrible article, filled with factual inaccuracies and unsubstantiated claims about both Snowden’s actions and the damage caused by his disclosure, and others have thoroughly refuted the story. I want to focus on the actual question: Do countries like China and Russia have copies of the Snowden documents?
I believe the answer is certainly yes, but that it’s almost certainly not Snowden’s fault.
Snowden has claimed that he gave nothing to China while he was in Hong Kong, and brought nothing to Russia. He has said that he encrypted the documents in such a way that even he no longer has access to them, and that he did this before the US government stranded him in Russia. I have no doubt he did as he said, because A) it’s the smart thing to do, and B) it’s easy. All he would have had to do was encrypt the file with a long random key, break the encrypted text up into a few parts and mail them to trusted friends around the world, then forget the key. He probably added some security embellishments, but — regardless — the first sentence of the Times story simply makes no sense: “Russia and China have cracked the top-secret cache of files…”
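The scheme described above, encrypt with a long random key, split the ciphertext into parts, then forget the key, really is easy. Here is a toy version using a one-time pad; it is my illustration, not anything we know about Snowden's actual setup:

```python
import secrets

def seal_and_split(data: bytes, parts: int) -> list[bytes]:
    """One-time-pad encrypt, split the ciphertext, and discard the key."""
    key = secrets.token_bytes(len(data))             # long random key
    ct = bytes(d ^ k for d, k in zip(data, key))     # XOR one-time pad
    size = -(-len(ct) // parts)                      # ceiling division
    chunks = [ct[i:i + size] for i in range(0, len(ct), size)]
    # "Forget the key": without it, each chunk is uniformly random noise,
    # information-theoretically unrecoverable no matter who collects them.
    del key
    return chunks

chunks = seal_and_split(b"top-secret cache of files", 3)
print(len(chunks))  # 3
```

With the pad gone, even the person who made the chunks cannot "crack" them, which is why the Times's first sentence makes no sense.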
But while cryptography is strong, computer security is weak. The vulnerability is not Snowden; it’s everyone who has access to the files.
First, the journalists working with the documents. I’ve handled some of the Snowden documents myself, and even though I’m a paranoid cryptographer, I know how difficult it is to maintain perfect security. It’s been open season on the computers of the journalists Snowden shared documents with since this story broke in July 2013. And while they have been taking extraordinary pains to secure those computers, it’s almost certainly not enough to keep out the world’s intelligence services.
There is a lot of evidence for this belief. We know from other top-secret NSA documents that as far back as 2008, the agency’s Tailored Access Operations group has extraordinary capabilities to hack into and “exfiltrate” data from specific computers, even if those computers are highly secured and not connected to the Internet.
These NSA capabilities are not unique, and it’s reasonable to assume both that other countries had similar capabilities in 2008 and that everyone has improved their attack techniques in the seven years since then. Last week, we learned that Israel had successfully hacked a wide variety of networks, including that of a major computer antivirus company. We also learned that China successfully hacked US government personnel databases. And earlier this year, Russia successfully hacked the White House’s network. These sorts of stories are now routine.
Which brings me to the second potential source of these documents to foreign intelligence agencies: the US and UK governments themselves. I believe that both China and Russia had access to all the files that Snowden took well before Snowden took them because they’ve penetrated the NSA networks where those files reside. After all, the NSA has been a prime target for decades.
Those government hacking examples above were against unclassified networks, but the nation-state techniques we’re seeing work against classified and unconnected networks as well. In general, it’s far easier to attack a network than it is to defend the same network. This isn’t a statement about willpower or budget; it’s how computer and network security work today. A former NSA deputy director recently said that if we were to score cyber the way we score soccer, the tally would be 462-456 twenty minutes into the game. In other words, it’s all offense and no defense.
In this kind of environment, we simply have to assume that even our classified networks have been penetrated. Remember that Snowden was able to wander through the NSA’s networks with impunity, and that the agency had so few controls in place that the only way they can guess what has been taken is to extrapolate based on what has been published. Does anyone believe that Snowden was the first to take advantage of that lax security? I don’t.
This is why I find allegations that Snowden was working for the Russians or the Chinese simply laughable. What makes you think those countries waited for Snowden? And why do you think someone working for the Russians or the Chinese would go public with their haul?
I am reminded of a comment made to me in confidence by a US intelligence official. I asked him what he was most worried about, and he replied: “I know how deep we are in our enemies’ networks without them having any idea that we’re there. I’m worried that our networks are penetrated just as deeply.”
Seems like a reasonable worry to me.
The open question is which countries have sophisticated enough cyberespionage operations to mount a successful attack against one of the journalists or against the intelligence agencies themselves. And while I have my own mental list, the truth is that I don’t know. But certainly Russia and China are on the list, and it’s just as certain they didn’t have to wait for Snowden to get access to the files. While it might be politically convenient to blame Snowden because, as the Sunday Times reported an anonymous source saying, “we have now seen our agents and assets being targeted,” the NSA and GCHQ should first take a look into their mirrors.
This essay originally appeared on Wired.com.
EDITED TO ADD: I wrote about this essay on Lawfare:
A Twitter user commented: “Surely if agencies accessed computers of people Snowden shared with then is still his fault?”
Yes, that’s right. Snowden took the documents out of the well-protected NSA network and shared with people who don’t have those levels of computer security. Given what we’ve seen of the NSA’s hacking capabilities, I think the odds are zero that other nations were unable to hack at least one of those journalists’ computers. And yes, Snowden has to own that.
The point I make in the article is that those nations didn’t have to wait for Snowden. More specifically, GCHQ claims that “we have now seen our agents and assets being targeted.” One, agents and assets are not discussed in the Snowden documents. Two, it’s two years after Snowden handed those documents to reporters. Whatever is happening, it’s unlikely to be related to Snowden.
When you connect hospital drug pumps to the Internet, they’re hackable — only surprising people who aren’t paying attention.
Rios says when he first told Hospira a year ago that hackers could update the firmware on its pumps, the company “didn’t believe it could be done.” Hospira insisted there was “separation” between the communications module and the circuit board that would make this impossible. Rios says technically there is physical separation between the two. But the serial cable provides a bridge to jump from one to the other.
“From an architecture standpoint, it looks like these two modules are separated,” he says. “But when you open the device up, you can see they’re actually connected with a serial cable, and they’re connected in a way that you can actually change the core software on the pump.”
An attacker wouldn’t need physical access to the pump. The communication modules are connected to hospital networks, which are in turn connected to the Internet. “You can talk to that communication module over the network or over a wireless network,” Rios warns.
Hospira knows this, he says, because this is how it delivers firmware updates to its pumps. Yet despite this, he says, the company insists that “the separation makes it so you can’t hurt someone. So we’re going to develop a proof-of-concept that proves that’s not true.”
One of the biggest conceptual problems we have is that something is believed secure until demonstrated otherwise. We need to reverse that: everything should be believed insecure until demonstrated otherwise.
New Annenberg survey results indicate that marketers are misrepresenting a large majority of Americans by claiming that Americans give out information about themselves as a tradeoff for benefits they receive. To the contrary, the survey reveals most Americans do not believe that ‘data for discounts’ is a square deal.
The findings also suggest, in contrast to other academics’ claims, that Americans’ willingness to provide personal information to marketers cannot be explained by the public’s poor knowledge of the ins and outs of digital commerce. In fact, people who know more about ways marketers can use their personal information are more likely rather than less likely to accept discounts in exchange for data when presented with a real-life scenario.
Our findings, instead, support a new explanation: a majority of Americans are resigned to giving up their data — and that is why many appear to be engaging in tradeoffs. Resignation occurs when a person believes an undesirable outcome is inevitable and feels powerless to stop it. Rather than feeling able to make choices, Americans believe it is futile to manage what companies can learn about them. Our study reveals that more than half do not want to lose control over their information but also believe this loss of control has already happened.
By misrepresenting the American people and championing the tradeoff argument, marketers give policymakers false justifications for allowing the collection and use of all kinds of consumer data often in ways that the public find objectionable. Moreover, the futility we found, combined with a broad public fear about what companies can do with the data, portends serious difficulties not just for individuals but also — over time — for the institution of consumer commerce.