Fans attending Major League Baseball games are being greeted in a new way this year: with metal detectors at the ballparks. Touted as a counterterrorism measure, they’re nothing of the sort. They’re pure security theater: They look good without doing anything to make us safer. We’re stuck with them because of a combination of buck passing, CYA thinking, and fear.
As a security measure, the new devices are laughable. The ballpark metal detectors are much more lax than the ones at an airport checkpoint. They aren’t very sensitive — people with phones and keys in their pockets are sailing through — and there are no X-ray machines. Bags get the same cursory search they’ve gotten for years. And fans wanting to avoid the detectors can opt for a “light pat-down search” instead.
There’s no evidence that this new measure makes anyone safer. A halfway competent ticketholder would have no trouble sneaking a gun into the stadium. For that matter, a bomb exploded at a crowded checkpoint would be no less deadly than one exploded in the stands. These measures will, at best, be effective at stopping the random baseball fan who’s carrying a gun or knife into the stadium. That may be a good idea, but unless there’s been a recent spate of fan shootings and stabbings at baseball games — and there hasn’t — this is a whole lot of time and money being spent to combat an imaginary threat.
But imaginary threats are the only ones baseball executives have to stop this season; there’s been no specific terrorist threat or actual intelligence to be concerned about. MLB executives forced this change on ballparks based on unspecified discussions with the Department of Homeland Security after the Boston Marathon bombing in 2013. Because, you know, that was also a sporting event.
This system of vague consultations and equally vague threats ensures that no one organization can be seen as responsible for the change. MLB can claim that the league and teams “work closely” with DHS. DHS can claim that it was MLB’s initiative. And both can safely relax because if something happens, at least they did something.
It’s an attitude I’ve seen before: “Something must be done. This is something. Therefore, we must do it.” Never mind if the something makes any sense or not.
In reality, this is CYA security, and it’s pervasive in post-9/11 America. It no longer matters if a security measure makes sense, if it’s cost-effective or if it mitigates any actual threats. All that matters is that you took the threat seriously, so if something happens you won’t be blamed for inaction. It’s security, all right — security for the careers of those in charge.
I’m not saying that these officials care only about their jobs and not at all about preventing terrorism, only that their priorities are skewed. They imagine vague threats, and come up with correspondingly vague security measures intended to address them. They experience none of the costs. They’re not the ones who have to deal with the long lines and confusion at the gates. They’re not the ones who have to arrive early to avoid the messes the new policies have caused around the league. And if fans spend more money at the concession stands because they’ve arrived an hour early and have had the food and drinks they tried to bring along confiscated, so much the better, from the team owners’ point of view.
I can hear the objections to this as I write. You don’t know these measures won’t be effective! What if something happens? Don’t we have to do everything possible to protect ourselves against terrorism?
That’s worst-case thinking, and it’s dangerous. It leads to bad decisions, bad design and bad security. A better approach is to realistically assess the threats, judge security measures on their effectiveness and take their costs into account. And the result of that calm, rational look will be the realization that there will always be places where we pack ourselves densely together, and that we should spend less time trying to secure those places and more time finding terrorist plots before they can be carried out.
So far, fans have been exasperated but mostly accepting of these new security measures. And this is precisely the problem — most of us don’t care all that much. Our options are to put up with these measures, or stay home. Going to a baseball game is not a political act, and metal detectors aren’t worth a boycott. But there’s an undercurrent of fear as well. If it’s in the name of security, we’ll accept it. As long as our leaders are scared of the terrorists, they’re going to continue the security theater. And we’re similarly going to accept whatever measures are forced upon us in the name of security. We’re going to accept the National Security Agency’s surveillance of every American, airport security procedures that make no sense and metal detectors at baseball and football stadiums. We’re going to continue to waste money overreacting to irrational fears.
We no longer need the terrorists. We’re now so good at terrorizing ourselves.
This essay previously appeared in the Washington Post.
Paul Krugman argues that we’ll give up our privacy because we want to emulate the rich, who are surrounded by servants who know everything about them:
Consider the Varian rule, which says that you can forecast the future by looking at what the rich have today — that is, that what affluent people will want in the future is, in general, something like what only the truly rich can afford right now. Well, one thing that’s very clear if you spend any time around the rich — and one of the very few things that I, who by and large never worry about money, sometimes envy — is that rich people don’t wait in line. They have minions who ensure that there’s a car waiting at the curb, that the maître d’ escorts them straight to their table, that there’s a staff member to hand them their keys and their bags are already in the room.
And it’s fairly obvious how smart wristbands could replicate some of that for the merely affluent. Your reservation app provides the restaurant with the data it needs to recognize your wristband, and maybe causes your table to flash up on your watch, so you don’t mill around at the entrance, you just walk in and sit down (which already happens in Disney World.) You walk straight into the concert or movie you’ve bought tickets for, no need even to have your phone scanned. And I’m sure there’s much more — all kinds of context-specific services that you won’t even have to ask for, because systems that track you know what you’re up to and what you’re about to need.
Daniel C. Dennett and Deb Roy look at our loss of privacy in evolutionary terms, and see all sorts of adaptations coming:
The tremendous change in our world triggered by this media inundation can be summed up in a word: transparency. We can now see further, faster, and more cheaply and easily than ever before — and we can be seen. And you and I can see that everyone can see what we see, in a recursive hall of mirrors of mutual knowledge that both enables and hobbles. The age-old game of hide-and-seek that has shaped all life on the planet has suddenly shifted its playing field, its equipment and its rules. The players who cannot adjust will not last long.
The impact on our organizations and institutions will be profound. Governments, armies, churches, universities, banks and companies all evolved to thrive in a relatively murky epistemological environment, in which most knowledge was local, secrets were easily kept, and individuals were, if not blind, myopic. When these organizations suddenly find themselves exposed to daylight, they quickly discover that they can no longer rely on old methods; they must respond to the new transparency or go extinct. Just as a living cell needs an effective membrane to protect its internal machinery from the vicissitudes of the outside world, so human organizations need a protective interface between their internal affairs and the public world, and the old interfaces are losing their effectiveness.
Citizen Lab has issued a report on China’s “Great Cannon” attack tool, used in the recent DDoS attack against GitHub.
We show that, while the attack infrastructure is co-located with the Great Firewall, the attack was carried out by a separate offensive system, with different capabilities and design, that we term the “Great Cannon.” The Great Cannon is not simply an extension of the Great Firewall, but a distinct attack tool that hijacks traffic to (or presumably from) individual IP addresses, and can arbitrarily replace unencrypted content as a man-in-the-middle.
The operational deployment of the Great Cannon represents a significant escalation in state-level information control: the normalization of widespread use of an attack tool to enforce censorship by weaponizing users. Specifically, the Cannon manipulates the traffic of “bystander” systems outside China, silently programming their browsers to create a massive DDoS attack. While employed for a highly visible attack in this case, the Great Cannon clearly has the capability for use in a manner similar to the NSA’s QUANTUM system, affording China the opportunity to deliver exploits targeting any foreign computer that communicates with any China-based website not fully utilizing HTTPS.
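The mechanics described in the report can be illustrated with a minimal conceptual sketch. This is not the Great Cannon’s actual code (which is not public); the payload, target URL, and injection rate below are all invented for illustration. The point it demonstrates is the one Citizen Lab stresses: an in-path attacker can substitute an attack payload into a fraction of unencrypted HTTP responses, but cannot do so to HTTPS traffic without breaking the TLS integrity checks.

```python
# Conceptual sketch only: an in-path system that rewrites a small fraction
# of unencrypted HTTP responses, turning ordinary visitors' browsers into
# DDoS participants. Payload, target, and rate are invented placeholders.
import random

ATTACK_JS = b"<script>setInterval(()=>fetch('https://victim.example/'),100)</script>"

def maybe_inject(response_body: bytes, is_encrypted: bool, rate: float = 0.02) -> bytes:
    """Return the response unchanged, or (sometimes) an attack payload."""
    # HTTPS responses can't be rewritten in transit without invalidating
    # the TLS record integrity check -- hence the report's advice that
    # sites not "fully utilizing HTTPS" remain exposed.
    if is_encrypted:
        return response_body
    # Citizen Lab observed only a small fraction of connections being
    # hijacked; model that as a probabilistic substitution.
    if random.random() < rate:
        return ATTACK_JS
    return response_body
```

The probabilistic substitution matters operationally: hijacking only a sliver of traffic keeps the interference hard for any individual user to notice or reproduce.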
It’s kind of hard for the US to complain about this kind of thing, since we do it too.
John Mueller suggests an alternative to the FBI’s practice of encouraging terrorists and then arresting them for something they would never have planned on their own:
The experience with another case can be taken to suggest that there could be an alternative, and far less costly, approach to dealing with would-be terrorists, one that might generally (but not always) be effective at stopping them without actually having to jail them.
It involves a hothead in Virginia who ranted about jihad on Facebook, bragging about how “we dropped the twin towers.” He then told a correspondent in New Orleans that he was going to bomb the Washington, D.C. Metro the next day. Not wanting to take any chances and not having the time to insinuate an informant, the FBI arrested him. Not surprisingly, they found no bomb materials in his possession. Since irresponsible bloviating is not illegal (if it were, Washington would quickly become severely underpopulated), the police could only charge him with a minor crime — making an interstate threat. He received only a good scare, a penalty of time served and two years of supervised release.
That approach seems to have worked: the guy seems never to have been heard from again. It resembles the Secret Service’s response when they get a tip that someone has ranted about killing the president. They do not insinuate an encouraging informant into the ranter’s company to eventually offer crucial, if bogus, facilitating assistance to the assassination plot. Instead, they pay the person a Meaningful Visit and find that this works rather well as a dissuasion device. Also, in the event of a presidential trip to the ranter’s vicinity, the ranter is visited again. It seems entirely possible that this approach could productively be applied more widely in terrorism cases. Ranting about killing the president may be about as predictive of violent action as ranting about the virtues of terrorism to deal with a political grievance. The terrorism cases are populated by many such ranters — indeed, tips about their railing have frequently led to FBI involvement. It seems likely, as apparently happened in the Metro case, that the ranter could often be productively deflected by an open visit from the police indicating that they are on to him. By contrast, sending in a paid operative to worm his way into the ranter’s confidence may have the opposite result, encouraging, even gulling, him toward violence.
The Southern Poverty Law Center warns of the rise of lone-wolf terrorism.
From a security perspective, lone wolves are much harder to prevent because there is no conspiracy to detect.
The long-term trend away from violence planned and committed by groups and toward lone wolf terrorism is a worrying one. Authorities have had far more success penetrating plots concocted by several people than individuals who act on their own. Indeed, the lone wolf’s chief asset is the fact that no one else knows of his plans for violence and they are therefore exceedingly difficult to disrupt.
The temptation to focus on horrific groups like Al Qaeda and the Islamic State is wholly understandable. And the federal government recently has taken steps to address the terrorist threat more comprehensively, with Attorney General Eric Holder announcing the coming reconstitution of the Domestic Terrorism Executive Committee. There has been a recent increase in funding for studies of terrorism and radicalization, and the FBI has produced a number of informative reports.
And Holder seems to understand clearly that lone wolves and small cells are an increasing threat. “It’s something that frankly keeps me up at night, worrying about the lone wolf or a group of people, a very small group of people, who decide to get arms on their own and do what we saw in France,” he said recently.
Here’s an article on making secret phone calls with cell phones.
His step-by-step instructions for making a clandestine phone call are as follows:
- Analyze your daily movements, paying special attention to anchor points (basis of operation like home or work) and dormant periods in schedules (8-12 p.m. or when cell phones aren’t changing locations);
- Leave your daily cell phone behind during dormant periods and purchase a prepaid no-contract cell phone (“burner phone”);
- After storing burner phone in a Faraday bag, activate it using a clean computer connected to a public Wi-Fi network;
- Encrypt the cell phone number using a onetime pad (OTP) system and rename an image file with the encrypted code. Using Tor to hide your web traffic, post the image to an agreed upon anonymous Twitter account, which signals a communications request to your partner;
- Leave cell phone behind, avoid anchor points, and receive phone call from partner on burner phone at 9:30 p.m. — or another pre-arranged “dormant” time — on the following day;
- Wipe down and destroy handset.
Note that it actually makes sense to use a one-time pad in this instance. The message is a ten-digit number, and a one-time pad is easier, faster, and cleaner than using any computer encryption program.
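The pad arithmetic for a ten-digit number is simple enough to do on paper, but a short sketch makes the scheme concrete. This assumes the classic digit-wise mod-10 construction; the article doesn’t specify the exact scheme, and all function names here are illustrative.

```python
# Hypothetical sketch of the one-time-pad step: encrypt a ten-digit phone
# number with digit-wise addition mod 10. The pad must be truly random,
# shared in advance, and used exactly once.
import secrets

def make_pad(length: int = 10) -> str:
    """Generate a random pad of decimal digits (use once, then destroy)."""
    return "".join(str(secrets.randbelow(10)) for _ in range(length))

def otp_encrypt(number: str, pad: str) -> str:
    """Digit-wise addition mod 10 -- the pencil-and-paper OTP."""
    return "".join(str((int(n) + int(p)) % 10) for n, p in zip(number, pad))

def otp_decrypt(cipher: str, pad: str) -> str:
    """Digit-wise subtraction mod 10 recovers the original number."""
    return "".join(str((int(c) - int(p)) % 10) for c, p in zip(cipher, pad))

pad = make_pad()
cipher = otp_encrypt("2025551234", pad)   # placeholder number
assert otp_decrypt(cipher, pad) == "2025551234"
```

Because the message is only ten digits and the pad is as long as the message, this is information-theoretically secure and leaves no software trail, which is exactly why it beats a computer encryption program here.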
From Matthew Green, who is leading the project:
The TL;DR is that based on this audit, Truecrypt appears to be a relatively well-designed piece of crypto software. The NCC audit found no evidence of deliberate backdoors, or any severe design flaws that will make the software insecure in most instances.
That doesn’t mean Truecrypt is perfect. The auditors did find a few glitches and some incautious programming — leading to a couple of issues that could, in the right circumstances, cause Truecrypt to give less assurance than we’d like it to.
Nothing that would make me not use the program, though.
It’s April 1, and time for another Movie-Plot Threat Contest. This year, the theme is Crypto Wars II. Strong encryption is evil, because it prevents the police from solving crimes. (No, really — that’s the argument.) FBI Director James Comey is going to be hard to beat with his heartfelt litany of movie-plot threats:
“We’re drifting toward a place where a whole lot of people are going to be looking at us with tears in their eyes,” Comey argued, “and say ‘What do you mean you can’t? My daughter is missing. You have her phone. What do you mean you can’t tell me who she was texting with before she disappeared?’”
“I’ve heard tech executives say privacy should be the paramount virtue,” Comey said. “When I hear that, I close my eyes and say, ‘Try to imagine what that world looks like where pedophiles can’t be seen, kidnappers can’t be seen, drug dealers can’t be seen.'”
Come on, Comey. You might be able to scare noobs like Rep. John Carter with that talk, but you’re going to have to do better if you want to win this contest. We heard this same sort of stuff out of then-FBI director Louis Freeh in 1996 and 1997.
This is the contest: I want a movie-plot threat that shows the evils of encryption. (For those who don’t know, a movie-plot threat is a scary-threat story that would make a great movie, but is much too specific to build security policies around. Contest history here.) We’ve long heard about the evils of the Four Horsemen of the Internet Apocalypse — terrorists, drug dealers, kidnappers, and child pornographers. (Or maybe they’re terrorists, pedophiles, drug dealers, and money launderers; I can never remember.) Try to be more original than that. And nothing too science fictional; today’s technology or presumed technology only.
Entries are limited to 500 words — I check — and should be posted in the comments. At the end of the month, I’ll choose five or so semifinalists, and we can all vote and pick the winner.
The prize will be signed copies of the 20th Anniversary Edition of the 2nd Edition of Applied Cryptography, and the 15th Anniversary Edition of Secrets and Lies, both being published by Wiley this year in an attempt to ride the Data and Goliath bandwagon.
Pew Research has a new survey on Americans’ privacy habits in a post-Snowden world.
The 87% of those who had heard at least something about the programs were asked follow-up questions about their own behaviors and privacy strategies:
34% of those who are aware of the surveillance programs (30% of all adults) have taken at least one step to hide or shield their information from the government. For instance, 17% changed their privacy settings on social media; 15% use social media less often; 15% have avoided certain apps and 13% have uninstalled apps; 14% say they speak more in person instead of communicating online or on the phone; and 13% have avoided using certain terms in online communications.
25% of those who are aware of the surveillance programs (22% of all adults) say they have changed the patterns of their own use of various technological platforms “a great deal” or “somewhat” since the Snowden revelations. For instance, 18% say they have changed the way they use email “a great deal” or “somewhat”; 17% have changed the way they use search engines; 15% say they have changed the way they use social media sites such as Twitter and Facebook; and 15% have changed the way they use their cell phones.
Also interesting are the people who have not changed their behavior because they’re afraid that it would lead to more surveillance. From pages 22-23 of the report:
Still, others said they avoid taking more advanced privacy measures because they believe that taking such measures could make them appear suspicious:
“There’s no point in inviting scrutiny if it’s not necessary.”
“I didn’t significantly change anything. It’s more like trying to avoid anything questionable, so as not to be scrutinized unnecessarily.”
“[I] don’t want them misunderstanding something and investigating me.”
There’s also data about how Americans feel about government surveillance:
This survey asked the 87% of respondents who had heard about the surveillance programs: “As you have watched the developments in news stories about government monitoring programs over recent months, would you say that you have become more confident or less confident that the programs are serving the public interest?” Some 61% of them say they have become less confident the surveillance efforts are serving the public interest after they have watched news and other developments in recent months and 37% say they have become more confident the programs serve the public interest. Republicans and those leaning Republican are more likely than Democrats and those leaning Democratic to say they are losing confidence (70% vs. 55%).
Moreover, there is a striking divide among citizens over whether the courts are doing a good job balancing the needs of law enforcement and intelligence agencies with citizens’ right to privacy: 48% say courts and judges are balancing those interests, while 49% say they are not.
At the same time, the public generally believes it is acceptable for the government to monitor many others, including foreign citizens, foreign leaders, and American leaders:
- 82% say it is acceptable to monitor communications of suspected terrorists
- 60% believe it is acceptable to monitor the communications of American leaders
- 60% think it is okay to monitor the communications of foreign leaders
- 54% say it is acceptable to monitor communications from foreign citizens
Yet, 57% say it is unacceptable for the government to monitor the communications of U.S. citizens. At the same time, majorities support monitoring of those particular individuals who use words like “explosives” and “automatic weapons” in their search engine queries (65% say that) and those who visit anti-American websites (67% say that).
Overall, 52% describe themselves as “very concerned” or “somewhat concerned” about government surveillance of Americans’ data and electronic communications, compared with 46% who describe themselves as “not very concerned” or “not at all concerned” about the surveillance.
It’s worth reading these results in detail. Overall, these numbers are consistent with a worldwide survey from December. The press is spinning this as “Most Americans’ behavior unchanged after Snowden revelations, study finds,” but I see something very different. I see a sizable percentage of Americans not only concerned about government surveillance, but actively doing something about it. “Third of Americans shield data from government.” Edward Snowden’s goal was to start a national dialog about government surveillance, and these surveys show that he has succeeded in doing exactly that.
In the US, certain types of warrants can come with gag orders preventing the recipient from disclosing the existence of the warrant to anyone else. A warrant canary is basically a legal hack of that prohibition. Instead of saying “I just received a warrant with a gag order,” the potential recipient keeps repeating “I have not received any warrants.” If the recipient stops saying that, the rest of us are supposed to assume that he has been served one.
Lots of organizations maintain them. Personally, I have never believed this trick would work. It relies on the fact that a prohibition against speaking doesn’t prevent someone from not speaking. But courts generally aren’t impressed by this sort of thing, and I can easily imagine a secret warrant that includes a prohibition against triggering the warrant canary. And for all I know, there are right now secret legal proceedings on this very issue.
Australia has sidestepped all of this by outlawing warrant canaries entirely:
Section 182A of the new law says that a person commits an offense if he or she discloses or uses information about “the existence or non-existence of such a [journalist information] warrant.” The penalty upon conviction is two years imprisonment.
Expect that sort of wording in future US surveillance bills, too.
This is a clever attack, using a black box that attaches to the iPhone via USB:
As you know, an iPhone keeps a count of how many wrong PINs have been entered, in case you have turned on the Erase Data option on the Settings | Touch ID & Passcode screen.
That’s a highly-recommended option, because it wipes your device after 10 passcode mistakes.
Even if you only set a 4-digit PIN, that gives a crook who steals your phone just a 10 in 10,000 chance, or 0.1%, of guessing your unlock code in time.
But this Black Box has a trick up its cable.
Apparently, the device uses a light sensor to work out, from the change in screen intensity, when it has got the right PIN.
In other words, it also knows when it gets the PIN wrong, as it will most of the time, so it can kill the power to your iPhone when that happens.
And the power-down happens quickly enough (it seems you need to open up the iPhone and bypass the battery so you can power the device entirely via the USB cable) that your iPhone doesn’t have time to subtract one from the “PIN guesses remaining” counter stored on the device.
Because every set of wrong guesses requires a reboot, the process takes about five days. Still, a very clever attack.
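The “about five days” figure checks out on the back of an envelope: every guess costs a full power-cut and reboot of the phone. The per-attempt time below is my own assumption for illustration; the article gives only the total.

```python
# Sanity-check the five-day estimate. Assumes ~45 seconds per attempt
# (power-cut + reboot + PIN entry) -- an assumed figure, not from the article.
SECONDS_PER_ATTEMPT = 45
TOTAL_PINS = 10_000              # every possible 4-digit code
SECONDS_PER_DAY = 86_400

worst_case_days = TOTAL_PINS * SECONDS_PER_ATTEMPT / SECONDS_PER_DAY
print(f"worst case: {worst_case_days:.1f} days")   # -> worst case: 5.2 days
```

Note this is the worst case of trying all 10,000 codes; on average the right PIN turns up about halfway through, so a typical run would take roughly half that.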