Posts tagged ‘fbi’

Schneier on Security: Fake Cell Phone Towers Across the US

This post was syndicated from: Schneier on Security and was written by: schneier. Original post: at Schneier on Security

Earlier this month, there were a bunch of stories about fake cell phone towers discovered around the US. These seem to be IMSI catchers, like Harris Corporation’s Stingray, and are used to capture location information and potentially phone calls, text messages, and smart-phone Internet traffic. A couple of days ago, the Washington Post ran a story about fake cell phone towers in politically interesting places around Washington DC. In both cases, researchers used security software that’s part of CryptoPhone from the German company GSMK. And in both cases, we don’t know who is running these fake cell phone towers. Is it the US government? A foreign government? Multiple foreign governments? Criminals?

This is the problem with building an infrastructure of surveillance: you can’t regulate who gets to use it. The FBI has been protecting Stingray like it’s an enormous secret, but it’s not a secret anymore. We are all vulnerable to everyone because the NSA wanted us to be vulnerable to them.

We have one infrastructure. We can’t choose a world where the US gets to spy and the Chinese don’t. We get to choose a world where everyone can spy, or a world where no one can spy. We can be secure from everyone, or vulnerable to anyone. And I’m tired of us choosing surveillance over security.

Krebs on Security: Medical Records For Sale in Underground Stolen From Texas Life Insurance Firm

This post was syndicated from: Krebs on Security and was written by: BrianKrebs. Original post: at Krebs on Security

How much are your medical records worth in the cybercrime underground? This week, KrebsOnSecurity discovered medical records being sold in bulk for as little as $6.40 apiece. The digital documents, several of which were obtained by sources working with this publication, were apparently stolen from a Texas-based life insurance company that now says it is working with federal authorities on an investigation into a possible data breach.

The “Fraud Related” section of the Evolution Market.

Purloined medical records are among the many illicit goods for sale on the Evolution Market, a black market bazaar that traffics mostly in narcotics and fraud-related goods — including plenty of stolen financial data. Evolution cannot be reached from the regular Internet. Rather, visitors can only browse the site using Tor, software that helps users disguise their identity by bouncing their traffic between different servers, and by encrypting that traffic at every hop along the way.

Last week, a reader alerted this author to a merchant on Evolution Market nicknamed “ImperialRussia” who was advertising medical records for sale. ImperialRussia was hawking his goods as “fullz” — street slang for a package of all the personal and financial records that thieves would need to fraudulently open up new lines of credit in a person’s name.

Each document for sale by this seller includes the would-be identity theft victim’s name, their medical history, address, phone and driver license number, Social Security number, date of birth, bank name, routing number and checking/savings account number. Customers can purchase the records using the digital currency Bitcoin.

A set of five fullz retails for $40 ($8 per record). Buy 20 fullz and the price drops to $7 per record. Purchase 50 or more fullz, and the per record cost falls to just $6.40 — roughly the price of a value meal at a fast food restaurant. Incidentally, even at $8 per record, that’s cheaper than the price most stolen credit cards fetch on the underground markets.

Imperial Russia’s ad on Evolution pimping medical and financial records stolen from a Texas life insurance firm.

“Live and Exclusive database of US FULLZ from an insurance company, particularly from NorthWestern region of U.S.,” ImperialRussia’s ad on Evolution enthuses. The pitch continues:

“Most of the fullz come with EXTRA FREEBIES inside as additional policyholders. All of the information is accurate and confirmed. Clients are from an insurance company database with GOOD to EXCELLENT credit score! I, myself was able to apply for credit cards valued from $2,000 – $10,000 with my fullz. Info can be used to apply for loans, credit cards, lines of credit, bank withdrawal, assume identity, account takeover.”

Sure enough, the source who alerted me to this listing had obtained numerous fullz from this seller. All of them contained the personal and financial information on people in the Northwest United States (mostly in Washington state) who’d applied for life insurance through American Income Life, an insurance firm based in Waco, Texas.

American Income Life referred all calls to the company’s parent firm — Torchmark Corp., an insurance holding company in McKinney, Texas. This publication shared with Torchmark the records obtained from Imperial Russia. In response, Michael Majors, vice president of investor relations at Torchmark, said that the FBI and Secret Service were assisting the company in an ongoing investigation, and that Torchmark expected to begin the process of notifying affected consumers this week.

“We’re aware of the matter and we’ve been working with law enforcement on an ongoing investigation,” Majors said, after reviewing the documents shared by KrebsOnSecurity. “It looks like we’re working on the same matter that you’re inquiring about.”

Majors declined to answer additional questions, such as whether Torchmark has uncovered the source of the data breach and stopped the leakage of customer records, or when the company believes the breach began. Interestingly, ImperialRussia’s first post offering this data is dated more than three months ago, on June 15, 2014. Likewise, the insurance application documents shared with Torchmark by this publication also were dated mid-2014.

The financial information in the stolen life insurance applications includes the checking and/or savings account information of the applicant, and is collected so that American Income can pre-authorize payments and automatic monthly debits in the event the policy is approved. In a four-page discussion thread on ImperialRussia’s sales page at Evolution, buyers of this stolen data took turns discussing the quality of the information and its various uses, such as how one can use automated phone systems to verify the available balance of an applicant’s bank account.

Jessica Johnson, a Washington state resident whose records were among those sold by ImperialRussia, said in a phone interview that she received a call from a credit bureau this week after identity thieves tried to open two new lines of credit in her name.

“It’s been a nightmare,” she said. “Yesterday, I had all these phone calls from the credit bureau because someone tried to open two new credit cards in my name. And the only reason they called me was because I already had a credit card with that company and the company thought it was weird, I guess.”

ImperialRussia discusses his wares with potential and previous buyers.

More than 1.8 million people were victims of medical ID theft in 2013, according to a report from the Ponemon Institute, an independent research group. I suspect that many of these folks had their medical records stolen and used to open new lines of credit in their names, or to conduct tax refund fraud with the Internal Revenue Service (IRS).

Placing a fraud alert or freeze on your credit file is a great way to block identity thieves from hijacking your good name. For pointers on how to do that, as well as other tips on how to avoid becoming a victim of ID theft, check out this story.

Krebs on Security: LinkedIn Feature Exposes Email Addresses

This post was syndicated from: Krebs on Security and was written by: BrianKrebs. Original post: at Krebs on Security

One of the risks of using social media networks is having information you intend to share with only a handful of friends be made available to everyone. Sometimes that over-sharing happens because friends betray your trust, but more worrisome are the cases in which a social media platform itself exposes your data in the name of marketing.

LinkedIn has built much of its considerable worth on the age-old maxim that “it’s all about who you know”: As a LinkedIn user, you can directly connect with those you attest to knowing professionally or personally, but you can also ask to be introduced to someone you’d like to meet by sending a request through someone who bridges your separate social networks. Celebrities, executives or any other LinkedIn users who wish to avoid unsolicited contact requests may do so by selecting an option that forces the requesting party to supply the personal email address of the intended recipient.

LinkedIn’s entire social fabric begins to unravel if any user can directly connect to any other user, regardless of whether or how their social or professional circles overlap. Unfortunately for LinkedIn (and its users who wish to have their email addresses kept private), this is the exact risk introduced by the company’s built-in efforts to expand the social network’s user base.

According to researchers at the Seattle, Wash.-based firm Rhino Security Labs, at the crux of the issue is LinkedIn’s penchant for making sure you’re as connected as you possibly can be. When you sign up for a new account, for example, the service asks if you’d like to check your contacts lists at other online services (such as Gmail, Yahoo, Hotmail, etc.). The service does this so that you can connect with any email contacts that are already on LinkedIn, and so that LinkedIn can send invitations to your contacts who aren’t already users.

LinkedIn assumes that if an email address is in your contacts list, you must already know this person. But what if your entire reason for signing up with LinkedIn is to discover the private email addresses of famous people? All you’d need to do is populate your email account’s contacts list with hundreds of permutations of famous people’s names — including combinations of last names, first names and initials — in front of @gmail.com, @yahoo.com, @hotmail.com, etc. With any luck and some imagination, you may well be on your way to an A-list LinkedIn friends list (or a fantastic set of addresses for spear-phishing, stalking, etc.).
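
The permutation step is trivial to automate. Here is a minimal, hypothetical Python sketch of the idea; the target name, the local-part patterns and the webmail domains are illustrative assumptions, not details taken from the research.

```python
# Sketch of the address-permutation step described above. The name
# and domains are made up; real attacks would use far more patterns.
from itertools import product

def candidate_addresses(first, last,
                        domains=("gmail.com", "yahoo.com", "hotmail.com")):
    f, l = first.lower(), last.lower()
    # Common local-part patterns: joined names, dotted, underscored, initials.
    local_parts = {f + l, l + f, f + "." + l, f + "_" + l, f[0] + l, f + l[0]}
    return sorted(f"{lp}@{d}" for lp, d in product(local_parts, domains))

addrs = candidate_addresses("Jane", "Doe")
print(len(addrs))  # 6 local parts x 3 domains = 18 candidates
```

Every candidate then goes into the contacts list you hand to LinkedIn's import feature.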

LinkedIn lets you know which of your contacts aren’t members.

When you import your list of contacts from a third-party service or from a stand-alone file, LinkedIn will show you any profiles that match addresses in your contacts list. More significantly, LinkedIn helpfully tells you which email addresses in your contacts lists are not LinkedIn users.

It’s that last step that’s key to finding the email address of the targeted user to whom LinkedIn has just sent a connection request on your behalf. The service doesn’t explicitly tell you that person’s email address, but by comparing your email account’s contact list to the list of addresses that LinkedIn says don’t belong to any users, you can quickly figure out which address(es) on the contacts list correspond to the user(s) you’re trying to find.

Rhino Security founders Benjamin Caudill and Bryan Seely have a recent history of revealing how trust relationships between and among online services can be abused to expose or divert potentially sensitive information. Last month, the two researchers detailed how they were able to de-anonymize posts to Secret, an app-driven online service that allows people to share messages anonymously within their circle of friends, friends of friends, and publicly. In February, Seely more famously demonstrated how to use Google Maps to intercept FBI and Secret Service phone calls.

This time around, the researchers picked on Dallas Mavericks owner Mark Cuban to prove their point with LinkedIn. Using their low-tech hack, the duo was able to locate the Webmail address Cuban had used to sign up for LinkedIn. Seely said they found success in locating the email addresses of other celebrities using the same method about nine times out of ten.

“We created several hundred possible addresses for Cuban in a few seconds, using a Microsoft Excel macro,” Seely said. “It’s just a brute-force guessing game, but 90 percent of people are going to use an email address that includes components of their real name.”

The Rhino guys really wanted Cuban’s help in spreading the word about what they’d found, but instead of messaging Cuban directly, Seely pursued a more subtle approach: He knew Cuban’s latest start-up was Cyber Dust, a chat messenger app designed to keep your messages private. So, Seely fired off a tweet complaining that “Facebook Messenger crosses all privacy lines,” and that as a result he was switching to Cyber Dust.

When Mark Cuban retweeted Seely’s endorsement of Cyber Dust, Seely reached out to Cyber Dust CEO Ryan Ozonian, letting him know that he’d discovered Cuban’s email address on LinkedIn. In short order, Cuban was asking Rhino to test the security of Cyber Dust.

“Fortunately no major faults were found and those he found are already fixed in the coming update,” Cuban said in an email exchange with KrebsOnSecurity. “I like working with them. They look to help rather than exploit. We have learned from them and I think their experience will be valuable to other app publishers and networks as well.”

Whether LinkedIn will address the issues highlighted by Rhino Security remains to be seen. In an initial interview earlier this month, the social networking giant sounded unlikely to change anything in response.

Corey Scott, director of information security at LinkedIn, said very few of the company’s members opt in to the requirement that all new potential contacts supply the invitee’s email address before sending an invitation to connect. He added that email address-to-user mapping is a fairly common design pattern, that it is not particularly unique to LinkedIn, and that nothing the company does will prevent people from blasting emails to lists of addresses that might belong to a targeted user, hoping that one of them will hit home.

“Email address permutators, of which there are many of them on the ‘Net, have existed much longer than LinkedIn, and you can blast an email to all of them, knowing that most likely one of those will hit your target,” Scott said. “This is kind of one of those challenges that all social media companies face in trying to prevent the abuse of [site] functionality. We have rate limiting, scoring and abuse detection mechanisms to prevent frequent abusers of this service, and to make sure that people can’t validate spam lists.”

In an email sent to this reporter last week, however, LinkedIn said it was planning at least two changes to the way its service handles user email addresses.

“We are in the process of implementing two short-term changes and one longer term change to give our members more control over this feature,” Linkedin spokeswoman Nicole Leverich wrote in an emailed statement. “In the next few weeks, we are introducing new logic models designed to prevent hackers from abusing this feature. In addition, we are making it possible for members to ask us to opt out of being discoverable through this feature. In the longer term, we are looking into creating an opt-out box that members can choose to select to not be discoverable using this feature.”

Krebs on Security: Dread Pirate Sunk By Leaky CAPTCHA

This post was syndicated from: Krebs on Security and was written by: BrianKrebs. Original post: at Krebs on Security

Ever since October 2013, when the FBI took down the online black market and drug bazaar known as the Silk Road, privacy activists and security experts have traded conspiracy theories about how the U.S. government managed to discover the geographic location of the Silk Road Web servers. Those systems were supposed to be obscured behind the anonymity service Tor, but as court documents released Friday explain, that wasn’t entirely true: Turns out, the login page for the Silk Road employed an anti-abuse CAPTCHA service that pulled content from the open Internet, thus leaking the site’s true location.

Tor helps users disguise their identity by bouncing their traffic between different Tor servers, and by encrypting that traffic at every hop along the way. The Silk Road, like many sites that host illicit activity, relied on a feature of Tor known as “hidden services.” This feature allows anyone to offer a Web server without revealing the true Internet address to the site’s users.
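
The “encrypting at every hop” idea can be pictured as nested layers, with each relay peeling off exactly one. A toy Python sketch, using XOR purely as a stand-in for real cryptography; actual Tor negotiates proper symmetric keys per circuit:

```python
# Toy model of onion layering: the client wraps one layer per relay;
# each relay removes only its own. XOR stands in for real encryption.
def wrap(msg: bytes, keys) -> bytes:
    for k in reversed(keys):          # outermost layer = entry relay's key
        msg = bytes(b ^ k for b in msg)
    return msg

def peel(msg: bytes, key: int) -> bytes:
    return bytes(b ^ key for b in msg)

relay_keys = [3, 5, 7]                # entry, middle, exit (illustrative)
cell = wrap(b"GET /login", relay_keys)
for k in relay_keys:                  # each relay in turn strips a layer
    cell = peel(cell, k)
print(cell)  # b'GET /login'
```

No single relay sees both who sent the cell and what it finally contains, which is the property the whole design rests on.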

That is, if you do it correctly, which involves making sure you aren’t mixing content from the regular open Internet into the fabric of a site protected by Tor. But according to federal investigators,  Ross W. Ulbricht — a.k.a. the “Dread Pirate Roberts” and the 30-year-old arrested last year and charged with running the Silk Road — made this exact mistake.

As explained in the Tor how-to, in order for the Internet address of a computer to be fully hidden on Tor, the applications running on the computer must be properly configured for that purpose. Otherwise, the computer’s true Internet address may “leak” through the traffic sent from the computer.
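
In practice, “properly configured” includes making sure the hidden service’s backend web server answers only on localhost, where the local Tor daemon can reach it. A minimal Python sketch of the distinction; the handler and port choice are arbitrary, and a real deployment would pair this with a matching HiddenServicePort line in torrc:

```python
# Illustrative: a hidden service backend should bind to 127.0.0.1 so it
# is reachable only via the local Tor daemon. Binding to 0.0.0.0 would
# also answer on the server's public IP, leaking its true address in
# exactly the way described in the court filing.
import http.server
import socketserver

SAFE_BIND = ("127.0.0.1", 0)    # loopback only; port 0 = pick any free port
# LEAKY_BIND = ("0.0.0.0", 80)  # every interface: deanonymizing

with socketserver.TCPServer(SAFE_BIND, http.server.SimpleHTTPRequestHandler) as httpd:
    host, port = httpd.server_address
    print(f"serving on {host}:{port}")  # a real server would call httpd.serve_forever()
```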

And this is how the feds say they located the Silk Road servers:

“The IP address leak we discovered came from the Silk Road user login interface. Upon examining the individual packets of data being sent back from the website, we noticed that the headers of some of the packets reflected a certain IP address not associated with any known Tor node as the source of the packets. This IP address (the “Subject IP Address”) was the only non-Tor source IP address reflected in the traffic we examined.”

“The Subject IP Address caught our attention because, if a hidden service is properly configured to work on Tor, the source IP address of traffic sent from the hidden service should appear as the IP address of a Tor node, as opposed to the true IP address of the hidden service, which Tor is designed to conceal. When I typed the Subject IP Address into an ordinary (non-Tor) web browser, a part of the Silk Road login screen (the CAPTCHA prompt) appeared. Based on my training and experience, this indicated that the Subject IP Address was the IP address of the SR Server, and that it was ‘leaking’ from the SR Server because the computer code underlying the login interface was not properly configured at the time to work on Tor.”

For many Tor fans and advocates, The Dread Pirate Roberts’ goof will no doubt be labeled a noob mistake — and perhaps it was. But as I’ve said time and again, staying anonymous online is hard work, even for those of us who are relatively experienced at it. It’s so difficult, in fact, that even hardened cybercrooks eventually slip up in important and often fateful ways (that is, if someone or something was around at the time to keep a record of it).

A copy of the government’s declaration on how it located the Silk Road servers is here (PDF). A hat tip to Nicholas Weaver for the heads up about this filing.

A snapshot of offerings on the Silk Road.

TorrentFreak: In The Fappening’s Wake, 4chan Intros DMCA Policy

This post was syndicated from: TorrentFreak and was written by: Andy. Original post: at TorrentFreak

Every now and again a phenomenon takes the Internet by storm. They’re situations that the term ‘going viral’ was made for. A couple of weeks ago it was ice buckets, and since the weekend it’s been leaked celebrity pictures.

The event, which needs little introduction, saw the iCloud accounts of many prominent female celebrities accessed illegally and their personal (in many cases intimately so) photographs leaked online. The FBI is investigating, and for the leakers this probably isn’t going to end well.

But for the users of 4chan this leak, which was rumored to have begun on the board itself, was the gift that just kept on giving. Excited users quickly came up with a portmanteau based on ‘happening’ plus ‘fapping’ and The Fappening was born, a prelude to taking the Internet by storm.

While the event itself appears to be dying down, the leak and the worldwide attention it bestowed on 4chan may have prompted a surprise decision by the site’s operator. Whether the leak was directly responsible will become clear in due course (we’ve reached out to the site for a response), but sometime yesterday 4chan introduced a DMCA policy.

The policy registers a DMCA agent for 4chan, which helps to afford the site safe harbor protection under the Digital Millennium Copyright Act. Although not yet listed in the numerical section of Copyright.gov, the designated agent will now become the point of contact for copyright complaints and DMCA notices when content owners believe that their ownership rights have been violated on 4chan.

While most US-based user-generated content websites should not contemplate operating without safe harbor protection, the way 4chan is set up provides a unique scenario in respect of infringing content posted by its users.

“Threads expire and are pruned by 4chan’s software at a relatively high rate. Since most boards are limited to eleven or sixteen pages, content is usually available for only a few hours or days before it is removed,” the site’s FAQ explains.

4chan’s Chris Poole (‘moot’) previously told the Washington Post his deletion policy was both a necessary evil and a plus for the site.

“It’s one of the few sites that has no memory. It’s forgotten the next day,” he said.

Despite the board’s userbase being notoriously rebellious, the deletion policy appears to work well. To date Google’s Transparency Report lists takedowns for just 706 URLs.

“I don’t have resources like YouTube to deal with $1 billion lawsuit with Viacom,” Poole said in 2012. “Don’t store what you absolutely don’t need. People are pre-disposed to wanting to store everything.”

Of course, it’s not only companies such as Viacom on the warpath. Yesterday a spokesman for Jennifer Lawrence said that the authorities had been contacted and anyone found posting ‘stolen’ photos of the actress online would be prosecuted.

While the scope of that action isn’t entirely clear, many of the leaked photos were ‘selfies’ to which Lawrence has first shout on copyright. They’re still being posted on hundreds if not thousands of Internet sites even today, so having a DMCA policy in place will help those sites avoid liability, even if in 4chan’s case the images are only present for a few hours.

In the meantime, sites such as The Pirate Bay, which care substantially less about copyright law than 4chan does today, are continuing to spread the full currently-available ‘Fappening’ archives at a rapid rate. Statistics collected by TorrentFreak suggest that the packs have been downloaded well over a million times.

Source: TorrentFreak, for the latest info on copyright, file-sharing and anonymous VPN services.

TorrentFreak: Can We Publicly Confess to Online Piracy Crimes?

This post was syndicated from: TorrentFreak and was written by: Andy. Original post: at TorrentFreak

Last week’s leak of The Expendables 3 was a pretty big event in the piracy calendar and, as TF explained to inquiring reporters, that is only achieved by getting the right mix of ingredients.

First and foremost, the movie was completely unreleased meaning that private screenings aside, it had never hit a theater anywhere in the world. Getting a copy of a movie at this stage is very rare indeed. Secondly, the quality of the leaked DVD was very good indeed.

Third, and we touched on this earlier, are the risks involved in becoming part of the online distribution mechanism for something like this. Distributing potentially unfinished copies of yet-to-be-released flicks can be a very serious matter indeed, with custodial sentences available to the authorities.

And yet this week, David Pierce, Assistant Managing Editor at The Verge, wrote an article in which he admitted torrenting The Expendables 3 via The Pirate Bay.

Pirate confessions – uncut

“The Expendables 3 comes out August 15th in thousands of theaters across America. I watched it Friday afternoon on my MacBook Air on a packed train from New York City to middle-of-nowhere Connecticut. I watched it again on the ride back. And I’m already counting down the days until I can see it in IMAX,” he wrote.

Pierce’s article, and it’s a decent read, talks about how the movie really needs to be seen on the big screen. It’s a journey into why piracy can act as promotion and how the small screen experience rarely compensates for seeing this kind of movie in the “big show” setting.

Pierce is a great salesman and makes a good case but that doesn’t alter the fact that he just admitted to committing what the authorities see as a pretty serious crime.

The Family Entertainment and Copyright Act of 2005 refers to it as “the distribution of a work being prepared for commercial distribution, by making it available on a computer network accessible to members of the public, if such person knew or should have known that the work was intended for commercial distribution.”

The term “making it available” refers to uploading and although one would like to think that punishments would be reserved only for initial leakers (if anyone), the legislation fails to specify. It seems that merely downloading and sharing the movie using BitTorrent could be enough to render a user criminally liable, as this CNET article from 2005 explains.

So with the risks as they are, why would Pierce put his neck on the line?

Obviously, he wanted to draw attention to the “big screen” points mentioned above, and a confession like this attracts plenty of readers. It’s also possible he just wasn’t aware of the significance of the offense. Sadly, our email to Pierce earlier in the week went unanswered, so we can’t say for sure.

But here’s the thing.

There can be few people in the public eye, journalists included, who would admit to stealing clothes from a Paris fashion show in order to promote Versace’s consumer lines when they come out next season.

And if we wrote a piece about how we liberated a Honda Type R prototype from the Geneva Motor Show in order to boost sales ahead of its consumer release next year, we’d be decried as Grand Theft Auto’ists in need of discipline.

What this seems to show is that in spite of a decade-and-a-half’s worth of “piracy is theft” propaganda, educated and eloquent people such as David Pierce still believe that it is not, to the point where pretty serious IP crimes can be confessed to in public.

At the very least, the general perception is that torrenting The Expendables 3 is morally detached from picking up someone’s real-life property and heading for the hills. And none of us would admit to the latter, would we?

Hollywood and the record labels will be furious that this mentality persists after years of promoting the term “intellectual property” and while Lionsgate appear to have picked their initial targets (and the FBI will go after the initial leakers), the reality is that despite the potential for years in jail, it’s extremely unlikely the feds will be turning up at the offices of The Verge to collar Pierce. Nor will they knock on the doors of an estimated two million other Expendables pirates either.

And everyone knows it.

As a result, what we have here is a crazily brave confession of an article from Pierce which underlines that good movies are meant to be seen properly, and that people who pirate do go on to become customers if the product is right. And, furthermore, those customers promote that content to their peers, such as the guy on the train who looked over Pierce’s shoulder while he was viewing his pirate booty.

“He won’t be the last person I tell to go see The Expendables 3 when it hits theaters in August,” Pierce wrote. “And I’ll be there with them, opening night. I know the setlist now, I know all the songs by heart, but I still want to see the show.”

Pierce’s initial piracy was illegal, no doubt, but when all is said and done (especially considering his intent to promote and invest in the movie) it hardly feels worthy of a stay in the slammer. I venture that the majority would agree – and so the cycle continues.

TorrentFreak: Feds Receive Requests to Shut Down The Pirate Bay

This post was syndicated from: TorrentFreak and was written by: Ernesto. Original post: at TorrentFreak

There is no doubt that copyright holders repeatedly press the authorities to take action against The Pirate Bay.

So, when a Pirate Bay-related Freedom of Information request was sent to Homeland Security’s National Intellectual Property Rights Coordination Center, we expected to see letters from the major music labels and Hollywood studios. Interestingly that was not the case.

In late June, Polity News asked Homeland Security to reveal all the information the center holds on the notorious torrent site. Earlier this week the responses were received, mostly consisting of requests from individuals to shut down The Pirate Bay.

In total the center received 15 emails, and all appear to have been forwarded by the FBI, where they were apparently first sent. Some of the emails only list a few pirate site domains but others are more specific in calling for strong action against The Pirate Bay.

“Why don’t you seize all THE PIRATE BAY domains? Starting with thepiratebay.se. You have no idea how much good that would do to writers, artists, musicians, designers, inventors, software developers, movie people and our global economy in general,” one email reads.

The emails are all redacted but the content of the requests sometimes reveals who the sender might be. The example below comes from the author of “The Crystal Warrior,” which is probably the New Zealand author Maree Anderson.

“The Pirate Bay states that it can’t be held responsible for copyright infringement as it is a torrent site and doesn’t store the files on its servers. However the epub file of my published novel The Crystal Warrior has been illegally uploaded there,” the email reads.

The author adds that she takes a strong stand against piracy, but that her takedown notices are ignored by The Pirate Bay. She hopes that the authorities can take more effective action.

“Perhaps you would have more luck in putting pressure on them than one individual like myself. And if you are unable to take further action, I hope this notification will put The Pirate Bay in your sights so you can keep an eye on them,” the author adds.

Most of the other requests include similar calls to action and appear to come from individual copyright holders. However, there is also a slightly more unusual request.

The email in question comes from the mother of a 14-year-old boy whose father is said to frequently pirate movies and music. The mother says she already visited an FBI office to report the man and is now seeking further advice. Apparently she previously reached out to the MPAA, but they weren’t particularly helpful.

“MPAA only wanted to know where he was downloading and could not help. I ask you what can I do, as a parent, to prevent a 14-year-old from witnessing such a law breaking citizen in his own home?” the mother writes.

“It is not setting a good example for him and I don’t think that it is right to subject him to this cyber crime. Devices on websites used: www.piratebay.com for downloads and www.LittleSnitch.com so he won’t be detected. This is not right. Any help would be appreciated,” she adds.

All of the revealed requests were sent between 2012 and 2014. Thus far, however, neither the Department of Homeland Security nor the FBI has taken any action against The Pirate Bay.

Whether the pirating dad is still on the loose remains unknown for now, but chances are he’s still sharing music and movies despite the FBI referral.

Source: TorrentFreak, for the latest info on copyright, file-sharing and anonymous VPN services.

Errata Security: Cliché: open-source is secure

This post was syndicated from: Errata Security and was written by: Robert Graham. Original post: at Errata Security

Some in cybersec keep claiming that open-source is inherently more secure or trustworthy than closed-source. This is demonstrably false.

Firstly, there is the problem of usability. Unusable crypto isn’t a valid option for most users. Most would rather not communicate at all, or risk going to jail, than deal with the typical dependency hell of trying to get open-source to compile. Moreover, open-source apps are notoriously user-hostile, which is why the Linux desktop still hasn’t made headway against Windows or Macintosh. The reason is that developers blame users for being stupid for not appreciating how easy their apps are, whereas Microsoft and Apple spend $billions on usability studies actually listening to users. Desktops like Ubuntu are pretty good — but only when they exactly copy Windows/Macintosh. Ubuntu still doesn’t invest in the usability studies that Microsoft/Apple do.
The second problem is deterministic builds. If I want to install an app on my iPhone or Android, the only usable way is through their app stores. This means downloading the binary, not the source. Without deterministic builds, there is no way to verify the downloaded binary matches the public source. The binary may, in fact, be compiled from different source containing a backdoor. This means a malicious company (or an FBI NSL letter) can backdoor open-source binaries as easily as closed-source binaries.
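To make the deterministic-build point concrete, here is a minimal Python sketch of what such a check would buy: if a build is reproducible, anyone compiling the published source gets a byte-identical binary, so its hash must match the store download, and a backdoored binary stands out immediately. The byte strings below are stand-ins for real build artifacts, not actual binaries.

```python
import hashlib

def sha256_hex(data: bytes) -> str:
    """SHA-256 digest of a build artifact."""
    return hashlib.sha256(data).hexdigest()

# Stand-in bytes for a store download and an independent rebuild
# from the published source (hypothetical contents).
store_binary = b"\x7fELF hypothetical app binary v1.0"
rebuilt_binary = b"\x7fELF hypothetical app binary v1.0"

# With deterministic builds these must be byte-identical, so the
# digests match; any divergence reveals a tampered binary.
print(sha256_hex(store_binary) == sha256_hex(rebuilt_binary))  # True
```

Without reproducibility, the two digests tell you nothing: a mismatch could just be a different compiler flag, which is exactly the cover a backdoored build needs.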
The third problem is code-review. People trust open-source because they can see for themselves if it has any bugs. Or, if not themselves, they have faith that others are looking at the code (“many eyes make bugs shallow”). Yet, this rarely happens. We repeatedly see bugs giving backdoor access (‘vulns’) that remain undetected in open-source projects for years, such as the OpenSSL Heartbleed bug. The simple fact is that people aren’t looking at open-source. Those qualified to review code would rather be writing their own code. The opposite is true for closed-source, where they pay people to review code. While engineers won’t review code for fame/glory, they will for money. Given two products, one open and the other closed, it’s impossible to guess which has had more “eyes” looking at the source — in many cases, it’s the closed-source that has been better reviewed.
What’s funny about this open-source bigotry is that it leads to very bad solutions. A lot of people I know use the libpurple open-source library and the jabber.ccc.de server (run by CCC hacking club). People have reviewed the libpurple source and have found it extremely buggy, and chat apps don’t pin SSL certificates, meaning any SSL encryption to the CCC server can easily be intercepted. In other words, the open-source alternative is known to be incredibly insecure, yet people still use it, because “everyone knows” that open-source is more secure than closed-source.
Wickr and SilentCircle are two secure messaging/phone apps that I use, for the simple fact that they work both on Android and iPhone, and both are easy to use. I’ve read their crypto algorithms, so I have some assurance that they are doing things right. SilentCircle has open-sourced part of their code, which looks horrible, so it’s probable they have some 0day lurking in there somewhere, but it’s really no worse than equivalent code. I do know that both companies have spent considerable resources on code review, so I know at least as many “eyes” have reviewed their code as open-source. Even if they showed me their source, I’m not going to read it all — I’ve got more important things to do, like write my own source.
Thus, I see no benefit to open-source in this case. Except for Cryptocat, all the open-source messaging apps I’ve used have been buggy and hard to use. But, you can easily change my mind: just demonstrate an open-source app where more eyes have reviewed the code, or a project that has deterministic builds, or a project that is easier to use, or some other measurable benefit.
Of course, I write this as if the argument was about the benefits of open-source. We all know this doesn’t matter. As the EFF teaches us, it’s not about benefits, but which is ideologically pure; that open-source is inherently more ethical than closed-source.

Errata Security: Um, talks are frequently canceled at hacker cons

This post was syndicated from: Errata Security and was written by: Robert Graham. Original post: at Errata Security

Talks are frequently canceled at hacker conventions. It’s the norm. I had to cancel once because, on the flight into Vegas, a part fell off the plane forcing an emergency landing. Last weekend, I filled in at HopeX with a talk, replacing somebody else who had to cancel.

I point this out because of stories like this one hyping the canceled Tor talk at BlackHat. Its title says the talk was “Suddenly Canceled”. The adverb “suddenly” is clearly an attempt to hype the story, since there is no way to slowly cancel a talk.
The researchers are academics at Carnegie-Mellon University (CMU). There are good reasons why CMU might have to cancel the talk. The leading theory is that it might violate prohibitions against experiments on unwilling human subjects. There also may be violations of wiretap laws. In other words, the most plausible reasons why CMU might cancel the talk have nothing to do with trying to suppress research.
Suppressing research, because somebody powerful doesn’t want it to be published, is the only reason cancelations are important. It’s why the Boston MTA talk was canceled, because they didn’t want it revealed how to hack transit cards. It’s why the Michael Lynn talk was (almost) canceled, because Cisco didn’t want things revealed.  It’s why I (almost) had a talk canceled, because TippingPoint convinced the FBI to come by my offices to threaten me (I gave the talk because I don’t take threats well). These are all newsworthy things.
The reporting on the Tor cancelation talk, however, is just hype, trying to imply something nefarious when there is no evidence.

TorrentFreak: Six Android Piracy Group Members Charged, Two Arrested

This post was syndicated from: TorrentFreak and was written by: Andy. Original post: at TorrentFreak

Assisted by police in France and the Netherlands, in the summer of 2012 the FBI took down three unauthorized Android app stores. Appbucket, Applanet and SnappzMarket all had their domains seized, the first action of its type in the Android scene.

For two years the United States Department of Justice has been releasing information on the case, and last evening came news of more charges and more arrests.

Assistant Attorney General Leslie R. Caldwell of the Justice Department’s Criminal Division announced the unsealing of three federal indictments in the Northern District of Georgia charging six members of Appbucket, Applanet and SnappzMarket for their roles in the unauthorized distribution of Android apps.

SnappzMarket

Joshua Ryan Taylor, 24, of Kentwood, Michigan, and Scott Walton, 28, of Cleveland, Ohio, two alleged members of SnappzMarket, were both arrested yesterday. They are due to appear before magistrates in Michigan and Ohio respectively.

An indictment returned on June 17 charges Gary Edwin Sharp II, 26, of Uxbridge, Massachusetts, along with Taylor and Walton, with one count of conspiracy to commit criminal copyright infringement. Sharp is also charged with two counts of criminal copyright infringement.

It’s alleged that the three men were members of SnappzMarket from May 2011 through August 2012 along with Kody Jon Peterson, 22, of Clermont, Florida. In April, Peterson pleaded guilty to one count of conspiracy to commit criminal copyright infringement. As part of his guilty plea he agreed to work undercover for the government.

Appbucket

Another indictment returned June 17 in Georgia charges James Blocker, 36, of Rowlett, Texas, with one count of conspiracy to commit criminal copyright infringement.

A former member of Appbucket, Blocker is alleged to have conspired with Thomas Allen Dye, 21, of Jacksonville, Florida; Nicholas Anthony Narbone, 26, of Orlando, Florida, and Thomas Pace, 38, of Oregon City, Oregon to distribute Android apps with a value of $700,000.

During March and April 2014, Dye, Narbone and Pace all pleaded guilty to conspiracy to commit criminal copyright infringement.

Applanet

A further indictment returned June 17 in Georgia charges Aaron Blake Buckley, 20, of Moss Point, Mississippi; David Lee, 29, of Chino Hills, California; and Gary Edwin Sharp II (also of Appbucket) with one count of conspiracy to commit criminal copyright infringement.

Lee is additionally charged with one count of aiding and abetting criminal copyright infringement and Buckley with one count of criminal copyright infringement.

All three identified themselves as former members of Applanet. The USDOJ claims that along with other members they are responsible for the illegal distribution of four million Android apps with a value of $17m. Buckley previously launched a fund-raiser in an effort to fight off the United States government.

“As a result of their criminal efforts to make money by ripping off the hard work and creativity of high-tech innovators, the defendants are charged with illegally distributing copyrighted apps,” said Assistant Attorney General Caldwell.

“The Criminal Division is determined to protect the labor and ingenuity of copyright owners and to keep pace with criminals in the modern, technological marketplace.”

A statement from the FBI’s Atlanta Field Office indicates that the FBI will pursue more piracy groups in future.

“The FBI will continue to provide significant investigative resources toward such groups engaged in such wholesale pirating or copyright violations as seen here,” Special Agent in Charge J. Britt Johnson said.

Source: TorrentFreak, for the latest info on copyright, file-sharing and anonymous VPN services.

Krebs on Security: Crooks Seek Revival of ‘Gameover Zeus’ Botnet

This post was syndicated from: Krebs on Security and was written by: BrianKrebs. Original post: at Krebs on Security

Cybercrooks today began taking steps to resurrect the Gameover ZeuS botnet, a complex crime machine that has been blamed for the theft of more than $100 million from banks, businesses and consumers worldwide. The revival attempt comes roughly five weeks after the FBI joined several nations, researchers and security firms in a global and thus far successful effort to eradicate it.

The researchers who helped dismantle Gameover Zeus said they were surprised that the botmasters didn’t fight back. Indeed, for the past month the crooks responsible seem to have kept a low profile.

But that changed earlier this morning when researchers at Malcovery [full disclosure: Malcovery is an advertiser on this blog] began noticing spam being blasted out with phishing lures that included zip files booby-trapped with malware.

Looking closer, the company found that the malware shares roughly 90 percent of its code base with Gameover Zeus. Part of what made the original GameOver ZeuS so difficult to shut down was its reliance in part on an advanced peer-to-peer (P2P) mechanism to control and update the bot-infected systems.

But according to Gary Warner, Malcovery’s co-founder and chief technologist, this new Gameover variant is stripped of the P2P code, and relies instead on an approach known as fast-flux hosting. Fast-flux is a kind of round-robin technique that lets botnets hide phishing and malware delivery sites behind an ever-changing network of compromised systems acting as proxies, in a bid to make the botnet more resilient to takedowns.
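Warner’s description of fast-flux can be sketched in a few lines. This is a toy model, not real botnet code: one domain is answered round-robin from a churning pool of compromised proxy IPs (illustrative RFC 5737 addresses), each answer carrying a short TTL so resolvers keep re-querying and takedown of any single proxy achieves little.

```python
import itertools

# A pool of compromised hosts acting as proxies for the real
# command-and-control server (addresses are documentation-only).
proxy_pool = ["192.0.2.10", "192.0.2.77", "198.51.100.3", "203.0.113.42"]
rotation = itertools.cycle(proxy_pool)

def resolve(domain: str, answers: int = 2) -> dict:
    """Return the next few proxy IPs for the flux domain.

    The short TTL (seconds) forces clients to re-resolve often,
    so the answer set keeps shifting across the pool.
    """
    return {"domain": domain, "ttl": 300,
            "ips": [next(rotation) for _ in range(answers)]}

for _ in range(3):
    print(resolve("update.example.com"))
```

Three consecutive lookups return different slices of the pool, which is the resilience property Warner describes: there is no single hosting box to seize.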

Like the original Gameover, however, this variant also includes a “domain name generation algorithm” or DGA, which is a failsafe mechanism that can be invoked if the botnet’s normal communications system fails. The DGA creates a constantly-changing list of domain names each week (gibberish domains that are essentially long jumbles of letters).

In the event that systems infected with the malware can’t reach the fast-flux servers for new updates, the code instructs the botted systems to seek out active domains from the list specified in the DGA. All the botmaster needs to do in this case to regain control over his crime machine is register just one of those domains and place the update instructions there.
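The failsafe works because bot and botmaster run the same deterministic code, so both can compute the week’s candidate list offline. The toy DGA below is hypothetical (the seed, letter mapping and domain count are invented, not Gameover’s actual algorithm), but it shows the mechanism:

```python
import datetime
import hashlib

def weekly_domains(seed, week, count=5):
    """Derive a deterministic list of gibberish domains for a given week.

    Every infected machine derives the same list from the shared seed,
    so the botmaster need only register one name to regain control.
    """
    iso_year, iso_week, _ = week.isocalendar()
    domains = []
    for i in range(count):
        material = f"{seed}:{iso_year}-{iso_week}:{i}".encode()
        digest = hashlib.sha256(material).hexdigest()
        # Map hex digits to letters to produce a gibberish label.
        name = "".join(chr(ord("a") + int(c, 16) % 26) for c in digest[:16])
        domains.append(name + ".com")
    return domains

print(weekly_domains("hypothetical-seed", datetime.date(2014, 7, 11)))
```

Defenders can run the same computation, which is why takedown efforts against DGA botnets involve pre-registering or sinkholing the predicted domains faster than the crooks can.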

Warner said the original Gameover botnet that was clobbered last month is still locked down, and that it appears whoever released this variant is essentially attempting to rebuild the botnet from scratch. “This discovery indicates that the criminals responsible for Gameover’s distribution do not intend to give up on this botnet even after suffering one of the most expansive botnet takeovers and takedowns in history,” Warner said.

Gameover is based on code from the ZeuS Trojan, an infamous family of malware that has been used in countless online banking heists. Unlike ZeuS — which was sold as a botnet creation kit to anyone who had a few thousand dollars in virtual currency to spend — Gameover ZeuS has since October 2011 been controlled and maintained by a core group of hackers from Russia and Ukraine. Those individuals are believed to have used the botnet in high-dollar corporate account takeovers that frequently were punctuated by massive distributed-denial-of-service (DDoS) attacks intended to distract victims from immediately noticing the thefts.

According to the Justice Department, Gameover has been implicated in the theft of more than $100 million in account takeovers. The department further alleges that the author of the ZeuS Trojan (and by extension the Gameover Zeus malware) is a Russian citizen named Evgeniy Mikhailovich Bogachev.

For more details, check out Malcovery’s blog post about this development.

Yevgeniy Bogachev (Evgeniy Mikhaylovich Bogachev), a.k.a. “lucky12345”, “slavik”, “Pollingsoon”. Source: FBI.gov “most wanted, cyber.”

TorrentFreak: Kim Dotcom Extradition Hearing Delayed Until 2015

This post was syndicated from: TorrentFreak and was written by: Andy. Original post: at TorrentFreak

The United States Government is keen to get its hands on Kim Dotcom. He stands accused of committing the biggest copyright-related crime ever seen through his now-defunct cloud storage site Megaupload.

But their access to the entrepreneur will have to wait.

According to Dotcom, his extradition hearing has now been delayed until February 16, 2015.

Delays and postponements have become recurring features of the criminal case being built against Dotcom in the United States.

A March 2013 date came and went without a promised hearing, as did another in November the same year, a delay which Dotcom said would “save Prime Minister John Key embarrassment during an election campaign.”

Another hearing date for April 2014 also failed to materialize and now the date penciled in for the coming weeks has also been struck down.

Dotcom also reports that he still hasn’t received a copy of the data that was unlawfully sent to the FBI by New Zealand authorities.

Source: TorrentFreak, for the latest info on copyright, file-sharing and anonymous VPN services.

TorrentFreak: Dotcom Encryption Keys Can’t Be Given to FBI, Court Rules

This post was syndicated from: TorrentFreak and was written by: Andy. Original post: at TorrentFreak

During the raid more than two years ago on his now-famous mansion, police in New Zealand seized 135 computers and drives belonging to Kim Dotcom.

In May 2012 during a hearing at Auckland’s High Court, lawyer Paul Davison QC demanded access to the data stored on the confiscated equipment, arguing that without it Dotcom could not mount a proper defense.

The FBI objected to the request due to some of the data being encrypted. However, Dotcom refused to hand over the decryption passwords unless the court guaranteed him access to the data. At this point it was revealed that despite assurances from the court to the contrary, New Zealand police had already sent copies of the data to U.S. authorities.

In May 2014, Davison was back in court arguing that New Zealand police should release copies of the data from the seized computers and drives, reiterating the claim that without the information Dotcom could not get a fair trial. The High Court previously ruled that the Megaupload founder could have copies, on the condition he handed over the encryption keys.

But while Dotcom subsequently agreed to hand over the passwords, that was on the condition that New Zealand police would not hand them over to U.S. authorities. Dotcom also said he couldn’t remember the passwords after all but may be able to do so if he gained access to prompt files contained on the drives.

The police agreed to give Dotcom access to the prompts but with the quid pro quo that the revealed passwords could be passed onto the United States, contrary to Dotcom’s wishes.

Today Justice Winkelmann ruled that if the police do indeed obtain the codes, they must not hand them over to the FBI, the reason being that the copies of the computers and drives should never have been sent to the United States in the first place.

While the ruling is a plus for Dotcom, the entrepreneur today expressed suspicion over whether the FBI even need the encryption codes.

“NZ Police is not allowed to provide my encryption password to the FBI,” he wrote on Twitter, adding, “As if they don’t have it already.”

Source: TorrentFreak, for the latest info on copyright, file-sharing and anonymous VPN services.

TorrentFreak: UK Cinemas Ban Google Glass Over Piracy Fears

This post was syndicated from: TorrentFreak and was written by: Ernesto. Original post: at TorrentFreak

The movie industry sees the illegal recording of movies as one of the biggest piracy threats and for years has gone to extremes to stop it.

It started well over a decade ago when visitors began sneaking handheld camcorders into theaters. These big clunkers were relatively easy to spot, but as time passed the recording devices grew smaller and easier to conceal.

Google Glass is one of the newest threats on the block. Earlier this year the FBI dragged a man from a movie theater in Columbus, Ohio, after theater staff presumed he was using Google Glass to illegally record a film. While the man wasn’t recording anything at all, the response from the cinema employees was telling.

This month Google Glass went on sale in the UK, and unlike their American counterparts, British cinemas have been quick to announce a blanket ban on the new gadget.

“Customers will be requested not to wear these into cinema auditoriums, whether the film is playing or not,” Phil Clapp, chief executive of the Cinema Exhibitors’ Association told the Independent.

The first Glass wearer at a Leicester Square cinema has already been instructed to stow his device, and more are expected to follow. Google Glass wearers with prescription lenses would be wise to take a pair of traditional glasses along if they want to enjoy a movie on the big screen.

Movie industry group FACT sees Google Glass and other new recording devices as significant threats and works in tandem with local cinemas to prevent films from being recorded.

“Developments in technology have led to smaller, more compact devices which have the capability to record sound and vision, including most mobile phones. FACT works closely with cinema operators and distributors to ensure that best practice is carried out to prevent and detect illegal recordings taking place,” the group says.

In recent years the UK movie industry has intensified its efforts to stop camcording, and not without success. In 2012 none of the illegally recorded movies that appeared online originated from a UK cinema, and several attempted recordings were successfully thwarted.

Last year, cinema staff helped UK police to arrest five people and another nine were sent home with cautions. As a thank you for these vigilant actions, the Film Distributors’ Association awarded 13 cinema employees with cash rewards of up to £500.

Source: TorrentFreak, for the latest info on copyright, file-sharing and anonymous VPN services.

Krebs on Security: 2014: The Year Extortion Went Mainstream

This post was syndicated from: Krebs on Security and was written by: BrianKrebs. Original post: at Krebs on Security

The year 2014 may well go down in the history books as the year that extortion attacks went mainstream. Fueled largely by the emergence of the anonymous online currency Bitcoin, these shakedowns are blurring the lines between online and offline fraud, and giving novice computer users a crash course in modern-day cybercrime.

An extortion letter sent to 900 Degrees Neapolitan Pizzeria in New Hampshire.

At least four businesses recently reported receiving “Notice of Extortion” letters in the U.S. mail. The letters say the recipient has been targeted for extortion, and threaten a range of negative publicity, vandalism and harassment unless the target agrees to pay a “tribute price” of one bitcoin (currently ~USD $561) by a specified date. According to the letter, that tribute price increases to 3 bitcoins (~$1,683) if the demand isn’t paid on time.

The ransom letters, which appear to be custom written for restaurant owners, threaten businesses with negative online reviews, complaints to the Better Business Bureau, harassing telephone calls, telephone denial-of-service attacks, bomb threats, fraudulent delivery orders, vandalism, and even reports of mercury contamination.

The missive encourages recipients to sign up with Coinbase – a popular bitcoin exchange – and to send the funds to a unique bitcoin wallet specified in the letter and embedded in the QR code that is also printed on the letter.

Interestingly, all three letters I could find that were posted online so far targeted pizza stores. At least two of them were mailed from Orlando, Florida.

The letters all say the amounts are due either on Aug. 1 or Aug. 15. Perhaps one reason the deadlines are so far off is that the attackers understand that not everyone has bitcoins, or even knows about the virtual currency.

“What the heck is a BitCoin?” wrote the proprietors of New Hampshire-based 900 Degrees Neapolitan Pizzeria, which posted a copy of the letter (above) on their Facebook page.

Sandra Alhilo, general manager of Pizza Pirates in Pomona, Calif., received the extortion demand on June 16.

“At first, I was laughing because I thought it had to be a joke,” Alhilo said in a phone interview. “It was funny until I went and posted it on our Facebook page, and then people put it on Reddit and the Internet got me all paranoid.”

Nicholas Weaver, a researcher at the International Computer Science Institute (ICSI) and at the University of California, Berkeley, said these extortion attempts cost virtually nothing and promise a handsome payoff for the perpetrators.

“From the fraudster’s perspective, the cost of these attacks is a stamp and an envelope,” Weaver said. “This type of attack could be fairly effective. Some businesses — particularly restaurant establishments — are very concerned about negative publicity and reviews. Bad Yelp reviews, tip-offs to the health inspector… that stuff works and isn’t hard to do.”

While some restaurants may be an easy mark for this sort of crime, Weaver said the extortionists in this case are tangling with a tough adversary — The U.S. Postal Service — which takes extortion crimes perpetrated through the U.S. mail very seriously.

“There is a lot of operational security that these guys might have failed at, because this is interstate commerce, mail fraud, and postal inspector territory, where the gloves come off,” Weaver said. “I’m willing to bet there are several tools available to law enforcement here that these extortionists didn’t consider.”

It’s not entirely clear if or why extortionists are picking on pizza establishments, but it’s probably worth noting that the grand-daddy of all pizza joints, Domino’s Pizza in France, recently found itself the target of a pricey extortion attack after hackers threatened to release the stolen details of more than 650,000 customers unless the company paid a ransom of approximately $40,000.

Meanwhile, Pizza Pirates’ Alhilo says the company has been working with the local U.S. Postal Inspector’s office, which was very interested in the letter. Alhilo said her establishment won’t be paying the extortionists.

“We have no intention of paying it,” she said. “Honestly, if it hadn’t been a slow day that Monday I might have just thrown the letter out because it looked like junk mail. It’s annoying that someone would try to make a few bucks like this on the backs of small businesses.”

A GREAT CRIME FOR CRIMINALS

Fueled largely by the relative anonymity of cryptocurrencies like Bitcoin, extortion attacks are increasingly being incorporated into all manner of cyberattacks today. Today’s thieves are no longer content merely to hijack your computer and bandwidth and steal all of your personal and financial data; increasingly, these crooks are likely to hold all of your important documents for ransom as well.

“In the early days, they’d steal your credit card data and then threaten to disclose it only after they’d already sold it on the underground,” said Alan Paller, director of research at the SANS Institute, a Bethesda, Md. based security training firm. “But today, extortion is the fastest way for the bad guys to make money, because it’s the shortest path from cybercrime to cash. It’s really a great crime for the criminals.”

Last month, the U.S. government joined private security companies and international law enforcement partners to dismantle a criminal infrastructure responsible for spreading Cryptolocker, a ransomware scourge that the FBI estimates stole more than $27 million from victims compromised by the file-encrypting malware.

Even as the ink was still drying on the press releases about the Cryptolocker takedown, a new variant of Cryptolocker — Cryptowall — was taking hold. These attacks encrypt the victim PC’s hard drive unless and until the victim pays an arbitrary amount specified by the perpetrators — usually a few hundred dollars worth of bitcoins. Many victims without adequate backups in place (or those whose backups also were encrypted) pay up.  Others, like the police department in the New Hampshire hamlet of Durham, are standing their ground.

The downside to standing your ground is that — unless you have backups of your data — the encrypted information is gone forever. When these attacks hit businesses, the results can be devastating. Code-hosting and project management services provider CodeSpaces.com was forced to shut down this month after a hacker gained access to its Amazon EC2 account and deleted most data, including backups. According to Computerworld, the devastating security breach happened over a span of 12 hours and initially started with a distributed denial-of-service attack followed by an attempt to extort money from the company.

A HIDDEN CRIME

Extortion attacks against companies operating in the technology and online space are nothing new, of course. Just last week, news came to light that mobile phone giant Nokia in 2007 paid millions to extortionists who threatened to reveal an encryption key to Nokia’s Symbian mobile phone source code.

Trouble is, the very nature of these scams makes it difficult to gauge their frequency or success.

“The problem with extortion is that the money is paid in order to keep the attack secret, and so if the attack is successful, there is no knowledge of the attack even having taken place,” SANS’s Paller said.

Traditionally, the hardest part about extortion has been getting paid and getting away with the loot. In the case of the crooks who extorted Nokia, the company paid the money, reportedly leaving the cash in a bag at an amusement park car lot. Police were tracking the drop-off location, but ultimately lost track of the blackmailers.

Anonymous virtual currencies like Bitcoin not only make it easier for extortionists to get paid, but they also make it easier and more lucrative for more American blackmailers to get in on the action. Prior to Bitcoin’s rise in popularity, the principal way that attackers extracted their ransom was by instructing victims to pay by wire transfer or reloadable prepaid debit cards — principally Greendot cards sold at retailers, convenience stores and pharmacies.

But unlike Bitcoin payments, these methods are easily traceable if the money is cashed out within the United States, the ICSI’s Weaver said.

“Bitcoin is their best available tool if they’re located in the United States,” Weaver said of extortionists. “Western Union can be traced at U.S. cashout locations, as can Greendot payments. Which means you either need an overseas partner [who takes half of the profit for his trouble] or Bitcoin.”

TorrentFreak: Movie Chain Bans Google Glass Over Piracy Fears

This post was syndicated from: TorrentFreak and was written by: Andy. Original post: at TorrentFreak

Ever since the concept became public there have been fears over potential misuse of Google Glass. The advent of the wearable computer has sparked privacy fears and perhaps unsurprisingly, concerns that it could be used for piracy.

Just this January the FBI dragged a man from a movie theater in Columbus, Ohio, after theater staff presumed his wearing of Google Glass was a sign that he was engaged in camcorder piracy.

While it’s possible the device could be put to that use, it’s now less likely that patrons of the Alamo Drafthouse movie theater chain will be able to do so without being noticed. Speaking with Deadline, company CEO and founder Tim League says the time is now right to exclude the active use of Glass completely.

“We’ve been talking about this potential ban for over a year,” League said.

“Google Glass did some early demos here in Austin and I tried them out personally. At that time, I recognized the potential piracy problem that they present for cinemas. I decided to put off a decision until we started seeing them in the theater, and that started happening this month.”

According to League, people won’t be forbidden from bringing Google Glass onto the company’s premises, nor will they be banned from wearing the devices. Only when the devices are switched on will there be a problem.

“Google Glass is officially banned from drafthouse auditoriums once lights dim for trailers,” League explained yesterday.

Asked whether people could use them with corrective lenses, League said that discretion would be used.

“It will be case by case, but if it is clear when they are on, clear when they are off, will likely be OK,” he said.

But despite the theater chain’s apparent flexibility towards the non-active use of the device, the ban does seem to go further than the official stance taken by the MPAA following the earlier Ohio incident.

“Google Glass is an incredible innovation in the mobile sphere, and we have seen no proof that it is currently a significant threat that could result in content theft,” the MPAA said in a statement.

However, recording a movie in a theater remains a criminal offense in the United States, so whether a crime has been committed will be the decision of law enforcement officers called to any ‘camming’ incident. Given the MPAA’s statement, then, it will be interesting to see whether the studios will encourage the police to pursue cases against future Google Glass users.

Source: TorrentFreak, for the latest info on copyright, file-sharing and anonymous VPN services.

Errata Security: Can I drop a pacemaker 0day?

This post was syndicated from: Errata Security and was written by: Robert Graham. Original post: at Errata Security

Can I drop a pacemaker 0day at DefCon that is capable of killing people?

Computers now run our cars. It’s now possible for a hacker to infect your car with a “virus” that can slam on the brakes in the middle of the freeway. Computers also run medical devices like pacemakers and insulin pumps, and it’s now becoming possible to assassinate somebody by stopping their pacemaker with a Bluetooth exploit.

The problem is that manufacturers are 20 years behind in terms of computer security. They don’t just have vulnerabilities, they have obvious vulnerabilities. That means not only can these devices be hacked, they can easily be hacked by teenagers. Vendors do something like put a secret backdoor password in a device believing nobody is smart enough to find it; then a kid finds it in under a minute using a simple program like “strings”.
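The “strings” trick is worth making concrete. The sketch below reproduces in a few lines of Python what the Unix strings tool does: extract printable runs from a binary, which is often all it takes to expose a hard-coded credential. The firmware blob and password here are made up for illustration.

```python
import re

# Mock "firmware" blob: opaque bytes with a hard-coded password
# embedded, the way a careless vendor might ship one. Both the
# blob and the credential are hypothetical.
firmware = b"\x7fELF\x01\x00" + b"\x00" * 64 + b"admin_pw=hunter2\x00" + b"\x00" * 64

def strings(blob, min_len=4):
    """What the Unix `strings` tool does: pull printable ASCII
    runs of min_len or more characters out of a binary."""
    return [m.decode() for m in re.findall(rb"[\x20-\x7e]{%d,}" % min_len, blob)]

# Grep the extracted strings for anything that looks like a credential.
candidates = [s for s in strings(firmware) if "pw" in s or "pass" in s]
print(candidates)  # ['admin_pw=hunter2']
```

No reverse engineering or exploit development is needed; the backdoor is sitting in the binary in plain text.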
Telling vendors about the problem rarely helps, because vendors don’t care. If they cared at all, they wouldn’t have put the vulnerabilities in their products to begin with. 30% of such products have easily discovered backdoors, which is something they should already care about, so telling them you’ve discovered they are one of the 30% won’t help.

Historically, we’ve dealt with vendor unresponsiveness through the process of “full disclosure”. If a vendor was unresponsive after being given a chance to fix the bug first, we simply published the bug (“dropped 0day”), either on a mailing list or during a talk at a hacker convention like DefCon. Only after full disclosure does a company take the problem seriously and fix it.

This process has worked well. If we look at the evolution of products from Windows to Chrome, the threat of 0day has caused vendors to vastly improve their products. Moreover, they now court 0day: Google pays a bounty for Chrome 0day, with no strings attached on how you might also maliciously use it.

So let’s say I’ve found a pacemaker with an obvious Bluetooth backdoor that allows me to kill a person, and a year after notifying the vendor, they still ignore the problem, continuing to ship vulnerable pacemakers to customers. What should I do? If I do nothing, more and more such pacemakers will ship, endangering more lives. If I disclose the bug, then hackers may use it to kill some people.

The problem is that dropping pacemaker 0day is so horrific that most people would readily agree it should be outlawed. But at the same time, without the threat of 0day, vendors will ignore the problem.

This is the question for groups that defend “coder’s rights”, like the EFF. Will they really defend coders in this hypothetical scenario, declaring that releasing 0day code is free speech that reveals problems of public concern? Or will they agree that such code should be suppressed in the name of public safety?

I ask this question because right now they are avoiding the issue, because whichever stance they take will anger a lot of people. This paper from the EFF seems to support disclosing 0days, but only in the abstract, not in the concrete scenario I describe. The EFF has a history of backing away from previous principles when they become unpopular. For example, they once fought against regulating the Internet as a public utility; now they fight for it in the name of net neutrality. Another example is selling 0days to the government, which the EFF criticizes. I doubt the EFF will continue to support disclosing 0days once they can kill people. The first time a child dies in a car crash caused by a hacker, every organization is going to run from “coder’s rights”.

By the way, it should be clear from the above on which side of this question I stand: for coder’s rights.

Update: Here’s another scenario. In Twitter discussions, people have said that the remedy for unresponsive vendors is to contact an organization like ICS-CERT, the DHS organization responsible for “control systems”. That doesn’t work, because ICS-CERT is itself a political, unresponsive organization.

The ICS-CERT doesn’t label “default passwords” as a “vulnerability”, despite the fact that it’s a leading cause of hacks, and a common feature of exploit kits. They claim that it’s the user’s responsibility to change the password, and not the fault of the vendor if they don’t.

Yet, disclosing default passwords is one of the things that vendors try to suppress. When a researcher reveals a default password in a control system, and a hacker exploits it to cause a power outage, it’s the researcher who will get blamed for revealing information that was not-a-vulnerability.
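To see why a shipped default password qualifies as a vulnerability in practice, consider how exploit kits use them: they simply walk a list of factory credentials against the target. Everything in this sketch, including the credential list and the device’s login check, is hypothetical.

```python
# Factory credential pairs of the sort exploit kits bundle.
# (These pairs, and the device below, are made up for illustration.)
DEFAULT_CREDS = [("admin", "admin"), ("admin", "1234"), ("root", "root")]

def try_login(check, creds=DEFAULT_CREDS):
    """Return the first default pair the device accepts, else None.
    `check` stands in for the device's real authentication call."""
    for user, password in creds:
        if check(user, password):
            return (user, password)
    return None

# A device whose owner never changed the factory login:
device = lambda u, p: (u, p) == ("admin", "1234")
print(try_login(device))  # ('admin', '1234')
```

No skill is required of the attacker; the "user's responsibility" framing ignores that most users never change the password at all.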

I say this because the FBI personally threatened me in order to suppress something that was not-a-vulnerability, yet which they claimed would hurt national security if I revealed it to Chinese hackers.

Again, the only thing that causes change is full disclosure. Everything else allows politics to suppress information vital to public safety.


Update: Some have suggested that moral and legal are two different arguments: that someone can call full disclosure immoral without necessarily arguing that it should be illegal.

That’s not true. That’s like saying that speech is immoral when Nazis do it. It isn’t: the content may be vile, but the act of speaking is never immoral.

The “immoral but legal” argument is too subtle for politics; you really have to pick one or the other. We saw that happen with the EFF. They originally championed the idea that the Internet should not be regulated. Then they championed the idea of net neutrality, which is Internet regulation. They originally claimed there was no paradox, because they were merely saying that net neutrality was moral, not that it should be law. Now they’ve discarded that charade and are actively lobbying Congress to make net neutrality law.

Sure, sometimes some full disclosure will result in bad results, but more often, those with political power will seek to suppress vital information with reasons that sound good at the time, like “think of the children!”. We need to firmly defend full disclosure as free speech, in all circumstances.


Update: Some have suggested that instead of disclosing details, a researcher can inform the media.

This has been tried. It doesn’t work. Vendors have more influence on the media than researchers.

We saw this happen in the Apple WiFi fiasco. It was an obvious bug (SSIDs longer than 97 bytes), but at the time Apple kernel exploitation wasn’t widely understood. Therefore, the researchers tried to avoid damaging Apple by not disclosing the full exploit. Thus, people could know about the bug without being able to exploit it.

This didn’t work. Apple’s marketing department claimed the entire thing was fake. They did later fix the bug, claiming it was something they had found themselves, unrelated to the “fake” vulns from the researchers.

Another example was two years ago when researchers described bugs in airplane control systems. The FAA said the vulns were fake, and the press took the FAA’s line on the problem.

The history of “going to the media” has demonstrated that only full-disclosure works.

TorrentFreak: Kim Dotcom Fails in Bid to Suppress FBI Evidence

This post was syndicated from: TorrentFreak and was written by: Andy. Original post: at TorrentFreak

In 2012, following the raid on his New Zealand mansion, Kim Dotcom fought to gain access to the information being held against him by the FBI.

A ruling by District Court Judge David Harvey in May of that year, which stood despite an August appeal, ordered disclosure of all documents relating to the alleged crimes of the so-called Megaupload Conspiracy.

While it was agreed that this information should be made available, an order forbidding publication was handed down in respect to the so-called Record of Case, a 200-page document summarizing an estimated 22 million emails and Skype discussions obtained by the FBI during their investigation.

Last November, a sealed court order by US Judge Liam O’Grady allowed the U.S. Government to share the summary of evidence from the Megaupload case with copyright holders, something that was actioned before the end of the year.

Over in New Zealand, however, Kim Dotcom has been fighting an application by the Crown to make the Record of Case public. That battle came to an end today when Auckland District Court Judge Nevin Dawson rejected an application by Dotcom’s legal team to extend the suppression order placed on the document.

According to RadioNZ, the document contains sensitive information including email and chat conversations which suggest that the Megaupload team knew their users were uploading copyrighted material.

In another setback, further applications by Dotcom to force Immigration New Zealand, the Security Intelligence Service, and several other government departments to hand over information they hold on him, were also rejected by Judge Dawson.

Dotcom’s lawyer Paul Davidson, QC, told Stuff that the battle will continue.

“We will press on with our resolve,” he said.


Schneier on Security: Disclosing vs. Hoarding Vulnerabilities

This post was syndicated from: Schneier on Security and was written by: schneier. Original post: at Schneier on Security

There’s a debate going on about whether the US government — specifically, the NSA and United States Cyber Command — should stockpile Internet vulnerabilities or disclose and fix them. It’s a complicated problem, and one that starkly illustrates the difficulty of separating attack and defense in cyberspace.

A software vulnerability is a programming mistake that allows an adversary access into that system. Heartbleed is a recent example, but hundreds are discovered every year.
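As a toy illustration of the Heartbleed class of mistake (a simulation, not OpenSSL’s actual code), the sketch below shows a handler that echoes back as many bytes as the request claims to contain, rather than as many as it actually sent, leaking adjacent “memory”:

```python
# Simulated process memory: the client's 4-byte payload sits right
# next to a secret. Both values are made up for illustration.
memory = bytearray(b"ping" + b"SECRET_SESSION_KEY" + b"...")

def heartbeat(payload_offset, payload_len, claimed_len):
    # BUG: the handler trusts the attacker-supplied claimed_len
    # and never checks that claimed_len <= payload_len.
    return bytes(memory[payload_offset : payload_offset + claimed_len])

# Honest request: ask for exactly the 4 bytes that were sent.
print(heartbeat(0, 4, 4))   # b'ping'
# Malicious request: claim 22 bytes and leak adjacent memory.
print(heartbeat(0, 4, 22))  # b'pingSECRET_SESSION_KEY'
```

The fix is a single bounds check, which is exactly why such mistakes are so common and so valuable to attackers.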

Unpublished vulnerabilities are called “zero-day” vulnerabilities, and they’re very valuable because no one is protected. Someone with one of those can attack systems world-wide with impunity.

When someone discovers one, he can either use it for defense or for offense. Defense means alerting the vendor and getting it patched. Lots of vulnerabilities are discovered by the vendors themselves and patched without any fanfare. Others are discovered by researchers and hackers. A patch doesn’t make the vulnerability go away, but most users protect themselves by patching their systems regularly.

Offense means using the vulnerability to attack others. This is the quintessential zero-day, because the vendor doesn’t even know the vulnerability exists until it starts being used by criminals or hackers. Eventually the affected software’s vendor finds out — the timing depends on how extensively the vulnerability is used — and issues a patch to close the vulnerability.

If an offensive military cyber unit — or a cyber-weapons arms manufacturer — discovers the vulnerability, it keeps it secret for use in delivering a cyber-weapon. If used stealthily, it might remain secret for a long time. If unused, it’ll remain secret until someone else discovers it.

Discoverers can sell vulnerabilities. There’s a rich market in zero-days for attack purposes — both military/commercial and black markets. Some vendors offer bounties for vulnerabilities to incent defense, but the amounts are much lower.

The NSA can play either defense or offense. It can either alert the vendor and get a still-secret vulnerability fixed, or it can hold on to it and use it to eavesdrop on foreign computer systems. Both are important US policy goals, but the NSA has to choose which one to pursue. By fixing the vulnerability, it strengthens the security of the Internet against all attackers: other countries, criminals, hackers. By leaving the vulnerability open, it is better able to attack others on the Internet. But each use runs the risk of the target government learning of, and using for itself, the vulnerability — or of the vulnerability becoming public and criminals starting to use it.

There is no way to simultaneously defend US networks while leaving foreign networks open to attack. Everyone uses the same software, so fixing us means fixing them, and leaving them vulnerable means leaving us vulnerable. As Harvard Law Professor Jack Goldsmith wrote, “every offensive weapon is a (potential) chink in our defense — and vice versa.”

To make matters even more difficult, there is an arms race going on in cyberspace. The Chinese, the Russians, and many other countries are finding vulnerabilities as well. If we leave a vulnerability unpatched, we run the risk of another country independently discovering it and using it in a cyber-weapon that we will be vulnerable to. But if we patch all the vulnerabilities we find, we won’t have any cyber-weapons to use against other countries.

Many people have weighed in on this debate. The president’s Review Group on Intelligence and Communications Technologies, convened post-Snowden, concluded (recommendation 30) that vulnerabilities should only be hoarded in rare instances and for short times. Cory Doctorow calls it a public health problem. I have said similar things. Dan Geer recommends that the US government corner the vulnerabilities market and fix them all. Both the FBI and the intelligence agencies claim that this amounts to unilateral disarmament.

It seems like an impossible puzzle, but the answer hinges on how vulnerabilities are distributed in software.

If vulnerabilities are sparse, then it’s obvious that every vulnerability we find and fix improves security. We render a vulnerability unusable, even if the Chinese government already knows about it. We make it impossible for criminals to find and use it. We improve the general security of our software, because we can find and fix most of the vulnerabilities.

If vulnerabilities are plentiful — and this seems to be true — the ones the US finds and the ones the Chinese find will largely be different. This means that patching the vulnerabilities we find won’t make it appreciably harder for criminals to find the next one. We don’t really improve general software security by disclosing and patching unknown vulnerabilities, because the percentage we find and fix is small compared to the total number that are out there.

But while vulnerabilities are plentiful, they’re not uniformly distributed. There are easier-to-find ones, and harder-to-find ones. Tools that automatically find and fix entire classes of vulnerabilities, and coding practices that eliminate many easy-to-find ones, greatly improve software security. And when one person finds a vulnerability, it is likely that another person will soon find, or has recently found, the same one. Heartbleed, for example, remained undiscovered for two years, and then two independent researchers discovered it within two days of each other. This is why it is important for the government to err on the side of disclosing and fixing.

The NSA, and by extension US Cyber Command, tries its best to play both ends of this game. Former NSA Director Michael Hayden talks about NOBUS, “nobody but us.” The NSA has a classified process to determine what it should do about vulnerabilities, disclosing and closing most of the ones it finds, but holding back some — we don’t know how many — vulnerabilities that “nobody but us” could find for attack purposes.

This approach seems to be the appropriate general framework, but the devil is in the details. Many of us in the security field don’t know how to make NOBUS decisions, and the recent White House clarification posed more questions than it answered.

Who makes these decisions, and how? How often are they reviewed? Does this review process happen inside Department of Defense, or is it broader? Surely there needs to be a technical review of each vulnerability, but there should also be policy reviews regarding the sorts of vulnerabilities we are hoarding. Do we hold these vulnerabilities until someone else finds them, or only for a short period of time? How many do we stockpile? The US/Israeli cyberweapon Stuxnet used four zero-day vulnerabilities. Burning four on a single military operation implies that we are not hoarding a small number, but more like 100 or more.

There’s one more interesting wrinkle. Cyber-weapons are a combination of a payload — the damage the weapon does — and a delivery mechanism: the vulnerability used to get the payload into the enemy network. Imagine that China knows about a vulnerability and is using it in a still-unfired cyber-weapon, and that the NSA learns about it through espionage. Should the NSA disclose and patch the vulnerability, or should it use it itself for attack? If it discloses, then China could find a replacement vulnerability that the NSA won’t know about. But if it doesn’t, it’s deliberately leaving the US vulnerable to cyber-attack. Maybe someday we can get to the point where we can patch vulnerabilities faster than the enemy can use them in an attack, but we’re nowhere near that point today.

The implications of US policy can be felt on a variety of levels. The NSA’s actions have resulted in a widespread mistrust of the security of US Internet products and services, greatly affecting American business. If we show that we’re putting security ahead of surveillance, we can begin to restore that trust. And by making the decision process much more public than it is today, we can demonstrate both our trustworthiness and the value of open government.

An unpatched vulnerability puts everyone at risk, but not to the same degree. The US and other Western countries are highly vulnerable, because of our critical electronic infrastructure, intellectual property, and personal wealth. Countries like China and Russia are less vulnerable — North Korea much less — so they have considerably less incentive to see vulnerabilities fixed. Fixing vulnerabilities isn’t disarmament; it’s making our own countries much safer. We also regain the moral authority to negotiate any broad international reductions in cyber-weapons; and we can decide not to use them even if others do.

Regardless of our policy towards hoarding vulnerabilities, the most important thing we can do is patch vulnerabilities quickly once they are disclosed. And that’s what companies are doing, even without any government involvement, because so many vulnerabilities are discovered by criminals.

We also need more research in automatically finding and fixing vulnerabilities, and in building secure and resilient software in the first place. Research over the last decade or so has resulted in software vendors being able to find and close entire classes of vulnerabilities. Although there are many cases of these security analysis tools not being used, all of our security is improved when they are. That alone is a good reason to continue disclosing vulnerability details, and something the NSA can do to vastly improve the security of the Internet worldwide. Here again, though, they would have to make the tools they have to automatically find vulnerabilities available for defense and not attack.

In today’s cyberwar arms race, unpatched vulnerabilities and stockpiled cyber-weapons are inherently destabilizing, especially because they are only effective for a limited time. The world’s militaries are investing more money in finding vulnerabilities than the commercial world is investing in fixing them. The vulnerabilities they discover affect the security of us all. No matter what cybercriminals do, no matter what other countries do, we in the US need to err on the side of security and fix almost all the vulnerabilities we find. But not all, yet.

This essay previously appeared on TheAtlantic.com.

Errata Security: FBI will now record interrogations

This post was syndicated from: Errata Security and was written by: Robert Graham. Original post: at Errata Security

This is huge, so I thought I’d blog about this: after a century, the FBI has reversed policy, and will now start recording interrogations.

Prior to this, the FBI policy was to not record interrogations. They worked in pairs (always two there are) with one guy interrogating, and the second quietly writing down what was said. What they wrote down was always their version of events, which was always in their favor.
This has long been a strategy to trap people into becoming informants. It’s a felony to lie to a federal agent. Thus, if you later say something that contradicts their version of what you said, you are guilty of lying — which they’ll forgive if you inform on your friends.
I experienced this myself. Two agents came to our business to talk to us about a talk we were giving at BlackHat. The conversation included the threat that if we didn’t cancel our talk, they’d taint our file so we’d never be able to pass a background check and work in government ever again. According to a later FOIA request, that threat wasn’t included in their Form 302 record of the conversation. And since it’s my word against theirs, their threat never happened.
This is a big deal in the Dzhokhar Tsarnaev (Boston bombing) case. The FBI interviewed Tsarnaev while he was near death in a hospital bed. Their transcription of what he said bears little resemblance to what was actually said, omitting key details like how often he asked to talk to his lawyer, the agents denying him access to his lawyer, or the threats the agents made to him.
This policy proved beyond a shadow of a doubt that the FBI is inherently corrupt. Now that they are changing it, such proof will be harder to come by — though I have no doubt it's still true.

Update: other stories have focused on video taping interrogations after arrest, but more importantly, the policy change also covers investigations, when they talk to people whom they have no intention of arresting (such as my case).

Note: I used this in my short story for last year’s DEF CON contest. It’s interesting that it’s now already out of date.

Krebs on Security: ‘Blackshades’ Trojan Users Had It Coming

This post was syndicated from: Krebs on Security and was written by: BrianKrebs. Original post: at Krebs on Security

The U.S. Justice Department today announced a series of actions against more than 100 people accused of purchasing and using “Blackshades,” a password-stealing Trojan horse program designed to infect computers throughout the world to spy on victims through their web cameras, steal files and account information, and log victims’ keystrokes. While any effort that discourages the use of point-and-click tools for ill-gotten gains is a welcome development, the most remarkable aspect of this crackdown is that those who were targeted in this operation lacked any clue that it was forthcoming.

The Blackshades user forum.

To be sure, Blackshades is an effective and easy-to-use tool for remotely compromising and spying on targets. Early on in its development, researchers at Citizen Lab discovered that Blackshades was being used to spy on activists seeking to overthrow the regime in Syria.

The product was sold via well-traveled and fairly open hacker forums, and it even included an active user forum where customers could get help configuring and wielding the powerful surveillance tool. Although in recent years a license to Blackshades sold for several hundred euros, early versions of the product were sold via PayPal for just $40.

In short, Blackshades was a tool created and marketed principally for buyers who wouldn’t know how to hack their way out of a paper bag. From the Justice Department’s press release today:

“After purchasing a copy of the RAT, a user had to install the RAT on a victim’s computer – i.e., “infect” a victim’s computer. The infection of a victim’s computer could be accomplished in several ways, including by tricking victims into clicking on malicious links or by hiring others to install the RAT on victims’ computers.

The RAT contained tools known as ‘spreaders’ that helped users of the RAT maximize the number of infections. The spreader tools generally worked by using computers that had already been infected to help spread the RAT further to other computers. For instance, in order to lure additional victims to click on malicious links that would install the RAT on their computers, the RAT allowed cybercriminals to send those malicious links to others via the initial victim’s social media service, making it appear as if the message had come from the initial victim.”

News that the FBI and other national law enforcement organizations had begun rounding up Blackshades customers started surfacing online last week, when multiple denizens of the noob-friendly hacker forum Hackforums[dot]net began posting firsthand experiences of receiving a visit from local authorities related to their prior alleged Blackshades use. See the image gallery at the end of this post for a glimpse into the angst that accompanied that development.

While there is a certain amount of schadenfreude in today’s action, the truth is that any longtime Blackshades customer who didn’t know this day would be coming should turn in his hacker card immediately. In June 2012, the Justice Department announced a series of indictments against at least two dozen individuals who had taken the bait and signed up to be active members of “Carderprofit,” a fraud forum that was created and maintained by the Federal Bureau of Investigation.

Among those arrested in the CarderProfit sting was Michael Hogue, the alleged co-creator of Blackshades. That many of this product’s customers are teenagers who wouldn’t know a command-line prompt from a hole in the ground is evident from the large number of users who vented their outrage over their arrests and/or visits by local authorities on Hackforums, which, by the way, was the genesis of the CarderProfit sting from day one.

In June 2010, Hackforums administrator Jesse Labrocca — a.k.a. “Omniscient” — posted a message to all users of the forum, notifying them that the forum would no longer tolerate the posting of messages about ways to buy and use the ZeuS Trojan, a far more sophisticated remote-access Trojan that is heavily used by cybercriminals worldwide and has been implicated in the theft of hundreds of millions of dollars from small- to mid-sized businesses worldwide.

Hackforums admin Jesse “Omniscient” LaBrocca urging users to register at a new forum — Carderprofit.eu — a sting Web site set up by the FBI.

That warning, shown in the screen shot above, alerted Hackforums users that henceforth any discussion about using or buying ZeuS was verboten on the site, and that those who wished to carry on conversations about this topic should avail themselves of a brand new forum that was being set up to accommodate them. And, of course, that forum was carderprofit[dot]eu.

Interestingly, the individuals rounded up as part of the FBI’s CarderProfit sting included several key leaders of LulzSec (including the 16-year-old responsible for sending a heavily armed police response to my home in March 2013).

The CarderProfit homepage, which featured an end-user license agreement written by the FBI.

In a press conference today, the FBI said its investigation has shown that Blackshades was purchased by at least several thousand users in more than 100 countries and used to infect more than half a million computers worldwide. The government alleges that one co-creator of Blackshades generated sales of more than $350,000 between September 2010 and April 2014. Information about that individual and others charged in this case can be found at this link.

For a glimpse at what the recipients of all this attention went through these past few days, check out the images below.

[Image gallery: nine screenshots of Hackforums members reacting to arrests and visits from law enforcement.]

Schneier on Security: Espionage vs. Surveillance

This post was syndicated from: Schneier on Security and was written by: schneier. Original post: at Schneier on Security

According to NSA documents published in Glenn Greenwald’s new book No Place to Hide, we now know that the NSA spies on embassies and missions all over the world, including those of Brazil, Bulgaria, Colombia, the European Union, France, Georgia, Greece, India, Italy, Japan, Mexico, Slovakia, South Africa, South Korea, Taiwan, Venezuela and Vietnam.

This will certainly strain international relations, as happened when it was revealed that the U.S. is eavesdropping on German Chancellor Angela Merkel’s cell phone — but is anyone really surprised? Spying on foreign governments is what the NSA is supposed to do. Much more problematic, and dangerous, is that the NSA is spying on entire populations. It’s a mistake to have the same laws and organizations involved with both activities, and it’s time we separated the two.

The former is espionage: the traditional mission of the NSA. It’s an important military mission, both in peacetime and wartime, and something that’s not going to go away. It’s targeted. It’s focused. Decisions of whom to target are decisions of foreign policy. And secrecy is paramount.

The latter is very different. Terrorists are a different type of enemy; they’re individual actors instead of state governments. We know who foreign government officials are and where they’re located: in government offices in their home countries, and embassies abroad. Terrorists could be anyone, anywhere in the world. To find them, the NSA has to look for individual bad actors swimming in a sea of innocent people. This is why the NSA turned to broad surveillance of populations, both in the U.S. and internationally.

If you think about it, this is much more of a law enforcement sort of activity than a military activity. Both involve security, but just as the NSA’s traditional focus was governments, the FBI’s traditional focus was individuals. Before and after 9/11, both the NSA and the FBI were involved in counterterrorism. The FBI did work in the U.S. and abroad. After 9/11, the primary mission of counterterrorist surveillance was given to the NSA because it had existing capabilities, but the decision could have gone the other way.

Because the NSA got the mission, both the military norms and the legal framework from the espionage world carried over. Our surveillance efforts against entire populations were kept as secret as our espionage efforts against governments. And we modified our laws accordingly. The 1978 Foreign Intelligence Surveillance Act (FISA) that regulated NSA surveillance required targets to be “agents of a foreign power.” When the law was amended in 2008 under the FISA Amendments Act, a target could be any foreigner anywhere.

Government-on-government espionage is as old as governments themselves, and is the proper purview of the military. So let the Commander in Chief make the determination on whose cell phones to eavesdrop on, and let the NSA carry those orders out.

Surveillance is a large-scale activity, potentially affecting billions of people, and different rules have to apply – the rules of the police. Any organization doing such surveillance should apply the police norms of probable cause, due process, and oversight to population surveillance activities. It should make its activities much less secret and more transparent. It should be accountable in open courts. This is how we, and the rest of the world, regain trust in U.S. actions.

In January, President Obama gave a speech on the NSA where he said two very important things. He said that the NSA would no longer spy on Angela Merkel’s cell phone. And while he didn’t extend that courtesy to the other 82 million citizens of Germany, he did say that he would extend some of the U.S.’s constitutional protections against warrantless surveillance to the rest of the world.

Breaking up the NSA by separating espionage from surveillance, and putting the latter under a law enforcement regime instead of a military regime, is a step toward achieving that.

This essay originally appeared on CNN.com.

Schneier on Security: New NSA Snowden Documents

This post was syndicated from: Schneier on Security and was written by: schneier. Original post: at Schneier on Security

Glenn Greenwald’s new book, No Place to Hide, was published today. There are about 100 pages of NSA documents on the book’s website. I haven’t gone through them yet. At a quick glance, only a few of them have been published before.

Here are two book reviews.

EDITED TO ADD (5/13): It’s surprising how large the FBI’s role in all of this is. On page 81, we see that they’re the point of contact for BLARNEY. (BLARNEY is a decades-old AT&T data collection program.) And page 28 shows that the ECSU — the FBI’s Electronic Communications Surveillance Unit — is point on all the important domestic collection and interaction with companies. When companies deny that they work with the NSA, it’s likely that they’re working with the FBI and not realizing that it’s the NSA that’s getting all the data they’re providing.

Krebs on Security: Teen Arrested for 30+ Swattings, Bomb Threats

This post was syndicated from: Krebs on Security and was written by: BrianKrebs. Original post: at Krebs on Security

A 16-year-old male from Ottawa, Canada has been arrested for allegedly making at least 30 fraudulent calls to emergency services across North America over the past few months. The false alarms — two of which targeted this reporter — involved calling in phony bomb threats and multiple attempts at “swatting” — a hoax in which the perpetrator spoofs a call about a hostage situation or other violent crime in progress in the hopes of tricking police into responding at a particular address with deadly force.

On March 9, a user on Twitter named @ProbablyOnion (possibly NSFW) started sending me rude and annoying messages. A month later (and several weeks after blocking him on Twitter), I received a phone call from the local police department. It was early in the morning on Apr. 10, and the cops wanted to know if everything was okay at our address.

Since this was not the first time someone had called in a fake hostage situation at my home, the call I received came from the police department’s non-emergency number, and they were unsurprised when I told them that the Krebs manor and all of its inhabitants were just fine.

Minutes after my local police department received that fake notification, @ProbablyOnion was bragging on Twitter about swatting me, including me on his public messages: “You have 5 hostages? And you will kill 1 hostage every 6 times and the police have 25 minutes to get you $100k in clear plastic.” Another message read: “Good morning! Just dispatched a swat team to your house, they didn’t even call you this time, hahaha.”

I told this user privately that targeting an investigative reporter maybe wasn’t the brightest idea, and that he was likely to wind up in jail soon. But @ProbablyOnion was on a roll: That same day, he hung out his for-hire sign on Twitter with the following message: “want someone swatted? Tweet me their name, address and I’ll make it happen.”


Several Twitter users apparently took him up on that offer. All told, @ProbablyOnion would claim responsibility for more than two dozen swatting and bomb threat incidents at schools and other public locations across the United States.

On May 7, @ProbablyOnion tried to get the swat team to visit my home again, and once again without success. “How’s your door?” he tweeted. I replied: “Door’s fine, Curtis. But I’m guessing yours won’t be soon. Nice opsec!”

I was referring to a document that had just been leaked on Pastebin, which identified @ProbablyOnion as a 19-year-old Curtis Gervais from Ontario. @ProbablyOnion laughed it off but didn’t deny the accuracy of the information, except to tweet that the document got his age wrong. A day later, @ProbablyOnion would post his final tweet: “Still awaiting for the horsies to bash down my door,” a taunting reference to the Royal Canadian Mounted Police (RCMP).

According to an article in the Ottawa Citizen, the 16-year-old faces 60 charges, including creating fear by making bomb threats. Ottawa police also are investigating whether any alleged hoax calls diverted responders away from real emergencies.

Most of the people involved in swatting and making bomb threats are young males under the age of 18 — the age when kids seem to have little appreciation for or care about the seriousness of their actions. According to the FBI, each swatting incident costs emergency responders approximately $10,000. Each hoax also unnecessarily endangers the lives of the responders and the public.

Take, for example, the kid who swatted my home last year: According to interviews with multiple law enforcement sources familiar with the case, that kid is only 17 now, and was barely 16 at the time of the incident in March 2013. Identified in several Wired articles as “Cosmo the God,” Long Beach, Calif. resident Eric Taylor violated the terms of his 2011 parole, which forbade him from using the Internet until his 21st birthday. Taylor pleaded guilty in 2011 to multiple felonies, including credit card fraud, identity theft, bomb threats and online impersonation.

In nearly every case I’m aware of, these kids who think swatting is fun have serious problems at home, if indeed they have any meaningful parental oversight in their lives. It’s sad, because with a bit of guidance and the right environment, some of these kids probably would make very good security professionals. Heck, Eric Taylor was even publicly thanked by Google for finding and reporting security vulnerabilities in the fourth quarter of 2012 (never mind that this was technically after his no-computer probation kicked in).

Update, 2:42 p.m. ET: The FBI has also issued a press release about this arrest, although it too does not name the 16-year-old.