Posts tagged ‘fbi’

TorrentFreak: Can We Publicly Confess to Online Piracy Crimes?

This post was syndicated from: TorrentFreak and was written by: Andy. Original post: at TorrentFreak

Last week’s leak of The Expendables 3 was a pretty big event in the piracy calendar and, as TF explained to inquiring reporters, a leak like this is only achieved by getting the right mix of ingredients.

First and foremost, the movie was completely unreleased, meaning that, private screenings aside, it had never hit a theater anywhere in the world. Getting a copy of a movie at this stage is very rare indeed. Secondly, the quality of the leaked DVD was very good.

Third, and we touched on this earlier, are the risks involved in becoming part of the online distribution mechanism for something like this. Distributing potentially unfinished copies of yet-to-be-released flicks is a very serious matter indeed, with custodial sentences available to the authorities.

And yet this week, David Pierce, Assistant Managing Editor at The Verge, wrote an article in which he admitted torrenting The Expendables 3 via The Pirate Bay.

Pirate confessions – uncut


“The Expendables 3 comes out August 15th in thousands of theaters across America. I watched it Friday afternoon on my MacBook Air on a packed train from New York City to middle-of-nowhere Connecticut. I watched it again on the ride back. And I’m already counting down the days until I can see it in IMAX,” he wrote.

Pierce’s article, which is a decent read, talks about how the movie really needs to be seen on the big screen. It’s a journey into why piracy can act as promotion and why the small-screen experience rarely compensates for seeing this kind of movie in the “big show” setting.

Pierce is a great salesman and makes a good case but that doesn’t alter the fact that he just admitted to committing what the authorities see as a pretty serious crime.

The Family Entertainment and Copyright Act of 2005 refers to it as “the distribution of a work being prepared for commercial distribution, by making it available on a computer network accessible to members of the public, if such person knew or should have known that the work was intended for commercial distribution.”

The term “making it available” refers to uploading and although one would like to think that punishments would be reserved only for initial leakers (if anyone), the legislation fails to specify. It seems that merely downloading and sharing the movie using BitTorrent could be enough to render a user criminally liable, as this CNET article from 2005 explains.


So with the risks as they are, why would Pierce put his neck on the line?

Obviously, he wanted to draw attention to the “big screen” points mentioned above, and an article like this certainly attracts plenty of readers. It’s also possible he just wasn’t aware of the significance of the offense. Sadly, our email to Pierce earlier in the week went unanswered, so we can’t say for sure.

But here’s the thing.

There can be few people in the public eye, journalists included, who would admit to stealing clothes from a Paris fashion show in order to promote Versace’s consumer lines when they come out next season.

And if we wrote a piece about how we liberated a Honda Type R prototype from the Geneva Motor Show in order to boost sales ahead of its consumer release next year, we’d be decried as Grand Theft Auto’ists in need of discipline.

What this seems to show is that in spite of a decade-and-a-half’s worth of “piracy is theft” propaganda, educated and eloquent people such as David Pierce still believe that it is not, to the point where pretty serious IP crimes can be confessed to in public.

At the very least, the general perception is that torrenting The Expendables 3 is morally detached from picking up someone’s real-life property and heading for the hills. And none of us would admit to the latter, would we?

Hollywood and the record labels will be furious that this mentality persists after years of promoting the term “intellectual property.” But while Lionsgate appear to have picked their initial targets (and the FBI will go after the initial leakers), the reality is that despite the potential for years in jail, it’s extremely unlikely the feds will be turning up at the offices of The Verge to collar Pierce. Nor will they knock on the doors of an estimated two million other Expendables pirates.

And everyone knows it.

As a result, what we have here is a confession equal parts crazy and brave, an article from Pierce which underlines that good movies are meant to be seen properly and that people who pirate do go on to become customers if the product is right. And, furthermore, those customers promote that content to their peers, such as the guy on the train who looked over Pierce’s shoulder while he was viewing his pirate booty.

“He won’t be the last person I tell to go see The Expendables 3 when it hits theaters in August,” Pierce wrote. “And I’ll be there with them, opening night. I know the setlist now, I know all the songs by heart, but I still want to see the show.”

Pierce’s initial piracy was illegal, no doubt, but when all is said and done (especially considering his intent to promote and invest in the movie) it hardly feels worthy of a stay in the slammer. I venture that the majority would agree – and so the cycle continues.

Source: TorrentFreak, for the latest info on copyright, file-sharing and anonymous VPN services.

TorrentFreak: Feds Receive Requests to Shut Down The Pirate Bay

This post was syndicated from: TorrentFreak and was written by: Ernesto. Original post: at TorrentFreak

There is no doubt that copyright holders repeatedly press the authorities to take action against The Pirate Bay.

So, when a Pirate Bay-related Freedom of Information request was sent to Homeland Security’s National Intellectual Property Rights Coordination Center, we expected to see letters from the major music labels and Hollywood studios. Interestingly that was not the case.

In late June, Polity News asked Homeland Security to reveal all the information the center holds on the notorious torrent site. Earlier this week the responses arrived, mostly consisting of requests from individuals to shut down The Pirate Bay.

In total the center received 15 emails, and all appear to have been forwarded by the FBI, where they were apparently first sent. Some of the emails only list a few pirate site domains but others are more specific in calling for strong action against The Pirate Bay.

“Why don’t you seize all THE PIRATE BAY domains? Starting with thepiratebay.se. You have no idea how much good that would do to writers, artists, musicians, designers, inventors, software developers, movie people and our global economy in general,” one email reads.


The emails are all redacted but the content of the requests sometimes reveals who the sender might be. The example below comes from the author of “The Crystal Warrior,” most likely New Zealand author Maree Anderson.

“The Pirate Bay states that it can’t be held responsible for copyright infringement as it is a torrent site and doesn’t store the files on its servers. However the epub file of my published novel The Crystal Warrior has been illegally uploaded there,” the email reads.

The author adds that she takes a strong stand against piracy, but that her takedown notices are ignored by The Pirate Bay. She hopes that the authorities can take more effective action.

“Perhaps you would have more luck in putting pressure on them than one individual like myself. And if you are unable to take further action, I hope this notification will put The Pirate Bay in your sights so you can keep an eye on them,” the author adds.



Most of the other requests include similar calls to action and appear to come from individual copyright holders. However, there is also a slightly more unusual request.

The email in question comes from the mother of a 14-year-old boy whose father is said to frequently pirate movies and music. The mother says she already visited an FBI office to report the man and is now seeking further advice. Apparently she previously reached out to the MPAA, but they weren’t particularly helpful.

“MPAA only wanted to know where he was downloading and could not help. I ask you what can I do, as a parent, to prevent a 14-year-old from witnessing such a law breaking citizen in his own home?” the mother writes.

“It is not setting a good example for him and I don’t think that it is right to subject him to this cyber crime. Devices on websites used: www.piratebay.com for downloads and www.LittleSnitch.com so he won’t be detected. This is not right. Any help would be appreciated,” she adds.


All of the revealed requests were sent between 2012 and 2014. Thus far, however, neither the Department of Homeland Security nor the FBI has taken any action against The Pirate Bay.

Whether the pirating dad is still on the loose remains unknown for now, but chances are he’s still sharing music and movies despite the FBI referral.

Source: TorrentFreak, for the latest info on copyright, file-sharing and anonymous VPN services.

Errata Security: Cliché: open-source is secure

This post was syndicated from: Errata Security and was written by: Robert Graham. Original post: at Errata Security

Some in cybersec keep claiming that open-source is inherently more secure or trustworthy than closed-source. This is demonstrably false.

Firstly, there is the problem of usability. Unusable crypto isn’t a valid option for most users. Most would rather not communicate at all, or risk going to jail, than deal with the typical dependency hell of trying to get open-source to compile. Moreover, open-source apps are notoriously user-hostile, which is why the Linux desktop still hasn’t made headway against Windows or Macintosh. The reason is that developers blame users for being too stupid to appreciate how easy their apps are, whereas Microsoft and Apple spend $billions on usability studies, actually listening to users. Desktops like Ubuntu are pretty good, but only when they exactly copy Windows/Macintosh. Ubuntu still doesn’t invest in the usability studies that Microsoft/Apple do.
The second problem is deterministic builds. If I want to install an app on my iPhone or Android, the only usable way is through their app stores. This means downloading the binary, not the source. Without deterministic builds, there is no way to verify the downloaded binary matches the public source. The binary may, in fact, be compiled from different source containing a backdoor. This means a malicious company (or an FBI NSL letter) can backdoor open-source binaries as easily as closed-source binaries.
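The check that deterministic builds make possible is simple to sketch in Python (the file names are hypothetical, for illustration only): rebuild the app from the public source, hash both binaries, and compare. If the build is reproducible, any difference indicates the distributed binary was not compiled from the published source.

```python
import hashlib

def sha256_of(path: str) -> str:
    """Hex SHA-256 digest of a file, read in chunks to handle large binaries."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 16), b""):
            h.update(chunk)
    return h.hexdigest()

# With a deterministic build, these two digests must match exactly
# (file names are illustrative):
# assert sha256_of("appstore-download.apk") == sha256_of("rebuilt-from-source.apk")
```

Without this property, the comparison is meaningless: two honest builds of the same source will already differ, so a backdoored binary is indistinguishable from an ordinary rebuild.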
The third problem is code review. People trust open-source because they can see for themselves if it has any bugs. Or, if not themselves, they have faith that others are looking at the code (“many eyes make bugs shallow”). Yet this rarely happens. We repeatedly see bugs giving backdoor access (‘vulns’) that remain undetected in open-source projects for years, such as the OpenSSL Heartbleed bug. The simple fact is that people aren’t looking at open-source. Those qualified to review code would rather be writing their own code. The opposite is true for closed-source, where companies pay people to review code. While engineers won’t review code for fame/glory, they will for money. Given two products, one open and the other closed, it’s impossible to guess which has had more “eyes” looking at the source; in many cases, it’s the closed-source that has been better reviewed.
What’s funny about this open-source bigotry is that it leads to very bad solutions. A lot of people I know use the libpurple open-source library and the jabber.ccc.de server (run by CCC hacking club). People have reviewed the libpurple source and have found it extremely buggy, and chat apps don’t pin SSL certificates, meaning any SSL encryption to the CCC server can easily be intercepted. In other words, the open-source alternative is known to be incredibly insecure, yet people still use it, because “everyone knows” that open-source is more secure than closed-source.
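Certificate pinning, the safeguard those chat apps skip, is conceptually simple; a minimal Python sketch (the host and the pinned fingerprint are placeholders, not real values): the app ships with the SHA-256 fingerprint of the server’s certificate and refuses to talk if the certificate presented at connect time doesn’t match, even when a CA would vouch for the impostor.

```python
import hashlib
import socket
import ssl

def matches_pin(der_cert: bytes, pinned_sha256_hex: str) -> bool:
    """Compare a certificate (DER bytes) against a fingerprint shipped with the app."""
    return hashlib.sha256(der_cert).hexdigest() == pinned_sha256_hex

def fetch_leaf_cert(host: str, port: int = 443) -> bytes:
    """Fetch the server's leaf certificate over TLS."""
    ctx = ssl.create_default_context()
    with socket.create_connection((host, port), timeout=10) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            return tls.getpeercert(binary_form=True)

# Usage (network required; KNOWN_GOOD_FINGERPRINT is a placeholder):
# der = fetch_leaf_cert("jabber.ccc.de")
# if not matches_pin(der, KNOWN_GOOD_FINGERPRINT):
#     raise ssl.SSLError("certificate changed: possible interception")
```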
Wickr and SilentCircle are two secure messaging/phone apps that I use, for the simple fact that they work both on Android and iPhone, and both are easy to use. I’ve read their crypto algorithms, so I have some assurance that they are doing things right. SilentCircle has open-sourced part of their code, which looks horrible, so it’s probable they have some 0day lurking in there somewhere, but it’s really no worse than equivalent code. I do know that both companies have spent considerable resources on code review, so I know at least as many “eyes” have reviewed their code as open-source. Even if they showed me their source, I’m not going to read it all — I’ve got more important things to do, like write my own source.
Thus, I see no benefit to open-source in this case. Except for Cryptocat, all the open-source messaging apps I’ve used have been buggy and hard to use. But, you can easily change my mind: just demonstrate an open-source app where more eyes have reviewed the code, or a project that has deterministic builds, or a project that is easier to use, or some other measurable benefit.
Of course, I write this as if the argument was about the benefits of open-source. We all know this doesn’t matter. As the EFF teaches us, it’s not about benefits, but which is ideologically pure; that open-source is inherently more ethical than closed-source.

Errata Security: Um, talks are frequently canceled at hacker cons

This post was syndicated from: Errata Security and was written by: Robert Graham. Original post: at Errata Security

Talks are frequently canceled at hacker conventions. It’s the norm. I had to cancel once because, on the flight into Vegas, a part fell off the plane forcing an emergency landing. Last weekend, I filled in at HopeX with a talk, replacing somebody else who had to cancel.

I point this out because of stories like this one hyping the canceled Tor talk at BlackHat. Its title says the talk was “Suddenly Canceled”. The adverb “suddenly” is clearly an attempt to hype the story, since there is no way to slowly cancel a talk.
The researchers are academics at Carnegie-Mellon University (CMU). There are good reasons why CMU might have to cancel the talk. The leading theory is that it might violate prohibitions against experiments on unwilling human subjects. There also may be violations of wiretap laws. In other words, the most plausible reasons why CMU might cancel the talk have nothing to do with trying to suppress research.
Suppressing research because somebody powerful doesn’t want it published is the only reason cancelations are important. It’s why the Boston MBTA talk was canceled: the transit authority didn’t want it revealed how to hack transit cards. It’s why the Michael Lynn talk was (almost) canceled, because Cisco didn’t want things revealed. It’s why I (almost) had a talk canceled, because TippingPoint convinced the FBI to come by my offices to threaten me (I gave the talk because I don’t take threats well). These are all newsworthy things.
The reporting on the Tor cancelation talk, however, is just hype, trying to imply something nefarious when there is no evidence.

TorrentFreak: Six Android Piracy Group Members Charged, Two Arrested

This post was syndicated from: TorrentFreak and was written by: Andy. Original post: at TorrentFreak

Assisted by police in France and the Netherlands, in the summer of 2012 the FBI took down three unauthorized Android app stores. Appbucket, Applanet and SnappzMarket all had their domains seized, the first action of its type in the Android scene.

Over the past two years the United States Department of Justice has periodically released information on the case, and last evening came news of more charges and more arrests.

Assistant Attorney General Leslie R. Caldwell of the Justice Department’s Criminal Division announced the unsealing of three federal indictments in the Northern District of Georgia charging six members of Appbucket, Applanet and SnappzMarket for their roles in the unauthorized distribution of Android apps.

SnappzMarket

Joshua Ryan Taylor, 24, of Kentwood, Michigan, and Scott Walton, 28, of Cleveland, Ohio, two alleged members of SnappzMarket, were both arrested yesterday. They are due to appear before magistrates in Michigan and Ohio respectively.

An indictment returned on June 17 charges Gary Edwin Sharp II, 26, of Uxbridge, Massachusetts, along with Taylor and Walton, with one count of conspiracy to commit criminal copyright infringement. Sharp is also charged with two counts of criminal copyright infringement.

It’s alleged that the three men were members of SnappzMarket from May 2011 through August 2012, along with Kody Jon Peterson, 22, of Clermont, Florida. In April, Peterson pleaded guilty to one count of conspiracy to commit criminal copyright infringement. As part of his guilty plea he agreed to work undercover for the government.

Appbucket

Another indictment returned June 17 in Georgia charges James Blocker, 36, of Rowlett, Texas, with one count of conspiracy to commit criminal copyright infringement.

A former member of Appbucket, Blocker is alleged to have conspired with Thomas Allen Dye, 21, of Jacksonville, Florida; Nicholas Anthony Narbone, 26, of Orlando, Florida; and Thomas Pace, 38, of Oregon City, Oregon, to distribute Android apps with a value of $700,000.

During March and April 2014, Dye, Narbone and Pace all pleaded guilty to conspiracy to commit criminal copyright infringement.

Applanet

A further indictment returned June 17 in Georgia charges Aaron Blake Buckley, 20, of Moss Point, Mississippi; David Lee, 29, of Chino Hills, California; and Gary Edwin Sharp II (also of Appbucket) with one count of conspiracy to commit criminal copyright infringement.

Lee is additionally charged with one count of aiding and abetting criminal copyright infringement and Buckley with one count of criminal copyright infringement.

All three identified themselves as former members of Applanet. The USDOJ claims that along with other members they are responsible for the illegal distribution of four million Android apps with a value of $17m. Buckley previously launched a fund-raiser in an effort to fight off the United States government.

“As a result of their criminal efforts to make money by ripping off the hard work and creativity of high-tech innovators, the defendants are charged with illegally distributing copyrighted apps,” said Assistant Attorney General Caldwell.

“The Criminal Division is determined to protect the labor and ingenuity of copyright owners and to keep pace with criminals in the modern, technological marketplace.”

A statement from the FBI’s Atlanta Field Office indicates that the FBI will pursue more piracy groups in future.

“The FBI will continue to provide significant investigative resources toward such groups engaged in such wholesale pirating or copyright violations as seen here,” Special Agent in Charge J. Britt Johnson said.

Source: TorrentFreak, for the latest info on copyright, file-sharing and anonymous VPN services.

Krebs on Security: Crooks Seek Revival of ‘Gameover Zeus’ Botnet

This post was syndicated from: Krebs on Security and was written by: BrianKrebs. Original post: at Krebs on Security

Cybercrooks today began taking steps to resurrect the Gameover ZeuS botnet, a complex crime machine that has been blamed for the theft of more than $100 million from banks, businesses and consumers worldwide. The revival attempt comes roughly five weeks after the FBI joined several nations, researchers and security firms in a global and thus far successful effort to eradicate it.

The researchers who helped dismantle Gameover Zeus said they were surprised that the botmasters didn’t fight back. Indeed, for the past month the crooks responsible seem to have kept a low profile.

But that changed earlier this morning when researchers at Malcovery [full disclosure: Malcovery is an advertiser on this blog] began noticing spam being blasted out with phishing lures that included zip files booby-trapped with malware.

Looking closer, the company found that the malware shares roughly 90 percent of its code base with Gameover Zeus. Part of what made the original Gameover ZeuS so difficult to shut down was its reliance on an advanced peer-to-peer (P2P) mechanism to control and update the bot-infected systems.

But according to Gary Warner, Malcovery’s co-founder and chief technologist, this new Gameover variant is stripped of the P2P code, and relies instead on an approach known as fast-flux hosting. Fast-flux is a kind of round-robin technique that lets botnets hide phishing and malware delivery sites behind an ever-changing network of compromised systems acting as proxies, in a bid to make the botnet more resilient to takedowns.
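The rotation at the heart of fast-flux can be illustrated with a toy Python sketch (illustrative only; the addresses are RFC 5737 documentation IPs, and a real fast-flux network drives this from authoritative DNS servers answering with very short TTLs):

```python
from itertools import cycle

# Documentation addresses stand in for compromised hosts acting as proxies.
proxy_pool = ["203.0.113.5", "198.51.100.7", "192.0.2.44"]
rotation = cycle(proxy_pool)

def next_record():
    """Each short-TTL DNS answer for the malicious domain points at the
    next proxy in the pool, so no single takedown breaks the service."""
    return next(rotation)
```

Because every lookup lands on a different compromised machine, defenders who null-route one proxy accomplish little; the domain itself, not any host, is the stable point of control.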

Like the original Gameover, however, this variant also includes a “domain name generation algorithm” or DGA, which is a failsafe mechanism that can be invoked if the botnet’s normal communications system fails. The DGA creates a constantly-changing list of domain names each week (gibberish domains that are essentially long jumbles of letters).

In the event that systems infected with the malware can’t reach the fast-flux servers for new updates, the code instructs the botted systems to seek out active domains from the list specified in the DGA. All the botmaster needs to do in this case to regain control over his crime machine is register just one of those domains and place the update instructions there.
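A toy version of such an algorithm can be sketched in Python (the seed string, hash construction and .com suffix are invented for illustration; real DGAs differ in detail): the bot and its master share a secret seed, mix in the current ISO week, and independently derive the same gibberish domain list.

```python
import hashlib
from datetime import date

def weekly_domains(seed: str, day: date, count: int = 5) -> list:
    """Toy DGA: bot and botmaster derive the same pseudo-random domain list
    from a shared seed plus the current ISO year/week."""
    year, week, _ = day.isocalendar()
    domains = []
    for i in range(count):
        digest = hashlib.sha256(f"{seed}|{year}|{week}|{i}".encode()).hexdigest()
        # Map hex digits to letters for the "long jumble of letters" look.
        name = "".join(chr(ord("a") + int(c, 16)) for c in digest[:16])
        domains.append(name + ".com")
    return domains
```

Defenders must preregister or sinkhole every candidate domain to keep the botnet dark, while the botmaster need only register one of them, which is why takedowns against DGA-equipped botnets require cooperation from registrars across many TLDs.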

Warner said the original Gameover botnet that was clobbered last month is still locked down, and that it appears whoever released this variant is essentially attempting to rebuild the botnet from scratch. “This discovery indicates that the criminals responsible for Gameover’s distribution do not intend to give up on this botnet even after suffering one of the most expansive botnet takeovers and takedowns in history,” Warner said.

Gameover is based on code from the ZeuS Trojan, an infamous family of malware that has been used in countless online banking heists. Unlike ZeuS — which was sold as a botnet creation kit to anyone who had a few thousand dollars in virtual currency to spend — Gameover ZeuS has since October 2011 been controlled and maintained by a core group of hackers from Russia and Ukraine. Those individuals are believed to have used the botnet in high-dollar corporate account takeovers that frequently were punctuated by massive distributed-denial-of-service (DDoS) attacks intended to distract victims from immediately noticing the thefts.

According to the U.S. Justice Department, Gameover has been implicated in the theft of more than $100 million via account takeovers. The department further alleges that the author of the ZeuS Trojan (and by extension the Gameover Zeus malware) is a Russian citizen named Evgeniy Mikhailovich Bogachev.

For more details, check out Malcovery’s blog post about this development.

Yevgeniy Bogachev, Evgeniy Mikhaylovich Bogachev, a.k.a. “lucky12345”, “slavik”, “Pollingsoon”. Source: FBI.gov, “most wanted, cyber”.

TorrentFreak: Kim Dotcom Extradition Hearing Delayed Until 2015

This post was syndicated from: TorrentFreak and was written by: Andy. Original post: at TorrentFreak

The United States Government is keen to get its hands on Kim Dotcom. He stands accused of committing the biggest copyright-related crime ever seen through his now-defunct cloud storage site Megaupload.

But access to the entrepreneur will have to wait.

According to Dotcom, his extradition hearing has now been delayed until February 16, 2015.

Delays and postponements have become recurring features of the criminal case being built against Dotcom in the United States.

A March 2013 date came and went without a promised hearing, as did another in November the same year, a delay which Dotcom said would “save Prime Minister John Key embarrassment during an election campaign.”

Another hearing date for April 2014 also failed to materialize and now the date penciled in for the coming weeks has also been struck down.

Dotcom also reports that he still hasn’t received a copy of the data that was unlawfully sent to the FBI by New Zealand authorities.

Source: TorrentFreak, for the latest info on copyright, file-sharing and anonymous VPN services.

TorrentFreak: Dotcom Encryption Keys Can’t Be Given to FBI, Court Rules

This post was syndicated from: TorrentFreak and was written by: Andy. Original post: at TorrentFreak

During the raid more than two years ago on his now-famous mansion, police in New Zealand seized 135 computers and drives belonging to Kim Dotcom.

In May 2012 during a hearing at Auckland’s High Court, lawyer Paul Davison QC demanded access to the data stored on the confiscated equipment, arguing that without it Dotcom could not mount a proper defense.

The FBI objected to the request due to some of the data being encrypted. However, Dotcom refused to hand over the decryption passwords unless the court guaranteed him access to the data. At this point it was revealed that despite assurances from the court to the contrary, New Zealand police had already sent copies of the data to U.S. authorities.

In May 2014, Davison was back in court arguing that New Zealand police should release copies of the data from the seized computers and drives, reiterating the claim that without the information Dotcom could not get a fair trial. The High Court previously ruled that the Megaupload founder could have copies, on the condition he handed over the encryption keys.

But while Dotcom subsequently agreed to hand over the passwords, that was on the condition that New Zealand police would not hand them over to U.S. authorities. Dotcom also said he couldn’t actually remember the passwords, but might be able to recall them if he gained access to prompt files contained on the drives.

The police agreed to give Dotcom access to the prompts, but with the quid pro quo that the revealed passwords could be passed on to the United States, contrary to Dotcom’s wishes.

Today Justice Winkelmann ruled that if the police do indeed obtain the codes, they must not hand them over to the FBI. Reason being, the copies of the computers and drives should never have been sent to the United States in the first place.

While the ruling is a plus for Dotcom, the entrepreneur today expressed suspicion over whether the FBI even need the encryption codes.

“NZ Police is not allowed to provide my encryption password to the FBI,” he wrote on Twitter, adding, “As if they don’t have it already.”

Source: TorrentFreak, for the latest info on copyright, file-sharing and anonymous VPN services.

TorrentFreak: UK Cinemas Ban Google Glass Over Piracy Fears

This post was syndicated from: TorrentFreak and was written by: Ernesto. Original post: at TorrentFreak

The movie industry sees the illegal recording of movies as one of the biggest piracy threats and for years has gone to extremes to stop it.

It started well over a decade ago when visitors began sneaking handheld camcorders into theaters. These big clunkers were relatively easy to spot, but as time passed the recording devices grew smaller and easier to conceal.

Google Glass is one of the newest threats on the block. Earlier this year the FBI dragged a man from a movie theater in Columbus, Ohio, after theater staff presumed he was using Google Glass to illegally record a film. While the man wasn’t recording anything at all, the response from the cinema employees was telling.

This month Google Glass went on sale in the UK, and unlike their American counterparts, British cinemas have been quick to announce a blanket ban on the new gadget.

“Customers will be requested not to wear these into cinema auditoriums, whether the film is playing or not,” Phil Clapp, chief executive of the Cinema Exhibitors’ Association told the Independent.

The first Glass wearer at a Leicester Square cinema has already been instructed to stow his device, and more are expected to follow. Google Glass wearers with prescription lenses would be wise to take a pair of traditional glasses along if they want to enjoy a movie on the big screen.

Movie industry group FACT sees Google Glass and other new recording devices as significant threats and works in tandem with local cinemas to prevent films from being recorded.

“Developments in technology have led to smaller, more compact devices which have the capability to record sound and vision, including most mobile phones. FACT works closely with cinema operators and distributors to ensure that best practice is carried out to prevent and detect illegal recordings taking place,” the group says.

In recent years the UK movie industry has intensified its efforts to stop camcording, and not without success. In 2012, none of the illegally recorded movies that appeared online originated from a UK cinema, and several recording attempts were successfully thwarted.

Last year, cinema staff helped UK police to arrest five people, and another nine were sent home with cautions. As a thank you for these vigilant actions, the Film Distributors’ Association gave 13 cinema employees cash rewards of up to £500.

Source: TorrentFreak, for the latest info on copyright, file-sharing and anonymous VPN services.

Krebs on Security: 2014: The Year Extortion Went Mainstream

This post was syndicated from: Krebs on Security and was written by: BrianKrebs. Original post: at Krebs on Security

The year 2014 may well go down in the history books as the year that extortion attacks went mainstream. Fueled largely by the emergence of the anonymous online currency Bitcoin, these shakedowns are blurring the lines between online and offline fraud, and giving novice computer users a crash course in modern-day cybercrime.

An extortion letter sent to 900 Degrees Neapolitan Pizzeria in New Hampshire.

At least four businesses recently reported receiving “Notice of Extortion” letters in the U.S. mail. The letters say the recipient has been targeted for extortion, and threaten a range of negative publicity, vandalism and harassment unless the target agrees to pay a “tribute price” of one bitcoin (currently ~USD $561) by a specified date. According to the letter, that tribute price increases to 3 bitcoins (~$1,683) if the demand isn’t paid on time.

The ransom letters, which appear to be custom written for restaurant owners, threaten businesses with negative online reviews, complaints to the Better Business Bureau, harassing telephone calls, telephone denial-of-service attacks, bomb threats, fraudulent delivery orders, vandalism, and even reports of mercury contamination.

The missive encourages recipients to sign up with Coinbase – a popular bitcoin exchange – and to send the funds to a unique bitcoin wallet specified in the letter and embedded in the QR code that is also printed on the letter.

Interestingly, all three letters I could find that were posted online so far targeted pizza stores. At least two of them were mailed from Orlando, Florida.

The letters all say the amounts are due either on Aug. 1 or Aug. 15. Perhaps one reason the deadlines are so far off is that the attackers understand that not everyone has bitcoins, or even knows about the virtual currency.

“What the heck is a BitCoin?” wrote the proprietors of New Hampshire-based 900 Degrees Neapolitan Pizzeria, which posted a copy of the letter (above) on their Facebook page.

Sandra Alhilo, general manager of Pizza Pirates in Pomona, Calif., received the extortion demand on June 16.

“At first, I was laughing because I thought it had to be a joke,” Alhilo said in a phone interview. “It was funny until I went and posted it on our Facebook page, and then people put it on Reddit and the Internet got me all paranoid.”

Nicholas Weaver, a researcher at the International Computer Science Institute (ICSI) and at the University of California, Berkeley, said these extortion attempts cost virtually nothing and promise a handsome payoff for the perpetrators.

“From the fraudster’s perspective, the cost of these attacks is a stamp and an envelope,” Weaver said. “This type of attack could be fairly effective. Some businesses — particularly restaurant establishments — are very concerned about negative publicity and reviews. Bad Yelp reviews, tip-offs to the health inspector… that stuff works and isn’t hard to do.”

While some restaurants may be an easy mark for this sort of crime, Weaver said the extortionists in this case are tangling with a tough adversary — the U.S. Postal Service — which takes extortion crimes perpetrated through the U.S. mail very seriously.

“There is a lot of operational security that these guys might have failed at, because this is interstate commerce, mail fraud, and postal inspector territory, where the gloves come off,” Weaver said. “I’m willing to bet there are several tools available to law enforcement here that these extortionists didn’t consider.”

It’s not entirely clear why extortionists seem to be picking on pizza establishments, but it’s probably worth noting that the grand-daddy of all pizza joints — Domino’s Pizza in France — found itself the target of a pricey extortion attack earlier this month, after hackers threatened to release the stolen details of more than 650,000 customers unless the company paid a ransom of approximately $40,000.

Meanwhile, Pizza Pirates’ Alhilo says the company has been working with the local U.S. Postal Inspector’s office, which was very interested in the letter. Alhilo said her establishment won’t be paying the extortionists.

“We have no intention of paying it,” she said. “Honestly, if it hadn’t been a slow day that Monday I might have just thrown the letter out because it looked like junk mail. It’s annoying that someone would try to make a few bucks like this on the backs of small businesses.”

A GREAT CRIME FOR CRIMINALS

Fueled largely by the relative anonymity of cryptocurrencies like Bitcoin, extortion attacks are increasingly being incorporated into all manner of cyberattacks. Today’s thieves are no longer content merely to hijack your computer and bandwidth and steal all of your personal and financial data; increasingly, these crooks are likely to hold all of your important documents for ransom as well.

“In the early days, they’d steal your credit card data and then threaten to disclose it only after they’d already sold it on the underground,” said Alan Paller, director of research at the SANS Institute, a Bethesda, Md.-based security training firm. “But today, extortion is the fastest way for the bad guys to make money, because it’s the shortest path from cybercrime to cash. It’s really a great crime for the criminals.”

Last month, the U.S. government joined private security companies and international law enforcement partners to dismantle a criminal infrastructure responsible for spreading CryptoLocker, a ransomware scourge that the FBI estimates stole more than $27 million from victims compromised by the file-encrypting malware.

Even as the ink was still drying on the press releases about the CryptoLocker takedown, a new variant of CryptoLocker — CryptoWall — was taking hold. These attacks encrypt the files on a victim’s PC and keep them locked unless and until the victim pays an arbitrary amount specified by the perpetrators — usually a few hundred dollars worth of bitcoins. Many victims without adequate backups in place (or those whose backups also were encrypted) pay up. Others, like the police department in the New Hampshire hamlet of Durham, are standing their ground.

The downside to standing your ground is that — unless you have backups of your data — the encrypted information is gone forever. When these attacks hit businesses, the results can be devastating. Code-hosting and project management services provider CodeSpaces.com was forced to shut down this month after a hacker gained access to its Amazon EC2 account and deleted most data, including backups. According to Computerworld, the devastating security breach happened over a span of 12 hours and initially started with a distributed denial-of-service attack followed by an attempt to extort money from the company.

A HIDDEN CRIME

Extortion attacks against companies operating in the technology and online space are nothing new, of course. Just last week, news came to light that mobile phone giant Nokia paid millions in 2007 to extortionists who threatened to reveal an encryption key for Nokia’s Symbian mobile phone source code.

Trouble is, the very nature of these scams makes it difficult to gauge their frequency or success.

“The problem with extortion is that the money is paid in order to keep the attack secret, and so if the attack is successful, there is no knowledge of the attack even having taken place,” SANS’s Paller said.

Traditionally, the hardest part about extortion has been getting paid and getting away with the loot. In the case of the crooks who extorted Nokia, the company paid the money, reportedly leaving the cash in a bag at an amusement park car lot. Police were tracking the drop-off location, but ultimately lost track of the blackmailers.

Anonymous virtual currencies like Bitcoin not only make it easier for extortionists to get paid, but they also make it easier and more lucrative for American blackmailers to get in on the action. Prior to Bitcoin’s rise in popularity, the principal way that attackers extracted their ransom was by instructing victims to pay by wire transfer or reloadable prepaid debit cards — principally Greendot cards sold at retailers, convenience stores and pharmacies.

But unlike Bitcoin payments, these methods are easily traceable when cashed out within the United States, the ICSI’s Weaver said.

“Bitcoin is their best available tool if they’re located in the United States,” Weaver said of extortionists. “Western Union can be traced at U.S. cashout locations, as can Greendot payments. Which means you either need an overseas partner [who takes half of the profit for his trouble] or Bitcoin.”

TorrentFreak: Movie Chain Bans Google Glass Over Piracy Fears

This post was syndicated from: TorrentFreak and was written by: Andy. Original post: at TorrentFreak

Ever since the concept became public there have been fears over potential misuse of Google Glass. The advent of the wearable computer has sparked privacy fears and perhaps unsurprisingly, concerns that it could be used for piracy.

Just this January the FBI dragged a man from a movie theater in Columbus, Ohio, after theater staff presumed his wearing of Google Glass was a sign that he was engaged in camcorder piracy.

While it’s possible the device could be put to that use, it’s now less likely that patrons of the Alamo Drafthouse movie theater chain will be able to do so without being noticed. Speaking with Deadline, company CEO and founder Tim League says the time is now right to exclude the active use of Glass completely.

“We’ve been talking about this potential ban for over a year,” League said.

“Google Glass did some early demos here in Austin and I tried them out personally. At that time, I recognized the potential piracy problem that they present for cinemas. I decided to put off a decision until we started seeing them in the theater, and that started happening this month.”

According to League, people won’t be forbidden from bringing Google Glass onto the company’s premises, nor will they be banned from wearing the devices. Only when the devices are switched on will there be a problem.

“Google Glass is officially banned from drafthouse auditoriums once lights dim for trailers,” League explained yesterday.

Asked whether people could use them with corrective lenses, League said that discretion would be used.

“It will be case by case, but if it is clear when they are on, clear when they are off, will likely be OK,” he said.

But despite the theater chain’s apparent flexibility towards the non-active use of the device, the ban does seem to go further than the official stance taken by the MPAA following the earlier Ohio incident.

“Google Glass is an incredible innovation in the mobile sphere, and we have seen no proof that it is currently a significant threat that could result in content theft,” the MPAA said in a statement.

However, recording a movie in a theater remains a criminal offense in the United States, so whether a crime has been committed will be a decision for law enforcement officers called to any ‘camming’ incident. Given the MPAA’s statement, it will be interesting to see if the studios will encourage the police to pursue cases against future Google Glass users.

Source: TorrentFreak, for the latest info on copyright, file-sharing and anonymous VPN services.

Errata Security: Can I drop a pacemaker 0day?

This post was syndicated from: Errata Security and was written by: Robert Graham. Original post: at Errata Security

Can I drop a pacemaker 0day at DefCon that is capable of killing people?

Computers now run our cars. It’s now possible for a hacker to infect your car with a “virus” that can slam on the brakes in the middle of the freeway. Computers now run medical devices like pacemakers and insulin pumps, and it’s becoming possible to assassinate somebody by stopping their pacemaker with a Bluetooth exploit.

The problem is that manufacturers are 20 years behind in terms of computer “security”. They don’t just have vulnerabilities, they have obvious vulnerabilities. That means not only can these devices be hacked, they can easily be hacked by teenagers. Vendors do something like put a secret backdoor password in a device believing nobody is smart enough to find it — then a kid finds it in under a minute using a simple program like “strings“.
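That kind of discovery really is a one-minute job. A minimal Python sketch of what the `strings` utility does (the firmware filename below is a placeholder, not a real device image):

```python
import re

# Minimal re-implementation of the Unix "strings" utility: return every
# run of four or more printable ASCII characters found in a binary file.
def strings(path, min_len=4):
    with open(path, "rb") as f:
        data = f.read()
    return [m.decode("ascii")
            for m in re.findall(rb"[\x20-\x7e]{%d,}" % min_len, data)]

# Usage: any hard-coded backdoor password shipped in the firmware shows
# up as plain text among the results.
# for s in strings("firmware.bin"):   # "firmware.bin" is a placeholder
#     print(s)
```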
Telling vendors about the problem rarely helps, because vendors don’t care. If they cared at all, they wouldn’t have put the vulnerabilities in their products to begin with. 30% of such products have easily discovered backdoors, which is something they should already care about, so telling them you’ve discovered they are one of the 30% won’t help.

Historically, we’ve dealt with vendor unresponsiveness through the process of “full disclosure”. If a vendor was unresponsive after we gave them a chance to fix the bug first, we simply published the bug (“dropped 0day”), either on a mailing list or during a talk at a hacker convention like DefCon. Only after full disclosure does the company take the problem seriously and fix it.

This process has worked well. If we look at the evolution of products from Windows to Chrome, the threat of 0day has caused vendors to vastly improve their products. Moreover, they now court 0day: Google pays you a bounty for Chrome 0day, with no strings attached on how you might also maliciously use it.

So let’s say I’ve found a pacemaker with an obvious Bluetooth backdoor that allows me to kill a person, and a year after notifying the vendor, they still ignore the problem, continuing to ship vulnerable pacemakers to customers. What should I do? If I do nothing, more and more such pacemakers will ship, endangering more lives. If I disclose the bug, then hackers may use it to kill some people.

The problem is that dropping a pacemaker 0day is so horrific that most people would readily agree it should be outlawed. But, at the same time, without the threat of 0day, vendors will ignore the problem.

This is the question for groups that defend “coder’s rights”, like the EFF. Will they really defend coders in this hypothetical scenario, declaring that releasing 0day code is free speech that reveals problems of public concern? Or will they agree that such code should be suppressed in the name of public safety?

I ask this question because right now they are avoiding the issue, because whichever stance they take will anger a lot of people. This paper from the EFF on the issue seems to support disclosing 0days, but only in the abstract, not in the concrete scenario I describe. The EFF has a history of backing away from previous principles when they become unpopular. For example, they once fought against regulating the Internet as a public utility; now they fight for it in the name of net neutrality. Another example is selling 0days to the government, which the EFF criticizes. I doubt the EFF will continue to support disclosing 0days when they can kill people. The first time a child dies in a car crash caused by a hacker, every organization is going to run from “coder’s rights”.

By the way, it should be clear from the above post which side of this question I stand on: for coder’s rights.

Update: Here’s another scenario. In Twitter discussions, people have said that the remedy for unresponsive vendors is to contact an organization like ICS-CERT, the DHS organization responsible for “control systems”. That doesn’t work, because ICS-CERT is itself a political, unresponsive organization.

The ICS-CERT doesn’t label “default passwords” as a “vulnerability”, despite the fact that it’s a leading cause of hacks, and a common feature of exploit kits. They claim that it’s the user’s responsibility to change the password, and not the fault of the vendor if they don’t.
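A default-password check is about as simple as an attack gets, which is why exploit kits include it. An illustrative sketch of the idea (the credential list and the `try_login` callback are invented stand-ins, not taken from any real product):

```python
# A short list of common factory-default credentials. These pairs are
# generic examples, not drawn from any specific vendor's products.
DEFAULT_CREDS = [
    ("admin", "admin"),
    ("admin", "password"),
    ("root", "root"),
    ("user", "1234"),
]

def find_default_login(try_login):
    """try_login(user, pw) -> bool. Returns the first factory-default
    pair that works, or None if the operator changed the password."""
    for user, pw in DEFAULT_CREDS:
        if try_login(user, pw):
            return (user, pw)
    return None
```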

Yet, disclosing default passwords is one of the things that vendors try to suppress. When a researcher reveals a default password in a control system, and a hacker exploits it to cause a power outage, it’s the researcher who will get blamed for revealing information that was not-a-vulnerability.

I say this because I was personally threatened by the FBI to suppress something that was not-a-vulnerability, yet which they claimed would hurt national security if I revealed it to Chinese hackers.

Again, the only thing that causes change is full disclosure. Everything else allows politics to suppress information vital to public safety.


Update: Some have suggested that moral and legal are two different arguments: that someone can call full disclosure immoral without necessarily arguing that it should be illegal.

That’s not true. That’s like saying that speech is immoral when Nazis do it. It isn’t — the content may be vile, but the act of speaking is never immoral.

The “immoral but legal” argument is too subtle for politics; you really have to pick one or the other. We saw that happen with the EFF. They originally championed the idea that the Internet should not be regulated. Then they championed the idea of net neutrality — which is Internet regulation. They originally claimed there was no paradox, because they were saying merely that net neutrality was moral, not that it should be law. Now they’ve discarded that charade and are actively lobbying Congress to make net neutrality law.

Sure, sometimes some full disclosure will result in bad results, but more often, those with political power will seek to suppress vital information with reasons that sound good at the time, like “think of the children!”. We need to firmly defend full disclosure as free speech, in all circumstances.


Update: Some have suggested that instead of disclosing details, a researcher can inform the media.

This has been tried. It doesn’t work. Vendors have more influence on the media than researchers.

We saw this happen in the Apple WiFi fiasco. It was an obvious bug (SSIDs longer than 97 bytes), but at the time Apple kernel exploitation wasn’t widely known. Therefore, the researchers tried to avoid damaging Apple by not disclosing the full exploit. Thus, people could know about the bug without being able to exploit it.

This didn’t work. Apple’s marketing department claimed the entire thing was fake. They did later fix the bug — claiming it was something they found unrelated to the “fake” vulns from the researchers.

Another example was two years ago when researchers described bugs in airplane control systems. The FAA said the vulns were fake, and the press took the FAA’s line on the problem.

The history of “going to the media” has demonstrated that only full-disclosure works.

TorrentFreak: Kim Dotcom Fails in Bid to Suppress FBI Evidence

This post was syndicated from: TorrentFreak and was written by: Andy. Original post: at TorrentFreak

dotcom-laptopIn 2012 following the raid on his New Zealand mansion, Kim Dotcom fought to gain access to the information being held against him by the FBI.

A ruling by District Court Judge David Harvey in May of that year, which stood despite an August appeal, ordered disclosure of all documents relating to the alleged crimes of the so-called Megaupload Conspiracy.

While it was agreed that this information should be made available, an order forbidding publication was handed down in respect to the so-called Record of Case, a 200-page document summarizing an estimated 22 million emails and Skype discussions obtained by the FBI during their investigation.

Last November a sealed court order by US Judge Liam O’Grady allowed the U.S. Government to share the summary of evidence from the Megaupload case with copyright holders, something that was actioned before the end of the year.

Over in New Zealand, however, Kim Dotcom has been fighting an application by the Crown to make the Record of Case public. That battle came to an end today when Auckland District Court Judge Nevin Dawson rejected an application by Dotcom’s legal team to extend the suppression order placed on the document.

According to RadioNZ, the document contains sensitive information including email and chat conversations which suggest that the Megaupload team knew their users were uploading copyrighted material.

In another setback, further applications by Dotcom to force Immigration New Zealand, the Security Intelligence Service, and several other government departments to hand over information they hold on him, were also rejected by Judge Dawson.

Dotcom’s lawyer Paul Davidson, QC, told Stuff that the battle will continue.

“We will press on with our resolve,” he said.

Source: TorrentFreak, for the latest info on copyright, file-sharing and anonymous VPN services.

Schneier on Security: Disclosing vs. Hoarding Vulnerabilities

This post was syndicated from: Schneier on Security and was written by: schneier. Original post: at Schneier on Security

There’s a debate going on about whether the US government — specifically, the NSA and United States Cyber Command — should stockpile Internet vulnerabilities or disclose and fix them. It’s a complicated problem, and one that starkly illustrates the difficulty of separating attack and defense in cyberspace.

A software vulnerability is a programming mistake that allows an adversary access into that system. Heartbleed is a recent example, but hundreds are discovered every year.
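For a sense of how small such a mistake can be, here is a toy Python model of a Heartbleed-style over-read. The buffer contents and function are invented for illustration; the real bug was a missing bounds check in OpenSSL’s C code:

```python
# Toy model of a Heartbleed-style over-read: the server echoes back a
# client-claimed number of bytes without checking that claim against the
# real payload length, leaking whatever sits next to it in memory.
# The buffer contents here are invented for illustration.
MEMORY = b"bird" + b"---private key material---"  # payload + adjacent secrets

def heartbeat_response(claimed_len: int) -> bytes:
    # Buggy: trusts claimed_len; the honest payload is only 4 bytes long.
    return MEMORY[:claimed_len]

print(heartbeat_response(4))   # b'bird' -- an honest request
print(heartbeat_response(30))  # the payload plus the adjacent "secrets"
```

The fix, of course, is one line: reject any request whose claimed length exceeds the actual payload.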

Unpublished vulnerabilities are called “zero-day” vulnerabilities, and they’re very valuable because no one is protected. Someone with one of those can attack systems world-wide with impunity.

When someone discovers one, he can either use it for defense or for offense. Defense means alerting the vendor and getting it patched. Lots of vulnerabilities are discovered by the vendors themselves and patched without any fanfare. Others are discovered by researchers and hackers. A patch doesn’t make the vulnerability go away, but most users protect themselves by patching their systems regularly.

Offense means using the vulnerability to attack others. This is the quintessential zero-day, because the vendor doesn’t even know the vulnerability exists until it starts being used by criminals or hackers. Eventually the affected software’s vendor finds out — the timing depends on how extensively the vulnerability is used — and issues a patch to close the vulnerability.

If an offensive military cyber unit discovers the vulnerability — or a cyber-weapons arms manufacturer — it keeps that vulnerability secret for use to deliver a cyber-weapon. If it is used stealthily, it might remain secret for a long time. If unused, it’ll remain secret until someone else discovers it.

Discoverers can sell vulnerabilities. There’s a rich market in zero-days for attack purposes — both military/commercial and black markets. Some vendors offer bounties for vulnerabilities to incent defense, but the amounts are much lower.

The NSA can play either defense or offense. It can either alert the vendor and get a still-secret vulnerability fixed, or it can hold on to it and use it to eavesdrop on foreign computer systems. Both are important US policy goals, but the NSA has to choose which one to pursue. By fixing the vulnerability, it strengthens the security of the Internet against all attackers: other countries, criminals, hackers. By leaving the vulnerability open, it is better able to attack others on the Internet. But each use runs the risk of the target government learning of, and using for itself, the vulnerability — or of the vulnerability becoming public and criminals starting to use it.

There is no way to simultaneously defend US networks while leaving foreign networks open to attack. Everyone uses the same software, so fixing us means fixing them, and leaving them vulnerable means leaving us vulnerable. As Harvard Law Professor Jack Goldsmith wrote, “every offensive weapon is a (potential) chink in our defense — and vice versa.”

To make matters even more difficult, there is an arms race going on in cyberspace. The Chinese, the Russians, and many other countries are finding vulnerabilities as well. If we leave a vulnerability unpatched, we run the risk of another country independently discovering it and using it in a cyber-weapon that we will be vulnerable to. But if we patch all the vulnerabilities we find, we won’t have any cyber-weapons to use against other countries.

Many people have weighed in on this debate. The president’s Review Group on Intelligence and Communications Technologies, convened post-Snowden, concluded (recommendation 30), that vulnerabilities should only be hoarded in rare instances and for short times. Cory Doctorow calls it a public health problem. I have said similar things. Dan Geer recommends that the US government corner the vulnerabilities market and fix them all. Both the FBI and the intelligence agencies claim that this amounts to unilateral disarmament.

It seems like an impossible puzzle, but the answer hinges on how vulnerabilities are distributed in software.

If vulnerabilities are sparse, then it’s obvious that every vulnerability we find and fix improves security. We render a vulnerability unusable, even if the Chinese government already knows about it. We make it impossible for criminals to find and use it. We improve the general security of our software, because we can find and fix most of the vulnerabilities.

If vulnerabilities are plentiful — and this seems to be true — the ones the US finds and the ones the Chinese find will largely be different. This means that patching the vulnerabilities we find won’t make it appreciably harder for criminals to find the next one. We don’t really improve general software security by disclosing and patching unknown vulnerabilities, because the percentage we find and fix is small compared to the total number that are out there.

But while vulnerabilities are plentiful, they’re not uniformly distributed. There are easier-to-find ones, and harder-to-find ones. Tools that automatically find and fix entire classes of vulnerabilities, and coding practices that eliminate many easy-to-find ones, greatly improve software security. And when one person finds a vulnerability, it is likely that another person soon will find, or recently has found, the same one. Heartbleed, for example, remained undiscovered for two years, and then two independent researchers discovered it within two days of each other. This is why it is important for the government to err on the side of disclosing and fixing.

The NSA, and by extension US Cyber Command, tries its best to play both ends of this game. Former NSA Director Michael Hayden talks about NOBUS, “nobody but us.” The NSA has a classified process to determine what it should do about vulnerabilities, disclosing and closing most of the ones it finds, but holding back some — we don’t know how many — vulnerabilities that “nobody but us” could find for attack purposes.

This approach seems to be the appropriate general framework, but the devil is in the details. Many of us in the security field don’t know how to make NOBUS decisions, and the recent White House clarification posed more questions than it answered.

Who makes these decisions, and how? How often are they reviewed? Does this review process happen inside the Department of Defense, or is it broader? Surely there needs to be a technical review of each vulnerability, but there should also be policy reviews regarding the sorts of vulnerabilities we are hoarding. Do we hold these vulnerabilities until someone else finds them, or only for a short period of time? How many do we stockpile? The US/Israeli cyberweapon Stuxnet used four zero-day vulnerabilities. Burning four on a single military operation implies that we are hoarding not a small number, but more like 100 or more.

There’s one more interesting wrinkle. Cyber-weapons are a combination of a payload — the damage the weapon does — and a delivery mechanism: the vulnerability used to get the payload into the enemy network. Imagine that China knows about a vulnerability and is using it in a still-unfired cyber-weapon, and that the NSA learns about it through espionage. Should the NSA disclose and patch the vulnerability, or should it use it itself for attack? If it discloses, then China could find a replacement vulnerability that the NSA won’t know about. But if it doesn’t, it’s deliberately leaving the US vulnerable to cyber-attack. Maybe someday we can get to the point where we can patch vulnerabilities faster than the enemy can use them in an attack, but we’re nowhere near that point today.

The implications of US policy can be felt on a variety of levels. The NSA’s actions have resulted in a widespread mistrust of the security of US Internet products and services, greatly affecting American business. If we show that we’re putting security ahead of surveillance, we can begin to restore that trust. And by making the decision process much more public than it is today, we can demonstrate both our trustworthiness and the value of open government.

An unpatched vulnerability puts everyone at risk, but not to the same degree. The US and other Western countries are highly vulnerable, because of our critical electronic infrastructure, intellectual property, and personal wealth. Countries like China and Russia are less vulnerable — North Korea much less — so they have considerably less incentive to see vulnerabilities fixed. Fixing vulnerabilities isn’t disarmament; it’s making our own countries much safer. We also regain the moral authority to negotiate any broad international reductions in cyber-weapons; and we can decide not to use them even if others do.

Regardless of our policy towards hoarding vulnerabilities, the most important thing we can do is patch vulnerabilities quickly once they are disclosed. And that’s what companies are doing, even without any government involvement, because so many vulnerabilities are discovered by criminals.

We also need more research in automatically finding and fixing vulnerabilities, and in building secure and resilient software in the first place. Research over the last decade or so has resulted in software vendors being able to find and close entire classes of vulnerabilities. Although there are many cases of these security analysis tools not being used, all of our security is improved when they are. That alone is a good reason to continue disclosing vulnerability details, and something the NSA can do to vastly improve the security of the Internet worldwide. Here again, though, they would have to make the tools they have to automatically find vulnerabilities available for defense and not attack.

In today’s cyberwar arms race, unpatched vulnerabilities and stockpiled cyber-weapons are inherently destabilizing, especially because they are only effective for a limited time. The world’s militaries are investing more money in finding vulnerabilities than the commercial world is investing in fixing them. The vulnerabilities they discover affect the security of us all. No matter what cybercriminals do, no matter what other countries do, we in the US need to err on the side of security and fix almost all the vulnerabilities we find. But not all, yet.

This essay previously appeared on TheAtlantic.com.

Schneier on Security: Disclosing vs Hoarding Vulnerabilities

This post was syndicated from: Schneier on Security and was written by: schneier. Original post: at Schneier on Security

There’s a debate going on about whether the U.S. government — specifically, the NSA and United States Cyber Comman — should stockpile Internet vulnerabilities or disclose and fix them. It’s a complicated problem, and one that starkly illustrates the difficulty of separating attack and defense in cyberspace.

A software vulnerability is a programming mistake that allows an adversary access into that system. Heartbleed is a recent example, but hundreds are discovered every year.

Unpublished vulnerabilities are called “zero-day” vulnerabilities, and they’re very valuable because no one is protected. Someone with one of those can attack systems world-wide with impunity.

When someone discovers one, he can either use it for defense or for offense. Defense means alerting the vendor and getting it patched. Lots of vulnerabilities are discovered by the vendors themselves and patched without any fanfare. Others are discovered by researchers and hackers. A patch doesn’t make the vulnerability go away, but most users protect themselves by patch their systems regularly.

Offense means using the vulnerability to attack others. This is the quintessential zero-day, because the vendor doesn’t even know the vulnerability exists until it starts being used by criminals or hackers. Eventually the affected software’s vendor finds out — the timing depends on how extensively the vulnerability is used — and issues a patch to close the vulnerability.

If an offensive military cyber unit discovers the vulnerability — or a cyber-weapons arms manufacturer — it keeps that vulnerability secret for use to deliver a cyber-weapon. If it is used stealthily, it might remain secret for a long time. If unused, it’ll remain secret until someone else discovers it.

Discoverers can sell vulnerabilities. There’s a rich market in zero-days for attack purposes — both military/commercial and black markets. Some vendors offer bounties for vulnerabilities to incent defense, but the amounts are much lower.

The NSA can play either defense or offense. It can either alert the vendor and get a still-secret vulnerability fixed, or it can hold on to it and use it as to eavesdrop on foreign computer systems. Both are important U.S. policy goals, but the NSA has to choose which one to pursue. By fixing the vulnerability, it strengthens the security of the Internet against all attackers: other countries, criminals, hackers. By leaving the vulnerability open, it is better able to attack others on the Internet. But each use runs the risk of the target government learning of, and using for itself, the vulnerability — or of the vulnerability becoming public and criminals starting to use it.

There is no way to simultaneously defend U.S. networks while leaving foreign networks open to attack. Everyone uses the same software, so fixing us means fixing them, and leaving them vulnerable means leaving us vulnerable. As Harvard Law Professor Jack Goldsmith wrote, “every offensive weapon is a (potential) chink in our defense—and vice versa.”

To make matters even more difficult, there is an arms race going on in cyberspace. The Chinese, the Russians, and many other countries are finding vulnerabilities as well. If we leave a vulnerability unpatched, we run the risk of another country independently discovering it and using it in a cyber-weapon that we will be vulnerable to. But if we patch all the vulnerabilities we find, we won’t have any cyber-weapons to use against other countries.

Many people have weighed in on this debate. The president’s Review Group on Intelligence and Communications Technologies, convened post-Snowden, concluded (recommendation 30) that vulnerabilities should only be hoarded in rare instances and for short times. Cory Doctorow calls it a public health problem. I have said similar things. Dan Geer recommends that the U.S. government corner the vulnerabilities market and fix them all. Both the FBI and the intelligence agencies claim that this amounts to unilateral disarmament.

It seems like an impossible puzzle, but the answer hinges on how vulnerabilities are distributed in software.

If vulnerabilities are sparse, then it’s obvious that every vulnerability we find and fix improves security. We render a vulnerability unusable, even if the Chinese government already knows about it. We make it impossible for criminals to find and use it. We improve the general security of our software, because we can find and fix most of the vulnerabilities.

If vulnerabilities are plentiful — and this seems to be true — the ones the U.S. finds and the ones the Chinese find will largely be different. This means that patching the vulnerabilities we find won’t make it appreciably harder for criminals to find the next one. We don’t really improve general software security by disclosing and patching unknown vulnerabilities, because the percentage we find and fix is small compared to the total number that are out there.

But while vulnerabilities are plentiful, they’re not uniformly distributed. There are easier-to-find ones, and harder-to-find ones. Tools that automatically find and fix entire classes of vulnerabilities, and coding practices that eliminate many easy-to-find ones, greatly improve software security. And when one person finds a vulnerability, it is likely that another person soon will, or recently has, found the same vulnerability. Heartbleed, for example, remained undiscovered for two years, and then two independent researchers discovered it within two days of each other. This is why it is important for the government to err on the side of disclosing and fixing.
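The sparse-versus-plentiful intuition can be checked with a toy simulation: two parties independently sampling bugs from a shared pool. Everything here is an illustrative assumption (the pool sizes, the discovery counts, and the independence of the two searchers), not an empirical estimate.

```python
import random

def expected_overlap(total, found_a, found_b, trials=2000, seed=1):
    """Toy model: two parties each independently discover a random
    subset of a shared vulnerability pool; return the average number
    of bugs both find. Analytically: found_a * found_b / total."""
    rng = random.Random(seed)
    pool = range(total)
    hits = 0
    for _ in range(trials):
        a = set(rng.sample(pool, found_a))
        b = set(rng.sample(pool, found_b))
        hits += len(a & b)
    return hits / trials

# Sparse pool: heavy overlap, so every fix also disarms the other side.
print(expected_overlap(total=200, found_a=50, found_b=50))    # ≈ 12.5
# Plentiful pool: the two sides mostly hold different bugs.
print(expected_overlap(total=20000, found_a=50, found_b=50))  # ≈ 0.1
```

Note that this model leaves out exactly the non-uniform distribution discussed above: clustering around easy-to-find bugs pushes real-world rediscovery rates higher than the independent-sampling estimate, which strengthens the case for disclosure.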

The NSA, and by extension U.S. Cyber Command, tries its best to play both ends of this game. Former NSA Director Michael Hayden talks about NOBUS, “nobody but us.” The NSA has a classified process to determine what it should do about vulnerabilities, disclosing and closing most of the ones it finds, but holding back some — we don’t know how many — vulnerabilities that “nobody but us” could find for attack purposes.

This approach seems to be the appropriate general framework, but the devil is in the details. Many of us in the security field don’t know how to make NOBUS decisions, and the recent White House clarification posed more questions than it answered.

Who makes these decisions, and how? How often are they reviewed? Does this review process happen inside the Department of Defense, or is it broader? Surely there needs to be a technical review of each vulnerability, but there should also be policy reviews regarding the sorts of vulnerabilities we are hoarding. Do we hold these vulnerabilities until someone else finds them, or only for a short period of time? How many do we stockpile? The US/Israeli cyberweapon Stuxnet used four zero-day vulnerabilities. Burning four on a single military operation implies that we are not hoarding a small number, but something more like 100 or more.

There’s one more interesting wrinkle. Cyber-weapons are a combination of a payload — the damage the weapon does — and a delivery mechanism: the vulnerability used to get the payload into the enemy network. Imagine that China knows about a vulnerability and is using it in a still-unfired cyber-weapon, and that the NSA learns about it through espionage. Should the NSA disclose and patch the vulnerability, or should it use it itself for attack? If it discloses, then China could find a replacement vulnerability that the NSA won’t know about. But if it doesn’t, it’s deliberately leaving the U.S. vulnerable to cyber-attack. Maybe someday we can get to the point where we can patch vulnerabilities faster than the enemy can use them in an attack, but we’re nowhere near that point today.

The implications of U.S. policy can be felt on a variety of levels. The NSA’s actions have resulted in a widespread mistrust of the security of U.S. Internet products and services, greatly affecting American business. If we show that we’re putting security ahead of surveillance, we can begin to restore that trust. And by making the decision process much more public than it is today, we can demonstrate both our trustworthiness and the value of open government.

An unpatched vulnerability puts everyone at risk, but not to the same degree. The U.S. and other Western countries are highly vulnerable, because of our critical electronic infrastructure, intellectual property, and personal wealth. Countries like China and Russia are less vulnerable — North Korea much less — so they have considerably less incentive to see vulnerabilities fixed. Fixing vulnerabilities isn’t disarmament; it’s making our own countries much safer. We also regain the moral authority to negotiate any broad international reductions in cyber-weapons; and we can decide not to use them even if others do.

Regardless of our policy towards hoarding vulnerabilities, the most important thing we can do is patch vulnerabilities quickly once they are disclosed. And that’s what companies are doing, even without any government involvement, because so many vulnerabilities are discovered by criminals.

We also need more research in automatically finding and fixing vulnerabilities, and in building secure and resilient software in the first place. Research over the last decade or so has enabled software vendors to find and close entire classes of vulnerabilities. Although these security analysis tools often go unused, all of our security improves when they are used. That alone is a good reason to continue disclosing vulnerability details, and it is something the NSA could do to vastly improve the security of the Internet worldwide. Here again, though, it would have to make its automated vulnerability-finding tools available for defense rather than attack.

In today’s cyberwar arms race, unpatched vulnerabilities and stockpiled cyber-weapons are inherently destabilizing, especially because they are only effective for a limited time. The world’s militaries are investing more money in finding vulnerabilities than the commercial world is investing in fixing them. The vulnerabilities they discover affect the security of us all. No matter what cybercriminals do, no matter what other countries do, we in the U.S. need to err on the side of security and fix almost all the vulnerabilities we find. But not all, yet.

This essay previously appeared on TheAtlantic.com.

Errata Security: FBI will now record interrogations

This post was syndicated from: Errata Security and was written by: Robert Graham. Original post: at Errata Security

This is huge, so I thought I’d blog about this: after a century, the FBI has reversed policy, and will now start recording interrogations.

Prior to this, the FBI policy was not to record interrogations. Agents worked in pairs (always two there are), with one interrogating and the second quietly writing down what was said. What they wrote down was always their version of events, which was always in their favor.

This has long been a strategy to trap people into becoming informants. It’s a felony to lie to a federal agent. Thus, if you later say something that contradicts their version of what you said, you are guilty of lying — which they’ll forgive if you inform on your friends.

I experienced this myself. Two agents came to our business to talk to us about a talk we were giving at BlackHat. Part of it contained the threat that if we didn’t cancel our talk, they’d taint our file so we’d never be able to pass a background check and work in government ever again. According to a later FOIA request, that threat wasn’t included in their form 302 about the conversation. And since it’s my word against theirs, their threat never happened.

This is a big deal in the Dzhokhar Tsarnaev (Boston bombing) case. The FBI interviewed Tsarnaev while he was near death on a hospital bed. Their transcription of what he said bears little resemblance to what was actually said, omitting key details like how often he asked to talk to his lawyer, the FBI agents denying him access to his lawyer, and the threats the agents made to him.

This policy proved beyond a shadow of a doubt that the FBI is inherently corrupt. Now that they are changing it, such proof will be harder to come by — though I have no doubt it’s still true.

Update: other stories have focused on videotaping interrogations after arrest, but more importantly, the policy change also covers investigations, when agents talk to people whom they have no intention of arresting (as in my case).

Note: I used this in my short story for last year’s DEF CON contest. It’s interesting that it’s now already out of date.

Krebs on Security: ‘Blackshades’ Trojan Users Had It Coming

This post was syndicated from: Krebs on Security and was written by: BrianKrebs. Original post: at Krebs on Security

The U.S. Justice Department today announced a series of actions against more than 100 people accused of purchasing and using “Blackshades,” a password-stealing Trojan horse program designed to infect computers throughout the world to spy on victims through their web cameras, steal files and account information, and log victims’ key strokes. While any effort that discourages the use of point-and-click tools for ill-gotten gains is a welcome development, the most remarkable aspect of this crackdown is that those who were targeted in this operation lacked any clue that it was forthcoming.

The Blackshades user forum.

To be sure, Blackshades is an effective and easy-to-use tool for remotely compromising and spying on your targets. Early on in its development, researchers at CitizenLab discovered that Blackshades was being used to spy on activists seeking to overthrow the regime in Syria.

The product was sold via well-traveled and fairly open hacker forums, and even included an active user forum where customers could get help configuring and wielding the powerful surveillance tool. Although in recent years a license to Blackshades sold for several hundred euros, early versions of the product were sold via PayPal for just $40.

In short, Blackshades was a tool created and marketed principally for buyers who wouldn’t know how to hack their way out of a paper bag. From the Justice Department’s press release today:

“After purchasing a copy of the RAT, a user had to install the RAT on a victim’s computer – i.e., “infect” a victim’s computer. The infection of a victim’s computer could be accomplished in several ways, including by tricking victims into clicking on malicious links or by hiring others to install the RAT on victims’ computers.

The RAT contained tools known as ‘spreaders’ that helped users of the RAT maximize the number of infections. The spreader tools generally worked by using computers that had already been infected to help spread the RAT further to other computers. For instance, in order to lure additional victims to click on malicious links that would install the RAT on their computers, the RAT allowed cybercriminals to send those malicious links to others via the initial victim’s social media service, making it appear as if the message had come from the initial victim.”

News that the FBI and other national law enforcement organizations had begun rounding up Blackshades customers started surfacing online last week, when multiple denizens of the noob-friendly hacker forum Hackforums[dot]net began posting firsthand experiences of receiving a visit from local authorities related to their prior alleged Blackshades use. See the image gallery at the end of this post for a glimpse into the angst that accompanied that development.

While there is a certain amount of schadenfreude in today’s action, the truth is that any longtime Blackshades customer who didn’t know this day would be coming should turn in his hacker card immediately. In June 2012, the Justice Department announced a series of indictments against at least two dozen individuals who had taken the bait and signed up to be active members of “Carderprofit,” a fraud forum that was created and maintained by the Federal Bureau of Investigation.

Among those arrested in the CarderProfit sting was Michael Hogue, the alleged co-creator of Blackshades. That so many of the customers of this product are teenagers who wouldn’t know a command line prompt from a hole in the ground is evident by the large number of users who vented their outrage over their arrests and/or visits by the local authorities on Hackforums, which by the way was the genesis of the CarderProfit sting from Day One.

In June 2010, Hackforums administrator Jesse Labrocca — a.k.a. “Omniscient” — posted a message to all users of the forum, notifying them that the forum would no longer tolerate the posting of messages about ways to buy and use the ZeuS Trojan, a far more sophisticated remote-access Trojan that is heavily used by cybercriminals worldwide and has been implicated in the theft of hundreds of millions of dollars from small- to mid-sized businesses worldwide.

Hackforums admin Jesse "Omniscient" LaBrocca urging users to register at a new forum -- Carderprofit.eu -- a sting Web site set up by the FBI.

That warning, shown in the screen shot above, alerted Hackforums users that henceforth any discussion about using or buying ZeuS was verboten on the site, and that those who wished to carry on conversations about this topic should avail themselves of a brand new forum that was being set up to accommodate them. And, of course, that forum was carderprofit[dot]eu.

Interestingly, the individuals rounded up as part of the FBI’s CarderProfit sting included several key leaders of LulzSec (among them the 16-year-old individual responsible for sending a heavily armed police response to my home in March 2013).

The CarderProfit homepage, which featured an end-user license agreement written by the FBI.

In a press conference today, the FBI said its investigation has shown that Blackshades was purchased by at least several thousand users in more than 100 countries and used to infect more than half a million computers worldwide. The government alleges that one co-creator of Blackshades generated sales of more than $350,000 between September 2010 and April 2014. Information about that individual and others charged in this case can be found at this link.

For a glimpse at what the recipients of all this attention went through these past few days, check out the images below.

[Image gallery: nine screenshots, bs1–bs9]

Schneier on Security: Espionage vs. Surveillance

This post was syndicated from: Schneier on Security and was written by: schneier. Original post: at Schneier on Security

According to NSA documents published in Glenn Greenwald’s new book No Place to Hide, we now know that the NSA spies on embassies and missions all over the world, including those of Brazil, Bulgaria, Colombia, the European Union, France, Georgia, Greece, India, Italy, Japan, Mexico, Slovakia, South Africa, South Korea, Taiwan, Venezuela and Vietnam.

This will certainly strain international relations, as happened when it was revealed that the U.S. is eavesdropping on German Chancellor Angela Merkel’s cell phone — but is anyone really surprised? Spying on foreign governments is what the NSA is supposed to do. Much more problematic, and dangerous, is that the NSA is spying on entire populations. It’s a mistake to have the same laws and organizations involved with both activities, and it’s time we separated the two.

The former is espionage: the traditional mission of the NSA. It’s an important military mission, both in peacetime and wartime, and something that’s not going to go away. It’s targeted. It’s focused. Decisions of whom to target are decisions of foreign policy. And secrecy is paramount.

The latter is very different. Terrorists are a different type of enemy; they’re individual actors instead of state governments. We know who foreign government officials are and where they’re located: in government offices in their home countries, and embassies abroad. Terrorists could be anyone, anywhere in the world. To find them, the NSA has to look for individual bad actors swimming in a sea of innocent people. This is why the NSA turned to broad surveillance of populations, both in the U.S. and internationally.

If you think about it, this is much more of a law enforcement sort of activity than a military activity. Both involve security, but just as the NSA’s traditional focus was governments, the FBI’s traditional focus was individuals. Before and after 9/11, both the NSA and the FBI were involved in counterterrorism. The FBI did work in the U.S. and abroad. After 9/11, the primary mission of counterterrorist surveillance was given to the NSA because it had existing capabilities, but the decision could have gone the other way.

Because the NSA got the mission, both the military norms and the legal framework from the espionage world carried over. Our surveillance efforts against entire populations were kept as secret as our espionage efforts against governments. And we modified our laws accordingly. The 1978 Foreign Intelligence Surveillance Act (FISA) that regulated NSA surveillance required targets to be “agents of a foreign power.” When the law was amended in 2008 under the FISA Amendments Act, a target could be any foreigner anywhere.

Government-on-government espionage is as old as governments themselves, and is the proper purview of the military. So let the Commander in Chief make the determination on whose cell phones to eavesdrop on, and let the NSA carry those orders out.

Surveillance is a large-scale activity, potentially affecting billions of people, and different rules have to apply – the rules of the police. Any organization doing such surveillance should apply the police norms of probable cause, due process, and oversight to population surveillance activities. It should make its activities much less secret and more transparent. It should be accountable in open courts. This is how we, and the rest of the world, regain trust in U.S. actions.

In January, President Obama gave a speech on the NSA where he said two very important things. He said that the NSA would no longer spy on Angela Merkel’s cell phone. And while he didn’t extend that courtesy to the other 82 million citizens of Germany, he did say that he would extend some of the U.S.’s constitutional protections against warrantless surveillance to the rest of the world.

Breaking up the NSA by separating espionage from surveillance, and putting the latter under a law enforcement regime instead of a military regime, is a step toward achieving that.

This essay originally appeared on CNN.com.

Schneier on Security: New NSA Snowden Documents

This post was syndicated from: Schneier on Security and was written by: schneier. Original post: at Schneier on Security

Glenn Greenwald’s new book, No Place to Hide, was published today. There are about 100 pages of NSA documents on the book’s website. I haven’t gone through them yet. At a quick glance, only a few of them have been published before.

Here are two book reviews.

EDITED TO ADD (5/13): It’s surprising how large the FBI’s role in all of this is. On page 81, we see that they’re the point contact for BLARNEY. (BLARNEY is a decades-old AT&T data collection program.) And page 28 shows the ESCU — that’s the FBI’s Electronic Communications Surveillance Unit — is point on all the important domestic collection and interaction with companies. When companies deny that they work with the NSA, it’s likely that they’re working with the FBI and not realizing that it’s the NSA that’s getting all the data they’re providing.

Krebs on Security: Teen Arrested for 30+ Swattings, Bomb Threats

This post was syndicated from: Krebs on Security and was written by: BrianKrebs. Original post: at Krebs on Security

A 16-year-old male from Ottawa, Canada has been arrested for allegedly making at least 30 fraudulent calls to emergency services across North America over the past few months. The false alarms — two of which targeted this reporter — involved calling in phony bomb threats and multiple attempts at “swatting” — a hoax in which the perpetrator spoofs a call about a hostage situation or other violent crime in progress in the hopes of tricking police into responding at a particular address with deadly force.

On March 9, a user on Twitter named @ProbablyOnion (possibly NSFW) started sending me rude and annoying messages. A month later (and several weeks after blocking him on Twitter), I received a phone call from the local police department. It was early in the morning on Apr. 10, and the cops wanted to know if everything was okay at our address.

Since this was not the first time someone had called in a fake hostage situation at my home, the call I received came from the police department’s non-emergency number, and they were unsurprised when I told them that the Krebs manor and all of its inhabitants were just fine.

Minutes after my local police department received that fake notification, @ProbablyOnion was bragging on Twitter about swatting me, including me on his public messages: “You have 5 hostages? And you will kill 1 hostage every 6 times and the police have 25 minutes to get you $100k in clear plastic.” Another message read: “Good morning! Just dispatched a swat team to your house, they didn’t even call you this time, hahaha.”

I told this user privately that targeting an investigative reporter maybe wasn’t the brightest idea, and that he was likely to wind up in jail soon. But @ProbablyOnion was on a roll: that same day, he hung out his for-hire sign on Twitter with the following message: “want someone swatted? Tweet me their name, address and I’ll make it happen.”

wantswat

Several Twitter users apparently took him up on that offer. All told, @ProbablyOnion would claim responsibility for more than two dozen swatting and bomb threat incidents at schools and other public locations across the United States.

On May 7, @ProbablyOnion tried to get the swat team to visit my home again, and once again without success. “How’s your door?” he tweeted. I replied: “Door’s fine, Curtis. But I’m guessing yours won’t be soon. Nice opsec!”

I was referring to a document that had just been leaked on Pastebin, which identified @ProbablyOnion as 19-year-old Curtis Gervais from Ontario. @ProbablyOnion laughed it off but didn’t deny the accuracy of the information, except to tweet that the document got his age wrong. A day later, @ProbablyOnion would post his final tweet: “Still awaiting for the horsies to bash down my door,” a taunting reference to the Royal Canadian Mounted Police (RCMP).

According to an article in the Ottawa Citizen, the 16-year-old faces 60 charges, including creating fear by making bomb threats. Ottawa police also are investigating whether any alleged hoax calls diverted responders away from real emergencies.

Most of the people involved in swatting and making bomb threats are young males under the age of 18 — the age when kids seem to have little appreciation for or care about the seriousness of their actions. According to the FBI, each swatting incident costs emergency responders approximately $10,000. Each hoax also unnecessarily endangers the lives of the responders and the public.

Take, for example, the kid who swatted my home last year: According to interviews with multiple law enforcement sources familiar with the case, that kid is only 17 now, and was barely 16 at the time of the incident in March 2013. Identified in several Wired articles as “Cosmo the God,” Long Beach, Calif. resident Eric Taylor violated the terms of his 2011 parole, which forbade him from using the Internet until his 21st birthday. Taylor pleaded guilty in 2011 to multiple felonies, including credit card fraud, identity theft, bomb threats and online impersonation.

In nearly every case I’m aware of, these kids who think swatting is fun have serious problems at home, if indeed they have any meaningful parental oversight in their lives. It’s sad because with a bit of guidance and the right environment, some of these kids probably would make very good security professionals. Heck, Eric Taylor was even publicly thanked by Google for finding and reporting security vulnerabilities in the fourth quarter of 2012 (nevermind that this was technically after his no-computer probation kicked in).

Update, 2:42 p.m. ET: The FBI also has issued a press release about this arrest, although it too does not name the 16-year-old.

Krebs on Security: Tax Fraud Gang Targeted Healthcare Firms

This post was syndicated from: Krebs on Security and was written by: BrianKrebs. Original post: at Krebs on Security

Earlier this month, I wrote about an organized cybercrime gang that has been hacking into HR departments at organizations across the country and filing fraudulent tax refund requests with the IRS on employees of those victim firms. Today, we’ll look a bit closer at the activities of this crime gang, which appears to have targeted a large number of healthcare and senior living organizations that were all using the same third-party payroll and HR services provider.

taxfraudAs I wrote in the previous story, KrebsOnSecurity encountered a Web-based control panel that an organized criminal gang has been using to track bogus tax returns filed on behalf of employees at hacked companies whose HR departments had been relieved of W-2 forms for all employees.

Among the organizations listed in that panel were Plaintree Inc. and Griffin Faculty Practice Plan. Both entities are subsidiaries of Derby, Conn.-based Griffin Health Services Corp.

Steve Mordecai, director of human resources at Griffin Hospital, confirmed that a security breach at his organization had exposed the personal and tax data on “a limited number of employees for Griffin Health Services Corp. and Griffin Hospital.” Mordecai said the attackers obtained the information after stealing the organization’s credentials at a third-party payroll and HR management provider called UltiPro.

Mordecai said that the bad guys only managed to steal data on roughly four percent of the organization’s employees, but he declined to say how many employees the healthcare system currently has. An annual report (PDF) from 2009 states that Griffin Hospital alone had more than 1,384 employees.

Griffin employee tax records, as recorded in the fraudsters' Web-based control panel.

“Fortunately for us it was a limited number of employees who may have had their information breached or stolen,” Mordecai said. “There is a criminal investigation with the FBI that is ongoing, so I can’t say much more.”

The FBI did not return calls seeking comment. But according to Reuters, the FBI recently circulated a private notice to healthcare providers, warning that the “cybersecurity systems at many healthcare providers are lax compared to other sectors, making them vulnerable to attacks by hackers searching for Americans’ personal medical records and health insurance data.”

According to information in their Web-based control panel, the attackers responsible for hacking into Griffin also may have infiltrated an organization called Medical Career Center Inc., but that could not be independently confirmed.

This crime gang also appears to have targeted senior living facilities, including SL Bella Terra LLC, a subsidiary of Chicago-based Senior Lifestyle Corp, an assisted living firm that operates in seven states. Senior Lifestyle did not return calls seeking comment.

In addition, the attackers hit Swan Home Health LLC in Menomonee Falls, Wisc., a company that recently changed its name to Enlivant. Monica Lang, vice president of communications for Enlivant, said Swan Home Health is a subsidiary of Chicago-based Assisted Living Concepts Inc., an organization that owns and operates roughly 200 assisted living facilities in 20 states.

Swan Home Health employee's tax info, as recorded by the fraudsters.

ALC disclosed in March 2014 that a data breach in December 2013 had exposed the personal information on approximately 43,600 current and former employees. In its March disclosure, ALC said that its internal employee records were compromised after attackers stole login credentials to the company’s third-party payroll provider.

That disclosure didn’t name the third-party provider, but every victim organization I’ve spoken with that’s been targeted by this crime gang had outsourced their payroll and/or human resources operations to UltiPro.

Enlivant’s Lang confirmed that the company also relied on UltiPro, and that some employees have come forward to report attempts to file fraudulent tax refunds on their behalf with the IRS.

“We believe that [the attackers] accessed employee names, addresses, birthdays, Social Security numbers and pay information, which is plenty to get someone going from a tax fraud perspective,” Lang said in a telephone interview.

ULTIPRO & THE TWO-FACTOR SHUFFLE

I reached out to UltiPro to learn if they offered their customers any sort of two-factor authentication to beef up the security of their login process. Jody Kaminsky, senior vice president of marketing at UltiPro, confirmed that the company does in fact offer multi-factor authentication for its customers.

“We strongly encourage them to use it,” Kaminsky said. “We’d prefer not to provide any specific details about how it works that might assist or enable those who may attempt to break the law. Unfortunately, it does seem like this tax fraud scheme is pretty widespread and certainly not limited to our customers. We are aware of a few of our customers who have been impacted, however we can’t provide any further information due to confidentiality obligations.”

Kaminsky did not respond to questions about how long UltiPro has been offering the multi-factor solution. But information shared by an employee at a victim firm that has not been named in this series indicates that UltiPro only recently added multi-factor logins – after a large number of its customers had already been compromised by fraudsters who were plundering W-2 data to file fraudulent tax refunds.

A copy of a message sent by UltiPro to its customer base indicates that on Feb. 6, 2014 the company temporarily suspended access to individual employee W-2 records for all customers. That message reads, in part:

“On February 6, 2014, we applied an update to UltiPro to remove administrator/manager access to W-2s as a proactive measure in response to a specific new security threat we have been alerted to this tax season.  Across all industries and providers, there is increasing activity that targets payroll administrators and any user with access to multiple employee records, where malicious groups or individuals are attempting to obtain employee W-2 information for the purpose of committing tax fraud.  Any organization, regardless of payroll provider, is a potential target of this threat where the malicious group uses malware to spy on customer or end-user’s workstation to obtain user names and passwords. If obtained, any user name and password can then be used to log in, obtain employee W-2 information, and file that information for tax fraud purposes.”

“For this reason, we took immediate action to protect your employees by removing access for managers and administrators to individual employee W-2s. We will be releasing an UltiPro update restoring administrator and manager access with an additional level of authentication to validate the identity of the user. This UltiPro update is targeted for Sunday, February 9.”

On Feb. 9, UltiPro told customers that it was restoring access to employee W-2 information, and that managers and payroll administrators trying to access those records would be presented with an eight-digit “security access code request” — a text message delivered to a mobile phone designated by each user. That communication stated that each access code would be valid for only a short time before it expired, and that administrators could view employee W-2s for an hour before being logged out and forced to request another code.
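The expiring access-code behavior described above can be sketched in a few lines (a hypothetical illustration; UltiPro's actual code generation, validity window, and delivery mechanism are not public beyond what its notices state):

```python
# Hypothetical sketch of an expiring eight-digit access code, loosely
# modeling the behavior described in UltiPro's customer notice.
import secrets
import time

def issue_code(ttl_seconds=300):
    """Return an 8-digit numeric code and its expiry timestamp."""
    code = f"{secrets.randbelow(10**8):08d}"
    return code, time.time() + ttl_seconds

def verify_code(presented, code, expires_at, now=None):
    """Accept the code only if it matches and has not expired."""
    now = time.time() if now is None else now
    return presented == code and now < expires_at

code, expires = issue_code(ttl_seconds=300)
print(len(code))                                         # 8
print(verify_code(code, code, expires))                  # True
print(verify_code(code, code, expires, now=expires + 1))  # False: expired
```

In a real deployment the code would be delivered out of band (here, by text message) and compared with a constant-time check.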

UltiPro also told customers that it was instituting new changes to alert administrators via email about any users that modify their contact information:

“If a user changes a primary email address in UltiPro, he/she will receive an email to the previous email address communicating that a change occurred and to contact the administrator if the change was not initiated by him/her,” the company said in a Feb. 10 email to users. “If a user changes a secondary email address, he/she will receive an email to the primary email address with that same message.”
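The notification rule in that email can be modeled as a small piece of logic (names and structure here are illustrative, not UltiPro's actual code):

```python
# Illustrative sketch of the email-change notification rule described
# above: changes to the primary address alert the *previous* primary
# address; changes to the secondary alert the current primary.
def notify_on_email_change(user, field, new_address, send_mail):
    warning = ("Your email settings were changed; contact your "
               "administrator if this change was not initiated by you.")
    old_primary = user["primary"]
    if field == "primary":
        send_mail(old_primary, warning)   # tell the address being replaced
        user["primary"] = new_address
    elif field == "secondary":
        send_mail(old_primary, warning)   # tell the current primary
        user["secondary"] = new_address

sent = []
user = {"primary": "old@example.com", "secondary": "alt@example.com"}
notify_on_email_change(user, "primary", "new@example.com",
                       lambda to, msg: sent.append(to))
print(sent)              # ['old@example.com']
print(user["primary"])   # 'new@example.com'
```

The point of the design is that an attacker who hijacks an account and swaps in their own address still triggers a warning to the legitimate owner's old mailbox.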

It remains unclear why so many individuals in the healthcare industry have been targeted by tax fraud this year. Last week, I published a story showing that hundreds of physicians in numerous states were just discovering that they’d been victimized by tax fraud, although the pattern of fraud in those cases did not match the attacks against healthcare organizations detailed in this story. As I noted in that piece, the tax fraud committed against individual physicians this year was far more selective, and did not impact other employees at those healthcare organizations.

Earlier this month, the University of Pittsburgh Medical Center confirmed that a data breach thought to affect only a few dozen employees actually revealed the personal information of approximately 27,000 employees — including at least 788 who reported experiencing some form of tax fraud or bank accounts that were wiped clean as a result of the breach. It is unclear how the breach occurred, and UPMC has declined a request for an interview.

Errata Security: Fun with IDS funtime #3: heartbleed

This post was syndicated from: Errata Security and was written by: Robert Graham. Original post: at Errata Security

I don’t like the EmergingThreat rules, not so much because of the rules themselves but because of the mentality of the people who use them. I scan the entire Internet. When I get clueless abuse complaints, such as “stop scanning me but I won’t tell you my IP address”, they usually come from people using the EmergingThreat rules. This encourages me to evade those rules.

The latest example is the Heartbleed attack. Rules that detect the exploit trigger on the pattern |18 03| being the first bytes of the TCP packet payload. However, TCP is a streaming protocol: patterns can therefore appear anywhere in the payload, not just the first two bytes.
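The difference between a signature anchored at the start of each packet and a true stream search can be sketched as follows (a simplified model of plaintext byte matching, not a real IDS engine):

```python
# Sketch: a rule that only checks the first two payload bytes of each TCP
# packet misses the same bytes when they appear deeper in the stream.
HEARTBEAT_TLS = b"\x18\x03"  # TLS record type 0x18 (heartbeat), version 3.x

def naive_packet_match(packets):
    """Mimics a signature anchored at payload offset 0."""
    return any(p.startswith(HEARTBEAT_TLS) for p in packets)

def stream_match(packets):
    """Reassembles the byte stream first, then searches anywhere in it."""
    return HEARTBEAT_TLS in b"".join(packets)

# Attacker prepends a 7-byte Alert record, pushing the pattern deeper:
packets = [
    b"\x15\x03\x02\x00\x02\x02\x46"            # TLS Alert record
    + b"\x18\x03\x02\x00\x03\x01\x40\x00"      # Heartbeat record follows
]
print(naive_packet_match(packets))  # False: evaded
print(stream_match(packets))        # True: caught
```

A matcher that reassembles the stream and searches at every offset still finds the pattern; one anchored to packet boundaries does not.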

I therefore changed masscan to push that pattern deeper into the payload, and re-scanned the Internet. Sure enough, not a single user of EmergingThreats complained. The only complaints from IDS users came from IBM sensors with the “TLS_Heartbeat_Short_Request” signature, and from some other unknown IDS with a signature name of “Heartbleed_Scan”.

That the rules can so easily be evaded is an important consideration. Recently, the FBI and ICS-CERT published advisories, recommending IDS signatures to detect exploits of the bug. None of their recommended signatures can detect masscan. I doubt these organizations have the competency to understand why, so I thought I’d explain it in simple terms.

How TCP and Sockets work

Network software uses the “Sockets API”, specifically the “send()” function. Typically, the packets we see on the wire match the contents of individual send() calls. If code calls send() twice back-to-back…

    send(fd, “abc”, 3, 0);
    send(fd, “xyz”, 3, 0);

…then we’ll see two TCP packets, one containing a payload of “abc” and the other containing a payload of “xyz”.

The send() function will only transmit immediately like this when the receiver is keeping up with the transmitter. If the receiver is falling behind, then the send() function writes the data into a kernel buffer instead of transmitting, waiting for the receiver to catch back up. Multiple sends will be combined into the same buffer. In this example, in the case of slow connections, the kernel will combine “abc” and “xyz” into a single buffer “abcxyz”. Later, the kernel may transmit “a” in one packet (if the receiver continues to be slow) followed by “bcxyz”, or maybe “abcx” followed by “yz”, or the entire buffer at once with “abcxyz”. TCP makes sure these bytes are sent in order, but doesn’t guarantee how they might be fragmented.
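You can observe this boundary-erasing behavior with a short sketch (socketpair() stands in for a real TCP connection; actual fragmentation on the wire depends on timing, buffers, and the receiver's pace):

```python
# Sketch: two back-to-back send() calls need not arrive as two recv()
# calls. Stream sockets preserve byte order, not message boundaries.
import socket

a, b = socket.socketpair()
a.send(b"abc")
a.send(b"xyz")
a.close()          # close so the reader sees end-of-stream

received = b""
while True:
    chunk = b.recv(1024)
    if not chunk:  # empty read means the other side closed
        break
    received += chunk
b.close()

print(received)    # b'abcxyz': the boundary between the sends is gone
```

The receiver gets the six bytes in order, but nothing in the API tells it where one send() ended and the next began, which is exactly the property masscan exploits.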

Masscan exploits this feature of TCP. When transmitting packets to the server, it combines data so that the Heartbeat pattern occurs later in the packet. It then pretends to be a slow connection, causing the server to combine data in the responses. Thus, in neither direction do any packets start with the triggering pattern |18 03|.

Viewing the packets

You can see this in action within this packet-capture using Wireshark. Frame #10 is where masscan transmits the Heartbeat request. It combines this with an Alert request, thus pushing the |18 03| pattern seven bytes deeper into the packet. This evades detection when the packets are sent to the server.

In order to cause the response to evade detection, masscan must appear as a slow receiver. It can do this in a number of ways. In this example, it does this by advertising a small “window”. This is a parameter in TCP packets that tells the other side how many bytes are available in the receive buffers. This is shown in frame #6, where masscan claims to have a window size of only 600 bytes.
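From the ordinary Sockets API, the closest approximation to advertising a small window is shrinking the receive buffer, since the kernel derives the advertised window from it (masscan itself builds raw packets and writes the TCP window field directly, which sockets don't expose; this is only a rough stand-in):

```python
# Sketch: request a tiny receive buffer before connecting; the kernel
# bases the advertised TCP window on it (and may round the value up).
import socket

s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, 600)
actual = s.getsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF)
print(actual)  # kernel-dependent; Linux, for example, doubles the request
s.close()
```

The exact value the kernel grants varies by OS, but the effect is the same as in frame #6 of the capture: the peer is told it may only have a few hundred bytes in flight, so it must coalesce its writes.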

This causes the server side to buffer the send() requests in the kernel. Normally, we would see some separate packets at this point, such as “Server Certificate”, “Server Key Exchange”, and “Server Hello Done” messages, followed by the “Heartbeat” response. Because of the small window, the server has combined these all together and streamed them across the connection. In frame #16, we see the result, with the pattern |18 03| at offset 0x009F in the packet.

This trick is, by itself, detectable. Notice that Wireshark is unhappy with this, and helpfully prints the message [TCP Window Full] several times. An IDS might use the same logic to detect a problem. But there’s other ways to cause the same effect that an IDS might not detect. For example, masscan might choose to not acknowledge sent data, forcing data to build up in the remote buffers. That an IDS might trigger on one trick doesn’t mean there aren’t other easy tricks to evade detection of fixed-offset TCP payloads.

Verifying with Security Onion

The “Security Onion” is a Linux distro with many IDS packages built in. I installed the latest version and ran it, including running the “pulledpork” tool to download the latest signatures. I also copied the Fox-IT rules linked by the FBI into the “local.rules”.

To test that things were running correctly, I ran the “ssltest.py” script — one of the first tools used to test for the Heartbleed problem. This is what running the script looked like:

As expected, the IDS had no problem detecting the exploit. All the appropriate rules from the FBI/ICS-CERT guidance fired, including the “Snort Community Rules”, the “Emerging Threat” rules, and the “Fox-IT” rules.

I then ran masscan, as shown below. Nothing triggered on the IDS, as I expected. No other events fired either, such as a complaint about the “window” being full. The underlying Snort engine may have logic designed to detect things like small TCP windows, but if so, Security Onion didn't have it enabled.

I also ran my heartleech tool to automatically extract the private key from the target. In theory, it should be detectable. It uses the same trick to evade detection on the packets sent to the server, but doesn't have the same control over packets coming back from the server (heartleech goes through the Sockets API whereas masscan doesn't, so it can't control the “window”). It appears that the signatures, in order to avoid false positives, won't always trigger on a response unless they have also seen a request, thus allowing heartleech to evade detection sometimes. I could only get the Security Onion device to trigger about one out of every three times I ran heartleech.
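The inferred pairing behavior (alert on an oversized response only if a matching request was seen first) can be modeled as follows; this is an assumption about the signatures' internals, not any vendor's published logic:

```python
# Sketch of a stateful signature: only alert on a suspicious response
# if the corresponding request was also observed on the connection.
def stateful_alerts(events):
    saw_request = False
    alerts = 0
    for e in events:
        if e == "heartbeat_request":
            saw_request = True
        elif e == "oversized_response" and saw_request:
            alerts += 1
    return alerts

print(stateful_alerts(["oversized_response"]))                       # 0: evaded
print(stateful_alerts(["heartbeat_request", "oversized_response"]))  # 1: alert
```

A tool like heartleech that hides its request but not its response would slip past such a rule whenever the request half of the pair goes unrecognized.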

How IDSs can detect Heartbleed

The tricks masscan uses only evade “pattern-matching” signatures. They don't work against “protocol-analysis”. In the first picture above, you'll notice how Wireshark (a protocol-analyzer) isn't fooled by the evasions. It correctly decodes the SSL protocol and knows which bytes are the Heartbeat.

The “Bro” open-source IDS works off the same principle. With a recent upgrade, it decodes SSL and can detect the Heartbleed vulnerability, regardless of what masscan and heartleech do to the packets. Some commercial IDSs use protocol-analysis as well, such as the IBM sensor** that triggers the “TLS_Heartbeat_Short_Request” signature. Furthermore, it's likely that Snort will be upgraded in the near future to implement Heartbleed detection in its “SSL Preprocessor”, which similarly does protocol-analysis.

That doesn't mean any of these is completely correct, either. SSL can run on any port, but for performance reasons, preprocessors might be configured to examine traffic on a limited number of ports. This will miss many attacks. The following is an example attacking a server on port 444 [pcap]:

Another problem is that while protocol-analysis can't be evaded by simple TCP tricks, the person who wrote the signature may have written an incomplete one. The IBM protocol-analysis is able to measure the length of the request, but the threshold for triggering may be too low. They may have put a threshold of only 3 bytes for the request, which is the standard exploit, but I could easily increase that to 61 bytes to make an incoming HeartBEAT look indistinguishable from a HeartBLEED attempt.
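The core length check a protocol-analyzer can perform looks roughly like this (a simplified sketch: it parses only plaintext heartbeat records and ignores the padding requirement, so real detectors are stricter):

```python
# Sketch of protocol-analysis-style Heartbleed detection: flag a TLS
# heartbeat record whose *claimed* payload length exceeds what the
# record actually carries -- the core of the bug.
import struct

def heartbleed_suspect(record: bytes) -> bool:
    if len(record) < 8 or record[0] != 0x18:  # not a heartbeat record
        return False
    (rec_len,) = struct.unpack(">H", record[3:5])      # TLS record length
    hb_type = record[5]                                 # 1 = request
    (payload_len,) = struct.unpack(">H", record[6:8])  # claimed payload
    # 1 byte type + 2 bytes length + payload must fit inside the record
    return hb_type == 1 and 3 + payload_len > rec_len

# Classic exploit: record claims a 3-byte body but asks for 0x4000 bytes
evil = bytes([0x18, 0x03, 0x02, 0x00, 0x03, 0x01, 0x40, 0x00])
# Benign heartbeat: 25-byte body actually contains the 18-byte payload
benign = bytes([0x18, 0x03, 0x02, 0x00, 0x19, 0x01, 0x00, 0x12]) + b"\x00" * 22
print(heartbleed_suspect(evil))    # True
print(heartbleed_suspect(benign))  # False
```

Because this works on decoded field values rather than byte offsets, it is indifferent to how the bytes were split across packets; the signature writer's remaining job is choosing the right threshold, which is exactly where the incompleteness described above creeps in.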

Lastly, there is the problem of response time. IBM uses “state-machines” to decode protocols, and thus was able to release its protocol-analysis signature for this problem in roughly the same timeframe as others released pattern-matching signatures (April 9). Bro has a scripting language that likewise makes fast turnarounds easier, but there's no way for such fast updates to be automatically imported into a sensor. Snort and Suricata have said they are updating their SSL preprocessors, but as of April 28, the updates haven't been released yet.

Conclusion

Just because somebody has released a pattern-matching signature for the latest threat doesn't mean it's adequate. We have organizations like the FBI and ICS-CERT that blindly repeat such recommendations because they don't fully grasp how TCP works. The consequence is that if you followed the FBI/ICS-CERT recommendations, you didn't detect the masscan Heartbleed scan I ran last weekend.


Disclaimer: I created the technology in the IBM products. I’ve had zero involvement with them for the last 7 years, so I know little about what they’ve been doing lately, but the fact that they detected masscan implies they are still using the protocol-analysis state-machines that I put into the product those many years ago. By the way, masscan itself uses those same state-machines to parse responses from the server: if you want to see how they work, just read the “proto-ssl.c” source code in masscan.


Victor Julien, lead Suricata developer, detected my scan with the new protocol-decode added to Suricata:
  1. 04/26/2014-13:20:05.515184  [**] [1:2230012:1] SURICATA TLS overflow heartbeat encountered, possible exploit attempt (heartbleed) [**] [Classification: Generic Protocol Command Decode] [Priority: 3] {TCP} 209.126.230.74:20193 -> 192.168.47.27:443

Seth Hall, a Bro developer, detected my scan with their new protocol-decode logic:
1398506591.523781 CsXIjO1BWbvfZpbnha 209.126.230.74 17193 x.x.x.x 443 – — tcp Heartbleed::SSL_Heartbeat_Attack An TLS heartbleed attack was detected! Record length 3, payload length 16384 – 209.126.230.74 x.x.x.x 443 – worker1-10 Notice::ACTION_LOG 3600.000000 F – – – – -

Both these cases demonstrate the point that to correctly detect such threats, you need protocol-analysis, not pattern-matching.

TorrentFreak: Android Pirate Agrees To Work Undercover For the Feds

This post was syndicated from: TorrentFreak and was written by: Andy. Original post: at TorrentFreak

snappzIn 2012, three Android-focused websites were seized by the Department of Justice. With help from French and Dutch police, the FBI took over applanet.net, appbucket.net and snappzmarket.com, a trio of so-called ‘rogue’ app stores.

Carrying out several arrests, the authorities heralded the operation as the first of its kind, alongside claims that together the sites had facilitated the piracy of more than two million apps.

Last month the Department of Justice announced that two of the three admins of Appbucket had entered guilty pleas to charges of criminal copyright infringement and would be sentenced in June.

Yesterday the DoJ reported fresh news on the third defendant. Appbucket’s Thomas Pace, 38, of Oregon City, Oregon, pleaded guilty to one count of conspiracy to commit criminal copyright infringement and will be sentenced in July.

As reported in late March, the former operator of Applanet says he intends to fight the U.S. Government. However, the same definitely cannot be said about Kody Jon Peterson of Clermont, Florida.

The 22-year-old, who was involved in the operations of SnappzMarket, pleaded guilty this week to one count of conspiracy to commit criminal copyright infringement. He admitted being involved in the illegal copying and distribution of more than a million pirated Android apps with a retail value of $1.7 million. His sentencing date has not been set, but even when that’s over his debt to the government may still not be paid.

As part of his guilty plea, Peterson entered into a plea agreement in which he gave up his right to be tried by a jury and any right to an appeal. He also accepted that he could be jailed for up to five years, be subjected to supervised release of up to three years, be hit with a $250,000 fine, and have to pay restitution to the victims of his crimes.

spyPeterson also agreed to cooperate with the authorities in the investigation, including producing all relevant records and attending interviews when required. However, in addition to more standard types of cooperation, the 22-year-old also agreed to go much further. A copy of his plea agreement obtained by TF reveals that Peterson has agreed to work undercover for the Government.

“Upon request by the Government, the Defendant agrees to act in an undercover investigative capacity to the best of his ability,” the agreement reads.

“The Defendant agrees that Defendant will make himself available to the law enforcement agents designated by the Government, will fully comply with all reasonable instructions given by such agents, and will allow such agents to monitor and record conversations and other interactions with persons suspected of criminal activity.”

The plea agreement also notes that in order to facilitate this work, Government attorneys and agents are allowed to contact Peterson on no notice and communicate with him without his own attorney being present. The extent of Peterson’s cooperation will eventually be detailed to the sentencing court and if it is deemed to be “substantial” then the Government will file a motion to have his sentence reduced.

But despite the agreements, Peterson has another huge problem to face. According to court documents he is an immigrant to the United States and, as such, a guilty plea could see him removed from the country. Whether he will be allowed to stay will be the subject of a separate proceeding, but given his agreement to work undercover it seems unlikely the Government would immediately choose to eject such a valuable asset.

In the meantime, former associates and contacts of Peterson could potentially be talking online to him right now, with an FBI agent listening in over his shoulder and recording everything being said.

Source: TorrentFreak, for the latest info on copyright, file-sharing and anonymous VPN services.

Krebs on Security: Crimeware Helps File Fraudulent Tax Returns

This post was syndicated from: Krebs on Security and was written by: BrianKrebs. Original post: at Krebs on Security

Many companies believe that if they protect their intellectual property and customers’ information, they’ve done a decent job of safeguarding their crown jewels from attackers. But in an increasingly common scheme, cybercriminals are targeting the Human Resources departments at compromised organizations and rapidly filing fraudulent federal tax returns on all employees.

Last month, KrebsOnSecurity encountered a Web-based control panel that an organized criminal gang has been using to track bogus tax returns filed on behalf of employees at hacked companies whose HR departments had been relieved of W-2 forms for all employees.

An obfuscated look at the control panel for a tax fraud operation involving more than a half dozen victim organizations.

According to the control panel seen by this reporter, the scammers in charge of this scheme have hacked more than a half-dozen U.S. companies, filing fake tax returns on nearly every employee. At last count, this particular scam appears to stretch back to the beginning of this year’s tax filing season, and includes fraudulent returns filed on behalf of thousands of people — totaling more than $1 million in bogus returns.

The control panel includes a menu listing every employee’s W2 form, including all data needed to successfully file a return, such as the employee’s Social Security number, address, wages and employer identification number. Each fake return was apparently filed using the e-filing service provided by H&R Block, a major tax preparation and filing company. H&R Block did not return calls seeking comment for this story.

The “drops” page of this tax fraud operation lists the nicknames of the co-conspirators who agreed to “cash out” funds on the prepaid cards generated by the bogus returns — minus a small commission.

Fraudulent returns listed in the miscreants’ control panel that were successfully filed produced a specific five-digit tax filing Personal Identification Number (PIN), apparently generated by H&R Block’s online filing system. An examination of the panel suggests that successfully filed returns are routed to prepaid American Express cards, which are sent to addresses in the United States corresponding to specific “drops”: co-conspirators in the scheme who have agreed to receive the prepaid cards and “cash out” the balance, minus their fee for processing the bogus returns.

Alex Holden, chief information security officer at Hold Security, said although tax fraud is nothing new, automating the exploitation of human resource systems for mass tax fraud is an innovation.

“The depth of this specific operation permits them to act as a malicious middle-man and tax preparation company to be an unwitting ‘underwriter’ of this crime,” Holden said. “And the victims may be exploited not only for the 2013 tax year but also down the road, and perhaps be subject to higher scrutiny by the IRS — not to mention potential financial losses. Companies should look at their human resource infrastructure to ensure that payroll, taxes, financial, medical, and other benefits are afforded the same level of protection as their other mission-critical assets.”

ULTIPRO USERS TARGETED

I spoke at length with Doug, a 45-year-old tax fraud victim at a company that was listed in the attacker’s control panel. Doug agreed to talk about his experience if I omitted his last name and his employer’s name from this story. Doug confirmed that the information in the attacker’s tax fraud panel was his and mostly correct, but he said he didn’t recognize the Gmail address used to fraudulently submit his taxes at H&R Block.

Doug said his employer recently sent out a company-wide email stating there had been a security breach at a cloud provider that was subcontracted to handle the company’s employee benefits and payroll systems.

“Our company sent out a blanket email saying there had been a security breach that included employee names, addresses, Social Security numbers, and other information, and that they were going to pay for a free year’s worth of credit monitoring,” Doug said.

Almost a week after that notification, the company sent out a second notice stating that the breach extended to the personal information of all spouses and children of its employees.

“We were later notified that the breach was much deeper than originally suspected, which included all of our beneficiaries, their personal information, my life insurance policy, 401-K stuff, and our taxes,” Doug said. “My sister-in-law is an accountant, so I raced to her and asked her to help us file our taxes immediately. She pushed them through quickly but the IRS came back and said someone had already filed our taxes a few days before us.”

Doug has since spent many hours filling out countless forms with a variety of organizations, including the Federal Trade Commission, the FBI, the local police department, and of course the Internal Revenue Service.

Doug’s company, and another victim at a separate company whose employees were all listed as recent tax fraud victims in the attacker’s online control panel, both said their employers’ third-party cloud provider of payroll services was Weston, Fla.-based Ultimate Software. In each case, the attackers appear to have stolen the credentials of the victim organization’s human resources manager, credentials that were used to manage employee payroll and benefits at UltiPro, Ultimate Software’s online HR and payroll platform.

Jody Kaminsky, senior vice president of marketing at Ultimate Software, said the company has no indication of a compromise of Ultimate’s security. Instead, she said Doug’s employer appears to have had its credentials stolen and abused by this fraud operation.

“Although we are aware that several customers’ employees were victims of tax fraud, we have no reason to believe this unauthorized access was the result of a compromise of our own security,” Kaminsky said. “Rather, our investigation suggests this is the result of stolen login information on the end-user level and not our application.”

Kaminsky continued:

“Unfortunately incidents of tax fraud this tax season across the U.S. are increasing and do not appear to be limited to just our customers or any one company (as I’m sure you’re well aware due to your close coverage of this issue). Over the past several weeks, we have communicated multiple times with our customers about recent threats of tax fraud and identity theft schemes.”

“We believe through schemes such as phishing or malware on end-user computers, criminals are attempting to obtain system login information and use those logins to access employee data for tax fraud purposes. We take identity theft schemes extremely seriously. As tax season progresses, we have been encouraging our customers to take steps to protect their systems, such as enforcing frequent password resets and ensuring employee computers are up-to-date on anti-malware protection.”

PROTECT YOURSELF FROM TAX FRAUD

According to a 2013 report from the Treasury Inspector General’s office, the U.S. Internal Revenue Service (IRS) issued nearly $4 billion in bogus tax refunds in 2012. The money largely was sent to people who stole Social Security numbers and other information on U.S. citizens, and then filed fraudulent tax returns on those individuals claiming a large refund but at a different address.

It’s important to note that fraudsters engaged in this type of crime are in no way singling out H&R Block or UltiPro. Cybercrooks in charge of large collections of hacked computers can just as easily siphon usernames and passwords — as well as incomplete returns — from taxpayers who are preparing returns via other online filing services, including TurboTax and TaxSlayer.

If you become the victim of identity theft outside of the tax system or believe you may be at risk due to a lost/stolen purse or wallet, questionable credit card activity or credit report, etc., you are encouraged to contact the IRS at the Identity Protection Specialized Unit, toll-free at 1-800-908-4490 so that the IRS can take steps to further secure your account.

That process is likely to involve the use of taxpayer-specific PINs for people who have had issues with identity theft. If approved, the PIN is required on any tax return filed for that consumer before a return can be accepted. To start the process of applying for a tax return PIN from the IRS, check out the steps at this link. You will almost certainly need to file IRS Form 14039 (PDF), and provide scanned or photocopied records, such as a driver’s license or passport.

The most frightening aspect of this tax crimeware panel is that its designers appear to have licensed it for resale. It’s not clear how much this particular automated fraud machine costs, but sources in the financial industry tell this reporter that this same Web interface has been implicated in multiple tax return scams targeting dozens of companies in this year’s tax-filing season.

Krebs on Security: ‘Heartbleed’ Bug Exposes Passwords, Web Site Encryption Keys

This post was syndicated from: Krebs on Security and was written by: BrianKrebs. Original post: at Krebs on Security

Researchers have uncovered an extremely critical vulnerability in recent versions of OpenSSL, a technology that allows millions of Web sites to encrypt communications with visitors. Complicating matters further is the release of a simple exploit that can be used to steal usernames and passwords from vulnerable sites, as well as private keys that sites use to encrypt and decrypt sensitive data.

Credit: Heartbleed.com

Credit: Heartbleed.com

From Heartbleed.com:

“The Heartbleed bug allows anyone on the Internet to read the memory of the systems protected by the vulnerable versions of the OpenSSL software. This compromises the secret keys used to identify the service providers and to encrypt the traffic, the names and passwords of the users and the actual content. This allows attackers to eavesdrop communications, steal data directly from the services and users and to impersonate services and users.”

An advisory from Carnegie Mellon University’s CERT notes that the vulnerability is present in sites powered by OpenSSL versions 1.0.1 through 1.0.1f. According to Netcraft, a company that monitors the technology used by various Web sites, more than a half million sites are currently vulnerable. As of this morning, that included Yahoo.com, and — ironically — the Web site of openssl.org. This list at Github appears to be a relatively recent test for the presence of this vulnerability in the top 1,000 sites as indexed by Web-ranking firm Alexa.

An easy-to-use exploit that is being widely traded online allows an attacker to retrieve private memory of an application that uses the vulnerable OpenSSL “libssl” library in chunks of 64kb at a time. As CERT notes, an attacker can repeatedly leverage the vulnerability to retrieve as many 64k chunks of memory as are necessary to retrieve the intended secrets.

Jamie Blasco, director of AlienVault Labs, said this bug has “epic repercussions” because it not only exposes passwords and cryptographic keys; to ensure that attackers can’t use any data that was compromised through the flaw, affected providers also have to replace the private keys and certificates for every service that uses the OpenSSL library, after patching the vulnerable OpenSSL installation [full disclosure: AlienVault is an advertiser on this blog].

It is likely that a great many Internet users will be asked to change their passwords this week (I hope). Meantime, companies and organizations running vulnerable versions should upgrade to the latest version, OpenSSL 1.0.1g, as quickly as possible.

Update, 2:26 p.m.: It appears that this Github page allows visitors to test whether a site is vulnerable to this bug (hat tip to Sandro Süffert). For more on what you can do to protect yourself from this vulnerability, see this post.