
Errata Security: They are deadly serious about crypto backdoors

This post was syndicated from: Errata Security and was written by: Robert Graham. Original post: at Errata Security

Julian Sanchez (@normative) has an article questioning whether the FBI is serious about pushing crypto backdoors, or whether this is all a ploy pressuring companies like Apple to give them access. I think they are serious — deadly serious.

The reason they are only half-heartedly pushing backdoors at the moment is that they believe we, the opposition, aren’t serious about the issue. After all, the Fourth Amendment says that a “warrant of probable cause” gives law enforcement unlimited power to invade our privacy. Since the constitution is on their side, only irrelevant hippies could ever disagree. There is no serious opposition to the proposition. It’ll all work itself out in the FBI’s favor eventually. Among the fascist class of politicians, like the Dianne Feinsteins and Lindsey Grahams of the world, belief in this principle is rock solid. They have absolutely no doubt.

But the opposition is deadly serious. By “deadly” I mean this is an issue we are willing to take up arms over. If Congress were to pass a law outlawing strong crypto, I’d move to a non-extradition country, declare the revolution, and start working to bring down the government. If you think the “Anonymous” hackers were bad, you’ve seen nothing compared to what the tech community would do if encryption were outlawed.

On most policy questions, there are two sides to the debate, where reasonable people disagree. Crypto backdoors aren’t that type of policy question. To techies, banning strong crypto is what trying to ban guns would be to the NRA.

So the FBI trundles along, as if the opposition were hippies instead of ardent revolutionaries.

Eventually, though, things will come to a head where the FBI pushes forward. There will eventually be another major terrorist attack in the United States, and the terrorist will have been using encrypted communications. At that point, we are going to see the deadly seriousness of the FBI on the issue, and the deadly seriousness of the opposition. And by “deadly” I mean exactly that — violence and people getting killed.

Julian Sanchez is probably right that at this point, the FBI isn’t pushing too hard, and is willing to just pressure companies to get what they want (recovered messages from iCloud backups), and to give populist activists like the EFF easy wins (avoiding full backdoors) to take the pressure off. But in the long run, I believe this issue will become violent.

Krebs on Security: Sources: Security Firm Norse Corp. Imploding

This post was syndicated from: Krebs on Security and was written by: Brian Krebs. Original post: at Krebs on Security

Norse Corp., a Foster City, Calif.-based cybersecurity firm that has attracted much attention from the news media and investors alike this past year, fired its chief executive officer this week amid a major shakeup that could spell the end of the company. The move comes just weeks after the company laid off almost 30 percent of its staff.

Sources close to the matter say Norse CEO Sam Glines was asked to step down by the company’s board of directors, with board member Howard Bain stepping in as interim CEO. Those sources say the company’s investors have told employees that they can show up for work on Monday but that there is no guarantee they will get paid if they do.

A snapshot of Norse’s semi-live attack map.

Glines agreed earlier this month to an interview with KrebsOnSecurity but later canceled that engagement without explanation. Bain could not be immediately reached for comment.

Two sources at Norse said the company’s assets will be merged with Irvine, Calif.-based networking firm SolarFlare, which has some of the same investors and investment capital as Norse. Neither Norse nor SolarFlare would comment for this story.

The pink slips that Norse issued just after New Year’s Day may have come as a shock to many employees, but perhaps the layoffs shouldn’t have been much of a surprise: A careful review of previous ventures launched by the company’s founders reveals a pattern of failed businesses, reverse mergers, shell companies and product promises that missed the mark by miles.

EYE CANDY

In the tech-heavy, geek-speak world of cybersecurity, infographics and other eye candy are king because they promise to make complicated and boring subjects accessible and sexy. And Norse’s much-vaunted interactive attack map is indeed some serious eye candy: It purports to track the source and destination of countless Internet attacks in near real-time, and shows what appear to be multicolored fireballs continuously arcing across the globe.

Norse says the data that feeds its online attack map come from a network of more than eight million online “sensors” — honeypot systems that the company has strategically installed at Internet properties in 47 countries around the globe to attract and record malicious and suspicious Internet traffic.

According to the company’s marketing literature, Norse’s sensors are designed to mimic a broad range of computer systems. For example, they might pretend to be a Web server when an automated attack or bot scans the system looking for Web server vulnerabilities. In other cases, those sensors might watch for Internet attack traffic that would typically only be seen by very specific machines, such as devices that manage complex manufacturing systems, power plants or other industrial control systems.
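The sensor idea described above can be sketched in miniature as a service that pretends to be a web server and records whatever a scanner sends. This is an illustrative toy only; Norse’s actual sensor software is not public, and the function and its defaults here are hypothetical:

```python
import socket

def run_sensor(host="127.0.0.1", port=8080, max_probes=1):
    """Toy honeypot: accept probes, log them, reply with a fake banner."""
    log = []
    with socket.socket() as srv:
        srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        srv.bind((host, port))
        srv.listen()
        for _ in range(max_probes):
            conn, addr = srv.accept()
            with conn:
                probe = conn.recv(1024)        # record the suspicious traffic
                log.append((addr[0], probe))
                # pretend to be an ordinary Apache web server
                conn.sendall(b"HTTP/1.1 200 OK\r\nServer: Apache\r\n\r\n")
    return log
```

A real sensor network would mimic many protocols — industrial control systems included — and ship its logs to a central collector rather than returning them.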

Several departing and senior Norse employees said the company’s attack data was certainly voluminous enough to build a business upon — if not especially sophisticated or uncommon. But most of those interviewed said Norse’s top leadership didn’t appear to be interested in or capable of building a strong product behind the data. More worryingly, those same people said there are serious questions about the validity of the data that informs the company’s core product.

UP IN SMOKE(S)

Norse Corp. and its fundamental technology arose from the ashes of several companies that appear to have been launched and then acquired by shell companies owned by Norse’s top executives — principally the company’s founder and chief technology officer Tommy Stiansen. Stiansen did not respond to multiple requests for comment.

This acquisition process, known as a “reverse merger” or “reverse takeover,” involves the acquisition of a public company by a private company so that the private company can bypass the lengthy and complex process of going public.

Reverse mergers are completely legal, but they can be abused to hide the investors in a company and to conceal certain liabilities of the acquired company, such as pending lawsuits or debt. In 2011, the U.S. Securities and Exchange Commission (SEC) issued a bulletin cautioning investors about plunking down investments in reverse mergers, warning that they may be prone to fraud and other abuses.

The founders of Norse Corp. got their start in 1998 with a company called Cyco.net (pronounced “psycho”). According to a press release issued at the time, “Cyco.net was a New Mexico based firm established to develop a network of cyber companies.”

“This site is a lighthearted destination that will be like the ‘People Magazine’ of the Internet,” said Richard Urrea, Cyco’s CEO, in a bizarre explanation of the company’s intentions. “This format has proven itself by providing Time Warner with over a billion dollars of ad revenue annually. That, combined with the CYCO.NET’s e-commerce and various affiliations, such as Amazon.com, could amount to three times that figure. Not a portal like Yahoo, the CYCO.NET will serve as the launch pad to rocket the Internet surfer into the deepest reaches of cyberspace.”

In 2003, Cyco.net acquired Orion Security Services, a company founded by Stiansen, Norse’s current CTO and founder and the one Norse executive who is actually from Norway. Orion was billed as a firm that provides secure computer network management solutions, as well as video surveillance systems via satellite communications.

The Orion acquisition reportedly came with $20 million in financing from a private equity firm called Cornell Capital Partners LP, which listed itself as a Cayman Islands exempt limited partnership whose business address was in Jersey City, NJ.

Cornell later changed its name to Yorkville Advisors, an entity that became the subject of an investigation by the U.S. Securities and Exchange Commission (SEC) and a subsequent lawsuit in which the company was accused of reporting “false and inflated values.”

Despite claims that Cyco.net was poised to rocket into the “deepest reaches of cyberspace,” it somehow fell short of that destination and ended up selling cigarettes online instead. Perhaps inevitably, the company soon found itself the target of a lawsuit by several states led by the Washington state attorney general that accused the company of selling tobacco products to minors, failing to report cigarette sales and taxes, and falsely advertising cigarettes as tax-free.

COPYRIGHT COPS

In 2005, Cyco.net changed its name to Nexicon, but only after acquiring by stock swap another creation of Stiansen’s — Pluto Communications — a company formed in 2002 whose stated mission was to provide “operational billing solutions for telecom networks.” Again, Urrea would issue a press release charting a course for the company that would have almost no bearing on what it actually ended up doing.

“We are very excited that the transition from our old name and identity is now complete, and we can start to formally reposition our Company under the new brand name of Nexicon,” Urrea said. “After the divestiture of our former B2C company in 2003, we have laid the foundation for our new business model, offering all-in-one or issue-specific B2B management solutions for the billing, network control, and security industries.”

In June 2008, Sam Glines — who would one day become CEO of Norse Corp. — joined Nexicon and was later promoted to chief operating officer. By that time, Nexicon had morphed itself into an online copyright cop, marketing a technology they claimed could help detect and stop illegal file-sharing. The company’s “GetAmnesty” technology sent users a pop-up notice explaining that it was expensive to sue the user and even more expensive for the user to get sued. Recipients of these notices were advised to just click the button displayed and pay for the song and all would be forgiven.

In November 2008, Nexicon was acquired by Priviam, another shell company operated by Stiansen and Nexicon’s principals. Nexicon went on to sign Youtube.com and several entertainment studios as customers. But soon enough, reports began rolling in of rampant false positives — Internet users receiving threatening legal notices from Nexicon that they were illegally sharing files when they actually weren’t. Nexicon/Priviam’s business began drying up, and its stock price plummeted.

In September 2011, the Securities and Exchange Commission revoked the company’s ability to trade its penny stock (then NXCO on the pink sheets), noting that the company had failed to file any periodic reports with the SEC since its inception. In June 2012, the SEC also revoked Priviam’s ability to trade its stock, citing the same compliance failings that led to the de-listing of Nexicon.

By the time the SEC revoked Nexicon’s trading ability, the company’s founders were already working to reinvent themselves yet again. In August 2011, they raised $50,000 in seed money from Oak Investment Partners to jump-start Norse Corp. A year later, Norse received $3.5 million in debt refinancing, and in December 2013 got its first big infusion of cash — $10 million from Oak Investment Partners. In September 2015, KPMG invested $11.4 million in the company.

Several former employees say Stiansen’s penchant for creating shell corporations served him well in building out Norse’s global sensor network. Some of the sensors are in countries where U.S. assets are heavily monitored, such as China. Those same insiders said Norse’s network of shell corporations also helped the company gain visibility into attack traffic in countries where it is forbidden for U.S. firms to do business, such as Iran and Syria.

THE MAN BEHIND THE CURTAIN

By 2014, Norse was throwing lavish parties at top Internet security conferences and luring dozens of smart security experts away from other firms. Among them was Mary Landesman, formerly a senior security researcher at Cisco Systems. Landesman said Norse had recently hired many of her friends in the cybersecurity business and had developed such a buzz in the industry that she recruited her son to come work alongside her at the company.

As a senior data scientist at Norse, Landesman’s job was to discover useful and interesting patterns in the real-time attack data that drove the company’s “cyber threat intelligence” offerings (including its eye candy online attack map referenced at the beginning of this story). By this time, former employees say Norse’s systems were collecting a whopping 140 terabytes of Internet attack and traffic data per day. To put that in perspective, a single terabyte can hold approximately 1,000 copies of the Encyclopedia Britannica. The entire printed collection of the U.S. Library of Congress would take up about ten terabytes.

Landesman said she wasn’t actually given access to all that data until the fall of 2015 — seven months after being hired as Norse’s chief data scientist — and that when she got the chance to dig into it, she was disappointed: The information appeared to be little more than what one might glean from a Web server log — albeit millions of them around the world.

“The data isn’t great, and it’s pretty much the same thing as if you looked at Web server logs that had automated crawlers and scanning tools hitting it constantly,” Landesman said in an interview with KrebsOnSecurity. “But if you know how to look at it and bring in a bunch of third-party data and tools, the data is not without its merits, if not just based on the sheer size of it.”

Landesman and other current and former Norse employees said very few people at the company were permitted to see how Norse collected its sensor data, and that Norse founder Stiansen jealously guarded access to the back-end systems that gathered the information.

“With this latest round of layoffs, if Tommy got hit by a bus tomorrow I don’t think there would be a single person in the company left who understands how the whole thing works,” said one former employee at Norse who spoke on condition of anonymity.

SHOW ME THE DATA

Stuart McClure, president and founder of the cybersecurity firm Cylance, said he found out just how reluctant Stiansen could be to share Norse data when he visited Stiansen and the company’s offices in Northern California in late 2014. McClure said he went there to discuss collaborating with Norse on two upcoming reports: One examining Iran’s cyber warfare capabilities, and another about exactly who was responsible for the massive Nov. 2014 cyber attack on Sony Pictures Entertainment.

The FBI had already attributed the attack to North Korean hackers. But McClure was intrigued after Stiansen confidentially shared that Norse had reached a vastly different conclusion than the FBI: Norse had data suggesting the attack on Sony was the work of disgruntled former employees.

McClure said he recalls listening to Stiansen ramble on for hours about Norse’s suspicions and simultaneously dodging direct questions about how it had reached the conclusion that the Sony attack was an inside job.

“I just kept going back to them and said, ‘Tommy, show me the data.’ We wanted to work with them, but when they couldn’t or wouldn’t produce any data or facts to substantiate their work, we couldn’t proceed.”

After that experience, McClure said he decided not to work with Norse on either the Sony report or the Iran investigation. Cylance ended up releasing its own report on Iran’s cyber capabilities; that analysis — dubbed “Operation Cleaver” (PDF) — was later tacitly acknowledged in a confidential report by the FBI.

Conversely, Norse’s take on Iran’s cyber prowess (PDF) was trounced by critics as a deeply biased, headline-grabbing report. It came near the height of international negotiations over lifting nuclear sanctions against Iran, and Norse had teamed up with the American Enterprise Institute, a conservative think tank that has traditionally taken a hard line against threats or potential threats to the United States.

In its report, Norse said it saw a half-million attacks on industrial control systems by Iran in the previous 24 months — a 115 percent increase in attacks. But in a scathing analysis of Norse’s findings, critical infrastructure security expert Robert M. Lee said Norse’s claim that industrial control systems were being attacked, and its implication that the Iranian government was definitively responsible, was disingenuous at best. Lee said he obtained an advance copy of an earlier version of the report that was shared with unclassified government and private industry channels, and that the data in the report simply did not support its conclusions.

“The systems in question are fake systems….and the data obtained cannot be accurately used for attribution,” Lee wrote of Norse’s sensor network. “In essence, Norse identified scans from Iranian Internet locations against fake systems and announced them as attacks on industrial control systems by a foreign government. The Norse report’s claims of attacks on industrial control systems is wrong. The data is misleading. The attention it gained is damaging. And even though a real threat is identified it is done in a way that only damages national cybersecurity.”

FROM SMOKES TO SMOKE & MIRRORS?

KrebsOnSecurity interviewed almost a dozen current and former employees at Norse, as well as several outside investors who said they considered buying the firm. None but Landesman would speak on the record. Most said Norse’s data — the core of its offering — was solid, if prematurely marketed as a way to help banks and others detect and deflect cyber attacks.

“I think they just went to market with this a couple of years too soon,” said one former Norse employee who left on his own a few months prior to the January 2016 layoffs, in part because of concerns about the validity of the data that the company was using to justify some of its public threat reports. “It wasn’t all there, and I worried that they were finding what they wanted to find in the data. If you think about the network they built, that’s a lot of power.”

On Jan. 4, 2016, Landesman learned she and roughly two dozen other colleagues at Norse were being let go. The data scientist said she vetted Norse’s founders prior to joining the firm, but that it wasn’t until she was fired at the beginning of 2016 that she started doing deeper research into the company’s founders.

“I realized that, oh crap, I think this is a scam,” Landesman said. “They’re trying to draw this out and tap into whatever the buzzwords du jour there are, and have a product that’s going to meet that and suck in new investors.”

Calls to Norse investor KPMG International went unreturned. An outside PR firm for KPMG listed on the press release about the original $11.4 million funding for Norse referred my inquiry to a woman running an outside PR firm for Norse, who declined to talk on the record because she said she wasn’t sure whether her firm was still representing the tech company.

“These shell companies formed by [the company’s founders] bilked investors,” Landesman said. “Had anyone gone and investigated any of these partnerships they were espousing as being the next big thing, they would have realized this was all smoke and mirrors.”

Krebs on Security: Firm Sues Cyber Insurer Over $480K Loss

This post was syndicated from: Krebs on Security and was written by: Brian Krebs. Original post: at Krebs on Security

A Texas manufacturing firm is suing its cyber insurance provider for refusing to cover a $480,000 loss following an email scam that impersonated the firm’s chief executive.

At issue is a cyber insurance policy issued to Houston-based Ameriforge Group Inc. (doing business as “AFGlobal Corp.“) by Federal Insurance Co., a division of insurance giant Chubb Group. AFGlobal maintains that the policy it held provided coverage for both computer fraud and funds transfer fraud, but that the insurer nevertheless denied a claim filed in May 2014 after scammers impersonating AFGlobal’s CEO convinced the company’s accountant to wire $480,000 to a bank in China.

According to documents filed with the U.S. District Court in Harris County, Texas, the policy covered up to $3 million, with a $100,000 deductible. The documents indicate that from May 21, 2014 to May 27, 2014, AFGlobal’s director of accounting received a series of emails from someone claiming to be Gean Stalcup, the CEO of AFGlobal.

“Glen, I have assigned you to manage file T521,” the phony message to the accounting director Glen Wurm allegedly read. “This is a strictly confidential financial operation, to which takes priority over other tasks. Have you already been contacted by Steven Shapiro (attorney from KPMG)? This is very sensitive, so please only communicate with me through this email, in order for us not to infringe SEC regulations. Please do no speak with anyone by email or phone regarding this. Regards, Gean Stalcup.”

Roughly 30 minutes later, Mr. Wurm said he was contacted via phone and email by Mr. Shapiro stating that due diligence fees associated with the China acquisition in the amount of $480,000 were needed. AFGlobal claims a Mr. Shapiro followed up via email with wiring instructions.

After wiring the funds as requested — sending the funds to an account at the Agricultural Bank of China — Mr. Wurm said he received no further correspondence from the imposter until May 27, 2014, when the imposter acknowledged receipt of the $480,000 and asked Wurm to wire an additional $18 million. Wurm said he became suspicious after that request, and alerted the officers of the company to his suspicions.

According to the plaintiff, “the imposter seemed to know the normal procedures of the company and also that Gean Stalcup had a long-standing, very personal and familiar relationship with Mr. Wurm — sufficient enough that Mr. Wurm would not question a request from the CEO.”

The company said it attempted to recover the $480,000 wire from its bank, but that the money was already gone by the 27th, with the imposters zeroing out and closing the recipient account shortly after the transfer was completed on May 21.

In a letter sent by Chubb to the plaintiff, the insurance firm said it was denying the claim because the scam, known alternatively as “business email compromise” (BEC) and CEO fraud, did not involve the forgery of a financial instrument as required by the policy.

“Federal disagrees with your contention that forgery coverage is implicated by this matter,” the insurer wrote in an Oct. 9, 2014 letter to AFGlobal. “Your August 12 letter asserts that ‘[t]he Forgery by a Third Party in this incident was of a financial instrument.’ Federal is unaware of any authority to support your position that the email you reference qualifies as a Financial Instrument (as that term is defined in the Policy).”

According to Chubb, to be a financial instrument, the subject email must be a check, draft, or a similar written promise, order or direction to pay a sum certain in money that is made, drawn by or drawn upon an Organization or by anyone acting as an Organization’s agent, or that is purported to have been so made or drawn.

“Your August 12 letter appears to argue that ‘[t]he email constituted an order or direction to pay’ because Mr. Shapiro’s May 21, 2014 email contained wire transfer instructions as to where the funds (apparently discussed in a separate phone conversation between ‘Mr. Shapiro’ and Mr. Wurm) were to be sent,” the insurance firm told AFGlobal. “This argument ignores the fact that what defines a Financial Instrument under the Policy is not merely the existence of a written promise, order or direction to pay, but a written promise, order or direction to pay that is ‘similar’ to a ‘check’ or ‘draft.’”

The insurer continued:

“In the context of a commercial crime policy, ‘checks’ and ‘drafts’ are widely understood to be types of negotiable instruments. They represent unconditional written orders or promises to pay a fixed amount of money on demand, or at a definite time, to a payee or bearer, and they can be transferred outside of the maker or drawer’s control. The email at issue in this matter — which is not negotiable — is in no way similar to these types of instruments.”

Chubb’s claim in this case and its definition of a financial instrument would seem to be dated enough that they also might discount transfers from e-checks or deposits scanned and sent over the phone — although the documents in this case do not touch on those instruments. Chubb’s definitions of what constitutes a financial instrument are laid out in this document (PDF).

The complaint lodged by AFGlobal is here (PDF).  The insurance company’s response is here.

Law360 notes that this is actually the second time in the past year that Chubb Corp. unit Federal Insurance was taken to court over coverage after its policyholder was fraudulently swindled out of money.

“Research technology company Medidata Solutions Inc. sued Federal in February for denying reimbursement of $4.8 million after a company employee, also contacted by a fake CEO and fake attorney, instructed him to also wire the money to a Chinese bank,” wrote Steven Trader for Law360. “Though Medidata argued that the imposter changed the email code to alter the sender’s address and include the CEO’s forged signature, thereby constituting a “fraudulent” change in data that triggered coverage, Federal fought back in New York federal court that its policy only covered hacking, not voluntary transfers of money.”

BEC or CEO Fraud schemes are an increasingly common and costly form of cybercrime. According to the FBI, thieves stole nearly $750 million in such scams from more than 7,000 victim companies in the U.S. between October 2013 and August 2015.

CEO fraud usually begins with the thieves either phishing an executive and gaining access to that individual’s inbox, or emailing employees from a look-alike domain name that is one or two letters off from the target company’s true domain name. For example, if the target company’s domain was “example.com” the thieves might register “examp1e.com” (substituting the letter “L” for the numeral 1) or “example.co,” and send messages from that domain.

In these cases, the fraudsters will forge the sender’s email address displayed to the recipient, so that the email appears to be coming from example.com. In all cases, however, the “reply-to” address is the spoofed domain (e.g. examp1e.com), ensuring that any replies are sent to the fraudster.

On the surface, business email compromise scams may seem unsophisticated relative to moneymaking schemes that involve complex malicious software, such as Dyre and ZeuS. But in many ways, the BEC attack is more versatile and adept at sidestepping basic security strategies used by banks and their customers to minimize risks associated with account takeovers. In traditional phishing scams, the attackers interact with the victim’s bank directly, but in the BEC scam the crooks trick the victim into doing that for them.

The FBI urges businesses to adopt two-step or two-factor authentication for email, where available, and/or to establish other communication channels — such as telephone calls — to verify significant transactions. Businesses are also advised to exercise restraint when publishing information about employee activities on their Web sites or through social media.

TorrentFreak: FBI Investigates Hollywood Ties to Pirated ‘Hateful Eight’ Screener

This post was syndicated from: TorrentFreak and was written by: Ernesto. Original post: at TorrentFreak

Over the past several days more than a dozen high quality screeners of Hollywood films have appeared online, including The Hateful Eight, The Revenant and Steve Jobs.

Screeners are advance copies of recent movies, which are generally sent out to critics and awards voters. These high quality releases are subject to intense security precautions by the studios, as they are highly sought after by online pirates.

This year there appears to have been a serious breach in the security process, and Hollywood has brought in the FBI to uncover where it occurred.

THR now reports that a watermark on the leaked copy of Tarantino’s The Hateful Eight points to Andrew Kosove, the co-CEO of production-finance company Alcon Entertainment.

The screener that was intended for Kosove was reportedly signed off by an office assistant at the company. The Hollywood executive, however, says he never received the copy.

“I’ve never seen this DVD. It’s never touched my hands. We’re going to do more than cooperate with the FBI. We’re going to conduct our own investigation to find out what happened,” Kosove told THR.

The screener eventually ended up online where it was released by the P2P-group Hive-CM8. The copy of The Hateful Eight is not the only leak to originate from this group, but it’s unknown whether any of the other releases are also linked to Alcon Entertainment’s co-CEO.

The Hollywood company is cooperating with the FBI and the film’s distributor The Weinstein Company to find out what exactly happened. Kosove hopes that the feds can help to get to the bottom of the matter.

“At the moment, nobody knows anything, but I promise we will find out. And I am praying that it had nothing to do with anyone at our beloved company,” he told Deadline.

Every year more than a dozen screeners leak online, often with direct ties to entertainment industry insiders.

Last year a pirated copy of The Secret Life of Walter Mitty was linked to Ellen DeGeneres, and Howard Stern’s name was also connected to a Super 8 screener.

At the time of writing The Hateful Eight has been shared more than a million times through various unauthorized channels. The film is set to premiere in the U.S. on Christmas Day.


Errata Security: Where do bitcoins go when you die? (sci-fi)

This post was syndicated from: Errata Security and was written by: Robert Graham. Original post: at Errata Security

A cyberpunk writer asks this, so I thought I’d answer it:

Note that it’s asked in a legal framework, about “wills” and “heirs”, but law isn’t the concern. Instead, the question is:

What happens to the bitcoins if you don’t pass on the wallet and password?

Presumably, your heirs will inherit your computer, and if they scan it, they’ll find your bitcoin wallet. But the wallet is encrypted, and the password is usually not written down anywhere, but memorized by the owner. Without the password, they can do nothing with the wallet.

Now, they could “crack” the password. Half the population will choose easy-to-remember passwords, which means that anybody can crack them. Many, though, will choose complex passwords that essentially mean nobody can crack them.
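A sketch of why the easy half falls quickly: a wordlist attack needs only a handful of guesses. This is illustrative only — the scheme below assumes the heirs extracted a bare SHA-256 hash of the password, whereas real wallet files use slow key-derivation functions precisely to frustrate this:

```python
import hashlib

# A tiny stand-in for the multi-million-entry wordlists crackers use.
WORDLIST = ["123456", "password", "letmein", "hunter2", "qwerty"]

def crack(target_hash):
    """Try every wordlist entry against the hash; return None on failure."""
    for guess in WORDLIST:
        if hashlib.sha256(guess.encode()).hexdigest() == target_hash:
            return guess
    return None  # a strong passphrase never shows up in any wordlist

weak = hashlib.sha256(b"letmein").hexdigest()
print(crack(weak))      # "letmein" -- recovered after just three guesses
strong = hashlib.sha256(b"vexing-quasar-marmalade-41").hexdigest()
print(crack(strong))    # None -- not in the list, so it survives
```

The asymmetry is the whole point: a password drawn from human habit is in somebody’s wordlist; a long random passphrase is not.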

As a science-fiction writer, you might make up a new technology for cracking passwords. For example, “quantum computers” are becoming real, scary fast. But here’s the thing: any technology that makes it easy to crack this password also makes it easy to crack all of bitcoin to begin with.

But let’s go back a moment and look at how bitcoin precisely works. Sci-fi writers imagine future currency as something exchanged between two devices: I hold my phone up to yours, and some data passes between them. The “coins” are data that exist on one device and then flow to another device.

This actually doesn’t work, because of the “double spending” problem. Unlike real coins, data can be copied. Any data I have on a device that I give to you, I can also keep, and then spend a second time to give to somebody else.

The solution is a ledger. When my phone squirts coins to your phone, both our phones contact the bank and inform it of the transfer. The bank then debits my account and credits yours. That's also how your credit card works with "chip and PIN": a small computer on the card verifies the transaction, and your bank records that transaction in a ledger, debiting your account.
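The ledger idea is small enough to sketch in code. This is a toy illustration only: the bank object, account names, and balances are invented, and real Bitcoin replaces the trusted bank with a public blockchain and digital signatures.

```python
# Toy ledger: one authoritative record of balances prevents double spending.
class Ledger:
    def __init__(self, balances):
        self.balances = dict(balances)  # account name -> coin count

    def transfer(self, sender, receiver, amount):
        # The debit happens in one authoritative place, so spending the
        # same coins a second time is rejected.
        if self.balances.get(sender, 0) < amount:
            raise ValueError("insufficient funds: double spend rejected")
        self.balances[sender] -= amount
        self.balances[receiver] = self.balances.get(receiver, 0) + amount

ledger = Ledger({"alice": 5, "bob": 0, "carol": 0})
ledger.transfer("alice", "bob", 5)        # first spend succeeds
try:
    ledger.transfer("alice", "carol", 5)  # same coins again: rejected
except ValueError as e:
    print(e)  # insufficient funds: double spend rejected
```

Copying the coin data to a second recipient simply fails, because the ledger, not the device, is the source of truth.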

Bitcoin is simply that ledger, but without banks. It’s a public ledger, known as the blockchain.

The point is that you don’t have any bitcoins yourself. Instead, there is an entry in the public-ledger/blockchain that says you have bitcoins.

What’s in a bitcoin wallet is not any bitcoins, but the secret crypto keys that control the associated entries in the public ledger. Only the person with the private key can add a transaction to the public-ledger/blockchain reassigning those bitcoins to somebody else. Such a private key looks something like:

E9873D79C6D87DC0FB6A5778633389F4453213303DA61F20BD67FC233AA33262

Without this key, the associated entries in the blockchain become stale. There’s no way to create new entries passing bitcoins to somebody else. If somebody dies without passing this key to somebody else, then the bitcoins essentially die with them.

In theory, somebody can memorize their private key, but in practice, nobody does. Instead, they put it into a file, and then encrypt the file with a password that's more easily memorized. For example, they might use as their password the first line of text from Neuromancer. It's long and hard to guess, yet something that is easily memorized or, if forgotten, easily recovered. In other words, the password (or passphrase in this case) to encrypt the file containing the private key might be:

The sky above the port was the color of television, tuned to a dead channel.

So now our deceased has to pass on both the wallet file and the password that will decrypt the wallet. Presumably, though, the deceased’s heirs will find the computer and the wallet, so practically the only problem becomes cracking the password.
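As a rough sketch of the mechanics (assuming PBKDF2 as the key-derivation function; actual wallet formats vary, and the AES step that encrypts the key on disk is omitted here), the passphrase is stretched into an encryption key like this:

```python
import hashlib, os

# Turning the passphrase into a key is made deliberately slow, which
# hinders guessing. Salt size and iteration count are illustrative choices.
passphrase = b"The sky above the port was the color of television, tuned to a dead channel."
salt = os.urandom(16)   # stored in the clear alongside the wallet file
iterations = 200_000

key = hashlib.pbkdf2_hmac("sha256", passphrase, salt, iterations)
# 'key' (32 bytes) would then encrypt the wallet's private key on disk.
# Re-entering the exact passphrase (with the stored salt) reproduces it:
assert key == hashlib.pbkdf2_hmac("sha256", passphrase, salt, iterations)
```

Anyone who types the passphrase derives the same key and can decrypt the wallet; without it, they must guess.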

Cracking is an exponential problem. The trope in sci-fi is to wave aside this problem and “reroute the encryptions”, and instantly decrypt such things, but in the real world, it’s a lot harder. Passwords become exponentially harder to crack the longer they are.

The classic story here is that of a knave who plays chess with a king. The king tells his opponent that he can have anything he wants within reason should he win. The knave chooses this as his prize: one grain of rice for the first square, two for the second, four grains for the third square, and so on, doubling each time for all 64 squares on the chessboard. The king, thinking this to be a minor amount, agrees. When the knave wins, the king finds he cannot pay off the winnings — because of exponential growth.

The first ten squares have the following number of rice grains:

1 2 4 8 16 32 64 128 256 512

This is about a thousand grains of rice in total (1,023, to be precise). Using 'k' to mean 'a thousand' (kilo-grains), the next 10 squares look like this:

1k 2k 4k 8k 16k 32k 64k 128k 256k 512k

This is about a million grains of rice. Using ‘m’ to mean ‘a million’ (mega-grains of rice), the next 10 squares look like this:

1m 2m 4m 8m 16m 32m 64m 128m 256m 512m

This is about a billion grains of rice. The next 10 squares take us to a trillion grains of rice, and we are only 40 squares out of 64.

As the Wikipedia article discusses, filling the chessboard requires a heap of rice larger than Mt. Everest, or a thousand years at the current rate of growing rice.
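The chessboard numbers are easy to verify directly, since square n holds 2**(n-1) grains:

```python
# One entry per square; doubling means square n holds 2**(n-1) grains.
grains = [2**n for n in range(64)]

print(grains[:10])       # [1, 2, 4, 8, 16, 32, 64, 128, 256, 512]
print(sum(grains[:10]))  # 1023 -- about a thousand
print(sum(grains))       # 18446744073709551615, i.e. 2**64 - 1
```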

One ending of this story is that the knave gets the king's daughter in marriage and half the kingdom. In the other version of this story, the king beheads the knave for his impudence.

The same applies to password cracking. Short passwords are easily cracked. Because of exponential growth, long passwords become impossible to crack, even at sci-fi levels of imagined technology. If such a magic technology existed, then it would defeat the underlying cryptography of the blockchain as well — if you could crack the password encrypting the key, you could just crack the key. If you could do that, then you could steal everyone's bitcoins, not just the deceased's.
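A back-of-the-envelope sketch makes the exponential shape concrete. The attacker speed (a trillion guesses per second) and the 62-character alphabet are made-up assumptions here; the exact figures matter less than how fast they blow up.

```python
# Time to exhaust the keyspace of a random password of a given length.
GUESSES_PER_SEC = 10**12   # assumed sci-fi-fast attacker
ALPHABET = 62              # a-z, A-Z, 0-9

for length in (8, 12, 16, 24):
    keyspace = ALPHABET ** length
    years = keyspace / GUESSES_PER_SEC / (3600 * 24 * 365)
    print(f"{length:2d} chars: ~{years:.3g} years to exhaust")
# 8 characters fall in minutes; 16 already take billions of years.
```

Each extra character multiplies the work by 62, just as each chessboard square doubles the rice.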

In the above example, the sci-fi writer in question imagines an artificial intelligence that, in order to make money, tracks down dead people and harvests all the bitcoins they haven’t passed on. This can’t be done by harvesting the blockchain — it’d need the private keys.

One way this might happen is for the AI to own a company that recycles computers. Before recycling, it automatically scans them for such files. While it can't break the encryption normally, some large percentage of people choose weak passwords. Also, the AI might know some tricks that make it smarter at figuring out how people choose passwords. It still won't crack everything, but even cracking half the possible coins would lead to a good amount of income.
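That harvesting pass amounts to an ordinary wordlist attack. Everything below is invented for illustration (the fixed salt, the tiny wordlist, the weak "hunter2" choice); the point is that guessable passphrases fall to a list, while a long passphrase appears in none.

```python
import hashlib

def derive(passphrase, salt, iterations=100_000):
    # Same slow key derivation a wallet would use to check a passphrase.
    return hashlib.pbkdf2_hmac("sha256", passphrase.encode(), salt, iterations)

salt = b"\x00" * 16               # pretend this was read from the wallet file
stored = derive("hunter2", salt)  # the deceased chose a weak passphrase

wordlist = ["password", "letmein", "hunter2", "qwerty"]
cracked = next((w for w in wordlist if derive(w, salt) == stored), None)
print(cracked)  # hunter2
```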

Or, let’s tackle this problem from another angle, a legal angle. One of the hot topics these days is something known as “crypto backdoors”. The police claim (erroneously in my opinion) that such unbreakable encryption prevents them from investigating some crimes, because even when they have a warrant to get computers, phones, and files, they can’t possibly decrypt them. Thus, they claim, technology needs a “backdoor” that only the police can access with a warrant.

In its simplest form, this is technically easy. Indeed, it's often a feature for corporations, so that they can get at encrypted files and messages when employees leave the firm, or more often, when stupid employees forget their passwords but need the IT department to recover their data.

In a practical form, it's unreasonable, because it means outlawing any software that doesn't have a backdoor. Since crypto is just math, and software is something anybody can write, this means a drastic police-state measure. But if you are a cyberpunk writer imagining future dystopias, well then, this would be perfectly reasonable.

Thus, in this case, the police, using their secret backdoor key, would be able to decrypt the wallet, and recover any secret key.

But then at the same time, the police could in theory impose this rule on the blockchain itself. Instead of simply trusting a single person's key, it can trust multiple keys, so that any of them can transfer bitcoins to somebody else. One of those keys could be a secret backdoor key held by the police, so they could step in and grab bitcoins any time they want.

This would, of course, largely defeat the purpose of the bitcoin blockchain, because now there is central control. But things can go halfway. Bitcoin is transnational, so it really can't be controlled by even a dystopic government, which is why it's currently popular in places like Russia. However, a government can still force its own citizens to backdoor their transactions with that country's public backdoor key (which matches a secret police key). Thus, the American police would be able to grab bitcoins from any law-abiding American who chooses to sign their transactions with the FBI's key.

The point I'm making here is that if you are a sci-fi writer, while a naive approach to the topic might not yield a good answer, some thinking and discussion with a bunch of people might produce something fruitful.

Schneier on Security: Tracking Someone Using LifeLock

This post was syndicated from: Schneier on Security and was written by: schneier. Original post: at Schneier on Security

Someone opened a LifeLock account in his ex-wife’s name, and used the service to track her bank accounts, credit cards, and other financial activities.

The article is mostly about how appalling LifeLock was about this, but I’m more interested in the surveillance possibilities. Certainly the FBI can use LifeLock to surveil people with a warrant. The FBI/NSA can also collect the financial data of every LifeLock customer with a National Security Letter. But it’s interesting how easy it was for an individual to open an account for another individual.

TorrentFreak: MPAA ‘Softens’ Movie Theater Anti-Piracy Policy, Drops Bounty

This post was syndicated from: TorrentFreak and was written by: Ernesto. Original post: at TorrentFreak

The MPAA sees illegally recorded movies as one of the biggest piracy threats and goes to extremes to stop them.

During pre-release screenings and premieres, for example, employees are often equipped with night-vision goggles and other spy tech to closely monitor moviegoers.

In some cases members of the public have been instructed to hand over all recording-capable devices, including phones and Google Glass.

Through these measures the MPAA hopes to prevent pirates from camcording movies or recording audio in theaters. The underlying policy is drafted in cooperation with the National Association of Theatre Owners (NATO), and a few days ago the most recent version was released.

At first sight not much has changed. The MPAA still recommends that theater owners keep an eye on suspicious moviegoers while prohibiting the use of any recording devices, including phones.

“Preventative measures should include asking patrons to silence and put away their phones and requiring they turn off and stow all other devices capable of recording, including wearable technology capable of recording.

“If individuals fail or refuse to put any recording device away, managers—per your theater’s policy — can ask them to leave,” the recommendation reads.

There are several subtle changes throughout the document, though, especially regarding the involvement of police. Previously, theater employees were encouraged to detain suspect visitors and hand them over to the authorities.

This is explicitly stated in the following snippet taken from the 2014 version of the best practices.

“Theater managers should immediately alert law enforcement authorities whenever they have clear indications that prohibited activity is taking place—the proper authorities will determine what laws may have been violated and what enforcement action should be taken.”

In the new document, however, it’s no longer a requirement to call the police. Instead, this is now optional.

“Theater managers have the option to immediately alert law enforcement authorities whenever they have clear indications that prohibited activity is taking place or managers can stop the activity without law enforcement assistance.”

Similar changes were made throughout the document. Even reporting incidents to the MPAA no longer appears to be mandatory, which it still was according to last year’s text.

“After your theater manager has contacted the police, your theater manager should immediately call the MPAA 24/7 Anti-Camcording Hot Line to report the incident.”

The language above has now been changed to a less urgent option of simply reporting incidents, should a theater manager deem it appropriate.

“Your theater manager can also call the MPAA 24/7 Anti-Camcording Hot Line to report the incident.”

Aside from the softer tone there’s another significant change to the best practices. The $500 “reward” movie theater employees could get for catching pirates is no longer mentioned.

The old Take Action Award mention

In fact, the entire “take action award” program appears to have been discontinued. The NATO page where it was listed now returns a 404 error and the details on FightFilmTheft have been removed as well.

This stands in stark contrast to the UK where the rewards for a similar program were doubled just a few weeks ago, with officials describing it as a great success.

The question that remains unanswered is why the MPAA and NATO have implemented these changes. Could it be that there were too many false positives being reported to the police, or is there an image problem perhaps?

In recent years several questionable police referrals resulted in a media backlash. A 19-year-old girl was arrested for recording a 20 second clip from the movie “Transformers,” which she wanted to show to her brother, for example.

And just last year the FBI dragged a man from a movie theater in Columbus, Ohio, after theater staff presumed his wearing of Google Glass was a sign that he was engaged in camcorder piracy.

Meanwhile, reports of real pirates being apprehended in a similar fashion have been notable by their absence.

Best Practices to Prevent Film Theft


TorrentFreak: Megaupload Programmer Already Freed From U.S. Prison

This post was syndicated from: TorrentFreak and was written by: Andy. Original post: at TorrentFreak

Acting on a lead from the entertainment industry, in January 2012 the U.S. Government shut down Megaupload.

To date, much of their efforts have been focused on extraditing Kim Dotcom and his former colleagues from New Zealand to the United States but earlier this year it became apparent that they’d already snared an important piece of the puzzle.

Working under key Mega figure Mathias Ortmann, Andrus Nomm was a Megaupload programmer who reportedly earned a little over $3,200 per month.

In common with his former colleagues, Nomm was cited in the Megaupload indictment, meaning that the FBI wanted the Estonian in the United States to face criminal charges. With few funds at his disposal to put up a Dotcom-like fight, Nomm flew from the Netherlands and handed himself over to U.S. authorities after three years.

In February the 36-year-old was arrested and followed through on the deal he'd promised to cut with U.S. authorities. Just days later the Department of Justice announced that Nomm had pleaded guilty to criminal copyright infringement, and he was sentenced to 366 days in prison.

Dotcom slammed the development.

“An innocent coder pleads guilty after 3 years of DOJ abuse, with no end in sight, in order to move on with his life,” Dotcom said. “I have nothing but compassion and understanding for Andrus Nomm and I hope he will soon be reunited with his son.”

This week it appears Dotcom’s wishes came true. According to NZHerald, after serving just nine months in prison, Nomm’s name appeared on a list of prisoners due to be released this week.

However, the Estonian's release will be bittersweet since, according to the same report, Nomm's evidence is already being used against Dotcom, including at his recently concluded extradition hearing.

The details will not be made public until Judge Nevin Dawson hands down his decision, but it's believed that Nomm has stated on the record that Dotcom and his former colleagues knowingly profited from copyright infringement.

Nevertheless, Dotcom still feels that Nomm pleaded guilty to a crime he didn’t commit.

“One year in jail was his way out. I don’t blame him. I can understand why Andrus did it. But it doesn’t change the fact that he is innocent,” Dotcom told the Herald.

Underlining his point, Dotcom points to a video recorded by Nomm just three months after the raid and uploaded to YouTube after Nomm signed the plea deal.

“Andrus made it clear in his documentary interview that he had done nothing wrong,” Dotcom said.

Although three years in limbo and a year in jail will have had a considerable impact on Nomm’s life, his deal with the U.S. now means that he can get on with his life. The same cannot be said of Dotcom and his former colleagues.

Nomm pleaded guilty to two counts of conspiracy to commit copyright infringement, charges that Dotcom and his former colleagues continue to deny. The U.S. also dropped the money laundering and racketeering charges against the Estonian – the same is unlikely to happen in Dotcom's case. However, Nomm still has a "money judgment" of US$175m to contend with, a not inconsiderable amount that he will presumably never pay.

The conviction of Nomm is a considerable feather in the cap of U.S. authorities who indicate that Nomm has given them much more evidence than has been revealed thus far. Only time will tell how valuable that will prove.


Krebs on Security: Paris Terror Attacks Stoke Encryption Debate

This post was syndicated from: Krebs on Security and was written by: BrianKrebs. Original post: at Krebs on Security

U.S. state and federal law enforcement officials appear poised to tap into public concern over the terror attacks in France last week to garner support for proposals that would fundamentally weaken the security of encryption technology used by U.S. corporations and citizens. Here’s a closer look at what’s going on, and why readers should be tuned in and asking questions.

Despite early and widely repeated media reports that the terrorists who killed at least 128 people in Paris used strong encryption to disguise their communications, the evidence of this has failed to materialize. An initial report on Nov. 14 from Forbes titled "Why the Paris ISIS Terrorists Used PlayStation4 to Plan Attacks" was later backpedaled to "How Paris ISIS Terrorists May Have Used PlayStation 4 to Discuss and Plan." Turns out there was actually nothing to indicate the attackers used gaming consoles to hide their communications; only that they could do that if they wanted to.

Politico ran a piece on Sunday that quoted a Belgian government official saying French authorities had confiscated at least one PlayStation 4 gaming console from one of the attacker’s belongings (hat tip to Insidesources.com).

“It’s unclear if the suspects in the attacks used PlayStation as a means of communication,” the Politico story explained. “But the sophistication of the attacks raises questions about the ability of law enforcement to detect plots as extremists use new and different forms of technology to elude investigators.”

Also on Sunday, The New York Times published a story that included this bit:

“The attackers are believed to have communicated using encryption technology, according to European officials who had been briefed on the investigation but were not authorized to speak publicly. It was not clear whether the encryption was part of widely used communications tools, like WhatsApp, which the authorities have a hard time monitoring, or something more elaborate. Intelligence officials have been pressing for more leeway to counter the growing use of encryption.”

After heavy criticism of the story on Twitter, The Times later removed the story from the site (it is archived here). That paragraph was softened into the following text, which was included in a different Times story later in the day: “European officials said they believed the Paris attackers had used some kind of encrypted communication, but offered no evidence.” To its credit, the Times today published a more detailed look at the encryption debate.

The media may be unwittingly playing into the hands of folks that former NBC reporter Bob Sullivan lovingly calls the “anti-encryption opportunists,” i.e., those who support weakening data encryption standards to make it easier for law enforcement officials to lawfully monitor people suspected of terrorist activity.

The directors of the FBI, Central Intelligence Agency and National Security Agency have repeatedly warned Congress and the technology community that they're facing a yawning intelligence gap from smartphone and Internet communication technologies that use encryption which investigators cannot crack — even after being granted the authority to do so by the U.S. courts.

For its part, the Obama administration has reportedly backed down in its bitter dispute with Silicon Valley over the encryption of data on iPhones and other digital devices.

“While the administration said it would continue to try to persuade companies like Apple and Google to assist in criminal and national security investigations, it determined that the government should not force them to breach the security of their products,” wrote Nicole Perlroth and David Sanger for The New York Times in October. “In essence, investigators will have to hope they find other ways to get what they need, from data stored in the cloud in unencrypted form or transmitted over phone lines, which are covered by a law that affects telecommunications providers but not the technology giants.”

But this hasn't stopped proponents of weakening encryption from identifying opportunities to advance their cause. In a memo obtained in August by The Washington Post, Robert Litt, a lawyer in the Office of the Director of National Intelligence, wrote that the public support for weakening encryption "could turn in the event of a terrorist attack or criminal event where strong encryption can be shown to have hindered law enforcement."

To that apparent end, law enforcement officials from Manhattan and the City of London are expected on Wednesday to release a “white paper on smartphone encryption,” during an annual financial crimes and cybersecurity symposium at The Federal Reserve Bank of New York. A media notice (PDF) about the event was sent out by Manhattan District Attorney Cyrus R. Vance Jr., one of the speakers at the event and a vocal proponent of building special access for law enforcement into encrypted communications. Here’s Vance in a recent New York Times op-ed on the need for the expanded surveillance powers.

Critics say any plans designed to build in secret “backdoors” that allow court-ordered access to encrypted communications ultimately would backfire once those backdoors were discovered by crooks and nation states. In her column titled “After Paris Attacks, Here’s What the CIA Director Gets Wrong About Encryption,” Wired.com’s Kim Zetter examines security holes in the arguments for weakening encryption.

The aforementioned Bob Sullivan reminds us that weakening domestic encryption laws would simply ensure that the criminals we wish to monitor use non-US encryption technology:

“For starters, U.S. firms that sell products using encryption would create backdoors, if forced by law.  But products created outside the U.S.?  They’d create backdoors only if their governments required it.  You see where I’m going. There will be no global master key law that all corporations adhere to.  By now I’m sure you’ve realized that such laws would only work to the extent that they are obeyed.  Plenty of companies would create rogue encryption products, now that the market for them would explode.  And of course, terrorists are hard at work creating their own encryption schemes.”

“There’s also the problem of existing products, created before such a law. These have no backdoors and could still be used. You might think of this as the genie out of the bottle problem, which is real. It’s very,  very hard to undo a technological advance.”

“Meanwhile, creation of backdoors would make us all less safe. Would you trust governments to store and protect such a master key? Managing defense of such a universal secret-killer is the stuff of movie plots. No, the master key would most likely get out, or the backdoor would be hacked. That would mean illegal actors would still have encryption that worked, but the rest of us would not. We would be fighting with one hand behind our backs.”

“In the end, it’s a familiar argument: disabling encryption would only stop people from using it legally. Criminals and terrorists would still use it illegally.”

Where do you come down on this debate, dear readers? Are you taking advantage of the kinds of technologies and services — like Signal, Telegram and Wickr — that use encryption the government says it can’t crack? Sound off in the comments below.

Schneier on Security: Did Carnegie Mellon Attack Tor for the FBI?

This post was syndicated from: Schneier on Security and was written by: schneier. Original post: at Schneier on Security

There’s pretty strong evidence that the team of researchers from Carnegie Mellon University who canceled their scheduled 2015 Black Hat talk deanonymized Tor users for the FBI.

Details are in this Vice story and this Wired story (and these two follow-on Vice stories). And here's the reaction from the Tor Project (https://blog.torproject.org/blog/did-fbi-pay-university-attack-tor-users).

Nicholas Weaver guessed this back in January.

The behavior of the researchers is reprehensible, but the real issue is that CERT Coordination Center (CERT/CC) has lost its credibility as an honest broker. The researchers discovered this vulnerability and submitted it to CERT. Neither the researchers nor CERT disclosed this vulnerability to the Tor Project. Instead, the researchers apparently used this vulnerability to deanonymize a large number of hidden service visitors and provide the information to the FBI.

Does anyone still trust CERT to behave in the Internet’s best interests?

LWN.net: Did the FBI Pay a University to Attack Tor Users? (Tor blog)

This post was syndicated from: LWN.net and was written by: jake. Original post: at LWN.net

The Tor blog is carrying a post from interim executive director Roger Dingledine that accuses Carnegie Mellon University (CMU) of accepting $1 million from the FBI to de-anonymize Tor users.
“There is no indication yet that they had a warrant or any institutional oversight by Carnegie Mellon’s Institutional Review Board. We think it’s unlikely they could have gotten a valid warrant for CMU’s attack as conducted, since it was not narrowly tailored to target criminals or criminal activity, but instead appears to have indiscriminately targeted many users at once.

Such action is a violation of our trust and basic guidelines for ethical research. We strongly support independent research on our software and network, but this attack crosses the crucial line between research and endangering innocent users.” Cryptographer Matthew Green has also weighed in (among others, including Forbes and Ars Technica): “If CMU really did conduct Tor de-anonymization research for the benefit of the FBI, the people they identified were allegedly not doing the nicest things. It’s hard to feel particularly sympathetic.

Except for one small detail: there’s no reason to believe that the defendants were the only people affected.”

Krebs on Security: Ransomware Now Gunning for Your Web Sites

This post was syndicated from: Krebs on Security and was written by: BrianKrebs. Original post: at Krebs on Security

One of the more common and destructive computer crimes to emerge over the past few years involves ransomware — malicious code that quietly scrambles all of the infected user's documents and files with very strong encryption. A ransom, to be paid in Bitcoin, is demanded in exchange for a key to unlock the files. Well, now it appears fraudsters are developing ransomware that does the same but for Web sites — essentially holding the site's files, pages and images for ransom.

Image: Kaspersky Lab

This latest criminal innovation, innocuously dubbed “Linux.Encoder.1” by Russian antivirus and security firm Dr.Web, targets sites powered by the Linux operating system. The file currently has almost zero detection when scrutinized by antivirus products at Google’s Virustotal.com, a free tool for scanning suspicious files against dozens of popular antivirus products.

Typically, the malware is injected into Web sites via known vulnerabilities in site plugins or third-party software — such as shopping cart programs. Once on a host machine, the malware will encrypt all of the files in the "home" directories on the system, as well as backup directories and most of the system folders typically associated with Web site files, images, pages, code libraries and scripts.

The ransomware problem is costly, hugely disruptive, and growing. In June, the FBI said it received 992 CryptoWall-related complaints in the preceding year, with losses totaling more than $18 million. And that's just from those victims who reported the crimes to the U.S. government; a huge percentage of cybercrimes never get reported at all.

ONE RECENT VICTIM

On Nov. 4, the Linux Web site ransomware infected a server used by professional Web site designer Daniel Macadar. The ransom message was inside a plain text file called "instructions to decrypt" that was included in every file directory with encrypted files:

“To obtain the private key and php script for this computer, which will automatically decrypt files, you need to pay 1 bitcoin(s) (~420 USD),” the warning read. “Without this key, you will never be able to get your original files back.”

Macadar said the malware struck a development Web server of his that also hosted Web sites for a couple of longtime friends. Macadar was behind on backing up the site and the server, and the attack had rendered those sites unusable. He said he had little choice but to pay the ransom. But it took him some time before he was able to figure out how to open and fund a Bitcoin account.

“I didn’t have any Bitcoins at that point, and I was never planning to do anything with Bitcoin in my life,” he said.

According to Macadar, the instructions worked as described, and about three hours later his server was fully decrypted. However, not everything worked the way it should have.

“There’s a  decryption script that puts the data back, but somehow it ate some characters in a few files, adding like a comma or an extra space or something to the files,” he said.

Macadar said he hired Thomas Raef — owner of Web site security service WeWatchYourWebsite.com — to help secure his server after the attack, and to figure out how the attackers got in. Raef told me his customer's site was infected via an unpatched vulnerability in Magento, a shopping cart software that many Web sites use to handle ecommerce payments.

CheckPoint detailed this vulnerability back in April 2015 and Magento issued a fix, yet many smaller ecommerce sites fall behind on critical updates for third-party applications like shopping cart software. Also, there are likely other exploits published recently that can expose a Linux host and any associated Web services to attackers and to site-based ransomware.

INNOVATIONS FROM THE UNDERGROUND

This new Linux Encoder malware is just one of several recent innovations in ransomware. As described by Romanian security firm Bitdefender, the latest version of the CryptoWall crimeware package (yes, it is actually named CryptoWall 4.0) displays a redesigned ransom message and now also encrypts the names of files along with each file's data. Each encrypted file has a name made up of random numbers and letters.

Traditional ransomware attacks also are getting more expensive, at least for new threats that currently are focusing on European (not American) banks. According to security education firm KnowBe4, a new ransomware attack targeting Windows computers starts as a “normal” ransomware infection, encrypting both local and network files and throwing up a ransom note for 2.5 Bitcoin (currently almost USD $1,000). Here’s the kicker: In the ransom note that pops up on the victim’s screen, the attackers claim that if they are not paid, they will publish the files on the Internet.

Crim-innovations.

Well, that's one way of getting your files back. This is the reality that dawns on countless people for the first time each day: Fail to securely back up your files — whether on your computer or Web site — and the bad guys may eventually back them up for you!

Oh, the backup won’t be secure, and you probably won’t be able to remove the information from the Internet if they follow through with such threats.

The tools for securely backing up computers, Web sites, data, and even entire hard drives have never been more affordable and ubiquitous. So there is zero excuse for not developing and sticking with a good backup strategy, whether you’re a home user or a Web site administrator.

PC World last year published a decent guide for Windows users who wish to take advantage of the OS’s built-in backup capabilities. I’ve personally used Acronis and Macrium products, and find both do a good job making it easy to back up your rig. The main thing is to get into a habit of doing regular backups.

There are good guides all over the Internet showing users how to securely back up Linux systems (here’s one). Other tutorials are more OS-specific. For example, here’s a sensible backup approach for Debian servers. I’d like to hear from readers about their backup strategies — what works — particularly from those who maintain Linux-based Web servers like Apache and Nginx.
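Whatever tool you pick, the core of any server backup strategy is the same: take regular, timestamped snapshots and rotate out the old ones. Here is a minimal, hedged sketch of that idea in Python — the function name, paths, and retention count are illustrative, not taken from any of the guides linked above, and a production setup would add off-site copies and restore testing:

```python
import tarfile
import time
import pathlib


def backup(src: str, dest_dir: str, keep: int = 7) -> pathlib.Path:
    """Archive src into a timestamped tar.gz under dest_dir, pruning old copies."""
    dest = pathlib.Path(dest_dir)
    dest.mkdir(parents=True, exist_ok=True)
    stamp = time.strftime("%Y%m%d-%H%M%S")
    archive = dest / f"backup-{stamp}.tar.gz"
    with tarfile.open(archive, "w:gz") as tar:
        # Store the tree under its top-level directory name inside the archive.
        tar.add(src, arcname=pathlib.Path(src).name)
    # Keep only the newest `keep` archives so the backup medium never fills up.
    for old in sorted(dest.glob("backup-*.tar.gz"))[:-keep]:
        old.unlink()
    return archive
```

A cron job calling something like `backup("/var/www/html", "/mnt/backup")` nightly captures the spirit of the approach; the important habit is that the rotation and the schedule run without a human remembering to do it.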

It’s worth noting that the malware requires the compromised user account on the Linux system to be an administrator; operating Web servers and Web services as administrator is generally considered poor security form, and threats like this one just reinforce why.

Also, most ransomware will search for other network or file shares that are attached or networked to the infected machine. If it can access those files, it will attempt to encrypt them as well. It’s a good idea to leave your backup medium disconnected from the system except when you are actively backing up or restoring files.

For his part, Macadar said he is rebuilding the compromised server and now backing up his server in two places at once — using local, on-site backup drives as well as remote (web-based) backup services.

TorrentFreak: YTS Reaches MPAA Deal But Dotcom Faces Decades in Jail?

This post was syndicated from: TorrentFreak and was written by: Andy. Original post: at TorrentFreak

After reporting on thousands of file-sharing related stories around the globe for almost ten years, the folks here at TF have a ‘feel’ for how certain scenarios play out. With that in mind, something doesn’t feel right with the ongoing drama involving YTS / YIFY.

When sites as big as YTS get taken down by the MPAA, RIAA, or their partners around the world, these organizations usually order their PR departments to repeatedly bash the big button marked “CONGRATULATIONS TO US”. Yet for weeks following the YTS shutdown there was complete silence.

Details of the multi-million dollar lawsuit supposedly filed in New Zealand are nowhere to be found either. And if one was expecting the usual “Shut down by ICE/FBI/DELTA FORCE” banner to appear on YTS.to instead of the usual YIFY movie rips, then there’s only disappointment there too.

Ok, the MPAA have this week admitted they’re behind the shutdown, but the way it’s being handled is extremely puzzling. The announcement from MPAA chief Chris Dodd was muted to say the least and the somewhat compulsory gloating at having taken down one of the world’s most important piracy sites is almost non-existent.

This is odd for a number of reasons, not least when one considers the nature and scale of the operation. YIFY / YTS released as many as five thousand copies of mainstream movies onto the Internet. Between them they were shared tens of millions of times, at least. Over the past decade those kinds of numbers – and a lot less – have seen people jailed for up to five years in the United States and elsewhere.

Yet according to credible sources the operator of YTS – a 21-year-old who for unknown reasons isn’t even being named – has already settled his beef with the MPAA. This, despite running a site that has been repeatedly listed as a worldwide notorious market in the USTR’s Special 301 Report.

Of course, the operator of YTS isn’t in the United States, he’s in New Zealand, but geographical boundaries are rarely an issue for Hollywood. Take the drama surrounding Kim Dotcom and his former site Megaupload, for example.

Like the operator of YTS, Dotcom also lives in New Zealand. Importantly, it’s never been claimed that Dotcom uploaded anything illegal to the Internet (let alone thousands of movies) yet he was subjected to a commando-like raid on his home by dozens of armed police. He’s also facing extradition to the United States, where he could spend decades in jail.

Now, think of the flamboyant Dotcom what you will. Then feel relieved for the admin of YTS, who by many accounts is a thoroughly nice guy and has somehow managed to save his own skin, despite providing much of the content for global phenomenon Popcorn Time.

But then try to get a handle on how differently these two people are being treated after allegedly committing roughly the same offenses in exactly the same country. One case is still dragging on after almost four years, with tens of millions spent on lawyers and no end in sight. The other was a done deal inside four weeks.

Earlier this week TorrentFreak spoke with Kim Dotcom who told us he’d been following the YTS story in the media. Intrigued, we wanted to know – how does it feel to be raked over the coals for close to four years, have all your property seized, face extradition and decades in jail, while someone just up the road can walk away relatively unscathed from what would’ve been a slam-dunk case for the MPAA?

“It’s a double standard isn’t it?” Dotcom told TF.

“I think our case has chilled law enforcement and Hollywood against pursuing the criminal route in cases such as this. Quick civil settlements seem to be the new way to go.”

Dotcom may well be right and the fact that New Zealand already has a massive headache because of his case may well have been a factor in the decision not to make a huge example of the YTS operator. At the moment no one is talking though, and it’s entirely possible that no one ever will.

That makes a case like this all the more unsettling. Are we witnessing Hollywood’s ability to switch on a massive overseas law enforcement response in one case and then reel in the United States government in another? It’s worth saying again – YTS was a ‘notorious market’ in the eyes of the USTR, yet apparently that can be dealt with privately these days.

But with all that being said, it is quite possible that the U.S. government has learned lessons from its heavy-handed actions in 2012 and doesn’t want to repeat them, least of all in New Zealand, a country whose judges must be growing tired of the Dotcom debacle.

“As the DOJ admitted the Megaupload case is a test case. The test isn’t going well for them,” Dotcom concludes.

And for that the guy behind YTS must be thanking his lucky stars.

Source: TorrentFreak, for the latest info on copyright, file-sharing, torrent sites and ANONYMOUS VPN services.

Schneier on Security: The Rise of Political Doxing

This post was syndicated from: Schneier on Security and was written by: schneier. Original post: at Schneier on Security

Last week, CIA director John O. Brennan became the latest victim of what’s become a popular way to embarrass and harass people on the Internet. A hacker allegedly broke into his AOL account and published e-mails and documents found inside, many of them personal and sensitive.

It’s called doxing — sometimes doxxing — from the word “documents.” It emerged in the 1990s as a hacker revenge tactic, and has since been used as a tool to harass and intimidate people on the Internet. Someone would threaten a woman with physical harm, or try to incite others to harm her, and publish her personal information as a way of saying “I know a lot about you — like where you live and work.” Victims of doxing talk about the fear that this tactic instills. It’s very effective, by which I mean that it’s horrible.

Brennan’s doxing was slightly different. Here, the attacker had a more political motive. He wasn’t out to intimidate Brennan; he simply wanted to embarrass him. His personal papers were dumped indiscriminately, fodder for an eager press. This doxing was a political act, and we’re seeing this kind of thing more and more.

Last year, the government of North Korea allegedly did this to Sony. Hackers the FBI believes were working for North Korea broke into the company’s networks, stole a huge amount of corporate data, and published it. This included unreleased movies, financial information, company plans, and personal e-mails. The reputational damage to the company was enormous; the company estimated the cost at $41 million.

In July, hackers stole and published sensitive documents from the cyberweapons arms manufacturer Hacking Team. That same month, different hackers did the same thing to the infidelity website Ashley Madison. In 2014, hackers broke into the iCloud accounts of over 100 celebrities and published personal photographs, most containing some nudity. In 2013, Edward Snowden doxed the NSA.

These aren’t the first instances of politically motivated doxing, but there’s a clear trend. As people realize what an effective attack this can be, and how an individual can use the tactic to do considerable damage to powerful people and institutions, we’re going to see a lot more of it.

On the Internet, attack is easier than defense. We’re living in a world where a sufficiently skilled and motivated attacker will circumvent network security. Even worse, most Internet security assumes it needs to defend against an opportunistic attacker who will attack the weakest network in order to get­ — for example­ — a pile of credit card numbers. The notion of a targeted attacker, who wants Sony or Ashley Madison or John Brennan because of what they stand for, is still new. And it’s even harder to defend against.

What this means is that we’re going to see more political doxing in the future, against both people and institutions. It’s going to be a factor in elections. It’s going to be a factor in anti-corporate activism. More people will find their personal information exposed to the world: politicians, corporate executives, celebrities, divisive and outspoken individuals.

Of course they won’t all be doxed, but some of them will. Some of them will be doxed directly, like Brennan. Some of them will be inadvertent victims of a doxing attack aimed at a company where their information is stored, like those celebrities with iCloud accounts and every customer of Ashley Madison. Regardless of the method, lots of people will have to face the publication of personal correspondence, documents, and information they would rather be private.

In the end, doxing is a tactic that the powerless can effectively use against the powerful. It can be used for whistleblowing. It can be used as a vehicle for social change. And it can be used to embarrass, harass, and intimidate. Its popularity will rise and fall on this effectiveness, especially in a world where prosecuting the doxers is so difficult.

There’s no good solution for this right now. We all have the right to privacy, and we should be free from doxing. But we’re not, and those of us who are in the public eye have no choice but to rethink our online data shadows.

This essay previously appeared on Vice Motherboard.

Darknet - The Darkside: FBI Recommends Crypto Ransomware Victims Just Pay

This post was syndicated from: Darknet - The Darkside and was written by: Darknet. Original post: at Darknet - The Darkside

Crypto ransomware is a type of malware that holds you ransom by encrypting your files and has been around for a while, but the FBI recently said at a cyber security summit that they advise companies that fall victim just to pay. Such malware tends to use pretty strong encryption algorithms like RSA-2048, which you […]

The post FBI Recommends…

Read the full post at darknet.org.uk

Krebs on Security: Cybersecurity Information (Over)Sharing Act?

This post was syndicated from: Krebs on Security and was written by: BrianKrebs. Original post: at Krebs on Security

The U.S. Senate is preparing to vote on cybersecurity legislation that proponents say is sorely needed to better help companies and the government share information about the latest Internet threats. Critics of the bill and its many proposed amendments charge that it will do little, if anything, to address the very real problem of flawed cybersecurity while creating conditions that are ripe for privacy abuses. What follows is a breakdown of the arguments on both sides, and a personal analysis that seeks to add some important context to the debate.

Up for consideration by the full Senate this week is the Cybersecurity Information Sharing Act (CISA), a bill designed to shield companies from private lawsuits and antitrust laws if they seek help or cooperate with one another to fight cybercrime. The Wall Street Journal and The Washington Post each recently published editorials in support of the bill.

“The idea behind the legislation is simple: Let private businesses share information with each other, and with the government, to better fight an escalating and constantly evolving cyber threat,” the WSJ said in an editorial published today (paywall). “This shared data might be the footprint of hackers that the government has seen but private companies haven’t. Or it might include more advanced technology that private companies have developed as a defense.”

“Since hackers can strike fast, real-time cooperation is essential,” the WSJ continued. “A crucial provision would shield companies from private lawsuits and antitrust laws if they seek help or cooperate with one another. Democrats had long resisted this legal safe harbor at the behest of plaintiffs lawyers who view corporate victims of cyber attack as another source of plunder.”

The Post’s editorial dismisses “alarmist claims [that] have been made by privacy advocates who describe it as a ‘surveillance’ bill”:

“The notion that there is a binary choice between privacy and security is false. We need both privacy protection and cybersecurity, and the Senate legislation is one step toward breaking the logjam on security,” the Post concluded. “Sponsors have added privacy protections that would scrub out personal information before it is shared. They have made the legislation voluntary, so if companies are really concerned, they can stay away. A broad coalition of business groups, including the U.S. Chamber of Commerce, has backed the legislation, saying that cybertheft and disruption are ‘advancing in scope and complexity.’”

But critics of CISA say the devil is in the details, or rather in the raft of amendments that may be added to the bill before it’s passed. The Center for Democracy & Technology (CDT), a nonprofit technology policy group based in Washington, D.C., has published a comprehensive breakdown of the proposed amendments and their potential impacts.

CDT says despite some changes made to assuage privacy concerns, neither CISA as written nor any of its many proposed amendments address the fundamental weaknesses of the legislation. According to CDT, “the bill requires that any Internet user information volunteered by a company to the Department of Homeland Security for cybersecurity purposes be shared immediately with the National Security Agency (NSA), other elements of the Intelligence Community, with the FBI/DOJ, and many other Federal agencies – a requirement that will discourage company participation in the voluntary information sharing scheme envisioned in the bill.”

CDT warns that CISA risks turning the cybersecurity program it creates into a backdoor wiretap by authorizing sharing and use of CTIs (cyber threat indicators) for a broad array of law enforcement purposes that have nothing to do with cybersecurity. Moreover, CDT says, CISA will likely introduce unintended consequences:

“It trumps all law in authorizing companies to share user Internet communications and data that qualify as ‘cyber threat indicators,’ [and] does nothing to address conduct of the NSA that actually undermines cybersecurity, including the stockpiling of zero day vulnerabilities.”

ANALYSIS

On the surface, efforts to increase information sharing about the latest cyber threats seem like a no-brainer. We read constantly about breaches at major corporations in which the attackers were found to have been inside of the victim’s network for months or years on end before the organization discovered that it was breached (or, more likely, they were notified by law enforcement officials or third-party security firms).

If only there were an easier way, we are told, for companies to share so-called “indicators of compromise” — Internet addresses or malicious software samples known to be favored by specific cybercriminal groups, for example — such breaches and the resulting leakage of consumer data and corporate secrets could be detected and stanched far more quickly.

In practice, however, there are already plenty of efforts — some public, some subscription-based — to collect and disseminate this threat data. From where I sit, the biggest impediment to detecting and responding to breaches in a more timely manner comes from a fundamental lack of appreciation — from an organization’s leadership on down — for how much is riding on all the technology that drives virtually every aspect of the modern business enterprise today. While many business leaders fail to appreciate the value and criticality of all their IT assets, I guarantee you today’s cybercrooks know all too well how much these assets are worth. And this yawning gap in awareness and understanding is evident in the sheer number of breaches announced each week.

Far too many organizations have trouble seeing the value of investing in cybersecurity until it is too late. Even then, breached entities will often seek out shiny new technologies or products that they perceive will help detect and prevent the next breach, while overlooking the value of investing in talented cybersecurity professionals to help them make sense of what all this technology is already trying to tell them about the integrity and health of their network and computing devices.

One of the more stunning examples of this comes from a depressingly static finding in the annual data breach reports published by Verizon Enterprise, a company that helps victims of cybercrime respond to and clean up after major data breaches. Every year, Verizon produces an in-depth report that tries to pull lessons out of dozens of incidents it has responded to in the previous year. It also polls dozens of law enforcement agencies worldwide for their takeaways from investigating cybercrime incidents.

The depressingly static stat is that in a great many of these breaches, the information that could have tipped companies off to a breach much sooner was already collected by the breached organization’s various cybersecurity tools; the trouble was, the organization lacked the human resources needed to make sense of all this information.

We all want the enormous benefits that technology and the Internet can bring, but all too often we are unwilling to face just how dependent we have become on technology. We embrace and extoll these benefits, but we routinely fail to appreciate how these tools can be used against us. We want the benefits of it all, but we’re reluctant to put in the difficult and very often unsexy work required to make sure we can continue to make those benefits work for us.

The most frustrating aspect of a legislative approach to fixing this problem is that it may be virtually impossible to measure whether a bill like CISA will in fact lead to more information sharing that helps companies prevent or quash data breaches. Meanwhile, history is littered with examples of well-intentioned laws that produce unintended (if not unforeseen) consequences.

Having read through the proposed CISA bill and its myriad amendments, I’m left with an impression perhaps best voiced in a letter sent earlier this week to the bill’s sponsors by nearly two-dozen academics. The coalition of professors charged that CISA is an example of the classic “let’s do something law” from a Congress that is under intense pressure to respond to a seemingly never-ending parade of breaches across the public and private sectors.

Rather than encouraging companies to increase their own cybersecurity standards, the professors wrote, “CISA ignores that goal and offloads responsibility to a generalized public-private secret information sharing network.”

“CISA creates new law in the wrong places,” the letter concluded. “For example, as the attached letter indicates, security threat information sharing is already quite robust. Instead, what are most needed are more robust and meaningful private efforts to prevent intrusions into networks and leaks out of them, and CISA does nothing to move us in that direction.”

Further reading: Independent national security journalist Marcy Wheeler’s take at EmptyWheel.net.

Errata Security: Ethics of killing Hitler

This post was syndicated from: Errata Security and was written by: Robert Graham. Original post: at Errata Security

The NYTimes asks us: if we could go back in time and kill Hitler as a baby, would we do it? There’s actually several questions here: emotional, moral, and ethical. Consider a rephrasing of the question to focus on the emotional question: could you kill a baby, even if you knew it would grow up and become Hitler?

But it’s the ethical question that comes up the most often, and it has real-world use. It’s pretty much the question Edward Snowden faced: should he break his oath and disclose the NSA’s mass surveillance of Americans?

I point this out because my ethical response is “yes, and go to jail”. The added “and go to jail” makes it a rare response — lots of people are willing to kill Hitler if they don’t suffer any repercussions.

For me, the hypothetical question is “If you went back in time and killed Hitler, would you go to jail for murder?“. My answer is “yes”. I’d still do my best to lessen the punishment. I’d hire the best lawyer to defend me. It’s just that I would put judgement of my crime or heroism in the hands of others. I would pay the consequences, whatever they were.

Another way of looking at the question is: “If you had a time machine, is killing Hitler the best option?“. Maybe if you sent a hot chick back in time to get Hitler laid as a teenager, he wouldn’t be so angry at the world. Maybe if you went back in time and purchased his crappy paintings, or hired him as an architect, you could steer his life onto another path. Seriously, the time stream is full of butterflies that simply need to flap their wings in order to divert Hitler from genocide.

I point this out because it’s “murder” that is the question, and Hitler is only window dressing.

There is a cybersecurity bill, “CISA”, in front of congress right now that will be voted on next week. But “cybersecurity” is only the window dressing. The tech industry and cybersecurity experts oppose it. Its only supporters are the intelligence community, like the FBI and NSA. It’s really a disguised surveillance bill. Just like people seem uninterested in stopping Hitler through some means other than murder, government is uninterested in stopping hackers through some other means than mass surveillance and a police state.

Anyway, those are my two answers to the “kill Hitler” question. If I had a time machine, my first choice wouldn’t be “murder”. If I did choose “murder”, I’d expect to go to jail for it.

Schneier on Security: Police Want Genetic Data from Corporate Repositories

This post was syndicated from: Schneier on Security and was written by: schneier. Original post: at Schneier on Security

Both the FBI and local law enforcement are trying to get the genetic data stored at companies like 23andMe.

No surprise, really.

As NYU law professor Erin Murphy told the New Orleans Advocate regarding the Usry case, gathering DNA information is “a series of totally reasonable steps by law enforcement.” If you’re a cop trying to solve a crime, and you have DNA at your disposal, you’re going to want to use it to further your investigation. But the fact that your signing up for 23andMe or Ancestry.com means that you and all of your current and future family members could become genetic criminal suspects is not something most users probably have in mind when trying to find out where their ancestors came from.

Schneier on Security: Obama Administration Not Pursuing a Backdoor to Commercial Encryption

This post was syndicated from: Schneier on Security and was written by: schneier. Original post: at Schneier on Security

The Obama Administration is not pursuing a law that would force computer and communications manufacturers to add backdoors to their products for law enforcement. Sensibly, they concluded that criminals, terrorists, and foreign spies would use that backdoor as well.

Score one for the pro-security side in the Second Crypto War.

It’s certainly not over. The FBI hasn’t given up on an encryption backdoor (or other backdoor access to plaintext) since the early 1990s, and it’s not going to give up now. I expect there will be more pressure on companies, both overt and covert, more insinuations that strong security is somehow responsible for crime and terrorism, and more behind-closed-doors negotiations.

Schneier on Security: Automatic Face Recognition and Surveillance

This post was syndicated from: Schneier on Security and was written by: schneier. Original post: at Schneier on Security

ID checks were a common response to the terrorist attacks of 9/11, but they’ll soon be obsolete. You won’t have to show your ID, because you’ll be identified automatically. A security camera will capture your face, and it’ll be matched with your name and a whole lot of other information besides. Welcome to the world of automatic facial recognition. Those who have access to databases of identified photos will have the power to identify us. Yes, it’ll enable some amazing personalized services; but it’ll also enable whole new levels of surveillance. The underlying technologies are being developed today, and there are currently no rules limiting their use.

Walk into a store, and the salesclerks will know your name. The store’s cameras and computers will have figured out your identity, and looked you up in both their store database and a commercial marketing database they’ve subscribed to. They’ll know your name, salary, interests, what sort of sales pitches you’re most vulnerable to, and how profitable a customer you are. Maybe they’ll have read a profile based on your tweets and know what sort of mood you’re in. Maybe they’ll know your political affiliation or sexual identity, both predictable by your social media activity. And they’re going to engage with you accordingly, perhaps by making sure you’re well taken care of or possibly by trying to make you so uncomfortable that you’ll leave.

Walk by a policeman, and she will know your name, address, criminal record, and with whom you routinely are seen. The potential for discrimination is enormous, especially in low-income communities where people are routinely harassed for things like unpaid parking tickets and other minor violations. And in a country where people are arrested for their political views, the use of this technology quickly turns into a nightmare scenario.

The critical technology here is computer face recognition. Traditionally it has been pretty poor, but it’s slowly improving. A computer is now as good as a person. Already Google’s algorithms can accurately match child and adult photos of the same person, and Facebook has an algorithm that works by recognizing hair style, body shape, and body language — and works even when it can’t see faces. And while we humans are pretty much as good at this as we’re ever going to get, computers will continue to improve. Over the next years, they’ll continue to get more accurate, making better matches using even worse photos.
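Under the hood, systems like these typically reduce each face to a fixed-length embedding vector produced by a learned model, then identify a probe face by finding the nearest gallery vector above a similarity threshold. A toy sketch of that final matching step — the vectors, names, and threshold here are invented for illustration, and the hard part (the embedding model itself) is assumed away:

```python
import math
from typing import Dict, List, Optional


def cosine(a: List[float], b: List[float]) -> float:
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)


def identify(probe: List[float], gallery: Dict[str, List[float]],
             threshold: float = 0.8) -> Optional[str]:
    """Return the gallery identity best matching the probe embedding,
    or None when no similarity clears the threshold."""
    best_name, best_score = None, threshold
    for name, vec in gallery.items():
        score = cosine(probe, vec)
        if score > best_score:
            best_name, best_score = name, score
    return best_name
```

The surveillance implications discussed here follow from scale, not sophistication: once embeddings exist, this lookup is cheap enough to run against every face a camera sees.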

Matching photos with names also requires a database of identified photos, and we have plenty of those too. Driver’s license databases are a gold mine: all shot face forward, in good focus and even light, with accurate identity information attached to each photo. The enormous photo collections of social media and photo archiving sites are another. They contain photos of us from all sorts of angles and in all sorts of lighting conditions, and we helpfully do the identifying step for the companies by tagging ourselves and our friends. Maybe this data will appear on handheld screens. Maybe it’ll be automatically displayed on computer-enhanced glasses. Imagine salesclerks — or politicians — being able to scan a room and instantly see wealthy customers highlighted in green, or policemen seeing people with criminal records highlighted in red.

Science fiction writers have been exploring this future in both books and movies for decades. Ads followed people from billboard to billboard in the movie Minority Report. In John Scalzi’s recent novel Lock In, characters scan each other like the salesclerks I described above.

This is no longer fiction. High-tech billboards can target ads based on the gender of who’s standing in front of them. In 2011, researchers at Carnegie Mellon pointed a camera at a public area on campus and were able to match live video footage with a public database of tagged photos in real time. Already government and commercial authorities have set up facial recognition systems to identify and monitor people at sporting events, music festivals, and even churches. The Dubai police are working on integrating facial recognition into Google Glass, and more US local police forces are using the technology.

Facebook, Google, Twitter, and other companies with large databases of tagged photos know how valuable their archives are. They see all kinds of services powered by their technologies — services they can sell to businesses like the stores you walk into and the governments you might interact with.

Other companies will spring up whose business models depend on capturing our images in public and selling them to whoever has use for them. If you think this is farfetched, consider a related technology that’s already far down that path: license-plate capture.

Today in the US there’s a massive but invisible industry that records the movements of cars around the country. Cameras mounted on cars and tow trucks capture license plates along with date/time/location information, and companies use that data to find cars that are scheduled for repossession. One company, Vigilant Solutions, claims to collect 70 million scans in the US every month. The companies that engage in this business routinely share that data with the police, giving the police a steady stream of surveillance information on innocent people that they could not legally collect on their own. And the companies are already looking for other profit streams, selling that surveillance data to anyone else who thinks they have a need for it.
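What makes plate-capture data so powerful is how little structure it needs: each camera read is just a plate, a timestamp, and a location, and retrospective tracking is a trivial query over those rows. A minimal sketch (the plates, times, and locations below are invented for illustration):

```python
from datetime import datetime

# Each camera read is a (plate, timestamp, location) row.
# All sightings below are invented sample data.
scans = [
    ("ABC1234", datetime(2015, 9, 1, 8, 15), "I-70 mile 12"),
    ("XYZ9876", datetime(2015, 9, 1, 9, 2),  "Main St garage"),
    ("ABC1234", datetime(2015, 9, 3, 17, 40), "Airport lot B"),
    ("ABC1234", datetime(2015, 9, 5, 7, 55), "I-70 mile 12"),
]

def history(plate):
    """Reconstruct a vehicle's movements: every sighting, in time order."""
    return sorted((t, loc) for p, t, loc in scans if p == plate)

for when, where in history("ABC1234"):
    print(when, where)
```

At 70 million scans a month, the only engineering challenge is scale; the surveillance capability itself falls out of the data model.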

This could easily happen with face recognition. Finding bail jumpers could even be the initial driving force, just as finding cars to repossess was for license plate capture.

Already the FBI has a database of 52 million faces, and describes its integration of facial recognition software with that database as “fully operational.” In 2014, FBI Director James Comey told Congress that the database would not include photos of ordinary citizens, although the FBI’s own documents indicate otherwise. And just last month, we learned that the FBI is looking to buy a system that will collect facial images of anyone an officer stops on the street.

In 2013, Facebook had a quarter of a trillion user photos in its database. There’s currently a class-action lawsuit in Illinois alleging that the company has over a billion “face templates” of people, collected without their knowledge or consent.

Last year, the US Department of Commerce tried to prevail upon industry representatives and privacy organizations to write a voluntary code of conduct for companies using facial recognition technologies. After 16 months of negotiations, all of the consumer-focused privacy organizations pulled out of the process because industry representatives were unable to agree on any limitations on something as basic as nonconsensual facial recognition.

When we talk about surveillance, we tend to concentrate on the problems of data collection: CCTV cameras, tagged photos, purchasing habits, our writings on sites like Facebook and Twitter. We think much less about data analysis. But effective and pervasive surveillance is just as much about analysis. It’s sustained by a combination of cheap and ubiquitous cameras, tagged photo databases, commercial databases of our actions that reveal our habits and personalities, and — most of all — fast and accurate face recognition software.

Don’t expect to have access to this technology for yourself anytime soon. This is not facial recognition for all. It’s just for those who can either demand or pay for access to the required technologies — most importantly, the tagged photo databases. And while we can easily imagine how this might be misused in a totalitarian country, there are dangers in free societies as well. Without meaningful regulation, we’re moving into a world where governments and corporations will be able to identify people both in real time and backwards in time, remotely and in secret, without consent or recourse.

Despite protests from industry, we need to regulate this budding industry. We need limitations on how our images can be collected without our knowledge or consent, and on how they can be used. The technologies aren’t going away, and we can’t uninvent these capabilities. But we can ensure that they’re used ethically and responsibly, and not just as a mechanism to increase police and corporate power over us.

This essay previously appeared on Forbes.com.

EDITED TO ADD: Two articles that say much the same thing.

Krebs on Security: Scottrade Breach Hits 4.6 Million Customers

This post was syndicated from: Krebs on Security and was written by: BrianKrebs. Original post: at Krebs on Security

Welcome to Day 2 of Cybersecurity (Breach) Awareness Month! Today’s awareness lesson is brought to you by retail brokerage firm Scottrade Inc., which just disclosed a breach involving contact information and possibly Social Security numbers on 4.6 million customers.

In an email sent today to customers, St. Louis-based Scottrade said it recently heard from federal law enforcement officials about crimes involving the theft of information from Scottrade and other financial services companies.

“Based upon our subsequent internal investigation coupled with information provided by the authorities, we believe a list of client names and street addresses was taken from our system,” the email notice reads. “Importantly, we have no reason to believe that Scottrade’s trading platforms or any client funds were compromised. All client passwords remained encrypted at all times and we have not seen any indication of fraudulent activity as a result of this incident.”

The notice said that although Social Security numbers, email addresses and other sensitive data were contained in the system accessed, “it appears that contact information was the focus of the incident.” The company said the unauthorized access appears to have occurred over a period between late 2013 and early 2014.

Asked about the context of the notification from federal law enforcement officials, Scottrade spokesperson Shea Leordeanu said the company couldn’t comment on the incident much more than the information included in its Web site notice about the attack. But she did say that Scottrade learned about the data theft from the FBI, and that the company is working with agents from FBI field offices in Atlanta and New York. FBI officials could not be immediately reached for comment.

It may well be that the intruders were after Scottrade user data to facilitate stock scams, and that a spike in spam email for affected Scottrade customers will be the main fallout from this break-in.

In July 2015, prosecutors in Manhattan filed charges against five people — including some suspected of having played a role in the 2014 breach at JPMorgan Chase that exposed the contact information on more than 80 million consumers. The authorities in that investigation said they suspect that group sought to use email addresses stolen in the JPMorgan hacking to further stock manipulation schemes involving spam emails to pump up the price of otherwise worthless penny stocks.

Scottrade said that although it doesn’t believe Social Security numbers were stolen, the company is offering a year’s worth of free credit monitoring services to affected customers. Readers who are concerned about protecting their credit files from identity thieves should read How I Learned to Stop Worrying and Embrace the Security Freeze.

AWS Security Blog: Learn About the Rest of the Security and Compliance Track Sessions Being Offered at re:Invent 2015

This post was syndicated from: AWS Security Blog and was written by: Craig Liebendorfer. Original post: at AWS Security Blog

Previously, I mentioned that the re:Invent 2015 Security & Compliance track sessions had been announced, and I also discussed the AWS Identity and Access Management (IAM) sessions that will be offered as part of the Security & Compliance track.

Today, I will highlight the remainder of the sessions that will be presented as part of the Security & Compliance track. If you are going to re:Invent 2015, you can add these sessions to your schedule now. If you won’t be attending re:Invent in person this year, keep in mind that all sessions will be available on YouTube (video) and SlideShare (slide decks) after the conference.

Auditing

SEC314: Full Configuration Visibility and Control with AWS Config

With AWS Config, you can discover what is being used on AWS, understand how resources are configured and how their configurations changed over time—all without disrupting end-user productivity on AWS. You can use this visibility to assess continuous compliance with best practices, and integrate with IT service management, configuration management, and other ITIL tools. In this session, AWS Senior Product Manager Prashant Prahlad will discuss:

  • Mechanisms to aggregate this deep visibility to gain insights into your overall security and operational posture.
  • Ways to leverage notifications from the service to stay informed, trigger workflows, or graph your infrastructure.
  • Integrating AWS Config with ticketing and workflow tools to help you maintain compliance with internal practices or industry guidelines.
  • Aggregating this data with other configuration management tools to move toward a single source of truth solution for configuration management.

This session is best suited for administrators and developers with a focus on audit, security, and compliance.

SEC318: AWS CloudTrail Deep Dive

Ever wondered how you can find out which user made a particular API call, when the call was made, and which resources were acted upon? In this session, you will learn from AWS Senior Product Manager Sivakanth Mundru how to turn on AWS CloudTrail for hundreds of AWS accounts in all AWS regions to ensure you have full visibility into API activity in all your AWS accounts. We will demonstrate how to use CloudTrail Lookup in the AWS Management Console to troubleshoot operational and security issues and how to use the AWS CLI or SDKs to integrate your applications with CloudTrail.

We will also demonstrate how you can monitor for specific API activity by using Amazon CloudWatch and receive email notifications when such activity occurs. Using CloudTrail Lookup and CloudWatch Alarms, you can take immediate action to quickly remediate any security or operational issues. We will also share best practices and ready-to-use scripts, and dive deep into new features that help you configure additional layers of security for CloudTrail log files.
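The "which user made which call" question the session opens with can be sketched outside the console: a CloudTrail log file is a JSON object with a "Records" list, where each record carries the API name, the time, and the caller identity. The record below is abbreviated and invented, but follows the real record layout:

```python
import json

# A CloudTrail log file is a JSON object with a "Records" list; each record
# names the API call, when it happened, and who made it. This sample is
# abbreviated and invented, but follows the real record structure.
log_file = json.loads("""
{"Records": [
  {"eventTime": "2015-10-02T14:03:11Z", "eventName": "DeleteBucket",
   "eventSource": "s3.amazonaws.com", "sourceIPAddress": "203.0.113.7",
   "userIdentity": {"type": "IAMUser", "userName": "jsmith"}},
  {"eventTime": "2015-10-02T14:05:42Z", "eventName": "DescribeInstances",
   "eventSource": "ec2.amazonaws.com", "sourceIPAddress": "198.51.100.9",
   "userIdentity": {"type": "IAMUser", "userName": "ops-bot"}}
]}
""")

def who_called(records, event_name):
    """Return (user, time, source IP) for every record matching an API call."""
    return [(r["userIdentity"].get("userName"), r["eventTime"], r["sourceIPAddress"])
            for r in records if r["eventName"] == event_name]

print(who_called(log_file["Records"], "DeleteBucket"))
# -> [('jsmith', '2015-10-02T14:03:11Z', '203.0.113.7')]
```

The CLI equivalent of this lookup is `aws cloudtrail lookup-events --lookup-attributes AttributeKey=EventName,AttributeValue=DeleteBucket`, which queries recent events without downloading log files.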

SEC403: Timely Security Alerts and Analytics: Diving into AWS CloudTrail Events by Using Apache Spark on Amazon EMR

Do you want to analyze AWS CloudTrail events within minutes of them arriving in your Amazon S3 bucket? Would you like to learn how to run expressive queries over your CloudTrail logs? AWS Senior Security Engineer Will Kruse will demonstrate Apache Spark and Apache Spark Streaming as two tools to analyze recent and historical security logs for your accounts. To do so, we will use Amazon Elastic MapReduce (EMR), your logs stored in S3, and Amazon SNS to generate alerts. With these tools at your fingertips, you will be the first to know about security events that require your attention, and you will be able to quickly identify and evaluate the relevant security log entries.
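The kind of alerting query such a job runs can be sketched without a cluster: count security-relevant events per user over a window and flag anyone who crosses a threshold. This pure-Python stand-in uses an invented stream of (user, event) pairs; a real deployment would express the same aggregation as a Spark Streaming job reading CloudTrail logs from S3:

```python
from collections import Counter

# Invented stream of (user, event) pairs standing in for parsed
# CloudTrail records; a real job would read these from S3 via Spark.
events = [
    ("jsmith", "ConsoleLogin-Failure"),
    ("jsmith", "ConsoleLogin-Failure"),
    ("ops-bot", "DescribeInstances"),
    ("jsmith", "ConsoleLogin-Failure"),
]

def alerts(stream, watch="ConsoleLogin-Failure", threshold=3):
    """Flag any user who triggers the watched event `threshold` or more times."""
    counts = Counter(user for user, name in stream if name == watch)
    return [user for user, n in counts.items() if n >= threshold]

print(alerts(events))   # -> ['jsmith']
```

In the session's architecture, the output of this aggregation would feed Amazon SNS to deliver the actual alert.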

DDoS

SEC306: Defending Against DDoS Attacks

In this session, AWS Operations Manager Jeff Lyon and AWS Software Development Manager Andrew Kiggins will address the current threat landscape, present DDoS attacks that we have seen on AWS, and discuss the methods and technologies we use to protect AWS services. You will leave this session with a better understanding of:

  • DDoS attacks on AWS as well as the actual threats and volumes that we typically see.
  • What AWS does to protect our services from these attacks.
  • How this all relates to the AWS Shared Responsibility Model.

Incident Response

SEC308: Wrangling Security Events in the Cloud

Have you prepared your AWS environment for detecting and managing security-related events? Do you have all the incident response training and tools you need to rapidly respond to, recover from, and determine the root cause of security events in the cloud? Even if you have a team of incident response rock stars with an arsenal of automated data acquisition and computer forensics capabilities, there is likely a thing or two you will learn from several step-by-step demonstrations of wrangling various potential security events within an AWS environment, from detection to response to recovery to investigating root cause. At a minimum, show up to find out who to call and what to expect when you need assistance with applying your existing, already awesome incident response runbook to your AWS environment. Presenters are AWS Principal Security Engineer Don “Beetle” Bailey and AWS Senior Security Consultant Josh Du Lac.

SEC316: Harden Your Architecture with Security Incident Response Simulations (SIRS)

Using Security Incident Response Simulations (SIRS—also commonly called IR Game Days) regularly keeps your first responders in practice and ready to engage in real events. SIRS help you identify and close security gaps in your platform and application layers, then validate your ability to respond. In this session, AWS Senior Technical Program Manager Jonathan Miller and AWS Global Security Architect Armando Leite will share a straightforward method for conducting SIRS. Then AWS enterprise customers will take the stage to share their experience running joint SIRS with AWS on their AWS architectures. Learn about detection, containment, data preservation, security controls, and more.

Key Management

SEC301: Strategies for Protecting Data Using Encryption in AWS

Protecting sensitive data in the cloud typically requires encryption. Managing the keys used for encryption can be challenging as your sensitive data passes between services and applications. AWS offers several options for using encryption and managing keys to help simplify the protection of your data at rest. In this session, AWS Principal Product Manager Ken Beer and Adobe Systems Principal Scientist Frank Wiebe will help you understand which features are available and how to use them, with emphasis on AWS Key Management Service and AWS CloudHSM. Adobe Systems Incorporated will present their experience using AWS encryption services to solve data security needs.
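Both AWS KMS and CloudHSM are built around the envelope-encryption pattern: a fresh data key encrypts the payload, and a master key (which, in KMS, never leaves the service — the `GenerateDataKey` API hands back both a plaintext and a wrapped copy of the data key) encrypts the data key. The toy sketch below illustrates only the pattern: one-time-pad XOR stands in for real AES, and both keys are generated locally, which a real KMS deployment would never do:

```python
import secrets

def xor(data, key):
    """One-time-pad XOR: a stand-in for real AES, workable here only because
    each key is random, as long as its message, and never reused."""
    return bytes(d ^ k for d, k in zip(data, key))

plaintext = b"account ledger"

# 1. Generate a fresh data key and encrypt the payload with it.
data_key = secrets.token_bytes(len(plaintext))
ciphertext = xor(plaintext, data_key)

# 2. Wrap the data key under the master key; only the wrapped key and the
#    ciphertext are stored. (In KMS, wrapping happens inside the service.)
master_key = secrets.token_bytes(len(data_key))
wrapped_key = xor(data_key, master_key)

# 3. To decrypt: unwrap the data key with the master key, then decrypt.
recovered = xor(ciphertext, xor(wrapped_key, master_key))
print(recovered)  # -> b'account ledger'
```

The payoff of the pattern is operational: rotating or revoking the master key only requires re-wrapping small data keys, never re-encrypting the bulk data.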

SEC401: Encryption Key Storage with AWS KMS at Okta

One of the biggest challenges in writing code that manages encrypted data is developing a secure model for obtaining keys and rotating them when an administrator leaves. AWS Key Management Service (KMS) changes the equation by offering key management as a service, enabling a number of security improvements over conventional key storage methods. Okta Senior Software Architect Jon Todd will show how Okta uses the KMS API to secure a multi-region system serving thousands of customers. This talk is oriented toward developers looking to secure their applications and simplify key management.

Overall Security

SEC201: AWS Security State of the Union

Security must be at the forefront for any online business. At AWS, security is priority number one. AWS Vice President and Chief Information Security Officer Stephen Schmidt will share his insights into cloud security and how AWS meets customers’ demanding security and compliance requirements—and in many cases helps them improve their security posture. Stephen, with his background with the FBI and his work with AWS customers in government, space exploration, research, and financial services organizations, will share an industry perspective that’s unique and invaluable for today’s IT decision makers.

SEC202: If You Build It, They Will Come: Best Practices for Securely Leveraging the Cloud

Cloud adoption is driving digital business growth and enabling companies to shift to processes and practices that make innovation continual. As with any paradigm shift, cloud computing requires different rules and a different way of thinking. This presentation will highlight best practices to build and secure scalable systems in the cloud and capitalize on the cloud with confidence and clarity.

In this session, Sumo Logic VP of Security/CISO Joan Pepin will cover:

  • Key market drivers and advantages for leveraging cloud architectures.
  • Foundational design principles to guide strategy for securely leveraging the cloud.
  • The “Defense in Depth” approach to building secure services in the cloud, whether it’s private, public, or hybrid.
  • Real-world customer insights from organizations that have successfully adopted the “Defense in Depth” approach.

Session sponsored by Sumo Logic.

SEC203: Journey to Securing Time Inc’s Move to the Cloud

Learn how Time Inc. met security requirements as it transitioned from its data centers to the AWS cloud. Colin Bodell, CTO of Time Inc., will start off this session by presenting Time’s objective to move away from on-premises and co-location data centers to AWS, and the cost savings that have been realized with this transition. Chris Nicodemo from Time Inc. and Derek Uzzle from Alert Logic will then share lessons learned in the journey to secure dozens of high-volume media websites during the migration, and how it has enhanced overall security flexibility and scalability. They will also provide a deep dive on the solutions Time has leveraged for its enterprise security best practices, and show you how the company was able to execute its security strategy.

Who should attend: InfoSec and IT management. Session sponsored by Alert Logic.

SEC303: Architecting for End-to-End Security in the Enterprise

This session will tell the story of how security-minded enterprises provide end-to-end protection of their sensitive data in AWS. Learn about the enterprise security architecture decisions made by Fortune 500 organizations during actual sensitive workload deployments as told by the AWS professional service security, risk, and compliance team members who lived them. In this technical walkthrough, AWS Principal Consultant Hart Rossman and AWS Principal Security Solutions Architect Bill Shinn will share lessons learned from the development of enterprise security strategy, security use-case development, end-to-end security architecture and service composition, security configuration decisions, and the creation of AWS security operations playbooks to support the architecture.

SEC321: AWS for the Enterprise—Implementing Policy, Governance, and Security for Enterprise Workloads

CSC Director of Global Cloud Portfolio Kyle Falkenhagen will demonstrate enterprise policy, governance, and security products used to deploy and manage enterprise and industry applications on AWS. CSC will demonstrate automated provisioning and management of big data platforms and industry-specific enterprise applications, with secure network connectivity automatically provisioned from the data center to AWS over a layer-2 routed AT&T NetBond connection (which provides AWS Direct Connect access). CSC will also demonstrate how applications blueprinted on CSC’s Agility Platform can be re-hosted on AWS in minutes or re-instantiated across multiple AWS regions, and how CSC can provide agile, consumption-based endpoint security for workloads in any cloud or virtual infrastructure, providing enterprise management and 24×7 monitoring of workload compliance, vulnerabilities, and potential threats.

Session sponsored by CSC.

SEC402: Enterprise Cloud Security via DevSecOps 2.0

Running enterprise workloads with sensitive data in AWS is hard and requires an in-depth understanding about software-defined security risks. At re:Invent 2014, Intuit and AWS presented "Enterprise Cloud Security via DevSecOps" to help the community understand how to embrace AWS features and a software-defined security model. Since then, we’ve learned quite a bit more about running sensitive workloads in AWS.

We’ve evaluated new security features, worked with vendors, and generally explored how to develop security-as-code skills. Come join Intuit DevSecOps Leader Shannon Lietz and AWS Senior Security Consultant Matt Bretan to learn about second-year lessons and see how DevSecOps is evolving. We’ve built skills in security engineering, compliance operations, security science, and security operations to secure AWS-hosted applications. We will share stories and insights about DevSecOps experiments, and show you how to crawl, walk, and then run into the world of DevSecOps.

Security Architecture

SEC205: Learn How to Hackproof Your Cloud Using Native AWS Tools

The cloud requires us to rethink much of what we do to secure our applications. The idea of physical security morphs as infrastructure becomes virtualized by AWS APIs. In a new world of ephemeral, autoscaling infrastructure, you need to adapt your security architecture to meet both compliance and security threats. And AWS provides powerful tools that enable users to confidently overcome these challenges.

In this session, CloudCheckr Founder and CTO Aaron Newman will discuss leveraging native AWS tools as he covers topics including:

  • Minimizing attack vectors and surface area.
  • Conducting perimeter assessments of your virtual private clouds (VPCs).
  • Identifying internal vs. external threats.
  • Monitoring threats.
  • Reevaluating intrusion detection, activity monitoring, and vulnerability assessment in AWS.

Session sponsored by CloudCheckr.

Enjoy re:Invent!

– Craig

Krebs on Security: With Stolen Cards, Fraudsters Shop to Drop

This post was syndicated from: Krebs on Security and was written by: BrianKrebs. Original post: at Krebs on Security

A time-honored method of extracting cash from stolen credit cards involves “reshipping” scams, which manage the purchase, reshipment and resale of carded consumer goods from America to Eastern Europe — primarily Russia. A new study suggests that some 1.6 million credit and debit cards are used to commit at least $1.8 billion in reshipping fraud each year, and identifies some choke points for disrupting this lucrative money laundering activity.

Many retailers long ago stopped allowing direct shipments of consumer goods from the United States to Russia and Eastern Europe, citing the high rate of fraudulent transactions for goods destined to those areas. As a result, fraudsters have perfected the reshipping service, a criminal enterprise that allows card thieves and the service operators to essentially split the profits from merchandise ordered with stolen credit and debit cards.

Source: Drops for Stuff research paper.

Much of the insight in this story comes from a study released last week called “Drops for Stuff: An Analysis of Reshipping Mule Scams,” which has multiple contributors (including this author). To better understand the reshipping scheme, it helps to have a quick primer on the terminology thieves use to describe different actors in the scam.

The “operator” of the reshipping service specializes in recruiting “reshipping mules” or “drops” — essentially unwitting consumers in the United States who are enlisted through work-at-home job scams and promised up to $2,500 per month salary just for receiving and reshipping packages.

In practice, virtually all drops are cut loose within approximately 30 days of their first shipment — just before the promised paycheck is due. Because of this constant churn, the operator must be constantly recruiting new drops.

The operator sells access to his stable of drops to card thieves, also known as “stuffers.” The stuffers use stolen cards to purchase high-value products from merchants and have the merchants ship the items to the drops’ addresses. Once the drops receive the packages, the stuffers provide them with prepaid shipping labels that the mules will use to ship the packages to the stuffers themselves. After they receive the packages relayed by the drops, the stuffers then sell the products on the local black market.

The reshipping service operator either takes a percentage cut (up to 50 percent of the product’s retail value) as the reshipping fee or, for operations that target lower-priced products such as clothing, charges a flat fee of $50 to $70 per package. Depending on the sophistication of the reshipping service, stuffers can either buy shipping labels directly from the service — generally at a volume discount — or provide their own [for a discussion of ancillary criminal services that resell stolen USPS labels purchased wholesale, check out this story from 2014].

The researchers found that reshipping sites typically guarantee a certain level of customer satisfaction for successful package delivery, with some important caveats. If a drop who is not marked as problematic embezzles the package, reshipping sites offer free shipping for the next package or pay up to 15% of the item’s value as compensation to stuffers (e.g., as compensation for “burning” the credit card or the already-paid reshipping label).

However, in cases where the authorities identify the drop and intercept the package, the reshipping sites provide no compensation; they call these incidents “acts of God” over which they have no control.

“For a premium, stuffers can rent private drops that no other stuffers will have access to,” the researchers wrote. “Such private drops are presumably more reliable and are shielded from interference by other stuffers and, in turn, have a reduced risk to be discovered (hence, lower risk of losing packages).”

AMPLIFYING PROFITS

One of the key benefits of cashing out stolen cards using a reshipping service is that many luxury consumer goods that are typically bought with stolen cards — gaming consoles, iPads, iPhones and other Apple devices, for instance — can be sold in Russia for a 30 percent to 50 percent markup on top of the original purchase price, allowing the thieves to increase their return on each stolen card.

For example, an Apple MacBook selling for 1,000 US dollars in the United States typically retails for about 1,400 US dollars in Russia, because a variety of customs duties, taxes and other fees increase its price.

It’s not hard to see how this can become a very lucrative form of fraud for everyone involved (except the drops). According to the researchers, the average damage from a reshipping scheme per cardholder is $1,156.93. In this case, the stuffer buys a card off the black market for $10, turns around and purchases more than $1,100 worth of goods. After the reshipping service takes its cut (~$550), and the stuffer pays for his reshipping label (~$100), the stuffer receives the stolen goods and sells them on the black market in Russia for $1,400. He has just turned a $10 investment into more than $700. Rinse, wash, and repeat.
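The per-package arithmetic above is worth laying out explicitly, using the article's own round figures:

```python
# Figures from the study as cited above; all amounts in US dollars.
card_cost     = 10      # black-market price of a stolen card
goods_value   = 1100    # merchandise bought with the card
service_cut   = 550     # ~50% reshipping-service fee on retail value
label_cost    = 100     # prepaid shipping label
resale_russia = 1400    # black-market resale price in Russia

stuffer_outlay = card_cost + service_cut + label_cost    # 660
stuffer_profit = resale_russia - stuffer_outlay
print(stuffer_profit)   # -> 740, i.e. "more than $700" on a $10 card
```

Note that the drop, who handles every package, appears nowhere in this ledger: the promised salary is never paid.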

The study examined the inner workings of seven different reshipping services over a period of five years, from 2010 to 2015, and involved data shared by the FBI and the U.S. Postal Inspection Service. The analysis showed that at least 85 percent of packages being reshipped via these schemes were being sent to Moscow or to the immediate surrounding areas of Moscow.

The researchers wrote that “although it is often impossible to apprehend criminals who are abroad, the patterns of reshipping destinations can help to intercept the international shipping packages before they leave the country, e.g., at a USPS International Service Center. Focusing inspection efforts on the packages destined to the stuffers’ prime destination cities can increase the success of intercepting items from reshipping scams.”

The research team wrote that disrupting the reshipping chains of these scams has the potential to cripple the underground economy by affecting a major income stream of cybercriminals. By way of example, the team found that a single criminal-operated reshipping service can earn a yearly revenue of over 7.3 million US dollars, most of which is profit.

A copy of the full paper is available here (PDF).

Krebs on Security: Bidding for Breaches, Redefining Targeted Attacks

This post was syndicated from: Krebs on Security and was written by: BrianKrebs. Original post: at Krebs on Security

A growing community of private and highly-vetted cybercrime forums is redefining the very meaning of “targeted attacks.” These bid-and-ask forums match crooks who are looking for access to specific data, resources or systems within major corporations with hired muscle who are up to the task or who already have access to those resources.

A good example of this until recently could be found at a secretive online forum called “Enigma,” a now-defunct community that was built as a kind of eBay for data breach targets. Vetted users on Enigma were either bidders or buyers — posting requests for data from or access to specific corporate targets, or answering such requests with a bid to provide the requested data. The forum, operating on the open Web for months until recently, was apparently scuttled when the forum administrators (rightly) feared that the community had been infiltrated by spies.

The screen shot below shows several bids on Enigma from March through June 2015, requesting data and services related to HSBC UK, Citibank, Air Berlin and Bank of America:

Enigma, an exclusive forum for cyber thieves to buy and sell access to or data stolen from companies.

One particularly active member, shown in the screen shot above and the one below using the nickname “Demander,” posts on Jan. 10, 2015 that he is looking for credentials from Cisco and that the request is urgent (it’s unclear from the posting whether he’s looking for access to Cisco Corp. or simply to a specific Cisco router). Demander also was searching for services related to Bank of America ATMs and unspecified data or services from Wells Fargo.

More bids on Enigma forum for services, data, and access to major corporations.

Much of the information about Enigma comes from Noam Jolles, a senior intelligence expert at Diskin Advanced Technologies. The employees at Jolles’ firm are all former members of Shin Bet, a.k.a. the Israel Security Agency/General Security Service — Israel’s counterespionage and counterterrorism agency, and similar to the British MI5 or the American FBI. The firm’s namesake comes from its founder, Yuval Diskin, who headed Shin Bet from 2005 to 2011.

“On Enigma, members post a bid and call on people to attack certain targets or that they are looking for certain databases for which they are willing to pay,” Jolles said. “And people are answering it and offering their merchandise.”

Those bids can take many forms, Jolles said, from requests to commit a specific cyberattack to bids for access to certain Web servers or internal corporate networks.

“I even saw bids regarding names of people who could serve as insiders,” she said. “Lists of people who might be susceptible to being recruited or extorted.”

Many experts believe the breach that exposed tens of millions of user accounts at AshleyMadison.com — an infidelity site that promises to hook up cheating spouses — originated from or was at least assisted by an insider at the company. Interestingly, on June 25, 2015 — three weeks before news of the breach broke — a member on a related secret data-trading forum called the “Gentlemen’s Club” solicits “data and service” related to AshleyMadison, saying “Don’t waste time if you don’t know what I’m talking about. Big job opportunity.”

On June 26, 2015, a “Gentlemen’s Club” forum member named “Diablo” requests data and services related to AshleyMadison.com.

Cybercrime forums like Enigma vet new users and require non-refundable deposits of virtual currency (such as Bitcoin). More importantly, they have strict rules: If the forum administrators notice you’re not trading with others on the forum, you’ll soon be expelled from the community. This policy means that users who are not actively involved in illicit activities — such as buying or selling access to hacked resources — aren’t allowed to remain on the board for long.

BLURRING GEOGRAPHIC BOUNDARIES

In some respects, the above-mentioned forums — as exclusive as they appear to be — are a logical extension of cybercrime forum activity that has been maturing for more than a decade.

As I wrote in my book, Spam Nation: The Inside Story of Organized Cyber Crime — From Global Epidemic to Your Front Door, “crime forums almost universally help lower the barriers to entry for would-be cybercriminals. Crime forums offer crooks with disparate skills a place to market and test their services and wares, and in turn to buy ill-gotten goods and services from others.”

The interesting twist with forums like Enigma is that they focus on connecting miscreants seeking specific information or access with those who can be hired to execute a hack or supply the sought-after information from a corpus of already-compromised data. Based on her interaction with other buyers and sellers on these forums, Jolles said a great many of the requests for services seem to be people hiring others to conduct spear-phishing attacks — those that target certain key individuals within companies and organizations.

“What strikes me the most about these forums is the obvious use of spear-phishing attacks, the raw demand for people who know how to map targets for phishing, and the fact that so many people are apparently willing to pay for it,” Jolles said. “It surprises me how much people are willing to pay for good fraudsters and good social engineering experts who are hooking the bait for phishing.”

Jolles believes Enigma and similar bid-and-ask forums are helping to blur international and geographic boundaries between attackers responsible for stealing the data and those who seek to use it for illicit means.

“We have seen an attack be committed by an Eastern European gang, for example, and the [stolen] database will eventually get to China,” Jolles said. “In this data-trading arena, the boundaries are getting warped within it. I can be a state-level buyer, while the attackers will be eastern European criminals.”

ASK FOR THE SAMURAI

Jolles said she began digging deeper into these forums in a bid to answer the question of what happens to what she calls the “missing databases.” Avivah Litan, a fraud analyst with Gartner Inc., wrote about Jolles’ research in July 2015, and explained it this way:

“Where has all the stolen data gone and how is it being used? 

We have all been bombarded by weekly, if not daily reports of breaches and theft of sensitive personal information at organizations such as Anthem, JP Morgan Chase and OPM. Yet, despite the ongoing onslaught of reported breaches (and we have to assume that only the sloppy hackers get caught and that the reported breaches are just a fraction of the total breach pie) – we have not seen widespread identity theft or personal damage inflicted from these breaches.

Have any of you heard of direct negative impacts from these thefts amongst your friends, family, or acquaintances? I certainly have not.”

Jolles said a good example of a cybercriminal actor who helps to blur the typical geographic lines in cybercrime is a mysterious mass-purchaser of stolen data known to many on Enigma and other such forums by a number of nicknames, including “King,” but most commonly “The Samurai.”

“According to what I can understand so far, this was a nickname given to him and not one he picked himself,” Jolles said. “He is looking for any kind of large volumes of stolen data. Of course, I am getting my information from people who are actually trading with him, not from trading with him directly. But they all say he will buy it and pay immediately, and that he is from China.”

What other clues are there that The Samurai could be affiliated with a state-sponsored actor? Jolles said this actor pays immediately for good, verifiable databases, and generally doesn’t haggle over the price.

“People think he’s Chinese, that he’s government because the way he pays,” Jolles said. “He pays immediately and he’s not negotiating.”

For all I know, The Samurai may be just some guy in a trailer park in the middle of America, or an identity adopted by a group of individuals. Alternatively, he could be something of a modern-day Keyser Söze, a sort of virtual boogeyman who gains mythical status among investigators and criminals alike.

Nevertheless, new forums like The Gentlemen’s Club and Enigma are notable because they’re changing the face of targeted attacks, building crucial bridges between far-flung opportunistic hackers, hired guns and those wishing to harness those resources.