Posts tagged ‘fbi’

Krebs on Security: FBI: $1.2B Lost to Business Email Scams

This post was syndicated from: Krebs on Security and was written by: BrianKrebs. Original post: at Krebs on Security

The FBI today warned about a significant spike in victims and dollar losses stemming from an increasingly common scam in which crooks spoof communications from executives at the victim firm in a bid to initiate unauthorized international wire transfers. According to the FBI, thieves stole nearly $750 million in such scams from more than 7,000 victim companies in the U.S. between October 2013 and August 2015.

In January 2015, the FBI released stats showing that between Oct. 1, 2013 and Dec. 1, 2014, some 1,198 companies lost a total of $179 million in so-called business e-mail compromise (BEC) scams, also known as “CEO fraud.” The latest figures show a marked 270 percent increase in identified victims and exposed losses. Taking into account international victims, the losses from BEC scams total more than $1.2 billion, the FBI said.

“The scam has been reported in all 50 states and in 79 countries,” the FBI’s alert notes. “Fraudulent transfers have been reported going to 72 countries; however, the majority of the transfers are going to Asian banks located within China and Hong Kong.”

CEO fraud usually begins with the thieves either phishing an executive and gaining access to that individual’s inbox, or emailing employees from a look-alike domain name that is one or two letters off from the target company’s true domain name. For example, if the target company’s domain was “example.com” the thieves might register “examp1e.com” (substituting the letter “L” for the numeral 1) or “example.co,” and send messages from that domain.
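The look-alike check described above can be automated. Here is a minimal sketch using only Python's standard library; the domains and the 0.9 similarity threshold are illustrative assumptions, not anything prescribed by the FBI. It normalizes common digit-for-letter swaps and then scores near-misses:

```python
from difflib import SequenceMatcher

# Common digit-for-letter swaps seen in spoofed domains.
HOMOGLYPHS = str.maketrans({"1": "l", "0": "o", "5": "s"})

def looks_like(sender_domain, real_domain, threshold=0.9):
    """True if sender_domain is suspiciously close to real_domain
    without being an exact match (e.g. 'examp1e.com' vs 'example.com')."""
    if sender_domain == real_domain:
        return False  # the genuine domain is not a look-alike
    normalized = sender_domain.translate(HOMOGLYPHS)
    if normalized == real_domain:
        return True   # pure homoglyph swap, e.g. 1 -> l
    # Near-misses such as a truncated TLD ('example.co') also score high.
    return SequenceMatcher(None, normalized, real_domain).ratio() >= threshold

print(looks_like("examp1e.com", "example.com"))  # True
print(looks_like("example.co", "example.com"))   # True
print(looks_like("example.com", "example.com"))  # False
```

In practice a check like this would run against the header-from and reply-to domains of inbound mail, with the threshold tuned against legitimate partner domains.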

Unlike traditional phishing scams, spoofed emails used in CEO fraud schemes are unlikely to set off spam traps, because these are targeted phishing scams that are not mass e-mailed. Also, the crooks behind them take the time to understand the target organization’s relationships, activities, interests and travel and/or purchasing plans.

They do this by scraping employee email addresses and other information from the target’s Web site to help make the missives more convincing. In the case where executives or employees have their inboxes compromised by the thieves, the crooks will scour the victim’s email correspondence for certain words that might reveal whether the company routinely deals with wire transfers — searching for messages with key words like “invoice,” “deposit” and “president.”
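The same keyword sweep works in reverse for defenders auditing how much wire-transfer context a compromised inbox would hand an intruder. A minimal sketch follows; the mailbox contents are invented, and the keywords are the ones the FBI cites:

```python
# Key words the FBI says BEC crooks search compromised inboxes for.
KEYWORDS = ("invoice", "deposit", "president")

def flag_messages(messages):
    """Return (index, matched keywords) for every message that mentions
    any wire-transfer keyword, case-insensitively."""
    hits = []
    for i, body in enumerate(messages):
        lowered = body.lower()
        matched = [k for k in KEYWORDS if k in lowered]
        if matched:
            hits.append((i, matched))
    return hits

mailbox = [
    "Please find the attached invoice for the Q3 deposit.",
    "Lunch on Friday?",
    "The president approved the wire to our supplier.",
]
print(flag_messages(mailbox))  # [(0, ['invoice', 'deposit']), (2, ['president'])]
```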

On the surface, business email compromise scams may seem unsophisticated relative to moneymaking schemes that involve complex malicious software, such as Dyre and ZeuS. But in many ways, the BEC attack is more versatile and adept at sidestepping basic security strategies used by banks and their customers to minimize risks associated with account takeovers. In traditional phishing scams, the attackers interact with the victim’s bank directly, but in the BEC scam the crooks trick the victim into doing that for them.

Business Email Compromise (BEC) scams are more versatile and adaptive than more traditional malware-based scams.

In these cases, the fraudsters will forge the sender’s email address displayed to the recipient, so that the email appears to be coming from example.com. In all cases, however, the “reply-to” address is the spoofed domain (e.g. examp1e.com), ensuring that any replies are sent to the fraudster.
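That From/Reply-To mismatch is mechanically checkable. A sketch using only Python's standard email library; the headers below are hypothetical, modeled on the example domains above:

```python
from email import message_from_string
from email.utils import parseaddr

def reply_to_mismatch(raw):
    """True if the Reply-To: domain differs from the visible From: domain."""
    msg = message_from_string(raw)
    from_domain = parseaddr(msg.get("From", ""))[1].rpartition("@")[2]
    reply_domain = parseaddr(msg.get("Reply-To", ""))[1].rpartition("@")[2]
    return bool(reply_domain) and reply_domain != from_domain

raw = (
    "From: CEO <ceo@example.com>\n"
    "Reply-To: ceo@examp1e.com\n"
    "Subject: Urgent wire transfer\n"
    "\n"
    "Please process today."
)
print(reply_to_mismatch(raw))  # True: replies silently go to examp1e.com
```

A mismatch alone is not proof of fraud (mailing lists set Reply-To legitimately), but combined with a low-reputation reply-to domain it is a strong signal.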

The FBI’s numbers would seem to indicate that the average loss per victim is around $100,000. That may be so, but some of the BEC swindles I’ve written about thus far have involved much higher amounts. Earlier this month, tech firm Ubiquiti Networks disclosed in a quarterly financial report that it suffered a whopping $46.7 million hit because of a BEC scam.
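The back-of-the-envelope arithmetic behind that average, using the figures from the FBI alert quoted above:

```python
# Figures from the FBI alert: ~$750M stolen from more than 7,000
# U.S. victim companies between October 2013 and August 2015.
us_losses = 750_000_000
us_victims = 7_000

print(round(us_losses / us_victims))  # 107143, i.e. roughly $100,000 per victim
```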

In February, con artists made off with $17.2 million from one of Omaha, Nebraska’s oldest companies — The Scoular Co., an employee-owned commodities trader. According to Omaha.com, an executive with the 800-employee company wired the money in installments last summer to a bank in China after receiving emails ordering him to do so.

In March 2015, I posted the story Spoofing the Boss Turns Thieves a Tidy Profit, which recounted the nightmarish experience of an Ohio manufacturing firm that came within a whisker of losing $315,000 after an employee received an email she thought was from her boss asking her to wire the money to China to pay for some raw materials.

The FBI urges businesses to adopt two-step or two-factor authentication for email, where available, and/or to establish other communication channels — such as telephone calls — to verify significant transactions. Businesses are also advised to exercise restraint when publishing information about employee activities on their Web sites or through social media, as attackers perpetrating these schemes often will try to discover information about when executives at the targeted organization will be traveling or otherwise out of the office.
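For reference, the "two-step" email codes the FBI recommends are typically time-based one-time passwords (RFC 6238). Below is a minimal standard-library sketch; the secret shown is the RFC's published test key, not a real credential:

```python
import base64
import hmac
import struct
import time

def totp(secret_b32, when=None, digits=6, step=30):
    """RFC 6238 time-based one-time password (SHA-1 variant)."""
    key = base64.b32decode(secret_b32)
    counter = int((time.time() if when is None else when) // step)
    mac = hmac.new(key, struct.pack(">Q", counter), "sha1").digest()
    offset = mac[-1] & 0x0F  # dynamic truncation (RFC 4226)
    code = (struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF) % 10 ** digits
    return str(code).zfill(digits)

# RFC 6238 test vector: the ASCII secret "12345678901234567890"
# (base32-encoded below) at T=59 seconds yields "287082" for 6 digits.
print(totp("GEZDGNBVGY3TQOJQGEZDGNBVGY3TQOJQ", when=59))  # 287082
```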

Consumers are not immune from these types of scams. According to a related advisory posted by the FBI today, in the three months between April 1, 2015 and June 30, 2015, the agency received 21 complaints from consumers who suffered losses of nearly $700,000 after having their inboxes hijacked or spoofed by thieves. The FBI said it identified approximately $14 million in attempted losses associated with open FBI investigations into such crimes against consumers.

Schneier on Security: No-Fly List Uses Predictive Assessments

This post was syndicated from: Schneier on Security and was written by: schneier. Original post: at Schneier on Security

The US government has admitted that it uses predictive assessments to put people on the no-fly list:

In a little-noticed filing before an Oregon federal judge, the US Justice Department and the FBI conceded that stopping US and other citizens from travelling on airplanes is a matter of “predictive assessments about potential threats,” the government asserted in May.

“By its very nature, identifying individuals who ‘may be a threat to civil aviation or national security’ is a predictive judgment intended to prevent future acts of terrorism in an uncertain context,” Justice Department officials Benjamin C Mizer and Anthony J Coppolino told the court on 28 May.

“Judgments concerning such potential threats to aviation and national security call upon the unique prerogatives of the Executive in assessing such threats.”

It is believed to be the government’s most direct acknowledgement to date that people are not allowed to fly because of what the government believes they might do and not what they have already done.

When you have a secret process that can judge and penalize people without due process or oversight, this is the kind of thing that happens.

Krebs on Security: Cyberheist Victim Trades Smokes for Cash

This post was syndicated from: Krebs on Security and was written by: BrianKrebs. Original post: at Krebs on Security

Earlier this month, KrebsOnSecurity featured the exclusive story of a Russian organized cybercrime gang that stole more than $100 million from small to mid-sized businesses with the help of phantom corporations on the border with China. Today, we’ll look at the stranger-than-fiction true tale of an American firm that lost $197,000 in a remarkably similar 2013 cyberheist, only to later recover most of the money after allegedly plying Chinese authorities with a carton of cigarettes and a hefty bounty for their trouble.

The victim company — an export/import firm based in the northeastern United States — first reached out to this author in 2014 via a U.S. based lawyer who has successfully extracted settlements from banks on the premise that they haven’t done enough to protect their customers from cyberheists. The victim company’s owner — we’ll call him John — agreed to speak about the incident on condition of anonymity, citing pending litigation with the bank.

On Christmas Eve 2013, the accountant at John’s company logged on to the bank’s portal to make a deposit. After submitting her username and password, she was redirected to a Web page that said the bank’s site was experiencing technical difficulties and that she needed to provide a one-time token to validate her request.

Unbeknownst to the accountant at the time, cybercrooks had infected her machine with a powerful password-stealing Trojan horse program and had complete control over her Web browser. Shortly after she supplied the token, the crooks used her hijacked browser session to initiate a fraudulent $197,000 wire transfer to a company in Harbin, a city on the Chinese border with Russia.

The next business day when John’s company went to reverse the wire, the bank said the money was already gone.

“My account rep at the bank said we shouldn’t expect to get that money back, and that they weren’t responsible for this transaction,” John said. “I told them that I didn’t understand because the bank had branches in China, why couldn’t they do anything? The bank rep said that, technically, the crime wasn’t committed against us, it was committed against you.”

SMOKING OUT THE THIEVES

In April 2011, the FBI issued an alert warning that cyber thieves had stolen approximately $20 million in the year prior from small to mid-sized U.S. companies through a series of fraudulent wire transfers sent to Chinese economic and trade companies located on or near the country’s border with Russia.

In that alert, the FBI warned that the intended recipients of the fraudulent, high-dollar wires were companies based in the Heilongjiang province of China, and that these firms were registered in port cities located near the Russia-China border. Harbin, where John’s $197,000 was sent, is the capital and largest city of Heilongjiang province.

Undeterred, John’s associate turned to her cousin in China, a lawyer, who offered his assistance. John said that initially the Harbin police were reluctant to help, insisting they first needed an official report from the FBI about the incident to corroborate John’s story.

In the end, he said, the Chinese authorities ended up settling for a police report from the local cops in his hometown. But according to John, what really sealed the deal was that the Chinese lawyer friend met the Harbin police officers with a gift-wrapped carton of smokes and the promise of a percentage of the recovered funds if they caught the guy responsible and were able to recover the money.

Two days later, the Harbin police reportedly located the business that had received the money, and soon discovered that the very same day this business had just received another international wire transfer for 900,000 Euros.

“They said the money that was stolen from us came in on a Tuesday and was out a day later,” John said. “They wanted to know whether we’d pay expenses for the two police guys to fly to Beijing to complete the investigation, so we wired $1,500 to take care of that, and they froze the account of the guy who got our money.”

In the end, John’s associate flew with her husband from the United States to Beijing and then on to Harbin to meet the attorney, and from there the two of them arranged to meet the cops from Harbin.

“They took her to the bank, where she opened up a new account,” John said. “Then they brought her to a hotel room, and three people came into that hotel room and online they made the transfer [of the amount that the cops had agreed would be their cut].”

Getting the leftover $166,000 back into the United States would entail another ordeal: John said his handlers were unable to initiate a direct wire back to the United States of such a sum unless his company already had a business located in the region. Fortunately, John’s firm was able to leverage a longtime business partner in Singapore who did have a substantial business presence inside China and who agreed to receive the money and forward it on to John’s company, free of charge.

I like John’s story because I have written over 100 pieces involving companies that have lost six-figures or more from cyberheists, and very few of them have ever gotten their money back. Extra-legal remedies to recoup the losses from cybercrime are generally few, unless your organization has the money, willpower and tenacity to pursue your funds to the ends of the earth. Unfortunately, this does not describe most victim businesses in the United States.

U.S. consumers who bank online are protected by Regulation E, which dramatically limits the liability for consumers who lose money from unauthorized account activity online (provided the victim notifies their financial institution of the fraudulent activity within 60 days of receiving a disputed account statement).

Businesses, however, do not enjoy such protections. States across the country have adopted the Uniform Commercial Code (UCC), which holds that a payment order received by the [bank] is “effective as the order of the customer, whether or not authorized, if the security procedure is a commercially reasonable method of providing security against unauthorized payment orders, and the bank proves that it accepted the payment order in good faith and in compliance with the security procedure and any written agreement or instruction of the customer restricting acceptance of payment orders issued in the name of the customer.”

Do you run your own business and bank online but are unwilling to place all of your trust in your bank’s security? That’s a sound conclusion. If you’re wondering what you can do to protect your accounts, consider adopting some of the advice I laid out in Online Banking Best Practices for Businesses and Banking on a Live CD.

Schneier on Security: Another Salvo in the Second Crypto War (of Words)

This post was syndicated from: Schneier on Security and was written by: schneier. Original post: at Schneier on Security

Prosecutors from New York, London, Paris, and Madrid wrote an op-ed in yesterday’s New York Times in favor of backdoors in cell phone encryption. There are a number of flaws in their argument, ranging from how easy it is to get data off an encrypted phone to the dangers of designing a backdoor in the first place, but all of that has been said before. And since anecdote can be more persuasive than data, the op-ed started with one:

In June, a father of six was shot dead on a Monday afternoon in Evanston, Ill., a suburb 10 miles north of Chicago. The Evanston police believe that the victim, Ray C. Owens, had also been robbed. There were no witnesses to his killing, and no surveillance footage either.

With a killer on the loose and few leads at their disposal, investigators in Cook County, which includes Evanston, were encouraged when they found two smartphones alongside the body of the deceased: an iPhone 6 running on Apple’s iOS 8 operating system, and a Samsung Galaxy S6 Edge running on Google’s Android operating system. Both devices were passcode protected.

You can guess the rest. A judge issued a warrant, but neither Apple nor Google could unlock the phones. “The homicide remains unsolved. The killer remains at large.”

The Intercept researched the example, and it seems to be real. The phones belonged to the victim, and…

According to Commander Joseph Dugan of the Evanston Police Department, investigators were able to obtain records of the calls to and from the phones, but those records did not prove useful. By contrast, interviews with people who knew Owens suggested that he communicated mainly through text messages — the kind that travel as encrypted data — and had made plans to meet someone shortly before he was shot.

The information on his phone was not backed up automatically on Apple’s servers — apparently because he didn’t use wi-fi, which backups require.

[…]

But Dugan also wasn’t as quick to lay the blame solely on the encrypted phones. “I don’t know if getting in there, getting the information, would solve the case,” he said, “but it definitely would give us more investigative leads to follow up on.”

This is the first actual example I’ve seen illustrating the value of a backdoor. Unlike the increasingly common example of an ISIL handler abroad communicating securely with a radicalized person in the US, it’s an example where a backdoor might have helped. I say “might have,” because the Galaxy S6 is not encrypted by default, which means the victim deliberately turned the encryption on. If the native smartphone encryption had been backdoored, we don’t know if the victim would have turned it on nevertheless, or if he would have employed a different, non-backdoored app.

The authors’ other examples are much sloppier:

Between October and June, 74 iPhones running the iOS 8 operating system could not be accessed by investigators for the Manhattan district attorney’s office — despite judicial warrants to search the devices. The investigations that were disrupted include the attempted murder of three individuals, the repeated sexual abuse of a child, a continuing sex trafficking ring and numerous assaults and robberies.

[…]

In France, smartphone data was vital to the swift investigation of the Charlie Hebdo terrorist attacks in January, and the deadly attack on a gas facility at Saint-Quentin-Fallavier, near Lyon, in June. And on a daily basis, our agencies rely on evidence lawfully retrieved from smartphones to fight sex crimes, child abuse, cybercrime, robberies or homicides.

We’ve heard that 74 number before. It’s over nine months, in an office that handles about 100,000 cases a year: less than 0.1% of the time. Details about those cases would be useful, so we can determine if encryption was just an impediment to investigation, or resulted in a criminal going free. The government needs to do a better job of presenting empirical data to support its case for backdoors. That they’re unable to do so suggests very strongly that an empirical analysis wouldn’t favor the government’s case.

As to the Charlie Hebdo case, it’s not clear how much of that vital smartphone data was actual data, and how much of it was unable-to-be-encrypted metadata. I am reminded of the examples that then-FBI-Director Louis Freeh would give during the First Crypto Wars in the 1990s. The big one used to illustrate the dangers of encryption was Mafia boss John Gotti. But the surveillance that convicted him was a room bug, not a wiretap. Given that the examples from FBI Director James Comey’s “going dark” speech last year were bogus, skepticism in the face of anecdote seems prudent.

So much of this “going dark” versus the “golden age of surveillance” debate depends on where you start from. Referring to that first Evanston example and the inability to get evidence from the victim’s phones, the op-ed authors write: “Until very recently, this situation would not have occurred.” That’s utter nonsense. From the beginning of time until very recently, this was the only situation that could have occurred. Objects in the vicinity of an event were largely mute about the past. Few things, save for eyewitnesses, could ever reach back in time and produce evidence. Even 15 years ago, the victim’s cell phone would have had no evidence on it that couldn’t have been obtained elsewhere, and that’s if the victim had been carrying a cell phone at all.

For most of human history, surveillance has been expensive. Over the last couple of decades, it has become incredibly cheap and almost ubiquitous. That a few bits and pieces are becoming expensive again isn’t a cause for alarm.

This essay originally appeared on Lawfare.

Schneier on Security: Intimidating Military Personnel by Targeting Their Families

This post was syndicated from: Schneier on Security and was written by: schneier. Original post: at Schneier on Security

This FBI alert is interesting:

(U//FOUO) In May 2015, the wife of a US military member was approached in front of her home by two Middle-Eastern males. The men stated that she was the wife of a US interrogator. When she denied their claims, the men laughed. The two men left the area in a dark-colored, four-door sedan with two other Middle-Eastern males in the vehicle. The woman had observed the vehicle in the neighborhood on previous occasions.

(U//FOUO) Similar incidents in Wyoming have been reported to the FBI throughout June 2015. On numerous occasions, family members of military personnel were confronted by Middle-Eastern males in front of their homes. The males have attempted to obtain personal information about the military member and family members through intimidation. The family members have reported feeling scared.

The report says nothing about whether these are isolated incidents, a trend, or part of a larger operation. But it has gotten me thinking about the new ways military personnel can be intimidated. More and more military personnel live here and work there, remotely as drone pilots, intelligence analysts, and so on, and their military and personal lives intertwine to a degree we have not seen before. There will be some interesting security repercussions from that.

Krebs on Security: Tech Firm Ubiquiti Suffers $46M Cyberheist

This post was syndicated from: Krebs on Security and was written by: BrianKrebs. Original post: at Krebs on Security

Networking firm Ubiquiti Networks Inc. disclosed this week that cyber thieves recently stole $46.7 million using an increasingly common scam in which crooks spoof communications from executives at the victim firm in a bid to initiate unauthorized international wire transfers.

Ubiquiti, a San Jose-based maker of networking technology for service providers and enterprises, disclosed the attack in a quarterly financial report filed this week with the U.S. Securities and Exchange Commission (SEC). The company said it discovered the fraud on June 5, 2015, and that the incident involved employee impersonation and fraudulent requests from an outside entity targeting the company’s finance department.

“This fraud resulted in transfers of funds aggregating $46.7 million held by a Company subsidiary incorporated in Hong Kong to other overseas accounts held by third parties,” Ubiquiti wrote. “As soon as the Company became aware of this fraudulent activity it initiated contact with its Hong Kong subsidiary’s bank and promptly initiated legal proceedings in various foreign jurisdictions. As a result of these efforts, the Company has recovered $8.1 million of the amounts transferred.”

Known variously as “CEO fraud” and the “business email compromise,” the swindle that hit Ubiquiti is a sophisticated and increasingly common one targeting businesses working with foreign suppliers and/or businesses that regularly perform wire transfer payments. In January 2015, the FBI warned that cyber thieves stole nearly $215 million from businesses in the previous 14 months through such scams, which start when crooks spoof or hijack the email accounts of business executives or employees.

In February, con artists made off with $17.2 million from one of Omaha, Nebraska’s oldest companies —  The Scoular Co., an employee-owned commodities trader. According to Omaha.com, an executive with the 800-employee company wired the money in installments last summer to a bank in China after receiving emails ordering him to do so.

In March 2015, I posted the story Spoofing the Boss Turns Thieves a Tidy Profit, which recounted the nightmarish experience of an Ohio manufacturing firm that came within a whisker of losing $315,000 after an employee received an email she thought was from her boss asking her to wire the money to China to pay for some raw materials.

Ubiquiti didn’t disclose precisely how it was scammed, but CEO fraud usually begins with the thieves either phishing an executive and gaining access to that individual’s inbox, or emailing employees from a look-alike domain name that is one or two letters off from the target company’s true domain name. For example, if the target company’s domain was “example.com” the thieves might register “examp1e.com” (substituting the letter “L” for the numeral 1) or “example.co,” and send messages from that domain.

In these cases, the fraudsters will forge the sender’s email address displayed to the recipient, so that the email appears to be coming from example.com. In all cases, however, the “reply-to” address is the spoofed domain (e.g. examp1e.com), ensuring that any replies are sent to the fraudster.

In the case of the above-mentioned Ohio manufacturing firm that nearly lost $315,000, that company determined that the fraudsters had just hours before the attack registered the phony domain and associated email account with Vistaprint, which offers a free one-month trial for companies looking to quickly set up a Web site.

Ubiquiti said that, in addition to the $8.1 million it has already recovered, some $6.8 million of the amounts transferred is currently subject to legal injunction and reasonably expected to be recovered. It added that an internal investigation completed last month uncovered no evidence that its systems were penetrated or that any corporate information, including financial and account data, was accessed. Likewise, the investigation found no evidence of employee criminal involvement in the fraud.

“The Company is continuing to pursue the recovery of the remaining $31.8 million and is cooperating with U.S. federal and numerous overseas law enforcement authorities who are actively pursuing a multi-agency criminal investigation,” the 10-K filing reads. “The Company may be limited in what information it can disclose due to the ongoing investigation. The Company currently believes this is an isolated event and does not believe its technology systems have been compromised or that Company data has been exposed.”

The FBI’s advisory on these scams urges businesses to adopt two-step or two-factor authentication for email, where available, and/or to establish other communication channels — such as telephone calls — to verify significant transactions. Businesses are also advised to exercise restraint when publishing information about employee activities on their Web sites or through social media, as attackers perpetrating these schemes often will try to discover information about when executives at the targeted organization will be traveling or otherwise out of the office.

Ubiquiti noted that as a result of its investigation, the company and its audit committee and advisors concluded that its internal controls over financial reporting were ineffective due to one or more material weaknesses, though it didn’t disclose what measures it took to close those security gaps.

“The Company has implemented enhanced internal controls over financial reporting since June 5, 2015 and is in the process of implementing additional procedures and controls pursuant to recommendations from the investigation,” it said.

There are probably some scenarios in which legitimate emails between two parties carry different display and “reply-to” addresses. But if the message also involves a “reply-to” domain that has virtually no reputation (it was registered within hours or days of the message being sent), the chances that the email is fraudulent go up dramatically.
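That “no reputation” heuristic reduces to comparing the message date with the reply-to domain’s registration date. In the sketch below, the lookup function is a stand-in for a real WHOIS/RDAP query, and the registration records are invented to mirror the Vistaprint case above:

```python
from datetime import datetime, timedelta

def is_suspicious(reply_to_domain, message_date, whois_created, max_age_days=7):
    """True if the reply-to domain was registered within max_age_days of
    the message (or cannot be resolved at all)."""
    created = whois_created(reply_to_domain)
    if created is None:
        return True  # unknown / unregistered domains are also suspect
    return (message_date - created) <= timedelta(days=max_age_days)

# Invented registration data: the phony domain registered hours before
# the attack, the genuine one decades old.
records = {
    "examp1e.com": datetime(2015, 3, 9, 8, 0),
    "example.com": datetime(1995, 8, 14),
}

print(is_suspicious("examp1e.com", datetime(2015, 3, 9, 14, 0), records.get))  # True
print(is_suspicious("example.com", datetime(2015, 3, 9, 14, 0), records.get))  # False
```

A real deployment would back this with cached WHOIS/RDAP data and treat young domains as one risk signal among several, not a verdict.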

Business Email Compromise (BEC) or man-in-the-email (MITE) scams are adaptive and surprisingly complex.

Unlike traditional phishing scams, spoofed emails used in CEO fraud schemes are unlikely to set off spam traps, because these are targeted phishing scams that are not mass e-mailed. Also, the crooks behind them take the time to understand the target organization’s relationships, activities, interests and travel and/or purchasing plans.

They do this by scraping employee email addresses and other information from the target’s Web site to help make the missives more convincing. In the case where executives or employees have their inboxes compromised by the thieves, the crooks will scour the victim’s email correspondence for certain words that might reveal whether the company routinely deals with wire transfers — searching for messages with key words like “invoice,” “deposit” and “president.”

On the surface, business email compromise scams may seem unsophisticated relative to moneymaking schemes that involve complex malicious software, such as Dyre and ZeuS. But in many ways, the BEC attack is more versatile and adept at sidestepping basic security strategies used by banks and their customers to minimize risks associated with account takeovers. In traditional phishing scams, the attackers interact with the victim’s bank directly, but in the BEC scam the crooks trick the victim into doing that for them.

Business Email Compromise (BEC) scams are more versatile and adaptive than more traditional malware-based scams.

Hat tip to Brian Honan at CSO Online, who spotted this filing on Thursday. Update, 4:20 ET: Corrected the spelling in several instances of the networking company’s name.

Schneier on Security: Nicholas Weaver on iPhone Security

This post was syndicated from: Schneier on Security and was written by: schneier. Original post: at Schneier on Security

Excellent essay:

Yes, an iPhone configured with a proper password has enough protection that, turned off, I’d be willing to hand mine over to the DGSE, NSA, or Chinese. But many (perhaps most) users don’t configure their phones right. Beyond just waiting for the suspect to unlock his phone, most people either use a weak 4-digit passcode (that can be brute-forced) or use the fingerprint reader (which the officer has a day to force the subject to use).

Furthermore, most iPhones have a lurking security landmine enabled by default: iCloud backup. A simple warrant to Apple can obtain this backup, which includes all photographs (so there is the selfie) and all undeleted iMessages! About the only information of value not included in this backup are the known WiFi networks and the suspect’s email, but a suspect’s email is a different warrant away anyway.

Finally, there is iMessage, whose “end-to-end” nature, despite FBI complaints, contains some significant weaknesses and deserves scare-quotes. To start with, iMessage’s encryption does not obscure any metadata, and as the saying goes, “the Metadata is the Message”. So with a warrant to Apple, the FBI can obtain all the information about every message sent and received except the message contents, including time, IP addresses, recipients, and the presence and size of attachments. Apple can’t hide this metadata, because Apple needs to use this metadata to deliver messages.

He explains how Apple could enable surveillance on iMessage and FaceTime:

So to tap Alice, it is straightforward to modify the keyserver to present an additional FBI key for Alice to everyone but Alice. Now the FBI (but not Apple) can decrypt all iMessages sent to Alice in the future. A similar modification, adding an FBI key to every request Alice makes for any keys other than her own, enables tapping all messages sent by Alice. There are similar architectural vulnerabilities which enable tapping of “end-to-end secure” FaceTime calls.
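Weaver’s keyserver observation can be illustrated with a toy model. This is not real cryptography or Apple’s actual protocol: the key names are invented and the “encryption” is just a labeled string. What it models is the trust relationship, namely that the sender encrypts to whatever key list the server returns, so a server that silently appends one extra key taps all future messages:

```python
# Toy model only: stand-in "encryption" and hypothetical key ids.
DEVICE_KEYS = {"alice": ["alice-phone"], "bob": ["bob-phone"]}

def keyserver(user, tap_key=None):
    """Return the recipient's key list; a compromised server can quietly
    append one extra key that the sender has no way to audit."""
    keys = list(DEVICE_KEYS[user])
    if tap_key:
        keys.append(tap_key)
    return keys

def send(sender, recipient, plaintext, tap_key=None):
    """Fan-out: encrypt the message once per key the server lists."""
    return {key: f"enc[{key}]({sender}: {plaintext})"
            for key in keyserver(recipient, tap_key)}

print(sorted(send("bob", "alice", "hi")))                     # ['alice-phone']
print(sorted(send("bob", "alice", "hi", tap_key="tap-key")))  # ['alice-phone', 'tap-key']
```

The sender’s view is identical in both cases, which is exactly why this class of backdoor is hard for users to detect.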

There’s a persistent rumor going around that Apple is in the secret FISA Court, fighting a government order to make its platform more surveillance-friendly — and they’re losing. This might explain Apple CEO Tim Cook’s somewhat sudden vehemence about privacy. I have not found any confirmation of the rumor.

Krebs on Security: Inside the $100M ‘Business Club’ Crime Gang

This post was syndicated from: Krebs on Security and was written by: BrianKrebs. Original post: at Krebs on Security

New research into a notorious Eastern European organized cybercrime gang accused of stealing more than $100 million from banks and businesses worldwide provides an unprecedented, behind-the-scenes look at an exclusive “business club” that dabbled in cyber espionage and worked closely with phantom Chinese firms on Russia’s far eastern border.

In the summer of 2014, the U.S. Justice Department joined multiple international law enforcement agencies and security firms in taking down the Gameover ZeuS botnet, an ultra-sophisticated, global crime machine that infected upwards of a half-million PCs.

Thousands of freelance cybercrooks have used a commercially available form of the ZeuS banking malware for years to siphon funds from Western bank accounts and small businesses. Gameover ZeuS, on the other hand, was a closely-held, customized version secretly built by the ZeuS author himself (following a staged retirement) and wielded exclusively by a cadre of hackers that used the systems in countless online extortion attacks, spam and other illicit moneymaking schemes.

Last year’s takedown of the Gameover ZeuS botnet came just months after the FBI placed a $3 million bounty on the botnet malware’s alleged author — a Russian programmer named Evgeniy Mikhailovich Bogachev who used the hacker nickname “Slavik.” But despite those high-profile law enforcement actions, little has been shared about the day-to-day operations of this remarkably resourceful cybercrime gang.

That changed today with the release of a detailed report from Fox-IT, a security firm based in the Netherlands that secretly gained access to a server used by one of the group’s members. That server, which was rented for use in launching cyberattacks, included chat logs between and among the crime gang’s core leaders, and helped to shed light on the inner workings of this elite group.

The alleged ZeuS Trojan author, Evgeniy Mikhailovich Bogachev, a.k.a. “lucky12345,” “slavik” and “Pollingsoon.” Source: FBI.gov Cyber’s Most Wanted.

THE ‘BUSINESS CLUB’

The chat logs show that the crime gang referred to itself as the “Business Club,” and counted among its members a core group of a half-dozen people supported by a network of more than 50 individuals. In true Oceans 11 fashion, each Business Club member brought a cybercrime specialty to the table, including 24/7 tech support technicians, third-party suppliers of ancillary malicious software, as well as those engaged in recruiting “money mules” — unwitting or willing accomplices who could be trained or counted on to help launder stolen funds.

“To become a member of the business club there was typically an initial membership fee and also typically a profit sharing agreement,” Fox-IT wrote. “Note that the customer and core team relationship was entirely built on trust. As a result not every member would directly get full access, but it would take time until all the privileges of membership would become available.”

Michael Sandee, a principal security expert at Fox-IT and author of the report, said although Bogachev and several other key leaders of the group were apparently based in or around Krasnodar — a temperate area of Russia on the Black Sea — the crime gang had members that spanned most of Russia’s 11 time zones.

Geographic diversity allowed the group — which mainly worked regular 9-to-5 days, Monday through Friday — to conduct its cyberheists by following the rising sun across the globe: emptying accounts at Australian and Asian banks in the morning there, European banks in the afternoon, and then handing operations over to a team based in Eastern Europe that would attempt to siphon funds from banks just starting their business day in the United States.

“They would go along with the time zone, starting with banks in Australia, then continuing in Asia and following the business day wherever it was, ending the day with [attacks against banks in] the United States,” Sandee said.

Image: Timetemperature.com

Business Club members who had access to the GameOver ZeuS botnet’s panel for hijacking online banking transactions could use the panel to intercept security challenges thrown up by the victim’s bank — including one-time tokens and secret questions — as well as the victim’s response to those challenges. The gang dubbed its botnet interface “World Bank Center,” with a tagline beneath that read: “We are playing with your banks.”

The business end of the Business Club’s peer-to-peer botnet, dubbed “World Bank Center.” Image: Fox-IT

CHINESE BANKS, RUSSIAN BUSINESSES

Aside from their role in siphoning funds from Australian and Asian banks, Business Club members based in the far eastern regions of Russia also helped the gang cash out some of their most lucrative cyberheists, Fox-IT’s research suggests.

In April 2011, the FBI issued an alert warning that cyber thieves had stolen approximately $20 million in the year prior from small to mid-sized U.S. companies through a series of fraudulent wire transfers sent to Chinese economic and trade companies located on or near the country’s border with Russia.

In that alert, the FBI warned that the intended recipients of the fraudulent, high-dollar wires were companies based in the Heilongjiang province of China, and that these firms were registered in port cities located near the Russia-China border. The FBI said the companies all used the name of a Chinese port city in their names, such as Raohe, Fuyuan, Jixi City, Xunke, Tongjiang, and Donging, and that the official name of the companies also included the words “economic and trade,” “trade,” and “LTD”. The FBI further advised that recipient entities usually held accounts with the Agricultural Bank of China, the Industrial and Commercial Bank of China, and the Bank of China.

Fox-IT said its access to the gang revealed documents that showed members of the group establishing phony trading and shipping companies in Heilongjiang province — one in Raohe county and another in Suifenhe — two cities adjacent to a China-Russia border crossing just north of Vladivostok.

Remittance slips discovered by Fox-IT show records of wire transfers that the Business Club executed from hacked accounts in the United States and Europe to accounts tied to phony shipping companies in China on the border with Russia. Image: Fox-IT

Sandee said the area in and around Suifenhe began to develop several major projects for economic cooperation between China and Russia beginning in the first half of 2012. Indeed, this Slate story from 2009 describes Suifenhe as an economy driven by Russian shoppers on package tours, noting that there is a rapidly growing population of Russian expatriates living in the city.

“So it is not unlikely that peer-to-peer ZeuS associates would have made use of the positive economic climate and business friendly environment to open their businesses right there,” Fox-IT said in its report. “This shows that all around the world Free Trade Zones and other economic incentive areas are some of the key places where criminals can set up corporate accounts, as they are promoting business. And without too many problems, and with limited exposure, can receive large sums of money.”

Remittance slip found by Fox-IT, from Wachovia Bank in New York to a tongue-in-cheek-named Chinese front company in Suifenhe called “Muling Shuntong Trading.” Image: Fox-IT

KrebsOnSecurity publicized several exclusive stories about U.S.-based businesses robbed of millions of dollars from cyberheists that sent the stolen money in wires to Chinese firms, including $1.66M in Limbo After FBI Seizes Funds from Cyberheist, and $1.5 million Cyberheist Ruins Escrow Firm.

The red arrows indicate the border towns of Raohe (top) and Suifenhe (below). Image: Fox-IT

KEEPING TABS ON THE NEIGHBORS

The Business Club regularly divvied up the profits from its cyberheists, although Fox-IT said it lamentably doesn’t have insight into how exactly that process worked. However, Slavik — the architect of ZeuS and Gameover ZeuS — didn’t share his entire crime machine with the other Club members. According to Fox-IT, the malware writer converted part of the botnet that was previously used for cyberheists into a distributed espionage system that targeted specific information from computers in several neighboring nations, including Georgia, Turkey and Ukraine.

Beginning in late fall 2013 — about the time that conflict between Ukraine and Russia was just beginning to heat up — Slavik retooled a cyberheist botnet to serve as purely a spying machine, and began scouring infected systems in Ukraine for specific keywords in emails and documents that would likely only be found in classified documents, Fox-IT found.

“All the keywords related to specific classified documents or Ukrainian intelligence agencies,” Fox-IT’s Sandee said. “In some cases, the actual email addresses of persons that were working at the agencies.”

Likewise, the keyword searches that Slavik used to scour bot-infected systems in Turkey suggested the botmaster was searching for specific files from the Turkish Ministry of Foreign Affairs or the Turkish KOM — a specialized police unit. Sandee said it’s clear that Slavik was looking to intercept communications about the conflict in Syria on Turkey’s southern border — one that Russia has supported by reportedly shipping arms into the region.

“The keywords are around arms shipments and Russian mercenaries in Syria,” Sandee said. “Obviously, this is something Turkey would be interested in, and in this case it’s obvious that the Russians wanted to know what the Turkish know about these things.”

According to Sandee, Slavik kept this activity hidden from his fellow Business Club members, at least some of whom hailed from Ukraine.

“The espionage side of things was purely managed by Slavik himself,” Sandee said. “His co-workers might not have been happy about that. They would probably have been happy to work together on fraud, but if they would see the system they were working on was also being used for espionage against their own country, they might feel compelled to use that against him.”

Whether Slavik’s former co-workers would be able to collect a reward even if they did turn on their former boss is debatable. For one thing, he is probably untouchable as long as he remains in Russia. But someone like that almost certainly has protection higher up in the Russian government.

Indeed, Fox-IT’s report concludes it’s evident that Slavik was involved in more than just the crime ring around peer-to-peer ZeuS.

“We could speculate that due to this part of his work he had obtained a level of protection, and was able to get away with certain crimes as long as they were not committed against Russia,” Sandee wrote. “This of course remains speculation, but perhaps it is one of the reasons why he has as yet not been apprehended.”

The Fox-IT report, available here (PDF), is the subject of a talk today at the Black Hat security conference in Las Vegas, presented by Fox-IT’s Sandee, Elliott Peterson of the FBI, and Tillmann Werner of Crowdstrike.

Are you fascinated by detailed stories about real-life organized cybercrime operations? If so, you’ll almost certainly enjoy reading my book, Spam Nation: The Inside Story of Organized Cybercrime – From Global Epidemic to Your Front Door.

Schneier on Security: Backdoors Won’t Solve Comey’s Going Dark Problem

This post was syndicated from: Schneier on Security and was written by: schneier. Original post: at Schneier on Security

At the Aspen Security Forum two weeks ago, James Comey (and others) explicitly talked about the “going dark” problem, describing the specific scenario they are concerned about. Maybe others have heard the scenario before, but it was a first for me. It centers around ISIL operatives abroad and ISIL-inspired terrorists here in the US. The FBI knows who the Americans are, can get a court order to carry out surveillance on their communications, but cannot eavesdrop on the conversations, because they are encrypted. They can get the metadata, so they know who is talking to who, but they can’t find out what’s being said.

“ISIL’s M.O. is to broadcast on Twitter, get people to follow them, then move them to Twitter Direct Messaging” to evaluate if they are a legitimate recruit, he said. “Then they’ll move them to an encrypted mobile-messaging app so they go dark to us.”

[…]

The FBI can get court-approved access to Twitter exchanges, but not to encrypted communication, Comey said. Even when the FBI demonstrates probable cause and gets a judicial order to intercept that communication, it cannot break the encryption for technological reasons, according to Comey.

If this is what Comey and the FBI are actually concerned about, they’re getting bad advice — because their proposed solution won’t solve the problem. Comey wants communications companies to give them the capability to eavesdrop on conversations without the conversants’ knowledge or consent; that’s the “backdoor” we’re all talking about. But the problem isn’t that most encrypted communications platforms are securely encrypted, or even that some are — the problem is that there exists at least one securely encrypted communications platform on the planet that ISIL can use.

Imagine that Comey got what he wanted. Imagine that iMessage and Facebook and Skype and everything else US-made had his backdoor. The ISIL operative would tell his potential recruit to use something else, something secure and non-US-made. Maybe an encryption program from Finland, or Switzerland, or Brazil. Maybe Mujahedeen Secrets. Maybe anything. (Sure, some of these will have flaws, and they’ll be identifiable by their metadata, but the FBI already has the metadata, and the better software will rise to the top.) As long as there is something that the ISIL operative can move them to, some software that the American can download and install on their phone or computer, or hardware that they can buy from abroad, the FBI still won’t be able to eavesdrop.

And by pushing these ISIL operatives to non-US platforms, they lose access to the metadata they otherwise have.

Convincing US companies to install backdoors isn’t enough; in order to solve this going dark problem, the FBI has to ensure that an American can only use backdoored software. And the only way to do that is to prohibit the use of non-backdoored software, which is the sort of thing that the UK’s David Cameron said he wanted for his country in January:

But the question is are we going to allow a means of communications which it simply isn’t possible to read. My answer to that question is: no, we must not.

And that, of course, is impossible. Jonathan Zittrain explained why. And Cory Doctorow outlined what trying would entail:

For David Cameron’s proposal to work, he will need to stop Britons from installing software that comes from software creators who are out of his jurisdiction. The very best in secure communications are already free/open source projects, maintained by thousands of independent programmers around the world. They are widely available, and thanks to things like cryptographic signing, it is possible to download these packages from any server in the world (not just big ones like Github) and verify, with a very high degree of confidence, that the software you’ve downloaded hasn’t been tampered with.

[…]

This, then, is what David Cameron is proposing:

* All Britons’ communications must be easy for criminals, voyeurs and foreign spies to intercept.

* Any firms within reach of the UK government must be banned from producing secure software.

* All major code repositories, such as Github and Sourceforge, must be blocked.

* Search engines must not answer queries about web-pages that carry secure software.

* Virtually all academic security work in the UK must cease — security research must only take place in proprietary research environments where there is no onus to publish one’s findings, such as industry R&D and the security services.

* All packets in and out of the country, and within the country, must be subject to Chinese-style deep-packet inspection and any packets that appear to originate from secure software must be dropped.

* Existing walled gardens (like iOS and games consoles) must be ordered to ban their users from installing secure software.

* Anyone visiting the country from abroad must have their smartphones held at the border until they leave.

* Proprietary operating system vendors (Microsoft and Apple) must be ordered to redesign their operating systems as walled gardens that only allow users to run software from an app store, which will not sell or give secure software to Britons.

* Free/open source operating systems — that power the energy, banking, ecommerce, and infrastructure sectors — must be banned outright.

As extreme as it reads, without all of that, the ISIL operative would be able to communicate securely with his potential American recruit. And all of this is not going to happen.
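The cryptographic signing Doctorow mentions rests on a simple property: anyone can check a downloaded package against a digest or signature its authors publish. Real projects use GPG detached signatures, which also authenticate the publisher; the sketch below shows just the integrity half of the check, using a plain hash:

```python
# Minimal integrity check: a download only verifies if it matches the digest
# the developers published out-of-band. Any tampering changes the digest.
import hashlib

def sha256(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

release = b"secure-messenger-1.0 source tarball"   # stand-in for a real download
published_digest = sha256(release)                  # published by the developers

assert sha256(release) == published_digest                 # clean copy verifies
assert sha256(release + b"backdoor") != published_digest   # tampered copy fails
```

This is why blocking "big" repositories accomplishes nothing: a verifiable copy can be fetched from any mirror anywhere.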

Last week, former NSA director Mike McConnell, former DHS secretary Michael Chertoff, and former deputy defense secretary William Lynn published a Washington Post op-ed opposing backdoors in encryption software. They wrote:

Today, with almost everyone carrying a networked device on his or her person, ubiquitous encryption provides essential security. If law enforcement and intelligence organizations face a future without assured access to encrypted communications, they will develop technologies and techniques to meet their legitimate mission goals.

I believe this is true. Already one is being talked about in the academic literature: lawful hacking.

Perhaps the FBI’s reluctance to accept this is based on their belief that all encryption software comes from the US, and therefore is under their influence. Back in the 1990s, during the first Crypto Wars, the US government had a similar belief. To convince them otherwise, George Washington University surveyed the cryptography market in 1999 and found that there were over 500 companies in 70 countries manufacturing or distributing non-US cryptography products. Maybe we need a similar study today.

This essay previously appeared on Lawfare.

Schneier on Security: Bizarre High-Tech Kidnapping

This post was syndicated from: Schneier on Security and was written by: schneier. Original post: at Schneier on Security

This is a story of a very high-tech kidnapping:

FBI court filings unsealed last week showed how Denise Huskins’ kidnappers used anonymous remailers, image sharing sites, Tor, and other people’s Wi-Fi to communicate with the police and the media, scrupulously scrubbing meta data from photos before sending. They tried to use computer spyware and a DropCam to monitor the aftermath of the abduction and had a Parrot radio-controlled drone standing by to pick up the ransom by remote control.

The story also demonstrates just how effective the FBI is at tracing cell phone usage these days. They had a blocked call from the kidnappers to the victim’s cell phone. First they used a search warrant to AT&T to get the actual calling number. After learning that it was an AT&T prepaid Tracfone, they called AT&T to find out where the burner was bought, what the serial numbers were, and the location where the calls were made from.

The FBI reached out to Tracfone, which was able to tell the agents that the phone was purchased from a Target store in Pleasant Hill on March 2 at 5:39 pm. Target provided the bureau with a surveillance-cam photo of the buyer: a white male with dark hair and medium build. AT&T turned over records showing the phone had been used within 650 feet of a cell site in South Lake Tahoe.

Here’s the criminal complaint. It borders on surreal. Were it an episode of CSI:Cyber, you would never believe it.

Krebs on Security: The Wheels of Justice Turn Slowly

This post was syndicated from: Krebs on Security and was written by: BrianKrebs. Original post: at Krebs on Security

On the evening of March 14, 2013, a heavily-armed police force surrounded my home in Annandale, Va., after responding to a phony hostage situation that someone had alerted authorities to at our address. I’ve recently received a notice from the U.S. Justice Department stating that one of the individuals involved in that “swatting” incident had pleaded guilty to a felony conspiracy charge.

“A federal investigation has revealed that several individuals participated in a scheme to commit swatting in the course of which these individuals committed various federal criminal offenses,” reads the DOJ letter, a portion of which is here (PDF). “You were the victim of the criminal conduct which resulted in swattings in that you were swatted.”

The letter goes on to state that one of the individuals who participated in the scheme has pleaded guilty to conspiracy charges (Title 18, Section 371) in federal court in Washington, D.C.

The notice offers little additional information about the individual who pleaded guilty or about his co-conspirators, and the case against him is sealed. It could be the individual identified at the conclusion of this story, or someone else. In any case, my own digging on this investigation suggests the government is in the process of securing charges or guilty pleas in connection with a group of young men who ran the celebrity “doxing” Web site exposed[dot]su (later renamed exposed[dot]re).

As I noted in a piece published just days after my swatting incident, the attack came not long after I wrote a story about the site, which was posting the Social Security numbers, previous addresses, phone numbers and credit reports on a slew of high-profile individuals, from the director of the FBI to Kim Kardashian, Bill Gates and First Lady Michelle Obama. Many of those individuals whose personal data were posted at the site also were the target of swatting attacks, including P. Diddy, Justin Timberlake and Ryan Seacrest.

The Web site exposed[dot]su featured the personal data of celebrities and public figures.

Sources close to the investigation say Yours Truly was targeted because this site published a story correctly identifying the source of the personal data that the hackers posted on exposed[dot]su. According to my sources, the young men, nearly all of whom are based here in the United States, obtained the personal data after hacking into a now-defunct online identity theft service called ssndob[dot]ru.

Investigative reporting first published on KrebsOnSecurity in September 2013 revealed that the same miscreants controlling ssndob[dot]ru (later renamed ssndob[dot]ms) siphoned personal data from some of America’s largest consumer and business data aggregators, including LexisNexis, Dun & Bradstreet and Kroll Background America.

The administration page of ssndob[dot]ru. Note the logged in user, ssndob@ssa.gov, is the administrator.

I look forward to the day that the Justice Department releases the names of the individuals responsible for these swatting incidents, for running exposed[dot]su, and for hacking the ssndob[dot]ru ID theft service. While that identity theft site went offline in 2013, several competing services have unfortunately sprung up in its wake, offering the ability to pull Social Security numbers, dates of birth, previous addresses and credit reports on virtually all Americans.

Further reading:

Who Built the Identity Theft Service SSNDOB[dot]RU? 

Credit Reports Sold for Cheap in the Underweb

Data Broker Giants Hacked by ID Theft Service

Data Broker Hackers Also Compromised NW3C

Swatting Incidents Tied to ID Theft Sites?

Toward a Breach Canary for Data Brokers

How I Learn to Stop Worrying and Embrace the Credit Freeze

Schneier on Security: Using Secure Chat

This post was syndicated from: Schneier on Security and was written by: schneier. Original post: at Schneier on Security

Micah Lee has a good tutorial on installing and using secure chat.

To recap: We have installed Orbot and connected to the Tor network on Android, and we have installed ChatSecure and created an anonymous secret identity Jabber account. We have added a contact to this account, started an encrypted session, and verified that their OTR fingerprint is correct. And now we can start chatting with them with an extraordinarily high degree of privacy.

FBI Director James Comey, UK Prime Minister David Cameron, and totalitarian governments around the world all don’t want you to be able to do this.

Krebs on Security: The Darkode Cybercrime Forum, Up Close

This post was syndicated from: Krebs on Security and was written by: BrianKrebs. Original post: at Krebs on Security

By now, many of you loyal KrebsOnSecurity readers have seen stories in the mainstream press about the coordinated global law enforcement takedown of Darkode[dot]me, an English-language cybercrime forum that served as a breeding ground for botnets, malware and just about every other form of virtual badness. This post is an attempt to distill several years’ worth of lurking on this forum into a narrative that hopefully sheds light on the individuals apprehended in this sting and the cybercrime forum scene in general.

To tell this tale completely would take a book the size of The Bible, but it’s useful to note that the history of Darkode — formerly darkode[dot]com — traces several distinct epochs that somewhat neatly track the rise and fall of the forum’s various leaders. What follows is a brief series of dossiers on those leaders, as well as a look at who these people are in real life.

ISERDO

Darkode began almost eight years ago as a pet project of Matjaz Skorjanc, a now-36-year-old Slovenian hacker best known under the hacker aliases “Iserdo” and “Netkairo.” Skorjanc was one of several individuals named in the complaints published today by the U.S. Justice Department.

Butterfly Bot customers wonder why Iserdo isn’t responding to support requests. He was arrested hours before.

Iserdo was best known as the author of the ButterFly Bot, a plug-and-play malware strain that allowed even the most novice of would-be cybercriminals to set up a global cybercrime operation capable of harvesting data from thousands of infected PCs, and using the enslaved systems for crippling attacks on Web sites. Iserdo was arrested by Slovenian authorities in 2010. According to investigators, his ButterFly Bot kit sold for prices ranging from $500 to $2,000.

In May 2010, I wrote a story titled Accused Mariposa Botnet Operators Sought Jobs at Spanish Security Firm, which detailed how Skorjanc and several of his associates actually applied for jobs at Panda Security, an antivirus and security firm based in Spain. At the time, Skorjanc and his buddies were already under the watchful eye of the Spanish police.

MAFI

Following Iserdo’s arrest, control of the forum fell to a hacker known variously as “Mafi,” “Crim” and “Synthet!c,” who according to the U.S. Justice Department is a 27-year-old Swedish man named Johan Anders Gudmunds. Mafi is accused of serving as the administrator of Darkode, and creating and selling malware that allowed hackers to build botnets. The Justice Department also alleges that Gudmunds operated his own botnet, “which at times consisted of more than 50,000 computers, and used his botnet to steal data from the users of those computers on approximately 200,000,000 occasions.”

Mafi was best known for creating the Crimepack exploit kit, a prepackaged bundle of commercial crimeware that attackers can use to booby-trap hacked Web sites with malicious software. Mafi’s stewardship over the forum coincided with the admittance of several high-profile Russian cybercriminals, including “Paunch,” an individual arrested in Russia in 2013 for selling a competing and far more popular exploit kit called Blackhole.

Paunch worked with another Darkode member named “J.P. Morgan,” who at one point maintained an $800,000 budget for buying so-called “zero-day vulnerabilities,” critical flaws in widely-used commercial software like Flash and Java that could be used to deploy malicious software.

Darkode admin “Mafi” explains his watermarking system.

Perhaps unsurprisingly, Mafi’s reign as administrator of Darkode coincided with the massive infiltration of the forum by a number of undercover law enforcement investigators, as well as several freelance security researchers (including this author).

As a result, Mafi spent much of his time devising new ways to discover which user accounts on Darkode were those used by informants, feds and researchers, and which were “legitimate” cybercriminals looking to ply their wares.

For example, in mid-2013 Mafi and his associates cooked up a scheme to create a fake sales thread for a zero-day vulnerability — all in a bid to uncover which forum participants were researchers or feds who might be lurking on the forum.

That plan, which relied on a clever watermarking scheme designed to “out” any forum members who posted screen shots of the forum online, worked well but also gave investigators key clues about the forum’s hierarchy and reporting structure.
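Mafi's actual watermarking method was never made public; reported accounts suggest each member saw subtly different page renderings, so any posted screenshot could be matched back to one account. The same idea applied to copied *text* can be sketched with zero-width characters (a hedged illustration, not the forum's real scheme — and note it only survives copy-paste, not a screenshot, where the per-user variation would have to be visual):

```python
# Per-user invisible watermark: encode the viewer's user id as zero-width
# characters appended to the page text, then recover it from any leaked copy.

ZERO_WIDTH = ["\u200b", "\u200c"]   # zero-width space / non-joiner encode 0 and 1

def watermark(text: str, user_id: int, bits: int = 8) -> str:
    """Append user_id (must fit in `bits` bits) as invisible characters."""
    tag = "".join(ZERO_WIDTH[(user_id >> i) & 1] for i in range(bits))
    return text + tag

def identify(leaked: str, bits: int = 8) -> int:
    """Recover the user id embedded in a leaked copy."""
    tag = leaked[-bits:]
    return sum(1 << i for i, ch in enumerate(tag) if ch == ZERO_WIDTH[1])

page = watermark("Selling 0day, PM me.", user_id=42)
assert page.startswith("Selling 0day")   # renders identically on screen
assert identify(page) == 42              # but names the account that leaked it
```

Either variant has the same operational payoff: the leak itself identifies the leaker, which is exactly the clue structure investigators could then turn back on the forum's hierarchy.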

Mafi worked quite closely with another prominent Darkode member nicknamed “Fubar,” and together the two of them advertised sales of a botnet crimeware package called Ngrbot (according to Mafi’s private messages on the forum, this was short for “Niggerbot.” Oddly enough, the password databases from several of Mafi’s accounts on hacked cybercrime forums would all include variations on the word “nigger” in some form). Mafi also advertised the sale of botnets based on “Grum” a spam botnet whose source code was leaked in 2013.

SP3CIALIST

Conspicuously absent from the Justice Department’s press release on this takedown is any mention of Darkode’s most recent administrator — a hacker who goes by the handle “Sp3cialist.”

Better known to Darkode members as “Sp3c,” this individual’s principal contribution seems to have revolved around a desire to massively expand the forum’s membership, as well as an obsession with purging the community of anyone who even remotely might emit a whiff of being a fed or researcher.

The personal signature of Sp3cialist.

Sp3c is widely known as a core member of the Lizard Squad, a group of mostly low-skilled miscreants who specialize in launching distributed denial-of-service attacks (DDoS) aimed at knocking Web sites offline.

In late 2014, the Lizard Squad took responsibility for launching a series of high-profile DDoS attacks that knocked offline the online gaming networks of Sony and Microsoft for the majority of Christmas Day.

In the first few days of 2015, KrebsOnSecurity was taken offline by a series of large and sustained denial-of-service attacks apparently orchestrated by the Lizard Squad. As I noted in a previous story, the booter service — lizardstresser[dot]su — is hosted at an Internet provider in Bosnia that is home to a large number of malicious and hostile sites. As detailed in this story, the same botnet that took Sony and Microsoft offline was built using a global network of hacked wireless routers.

That provider happens to be on the same “bulletproof” hosting network advertised by “sp3c1alist,” the administrator of the cybercrime forum Darkode. At the time, Darkode and LizardStresser shared the same Internet address.

Another key individual named in the Justice Department’s complaint against Darkode is a hacker known only to most in the underground as “KMS.” The government says KMS is a 28-year-old from Opelousas, Louisiana named Rory Stephen Guidry, who used the Jabber instant message address “k@exploit.im.” Having interacted with this individual on numerous occasions, I’d be remiss if I didn’t at least explain why this person is at once the least culpable and perhaps most interesting of the group named in the law enforcement purge.

For the past 12 months, KMS has been involved in an effort to expose the Lizard Squad members, to varying degrees of success. To call this kid a master in social engineering is probably a disservice to the term of art itself: There are few individuals I would consider more skilled in tricking people into divulging information that is not in their best interests than this guy.

Near as I can tell, KMS has worked assiduously (for his own reasons, no doubt) to expose the people behind the Lizard Squad and, by extension, the core members of Darkode. Unfortunately for KMS, his activities also appear to have ensnared him in this investigation.

To be clear, nobody is saying KMS is a saint. KMS’s best friend, a hacker from Kentucky named Ryan King (a.k.a. “Starfall” and a semi-frequent commenter on this blog), says KMS routinely had trouble seeing the lines between exposing others and involving himself in their activities. This kid was a master social engineer, almost without peer. Here’s one recording of him making a fake emergency call to the FBI, eerily disguising his voice as that of President Obama.

For example, KMS is rumored to have played a part in exposing the Lizard Squad’s February 2015 hijack of Google.com’s domain in Vietnam. The message left behind in that crime suggested this author was somehow responsible, along with Sp3c and a Rory Andrew Godfrey, the only name that KMS was known under publicly until this week’s law enforcement action.

“As far as I know, I’m the only one who knew his real name,” said King, who described himself as a close personal friend and longtime acquaintance of Guidry. “The only botnets that he operated were those that he social engineered out of [less skilled hackers], but even those he was trying to get shut down. All I know is that he and I were trying to get [root] access to Darkode and destroy it, and the feds beat us to it by about a week.”

The U.S. government sees things otherwise. Included in a heavily-redacted affidavit (PDF) related to Guidry’s case are details of a pricing structure that investigators say KMS used to sell access to hacked machines (see screenshot below).

kmsbot

As mentioned earlier, I could go on for volumes about the litany of cybercrimes advertised at Darkode. Instead, it’s probably best if I just leave here a living archive of screen grabs I’ve taken over the years of various discussions on the Darkode forum.

In its final days, Darkode’s true Internet address was protected from DDoS attacks and from meddlesome researchers by CloudFlare, a content distribution network that specializes in helping Web sites withstand otherwise crippling attacks. As such, it seems fitting that at least some of my personal archive of screen shots from my time on Darkode should also be hosted there. Happy hunting.

One final note: As happens with many of these takedowns, the bad guys don’t just go away: They go someplace else. In this case, that someplace else is most likely to be a Deep Web or Dark Web forum accessible only via Tor: According to chats observed from Sp3c’s public and private online accounts, the forum is getting ready to move much further underground.

Linux How-Tos and Linux Tutorials: Give Your Raspberry Pi Night Vision With the PiNoir Camera

This post was syndicated from: Linux How-Tos and Linux Tutorials and was written by: Ben Martin. Original post: at Linux How-Tos and Linux Tutorials

raspaccess

The Raspberry Pi and Pi2 are economical little ARM machines which can happily run Linux. The popularity of the Raspberry Pi and compatible Pi 2 models means that a great many accessories are available. These accessories include the PiNoir Camera and 4D Systems’ touch-sensitive, 3.5-inch display.

The PiNoir camera is so named because it does not have an Infrared Filter (no-IR). Without an IR filter the camera can be used at night, provided you have an infrared light source. With night vision you can use the Raspberry Pi as an around-the-clock surveillance camera monitor, baby monitor, or to give vision to a robot. The PiNoir Camera comes without a case, so you might like to pick up something to help protect it.

I’ll be setting up the 4D Systems screen and then taking a look not only at the PiNoir Camera, but also at how well it functions in combination with an infrared light source which offers a fairly wide beam and up to 10 meters of lighting. The camera connects to the Camera Serial Interface (CSI) port on the Raspberry Pi and the screen connects to the 40 pin expansion header on the Raspberry Pi.

The 4D Systems 3.5-Inch Screen

The 4D Systems screen runs at a 480×320 resolution with 65k colors. Physically the screen is about the same size as the Raspberry Pi 2. The screen mounting screw tabs extend a little beyond the Raspberry Pi on the USB connector side of the device.

The datasheet for the screen mentions that you should include some shielding to prevent accidental electrical contact between items on the back of the screen and the Raspberry Pi. I found that the socket connecting the screen, together with the USB ports, provides decent support for the screen. One chip could easily be made to touch the Ethernet connector on the Raspberry Pi, so some form of non-conducting standoff would probably be advisable to stop that from happening. There are also male pins on the back of the screen; they seem to have a reasonable clearance from the Raspberry Pi, but again some form of shielding would be wise. The best solution would be to create a case with mounting holes for both the Raspberry Pi and the screen, keeping the two at the correct distance from each other.

There are drivers for both the Raspberry Pi and Pi2 models for Raspbian which include screen and touch support. The Raspberry Pi 2 has a 40 pin expansion header running down one side of it. The back of the screen has a 26 pin female header to connect to the Raspberry Pi. The first 26 pins on the Raspberry Pi2 header are in the same configuration as the earlier Pi models. The common 26 pins are at the end farthest away from the USB sockets on the Raspberry Pi 2.

Before connecting the screen you should install the drivers for it. These are linked from the download section of the manufacturer 4D Systems’ product page. I was running Raspbian 7 (Wheezy) and used the drivers from kernel4dpi_1.3-3_pi2.deb to test the screen. Setting up the screen drivers is done by installing the kernel package as shown below. I found that if I performed an apt-get upgrade then I had to also reinstall the kernel4dpi to get the screen working again.

root@pi:~# wget https://.../kernel4dpi_1.3-3_pi2.deb
root@pi:~# dpkg -i kernel4dpi_1.3-3_pi2.deb
...
Enable boot to GUI [Yn]y  

After powering down the Raspberry Pi and attaching the screen I could see the boot messages as I restarted the Raspberry Pi, but unfortunately, once booting was complete I saw the screen backlight but no image. I was then running kernel version Linux pi 3.18.9-v7+. After digging around, I noticed the following line in /etc/rc.local, which should have brought up an X session on a framebuffer device backed by the screen.

sudo -u pi FRAMEBUFFER=/dev/fb1 startx &

From an SSH session I decided to run startx with the selected FRAMEBUFFER and the screen came to life with a nice desktop. My installation of Raspbian had S05kdm in its default runlevel (2) startup. Disabling S05kdm and rebooting brought up a graphical session right after boot as one would hope.

The screen provides a framebuffer device which is quite handy, as it allows you to view text, images, and video without using a full desktop if that is what you want. The following commands will view an image or video directly on the screen. Toolkits such as Qt will also let you run on a framebuffer.

root@pi:/usr/share/pixmaps# fbi -T 2 -d /dev/fb1 MagicLinuxPenguins.png 
root@pi:~# mplayer -vo fbdev:/dev/fb1  -vf scale=480:320  test.mkv

Touch calibration happens at two levels: using ts_calibrate on the framebuffer and xinput_calibrator for the X Window session. Calibration in both cases is fairly simple, clicking on four or five positions on the screen when asked. The screen orientation can be changed allowing four different rotations using the /boot/cmdline.txt file. All of these procedures are well documented in the datasheet for the screen.

Dimming and turning off the backlight can apparently be controlled using GPIO18 and an exposed /sys/class/backlight/4dpi/brightness file. Although I had a jumper on the J1 connector of the screen in the PWM position, writing to the brightness file did not change the screen brightness while running an X session. Perhaps X itself was in charge of the screen dimming or I had a configuration issue with the Raspberry Pi 2.

The PiNoir Camera

The PiNoir Camera is capable of capturing 30 frames per second for 1080p video and still images up to 5 megapixels, which is 2592×1944. The camera comes connected to a board with a fairly short ribbon cable attached, which you then attach to the Raspberry Pi. The Raspberry Pi has a CSI port near the Ethernet port to connect the camera ribbon cable into. There is a clip which you pull upwards on each end; you can then move the clip backwards a little bit, allowing the ribbon to be inserted into the CSI port. The clip can then be moved back and pushed down to lock the ribbon into place. The manuals mention that the camera and ribbon are static sensitive. After grounding myself before assembling the camera into its enclosure and connecting the camera ribbon to the CSI port, the camera worked fine.

Once the hardware is connected you might need to enable the CSI port in software. To do this run raspi-config and select Enable Camera from the menu. You will then have to reboot your Raspberry Pi.

The raspistill program will let you get your first still image from the camera. The only thing you’ll need to tell it is where to save that image file to, as shown below. A 5-second delay will precede the actual image capture and the result will be saved into test.jpg. I found that the light setup was better with this delay; using a --timeout 100 parameter to take the image after only a 100 millisecond delay resulted in an image with much poorer exposure.

pi@pi~$ raspistill  -o test.jpg
pi@pi~$ raspistill --timeout 100 -o badlight.jpg

To test how well exposed an image was after camera startup I took a time lapse series of 10 images, one each second using the below command. It took about 3 seconds for the image to go from a rather green image to a much more color-rich image.

pi@pi ~ $ raspistill --timeout 10000 --timelapse 1000 \
                     --nopreview -n -o test%04d.jpg

The raspivid tool is a good place to start when you want to get video from the camera. Unfortunately the test.h264 file created by the first command will be without any video file container, that is, just the raw h264 video stream. This makes it a bit harder to play back with normal video players, so the MP4Box command can be used to create an mp4 file which contains the raw test.h264 stream. The test.mp4 can be played using mplayer and other tools.

  pi@pi ~ $ raspivid -t 5000 -o test.h264
  pi@pi ~ $
  pi@pi ~ $ sudo apt-get install -y gpac
  pi@pi ~ $ MP4Box -fps 30 -add test.h264 test.mp4

If you want to export the video stream over the network, gstreamer might be a more useful tool. There is a gst-rpicamsrc project which adds support for the Raspberry Pi Camera as a source to gstreamer. Unfortunately, at the time of writing gst-rpicamsrc is not packaged in the main Raspbian distribution. Installing the libgstreamer1.0-dev package allowed me to git pull from the gst-rpicamsrc repository and build and install that module just as the commands in its README describe. I’ve replicated the commands below for ease of use.

pi@pi ~/src $ sudo apt-get install libgstreamer1.0-dev  libgstreamer-plugins-base1.0-dev
pi@pi ~/src $ git clone https://github.com/thaytan/gst-rpicamsrc.git
pi@pi ~/src $ cd ./gst-rpicamsrc/
pi@pi ~/src $ ./autogen.sh --prefix=/usr --libdir=/usr/lib/arm-linux-gnueabihf/
pi@pi ~/src $ make
pi@pi ~/src $ sudo make install

Now that you have the gst-rpicamsrc module installed you can send the h264 stream over the network using the below commands. I found that even at 1080 resolution and a high profile there was very little CPU used on the Raspberry Pi2 to stream the camera. CPU usage was in the single-digit values in a top(1) display.

The Raspberry Pi supports encoding video in hardware and because the rpicamsrc is supplying h264 encoded video right from the source I suspect that the hardware encoding was being used to produce that video stream. The first two commands stream 720 or 1080 video with a slightly different encode profile. The final command should be run on the ‘mydesktop’ machine to catch the stream and view it in a window.

  pi@pi~$ gst-launch-1.0 -v rpicamsrc bitrate=1000000 \
  ! video/x-h264,width=1280,height=720,framerate=15/1 \
  ! rtph264pay config-interval=1 \
  ! udpsink host=mydesktop port=5000
  pi@pi~$ gst-launch-1.0 -v rpicamsrc bitrate=1000000 \
  ! video/x-h264,width=1920,height=1080,framerate=15/1,profile=high \
  ! rtph264pay config-interval=1 \
  ! udpsink host=mydesktop port=5000
  me@mydesktop ~$ gst-launch-1.0 udpsrc port=5000 \
  caps="application/x-rtp,media=(string)video,clock-rate=(int)90000,encoding-name=(string)H264" \
  ! rtph264depay ! decodebin ! autovideosink

Night Vision with the PiNoir Camera and IR source

The datasheet for the PiNoir camera states that it wants infrared light at around 880nm wavelength. The TV6700 IR Illuminator operates at 850nm, so it might not be directly on the sweet spot that the PiNoir is expecting. The specifications of the IR illuminator are a 10 meter range and a 70 degree radiation angle. The LEDs are rated for up to 6,000 hours of use and there is an automatic on/off capability to turn off the illuminator during the day.

The TV6700 IR illuminator is a smallish round metal cylinder; through a clear cover at one end you can see the LEDs. The unit wants a 12-Volt power supply and there is a Y splitter with one female and two male DC jacks. If your existing camera’s power supply is 12 V and has some spare power capacity then the splitter will let you grab power right off the camera and mount the IR illuminator nearby. There is also a U shape bracket and some mounting screws.

An initial test of the illuminator was done in a room about 5 yards square. At night, with a 3-Watt LED desk light and a fairly dark LCD screen, I could just make out items such as a fan in the near ground (2 feet from the camera), but items on a bookshelf 2 yards from the camera were much harder to see; I could only read very large text on the spines of white books. Turning off the 3 W LED blackened the PiNoir camera display completely. It seems the light emitted by the monitor when viewing a dark image is not enough for the PiNoir Camera to provide much of an image.

Leaving the 3 W normal LED light off and turning on the IR illuminator made the whole picture clearly visible in greyscale. Waving a hand in front of the camera came through in real time even though the room was dark. Using IR lighting and a camera that does not filter IR has the advantage of being able to stream fast movement properly. Using a still camera to take a sequence of longer exposure images would likely result in a collection of motion-blurred images.

Moving to an outside setting with a large grass lawn at night, I could make out a person a little over 7 meters away from the Raspberry Pi, almost across the entire width of the captured video image. Perhaps only getting 7 meters of visibility was the result of not precisely matching the infrared wavelength that the PiNoir camera expects.

RPi for Video Monitoring Applications

While the Raspberry Pi runs perfectly well without any display, some applications might benefit from being able to display information on demand. Because the 4D Systems 3.5 inch screen is about the same dimensions as the Raspberry Pi board it can offer a display without increasing the physical size of the Raspberry Pi or requiring external power for the screen.

Being able to produce and stream 1080 h264-encoded video using the PiNoir camera in low-light situations opens up the Raspberry Pi to video monitoring applications; for example, a headless Raspberry Pi with a PiNoir camera makes a capable baby monitor. The main downside to using the Raspberry Pi as a monitor is that the CSI cable is rather short, especially when compared to a USB cable. If you want a Raspberry Pi at the heart of your robot then you can stream nice robot eye camera vision over the network without a great impact on the robot’s CPU performance.

The IR source is a must have if you plan to use the PiNoir camera for security monitoring. Being able to see who is doing what while they might be unaware of the camera or (infrared) light source helps you respond to the situation in an appropriate manner.

We would like to thank RS Components for supplying the hardware used in these articles. The PiNoir Camera, 4D Systems touch-sensitive 3.5-inch display, infrared light source, and camera case are available for purchase on RS Components’ Australian website while stocks last, or through RS Components’ main website, if you’re ordering internationally.

TorrentFreak: FBI Assists Overseas Pirate Movie Site Raids

This post was syndicated from: TorrentFreak and was written by: Andy. Original post: at TorrentFreak

fbi-logoThere are many thousands of so-called ‘pirate’ sites online today, each specializing in a particular area. Some choose to target movies or music, for example, while others take a more general approach.

What most sites have in common, however, is their appetite for content created in the United States. This means that wherever they are in the world, it’s likely that these sites will attract the attention of some of the world’s largest entertainment companies and their law enforcement partners. That scenario now appears to have played out in Romania.

According to a report from the prosecutor’s office at Romania’s High Court of Cassation and Justice, an investigation dating back to 2011 against a trio of movie and TV show streaming sites has just resulted in police and officers from organized crime units carrying out raids.

Although not detailed by officials, one of the country’s most popular streaming portals is now offline. Visitors to Serialepenet.ro are now being transparently redirected to a server operated by the Romanian Ministry of Justice carrying the message below.

rom-seized

Fisierulmeu.ro, one of the most popular file-hosting sites in Romania, is also down.

According to the prosecutor, authorities carried out searches at four locations including the homes of several suspects and companies believed to be offering services to the sites.

Local TV outlet StirileProTV says that after searching in the U.S. and Europe, the FBI and local police managed to track down the operation to an office block in the capital Bucharest.

xservers

The building contains Romanian web-hosting company Xservers which was featured in local media in connection with the case. Media allegations suggest that the company is somehow implicated in laundering money from the piracy operation but no official statement has yet been issued.

Several men were arrested on suspicion of intellectual property offenses, tax evasion and money laundering. Documents and computer hardware were also seized.

Source: TorrentFreak, for the latest info on copyright, file-sharing, torrent sites and the best VPN services.

Errata Security: ProxyHam conspiracy is nonsense

This post was syndicated from: Errata Security and was written by: Robert Graham. Original post: at Errata Security

This DEF CON conspiracy theory concerns a talk about “ProxyHam” that was canceled under mysterious circumstances. It’s nonsense.

The talk was hype to begin with. You can buy a 900 MHz bridge from Ubiquiti for $125 (or a MikroTik device for $129) and attach it to a Raspberry Pi. How you’d do this is obvious. It’s a good DEF CON talk, because it’s the application that’s important, but the technical principles here are extremely basic.
If you look carefully at the pic in the Wired story on ProxyHam, it appears they are indeed just using the Ubiquiti device. Here is the pic from Wired:
And here is the pic from Ubiquiti’s website:
I don’t know why the talk was canceled. One likely reason is that the stories (such as the one on Wired) sensationalized the thing, so maybe their employer got cold feet. Or maybe the FBI got scared and really did give them an NSL, though that’s incredibly implausible. The feds have other ways to encourage people to be silent (I’ve personally been threatened to cancel a talk), but it wouldn’t be an NSL.
Anyway, if DEF CON wants a talk on how to hook up a Raspberry Pi to a Ubiquiti NanoStation LOCOM9 in order to bridge WiFi, I’ll happily give that talk. It’s just basic TCP/IP configuration, and if you want to get fancy, some VPN configuration for encryption. Just give me enough lead time to actually buy the equipment and test it out. Also, if DEF CON wants to actually set this up in order to get long distance WiFi working to other hotels, I’ll happily buy a couple units and set them up this way.

Update: Accessing somebody’s open WiFi, like at Starbucks, is (probably) not a violation of the CFAA (Computer Fraud and Abuse Act). The act is vague, of course, so almost anything you do on a computer can violate the CFAA if prosecutors want to go after you, but at the same time, this sort of access is far from the original intent of the CFAA. Public WiFi at places like Starbucks is public.

This is also not a violation of FCC Part 97, which forbids ham radios from encrypting data. The device operates in the unlicensed ISM bands, so it is not covered by ham rules, despite the name “ProxyHam”.


Update: An even funner talk, which I’ve long wanted to do, is to do the same thing with cell phones. Take a cellphone, pull it apart, disconnect the antenna, then connect it to a highly directional antenna pointed at a distant cell tower — several cells away. You’d then be physically nowhere near where the cell tower thinks you are. I don’t know enough about how to block signals in other directions, though — radio waves are hard.


Update: There are other devices than those I mention.

SANS Internet Storm Center, InfoCON: green: Detecting Random – Finding Algorithmically chosen DNS names (DGA), (Thu, Jul 9th)

This post was syndicated from: SANS Internet Storm Center, InfoCON: green and was written by: SANS Internet Storm Center, InfoCON: green. Original post: at SANS Internet Storm Center, InfoCON: green

Most normal user traffic communicates via a hostname and not an IP address. So looking at traffic communicating directly by IP with no associated DNS request is a good thing to do. Some attackers use DNS names for their communications. There is also malware, such as Skybot and the Styx exploit kit, that uses algorithmically chosen host names rather than IP addresses for its command and control channels. This malware uses what have been called DGAs, or Domain Generation Algorithms, to create random-looking host names for its TLS command and control channel or to digitally sign its SSL certificates. These do not look like normal host names. A human being can easily pick them out of our logs and traffic, but it turns out to be a somewhat challenging thing to do in an automated process. Natural Language Processing and measuring randomness don’t seem to work very well. Here is a video that illustrates the problem and one possible approach to solving it.

One way you might try to solve this is with a tool called ent, a great Linux tool for measuring the entropy of data. Highly random data scores close to 8 bits of entropy per byte, while completely predictable data scores close to 0:

[~]$ head -c 1000000 /dev/urandom | ent
Entropy = 7.999982 bits per byte. -- 8 = highly random
[~]$ python -c 'print "A"*1000000' | ent
Entropy = 0.000021 bits per byte. -- 0 = not random

So 8 is highly random and 0 is not random at all. Now measure some legitimate host names:

[~]$ echo google | ent
Entropy = 2.235926 bits per byte.
[~]$ echo clearing-house | ent
Entropy = 3.773557 bits per byte. -- Valid hosts are in the 2 to 4 range

Google scores 2.23 and clearing-house scores 3.7, so it appears as though legitimate host names will be in the 2 to 4 range. Now the malicious ones:

[~]$ echo e6nbbzucq2zrhzqzf | ent
Entropy = 3.503258 bits per byte.
[~]$ echo sdfe3454hhdf | ent
Entropy = 3.085055 bits per byte. -- Malicious hosts from Skybot and Styx malware are in the same range as valid hosts

That’s no good. Known malicious host names are also in the 2 to 4 range; they score just about the same as normal host names. We need a different approach to this problem.
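The entropy figures ent reports are easy to reproduce, which makes the dead end above simple to verify yourself. Here is a minimal Python sketch of the same Shannon entropy-per-byte calculation (the `shannon_entropy` helper is my own name for it, not part of ent or any tool discussed here):

```python
import math
from collections import Counter

def shannon_entropy(data: bytes) -> float:
    """Bits of entropy per byte -- the same figure ent reports."""
    counts = Counter(data)
    total = len(data)
    return -sum((n / total) * math.log2(n / total) for n in counts.values())

# echo appends a newline, so include one to match ent's output
print(shannon_entropy(b"google\n"))             # ~2.2359, matching ent's 2.235926
print(shannon_entropy(b"e6nbbzucq2zrhzqzf\n"))  # ~3.5033 -- inside the "valid" 2 to 4 range
```

The DGA name lands squarely inside the range of legitimate names, which is exactly why raw entropy fails here.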

Normal readable English has some pairs of characters that appear more frequently than others. TH, QU and ER appear very frequently, but other pairs like WZ appear very rarely. Specifically, there is approximately a 40% chance that a T will be followed by an H, approximately a 97% chance that a Q will be followed by the letter U, and approximately a 19% chance that an E is followed by an R. As for unlikely pairs, there is approximately a 0.004% chance that a W will be followed by a Z. So here is the idea: let’s analyze a bunch of text and figure out what normal looks like, then measure host names against those tables. I’m making this script and a Windows executable version of this tool available for you to try out. Let me know how it works. Here is a look at how to use the tool.

Step 1) You need a frequency table. I have shared two of them on my GitHub; if you want to use them, you can download them and skip to Step 2.

1a) Create the table: I’m creating a table called custom.freq.

C:\freq> freq.exe --create custom.freq

1b) You can optionally turn ON case sensitivity if you want the frequency table to count uppercase letters and lowercase letters separately. Without this option the tool will convert everything to lowercase before counting character pairs.

C:\freq> freq.exe -t custom.freq

1c) Next, fill the frequency table with normal text. You might load it with known legitimate host names, such as the Alexa top 1 million most commonly accessed websites (http://s3.amazonaws.com/alexa-static/top-1m.csv.zip). I will just load it up with famous works of literature.

C:\freq> for %i in (txtdocs\*.*) do freq.exe --normalfile %i custom.freq
C:\freq> freq.exe --normalfile txtdocs\center_earth custom.freq
C:\freq> freq.exe --normalfile txtdocs\defoe-robinson-103.txt custom.freq
C:\freq> freq.exe --normalfile txtdocs\dracula.txt custom.freq
C:\freq> freq.exe --normalfile txtdocs\freck10.txt custom.freq
...
C:\freq>

Step 2) Measure badness!

Once the frequency table is filled with data, you can start measuring strings to see how probable they are according to our frequency tables.

C:\freq> freq.exe --measure google custom.freq
6.59612840648
C:\freq> freq.exe --measure clearing-house custom.freq
12.1836883765

So normal host names have a probability above 5 (at least these two, and most others, do). We will consider anything above 5 to be good for our tests.

C:\freq> freq.exe --measure asdfl213u1 custom.freq
3.15113061843
C:\freq> freq.exe --measure po24sf92cxlk custom.freq

Our malicious hosts score less than 5, so 5 seems to be a pretty good benchmark. In my testing it works pretty well for picking out these abnormal host names, but it isn’t perfect. Nothing is. One problem is that very small host names, and acronyms that are not in the source files you use to build your frequency tables, will also score below 5. For example, “fbi” and “cia” both come up below 5 when I build my frequency tables from classic literature alone. But I am not limited to classic literature. That leads us to Step 3.

Step 3) Tune for your organization.

The real power of frequency tables comes when you tune them to match normal traffic for your network, using two options: --normal and --odd. --normal can be given a normal string and it will update the frequency table with that string. Both --normal and --odd can be used with the --weight option to control how much influence the given string has on the probabilities in the frequency table. Its effectiveness is demonstrated by the accompanying YouTube video. Note that marking random host names as --odd is not a good strategy; it simply injects noise into the frequency table. Like everything else in security, identifying all the bad in the world is a losing proposition. Instead, focus on learning normal and identifying anomalies. So passing --normal cia --weight 10000 adds 10000 counts of the pair “ci” and the pair “ia” to the frequency table and increases the probability of “cia”:

C:\freq> freq.exe --normal cia --weight 10000 custom.freq

The source code and a Windows executable version of this program can be downloaded from here: https://github.com/MarkBaggett/MarkBaggett/tree/master/freq
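If you just want the gist of the technique without reading the repository, the character-pair idea can be sketched in a few lines of Python. To be clear, this is a toy illustration, not the actual freq code: the function names, the add-one smoothing, and the 26-way smoothing denominator are my own simplifications, so its scores will not match freq.exe’s output.

```python
from collections import Counter

def build_table(text):
    """Count adjacent character pairs in the training text."""
    text = "".join(ch for ch in text.lower() if ch.isalnum())
    pairs = Counter(text[i:i + 2] for i in range(len(text) - 1))
    firsts = Counter(text[:-1])  # how often each character starts a pair
    return pairs, firsts

def score(name, table, smoothing=1):
    """Average percent chance of each adjacent character pair.
    Higher means the name looks more like the training text."""
    pairs, firsts = table
    name = "".join(ch for ch in name.lower() if ch.isalnum())
    if len(name) < 2:
        return 0.0
    probs = [100.0 * (pairs[name[i:i + 2]] + smoothing)
             / (firsts[name[i]] + 26 * smoothing)
             for i in range(len(name) - 1)]
    return sum(probs) / len(probs)

table = build_table("the quick brown fox jumps over the lazy dog")
print(score("there", table) > score("xqzw", table))  # True: English-like beats random
```

Tuning works the same way as in the real tool: feeding a string like “cia” back in as normal traffic simply adds weighted counts for the pairs “ci” and “ia”, raising its score on future measurements.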

Tomorrow in my diary I will show you some other cool things you can do with this approach, and how you can incorporate it into your own tools.

Follow me on twitter @MarkBaggett

Want to learn to use this code in your own scripts or build tools of your own? Join me for Python SEC573 in Las Vegas this September 14th! Click here for more information.

What do you think? Leave a comment.

(c) SANS Internet Storm Center. https://isc.sans.edu Creative Commons Attribution-Noncommercial 3.0 United States License.

Krebs on Security: Finnish Decision is Win for Internet Trolls

This post was syndicated from: Krebs on Security and was written by: BrianKrebs. Original post: at Krebs on Security

In a win for Internet trolls and teenage cybercriminals everywhere, a Finnish court has decided not to incarcerate a 17-year-old found guilty of more than 50,000 cybercrimes, including data breaches, payment fraud, operating a huge botnet and calling in bomb threats, among other violations.

Julius "Ryan" Kivimaki.

Julius “Ryan” Kivimaki.

As the Finnish daily Helsingin Sanomat reports, Julius Kivimäki — a.k.a. “Ryan” and “Zeekill” — was given a two-year suspended sentence and ordered to forfeit EUR 6,558.

Kivimaki vaulted into the media spotlight late last year when he claimed affiliation with the Lizard Squad, a group of young hooligans who knocked offline the gaming networks of Microsoft and Sony for most of Christmas Day.

According to the BBC, evidence presented at Kivimaki’s trial showed that he compromised more than 50,000 computer servers by exploiting vulnerabilities in Adobe’s Cold Fusion web application software. Prosecutors also said Kivimaki used stolen credit cards to buy luxury goods and shop vouchers, and participated in a money laundering scheme that he used to fund a trip to Mexico.

Kivimaki allegedly also was involved in calling in multiple fake bomb threats and “swatting” incidents — reporting fake hostage situations at an address to prompt a heavily armed police response to that location. DailyDot quotes Blair Strater, a victim of Kivimaki’s swatting and harassment, who expressed disgust at the Finnish ruling.

Speaking with KrebsOnSecurity, Strater called Kivimaki “a dangerous sociopath” who belongs behind bars.

Although it did not factor into his trial, sources close to the Lizard Squad investigation say Kivimaki also was responsible for making an August 2014 bomb threat against Sony Online Entertainment President John Smedley that grounded an American Airlines plane.

In an online interview with KrebsOnSecurity, Kivimaki denied involvement with the American Airlines incident, and said he was not surprised by the leniency shown by the court in his trial.

“During the trial it became apparent that nobody suffered significant (if any) damages because of the alleged hacks,” he said.

The danger in a decision such as this is that it emboldens young malicious hackers by reinforcing the already popular notion that there are no consequences for cybercrimes committed by individuals under the age of 18.

Case in point: Kivimaki is now crowing about the sentence; he’s changed the description on his Twitter profile to “Untouchable hacker god.” The Twitter account for the Lizard Squad tweeted the news of Kivimaki’s non-sentencing triumphantly: “All the people that said we would rot in prison don’t want to comprehend what we’ve been saying since the beginning, we have free passes.”

It is clear that the Finnish legal system, like that of the United States, simply does not know what to do with minors who are guilty of severe cybercrimes.  The FBI has for several years now been investigating several of Kivimaki’s contemporaries, young men under the age of 18 who are responsible for a similarly long list of cybercrimes — including credit card fraud, massively compromising a long list of Web sites and organizations running Cold Fusion software, as well as swatting my home in March 2013. Sadly, to this day those individuals also remain free and relatively untouched by the federal system.

Lance James, former head of cyber intelligence for Deloitte and a security researcher who’s followed the case closely, said he was disappointed at the court’s decision given the gravity and extensiveness of the crimes.

“We’re talking about the Internet equivalent of violent crimes and assault,” James said. “This is serious stuff.”

Kivimaki said he doesn’t agree with the characterization of swatting as a violent crime.

“I don’t see how a reasonable person could possibly compare cybercrime with violent crimes,” he said. “There’s a pretty clear distinction here. As far as I’m aware nobody has ever died in such an incident. Nor have I heard of anyone suffering bodily injury.”

As serious as Kivimaki’s crimes may be, kids like him need to be monitored, mentored, and molded — not jailed, says James.

“Studying his past, he’s extremely smart, but he’s trouble, and definitely needs a better direction,” James said. “A lot of these kids have trouble in the home, such as sibling or parental abuse and abandonment. These teenagers, they aren’t evil, they are troubled. There needs to be a diversion program — the same way they treat at-risk teenagers and divert them away from gang activity — that is designed to help them get on a better path.”

TorrentFreak: FBI Wants Pirate Bay Logs to Expose Copyright Trolls

This post was syndicated from: TorrentFreak and was written by: Ernesto. Original post: at TorrentFreak

pirate bayOver the past few years copyright troll law firm Prenda crossed the line on several occasions.

Most controversial was the clear evidence that Prenda uploaded their own torrents to The Pirate Bay, creating a honeypot for the people they later sued over pirated downloads.

The crucial evidence to back up this allegation came from The Pirate Bay, who shared upload logs with TorrentFreak that tied a user account and uploads to Prenda and its boss John Steele.

This serious allegation together with other violations piqued the interest of the FBI. For a long time there have been suspicions that the authorities are investigating the Prenda operation and today we can confirm that this is indeed the case.

The confirmation comes from Pirate Bay co-founders Peter Sunde and Fredrik Neij, who independently informed TF that they were questioned about Prenda during their stays in prison.

“I was told that Prenda Law has been under investigation for over a year, and from the printouts they showed me, I believe that,” Sunde tells TF.

Sunde was visited by Swedish police officers who identified themselves, noting that they were sent on behalf of the FBI. The officers mainly asked questions about Pirate Bay backups and logs.

“They asked many questions about the TPB backups and logs. I told them that even if they have one of the backups that it would be nearly impossible to decrypt,” Sunde says, adding that he couldn’t help them as he’s no longer associated with the site.

A short while after Sunde was questioned in prison the same happened to Neij. Again, the officers said they were gathering information about Pirate Bay’s logs on behalf of the FBI.

“They wanted to know if I could verify the accuracy of the IP-address logs, how they were stored, and how they could be retrieved,” Neij says.

The FBI’s interest in the logs was directly linked to the article we wrote on the Prenda honeypot in 2013. While it confirms that the feds are looking into Prenda, the FBI has not announced anything in public yet.

TF contacted the Swedish police a while ago asking for further details, but received no response.

It’s worth noting that the police officers also asked questions about the current state of The Pirate Bay and who’s running the site. With the recent raid in mind, it’s not unthinkable they may also have had an alternative motive.

In any case, today’s revelations show that Prenda is in serious trouble. The same copyright trolls who abused The Pirate Bay to trap pirates may now face their demise thanks to that very same site.

Source: TorrentFreak, for the latest info on copyright, file-sharing, torrent sites and the best VPN services.

Schneier on Security: What is the DoD’s Position on Backdoors in Security Systems?

This post was syndicated from: Schneier on Security and was written by: schneier. Original post: at Schneier on Security

In May, Admiral James A. Winnefeld, Jr., vice-chairman of the Joint Chiefs of Staff, gave an address at the Joint Service Academies Cyber Security Summit at West Point. After he spoke for twenty minutes on the importance of Internet security and a good national defense, I was able to ask him a question (32:42 mark) about security versus surveillance:

Bruce Schneier: I’d like to hear you talk about this need to get beyond signatures and the more robust cyber defense and ask the industry to provide these technologies to make the infrastructure more secure. My question is, the only definition of “us” that makes sense is the world, is everybody. Any technologies that we’ve developed and built will be used by everyone — nation-state and non-nation-state. So anything we do to increase our resilience, infrastructure, and security will naturally make Admiral Rogers’s both intelligence and attack jobs much harder. Are you okay with that?

Admiral James A. Winnefeld: Yes. I think Mike’s okay with that, also. That’s a really, really good question. We call that IGL. Anyone know what IGL stands for? Intel gain-loss. And there’s this constant tension between the operational community and the intelligence community when a military action could cause the loss of a critical intelligence node. We live this every day. In fact, in ancient times, when we were collecting actual signals in the air, we would be on the operational side, “I want to take down that emitter so it’ll make it safer for my airplanes to penetrate the airspace,” and they’re saying, “No, you’ve got to keep that emitter up, because I’m getting all kinds of intelligence from it.” So this is a familiar problem. But I think we all win if our networks are more secure. And I think I would rather live on the side of secure networks and a harder problem for Mike on the intelligence side than very vulnerable networks and an easy problem for Mike. And part of that — it’s not only the right thing do, but part of that goes to the fact that we are more vulnerable than any other country in the world, on our dependence on cyber. I’m also very confident that Mike has some very clever people working for him. He might actually still be able to get some work done. But it’s an excellent question. It really is.

It’s a good answer, and one firmly on the side of not introducing security vulnerabilities, backdoors, key-escrow systems, or anything that weakens Internet systems. It speaks to what I have seen as a split in the Second Crypto War, between the NSA and the FBI, on building secure systems versus building systems with surveillance capabilities.

I have written about this before:

But here’s the problem: technological capabilities cannot distinguish based on morality, nationality, or legality; if the US government is able to use a backdoor in a communications system to spy on its enemies, the Chinese government can use the same backdoor to spy on its dissidents.

Even worse, modern computer technology is inherently democratizing. Today’s NSA secrets become tomorrow’s PhD theses and the next day’s hacker tools. As long as we’re all using the same computers, phones, social networking platforms, and computer networks, a vulnerability that allows us to spy also allows us to be spied upon.

We can’t choose a world where the US gets to spy but China doesn’t, or even a world where governments get to spy and criminals don’t. We need to choose, as a matter of policy, communications systems that are secure for all users, or ones that are vulnerable to all attackers. It’s security or surveillance.

NSA Director Admiral Mike Rogers was in the audience (he spoke earlier), and I saw him nodding at Winnefeld’s answer. Two weeks later, at CyCon in Tallinn, Rogers gave the opening keynote, and he seemed to be saying the opposite.

“Can we create some mechanism where within this legal framework there’s a means to access information that directly relates to the security of our respective nations, even as at the same time we are mindful we have got to protect the rights of our individual citizens?”

[…]

Rogers said a framework to allow law enforcement agencies to gain access to communications is in place within the phone system in the United States and other areas, so “why can’t we create a similar kind of framework within the internet and the digital age?”

He added: “I certainly have great respect for those that would argue that the most important thing is to ensure the privacy of our citizens and we shouldn’t allow any means for the government to access information. I would argue that’s not in the nation’s best long term interest, that we’ve got to create some structure that should enable us to do that, mindful that it has to be done in a legal way and mindful that it shouldn’t be something arbitrary.”

Does Winnefeld know that Rogers is contradicting him? Can someone ask JCS about this?

Schneier on Security: Reassessing Airport Security

This post was syndicated from: Schneier on Security and was written by: schneier. Original post: at Schneier on Security

News that the Transportation Security Administration missed a whopping 95% of guns and bombs in recent airport security “red team” tests was justifiably shocking. It’s clear that we’re not getting value for the $7 billion we’re paying the TSA annually.

But there’s another conclusion, inescapable and disturbing to many, but good news all around: we don’t need $7 billion worth of airport security. These results demonstrate that there isn’t much risk of airplane terrorism, and we should ratchet security down to pre-9/11 levels.

We don’t need perfect airport security. We just need security that’s good enough to dissuade someone from building a plot around evading it. If you’re caught with a gun or a bomb, the TSA will detain you and call the FBI. Under those circumstances, even a medium chance of getting caught is enough to dissuade a sane terrorist. A 95% failure rate is too high, but a 20% one isn’t.

For those of us who have been watching the TSA, the 95% number wasn’t that much of a surprise. The TSA has been failing these sorts of tests since its inception: failures in 2003, a 91% failure rate at Newark Liberty International in 2006, a 75% failure rate at Los Angeles International in 2007, more failures in 2008. And those are just the public test results; I’m sure there are many more similarly damning reports the TSA has kept secret out of embarrassment.

Previous TSA excuses were that the results were isolated to a single airport, or not realistic simulations of terrorist behavior. That almost certainly wasn’t true then, but the TSA can’t even argue that now. The current test was conducted at many airports, and the testers didn’t use super-stealthy ninja-like weapon-hiding skills.

This is consistent with what we know anecdotally: the TSA misses a lot of weapons. Pretty much everyone I know has inadvertently carried a knife through airport security, and some people have told me about guns they mistakenly carried on airplanes. The TSA publishes statistics about how many guns it detects; last year, it was 2,212. This doesn’t mean the TSA missed 44,000 guns last year; a weapon that is mistakenly left in a carry-on bag is going to be easier to detect than a weapon deliberately hidden in the same bag. But we now know that it’s not hard to deliberately sneak a weapon through.
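The arithmetic behind that 44,000 figure is easy to check. A quick sketch (the 95% red-team miss rate and the 2,212 detections are from the article; treating accidental carries as if they failed at the red-team rate is exactly the naive extrapolation being cautioned against):

```python
detected = 2212      # guns the TSA reported finding in carry-ons last year
miss_rate = 0.95     # red-team test failure rate

# Naive extrapolation: if 2,212 guns were the 5% that got caught,
# the implied total number of guns carried through would be:
implied_total = detected / (1 - miss_rate)
implied_missed = implied_total - detected

print(round(implied_total))   # 44240
print(round(implied_missed))  # 42028
```

The point of the paragraph stands: accidental carries are easier to catch than deliberately hidden weapons, so the real miss count for forgotten guns is lower than this naive estimate.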

So why is the failure rate so high? The report doesn’t say, and I hope the TSA is going to conduct a thorough investigation as to the causes. My guess is that it’s a combination of things. Security screening is an incredibly boring job, and almost all alerts are false alarms. It’s very hard for people to remain vigilant in this sort of situation, and sloppiness is inevitable.

There are also technology failures. We know that current screening technologies are terrible at detecting the plastic explosive PETN — that’s what the underwear bomber had — and that a disassembled weapon has an excellent chance of getting through airport security. We know that some items allowed through airport security make excellent weapons.

The TSA is failing to defend us against the threat of terrorism. The only reason they’ve been able to get away with the scam for so long is that there isn’t much of a threat of terrorism to defend against.

Even with all these actual and potential failures, there have been no successful terrorist attacks against airplanes since 9/11. If there were lots of terrorists just waiting for us to let our guard down to destroy American planes, we would have seen attacks — attempted or successful — after all these years of screening failures. No one has hijacked a plane with a knife or a gun since 9/11. Not a single plane has blown up due to terrorism.

Terrorists are much rarer than we think, and launching a terrorist plot is much more difficult than we think. I understand this conclusion is counterintuitive, and contrary to the fearmongering we hear every day from our political leaders. But it’s what the data shows.

This isn’t to say that we can do away with airport security altogether. We need some security to dissuade the stupid or impulsive, but any more is a waste of money. The very rare smart terrorists are going to be able to bypass whatever we implement or choose an easier target. The more common stupid terrorists are going to be stopped by whatever measures we implement.

Smart terrorists are very rare, and we’re going to have to deal with them in two ways. One, we need vigilant passengers — that’s what protected us from both the shoe and the underwear bombers. And two, we’re going to need good intelligence and investigation — that’s how we caught the liquid bombers in their London apartments.

The real problem with airport security is that it’s only effective if the terrorists target airplanes. I generally am opposed to security measures that require us to correctly guess the terrorists’ tactics and targets. If we detect solids, the terrorists will use liquids. If we defend airports, they bomb movie theaters. It’s a lousy game to play, because we can’t win.

We should demand better results out of the TSA, but we should also recognize that the actual risk doesn’t justify their $7 billion budget. I’d rather see that money spent on intelligence and investigation — security that doesn’t require us to guess the next terrorist tactic and target, and works regardless of what the terrorists are planning next.

This essay previously appeared on CNN.com.

Errata Security: What’s the state of iPhone PIN guessing

This post was syndicated from: Errata Security and was written by: Robert Graham. Original post: at Errata Security

I think even some experts have gotten this wrong, so I want to ask everyone: what’s the current state-of-the-art for trying to crack Apple PIN codes?

This is how I think it works currently (in iOS 8).
To start with, there is a special “crypto-chip” inside the iPhone that holds your secrets (like a TPM or an ARM TrustZone/SecurCore). I think originally it was ARM’s TrustZone, but now that Apple designs its own chips, it has customized it (“Secure Enclave”). I think they needed to add stuff to make Touch ID work.
All the data (on the internal flash drive) is encrypted with a random AES key that nobody, not even the NSA, can crack. This random AES key is stored on the crypto-chip. Thus, if your phone is stolen, the robbers cannot steal the data from it — as long as your phone is locked properly.
To unlock your phone, you type in a 4-digit passcode. This passcode gets sent to the crypto-chip, which verifies the code, then gives you the AES key needed to decrypt the flash drive. This is all invisible, of course, but that’s what’s going on behind the scenes. Since the NSA can’t crack the AES key on the flash drive, they must instead get it from the crypto-chip.
Thus, unlocking the phone means guessing your 4 digit PIN.
This seems easy. After all, it’s only 4 digits. However, offline cracking is impossible. The only way to unlock the phone is to send guesses to the crypto-chip (a form of online cracking). This can be done over the USB port, so they (the NSA) don’t need to sit there trying to type every possible combination — they can simply write a little script to send commands over USB.
To make this more difficult, the crypto-chip will slow things down. After 6 failed guesses, the iPhone temporarily disables itself for 1 minute. Thus, it’ll take the NSA a week (6.9 days), trying all 10,000 combinations, once per minute.
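The arithmetic works out as follows (a sketch; it assumes a steady rate of one guess per minute once the lockouts kick in):

```python
pins = 10_000             # all possible 4-digit PINs
guesses_per_minute = 1    # throttled rate under the lockout

minutes = pins / guesses_per_minute
days = minutes / (60 * 24)
print(round(days, 1))     # 6.9
```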
Better yet, you can configure your phone to erase itself after 10 failed attempts (the [Erase Data] passcode setting). This isn’t the default configuration, but it’s how any paranoid person (like myself) configures their phone. This is a hard lock, preventing even the NSA from ever decrypting the phone. It’s the “going dark” problem the FBI complains about. If they get the iPhone from a terrorist, drug dealer, or pedophile, they won’t be able to decrypt it (well, beyond the 0.1% chance of guessing the PIN within 10 tries). (Note: I don’t think it actually erases the flash drive, but simply erases the secret AES key — which is essentially the same thing.)
Instead of guessing PIN numbers, there may be a way to reverse-engineer such secrets from the crypto-chip, such as by using acids to remove the top of the chip and then reading the secrets with an electron microscope. (Physical possession of the phone is required.) One of the Snowden docs implies that the NSA can sometimes do this, but that it takes a month and a hundred thousand dollars, and has a 50% chance of destroying the chip permanently without being able to extract any secrets. In any event, that may have been possible with the older chips, but the latest iPhones now include custom chips designed by Apple where this may no longer be possible.
There may be a physical exploit that gets around this. Somebody announced a device that would guess a PIN, then immediately power down the phone before the failed guess could be recorded. That allows an unlimited number of guesses, requiring a reboot of the phone in between each one. Since the reboot takes about a minute, it means hooking up the phone to the special device and waiting a week. This worked in phones up to iOS 8.1, but presumably it’s something Apple has since patched (some think 8.1.1 patched this).
There may be other exploits in software. In various versions of iOS, hackers have found ways of bypassing the lock screen. Generally, these exploits require the phone to still be powered on since it was stolen. (Coming out of sleep mode is different than being powered up, even though it looks like the same unlocking process to you.) However, whenever hackers disclose such techniques, Apple quickly patches them, so it’s not a practical long-term strategy. On the other hand, if they steal the phone, the FBI/NSA may simply hold it powered on in storage for several years, hoping an exploit is released. The FBI is patient; they really don’t care if it takes a few years to complete a case. The last such exploit was in iOS 7.0, and Apple is about to announce iOS 9. Apple is paranoid about such exploits, so I doubt that a new one will be found.
If the iPhone owner synced their phone with iTunes, then it’s probable that the FBI/NSA can confiscate both the phone and the desktop in order to grab the data. They can then unlock the phone from the desktop, or they can simply grab the backup files from the desktop. If your desktop computer also uses disk encryption, you can prevent this. Some desktops use TPMs to protect the disk (requiring slow online cracking similar to cracking the iPhone PIN). Others would allow offline cracking of your password, but if you chose a sufficiently long passwords (mine is 23 characters), even the NSA can’t crack it — even at the rate of billions of guesses per second that would be possible with offline cracking.
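The offline-cracking math is worth spelling out. A sketch, assuming a 62-character alphabet (letters and digits; the actual password composition is my assumption) and a billion guesses per second:

```python
alphabet = 62                       # assume letters + digits
length = 23                         # a 23-character password
guesses_per_sec = 1_000_000_000     # a billion guesses/second (offline)

keyspace = alphabet ** length
seconds = keyspace / guesses_per_sec
years = seconds / (365 * 24 * 3600)
print(f"{years:.1e}")               # ~5e24 years
```

Even shaving off many orders of magnitude for smarter guessing strategies leaves the attack hopeless, which is the point of choosing a long password.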
The upshot is this. If you are a paranoid person and do things correctly (set iPhone to erase after 10 attempts, either don’t sync with desktop or sync with proper full disk encryption), then when the FBI or NSA comes for you, they won’t be able to decrypt your phone. You are safe to carry out all your evil cyber-terrorist plans.
I’m writing this up in general terms because I think this is how it works. Many of us general experts glance over the docs and make assumptions about how we think things should work, based on our knowledge of crypto, but we haven’t paid attention to the details, especially the details as the state-of-the-art changes over the years. Sadly, asking general questions gets general answers from well-meaning, helpful people who really know only just as much as I do. I’m hoping those who are up on the latest details, experts like Jonathan Zdziarski, will point out where I’m wrong.

Response: So Jonathan has written a post describing this in more detail here: http://pastebin.com/SqQst8CV

He is more confident the NSA has 0days to get around everything. I think the point worth remembering is that nothing can be decrypted without 0days, and that if ever 0days become public, Apple patches them. Hence, you can’t steal somebody’s phone and take it to the local repair shop to get it read — unless it’s an old phone that hasn’t been updated. It also means the FBI is unlikely to get the data — at least without revealing that they’ve got an 0day.


Specifics: Specifically, I think this is what happens.

Unique-id (UID): When the CPU is manufactured, it’s assigned a unique-identifier. This is done with hardware fuses, some of which are blown to create 1 and 0s. Apple promises the following:

  • that UIDs are secret and can never be read from the chip, by anybody, for any reason
  • that all UIDs are truly random (nobody can guess the random number generation)
  • that they (or their suppliers) keep no record of them

This is the root of all security. If it fails, then the NSA can decrypt the phone.

Crypto-accelerator: The CPU has a built-in AES accelerator that’s mostly separate from the main CPU. One reason it exists is to quickly (with low power consumption) decrypt/encrypt everything on the flash-drive. It’s the only part of the CPU that can read the UID. It can therefore use the UID, plus the PIN/passcode, to encrypt/decrypt something.

Special flash: Either a reserved area of the flash-drive, or a wholly separate flash chip, is used to store the rest of the secrets. These are encrypted using the UID/PIN combo. Apple calls this “effaceable” storage. When it “wipes” the phone, this area is erased, but the rest of the flash drive isn’t. Information like your fingerprint (for Touch ID) is stored here.

So the steps are:

  1. iOS boots
  2. phone asks for PIN/passcode
  3. iOS sends the PIN/passcode to the crypto-accelerator to decrypt the flash-drive key (read from the “effaceable” storage area)
  4. uses flash-drive key to decrypt all your data

I’m skipping details. This is just enough to answer certain questions.
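The entanglement of UID and passcode can be illustrated with a toy sketch. This is NOT Apple’s actual derivation (the real tangling function runs inside the crypto-accelerator and is AES-based); it just shows why offline guessing is impossible without the UID:

```python
import hashlib

def derive_key(uid: bytes, passcode: str, rounds: int = 100_000) -> bytes:
    """Toy stand-in for the tangling function: the flash-drive key can
    only be derived by something that knows the device's secret UID."""
    return hashlib.pbkdf2_hmac("sha256", passcode.encode(), uid, rounds)

uid = b"\x13" * 32           # burned into the CPU; never leaves the chip
key = derive_key(uid, "1234")

# Without the UID, even the correct passcode derives the wrong key,
# so stolen flash storage can't be brute-forced offline:
assert derive_key(b"\x00" * 32, "1234") != key
```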

FAQ: Where is the unique hardware ID stored? On the flash memory? The answer is within the CPU itself. Flash memory will contain further keys, for example to unlock all your data, but they have to be decrypted using the unique-id plus PIN/passcode.

Schneier on Security: NSA Running a Massive IDS on the Internet Backbone

This post was syndicated from: Schneier on Security and was written by: schneier. Original post: at Schneier on Security

The latest story from the Snowden documents, co-published by The New York Times and ProPublica, shows that the NSA is operating a signature-based intrusion detection system on the Internet backbone:

In mid-2012, Justice Department lawyers wrote two secret memos permitting the spy agency to begin hunting on Internet cables, without a warrant and on American soil, for data linked to computer intrusions originating abroad — including traffic that flows to suspicious Internet addresses or contains malware, the documents show.

The Justice Department allowed the agency to monitor only addresses and “cybersignatures” ­- patterns associated with computer intrusions ­ that it could tie to foreign governments. But the documents also note that the N.S.A. sought to target hackers even when it could not establish any links to foreign powers.

To me, the big deal here is 1) the NSA is doing this without a warrant, and 2) that the policy change happened in secret, without any public policy debate.

The effort is the latest known expansion of the N.S.A.’s warrantless surveillance program, which allows the government to intercept Americans’ cross-border communications if the target is a foreigner abroad. While the N.S.A. has long searched for specific email addresses and phone numbers of foreign intelligence targets, the Obama administration three years ago started allowing the agency to search its communications streams for less-identifying Internet protocol addresses or strings of harmful computer code.

[…]

To carry out the orders, the F.B.I. negotiated in 2012 to use the N.S.A.’s system for monitoring Internet traffic crossing “chokepoints operated by U.S. providers through which international communications enter and leave the United States,” according to a 2012 N.S.A. document. The N.S.A. would send the intercepted traffic to the bureau’s “cyberdata repository” in Quantico, Virginia.

Ninety pages of NSA documents accompany the article. Here is a single OCRed PDF of them all.

Jonathan Mayer was consulted on the article. He gives more details on his blog, which I recommend you all read.

In my view, the key takeaway is this: for over a decade, there has been a public policy debate about what role the NSA should play in domestic cybersecurity. The debate has largely presupposed that the NSA’s domestic authority is narrowly circumscribed, and that DHS and DOJ play a far greater role. Today, we learn that assumption is incorrect. The NSA already asserts broad domestic cybersecurity powers. Recognizing the scope of the NSA’s authority is particularly critical for pending legislation.

This is especially important for pending information sharing legislation, which Mayer explains.

The other big news is that ProPublica’s Julia Angwin is working with Laura Poitras on the Snowden documents. I expect that this isn’t the last article we’re going to see.

EDITED TO ADD: Others are writing about these documents. Shane Harris explains how the NSA and FBI are working together on Internet surveillance. Benjamin Wittes says that the story is wrong, that “combatting overseas cybersecurity threats from foreign governments” is exactly what the NSA is supposed to be doing, and that they don’t need a warrant for any of that. And Marcy Wheeler points out that she has been saying for years that the NSA has been using Section 702 to justify Internet surveillance.

Schneier on Security: Yet Another New Biometric: Brainprints

This post was syndicated from: Schneier on Security and was written by: schneier. Original post: at Schneier on Security

New research:

In “Brainprint,” a newly published study in academic journal Neurocomputing, researchers from Binghamton University observed the brain signals of 45 volunteers as they read a list of 75 acronyms, such as FBI and DVD. They recorded the brain’s reaction to each group of letters, focusing on the part of the brain associated with reading and recognizing words, and found that participants’ brains reacted differently to each acronym, enough that a computer system was able to identify each volunteer with 94 percent accuracy. The results suggest that brainwaves could be used by security systems to verify a person’s identity.

I have no idea what the false negatives are, or how robust this biometric is over time, but the article makes the important point that unlike most biometrics this one can be updated.

“If someone’s fingerprint is stolen, that person can’t just grow a new finger to replace the compromised fingerprint — the fingerprint for that person is compromised forever. Fingerprints are ‘non-cancellable.’ Brainprints, on the other hand, are potentially cancellable. So, in the unlikely event that attackers were actually able to steal a brainprint from an authorized user, the authorized user could then ‘reset’ their brainprint,” Laszlo said.

Presumably the resetting involves a new set of acronyms.
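The paper’s classifier isn’t described here, but the basic idea — match a fresh brain response against stored per-user templates — can be sketched as a nearest-centroid lookup over response feature vectors (all names and numbers below are illustrative, not from the study):

```python
import math

# Illustrative enrollment templates: each user's average brain response
# to the acronym list, reduced to a feature vector (values made up).
templates = {
    "alice": [0.8, 0.1, 0.3],
    "bob":   [0.2, 0.9, 0.5],
}

def identify(sample):
    """Return the enrolled user whose template is closest to the sample."""
    return min(templates, key=lambda u: math.dist(templates[u], sample))

print(identify([0.75, 0.15, 0.35]))  # alice
```

Resetting a “compromised” brainprint would then amount to re-enrolling each user with templates recorded against a new acronym list.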

Author’s self-archived version of the paper (pdf).

Errata Security: Uh, the only reform of domestic surveillance is dismantling it

This post was syndicated from: Errata Security and was written by: Robert Graham. Original post: at Errata Security

A lot of smart people are cheering the reforms of domestic surveillance in the USA “FREEDOM” Act. Examples include  Timothy Lee, EFF, Julian Sanchez, and Amie Stepanovich. I don’t understand why. Domestic surveillance is a violation of our rights. The only acceptable reform is getting rid of it. Anything less is the moral equivalent of forcing muggers to not wear ski masks — it doesn’t actually address the core problem (mugging, in this case).
Bulk collection still happens, and searches still happen. The only thing the act does is move ownership of the metadata databases from the NSA to the phone companies. In no way does the bill reform the idea that, on the pretext of terrorism, law enforcement can still rummage through the records, looking for everyone “two hops” away from a terrorist.
We all know the Patriot Act is used primarily to prosecute the War on Drugs rather than the War on Terror. I see nothing in FREEDOM act that reforms this. We all know the government cloaks its abuses under the secrecy of national security — and while I see lots in the act that tries to make things more transparent, the act still allows such a cloak.
I see none of the reforms I’d want. For example, I want a law that requires the disclosure, to the public, of the total number of US phone records the government has grabbed every month, regardless of which law enforcement or intelligence agency grabbed them, and regardless of which program or authority was used to grab them. After Snowden caught the government using wild justifications for its metadata program, any law that doesn’t target all such collection, regardless of justification, won’t work to rein it in.
A vast array of other things need to be reformed regarding domestic surveillance, such as use of “Stingray” devices, the “third party doctrine” allowing the grabbing of business records even without terrorism as a justification, parallel construction, the border search exemption, license plate readers, and so on.
Bulk collection happens: our lives are increasingly electronic. We leave a long trail of “business records” behind us in whatever we do. Consider Chris Roberts, “Sindragon”, who joked about hacking a plane on Twitter and is now under investigation by the FBI. They can easily rummage through all those records. While they might not find him guilty of hacking, they may find he violated an obscure tax law or export law, and charge him with that sort of crime. That everything about our lives is being collected in bulk, allowing arbitrary searches by law enforcement, amounts to a cyber surveillance state, one the FREEDOM Act comes nowhere close to touching.
The fact of the matter is that the NSA’s bulk collection was the least of our problems. Indeed, the NSA’s focus on foreign targets meant, in practice, it really wasn’t used domestically. The FREEDOM act now opens up searches of metadata to all the other law enforcement agencies. Instead of skulking in secret occasionally searching metadata, the FBI, DEA, and ATF can now do so publicly, with the blessing of the law behind them.