Posts tagged ‘research’

TorrentFreak: Tech Giants Want to Punish DMCA Takedown Abusers

This post was syndicated from: TorrentFreak and was written by: Ernesto. Original post: at TorrentFreak

Every day copyright holders send millions of DMCA takedown notices to various Internet services.

Most of these requests are legitimate, aimed at disabling access to copyright-infringing material. However, there are also many overbroad and abusive takedown notices which lead to unwarranted censorship.

These abuses are a thorn in the side of major tech companies such as Google, Facebook and Microsoft. These companies face serious legal consequences if they fail to take content down, but copyright holders who don’t play by the rules often walk free.

This problem is one of the main issues highlighted in a new research report (pdf) published by the CCIA, a trade group which lists many prominent tech companies among its members.

The report proposes several changes to copyright legislation that should bring it in line with the current state of the digital landscape. One of the suggestions is to introduce statutory damages for people who abuse the takedown process.

“One shortcoming of the DMCA is that the injunctive-like remedy of a takedown, combined with a lack of due process, encourages abuse by individuals and entities interested in suppressing content,” CCIA writes.

“Although most rightsholders make good faith use of the DMCA, there are numerous well-documented cases of misuse of the DMCA’s extraordinary remedy. In many cases, bad actors have forced the removal of material that did not infringe copyright.”

The report lists several examples, including DMCA notices which are used to chill political speech by demanding the takedown of news clips, suppress consumer reviews, or retaliate against critics.

Many Internet services are hesitant to refuse these types of takedown requests, as doing so may cause them to lose their safe harbor protection, while the abusers themselves don’t face any serious legal risk.

The CCIA proposes to change this by introducing statutory damage awards for abusive takedown requests. This means that the senders would face the same consequences as the copyright infringers.

“To more effectively deter intentional DMCA abuse, Congress should extend Section 512(f) remedies for willful misrepresentations under the DMCA to include statutory awards, as it has for willful infringement under Section 504(c),” CCIA writes.

In addition to tackling DMCA abuse the tech companies propose several other changes to copyright law.

One of the suggestions is to change the minimum and maximum statutory damages for copyright infringement, which are currently $750 and $150,000 per work.

According to the CCIA the minimum should be lowered to suit cases that involve many infringements, such as a user who hosts thousands of infringing works on a cloud storage platform.
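
To see why the CCIA considers the minimum a problem for mass-infringement cases, a quick back-of-the-envelope calculation helps (the 3,000-work figure is hypothetical, not from the report):

```python
# Illustration: how the per-work statutory range scales for a hypothetical
# cloud-storage user hosting many infringing works.
MIN_PER_WORK = 750        # current statutory minimum per work
MAX_PER_WORK = 150_000    # current statutory maximum per work (willful)

def damages_range(num_works: int) -> tuple[int, int]:
    """Return the (minimum, maximum) statutory damages for num_works works."""
    return num_works * MIN_PER_WORK, num_works * MAX_PER_WORK

low, high = damages_range(3_000)  # hypothetical: 3,000 infringing works
print(f"minimum: ${low:,}")   # minimum: $2,250,000
print(f"maximum: ${high:,}")  # maximum: $450,000,000
```

Even at the statutory floor, an ordinary user hosting a few thousand files faces damages in the millions, which is the scaling problem the report points at.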

The $150,000 maximum, on the other hand, is open to abuse by copyright trolls and rightsholders who may use it as a pressure tool.

The tech companies hope that U.S. lawmakers will consider these and other suggestions put forward in the research paper, to improve copyright law and make it future-proof.

“Since copyright law was written more than 100 years ago, the goal has been to encourage creativity to benefit the overall public good. It’s important as copyright is modernized to ensure that reforms continue to benefit not just rightsholders, but the overall public good,” the CCIA concludes.

Source: TorrentFreak, for the latest info on copyright, file-sharing, torrent sites and ANONYMOUS VPN services.

SANS Internet Storm Center, InfoCON: green: Actor that tried Neutrino exploit kit now back to Angler, (Wed, Aug 26th)

This post was syndicated from: SANS Internet Storm Center, InfoCON: green and was written by: SANS Internet Storm Center, InfoCON: green. Original post: at SANS Internet Storm Center, InfoCON: green


Last week, we saw the group behind a significant amount of Angler exploit kit (EK) traffic switch to Neutrino EK [1]. We didn’t know if the change was permanent, and I also noted that criminal groups using EKs have quickly changed tactics in the past. This week, the group is back to Angler EK.

Over the past few days, I’ve noticed several examples of Angler EK pushing TeslaCrypt 2.0 ransomware. For today’s diary, we’ll look at four examples of Angler EK on Tuesday 2015-08-25 from 16:42 to 18:24 UTC. All examples delivered the same sample of TeslaCrypt 2.0 ransomware.

TeslaCrypt 2.0

TeslaCrypt is a recent family of ransomware that first appeared early this year. It’s been known to mimic CryptoLocker, and we’ve seen it use the names TeslaCrypt and AlphaCrypt in previous infections [2,3,4]. According to Kaspersky Lab, version 2.0 of TeslaCrypt uses the same type of decrypt instructions as CryptoWall [5]. However, artifacts and traffic from the infected host reveal this is actually TeslaCrypt.

Kafeine from Malware Don’t Need Coffee first tweeted about the new ransomware on 2015-07-13 [6]. The next day, Kaspersky Lab released details on this most recent version of TeslaCrypt [5].

I saw my first sample of TeslaCrypt 2.0 sent from Nuclear EK on 2015-07-20 [7]. We haven’t seen a great deal of TeslaCrypt 2.0 since then. Until recently, most of the ransomware delivered by Angler EK was CryptoWall 3.0; this time, however, the injected iframes pointed to Angler EK. In most cases, the iframe led directly to the Angler EK landing page.

Shown above: From the third example, the iframe pointing to an Angler EK landing page.
Shown above: From the fourth example, the iframe pointing to an Angler EK landing page.

Looking at the traffic in Wireshark, we find two different IPs and four different domains from the four Angler EK infections during a 1 hour and 42 minute time span. Although Angler EK sends its payload encrypted, I was able to grab a decrypted copy from an infected host before it deleted itself.
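
As a quick sanity check, the timestamps of the first and fourth infections (16:42 and 18:24 UTC, as noted above) do work out to that window:

```python
from datetime import datetime

first = datetime(2015, 8, 25, 16, 42)  # first of the four infections (UTC)
last = datetime(2015, 8, 25, 18, 24)   # fourth infection (UTC)
print(last - first)  # 1:42:00
```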

  • File name: 2015-08-25-Angler-EK-payload-TeslaCrypt-2.0.exe
  • File size: 346.9 KB (355,239 bytes)
  • MD5 hash: 4321192c28109be890decfa5657fb3b3
  • SHA1 hash: 352f81f9f7c1dcdb5dbfe9bee0faa82edba043b9
  • SHA256 hash: 838f89a2eead1cfdf066010c6862005cd3ae15cf8dc5190848b564352c412cfa
  • Detection ratio: 3 / 49
  • First submission: 2015-08-25 19:51:01 UTC
  • VirusTotal analysis: link
  • Malwr.com analysis: link
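
For readers who grab the sample from the zip archive of malware, a minimal sketch for checking it against the hashes listed above (the payload file name is the one used in this diary):

```python
# Compute MD5, SHA1 and SHA256 of a file in a single pass, reading in
# chunks so large samples don't need to fit in memory.
import hashlib

def file_hashes(path):
    """Return {algorithm: hex digest} for MD5, SHA1 and SHA256 of a file."""
    digests = {name: hashlib.new(name) for name in ("md5", "sha1", "sha256")}
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            for d in digests.values():
                d.update(chunk)
    return {name: d.hexdigest() for name, d in digests.items()}

# e.g. file_hashes("2015-08-25-Angler-EK-payload-TeslaCrypt-2.0.exe")["md5"]
# should match the MD5 hash listed above for the captured payload
```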

The following post-infection traffic was seen from the four infected hosts:

  • – TCP port 80 (http) – IP address check
  • – TCP port 80 (http) – post-infection callback
  • – TCP port 80 (http) – – post-infection callback

Malwr.com’s analysis of the payload reveals additional IP addresses and hosts:

  • – TCP port 80 (http) –
  • – TCP port 80 (http) –
  • – TCP port 80 (http) –
  • – TCP port 80 (http) –
  • – TCP port 443 (encrypted) –
  • – TCP port 443 (encrypted) –

Snort-based alerts on the traffic

I played back the pcap on Security Onion using Suricata with the EmergingThreats (ET) and ET Pro rule sets. The results show alerts for Angler EK and AlphaCrypt. The AlphaCrypt alerts triggered on callback traffic from TeslaCrypt 2.0.

Shown above: Got a captcha when trying one of the URLs.
Shown above: Final decrypt instructions with a bitcoin address for the ransom payment.
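
Outside of Security Onion, the same playback can be scripted directly. A rough sketch (the pcap file name and the logs/ directory are assumptions; `suricata -r` reads a capture offline and fast.log holds one alert per line):

```python
import subprocess

def replay_pcap(pcap_path, log_dir="logs/"):
    """Run Suricata offline against a capture; -r reads a pcap, -l sets logs."""
    subprocess.run(["suricata", "-r", pcap_path, "-l", log_dir], check=True)

def ek_alerts(fast_log_lines):
    """Filter Suricata fast.log lines for the Angler EK / AlphaCrypt alerts."""
    return [line.rstrip() for line in fast_log_lines
            if "Angler" in line or "AlphaCrypt" in line]

# replay_pcap("2015-08-25-Angler-EK-traffic.pcap")   # hypothetical file name
# with open("logs/fast.log") as f:
#     print("\n".join(ek_alerts(f)))
```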

Final words

On the same cloned host with the same malware, we saw a different URL for the decrypt instructions each time. Every infection resulted in a different bitcoin address for the ransom payment, even though it was the same sample infecting the same cloned host.

We continue to see EKs used by this and other criminal groups to spread malware. Although we haven’t seen as much CryptoWall this week, the situation could easily change in a few days’ time.

Traffic and malware for this diary are listed below:

  • A zip archive of four pcap files with the infection traffic from Tuesday 2015-08-25 is available here. (4.14 MB)
  • A zip archive of the malware and other artifacts is available here. (957 KB)

The zip archive for the malware is password-protected with the standard password. If you don’t know it, email and ask.

Brad Duncan
Security Researcher at Rackspace
Blog: – Twitter: @malware_traffic



(c) SANS Internet Storm Center. Creative Commons Attribution-Noncommercial 3.0 United States License.

Schneier on Security: Are Data Breaches Getting Larger?

This post was syndicated from: Schneier on Security and was written by: schneier. Original post: at Schneier on Security

This research says that data breaches are not getting larger over time.

“Hype and Heavy Tails: A Closer Look at Data Breaches,” by Benjamin Edwards, Steven Hofmeyr, and Stephanie Forrest:

Abstract: Recent widely publicized data breaches have exposed the personal information of hundreds of millions of people. Some reports point to alarming increases in both the size and frequency of data breaches, spurring institutions around the world to address what appears to be a worsening situation. But, is the problem actually growing worse? In this paper, we study a popular public dataset and develop Bayesian Generalized Linear Models to investigate trends in data breaches. Analysis of the model shows that neither size nor frequency of data breaches has increased over the past decade. We find that the increases that have attracted attention can be explained by the heavy-tailed statistical distributions underlying the dataset. Specifically, we find that data breach size is log-normally distributed and that the daily frequency of breaches is described by a negative binomial distribution. These distributions may provide clues to the generative mechanisms that are responsible for the breaches. Additionally, our model predicts the likelihood of breaches of a particular size in the future. For example, we find that in the next year there is only a 31% chance of a breach of 10 million records or more in the US. Regardless of any trend, data breaches are costly, and we combine the model with two different cost models to project that in the next three years breaches could cost up to $55 billion.
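
The heavy-tail point is easy to reproduce in a small simulation. With an illustrative log-normal (these parameters are made up, not the paper's fitted values), a handful of the largest draws dominate the total even though nothing is trending upward:

```python
# Draw a batch of breach sizes from a log-normal and measure how much of
# the total the ten largest breaches account for.
import numpy as np

rng = np.random.default_rng(7)
sizes = rng.lognormal(mean=7.0, sigma=2.5, size=2000)  # records per breach

top10_share = np.sort(sizes)[-10:].sum() / sizes.sum()
print(f"top 10 of 2000 breaches hold {top10_share:.0%} of all exposed records")
```

This is why a run of record-setting breaches is weak evidence of a worsening trend: a heavy-tailed distribution produces occasional enormous outliers by itself.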

The paper was presented at WEIS 2015.

Schneier on Security: The Advertising Value of Intrusive Tracking

This post was syndicated from: Schneier on Security and was written by: schneier. Original post: at Schneier on Security

Here’s an interesting research paper that tries to calculate the differential value of privacy-invasive advertising practices.

The researchers used data from a mobile ad network and were able to see how different personalized advertising practices affected customer purchasing behavior. The details are interesting, but basically, most personal information had little value. Overall, the ability to target advertising produces a 29% greater return on an advertising budget, mostly by knowing the right time to show someone a particular ad.

The paper was presented at WEIS 2015.

TorrentFreak: MPAA Seeks New Global Anti-Piracy Vice President

This post was syndicated from: TorrentFreak and was written by: Andy. Original post: at TorrentFreak

Due to opposing beliefs over how content should be consumed online, there is a war being waged on the Internet, one in which the guerilla forces of the file-sharing masses take on the world’s leading content companies and their armies of lawyers.

As a result, Hollywood and the major recording labels are committed to pouring endless millions into content protection, with the aim of affecting consumer behavior by any means – and by force if necessary.

To that end the MPAA is currently hoping to boost its already sizable anti-piracy team with the addition of a new Vice President of Global Content Protection.

The position – advertised externally this week – is an important one and will see the new recruit working with Hollywood studios to “define and execute” the MPAA’s global online content protection strategies.

“This position is primarily responsible for developing and executing a global Internet strategy for combating piracy, managing multiple projects simultaneously, managing staff and keeping apprised of technological developments in the piracy ecosystem and user behaviors online,” the MPAA’s listing reads.

The post is central to the MPAA’s entire anti-piracy operation. Responsibilities include directing international investigations of “websites, operators and business entities engaged in or associated with copyright infringement” while monitoring and reporting on emerging trends and threats.

Legal action is a large part of the MPAA’s work and the role requires the successful candidate to develop and manage relationships “with high-level law enforcement officials in key regions and countries” while helping to develop the movie group’s global civil litigation policy.

Also falling within the job description are key elements of the so-called “Follow the Money” approach to online piracy.

Along the lines of several collaborative initiatives already underway (six strikes etc), the new VP will be expected to develop relationships with intermediaries such as hosting providers, advertising companies, payment processors, domain name registrars and social networks such as Facebook.

He or she will also be responsible for providing technical assistance, research, data and training to government agencies, lobbyists and other rights holders concerning content protection issues.

As should be clear from the above, it’s a big job that will only be suitable for a limited number of applicants. In addition to a bachelor’s degree, candidates will need a graduate degree and experience in content protection intelligence, investigation and enforcement under their belts.

Naturally the MPAA only seeks the technically adept when it comes to piracy-related vacancies. Candidates should have plenty of experience with various content distribution methods including “streaming video, online file hosting and peer-to-peer sharing.”

For a group determined to hold third parties responsible for the infringements of others, it should come as no surprise that applicants are also expected to have a sterling understanding of the relationships between “ISPs, domain names, IP addresses, and hosting providers, and technical infrastructure of such online resources.”

Finally, the MPAA insists that their ideal applicant will know right from wrong.

“[We require] a team player who has the utmost moral and ethical character to support the content protection team and to implement sound strategies that will benefit the motion picture industry today and tomorrow,” the MPAA concludes.


Schneier on Security: NSA Plans for a Post-Quantum World

This post was syndicated from: Schneier on Security and was written by: schneier. Original post: at Schneier on Security

Quantum computing is a novel way to build computers — one that takes advantage of the quantum properties of particles to perform operations on data in a very different way than traditional computers. In some cases, the algorithm speedups are extraordinary.

Specifically, a quantum computer using something called Shor’s algorithm can efficiently factor numbers, breaking RSA. A variant can break Diffie-Hellman and other discrete log-based cryptosystems, including those that use elliptic curves. This could potentially render all modern public-key algorithms insecure. Before you panic, note that the largest number to date that has been factored by a quantum computer is 143. So while a practical quantum computer is still science fiction, it’s not stupid science fiction.
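
The number theory behind Shor's algorithm can be sketched classically; the quantum speedup lies entirely in finding the order r, which this toy version simply brute-forces:

```python
# Toy, classical illustration of the math behind Shor's algorithm.
# A quantum computer finds the order r efficiently; here we brute-force it.
from math import gcd

def shor_classical(n: int, a: int) -> int:
    """Factor n using the order of a modulo n (a must be coprime to n)."""
    assert gcd(a, n) == 1
    r = 1
    while pow(a, r, n) != 1:        # order r: smallest r with a^r = 1 (mod n)
        r += 1
    if r % 2:
        raise ValueError("odd order, pick another a")
    factor = gcd(pow(a, r // 2, n) - 1, n)
    if factor in (1, n):
        raise ValueError("trivial factor, pick another a")
    return factor

print(shor_classical(15, 7))  # 3  (order of 7 mod 15 is 4; gcd(7^2 - 1, 15) = 3)
```

The brute-force order search takes exponential time classically; Shor's quantum period-finding step does the same job in polynomial time, which is the entire threat to RSA.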

(Note that this is completely different from quantum cryptography, which is a way of passing bits between two parties that relies on physical quantum properties for security. The only thing quantum computation and quantum cryptography have to do with each other is their first words. It is also completely different from the NSA’s QUANTUM program, which is its code name for a packet-injection system that works directly in the Internet backbone.)

Practical quantum computation doesn’t mean the end of cryptography. There are lesser-known public-key algorithms such as McEliece and lattice-based algorithms that, while less efficient than the ones we use, are currently secure against a quantum computer. And quantum computation only speeds up a brute-force keysearch by a factor of a square root, so any symmetric algorithm can be made secure against a quantum computer by doubling the key length.
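
That square-root speedup makes the key-length arithmetic simple. As a rule-of-thumb sketch (not a precise cost model):

```python
# Grover's search gives only a quadratic speedup, so a k-bit symmetric key
# retains roughly k/2 bits of security against a quantum brute-force search.
def quantum_security_bits(key_bits: int) -> int:
    """Approximate post-quantum security level of a symmetric key."""
    return key_bits // 2

print(quantum_security_bits(128))  # 64  -> AES-128 drops to ~2^64 quantum work
print(quantum_security_bits(256))  # 128 -> AES-256 restores a ~2^128 margin
```

Doubling the key length (128 to 256 bits) restores the original security margin, which is why symmetric cryptography is considered largely safe in a post-quantum world.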

We know from the Snowden documents that the NSA is conducting research on both quantum computation and quantum cryptography. It’s not a lot of money, and few believe that the NSA has made any real advances in theoretical or applied physics in this area. My guess has been that we’ll see a practical quantum computer within 30 to 40 years, but not much sooner than that.

This all means that now is the time to think about what living in a post-quantum world would be like. NIST is doing its part, having hosted a conference on the topic earlier this year. And the NSA announced that it is moving towards quantum-resistant algorithms.

Earlier this week, the NSA’s Information Assurance Directorate updated its list of Suite B cryptographic algorithms. It explicitly talked about the threat of quantum computers:

IAD will initiate a transition to quantum resistant algorithms in the not too distant future. Based on experience in deploying Suite B, we have determined to start planning and communicating early about the upcoming transition to quantum resistant algorithms. Our ultimate goal is to provide cost effective security against a potential quantum computer. We are working with partners across the USG, vendors, and standards bodies to ensure there is a clear plan for getting a new suite of algorithms that are developed in an open and transparent manner that will form the foundation of our next Suite of cryptographic algorithms.

Until this new suite is developed and products are available implementing the quantum resistant suite, we will rely on current algorithms. For those partners and vendors that have not yet made the transition to Suite B elliptic curve algorithms, we recommend not making a significant expenditure to do so at this point but instead to prepare for the upcoming quantum resistant algorithm transition.

Suite B is a family of cryptographic algorithms approved by the NSA. It’s all part of the NSA’s Cryptographic Modernization Program. Traditionally, NSA algorithms were classified and could only be used in specially built hardware modules. Suite B algorithms are public, and can be used in anything. This is not to say that Suite B algorithms are second class, or breakable by the NSA. They’re being used to protect US secrets: “Suite A will be used in applications where Suite B may not be appropriate. Both Suite A and Suite B can be used to protect foreign releasable information, US-Only information, and Sensitive Compartmented Information (SCI).”

The NSA is worried enough about advances in the technology to start transitioning away from algorithms that are vulnerable to a quantum computer. Does this mean that the agency is close to a working prototype in their own classified labs? Unlikely. Does this mean that they envision practical quantum computers sooner than my 30-to-40-year estimate? Certainly.

Unlike most personal and corporate applications, the NSA routinely deals with information it wants kept secret for decades. Even so, we should all follow the NSA’s lead and transition our own systems to quantum-resistant algorithms over the next decade or so — possibly even sooner.

The essay previously appeared on

EDITED TO ADD: The computation that factored 143 also accidentally “factored much larger numbers such as 3599, 11663, and 56153, without the awareness of the authors of that work,” which shows how weird this all is.

EDITED TO ADD: Seems that I need to be clearer: I do not stand by my 30-40-year prediction. The NSA is acting like practical quantum computers will exist long before then, and I am deferring to their expertise.

Schneier on Security: Did Kaspersky Fake Malware?

This post was syndicated from: Schneier on Security and was written by: schneier. Original post: at Schneier on Security

Two former Kaspersky employees have accused the company of faking malware to harm rival antivirus products. They would falsely classify legitimate files as malicious, tricking other antivirus companies that blindly copied Kaspersky’s data into deleting them from their customers’ computers.

In one technique, Kaspersky’s engineers would take an important piece of software commonly found in PCs and inject bad code into it so that the file looked like it was infected, the ex-employees said. They would send the doctored file anonymously to VirusTotal.

Then, when competitors ran this doctored file through their virus detection engines, the file would be flagged as potentially malicious. If the doctored file looked close enough to the original, Kaspersky could fool rival companies into thinking the clean file was problematic as well.


The former Kaspersky employees said Microsoft was one of the rivals that were targeted because many smaller security companies followed the Redmond, Washington-based company’s lead in detecting malicious files. They declined to give a detailed account of any specific attack.

Microsoft’s antimalware research director, Dennis Batchelder, told Reuters in April that he recalled a time in March 2013 when many customers called to complain that a printer code had been deemed dangerous by its antivirus program and placed in “quarantine.”

Batchelder said it took him roughly six hours to figure out that the printer code looked a lot like another piece of code that Microsoft had previously ruled malicious. Someone had taken a legitimate file and jammed a wad of bad code into it, he said. Because the normal printer code looked so much like the altered code, the antivirus program quarantined that as well.

Over the next few months, Batchelder’s team found hundreds, and eventually thousands, of good files that had been altered to look bad.

Kaspersky denies it.

Krebs on Security: How Not to Start an Encryption Company

This post was syndicated from: Krebs on Security and was written by: BrianKrebs. Original post: at Krebs on Security

Probably the quickest way for a security company to prompt an overwhelmingly hostile response from the security research community is to claim that its products and services are “unbreakable” by hackers. The second-fastest way to achieve that outcome is to have that statement come from an encryption company CEO who served several years in federal prison for his role in running a $210 million Ponzi scheme. Here’s the story of a company that managed to accomplish both at the same time and is now trying to learn from (and survive) the experience.

Thanks to some aggressive marketing, Irvine, Calif.-based security firm Secure Channels Inc. (SCI) and its CEO Richard Blech have been in the news quite a bit lately — mainly Blech being quoted in major publications such as NBC News, Politico and USA Today — talking about how his firm’s “unbreakable” encryption technology might have prevented some of the larger consumer data breaches that have come to light in recent months.

Blech’s company, founded in 2014 with his own money, has been challenging the security community to test its unbreakable claim in a cleverly unwinnable series of contests: At the Black Hat Security conference in Las Vegas last year, the company offered a new BMW to anyone who could unlock a digital file that was encrypted with its “patented” technology.

At the RSA Security Conference this year in San Francisco, SCI offered a $50,000 bounty to anyone who could prove the feat. When no one showed up to claim the prizes, SCI issued press releases crowing about a victory for its products.

Turns out, Blech knows a thing or two about complex, unwinnable games: He pleaded guilty in 2003 to civil and criminal fraud charges and was sentenced to six years in U.S. federal prison for running an international Ponzi scheme.

Once upon a time, Blech was the CEO of Credit Bancorp. Ltd., an investment firm that induced its customers to deposit securities, cash, and other assets in trust by promising the impossible: a “custodial dividend” based on the profits of “risk-less” arbitrage. Little did the company’s investors know at the time, but CBL was running a classic Ponzi scheme: Taking cash and other assets from new investors to make payments to earlier ones, creating the impression of sizable returns, prosecutors said. Blech was sentenced to 72 months in prison and was released in 2007.


In April 2015, Lance James, a security researcher who has responded to challenges like the BMW and $50,000 prizes touted by SCI, began receiving taunting tweets from Blech and Ross Harris, a particularly aggressive member of SCI’s sales team. That Twitter thread (PDF) had started with WhiteHat Security CTO Jeremiah Grossman posting a picture of a $10,000 check that James was awarded from Telesign, a company that had put up the money after claiming that its StrongWebmail product was unhackable. Turns out, it wasn’t so strong; James and two other researchers found a flaw in the service and hacked the CEO’s email account. StrongWebmail never recovered from that marketing stunt.

James replied to Grossman that, coincidentally, he’d just received an email from SCI offering a BMW to anyone who could break the company’s crypto.

“When the crypto defeats you, we’ll give you a t-shirt, ‘Can’t touch this,’ you’ll wear it for a Tweet,” Blech teased James via Twitter on April 7, 2015. “Challenge accepted,” said James, owner of the security consultancy Unit 221b.  “Proprietary patented crypto is embarrassing in 2015. You should know better.”

As it happens, encrypting a file with your closed, proprietary encryption technology and then daring the experts to break it is not exactly the way you prove its strength or gain the confidence of the security community in general. Experts in encryption tend to subscribe to an idea known as Kerckhoffs’ principle when deciding the relative strength and merits of any single cryptosystem: Put simply, a core tenet of Kerckhoffs’ principle holds that “one ought to design systems under the assumption that the enemy will gain full familiarity with them.”

Translation: If you want people to take you seriously, put your encryption technology on full view of the security community (minus your private encryption keys), and let them see if they can break the system.

James said he let it go when SCI refused to talk seriously about sharing its cryptography solution, only to hear again this past weekend from SCI’s director of marketing Deirdre “Dee” Murphy on Twitter that his dismissal of their challenge proved he was “obsolete.” Murphy later deleted the tweets, but some of them are saved here.

Nate Cardozo, a staff attorney at the nonprofit digital rights group Electronic Frontier Foundation (EFF), said companies that make claims of unbreakable technologies very often are effectively selling snake oil unless they put their products up for peer review.

“They don’t disclose their settings or what modes their ciphers are running in,” Cardozo said. “They have a patent which is laughably vague about what it’s actually doing, and yet their chief marketing officer insults security researchers on Twitter saying, ‘If our stuff is so insecure, just break it.’”

Cardozo was quick to add that although there is no indication whatsoever that Secure Channels Inc. is engaging in any kind of fraud, they are engaged in “wildly irresponsible marketing.”

“And that’s not good for anyone,” he said. “In the cryptography community, the way you prove your system is secure is you put it up to peer review, you get third party audits, you publish specifications, etc. Apple’s not open-source and they do all of that. You can download the security white paper and see everything that iMessage is doing. The same is true for WhatsApp and PGP. When we see companies like Secure Channel treating crypto like a black box, that raises red flags. Any company making such claims deserves scrutiny, but because we can’t scrutinize the actual cryptography they’re using, we have to scrutinize the company itself.”


I couldn’t believe that any security company — let alone a firm that was trying to break into the encryption industry (a business that requires precision perhaps beyond any other, no less) — could make so many basic errors and miscalculations, so I started digging deeper into SCI and its origins. At the same time I requested and was granted an interview with Blech and his team.

I learned that SCI is actually licensing its much-vaunted, patented encryption technology from a Swiss firm by the same name – Secure Channels SA. Malcolm Hutchinson, president and CEO at Secure Channels SA, said he and his colleagues have been “totally dismayed at the level of marketing hype being used by SCI.”

“In hindsight, the mistake we made was licensing SCI to use the Secure Channel name, as this has led to a blurring of the distinction between the owner of the IP and the licensee of that IP which has been exploited,” he told KrebsOnSecurity in an email exchange.

SCI’s CEO Blech has been quoted in the news media saying the company has multiple U.S. government clients. When asked at the outset of a phone interview to name some of those government clients, Blech said he was unable to because they were all “three-letter agencies.” He mentioned instead a deal with MicroTech, a technology integrator that does work with a number of government agencies. When asked whether SCI was actually doing any work for any government clients via its relationship with MicroTech, Blech conceded that it was not.

“We’re on their GSA schedule and in a flow with these agencies,” Blech said.

The same turned out to be the case of another “client” Blech mentioned: American electronics firm Ingram Micro. Was anyone actually using SCI’s technology because of the Ingram relationship? Well, no, not yet.

Did the company actually have any paying clients, I asked? Blech said yes, SCI has three credit union clients in California, two of whom couldn’t be disclosed because of confidentiality agreements. In what sense was the credit union (La Loma Federal Credit Union) using SCI’s unbreakable encryption? As Blech explained it, SCI sent one of its employees to help the bank with a compliance audit, but La Loma FCU hasn’t actually deployed any of his products.

“They’re not ready for it, so we haven’t deployed it,” he said.

I asked Blech about the gap in his resume roughly between 2003 and 2007. When he balked, I asked whether he’d advised all of his employees of his criminal record when they were hired. Yes, of course, he said (this, according to two former SCI employees, was not actually the case).

In any event, Blech seemed to know this subject was going to come up, and initially took ownership over the issue, although he said he never ran any Ponzi schemes.

“This is in my past and something I’ve addressed and paid my debt for in every way,” Blech said. “I took the approach that was going to get me home to my family the soonest. That meant cooperating with the government and not fighting them in a long, drawn-out battle. I took responsibility, financially and in every way I had to with this case.”

Then he added that it really wasn’t his fault. “There were people in my company that were in America while I was living in Europe that went out and did things inappropriately that got the attention of the authorities,” adding that virtually all of the money was returned to investors.

“I put more than $2 million of my own money into this company,” Blech said of SCI. “I could have hidden, and spent that to reinvent myself and sit on a beach in the Bahamas. But I didn’t do that.”


Why in the world wouldn’t anyone want to deploy an unhackable security product? Perhaps because the product doesn’t offer enough beyond existing encryption technologies to justify the expense.


Put simply, SCI’s secret sauce is a process for taking existing encryption techniques (the company says it only uses vetted, established code libraries), randomizing which one gets used to encrypt the file that needs to be protected, and then encrypting the output with AES-256. It seems patently obvious, and otherwise harmless. But how does this improve upon AES-256 — widely considered one of the most secure ciphers available today?
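That layering is simple enough to sketch. The toy below mimics the described process: randomly select an inner cipher, then wrap the result in an outer layer. Everything here is a stand-in — the “ciphers” are insecure XOR keystreams derived from SHA-256, not the vetted library implementations or AES-256 the company says it uses, and none of this is SCI’s actual code.

```python
import hashlib
import secrets

def toy_cipher(name, key, data):
    """Toy stand-in for a real cipher: XOR with a SHA-256-derived
    keystream. Illustration only -- NOT cryptographically secure."""
    stream = bytearray()
    counter = 0
    while len(stream) < len(data):
        stream += hashlib.sha256(
            name.encode() + key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return bytes(a ^ b for a, b in zip(data, stream))

# Stand-ins for the "vetted, established" cipher implementations.
CIPHERS = ["cipher_a", "cipher_b", "cipher_c"]

def cascade_encrypt(key, plaintext):
    # Step 1: randomize which inner cipher encrypts this file.
    choice = secrets.randbelow(len(CIPHERS))
    inner = toy_cipher(CIPHERS[choice], key, plaintext)
    # Step 2: encrypt that output again with an outer layer
    # (standing in for AES-256).
    return choice, toy_cipher("outer_layer", key, inner)

def cascade_decrypt(key, choice, ciphertext):
    # XOR keystreams are involutions, so applying each layer again undoes it.
    inner = toy_cipher("outer_layer", key, ciphertext)
    return toy_cipher(CIPHERS[choice], key, inner)

key = secrets.token_bytes(32)
choice, ct = cascade_encrypt(key, b"quarterly report")
assert cascade_decrypt(key, choice, ct) == b"quarterly report"
```

Note that the decryptor still has to know which inner cipher was chosen, so the “randomization sequence” must be stored or derived somewhere — and whether that indirection adds any real security on top of the outer layer is exactly the question an independent review would ask.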

It’s not clear that it does. In case after case, we’ve seen previously secure technologies compromised by the addition of functionality, features or implementations that are fundamentally flawed. In the case of the Heartbleed bug — a massive vulnerability in OpenSSL that enabled anyone to snoop on encrypted Web traffic — the flaw was reportedly introduced accidentally by an OpenSSL volunteer programmer who intended to add new functionality to the widely-used standard.

Robert Hansen, vice president of WhiteHat Labs at WhiteHat Security, pointed to another example: Acutrust, a once ambitious security firm that came up with a brilliant idea to combat phishing attacks, only to create a new problem in the process.

“Acutrust turned a normal [password] hash into a pretty picture as a convoluted way to prevent phishing and it made it super easy to brute-force every username and password offline, and didn’t help with phishing at all,” Hansen wrote in a Facebook message. “This article single handedly effectively put them out of business, FYI.”

All told, I spent more than an hour on the phone with Blech and his team. At the beginning of the call, it was clear that neither he nor any of his people were familiar with Kerckhoffs’ principle, or even appreciated the idea that having their product publicly vetted might be a good thing. But by the end of the call, things seemed to be turning around.

At first, Blech said anyone who wanted to try to break the company’s technology needed only to look to its patent on file with the U.S. Patent & Trademark Office, which he said basically explained the whole thing. I took another look at SCI’s press release about its precious patent: “One of the most interesting things about technology is the personalities behind it,” the company’s own in-house media firm crowed. No question about that.

Early in the interview, Blech said he wouldn’t want to let just anyone and everyone have access to their product; the company would want to vet the potential testers. Later in the call, the tone had changed.

“Without the decryption key, even if you have the source code, [you’re] not going to be able to get through it,” Blech said. “We don’t know the randomization sequence” chosen by the technology when it is asked to encrypt a file, he said.

Now we were getting somewhere, or at least a whole lot closer to crotchety ole’ Kerckhoffs’ principle. The company finally seemed to be opening up to the idea of an independent review. This was progress. But would SCI cease its “unhackable” marketing shenanigans until then? SCI’s Marketing Director Deirdre Murphy was non-committal, suggesting that perhaps the company would find a less controversial way to describe its product, such as “impenetrable.” I just had to sigh and end the interview.

Just minutes after that call, I received an email from SCI’s outside public relations company stating that SCI would, in fact, be publishing a request for proposal for independent testing of its technology:

“As an early stage company we were focused on coming to market and channel partnering. We now realize that specific infosec industry norms around independent [testing] need to be met – and quickly. We’ve been using the peer review and testing of existing partners, advanced prospects and early engagements up until now. We hear the infosec community’s feedback on testing, and look forward to engaging in independently conducted tests. We are today publishing requests for proposals for such testing.”

“We realize that sometimes a technology innovator’s earliest critics can be their best sources of feedback. We hope to solicit constructive involvement from  the infosec community and some of its vast array of experts.”

Kerckhoffs would be so proud.

Krebs on Security: Stress-Testing the Booter Services, Financially

This post was syndicated from: Krebs on Security and was written by: BrianKrebs. Original post: at Krebs on Security

The past few years have witnessed a rapid proliferation of cheap, Web-based services that troublemakers can hire to knock virtually any person or site offline for hours on end. Such services succeed partly because they’ve enabled users to pay for attacks with PayPal. But a collaborative effort by PayPal and security researchers has made it far more difficult for these services to transact with their would-be customers.



By offering a low-cost, shared distributed denial-of-service (DDoS) attack infrastructure, these so-called “booter” and “stresser” services have attracted thousands of malicious customers and are responsible for hundreds of thousands of attacks per year. Indeed, KrebsOnSecurity has repeatedly been targeted in fairly high-volume attacks from booter services — most notably a service run by the Lizard Squad band of miscreants who took responsibility for sidelining the Microsoft Xbox and Sony PlayStation gaming networks on Christmas Day 2014.

For more than two months in the summer of 2014, researchers with George Mason University, UC Berkeley’s International Computer Science Institute, and the University of Maryland followed the money, posing as buyers of nearly two dozen booter services in a bid to discover the PayPal accounts those services were using to accept payments. In response to their findings, PayPal began seizing booter service PayPal accounts and balances, effectively launching its own preemptive denial-of-service attacks against the payment infrastructure for these services.

PayPal will initially limit reported merchant accounts that are found to violate its terms of service (turns out, accepting payments for abusive services is a no-no). Once an account is limited, the merchant cannot withdraw or spend any of the funds in their account. This results in the loss of funds in these accounts at the time of freezing, and potentially additional losses due to opportunity costs the proprietors incur while establishing a new account. In addition, PayPal performed their own investigation to identify additional booter domains and limited accounts linked to these domains as well.

The efforts of the research team apparently brought some big-time disruption to nearly two dozen of the top booter services. Within a day or two of their interventions, the researchers said, the share of booter services that remained active quickly dropped from 70 to 80 percent to around 50 percent, and continued falling to a low of around 10 percent.


While some of the booter services went out of business shortly thereafter, more than a half-dozen shifted to accepting payments via Bitcoin (although the researchers found that this dramatically cut down on the services’ overall number of active customers). Once the targeted interventions began, the average lifespan of a PayPal account dropped to around 3.5 days, with many booters’ accounts lasting only around two days before they were abandoned.

The researchers also corroborated the outages by monitoring hacker forums where the services were marketed, chronicling complaints from angry customers and booter service operators who were inconvenienced by the disruption (see the screenshot gallery below).

A booter service proprietor advertising his wares on the forum Hackforums complains about PayPal repeatedly limiting his account.

Another booter seller on Hackforums whinges about PayPal limiting the account he uses to accept attack payments from customers.

“It’s a shame PayPal had to shut us down several times causing us to take money out of our own pocket to purchase servers, hosting and more,” says this now-defunct booter service to its former customers.

Deadlyboot went dead after the PayPal interventions. So sad.

Daily attacks from Infected Stresser dropped off precipitously following the researchers’ work.

As I’ve noted in past stories on booter service proprietors I’ve tracked down here in the United States, many of these service owners and operators are kids operating within easy reach of U.S. law enforcement. Based on the aggregated geo-location information provided by PayPal, the researchers found that over 44% of the customer and merchant PayPal accounts associated with booters are potentially owned by someone in the United States.


The research team also pored over leaked and scraped data from three popular booter services —”Asylum Stresser,” another one called “VDO,” and the booter service referenced above called “Lizard Stresser.” All three of these booter services had been previously hacked by unknown individuals. By examining the leaked data from these services, the researchers found these three services alone had attracted over 6,000 subscribers and had launched over 600,000 attacks against over 100,000 distinct victims.

Data based on leaked databases from these three booter services.

Like other booter services, Asylum, Lizard Stresser and VDO rely on a subscription model, under which subscribers can launch an unlimited number of attacks that typically last from 30 seconds to 1-3 hours and are limited to 1-4 concurrent attacks, depending on the tier of subscription purchased. The price for a subscription normally ranges from $10 to $300 USD per month, depending on the duration and number of concurrent attacks provided.

“We also find that the majority of booter customers prefer paying via PayPal and that Lizard Stresser, which only accepted Bitcoin, had a minuscule 2% signup to paid subscriber conversion rate compared to 15% for Asylum Stresser and 23% for VDO 1, which both accepted PayPal,” they wrote.
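Those conversion rates are simple ratios of paid subscribers to signups. A quick sketch using the rates reported above (the absolute signup counts below are hypothetical; the paper reports only percentages):

```python
# Signup-to-paid conversion for each service. The signup counts are
# hypothetical illustration values; the paper gives only the rates.
services = {
    "Lizard Stresser (Bitcoin only)": {"signups": 10000, "paid": 200},
    "Asylum Stresser (PayPal)":       {"signups": 10000, "paid": 1500},
    "VDO (PayPal)":                   {"signups": 10000, "paid": 2300},
}

for name, s in services.items():
    rate = 100.0 * s["paid"] / s["signups"]
    print(f"{name}: {rate:.0f}% conversion")
```

The stark gap between the Bitcoin-only service (2%) and the PayPal-accepting ones (15% and 23%) is the researchers’ key point: cutting off the convenient payment channel costs these services most of their would-be customers.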

The research team found that some of the biggest attacks from these booter services take advantage of common Internet-based hardware and software — everything from consumer gaming consoles to routers and modems to Web site content management systems — that ships with networking features which can easily be abused for attacks and that are turned on by default.

Specific examples of these include DNS amplification attacks, network time protocol (NTP) attacks, Simple Service Discovery Protocol (SSDP) attacks, and XML-RPC attacks. These attack methods are particularly appealing for booter services because they hide the true source of attacks and/or can amplify a tiny amount of attack bandwidth into a much larger assault on the victim. Such attack methods also offer the booter service virtually unlimited, free attack bandwidth, because there are tens of millions of misconfigured devices online that can be abused in these attacks.

Finally, the researchers observed a stubborn fact about these booter services that I’ve noted in several stories: the booter service front-end Web sites where customers go to pay for service and order attacks were all protected by CloudFlare, a content distribution network that specializes in helping networks stay online in the face of withering online attacks.

I have on several occasions noted that if CloudFlare adopted a policy of not enabling booter services, it could eliminate a huge conflict of interest for the company and — more importantly — help eradicate the booter industry. The company has responded that this would lead to a slippery slope of censorship, but that it will respond to all proper requests from law enforcement regarding booters. I won’t rehash this debate again here (anyone interested in CloudFlare’s take on this should see this story).

In any case, the researchers note that they contacted CloudFlare’s abuse email on June 21st, 2014 to notify the company of the abusive nature of these services.

“As of the time of writing this paper, we have not received any response to our complaints and they continue to use CloudFlare,” the paper notes. “This supports the notion that at least for our set of booters CloudFlare is a robust solution to protect their frontend servers. In addition, has a list of over 100 booters that are using CloudFlare’s services to protect their frontend servers.”

A copy of the research paper is available here (PDF).

TorrentFreak: BitTorrent Can Be Exploited for DoS Attacks, Research Warns

This post was syndicated from: TorrentFreak and was written by: Ernesto. Original post: at TorrentFreak

With tens of millions of active users at any given point in the day, the BitTorrent protocol is a force to be reckoned with.

While BitTorrent swarms are relatively harmless, a new paper published by City University London researcher Florian Adamsky reveals that there’s potential for abuse.

The paper, titled ‘P2P File-Sharing in Hell: Exploiting BitTorrent Vulnerabilities to Launch Distributed Reflective DoS Attacks’, shows that various BitTorrent protocols can be used to amplify Denial of Service attacks.

Through various experiments Adamsky has confirmed that the vulnerability affects the uTP, DHT, Message Stream Encryption and BitTorrent Sync protocols.

The attacks are most effective through the BitTorrent Sync application where the original bandwidth can be increased by a factor of 120.

For traditional torrent clients such as uTorrent and Vuze the impact is also significant, boosting attacks by 39 and 54 times respectively.
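The appeal of amplification is easy to quantify: the traffic the victim sees is the attacker’s own bandwidth multiplied by the protocol’s amplification factor. A back-of-envelope calculation using the factors reported above:

```python
# Reflection/amplification arithmetic, using the amplification
# factors reported in the paper for each reflector type.
AMP = {"BitTorrent Sync": 120, "Vuze": 54, "uTorrent": 39}

def victim_bandwidth_mbps(attacker_mbps, protocol):
    """Traffic the victim sees when attacker_mbps of spoofed-source
    requests are bounced off reflectors speaking the given protocol."""
    return attacker_mbps * AMP[protocol]

for proto in AMP:
    mbps = victim_bandwidth_mbps(10, proto)
    print(f"10 Mbps of requests via {proto}: {mbps:.0f} Mbps at the victim")
```

A modest 10 Mbps home connection thus translates into more than a gigabit per second of attack traffic when reflected off BitTorrent Sync peers, which is why these protocols are so attractive to booter operators.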

Speaking with TF, Adamsky states that it’s relatively easy to carry out a distributed reflective Denial of Service (DRDoS) attack via BitTorrent. The attacker only needs a valid info-hash, or the “secret” in case of BitTorrent Sync.

“This attack should not be so hard to run, since an attacker can collect millions of possible amplifiers by using trackers, DHT or PEX,” he explains.

“With a single BitTorrent Sync ping message, an attacker can amplify the traffic up to 120 times.”

BitTorrent Inc has been notified about the vulnerabilities and patched some in a recent beta release. For now, however, uTorrent is still vulnerable to a DHT attack. Vuze was contacted as well but has yet to release an update according to the researcher.

For users of BitTorrent-based software there is no direct security concern, other than that their clients can be made to participate in a DDoS attack without their knowledge. For them, the vulnerability mostly means a lot of wasted bandwidth.

Source: TorrentFreak, for the latest info on copyright, file-sharing, torrent sites and ANONYMOUS VPN services.

Krebs on Security: Adobe, MS Push Patches, Oracle Drops Drama

This post was syndicated from: Krebs on Security and was written by: BrianKrebs. Original post: at Krebs on Security

Adobe today pushed another update to seal nearly three dozen security holes in its Flash Player software. Microsoft also released 14 patch bundles, including a large number of fixes for computers running its new Windows 10 operating system. Not to be left out of Patch Tuesday, Oracle‘s chief security officer lobbed something of a conversational hand grenade into the security research community, which responded in kind and prompted Oracle to back down.

Adobe’s latest patch for Flash (it has issued more than a dozen this year alone) fixes at least 34 separate security vulnerabilities in Flash and Adobe AIR. Mercifully, Adobe said this time around it is not aware of malicious hackers actively exploiting any of the flaws addressed in this release.

Adobe recommends that users of Adobe Flash Player on Windows and Macintosh update to the latest version. Flash Player installed with Google Chrome will be updated automatically to the latest Google Chrome release, which includes the updated plugin on Windows, Macintosh, Linux and Chrome OS.

However, I would recommend that if you use Flash, you should strongly consider removing it, or at least hobbling it until and unless you need it. Disabling Flash in Chrome is simple enough, and can be easily reversed: On a Windows, Mac, Linux or Chrome OS installation of Chrome, type “chrome://plugins” into the address bar, and on the Plug-ins page look for the “Flash” listing: To disable Flash, click the disable link (to re-enable it, click “enable”). Windows users can remove Flash from the Add/Remove Programs panel, or use Adobe’s uninstaller for Flash Player.

If you’re concerned about removing Flash altogether, consider a dual-browser approach. That is, unplugging Flash from the browser you use for everyday surfing, and leaving it plugged in to a second browser that you only use for sites that require Flash.

If you decide to keep Flash and update it, the most recent versions should be available from the Flash home page, but beware potentially unwanted add-ons, like McAfee Security Scan. To avoid this, uncheck the pre-checked box before downloading, or grab your OS-specific Flash download from here. Windows users who browse the Web with anything other than Internet Explorer may need to apply this patch twice, once with IE and again using the alternative browser (e.g., Firefox or Opera).


Microsoft may have just released Windows 10 as a free upgrade to Windows 7 and 8 customers, but some 40 percent of the patches released today apply to the new flagship OS, according to a tally by security firm Qualys. There is even an update for Microsoft Edge, the browser that Microsoft wants to replace Internet Explorer.

Nevertheless, IE gets its own critical update (MS15-089), which addresses at least 13 flaws — most of which can be exploited remotely without any help from the user, save perhaps visiting a hacked or malicious site.

Another notable update plugs scary-looking flaws in Microsoft Office (MS15-081). Qualys says it appears the worst of the flaws fixed in the Office patch could be triggered automatically — possibly through the Outlook e-mail preview pane, for example.

According to security firm Shavlik, there are two flaws fixed in today’s release from Microsoft that are being actively exploited in the wild: One fixed in the Office Patch (CVE-2015-1642) and another in Windows itself (CVE-2015-1769). Several other vulnerabilities fixed today were publicly disclosed prior to today, increasing the risk that we could see public exploitation of these bugs soon.

If you run Windows, take some time soon to back up your data and update your system. As ever, if you experience any issues as a result of applying any of these updates, please leave a note about your experience in the comments section.


I’ve received questions from readers about a rumored software update for Java (Java 8, Update 60); I have no idea where this is coming from, but this should not be a security-related patch. Generally speaking, even-numbered Java updates are non-security related. More importantly, Oracle has moved to releasing security updates for Java on a quarterly patch cycle, except for extreme emergencies (and I’m unaware of a dire problem with Java right now, aside perhaps from having this massively buggy and insecure program installed in the first place).

Alas, not to be left out of the vulnerability madness, Oracle’s Chief Security Officer Mary Ann Davidson published a provocative blog post titled “Don’t, Just Don’t” that stirred up quite a tempestuous response from the security community today.

Davidson basically said security researchers who try to reverse engineer the company’s code to find software flaws are violating the legal agreement they acknowledged when installing the software. She also chastised researchers for spreading “a pile of steaming FUD” (a.k.a. Fear, Uncertainty and Doubt).

Oracle later unpublished the post (it is still available in Google’s cache here), but not before Davidson’s rant was lampooned endlessly on Twitter and called out by numerous security firms. My favorite so far came from Twitter user small_data, who said: “The City of Rome’s EULA stipulates Visigoths cannot recruit consultants who know about some hidden gate to gain entry.”

Images posted by Twitter users to the sarcastic hashtag #oraclefanfic.

Schneier on Security: Detecting Betrayal in Diplomacy Games

This post was syndicated from: Schneier on Security and was written by: schneier. Original post: at Schneier on Security

Interesting research on detecting betrayal in the game of Diplomacy by analyzing inter-player messages.

One harbinger was a shift in politeness. Players who were excessively polite in general were more likely to betray, and people who were suddenly more polite were more likely to become victims of betrayal, study coauthor and Cornell graduate student Vlad Niculae reported July 29 at the Annual Meeting of the Association for Computational Linguistics in Beijing. Consider this exchange from one round:

    Germany: Can I suggest you move your armies east and then I will support you? Then next year you move [there] and dismantle Turkey. I will deal with England and France, you take out Italy.

    Austria: Sounds like a perfect plan! Happy to follow through. And — thank you Bruder!

Austria’s next move was invading German territory. Bam! Betrayal.

An increase in planning-related language by the soon-to-be victim also indicated impending betrayal, a signal that emerged a few rounds before the treachery ensued. And the correspondence of soon-to-be betrayers showed an uptick in positive sentiment in the lead-up to their breach.

Working from these linguistic cues, a computer program could peg future betrayal 57 percent of the time. That might not sound like much, but it was better than the accuracy of the human players, who never saw it coming. And remember that by definition, a betrayer conceals the intention to betray; the breach is unexpected (that whole trust thing). Given that inherent deceit, 57 percent isn’t so bad.
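A crude version of this kind of cue detection can be sketched with word counts alone. The lexicons, scoring, and messages below are invented for illustration; the actual study used far richer computational-linguistics features than this:

```python
# Toy sketch of cue-based betrayal signals: count politeness and
# planning words per message and flag a sudden politeness jump.
# The lexicons are hypothetical, not the study's feature set.
POLITE = {"please", "thank", "thanks", "sorry", "appreciate"}
PLANNING = {"plan", "next", "then", "move", "will"}

def cue_counts(message):
    """Return (politeness_count, planning_count) for one message."""
    words = [w.strip(".,!?").lower() for w in message.split()]
    return (sum(w in POLITE for w in words),
            sum(w in PLANNING for w in words))

def politeness_shift(history):
    """Positive value: the latest message is more polite than the
    sender's earlier baseline (a cue associated with betrayal)."""
    if len(history) < 2:
        return 0.0
    baseline = sum(cue_counts(m)[0] for m in history[:-1]) / (len(history) - 1)
    return cue_counts(history[-1])[0] - baseline

msgs = ["I will move east then take Italy.",
        "Sounds like a perfect plan, thank you, happy to follow through!"]
print(politeness_shift(msgs))  # prints 1.0 -- a sudden jump in politeness
```

A real classifier would feed shifts like this, plus planning-language and sentiment trends, into a trained model; even then, as the article notes, it only reached 57 percent accuracy.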

Back when I was in high school, I briefly published a postal Diplomacy zine.

Academic paper.

SANS Internet Storm Center, InfoCON: green: What Was Old is New Again: Honeypots!, (Mon, Aug 10th)

This post was syndicated from: SANS Internet Storm Center, InfoCON: green and was written by: SANS Internet Storm Center, InfoCON: green. Original post: at SANS Internet Storm Center, InfoCON: green

Here at the ISC, we operate a number of honeypots. So it is nice to see how honeypots in different shapes are starting to become popular again, with even a couple of startups specializing in honeypot solutions. Back around 2001, we had products like Symantec’s ManTrap, open source efforts like the Deception Toolkit, and of course the Honeynet Project.

I don’t think honeypots ever went away (after all, we have been running a few, and the Honeynet Project still has some great tools for running them). But honeypots never really caught on in enterprise networks. I think there were several reasons for that: First, pretty much all honeypots are fairly easy to discover, and typically do not deceive the more advanced attackers enterprises are most afraid of. Second, a good honeypot deployment, in particular one that involves hard-to-detect full-interaction honeypots, can be difficult to manage. Lastly, enterprises don’t want to be accused of inviting an attacker by providing honey to trap them.

More recently, a couple of companies sprung up to solve some of these problems. They offer either an outsourced honeypot (or better deception) solution and redirect traffic from your network to their honeypot, or they leverage virtualization to make honeypots easier to deploy and manage across an existing network. In addition, they also make it easier to collect indicators from honeypots and deploy them using existing enterprise security solutions.
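At its simplest, a low-interaction honeypot is just a listener that presents a plausible banner and records who connects; the recorded addresses are the indicators fed into other security tooling. A minimal sketch (the banner string and single-connection setup are illustrative only, not any vendor’s product):

```python
import socket
import threading

def banner_honeypot(host="127.0.0.1", port=0):
    """Minimal low-interaction honeypot: present a fake service banner,
    log who connects, never offer real functionality. Sketch only --
    real deployments run hardened, isolated, managed systems."""
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.bind((host, port))   # port=0: let the OS pick a free port
    srv.listen(5)
    hits = []

    def serve_once():
        conn, addr = srv.accept()
        hits.append(addr)                         # the indicator: who probed us
        conn.sendall(b"SSH-2.0-OpenSSH_5.1\r\n")  # plausible-looking bait banner
        conn.close()

    threading.Thread(target=serve_once, daemon=True).start()
    return srv, hits

srv, hits = banner_honeypot()
# Simulate a scanner connecting to the trap:
c = socket.create_connection(srv.getsockname())
print(c.recv(64))   # the "scanner" sees only the bait banner
c.close()
```

Everything the listener records is attack traffic by definition, since no legitimate service lives there — which is what makes even this trivial setup useful for collecting indicators, and also why low-interaction traps like it are easy for a careful attacker to fingerprint.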

At Black Hat, a couple of talks focused on these newer “deception” technologies (this is what they call honeypots these days):

Breaking Honeypots for Fun and Profit (by several people from Cymmetria)

A must-read for anybody deploying low-interaction honeypots. These honeypots are simple (and of course imperfect) simulations of existing systems — for example Kippo and Dionaea. If you run one of these honeypots, you should check out the techniques outlined in the talk. It shouldn’t be too hard to adapt your honeypot to evade these detection techniques.

Bring Back the Honeypots (Haroon Meer and Marco Slaviero)

This talk gives a good summary of more modern honeypots and honeytokens. If you are familiar with John Strand’s ADHD Linux distribution, you may already know about things like booby-trapped documents.

Other talks did not deal directly with honeypot deployment, but instead presented results collected from honeypots. Honeypots in our experience have been very helpful in emulating IoT devices, and so it is no surprise that SCADA security research takes advantage of honeypots to detect and measure attack activity.

Johannes B. Ullrich, Ph.D.

(c) SANS Internet Storm Center. Creative Commons Attribution-Noncommercial 3.0 United States License.

TorrentFreak: Torrent Site Proxies Rife With Malware Injecting Scripts

This post was syndicated from: TorrentFreak and was written by: Ernesto. Original post: at TorrentFreak

In many countries, including the UK, Italy, Denmark and France, the leading torrent sites are no longer freely accessible.

These court-ordered blockades requested by the music and movie industries are becoming widespread, but so are the tools to circumvent them.

For every domain name blocked, many proxies and mirrors emerge. These sites allow people to access the blocked sites and effectively bypass the restrictions put in place by the court.

Initially, the proxy sites were launched to help users gain access to their favorite torrent sites. However, more recently the demand for circumvention tools is being abused by people who are out to make hard cash.

Instead of offering a simple workaround, many proxies add their own scripts. In some cases these scripts are harmless, but according to security researcher Gabor Szathmari the majority serve questionable content.

Szathmari examined a sample of 6,158 proxy sites and found that over 99% added their own code. Only 21 sites in the sample did not modify the original site.

“99.7% of the tested torrent mirrors are injecting additional JavaScript into the web browsing traffic. A great share of these scripts serve content with malicious intent such as malware and click-fraud,” he notes.
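The measurement itself is straightforward to sketch: fetch the same page directly and through the proxy, then diff the script tags. A minimal version using only the standard library (the example pages and URLs below are hypothetical, not from the study’s dataset):

```python
from html.parser import HTMLParser

class ScriptCollector(HTMLParser):
    """Collect the src of every <script> tag (None for inline scripts)."""
    def __init__(self):
        super().__init__()
        self.scripts = []

    def handle_starttag(self, tag, attrs):
        if tag == "script":
            self.scripts.append(dict(attrs).get("src"))

def injected_scripts(original_html, proxied_html):
    """Scripts present in the proxy's copy but not in the original --
    the kind of additions the study counted."""
    def collect(html):
        p = ScriptCollector()
        p.feed(html)
        return p.scripts
    originals = collect(original_html)
    return [s for s in collect(proxied_html) if s not in originals]

# Hypothetical pages for illustration:
original = '<html><script src="/js/site.js"></script></html>'
proxied = ('<html><script src="/js/site.js"></script>'
           '<script src="//ads.example/redirect.js"></script></html>')
print(injected_scripts(original, proxied))  # prints ['//ads.example/redirect.js']
```

A real crawl would also have to normalize URLs and account for legitimate differences between fetches, but even this naive diff is enough to flag the wholesale JavaScript injection the researcher describes.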

The researcher informs TF that many of the proxies he examined are suspicious because they use code that is either obfuscated or riddled with random redirects. These scripts pretty much all use the domain name.


Taking a closer look at the proxies reveals that several of the ads link to malware. In addition, one of the scripts generated fake views of car racing videos in the background.

The original torrent sites, including The Pirate Bay, KickassTorrents and ExtraTorrent, are aware of the problem and are trying to minimize the damage by blocking suspicious proxies and mirrors.

“It’s a serious issue. We have been fighting against it for a long time,” the ExtraTorrent team informs TF.

“Most unauthorized proxy websites loaded ExtraTorrent in a frame and added malware JavaScript code or replaced ET’s banners with others,” they add.

ExtraTorrent has been able to block several proxies, but they can’t do anything against those that use a cached version of the site. To guide users in the right direction they therefore publish a list of official mirrors on their site.

Copyright holders often warn that pirate sites may serve malware, but this research suggests that they are only making the problem worse by censoring the original sites.

“I am an advocate for unfiltered Internet, and this example shows that censorship can violate the security of end-users,” Szathmari tells TF.

Of course, some of the original sites may also run dubious ads, but the malicious proxies appear to be much worse and should be avoided.

“I would advise downloaders to always use the original sites or the official proxy sites whenever possible,” the researcher says.

“If the original sites are blocked by the ISP, I would recommend to bypass the filtering with a reputable VPN service that does not modify traffic, or a reputable mirror that does not alter the website in any way.”

Szathmari published the full findings and his research methodology in a recent blog post.

Source: TorrentFreak, for the latest info on copyright, file-sharing, torrent sites and the best VPN services.

Raspberry Pi: Solar Eclipses from Past to Future, Earth to Jupiter

This post was syndicated from: Raspberry Pi and was written by: Liz Upton. Original post: at Raspberry Pi

Liz: Here’s another space-themed post from our friends at Wolfram Research, showing how the Wolfram Language can be used to visualize solar eclipses total and partial, past and present, and as seen from Earth, Mars and Jupiter.

You may have heard that on March 20 there was a solar eclipse. Depending on where you are geographically, a solar eclipse may or may not be visible. If it is visible, local media generate a small amount of hype around the event, telling people how and when to observe it, what the weather conditions will be, and other relevant details. If the eclipse is not visible in your area, there is a high chance it will draw very little attention. But people on Wolfram Community come from all around the world, and all—novices and experienced users and developers—take part in these conversations. And it is a pleasure to witness how knowledge of the subject, of Wolfram technologies, and data from different parts of the world are shared.

Five discussions arose recently on Wolfram Community that are related to the latest solar eclipse. They are arranged below in the order they appeared on Wolfram Community. The posts roughly reflect the anticipation, observation, and data analysis of the recent eclipse, as well as computations for future and extraterrestrial eclipses.

I will take almost everything here from the Wolfram Community discussions, summarizing important and interesting points, and sometimes changing the code or visuals slightly. For complete details, I encourage you to read the original posts.

First, before the total solar eclipse happened on March 20, 2015, Wolfram’s own Jeff Bryant and Francisco Rodríguez explained how to see where geographically the eclipse is totally or partially visible. Using GeoEntities, Francisco was able to also highlight with green the countries from which at least the partial solar eclipse would be visible:

Using GeoEntities to show the geographic visibility of the March 20, 2015 eclipse

Map showing visibility of eclipse using GeoGraphics function

Jeff Bryant is in the US and Francisco Rodríguez is in Peru, so as you can see above, neither was able to see even the partial solar eclipse. The intense red area shows the visibility of the total eclipse, and the lighter red is the partial eclipse. I consoled them by telling them that quite soon—in the next decade—almost all countries in the world, including the US and Peru, will be able to observe at least a partial phase of a total solar eclipse:

Future global visibility of total and partial solar eclipses

Visual representation of future partial and total solar eclipses

Another great way to visualize chronological events is with a new Wolfram Language function, TimelinePlot. I’ve considered the last few years and the next few years, and have plotted the countries and territories (according to the ISO 3166-1 standard) where a total solar eclipse will be visible, as well as when:

TimelinePlot showing future total solar eclipses

Visual of TimelinePlot future total solar eclipses

The image above shows the incredible power of computational infographics. You see right away that a spectacular total solar eclipse will span the US from coast to coast on August 21, 2017 (see a related discussion below). You can also see that Argentina and Chile will get lucky, viewing a total eclipse twice in a row. Most subtly and curiously, the recent solar eclipse is unique in that it covered two territories almost completely: the Faroe Islands and Svalbard. This means any inhabitant of these territories could have seen the total eclipse from any location, cloudiness permitting. Usually it’s quite the opposite: the observational area of a total eclipse is much smaller than the territory it spans, and most of the inhabitants would have to travel to observe the total eclipse (fortunately, no visas needed). The behavior of the Solar System is very complex. The NASA data on solar eclipses goes just several thousand years into the past and future, losing precision drastically due to chaotic dynamics.

At the time of the eclipse, I was in Odesa, Ukraine, which was in the partial eclipse zone. I made a separate post showing my position relative to the eclipse zone and grabbing a few photos of the eclipse. Using the orthographic GeoProjection, it’s easy to show that the total eclipse zone did not really cover any majorly populated places, passing mostly above ocean water. The black line shows the boundary of the partial eclipse visibility, which covered many populated territories:

Using GeoProjection to show my position relative to eclipse zone

Visual showing location related to eclipse zone

The Faroe Islands were in the zone of the total solar eclipse, and above I show the shortest path, or geodesic, between the islands and my location. In a separate post (see further discussion below), Marco Thiel posted a link to mesmerizing footage of the total solar eclipse, shot from an airplane (to avoid any cloudiness) by a BBC crew flying above the Faroe Islands. Francisco actually showed in a comment how to compute the distance from Odesa to the partial eclipse border:

Using GeoDistance to compute distance from Odesa to partial eclipse border
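GeoDistance handles this directly in the Wolfram Language. As a rough cross-check, the geodesic between Odesa and the Faroe Islands mentioned above can be approximated with the haversine formula; the coordinates and spherical Earth radius below are assumed round values for illustration:

```python
import math

def haversine_km(lat1, lon1, lat2, lon2, radius_km=6371.0):
    """Great-circle distance on a spherical Earth, in kilometers."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlam = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(phi1) * math.cos(phi2) * math.sin(dlam / 2) ** 2
    return 2 * radius_km * math.asin(math.sqrt(a))

# Assumed approximate coordinates for Odesa and Torshavn (Faroe Islands)
print(round(haversine_km(46.48, 30.73, 62.01, -6.77)))  # on the order of 2,900 km
```

GeoDistance uses a proper ellipsoidal model, so its answer will differ slightly from this spherical estimate.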

My photos, shot with a toy camera, were of course nothing like the BBC footage. Dense cloud coverage above Ukraine permitted only a few glimpses of the chipped-off Sun. Most images were very foggy, but ImageAdjust did a perfect job of removing the pall. A sample unedited photo is available for download in my Wolfram Community post:

Using ImageAdjust on solar eclipse photos

Solar eclipse images filtered with ImageAdjust
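ImageAdjust does this automatically in the Wolfram Language. As a minimal sketch of the underlying idea, here is a plain linear contrast stretch in Python; the pixel values are made up for illustration:

```python
def stretch_contrast(pixels, lo=0, hi=255):
    """Linearly map the observed intensity range onto [lo, hi].

    A crude, single-channel version of what an automatic contrast
    adjustment does to a foggy photo.
    """
    pmin, pmax = min(pixels), max(pixels)
    if pmax == pmin:
        return list(pixels)  # flat image: nothing to stretch
    scale = (hi - lo) / (pmax - pmin)
    return [round(lo + (p - pmin) * scale) for p in pixels]

# A strip of "foggy" gray values huddled in a narrow band:
print(stretch_contrast([118, 120, 125, 130, 135, 140]))
```

After the stretch, the darkest pixel maps to 0 and the brightest to 255, which is why the pall lifts so dramatically.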

By the way, can you guess why you see the candy below? As I said in my post, the kids in my neighborhood in Ukraine observed the eclipse through the wrapper of this and other similar types of Ukrainian candy. The candy is cheap, and the wrap is opaque enough to keep eyes safe when the Sun brightens in the patches between the clouds. Do you remember floppy disks? It was once typical to look at the Sun through floppy disk film.

Candy wrappers used to see solar eclipses through

And this is where the conversation got picked up by our users. Sander Huisman, a physicist from the University of Twente in the Netherlands, asked a great question: “Wouldn’t it be cool if you could find your location just from the photos? We can calculate the coverage of the Sun for each of your photos, and inside the photo we can also find the time when it was taken. Using those two pieces of information, we should be able to identify the location of your photo, right?” I did not know how to go about such calculations, but Marco Thiel, an applied mathematician from the University of Aberdeen, UK, posted another discussion, Aftermath of the solar eclipse. Marco and Henrik Schachner, a physicist from the Radiation Therapy Center in Weilheim, Germany, tried to at least estimate the percentage of the Sun coverage using image processing and computational geometry functionality. This is the first part of the problem. If you have an idea of how to solve the second part, finding a location from a photo timestamp and percentage of the Sun cover, please join the discussion and post on Wolfram Community. Marco and Henrik used photos from Aberdeen, which was very close to the total eclipse zone.

Estimating percentage of Sun coverage using image processing and computational geometry functionality

Even though he was so close, Marco did not have a chance to capture the partial eclipse due to heavy cloud cover. By irony and luck, the photos he used came from Tanvi Chheda, a US student from Cornell University who was spending a semester abroad at Marco’s university. She grabbed the shots with her iPad, and what wonderful images they are, with the eclipse and birds together. Thank you, Tanvi, for sharing them on Wolfram Community! Here is one:

Image of eclipse from Tanvi Chheda

Well, that’s the turbulent nature of Wolfram Community—something interesting is always happening, and happening quite fast. I’ll summarize the main subject of Marco’s post in a moment (see the original Community post for more images and eclipse coverage estimation), but as Marco wrote: “Even before today’s eclipse, there were reports warning that Europe might face large-scale blackouts because the power grids would be strained by a lack of solar power. This is why I decided to use Mathematica to analyze some data about the effects on the power grid in the UK. I also used data from the Netatmo Weather Station to analyze variations in the temperature in Europe due to the eclipse.”

Marco owns a Netatmo Weather Station, and had written about its usage in an earlier post. He used an API to get data from many stations throughout Europe, and also tapped into the public data from the power grid. One of his interesting findings was a strong correlation between the eclipse period and a sharp rise in the hydroelectric power production:

Correlation between eclipse period and hydroelectric power

For more observations, code, data, and analysis, I encourage you to read through the original post. There, Marco also touched on the subject of global warming and the relevance of high-resolution crowd-sourced data. To visualize the diversity of the discussion, I imported the whole text and used the new Wolfram Language function WordCloud:

Using WordCloud to show the diversity of a Community discussion

WordCloud showing diversity of topics in Community post

It’s nice that the Wolfram Language code, as well as the text, is getting parsed, and you can see the most frequently used functions. In the code above, there are three handy tricks. First is that the option WordOrientation has diverse settings for words’ directions. Second is that the option ScalingFunctions can give the layout a good visual appeal, and the simple power law I’ve chosen is often more flexible than the logarithmic one. The third trick is subtler. It is the choice of background color to be the “bottom” color of the ColorFunction used. Then not only do the sizes of the words stress their weights, but they also fade into the background.

From the TimelinePlot infographics above, you can see that a total eclipse will span the US from northwest to southeast on August 21, 2017. I made yet another Wolfram Community post showcasing some computations with this eclipse. You should take a look at the original for all the details, but here is an image of all US counties that will be spanned during the total eclipse. Each county is colored according to the history of cloud cover above it from 2000 to 2015. This serves as an estimate for the probability of clear visibility of the eclipse. The colder the colors, the higher the chance of clear skies. That’s very approximate, though, especially taking into account the unreliability of weather stations. GeoEntities is a very nice function that selects only those geographical objects that intersect with the polygon of the total eclipse. Below is quite a cool graphic that I think only the Wolfram Language can build in a few lines of code:

Computing 2017 eclipse path and historical cloud coverage for areas

Map of historical cloud coverage and 2017 solar eclipse path

And now that we’ve looked into the past and the future of the total solar eclipses, is there anything left to ponder? As it turns out, yes—the extraterrestrial solar eclipses! We live in unique times and on a unique planet with the angular diameter of its only Moon and its only Sun pretty much identical. I mentioned above a documentary where a BBC crew shot a video of the total solar eclipse from an airplane above the Faroe Islands. Quoting the show host, Liz Bonnin, right from the airplane: “There is no other planet in the Solar System that experiences the eclipse like this one… even though the Sun is 400 times bigger than the Moon, at this moment in our Solar System’s history, the Moon happens to be 400 times closer to the Earth than the Sun, and so they appear the same size…”

So can we verify that our planet is unique? In a recent Wolfram Community post, Jeff Bryant addressed this question. He made some computations using PlanetData and PlanetaryMoonData to investigate the solar eclipses on other planets. The main goal is to compare the angular diameter of the Sun to the angular diameter of the Moon in question, when observed from the surface of the planet in question. He used the semimajor axis of the Moon’s orbit as an estimate of the Moon’s distance from its host planet. Please see the complete code in the original post. Here I mention the final results. For Earth, we have an almost perfect ratio of 1, meaning that the Moon exactly covers the Sun in a total eclipse:

Angular diameter of the Sun compared to the angular diameter of the Moon on Earth

Now here is Mars’ data. Its largest Moon, Phobos, appears only 0.6 times the diameter of the Sun when viewed from the surface of Mars, so it can’t completely cover the Sun:

Angular diameter of the Sun compared to the Moons of Mars
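The real computation in the original post uses PlanetData and PlanetaryMoonData. The same ratios can be roughly reproduced in Python with approximate mean values; all figures below are assumed for illustration, not drawn from the post:

```python
# Angular-diameter ratio of a moon to the Sun, seen from a planet's surface.
SUN_DIAMETER_KM = 1_391_400

def eclipse_ratio(moon_diameter_km, moon_orbit_km, planet_radius_km, sun_distance_km):
    # Observer on the surface: subtract the planet's radius from the
    # moon's orbital (semimajor-axis) distance.
    moon_angle = moon_diameter_km / (moon_orbit_km - planet_radius_km)
    sun_angle = SUN_DIAMETER_KM / sun_distance_km
    return moon_angle / sun_angle

# Earth's Moon: ratio close to 1, so the Moon just covers the Sun
print(round(eclipse_ratio(3474.8, 384_400, 6_371, 149_600_000), 2))
# Phobos from Mars: ratio around 0.6, so only a partial (annular) eclipse
print(round(eclipse_ratio(22.2, 9_376, 3_390, 227_900_000), 2))
```

The small-angle approximation is fine here, since both angular diameters are well under a degree.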

With human missions to Mars becoming more realistic, would you not be curious how a solar eclipse looks over there? Here are some spectacular shots captured by NASA’s Mars rover Curiosity of Phobos, passing right in front of the Sun:

NASA's Mars rover Curiosity captures Phobos passing in front of the Sun

NASA/JPL-Caltech/Malin Space Science Systems/Texas A&M Univ.

These are the sharpest images of a solar eclipse ever taken from Mars. As you can see, Phobos covers the Sun only partially (60%, according to our calculations), as seen from the surface of Mars. Such a solar eclipse is called a ring, or annular, type. Jupiter’s data seems more promising:

Angular diameter of the Sun compared to the Moons of Jupiter

Jupiter’s Moon Amalthea is the closest, with a ratio of 0.9, yet even though its orbit allows it to cover a perfect 90% of the Sun, the spectacular coronas seen during Earth eclipses are probably not visible. During a total solar eclipse on Earth, the solar corona can be seen by the naked eye:

Solar corona during a total solar eclipse on Earth
Image Courtesy of Luc Viatour

Do you have a few ideas of your own to share or a few questions to ask? Join Wolfram Community—we would love to see your contributions!

Download this post as a Computable Document Format (CDF) file.

The post Solar Eclipses from Past to Future, Earth to Jupiter appeared first on Wolfram Blog.

Krebs on Security: Inside the $100M ‘Business Club’ Crime Gang

This post was syndicated from: Krebs on Security and was written by: BrianKrebs. Original post: at Krebs on Security

New research into a notorious Eastern European organized cybercrime gang accused of stealing more than $100 million from banks and businesses worldwide provides an unprecedented, behind-the-scenes look at an exclusive “business club” that dabbled in cyber espionage and worked closely with phantom Chinese firms on Russia’s far eastern border.

In the summer of 2014, the U.S. Justice Department joined multiple international law enforcement agencies and security firms in taking down the Gameover ZeuS botnet, an ultra-sophisticated, global crime machine that infected upwards of a half-million PCs.

Thousands of freelance cybercrooks have used a commercially available form of the ZeuS banking malware for years to siphon funds from Western bank accounts and small businesses. Gameover ZeuS, on the other hand, was a closely-held, customized version secretly built by the ZeuS author himself (following a staged retirement) and wielded exclusively by a cadre of hackers that used the systems in countless online extortion attacks, spam and other illicit moneymaking schemes.

Last year’s takedown of the Gameover ZeuS botnet came just months after the FBI placed a $3 million bounty on the botnet malware’s alleged author — a Russian programmer named Evgeniy Mikhailovich Bogachev who used the hacker nickname “Slavik.” But despite those high-profile law enforcement actions, little has been shared about the day-to-day operations of this remarkably resourceful cybercrime gang.

That changed today with the release of a detailed report from Fox-IT, a security firm based in the Netherlands that secretly gained access to a server used by one of the group’s members. That server, which was rented for use in launching cyberattacks, included chat logs between and among the crime gang’s core leaders, and helped to shed light on the inner workings of this elite group.

The alleged ZeuS Trojan author, Evgeniy Mikhailovich Bogachev, a.k.a. “lucky12345”, “slavik”, “Pollingsoon”. Source: FBI “most wanted, cyber” list.


The chat logs show that the crime gang referred to itself as the “Business Club,” and counted among its members a core group of a half-dozen people supported by a network of more than 50 individuals. In true Ocean’s Eleven fashion, each Business Club member brought a cybercrime specialty to the table, including 24/7 tech support technicians, third-party suppliers of ancillary malicious software, as well as those engaged in recruiting “money mules” — unwitting or willing accomplices who could be trained or counted on to help launder stolen funds.

“To become a member of the business club there was typically an initial membership fee and also typically a profit sharing agreement,” Fox-IT wrote. “Note that the customer and core team relationship was entirely built on trust. As a result not every member would directly get full access, but it would take time until all the privileges of membership would become available.”

Michael Sandee, a principal security expert at Fox-IT and author of the report, said although Bogachev and several other key leaders of the group were apparently based in or around Krasnodar — a temperate area of Russia on the Black Sea — the crime gang had members that spanned most of Russia’s 11 time zones.

Geographic diversity allowed the group — which mainly worked regular 9-to-5 days, Monday through Friday — to conduct its cyberheists by following the rising sun across the globe: emptying accounts at Australian and Asian banks in the morning there, moving on to European banks in the afternoon, and then handing operations over to a team based in Eastern Europe that would attempt to siphon funds from banks just starting their business day in the United States.

“They would go along with the time zone, starting with banks in Australia, then continuing in Asia and following the business day wherever it was, ending the day with [attacks against banks in] the United States,” Sandee said.



Business Club members who had access to the GameOver ZeuS botnet’s panel for hijacking online banking transactions could use the panel to intercept security challenges thrown up by the victim’s bank — including one-time tokens and secret questions — as well as the victim’s response to those challenges. The gang dubbed its botnet interface “World Bank Center,” with a tagline beneath that read: “We are playing with your banks.”

The business end of the Business Club’s peer-to-peer botnet, dubbed “World Bank Center.” Image: Fox-IT


Aside from their role in siphoning funds from Australian and Asian banks, Business Club members based in the far eastern regions of Russia also helped the gang cash out some of their most lucrative cyberheists, Fox-IT’s research suggests.

In April 2011, the FBI issued an alert warning that cyber thieves had stolen approximately $20 million in the year prior from small to mid-sized U.S. companies through a series of fraudulent wire transfers sent to Chinese economic and trade companies located on or near the country’s border with Russia.

In that alert, the FBI warned that the intended recipients of the fraudulent, high-dollar wires were companies based in the Heilongjiang province of China, and that these firms were registered in port cities located near the Russia-China border. The FBI said the companies all used the name of a Chinese port city in their names, such as Raohe, Fuyuan, Jixi City, Xunke, Tongjiang, and Dongning, and that the official name of the companies also included the words “economic and trade,” “trade,” and “LTD”. The FBI further advised that recipient entities usually held accounts with the Agricultural Bank of China, the Industrial and Commercial Bank of China, and the Bank of China.

Fox-IT said its access to the gang revealed documents that showed members of the group establishing phony trading and shipping companies in the Heilongjiang province — Raohe county and another in Suifenhe — two cities adjacent to a China-Russia border crossing just north of Vladivostok.

Remittance slips discovered by Fox-IT show records of wire transfers that the Business Club executed from hacked accounts in the United States and Europe to accounts tied to phony shipping companies in China on the border with Russia. Image: Fox-IT

Sandee said the area in and around Suifenhe began to develop several major projects for economic cooperation between China and Russia beginning in the first half of 2012. Indeed, this Slate story from 2009 describes Suifenhe as an economy driven by Russian shoppers on package tours, noting that there is a rapidly growing population of Russian expatriates living in the city.

“So it is not unlikely that peer-to-peer ZeuS associates would have made use of the positive economic climate and business friendly environment to open their businesses right there,” Fox-IT said in its report. “This shows that all around the world Free Trade Zones and other economic incentive areas are some of the key places where criminals can set up corporate accounts, as they are promoting business. And without too many problems, and with limited exposure, can receive large sums of money.”

Remittance found by Fox-IT from Wachovia Bank in New York to a tongue-in-cheek-named Chinese front company in Suifenhe called “Muling Shuntong Trading.” Image: Fox-IT

KrebsOnSecurity publicized several exclusive stories about U.S.-based businesses robbed of millions of dollars from cyberheists that sent the stolen money in wires to Chinese firms, including $1.66M in Limbo After FBI Seizes Funds from Cyberheist, and $1.5 million Cyberheist Ruins Escrow Firm.

The red arrows indicate the border towns of Raohe (top) and Suifenhe (below). Image: Fox-IT


The Business Club regularly divvied up the profits from its cyberheists, although Fox-IT said it lamentably doesn’t have insight into how exactly that process worked. However, Slavik — the architect of ZeuS and Gameover ZeuS — didn’t share his entire crime machine with the other Club members. According to Fox-IT, the malware writer converted part of the botnet that was previously used for cyberheists into a distributed espionage system that targeted specific information from computers in several neighboring nations, including Georgia, Turkey and Ukraine.

Beginning in late fall 2013 — about the time that conflict between Ukraine and Russia was just beginning to heat up — Slavik retooled a cyberheist botnet to serve as purely a spying machine, and began scouring infected systems in Ukraine for specific keywords in emails and documents that would likely only be found in classified documents, Fox-IT found.

“All the keywords related to specific classified documents or Ukrainian intelligence agencies,” Fox-IT’s Sandee said. “In some cases, the actual email addresses of persons that were working at the agencies.”

Likewise, the keyword searches that Slavik used to scour bot-infected systems in Turkey suggested the botmaster was searching for specific files from the Turkish Ministry of Foreign Affairs or the Turkish KOM – a specialized police unit. Sandee said it’s clear that Slavik was looking to intercept communications about the conflict in Syria on Turkey’s southern border — one that Russia has supported by reportedly shipping arms into the region.

“The keywords are around arms shipments and Russian mercenaries in Syria,” Sandee said. “Obviously, this is something Turkey would be interested in, and in this case it’s obvious that the Russians wanted to know what the Turkish know about these things.”

According to Sandee, Slavik kept this activity hidden from his fellow Business Club members, at least some of whom hailed from Ukraine.

“The espionage side of things was purely managed by Slavik himself,” Sandee said. “His co-workers might not have been happy about that. They would probably have been happy to work together on fraud, but if they would see the system they were working on was also being used for espionage against their own country, they might feel compelled to use that against him.”

Whether Slavik’s former co-workers would be able to collect a reward even if they did turn on their former boss is debatable. For one thing, he is probably untouchable as long as he remains in Russia. But someone like that almost certainly has protection higher up in the Russian government.

Indeed, Fox-IT’s report concludes it’s evident that Slavik was involved in more than just the crime ring around peer-to-peer ZeuS.

“We could speculate that due to this part of his work he had obtained a level of protection, and was able to get away with certain crimes as long as they were not committed against Russia,” Sandee wrote. “This of course remains speculation, but perhaps it is one of the reasons why he has as yet not been apprehended.”

The Fox-IT report, available here (PDF), is the subject of a talk today at the Black Hat security conference in Las Vegas, presented by Fox-IT’s Sandee, Elliott Peterson of the FBI, and Tillmann Werner of Crowdstrike.

Are you fascinated by detailed stories about real-life organized cybercrime operations? If so, you’ll almost certainly enjoy reading my book, Spam Nation: The Inside Story of Organized Cybercrime – From Global Epidemic to Your Front Door.

Schneier on Security: Face Recognition by Thermal Imaging

This post was syndicated from: Schneier on Security and was written by: schneier. Original post: at Schneier on Security

New research can identify a person by reading their thermal signature in complete darkness and then matching it with ordinary photographs.

Research paper:

Abstract: Cross modal face matching between the thermal and visible spectrum is a much desired capability for night-time surveillance and security applications. Due to a very large modality gap, thermal-to-visible face recognition is one of the most challenging face matching problem. In this paper, we present an approach to bridge this modality gap by a significant margin. Our approach captures the highly non-linear relationship between the two modalities by using a deep neural network. Our model attempts to learn a non-linear mapping from visible to thermal spectrum while preserving the identity information. We show substantive performance improvement on a difficult thermal-visible face dataset. The presented approach improves the state-of-the-art by more than 10% in terms of Rank-1 identification and bridge the drop in performance due to the modality gap by more than 40%.

Raspberry Pi: Autonomous recording for marine ecology

This post was syndicated from: Raspberry Pi and was written by: Helen Lynn. Original post: at Raspberry Pi

Cetacean species, including whales, dolphins and porpoises, are considered indicators of the health of marine ecosystems around the world. While a number are known to be endangered, a lack of data means that the population size and conservation status of many species are impossible to estimate. These animals are vulnerable to the effects of human activities and the noise they cause.

In Brazil, researchers carry out underwater acoustic monitoring to assess the ecological impact of industrial activities on the coast. As well as quantifying human‑generated noise, this type of study is very useful for scientists studying cetaceans, because the efficient transmission of sound in water means the tones and clicks they produce can be detected hundreds of kilometres away. However, commercial underwater recorders are expensive and inflexible, with proprietary software and hardware that is difficult or impossible to modify. Earlier this summer, though, a team from the University of São Paulo in Brazil published a paper about the flexible, low-cost autonomous recorder they have built, based on a Raspberry Pi, in open-access journal PLOS ONE. This is what it looks like:

The hydrophone – an underwater microphone – is on the top, protected by the cage that you can see, which is made of stainless steel; when deployed, all of the other components are inside the 50cm PVC case on the right. With walls approximately 9.5mm thick, this enclosure successfully withstands pressures of up to 10 bar, equivalent to those experienced almost 100m underwater, in pressure‑chamber testing.

Output from the hydrophone is passed via a signal-conditioning board and then a USB audio codec including an analogue-to-digital converter before being processed and stored by the Raspberry Pi. There’s a battery pack of five ordinary D-size Duracell batteries, with room in the enclosure to add four more such packs in parallel, and a power management module including a real-time clock.

So that the device doesn’t have to consume power during transport to the deployment location, the power management unit incorporates a Hall effect latch, controlled by a magnet on the outside of the enclosure, to connect or disconnect the batteries via a relay. Once the unit has been deployed, the real-time clock can control the relay to power the Raspberry Pi on or off at scheduled times. For their tests, the team used a 128GB SD card, one of the largest compatible cards they could find, although the limiting factor for autonomous functioning of the recorder proved to be power rather than data storage capacity.

The team deployed their autonomous recorders to locations on the eastern and southeastern coast of Brazil for field-testing, and they all performed satisfactorily, monitoring marine traffic and whale and dolphin populations. From the results of their tests they estimate that with the maximum number of five battery packs, the devices could provide almost two weeks of continuous recording, or over four months of recording at one hour per day. They used a Raspberry Pi Model A; the Model A+, smaller and with even lower power consumption, would eke out the power for longer.
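As a back-of-the-envelope check on those endurance figures, here is a rough power budget in Python; the cell capacity and average draw are assumed for illustration and are not taken from the paper:

```python
# Rough endurance estimate for the recorder, under assumed numbers:
# a pack is five alkaline D cells in series (~1.5 V, ~15 Ah each),
# and the Model A plus codec draws roughly 1.7 W on average while recording.
PACK_WH = 5 * 1.5 * 15   # ~112 Wh of energy per battery pack
packs = 5                # maximum number of packs the enclosure holds
draw_w = 1.7             # assumed average power draw, in watts

hours = packs * PACK_WH / draw_w
print(f"continuous recording: ~{hours / 24:.0f} days")  # close to the ~2 weeks reported
```

Naively dividing by a one-hour-per-day schedule would overestimate the scheduled-recording endurance, since the real-time clock and power-management module keep drawing a little power while the Pi is off, which is consistent with the paper's more modest four-month figure.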

The recorder has various settings that users can alter to optimise for different mission requirements: the scheduling of recording times and the nature of any automatic post-processing can be adjusted, as can the recording sample rate (the whistles and clicks of dolphins are best captured at a higher sample rate than low-frequency whale vocalisations, for example). At an estimated cost of US $500, it should be an attractive option for research groups faced with the alternative of splashing out six times as much on a less customisable commercial device.

It’s very good indeed to see Raspberry Pi used to build low-cost open hardware for research and study. The last time I poked around the web looking for open labware, there were some encouraging examples, but they were a little thin on the ground; now the most slapdash of searches returns a clutch of exciting results, from OpenTrons’ crowdfunded liquid-handling robot to a <$100 fluorescence microscope via my personal did-they-really-make-that? favourite, 3D-printed Raman spectrometer ramanPi. Setting up a research group or a teaching lab in a few years’ time might be a very different thing to what scientists have been used to.

The post Autonomous recording for marine ecology appeared first on Raspberry Pi.

Krebs on Security: Chinese VPN Service as Attack Platform?

This post was syndicated from: Krebs on Security and was written by: BrianKrebs. Original post: at Krebs on Security

Hardly a week goes by without a news story about state-sponsored Chinese cyberspies breaking into Fortune 500 companies to steal intellectual property, personal data and other invaluable assets. Now, researchers say they’ve unearthed evidence that some of the same Chinese hackers also have been selling access to compromised computers within those companies to help perpetuate future breaches.

The so-called “Great Firewall of China” is an effort by the Chinese government to block citizens from accessing specific content and Web sites that the government has deemed objectionable. Consequently, many Chinese seek to evade such censorship by turning to virtual private network or “VPN” services that allow users to tunnel their Internet connections to locations beyond the control of the Great Firewall.


Security experts at RSA Research say they’ve identified an archipelago of Chinese-language virtual private network (VPN) services marketed to Chinese online gamers and those wishing to evade censorship, but which also appear to be used as an active platform for launching attacks on non-Chinese corporations while obscuring the origins of the attackers.

Dubbed by RSA as “Terracotta VPN” (a reference to the Chinese Terracotta Army), this satellite array of VPN services “may represent the first exposure of a PRC-based VPN operation that maliciously, efficiently and rapidly enlists vulnerable servers around the world,” the company said in a report released today.

The hacker group thought to be using Terracotta to launch and hide attacks is known by a number of code names, including the “Shell_Crew” and “Deep Panda.” Security experts have tied this Chinese espionage gang to some of the largest data breaches in U.S. history, including the recent attack on the U.S. Office of Personnel Management, as well as the breaches at U.S. healthcare insurers Anthem and Premera.

According to RSA, Terracotta VPN has more than 1,500 nodes around the world where users can pop up on the Internet. Many of those locations appear to be little more than servers at Internet service providers in the United States, Korea, Japan and elsewhere that offer cheap virtual private servers.

But RSA researchers said they discovered that many of Terracotta’s exit nodes were compromised Windows servers that were “harvested” without the victims’ knowledge or permission, including systems at a Fortune 500 hotel chain; a hi-tech manufacturer; a law firm; a doctor’s office; and a county government of a U.S. state.

The report steps through a forensics analysis that RSA conducted on one of the compromised VPN systems, tracking each step the intruders took to break into the server and ultimately enlist the system as part of the Terracotta VPN network.

“All of the compromised systems, confirmed through victim-communication by RSA Research, are Windows servers,” the company wrote. “RSA Research suspects that Terracotta is targeting vulnerable Windows servers because this platform includes VPN services that can be configured quickly (in a matter of seconds).”

RSA says suspected nation-state actors have leveraged at least 52 Terracotta VPN nodes to exploit sensitive targets among Western government and commercial organizations. The company said it received a specific report from a large defense contractor concerning 27 different Terracotta VPN node Internet addresses that were used to send phishing emails targeting users in their organization.

“Out of the thirteen different IP addresses used during this campaign against this one (APT) target, eleven (85%) were associated with Terracotta VPN nodes,” RSA wrote of one cyber espionage campaign it investigated. “Perhaps one of the benefits of using Terracotta for Advanced Threat Actors is that their espionage related network traffic can blend-in with ‘otherwise-legitimate’ VPN traffic.”


RSA’s report includes a single screen shot of software used by one of the commercial VPN services marketed on Chinese sites and tied to the Terracotta network, but for me this was just a tease: I wanted a closer look at this network, yet RSA (or more likely, the company’s lawyers) carefully omitted any information in its report that would make it easy to locate the sites selling or offering the Terracotta VPN.

RSA said the Web sites advertising the VPN services are marketed on Chinese-language Web sites that are for the most part linked by common domain name registrant email addresses and are often hosted on the same infrastructure with the same basic Web content. Along those lines, the company did include one very useful tidbit in its report: A section designed to help companies detect servers that may be compromised warned that any Web servers seen phoning home to 8800free[dot]info should be considered hacked.
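
That indicator is straightforward to put to work. As a minimal sketch (the log format and field layout below are assumptions; real DNS logs such as BIND query logs or Windows DNS debug logs need their own parsers), a script can flag any internal host that resolved the indicator domain:

```python
import re

# Indicator from the RSA report: servers contacting 8800free[dot]info
# should be considered potentially compromised.
INDICATOR = "8800free.info"

def flag_compromised(log_lines):
    """Return the set of client IPs that queried the indicator domain.

    Assumes a simple 'client_ip queried domain' one-line text format,
    purely for illustration.
    """
    hits = set()
    for line in log_lines:
        m = re.match(r"(\d+\.\d+\.\d+\.\d+)\s+queried\s+(\S+)", line)
        if not m:
            continue
        domain = m.group(2).rstrip(".")
        # Match the domain itself and any of its subdomains.
        if domain == INDICATOR or domain.endswith("." + INDICATOR):
            hits.add(m.group(1))
    return hits

logs = [
    "10.0.0.5 queried www.example.com.",
    "10.0.0.9 queried 8800free.info.",
    "10.0.0.9 queried cdn.8800free.info.",
]
print(sorted(flag_compromised(logs)))  # ['10.0.0.9']
```

The same check works equally well against proxy logs or NetFlow exports, as long as you can map each record to a client address and a destination name.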

A lookup of the historic registration records on 8800free[dot]info shows it was originally registered in 2010 to someone using the email address “” Among the nine other domains registered to is 517jiasu[dot]cn, an archived version of which is available here.

Domaintools shows that in 2013 the registration record for 8800free[dot]info was changed to include the email address “” Helpfully, that email was used to register at least 39 other sites, including quite a few that are or were at one time advertising similar-looking VPN services.

Pivoting off the historic registration records for many of those sites turns up a long list of VPN sites registered to other interesting email addresses, including “,” “” and “” (click the email addresses for a list of domains registered to each).

Armed with lists of dozens of VPN sites, it wasn’t hard to find several sites offering different VPN clients for download. I installed each on a carefully isolated virtual machine (don’t try this at home, kids!). Here’s one of those sites:

A Google-translated version of one of the sites offering the VPN software and service that RSA has dubbed “Terracotta.”

All told, I managed to download, install and use at least three VPN clients from VPN service domains tied to the above-mentioned email addresses. The Chinese-language clients were remarkably similar in overall appearance and function, and listed exit nodes via tabs for several countries, including Canada, Japan, South Korea and the United States, among others. Here is one of the VPN clients I played with in researching this story:


This one was far more difficult to use, and crashed repeatedly when I first tried to take it for a test drive:


None of the VPN clients I tried would list the Internet addresses of the individual nodes. However, each node in the network can be discovered simply by running some type of network traffic monitoring tool in the background (I used Wireshark), and logging the address that is pinged when one clicks on a new connection.
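
That manual step can be partly automated. Here is a minimal sketch (assuming tshark-style one-line packet summaries; the capture format and addresses are illustrative assumptions) that collects the unique destination addresses your machine contacts as you click through exit nodes:

```python
import re

def extract_node_ips(capture_lines, local_ip):
    """Collect unique destination IPs from one-line packet summaries.

    Each click on a new VPN exit node produces outbound traffic to that
    node's address; filtering on our own source address leaves the
    candidate node list, in the order the nodes were contacted.
    """
    ips = []
    for line in capture_lines:
        # Matches the 'src -> dst' portion of a tshark-style summary line.
        m = re.search(r"(\d+\.\d+\.\d+\.\d+) -> (\d+\.\d+\.\d+\.\d+)", line)
        if m and m.group(1) == local_ip and m.group(2) not in ips:
            ips.append(m.group(2))
    return ips

# Hypothetical capture excerpt (port 1723 is PPTP, a common Windows VPN).
capture = [
    "1 0.000 192.168.1.10 -> 203.0.113.7 TCP 1723",
    "2 0.013 203.0.113.7 -> 192.168.1.10 TCP 1723",
    "3 9.502 192.168.1.10 -> 198.51.100.22 TCP 1723",
]
print(extract_node_ips(capture, "192.168.1.10"))
```

Running a capture in the background while cycling through every tab in the client yields the full node list without the client ever displaying it.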

RSA said it found more than 500 Terracotta servers that were U.S. based, but I must have gotten in on the fun after the company started notifying victim organizations because I found only a few dozen U.S.-based hosts in any of the VPN clients I checked. And most of the ones I did find that were based in the United States appeared to be virtual private servers at a handful of hosting companies.

The one exception I found was a VPN node tied to a dedicated Windows server for the Web site of a company in Michigan that manufactures custom-made chairs for offices, lounges and meeting rooms. That company did not return calls seeking comment.

In addition to the U.S.-based hosts, I managed to step through a huge number of systems based in South Korea. I didn’t have time to check whether any of the Korean exit nodes were interesting, but here’s the list I came up with in case anyone wants to dig in. Nor have I had time to look up the rest of the nodes in what RSA is calling the Terracotta network. Here’s a simpler list of just the organizational names attached to each record.

Assuming RSA’s research is accurate (and I have no reason to doubt that it is), the idea of hackers selling access to hacked PCs for anonymity and stealth online is hardly a new one. In Sept. 2011, I wrote about how the Russian cybercriminals responsible for building the infamous TDSS botnet were selling access to computers sickened with the malware via a proxy service called AWMProxy, even allowing customers to pay for the access with PayPal, Visa and MasterCard.

It is, after all, incredibly common for malicious hackers to use systems they’ve hacked to help perpetrate future cybercrimes – particularly espionage attacks. A classified map of the United States obtained by NBC last week showing the victims of Chinese cyber espionage over the past five years lights up like so many exit nodes in a VPN network.

Source: NBC


Schneier on Security: Backdoors Won’t Solve Comey’s Going Dark Problem

This post was syndicated from: Schneier on Security and was written by: schneier. Original post: at Schneier on Security

At the Aspen Security Forum two weeks ago, James Comey (and others) explicitly talked about the “going dark” problem, describing the specific scenario they are concerned about. Maybe others have heard the scenario before, but it was a first for me. It centers around ISIL operatives abroad and ISIL-inspired terrorists here in the US. The FBI knows who the Americans are, can get a court order to carry out surveillance on their communications, but cannot eavesdrop on the conversations, because they are encrypted. They can get the metadata, so they know who is talking to who, but they can’t find out what’s being said.

“ISIL’s M.O. is to broadcast on Twitter, get people to follow them, then move them to Twitter Direct Messaging” to evaluate if they are a legitimate recruit, he said. “Then they’ll move them to an encrypted mobile-messaging app so they go dark to us.”


The FBI can get court-approved access to Twitter exchanges, but not to encrypted communication, Comey said. Even when the FBI demonstrates probable cause and gets a judicial order to intercept that communication, it cannot break the encryption for technological reasons, according to Comey.

If this is what Comey and the FBI are actually concerned about, they’re getting bad advice — because their proposed solution won’t solve the problem. Comey wants communications companies to give them the capability to eavesdrop on conversations without the conversants’ knowledge or consent; that’s the “backdoor” we’re all talking about. But the problem isn’t that most encrypted communications platforms are securely encrypted, or even that some are — the problem is that there exists at least one securely encrypted communications platform on the planet that ISIL can use.

Imagine that Comey got what he wanted. Imagine that iMessage and Facebook and Skype and everything else US-made had his backdoor. The ISIL operative would tell his potential recruit to use something else, something secure and non-US-made. Maybe an encryption program from Finland, or Switzerland, or Brazil. Maybe Mujahedeen Secrets. Maybe anything. (Sure, some of these will have flaws, and they’ll be identifiable by their metadata, but the FBI already has the metadata, and the better software will rise to the top.) As long as there is something that the ISIL operative can move them to, some software that the American can download and install on their phone or computer, or hardware that they can buy from abroad, the FBI still won’t be able to eavesdrop.

And by pushing these ISIL operatives to non-US platforms, the FBI also loses access to the metadata it otherwise would have.

Convincing US companies to install backdoors isn’t enough; in order to solve this going dark problem, the FBI has to ensure that an American can only use backdoored software. And the only way to do that is to prohibit the use of non-backdoored software, which is the sort of thing that the UK’s David Cameron said he wanted for his country in January:

But the question is are we going to allow a means of communications which it simply isn’t possible to read. My answer to that question is: no, we must not.

And that, of course, is impossible. Jonathan Zittrain explained why. And Cory Doctorow outlined what trying would entail:

For David Cameron’s proposal to work, he will need to stop Britons from installing software that comes from software creators who are out of his jurisdiction. The very best in secure communications are already free/open source projects, maintained by thousands of independent programmers around the world. They are widely available, and thanks to things like cryptographic signing, it is possible to download these packages from any server in the world (not just big ones like Github) and verify, with a very high degree of confidence, that the software you’ve downloaded hasn’t been tampered with.


This, then, is what David Cameron is proposing:

* All Britons’ communications must be easy for criminals, voyeurs and foreign spies to intercept.

* Any firms within reach of the UK government must be banned from producing secure software.

* All major code repositories, such as Github and Sourceforge, must be blocked.

* Search engines must not answer queries about web-pages that carry secure software.

* Virtually all academic security work in the UK must cease — security research must only take place in proprietary research environments where there is no onus to publish one’s findings, such as industry R&D and the security services.

* All packets in and out of the country, and within the country, must be subject to Chinese-style deep-packet inspection and any packets that appear to originate from secure software must be dropped.

* Existing walled gardens (like iOS and games consoles) must be ordered to ban their users from installing secure software.

* Anyone visiting the country from abroad must have their smartphones held at the border until they leave.

* Proprietary operating system vendors (Microsoft and Apple) must be ordered to redesign their operating systems as walled gardens that only allow users to run software from an app store, which will not sell or give secure software to Britons.

* Free/open source operating systems — that power the energy, banking, ecommerce, and infrastructure sectors — must be banned outright.

As extreme as it reads, without all of that, the ISIL operative would be able to communicate securely with his potential American recruit. And all of this is not going to happen.

Last week, former NSA director Mike McConnell, former DHS secretary Michael Chertoff, and former deputy defense secretary William Lynn published a Washington Post op-ed opposing backdoors in encryption software. They wrote:

Today, with almost everyone carrying a networked device on his or her person, ubiquitous encryption provides essential security. If law enforcement and intelligence organizations face a future without assured access to encrypted communications, they will develop technologies and techniques to meet their legitimate mission goals.

I believe this is true. Already one is being talked about in the academic literature: lawful hacking.

Perhaps the FBI’s reluctance to accept this is based on their belief that all encryption software comes from the US, and therefore is under their influence. Back in the 1990s, during the first Crypto Wars, the US government had a similar belief. To convince them otherwise, George Washington University surveyed the cryptography market in 1999 and found that there were over 500 companies in 70 countries manufacturing or distributing non-US cryptography products. Maybe we need a similar study today.

This essay previously appeared on Lawfare.

TorrentFreak: Sweden’s Largest Streaming Site Will Close After Raid

This post was syndicated from: TorrentFreak and was written by: Andy. Original post: at TorrentFreak

While millions associate Sweden with BitTorrent through its connections with The Pirate Bay, over the past several years the public has increasingly been obtaining its content in other ways.

Thanks to cheap bandwidth and an appetite for instant gratification, so-called streaming portals have grown in popularity, with movies and TV shows just a couple of clicks away in convenient Netflix-style interfaces.

Founded in 2011, Swefilmer is currently Sweden’s most popular streaming movie and TV show site. Research last year from Media Vision claimed that 25% of all web TV viewing in the country was carried out on Swefilmer and another similar site, Dreamfilm.

According to Alexa the site is currently the country’s 100th most popular domain, but in the next three days it will shut down for good.


The revelation comes from the site’s admin, who has just been revealed as local man Ola Johansson. He says that a surprise and unwelcome visit made it clear that he could not continue.

In a YouTube video posted yesterday, Johansson reports that earlier this month he was raided by the police who seized various items of computer equipment and placed him under arrest.

“It’s been a tough month to say the least. On 8 July, I received a search by the police at home. I lost a computer, mobile phone and other things,” Johansson says.

While most suspects in similar cases are released after a few hours or perhaps overnight, Johansson says he was subjected to an extended detention.

“I got to sit in jail for 90 hours. When I came out on Monday [after being raided on Wednesday] the site had been down since Friday,” he explains.

The Swede said he noticed something was amiss at the beginning of July when he began experiencing problems with the Russian server that was used to host the site’s videos.

“It started when all things from disappeared. That’s the service where we have uploaded all the videos,” Johansson says.

While the site remains online for now, the Swede says that this Friday Swefilmer will close down for good. The closure will mark the end of an era but since he is now facing a criminal prosecution that’s likely to conclude in a high-profile trial, Johansson has little choice but to pull the plug.

The site’s considerable userbase will be disappointed with the outcome, but there are others who welcome the crackdown.

“We are not an anonymous Hollywood studio,” said local director Anders Nilsson in response to the news.

“We are a group of film makers and we will not give up when someone spits in our faces by stealing our movies and putting them on criminal sites to share them in the free world. It is just as insulting as if someone had stolen the purely physical property.”

Aside from creating a gap in the unauthorized streaming market, the forthcoming closure of Swefilmer will have repercussions in the courtroom too, particularly concerning an important legal process currently playing out in Sweden.

Last November, Universal Music, Sony Music, Warner Music, Nordisk Film and the Swedish Film Industry filed a lawsuit in the Stockholm District Court against local ISP Bredbandsbolaget (The Broadband Company). It demands that the ISP blocks subscriber access to The Pirate Bay and also Swefilmer.

Even after negotiation Bredbandsbolaget refused to comply, so the parties will now meet in an October hearing to determine the future of website blocking in Sweden.

It is believed that the plaintiffs in the case were keen to tackle a torrent site and a streaming site in the same process, but whether Swefilmer will now be replaced by another site is currently unknown. If it is, Dreamfilm could be the most likely candidate.

Source: TorrentFreak, for the latest info on copyright, file-sharing, torrent sites and the best VPN services.

Schneier on Security: New RC4 Attack

This post was syndicated from: Schneier on Security and was written by: schneier. Original post: at Schneier on Security

New research: “All Your Biases Belong To Us: Breaking RC4 in WPA-TKIP and TLS,” by Mathy Vanhoef and Frank Piessens:

Abstract: We present new biases in RC4, break the Wi-Fi Protected Access Temporal Key Integrity Protocol (WPA-TKIP), and design a practical plaintext recovery attack against the Transport Layer Security (TLS) protocol. To empirically find new biases in the RC4 keystream we use statistical hypothesis tests. This reveals many new biases in the initial keystream bytes, as well as several new long-term biases. Our fixed-plaintext recovery algorithms are capable of using multiple types of biases, and return a list of plaintext candidates in decreasing likelihood.

To break WPA-TKIP we introduce a method to generate a large number of identical packets. This packet is decrypted by generating its plaintext candidate list, and using redundant packet structure to prune bad candidates. From the decrypted packet we derive the TKIP MIC key, which can be used to inject and decrypt packets. In practice the attack can be executed within an hour. We also attack TLS as used by HTTPS, where we show how to decrypt a secure cookie with a success rate of 94% using 9·2^27 ciphertexts. This is done by injecting known data around the cookie, abusing this using Mantin’s ABSAB bias, and brute-forcing the cookie by traversing the plaintext candidates. Using our traffic generation technique, we are able to execute the attack in merely 75 hours.
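
The kind of single-byte keystream bias the paper builds on is easy to observe directly. Here is a minimal sketch (a pure-Python RC4, with key size and trial count chosen only for illustration) measuring the long-known Mantin-Shamir bias: the second keystream byte equals zero with probability roughly 1/128, about twice what a uniform byte would give:

```python
import os
from collections import Counter

def rc4_keystream(key, n):
    """Return the first n RC4 keystream bytes for the given key."""
    S = list(range(256))
    j = 0
    for i in range(256):                      # KSA: key-scheduling algorithm
        j = (j + S[i] + key[i % len(key)]) % 256
        S[i], S[j] = S[j], S[i]
    i = j = 0
    out = []
    for _ in range(n):                        # PRGA: keystream generation
        i = (i + 1) % 256
        j = (j + S[i]) % 256
        S[i], S[j] = S[j], S[i]
        out.append(S[(S[i] + S[j]) % 256])
    return out

# Sample the second keystream byte under many random 128-bit keys.
trials = 20000
counts = Counter(rc4_keystream(os.urandom(16), 2)[1] for _ in range(trials))

# A uniform byte would be 0 with probability 1/256 (~0.0039); RC4's
# second byte hits 0 roughly twice as often.
print(counts[0] / trials, "vs uniform", 1 / 256)
```

The paper's contribution is finding many more such biases (via statistical hypothesis tests) and combining them into practical plaintext recovery; this sketch only demonstrates the oldest and most famous one.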

News articles.

We need to deprecate the algorithm already.

Errata Security: Infosec’s inability to quantify risk

This post was syndicated from: Errata Security and was written by: Robert Graham. Original post: at Errata Security

Infosec isn’t a real profession. Among the things missing is proper “risk analysis”. Instead of quantifying risk, we treat it as an absolute. Risk is binary: either there is risk or there isn’t. We respond to risk emotionally rather than rationally, claiming all risk needs to be removed. This is why nobody listens to us. Business leaders quantify and prioritize risk, but we don’t, so our useless advice is ignored.

An example of this is the car hacking stunt by Charlie Miller and Chris Valasek, where they turned off the engine at freeway speeds. This has led to an outcry of criticism in our community from people who haven’t quantified the risk. Any rational measure of that stunt’s risk shows it to be pretty small, while the benefits are very large.

In college, I owned a poorly maintained VW bug that would occasionally lose power on the freeway, such as from an electrical connection falling off from vibration. I caused more risk by not maintaining my car than these security researchers did.

Indeed, cars losing power on the freeway is a rather common occurrence. We often see cars on the side of the road. Few accidents are caused by such cars. Sure, they add risk, but so do people abruptly changing lanes.

No human is a perfect driver. Every time we get into our cars, instead of cycling or taking public transportation, we add risk to those around us. The majority of those criticizing this hacking stunt have caused more risk to other drivers this last year by commuting to work. They cause this risk not for some high ideal of improving infosec, but merely for personal convenience. Infosec is legendary for its hypocrisy; this is just one more example.

Google, Tesla, and other companies are creating “self-driving cars”. Self-driving cars will always struggle to cope with unpredictable human drivers, and will occasionally cause accidents. However, in the long run, self-driving cars will be vastly safer. To reach that point, we need to quantify risk. We need to be able to show that for every life lost due to self-driving cars, two have been saved because they are inherently safer. But here’s the thing: if we use the immature risk analysis from the infosec “profession”, we’ll always point to the one life lost, and never quantify the two lives saved. Using infosec risk analysis, safer self-driving cars will never happen.
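
The comparison described above is just an expected-value calculation. As a sketch with made-up numbers (every rate below is hypothetical, chosen only to show the arithmetic, not real accident statistics):

```python
# Hypothetical fatality rates per billion vehicle-miles, purely for
# illustration: assume self-driving cars crash at one third the human rate.
human_rate = 12.0         # assumed rate for human drivers
self_driving_rate = 4.0   # assumed rate for self-driving cars
miles = 100.0             # assumed billions of vehicle-miles per year

lives_lost_human = human_rate * miles          # 1200 under the status quo
lives_lost_sd = self_driving_rate * miles      # 400 if everyone switched
lives_saved = lives_lost_human - lives_lost_sd # 800 net lives saved

# For every life lost to a self-driving car, how many were saved?
print(lives_saved / lives_lost_sd)  # 2.0: two saved per life lost
```

The point is not these particular numbers but the habit: counting both columns of the ledger, instead of fixating on the visible losses and ignoring the invisible savings.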
In hindsight, it’s obvious to everyone that Valasek and Miller went too far. Renting a track for a few hours costs less than the plane ticket for the journalist to come out and visit them. Infosec is like a pride of lions that will leap on and devour one of its members at the first sign of weakness. This minor mistake is weakness, so many in infosec have jumped on the pair, reveling in righteous rage. But any rational quantification of the risks shows that the mistake is minor compared to the huge benefit of their research. I, for one, praise these two, and hope they continue their research, knowing full well that they’ll likely continue to make other sorts of minor mistakes in the future.

Errata Security: My BIS/Wassenaar comment

This post was syndicated from: Errata Security and was written by: Robert Graham. Original post: at Errata Security

This is my comment I submitted to the BIS on their Wassenaar rules:


I created the first “intrusion prevention system”, as well as many tools and much cybersecurity research over the last 20 years. I would not have done so had these rules been in place. The cost and dangers would have been too high. If you do not roll back the existing language, I will be forced to do something else.

After two months, reading your FAQ, consulting with lawyers and export experts, the cybersecurity industry still hasn’t figured out precisely what your rules mean. The language is so open-ended that it appears to control everything. My latest project is a simple “DNS server”, a piece of software wholly unrelated to cybersecurity. Yet, since hackers exploit “DNS” for malware command-and-control, it appears to be covered by your rules: by their language, it is “specifically designed” for both the distribution and control of malware. This isn’t my intent; it’s just a consequence of how “DNS” works. I haven’t decided whether to make this tool open-source yet, so traveling to foreign countries with the code on my laptop appears to be a felony violation of export controls.

Of course you don’t intend to criminalize this behavior, but that isn’t the point. The point is that the rules are so vague that it becomes impossible for anybody to know exactly what is prohibited. We therefore have to take the conservative approach. As we’ve seen with other vague laws, such as the CFAA, enforcement is arbitrary and discriminatory. None of us would have believed that downloading files published on a public website would be illegal until a member of our community was convicted under the CFAA for doing it. None of us wants to be a similar test case for export controls. The current BIS rules are so open-ended that they would have a powerful chilling effect on our industry.

The solution, though, isn’t to clarify the rules, but to roll them back. You can’t clarify the difference between good/bad software because there is no difference between offensive and defensive tools — just the people who use them. The best way to secure your network is to attack it yourself. For example, my “masscan” tool quickly scans large networks for vulnerabilities like “Heartbleed”. Defenders use it to quickly find vulnerable systems, to patch them. But hackers also use my tool to find vulnerable systems to hack them. There is no solution that stops bad governments from buying “intrusion” or “surveillance” software that doesn’t also stop their victims from buying software to protect themselves. Export controls on offensive software means export controls on defensive software. Export controls mean the Sudanese and Ethiopian people can no longer defend themselves from their own governments.

Wassenaar was intended to stop “proliferation” and “destabilization”, yet intrusion/surveillance software is neither of those. Human rights activists have hijacked the arrangement for their own purposes. This is a good purpose, of course, since these regimes are evil. It’s just that Wassenaar is the wrong way to do this, with a disproportionate impact on legitimate industry, while at the same time, hurting the very people it’s designed to help. Likewise, your own interpretation of Wassenaar seems to have been hijacked by the intelligence community in the United States for their own purposes to control “0days”.

Rather than the current open-ended and vague interpretation of the Wassenaar changes, you must do the opposite and adopt the narrowest of interpretations. Better yet, you need to go back and renegotiate the rules with the other Wassenaar members, as software is not a legitimate target of Wassenaar control. Computer code is not a weapon; if you make it one, you’ll destroy America’s standing in the world. On a personal note, if you don’t drastically narrow this, my research and development will change. Either I will stay in this country and do something else, or I will move out of this country (despite being a fervent patriot).

Robert Graham
Creator of BlackICE, sidejacking, and masscan.
Frequent speaker at cybersecurity conferences.

Krebs on Security: Hacking Team Used Spammer Tricks to Resurrect Spy Network

This post was syndicated from: Krebs on Security and was written by: BrianKrebs. Original post: at Krebs on Security

Last week, hacktivists posted online 400 GB worth of internal emails, documents and other data stolen from Hacking Team, an Italian security firm that has earned the ire of privacy and civil liberties groups for selling spy software to governments worldwide. New analysis of the leaked Hacking Team emails suggests that in 2013 the company used techniques perfected by spammers to hijack Internet address space from a spammer-friendly Internet service provider in a bid to regain control over a spy network it apparently had set up for the Italian National Military Police.


Hacking Team is in the business of selling exploits that allow clients to secretly deploy spyware on targeted systems. In just the past week since the Hacking Team data was leaked, for example, Adobe has fixed two previously undocumented zero-day vulnerabilities in its Flash Player software that Hacking Team had sold to clients as spyware delivery mechanisms.

The spyware deployed by Hacking Team’s exploits is essentially a remote-access Trojan horse program designed to hoover up stored data, recorded communications, keystrokes, etc. from infected devices, giving the malware’s operator full control over victim machines.

Systems infested with Hacking Team’s malware are configured to periodically check for new instructions or updates at a server controlled by Hacking Team and/or its clients. This type of setup is very similar to the way spammers and cybercriminals design “botnets,” huge collections of hacked PCs that are harvested for valuable data and used for a variety of nefarious purposes.

No surprise, then, that Hacking Team placed its control servers in this case at an ISP that was heavily favored by spammers. Leaked Hacking Team emails show that in 2013, the company set up a malware control server for the Special Operations Group of the Italian National Military Police (INMP), an entity focused on investigating organized crime and terrorism. One or both of these organizations chose to position that control at Santrex, a notorious Web hosting provider that at the time served as a virtual haven for spammers and malicious software downloads.

But that decision backfired. As I documented in October 2013, Santrex unexpectedly shut down all of its servers, following a series of internal network issues and extensive downtime. Santrex made that decision after several months of incessant attacks, hacks and equipment failures at its facilities caused massive and costly problems for the ISP and its customers. The company’s connectivity problems essentially made it impossible for either Hacking Team or the INMP to maintain control over the machines infected with the spyware.

According to research published Sunday by OpenDNS Security Labs, around that same time the INMP and Hacking Team cooked up a plan to regain control over the Internet addresses abandoned by Santrex. The plan centered around a traffic redirection technique known as “BGP hijacking,” which involves one ISP fraudulently “announcing” to the rest of the world’s ISPs that it is in fact the rightful custodian of a dormant range of Internet addresses that it doesn’t actually have the right to control.

IP address hijacking is hardly a new phenomenon. Spammers sometimes hijack Internet address ranges that go unused for periods of time (see this story from 2014 and this piece I wrote in 2008 for The Washington Post for examples of spammers hijacking Internet space). Dormant or “unannounced” address ranges are ripe for abuse partly because of the way the global routing system works: Miscreants can “announce” to the rest of the Internet that their hosting facilities are the authorized location for given Internet addresses. If nothing or nobody objects to the change, the Internet address ranges fall into the hands of the hijacker.
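
The mechanics of claiming a dormant range can be shown with a toy model (a deliberate simplification: real BGP involves AS paths, route selection and, increasingly, RPKI origin validation, none of which is modeled here). Routers in this sketch simply believe whoever most recently announced a prefix, so an unannounced range is free for the taking:

```python
# Toy model of BGP origin announcements: a global routing table mapping
# prefixes to the AS currently announcing them. The prefix and AS
# numbers below are documentation/private-use values, not real ones.
table = {}

def announce(prefix, asn):
    """An AS announces a prefix; with no objection, routers accept it."""
    table[prefix] = asn

def withdraw(prefix):
    """The prefix goes dormant/unannounced (e.g. the ISP shuts down)."""
    table.pop(prefix, None)

announce("203.0.113.0/24", 64500)   # legitimate ISP announces its space
withdraw("203.0.113.0/24")          # ISP collapses; the space goes dark
announce("203.0.113.0/24", 64666)   # hijacker claims the dormant range

# Traffic to the old malware control addresses now flows to AS 64666.
print(table["203.0.113.0/24"])
```

This is exactly the "if nothing or nobody objects" failure mode: the model has no notion of authorization, and until recently neither, in practice, did the global routing system.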

Apparently nobody detected the BGP hijack at the time, and that action eventually allowed Hacking Team and its Italian government customer to reconnect with the Trojaned systems that once called home to their control server at Santrex. OpenDNS said it was able to review historic BGP records and verify the hijack, which at the time allowed Hacking Team and the INMP to migrate their malware control server to another network.

This case is interesting because it sheds new light on the potential dual use of cybercrime-friendly hosting providers. For example, law enforcement agencies have been known to allow malicious ISPs like Santrex to operate with impunity because the alternative (shutting the provider down or otherwise interfering with its operations) can interfere with the ability of investigators to gather sufficient evidence of wrongdoing by bad actors operating at those ISPs. Indeed, the notoriously bad and spammer-friendly ISPs McColo and Atrivo were perfect examples of this prior to their being ostracized and summarily shut down by the Internet community in 2008.

But this example shows that some Western law enforcement agencies may also seek to conceal their investigations by relying on the same techniques and hosting providers that are patronized by the very criminals they are investigating.