Posts tagged ‘Other’

TorrentFreak: Retired Scene Groups Return to Honor Fallen Member

This post was syndicated from: TorrentFreak and was written by: Ernesto. Original post: at TorrentFreak

To many people the Warez Scene is something mythical, or at least hard to comprehend: a group of people at the top of the piracy pyramid.

The Scene is known for its aversion to public file-sharing, but nonetheless it’s in large part responsible for much of the material out there today.

The goal of most Scene groups is to be the first to release a certain title, whether that’s a film, music or software. While there is some healthy competition The Scene is also a place where lifelong friendships are started.

A few days ago, on October 17, the Scene lost Goolum, a well-respected member and friend. Only in his late thirties, he passed away after being part of the Scene for more than a decade.

As a cracker, Goolum, also known as GLM, was one of the more experienced reverse engineers and worked on numerous releases.

Through the years Goolum was connected to several groups which are now retired, some for more than a decade. To honor their fallen friend, the groups ZENiTH, Lz0, SLT and MiDNiGHT have made a one-time comeback.

Below is an overview of their farewell messages, which honor him for his cracking skills but most of all as a friend. Our thoughts go out to Goolum’s friends and family.

ZENiTH: THUNDERHEAD.ENGINEERING.PYROSIM.V2014.2.RIP.GOOLUM-ZENiTH (NFO)

ZENiTH, a group that retired around 2005, mentions Goolum’s loyalty and his love for his daughter.

“Goolum has been in and around the scene since the Amiga days but had never been a guy to jump from group to group, but stayed loyal and dedicated to the few groups he was involved in.”

“We are all proud to have been in a group with you, to have spent many a long night sharing knowledge about everything, learning about your daughter whom you were very proud of, and all the projects you were involved in.”

ZENiTH’s in memoriam

Lz0: CEI.Inc.EnSight.Gold.v10.1.1b.Incl.Keygen.RIP.GOOLUM-Lz0 (NFO)

Lz0, or LineZer0, split from the Scene last year, but many of its members are still actively involved in other roles. The group mentions the hard time Goolum had due to drug problems. Lz0 also highlights Goolum’s love for his daughter, and how proud he was of her.

“We all knew that he struggled in life – not just economical but also on a personal level and not the least with his drug issues. One of the things that kept him going was his wonderful daughter whom he cherished a lot. He often talked about her, and how proud of her he was. He was clear that if there was one thing in life he was proud of – it was that he became the dad of a wonderful girl.”

“We’re shocked that when finally things started to move in the right direction, that we would receive the news about his death. It came without warning and we can only imagine the shock of his family. It’s hard to find the right words – or words for that matter. Even though it might have appeared as that he was lonely – with few friends, he knew that we were just a keyboard away.”

Lz0’s in memoriam

SLT: PROTEUS.ENGINEERING.FASTSHIP.V6.1.30.1.RIP.GOOLUM-SLT (NFO)

SLT or SOLiTUDE has been retired since 2000 but returns to remember Goolum. The group notes that he will be dearly missed.

“You will be missed. It is not easy to say goodbye to someone who you have known for over a decade, trading banter, laughs, advice and stories. You leave behind a daughter, a family and a group of friends, who will miss you dearly.”

“As the news have spread, the kind words have poured in. Solitude is releasing this in honor of you, to show that the values we founded the group on is the exact values you demonstrated through your decades of being in the scene. Loyalty, friendship and hard work. Our thoughts are with you, wherever you may be.”

SLT’s in memoriam

MiDNiGHT: POINTWISE_V17.2.R2_RIP_GOOLUM-MIDNIGHT (NFO)

MiDNiGHT hasn’t been active for nearly a decade but has also honored Goolum with a comeback. The group mentions that he was a great friend who was always up for a chat and a beer.

“Life won’t ever be the same again my friend. We could sit and chat for hours and hours, and even then we knew each other well enough that nothing more was required than a beer, a rant and a small *yarr* and we’d know it would all be good.”

“This time it’s not good mate. I am here, you are not. I can’t even begin to express how this makes me feel – except an absolute sadness.”

MiDNiGHT’s in memoriam

RIP Goolum 1977 – 2014

Source: TorrentFreak, for the latest info on copyright, file-sharing and anonymous VPN services.

SANS Internet Storm Center, InfoCON: green: CSAM Month of False Positives: Ghosts in the Pentest Report, (Tue, Oct 21st)

This post was syndicated from: SANS Internet Storm Center, InfoCON: green and was written by: SANS Internet Storm Center, InfoCON: green. Original post: at SANS Internet Storm Center, InfoCON: green

As part of most vulnerability assessments and penetration tests against a website, we almost always run some kind of scanner. Burp (commercial) and ZAP (free from OWASP) are two commonly used scanners. Once you’ve done a few website assessments, you start to get a feel for which pages and fields are likely candidates for exploit. But especially if it’s a vulnerability assessment, where you’re trying to cover as many issues as possible (and exploits might even be out of scope), it’s always a safe bet to run a scanner to see what other issues might be in play.

All too often, we see people take these results as-is, and submit them as the actual report. The HUGE problem with this is false positives and false negatives.

False negatives are issues that are real but are not found by your scanner. For instance, Burp and ZAP aren’t the best tools for pointing a big red arrow at software version issues – for instance, vulnerable versions of WordPress or WordPress plugins. You might want to use WPSCAN for something like that. Or, if you go to the login page, a view-source will often give you what you need.

Certificate issues will also go unnoticed by a dedicated web scanner – NIKTO or WIKTO are good choices for that. Or better yet, you can use openssl to pull the raw cert, or just view it in your browser.
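As a sketch of that manual check (mine, not from the diary): once you have pulled the cert – for example with `openssl s_client` or Python’s `ssl.get_server_certificate((host, 443))` – the expiry test a pure web-app scanner skips is a couple of lines against the certificate’s notAfter field:

```python
import ssl
import time

def cert_expired(not_after, at=None):
    """True if a certificate's notAfter timestamp is in the past.

    `not_after` uses the string format found in certificates and returned
    by ssl.getpeercert(), e.g. 'May  9 00:00:00 2027 GMT'.  `at` is the
    reference time in epoch seconds (default: now)."""
    expiry = ssl.cert_time_to_seconds(not_after)
    return expiry < (time.time() if at is None else at)

print(cert_expired("Dec 31 23:59:59 2014 GMT"))  # True on any clock after 2014
```

The same `ssl.getpeercert()` dictionary also exposes the subject and issuer, so self-signed and mismatched-hostname findings can be confirmed the same way.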

(If you’re noticing that much of what the cool tools do is possible with some judicious use of your browser, that’s exactly what I’m pointing out!)

NMAP is another great tool for catching what a web scanner might miss. For instance, if you’ve got a Struts admin page or hypervisor login on the same IP as your target website, but on a different port than the website, NMAP is the go-to tool. Similarly, lots of basic site assessment can be done with the NMAP --version parameters, and the NSE scripts bundled with NMAP are a treasure trove as well! (Check out Manuel’s excellent series on NMAP scripts.)
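The core of what a port scanner adds here can be illustrated with a plain TCP connect() probe in Python – a sketch only (nmap does far more: service detection, OS fingerprinting, NSE scripts), and the host and ports in the usage comment are hypothetical:

```python
import socket

def open_ports(host, ports, timeout=0.5):
    """Attempt a plain TCP connect() to each port; return those that accept.

    A crude stand-in for a port scanner: it only proves something is
    listening, and does not identify the service the way `nmap -sV` would."""
    found = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            if s.connect_ex((host, port)) == 0:  # 0 means the connect succeeded
                found.append(port)
    return found

# Hypothetical usage -- ports where admin consoles often sit next to a site:
# open_ports("www.target.example", [81, 444, 8000, 8080, 8443, 9443])
```

Anything this turns up – an admin console on 8080, a hypervisor login on 9443 – belongs in the assessment even though a web scanner pointed at port 443 never saw it.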

False positives are just as bad – where the tool indicates a vulnerability where there is none. If you include blatant false positives in your report, you’ll find that the entire report ends up in the trash can, along with your reputation with that client! A few false positives that I commonly see are SQL Injection and OS Command Injection.

SQL Injection is a vulnerability where, from the web interface, you can interact with and get information from a SQL database that’s behind the website, often dumping entire tables.

Website assessment tools (Burp in this case, but many other tools use similar methods) commonly test for SQL Injection by injecting a SQL “waitfor delay 0:0:20” command. If this takes significantly longer to complete than the basic statement, Burp will mark the finding’s certainty as “Firm”. Needless to say, I often see this turn up as a false positive. What you’ll find is that Burp generally runs multiple threads (10 by default) during a scan, so it can really run up the CPU on a website, especially if the site is mainly parametric (where pages are generated on the fly from database input during a session). Also, if a site’s error-handling routines take longer than they should, you’ll see this test get thrown off.

So, how should we test to verify this initial/preliminary finding? First of all, Burp’s test isn’t half bad on a lot of sites. Re-testing Burp’s injection with curl or a browser after the scanning is complete will sometimes show that the SQL injection is real. Test multiple times, so that you can show consistent and appropriate delays for values of 10, 30, 60 and 120 seconds.
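That re-test logic – delays that track the requested 10/30/60/120-second payloads over the baseline – is easy to script. A sketch with made-up timings (the tolerance is a judgment call, not a standard):

```python
def delays_track_payload(baseline, timings, slack=0.25):
    """Decide whether observed response times are consistent with a real
    time-based injection.

    `baseline` is the normal response time in seconds; `timings` maps each
    requested delay (e.g. from a 'waitfor delay' payload) to the observed
    response time.  Returns True only if every observed time is roughly
    baseline + requested delay, within `slack` fractional tolerance.
    Flat or erratic timings suggest a loaded server, i.e. a false positive."""
    for requested, observed in timings.items():
        expected = baseline + requested
        if abs(observed - expected) > slack * expected:
            return False
    return True

# Delays scale with the payload -- the injection is probably real:
real = {10: 10.4, 30: 30.9, 60: 61.2, 120: 121.0}
# Every request takes ~10s regardless of payload -- likely scanner noise:
noise = {10: 10.4, 30: 10.1, 60: 9.8, 120: 10.6}

print(delays_track_payload(0.3, real))   # True
print(delays_track_payload(0.3, noise))  # False
```

Feeding it measurements taken with curl after the scan has finished, when the server is quiet, is the point: the same numbers collected mid-scan prove nothing.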

If that fails – for instance, if they all delay 10 seconds, or show no appreciable delay at all – don’t despair. SQLMAP tests much more thoroughly, and should be part of your toolkit anyway – try that. Or test manually – after a few websites you’ll find that testing manually might be quicker than an exhaustive SQLMAP test (though maybe not as thorough).

If you use multiple methods (and there are a lot of different methods) and still can’t verify that SQL injection is in play after that initial scan’s finding, quite often this has to go into the false positives section of your report.

OS Command Injection – where you can execute unauthorized operating system commands from the web interface – is another common false positive, and for much the same reason. In this vulnerability, the scanner will often inject “ping -c 20 127.0.0.1” or “ping -n 20 127.0.0.1” – in other words, the injected command tells the webserver to ping itself, in this case 20 times. On most operating systems this creates a delay of 20 seconds. As in the SQL injection example, you’ll find that tests depending on a predictable delay often get thrown off if they are executed during a busy scan. Running them after the scan (again, using your browser or curl) is often all you need to do to prove these findings false.

Testing other commands, such as pinging or opening an ftp session to a test host on the internet (one that is monitoring for such traffic using tcpdump or syslog), is another good sober-second-thought test. Be aware, though, that if the website you are testing has an egress filter applied to its traffic, a successful injection might not generate the traffic you are hoping for – it’ll be blocked at the firewall. If you have out-of-band access to the site being assessed, creating a test file is another good test.

Other tests can similarly turn up false positives. For instance, any test that relies only on service banner grabs can be thrown off easily – either by admins putting a false banner in place, or by site updates that upgrade packages and services but don’t change that initially installed banner.
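A few lines of Python show why banner checks are so easy to fool: the “version” is just whatever bytes the server chooses to send. A sketch (not from the diary; the hostname in the usage comment is hypothetical):

```python
import socket

def grab_banner(host, port, timeout=2.0, max_bytes=1024):
    """Connect and read whatever the service volunteers first -- its banner.

    Any version check built on this string trusts the server completely:
    an admin can configure a false banner, and a backported security patch
    can fix the daemon without ever touching its greeting."""
    with socket.create_connection((host, port), timeout=timeout) as s:
        return s.recv(max_bytes).decode("ascii", errors="replace")

# Hypothetical usage:
# grab_banner("ftp.target.example", 21)
```

This is why a banner-based “vulnerable version” finding needs a second, behavioral check – does the service actually exhibit the flaw? – before it goes in the report as anything more than informational.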

Long story short, never never never (never) believe the initial finding that your scanning tool gives you. All of the tools discussed are good tools – they should all be in your toolbox, and in many cases should be at the top of your go-to list. Whether the tool is open source or closed, free or very expensive, they will all give you false positives, and every finding needs to be verified as either a true or false positive. In fact, you might not want to believe the results from your second tool either, especially if it tests the same way. Whenever you can, go back to first principles and verify manually. Or, if it’s in scope, verify with an actual exploit – there’s nothing better than getting a shell to prove that you can get a shell!

For false negatives, you’ll also want to have multiple tools and some good manual tests in your arsenal – if one tool misses a vulnerability, you may find that many or all of your tools test for that issue the same way. Often the best way to catch a false negative is to just know how the target service runs, and know how to test for that specific issue manually. If you are new to assessments and penetration tests, false negatives will be much harder to find, and really, no matter how good you are, you’ll never know if you got all of them.

If you need to discuss false positives and negatives with a non-technical audience, reaching for non-technical tools is a good way to make the point. A hammer is a great tool, but while screws are similar to nails, a hammer isn’t always the best way to deal with them.

Please use our comment form to tell us about false positives or false negatives that you’ve found in vulnerability assessments or penetration tests. Keep in mind that usually these aren’t an indicator of a bad tool – they’re usually just a case of needing a proper parallax view to get a better look at the situation.

===============
Rob VandenBrink
Metafore

(c) SANS Internet Storm Center. https://isc.sans.edu Creative Commons Attribution-Noncommercial 3.0 United States License.

Linux How-Tos and Linux Tutorials: What is a Good Command-Line Calculator on Linux

This post was syndicated from: Linux How-Tos and Linux Tutorials and was written by: Linux How-Tos and Linux Tutorials. Original post: at Linux How-Tos and Linux Tutorials

Every modern Linux desktop distribution comes with a default GUI-based calculator app. On the other hand, if your workspace is full of terminal windows, and you would rather crunch some numbers within one of those terminals quickly, you are probably looking for a command-line calculator. In this category, GNU bc (short for “basic calculator”) is […]

Read more at Xmodulo

LWN.net: Debian Project mourns the loss of Peter Miller

This post was syndicated from: LWN.net and was written by: ris. Original post: at LWN.net

The Debian Project recently learned that community member Peter Miller died
last July. “Peter was a relative newcomer to the Debian project, but his
contributions to Free and Open Source Software go back to the late
1980s. Peter was a significant contributor to GNU gettext, as well as being
the main upstream author and maintainer of other projects that ship as
part of Debian, including, but not limited to srecord, aegis and cook.
Peter was also the author of the paper “Recursive Make Considered
Harmful”.

Krebs on Security: Banks: Credit Card Breach at Staples Stores

This post was syndicated from: Krebs on Security and was written by: BrianKrebs. Original post: at Krebs on Security

Multiple banks say they have identified a pattern of credit and debit card fraud suggesting that several Staples Inc. office supply locations in the Northeastern United States are currently dealing with a data breach. Staples says it is investigating “a potential issue” and has contacted law enforcement.

According to more than a half-dozen sources at banks operating on the East Coast, it appears likely that fraudsters have succeeded in stealing customer card data from some subset of Staples locations, including seven Staples stores in Pennsylvania, at least three in New York City, and another in New Jersey.

Framingham, Mass.-based Staples has more than 1,800 stores nationwide, but so far the banks contacted by this reporter have traced a pattern of fraudulent transactions on a group of cards that had all previously been used at a small number of Staples locations in the Northeast.

The fraudulent charges occurred at other (non-Staples) businesses, such as supermarkets and other big-box retailers. This suggests that the cash registers in at least some Staples locations may have fallen victim to card-stealing malware that lets thieves create counterfeit copies of cards that customers swipe at compromised payment terminals.

Asked about the banks’ claims, Staples’s Senior Public Relations Manager Mark Cautela confirmed that Staples is in the process of investigating a “potential issue” involving credit card data and has contacted law enforcement.

“We take the protection of customer information very seriously, and are working to resolve the situation,” Cautela said. “If Staples discovers an issue, it is important to note that customers are not responsible for any fraudulent activity on their credit cards that is reported on a timely basis.”  

LWN.net: The FSF opens nominations for the 17th annual Free Software Awards

This post was syndicated from: LWN.net and was written by: ris. Original post: at LWN.net

The Free Software Foundation (FSF) and the GNU Project have announced the
opening of nominations for the 17th annual Free Software Awards. The
Free Software Awards include the Award for the Advancement of Free
Software and the Award for Projects of Social Benefit. “In the case of both awards, previous winners are not eligible for
nomination, but renomination of other previous nominees is encouraged.
Only individuals are eligible for nomination for the Advancement of
Free Software Award (not projects), and only projects can be nominated
for the Social Benefit Award (not individuals). For a list of previous
winners, please visit https://www.fsf.org/awards.”

TorrentFreak: 4shared Demands Retraction Over Misleading Piracy Report

This post was syndicated from: TorrentFreak and was written by: Ernesto. Original post: at TorrentFreak

Last month the Digital Citizens Alliance and NetNames released a new report with the aim of exposing the business models and profitability of “rogue” file-storage sites.

The report, titled Behind The Cyberlocker Door: A Report on How Shadowy Cyberlockers Use Credit Card Companies to Make Millions, is being used as ammunition for copyright holders to pressure credit card companies and advertisers into cutting ties with the listed sites.

While some of the sites mentioned are indeed of a dubious nature, the report lacks nuance. The “shadowy” label certainly doesn’t apply to all of them. Mega, for example, was quick to point out that the report is “grossly untrue and highly defamatory.” The company has demanded a public apology.

4shared, the most visited site in the report with over 50 million unique visitors per month, is now making similar claims. According to 4shared’s Mike Wilson the company has put its legal team on the case.

“We decided to take action and demand a public retraction of the information regarding 4shared’s revenues and business model as published in the report. Our legal team is already working on the respective notes to Digital Citizens Alliance and Netnames,” Wilson tells TorrentFreak.

The report estimates that 4shared, as the largest file-hosting service listed, grosses $17.6 million per year. However, 4shared argues that many of the assumptions in the report are wrong and based on a distorted view of the company’s business model.

“Revenue volumes in this report are absolutely random. For instance, 4shared’s actual revenue from premium subscription sales is approximately 20 times smaller than is shown in the document,” Wilson says.

4shared explains that its premium users are mostly interested in storing their files safely and securely. In addition, the company notes that it doesn’t have any affiliate programs or other encouragements for uploading or downloading files.

Contrary to what the report claims, 4shared stresses that it’s not set up as a service that aims to profit from copyright infringement, although it admits that infringement does take place.

To deal with this unauthorized use, the file-hosting service has a DMCA takedown policy in place. In addition, some of the most trusted rightsholder representatives have direct access to the site, where they can delete files without sending a takedown notice.

This works well, and the overall takedown volume is relatively low. Together the site’s users store a billion files, and in an average month 4shared receives takedown notices for 0.05% of them.

In addition to its takedown procedure, 4shared also scans publicly shared music files for copyright-infringing content. This Music ID system, custom-built by the company, identifies pirated music files based on a unique audio watermark and automatically removes them.

Despite these efforts, 4shared was included in the “shadowy cyberlocker” report, where it’s branded a rogue and criminal operation. Whether the company’s legal team will be able to set the record straight remains to be seen.

Netnames and Digital Citizens have thus far declined to remove Mega from the report as the company previously demanded. Mega informs TorrentFreak that a defamation lawsuit remains an option and that they are now considering what steps to take next.

Source: TorrentFreak, for the latest info on copyright, file-sharing and anonymous VPN services.

TorrentFreak: Kim Dotcom Must Reveal Everything He Owns to Hollywood

This post was syndicated from: TorrentFreak and was written by: Andy. Original post: at TorrentFreak

Kim Dotcom has been associated with many things over the years, but one enduring theme has been wealth – and lots of it.

Even in the wake of the now-infamous raid on his New Zealand mansion and the seizure of millions in assets, somehow Dotcom has managed to rake in millions. Or did he also have some stashed away?

It’s an important matter for Hollywood. The businessman’s continued lavish lifestyle diminishes the financial pot from where any payout will be made should they prevail in their copyright infringement battles against the Megaupload founder.

The studios’ concerns were previously addressed by Judge Courtney, who had already ordered Dotcom to disclose the details of his worldwide assets to the Court. The entrepreneur filed an appeal, but that hearing would not take place until October, a date beyond the already-ordered disclosure date.

Dotcom took his case to the Court of Appeal in the hope of staying the disclosure order, but in August that failed.

Dotcom complied with the ruling and subsequently produced an affidavit. However, he asked the Court of Appeal to overturn the decision of the High Court in order to keep the document a secret from the studios. That bid has now failed.

Following a ruling handed down this morning by the New Zealand Court of Appeal, Dotcom’s financial information will soon be in the hands of adversaries Twentieth Century Fox, Disney, Paramount, Universal and Warner Bros.

Court of Appeal Judges John Wild, Rhys Harrison and Christine French ordered the affidavit to be released to the studios on the basis that the information could only be used in legal proceedings concerning the restraining of Dotcom’s assets. And with a confidentiality clause attached to the affidavit, the public will not gain access to the information.

Another setback for Dotcom came in respect of who pays the bill for proceedings. The Megaupload founder’s attempt at avoiding costs was turned down after the judges found that having already supplied the affidavit as required, Dotcom’s appeal was not likely to succeed.

And there was more bad news for Dotcom in a separate High Court ruling handed down in New Zealand today. It concerns the extradition cases against not only him but also former Megaupload associates Finn Batato, Mathias Ortmann and Bram Van Der Kolk.

The theory put forward by Dotcom is that the United States and New Zealand governments had politically engineered his downfall in order to extradite him to the U.S. To gather evidence showing how that happened, Dotcom and the other respondents made a pair of applications to the extradition court (the District Court) requesting that it make discovery orders against various New Zealand government agencies, ministers and departments.

The District Court declined, so the respondents sought a judicial review of that decision, claiming that the Court acted unfairly and erred in law. In today’s ruling, Justice Simon France said there was no “air of reality” to the claim that political interference had been involved in Dotcom’s extradition case.

“It is, as the District Court held, all supposition and the drawing of links without a basis,” the Judge wrote.

“Nothing suggests involvement of the United States of America, and nothing suggests the New Zealand Government had turned its mind to extradition issues. These are the key matters and there is no support for either contention.”

Judge France said that as respondents in the case, the United States was entitled to costs.

Source: TorrentFreak, for the latest info on copyright, file-sharing and anonymous VPN services.

Krebs on Security: Spike in Malware Attacks on Aging ATMs

This post was syndicated from: Krebs on Security and was written by: BrianKrebs. Original post: at Krebs on Security

This author has long been fascinated with ATM skimmers, custom-made fraud devices designed to steal card data and PINs from unsuspecting users of compromised cash machines. But a recent spike in malicious software capable of infecting and jackpotting ATMs is shifting the focus away from innovative, high-tech skimming devices toward the rapidly aging ATM infrastructure in the United States and abroad.

Last month, media outlets in Malaysia reported that organized crime gangs had stolen the equivalent of about USD $1 million with the help of malware they’d installed on at least 18 ATMs across the country. Several stories about the Malaysian attack mention that the ATMs involved were all made by ATM giant NCR. To learn more about how these attacks are impacting banks and the ATM makers, I reached out to Owen Wild, NCR’s global marketing director, security compliance solutions.

Wild said ATM malware is here to stay and is on the rise.


BK: I have to say that if I’m a thief, injecting malware to jackpot an ATM is pretty money. What do you make of reports that these ATM malware thieves in Malaysia were all knocking over NCR machines?

OW: The trend toward these new forms of software-based attacks is occurring industry-wide. It’s occurring on ATMs from every manufacturer, multiple model lines, and is not something that is endemic to NCR systems. In this particular situation for the [Malaysian] customer that was impacted, it happened to be an attack on a Persona series of NCR ATMs. These are older models. We introduced a new product line for new orders seven years ago, so the newest Persona is seven years old.

BK: How many of your customers are still using this older model?

OW: Probably about half the install base is still on Personas.

BK: Wow. So, what are some of the common trends or weaknesses that fraudsters are exploiting that let them plant malware on these machines? I read somewhere that the crooks were able to insert CDs and USB sticks in the ATMs to upload the malware, and they were able to do this by peeling off the top of the ATMs or by drilling into the facade in front of the ATM. CD-ROM and USB drive bays seem like extraordinarily insecure features to have available on any customer-accessible portions of an ATM.

OW: What we’re finding is these types of attacks are occurring on standalone, unattended types of units where there is much easier access to the top of the box than you would normally find in the wall-mounted or attended models.

BK: Unattended….meaning they’re not inside of a bank or part of a structure, but stand-alone systems off by themselves.

OW: Correct.

BK: It seems like the other big factor with ATM-based malware is that so many of these cash machines are still running Windows XP, no?

This new malware, detected by Kaspersky Lab as Backdoor.MSIL.Tyupkin, affects ATMs from a major ATM manufacturer running Microsoft Windows 32-bit.

OW: Right now, that’s not a major factor. It is certainly something that has to be considered by ATM operators in making their migration move to newer systems. Microsoft discontinued updates and security patching on Windows XP, with very expensive exceptions. Where it becomes an issue for ATM operators is that maintaining Payment Card Industry (credit and debit card security standards) compliance requires that the ATM operator be running an operating system that receives ongoing security updates. So, while many ATM operators certainly have compliance issues, to this point we have not seen the operating system come into play.

BK: Really?

OW: Yes. If anything, the operating systems are being bypassed or manipulated with the software as a result of that.

BK: Wait a second. The media reports to date have observed that most of these ATM malware attacks were going after weaknesses in Windows XP?

OW: It goes deeper than that. Most of these attacks come down to two different ways of jackpotting the ATM. The first is what we call “black box” attacks, where some form of electronic device is hooked up to the ATM — basically bypassing the infrastructure in the processing of the ATM and sending an unauthorized cash dispense code to the ATM. That was the first wave of attacks we saw that started very slowly in 2012, went quiet for a while and then became active again in 2013.

The second type that we’re now seeing more of is attacks that start with the introduction of malware into the machine, and that kind of attack is a little less technical to get on the older machines if protective mechanisms aren’t in place.

BK: What sort of protective mechanisms, aside from physically securing the ATM?

OW: If you work on the configuration setting…for instance, if you lock down the BIOS of the ATM to eliminate its capability to boot from USB or CD drive, that gets you about as far as you can go. In high risk areas, these are the sorts of steps that can be taken to reduce risks.

BK: Seems like a challenge communicating this to your customers who aren’t anxious to spend a lot of money upgrading their ATM infrastructure.

OW: Most of these recommendations and requirements have to be considerate of the customer environment. We make sure we’ve given them the best guidance we can, but at end of the day our customers are going to decide how to approach this.

BK: You mentioned black-box attacks earlier. Is there one particular threat or weakness that makes this type of attack possible? One recent story on ATM malware suggested that the attackers may have been aided by the availability of ATM manuals online for certain older models.

OW: The ATM technology infrastructure is all designed on multivendor capability. You don’t have to be an ATM expert or have inside knowledge to generate or code malware for ATMs. Which is what makes the deployment of preventative measures so important. What we’re faced with as an industry is a combination of vulnerability on aging ATMs that were built and designed at a point where the threats and risk were not as great.

According to security firm F-Secure, the malware used in the Malaysian attacks was “PadPin,” a family of malicious software first identified by Symantec. Also, Russian antivirus firm Kaspersky has done some smashing research on a prevalent strain of ATM malware that it calls “Tyupkin.” Their write-up on it is here, and the video below shows the malware in action on a test ATM.

In a report published this month, the European ATM Security Team (EAST) said it tracked at least 20 incidents involving ATM jackpotting with malware in the first half of this year. “These were ‘cash out’ or ‘jackpotting’ attacks and all occurred on the same ATM type from a single ATM deployer in one country,” EAST Director Lachlan Gunn wrote. “While many ATM Malware attacks have been seen over the past few years in Russia, Ukraine and parts of Latin America, this is the first time that such attacks have been reported in Western Europe. This is a worrying new development for the industry in Europe.”

Card skimming incidents fell by 21% compared to the same period in 2013, while overall ATM-related fraud losses of €132 million (~USD $158 million) were reported, up 7 percent from the same time last year.

TorrentFreak: Illegal Copying Has Always Created Jobs, Growth, And Prosperity

This post was syndicated from: TorrentFreak and was written by: Rick Falkvinge. Original post: at TorrentFreak

copyright-brandedIt often helps to understand present time by looking at history, and seeing how history keeps repeating itself over and over.

In the late 1700s, the United Kingdom was the empire that established laws on the globe. The United States was still largely a colony – even if not formally so, it was referred to as such in the civilized world, meaning France and the United Kingdom.

The UK had a strictly protectionist view of trade: all raw materials must come to England, and all luxury goods must be made from those materials while in the UK, to be exported to the rest of the world. Long story short, the UK was where the value was to be created.

Laws were written to lock in this effect. Bringing the ability to refine materials somewhere else, the mere knowledge, was illegal. “Illegal copying”, more precisely.

Let’s look at a particularly horrible criminal from that time, Samuel Slater. In the UK, he was even known as “Slater the Traitor”. His crime was to memorize the drawings of a British textile mill, move to New York, and copy the whole of the British textile mill from memory – something very illegal. For this criminal act, building the so-called Slater Mill, he was hailed as “the father of the American Industrial Revolution” by those who would later displace the dominance of the UK – namely the United States. This copy-criminal also has a whole town named after him.

Copying brings jobs and prosperity. Copying has always brought jobs and prosperity. It is those who don’t want to compete who try to legislate a right to rest on their laurels and outlaw copying. It never works.

We can take a look at the early film industry as well. That industry was bogged down with patent monopolies held by one of the worst monopolists in industrial history, Thomas Edison and his Western Electric. He essentially killed off any film company that started in or around New York, where the film industry was based at the time. A few of the nascent film companies – Warner Brothers, Universal Pictures, MGM – therefore chose to settle as far from this monopolist as possible, and went across the entire country, to a small unexploited suburb outside of Los Angeles, California, which was known as “Hollywoodland” and had a huge sign to that effect. There, they would be safe from Edison’s patent enforcement, simply by putting enough distance between themselves and him.

Yes, you read that right – the entire modern film industry was founded on piracy. Which, again, led to jobs and prosperity.

The heart of the problem is this: those who decide what is “illegal” to copy do so from a basis of not wanting to get outcompeted, and never from any kind of moral high ground. It’s just pure industrial protectionism. Neo-mercantilism, if you prefer. Copying always brings jobs and prosperity. Therefore, voluntarily agreeing to the terms of the incumbent industries, terms which are specifically written to keep everybody else unprosperous, is astoundingly bad business and policy.

I’d happily go as far as to say there is a moral imperative to disobey any laws against copying. History will always put you in the right, as was the case with Samuel Slater, for example.

For a more modern example, you have Japan. When I grew up in the 1980s, Japanese industry was known for cheap knock-off goods. They copied everything shamelessly, and never got quality right. But they knew something that the West didn’t: copying brings prosperity. When you copy well enough, you learn at a staggering pace, and you eventually come out as the R&D leader, the innovation leader, building on that incremental innovation you initially copied. Today, Japan builds the best quality stuff available in any category.

The Japanese knew and understood that it takes three generations of copying and an enormous work discipline to become the best in the world in any industry. Recently, to my huge astonishment, they even overtook the Scottish as masters of whisky. (As I am a very avid fan of Scottish whisky, this was a personal source of confusion for me, even though I know things work this way on a rational level.)

At the personal level, pretty much every good software developer I know learned their craft by copying other people’s code. Copying brings prosperity at the national and the individual levels. Those who would seek to outlaw it, or obey such unjust bans against copying, have no moral high ground whatsoever – and frankly, I think people who voluntarily choose to obey such unjust laws deserve to stay unprosperous, and fall with their incumbent master when that time comes.

Nobody ever took the lead by voluntarily walking behind somebody else, after all. The rest of us copy, share, and innovate, and we wait for nobody who tries to legislate their way to competitiveness.

About The Author

Rick Falkvinge is a regular columnist on TorrentFreak, sharing his thoughts every other week. He is the founder of the Swedish and first Pirate Party, a whisky aficionado, and a low-altitude motorcycle pilot. His blog at falkvinge.net focuses on information policy.

Book Falkvinge as speaker?

Source: TorrentFreak, for the latest info on copyright, file-sharing and anonymous VPN services.

TorrentFreak: The Soaring Financial Cost of Blocking Pirate Sites

This post was syndicated from: TorrentFreak and was written by: Andy. Original post: at TorrentFreak

On Friday news broke that luxury brand company Richemont had succeeded in its quest to have several sites selling counterfeit products blocked by the UK’s largest ISPs.

The landmark ruling, which opens the floodgates for perhaps tens of thousands of other sites to be blocked at the ISP level, contained some surprise information on the costs involved in blocking infringing websites. The amounts cited by Justice Arnold all involve previous actions undertaken by the movie and music industry against sites such as The Pirate Bay and KickassTorrents.

The applications themselves

The solicitor acting for Richemont, Simon Baggs of Wiggin LLP, also acted for the movie studios in their website blocking applications. Information Baggs provided to the court reveals that an unopposed application for a section 97A blocking order works out at around £14,000 per website.

The record labels’ costs aren’t revealed but Justice Arnold said “it is safe to assume that they are of a similar magnitude to the costs incurred by the film studios.”

In copyright cases, 47 sites have been blocked at the ISP level, for total application costs of around £658,000 (47 × £14,000).

Keeping blocked sites blocked

When blocking orders are issued in the UK they contain provisions for rightsholders to add additional IP addresses and URLs to thwart anti-blocking countermeasures employed by sites such as The Pirate Bay. It is the responsibility of the rightsholders to “accurately identify IP addresses and URLs which are to be notified to ISPs in this way.”

It transpires that in order to monitor the server locations and domain names used by targeted websites, the film studios have hired a company called Incopro, which happens to be directed by Simon Baggs of Wiggin.

In addition to maintaining a database of 10,000 ‘pirate’ domains, Incopro also operates ‘BlockWatch’. This system continuously monitors the IP addresses and domains of blocked sites and uses the information to notify ISPs of new IPs and URLs to be blocked.

“Incopro charges a fee to enter a site into the BlockWatch system. It also charges an ongoing monthly fee,” Justice Arnold reveals. “In addition, the rightholders incur legal costs in collating, checking and sending notifications to the ISPs. Mr Baggs’ evidence is that, together, these costs work out at around £3,600 per website per year.”

If we assume that the music industry’s costs are similar, for 47 sites these monitoring costs amount to around £169,200 per year, every year.
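The arithmetic behind these totals is straightforward; a quick sketch using the figures cited in the judgment:

```python
# Figures from the judgment, as reported above.
sites = 47                 # sites blocked at the ISP level in copyright cases
application_cost = 14_000  # £ per unopposed section 97A application
monitoring_cost = 3_600    # £ per website per year (BlockWatch fees + legal costs)

print(f"Application costs: £{sites * application_cost:,}")    # £658,000
print(f"Monitoring costs:  £{sites * monitoring_cost:,}/yr")  # £169,200/yr
```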

Costs to ISPs for implementing blocking orders

The ISPs involved in blocking orders have been less precise as to the costs involved, but they are still being incurred on an ongoing basis. All incur ongoing costs when filtering websites such as those on the Internet Watch Foundation list, but copyright injunctions only add to the load.

Sky

The cost of implementing a new copyright blocking order is reported by Sky as a “mid three figure sum”, with an update to an order (adding new IP addresses, for example) amounting to half of that. Ongoing monitoring of blocked domains costs the ISP a “low four figure sum per month.”

BT

According to the court, BT says that it expends 60 days of employee time per year implementing section 97A orders via its Cleanfeed system and a further 12 days of employee time elsewhere.

Each new order takes up 8 hours of in-house lawyers’ time plus 13 hours of general staff time. Updates to orders accrue an hour of costs in the legal department plus another 13 hours of blocking staff time.

EE

For each new order EE expends 30 minutes of staff time and a further three hours of time at BT whose staff it utilizes. Updates cost the same amount of time.

EE pays BT a “near four figure sum” for each update and expends 36 hours of employee time each year on maintenance and management.

TalkTalk

TalkTalk’s legal team expends two hours implementing each new order while its engineers spend around two and a half. Updates are believed to amount to the same. The company’s senior engineers burn through 60 hours each year dealing with blocking orders, amounting to “a low six figure sum” per annum.

Virgin

Virgin estimates that Internet security staff costs amount to a “low five figure sum” per year. Interestingly the ISP said it spent more on blocking this year than last, partly due to its staff having to respond to comments about blocking on social media.

And the bills are only set to increase

According to Justice Arnold several additional blocking orders are currently pending. They are:

- An application by Paramount Home Entertainment Ltd and other film studios relating to seven websites said to be “substantially focused” on infringement of copyright in movies and TV shows

- An application by 1967 Ltd and other record companies in respect of 21 torrent sites

- An application by Twentieth Century Fox Film Corp and other film studios in respect of eight websites said to be “substantially focused” on infringement of copyright in movies and TV shows

But these 36 new sites to be blocked on copyright grounds are potentially just the tip of a quite enormous iceberg now that blocking on trademark grounds is being permitted.

Richemont has identified approximately 239,000 sites potentially infringing on their trademarks, 46,000 of which have been confirmed as infringing and are waiting for enforcement action.

So who will pick up the bill?

“It is obvious that ISPs faced with the costs of implementing website orders have a choice. They may either absorb these costs themselves, resulting in slightly lower profit margins, or they may pass these costs on to their subscribers in the form of higher subscription charges,” Justice Arnold writes.

Since all ISPs will have to bear similar costs, it seems likely that the latter will prove most attractive to them, as usual.

Source: TorrentFreak, for the latest info on copyright, file-sharing and anonymous VPN services.

TorrentFreak: Jennifer Lawrence Gets Google to Censor Leaked Pictures, Sort Of

This post was syndicated from: TorrentFreak and was written by: Ernesto. Original post: at TorrentFreak

Over the past several weeks hundreds of photos of naked celebrities leaked online. This “fappening” triggered a massive takedown operation targeting sites that host and link to the controversial images.

As a hosting provider and search engine Google inadvertently plays a role in distributing the compromising shots, much to the displeasure of the women involved.

More than a dozen of them sent Hollywood lawyer Marty Singer after the company. Earlier this month Singer penned an angry letter to Google threatening legal action if it doesn’t remove the images from YouTube, Blogspot and its search results.

“It is truly reprehensible that Google allows its various sites, systems and search results to be used for this type of unlawful activity. If your wives, daughters or relatives were victims of such blatant violations of basic human rights, surely you would take appropriate action,” the letter reads.

While no legal action has yet been taken, some celebrities have also sent individual DMCA takedown requests to Google. On September 24 Jennifer Lawrence’s lawyers asked the search engine to remove two links to thefappening.eu as these infringe on the star’s copyrights.

The DMCA takedown request
Earlier this week the request was still pending, so TorrentFreak asked Google what was causing the delay. The company said it could not comment on individual cases but a day later the links in question were removed.

This means that both the thefappening.eu main domain and the tag archive of Jennifer Lawrence posts no longer appear in Google’s search results.

Whether this move has helped Lawrence much is doubtful though. The site in question had already redirected to a new domain at thefappening.so. These links remain indexed since they were not mentioned in the takedown request.

The good news is that many of Lawrence’s pictures are no longer hosted on the site itself. In fact, the URLs listed in the takedown request to Google no longer show any of the infringing photos in question, so technically Google had no obligation to remove the URLs.

A prominent disclaimer on the site points out that the operator will gladly take down the compromising photos if he’s asked to do so. Needless to say, this is much more effective than going after Google.

The disclaimer


Source: TorrentFreak, for the latest info on copyright, file-sharing and anonymous VPN services.

The Hacker Factor Blog: By Proxy

This post was syndicated from: The Hacker Factor Blog and was written by: The Hacker Factor Blog. Original post: at The Hacker Factor Blog

As I tweak and tune the firewall and IDS system at FotoForensics, I keep coming across unexpected challenges and findings. One of the challenges is related to proxies. If a user uploads prohibited content from a proxy, then my current system bans the entire proxy. An ideal solution would only ban the user.

Proxies serve a lot of different purposes. Most people think about proxies in regards to anonymity, like the TOR network. TOR is a series of proxies that ensure that the endpoint cannot identify the starting point.

However, there are other uses for proxies. Corporations frequently have a set of proxies for handling network traffic. This allows them to scan all network traffic for potential malware. It’s a great solution for mitigating the risk from one user getting a virus and passing it to everyone in the network.

Some governments run proxies as a means to filter content. China and Syria come to mind. China has a custom solution that has been dubbed the “Great Firewall of China“. They use it to restrict site access and filter content. Syria, on the other hand, appears to use a COTS (commercial off-the-shelf) solution. In my web logs, most traffic from Syria comes through Blue Coat ProxySG systems.

And then there are the proxies that are used to bypass usage limits. For example, your hotel may charge for Internet access. If there’s a tech convention in the hotel, then it’s common to see one person pay for the access, and then run his own SOCKS proxy for everyone else to relay out over the network. This gives everyone access without needing everyone to pay for the access.

Proxy Services

Proxy networks that are designed for anonymity typically don’t leak anything. If I ban a TOR node, then that node stays banned since I cannot identify individual users. However, the proxies that are designed for access typically do reveal something about the user. In fact, many proxies explicitly identify whose request is being relayed. This added information is stuffed in HTTP header fields that most web sites ignore.

For example, I recently received an HTTP request from 66.249.81.4 that contained the HTTP header “X-Forwarded-For: 82.114.168.150”. If I were to ban the user, then I would ban “66.249.81.4”, since that system connected to my server. However, 66.249.81.4 is google-proxy-66-249-81-4.google.com and is part of a proxy network. This proxy network identified who was relaying with the X-Forwarded-For header. In this case, “82.114.168.150” is someone in Yemen. If I see this reference, then I can start banning the user in Yemen rather than the Google Proxy that is used by lots of people. (NOTE: I changed the Yemen IP address for privacy, and this user didn’t upload anything requiring a ban; this is just an example.)

Unfortunately, there is no real standard here. Different proxies use different methods to denote the user being relayed. I’ve seen headers like “X-Forwarded”, “X-Forwarded-For”, “HTTP_X_FORWARDED_FOR” (yes, they actually sent this in their header; this is NOT from the Apache variable), “Forwarded”, “Forwarded-For-IP”, “Via”, and more. Unless I know to look for it, I’m liable to ban a proxy rather than a user.
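A sketch of how that header scanning might look in practice. The header names are the ones mentioned above; the function name and the private-address filtering are illustrative assumptions, not the actual FotoForensics code:

```python
from ipaddress import ip_address

# Non-standard forwarding headers observed in the wild (see above),
# checked roughly in order of usefulness.
FORWARD_HEADERS = [
    "X-Forwarded-For",
    "X-Forwarded",
    "Forwarded-For-IP",
    "Forwarded",
    "HTTP_X_FORWARDED_FOR",  # some broken proxies really do send this literally
    "Via",
]

def relayed_client(headers):
    """Return the first plausible public IP found in a forwarding header,
    or None if the request does not appear to be relayed."""
    for name in FORWARD_HEADERS:
        value = headers.get(name)
        if not value:
            continue
        # A header may hold a comma-separated chain; the client comes first.
        for token in value.split(","):
            token = token.strip()
            try:
                addr = ip_address(token)
            except ValueError:
                continue  # "Via" and "Forwarded" often carry non-IP tokens
            if not (addr.is_private or addr.is_loopback):
                return str(addr)
    return None

# A Google-proxy-style request relaying for a user elsewhere:
print(relayed_client({"X-Forwarded-For": "82.114.168.150"}))  # 82.114.168.150
```

With something like this in place, a ban can target the relayed address when one can be extracted, and fall back to the connecting proxy only when it cannot.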

In some cases, I see the direct connection address also listed as the relayed address; it claims to be relaying itself. I suspect that this is caused by some kind of anti-virus system that is filtering network traffic through a local proxy. And sometimes I see private addresses (“private” as in “private use” and “should not be routed over the Internet”; not “don’t tell anyone”). These are likely home users or small companies that run a proxy for all of the computers on their local networks.

Proxy Detection

If I cannot identify the user being proxied, then just identifying that the system is a proxy can be useful. Rather than banning known proxies for three months, I might ban the proxy for only a day or a week. The reduced time should cut down on the number of people blocked because of the proxy that they used.

There are unique headers that can identify that a proxy is present. Blue Coat ProxySG, for example, adds in a unique header: “X-BlueCoat-Via: abce6cd5a6733123”. This tracking ID is unique to the Blue Coat system; every user relaying through that specific proxy gets the same unique ID. It is intended to prevent looping between Blue Coat devices. If the ProxySG system sees its own unique ID, then it has identified a loop.

Blue Coat is not the only vendor with their own proxy identifier. Fortinet’s software adds in an “X-FCCKV2” header. And Verizon silently adds in an “X-UIDH” header that has a large binary string for tracking users.
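The shorter-ban policy described above could be expressed very simply. The vendor header names come from this post; the durations and the function name are made-up examples:

```python
# Headers whose mere presence marks the connecting IP as a proxy.
PROXY_MARKERS = ("X-BlueCoat-Via", "X-FCCKV2", "X-UIDH")

BAN_DAYS_DIRECT = 90  # three months for a direct offender
BAN_DAYS_PROXY = 7    # shorter ban for a proxy shared by many users

def ban_duration(headers):
    """Pick a ban length based on whether the request came via a known proxy."""
    if any(marker in headers for marker in PROXY_MARKERS):
        return BAN_DAYS_PROXY
    return BAN_DAYS_DIRECT

print(ban_duration({"X-BlueCoat-Via": "abce6cd5a6733123"}))  # 7
print(ban_duration({}))                                      # 90
```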

Language and Location

Besides identifying proxies, I can also identify the user’s preferred language.

The intent with specifying languages in the HTTP header is to help web sites present content in the native language. If my site supports English, German, and French, then seeing a hint that says “French” should help me automatically render the page using French. However, this can be used along with IP address geolocation to identify potential proxies. If the IP address traces to Australia but the user appears to speak Italian, then it increases the likelihood that I’m seeing an Australian proxy that is relaying for a user in Italy.

The official way to identify the user’s language is to use an HTTP “Accept-Language” header. For example, “Accept-Language: en-US,en;q=0.5” says to use the United States dialect of English, or just English if there is no dialect support at the web site. However, there are unofficial approaches to specifying the desired language. For example, many web browsers encode the user’s preferred language into the HTTP user-agent string.
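Parsing that header is mostly a matter of splitting on commas and honoring the q-values. A minimal sketch, not a full RFC-compliant parser:

```python
def parse_accept_language(value):
    """Return language tags from an Accept-Language value,
    sorted by descending q-value (preference)."""
    langs = []
    for part in value.split(","):
        part = part.strip()
        if ";q=" in part:
            tag, _, q = part.partition(";q=")
            try:
                weight = float(q)
            except ValueError:
                weight = 1.0
        else:
            tag, weight = part, 1.0
        langs.append((tag.strip(), weight))
    # A stable sort keeps the original order for equal weights.
    langs.sort(key=lambda item: item[1], reverse=True)
    return [tag for tag, _ in langs]

print(parse_accept_language("en-US,en;q=0.5"))  # ['en-US', 'en']
```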

Similarly, Facebook can relay network requests. These appear with the header “X-Facebook-Locale”. This is an unofficial way to identify when Facebook is being used as a proxy. However, it also tells me the user’s preferred language: “X-Facebook-Locale: fr_CA”. In this case, the user prefers the Canadian dialect of French (fr_CA). While the user may be located anywhere in the world, he is probably in Canada.

There’s only one standard way to specify the recipient’s language. However, there are lots of common non-standard ways. Just knowing what to look for can be a problem. But the bigger problem happens when you see conflicting language definitions.

Accept-Language: de-de,de;q=0.5

User-Agent: Mozilla/5.0 (Linux; Android 4.4.2; it-it; SAMSUNG SM-G900F/G900FXXU1ANH4 Build/KOT49H) AppleWebKit/537.36 (KHTML, like Gecko) Version/1.6 Chrome/28.0.1500.94 Mobile Safari/537.36

X-Facebook-Locale: es_LA

x-avantgo-clientlanguage: en_GB

x-ucbrowser-ua: pf(Symbian);er(U);la(en-US);up(U2/1.0.0);re(U2/1.0.0);dv(NOKIAE90);pr
(UCBrowser/9.2.0.336);ov(S60V3);pi(800*352);ss(800*352);bt(GJ);pm(0);bv(0);nm(0);im(0);sr(2);nt(1)

X-OperaMini-Phone-UA: Mozilla/5.0 (Linux; U; Android 4.4.2; id-id; SM-G900T Build/id=KOT49H.G900SKSU1ANCE) AppleWebKit/534.30 (KHTML, like Gecko) Version/4.0 Mobile Safari/534.30

If I see all of these in one request, then I’ll probably choose the official header first (German from Germany). However, without the official header, would I choose Spanish from Latin America (“es_LA” is unofficial but widely used), Italian from Italy (it-it) as specified by the web browser user-agent string, or the language from one of those other fields? (Fortunately, in the real world these would likely all be the same. And you’re unlikely to see most of these fields together. Still, I have seen some conflicting fields.)
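One way to encode that preference order is to check the official header first and fall back to the unofficial fields. The priority list below is my reading of the reasoning above, not any standard:

```python
# Language-bearing fields, highest priority first.
LANGUAGE_FIELDS = (
    "Accept-Language",           # the one official header
    "X-Facebook-Locale",
    "x-avantgo-clientlanguage",
)

def preferred_language(headers):
    """Return the first language tag found, scanning fields in priority order."""
    for field in LANGUAGE_FIELDS:
        value = headers.get(field)
        if value:
            # Take the first tag and strip any q-value.
            return value.split(",")[0].split(";")[0].strip()
    return None

print(preferred_language({
    "Accept-Language": "de-de,de;q=0.5",
    "X-Facebook-Locale": "es_LA",
}))  # de-de
```

Language hints buried in user-agent strings (as in the UCBrowser and Opera Mini examples above) would need per-vendor parsing and are left out here.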

Time to Program!

So far, I have identified nearly a dozen different HTTP headers that denote some kind of proxy. Some of them identify the user behind the proxy, but others leak clues or only indicate that a proxy was used. All of this can be useful for determining how to handle a ban after someone violates my site’s terms of service, even if I don’t know who is behind the proxy.

In the near future, I should be able to identify at least some of these proxies. If I can identify the people using proxies, then I can restrict access to the user rather than the entire proxy. And if I can at least identify the proxy, then I can still try to lessen the impact for other users.

TorrentFreak: Uploaded.net Liable For Failing to Delete Copyright Content

This post was syndicated from: TorrentFreak and was written by: Andy. Original post: at TorrentFreak

Having content removed from the Internet is a task undertaken by most major entertainment industry companies. While laws differ around the world, the general understanding is that once notified of an infringement, Internet-based companies need to take action to prevent ongoing liability.

A case in Germany involving popular file-hosting service Uploaded.net has not only underlined this notion, but clarified that in some instances a hosting service can be held liable even if they aren’t aware of the content of a takedown notice.

It all began with anti-piracy company proMedia GmbH who had been working with their record label partners to remove unauthorized content from the Internet. The Hamburg-based company spotted a music album being made available on Uploaded so wrote to the company with a request for it to be removed.

“In the case at hand, a notice with regards to some infringing URLs on the file-hosting site was sent to the given abuse contact of the site,” Mirko Brüß, a lawyer with record label lawfirm Rasche Legal, told TorrentFreak.

However, three days later the album was still being made available so the lawfirm sent Uploaded an undertaking to cease and desist. When the file-hosting site still didn’t respond, Rasche Legal obtained a preliminary injunction against Uploaded.

“After it was served in Switzerland, the file-hoster objected and the court had an oral hearing in September,” Brüß explains.

In its response Uploaded appealed the injunction claiming it had never been aware of the takedown notices from proMedia GmbH. Lars Sobiraj of Tarnkappe told TF that Uploaded claimed to have received an empty Excel spreadsheet so didn’t react to it, preferring instead to receive plain text documents or complaints via its official takedown tool.

Rasche Legal later sent another email but Uploaded staff reportedly didn’t get a chance to read that either since an email server identified the correspondence as spam and deleted it.

“We did not believe this ‘story’ but thought they had just failed to process the notice expeditiously,” Brüß told TF.

In its judgment on the case the Hamburg District Court found that while service providers have no general obligations to monitor for infringing content on their services, the same cannot be said of infringements they have been made aware of.

However, the big question sat on Uploaded’s claims that it had never been aware of the infringements in question since it had never received the notices relating to them. In the event the Court found that sending the emails to Uploaded was enough to notify the service that infringements were taking place and that it must take responsibility for ending them.

“The Court followed our reasoning, meaning it is sufficient that the file-hoster actually receives the notice in a way that you can expect it to be read under normal circumstances,” Brüß says.

“There is a similar jurisdiction with regards to postal mail, where it is sufficient that the letter has reached your inbox and it is not necessary that you actually read the content of the letter in order for it to take legal effect. So here, we had proved that the takedown notice did reach the file-hoster’s mailserver, they only failed to act upon it.”

A ruling in the opposite direction would have opened up the possibility of other companies in a similar position to Uploaded blaming technical issues each time they failed to take down infringing content, Brüß explains. Instead, file-hosters are now required to respond quickly to complaints or face liability.

“So in essence, file-hosters need to make sure that they attain knowledge of all notices sent to them and act upon these notices expeditiously, or they face secondary (or even primary) liability. Also, the court stated that it does not matter by which means the notices are sent,” Brüß concludes.

Source: TorrentFreak, for the latest info on copyright, file-sharing and anonymous VPN services.

TorrentFreak: Google Will Punish “Pirate” Sites Harder in Search Results

This post was syndicated from: TorrentFreak and was written by: Ernesto. Original post: at TorrentFreak

Over the past few years the entertainment industries have repeatedly asked Google to step up its game when it comes to anti-piracy efforts.

These remarks haven’t fallen on deaf ears and Google has slowly implemented various new anti-piracy measures in response.

Today Google released an updated version of its “How Google Fights Piracy” report. The company provides an overview of all the efforts it makes to combat piracy, but also stresses that copyright holders themselves have a responsibility to make content available.

One of the most prominent changes is a renewed effort to make “pirate” sites less visible in search results. Google has had a downranking system in place since 2012, but this lacked effectiveness according to the RIAA, MPAA and other copyright industry groups.

The improved version, which will roll out next week, aims to address this critique.

“We’ve now refined the signal in ways we expect to visibly affect the rankings of some of the most notorious sites. This update will roll out globally starting next week,” says Katherine Oyama, Google’s Copyright Policy Counsel.

The report notes that the new downranking system will still be based on the number of valid DMCA requests a site receives, among other factors. The pages of flagged sites remain indexed, but are less likely to be the top results.

“Sites with high numbers of removal notices may appear lower in search results. This ranking change helps users find legitimate, quality sources of content more easily,” the report reads.

Looking at the list of sites for which Google received the most DMCA takedown requests, we see that 4shared, Filestube and Dilandau can expect to lose some search engine traffic.

The report further highlights several other tweaks and improvements to Google’s anti-piracy efforts. For example, in addition to banning piracy related AutoComplete words, Google now also downranks suggestions that return results with many “pirate” sites.

Finally, the report also confirms our previous reporting which showed that Google uses ads to promote legal movie services when people search for piracy related keywords such as torrent, DVDrip and Putlocker. This initiative aims to increase the visibility of legitimate sites.

A full overview of Google’s anti-piracy efforts is available here.

Source: TorrentFreak, for the latest info on copyright, file-sharing and anonymous VPN services.

TorrentFreak: United States Hosts Most Pirate Sites, UK Crime Report Finds

This post was syndicated from: TorrentFreak and was written by: Ernesto. Original post: at TorrentFreak

The UK IP Crime Group, a coalition of law enforcement agencies, government departments and industry representatives, has released its latest IP Crime Report.

The report is produced by the UK Government’s Intellectual Property Office and provides an overview of recent achievements and current challenges in the fight against piracy and counterfeiting. Increasingly, these threats are coming from the Internet.

“One of the key features in this year’s report is the continuing trend that the Internet is a major facilitator of IP crime,” the Crime Group writes.

The report notes that as in previous years, Hollywood-funded industry group FACT remains one of the key drivers of anti-piracy efforts in the UK. Over the past year they’ve targeted alleged pirate sites through various channels, including their hosting providers.

Not all hosts are receptive to FACT’s complaints though, and convincing companies that operate abroad is often a challenge. This includes the United States where the majority of the investigated sites are hosted.

“Only 14% of websites investigated by FACT are hosted in the UK. While it is possible to contact the hosts of these websites, there still remains a considerable number of copyright infringing websites that are hosted offshore and not within the jurisdiction of the UK.”

“Analysis has shown that the three key countries in which content is hosted are the UK, the USA and Canada. However, investigating servers located offshore can cause specific problems for FACT’s law enforcement partners,” the report notes.

(Figure: hosting locations of websites investigated by FACT)

The figure above comes as a bit of a surprise, as one would expect that United States authorities and industry groups would have been keeping their own houses in order.

Just a few months ago the US-based IIPA, which includes the MPAA and RIAA as members, called out Canada because local hosting providers are “a magnet” for pirate sites. However, it now appears they still have plenty of work to do inside U.S. borders.

But even when hosting companies are responsive to complaints from rightsholders the problem doesn’t always go away. The report mentions that most sites simply move on to another host, and continue business as usual there.

“In 2013, FACT closed a website after approaching the hosting provider on 63 occasions. Although this can be a very effective strategy, in most instances the website is swiftly transferred onto servers owned by another ISP, often located outside the UK.”

While downtime may indeed be relatively brief, the report claims that it may still hurt the site, as visitors may move on to other legitimate or illegitimate sources.

“The [moving] process usually involves a disruptive period of time whereby the website is offline, during which users will often find an alternative service, thus negatively affecting the website’s popularity.”

While hosting companies remain a main target, tackling the online piracy problem requires a multi-layered approach according to the UK Crime Group.

With the help of local law enforcement groups such as City of London’s PIPCU, copyright holders have rolled out a variety of anti-piracy measures in recent months. This includes domain name suspensions, cutting off payment processors and ad revenue, website blocking by ISPs and criminal prosecutions.

These and other efforts are expected to continue during the years to come. Whether that will be enough to put a real dent in piracy rates has yet to be seen.

Source: TorrentFreak, for the latest info on copyright, file-sharing and anonymous VPN services.

TorrentFreak: High Court Orders ISPs to Block Counterfeiting Websites

This post was syndicated from: TorrentFreak and was written by: Andy. Original post: at TorrentFreak

Following successful action by the world’s leading entertainment companies to have file-sharing sites blocked at the ISP level, it was perhaps inevitable that other companies with similar issues would tread the same path.

Compagnie Financière Richemont S.A. owns several well-known luxury brands including Cartier and Mont Blanc and for some time has tried to force sites selling counterfeit products to close down. Faced with poor results, in 2014 the company wrote to the UK’s leading ISPs – Sky, TalkTalk, BT, Virgin Media, EE and Telefonica/O2 – complaining that third party sites were infringing on Richemont trademarks.

Concerned that Richemont hadn’t done enough to close the sites down on its own and that blocking could affect legitimate trade, the ISPs resisted and the matter found itself before the High Court.

This morning a decision was handed down and it’s good news for Richemont. The ISPs named in the legal action must now restrict access to websites selling physical counterfeits in the same way they already restrict file-sharing sites.

The websites mentioned in the current order are cartierloveonline.com, hotcartierwatch.com, iwcwatchtop.com, replicawatchesiwc.com, 1iwc.com, montblancpensonlineuk.com and ukmontblancoutlet.co.uk. In addition, Richemont identified tens of thousands of additional domains that could be added in the future.

A Richemont spokesperson told TorrentFreak that the ruling represents a positive step in the fight to protect brands and customers from the sale of counterfeit goods online.

“We are pleased by this judgment and welcome the Court’s recognition that there is a public interest in preventing trade mark infringement, particularly where counterfeit goods are involved. The Courts had already granted orders requiring ISPs to block sites for infringement of copyright in relation to pirated content. This decision is a logical extension of that principle to trade marks,” the company said.

Wiggin LLP, the law firm at the heart of website blocking actions for the entertainment industry, acted for Richemont in the case. The company says that today’s judgment holds benefits for both rightsholders and consumers.

“In a comprehensive judgment, the court has considered the enforcement methods that are presently available to trade mark owners when tackling infringement online. The court has concluded that Internet Service Providers play ‘an essential role’ and that the court can and should apply Article 11 of the Enforcement Directive to require the application of technical measures to impede infringement of trade marks,” Wiggin said.

According to a comment sent to TF by Arty Rajendra, Partner at IP law firm Rouse Legal, the decision is likely to be appealed.

Source: TorrentFreak, for the latest info on copyright, file-sharing and anonymous VPN services.

Errata Security: FBI’s crypto doublethink

This post was syndicated from: Errata Security and was written by: Robert Graham. Original post: at Errata Security

Recently, FBI Director James Comey gave a speech at the Brookings Institute decrying crypto. It was transparently Orwellian, arguing for a police-state. In this post, I’ll demonstrate why, quoting bits of the speech.

“the FBI has a sworn duty to keep every American safe from crime and terrorism”
“The people of the FBI are sworn to protect both security and liberty”

This is not true. The FBI’s oath is to “defend the Constitution”. Nowhere in the oath does it say “protect security” or “keep people safe”.

This detail is important. Tyrants suppress civil liberties in the name of national security and public safety. The oath taken by FBI agents, military personnel, and even the president is designed to prevent such tyrannies.

Comey repeatedly claims that FBI agents both understand their duty and are committed to it. That Comey himself misunderstands his oath disproves both assertions. This reinforces our belief that FBI agents do not see their duty as protecting our rights, but instead see rights as an impediment in pursuit of some other duty.

Freedom is Danger

The book 1984 describes the concept of “doublethink”, with political slogans as examples: “War is Peace”, “Ignorance is Strength”, and “Freedom is Slavery”. Comey goes full doublethink:

Some have suggested there is a conflict between liberty and security. I disagree. At our best, we in law enforcement, national security, and public safety are looking for security that enhances liberty. When a city posts police officers at a dangerous playground, security has promoted liberty—the freedom to let a child play without fear.

He’s wrong. Liberty and security are at odds. That’s what the 4th Amendment says. We wouldn’t be having this debate if they weren’t at odds.

He follows up with more doublethink, claiming “we aren’t seeking a back-door”, but rather an interest in “developing intercept solutions during the design phase”. Intercept solutions built into phones are the very definition of a backdoor, of course.

“terror terror terror terror terror”
“child child child child child child”

Comey mentions terrorism 5 times and child exploitation 6 times. This is transparently the tactic of the totalitarian, demagoguery based on emotion rather than reason.

Fear of terrorism after 9/11 led to the Patriot Act, granting law enforcement broad new powers in the name of fighting terrorism. Such powers have been used overwhelmingly for everything else. The most telling example is the detainment of David Miranda in the UK under a law that supposedly only applied to terrorists. Miranda was carrying an encrypted copy of the Snowden files, which clearly had nothing to do with terrorism. It was plainly an exploitation of anti-terrorism laws for the purposes of political suppression.

Any meaningful debate doesn’t start with the headline grabbing crimes, but the ordinary ones, like art theft and money laundering. Comey has to justify his draconian privacy invasion using those laws, not terrorism.

“rule of law, rule of law, rule of law, rule of law, rule of law”
Comey mentions rule-of-law five times in his speech. His intent is to demonstrate that even the FBI is subject to the law, namely review by an independent judiciary. But that isn’t true.

The independent judiciary has been significantly weakened in recent years. We have secret courts, NSLs, and judges authorizing extraordinary powers because they don’t understand technology. Companies like Apple and Google challenge half the court orders they receive, because judges just don’t understand. There is frequent “parallel construction”, where evidence from spy agencies is used against suspects, sidestepping judicial review.

What Comey really means is revealed by this statement: “I hope you know that I’m a huge believer in the rule of law. … There should be no law-free zone in this country”. This is a novel definition of “rule of law”, a “rule by law enforcement”, that has never been used before. It reveals what Comey really wants: a totalitarian police-state where nothing is beyond the police’s powers, and where the only check on power is a weak and pliant judiciary.

“that a commitment to the rule of law and civil liberties is at the core of the FBI”
No, lip service to these things is at the core of the FBI.

I know this from personal experience when FBI agents showed up at my offices and threatened me, trying to get me to cancel a talk at a cybersecurity conference. They repeated over and over how they couldn’t force me to cancel my talk because I had a First Amendment right to speak — while simultaneously telling me that if I didn’t cancel my talk, they would taint my file so that I would fail background checks and thus never be able to work for the government ever again.

We saw that again when the FBI intercepted clearly labeled “attorney-client privileged” mail between Weev and his lawyer. Their excuse was that the threat of cyberterrorism trumped Weev’s rights.

Then there was that scandal that saw widespread cheating on a civil-rights test. FBI agents were required to certify, unambiguously, that nobody helped them on the test. They lied. It’s one more oath FBI agents seem not to care about.

If commitment to civil liberties were important to him, Comey would get his oath right. If commitment to rule-of-law were important, he’d get the definition right. Every single argument Comey makes demonstrates how little he is interested in civil liberties.

“Snowden Snowden Snowden”

Comey mentions Snowden three times, such as saying “In the wake of the Snowden disclosures, the prevailing view is that the government is sweeping up all of our communications”.

This is not true. No news article based on the Snowden documents claims this. No news site claims this. None of the post-Snowden activists believe this. All the people who matter know the difference between metadata and full eavesdropping, and likewise the difficulty the FBI has in getting at that data.

This is how we know the FBI is corrupt. They ignore our concerns that government has been collecting every phone record in the United States for 7 years without public debate, but instead pretend the issue is something stupid, like the false belief they’ve been recording all phone calls. They knock down strawman arguments instead of addressing our real concerns.

Regulate communication service providers

In his book 1984, everyone had a big screen television mounted on the wall that was two-way. Citizens couldn’t turn the TV off, because it had to be blaring government propaganda all the time. The camera was active at all times in case law enforcement needed to access it. When the book was published in 1949, televisions were new, and people thought two-way TVs were plausible. They weren’t at that time; it was a nonsense idea.

But then the Internet happened and now two-way TVs are a real thing. And it’s not just the TV that’s become two-way video, but also our phones. If you believe the FBI follows the “rule of law” and that the courts provide sufficient oversight, then there’s no reason to stop them going full Orwell, allowing the police to turn on your device’s camera/microphone any time they have a court order in order to eavesdrop on you. After all, as Comey says, there should be no law-free zone in this country, no place law enforcement can’t touch.

Comey pretends that all he seeks at the moment is a “regulatory or legislative fix to create a level playing field, so that all communication service providers are held to the same standard”, meaning a CALEA-style backdoor allowing eavesdropping. But here’s the thing: communication is no longer a service but an app. Communication is “end-to-end”, between apps, often by different vendors, bypassing any “service provider”. There is no way to eavesdrop on those apps without being able to secretly turn on a device’s microphone remotely and listen in.

That’s why we crypto-activists draw the line here, at this point. Law enforcement backdoors in crypto inevitably means an Orwellian future.


Conclusion

There is a lot more wrong with James Comey’s speech. What I’ve focused on here were the Orwellian elements. The right to individual crypto, with no government backdoors, is the most important new human right that technology has created. Without it, the future is an Orwellian dystopia. And as proof of that, I give you James Comey’s speech, whose arguments are the very caricatures that Orwell lampooned in his books.

TorrentFreak: New Github DMCA Policy Gives Alleged Infringers a Second Chance

This post was syndicated from: TorrentFreak and was written by: Andy. Original post: at TorrentFreak

githubLike other highly-trafficked domains relying heavily on user contributed content, coding and collaboration platform Github now publishes its own transparency report detailing copyright-related complaints received by the company.

Some of these DMCA notices have been reported here on TF in recent months, including one sent by the MPAA which effectively ended Popcorn Time’s presence on the site and another sent by Microsoft targeting an Xbox music app.

Now, in a move to bring more transparency and clarity to its copyright processes, Github has announced significant changes to the way it handles DMCA complaints. The company says that three major changes have been implemented in order to improve on-site experience and better serve users.

In the first instance, copyright owners will need to conduct their investigations as usual and send a properly formatted takedown notice to Github. Presuming it meets statutory requirements, Github will publish it in their transparency report and pass a link to the user in question.

At this point Github’s new policy begins to take effect. Previously the company would’ve immediately taken down the complained-about content but Github now says it wants to provide alleged infringers with a chance to put things right “whenever possible.”

24 hours to take action

To this end, Github says users will have the opportunity to modify or remove content within 24 hours of a complaint. Copyright holders will be notified that Github has given the affected user this leeway and it will be down to the user to inform Github within the allotted period that the appropriate changes have been made. Failure to do so will see the repository removed.

Despite this wiggle room, not all complaints will result in the luxury of a 24 hour ‘action’ period. Should a DMCA notice claim that the entire contents of a repository infringe, the repository in question will be removed “expeditiously.”

Forks will not be automatically disabled

The second significant change is that when Github receives a copyright complaint against a parent repository, it will not automatically disable project forks. For that to happen any complaint will have to specifically include not only the parent’s URL, but also the locations of all related forks.

“GitHub will not automatically disable forks when disabling a parent repository. This is because forks belong to different users, may have been altered in significant ways, and may be licensed or used in a different way that is protected by the fair-use doctrine,” Github explains.

Fighting back: Counter-notices

As required by law, users affected by takedown notices have a right of reply if they believe they’ve been wrongly targeted. Sufficiently detailed counter notices can be submitted to Github for forwarding to complaining rightsholders. They will also be published in the site’s transparency report.

This right of reply is very important and one that appears to be underutilized. Earlier this month Github published a complaint which targeted and took down a wide range of addons for the popular media player XBMC.

Apparently sent by ‘DMCA Secure’, a company that has no immediately visible web presence, the notice claimed to represent a wide range of copyright holders including Sony, Fox, Dreamworks, NFL and WWE, to name just a few.

The notice is unusual. While it’s common for the first three companies to team up, we’d never seen a notice featuring such a wide range of diverse rightsholders before. Also, while the functionality of the code could give rise to copyright issues, none of those companies own the copyrights to the code in question.

TF put it to Github that the complaint looked unusual and might even be bogus, but the company declined to comment on specific cases. Like many companies in similar positions, it appears Github has to take notices at face value and relies on users to submit counter-notices to air their complaints. None of the repositories in question have done so.

Github’s revamped DMCA policy can be found here, along with how-to guides on submitting takedown and counter notices.

While Github is well-known in the technology sector, it may come as a surprise just how popular the service is. Around seven million people use the site and according to Alexa, Github.com is the 127th most-visited domain in the world.

Source: TorrentFreak, for the latest info on copyright, file-sharing and anonymous VPN services.

TorrentFreak: Leaked TPP Draft Reveals Tough Anti-Piracy Measures

This post was syndicated from: TorrentFreak and was written by: Ernesto. Original post: at TorrentFreak

copyright-brandedThe Trans-Pacific Partnership, an agreement aimed at strengthening economic ties between the United States, Canada, New Zealand, Japan and eight other countries in the region, has been largely shrouded in secrecy.

Today whistleblower outfit Wikileaks sheds some light on the ongoing negotiations by leaking a new draft of the agreement’s controversial intellectual property chapter.

The draft dates back to May 2014 and although it’s far from final, some significant progress has been made since the first leak in August last year.

For example, the countries have now agreed that a new copyright term will be set in the agreement. No decision has been made on a final term but options currently on the table are life of the author plus 50, 70 or 100 years.

The proposal to add criminal sanctions for non-commercial copyright infringement, which is currently not the case in many countries, also remains in play.

The leak further reveals a new section on ISP liability. This includes a proposal to make it mandatory for ISPs to alert customers who stand accused of downloading copyrighted material, similar to the requirement under the U.S. DMCA.

Alberto Cerda of Georgetown University Law Center points out that some of the proposals in the ISP liability section go above and beyond the DMCA.

“The most worrying proposal on the matter is that one that would extend the scope of the provisions from companies that provide Internet services to any person who provides online services,” Cerda told TorrentFreak.

This means that anyone who passes on Internet traffic could be held liable for the copyright infringements of others. This could include the local coffeehouse that offers free wifi, or even someone’s own Internet connection if it’s shared with others.

The leaked draft also adds a provision that would allow ISPs to spy on their own users to catch those who download infringing content. This is another concern, according to the law professor.

“From a human rights viewpoint, that should be expressly limited to exceptional circumstances,” Cerda says.

It’s clear that the ISP liability section mimics the DMCA. In fact, throughout the TPP chapter the most draconian proposals often originate from the United States.

Law Professor Michael Geist notes that Canada has been the leading opponent of many of the U.S. proposals, which often go against the country’s recently revamped copyright law. Geist warns that the TPP may eventually lead to tougher local laws as U.S. pressure continues.

“As the treaty negotiations continue, the pressure to cave to U.S. pressure will no doubt increase, raising serious concerns about whether the TPP will force the Canadian government to overhaul recently enacted legislation,” Geist writes.

Compared to the previous draft that leaked last year there are also some positive developments to report.

For example, Canada put forward a proposal that permits countries to allow exceptions to technological protection measures. This would make it possible to classify DRM circumvention as fair use, for example. A refreshing proposal, but one that’s unlikely to be approved by the U.S.

If anything, the leaked TPP chapter shows once again that there is still a very long way to go before a final draft is ready. After more than three years of negotiating many of the proposals are still heavily debated and could go in multiple directions.

That is, if an agreement is ever reached.

Source: TorrentFreak, for the latest info on copyright, file-sharing and anonymous VPN services.

SANS Internet Storm Center, InfoCON: green: Logging SSL, (Thu, Oct 16th)

This post was syndicated from: SANS Internet Storm Center, InfoCON: green and was written by: SANS Internet Storm Center, InfoCON: green. Original post: at SANS Internet Storm Center, InfoCON: green

With POODLE behind us, it is time to get ready for the next SSL fire drill. One of the questions that keeps coming up is which ciphers and SSL/TLS versions are actually in use. Whether or not you decide to turn off SSLv3 depends a lot on who needs it, and it is an important answer to have ready should some other cipher turn out to be too weak tomorrow.

But keep in mind that it is not just numbers that matter. You also need to figure out who the outliers are and how important (or dangerous?) they are. So as a good start, try to figure out how to log SSL/TLS versions and ciphers. There are a couple of options to do this:

In Apache, you can log the protocol version and cipher easily by logging the respective environment variables [1]. For example:

CustomLog logs/ssl_request_log "%t %h \"%{User-agent}i\" %{SSL_PROTOCOL}x %{SSL_CIPHER}x"

This logs the SSL protocol and cipher. You can add these variables to an existing access log, or create a new log. If you decide to log this in its own log, I suggest you add the User-Agent and IP address (as well as a time stamp).

In nginx, you can do the same by adding $ssl_cipher $ssl_protocol to the log_format directive in your nginx configuration. For example:

log_format ssl '$remote_addr "$http_user_agent" $ssl_cipher $ssl_protocol';

This should give you a similar result as for Apache above.

If you have a packet sniffer in place, you can also use tshark to extract the data. With tshark, you can actually get a bit further: you can log the client hello with whatever ciphers the client proposed, and the server hello, which will indicate what cipher the server picked.

tshark -r ssl -2R "ssl.handshake.type==2 or ssl.handshake.type==1" -T fields -e ssl.handshake.type -e ssl.record.version -e ssl.handshake.version -e ssl.handshake.ciphersuite

For extra credit log the host name requested in the client hello via SNI and compare it to the actual host name the client connects to.
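For the SNI comparison, a minimal post-processing sketch is shown below. It assumes you have exported the SNI value and the contacted host name as tab-separated fields (for instance with tshark's -T fields and the ssl.handshake.extensions_server_name field); the input format and names here are illustrative, not a prescribed workflow.

```python
# Sketch: flag client hellos where the SNI name does not match the host
# the client actually reached. Input is assumed to be tshark "-T fields"
# output, one tab-separated "sni<TAB>host" record per line (illustrative
# format; adjust the tshark field list to your environment).

def sni_mismatches(tsv_lines):
    """Return (sni, host) pairs where the two names disagree."""
    mismatches = []
    for line in tsv_lines:
        parts = line.rstrip("\n").split("\t")
        if len(parts) != 2:
            continue  # skip records missing either field
        sni, host = parts
        if sni and host and sni.lower() != host.lower():
            mismatches.append((sni, host))
    return mismatches

sample = [
    "www.example.com\twww.example.com",
    "login.bank.example\tcdn.evil.example",
]
print(sni_mismatches(sample))  # [('login.bank.example', 'cdn.evil.example')]
```

Any pair that comes back mismatched is worth a closer look; legitimate CDN setups will produce some noise, so treat hits as leads rather than verdicts.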

Now you can not only collect real data as to which ciphers are needed, but you can also look for anomalies. For example, user agents that request very different ciphers than other connections that claim to originate from the same user agent. Or who is asking for weak ciphers? Maybe a sign of an SSL downgrade attack? Or an attack tool using an older SSL library…
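As a starting point for that kind of anomaly hunting, here is a minimal sketch that tallies protocol/cipher combinations per user agent and flags weak ciphers from log lines in the quoted-User-Agent format above. The exact line layout and the weak-cipher markers are assumptions; adjust both to your own log format and policy.

```python
from collections import defaultdict

# Assumption: each line ends with ... "<user-agent>" <protocol> <cipher>,
# as produced by a log format that quotes the User-Agent. The list of
# "weak" markers below is illustrative, not authoritative.
WEAK_MARKERS = ("RC4", "DES", "EXPORT", "NULL")

def tally(log_lines):
    """Return (per-agent cipher counts, weak-cipher hits)."""
    per_agent = defaultdict(lambda: defaultdict(int))
    weak_hits = []
    for line in log_lines:
        try:
            before, _, rest = line.rpartition('" ')  # split off proto+cipher
            agent = before.rsplit('"', 1)[-1]        # last quoted field
            proto, cipher = rest.split()
        except ValueError:
            continue  # malformed line, skip
        if not agent:
            continue
        per_agent[agent][(proto, cipher)] += 1
        if any(m in cipher for m in WEAK_MARKERS):
            weak_hits.append((agent, proto, cipher))
    return per_agent, weak_hits
```

From there you can eyeball the per-agent tables for user agents whose cipher mix differs sharply from their peers, and chase down every weak-cipher hit individually.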

[1] http://httpd.apache.org/docs/2.2/mod/mod_ssl.html#logformats


Johannes B. Ullrich, Ph.D.
STI|Twitter|LinkedIn

(c) SANS Internet Storm Center. https://isc.sans.edu Creative Commons Attribution-Noncommercial 3.0 United States License.

TorrentFreak: Freedom-Friendly Iceland Blocks The Pirate Bay

This post was syndicated from: TorrentFreak and was written by: Andy. Original post: at TorrentFreak

In 2013, copyright groups including the local equivalents of the RIAA (STEF) and MPAA (SMAIS) reported the operators of The Pirate Bay to Icelandic police. It had zero negative effect on the site.

So, with a public anti-piracy awareness campaign under their belts, STEF and SMAIS embarked on a strategy successfully employed by copyright holders in the UK, Italy, the Netherlands, Belgium, Denmark and other European countries. The groups issued demands for local ISPs to block not only The Pirate Bay, but also Deildu.net, Iceland’s most popular private torrent tracker.

Modifications to the country’s Copyright Act in 2010 authorized injunctions against intermediaries, so the chances of success seemed good. However, this was Iceland, a country strongly associated with freedom of speech. Could protection of copyrights trump that?

“This action doesn’t go against freedom of expression as it aims to prevent copyright infringement and protect the rights and income of authors, artists and producers,” the rightsholders insisted.

Initial legal action against ISPs faced issues, with one blocking request rejected on a procedural matter. Another featuring four plaintiffs was reduced to three when in May this year the Supreme Court decided that only music group STEF had the rights to claim injunctive relief.

But despite the setbacks, this week the rightsholders achieved the ruling they had been hoping for. The Reykjavík District Court handed down an injunction to ISPs Vodafone and Hringdu forcing them to block several domains belonging to The Pirate Bay and Deildu.

STEF Director of Policy Gudrun Bjork Bjarnadóttir told local media that the decision of the Court is an important event that will smooth the way for much-needed additional blockades.

“We will never reach a final victory in the battle so it makes sense for people to realize that it’s likely that new sites will spring up. However, following similar actions abroad visitor numbers to such sites have declined significantly,” Bjarnadóttir said.

The domains to be blocked include thepiratebay.se, thepiratebay.sx and thepiratebay.org, plus deildu.net and deildu.com. Currently the injunction applies to just two ISPs and it’s unclear whether there will be an attempt at expansion, but in the meantime the effort is likely to be a symbolic one.

The block against The Pirate Bay will be circumvented almost immediately due to the wide range of reverse proxy sites available and Deildu has already taken evasive action of its own. Within hours the private tracker announced a brand new domain – Iceland.pm – one that isn’t listed in the court order.

ISP Hringdu says that the Court ruling runs counter to company policies.

“It is clear that [the ruling] is not in harmony with Hringdu’s policy regarding net freedom,” director Kristinn Pétursson told Vísir. “The company has placed great emphasis on the idea that our customers should have unrestricted access to the internet.”

Neither of the ISPs has yet indicated an appeal to the Supreme Court.

Source: TorrentFreak, for the latest info on copyright, file-sharing and anonymous VPN services.

TorrentFreak: Google Removes Pirate Bay Search Box and Links

This post was syndicated from: TorrentFreak and was written by: Ernesto. Original post: at TorrentFreak

google-bayAbout a month ago Google announced its new and improved “sitelinks” sections.

This section appears when searching for keywords related to large sites, including YouTube and Twitter, and lists links to popular parts of the site.

Last week TorrentFreak reported that The Pirate Bay had also been added to this list. This allowed people to use Google to search Pirate Bay pages, complete with a pirate-themed AutoComplete function.

While this unusual addition was the work of algorithms, it was bound to upset some entertainment industry groups. After all, many copyright holders have been asking to make sites such as The Pirate Bay less visible in the search results, and this change was doing the opposite.

This is how a search for The Pirate Bay looked until yesterday, complete with a search box and prominent sitelinks.

Pirate Bay search box and sitelinks
tpbsitelinks

Now, less than a week later, the search bar no longer appears for Pirate Bay related content. What’s more, other prominent sitelinks which had been in place for more than a year are gone too.

Today, the only things left are a few rather small sitelinks under the site description, as shown below.

Pirate Bay ….
google-sitelinks-gone-tpb

TorrentFreak has confirmed that the sitelinks features were removed for several torrent sites including Isohunt.to and Torrentz.eu. For Google, Twitter and other sites the new search box remains online.

The removal of the search box and prominent links appears to be intentional. TorrentFreak learned that Google was not happy with the unintended feature for The Pirate Bay, and must have felt the need to take action.

While the removal may be a well-intentioned move to keep copyright holders pleased, it places Google in a difficult position. It could be argued that if the sitelinks features have been removed due to the “infringing” aspects of a site, why keep the site in search results at all?

To find out more TorrentFreak contacted Google, but the company didn’t wish to comment on the recent changes. Google did stress that the placing of the sitelinks is determined automatically.

“Not every site will get the sitelinks search box; it’s determined automatically based on a number of factors. As always, we’ll keep working to improve the quality of our search results,” a Google spokesperson says.

The comment evades the issue at hand, but it appears that these factors were changed recently to exclude The Pirate Bay and other “pirate” sites.

For now, however, all Pirate Bay pages remain indexed as usual. In that regard the recent change is mostly interesting from a political perspective, as a possible result of the entertainment industry’s continuing pressure on the search engine.

Source: TorrentFreak, for the latest info on copyright, file-sharing and anonymous VPN services.

Krebs on Security: Seleznev Arrest Explains ‘2Pac’ Downtime

This post was syndicated from: Krebs on Security and was written by: BrianKrebs. Original post: at Krebs on Security

The U.S. Justice Department has piled on more charges against alleged cybercrime kingpin Roman Seleznev, a Russian national who made headlines in July when it emerged that he’d been whisked away to Guam by U.S. federal agents while vacationing in the Maldives. The additional charges against Seleznev may help explain the extended downtime at an extremely popular credit card fraud shop in the cybercrime underground.

The 2pac[dot]cc credit card shop.


The government alleges that the hacker known in the underground as “nCux” and “Bulba” was Roman Seleznev, a 30-year-old Russian citizen who was arrested in July 2014 by the U.S. Secret Service. According to Russian media reports, the young man is the son of a prominent Russian politician.

Seleznev was initially identified by the government in 2012, when it named him as part of a conspiracy involving more than three dozen popular merchants on carder[dot]su, a bustling fraud forum where Bulba and other members openly marketed various cybercrime-oriented services (see the original indictment here).

According to Seleznev’s original indictment, he was allegedly part of a group that hacked into restaurants between 2009 and 2011 and planted malicious software to steal card data from store point-of-sale devices. The indictment further alleges that Seleznev and unnamed accomplices used his online monikers to sell stolen credit and debit cards at bulba[dot]cc and track2[dot]name. Customers of these services paid for their cards with virtual currencies, including WebMoney and Bitcoin.

But last week, U.S. prosecutors piled on another 11 felony counts against Seleznev, charging that he also sold stolen credit card data on a popular carding store called 2pac[dot]cc. Interestingly, Seleznev’s arrest coincides with a period of extended downtime on 2pac[dot]cc, during which time regular customers of the store could be seen complaining on cybercrime forums where the store was advertised that the proprietor of the shop had gone silent and was no longer responding to customer support inquiries.

A few weeks after Seleznev’s arrest, it appears that someone new began taking ownership of 2pac[dot]cc’s day-to-day operations. That individual recently posted a message on the carding shop’s home page apologizing for the extended outage and stating that fresh, new cards were once again being added to the shop’s inventory.

The message, dated Aug. 8, 2014, explains that the proprietor of the shop was unreachable because he was hospitalized following a car accident:

“Dear customers. We apologize for the inconvenience that you are experiencing now by the fact that there are no updates and [credit card] checker doesn’t work. This is due to the fact that our boss had a car accident and he is in the hospital. We will solve all problems as soon as possible. Support always available, thank you for your understanding.”

2pac[dot]cc’s apologetic message to would-be customers of the credit card fraud shop.

IT’S ALL ABOUT CUSTOMER SERVICE

2pac is but one of dozens of fraud shops selling stolen debit and credit cards. And with news of new card breaches at major retailers surfacing practically each week, the underground is flush with inventory. The single most important factor that allows individual card shop owners to differentiate themselves among so much choice is providing excellent customer service.

Many card shops, including 2pac[dot]cc, try to keep customers happy by including an a-la-carte card-checking service that allows customers to test purchased cards using compromised merchant accounts — to verify that the cards are still active. Most card shop checkers are configured to automatically refund to the customer’s balance the value of any cards that come back as declined by the checking service.

This same card checking service also is built into rescator[dot]cc, a card shop profiled several times in this blog and perhaps best known as the source of cards stolen from the Target, Sally Beauty, P.F. Chang’s and Home Depot retail breaches. Shortly after breaking the news about the Target breach, I published a lengthy analysis of forum data that suggested Rescator was a young man based in Odessa, Ukraine.

Turns out, Rescator is a major supplier of stolen cards to other, competing card shops, including swiped1[dot]su — a carding shop that’s been around in various forms since at least 2008. That information came in a report (PDF) released today by Russian computer security firm Group-IB, which said it discovered a secret way to view the administrative statistics for the swiped1[dot]su Web site. Group-IB found that a user named Rescator was by far the single largest supplier of stolen cards to the shop, providing some 5,306,024 cards to the shop over the years.

Group-IB also listed the stats on how many of Rescator’s cards turned out to be useful for cybercriminal customers. Of the more than five million cards Rescator contributed to the shop, only 151,720 (2.8 percent) were sold. Another 421,801 expired before they could be sold. A total of 42,626 of the 151,720 cards sold on Swiped1[dot]su (about 28 percent) came back as declined when run through the site’s checking service.
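Those percentages are easy to reproduce from the raw counts (a quick Python sanity check; the figures are the ones Group-IB reported):

```python
# Group-IB's reported figures for Rescator's supply to swiped1[dot]su
contributed = 5_306_024  # cards Rescator supplied to the shop
sold        = 151_720    # cards actually sold
expired     = 421_801    # cards that expired before sale
declined    = 42_626     # sold cards rejected by the shop's checker

print(f"sold / contributed: {sold / contributed:.1%}")  # roughly 3 percent
print(f"declined / sold:    {declined / sold:.1%}")     # roughly 28 percent
```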

The swiped1[dot]su login page.

Many readers have asked why the thieves responsible for the card breach at Home Depot collected cards from Home Depot customers for five months before selling the cards (on Rescator’s site, of course). After all, stolen credit cards don’t exactly age gracefully and get more valuable over time.

One possible explanation — supported by the swiped1[dot]su data and by my own reporting on this subject — is that veteran fraudsters like Rescator know that only a tiny fraction of stolen cards actually get sold. Based on interviews with several banks that were heavily impacted by the Target breach, for example, I have estimated that although Rescator and his band of thieves managed to steal some 40 million debit and credit card numbers in the Target breach, they likely only sold between one and three million of those cards.

The crooks in the Target breach were able to collect 40 million cards in approximately three weeks, mainly because they pulled the trigger on the heist on or around Black Friday, the busiest shopping day of the year and the official start of the holiday shopping season in the United States. My guess is that Rescator and his associates understood all too well how many cards they needed to steal from Home Depot to realize a certain number of sales and monetary return for the heist, and that they kept collecting cards until they had hit that magic number.

SANS Internet Storm Center, InfoCON: green: POODLE: Turning off SSLv3 for various servers and clients (Wed, Oct 15th)

This post was syndicated from: SANS Internet Storm Center, InfoCON: green and was written by: SANS Internet Storm Center, InfoCON: green. Original post: at SANS Internet Storm Center, InfoCON: green

Before you start: While adjusting your SSL configuration, you should also check for various other SSL related configuration options. A good outline can be found at http://bettercrypto.org as well as at http://ssllabs.com (for web servers in particular)

Here are some configuration directives to turn off SSLv3 support on servers:

Apache: Add -SSLv3 to the SSLProtocol line. It should already contain -SSLv2 unless you list specific protocols.

nginx: list specific allowed protocols in the ssl_protocols line. Make sure SSLv2 and SSLv3 are not listed. For example: ssl_protocols TLSv1 TLSv1.1 TLSv1.2;

Postfix: Disable SSLv3 support in the smtpd_tls_mandatory_protocols configuration line. For example: smtpd_tls_mandatory_protocols=!SSLv2,!SSLv3

Dovecot: similarly, disable SSLv2 and SSLv3 in the ssl_protocols line. For example: ssl_protocols = !SSLv2 !SSLv3

HAProxy Server: the bind configuration line should include no-sslv3 (this line also lists allowed ciphers)

Puppet: see https://github.com/stephenrjohnson/puppetmodule/commit/1adb73f9a400cb5e91c4ece1c6166fd63004f448 for instructions
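Taken together, the directives above translate into configuration lines like the following (an illustrative sketch only; file names and surrounding context vary by distribution and version):

```
# Apache (e.g. ssl.conf)
SSLProtocol all -SSLv2 -SSLv3

# nginx (http or server block)
ssl_protocols TLSv1 TLSv1.1 TLSv1.2;

# Postfix (main.cf)
smtpd_tls_mandatory_protocols = !SSLv2, !SSLv3

# Dovecot (e.g. conf.d/10-ssl.conf)
ssl_protocols = !SSLv2 !SSLv3

# HAProxy (bind line in a frontend)
bind :443 ssl crt /etc/haproxy/cert.pem no-sslv3
```

Remember to reload or restart each service after changing its configuration.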

For clients, turning off SSLv3 can be a bit more tricky, or just impossible.

Google Chrome: you need to start Google Chrome with the --ssl-version-min=tls1 option.

Internet Explorer: You can turn off SSLv3 support in the Advanced tab of the Internet Options dialog.

Firefox: check the security.tls.version.min setting in about:config and set it to 1. Oddly enough, in our testing, the default setting of 0 should allow SSLv3 connections, yet Firefox refused to connect to our SSLv3-only server.

For Microsoft Windows, you can use group policies. For details see Microsoft’s advisory: https://technet.microsoft.com/en-us/library/security/3009008.aspx

To test, continue to use our POODLE Test page at https://poodletest.com or the Qualys SSL Labs page at https://ssllabs.com

To detect the use of SSLv3, you can try the following filters:

tshark/wireshark display filter: ssl.handshake.version == 0x0300

tcpdump filters: (1) accounting for variable TCP header length: tcp[((tcp[12]>>4)*4)+9:2]=0x0300
(2) assuming a 20-byte TCP header: tcp[29:2]=0x0300
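Both filters match the two version bytes at offset 9 of the TCP payload, i.e. the version the client advertises in its ClientHello. The same check on a captured payload can be sketched in Python (client_offers_sslv3 is a hypothetical helper, not part of tshark or tcpdump):

```python
def client_offers_sslv3(payload: bytes) -> bool:
    """Mirror the tcpdump filter tcp[...+9:2] == 0x0300.

    offset 0:    record content type (0x16 = handshake)
    offsets 1-2: record-layer version
    offsets 3-4: record length
    offsets 5-8: handshake type (0x01 = ClientHello) + 3-byte length
    offsets 9-10: client version; 0x03 0x00 means SSLv3
    """
    return len(payload) >= 11 and payload[9:11] == b"\x03\x00"

# A ClientHello offering SSLv3 vs. one offering TLS 1.0:
sslv3_hello = b"\x16\x03\x00\x00\x06\x01\x00\x00\x02\x03\x00"
tls10_hello = b"\x16\x03\x01\x00\x06\x01\x00\x00\x02\x03\x01"
print(client_offers_sslv3(sslv3_hello))  # True
print(client_offers_sslv3(tls10_hello))  # False
```

Note this inspects only the version the client offers; payloads shorter than 11 bytes are ignored.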

We will also have a special webcast at 3pm ET. For details see

https://www.sans.org/webcasts/about-poodle-99032

The webcast will probably last 20-30 minutes and summarize the highlights of what we know so far.


Johannes B. Ullrich, Ph.D.
STI|Twitter|LinkedIn

(c) SANS Internet Storm Center. https://isc.sans.edu Creative Commons Attribution-Noncommercial 3.0 United States License.