Posts tagged ‘Other’

TorrentFreak: Record Labels Obtain Orders to Block 21 Torrent Sites

This post was syndicated from: TorrentFreak and was written by: Andy. Original post: at TorrentFreak

Having ISPs block file-sharing sites is a key anti-piracy strategy employed by major rightsholders in the UK. Both Hollywood-affiliated groups and the recording labels have obtained High Court orders alongside claims that the process is an effective way to hinder piracy.

Last week these rightsholders were joined by luxury brand owner Richemont, which successfully obtained orders to block sites selling counterfeit products. The outcome of that particular case had delayed decisions in other blocking applications, including one put forward by the record labels. Today the High Court ended its hiatus by processing a new injunction.

The application was made by record labels 1967, Dramatico Entertainment, Infectious Music, Liberation Music, Simco Limited, Sony Music and Universal Music. The labels represented themselves plus the BPI (British Recorded Music Industry) and PPL (Phonographic Performance Ltd) which together account for around 99% of all music legally available in the UK today.

Through their legal action the labels hoped to disrupt the activities of sites and services they believe to be enabling and facilitating the unlawful distribution of their copyright works. In this case the key targets were the 21 torrent sites listed below:

(1) bittorrent.am, (2) btdigg.org, (3) btloft.com, (4) bts.to, (5) limetorrents.com, (6) nowtorrents.com, (7) picktorrent.com, (8) seedpeer.me, (9) torlock.com, (10) torrentbit.net, (11) torrentdb.li, (12) torrentdownload.ws, (13) torrentexpress.net, (14) torrentfunk.com, (15) torrentproject.com, (16) torrentroom.com, (17) torrents.net, (18) torrentus.eu, (19) torrentz.cd, (20) torrentzap.com and (21) vitorrent.org.

As usual the UK’s leading Internet service providers – Sky, Virgin, TalkTalk, BT and EE – were named as defendants in the case. The ISPs neither consented to nor opposed the application but participated in order to negotiate the wording of any order granted.

In his ruling Justice Arnold noted that the sites listed in the application function in a broadly similar way to The Pirate Bay and KickassTorrents, sites that are already subjected to blocking orders. Perhaps surprisingly, efforts by some of the sites to cooperate with rightsholders meant little to the Court.

“All of [the sites] go to considerable lengths to facilitate and promote the downloading of torrent files, and hence infringing content, by their users,” Justice Arnold wrote.

“Although a few of the Target Websites pay lipservice to copyright protection, in reality they all flout it. Although a few of the Target Websites claim not to, they all have control over which torrent files they index.”

Also of interest is that the Court didn’t differentiate between sites that allow users to upload torrents, those that store them, or those that simply harvest links to torrents hosted elsewhere.

“Thirteen of the Target Websites (bittorrent.am, btdigg.org, btloft.com, nowtorrents.com, picktorrent.com, torrentdb.li, torrentdownload.ws, torrentexpress.net, torrentproject.com, torrentroom.com, torrentus.eu, torrentz.cd and vitorrent.org) do not permit uploads of torrent files by users, but gather all their links to torrent files using ‘crawling’ technology. No torrent files are stored on these websites’ own servers,” Justice Arnold explained.

“Nevertheless, the way in which the torrent files (or rather the links thereto) are presented, and the underlying technology, is essentially the same as in the cases of the other Target Websites.”

The Judge also touched on the efficacy of website blockades, citing comScore data which suggests that, on average, the number of UK visitors to already blocked BitTorrent sites has declined by 87%.

“No doubt some of these users are using circumvention measures which are not reflected in the comScore data, but for the reasons given elsewhere it seems clear that not all users do this,” Justice Arnold wrote.

Speaking with TF, the BPI said that the 21 sites had been selected for blocking on the basis that they are amongst the most infringing sites available in the UK today. BPI Chief Executive Geoff Taylor said that having them rendered inaccessible would help both the music industry and consumers.

“Illegal sites dupe consumers and deny artists a fair reward for their work. The online black market stifles investment in new British music, holds back the growth of innovative legal services like Spotify and destroys jobs across Britain’s vital creative sector,” Taylor said.

“Sites such as these also commonly distribute viruses, malware and other unsafe or inappropriate content. These blocks will not only make the internet a safer place for music fans, they will help make sure there is more great British music in years to come.”

Finally, and mirroring a decision made in the Richemont case, Justice Arnold said that Internet subscribers affected by the block will be given the ability to apply to the High Court to discharge or vary the orders. Furthermore, when blocked site information pages are viewed by ISP subscribers in future, additional information will have to be displayed including details of the parties who obtained the block.

Source: TorrentFreak, for the latest info on copyright, file-sharing and anonymous VPN services.

TorrentFreak: Anti-Piracy Police PIPCU Secure Govt. Funding Until 2017

This post was syndicated from: TorrentFreak and was written by: Andy. Original post: at TorrentFreak

In a relatively short space of time City of London Police’s Intellectual Property Crime Unit has stamped its mark on the online piracy space in a way few other organizations have managed.

Since its official launch in September 2013 the unit has tackled online copyright infringement from a number of directions including arrests, domain seizures and advertising disruptions. PIPCU has shut down several sports streaming and ebook sites plus a large number of proxies.

In June 2013 when the Department for Business, Innovation & Skills announced the creation of PIPCU, Viscount Younger of Leckie noted that the Intellectual Property Office would provide an initial £2.56 million in funding to the unit over two years.

However, this funding was allocated on a temporary basis and was set to expire in 2015, a situation which prompted the Prime Minister’s former Intellectual Property Advisor Mike Weatherley to call for additional support.

This morning the government confirmed that additional funding will indeed be made available to PIPCU enabling it to operate until at least 2017.

Speaking to the national crime unit at the Anti-Counterfeiting Group Conference in London, Minister for Intellectual Property Baroness Neville-Rolfe said that PIPCU would be boosted by £3 million of funding from the public purse.

“We’ve seen significant success in PIPCU’s first year of operation. This extra support will help the unit to build on this impressive record in the fight against intellectual property crime, which costs the UK at least £1.3 billion a year in lost profits and taxes,” Baroness Neville-Rolfe said.

“With more money now being invested in ideas than factories or machinery in the UK, it is vital that we protect creators and consumers and the UK’s economic growth. Government and industry must work together to give long-term support to PIPCU, so that we can strengthen the UK’s response to the blight of piracy and counterfeiters.”

City of London Police Commander Steve Head, who is the Police National Coordinator for Economic Crime, welcomed the cash injection.

“The government committing to fund the Police Intellectual Property Crime Unit until 2017 is fantastic news for the City of London Police and the creative industries, and very bad news for those that seek to make capital through intellectual property crime,” Head said.

“Since launching a year ago, PIPCU has quickly established itself as an integral part of the national response to a problem that is costing the UK more than a billion pounds a year. Much of this success is down to PIPCU moving away from traditional policing methods and embracing new and innovative tactics, to disrupt and dismantle criminal networks responsible for causing huge damages to legitimate businesses.”

PIPCU, which is closely allied with the Intellectual Property Office (IPO), is a 21-strong team comprised of detectives, investigators, analysts, researchers, an education officer and a communications officer.

The unit also includes two secondees – a Senior Intelligence Officer from the IPO and an Internet Investigator from the BPI. The latter role was previously filled by the BPI’s Mark Rampton but according to his LinkedIn profile he left his position last month. No announcement has been made detailing his replacement.

While PIPCU is definitely leaving its mark, not all operations have gone to plan. In one of its highest-profile actions to date, last month the unit shut down what it described as an illegal and “industrial scale” sports streaming service in Manchester. However, in mid-October all charges were dropped against its alleged operator.

Source: TorrentFreak, for the latest info on copyright, file-sharing and anonymous VPN services.

Matthew Garrett: Linux Container Security

This post was syndicated from: Matthew Garrett and was written by: Matthew Garrett. Original post: at Matthew Garrett

First, read these slides. Done? Good.

Hypervisors present a smaller attack surface than containers. This is somewhat mitigated in containers by using seccomp, selinux and restricting capabilities in order to reduce the number of kernel entry points that untrusted code can touch, but even so there is simply a greater quantity of privileged code available to untrusted apps in a container environment when compared to a hypervisor environment[1].

Does this mean containers provide reduced security? That’s an arguable point. In the event of a new kernel vulnerability, container-based deployments merely need to upgrade the kernel on the host and restart all the containers. Full VMs need to upgrade the kernel in each individual image, which takes longer and may be delayed due to the additional disruption. In the event of a flaw in some remotely accessible code running in your image, an attacker’s ability to cause further damage may be restricted by the existing seccomp and capabilities configuration in a container. They may be able to escalate to a more privileged user in a full VM.

I’m not really compelled by either of these arguments. Both argue that the security of your container is improved, but in almost all cases exploiting these vulnerabilities would require that an attacker already be able to run arbitrary code in your container. Many container deployments are task-specific rather than running a full system, and in that case your attacker is already able to compromise pretty much everything within the container. The argument’s stronger in the Virtual Private Server case, but there you’re trading that off against losing some other security features – sure, you’re deploying seccomp, but you can’t use selinux inside your container, because the policy isn’t per-namespace[2].

So that seems like kind of a wash – there’s maybe marginal increases in practical security for certain kinds of deployment, and perhaps marginal decreases for others. We end up coming back to the attack surface, and it seems inevitable that that’s always going to be larger in container environments. The question is, does it matter? If the larger attack surface still only results in one more vulnerability per thousand years, you probably don’t care. The aim isn’t to get containers to the same level of security as hypervisors, it’s to get them close enough that the difference doesn’t matter.

I don’t think we’re there yet. Searching the kernel for bugs triggered by Trinity shows plenty of cases where the kernel screws up from unprivileged input[3]. A sufficiently strong seccomp policy plus tight restrictions on the ability of a container to touch /proc, /sys and /dev helps a lot here, but it’s not full coverage. The presentation I linked to at the top of this post suggests using the grsec patches – these will tend to mitigate several (but not all) kernel vulnerabilities, but there’s tradeoffs in (a) ease of management (having to build your own kernels) and (b) performance (several of the grsec options reduce performance).
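
To make the seccomp point concrete, here’s a minimal sketch of what shrinking the set of reachable kernel entry points looks like, assuming the libseccomp Python bindings (usually packaged as python3-seccomp). The allowlist is illustrative only – a real interpreter needs more syscalls than this to stay alive, and a real policy needs careful auditing.

# Minimal sketch: reduce the syscall surface before running untrusted code.
# Assumes the libseccomp Python bindings; the allowlist is illustrative,
# not a vetted policy.
import sys
import seccomp

def confine():
    # Default action: kill the process on any syscall not explicitly allowed.
    f = seccomp.SyscallFilter(defaction=seccomp.KILL)
    for name in ("read", "write", "brk", "mmap", "munmap",
                 "futex", "rt_sigreturn", "exit", "exit_group"):
        f.add_rule(seccomp.ALLOW, name)
    f.load()  # from here on, only the allowlisted syscalls succeed

confine()
sys.stdout.write("still alive inside the filter\n")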

But this isn’t intended as a complaint. Or, rather, it is, just not about security. I suspect containers can be made sufficiently secure that the attack surface size doesn’t matter. But who’s going to do that work? As mentioned, modern container deployment tools make use of a number of kernel security features. But there’s been something of a dearth of contributions from the companies who sell container-based services. Meaningful work here would include things like:

  • Strong auditing and aggressive fuzzing of containers under realistic configurations
  • Support for meaningful nesting of Linux Security Modules in namespaces
  • Introspection of container state and (more difficult) the host OS itself in order to identify compromises

These aren’t easy jobs, but they’re important, and I’m hoping that the lack of obvious development in areas like this is merely a symptom of the youth of the technology rather than a lack of meaningful desire to make things better. But until things improve, it’s going to be far too easy to write containers off as a “convenient, cheap, secure: choose two” tradeoff. That’s not a winning strategy.

[1] Companies using hypervisors! Audit your qemu setup to ensure that you’re not providing more emulated hardware than necessary to your guests. If you’re using KVM, ensure that you’re using sVirt (either selinux or apparmor backed) in order to restrict qemu’s privileges.
[2] There’s apparently some support for loading per-namespace Apparmor policies, but that means that the process is no longer confined by the sVirt policy.
[3] To be fair, last time I ran Trinity under Docker under a VM, it ended up killing my host. Glass houses, etc.

TorrentFreak: Photographer Who Sued Imgur Now Has a Pirate Bay Problem

This post was syndicated from: TorrentFreak and was written by: Ernesto. Original post: at TorrentFreak

When it comes to online piracy most attention usually goes to music, TV-shows and movies. However, photos are arguably the most-infringed works online.

Virtually every person on the Internet has shared a photo without obtaining permission from its maker, whether through social networks, blogs or other services.

While most photographers spend little time on combating piracy, Seattle-based artist Christopher Boffoli has taken some of the largest web services to court for aiding these infringements.

Boffoli has filed lawsuits against Twitter, Google and others, which were settled out of court under undisclosed terms. Last month he started a new case against popular image sharing site Imgur after it allegedly ignored his takedown requests.

The photographer asked the court to order an injunction preventing Imgur from making 73 of his photos available online. In addition, he requested millions of dollars in statutory damages for willful copyright infringement.

Imgur has yet to file an official reply to the complaint. In the meantime, however, Boffoli’s actions appear to have triggered another less welcome response.

A few days ago a user of The Pirate Bay decided to upload a rather large archive of the photographer’s work to the site. The archive in question is said to hold 20,754 images, including the most famous “Big Appetites” series.

A torrent with 20,754 images

The image archive, which is more than eight gigabytes in size, had to be partly wrapped in an .iso file, because a .torrent listing all of the files individually would otherwise have been too large.

The description of the archive mentions Boffoli’s recent actions against Imgur, which could have triggered the upload. One of the commenters points out that the Imgur lawsuit may have done more harm than good, and a new Internet meme was born.

“Sued for 73 images, got 20,754 uploaded to TPB, LOL. About the Big Appetites series, if I ever get my hands on a copy, I’ll scan it at 600 dpi and upload it here, have fun trying to censor the internet, Boffoli,” the commenter notes.

TorrentFreak asked Boffoli for a comment on the leak and whether he will take steps to prevent the distribution, but we have yet to hear back.

While not everyone may agree with the lawsuit against Imgur, piracy can impact photographers quite a bit. It’s usually not the average Pirate Bay user that’s causing the damage though, but rather companies that use professional photos commercially without a license.

Source: TorrentFreak, for the latest info on copyright, file-sharing and anonymous VPN services.

TorrentFreak: U.S. Government Shuts Down Music Sharing Sites

This post was syndicated from: TorrentFreak and was written by: Ernesto. Original post: at TorrentFreak

During the spring of 2010 U.S. authorities started a campaign to take copyright-infringing websites offline.

Since then Operation in Our Sites has resulted in thousands of domain name seizures and several arrests. While most of the sites are linked to counterfeit goods, dozens of “pirate” sites have also been targeted.

After a period of relative calm the authorities appear to have restarted their efforts with the takedown of two large music sites. RockDizFile.com and RockDizMusic.com, which are connected, now display familiar banners in which ICE takes credit for their demise.

“This domain has been seized by ICE – Homeland Security Investigations, pursuant to a seizure warrant issued by a United States District Court under the authority of 18 U.S.C. §§ 981 and 2323,” the banner reads.

TorrentFreak contacted ICE yesterday for a comment on the recent activity but we have yet to receive a response.

The domain names are now pointing to the same IP-address where many of the previously seized websites, such as torrent-finder.com and channelsurfing.net, are directed. Both domain names previously used Cloudflare and had their NS entries updated earlier this week.

Despite the apparent trouble, RockDizFile.com and RockDizMusic.com’s Twitter and Facebook pages have remained silent for days.

RockDizMusic presented itself as an index of popular new music. Artists were encouraged to use the site to promote their work, but the site also featured music being shared without permission, including pre-release tracks.

RockDizMusic.com

RockDizFile used a more classic file-hosting look, but with a 50MB limit it was mostly used for music. The site offered premium accounts to add storage space and remove filesize and bandwidth limitations.

RockDizFile.com

Both websites appear to have a strong focus on rap and hip-hop music. This is in line with previous ICE seizures which targeted RapGodFathers.com, RMX4U.com, OnSmash.com and Dajaz1.com.

The latter was seized by mistake. The record labels failed to deliver proof of alleged infringements to the authorities and after a long appeal the domain was eventually returned to its owners.

This incident and the general lack of due process in ICE’s domain seizures have led to criticism from lawmakers and legal scholars. The authorities are nevertheless determined to keep Operation in Our Sites going.

“Operation In Our Sites’ enforcement actions involve federal law enforcement investigating and developing evidence to obtain seizure warrants from federal judges,” ICE states on its website.

Once a credible lead comes in ICE says it “will work with the U.S. Department of Justice to prosecute, convict, and punish individuals as well as seize website domain names, profits, and other property from IP thieves.”

At this point it’s unclear whether ICE has targeted any of the individuals connected to RockDizFile.com and RockDizMusic.com or whether the unit has taken down any other sites in a similar fashion.

Source: TorrentFreak, for the latest info on copyright, file-sharing and anonymous VPN services.

Delian's Tech blog: My perl-cwmp patches are merged

This post was syndicated from: Delian's Tech blog and was written by: Delian Delchev. Original post: at Delian's Tech blog

Hello,
I’ve used perl-cwmp here and there. It is a nice, really small, really light and simple TR-069 ACS, with a very easy install and no heavy requirements. You can read the whole code in a few minutes and make your own modifications. I am using it in a lot of small “special” cases, where you need something fast and specific, or a very complex workflow that cannot be implemented by any other ACS server.
However, this project had been stalled for a while, and I’ve found that a lot of modern TR-069/CWMP agents do not work well with perl-cwmp.
There are quite a few reasons behind those problems:
- Some of the agents are very strict – they expect the SOAP message to be formatted in a specific way, not the way perl-cwmp does it
- Some of the agents are compiled with a not-so-smart, static expansion of the CWMP XSD file. That means they expect a strict property type specification in the SOAP message, and strict ordering
perl-cwmp does not “compile” the CWMP XSD and does not send strict requests, nor does it interpret the responses strictly. It does not automatically set the correct property type in the request according to the spec, because it never reads the spec – it always assumes that the property type is a string.
To allow perl-cwmp to be fixed and adjusted to work with those types of TR-069 agents I’ve made a few modifications to the code, and I am happy to announce they have been accepted and merged into the main code:
The first modification is that I’ve updated the SOAP header according to the current standard. It was set incorrectly, and many TR-069 devices I have tested (basically all that work with the Broadcom TR-069 client) rejected the request.
The second modification is that every property may now have a specified type. Unless you specify a type, it is always assumed to be a string. This allows the ACS to set property values on agents that do a strict type check.
InternetGatewayDevice.ManagementServer.PeriodicInformInterval: #xsd:unsignedInt#60
The #…# markers specify the type of the property. In the example above, we are setting PeriodicInformInterval to the unsignedInt value 60.
You can also set the value of a property by reading the value of another property.
For that you can use ${ property name }
Here is an example of how to set the PPP password to the value of the serial number:
InternetGatewayDevice.WANDevice.1.WANConnectionDevice.1.WANPPPConnection.1.Password: ${InternetGatewayDevice.DeviceInfo.SerialNumber}
And last but not least – you can now execute a small piece of code, or an external script, and set the value of a property to the output of that code. You can do that with $[ code ]
Here is an example of how to set a random value for the PeriodicInformInterval:

InternetGatewayDevice.ManagementServer.PeriodicInformInterval: #xsd:unsignedInt#$[60 + int(rand(100))]

Here is another example, which executes an external script that can make this decision:
InternetGatewayDevice.ManagementServer.PeriodicInformInterval: #xsd:unsignedInt#$[ `./externalscript.sh ${InternetGatewayDevice.LANDevice.1.LANEthernetInterfaceConfig.1.MACAddress} ${InternetGatewayDevice.DeviceInfo.SerialNumber}` ]
The last modification I’ve made allows perl-cwmp to “fork” a new process when a TR-069 request arrives. The code used to be single threaded, which meant each agent had to wait until the previous task was completed. If the TCP listening queue was full, or the ACS very busy, some of the agents would assume there was no response and time out – and you might then have to wait 24 hours (the default periodic inform interval for some vendors) until the next request. Now that can be avoided.
All this is very valuable for dynamic and automated configuration, with no modification of the core code – just the configuration file.
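
For readers unfamiliar with the pattern, below is a generic fork-on-accept sketch – in Python purely for illustration (perl-cwmp itself is Perl, and this is not the project’s code). Each session is handled in a child process, so a slow agent no longer blocks the listen queue; host, port and the handler are placeholders.

# Generic fork-on-accept sketch; host, port and handler are placeholders.
import os
import signal
import socket

def handle_session(conn):
    conn.recv(65536)  # stand-in for the real CWMP/SOAP exchange
    conn.sendall(b"HTTP/1.1 204 No Content\r\n\r\n")

signal.signal(signal.SIGCHLD, signal.SIG_IGN)  # auto-reap exited children
srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
srv.bind(("0.0.0.0", 8080))
srv.listen(16)
while True:
    conn, _addr = srv.accept()
    if os.fork() == 0:   # child: serve this one agent, then exit
        srv.close()
        handle_session(conn)
        conn.close()
        os._exit(0)
    conn.close()         # parent: straight back to accept()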

Krebs on Security: Google Accounts Now Support Security Keys

This post was syndicated from: Krebs on Security and was written by: BrianKrebs. Original post: at Krebs on Security

People who use Gmail and other Google services now have an extra layer of security available when logging into Google accounts. The company today incorporated into these services the open Universal 2nd Factor (U2F) standard, a physical USB-based second factor sign-in component that only works after verifying the login site is truly a Google site.

A $17 U2F device made by Yubikey.

The U2F standard (PDF) is a product of the FIDO (Fast IDentity Online) Alliance, an industry consortium that’s been working to come up with specifications that support a range of more robust authentication technologies, including biometric identifiers and USB security tokens.

The approach announced by Google today essentially offers a more secure way of using the company’s 2-step authentication process. For several years, Google has offered an approach that it calls “2-step verification,” which sends a one-time pass code to the user’s mobile or land line phone.

2-step verification makes it so that even if thieves manage to steal your password, they still need access to your mobile or land line phone if they’re trying to log in with your credentials from a device that Google has not previously seen associated with your account. As Google notes in a support document, security key “offers better protection against this kind of attack, because it uses cryptography instead of verification codes and automatically works only with the website it’s supposed to work with.”
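
The difference is easy to see in miniature. Below is a toy sketch (Python, using the pyca/cryptography package) of the origin-binding idea: the key signs the origin it was registered for together with a fresh challenge, so an assertion phished through some other origin fails verification. This is a simplification for illustration, not the actual U2F wire protocol.

# Toy illustration of origin-bound signatures, not the real U2F protocol.
import os
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec

# Registration: the token mints a key pair tied to one origin.
key = ec.generate_private_key(ec.SECP256R1())
origin = b"https://accounts.google.com"

# Login: the site sends a fresh challenge; the token signs origin+challenge.
challenge = os.urandom(32)
assertion = key.sign(origin + challenge, ec.ECDSA(hashes.SHA256()))

def verify(pub, expected_origin, challenge, sig):
    try:
        pub.verify(sig, expected_origin + challenge, ec.ECDSA(hashes.SHA256()))
        return True
    except InvalidSignature:
        return False

print(verify(key.public_key(), origin, challenge, assertion))                    # True
print(verify(key.public_key(), b"https://phish.example", challenge, assertion))  # False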

Unlike a one-time token approach, the security key does not rely on mobile phones (so no batteries needed), but the downside is that it doesn’t work for mobile-only users because it requires a USB port. Also, the security key doesn’t work for Google properties on anything other than Chrome.

The move comes a day after Apple launched its Apple Pay platform, a wireless payment system that takes advantage of the near-field communication (NFC) technology built into the new iPhone 6, which allows users to pay for stuff at participating merchants merely by tapping the phone on the store’s payment terminal.

I find it remarkable that Google, Apple and other major tech companies continue to offer more secure and robust authentication options than are currently available to consumers by their financial institutions. I, for one, will be glad to see Apple, Google or any other legitimate player give the entire mag-stripe based payment infrastructure a run for its money. They could hardly do worse.

Soon enough, government Web sites may also offer consumers more authentication options than many financial sites.  An Executive Order announced last Friday by The White House requires the National Security Council Staff, the Office of Science and Technology Policy and the Office of Management and Budget (OMB) to submit a plan to ensure that all agencies making personal data accessible to citizens through digital applications implement multiple layers of identity assurance, including multi-factor authentication. Verizon Enterprise has a good post with additional details of this announcement.

TorrentFreak: Australians Face ‘Fines’ For Downloading Pirate Movies

This post was syndicated from: TorrentFreak and was written by: Andy. Original post: at TorrentFreak

Much to the disappointment of owner Voltage Pictures, in early January 2013 a restricted ‘DVD Screener’ copy of the hit movie Dallas Buyers Club leaked online. The movie was quickly downloaded by tens of thousands but barely a month later, Voltage was plotting revenge.

In a lawsuit filed in the Southern District of Texas, Voltage sought to identify illegal downloaders of the movie by providing the IP addresses of Internet subscribers to the court. Their aim – to scare those individuals into making cash settlements to make supposed lawsuits disappear.

Now, in the most significant development of the ‘trolling’ model in recent times, Dallas Buyers Club LLC are trying to expand their project into Australia. Interestingly the studio has chosen to take on subscribers of the one ISP that was absolutely guaranteed to put up a fight.

iiNet is Australia’s second largest ISP and the country’s leading expert when it comes to fighting off aggressive rightsholders. In 2012 the ISP defeated Hollywood in one of the longest piracy battles ever seen and the company says it will defend its subscribers in this case too.

Chief Regulatory Officer Steve Dalby says that Dallas Buyers Club LLC (DBCLLC) recently applied to the Federal Court to have iiNet and other local ISPs reveal the identities of people they say have downloaded and/or shared their movie without permission.

According to court documents seen by TorrentFreak the other ISPs involved are Wideband Networks Pty Ltd, Internode Pty Ltd, Dodo Services Pty Ltd, Amnet Broadband Pty Ltd and Adam Internet Pty Ltd.

Although the stance of the other ISPs hasn’t yet been made public, DBCLLC aren’t going to get an easy ride. iiNet (which also owns Internode and Adam) says it will oppose the application for discovery.

“iiNet would never disclose customer details to a third party, such as a movie studio, unless ordered to do so by a court. We take seriously both our customers’ privacy and our legal obligations,” Dalby says.

While underlining that the company does not condone copyright infringement, news of Dallas Buyers Club / Voltage Pictures’ modus operandi has evidently reached iiNet, and the ISP is ready for them.

“It might seem reasonable for a movie studio to ask us for the identity of those they suspect are infringing their copyright. Yet, this would only make sense if the movie studio intended to use this information fairly, including to allow the alleged infringer their day in court, in order to argue their case,” Dalby says.

“In this case, we have serious concerns about Dallas Buyers Club’s intentions. We are concerned that our customers will be unfairly targeted to settle any claims out of court using a practice called ‘speculative invoicing’.”

The term ‘speculative invoicing’ was coined in the UK in response to the activities of companies including the now defunct ACS:Law, which involved extracting cash settlements from alleged infringers (via mailed ‘invoices’) and deterring them from having their say in court. Once the scheme was opened up to legal scrutiny it completely fell apart.

Some of the flaws found to exist in both UK and US ‘troll’ cases are cited by iiNet, including intimidation of subscribers via excessive claims for damages. The ISP also details the limitations of IP address-based evidence when it comes to identifying infringers due to shared household connections and open wifi scenarios.

“Because Australian courts have not tested these cases, any threat by rights holders, premised on the outcome of a successful copyright infringement action, would be speculative,” Dalby adds.

The Chief Regulatory Officer says that since iiNet has opposed the action for discovery the Federal Court will now be asked to decide whether iiNet should hand over subscriber identities to DBCLLC. A hearing on that matter is expected early next year and it will be an important event.

While a win for iiNet would mean a setback for rightsholders plotting similar action, victory for DBCLLC will almost certainly lead to others following in their footsteps. For an idea of what Australians could face in this latter scenario, in the United States the company demands payment of up to US$7,000 (AUS$8,000) per infringement.

Source: TorrentFreak, for the latest info on copyright, file-sharing and anonymous VPN services.

TorrentFreak: Retired Scene Groups Return to Honor Fallen Member

This post was syndicated from: TorrentFreak and was written by: Ernesto. Original post: at TorrentFreak

To many people the Warez Scene is something mythical, or at least hard to comprehend: a group of people at the top of the piracy pyramid.

The Scene is known for its aversion to public file-sharing, but nonetheless it’s in large part responsible for much of the material out there today.

The goal of most Scene groups is to be the first to release a certain title, whether that’s a film, music or software. While there is some healthy competition, The Scene is also a place where lifelong friendships are started.

A few days ago, on October 17, the Scene lost Goolum, a well-respected member and friend. Only in his late thirties, he passed away after being part of the Scene for more than a decade.

As a cracker, Goolum, also known as GLM, was one of the more experienced reverse engineers and worked on numerous releases.

Through the years Goolum was connected to several groups which are now retired, some for more than a decade. To honor their fallen friend, the groups ZENiTH, Lz0, SLT and MiDNiGHT have made a one-time comeback.

Below is an overview of their farewell messages, which honor him for his cracking skills but most of all as a friend. Our thoughts go out to Goolum’s friends and family.

ZENiTH: THUNDERHEAD.ENGINEERING.PYROSIM.V2014.2.RIP.GOOLUM-ZENiTH (NFO)

ZENiTH, a group that retired around 2005, mentions Goolum’s loyalty and his love for his daughter.

“Goolum has been in and around the scene since the Amiga days but had never been a guy to jump from group to group, but stayed loyal and dedicated to the few groups he was involved in.”

“We are all proud to have been in a group with you, to have spent many a long night sharing knowledge about everything, learning about your daughter who you where very proud of, and all the projects you were involved in.”

ZENiTH’s in memoriam

Lz0: CEI.Inc.EnSight.Gold.v10.1.1b.Incl.Keygen.RIP.GOOLUM-Lz0 (NFO)

Lz0, or LineZer0, split from the Scene last year but many of its members are still actively involved in other roles. The group mentions the hard time Goolum had due to drug problems. Lz0 also highlights Goolum’s love for his daughter, and how proud he was of her.

“We all knew that he struggled in life – not just economical but also on a personal level and not the least with his drug issues. One of the things that kept him going was his wonderful daughter whom he cherished a lot. He often talked about her, and how proud of her he was. He was clear that if there was one thing in life he was proud of – it was that he became the dad of a wonderful girl.”

“We’re shocked that when finally things started to move in the right direction, that we would receive the news about his death. It came without warning and we can only imagine the shock of his family. It’s hard to find the right words – or words for that matter. Even though it might have appeared as that he was lonely – with few friends, he knew that we were just a keyboard away.”

Lz0’s in memoriam

SLT: PROTEUS.ENGINEERING.FASTSHIP.V6.1.30.1.RIP.GOOLUM-SLT (NFO)

SLT or SOLiTUDE has been retired since 2000 but returns to remember Goolum. The group notes that he will be dearly missed.

“You will be missed. It is not easy to say goodbye to someone who you have known for over a decade, trading banter, laughs, advice and stories. You leave behind a daughter, a family and a group of friends, who will miss you dearly.”

“As the news have spread, the kind words have poured in. Solitude is releasing this in honor of you, to show that the values we founded the group on is the exact values you demonstrated through your decades of being in the scene. Loyalty, friendship and hard work. Our thoughts are with you, wherever you may be.”

SLT’s in memoriam

MiDNiGHT: POINTWISE_V17.2.R2_RIP_GOOLUM-MIDNIGHT (NFO)

MiDNiGHT hasn’t been active for nearly a decade but has also honored Goolum with a comeback. The group mentions that he was a great friend who was always up for a chat and a beer.

“Life won’t ever be the same again my friend. We could sit and chat for hours and hours, and even then we knew each other well enough that nothing more was required than a beer, a rant and a small *yarr* and we’d know it would all be good.”

“This time it’s not good mate. I am here, you are not. I can’t even begin to express how this makes me feel – except an absolute sadness.”

MiDNiGHT’s in memoriam

RIP Goolum 1977 – 2014

Source: TorrentFreak, for the latest info on copyright, file-sharing and anonymous VPN services.

SANS Internet Storm Center, InfoCON: green: CSAM Month of False Positives: Ghosts in the Pentest Report, (Tue, Oct 21st)

This post was syndicated from: SANS Internet Storm Center, InfoCON: green and was written by: SANS Internet Storm Center, InfoCON: green. Original post: at SANS Internet Storm Center, InfoCON: green

As part of most vulnerability assessments and penetration tests against a website, we almost always run some kind of scanner. Burp (commercial) and ZAP (free from OWASP) are two commonly used scanners. Once you’ve done a few website assessments, you start to get a feel for which pages and fields are likely candidates for exploit. But especially if it’s a vulnerability assessment, where you’re trying to cover as many issues as possible (and exploits might even be out of scope), it’s always a safe bet to run a scanner to see what other issues might be in play.

All too often, we see people take these results as-is, and submit them as the actual report. The HUGE problem with this is false positives and false negatives.

False negatives are issues that are real but are not found by your scanner. For instance, Burp and ZAP aren’t the best tools for pointing a big red arrow at software version issues – for instance, vulnerable versions of WordPress or WordPress plugins. You might want to use WPSCAN for something like that. Or, if you go to the login page, a view-source will often give you what you need.

Issues with certificates will also go unnoticed by a dedicated web scanner – NIKTO or WIKTO are good choices for that. Or better yet, you can use openssl to pull the raw cert, or just view it in your browser.
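
If you’d rather script that check, Python’s standard library is enough; a quick sketch (the hostname is a placeholder):

# Pull and inspect a server certificate with the standard library only.
# A verification failure here is itself a finding worth noting.
import socket
import ssl

host = "www.example.com"  # placeholder
ctx = ssl.create_default_context()
with socket.create_connection((host, 443), timeout=10) as sock:
    with ctx.wrap_socket(sock, server_hostname=host) as tls:
        cert = tls.getpeercert()
print("subject:", cert["subject"])
print("issuer: ", cert["issuer"])
print("expires:", cert["notAfter"])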

(If you’re noticing that much of what the cool tools do is possible with some judicious use of your browser, that’s exactly what I’m pointing out!)

NMAP is another great tool for catching what a web scanner might miss. For instance, if you’ve got a Struts admin page or a hypervisor login on the same IP as your target website, but on a different port than the website, NMAP is the go-to tool. Similarly, lots of basic site assessment can be done with NMAP’s --version parameters, and the NSE scripts bundled with NMAP are a treasure trove as well! (Check out Manuel’s excellent series on NMAP scripts.)
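
That step scripts easily too; a sketch that shells out to NMAP with service detection plus a couple of the bundled NSE scripts (target and port list are placeholders):

# Shell out to NMAP: -sV for service/version detection, plus two bundled
# NSE scripts. Target and port list are placeholders.
import subprocess

target = "target.example"
cmd = ["nmap", "-sV", "-p", "80,443,8080,8443",
       "--script", "http-title,ssl-cert", target]
result = subprocess.run(cmd, capture_output=True, text=True)
print(result.stdout)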

False positives are just as bad – cases where the tool indicates a vulnerability where there is none. If you include blatant false positives in your report, you’ll find that the entire report ends up in the trash can, along with your reputation with that client! Two false positives that I commonly see are SQL Injection and OS Command Injection.

SQL Injection is a vulnerability where, from the web interface, you can interact with and get information from a SQL database that’s behind the website, often dumping entire tables.

Website assessment tools (Burp in this case, but many other tools use similar methods) commonly test for SQL Injection by injecting a SQL “waitfor delay ‘0:0:20’” command. If this takes significantly longer to complete than the basic statement, Burp will mark this as “Firm” for certainty. Needless to say, I often see this turn up as a false positive. What you’ll find is that Burp generally runs multiple threads (10 by default) during a scan, so it can really run up the CPU on a website, especially if the site is mainly parametric (where pages are generated on the fly from database input during a session). Also, if a site’s error-handling routines take longer than they should, you’ll see this test get thrown off.

So, how should we test to verify this initial/preliminary finding? First of all, Burp’s test isn’t half bad on a lot of sites. Testing Burp’s injection with curl or a browser after the scanning is complete will sometimes show that the SQL injection is real. Test multiple times, so that you can show consistent and appropriate delays for values of 10, 30, 60 and 120 seconds.
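
One way to make that re-test repeatable is a short timing script, run after the scan when the site is quiet. URL, parameter and payload below are placeholders – adapt the injection to whatever the scanner reported – and note the waitfor syntax is SQL Server’s:

# Re-test a time-based SQLi finding; URL, parameter and payload are
# placeholders to adapt to the actual finding. Requires the requests package.
import time
import requests

url = "https://target.example/search"
for seconds in (10, 30, 60, 120):
    m, s = divmod(seconds, 60)  # waitfor wants h:m:s with seconds < 60
    payload = {"q": f"x'; waitfor delay '0:{m}:{s}'--"}
    t0 = time.monotonic()
    requests.get(url, params=payload, timeout=seconds + 60)
    print(f"injected {seconds:>3}s -> observed {time.monotonic() - t0:6.1f}s")
# Delays that track the injected values support the finding;
# flat timings point to a false positive.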

If that fails – for instance if they all delay 10 seconds, or show no appreciable delay at all – don’t despair. SQLMAP tests much more thoroughly, and should be part of your toolkit anyway – try that. Or test manually – after a few websites you’ll find that testing manually might be quicker than an exhaustive SQLMAP run (though maybe not as thorough).

If you use multiple methods (and there are a lot of different methods) and still can’t verify that SQL injection is in play after that initial scan’s finding, quite often this has to go into the false positives section of your report.

OS Command Injection – where you can execute unauthorized operating system commands from the web interface – is another common false positive, and for much the same reason. For this vulnerability, the scanner will often inject “ping -c 20 127.0.0.1” or “ping -n 20 127.0.0.1” – in other words, the injected command tells the webserver to ping itself, in this case 20 times. In most operating systems this creates a delay of 20 seconds. As in the SQL injection example, you’ll find that tests that depend on a predictable delay often get thrown off if they are executed during a busy scan. Running them after the scan (again, using your browser or curl) is often all you need to do to prove these findings false. Testing other commands, such as pinging or opening an ftp session to a test host on the internet (one that is monitoring for such traffic using tcpdump or syslog) is another good sober-second-thought test, but be aware that if the website you are testing has an egress filter applied to its traffic, a successful injection might not generate the traffic you are hoping for – it’ll be blocked at the firewall. If you have out-of-band access to the site being assessed, creating a test file is another good test.
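
For the out-of-band variant, a listener as simple as the one below on a test host you control will do: inject something that connects back (curl or ftp to your host, say) and watch for the hit, keeping the egress-filter caveat above in mind.

# Tiny catch-all listener for out-of-band injection tests. Run on a host
# you control; the port is arbitrary. ICMP (ping) tests still need tcpdump.
import socket

PORT = 8053
srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
srv.bind(("0.0.0.0", PORT))
srv.listen(5)
print(f"listening on tcp/{PORT} ...")
while True:
    conn, (ip, src_port) = srv.accept()
    print(f"callback from {ip}:{src_port}")  # any hit implicates the injection
    conn.close()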

Other tests can similarly produce false positives. For instance, any test that relies only on service banner grabs can be thrown off easily – either by admins putting a false banner in place, or by site updates that upgrade packages and services but don’t change the initially installed banner.

Long story short, never never never (never) believe the initial finding that your scanning tool gives you. All of the tools discussed are good tools – they should all be in your toolbox and in many cases should be at the top of your go-to list. Whether the tool is open source or closed, free or very expensive, they will all give you false positives, and every finding needs to be verified as either a true or a false positive. In fact, you might not want to believe the results from your second tool either, especially if it tests the same way. Whenever you can, go back to first principles and verify manually. Or, if it’s in scope, verify with an actual exploit – there’s nothing better than getting a shell to prove that you can get a shell!

For false negatives, you’ll also want multiple tools and some good manual tests in your arsenal – if one tool misses a vulnerability, you may find that many or all of your tools test for that issue the same way. Often the best way to catch a false negative is to know how the target service runs, and how to test for that specific issue manually. If you are new to assessments and penetration tests, false negatives will be much harder to find, and really, no matter how good you are, you’ll never know if you got all of them.

If you need to discuss false positives and negatives with a non-technical audience, reaching for non-technical tools is a good way to make the point. A hammer is a great tool, but while screws are similar to nails, a hammer isn’t always the best way to deal with them.

Please use our comment form to tell us about false positives or false negatives that you’ve found in vulnerability assessments or penetration tests. Keep in mind that these usually aren’t an indicator of a bad tool – they’re usually just a case of needing a proper parallax view to get a better look at the situation.

===============
Rob VandenBrink
Metafore

(c) SANS Internet Storm Center. https://isc.sans.edu Creative Commons Attribution-Noncommercial 3.0 United States License.

Linux How-Tos and Linux Tutorials: What is a Good Command-Line Calculator on Linux

This post was syndicated from: Linux How-Tos and Linux Tutorials and was written by: Linux How-Tos and Linux Tutorials. Original post: at Linux How-Tos and Linux Tutorials

Every modern Linux desktop distribution comes with a default GUI-based calculator app. On the other hand, if your workspace is full of terminal windows, and you would rather crunch some numbers within one of those terminals quickly, you are probably looking for a command-line calculator. In this category, GNU bc (short for “basic calculator”) is […]

Read more at Xmodulo

LWN.net: Debian Project mourns the loss of Peter Miller

This post was syndicated from: LWN.net and was written by: ris. Original post: at LWN.net

The Debian Project recently learned that community member Peter Miller died
last July. “Peter was a relative newcomer to the Debian project, but his
contributions to Free and Open Source Software go back to the late 1980s.
Peter was a significant contributor to GNU gettext as well as being the main
upstream author and maintainer of other projects that ship as part of Debian,
including, but not limited to, srecord, aegis and cook. Peter was also the
author of the paper ‘Recursive Make Considered Harmful’.”

Krebs on Security: Banks: Credit Card Breach at Staples Stores

This post was syndicated from: Krebs on Security and was written by: BrianKrebs. Original post: at Krebs on Security

Multiple banks say they have identified a pattern of credit and debit card fraud suggesting that several Staples Inc. office supply locations in the Northeastern United States are currently dealing with a data breach. Staples says it is investigating “a potential issue” and has contacted law enforcement.

According to more than a half-dozen sources at banks operating on the East Coast, it appears likely that fraudsters have succeeded in stealing customer card data from some subset of Staples locations, including seven Staples stores in Pennsylvania, at least three in New York City, and another in New Jersey.

Framingham, Mass.-based Staples has more than 1,800 stores nationwide, but so far the banks contacted by this reporter have traced a pattern of fraudulent transactions on a group of cards that had all previously been used at a small number of Staples locations in the Northeast.

The fraudulent charges occurred at other (non-Staples) businesses, such as supermarkets and other big-box retailers. This suggests that the cash registers in at least some Staples locations may have fallen victim to card-stealing malware that lets thieves create counterfeit copies of cards that customers swipe at compromised payment terminals.

Asked about the banks’ claims, Staples’s Senior Public Relations Manager Mark Cautela confirmed that Staples is in the process of investigating a “potential issue involving credit card data” and has contacted law enforcement.

“We take the protection of customer information very seriously, and are working to resolve the situation,” Cautela said. “If Staples discovers an issue, it is important to note that customers are not responsible for any fraudulent activity on their credit cards that is reported on a timely basis.”  

LWN.net: The FSF opens nominations for the 17th annual Free Software Awards

This post was syndicated from: LWN.net and was written by: ris. Original post: at LWN.net

The Free Software Foundation (FSF) and the GNU Project have announced the
opening of nominations for the 17th annual Free Software Awards. The
Free Software Awards include the Award for the Advancement of Free
Software and the Award for Projects of Social Benefit. “In the case of both awards, previous winners are not eligible for
nomination, but renomination of other previous nominees is encouraged.
Only individuals are eligible for nomination for the Advancement of
Free Software Award (not projects), and only projects can be nominated
for the Social Benefit Award (not individuals). For a list of previous
winners, please visit https://www.fsf.org/awards.”

TorrentFreak: 4shared Demands Retraction Over Misleading Piracy Report

This post was syndicated from: TorrentFreak and was written by: Ernesto. Original post: at TorrentFreak

Last month the Digital Citizens Alliance and NetNames released a new report with the aim of exposing the business models and profitability of “rogue” file-storage sites.

The report, titled Behind The Cyberlocker Door: A Report on How Shadowy Cyberlockers Use Credit Card Companies to Make Millions, is being used as ammunition for copyright holders to pressure credit card companies and advertisers to cut ties with the listed sites.

While some of the sites mentioned are indeed of a dubious nature, the report lacks nuance. The “shadowy” label certainly doesn’t apply to all. Mega, for example, was quick to point out that the report is “grossly untrue and highly defamatory.” The company has demanded a public apology.

4shared, the most visited site in the report with over 50 million unique visitors per month, is now making similar claims. According to 4shared’s Mike Wilson the company has put its legal team on the case.

“We decided to take action and demand a public retraction of the information regarding 4shared’s revenues and business model as published in the report. Our legal team is already working on the respective notes to Digital Citizens Alliance and Netnames,” Wilson tells TorrentFreak.

The report estimates that 4shared, the largest file-hosting service on the list, grosses $17.6 million per year. However, 4shared argues that many of the report’s assumptions are wrong and based on a distorted view of the company’s business model.

“Revenue volumes in this report are absolutely random. For instance, 4shared’s actual revenue from premium subscription sales is approximately 20 times smaller than is shown in the document,” Wilson says.

4shared explains that its premium users are mostly interested in storing their files safely and securely. In addition, the company notes that it doesn’t have any affiliate programs or other encouragements for uploading or downloading files.

Contrary to what the report claims, 4shared stresses that it is not set up as a service that aims to profit from copyright infringement, although it admits that such infringement does take place.

To deal with this unauthorized use the file-hosting service has a DMCA takedown policy in place. In addition, some of the most trusted rightsholder representatives have direct access to the site, where they can delete files without sending a takedown notice.

This works well and the overall takedown volume is relatively low. Together, the site’s users store a billion files and in an average month 4shared receives takedown notices for 0.05% of these files.

In addition to their takedown procedure 4shared also scans publicly shared music files for copyright-infringing content. This Music ID system, custom-built by the company, scans for pirated music files based on a unique audio watermark and automatically removes them.

Despite these efforts 4shared was included in the “shadowy cyberlocker” report where it’s branded a rogue and criminal operation. Whether the company’s legal team will be able to set the record straight remains to be seen.

Netnames and Digital Citizens have thus far declined to remove Mega from the report as the company previously demanded. Mega informs TorrentFreak that a defamation lawsuit remains an option and that they are now considering what steps to take next.

Source: TorrentFreak, for the latest info on copyright, file-sharing and anonymous VPN services.

TorrentFreak: Kim Dotcom Must Reveal Everything He Owns to Hollywood

This post was syndicated from: TorrentFreak and was written by: Andy. Original post: at TorrentFreak

Kim Dotcom has been associated with many things over the years, but one enduring theme has been wealth – and lots of it.

Even in the wake of the now-infamous raid on his New Zealand mansion and the seizure of millions in assets, somehow Dotcom has managed to rake in millions. Or did he also have some stashed away?

It’s an important matter for Hollywood. The businessman’s continued lavish lifestyle diminishes the financial pot from where any payout will be made should they prevail in their copyright infringement battles against the Megaupload founder.

The studios’ concerns were previously addressed by Judge Courtney, who had already ordered Dotcom to disclose to the Court the details of his worldwide assets. The entrepreneur filed an appeal but that hearing would take place in October, a date beyond the already-ordered disclosure date.

Dotcom took his case to the Court of Appeal in the hope of staying the disclosure order, but in August that failed.

Dotcom complied with the ruling and subsequently produced an affidavit. However, he asked the Court of Appeal to overturn the decision of the High Court in order to keep the document a secret from the studios. That bid has now failed.

Following a ruling handed down this morning by the New Zealand Court of Appeal, Dotcom’s financial information will soon be in the hands of adversaries Twentieth Century Fox, Disney, Paramount, Universal and Warner Bros.

Court of Appeal Judges John Wild, Rhys Harrison and Christine French ordered the affidavit to be released to the studios on the basis that the information could only be used in legal proceedings concerning the restraining of Dotcom’s assets. And with a confidentiality clause attached to the affidavit, the public will not gain access to the information.

Another setback for Dotcom came in respect of who pays the bill for proceedings. The Megaupload founder’s attempt at avoiding costs was turned down after the judges found that having already supplied the affidavit as required, Dotcom’s appeal was not likely to succeed.

And there was more bad news for Dotcom in a separate High Court ruling handed down in New Zealand today. It concerns the extradition cases against not only him but also former Megaupload associates Finn Batato, Mathias Ortmann and Bram Van Der Kolk.

The theory put forward by Dotcom is that the United States and New Zealand governments had politically engineered his downfall in order to extradite him to the U.S. To gather evidence showing how that happened, Dotcom and the other respondents made a pair of applications to the extradition court (the District Court) requesting that it make discovery orders against various New Zealand government agencies, ministers and departments.

The District Court declined so the respondents sought a judicial review of that decision claiming that the Court acted unfairly and erred in law. In today’s ruling, Justice Simon France said there was no “air of reality” that political interference had been involved in Dotcom’s extradition case.

“It is, as the District Court held, all supposition and the drawing of links without a basis,” the Judge wrote.

“Nothing suggests involvement of the United States of America, and nothing suggests the New Zealand Government had turned its mind to extradition issues. These are the key matters and there is no support for either contention.”

Judge France said that as respondents in the case, the United States were entitled to costs.

Source: TorrentFreak, for the latest info on copyright, file-sharing and anonymous VPN services.

Krebs on Security: Spike in Malware Attacks on Aging ATMs

This post was syndicated from: Krebs on Security and was written by: BrianKrebs. Original post: at Krebs on Security

This author has long been fascinated with ATM skimmers, custom-made fraud devices designed to steal card data and PINs from unsuspecting users of compromised cash machines. But a recent spike in malicious software capable of infecting and jackpotting ATMs is shifting the focus away from innovative, high-tech skimming devices toward the rapidly aging ATM infrastructure in the United States and abroad.

Last month, media outlets in Malaysia reported that organized crime gangs had stolen the equivalent of about USD $1 million with the help of malware they’d installed on at least 18 ATMs across the country. Several stories about the Malaysian attack mention that the ATMs involved were all made by ATM giant NCR. To learn more about how these attacks are impacting banks and the ATM makers, I reached out to Owen Wild, NCR’s global marketing director, security compliance solutions.

Wild said ATM malware is here to stay and is on the rise.


BK: I have to say that if I’m a thief, injecting malware to jackpot an ATM is pretty money. What do you make of reports that these ATM malware thieves in Malaysia were all knocking over NCR machines?

OW: The trend toward these new forms of software-based attacks is occurring industry-wide. It’s occurring on ATMs from every manufacturer, multiple model lines, and is not something that is endemic to NCR systems. In this particular situation for the [Malaysian] customer that was impacted, it happened to be an attack on a Persona series of NCR ATMs. These are older models. We introduced a new product line for new orders seven years ago, so the newest Persona is seven years old.

BK: How many of your customers are still using this older model?

OW: Probably about half the install base is still on Personas.

BK: Wow. So, what are some of the common trends or weaknesses that fraudsters are exploiting that let them plant malware on these machines? I read somewhere that the crooks were able to insert CDs and USB sticks in the ATMs to upload the malware, and they were able to do this by peeling off the top of the ATMs or by drilling into the facade in front of the ATM. CD-ROM and USB drive bays seem like extraordinarily insecure features to have available on any customer-accessible portions of an ATM.

OW: What we’re finding is these types of attacks are occurring on standalone, unattended types of units where there is much easier access to the top of the box than you would normally find in the wall-mounted or attended models.

BK: Unattended… meaning they’re not inside of a bank or part of a structure, but stand-alone systems off by themselves.

OW: Correct.

BK: It seems like the other big factor with ATM-based malware is that so many of these cash machines are still running Windows XP, no?

This new malware, detected by Kaspersky Lab as Backdoor.MSIL.Tyupkin, affects ATMs from a major ATM manufacturer running Microsoft Windows 32-bit.

OW: Right now, that’s not a major factor. It is certainly something that has to be considered by ATM operators in making their migration move to newer systems. Microsoft discontinued updates and security patching on Windows XP, with very expensive exceptions. Where it becomes an issue for ATM operators is that maintaining Payment Card Industry (credit and debit card security standards) compliance requires that the ATM operator be running an operating system that receives ongoing security updates. So, while many ATM operators certainly have compliance issues, to this point we have not seen the operating system come into play.

BK: Really?

OW: Yes. If anything, the operating systems are being bypassed or manipulated with the software as a result of that.

BK: Wait a second. The media reports to date have observed that most of these ATM malware attacks were going after weaknesses in Windows XP?

OW: It goes deeper than that. Most of these attacks come down to two different ways of jackpotting the ATM. The first is what we call “black box” attacks, where some form of electronic device is hooked up to the ATM — basically bypassing the infrastructure in the processing of the ATM and sending an unauthorized cash dispense code to the ATM. That was the first wave of attacks we saw that started very slowly in 2012, went quiet for a while and then became active again in 2013.

The second type that we’re now seeing more of is attacks that start with the introduction of malware into the machine, and that kind of attack is a little less technical to get on the older machines if protective mechanisms aren’t in place.

BK: What sort of protective mechanisms, aside from physically securing the ATM?

OW: If you work on the configuration setting…for instance, if you lock down the BIOS of the ATM to eliminate its capability to boot from USB or CD drive, that gets you about as far as you can go. In high risk areas, these are the sorts of steps that can be taken to reduce risks.

BK: Seems like a challenge communicating this to your customers who aren’t anxious to spend a lot of money upgrading their ATM infrastructure.

OW: Most of these recommendations and requirements have to be considerate of the customer environment. We make sure we’ve given them the best guidance we can, but at the end of the day our customers are going to decide how to approach this.

BK: You mentioned black-box attacks earlier. Is there one particular threat or weakness that makes this type of attack possible? One recent story on ATM malware suggested that the attackers may have been aided by the availability of ATM manuals online for certain older models.

OW: The ATM technology infrastructure is all designed on multivendor capability. You don’t have to be an ATM expert or have inside knowledge to generate or code malware for ATMs. Which is what makes the deployment of preventative measures so important. What we’re faced with as an industry is a combination of vulnerability on aging ATMs that were built and designed at a point where the threats and risk were not as great.

According to security firm F-Secure, the malware used in the Malaysian attacks was “PadPin,” a family of malicious software first identified by Symantec. Also, Russian antivirus firm Kaspersky has done some smashing research on a prevalent strain of ATM malware that it calls “Tyupkin.” Their write-up on it is here, and the video below shows the malware in action on a test ATM.

In a report published this month, the European ATM Security Team (EAST) said it tracked at least 20 incidents involving ATM jackpotting with malware in the first half of this year. “These were ‘cash out’ or ‘jackpotting’ attacks and all occurred on the same ATM type from a single ATM deployer in one country,” EAST Director Lachlan Gunn wrote. “While many ATM Malware attacks have been seen over the past few years in Russia, Ukraine and parts of Latin America, this is the first time that such attacks have been reported in Western Europe. This is a worrying new development for the industry in Europe.”

Card skimming incidents fell by 21% compared to the same period in 2013, while overall ATM-related fraud losses of €132 million (~USD $158 million) were reported, up 7 percent from the same time last year.

TorrentFreak: Illegal Copying Has Always Created Jobs, Growth, And Prosperity

This post was syndicated from: TorrentFreak and was written by: Rick Falkvinge. Original post: at TorrentFreak

It often helps to understand the present by looking at history, and seeing how history keeps repeating itself over and over.

In the late 1700s, the United Kingdom was the empire that established laws on the globe. The United States was still largely a colony – even if not formally so, it was referred to as such in the civilized world, meaning France and the United Kingdom.

The UK had a strictly protectionist view of trade: all raw materials must come to England, and all luxury goods must be made from those materials while in the UK, to be exported to the rest of the world. Long story short, the UK was where the value was to be created.

Laws were written to lock in this effect. Bringing the ability to refine materials somewhere else, the mere knowledge, was illegal. “Illegal copying”, more precisely.

Let’s look at a particularly horrible criminal from that time, Samuel Slater. In the UK, he was even known as “Slater the Traitor”. His crime was to memorize the drawings of a British textile mill, move to New York, and copy the whole of the British textile mill from memory – something very illegal. For this criminal act, building the so-called Slater Mill, he was hailed as “the father of the American Industrial Revolution” by those who would later displace the dominance of the UK – namely the United States. This copy-criminal also has a whole town named after him.

Copying brings jobs and prosperity. Copying has always brought jobs and prosperity. It is those who don’t want to compete who try to legislate a right to rest on their laurels and outlaw copying. It never works.

We can take a look at the early film industry as well. That industry was bogged down with patent monopolies wielded by one of the worst monopolists in industrial history, Thomas Edison, through his Motion Picture Patents Company. He essentially killed off any film company that started in or around New York, where the film industry was based at the time. A few of the nascent film companies – Warner Brothers, Universal Pictures, MGM – therefore chose to settle as far from this monopolist as possible, and went across the entire country to a small unexploited suburb outside Los Angeles, California, which was known as “Hollywoodland” and had a huge sign to that effect. There, they would be safe from Edison’s patent enforcement, merely by putting enough distance between themselves and him.

Yes, you read that right – the entire modern film industry was founded on piracy. Which, again, led to jobs and prosperity.

The heart of the problem is this: those who decide what is “illegal” to copy do so from a basis of not wanting to get outcompeted, and never from any kind of moral high ground. It’s just pure industrial protectionism. Neo-mercantilism, if you prefer. Copying always brings jobs and prosperity. Therefore, voluntarily agreeing to the terms of the incumbent industries, terms which are specifically written to keep everybody else unprosperous, is astoundingly bad business and policy.

I’d happily go as far as to say there is a moral imperative to disobey any laws against copying. History will always put you in the right, as was the case with Samuel Slater, for example.

For a more modern example, you have Japan. When I grew up in the 1980s, Japanese industry was known for cheap knock-off goods. They copied everything shamelessly, and never got quality right. But they knew something that the West didn’t: copying brings prosperity. When you copy well enough, you learn at a staggering pace, and you eventually come out as the R&D leader, the innovation leader, building on that incremental innovation you initially copied. Today, Japan builds the best quality stuff available in any category.

The Japanese knew and understood that it takes three generations of copying and an enormous work discipline to become the best in the world in any industry. Recently, to my huge astonishment, they even overtook the Scots as masters of whisky. (As I am a very avid fan of Scotch whisky, this was a personal source of confusion for me, even though I know things work this way on a rational level.)

At the personal level, pretty much every good software developer I know learned their craft by copying other people’s code. Copying brings prosperity at the national and the individual levels. Those who would seek to outlaw it, or obey such unjust bans against copying, have no moral high ground whatsoever – and frankly, I think people who voluntarily choose to obey such unjust laws deserve to stay unprosperous, and fall with their incumbent master when that time comes.

Nobody ever took the lead by voluntarily walking behind somebody else, after all. The rest of us copy, share, and innovate, and we wait for nobody who tries to legislate their way to competitiveness.

About The Author

Rick Falkvinge is a regular columnist on TorrentFreak, sharing his thoughts every other week. He is the founder of the Swedish and first Pirate Party, a whisky aficionado, and a low-altitude motorcycle pilot. His blog at falkvinge.net focuses on information policy.


Source: TorrentFreak, for the latest info on copyright, file-sharing and anonymous VPN services.

TorrentFreak: The Soaring Financial Cost of Blocking Pirate Sites

This post was syndicated from: TorrentFreak and was written by: Andy. Original post: at TorrentFreak

On Friday news broke that luxury brand company Richemont had succeeded in its quest to have several sites selling counterfeit products blocked by the UK’s largest ISPs.

The landmark ruling, which opens the floodgates for perhaps tens of thousands of other sites to be blocked at the ISP level, contained some surprising information on the costs involved in blocking infringing websites. The amounts cited by Justice Arnold all relate to previous actions undertaken by the movie and music industries against sites such as The Pirate Bay and KickassTorrents.

The applications themselves

The solicitor acting for Richemont, Simon Baggs of Wiggin LLP, also acted for the movie studios in their website blocking applications. Information Baggs provided to the court reveals that an unopposed application for a section 97A blocking order works out at around £14,000 per website.

The record labels’ costs aren’t revealed but Justice Arnold said “it is safe to assume that they are of a similar magnitude to the costs incurred by the film studios.”

In copyright cases, 47 sites have been blocked at the ISP level, which at £14,000 per site puts application costs alone at around £658,000.

Keeping blocked sites blocked

When blocking orders are issued in the UK they contain provisions for rightsholders to add additional IP addresses and URLs to thwart anti-blocking countermeasures employed by sites such as The Pirate Bay. It is the responsibility of the rightsholders to “accurately identify IP addresses and URLs which are to be notified to ISPs in this way.”

It transpires that in order to monitor the server locations and domain names used by targeted websites, the film studios have hired a company called Incopro, which happens to be directed by Simon Baggs of Wiggin.

In addition to maintaining a database of 10,000 ‘pirate’ domains, Incopro also operates ‘BlockWatch’. This system continuously monitors the IP addresses and domains of blocked sites and uses the information to notify ISPs of new IPs and URLs to be blocked.

“Incopro charges a fee to enter a site into the BlockWatch system. It also charges an ongoing monthly fee,” Justice Arnold reveals. “In addition, the rightholders incur legal costs in collating, checking and sending notifications to the ISPs. Mr Baggs’ evidence is that, together, these costs work out at around £3,600 per website per year.”

If we assume that the music industry’s costs are similar, for 47 sites these monitoring costs amount to around £169,200 per year, every year.
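
Neither the judgment nor the parties describe how BlockWatch works under the hood, so the sketch below is purely illustrative of that general kind of monitoring: periodically re-resolve each blocked domain and flag any IP addresses that have not yet been notified to the ISPs. The function and data structure are my own assumptions, written in Python, and are not based on Incopro’s actual system.

# Hypothetical sketch, not Incopro's BlockWatch: track where a
# blocked domain currently resolves and flag newly seen IPs.
import socket

known_ips = {}  # domain -> set of IPs already notified to ISPs

def check_domain(domain):
    """Resolve a blocked domain and return any IPs not yet notified."""
    try:
        infos = socket.getaddrinfo(domain, None)
    except socket.gaierror:
        return set()  # domain currently fails to resolve
    current = {info[4][0] for info in infos}
    new_ips = current - known_ips.get(domain, set())
    known_ips.setdefault(domain, set()).update(new_ips)
    return new_ips  # candidates for the next notification round

new_ips = check_domain("thepiratebay.se")  # domain from the article, for illustration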

Costs to ISPs for implementing blocking orders

The ISPs involved in blocking orders have been less precise about the costs involved, but they are still being incurred on an ongoing basis. All incur ongoing costs when filtering websites such as those on the Internet Watch Foundation list, and copyright injunctions only add to the load.

Sky

Sky reports the cost of implementing a new copyright blocking order as a “mid three figure sum”, while an update to an order (adding new IP addresses, for example) amounts to half of that. Ongoing monitoring of blocked domains costs the ISP a “low four figure sum per month.”

BT

According to the court, BT says that it expends 60 days of employee time per year implementing section 97A orders via its Cleanfeed system, plus a further 12 days of employee time elsewhere.

Each new order takes up 8 hours of in-house lawyers’ time plus 13 hours of general staff time. Updates to orders accrue an hour of costs in the legal department plus another 13 hours of blocking staff time.

EE

For each new order EE expends 30 minutes of staff time, plus a further three hours of time at BT, whose staff it utilizes. Updates cost the same amount of time.

EE pays BT a “near four figure sum” for each update and expends 36 hours of employee time each year on maintenance and management.

TalkTalk

TalkTalk’s legal team expends two hours implementing each new order while its engineers spend around two and a half. Updates are believed to amount to the same. The company’s senior engineers burn through 60 hours each year dealing with blocking orders, with costs amounting to “a low six figure sum” per annum.

Virgin

Virgin estimates that Internet security staff costs amount to a “low five figure sum” per year. Interestingly, the ISP said it spent more on blocking this year than last, partly due to its staff having to respond to comments about blocking on social media.

And the bills are only set to increase

According to Justice Arnold several additional blocking orders are currently pending. They are:

- An application by Paramount Home Entertainment Ltd and other film studios relating to seven websites said to be “substantially focused” on infringement of copyright in movies and TV shows

- An application by 1967 Ltd and other record companies in respect of 21 torrent sites

- An application by Twentieth Century Fox Film Corp and other film studios in respect of eight websites said to be “substantially focused” on infringement of copyright in movies and TV shows

But these 36 new sites to be blocked on copyright grounds are potentially just the tip of a quite enormous iceberg now that blocking on trademark grounds is being permitted.

Richemont has identified approximately 239,000 sites potentially infringing its trademarks, 46,000 of which have been confirmed as infringing and are awaiting enforcement action.

So who will pick up the bill?

“It is obvious that ISPs faced with the costs of implementing website orders have a choice. They may either absorb these costs themselves, resulting in slightly lower profit margins, or they may pass these costs on to their subscribers in the form of higher subscription charges,” Justice Arnold writes.

Since all ISPs will have to bear similar costs, it seems likely that the latter will prove most attractive to them, leaving subscribers to pick up the bill as usual.

Source: TorrentFreak, for the latest info on copyright, file-sharing and anonymous VPN services.

TorrentFreak: Jennifer Lawrence Gets Google to Censor Leaked Pictures, Sort Of

This post was syndicated from: TorrentFreak and was written by: Ernesto. Original post: at TorrentFreak

Over the past several weeks, hundreds of photos of naked celebrities have leaked online. This “fappening” triggered a massive takedown operation targeting sites that host and link to the controversial images.

As a hosting provider and search engine Google inadvertently plays a role in distributing the compromising shots, much to the displeasure of the women involved.

More than a dozen of them sent Hollywood lawyer Marty Singer after the company. Earlier this month Singer penned an angry letter to Google threatening legal action if it doesn’t remove the images from YouTube, Blogspot and its search results.

“It is truly reprehensible that Google allows its various sites, systems and search results to be used for this type of unlawful activity. If your wives, daughters or relatives were victims of such blatant violations of basic human rights, surely you would take appropriate action,” the letter reads.

While no legal action has yet been taken, some celebrities have also sent individual DMCA takedown requests to Google. On September 24 Jennifer Lawrence’s lawyers asked the search engine to remove two links to thefappening.eu as these infringe on the star’s copyrights.

The DMCA takedown request
Earlier this week the request was still pending, so TorrentFreak asked Google what was causing the delay. The company said it could not comment on individual cases but a day later the links in question were removed.

This means that both the thefappening.eu main domain and the tag archive of Jennifer Lawrence posts no longer appear in Google’s search results.

Whether this move has helped Lawrence much is doubtful though. The site in question had already redirected to a new domain at thefappening.so, and those links remain indexed since they were not mentioned in the takedown request.

The good news is that many of Lawrence’s pictures are no longer hosted on the site itself. In fact, the URLs listed in the takedown request to Google no longer show any of the infringing photos in question, so technically Google had no obligation to remove the URLs.

A prominent disclaimer on the site points out that the operator will gladly take down the compromising photos if he’s asked to do so. Needless to say, this is much more effective than going after Google.

The disclaimer


Source: TorrentFreak, for the latest info on copyright, file-sharing and anonymous VPN services.

The Hacker Factor Blog: By Proxy

This post was syndicated from: The Hacker Factor Blog and was written by: The Hacker Factor Blog. Original post: at The Hacker Factor Blog

As I tweak and tune the firewall and IDS system at FotoForensics, I keep coming across unexpected challenges and findings. One of the challenges is related to proxies. If a user uploads prohibited content from a proxy, then my current system bans the entire proxy. An ideal solution would only ban the user.

Proxies serve a lot of different purposes. Most people think about proxies in regards to anonymity, like the TOR network. TOR is a series of proxies that ensure that the endpoint cannot identify the starting point.

However, there are other uses for proxies. Corporations frequently have a set of proxies for handling network traffic. This allows them to scan all network traffic for potential malware. It’s a great solution for mitigating the risk from one user getting a virus and passing it to everyone in the network.

Some governments run proxies as a means to filter content. China and Syria come to mind. China has a custom solution that has been dubbed the “Great Firewall of China”. They use it to restrict site access and filter content. Syria, on the other hand, appears to use a COTS (commercial off-the-shelf) solution. In my web logs, most traffic from Syria comes through Blue Coat ProxySG systems.

And then there are the proxies that are used to bypass usage limits. For example, your hotel may charge for Internet access. If there’s a tech convention in the hotel, then it’s common to see one person pay for the access, and then run his own SOCKS proxy for everyone else to relay out over the network. This gives everyone access without needing everyone to pay for the access.

Proxy Services

Proxy networks that are designed for anonymity typically don’t leak anything. If I ban a TOR node, then that node stays banned since I cannot identify individual users. However, the proxies that are designed for access typically do reveal something about the user. In fact, many proxies explicitly identify whose request is being relayed. This added information is stuffed in HTTP header fields that most web sites ignore.

For example, I recently received an HTTP request from 66.249.81.4 that contained the HTTP header “X-Forwarded-For: 82.114.168.150”. If I were to ban the user, then I would ban “66.249.81.4”, since that system connected to my server. However, 66.249.81.4 is google-proxy-66-249-81-4.google.com and is part of a proxy network. This proxy network identified the user it was relaying via the X-Forwarded-For header. In this case, “82.114.168.150” is someone in Yemen. If I see this reference, then I can start banning the user in Yemen rather than the Google proxy that is used by lots of people. (NOTE: I changed the Yemen IP address for privacy, and this user didn’t upload anything requiring a ban; this is just an example.)

Unfortunately, there is no real standard here. Different proxies use different methods to denote the user being relayed. I’ve seen headers like “X-Forwarded”, “X-Forwarded-For”, “HTTP_X_FORWARDED_FOR” (yes, they actually sent this in their header; this is NOT from the Apache variable), “Forwarded”, “Forwarded-For-IP”, “Via”, and more. Unless I know to look for it, I’m liable to ban a proxy rather than a user.
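
To illustrate the bookkeeping this requires, here is a minimal Python sketch that checks a request for the header variants listed above and prefers a plausible relayed address over the connecting proxy. The header list and function are my own assumptions, not FotoForensics code, and a real system would have to treat these hints as untrusted, since anyone can forge an X-Forwarded-For header.

# Illustrative only: pick a ban target from a request's headers.
# `headers` is assumed to be a dict of header names to values;
# real lookups should be case-insensitive.
import ipaddress

FORWARD_HEADERS = [
    "X-Forwarded-For", "X-Forwarded", "Forwarded-For-IP",
    "Forwarded", "HTTP_X_FORWARDED_FOR", "Via",
]

def ban_target(connecting_ip, headers):
    """Return the relayed client address if a proxy revealed one,
    otherwise the connecting address itself."""
    for name in FORWARD_HEADERS:
        # A header may hold a chain: "client, proxy1, proxy2".
        for part in headers.get(name, "").replace(";", ",").split(","):
            part = part.strip()
            try:
                addr = ipaddress.ip_address(part)
            except ValueError:
                continue  # "Via" and friends often carry non-IP tokens
            if addr.is_private or addr.is_loopback or part == connecting_ip:
                continue  # skip private addresses and self-relays
            return part
    return connecting_ip

ban_target("66.249.81.4", {"X-Forwarded-For": "82.114.168.150"})
# returns "82.114.168.150": ban the relayed user, not the Google proxy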

In some cases, I see the direct connection address also listed as the relayed address; it claims to be relaying itself. I suspect that this is caused by some kind of anti-virus system that is filtering network traffic through a local proxy. And sometimes I see private addresses (“private” as in “private use” and “should not be routed over the Internet”; not “don’t tell anyone”). These are likely home users or small companies that run a proxy for all of the computers on their local networks.

Proxy Detection

If I cannot identify the user being proxied, then just identifying that the system is a proxy can be useful. Rather than banning known proxies for three months, I might ban the proxy for only a day or a week. The reduced time should cut down on the number of people blocked because of the proxy that they used.

There are unique headers that can identify that a proxy is present. Blue Coat ProxySG, for example, adds in a unique header: “X-BlueCoat-Via: abce6cd5a6733123”. This tracking ID is unique to the Blue Coat system; every user relaying through that specific proxy gets the same unique ID. It is intended to prevent looping between Blue Coat devices. If the ProxySG system sees its own unique ID, then it has identified a loop.

Blue Coat is not the only vendor with its own proxy identifier. Fortinet’s software adds an “X-FCCKV2” header. And Verizon silently adds an “X-UIDH” header that contains a large binary string for tracking users.
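
Even when these markers identify only the proxy and not the user, they can drive the softer ban policy described earlier. A minimal sketch, again assuming a dict of request headers; the durations are the example figures from the text, not recommendations.

# Illustrative: shorten bans when a request carries a known proxy marker.
from datetime import timedelta

PROXY_MARKERS = {"X-BlueCoat-Via", "X-FCCKV2", "X-UIDH"}

def ban_duration(headers):
    # A proxy serves many users, so keep its ban short (a day)
    # instead of the default three months.
    if PROXY_MARKERS.intersection(headers):
        return timedelta(days=1)
    return timedelta(days=90)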

Language and Location

Besides identifying proxies, I can also identify the user’s preferred language.

The intent with specifying languages in the HTTP header is to help web sites present content in the native language. If my site supports English, German, and French, then seeing a hint that says “French” should help me automatically render the page using French. However, this can be used along with IP address geolocation to identify potential proxies. If the IP address traces to Australia but the user appears to speak Italian, then it increases the likelihood that I’m seeing an Australian proxy that is relaying for a user in Italy.

The official way to identify the user’s language is to use an HTTP “Accept-Language” header. For example, “Accept-Language: en-US,en;q=0.5” says to use the United States dialect of English, or just English if there is no dialect support at the web site. However, there are unofficial approaches to specifying the desired language. For example, many web browsers encode the user’s preferred language into the HTTP user-agent string.
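
Parsing the official header is straightforward. The Python sketch below handles the common “lang;q=weight” form and returns dialects in order of preference; it deliberately ignores RFC edge cases such as wildcards.

# Minimal Accept-Language parser; ignores wildcards and other edge cases.
def parse_accept_language(value):
    langs = []
    for part in value.split(","):
        tag, _, q = part.strip().partition(";q=")
        try:
            weight = float(q) if q else 1.0
        except ValueError:
            weight = 0.0  # malformed weight: rank it last
        langs.append((tag, weight))
    return sorted(langs, key=lambda item: -item[1])

parse_accept_language("en-US,en;q=0.5")
# [('en-US', 1.0), ('en', 0.5)]: US English first, generic English second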

Similarly, Facebook can relay network requests. These appear with the header “X-Facebook-Locale”, an unofficial way to identify when Facebook is being used as a proxy. However, it also tells me the user’s preferred language: “X-Facebook-Locale: fr_CA”. In this case, the user prefers the Canadian dialect of French (fr_CA). While the user may be located anywhere in the world, he is probably in Canada.

There’s only one standard way to specify the recipient’s language. However, there are lots of common non-standard ways. Just knowing what to look for can be a problem. But the bigger problem happens when you see conflicting language definitions.

Accept-Language: de-de,de;q=0.5

User-Agent: Mozilla/5.0 (Linux; Android 4.4.2; it-it; SAMSUNG SM-G900F/G900FXXU1ANH4 Build/KOT49H) AppleWebKit/537.36 (KHTML, like Gecko) Version/1.6 Chrome/28.0.1500.94 Mobile Safari/537.36

X-Facebook-Locale: es_LA

x-avantgo-clientlanguage: en_GB

x-ucbrowser-ua: pf(Symbian);er(U);la(en-US);up(U2/1.0.0);re(U2/1.0.0);dv(NOKIAE90);pr(UCBrowser/9.2.0.336);ov(S60V3);pi(800*352);ss(800*352);bt(GJ);pm(0);bv(0);nm(0);im(0);sr(2);nt(1)

X-OperaMini-Phone-UA: Mozilla/5.0 (Linux; U; Android 4.4.2; id-id; SM-G900T Build/id=KOT49H.G900SKSU1ANCE) AppleWebKit/534.30 (KHTML, like Gecko) Version/4.0 Mobile Safari/534.30

If I see all of these in one request, then I’ll probably choose the official header first (German from German). However, without the official header, would I choose Spanish from Latin America (“es-LA” is unofficial but widely used), Italian from Italy (it-it) as specified by the web browser user-agent string, or the language from one of those other fields? (Fortunately, in the real world these would likely all be the same. And you’re unlikely to see most of these fields together. Still, I have seen some conflicting fields.)
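
In code, that preference order might look like the sketch below: trust the official header first, then app-specific hints, then a locale tag buried in the user-agent string. The header names come from the examples above; the precedence itself is just one reasonable choice, not a standard.

# Illustrative precedence for conflicting language hints.
import re

def guess_language(headers):
    if "Accept-Language" in headers:
        # Official header wins; take the first (highest-priority) tag.
        return headers["Accept-Language"].split(",")[0].strip()
    for name in ("X-Facebook-Locale", "x-avantgo-clientlanguage"):
        if name in headers:
            return headers[name]
    # Last resort: a locale tag embedded in the user-agent, e.g. "it-it".
    match = re.search(r";\s*([a-z]{2}[-_][a-zA-Z]{2})\s*;", headers.get("User-Agent", ""))
    return match.group(1) if match else None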

Time to Program!

So far, I have identified nearly a dozen different HTTP headers that denote some kind of proxy. Some of them identify the user behind the proxy, but others leak clues or only indicate that a proxy was used. All of this can be useful for determining how to handle a ban after someone violates my site’s terms of service, even if I don’t know who is behind the proxy.

In the near future, I should be able to identify at least some of these proxies. If I can identify the people using proxies, then I can restrict access to the user rather than the entire proxy. And if I can at least identify the proxy, then I can still try to lessen the impact for other users.

TorrentFreak: Uploaded.net Liable For Failing to Delete Copyright Content

This post was syndicated from: TorrentFreak and was written by: Andy. Original post: at TorrentFreak

Having content removed from the Internet is a task undertaken by most major entertainment industry companies. While laws differ around the world, the general understanding is that once notified of an infringement, Internet-based companies need to take action to prevent ongoing liability.

A case in Germany involving popular file-hosting service Uploaded.net has not only underlined this notion, but clarified that in some instances a hosting service can be held liable even if they aren’t aware of the content of a takedown notice.

It all began with anti-piracy company proMedia GmbH, which had been working with its record label partners to remove unauthorized content from the Internet. The Hamburg-based company spotted a music album being made available on Uploaded and wrote to the service requesting that it be removed.

“In the case at hand, a notice with regards to some infringing URLs on the file-hosting site was sent to the given abuse contact of the site,” Mirko Brüß, a lawyer with record label law firm Rasche Legal, told TorrentFreak.

However, three days later the album was still being made available, so the law firm sent Uploaded a cease-and-desist undertaking to sign. When the file-hosting site still didn’t respond, Rasche Legal obtained a preliminary injunction against Uploaded.

“After it was served in Switzerland, the file-hoster objected and the court had an oral hearing in September,” Brüß explains.

In its response Uploaded appealed the injunction, claiming it had never been aware of the takedown notices from proMedia GmbH. Lars Sobiraj of Tarnkappe told TF that Uploaded claimed to have received an empty Excel spreadsheet and so didn’t react to it, preferring instead to receive plain-text documents or complaints via its official takedown tool.

Rasche Legal later sent another email but Uploaded staff reportedly didn’t get a chance to read that either since an email server identified the correspondence as spam and deleted it.

“We did not believe this ‘story’ but thought they had just failed to process the notice expeditiously,” Brüß told TF.

In its judgment on the case the Hamburg District Court found that while service providers have no general obligations to monitor for infringing content on their services, the same cannot be said of infringements they have been made aware of.

However, the big question rested on Uploaded’s claim that it had never been aware of the infringements in question, since it had never received the notices relating to them. In the event, the Court found that sending the emails to Uploaded was enough to notify the service that infringements were taking place and that it must take responsibility for ending them.

“The Court followed our reasoning, meaning it is sufficient that the file-hoster actually receives the notice in a way that you can expect it to be read under normal circumstances,” Brüß says.

“There is a similar jurisdiction with regards to postal mail, where it is sufficient that the letter has reached your inbox and it is not necessary that you actually read the content of the letter in order for it to take legal effect. So here, we had proved that the takedown notice did reach the file-hoster’s mailserver, they only failed to act upon it.”

A ruling in the opposite direction would have opened up the possibility of other companies in a similar position to Uploaded blaming technical issues each time they failed to take down infringing content, Brüß explains. Instead, file-hosters are now required to respond quickly to complaints or face liability.

“So in essence, file-hosters need to make sure that they attain knowledge of all notices sent to them and act upon these notices expeditiously, or they face secondary (or even primary) liability. Also, the court stated that it does not matter by which means the notices are sent,” Brüß concludes.

Source: TorrentFreak, for the latest info on copyright, file-sharing and anonymous VPN services.

TorrentFreak: Google Will Punish “Pirate” Sites Harder in Search Results

This post was syndicated from: TorrentFreak and was written by: Ernesto. Original post: at TorrentFreak

Over the past few years the entertainment industries have repeatedly asked Google to step up its game when it comes to anti-piracy efforts.

These remarks haven’t fallen on deaf ears and Google has slowly implemented various new anti-piracy measures in response.

Today Google released an updated version of its “How Google Fights Piracy” report. The company provides an overview of all the efforts it makes to combat piracy, but also stresses that copyright holders themselves have a responsibility to make content available.

One of the most prominent changes is a renewed effort to make “pirate” sites less visible in search results. Google has had a downranking system in place since 2012, but this lacked effectiveness according to the RIAA, MPAA and other copyright industry groups.

The improved version, which will roll out next week, aims to address this critique.

“We’ve now refined the signal in ways we expect to visibly affect the rankings of some of the most notorious sites. This update will roll out globally starting next week,” says Katherine Oyama, Google’s Copyright Policy Counsel.

The report notes that the new downranking system will still be based on the number of valid DMCA requests a site receives, among other factors. The pages of flagged sites remain indexed, but are less likely to be the top results.

“Sites with high numbers of removal notices may appear lower in search results. This ranking change helps users find legitimate, quality sources of content more easily,” the report reads.
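
Google discloses the main input, the number of valid DMCA notices “among other factors”, but not the formula itself. Purely as a hypothetical illustration, a demotion signal along those lines might scale with notice volume relative to site size while leaving pages indexed:

# Toy model only; Google's actual ranking formula is not public.
import math

def demotion_factor(valid_notices, indexed_pages):
    """Multiplier in (0, 1]; 1.0 means no demotion. Pages stay
    indexed, they just rank lower as notices accumulate."""
    if valid_notices == 0:
        return 1.0
    notice_rate = valid_notices / max(indexed_pages, 1)
    return 1.0 / (1.0 + math.log1p(1000 * notice_rate))

demotion_factor(5000, 100000)  # roughly 0.2: heavily noticed site ranks far lower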

Looking at the list of sites for which Google received the most DMCA takedown requests, we see that 4shared, Filestube and Dilandau can expect to lose some search engine traffic.

The report further highlights several other tweaks and improvements to Google’s anti-piracy efforts. For example, in addition to banning piracy-related autocomplete terms, Google now also downranks suggestions that return results with many “pirate” sites.

Finally, the report also confirms our previous reporting, which showed that Google uses ads to promote legal movie services when people search for piracy-related keywords such as torrent, DVDrip and Putlocker. This initiative aims to increase the visibility of legitimate sites.

A full overview of Google’s anti-piracy efforts is available here.

Source: TorrentFreak, for the latest info on copyright, file-sharing and anonymous VPN services.

TorrentFreak: United States Hosts Most Pirate Sites, UK Crime Report Finds

This post was syndicated from: TorrentFreak and was written by: Ernesto. Original post: at TorrentFreak

The UK IP Crime Group, a coalition of law enforcement agencies, government departments and industry representatives, has released its latest IP Crime Report.

The report is produced by the UK Government’s Intellectual Property Office and provides an overview of recent achievements and current challenges in the fight against piracy and counterfeiting. Increasingly, these threats are coming from the Internet.

“One of the key features in this year’s report is the continuing trend that the Internet is a major facilitator of IP crime,” the Crime Group writes.

The report notes that, as in previous years, Hollywood-funded industry group FACT remains one of the key drivers of anti-piracy efforts in the UK. Over the past year it has targeted alleged pirate sites through various channels, including their hosting providers.

Not all hosts are receptive to FACT’s complaints though, and convincing companies that operate abroad is often a challenge. This includes the United States, where the majority of the investigated sites are hosted.

“Only 14% of websites investigated by FACT are hosted in the UK. While it is possible to contact the hosts of these websites, there still remains a considerable number of copyright infringing websites that are hosted offshore and not within the jurisdiction of the UK.”

“Analysis has shown that the three key countries in which content is hosted are the UK, the USA and Canada. However, investigating servers located offshore can cause specific problems for FACT’s law enforcement partners,” the report notes.


The finding comes as a bit of a surprise, as one would expect that United States authorities and industry groups would have been keeping their own houses in order.

Just a few months ago the US-based IIPA, which counts the MPAA and RIAA among its members, called out Canada because local hosting providers are “a magnet” for pirate sites. However, it now appears they still have plenty of work to do inside U.S. borders.

But even when hosting companies are responsive to complaints from rightsholders, the problem doesn’t always go away. The report mentions that most sites simply move on to another host and continue business as usual there.

“In 2013, FACT closed a website after approaching the hosting provider on 63 occasions. Although this can be a very effective strategy, in most instances the website is swiftly transferred onto servers owned by another ISP, often located outside the UK.”

While downtime may indeed be relatively brief, the report claims that it may still hurt the site, as visitors may move on to other legitimate or illegitimate sources.

“The [moving] process usually involves a disruptive period of time whereby the website is offline, during which users will often find an alternative service, thus negatively affecting the website’s popularity.”

While hosting companies remain a main target, tackling the online piracy problem requires a multi-layered approach according to the UK Crime Group.

With the help of local law enforcement groups such as City of London’s PIPCU, copyright holders have rolled out a variety of anti-piracy measures in recent months. This includes domain name suspensions, cutting off payment processors and ad revenue, website blocking by ISPs and criminal prosecutions.

These and other efforts are expected to continue during the years to come. Whether that will be enough to put a real dent in piracy rates has yet to be seen.

Source: TorrentFreak, for the latest info on copyright, file-sharing and anonymous VPN services.

TorrentFreak: High Court Orders ISPs to Block Counterfeiting Websites

This post was syndicated from: TorrentFreak and was written by: Andy. Original post: at TorrentFreak

Following successful action by the world’s leading entertainment companies to have file-sharing sites blocked at the ISP level, it was perhaps inevitable that other companies with similar issues would tread the same path.

Compagnie Financière Richemont S.A. owns several well-known luxury brands including Cartier and Mont Blanc and for some time has tried to force sites selling counterfeit products to close down. Faced with poor results, in 2014 the company wrote to the UK’s leading ISPs – Sky, TalkTalk, BT, Virgin Media, EE and Telefonica/O2 – complaining that third party sites were infringing on Richemont trademarks.

Concerned that Richemont hadn’t done enough to close the sites down on its own and that blocking could affect legitimate trade, the ISPs resisted and the matter found itself before the High Court.

This morning a decision was handed down and it’s good news for Richemont. The ISPs named in the legal action must now restrict access to websites selling physical counterfeits in the same way they already restrict file-sharing sites.

The websites mentioned in the current order are cartierloveonline.com, hotcartierwatch.com, iwcwatchtop.com, replicawatchesiwc.com, 1iwc.com, montblancpensonlineuk.com and ukmontblancoutlet.co.uk. In addition, Richemont identified tens of thousands of additional domains that could be added in the future.

A Richemont spokesperson told TorrentFreak that the ruling represents a positive step in the fight to protect brands and customers from the sale of counterfeit goods online.

“We are pleased by this judgment and welcome the Court’s recognition that there is a public interest in preventing trade mark infringement, particularly where counterfeit goods are involved. The Courts had already granted orders requiring ISPs to block sites for infringement of copyright in relation to pirated content. This decision is a logical extension of that principle to trade marks,” the company said.

Wiggin LLP, the law firm at the heart of website blocking action for the entertainment industry, acted for Richemont in the case. The company says that today’s judgment holds benefits for both rightsholders and consumers.

“In a comprehensive judgment, the court has considered the enforcement methods that are presently available to trade mark owners when tackling infringement online. The court has concluded that Internet Service Providers play ‘an essential role’ and that the court can and should apply Article 11 of the Enforcement Directive to require the application of technical measures to impede infringement of trade marks,” Wiggin said.

According to a comment sent to TF by Arty Rajendra, Partner at IP law firm Rouse Legal, the decision is likely to be appealed.

Source: TorrentFreak, for the latest info on copyright, file-sharing and anonymous VPN services.