Posts tagged ‘Other’

TorrentFreak: RIAA and Friends Accuse CNET of Hosting ‘Pirate’ Software

This post was syndicated from: TorrentFreak and was written by: Ernesto. Original post: at TorrentFreak

Despite growing revenue streams from digital music, the music industry still sees online piracy as a significant threat.

This week a coalition of 16 music groups including the RIAA, the American Association of Independent Music (A2IM) and the American Society of Composers, Authors and Publishers (ASCAP) voiced their concern over so-called “ripping” software.

The groups are not happy with CNET’s, as the software portal offers access to various YouTube downloaders and other stream-ripping tools. In a letter to Les Moonves, CEO of CNET’s parent company CBS, they accuse the download portal of offering infringing software.

“[CNET’s] has made various computer, web, and mobile applications available that induce users to infringe copyrighted content by ripping the audio or the audio and video from what might be an otherwise legitimate stream,” the letter reads.

“We ask that you consider the above in light of industry best practices, your company’s reputation, the clear infringing nature of these applications, and your role in creating a safe, legitimate, and innovative Internet ecosystem,” the groups add.

Despite the strong wording, CBS doesn’t appear to be very impressed by the accusations.

In a response cited by Billboard, the company notes that “all of the software indexed on is legal.” According to CBS, the software in question can be put to legal use, and how it is used is the responsibility of the user.


This isn’t the first time that CNET and CBS have been called out for allegedly facilitating piracy. A few years ago a group of artists sued CBS and CNET for their role in distributing uTorrent, LimeWire and other P2P software.

The artists claimed that CNET profited heavily from distributing file-sharing software via, while demonstrating in editorial reviews how these applications could be used to download copyright-infringing material.

The judge eventually ruled in favor of CBS and CNET, finding no indication that the companies would purposefully encourage copyright infringement in the future. A software ban would therefore needlessly silence “public discussion of P2P technologies.”

Given CBS’s response to the music groups’ recent letter, the current request seems unlikely to be effective either.

TF asked the RIAA, A2IM and ASCAP for additional details on the letter they sent to CBS, but none of the groups replied to our inquiry before publication.

Source: TorrentFreak, for the latest info on copyright, file-sharing, torrent sites and the best VPN services.

Lauren Weinstein's Blog: Windows 10: A Potential Privacy Mess, and Worse

This post was syndicated from: Lauren Weinstein's Blog and was written by: Lauren. Original post: at Lauren Weinstein's Blog

I had originally been considering accepting Microsoft’s offer of a free upgrade from Windows 7 to Windows 10. After all, reports have suggested that it’s a much more usable system than Windows 8/8.1 — but of course in keeping with the “every other MS release of Windows is a dog” history, that’s a pretty low bar. However, it appears that…

TorrentFreak: Google Publishes Chrome Fix For Serious VPN Security Hole

This post was syndicated from: TorrentFreak and was written by: Andy. Original post: at TorrentFreak

As large numbers of Internet users wise up to seemingly endless online privacy issues, security products are increasingly being viewed as essential for even basic tasks such as web browsing.

In addition to regular anti-virus, firewall and ad-busting products, users wishing to go the extra mile often invest in a decent VPN service, which allows them to hide their real IP addresses from the world. Well, that’s the theory at least.

In January this year, details of a serious vulnerability revealed that in certain situations third parties were able to discover the real IP addresses of Chrome and Firefox users even though they were connected to a VPN.

This wasn’t the fault of any VPN provider though. The problem was caused by features present in WebRTC, an open-source project supported by Google, Mozilla and Opera.

By placing a few lines of code on a website and using a STUN server it became possible to reveal not only users’ true IP addresses, but also their local network address too.
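The leak itself is triggered by browser JavaScript, but the STUN exchange underneath it is a simple binary protocol. As an illustrative sketch (RFC 5389 framing; this is not the actual leak code, which runs as page script), here is how a STUN Binding Request is built and how the XOR-MAPPED-ADDRESS attribute in the server’s reply decodes to a public IP and port:

```python
import os
import struct

MAGIC_COOKIE = 0x2112A442  # fixed value defined by RFC 5389


def stun_binding_request() -> bytes:
    """Build a 20-byte STUN Binding Request header."""
    txn_id = os.urandom(12)  # 96-bit random transaction ID
    # message type 0x0001 (Binding Request), attribute length 0, magic cookie
    return struct.pack("!HHI", 0x0001, 0, MAGIC_COOKIE) + txn_id


def decode_xor_mapped_address(value: bytes) -> str:
    """Decode an IPv4 XOR-MAPPED-ADDRESS attribute value from a reply.

    The server XORs the port with the top 16 bits of the magic cookie
    and the address with the full cookie; undoing that reveals the
    caller's public IP and port -- exactly what the leak exposed.
    """
    if value[1] != 0x01:
        raise ValueError("IPv4 only in this sketch")
    port = struct.unpack("!H", value[2:4])[0] ^ (MAGIC_COOKIE >> 16)
    addr = struct.unpack("!I", value[4:8])[0] ^ MAGIC_COOKIE
    ip = ".".join(str((addr >> s) & 0xFF) for s in (24, 16, 8, 0))
    return f"{ip}:{port}"
```

Sending that 20-byte request over UDP to any public STUN server and decoding the reply is all it takes to learn the address the outside world sees for you, which is why a few lines of code on a web page were enough.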

While users were immediately alerted to broad blocking techniques that could mitigate the problem, it’s taken many months for the first wave of ‘smart’ solutions to arrive.

Following on the heels of a Chrome fix published by Rentamob earlier this month which protects against VPN leaks while leaving WebRTC enabled, Google has now thrown its hat into the ring.

Titled ‘WebRTC Network Limiter’, the tiny Chrome extension (just 7.31KB) disables the WebRTC multiple-routes option in Chrome’s privacy settings while configuring WebRTC not to use certain IP addresses.

In addition to hiding local IP addresses that are normally inaccessible to the public Internet, the extension also stops other public IP addresses from being revealed.

“Any public IP addresses associated with network interfaces that are not used for web traffic (e.g. an ISP-provided address, when browsing through a VPN) [are hidden],” Google says.

“Once the extension is installed, WebRTC will only use public IP addresses associated with the interface used for web traffic, typically the same addresses that are already provided to sites in browser HTTP requests.”

While both the Google and Rentamob solutions provide more elegant responses to the problem than previously available, both admit to having issues.

“Some WebRTC functions, like VOIP, may be affected by the multiple routes disabled setting. This is unavoidable,” Rentamob explains.

Google details similar problems, including issues directly linked to funneling traffic through a VPN.

“This extension may affect the performance of applications that use WebRTC for audio/video or real-time data communication. Because it limits the potential network paths, WebRTC may pick a path that results in significantly longer delay or lower quality (e.g. through a VPN). We are attempting to determine how common this is,” the company concludes.

After applying the blocks and fixes detailed above, Chrome users can check for IP address leaks by using sites including IPLeak and BrowserLeaks.

Source: TorrentFreak, for the latest info on copyright, file-sharing, torrent sites and the best VPN services.

Schneier on Security: Fugitive Located by Spotify

This post was syndicated from: Schneier on Security and was written by: schneier. Original post: at Schneier on Security

The latest in identification by data:

Webber said a tipster had spotted recent activity from Nunn on the Spotify streaming service and alerted law enforcement. He scoured the Internet for other evidence of Nunn and Barr’s movements, eventually filling out 12 search warrants for records at different technology companies. Those searches led him to an IP address that traced Nunn to Cabo San Lucas, Webber said.

Nunn, he said, had been avidly streaming television shows and children’s programs on various online services, giving the sheriff’s department a hint to the couple’s location.

Krebs on Security: Windows 10 Shares Your Wi-Fi With Contacts

This post was syndicated from: Krebs on Security and was written by: BrianKrebs. Original post: at Krebs on Security

Starting today, Microsoft is offering most Windows 7 and Windows 8 users a free upgrade to the software giant’s latest operating system — Windows 10. But there’s a very important security caveat that users should know about before transitioning to the new OS: Unless you opt out, Windows 10 will by default share your Wi-Fi network password with any contacts you may have listed in Outlook and Skype — and, with an opt-in, your Facebook friends!

This brilliant new feature, which Microsoft has dubbed Wi-Fi Sense, doesn’t share your Wi-Fi network password per se — it shares an encrypted version of that password. But it does allow anyone in your Skype or Outlook or Hotmail contacts lists to waltz onto your Wi-Fi network — should they ever wander within range of it or visit your home (or hop onto it secretly from hundreds of yards away with a good ol’ cantenna!).

I first read about this disaster waiting to happen over at The Register, which noted that Microsoft’s Wi-Fi Sense FAQ seeks to reassure would-be Windows 10 users that the Wi-Fi password will be sent encrypted and stored encrypted — on a Microsoft server. According to PCGamer, if you use Windows 10’s “Express” settings during installation, Wi-Fi Sense is enabled by default.

“For networks you choose to share access to, the password is sent over an encrypted connection and stored in an encrypted file on a Microsoft server, and then sent over a secure connection to your contacts’ phone if they use Wi-Fi Sense and they’re in range of the Wi-Fi network you shared,” the FAQ reads.

The company says your contacts will only be able to share your network access, and that Wi-Fi Sense will block those users from accessing any other shared resources on your network, including computers, file shares or other devices. But these words of assurance probably ring hollow for anyone who’s been paying attention to security trends over the past few years: Given the myriad ways in which social networks and associated applications share and intertwine personal connections and contacts, it’s doubtful that most people are aware of who exactly all of their social network followers really are from one day to the next.

El Reg says it well here:

That sounds wise – but we’re not convinced how it will be practically enforced: if a computer is connected to a protected Wi-Fi network, it must know the key. And if the computer knows the key, a determined user or hacker will be able to find it within the system and use it to log into the network with full access.

In theory, someone who wanted access to your company network could befriend an employee or two, and drive into the office car park to be in range, and then gain access to the wireless network. Some basic protections, specifically ones that safeguard against people sharing their passwords, should prevent this.

I should point out that Wi-Fi networks which use the centralized 802.1x Wi-Fi authentication — and these are generally tech-savvy large organizations — won’t have their Wi-Fi credentials shared by this new feature.

Microsoft’s solution for those concerned requires users to change the name (a.k.a. “SSID“) of their Wi-Fi network to include the text “_optout” somewhere in the network name (for example, “oldnetworknamehere_optout”).

It’s interesting to contrast Microsoft’s approach here with Apple’s opt-in iCloud Keychain service, which lets users who enable it sync Wi-Fi access information, email passwords, and other stored credentials among their own personal constellation of Apple computers and iDevices via iCloud — but does not share that information with other users. Like Wi-Fi Sense, iCloud Keychain encrypts the credentials before syncing them; the differences are that it is opt-in and that it shares the credentials only with your own devices.

Wi-Fi Sense has of course been a part of the latest Windows Phone for some time, yet it’s been less of a concern previously because Windows Phone has nowhere near the market share of mobile devices powered by Google’s Android or Apple’s iOS. But embedding this feature in an upgrade version of Windows makes it a serious concern for much of the planet.

Why? For starters, despite years of advice to the contrary, many people tend to re-use the same password for everything. Also, lots of people write down their passwords. And, as The Reg notes, if you personally share your Wi-Fi password with a friend — by telling it to them or perhaps accidentally leaving it on a sticky note on your fridge — and your friend enters the password into his phone, the friends of your friend now have access to the network.

Source: How-To Geek


An article in Ars Technica suggests the concern over this new feature is much ado about nothing. That story states: “First, a bit of anti-scaremongering. Despite what you may have read elsewhere, you should not be mortally afraid of Wi-Fi Sense. By default, it will not share Wi-Fi passwords with anyone else. For every network you join, you’ll be asked if you want to share it with your friends/social networks.”

To my way of reading that, if I’m running Windows 10 in the default configuration, connect to my Wi-Fi network and say yes to sharing, Windows shares access to that network: any contact of mine gets access automatically, because I’m running Windows 10 and we’re social media contacts. True, that contact doesn’t get to see my Wi-Fi password, but he can nonetheless connect to my network.

While you’re at it, consider keeping Google off your Wi-Fi network as well. It’s unclear whether the Wi-Fi Sense opt-out kludge will also let users opt out of having their wireless network name indexed by Google, which requires the inclusion of the phrase “_nomap” in the Wi-Fi network name. The Register seems to think Windows 10 upgraders can avoid both by including both “_nomap” and “_optout” in the Wi-Fi network name, but this article at How-To Geek says users will need to choose the lesser of two evils.

Either way, Wi-Fi Sense combined with integrated Google mapping tells people where you live (and/or where your business is), meaning that they now know where to congregate to jump onto your Wi-Fi network without your permission.

My suggestions:

  1. Prior to upgrading to Windows 10, change your Wi-Fi network name/SSID to something that includes the terms “_nomap_optout”.
  2. After the upgrade is complete, change the privacy settings in Windows to disable Wi-Fi Sense sharing.
  3. If you haven’t already done so, consider additional steps to harden the security of your Wi-Fi network.
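The opt-outs in step 1 are plain substring markers in the network name, so a small helper can add any that are missing while respecting the 802.11 limit of 32 bytes per SSID. This is a hypothetical sketch (neither Microsoft nor Google publishes such a tool):

```python
def add_optout_markers(ssid: str) -> str:
    """Append "_optout" (Wi-Fi Sense) and "_nomap" (Google mapping)
    to a network name if they are not already present."""
    for marker in ("_optout", "_nomap"):
        if marker not in ssid:
            ssid += marker
    if len(ssid.encode("utf-8")) > 32:
        # 802.11 caps SSIDs at 32 bytes; shorten the base name first
        raise ValueError("resulting SSID exceeds the 32-byte limit")
    return ssid
```

For example, `add_optout_markers("myhomenet")` returns `"myhomenet_optout_nomap"` — though whether both markers are honored in one name is, as noted above, disputed between The Register and How-To Geek.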

Further reading:

What Is Wi-Fi Sense and Why Does it Want Your Facebook Account? 

UH OH: Windows 10 Will Share Your Wi-Fi Key With Your Friends’ Friends

Why Windows 10 Shares Your Wi-Fi Password and How to Stop it

Wi-Fi Sense in Windows 10: Yes, It Shares Your Passkeys, No You Shouldn’t Be Scared

TorrentFreak: Twitter Sued for Failing to Remove Copyrighted Photo

This post was syndicated from: TorrentFreak and was written by: Ernesto. Original post: at TorrentFreak

In common with many other user-generated sites, Twitter is used by some of its members to host or link to copyright infringing material.

If rightsholders submit a takedown request, Twitter swiftly takes the infringing content down. This policy made headlines just a few days ago when Twitter removed several tweets that republished a joke without attribution.

However, a new lawsuit suggests that Twitter’s takedown efforts are not always this effective.

This week award-winning photographer Kristen Pierson filed a complaint (pdf) against Twitter at a California District Court. Pierson accuses Twitter of hosting or linking to one of her works without permission.

“A Twitter user or users copied the Infringing Image without license or permission from Pierson and on information and belief sent one or more Tweets publicizing and linking to it. The Infringing Uses were hosted either on Twitter or on third-party servers,” the complaint reads.

Under U.S. law Internet services are not liable for the copyright infringements of their users, as long as they respond to takedown requests. But Twitter failed to do that, Pierson says.

On March 4 of last year, Pierson sent a notice to Twitter’s registered DMCA agent pointing out that one of her photos of DragonForce guitarist Herman Li was being shared illegally. More than a year has passed, but she received no response.

The takedown notice

The Twitter account which allegedly posted the image is no longer online, but even today the infringing image is still present on Twitter’s servers and accessible through its direct URL.

Pierson doesn’t mention whether she sent any follow-ups to the original request. TF searched for the notice in question in the database where Twitter publishes its takedown notices, but it’s not listed there.

In the complaint the photographer asks for a restraining order preventing Twitter from hosting or linking to her work. In addition, Pierson demands both statutory and actual damages which could well exceed $150,000.

This is not the first time that Twitter has been sued by a photographer over a failed takedown response. Christopher Boffoli previously sued the company for the same offense. The case was settled out of court.

Source: TorrentFreak, for the latest info on copyright, file-sharing, torrent sites and the best VPN services.

Schneier on Security: Bizarre High-Tech Kidnapping

This post was syndicated from: Schneier on Security and was written by: schneier. Original post: at Schneier on Security

This is a story of a very high-tech kidnapping:

FBI court filings unsealed last week showed how Denise Huskins’ kidnappers used anonymous remailers, image sharing sites, Tor, and other people’s Wi-Fi to communicate with the police and the media, scrupulously scrubbing meta data from photos before sending. They tried to use computer spyware and a DropCam to monitor the aftermath of the abduction and had a Parrot radio-controlled drone standing by to pick up the ransom by remote control.

The story also demonstrates just how effective the FBI is at tracing cell phone usage these days. They had a blocked call from the kidnappers to the victim’s cell phone. First they used a search warrant to AT&T to get the actual calling number. After learning that it was an AT&T prepaid TracFone, they set about finding out where the burner was bought, what the serial numbers were, and the location where the calls were made from.

The FBI reached out to Tracfone, which was able to tell the agents that the phone was purchased from a Target store in Pleasant Hill on March 2 at 5:39 pm. Target provided the bureau with a surveillance-cam photo of the buyer: a white male with dark hair and medium build. AT&T turned over records showing the phone had been used within 650 feet of a cell site in South Lake Tahoe.

Here’s the criminal complaint. It borders on surreal. Were it an episode of CSI:Cyber, you would never believe it.

TorrentFreak: Sweden’s Largest Streaming Site Will Close After Raid

This post was syndicated from: TorrentFreak and was written by: Andy. Original post: at TorrentFreak

While millions associate Sweden with BitTorrent through its connections with The Pirate Bay, over the past several years the public has increasingly been obtaining its content in other ways.

Thanks to cheap bandwidth and an appetite for instant gratification, so-called streaming portals have grown in popularity, with movies and TV shows just a couple of clicks away in convenient Netflix-style interfaces.

Founded in 2011, Swefilmer is currently Sweden’s most popular streaming movie and TV show site. Research last year from Media Vision claimed that 25% of all web TV viewing in the country was carried out on Swefilmer and another similar site, Dreamfilm.

According to Alexa the site is currently the country’s 100th most popular domain, but in the next three days it will shut down for good.


The news comes from the site’s admin, who has now been identified as local man Ola Johansson. He says that a surprise and unwelcome visit made it clear that he could not continue.

In a YouTube video posted yesterday, Johansson reports that earlier this month he was raided by the police who seized various items of computer equipment and placed him under arrest.

“It’s been a tough month to say the least. On 8 July, the police searched my home. I lost a computer, mobile phone and other things,” Johansson says.

While most suspects in similar cases are released after a few hours or perhaps overnight, Johansson says he was subjected to an extended detention.

“I got to sit in jail for 90 hours. When I came out on Monday [after being raided on Wednesday] the site had been down since Friday,” he explains.

The Swede said he noticed something was amiss at the beginning of July when he began experiencing problems with the Russian server that was used to host the site’s videos.

“It started when all the things from [the hosting service] disappeared. That’s the service where we had uploaded all the videos,” Johansson says.

While the site remains online for now, the Swede says that this Friday Swefilmer will close down for good. The closure will mark the end of an era but since he is now facing a criminal prosecution that’s likely to conclude in a high-profile trial, Johansson has little choice but to pull the plug.

The site’s considerable userbase will be disappointed with the outcome but there are others that are welcoming the crackdown.

“We are not an anonymous Hollywood studio,” said local director Anders Nilsson in response to the news.

“We are a group of film makers and we will not give up when someone spits in our faces by stealing our movies and putting them on criminal sites to share them in the free world. It is just as insulting as if someone had stolen the purely physical property.”

Aside from creating a gap in the unauthorized streaming market, the forthcoming closure of Swefilmer will have repercussions in the courtroom too, particularly concerning an important legal process currently playing out in Sweden.

Last November, Universal Music, Sony Music, Warner Music, Nordisk Film and the Swedish Film Industry filed a lawsuit in the Stockholm District Court against local ISP Bredbandsbolaget (The Broadband Company). It demands that the ISP blocks subscriber access to The Pirate Bay and also Swefilmer.

Even after negotiation Bredbandsbolaget refused to comply, so the parties will now meet in an October hearing to determine the future of website blocking in Sweden.

It is believed that the plaintiffs in the case were keen to tackle a torrent site and a streaming site in the same process, but whether Swefilmer will now be replaced by another site in the case is currently unknown. If it is, Dreamfilm could be the most likely candidate.

Source: TorrentFreak, for the latest info on copyright, file-sharing, torrent sites and the best VPN services.

TorrentFreak: RIAA Wants Domain Registrar to Expose ‘Pirate Site’ Owner

This post was syndicated from: TorrentFreak and was written by: Ernesto. Original post: at TorrentFreak

Despite an increased availability of legal options, millions of people still stream MP3s from unofficial sources. These sites are a thorn in the side of the RIAA.

Going after these pirate sites is a problem, according to the music group, as the operators are often unknown and hidden behind Whois privacy services. This is one of the reasons why the RIAA is supporting an ICANN proposal to limit domain name privacy.

But even under current laws and regulations it’s often possible to find out who runs a website, through a DMCA subpoena for example. And a recent case shows that the process isn’t too hard.

A few days ago the RIAA obtained a DMCA subpoena from the U.S. District Court for the District of Columbia ordering domain name registrar Dynadot to expose the personal details of a customer. These subpoenas are signed off by a clerk and don’t require any oversight from a judge.

With the subpoena in hand the RIAA asked Dynadot to identify the owner of the music streaming site Soundpiff, claiming that the site infringes the work of artists such as Eminem, Drake and Selena Gomez. Among other details, the registrar is ordered to share the IP address and email address of the site’s operator.

“We believe your service is hosting the below-referenced domain name on its network. The website associated with this domain name offers files containing sound recordings which are owned by one or more of our member companies and have not been authorized for this kind of use,” the RIAA writes.

In addition, the RIAA also urges Dynadot to review whether the site violates its terms of service as a repeat infringer, which means that it should be pulled offline.

“We also ask that you consider the widespread and repeated infringing nature of the site operator(s)’ conduct, and whether the site(s)’ activities violate your terms of service and/or your company’s repeat infringer policy.”

Soundpiff is a relatively small site that allows users to discover, stream and download music tracks. The audio files themselves appear to be sourced from the music hosting service Audioinbox, and are not hosted on the site’s servers.

“On our website you can find links that lead to media files. These files are stored somewhere else on the internet and are not a part of this website. [Soundpiff] does not carry any responsibility for them,” the website’s operator notes.

It is unclear what the RIAA is planning to do if it obtains the personal information of the site owners. In addition to suggesting that Dynadot should disconnect the site as a repeat infringer, the music group will probably issue a warning to the site’s operator.

For now, however, Soundpiff is still up and running.

This is not the first time that the RIAA has gone after similar sites in this way. Over the past several years the group has targeted several other download and streaming sites via their registrars or Whois privacy services. Some of these have closed, but others still remain online today.

RIAA’s subpoena to Dynadot

Source: TorrentFreak, for the latest info on copyright, file-sharing, torrent sites and the best VPN services.

Backblaze Blog | The Life of a Cloud Backup Company: Hard Drive Reliability Stats for Q2 2015

This post was syndicated from: Backblaze Blog | The Life of a Cloud Backup Company and was written by: Andy Klein. Original post: at Backblaze Blog | The Life of a Cloud Backup Company

Hard Drive Reliability Stats for Q2 2015
Each quarter, Backblaze updates our hard drive statistics with the latest data. As of the end of Q2 2015, there were 47,561 drives spinning in our datacenter. Subtracting out boot drives, drive models with fewer than 45 drives, and drives in testing systems, we are publishing data on 46,038 hard drives spread across 21 different models, from 1.5TB to 8.0TB in size.

All the hard drives in this review are used in the production systems in our datacenter. The environment is climate controlled and all drives are individually monitored. Each day we pull the available SMART stats reported by each and every drive. These stats are available for download from our Hard Drive Data web page.

There are two SMART stats of particular interest for most folks: hours in operation and drive temperature. The SMART 9 attribute allows us to compute the age of the drive, and the SMART 194 attribute allows us to determine that all drives are within their acceptable temperature range. Downloading the data will enable you to examine the SMART stats for every drive we used in this review.
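As a rough sketch of how those two attributes translate into the numbers discussed here — the temperature range below is an assumed example for illustration, not Backblaze’s published threshold:

```python
HOURS_PER_YEAR = 24 * 365


def drive_age_years(smart_9_raw: int) -> float:
    """SMART attribute 9 reports power-on hours; convert to years of service."""
    return smart_9_raw / HOURS_PER_YEAR


def temperature_in_range(smart_194_raw: int, low: int = 5, high: int = 45) -> bool:
    """SMART attribute 194 reports drive temperature in degrees Celsius."""
    return low <= smart_194_raw <= high
```

A drive reporting 43,800 power-on hours, for example, has been in service about five years — the age bracket of the oldest Seagate 1.5TB drives discussed below.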

Hard Drive Failure Rates

We’ll start by comparing the hard drive reliability stats for the January-June 2015 period with the stats from 2014:

Annual Hard Drive Failure Rates by Manufacturer

Trends in Hard Drive Failure Rates

The following table presents the cumulative Hard Drive reliability stats over time. This table can provide insights into failure rate trends as the drive population ages:

Hard Drive Failure Rates

What Is A Failed Hard Drive?

For Backblaze there are three reasons a drive is considered to have “failed”:

  1. The drive will not spin up or connect to the OS.
  2. The drive will not sync, or stay synced, in a RAID Array (see note below).
  3. The SMART stats we use show values above our thresholds.

Note: Backblaze Vaults do not use RAID. Instead, we use our open-sourced implementation of Reed-Solomon encoding to replace the function of RAID. As a result, these drives are not subject to RAID-sync errors. RAID-sync failures are only applicable to stand-alone Storage Pods.


The 4TB drives continue to rock, with both Seagate and HGST 4TB drives performing well. The Seagate 4TB drive has a current cumulative failure rate of 3.0% and has a street price of $131.58 each on Amazon. The HGST 4TB drive has a higher street price of $174.99 on Amazon, but a lower cumulative failure rate of 1.18%. Both drives have been in service for over a year and we currently own 17,000+ Seagate and 11,000+ HGST 4TB drives and continue to purchase more.

The failure rates of the Toshiba and Western Digital 4TB drives look respectable as well, but they are based on a very limited number of drives for each model. We’ve had trouble getting the Toshiba drives quoted to us in quantity, although there appears to be some movement on that front. Western Digital drives are almost always quoted to us at a higher price than other drive models. Until we can get a reasonable number of these drives, we can’t recommend either of them, although you may find them to be just fine for your personal use.
4TB Drive Failure Rates
For 6TB drives, the data is still not firm. We did manage to buy 450 Western Digital 6TB drives that currently have a 6.2% failure rate over the 8 months they’ve been in service, but their failure rate has fluctuated over time from 3.07% to 13.75%. The 495 6TB Seagate drives have a lower current failure rate, at 3.8%, but they have been in service less than 6 months. In summary, more time is needed to get a good fix on the failure rates of these 6TB drives.
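The failure-rate figures quoted throughout are annualized: failures divided by cumulative drive-years of service. A sketch with made-up numbers (Backblaze’s published spreadsheets work from per-drive daily records, not this simplified form):

```python
def annualized_failure_rate(failures: int, total_drive_days: float) -> float:
    """Failures per 100 drive-years of service, as a percentage."""
    drive_years = total_drive_days / 365.0
    return 100.0 * failures / drive_years


# e.g. 450 drives in service for ~243 days (8 months) each, with 19 failures:
# annualized_failure_rate(19, 450 * 243) is roughly 6.3%
```

This is also why small populations produce unstable numbers: with only 45 drives, a single failure moves the annualized rate by several percentage points, as the 8TB figures below show.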

Regarding the 8TB drives we have deployed, we need more drives and more time before we can recommend anything. We currently have 45 HGST 8TB drives (Helium) that have been deployed for about one quarter. The current annual failure rate of 5.3% is based on one drive failure, which is certainly not enough to draw any conclusions.

The 1.5TB and 2.0TB drives we have in production are slowly being replaced with larger capacity drives. The average age of the Seagate 1.5TB drives is over 5 years, making these some of the oldest drives in our data center. Their current annual failure rate is over 10%, so these drives are being replaced first. The HGST 2TB drives we have running have been exceptional. After 4+ years of service their cumulative failure rate is just 1.9%. If you’re interested in a 2TB drive, the HGST drive, model HDS722020ALA330, has been an all-star performer for us and they are still available on Amazon for $56.43.

As far as 3TB drives go, we have replaced nearly all of our Seagate 3TB drives. If you are interested in getting a 3TB drive, we’ve had a positive experience with the HGST 3TB drive (model HDS723030ALA640) which can still be found in stock on Amazon for $110.88. Their current failure rate in our environment is 1.83%. The HGST 3TB, model HDS5C3030ALA630, is a good choice as well, but quantities seem to be limited.

The post Hard Drive Reliability Stats for Q2 2015 appeared first on Backblaze Blog | The Life of a Cloud Backup Company.

TorrentFreak: Sony Settles Piracy Lawsuit With Russia’s Facebook

This post was syndicated from: TorrentFreak and was written by: Ernesto. Original post: at TorrentFreak

For several years VKontakte, or VK, has been branded as a piracy facilitator by copyright holders and even the U.S. Government.

In common with many user-generated sites, VK allows its millions of users to upload anything from movies and TV shows to their entire music collections. However, copyright holders often claim that Russia’s social network has failed to adopt proper anti-piracy measures.

Last year this resulted in a lawsuit filed at the Saint Petersburg and Leningrad Region Arbitration Court, in which Sony Music, Universal Music and Warner Music demanded countermeasures and compensation for the large scale copyright infringement VK allegedly facilitates.

The case is still ongoing, but as of this week Sony Music has dropped out. According to a local report Sony and VK signed a confidential settlement agreement to resolve the dispute.

No further details on the content of the deal have been published, but according to sources VK will upgrade its current music service.

Among other things, the social network will start charging mobile users for access to its official music platform. Desktop users will still have free access, but these views will be monetized through advertisements.

Both changes will be rolled out gradually after a thorough test phase.

The settlement with Sony Music is a breakthrough for the Russian equivalent of Facebook, but it doesn’t mean that all legal troubles are over.

The remaining cases against Universal Music and Warner Music haven’t been resolved yet. Together with Sony the companies demanded 50 million rubles ($830,000) in damages in their complaint last year, and VK is still on the hook for most of it.

Source: TorrentFreak, for the latest info on copyright, file-sharing, torrent sites and the best VPN services.

Lauren Weinstein's Blog: What Google’s New Changes to Google+ and YouTube REALLY Mean

This post was syndicated from: Lauren Weinstein's Blog and was written by: Lauren. Original post: at Lauren Weinstein's Blog

In a pair of blog posts today, Google announced major changes in the operations of their Google+ (G+) and YouTube services: There are a number of changes noted, but my executive summary would be that Google is ending the enforced connection of Google+ user profiles to other Google services, notably YouTube. The popular clickbait analysis appearing on many…

Linux How-Tos and Linux Tutorials: Must-Know Linux Commands For New Users

This post was syndicated from: Linux How-Tos and Linux Tutorials and was written by: Swapnil Bhartiya. Original post: at Linux How-Tos and Linux Tutorials

One of the beauties of Linux-based systems is that you can manage your entire system right from the terminal using the command line. The advantage of using the command line is that you can use the same knowledge and skills to manage any Linux distribution.

This is not possible through the graphical user interface (GUI) as each distro, and desktop environment (DE), offers its own user interfaces. To be clear, there are cases in which you will need different commands to perform certain tasks on different distributions, but more or less the concept and ideas remain the same.

In this article, we are going to talk about some of the basic commands that a new Linux user should know. I will show you how to update your system, manage software, manipulate files and switch to root using the command line on three major distributions: Ubuntu (which also includes its flavors and derivatives, and Debian), openSUSE and Fedora.

Let’s get started!

Keep your system safe and up-to-date

Linux is secure by design, but the fact is that all software has bugs and there could be security holes. So it’s very important to keep your system updated. Think of it this way: Running an out-of-date operating system is like being in an armored tank with the doors unlocked. Will the armor protect you? Anyone can enter through the open doors and cause harm. Similarly there can be un-patched holes in your OS which can compromise your systems. Open source communities, unlike the proprietary world, are extremely quick at patching holes, so if you keep your system updated you’ll stay safe.

Keep an eye on news sites to be aware of security vulnerabilities. If there is a hole discovered, read about it and update your system as soon as a patch is out. Either way you must make it a practice to run the update commands at least once a week on production machines. If you are running a complicated server then be extra careful and go through the changelog to ensure updates won’t break your customization.

Ubuntu: Bear one thing in mind: you must always refresh repositories (aka repos) before upgrading the system or installing any software. On Ubuntu, you can update your system with the following commands. The first command refreshes repositories:

sudo apt-get update

Once the repos are updated you can now run the system update command:

sudo apt-get upgrade

However this command doesn’t update the kernel and some other packages, so you must also run this command:

sudo apt-get dist-upgrade

openSUSE: If you are on openSUSE, you can update the system using these commands (as usual, the first command refreshes the repos):

sudo zypper refresh
sudo zypper up

Fedora: If you are on Fedora, you can use the ‘dnf’ command, which is roughly equivalent to zypper and apt-get. dnf refreshes the repository metadata automatically when it has expired, and ‘update’ is simply an alias for ‘upgrade’, so a single command updates the system:

sudo dnf upgrade

Software installation and removal

You can install only those packages which are available in the repositories enabled on your system. Every distro comes with some official or third-party repos enabled by default.

Ubuntu: To install any package on Ubuntu, first update the repo and then use this syntax:

sudo apt-get install [package_name]


For example, to install the GIMP image editor:

sudo apt-get install gimp

openSUSE: The command would be:

sudo zypper install [package_name]

Fedora: Fedora has dropped ‘yum’ and now uses ‘dnf’ so the command would be:

sudo dnf install [package_name]

The procedure to remove software is the same; just exchange ‘install’ with ‘remove’.

Ubuntu:

sudo apt-get remove [package_name]

openSUSE:

sudo zypper remove [package_name]

Fedora:

sudo dnf remove [package_name]
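
Since only the package-manager name changes between these distributions, the three install commands can be wrapped in a tiny helper. This is a hypothetical sketch for illustration (pkg_install_cmd is my own name, not part of any distro); it prints the command rather than running it:

```shell
#!/bin/sh
# Print the install command appropriate for a given distro.
# pkg_install_cmd is a hypothetical helper for illustration only.
pkg_install_cmd() {
    case "$1" in
        ubuntu|debian) echo "sudo apt-get install $2" ;;
        opensuse)      echo "sudo zypper install $2" ;;
        fedora)        echo "sudo dnf install $2" ;;
        *) echo "unsupported distro: $1" >&2; return 1 ;;
    esac
}

pkg_install_cmd fedora gimp   # prints: sudo dnf install gimp
```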

How to manage third-party software

There is a huge community of developers who offer their software to users. Different distributions use different mechanisms to make third party software available to their users. It also depends on how a developer offers their software to users; some offer binaries and others offer it through repositories.

Ubuntu heavily relies on PPAs (personal package archives) but, unfortunately, there is no built-in tool which can assist a user in searching PPAs. You will need to Google the PPA and then add the repository manually before installing the software. This is how you would add any PPA to your system:

sudo add-apt-repository ppa:<repository-name>

Example: Let’s say I want to add the LibreOffice PPA to my system. I would Google the PPA and then acquire the repo name from Launchpad, which in this case is “libreoffice/ppa”. Then add the PPA using the following command:

sudo add-apt-repository ppa:libreoffice/ppa

It will ask you to hit the Enter key in order to import the keys. Once it’s done, refresh the repos with the ‘update’ command and then install the package.

openSUSE has an elegant solution for third-party apps. You can visit the openSUSE software portal, search for the package and install it with one click; it will automatically add the repo to your system. If you want to add any repo manually, use this command:

sudo zypper ar -f url_of_the_repo name_of_repo

For example, to add a LibreOffice repository under the alias ‘LOF’:

sudo zypper ar -f url_of_the_LibreOffice_repo LOF

Then refresh the repo and install software:

sudo zypper refresh
sudo zypper install libreoffice

Fedora users can simply add RPMFusion (both free and non-free repos), which contains a majority of applications. In case you do need to add a repo manually, this is the command:

sudo dnf config-manager --add-repo url_of_the_repo

Some basic commands

I have written a few articles on how to manage files on your system using the CLI; here are some of the basic commands which are common across all distributions.

Copy files or directories to a new location:

cp path_of_file_1 path_of_the_directory_where_you_want_to_copy/

Copy all files from a directory to a new location (notice the slash and asterisk, which implies all files within that directory):

cp path_of_files/* path_of_the_directory_where_you_want_to_copy/

Move a file from one location to another (the trailing slash means inside that directory):

mv path_of_file_1 path_of_the_directory_where_you_want_to_move/

Move all files from one location to another:

mv path_of_directory_where_files_are/* path_of_the_directory_where_you_want_to_move/

Delete a file:

rm path_of_file

Delete a directory:

rm -r path_of_directory

Remove all content from a directory, leaving the directory itself intact:

rm -r path_of_directory/*
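
The copy, move and delete commands above can be tried safely in a throwaway directory; a minimal sketch, assuming a POSIX shell with mktemp available:

```shell
#!/bin/sh
set -e
tmp=$(mktemp -d)                    # scratch area so nothing real is touched
mkdir "$tmp/src" "$tmp/dst"
echo hello > "$tmp/src/a.txt"
echo world > "$tmp/src/b.txt"

cp "$tmp/src/"* "$tmp/dst/"         # copy all files from src into dst
mv "$tmp/dst/a.txt" "$tmp/"         # move one file up a level
rm -r "$tmp/dst/"*                  # empty dst but keep the directory itself

ls "$tmp"                           # lists: a.txt  dst  src
```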

Create new directory

To create a new directory, first enter the location where you want to create a directory. Let’s say you want to create a ‘foundation’ folder inside your Documents directory. Let’s change the directory using the cd (aka change directory) command:

cd /home/swapnil/Documents

(replace ‘swapnil’ with the username on your system)

Then create the directory with mkdir command:

mkdir foundation

You can also create a directory from anywhere, by giving the path of the directory. For example:

mkdir /home/swapnil/Documents/foundation

If you want to create parent-child directories (that is, directories within other directories), use the -p option. It will create all directories in the given path:

mkdir -p /home/swapnil/Documents/linux/foundation
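
A quick way to see the -p option in action without touching your home directory (a sketch using a temporary scratch directory):

```shell
#!/bin/sh
set -e
tmp=$(mktemp -d)
mkdir -p "$tmp/Documents/linux/foundation"   # creates all three levels at once
ls "$tmp/Documents/linux"                    # prints: foundation
```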

Become root

You either need to be root or the user should have sudo powers to perform some administrative tasks such as managing packages or making changes to the root directories or files. An example would be to edit the ‘fstab’ file which keeps a record of mounted hard drives. It’s inside the ‘etc’ directory which is within the root directory. You can make changes to this file only as a super user. In most distros you can become root by ‘switching user’. Let’s say on openSUSE I want to become root as I am going to work inside the root directory. You can use either command:

sudo su -


su -

That will ask for the password and then you will have root privileges. Keep one point in mind: never run your system as the root user unless you know what you are doing. Another important point: files or directories you create as root are owned by root. You may have to revert the ownership of those files, otherwise the services or users that rely on them won’t be able to access or write to them. To change ownership, this is the command:

sudo chown -R user:user /path_of_file_or_directory

You may often need this when you have partitions from other distros mounted on the system. When you try to access files on such partitions, you may come across a permission denied error. You can simply change the ownership of such partitions to access them. Just be extra careful, don’t change permissions or ownership of root directories.
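
Before and after a chown it is worth verifying who actually owns a file; `stat` can report just the owner and group (GNU coreutils syntax shown; a sketch in a scratch directory, where the chown is harmless because we already own everything in it):

```shell
#!/bin/sh
set -e
tmp=$(mktemp -d)
touch "$tmp/report.txt"

stat -c '%U:%G' "$tmp/report.txt"       # prints current owner:group of the file

# Recursively (re)assign ownership to the current user, as in the
# chown -R user:user example above:
chown -R "$(id -un)" "$tmp"
```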

These are the basic commands any new Linux user needs. If you have any questions or if you want us to cover a specific topic, please mention them in the comments below.

TorrentFreak: BREIN Hits 128 Sites Plus BitTorrent Uploaders & Moderator

This post was syndicated from: TorrentFreak and was written by: Andy. Original post: at TorrentFreak

Anti-piracy groups come in all shapes and sizes but one of the most famous is Dutch outfit BREIN. Although its mission has expanded in recent years, BREIN is generally viewed as one of the more aggressive groups doing Hollywood’s bidding in Europe. That has included taking on giants such as The Pirate Bay.

Unlike most groups operating in its field, each year BREIN publishes an overview of its anti-piracy enforcement actions. It’s a broad report that for operational and security reasons tends to leave out specific details. Nevertheless, the highlights of its initial 2015 report provide a useful insight to the outfit’s current focus.

In the first half of the year BREIN continued its threats to local webhosts who offer services to file-sharing sites. While some are less responsive than others, BREIN says 128 ‘illegal’ sites were taken down in this way. Almost two dozen were BitTorrent sites, 37 takedowns hit streaming video portals and two targeted cyberlockers distributing music. The remainder were linking sites used to spread content hosted on cyberlockers and Usenet.

Like its counterparts overseas, BREIN mentions the tendency of ‘pirate’ sites to attempt to hide their locations behind the services of U.S.-based Cloudflare. However, the anti-piracy group says that unmasking sites’ true locations can be achieved upon request.

“BREIN believes that the services provided by CloudFlare to illegal providers should be discontinued after notification by BREIN,” the group adds.

As previously reported, BREIN also took action against several sites helping to distribute Popcorn Time software. The anti-piracy group says it targeted seven in all, with two “fleeing abroad” to be pursued by other copyright enforcers.

Also in the first half of 2015, BREIN says it obtained a total of 12 ex-parte injunctions, i.e. court orders against alleged infringers who were not present to defend themselves during the proceedings.

Five of the orders concerned large uploaders, four connected to BitTorrent and the other to Usenet. BREIN said it also obtained an injunction against “an important moderator” on one of the “largest illegal BitTorrent sites”. In line with BREIN policy, the site itself is not named.

Five of the ex-parte orders related to those offering movies and TV shows without permission while two were connected to eBook offerings, one of which was a 13,500 title supplier. Two video game infringement injunctions were also obtained, one of which related to modification of consoles.

In action directed away from individuals, BREIN says it continued with its efforts to have infringing links delisted from Google. In the first half of the year the group says it sent 1.4 million infringement reports to Google, making 10 million reports since the program began in 2012.

BREIN also notes that it targeted various dedicated BitTorrent trackers with requests to “blacklist illegal infohashes”. Two of the trackers reportedly complied but a third “fled abroad” where it is now being pressured by another anti-piracy outfit.

Finally, BREIN reminds everyone that the long-running Pirate Bay blocking case is not over yet. After a big legal defeat in January 2014, BREIN is now taking the case all the way to the Supreme Court.

TorrentFreak: MPAA Emails Expose Dirty Media Attack Against Google

This post was syndicated from: TorrentFreak and was written by: Ernesto. Original post: at TorrentFreak

Late last year leaked documents revealed that the MPAA helped Mississippi Attorney General (AG) Jim Hood to revive SOPA-like censorship efforts in the United States.

In a retaliatory move Google sued the Attorney General, hoping to find out more about the secret plan. The company also demanded copies of internal communications from the MPAA which are now revealing how far the anti-Google camp planned to go.

Emails between the MPAA and two of AG Hood’s top lawyers include a proposal that outlines how the parties could attack Google. In particular, they aim to smear Google through an advanced PR campaign involving high-profile news outlets such as The Today Show and The Wall Street Journal.

With help from Comcast and News Corp, they planned to hire a PR firm to “attack” Google and others who resisted the planned anti-piracy efforts. To hide links to the MPAA and the AG’s office, this firm should be hired through a seemingly unaffiliated nonprofit organization, the emails suggest.

“This PR firm can be funded through a nonprofit dedicated to IP issues. The ‘live buys’ should be available for the media to see, followed by a segment the next day on the Today Show (David green can help with this),” the plan reads (pdf).

The Today Show feature would be followed up by a statement from a large Google investor calling on the company to do more to tackle the piracy problem.

“After the Today Show segment, you want to have a large investor of Google (George can help us determine that) come forward and say that Google needs to change its behavior/demand reform.”

In addition, a planted piece in the Wall Street Journal should suggest that Google’s stock would lose value if the company doesn’t give in to the demands.

“Next, you want NewsCorp to develop and place an editorial in the WSJ emphasizing that Google’s stock will lose value in the face of a sustained attack by AGs and noting some of the possible causes of action we have developed,” the plan notes.


Previously, the MPAA accused Google of waging an “ongoing public relations war,” but the above shows that the Hollywood group is no different.

On top of the PR-campaign the plan also reveals details on how the parties would taint Google before the National Association of Attorneys General.

Through a series of live taped segments they would show how easy it is for minors to pirate R-rated movies, buy heroin and order an assault weapon with the help of Google’s search engine.

Finally, the plan includes a “final step” where Attorney General Hood would issue a civil investigatory demand to Google.

In its court filing (pdf) Google uses the information above to argue that the AG’s civil investigatory demand was not the basis of a legitimate investigation. Instead, it was another tool pressuring the company to implement more stringent anti-piracy measures.

Given this new information, Google hopes that the court will compel Fox, NBC and Viacom to hand over relevant internal documents, as they were “plainly privy” to the secretive campaign.

It’s now up to the judge to decide how to proceed, but based on the emails above, the MPAA and the AG’s office have some explaining to do.

TorrentFreak: Geo-Blocking Caused Massive TV Piracy 20 Years Ago

This post was syndicated from: TorrentFreak and was written by: Andy. Original post: at TorrentFreak

Due to complex licensing agreements between content creators and distributors, movies and TV shows are often locked down to a specific region. A prime example is the U.S. edition of Netflix which offers a better selection than versions available elsewhere.

It’s a frustrating situation for consumers who are forced to jump through hoops to access the content they want to buy. The problem is amplified in Europe, where citizens of member states – sometimes located just a few miles apart – are regularly denied access to cross-border digital content.

This week, however, the European Commission sent a strong signal to the world’s largest movie studios and a powerful broadcaster that geo-restriction won’t be tolerated. Sky UK, Disney, NBC Universal, Paramount Pictures, Sony, Twentieth Century Fox and Warner Bros. were all put on notice with the launch of an EU antitrust investigation into the practice.

When one considers the history it’s difficult to feel sympathy for these companies. Just as geo-locking, blocking and local release windows fuel piracy today, licensing and geo-restriction fueled massive movie and TV show piracy two decades ago.

Dr Markus Kuhn currently works as a senior lecturer in Computer Science at the University of Cambridge. He made the headlines in 2010 when he was asked to analyze a controversial ‘bomb detector’ deployed in Iraq and concluded it could detect nothing. Twenty years ago, however, his skills were being deployed against content providers who simply refused to make their content widely available.

As a German citizen keen to view English language sci-fi content undubbed, Kuhn approached UK-based Sky TV in the early 1990s and offered to buy an official viewing smartcard from the company. Due to licensing conditions and their geo-blocking policy, Sky refused to sell him one. It was a move the company would later come to regret.

Faced with a completely inflexible market, Kuhn decided that if Sky wouldn’t provide its content for a price, then he would gain access to it for free. As a result the undergraduate began investigating the VideoCrypt encryption system used by Sky.

After what must’ve been hundreds of hours of work, in March 1994 Kuhn debuted Season7, a piece of decryption software using a simple hardware interface that would enable viewers across Europe to watch Sky programming for free.

“This software was primarily written for European Star Trek fans outside Great Britain who don’t have a chance to get a regular Sky subscription and have no other way of watching the undubbed version of their favorite [sci-fi] series,” Kuhn said in a June 1994 announcement.

“I don’t want to cause any harm to Sky and I even asked them for a regular subscription some time ago, but they refused to sell one to Germany. So they have to live with the consequences of attracting the interest of high-tech freaks to the technical details of their access control system.”

Despite Kuhn’s best intentions, what followed was a Sky viewing free-for-all. With Kuhn’s software being spread between bulletin board systems and passed around on floppy discs, electronics enthusiasts across Europe began making and selling so-called “Season interfaces” for users to plug into their video decoders.

For those lucky enough to own a computer (a PC with a 12 MHz i286 processor was required to run a Season setup) what followed were some magical times. Satellite TV was a luxury item for most families so watching Kuhn’s software do its work (decoding was displayed live on-screen) was a hypnotic and exciting experience.

Sadly for Sky, however, Kuhn’s tools didn’t remain isolated in Germany where the company was doing zero business. Soon, large numbers of potential Sky customers in the UK and across Europe were also enjoying the service for free. That was exactly what Sky wanted to avoid, but thanks to geo-blocking, that’s what it got.

Of course, like most hacks the fun eventually came to an end when Sky’s crypto experts threw a wrench in the works but the significance of Kuhn’s work lives on today. Rather than being driven by a ‘pirate’ ethos, Kuhn simply wanted to pay for a product that should have been freely available. When primitive licensing arrangements and restrictive business practices stopped him from doing so, Sky and its partners paid the price.

Today, more than two decades on, it seems that neither Sky nor its Hollywood allies have changed their ways. Still, it remains a possibility that the EU investigation launched this week will help them understand a thing or two about a free market while reminding them of Kuhn’s disruptive response to restriction 20 years ago.

TorrentFreak: MPAA Sues MovieTube Sites Over Mass Piracy

This post was syndicated from: TorrentFreak and was written by: Ernesto. Original post: at TorrentFreak

Unauthorized movie streaming sites have been a thorn in the side of Hollywood for many years, and yesterday the MPAA decided to take one of the most prominent players to court.

MPAA members 20th Century Fox, Columbia Pictures, Disney, Paramount, Universal and Warner Bros filed a lawsuit against a group of MovieTube affiliated websites, which operate from more than two dozen domain names.

In the complaint, filed at a New York District Court a few hours ago, the movie studios describe MovieTube as a business that’s designed and operated to promote copyright infringement for profit.

The MPAA’s complaint lists several popular MovieTube websites. These sites share hosting facilities and a similar design, and the studios believe that they are operated by the same people.

The websites in question are typical streaming sites, where users can watch videos and in some cases download the source files to their computers.

“Defendants, through the MovieTube Websites, aggregate, organize and provide embedded links to extensive libraries of Infringing Copies of Plaintiffs’ Works,” the complaint (pdf) reads.

“…users can watch Infringing Copies without leaving the MovieTube Websites. The MovieTube Websites even allow users, in some instances, to download Infringing Copies by clicking on a selection from a menu built into the video player software supplied by Defendants.”

According to the MPAA, MovieTube’s operators are well aware of the infringing nature of their site. On one of their Facebook pages they write that it’s not a problem that many films are pirated, since they are not bound by U.S. laws.


The complaint accuses MovieTube of various counts of copyright and trademark infringement. This means that the site’s operators face millions of dollars in statutory damages.

Perhaps more importantly, the MPAA is also demanding a broad preliminary injunction to make it virtually impossible for the operators to keep their sites online.

Among other things, the proposed measures would prevent domain registrars, domain registries, hosting companies, advertisers and other third-party outfits from doing business with the site.

If granted, MovieTube’s operators will have a hard time keeping the sites afloat, but it appears that the injunction may not even be needed.

At the time of writing all MovieTube domain names are unreachable. It is unclear whether the operators took this decision themselves, but for now the future of these sites looks grim.

The complaint lists more than two dozen MovieTube-affiliated domain names in full.

TorrentFreak: Massive Piracy Case Ends in Disappointment for Hollywood

This post was syndicated from: TorrentFreak and was written by: Andy. Original post: at TorrentFreak

After tracking down hundreds of Internet pirates over the years, a case that came to a head at the turn of the decade was shaping up to be one of the most important for anti-piracy group Antipiratbyrån (now Rights Alliance).

More often focused on lower-hanging fruit, Antipiratbyrån had their eyes on the “warez scene”, the people and infrastructure at the very top of the so-called “piracy pyramid” from where content trickles down to the masses.

In 2010 and following a lengthy investigation by Antipiratbyrån, police raided a topsite known as ‘Devil’. Topsites are top-secret, high-speed servers used by piracy release groups and their affiliates for storing and distributing unauthorized copyrighted content. When Devil went down dozens of servers were seized, together containing an estimated 250 terabytes of pirate content.

One man was also arrested but it took until 2014 for him to be charged with unlawfully making content available “intentionally or by gross negligence.”

According to police the 50-something year old man from Väsby, Sweden, acted “in consultation or in concert with other persons, supplied, installed, programmed, maintained, funded and otherwise administered and managed” the Devil file-sharing network. Before its shutdown, Devil was reported to service around 200 elite members.

Considering Antipiratbyrån’s links with the movie industry it came as no surprise that the charges included the unlawful making available of 2,250 mainly Hollywood movies. According to the prosecutor, those numbers made the case a record breaker.

“We have not prosecuted for this many movies in the past. There are many movies and large data set,” prosecutor Fredrik Ingblad commented earlier. “It is also the largest analysis of computers ever made in an individual case.”


Given the scale of the case it was expected that punishments would be equally harsh but things did not play out that way.

Despite admitting that he operated servers at his home and in central Stockholm and the court acknowledging that rightsholders had suffered great damage, the man has just been sentenced to probation and 160 hours of community service.

According to local reports, two key elements appear to have kept the man’s punishment down. Firstly, he cooperated with police in the investigation. Secondly – and this is a feature in many file-sharing prosecutions – the case simply dragged on for too long.

“It is worrying that the bottleneck at the police has affected the sentence,” says Sara Lindbäck of Rights Alliance.

Defense lawyer Henrik Olsson Lilja says that he’s pleased his client has avoided jail but adds that no decision has yet been made on any appeal. That being said, an end to the criminal case doesn’t necessarily mean the matter is completely over.

Last year Rights Alliance indicated that the six main studios behind the prosecution might initiate a civil action against the man and demand between $673,400 and $2.69m per title infringed, albeit on a smaller sample-sized selection of the 2,250 movies involved in the case.

No announcement has been made on that front and Rights Alliance did not respond to our requests for comment.

Schneier on Security: How an Amazon Worker Stole iPads

This post was syndicated from: Schneier on Security and was written by: schneier. Original post: at Schneier on Security

A worker in Amazon’s packaging department in India figured out how to deliver electronics to himself:

“Since he was employed with the packaging department, he had easy access to order numbers. Using the order numbers, he packed his order himself; but instead of putting pressure cookers in the box, he stuffed it with iPhones, iPads, watches, cameras, and other expensive electronics in the pressure cooker box. Before dispatching the order, the godown also has a mechanism to weigh the package. To dodge this, Bhamble stuffed equipment of equivalent weight,” an officer from Vithalwadi police station said. Bhamble confessed to the cops that he had ordered pressure cookers thrice in the last 15 days. After he placed the order, instead of, say, packing a five-kg pressure cooker, he would stuff gadgets of equivalent weight. After receiving delivery clearance, he would then deliver the goods himself and store them at his house. Speaking to mid-day, Deputy Commissioner of Police (Zone IV) Vasant Jadhav said, “Bhamble’s job profile was of goods packaging at Amazon’s warehouse in Bhiwandi.”

SANS Internet Storm Center, InfoCON: green: Patching in 2 days? – “tell him he’s dreaming”, (Fri, Jul 24th)

This post was syndicated from: SANS Internet Storm Center, InfoCON: green and was written by: SANS Internet Storm Center, InfoCON: green. Original post: at SANS Internet Storm Center, InfoCON: green

With all the patching you have been doing lately I thought it would be opportune to have a look at what can and can’t be done within two days. Why two days? Well, quite a few standards want you to, and I guess that is one reason, but the more compelling reason is that it takes less and less time for attacks to be weaponised in the modern world. Over the past year or so we have seen vulnerabilities released and, within hours, vulnerable systems being identified and in many cases exploited. That is probably a more compelling reason than “the standard says so”. Mind you, to be fair, the standard typically has the requirement in there for exactly that reason.

So why does patching instill such dread in many? It tends to be for a number of reasons; the main objections I come across are:

  • It might break something
  • It is internal, therefore we’re OK (AKA “we have a firewall”)
  • Those systems are critical and we can’t reboot them
  • The vendor won’t let us

It might break something. Yes, it could, absolutely. Most vendors have pushed patches that, despite their efforts to test prior to deployment, do break something. However, in reality the occurrences are low, and where possible you should have pushed the patch to test systems prior to production implementation anyway, so…

It is internal, therefore we’re OK (AKA “we have a firewall”). This has to be one of my favourites. Many of us have M&M environments: hard on the outside, nice and gooey soft on the inside. Which is exactly what attackers are looking for; it is one of the reasons why phishing is so popular. You get your malware executed by someone on the inside. To me this is not really a reason. I will let you use this reason to prioritise patching, sure, but that is assuming you then go and patch your internet-facing and other critical devices first.

Those systems are critical and we can't reboot them: Er, OK. Granted, you all have systems that are critical to the organisation, but if standard management functions cannot be performed on a system, then that in itself should probably have been raised as a massive risk to the organisation. There are plenty of denial of service vulnerabilities that will cause a reboot. If an internet facing system can't be rebooted, I suspect you might want to take a look at that on Monday. For internal systems, maybe it is time to segment them away from as much of the normal network as you possibly can to reduce the risk to those systems.

The vendor won't let us: Now, it is easy for me to say "get a different vendor", but that doesn't really help you much. I find that when you discuss exactly what you can or can't change, the vendor will often be quite accommodating. In fact most of the time they may not even be able to articulate why you can't patch. I've had vendors state you couldn't patch the operating system when all of their application was Java. Reliant on Java, sure; reliant on a Windows patch for IE, not so much. Depending on how important you are to them they may come to the party and start doing things sensibly.
If you still get no joy, then maybe it is time to move the system to a more secure network segment and limit the interaction between it and the rest of the environment, allowing work to continue but reducing the attack surface.

So the two days, achievable?

Tricky, but yes. You will need to have all your ducks lined up and the right tools and processes in place.

Let's have a look at the process first. Generally speaking the process will be pretty much the same for all of you. A high level process is outlined below; I suspect it is familiar to most of you.

Evaluate the patch. There are a number of different approaches that organisations take. Some organisations will just accept the vendor recommendation: if the vendor states the patch is critical then it gets applied, no questions asked. Well, one question: do we have the product/OS that needs to be patched? Yes? Then patch.

Other organisations take a more granular approach. They may not be quite as flexible in applying all patches as they are released, or rebooting systems may be a challenge (we have all come across systems that, when rebooted, have a hard time coming back). In those organisations the patch needs to be evaluated. In those situations I like using the CVSS scores, applying any modifiers that are relevant to the site to get a more accurate picture of the risk to my environment. If you go down this path, make sure you have sensible criteria in place for considering things critical. For example, if your process states that a CVSS score of 10 is critical and scores of 9.9 or lower are high, medium, low, etc., I'd probably be querying the thoughts behind that.

Many patches address a number of CVEs. I generally pick the highest scoring one and evaluate it. If it drops below the critical score we have specified, but is close, I may evaluate a second one; but if the score remains in the critical range I won't evaluate the rest of the CVEs. It is not like you can apply a patch that addresses multiple CVEs for only one of them.

You already know which servers, workstations and devices are critical to the organisation. If not, that might be a good task for Monday; junior can probably do most of this. Based on the patch you will know what it fixes, and most vendors provide some information on the attack vector and the expected impact.

You might want to check our site as well. On patch Tuesday we publish patching information and break it down into server and client patches as well as priorities (PATCH NOW means exactly that).
Based on these answers you may be able to reduce the scope of your patching and therefore make the 48 hours more achievable.

Once the patches are known, at least those that must be applied within the 48 hours, push them to your Dev/Test servers/workstations, assuming you have them. If you do not, you might need to designate some machines and devices, non-critical of course, as your test machines. Deploy the patches as soon as you can, monitor for odd behaviour and have people test. Some organisations have scripted the testing and OPS teams can run these scripts to ensure systems are still working. Others, not quite that automated, may need some assistance from the business to validate that the patches didn't break anything.

Once the tests have been completed, the patches can be pushed out to the next stage, a UAT or staging environment. If you do not have this, and you likely won't have it for all your systems, maybe set up a pilot group that is representative of the servers/devices you have in the environment. In the case of workstations, pick those that are representative of the various different user groups in the organisation. Again, test that things are still functioning as they should. These tests should be standardised where possible, quick and easy to execute and verify. Once confirmed working, the production roll out can be scheduled.

Schedule the non-critical servers that need patching first, just in case. By now the patches have been applied to a number of machines and passed at least two sets of tests, and prior to deployment to production you quickly checked the diary entry to see if people have experienced issues. Our readers are pretty quick in identifying potential issues (make sure you submit a comment if you come across one). Ready to go.
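The CVSS-based triage described above can be sketched in a few lines of shell. This is a hedged illustration only: the patch IDs, scores, file format and the 9.0 cut-off are made-up examples of a site policy, not a standard.

```shell
#!/bin/sh
# Hypothetical input: one line per patch, "<patch-id> <highest CVSS score>".
cat > patches.txt <<'EOF'
MS15-065 9.3
MS15-058 7.5
MS15-077 10.0
EOF

# Classify: anything at or above the (example) site threshold of 9.0 is
# treated as "PATCH-NOW"; everything else goes into the normal schedule.
awk '$2 >= 9.0 { print $1, "PATCH-NOW"; next }
             { print $1, "SCHEDULED" }' patches.txt
```

Feeding real vendor bulletin data into something like this gives you a defensible, repeatable first cut at which patches must hit the 48-hour clock.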

Assuming that you have the information, processes and resources in place to patch, you should be able to patch dev/test within the first four hours after receiving the patch information. Evaluating the information should not take long. Close to lunch time the UAT/staging/pilot groups can be patched (assuming testing is fine). Leave them overnight, perhaps. Once confirmed there are no issues, start patching production and schedule the reboots, if needed, for that evening.

Patched within two days, dreaming? Nope. Possible. Tough, but possible.

For those of you that have patching solutions in place, your risk level and the effort needed are probably lower than for others. By the time the patch has been released to you it has been packaged, and some rudimentary testing has already been done by the vendor; the errors that blow up systems will likely already have been caught. For those of you that do not have patching solutions, have a good look at what WSUS can do for you. With a third party add-on it can also be used to patch third party products. You may have to do some additional work to package things up, which may slow you down, but it shouldn't be too much.

It is always good to improve, so if you have hints, tips, good practices, or horrible disasters that you can share so others can avoid them, leave a comment.

Happy patching, we'll be doing more over the next few weeks.

Mark H – Shearwater

(c) SANS Internet Storm Center. Creative Commons Attribution-Noncommercial 3.0 United States License.

TorrentFreak: Project Free TV Streaming Site Shuts Down

This post was syndicated from: TorrentFreak and was written by: Andy. Original post: at TorrentFreak

pftv-logoWhile BitTorrent remains the most used peer-to-peer method of obtaining video content online, for the past several years the availability of cheap bandwidth has provided users with additional options.

Closely associated with sites such as YouTube, streaming is now one of the most popular ways of viewing content. Thanks to a player embedded in a webpage no special skills are required. As a result, streaming sites have popped up all over the web, with a sizable proportion dedicated to copyrighted content.

However, to be really useful all of this content needs to be findable and that’s where sites such as Project Free TV (PFTV) stepped in. Indexing popular content from all around the web, PFTV presented TV and movie content to the masses in an easily navigated interface with hit shows such as Game of Thrones and The Walking Dead just a click away.


As a result of its attention to detail, comprehensive database and a loyal following, PFTV grew to become one of the most popular sites of its kind. Its popularity attracted the attention of copyright holders too, with Hollywood having the site blocked in the UK during November 2013.

Last evening, however, it all came to an end. Instead of its familiar yellow, orange and purple homepage, PFTV now displays a single word: “Goodbye”

Since Project Free TV had become the go-to place for millions of TV fans, the site’s users were quick to react, with dozens taking to Twitter to express their disappointment.


But for many it is the site’s content discovery features that will be most missed.

“What I loved about Project Free TV was the aggregating feature of their daily TV show list,” a former user explains.

“While not a complete representation by far it had most of the shows I was interested in and introduced me to many excellent British and Australian shows I did not know of as well as plenty of new shows from the US I wasn’t aware of due to practically not ever seeing commercials for them on broadcast or cable.”

While most people enjoyed the site via its web presence, Project Free TV was also a massive hit with users of Kodi/XBMC. Thanks to a third party plugin, PFTV’s library could be enjoyed from within the software. Users now experience errors instead.

“It’s sad to see them go, our community is definitely in shock. However, it’s good to see that they closed while still on the top of their game, on their own terms,” a senior developer at TVAddons told TorrentFreak.

“There are a lot of other sites offering similar services and I’m confident that users who were dedicated supporters of Project Free TV will likely find a new home elsewhere in the coming days.”

Users searching for PFTV using Google will already find plenty of sites using the Project Free TV name but most are clones with reduced functionality. At best, those claiming to be the real deal aren’t being straight while others appear to be more interested in serving up malicious advertising than providing a decent service.

Project Free TV’s operator did not respond to our requests for comment.

Source: TorrentFreak, for the latest info on copyright, file-sharing, torrent sites and the best VPN services.

TorrentFreak: EU Starts Geo-Blocking Antitrust Case Against U.S Movie Studios

This post was syndicated from: TorrentFreak and was written by: Ernesto. Original post: at TorrentFreak

europe-flagDue to complicated licensing agreements many movies and TV-series are only available online in a few selected countries, often for a limited period.

The movie studios often restrict broadcasters and streaming services from making content widely available, a practice which the European Commission wants to stop.

Today the European Commission sent a statement of objections to Sky UK and six large US film studios: Disney, NBCUniversal, Paramount Pictures, Sony, Twentieth Century Fox and Warner Bros.

The Commission believes that the geo-restrictions the parties agreed upon are violating EU competition rules.

“European consumers want to watch the pay-TV channels of their choice regardless of where they live or travel in the EU,” says Margrethe Vestager, EU Commissioner in charge of competition policy.

“Our investigation shows that they cannot do this today, also because licensing agreements between the major film studios and Sky UK do not allow consumers in other EU countries to access Sky’s UK and Irish pay-TV services, via satellite or online.”

Under European rules consumers should be able to access the services of Sky and other providers regardless of where they are located. At the moment, most online services block access to content based on the country people are located in, something Sky and the movie studios also agreed on.

The geo-blocking practices are a thorn in the side of the European Commission, which now hopes to abolish these restrictions altogether.

In parallel to the antitrust investigation the EU’s governing body adopted the new Digital Single Market Strategy earlier this year. One of the main pillars of the new strategy is to provide consumers and businesses with better access to digital goods and services.

The Commission plans “to end unjustified geo-blocking,” which it describes as “a discriminatory practice used for commercial reasons.”

“I want to see every consumer getting the best deals and every business accessing the widest market – wherever they are in Europe,” Commission President Jean-Claude Juncker said at the time.

Sky UK and the six major studios will now have to respond to the concerns. The current statement of objections is only the start of the antitrust investigation; a final decision will take at least several months.

Source: TorrentFreak, for the latest info on copyright, file-sharing, torrent sites and the best VPN services.

Linux How-Tos and Linux Tutorials: How to Create a Streaming Media Server with Linux Using Plex

This post was syndicated from: Linux How-Tos and Linux Tutorials and was written by: Jack Wallen. Original post: at Linux How-Tos and Linux Tutorials

Figure 1: The Plex web interface.

Media is king—it has been for a very long time. It helps inform, enlighten, inspire, and entertain. More and more, users aren’t content with sitting in front of a television or desktop to enjoy their media. Or, collections have become so large, transferring them from drive to drive has become cumbersome at best. So what do you do when you want to take advantage of that massive media collection from multiple locations and your target devices either don’t have space for it or you don’t want to take the time to transfer it?

You set up a media streaming server.

With this type of server you can enjoy your media from a desktop, laptop, smart phone, or tablet. Naturally, each media streaming server offers different features and there are plenty of available servers (from bare bones to full-featured solutions). I want to demonstrate the process of setting up a streaming server using Plex. Why Plex? Because it is one of the most feature-rich media servers available that also happens to be cross-platform, has a built-in media player (and transcoder), and also offers apps for both Android and iOS. With Plex, you can also sign into your account and stream outside of your local network (and enjoy other features). Do note: Some of the Plex features require a premium membership fee.

With all of that in mind, let’s begin the process of setting up the Plex server. I will be installing on the latest release of openSUSE, but the server can be installed on Fedora, Ubuntu, Debian, and more.


Installation

You will be surprised to find out that installation is the easiest piece of the Plex puzzle—even on a non-Ubuntu distribution. Let’s walk through the steps (with a reminder, I’m using the latest version of openSUSE):

  1. Open up your web browser and download the installer that meets your needs.

  2. Open up your file manager (in my case, on openSUSE, that’d be Files).

  3. Change into the directory housing the downloaded installer.

  4. Double-click on the downloaded installer.

  5. Type your admin password and hit Enter.

  6. Allow the installation to complete.

That’s it! The installation of the Plex media server is done.

Starting the server

Now you must start up the Plex server. This is done manually at this point (reboot should start the server automatically); so open up a terminal window and su to the root user. Once you have admin power at your fingertips, issue the following command:

/etc/init.d/plexmediaserver start

The Plex server will start running in the background and you can connect to the web-based interface to set up your server.

Set up

You might not be surprised to know that the setup of the Plex server will take the most time. It’s not terribly difficult, but can be time consuming. I’ll cut to the chase and illustrate the important pieces of this puzzle.

The first thing you should do is sign up for a Plex account (even the free account) so you are able to take advantage of some of the extra features. You can sign up here. Once you have your account created, you’re ready to go.

Open up your browser (on the PC housing the Plex server) and point it to http://localhost:32400/web. You will be presented with the Plex web-based administration tool (Figure 1, above).

NOTE: You can also configure the Plex server from a remote machine. To do this, you simply replace localhost with the IP address of the Plex server.

The first thing you want to do is click on the user drop-down and select Sign In. When prompted, enter your Plex credentials and then click on the Config icon (the wrench to the left of the user drop-down). On the Settings page (Figure 2), click on the Server tab.

Figure 2: The Plex server Settings page.

You will see a configuration option called Friendly name. Enter a name for your Plex server and then click SAVE CHANGES.

Now it’s time to configure the locations of your libraries. This is one issue that warrants a tiny explanation. Yes, you can configure the root location of your Music and Video libraries; but there are guidelines for naming files and folders. Here are some tips:

  • Separate media into appropriate folders (Music, Movies, TV, etc)

  • Movies should be named as follows: [Movie_Name (Release_Year)].mp4

  • TV shows should include season and episode numbers in the name: [Show Name – sXXeYY].mp4

  • Each TV Show episode file should be stored in a set of folders as follows: ~/TV Shows/Show Name/Season/episodes (NOTE: For TV Shows, the folder structure is crucial.)

  • Music content should be stored as follows: ~/Music/Artist/Album/tracks

NOTE: Your root folder does not have to be housed under your home user directory.
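The guidelines above can be bootstrapped from the shell. This is only a sketch: the `plex-library` root and every title, artist and show name below are placeholders I've made up; what matters is the folder structure and the naming patterns.

```shell
#!/bin/sh
# Create one folder per media type under a hypothetical library root.
mkdir -p "plex-library/Movies"
mkdir -p "plex-library/TV Shows/Example Show/Season 01"
mkdir -p "plex-library/Music/Example Artist/Example Album"

# Movies: "Movie_Name (Release_Year).mp4"
touch "plex-library/Movies/Example Movie (2015).mp4"
# TV: season/episode numbers in the filename, nested Show/Season folders
touch "plex-library/TV Shows/Example Show/Season 01/Example Show - s01e01.mp4"
# Music: Artist/Album/tracks
touch "plex-library/Music/Example Artist/Example Album/01 - Example Track.mp3"
```

Point each Plex library at the matching subfolder and the scanner should pick the metadata up from these names.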

In order to be able to stream, you have to create Libraries for each type of media. Here’s how:

  1. Open up the Plex web admin tool

  2. Click on the Home button

  3. Click the + button associated with PLEXSERVER (or whatever friendly name you gave your Plex server) in the left navigation

  4. Click the icon for the media type associated with the library to be added (We’ll add a music library for example)

  5. Give the library a name, select a language, and click NEXT (Figure 3)


  6. Locate the media folder and, once you’ve selected it, click NEXT

  7. Select Create a basic music library and click NEXT

  8. Scan through the presented options on the last page (the defaults work fine) and click ADD LIBRARY.

Figure 3: Naming a Plex media library.

That’s it. After a refresh, your media should show up (depending on how large the folder is, this could take some time.)

Repeat this process for every type of media you want to add and your Plex streaming server is ready.

Using your server

Out of the box, you can point any desktop or laptop device on your network to the IP of the Plex server (in the form http://IP_ADDRESS:32400/web) and Plex will appear, ready to stream media (Figure 4).

Figure 4: The Plex server, ready to stream media.

To connect to your Plex streaming server from the mobile app is incredibly simple—you open the app, select your server from the dropdown, and your stream-able media will appear (Figure 5).

Figure 5: Plex running on a Verizon-branded Motorola Droid Turbo.

NOTE: Unless you pay the $4.99 license fee for the app, your media will be limited in playback (music will stop after 1 minute and all videos/images will be watermarked).

If you’re looking for one of the most feature-rich and well supported media streaming servers on the market, you can’t go wrong with Plex. Yes, there are plenty of other streaming servers available, but you’ll be hard-pressed to find one as robust and ready to serve.

SANS Internet Storm Center, InfoCON: green: Some more 0-days from ZDI, (Thu, Jul 23rd)

This post was syndicated from: SANS Internet Storm Center, InfoCON: green and was written by: SANS Internet Storm Center, InfoCON: green. Original post: at SANS Internet Storm Center, InfoCON: green

For those of us in the patching world, the last few weeks have not been fun. It seemed like there was a new critical issue almost every other day, almost certainly just after you finished the previous round of patching. I guess that is what happens when a hacking firm is breached.

Well, unfortunately I'm here to add to your woes. BK wrote in (thanks) to remind me that on the same day that Microsoft patched a critical issue, ZDI released four vulnerabilities that, whilst they may not quite reach critical based on their CVSS scores (in Microsoft world), will likely result in a patch for most systems (including Windows phone).

In this case all four were discovered in-house, disclosed to the vendor over 120 days ago and, as of release, unlikely to have exploits associated with them. That is, however, likely to change.

Mark H

(c) SANS Internet Storm Center. Creative Commons Attribution-Noncommercial 3.0 United States License.

Ksplice Blog: Fixing Security Vulnerabilities in Linux

This post was syndicated from: Ksplice Blog and was written by: michael.ploujnikov-Oracle. Original post: at Ksplice Blog

Security vulnerabilities are some of the hardest bugs to discover yet they can have the largest impact. At Ksplice, we spend a lot of time looking at security vulnerabilities and seeing how they are fixed. We use automated tools such as the Trinity syscall fuzzer and the Kernel Address Sanitizer (KASan) to aid our process. In this blog post we’ll go over some case studies of recent vulnerabilities and show you how you can avoid them in your code.

CVE-2013-7339 and CVE-2014-2678

These two are very similar NULL pointer dereferences when trying to bind an RDS socket without having an RDS device. This is an oversight that happens quite often in hardware-specific code in the kernel. It is easy for developers to assume that a piece of hardware always exists since all their dev machines have it, but that sometimes leads to other possible hardware configurations left untested. In this example the code makes a seemingly reasonable assumption that using RDS sockets without RDS hardware doesn’t really make sense.

The issue is pretty simple as we can see from this fix:

diff --git a/net/rds/ib.c b/net/rds/ib.c
index b4c8b00..ba2dffe 100644
--- a/net/rds/ib.c
+++ b/net/rds/ib.c
@@ -338,7 +338,8 @@ static int rds_ib_laddr_check(__be32 addr)
 	ret = rdma_bind_addr(cm_id, (struct sockaddr *)&sin);
 	/* due to this, we will claim to support iWARP devices unless we
 	   check node_type. */
-	if (ret || cm_id->device->node_type != RDMA_NODE_IB_CA)
+	if (ret || !cm_id->device ||
+	    cm_id->device->node_type != RDMA_NODE_IB_CA)
 		ret = -EADDRNOTAVAIL;

 	rdsdebug("addr %pI4 ret %d node type %d\n",

Generally we are allowed to bind an address without a physical device, so we can reach this code without any RDS hardware. Sadly, the code wrongly assumes that a device exists at this point and that cm_id->device is not NULL, leading to a NULL pointer dereference.

These types of issues are usually caught early in -next, as that exposes the code to various users and hardware configurations, but this one managed to slip through somehow.

There are many variations of the scenario where hardware-specific and other kernel code doesn't handle cases which "don't make sense". Another recent example is dlmfs: the kernel would panic when trying to create a directory on it, something that doesn't happen in regular usage of dlmfs.


This one is interesting and very difficult to stumble upon by accident. It’s a race condition that is only possible during the migration of huge pages between NUMA nodes, so the window of opportunity is *very* small. It can be triggered by trying to dump the NUMA maps of a process while its memory is being moved around. What happens is that the code trying to dump memory makes invalid memory accesses because it does not check the presence of the memory beforehand.

When we dump NUMA maps we need to walk memory entries using walk_page_range():

		/*
		 * Handle hugetlb vma individually because pagetable
		 * walk for the hugetlb page is dependent on the
		 * architecture and we can't handled it in the same
		 * manner as non-huge pages.
		 */
		if (walk->hugetlb_entry && (vma->vm_start <= addr) &&
		    is_vm_hugetlb_page(vma)) {
			if (vma->vm_end < next)
				next = vma->vm_end;
			/*
			 * Hugepage is very tightly coupled with vma,
			 * so walk through hugetlb entries within a
			 * given vma.
			 */
			err = walk_hugetlb_range(vma, addr, next, walk);
			if (err)
				break;
			pgd = pgd_offset(walk->mm, next);
			continue;
		}

When walk_page_range() detects a hugepage it calls walk_hugetlb_range(), which calls the proc’s callback (provided by walk->hugetlb_entry()) for each page individually:

static int walk_hugetlb_range(struct vm_area_struct *vma,
			      unsigned long addr, unsigned long end,
			      struct mm_walk *walk)
{
	struct hstate *h = hstate_vma(vma);
	unsigned long next;
	unsigned long hmask = huge_page_mask(h);
	pte_t *pte;
	int err = 0;

	do {
		next = hugetlb_entry_end(h, addr, end);
		pte = huge_pte_offset(walk->mm, addr & hmask);
		if (pte && walk->hugetlb_entry)
			err = walk->hugetlb_entry(pte, hmask, addr, next, walk);
		if (err)
			return err;
	} while (addr = next, addr != end);

	return 0;
}

Note that the callback is executed for each pte, even for those that are not present in memory (pte_present(*pte) would return false in that case). This is done by the walker code because it was assumed that callback functions might want to handle that scenario for some reason. In the past there was no way for a huge pte to be absent, but that changed when hugepage migration was introduced. During page migration, unmap_and_move_huge_page() removes huge ptes:

	if (page_mapped(hpage)) {
		try_to_unmap(hpage,
			TTU_MIGRATION|TTU_IGNORE_MLOCK|TTU_IGNORE_ACCESS);
		page_was_mapped = 1;
	}

Unfortunately, some callbacks were not changed to deal with this new possibility. A notable example is gather_pte_stats(), which tries to lock a non-existent pte:

        orig_pte = pte = pte_offset_map_lock(vma->vm_mm, pmd, addr, &ptl);

This can cause a panic if it happens during a tiny window inside unmap_and_move_huge_page().

Dumping NUMA maps doesn’t happen too often and is mainly used for testing/debugging, so this bug has lived there for quite a while and was made visible only recently when hugepage migration was added.

It’s also common that adding userspace interfaces to trigger kernel code which doesn’t get called often exposes many issues. This happened recently when the firmware loading code was exposed to userspace.


This one also falls into the category of "doesn’t make sense" because it involves repeated page faulting of memory that we marked as unwanted. When this happens shmem tries to remove a block of memory, but since it’s getting faulted over and over again shmem will hang waiting until it’s available for removal. Meanwhile other filesystem operations will be blocked, which is bad because that memory may never become available for removal.

When we’re faulting a shmem page in, shmem_fault() would grab a reference to the page:

static int shmem_fault(struct vm_area_struct *vma, struct vm_fault *vmf)
{
	...
	error = shmem_getpage(inode, vmf->pgoff, &vmf->page, SGP_CACHE, &ret);
	...
}

Because shmem_fallocate() holds i_mutex while waiting to free up that page, it can wait forever as the page keeps getting faulted back in. This, in turn, means that the filesystem is stuck waiting for shmem_fallocate() to complete.

Beyond that, punching holes in files and marking memory as unwanted are not common operations; especially not on a shmem filesystem. This means that those code paths are very untested.


This is a privilege escalation which was found using KASan. We’ve noticed that as a result of a call to a PPPOL2TP ioctl an uninitialized address inside a struct was being read. Further investigation showed that this is the result of a confusion about the type of the struct that was being accessed.

When we call setsockopt() from userspace on a PPPOL2TP socket, we'll end up in pppol2tp_setsockopt(), which will look at the level parameter to see if the sockopt operation is intended for PPPOL2TP or the underlying UDP socket:

   if (level != SOL_PPPOL2TP)
      return udp_prot.setsockopt(sk, level, optname, optval, optlen);

PPPOL2TP tries to be helpful here and allows userspace to set UDP sockopts rather than just PPPOL2TP ones. The problem here is that UDP‘s setsockopt expects a udp_sock:

int udp_lib_setsockopt(struct sock *sk, int level, int optname,
		       char __user *optval, unsigned int optlen,
		       int (*push_pending_frames)(struct sock *))
{
	struct udp_sock *up = udp_sk(sk);

But instead it’s getting just a sock struct.

It’s possible to leverage this struct confusion to achieve privilege escalation. We can overwrite the function pointer in the struct to point to code of our choice. Then we can trigger the execution of this code by making another socket operation. The piece of code that allowed for this vulnerability was added for convenience, but no one ever needed it, and it was never tested.


We hope that this exposition of straightforward and more subtle kernel bugs will remind us of the importance of looking at code from a new perspective, and encourage the developer community to contribute to and create new tools and methodologies for detecting and preventing bugs in the kernel.