Posts tagged ‘Other’

TorrentFreak: BT Starts Blocking Private Torrent Sites

This post was syndicated from: TorrentFreak and was written by: Ernesto. Original post: at TorrentFreak

Following a series of High Court orders, six UK ISPs are currently required to block subscriber access to dozens of the world’s largest torrent sites.

The latest order was issued last month after a complaint from the major record labels. It expands the UK blocklist by 21 torrent sites.

This weekend both BT and Sky implemented the new changes, making it harder for their subscribers to reach these sites. Interestingly, however, BT appears to have gone above and beyond the court order, limiting access to various other sites as well.

Over the past several days TorrentFreak has received reports from several users of private torrent sites who get an “error blocked” message instead of their favorite sites. These include the popular IPTorrents and Torrentday trackers, as well as a scene release site.

IPTorrents and Torrentday are significant targets. Although both sites require prospective users to obtain an invite from a current member (or from the site itself in exchange for cash), they have over a hundred thousand active users.

The error displayed when BT subscribers try to access the above URLs is similar to that returned when users try to access sites covered by High Court injunctions.

However, there is no known court decision that requires BT to block these URLs. In fact, no UK ISP has ever blocked a private torrent site before.

TF contacted BT’s press contact and customer service team but we have yet to receive a response to our findings. Meanwhile, several of the affected users are discussing on Facebook and Twitter how they can bypass the blockades.


It appears that for now IPTorrents is still accessible via https and via the site’s alternative .me and .ru domains. In addition, VPNs and proxy servers are often cited among suggested workaround techniques.

Whether the private torrent sites will remain blocked and on what grounds remains a mystery for now. We will update this article if BT sends us a response. BT users who spot more unusual blocks are encouraged to get in touch.

Source: TorrentFreak, for the latest info on copyright, file-sharing and anonymous VPN services.

TorrentFreak: ABS-CBN Sues Another 18 Sites Over TV and Movie Piracy

This post was syndicated from: TorrentFreak and was written by: Andy. Original post: at TorrentFreak

Back in August TF published a report on copyright-focused legal action initiated in the United States by ABS-CBN, the largest media and entertainment company in the Philippines.

The media giant filed a lawsuit at a federal court in Oregon looking for millions of dollars in damages from a local husband and wife. Their main target, Jeff Ashby, claimed he created several tiny websites so that his wife could enjoy entertainment from her home country. Lawyers for ABS-CBN viewed those sites rather differently.

Last month the case ended badly for the defendants. After branding Ashby a hardcore criminal and using its own news shows to paint him in a poor light, ABS-CBN hit their home run. The media giant reached a consent agreement with Ashby and the Oregon District Court ordered him to pay a mind-blowing $10 million in damages.

Here at TF we suspected that the $10m decision might be of value to ABS-CBN should they wish to begin suing other sites. After all, no one wants to get hit with a $10m bill so settlement offers below this amount might seem more attractive and become more easily arrived at. Sure enough, just weeks later ABS-CBN is back.

In an action filed in a Florida district court, ABS-CBN is now targeting 100 ‘Does’ and another 18 sites in a copyright and trademark infringement lawsuit. ABS-CBN says that in the United States it makes its content available through companies including Comcast, Time Warner Cable, DirecTV, Cox Communications, AT&T, Verizon and Charter to name just a few, but these ‘pirate’ services are undermining that commercial activity.

“Through their various websites, Defendants hold out to the public that they have ABS-CBN’s content, and re-broadcast ABS-CBN’s TV shows and movies over the Internet, in order to illegally profit from ABS-CBN’s intellectual property, without ABS-CBN’s consent,” court papers read.

“Further, Defendants control the organization and presentation of the content by themselves providing links to ABS-CBN shows and promote and advertise the content as ABS-CBN’s, including through the use of ABS-CBN’s marks; and stream the shows for users’ viewing through their websites.”

The media company also claims that the ‘pirate’ sites distribute malware, spyware and “other nefarious, malicious and harmful software….typically in the guise of software updates ‘needed’ by the viewer in order to enhance their viewing experience of Plaintiffs’ video content.”

Visits to a handful of the sites carried out by TF confirmed that some do indeed request the installation of a browser add-on, but when those are rejected the sites remain functional.

In order to end any infringement quickly, ABS-CBN is seeking temporary, preliminary, and permanent injunctions not only against the sites, but also anyone “acting in concert or participation” with them including Internet search engines, web hosts, domain name registrars, and domain name registries.

In respect of domains, ABS-CBN wants all domains put “On Hold” by their registries and then canceled, deleted or transferred “so that they may no longer be used for illegal purposes.”

On the copyright front the action seeks the maximum statutory damages from the defendants of $150,000 per infringement plus attorneys’ fees and costs. In respect of abuse of trademarks, ABS-CBN requests $2 million for each counterfeit trademark used.

Finally, the Philippines-based company demands that all funds generated by the pirate sites should be handed over to partially satisfy any judgment handed down.

It seems unlikely that any of the sites (listed below) will go head-to-head with ABS-CBN in court, so settlement agreements will have to be reached. Whether the media giant will begin publishing the details of yet more large settlements remains to be seen, but it’s doubtful that any will have $10m just sitting around.

Sites targeted by ABS-CBN in its latest lawsuit.


Source: TorrentFreak, for the latest info on copyright, file-sharing and anonymous VPN services.

Krebs on Security: Adobe Pushes Critical Flash Patch

This post was syndicated from: Krebs on Security and was written by: BrianKrebs. Original post: at Krebs on Security

For the second time this month, Adobe has issued a security update for its Flash Player software. New versions are available for Windows, Mac and Linux versions of Flash. The patch provides additional protection against a vulnerability that Adobe fixed earlier this year, for which attackers appear to have devised unique and active exploits.

Adobe recommends users of the Adobe Flash Player desktop runtime for Windows and Macintosh update to the latest version by visiting the Adobe Flash Player Download Center, or via the update mechanism within the product when prompted. Adobe Flash Player for Linux has been updated as well.

According to Adobe, these updates provide additional hardening against CVE-2014-8439, which was fixed in a Flash patch that the company released in October 2014. The bulletin for this update is here. Finnish security firm F-Secure says it reported the flaw to Adobe after receiving information from independent researcher Kafeine that indicated the vulnerability was being exploited in-the-wild by an exploit kit (malicious software designed to be stitched into hacked Web sites and foist malware on visitors via browser flaws like this one).

To see which version of Flash you have installed, check this link. IE10/IE11 on Windows 8.x and Chrome should auto-update their versions of Flash.

The most recent versions of Flash are available from the Flash home page, but beware potentially unwanted add-ons, like McAfee Security Scan. To avoid this, uncheck the pre-checked box before downloading, or grab your OS-specific Flash download from here.

Windows users who browse the Web with anything other than Internet Explorer may need to apply this patch twice, once with IE and again using the alternative browser (e.g. Firefox or Opera).


SANS Internet Storm Center, InfoCON: green: Guest diary: Detecting Suspicious Devices On-The-Fly, (Tue, Nov 25th)

This post was syndicated from: SANS Internet Storm Center, InfoCON: green and was written by: SANS Internet Storm Center, InfoCON: green. Original post: at SANS Internet Storm Center, InfoCON: green

If you apply classic hardening rules (keep patch levels current, use an AV, enable the firewall, and use them with due diligence), modern operating systems are more and more difficult to compromise today. Extra tools like EMET can also raise the bar. On the other side, networks are more and more populated with unknown/personal devices, or devices which provide multiple facilities like storage (NAS), printers (MFP), VoIP, IP cameras, etc.

Being easily compromised, they become a very good target from which to pivot into the network. They run out-of-the-box: just plug in the network/power cables and they are ready to go! A classic vulnerability management process will detect such devices, but you still risk missing them if you only run a monthly scan! To catch new devices on the fly and to get an immediate idea of their attack surface (example: is there a backdoor present?), I'm using the following toolbox: Arpwatch, Nmap and OSSEC as the conductor.

Arpwatch is a tool for monitoring ARP traffic on a LAN. It can detect new MAC addresses or pairing changes (IP/MAC). Nmap is the best-known port scanner, and OSSEC is a log management tool with many features, like a built-in HIDS.

A first piece of good news is that Arpwatch log entries are processed by default in OSSEC. OSSEC has a great feature called Active-Response which allows you to trigger actions (read: execute scripts) under specific conditions. In our case, an Active-Response entry ties the Arpwatch rules to an Nmap scan script.
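As a hedged sketch, the relevant ossec.conf entries might look like the following; the command name and script filename are illustrative, while the agent and rule IDs are the ones described below:

```
<!-- Declare the script OSSEC may run (name and filename are illustrative) -->
<command>
  <name>nmap-scan</name>
  <executable>nmap-scan.sh</executable>
  <expect>srcip</expect>
</command>

<!-- Run it on agent 001 when the Arpwatch rules match -->
<active-response>
  <command>nmap-scan</command>
  <location>defined-agent</location>
  <agent_id>001</agent_id>
  <rules_id>7201,7202</rules_id>
</active-response>
```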

The above configuration specifies that the script will be executed with the argument srcip (reported by Arpwatch) on agent 001 when rule 7201 or 7202 matches (when a new host or a MAC address change is detected). The script is based on the existing active-response scripts and spawns an Nmap scan:

nmap -sC -O -oG - -oN ${PWD}/../logs/${IP}.log ${IP} | grep "Ports:" >> ${PWD}/../logs/gnmap.log

This command will output interesting information in grepable format to the gnmap.log file: the open ports (if any) of the detected IP like in the example below. One line per host will be generated:

Host: ( Ports: 22/open/tcp//ssh///, 80/open/tcp///, 3306/open/tcp/// …
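For illustration only (OSSEC parses this format natively, as noted below), a few lines of Python can pull the open ports out of such a grepable line:

```python
def parse_gnmap_ports(line):
    """Extract (port, protocol) pairs for open ports from one line
    of Nmap's grepable (-oG) output."""
    _, ports_field = line.split("Ports:", 1)
    open_ports = []
    for entry in ports_field.split(","):
        # gnmap port entries look like: port/state/protocol//service///
        fields = entry.strip().split("/")
        if len(fields) >= 3 and fields[1] == "open":
            open_ports.append((int(fields[0]), fields[2]))
    return open_ports

line = "Host: ( Ports: 22/open/tcp//ssh///, 80/open/tcp///, 3306/open/tcp///"
print(parse_gnmap_ports(line))  # [(22, 'tcp'), (80, 'tcp'), (3306, 'tcp')]
```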

OSSEC is a wonderful tool and can decode this by default. Just configure the gnmap.log as a new events source:
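A sketch of the events source declaration in ossec.conf; the log path is an assumption based on where the scan script writes its output:

```
<localfile>
  <!-- OSSEC ships a decoder for Nmap's grepable output -->
  <log_format>nmapg</log_format>
  <location>/var/ossec/logs/gnmap.log</location>
</localfile>
```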

And new alerts will be generated:

2014 Oct 27 17:54:23 (shiva)
Rule: 581 (level 8) – Host information added.
Host: (, open ports: 22(tcp) 80(tcp) 3306(tcp)

By using this technique, you will immediately detect new hosts connected to the network (or when an IP address is paired with a new MAC address) and you'll get the list of services running on them, as well as the detected operating system (if fingerprinting is successful). Happy hunting!

Xavier Mertens

(c) SANS Internet Storm Center. Creative Commons Attribution-Noncommercial 3.0 United States License.

TorrentFreak: Google Asked to Censor Three Million Pirate Bay URLs

This post was syndicated from: TorrentFreak and was written by: Ernesto. Original post: at TorrentFreak

Despite the criminal prosecution of The Pirate Bay four, the notorious torrent site remains available to the public at large.

TPB is set up to make it especially difficult for law enforcement to take it down, so copyright holders have to turn to third parties to address the threat.

One of the main strategies is to ask Google and other search engines to remove infringing Pirate Bay URLs from their search results.

Google in particular is heavily targeted and this week the number of URLs submitted to Google reached the three million mark. Nearly all of these links have indeed been removed and can no longer be accessed through search results.

The chart below shows the number of links that have been submitted per week. There is a sharp decline towards the end of 2013 when The Pirate Bay used another domain name. The requests increased again in December when the torrent site switched back.

3 Million Pirate Bay URLs reported

While most of the reported links do indeed point to copyrighted material, some non-infringing pages have been removed as well.

Paramount Pictures, for example, asked to remove this blog post where a comment mentions “the beast of hercules,” not the Hercules movie. Similarly, TPB’s Doodles page is gone because an adult entertainment company confused it with Kelly Madison’s “Yankee Doodle Dame”.

In total, the three million URLs were submitted in 135,486 separate takedown notices, averaging more than 22 links per takedown request. A staggering number, but one that pales in comparison to other sites.
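As a quick sanity check on those numbers:

```python
notices = 135_486   # separate takedown notices
urls = 3_000_000    # Pirate Bay URLs reported to Google

# Average links per notice, as stated above
print(round(urls / notices, 1))  # 22.1
```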

Looking at the list of domains for which Google received the most URL removal requests, The Pirate Bay is currently listed in 23rd place. The top spot goes to a site with close to 13 million URLs; the first torrent site in the list comes in 8th with 5.4 million URLs.

For The Pirate Bay the reduced availability in Google is not much of a problem. Previously the Pirate Bay team informed TorrentFreak that they stopped relying on search engines as a traffic source a long time ago.

And indeed, despite the censored pages The Pirate Bay’s traffic has continued to grow. Even today the site remains among the 100 most visited websites on the Internet.

Source: TorrentFreak, for the latest info on copyright, file-sharing and anonymous VPN services.

lcamtuf's blog: afl-fuzz: crash exploration mode

This post was syndicated from: lcamtuf's blog and was written by: Michal Zalewski. Original post: at lcamtuf's blog

One of the most labor-intensive portions of any fuzzing project is the work needed to determine if a particular crash poses a security risk. A small minority of all fault conditions will have obvious implications; for example, attempts to write or jump to addresses that clearly come from the input file do not need any debate. But most crashes are more ambiguous: some of the most common issues are NULL pointer dereferences and reads from oddball locations outside the mapped address space. Perhaps they are a manifestation of an underlying vulnerability; or perhaps they are just harmless non-security bugs. Even if you prefer to err on the side of caution and treat them the same, the vendor may not share your view.

If you have to make the call, sifting through such crashes may require spending hours in front of a debugger – or, more likely, rejecting a good chunk of them based on not much more than a hunch. To help triage the findings in a more meaningful way, I decided to add a pretty unique and nifty feature to afl-fuzz: the brand new crash exploration mode, enabled via -C.

The idea is very simple: you take a crashing test case and give it to afl-fuzz as a starting point for the automated run. The fuzzer then uses its usual feedback mechanisms and genetic algorithms to see how far it can get within the instrumented codebase while still keeping the program in the crashing state. Mutations that stop the crash from happening are thrown away; so are the ones that do not alter the execution path in any appreciable way. The occasional mutation that makes the crash happen in a subtly different way will be kept and used to seed subsequent fuzzing rounds later on.
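In practice an invocation might look like the sketch below; the directory names, the target binary, and the way it takes its input are all assumptions about your setup:

```shell
# Seed the input directory with nothing but the crashing test case
mkdir crash_in
cp crashing_testcase crash_in/

# -C switches afl-fuzz into crash exploration mode; mutations that
# stop the target from crashing are thrown away rather than kept
afl-fuzz -C -i crash_in -o explore_out -- ./target_binary @@
```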

The beauty of this mode is that it very quickly produces a small corpus of related but somewhat different crashes that can be effortlessly compared to pretty accurately estimate the degree of control you have over the faulting address, or to figure out whether you can get past the initial out-of-bounds read by nudging it just the right way (and if the answer is yes, you probably get to see what happens next). It won’t necessarily beat thorough code analysis, but it’s still pretty cool: it lets you make a far more educated guess without having to put in any work.

As an admittedly trivial example, let’s take a suspect but ambiguous crash in unrtf, found by afl-fuzz in its normal mode:

unrtf[7942]: segfault at 450 ip 0805062b sp bf957e60 error 4 in unrtf[8048000+1c000]

When fed to the crash explorer, the fuzzer took just several minutes to notice that by changing {\cb-44901990 in the converted RTF file to printable representations of other negative integers, it could quickly trigger faults at arbitrary addresses of its choice, corresponding mostly-linearly to the integer set:

unrtf[28809]: segfault at 88077782 ip 0805062b sp bff00210 error 4 in unrtf[8048000+1c000]
unrtf[26656]: segfault at 7271250 ip 0805062b sp bf957e60 error 4 in unrtf[8048000+1c000]

Given a bit more time, it would also almost certainly notice that choosing values within the mapped address space gets it past the crashing location and permits even more fun. So, automatic exploit writing next?

Errata Security: That wraps it up for end-to-end

This post was syndicated from: Errata Security and was written by: Robert Graham. Original post: at Errata Security

The defining feature of the Internet back in 1980 was “end-to-end”, the idea that all the intelligence was on the “ends” of the network, and not in middle. This feature is becoming increasingly obsolete.

This was a radical design at the time. Big corporations and big government still believed in the opposite model, with all the intelligence in big “mainframe” computers at the core of the network. Users would just interact with “dumb terminals” on the ends.

The reason the Internet was radical was the way it gave power to the users. Take video phones, for example. AT&T had been promising this since the 1960s, as the short segment in “2001: A Space Odyssey” showed. However, getting that feature to work meant replacing all the equipment inside the telephone network. Telephone switches would need to know the difference between a normal phone call and a video call. Moreover, there could be only one standard, worldwide, so that calling Japan or Europe would work with their video telephone systems. Users were powerless to develop video calling on their own — they would have to wait for the big telecom monopolies to develop it, however long it took.

That changed with the Internet. The Internet carries packets without knowing their content. Video calling with Facetime or Skype or LINE is just an app, from your iPhone or Android or PC. People keep imagining new applications for the Internet every day, and implement them, without having to change anything in core Internet routing hardware.

I’ve used Facetime, Skype, and LINE to talk to people in Japan. That’s because there is no real international standard for video calling. Each person I call requires me to install whichever app they are using. Traditional thinking is that government ought to create standards, so that every app would be compatible with every other app, so that I could Skype from Windows to somebody’s iPhone using Facetime. This tradition is nonsense. If we waited for government standards, it’d take forever. Teenagers who heavily use video today would be grown up with kids of their own before government got around to creating the right standard. Lack of standards means freedom to innovate.

Such freedom was almost not the case. You may have heard of something called the “OSI 7 Layer Model”. Everything you know about that model is wrong. It was an attempt by Big Corporations and Big Government to enforce their model of core-centric networking. It demanded such things as a “connection oriented network protocol”, meaning smart routers rather than the dumb ones we have today. It demanded that applications be standardized, so that there would be only one video conferencing standard, for example. Governments in the US, Japan, and Europe mandated that the computers they bought support OSI-conformant protocols. (The Internet’s TCP/IP protocols do not conform to the OSI model.) Such rules were on the books into the late 1990s dot-com era, when many in government still believed that the TCP/IP Internet was just a brief experiment on the way to a Glorious Government OSI Internetwork.

The Internet did have standards, of course, but they were developed in the opposite manner. Individuals innovated first, on the ends of the network, developing apps. Only when such apps became popular did they finally get documented as a “standard”. In other words, Internet standards were more de facto than de jure. People innovated first, on their own ends of the network, and the infrastructure and standards caught up later.

But here’s the thing: the Internet ideal of end-to-end isn’t perfect, either. There are reasons why not all innovation happens on the ends.

Take your home network as an example. The way your home likely works is that you have a single home router with cable/fiber/DSL on one side talking to the Internet, and WiFi on the other side talking to the devices in your home. Attached to your router you have a desktop computer, a couple notebooks, an iPad, your phones, an Xbox/Playstation, and your TV.

In the true end-to-end model, all these devices would be on the Internet directly — that they could be “pinged” from the Internet. In today’s reality, though, that’s not the way things work. Your home router is a firewall. It blocks incoming connections, so that devices in your home can connect outwards, but nothing on the Internet can connect inwards. This fundamentally breaks the ideal of end-to-end, as a smart device sits in the network controlling access to the ends.

This is done for two reasons. The first is security, so that hackers can’t hack the devices in your home. Blocking inbound traffic blocks 99% of hacker attacks against devices.

The second reason for smart home routers is the well-known limitation on Internet addresses: there are only 4 billion of them. However, there are more than 4 billion devices connected to the Internet. To fix this, your home router does address translation. Your router has only a single public Internet address. All the devices in your home have private addresses that wouldn’t work on the Internet. As packets flow in/out of your home, your router transparently changes the private addresses in the packets into the single public address.

Thus, when you google “what’s my IP address”, you’ll get a different address than your local machine. Your machine will have a private address like 10.x.x.x or 192.168.x.x, but servers on the Internet won’t see that — they’ll see the public address you’ve been assigned by your ISP.
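Python’s standard ipaddress module knows these reserved private ranges, which makes for a quick illustration (the specific addresses are just examples):

```python
import ipaddress

# RFC 1918 private addresses: never routed on the public Internet
print(ipaddress.ip_address("192.168.1.10").is_private)   # True
print(ipaddress.ip_address("10.0.0.5").is_private)       # True

# A public address, the kind a server on the Internet sees you as
print(ipaddress.ip_address("93.184.216.34").is_private)  # False
```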

According to Gartner, nearly 1 billion smartphones were sold in 2013. These are all on the Internet. That represents a quarter of the Internet address space used up in only a single year. Yet, virtually none of them are assigned real Internet addresses. Almost all of them are behind address translators — not the small devices like you have in your home, but massive translators that can handle millions of simultaneous devices.

The consequence is this: there are more devices with private addresses, that must go through translators, than there are devices with public addresses. In other words, less than 50% of the Internet is end-to-end.

The “address space exhaustion” of traditional Internet addresses inspired an update to the protocol to use larger addresses, known as IPv6. It uses 128-bit addresses, or 4 billion times 4 billion times 4 billion times 4 billion. This is enough to assign a unique address to all the grains of sand on all the beaches on Earth. It’s enough to restore end-to-end access to every device on the Internet, times billions and billions.
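The arithmetic checks out: a 128-bit address space is exactly the 32-bit space raised to the fourth power.

```python
ipv4_space = 2 ** 32    # ~4.29 billion addresses
ipv6_space = 2 ** 128   # 4 billion times 4 billion times 4 billion times 4 billion

assert ipv6_space == ipv4_space ** 4
print(ipv6_space)  # 340282366920938463463374607431768211456
```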

My one conversation with Vint Cerf (one of the key Internet creators) was over this address space issue. Back in 1992, every Internet engineer knew for certain that the Internet would run out of addresses by around the year 2000. Every engineer knew this would cause the Internet to collapse. At the IETF meeting, I tried to argue otherwise. I used the Simon-Ehrlich Wager as an analogy. Namely, the 4 billion addresses weren’t a fixed resource, because we would become increasingly efficient at using them. For example, “dynamic” addresses would use space more efficiently, and translation would reuse addresses.

Cerf’s response was the tautology “but that would break the end-to-end principle”.

Well, yes, but no such principle should be a straitjacket. The end-to-end principle is already broken by hackers. Even with IPv6, when all your home devices have a public rather than private address on the Internet, you still want a firewall breaking the end-to-end principle by blocking inbound connections. Once you’ve decided to firewall a network, it no longer matters whether it’s using IPv6 or address translation of private addresses. Indeed, address translation is better for firewalling, as it defaults to “fail closed”. That means if a failure occurs, all communication is blocked. With IPv6, firewalls become “fail open”, where failures allow communication to continue.

Firewalls are only the start in breaking end-to-end. It’s the “cloud” where we see a radical reversion back to old principles.

Your phone is no longer a true “end” of the network. Sure, your phone has a powerful processor that’s faster than supercomputers of the last decade, but that power is used primarily for display, not for computation. Your data and computation are instead handled in the cloud. Indeed, when you lose or destroy your phone, you simply buy a new one and “restore” it from the cloud.

Thus, we are right back to the old world of smart core network with “mainframes”, and “dumb terminals” on the ends. That your phone has supercomputer power doesn’t matter — it still does just what it’s told by the cloud.

But the last nail in the coffin to the “end-to-end” principle is the idea of “net neutrality”. While many claim it’s a technical concept, it’s just a meaningless political slogan. Congestion is an inherent problem of the Internet, and no matter how objectively you try to solve it, it’ll end up adversely affecting somebody — somebody who will then lobby politicians to rule in their favor. The Comcast-NetFlix issue is a good example where the true technical details are at odds with the way this congestion issue has been politicized. Things like “fast-lanes” are everywhere, from content-delivery-networks to channelized cable/fiber. Rhetoric creates political distinctions among various “fast-lanes” when there are no technical distinctions.

This politicization of the Internet ends the personal control over the Internet that was promised by end-to-end. Instead of being able to act first and asking for forgiveness later, you must first wait for permission from Big Government. Instead of being able to create your own services, you must wait for Big Corporations (the only ones that can afford lawyers to lobby government) to deliver those services to you.


We aren’t going to regress completely to the days of mainframes, of course, but we’ve given up much of the territory of individualistic computing. In some ways, this is a good thing. I don’t want to manage my own data, losing it when a hard drive crashes because I forgot to back it up. In other ways, it’s a bad thing. The more we regulate the Internet to ensure good things, the more we stop innovations that don’t fit within our preconceived notions. Worse, the more it’s regulated, the more companies have to invest in lobbying the government for favorable regulation, rather than developing new technology.

Krebs on Security: Spam Nation Book Tour Highlights

This post was syndicated from: Krebs on Security and was written by: BrianKrebs. Original post: at Krebs on Security

Greetings from sunny Austin, Texas, where I’m getting ready to wrap up a week-long book tour that began in New York City, then blazed through Chicago, San Francisco, and Seattle. I’ve been trying to tweet links to various media interviews about Spam Nation over the past week, but wanted to offer a more comprehensive account and to share some highlights of the tour.

For three days starting last Sunday, I was in New York City — doing a series of back-to-back television and radio interviews. Prior to leaving for New York, I taped television interviews with Jeffrey Brown at the PBS NewsHour; the first segment delves into some of the points touched on in the book, and the second piece is titled “Why it’s harder than you think to go ‘off the grid’.”


On Monday, I was fortunate to once again be a guest on Terry Gross‘s show Fresh Air, which you can hear at this link. Tuesday morning began with a five-minute appearance on CBS This Morning, which included a sit-down with Charlie Rose, Gayle King and Norah O’Donnell. Later in the day, I was interviewed by Marketplace Tech Report and MSNBC’s The Cycle, as well as the Tavis Smiley show. Wednesday was a mercifully light day, with just two interviews: KGO-AM and the Jim Bohannon Radio Show.

Thursday’s round of media appearances began at around sunrise in the single-digit temperature Chicago suburbs. My driver from the hotel to all of these events took me aback at first. Roxanna was a petite blonde from Romania who could have just as easily been a supermodel. I thought for a moment someone was playing a practical joke when I first heard her “Gud mornink Meester Krebs” in an Eastern European accent upon stepping into her Town Car, but Roxanna was a knowledgeable driver who got us everywhere on time and didn’t take any crap from anyone on the road.

The first of those interviews was a television segment for WGN News and a taped interview with TouchVision, followed by my first interview in front of a studio audience at Windy City Live. The guest who went on right before me was none other than the motivational speaker/life coach Tony Robbins, who is a tough act to follow and was also on the show to promote his new book. At six feet seven inches, Robbins is a larger-than-life guy whose mere presence almost took up half the green room. Anyway, Mr. Robbins had quite the security detail, so I took this stealthie of Tony as he was confined to the makeup chair prior to his appearance.

On Thursday afternoon, after an obligatory lunch at the infamous Billy Goat burger joint (the inspiration for the “Cheezborger, cheezborger, cheezborger” Saturday Night Live skit) I visited the Sourcebooks office in Naperville, met many of the folks who worked on Spam Nation, signed a metric ton of books and the company’s author wall.

The Spam Nation signing in Naperville, IL.

After an amazing dinner with my sister and the CEO of Sourcebooks, we headed to my first book signing event just down the street. It was a well-attended event with some passionate readers and fans, including quite a few folks from @BurbsecWest with whom I had beers afterwards.

On Friday, I hopped a plane to San Francisco and sat down for taped interviews with USA Today and Bloomberg News. The book signing that night at Books Inc. drew a nice crowd and also was followed by some after-event celebration.

Departed for Seattle the next morning, and sat down for a studio interview with longtime newsman (and general mensch) Herb Weisbaum at KOMO-AM. The signing in Seattle, at Third Place Books, was the largest turnout of all, and included a very inquisitive crowd that bought up all of the copies of Spam Nation that the store had on hand.

Book signing at Seattle’s Third Place Books.

If you’re planning to be in Austin tonight — Nov. 24 — consider stopping by B&N Arboretum at 7:00 p.m. and getting your copy of Spam Nation signed. I’ll be holding one more signing at 7:00 p.m. at Washington, D.C.’s Politics & Prose on Dec. 4.

For those on the fence about buying Spam Nation, Slate and LinkedIn both ran excerpts of the book. Other reviews and interviews are available at, Yahoo. Also, I was interviewed at length several times over the past month by CBS’s 60 Minutes, which is doing a segment on retail data breaches. That interview could air as early as Nov. 30. On that note, the Minneapolis Star Tribune ran a lengthy story on Sunday that followed up on some information I first reported a year ago about a Ukrainian man thought to be tied to the Target breach, among others.

Raspberry Pi: ramanPi: an open source 3D-printable Raman spectrometer

This post was syndicated from: Raspberry Pi and was written by: Helen Lynn. Original post: at Raspberry Pi

The 2014 Hackaday Prize offered fabulous prizes for the best exemplars of an open, clearly documented device involving connected electronics. Committed hardware hacker fl@c@ (we understand that’s pronounced “flatcat”) wasn’t in the habit of opening up their work, but had been thinking that perhaps they should, and this seemed the perfect opportunity to give it a go. They decided to make an entry of one of their current works-in-progress, a DIY Raman spectrometer based on a Raspberry Pi. The project, named ramanPi, made it to the final of the contest, and was declared fifth prize winner at the prize announcement in Munich a couple of weeks ago.

ramanPi optics overview

Raman spectroscopy is a molecular identification technique that, like other spectroscopic techniques, works by detecting and analysing the characteristic ways in which substances absorb and emit radiation in various regions of the electromagnetic spectrum. It relies on the phenomenon of Raman scattering, in which a tiny proportion of the light falling on a sample is absorbed and then re-emitted at a different frequency; the shift in frequency is characteristic of the structure of the material, and can be used to identify it.
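The shift is conventionally quoted in wavenumbers (cm⁻¹), the difference between the reciprocal wavelengths of the incident and scattered light. A quick sketch of the arithmetic in Python; the 532 nm / 572 nm figures are purely illustrative, not a claim about ramanPi's actual laser:

```python
def raman_shift_cm1(excitation_nm: float, scattered_nm: float) -> float:
    """Raman shift in wavenumbers (cm^-1) for light re-emitted at a
    different wavelength than the excitation laser."""
    # 1/lambda in nm^-1, scaled by 1e7 to convert to cm^-1
    return (1.0 / excitation_nm - 1.0 / scattered_nm) * 1e7

# Example: a 532 nm green laser with Stokes-scattered light at 572 nm.
shift = raman_shift_cm1(532.0, 572.0)
print(round(shift))  # ~1314 cm^-1, a shift in the "fingerprint" region
```

Because the shift depends only on the molecule's vibrational modes, the same substance gives the same set of shifts regardless of which laser wavelength is used.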

The ideal molecular identification technique is sensitive (requiring only small quantities of sample), non-destructive of the sample, unambiguous, fast, and cheap; spectroscopic methods perform pretty well against all but the final criterion. This means that fl@c@’s Raman spectrometer, which uses a Raspberry Pi and 3D-printed parts together with readily available off-the-shelf components, removes an obstacle to using a very valuable technique for individuals and organisations lacking a large equipment budget.

The ramanPi uses a remote interface so that it can be viewed and controlled from anywhere. Like conventional Raman spectrometers, it uses a laser as a powerful monochromatic light source; uniquely, however, its design:

[…] is based on an open source concept that side steps the expensive optics normally required for raman spectroscopy. Ordinarily, an expensive notch filter would be used which is cost prohibitive for most average people. My system avoids this cost by using two less expensive edge filters which when combined in the correct manner provide the same benefit as the notch filter…at the minimal cost of a little extra computing time.

Once a cuvette containing the sample to be tested is loaded into the ramanPi, the laser is powered up behind a shutter and the first filter is selected while the cuvette’s temperature is stabilised. Then the shutter is disengaged and the sample exposed to laser light, and scattered light is collected, filtered and passed to a Raspberry Pi camera module for capturing and then analysis. The laser shutter is re-engaged and the process is repeated with the second filter. The Raspberry Pi combines multiple exposures into a single image and carries out further image processing to derive the sample’s Raman spectrum. Finally, the spectrum is compared with spectra in online databases, and any match found is displayed.
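The sequence above can be sketched as a simple control loop. This is a hypothetical outline only; every function name here is an illustrative stub, not ramanPi's actual API:

```python
# Stand-in stubs for the spectrometer hardware (illustrative only).
def select_filter(name):     pass            # move the chosen edge filter into the light path
def stabilise_temperature(): pass            # wait for the cuvette temperature to settle
def open_shutter():          pass            # expose the sample to the laser
def close_shutter():         pass            # block the laser between passes
def capture_frame():         return [0] * 4  # placeholder for one camera exposure

def capture_spectrum(filters=("edge_filter_1", "edge_filter_2"), exposures=3):
    """Run one pass per edge filter, collecting several exposures each."""
    frames = []
    for f in filters:
        select_filter(f)
        stabilise_temperature()
        open_shutter()
        frames += [capture_frame() for _ in range(exposures)]
        close_shutter()
    return frames  # later combined into one image and reduced to a spectrum

print(len(capture_spectrum()))  # 2 filters x 3 exposures = 6 frames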

fl@c@ says,

I’ve been trying to build up the courage to share my work and ideas with the world because I think it benefits everyone. This project is my first to share, and for it to be featured here [in a Hackaday Prize Hacker bio] […] is really amazing. I appreciate this whole community, I’ve learned a lot from it over the years and I hope to be able to give back and contribute more soon!

We’re very glad fl@c@ did decide to share this – ramanPi is an astonishing first contribution to the open source movement, and something that’s likely to be of interest to schools, chemists, biologists, home brew enthusiasts, people who want to know what’s in their water, businesses, ecologists and the simply curious.

You can read about ramanPi in much more detail, with further videos, diagrams, discussion and build instructions, on its Hackaday project page. We hope that this is far from the last we’ll hear of this project, or of fl@c@!

TorrentFreak: Pirate Bay Founder Preps Appeal, Puts the Press Straight

This post was syndicated from: TorrentFreak and was written by: Andy. Original post: at TorrentFreak

After being arrested in Cambodia during September 2012 it soon became clear that two Scandinavian countries wanted to get their hands on Gottfrid Svartholm.

Sweden had a long-standing interest in their countryman for his infamous work on The Pirate Bay, but once that was out of the way, a pair of hacking cases had to be dealt with.

The first, in Sweden, resulted in partial successes for both sides. While Gottfrid was found guilty of hacking into IT company Logica, following testimony from Jacob Appelbaum he was later cleared by the Appeal Court (Svea Hovrätt) of hacking into Nordea Bank.

But despite this significant result and a repeat appearance from Appelbaum, the trial that concluded in Denmark last month went all one way, with Gottfrid picking up a three-and-a-half year sentence.

With his mother Kristina acting as go-between, TorrentFreak recently fired off some questions to Gottfrid to find out how he’s been bearing up following October’s verdict and to discover his plans for the future.

Firstly, TF asked about his opinion on the decision. Gottfrid declined to answer directly but indicated we should look to the fact that he has already filed an appeal against the verdict. That should be enough of an answer, he said.

As it stands and considering time served, Gottfrid could be released as early as August 2015, but that clearly isn’t deterring him from the possibility of leaving sooner. Gottfrid has always shown that he’s both stubborn and a fighter, so sitting out his sentence in silence was probably never an option.

Moving on, TF pressed Gottfrid on what he feels were the points of failure during the court process and how these will play out alongside his appeal.

“Can’t discuss defense strategy at this point,” he responded. Fair enough.

Even considering the preparations for an appeal, there are a lot of hours in the coming months that will prove hard to fill. However, Gottfrid’s comments suggest that his access to books has improved since his days in solitary confinement and he’s putting that to use.

“I study neurobiology and related subjects to pass the time,” he says, with mother Kristina noting that this education is self-motivated.

“The ‘arrest house’ can of course not provide him with opportunities for higher studies,” she says.

Although he’s been thrust into the public eye on many occasions, Gottfrid’s appearances at court in Sweden (documented in TPB AFK) and later in his Danish trial reveal a man with an eye for detail and accuracy. It perhaps comes as little surprise then that he also took the opportunity to put the record straight on something he knows a lot about – the history of The Pirate Bay.

If one searches for “founders of The Pirate Bay” using Google, it’s very clear from many thousands of reports that they are Gottfrid Svartholm, Fredrik Neij and Peter Sunde. According to Gottfrid, however, that simply isn’t true.

“TPB was founded by me and two people who haven’t been involved since 2004,” Gottfrid says. “Fredrik came into the picture when the site moved from Mexico to Sweden, probably early 2004.”

While acknowledging Fredrik’s work as important for the growth of the site, Gottfrid noted that Peter’s arrival came sometime later. He didn’t specify who the other two founders were but it’s likely they’re to be found among the early members of Piratbyrån as detailed here.

With Peter Sunde already released from his sentence and Fredrik Neij close to beginning his, it’s possible that the founders trio could all be free men by the end of 2015. So does Gottfrid have anything exciting up his sleeve for then?

“Yes, I have plans, but I’m not sharing them,” he concludes.

Source: TorrentFreak, for the latest info on copyright, file-sharing and anonymous VPN services.

TorrentFreak: Piracy Monetization Firm Rightscorp Sued for Harassment and Abuse

This post was syndicated from: TorrentFreak and was written by: Ernesto. Original post: at TorrentFreak

Copyright holders have been sending DMCA takedown notices to ISPs for over a decade, but in recent years these warnings turned into revenue opportunities.

Companies such as Rightscorp ask U.S. ISPs to forward DMCA notices to subscribers, with a settlement offer tagged on to the end. On behalf of Warner Bros., BMG and others, Rightscorp asks subscribers to pay $20 per pirated file or risk a potential $150,000 in court.

In recent months there have been various complaints from people who were aggressively approached by Rightscorp, which has now resulted in a class-action complaint against the piracy monetization firm.

The lawsuit was filed at a California federal court on behalf of Karen Reif, Isaac Nesmith and others who were approached by Rightscorp. In the complaint, Rightscorp is accused of violating the Telephone Consumer Protection Act, violating debt collection laws, and abuse of process.

One of the allegations describes the repeated use of robo-calls to alleged infringers. A summary of what happened to Karen Reif shows that once Rightscorp knows who you are, they don’t give up easily.

“By late September of 2014, Ms. Reif was receiving on average about one robo-call per day, and sometimes one robo-call and one live call in the same day. These calls came in from a variety of different numbers, from different area codes all over the country,” the complaint alleges.

This bombardment of harassing robo-calls is a violation of the Telephone Consumer Protection Act, the lawyers argue.

The class-action further includes a long list of violations regarding Rightscorp’s debt collection practices, violating both the FDCPA and the Rosenthal Act.

“Among other wrongful conduct: Rightscorp has engaged in telephone harassment and abuse; made various false and misleading representations; engaged in unfair collections practices; failed to provide validation and required notices relating to the debts…,” the complaint reads.

In addition to the above Rightscorp allegedly made false representations that ISPs were participating in the debt collection. For example, the warning letter stated that ISPs would disconnect repeat infringers, something that rarely happened.

Finally, the complaint raises the issue of Rightscorp’s controversial DMCA subpoenas which demand that smaller ISPs should hand over personal details of their subscribers. Thus far most ISPs have complied, but according to the complaint these requests are a “sham and abuse” of the legal process.

“To identify potential consumers to target, Rightscorp has willfully misused this Court’s subpoena power by issuing at least 142 special DMCA subpoenas, per [the DMCA], to various Internet Service Providers.”

“These subpoenas, which were issued on this Court’s authority, but procured outside of an adversarial proceeding and without any judicial review, are so clearly legally invalid as to be a sham and abuse of the legal process,” the complaint reads.

The above is just a summary of the long list of complaints being brought against Rightscorp. With these settlement practices becoming more common, the case will definitely be one to watch.

Attorney Morgan Pietz is confident that they have a strong case and told TF that other Rightscorp victims are invited to get in touch.

“We would still be very interested in talking to anyone who was being contacted by Rightscorp or who paid settlements, particularly anyone who was getting the pre-recorded robo-calls,” Pietz said.

For Rightscorp the lawsuit is yet another setback. Earlier this month the piracy monetization firm reported that it continues to turn a loss, which may eventually drive the company towards bankruptcy.

Source: TorrentFreak, for the latest info on copyright, file-sharing and anonymous VPN services.

TorrentFreak: Google Refuses MPAA Request to Blacklist ‘Pirate Site’ Homepages

This post was syndicated from: TorrentFreak and was written by: Ernesto. Original post: at TorrentFreak

Every week copyright holders send millions of DMCA takedown notices to Google, hoping to make pirated movies and music harder to find.

The music industry groups RIAA and BPI are among the most active senders. Together they have targeted more than 170 million URLs in recent years.

The MPAA’s statistics are more modest. Thus far the Hollywood group has asked Google to remove only 19,288 links from search results. The most recent request is one worth highlighting though, as it shows a clear difference of opinion between Hollywood and Google.

Last week the MPAA sent a DMCA request listing 81 allegedly infringing pages, mostly torrent and streaming sites.

Unlike most other copyright holders, the MPAA doesn’t list the URLs where the pirated movies are linked from, but the site’s homepages instead. This is a deliberate strategy, one that previously worked against KickassTorrents.

However, this time around Google was less receptive. As can be seen below most of the MPAA’s takedown requests were denied. In total, Google took “no action” for 60 of the 81 submitted URLs, including, and

Part of MPAA’s takedown request

It’s unclear why Google refused to take action, but it seems likely that the company views the MPAA’s request as too broad. While the sites’ homepages may indirectly link to pirated movies, for most sites reaching that content requires more than one click from the homepage.

We previously asked Google under what circumstances a homepage might be removed from search results. A spokesperson couldn’t go into detail but noted that “it’s more complex than simply counting how many clicks one page is from another.”

“We’ve designed a variety of policies to comply with the requirements of the law, while weeding out false positives and material that’s too remote from infringing activity,” a Google spokesperson told us.

In this case Google appears to see most reported homepages as not infringing, at least not for the works the MPAA specified.

The MPAA previously said that it would like to move toward blocking pirate sites from search engines entirely, but Google’s recent actions suggest the company doesn’t want to go that far just yet.

Source: TorrentFreak, for the latest info on copyright, file-sharing and anonymous VPN services.

The Hacker Factor Blog: Lowering The Bar

This post was syndicated from: The Hacker Factor Blog and was written by: The Hacker Factor Blog. Original post: at The Hacker Factor Blog

The Electronic Frontier Foundation (EFF) is one of my favorite non-profit organizations. They have a huge number of attorneys who are ready to help people with issues related to online privacy, copyright, and security. If you’re about to make a 0-day exploit public and receive a legal threat from the software provider, then the EFF should be the first place you go.

The EFF actually provides multiple services. Some are top-notch, but others are not as high quality as they should be. These services include:

Legal Representation
If you need an attorney for an online issue, such as privacy or security, then they can give you direction. When I received a copyright extortion letter from Getty Images, the EFF rounded up four different attorneys who were interested in helping me fight Getty. (Getty Images backed down before I could use these attorneys.) Legal assistance is one of the EFF’s biggest and best offerings.

Legal News
The EFF continually releases news blurbs and whitepapers that discuss current events and their impact on security and privacy. Did you know that U.S. companies supply eavesdropping gear to Central Asian autocrats or that Feds proposed the secret phone database used by local Virginia cops? If you follow the EFF’s news feed, then you saw these reports. As a news aggregation service, their reports are very timely, but also very biased. The EFF’s reporting is biased toward a desire for absolute privacy online, even though nobody’s anonymous online.

Technical Services
The EFF occasionally promotes or releases software designed to assist with online privacy. While these efforts have good intentions, they are typically poorly thought out and can lead to significant problems. For example:

  • HTTPS Everywhere. This browser extension forces your web browser to use HTTPS whenever possible. It has a long set of configuration files that specify which sites should use HTTPS. Earlier this year, I wrote about some of the problems created by this application in “EFF’ing Up”. Specifically: (1) Some sites return different content if you use HTTPS instead of HTTP, (2) they do not appear to test their configuration files prior to releasing them, and (3) they do not fix bad configuration files.

  • TOR. The EFF is a strong supporter of the TOR Project, which consists of a network of servers that help anonymize network connections. The problem is that the EFF wants everyone to run a TOR relay. For a legal organization, the EFF seems to forget that many ISPs forbid end consumers from running public network services — running a TOR relay may violate your ISP’s terms of service. The TOR relay will also slow down your network connection as other people use your bandwidth. (Having other people use your bandwidth is why most consumer-level ISPs forbid users from hosting network services.) And if someone else uses your TOR relay to view child porn, then you are the person that the police will interrogate. In effect, the EFF tells people to run a network service without revealing any of the legal risks.

Free SSL

The EFF recently began promoting a new technical endeavor called Let’s Encrypt. This free CA server should help web sites move to HTTPS. News outlets like Boing Boing, The Register, and ExtremeTech all reported on this news announcement.

A Little Background

Let’s back up a moment… On the web, you can either connect to sites using HTTP or HTTPS. The former (HTTP) is unencrypted. That means anyone watching the network traffic can see what you are doing. The latter (HTTPS) is HTTP over SSL; SSL provides a framework for encrypting network traffic.

But notice how I say “framework”. SSL does not encrypt traffic. Instead, it provides a way for a client (like your web browser) and a server (like a web site) to negotiate how they want to transfer data. If both sides agree on a cryptographic setting, then the data is encrypted.
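The "framework, not encryption" point can be seen in Python's standard-library ssl module: a client-side context only describes what the client is willing to negotiate, and nothing is protected until both sides actually agree. A minimal sketch:

```python
import ssl

# SSL/TLS is a negotiation framework: the context encodes what this
# client will accept, but no traffic is encrypted until a handshake
# with a server succeeds on mutually acceptable parameters.
ctx = ssl.create_default_context()
ctx.minimum_version = ssl.TLSVersion.TLSv1_2  # refuse legacy protocol versions

# Certificate verification and hostname checking are the trust side of
# the negotiation, separate from the encryption itself.
print(ctx.verify_mode == ssl.CERT_REQUIRED)  # True: unverifiable servers are rejected
print(ctx.check_hostname)                    # True: the cert must match the hostname
```

Either side can also refuse the connection outright if no acceptable cipher or protocol version overlaps, which is exactly the "if both sides agree" condition described above.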

HTTPS is not a perfect solution. In many cases, it really acts as a security placebo. A user may see that HTTPS is being used, but may not be aware that they are still vulnerable. The initial HTTPS connection can be hijacked (a man-in-the-middle attack) and fake certificates can be issued to phishing servers. Even if the network connection is encrypted, this does nothing to stop the web server from tracking users or providing malware, and nothing to stop vandals from attacking the web server. And all of this is before SSL exploits like Heartbleed and POODLE. In general, HTTPS should be considered a “better than nothing” solution. But it is far from perfect.

Entry Requirements

Even with all of the problems associated with SSL and HTTPS, for most uses it is still better than nothing. So why don’t more sites use HTTPS? There are really a few barriers to entry. The EFF’s “Let’s Encrypt” project is a great solution to one of these problems and a partial solution to another. However, it doesn’t address all of the issues, and it is likely to create some new problems that the EFF has not disclosed.

Problem #1: Pay to Play
When an HTTPS client connects to an HTTPS server, the server transmits a server-side certificate as part of the cryptographic negotiation. The client then checks with a third-party certificate authority (CA server) and asks whether the server’s certificate is legitimate. This allows the client to know that the server is actually the correct server.

The server’s certificate identifies the CA network that should be used to verify the certificate. Unfortunately, if the certificate can say where to go to verify it, then bad guys can issue a certificate and tell your browser that it should be verified by a CA server run by the same bad guys. (Yes, looks like your bank, and their SSL certificate even looks valid, according to.) For this reason, every web browser ships with a list of known-trusted CA servers. If the CA server is not on the known list, then it isn’t trusted by default.

If there are any problems with the server’s certificate, then the web browser issues an alert to the user. The problems include outdated/expired certificates, coming from the wrong domain, and untrusted CA servers.

And this is where the first barrier toward widespread use comes in… All of those known-trusted CA servers charge a fee. If you want your web server to run with an SSL certificate that won’t generate any user warnings, then you need to pay one of these known-trusted CA servers to issue an SSL certificate for your online service. And if you run multiple services, then you need to pay them multiple times.

The problems should be obvious. Some people don’t have money to pay for the trusted certificate, or they don’t want to spend the money. You can register a domain name for $10 a year, but the SSL certificate will likely run $150 or more. If your site doesn’t need SSL, then you’re not going to pay $150 to require it.

And then there are people like me, who cannot justify paying for a security solution (SSL) that isn’t secure. I cannot justify paying $150 or more, just so web browsers won’t see a certificate warning when they connect to my HTTPS services. (I use self-signed certificates. By themselves, they are untrusted and not secure, but I offer client-side certificates. Virtually no sites use client-side certificates. But client-side certs are what actually makes SSL secure.)

The EFF’s “Let’s Encrypt” project is a free SSL CA server. With this solution, cost is no longer an entry barrier. When their site goes live, I hope to use it for my SSL needs.

Of course, other CA services, like Entrust, Thawte, and GoDaddy, may lower their prices or offer similar free services. (You cannot data-mine users unless they use your service. Even with a “free” pricing model, these CA issuers can still make a hefty profit from collected user data.) As far as the EFF’s offerings go, this is a very disruptive technology for the SSL industry.

Problem #2: Server Installation
Let’s assume that you acquired an SSL certificate from a certificate authority (Thawte, GoDaddy, Let’s Encrypt, etc.). The next step is to install the certificate on your web server.

HTTPS has never been known for its simplicity. Installing the SSL server-side certificate is a nightmare of configuration files and application-specific complexity. Unless you are a hard-core system administrator, you probably cannot do it. Even GUI interfaces like cPanel have multiple complex steps that are not for non-techies. You, as a user with a web browser, have no idea how much aggravation the system administrator went through in order to provide you with HTTPS and that little lock icon in the address bar. If they are good, then they spent hours. If it was new to them, then it could have been days.
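For a sense of what that configuration involves, here is a hypothetical minimal Apache HTTPS stanza. The domain and every path are placeholders, and a real deployment typically needs several more directives (protocol and cipher settings, redirects from port 80, and so on):

```apache
<VirtualHost *:443>
    ServerName example.com
    SSLEngine on
    # Server certificate, its private key, and the intermediate chain
    # back to the CA -- each must be in the right place and format.
    SSLCertificateFile      /etc/ssl/certs/example.com.crt
    SSLCertificateKeyFile   /etc/ssl/private/example.com.key
    SSLCertificateChainFile /etc/ssl/certs/ca-chain.crt
</VirtualHost>
```

Get any one of those three files wrong, or in the wrong order in the chain, and browsers will show warnings even though the certificate itself was purchased from a trusted CA.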

In effect, lots of sites do not run HTTPS because it is overly complicated to install and configure. (And let’s hope that you don’t have to change certificates anytime soon…) Also, HTTPS certificates include an expiration date. This means that there is an ongoing maintenance cost that includes time and effort.

The EFF’s “Let’s Encrypt” solution says that it will include automated management software to help mitigate the installation and maintenance effort. This will probably work if you run one of their supported platforms and have a simple configuration file. But if you’re running a complex system with multiple domains, custom configuration files, and strict maintenance/update procedures, then no script from the EFF will assist you.

Of course, all of this is speculation, since the EFF has not announced the supported platforms yet… So far, they have only mentioned a Python script for Apache servers. I assume that they mean “Apache2” and not “Apache”. And even then, the configuration at FotoForensics has been customized for my own needs, so I suspect that their solution won’t work out-of-the-box for me.

Problem #3: Client Installation
So… let’s assume that it is past Summer 2015, when Let’s Encrypt becomes available. Let’s also assume that you got the server-side certificate and their automated maintenance script running. You’ve got SSL on your server, HTTPS working, and you’re ready for users. Now everything is about to work without any problems, right? Actually, no.

As pointed out in problem #1, unknown CA servers are not in the user’s list of trusted CA servers. So every browser connecting to one of these web servers will see that ugly alert about an untrusted certificate.

Every user will need to add the new Let’s Encrypt CA servers to their trusted list. And every browser (and almost every version of every browser) does this differently. Making matters worse, lots of mobile devices do not have a way to add new CA servers. It will take years or even decades to fully resolve this problem.

Windows XP reached its “end of life” (again), yet nearly 30% of Windows computers still run XP. IPv6 has been around for nearly 20 years, yet deployment is still at less than 10% for most countries. Getting everyone in the world to update/upgrade is a massive task. It is easier to release a new system than it is to update a deployed product.

The EFF may dream of everyone updating their web browsers, but that’s not the reality. The reality is that users will be quickly trained to ignore any certificate alerts from the web browsers. This opens the door for even more phishing and malware sites. (If the EFF really wanted to solve this problem, then they would phase out the use of SSL and introduce something new.)

There is one other possibility… Along with the EFF, IdenTrust is sponsoring Let’s Encrypt. IdenTrust runs a trusted CA service that issues SSL certificates. (The cost varies from $40 per year for personal use to over $200 per year, depending on various options.) Let’s Encrypt could piggyback off of IdenTrust. This would get past the “untrusted CA service” problem.

But if they did rely on the known-trusted IdenTrust that is already listed in every web browser… then why would anyone buy an SSL certificate from IdenTrust when they can get one for free via Let’s Encrypt? There has to be some catch here. Are they collecting user data? Every browser must verify every server, so whoever runs this free CA server knows when you connected to specific online services; that’s a lot of personal information. Or perhaps they hope to drive sales to their other products. Or maybe there will be a license agreement that prohibits the free service from commercial use. All of this would undermine the entire purpose of trying to protect users’ traffic.

Problem #4: Fake Domains
Phishing web sites, where bad guys impersonate your bank or other online service, have been using SSL certificates for years. They will register a domain like “” and hope that users won’t notice the “fjewahuif” in the hostname. Then they register a real SSL certificate for their “” domain. At this point, victims see the “bankofamerica” text in the hostname and they see the valid HTTPS connection and they assume that this is legitimate.
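The lookalike trick is easy to sketch. Below is a hypothetical and deliberately naive check, reusing the "fjewahuif" example from above; real phishing detection would at minimum consult the Public Suffix List and handle homoglyphs, and these hostnames are illustrative only:

```python
def registered_domain(hostname: str) -> str:
    """Last two labels of the hostname. A simplification: real code
    would use the Public Suffix List to find the registrable domain."""
    return ".".join(hostname.split(".")[-2:])

def looks_like_lookalike(hostname: str, brand_domain: str) -> bool:
    """Flag hostnames that contain the brand name but are actually
    registered under some other domain."""
    brand = brand_domain.split(".")[0]
    return brand in hostname and registered_domain(hostname) != brand_domain

# The brand text appears in the hostname, but the registered domain is
# "fjewahuif.com", not the bank's -- exactly the trick described above.
print(looks_like_lookalike("www.bankofamerica.fjewahuif.com", "bankofamerica.com"))  # True
print(looks_like_lookalike("www.bankofamerica.com", "bankofamerica.com"))            # False
```

The victim's eye does the opposite of `registered_domain`: it anchors on the familiar brand text anywhere in the URL, which is why the scam works.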

The problem gets even more complicated when they use DNS hijacking. On rare occasions, bad guys have temporarily stolen domains and used them to capture customer information. For example, they could steal the “” domain and register a certificate for it at any of the dozens of legitimate CA servers. (If the real Bank of America uses VeriSign, then the fake Bank of America can use Thawte and nobody will notice.) With domain hijacking, it looks completely real but can actually be completely fake.

The price for an SSL certificate used to be a little deterrent. (Most scammers don’t mind paying $10 for a domain and $150 for a legitimate certificate, when the first victim will bring in a few thousands of dollars in stolen money.) But a free SSL CA server? Now there’s no reason not to run this scam. I honestly expect the volume of SSL certificate requests at the EFF’s Let’s Encrypt servers to quickly grow to 50%-80% scam requests. (A non-profit with a legal emphasis that helps scammers? As M. Night Shyamalan says in Robot Chicken: “What a twist!“)

“Free” as in “Still has a lot of work to do before it’s really ready”

The biggest concern that I have with this EFF announcement is that the technology does not exist yet. Their web site says “Arriving Summer 2015” — it’s nearly a year away. While they do have some test code available, their proposed standard is still a draft and they explicitly say to not run the code on any production systems. Until this solidifies into a public release, this is vaporware.

But I do expect this to eventually become a reality. The EFF is not doing this project alone. Let’s Encrypt is also sponsored by Mozilla, Akamai, Cisco, and IdenTrust. These are companies that know browsers, network traffic, and SSL. These are some of the biggest names and they are addressing one of the big problems on today’s Internet. I have no doubt that they are aware of these problems; I just dislike how they failed to disclose these issues when they had their Pollyannaish press release. Just because it is “free” doesn’t mean it won’t have costs for implementation, deployment, maintenance, and customer service. In the open source world, “free” does not mean “without cost”.

Overall, I do like the concept. Let’s Encrypt is intended to make it easier for web services to implement SSL. They will be removing the cost barrier and, in some cases, simplifying maintenance. However, they still face an uphill battle. Users may need to update their web browsers (or replace their old cellphones), steps need to be taken to mitigate scams, users must not be trained to habitually accept invalid certificates, and none of this helps the core issue that HTTPS is a security placebo and not a trustworthy solution. With all of these issues still needing to be addressed, I think that their service announcement a few days ago was a little premature.

TorrentFreak: Luxury Watchmakers Target Pirate Smartwatch Faces

This post was syndicated from: TorrentFreak and was written by: Andy. Original post: at TorrentFreak

While digital watches have been becoming more complex in recent years, the advent of a new generation of smartwatches is changing the market significantly. Manufacturers such as Samsung, Sony, Pebble, Motorola and LG all have an interest in the game, with Apple set to show its hand in the early part of 2015.

Currently Android Wear compatible devices such as Motorola’s Moto360 are proving popular, not least due to their ability to display custom watch faces. Fancy Tag Heuer’s latest offering on your wrist? No problem. Rolex? Omega? Cartier? Patek Philippe? All just a click or two away.

Of course, having a digital copy of a watch on one’s wrist is a much cheaper option than the real deal. See that Devon watch fourth from left in the image below? A real-world version will set you back a cool $17,500. The copy? Absolutely free.


While it’s been fun and games for a while, makers of some of the world’s most expensive and well known watches are now targeting sites offering ‘pirate’ smartwatch faces in order to have digital likenesses of their products removed from the market.

TorrentFreak has learned that IWC, Panerai, Omega, Fossil, Armani, Michael Kors, Tissot, Certina, Swatch, Flik Flak and Mondaine are sending cease and desist notices to sites and individuals thought to be offering faces without permission.

Richemont, the luxury group behind several big brands including Cartier, IWC and Panerai, appears to be one of the frontrunners. The company is no stranger to legal action and recently made headlines after obtaining court orders to have domains selling counterfeit watches blocked at the ISP level in the UK.

Notices seen by TorrentFreak reveal that the company, which made 2.75 billion euros from its watch division during 2012/2013, is lodging notices against watch face sites citing breaches of its trademark rights. Owners are being given 24 hours to remove infringing content.

We discussed the issue with Richemont’s PR representatives but were informed that on this occasion the company could not be reached for comment.

Earlier this week a source informed TF that Swatch-owned Omega had also been busy, targeting a forum with demands that all Omega faces should be removed on “registered trademark, copyright and design rights” grounds. Although the forum would not talk on the record, its operator revealed that the content in question had been removed. Omega did not respond to our requests for comment.

While watchmakers are hardly a traditional foe for those offering digital content, history shows us that they are prepared to act aggressively in the right circumstances.

Mondaine, a Swiss-based company also involved in the latest takedowns, famously found itself in a huge spat with Apple after the company included one of its designs in iOS 6. That ended up costing Apple a reported $21 million in licensing fees. The same design is readily available for the Moto360 on various watch face sites.

So how are sites handling the claims of the watchmakers? TorrentFreak spoke with Luke, the operator of leading user-uploaded watch face site FaceRepo. He told us that the site had indeed received takedown notices from brand owners, but made it very clear that uploading infringing content is discouraged and that steps are being taken to keep it off the site.

“Although some of the replica faces we’ve received take downs for are very cool looking and represent significant artistic talent on the part of the designer, we believe that owners of copyrights or trademarks have the right to defend their brand,” Luke explained.

“If a copyright or trademark owner contacts us, we will promptly remove infringing material. To date, all requests for removal of infringing material have been satisfied within a matter of hours.”

Learning very quickly from other user generated content sites, FaceRepo notifies its users that their content has been flagged as infringing and also deactivates accounts of repeat infringers. A keyword filter has also been introduced which targets well known brands.

“If these [brand names] are found in the face name, description or tags, this will cause the upload to be rejected with a message stating that sharing of copyrighted or trademarked material is prohibited,” FaceRepo’s owner notes.
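
Such a filter need not be sophisticated. As a rough sketch (the brand list, function name, and messages here are hypothetical, not FaceRepo's actual code), it could amount to little more than a case-insensitive match against known brand names:

```shell
# Hypothetical upload filter: reject face names that match a brand blocklist.
brands='rolex|omega|cartier|tissot|mondaine|panerai'

check_upload() {
    # $1 is the submitted face name (the same check could run on description and tags)
    if printf '%s' "$1" | grep -qiE "$brands"; then
        echo 'rejected: sharing of copyrighted or trademarked material is prohibited'
    else
        echo 'accepted'
    fi
}

check_upload 'Omega Seamaster Replica'   # rejected
check_upload 'Minimal Analog Face'       # accepted
```

A real filter would also have to catch misspellings and spaced-out variants, which is presumably why FaceRepo still handles takedown notices by hand as well.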

The opening of a new front in the war to keep copyrighted and trademarked content off the Internet is hardly a surprise, and considering their power it comes as no shock that the watchmakers have responded the way they have. We may be some time away from an actual lawsuit targeting digital reproductions of physical products, but as the wearables market develops, one cannot rule them out.

Source: TorrentFreak, for the latest info on copyright, file-sharing and anonymous VPN services.

TorrentFreak: Fail: MPAA Makes Legal Content Unfindable In Google

This post was syndicated from: TorrentFreak and was written by: Ernesto. Original post: at TorrentFreak

The entertainment industries have gone head to head with Google in recent months, demanding tougher anti-piracy measures from the search engine.

According to the MPAA and others, Google makes it too easy for its users to find pirated content. Instead, they would prefer Google to downrank sites such as The Pirate Bay from its search results or remove them entirely.

A few weeks ago Google took additional steps to decrease the visibility of pirated content, but the major movie studios haven’t been sitting still either.

Last week the MPAA announced the launch of WhereToWatch, a website that lists where movies and TV-shows can be watched legally.

“ offers a simple, streamlined, comprehensive search of legitimate platforms – all in one place. It gives you the high-quality, easy viewing experience you deserve while supporting the hard work and creativity that go into making films and shows,” the MPAA’s Chris Dodd commented.

At first glance WhereToWatch offers a rather impressive database of entertainment content. It even features TorrentFreak TV, although this is listed as “not available” since the MPAA’s service doesn’t index The Pirate Bay.

Overall, however, it’s a decent service. WhereToWatch could also be an ideal platform to beat pirate sites in search results, something the MPAA desperately wants to achieve.

Sadly for the MPAA that is only a “could” since Google and other search engines currently have a hard time indexing the site. As it turns out, the MPAA’s legal platform isn’t designed with even the most basic SEO principles in mind.

For example, when Google visits the movie overview page, all links to individual pages are generated by JavaScript, which search engine crawlers don’t execute. As a result, the movie and TV-show pages in the MPAA’s legal platform are invisible to Google.

Google currently indexes only one movie page, which was most likely indexed through an external link. With Bing the problem is just as bad.


It’s worth noting that WhereToWatch doesn’t block search engines from spidering its content through the robots.txt file. It’s just the coding that makes it impossible for search engines to navigate and index the site.
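
The difference is easy to demonstrate. A crawler that doesn’t execute JavaScript can only follow plain href links, so script-generated navigation is invisible to it (the markup below is a made-up illustration, not WhereToWatch’s actual code):

```shell
# A spider that ignores JavaScript only discovers links present in the raw HTML.
cat > page.html <<'EOF'
<a href="/movie/example-title">Example Title</a>
<div onclick="loadMovie(42)">Invisible To Crawlers</div>
EOF

# Simulate naive link extraction, as a non-JS crawler would:
grep -o 'href="[^"]*"' page.html
# prints: href="/movie/example-title"  (the JavaScript-only entry yields nothing)
```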

This is a pretty big mistake, considering that the MPAA has repeatedly hammered Google to feature more legal content. With some proper search engine optimization (SEO) advice they can probably fix the problem in the near future.

Google has previously offered SEO tips to copyright holders, but it’s obvious that the search engine wasn’t consulted on this project.

To help the MPAA on its way we asked isoHunt founder Gary Fung for some input. Last year Fung lost his case to the MPAA, forcing him to shut down the site, but he was glad to offer assistance nonetheless.

“I suggest MPAA optimize for search engine keywords such as ‘download ‘ and ‘torrent ‘. For some reason when people google for movies, that’s what they actually search for,” Fung tells us.

A pretty clever idea indeed, as the MPAA’s own research shows that pirate-related search terms are often used to “breed” new pirates.

Perhaps it’s an idea for the MPAA to hire Fung or other “industry” experts for some more advice. Or better still, just look at how the popular pirate sites have optimized their sites to do well in search engines, and steal their work.


TorrentFreak: Swedes Prepare Record File-Sharing Prosecution

This post was syndicated from: TorrentFreak and was written by: Andy. Original post: at TorrentFreak

Following a lengthy investigation by anti-piracy group Antipiratbyrån, in 2010 police raided a “warez scene” topsite known as Devil. Dozens of servers were seized containing an estimated 250 terabytes of pirate content.

One man was arrested and earlier this year was eventually charged with unlawfully making content available “intentionally or by gross negligence.”

Police say that the man acted “in consultation or concert with other persons, supplied, installed, programmed, maintained, funded and otherwise administered and managed” the file-sharing network from where the infringements were carried out. It’s claimed that the Devil topsite had around 200 members.

All told the man is accused of illegally making available 2,250 mainly Hollywood movies, a record amount according to the prosecutor.

“We have not prosecuted this many movies in the past. It is a large number of movies and a large data set,” says prosecutor Fredrik Ingblad. “It is also the largest analysis of computers ever made in an individual case.”

Few details have been made available on the case but it’s now been revealed that Antipiratbyrån managed to trace the main Devil server back to the data center of a Stockholm-based electronics company. The site’s alleged operator, a Väsby man in his 50s and an employee of the company, reportedly admitted being in control of the server.

While it would likely have been the intention of Devil’s operator for the content on the site to remain private, leaks inevitably occurred. Predictably some of that material ended up on public torrent sites, an aggravating factor according to Antipiratbyrån lawyer Henrik Pontén.

“This is a very big issue and it is this type of crime that is the basis for all illegal file sharing. The films available on Pirate Bay circulate from these smaller networks,” Pontén says.

The big question now concerns potential damages. Pontén says that the six main studios behind the case could demand between $673,400 and $2.69m per movie. Multiply that by 2,250 and that’s an astonishing amount, but the lawyer says that in order not to burden the justice system, a few titles could be selected.

Henrik Olsson Lilja, a lawyer representing the defendant, declined to comment in detail but criticized the potential for high damages.

“I want to wait for the trial, but there was no intent in the sense that the prosecutor is looking for,” Lilja told “In practice, these are American-style punitive damages.”


Backblaze Blog | The Life of a Cloud Backup Company: Backblaze + Time Machine = ♥

This post was syndicated from: Backblaze Blog | The Life of a Cloud Backup Company and was written by: Yev. Original post: at Backblaze Blog | The Life of a Cloud Backup Company


“Why do I need online backup if I already have Time Machine?” We get that question a lot. Our answer: use both. Backblaze strongly believes in a 3-2-1 backup policy. What’s 3-2-1? Three copies of your data, on two different media, with one copy off-site. If you have that baseline, you’re in good shape. The on-site portions of your backup strategy are typically the original copy of the data and an external hard drive of some sort. Most of our Mac customers use Time Machine, so that’s the one we’ll focus on here.

Raising Awareness
Apple did a great job with Time Machine, and with building awareness of backups. When you plugged in your first external hard drive, your Mac would ask if you wanted to use it as a Time Machine backup drive, which was instrumental in teaching users about the importance and potential ease of backups. Time Machine also dramatically simplified data backup, making it automatic and continuous. Apple knew that having people manually drag and drop files into folders and onto drives was not a reliable backup strategy. By making backup automatic, Apple got many people using Time Machine for their local backup, but this still left a hole in their backup strategy: they had nothing off-site.

Why Bother
Having an off-site backup comes in handy when your computer and local backup (Time Machine in this case) are both lost. That can occur because of fire, theft, flood, forgetfulness, or a wide variety of other unfortunate reasons. Stories of people neglecting to replace a failed Time Machine drive and then having their computer crash are well known. An off-site backup that is current, such as an automatic online backup, can also augment the local Time Machine backup, especially when traveling. For example, say the hard drive in your laptop crashes while you’re on vacation. Time Machine can recover everything up to the point where you left for your trip, and your online backup can fill in the rest.

Some Limitations
One limitation of Time Machine is that, being a single hard drive, it doesn’t scale with the amount of data you have. When you purchase a 500GB drive, that’s all the space you get for your backup. For example, if your Mac Pro or MacBook has a Time Machine drive connected, it will back up the data that’s on the computer. If you add an additional hard drive into the mix as a storage drive, the Time Machine drive may not be large enough to handle both data sets, from the Mac and from the additional storage. So the more data you accumulate, the larger the Time Machine drive you need.

Additionally, if you store data on your Time Machine drive itself, those files are not actually included in the Time Machine backup, so be wary! Apple and Backblaze strongly recommend using a separate, dedicated drive for your Time Machine backup and not keeping any original data on that drive. That way, if the drive fails, you only lose one copy of your data instead of potentially losing both. Backblaze adds to this: because your Backblaze backup is off-site, it provides yet another layer of protection against data loss.

So use both! And if you’re on a PC, use an external hard drive as your second media type (most come with their own local-backup software). There’s no such thing as too many backups. Backing up is like a retirement or stock portfolio: the more diversification you have, the less vulnerability you have!

Author information



Social Marketing Manager at Backblaze

Yev enjoys speed-walking on the beach. Speed-dating. Speed-writing blog posts. The film Speed. Speedy technology. Speedy Gonzales. And Speedos. But mostly technology.

Follow Yev on:

Twitter: @YevP | LinkedIn: Yev Pusin | Google+: Yev Pusin

The post Backblaze + Time Machine = ♥ appeared first on Backblaze Blog | The Life of a Cloud Backup Company.

Introducing AcousticBrainz

This post was syndicated from: and was written by: n8willis. Original post: at

MusicBrainz, the not-for-profit project that maintains an assortment of “open content” music metadata databases, has announced a new effort named AcousticBrainz. AcousticBrainz is designed to be an open, crowd-sourced database cataloging various “audio features” of music, including “low-level spectral information such as tempo, and additional high level descriptors for genres, moods, keys, scales and much more.” The data collected is more comprehensive than MusicBrainz’s existing AcoustID database, which deals only with acoustic fingerprinting for song recognition. The new project is a partnership with the Music Technology Group at Universitat Pompeu Fabra, and uses that group’s free-software toolkit Essentia to perform its acoustic analyses. A follow-up digs into the AcousticBrainz analysis of the project’s initial 650,000-track data set, including examinations of genre, mood, key, and other factors.

Version 2 of the kdbus patches posted

This post was syndicated from: and was written by: jake. Original post: at

The second version of the kdbus patches has been posted to the Linux kernel mailing list by Greg Kroah-Hartman. The biggest change since the original patch set (which we looked at in early November) is that kdbus now provides a filesystem-based interface (kdbusfs) rather than the /dev/kdbus device-based interface. There are lots of other changes in response to v1 review comments as well. “kdbus is a kernel-level IPC implementation that aims for resemblance to [the] protocol layer with the existing userspace D-Bus daemon while enabling some features that couldn’t be implemented before in userspace.”

TorrentFreak: U.S. Copyright Alert System Security Could Be Improved, Review Finds

This post was syndicated from: TorrentFreak and was written by: Ernesto. Original post: at TorrentFreak

February last year the MPAA, RIAA and five major Internet providers in the United States launched their “six strikes” anti-piracy plan.

The Copyright Alert System’s main goal is to inform subscribers that their Internet connections are being used to share copyrighted material without permission. These alerts start out friendly in tone, but repeat infringers face a temporary disconnection from the Internet or other mitigation measures.

The evidence behind the accusations is provided by MarkMonitor, which monitors BitTorrent users’ activities on copyright holders’ behalf. The overseeing Center for Copyright Information (CCI) previously hired an impartial and independent technology expert to review the system, hoping to gain trust from the public.

Their first pick, Stroz Friedberg, turned out to be not so impartial after all, as the company had previously worked as a lobbyist for the RIAA. To correct this unfortunate choice, CCI assigned Professor Avi Rubin of Harbor Labs to re-examine the system.

This week CCI informed us that a summary of Harbor Labs’s findings is now available to the public. The full review is not being published due to the vast amount of confidential information it contains, but the overview of the findings does provide some interesting details.

Overall, Harbor Labs concludes that the evidence gathering system is solid and that false positives, cases where innocent subscribers are accused, are reasonably minimized.

“We conclude, based on our review, that the MarkMonitor AntiPiracy system is designed to ensure that there are no false positives under reasonable and realistic assumptions. Moreover, the system produces thorough case data for alleged infringement tracking.”

However, there is some room for improvement. For example, MarkMonitor could implement additional testing to ensure that false positives and human errors are indeed caught.

“… we believe that the system would benefit from additional testing and that the existing structure leaves open the potential for preventable failures. Additionally, we recommend that certain elements of operational security be enhanced,” Harbor Labs writes.

In addition, the collected evidence may need further protections to ensure that it can’t be tampered with or fall into the wrong hands.

“… we believe that this collected evidence and other potentially sensitive data is not adequately controlled. While MarkMonitor does protect the data from outside parties, its protection against inside threats (e.g., potential rogue employees) is minimal in terms of both policy and technical enforcement.”

The full recommendations as detailed in the report are as follows:


The CCI is happy with the new results, which they say confirm the findings of the earlier Stroz Friedberg review.

“The Harbor Labs report reaffirms the findings from our first report – conducted by Stroz Friedberg – that the CAS is well designed and functioning as we hoped,” CCI informs TF.

In the months to come the operators of the Copyright Alert System will continue to work with copyright holders to make further enhancements and modifications to their processes.

“As the CAS exits the initial ramp-up period, CCI has been assured by our content owners that they have taken all recommendations made within both reports into account and are continuing to focus on maintaining the robust system that minimizes false positives and protects customer security and privacy,” CCI adds.

Meanwhile, they will continue to alert Internet subscribers to possible infringements. After nearly two years copyright holders have warned several million users, hoping to convert them to legal alternatives.

Thus far there’s no evidence that Copyright Alerts have had a significant impact on piracy rates. However, the voluntary agreement model is being widely embraced by various stakeholders and similar schemes are in the making in both the UK and Australia.


Krebs on Security: Convicted ID Thief, Tax Fraudster Now Fugitive

This post was syndicated from: Krebs on Security and was written by: BrianKrebs. Original post: at Krebs on Security

In April 2014, this blog featured a story about Lance Ealy, an Ohio man arrested last year for buying Social Security numbers and banking information from an underground identity theft service that relied in part on data obtained through a company owned by big-three credit bureau Experian. Earlier this week, Ealy was convicted of using the data to fraudulently claim tax refunds with the IRS in the names of more than 175 U.S. citizens, but not before he snipped his monitoring anklet and skipped town.


Lance Ealy, in selfie he uploaded to Twitter before absconding.

On Nov. 18, a jury in Ohio convicted Ealy, 28, on all 46 charges, including aggravated identity theft, and wire and mail fraud. Government prosecutors presented evidence that Ealy had purchased Social Security numbers and financial data on hundreds of consumers, using an identity theft service called (later renamed The jury found that Ealy used that information to fraudulently file at least 179 tax refund requests with the Internal Revenue Service, and to open up bank accounts in other victims’ names — accounts he set up to receive and withdraw tens of thousands of dollars in refund payments from the IRS.

The identity theft service that Ealy used was dismantled in 2013, after investigators with the U.S. Secret Service arrested its proprietor and began tracking and finding many of his customers. Investigators later discovered that the service’s owner had obtained much of the consumer data from data brokers by posing as a private investigator based in the United States.

In reality, the service’s owner, Ngo, was a Vietnamese man paying for his accounts at data brokers using cash wire transfers from a bank in Singapore. Among the companies that Ngo signed up with was Court Ventures, a California company that was bought by credit bureau Experian nine months before the government shut down the service.

Court records show that Ealy went to great lengths to delay his trial, and even reached out to this reporter hoping that I would write about his allegations that everyone from his lawyer to the judge in the case was somehow biased against him or unfit to participate in his trial. Early on, Ealy fired his attorney and opted to represent himself. When the court appointed him a public defender, Ealy again chose to represent himself.

“Mr. Ealy’s motions were in a lot of respects common delay tactics that defendants use to try to avoid the inevitability of a trial,” said Alex Sistla, an assistant U.S. attorney in Ohio who helped prosecute the case.

Ealy also continued to steal peoples’ identities while he was on trial (although no longer buying from, according to the government. His bail was revoked for several months, but in October the judge in the case ordered him released on a surety bond.

It is said that a man who represents himself in court has a fool for a client, and this seems doubly true when facing criminal charges by the U.S. government. Ealy’s trial lasted 11 days and involved more than 70 witnesses, many of them ID theft victims. His last appearance in court was on Friday. When investigators checked in on Ealy at his home over the weekend, they found his electronic monitoring bracelet but not Ealy.

Ealy faces up to 10 years in prison on each count of possessing 15 or more unauthorized access devices with intent to defraud and using unauthorized access devices to obtain items of $1,000 or more in value; up to five years in prison on each count of filing false claims for income tax refunds with the IRS; up to 20 years in prison on each count of wire fraud and each count of mail fraud; and mandatory two-year sentences on each count of aggravated identity theft that must run consecutive to whatever sentence may ultimately be handed down. Each count of conviction also carries a fine of up to $250,000.

I hope they find Mr. Ealy soon and lock him up for a very long time. Unfortunately, he is one of countless fraudsters perpetrating this costly and disruptive form of identity theft. In 2014, both my sister and I were the victims of tax ID theft, learning that unknown fraudsters had already filed tax refunds in our names when we each filed our taxes with the IRS.

I would advise all U.S. readers to request a tax filing PIN from the IRS (sadly, it turns out that I applied for mine in February, only days after the thieves filed my tax return). If approved, the PIN is required on any tax return filed for that consumer before a return can be accepted. To start the process of applying for a tax return PIN from the IRS, check out the steps at this link. You will almost certainly need to file an IRS form 14039 (PDF), and provide scanned or photocopied records, such as a driver’s license or passport.

To read more about other ID thieves who were customers of that the Secret Service has nabbed and put on trial, check out the stories in this series. Ealy’s account on Twitter is also an eye-opener.

TorrentFreak: BitTorrent Users are Avid, Eclectic Content Buyers, Survey Finds

This post was syndicated from: TorrentFreak and was written by: Andy. Original post: at TorrentFreak

Each month 150-170 million Internet users share files using the BitTorrent protocol, a massive audience by most standards. The common perception is that these people are only interested in obtaining content for free.

However, studies have found that file-sharers are often more engaged than the average consumer, as much was admitted by the RIAA back in 2012. There’s little doubt that within those millions of sharers lie people spending plenty of money on content and entertainment.

To get a closer look, in September BitTorrent Inc. conducted a survey among a sample of its users. In all, 2,500 people responded and now the company has published the results. The figures aren’t broken down into age groups, but BitTorrent Inc. informs TF that BitTorrent users trend towards young and male.


From its survey the company found that 50% of respondents buy music each month, with a sway towards albums rather than singles (44% v 32%). BitTorrent users are reported as 170% more likely to have paid for a digital music download in the past six months than Joe Public.

Citing figures from the RIAA, BitTorrent Inc. says its users are also 8x more likely than the average Internet user to pay for a streaming music service, with 16% of BitTorrent users and 2% of the general public holding such an account.

Perhaps a little unexpectedly, supposedly tech-savvy torrent users are still buying CDs and vinyl, with 45% and 10% respectively reporting a purchase in the past 12 months. BitTorrent Inc. says that the latter represents users “engaging and unpacking art as a multimedia object”, a clear reference to how the company perceives its BitTorrent Bundles.

On average, BitTorrent Inc. says its user base spends $48 a year on music, with 31% spending more than $100 annually.



When it comes to movies, 47% of respondents said they’d paid for a theater ticket in the preceding 12 months, up on the 38% who purchased a DVD or Blu-ray disc during the same period.

Users with active movie streaming accounts and those making digital movie purchases tied at 23%, with DVD rental (22%) and digital rental (16%) bringing up the rear.

All told, BitTorrent Inc. says that 52% of respondents buy movies on a monthly basis with the average annual spend amounting to $54. More than a third say they spend in excess of $100.


So do the results of the survey suggest that BitTorrent Inc.’s users have a lot to offer the market and if so, what?

“The results confirm what we knew already, that our users are super fans. They are consumers of content and are eager to reward artists for their work,” Christian Averill, BitTorrent Inc.’s Director of Communications, told TF.

“BitTorrent Bundle was started based on this premise and we have more than 10,000 artists now signed up, with more to come. With 90% of each purchase going to the content creators, BitTorrent Bundle is the most artist friendly, direct-to-fan distribution platform on the market.”

It seems likely that promoting and shifting Bundles was a major motivator for BitTorrent Inc. to carry out the survey and by showing that torrent users aren’t shy to part with their cash, more artists like Thom Yorke will hopefully be prepared to engage with BitTorrent Inc.’s fanbase.

Also of note is the way BitTorrent Inc. is trying to position that fanbase or, indeed, how that fanbase has positioned itself. While rock (20%), electronic (15%) and pop (13%) took the top spots in terms of genre popularity among users, 23% described their tastes as a vague “other”. Overall, 61% of respondents described their musical tastes as “eclectic”.

“[Our] users are engaged in the creative community and they have diverse taste. They also do not define themselves by traditional genres. We feel this is a true representation about how fans view themselves universally these days. They are eclectic,” Averill concludes.

While monetizing content remains a key focus for BitTorrent Inc., the company is also making strides towards monetizing its distribution tools. Last evening uTorrent Plus was replaced by uTorrent Pro (Windows), an upgraded client offering torrent streaming, an inbuilt player, video file converter and anti-virus features. The ad-free client (more details here) is available for $19.95 per year.


Linux How-Tos and Linux Tutorials: Beginning Git and Github for Linux Users

This post was syndicated from: Linux How-Tos and Linux Tutorials and was written by: Carla Schroder. Original post: at Linux How-Tos and Linux Tutorials

fig-1 github

The Git distributed revision control system is a sweet step up from Subversion, CVS, Mercurial, and all those others we’ve tried and made do with. It’s great for distributed development, when you have multiple contributors working on the same project, and it is excellent for safely trying out all kinds of crazy changes. We’re going to use a free Github account for practice so we can jump right in and start doing stuff.

Conceptually Git is different from other revision control systems. Older RCS tracked changes to files, which you can see when you poke around in their configuration files. Git’s approach is more like filesystem snapshots, where each commit or saved state is a complete snapshot rather than a file full of diffs. Git is space-efficient because it stores only changes in each snapshot, and links to unchanged files. All changes are checksummed, so you are assured of data integrity, and always being able to reverse changes.
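
The checksumming is easy to see first-hand: Git names every object it stores by the SHA-1 hash of its content, so identical content always produces the identical ID, on any machine and in any repository (this assumes git is installed; the example string is arbitrary).

```shell
# Every Git object ID is the SHA-1 of the object's content (plus a small
# header), which is what makes silent corruption detectable.
echo 'test content' | git hash-object --stdin
# prints: d670460b4b4aece5915caf5c68d12f560a9fe3e4
```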

Git is very fast, because your work is all done on your local PC and then pushed to a remote repository. This makes everything you do totally safe, because nothing affects the remote repo until you push changes to it. And even then you have one more failsafe: branches. Git’s branching system is brilliant. Create a branch from your master branch, perform all manner of awful experiments, and then nuke it or push it upstream. When it’s upstream other contributors can work on it, or you can create a pull request to have it reviewed, and then after it passes muster merge it into the master branch.

So what if, after all this caution, it still blows up the master branch? No worries, because you can revert your merge.
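
Here is what that safety net looks like in practice, as a throwaway demonstration (the repo, file names, and commit messages are all made up; assumes git is installed):

```shell
# Merge an experimental branch into the main line, then undo the merge.
set -e
demo=$(mktemp -d) && cd "$demo"
git init -q .
git config user.email 'demo@example.com'
git config user.name  'Demo'

echo 'stable' > app.txt
git add app.txt && git commit -qm 'initial commit'

git checkout -qb experiment              # branch off and break things freely
echo 'broken change' >> app.txt
git commit -qam 'risky experiment'

git checkout -q -                        # back to the main branch
git merge -q --no-ff -m 'merge experiment' experiment

git revert -m 1 --no-edit HEAD           # -m 1 keeps the mainline side
cat app.txt                              # back to 'stable'
```

The -m 1 flag tells git revert which parent of the merge commit is the mainline; history still records both the merge and its reversal, which is exactly the audit trail you want.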

Practice on Github

The quickest way to get some good hands-on Git practice is by opening a free Github account. Figure 1 shows my Github testbed, named playground. New Github accounts come with a prefab repo populated by a README file, license, and buttons for quickly creating bug reports, pull requests, Wikis, and other useful features.

Free Github accounts only allow public repositories. This allows anyone to see and download your files. However, no one can make commits unless they have a Github account and you have approved them as a collaborator. If you want a private repo hidden from the world you need a paid membership. Seven bucks a month gives you five private repos, and unlimited public repos with unlimited contributors.

Github kindly provides copy-and-paste URLs for cloning repositories. So you can create a directory on your computer for your repository, and then clone into it:

$ mkdir git-repos
$ cd git-repos
$ git clone
Cloning into 'playground'...
remote: Counting objects: 4, done.
remote: Compressing objects: 100% (4/4), done.
remote: Total 4 (delta 0), reused 0 (delta 0)
Unpacking objects: 100% (4/4), done.
Checking connectivity... done.
$ ls playground/

All the files are copied to your computer, and you can read, edit, and delete them just like any other files. Now let’s make some improvements and learn the wonderfulness of Git branching.


Git branches are gloriously excellent for safely making and testing changes. You can create and destroy them all you want. Let’s make one for editing the README file:

$ cd playground
$ git checkout -b test
Switched to a new branch 'test'

Run git status to see where you are:

$ git status
On branch test
nothing to commit, working directory clean

What branches have you created?

$ git branch
  master
* test

The asterisk indicates which branch you are on. master is your main branch, the one you never want to make changes to until they have been tested in a branch. Now make some changes to the README file, and then check your status again:

$ git status
On branch test
Changes not staged for commit:
  (use "git add <file>..." to update what will be committed)
  (use "git checkout -- <file>..." to discard changes in working directory)
no changes added to commit (use "git add" and/or "git commit -a")

Isn’t that nice? Git tells you what is going on and gives hints. To discard your changes, run

$ git checkout -- <file>

Or you can delete the whole branch:

$ git checkout master
$ git branch -D test

Or you can have Git track the file:

$ git add <file>
$ git status
On branch test
Changes to be committed:
  (use "git reset HEAD <file>..." to unstage)

At this stage Git is tracking the file, and it is available to all of your branches. Git gives you a helpful hint: if you change your mind and don’t want Git to track this file, run git reset HEAD <file>. This, and all Git activity, is tracked in the .git directory in your repository. Everything is in plain text files: file contents, checksums, which user did what, remote and local repos, everything.

What if you have multiple files to add? You can list each one, for example git add file1 file2 file3, or add all files with git add *.

When you have deleted files, git rm filename tells Git to stop tracking them; since they are already gone from disk it only updates the index. (To untrack a file without deleting it from your system, use git rm --cached filename.) If you have a lot of deleted files, git add -u stages all the deletions at once.
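A short throwaway sketch of the difference (file names and the demo identity are invented):

```shell
set -e
# Throwaway repo showing the two deletion-related commands.
rm -rf /tmp/rm-demo && mkdir /tmp/rm-demo && cd /tmp/rm-demo
git init -q .
git config demo && git config user.email demo@example.com
touch keep.txt old1.txt old2.txt
git add . && git commit -qm "initial"
git rm -q --cached keep.txt    # stop tracking, but leave the file on disk
ls keep.txt                    # still there
rm old1.txt old2.txt           # deleted outside of Git
git add -u                     # stages both deletions in one step
git status --short
```

The final status shows all three removals staged, while keep.txt survives on disk as an untracked file.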

Committing Files

Now let’s commit our changed file. This records the change on the current branch, so your other branches won’t see it:

$ git commit
[test 5badf67] changes to readme
 1 file changed, 1 insertion(+)

You’ll be asked to supply a commit message. It is a good practice to make your commit messages detailed and specific, but for now we’re not going to be too fussy. Now your edited file has been committed to the branch test. It has not been merged with master or pushed upstream; it’s just sitting there. This is a good stopping point if you need to go do something else.
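If you prefer to skip the editor, you can supply the message inline; a common convention is a short summary line plus a detail paragraph. A throwaway sketch (repo path and message text are invented):

```shell
set -e
# Throwaway repo: a commit with a subject line and a body paragraph.
rm -rf /tmp/msg-demo && mkdir /tmp/msg-demo && cd /tmp/msg-demo
git init -q .
git config demo && git config user.email demo@example.com
echo "hello" > README
git add README
# Two -m flags: the first is the summary line, the second the detailed body
git commit -qm "Add README" -m "Explain what the playground repo is for."
git log -1 --format=%s   # prints: Add README
```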

What if you have multiple files to commit? You can commit specific files, or all available files:

$ git commit file1 file2
$ git commit -a

How do you know which commits have not yet been pushed upstream, but are still sitting in branches? git status won’t tell you, so use this command:

$ git log --branches --not --remotes
commit 5badf677c55d0c53ca13d9753344a2a71de03199
Author: Carla Schroder 
Date:   Thu Nov 20 10:19:38 2014 -0800
    changes to readme

This lists unpushed commits; when it returns nothing, all commits have been pushed upstream. Now let’s push this commit upstream:

$ git push origin test
Counting objects: 7, done.
Delta compression using up to 8 threads.
Compressing objects: 100% (3/3), done.
Writing objects: 100% (3/3), 324 bytes | 0 bytes/s, done.
Total 3 (delta 1), reused 0 (delta 0)
 * [new branch]      test -> test

You may be asked for your Github login credentials. If you enable Git’s credential cache it stores them for 15 minutes by default, and you can change this. This example sets the cache timeout to two hours:

$ git config --global credential.helper 'cache --timeout=7200'

Now go to Github and look at your new branch. Github lists all of your branches, and you can preview your files in the different branches (figure 2).

fig-2 github

Now you can create a pull request by clicking the Compare & Pull Request button. This gives you another chance to review your changes before merging with master. You can also generate pull requests from the command line on your computer, but it’s rather cumbersome, to the point that you can find all kinds of tools all over the Web for easing the job. So, for now, we’ll use the nice clicky Github buttons.

Github lets you view your files in plain text, and it also supports many markup languages so you can see a generated preview. At this point you can push more changes to the same branch. You can also make edits directly on Github, but if you do, your local copy falls behind and you can run into conflicts between the online version and your local version until you pull. When you are satisfied with your changes, click the Merge pull request button. You’ll have to click twice. Github automatically examines your pull request to see if it can be merged cleanly, and if there are conflicts you’ll have to fix them.
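Fixing a conflict is less scary than it sounds. Here is a throwaway local sketch (branch names, file names, and the demo identity are invented) that manufactures a conflict and resolves it by hand:

```shell
set -e
# Throwaway repo: create a conflict, then resolve it by hand.
rm -rf /tmp/conflict-demo && mkdir /tmp/conflict-demo && cd /tmp/conflict-demo
git init -q .
git config demo && git config user.email demo@example.com
echo "original" > notes.txt && git add notes.txt && git commit -qm "base"
main=$(git symbolic-ref --short HEAD)
git checkout -qb test
echo "branch version" > notes.txt && git commit -qam "branch edit"
git checkout -q "$main"
echo "master version" > notes.txt && git commit -qam "master edit"
git merge test || true              # fails and leaves conflict markers
grep "<<<<<<<" notes.txt            # shows the marker Git inserted
echo "merged version" > notes.txt   # edit the file to the version you want
git add notes.txt && git commit -qm "merge test, conflict resolved"
```

You edit the conflicted file to the content you actually want, stage it, and commit; that commit completes the merge.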

Another nice Github feature is when you have multiple branches, you can choose which one to merge into by clicking the Edit button at the right of the branches list (figure 3).

fig-3 github

After you have merged, click the Delete Branch button to keep everything tidy. Then on your local computer, delete the branch by first pulling the changes to master, and then you can delete your branch without Git complaining:

$ git checkout master
$ git pull origin master
$ git branch -d test

You can force-delete a branch with an uppercase -D:

$ git branch -D test

Reverting Changes

Again, the Github pointy-clicky way is easiest. It shows you a list of all changes, and you can revert any of them by clicking the appropriate button. You can even restore deleted branches.
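For the simplest case, reverting the most recent commit from the command line is painless. A throwaway sketch (paths and the demo identity are invented):

```shell
set -e
# Throwaway repo: undo the most recent commit with a new commit.
rm -rf /tmp/undo-demo && mkdir /tmp/undo-demo && cd /tmp/undo-demo
git init -q .
git config demo && git config user.email demo@example.com
echo "good" > file.txt && git add file.txt && git commit -qm "good"
echo "oops" > file.txt && git commit -qam "mistake"
git revert --no-edit HEAD   # history keeps all three commits
cat file.txt                # prints: good
```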

You can also do all of these tasks exclusively from your command line, which is a great topic for another day because it’s complex. For an exhaustive Git tutorial try the free Git book, and you can test everything with your Github account.

TorrentFreak: Torrents Good For a Third of all Internet Traffic in Asia-Pacific

This post was syndicated from: TorrentFreak and was written by: Ernesto. Original post: at TorrentFreak

Over the years we have been following various reports on changes in Internet traffic, specifically in relation to torrents.

One of the patterns that emerged with the rise of video streaming services is that BitTorrent is losing its share of total Internet traffic, in North America at least, where good legal services are available.

This downward spiral is confirmed by the latest report from Sandvine, which reveals that torrent traffic is now responsible for ‘only’ 5% of all Internet traffic in North America during peak hours, compared to 10.3% last year.

In other countries, however, this decrease is not clearly visible. In Europe, for example, BitTorrent’s share of peak-hour Internet traffic has remained stable over the past two years at roughly 15%, while absolute traffic increased during the same period.

In the Asia-Pacific region there’s yet another trend. Here, torrents are booming, with BitTorrent traffic increasing more than 50% over the past year.


According to Sandvine, torrents now account for 32% of all traffic during peak hours, up from 21%. Since overall traffic also increased during the same period, absolute BitTorrent traffic has more than doubled.

Looking at upstream data alone, torrents are good for more than 55% of all traffic during peak hours.

One of the countries where unauthorized BitTorrent usage has been growing in recent years is Australia, which has one of the highest piracy rates in the world.

There are several reasons why torrents are growing in popularity, but the lack of good legal alternatives is expected to play an important role. It’s often hard or expensive to get access to the latest movies and TV-shows in this region.

It will be interesting to see whether this trend will reverse during the coming years as more legal services come online. Netflix’s arrival in Australia next year, for example, is bound to shake things up.

Source: TorrentFreak, for the latest info on copyright, file-sharing and anonymous VPN services.

SANS Internet Storm Center, InfoCON: green: Critical WordPress XSS Update, (Thu, Nov 20th)

This post was syndicated from: SANS Internet Storm Center, InfoCON: green and was written by: SANS Internet Storm Center, InfoCON: green. Original post: at SANS Internet Storm Center, InfoCON: green

Today, WordPress 4.0.1 was released, which addresses a critical XSS vulnerability (among other vulnerabilities). [1]

The XSS vulnerability deserves a bit more attention, as it is an all too common problem and often underestimated. First of all, why is XSS critical? It doesn’t allow direct data access like SQL injection, and it doesn’t allow code execution on the server. Or does it?

XSS does allow an attacker to modify the HTML of the site. With that, the attacker can easily modify form tags (think of the login form, changing the URL it submits its data to) or use XMLHttpRequest to conduct CSRF without being limited by the same-origin policy. The attacker will know what you type and will be able to change what you type. In short: the attacker is in full control. This is why XSS is critical.

The particular issue here was that WordPress allows some limited HTML tags in comments. This is always a very dangerous undertaking. The WordPress developers did attempt to implement the necessary safeguards: only certain tags are allowed, and even for these tags the code checked for unsafe attributes. Sadly, this check wasn’t done quite right. Remember that browsers will parse even somewhat malformed HTML just fine.

A better solution would probably have been to use a standard library instead of trying to do this themselves. HTML Purifier is one such library for PHP. Many developers shy away from using it as it is pretty bulky. But it is bulky for a reason: it tries to cover a lot of ground. It not only normalizes HTML and eliminates malformed markup, but also provides a rather flexible configuration file. Many lightweight alternatives, like the solution WordPress came up with, rely on regular expressions. Regular expressions are typically not the right tool to parse HTML. Too much can go wrong, starting with newlines and ending somewhere around multi-byte characters. In short: don’t use regular expressions to parse HTML (or XML), in particular for security.
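To see why regular expressions fall short, here is a deliberately naive toy "sanitizer" (not the actual WordPress code) that strips quoted on* event-handler attributes but misses an unquoted one, which browsers accept just as happily:

```shell
# A deliberately naive "sanitizer": strips only quoted on*="..." handlers.
sanitize() { sed -E 's/ on[a-z]+="[^"]*"//g'; }

echo '<img src=x onerror="alert(1)">' | sanitize   # prints: <img src=x>
echo '<img src=x onerror=alert(1)>'   | sanitize   # handler survives untouched
```

The pattern looks plausible, yet a trivially malformed variant sails straight through, which is exactly the class of mistake described above.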


Johannes B. Ullrich, Ph.D.

(c) SANS Internet Storm Center. Creative Commons Attribution-Noncommercial 3.0 United States License.