Posts tagged ‘Facebook’

Спирт, есенция и умора: 2014-04-18 “IPv6 Deployment in Sofia University” lecture at RIPE-SEE-3

This post was syndicated from: Спирт, есенция и умора and was written by: Vasil Kolev. Original post: at Спирт, есенция и умора

This is my talk from RIPE-SEE3, on the deployment of IPv6 in Sofia University. The presentation can be downloaded in pdf or odt format.

I’ve put the slide number or the notes/comments I have in [], the rest is as close to what I said as I remember, with some corrections.


Hello. I’m going to talk about the IPv6 deployment in Sofia University. It’s an important example of how this can be done on a medium-to-large scale, with no hardware upgrades or hidden costs.

This presentation was prepared by Vesselin Kolev, who did most of the architecture of the network there and was involved in its operations until recently.

(I want to clarify, we’re not the same person, and we’re not brothers, I get that question a lot)


Please note that I’m only doing this presentation, I was not involved in the deployment or any network operations of the university’s network.


These are the people that did and operated this deployment. Most of them aren’t there any more – Vesselin Kolev is at Technion in Israel, Nikolay Nikolov and Vladislav Rusanov are at Tradeo, Hristo Dragolov is at Manson, Radoslav Buchakchiev is at Aalborg University, Ivan Yordanov is at Interoute, Georgi Naidenov is a freelancer. Stefan Dimitrov, Vladislav Georgiev and Mariana Petkova are still at the university, and some of them (and some of the new people there) are at this conference, so if you have questions related to the current operations there, they’ll probably be able to answer better than me.


So, let me start with the addressing.


This is the current unicast network that’s in use in the university. It was assigned around February 2011 and was used from the 11th of February 2011 onwards. It’s a /47, later on I’ll explain why.

Also please note that the maintenance of the records in the RIPE database for the network is done only with PGP-signed emails, to protect this from hijacking. As noted by one previous speaker, there are a lot of cases in RIPE NCC, where bad people try to steal prefixes (as their value is rising), and that’s possible mostly because there aren’t good security measures in place in a lot of LIRs.


Before that, the university used a /35 from SpectrumNet (since then bought by Mobiltel), until they got their own allocation.

It should be noted that in IPv6, the renumbering is a lot easier than in IPv4 and this was done very fast.


Now, on the allocation policy,


this is how unicast addresses are assigned. It’s based on RFC4291, and for the basic entities in the university (faculties) there’s a /60, for each backbone network, server farm or virtual machine bridge there’s a /64, and all the additional allocations are the same as the initial size (on request).
Also, allocations of separate /64 are available for special/specific projects.


The university also utilizes RFC4193 (unique local) addresses, for local restricted resources. The allocations are done on /32 basis.


Now, onto the intra- and inter-AS routing,


The software used is Quagga on CentOS.
There is a specific reason for using CentOS – the distribution is a recompilation of RHEL, and follows it closely, which means that it’s as stable as it gets – if there’s a security or other important update, just the pieces are backported to the current version of the software, which effectively means that you can have automatic updates running on your servers and routers and be confident that they won’t break horribly.
That’s in stark contrast with almost every router vendor I can think of.


The current transit providers for the university network are Unicom-B (the academic network in Bulgaria), and SpectrumNet (now owned by Mobiltel).
The university has private peering with Digital systems, Evolink (it changed its name from Lirex), ITD, Netissat and Neterra. It also provides an AS112 stub node.

(For the people that don’t know, AS112 is a somewhat-volunteer project running anycast nodes for the DNS servers of some zones that generate junk traffic, e.g. “”, or the reverse zones for the RFC1918 addresses)


This is the basic schema of the external connectivity. The university has two border routers, each of which is connected to both upstream providers (and to the private peers, but that would’ve cluttered the drawing too much). They’re interconnected through one of the MAN networks, and are in the Lozenetz campus (where the University Computing Center is, the main operator of the network) and the Rectorate.


These are the prefixes that the university originates. Here we can see why the university has a /47 – so it could be de-aggregated, for traffic engineering of the inbound traffic. That’s one problem that nothing has solved yet, and that would plague us for a lot more years…
Here each border router announces a /48, and both announce the /47, so they can effectively split the inbound traffic.

There’s also the IPv6 prefix for AS112, announced from both borders.


This is what every router should do for the prefix announcements it receives. Basically, everything from a private ASN is dropped, and all prefixes that are not in 2000::/3 (the unicast part of the IPv6 space), are shorter than /3 or longer than /48 are also dropped.
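As a rough illustration (not the university’s actual config – the prefix-list and filter names below are made up), this kind of inbound policy can be expressed in Quagga like this:

```
! accept only prefixes inside 2000::/3, between /3 and /48 long
ipv6 prefix-list pl-unicast-only seq 5 permit 2000::/3 le 48
ipv6 prefix-list pl-unicast-only seq 10 deny any
!
! reject paths containing a private ASN (64512-65534)
ip as-path access-list no-private-asn deny _(6451[2-9]|645[2-9][0-9]|64[6-9][0-9][0-9]|65[0-4][0-9][0-9]|655[0-2][0-9]|6553[0-4])_
ip as-path access-list no-private-asn permit .*
!
router bgp 64500
 address-family ipv6
  neighbor 2001:db8:ffff::1 prefix-list pl-unicast-only in
  neighbor 2001:db8:ffff::1 filter-list no-private-asn in
 exit-address-family
```

The ASN 64500 and the neighbor address are placeholders; the point is just the shape of the filters.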


Here you can see the schema of the backbone network – the two border routers, and the access routers of the faculties. They’re all under the administrative control of the network operations team.


For this schema to work efficiently, the two border routers do the job of route reflectors, and the access routers are route reflector clients – e.g. each access router has two BGP sessions, one with each border router, and that way it learns all the routes coming from the rest of the access routers.
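A hedged sketch of how such a route-reflector topology looks in Quagga (the ASN and addresses are made up, not the university’s):

```
! on a border router, acting as route reflector
router bgp 64500
 bgp cluster-id 10.0.0.1
 neighbor 2001:db8::10 remote-as 64500
 address-family ipv6
  neighbor 2001:db8::10 activate
  neighbor 2001:db8::10 route-reflector-client
 exit-address-family
!
! on an access router: just two iBGP sessions, one per border,
! instead of a full mesh with every other access router
router bgp 64500
 neighbor 2001:db8::1 remote-as 64500
 neighbor 2001:db8::2 remote-as 64500
 address-family ipv6
  neighbor 2001:db8::1 activate
  neighbor 2001:db8::2 activate
 exit-address-family
```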
This setup would’ve been unmanageable otherwise, as doing a full mesh of so many routers would’ve resulted in full mess.
[Ok, I didn't think of that pun at the presentation]

[back to 16]

The initial idea was to actually have the border routers be route reflectors only, and have all the access routers in the VLANs of the external BGP peers, for the traffic to flow directly and not have the two borders as bottlenecks. This wasn’t implemented because of administrative/layer8-9 issues.


This is how the core network connects with the faculty networks – the access router is connected in a network with the routers of the faculty, and via OSPF (or RIP in some cases) it redistributes a default route to them.

(Yes, RIP is used and as someone told me a few hours ago, if it’s stupid and it works, maybe it’s not that stupid)


Now here’s something I would’ve done differently :)

Both OSPF and RIP are secured using IPSec; here’s what the config looks like.


This is not something that you usually see, mostly because it’s harder to configure and because of the weirdness in the interoperability of different IPSec implementations. For a university network, where risk factors like students exist, this provides a layer of protection of the routing daemons (which, to be frank, aren’t the most secure software you can find) from random packets that can be sent in those segments.
It’s reasonable to accept that the kernel’s IPSec code is more secure than the security code of the routing daemons.
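The actual configuration was on the slide; as a rough sketch of the approach on Linux, manually keyed AH security associations can be set up with setkey (the SPI, key, and addresses below are placeholders – OSPFv3 uses multicast, so keys have to be configured manually rather than negotiated):

```
# /etc/setkey.conf (sketch)
flush;
spdflush;
# manually keyed AH SA towards the AllSPFRouters multicast group
add -6 fe80::1 ff02::5 ah 0x1000 -A hmac-sha1 "20-byte-secret-key!!";
# require AH on all outgoing OSPFv3 packets on this segment
spdadd -6 fe80::/10 ff02::5/128 any -P out ipsec ah/transport//require;
```

RIPng would need analogous entries for its multicast group (ff02::9).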

This is the only thing that this setup provides less than the other options – a pre-shared key is used, as there aren’t any implementations that have IKEv1 and can be used for this task.

Also, this is harder to operate and configure for untrained personnel, but the team decided to go ahead with the extra security.


And of course, there has to be some filtering.


Here’s a schema of one external link – in this case, to Neterra.


Here we can see the configuration of the packet filtering. This is basically an implementation of BCP38.

First, let me ask, how many of you know what BCP38 is?
[about 25-30% of the audience raise their hands]
And how many of you do it in their own networks?
[Three times less people raise their hands]
OK, for the rest – please google BCP38, and deploy it :)
[Actually, this RFC has a whole site dedicated to it]

Basically, what BCP38 says is that you must not allow inbound traffic from source addresses that are not routed through that interface. In the Cisco world it’s known as RPF (Reverse path filtering), AFAIK in Juniper it’s the same.

In Linux, this is done best using the routing subsystem. What we can see here is that on the external interface we block everything that’s coming from addresses in our network, and on the internal – anything that’s not from the prefixes we originate.
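The slides had the real rules; as a hedged illustration of the routing-subsystem approach, with the documentation prefix 2001:db8::/47 standing in for the university’s real /47 and made-up interface names:

```shell
# on a border router: BCP38 via policy routing
# external interface: drop packets claiming our own addresses as source
ip -6 rule add iif eth0 from 2001:db8::/47 blackhole
# internal interface: drop packets whose source we don't originate
ip -6 rule add iif eth1 not from 2001:db8::/47 blackhole
```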


Here we can see the setup on an access router (as we can see, BCP38 is deployed on both the border and access routers). Here we can differentiate and allow only the network of the end-users.


On the border routers, there’s also some filtering with IPtables, to disallow anyone outside of the backbone network to connect to the BGP daemon, and also to disallow anyone to query the NTP server.
(what’s not seen here is that connection tracking is disabled for the rest of the traffic, not to overload it)
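A sketch of what such ip6tables rules could look like (the backbone prefix 2001:db8:0:1::/64 is a placeholder, not the real one):

```shell
# don't track transit connections, so the router isn't overloaded
ip6tables -t raw -A PREROUTING -j NOTRACK
# BGP: only backbone neighbors may reach the daemon
ip6tables -A INPUT -p tcp --dport 179 -s 2001:db8:0:1::/64 -j ACCEPT
ip6tables -A INPUT -p tcp --dport 179 -j DROP
# NTP: nobody outside our network may query the local server
ip6tables -A INPUT -p udp --dport 123 -s 2001:db8::/47 -j ACCEPT
ip6tables -A INPUT -p udp --dport 123 -j DROP
```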


On the access routers, we also have the filtering for the BGP daemon and the NTP server, but also


we filter out any traffic that’s not related to a connection that was established from the inside. This is the usually cited “benefit” of NAT and the reason for some ill-informed people to ask for NAT in IPv6.

With this, the big-bad-internet can’t connect directly to the workstations, which, let’s be frank, is a very good idea, knowing how bad the workstation OS security is.
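The stateful part boils down to something like the following (a sketch, with “vlan-users” standing in for whatever the end-user segment is called):

```shell
# towards the workstations, only let through traffic belonging to
# connections that were opened from the inside
ip6tables -A FORWARD -o vlan-users -m state --state ESTABLISHED,RELATED -j ACCEPT
ip6tables -A FORWARD -o vlan-users -j DROP
```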


The relevant services provided by the university have IPv6 enabled.


Most of the web servers have had support since 2007.

The DNS servers too, and there’s also a very interesting anycast implementation for the recursive resolvers that I’ll talk in detail about later.

The email services also have supported IPv6 since 2007, and they got their first spam message over IPv6 on the 12th of December, 2007 (which should be some kind of record, I got mine two years ago, from a server in some space-related institute in Russia).

[31] has been IPv6 enabled since 2010, it’s a service for mirroring different projects (for example Debian).

The university also operates some OpenVPN concentrators, that are accessible over IPv6 and can provide IPv6 services.


The coverage of the deployment is very good – in the first year they had more than 50% of the workstations in the Lozenetz campus with IPv6 connectivity, and today more than 85% of the workstations in the whole university have IPv6 connectivity.
(the rest don’t have it because of two reasons – either they are too old to support it, or they are not turning it on because they had problems with it at the beginning and are refusing to use it)

Around 20% of the traffic that the university does is IPv6.
[question from the public - what is in that traffic? We should be showing people that there's actually IPv6 content out there, like facebook and youtube]

From my understanding (the university doesn’t have a policy to snoop on the users’ traffic) this is either to google(youtube), or to the CERN grid (which is IPv6 only). Yes, there actually exists a lot of content over IPv6, netflix/hulu have already deployed it, and it’s just a few of the big sites (twitter is an example) that are still holding back.

The university provides both ways for the configuration of end-nodes – Router Advertisement (RA) and DHCPv6. For most cases they’re both needed, as RA can’t provide DNS servers, and DHCPv6 can’t provide a gateway (which is something that’s really annoying to most people doing deployments).
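A minimal sketch of such a combined RA + DHCPv6 setup (radvd plus ISC dhcpd; the prefix and interface name are placeholders, and the resolver address echoes the FEC0:: anycast resolvers mentioned later):

```
# /etc/radvd.conf - announce the prefix and point hosts at DHCPv6
interface vlan-users
{
    AdvSendAdvert on;
    AdvManagedFlag on;       # "get your address via DHCPv6"
    AdvOtherConfigFlag on;   # "get DNS etc. via DHCPv6"
    prefix 2001:db8:0:10::/64
    {
        AdvOnLink on;
        AdvAutonomous off;   # no SLAAC; addresses come from DHCPv6
    };
};

# /etc/dhcpd6.conf fragment - hand out addresses and DNS servers
subnet6 2001:db8:0:10::/64 {
    range6 2001:db8:0:10::100 2001:db8:0:10::1ff;
    option dhcp6.name-servers fec0:0:0:ffff::1;
}
```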


Here’s how the default route is propagated in the network.


This was actually a surprise for me – there are actually two default routes in IPv6. One is ::/0, which includes the whole address space, and there is 2000::/3, which includes only the unicast space. There is a use for sending just 2000::/3, to be able to fence devices/virtual machines from the internet, but to leave them access to local resources (so they can update after a security breach, for example).


Here is how you redistribute ::/0, by first creating a null route and adding it in the configuration.


And here’s how it’s propagated in the network, using OSPF.
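In Quagga terms this boils down to roughly the following (a sketch; the static route goes in zebra’s config, the redistribution in ospf6d’s):

```
! zebra: create a null route for the default
ipv6 route ::/0 Null0
!
! ospf6d: push it into the IGP
router ospf6
 redistribute static
```

The same pattern works for 2000::/3 in the next slides.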


We can do the same for 2000::/3,


and push it to the virtual machines, who also have connectivity to the local resources through a different path.


Now, this is something very interesting that was done to have a highly-available recursive DNS resolver for the whole network.

[40 and 41 are the same, sorry for the mistake]


This is how a node works. Basically, there’s a small program (“manager”) that checks every millisecond if the BIND daemon responds, and once it starts responding, it adds the anycast resolver addresses to the loopback interface and redistributes a route through OSPF.
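The idea of the manager can be sketched in a few lines of shell (this is my illustration of the mechanism, not the actual program; the address is the FEC0 one mentioned below, and the route appears in OSPF via ospf6d redistributing connected routes):

```shell
#!/bin/sh
# watchdog: attach/detach the anycast resolver address on lo
# depending on whether the local name server answers
RESOLVER=fec0:0:0:ffff::1
up=0
while sleep 0.001; do
    if dig +time=1 +tries=1 @::1 . SOA >/dev/null 2>&1; then
        if [ "$up" -eq 0 ]; then
            ip -6 addr add "$RESOLVER/128" dev lo   # announce
            up=1
        fi
    elif [ "$up" -eq 1 ]; then
        ip -6 addr del "$RESOLVER/128" dev lo       # withdraw
        up=0
    fi
done
```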

(note the addresses used – FEC0::.. – turns out that this is the default resolver for MS windows, so this was the easiest choice. Seems that even in the land of IPv6 we still get dictated by MS what to do)


If the name server daemon dies, the manager withdraws the addresses.

Also, if the whole node dies, the OSPF announce will be withdrawn with it.


Here you can see what changes when a node is active. If one node is down, the routers will see the route to the other one, and send traffic accordingly.


And if they’re both operational, the routers in both campuses get the routes to the node in the other campus with lower local preference, so they route traffic just to their own node, and this way the traffic is load-balanced.


Thank you, and before the questions, I just want to show you again


the people that made this deployment. They deserve the credit for it :)

[In the questions section, Stefan Dimitrov from SU stood up and explained that now they have three border routers, the third one is at the 4th kilometer campus]

Блогът на Юруков: The reports of personal data abuse and why we should stop talking about it

This post was syndicated from: Блогът на Юруков and was written by: Боян Юруков. Original post: at Блогът на Юруков

On Monday I wrote about how we can easily file a complaint that our personal data was used in the parties’ signature lists. With the article I published a survey for those who had found their personal ID number (EGN) in the lists without having signed. I closed the survey today at 10 o’clock, and in exactly three and a half days 186 people had filed a report. I discounted 22 of the entries. Most were duplicates, and 3-4 were typed by hand instead of copied from the CIK (Central Election Commission) page. Even with them, however, the results do not look any different. The reports point to 167 violations, because three people found their EGN in the lists of more than one party.

Click on the image for a full-size version of the graph.


In this Google Docs document you will find the results sorted by number of violations, ratio of violations to signatures in the lists, signatures in the lists, and order of registration with the CIK. I decided not to share the original reports, because many of them contain personal information. Here you can also download the table as an Excel file.

It is clearly visible that the most reports concern the Union of Communists. They are followed by Bulgaria Without Censorship, two nationalist parties and the Christian Democrats. Coalition for Bulgaria has 3 reports. GERB – 1, and ABV and DPS – 0.

It is important to understand

An important detail about these results is that I cannot confirm or reject any single one of these reports. Those who filled in the survey claim they found their EGN in a given list and that they did not sign it themselves. The goal here is simply to measure the approximate distribution of the problematic signatures.

Filling in the survey is not equivalent to a complaint to the Commission for Personal Data Protection (KZLD). That you have to do yourself, and it is relatively easy. It is important to know that proving a crime or an administrative violation will be extremely difficult. The former is the responsibility of the prosecution, the latter – of the KZLD. First, it must be proven beyond any doubt that the signature was forged. Second, it must be proven that a given person’s personal data was used without their consent. Interrogations are already being conducted to that end. The parties’ defense will be that the law does not require their activists to ask for ID cards and to verify the signatures. If that does not work, they will say that they are not responsible for the actions of individual activists.

Speculation about the violations

On Tuesday I published a preliminary summary based on 87 reports. I shared a table similar to the one above on Facebook and in the comments on this blog. On Wednesday morning I learned that my table had been copied with the numbers swapped, adding between 15 and 25 violations each for GERB, RB and ABV. The parties of the ruling coalition were left untouched. Today that article has been taken down, but it can still be found in Google Cache. You can find a comparison of the two tables on my Facebook wall. In connection with that publication, the CIK issued a statement yesterday, which I see they have repeated today:

Regarding data published in the media about the number of citizens who have declared abuse of their personal data by parties and coalitions registered to participate in the elections, and about the registered parties and coalitions connected with these abuses, the Central Election Commission announces that no such data has been provided by the Central Election Commission.

The Central Election Commission does not perform such aggregation and publication of data, which is subject to verification by other competent authorities – the Commission for Personal Data Protection and the Prosecutor’s Office of the Republic of Bulgaria.

Kostinbrod in new packaging

After things drag on, the prosecution washes its hands because it cannot prove any crimes, and the KZLD bends the definitions however suits it this time, all that will remain for us is the bitter taste of a scandal swept under the rug. It will be the same as with the Kostinbrod printing house at the last elections – it is not clear what happened; it is an open secret that it is standard practice and all parties do it; someone thought to bring it to light and use it as a political trump card; a scandal erupts, the media pounce like hyenas, and we all forget what the elections are really about.

The truth is that the table and the numbers from the survey have no significance whatsoever. 180 reports of violations against 132,000 signatures submitted by the parties is nothing. The problem is not the CIK or how the form works, as I wrote a few days ago. Nor is the problem the parties, although it is precisely the lack of morals in the leaders of every one of them that has led, directly or indirectly, to the forgeries. The problem is the way these signatures are collected and the parties’ practice in this particular case.

The bigger problem, however, is that we are talking about this at all. In recent months I have not heard a single word about the work of the European Parliament, about the record of the current MEPs, or about the new candidates. The exceptions are a few faces whom we discuss not because of their qualities, but because of the acute lack of such qualities and the scandals connected with them. These elections could just as well have been for mayor or director of a toilet, and nothing would have changed in the public debate or the media commentary.

The Hacker Factor Blog: Timing Isn’t Everything

This post was syndicated from: The Hacker Factor Blog and was written by: The Hacker Factor Blog. Original post: at The Hacker Factor Blog

Earlier this week, iMediaEthics tweeted a link to an article about contemporary journalism issues: “Social Media Putting Pressure On Journalism“. Basically, a group of industry leaders got together to discuss issues regarding accuracy and social networks. I really like the statement about journalism’s need for a focus change: “[W]e need to be aware of that and develop our own standards, which includes being right even if it doesn’t mean being first.”

What surprised me was that they needed to have a meeting to realize this.

Social Expectations

With social networks like Twitter, Reddit, and Facebook, I expect to see breaking news very quickly. However, I also expect to see zero validation. Breaking news from everyday people is more likely to be fractional, opinionated, full of baseless speculation, or outright inaccurate. In some cases, the errors are because the witness did not know any better. Other times, it is intentionally incorrect. (Trolls are everywhere.)

When social networks focus on any particular topic, they are very likely to be incorrect. For example, consider flight MH370. I saw theories on Reddit about it going North, being held captive in China, stolen by North Korea, going down in dozens of different areas, being shot down by a missile, and captured by a UFO. There’s even a theory that it landed on some tiny Pacific island. (I can’t make this up!)

We saw the same thing with last year’s Boston Marathon bombing. When it happened, Reddit reported it first. However, they also reported on other explosions (that did not happen), reported on lots of other bombs (that were not bombs), and named a half dozen suspects (who were not involved).

In this regard, social networks act a lot like ten thousand monkeys. If there is going to be a correct theory, it will likely surface first on a social network. (Reddit gave the theory that flight MH370 could have flown south long before the media reported it.) However, there is no way to winnow the wheat from the chaff; there are so many incorrect theories that there is no reasonable method to identify the correct theory. Moreover, we have to ask “why was it correct?” Did the social network get it right by coincidence, or was it well-researched?


With official news outlets, I expect reports to be well-researched. Proper journalism cites sources, validates findings, and researches corroborating information. A “single sourced” fact is treated skeptically and evaluated against the source’s credentials.

Unfortunately, most news outlets have turned away from formal journalism. It takes too long to check stories and there’s a lot of pressure to be as fast as Twitter. Sadly, some reporters believe that the social networks are the story. They report facts like “I saw it on Twitter!” and they count retweets as multiple sources. (Is reporting on the “Five of the craziest theories” about MH370 really newsworthy? If so, then I can come up with some even crazier ideas just to make headlines!)

I get offended when news outlets report things they read on Twitter. (Or when CNN reports on things seen in YouTube videos…) With social networks, I expect random conspiracies and rapid content without any vetting. But news outlets should be reporting “news” — facts that are verified and accurate.

The problems with bad reporting are not limited to social media. In many cases, it is a lot like that kids game “telephone” — where a message gets passed between children and words get changed a little. One reporter doesn’t completely understand the information, so they write something with a minor error. Another reporter further misunderstands the report and the error balloons into something bigger. For example, earlier today I read a headline on Slashdot that said, “Microsoft Confirms It Is Dropping Windows 8.1 Support“. This struck me as pretty amazing. I mean Windows 8.1 sucks, but I cannot see Microsoft spontaneously aborting their latest-greatest operating system.

Slashdot got the headline from Infoworld: “Microsoft confirms it’s dropping Windows 8.1 support“. Infoworld is usually a fairly good news outlet for tech information. However, Infoworld didn’t create the story — they cited a Technet blog as the source: “Information Regarding the Latest Update for Windows 8.1“. Now we can see new information: You need to install ‘Windows 8.1 Update’. Every future security patch for 8.1 will require the 8.1 Update as a baseline system. That’s no different from the old XP patch system that required SP1, and later SP2, as a baseline. If you didn’t have XP’s service pack #2 (SP2) installed, then you couldn’t install anything else. This is a far cry from “Dropping Windows 8.1 Support”.

TechNet’s blog cites an official source at Microsoft: “What’s new in Windows 8.1 Update and Windows RT 8.1 Update?” This reads as a very positive change. Microsoft says that they have listened to all of the “8.1 sucks” complaints and are updating 8.1 to address users needs. A lot of missing functionality is coming back, so expect the user interface to look a little different. Again, there is no sign that Microsoft is dropping 8.1.

Unfortunately, a lot of big news outlets no longer report news. FoxNews is so biased and inaccurate that I wonder why they are allowed to use the word “News” in their name. And MSNBC spends more time reporting opinions instead of facts. A few years ago I stopped following CNN because they made “as seen on YouTube” a daily feature. Today, CNN spouts baseless drivel. For example, their coverage of MH370 included the supposition that “something ‘beyond our understanding’ happened to Malaysia Airlines flight MH 370, that ‘something’ being perhaps supernatural maybe?” Yes: CNN actually speculated that the disappearance of MH370 came from God. (This is different from the fake quote about Sarah Palin claiming that the plane flew to heaven.)

In all honesty, if I wanted baseless speculation, extremely biased opinions, and conspiracies, then I would follow 4chan or Facebook. When it comes to reporting news, Facebook is like 4chan for the easily offended.

Not So Bad

There are still a few news organizations that I trust. For example, my local news, 9News in Denver, really does try to report facts. On occasion, they will even state in breaking news stories that they are intentionally not reporting something until they can verify the information. (I’m always impressed when they do this.) Yes, 9News does have the occasional opinion piece, but it is clearly an opinion. (If you haven’t seen news anchor Kyle Clark’s rant about snow-covered patio furniture then drop everything and watch it. I saw it live — at the end, he dropped his microphone and walked off stage.)

I’m also gaining more trust from the Open Newsroom on Google+. It is clear to me that there are some journalists who still care about facts and accuracy. Unfortunately, the individuals are not the same as the organizations. As a whole, many media organizations seem to fail to realize that they serve a very different purpose compared to Twitter and Facebook. In the rush to break the latest story, I can only hope that media outlets don’t forget that accuracy matters. When reporting news, it is better to be a little late than completely wrong.

TorrentFreak: Raging Anti-Piracy Boss Goes on a Tirade Against BitTorrent

This post was syndicated from: TorrentFreak and was written by: Ernesto. Original post: at TorrentFreak

For a few years now, BitTorrent Inc. has done its best to position the company as a neutral and legitimate business.

In a recent interview with “That Was Me”, BitTorrent inventor Bram Cohen explained this challenge, as well as the general benefits BitTorrent has to offer.

The interview got some coverage here and there, including at Upstart, where it drew the attention of Robert Steele, Chief Technology Officer at anti-piracy outfit Rightscorp, a company that has made quite a few headlines this year.

Steele was not happy with the positive press coverage BitTorrent received from the media outlets, to say the least. Through Facebook (which uses BitTorrent) he wrote two responses to the article, which are worth repeating for a variety of reasons.

The comments appear to have been made late at night, possibly under influence, so we have left them intact and unedited for authenticity’s sake. Steele starts off by claiming that BitTorrent was designed for only one reason – to distribute pirated content.

“Absolutely ridiculous. Bram Cohen said in 2012 that ‘my goal is to destroy television’. BitTorrent’s architecture and features are designed for one reason only – to assist people in avoiding legitimate law enforcement efforts when they illgally consume other people’s intellectual property,” Steele begins.

It may not come as a surprise that Steele is quoting Cohen out of context. At the time, BitTorrent’s founder was actually referring to his new streaming technology, that would make it possible for anyone to stream video content to a large audience at virtually no cost.

Also, BitTorrent isn’t in any way helping people to avoid law enforcement, quite the contrary. People who use BitTorrent are easy to track down, which is in fact something that Rightscorp is banking its entire business model on.

In the second comment Steele brings in Accel, the venture capital firm that invested millions of dollars in BitTorrent Inc. According to the Rightscorp CTO Accel is also guilty of encouraging piracy, and he suggests that uTorrent should have been equipped with a blacklist of pirate torrent hashes.

“If Accell Partner’s BitTorrent was actually a legitimate business not directly involved in driving and facilitating piracy, they would have a blacklist of copyrighted hashes that the BT client won’t ‘share’. Dropbox does this. Why does Dropbox do this? Because they actually obey the law and respect content creators,” Steele says.

Steele touches on a sensitive subject here, as BitTorrent could indeed implement a blacklist to prevent some pirated content from being shared. TorrentFreak has raised this issue with BitTorrent Inc in the past, but we have never received a response on the matter.

Moving on from this sidetrack, Steele’s tirade in the first comment evolves into something that’s scarily incomprehensible.

“BTTracker software is not needed unless the goal is to enable other people outside of BitTorrent, Inc. to operate the systems that log the ip addresses of infringing computers. Why do they do it that way? Not becuase it is needed to move big files. Dropbox doesnt need trackers. They do it that way because Limewire got sued for hosting those lists.” Steele notes.

From what we understand, Steele doesn’t get why BitTorrent is decentralized, which is the entire basis of the technology. The comment is wrong on so many points that we almost doubt that Steele has any idea how BitTorrent works, or Limewire for that matter.

We surely hope that the investors in Rightscorp, which is a publicly traded company now, aren’t reading along.

Finally, Rightscorp’s CTO suggests that BitTorrent and its backers should be taken to court, to pay back the damage they cause to the entertainment industries.

“Bram Cohen and Accell Partner’s BitTorrent should be held accountable for the wages and income they have helped take from hundreds of thousands of creative workers just like Limewire, Grokster, Aimster, Kazaa and Napster were.”


From the incoherent reasoning and the many grammar and spelling mistakes we have to assume that Steele wasn’t fully accountable when he wrote the comments. Perhaps the end of a busy week, or the end of an eventful night.

In any case, we’ve saved a copy of the comments below, just in case they are accidentally deleted.

Steele’s comments

Photo: Michael Theis

Source: TorrentFreak, for the latest info on copyright, file-sharing and anonymous VPN services.

SANS Internet Storm Center, InfoCON: green: Attack or Bad Link? Your Guess?, (Mon, Apr 7th)

This post was syndicated from: SANS Internet Storm Center, InfoCON: green and was written by: SANS Internet Storm Center, InfoCON: green. Original post: at SANS Internet Storm Center, InfoCON: green

Reviewing my logs, I found this odd request:

GET /infocon.htmlppQ/detail/20130403164740572kode-til-boozt-10/basura-que-va-acumulando/_medium=twittersideIM&lang=en&brand=nokiaokseen-fortumin-joensuun-voimalaitokselle/)&utm_term=inspirationfeedistan%20Tehreek-e-Insaf)%e0%b9%89%e2%86%90_%c3%96k%e2%98%bc%e0%b9%84%e0%b8%a1%e0%b9%88%e0%b9%84%e0%b8%8a%e0%b9%88%e2%99%a5His%c3%b6%e2%86%94ll%e0%b8%95%e0%b9%88%e0%b8%81%e0%b9%89%c3%b6%e0%b8%a1%e0%b8%b1%e0%b9%88%e0%b8%a2%e0%b8%94%e0%b9%89%e0%b8%b2E%e2%86%90n%c3%96%e2%86%90m%c3%96neY%c2%ae%e2%97%84%e2%97%84--html26eu1=0&eu2=0&x=50&y=16&dataPartenzaDa=20121001&dataPartenzaA=20121010&orderBy=Prezzo HTTP/1.0" 302 154 "-" "facebookexternalhit/1.1 (+" "2a03:2880:20:4ff7::"

It does look like a valid request from Facebook. "facebookexternalhit" is used by Facebook to screen links people post for malware. However, the link "doesn't make sense". Doesn't really look like an attack to me, just weird. Any ideas how this may happen?
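One quick sanity check for a request like this is whether the client address really sits in Facebook’s address space. A minimal sketch in Python — the 2a03:2880::/32 prefix below is one of Facebook’s known IPv6 allocations, hard-coded here only for illustration; a real check should pull the current AS32934 prefix list from a routing registry rather than assume it:

```python
import ipaddress

# Assumption: a single known Facebook prefix, hard-coded for illustration.
# In production, fetch the live AS32934 prefix list instead.
FACEBOOK_PREFIXES = [ipaddress.ip_network("2a03:2880::/32")]

def is_facebook_crawler(src_ip: str) -> bool:
    """Return True if src_ip falls inside a known Facebook prefix."""
    addr = ipaddress.ip_address(src_ip)
    return any(addr in net for net in FACEBOOK_PREFIXES)

# The source address from the log entry above:
print(is_facebook_crawler("2a03:2880:20:4ff7::"))  # True
```

If the address checks out, the request is more likely a mangled link being previewed by Facebook’s crawler than a spoofed attack.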


Johannes B. Ullrich, Ph.D.
SANS Technology Institute

(c) SANS Internet Storm Center. Creative Commons Attribution-Noncommercial 3.0 United States License.

TorrentFreak: Document Reveals When Copyright Trolls Drop Piracy Cases

This post was syndicated from: TorrentFreak and was written by: Andy. Original post: at TorrentFreak

It’s well known that while copyright trolls may suggest they are going to pursue all of their cases to the bitter end, they simply do not. Plenty of cases are dropped or otherwise terminated, although the precise reasons why this happens usually remain a closely guarded secret.

Today, however, we have a much clearer idea of what happens behind the scenes at Malibu Media, one of the main companies in the United States currently chasing down BitTorrent users for cash settlements.

The company was required by Illinois Judge Milton Shadur to submit a summary of its activities in Illinois and, as spotted by troll watcher SJD over at Fight Copyright Trolls, there was an agreement that it could remain under seal.

Somehow, however, that document has now become available on Pacer and it reveals some rather interesting details on Malibu’s operations.

Overall, Malibu Media reports that it filed cases in Illinois against 886 defendants. According to the company, just 174 have paid up so far, with 150 of those hiring a lawyer to do so.

While 100 cases are still open (including 42 still at the discovery stage and 30 in negotiations), for various reasons a total of 612 defendants paid nothing at all and the cases against them were dismissed. Malibu reveals the reasons for this in its filing, and they’re quite eye-opening to say the least.


Hardship

“Hardship is when a defendant may be liable for the conduct, but has extenuating circumstances where Plaintiff does not wish to proceed against him or her,” the Malibu document explains.

“Examples are when a defendant has little or no assets, defendant has serious illness or has recently deceased, defendant is currently active duty US military, defendant is a charitable organization or school, etc.”

Out of 886 defendants, Malibu reports that cases against 49 were dropped on hardship grounds.

Insufficient Evidence

It has long been said that an IP address alone isn’t enough to identify an infringer and Malibu’s own submission to the court underlines this in grand fashion.

“Insufficient evidence is defined as when Plaintiff’s evidence does not raise a strong presumption that the defendant is the infringer or some other ambiguity causes Malibu to question the Defendant’s innocence,” the company writes.

So, in an attempt to boost the value of the IP address evidence, Malibu says it investigates further to determine whether the account holder is in fact the infringer. The company says it looks in three areas.

1. Length of the infringement, i.e. how long it took place, when it began, when it ended, whether it took place during the day or night, and any other patterns.

2. Location of the residence where the infringement occurred, i.e. whether it is in a remote location or with other dwellings within wireless access range.

3. Profiling suspected pirates using social media (Facebook, Twitter)

The third element is of particular interest. Malibu says that since July 2012 it has been monitoring not just its own content online, but also piracy on music, movies, ebooks and software. It compares the IP addresses it spots downloading other pirate content with the IP addresses known to be infringing copyright on its own titles.

The data collected is then used to profile the person behind the IP address and this is compared with information gleaned from sites including Facebook and Twitter.

“Oftentimes, a subscriber will publicly admit on social media to enjoying sports teams, music groups, or favorite TV shows. Malibu will compare their likes and interests to their [downloads of other content] and determine whether the interests match,” the company explains.

So in what circumstances will Malibu dismiss a case on evidence grounds?

In the company’s own words:

-Multiple roommates within one residence with similar profiles and interests share a single Internet connection

-The defendant has left the country and cannot be located

-The results of additional surveillance do not specifically match profile interests or occupation of Defendant or other authorized users of the Internet connection

-The subscriber is a small business with public Wi-Fi access, etc

From a total of 886 defendants, cases against 259 were dropped due to insufficient evidence.

The Polygraph Defense

In the absence of any other supporting evidence, how can a subscriber prove a negative, i.e. that he or she did not carry out any unlawful file-sharing? Quite bizarrely, Malibu says that it will accept the results of a lie detector test.

“[M]alibu will dismiss its claims against any Defendant who agrees to and passes a polygraph administered by a licensed examiner of the Defendant’s choosing,” the company told the court.

So has anyone taken the bait? Apparently so.

“Out of the entirety of polygraphs administered within the United States by Malibu, no Defendant has passed and all such examinations have subsequently led to the Defendant settling the case,” Malibu writes.

No discovery

In order for Malibu to pressure account holders into settling, it first needs to find out who they are from their ISPs. Malibu’s submission reveals that this is not always possible due to:

- ISPs not retaining logging data for a long enough period
- Subpoenas being quashed due to cases being severed
- Information held on file at ISPs not matching the identities of an address’s occupants
- ISPs being unable to match the IP address with a subscriber at the time and date stipulated by Malibu

From a total of 886 defendants, cases against 304 were dropped due to failed discovery.

Cases dismissed due to settlement / actual judgments obtained

In total, 174 cases were settled by defendants without the need for a trial, but the amounts paid are not included in the document. However, the submission does reveal that two cases did go to court, resulting in statutory damages awards of $26,250 and $15,000 respectively.
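The figures reported in the filing reconcile neatly; a quick tally (all numbers taken from the summary above):

```python
# Dismissal counts as reported in Malibu's Illinois filing (see above).
hardship = 49
insufficient_evidence = 259
failed_discovery = 304
dismissed = hardship + insufficient_evidence + failed_discovery

settled = 174     # defendants who paid up
still_open = 100  # cases still active

print(dismissed)                          # matches the "paid nothing" total of 612
print(dismissed + settled + still_open)   # matches the 886 total defendants
```

In other words, the three stated dismissal reasons account for exactly the 612 defendants who paid nothing, and together with the settlements and open cases they cover all 886 defendants.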


Malibu’s submission points to a few interesting conclusions, not least that the vast majority of its cases get dismissed for one reason or another and that a significant proportion of defendants simply do not pay up.

The document also suggests that Malibu is working under the assumption that an IP address alone isn’t enough to secure a settlement and that additional social media-sourced evidence is required to back it up.

This information, plus the reasons listed by Malibu for not pursuing cases, should ensure that even fewer people are prompted to pay up in future.

Source: TorrentFreak, for the latest info on copyright, file-sharing and anonymous VPN services.

TorrentFreak: Record Labels Sue Russian Facebook Over Large-Scale Piracy

This post was syndicated from: TorrentFreak and was written by: Ernesto. Original post: at TorrentFreak

For several years vKontakte, or VK, has been marked as a piracy facilitator by copyright holders and even the U.S. Government.

In several Special 301 Reports published by the United States Trade Representative, Russia’s Facebook equivalent has been criticized for the huge quantities of unauthorized media it hosts. As a result it is currently labeled a “notorious market”, a term usually reserved for piracy’s apparent worst-of-the-worst.

In common with many user-generated sites, VK allows its millions of users to upload anything from movies and TV shows to their entire music collections. Unlike Facebook and other major players, Russia’s social network has been very slow to adopt anti-piracy measures.

Three major record labels – Sony Music, Universal Music and Warner Music – have now taken their concerns to the Saint Petersburg & Leningradsky Region Arbitration Court. The labels accuse VK of running a service that facilitates large-scale copyright infringement and are demanding countermeasures and compensation.

The record labels have asked for an order requiring VK to implement fingerprinting technology to delete copyrighted works and prevent them from being re-uploaded. In addition, Sony, Warner and Universal are demanding 50 million rubles ($1.4 million) from the social networking site to compensate for losses suffered.

“VK’s music service, unlike others in Russia, is an unlicensed file-sharing service that is designed for copyright infringement on a large-scale,” IFPI’s Frances Moore says in a comment.

“We have repeatedly highlighted this problem over a long period of time. We have encouraged VK to cease its infringements and negotiate with record companies to become a licensed service. To date the company has taken no meaningful steps to tackle the problem, so today legal proceedings are being commenced,” Moore adds.

VK has yet to respond to the accusations. Russia’s telecoms regulator Roskomnadzor previously said that VK was trying very hard to improve its anti-piracy practices, but these efforts apparently came too late.

Source: TorrentFreak, for the latest info on copyright, file-sharing and anonymous VPN services.

Чорба от греховете на dzver: Better ratings for Goodreads

This post was syndicated from: Чорба от греховете на dzver and was written by: dzver. Original post: at Чорба от греховете на dzver

Goodreads has a five star rating system, which I frequently use, but don’t like. My problem with it is that ratings below ★★★ usually mean “I didn’t read the book”, and ★★★★★ often means “I’m a fanboy and will give 5 stars to anything that mentions Star Wars”. So here is my improved rating system:

★★★★★★★ It was mind blowing. Stop whatever you are doing and start reading this book now.
★★★★★★ It was absolutely amazing. I enjoyed every moment with this book and couldn’t stop reading it.
★★★★★ I really liked it. It had some twists and interruptions.
★★★★ I successfully completed the book.
Didn’t read it. No reason to lie here, I tried hard, but it didn’t work. No need to choose between ★, ★★ and ★★★ – they are all the same. Didn’t read it.

And the special rating that completes the rating system:
10 ★ – This very special book mentions Star Wars, Batman, Teenage Vampires, and J.K. Rowling saw me typing one of the pages. It is one of those spiritual books that is fancy to have on my Facebook profile. I’ve heard about it, I’ve never seen it, and I vote 10 ★. I love books!

Krebs on Security: Android Botnet Targets Middle East Banks

This post was syndicated from: Krebs on Security and was written by: BrianKrebs. Original post: at Krebs on Security

I recently encountered a botnet targeting Android smartphone users who bank at financial institutions in the Middle East. The crude yet remarkably effective mobile bot that powers this whole operation comes disguised as one of several online banking apps, has infected more than 2,700 phones, and has intercepted at least 28,000 text messages.

The botnet — which I’ve affectionately dubbed “Sandroid” — comes bundled with Android apps made to look like mobile two-factor authentication modules for various banks, including Riyad Bank, SAAB (formerly the Saudi British Bank), AlAhliOnline (National Commercial Bank), Al Rajhi Bank, and Arab National Bank.

The fake Android bank apps employed by the Sandroid botnet.

It’s not clear how the apps are initially presented to victims, but if previous such scams are any indication they are likely offered after infecting the victim’s computer with a password-stealing banking Trojan. Many banks send customers text messages containing one-time codes that are used to supplement a username and password when the customer logs on to the bank’s Web site. And that precaution of course requires attackers interested in compromising those accounts to also hack the would-be victim’s phone.

Banking Trojans — particularly those targeting customers of financial institutions outside of the United States — will often throw up a browser pop-up box that mimics the bank and asks the user to download a “security application” on their mobile phones. Those apps are instead phony programs that merely intercept and then relay the victim’s incoming SMS messages to the botnet master, who can then use the code along with the victim’s banking username and password to log in as the victim.

Some of the 28,000+ text messages intercepted by the Sandroid botnet malware.

This particular botnet appears to have been active for at least the past year, and the mobile malware associated with it has been documented by both Symantec and Trend Micro. The malware itself seems to be heavily detected by most of the antivirus products on the market, but then again it’s likely that few — if any — of these users are running antivirus applications on their mobile devices.

In addition, this fake bank campaign appears to have previously targeted Facebook, as well as banks in Australia and Spain, including Caixa Bank, Commonwealth Bank, National Australia Bank, and St. George Bank.

The miscreant behind this campaign seems to have done little to hide his activities. The same registry information that was used to register the domain associated with this botnet — — was also used to register the phony bank domains that delivered this malware, including,,,,,, and The registrar used in each of those cases was Center of Ukrainian Internet Names.

I am often asked if people should be using mobile antivirus products. From my perspective, most of these malicious apps don’t just install themselves; they require the user to participate in the fraud. Keeping your mobile device free of malware involves following some of the same steps outlined in my Tools for a Safer PC and 3 Rules primers: Chiefly, if you didn’t go looking for it, don’t install it! If you own an Android device and wish to install an application, do your homework before installing the program. That means spending a few moments to research the app in question, and not installing apps that are of dubious provenance. 

That said, this malware appears to be well-detected by mobile antivirus solutions, and many antivirus firms offer mobile versions of their products. Some are entirely free, while others are free for initial use — they will scan and remove malware at no charge but charge for yearly subscriptions. The free offerings include AVG, Avast, Avira, Bitdefender, Dr. Web, ESET, Fortinet, Lookout, Norton, Panda Cloud Antivirus, Sophos, and ZoneAlarm.

Incidentally, the mobile phone number used to intercept all of the text messages is +79154369077, which traces back to a subscriber in Moscow on the Mobile Telesystems network.

Schneier on Security: Ephemeral Apps

This post was syndicated from: Schneier on Security and was written by: schneier. Original post: at Schneier on Security

Ephemeral messaging apps such as Snapchat, Wickr and Frankly, all of which advertise that your photo, message or update will only be accessible for a short period, are on the rise. Snapchat and Frankly, for example, claim they permanently delete messages, photos and videos after 10 seconds. After that, there’s no record.

This notion is especially popular with young people, and these apps are an antidote to sites such as Facebook where everything you post lasts forever unless you take it down—and taking it down is no guarantee that it isn’t still available.

These ephemeral apps are the first concerted push against the permanence of Internet conversation. We started losing ephemeral conversation when computers began to mediate our communications. Computers naturally produce conversation records, and that data was often saved and archived.

The powerful and famous — from Oliver North back in 1987 to Anthony Weiner in 2011 — have been brought down by e-mails, texts, tweets and posts they thought private. Lots of us have been embroiled in more personal embarrassments resulting from things we’ve said either being saved for too long or shared too widely.

People have reacted to this permanent nature of Internet communications in ad hoc ways. We’ve deleted our stuff where possible and asked others not to forward our writings without permission. “Wall scrubbing” is the term used to describe the deletion of Facebook posts.

Sociologist danah boyd has written about teens who systematically delete every post they make on Facebook soon after they make it. Apps such as Wickr just automate the process. And it turns out there’s a huge market in that.

Ephemeral conversation is easy to promise but hard to get right. In 2013, researchers discovered that Snapchat doesn’t delete images as advertised; it merely changes their names so they’re not easy to see. Whether this is a problem for users depends on how technically savvy their adversaries are, but it illustrates the difficulty of making instant deletion actually work.

The problem is that these new “ephemeral” conversations aren’t really ephemeral the way a face-to-face unrecorded conversation would be. They’re not ephemeral like a conversation during a walk in a deserted woods used to be before the invention of cell phones and GPS receivers.

At best, the data is recorded, used, saved and then deliberately deleted. At worst, the ephemeral nature is faked. While the apps make the posts, texts or messages unavailable to users quickly, they probably don’t erase them off their systems immediately. They certainly don’t erase them from their backup tapes, if they end up there.

The companies offering these apps might very well analyze their content and make that information available to advertisers. We don’t know how much metadata is saved. In Snapchat, users can see the metadata even though they can’t see the content, and we don’t know what it’s used for. And if the government demanded copies of those conversations — either through a secret NSA demand or a more normal legal process involving an employer or school — the companies would have no choice but to hand them over.

Even worse, if the FBI or NSA demanded that American companies secretly store those conversations and not tell their users, breaking their promise of deletion, the companies would have no choice but to comply.

That last bit isn’t just paranoia.

We know the U.S. government has done this to companies large and small. Lavabit was a small secure e-mail service, with an encryption system designed so that even the company had no access to users’ e-mail. Last year, the NSA presented it with a secret court order demanding that it turn over its master key, thereby compromising the security of every user. Lavabit shut down its service rather than comply, but that option isn’t feasible for larger companies. In 2011, Microsoft made some still-unknown changes to Skype to make NSA eavesdropping easier, but the security promises they advertised didn’t change.

This is one of the reasons President Barack Obama’s announcement that he will end one particular NSA collection program under one particular legal authority barely begins to solve the problem: the surveillance state is so robust that anything other than a major overhaul won’t make a difference.

Of course, the typical Snapchat user doesn’t care whether the U.S. government is monitoring his conversations. He’s more concerned about his high school friends and his parents. But if these platforms are insecure, it’s not just the NSA that one should worry about.

Dissidents in the Ukraine and elsewhere need security, and if they rely on ephemeral apps, they need to know that their own governments aren’t saving copies of their chats. And even U.S. high school students need to know that their photos won’t be surreptitiously saved and used against them years later.

The need for ephemeral conversation isn’t some weird privacy fetish or the exclusive purview of criminals with something to hide. It represents a basic need for human privacy, and something every one of us had as a matter of course before the invention of microphones and recording devices.

We need ephemeral apps, but we need credible assurances from the companies that they are actually secure and credible assurances from the government that they won’t be subverted.

This essay previously appeared on

Schneier on Security: The Continuing Public/Private Surveillance Partnership

This post was syndicated from: Schneier on Security and was written by: schneier. Original post: at Schneier on Security

If you’ve been reading the news recently, you might think that corporate America is doing its best to thwart NSA surveillance.

Google just announced that it is encrypting Gmail when you access it from your computer or phone, and between data centers. Last week, Mark Zuckerberg personally called President Obama to complain about the NSA using Facebook as a means to hack computers, and Facebook’s Chief Security Officer explained to reporters that the attack technique has not worked since last summer. Yahoo, Google, Microsoft, and others are now regularly publishing “transparency reports,” listing approximately how many government data requests the companies have received and complied with.

On the government side, last week the NSA’s General Counsel Rajesh De seemed to have thrown those companies under a bus by stating that — despite their denials — they knew all about the NSA’s collection of data under both the PRISM program and some unnamed “upstream” collections on the communications links.

Yes, it may seem like the public/private surveillance partnership has frayed — but, unfortunately, it is alive and well. The main interests of massive Internet companies and government agencies still largely align: to keep us all under constant surveillance. When they bicker, it’s mostly role-playing designed to keep us blasé about what’s really going on.

The U.S. intelligence community is still playing word games with us. The NSA collects our data based on four different legal authorities: the Foreign Intelligence Surveillance Act (FISA) of 1978, Executive Order 12333 of 1981 and modified in 2004 and 2008, Section 215 of the Patriot Act of 2001, and Section 702 of the FISA Amendments Act (FAA) of 2008. Be careful when someone from the intelligence community uses the caveat “not under this program” or “not under this authority”; almost certainly it means that whatever it is they’re denying is done under some other program or authority. So when De said that companies knew about NSA collection under Section 702, it doesn’t mean they knew about the other collection programs.

The big Internet companies know of PRISM — although not under that code name — because that’s how the program works; the NSA serves them with FISA orders. Those same companies did not know about any of the other surveillance against their users conducted on the far more permissive EO 12333. Google and Yahoo did not know about MUSCULAR, the NSA’s secret program to eavesdrop on their trunk connections between data centers. Facebook did not know about QUANTUMHAND, the NSA’s secret program to attack Facebook users. And none of the target companies knew that the NSA was harvesting their users’ address books and buddy lists.

These companies are certainly pissed that the publicity surrounding the NSA’s actions is undermining their users’ trust in their services, and they’re losing money because of it. Cisco, IBM, cloud service providers, and others have announced that they’re losing billions, mostly in foreign sales.

These companies are doing their best to convince users that their data is secure. But they’re relying on their users not understanding what real security looks like. IBM’s letter to its clients last week is an excellent example. The letter lists five “simple facts” that it hopes will mollify its customers, but the items are so qualified with caveats that they do the exact opposite to anyone who understands the full extent of NSA surveillance. And IBM’s spending $1.2B on data centers outside the U.S. will only reassure customers who don’t realize that National Security Letters require a company to turn over data, regardless of where in the world it is stored.

Google’s recent actions, and similar actions of many Internet companies, will definitely improve their users’ security against surreptitious government collection programs — both the NSA’s and other governments’ — but their assurances deliberately ignore the massive security vulnerability built into their services by design. Google, and by extension the U.S. government, still has access to your communications on Google’s servers.

Google could change that. It could encrypt your e-mail so only you could decrypt and read it. It could provide for secure voice and video so no one outside the conversations could eavesdrop.

It doesn’t. And neither does Microsoft, Facebook, Yahoo, Apple, or any of the others.

Why not? Partly because they want to keep the ability to eavesdrop on your conversations. Surveillance is still the business model of the Internet, and every one of those companies wants access to your communications and your metadata. Your private thoughts and conversations are the product they sell to their customers. We have also learned that they read your e-mail for their own internal investigations.

But even if this were not true, even if — for example — Google were willing to forgo data mining your e-mail and video conversations in exchange for the marketing advantage it would give it over Microsoft, it still won’t offer you real security. It can’t.

The biggest Internet companies don’t offer real security because the U.S. government won’t permit it.

This isn’t paranoia. We know that the U.S. government ordered the secure e-mail provider Lavabit to turn over its master keys and compromise every one of its users. We know that the U.S. government convinced Microsoft — either through bribery, coercion, threat, or legal compulsion — to make changes in how Skype operates, to make eavesdropping easier.

We don’t know what sort of pressure the U.S. government has put on Google and the others. We don’t know what secret agreements those companies have reached with the NSA. We do know that the NSA’s BULLRUN program to subvert Internet cryptography was successful against many common protocols. Did the NSA demand Google’s keys, as it did with Lavabit? Did its Tailored Access Operations group break into Google’s servers and steal the keys?

We just don’t know.

The best we have are caveat-laden pseudo-assurances. At SXSW earlier this month, Google’s Eric Schmidt tried to reassure the audience by saying that he was “pretty sure that information within Google is now safe from any government’s prying eyes.” A more accurate statement might be, “Your data is safe from governments, except for the ways we don’t know about and the ways we cannot tell you about. And, of course, we still have complete access to it all, and can sell it at will to whomever we want.” That’s a lousy marketing pitch, but as long as the NSA is allowed to operate using secret court orders based on secret interpretations of secret law, it’ll never be any different.

Google, Facebook, Microsoft, and the others are already on record as supporting legislative changes to surveillance law. It would be better if they openly acknowledged their users’ insecurity and increased their pressure on the government to change, rather than trying to fool their users and customers.

This essay previously appeared on

The Hacker Factor Blog: Locating Pictures

This post was syndicated from: The Hacker Factor Blog and was written by: The Hacker Factor Blog. Original post: at The Hacker Factor Blog

There’s a question that I often receive regarding photos: Where was this picture taken? Basically, someone has a photo and wants to identify the location. This comes up in legal cases, media requests, and just odd photos found online. (With news outlets, they usually follow it up with “and when was it taken?”) Tracking a photo to a location is usually a very difficult problem. Unfortunately, there are no generic or automated solutions.

However, just because it is a hard problem does not mean it is impossible. (Sometimes it is impossible, but not always.) Usually it just takes time and a dedication to tracking down clues.

The easy way

When people think about identifying where a photo was taken, they immediately think about embedded GPS coordinates. And the truth is, if GPS information exists in the picture’s metadata, then that is a great place to begin.

Unfortunately, very very very few pictures contain GPS information. At FotoForensics, we’re getting close to a half-million unique picture uploads, and only about 1% of them contain GPS metadata. There are reasons that GPS information is so hard to find:

  • Unavailable. GPS data is almost exclusively associated with smartphones. Very few point-and-shoot cameras have built-in GPS.

  • Disabled. For devices with GPS chips, there is usually an option to disable geo-stamping photos. Some devices default to “off” and are never turned on, while others may default to “on” but have users intentionally turn it off. There’s also the GPS system itself; lots of people turn off GPS on their smartphones because it drains the battery. If your phone’s GPS is disabled, then your camera will not include GPS information in the picture.

    There are other ways for a device to geolocate without using GPS. Some smartphones can get a rough estimate using nearby wireless access point identifiers (SSIDs) or by finding nearby cell towers. But to the camera’s function that looks up GPS information, this is all the same. If your device cannot geolocate then there will not be a location recorded with the picture.

  • Stripped. Processing a picture with a graphics program, or uploading it to an online service like Facebook or Twitter, can (and usually will) alter or remove metadata. This includes removing GPS information. Even if the data was there at the beginning, it is not there anymore.

Of course, even if the GPS information is present, it does not mean it is accurate. I’m sure that people with smartphones have noticed the accuracy issue. When you first turn on the mapping program, it will draw a huge circle on the map. The circle may span a couple of miles. It does not mean that you are in the center of the circle; it’s indicating that you are “somewhere” in that circle — you could be near the center or somewhere along the edge. After a few minutes, the device has time to synchronize and better narrow down the region — denoted with a smaller circle. Eventually it may become a dot that identifies your location to within a few feet.

With GPS metadata, there are fields for location and accuracy. Unfortunately, most mobile devices only fill out the location data and not the accuracy information. This means that the extremely precise GPS location stored in the metadata may be off by a mile. Even if the GPS location pinpoints a house, you cannot be certain that the photo was taken in that house — it could have been captured a half-mile away.
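When GPS data is present, the EXIF GPS IFD stores latitude and longitude as degree/minute/second rationals plus an N/S or E/W reference letter, and most tools (Pillow’s `Image.getexif()`, exiftool, and so on) hand these back more or less raw. A minimal sketch of the conversion to signed decimal degrees — the sample coordinates below are made up for illustration:

```python
def dms_to_decimal(dms, ref):
    """Convert an EXIF (degrees, minutes, seconds) triple plus a
    reference letter ('N'/'S'/'E'/'W') into signed decimal degrees."""
    degrees, minutes, seconds = (float(v) for v in dms)
    decimal = degrees + minutes / 60.0 + seconds / 3600.0
    # South and West are negative by convention.
    return -decimal if ref in ("S", "W") else decimal

# Hypothetical GPSLatitude/GPSLongitude values as a camera might store them:
lat = dms_to_decimal((42, 41, 48.0), "N")
lon = dms_to_decimal((23, 19, 12.0), "E")
print(round(lat, 4), round(lon, 4))  # 42.6967 23.32
```

Remember the caveat above, though: even a perfectly decoded coordinate may be off by a wide margin when the device never filled in the accuracy fields.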

Another place to look is in metadata annotations. If the picture came from a media outlet, then there’s probably metadata that identifies “where” the photo was taken, even if it is just a city name. Unfortunately, most online news sites resave images prior to publishing, and that can strip out these annotations.

Looking Closer

GPS information and annotations in metadata are nice when they exist. Unfortunately, they may not exist. And even if they are present, they may still not be very accurate or reliable. That means geolocating a photo must rely on the photo’s content. There are different clues in the photo’s content that may help identify the location. Some of these may be very precise (geolocation) while others may help you narrow down a region (geo-fencing), country, or at least rule out some parts of the world.

The easiest photos are the ones with unique and notable landmarks: statues, distinct buildings, street signs… Even photos of mountain ranges or generic streets may be enough to find the location. If the camera was fairly close to the subject, then you can probably identify the photographer’s position to within a few feet. A long distance shot may narrow it down to an area.

For very notable objects, such as scenic views, distinct statues, or elements seen at tourist stops, you may be able to find the location by uploading the picture to TinEye or Google Image Search. If other people have photographed the same object from about the same position, then these image search engines may be able to identify other photos from the same spot.

In my opinion, TinEye is better at finding similar photos, but Google may annotate the search results with a text name or description. In either case, you will probably need to visit the resulting web pages in order to see if any page mentions where the photographer was located. (Knowing that the photo’s content shows “New York City” is not the same as geolocating a photographer who was standing at the foot of the Statue of Liberty.)

Different cities and countries have different building styles. If you can identify the style, then you may be able to identify where the photo was taken. There’s been a few advances in this research area (for example, PDF). Unfortunately, as far as I know, there are no public image search engines that do this type of matching.

Usually, you just happen to find someone who recognizes the style and can help narrow down a location. (That’s one of the benefits of turning a photo over to a large social group like Reddit — there is likely someone who will recognize something.) However, even this can be somewhat inaccurate. For example, neighboring countries (e.g., Poland and Germany) can have similar architectural styles. In California, there’s a city called Solvang that looks like Denmark. Most American cities have a “Chinatown” that uses Chinese architecture, and China has rebuilt cities from countries like France and Italy.

If you cannot identify a city or a country, then you can probably still identify regions to exclude. For example, do you see any text in the photo? If the street signs are only in English, then you are probably not looking at any Asian, African, or Middle Eastern countries. (Non-English speaking countries either do not use English letters or include multiple languages on the signs.)

Currency can be another great clue. If I see Mexican pesos, then I’m thinking Mexico. Sure, it could be a Spanish-language classroom in the United States, but then other clues would tip you off that it’s a classroom. (Like maybe, desks?) It could also be someone from Mexico who lives in Canada and has decorated his home with trinkets from his homeland. But unless you have a reason to suspect another country, a best-guess is to use what you see. If everything looks like Mexico, then it’s probably Mexico.

Exclusion cannot tell you where a photo was taken. However, it can help identify where the photo was not taken. (Photo showing a tropical beach? It’s probably not the South Pole or Northern Europe.)

Picture Time!

To give you an example of geolocation, consider this photo that was recently trending at FotoForensics:

My question is: where was this photo taken? Or more specifically, where was the photographer standing and what direction was the photographer facing?

Sure, you could go to the forum where the picture was being discussed and the city is identified, but let’s assume that you do not have that information. (And anyway, the forum does not tell you the exact location where the photographer was standing or the direction the camera is facing.) In real life, you may have nothing more than a photo; assume that you just have this photo and nothing else. Also, let’s assume that you are like me and you do not know the area and do not recognize the street.

Here’s how I walked through it to identify the location (your approach may be different):

  1. Metadata. First, let’s go for the easy clues and start with the metadata. Maybe we will get lucky and find GPS coordinates or a textual description. Unfortunately, this picture has no informative metadata. (It’s been stripped, but it was still worth the time to look.)

  2. Search. Using TinEye and Google Image Search turned up no useful results.
  3. License Plates. Someday I hope to have a database of license plate formats (colors, layouts, etc.), but I do not have that today. However, I know that long, rectangular, and yellow (with or without the blue strip on the left) is European. So I can immediately rule out Africa, Asia, Australia, North America, and South America. (While the cars could have been shipped to another country, we go with what is most likely.)
  4. English. All of the text is in English. European and English-only? That’s an island like England, Ireland, or Scotland. It’s not the European mainland. (This is geo-fencing — narrowing down a location to a region or area.)
  5. Bank. Now I can start looking up text. I see an HSBC ATM machine. I know that HSBC is a bank and it’s found in the British Isles. (While HSBC is found in lots of other countries, it does not exclude my current geo-fenced area.)
  6. Store. I do not know what “Waitrose” is, but I can type the word into Google. It turns out, Waitrose is a grocery store in England. That narrows down my search to one of about 300 locations. (I know, 300 seems like a lot, but it’s smaller than “anywhere in the world.”)
  7. Web. The Waitrose corporate website allows you to select a branch. (There’s 339 of them right now.) Each branch contains a small picture of the location. Non-programmers will need to go one-by-one and look at each picture. Fortunately, I’m a programmer. It took me a few minutes to write a small script to harvest all of their store pictures. I thought I would use these thumbnail images to rule out locations. (No red brick. No black awning. Not on a corner…) Instead, I got lucky:

    The green advertisement on the wall in the photo is blue in the thumbnail, and the HSBC ATM is missing, but it’s the same location. According to their corporate headquarters, this is Waitrose Wilmslow.

  8. Address. Unfortunately, the corporate web site does not provide a numerical street address or GPS location. All they say is: “Church Street, Wilmslow, Cheshire, SK9 1AY”. (Not being from England, this looks to me like a description and not a mailing address.) Fortunately, I can type this into Google Maps and find the street. Using Google Street View, I can find the address: 4 Church Street, Wilmslow, England, UK.

    The street view shows me the exact location. The photographer had to be standing in the street, facing North. (Not where the mouse has highlighted the road — the photographer was standing a little to the right.) Even if he was using a telephoto lens, he would still need to be somewhere down the street, facing North.

Now we have answered the questions. We know where the photographer was standing and the direction the camera was facing.
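For the curious, the harvesting script mentioned in step 7 could have looked something like the sketch below. The branch URLs and HTML layout here are assumptions (the original script is not shown), but the approach of fetching each branch page and pulling out its thumbnail image URL is the same:

```python
import re
import urllib.request

def extract_img_urls(html):
    """Return every <img src="..."> URL found in a chunk of HTML."""
    return re.findall(r'<img[^>]+src="([^"]+)"', html)

def harvest_thumbnails(branch_urls):
    """Fetch each branch page and record its first image URL, keyed by
    branch page, so all the store thumbnails can be reviewed in bulk."""
    thumbs = {}
    for url in branch_urls:
        html = urllib.request.urlopen(url).read().decode("utf-8", "replace")
        imgs = extract_img_urls(html)
        if imgs:
            thumbs[url] = imgs[0]
    return thumbs
```

With a few hundred thumbnails saved locally, ruling locations in or out by eye takes minutes instead of clicking through every branch page.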

Digging Deeper

Armed with this information, there’s a few other things I can now tell about this photo. For example, the Google Street View shows that there are cameras everywhere. You can even see one in the photo above the “Waitrose” sign. If this photo was showing a crime, then there are cameras that recorded the photographer.

Looking at the shadows, we can see that they fall to the North (toward the store) and not to the left or right. So this was likely taken in the middle of the day. And is that the photographer reflected in the car’s mirror?

The corporate web site’s thumbnail was timestamped November 2010 and it lacked the ATM. The Google Street View is timestamped (lower-left) September 2012 and it shows the ATM. So sometime between November 2010 and September 2012, the ATM was installed. This means that the photo was taken sometime after November 2010. If I contacted Waitrose, then I suspect we could narrow down the date based on the advertisements that are visible. While we probably would not find the exact date, I believe that we could narrow it down to a month or less. Together with the camera information (assuming at least one camera on the street still has the pictures available), we can even identify the exact moment — and possibly even watch the photographer come and go.

With Google Street View, we can even tell a little more about the building. For example, watching the building while moving down the street permits us to see the framed advertisement change. It is a scrolling billboard. The green advertisement in the photo, the blue advertisement in the corporate thumbnail, and the picture seen in the Google Street View could all be part of the same scrolling ad series.

Using Bing’s street view of the same address (requires Internet Explorer), there is one image that shows part of the green banner scrolling into place. So it is part of the rotation cycle. Unfortunately, Bing doesn’t display any date information related to the street view. However… In the photo’s upper-left corner is a yellow and black sign. This same sign is seen in the Google Street View, but it is not present in the Bing street view. If we knew when that black-and-yellow sign appeared, then we could further narrow down the date range.

(If we cheat, then we can look at the forum. The posting was made on 21-November-2013, so the date range is November 2010 to 21-November-2013. The person claims to have taken the photo “a few weeks ago”, so that would be October or early November 2013.)

Needles and Haystacks

The good news is that many pictures can be geolocated to a specific location. However, there is no generic or automated solution. Right now, every photo is a unique challenge, and some may be very time-consuming.

(And for the people who really want to know: I think the license plates are real. It’s hard to tell from the photo due to multiple resaves, but the UK permits people to look up the vehicles based on the plate and manufacturer. Both license plates exist and they match the vehicles.)

PHONEBLOKS.COM: Designs of the Year Award 2014 Phonebloks has been nominated for…

This post was syndicated from: PHONEBLOKS.COM and was written by: PHONEBLOKS.COM. Original post: at PHONEBLOKS.COM

Designs of the Year Award 2014

Phonebloks has been nominated for the Design of the Year Award. We are truly excited about this opportunity. It is an honor for us to even be considered against the many other talented nominees.  

There is one particular nominee that shares the same values and passions as Phonebloks. That nominee is Fairphone. By making a phone that puts social values first, Fairphone wants to change the way products are made.  

As chance would have it, we here at Phonebloks will today, the 27th of March, 2014, be going up against Fairphone in the first round of the Design of the Year Web Vote. So, in the spirit of collaboration, we would like to invite you to vote for both Fairphone and Phonebloks.

Voting occurs through the social media channels of Facebook and Twitter. To vote, visit and log in. Use one account for Phonebloks and one for Fairphone! 

We truly appreciate your continued support. Thank you and best of luck to all the nominees!

[$] Facebook and the kernel

This post was syndicated from: and was written by: jake. Original post: at

As one of the plenary sessions on the first day of the Linux Storage, Filesystem, and Memory Management (LSFMM) Summit, Btrfs developer Chris Mason presented on how his new employer, Facebook, uses the Linux kernel. He shared some of the eye-opening numbers that demonstrate just how much processing Facebook does using Linux, along with some of the “pain points” the company has with the kernel.

Subscribers can click below for a report on the talk from this week’s edition.

TorrentFreak: Busted: BSA Steals Photo For “Snitch On a Pirate” Campaign

This post was syndicated from: TorrentFreak and was written by: Ernesto. Original post: at TorrentFreak

A few weeks ago we reported on a controversial anti-piracy campaign operated by the Business Software Alliance (BSA).

Representing major software companies, the BSA is using Facebook ads which encourage people to report businesses that use unlicensed software. If one of these reports results in a successful court case, the pirate snitch can look forward to a cash reward.

Below is one of the promoted Facebook posts that appeared in the timeline of thousands of people on Saint Patrick’s Day. It features a homemade cake in the shape of a pot of gold and sends a clear message to the readers.

“Your pot of gold is right here baby. Report unlicensed software and GET PAID,” the post reads.

Unlicensed Photo

The ad is a bit misleading, since those who read the fine print realize that the pot of gold is as unreachable as any. However, there’s a more worrying issue with the ad.

On closer inspection the photo appears to be lifted from Cakecentral where a user named ‘bethasd’ posted her home-baked creation. Indeed, all signs suggest that the photo for this campaign wasn’t properly licensed, but pirated by the BSA.

Hoping that this was all a misunderstanding, TF contacted the BSA yesterday afternoon, asking for a comment. Thus far the group hasn’t responded to us, but an hour after we sent the inquiry the infringing photo magically disappeared from Facebook.

Luckily we made a copy, and so did Google.

So while the BSA didn’t comment, their attempt to cover up the situation clearly shows that they didn’t have the right to use the image in question. Needless to say, that is more than a touch ironic, especially for an image that’s being used in an anti-piracy campaign.

We encourage ‘bethasd’ to get in contact with the software industry group, and demand both licensing fees and damages for the unauthorized use of her photo. Surely, the BSA will be happy to hand over a pot of gold to her.

For the BSA it’s probably wise to reconsider their marketing strategy on Facebook. Right now the overwhelming majority of the comments are negative, which defeats the purpose of the campaign.



Source: TorrentFreak, for the latest info on copyright, file-sharing and anonymous VPN services.

TorrentFreak: Nuking a Facebook Page on Bogus Copyright Grounds is Easy

This post was syndicated from: TorrentFreak and was written by: Andy. Original post: at TorrentFreak

Copyright holders and tech companies in the United States are currently engaged in discussions on how to move forward with the notice and takedown provisions of the DMCA. The moves are voluntary and aimed at eliminating the kind of backlash experienced during the SOPA uprising.

Both parties have good reason to progress. Copyright holders say they are tired of taking down content only to have it reappear soon afterwards. On the other hand, service providers and similar web companies burn through significant resources ensuring they respond to takedown demands in a way that maintains their protection from liability.

This fear of the law encourages service providers to err on the side of caution by taking content down quickly and worrying about legitimacy later. Earlier this week TF spoke with a site operator with first-hand experience of a major Internet company’s approach to what they believed to be a genuine copyright complaint.

As previously reported, BeBe is the operator of Wrestling-Network, a site that links to unauthorized WWE streams. While the legality of his main domain is up for lively debate, he maintains his Facebook page was squeaky clean when a WWE lawyer moved to have it taken down. What we didn’t report at the time is that this wasn’t the first time Wrestling-Network had lost a Facebook page.

“We’ve removed or disabled access to the following content that you posted on Facebook because we received a report from a third-party that the content infringes their copyright(s),” Facebook told BeBe in an email.

BeBe wrote back to try and resolve matters, offering to put things right should there have been a copyright-related oversight. But Facebook weren’t interested in being the middle-man in this dispute, instead directing BeBe to contact the person who filed the complaint against him.

“If you believe that we have made a mistake in removing this content, then please contact the complaining party directly.[..],” Facebook responded.

Once furnished with the complainant’s details, BeBe recognized them as belonging to a person he’d been in an unrelated dispute with. Despite using a Yahoo email address rather than a corporate one, the fake copyright complainer had convinced Facebook to shut down a page in the name of a third-party.

Worse still, BeBe would now have to negotiate with his adversary in order to get his page back.

“If both parties agree to restore the reported content, please ask the complaining party to contact us via email with a copy of the agreement so that we can refer to the original issue,” Facebook explained. “We will not be able to restore this content to Facebook unless we receive explicit notice of consent from the complaining party.”

Fake copyright reports like this aren’t new. In 2011 there was a spate of false notices which saw well-known Facebook pages belonging to Ars Technica, Neowin and Redwood Pie all being taken down by fraudulent complaints.

Three years ago it was exceptionally easy to take down a page, but here we are several years later and the process is still open to abuse. So how easy is it to nuke a Facebook page? On condition of anonymity, a person who targeted another’s Facebook page out of revenge explained.

“Taking down the Facebook page was simple. It requires a well written and thought out mail, directly sent to Facebook, but it was a very straight forward process,” TF was told.

“I did not pretend to be [a rightsholder] however. I claimed to be a company that handles such legal aspects for big corporations without them being directly involved, in order to avoid public backlash but at the same time protect their interests. I provided enough copyrighted posts to prove that the page was a constant abuser and the page was taken down.”

According to PCWorld, during the Department of Commerce Internet Policy Task Force’s first public forum yesterday, Ben Sheffner, vice president of legal affairs for the MPAA, said that erroneous or abusive notices were a tiny part of the DMCA picture and should be afforded appropriate levels of attention.

Corynne McSherry, intellectual property director at the Electronic Frontier Foundation, saw things quite differently.

Takedown abuse “regularly causes quite an uproar,” McSherry said. “Any multistakeholder dialog that was talking about the notice-and-takedown system and trying to improve it that didn’t include a discussion of takedown abuse would really have no legitimacy in the eyes of many, many Internet users.”

The problem might be small to the MPAA, but of course it’s not their Facebook page getting targeted. To smaller players the problem can be significantly more severe.

Source: TorrentFreak, for the latest info on copyright, file-sharing and anonymous VPN services.

Krebs on Security: Sony Pictures Plans Movie About Yours Truly

This post was syndicated from: Krebs on Security and was written by: BrianKrebs. Original post: at Krebs on Security

Sony Pictures is reportedly planning to make a big screen movie based at least in part on my (mis)adventures over the past few years as an independent investigative reporter writing about cybercrime. Some gumshoe I am: This took me by complete surprise.



The first inkling I had of this project came a few weeks ago when New York Times reporter Nicole Perlroth forwarded me a note she’d received from a Hollywood producer who was (and still is) apparently interested in acquiring my “life rights” for an upcoming film project. The producer reached out to The Times reporter after reading her mid-February 2014 profile of me, which chronicled the past year’s worth of reader responses from the likes of the very ne’er-do-wells I write about daily. Perlroth’s story began:

“In the last year, Eastern European cybercriminals have stolen Brian Krebs’s identity a half dozen times, brought down his website, included his name and some unpleasant epithets in their malware code, sent fecal matter and heroin to his doorstep, and called a SWAT team to his home just as his mother was arriving for dinner.”

I didn’t quite know what to make of the Hollywood inquiry at the time, and was so overwhelmed and distracted with travel and other matters that I neglected to follow up on it. Then, just yesterday, I awoke to a flurry of messages both congratulatory and incredulous on Twitter and Facebook regarding a story in The Hollywood Reporter:

“Sony has picked up the rights to the New York Times article ‘Reporting From the Web’s Underbelly,’ which focused on cyber security blogger Brian Krebs. Krebs, with his site, was the first person to expose the credit card breach at Target that shook the retail world in December.”

“Richard Wenk, the screenwriter who wrote Sony’s high-testing big-screen version of The Equalizer, is on board to write what is being envisioned as a cyber-thriller inspired by the article and set in the high-stakes international criminal world of cyber-crime.”

Judging from accounts of the screenwriter’s other movies, if this flick actually gets made someone vaguely resembling me probably will be kicking some badguy butt on the Silver Screen:

- The Expendables 2: Sly Stallone gets revenge.
- The Mechanic: Jason Statham hits hard.
- 16 Blocks: Bruce Willis…well…Bruce Willises.
- The Equalizer (Fall 2014): Denzel Washington tries to hide from his past life of kicking butt.

I still have yet to work out the details with Sony, but beyond remuneration (and perhaps a fleeting Hitchcock-style cameo) I would be delighted if I could influence the selection of the leading man. In the past week, I’ve been told I look like both Jim Carrey and Guy Pearce, but I’m not so sure. But if I had to pick one of my favorite actors, I’d love to see Edward Norton in the role. What about you, dear readers? Sound off in the comments below.

Update, 8:24 p.m. ET: Minneapolis Star-Tribune reporter Jennifer Bjorhus managed to get confirmation from Sony that the studio was working on this film.

TorrentFreak: WWE Lawyer Offers Gifts to Obtain Streaming Pirate’s Home Address

This post was syndicated from: TorrentFreak and was written by: Andy. Original post: at TorrentFreak

After launching its own streaming network, and amid speculation surrounding a takeover, World Wrestling Entertainment (WWE) is now worth a reported $2.3 billion.

Like many large entertainment industry companies, WWE is an aggressive protector of its copyrights and for several years has been pursuing companies and site owners who dare to step over the line. Part of that strategy is to force fan sites to hand over their domains, should they include ‘WWE’ in their URL. For owners of sites which threaten their PPV and streaming sales, things aren’t much better.

During March 2013, Facebook said that WWE Intellectual Property Director Matthew Winterroth was behind the closure of a page operated by Wrestling-Network, a site offering links to WWE streams and shows. Wrestling-Network operator ‘BeBe’ was told by the social network that he would need to contact the lawyer directly to solve the dispute. BeBe decided to quit Facebook and moved to Twitter instead, but by the summer WWE had raised its head again, this time after PayPal disabled an account used for the site’s finances.

BeBe says that in October WWE sent a takedown notice to Cloudflare, which handed over the details of the site’s actual host. For a few months things went quiet, but last week all that changed. PayPal closed the site’s new account, which had been opened by a third-party, and Facebook shut down Wrestling-Network’s new page and BeBe’s personal page while they were at it. At this point things took a turn for the unusual.

After being given Winterroth’s contact details by Facebook, BeBe contacted the lawyer to see what could be done.

“My Facebook page was removed, care to share why?” BeBe wrote in an email to WWE last Saturday.

Without being given any further details (aside from BeBe’s email address which is enough to connect him with Wrestling-Network via a simple Google search), Winterroth wrote straight back suggesting there might have been some kind of mistake.

“What is your name, address and Facebook page that was potentially inadvertently removed and I’ll look into it,” the lawyer wrote.

“,” BeBe responded.

Since Winterroth was the person named by Facebook as being responsible for the takedowns, it would be reasonable to presume that he already knew the circumstances behind the page’s disappearance, so suggesting at this point that there might have been some kind of error seems somewhat unusual. Nevertheless, Winterroth further underlined that notion in a rather unusual follow-up email.

Needless to say, BeBe wasn’t tempted to take up the offer.

“I just woke up and while I was checking my phone, I read the email and started laughing hysterically,” BeBe informs TF.

“I mean, I heard a long time ago about a case where in order to arrest them on US territory, some guys were attracted to the USA by undercover FBI agents who promised them money and girls, but a gift bag from WWE? Really? He could at least given me some WrestleMania tickets.”

BeBe says he politely declined the offer.

“Oh, that’s so generous of you, but no thanks,” he told Winterroth. “I just want my page back since I didn’t post any links to copyrighted materials like you claim.”

Exactly 20 minutes later, the WWE lawyer’s tone had changed.

“Thank you for your correspondence. We have shut down your Facebook page and also worked with PayPal to permanently suspend your payment processor account with them. We now have your address and whereabouts in Romania,” he explained.

“Should you not shut down the website and agree not to infringe WWE intellectual property in the future in an immediate fashion, WWE will continue to work with our counsel in Romania, as well as the relevant legal authorities, including the Ministry of Internal Affairs/Bucharest City Police and Romanian National Audiovisual Council on our ongoing criminal complaint against you.”

What followed were demands for BeBe to hand over his domain but with tempers beginning to fray, that seemed unlikely.

“[..] If you don’t know, Romania is not a state in the United States of America. Romania is a country in eastern Europe. Unless you figured it out by now, US law does not apply here and no Romanian law is being violated,” BeBe told the WWE in an Anakata-inspired response.

“Yes, this is why we are working closely with Romanian legal authorities on this matter, who have more knowledge of the current state of Romanian law that [sic] either you or I,” BeBe was informed. “Your website exists to infringe WWE intellectual property in a wholesale fashion, and such illegal use will not be tolerated.”

At this point, relations truly broke down.

“Ok, ok, I’m gonna go outside and wait for the SWAT team, or are you gonna send Seal Team 6? Well, whatever, in the meantime, you can go fuck yourself ‘Captain Skinny-Dick’,” BeBe told Winterroth.

“Oh, since you wanted my name and address, here it is: Mr. Fukhusen, 110 eatshitlane, 6800 Romania. Also, please stop with these legal threats Judge Judy, go back in your room and watch Suits and Law and Order.”

Signing off with a request for Winterroth to say “Hi” to WWE supremo Vince McMahon, BeBe severed his “negotiations” with WWE and has heard no more since.

Whether WWE will be tag-teaming with the Romanian police anytime soon remains to be seen.


Source: TorrentFreak, for the latest info on copyright, file-sharing and anonymous VPN services.

Raspberry Pi: Welcome to the newest member of the Raspberry Pi family!

This post was syndicated from: Raspberry Pi and was written by: liz. Original post: at Raspberry Pi

If you’ve emailed our info@ address in the last year, spoken to us about trademarks or talked to us on Facebook or G+, you’ll have bumped into the indefatigable Lorna. She went on maternity leave a couple of weeks ago, and today she had a little boy. Welcome to the Pi family, Ronan Peter: we’re very pleased to meet you!

TorrentFreak: Russia’s Facebook Prepares To Make Peace With USTR Over Piracy

This post was syndicated from: TorrentFreak and was written by: Andy. Original post: at TorrentFreak

Every year the Special 301 Report identifies countries thought by the United States Trade Representative to pose the biggest intellectual property-related threats to U.S. companies. Russia has been a ‘priority’ country for some time, not least due to the actions of one of its biggest and most influential websites.

VKontakte (In Touch) is Russia’s Facebook. It’s a huge operation with tens of millions of users, each of whom has the ability to upload music, movies and TV shows to share with their friends. And with their friends’ friends. And with their friends’ friends’ friends.

Needless to say, entertainment companies aren’t pleased that this social networking giant is facilitating piracy on a grand scale, especially when that content – music in particular – goes on to fuel countless free MP3 download portals all around the Internet. If you’ve ever downloaded MP3s from the free web, chances are some of that music has come from VK.

For some time VK has been keen to update its image by making steps towards becoming more rightsholder-friendly. That said, it’s never really been enough for the U.S., and as a result Russia has again found itself on the latest Special 301 Report. But there are signs that things could be getting more serious.

VK Executive Director Dmitry Sergeyev told ITAR-TASS yesterday that consultations between his company and rightsholders were underway, with a view to the signing of an anti-piracy memorandum with telecoms regulator Roskomnadzor.

As the government outfit at the center of Russia’s web-blocking mechanism, Roskomnadzor has significant power. Its anti-piracy memo deals with the pre-trial settlement of disputes between sites and copyright holders and requires signatories to implement content fingerprinting and identification systems in order to filter and block unauthorized material.
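For illustration only: real content-identification systems use perceptual or acoustic fingerprints that survive re-encoding, but the filtering flow the memorandum requires can be sketched with a crude exact-match stand-in. Every name and the hash choice below are mine, not anything VK or Roskomnadzor has published:

```python
import hashlib

def fingerprint(data: bytes) -> str:
    """Crude stand-in for a real perceptual fingerprint: an exact content
    hash. Production systems hash extracted features, not raw bytes, so
    matches survive transcoding, cropping and re-compression."""
    return hashlib.sha256(data).hexdigest()

def should_block(upload: bytes, blocklist: set) -> bool:
    """Reject an upload whose fingerprint matches a registered protected work."""
    return fingerprint(upload) in blocklist

# A rightsholder registers a work; any byte-identical upload is then filtered.
protected = {fingerprint(b"registered song bytes")}
```

The exact-match version is trivially defeated by re-encoding the file, which is precisely why the memorandum asks for proper fingerprinting rather than plain hashing.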

“VKontakte will introduce content identification, which will be used to monitor and promptly delete published content protected by copyright,” a source close to the company said.

“This will be the first step towards removing the social network from the U.S. Trade Representative’s Special 301 Report, which is currently limiting the company’s ability to raise funds abroad and sign agreements with foreign rightsholders.”

Anti-piracy memorandum signatory the Russian Anti-Piracy Organization (RAPO) will be the messenger of progress. The group says it will monitor VK for pirate content in the months to come and if there is significant improvement, the MPAA will be informed.

“During this year, the industry will be observing what is happening to the sites, including,” RAPO chief Konstantin Zemchenkov said.

“If pirate content disappears from the social network, we’ll report to the MPAA, which in turn will report that fact to the IIPA [International Intellectual Property Alliance], which will inform the US authorities.”

Since the Special 301 Report is based on the previous year’s data, even in the event of progress VK won’t be able to get off the list until 2015. The site has been included since 2011, so removal isn’t going to come easy. Other local sites, such as and Rapidgator, remain on the list as thorns in the side of the U.S.

Source: TorrentFreak, for the latest info on copyright, file-sharing and VPN services.

Errata Security: RSAC Keynote: support your local sheriff

This post was syndicated from: Errata Security and was written by: Robert Graham. Original post: at Errata Security

RSA Security chairman Art Coviello opened his company’s conference with a discussion of the “BSAFE backdoor” controversy [video]. Rather than defending his company’s mistakes in the affair, he seemed to justify them with a four-point plan calling for greater powers for law enforcement.

#1 “Renounce the use of cyber-weapons, and the use of the Internet for waging war”

This is sure to be a crowd-pleaser, touching upon the “0-day” debate in our community, but it’s wholly without substance.

We already use the Internet for waging war, whether it’s servicemen sending emails back home, or using Internet connectivity to control drones on the battlefield. Internet is communications, and communications is essential to warfare. We no longer have the ability to communicate without using the Internet. In modern warfare, all sides use the Internet for waging war.

Of course, that’s not precisely what he meant (I think). Instead, he probably refers to attacking each other through cyberspace. But it’s the same thing. If we are raining down terror from Internet-controlled drones, then that control mechanism, the Internet, becomes fair game. We can’t tell the victims of drone attacks that while shooting back at the drones is allowed by the rules of war, hacking or viruses are somehow morally reprehensible and off limits. It’s the same with outer space: our use of GPS for precision-guided missiles and satellite communications means waging war in space, even though no military action has yet taken place in space. We are just lucky we haven’t yet attacked somebody with the ability to put ball-bearings into low orbit, taking out our GPS system and our ability to launch anything into space for a decade.

In short, his idea “renouncing the use of the Internet for waging war” demonstrates a total lack of understanding of the issue.

He’s more on target with “cyber-weapons”. Our community has a legitimate debate over “military 0days”, and how the military’s purchase of 0days outbids bug bounties that serve to protect us by closing vulnerabilities.

However, the blanket statement about “cyber-weapons” ignores this complex issue, and treads bad ground. The argument seems tailor-made to appeal to the EFF crowd, but these people don’t renounce cyber-weapons as a principle. Instead, they defend their use, such as claiming Anonymous hackers were justified in using LOIC (a DDoS tool) against PayPal.

There is also the issue that virtually all “weapons” in cyberspace are dual-use: used by defenders as well as attackers. To outsiders, Nmap and Metasploit seem like evil tools with no legitimate purposes, but in fact they are most heavily used by defenders in protecting their networks against hackers. Again, the EFF hotly defends the use of such tools. That’s why the debate in our community centers on “0days”: it’s the one tool that doesn’t seem to be particularly useful to defenders.

Then there is the issue about whether code is speech (again, something the EFF defends). Virtually all “cyber-weapons” are open-source (except for the 0-days). Restricting them becomes an intolerable offense to basic rights.

In short, what Coviello is talking about is the same logic used by law enforcement in the 1990s, when encryption was classified as a munition and tightly controlled. The consequence was that it left good people open to attack. While this point looks initially like a sop to the anti-war crowd, it is in fact an attack on our liberties.

#2 “Cooperate internationally in the investigation, apprehension, and prosecution of cyber-criminals”

Our job in the cybersec community is to defend computers against hackers. That doesn’t automatically make us tools of the state for prosecuting cyber-criminals.

For one thing, the definition of “cyber-criminal” is overly broad. Unlocking your iPhone makes you a cybercriminal. Incrementing a number in a URL makes you a cybercriminal. Spoofing your MAC address makes you a cybercriminal. Posting to Facebook can make you a cybercriminal.

Worldwide, most countries are oppressive regimes. Certainly we aren’t going to aid law enforcement internationally and help those regimes. Even in the mostly “free” country of the United States, law enforcement has taken on the appearance of a police state. The U.S. jails over 1% of its population, which is 10 times more than any other free country. Half of all young black men are in the system, such as in jail or on parole. Even whites are more likely to be in jail in the United States than in Europe.

Yes, we in this community work on the side of law enforcement when it comes to real crimes like stealing money or murder. For a broad range of other things, we oppose law enforcement. Indeed, many of us live in constant fear that law enforcement will come up with a novel interpretation of the law in order to deem previously common whitehat activities cybercrime.

As in his first principle, Coviello reveals that he has gone back on the principles of RSA from the 1990s and is now taking the side of law enforcement against citizens. His comments seem to indicate that he’d find mandatory key escrow a good feature of encryption.

#3 “Ensure that economic activity on the Internet can proceed unfettered and that intellectual property rights are respected around the world”

Here Coviello is completely at odds with the rest of the cybersec community.

Yes, limited intellectual property protections for a limited time are the lifeblood of the modern economy, especially a “knowledge economy” like the United States. But our zeal to protect intellectual property has led to a cyber-police-state, where the DMCA is used to chill speech and patent trolls destroy innovation.

In justifying this principle, Coviello says “The rule of law must rule”. I’m not sure what he means by that. The phrase “rule of law” doesn’t mean the principle that law must crack down on wrongdoers. Instead, the phrase means that everyone is subject equally to the law, even the powerful. It means whichever laws we have, they should be applied equally.

And the lack of equal treatment under the law is exactly why people are upset with the current intellectual property regime. One example is how Disney appears to have tailored copyright law to its own benefit at the expense of everyone else. Another example is how the DMCA is wholly unbalanced between the powerful and the powerless.

We see a theme developing here: Coviello (and by extension RSA) is clearly coming down on the side of law enforcement against individual rights.

#4 “Respect and ensure the privacy of all individuals”

Unlike Coviello’s first three points, this seems reasonable. Maybe he isn’t such a bad guy.

But, later in his remarks, it’s clear that he’s not really standing up for privacy. He says “Governments have a duty to create and enforce a balance … that embraces individual rights and collective security”. It’s quite clear from the nature of his arguments where he sees the correct balance — toward maximum security, and consequently, minimal individual rights.


My translation of Coviello’s comments is this: “If we had backdoored our crypto, would that have been such a bad thing?”. Betraying customer trust on behalf of the government is consistent with his entire speech: trusting the NSA, trusting NIST, and most of all, trusting the good intentions of the police state.

Masscan: I spent more attention on the first principle, about “cyber-weapons”, than on the remaining three. I get the impression it’s targeted at me, since I build cyberweapons (like my masscan tool). I get the impression he’s saying “don’t condemn us for our bad behavior, we aren’t as bad as those cyberweapon builders! Condemn them instead!!”.

TorrentFreak: FACT Closes More Torrent and Usenet Sites, and Makes it Look Easy

This post was syndicated from: TorrentFreak and was written by: Andy. Original post: at TorrentFreak

Every week TorrentFreak receives emails from individuals trying to find out the status of sites all over the world. Many disappear without warning and with no Facebook or Twitter updates, users naturally fear the worst.

Just lately a lot of torrent-related sites have been under DDoS attack so temporary disappearances have been nothing out of the ordinary, but when a site goes down and stays down for a few days or more, things begin to look more serious.

One site currently affected by prolonged downtime is but for users of this site there will be no happy ending. Along with related sites MazysMadhouse and ParadiseLane, XtremeTV has fallen victim to the long arm of Hollywood, reaching across the Atlantic via local anti-piracy group FACT.

“The domains listed [in your email] are part of our ongoing activity to disrupt pirate websites and to create an effective deterrent to operators of such sites,” FACT Director of Communications Eddy Leviten told TF.


Direct confirmation from FACT aside, signs that the anti-piracy group have shuttered a site are not too hard to find, if one knows how they operate.

In cases we’ve covered previously, FACT employees have turned up on the doorsteps of admins or staff demanding they close down sites. FACT has plenty of resources so obtaining an address to visit is not usually that difficult, if they set their mind to it.

However, for some unknown reason there are plenty of site operators who make it very easy for FACT. In several recent cases, queries on the affected sites’ WHOIS entries have revealed the site operators’ names and addresses, completely unprotected.

While revealing WHOIS reports can indicate that FACT might have been in town, there is another more obvious sign – FACT’s email addresses embedded into domain entries.

In previous cases, as well as the one involving XtremeTV and MazysMadhouse, a FACT email address is now listed as the site’s contact email. This is because FACT demands that the site’s domain be signed over as part of the settlement agreement. Once the domain is in FACT’s hands it is shut down, taking the site with it, if it hasn’t gone offline already.


“We continue to direct visitors to domains signed over to FACT to legitimate resources where they can enjoy films and TV programmes in the cinema, online and on disc,” FACT’s Eddy Leviten told us.

Finally, Usenet indexing site also fell to FACT pressure this month. It appears the closure didn’t require a home visit, just an email to the site’s operators.

“Investigators at FACT have been examining your site and have noted that it is predominantly infringing TV content that is being made available,” FACT Director of Investigations and Intelligence Peter O’Rourke told the site in an email. “FACT requests that you desist from this activity immediately. Failure to do so will result in further investigation which may result in criminal prosecution.”

The perfectly understandable response from the site was to put personal lives first.

“We are no longer under the radar. And therefore we also need to dip out. Remember we have lives and families,” the site’s operator said in a statement. “Sorry folks. Hope you all understand that this is out of our hands.”

All of the sites listed in this article are now closed and signs suggest that FACT will continue to close more, as long as site operators can be identified or simply convinced that prosecution and/or imprisonment is a serious possibility.

Source: TorrentFreak, for the latest info on copyright, file-sharing and VPN services.

The Hacker Factor Blog: Linking Up

This post was syndicated from: The Hacker Factor Blog and was written by: The Hacker Factor Blog. Original post: at The Hacker Factor Blog

I have a fondness for standards. Even if there is a non-standard option that may be better or faster, I like to be able to do something once and know it will be supported everywhere. For example, I recently griped about Internet Explorer’s unexpected HTML issues. But really, the problem was how IE was defaulting to an old backwards compatibility mode and not with the HTML or JavaScript. One change to the HTML head tag (to disable IE’s emulation mode) and everything just worked. Because I used HTML and JavaScript that is either standard or widely adopted, I did not have to modify existing code to make the page function properly.

However, I still had to make one little change for Internet Explorer. It’s all of these “one little change” items that I don’t like. I should not need to modify my system with custom changes for every single web browser and online service. Every custom tweak becomes a maintenance nightmare.

I have been fighting with this same nightmare for the last few days. Since last August, every analysis page at FotoForensics has had a Twitter link at the bottom. Anyone can easily click to share the analysis page with Twitter. One click and your tweet is pre-populated with the URL for sharing.

Recently I hit the part on my to-do list where I want to add in links to other social networking sites beyond Twitter. Unfortunately, each one seems to be a special case.

Like: It’s not just for valley girls.

I’m sure you’ve seen these sharing buttons on lots of web sites. “Pin-It!” “Like!” “+1″ Clicking on the icon typically opens a new window that is pre-populated with the page’s URL, title, and maybe a short description. If you’re not already logged into Pinterest, Facebook, or Google+, then you’ll be prompted to login. If you’re logged in, you have the option to edit the posting or just share it as-is.

The concept is nice — it simplifies usability, reduces the risk of a typo in the URL or text, and makes sharing more convenient. From the indirect marketing viewpoint, easier sharing means more word-of-mouth advertisement for the service.

There are three things you need in order to make this work. First, you need an icon for each service. Second, you need HTML code that people can click on. And third, you need some custom code on your server.

Such Cute Buttons!

Getting the buttons is the easy part. You can either draw your own or download pre-made icons. I took the easier route. I went to and found a pre-made set that I liked. The real nice thing about this online service is that you can sort by license (I only selected icons that are free for commercial use without attribution). You can either browse the collection or search for something specific.

In my case, I found a few icons that I liked and they were all in the same collection. I then used one as a template for creating additional icons. For example, the Google icon was almost what I wanted, but it needed a plus sign. So I edited the icon, shifted over the “g” and added a “+”. And while the set has nice icons for Digg, Facebook, and Twitter, it was missing an icon for Reddit. So I used the Twitter icon as a template, changed the coloring, and replaced the icon with the Reddit android. Total time? About 2 minutes.


The next step requires adding HTML to each page. Fortunately for FotoForensics and most blog software, this really just means adding it once to a template.

Almost every online service wants you to link to their JavaScript code, load their CSS styles, and link to them. For example, Twitter wants you to include code like:

<a href="" class="twitter-share-button" data-url="http://myurl">Tweet</a>
<script>!function(d,s,id){var js,fjs=d.getElementsByTagName(s)[0],p=/^http:/.test(d.location)?'http':'https';if(!d.getElementById(id)){js=d.createElement(s);;js.src=p+'://';fjs.parentNode.insertBefore(js,fjs);}}(document, 'script', 'twitter-wjs');</script>

While this can be convenient, it’s also going to significantly slow down your web page’s load time. The full page won’t load until all of the components from all of the various third-party services load. There’s also a privacy issue here: Twitter receives the web Referrer header and can tell what page you are visiting. If the page contains login credentials in the URL, then Twitter can track you.

My preference is to forget all of the third-party links and just use a simple URL to submit the page. While I do use JavaScript a little, my code is significantly smaller than Twitter’s recommended JavaScript and it doesn’t need to wait for any third-party web requests. And best yet: Twitter cannot spy on your web browsing unless you click on the Twitter link. Here’s my icon that links to Twitter:

<img title='Twitter' onclick='window.open("", "Twitter", "height=500,width=500,left=100,top=100,resizable=yes,scrollbars=yes, toolbar=yes,menubar=no,location=no,directories=no,status=yes")' src='smimg/twitter.png'>

This code displays an HTML image. I gave it a “title” that says “Twitter” when you hover the mouse over it. I also defined an onclick() event. Clicking on the icon will open a new window that is ready to share my URL to Twitter. (Be sure to URI-encode the URL.)
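A minimal sketch of that URI-encoding step (the helper name and the Twitter intent endpoint are my assumptions; the URLs in the original markup were stripped in syndication):

```javascript
// Hypothetical helper: URI-encode the page URL before splicing it into a
// service's share endpoint, so characters like ":" and "/" survive as a
// query-string parameter value.
function shareLink(endpoint, pageUrl) {
  return endpoint + encodeURIComponent(pageUrl);
}

// In the browser, the onclick handler would then be something like:
//   window.open(shareLink("https://twitter.com/intent/tweet?url=", location.href),
//               "Twitter", "height=500,width=500,resizable=yes,scrollbars=yes");
```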

I have similar code for Reddit, Pinterest, Facebook, and Google+.

<img title='Reddit' onclick='window.open("", "Reddit", "height=600,width=600,left=100,top=100,resizable=yes,scrollbars=yes, toolbar=yes,menubar=no,location=no,directories=no,status=yes")' src='smimg/reddit.png'>

<img title='Pinterest' onclick='window.open("", "Pinterest", "height=600,width=600,left=100,top=100,resizable=yes,scrollbars=yes, toolbar=yes,menubar=no,location=no,directories=no,status=yes")' src='smimg/pinterest.png'>

<img title='Facebook' onclick='window.open("", "Facebook", "height=600,width=600,left=100,top=100,resizable=yes,scrollbars=yes, toolbar=yes,menubar=no,location=no,directories=no,status=yes")' src='smimg/facebook.png'>

<img title='Google+' onclick='window.open("", "Google+", "height=600,width=600,left=100,top=100,resizable=yes,scrollbars=yes, toolbar=yes,menubar=no,location=no,directories=no,status=yes")' src='smimg/google.png'>

Each of the service URLs takes slightly different parameters, but it isn’t very complex. For example, most services want “url=” to point to your page’s URL; only Facebook wants “u=”. Pinterest requires both “url=” for the web page and “media=” for a representative picture. Pinterest also permits an optional “description=” that includes text. Reddit has additional options for specifying the title, description, etc. (Since FotoForensics doesn’t have a title, description, or other information about the picture, I leave it for the user to fill out.)
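The per-service parameter differences above can be sketched as a single URL builder. The endpoints are my assumptions based on the services’ public share URLs of the era; the parameter names (“url=”, Facebook’s “u=”, Pinterest’s “media=”) come straight from the text:

```javascript
// Sketch only: build the share URL for each service.
// Endpoints are assumed, not taken from the original post.
function buildShareUrl(service, pageUrl, imageUrl) {
  var u = encodeURIComponent(pageUrl);
  switch (service) {
    case "twitter":   return "https://twitter.com/intent/tweet?url=" + u;
    case "reddit":    return "https://www.reddit.com/submit?url=" + u;
    case "google":    return "https://plus.google.com/share?url=" + u;
    case "facebook":  // only Facebook wants "u=" instead of "url="
                      return "https://www.facebook.com/sharer/sharer.php?u=" + u;
    case "pinterest": // Pinterest also requires "media=" for a representative picture
                      return "https://www.pinterest.com/pin/create/button/?url=" + u +
                             "&media=" + encodeURIComponent(imageUrl);
    default: throw new Error("unknown service: " + service);
  }
}
```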

Some people don’t like putting the onclick inside the img tags. It’s easy enough to wrap the img tag with an anchor tag.

Server Bound

So far, I have very simple code that will work on all modern browsers. The next part is the hard part. Some online services try to automatically fill in additional information. The problem is, none of them follow the same standards. This means that you need header code on every web page that is service-specific. This is a huge waste of bandwidth since 99% of the time the people visiting the web page are not from that service…

Starting with Twitter… Twitter offers the ability to register your site. If someone posts a link to your site, then your server can embed a picture or small article preview that appears with the tweet. I detailed how to set this up in my Twitter Cards blog entry. After registering, you need to have some additional meta fields in the header block.


<title>FotoForensics – Analysis</title>

<meta name="description" content="foto forensics, photo forensics, error level analysis" />

<meta http-equiv="Content-Type" content="text/html;charset=utf-8" />
<link rel="stylesheet" type="text/css" href="/style.css" />
<meta name="twitter:card" content="photo">
<meta name="twitter:url" content="http://…">
<meta name="twitter:image" content="http://…">
<meta name="twitter:image:src" content="http://…">
<meta name="og:url" content="http://…">
<meta name="og:image" content="http://…">

To mitigate bandwidth, I only include this if the request is coming from Twitter.
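A sketch of that conditional inclusion, assuming the check keys off Twitter’s crawler user-agent (the post doesn’t say how the detection was actually implemented; the function names here are hypothetical):

```javascript
// Only emit the Twitter card metadata when the request appears to come from
// Twitter's crawler, so ordinary visitors don't pay the extra bandwidth.
function wantsTwitterCard(userAgent) {
  // "Twitterbot" is the substring Twitter's fetcher advertises (assumed here).
  return /Twitterbot/i.test(userAgent || "");
}

function extraHeadTags(userAgent, pageUrl, imageUrl) {
  if (!wantsTwitterCard(userAgent)) return "";
  return '<meta name="twitter:card" content="photo">\n' +
         '<meta name="twitter:url" content="' + pageUrl + '">\n' +
         '<meta name="twitter:image" content="' + imageUrl + '">';
}
```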

Facebook wants a different set of information:


<meta name="og:title" content="My Title" />
<meta name="og:description" content="My Description" />
<meta name="og:type" content="website" />
<meta name="og:site_name" content="My Site Name" />
<meta name="og:url" content="http://myurl" />
<meta name="og:image" content="http://myurl/image1" />
<meta name="og:image" content="http://myurl/image2" />
<meta name="og:image" content="http://myurl/image3" />

All of the og: fields are from the Open Graph Protocol. (It would be a nice standard if more sites supported it.) When the user clicks on the Facebook button, it calls a script on Facebook’s servers. This script goes to the URL (specified by the “u=” parameter) and harvests these og fields. Facebook uses these extra meta fields to pre-populate the posting.
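A rough illustration of what such a harvester does once it has fetched the page. A real crawler uses a proper HTML parser; this regex version, with a hypothetical function name, only shows the fields involved:

```javascript
// Pull og: metadata out of fetched HTML. Repeated fields (e.g. several
// og:image tags) are collected into arrays, mirroring how Facebook lets
// the user pick among multiple candidate images.
function harvestOgTags(html) {
  var tags = {};
  var re = /<meta\s+(?:name|property)=["'](og:[^"']+)["']\s+content=["']([^"']*)["']/g;
  var m;
  while ((m = re.exec(html)) !== null) {
    if (!tags[m[1]]) tags[m[1]] = [];
    tags[m[1]].push(m[2]);
  }
  return tags;
}
```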

There are just a few problems with Facebook… If you do not specify an image using “og:image”, then Facebook will try to grab the first image on the page. This is probably your site’s top banner and probably not what you want.

Also, if you specify multiple og:image tags, then the user can select the picture to post. It’s a nice feature, but Facebook does not list the pictures in any deterministic order. Specifically, if you have three “og:image” records, then Facebook will request all 3 pictures at the same time. The first picture to return will be listed first and the last picture to return will be listed last. In other words, the order is arbitrary.

I ended up fighting to make Google+ work. As far as I can tell, Google+ only obeys a few of the og tags. Google really wants a ‘canonical’ tag for the URL and an “image_src” for the representative image.


<link rel="canonical" href="http://myurl" />
<link rel="image_src" href="http://myurl/image" />

I actually had one other problem with Google. It turns out that Google queries the web page from the Google corporate proxy. This proxy was banned at FotoForensics for uploading full-frontal nudity on 4-Feb-2014 at 16:08:40 GMT. (Yes, there are people at Google who are using the corporate network to view porn.) I’ll probably end up adding in a special rule to identify whether it is Google’s post-bot (don’t ban) or a Google employee (ban!).

And for people keeping score: first Google tried to upload every picture at imgur to FotoForensics, then it tried to submit words and text, and then it tried to guess URLs. Now their employees are uploading porn. What happened to “do no evil”? My site currently has more custom code to address abuses from Google than specific code for any other web service. Compared to Google, the automated vulnerability scan-and-attack bots from China and the Netherlands are downright friendly.

Unlike Google+, Facebook, or Twitter, Reddit does not appear to harvest anything from my web site. There is no need for any custom code on my server to support Reddit.

Weakest Link

I like the idea of linking to social media sites. Users can see content and easily share it with other people. The thing that I do not like is the requirement to have service-specific code on my site. Having a URL to easily share with a service is not a problem. However, the requirement for service-specific headers defeats the purpose of the web and further segments the Internet. We used to have web sites that only worked with specific browsers. Now we have web sites that cannot be easily shared across social networks.

ps. If you find any problems with the social network buttons at FotoForensics, please let me know!

TorrentFreak: BSA Offers Facebook Users Cash If They Rat On Software Pirates

This post was syndicated from: TorrentFreak and was written by: Ernesto. Original post: at TorrentFreak

All the major software companies see piracy as a massive problem. Unlike the music and movie industries, however, they tend to focus their legal action more on the business side than on individual consumers.

Over the past two decades the Business Software Alliance (BSA) has represented major software companies, including Adobe, Apple, Microsoft, Oracle and Symantec, in their fight against under-licensed businesses.

This has resulted in raids on various companies, whose computers are often confiscated on the spot if the business owner fails to pay his or her dues. Some have described these practices as mafia-like, but the BSA believes they’re needed to stamp out piracy.

Recently, the BSA has upped the ante as they are now soliciting tips from the public about potentially infringing companies. While input from the public was always welcome, it’s the supporting PR-campaign that raises eyebrows.

The BSA is currently running an ad campaign on Facebook encouraging people to report piracy in return for a healthy reward. The example below shows how the group is trying to lure snitches with a ski-vacation.

BSA’s report piracy ad on Facebook

Those who click through to the campaign website and read the fine print will find out that BSA is not really offering a vacation. They do, however, promise to send tipsters a cut of any eventual settlement, should they choose to pursue a lead in court.

This reward could reach $5,000 for a settlement of $15,000, or a massive $200,000 for a single tip if BSA gets a settlement of over $3 million. The rewards in question are targeted at users from various countries, including the US, Australia, Canada and China.

To show people how easy it is to become an anti-piracy reporter the BSA also lists an audio interview with an informant on their site.

“I feel great [about reporting piracy] because it’s wrong for businesses to do stuff like that. I would do it again no matter what. It was very easy to report, you have nothing to worry about,” the informant says.

Sounds great doesn’t it?

Here at TorrentFreak we appreciate a nice vacation as well, so hereby we rat out the U.S. military for running unlicensed copies of Windows 7. We’re looking forward to our reward…

Source: TorrentFreak, for the latest info on copyright, file-sharing and VPN services.

TorrentFreak: Hyperlinking is Not Copyright Infringement, EU Court Rules

This post was syndicated from: TorrentFreak and was written by: Andy. Original post: at TorrentFreak

The European Union has been expanding since its origins in the 1950s and now comprises 28 member states, each committed to EU law.

One of the key roles of the EU’s Court of Justice is to examine and interpret EU legislation to ensure its uniform application across all of those member states. The Court is also called upon by national courts to clarify finer points of EU law to progress local cases with Europe-wide implications.

One such case, referred to the CJEU by Sweden’s Court of Appeal, is of particular interest to Internet users as it concerns the very mechanism that holds the web together.

The dispute centers on a company called Retriever Sverige AB, an Internet-based subscription service that indexes links to articles that can be found elsewhere online for free.

The problem came when Retriever published links to articles published on a newspaper’s website that were written by Swedish journalists. The company felt that it did not have to compensate the journalists for simply linking to their articles, nor did it believe that embedding them within its site amounted to copyright infringement.

The journalists, on the other hand, felt that by linking to their articles Retriever had “communicated” their works to the public without permission. Believing they should be paid, the journalists took their case to the Stockholm District Court. They lost in 2010 and took the matter to appeal, and from there the Svea Court of Appeal sought advice from the EU Court.

Today the Court of Justice published its lengthy decision and it’s largely good news for the Internet.

“In the circumstances of this case, it must be observed that making available the works concerned by means of a clickable link, such as that in the main proceedings, does not lead to the works in question being communicated to a new public,” the Court writes.

“The public targeted by the initial communication consisted of all potential visitors to the site concerned, since, given that access to the works on that site was not subject to any restrictive measures, all Internet users could therefore have free access to them,” it adds.

“Therefore, since there is no new public, the authorization of the copyright holders is not required for a communication to the public such as that in the main proceedings.”

However, the ruling also makes it clear that while publishing a link to freely available content does not amount to infringement, there are circumstances where that would not be the case.

“Where a clickable link makes it possible for users of the site on which that link appears to circumvent restrictions put in place by the site on which the protected work appears in order to restrict public access to that work to the latter site’s subscribers only, and the link accordingly constitutes an intervention without which those users would not be able to access the works transmitted, all those users must be deemed to be a new public,” the Court writes.

So, in basic layman’s terms, if content is already freely available after being legally published and isn’t already subject to restrictions such as a subscription or pay wall, linking to or embedding that content does not communicate it to a new audience and is therefore not a breach of EU law.

The decision, which concurs with the opinions of a panel of scholars, appears to be good news for anyone who wants to embed a YouTube video in their blog or Facebook page, but bad news for certain collecting societies who feel that embedding should result in the payment of a licensing fee.

Source: TorrentFreak, for the latest info on copyright, file-sharing and VPN services.