Posts tagged ‘Privacy’

The Hacker Factor Blog: Happy Holidays from Facebook

This post was syndicated from: The Hacker Factor Blog and was written by: The Hacker Factor Blog. Original post: at The Hacker Factor Blog

If you do anything in the computer security or forensics world, then you probably view Facebook as a hive of scum and villainy. As a major social network, it attracts all sorts of criminal elements. Pedophiles use Facebook. Terrorists use Facebook. Drug dealers use Facebook. It’s like the only people not using Facebook are teens.

Social networks are split into two camps. On one side are the open forums. Everything is accessible and anyone can see content without needing special access. Twitter, Reddit, and most news sites fall into this category. While some content can be made private, most is public.

On the other side are the walled gardens. These are social networks where people on the outside can barely see anything inside. Facebook and Apple are the two big examples. As someone who isn’t on Facebook, I’ve never actually seen FarmVille. And I cannot see most user profiles or “wall” pages without logging in and connecting to users. It’s that “connecting” part that is a problem for law enforcement. The last thing you want to do is tip off a suspect by friending them, just to gain access to their shared information.

One Little Change

Earlier this week, Facebook made a subtle but important change to their service. Specifically, they changed their picture filenames. This, in turn, directly impacts online forensics. Since I’ve been tracking changes at Facebook for years, I’ve managed to put together a pretty good timeline.

Before July 2012, Facebook filenames used a five-number pattern: aa_bb_cc_dd_ee_n.jpg. (For example, 1234_5678_91011_12345_1234_b.jpg.) aa is the photo id, bb is the album id, cc is the profile id of the user who uploaded the picture. The dd and ee fields are random and designed to mitigate guessing a picture’s id. (The dd field may have some other purpose, but I never figured it out.) The final character, n, indicates the size for auto-scaling. Changing the final character to ‘o’ returns the original-size picture, ‘b’ is big, ‘q’ is 180px wide, etc.

Given a Facebook filename in this format, an analyst can quickly identify the URLs to the picture, album, and user’s profile.

In February 2012, Facebook started testing a new filenaming system. This system was fully deployed in July 2012. This new filename format uses a three-number pattern: aa_bb_ee_n.jpg. (E.g., 1234_567891011121_23456_b.jpg.) aa and bb are still the photo and album ids. The ee field is random and designed to prevent someone from guessing a picture’s id. The final character, n, still indicates the size for auto-scaling.

Given a Facebook filename in this three-number format, an analyst can still quickly identify the URL to the picture and wall page. If the picture’s wall page is public, then it displays the user’s account name, the image, and all comments related to the picture.
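The two filename layouts described above are simple enough to pull apart mechanically. Below is a small, hypothetical parser; the field names follow the post's aa/bb/cc notation, and the exact character classes (numeric IDs, one-letter size suffix) are assumptions for illustration:

```python
import re

# Hypothetical parser for the two Facebook filename layouts described above.
# Field names follow the post's notation (aa, bb, cc, dd, ee, n); the exact
# character classes are assumptions (numeric IDs, one-letter size suffix).
FIVE_PART = re.compile(r"^(\d+)_(\d+)_(\d+)_(\d+)_(\d+)_([a-z])\.jpg$")
THREE_PART = re.compile(r"^(\d+)_(\d+)_(\d+)_([a-z])\.jpg$")

def parse_fb_filename(name):
    m = FIVE_PART.match(name)
    if m:  # pre-July-2012 layout: aa_bb_cc_dd_ee_n.jpg
        aa, bb, cc, dd, ee, n = m.groups()
        return {"format": "five", "photo_id": aa, "album_id": bb,
                "profile_id": cc, "size": n}
    m = THREE_PART.match(name)
    if m:  # current layout: aa_bb_ee_n.jpg (ee is the anti-guessing field)
        aa, bb, ee, n = m.groups()
        return {"format": "three", "photo_id": aa, "album_id": bb, "size": n}
    return None  # not a recognized Facebook-style filename

print(parse_fb_filename("1234_5678_91011_12345_1234_b.jpg"))
print(parse_fb_filename("1000526_539054152803549_1177659804_n.jpg"))
```

Given the extracted photo and album IDs, an analyst could then construct the corresponding photo and wall-page URLs (when those pages are public).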

In October 2012, Facebook began to test an Akamai EdgeControl cache with cryptographic signatures. Akamai provides a last-mile content delivery system for distributing the network load. The cryptographic checksum prevents tampering to the URL. This means that real-time processing instructions in the URL, such as ‘/c92.0.403.403/’ for cropping or the size determination (e.g., ‘n’ or ‘o’), cannot be altered by the analyst. Any changes will return a ‘Content not found’ message.
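The anti-tamper idea can be sketched with a generic HMAC over the URL path. This is only an illustration of the concept: the signing key, token length, and path layout below are invented, and Akamai's actual algorithm is not documented in this post.

```python
import hashlib
import hmac

# Generic sketch of URL anti-tampering in the spirit of the signed CDN URLs
# described above. The key, token length, and path layout are invented.
SECRET = b"cdn-signing-key"

def sign(path):
    return hmac.new(SECRET, path.encode(), hashlib.sha256).hexdigest()[:16]

def serve(path, token):
    # Any edit to the path, such as the '/c92.0.403.403/' cropping
    # instructions or the size suffix ('n' vs 'o'), changes the signature.
    if not hmac.compare_digest(sign(path), token):
        return "Content not found"
    return "image bytes"

url_path = "/c92.0.403.403/1234_567891011121_23456_o.jpg"
token = sign(url_path)
print(serve(url_path, token))                        # valid: content served
print(serve(url_path.replace("_o.", "_b."), token))  # tampered: rejected
```

Because the server recomputes the signature from the path it receives, an analyst who edits the cropping or size fields gets the same "Content not found" response the post describes.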

The caching and anti-tamper system was deployed on 27-Dec-2012. However, the filenames still mapped to non-Akamai URLs for directly accessing the content at Facebook. In addition, relatively few pictures were served through the Akamai caching service.

All of this changed on 24-Nov-2014. (I may be off by a few days for the actual deployment). That’s when Facebook changed filenames again and began to distribute pictures almost exclusively through Akamai and with anti-tamper URLs. Technically, the filename still looks like the three-number format: aa_bb_ee_n.jpg. However, they changed the aa photo ID number in the filename. As a result, all filenames that predate 2014-11-24 can no longer be used to find the direct URL at Facebook. Pictures uploaded after 2014-11-24 may, in rare cases, be mapped to direct URLs at Facebook. But most of the time, they are only available from Akamai. Given only the Facebook filename, you can no longer find the URL to the picture hosted at Facebook. You can still find the wall page with the picture (if it is public), but not the direct URL to the picture itself.

For example, 1000526_539054152803549_1177659804_n.jpg is the filename of a picture that was uploaded to FotoForensics over a year ago. Prior to 24-Nov-2014, the direct URL derived from this filename would return the image; today it returns ‘Content not found’.

However, from this filename I can identify the wall page’s URL. (It’s gross, so I’m not hyperlinking to it.) According to the wall page, the picture’s new filename is 1014640_539054152803549_1177659804_o.jpg: the first number changed from 1000526 to 1014640.

‘Good news’ is relative

The impact to forensics and investigators is significant. If you have a filename that matches the three-number format, then you can trace the filename to Facebook. But if the file was acquired before 2014-11-24, then you cannot find the direct URL at Facebook in order to confirm that the file came from there. (By seeing the picture at Facebook, I suspect that law enforcement would have an easier time getting a warrant. Without the confirmation, it should be a little harder.) By the same means, some media outlets try to validate sources. They used to be able to confirm that a picture came from Facebook by tracing the filename to a URL. But today, they cannot positively confirm it unless the wall page is public.

In addition, anyone who was hotlinking to a picture at Facebook should have noticed that the link is now broken. In effect, Facebook just raised the walls a little higher around their private garden.

If someone sees a filename from Facebook, then it can no longer be traced back to the user. And if the URL contains an anti-tampering field (my example Facebook filename above did not have this field), then nobody can uncrop the image without more knowledge about where the picture is stored at Facebook. This stops people from snooping, law enforcement from tracking images without a warrant, and external web sites from hotlinking.

And the bad news?

Privacy advocates may be very pleased with this change. However, I think all of the privacy benefits are a side-effect from something much more detrimental. Since I have no insider knowledge about Facebook, I can only speculate about the cause behind this naming change. And I suspect that the cause is very anti-privacy.

Facebook rolled out this new change around 2014-11-24. That is just a few days after Facebook announced a major revision to their privacy policy. Most media outlets pointed out that the new policy is 70% shorter and much easier to read. But a few outlets, like PCWorld, pointed out that it specifies that Facebook wants to collect even more information about you.

For example, the new privacy policy says “We receive information about you and your activities on and off Facebook from third-party partners, such as information from a partner when we jointly offer services or from an advertiser about your experiences or interactions with them.” And this is where it comes back to pictures…

As I mentioned, Facebook had been testing Akamai’s EdgeControl cache service for months, but did not deploy it until they released their new privacy policy. Akamai is a huge company — they serve as much as 30% of all web traffic, and they collect metrics about users. To quote from the Wall Street Journal, “Because it stores copies of websites, Akamai has the potential to access 15% to 30% of total Web traffic. Two years ago, it began offering to track Web users’ browsing activity for advertising purposes.” The WSJ wrote that back in 2010, so Akamai has been tracking users for over six years.

Now we have Facebook, a giant company that can only collect information at Facebook, teaming up with Akamai, a giant company that can cross-collect information from a third of the Internet. It used to be that Facebook could only track you at third-party sites if their site had a link to Facebook. I previously showed how Facebook uses links at Home Depot to track users who visit this home improvement online store. But now, sites do not even need to have a link to Facebook.

Let’s trace how this entire thing works now. You visit a web site that is not a Facebook affiliate and has no link to Facebook. But, they do have a small ad that is hosted at Akamai. As your browser downloads the picture for the ad from Akamai, your browser (via the HTTP referer [sic] field) provides information about what site you are visiting. Akamai can even drop a cookie into your browser, just in case you change network addresses. (While not essential, the cookie simplifies following mobile devices.) Later, you go to some site that has a “Like us on Facebook” link with code hosted at Facebook and an image from Akamai. Now Akamai can put it all together and provide it to Facebook. They know the sites you visit, when you visited them, and what your interests are outside of Facebook. They can tie this together with Facebook information, so they further know your likes, friends, interests, etc.
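The aggregation step is mundane to implement, which is part of the point. Here is a toy sketch of how a CDN could join Referer data on a cookie; the log records and site names are invented for the example:

```python
from collections import defaultdict

# Illustrative sketch of the cross-site tracking described above: a CDN
# that serves assets for many sites can join its logs on a browser cookie.
# The log records and site names below are invented.
cdn_log = [
    {"cookie": "u123", "referer": "https://news.example/article"},
    {"cookie": "u123", "referer": "https://shop.example/deals"},
    {"cookie": "u123", "referer": "https://social.example/profile/alice"},
]

profiles = defaultdict(list)
for hit in cdn_log:
    # Each fetch of a CDN-hosted ad or image carries a Referer header
    # naming the page that embedded it, so every visit leaks to the CDN.
    profiles[hit["cookie"]].append(hit["referer"])

# Once any single hit ties cookie u123 to a known identity (say, the
# social-profile fetch), the entire browsing history attaches to it.
print(profiles["u123"])
```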

Moreover, the list of Akamai customers is huge! Best Buy, NPR, MySpace, McAfee… Facebook can now see into the walled gardens at Apple and Microsoft, since both of them are Akamai clients. The Department of Defense is listed as an Akamai customer… I wonder if Facebook can identify DoD employees? The same goes for the Australian Government National Security (another Akamai client).

Did you see that link to PCWorld that I have in the middle of this blog entry? (Where I point out that Facebook wants to collect information.) If you clicked it, then you triggered an Akamai tracker. The tracker is in some JavaScript on the PCWorld web page. The same goes for the links to Bloomberg and ABCnews that I have in the first paragraph.

Tis the season

But let’s go back to pictures. Why would Facebook change their filenames? The only reason that makes sense to me is that they intentionally want to break links for anyone hotlinking to their site. They are effectively drawing a line in the sand and saying “this is the baseline” for all new data collected.

Finally, I couldn’t help but notice that they rolled all of this out days before Thanksgiving and the start of the holiday shopping season. This year, an estimated 37% of shoppers are expected to shop online, and nearly all of them will trigger at least one Akamai or Facebook tracker.

Ho ho ho…

Darknet - The Darkside: Bitcoin Not That Anonymous Afterall

This post was syndicated from: Darknet - The Darkside and was written by: Darknet. Original post: at Darknet - The Darkside

One of the big advantages touted for Bitcoin (and other cryptocurrencies) was always the anonymity of its transactions. Yes, you can track a wallet address and see its transaction history, but there’s no real way to link that wallet address to a real person (or so we thought). I mean, other than any leaky fiat exchange […]


Errata Security: The Pando Tor conspiracy troll

This post was syndicated from: Errata Security and was written by: Robert Graham. Original post: at Errata Security

Tor, also known as The Onion Router, bounces your traffic through several random Internet servers, thus hiding the source. It means you can surf a website without them knowing who you are. Your IP address may appear to be coming from Germany when in fact you live in San Francisco. When used correctly, it prevents eavesdropping by law enforcement, the NSA, and so on. It’s used by people wanting to hide their actions from prying eyes, from political dissidents, to CIA operatives, to child pornographers.

Recently, Pando (an Internet infotainment site) released a story accusing Tor of being some sort of government conspiracy.

This is nonsense, of course. Pando’s tell-all exposé of the conspiracy contains nothing that isn’t already widely known. We in the community have long joked about this. We often pretend there is a conspiracy in order to annoy uptight Tor activists like Jacob Appelbaum, but we know there isn’t any truth to it. This really annoys me — how can I troll about Tor’s government connections when Pando claims there’s actually truth to the conspiracy?

The military and the government throw research money around with reckless abandon. That no more means they created Tor than it means they created the Internet back in the 1970s. A lot of that research is pure research, intended to help people. Not everything the military funds is designed to kill people.

There is no single “government”. We know, for example, that while some in government paid Jacob Appelbaum’s salary, others investigated him for his Wikileaks connections. Different groups are often working at cross purposes — even within a single department.

A lot of people have ties to the government, including working for the NSA. The NSA isn’t some secret police designed to spy on Americans, so a lot of former NSA employees aren’t people who want to bust privacy. Instead, most NSA employees are sincere in making the world a better place — which includes preventing evil governments from spying on dissidents. As Snowden himself says, the NSA is full of honest people doing good work for good reasons. (That they’ve overstepped their bounds is a problem — but that doesn’t mean they are the devil).

Tor is based on open code and math. It really doesn’t matter what conspiracy lies behind it, because we can see the code. It’s like Bitcoin: we know there is a secret conspiracy behind it, with the secretive Satoshi Nakamoto owning a billion dollars worth of the coins. But that still doesn’t shake our faith in the code and the math.

Dissidents use Tor — successfully. We know that because the dissidents are still alive. Even if it’s a secret conspiracy by the U.S. government, it still does what its supporters want, helping dissidents fight oppressive regimes. In any case, Edward Snowden, who had access to NSA secrets, trusts his own life to Tor.

Tor doesn’t work by magic. I mention this because the Pando article lists lots of cases where Tor failed to protect people. The causes were unlikely to have been flaws in Tor itself; they appear to have been more mundane. For example, the Silk Road server’s configuration shows it was reachable directly from the Internet as well as through Tor, a rookie mistake that revealed its location. The most perfect concealment system can’t work if you sometimes ignore it. It’s like blaming the Pill for not preventing pregnancy because you took it only on some days and not others. Thus, for those of us who know technically how things work, none of the cases cited by Pando shake our trust in Tor.

I’m reasonably technical. I’ve read the Tor spec (though not the code). I play with things like hostile exit nodes. I fully know Tor’s history and ties to the government. I find nothing in the Pando article that is credible, and much that is laughable. I suppose I’m guilty of getting trolled by this guy, but seriously, Pando pretends not to be a bunch of trolls, so maybe this deserves a response.

The Hacker Factor Blog: Lowering The Bar

This post was syndicated from: The Hacker Factor Blog and was written by: The Hacker Factor Blog. Original post: at The Hacker Factor Blog

The Electronic Frontier Foundation (EFF) is one of my favorite non-profit organizations. They have a huge number of attorneys who are ready to help people with issues related to online privacy, copyright, and security. If you’re about to make an 0-day exploit public and receive a legal threat from the software provider, then the EFF should be the first place you go.

The EFF actually provides multiple services. Some are top-notch, but others are not as high quality as they should be. These services include:

Legal Representation
If you need an attorney for an online issue, such as privacy or security, then they can give you direction. When I received a copyright extortion letter from Getty Images, the EFF rounded up four different attorneys who were interested in helping me fight Getty. (Getty Images backed down before I could use these attorneys.) Legal assistance is one of the EFF’s biggest and best offerings.

Legal News
The EFF continually releases news blurbs and whitepapers that discuss current events and their impact on security and privacy. Did you know that U.S. companies supply eavesdropping gear to Central Asian autocrats or that Feds proposed the secret phone database used by local Virginia cops? If you follow the EFF’s news feed, then you saw these reports. As a news aggregation service, their reports are very timely, but also very biased. The EFF’s reporting is biased toward a desire for absolute privacy online, even though nobody’s anonymous online.

Technical Services
The EFF occasionally promotes or releases software designed to assist with online privacy. While these efforts have good intentions, they are typically poorly thought out and can lead to significant problems. For example:

  • HTTPS Everywhere. This browser extension forces your web browser to use HTTPS whenever possible. It has a long set of configuration files that specify which sites should use HTTPS. Earlier this year, I wrote about some of the problems created by this application in “EFF’ing Up”. Specifically: (1) Some sites return different content if you use HTTPS instead of HTTP, (2) they do not appear to test their configuration files prior to releasing them, and (3) they do not fix bad configuration files.

  • TOR. The EFF is a strong supporter of the TOR Project, which consists of a network of servers that help anonymize network connections. The problem is that the EFF wants everyone to run a TOR relay. For a legal organization, the EFF seems to forget that many ISPs forbid end consumers from running public network services — running a TOR relay may violate your ISP’s terms of service. The TOR relay will also slow down your network connection as other people use your bandwidth. (Having other people use your bandwidth is why most consumer-level ISPs forbid users from hosting network services.) And if someone else uses your TOR relay to view child porn, then you are the person that the police will interrogate. In effect, the EFF tells people to run a network service without revealing any of the legal risks.
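For context on the first item: the core of an HTTPS Everywhere ruleset is just a per-site URL rewrite. Here is a toy version in Python (the rule below is invented; real rulesets are XML files shipped with the extension). It also shows why an untested rule is dangerous: the rule itself cannot know whether the https:// target actually serves the same content.

```python
import re

# A toy version of the per-site rewrite at the heart of a ruleset. The rule
# below is invented; real HTTPS Everywhere rulesets are XML files shipped
# with the extension.
RULES = [
    (re.compile(r"^http://(www\.)?example\.com/"), "https://example.com/"),
]

def rewrite(url):
    for pattern, replacement in RULES:
        if pattern.match(url):
            return pattern.sub(replacement, url, count=1)
    return url  # no matching rule: leave the URL alone

print(rewrite("http://www.example.com/page"))  # -> https://example.com/page
print(rewrite("http://other.example.net/"))    # unchanged
```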

Free SSL

The EFF recently began promoting a new technical endeavor called Let’s Encrypt. This free CA server should help web sites move to HTTPS. News outlets like Boing Boing, The Register, and ExtremeTech all reported on this news announcement.

A Little Background

Let’s back up a moment… On the web, you can connect to sites using either HTTP or HTTPS. The former (HTTP) is unencrypted. That means anyone watching the network traffic can see what you are doing. The latter (HTTPS) is HTTP over SSL; SSL provides a framework for encrypting network traffic.

But notice how I say “framework”. SSL does not encrypt traffic. Instead, it provides a way for a client (like your web browser) and a server (like a web site) to negotiate how they want to transfer data. If both sides agree on a cryptographic setting, then the data is encrypted.
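On the client side, that negotiation is what Python's ssl module configures before a single byte is encrypted. A minimal sketch (no network connection is made here):

```python
import ssl

# A minimal client-side sketch of the SSL/TLS negotiation framework; no
# network connection is made here. create_default_context() bundles the
# settings a careful client needs before any data is encrypted.
ctx = ssl.create_default_context()

# These defaults are what make the handshake meaningful: the client will
# refuse servers whose certificate fails to verify or match the hostname.
assert ctx.verify_mode == ssl.CERT_REQUIRED
assert ctx.check_hostname

# The actual negotiation (protocol version, cipher, certificate exchange)
# would happen inside:
#   ctx.wrap_socket(sock, server_hostname="example.com")
print("client context ready:", ctx.protocol)
```

Only once both sides agree on a protocol version and cipher inside that handshake does any application data get encrypted.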

HTTPS is not a perfect solution. In many cases, it really acts as a security placebo. A user may see that HTTPS is being used, but may not be aware that they are still vulnerable. The initial HTTPS connection can be hijacked (a man-in-the-middle attack) and fake certificates can be issued to phishing servers. Even if the network connection is encrypted, this does nothing to stop the web server from tracking users or serving malware, and nothing to stop vandals from attacking the web server. And all of this is before SSL exploits like Heartbleed and POODLE. In general, HTTPS should be considered a “better than nothing” solution. But it is far from perfect.

Entry Requirements

Even with all of the problems associated with SSL and HTTPS, for most uses it is still better than nothing. So why don’t more sites use HTTPS? There are really only a few barriers to entry. The EFF’s “Let’s Encrypt” project is a great solution to one of these problems and a partial solution to another. However, it doesn’t address all of the issues, and it is likely to create some new problems that the EFF has not disclosed.

Problem #1: Pay to Play
When an HTTPS client connects to an HTTPS server, the server transmits a server-side certificate as part of the cryptographic negotiation. The client then checks with a third-party certificate authority (CA server) and asks whether the server’s certificate is legitimate. This allows the client to know that the server is actually the correct server.

The server’s certificate identifies the CA network that should be used to verify the certificate. Unfortunately, if the certificate can say where to go to verify it, then bad guys can issue a certificate and tell your browser to verify it against a CA server run by the same bad guys. (Yes, the site looks like your bank, and its SSL certificate even looks valid, at least according to the bad guys’ own CA server.) For this reason, every web browser ships with a list of known-trusted CA servers. If the CA server is not on the known list, then it isn’t trusted by default.

If there are any problems with the server’s certificate, then the web browser issues an alert to the user. The problems include outdated/expired certificates, coming from the wrong domain, and untrusted CA servers.
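Two of those checks (expiry and domain mismatch) can be sketched against a certificate in the dict form that Python's ssl.getpeercert() returns. The sample certificate data below is invented:

```python
import ssl
import time

# Sketch of two browser-side certificate checks (expiry and domain
# mismatch), applied to a certificate in the dict form returned by
# ssl.getpeercert(). The sample certificate data below is invented.
cert = {
    "subjectAltName": (("DNS", "example.com"), ("DNS", "www.example.com")),
    "notAfter": "Jan 1 00:00:00 2014 GMT",  # long expired
}

def cert_expired(cert, now=None):
    now = time.time() if now is None else now
    return ssl.cert_time_to_seconds(cert["notAfter"]) < now

def name_matches(cert, hostname):
    names = [v for k, v in cert.get("subjectAltName", ()) if k == "DNS"]
    return hostname in names  # real browsers also handle wildcards

print(cert_expired(cert))                  # expired certificate
print(name_matches(cert, "example.com"))   # right domain
print(name_matches(cert, "evil.example"))  # wrong domain
```

A browser performs these checks (plus the trust-chain check against its CA list) on every HTTPS connection, and raises the warning dialog when any of them fails.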

And this is where the first barrier toward wide-spread use comes in… All of those known-trusted CA servers charge a fee. If you want your web server to run with an SSL certificate that won’t generate any user warnings, then you need to pay one of these known-trusted CA servers to issue an SSL certificate for your online service. And if you run multiple services, then you need to pay them multiple times.

The problems should be obvious. Some people don’t have money to pay for the trusted certificate, or they don’t want to spend the money. You can register a domain name for $10 a year, but the SSL certificate will likely run $150 or more. If your site doesn’t need SSL, then you’re not going to pay $150 to require it.

And then there are people like me, who cannot justify paying for a security solution (SSL) that isn’t secure. I cannot justify paying $150 or more just so web browsers won’t see a certificate warning when they connect to my HTTPS services. (I use self-signed certificates. By themselves, they are untrusted and not secure, but I offer client-side certificates. Virtually no sites use client-side certificates, yet client-side certs are what actually make SSL secure.)

The EFF’s “Let’s Encrypt” project is a free SSL CA server. With this solution, cost is no longer an entry barrier. When their site goes live, I hope to use it for my SSL needs.

Of course, other CA services, like Entrust, Thawte, and GoDaddy, may lower their prices or offer similar free services. (You cannot data-mine users unless they use your service. Even with a “free” pricing model, these CA issuers can still make a hefty profit from collected user data.) As far as the EFF’s offerings go, this is a very disruptive technology for the SSL industry.

Problem #2: Server Installation
Let’s assume that you acquired an SSL certificate from a certificate authority (Thawte, GoDaddy, Let’s Encrypt, etc.). The next step is to install the certificate on your web server.

HTTPS has never been known for its simplicity. Installing the SSL server-side certificate is a nightmare of configuration files and application-specific complexity. Unless you are a hard-core system administrator, you probably cannot do it. Even GUI interfaces like cPanel have multiple complex steps that are not for non-techies. You, as a user with a web browser, have no idea how much aggravation the system administrator went through in order to provide you with HTTPS and that little lock icon in the address bar. If they are good, then they spent hours. If it was new to them, then it could have been days.

In effect, lots of sites do not run HTTPS because it is overly complicated to install and configure. (And let’s hope that you don’t have to change certificates anytime soon…) Also, HTTPS certificates include an expiration date. This means that there is an ongoing maintenance cost that includes time and effort.

The EFF’s “Let’s Encrypt” solution says that it will include automated management software to help mitigate the installation and maintenance effort. This will probably work if you run one of their supported platforms and have a simple configuration file. But if you’re running a complex system with multiple domains, custom configuration files, and strict maintenance/update procedures, then no script from the EFF will assist you.

Of course, all of this is speculation since the EFF has not announced the supported platforms yet… So far, they have only mentioned a Python script for Apache servers. I assume that they mean “Apache2” and not “Apache”. And even then, the configuration at FotoForensics has been customized for my own needs, so I suspect that their solution won’t work out-of-the-box for me.

Problem #3: Client Installation
So… let’s assume that it is past Summer 2015, when Let’s Encrypt becomes available. Let’s also assume that you got the server-side certificate and their automated maintenance script running. You’ve got SSL on your server, HTTPS working, and you’re ready for users. Now everything is about to work without any problems, right? Actually, no.

As pointed out in problem #1, unknown CA servers are not in the user’s list of trusted CA servers. So every browser connecting to one of these web servers will see that ugly alert about an untrusted certificate.

Every user will need to add the new Let’s Encrypt CA servers to their trusted list. And every browser (and almost every version of every browser) does this differently. Making matters worse, lots of mobile devices do not have a way to add new CA servers. It will take years or even decades to fully resolve this problem.

Windows XP reached its “end of life” (again), yet nearly 30% of Windows computers still run XP. IPv6 has been around for nearly 20 years, yet deployment is still at less than 10% for most countries. Getting everyone in the world to update/upgrade is a massive task. It is easier to release a new system than it is to update a deployed product.

The EFF may dream of everyone updating their web browsers, but that’s not the reality. The reality is that users will be quickly trained to ignore any certificate alerts from the web browsers. This opens the door for even more phishing and malware sites. (If the EFF really wanted to solve this problem, then they would phase out the use of SSL and introduce something new.)

There is one other possibility… Along with the EFF, IdenTrust is sponsoring Let’s Encrypt. IdenTrust runs a trusted CA service that issues SSL certificates. (The cost varies from $40 per year for personal use to over $200 per year, depending on various options.) Let’s Encrypt could piggy-back off of IdenTrust. This would get past the “untrusted CA service” problem.

But if they did rely on the known-trusted IdenTrust that is already listed in every web browser… then why would anyone buy an SSL certificate from IdenTrust when they can get one for free via Let’s Encrypt? There has to be some catch here. Are they collecting user data? Every browser must verify every server, so whoever runs this free CA server knows when you connected to specific online services — that’s a lot of personal information. Or perhaps they hope to drive sales to their other products. Or maybe there will be a license agreement that prohibits the free service from commercial use. Any of these would undermine the entire purpose of trying to protect users’ traffic.

Problem #4: Fake Domains
Phishing web sites, where bad guys impersonate your bank or other online service, have been using SSL certificates for years. They will register a lookalike domain and hope that users won’t notice the extra “fjewahuif” text in the hostname. Then they register a real SSL certificate for that lookalike domain. At this point, victims see the “bankofamerica” text in the hostname, they see the valid HTTPS connection, and they assume that this is legitimate.
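The trick works because certificates validate the registered domain, not the brand a user thinks they see. Here is a toy detector; the domain names are illustrative, and the notion of "registrable domain" is deliberately naive:

```python
# A toy detector for the lookalike-domain trick described above. Domain
# names are illustrative, and the notion of "registrable domain" here is
# deliberately naive (real code would consult the Public Suffix List).
def registrable_domain(hostname):
    return ".".join(hostname.split(".")[-2:])

def looks_like_phish(hostname, brand_domain):
    brand = brand_domain.split(".")[0]
    # Brand text appears in the hostname, but the certificate would be
    # issued for a completely different registrable domain.
    return brand in hostname and registrable_domain(hostname) != brand_domain

print(looks_like_phish("bankofamerica.fjewahuif.com", "bankofamerica.com"))
print(looks_like_phish("www.bankofamerica.com", "bankofamerica.com"))
```

A CA issuing for "fjewahuif.com" is behaving correctly by its own rules; the mismatch between what the certificate attests and what the victim reads is the whole scam.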

The problem gets even more complicated when they use DNS hijacking. On rare occasions, bad guys have temporarily stolen domains and used them to capture customer information. For example, they could steal the “” domain and register a certificate for it at any of the dozens of legitimate CA servers. (If the real Bank of America uses VeriSign, then the fake Bank of America can use Thawte and nobody will notice.) With domain hijacking, it looks completely real but can actually be completely fake.

The price for an SSL certificate used to be a little deterrent. (Most scammers don’t mind paying $10 for a domain and $150 for a legitimate certificate, when the first victim will bring in a few thousand dollars in stolen money.) But a free SSL CA server? Now there’s no reason not to run this scam. I honestly expect the volume of SSL certificate requests at the EFF’s Let’s Encrypt servers to quickly grow to 50%-80% scam requests. (A non-profit with a legal emphasis that helps scammers? As M. Night Shyamalan says in Robot Chicken: “What a twist!”)

“Free” as in “Still has a lot of work to do before it’s really ready”

The biggest concern that I have with this EFF announcement is that the technology does not exist yet. Their web site says “Arriving Summer 2015” — it’s nearly a year away. While they do have some test code available, their proposed standard is still a draft and they explicitly say to not run the code on any production systems. Until this solidifies into a public release, this is vaporware.

But I do expect this to eventually become a reality. The EFF is not doing this project alone. Let’s Encrypt is also sponsored by Mozilla, Akamai, Cisco, and IdenTrust. These are companies that know browsers, network traffic, and SSL. These are some of the biggest names and they are addressing one of the big problems on today’s Internet. I have no doubt that they are aware of these problems; I just dislike how they failed to disclose these issues when they had their Pollyannaish press release. Just because it is “free” doesn’t mean it won’t have costs for implementation, deployment, maintenance, and customer service. In the open source world, “free” does not mean “without cost”.

Overall, I do like the concept. Let’s Encrypt is intended to make it easier for web services to implement SSL. They will be removing the cost barrier and, in some cases, simplifying maintenance. However, they still face an uphill battle. Users may need to update their web browsers (or replace their old cellphones), steps need to be taken to mitigate scams, users must not be trained to habitually accept invalid certificates, and none of this helps the core issue that HTTPS is a security placebo and not a trustworthy solution. With all of these issues still needing to be addressed, I think that their service announcement a few days ago was a little premature.

Чорба от греховете на dzver: Cine Grand @ Sofia Ring

This post was syndicated from: Чорба от греховете на dzver and was written by: dzver. Original post: at Чорба от греховете на dzver

I am happy that Arena and Cinema City finally have some competition. In the new mall they have “opened” a cinema with a pleasant concept: halls with about 50 recliners each, set a decent distance apart, which let you lie back and even sleep through the more boring films.

– The ads before the film take 6 minutes instead of 25!
– Privacy. The risk of someone sitting next to you and never shutting up is drastically smaller. The same goes for feet from the recliner next to you, centimeters from your head.
– Comfort. You can adjust the recliner with buttons.

– The cinema is unfinished, as is the whole mall. Only 2 halls are operating.
– The popcorn stand is at the back, not at the entrance.
– Inexperienced staff. They started the wrong film for us.

There is no longer any logic in Arena charging 9/12/15; they are no longer the best cinema.

TorrentFreak: U.S. Copyright Alert System Security Could Be Improved, Review Finds

This post was syndicated from: TorrentFreak and was written by: Ernesto. Original post: at TorrentFreak

February last year, the MPAA, RIAA and five major Internet providers in the United States launched their “six strikes” anti-piracy plan.

The Copyright Alert System’s main goal is to inform subscribers that their Internet connections are being used to share copyrighted material without permission. These alerts start out friendly in tone, but repeat infringers face a temporary disconnection from the Internet or other mitigation measures.

The evidence behind the accusations is provided by MarkMonitor, which monitors BitTorrent users’ activities on copyright holders’ behalf. The overseeing Center for Copyright Information (CCI) previously hired an impartial and independent technology expert to review the system, hoping to gain trust from the public.

Their first pick, Stroz Friedberg, turned out to be not so impartial, as the company had previously worked as RIAA lobbyists. To correct this unfortunate choice, CCI assigned Professor Avi Rubin of Harbor Labs to re-examine the system.

This week CCI informed us that a summary of Harbor Labs’s findings is now available to the public. The full review is not being published due to the vast amount of confidential information it contains, but the overview of the findings does provide some interesting details.

Overall, Harbor Labs concludes that the evidence gathering system is solid and that false positives, cases where innocent subscribers are accused, are reasonably minimized.

“We conclude, based on our review, that the MarkMonitor AntiPiracy system is designed to ensure that there are no false positives under reasonable and realistic assumptions. Moreover, the system produces thorough case data for alleged infringement tracking.”

However, there is some room for improvement. For example, MarkMonitor could implement additional testing to ensure that false positives and human errors are indeed caught.

“… we believe that the system would benefit from additional testing and that the existing structure leaves open the potential for preventable failures. Additionally, we recommend that certain elements of operational security be enhanced,” Harbor Labs writes.

In addition, the collected evidence may need further protections to ensure that it can’t be tampered with or fall into the wrong hands.

“… we believe that this collected evidence and other potentially sensitive data is not adequately controlled. While MarkMonitor does protect the data from outside parties, its protection against inside threats (e.g., potential rogue employees) is minimal in terms of both policy and technical enforcement.”

The full recommendations as detailed in the report are as follows:


The CCI is happy with the new results, which they say confirm the findings of the earlier Stroz Friedberg review.

“The Harbor Labs report reaffirms the findings from our first report – conducted by Stroz Friedberg – that the CAS is well designed and functioning as we hoped,” CCI informs TF.

In the months to come the operators of the Copyright Alert System will continue to work with copyright holders to make further enhancements and modifications to their processes.

“As the CAS exits the initial ramp-up period, CCI has been assured by our content owners that they have taken all recommendations made within both reports into account and are continuing to focus on maintaining the robust system that minimizes false positives and protects customer security and privacy,” CCI adds.

Meanwhile, they will continue to alert Internet subscribers to possible infringements. After nearly two years copyright holders have warned several million users, hoping to convert them to legal alternatives.

Thus far there’s no evidence that Copyright Alerts have had a significant impact on piracy rates. However, the voluntary agreement model is being widely embraced by various stakeholders and similar schemes are in the making in both the UK and Australia.

Source: TorrentFreak, for the latest info on copyright, file-sharing and anonymous VPN services.

TorrentFreak: BitTorrent Preps Sync Pro to Take on the Cloud

This post was syndicated from: TorrentFreak and was written by: Andy. Original post: at TorrentFreak

Sharing files across multiple devices used to be a laborious and time-consuming affair, but with the advent of services such as Dropbox the practice has become a breeze.

However, while many users remain unconcerned that third-party companies offering ‘cloud storage’ have control of their files, the revelations of Edward Snowden have undoubtedly increased fear of government snooping. BitTorrent Inc. pounced on this data security gap last year with the alpha release of its new product, ‘Sync’.

Released in early 2013, BitTorrent Sync is a tool that allows users to securely sync folders across multiple devices using the BitTorrent protocol. In terms of functionality it can be compared to any number of cloud-based services but with one key exception – Sync does not store data in the cloud but does so on users’ devices instead.

The software has now reached version 1.4 and the take-up has been impressive. During August, BitTorrent Inc. confirmed that there had been 10 million user installs transferring over 80 Petabytes of data between them.

Now the company is preparing to debut Sync 2.0 with greater functionality and flexibility while maintaining the privacy of its users. For the first time it will be made available in two editions, ‘free’ and ‘pro’. So what’s the difference?

“Sync 2.0 free will be an improvement from 1.4 and there won’t be any limits on performance or size of individual folders,” BitTorrent Inc.’s Christian Averill informs TorrentFreak.

“Pro users simply get premium functionality, catered specifically to individuals with large data needs and business workgroups.”

Sync 2.0 Free Edition
● Feature improvements, to enhance sharing and syncing folders
● Updated UI across platforms, new tablet apps on Android/iOS
● General performance improvements and bug fixes
● 30-day trial period for Sync Pro
● No restrictions on performance or individual folder sizes.

Sync 2.0 Pro Edition
● Access to very large folders (TBs): allows for on-demand access to individual files
● Control over folder permissions and ownership (see image below)
● Automatic synchronization across devices: all your devices are tied via a common identity; moves the relationship from device-to-device to person-to-person
● Priority technical support


BitTorrent Inc. believes that Sync 2.0 trumps services like Dropbox, Google Drive and OneDrive on a number of fronts. Sync 2.0 places no file-size restrictions on users, versus a 1TB limit for rivals. Files also sync up to 16x more quickly, since Sync does not rely on uploads to the cloud.

Finally, in addition to enhanced security Sync 2.0 aims to offer better value for money too. The ‘free’ edition is just that and the ‘pro’ version costs $39.99. Competitors Dropbox, Google Drive, and Microsoft OneDrive charge upwards of $83.99 for comparable services.

No firm release date has been announced for Sync 2.0, but those interested in becoming an early adopter can do so here.

Source: TorrentFreak, for the latest info on copyright, file-sharing and anonymous VPN services.

SANS Internet Storm Center, InfoCON: green: “Big Data” Needs a Trip to the Security Chiropracter!, (Wed, Nov 19th)

This post was syndicated from: SANS Internet Storm Center, InfoCON: green and was written by: SANS Internet Storm Center, InfoCON: green. Original post: at SANS Internet Storm Center, InfoCON: green

When the fine folks at Portswigger updated Burp Suite last month to 1.6.07 (Nov 3), I was really glad to see NoSQL injection in the list of new features.

What’s NoSQL, you ask? If your director is talking to you about Big Data, or your marketing department is talking to you about customer metrics, what they likely mean is an app with a back-end database that uses NoSQL instead of a traditional SQL database.

I’m tripping over this requirement this month in the retail space. I’ve got clients that want to track a retail customer’s visit to the store (tracking their cellphones using the store wireless access points), to see:

  • whether customers visit the store sections where the sale items are;
  • whether customers who visit area X statistically visit area Y next;
  • having visited the above areas, how many customers actually purchase something;
  • and, after seeing a purchase, how many feature-sale purchases come from net-new customers (versus repeat customers).

In other words, using the wireless system to track customer movements, then correlating it back to purchase behaviour to determine how effective each feature sale might be.

So what database do folks use for applications like this? Front-runners in the NoSQL race these days include MongoDB and CouchDB. Both databases do cool things with large volumes of data, but both ship wide open by default. MongoDB’s own security documentation advises: “Ensure that MongoDB runs in a trusted network environment and limit the interfaces on which MongoDB instances listen for incoming connections. Allow only trusted clients to access the network interfaces and ports on which MongoDB instances are available.”

CouchDB’s documentation carries a similar warning: “it should be obvious that putting a default installation into the wild is adventurous.”
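Acting on that guidance is a two-line change. A sketch of a locked-down `/etc/mongod.conf` (YAML option names as introduced in MongoDB 2.6; verify against the documentation for your version):

```yaml
# Sketch, not a complete hardening guide.
net:
  bindIp: 127.0.0.1        # listen on loopback only, not 0.0.0.0
  port: 27017
security:
  authorization: enabled   # require authenticated users
```

Binding to loopback and requiring authentication closes off exactly the “default installation in the wild” scenario both vendors warn about.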

So, where do I see folks deploying these databases? Why, in PUBLIC CLOUDS, that’s where!

And what happens after you stand up your almost-free database and the analysis on that dataset is done? In most cases, the marketing folks who are using it simply abandon it, in a running state. What could possibly go wrong with that? Especially if they didn’t tell anyone in either the IT or Security group that this database even existed?

Given that we’ve got hundreds of new ways to collect data that we’ve never had access to before, it’s pretty obvious that if big data infrastructures like these aren’t part of our current plans, they likely should be. All I ask is that folks do the same risk assessments that they would if this server was going up in their own datacenter. Ask some questions like:

  • What data will be on this server?
  • Who is the formal custodian of that data?
  • Is the data covered under a regulatory framework such as HIPAA or PCI? Do we need to host it inside a specific zone or VLAN?
  • What happens if this server is compromised? Will we need to disclose to anyone?
  • Who owns the operation of the server?
  • Who is responsible for securing the server?
  • Does the server have a pre-determined lifetime? Should it be deleted after some point?
  • Does the developer or marketing team that’s looking at the dataset understand your regulatory requirements? Do they understand that credit card numbers and patient data are likely bad candidates for an off-prem, casual treatment like this? (Hint: NO THEY DO NOT.)

Smartmeter applications are another big data thing I’ve come across lately. Laying this out end-to-end: collecting data from hundreds of thousands of embedded devices that may or may not be securable, over a public network, to be stored in an insecurable database in a public cloud. Oh, and the collected data impinges on at least two regulatory frameworks, PCI and NERC/FERC, and possibly also privacy legislation depending on the country. Ouch!

Back to the tools to assess these databases: Burp isn’t your only option for scanning NoSQL database servers. In fact, Burp is more concerned with the web front-end to NoSQL itself. NoSQLMap is another tool that’s seeing a lot of traction, and of course the usual-suspects list of tools has NoSQL scripts, components and plugins. Nessus has a nice set of compliance checks for the database itself, and NMAP has scripts for CouchDB, MongoDB and Hadoop detection, as well as for mining database-specific information. OWASP also has a good page on NoSQL injection.
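The injection class those tools look for is easy to reproduce without any real database. This toy matcher (pure Python, names and data invented for illustration) mimics how a MongoDB-style query evaluates attacker-supplied JSON, and shows why an operator object like `{"$ne": null}` matches everything:

```python
import json

def mongo_match(doc_value, condition):
    """Tiny subset of MongoDB-style matching: a dict carrying a $ne
    key is treated as an operator; anything else is an equality test."""
    if isinstance(condition, dict) and "$ne" in condition:
        return doc_value != condition["$ne"]
    return doc_value == condition

users = [{"user": "alice", "pw": "hunter2"},
         {"user": "bob",   "pw": "secret"}]

def login(payload: str):
    """Drops attacker-controlled JSON straight into the query,
    exactly the mistake NoSQL injection scanners probe for."""
    q = json.loads(payload)
    return [u for u in users
            if mongo_match(u["user"], q["user"])
            and mongo_match(u["pw"], q["pw"])]

# An honest login matches one user:
#   login('{"user": "alice", "pw": "hunter2"}')
# Injected operators match every user, no password needed:
#   login('{"user": {"$ne": null}, "pw": {"$ne": null}}')
```

The fix is the same as for SQL injection: validate that each field is a scalar (or use a typed schema) before it ever reaches the query layer.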

Shodan is also a nice place to look during the recon phase of an assessment.

Have you used a different tool to assess a NoSQL database? Or have you had, let’s say, an interesting conversation about securing data in such a database with your management or marketing group? Please, add to the story in our comment form!

Rob VandenBrink

(c) SANS Internet Storm Center. Creative Commons Attribution-Noncommercial 3.0 United States License.

Krebs on Security: Microsoft Releases Emergency Security Update

This post was syndicated from: Krebs on Security and was written by: BrianKrebs. Original post: at Krebs on Security

Microsoft today deviated from its regular pattern of releasing security updates on the second Tuesday of each month, pushing out an emergency patch to plug a security hole in all supported versions of Windows. The company urged Windows users to install the update as quickly as possible, noting that miscreants already are exploiting the weaknesses to launch targeted attacks.

The update (MS14-068) addresses a bug in a Windows component called the Microsoft Windows Kerberos KDC, which handles authenticating Windows PCs on a local network. It is somewhat less of a problem for Windows home users (it is only rated critical for server versions of Windows), but it poses a serious threat to organizations. According to security vendor Shavlik, the flaw allows an attacker to elevate domain user account privileges to those of the domain administrator account.

“The attacker could forge a Kerberos ticket and send that to the Kerberos KDC which claims the user is a domain administrator,” writes Chris Goettl, product manager with Shavlik. “From there the attacker can impersonate any domain accounts, add themselves to any group, install programs, view, change or delete data, or create any new accounts they wish. This could allow the attacker to then compromise any computer in the domain, including domain controllers. If there is a silver lining in this one it is in the fact that the attacker must have a valid domain user account to exploit the vulnerability, but once they have done so, they have the keys to the kingdom.”
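The core failure is a KDC honoring privilege claims whose integrity check an attacker can forge. This is a hedged analogy in Python, not real Kerberos: a toy “ticket” whose group claims are covered by a keyed HMAC shows what proper validation catches, whereas MS14-068 existed because a forgeable (unkeyed) checksum was accepted in place of one.

```python
import hashlib
import hmac
import json

KDC_KEY = b"kdc-secret-key"   # toy value; known only to the KDC

def issue_ticket(user: str, groups: list) -> dict:
    """Issue a toy ticket whose claims are signed with the KDC's key."""
    claims = {"user": user, "groups": groups}
    blob = json.dumps(claims, sort_keys=True).encode()
    sig = hmac.new(KDC_KEY, blob, hashlib.sha256).hexdigest()
    return {"claims": claims, "sig": sig}

def validate(ticket: dict) -> bool:
    """A KDC that recomputes the keyed signature rejects altered claims."""
    blob = json.dumps(ticket["claims"], sort_keys=True).encode()
    expected = hmac.new(KDC_KEY, blob, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, ticket["sig"])

t = issue_ticket("mallory", ["Domain Users"])
t["claims"]["groups"].append("Domain Admins")  # forged privilege escalation
# validate(t) now returns False: the forgery is caught because the
# attacker cannot recompute the keyed signature.
```

In the real vulnerability the client could supply a checksum type it was able to compute itself, so the analogous `validate` step silently passed for forged group memberships.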

The patch is one of two that Microsoft had expected to release on Patch Tuesday earlier this month, but unexpectedly pulled at the last moment. “This is pretty severe and definitely explains why Microsoft only delayed the release and did not pull it from the November Patch Tuesday release altogether,” Goettl said.

On a separate note, security experts are warning those who haven’t yet fully applied the updates from Patch Tuesday to get on with it already. Researchers with vulnerability exploit development firm Immunity have been detailing their work in devising reliable ways to exploit a critical flaw in Microsoft Secure Channel (a.k.a. “Schannel”), a security package in Windows that handles SSL/TLS encryption — which protects the privacy and security of Web browsing for Windows users. More importantly, there are signs that malicious hackers are devising their own methods of exploiting the flaw to seize control over unpatched Windows systems.

Wolfgang Kandek, chief technology officer at Qualys, said security researchers were immediately drawn to this bulletin because it updates Microsoft’s SSL/TLS implementation, fixing remote code execution and information leakage flaws that were found internally at Microsoft during a code audit.

“More information has not been made available, but in theory this sounds quite similar in scope to April’s Heartbleed problem in OpenSSL, which was widely publicized and had a number of documented abuse cases,” Kandek wrote in a blog post today. “The dark side is certainly making progress in finding an exploit for these vulnerabilities. It is now high time to patch.”

TorrentFreak: ISP Provides Free VPN to Protect Customer Privacy

This post was syndicated from: TorrentFreak and was written by: Andy. Original post: at TorrentFreak

In April a landmark ruling from the European Court of Justice declared Europe’s Data Retention Directive a violation of Internet users’ privacy and therefore invalid.

The Directive required Internet service providers and other telecommunications companies to log data on the activities of their subscribers, including who they communicate with and at what times, plus other identifying information such as IP addresses.

One of the first companies to react to the decision was Swedish ISP Bahnhof. The ISP has a reputation for objecting to what it sees as breaches of customer privacy, so did not hesitate following the Court’s announcement.

“Bahnhof stops all data storage with immediate effect. In addition, we will delete the information that was already saved,” Bahnhof CEO Jon Karlung said.

However, at the end of last month Swedish telecoms regulator PTS ordered Bahnhof to start storing communications data again under local data retention laws, warning the ISP that non-compliance would result in hefty fines.

At the time Karlung promised a “Plan B” to skirt the order, and today the details of that have emerged.

“One week remains before PTS requires a fine of five million krona ($676,500) from Bahnhof, as the company has not yet begun to store customer traffic data. Therefore, Bahnhof has chosen to activate ‘Plan B’,” Karlung announced today.

The plan involves Bahnhof reactivating data storage on November 24 as required. However, the ISP will thwart the collection of meaningful data by providing every customer with access to an anonymizing VPN service free of charge.

“The EU Court of Justice has held that it is a human right for people not to have their traffic data stored. We therefore believe that the time is ripe for VPN services to become popular,” Karlung says.

The service, called LEX Integrity, is a no-logging provider so it will be impossible for any entity to get useful information about its users.

“The EU Court of Justice has issued a ruling that the previous government chose to ignore, and the current government has been silent for so long that we are starting to lose patience,” Karlung adds.

“So now Bahnhof will resolve the situation in a responsible manner, namely by solving the whole problem. We will start to store data, but at exactly the same time we will make data storage meaningless.”

The VPN service will become active next Monday.

Source: TorrentFreak, for the latest info on copyright, file-sharing and anonymous VPN services.

Schneier on Security: Pew Research Survey on Privacy Perceptions

This post was syndicated from: Schneier on Security and was written by: schneier. Original post: at Schneier on Security

Pew Research has released a new survey on Americans’ perceptions of privacy. The results are pretty much in line with all the other surveys on privacy I’ve read. As Cory Doctorow likes to say, we’ve reached “peak indifference to surveillance.”

TorrentFreak: Internet Pirates Always a Step Ahead , Aussies Say

This post was syndicated from: TorrentFreak and was written by: Andy. Original post: at TorrentFreak

As the debate over Internet piracy sizzles Down Under, groups on all sides continue to put forward arguments on how to solve this polarizing issue.

The entertainment industries are clear. The current legal framework in Australia is inadequate in today’s market and tough new legislation is required to deter pirates and hold service providers more responsible for the actions of their users.

ISPs, on the other hand, are generally concerned at the prospect of greater copyright liability, with many viewing content availability at a fair price as the sustainable way to solve the piracy problem.

In order to better understand the opinions of the consumer, Aussie telecoms association the Communications Alliance has conducted a new study, the results of which were published this morning.

The survey, carried out among a sample of 1,500 Australians, reveals a public split roughly 50/50 on whether piracy is “a problem,” but one that also believes it will eventually end up footing the bill for solving it.

A recurring explanation for the prevalence of piracy in Australia is the availability of content at a fair price, and the results of the survey appear to back up that belief. A total of 60% of respondents said that improved entertainment product release strategies would lead to less piracy, while 66% noted that cheaper, fairer pricing could achieve the same.

Just 19% felt that Government regulation resulting in stiff penalties for file-sharers would do the trick, and when it comes to pushing anti-piracy responsibilities onto service providers, almost three-quarters felt the approach would be ineffective.

Unsurprisingly the issue of cost is important for consumers, with 69% holding the opinion that “identifying, monitoring and punishing” ‘pirate’ subscribers would eventually lead to more expensive Internet bills for everyone. When questioned, 60% of respondents felt that the bill for dealing with piracy should be paid by the rightsholders.

Privacy was also an issue for 65% of respondents who said that monitoring Internet users’ downloading habits would have “serious privacy implications.” However, the most popular reason for not shifting responsibility to ISPs is the fact that pirates are always a step ahead, with 72% believing that given rapidly changing technology, a way around any technical measures will always be found.

“This research comes as the Government considers responses to its discussion paper on online copyright policy options. It paints a picture not of a nation of rampant pirates, but rather a majority of people who agree that action taken should include steps to reduce the market distortions that contribute to piracy,” commented Communications Alliance CEO, John Stanton.

While the entertainment companies have their tough demands and the ISPs have their objections, it seems likely that a solution will be found in the middle ground. Better pricing and availability will have an effect on the market while educational campaigns will help to sway some of those sitting on the fence. A total of 59% of respondents favored the latter approach.

Whether ISPs will have to play a more active role remains to be seen, but given developments in the UK and United States, a notice-and-notice scheme to warn and educate consumers seems particularly likely.

Source: TorrentFreak, for the latest info on copyright, file-sharing and anonymous VPN services.

Schneier on Security: Narrowly Constructing National Surveillance Law

This post was syndicated from: Schneier on Security and was written by: schneier. Original post: at Schneier on Security

Orin Kerr has a new article that argues for narrowly constructing national security law:

This Essay argues that Congress should adopt a rule of narrow construction of the national security surveillance statutes. Under this interpretive rule, which the Essay calls a “rule of lenity,” ambiguity in the powers granted to the executive branch in the sections of the United States Code on national security surveillance should trigger a narrow judicial interpretation in favor of the individual and against the State. A rule of lenity would push Congress to be the primary decision maker to balance privacy and security when technology changes, limiting the rulemaking power of the secret Foreign Intelligence Surveillance Court. A rule of lenity would help restore the power over national security surveillance law to where it belongs: The People.

This is certainly not a panacea. As Jack Goldsmith rightly points out, more Congressional oversight over NSA surveillance during the last decade would have gained us more NSA surveillance. But it’s certainly better than having secret courts make the rules after only hearing one side of the argument.

GnuPG 2.1.0 “modern” released

This post was syndicated from: and was written by: corbet. Original post: at

Version 2.1.0 of the GNU Privacy Guard has been released; this is the first
release in the new “modern” branch. Changes include elliptic curve
cryptography support, better keyserver pool handling, the creation of
revocation certificates by default, the removal of support for PGP2 keys,
and more.

Darknet - The Darkside: Facebook Allows Tor Access To Site

This post was syndicated from: Darknet - The Darkside and was written by: Darknet. Original post: at Darknet - The Darkside

Facebook started out blocking users of the Tor network in 2013, but has recently had a change of heart, and now Facebook allows Tor access to the site, even providing a special .onion address for users of the network to connect directly to Facebook infrastructure. It’s an interesting decision as many of the Facebook ‘security […]


Read the full post at

Linux How-Tos and Linux Tutorials: How to Find the Best Linux Distribution for a Specific Task

This post was syndicated from: Linux How-Tos and Linux Tutorials and was written by: Jack Wallen. Original post: at Linux How-Tos and Linux Tutorials

Kali Linux

If you’re looking for a Linux distribution to handle a specific (even niche) task, there most certainly is a distribution ready to serve. From routers to desktops, from servers to multi-media…there’s a Linux for everything.

With such a wealth of Linux distributions available, where do you start looking when you have a specific task in mind? You start here, with this listing of some task-specific Linux distributions. The intent here isn’t to create an exhaustive list, but to get users pointed in the right direction. For an exhaustive listing of Linux distributions, check out Distrowatch.


Everyday use

The task of everyday usage could easily fall to one of many Linux distributions. In fact, almost every Linux distribution can handle everyday, desktop use. From internet browsing/work to desktop publishing and social networks…everything you need for getting things done. The choice made will often depend on what type of interface you want (since nearly every distribution can run the apps you need). Are you looking for a more modern, touch-friendly interface? If so, go with Ubuntu and its Unity interface or Fedora and GNOME.

Since the list of desktop distributions is so extensive, here is a list of some of the top distributions and why they should be considered:

  • Ubuntu: Hardware support, touch-friendly interface

  • Mint: One of the most user-friendly distributions available

  • Deepin: Outstanding interface and user-friendly

  • Bodhi: Unique interface, lightweight distribution (also works well on Chromebooks)

  • Arch Linux: A full-featured desktop distribution that focuses on simplicity. 

Audio/Video engineering

When people think of audio/video, they tend to immediately default to Mac. Linux also excels in that playground. With full-blown distributions dedicated specifically to audio/video engineering, you won’t miss a beat or a scene. If you work with multi-media and Linux, you already know there are plenty of tools available (Lightworks, Audacity, Ardour, etc). What you might not know is that there are distributions available that come with everything you need to rock, preinstalled.

So if you’re looking to get your audio or video ready for performance or distribution, take a look at any of these flavors of Linux:

Ubuntu Studio: This is the most widely used multimedia-oriented Linux-based operating system. What is very nice about Ubuntu Studio is that it is optimized, from the kernel up, to be perfectly suited for the high demands made by audio/video editing/creation. The distribution is based on Ubuntu and the desktop is XFCE, so you can be sure it won’t take much from memory or CPU…so it’s all there for your tasks.

Dream Studio: Takes a very similar approach to Ubuntu Studio — with many of the same tools. The primary difference is that Dream Studio uses the Unity interface, for a more modern (and touch-friendly) look.

dream studio

Penetration testing

Although just about any Linux distribution can be used (or tweaked to be used) for this purpose, there are distributions specifically designed to test the security of your network through penetration testing. One of the best distributions you’ll find for this purpose is Kali Linux. This particular take on the Linux distribution incorporates more than 300 penetration testing and security tools to create one of the finest security-minded distributions available. With Kali you can simulate attacks on your network to see exactly what you need to protect your company’s precious data. You’ll find apps like Metasploit (for network penetration testing), Nmap (for port and vulnerability scanning), Wireshark (for network monitoring), and Aircrack-Ng (for testing wireless security).
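At their core, port scanners like Nmap start from a simple primitive: attempt a TCP connection and see whether the handshake completes. A minimal sketch in Python’s standard library (the function name is mine; use it only against hosts you are authorized to test):

```python
import socket

def tcp_port_open(host: str, port: int, timeout: float = 0.5) -> bool:
    """TCP connect check for a single port: a completed three-way
    handshake means something is listening there."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# e.g. survey a few well-known ports on a machine you own:
#   {p: tcp_port_open("127.0.0.1", p) for p in (22, 80, 443)}
```

Real scanners add rate control, SYN-only probes, and service fingerprinting on top of this, which is why a purpose-built distribution like Kali ships them preinstalled rather than leaving you to reinvent them.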


Development

Most Linux distributions are well-built for development, and you’ll find the necessary tools available on all of them. There is, however, one consideration you’ll want to take into account. With versioned distributions (such as Ubuntu), updates to developer-crucial packages (such as PHP) often lag well behind rolling release distributions, so for bleeding-edge tooling a rolling release distribution is worth considering.

Enterprise Servers

If you’re looking to serve up large, high-demand websites, or power the backend of your business, there are Linux distributions ready to serve. You can go the fully supported, somewhat proprietary route, like Red Hat Enterprise Linux, or the fully free route with CentOS. What’s important with RHEL is that, when you make your purchase, you can also count on enterprise-grade support. For some companies, that level of support is mission-critical.

Of course, Red Hat isn’t the only game when it comes to fully supported enterprise-grade Linux. There’s also SUSE Linux Enterprise — for both servers and desktops. But that’s not all. You’ll find plenty of enterprise-ready servers in these distributions:

  • CentOS: The free, open source rebuild of Red Hat Enterprise Linux

  • Zentyal: A drop-in replacement for Windows Small Business Server. 

System Troubleshooting

If you’re looking to troubleshoot a PC system, a Windows installation, a hard drive, or even retrieve data from a problematic Windows PC, Linux is what you turn to. There are plenty of Linux distributions geared toward troubleshooting a system. Some of the best include:

  • Knoppix: A bootable Live CD (or USB) distribution that offers plenty of diagnostic tools.

  • Ultimate Boot CD: This is the tool you want when you need to do serious hardware diagnosis (from memory, to CPU, to hard drive, peripherals, and more). With UBCD you can also do data recovery and partitioning.

  • SystemRescueCD: This distribution offers plenty of tools focused on system and data rescue.


Education

Linux also excels in the world of education. With tools like Moodle, iTALC, Claroline, and more, Linux has a firm grasp on the needs of education. And like every other niche, there are distributions geared specifically for the world of education. Two of the more popular distributions are:

  • Edubuntu: This is a partner project for Ubuntu Linux. The aim of Edubuntu is to help the educator with limited computer knowledge make use of Linux’ power, stability, and flexibility within the classroom or the home.

  • UberStudent: Aimed at secondary and higher education, UberStudent is a complete, out-of-the-box learning platform. UberStudent was developed by a professional educator who specializes in academic success strategies, post-secondary literacy instruction, and educational technology.


Home routers

If you’d like to replace the firmware on your current router with a more robust and secure solution, look no further than Linux. By flashing your router with a Linux distribution, you’ll find you enjoy more features and more control over your network experience. Of course, not all routers are flashable with Linux — so you’ll need to do a bit of research on your hardware. If your router is supported, look to these two major projects as your first steps toward more freedom with your network routing.

  • DD-WRT: This firmware offers tons of features and a very easy interface to help you control those features. You’ll also find plenty of documentation for DD-WRT.

  • OpenWrt: This is a Linux distribution for embedded devices, including routers. As with any router firmware, you’ll control NAT, DHCP, DNS, and more.


Firewall

If you don’t have the budget for dedicated firewall appliances (such as Cisco’s), then a Linux firewall might just be the perfect solution. With the incredibly powerful iptables system, Linux makes for outstanding security. And there are plenty of routes to success with a Linux firewall. If you want a near out-of-the-box solution, take a look at IPCop. This particular firewall solution is geared toward home and SOHO usage, but offers a user-friendly, web-based interface that doesn’t require a system administrator’s level of understanding to use.

Of course, if you want absolute control of your firewall, you can also make use of a distribution like CentOS and learn the ins and outs of iptables.

Anonymous use

Finally, if you’re looking for a Linux distribution that protects your anonymity, you want Tails.

Tails is a live Linux distribution that aims to leave no trace on the host machine and to protect your privacy and anonymity. This particular Linux distribution takes great care to use cryptography to encrypt all data leaving the system. Tails is built on Debian and contains only free software.


There you have it. A sort of guide to help you navigate the waters of use-specific Linux distributions. And as I’ve mentioned before, fundamentally Linux can be made to do whatever you want. Don’t assume you must use a niche- or task-specific distribution to get something done. With just a little know-how, you can make any distribution into exactly what you need.


The Hacker Factor Blog: We Know You’re A Dog

This post was syndicated from: The Hacker Factor Blog and was written by: The Hacker Factor Blog. Original post: at The Hacker Factor Blog

Usually when I read about “new” findings in computer security, they are things that I’ve known about for years. Car hacking, parasitic file attachments, and even changes in phishing and spamming. If you’re active in the computer security community, then most of the public announcements are probably not new to you. But Wired just reported on something that I had only learned about a few months ago.

I had previously mentioned that I was looking for alternate ways to ban users who violate the FotoForensics terms of service. Specifically, I’m looking at HTTP headers for clues to identify if the web client is using a proxy.

One of the things I discovered a few months ago was the “X-UIDH” header that some web clients send. As Wired and Web Policy mentioned, Verizon is adding this header to HTTP requests that go over their network and it can be used to track users.


As is typical for Wired, they didn’t get all of the details correct.

  • Wired says that the strings are “about 50 letters, numbers, and characters”. I’ve only seen 56- and 60-character sequences. The data appears to be base64-encoded binary. If you base64-decode the sequence, then you’ll see that it begins with a text number, like “379612345”, and it is null-terminated. I don’t know what this is, but it is unique per account. It could be the user’s account number. After that comes a bunch of binary data that I have not yet decoded.

  • Wired says that the string follows the user. This is a half-truth. If you change network addresses, then only the first part of the base64 X-UIDH value stays the same. The rest changes. If services only store the X-UIDH string, then they will not be tracking you. But if they decode the string and use the decoded number, then services can track you regardless of your Verizon-assigned network address.
  • Wired makes it sound like Verizon adds the header to most Verizon clients. However, it isn’t added by every Verizon service. I’ve only seen this on some Verizon Wireless networks. Users with FiOS or other Verizon services are not exposed by this added header. And even people who use Verizon Wireless may not have it added, depending on their location. If your dynamically assigned hostname identifies a Verizon Wireless network, then you might be tagged. But if it doesn’t, then you’re not.
  • The X-UIDH header is only added when the web request uses HTTP. I have not seen it added to any HTTPS headers. However, most web services use HTTP. And even services like eBay and Paypal load some images with HTTP even when you use HTTPS to connect to the service. So this information will be leaked.
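The decoded structure described above (a null-terminated text number followed by undecoded binary data) can be sketched in a few lines of Python. The sample value here is fabricated for illustration; it is not a real Verizon header, and the real binary tail remains undecoded.

```python
import base64

# Fabricated sample in the shape described above: a null-terminated
# text number followed by opaque binary bytes (NOT a real X-UIDH value).
sample = base64.b64encode(b"379612345\x00" + b"\x8f\x02\xa1\x7c").decode()

def decode_uidh(value):
    """Split a base64 X-UIDH-style token into its leading null-terminated
    text number and the remaining (still undecoded) binary tail."""
    raw = base64.b64decode(value)
    number, _, tail = raw.partition(b"\x00")
    return number.decode("ascii"), tail

number, tail = decode_uidh(sample)
# A service that stores only the raw header loses the user when the tail
# changes; one that stores `number` can track across network changes.
```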

The Wired article focused on how this can be used by advertisers. However, it can also be used by banks as part of a two-factor authentication: something you know (your username and password) and something you have (your Verizon account).

Personally, I’ve been planning to use it for a much more explicit purpose. I’ve mentioned that I am legally required to report people who upload child porn to my server. And while I am usually pro-privacy, I don’t mind reporting these people because there is a nearly one-to-one relationship between people who have child porn and people who abuse children. So… wouldn’t it be wonderful if I could also provide their Verizon account number along with my required report? (Let’s make it extremely easy for the police to make an arrest.)

Unique, and yet…

One other thing that Wired and other outlets failed to mention is that Verizon isn’t the only service that does this kind of tracking. Verizon adds in an “X-UIDH” header. But they are not alone. Two other examples are Vodafone and AT&T. Vodafone inserts an X-VF-ACR header and AT&T Mobility LLC (network AS20057) adds in an “x-acr” header. These headers can be used for the same type of user-specific tracking and identification.

And it isn’t even limited to service providers. If your web antivirus software performs real-time network scanning, then there’s a good chance that it is adding in unique headers that can be used to track you. I’ve even identified a few headers that are inserted by specific nation-states. If I see the presence of certain HTTP headers, then I immediately know the country of origin. (I’m not making this info public yet because I don’t want Syria to change the headers. Oops…)

Business as usual

For over a decade, it has been widely known in the security field that users can be tracked based on their HTTP headers. In fact, the EFF has an online test that determines how unique your HTTP header is. (The EFF also links to a paper on this topic.) According to them, my combination of operating system, time zone, web browser, and browser settings makes my system “unique among the 4,645,400 tested so far.” Adding in yet-another header doesn’t make me more unique.
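The kind of passive fingerprinting the EFF test measures can be sketched as hashing the stable headers a browser sends on every request. The header values below are illustrative, not a real browser profile.

```python
import hashlib

def header_fingerprint(headers):
    """Reduce a browser's identifying headers to one comparable value,
    roughly what a tracking site could compute from each request."""
    canonical = "\n".join(
        f"{name.lower()}: {value}" for name, value in sorted(headers.items())
    )
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()

# Two visitors differing only in Accept-Language get different fingerprints.
fp_a = header_fingerprint({"User-Agent": "Mozilla/5.0 (X11; Linux x86_64)",
                           "Accept-Language": "en-US,en;q=0.5"})
fp_b = header_fingerprint({"User-Agent": "Mozilla/5.0 (X11; Linux x86_64)",
                           "Accept-Language": "de-DE,de;q=0.5"})
```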

When I drive my car, I am in public. People can see my car and they can see me. While I believe that the entire world isn’t watching me, I am still in public. My car’s make and model is certainly not unique, but the various scratches and dents are. When I drive to my favorite restaurant, they know it is me before I get out of the car. By the same means, my HTTP header is distinct. For some uses, it is even unique. When I visit my favorite web sites, they can identify me by my browser’s HTTP header.

Continuing with this analogy, my car has a license plate. Anyone around me can see it and it is unique. With the right software, someone can even identify “me” from my license plate. Repainting my car doesn’t change the license plate. These unique tracking IDs that are added by various ISPs are no different from a license plate. The entire world may not be able to see it, but anywhere you go, it goes with you and it is not private.

The entire argument that these IDs violate online privacy is flawed. You never had privacy to begin with. Moreover, these unique tags do not make you any more exposed or any more difficult to track. And just as you can take specific steps to reduce your traceability in public, you still have options to reduce your traceability online.

The Hacker Factor Blog: Parasites

This post was syndicated from: The Hacker Factor Blog and was written by: The Hacker Factor Blog. Original post: at The Hacker Factor Blog

Every now and then, old security concepts resurface as if they were something new. Recently, I’ve been seeing a lot more activity related to parasitic attachments in pictures.

A parasitic attachment, or parasite, is an unrelated file that is simply attached to another file. With pictures, it is an unrelated chunk of data attached to the image file. When rendering a picture, the parasite is ignored. And when transferring the picture, the parasite follows along for the ride.

Attaching Parasites

To understand how this works, let’s focus on JPEG. Every JPEG has a header, information related to decompression settings, and the compressed binary image stream. The stream has a well-defined start and a well-defined end. When rendering pictures, your graphics program stops at the end of stream marker. It doesn’t look beyond that point, so anything attached after the JPEG becomes ignored information.

There’s actually a lot of information that may be intentionally stuffed after the image. Some vendors store thumbnail images after the main image. Back in 2010, I pointed out that some Android devices store operating system information after the picture.

Parasites are not limited to JPEG formats. Virtually every image format out there has a well-defined “end”, and rendering programs stop when they hit the defined end. PNG, BMP, and even GIF can all have parasites without impacting how the picture is rendered. There’s even a nice tutorial from 2010 on how to attach a parasite, and a similar tutorial from 2006. (And I remember doing this type of thing back in 1992, and it definitely wasn’t “new” back then.) Creating a parasitic attachment is literally as easy as appending data to an existing JPEG.
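A trailing parasite really is just concatenation. This minimal sketch appends a payload after the JPEG end-of-image marker (FF D9) and recovers it later; the "image" bytes here are a stand-in, not a decodable picture.

```python
# Sketch of a trailing parasite: bytes appended after the JPEG
# end-of-image marker (FF D9) are ignored by image renderers.
def attach_parasite(jpeg_bytes, payload):
    assert jpeg_bytes.startswith(b"\xff\xd8"), "not a JPEG (no SOI marker)"
    assert jpeg_bytes.endswith(b"\xff\xd9"), "JPEG missing EOI marker"
    return jpeg_bytes + payload

def extract_parasite(data):
    # Anything after the last end-of-image marker is the parasite.
    # (Caveat: a payload that itself contains FF D9 defeats this rfind.)
    return data[data.rfind(b"\xff\xd9") + 2:]

fake_jpeg = b"\xff\xd8" + b"\x00" * 16 + b"\xff\xd9"  # stand-in image data
combined = attach_parasite(fake_jpeg, b"PK\x03\x04parasite payload")
```

A renderer reading `combined` stops at the EOI marker, so the picture displays unchanged while the payload rides along.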

Parasites are not limited to the end of the file. They may be stuffed in comment fields, proprietary data blocks, and other unused areas in the picture file format. Both JPEG and PNG support custom data blocks. If the rendering software doesn’t support the custom data block, then the block is ignored. For parasites, you just define your own custom data block and expect it to be ignored.
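Detecting this kind of in-format parasite only requires walking the JPEG's marker segments and looking for payload signatures (here, the zip magic bytes) inside blocks that should hold metadata. This is a simplified sketch: it ignores padding, restart markers, and payloads split across multiple segments.

```python
import struct

def jpeg_metadata_segments(data):
    """Yield (marker, payload) for each marker segment before the
    start-of-scan marker (FF DA), where the compressed stream begins."""
    assert data.startswith(b"\xff\xd8"), "not a JPEG"
    i = 2
    while i + 1 < len(data) and data[i] == 0xFF:
        marker = data[i + 1]
        if marker == 0xDA:  # start of scan: no more metadata segments
            return
        # Segment length is big-endian and includes its own two bytes.
        length = struct.unpack(">H", data[i + 2:i + 4])[0]
        yield marker, data[i + 4:i + 2 + length]
        i += 2 + length

def find_zip_parasites(data):
    return [marker for marker, payload in jpeg_metadata_segments(data)
            if payload.startswith(b"PK\x03\x04")]

# Toy JPEG: SOI, an APP1 segment hiding a zip signature, then start of scan.
hidden = b"PK\x03\x04fake zip bytes"
app1 = b"\xff\xe1" + struct.pack(">H", len(hidden) + 2) + hidden
toy = b"\xff\xd8" + app1 + b"\xff\xda"
```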

Finally, there is the payload carried by the parasite. At FotoForensics, about 0.05% (yes, less than a tenth of a percent) of all files contain some kind of parasitic attachment. Zip files, RAR files, 7zip archives, and text are all common. But I’ve also seen PDFs, PKCS7 certificates, encrypted data, Word documents, unrelated pictures, and much more. In September 2014, FotoForensics received 34,206 unique file uploads. Of those, 17 files had parasites that my software readily identifies. Most of the parasites were zip files, but there were also a few RAR files and other types of data.

Hamster Dance

As an example, the following picture was uploaded to FotoForensics on 1-Sept-2014.

This file looks like a picture of some hamsters. But inside the JPEG file is a parasitic zip file stuffed in an APP1 data field. This non-standard APP1 data block is ignored when the image is rendered. Even programs like ExifTool and exiv2 ignore the unknown binary block. However, the APP1 data definitely contains a zip file and most zip programs will happily unzip it without even extracting it from the JPEG. Inside the zip file is another picture that gives clues to some GPS coordinates.

This hamster picture actually came from a geo-caching forum. In fact, most of the files with parasites at FotoForensics come from geo-caching forums.

“Why geo-caching?” They love puzzles. It used to be fun to give someone GPS coordinates and let them see if they could find some prize at the physical location. When that was too simple, they began to use remote coordinates — get ready for a three-hour hike or a mountain climb. When remote locations became too easy, they began to hide the objects — you might need to bring a shovel or a flashlight to find the prize. Then they began to turn the coordinates into puzzles: if you can solve the puzzle, then you will find the coordinates. Today? Hard-core steganography. First you have to find the puzzle. Then you have to solve it. Then you have to go to the coordinates (where there may be more puzzles) until you find the final prize. Seriously — if you want to see steg in real life, watch the geo-caching community.

As an aside, one of my friends keeps saying that we should start up a get-rich-quick business. Since FotoForensics receives lots of these geo-caching puzzles, we should solve them first and park a food truck at the prize location. You just know the players will be hungry when they get there.

Chimeric Parasites

Last month I read about a proof-of-concept tool that will turn a JPEG into a PDF or PNG file after applying AES or 3DES cryptography. Corkami works by using parasitic attachments. Specifically, they encrypt a PNG file and PDF, one with AES and the other with 3DES.

With many cryptographic algorithms, decrypting an already decrypted file is just another way to encrypt data. The results are binary data that can only be restored by encrypting the file.
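The inversion trick can be illustrated without real AES: for any invertible cipher, "decrypting" a plaintext yields bytes that encrypt back to the original. The byte-shift cipher below is a deliberately trivial stand-in for the AES/3DES used by the real tool, just to show the round trip.

```python
KEY = 0x5A  # toy key; the real proof-of-concept uses AES and 3DES

def toy_encrypt(data):
    return bytes((b + KEY) & 0xFF for b in data)

def toy_decrypt(data):
    return bytes((b - KEY) & 0xFF for b in data)

payload = b"%PDF-1.4 pretend document"
stored = toy_decrypt(payload)    # "decrypt" first; this is what gets hidden
restored = toy_encrypt(stored)   # encrypting the stored bytes restores it
```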

After encrypting (technically, decrypting) the PNG and PDF, they store them in the JPEG. The example encodes the encrypted PNG at the beginning of the JPEG (in a comment) and the PDF as a huge binary parasite at the end of the JPEG.

The hard part in all of this is choosing the right keys for the cryptography. The AES key is chosen so that encrypting the JPEG header (8 bytes) produces a proper PNG header. Applying AES encryption to the whole JPEG thus yields a PNG header, some binary junk, and then the restored PNG data. The result is a valid PNG with binary crud that is ignored by any graphics software.

Similarly, the 3DES key is chosen to generate the PDF header (8 bytes), and the encoded 3DES PDF is placed at the end of the JPEG. This way, the 3DES encoding reconstructs a PDF. And since PDF parsing starts at the end of the file, the binary garbage at the beginning of the file (created from the JPEG) is ignored and the entire thing renders as a valid PDF.

Infectious Behavior

Discussions about parasitic attachments seem to come up annually. Last year, some researchers discovered that they could hide PHP or Perl or other types of code in text comment fields. If your web site processes back-end server scripts, displays JPEG comments, and isn’t careful about sanitizing output when displaying image comments, then this could run code on the server. (FotoForensics has captured plenty of examples of these hostile comment fields, and I’ve been seeing this sort of thing for years; the announcement last year may have been news to them, but it wasn’t new.)

Keep in mind, hiding malware in a parasitic attachment is not the same as renaming an EXE to “JPEG” and emailing it as an attachment. (“Just double click on the picture!”) A properly created parasite will not interfere with the host image. Just renaming an executable to “.jpg” does not make it a parasite.

Harmless Parasites

There’s a difference between steganography and cryptography. Cryptography refers to making data inaccessible. You can see the data, but you cannot understand it. Steganography refers to making data hard to find. But if you find it, you may be able to immediately understand it.

Parasitic attachments are one form of steganography. However, as hiding places go, they are relatively easy to detect. Anyone parsing the file format will see a large, non-standard binary blob buried in the file. While your friends may not readily notice these large binary chunks stuffed in your pictures, forensic investigators are likely to find the hidden data very quickly. If you’re doing something malicious and investigators see these parasitic attachments, then they may be interpreted as “intent” to hide activities. (I’m not an attorney; if you find yourself in this situation, then you should get an attorney.)

Parasites are also trivial to remove. I frequently mention “resaved” images. That’s where a picture is decoded and then re-encoded as it is saved to a new file. Facebook resaves pictures. Twitter resaves pictures. And nearly every online picture sharing service that scales pictures also performs a resave. The simple action of resaving an image is enough to remove parasites. (I am pretty certain that Facebook and Twitter resave pictures as an explicit method for removing metadata, including any parasites.)

As far as the threat level goes, these parasitic attachments are explicitly hiding. They won’t activate on a double-click and, with few exceptions, remain passive and unnoticed. In order to use the data, you must know it is there and know how to extract the content.

Even though the technique has been around for decades, I still think finding parasites within pictures is a treat. You never know what you’re going to find. (I have no idea what “APdb6” means, but GrrCon sounds like a fun conference.)

Linux How-Tos and Linux Tutorials: How to Get Open Source Android

This post was syndicated from: Linux How-Tos and Linux Tutorials and was written by: Carla Schroder. Original post: at Linux How-Tos and Linux Tutorials

Android is an astonishing commercial success, and is often touted as a Linux success. In some ways it is; Google was able to leverage Linux and free/open source software to get Android to market in record time, and to offer a feature set that quickly outstripped the old champion, iOS.

But it’s not Linux as we know it. Most Android devices are locked down: we can’t freely download and install whatever operating system we want, as we can with our Linux PCs, or install whatever apps we want without jailbreaking our own devices. We can’t set up a business to sell Google Android devices without jumping through a lot of expensive hoops (see The Hidden Costs of Building an Android Device and Secret Ties in Google’s “Open” Android). We can’t even respin Google Android however we want and redistribute it, because Google requires bundling a set of Google apps.

So where do you go to find real open source Android? Does such a thing even exist? Why yes it does.

F-Droid: FOSS Repository

There are quite a few Android repositories other than the Google Play Store, such as the Amazon Appstore for Android, Samsung Galaxy Apps, and the Opera Mobile Store. But there is only one, as far as I know, that stocks only free/open source apps, and that is F-Droid (figure 1).

F-Droid is a pure volunteer effort. It was founded in 2010 by Ciaran Gultnieks, and is now operated by F-Droid Limited, a non-profit organisation registered in England. F-Droid relies on donations and community support. The good F-Droid people perform security and privacy checks on submitted apps, though they wisely warn that there are no guarantees. F-Droid promises to respect your privacy and to not track you, your devices, or what you install. You don’t need to register for an account to use the F-Droid client, which sends no identifying information to their servers other than its version number.

To get F-Droid, all you do is download and install the F-Droid client (the download button is on the front page of the site). Easy peasy. You can browse and search for apps on the website and in the client.

Other FOSS Android Directories

DroidBreak is a nice resource for finding FOSS Android apps. DroidBreak is not a software repository, but a well-organized place to find apps. Other FOSS Android directories give more information on most of the apps, and have some good Android book links.

PRISM Break lists alternatives to popular closed-source proprietary apps, and is privacy- and security-oriented.

Now let’s look at how to get a FOSS Android operating system.


CyanogenMod

CyanogenMod is one of the best and most popular FOSS Android variants. This is a complete replacement for Google’s Android, just as you can replace Debian with Ubuntu or Linux Mint. (Or Mint with Debian. Or whatever.) It is based on the Android Open Source Project.

All CyanogenMod source code is freely available on their GitHub repository. CyanogenMod supports bales of features including CPU overclocking, controlling permissions on apps, soft buttons, full tethering with no backtalk, easier Wi-Fi, Bluetooth, and GPS management, and absolutely no spyware (which seems to be the #1 purpose of most of the apps in the Play Store). CyanogenMod is more like a real Linux: completely open and modifiable.

CyanogenMod has a bunch of nice user-friendly features: a blacklist for blocking annoying callers, a quick setting ribbon for starting your favorite apps with one swipe, user-themeable, a customizable status bar, profiles for multiple users or multiple workflows, a customizable lockscreen…in short, a completely user-customizable interface. You get a superuser and unprivileged users, all just like your favorite Linux desktop.

CyanogenMod has been ported to a lot of devices, so chances are your phone or tablet is already supported. Amazon Kindle Fire, ASUS, Google Nexus, HTC, LG, Motorola, Samsung, Sony, and lots more. A large and active community supports CyanogenMod, and the Wiki contains bales of good documentation, including help for wannabe developers.

So how do you install CyanogenMod? Isn’t that the scary part, where a mistake bricks your device? That is a real risk, so start with nothing-to-lose practice gadgets: look for some older used tablets and smartphones for cheap and practice on them. Don’t risk your shiny new stuff until you’ve gained experience. Anyway, installation is not all that scary, as the good CyanogenMod people have built a super-nice, reliable installer that does not require that you be a mighty guru. You don’t need to root your phone because the installer does that for you. After installation, the updater takes care of keeping your installation current.


Replicant

Replicant gets my vote for best name. Please treat yourself to a viewing of the movie “Blade Runner” if you don’t get the reference. Even with a Free Android operating system, phones and tablets still use a lot of proprietary blobs, and one of the goals of Replicant is to replace these with Free software. Replicant was originally based on the Android Open Source Project, and then migrated to CyanogenMod to take advantage of its extensive device support. Replicant is a little more work to install, so you’ll acquire a deeper knowledge of how to get software onto devices that don’t want you to. Replicant is sponsored by the Free Software Foundation.

The Google Play Store has over a million apps. This sounds impressive, but many of them are junk, most of them are devoted to data-mining you for all you’re worth, and how many Mine Sweeper and Mahjongg ripoffs do you need? Android is destined to be a streamlined general-purpose operating system for a multitude of portable low-power devices (coming to a refrigerator near you! Why? Because!), and this is a great time to get acquainted with it on a deeper level.

Ten years of Ubuntu (ars technica)

This post was syndicated from: and was written by: corbet. Original post: at

Here’s a lengthy ars technica retrospective on Ubuntu’s first ten years. “As you’ll soon see in this look at the desktop distro through the years, Linux observers sensed there was something special about Ubuntu nearly from the start. However, while a Linux OS that genuinely had users in mind was quickly embraced, Ubuntu’s ten-year journey since is a microcosm of the major Linux events of the last decade—encompassing everything from privacy concerns and Windows resentment to server expansion and hopes of convergence.”

TorrentFreak: Australians Face ‘Fines’ For Downloading Pirate Movies

This post was syndicated from: TorrentFreak and was written by: Andy. Original post: at TorrentFreak

Much to the disappointment of its owner, Voltage Pictures, in early January 2013 a restricted ‘DVD Screener’ copy of the hit movie Dallas Buyers Club leaked online. The movie was quickly downloaded by tens of thousands, but barely a month later Voltage was plotting revenge.

In a lawsuit filed in the Southern District of Texas, Voltage sought to identify illegal downloaders of the movie by providing the IP addresses of Internet subscribers to the court. Their aim – to scare those individuals into making cash settlements to make supposed lawsuits disappear.

Now, in the most significant development of the ‘trolling’ model in recent times, Dallas Buyers Club LLC are trying to expand their project into Australia. Interestingly the studio has chosen to take on subscribers of the one ISP that was absolutely guaranteed to put up a fight.

iiNet is Australia’s second largest ISP and the country’s leading expert when it comes to fighting off aggressive rightsholders. In 2012 the ISP defeated Hollywood in one of the longest piracy battles ever seen and the company says it will defend its subscribers in this case too.

Chief Regulatory Officer Steve Dalby says that Dallas Buyers Club LLC (DBCLLC) recently applied to the Federal Court to have iiNet and other local ISPs reveal the identities of people they say have downloaded and/or shared their movie without permission.

According to court documents seen by TorrentFreak the other ISPs involved are Wideband Networks Pty Ltd, Internode Pty Ltd, Dodo Services Pty Ltd, Amnet Broadband Pty Ltd and Adam Internet Pty Ltd.

Although the stance of the other ISPs hasn’t yet been made public, DBCLLC aren’t going to get an easy ride. iiNet (which also owns Internode and Adam) says it will oppose the application for discovery.

“iiNet would never disclose customer details to a third party, such as a movie studio, unless ordered to do so by a court. We take seriously both our customers’ privacy and our legal obligations,” Dalby says.

While underlining that the company does not condone copyright infringement, news of Dallas Buyers Club / Voltage Pictures’ modus operandi has evidently reached iiNet, and the ISP is ready for them.

“It might seem reasonable for a movie studio to ask us for the identity of those they suspect are infringing their copyright. Yet, this would only make sense if the movie studio intended to use this information fairly, including to allow the alleged infringer their day in court, in order to argue their case,” Dalby says.

“In this case, we have serious concerns about Dallas Buyers Club’s intentions. We are concerned that our customers will be unfairly targeted to settle any claims out of court using a practice called ‘speculative invoicing’.”

The term ‘speculative invoicing’ was coined in the UK in response to the activities of companies including the now defunct ACS:Law, which involved extracting cash settlements from alleged infringers (via mailed ‘invoices’) and deterring them from having their say in court. Once the scheme was opened up to legal scrutiny it completely fell apart.

Some of the flaws found to exist in both UK and US ‘troll’ cases are cited by iiNet, including intimidation of subscribers via excessive claims for damages. The ISP also details the limitations of IP address-based evidence when it comes to identifying infringers due to shared household connections and open wifi scenarios.

“Because Australian courts have not tested these cases, any threat by rights holders, premised on the outcome of a successful copyright infringement action, would be speculative,” Dalby adds.

The Chief Regulatory Officer says that since iiNet has opposed the action for discovery the Federal Court will now be asked to decide whether iiNet should hand over subscriber identities to DBCLLC. A hearing on that matter is expected early next year and it will be an important event.

While a win for iiNet would mean a setback for rightsholders plotting similar action, victory for DBCLLC will almost certainly lead to others following in their footsteps. For an idea of what Australians could face in this latter scenario, in the United States the company demands payment of up to US$7,000 (AUS$8,000) per infringement.

Source: TorrentFreak, for the latest info on copyright, file-sharing and anonymous VPN services.

Darknet - The Darkside: Apple’s OS X Yosemite Spotlight Privacy Issues

This post was syndicated from: Darknet - The Darkside and was written by: Darknet. Original post: at Darknet - The Darkside

So Apple pushed out its latest and greatest OS X version 10.10, called Yosemite, but it’s facing a bit of an uproar at the moment about some Spotlight privacy issues. For those who are not familiar, Spotlight is some kind of super desktop search that searches everything on your computer (and now also the Internet) […]


The Hacker Factor Blog: By Proxy

This post was syndicated from: The Hacker Factor Blog and was written by: The Hacker Factor Blog. Original post: at The Hacker Factor Blog

As I tweak and tune the firewall and IDS system at FotoForensics, I keep coming across unexpected challenges and findings. One of the challenges is related to proxies. If a user uploads prohibited content from a proxy, then my current system bans the entire proxy. An ideal solution would only ban the user.

Proxies serve a lot of different purposes. Most people think about proxies in regards to anonymity, like the TOR network. TOR is a series of proxies that ensure that the endpoint cannot identify the starting point.

However, there are other uses for proxies. Corporations frequently have a set of proxies for handling network traffic. This allows them to scan all network traffic for potential malware. It’s a great solution for mitigating the risk from one user getting a virus and passing it to everyone in the network.

Some governments run proxies as a means to filter content. China and Syria come to mind. China has a custom solution that has been dubbed the “Great Firewall of China“. They use it to restrict site access and filter content. Syria, on the other hand, appears to use a COTS (commercial off-the-shelf) solution. In my web logs, most traffic from Syria comes through Blue Coat ProxySG systems.

And then there are the proxies that are used to bypass usage limits. For example, your hotel may charge for Internet access. If there’s a tech convention in the hotel, then it’s common to see one person pay for the access, and then run his own SOCKS proxy for everyone else to relay out over the network. This gives everyone access without needing everyone to pay for the access.

Proxy Services

Proxy networks that are designed for anonymity typically don’t leak anything. If I ban a TOR node, then that node stays banned since I cannot identify individual users. However, the proxies that are designed for access typically do reveal something about the user. In fact, many proxies explicitly identify whose request is being relayed. This added information is stuffed in HTTP header fields that most web sites ignore.

For example, I recently received an HTTP request that contained an “X-Forwarded-For” HTTP header. If I were to ban the user based on the connecting address, then I would ban the proxy, since that is the system that connected to my server. However, the connecting address was part of a proxy network, and the proxy identified who was being relayed via the X-Forwarded-For header. In this case, the relayed address belonged to someone in Yemen. If I see this reference, then I can start banning the user in Yemen rather than the Google proxy that is used by lots of people. (NOTE: I changed the Yemen IP address for privacy, and this user didn’t upload anything requiring a ban; this is just an example.)

Unfortunately, there is no real standard here. Different proxies use different methods to denote the user being relayed. I’ve seen headers like “X-Forwarded”, “X-Forwarded-For”, “HTTP_X_FORWARDED_FOR” (yes, they actually sent this in their header; this is NOT from the Apache variable), “Forwarded”, “Forwarded-For-IP”, “Via”, and more. Unless I know to look for it, I’m liable to ban a proxy rather than a user.
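Since the header names vary by vendor, a server has to check a list of known spellings to find the relayed client. A minimal sketch, using the header variants listed above (the header set and precedence order are my own choices, not a standard):

```python
# Known forwarding-header spellings, checked in rough order of specificity.
FORWARD_HEADERS = [
    "x-forwarded-for", "x-forwarded", "http_x_forwarded_for",
    "forwarded-for-ip", "forwarded", "via",
]

def relayed_client(headers):
    """headers: dict of HTTP header name -> value. Return the first
    claimed relayed address, or None if no forwarding hint is present."""
    lowered = {name.lower(): value for name, value in headers.items()}
    for name in FORWARD_HEADERS:
        if name in lowered:
            # X-Forwarded-For may carry a comma-separated chain;
            # the first entry is the original client.
            return lowered[name].split(",")[0].strip()
    return None
```

With this, a ban can target `relayed_client(headers)` when it is present, and fall back to the connecting address otherwise.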

In some cases, I see the direct connection address also listed as the relayed address; the proxy claims to be relaying itself. I suspect that this is caused by some kind of anti-virus system that filters network traffic through a local proxy. And sometimes I see private addresses (“private” as in “private use” and “should not be routed over the Internet”; not “don’t tell anyone”). These are likely home users or small companies that run a proxy for all of the computers on their local networks.

Proxy Detection

If I cannot identify the user being proxied, then just identifying that the system is a proxy can be useful. Rather than banning known proxies for three months, I might ban the proxy for only a day or a week. The reduced time should cut down on the number of people blocked because of the proxy that they used.

There are unique headers that can identify that a proxy is present. Blue Coat ProxySG, for example, adds a unique header: “X-BlueCoat-Via: abce6cd5a6733123”. This tracking ID is unique to the Blue Coat system; every user relaying through that specific proxy gets the same unique ID. It is intended to prevent looping between Blue Coat devices. If the ProxySG system sees its own unique ID, then it has identified a loop.

Blue Coat is not the only vendor with its own proxy identifier. Fortinet’s software adds an “X-FCCKV2” header. And Verizon silently adds an “X-UIDH” header containing a large binary string that tracks users.
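The ban-duration idea from the previous section can be sketched in a few lines. The marker names come from the post; the specific durations (a week versus three months) are illustrative choices, not a recommendation.

```python
# Sketch: if a request carries a proxy fingerprint, ban the address for a
# shorter window, since many innocent users may share it.
PROXY_MARKERS = ("X-BlueCoat-Via", "X-FCCKV2", "X-UIDH", "Via", "X-Forwarded-For")

def ban_days(headers):
    """Choose a ban duration based on whether the address looks like a proxy."""
    if any(name in headers for name in PROXY_MARKERS):
        return 7    # a week: a shared proxy punishes bystanders
    return 90       # three months: the address likely identifies one host

print(ban_days({"X-BlueCoat-Via": "abce6cd5a6733123"}))  # → 7
print(ban_days({}))                                      # → 90
```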

Language and Location

Besides identifying proxies, I can also identify the user’s preferred language.

The intent with specifying languages in the HTTP header is to help web sites present content in the native language. If my site supports English, German, and French, then seeing a hint that says “French” should help me automatically render the page using French. However, this can be used along with IP address geolocation to identify potential proxies. If the IP address traces to Australia but the user appears to speak Italian, then it increases the likelihood that I’m seeing an Australian proxy that is relaying for a user in Italy.

The official way to identify the user’s language is to use an HTTP “Accept-Language” header. For example, “Accept-Language: en-US,en;q=0.5” says to use the United States dialect of English, or just English if there is no dialect support at the web site. However, there are unofficial approaches to specifying the desired language. For example, many web browsers encode the user’s preferred language into the HTTP user-agent string.

Similarly, Facebook can relay network requests, and these relayed requests include an “X-Facebook-Locale” header. This is an unofficial way to identify when Facebook is being used as a proxy. However, it also tells me the user’s preferred language: “X-Facebook-Locale: fr_CA”. In this case, the user prefers the Canadian dialect of French (fr_CA). While the user may be located anywhere in the world, he is probably in Canada.

There’s only one standard way to specify the recipient’s language. However, there are lots of common non-standard ways. Just knowing what to look for can be a problem. But the bigger problem happens when you see conflicting language definitions.

Accept-Language: de-de,de;q=0.5

User-Agent: Mozilla/5.0 (Linux; Android 4.4.2; it-it; SAMSUNG SM-G900F/G900FXXU1ANH4 Build/KOT49H) AppleWebKit/537.36 (KHTML, like Gecko) Version/1.6 Chrome/28.0.1500.94 Mobile Safari/537.36

X-Facebook-Locale: es_LA

x-avantgo-clientlanguage: en_GB

x-ucbrowser-ua: pf(Symbian);er(U);la(en-US);up(U2/1.0.0);re(U2/1.0.0);dv(NOKIAE90);pr

X-OperaMini-Phone-UA: Mozilla/5.0 (Linux; U; Android 4.4.2; id-id; SM-G900T Build/id=KOT49H.G900SKSU1ANCE) AppleWebKit/534.30 (KHTML, like Gecko) Version/4.0 Mobile Safari/534.30

If I see all of these in one request, then I’ll probably choose the official header first (German from German). However, without the official header, would I choose Spanish from Latin America (“es-LA” is unofficial but widely used), Italian from Italy (it-it) as specified by the web browser user-agent string, or the language from one of those other fields? (Fortunately, in the real world these would likely all be the same. And you’re unlikely to see most of these fields together. Still, I have seen some conflicting fields.)
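The language-versus-geolocation comparison described above can be sketched as follows. This is a simplified illustration: it only looks at the official Accept-Language header, it treats the second subtag as a region (which breaks on script subtags like “zh-Hant”), and the country code passed in stands in for whatever GeoIP lookup the site actually uses.

```python
# Sketch: flag a request as possibly relayed when the region in the
# Accept-Language header conflicts with the IP address's geolocated country.
def accept_language_region(value):
    """Extract the region code: 'US' from 'en-US,en;q=0.5', or None."""
    first = value.split(",")[0].strip()     # highest-preference entry
    lang = first.split(";")[0]              # drop any ;q= weight
    parts = lang.replace("_", "-").split("-")
    return parts[1].upper() if len(parts) > 1 else None

def looks_relayed(accept_language, geoip_country):
    """True if the stated region conflicts with the geolocated country."""
    region = accept_language_region(accept_language)
    return region is not None and region != geoip_country

# Australian IP address, Italian-region browser: possibly a relay.
print(looks_relayed("it-IT,it;q=0.8", "AU"))  # → True
print(looks_relayed("en-AU,en;q=0.5", "AU"))  # → False
```

Note that a mismatch is only a hint, not proof: expatriates and travelers produce the same signal without any proxy involved.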

Time to Program!

So far, I have identified nearly a dozen different HTTP headers that denote some kind of proxy. Some of them identify the user behind the proxy, but others leak clues or only indicate that a proxy was used. All of this can be useful for determining how to handle a ban after someone violates my site’s terms of service, even if I don’t know who is behind the proxy.

In the near future, I should be able to identify at least some of these proxies. If I can identify the people using proxies, then I can restrict access to the user rather than the entire proxy. And if I can at least identify the proxy, then I can still try to lessen the impact for other users.

Errata Security: FBI’s crypto doublethink

This post was syndicated from: Errata Security and was written by: Robert Graham. Original post: at Errata Security

Recently, FBI Director James Comey gave a speech at the Brookings Institute decrying crypto. It was transparently Orwellian, arguing for a police-state. In this post, I’ll demonstrate why, quoting bits of the speech.

“the FBI has a sworn duty to keep every American safe from crime and terrorism”
“The people of the FBI are sworn to protect both security and liberty”

This is not true. The FBI’s oath is to “defend the Constitution”. Nowhere in the oath does it say “protect security” or “keep people safe”.

This detail is important. Tyrants suppress civil liberties in the name of national security and public safety. This oath taken by FBI agents, military personnel, and the even the president, is designed to prevent such tyrannies.

Comey repeatedly claims that FBI agents both understand their duty and are committed to it. That Comey himself misunderstands his oath disproves both assertions. This reinforces our belief that FBI agents do not see their duty as protecting our rights, but instead see rights as an impediment in pursuit of some other duty.

Freedom is Danger

The book 1984 describes the concept of “doublethink“, with political slogans as examples: “War is Peace”, “Ignorance is Strength”, and “Freedom is Slavery”. Comey goes full doublethink:

Some have suggested there is a conflict between liberty and security. I disagree. At our best, we in law enforcement, national security, and public safety are looking for security that enhances liberty. When a city posts police officers at a dangerous playground, security has promoted liberty—the freedom to let a child play without fear.

He’s wrong. Liberty and security are at odds. That’s what the 4th Amendment says. We wouldn’t be having this debate if they weren’t at odds.

He follows up with more doublethink, claiming “we aren’t seeking a back-door”, but that the FBI is instead interested in “developing intercept solutions during the design phase”. Intercept solutions built into phones are the very definition of a backdoor, of course.

“terror terror terror terror terror”
“child child child child child child”

Comey mentions terrorism 5 times and child exploitation 6 times. This is transparently the tactic of the totalitarian, demagoguery based on emotion rather than reason.

Fear of terrorism after 9/11 led to the Patriot Act, granting law enforcement broad new powers in the name of fighting terrorism. Such powers have been used overwhelmingly for everything else. The most telling example is the detainment of David Miranda in the UK under a law that supposedly applied only to terrorists. Miranda was carrying an encrypted copy of the Snowden files, which clearly had nothing to do with terrorism. It was a plain exploitation of anti-terrorism laws for the purposes of political suppression.

Any meaningful debate doesn’t start with the headline-grabbing crimes, but with the ordinary ones, like art theft and money laundering. Comey has to justify his draconian privacy invasion using those crimes, not terrorism.

“rule of law, rule of law, rule of law, rule of law, rule of law”

Comey mentions rule-of-law five times in his speech. His intent is to demonstrate that even the FBI is subject to the law, namely review by an independent judiciary. But that isn’t true.

The independent judiciary has been significantly weakened in recent years. We have secret courts, NSLs, and judges authorizing extraordinary powers because they don’t understand technology. Companies like Apple and Google challenge half the court orders they receive, because judges just don’t understand. There is frequent “parallel construction”, where evidence from spy agencies is used against suspects, sidestepping judicial review.

What Comey really means is revealed by this statement: “I hope you know that I’m a huge believer in the rule of law. … There should be no law-free zone in this country”. This is a novel definition of “rule of law”, really a “rule by law enforcement”, that has never been used before. It reveals what Comey really wants: a totalitarian police-state where nothing is beyond the police’s powers, where the only check on power is a weak and pliant judiciary.

“that a commitment to the rule of law and civil liberties is at the core of the FBI”

No, lip service to these things is at the core of the FBI.

I know this from personal experience when FBI agents showed up at my offices and threatened me, trying to get me to cancel a talk at a cybersecurity conference. They repeated over and over how they couldn’t force me to cancel my talk because I had a First Amendment right to speak — while simultaneously telling me that if I didn’t cancel my talk, they would taint my file so that I would fail background checks and thus never be able to work for the government ever again.

We saw that again when the FBI intercepted clearly labeled “attorney-client privileged” mail between Weev and his lawyer. Their excuse was that the threat of cyberterrorism trumped Weev’s rights.

Then there was that scandal that saw widespread cheating on a civil-rights test. FBI agents were required to certify, unambiguously, that nobody helped them on the test. They lied. It’s one more oath FBI agents seem not to care about.

If commitment to civil liberties were important to him, Comey would get his oath right. If commitment to rule-of-law were important, he’d get the definition right. Every argument Comey makes demonstrates how little he is interested in civil liberties.

“Snowden Snowden Snowden”

Comey mentions Snowden three times, such as saying “In the wake of the Snowden disclosures, the prevailing view is that the government is sweeping up all of our communications”.

This is not true. No news article based on the Snowden documents claims this. No news site claims this. None of the post-Snowden activists believe this. All the people who matter know the difference between metadata and full eavesdropping, and likewise how difficult it is for the FBI to get at that data.

This is how we know the FBI is corrupt. They ignore our concerns that government has been collecting every phone record in the United States for 7 years without public debate, but instead pretend the issue is something stupid, like the false belief they’ve been recording all phone calls. They knock down strawman arguments instead of addressing our real concerns.

Regulate communication service providers

In Orwell’s 1984, everyone had a big-screen television mounted on the wall that was two-way. Citizens couldn’t turn the TV off, because it had to be blaring government propaganda all the time. The camera was active at all times in case law enforcement needed to access it. At the time the book was written, in 1948, televisions were new, and people thought two-way TVs were plausible. They weren’t at that time; it was a nonsense idea.

But then the Internet happened and now two-way TVs are a real thing. And it’s not just the TV that’s become two-way video, but also our phones. If you believe the FBI follows the “rule of law” and that the courts provide sufficient oversight, then there’s no reason to stop them going full Orwell, allowing the police to turn on your device’s camera/microphone any time they have a court order in order to eavesdrop on you. After all, as Comey says, there should be no law-free zone in this country, no place law enforcement can’t touch.

Comey pretends that all he seeks at the moment is a “regulatory or legislative fix to create a level playing field, so that all communication service providers are held to the same standard”, meaning a CALEA-style backdoor allowing eavesdropping. But here’s the thing: communication is no longer a service but an app. Communication is “end-to-end”, between apps, often from different vendors, bypassing any “service provider”. There is no way to eavesdrop on those apps without being able to secretly turn on a device’s microphone remotely and listen in.

That’s why we crypto-activists draw the line here, at this point. Law enforcement backdoors in crypto inevitably means an Orwellian future.


There is a lot more wrong with James Comey’s speech. What I’ve focused on here were the Orwellian elements. The right to individual crypto, with no government backdoors, is the most important new human right that technology has created. Without it, the future is an Orwellian dystopia. And as proof of that, I give you James Comey’s speech, whose arguments are the very caricatures that Orwell lampooned in his books.

Schneier on Security: Surveillance in Schools

This post was syndicated from: Schneier on Security and was written by: schneier. Original post: at Schneier on Security

This essay, “Grooming students for a lifetime of surveillance,” talks about the general trends in student surveillance.

Related: essay on the need for student privacy in online learning.