Posts tagged ‘Privacy’

TorrentFreak: Under U.S. Pressure, PayPal Nukes Mega For Encrypting Files

This post was syndicated from: TorrentFreak and was written by: Andy. Original post: at TorrentFreak

During September 2014, the Digital Citizens Alliance and Netnames teamed up to publish a brand new report. Titled ‘Behind The Cyberlocker Door: A Report How Shadowy Cyberlockers Use Credit Card Companies to Make Millions,’ it offered insight into the finances of some of the world’s most popular cyberlocker sites.

The report had its issues, however. While many of the sites covered might at best be considered dubious, the inclusion of Mega – the most scrutinized file-hosting startup in history – was a real head scratcher. Mega complies with all relevant laws and responds quickly whenever content owners need something removed. By any standard the company lives up to the requirements of the DMCA.

“We consider the report grossly untrue and highly defamatory of Mega,” Mega CEO Graham Gaylard told TF at the time. But now, just five months on, Mega’s inclusion in the report has come back to bite the company in a big way.

Speaking via email with TorrentFreak this morning, Gaylard highlighted the company’s latest battle, one which has seen the company become unable to process payments from customers. It’s all connected with the NetNames report and has even seen the direct involvement of a U.S. politician.

According to Mega, following the publication of the report last September, SOPA and PIPA proponent Senator Patrick Leahy (Vermont, Chair of the Senate Judiciary Committee) put Visa and MasterCard under pressure to stop providing payment services to the ‘rogue’ companies listed in the NetNames report.

Following Leahy’s intervention, Visa and MasterCard then pressured PayPal to cease providing payment processing services to MEGA. As a result, Mega is no longer able to process payments.

“It is very disappointing to say the least. PayPal has been under huge pressure,” Gaylard told TF.

The company did not go without a fight, however.

“MEGA provided extensive statistics and other evidence showing that MEGA’s business is legitimate and legally compliant. After discussions that appeared to satisfy PayPal’s queries, MEGA authorised PayPal to share that material with Visa and MasterCard. Eventually PayPal made a non-negotiable decision to immediately terminate services to MEGA,” the company explains.

What makes the situation more unusual is that PayPal reportedly apologized to Mega for its withdrawal while acknowledging that the company’s business is indeed legitimate.

However, PayPal also advised that Mega’s unique selling point – its end-to-end encryption – was a key concern for the processor.

“MEGA has demonstrated that it is as compliant with its legal obligations as USA cloud storage services operated by Google, Microsoft, Apple, Dropbox, Box, Spideroak etc, but PayPal has advised that MEGA’s ‘unique encryption model’ presents an insurmountable difficulty,” Mega explains.

As of now, Mega is unable to process payments but is working on finding a replacement. In the meantime the company is waiving all storage limits and will not suspend any accounts for non-payment. All accounts have had their subscriptions extended by two months, free of charge.

Mega indicates that it will ride out the storm and will not bow to pressure nor compromise the privacy of its users.

“MEGA supplies cloud storage services to more than 15 million registered customers in more than 200 countries. MEGA will not compromise its end-to-end user controlled encryption model and is proud to not be part of the USA business network that discriminates against legitimate international businesses,” the company concludes.

Source: TorrentFreak, for the latest info on copyright, file-sharing, torrent sites and anonymous VPN services.

Schneier on Security: Everyone Wants You To Have Security, But Not from Them

This post was syndicated from: Schneier on Security and was written by: schneier. Original post: at Schneier on Security

In December, Google’s Executive Chairman Eric Schmidt was interviewed at the CATO Institute Surveillance Conference. One of the things he said, after talking about some of the security measures his company has put in place post-Snowden, was: “If you have important information, the safest place to keep it is in Google. And I can assure you that the safest place to not keep it is anywhere else.”

That surprised me, because Google collects all of your information to show you more targeted advertising. Surveillance is the business model of the Internet, and Google is one of the most successful companies at that. To claim that Google protects your privacy better than anyone else is to profoundly misunderstand why Google stores your data for free in the first place.

I was reminded of this last week when I appeared on Glenn Beck’s show along with cryptography pioneer Whitfield Diffie. Diffie said:

You can’t have privacy without security, and I think we have glaring failures in computer security in problems that we’ve been working on for 40 years. You really should not live in fear of opening an attachment to a message. It ought to be confined; your computer ought to be able to handle it. And the fact that we have persisted for decades without solving these problems is partly because they’re very difficult, but partly because there are lots of people who want you to be secure against everyone but them. And that includes all of the major computer manufacturers who, roughly speaking, want to manage your computer for you. The trouble is, I’m not sure of any practical alternative.

That neatly explains Google. Eric Schmidt does want your data to be secure. He wants Google to be the safest place for your data, as long as you don’t mind the fact that Google has access to your data. Facebook wants the same thing: to protect your data from everyone except Facebook. Hardware companies are no different. Last week, we learned that Lenovo computers shipped with a piece of adware called Superfish that broke users’ security to spy on them for advertising purposes.

Governments are no different. The FBI wants people to have strong encryption, but it wants backdoor access so it can get at your data. UK Prime Minister David Cameron wants you to have good security, just as long as it’s not so strong as to keep the UK government out. And, of course, the NSA spends a lot of money ensuring that there’s no security it can’t break.

Corporations want access to your data for profit; governments want it for security purposes, be they benevolent or malevolent. But Diffie makes an even stronger point: we give lots of companies access to our data because it makes our lives easier.

I wrote about this in my latest book, Data and Goliath:

Convenience is the other reason we willingly give highly personal data to corporate interests, and put up with becoming objects of their surveillance. As I keep saying, surveillance-based services are useful and valuable. We like it when we can access our address book, calendar, photographs, documents, and everything else on any device we happen to be near. We like services like Siri and Google Now, which work best when they know tons about you. Social networking apps make it easier to hang out with our friends. Cell phone apps like Google Maps, Yelp, Weather, and Uber work better and faster when they know our location. Letting apps like Pocket or Instapaper know what we’re reading feels like a small price to pay for getting everything we want to read in one convenient place. We even like it when ads are targeted to exactly what we’re interested in. The benefits of surveillance in these and other applications are real, and significant.

Like Diffie, I’m not sure there is any practical alternative. The reason the Internet is a worldwide mass-market phenomenon is that all the technological details are hidden from view. Someone else is taking care of it. We want strong security, but we also want companies to have access to our computers, smart devices, and data. We want someone else to manage our computers and smart phones, organize our e-mail and photos, and help us move data between our various devices.

Those “someones” will necessarily be able to violate our privacy, either by deliberately peeking at our data or by having such lax security that they’re vulnerable to national intelligence agencies, cybercriminals, or both. Last week, we learned that the NSA broke into the Dutch company Gemalto and stole the encryption keys for billions (yes, billions) of cell phones worldwide. That was possible because we consumers don’t want to do the work of securely generating those keys and setting up our own security when we get our phones; we want it done automatically by the phone manufacturers. We want our data to be secure, but we want someone to be able to recover it all when we forget our password.

We’ll never solve these security problems as long as we’re our own worst enemy. That’s why I believe that any long-term security solution will not only be technological, but political as well. We need laws that will protect our privacy from those who obey the law, and punish those who break it. We need laws that require those entrusted with our data to protect it. Yes, we need better security technologies, but we also need laws mandating the use of those technologies.

This essay previously appeared on

SANS Internet Storm Center, InfoCON: green: 11 Ways To Track Your Moves When Using a Web Browser, (Tue, Feb 24th)

This post was syndicated from: SANS Internet Storm Center, InfoCON: green and was written by: SANS Internet Storm Center, InfoCON: green. Original post: at SANS Internet Storm Center, InfoCON: green

There are a number of different use cases for tracking users as they use a particular web site. Some of them are more sinister than others. For most web applications, some form of session tracking is required to maintain the user’s state. This is typically easily done using well-configured cookies (and outside the scope of this article). Sessions are meant to be ephemeral and will not persist for long.

On the other hand, some tracking methods do attempt to track the user over a long time, and in particular attempt to make it difficult to evade the tracking. This is sometimes done for advertising purposes, but it can also be done to stop certain attacks like brute forcing, or to identify attackers that return to a site. In the worst case, from a privacy perspective, the tracking is done to follow a user across various web sites.

Over the years, browsers and plugins have provided a number of ways to restrict this tracking. Here are some of the more common techniques used for tracking, and how the user can prevent (some of) it:

1 – Cookies

Cookies are meant to maintain state between different requests. A browser will send a cookie with each request once it is set for a particular site. From a privacy point of view, the expiration time and the domain of the cookie are the most important settings. Most browsers will reject cookies set on behalf of a different site, unless the user permits these cookies to be set. A proper session cookie should not use an expiration date, as it should expire as soon as the browser is closed. Most browsers do offer means to review, control and delete cookies. In the past, a Cookie2 header was proposed for session cookies, but this header has been deprecated and browsers have stopped supporting it.
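The difference between a session cookie and a persistent tracking cookie comes down to the Expires/Max-Age attribute. A minimal sketch using Python’s standard library (the cookie names and lifetime are illustrative):

```python
from http.cookies import SimpleCookie

def session_cookie_header(session_id):
    """A proper session cookie: no Expires/Max-Age, so it dies with the browser."""
    c = SimpleCookie()
    c["sid"] = session_id
    c["sid"]["httponly"] = True
    return c["sid"].OutputString()

def tracking_cookie_header(user_id, days=3650):
    """A persistent cookie: a far-future Max-Age survives browser restarts."""
    c = SimpleCookie()
    c["uid"] = user_id
    c["uid"]["max-age"] = days * 86400   # ten years, in seconds
    c["uid"]["path"] = "/"
    return c["uid"].OutputString()

print(session_cookie_header("abc123"))
print(tracking_cookie_header("u-42"))
```

A cookie with neither attribute lives only as long as the browser process, which is exactly what makes it acceptable for session state but useless for long-term tracking.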

2 – Flash Cookies (Local Shared Objects)

Flash has its own persistence mechanism. These flash cookies are files that can be left on the client. They cannot be set on behalf of other sites (cross-origin), but one SWF script can expose the content of an LSO to other scripts, which can be used to implement cross-origin storage. The best way to prevent flash cookies from tracking you is to disable Flash. Managing flash cookies is tricky and typically does require special plugins.

3 – IP Address

The IP address is probably the most basic tracking mechanism of all IP-based communication, but it is not always reliable, as users’ IP addresses may change at any time and multiple users often share the same IP address. You can use various VPN products or systems like Tor to prevent your IP address from being used to track you, but this usually comes with a performance hit. Some modern JavaScript extensions (WebRTC in particular) can be used to retrieve a user’s internal IP address, which can be used to resolve ambiguities introduced by NAT, but WebRTC is not yet implemented in all browsers. IPv6 may provide additional ways to use the IP address to identify users, as you are less likely to run into issues with NAT.

4 – User Agent

The User-Agent string sent by a browser is hardly ever unique by default, but spyware sometimes modifies the User-Agent to add unique values to it. Many browsers allow adjusting the User-Agent, and more recently browsers have started to reduce the information in the User-Agent or even make it somewhat dynamic to match the expected content. Non-spyware plugins sometimes modify the User-Agent to indicate support for specific features.

5 – Browser Fingerprinting

A web browser is hardly ever one monolithic piece of software. Instead, web browsers interact with various plugins and extensions the user may have installed. Past work has shown that the combination of plugin versions and configuration options selected by the user tends to be amazingly unique, and this technique has been used to derive unique identifiers. There is not much you can do to prevent this, other than to minimize the number of plugins you install (but that may be an indicator in itself).
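The core of fingerprinting is trivially small: concatenate whatever attributes the browser exposes and hash them into a stable identifier. A hedged sketch (the attribute names and values below are illustrative, not a real collection API):

```python
import hashlib

def fingerprint(attributes):
    """Derive a stable identifier from browser-reported attributes.

    Sorting the keys makes the hash independent of collection order.
    """
    canonical = "|".join("%s=%s" % (k, attributes[k]) for k in sorted(attributes))
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()[:16]

browser_a = {
    "user_agent": "Mozilla/5.0 (X11; Linux x86_64) ...",
    "plugins": "Flash 16.0;Java 1.8;QuickTime 7.7",
    "screen": "1920x1080x24",
    "timezone": "UTC-5",
}
# One plugin version differs: the derived identifier changes completely.
browser_b = dict(browser_a, plugins="Flash 16.0;Java 1.7;QuickTime 7.7")

print(fingerprint(browser_a))
print(fingerprint(browser_b))
```

The identifier is stable across visits as long as the plugin set does not change, which is why even a handful of attributes is often enough to single out a browser.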

6 – Local Storage

HTML 5 offers two new ways to store data on the client: Local Storage and Session Storage. Local Storage is most useful for persistent storage on the client, and with that, user tracking. Access to Local Storage is limited to the site that sent the data. Some browsers implement debug features that allow the user to review the data stored. Session Storage is limited to a particular window and is removed as soon as the window is closed.

7 – Cached Content

Browsers cache content based on the expiration headers provided by the server. A web application can include unique content in a page and then use JavaScript to check whether the content is cached or not in order to identify a user. This technique can be implemented using images, fonts or pretty much any content. It is difficult to defend against unless you routinely (e.g. on closing the browser) delete all cached content. Some browsers allow you to not cache any content at all, but this can cause significant performance issues. Recently Google has been seen using fonts to track users, but the technique is not new. Cached JavaScript can easily be used to set unique tracking IDs.
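One common implementation of cache-based tracking abuses the ETag header: the server hands each new visitor a unique ETag on a tiny cacheable resource, and when the browser later revalidates its cache it echoes that tag back in If-None-Match, re-identifying the user. A sketch of the server side (the handler and storage are illustrative):

```python
import secrets

# Maps issued ETags to visitor IDs; a real tracker would persist this.
etag_to_user = {}

def serve_beacon(if_none_match=None):
    """Serve a tiny cacheable resource whose ETag doubles as a user ID."""
    if if_none_match and if_none_match in etag_to_user:
        # Returning visitor: the browser echoed the tag back at us.
        return 304, {"ETag": if_none_match}
    tag = '"%s"' % secrets.token_hex(8)          # fresh, unique identifier
    etag_to_user[tag] = "visitor-%d" % (len(etag_to_user) + 1)
    # A long max-age keeps the tag in the cache for up to a year.
    return 200, {"ETag": tag, "Cache-Control": "private, max-age=31536000"}

status, headers = serve_beacon(None)         # first visit: 200 + new tag
status2, _ = serve_beacon(headers["ETag"])   # revalidation: 304, recognized
print(status, status2)   # 200 304
```

Because ETag revalidation is standard HTTP behavior, this works with cookies disabled, and only clearing the cache removes the identifier.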

8 – Canvas Fingerprinting

This is a more recent technique and in essence a special form of browser fingerprinting. HTML 5 introduced a Canvas API that allows JavaScript to draw images in your browser. In addition, it is possible to read back the image that was created. As it turns out, font configurations and other parameters are unique enough to result in slightly different images when identical JavaScript code is used to draw them. These differences can be used to derive a browser identifier. There is not much you can do to prevent this from happening: I am not aware of a browser that allows you to disable the Canvas feature, and pretty much all reasonably up-to-date browsers support it in some form.
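The principle can be simulated outside a browser: identical drawing instructions produce slightly different pixel data on different systems, and hashing those pixels yields an identifier. In the sketch below, render() merely stands in for reading back real canvas pixels (e.g. via toDataURL()) and is purely illustrative:

```python
import hashlib

def render(text, font, antialias_gamma):
    """Stand-in for canvas pixel readback: the same text drawn with a
    different font stack or rasterizer setting yields different bytes."""
    return ("%s|%s|%.3f" % (text, font, antialias_gamma)).encode("utf-8")

def canvas_fingerprint(font, antialias_gamma):
    # The drawing code is identical on every client; only the pixels differ.
    pixels = render("Cwm fjordbank glyphs vext quiz", font, antialias_gamma)
    return hashlib.md5(pixels).hexdigest()

print(canvas_fingerprint("Arial 12px", 1.80))
print(canvas_fingerprint("Arial 12px", 1.81))   # tiny rendering delta, new ID
```

The point is that the tracker never needs to enumerate fonts or settings directly; any machine-specific variation in rendering shows up in the hash.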

9 – Carrier Injected Headers

Verizon recently started injecting specific headers into HTTP requests to identify users. As this is done in flight, it only works for HTTP, not HTTPS. Each user is assigned a specific ID, and the ID is injected into all HTTP requests as an X-UIDH header. Verizon offers a for-pay service that a web site can use to retrieve demographic information about the user. But just by itself, the header can be used to track users as it stays linked to the user for an extended time.
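Because the injection happens in the carrier’s network equipment rather than in the browser, no browser setting can prevent it, but TLS does. A sketch of the idea (the X-UIDH header name is Verizon’s; the ID derivation here is an assumption for illustration, as real UIDH values are opaque):

```python
import hashlib

def inject_uidh(request_headers, subscriber_id, is_https):
    """Simulate a carrier proxy tagging a subscriber's outbound requests."""
    if is_https:
        # Encrypted traffic cannot be modified in flight.
        return dict(request_headers)
    tagged = dict(request_headers)
    # A stable per-subscriber token; stable means trackable across sites.
    tagged["X-UIDH"] = hashlib.sha1(subscriber_id.encode()).hexdigest()[:24]
    return tagged

plain = inject_uidh({"Host": "example.com"}, "subscriber-555-0100", False)
secure = inject_uidh({"Host": "example.com"}, "subscriber-555-0100", True)
print("X-UIDH" in plain, "X-UIDH" in secure)   # True False
```

Every site the subscriber visits over plain HTTP sees the same token, which is what makes the header a cross-site tracking identifier.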

10 – Redirects

This is a bit of a variation on cached content tracking. If a user is redirected using a 301 (Moved Permanently) code, the browser will remember the redirect and pull up the target page right away, without visiting the original page first. So, for example, if you click on a link to one URL, I could redirect you to a second, unique URL; the next time you go to the first URL, your browser will automatically go straight to the second one. This technique is less reliable than some of the other techniques, as browsers differ in how they cache redirects.
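The mechanics can be sketched with a toy browser that caches permanent redirects: the first visit to a bare URL is answered with a 301 pointing at a URL that embeds a unique token, and later visits reuse the cached redirect, token and all. The paths and token scheme below are illustrative:

```python
import secrets

issued = {}  # token -> visit record; a real tracker would persist this

def first_visit(path):
    """Answer the bare URL with a cacheable 301 embedding a fresh token."""
    token = secrets.token_hex(4)
    issued[token] = {"path": path}
    return 301, "%s?uid=%s" % (path, token)

class Browser:
    """Tiny model of a browser that caches permanent redirects."""
    def __init__(self):
        self.redirect_cache = {}

    def visit(self, path):
        if path in self.redirect_cache:        # cached 301: no server contact
            return self.redirect_cache[path]
        status, location = first_visit(path)
        if status == 301:
            self.redirect_cache[path] = location
        return location

b = Browser()
url1 = b.visit("/article")
url2 = b.visit("/article")
print(url1 == url2)   # True: the token survives between visits
```

Clearing cookies does nothing here; the identifier lives in the browser’s redirect cache, which most users never think to empty.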

11 – Cookie Respawning / Syncing

Some of the methods above have pretty simple countermeasures. In order to make it harder for users to evade tracking, sites often combine different methods and respawn cookies. This technique is sometimes referred to as Evercookie. If the user deletes, for example, the HTTP cookie but not the Flash cookie, the Flash cookie is used to re-create the HTTP cookie on the user’s next visit.
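Respawning is just reconciliation across storage mechanisms: on every visit, copy the identifier from any store that still holds it into every store that lost it. A minimal sketch (the store names are illustrative):

```python
def respawn(stores):
    """Sync a tracking ID across storage mechanisms (Evercookie-style).

    stores maps mechanism name -> stored ID or None. If any mechanism
    still holds the ID, it is copied back into all the others.
    """
    surviving = next((v for v in stores.values() if v is not None), None)
    if surviving is None:
        return stores, None          # user cleared everything: ID is gone
    return {name: surviving for name in stores}, surviving

# User deleted the HTTP cookie but not the Flash LSO:
stores = {"http_cookie": None, "flash_lso": "uid-8843", "local_storage": None}
stores, uid = respawn(stores)
print(stores["http_cookie"])   # uid-8843: the cookie is quietly re-created
```

This is why the only effective countermeasure is clearing every storage mechanism at once; leaving any single one intact restores all the others.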

Any methods I missed? (I am sure there have to be a couple…)

Johannes B. Ullrich, Ph.D.

(c) SANS Internet Storm Center. Creative Commons Attribution-Noncommercial 3.0 United States License.

Schneier on Security: AT&T Charging Customers to Not Spy on Them

This post was syndicated from: Schneier on Security and was written by: schneier. Original post: at Schneier on Security

AT&T is charging a premium for gigabit Internet service without surveillance:

The tracking and ad targeting associated with the gigabit service cannot be avoided using browser privacy settings: as AT&T explained, the program “works independently of your browser’s privacy settings regarding cookies, do-not-track and private browsing.” In other words, AT&T is performing deep packet inspection, a controversial practice through which internet service providers, by virtue of their privileged position, monitor all the internet traffic of their subscribers and collect data on the content of those communications.

What if customers do not want to be spied on by their internet service providers? AT&T allows gigabit service subscribers to opt out — for a $29 fee per month.

I have mixed feelings about this. On one hand, AT&T is forgoing revenue by not spying on its customers, and it’s reasonable to charge them for that lost revenue. On the other hand, this sort of thing means that privacy becomes a luxury good. In general, I prefer to conceptualize privacy as a right to be respected and not a commodity to be bought and sold.

TorrentFreak: The World’s Most Idiotic Copyright Complaint

This post was syndicated from: TorrentFreak and was written by: Andy. Original post: at TorrentFreak

At least once a month TorrentFreak reports on the often crazy world of DMCA takedown notices. Google is kind enough to publish thousands of them in its Transparency Report and we’re only too happy to spend hours trawling through them.

Every now and again a real gem comes to light, often featuring mistakes that show why making these notices public is not only a great idea but also in the public interest. The ones we found this week not only underline that assertion in bold, but are actually the worst examples of incompetence we’ve ever seen.

German-based Total Wipes Music Group have made these pages before after trying to censor entirely legal content published by Walmart, Ikea, Fair Trade USA and Dunkin Donuts. This week, however, their earlier efforts were eclipsed on a massive scale.

First, in an effort to ‘protect’ their album “Truth or Dare” on Maze Records, the company tried to censor a TorrentFreak article from 2012 on how to download anonymously. The notice, found here, targets dozens of privacy-focused articles simply because they have the word “hide” in them.

But it gets worse – much worse. ‘Protecting’ an album called “Cigarettes” on Mona Records, Total Wipes sent Google a notice containing not a single infringing link. Unbelievably, one of the URLs targeted an article on how to use PGP on the Mac. It was published by none other than the EFF.


So that was the big punchline, right? Pfft, nowhere near.

Going after alleged pirates of the album “In To The Wild – Vol.7” on Aborigeno Music, Total Wipes offer their pièce de résistance, the veritable jewel in their crown. The notice, which covers 95 URLs, targets no music whatsoever. Instead it tries to ruin the Internet by targeting the download pages of some of the most famous online companies around.

Some of the URLs in the most abusive notice ever


In no particular order, here is a larger selection of some of the download pages the notice attacks.

ICQ, RedHat, SQLite, Vuze, LinuxMint, WineHQ, Foxit, Calibre, Kodi/XBMC, Skype, Java, OpenOffice, Gimp, Ubuntu, Python, TeamViewer, MySQL, VLC, Joomla, 7-Zip, RaspberryPI, Unity3D, Apache, MalwareBytes, Pidgin, LibreOffice, VMWare, uTorrent, WinSCP, WhatsApp, Evernote, AMD, AVG, Origin, TorProject, PHPMyAdmin, Nginx, FFmpeg, phpbb, Plex, GNU, WireShark, Dropbox and Opera.

If you can bear to read it, the full notice can be found here. Worryingly, Total Wipes Music are currently filing notices almost every day. Google rejects many of them but it’s only a matter of time before some sneak through.

We’ve said it before but it needs to be said again. Some people can’t be trusted to send takedown notices and must lose their right to do so when they demonstrate this level of abuse. The sooner Google kicks these people out the better.


Schneier on Security: Co3 Systems Changes Its Name to Resilient Systems

This post was syndicated from: Schneier on Security and was written by: schneier. Original post: at Schneier on Security

Today my company, Co3 Systems, is changing its name to Resilient Systems. The new name better reflects who we are and what we do. Plus, the old name was kind of dumb.

I have long liked the term “resilience.” If you look around, you’ll see it a lot. It’s used in human psychology, in organizational theory, in disaster recovery, in ecological systems, in materials science, and in systems engineering. Here’s a definition from 1991, in a book by Aaron Wildavsky called Searching for Safety: “Resilience is the capacity to cope with unanticipated dangers after they have become manifest, learning to bounce back.”

The concept of resilience has been used in IT systems for a long time.

I have been talking about resilience in IT security — and security in general — for at least 15 years. I gave a talk at an ICANN meeting in 2001 titled “Resilient Security and the Internet.” At the 2001 Black Hat, I said: “Strong countermeasures combine protection, detection, and response. The way to build resilient security is with vigilant, adaptive, relentless defense by experts (people, not products). There are no magic preventive countermeasures against crime in the real world, yet we are all reasonably safe, nevertheless. We need to bring that same thinking to the Internet.”

In Beyond Fear (2003), I spend pages on resilience: “Good security systems are resilient. They can withstand failures; a single failure doesn’t cause a cascade of other failures. They can withstand attacks, including attackers who cheat. They can withstand new advances in technology. They can fail and recover from failure.” We can defend against some attacks, but we have to detect and respond to the rest of them. That process is how we achieve resilience. It was true fifteen years ago and, if anything, it is even more true today.

So that’s the new name, Resilient Systems. We provide an Incident Response Platform, empowering organizations to thrive in the face of cyberattacks and business crises. Our collaborative platform arms incident response teams with workflows, intelligence, and deep-data analytics to react faster, coordinate better, and respond smarter.

And that’s the deal. Our Incident Response Platform produces and manages instant incident response plans. Together with our Security and Privacy modules, it provides IR teams with best-practice action plans and flexible workflows. It’s also agile, allowing teams to modify their response to suit organizational needs, and continues to adapt in real time as incidents evolve.

Resilience is a lot bigger than IT. It’s a lot bigger than technology. In my latest book, Data and Goliath, I write: “I am advocating for several flavors of resilience for both our systems of surveillance and our systems that control surveillance: resilience to hardware and software failure, resilience to technological innovation, resilience to political change, and resilience to coercion. An architecture of security provides resilience to changing political whims that might legitimize political surveillance. Multiple overlapping authorities provide resilience to coercive pressures. Properly written laws provide resilience to changing technological capabilities. Liberty provides resilience to authoritarianism. Of course, full resilience against any of these things, let alone all of them, is impossible. But we must do as well as we can, even to the point of assuming imperfections in our resilience.”

I wrote those words before we even considered a name change.

Same company, new name (and new website). Check us out.

Schneier on Security: New Book: <i>Data and Goliath</i>

This post was syndicated from: Schneier on Security and was written by: schneier. Original post: at Schneier on Security

After a year of talking about it, my new book is finally published.

This is the copy from the inside front flap:

You are under surveillance right now.

Your cell phone provider tracks your location and knows who’s with you. Your online and in-store purchasing patterns are recorded, and reveal if you’re unemployed, sick, or pregnant. Your e-mails and texts expose your intimate and casual friends. Google knows what you’re thinking because it saves your private searches. Facebook can determine your sexual orientation without you ever mentioning it.

The powers that surveil us do more than simply store this information. Corporations use surveillance to manipulate not only the news articles and advertisements we each see, but also the prices we’re offered. Governments use surveillance to discriminate, censor, chill free speech, and put people in danger worldwide. And both sides share this information with each other or, even worse, lose it to cybercriminals in huge data breaches.

Much of this is voluntary: we cooperate with corporate surveillance because it promises us convenience, and we submit to government surveillance because it promises us protection. The result is a mass surveillance society of our own making. But have we given up more than we’ve gained? In Data and Goliath, security expert Bruce Schneier offers another path, one that values both security and privacy. He shows us exactly what we can do to reform our government surveillance programs and shake up surveillance-based business models, while also providing tips for you to protect your privacy every day. You’ll never look at your phone, your computer, your credit cards, or even your car in the same way again.

And there’s a great quote on the cover:

“The public conversation about surveillance in the digital age would be a good deal more intelligent if we all read Bruce Schneier first.” –Malcolm Gladwell, author of David and Goliath

This is the table of contents:

Part 1: The World We’re Creating

Chapter 1: Data as a By-Product of Computing
Chapter 2: Data as Surveillance
Chapter 3: Analyzing our Data
Chapter 4: The Business of Surveillance
Chapter 5: Government Surveillance and Control
Chapter 6: Consolidation of Institutional Surveillance

Part 2: What’s at Stake

Chapter 7: Political Liberty and Justice
Chapter 8: Commercial Fairness and Equality
Chapter 9: Business Competitiveness
Chapter 10: Privacy
Chapter 11: Security

Part 3: What to Do About It

Chapter 12: Principles
Chapter 13: Solutions for Government
Chapter 14: Solutions for Corporations
Chapter 15: Solutions for the Rest of Us
Chapter 16: Social Norms and the Big Data Trade-off

I’ve gotten some great responses from people who read the bound galley, and hope for some good reviews in mainstream publications. So far, there’s one review.

You can buy the book at Amazon, Amazon UK, Barnes & Noble, Powell’s, Book Depository, or IndieBound — which routes your purchase through a local independent bookseller. E-books are available on Amazon, B&N, Apple’s iBooks store, and Google Play.

And if you can, please write a review for Amazon, Goodreads, or anywhere else.

Schneier on Security: Samsung Television Spies on Viewers

This post was syndicated from: Schneier on Security and was written by: schneier. Original post: at Schneier on Security

Earlier this week, we learned that Samsung televisions are eavesdropping on their owners. If you have one of their Internet-connected smart TVs, you can turn on a voice command feature that saves you the trouble of finding the remote, pushing buttons and scrolling through menus. But making that feature work requires the television to listen to everything you say. And what you say isn’t just processed by the television; it may be forwarded over the Internet for remote processing. It’s literally Orwellian.

This discovery surprised people, but it shouldn’t have. The things around us are increasingly computerized, and increasingly connected to the Internet. And most of them are listening.

Our smartphones and computers, of course, listen to us when we’re making audio and video calls. But the microphones are always there, and there are ways a hacker, government, or clever company can turn those microphones on without our knowledge. Sometimes we turn them on ourselves. If we have an iPhone, the voice-processing system Siri listens to us, but only when we push the iPhone’s button. Like Samsung, iPhones with the “Hey Siri” feature enabled listen all the time. So do Android devices with the “OK Google” feature enabled, and so does an Amazon voice-activated system called Echo. Facebook has the ability to turn your smartphone’s microphone on when you’re using the app.

Even if you don’t speak, your computer is paying attention. Gmail “listens” to everything you write, and shows you advertising based on it. It might feel as if you’re never alone. Facebook does the same with everything you write on that platform, and even listens to the things you type but don’t post. Skype doesn’t listen — we think — but as Der Spiegel notes, data from the service “has been accessible to the NSA’s snoops” since 2011.

So the NSA certainly listens. It listens directly, and it listens to all these companies listening to you. So do other countries like Russia and China, which we really don’t want listening so closely to their citizens.

It’s not just the devices that listen; most of this data is transmitted over the Internet. Samsung sends it to what was referred to as a “third party” in its policy statement. It later revealed that third party to be a company you’ve never heard of — Nuance — that turns the voice into text for it. Samsung promises that the data is erased immediately. Most of the other companies that are listening promise no such thing and, in fact, save your data for a long time. Governments, of course, save it, too.

This data is a treasure trove for criminals, as we are learning again and again as tens and hundreds of millions of customer records are repeatedly stolen. Last week, it was reported that hackers had accessed the personal records of some 80 million Anthem Health customers and others. Last year, it was Home Depot, JP Morgan, Sony and many others. Do we think Nuance’s security is better than any of these companies? I sure don’t.

At some level, we’re consenting to all this listening. A single sentence in Samsung’s 1,500-word privacy policy, the one most of us don’t read, stated: “Please be aware that if your spoken words include personal or other sensitive information, that information will be among the data captured and transmitted to a third party through your use of Voice Recognition.” Other services could easily come with a similar warning: Be aware that your e-mail provider knows what you’re saying to your colleagues and friends and be aware that your cell phone knows where you sleep and whom you’re sleeping with — assuming that you both have smartphones, that is.

The Internet of Things is full of listeners. Newer cars contain computers that record speed, steering wheel position, pedal pressure, even tire pressure — and insurance companies want to listen. And, of course, your cell phone records your precise location at all times you have it on — and possibly even when you turn it off. If you have a smart thermostat, it records your house’s temperature, humidity, ambient light and any nearby movement. Any fitness tracker you’re wearing records your movements and some vital signs; so do many computerized medical devices. Add security cameras and recorders, drones and other surveillance airplanes, and we’re being watched, tracked, measured and listened to almost all the time.

It’s the age of ubiquitous surveillance, fueled by both Internet companies and governments. And because it’s largely happening in the background, we’re not really aware of it.

This has to change. We need to regulate the listening: both what is being collected and how it’s being used. But that won’t happen until we know the full extent of surveillance: who’s listening and what they’re doing with it. Samsung buried its listening details in its privacy policy — they have since amended it to be clearer — and we’re only having this discussion because a Daily Beast reporter stumbled upon it. We need more explicit conversation about the value of being able to speak freely in our living rooms without our televisions listening, or having e-mail conversations without Google or the government listening. Privacy is a prerequisite for free expression, and losing that would be an enormous blow to our society.

This essay previously appeared on

ETA (2/16): A German translation by Damian Weber.

Beyond Bandwidth: Securing Every Organization’s Most Valuable Asset: Data (Or, If You Don’t Know Where Your Data Is, Cybercriminals Will Find It For You)

This post was syndicated from: Beyond Bandwidth and was written by: Chris Richter. Original post: at Beyond Bandwidth

It’s hard to put a value on information, and yet it can be argued that data, especially now, can be the most valuable asset companies own. Unfortunately, it’s extremely difficult for organizations to assign a value to their intangible information assets, i.e. data. Without having established a value on its data assets, an organization will…

The post Securing Every Organization’s Most Valuable Asset: Data (Or, If You Don’t Know Where Your Data Is, Cybercriminals Will Find It For You) appeared first on Beyond Bandwidth.

SANS Internet Storm Center, InfoCON: green: Raising the “Creep Factor” in License Agreements, (Sun, Feb 8th)

This post was syndicated from: SANS Internet Storm Center, InfoCON: green and was written by: SANS Internet Storm Center, InfoCON: green. Original post: at SANS Internet Storm Center, InfoCON: green

When I started in this biz back in the 80s, I was brought up short when I read my first EULA (End User License Agreement). Back then, software was basically wrapped in the EULA (yes, like a Christmas present), and nobody read them then either. Imagine my surprise at the time that I hadn’t actually purchased the software, but was granted a license to use the software, and ownership remained with the vendor (Microsoft, Lotus, UCSD and so on).

Well, things haven’t changed much since then, and the concept of ownership has been steadily creeping further and further into information territory that we don’t expect. Google, Facebook and pretty much any other free service out there sells any information you post, as well as any other metadata they can scrape from photos, session information and so on. The common proverb in those situations is: if the service is free, then YOU are the product. Try reading the Google, Facebook or Twitter terms of service if you have an hour to spare and think your blood pressure is a bit low that day.

The frontier of EULAs, however – the market where you seem to be giving up the most private information you don’t expect – seems to be in home appliances, in this case smart televisions. Samsung recently posted the EULA for its SmartTV here:

They’re collecting the shows you watch, internet sites visited, IP addresses you browse from, cookies, likes, search terms (really?) and all kinds of other easy-to-collect and apparently easy-to-apologize-for (in advance) information. With this information, so far I’m pretty sure I’m not hooking up my TV to my home wireless or ethernet, but I’m not surprised – pretty much every smart TV vendor collects this same info.

But the really interesting passage, where the creep factor is really off the charts for me is:
Please be aware that if your spoken words include personal or other sensitive information, that information will be among the data captured and transmitted to a third party through your use of Voice Recognition.

No word, of course, on who these third parties are, or what their privacy policies might be.

Really and truly a spy in your living room. I guess it’s legal if it’s in a EULA or you work for a TLA? And it’s morally OK as long as you apologize in advance?


Rob VandenBrink

(c) SANS Internet Storm Center. Creative Commons Attribution-Noncommercial 3.0 United States License.

The diaspora* blog: diaspora* development review, January 2015

This post was syndicated from: The diaspora* blog and was written by: Diaspora* Foundation. Original post: at The diaspora* blog

These are the changes made to diaspora*'s codebase during January. They will take effect with the release of diaspora* v0.5.0.0.

Say a big thank you to everyone who has helped improve diaspora* this month!

This list has been created by volunteers from the diaspora* community. We'd love help in creating a development review each month; if you would like to help us, get in touch via the related thread on Loomio.

Augier @AugierLe42e

  • fixed the style of the header for the new statistics page: #5587
  • fixed the information about available services and open registrations, which wasn't correctly displayed on the new statistics page: #5595 and #5599

Marco Colli @collimarco

  • fixed a bug that linked a profile image from Facebook instead of downloading it to the pod for the diaspora* profile: #5493
  • fixed the translation of timestamps on the mobile website: #5489

Dumitru Ursu @dimaursu

  • added an autoprefixer for CSS vendor prefixes: #5532, #5535 and #5536
  • converted MySQL fields to 4-byte unicode, which widens the range of characters supported in posts on pods using MySQL: #5530
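As background for this change (not diaspora* code, just an illustration): MySQL’s legacy `utf8` charset stores at most three bytes per character, so any code point above U+FFFF, such as an emoji, cannot be stored; the 4-byte `utf8mb4` charset allows the full range that real UTF-8 requires. A tiny Python sketch of the distinction, with a helper name of our own choosing:

```python
# MySQL's legacy "utf8" charset stores at most 3 bytes per character,
# so code points above U+FFFF (e.g. emoji) cannot be stored in it.
# The 4-byte "utf8mb4" charset covers the full UTF-8 range.

def fits_legacy_mysql_utf8(text: str) -> bool:
    """Return True if every character encodes to <= 3 bytes of UTF-8."""
    return all(len(ch.encode("utf-8")) <= 3 for ch in text)

print(fits_legacy_mysql_utf8("héllo"))   # True  - fits in legacy utf8
print(fits_legacy_mysql_utf8("hi 😀"))   # False - emoji needs 4 bytes
```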

Faldrian @Faldrian

  • added an environment variable to specify the Firefox version for our test suite: #5584. The test suite sometimes has problems with recent Firefox versions, which can lead to failing tests when running the test suite on your own computer.
  • added buttons to the single-post view to hide/remove a post and to ignore a user: #5547

Fla @Flaburgan

  • added a currency setting to PayPal donations and allowed unhosted donation buttons for podmins: #5452
  • added followed tags to the mobile menu: #5468
  • removed the truncation for notification emails: #4830
  • fixed the active users count on the new statistics page: #5590

François Lamontagne @flamontagne

  • added a missing link in the FAQ: #5509

James Kiesel @gdpelican

  • improved the profile export feature. The export is now generated in the background and the user receives a notification mail as soon as the export is done: #5499 and #5578

Jason Robinson @jaywink

  • refactored JavaScript code for the mobile website to get rid of console errors: #5470
  • added some missing configuration for the profile export background job: #5570

maliktunga @maliktunga

  • improved the README: #5550

Marcelo Briones @margori

  • added the ability to strip privacy-sensitive EXIF data when uploading images: #5510

Sakshi Jain @sjain1107

  • removed the community spotlight setting from the settings page if it has not been enabled on the pod: #5562

SansPseudoFix @SansPseudoFix

  • fixed the style of the profile exporter on the settings page: #5582
  • added a statistics page. We already had statistics before but now they are more readable for non-technical users: #5464

Steffen van Bergerem @svbergerem

  • removed unused code from the ProfileHeaderView: #5472
  • ported the contacts page to Backbone.js: #5473. This implements client-side rendering of the contact list, which should speed up page load times.
  • replaced the markdown renderer pagedown with markdown-it: #5526, #5541, #5543, #5545 and #5574
  • added plugins for the markdown-it markdown renderer: #5551

TorrentFreak: Popcorn Time Explores I2P Anonymity as VPN Overloads

This post was syndicated from: TorrentFreak and was written by: Ernesto. Original post: at TorrentFreak

Branded a “Netflix for Pirates,” the Popcorn Time app quickly gathered a user base of millions of people over the past year.

There are several successful forks of the application available online, all working on their own feature sets. One fork has been among the most active; it added numerous features and made privacy one of its key selling points.

Last year it was the first fork to roll out a built-in VPN that could be used free of charge. However, with millions of users the associated VPN provider Kebrum had trouble keeping up with the massive demand.

“Our user base grew so quickly and is still growing at a tremendous pace that we’re having difficulties keeping up with the volume. Only a small percentage of the huge number of our users we have can use the VPN simultaneously at the moment,” the Popcorn Time team tells TF.

This motivated the developers to look for various alternatives to keep their users secure. In this quest the Invisible Internet Project (I2P) caught their eye.

“We’re now making the first steps in examining integration of Popcorn Time with the I2P network,” the team explains.

The I2P network has been around for more than a decade but never really caught on with the mainstream public. It operates as an anonymous overlay network, similar to Tor, but is optimized for file-sharing.

One of the major downsides of this type of anonymity is that it may slow down transfer speeds, and that’s also the main concern for the Popcorn Time developers.

“Our biggest question in regards to using the I2P network, and we’re examining this question thoroughly to see if it’s the best solution for anonymity for Popcorn Time, is whether the download speed will be good enough for Popcorn Time to work well and for users to be able to still get the awesome viewing experience they have become accustomed to.”

“We are trying to find ways in which we can use the huge user base Popcorn Time has in order to enhance the speed of I2P to our users,” the Popcorn Time team adds.

In addition to safeguarding the privacy of its users, Popcorn Time is also concerned about attacks on its own infrastructure. Android Planet reports that Popcorn Time also plans to distribute its software through P2P technology, so users can get the latest updates even when the server’s offline.

This is not just a hypothetical situation. A few months ago this fork of Popcorn Time lost its .eu domain name after they were put “under investigation” by the EURid registry, and pressure from copyright holders hasn’t stopped since according to the developers.

Source: TorrentFreak, for the latest info on copyright, file-sharing, torrent sites and anonymous VPN services.

SANS Internet Storm Center, InfoCON: green: GNU Privacy Guard (gpg) needs your help. If you have a couple $$ to spare, check, (Thu, Feb 5th)

This post was syndicated from: SANS Internet Storm Center, InfoCON: green and was written by: SANS Internet Storm Center, InfoCON: green. Original post: at SANS Internet Storm Center, InfoCON: green

Johannes B. Ullrich, Ph.D.

(c) SANS Internet Storm Center. Creative Commons Attribution-Noncommercial 3.0 United States License.

The diaspora* blog: Diaspora Yatra

This post was syndicated from: The diaspora* blog and was written by: Diaspora* Foundation. Original post: at The diaspora* blog

Diaspora Yatra is a campaign in India, which started last week and is scheduled to continue until March 6, to promote diaspora* and to attain self-reliance in communication technology.

An historic perspective on liberty and power

“Yatra” is a Sanskrit word meaning “journey.” Diaspora Yatra, launched by the Indian Pirates with eight partner organizations, has already covered four districts in the southern Indian state of Kerala. Pirate Praveen, who recently created a one-step installer for diaspora* on Debian, is a key player in this campaign, engaging in constructive discussions about diaspora* with people from all walks of life.

A talk to a women's group in Kollam

In its first week, Diaspora Yatra has started becoming an investigation into what the concepts of freedom, decentralization, and privacy mean to different sections of society:

  • While the techie adults in Technopark, Trivandrum, where the campaign kicked off, had strong opinions about security in decentralized networks and its trade-off with privacy, the young students of Anchal West School, Kollam were more worried about keeping unwanted eyes away from their private affairs.
  • After being forced to think about how “free” services provided by other social networks are economically feasible for their providers, the teachers at Badhiriya Bachelor of Education Training Centre, Kannanallore were unsure whom to trust. But the kids of Mar Baselias school, Kaithakode were receptive and eager.
  • The working class who assembled at Government SNDP Higher Secondary School, were quicker to explore diaspora*, encryption, and related applications. So were the students of Mar Thoma college, Thiruvalla, Pathanamthitta, who also were interested in the legal issues involved in using diaspora*. The lawyers of the Bar Association, Alappuzha went a step further and talked about whether podmins should scrutinize the content published on their pods and about the jurisdiction of pods.

A session at a middle school in Kaithakode

It cannot be mere coincidence that every kind of person has an opinion about what diaspora* should be, moments after they discover it for the first time. It is exactly the hunger for freedom and individuality which these minds seem to have that the Diaspora Yatra team intends to sate.

To keep up to date with the yatra’s progress, check the Diaspora Yatra site and follow the #diasporayatra tag within diaspora*. We’ll post again on this blog once the yatra has completed.

Darknet - The Darkside: Anthem Hacked – US Health Insurance Provider Leaks 70 Million Records

This post was syndicated from: Darknet - The Darkside and was written by: Darknet. Original post: at Darknet - The Darkside

Anthem Hacked! Everyone is screaming, I was like WTF is Anthem? Turns out it’s part of the 2nd largest health insurance provider in the US (Wellpoint) after United Healthcare – so it’s a pretty big deal with an estimated 70 Million people on its books. Of course according to them, “Anthem was the target of […]

The post…

Read the full post at

Krebs on Security: Banks: Card Thieves Hit White Lodging Again

This post was syndicated from: Krebs on Security and was written by: BrianKrebs. Original post: at Krebs on Security

For the second time in a year, multiple financial institutions are complaining of fraud on customer credit and debit cards that were all recently used at a string of Marriott properties run by hotel franchise firm White Lodging Services Corporation. White Lodging says it is investigating, but that so far it has found no signs of a new breach.

On January 31, 2014, this author first reported evidence of a breach at some White Lodging locations. The Merrillville, Ind.-based company confirmed a breach three days later, saying hackers had installed malicious software on cash registers in food and beverage outlets at 14 locations nationwide, and that the intruders had been stealing customer card data from these outlets for approximately nine months.

Fast-forward to late January 2015, and KrebsOnSecurity again began hearing from several financial institutions who had traced a pattern of counterfeit card fraud back to accounts that were all used at nearly a dozen Marriott properties across the country.

Banking sources say the cards that were compromised in this most recent incident look like they were stolen from many of the same White Lodging locations implicated in the 2014 breach, including hotels in Austin, Texas, Bedford Park, Ill., Denver, Indianapolis, and Louisville, Kentucky. Those same sources said the compromises appear once again to be tied to hacked cash registers at food and beverage establishments within the White Lodging-run hotels. The legitimate hotel transactions that predated fraudulent card charges elsewhere range from mid-September 2014 to January 2015.

Contacted about the findings, Marriott spokesman Jeff Flaherty said all of the properties cited by the banks as sources of card fraud are run by White Lodging.

“We recently were made aware of the possibility of unusual credit card transactions at a number of hotels operated by one of our franchise management companies,” Flaherty said. “We understand the franchise company is looking into the matter. Because the suspected issue is related to systems that Marriott does not own or control, we do not have additional information to provide.”

I reached out to White Lodging on Jan. 31. In an emailed statement sent today, White Lodging spokesperson Kathleen Sebastian said the company engaged a security firm to investigate the reports, but so far that team has found no indication of a compromise.

“From your inquiry, we have engaged a full forensic audit of the properties in question,” Sebastian wrote. “We appreciate your concern, and we are taking this information very seriously. To this date, we have found no identifiable infection that would lead us to believe a breach has occurred. Our investigation is ongoing.”

Sebastian went on to say that in the past year, White Lodging has adopted a number of new security measures, including the installation of a third-party managed firewall system, dual-factor authentication for critical systems, and “various other systems as guided by our third-party cyber security service. While we have executed additional security protocols, we do not wish to specifically disclose full details of all security measures to the public.”


Flaherty said Marriott is nearing completion of a project to retrofit cash registers at Marriott-run properties with a technology called tokenization, which substitutes card data with placeholder information that has no intrinsic or exploitable value for attackers.

“As this matter involves Marriott hotel brands, we want to provide assurance that Marriott has a long-standing commitment to protect the privacy of the personal information that our guests entrust to us and we will continue to monitor the situation closely,” he said. “Marriott is currently on track to have all our U.S. managed systems fully tokenized within the month or so.”

Pressed on whether White Lodging also was using tokenization, Sebastian said the front desk systems at all White Lodging-managed Marriott properties are fully tokenized, and that payment terminals at other parts of the hotel (including restaurants, bars and gift shops) “are transitioning to tokenization and are scheduled to be fully tokenized by the end of the second quarter.”

Tokenization as a card security solution tends to be most attractive to businesses that must keep customer card numbers on file until the transaction is finalized, such as hotels, bars and rental car services. A January 2015 report by Gartner Inc. fraud analyst Avivah Litan found that at least 50 percent of Level 1 through Level 3 U.S. merchants have already adopted or will adopt tokenization in the next year.

Merchants retain tokens because they need to hang on to a single unique identifier of the customer for things like recurring billing, loyalty programs, and chargebacks and disputes. But experts say tokenization itself does not solve the problem that has fueled most retail card breaches in recent years: Malware remotely installed on point-of-sale devices that steals customer card data before it can be tokenized.
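To make that limitation concrete, here is a minimal Python sketch of the token-vault idea; every name is illustrative, and real tokenization services are considerably more involved. The point-of-sale weakness described above is visible in the last comment: the card number exists in plaintext before `tokenize()` ever runs.

```python
import secrets

# Minimal sketch of a token vault: the merchant keeps only a random
# token, while the vault (typically run by the payment processor)
# maps it back to the real card number (PAN). Names are illustrative.

class TokenVault:
    def __init__(self):
        self._vault = {}  # token -> real PAN

    def tokenize(self, pan: str) -> str:
        token = secrets.token_hex(8)  # random; no relation to the PAN
        self._vault[token] = pan
        return token

    def detokenize(self, token: str) -> str:
        return self._vault[token]

vault = TokenVault()
token = vault.tokenize("4111111111111111")
assert token != "4111111111111111"        # merchant stores only the token
assert vault.detokenize(token) == "4111111111111111"
# The catch: the PAN was handled in plaintext *before* tokenize() ran,
# which is exactly the window point-of-sale malware exploits.
```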

Gartner’s Litan said an alternative and far more secure approach to handling card data involves point-to-point encryption — essentially installing card readers and other technology that ensures customer card data is never transmitted in plain text anywhere in the retail environment. But, she said, many businesses have chosen tokenization in favor of encryption because it is cheaper and less complicated to implement in the short run.

“Point-to-point encryption involves upgrading your card readers, because you want the encryption to happen not at the software level — where it can be hacked — but at the hardware level,” Litan said. “But it’s expensive and there aren’t a lot of approved vendors to choose from if you want to pick a vendor who is in compliance” with Payment Card Industry (PCI) standards, violations of which can come with fines and costly audits, she said.

Merchants that adopt point-to-point encryption may also find themselves locked into a single credit card processor, because the encryption technology built into the newer readers often only works with a specific processor, Litan said.

“You end up with vendor or processor lock-in, because now your equipment is locked in to one payment processor, and you can’t easily just change to another processor if you’re later unhappy with that arrangement because that means changing your equipment,” Litan said.

In the end, many businesses — particularly hotels — opt for tokenization because it can dramatically simplify their process of proving compliance with PCI standards. For example, merchants that hold onto customer card data for a period of time until a transaction is finalized may be required to complete a security assessment that demands proof of compliance with some 350 different PCI requirements, whereas merchants that do not store electronic cardholder data or have substituted that process through tokenization likely have about 90 percent fewer PCI requirements to satisfy.

“In a lot of cases, it’s really less about security and more about simplifying PCI compliance to reduce the scope of the audit, because you get big rewards when you don’t store credit card data,” Litan said. “Unfortunately, the PCI standards don’t have the same kind of rewards when it comes to securing card data in-transit [across a retailer’s internal network and systems], which is what point-to-point encryption addresses.”

Merchants in the United States are gradually shifting to installing card readers that can accommodate more secure chip cards that adhere to the Europay, MasterCard and Visa or EMV standard. These chip cards are designed to be far more expensive and difficult for thieves to counterfeit than regular credit cards that most U.S. consumers have in their wallets. Non-chip cards store cardholder data on a magnetic stripe, which can be trivially copied by point-of-sale malware.

Magnetic-stripe based cards are the primary target for hackers who have been breaking into retailers like Target and Home Depot and installing malicious software on the cash registers: The data is quite valuable to crooks because it can be sold to thieves who encode the information onto new plastic and go shopping at big box stores for stuff they can easily resell for cash (think high-dollar gift cards and electronics).

Newer, EMV/chip-based card readers can enable a range of additional payment and security options, including point-to-point encryption and mobile payments, such as Apple’s new Apple Pay system. But integrating EMV with existing tokenization schemes can also present challenges for merchants. For example, Apple Pay uses a separate EMV tokenization process.

“This means that merchants who use their own tokenization system and choose to accept Apple Pay payments will end up with multiple tokens for one card number, defeating a major reason why many merchants adopted tokenization in the first place,” Litan said.

TorrentFreak: Huge Security Flaw Leaks VPN Users’ Real IP-Addresses

This post was syndicated from: TorrentFreak and was written by: Ernesto. Original post: at TorrentFreak

The Snowden revelations have made it clear that online privacy is certainly not a given.

Just a few days ago we learned that the Canadian Government tracked visitors of dozens of popular file-sharing sites.

As these stories make headlines around the world interest in anonymity services such as VPNs has increased, as even regular Internet users don’t like the idea of being spied on.

Unfortunately, even the best VPN services can’t guarantee to be 100% secure. This week a very concerning security flaw revealed that it’s easy to see the real IP-addresses of many VPN users through a WebRTC feature.

With a few lines of code websites can make requests to STUN servers and log users’ VPN IP-address and the “hidden” home IP-address, as well as local network addresses.
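Those “few lines of code” in a browser go through WebRTC’s RTCPeerConnection, which sends a STUN Binding Request (RFC 5389) from every local interface, including the real one behind the VPN. As a rough illustration of what that request looks like on the wire, here is a hedged Python sketch that only builds the packet; the server named in the comment is a well-known public example, not one used by the demo:

```python
import os
import struct

# Sketch of a STUN Binding Request header per RFC 5389. WebRTC sends
# requests like this to discover each interface's reachable address,
# which is how the real IP behind a VPN can leak.

STUN_BINDING_REQUEST = 0x0001
STUN_MAGIC_COOKIE = 0x2112A442

def build_binding_request() -> bytes:
    transaction_id = os.urandom(12)
    # message type (2 bytes), length (2 bytes, zero: no attributes),
    # magic cookie (4 bytes), then the 12-byte transaction ID
    return struct.pack("!HHI", STUN_BINDING_REQUEST, 0,
                       STUN_MAGIC_COOKIE) + transaction_id

packet = build_binding_request()
assert len(packet) == 20  # fixed 20-byte STUN header
# To actually learn an address you would send this over UDP to a STUN
# server (e.g. stun.l.google.com:19302) and parse the
# XOR-MAPPED-ADDRESS attribute in the response.
```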

The vulnerability affects WebRTC-supporting browsers including Firefox and Chrome and appears to be limited to Windows machines.

A demo published on GitHub by developer Daniel Roesler allows people to check if they are affected by the security flaw.

IP-address leak

The demo claims that browser plugins can’t block the vulnerability, but luckily this isn’t entirely true. There are several easy fixes available to patch the security hole.

Chrome users can install the WebRTC block extension or ScriptSafe, which both reportedly block the vulnerability.

Firefox users should be able to block the request with the NoScript addon. Alternatively, they can type “about:config” in the address bar and set the “media.peerconnection.enabled” setting to false.


TF asked various VPN providers to share their thoughts and tips on the vulnerability. Private Internet Access told us that they are currently investigating the issue to see what they can do on their end to address it.

TorGuard informed us that they issued a warning in a blog post along with instructions on how to stop the browser leak. Ben Van Der Pelt, TorGuard’s CEO, further informed us that tunneling the VPN through a router is another fix.

“Perhaps the best way to be protected from WebRTC and similar vulnerabilities is to run the VPN tunnel directly on the router. This allows the user to be connected to a VPN directly via Wi-Fi, leaving no possibility of a rogue script bypassing a software VPN tunnel and finding one’s real IP,” Van der Pelt says.

“During our testing Windows users who were connected by way of a VPN router were not vulnerable to WebRTC IP leaks even without any browser fixes,” he adds.

While the fixes above are all reported to work, the leak is a reminder that anonymity should never be taken for granted.

As is often the case with these types of vulnerabilities, VPN and proxy users should regularly check whether their connection is secure. This also includes testing against DNS leaks and proxy vulnerabilities.

Source: TorrentFreak, for the latest info on copyright, file-sharing, torrent sites and anonymous VPN services.

Errata Security: Needs more Hitler

This post was syndicated from: Errata Security and was written by: Robert Graham. Original post: at Errata Security

Godwin’s Law does not apply to every mention of Hitler, as the Wikipedia page explains:

Godwin’s law applies especially to inappropriate, inordinate, or hyperbolic comparisons with Nazis. The law would not apply to mainstays of Nazi Germany such as genocide, eugenics, racial superiority, or to a discussion of other totalitarian regimes, if that was the explicit topic of conversation, because a Nazi comparison in those circumstances may be appropriate.

Last week, I wrote a piece about how President Obama’s proposed cyber laws were creating a Cyber Police State. The explicit topic of my conversation is totalitarian regimes.

This week, during the State of the Union address, I compared the text of Mein Kampf to the text of President Obama’s speech. Specifically, Mein Kampf said this:

The state must declare the child to be the most precious treasure of the people. As long as the government is perceived as working for the benefit of the children, the people will happily endure almost any curtailment of liberty and almost any deprivation.

Obama’s speech in support of his cyber legislation says this:

No foreign nation, no hacker, should be able to shut down our networks, steal our trade secrets, or invade the privacy of American families, especially our kids. We are making sure our government integrates intelligence to combat cyber threats, just as we have done to combat terrorism. And tonight, I urge this Congress to finally pass the legislation we need to better meet the evolving threat of cyber-attacks, combat identity theft, and protect our children’s information.

There is no reason to mention children here. None of the big news stories about hacker attacks have mentioned children. None of the credit cards scandals, or the Sony attack, involved children. Hackers don’t care about children, have never targeted children in the past, and are unlikely to target children in the future. Children are wholly irrelevant to the discussion.

The only reason children are mentioned in this section is for the exact reason described by Hitler. And this ties directly back to my original thesis that these cyber laws will create a cyber police state.

I didn’t immediately reach for Hitler to describe this problem. I started searching for quotes from the Simpsons, whose character Helen Lovejoy satirizes this situation by screaming at inappropriate times “Why won’t anybody think of the children“. But, while googling, I landed on the Mein Kampf quote first. Since it so perfectly describes this situation, I chose it instead of the Simpsons example.

Famous lefty Michael Moore compared the response to 9/11 with that of the Reichstag fire that catapulted the Nazi party into power. After the fire, Hitler was able to suspend civil liberties in order to fight the communists. This is appropriate, and not an application of Godwin’s law. Many claimed this was hyperbole, because the Patriot Act didn’t go as far as the Germans in suspending civil liberties, or handing power to President Bush. But that’s not the point — the point is that in both cases we were talking about the same sort of situation.

What keeps our country free is the lesson of totalitarian countries like Nazi Germany, Stalinist Russia, and Maoist China. We need to be regularly reminded of those lessons. When the situations are similar, albeit not as extreme, somebody needs to stand up and make that comparison. That’s how we prevent these situations from becoming as extreme.

In other words, we need more lessons from Hitler. When comparisons are appropriate, when we are talking about totalitarianism, we shouldn’t let accusations of “Godwin’s Law” drown them out.

Cory Doctorow Rejoins EFF to Eradicate DRM Everywhere

This post was syndicated from: and was written by: ris. Original post: at

The Electronic Frontier Foundation has announced that Cory Doctorow has rejoined the organization “to battle the pervasive use of dangerous digital rights management (DRM) technologies that threaten users’ security and privacy, distort markets, confiscate public rights, and undermine innovation.”

Darknet - The Darkside: Gitrob – Scan Github For Sensitive Files

This post was syndicated from: Darknet - The Darkside and was written by: Darknet. Original post: at Darknet - The Darkside

Developers generally like to share their code, and many of them do so by open sourcing it on GitHub, a social code hosting and collaboration service. Many companies also use GitHub as a convenient place to host both private and public code repositories by creating GitHub organizations to which employees can be added. Sometimes employees might…

Read the full post at

TorrentFreak: MPAA Links Online Piracy to Obama’s Cybersecurity Plan

This post was syndicated from: TorrentFreak and was written by: Ernesto. Original post: at TorrentFreak

The unprecedented Sony hack has put cybersecurity at the top of the political agenda in the United States.

Just last week Representative Ruppersberger re-introduced the controversial CISPA bill and yesterday President Obama announced his new cybersecurity plans.

New measures are needed to “investigate, disrupt and prosecute” cybercrime, as recent events have shown that criminals can and will exploit current weaknesses, according to the White House.

“In this interconnected, digital world, there are going to be opportunities for hackers to engage in cyber assaults both in the private sector and the public sector,” President Obama notes.

Together with Congress the Obama administration hopes to draft a new bill that will address these concerns. Among other things, the new plan aims to improve information sharing between private Internet companies and the Government.

Privacy advocates argue that this kind of data sharing endangers the rights of citizens, who may see more private data falling into the hands of the Government. President Obama, on the other hand, sees it as a necessity to stop attacks such as the Sony breach.

“Because if we don’t put in place the kind of architecture that can prevent these attacks from taking place, this is not just going to be affecting movies, this is going to be affecting our entire economy in ways that are extraordinarily significant,” the President cautions.

With the Sony hack Hollywood played a central role in putting cybersecurity back on the agenda. And although President Obama makes no mention of online piracy, the MPAA is quick to add it to the discussion.

In a statement responding to the new cybersecurity plans, MPAA CEO Chris Dodd notes that because of these criminals certain companies have their “digital products exposed and available online for anyone to loot.”

“That’s why law enforcement must be given the resources they need to police these criminal activities,” Dodd says.

The MPAA appears to conflate the Sony hack with online piracy. It calls upon Congress to keep the interests of Hollywood in mind, and urges private actors including search engines and ISPs to help in curbing the piracy threat.

“… responsible participants in the Internet ecosystem – content creators, search, payment processors, ad networks, ISPs – need to work more closely together to forge initiatives to stop the unlawful spread of illegally-obtained content,” Dodd says.

Hollywood’s effort to frame online piracy as a broader cybersecurity threat is not entirely new.

Last year an entertainment industry backed report claimed that 90 percent of the top pirate sites link to malware or other unwanted software. In addition, two-thirds were said to link to credit card scams.

This report was later cited in a Senate Subcommittee hearing where the MPAA urged lawmakers to take steps so young Americans can be protected from the “numerous hazards on pirate sites.”

Whether a new cybersecurity bill will indeed include anti-piracy measures has yet to be seen. But for the MPAA it may be one of the few positive outcomes of the Sony hack, which exposed some of its best kept secrets in recent weeks.

Source: TorrentFreak, for the latest info on copyright, file-sharing, torrent sites and anonymous VPN services.

Krebs on Security: Toward Better Privacy, Data Breach Laws

This post was syndicated from: Krebs on Security and was written by: BrianKrebs. Original post: at Krebs on Security

President Obama on Monday outlined a proposal that would require companies to inform their customers of a data breach within 30 days of discovering that their information has been hacked. But depending on what is put in and left out of any implementing legislation, the effort could well lead to more voluminous but less useful disclosure. Here are a few thoughts about how a federal breach law could produce fewer but more meaningful notices that may actually help prevent future breaches.

The plan is intended to unify nearly four dozen disparate state data breach disclosure laws into a single federal standard. But as experts quoted in this story from The New York Times rightly note, much rides on whether any federal breach disclosure law is a baseline law that allows states to pass stronger standards.

For example, right now seven states already have so-called “shot-clock” disclosure laws, some more stringent than others; Connecticut requires insurance firms to notify no more than five days after discovering a breach, and California has similar requirements for health providers. Also, at least 14 states and the District of Columbia have laws that permit affected consumers to sue a company for damages in the wake of a breach. What’s more, many states define “personal information” differently and hence have different triggers for what requires a company to disclose. For an excellent breakdown of the various data breach disclosure laws, see this analysis by BakerHostetler (PDF).

Leaving aside the weighty question of federal preemption, I’d like to see a discussion here and elsewhere about a requirement which mandates that companies disclose how they got breached. Naturally, we wouldn’t expect companies to disclose the specific technologies they’re using in a public breach document. Additionally, forensics firms called in to investigate aren’t always able to precisely pinpoint the cause or source of the breach.

But this information could be publicly shared in a timely way when it’s available, and appropriately anonymized. It’s unfortunate that while we’ve heard time and again about credit card breaches at retail establishments, we know very little about how those organizations were breached in the first place. A requirement to share the “how” of the hack, when it’s known, in an anonymized form broken down by industry would be helpful.

I also want to address the issue of encryption. Many security experts insist that there ought to be a carve-out that would allow companies to avoid disclosure requirements in a breach that exposes properly encrypted sensitive data (i.e., the intruders did not also manage to steal the private key needed to decrypt the data). While a broader adoption of encryption could help lessen the impact of breaches, this exception is in some form already included in nearly all four dozen state data breach disclosure laws, and it doesn’t seem to have lessened the frequency of breach alerts.

I suspect there are several reasons for this. The most obvious is that few organizations that suffer a breach are encrypting their sensitive data, or that they’re doing so sloppily (exposing the encryption key, for example). Also, most states have provisions in their breach disclosure laws that require a “risk of harm” analysis, which forces the victim organization to determine whether the breach is reasonably likely to result in harm (such as identity theft) to the affected consumer.

This is important because many of these breaches are the result of thieves breaking into a Web site database and stealing passwords, and in far too many cases the stolen passwords are not encrypted but instead “hashed” using a relatively weak and easy-to-crack approach such as MD5 or SHA-1. For a good basic breakdown on the difference between encrypting data and hashing it, check out this post. Also, for a primer on far more secure alternatives to cryptographic hashes, see my 2012 interview with Thomas Ptacek, How Companies Can Beef Up Password Security.
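The practical difference is speed and salting: a bare MD5 digest is identical for every user who picked the same password, and attackers can test billions of guesses per second against it, whereas a salted, iterated construction such as PBKDF2 (available in Python’s standard library; bcrypt and scrypt are the stronger options discussed in the Ptacek interview) deliberately makes each guess expensive. A minimal sketch of the contrast:

```python
import hashlib
import os

password = b"hunter2"

# Fast, unsalted MD5: every user with this password gets the exact
# same digest, so one cracked hash cracks every matching account.
weak = hashlib.md5(password).hexdigest()

# Salted, deliberately slow PBKDF2 from the standard library: the
# per-user random salt makes identical passwords hash differently,
# and the iteration count makes each attacker guess expensive.
salt = os.urandom(16)
strong = hashlib.pbkdf2_hmac("sha256", password, salt, 200_000)

print(weak)          # same value for every user who chose "hunter2"
print(strong.hex())  # differs per user because of the salt
```

The point is not that PBKDF2 is unbreakable, but that it changes the economics: the same hardware that tries billions of MD5 guesses per second manages only thousands of salted, iterated guesses.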

As long as we’re dealing with laws to help companies shore up their security, I would very much like to see some kind of legislative approach that includes ways to incentivize more companies to deploy two-factor and two step authentication — not just for their customers, but just as crucially (if not more so) for their employees.


President Obama also said he would propose the Student Data Privacy Act, which, according to The Times, would prohibit technology firms from profiting from information collected in schools as teachers adopt tablets, online services and Internet-connected software. The story also noted that the president was touting voluntary agreements by companies to safeguard energy data and to provide easy access to consumer credit scores. While Americans can by law get a free copy of their credit report from each of the three major credit bureaus once per year, most consumers still have to pay to see their credit scores.

These changes would be welcome, but they fall far short of the sorts of revisions we need to the privacy laws in this country, some of which were written in the 1980s and predate even the advent of Web browsing technology. As I’ve discussed at length on this blog, Congress sorely needs to update the Electronic Communications Privacy Act (ECPA), the 1986 statute that was originally designed to protect Americans from Big Brother and from government overreach. Unfortunately, the law is now so outdated that it actually provides legal cover for the very sort of overreach it was designed to prevent. For more on the effort to change the status quo, see

Also, I’d like to see a broader discussion of privacy proposals that cover what companies can and cannot do with all the biometric data they’re collecting from consumers. Companies are tripping over themselves to collect oodles of such potentially sensitive data, and yet we still have no basic principles that say what companies can do with that information, how much they can collect, how they can collect or share it, or how they must protect it.

(There are a handful of exceptions at the state level.) But overall, we’re really lacking any sort of basic protections for that information, and consumers are giving it away every day without fully realizing there are basically zero federal standards for what can or should be done with it.

Coming back to the subject of encryption: Considering how few companies actually make customer data encryption the default approach, it’s discouraging to see elements of this administration criticizing companies for it. There is likely a big showdown coming between the major mobile players and federal investigators over encryption. Apple and Google’s recent decision to introduce default, irrevocable data encryption on all devices powered by their latest operating systems has prompted calls from the U.S. law enforcement community for legislation that would require mobile providers to allow law enforcement officials to bypass that security in criminal investigations.

In October, FBI Director James Comey called on the mobile giants to dump their new encryption policies. Last week, I spoke at a conference in New York where the panel prior to my talk was an address from New York’s top prosecutor, who said he was working with unnamed lawmakers to craft new legal requirements. Last week, Sen. Ron Wyden (D-Ore.) reintroduced a bill that would bar the government from requiring tech companies to build so-called “backdoor” access to their data for law enforcement.

This tension is being felt across the pond as well: British Prime Minister David Cameron also has pledged new anti-terror laws that give U.K. security services the ability to read encrypted communications on mobile devices.

TorrentFreak: Record Labels Try to Force ISP to Disconnect Pirates

This post was syndicated from: TorrentFreak and was written by: Andy. Original post: at TorrentFreak

Half a decade ago the Irish Recorded Music Association (IRMA) ended its legal action against local ISP Eircom when the ISP agreed to implement a new anti-piracy policy against its own subscribers.

The agreement saw IRMA-affiliated labels including Sony, Universal and Warner tracking Eircom subscribers online. Eircom then forwarded warning notices to customers found to be sharing content without permission and agreed to disconnect those who were caught three times.

In a follow-up move IRMA tried to force another ISP, UPC, to implement the same measures. UPC fought back and a 2010 High Court ruling went in the ISP’s favor.

However, a 2012 change in the law emboldened IRMA to have a second bite and now the music group’s case is being heard by the Commercial Court. As before, IRMA wants an injunction issued against UPC forcing it to implement a “three strikes” or similar regime against its customers.

According to the Irish Times, Michael McDowell SC representing the labels said that UPC could come up with its own graduated response, whether it be “two strikes” or “five strikes”.

For its part, UPC appears to be more concerned about the cost of operating such a system rather than the actual introduction of one. UPC has provided estimates for doing so but the labels view the amounts involved as excessive.

Surprisingly, Cian Ferriter SC, for UPC, said the ISP has “no difficulty in handing over information” (on pirates) for the labels to pursue but the company has issues with setting up an “entire system” to deal with the problem.

The stance of UPC seems markedly different from its position during February 2014. At the time the company said that subjecting customers to a graduated response scheme would raise a “serious question of freedom of expression and public policy” and would “demand fair and impartial procedures in the appropriate balancing of rights.”

In the event, however, Mr McDowell said that UPC’s offer was not only new but one that raises concerns over privacy and data protection issues.

IRMA chairman Willie Kavanagh previously said that the Eircom three-strikes scheme had been “remarkably effective,” since only 0.2% of warned users have proceeded to the disconnection stage. Perhaps even more remarkable is that even after four years of the program, Eircom hadn’t disconnected a single customer.

“We are continuing to implement the graduated response process,” a spokesman said last March. “We haven’t, as yet, disconnected anyone.”

IRMA is contractually bound by its agreement with Eircom to pursue UPC and/or other ISPs to implement a graduated response scheme, so expect this one to run either until the bitter end or until UPC caves in. For now the case is scheduled to run for eight days.

Source: TorrentFreak, for the latest info on copyright, file-sharing, torrent sites and anonymous VPN services.

TorrentFreak: Aussie ISPs Rushing Ahead With Anti-Piracy Proposals

This post was syndicated from: TorrentFreak and was written by: Andy. Original post: at TorrentFreak

For years Australian citizens have complained of being treated as second-class citizens by content companies that have failed to make content freely available at a fair price. As a result, millions of Aussies have turned to file-sharing networks for their media fix.

This has given the country somewhat of a reputation on the world stage, which in turn has put intense pressure on the Australian government to do something to reduce unlawful usage.

After years of negotiations between ISPs and entertainment companies went nowhere, last year the government stepped in. ISPs were warned that if they didn’t take voluntary measures to deter and educate pirating subscribers, the government would force a mechanism upon them by law.

With a desire to avoid that option at all costs, the service providers went away with orders to come up with a solution. Just last month Attorney-General George Brandis and Communications Minister Malcolm Turnbull set an April 8 deadline, a tight squeeze considering the years of failed negotiations.

Nevertheless, iiNet, Australia’s second largest ISP, feels that the deadline will be met.

“We will have code; whether or not it gets the rubber stamp remains to be seen,” says iiNet chief regulatory officer Steve Dalby. “Dedicated people are putting in a lot of work drafting documents and putting frameworks together.”

With just 120 days to come up with a solution the government’s deadline is a big ask and Dalby says there are plenty of complications.

“There are issues around privacy, there are issues around appeals. There are issues around costs. There is a lot of work that needs to be done,” he says.

Of course, these are exactly the same issues that caused talks to collapse on a number of occasions in the past. However, in recent months it’s become clear that the government is prepared to accept less stringent measures than the entertainment industries originally wanted. Slowing and disconnecting subscribers is now off the table, for example.

Although there has been no official announcement, it seems likely that the ISPs will offer a notice-and-notice system similar to the one being planned for the UK.

Subscribers will be informed by email that their connections are being used to share content unlawfully and will be politely but firmly asked to stop. An educational program, which advises users where to obtain content legally, is likely to augment the scheme.

Who will pay for all this remains to be seen. ISPs have previously refused to contribute but with the government threatening to impose a code if a suitable one is not presented, compromise could be on the table.

Source: TorrentFreak, for the latest info on copyright, file-sharing, torrent sites and anonymous VPN services.

Клошкодил: An Open Letter to Mr. David Cameron

This post was syndicated from: Клошкодил and was written by: Vasil Kolev. Original post: at Клошкодил

(this is a guest post from Boyan Krosnov, the original is here)

An open letter to Mr. David Cameron,


This letter is in reaction to your statement: “But the question remains: are we going to allow a means of communications which it simply isn’t possible to read? My answer to that question is: ‘No we must not’.”

Mr. Cameron, firstly this letter has nothing to do with recent events in Paris. It is solely addressing the issue of private communications.

End-to-end computer-assisted encryption with ephemeral keys has existed in this world since at least 1977. Even 130 years ago, in 1885, the one-time pad had already been invented. If you don’t understand what these are, then please ask your technical advisers. Essentially, someone with a book (for a one-time pad), a pen and a sheet of paper can encrypt and decrypt secret messages from and to a party located on the other end of the world. They can communicate these messages in public using a variety of low-tech means. For example, they could post innocent-looking messages in the classifieds section of a newspaper. Anyone without the necessary procedures and a copy of the pad would not be able to learn the content of their communication, and if the scheme is implemented correctly, would not even be able to detect that a conversation is taking place. This is not a new development. Even the modern idea of a sneakernet has existed at least as long as the Internet has.
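The mechanics being described are genuinely trivial to implement. A toy sketch of the computer-assisted variant — XOR with a random pad rather than the pen-and-paper lookup, but the same principle:

```python
import os

def otp_encrypt(plaintext: bytes, pad: bytes) -> bytes:
    # XOR each plaintext byte with the corresponding pad byte.
    # The pad must be truly random, at least as long as the message,
    # and never reused; under those conditions the scheme is
    # information-theoretically secure.
    assert len(pad) >= len(plaintext), "pad must cover the whole message"
    return bytes(p ^ k for p, k in zip(plaintext, pad))

# Decryption is the same XOR, since (x ^ k) ^ k == x.
otp_decrypt = otp_encrypt

message = b"meet at dawn"
pad = os.urandom(len(message))  # the shared one-time pad
ciphertext = otp_encrypt(message, pad)
assert otp_decrypt(ciphertext, pad) == message
```

This is a toy, not an operational system (real use requires secure pad generation, distribution and destruction), but it shows why the capability cannot be legislated away: the entire algorithm fits in a few lines.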

Short of inventing a time machine, your goal is unachievable. :)

On a more serious note though, what you are promoting in your speech is scary and deeply immoral.

Since end-to-end encryption exists, I know of only three ways you could try to achieve your goal of total on-demand, and probably retroactive, snoop-ability of communications. These are ineffective, and in some cases even impossible to implement, but here they are anyway:

  • Option 1. to have a backdoor in every end-point device manufactured in the UK or brought in through the UK border. This approach does not prevent a sufficiently dedicated person from building a secure end-point from scratch, like with the one-time pad and newspaper approach I mentioned. Backdooring online services is a sub-case of this.


  • Option 2. to introduce a key escrow requirement for all encrypted communication beginning and ending and maybe even crossing the UK, and also detect and block all encrypted communication, not conforming to the key escrow rules. The latter might be impossible to implement, but that’s a whole other matter.


  • Option 3. Ban encrypted communication altogether. Which would tear down the whole of what you call “the digital economy”, and revert the UK back to a technological state resembling the 80s, while the rest of the world moves on.

The reality of the matter is that all three, apart from being totally ineffective in dealing with the threat of isolated terrorist acts, open the door for massive abuse, not just by the government but also by related and unrelated third parties. Other people have explained this a lot better than I ever could, for example Cory Doctorow in his talk here. To quote just one line: “… but we both understand that if our government decided that weaponizing water-borne parasites was more important than addressing them and curing them, then we would need a new government.”

Let me be clear on one point: We don’t trust nation states with our private thoughts and conversations. We need private communications not just for the paranoid, perverts and criminals, but for businesses, law enforcement, journalists and for regular every-day private conversations between friends and family members.

Your scheme would ruin private communication for all of us. The bad guys are already using secure and undetectable communication media.

Privacy is the opposite of total surveillance. You can’t have both. So, unfortunately for the goals you outlined, we need working and secure private communication for everyone in the world to use, regardless of their sex, skin color, sexual orientation, age, religious views (or lack thereof), wealth, social status or intent. Your scheme for a total surveillance state must be stopped.

With good meaning and respect,
Boyan Krosnov
Sofia, 2015-01-13

(c) 2015, Boyan Krosnov, public domain