Posts tagged ‘Privacy’

Schneier on Security: Lessons from the Sony Hack

This post was syndicated from Schneier on Security and was written by schneier.

Earlier this month, a mysterious group that calls itself Guardians of Peace hacked into Sony Pictures Entertainment’s computer systems and began revealing many of the Hollywood studio’s best-kept secrets, from details about unreleased movies to embarrassing emails (notably some racist notes from Sony bigwigs about President Barack Obama’s presumed movie-watching preferences) to the personnel data of employees, including salaries and performance reviews. The Federal Bureau of Investigation now says it has evidence that North Korea was behind the attack, and Sony Pictures pulled its planned release of “The Interview,” a satire targeting that country’s dictator, after the hackers made some ridiculous threats about terrorist violence.

Your reaction to the massive hacking of such a prominent company will depend on whether you’re fluent in information-technology security. If you’re not, you’re probably wondering how in the world this could happen. If you are, you’re aware that this could happen to any company (though it is still amazing that Sony made it so easy).

To understand any given episode of hacking, you need to understand who your adversary is. I’ve spent decades dealing with Internet hackers (as I do now at my current firm), and I’ve learned to separate opportunistic attacks from targeted ones.

You can characterize attackers along two axes: skill and focus. Most attacks are low-skill and low-focus – people using common hacking tools against thousands of networks world-wide. These low-end attacks include sending spam out to millions of email addresses, hoping that someone will fall for it and click on a poisoned link. I think of them as the background radiation of the Internet.

High-skill, low-focus attacks are more serious. These include the more sophisticated attacks using newly discovered “zero-day” vulnerabilities in software, systems and networks. This is the sort of attack that affected Target, J.P. Morgan Chase and most of the other commercial networks that you’ve heard about in the past year or so.

But even scarier are the high-skill, high-focus attacks – the type that hit Sony. This includes sophisticated attacks seemingly run by national intelligence agencies, using such spying tools as Regin and Flame, which many in the IT world suspect were created by the U.S.; Turla, a piece of malware that many blame on the Russian government; and a huge snooping effort called GhostNet, which spied on the Dalai Lama and Asian governments, leading many of my colleagues to blame China. (We’re mostly guessing about the origins of these attacks; governments refuse to comment on such issues.) China has also been accused of trying to hack into the New York Times in 2010, and in May, Attorney General Eric Holder announced the indictment of five Chinese military officials for cyberattacks against U.S. corporations.

This category also includes private actors, including the hacker group known as Anonymous, which mounted a Sony-style attack against the Internet-security firm HBGary Federal, and the unknown hackers who stole racy celebrity photos from Apple’s iCloud and posted them. If you’ve heard the IT-security buzz phrase “advanced persistent threat,” this is it.

There is a key difference among these kinds of hacking. In the first two categories, the attacker is an opportunist. The hackers who penetrated Home Depot’s networks didn’t seem to care much about Home Depot; they just wanted a large database of credit-card numbers. Any large retailer would do.

But a skilled, determined attacker wants to attack a specific victim. The reasons may be political: to hurt a government or leader enmeshed in a geopolitical battle. Or ethical: to punish an industry that the hacker abhors, like big oil or big pharma. Or maybe the victim is just a company that hackers love to hate. (Sony falls into this category: It has been infuriating hackers since 2005, when the company put malicious software on its CDs in a failed attempt to prevent copying.)

Low-focus attacks are easier to defend against: If Home Depot’s systems had been better protected, the hackers would have just moved on to an easier target. With attackers who are highly skilled and highly focused, however, what matters is whether a targeted company’s security is superior to the attacker’s skills, not just to the security measures of other companies. Often, it isn’t. We’re much better at such relative security than we are at absolute security.

That is why security experts aren’t surprised by the Sony story. We know people who do penetration testing for a living – real, no-holds-barred attacks that mimic a full-on assault by a dogged, expert attacker – and we know that the expert always gets in. Against a sufficiently skilled, funded and motivated attacker, all networks are vulnerable. But good security makes many kinds of attack harder, costlier and riskier. Against attackers who aren’t sufficiently skilled, good security may protect you completely.

It is hard to put a dollar value on security that is strong enough to assure you that your embarrassing emails and personnel information won’t end up posted online somewhere, but Sony clearly failed here. Its security turned out to be subpar. It didn’t have to leave so much information exposed, and it didn’t have to be so slow in detecting the breach, which gave the attackers free rein to wander about and take so much stuff.

For those worried that what happened to Sony could happen to you, I have two pieces of advice. The first is for organizations: take this stuff seriously. Security is a combination of protection, detection and response. You need prevention to defend against low-focus attacks and to make targeted attacks harder. You need detection to spot the attackers who inevitably get through. And you need response to minimize the damage, restore security and manage the fallout.

The time to start is before the attack hits: Sony would have fared much better if its executives simply hadn’t made racist jokes about Mr. Obama or insulted its stars – or if its response systems had been agile enough to kick the hackers out before they grabbed everything.

My second piece of advice is for individuals. The worst invasion of privacy from the Sony hack didn’t happen to the executives or the stars; it happened to the blameless random employees who were just using their company’s email system. Because of that, they’ve had their most personal conversations – gossip, medical conditions, love lives – exposed. The press may not have divulged this information, but their friends and relatives peeked at it. Hundreds of personal tragedies must be unfolding right now.

This could be any of us. We have no choice but to entrust companies with our intimate conversations: on email, on Facebook, by text and so on. We have no choice but to entrust the retailers that we use with our financial details. And we have little choice but to use cloud services such as iCloud and Google Docs.

So be smart: Understand the risks. Know that your data are vulnerable. Opt out when you can. And agitate for government intervention to ensure that organizations protect your data as well as you would. Like many areas of our hyper-technical world, this isn’t something markets can fix.

This essay previously appeared on the Wall Street Journal CIO Journal.

TorrentFreak: Researchers Make BitTorrent Anonymous and Impossible to Shut Down

This post was syndicated from TorrentFreak and was written by Ernesto.

The Pirate Bay shutdown has once again shown how vulnerable the BitTorrent ‘landscape’ is to disruptions.

With a single raid the largest torrent site on the Internet was pulled offline, dragging down several other popular BitTorrent services with it.

A team of researchers at Delft University of Technology has found a way to address this problem. With Tribler they’ve developed a robust BitTorrent client that doesn’t rely on central servers. Instead, it’s designed to keep BitTorrent alive, even when all torrent search engines, indexes and trackers are pulled offline.

“Tribler makes BitTorrent anonymous and impossible to shut down,” Tribler’s lead researcher Dr. Pouwelse tells TF.

“Recent events show that governments do not hesitate to block Twitter, raid websites, confiscate servers and steal domain names. The Tribler team has been working for 10 years to prepare for the age of server-less solutions and aggressive suppressors.”

To top that, the most recent version of Tribler, released today, also offers anonymity to its users through a custom-built Tor-like network. This allows users to share and publish files without broadcasting their IP-addresses to the rest of the world.

“The public was beginning to lose the battle for Internet freedom, but today we are proud to be able to present an attack-resilient and censorship-resilient infrastructure for publishing,” Dr. Pouwelse says.

After thorough tests of the anonymity feature earlier this year, it’s now built into the latest release. Tribler implemented a Tor-like onion routing network which hides who is seeding or sharing files. Users can vary the number of “hops” the client uses to increase anonymity.

“Tribler creates a new dedicated network for anonymity that is in no way connected to the main Tor network. By using Tribler you become part of a Tor-like network and help others become anonymous,” Dr. Pouwelse says.

“That means you no longer have any exposure in any swarm, either downloading or seeding,” he adds.
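Conceptually, the hop mechanism works like onion routing: the sender wraps the data in one encryption layer per hop, and each relay peels off exactly one layer, so no single hop sees both the sender and the plaintext. Here is a toy sketch of that layering, using XOR as a stand-in for real per-hop encryption – Tribler’s actual protocol is, of course, far more sophisticated:

```python
def wrap(payload, hop_keys):
    """Sender side: add one layer per hop, innermost layer first.
    XOR is only a stand-in for real encryption in this toy."""
    for key in reversed(hop_keys):
        payload = bytes(b ^ key for b in payload)
    return payload

def relay(payload, key):
    """Each relay peels exactly one layer before forwarding."""
    return bytes(b ^ key for b in payload)

keys = [3, 5, 7]                  # one toy key per hop
onion = wrap(b"block data", keys)
for k in keys:                    # the data traverses the circuit
    onion = relay(onion, k)
# After the last hop, the original payload emerges.
```

Adding more hops (Tribler’s adjustable privacy setting) simply means more layers – and more relays that must cooperate before anyone can link the sender to the data.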

(Screenshot: Tribler anonymous downloading in action – select your privacy level for each torrent)

The downside to the increase in privacy is higher bandwidth usage. After all, users themselves also become proxies and have to relay the transfers of others. In addition, the anonymity feature may also slow down transfer speeds depending on how much other users are willing to share.

“We are very curious to see how fast anonymous downloads will be. It all depends on how social people are, meaning, if they leave Tribler running and help others automatically to become anonymous. If a lot of Tribler users turn out to be sharing and caring, the speed will be sufficient for a nice downloading experience,” Pouwelse says.

Another key feature of Tribler is decentralization. Users can search for files from within the application, which finds torrents through other peers instead of a central server. And if a tracker goes offline, the torrent will continue to download with the help of other users too.

The same decentralization principle applies to spam control. Where most torrent sites have a team of moderators to delete viruses, malware and fake files, Tribler uses user-generated “channels” which can be “liked” by others. If more people like a channel, the associated torrents get a boost in search results.
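As a rough sketch, that ranking boost amounts to sorting channels by community approval. The channel names and like counts below are made up for illustration; Tribler’s real reputation system is more involved:

```python
def rank_channels(channels):
    """Order channels by like count - a simplified stand-in for
    Tribler's reputation-weighted boost in search results."""
    return sorted(channels, key=lambda c: c["likes"], reverse=True)

ranked = rank_channels([
    {"name": "open-movies", "likes": 87},
    {"name": "fakes-and-spam", "likes": 2},
    {"name": "linux-isos", "likes": 420},
])
```

The effect is that channels full of fakes and malware, which few users like, naturally sink to the bottom without any central moderation team.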


Overall, the main goal of the university project is to offer a counterweight to the increased suppression and privacy violations the Internet is facing. Supported by millions of euros in taxpayer money, the Tribler team is confident that it can make the Internet a bit safer for torrent users.

“The Internet is turning into a privacy nightmare. There are very few initiatives that use strong encryption and onion routing to offer real privacy. Even fewer teams have the resources, the energy, technical skills and scientific know-how to take on the Big and Powerful for a few years,” Pouwelse says.

After the Pirate Bay raid last week, Tribler enjoyed a 30% increase in users, and the team hopes this growth will continue during the weeks to come.

Those who want to give it a spin are welcome to download Tribler here. It’s completely open source, with versions for Windows, Mac and Linux. In addition, the Tribler team also invites researchers to join the project.

Source: TorrentFreak, for the latest info on copyright, file-sharing and anonymous VPN services.

The Hacker Factor Blog: C is for Cookie

This post was syndicated from The Hacker Factor Blog and was written by The Hacker Factor Blog.

When people are banned for uploading prohibited content to FotoForensics, they have the option to contest the ban. They can fill out an online unban request form, or they can send an email request.

The problem with unban requests via email is that users forget to include information that I require in order to identify the ban. Right now, there are about 4,000 active bans on the site. Without some basic information, I’ll be unable to match the ban to the user’s unban request.

For this reason, we released a web-based contact form last March. When people fill out the unban form, it automatically gathers information that is needed to identify the user’s ban. This includes the user’s network address, user-agent string, and other header information. (This is passive, not active; we do not need to run any client-side code to do this. The user is already providing this information with each web request.) With this web form, users just ask to be unbanned and the server automatically identifies the ban rule. This way, we have all the information we need.
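For illustration, here is a minimal sketch of that kind of passive collection in Python, reading values a WSGI server already has on hand. The field names and example values are hypothetical, not FotoForensics’ actual code:

```python
def collect_request_info(environ):
    """Pull identifying details out of a WSGI environ dict.

    This is passive, not active: the browser already sends these
    values with every request, so no client-side code has to run.
    """
    return {
        "remote_addr": environ.get("REMOTE_ADDR", ""),
        "user_agent": environ.get("HTTP_USER_AGENT", ""),
        "accept_language": environ.get("HTTP_ACCEPT_LANGUAGE", ""),
    }

# Example environ, shaped the way a WSGI server would supply it:
info = collect_request_info({
    "REMOTE_ADDR": "203.0.113.7",
    "HTTP_USER_AGENT": "Mozilla/5.0",
    "HTTP_ACCEPT_LANGUAGE": "en-US",
})
```

Attaching a dictionary like this to the unban request is all it takes to match the request against the ban rule that triggered.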

What starts with the letter “C”?

A few months ago, I banned someone who decided to send me an unban request in an email. He did not use the online form because, as he put it, “Your site requires cookies.” At the time, I thought he was just a nut-job. Not because he was worried about web cookies, but because my FotoForensics site doesn’t use cookies.

When I first designed FotoForensics back in 2012, I used a default server installation. By default, Apache + PHP enables web cookies for tracking sessions. Since the public (HTTP) web site does not use sessions, I disabled cookies. I think they were enabled for the first few months, but they haven’t been used for over two years.

Keep in mind, I am only talking about the public FotoForensics web site. (The blog software at hackerfactor.com sets cookies, but it doesn’t use them unless you log in… and I disabled the login interface since nobody besides me needs access.) Also, the private FotoForensics site (used by admins) uses HTTPS and does use cookies for tracking login sessions. But the public HTTP FotoForensics site does not use cookies. To confirm this, you can use the HttpFox extension for Firefox or the standalone Wireshark sniffer. Both of these network analyzers show the entire HTTP headers sent between the web browser and the FotoForensics web server. Neither should show any cookies being sent.

Cookie! Cookie! Cookie starts with “C”!

Web cookies are a cute way to save state between web requests. These short character sequences are sent from the web site to the browser and then returned by the browser during subsequent requests. The browser does not modify the cookie’s contents (without special JavaScript code); the browser only returns data that the server sent it.

The browser associates the cookie with a web site. The next time the browser contacts the site, it uploads the cookie. With any request, the server may change the cookie value.

The network flow typically looks like:

  1. Web browser connects to server and says “give me this web page”.
  2. Server provides the page and says “here’s a cookie!”
  3. Browser then requests the “CSS” style information and says “and here’s the cookie you gave me.”
  4. Server returns the CSS information. The server also knows, based on the unique cookie, that this data is going specifically to you and not to just “anybody”.
  5. Browser then requests each picture on the page. With each picture, it also says “and here’s the cookie!”
  6. Server sees each picture request and the unique tracking cookie and returns each picture.
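Python’s standard library can illustrate steps 2 and 3 – the server mints a Set-Cookie value, and the browser echoes the raw cookie back unchanged on later requests. The `session=abc123` value is just an example:

```python
from http.cookies import SimpleCookie

# Step 2: the server mints a cookie for its first response.
server_cookie = SimpleCookie()
server_cookie["session"] = "abc123"
set_cookie_value = server_cookie["session"].OutputString()

# Steps 3-6: the browser returns the raw "Cookie:" header verbatim
# with every subsequent request; the server parses it back out.
returned = SimpleCookie()
returned.load("session=abc123")
session_id = returned["session"].value
```

The key property for the rest of this story: the browser never invents or edits this value on its own (absent JavaScript); it only echoes back what some server gave it.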

Some cookies are used for uniquely tracking users. Other cookies contain configuration settings for that web site.

The one important aspect about cookies is that they do not span domains. If your browser receives a cookie from “google.com”, then it will only send it to “google.com”. Your browser won’t send the “google.com” cookie to reddit.com, fotoforensics.com, or any other web site. Cookies only go back to the domain that generated them.

What else starts with “C”?

Back to my cookie issue…

We are currently receiving about one unban request every 1-3 weeks. However, two of the last four unban requests have included cookies. This is really odd since the cookies did not come from my site.

I finally started tracking this problem. Specifically, I have been looking for web browsers that upload cookies that didn’t come from me. For example, on 2014-12-15, FotoForensics received 909 unique file uploads and 1,253 total uploads. The site was accessed by 4,738 users. (It was a relatively slow day.) Of all of those, a total of 33 requests included cookies. (Less than 1%.)
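The audit itself can be trivial. On a site that never sets cookies, any request carrying a Cookie header is suspect, so a first pass is just a filter over parsed log entries. This is a sketch, not my actual tooling, and the log entries are invented:

```python
def unexpected_cookie_requests(requests):
    """On a site that never sets cookies, any request carrying a
    Cookie header is suspect: the cookie was injected by malware,
    a toolbar, or something in the network path."""
    return [r for r in requests if r.get("Cookie")]

day_log = [
    {"path": "/upload"},
    {"path": "/", "Cookie": "__utma=1.2.3; mindsparktb=on"},
    {"path": "/analysis", "Cookie": None},
]
flagged = unexpected_cookie_requests(day_log)
```

Run over a full day of traffic, this is how the 33-requests-out-of-thousands figure falls out.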

I started to look over the cookies to see if there was anything consistent.

  • Some cookies really look like Google Analytics. I see the __utma, __utmb, __utmc, and __utmz cookie values.

  • Some cookies are clearly marketing trackers. For example, one person’s cookie included a “mindsparktb” value. That’s Mindspark.com — an online advertiser. That cookie even mentioned something called “TOOLBAR_CLEANER”. That’s known malware by Mindspark. Another person’s cookie said “SUPER-CRSRDR”. That’s associated with another ad-based computer virus.

    Basically, both of these people have web browsers that are infected with ad-based viruses. Every web site they visit will have words underlined with links to ads. (You should only see 6 hyperlinks in this entire blog entry — 3 near the beginning, 2 in this section, and 1 at the end. If you see more hyperlinked words, then you’re infected. Those extra ad links are not coming from me! They are coming from a virus that is installed on your computer.) The people who supplied Google Analytics cookie data could also be infected with malware.

  • There’s a browser plugin called “ImTranslator” that adds in cookies when it translates pages.
  • A few of the cookies really look suspicious… It almost looks like their ISP, or someone in the middle of the network transfer, may be inserting tracking cookies. I’ll need more data before I can determine if this is specific to certain ISPs in Saudi Arabia and the Czech Republic, or something else.

Cookies should never be sent to the wrong domain. It should never happen. It isn’t like it’s an accident — the software in all of these browsers explicitly forbids it. I ran these observations by a few of my friends (SM, JK, BT). They all reached the same conclusion: there’s no legitimate reason for this to be happening. We were able to come up with three possible scenarios:

  • Option #1: The web browsers are infected with one or more viruses and they are inserting cookies incorrectly.

  • Option #2: The browser is using a network connection (ISP or proxy network) that is tagging web traffic and filtering out the cookies prior to forwarding packets to my service.
  • Option #3: The user had previously accessed my site through a proxy that was adding tracking cookies. Later, the user came to my site without the proxy and ended up sending the cookies. Without the proxy to intercept, there was nothing to stop me from seeing the cookies that the tagging proxy had associated with my web site.

Good enough for me

With most of the cookies that I am seeing, it really looks like a user with an infected computer (option #1). This raises the next question: what do I do about it?

On one hand, I want to tell these users that they have a problem. I could easily configure my site to inform users when I detect unexpected cookies. I could even create a special web page for people who “just want to check”. Seriously: this is easy to make. I could warn them that they may have malware installed on their computers.

But then there’s the “no good deed” issue. I’m sure that some people won’t distinguish detection from cause. They will blame me for infecting their computers. Or worse: they will beg me to help them de-worm their systems. (I don’t work for individuals for a reason: individuals are crazy. If they don’t accuse you of creating the problem, then they’ll blame you for failing to read their minds.)

Or maybe there is something else going on that I’m not seeing.

Schneier on Security: Over 700 Million People Taking Steps to Avoid NSA Surveillance

This post was syndicated from Schneier on Security and was written by schneier.

There’s a new international survey on Internet security and trust, of “23,376 Internet users in 24 countries,” including “Australia, Brazil, Canada, China, Egypt, France, Germany, Great Britain, Hong Kong, India, Indonesia, Italy, Japan, Kenya, Mexico, Nigeria, Pakistan, Poland, South Africa, South Korea, Sweden, Tunisia, Turkey and the United States.” Amongst the findings, 60% of Internet users have heard of Edward Snowden, and 39% of those “have taken steps to protect their online privacy and security as a result of his revelations.”

The press is mostly spinning this as evidence that Snowden has not had an effect: “merely 39%,” “only 39%,” and so on. (Note that these articles are completely misunderstanding the data. It’s not 39% of people who are taking steps to protect their privacy post-Snowden, it’s 39% of the 60% of Internet users — which is not everybody — who have heard of him. So it’s much less than 39%.)

Even so, I disagree with the “Edward Snowden Revelations Not Having Much Impact on Internet Users” headline. He’s having an enormous impact. I ran the actual numbers country by country, combining data on Internet penetration with data from this survey. Multiplying everything out, I calculate that 706 million people have changed their behavior on the Internet because of what the NSA and GCHQ are doing. (For example, 17% of Indonesians use the Internet, 64% of them have heard of Snowden and 62% of them have taken steps to protect their privacy, which equals 17 million people out of its total 250-million population.)

Note that the countries in this survey only cover 4.7 billion out of a total 7 billion world population. Taking the conservative estimates that 20% of the remaining population uses the Internet, 40% of them have heard of Snowden, and 25% of those have done something about it, that’s an additional 46 million people around the world.
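The arithmetic is easy to reproduce. All the percentages below come from the survey figures and the conservative guesses quoted in the text:

```python
# Indonesia example: population x internet penetration
# x heard-of-Snowden x took-steps.
indonesia = 250e6 * 0.17 * 0.64 * 0.62          # ~16.9 million people

# Countries outside the survey: 2.3 billion people, with the
# conservative estimates quoted above.
rest_of_world = (7e9 - 4.7e9) * 0.20 * 0.40 * 0.25   # ~46 million

# Adding the 706 million calculated for the surveyed countries:
total = 706e6 + rest_of_world                   # ~752 million
```

That total is where the roughly 750 million figure in the next paragraph comes from.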

It’s probably true that most of those people took steps that didn’t make any appreciable difference against an NSA level of surveillance, and probably not even against the even more pervasive corporate variety of surveillance. It’s probably even true that some of those people didn’t take steps at all, and just wish they did or wish they knew what to do. But it is absolutely extraordinary that 750 million people are disturbed enough about their online privacy that they will represent to a survey taker that they did something about it.

Name another news story that has caused over ten percent of the world’s population to change their behavior in the past year. Cory Doctorow is right: we have reached “peak indifference to surveillance.” From now on, this issue is going to matter more and more, and policymakers around the world need to start paying attention.

Related: a recent Pew Research Internet Project survey on Americans’ perceptions of privacy, commented on by Ben Wittes.

TorrentFreak: “How To Learn Absolutely Nothing In Fifteen Years,” By The Copyright Industry

This post was syndicated from TorrentFreak and was written by Rick Falkvinge.

In 1999, Napster was a one-time opportunity for the copyright industry to come out on top of the Internet. Napster was the center of attention for people sharing music. (Hard drives weren’t big enough to share movies yet.)

Everybody knew that the copyright industry at the time had two options – they could embrace and extend Napster, in which case they would be the center of culture going forward, or they could try to crush Napster, in which case they would lose the Internet forever as there would not be another centralized point like it.

The copyright industry, having a strong and persistent tradition of trying to obliterate every new technology for the past century, moved to crush Napster. It vanished. DirectConnect, LimeWire, and Kazaa — slightly more decentralized sharing mechanisms – popped up almost immediately, and BitTorrent a year or so later.

This was about as predictable as the behavior of a grandfather clock: the cat wasn’t just out of the bag, but had boarded a random train and travelled halfway across the country already. People had smelled the scent of sharing, and there was no going back. However, people wouldn’t repeat the mistakes of Napster and have a single point of failure. For the next couple of years, sharing decentralized rapidly to become more impervious and resilient to the onslaught of an obsolete distribution industry.

It is not a coincidence that The Pirate Bay rose around 2003. That time period was the apex of the post-Napster generation of sharing technologies. With the advent of the first generation of torrent sites, sharing slowly started to re-centralize around these sharing sites. For a few years, DirectConnect hubs were popular, before people transitioned completely to the faster and more decentralized BitTorrent technology.

This week, The Pirate Bay was taken offline in a police raid in Sweden. It may only have been the front-end load balancer that got captured, but it was still a critical box for the overall setup, even if all the other servers are running in random, hidden locations.

Sure, The Pirate Bay was old and venerable, and quite far from meeting today’s expectations of a website. That tells you so much more when you consider that it was consistently in the top 50 websites globally: if such a… badly maintained site can reach such a ranking, how abysmal must the copyright industry be?

The copyright industry is so abysmal it hasn’t learned anything in the past 15 years.

In the mere week following the downing of The Pirate Bay, there has been a flurry of innovation. People are doing exactly what they did fifteen years ago, after Napster: everybody is saying “never again” and going to town inventing more resilience, more decentralization, and more sharing efficiency. The community that manufactures our own copies of knowledge and culture had gotten complacent with the rather badly maintained website and more or less stopped innovating – The Pirate Bay had been good enough for several years, even as its age was showing.

I’ve seen signals from every continent in the past week that the past decade of decentralization technologies is being pooled into new sharing initiatives. A lot of them seem really hot. Some will hit the ball out of the park if they are realized: everything from TOR to blockchain technology to distributed computing – components that weren’t there when BitTorrent first surfaced ten years ago. If realized, they should surface over the next few years, just as BitTorrent surfaced three to four years after Napster with a bunch of other technologies in between. As a side bonus, these new initiatives will also protect privacy and free speech, which are both incompatible with enforcement of the copyright monopoly.

So in a way, this was welcome. We need that innovation. We need to not grow complacent. We all need to stay ahead of the crumbling monopolies – a dying tiger is dangerous, even when it’s obviously insane. But The Pirate Bay’s legacy will never die, just like Napster’s legacy won’t.

In the meantime, the copyright industry is a case study in how to really insist on not learning a damn thing from your own monumental mistakes in fifteen full years.

About The Author

Rick Falkvinge is a regular columnist on TorrentFreak, sharing his thoughts every other week. He is the founder of the Swedish and first Pirate Party, a whisky aficionado, and a low-altitude motorcycle pilot. His blog at falkvinge.net focuses on information policy.



Darknet - The Darkside: Oryon C Portable – Open Source Intelligence (OSINT) Framework

This post was syndicated from Darknet - The Darkside and was written by Darknet.

Oryon C Portable is a web browser designed to assist researchers in conducting Open Source Intelligence investigations. Oryon comes with dozens of pre-installed tools and a select set of links catalogued by category – including those that can be found in the OI Shared Resources. Based on SRWare Iron version 31.0.1700.0 (Chromium). More than 70…

Read the full post at darknet.org.uk

The Hacker Factor Blog: You Can Bank On It

This post was syndicated from The Hacker Factor Blog and was written by The Hacker Factor Blog.

Last week, security journalist Brian Krebs reported on a U.S. Treasury Department finding. The Treasury found that TOR nodes account for a large percentage of online banking fraud.

I found this report to be startling. I wasn’t surprised that TOR was being used for fraud. Rather, I was stunned that, after all these years, the banking industry was not filtering out logins from TOR nodes!

Don’t look at me!

Let’s back up a moment… The purpose of TOR (the onion router) is to mix up the network pathway so that users can be anonymous online. The purpose of logging into anything — a bank, Google, Facebook, or any other online service — is to identify yourself. These are diametrically opposed concepts. You cannot be anonymous and identify yourself at the same time!

There may be some online services where you don’t care about the account’s identity and you do want to be anonymous. A good example would be a free Yahoo Mail account that an anti-government Chinese citizen wants to access: the user is anonymous to the network, yet still identified to the service upon logging into the account. However, online banking is different.

With online banking, it is not a “free account”. The account manages tangible assets (money) and is directly associated with a person (or company). Customers want the bank to know that it is them doing legitimate business, and not someone else committing fraud.

The only time a user might want to be anonymous when accessing a bank is if the account is for doing something illegal (like money laundering). This way, the bank won’t be able to trace the account to an individual. But then again, no FDIC Insured bank wants that kind of customer. (Let’s leave the fraud to non-insured PayPal accounts.)

Seriously: I cannot think of any legitimate reason to do anonymous online banking. I see no legitimate reason to access your bank account using TOR.

Safe Web Access

The other thing to remember is that TOR is not a safe online system. Sure, nobody can trace the network connection from the web client to the web server, but that doesn’t mean it is safe. Specifically, you (the TOR user) do not know who owns each TOR exit node and you have no idea what they are doing to your data.

Last October, some researchers discovered that a few TOR exit nodes were maliciously modifying files. You may think you are downloading a program, but the TOR node was inserting malware instead.

Hostile TOR nodes have also been used to track users and even record logins and passwords.

In effect, if you use TOR then you should assume that (1) nobody knows it is you, and (2) someone is watching and recording what you do. Logging into your bank, or anywhere else, is really a bad idea for TOR users. Knowing this, it strikes me that banks are being intentionally ignorant by permitting logins from TOR nodes. The majority of this banking fraud should have been stopped years ago.

Filtering by Network

I have previously written about various ways to detect proxies. There are two fast and easy ways to detect proxy users: network and application filtering.

The first way focuses on the network address. The folks at the Tor Project actually have an FAQ entry for online services that want to block TOR. They even provide the list of known TOR nodes! At this point, the web server can look at every login request and check if the client’s network address is the same as a known TOR node. If it is, then they can block the request. (And if the login was valid, the bank can even block all login access to the account since the account has been compromised.)
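The network-level check described above can be sketched in a few lines. This assumes the published exit-node list has already been fetched (the Tor Project provides a bulk exit list for exactly this purpose); the function names are ours, not part of any real banking system.

```python
def parse_exit_nodes(lines):
    """Build a set of exit-node IPs from the downloaded list,
    skipping comment and blank lines."""
    nodes = set()
    for line in lines:
        line = line.strip()
        if line and not line.startswith("#"):
            nodes.add(line)
    return nodes

def allow_login(client_ip, exit_nodes):
    """Reject the login when the client address is a known Tor exit node."""
    return client_ip not in exit_nodes
```

A real deployment would refresh the list regularly, since exit nodes come and go within hours.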

Keep in mind: TOR is not the only proxy network out there. There are dozens of free lists of open proxies. (And even more fee-based lists.) There are also a couple of DNS-blacklist systems that identify known proxy addresses. And then there are network-based geo-location databases — most have some subnets identified as known proxy networks. Banks could even use the geo-location information to identify likely fraud. For example, if I last logged in from Colorado and then, minutes or hours later, appear to come from Europe, then my account has likely been compromised.
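The “impossible travel” heuristic from the Colorado-to-Europe example above can be sketched as follows. The geolocation lookup is assumed to have already mapped each login’s IP address to coordinates, and the 1,000 km/h threshold (roughly airliner speed) is an arbitrary illustrative cutoff.

```python
from math import radians, sin, cos, asin, sqrt

MAX_SPEED_KMH = 1000  # anything faster than an airliner is suspect

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in kilometers."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = (sin((lat2 - lat1) / 2) ** 2
         + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2)
    return 2 * 6371 * asin(sqrt(a))

def looks_like_fraud(prev, curr):
    """prev/curr: (lat, lon, unix_seconds) for two consecutive logins.
    True when the implied travel speed is implausible."""
    dist = haversine_km(prev[0], prev[1], curr[0], curr[1])
    hours = max((curr[2] - prev[2]) / 3600.0, 1e-6)  # guard against dt == 0
    return dist / hours > MAX_SPEED_KMH
```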

If banks really wanted to be proactive, then they would also identify Starbucks, McDonalds, Holiday Inn, and other major free-Internet providers and add them to the “no login” list. Users should never check their bank accounts from a free Internet service.

Filtering by Application

While network filtering will identify known addresses that denote proxy systems, there are always other proxies that are not found on any list.

Beyond looking at network addresses, services can detect proxies by looking at the web traffic’s HTTP header. Many proxy systems add in their own HTTP headers that denote a network relay. If any of these proxy headers exist, then the server should reject the login.

The biggest problem with HTTP headers is that there is no consistent method to identify a web proxy. Some relays add in an HTTP “VIA” header. Others may use “FORWARDED”, “FORWARDED-FOR”, “HTTP_CLIENT_IP”, “X-PROXY-ID” or similar header fields. My own FotoForensics system currently looks for over a dozen different HTTP headers that denote some kind of proxy network connection. While some of these proxy networks may be acceptable for online banking (e.g., “X-BlueCoat-Via” or “Client-IP”), others should probably be blacklisted.
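A minimal sketch of that application-level check, assuming the server framework exposes request headers as a dict. The header list here is illustrative and far from complete; as noted above, a real system would check for over a dozen variants.

```python
# Header names commonly inserted by proxy relays (lowercased for matching).
PROXY_HEADERS = {
    "via", "forwarded", "forwarded-for", "x-forwarded-for",
    "http_client_ip", "x-proxy-id", "client-ip", "x-bluecoat-via",
}

def is_proxied(headers):
    """headers: dict of HTTP header name -> value.
    True when any known proxy-relay header is present."""
    return any(name.lower() in PROXY_HEADERS for name in headers)
```

A bank could then reject the login outright, or treat specific acceptable relays (corporate caches, for example) as a whitelist within this set.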

Being proactive is not a crime

There are many viable uses for proxy networks. However, there are also times when using a proxy is a really bad idea. Banks should be utilizing all of these proxy detection methods. They should be ensuring that the network address is not part of a known proxy system. And they should be proactively trying to identify and reduce fraud.

Of course, some people may tell you that online banking through TOR is safe if you use HTTPS. However, that really isn’t true. Anyone who has seen the Defcon Wall of Sheep knows that HTTPS is easy to compromise if you control the network. Remember: SSL is a security placebo and not an actual security solution.

Before I began focusing on forensic tool development, I did a lot of forensic analysis for corporations. I always thought it was ironic when the corporate lawyers would give me very specific directions, like: “We want to know exactly what happened on this computer. Who did what and when. And whatever happens, we do not want you to look at that computer over there!” With corporate attorneys, if they know about something then they must act on it. But if they don’t explicitly know, then they don’t have to do anything about it. By not looking at the problem, they could always claim ignorance.

This entire “TOR used for bank fraud” situation has a similar feel. It is as if the banks want to claim ignorance rather than addressing the problem. But in this case, the entire industry has known for years that TOR is commonly used for online criminal activity. And we have long known that easy banking access facilitates fraud. In this case, not blocking TOR users really looks to me like intentional criminal negligence.

Schneier on Security: Comments on the Sony Hack

This post was syndicated from: Schneier on Security and was written by: schneier. Original post: at Schneier on Security

I don’t have a lot to say about the Sony hack, which seems to still be ongoing. I want to highlight a few points, though.

  1. At this point, the attacks seem to be a few hackers and not the North Korean government. (My guess is that it’s not an insider, either.) That we live in the world where we aren’t sure if any given cyberattack is the work of a foreign government or a couple of guys should be scary to us all.

  2. Sony is a company that hackers have loved to hate for years now. (Remember their rootkit from 2005?) We’ve learned previously that putting yourself in this position can be disastrous. (Remember HBGary.) We’re learning that again.
  3. I don’t see how Sony launching a DDoS attack against the attackers is going to help at all.
  4. The most sensitive information that’s being leaked as a result of this attack isn’t the unreleased movies, the executive emails, or the celebrity gossip. It’s the minutia from random employees:
    The most painful stuff in the Sony cache is a doctor shopping for Ritalin. It’s an email about trying to get pregnant. It’s shit-talking coworkers behind their backs, and people’s credit card log-ins. It’s literally thousands of Social Security numbers laid bare. It’s even the harmless, mundane, trivial stuff that makes up any day’s email load that suddenly feels ugly and raw out in the open, a digital Babadook brought to life by a scorched earth cyberattack.

    These people didn’t have anything to hide. They aren’t public figures. Their details aren’t going to be news anywhere in the world. But their privacy has been violated, and there are literally thousands of personal tragedies unfolding right now as these people deal with their friends and relatives who have searched and read this stuff.

    These are people who did nothing wrong. They didn’t click on phishing links, or use dumb passwords (or even if they did, they didn’t cause this). They just showed up. They sent the same banal workplace emails you send every day, some personal, some not, some thoughtful, some dumb. Even if they didn’t have the expectation of full privacy, at most they may have assumed that an IT creeper might flip through their inbox, or that it was being crunched in an NSA server somewhere. For better or worse, we’ve become inured to small, anonymous violations. What happened to Sony Pictures employees, though, is public. And it is total.

    Gizmodo got this 100% correct. And this is why privacy is so important for everyone.

I’m sure there’ll be more information as this continues to unfold.

Krebs on Security: ‘Poodle’ Bug Returns, Bites Big Bank Sites

This post was syndicated from: Krebs on Security and was written by: BrianKrebs. Original post: at Krebs on Security

Many of the nation’s top banks, investment firms and credit providers are vulnerable to a newly-discovered twist on a known security flaw that exposes their Web site traffic to eavesdropping. The discovery has prompted renewed warnings from the U.S. Department of Homeland Security advising vulnerable Web site owners to address the flaw as quickly as possible.

In mid-October, the world learned about “POODLE,” an innocuous acronym for a serious security flaw in a specific version (version 3.0) of Secure Sockets Layer (SSL), the technology that most commercial Web sites use to protect the privacy and security of communications with customers.

When you visit a site that begins with “https://” you can be sure that the data that gets transmitted between that site and your browser cannot be read by anyone else. That is, unless those sites are still allowing traffic over SSL 3.0, in which case an attacker could exploit the POODLE bug to decrypt and extract information from inside an encrypted transaction — including passwords, cookies and other data that can be used to impersonate the legitimate user.

On Dec. 8, researchers found that the POODLE flaw also extends to certain versions of a widely used SSL-like encryption standard known as TLS (short for Transport Layer Security).

“The impact of this problem is similar to that of POODLE, with the attack being slightly easier to execute,” wrote Ivan Ristic, director of engineering at security firm Qualys, which made available online a free scanning tool that evaluates Web sites for the presence of the POODLE vulnerability, among other problems. “The main target are browsers, because the attacker must inject malicious JavaScript to initiate the attack.”

A cursory review using Qualys’s SSL/TLS scanning tool indicates that the Web sites for some of the world’s largest financial institutions are vulnerable to the new POODLE bug, including Bank of America, Chase.com, Citibank, HSBC and Suntrust, as well as retirement and investment giants Fidelity.com and Vanguard. Dozens of sites offering consumer credit protection and other services run by Experian also are vulnerable, according to SSL Labs. Qualys estimates that about 10 percent of Web servers are vulnerable to the POODLE attack against TLS.

According to an advisory from the U.S. Computer Emergency Readiness Team (US-CERT), a partnership run in conjunction with the U.S. Department of Homeland Security, although there is currently no fix for the vulnerability in SSL 3.0 itself, disabling SSL 3.0 support in Web applications is the most viable solution currently available. US-CERT notes that some of the same researchers who discovered the POODLE vulnerability also developed a fix for the TLS-related issues.
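For illustration, here is roughly what “disable SSL 3.0 support” looks like for a server written against Python’s standard `ssl` module. This is a generic sketch, not US-CERT’s specific guidance; modern Python and OpenSSL versions already refuse SSLv3 by default, so the explicit option is shown mainly for clarity.

```python
import ssl

def no_sslv3_context():
    """Server-side TLS context that refuses to negotiate SSL 3.0."""
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
    ctx.options |= ssl.OP_NO_SSLv3  # explicit, though current defaults agree
    return ctx
```

A real server would then call `ctx.load_cert_chain(...)` with its certificate and key before wrapping sockets.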

Until vulnerable sites patch the issue, there isn’t a lot that regular users can do to protect themselves from this bug, aside from exercising some restraint when faced with the desire to log in to banking and other sensitive sites over untrusted networks, such as public Wi-Fi hotspots.


TorrentFreak: BitTorrent Inc Works on P2P Powered Browser

This post was syndicated from: TorrentFreak and was written by: Ernesto. Original post: at TorrentFreak

BitTorrent Inc. announced a new project today, a web browser with the ambition of making the Internet “people powered.”

Project Maelstrom, as it’s called, is in the very early stages of development but BitTorrent Inc. is gearing up to send out invites for a closed Alpha test.

The company hasn’t released a feature set as yet, but it’s clear that the browser will serve websites and other content through users.

According to BitTorrent Inc. this can not only speed up websites but also boost people’s privacy. In addition, it should be capable of bypassing website blockades and other forms of censorship.

“If we are successful, we believe this project has the potential to help address some of the most vexing problems facing the Internet today,” BitTorrent CEO Eric Klinker notes.

“How can we keep the Internet open? How can we keep access to the Internet neutral? How can we better ensure our private data is not misused by large companies? How can we help the Internet scale efficiently for content?”

The idea for a BitTorrent powered browser is not new. The Pirate Bay started work on a related project a few months ago with the aim of keeping the site online even if its servers were raided.

The project hasn’t been released yet, although it would have come in handy today.

Interestingly, BitTorrent’s brief summary of how the browser will work sounds a lot like Pirate Bay’s plans. The company shared the following details with Gigaom.

“It works on top of the BitTorrent protocol. Websites are published as torrents and Maelstrom treats them as first class citizens instead of just downloadable content. So if a website is contained within a torrent we treat it just like a normal webpage coming in over HTTP.”

More details are expected to follow during the months to come. Those interested in Project Maelstrom can sign up for an invite to the Alpha test here.

Source: TorrentFreak, for the latest info on copyright, file-sharing and anonymous VPN services.

Krebs on Security: Toward a Breach Canary for Data Brokers

This post was syndicated from: Krebs on Security and was written by: BrianKrebs. Original post: at Krebs on Security

When a retailer’s credit card systems get breached by hackers, banks usually can tell which merchant got hacked soon after those card accounts become available for purchase at underground cybercrime shops. But when companies that collect and sell sensitive consumer data get hacked or are tricked into giving that information to identity thieves, there is no easy way to tell who leaked the data when it ends up for sale in the black market. In this post, we’ll examine one idea to hold consumer data brokers more accountable.

Some of the biggest retail credit card breaches of the past year — including the break-ins at Target and Home Depot — were detected by banks well before news of the incidents went public. When cards stolen from those merchants go up for sale on underground cybercrime shops, the banks often can figure out which merchant got hacked by acquiring a handful of their cards and analyzing the customer purchase history of those accounts. The merchant that is common to all stolen cards across a given transaction period is usually the breached retailer.
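This common-point-of-purchase analysis amounts to a set intersection over purchase histories. A toy sketch (merchant names invented):

```python
def common_merchant(histories):
    """histories: one set of merchant names per stolen card.
    Returns the merchants present in every card's history; ideally a
    single merchant remains, and that is the likely breach victim."""
    return set.intersection(*[set(h) for h in histories])
```

In practice banks also restrict the histories to the suspected breach window, since a large merchant might appear in many histories by coincidence.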

Sadly, this process of working backwards from stolen data to breach victim generally does not work in the case of breached data brokers that trade in Social Security information and other data, because too often there are no unique markers in the consumer data that would indicate from where the information was obtained.

Even in the handful of cases where underground crime shops selling consumer personal data have included data points in the records they sell that would permit that source analysis, it has taken years’ worth of very imaginative investigation by law enforcement to determine which data brokers were at fault. In Nov. 2011, I wrote about an identity theft service called Superget[dot]info, noting that “each purchasable record contains a two- to three-letter “sourceid,” which may provide clues as to the source of this identity information.”

Unfortunately, the world didn’t learn the source of that ID theft service’s data until 2013, a year after U.S. Secret Service agents arrested the site’s proprietor — a 24-year-old from Vietnam who was posing as a private investigator based in the United States. Only then were investigators able to determine that the source ID data matched information being sold by a subsidiary of big-three credit bureau Experian (among other data brokers that were selling to the ID theft service). But federal agents made that connection only after an elaborate investigation that lured the proprietor of that shop out of Vietnam and into a U.S. territory.

Meanwhile, during the more than six years that this service was in operation, Superget.info attracted more than 1,300 customers who paid at least $1.9 million to look up Social Security numbers, dates of birth, addresses, previous addresses, email addresses and other sensitive information on consumers, much of it used for new account fraud and tax return fraud.

Investigators got a lucky break in determining the source of another ID theft service that was busted up and has since changed its name (more on that in a moment). That service, known as “ssndob[dot]ru,” supplied the data for exposed[dot]su, a site that proudly displayed the Social Security numbers, dates of birth, address histories and other information of dozens of Hollywood celebrities, as well as public officials such as First Lady Michelle Obama, then-FBI Director Robert Mueller, and a then-director of the CIA.

As I explained in a 2013 exclusive, civilian fraud investigators working with law enforcement gained access to the back-end server that was being used to handle customer requests for consumer information. That database showed that the site’s 1,300 customers had spent hundreds of thousands of dollars looking up SSNs, birthdays, drivers license records, and obtaining unauthorized credit and background reports on more than four million Americans.

Although four million consumer records may seem like a big number, that figure did not represent the total number of consumer records available through ssndob[dot]ru. Rather, four million was merely the number of consumer records that the service’s customers had paid to look up. In short, it appeared that the ID theft service was drawing on active customer accounts inside of major consumer data brokers.

Investigators working on that case later determined that the same crooks who were running ssndob[dot]ru also were operating a small, custom botnet of hacked computers inside of several major data brokers, including LexisNexis, Dun & Bradstreet, and Kroll. All three companies acknowledged infections from the botnet, but shared little else about the incidents.

Despite their apparent role in facilitating (albeit unknowingly) these ID theft services, to my knowledge the data brokers involved have never been held publicly accountable in any court of law or by Congress.

CURRENT ID THEFT SERVICES

At present, there are multiple shops in the cybercrime underground that sell everything one would need to steal someone’s identity in the United States or apply for new lines of credit in their name — including Social Security numbers, addresses, previous addresses, phone numbers, dates of birth, and in some cases full credit history. The price of this information is shockingly low — about $3 to $5 per record.

KrebsOnSecurity conducted an exhaustive review of consumer data on sale at some of the most popular underground cybercrime sites. The results show that personal information on some of the most powerful Americans remains available for just a few dollars. And of course, if one can purchase this information on these folks, one can buy it on just about anyone in the United States today.

As an experiment, this author checked two of the most popular ID theft services in the underground for the availability of Social Security numbers, phone numbers, addresses and previous addresses on all members of the Senate Commerce Committee‘s Subcommittee on Consumer Protection, Product Safety and Insurance. That data is currently on sale for all thirteen Democrat and Republican lawmakers on the panel.

Between these two ID theft services, the same personal information was for sale on Edith Ramirez and Richard Cordray, the heads of the Federal Trade Commission (FTC) and the Consumer Financial Protection Bureau (CFPB), respectively.


Getting these ID theft services shut down might feel good, but it is not a long-term solution. Both services used to conduct these lookups of the public figures mentioned above are second- and third-generation shops that have re-emerged from previous takedown efforts. In fact, at least one of them appears to be a reincarnation of ssndob[dot]ru, while the other seems little more than a reseller of that service.

Rather, it seems clear that what we need is more active oversight of the data broker industry, and new tools to help law enforcement (and independent investigators) determine the source of data being resold by these identity theft services.

Specifically, if there were a way for federal investigators to add “breach canaries,” — unique, dummy identities — to records maintained by the top data brokers, it could make it far easier to tell which broker is leaking consumer data either through breaches or hacked/fraudulent accounts.
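A purely hypothetical sketch of how such canaries might work. Every name, format, and function below is invented for illustration (the 900-prefix is used because it falls outside the range of validly issued SSNs):

```python
import secrets

def make_canary(broker_id):
    """Generate a dummy identity planted in one specific data broker."""
    ssn = "900-%02d-%04d" % (secrets.randbelow(100), secrets.randbelow(10000))
    return {"broker": broker_id,
            "name": "Canary-" + secrets.token_hex(4),
            "ssn": ssn}

def find_leaking_broker(leaked_records, canaries):
    """Match SSNs in a leaked dump against the planted canaries;
    a hit identifies which broker's records were compromised."""
    by_ssn = {c["ssn"]: c["broker"] for c in canaries}
    for rec in leaked_records:
        if rec.get("ssn") in by_ssn:
            return by_ssn[rec["ssn"]]
    return None
```

The value of the scheme is that each dummy identity is unique to one broker, so its appearance in an underground shop points directly at the leak's source.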

Data brokers like Experian have strongly resisted calls from regulators for greater transparency in their operations and in the data that they hold about consumers. When the FTC recommended the creation of a central website where data brokers would be listed — with links to these companies, their privacy policies and also choice options, giving consumers the capability to review/amend the data that companies maintain — Experian lobbied against the idea, charging that it would “have the unintended effect of confusing consumers and eroding trust in e-commerce.”

The company’s main argument was essentially that it was unfair to impose such requirements on the bigger data brokers and ignore the rest. Experian’s chief lobbyist Tony Hadley has made the argument that there are just too many companies that have and share all this consumer data, which seems precisely the problem.

“The Direct Marketing Association (DMA) estimates that even a narrow definition of a marketing information service provider is likely to include more than 2,500 companies from all sectors of the economy,” Hadley wrote in a blog post earlier this year. “Simply put, the entire data industry – extremely vital to the US economy — cannot be neatly or accurately identified and then subjected to unrealistic requirements.”

My guess is that if the data broker giants are opposed to the idea of inserting dummy identities into their records to act as breach canaries, it is because such a practice could expose data-sharing relationships and record-keeping practices that these companies would rather not see the light of day. But barring any creative ideas to help investigators quickly learn the source of data being sold by identity theft services online, data brokers will remain free to facilitate and even profit from an illicit market for sensitive consumer information.

Schneier on Security: Corporations Misusing Our Data

This post was syndicated from: Schneier on Security and was written by: schneier. Original post: at Schneier on Security

In the Internet age, we have no choice but to entrust our data with private companies: e-mail providers, service providers, retailers, and so on.

We realize that this data is at risk from hackers. But there’s another risk as well: the employees of the companies who are holding our data for us.

In the early years of Facebook, employees had a master password that enabled them to view anything they wanted in any account. NSA employees occasionally snoop on their friends and partners. The agency even has a name for it: LOVEINT. And well before the Internet, people with access to police or medical records occasionally used that power to look up either famous people or people they knew.

The latest company accused of allowing this sort of thing is Uber, the Internet car-ride service. The company is under investigation for spying on riders without their permission. Using an internal tool called the “god view,” some Uber employees are able to see who is using the service and where they’re going — and used this at least once in 2011 as a party trick to show off the service. A senior executive also suggested the company should hire people to dig up dirt on its critics, making their database of people’s rides even more “useful.”

None of us wants to be stalked — whether it’s from looking at our location data, our medical data, our emails and texts, or anything else — by friends or strangers who have access due to their jobs. Unfortunately, there are few rules protecting us.

Government employees are prohibited from looking at our data, although none of the NSA LOVEINT creeps were ever prosecuted. The HIPAA law protects the privacy of our medical records, but we have nothing to protect most of our other information.

Your Facebook and Uber data are only protected by company culture. There’s nothing in their license agreements that you clicked “agree” to but didn’t read that prevents those companies from violating your privacy.

This needs to change. Corporate databases containing our data should be secured from everyone who doesn’t need access for their work. Voyeurs who peek at our data without a legitimate reason should be punished.

There are audit technologies that can detect this sort of thing, and they should be required. As long as we have to give our data to companies and government agencies, we need assurances that our privacy will be protected.

This essay previously appeared on CNN.com.

Darknet - The Darkside: Sony Pictures Hacked – Employee Details & Movies Leaked

This post was syndicated from: Darknet - The Darkside and was written by: Darknet. Original post: at Darknet - The Darkside

Sony hasn’t always had the best of times when it comes to being hacked, back in 2011 Sony basically had to rebuild the PlayStation Network (PSN) because of a hack which rendered the service off-line for almost a whole week. Plus the fact the PSN hack could have leaked up to 10 million user accounts […]

The post Sony Pictures Hacked…

Read the full post at darknet.org.uk

SANS Internet Storm Center, InfoCON: green: Do you have a Data Breach Response Plan?, (Mon, Dec 1st)

This post was syndicated from: SANS Internet Storm Center, InfoCON: green and was written by: SANS Internet Storm Center, InfoCON: green. Original post: at SANS Internet Storm Center, InfoCON: green

The Ponemon Institute conducted its second annual study on data breach preparedness and released the results in September. Some of the data collected shows interesting results. Based on the survey, 68% of respondents don’t believe their company would know how to deal with negative public opinion, and 67% think their organization does not understand what to do after a data breach occurs. [page 3] Either failure usually impacts the brand, and can lead to lost customers and shake business partners’ trust and confidence in the company.

They also found that more companies now have a data breach response plan: 73% in 2014, compared to 61% last year. According to the survey, however, only about 30% of those response plans are rated effective or very effective. [page 4] The report suggests that to be effective, the organization must provide training to its employees, making them aware of their responsibilities for protecting customer information when a data breach occurs.

There are several templates for data breach response plans freely available to get you started. If you have one in place, how often is it reviewed and exercised? Do you receive training on how to properly safeguard customers’ sensitive data? The study can be downloaded here.

[1] http://www.experian.com/assets/data-breach/brochures/2014-ponemon-2nd-annual-preparedness.pdf [page 3,4]
[2] https://privacyassociation.org/resources/article/security-breach-response-plan-toolkit/
[3] http://www.cica.ca/resources-and-member-benefits/privacy-resources-for-firms-and-organizations/docs/item48785.pdf
[4] http://www.justice.gov/sites/default/files/opcl/docs/breach-procedures.pdf
[5] http://www.securingthehuman.org

———–

Guy Bruneau IPSS Inc. gbruneau at isc dot sans dot edu

(c) SANS Internet Storm Center. https://isc.sans.edu Creative Commons Attribution-Noncommercial 3.0 United States License.

TorrentFreak: Pirates Expose Their Download Habits Through Trakt

This post was syndicated from: TorrentFreak and was written by: Ernesto. Original post: at TorrentFreak

With over a quarter million users, Trakt is one of the most popular communities for movie and TV fans.

The service allows users to keep track of the videos they watch throughout the week, either manually or automatically via media players such as VLC, XBMC and Popcorn Time.

Trakt processes and stores downloading habits so its users can keep track of what they’re watching. By default it also allows others to see this information, and that’s where things start to become problematic.

Trakt appears to be relatively popular among pirates. A quick glance at the most popular movies shows that most are still playing in theaters. These could technically all be manual entries, but the “currently watching” list shows that this isn’t the case.

For example, at the time of writing there were several Trakt users watching the latest box-office hit “Interstellar” via XBMC and Infuse. Also, the overview of the movie “Lucy” shown below lists several Popcorn Time users checking in a pirated copy of the film.


Of course people have to decide for themselves if they want to display their piracy habits to the rest of the world, but they shouldn’t complain if a movie studio comes knocking on their door.

We’ve just spent a few minutes looking through the profiles of several Popcorn Time users, who all display their browsing habits in public. Needless to say, this offers unique opportunities for copyright trolls.

Copyright holders only have to write down a few usernames and ask the court for a subpoena to expose their personal info.

Trakt is very clear about the fact that it helps private parties to enforce and comply with the law, and will have no other option than to hand over the requested information if they’re faced with a subpoena.

“We will disclose any information about you to government or law enforcement officials or private parties as we, in our sole discretion, believe necessary or appropriate to respond to claims and legal process (including but not limited to subpoenas)…” Trakt’s privacy policy reads.

So without too much hassle rightsholders can get their hands on pirates’ IP-addresses, email addresses, payment information, a database of files watched and at what times, etcetera.

And then there’s the added risk of running into scammers.

Many “pirating” Trakt users have profile images and usernames that can easily lead to a person’s identity. It took us less than a minute to dig up an email address of a Popcorn Time user, info that scammers could exploit to falsely threaten legal action while posing as a copyright holder.

That’s trouble waiting to happen.

Most pirating Trakt users are probably not aware of the risks above, but in any case it might be wise for the privacy minded to avoid the service.

Source: TorrentFreak, for the latest info on copyright, file-sharing and anonymous VPN services.

The Hacker Factor Blog: Happy Holidays from Facebook

This post was syndicated from: The Hacker Factor Blog and was written by: The Hacker Factor Blog. Original post: at The Hacker Factor Blog

If you do anything in the computer security or forensics world, then you probably view Facebook as a hive of scum and villainy. As a major social network, it attracts all sorts of criminal elements. Pedophiles use Facebook. Terrorists use Facebook. Drug dealers use Facebook. It’s like the only people not using Facebook are teens.

Social networks are split into two camps. On one side are the open forums. Everything is accessible and anyone can see content without needing special access. Twitter, Reddit, and most news sites fall into this category. While some content can be made private, most is public.

On the other side are the walled gardens. These are social networks where people on the outside can barely see anything inside. Facebook and Apple are the two big examples. As someone who isn’t on Facebook, I’ve never actually seen FarmVille. And I cannot see most user profiles or “wall” pages without logging in and connecting to users. It’s that “connecting” part that is a problem for law enforcement. The last thing you want to do is tip off a suspect by friending them, just to gain access to their shared information.

One Little Change

Earlier this week, Facebook made a subtle but important change to their service. Specifically, they changed their picture filenames. This, in turn, directly impacts online forensics. Since I’ve been tracking changes at Facebook for years, I’ve managed to put together a pretty good timeline.

Before July 2012, Facebook filenames used a five-number pattern: aa_bb_cc_dd_ee_n.jpg. (For example, 1234_5678_91011_12345_1234_b.jpg.) aa is the photo id, bb is the album id, cc is the profile id of the user who uploaded the picture. The dd and ee fields are random and designed to mitigate guessing a picture’s id. (The dd field may have some other purpose, but I never figured it out.) The final character, n, indicates the size for auto-scaling. Changing the final character to ‘o’ returns the original-size picture, ‘b’ is big, ‘q’ is 180px wide, etc.

Given a Facebook filename in this format, an analyst can quickly identify the URLs to the picture, album, and user’s profile.

In February 2012, Facebook started testing a new filenaming system. This system was fully deployed in July 2012. This new filename format uses a three-number pattern: aa_bb_ee_n.jpg. (E.g., 1234_567891011121_23456_b.jpg.) aa and bb are still the photo and album ids. The ee field is random and designed to prevent someone from guessing a picture’s id. The final character, n, still indicates the size for auto-scaling.

Given a Facebook filename in this three-number format, an analyst can still quickly identify the URL to the picture and wall page. If the picture’s wall page is public, then it displays the user’s account name, the image, and all comments related to the picture.
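Given a filename in this format, the lookup can be sketched in a few lines of Python. This follows the field layout I described above (it is not an official Facebook API), and it builds the wall-page URL from the middle number of the filename, which is what shows up as the fbid:

```python
# Sketch: map a three-number Facebook filename to its wall-page URL.
# Field names follow the description above; this is not an official API.

def parse_fb_filename(filename):
    """Split an aa_bb_ee_n.jpg filename into its fields."""
    stem = filename.rsplit(".", 1)[0]          # drop the extension
    parts = stem.split("_")
    if len(parts) != 4:
        raise ValueError("not a three-number Facebook filename")
    aa, bb, ee, size = parts
    return {"photo_id": aa, "album_id": bb, "random": ee, "size": size}

def wall_page_url(filename):
    """The middle number is what appears as the fbid in the wall-page URL."""
    fields = parse_fb_filename(filename)
    return "https://www.facebook.com/photo.php?fbid=" + fields["album_id"]

print(wall_page_url("1000526_539054152803549_1177659804_n.jpg"))
# prints https://www.facebook.com/photo.php?fbid=539054152803549
```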

In October 2012, Facebook began to test an Akamai EdgeControl cache with cryptographic signatures. Akamai provides a last-mile content delivery system for distributing the network load. The cryptographic checksum prevents tampering to the URL. This means that real-time processing instructions in the URL, such as ‘/c92.0.403.403/’ for cropping or the size determination (e.g., ‘n’ or ‘o’), cannot be altered by the analyst. Any changes will return a ‘Content not found’ message.
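The general idea behind a cryptographically signed URL can be sketched like this. To be clear, the key, signature length, and layout here are my own illustrative assumptions, not Akamai's actual scheme:

```python
import hmac, hashlib

SECRET = b"cdn-shared-secret"   # hypothetical key held by the CDN

def sign_url(path):
    """Append an HMAC of the path so edge servers can detect tampering."""
    sig = hmac.new(SECRET, path.encode(), hashlib.sha256).hexdigest()[:16]
    return f"{path}?sig={sig}"

def verify_url(signed):
    """Recompute the HMAC; any change to the path invalidates the signature."""
    path, _, sig = signed.partition("?sig=")
    expected = hmac.new(SECRET, path.encode(), hashlib.sha256).hexdigest()[:16]
    return hmac.compare_digest(sig, expected)

url = sign_url("/c92.0.403.403/1234_567891011121_23456_n.jpg")
print(verify_url(url))                                    # True
tampered = url.replace("/c92.0.403.403/", "/c0.0.800.800/")
print(verify_url(tampered))                               # False
```

Change the crop instructions and the signature no longer matches, which is exactly why an analyst's edited URL comes back as 'Content not found'.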

The caching and anti-tamper system was deployed on 27-Dec-2012. However, the filenames still mapped to non-Akamai URLs for directly accessing the content at Facebook. In addition, relatively few pictures were served through the Akamai caching service.

All of this changed on 24-Nov-2014. (I may be off by a few days for the actual deployment). That’s when Facebook changed filenames again and began to distribute pictures almost exclusively through Akamai and with anti-tamper URLs. Technically, the filename still looks like the three-number format: aa_bb_ee_n.jpg. However, they changed the aa photo ID number in the filename. As a result, all filenames that predate 2014-11-24 can no longer be used to find the direct URL at Facebook. Pictures uploaded after 2014-11-24 may, in rare cases, be mapped to direct URLs at Facebook. But most of the time, they are only available from Akamai. Given only the Facebook filename, you can no longer find the URL to the picture hosted at Facebook. You can still find the wall page with the picture (if it is public), but not the direct URL to the picture itself.

For example, 1000526_539054152803549_1177659804_n.jpg is the filename of a picture that was uploaded to FotoForensics over a year ago. The direct URL to the picture was ‘https://scontent-a-fna.xx.fbcdn.net/hphotos-xfa1/1000526_539054152803549_1177659804_o.jpg’. Prior to 24-Nov-2014, this would return the image, but today it returns ‘Content not found’.

However, from this filename I can identify the wall page’s URL: https://www.facebook.com/photo.php?fbid=539054152803549. (It’s gross, so I’m not hyperlinking to it.) According to the wall page, the picture’s new filename is 1014640_539054152803549_1177659804_o.jpg — the first number changed from 1000526 to 1014640.

‘Good news’ is relative

The impact to forensics and investigators is significant. If you have a filename that matches the three-number format, then you can trace the filename to Facebook. But if the file was acquired before 2014-11-24, then you cannot find the direct URL at Facebook in order to confirm that the file came from there. (By seeing the picture at Facebook, I suspect that law enforcement would have an easier time getting a warrant. Without the confirmation, it should be a little harder.) By the same token, some media outlets try to validate sources. They used to be able to confirm that a picture came from Facebook by tracing the filename to a URL. But today, they cannot positively confirm it unless the wall page is public.

In addition, anyone who was hotlinking to a picture at Facebook should have noticed that the link is now broken. In effect, Facebook just raised the walls a little higher around their private garden.

If someone sees a filename from Facebook, then it can no longer be traced back to the user. And if the URL contains an anti-tampering field (my example Facebook filename above did not have this field), then nobody can uncrop the image without more knowledge about where the picture is stored at Facebook. This stops people from snooping, law enforcement from tracking images without a warrant, and external web sites from hotlinking.

And the bad news?

Privacy advocates may be very pleased with this change. However, I think all of the privacy benefits are a side-effect from something much more detrimental. Since I have no insider knowledge about Facebook, I can only speculate about the cause behind this naming change. And I suspect that the cause is very anti-privacy.

Facebook rolled out this new change around 2014-11-24. That is just a few days after Facebook announced a major change to their privacy policy. Most media outlets pointed out that the new policy is 70% shorter and much easier to read. But a few outlets, like PCworld, pointed out that it specifies that Facebook wants to collect even more information about you.

For example, the new privacy policy says “We receive information about you and your activities on and off Facebook from third-party partners, such as information from a partner when we jointly offer services or from an advertiser about your experiences or interactions with them.” And this is where it comes back to pictures…

As I mentioned, Facebook had been testing Akamai’s EdgeControl cache service for months, but did not deploy it until they released their new privacy policy. Akamai is a huge company — they serve as much as 30% of all web traffic, and they collect metrics about users. To quote from the Wall Street Journal, “Because it stores copies of websites, Akamai has the potential to access 15% to 30% of total Web traffic. Two years ago, it began offering to track Web users’ browsing activity for advertising purposes.” The WSJ wrote that back in 2010, so Akamai has been tracking users for over six years.

Now we have Facebook, a giant company that can only collect information at Facebook, teaming up with Akamai, a giant company that can cross-collect information from a third of the Internet. It used to be that Facebook could only track you at a third-party site if that site had a link to Facebook. I previously showed how Facebook uses links at Home Depot to track users who visit that home improvement store online. But now, sites do not even need to have a link to Facebook.

Let’s trace how this entire thing works now. You visit a web site that is not a Facebook affiliate and has no link to Facebook. But, they do have a small ad that is hosted at Akamai. As your browser downloads the picture for the ad from Akamai, your browser (via the HTTP referer [sic] field) provides information about what site you are visiting. Akamai can even drop a cookie into your browser, just in case you change network addresses. (While not essential, the cookie simplifies following mobile devices.) Later, you go to some site that has a “Like us on Facebook” link with code hosted at Facebook and an image from Akamai. Now Akamai can put it all together and provide it to Facebook. They know the sites you visit, when you visited them, and what your interests are outside of Facebook. They can tie this together with Facebook information, so they further know your likes, friends, interests, etc.
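As a toy sketch of how such logs could be joined into a browsing profile (every cookie id, site, and account name here is hypothetical):

```python
# Toy illustration of cross-site profile building from Referer headers.
# All names and logs below are hypothetical.

cdn_log = [  # (cookie_id, referring_site) seen by the CDN serving ad images
    ("cookie42", "newssite.example"),
    ("cookie42", "shop.example"),
    ("cookie42", "blog.example"),
]

# A "Like us" widget load ties the same cookie to a social account.
widget_log = [("cookie42", "facebook_user_1001")]

def build_profiles(cdn_log, widget_log):
    """Join the two logs: every site the cookie touched becomes part of
    the profile attached to the social-network identity."""
    identity = dict(widget_log)              # cookie -> account
    profiles = {}
    for cookie, site in cdn_log:
        user = identity.get(cookie, cookie)  # fall back to the bare cookie
        profiles.setdefault(user, set()).add(site)
    return profiles

# The account ends up linked to all three visited sites.
print(build_profiles(cdn_log, widget_log))
```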

Moreover, the list of Akamai customers is huge! Best Buy, NPR, MySpace, McAfee… Facebook can now see into the walled gardens at Apple and Microsoft, since both of them are Akamai clients. The Department of Defense is listed as an Akamai customer… I wonder if Facebook can identify DoD employees? The same goes for the Australian Government National Security (another Akamai client).

Did you see that link to PCworld that I have in the middle of this blog entry? (Where I point out that Facebook wants to collect information.) If you clicked it then you triggered an Akamai tracker. The tracker is in some JavaScript on the PCworld web page. The same goes for the links to Bloomberg and ABCnews that I have in the first paragraph.

Tis the season

But let’s go back to pictures. Why would Facebook change their filenames? The only reason that makes sense to me is that they intentionally want to break links for anyone hotlinking to their site. They are effectively drawing a line in the sand and saying “this is the baseline” for all new data collected.

Finally, I couldn’t help but notice that they rolled all of this out days before Thanksgiving and the start of the holiday shopping season. This year, an estimated 37% of shoppers are expected to shop online, and nearly all of them will trigger at least one Akamai or Facebook tracker.

Ho ho ho…

Darknet - The Darkside: Bitcoin Not That Anonymous Afterall

This post was syndicated from: Darknet - The Darkside and was written by: Darknet. Original post: at Darknet - The Darkside

One of the big advantages touted by Bitcoin (and other cryptocurrencies) was always the anonymity of transactions. Yes, you can track a wallet address and see its transaction history, but there’s no real way to link that wallet address to a real person (or so we thought). I mean, other than any leaky fiat exchange […]

The post Bitcoin Not…

Read the full post at darknet.org.uk

Errata Security: The Pando Tor conspiracy troll

This post was syndicated from: Errata Security and was written by: Robert Graham. Original post: at Errata Security

Tor, also known as The Onion Router, bounces your traffic through several random Internet servers, thus hiding the source. It means you can surf a website without them knowing who you are. Your IP address may appear to be coming from Germany when in fact you live in San Francisco. When used correctly, it prevents eavesdropping by law enforcement, the NSA, and so on. It’s used by people wanting to hide their actions from prying eyes, from political dissidents, to CIA operatives, to child pornographers.
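The layering idea can be sketched with a toy model where XOR stands in for real encryption. The client wraps the message once per relay; each relay peels exactly one layer, so no single relay sees both the source and the content. The relay names and keys below are made up:

```python
# Toy model of onion routing. XOR is a stand-in for real ciphers and is
# NOT secure; it only illustrates the one-layer-per-relay structure.

def xor(data: bytes, key: bytes) -> bytes:
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

relays = [("relay-DE", b"key-one"), ("relay-SE", b"key-two"), ("relay-US", b"key-three")]

def build_onion(message: bytes) -> bytes:
    """Client side: wrap the innermost layer first, so the first relay
    on the path removes the outermost layer."""
    onion = message
    for _name, key in reversed(relays):
        onion = xor(onion, key)
    return onion

def route(onion: bytes) -> bytes:
    """Each relay peels exactly one layer with its own key."""
    for _name, key in relays:
        onion = xor(onion, key)
    return onion

assert route(build_onion(b"hello")) == b"hello"
```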

Recently, Pando (an Internet infotainment site) released a story accusing Tor of being some sort of government conspiracy.

This is nonsense, of course. Pando’s tell-all exposé of the conspiracy contains nothing that isn’t already widely known. We in the community have long joked about this. We often pretend there is a conspiracy in order to annoy uptight Tor activists like Jacob Appelbaum, but we know there isn’t any truth to it. This really annoys me — how can I troll about Tor’s government connections when Pando claims there’s actually truth to the conspiracy?

The military and government throw research money around with reckless abandon. That no more means they created Tor than it means they created the Internet back in the 1970s. A lot of that research is pure research, intended to help people. Not everything the military funds is designed to kill people.

There is no single “government”. We know, for example, that while some in government paid Jacob Appelbaum’s salary, others investigated him for his Wikileaks connections. Different groups are often working at cross purposes — even within a single department.

A lot of people have ties to the government, including working for the NSA. The NSA isn’t some secret police designed to spy on Americans, so a lot of former NSA employees aren’t people who want to bust privacy. Instead, most NSA employees are sincere in making the world a better place — which includes preventing evil governments from spying on dissidents. As Snowden himself says, the NSA is full of honest people doing good work for good reasons. (That they’ve overstepped their bounds is a problem — but that doesn’t mean they are the devil).

Tor is based on open code and math. It really doesn’t matter what conspiracy lies behind it, because we can see the code. It’s like BitCoin: we know there is a secret conspiracy behind it, with the secretive Satoshi Nakamoto owning a billion dollars’ worth of the coins. But that still doesn’t shake our faith in the code and the math.

Dissidents use Tor — successfully. We know that because the dissidents are still alive. Even if it’s a secret conspiracy by the U.S. government, it still does what its supporters want, helping dissidents fight oppressive regimes. In any case, Edward Snowden, who had access to NSA secrets, trusts his own life to Tor.

Tor doesn’t work by magic. I mention this because the Pando article lists lots of cases where Tor failed to protect people. The causes were unlikely to have been flaws in Tor itself; they appear to have been more mundane mistakes. For example, the Silk Road server configuration shows that it was reachable directly from the Internet as well as through Tor, a rookie mistake that revealed its location. Even a perfect concealment system can’t work if you sometimes ignore it. It’s like blaming the Pill for not preventing pregnancy because you took it only on some days but not others. Thus, for those of us who know technically how things work, none of the cases cited by Pando shake our trust in Tor.

I’m reasonably technical. I’ve read the Tor spec (though not the code). I play with things like hostile exit nodes. I fully know Tor’s history and ties to the government. I find nothing in the Pando article that is credible, and much that is laughable. I suppose I’m guilty of getting trolled by this guy, but seriously, Pando pretends not to be a bunch of trolls, so maybe this deserves a response.

The Hacker Factor Blog: Lowering The Bar

This post was syndicated from: The Hacker Factor Blog and was written by: The Hacker Factor Blog. Original post: at The Hacker Factor Blog

The Electronic Frontier Foundation (EFF) is one of my favorite non-profit organizations. They have a huge number of attorneys who are ready to help people with issues related to online privacy, copyright, and security. If you’re about to make an 0-day exploit public and receive a legal threat from the software provider, then the EFF should be the first place you go.

The EFF actually provides multiple services. Some are top-notch, but others are not as high quality as they should be. These services include:

Legal Representation
If you need an attorney for an online issue, such as privacy or security, then they can give you direction. When I received a copyright extortion letter from Getty Images, the EFF rounded up four different attorneys who were interested in helping me fight Getty. (Getty Images backed down before I could use these attorneys.) Legal assistance is one of the EFF’s biggest and best offerings.

Legal News
The EFF continually releases news blurbs and whitepapers that discuss current events and their impact on security and privacy. Did you know that U.S. companies supply eavesdropping gear to Central Asian autocrats or that Feds proposed the secret phone database used by local Virginia cops? If you follow the EFF’s news feed, then you saw these reports. As a news aggregation service, their reports are very timely, but also very biased. The EFF’s reporting is biased toward a desire for absolute privacy online, even though nobody’s anonymous online.

Technical Services
The EFF occasionally promotes or releases software designed to assist with online privacy. While these efforts have good intentions, they are typically poorly thought out and can lead to significant problems. For example:

  • HTTPS Everywhere. This browser extension forces your web browser to use HTTPS whenever possible. It has a long set of configuration files that specify which sites should use HTTPS. Earlier this year, I wrote about some of the problems created by this application in “EFF’ing Up“. Specifically: (1) Some sites return different content if you use HTTPS instead of HTTP, (2) they do not appear to test their configuration files prior to releasing them, and (3) they do not fix bad configuration files.

  • TOR. The EFF is a strong supporter of the TOR Project, which consists of a network of servers that help anonymize network connections. The problem is that the EFF wants everyone to run a TOR relay. For a legal organization, the EFF seems to forget that many ISPs forbid end consumers from running public network services — running a TOR relay may violate your ISP’s terms of service. The TOR relay will also slow down your network connection as other people use your bandwidth. (Having other people use your bandwidth is why most consumer-level ISPs forbid users from hosting network services.) And if someone else uses your TOR relay to view child porn, then you are the person that the police will interrogate. In effect, the EFF tells people to run a network service without revealing any of the legal risks.

Free SSL

The EFF recently began promoting a new technical endeavor called Let’s Encrypt. This free CA server should help web sites move to HTTPS. News outlets like Boing Boing, The Register, and ExtremeTech all reported on this news announcement.

A Little Background

Let’s back up a moment… On the web, you can connect to sites using either HTTP or HTTPS. The former (HTTP) is unencrypted: anyone watching the network traffic can see what you are doing. The latter (HTTPS) is HTTP over SSL; SSL provides a framework for encrypting network traffic.

But notice how I say “framework”. SSL does not encrypt traffic. Instead, it provides a way for a client (like your web browser) and a server (like a web site) to negotiate how they want to transfer data. If both sides agree on a cryptographic setting, then the data is encrypted.
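That negotiation can be modeled as the server picking its most-preferred cipher suite that the client also offered. This is a simplified sketch of the concept, not the actual TLS wire protocol:

```python
def negotiate(client_offers, server_prefs):
    """Pick the first server-preferred suite the client also supports;
    with no overlap, the handshake fails and nothing gets encrypted."""
    for suite in server_prefs:
        if suite in client_offers:
            return suite
    return None   # handshake failure: no common cryptographic setting

client = ["TLS_RSA_WITH_RC4_128_SHA", "TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256"]
server = ["TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256", "TLS_RSA_WITH_AES_256_CBC_SHA"]
print(negotiate(client, server))   # TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256
print(negotiate(client, ["TLS_NULL_WITH_NULL_NULL"]))   # None
```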

HTTPS is not a perfect solution. In many cases, it really acts as a security placebo. A user may see that HTTPS is being used, but may not be aware that they are still vulnerable. The initial HTTPS connection can be hijacked (a man-in-the-middle attack) and fake certificates can be issued to phishing servers. Even if the network connection is encrypted, this does nothing to stop the web server from tracking users or serving malware, and nothing to stop vandals from attacking the web server. And all of this is before SSL exploits like Heartbleed and POODLE. In general, HTTPS should be considered a “better than nothing” solution. But it is far from perfect.

Entry Requirements

Even with all of the problems associated with SSL and HTTPS, for most uses it is still better than nothing. So why don’t more sites use HTTPS? There are really only a few barriers to entry. The EFF’s “Let’s Encrypt” project is a great solution to one of these problems and a partial solution to another. However, it doesn’t address all of the issues, and it is likely to create some new problems that the EFF has not disclosed.

Problem #1: Pay to Play
When an HTTPS client connects to an HTTPS server, the server transmits a server-side certificate as part of the cryptographic negotiation. The client then checks with a third-party certificate authority (CA server) and asks whether the server’s certificate is legitimate. This allows the client to know that the server is actually the correct server.

The server’s certificate identifies the CA network that should be used to verify the certificate. Unfortunately, if the certificate can say where to go to verify it, then bad guys can issue a certificate and tell your browser that it should be verified by a CA server run by the same bad guys. (Yes, fake-bank.com looks like your bank, and their SSL certificate even looks valid, according to fake-ca.com.) For this reason, every web browser ships with a list of known-trusted CA servers. If the CA server is not on the known-list, then it isn’t trusted by default.

If there are any problems with the server’s certificate, then the web browser issues an alert to the user. The problems include outdated/expired certificates, coming from the wrong domain, and untrusted CA servers.
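Those checks can be sketched as a toy model. Real browsers verify cryptographic signature chains rather than comparing issuer names, and the dates and issuers below are invented:

```python
from datetime import date

TRUSTED_CAS = {"VeriSign", "Thawte", "GoDaddy"}  # shipped with the browser

def check_certificate(cert, hostname, today=date(2014, 12, 15)):
    """Return the warnings a browser would raise, per the checks above."""
    warnings = []
    if cert["issuer"] not in TRUSTED_CAS:
        warnings.append("untrusted CA")      # fake-ca.com is not in the list
    if cert["expires"] < today:
        warnings.append("expired certificate")
    if cert["domain"] != hostname:
        warnings.append("wrong domain")
    return warnings

good = {"issuer": "Thawte", "expires": date(2015, 6, 1), "domain": "bank.example"}
bad  = {"issuer": "fake-ca.com", "expires": date(2013, 1, 1), "domain": "bank.example"}
print(check_certificate(good, "bank.example"))      # []
print(check_certificate(bad, "fake-bank.example"))  # ['untrusted CA', 'expired certificate', 'wrong domain']
```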

And this is where the first barrier toward wide-spread use comes in… All of those known-trusted CA servers charge a fee. If you want your web server to run with an SSL certificate that won’t generate any user warnings, then you need to pay one of these known-trusted CA servers to issue an SSL certificate for your online service. And if you run multiple services, then you need to pay them multiple times.

The problems should be obvious. Some people don’t have money to pay for the trusted certificate, or they don’t want to spend the money. You can register a domain name for $10 a year, but the SSL certificate will likely run $150 or more. If your site doesn’t need SSL, then you’re not going to pay $150 to require it.

And then there are people like me, who cannot justify paying for a security solution (SSL) that isn’t secure. I cannot justify paying $150 or more, just so web browsers won’t see a certificate warning when they connect to my HTTPS services. (I use self-signed certificates. By themselves, they are untrusted and not secure, but I offer client-side certificates. Virtually no sites use client-side certificates. But client-side certs are what actually makes SSL secure.)

The EFF’s “Let’s Encrypt” project is a free SSL CA server. With this solution, cost is no longer an entry barrier. When their site goes live, I hope to use it for my SSL needs.

Of course, other CA services, like Entrust, Thawte, and GoDaddy, may lower their prices or offer similar free services. (You cannot data-mine users unless they use your service. Even with a “free” pricing model, these CA issuers can still make a hefty profit from collected user data.) As far as the EFF’s offerings go, this is a very disruptive technology for the SSL industry.

Problem #2: Server Installation
Let’s assume that you acquired an SSL certificate from a certificate authority (Thawte, GoDaddy, Let’s Encrypt, etc.). The next step is to install the certificate on your web server.

HTTPS has never been known for its simplicity. Installing the SSL server-side certificate is a nightmare of configuration files and application-specific complexity. Unless you are a hard-core system administrator, you probably cannot do it. Even GUI interfaces like cPanel have multiple complex steps that are not for non-techies. You, as a user with a web browser, have no idea how much aggravation the system administrator went through in order to provide you with HTTPS and that little lock icon on the address bar. If they are good, then they spent hours. If it was new to them, then it could have been days.

In effect, lots of sites do not run HTTPS because it is overly complicated to install and configure. (And let’s hope that you don’t have to change certificates anytime soon…) Also, HTTPS certificates include an expiration date. This means that there is an ongoing maintenance cost that includes time and effort.
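The expiration side of that maintenance can at least be scripted. Here is a sketch of a renewal check, using hard-coded dates instead of a live certificate lookup:

```python
from datetime import date

def days_until_expiry(not_after: date, today: date) -> int:
    return (not_after - today).days

def needs_renewal(not_after: date, today: date, window_days: int = 30) -> bool:
    """Flag a certificate whose expiration falls inside the renewal window,
    so it can be replaced before browsers start throwing warnings."""
    return days_until_expiry(not_after, today) <= window_days

today = date(2014, 12, 15)                     # hypothetical check date
print(needs_renewal(date(2015, 1, 5), today))  # True: expires in 21 days
print(needs_renewal(date(2015, 6, 1), today))  # False: months of slack
```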

The EFF’s “Let’s Encrypt” solution says that it will include automated management software to help mitigate the installation and maintenance effort. This will probably work if you run one of their supported platforms and have a simple configuration file. But if you’re running a complex system with multiple domains, custom configuration files, and strict maintenance/update procedures, then no script from the EFF will assist you.

Of course, all of this is speculation, since the EFF has not announced the supported platforms yet… So far, they have only mentioned a python script for Apache servers. I assume that they mean “Apache2” and not “Apache”. And even then, the configuration at FotoForensics has been customized for my own needs, so I suspect that their solution won’t work for me out-of-the-box.

Problem #3: Client Installation
So… let’s assume that it is past Summer 2015, when Let’s Encrypt becomes available. Let’s also assume that you got the server-side certificate and their automated maintenance script running. You’ve got SSL on your server, HTTPS working, and you’re ready for users. Now everything is about to work without any problems, right? Actually, no.

As pointed out in problem #1, unknown CA servers are not in the user’s list of trusted CA servers. So every browser connecting to one of these web servers will see that ugly alert about an untrusted certificate.

Every user will need to add the new Let’s Encrypt CA servers to their trusted list. And every browser (and almost every version of every browser) does this differently. Making matters worse, lots of mobile devices do not have a way to add new CA servers. It will take years or even decades to fully resolve this problem.

Windows XP reached its “end of life” (again), yet nearly 30% of Windows computers still run XP. IPv6 has been around for nearly 20 years, yet deployment is still at less than 10% for most countries. Getting everyone in the world to update/upgrade is a massive task. It is easier to release a new system than it is to update a deployed product.

The EFF may dream of everyone updating their web browsers, but that’s not the reality. The reality is that users will be quickly trained to ignore any certificate alerts from the web browsers. This opens the door for even more phishing and malware sites. (If the EFF really wanted to solve this problem, then they would phase out the use of SSL and introduce something new.)

There is one other possibility… Along with the EFF, IdenTrust is sponsoring Let’s Encrypt. IdenTrust runs a trusted CA service that issues SSL certificates. (The cost varies from $40 per year for personal use to over $200 per year, depending on various options.) Let’s Encrypt could piggy-back off of IdenTrust. This would get past the “untrusted CA service” problem.

But if they did rely on the known-trusted IdenTrust that is already listed in every web browser… then why would anyone buy an SSL certificate from IdenTrust when they can get one for free via Let’s Encrypt? There has to be some catch here. Are they collecting user data? Every browser must verify every server, so whoever runs this free CA server knows when you connected to specific online services; that’s a lot of personal information. Or perhaps they hope to drive sales to their other products. Or maybe there will be a license agreement that prohibits the free service from commercial use. All of this would undermine the entire purpose of trying to protect users’ traffic.

Problem #4: Fake Domains
Phishing web sites, where bad guys impersonate your bank or other online service, have been using SSL certificates for years. They will register a domain like “bankofamerica.fjewahuif.com” and hope that users won’t notice the “fjewahuif” in the hostname. Then they register a real SSL certificate for their “fjewahuif.com” domain. At this point, victims see the “bankofamerica” text in the hostname and they see the valid HTTPS connection and they assume that this is legitimate.
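One rough way to surface this trick is to compare the brand the user sees against the domain that was actually registered. This naive sketch takes the last two labels as the registrable domain; real code should consult the Public Suffix List to handle TLDs like .co.uk:

```python
def registrable_domain(hostname: str) -> str:
    """Naive: take the last two labels. Real code should use the
    Public Suffix List to handle multi-label TLDs."""
    return ".".join(hostname.split(".")[-2:])

def looks_spoofed(hostname: str, brand: str) -> bool:
    """The brand appears in the hostname, but the certificate was
    actually issued for some unrelated registrable domain."""
    return brand in hostname and not registrable_domain(hostname).startswith(brand)

print(registrable_domain("bankofamerica.fjewahuif.com"))              # fjewahuif.com
print(looks_spoofed("bankofamerica.fjewahuif.com", "bankofamerica"))  # True
print(looks_spoofed("bankofamerica.com", "bankofamerica"))            # False
```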

The problem gets even more complicated when they use DNS hijacking. On rare occasions, bad guys have temporarily stolen domains and used them to capture customer information. For example, they could steal the “bankofamerica.com” domain and register a certificate for it at any of the dozens of legitimate CA servers. (If the real Bank of America uses VeriSign, then the fake Bank of America can use Thawte and nobody will notice.) With domain hijacking, it looks completely real but can actually be completely fake.

The price for an SSL certificate used to be a little deterrent. (Most scammers don’t mind paying $10 for a domain and $150 for a legitimate certificate, when the first victim will bring in a few thousands of dollars in stolen money.) But a free SSL CA server? Now there’s no reason not to run this scam. I honestly expect the volume of SSL certificate requests at the EFF’s Let’s Encrypt servers to quickly grow to 50%-80% scam requests. (A non-profit with a legal emphasis that helps scammers? As M. Night Shyamalan says in Robot Chicken: “What a twist!“)

“Free” as in “Still has a lot of work to do before it’s really ready”

The biggest concern that I have with this EFF announcement is that the technology does not exist yet. Their web site says “Arriving Summer 2015” — it’s nearly a year away. While they do have some test code available, their proposed standard is still a draft and they explicitly say to not run the code on any production systems. Until this solidifies into a public release, this is vaporware.

But I do expect this to eventually become a reality. The EFF is not doing this project alone. Let’s Encrypt is also sponsored by Mozilla, Akamai, Cisco, and IdenTrust. These are companies that know browsers, network traffic, and SSL. These are some of the biggest names and they are addressing one of the big problems on today’s Internet. I have no doubt that they are aware of these problems; I just dislike how they failed to disclose these issues when they had their Pollyannaish press release. Just because it is “free” doesn’t mean it won’t have costs for implementation, deployment, maintenance, and customer service. In the open source world, “free” does not mean “without cost”.

Overall, I do like the concept. Let’s Encrypt is intended to make it easier for web services to implement SSL. They will be removing the cost barrier and, in some cases, simplifying maintenance. However, they still face an uphill battle. Users may need to update their web browsers (or replace their old cellphones), steps need to be taken to mitigate scams, users must not be trained to habitually accept invalid certificates, and none of this helps the core issue that HTTPS is a security placebo and not a trustworthy solution. With all of these issues still needing to be addressed, I think that their service announcement a few days ago was a little premature.

Чорба от греховете на dzver: Cine Grand @ Sofia Ring

This post was syndicated from: Чорба от греховете на dzver and was written by: dzver. Original post: at Чорба от греховете на dzver

I’m happy that Arena and Cinema City finally have some competition. The new mall has “opened” a cinema with a pleasant concept: halls with just 50 seats each, spaced a decent distance apart, which let you recline and nap through the more boring films.

Advantages:
– The ads before the film take 6 minutes instead of 25!
– Privacy. The risk of someone sitting next to you and never shutting up is drastically lower. The same goes for the foot of the neighboring seat, centimeters from your head.
– Comfort. You can adjust the seat with buttons.

Disadvantages:
– The cinema is unfinished, as is the whole mall. Only 2 halls are operating.
– The popcorn stand is at the back, not at the entrance.
– Inexperienced staff. They started the wrong film for us.

There is no longer any justification for Arena charging 9/12/15; they are no longer the best cinema.

TorrentFreak: U.S. Copyright Alert System Security Could Be Improved, Review Finds

This post was syndicated from: TorrentFreak and was written by: Ernesto. Original post: at TorrentFreak

February last year the MPAA, RIAA and five major Internet providers in the United States launched their “six strikes” anti-piracy plan.

The Copyright Alert System’s main goal is to inform subscribers that their Internet connections are being used to share copyrighted material without permission. These alerts start out friendly in tone, but repeat infringers face a temporary disconnection from the Internet or other mitigation measures.

The evidence behind the accusations is provided by MarkMonitor, which monitors BitTorrent users’ activities on copyright holders’ behalf. The overseeing Center for Copyright Information (CCI) previously hired an impartial and independent technology expert to review the system, hoping to gain trust from the public.

Their first pick, Stroz Friedberg, turned out to be not that impartial, as the company had previously worked as a lobbyist for the RIAA. To correct this unfortunate choice, CCI assigned Professor Avi Rubin of Harbor Labs to re-examine the system.

This week CCI informed us that a summary of Harbor Labs’s findings is now available to the public. The full review is not being published due to the vast amount of confidential information it contains, but the overview of the findings does provide some interesting details.

Overall, Harbor Labs concludes that the evidence gathering system is solid and that false positives, cases where innocent subscribers are accused, are reasonably minimized.

“We conclude, based on our review, that the MarkMonitor AntiPiracy system is designed to ensure that there are no false positives under reasonable and realistic assumptions. Moreover, the system produces thorough case data for alleged infringement tracking.”

However, there is some room for improvement. For example, MarkMonitor could implement additional testing to ensure that false positives and human errors are indeed caught.

“… we believe that the system would benefit from additional testing and that the existing structure leaves open the potential for preventable failures. Additionally, we recommend that certain elements of operational security be enhanced,” Harbor Labs writes.

In addition, the collected evidence may need further protections to ensure that it can’t be tampered with or fall into the wrong hands.

“… we believe that this collected evidence and other potentially sensitive data is not adequately controlled. While MarkMonitor does protect the data from outside parties, its protection against inside threats (e.g., potential rogue employees) is minimal in terms of both policy and technical enforcement.”

The full recommendations as detailed in the report are as follows:

[Image: recommendations table from the Harbor Labs report]

The CCI is happy with the new results, which they say confirm the findings of the earlier Stroz Friedberg review.

“The Harbor Labs report reaffirms the findings from our first report – conducted by Stroz Friedberg – that the CAS is well designed and functioning as we hoped,” CCI informs TF.

In the months to come the operators of the Copyright Alert System will continue to work with copyright holders to make further enhancements and modifications to their processes.

“As the CAS exits the initial ramp-up period, CCI has been assured by our content owners that they have taken all recommendations made within both reports into account and are continuing to focus on maintaining the robust system that minimizes false positives and protects customer security and privacy,” CCI adds.

Meanwhile, they will continue to alert Internet subscribers to possible infringements. After nearly two years copyright holders have warned several million users, hoping to convert them to legal alternatives.

Thus far there’s no evidence that Copyright Alerts have had a significant impact on piracy rates. However, the voluntary agreement model is being widely embraced by various stakeholders and similar schemes are in the making in both the UK and Australia.

Source: TorrentFreak, for the latest info on copyright, file-sharing and anonymous VPN services.

TorrentFreak: BitTorrent Preps Sync Pro to Take on the Cloud

This post was syndicated from: TorrentFreak and was written by: Andy. Original post: at TorrentFreak

Sharing files across multiple devices used to be a laborious and time-consuming affair, but with the advent of services such as Dropbox the practice has become a breeze.

However, while many users remain unconcerned that third-party companies offering ‘cloud storage’ have control of their files, the revelations of Edward Snowden have undoubtedly increased fear of government snooping. BitTorrent Inc. pounced on those concerns last year with its new product ‘Sync’, then in alpha.

Released in early 2013, BitTorrent Sync is a tool that allows users to securely sync folders across multiple devices using the BitTorrent protocol. In terms of functionality it can be compared to any number of cloud-based services, but with one key exception: Sync does not store data in the cloud, it stores it on users’ devices instead.

The software has now reached version 1.4 and the take-up has been impressive. During August, BitTorrent Inc. confirmed that there had been 10 million user installs transferring over 80 Petabytes of data between them.

Now the company is preparing to debut Sync 2.0 with greater functionality and flexibility while maintaining the privacy of its users. For the first time it will be made available in two editions, ‘free’ and ‘pro’. So what’s the difference?

“Sync 2.0 free will be an improvement from 1.4 and there won’t be any limits on performance or size of individual folders,” BitTorrent Inc.’s Christian Averill informs TorrentFreak.

“Pro users simply get premium functionality, catered specifically to individuals with large data needs and business workgroups.”

Sync 2.0 Free Edition
● Feature improvements, to enhance sharing and syncing folders
● Updated UI across platforms, new tablet apps on Android/iOS
● General performance improvements and bug fixes
● 30-day trial period for Sync Pro
● No restrictions on performance or individual folder sizes.

Sync 2.0 Pro Edition
● Access to very large folders (TBs): allows for on-demand access to individual files
● Control over folder permissions and ownership (see image below)
● Automatic synchronization across devices: all your devices are tied via a common identity; moves the relationship from device-to-device to person-to-person
● Priority technical support

[Image: Sync Pro folder permissions screenshot]

BitTorrent Inc. believes that Sync 2.0 trumps services like Dropbox, Google Drive and OneDrive on a number of fronts. Sync 2.0 places no file-size restrictions on users, versus a 1TB limit for rivals, and files sync up to 16x faster since Sync does not rely on uploads to the cloud.

Finally, in addition to enhanced security Sync 2.0 aims to offer better value for money too. The ‘free’ edition is just that and the ‘pro’ version costs $39.99. Competitors Dropbox, Google Drive, and Microsoft OneDrive charge upwards of $83.99 for comparable services.

No firm release date has been announced for Sync 2.0 but those interested in becoming an early adopter can do so here.

Source: TorrentFreak, for the latest info on copyright, file-sharing and anonymous VPN services.

SANS Internet Storm Center, InfoCON: green: “Big Data” Needs a Trip to the Security Chiropractor! (Wed, Nov 19th)

This post was syndicated from: SANS Internet Storm Center, InfoCON: green and was written by: SANS Internet Storm Center, InfoCON: green. Original post: at SANS Internet Storm Center, InfoCON: green

When the fine folks at Portswigger updated Burp Suite last month to 1.6.07 (Nov 3), I was really glad to see NoSQL injection in the list of new features.

What's NoSQL, you ask? If your director is talking to you about Big Data or your marketing department is talking to you about customer metrics, likely what they mean is an app with a back-end database that uses NoSQL instead of a traditional SQL database.

I'm tripping over this requirement this month in the retail space. I've got clients that want to track a retail customer's visit to the store (tracking their cellphones using the store's wireless access points), to see:

  • whether customers visit the store sections where the sale items are
  • whether customers who visit area x statistically visit area y next
  • whether, having visited the above areas, customers actually purchase something
  • how many feature-sale purchases come from net-new customers (versus repeat customers)

In other words, using the wireless system to track customer movements, then correlating it back to purchase behaviour to determine how effective each feature sale might be.

So what database do folks use for applications like this? Front-runners in the NoSQL race these days include MongoDB and CouchDB. Both databases do cool things with large volumes of data, but note what MongoDB's own security documentation advises: “Ensure that MongoDB runs in a trusted network environment and limit the interfaces on which MongoDB instances listen for incoming connections. Allow only trusted clients to access the network interfaces and ports on which MongoDB instances are available.”

CouchDB's documentation has a similar statement at http://guide.couchdb.org/draft/security.html: “it should be obvious that putting a default installation into the wild is adventurous.”
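That vendor advice translates into only a few lines of server configuration. A minimal mongod.conf sketch in MongoDB's YAML config format (the interface addresses shown are placeholders, not from any real deployment):

```yaml
# mongod.conf sketch: listen only on loopback plus one internal
# interface, and require authentication. Addresses are placeholders.
net:
  port: 27017
  bindIp: 127.0.0.1,10.0.0.5
security:
  authorization: enabled
```

Of course, binding to internal interfaces only helps if the database actually sits on a trusted network, which is exactly what the public-cloud deployments discussed below tend to skip.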

So, where do I see folks deploying these databases? Why, in PUBLIC CLOUDs, that's where!

And what happens after you stand up your almost-free database and the analysis on that dataset is done? In most cases, the marketing folks who are using it simply abandon it, in a running state. What could possibly go wrong with that? Especially if they didn't tell anyone in either the IT or Security group that this database even existed?

Given that we've got hundreds of new ways to collect data that we've never had access to before, it's pretty obvious that if big data infrastructures like these aren't part of our current plans, they likely should be. All I ask is that folks do the same risk assessments that they would if this server was going up in their own datacenter. Ask some questions like:

  • What data will be on this server?
  • Who is the formal custodian of that data?
  • Is the data covered under a regulatory framework such as HIPAA or PCI? Do we need to host it inside of a specific zone or vlan?
  • What happens if this server is compromised? Will we need to disclose to anyone?
  • Who owns the operation of the server?
  • Who is responsible for securing the server?
  • Does the server have a pre-determined lifetime? Should it be deleted after some point?
  • Does the developer or marketing team that's looking at the dataset understand your regulatory requirements? Do they understand that Credit Card numbers and Patient Data are likely bad candidates for an off-prem / casual treatment like this? (Hint: NO, THEY DO NOT.)

Smart-meter applications are another big data thing I've come across lately. Laid out end-to-end, that means collecting data from hundreds of thousands of embedded devices that may or may not be securable, over a public network, to be stored in an insecurable database in a public cloud. Oh, and the collected data impinges on at least two regulatory frameworks, PCI and NERC/FERC, and possibly also privacy legislation depending on the country. Ouch!

Back to the tools for assessing these databases: Burp isn't your only option for scanning NoSQL database servers; in fact, Burp is more concerned with the web front-end to NoSQL itself. NoSQLMap (http://www.nosqlmap.net/) is another tool that's seeing a lot of traction, and of course the usual-suspects list of tools has NoSQL scripts, components and plugins: Nessus has a nice set of compliance checks for the database itself, and NMAP has scripts for CouchDB, MongoDB and Hadoop detection, as well as for mining database-specific information. OWASP has a good page on NoSQL injection at https://www.owasp.org/index.php/Testing_for_NoSQL_injection, and also check out http://opensecurity.in/nosql-exploitation-framework/.
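For readers new to the topic, the core of a NoSQL injection is simpler than it sounds: MongoDB query filters are just dictionaries, so any endpoint that deserializes client JSON straight into a filter inherits the query language's operator vocabulary. A minimal Python sketch (the handler and field names here are hypothetical, not from any real application):

```python
# Hypothetical login-handler sketch illustrating MongoDB operator
# injection. No database connection is needed to see the problem.

def build_login_filter(username, password):
    # Naive: forwards client-supplied JSON values straight into the
    # query filter that would be handed to the driver.
    return {"username": username, "password": password}

def require_string(value):
    # Mitigation: reject anything that isn't a plain string before it
    # reaches the driver, so a value like {"$ne": ""} can't smuggle
    # a query operator into the filter.
    if not isinstance(value, str):
        raise ValueError("rejected non-string value (possible operator injection)")
    return value

# A well-behaved client:
safe = build_login_filter("alice", "hunter2")

# An attacker instead posts {"username": "admin", "password": {"$ne": ""}}.
# The resulting filter matches the admin document for ANY non-empty
# password, bypassing authentication.
evil = build_login_filter("admin", {"$ne": ""})
```

This is why the scanners above probe with operator-laden payloads: if the application echoes different behaviour for `{"$ne": ""}` versus a literal string, the filter is attacker-controlled.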

Shodan is also a nice place to look in an assessment during your recon phase (for instance, take a look at http://www.shodanhq.com/search?q=MongoDB+Server+Information )

Have you used a different tool to assess a NoSQL database? Or have you had, let's say, an interesting conversation about securing data in such a database with your management or marketing group? Please, add to the story in our comment form!

===============
Rob VandenBrink
Metafore

(c) SANS Internet Storm Center. https://isc.sans.edu Creative Commons Attribution-Noncommercial 3.0 United States License.

Krebs on Security: Microsoft Releases Emergency Security Update

This post was syndicated from: Krebs on Security and was written by: BrianKrebs. Original post: at Krebs on Security

Microsoft today deviated from its regular pattern of releasing security updates on the second Tuesday of each month, pushing out an emergency patch to plug a security hole in all supported versions of Windows. The company urged Windows users to install the update as quickly as possible, noting that miscreants already are exploiting the weaknesses to launch targeted attacks.

The update (MS14-068) addresses a bug in a Windows component called the Microsoft Windows Kerberos KDC (Key Distribution Center), which handles authenticating Windows PCs on a local network. It is somewhat less of a problem for Windows home users (it is only rated critical for server versions of Windows) but it poses a serious threat to organizations. According to security vendor Shavlik, the flaw allows an attacker to elevate domain user account privileges to those of the domain administrator account.

“The attacker could forge a Kerberos Ticket and send that to the Kerberos KDC which claims the user is a domain administrator,” writes Chris Goettl, product manager with Shavlik. “From there the attacker can impersonate any domain accounts, add themselves to any group, install programs, view, change or delete data, or create any new accounts they wish. This could allow the attacker to then compromise any computer in the domain, including domain controllers. If there is a silver lining in this one it is in the fact that the attacker must have a valid domain user account to exploit the vulnerability, but once they have done so, they have the keys to the kingdom.”

The patch is one of two that Microsoft had expected to release on Patch Tuesday earlier this month, but unexpectedly pulled at the last moment.  “This is pretty severe and definitely explains why Microsoft only delayed the release and did not pull it from the November Patch Tuesday release all together,” Goettl said.

On a separate note, security experts are warning those who haven’t yet fully applied the updates from Patch Tuesday to get on with it already. Researchers with vulnerability exploit development firm Immunity have been detailing their work in devising reliable ways to exploit a critical flaw in Microsoft Secure Channel (a.k.a. “Schannel”), a security package in Windows that handles SSL/TLS encryption — which protects the privacy and security of Web browsing for Windows users. More importantly, there are signs that malicious hackers are devising their own methods of exploiting the flaw to seize control over unpatched Windows systems.

Wolfgang Kandek, chief technology officer at Qualys, said security researchers were immediately drawn to this bulletin, as it updates Microsoft’s SSL/TLS implementation to fix Remote Code Execution and Information Leakage issues that were found internally at Microsoft during a code audit.

“More information has not been made available, but in theory this sounds quite similar in scope to April’s Heartbleed problem in OpenSSL, which was widely publicized and had a number of documented abuse cases,” Kandek wrote in a blog post today. “The dark side is certainly making progress in finding an exploit for these vulnerabilities. It is now high time to patch.”

TorrentFreak: ISP Provides Free VPN to Protect Customer Privacy

This post was syndicated from: TorrentFreak and was written by: Andy. Original post: at TorrentFreak

In April a landmark ruling from the European Court of Justice declared Europe’s Data Retention Directive a violation of Internet users’ privacy and therefore invalid.

The Directive required Internet service providers and other telecommunications companies to log data on the activities of their subscribers, including who they communicate with and at what times, plus other identifying information such as IP addresses.

One of the first companies to react to the decision was Swedish ISP Bahnhof. The ISP has a reputation for objecting to what it sees as breaches of customer privacy, so did not hesitate following the Court’s announcement.

“Bahnhof stops all data storage with immediate effect. In addition, we will delete the information that was already saved,” Bahnhof CEO Jon Karlung said.

However, at the end of last month Swedish telecoms regulator PTS ordered Bahnhof to start storing communications data again under local data retention laws, warning the ISP that non-compliance would result in hefty fines.

At the time Karlung promised a “Plan B” to skirt the order, and today the details of that have emerged.

“One week remains before PTS requires a fine of five million krona ($676,500) from Bahnhof, as the company has not yet begun to store customer traffic data. Therefore, Bahnhof has chosen to activate ‘Plan B’,” Karlung announced today.

The plan involves Bahnhof reactivating data storage on November 24 as required. However, the ISP will thwart the collection of meaningful data by providing every customer with access to an anonymizing VPN service free of charge.

“The EU Court of Justice has held that it is a human right for people not to have their traffic data stored. We therefore believe that the time is ripe for VPN services to become popular,” Karlung says.

The service, called LEX Integrity, is a no-logging provider so it will be impossible for any entity to get useful information about its users.

“The EU Court of Justice has issued a ruling that the previous government chose to ignore, and the current government has been silent for so long that we are starting to lose patience,” Karlung adds.

“So now Bahnhof will resolve the situation in a responsible manner, namely by solving the whole problem. We will start to store data, but at exactly the same time we will make data storage meaningless.”

The VPN service will become active next Monday.

Source: TorrentFreak, for the latest info on copyright, file-sharing and anonymous VPN services.