Posts tagged ‘Privacy’

The Hacker Factor Blog: By Proxy

This post was syndicated from: The Hacker Factor Blog and was written by: The Hacker Factor Blog. Original post: at The Hacker Factor Blog

As I tweak and tune the firewall and IDS system at FotoForensics, I keep coming across unexpected challenges and findings. One of the challenges is related to proxies. If a user uploads prohibited content from a proxy, then my current system bans the entire proxy. An ideal solution would only ban the user.

Proxies serve a lot of different purposes. Most people think about proxies in regard to anonymity, like the TOR network. TOR is a series of proxies that ensures that the endpoint cannot identify the starting point.

However, there are other uses for proxies. Corporations frequently have a set of proxies for handling network traffic. This allows them to scan all network traffic for potential malware. It’s a great solution for mitigating the risk from one user getting a virus and passing it to everyone in the network.

Some governments run proxies as a means to filter content. China and Syria come to mind. China has a custom solution that has been dubbed the “Great Firewall of China”. They use it to restrict site access and filter content. Syria, on the other hand, appears to use a COTS (commercial off-the-shelf) solution. In my web logs, most traffic from Syria comes through Blue Coat ProxySG systems.

And then there are the proxies that are used to bypass usage limits. For example, your hotel may charge for Internet access. If there’s a tech convention in the hotel, then it’s common to see one person pay for the access, and then run his own SOCKS proxy for everyone else to relay out over the network. This gives everyone access without needing everyone to pay for the access.

Proxy Services

Proxy networks that are designed for anonymity typically don’t leak anything. If I ban a TOR node, then that node stays banned since I cannot identify individual users. However, the proxies that are designed for access typically do reveal something about the user. In fact, many proxies explicitly identify whose request is being relayed. This added information is stuffed in HTTP header fields that most web sites ignore.

For example, I recently received an HTTP request from 66.249.81.4 that contained the HTTP header “X-Forwarded-For: 82.114.168.150”. If I were to ban the user, then I would ban “66.249.81.4”, since that system connected to my server. However, 66.249.81.4 is google-proxy-66-249-81-4.google.com and is part of a proxy network. This proxy network identified who was relaying with the X-Forwarded-For header. In this case, “82.114.168.150” is someone in Yemen. If I see this reference, then I can start banning the user in Yemen rather than the Google Proxy that is used by lots of people. (NOTE: I changed the Yemen IP address for privacy, and this user didn’t upload anything requiring a ban; this is just an example.)

Unfortunately, there is no real standard here. Different proxies use different methods to denote the user being relayed. I’ve seen headers like “X-Forwarded”, “X-Forwarded-For”, “HTTP_X_FORWARDED_FOR” (yes, they actually sent this in their header; this is NOT from the Apache variable), “Forwarded”, “Forwarded-For-IP”, “Via”, and more. Unless I know to look for it, I’m liable to ban a proxy rather than a user.
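A rough sketch of what normalizing these headers might look like, assuming a plain dict of request headers. The header names come from the list above; the function, its precedence order, and the public-address check are my own illustration, not any site’s actual code:

```python
import ipaddress

# Candidate forwarding headers seen in the wild (names from the examples
# above). The order is a guess at rough trustworthiness, not any standard.
FORWARD_HEADERS = [
    "X-Forwarded-For",
    "Forwarded-For-IP",
    "X-Forwarded",
    "Forwarded",
    "HTTP_X_FORWARDED_FOR",
    "Via",
]

def relayed_client_ip(headers):
    """Return the first plausible public IP named in a forwarding header,
    or None if the request doesn't look relayed."""
    for name in FORWARD_HEADERS:
        value = headers.get(name)
        if not value:
            continue
        # X-Forwarded-For may carry a chain: "client, proxy1, proxy2"
        for token in value.split(","):
            token = token.strip()
            try:
                ip = ipaddress.ip_address(token)
            except ValueError:
                continue  # "Via" often holds hostnames, not addresses
            if not (ip.is_private or ip.is_loopback):
                return str(ip)
    return None
```

With the Google Proxy example above, `relayed_client_ip({"X-Forwarded-For": "82.114.168.150"})` would surface the relayed address rather than the proxy’s. Private and loopback addresses are skipped because, as noted below, proxies sometimes list unroutable local addresses as the relayed client.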

In some cases, I see the direct connection address also listed as the relayed address; it claims to be relaying itself. I suspect that this is caused by some kind of anti-virus system that is filtering network traffic through a local proxy. And sometimes I see private addresses (“private” as in “private use” and “should not be routed over the Internet”; not “don’t tell anyone”). These are likely home users or small companies that run a proxy for all of the computers on their local networks.

Proxy Detection

If I cannot identify the user being proxied, then just identifying that the system is a proxy can be useful. Rather than banning known proxies for three months, I might ban the proxy for only a day or a week. The reduced time should cut down on the number of people blocked because of the proxy that they used.
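As a sketch, the tiered ban policy could be as simple as a small lookup. The durations mirror the “three months” and “a day” figures mentioned here; the function itself is illustrative, not the site’s actual code:

```python
from datetime import timedelta

def ban_duration(is_known_proxy, proxied_user_identified):
    """Pick a ban length: full term when we can pin down the actual user,
    a short term when we can only ban a shared proxy."""
    if proxied_user_identified:
        return timedelta(days=90)   # ban the user behind the proxy
    if is_known_proxy:
        return timedelta(days=1)    # shared exit: keep collateral damage low
    return timedelta(days=90)       # direct connection: standard ban
```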

There are unique headers that can identify that a proxy is present. Blue Coat ProxySG, for example, adds in a unique header: “X-BlueCoat-Via: abce6cd5a6733123″. This tracking ID is unique to the Blue Coat system; every user relaying through that specific proxy gets the same unique ID. It is intended to prevent looping between Blue Coat devices. If the ProxySG system sees its own unique ID, then it has identified a loop.

Blue Coat is not the only vendor with its own proxy identifier. Fortinet’s software adds in an “X-FCCKV2” header. And Verizon silently adds in an “X-UIDH” header that contains a large binary string for tracking users.
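Detecting these vendor markers is just a membership test. The header names below are the three named in this post; the function is a sketch, assuming a dict of request headers:

```python
# Vendor-specific headers that reveal a proxy is in the path
# (the three examples named above).
PROXY_MARKERS = {
    "X-BlueCoat-Via": "Blue Coat ProxySG",
    "X-FCCKV2": "Fortinet",
    "X-UIDH": "Verizon",
}

def detect_proxy(headers):
    """Return the vendor name of the first recognized proxy marker, else None."""
    for name, vendor in PROXY_MARKERS.items():
        if name in headers:
            return vendor
    return None
```

A hit from `detect_proxy` would be enough to trigger the shorter ban window even when the relayed user cannot be identified.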

Language and Location

Besides identifying proxies, I can also identify the user’s preferred language.

The intent with specifying languages in the HTTP header is to help web sites present content in the native language. If my site supports English, German, and French, then seeing a hint that says “French” should help me automatically render the page using French. However, this can be used along with IP address geolocation to identify potential proxies. If the IP address traces to Australia but the user appears to speak Italian, then it increases the likelihood that I’m seeing an Australian proxy that is relaying for a user in Italy.

The official way to identify the user’s language is to use an HTTP “Accept-Language” header. For example, “Accept-Language: en-US,en;q=0.5” says to use the United States dialect of English, or just English if there is no dialect support at the web site. However, there are unofficial approaches to specifying the desired language. For example, many web browsers encode the user’s preferred language into the HTTP user-agent string.
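Parsing the official header is straightforward: each entry optionally carries a “q” weight, with 1.0 assumed when absent. A minimal sketch (ignoring edge cases like wildcards):

```python
def parse_accept_language(value):
    """Parse an Accept-Language header into (tag, q) pairs, best first."""
    langs = []
    for part in value.split(","):
        part = part.strip()
        if not part:
            continue
        if ";q=" in part:
            tag, q = part.split(";q=", 1)
            try:
                weight = float(q)
            except ValueError:
                weight = 0.0   # unparsable weight: treat as least preferred
        else:
            tag, weight = part, 1.0  # no q-value means full preference
        langs.append((tag.strip(), weight))
    return sorted(langs, key=lambda p: p[1], reverse=True)
```

For the example above, `parse_accept_language("en-US,en;q=0.5")` yields the US dialect first, then plain English as the fallback.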

Similarly, Facebook can relay network requests. These appear with the header “X-Facebook-Locale”. This is an unofficial way to identify when Facebook is being used as a proxy. However, it also tells me the user’s preferred language: “X-Facebook-Locale: fr_CA”. In this case, the user prefers the Canadian dialect of French (fr_CA). While the user may be located anywhere in the world, he is probably in Canada.

There’s only one standard way to specify the recipient’s language. However, there are lots of common non-standard ways. Just knowing what to look for can be a problem. But the bigger problem happens when you see conflicting language definitions.

Accept-Language: de-de,de;q=0.5

User-Agent: Mozilla/5.0 (Linux; Android 4.4.2; it-it; SAMSUNG SM-G900F/G900FXXU1ANH4 Build/KOT49H) AppleWebKit/537.36 (KHTML, like Gecko) Version/1.6 Chrome/28.0.1500.94 Mobile Safari/537.36

X-Facebook-Locale: es_LA

x-avantgo-clientlanguage: en_GB

x-ucbrowser-ua: pf(Symbian);er(U);la(en-US);up(U2/1.0.0);re(U2/1.0.0);dv(NOKIAE90);pr(UCBrowser/9.2.0.336);ov(S60V3);pi(800*352);ss(800*352);bt(GJ);pm(0);bv(0);nm(0);im(0);sr(2);nt(1)

X-OperaMini-Phone-UA: Mozilla/5.0 (Linux; U; Android 4.4.2; id-id; SM-G900T Build/id=KOT49H.G900SKSU1ANCE) AppleWebKit/534.30 (KHTML, like Gecko) Version/4.0 Mobile Safari/534.30

If I see all of these in one request, then I’ll probably choose the official header first (German from Germany). However, without the official header, would I choose Spanish from Latin America (“es_LA” is unofficial but widely used), Italian from Italy (it-it) as specified by the web browser user-agent string, or the language from one of those other fields? (Fortunately, in the real world these would likely all be the same. And you’re unlikely to see most of these fields together. Still, I have seen some conflicting fields.)
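That precedence can be sketched as a small fallback chain. The order here is the judgment call described above (official header first, then per-service fields, then the user-agent locale), not any standard; the function and regex are illustrative:

```python
import re

def preferred_language(headers):
    """Pick a language hint: official header first, then common
    unofficial fields. The precedence is a judgment call, not a standard."""
    # 1. The standard header wins outright; take its best entry.
    accept = headers.get("Accept-Language")
    if accept:
        return accept.split(",")[0].split(";")[0].strip()
    # 2. Unofficial per-service fields.
    for name in ("X-Facebook-Locale", "x-avantgo-clientlanguage"):
        if name in headers:
            return headers[name]
    # 3. Fall back to a locale embedded in the User-Agent, e.g. "; it-it;".
    ua = headers.get("User-Agent", "")
    m = re.search(r";\s*([a-z]{2}[-_][a-zA-Z]{2})\s*;", ua)
    return m.group(1) if m else None
```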

Time to Program!

So far, I have identified nearly a dozen different HTTP headers that denote some kind of proxy. Some of them identify the user behind the proxy, but others leak clues or only indicate that a proxy was used. All of this can be useful for determining how to handle a ban after someone violates my site’s terms of service, even if I don’t know who is behind the proxy.

In the near future, I should be able to identify at least some of these proxies. If I can identify the people using proxies, then I can restrict access to the user rather than the entire proxy. And if I can at least identify the proxy, then I can still try to lessen the impact for other users.

Errata Security: FBI’s crypto doublethink

This post was syndicated from: Errata Security and was written by: Robert Graham. Original post: at Errata Security

Recently, FBI Director James Comey gave a speech at the Brookings Institute decrying crypto. It was transparently Orwellian, arguing for a police-state. In this post, I’ll demonstrate why, quoting bits of the speech.

“the FBI has a sworn duty to keep every American safe from crime and terrorism”
“The people of the FBI are sworn to protect both security and liberty”

This is not true. The FBI’s oath is to “defend the Constitution”. Nowhere in the oath does it say “protect security” or “keep people safe”.

This detail is important. Tyrants suppress civil liberties in the name of national security and public safety. This oath, taken by FBI agents, military personnel, and even the president, is designed to prevent such tyrannies.

Comey repeatedly claims that FBI agents both understand their duty and are committed to it. That Comey himself misunderstands his oath disproves both assertions. This reinforces our belief that FBI agents do not see their duty as protecting our rights, but instead see rights as an impediment in pursuit of some other duty.

Freedom is Danger

The book 1984 describes the concept of “doublethink”, with political slogans as examples: “War is Peace”, “Ignorance is Strength”, and “Freedom is Slavery”. Comey goes full doublethink:

Some have suggested there is a conflict between liberty and security. I disagree. At our best, we in law enforcement, national security, and public safety are looking for security that enhances liberty. When a city posts police officers at a dangerous playground, security has promoted liberty—the freedom to let a child play without fear.

He’s wrong. Liberty and security are at odds. That’s what the 4th Amendment says. We wouldn’t be having this debate if they weren’t at odds.

He follows up with more doublethink, claiming “we aren’t seeking a back-door”, but are instead interested in “developing intercept solutions during the design phase”. Intercept solutions built into phones are the very definition of a backdoor, of course.

“terror terror terror terror terror”
“child child child child child child”

Comey mentions terrorism 5 times and child exploitation 6 times. This is transparently the tactic of the totalitarian, demagoguery based on emotion rather than reason.

Fear of terrorism after 9/11 led to the Patriot Act, granting law enforcement broad new powers in the name of terrorism. Such powers have been used overwhelmingly for everything else. The most telling example is the detainment of David Miranda in the UK under a law that supposedly only applied to terrorists. Miranda was carrying an encrypted copy of Snowden files — clearly having nothing to do with terrorism. It was an exploitation of anti-terrorism laws for the purposes of political suppression.

Any meaningful debate doesn’t start with the headline grabbing crimes, but the ordinary ones, like art theft and money laundering. Comey has to justify his draconian privacy invasion using those laws, not terrorism.

“rule of law, rule of law, rule of law, rule of law, rule of law”
Comey mentions rule-of-law five times in his speech. His intent is to demonstrate that even the FBI is subject to the law, namely review by an independent judiciary. But that isn’t true.

The independent judiciary has been significantly weakened in recent years. We have secret courts, NSLs, and judges authorizing extraordinary powers because they don’t understand technology. Companies like Apple and Google challenge half the court orders they receive, because judges just don’t understand. There is frequent “parallel construction”, where evidence from spy agencies is used against suspects, sidestepping judicial review.

What Comey really means is revealed by this statement: “I hope you know that I’m a huge believer in the rule of law. … There should be no law-free zone in this country”. This is a novel definition of “rule of law”, a “rule by law enforcement”, that has never been used before. It reveals what Comey really wants: a totalitarian police-state where nothing is beyond the police’s powers, where the only check on power is a weak and pliant judiciary.

“that a commitment to the rule of law and civil liberties is at the core of the FBI”
No, lip service to these things is at the core of the FBI.

I know this from personal experience when FBI agents showed up at my offices and threatened me, trying to get me to cancel a talk at a cybersecurity conference. They repeated over and over how they couldn’t force me to cancel my talk because I had a First Amendment right to speak — while simultaneously telling me that if I didn’t cancel my talk, they would taint my file so that I would fail background checks and thus never be able to work for the government ever again.
We saw that again when the FBI intercepted clearly labeled “attorney-client privileged” mail between Weev and his lawyer. Their excuse was that the threat of cyberterrorism trumped Weev’s rights.

Then there was that scandal that saw widespread cheating on a civil-rights test. FBI agents were required to certify, unambiguously, that nobody helped them on the test. They lied. It’s one more oath FBI agents seem not to care about.

If commitment to civil liberties were important to him, Comey would get his oath right. If commitment to rule-of-law were important, he’d get the definition right. Every single argument Comey makes demonstrates how little he is interested in civil liberties.

“Snowden Snowden Snowden”

Comey mentions Snowden three times, such as saying “In the wake of the Snowden disclosures, the prevailing view is that the government is sweeping up all of our communications”.

This is not true. No news article based on the Snowden documents claims this. No news site claims this. None of the post-Snowden activists believe this. All the people who matter know the difference between metadata and full eavesdropping, and likewise, the difficulty the FBI has in getting at that data.

This is how we know the FBI is corrupt. They ignore our concerns that government has been collecting every phone record in the United States for 7 years without public debate, but instead pretend the issue is something stupid, like the false belief they’ve been recording all phone calls. They knock down strawman arguments instead of addressing our real concerns.

Regulate communication service providers

In Orwell’s 1984, everyone had a big screen television mounted on the wall that was two-way. Citizens couldn’t turn the TV off, because it had to be blaring government propaganda all the time. The camera was active at all times in case law enforcement needed to access it. At the time the book was written in 1948, televisions were new, and people thought two-way TVs were plausible. They weren’t at that time; it was a nonsense idea.

But then the Internet happened and now two-way TVs are a real thing. And it’s not just the TV that’s become two-way video, but also our phones. If you believe the FBI follows the “rule of law” and that the courts provide sufficient oversight, then there’s no reason to stop them going full Orwell, allowing the police to turn on your device’s camera/microphone any time they have a court order in order to eavesdrop on you. After all, as Comey says, there should be no law-free zone in this country, no place law enforcement can’t touch.

Comey pretends that all he seeks at the moment is a “regulatory or legislative fix to create a level playing field, so that all communication service providers are held to the same standard” — meaning a CALEA-style backdoor allowing eavesdropping. But here’s the thing: communication is no longer a service but an app. Communication is “end-to-end”, between apps, often by different vendors, bypassing any “service provider”. There is no way to eavesdrop on those apps without being able to secretly turn on a device’s microphone remotely and listen in.

That’s why we crypto-activists draw the line here, at this point. Law enforcement backdoors in crypto inevitably means an Orwellian future.


Conclusion

There is a lot more wrong with James Comey’s speech. What I’ve focused on here were the Orwellian elements. The right to individual crypto, with no government backdoors, is the most important new human right that technology has created. Without it, the future is an Orwellian dystopia. And as proof of that, I give you James Comey’s speech, whose arguments are the very caricatures that Orwell lampooned in his books.

Schneier on Security: Surveillance in Schools

This post was syndicated from: Schneier on Security and was written by: schneier. Original post: at Schneier on Security

This essay, “Grooming students for a lifetime of surveillance,” talks about the general trends in student surveillance.

Related: essay on the need for student privacy in online learning.

Darknet - The Darkside: IPFlood – Simple Firefox Add-on To Hide Your IP Address

This post was syndicated from: Darknet - The Darkside and was written by: Darknet. Original post: at Darknet - The Darkside

IPFlood (previously IPFuck) is a Firefox add-on created to simulate the use of a proxy. It doesn’t actually change your IP address (obviously) and it doesn’t connect to a proxy either, it just changes the headers (that it can) so it appears to any web servers or software sniffing – that you are in fact [...]

The post IPFlood…

Read the full post at darknet.org.uk

Darknet - The Darkside: JPMorgan Hacked & Leaked Over 83 Million Customer Records

This post was syndicated from: Darknet - The Darkside and was written by: Darknet. Original post: at Darknet - The Darkside

So yah last week we all discovered, OMG JPMorgan Hacked! This set a lot of people on edge as JPMorgan Chase & Co is the largest US bank by assets – so it’s pretty serious business. The breach happened back in July and was only disclosed last Thursday due to a filing to the US [...]

The post JPMorgan Hacked & Leaked Over 83…

Read the full post at darknet.org.uk

Krebs on Security: Bugzilla Zero-Day Exposes Zero-Day Bugs

This post was syndicated from: Krebs on Security and was written by: BrianKrebs. Original post: at Krebs on Security

A previously unknown security flaw in Bugzilla — a popular online bug-tracking tool used by Mozilla and many of the open source Linux distributions — allows anyone to view detailed reports about unfixed vulnerabilities in a broad swath of software. Bugzilla is expected today to issue a fix for this very serious weakness, which potentially exposes a veritable gold mine of vulnerabilities that would be highly prized by cyber criminals and nation-state actors.

The Bugzilla mascot.

Multiple software projects use Bugzilla to keep track of bugs and flaws that are reported by users. The Bugzilla platform allows anyone to create an account that can be used to report glitches or security issues in those projects. But as it turns out, that same reporting mechanism can be abused to reveal sensitive information about as-yet unfixed security holes in software packages that rely on Bugzilla.

A developer or security researcher who wants to report a flaw in Mozilla Firefox, for example, can sign up for an account at Mozilla’s Bugzilla platform. Bugzilla responds automatically by sending a validation email to the address specified in the signup request. But recently, researchers at security firm Check Point Software Technologies discovered that it was possible to create Bugzilla user accounts that bypass that validation process.

“Our exploit allows us to bypass that and register using any email we want, even if we don’t have access to it, because there is no validation that you actually control that domain,” said Shahar Tal, vulnerability research team leader for Check Point. “Because of the way permissions work on Bugzilla, we can get administrative privileges by simply registering using an address from one of the domains of the Bugzilla installation owner. For example, we registered as admin@mozilla.org, and suddenly we could see every private bug under Firefox and everything else under Mozilla.”

Bugzilla is expected today to release updates to remove the vulnerability and help further secure its core product.

“An independent researcher has reported a vulnerability in Bugzilla which allows the manipulation of some database fields at the user creation procedure on Bugzilla, including the ‘login_name’ field,” said Sid Stamm, principal security and privacy engineer at Mozilla, which developed the tool and has licensed it for use under the Mozilla public license.

“This flaw allows an attacker to bypass email verification when they create an account, which may allow that account holder to assume some privileges, depending on how a particular Bugzilla instance is managed,” Stamm said. “There have been no reports from users that sensitive data has been compromised and we have no other reason to believe the vulnerability has been exploited. We expect the fixes to be released on Monday.”

The flaw is the latest in a string of critical and long-lived vulnerabilities to surface in the past year — including Heartbleed and Shellshock — that would be ripe for exploitation by nation state adversaries searching for secret ways to access huge volumes of sensitive data.

“The fact is that this was there for 10 years and no one saw it until now,” said Tal. “If nation state adversaries [had] access to private bug data, they would have a ball with this. There is no way to find out if anyone did exploit this other than going through user list and seeing if you have a suspicious user there.”

Like Heartbleed, this flaw was present in open source software to which countless developers and security experts had direct access for years on end.

“The perception that many eyes have looked at open source code and it’s secure because so many people have looked at it, I think this is false,” Tal said. “Because no one really audits code unless they’re committed to it or they’re paid to do it. This is why we can see such foolish bugs in very popular code.”

The Hacker Factor Blog: Security By Apathy

This post was syndicated from: The Hacker Factor Blog and was written by: The Hacker Factor Blog. Original post: at The Hacker Factor Blog

There are a couple of common approaches to applying security. The most recommended method is a defense in depth approach. This applies layers of independent, well-known security methods, protecting the system even when one layer is breached. For example:

  1. Your home has a front door. That’s the first layer. The door permits people to enter and leave the house. Closing the door stops access.
  2. The door has a lock. The lock is actually independent of the door. The lock can be enabled or disabled regardless of whether the door is open or closed. But the lock provides an additional security layer to the door: if the door is closed and locked, then it is harder to get into the house.
  3. The front door probably has a deadbolt. Again, this is usually independent of the lock on the doorknob. A deadbolt even has its own latch (the bolt) to deter someone from kicking in the door.
  4. Inside the house, you have an alarm system. (You do have an alarm system, right?) The alarm is another layer, just in case someone gets around the door. The alarm may use door sensors, motion sensors, pressure pads, and more. Each of these add another layer to the home’s security.
  5. You might have a dog who barks loudly or attacks intruders.
  6. Your valuables are locked down or stored in a safe. Even if the burglar gets past the door, dog, and alarm, this is yet another hurdle to contend with.
  7. And don’t forget the nosy neighbors, who call the cops every time a stranger drives down the street…

Each of these layers makes it more difficult for an attacker. With your computer, you have your NAT-enabled router that plugs into your cable or DSL modem — the router acts as a firewall, preventing uninvited traffic from entering your home. Your computer probably has its own software firewall. Your anti-virus scans all network traffic and media for hostile content. Your online services use SSL and require passwords.

All of these are different layers. Granted, some layers may not be very strong, but even the weakest ones are probably better than nothing.

Tell Nobody

Another concept is called Security by Obscurity. This is where details about some of the security layers are kept private. The belief is that the layer is safe as long as nobody knows the secret. However, as soon as someone knows the secret, the security is gone.

Lots of security gurus claim that Security by Obscurity isn’t security. But in reality, it is another layer and it works as long as it isn’t your only security precaution.

As an example, consider the lowly password. Passwords are a kind of security by obscurity. As long as you don’t tell someone your password, it is probably safe enough. Of course, if someone can guess your password then all security that it provides is gone.

However, even a weak password can be strong enough if you have other layers protecting it. One of my passwords is “Cucumber”. I’m not kidding, that’s really my password. At this point, people are probably thinking “What an idiot! He just told his password to the entire world!” Except, my password is protected by layers:

  • I didn’t identify the system or username that uses that password. This is security-by-obscurity. Without knowing where to use it, the password remains secure. (This is analogous to finding a car key and not knowing where the car is located. You can’t steal the car if you can’t find it.)
  • Even if you know the system, you still need to find the login screen. (Another security-by-obscurity.)
  • The particular system that uses that password only allows logins from a specific subnet. So you need to identify the subnet and compromise it first. This falls under defense in depth and two-part authentication: something you know (the password) and something you are (the correct network address).
  • Assuming you can get on the right network, the connection to the system requires strong encryption. You will need to crack two other passwords (or one password and a fingerprint scanner) before you can access the encrypted network keys.
  • I should also mention that the necessary subnet is protected by a firewall and IPS system, so I’m not too concerned about a network attack.
  • All of these systems are physically located in an office that has a solid metal door, two locks, an overly-complex alarm system, and a barky dog. Oh, and there’s also nosy neighbors in the adjacent offices. (Hi Beth!)

Honestly, I’m not too concerned with people knowing my “Cucumber” password since nobody can easily get past all of the other security layers.

Whatever

There are other common security practices. Like the principle of least privilege: you only have access to the things you need. Secure by default and fail securely regarding initialization and error handling. Separation of duties (aka insulation), explicit trust, multi-part authentication, break one get one, etc.

All of these concepts are great when they are used, and even better when used together. However, what we usually see is good security nullified by apathy. There are really two types of security apathy: the stuff that you control and the stuff that is beyond your control.

For example, it is up to the user to choose a good password, to not use the same password twice, and to change default passwords. However, everyone reuses passwords. And if that online service really wants a password to continue, then I’ll just supply my standard “I don’t care” password. This becomes security apathy that I can control.

Similarly, I often find people who say “I don’t care if someone breaks into my computer. I don’t have anything valuable there.” That’s security apathy. It’s also myopic since the computer is usually connected to the Internet. (“Thanks! I’ll use your computer to send spam and to host my spatula porn collection!”)

Meh…

Not all security-related apathy can be blamed on the user. My cellphone has some bloatware apps that were installed by the manufacturer. Most of these apps are buggy and some have known vulnerabilities. When I install a new app, I can see what privileges it needs and I have the option to not install. But with pre-installed apps, I don’t know what any of them want to do with my data. I cannot even turn these things off. I rarely use my cellphone for maps, but the maps app is always running. And I’ve turned off the backup/sync options, but the backup app is always sucking down my battery. Even killing the backup app is only a temporary solution since it periodically starts up all by itself.

What’s worse is that many of these undesirable and high-risk features have no patches and there is no option to delete, disable, or remove them. Every few days I get a popup asking me to update some vendor-provided app, but then it complains that there is no update available. (Yes, T-Mobile, I’m talking about your User Account app.)

With my phone, the manufacturer has demonstrated Security by Apathy. They failed to provide secure options and failed to give me the ability to remove the stuff I don’t want. I cannot make my phone secure, even if I wanted to.

A least privilege approach would be to install nothing but the bare essentials. Then I could add in the various apps that I want. I think only Google’s Android One tries to do this. Every other phone is preloaded with bloatware that directly impacts usability, battery life, and device security.

It isn’t just mobile devices that have weak security that is out of our control. When the nude celebrity photo scandal first came out, it was pointed out that Apple permitted an unlimited number of login retries. (Reportedly now fixed… kind of.) In this case, it doesn’t matter how strong the password is if I can guess as many times as I want. Every first-semester computer security student knows this. Apple’s disregard for basic security practices and a lack of desire to address the issue in a timely fashion (i.e., years before the exploit) shows nothing but apathy toward the user.

Yada Yada

Then again, there are plenty of online services that still use the dreaded security question as a backdoor to your account.

“What is your mother’s maiden name?”
“Where did you go to high school?”
“What is your pet’s name?”

Everyone who does security knows that public information should never be used to protect private data. Yet Apple and Facebook and Yahoo and nearly every other major online service still asks these questions as an alternate authentication system. (As far as I know, Google is the only company to stop using these stupid questions that offer no real security.)

It isn’t that there are no other options for validating a user. Rather, these companies typically do not care enough to provide secure alternatives. There’s usually some marketeer with a checklist: “Do we have security questions? Check!” — There’s no checkbox for “is it a good idea?”

Moreover, we cannot assume that the users will be smart enough to not provide the real answers. If the system asks for your favorite color, then most people will enter their favorite color. (Security-minded people will enter an unrelated response, random characters, or a long phrase that is unlikely to be guessed. What’s my favorite color? “The orbit of Neptune on April 27th.”)

Talk to the Hand

There are a few things that enable most of today’s security exploits. First, there is bad software that has not been through a detailed security audit but is widely deployed. Then there is the corporate desire to check off functionality regardless of the impact to security. And finally, there are users who do not care enough to take security seriously.

Educating the user is much easier said than done. In the 35+ years that I have worked with computers, I have yet to see anyone come up with a viable way to educate users. Rather, software developers should make their code idiot-proof. If users should not enter a bad password, then force the password to be strong. If you know that security questions are stupid, then don’t use them. And if you see that someone can guess the password as many times as they want, then implement a limit.
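A retry limit of the kind described takes only a few lines. This is a minimal sketch, not any vendor’s actual implementation; the threshold, lockout window, and in-memory counter are all illustrative assumptions:

```python
import time

MAX_ATTEMPTS = 5           # illustrative threshold
LOCKOUT_SECONDS = 15 * 60  # lock the account for 15 minutes

failed = {}  # username -> (failure count, time of first failure)

def check_login(username, password, verify):
    """Reject logins once too many guesses have failed."""
    count, since = failed.get(username, (0, 0.0))
    if count >= MAX_ATTEMPTS and time.time() - since < LOCKOUT_SECONDS:
        return "locked"
    if verify(username, password):
        failed.pop(username, None)  # success clears the counter
        return "ok"
    if count == 0:
        since = time.time()
    failed[username] = (count + 1, since)
    return "denied"
```

A real deployment would persist the counters and throttle by source address as well, so an attacker cannot reset the count by reconnecting.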

Yes, some code is complex and some bugs get released and some mistakes make it all the way out the door. But that doesn’t mean that we shouldn’t try. The biggest issue facing computer security and personal privacy today is not from a bug or an oversight. It’s from corporate, developer, and user apathy.

Darknet - The Darkside: iSniff-GPS – Passive Wifi Sniffing Tool With Location Data

This post was syndicated from: Darknet - The Darkside and was written by: Darknet. Original post: at Darknet - The Darkside

iSniff GPS is a passive wifi sniffing tool which sniffs for SSID probes, ARPs and MDNS (Bonjour) packets broadcast by nearby iPhones, iPads and other wireless devices. The aim is to collect data which can be used to identify each device and determine previous geographical locations, based solely on information each device discloses about…

Read the full post at darknet.org.uk

Krebs on Security: We Take Your Privacy and Security. Seriously.

This post was syndicated from: Krebs on Security and was written by: BrianKrebs. Original post: at Krebs on Security

“Please note that [COMPANY NAME] takes the security of your personal data very seriously.” If you’ve been on the Internet for any length of time, chances are very good that you’ve received at least one breach notification email or letter that includes some version of this obligatory line. But as far as lines go, this one is about as convincing as the classic break-up line, “It’s not you, it’s me.”


I was reminded of the sheer emptiness of this corporate breach-speak approximately two weeks ago, after receiving a snail mail letter from my Internet service provider — Cox Communications. In its letter, the company explained:

“On or about Aug. 13, 2014, we learned that one of our customer service representatives had her account credentials compromised by an unknown individual. This incident allowed the unauthorized person to view personal information associated with a small number of Cox accounts. The information which could have been viewed included your name, address, email address, your Secret Question/Answer, PIN and in some cases, the last four digits only of your Social Security number or drivers’ license number.”

The letter ended with the textbook offer of free credit monitoring services (through Experian, no less), and the obligatory “Please note that Cox takes the security of your personal data very seriously.” But I wondered how seriously they really take it. So, I called the number on the back of the letter, and was directed to Stephen Boggs, director of public affairs at Cox.

Boggs said that the trouble started after a female customer account representative was “socially engineered” or tricked into giving away her account credentials to a caller posing as a Cox tech support staffer. Boggs informed me that I was one of just 52 customers whose information the attacker(s) looked up after hijacking the customer service rep’s account.

The nature of the attack described by Boggs suggested two things: 1) That the login page that Cox employees use to access customer information is available on the larger Internet (i.e., it is not an internal-only application); and that 2) the customer support representative was able to access that public portal with nothing more than a username and a password.

Boggs either did not want to answer or did not know the answer to my main question: Were Cox customer support employees required to use multi-factor or two-factor authentication to access their accounts? Boggs promised to call back with a definitive response. To Cox’s credit, he did call back a few hours later, and confirmed my suspicions.

“We do use multifactor authentication in various cases,” Boggs said. “However, in this situation there was not two-factor authentication. We are taking steps based on our investigation to close this gap, as well as to conduct re-training of our customer service representatives to close that loop as well.”

This sad state of affairs is likely the same across multiple companies that claim to be protecting your personal and financial data. In my opinion, any company — particularly one in the ISP business — that isn’t using more than a username and a password to protect their customers’ personal information should be publicly shamed.

Unfortunately, most companies will not proactively take steps to safeguard this information until they are forced to do so — usually in response to a data breach.  Barring any pressure from Congress to find proactive ways to avoid breaches like this one, companies will continue to guarantee the security and privacy of their customers’ records, one breach at a time.

Matthew Garrett: My free software will respect users or it will be bullshit

This post was syndicated from: Matthew Garrett and was written by: Matthew Garrett. Original post: at Matthew Garrett

I had dinner with a friend this evening and ended up discussing the FSF’s four freedoms. The fundamental premise of the discussion was that the freedoms guaranteed by free software are largely academic unless you fall into one of two categories – someone who is sufficiently skilled in the arts of software development to examine and modify software to meet their own needs, or someone who is sufficiently privileged[1] to be able to encourage developers to modify the software to meet their needs.

The problem is that most people don’t fall into either of these categories, and so the benefits of free software are often largely theoretical to them. Concentrating on philosophical freedoms without considering whether these freedoms provide meaningful benefits to most users risks these freedoms being perceived as abstract ideals, divorced from the real world – nice to have, but fundamentally not important. How can we tie these freedoms to issues that affect users on a daily basis?

In the past the answer would probably have been along the lines of “Free software inherently respects users”, but reality has pretty clearly disproven that. Unity is free software that is fundamentally designed to tie the user into services that provide financial benefit to Canonical, with user privacy as a secondary concern. Despite Android largely being free software, many users are left with phones that no longer receive security updates[2]. Textsecure is free software but the author requests that builds not be uploaded to third party app stores because there’s no meaningful way for users to verify that the code has not been modified – and there’s a direct incentive for hostile actors to modify the software in order to circumvent the security of messages sent via it.

We’re left in an awkward situation. Free software is fundamental to providing user privacy. The ability for third parties to continue providing security updates is vital for ensuring user safety. But in the real world, we are failing to make this argument – the freedoms we provide are largely theoretical for most users. The nominal security and privacy benefits we provide frequently don’t make it to the real world. If users do wish to take advantage of the four freedoms, they frequently do so at a potential cost of security and privacy. Our focus on the four freedoms may be coming at a cost to the pragmatic freedoms that our users desire – the freedom to be free of surveillance (be that government or corporate), the freedom to receive security updates without having to purchase new hardware on a regular basis, the freedom to choose to run free software without having to give up basic safety features.

That’s why projects like the GNOME safety and privacy team are so important. This is an example of tying the four freedoms to real-world user benefits, demonstrating that free software can be written and managed in such a way that it actually makes life better for the average user. Designing code so that users are fundamentally in control of any privacy tradeoffs they make is critical to empowering users to make informed decisions. Committing to meaningful audits of all network transmissions to ensure they don’t leak personal data is vital in demonstrating that developers fundamentally respect the rights of those users. Working on designing security measures that make it difficult for a user to be tricked into handing over access to private data is going to be a necessary precaution against hostile actors, and getting it wrong is going to ruin lives.

The four freedoms are only meaningful if they result in real-world benefits to the entire population, not a privileged minority. If your approach to releasing free software is merely to ensure that it has an approved license and throw it over the wall, you’re doing it wrong. We need to design software from the ground up in such a way that those freedoms provide immediate and real benefits to our users. Anything else is a failure.

(title courtesy of My Feminism will be Intersectional or it will be Bullshit by Flavia Dzodan. While I’m less angry, I’m solidly convinced that free software that does nothing to respect or empower users is an absolute waste of time)

[1] Either in the sense of having enough money that you can simply pay, having enough background in the field that you can file meaningful bug reports or having enough followers on Twitter that simply complaining about something results in people fixing it for you

[2] The free software nature of Android often makes it possible for users to receive security updates from a third party, but this is not always the case. Free software makes this kind of support more likely, but it is in no way guaranteed.


Darknet - The Darkside: CloudFlare Introduces SSL Without Private Key

This post was syndicated from: Darknet - The Darkside and was written by: Darknet. Original post: at Darknet - The Darkside

Handing over your private key to a cloud provider so they can terminate your SSL connections and you can work at scale has always been a fairly contentious issue, a necessary evil you may say. As if your private key gets compromised, it’s a big deal and without it (previously) there’s no way a cloud [...]


Read the full post at darknet.org.uk

Schneier on Security: Security for Vehicle-to-Vehicle Communications

This post was syndicated from: Schneier on Security and was written by: schneier. Original post: at Schneier on Security

The National Highway Traffic Safety Administration (NHTSA) has released a report titled “Vehicle-to-Vehicle Communications: Readiness of V2V Technology for Application.” It’s very long, and mostly not interesting to me, but there are security concerns sprinkled throughout: both authentication to ensure that all the communications are accurate and can’t be spoofed, and privacy to ensure that the communications can’t be used to track cars. It’s nice to see this sort of thing thought about in the beginning, when the system is first being designed, and not tacked on at the end.

Darknet - The Darkside: tinfoleak – Get Detailed Info About Any Twitter User

This post was syndicated from: Darknet - The Darkside and was written by: Darknet. Original post: at Darknet - The Darkside

tinfoleak is basically an OSINT tool for Twitter, there’s not a lot of stuff like this around – the only one that comes to mind in fact is creepy – Geolocation Information Aggregator. tinfoleak is a simple Python script that allows you to obtain: basic information about a Twitter user (name, picture, location, followers, etc.) devices…

Read the full post at darknet.org.uk

The Hacker Factor Blog: Eight Is Enough

This post was syndicated from: The Hacker Factor Blog and was written by: The Hacker Factor Blog. Original post: at The Hacker Factor Blog

I must be one of those people who lives in a cave. (Well, at least it’s a man-cave.) I didn’t even realize that Apple’s iOS 8 was released until I heard all of the hoopla in the news.

When Apple did their recent big presentation, I heard about the new watch and the new iPhone, but not about the new operating system. The smart-watch didn’t impress me. At CACC last month, I saw a few people wearing devices that told the time, maintained their calendar, synced with their portable devices, and even checked their heart rates and sleep cycles. In this regard, Apple seems a little late to the game, over-priced, and limited in functionality.

The new iPhone also didn’t impress me. The only significant difference that I have heard about is the bigger screen. I find it funny that pants pockets are getting smaller and phones are getting bigger… So, where do you put this new iPhone? You can’t be expected to carry it everywhere by hand when you’re also holding a venti pumpkin spice soy latte with whip, no room. Someone really needs to build an iPhone protector that doubles as a cup-holder. (Oh wait, it exists.) Or maybe an iBelt… that hangs the iPhone like a codpiece since it is more of a symbol of geek virility than a useful mobile device.

Then again, I’m not an Apple fanatic. I use a Mac, but I don’t go out of my way to worship at the foot of the latest greatest i-device.

Sight Seeing

Apple formally announced all of these new devices on September 9th. I decided to look over the FotoForensics logs for any iOS 8 devices. Amazingly, I’ve had a few sightings… and they started months before the formal announcement.

The first place I looked was in my web server’s log files. Every browser sends its user-agent string with their web request. This usually identifies the operating system and browser. The intent is to allow web services to collect metrics about usage. If I see a bunch of people using some new web browser, then I can test my site with that browser and ensure a good user experience.

With iOS devices, they also encode the version number. So I just looked for anything claiming to be an iOS 8 device. Here are the date/time and user-agent strings that match iOS 8. I’m only showing the first instance per day:

[18/Mar/2014:18:40:39 -0500] “Mozilla/5.0 (iPad; CPU OS 8_0 like Mac OS X) AppleWebKit/538.22 (KHTML, like Gecko) Mobile/12A214”

[29/Apr/2014:13:27:58 -0500] “Mozilla/5.0 (iPhone; CPU iPhone OS 8_0 like Mac OS X) AppleWebKit/538.30.1 (KHTML, like Gecko) Mobile/12W252a”

[02/Jun/2014:16:56:45 -0500] “Mozilla/5.0 (iPhone; CPU iPhone OS 8_0 like Mac OS X) AppleWebKit/538.34.9 (KHTML, like Gecko) Version/7.0 Mobile/12A4265u Safari/9537.53”

[03/Jun/2014:16:44:38 -0500] “Mozilla/5.0 (iPhone; CPU iPhone OS 8_0 like Mac OS X) AppleWebKit/538.34.9 (KHTML, like Gecko) Version/7.0 Mobile/12A4265u Safari/9537.53″
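A search like this can be sketched with a short script. The regex and function names below are assumptions based on the sample log lines above, not the site’s actual tooling:

```python
import re

# Matches the iOS version embedded in Apple user-agent strings,
# e.g. "CPU iPhone OS 8_0 like Mac OS X" or "CPU OS 8_0 like Mac OS X".
IOS_UA = re.compile(r'CPU (?:iPhone )?OS (\d+)_(\d+)')

def ios_major_version(user_agent):
    """Return the iOS major version claimed by a user-agent string, or None."""
    m = IOS_UA.search(user_agent)
    return int(m.group(1)) if m else None

def ios8_sightings(log_lines):
    """Yield log lines whose user-agent claims iOS 8."""
    for line in log_lines:
        if ios_major_version(line) == 8:
            yield line
```

The same scan, pointed at a database of known user agents, is all it would take to raise an automated alert the first time an unreleased version shows up.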

After June 3rd, it basically became a daily appearance. The list includes iPhones and iPads. And, yes, the first few sightings came from Cupertino, California, where Apple is headquartered.

Even though iOS 8 is new, it looks like a few people have been using it for months. Product testers, demos, beta testers, etc.

Pictures?

When Apple released iOS 7, they added a new metadata field to their pictures. This field records the active-use time since the last reboot. I suspect that it is a useful metric for Apple. It also makes me wonder if iOS 8 added anything new.

As a research service, every picture uploaded to FotoForensics gets indexed for rapid searching. I searched the archive for any pictures that claim to be from an iOS 8 device. So far, there have only been five sightings. (Each photo shows personally identifiable information, selfies or pictures of text, so I won’t be linking to them.)

Amazingly, none of these initial iOS 8 photos are camera-original files. Adobe, Microsoft Windows, and other applications were used to save the pictures. The earliest picture was uploaded on 2014-07-30 at 21:32:39 GMT by someone in California, and its metadata says it was photographed on 2014-07-19.

Each of these iOS 8 photos came from an iPhone 5 or 5s device. I have yet to see any photos from an iPhone 6 device. (There was one sighting of an “iPhone 6Z” on 2013-01-30. But since it was uploaded by someone in France, I suspect that the metadata was altered.)

With the iPhone 5 and iOS 7, Apple introduced a “purple flare” problem. I don’t have many iOS 8 samples to compare against, and none are camera-originals. However, I’m not seeing the extreme artificial color correction that caused the purple flare. There’s still a distinct color correction, but it’s not as extreme. Perhaps the purple problem is fixed.

New Privacy

As far as I can tell, there is one notable new thing about iOS 8. Apple has publicly announced a change to their privacy policy. Specifically, they claim to have strong cryptography in the phones and no back doors. As a result, they will not be able to turn over any iPhone information to law enforcement, even if they have a valid subpoena. By implementing a technically strong solution and not retaining any keys, they forced their stance: it isn’t that they don’t want to help unlock a phone, it is that they technically cannot crack it in a realistic time frame.

While this stops Apple from assisting with iPhone and iPad devices that use iOS 8, it does nothing to stop Apple from turning over information uploaded to Apple’s iCloud service. (You do have the “backup to iCloud” option enabled, right?) This also does nothing to stop brute-force account guessing attacks, like the kind reportedly used to compromise celebrity nude photos. The newly deployed two-factor authentication seems like a much better solution even if it is too little too late.

Then again, I can also foresee new services that will handle your encryption keys for you, in case you lose them. After a few hundred complaints like “I lost my password and cannot access my precious kitty photos! Please help me!”, I expect that an entire market of back door options will become available for Apple users.

Behind the Eight Ball

I didn’t really pay attention to Apple’s latest releases until after they were out. However, it wouldn’t take much to make a database of known user agents and trigger an automated alert when the next Apple product first appears. It’s one thing to read about iOS 8 on Mac Rumors a few months before the release; it’s another thing to see it in my logs six months earlier.

While I don’t think much of Apple’s latest offerings, that doesn’t mean they won’t drive the market. Sometimes it’s not the product itself that drives the innovation; sometimes it’s the spaces that need filling.

TorrentFreak: Mega Demands Apology Over “Defamatory” Cyberlocker Report

This post was syndicated from: TorrentFreak and was written by: Andy. Original post: at TorrentFreak

Yesterday the Digital Citizens Alliance released a new report that looks into the business models of “shadowy” file-storage sites.

Titled “Behind The Cyberlocker Door: A Report on How Shadowy Cyberlockers Use Credit Card Companies to Make Millions,” the report attempts to detail the activities of some of the world’s most-visited hosting sites.

While it’s certainly an interesting read, the NetNames study provides a few surprises, not least the decision to include New Zealand-based cloud storage site Mega.co.nz. There can be no doubt that there are domains of dubious standing detailed in the report, but the inclusion of Mega stands out as especially odd.

Mega was without doubt the most-scrutinized file-hosting startup in history and as a result has had to comply fully with every detail of the law. And, unlike some of the other sites listed in the report, Mega isn’t hiding away behind shell companies and other obfuscation methods. It also complies fully with all takedown requests, to the point that it even took down its founder’s music, albeit following an erroneous request.

With these thoughts in mind, TorrentFreak alerted Mega to the report and asked how its inclusion amid the terminology used has been received at the company.

Grossly untrue and highly defamatory

“We consider the report grossly untrue and highly defamatory of Mega,” says Mega CEO Graham Gaylard.

“Mega is a privacy company that provides end-to-end encrypted cloud storage controlled by the customer. Mega totally refutes that it is a cyberlocker business as that term is defined and discussed in the report prepared by NetNames for the Digital Citizens Alliance.”

Gaylard also strongly rejects the implication in the report that, as a “cyberlocker”, Mega is engaged in activities often associated with such sites.

“Mega is not a haven for piracy, does not distribute malware, and definitely does not engage in illegal activities,” Gaylard says. “Mega is running a legitimate business alongside other cloud storage providers in a highly competitive market.”

The Mega CEO told us that one of the perplexing things about the report is that Mega satisfies none of the criteria it sets out for “shadowy” sites, yet the decision was still taken to include it.

Infringing content and best practices

One of the key issues is, of course, the existence of infringing content. All user-uploaded sites suffer from that problem, from YouTube to Facebook to Mega and thousands of sites in between. But, as Gaylard points out, it’s the way those sites handle the issue that counts.

“We are vigorous in complying with best practice legal take-down policies and do so very quickly. The reality though is that we receive a very low number of take-down requests because our aim is to have people use our services for privacy and security, not for sharing infringing content,” he explains.

“Mega acts very quickly to process any take-down requests in accordance with its Terms of Service and consistent with the requirements of the USA Digital Millennium Copyright Act (DMCA) process, the European Union Directive 2000/31/EC and New Zealand’s Copyright Act process. Mega operates with a very low rate of take-down requests; less than 0.1% of all files Mega stores.”

Affiliate schemes that encourage piracy

One of the other “rogue site” characteristics as outlined in the report is the existence of affiliate schemes designed to incentivize the uploading and sharing of infringing content. In respect of Mega, Gaylard rejects that assertion entirely.

“Mega’s affiliate program does not reward uploaders. There is no revenue sharing or credit for downloads or Pro purchases made by downloaders. The affiliate code cannot be embedded in a download link. It is designed to reward genuine referrers and the developers of apps who make our cloud storage platform more attractive,” he notes.

The PayPal factor

As detailed in many earlier reports (1,2,3), over the past few years PayPal has worked hard to seriously cut down on the business it conducts with companies in the file-sharing space.

Companies, Mega included, now have to obtain pre-approval from the payment processor in order to use its services. The suggestion in the report is that large “shadowy” sites aren’t able to use PayPal due to its strict acceptance criteria. Mega, however, has a good relationship with PayPal.

“Mega has been accepted by PayPal because we were able to show that we are a legitimate cloud storage site. Mega has a productive and respected relationship with PayPal, demonstrating the validity of Mega’s business,” Gaylard says.

Public apology and retraction – or else

Gaylard says that these are just some of the points that Mega finds unacceptable in the report. The CEO adds that at no point was the company contacted by NetNames or Digital Citizens Alliance for its input.

“It is unacceptable and disappointing that supposedly reputable organizations such as Digital Citizens and NetNames should see fit to attack Mega when it provides the user end to end encryption, security and privacy. They should be promoting efforts to make the Internet a safer and more trusted place. Protecting people’s privacy. That is Mega’s mission,” Gaylard says.

“We are requesting that Digital Citizens Alliance withdraw Mega from that report entirely and issue a public apology. If they do not then we will take further action,” he concludes.

TorrentFreak asked NetNames to comment on Mega’s displeasure and asked the company if it stands by its assertion that Mega is a “shadowy” cyberlocker. We received a response (although not directly to our questions) from David Price, NetNames’ head of piracy analysis.

“The NetNames report into cyberlocker operation is based on information taken from the websites of the thirty cyberlockers used for the research and our own investigation of this area, based on more than a decade of experience producing respected analysis exploring digital piracy and online distribution,” Price said.

That doesn’t sound like a retraction or an apology, so this developing dispute may have a way to go.

Source: TorrentFreak, for the latest info on copyright, file-sharing and anonymous VPN services.

LWN.net: Simply Secure announces itself

This post was syndicated from: LWN.net and was written by: jake. Original post: at LWN.net

A new organization to “make security easy and fun” has announced itself in a blog post entitled “Why Hello, World!”. Simply Secure is targeting the usability of security solutions: “If privacy and security aren’t easy and intuitive, they don’t work. Usability is key.”
The organization was started by Google and Dropbox; it also has the Open Technology Fund as one of its partners.
“To build trust and ensure quality outcomes, one core component of our work will be public audits of interfaces and code. This will help validate the security and usability claims of the efforts we support.

More generally, we aim to take a page from the open-source community and make as much of our work transparent and widely-accessible as possible. This means that as we get into the nitty-gritty of learning how to build collaborations around usably secure software, we will share our developing methodologies and expertise publicly. Over time, this will build a body of community resources that will allow all projects in this space to become more usable and more secure.”

TorrentFreak: Copyright Holders Want Netflix to Ban VPN Users

This post was syndicated from: TorrentFreak and was written by: Ernesto. Original post: at TorrentFreak

With the launch of legal streaming services such as Netflix, movie and TV fans have less reason to turn to pirate sites.

At the same time, however, these legal services attract people from countries where the licensed options are more limited. This is also the case in Australia, where up to 200,000 people are estimated to use the U.S. version of Netflix.

Although Netflix has geographical restrictions in place, these are easy to bypass with a relatively cheap VPN subscription. To keep these foreigners out, entertainment industry companies are now lobbying for a global ban on VPN users.

Simon Bush, CEO of AHEDA, an industry group that represents Twentieth Century Fox, Warner Bros., Universal, Sony Pictures and other major players said that some members are actively lobbying for such a ban.

Bush didn’t name any of the companies involved, but he confirmed to Cnet that “discussions” to block Australian access to the US version of Netflix “are happening now”.

If implemented, this would mean that VPN users worldwide would no longer be able to access Netflix. That includes the millions of Americans who pay for a legitimate account. They could still access Netflix, but would not be allowed to do so securely via a VPN.

According to Bush the discussions to keep VPN users out are not tied to Netflix’s arrival in Australia. The distributors and other rightsholders argue that they are already being deprived of licensing fees, because some Aussies ignore local services such as Quickflix.

“I know the discussions are being had…by the distributors in the United States with Netflix about Australians using VPNs to access content that they’re not licensed to access in Australia,” Bush said.

“They’re requesting for it to be blocked now, not just when it comes to Australia,” he adds.

While blocking VPNs would solve the problem for distributors, it creates a new one for VPN users in the United States.

The same happened with Hulu, which a few months ago started to block visitors who access the site through a VPN service. This blockade also applies to hundreds of thousands of U.S. citizens.

Hulu’s blocklist currently covers the IP-ranges of all major VPN services. People who try to access the site through one of these IPs are not allowed to view any content on the site, and receive the following notice instead:

“Based on your IP-address, we noticed that you are trying to access Hulu through an anonymous proxy tool. Hulu is not currently available outside the U.S. If you’re in the U.S. you’ll need to disable your anonymizer to access videos on Hulu.”
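Range-based blocking of the kind described can be sketched with Python’s standard `ipaddress` module. The networks below are documentation placeholders, not Hulu’s actual blocklist:

```python
import ipaddress

# Hypothetical VPN provider ranges -- placeholders, not real data.
BLOCKED_RANGES = [
    ipaddress.ip_network("198.51.100.0/24"),
    ipaddress.ip_network("203.0.113.0/24"),
]

def is_blocked(client_ip):
    """True if the client address falls inside any blocked VPN range."""
    addr = ipaddress.ip_address(client_ip)
    return any(addr in net for net in BLOCKED_RANGES)
```

The weakness of this approach is obvious: it punishes everyone behind the listed ranges, legitimate subscribers included, and misses any VPN exit address that hasn’t been catalogued yet.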

It seems that VPNs are increasingly attracting the attention of copyright holders. Just a week ago BBC Worldwide argued that ISPs should monitor VPN users for excessive bandwidth use, on the assumption that heavy users are pirates.

Considering the above, we can expect the calls for VPN bans to increase in the near future.

Source: TorrentFreak, for the latest info on copyright, file-sharing and anonymous VPN services.

Krebs on Security: LinkedIn Feature Exposes Email Addresses

This post was syndicated from: Krebs on Security and was written by: BrianKrebs. Original post: at Krebs on Security

One of the risks of using social media networks is having information you intend to share with only a handful of friends be made available to everyone. Sometimes that over-sharing happens because friends betray your trust, but more worrisome are the cases in which a social media platform itself exposes your data in the name of marketing.

LinkedIn has built much of its considerable worth on the age-old maxim that “it’s all about who you know”: As a LinkedIn user, you can directly connect with those you attest to knowing professionally or personally, but you can also ask to be introduced to someone you’d like to meet by sending a request through someone who bridges your separate social networks. Celebrities, executives or any other LinkedIn users who wish to avoid unsolicited contact requests may do so by selecting an option that forces the requesting party to supply the personal email address of the intended recipient.

LinkedIn’s entire social fabric begins to unravel if any user can directly connect to any other user, regardless of whether or how their social or professional circles overlap. Unfortunately for LinkedIn (and its users who wish to have their email addresses kept private), this is the exact risk introduced by the company’s built-in efforts to expand the social network’s user base.

According to researchers at the Seattle, Wash.-based firm Rhino Security Labs, at the crux of the issue is LinkedIn’s penchant for making sure you’re as connected as you possibly can be. When you sign up for a new account, for example, the service asks if you’d like to check your contacts lists at other online services (such as Gmail, Yahoo, Hotmail, etc.). The service does this so that you can connect with any email contacts that are already on LinkedIn, and so that LinkedIn can send invitations to your contacts who aren’t already users.

LinkedIn assumes that if an email address is in your contacts list, you must already know this person. But what if your entire reason for signing up with LinkedIn is to discover the private email addresses of famous people? All you’d need to do is populate your email account’s contacts list with hundreds of permutations of famous people’s names — including combinations of last names, first names and initials — in front of @gmail.com, @yahoo.com, @hotmail.com, etc. With any luck and some imagination, you may well be on your way to an A-list LinkedIn friends list (or a fantastic set of addresses for spear-phishing, stalking, etc.).
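The permutation step described above is trivial to automate. Here is a minimal sketch of what such a generator might look like; the name components, mailbox patterns and domains are purely illustrative, not a reconstruction of the researchers' actual macro:

```python
# Hypothetical sketch: build common mailbox permutations from a
# first/last name, the way the article describes. All names and
# domains here are illustrative examples.
from itertools import product

def candidate_addresses(first, last, domains):
    """Return sorted candidate addresses for a first/last name."""
    f, l = first.lower(), last.lower()
    local_parts = {
        f + l,            # johndoe
        f + "." + l,      # john.doe
        f[0] + l,         # jdoe
        f + l[0],         # johnd
        l + f,            # doejohn
        l + "." + f,      # doe.john
    }
    return sorted(lp + "@" + dom for lp, dom in product(local_parts, domains))

addrs = candidate_addresses("John", "Doe", ["gmail.com", "yahoo.com"])
print(len(addrs))  # 12 candidates: 6 local parts x 2 domains
```

A real attacker would of course use far more patterns and domains; the point is only that “several hundred possible addresses in a few seconds” requires nothing more sophisticated than a nested loop.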

LinkedIn lets you know which of your contacts aren’t members.

When you import your list of contacts from a third-party service or from a stand-alone file, LinkedIn will show you any profiles that match addresses in your contacts list. More significantly, LinkedIn helpfully tells you which email addresses in your contacts lists are not LinkedIn users.

It’s that last step that’s key to finding the email address of the targeted user to whom LinkedIn has just sent a connection request on your behalf. The service doesn’t explicitly tell you that person’s email address, but by comparing your email account’s contact list to the list of addresses that LinkedIn says don’t belong to any users, you can quickly figure out which address(es) on the contacts list correspond to the user(s) you’re trying to find.
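The deduction in that last step is just a set difference. A minimal sketch, using made-up addresses, of how the comparison works:

```python
# Hypothetical sketch of the deduction step: LinkedIn reports which
# imported addresses are NOT members; subtracting that report from
# the full list you imported leaves the address(es) that matched a
# real profile. All addresses below are illustrative.
imported = {
    "john.doe@gmail.com",
    "jdoe@gmail.com",
    "johndoe@yahoo.com",
    "doe.john@hotmail.com",
}

# Addresses the service reported as not belonging to any user.
reported_non_members = {
    "jdoe@gmail.com",
    "johndoe@yahoo.com",
    "doe.john@hotmail.com",
}

likely_real = imported - reported_non_members
print(likely_real)  # {'john.doe@gmail.com'}
```

The service never has to display the target's address directly; the attacker already holds every candidate, and the “not a member” list tells him which ones to cross off.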

Rhino Security founders Benjamin Caudill and Bryan Seely have a recent history of revealing how trust relationships between and among online services can be abused to expose or divert potentially sensitive information. Last month, the two researchers detailed how they were able to de-anonymize posts to Secret, an app-driven online service that allows people to share messages anonymously within their circle of friends, friends of friends, and publicly. In February, Seely more famously demonstrated how to use Google Maps to intercept FBI and Secret Service phone calls.

This time around, the researchers picked on Dallas Mavericks owner Mark Cuban to prove their point with LinkedIn. Using their low-tech hack, the duo was able to locate the Webmail address Cuban had used to sign up for LinkedIn. Seely said they succeeded in locating the email addresses of other celebrities using the same method about nine times out of ten.

“We created several hundred possible addresses for Cuban in a few seconds, using a Microsoft Excel macro,” Seely said. “It’s just a brute-force guessing game, but 90 percent of people are going to use an email address that includes components of their real name.”

The Rhino guys really wanted Cuban’s help in spreading the word about what they’d found, but instead of messaging Cuban directly, Seely pursued a more subtle approach: He knew Cuban’s latest start-up was Cyber Dust, a chat messenger app designed to keep your messages private. So, Seely fired off a tweet complaining that “Facebook Messenger crosses all privacy lines,” and that as a result he was switching to Cyber Dust.

When Mark Cuban retweeted Seely’s endorsement of Cyber Dust, Seely reached out to Cyber Dust CEO Ryan Ozonian, letting him know that he’d discovered Cuban’s email address on LinkedIn. In short order, Cuban was asking Rhino to test the security of Cyber Dust.

“Fortunately no major faults were found and those he found are already fixed in the coming update,” Cuban said in an email exchange with KrebsOnSecurity. “I like working with them. They look to help rather than exploit. We have learned from them and I think their experience will be valuable to other app publishers and networks as well.”

Whether LinkedIn will address the issues highlighted by Rhino Security remains to be seen. In an initial interview earlier this month, the social networking giant sounded unlikely to change anything in response.

Corey Scott, director of information security at LinkedIn, said very few of the company’s members opt in to the requirement that all new potential contacts supply the invitee’s email address before sending an invitation to connect. He added that email address-to-user mapping is a fairly common design pattern, not particularly unique to LinkedIn, and that nothing the company does will prevent people from blasting emails to lists of addresses that might belong to a targeted user, hoping that one of them will hit home.

“Email address permutators, of which there are many on the ‘Net, have existed much longer than LinkedIn, and you can blast an email to all of them, knowing that most likely one of those will hit your target,” Scott said. “This is kind of one of those challenges that all social media companies face in trying to prevent the abuse of [site] functionality. We have rate limiting, scoring and abuse detection mechanisms to prevent frequent abusers of this service, and to make sure that people can’t validate spam lists.”

In an email sent to this reporter last week, however, LinkedIn said it was planning at least two changes to the way its service handles user email addresses.

“We are in the process of implementing two short-term changes and one longer term change to give our members more control over this feature,” LinkedIn spokeswoman Nicole Leverich wrote in an emailed statement. “In the next few weeks, we are introducing new logic models designed to prevent hackers from abusing this feature. In addition, we are making it possible for members to ask us to opt out of being discoverable through this feature. In the longer term, we are looking into creating an opt-out box that members can choose to select to not be discoverable using this feature.”

Darknet - The Darkside: Google DID NOT Leak 5 Million E-mail Account Passwords

This post was syndicated from: Darknet - The Darkside and was written by: Darknet. Original post: at Darknet - The Darkside

So a big panic hit the Internet a couple of days ago when it was alleged that Google had leaked 5 Million e-mail account passwords – and these had been posted on a Russian Bitcoin forum. I was a little sceptical, as Google tends to be pretty secure on that front and they had made [...]

The post Google DID NOT Leak 5 Million E-mail Account…

Read the full post at darknet.org.uk

Schneier on Security: The Concerted Effort to Remove Data Collection Restrictions

This post was syndicated from: Schneier on Security and was written by: schneier. Original post: at Schneier on Security

Since the beginning, data privacy regulation focused on collection, storage, and use. You can see it in the OECD Privacy Framework from 1980 (see also this proposed update).

Recently, there has been a concerted effort to focus all potential regulation on data use, completely ignoring data collection. Microsoft’s Craig Mundie argues this. So does the PCAST report. And the World Economic Forum. This is a lobbying effort by US business. My guess is that the companies are much more worried about collection restrictions than use restrictions. They believe that they can slowly change use restrictions once they have the data, but that it’s harder to change collection restrictions and get the data in the first place.

We need to regulate collection as well as use. In a new essay, Chris Hoofnagle explains why.

Krebs on Security: Dread Pirate Sunk By Leaky CAPTCHA

This post was syndicated from: Krebs on Security and was written by: BrianKrebs. Original post: at Krebs on Security

Ever since October 2013, when the FBI took down the online black market and drug bazaar known as the Silk Road, privacy activists and security experts have traded conspiracy theories about how the U.S. government managed to discover the geographic location of the Silk Road Web servers. Those systems were supposed to be obscured behind the anonymity service Tor, but as court documents released Friday explain, that wasn’t entirely true: Turns out, the login page for the Silk Road employed an anti-abuse CAPTCHA service that pulled content from the open Internet, thus leaking the site’s true location.

Tor helps users disguise their identity by bouncing their traffic between different Tor servers, and by encrypting that traffic at every hop along the way. The Silk Road, like many sites that host illicit activity, relied on a feature of Tor known as “hidden services.” This feature allows anyone to offer a Web server without revealing the true Internet address to the site’s users.

That is, if you do it correctly, which involves making sure you aren’t mixing content from the regular open Internet into the fabric of a site protected by Tor. But according to federal investigators, Ross W. Ulbricht — a.k.a. the “Dread Pirate Roberts” and the 30-year-old arrested last year and charged with running the Silk Road — made this exact mistake.

As explained in the Tor how-to, in order for the Internet address of a computer to be fully hidden on Tor, the applications running on the computer must be properly configured for that purpose. Otherwise, the computer’s true Internet address may “leak” through the traffic sent from the computer.

And this is how the feds say they located the Silk Road servers:

“The IP address leak we discovered came from the Silk Road user login interface. Upon examining the individual packets of data being sent back from the website, we noticed that the headers of some of the packets reflected a certain IP address not associated with any known Tor node as the source of the packets. This IP address (the “Subject IP Address”) was the only non-Tor source IP address reflected in the traffic we examined.”

“The Subject IP Address caught our attention because, if a hidden service is properly configured to work on Tor, the source IP address of traffic sent from the hidden service should appear as the IP address of a Tor node, as opposed to the true IP address of the hidden service, which Tor is designed to conceal. When I typed the Subject IP Address into an ordinary (non-Tor) web browser, a part of the Silk Road login screen (the CAPTCHA prompt) appeared. Based on my training and experience, this indicated that the Subject IP Address was the IP address of the SR Server, and that it was ‘leaking’ from the SR Server because the computer code underlying the login interface was not properly configured at the time to work on Tor.”
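The filtering step the declaration describes boils down to checking each observed source address against the public list of Tor relays. A minimal sketch of that idea, with made-up addresses standing in for real packet captures and the real Tor consensus:

```python
# Hypothetical sketch of the investigators' filtering step: flag any
# packet source IP that does not match a known Tor relay. The relay
# list and observed addresses below are illustrative only.
known_tor_relays = {"198.51.100.7", "203.0.113.22", "192.0.2.41"}

observed_sources = [
    "198.51.100.7",
    "203.0.113.22",
    "93.184.216.34",   # not a known relay -> candidate "Subject IP Address"
    "192.0.2.41",
]

non_tor = [ip for ip in observed_sources if ip not in known_tor_relays]
print(non_tor)  # ['93.184.216.34']
```

In practice the consensus of Tor relay addresses is public, so once a hidden service leaks even one request from its true address, that address stands out as the only source that appears in no relay list.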

For many Tor fans and advocates, The Dread Pirate Roberts’ goof will no doubt be labeled a noob mistake — and perhaps it was. But as I’ve said time and again, staying anonymous online is hard work, even for those of us who are relatively experienced at it. It’s so difficult, in fact, that even hardened cybercrooks eventually slip up in important and often fateful ways (that is, if someone or something was around at the time to keep a record of it).

A copy of the government’s declaration on how it located the Silk Road servers is here (PDF). A hat tip to Nicholas Weaver for the heads up about this filing.

A snapshot of offerings on the Silk Road.

lcamtuf's blog: Some notes on web tracking and related mechanisms

This post was syndicated from: lcamtuf's blog and was written by: Michal Zalewski. Original post: at lcamtuf's blog

Artur Janc and I put together a nice, in-depth overview of all the known fingerprinting and tracking vectors that appear to be present in modern browsers. This is an interesting, polarizing, and poorly-studied area; my main hope is that the doc will bring some structure to the discussions of privacy consequences of existing and proposed web APIs – and help vendors and standards bodies think about potential solutions in a more holistic way.

That’s it – carry on!

Darknet - The Darkside: Massive Celeb Leak Brings iCloud Security Into Question

This post was syndicated from: Darknet - The Darkside and was written by: Darknet. Original post: at Darknet - The Darkside

So this leak has caused quite a furore, normally I don’t pay attention to this stuff – but hey it’s JLaw and it’s a LOT of celebs at the same time – which indicates some kind of underlying problem. The massive list of over 100 celebs was posted originally on 4chan (of course) by an [...]

The post Massive Celeb Leak…

Read the full post at darknet.org.uk

TorrentFreak: Dotcom Loses Bid to Keep Assets Secret from Hollywood

This post was syndicated from: TorrentFreak and was written by: Andy. Original post: at TorrentFreak

20th Century Fox, Disney, Paramount, Universal, Columbia Pictures and Warner Bros are engaged in a huge battle with Kim Dotcom.

They believe that legal action currently underway against the Megaupload founder could lead to them receiving a sizable damages award should they win their case. But Dotcom’s lavish lifestyle gives them concerns. The more he spends, the less they could receive should the money begin to run out.

Those concerns were addressed by the High Court’s Judge Courtney, who previously ordered Dotcom to disclose the details of his worldwide assets to his Hollywood adversaries. Dotcom filed an appeal which will be heard in October, but that date is beyond the ordered disclosure date.

As a result, Dotcom took his case to the Court of Appeal in the hope of staying the disclosure order.

That bid has now failed.

Dotcom’s legal team argued that their client’s October appeal would be rendered pointless if he was required to hand over financial information in advance. They also insisted a stay would not negatively affect the studios, since millions in assets are currently restrained in New Zealand and elsewhere.

However, as explained by the Court of Appeal, any decision to stay a judgment is a balancing act between the rights of the successful party (Hollywood) to enforce its judgment and the consequences for both parties should the stay be granted or denied.

While the Court agreed that Dotcom’s appeal would be rendered pointless if disclosure to Hollywood was ordered, it rejected the argument that this would have an adverse effect on Dotcom.

“[T]he mere fact that appeal rights are rendered nugatory is not necessarily determinative and in the circumstances of this case I consider that this consequence carries little weight. This is because Mr Dotcom himself does not assert that there will be any adverse effect on him if deprived of an effective appeal,” the decision reads.

The Court also rejected the argument put forward by Dotcom’s lawyer that the disclosure of financial matters would be a threat to privacy and amounted to an “unreasonable search”.

The Court did, however, acknowledge that Dotcom’s appeal would deal with genuine issues. That said, the concern that he might dispose of assets outweighed those issues in this instance.

In respect of the effect of a stay on the studios, the Court looked at potential damages in the studios’ legal action against the Megaupload founder. Dotcom’s expert predicted damages “well below” US$10m, while the studios’ expert predicted in excess of US$100m.

The Court noted that Dotcom has now revealed that his personal assets restrained in both New Zealand and Hong Kong are together worth “not less” than NZ$33.93 million (US$28.39m). However, all of Dotcom’s assets are subject to a potential claim from his estranged wife, Mona, so the Court judged Dotcom’s share to be around NZ$17m.

As a result the Court accepted that there was an arguable case that eventual damages would be more than the value of assets currently restrained in New Zealand.

As a result, Dotcom is ordered to hand over the details of his financial assets, “wherever they are located”, to the lawyers acting for the studios. There are restrictions on access to that information, however.

“The respondents’ solicitors are not to disclose the contents of the affidavit to any person without the leave of the Court,” the decision reads.

As legal proceedings in New Zealand continue, eyes now turn to Hong Kong. In addition to Dotcom’s personal wealth subject to the restraining order detailed above, an additional NZ$25m owned by Megaupload and Vestor Limited is frozen in Hong Kong. Next week Dotcom’s legal team will attempt to have the restraining order lifted.

Source: TorrentFreak, for the latest info on copyright, file-sharing and anonymous VPN services.

Backblaze Blog: That Was One UGLY Email

This post was syndicated from: Backblaze Blog and was written by: Andy. Original post: at Backblaze Blog

Our customer newsletter email for August was “Major Ugly.” We’re sorry. Let me explain. About once a month the marketing folks at Backblaze send out a newsletter via email to our customers. These emails are sent conforming to our customer privacy policy and terms of service. Only customers (not trial users) receive the newsletter and […]