Posts tagged ‘Privacy’

Krebs on Security: We Take Your Privacy and Security. Seriously.

This post was syndicated from: Krebs on Security and was written by: BrianKrebs. Original post: at Krebs on Security

“Please note that [COMPANY NAME] takes the security of your personal data very seriously.” If you’ve been on the Internet for any length of time, chances are very good that you’ve received at least one breach notification email or letter that includes some version of this obligatory line. But as far as lines go, this one is about as convincing as the classic break-up line, “It’s not you, it’s me.”


I was reminded of the sheer emptiness of this corporate breach-speak approximately two weeks ago, after receiving a snail mail letter from my Internet service provider — Cox Communications. In its letter, the company explained:

“On or about Aug. 13, 2014, we learned that one of our customer service representatives had her account credentials compromised by an unknown individual. This incident allowed the unauthorized person to view personal information associated with a small number of Cox accounts. The information which could have been viewed included your name, address, email address, your Secret Question/Answer, PIN and in some cases, the last four digits only of your Social Security number or drivers’ license number.”

The letter ended with the textbook offer of free credit monitoring services (through Experian, no less), and the obligatory “Please note that Cox takes the security of your personal data very seriously.” But I wondered how seriously they really take it. So, I called the number on the back of the letter, and was directed to Stephen Boggs, director of public affairs at Cox.

Boggs said that the trouble started after a female customer account representative was “socially engineered” or tricked into giving away her account credentials to a caller posing as a Cox tech support staffer. Boggs informed me that I was one of just 52 customers whose information the attacker(s) looked up after hijacking the customer service rep’s account.

The nature of the attack described by Boggs suggested two things: 1) That the login page that Cox employees use to access customer information is available on the larger Internet (i.e., it is not an internal-only application); and that 2) the customer support representative was able to access that public portal with nothing more than a username and a password.

Boggs either did not want to answer or did not know the answer to my main question: Were Cox customer support employees required to use multi-factor or two-factor authentication to access their accounts? Boggs promised to call back with a definitive response. To Cox’s credit, he did call back a few hours later, and confirmed my suspicions.

“We do use multifactor authentication in various cases,” Boggs said. “However, in this situation there was not two-factor authentication. We are taking steps based on our investigation to close this gap, as well as to conduct re-training of our customer service representatives to close that loop as well.”

This sad state of affairs is likely the same across multiple companies that claim to be protecting your personal and financial data. In my opinion, any company — particularly one in the ISP business — that isn’t using more than a username and a password to protect their customers’ personal information should be publicly shamed.

Unfortunately, most companies will not proactively take steps to safeguard this information until they are forced to do so — usually in response to a data breach.  Barring any pressure from Congress to find proactive ways to avoid breaches like this one, companies will continue to guarantee the security and privacy of their customers’ records, one breach at a time.

Matthew Garrett: My free software will respect users or it will be bullshit

This post was syndicated from: Matthew Garrett and was written by: Matthew Garrett. Original post: at Matthew Garrett

I had dinner with a friend this evening and ended up discussing the FSF’s four freedoms. The fundamental premise of the discussion was that the freedoms guaranteed by free software are largely academic unless you fall into one of two categories – someone who is sufficiently skilled in the arts of software development to examine and modify software to meet their own needs, or someone who is sufficiently privileged[1] to be able to encourage developers to modify the software to meet their needs.

The problem is that most people don’t fall into either of these categories, and so the benefits of free software are often largely theoretical to them. Concentrating on philosophical freedoms without considering whether these freedoms provide meaningful benefits to most users risks these freedoms being perceived as abstract ideals, divorced from the real world – nice to have, but fundamentally not important. How can we tie these freedoms to issues that affect users on a daily basis?

In the past the answer would probably have been along the lines of “Free software inherently respects users”, but reality has pretty clearly disproven that. Unity is free software that is fundamentally designed to tie the user into services that provide financial benefit to Canonical, with user privacy as a secondary concern. Despite Android largely being free software, many users are left with phones that no longer receive security updates[2]. Textsecure is free software but the author requests that builds not be uploaded to third party app stores because there’s no meaningful way for users to verify that the code has not been modified – and there’s a direct incentive for hostile actors to modify the software in order to circumvent the security of messages sent via it.

We’re left in an awkward situation. Free software is fundamental to providing user privacy. The ability for third parties to continue providing security updates is vital for ensuring user safety. But in the real world, we are failing to make this argument – the freedoms we provide are largely theoretical for most users. The nominal security and privacy benefits we provide frequently don’t make it to the real world. If users do wish to take advantage of the four freedoms, they frequently do so at a potential cost of security and privacy. Our focus on the four freedoms may be coming at a cost to the pragmatic freedoms that our users desire – the freedom to be free of surveillance (be that government or corporate), the freedom to receive security updates without having to purchase new hardware on a regular basis, the freedom to choose to run free software without having to give up basic safety features.

That’s why projects like the GNOME safety and privacy team are so important. This is an example of tying the four freedoms to real-world user benefits, demonstrating that free software can be written and managed in such a way that it actually makes life better for the average user. Designing code so that users are fundamentally in control of any privacy tradeoffs they make is critical to empowering users to make informed decisions. Committing to meaningful audits of all network transmissions to ensure they don’t leak personal data is vital in demonstrating that developers fundamentally respect the rights of those users. Working on designing security measures that make it difficult for a user to be tricked into handing over access to private data is going to be a necessary precaution against hostile actors, and getting it wrong is going to ruin lives.

The four freedoms are only meaningful if they result in real-world benefits to the entire population, not a privileged minority. If your approach to releasing free software is merely to ensure that it has an approved license and throw it over the wall, you’re doing it wrong. We need to design software from the ground up in such a way that those freedoms provide immediate and real benefits to our users. Anything else is a failure.

(title courtesy of My Feminism will be Intersectional or it will be Bullshit by Flavia Dzodan. While I’m less angry, I’m solidly convinced that free software that does nothing to respect or empower users is an absolute waste of time)

[1] Either in the sense of having enough money that you can simply pay, having enough background in the field that you can file meaningful bug reports or having enough followers on Twitter that simply complaining about something results in people fixing it for you

[2] The free software nature of Android often makes it possible for users to receive security updates from a third party, but this is not always the case. Free software makes this kind of support more likely, but it is in no way guaranteed.

comment count unavailable comments

Darknet - The Darkside: CloudFlare Introduces SSL Without Private Key

This post was syndicated from: Darknet - The Darkside and was written by: Darknet. Original post: at Darknet - The Darkside

Handing over your private key to a cloud provider so they can terminate your SSL connections and you can work at scale has always been a fairly contentious issue, a necessary evil you may say. After all, if your private key gets compromised, it’s a big deal, and without it (previously) there’s no way a cloud [...]


Read the full post at darknet.org.uk

Schneier on Security: Security for Vehicle-to-Vehicle Communications

This post was syndicated from: Schneier on Security and was written by: schneier. Original post: at Schneier on Security

The National Highway Traffic Safety Administration (NHTSA) has released a report titled “Vehicle-to-Vehicle Communications: Readiness of V2V Technology for Application.” It’s very long, and mostly not interesting to me, but there are security concerns sprinkled throughout: both authentication to ensure that all the communications are accurate and can’t be spoofed, and privacy to ensure that the communications can’t be used to track cars. It’s nice to see this sort of thing thought about in the beginning, when the system is first being designed, and not tacked on at the end.

Darknet - The Darkside: tinfoleak – Get Detailed Info About Any Twitter User

This post was syndicated from: Darknet - The Darkside and was written by: Darknet. Original post: at Darknet - The Darkside

tinfoleak is basically an OSINT tool for Twitter; there’s not a lot of stuff like this around – the only one that comes to mind, in fact, is creepy – Geolocation Information Aggregator. tinfoleak is a simple Python script that allows you to obtain basic information about a Twitter user (name, picture, location, followers, etc.), devices…


The Hacker Factor Blog: Eight Is Enough

This post was syndicated from: The Hacker Factor Blog and was written by: The Hacker Factor Blog. Original post: at The Hacker Factor Blog

I must be one of those people who lives in a cave. (Well, at least it’s a man-cave.) I didn’t even realize that Apple’s iOS 8 was released until I heard all of the hoopla in the news.

When Apple did their recent big presentation, I heard about the new watch and the new iPhone, but not about the new operating system. The smart-watch didn’t impress me. At CACC last month, I saw a few people wearing devices that told the time, maintained their calendar, synced with their portable devices, and even checked their heart rates and sleep cycles. In this regard, Apple seems a little late to the game, over-priced, and limited in functionality.

The new iPhone also didn’t impress me. The only significant difference that I have heard about is the bigger screen. I find it funny that pants pockets are getting smaller and phones are getting bigger… So, where do you put this new iPhone? You can’t be expected to carry it everywhere by hand when you’re also holding a venti pumpkin spice soy latte with whip no room. Someone really needs to build an iPhone protector that doubles as a cup-holder. (Oh wait, it exists.) Or maybe an iBelt… that hangs the iPhone like a codpiece since it is more of a symbol of geek virility than a useful mobile device.

Then again, I’m not an Apple fanatic. I use a Mac, but I don’t go out of the way to worship at the foot of the latest greatest i-device.

Sight Seeing

Apple formally announced all of these new devices on September 9th. I decided to look over the FotoForensics logs for any iOS 8 devices. Amazingly, I’ve had a few sightings… and they started months before the formal announcement.

The first place I looked was in my web server’s log files. Every browser sends its user-agent string with each web request. This usually identifies the operating system and browser. The intent is to allow web services to collect metrics about usage. If I see a bunch of people using some new web browser, then I can test my site with that browser and ensure a good user experience.

With iOS devices, the user-agent string also encodes the operating system’s version number. So I just looked for anything claiming to be an iOS 8 device. Here are the date/time and user-agent strings that match iOS 8. I’m only showing the first instance per day:

[18/Mar/2014:18:40:39 -0500] “Mozilla/5.0 (iPad; CPU OS 8_0 like Mac OS X) AppleWebKit/538.22 (KHTML, like Gecko) Mobile/12A214”

[29/Apr/2014:13:27:58 -0500] “Mozilla/5.0 (iPhone; CPU iPhone OS 8_0 like Mac OS X) AppleWebKit/538.30.1 (KHTML, like Gecko) Mobile/12W252a”

[02/Jun/2014:16:56:45 -0500] “Mozilla/5.0 (iPhone; CPU iPhone OS 8_0 like Mac OS X) AppleWebKit/538.34.9 (KHTML, like Gecko) Version/7.0 Mobile/12A4265u Safari/9537.53”

[03/Jun/2014:16:44:38 -0500] “Mozilla/5.0 (iPhone; CPU iPhone OS 8_0 like Mac OS X) AppleWebKit/538.34.9 (KHTML, like Gecko) Version/7.0 Mobile/12A4265u Safari/9537.53”

After June 3rd, it basically became a daily appearance. The list includes iPhones and iPads. And, yes, the first few sightings came from Cupertino, California, where Apple is headquartered.

Even though iOS 8 is new, it looks like a few people have been using it for months. Product testers, demos, beta testers, etc.
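The filtering described above is easy to automate. Here is a sketch of the approach, assuming Apache-style access logs with a bracketed timestamp; the regexes and function name are my own, not anything from FotoForensics:

```python
import re

# Matches the OS token in Apple user-agent strings,
# e.g. "CPU OS 8_0 like Mac OS X" or "CPU iPhone OS 8_0 like Mac OS X"
IOS8_RE = re.compile(r"CPU (?:iPhone )?OS 8_\d")

# Leading "[dd/Mon/yyyy:..." timestamp of an Apache-style access log line
DATE_RE = re.compile(r"\[(\d{2}/\w{3}/\d{4})")

def first_sighting_per_day(log_lines):
    """Return {date: first matching line} for lines whose UA claims iOS 8."""
    seen = {}
    for line in log_lines:
        if IOS8_RE.search(line):
            m = DATE_RE.search(line)
            day = m.group(1) if m else "unknown"
            seen.setdefault(day, line)   # keep only the first hit per day
    return seen
```

Pointing a script like this at a database of known user agents is all it would take to fire an automated alert the first time an unreleased product shows up in the logs.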

Pictures?

When Apple released iOS 7, they added a new metadata field to their pictures. This field records the active-use time since the last reboot. I suspect that it is a useful metric for Apple. It also makes me wonder if iOS 8 added anything new.

FotoForensics is a research service, and every picture uploaded gets indexed for rapid searching. I searched the archive for any pictures that claim to come from an iOS 8 device. So far, there have only been five sightings. (Each photo shows personally identifiable information, selfies or pictures of text, so I won’t be linking to them.)

Amazingly, none of these initial iOS 8 photos are camera-original files. Adobe, Microsoft Windows, and other applications were used to save the pictures. The earliest was uploaded on 2014-07-30 at 21:32:39 GMT by someone in California, and its metadata says it was photographed on 2014-07-19.

Each of these iOS 8 photos came from an iPhone 5 or 5s device. I have yet to see any photos from an iPhone 6 device. (There was one sighting of an “iPhone 6Z” on 2013-01-30. But since it was uploaded by someone in France, I suspect that the metadata was altered.)

With the iPhone 5 and iOS 7, Apple introduced a “purple flare” problem. I don’t have many iOS 8 samples to compare against, and none are camera-originals. However, I’m not seeing the extreme artificial color correction that caused the purple flare. There’s still a distinct color correction, but it’s not as extreme. Perhaps the purple problem is fixed.

New Privacy

As far as I can tell, there is one notable new thing about iOS 8. Apple has publicly announced a change to their privacy policy. Specifically, they claim to have strong cryptography in the phones and no back doors. As a result, they will not be able to turn over any iPhone information to law enforcement, even if they have a valid subpoena. By implementing a technically strong solution and not retaining any keys, they forced their stance: it isn’t that they don’t want to help unlock a phone, it is that they technically cannot crack it in a realistic time frame.

While this stops Apple from assisting with iPhone and iPad devices that use iOS 8, it does nothing to stop Apple from turning over information uploaded to Apple’s iCloud service. (You do have the “backup to iCloud” option enabled, right?) This also does nothing to stop brute-force account guessing attacks, like the kind reportedly used to compromise celebrity nude photos. The newly deployed two-factor authentication seems like a much better solution even if it is too little too late.

Then again, I can also foresee new services that will handle your encryption keys for you, in case you lose them. After a few hundred complaints like “I lost my password and cannot access my precious kitty photos! Please help me!”, I expect that an entire market of back door options will become available for Apple users.

Behind the Eight Ball

I didn’t really pay attention to Apple’s latest releases until after they were out. However, it wouldn’t take much to make a database of known user agents and trigger an automated alert when the next Apple product first appears. It’s one thing to read about iOS 8 on Mac Rumors a few months before the release; it’s another thing to see it in my logs six months earlier.

While I don’t think much of Apple’s latest offerings, that doesn’t mean they won’t drive the market. Sometimes it’s not the product itself that drives the innovation; sometimes it’s the spaces that need filling.

TorrentFreak: Mega Demands Apology Over “Defamatory” Cyberlocker Report

This post was syndicated from: TorrentFreak and was written by: Andy. Original post: at TorrentFreak

Yesterday the Digital Citizens Alliance released a new report that looks into the business models of “shadowy” file-storage sites.

Titled “Behind The Cyberlocker Door: A Report on How Shadowy Cyberlockers Use Credit Card Companies to Make Millions,” the report attempts to detail the activities of some of the world’s most-visited hosting sites.

While it’s certainly an interesting read, the NetNames study provides a few surprises, not least the decision to include New Zealand-based cloud storage site Mega.co.nz. There can be no doubt that there are domains of dubious standing detailed in the report, but the inclusion of Mega stands out as especially odd.

Mega was without doubt the most-scrutinized file-hosting startup in history and as a result has had to comply fully with every detail of the law. And, unlike some of the other sites listed in the report, Mega isn’t hiding away behind shell companies and other obfuscation methods. It also complies fully with all takedown requests, to the point that it even took down its founder’s music, albeit following an erroneous request.

With these thoughts in mind, TorrentFreak alerted Mega to the report and asked how its inclusion, and the terminology used, had been received at the company.

Grossly untrue and highly defamatory

“We consider the report grossly untrue and highly defamatory of Mega,” says Mega CEO Graham Gaylard.

“Mega is a privacy company that provides end-to-end encrypted cloud storage controlled by the customer. Mega totally refutes that it is a cyberlocker business as that term is defined and discussed in the report prepared by NetNames for the Digital Citizens Alliance.”

Gaylard also strongly rejects the implication in the report that, as a “cyberlocker,” Mega is engaged in activities often associated with such sites.

“Mega is not a haven for piracy, does not distribute malware, and definitely does not engage in illegal activities,” Gaylard says. “Mega is running a legitimate business alongside other cloud storage providers in a highly competitive market.”

The Mega CEO told us that one of the perplexing things about the report is that Mega satisfies none of the criteria the report sets out for “shadowy” sites, yet the decision was still taken to include it.

Infringing content and best practices

One of the key issues is, of course, the existence of infringing content. All user-uploaded sites suffer from that problem, from YouTube to Facebook to Mega and thousands of sites in between. But, as Gaylard points out, it’s the way those sites handle the issue that counts.

“We are vigorous in complying with best practice legal take-down policies and do so very quickly. The reality though is that we receive a very low number of take-down requests because our aim is to have people use our services for privacy and security, not for sharing infringing content,” he explains.

“Mega acts very quickly to process any take-down requests in accordance with its Terms of Service and consistent with the requirements of the USA Digital Millennium Copyright Act (DMCA) process, the European Union Directive 2000/31/EC and New Zealand’s Copyright Act process. Mega operates with a very low rate of take-down requests; less than 0.1% of all files Mega stores.”

Affiliate schemes that encourage piracy

One of the other “rogue site” characteristics as outlined in the report is the existence of affiliate schemes designed to incentivize the uploading and sharing of infringing content. In respect of Mega, Gaylard rejects that assertion entirely.

“Mega’s affiliate program does not reward uploaders. There is no revenue sharing or credit for downloads or Pro purchases made by downloaders. The affiliate code cannot be embedded in a download link. It is designed to reward genuine referrers and the developers of apps who make our cloud storage platform more attractive,” he notes.

The PayPal factor

As detailed in many earlier reports (1,2,3), over the past few years PayPal has worked hard to seriously cut down on the business it conducts with companies in the file-sharing space.

Companies, Mega included, now have to obtain pre-approval from the payment processor in order to use its services. The suggestion in the report is that large “shadowy” sites aren’t able to use PayPal due to its strict acceptance criteria. Mega, however, has a good relationship with PayPal.

“Mega has been accepted by PayPal because we were able to show that we are a legitimate cloud storage site. Mega has a productive and respected relationship with PayPal, demonstrating the validity of Mega’s business,” Gaylard says.

Public apology and retraction – or else

Gaylard says that these are just some of the points that Mega finds unacceptable in the report. The CEO adds that at no point was the company contacted by NetNames or Digital Citizens Alliance for its input.

“It is unacceptable and disappointing that supposedly reputable organizations such as Digital Citizens and NetNames should see fit to attack Mega when it provides the user end to end encryption, security and privacy. They should be promoting efforts to make the Internet a safer and more trusted place. Protecting people’s privacy. That is Mega’s mission,” Gaylard says.

“We are requesting that Digital Citizens Alliance withdraw Mega from that report entirely and issue a public apology. If they do not then we will take further action,” he concludes.

TorrentFreak asked NetNames to comment on Mega’s displeasure and asked the company if it stands by its assertion that Mega is a “shadowy” cyberlocker. We received a response (although not directly to our questions) from David Price, NetNames’ head of piracy analysis.

“The NetNames report into cyberlocker operation is based on information taken from the websites of the thirty cyberlockers used for the research and our own investigation of this area, based on more than a decade of experience producing respected analysis exploring digital piracy and online distribution,” Price said.

That doesn’t sound like a retraction or an apology, so this developing dispute may have a way to go.

Source: TorrentFreak, for the latest info on copyright, file-sharing and anonymous VPN services.

LWN.net: Simply Secure announces itself

This post was syndicated from: LWN.net and was written by: jake. Original post: at LWN.net

A new organization to “make security easy and fun” has announced itself in a blog post entitled “Why Hello, World!”. Simply Secure is targeting the usability of security solutions: “If privacy and security aren’t easy and intuitive, they don’t work. Usability is key.”
The organization was started by Google and Dropbox; it also has the Open Technology Fund as one of its partners.
“To build trust and ensure quality outcomes, one core component of our work will be public audits of interfaces and code. This will help validate the security and usability claims of the efforts we support.

More generally, we aim to take a page from the open-source community and make as much of our work transparent and widely-accessible as possible. This means that as we get into the nitty-gritty of learning how to build collaborations around usably secure software, we will share our developing methodologies and expertise publicly. Over time, this will build a body of community resources that will allow all projects in this space to become more usable and more secure.”

TorrentFreak: Copyright Holders Want Netflix to Ban VPN Users

This post was syndicated from: TorrentFreak and was written by: Ernesto. Original post: at TorrentFreak

With the launch of legal streaming services such as Netflix, movie and TV fans have less reason to turn to pirate sites.

At the same time, however, these legal services attract people from countries where the licensed options are more limited. This is also the case in Australia, where up to 200,000 people are estimated to use the U.S. version of Netflix.

Although Netflix has geographical restrictions in place, these are easy to bypass with a relatively cheap VPN subscription. To keep these foreigners out, entertainment industry companies are now lobbying for a global ban on VPN users.

Simon Bush, CEO of AHEDA, an industry group that represents Twentieth Century Fox, Warner Bros., Universal, Sony Pictures and other major players, said that some members are actively lobbying for such a ban.

Bush didn’t name any of the companies involved, but he confirmed to Cnet that “discussions” to block Australian access to the US version of Netflix “are happening now”.

If implemented, this would mean that VPN users worldwide would no longer be able to access Netflix through their VPNs. That includes the millions of Americans who pay for a legitimate account: they could still access Netflix, just not securely via a VPN.

According to Bush the discussions to keep VPN users out are not tied to Netflix’s arrival in Australia. The distributors and other rightsholders argue that they are already being deprived of licensing fees, because some Aussies ignore local services such as Quickflix.

“I know the discussions are being had…by the distributors in the United States with Netflix about Australians using VPNs to access content that they’re not licensed to access in Australia,” Bush said.

“They’re requesting for it to be blocked now, not just when it comes to Australia,” he adds.

While blocking VPNs would solve the problem for distributors, it creates a new one for VPN users in the United States.

The same happened with Hulu, which a few months ago started blocking visitors who access the site through a VPN service. This blockade also applies to hundreds of thousands of U.S. citizens.

Hulu’s blocklist currently covers the IP-ranges of all major VPN services. People who try to access the site through one of these IPs are not allowed to view any content on the site, and receive the following notice instead:

“Based on your IP-address, we noticed that you are trying to access Hulu through an anonymous proxy tool. Hulu is not currently available outside the U.S. If you’re in the U.S. you’ll need to disable your anonymizer to access videos on Hulu.”
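Hulu hasn’t published how its blockade works, but a blocklist of this kind typically reduces to a CIDR-range lookup against addresses attributed to VPN and proxy exit nodes. A sketch using Python’s ipaddress module, with purely illustrative ranges standing in for real VPN providers:

```python
import ipaddress

# Hypothetical ranges attributed to VPN/proxy exits (illustrative only;
# these are RFC 5737 documentation ranges, not real provider addresses)
BLOCKED_RANGES = [ipaddress.ip_network(cidr) for cidr in (
    "203.0.113.0/24",     # standing in for "VPN provider A"
    "198.51.100.0/24",    # standing in for "VPN provider B"
)]

def is_blocked(client_ip):
    """True if the client's address falls inside any blocked CIDR range."""
    addr = ipaddress.ip_address(client_ip)
    return any(addr in net for net in BLOCKED_RANGES)
```

The bluntness of this approach is exactly the problem described above: the lookup can’t tell a foreign viewer from a privacy-conscious U.S. subscriber behind the same exit address.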

It seems that VPNs are increasingly attracting the attention of copyright holders. Just a week ago BBC Worldwide argued that ISPs should monitor VPN users for excessive bandwidth use, on the assumption that heavy users must be pirates.

Considering the above, we can expect calls for VPN bans to increase in the near future.


Krebs on Security: LinkedIn Feature Exposes Email Addresses

This post was syndicated from: Krebs on Security and was written by: BrianKrebs. Original post: at Krebs on Security

One of the risks of using social media networks is having information you intend to share with only a handful of friends be made available to everyone. Sometimes that over-sharing happens because friends betray your trust, but more worrisome are the cases in which a social media platform itself exposes your data in the name of marketing.

LinkedIn has built much of its considerable worth on the age-old maxim that “it’s all about who you know”: As a LinkedIn user, you can directly connect with those you attest to knowing professionally or personally, but also you can ask to be introduced to someone you’d like to meet by sending a request through someone who bridges your separate social networks. Celebrities, executives or any other LinkedIn users who wish to avoid unsolicited contact requests may do so by selecting an option that forces the requesting party to supply the personal email address of the intended recipient.

LinkedIn’s entire social fabric begins to unravel if any user can directly connect to any other user, regardless of whether or how their social or professional circles overlap. Unfortunately for LinkedIn (and its users who wish to have their email addresses kept private), this is the exact risk introduced by the company’s built-in efforts to expand the social network’s user base.

According to researchers at the Seattle, Wash.-based firm Rhino Security Labs, at the crux of the issue is LinkedIn’s penchant for making sure you’re as connected as you possibly can be. When you sign up for a new account, for example, the service asks if you’d like to check your contacts lists at other online services (such as Gmail, Yahoo, Hotmail, etc.). The service does this so that you can connect with any email contacts that are already on LinkedIn, and so that LinkedIn can send invitations to your contacts who aren’t already users.

LinkedIn assumes that if an email address is in your contacts list, that you must already know this person. But what if your entire reason for signing up with LinkedIn is to discover the private email addresses of famous people? All you’d need to do is populate your email account’s contacts list with hundreds of permutations of famous peoples’ names — including combinations of last names, first names and initials — in front of @gmail.com, @yahoo.com, @hotmail.com, etc. With any luck and some imagination, you may well be on your way to an A-list LinkedIn friends list (or a fantastic set of addresses for spear-phishing, stalking, etc.).
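That permutation step can be sketched in a few lines of Python (Seely, as described below, used an Excel macro); the specific name patterns and the function name here are illustrative assumptions:

```python
from itertools import product

def candidate_addresses(first, last,
                        domains=("gmail.com", "yahoo.com", "hotmail.com")):
    """Generate plausible address permutations for a given real name."""
    f, l = first.lower(), last.lower()
    patterns = {
        f + l,             # markcuban
        l + f,             # cubanmark
        f + "." + l,       # mark.cuban
        f[0] + l,          # mcuban
        f + l[0],          # markc
        f + "_" + l,       # mark_cuban
    }
    return [f"{name}@{dom}" for name, dom in product(sorted(patterns), domains)]
```

A real attacker would use far more patterns (middle initials, birth years, and so on), but even this handful multiplied across a few webmail domains yields a contacts list large enough to fish with.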


LinkedIn lets you know which of your contacts aren’t members.

When you import your list of contacts from a third-party service or from a stand-alone file, LinkedIn will show you any profiles that match addresses in your contacts list. More significantly, LinkedIn helpfully tells you which email addresses in your contacts lists are not LinkedIn users.

It’s that last step that’s key to finding the email address of the targeted user to whom LinkedIn has just sent a connection request on your behalf. The service doesn’t explicitly tell you that person’s email address, but by comparing your email account’s contact list to the list of addresses that LinkedIn says don’t belong to any users, you can quickly figure out which address(es) on the contacts list correspond to the user(s) you’re trying to find.
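In other words, the final step is simple set arithmetic: the addresses you uploaded, minus those LinkedIn reports as non-members, leave the ones tied to real accounts. A sketch with hypothetical lists:

```python
# Candidate addresses uploaded as a contacts list (hypothetical)
uploaded = {
    "mark.cuban@gmail.com", "mcuban@gmail.com",
    "markcuban@yahoo.com", "cubanmark@hotmail.com",
}

# Addresses LinkedIn reports back as "not on LinkedIn" (hypothetical)
not_members = {
    "mcuban@gmail.com", "markcuban@yahoo.com", "cubanmark@hotmail.com",
}

# Whatever is left must map to an existing account, i.e. the target's address
likely_member_addresses = uploaded - not_members
print(likely_member_addresses)   # prints {'mark.cuban@gmail.com'}
```

No step in this process requires anything beyond the features LinkedIn offers every new user, which is what makes the design pattern risky.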

Rhino Security founders Benjamin Caudill and Bryan Seely have a recent history of revealing how trust relationships between and among online services can be abused to expose or divert potentially sensitive information. Last month, the two researchers detailed how they were able to de-anonymize posts to Secret, an app-driven online service that allows people to share messages anonymously within their circle of friends, friends of friends, and publicly. In February, Seely more famously demonstrated how to use Google Maps to intercept FBI and Secret Service phone calls.

This time around, the researchers picked on Dallas Mavericks owner Mark Cuban to prove their point with LinkedIn. Using their low-tech hack, the duo was able to locate the Webmail address Cuban had used to sign up for LinkedIn. Seely said they found success in locating the email addresses of other celebrities using the same method about nine times out of ten.

“We created several hundred possible addresses for Cuban in a few seconds, using a Microsoft Excel macro,” Seely said. “It’s just a brute-force guessing game, but 90 percent of people are going to use an email address that includes components of their real name.”
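Seely’s Excel macro could just as easily be a few lines of Python. The sketch below is not the researchers’ actual macro, just an illustration of the brute-force guessing step; the name and domains come from the article’s examples:

```python
from itertools import product

def permute_addresses(first, last, domains):
    """Generate common mailbox-name permutations for a real name.

    Illustrative only: real permutators also add middle initials,
    digits, separators, etc.
    """
    fn, ln = first.lower(), last.lower()
    users = {fn + ln, ln + fn, fn + "." + ln, fn[0] + ln,
             fn[0] + "." + ln, fn + ln[0], fn + "_" + ln}
    # Pair every user-name guess with every free-mail domain.
    return sorted(u + "@" + d for u, d in product(users, domains))

candidates = permute_addresses("Mark", "Cuban",
                               ["gmail.com", "yahoo.com", "hotmail.com"])
print(len(candidates))  # 7 user names x 3 domains = 21 guesses
```

Feed a list like this into the contacts-import step and LinkedIn does the rest of the work.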

The Rhino guys really wanted Cuban’s help in spreading the word about what they’d found, but instead of messaging Cuban directly, Seely pursued a more subtle approach: He knew Cuban’s latest start-up was Cyber Dust, a chat messenger app designed to keep your messages private. So, Seely fired off a tweet complaining that “Facebook Messenger crosses all privacy lines,” and that as a result he was switching to Cyber Dust.

When Mark Cuban retweeted Seely’s endorsement of Cyber Dust, Seely reached out to Cyber Dust CEO Ryan Ozonian, letting him know that he’d discovered Cuban’s email address on LinkedIn. In short order, Cuban was asking Rhino to test the security of Cyber Dust.

“Fortunately no major faults were found and those he found are already fixed in the coming update,” Cuban said in an email exchange with KrebsOnSecurity. “I like working with them. They look to help rather than exploit. We have learned from them and I think their experience will be valuable to other app publishers and networks as well.”

Whether LinkedIn will address the issues highlighted by Rhino Security remains to be seen. In an initial interview earlier this month, the social networking giant sounded unlikely to change anything in response.

Corey Scott, director of information security at LinkedIn, said very few of the company’s members opt in to the requirement that all new potential contacts supply the invitee’s email address before sending an invitation to connect. He added that email-address-to-user mapping is a fairly common design pattern that is not unique to LinkedIn, and that nothing the company does will prevent people from blasting emails to lists of addresses that might belong to a targeted user, hoping that one of them will hit home.

“Email address permutators, of which there are many of them on the ‘Net, have existed much longer than LinkedIn, and you can blast an email to all of them, knowing that most likely one of those will hit your target,” Scott said. “This is kind of one of those challenges that all social media companies face in trying to prevent the abuse of [site] functionality. We have rate limiting, scoring and abuse detection mechanisms to prevent frequent abusers of this service, and to make sure that people can’t validate spam lists.”

In an email sent to this reporter last week, however, LinkedIn said it was planning at least two changes to the way its service handles user email addresses.

“We are in the process of implementing two short-term changes and one longer term change to give our members more control over this feature,” LinkedIn spokeswoman Nicole Leverich wrote in an emailed statement. “In the next few weeks, we are introducing new logic models designed to prevent hackers from abusing this feature. In addition, we are making it possible for members to ask us to opt out of being discoverable through this feature. In the longer term, we are looking into creating an opt-out box that members can choose to select to not be discoverable using this feature.”

Darknet - The Darkside: Google DID NOT Leak 5 Million E-mail Account Passwords

This post was syndicated from: Darknet - The Darkside and was written by: Darknet. Original post: at Darknet - The Darkside

So a big panic hit the Internet a couple of days ago when it was alleged that Google had leaked 5 Million e-mail account passwords – and these had been posted on a Russian Bitcoin forum. I was a little sceptical, as Google tends to be pretty secure on that front and they had made [...]

The post Google DID NOT Leak 5 Million E-mail Account…

Read the full post at darknet.org.uk

Schneier on Security: The Concerted Effort to Remove Data Collection Restrictions

This post was syndicated from: Schneier on Security and was written by: schneier. Original post: at Schneier on Security

Since the beginning, data privacy regulation focused on collection, storage, and use. You can see it in the OECD Privacy Framework from 1980 (see also this proposed update).

Recently, there has been a concerted effort to focus all potential regulation on data use, completely ignoring data collection. Microsoft’s Craig Mundie argues this. So does the PCAST report. And the World Economic Forum. This is a lobbying effort by US business. My guess is that the companies are much more worried about collection restrictions than use restrictions. They believe that they can slowly change use restrictions once they have the data, but that it’s harder to change collection restrictions and get the data in the first place.

We need to regulate collection as well as use. In a new essay, Chris Hoofnagle explains why.

Krebs on Security: Dread Pirate Sunk By Leaky CAPTCHA

This post was syndicated from: Krebs on Security and was written by: BrianKrebs. Original post: at Krebs on Security

Ever since October 2013, when the FBI took down the online black market and drug bazaar known as the Silk Road, privacy activists and security experts have traded conspiracy theories about how the U.S. government managed to discover the geographic location of the Silk Road Web servers. Those systems were supposed to be obscured behind the anonymity service Tor, but as court documents released Friday explain, that wasn’t entirely true: Turns out, the login page for the Silk Road employed an anti-abuse CAPTCHA service that pulled content from the open Internet, thus leaking the site’s true location.

Tor helps users disguise their identity by bouncing their traffic between different Tor servers, and by encrypting that traffic at every hop along the way. The Silk Road, like many sites that host illicit activity, relied on a feature of Tor known as “hidden services.” This feature allows anyone to offer a Web server without revealing the true Internet address to the site’s users.

That is, if you do it correctly, which involves making sure you aren’t mixing content from the regular open Internet into the fabric of a site protected by Tor. But according to federal investigators,  Ross W. Ulbricht — a.k.a. the “Dread Pirate Roberts” and the 30-year-old arrested last year and charged with running the Silk Road — made this exact mistake.

As explained in the Tor how-to, in order for the Internet address of a computer to be fully hidden on Tor, the applications running on the computer must be properly configured for that purpose. Otherwise, the computer’s true Internet address may “leak” through the traffic sent from the computer.
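One common form of that leak is mixed content: a hidden service page that pulls a resource (like a CAPTCHA image) from the open Internet. The sketch below is a simple illustration of that kind of check, not an actual audit tool; the HTML and addresses are made up:

```python
import re

# Flag external resources in a hidden-service page that are fetched
# from the open Internet -- the kind of mixed content that can betray
# a hidden service's true location. Illustrative sketch; the HTML and
# addresses below are hypothetical.
def clearnet_resources(html):
    urls = re.findall(r'(?:src|href)="(https?://[^"]+)"', html)
    return [u for u in urls
            if ".onion/" not in u and not u.endswith(".onion")]

page = '''<img src="http://203.0.113.25/captcha.png">
<link href="http://example5jx2l.onion/style.css">'''
print(clearnet_resources(page))  # ['http://203.0.113.25/captcha.png']
```

Anything this kind of check flags is being requested outside Tor’s protection, and reveals a server the operator presumably wanted hidden.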


And this is how the feds say they located the Silk Road servers:

“The IP address leak we discovered came from the Silk Road user login interface. Upon examining the individual packets of data being sent back from the website, we noticed that the headers of some of the packets reflected a certain IP address not associated with any known Tor node as the source of the packets. This IP address (the “Subject IP Address”) was the only non-Tor source IP address reflected in the traffic we examined.”

“The Subject IP Address caught our attention because, if a hidden service is properly configured to work on Tor, the source IP address of traffic sent from the hidden service should appear as the IP address of a Tor node, as opposed to the true IP address of the hidden service, which Tor is designed to conceal. When I typed the Subject IP Address into an ordinary (non-Tor) web browser, a part of the Silk Road login screen (the CAPTCHA prompt) appeared. Based on my training and experience, this indicated that the Subject IP Address was the IP address of the SR Server, and that it was ‘leaking’ from the SR Server because the computer code underlying the login interface was not properly configured at the time to work on Tor.”

For many Tor fans and advocates, The Dread Pirate Roberts’ goof will no doubt be labeled a noob mistake — and perhaps it was. But as I’ve said time and again, staying anonymous online is hard work, even for those of us who are relatively experienced at it. It’s so difficult, in fact, that even hardened cybercrooks eventually slip up in important and often fateful ways (that is, if someone or something was around at the time to keep a record of it).

A copy of the government’s declaration on how it located the Silk Road servers is here (PDF). A hat tip to Nicholas Weaver for the heads up about this filing.

A snapshot of offerings on the Silk Road.

lcamtuf's blog: Some notes on web tracking and related mechanisms

This post was syndicated from: lcamtuf's blog and was written by: Michal Zalewski. Original post: at lcamtuf's blog

Artur Janc and I put together a nice, in-depth overview of all the known fingerprinting and tracking vectors that appear to be present in modern browsers. This is an interesting, polarizing, and poorly-studied area; my main hope is that the doc will bring some structure to the discussions of privacy consequences of existing and proposed web APIs – and help vendors and standards bodies think about potential solutions in a more holistic way.

That’s it – carry on!

Darknet - The Darkside: Massive Celeb Leak Brings iCloud Security Into Question

This post was syndicated from: Darknet - The Darkside and was written by: Darknet. Original post: at Darknet - The Darkside

So this leak has caused quite a furore, normally I don’t pay attention to this stuff – but hey it’s JLaw and it’s a LOT of celebs at the same time – which indicates some kind of underlying problem. The massive list of over 100 celebs was posted originally on 4chan (of course) by an [...]

The post Massive Celeb Leak…

Read the full post at darknet.org.uk

TorrentFreak: Dotcom Loses Bid to Keep Assets Secret from Hollywood

This post was syndicated from: TorrentFreak and was written by: Andy. Original post: at TorrentFreak

20th Century Fox, Disney, Paramount, Universal, Columbia Pictures and Warner Bros are engaged in a huge battle with Kim Dotcom.

They believe that legal action currently underway against the Megaupload founder could lead to them receiving a sizable damages award should they win their case. But Dotcom’s lavish lifestyle gives them concerns. The more he spends, the less they could receive should the money begin to run out.

Those concerns were addressed by the High Court’s Judge Courtney, who previously ordered Dotcom to disclose the details of his worldwide assets to his Hollywood adversaries. Dotcom filed an appeal which will be heard in October, but that date is beyond the ordered disclosure date.

As a result, Dotcom took his case to the Court of Appeal in the hope of staying the disclosure order.

That bid has now failed.

Dotcom’s legal team argued that their client’s October appeal would be rendered pointless if he was required to hand over financial information in advance. They also insisted a stay would not negatively affect the studios since millions in assets are currently restrained in New Zealand and elsewhere.

However, as explained by the Court of Appeal, any decision to stay a judgment is a balancing act between the rights of the successful party (Hollywood) to enforce its judgment and the consequences for both parties should the stay be granted or denied.

While the Court agreed that Dotcom’s appeal would be rendered pointless if disclosure to Hollywood was ordered, it rejected the argument that this would adversely affect Dotcom.

“[T]he mere fact that appeal rights are rendered nugatory is not necessarily determinative and in the circumstances of this case I consider that this consequence carries little weight. This is because Mr Dotcom himself does not assert that there will be any adverse effect on him if deprived of an effective appeal,” the decision reads.

The Court also rejected the argument put forward by Dotcom’s lawyer that the disclosure of financial matters would be a threat to privacy and amounted to an “unreasonable search”.

The Court did, however, acknowledge that Dotcom’s appeal would deal with genuine issues. That said, the concern over him disposing of assets outweighed them in this instance.

In respect of the effect of a stay on the studios, the Court looked at potential damages in the studios’ legal action against the Megaupload founder. Dotcom’s expert predicted damages “well below” US$10m, while the studios’ expert predicted in excess of US$100m.

The Court noted that Dotcom has now revealed that his personal assets restrained in both New Zealand and Hong Kong are together worth “not less” than NZ$ 33.93 million (US$ 28.39m). However, all of Dotcom’s assets are subject to a potential claim from his estranged wife, Mona, so the Court judged Dotcom’s share to be around NZ$17m.

As a result the Court accepted that there was an arguable case that eventual damages would be more than the value of assets currently restrained in New Zealand.

As a result, Dotcom is ordered to hand the details of his financial assets, “wherever they are located”, to the lawyers acting for the studios. There are restrictions on access to that information, however.

“The respondents’ solicitors are not to disclose the contents of the affidavit to any person without the leave of the Court,” the decision reads.

As legal proceedings in New Zealand continue, eyes now turn to Hong Kong. In addition to Dotcom’s personal wealth subjected to restraining order as detailed above, an additional NZ$25m owned by Megaupload and Vestor Limited is frozen in Hong Kong. Next week Dotcom’s legal team will attempt to have the restraining order lifted.

Source: TorrentFreak, for the latest info on copyright, file-sharing and anonymous VPN services.

TorrentFreak: BTindex Exposes IP-Addresses of BitTorrent Users

This post was syndicated from: TorrentFreak and was written by: Ernesto. Original post: at TorrentFreak

Unless BitTorrent users are taking steps to hide their identities through the use of a VPN, proxy, or seedbox, their downloading habits are available for almost anyone to snoop on.

By design the BitTorrent protocol shares the location of any user in the swarm. After all, without knowing where to send the data nothing can be shared to begin with.

Despite this fairly common knowledge, even some experienced BitTorrent users can be shocked to learn that someone has been monitoring their activities, let alone that their sharing activity is being made public for the rest of the world to see.

Like it or not, this is exactly what the newly launched torrent search engine BTindex is doing.

Unlike most popular torrent sites, BTindex adds new content by crawling BitTorrent’s DHT network. That alone sets it apart, as most other sites get their content from user uploads or other sites. The most controversial part, without doubt, is that the IP-addresses of BitTorrent users are being shared as well.

People who download a file from The Pirate Bay or any other torrent site expose their IP-addresses via the DHT network. BTindex records this information alongside the torrent metadata. The number of peers is displayed in the search results, and for each file a selection of IP-addresses is made available to the public.
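DHT crawlers of this kind speak KRPC, a simple UDP protocol in which every query is a bencoded dictionary. A minimal bencoder is enough to illustrate the wire format; this is an illustrative sketch, not BTindex’s actual code, and the node ID is a made-up placeholder:

```python
# Minimal bencoder for KRPC messages, the wire format DHT crawlers use
# to walk the network and harvest peer IP-addresses. Illustrative
# sketch only.
def bencode(value):
    if isinstance(value, int):
        return b"i%de" % value
    if isinstance(value, bytes):
        return b"%d:%s" % (len(value), value)
    if isinstance(value, str):
        return bencode(value.encode())
    if isinstance(value, list):
        return b"l" + b"".join(bencode(v) for v in value) + b"e"
    if isinstance(value, dict):
        # Bencoded dicts require keys in sorted order (str keys assumed).
        out = b"d"
        for k in sorted(value):
            out += bencode(k) + bencode(value[k])
        return out + b"e"
    raise TypeError(f"cannot bencode {type(value)}")

# A DHT "ping" query; real node IDs are 20 random bytes (placeholder here).
ping = {"t": "aa", "y": "q", "q": "ping", "a": {"id": b"A" * 20}}
print(bencode(ping))
```

A crawler sends queries like this (and `find_node`/`get_peers`) to one node after another; every response carries the addresses of more peers, which is exactly the data BTindex publishes.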

The image below shows a selection of peers who shared a pirated copy of the movie “Transcendence,” this week’s most downloaded film.

Some IP-addresses sharing “Transcendence.”

Perhaps even more worrying to some, the site also gives an overview of all recorded downloads per IP-address. While the database is not exhaustive there is plenty of dirt to be found on heavy BitTorrent users who have DHT enabled in their clients.

Below is an example of the files that were shared via the IP-address of a popular VPN provider.

Files shared by the IP-address of a popular VPN provider

Since all data is collected through the DHT network people can avoid being tracked by disabling this feature in their BitTorrent clients. Unfortunately, that only gives a false sense of security as there are plenty of other monitoring firms who track people by gathering IP-addresses directly from the trackers.

The idea to index and expose IP-addresses of public BitTorrent users is not entirely new. In 2011 YouHaveDownloaded did something similar. This site generated considerable interest but was shut down a few months after its launch.

If anything, these sites should act as a wake up call to people who regularly share files via BitTorrent without countermeasures. Depending on the type of files being shared, a mention on BTindex is probably the least of their worries.

Source: TorrentFreak, for the latest info on copyright, file-sharing and anonymous VPN services.

TorrentFreak: The Copyright Monopoly Should Be Dead And Buried Already

This post was syndicated from: TorrentFreak and was written by: Rick Falkvinge. Original post: at TorrentFreak

Every time somebody questions the copyright monopoly, and in particular, whether it’s reasonable to dismantle freedom of the press, freedom of assembly, freedom of speech, freedom of information, and the privacy of correspondence just to maintain a distribution monopoly for an entertainment industry, the same question pops up out of nowhere:

“How will the artists get paid?”.

The copyright industry has been absolutely phenomenal in misleading the public in this very simple matter, suggesting that artists’ income somehow depend on a distribution monopoly of publishers. If the facts were out, this debate would have been over 20 years ago and the distribution monopoly already abolished quite unceremoniously.

There are three facts that need to be established and hammered in whenever somebody asks this question.

First: Less than one percent of artists’ income comes from the copyright monopoly. Read that sentence again. The overwhelming majority of artists get their income today from student loans, day jobs, unemployment benefits, and so on and so forth. One of the most recent studies (“Copyright as Incentive”, in Swedish as “Upphovsrätten som incitament”, 2006) quotes a number of 0.9 per cent as the average income share of artists that can be directly attributed to the existence of the copyright monopoly. The report calls the direct share of artists’ income “negligible”, “insignificant”. However, close to one hundred per cent of publishers’ income – the income of unnecessary, parasitic middlemen – is directly attributable to the copyright monopoly today. Guess who’s adamant about defending it? Hint: not artists.

Second: 99.99% of artists never see a cent in copyright monopoly royalties. Apart from the copyright industry’s creative accounting and bookkeeping – arguably the only reason they ever had to call themselves the “creative industry” – which usually robs artists blind, only one in ten thousand artists ever see a cent in copyright-monopoly-related royalties. Yes, this is a real number: 99% of artists are never signed with a label, and of those who are, 99% of those never see royalties. It comes across as patently absurd to defend a monopolistic, parasitic system where only one in ten thousand artists make any money with the argument “how will the artists make money any other way?”.

Third: Artists’ income has more than doubled because of culture-sharing. Since the advent of hobby-scale unlicensed manufacturing – which is what culture-sharing is legally, since it breaks a manufacturing monopoly on copies – the average income for musicians has risen 114%, according to a Norwegian study. Numbers from Sweden and the UK show the same thing. This shift in income has a direct correlation to hobby-based unlicensed manufacturing, as sales of copies are down the drain – which is the best news imaginable for artists, since households are spending as much money on culture as before (or more, according to some studies), but are buying in sales channels where artists get a much larger piece of the pie. Hobby-based unlicensed manufacturing has meant the greatest wealth transfer from parasitic middlemen to artists in the history of recorded music.

As a final note, it should be noted that even if artists went bankrupt because of sustained civil liberties, that would still be the way to go. Any artist that goes from plinking their guitar in the kitchen to wanting to sell an offering is no longer an artist, but an entrepreneur; the same rules apply to them as to every other entrepreneur on the planet. Specifically, they do not get to dismantle civil liberties because such liberties are bad for business. But as we see, we don’t even need to take that into consideration, for the entire initial premise is false.

Kill copyright, already. Get rid of it. It hurts innovation, creativity, our next-generation industries, and our hard-won civil liberties. It’s not even economically defensible.

About The Author

Rick Falkvinge is a regular columnist on TorrentFreak, sharing his thoughts every other week. He is the founder of the Swedish and first Pirate Party, a whisky aficionado, and a low-altitude motorcycle pilot. His blog at falkvinge.net focuses on information policy.

Book Falkvinge as speaker?

Source: TorrentFreak, for the latest info on copyright, file-sharing and anonymous VPN services.

Linux How-Tos and Linux Tutorials: How to Set up Server-to-Server Sharing in ownCloud 7 on Linux

This post was syndicated from: Linux How-Tos and Linux Tutorials and was written by: Carla Schroder. Original post: at Linux How-Tos and Linux Tutorials

Most of the buzz around The Cloud is devoted to commercial services such as Google’s online apps, Amazon’s cloud services, and tablets and smartphones that are shortchanged on storage because they want to suck you into commercial cloud services. While commercial cloud services can be convenient, they also have well-known downsides like service outages, and lack of privacy and security. If you live within reach of government snoop agencies (like anywhere on planet Earth), or are subject to laws such as the Sarbanes-Oxley Act (SOX) or Health Insurance Portability and Accountability Act (HIPAA), then you need to keep your data under your control. Which I think is the wisest policy in any case.

ownCloud is the friendliest and easiest private cloud implementation to set up and use. ownCloud 7 was released last week, and this is the most interesting release yet. It is more polished and robust, easier to administer, and the killer feature in this version is server-to-server sharing. This lets you easily connect your ownCloud file shares and build your own private cloud of clouds. And then, someday, rule the world. Or, just share files.

Installing ownCloud

ownCloud is nicely documented, which is nearly all I need to love it. Imagine a software product that actually wants you to be able to use it; an astonishing concept, to be sure. There are multiple installation methods documented in the ownCloud Administrators Manual, including a detailed how-to on installing it from scratch. The nice ownCloud peoples use the openSUSE Build Service to build binary packages for Ubuntu, CentOS, Debian, Fedora, openSUSE, Red Hat, and SUSE, which is what I use. This is how I installed it on my test Ubuntu 14.04 server.

First fetch and install the GPG signing key for the openSUSE repository for your Linux distribution. Note that each command must be one unbroken line, with no newlines:

$ wget http://download.opensuse.org/repositories/isv:ownCloud:community/xUbuntu_14.04/Release.key
$ sudo apt-key add - < Release.key

Now add the repository, update your package list, and install ownCloud:

$ sudo sh -c "echo 'deb http://download.opensuse.org/repositories/isv:/ownCloud:/community/xUbuntu_14.04/ /' >> /etc/apt/sources.list.d/owncloud.list"
$ sudo apt-get update
$ sudo apt-get install owncloud

fig-1 createlogin on ownCloud

If you don’t already have a LAMP stack installed, the installer will pull it in for you. When installation is complete open a Web browser to http://localhost/owncloud, and you will see the nice blue ownCloud installation wizard. Your first task is to create an admin user, as in figure 1. Click the eyeball to expose your password, which you’ll probably want to do so you know what you typed.

Next, you have some database options. If you go with the default SQLite you don’t have to do anything except click the Finish Setup button. SQLite is fine for lightweight duties, but if you have busier and larger workloads then use MariaDB, MySQL, or PostgreSQL. The wizard displays a button with these databases whether they are installed or not, so make sure the one you want is already installed, and you have an administrator login. I chose MySQL/MariaDB (Ubuntu defaults to MariaDB). You can give your new database any name you want and the installer will create it (figure 2). You must also pass in your database administrator login.

fig-2-db-setup

And that’s it. You’re done. ownCloud 7 is installed. Click the Finish Setup button and you’ll be greeted with a cheery “Welcome to ownCloud!” banner, with links to client apps for desktop computers, Android devices, and iDevices. ownCloud supports multiple clients: you can use a Web browser on any platform, or download client apps for more functionality such as synchronization and nicer file, contacts, and calendar management.

Setting up Server-to-Server Sharing

And now, the moment you’ve been waiting for: setting up server-to-server sharing. This works only with ownCloud servers that have this feature, which at the moment is ownCloud 7. You need two ownCloud 7 servers to test this.

Before you can share anything, you need to set your server’s hostname as a trusted ownCloud server domain. Look for this section in /var/www/owncloud/config/config.php:

'trusted_domains' => 
  array (
    0 => 'localhost', 
 ),

/var/www/owncloud/config/config.php is created by the installation wizard. See /var/www/owncloud/config/config.sample.php to see a complete list of options.

By default your ownCloud server only lets you access the server via domains that are listed as trusted domains in this file. Only localhost is listed by default. My server hostname is studio, so if I try to log into ownCloud via http://studio/owncloud I get an error message: “You are accessing the server from an untrusted domain.” This example allows connections via localhost, hostname, and IP address:

'trusted_domains' => 
  array (
    0 => 'localhost', 1 => 'studio', 2 => '192.168.1.50',
 ),

If you forget to create and use these trusted domains, you won’t be able to set up network file shares.

Next, go to your ownCloud administration page, which you can find by clicking the little arrow next to your username at the top right, and click Admin. Make sure that Remote Shares are enabled (figure 3).

fig-3 remote-shares

There is one more important step, and that is to enable mod_rewrite on Apache, and then restart it. This is what you do on Ubuntu:

$ sudo a2enmod rewrite
$ sudo service apache2 restart

If you don’t do this, your share will fail with a message like “Sabre\DAV\Exception\NotAuthenticated: No basic authentication headers were found” in your ownCloud server log.

fig-4 ownCloud studio share

Now you must log into either http://hostname/owncloud, or http://ip-address/owncloud. Create a new directory and stuff a few files into it. Then click on Share. Click the Share Link checkbox, and it creates a nice URL like http://studio/owncloud/public.php?service=files&t=6b6fa9a714a32ef0af8a83dde358deec (figure 4). Remember that bit about trusted domains? If you forget to connect to your ownCloud server with them, and instead use http://localhost/owncloud, the share URL will also be http://localhost/. Which is no good for sharing.

You can optionally set a password on this share and an expiration date, allow uploads, and send an email notification. Getting ownCloud to send emails requires a bit of extra setup, so please consult the fine Administrator’s manual to learn how to do this.

Connecting to a New Share

The easy way to test connecting to a new share is to open a second browser tab on your first ownCloud server. Copy the share link into this tab, and it will open to your share. Then click the Add to your ownCloud button (figure 5), and enter the address of your second ownCloud server. In my test lab that is stinkpad/owncloud.

fig-5 add to owncloud

If you’re not already logged in you’ll get the login page. After logging in you’ll be asked if you want to add the remote share. Click Add Remote Share, and you’re done (figure 6).

fig-6 add remote share on ownCloud

Congratulations. You have linked two ownCloud servers, and now that the grotty setup work is done, creating more is just a few easy mouse clicks.

TorrentFreak: Bleep… BitTorrent Unveils Serverless & Encrypted Chat Client

This post was syndicated from: TorrentFreak and was written by: Ernesto. Original post: at TorrentFreak

Encrypted Internet traffic surged worldwide after the Snowden revelations, with several developers releasing new tools to enable people to better protect their privacy.

Today BitTorrent Inc. contributes with the release of BitTorrent Bleep, a communication tool that allows people to exchange information without the need for any central servers. Combined with state of the art end-to-end encryption, the company sees Bleep as the ideal tool to evade government snooping.

Bleep’s main advantage over some other encrypted messaging applications is the absence of central servers. This means that no logs are stored; all metadata goes through other peers in the network.

“Many messaging apps are advertising privacy and security by offering end-to-end encryption for messages. But when it comes to handling metadata, they are still leaving their users exposed,” BitTorrent’s Farid Fadaie explains.

“We reimagined how modern messaging should work. Our platform enables us to offer features in Bleep that are unique and meaningfully different from what is currently available.”


The application’s development is still in the early stages and the current release only works on Windows 7 and 8. Support for other operating systems including popular mobile platforms will follow in the future.

Aspiring Bleep users can create an account via an email or mobile phone number, but an incognito mode without the need to provide any personal details is also supported.

The new messaging app is not the only ‘breach safe’ tool the company is currently working on. Last year BitTorrent launched its Sync application which provides a secure alternative to centralized cloud backup solutions such as Dropbox and Google Drive.

BitTorrent Inc. is inviting people to test the new Bleep application, but warns there are still some bugs.

Those who want to give BitTorrent Bleep a try can head over to BitTorrent’s experiments section to sign up for the pre-Alpha release.

Source: TorrentFreak, for the latest info on copyright, file-sharing and anonymous VPN services.

The Hacker Factor Blog: A Victory for Fair Use

This post was syndicated from: The Hacker Factor Blog and was written by: The Hacker Factor Blog. Original post: at The Hacker Factor Blog

Last week I reported on a copyright infringement letter that I had received from Getty Images. The extremely hostile letter claimed that I was using a picture in violation of their copyright, ordered me to “cease and desist” using the picture, and demanded that I pay $475 in damages. Various outlets have referred to this letter as trolling and extortion.

Not being an attorney, I contacted my good friend, Mark D. Rasch. Mark is a well-known attorney in the computer security world. Mark headed the United States Department of Justice Computer Crime Unit for nine years and prosecuted cases ranging from computer crime and fraud to digital trespassing and viruses. If you’re old enough, then you remember the Hanover Hackers mentioned in The Cuckoo’s Egg, Robert Morris Jr. (first Internet worm), and Kevin Mitnick — Mark worked all of those prosecutions. He regularly speaks at conferences, appears in news interviews, and has taught cyberlaw to law enforcement and at major universities. (If I were a big company looking for a chief privacy officer, I would hire him in a second.)

This letter from Getty had me concerned. But I can honestly say that, in the 12 years that I’ve known him, I have never seen Mark so animated about an issue. I have only ever seen him as a friendly guy who gives extremely informative advice. This time, I saw a side of Mark that I, as a friend, have never experienced. I would never want to be on the other side of the table from him. And even being on the same side was really intimidating. (Another friend told me that Mark has a reputation for being an aggressive bulldog. And this was my first time seeing his teeth.) His first advice to me was very straightforward. He said, “You have three options. One, do nothing. Two, send back a letter, and three, sue them.” Neither of us was fond of option #1. After a little discussion, I decided to do option #2 and prepare for #3.

First I sent the response letter. Then I took Mark’s advice and began to prepare for a lawsuit. Mark wanted me to take the initiative and file for a “Copyright Declaratory Judgment”. (Don’t wait for Getty.) In effect, I wanted the court to declare my use to be Fair Use.

Getty’s Reply

I honestly expected one of three outcomes from my response letter to Getty Images. Either (A) Getty would do nothing, in which case I would file for the Declaratory Judgment, or (B) Getty would respond with their escalation letter, demanding more money (in which case I would still file for the Declaratory Judgment), or (C) Getty would outright sue me, in which case I would respond however my attorney advised.

But that isn’t what happened. Remarkably, Getty backed down! Here’s the letter that they sent me (I’m only censoring email addresses):

From: License Compliance
To: Dr. Neal Krawetz
Subject: [371842247 Hacker Factor ]
Date: Tue, 22 Jul 2014 20:51:13 +0000

Dr. Krawetz:

We have reviewed your email and website and are taking no further action. Please disregard the offer letter that has been presented in this case. If you have any further questions or concerns, please do not hesitate to contact us.

Nancy Monson
Copyright Compliance Specialist
Getty Images Headquarters
605 Fifth Avenue South, Suite 400
Seattle WA 98104 USA
Phone 1 206 925 6125
Fax 1 206 925 5001
[redacted]@gettyimages.com

For more information about the Getty Images License Compliance Program, please visit http://company.gettyimages.com/license-compliance

Helpful information about image copyright rules and how to license stock photos is located at www.stockphotorights.com and Copyright 101.

Getty Images is leading the way in creating a more visual world. Our new embed feature makes it easy, legal, and free for anybody to share some of our images on websites, blogs, and social media platforms.
http://www.gettyimages.com/Creative/Frontdoor/embed

(c)2014 Getty Images, Inc.

PRIVILEGED AND CONFIDENTIAL
This message may contain privileged or confidential information and is intended only for the individual named. If you are not the named addressee or an employee or agent responsible for delivering this message to the intended recipient you should not disseminate, distribute or copy this e-mail or any attachments hereto. Please notify the sender immediately by e-mail if you have received this e-mail by mistake and delete this e-mail and any attachments from your system without copying or disclosing the contents. E-mail transmission cannot be guaranteed to be secure or error-free as information could be intercepted, corrupted, lost, destroyed, arrive late or incomplete, or contain viruses. The sender therefore does not accept liability for any errors or omissions in the contents of this message, which arise as a result of e-mail transmission. If verification is required please request a hard-copy version. Getty Images, 605 5th Avenue South, Suite 400. Seattle WA 98104 USA, www.gettyimages.com. PLEASE NOTE that all incoming e-mails will be automatically scanned by us and by an external service provider to eliminate unsolicited promotional e-mails (“spam”). This could result in deletion of a legitimate e-mail before it is read by its intended recipient at our firm. Please tell us if you have concerns about this automatic filtering.

Mark Rasch also pointed out that Getty explicitly copyrighted their email to me. However, the same Fair Use that permits me to use their pictures also permits me to post their entire email message. And that whole “PRIVILEGED AND CONFIDENTIAL” paragraph? That’s garbage and can be ignored because I never agreed to their terms.

Findings

In preparing to file the Copyright Declaratory Judgment, I performed my due diligence by checking web logs and related files for information pertaining to this case. And since Getty has recanted, I am making some of my findings public.

Automated Filing
First, notice how Getty’s second letter says “We have reviewed your email and website…” This clearly shows up in my web logs. Among other things, people at Getty are the only (non-bot) visitors to access my site via “nealkrawetz.org” — everyone else uses “hackerfactor.com”. In each case, the Getty users initially went directly to my “In The Flesh” blog entry (showing that they were not searching or just browsing my site.) Their automated violation bot also used nealkrawetz.org. The big catch is that nobody at Getty ever reviewed “In The Flesh” prior to mailing their extortion letter.

In fact, I can see exactly when their bot visited my web site. Here are all of my logs related to their bot:

2014-06-08 23:41:44 | 14.102.40.242 | Mozilla/5.0 (Windows NT 6.1; WOW64; rv:29.0) Gecko/20100101 Firefox/29.0 | GET / | http://ops.picscout.com/QcApp/Classification/Index/371654690
2014-06-08 23:41:44 | 14.102.40.242 | Mozilla/5.0 (Windows NT 6.1; WOW64; rv:29.0) Gecko/20100101 Firefox/29.0 | GET / | http://ops.picscout.com/QcApp/Classification/Index/371654690
2014-06-09 21:08:00 | 14.102.40.242 | Mozilla/5.0 (Windows NT 6.1; WOW64; rv:29.0) Gecko/20100101 Firefox/29.0 | GET / | http://ops.picscout.com/QcApp/Classification/Index/371654690
2014-06-09 21:08:00 | 14.102.40.242 | Mozilla/5.0 (Windows NT 6.1; WOW64; rv:29.0) Gecko/20100101 Firefox/29.0 | GET / | http://ops.picscout.com/QcApp/Classification/Index/371654690
2014-06-14 23:05:36 | 109.67.106.4 | Mozilla/5.0 (Windows NT 6.1; WOW64; rv:29.0) Gecko/20100101 Firefox/29.0 | GET / | http://ops.picscout.com/QcApp/Classification/Index/371842247
2014-06-14 23:05:36 | 109.67.106.4 | Mozilla/5.0 (Windows NT 6.1; WOW64; rv:29.0) Gecko/20100101 Firefox/29.0 | GET / | http://ops.picscout.com/QcApp/Classification/Index/371842247
2014-06-14 23:05:44 | 109.67.106.4 | Mozilla/5.0 (Windows NT 6.1; WOW64; rv:29.0) Gecko/20100101 Firefox/29.0 | GET /blog/index.php?/archives/423-In-The-Flesh.html | http://ops.picscout.com/QcApp/PreReport/Index/371842247?normalFlow=True
2014-06-14 23:06:39 | 109.67.106.4 | Mozilla/5.0 (Windows NT 6.1; WOW64; rv:29.0) Gecko/20100101 Firefox/29.0 | GET /blog/index.php?/categories/18-Phones | http://ops.picscout.com/QcApp/Infringer/Index/371842247
2014-06-16 05:35:47 | 95.35.10.33 | Mozilla/5.0 (Windows NT 6.1; rv:29.0) Gecko/20100101 Firefox/29.0 | GET / | http://ops.picscout.com/QcApp/Classification/Index/371842247
2014-06-16 05:35:47 | 95.35.10.33 | Mozilla/5.0 (Windows NT 6.1; rv:29.0) Gecko/20100101 Firefox/29.0 | GET / | http://ops.picscout.com/QcApp/Classification/Index/371842247

This listing shows:

  • The date/time (in PST)
  • The bot’s IP address (two in Israel and one in India; none from the United States)
  • The user-agent string sent by the bot
  • Where they went — most went to “/” (my homepage), but exactly one request went to “/blog/index.php?/archives/423-In-The-Flesh.html”. That’s when they compiled their complaint.
  • The “Referer” string, showing what they clicked in order to get to my site. Notice how their accesses are associated with a couple of complaint numbers. “371842247” is the number associated with their extortion letter. However, “371654690” appears to be a second potential complaint.
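Assuming the log format is exactly as printed above (timestamp | IP | user-agent | request | referer), a short script can reproduce this grouping of accesses by PicScout complaint number. This is a minimal sketch for illustration, not the tooling I actually used:

```python
# Sketch: group the pipe-delimited log lines shown above by the PicScout
# complaint ID embedded in the Referer URL. Fields are:
# timestamp | IP | user-agent | request | referer
import re
from collections import defaultdict

LOG_LINES = [
    "2014-06-08 23:41:44 | 14.102.40.242 | Mozilla/5.0 ... | GET / | "
    "http://ops.picscout.com/QcApp/Classification/Index/371654690",
    "2014-06-14 23:05:44 | 109.67.106.4 | Mozilla/5.0 ... | GET "
    "/blog/index.php?/archives/423-In-The-Flesh.html | "
    "http://ops.picscout.com/QcApp/PreReport/Index/371842247?normalFlow=True",
]

def by_complaint(lines):
    """Map complaint ID -> list of (timestamp, ip, request)."""
    hits = defaultdict(list)
    for line in lines:
        ts, ip, _ua, req, referer = [f.strip() for f in line.split("|")]
        m = re.search(r"/Index/(\d+)", referer)  # complaint ID in the Referer path
        if m:
            hits[m.group(1)].append((ts, ip, req))
    return dict(hits)

groups = by_complaint(LOG_LINES)
```

Grouping on the ID in the Referer is what surfaces the second potential complaint, 371654690, alongside the one from the extortion letter.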

Getty’s complaint has a very specific timestamp on the letter. It doesn’t just have a date. Instead, it says “7/10/2014 11:05:05am” — a very specific time. The clocks may be off by a few seconds, but that “11:05” matches my log file — it is off by exactly 12 hours. (The letter is timestamped 11:05am, and my logs recorded 11:05pm.) This shows that the entire filing process is automated.

When I use my bank’s online bill-pay system, it asks me when I want to have the letter delivered. Within the United States, it usually means mailing the letter four days earlier. I believe that Getty did the exact same thing. They scanned my web site and then mailed their letter so it would be delivered exactly one month later, and dated the letter four days and 12 hours before delivery.

Getty’s automated PicScout system is definitely a poorly-behaved web bot. At no time did Getty’s PicScout system retrieve my robots.txt file, showing that it fails to abide by Internet standards. I am also certain that this was a bot since a human’s web browser would have downloaded my blog’s CSS style sheet. (PicScout only downloaded the web page.)
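The heuristic just described can be captured in a couple of lines. This is a hedged illustration with hypothetical path sets, not a general-purpose bot detector: well-behaved crawlers fetch robots.txt, and human browsers pull the page’s stylesheet.

```python
# A client that fetches an HTML page but never requests robots.txt or the
# page's CSS is behaving like a (badly-mannered) bot rather than a browser.
def looks_like_bot(paths_fetched):
    """paths_fetched: set of URL paths requested by one client IP."""
    polite = "/robots.txt" in paths_fetched  # well-behaved crawlers fetch this
    humanlike = any(p.endswith(".css") for p in paths_fetched)  # browsers pull stylesheets
    return not polite and not humanlike

# PicScout fetched only the pages themselves:
print(looks_like_bot({"/", "/blog/index.php?/archives/423-In-The-Flesh.html"}))  # True
# A human browser also pulls the stylesheet:
print(looks_like_bot({"/", "/blog/style.css"}))  # False
```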

Failure to perform due diligence
I want to emphasize that there are no other accesses to that blog entry by any address associated with Getty within months before their complaint. As of this year (from January 2014 to July 23, 2014), people at Getty have only visited the “In The Flesh” web page 13 times: once by the PicScout bot, and 12 times after they received my reply letter. This shows that Getty never viewed the web page prior to sending their letter. In effect, their “infringement” letter is nothing more than trolling and an attempt to extort money. They sent the letter without ever looking at the context in which the picture is used.

My claim that Getty never manually reviewed my web site prior to mailing is also supported by their second letter, where they recanted their claim of copyright infringement. Having actually looked at my blog, they realized that it was Fair Use.

My web logs are not my only proof that no human at Getty viewed the blog page in the months prior to sending the complaint. Getty’s threatening letter mentions only one single picture that is clearly labeled with Getty’s ImageBank watermark. However, if any human had visited the web page, then they would have seen FOUR pictures that are clearly associated with Getty, and all four pictures were adjacent on the web page! The four pictures are:

The first picture clearly says “GettyImages” in the top left corner. The second picture (from their complaint) is watermarked with Getty’s ImageBank logo. The third and fourth pictures come from Getty’s iStockPhoto service. Each photo was properly used as part of the research results in that blog entry. (And right now, they are properly used in the research findings of this blog entry.)

After Getty received my reply letter, they began to visit the “In The Flesh” URL from 216.169.250.12 — Getty’s corporate outbound web proxy address. Based on the reasonable assumption that different browser user-agent strings indicate different people, I observed them repeatedly visiting my site in groups of 3-5 people. Most of them initially visited the “In The Flesh” page at nealkrawetz.org; a few users visited my “About Me” and “Services” web pages. I am very confident that these indicate their attorneys reviewing my reply letter and web site. This is the absolute minimum evaluation that Getty should have done before sending their extortion letter.

Legal Issues
Besides pointing out how my blog entry clearly falls under Fair Use, my attorney noted a number of items that I (as a non-lawyer person) didn’t see. For example:

  • In Getty’s initial copyright complaint, they assert that they own the copyright. However, the burden of proof is on Getty Images. Getty provided no proof that they are the actual copyright holder, that they acquired the rights legally from the photographer, that they never transferred rights to anyone else, that they had a model release letter from the woman in the photo, that the picture was never made public domain, and that the copyright had not expired. In effect, they never showed that they actually have the copyright.

  • Getty’s complaint letter claims that they have searched their records and found no license for me to use that photo. However, they provided no proof that they ever searched their records. At minimum, during discovery I would demand a copy of all of their records so that I could confirm their findings and proof of their search. (Remember, the burden of proof is on Getty, not on me.) In addition, I have found public comments that explicitly identify people with valid licenses who reported receiving these hostile letters from Getty. This brings up the entire issue regarding how Getty maintains and searches their records.
  • Assuming some kind of violation (and I am not admitting any wrong here), there is a three-year statute of limitations regarding copyright infringement. My blog entry was posted on March 18, 2011. In contrast, their complaint letter was dated July 10, 2014 — that is more than three years after the pictures were posted on my site.

Known Research
Copyright law permits Fair Use for many purposes, including “research”. Even Getty’s own FAQ explicitly mentions “research” as an acceptable form of Fair Use. The question then becomes: am I a researcher and does my blog report on research? (Among other things, this goes toward my background section in the Copyright Declaratory Judgment filing.)

As it turns out, my web logs are extremely telling. I can see each time anyone at any network address associated with Getty Images visits my site. For most of my blog entries, I either get no Getty visitors or a few visitors. However, each time I post an in-depth research entry on digital photo forensics, I see large groups of people at Getty visiting the blog entry. I can even see when one Getty person comes through, and then a bunch of other Getty people visit my site — suggesting that one person told his coworkers about the blog entry. In effect, employees at Getty Images have been regular readers of my blog since at least 2011. (For discovery, I would request a forensic image of every computer in Getty’s company that has accessed my web site in order to determine if they used my site for research.)

Getty users also use my online analysis service, FotoForensics. This service is explicitly a research service. There are plenty of examples of Getty users accessing the FotoForensics site to view analysis images, read tutorials, and even upload pictures with test files that have names like “watermark.jpg” and “watermark-removed.jpg”. This explicitly shows that they are using my site as a research tool.

(For the ultra paranoid people: I have neither the time nor the desire to track down every user in my web logs. But if you send me a legal threat, I will grep through the data.)

However, the list does not stop there. For example, the Harvard Reference Guide lists me as the example for citing research from a blog. (PDF: see PDF page 44, document page 42.) Not only does Getty use my site as a research resource, Harvard’s style guide uses me as the example for a research blog (my bold for emphasis).

Blogs are NOT acceptable academic sources unless as objects of research

Paraphrasing, Author Prominent:
Krawetz (2011) uses a blog to discuss advanced forensic image analysis techniques.

Paraphrasing, Information Prominent:
Blogs may give credence to opinion, in some cases with supporting evidence; for example the claim that many images of fashion models have been digitally enhanced (Krawetz 2011).

Reference List Model:
Krawetz, N 2011, ‘The hacker factor blog’, web log, viewed 15 November 2011, http://www.hackerfactor.com/blog/

I should also point out that the AP and Reuters have both been very aware of my blog — including a VP at the AP — and neither has accused me of copyright infringement. They appear to recognize this as Fair Use. Moreover, for one of my blog entries on a Reuters photo (Without a Crutch), a Reuters editor referred to the blog entry as a “Great in-depth analysis” on Reuters’ web site (see Sep 30, 2011) and on her Twitter feed. This shows that Getty’s direct competitors recognize my blog as a research resource.

SLAPP
One of the things my attorney mentioned was California’s Anti-SLAPP law. Wikipedia explains SLAPP, or Strategic Lawsuit Against Public Participation, as “a lawsuit that is intended to censor, intimidate, and silence critics by burdening them with the cost of a legal defense until they abandon their criticism or opposition.” Wikipedia also says:

The plaintiff’s goals are accomplished if the defendant succumbs to fear, intimidation, mounting legal costs or simple exhaustion and abandons the criticism. A SLAPP may also intimidate others from participating in the debate. A SLAPP is often preceded by a legal threat. The difficulty is that plaintiffs do not present themselves to the Court admitting that their intent is to censor, intimidate or silence their critics.

In this case, Getty proceeded to send me a legal threat regarding alleged copyright infringement. Then they demanded $475 and threatened more actions if I failed to pay it. In contrast, it would cost me $400 to file for a Declaratory Judgment (more if I lived in other states), and costs could rise dramatically if Getty filed a lawsuit against me. In either scenario, it places a financial burden on me if I want to defend my First Amendment rights.

In the United States, California has special anti-SLAPP legislation. While not essential, it helps that Getty has offices in California and a network trace shows that some packets went from Getty to my blog through routers in California. As Wikipedia explains:

To win an anti-SLAPP motion, the defendant must first show that the lawsuit is based on claims related to constitutionally protected activities, typically First Amendment rights such as free speech, and typically seeks to show that the claim lacks any basis of genuine substance, legal underpinnings, evidence, or prospect of success. If this is demonstrated then the burden shifts to the plaintiff, to affirmatively present evidence demonstrating a reasonable probability of succeeding in their case by showing an actual wrong would exist as recognized by law, if the facts claimed were borne out.

This isn’t even half of his legal advice. I could barely take notes fast enough as he remarked on topics like Rule 11, tortious interference with a business relationship, Groucho Marx’s reply to Warner Brothers, and how Getty’s repeated access to my web site could be their way to inflate potential damage claims (since damages are based on the number of views).

A Little Due Diligence Goes A Long Way

Although this entire encounter with Getty Images took less than two weeks, I was preparing for a long battle. I even contacted the Electronic Frontier Foundation (EFF) to see if they could assist. The day after Getty recanted, I received a reply from the EFF: no less than four attorneys wanted to help me. (Thank you, EFF!)

I strongly believe that Getty Images is using a “cookie cutter” style of complaint and is not actually interested in any lawsuit; they just want to extort money from people who don’t know their rights or don’t have the fortitude for a long defense (SLAPP). Getty Images made no effort to evaluate the content beyond an automated search bot, made no attempt to review the bot’s results, provided no evidence that they are the copyright holder, provided no proof that they tried to verify licenses, and threatened legal action against me if I did not pay up.

I am glad that I stood up for my First Amendment rights.

Darknet - The Darkside: Clear Your Cookies? You Can’t Escape Canvas Fingerprinting

This post was syndicated from: Darknet - The Darkside and was written by: Darknet. Original post: at Darknet - The Darkside

So tracking is getting even trickier, it seems canvas fingerprinting would work in any browser that supports HTML5 and is pretty hard to stop as a user, as it’s a basic feature (a website instructing your browser to draw an image using canvas). And it turns out, every single browser will draw the image slightly [...]

The post Clear Your…

Read the full post at darknet.org.uk

Schneier on Security: Fingerprinting Computers By Making Them Draw Images

This post was syndicated from: Schneier on Security and was written by: schneier. Original post: at Schneier on Security

Here’s a new way to identify individual computers over the Internet. The page instructs the browser to draw an image. Because each computer draws the image slightly differently, this can be used to uniquely identify each computer. This is a big deal, because there’s no way to block this right now.
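The browser-side technique has a page render text to an HTML5 canvas, read the pixels back, and hash them; tiny differences in GPU, fonts, and antialiasing make the hash effectively machine-specific. The hashing step can be simulated outside the browser — a sketch using two hypothetical machines’ pixel buffers:

```python
# Simulation of the fingerprinting principle: hash the rendered pixel
# bytes. The buffers below stand in for canvas readback on two machines
# whose rendering differs by a single channel value.
import hashlib

def fingerprint(pixel_bytes: bytes) -> str:
    """Derive a short stable identifier from rendered pixel data."""
    return hashlib.sha256(pixel_bytes).hexdigest()[:16]

machine_a = bytes([120, 45, 200, 255] * 64)  # pretend RGBA output on machine A
machine_b = bytes([120, 45, 201, 255] * 64)  # one channel off by 1 on machine B

# A one-bit rendering difference yields a completely different identifier:
print(fingerprint(machine_a) != fingerprint(machine_b))  # True
```

The identifier is stable per machine across visits, which is why clearing cookies doesn’t help.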

Article. Hacker News thread.

EDITED TO ADD (7/22): This technique was first described in 2012. And it seems that NoScript blocks this. Privacy Badger probably blocks it, too.

EDITED TO ADD (7/23): EFF has a good post on who is using this tracking system — the White House is — and how to defend against it.

And a good story on BoingBoing.

TorrentFreak: BPI Rejects Use of Spotify-Owned “Stay Down” Pirate Tool

This post was syndicated from: TorrentFreak and was written by: Andy. Original post: at TorrentFreak

There are hundreds of millions of pirate files inhabiting the Internet and it’s fair to say that many of those are music tracks. As a result, the world’s leading record labels, who together claim 90%+ of the market, spend significant sums making those files more awkward to find.

For sites like The Pirate Bay, which point-blank refuses to remove any torrents whatsoever, the labels have little option but to head off to Google. There the search giant will remove Pirate Bay links from its indexes so that users won’t immediately find them.

However, rather than engaging in link whack-a-mole, the best solution by far is to remove the content itself. Perhaps surprisingly, many of the world’s leading file-lockers (even ones labeled ‘rogue’ by the United States) allow copyright holders direct back-end access to their systems so they can remove content themselves. It doesn’t really get any fairer than that, and here’s the issue.

This week, while looking at Google’s Transparency Report, TF noticed that during the past month massive file-hosting site 4shared became the record labels’ public enemy number one. In just four weeks, Google received 953,065 requests for 4shared links to be taken down, the majority of them from record labels. In fact, according to Google the BPI has complained about 4shared a mind-boggling 6.75 million times overall.

So, is 4shared refusing to cooperate with the BPI, hence the group’s endless complaints to Google? That conclusion might make sense but apparently it’s not the case. In fact, it appears that 4shared operates a removal system that is particularly friendly to music companies, one that not only allows them to take content down, but also keep it down.

“Throughout the years 4shared developed several tools for copyright owners to protect their content and established a special team that reacts to copyright claims in timely manner,” 4shared informs TorrentFreak.

“We don’t completely understand BPI’s reasons for sending claims to Google instead of using our tools. From our point of view the best and most effective way for copyright holders to find and remove links to the content they own is to use our music identification system.”

To find out more, TF spoke with the BPI. We asked them to comment on 4shared’s takedown tools and in the light of their existence why they choose to target Google instead. After a few friendly back-and-forth emails, the group declined to comment on the specific case.

“We prefer to comment on our overall approach on search rather than on individual sites, which is to focus on known sources of wide scale piracy and to use a number of tools to tackle this problem,” a BPI spokesman explained.

“Notice-sending represents just one part of the measures available to us, along with site blocking and working with the Police to reducing advertising on copyright infringing sites.”

We asked 4shared to reveal other copyright holders using their system, but the site declined on privacy grounds. However, it’s clear that the BPI isn’t a user and 4shared have their own ideas why that might be.

“It’s possible that BPI goes for quantity not quality,” TF was told.

“If they are trying to increase the number of links in reports or for PR reasons, they probably use a bot to harvest and send links to Google despite the fact that such an approach may also result in false claims.”

The “PR” angle is an interesting one. Ever since Google began publishing its Transparency Report rightsholders have used it to demonstrate how bad the piracy problem is. Boosting those numbers certainly helps the cause.

But is it possible, perhaps, that the BPI doesn’t trust the 4shared system? They didn’t answer our questions on that front either, but it seems unlikely, since 4shared uses EchoPrint, a solution purchased by Spotify earlier this year.

“Our music identification system which is based on Echoprint technology will not only find all matching content but will also restrict sharing of all potential future uploads of such content,” 4shared concludes.

Take-down-and-stay-down is the Holy Grail for anti-piracy companies. It’s a solution being pushed for in the United States in the face of what rightsholders say is a broken DMCA. On that basis there must be a good reason for the BPI not wanting to work with 4shared and it has to be said that the company’s “PR” theory proves more attractive than most.

The volume of notices in Google’s Transparency Report provides believable evidence of large-scale infringement, and it’s certainly possible that the BPI would prefer to have 4shared blocked in the UK rather than work with the site’s takedown tools.

We’ll find out the truth in the months to come.

Source: TorrentFreak, for the latest info on copyright, file-sharing and anonymous VPN services.

Krebs on Security: Even Script Kids Have a Right to Be Forgotten

This post was syndicated from: Krebs on Security and was written by: BrianKrebs. Original post: at Krebs on Security

Indexeus, a new search engine that indexes user account information acquired from more than 100 recent data breaches, has caught many in the hacker underground off-guard. That’s because the breached databases crawled by this search engine are mostly sites frequented by young ne’er-do-wells who are just getting their feet wet in the cybercrime business.

Indexeus[dot]org

Indexeus boasts that it has a searchable database of “over 200 million entries available to our customers.” The site allows anyone to query millions of records from some of the larger data breaches of late — including the recent break-ins at Adobe and Yahoo! — listing things like email addresses, usernames, passwords, Internet addresses, physical addresses, birthdays and other information that may be associated with those accounts.

Who are Indexeus’s target customers? Denizens of hackforums[dot]net, a huge forum that is overrun by novice teenage hackers (a.k.a “script kiddies”) from around the world who are selling and buying a broad variety of services designed to help attack, track or otherwise harass people online.

Few services are as full of irony and schadenfreude as Indexeus. You see, the majority of the 100+ databases crawled by this search engine are either from hacker forums that have been hacked, or from sites dedicated to offering so-called “booter” services — powerful servers that can be rented to launch denial-of-service attacks aimed at knocking Web sites and Web users offline.

The brains behind Indexeus — a gaggle of young men in their mid- to late teens or early 20s — envisioned the service as a way to frighten fellow hackers into paying to have their information removed or “blacklisted” from the search engine. Those who pay “donations” of approximately $1 per record (paid in Bitcoin) can not only get their records expunged, but that price also buys insurance against having their information indexed by the search engine in the event it shows up in future database leaks.

The team responsible for Indexeus explains the rationale for their project with the following dubious disclaimer:

“The purpose of Indexeus is not to provide private informations about someone, but to protect them by creating awareness. Therefore we are not responsible for any misuse or malicious use of our content and service. Indexeus is not a dump. A dump is by definition a file containing logins, passwords, personal details or emails. What Indexeus provides is a single-search, data-mining search engine.”

Such information would be very useful for those seeking to settle grudges by hijacking a rival hacker’s accounts. Unsurprisingly, a number of Hackforums users reported quickly finding many of their favorite usernames, passwords and other data on Indexeus. They began to protest against the service being marketed on Hackforums, charging that Indexeus was little more than a shakedown.

Indeed, the search engine was even indexing user accounts stolen from witza.net, the site operated by Hackforums administrator Jesse LaBrocca and used to process payments from Hackforums users who wish to upgrade the standing of their accounts on the forum.

WHO RUNS INDEXEUS?

The individual who hired programmers to help him build Indexeus uses the nickname “Dubitus” on Hackforums and other forums. For the bargain price of $25 and two hours of your time on a Saturday, Dubitus also sells online instructional training on “doxing” people — working backwards from someone’s various online personas to determine their real-life name, address and other personal data.

Dubitus claims to be a master at something he calls “Web detracing,” which is basically removing all of the links from your online personas that might allow someone to dox you. I have no idea if his training class is any good, but it wasn’t terribly difficult to find this young man in the real world.

Dubitus offering training for “doxing” and “Web detracing.”

Contacted via Facebook by KrebsOnSecurity, Jason Relinquo, 23, from Lisbon, Portugal, acknowledged organizing and running the search engine. He also claims his service was built merely as an educational tool.

“I want this to grow and be a reference, and at some point by a tool useful enough to be used by law enforcement,” Relinquo said. “I wouldn’t have won the NATO Cyberdefense Competition if I didn’t have a bigger picture in my mind. Just keep that in yours.”

Relinquo said that to address criticisms that his service was a shakedown, he recently modified the terms of service so that users don’t have to pay to have their information removed from the site. Even so, it remains unclear how users would prove that they are the rightful owner of specific records indexed by the service.

Jason Relinquo

“We’re going through some reforms (free blacklisting, plus subscription based searches), due some legal complications that I don’t want to escalate,” Relinquo wrote in a chat session. “If [Indexeus users] want to keep the logs and pay for the blacklist, it’s an option. We also state that in case of a minor, the removal is immediate.”

Asked which sort of legal complications were bedeviling his project, Relinquo cited the so-called “right to be forgotten,” data protection and privacy laws in Europe that were strengthened by a May 2014 decision by the European Court of Justice in a ruling against Google. In that case, the EU’s highest court ruled that individuals have a right to request the removal of Internet search results, including their names, that are “inadequate, irrelevant or no longer relevant, or excessive.”

I find it difficult to believe that Indexeus’s creators would be swayed by such technicalities, given that the service was set up to sell passwords to members of a forum known to be frequented by people who will use them for malicious purposes. In any case, I doubt this is the last time we will hear of a service like this. Some 822 million records were exposed in more than 2,160 separate data breach incidents last year, and there is plenty of room for competition and further specialization in the hacked-data search engine market.