Posts tagged ‘Privacy’

The Hacker Factor Blog: We Know You’re A Dog

This post was syndicated from: The Hacker Factor Blog and was written by: The Hacker Factor Blog. Original post: at The Hacker Factor Blog

Usually when I read about “new” findings in computer security, they are things that I’ve known about for years. Car hacking, parasitic file attachments, and even changes in phishing and spamming. If you’re active in the computer security community, then most of the public announcements are probably not new to you. But Wired just reported on something that I had only learned about a few months ago.

I had previously mentioned that I was looking for alternate ways to ban users who violate the FotoForensics terms of service. Specifically, I’m looking at HTTP headers for clues to identify if the web client is using a proxy.

One of the things I discovered a few months ago was the “X-UIDH” header that some web clients send. As Wired and Web Policy mentioned, Verizon is adding this header to HTTP requests that go over their network and it can be used to track users.

Miswired

As is typical for Wired, they didn’t get all of the details correct.

  • Wired says that the strings are “about 50 letters, numbers, and characters”. I’ve only seen 56- and 60-character sequences. The data appears to be a base64-encoded binary set. If you base64 decode the sequence, then you’ll see that it begins with a text number, like “379612345”, and it is null-terminated. I don’t know what this is, but it is unique per account. It could be the user’s account number. After that comes a bunch of binary data that I have not yet decoded.

  • Wired says that the string follows the user. This is a half-truth. If you change network addresses, then only the first part of the base64 X-UIDH value stays the same. The rest changes. If services only store the X-UIDH string, then they will not be tracking you. But if they decode the string and use the decoded number, then services can track you regardless of your Verizon-assigned network address.
  • Wired makes it sound like Verizon adds the header for most Verizon clients. However, it isn’t added by every Verizon service. I’ve only seen this on some Verizon Wireless networks. Users with FiOS or other Verizon services are not exposed by this added header. And even people who use Verizon Wireless may not have it added, depending on their location. If your dynamically assigned hostname says “myvzw.com”, then you might be tagged. But if it isn’t, then you’re not.
  • The X-UIDH header is only added when the web request uses HTTP. I have not seen it added to any HTTPS headers. However, most web services use HTTP. And even services like eBay and Paypal load some images with HTTP even when you use HTTPS to connect to the service. So this information will be leaked.
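Based on these observations, extracting the leading text number can be sketched in a few lines of Python. The header value below is synthetic (built so that it base64-encodes to a 56-character string); real X-UIDH values are tied to subscriber accounts.

```python
import base64

def extract_account_number(x_uidh: str) -> str:
    """Base64-decode an X-UIDH-style value and return the leading
    null-terminated text number (possibly the account number)."""
    raw = base64.b64decode(x_uidh)
    text_number, _, _binary_tail = raw.partition(b"\x00")
    return text_number.decode("ascii")

# Synthetic example: a text number, a null terminator, then binary data.
sample = base64.b64encode(b"379612345\x00" + bytes(range(30))).decode()
print(extract_account_number(sample))  # -> 379612345
```

A service that stores only the raw header string loses the user when the tail changes; a service that stores the decoded leading number keeps tracking them.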

The Wired article focused on how this can be used by advertisers. However, it can also be used by banks as part of a two-part authentication: something you know (your username and password) and something you are (your Verizon account number).

Personally, I’ve been planning to use it for a much more explicit purpose. I’ve mentioned that I am legally required to report people who upload child porn to my server. And while I am usually pro-privacy, I don’t mind reporting these people because there is a nearly one-to-one relationship between people who have child porn and people who abuse children. So… wouldn’t it be wonderful if I could also provide their Verizon account number along with my required report? (Let’s make it extremely easy for the police to make an arrest.)

Unique, and yet…

One other thing that Wired and other outlets failed to mention is that Verizon isn’t the only service that does this kind of tracking. Verizon adds in an “X-UIDH” header. But they are not alone. Two other examples are Vodafone and AT&T. Vodafone inserts an X-VF-ACR header and AT&T Mobility LLC (network AS20057) adds in an “x-acr” header. These headers can be used for the same type of user-specific tracking and identification.

And it isn’t just service providers. If your web antivirus software performs real-time network scanning, then there’s a good chance that it is adding in unique headers that can be used to track you. I’ve even identified a few headers that are inserted by specific nation-states. If I see the presence of certain HTTP headers, then I immediately know the country of origin. (I’m not making this info public yet because I don’t want Syria to change the headers. Oops…)
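A first pass at spotting these injected identifiers is a simple lookup against the header names mentioned above. This is a sketch; the list mirrors the examples in this post and is surely incomplete.

```python
# Carrier- and middlebox-injected tracking headers (lowercased).
TRACKING_HEADERS = {
    "x-uidh": "Verizon Wireless",
    "x-vf-acr": "Vodafone",
    "x-acr": "AT&T Mobility",
}

def find_tracking_headers(headers):
    """Return {header: likely injector} for any known tracking IDs present."""
    return {
        name.lower(): TRACKING_HEADERS[name.lower()]
        for name in headers
        if name.lower() in TRACKING_HEADERS
    }

# The X-UIDH value here is a placeholder, not a real captured string.
request = {"Host": "example.com", "X-UIDH": "Mzc5NjEyMzQ1AA..."}
print(find_tracking_headers(request))  # -> {'x-uidh': 'Verizon Wireless'}
```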

Business as usual

For over a decade, it has been widely known in the security field that users can be tracked based on their HTTP headers. In fact, the EFF has an online test that determines how unique your HTTP header is. (The EFF also links to a paper on this topic.) According to them, my combination of operating system, time zone, web browser, and browser settings makes my system “unique among the 4,645,400 tested so far.” Adding in yet-another header doesn’t make me more unique.
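Header-based fingerprinting can be illustrated by collapsing the header set into a single stable value. This is only a sketch, not the EFF’s actual method, which also mixes in signals like fonts, plugins, and screen size.

```python
import hashlib

def header_fingerprint(headers):
    """Collapse a header set into a short, order-independent fingerprint."""
    canonical = "\n".join(
        f"{k.lower()}: {v}"
        for k, v in sorted(headers.items(), key=lambda kv: kv[0].lower())
    )
    return hashlib.sha256(canonical.encode()).hexdigest()[:16]

client = {
    "User-Agent": "Mozilla/5.0 (X11; Linux x86_64) ...",
    "Accept-Language": "en-US,en;q=0.5",
    "Accept-Encoding": "gzip, deflate",
}
print(header_fingerprint(client))
```

Any single header is common; it is the combination that becomes distinctive, which is why adding one more injected header barely changes a system that is already unique.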

When I drive my car, I am in public. People can see my car and they can see me. While I believe that the entire world isn’t watching me, I am still in public. My car’s make and model is certainly not unique, but the various scratches and dents are. When I drive to my favorite restaurant, they know it is me before I get out of the car. By the same means, my HTTP header is distinct. For some uses, it is even unique. When I visit my favorite web sites, they can identify me by my browser’s HTTP header.

Continuing with this analogy, my car has a license plate. Anyone around me can see it and it is unique. With the right software, someone can even identify “me” from my license plate. Repainting my car doesn’t change the license plate. These unique tracking IDs that are added by various ISPs are no different from a license plate. The entire world may not be able to see it, but anywhere you go, it goes with you and it is not private.

The entire argument that these IDs violate online privacy is flawed. You never had privacy to begin with. Moreover, these unique tags do not make you any more exposed or any more difficult to track. And just as you can take specific steps to reduce your traceability in public, you still have options to reduce your traceability online.

The Hacker Factor Blog: Parasites

This post was syndicated from: The Hacker Factor Blog and was written by: The Hacker Factor Blog. Original post: at The Hacker Factor Blog

Every now and then, old security concepts resurface as if they were something new. Recently, I’ve been seeing a lot more activity related to parasitic attachments in pictures.

A parasitic attachment, or parasite, is an unrelated file that is simply attached to another file. With pictures, it is an unrelated chunk of data attached to the image file. When rendering a picture, the parasite is ignored. And when transferring the picture, the parasite follows along for the ride.

Attaching Parasites

To understand how this works, let’s focus on JPEG. Every JPEG has a header, information related to decompression settings, and the compressed binary image stream. The stream has a well-defined start and a well-defined end. When rendering pictures, your graphics program stops at the end of stream marker. It doesn’t look beyond that point, so anything attached after the JPEG becomes ignored information.

There’s actually a lot of information that may be intentionally stuffed after the image. Some vendors store thumbnail images after the main image. Back in 2010, I pointed out that some Android devices store operating system information after the picture.

Parasites are not limited to JPEG formats. Virtually every image format out there has a well-defined “end”, and rendering programs stop when they hit the defined end. PNG, BMP, and even GIF can all have parasites without impacting how the picture is rendered. There’s even a nice tutorial from 2010 on how to attach a parasite. And a similar tutorial from 2006. (And I remember doing this type of thing back in 1992, and it definitely wasn’t “new” back then.) Creating a parasitic attachment is literally as easy as appending data to an existing JPEG.
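The append trick really is that simple. Here is a sketch using stand-in bytes rather than a real photo:

```python
JPEG_SOI = b"\xff\xd8"  # start-of-image marker
JPEG_EOI = b"\xff\xd9"  # end-of-image marker

def attach_parasite(jpeg_bytes: bytes, parasite: bytes) -> bytes:
    """Append a parasite after the image; renderers stop at the EOI
    marker, so the extra bytes never affect the displayed picture."""
    if not (jpeg_bytes.startswith(JPEG_SOI) and jpeg_bytes.endswith(JPEG_EOI)):
        raise ValueError("not a complete JPEG stream")
    return jpeg_bytes + parasite

# Stand-in bytes for a real photo: SOI ... EOI.
host = JPEG_SOI + b"image-data" + JPEG_EOI
infected = attach_parasite(host, b"PK\x03\x04 pretend zip payload")
print(infected.endswith(b"payload"))  # -> True
```

The same effect comes from `cat picture.jpg payload.zip > combined.jpg` on any Unix-like system.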

Parasites are not limited to the end of the file. They may be stuffed in comment fields, proprietary data blocks, and other unused areas in the picture file format. Both JPEG and PNG support custom data blocks. If the rendering software doesn’t support the custom data block, then the block is ignored. For parasites, you just define your own custom data block and expect it to be ignored.

Finally, there is the payload carried by the parasite. At FotoForensics, about 0.05% (yes, less than a tenth of a percent) of all files contain some kind of parasitic attachment. Zip files, RAR files, 7zip archives, and text are all common. But I’ve also seen PDFs, PKCS7 certificates, encrypted data, Word documents, unrelated pictures, and much more. In September 2014, FotoForensics received 34,206 unique file uploads. Of those, 17 files had parasites that my software readily identifies. Most of the parasites were zip files, but there were also a few RAR files and other types of data.
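A naive check for end-of-file parasites can be this simple. (Deliberately naive: embedded thumbnails carry their own end-of-image marker, so a real scanner walks the JPEG segment structure instead of searching for raw bytes.)

```python
def trailing_parasite(jpeg_bytes: bytes) -> bytes:
    """Return whatever follows the end-of-image marker.
    Naive: real JPEGs may contain an EOI inside an embedded
    thumbnail, so production code must parse the segments."""
    end = jpeg_bytes.find(b"\xff\xd9")
    if end == -1:
        raise ValueError("no EOI marker found")
    return jpeg_bytes[end + 2:]

clean = b"\xff\xd8" + b"image-data" + b"\xff\xd9"
print(trailing_parasite(clean))                   # -> b''
print(trailing_parasite(clean + b"hidden text"))  # -> b'hidden text'
```

Identifying the payload type is then a matter of matching magic numbers: `PK\x03\x04` for zip, `Rar!` for RAR, `%PDF` for PDF, and so on.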

Hamster Dance

As an example, the following picture was uploaded to FotoForensics on 1-Sept-2014.

This file looks like a picture of some hamsters. But inside the JPEG file is a parasitic zip file stuffed in an APP1 data field. This non-standard APP1 data block is ignored when the image is rendered. Even programs like ExifTool and exiv2 ignore the unknown binary block. However, the APP1 data definitely contains a zip file, and most zip programs will happily unzip it without even extracting it from the JPEG. Inside the zip file is another picture that gives clues to some GPS coordinates.

This hamster picture actually came from a geo-caching forum. In fact, most of the files with parasites at FotoForensics come from geo-caching forums.

“Why geo-caching?” They love puzzles. It used to be fun to give someone GPS coordinates and let them see if they could find some prize at the physical location. When that was too simple, they began to use remote coordinates — get ready for a three-hour hike or a mountain climb. When remote locations became too easy, they began to hide the objects — you might need to bring a shovel or a flashlight to find the prize. Then they began to turn the coordinates into puzzles: if you can solve the puzzle, then you will find the coordinates. Today? Hard-core steganography. First you have to find the puzzle. Then you have to solve it. Then you have to go to the coordinates (where there may be more puzzles) until you find the final prize. Seriously — if you want to see steg in real life, watch the geo-caching community.

As an aside, one of my friends keeps saying that we should start up a get-rich-quick business. Since FotoForensics receives lots of these geo-caching puzzles, we should solve them first and park a food truck at the prize location. You just know the players will be hungry when they get there.

Chimeric Parasites

Last month I read about a proof-of-concept tool that will turn a JPEG into a PDF or PNG file after applying AES or 3DES cryptography. Corkami works by using parasitic attachments. Specifically, they encrypt a PNG file and PDF, one with AES and the other with 3DES.

With many cryptographic algorithms, decryption is just another invertible transform: “decrypting” a file that was never encrypted produces binary gibberish, and the only way to restore the original file is to encrypt that gibberish.
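The idea can be demonstrated with a toy byte-shift cipher. This is not AES or 3DES, just the smallest cipher where encrypt and decrypt are distinct inverse operations; the key is purely illustrative.

```python
KEY = b"toy-key"  # illustrative key, nothing like a real AES/3DES key

def encrypt(data: bytes) -> bytes:
    """Add a repeating keystream to each byte (mod 256)."""
    return bytes((b + KEY[i % len(KEY)]) % 256 for i, b in enumerate(data))

def decrypt(data: bytes) -> bytes:
    """Exact inverse of encrypt()."""
    return bytes((b - KEY[i % len(KEY)]) % 256 for i, b in enumerate(data))

plaintext = b"\x89PNG\r\n\x1a\n...image data..."
stored = decrypt(plaintext)          # "decrypt" the plaintext: binary gibberish
print(encrypt(stored) == plaintext)  # -> True: encrypting restores it
```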

After encrypting (technically, decrypting) the PNG and PDF, they store them in the JPEG. The example encodes the encrypted PNG at the beginning of the JPEG (in a comment) and the PDF as a huge binary parasite at the end of the JPEG.

The hard part for all of this is choosing the right key for all of the cryptography. The AES key is chosen so that it generates a proper PNG header (8 bytes) when given the JPEG header as input. Applying AES encryption to the JPEG creates a PNG header, some binary junk, and then decodes the encrypted PNG data. This results in a valid PNG with binary crud that is ignored by any graphics software.

Similarly, the 3DES key is chosen to generate the PDF header (8 bytes). And the encoded 3DES PDF is placed at the end of the JPEG. This way, the 3DES encoding reconstructs a PDF. And since PDF readers start parsing at the end of the file, the binary garbage at the beginning of the file (created from the JPEG) is ignored and the entire thing renders as a valid PDF.

Infectious Behavior

Discussions about parasitic attachments seem to come up annually. Last year, some researcher discovered that they could hide PHP or Perl or other types of code in text comment fields. If your web site processes back-end server scripts, displays JPEG comments, and isn’t careful about protecting output when displaying image comments, then this could run code on the server. (FotoForensics has captured plenty of examples of these hostile comment fields, and I’ve been seeing this sort of thing for years; the announcement last year may be new to them, but it wasn’t new.)

Keep in mind, hiding malware in a parasitic attachment is not the same as renaming an EXE to “JPEG” and emailing it as an attachment. (“Just double click on the picture!”) A properly created parasite will not interfere with the host image. Just renaming an executable to “.jpg” does not make it a parasite.

Harmless Parasites

There’s a difference between steganography and cryptography. Cryptography refers to making data inaccessible. You can see the data, but you cannot understand it. Steganography refers to making data hard to find. But if you find it, you may be able to immediately understand it.

Parasitic attachments are one form of steganography. However, as hiding places go, they are relatively easy to detect. Anyone parsing the file format will see a large, non-standard binary blob buried in the file. While your friends may not readily notice these large binary chunks stuffed in your pictures, forensic investigators are likely to find the hidden data very quickly. If you’re doing something malicious and investigators see these parasitic attachments, then they may be interpreted as “intent” to hide activities. (I’m not an attorney; if you find yourself in this situation, then you should get an attorney.)

Parasites are also trivial to remove. I frequently mention “resaved” images. That’s where a picture is decoded and then re-encoded as it is saved to a new file. Facebook resaves pictures. Twitter resaves pictures. And nearly every online picture sharing service that scales pictures also performs a resave. The simple action of resaving an image is enough to remove parasites. (I am pretty certain that Facebook and Twitter resave pictures as an explicit method for removing metadata, including any parasites.)
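For end-of-file parasites specifically, the effect can be approximated by truncating at the end-of-image marker. A real resave goes much further: it decodes and re-encodes the pixels, which also destroys parasites hidden in comments and custom data blocks. (Same caveat as any naive EOI search: embedded thumbnails can contain their own marker.)

```python
def strip_trailing_parasite(jpeg_bytes: bytes) -> bytes:
    """Drop everything after the end-of-image marker. A real resave
    re-encodes the pixels, which also removes metadata parasites."""
    end = jpeg_bytes.find(b"\xff\xd9")
    if end == -1:
        raise ValueError("no EOI marker found")
    return jpeg_bytes[: end + 2]

infected = b"\xff\xd8" + b"image-data" + b"\xff\xd9" + b"PK..parasite"
print(strip_trailing_parasite(infected) == b"\xff\xd8image-data\xff\xd9")  # -> True
```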

As far as the threat level goes, these parasitic attachments are explicitly hiding. They won’t activate on a double-click and, with few exceptions, remain passive and unnoticed. In order to use the data, you must know it is there and know how to extract the content.

Even though the technique has been around for decades, I still think finding parasites within pictures is a treat. You never know what you’re going to find. (I have no idea what “APdb6” means, but GrrCon sounds like a fun conference.)

Linux How-Tos and Linux Tutorials: How to Get Open Source Android

This post was syndicated from: Linux How-Tos and Linux Tutorials and was written by: Carla Schroder. Original post: at Linux How-Tos and Linux Tutorials

Android is an astonishing commercial success, and is often touted as a Linux success. In some ways it is; Google was able to leverage Linux and free/open source software to get Android to market in record time, and to offer a feature set that quickly outstripped the old champion iOS.

But it’s not Linux as we know it. Most Android devices are locked down, and we can’t freely download and install whatever operating systems we want like we can with our Linux PCs, or install whatever apps we want without jailbreaking the devices we own. We can’t set up a business to sell Google Android devices without jumping through a lot of expensive hoops (see The hidden costs of building an Android device and Secret Ties in Google’s “Open” Android). We can’t even respin Google Android however we want and redistribute it, because Google requires bundling a set of Google apps.

So where do you go to find real open source Android? Does such a thing even exist? Why yes it does.

F-Droid: FOSS Repository

There are quite a few Android repositories other than the Google Play Store, such as Amazon Appstore for Android, Samsung Galaxy Apps, and the Opera Mobile Store. But there is only one, as far as I know, that stocks only free/open source apps, and that is F-Droid (figure 1).

F-Droid is a pure volunteer effort. It was founded in 2010 by Ciaran Gultnieks, and is now operated by F-Droid Limited, a non-profit organisation registered in England. F-Droid relies on donations and community support. The good F-Droid people perform security and privacy checks on submitted apps, though they wisely warn that there are no guarantees. F-Droid promises to respect your privacy and to not track you, your devices, or what you install. You don’t need to register for an account to use the F-Droid client, which sends no identifying information to their servers other than its version number.

To get F-Droid, all you do is download and install the F-Droid client (the download button is on the front page of the site). Easy peasy. You can browse and search apps on the website and in the client.

Other FOSS Android Directories

DroidBreak is a nice resource for finding FOSS Android apps. DroidBreak is not a software repository, but a good organized place to find apps.

AOpenSource.com is another FOSS Android directory. It gives more information on most of the apps, and has some good Android books links.

PRISM Break lists alternatives to popular closed-source proprietary apps, and is privacy- and security-oriented.

Now let’s look at how to get a FOSS Android operating system.

CyanogenMod

CyanogenMod is one of the best and most popular FOSS Android variants. This is a complete replacement for Google’s Android, just like you can replace Debian with Ubuntu or Linux Mint. (Or Mint with Debian. Or whatever.) It is based on the Android Open Source Project.

All CyanogenMod source code is freely available on their GitHub repository. CyanogenMod supports bales of features including CPU overclocking, controlling permissions on apps, soft buttons, full tethering with no backtalk, easier Wi-Fi, Bluetooth, and GPS management, and absolutely no spyware (which seems to be the #1 purpose of most of the apps in the Play Store). CyanogenMod is more like a real Linux: completely open and modifiable.

CyanogenMod has a bunch of nice user-friendly features: a blacklist for blocking annoying callers, a quick setting ribbon for starting your favorite apps with one swipe, user-themeable, a customizable status bar, profiles for multiple users or multiple workflows, a customizable lockscreen…in short, a completely user-customizable interface. You get a superuser and unprivileged users, all just like your favorite Linux desktop.

CyanogenMod has been ported to a lot of devices, so chances are your phone or tablet is already supported. Amazon Kindle Fire, ASUS, Google Nexus, HTC, LG, Motorola, Samsung, Sony, and lots more. A large and active community supports CyanogenMod, and the Wiki contains bales of good documentation, including help for wannabe developers.

So how do you install CyanogenMod? Isn’t that the scary part, where a mistake bricks your device? That is a real risk. So start with nothing-to-lose practice gadgets: look for some older used tablets and smartphones for cheap and practice on them. Don’t risk your shiny new stuff until you’ve gained experience. Anyway, installation is not all that scary as the good CyanogenMod people have built a super-nice reliable installer that does not require that you be a mighty guru. You don’t need to root your phone because the installer does that for you. After installation the updater takes care of keeping your installation current.

Replicant

Replicant gets my vote for best name. Please treat yourself to a viewing of the movie “Blade Runner” if you don’t get the reference. Even with a Free Android operating system, phones and tablets still use a lot of proprietary blobs, and one of the goals of Replicant is to replace these with Free software. Replicant was originally based on the Android Open Source Project, and then migrated to CyanogenMod to take advantage of their extensive device support. Replicant is a little more work to install, so you’ll acquire a deeper knowledge of how to get software onto devices that don’t want you to. Replicant is sponsored by the Free Software Foundation.

The Google Play Store has over a million apps. This sounds impressive, but many of them are junk, most of them are devoted to data-mining you for all you’re worth, and how many Mine Sweeper and Mahjongg ripoffs do you need? Android is destined to be a streamlined general-purpose operating system for a multitude of portable low-power devices (coming to a refrigerator near you! Why? Because!), and this is a great time to get acquainted with it on a deeper level.

LWN.net: Ten years of Ubuntu (ars technica)

This post was syndicated from: LWN.net and was written by: corbet. Original post: at LWN.net

Here’s a lengthy ars technica retrospective on Ubuntu’s first ten years. “As you’ll soon see in this look at the desktop distro through the years, Linux observers sensed there was something special about Ubuntu nearly from the start. However, while a Linux OS that genuinely had users in mind was quickly embraced, Ubuntu’s ten-year journey since is a microcosm of the major Linux events of the last decade—encompassing everything from privacy concerns and Windows resentment to server expansion and hopes of convergence.”

TorrentFreak: Australians Face ‘Fines’ For Downloading Pirate Movies

This post was syndicated from: TorrentFreak and was written by: Andy. Original post: at TorrentFreak

Much to the disappointment of owner Voltage Pictures, in early January 2014 a restricted ‘DVD Screener’ copy of the hit movie Dallas Buyers Club leaked online. The movie was quickly downloaded by tens of thousands, but barely a month later Voltage was plotting revenge.

In a lawsuit filed in the Southern District of Texas, Voltage sought to identify illegal downloaders of the movie by providing the IP addresses of Internet subscribers to the court. Their aim – to scare those individuals into making cash settlements to make supposed lawsuits disappear.

Now, in the most significant development of the ‘trolling’ model in recent times, Dallas Buyers Club LLC are trying to expand their project into Australia. Interestingly the studio has chosen to take on subscribers of the one ISP that was absolutely guaranteed to put up a fight.

iiNet is Australia’s second largest ISP and the country’s leading expert when it comes to fighting off aggressive rightsholders. In 2012 the ISP defeated Hollywood in one of the longest piracy battles ever seen and the company says it will defend its subscribers in this case too.

Chief Regulatory Officer Steve Dalby says that Dallas Buyers Club LLC (DBCLLC) recently applied to the Federal Court to have iiNet and other local ISPs reveal the identities of people they say have downloaded and/or shared their movie without permission.

According to court documents seen by TorrentFreak the other ISPs involved are Wideband Networks Pty Ltd, Internode Pty Ltd, Dodo Services Pty Ltd, Amnet Broadband Pty Ltd and Adam Internet Pty Ltd.

Although the stance of the other ISPs hasn’t yet been made public, DBCLLC aren’t going to get an easy ride. iiNet (which also owns Internode and Adam) says it will oppose the application for discovery.

“iiNet would never disclose customer details to a third party, such as a movie studio, unless ordered to do so by a court. We take seriously both our customers’ privacy and our legal obligations,” Dalby says.

While underlining that the company does not condone copyright infringement, news of Dallas Buyers Club / Voltage Pictures’ modus operandi has evidently reached iiNet, and the ISP is ready for them.

“It might seem reasonable for a movie studio to ask us for the identity of those they suspect are infringing their copyright. Yet, this would only make sense if the movie studio intended to use this information fairly, including to allow the alleged infringer their day in court, in order to argue their case,” Dalby says.

“In this case, we have serious concerns about Dallas Buyers Club’s intentions. We are concerned that our customers will be unfairly targeted to settle any claims out of court using a practice called ‘speculative invoicing’.”

The term ‘speculative invoicing’ was coined in the UK in response to the activities of companies including the now defunct ACS:Law, which involved extracting cash settlements from alleged infringers (via mailed ‘invoices’) and deterring them from having their say in court. Once the scheme was opened up to legal scrutiny it completely fell apart.

Some of the flaws found to exist in both UK and US ‘troll’ cases are cited by iiNet, including intimidation of subscribers via excessive claims for damages. The ISP also details the limitations of IP address-based evidence when it comes to identifying infringers due to shared household connections and open wifi scenarios.

“Because Australian courts have not tested these cases, any threat by rights holders, premised on the outcome of a successful copyright infringement action, would be speculative,” Dalby adds.

The Chief Regulatory Officer says that since iiNet has opposed the action for discovery the Federal Court will now be asked to decide whether iiNet should hand over subscriber identities to DBCLLC. A hearing on that matter is expected early next year and it will be an important event.

While a win for iiNet would mean a setback for rightsholders plotting similar action, victory for DBCLLC will almost certainly lead to others following in their footsteps. For an idea of what Australians could face in this latter scenario, in the United States the company demands payment of up to US$7,000 (AUS$8,000) per infringement.

Source: TorrentFreak, for the latest info on copyright, file-sharing and anonymous VPN services.

Darknet - The Darkside: Apple’s OS X Yosemite Spotlight Privacy Issues

This post was syndicated from: Darknet - The Darkside and was written by: Darknet. Original post: at Darknet - The Darkside

So Apple pushed out its latest and greatest OS X version 10.10, called Yosemite, but it’s facing a bit of an uproar at the moment about some Spotlight privacy issues. For those who are not familiar, Spotlight is some kind of super desktop search that searches everything on your computer (and now also the Internet) [...]

Read the full post at darknet.org.uk

The Hacker Factor Blog: By Proxy

This post was syndicated from: The Hacker Factor Blog and was written by: The Hacker Factor Blog. Original post: at The Hacker Factor Blog

As I tweak and tune the firewall and IDS system at FotoForensics, I keep coming across unexpected challenges and findings. One of the challenges is related to proxies. If a user uploads prohibited content from a proxy, then my current system bans the entire proxy. An ideal solution would only ban the user.

Proxies serve a lot of different purposes. Most people think about proxies in regards to anonymity, like the TOR network. TOR is a series of proxies that ensure that the endpoint cannot identify the starting point.

However, there are other uses for proxies. Corporations frequently have a set of proxies for handling network traffic. This allows them to scan all network traffic for potential malware. It’s a great solution for mitigating the risk from one user getting a virus and passing it to everyone in the network.

Some governments run proxies as a means to filter content. China and Syria come to mind. China has a custom solution that has been dubbed the “Great Firewall of China”. They use it to restrict site access and filter content. Syria, on the other hand, appears to use a COTS (commercial off-the-shelf) solution. In my web logs, most traffic from Syria comes through Blue Coat ProxySG systems.

And then there are the proxies that are used to bypass usage limits. For example, your hotel may charge for Internet access. If there’s a tech convention in the hotel, then it’s common to see one person pay for the access, and then run his own SOCKS proxy for everyone else to relay out over the network. This gives everyone access without needing everyone to pay for the access.

Proxy Services

Proxy networks that are designed for anonymity typically don’t leak anything. If I ban a TOR node, then that node stays banned since I cannot identify individual users. However, the proxies that are designed for access typically do reveal something about the user. In fact, many proxies explicitly identify whose request is being relayed. This added information is stuffed in HTTP header fields that most web sites ignore.

For example, I recently received an HTTP request from 66.249.81.4 that contained the HTTP header “X-Forwarded-For: 82.114.168.150”. If I were to ban the user, then I would ban “66.249.81.4”, since that system connected to my server. However, 66.249.81.4 is google-proxy-66-249-81-4.google.com and is part of a proxy network. This proxy network identified who was being relayed with the X-Forwarded-For header. In this case, “82.114.168.150” is someone in Yemen. If I see this reference, then I can start banning the user in Yemen rather than the Google proxy that is used by lots of people. (NOTE: I changed the Yemen IP address for privacy, and this user didn’t upload anything requiring a ban; this is just an example.)

Unfortunately, there is no real standard here. Different proxies use different methods to denote the user being relayed. I’ve seen headers like “X-Forwarded”, “X-Forwarded-For”, “HTTP_X_FORWARDED_FOR” (yes, they actually sent this in their header; this is NOT from the Apache variable), “Forwarded”, “Forwarded-For-IP”, “Via”, and more. Unless I know to look for it, I’m liable to ban a proxy rather than a user.
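A best-effort normalization of these variants might look like the following sketch. The header list mirrors the examples above; note that the standardized “Forwarded” header uses key=value pairs and would need real parsing rather than this simple split.

```python
# Relay headers seen in the wild (lowercased); surely incomplete.
FORWARD_HEADERS = [
    "x-forwarded-for", "x-forwarded", "http_x_forwarded_for",
    "forwarded-for-ip", "via",
]

def relayed_address(headers):
    """Best-effort guess at the relayed client address, or None."""
    lowered = {k.lower(): v for k, v in headers.items()}
    for name in FORWARD_HEADERS:
        if name in lowered:
            # X-Forwarded-For may hold a chain; the first hop is the client.
            return lowered[name].split(",")[0].strip()
    return None

print(relayed_address({"X-Forwarded-For": "82.114.168.150"}))  # -> 82.114.168.150
```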

In some cases, I see the direct connection address also listed as the relayed address; it claims to be relaying itself. I suspect that this is caused by some kind of anti-virus system that is filtering network traffic through a local proxy. And sometimes I see private addresses (“private” as in “private use” and “should not be routed over the Internet”; not “don’t tell anyone”). These are likely home users or small companies that run a proxy for all of the computers on their local networks.

Proxy Detection

If I cannot identify the user being proxied, then just identifying that the system is a proxy can be useful. Rather than banning known proxies for three months, I might ban the proxy for only a day or a week. The reduced time should cut down on the number of people blocked because of the proxy that they used.

There are unique headers that can identify that a proxy is present. Blue Coat ProxySG, for example, adds in a unique header: “X-BlueCoat-Via: abce6cd5a6733123”. This tracking ID is unique to the Blue Coat system; every user relaying through that specific proxy gets the same unique ID. It is intended to prevent looping between Blue Coat devices. If the ProxySG system sees its own unique ID, then it has identified a loop.

Blue Coat is not the only vendor with its own proxy identifier. Fortinet’s software adds in an “X-FCCKV2” header. And Verizon silently adds in an “X-UIDH” header that contains a large binary string for tracking users.

Language and Location

Besides identifying proxies, I can also identify the user’s preferred language.

The intent of specifying languages in the HTTP header is to help web sites present content in the user’s native language. If my site supports English, German, and French, then seeing a hint that says “French” should help me automatically render the page in French. However, the language hint can also be combined with IP address geolocation to identify potential proxies. If the IP address traces to Australia but the user appears to speak Italian, then it increases the likelihood that I’m seeing an Australian proxy that is relaying for a user in Italy.

The official way to identify the user’s language is the HTTP “Accept-Language” header. For example, “Accept-Language: en-US,en;q=0.5” says to use the United States dialect of English, or plain English if there is no dialect support at the web site. However, there are also unofficial approaches to specifying the desired language. For example, many web browsers encode the user’s preferred language into the HTTP user-agent string.
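Parsing Accept-Language means honoring the q-values (quality weights), where a missing q defaults to 1.0. A small sketch of such a parser:

```python
# Sketch: parse an Accept-Language header into (tag, quality) pairs,
# highest preference first. A missing q-value defaults to 1.0.
def parse_accept_language(value):
    prefs = []
    for item in value.split(","):
        parts = item.strip().split(";")
        tag = parts[0].strip()
        q = 1.0
        for p in parts[1:]:
            p = p.strip()
            if p.startswith("q="):
                try:
                    q = float(p[2:])
                except ValueError:
                    pass  # malformed weight; keep the default
        prefs.append((tag, q))
    prefs.sort(key=lambda t: t[1], reverse=True)
    return prefs
```

For the example above, `parse_accept_language("en-US,en;q=0.5")` yields en-US first, then plain en as the fallback.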

Similarly, Facebook can relay network requests, and these appear with the header “X-Facebook-Locale”. This is an unofficial way to identify when Facebook is being used as a proxy. However, it also tells me the user’s preferred language: “X-Facebook-Locale: fr_CA”. In this case, the user prefers the Canadian dialect of French (fr_CA). While the user may be located anywhere in the world, he is probably in Canada.

There’s only one standard way to specify the recipient’s language. However, there are lots of common non-standard ways. Just knowing what to look for can be a problem. But the bigger problem happens when you see conflicting language definitions.

Accept-Language: de-de,de;q=0.5

User-Agent: Mozilla/5.0 (Linux; Android 4.4.2; it-it; SAMSUNG SM-G900F/G900FXXU1ANH4 Build/KOT49H) AppleWebKit/537.36 (KHTML, like Gecko) Version/1.6 Chrome/28.0.1500.94 Mobile Safari/537.36

X-Facebook-Locale: es_LA

x-avantgo-clientlanguage: en_GB

x-ucbrowser-ua: pf(Symbian);er(U);la(en-US);up(U2/1.0.0);re(U2/1.0.0);dv(NOKIAE90);pr(UCBrowser/9.2.0.336);ov(S60V3);pi(800*352);ss(800*352);bt(GJ);pm(0);bv(0);nm(0);im(0);sr(2);nt(1)

X-OperaMini-Phone-UA: Mozilla/5.0 (Linux; U; Android 4.4.2; id-id; SM-G900T Build/id=KOT49H.G900SKSU1ANCE) AppleWebKit/534.30 (KHTML, like Gecko) Version/4.0 Mobile Safari/534.30

If I see all of these in one request, then I’ll probably choose the official header first (German from Germany). However, without the official header, would I choose Spanish from Latin America (“es_LA” is unofficial but widely used), Italian from Italy (it-it) as specified by the web browser user-agent string, or the language from one of those other fields? (Fortunately, in the real world these would likely all be the same. And you’re unlikely to see most of these fields together. Still, I have seen some conflicting fields.)
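That precedence decision can be sketched as code. This is one possible ordering, not the only defensible one; the field names mirror the examples above and the locale-extraction helpers are illustrative:

```python
# Sketch: choose a language from conflicting hints. Precedence:
# official Accept-Language, then a locale tag in the user-agent
# string, then app-specific fields like X-Facebook-Locale.
import re

def choose_language(headers):
    h = {k.lower(): v for k, v in headers.items()}
    # 1. Official header wins: take the first listed tag.
    if "accept-language" in h:
        return h["accept-language"].split(",")[0].split(";")[0].strip()
    # 2. Many mobile browsers embed a locale like "it-it" in the UA.
    m = re.search(r"\b([a-z]{2}-[a-z]{2})\b", h.get("user-agent", ""))
    if m:
        return m.group(1)
    # 3. Fall back to unofficial app fields, normalized to xx-YY form.
    for name in ("x-facebook-locale", "x-avantgo-clientlanguage"):
        if name in h:
            return h[name].replace("_", "-")
    return None
```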

Time to Program!

So far, I have identified nearly a dozen different HTTP headers that denote some kind of proxy. Some of them identify the user behind the proxy, but others leak clues or only indicate that a proxy was used. All of this can be useful for determining how to handle a ban after someone violates my site’s terms of service, even if I don’t know who is behind the proxy.

In the near future, I should be able to identify at least some of these proxies. If I can identify the people using proxies, then I can restrict access to the user rather than the entire proxy. And if I can at least identify the proxy, then I can still try to lessen the impact for other users.

Errata Security: FBI’s crypto doublethink

This post was syndicated from: Errata Security and was written by: Robert Graham. Original post: at Errata Security

Recently, FBI Director James Comey gave a speech at the Brookings Institute decrying crypto. It was transparently Orwellian, arguing for a police-state. In this post, I’ll demonstrate why, quoting bits of the speech.

“the FBI has a sworn duty to keep every American safe from crime and terrorism”
“The people of the FBI are sworn to protect both security and liberty”

This is not true. The FBI’s oath is to “defend the Constitution”. Nowhere in the oath does it say “protect security” or “keep people safe”.

This detail is important. Tyrants suppress civil liberties in the name of national security and public safety. The oath taken by FBI agents, military personnel, and even the president is designed to prevent such tyrannies.

Comey repeatedly claims that FBI agents both understand their duty and are committed to it. That Comey himself misunderstands his oath disproves both assertions. This reinforces our belief that FBI agents do not see their duty as protecting our rights, but instead see rights as an impediment in pursuit of some other duty.

Freedom is Danger

The book 1984 describes the concept of “doublethink”, with political slogans as examples: “War is Peace”, “Ignorance is Strength”, and “Freedom is Slavery”. Comey goes full doublethink:

Some have suggested there is a conflict between liberty and security. I disagree. At our best, we in law enforcement, national security, and public safety are looking for security that enhances liberty. When a city posts police officers at a dangerous playground, security has promoted liberty—the freedom to let a child play without fear.

He’s wrong. Liberty and security are at odds. That’s what the 4th Amendment says. We wouldn’t be having this debate if they weren’t at odds.

He follows up with more doublethink, claiming “we aren’t seeking a back-door”, but that the FBI is instead interested in “developing intercept solutions during the design phase”. Intercept solutions built into phones are the very definition of a backdoor, of course.

“terror terror terror terror terror”
“child child child child child child”

Comey mentions terrorism 5 times and child exploitation 6 times. This is transparently the tactic of the totalitarian, demagoguery based on emotion rather than reason.

Fear of terrorism after 9/11 led to the Patriot Act, granting law enforcement broad new powers in the name of fighting terrorism. Such powers have been used overwhelmingly for everything else. The most telling example is the detainment of David Miranda in the UK under a law that supposedly applied only to terrorists. Miranda was carrying an encrypted copy of the Snowden files, which had nothing to do with terrorism. It was a clear exploitation of anti-terrorism laws for the purpose of political suppression.

Any meaningful debate doesn’t start with the headline grabbing crimes, but the ordinary ones, like art theft and money laundering. Comey has to justify his draconian privacy invasion using those laws, not terrorism.

“rule of law, rule of law, rule of law, rule of law, rule of law”
Comey mentions rule-of-law five times in his speech. His intent is to demonstrate that even the FBI is subject to the law, namely review by an independent judiciary. But that isn’t true.

The independent judiciary has been significantly weakened in recent years. We have secret courts, NSLs, and judges authorizing extraordinary powers because they don’t understand technology. Companies like Apple and Google challenge half the court orders they receive, because judges just don’t understand. There is frequent “parallel construction”, where evidence from spy agencies is used against suspects, sidestepping judicial review.

What Comey really means is revealed by this statement: “I hope you know that I’m a huge believer in the rule of law. … There should be no law-free zone in this country”. This is a novel definition of “rule of law”, a “rule by law enforcement”, that has never been used before. It reveals what Comey really wants: a totalitarian police-state where nothing is beyond the police’s powers, where the only check on power is a weak and pliant judiciary.

“that a commitment to the rule of law and civil liberties is at the core of the FBI”
No, lip service to these things is at the core of the FBI.

I know this from personal experience when FBI agents showed up at my offices and threatened me, trying to get me to cancel a talk at a cybersecurity conference. They repeated over and over how they couldn’t force me to cancel my talk because I had a First Amendment right to speak — while simultaneously telling me that if I didn’t cancel my talk, they would taint my file so that I would fail background checks and thus never be able to work for the government ever again.
We saw that again when the FBI intercepted clearly labeled “attorney-client privileged” mail between Weev and his lawyer. Their excuse was that the threat of cyberterrorism trumped Weev’s rights.

Then there was that scandal that saw widespread cheating on a civil-rights test. FBI agents were required to certify, unambiguously, that nobody helped them on the test. They lied. It’s one more oath FBI agents seem not to care about.

If commitment to civil liberties were important to him, Comey would get his oath right. If commitment to rule-of-law were important, he’d get the definition right. Every argument Comey makes demonstrates how little he is interested in civil liberties.

“Snowden Snowden Snowden”

Comey mentions Snowden three times, such as saying “In the wake of the Snowden disclosures, the prevailing view is that the government is sweeping up all of our communications”.

This is not true. No news article based on the Snowden documents claims this. No news site claims this. None of the post-Snowden activists believe this. All the people who matter know the difference between metadata and full eavesdropping, and likewise, the difficulty the FBI has in getting at that data.

This is how we know the FBI is corrupt. They ignore our concerns that government has been collecting every phone record in the United States for 7 years without public debate, but instead pretend the issue is something stupid, like the false belief they’ve been recording all phone calls. They knock down strawman arguments instead of addressing our real concerns.

Regulate communication service providers

In the book 1984, everyone had a big screen television mounted on the wall that was two-way. Citizens couldn’t turn the TV off, because it had to be blaring government propaganda all the time. The camera was active at all times in case law enforcement needed to access it. When the book was written in 1948, televisions were new, and people thought two-way TVs were plausible. They weren’t at that time; it was a nonsense idea.

But then the Internet happened and now two-way TVs are a real thing. And it’s not just the TV that’s become two-way video, but also our phones. If you believe the FBI follows the “rule of law” and that the courts provide sufficient oversight, then there’s no reason to stop them going full Orwell, allowing the police to turn on your device’s camera/microphone any time they have a court order in order to eavesdrop on you. After all, as Comey says, there should be no law-free zone in this country, no place law enforcement can’t touch.

Comey pretends that all he seeks at the moment is a “regulatory or legislative fix to create a level playing field, so that all communication service providers are held to the same standard”, meaning a CALEA-style backdoor allowing eavesdropping. But here’s the thing: communication is no longer a service but an app. Communication is “end-to-end”, between apps, often from different vendors, bypassing any “service provider”. There is no way to eavesdrop on those apps without being able to secretly turn on a device’s microphone remotely and listen in.

That’s why we crypto-activists draw the line here, at this point. Law enforcement backdoors in crypto inevitably means an Orwellian future.


Conclusion

There is a lot more wrong with James Comey’s speech. What I’ve focused on here were the Orwellian elements. The right to individual crypto, with no government backdoors, is the most important new human right that technology has created. Without it, the future is an Orwellian dystopia. And as proof of that, I give you James Comey’s speech, whose arguments are the very caricatures that Orwell lampooned in his books.

Schneier on Security: Surveillance in Schools

This post was syndicated from: Schneier on Security and was written by: schneier. Original post: at Schneier on Security

This essay, “Grooming students for a lifetime of surveillance,” talks about the general trends in student surveillance.

Related: essay on the need for student privacy in online learning.

Darknet - The Darkside: IPFlood – Simple Firefox Add-on To Hide Your IP Address

This post was syndicated from: Darknet - The Darkside and was written by: Darknet. Original post: at Darknet - The Darkside

IPFlood (previously IPFuck) is a Firefox add-on created to simulate the use of a proxy. It doesn’t actually change your IP address (obviously) and it doesn’t connect to a proxy either, it just changes the headers (that it can) so it appears to any web servers or software sniffing – that you are in fact [...]

The post IPFlood…

Read the full post at darknet.org.uk

Darknet - The Darkside: JPMorgan Hacked & Leaked Over 83 Million Customer Records

This post was syndicated from: Darknet - The Darkside and was written by: Darknet. Original post: at Darknet - The Darkside

So yah, last week we all discovered: OMG, JPMorgan hacked! This set a lot of people on edge, as JPMorgan Chase & Co is the largest US bank by assets, so it’s pretty serious business. The breach happened back in July and was only disclosed last Thursday due to a filing to the US [...]

The post JPMorgan Hacked & Leaked Over 83…

Read the full post at darknet.org.uk

Krebs on Security: Bugzilla Zero-Day Exposes Zero-Day Bugs

This post was syndicated from: Krebs on Security and was written by: BrianKrebs. Original post: at Krebs on Security

A previously unknown security flaw in Bugzilla — a popular online bug-tracking tool used by Mozilla and many of the open source Linux distributions — allows anyone to view detailed reports about unfixed vulnerabilities in a broad swath of software. Bugzilla is expected today to issue a fix for this very serious weakness, which potentially exposes a veritable gold mine of vulnerabilities that would be highly prized by cyber criminals and nation-state actors.

The Bugzilla mascot.

Multiple software projects use Bugzilla to keep track of bugs and flaws that are reported by users. The Bugzilla platform allows anyone to create an account that can be used to report glitches or security issues in those projects. But as it turns out, that same reporting mechanism can be abused to reveal sensitive information about as-yet unfixed security holes in software packages that rely on Bugzilla.

A developer or security researcher who wants to report a flaw in Mozilla Firefox, for example, can sign up for an account at Mozilla’s Bugzilla platform. Bugzilla responds automatically by sending a validation email to the address specified in the signup request. But recently, researchers at security firm Check Point Software Technologies discovered that it was possible to create Bugzilla user accounts that bypass that validation process.

“Our exploit allows us to bypass that and register using any email we want, even if we don’t have access to it, because there is no validation that you actually control that domain,” said Shahar Tal, vulnerability research team leader for Check Point. “Because of the way permissions work on Bugzilla, we can get administrative privileges by simply registering using an address from one of the domains of the Bugzilla installation owner. For example, we registered as admin@mozilla.org, and suddenly we could see every private bug under Firefox and everything else under Mozilla.”

Bugzilla is expected today to release updates to remove the vulnerability and help further secure its core product.

“An independent researcher has reported a vulnerability in Bugzilla which allows the manipulation of some database fields at the user creation procedure on Bugzilla, including the ‘login_name’ field,” said Sid Stamm, principal security and privacy engineer at Mozilla, which developed the tool and has licensed it for use under the Mozilla public license.

“This flaw allows an attacker to bypass email verification when they create an account, which may allow that account holder to assume some privileges, depending on how a particular Bugzilla instance is managed,” Stamm said. “There have been no reports from users that sensitive data has been compromised and we have no other reason to believe the vulnerability has been exploited. We expect the fixes to be released on Monday.”

The flaw is the latest in a string of critical and long-lived vulnerabilities to surface in the past year — including Heartbleed and Shellshock — that would be ripe for exploitation by nation state adversaries searching for secret ways to access huge volumes of sensitive data.

“The fact is that this was there for 10 years and no one saw it until now,” said Tal. “If nation state adversaries [had] access to private bug data, they would have a ball with this. There is no way to find out if anyone did exploit this other than going through user list and seeing if you have a suspicious user there.”

Like Heartbleed, this flaw was present in open source software to which countless developers and security experts had direct access for years on end.

“The perception that many eyes have looked at open source code and it’s secure because so many people have looked at it, I think this is false,” Tal said. “Because no one really audits code unless they’re committed to it or they’re paid to do it. This is why we can see such foolish bugs in very popular code.”

The Hacker Factor Blog: Security By Apathy

This post was syndicated from: The Hacker Factor Blog and was written by: The Hacker Factor Blog. Original post: at The Hacker Factor Blog

There are a couple of common approaches to applying security. The most recommended method is a defense in depth approach. This applies layers of independent, well-known security methods, protecting the system even when one layer is breached. For example:

  1. Your home has a front door. That’s the first layer. The door permits people to enter and leave the house. Closing the door stops access.
  2. The door has a lock. The lock is actually independent of the door. The lock can be enabled or disabled regardless of whether the door is open or closed. But the lock provides an additional security layer to the door: if the door is closed and locked, then it is harder to get into the house.
  3. The front door probably has a deadbolt. Again, this is usually independent of the lock on the doorknob. A deadbolt even has its own latch (the bolt) to deter someone from kicking in the door.
  4. Inside the house, you have an alarm system. (You do have an alarm system, right?) The alarm is another layer, just in case someone gets around the door. The alarm may use door sensors, motion sensors, pressure pads, and more. Each of these adds another layer to the home’s security.
  5. You might have a dog who barks loudly or attacks intruders.
  6. Your valuables are locked down or stored in a safe. Even if the burglar gets past the door, dog, and alarm, this is yet another hurdle to contend with.
  7. And don’t forget the nosy neighbors, who call the cops every time a stranger drives down the street…

Each of these layers makes it more difficult for an attacker. With your computer, you have your NAT-enabled router that plugs into your cable or DSL modem — the router that acts as a firewall, preventing uninvited traffic from entering your home. Your computer probably has its own software firewall. Your anti-virus scans all network traffic and media for hostile content. Your online services use SSL and require passwords.

All of these are different layers. Granted, some layers may not be very strong, but even the weakest ones are probably better than nothing.

Tell Nobody

Another concept is called Security by Obscurity. This is where details about some of the security layers are kept private. The belief is that the layer is safe as long as nobody knows the secret. However, as soon as someone knows the secret, the security is gone.

Lots of security gurus claim that Security by Obscurity isn’t security. But in reality, it is another layer and it works as long as it isn’t your only security precaution.

As an example, consider the lowly password. Passwords are a kind of security by obscurity. As long as you don’t tell someone your password, it is probably safe enough. Of course, if someone can guess your password then all security that it provides is gone.

However, even a weak password can be strong enough if you have other layers protecting it. One of my passwords is “Cucumber”. I’m not kidding, that’s really my password. At this point, people are probably thinking “What an idiot! He just told his password to the entire world!” Except, my password is protected by layers:

  • I didn’t identify the system or username that uses that password. This is security-by-obscurity. Without knowing where to use it, the password remains secure. (This is analogous to finding a car key and not knowing where the car is located. You can’t steal the car if you can’t find it.)
  • Even if you know the system, you still need to find the login screen. (Another security-by-obscurity.)
  • The particular system that uses that password only allows logins from a specific subnet. So you need to identify the subnet and compromise it first. This falls under defense in depth and two-part authentication: something you know (the password) and somewhere you are (the correct network address).
  • Assuming you can get on the right network, the connection to the system requires strong encryption. You will need to crack two other passwords (or one password and a fingerprint scanner) before you can access the encrypted network keys.
  • I should also mention that the necessary subnet is protected by a firewall and IPS system, so I’m not too concerned about a network attack.
  • All of these systems are physically located in an office that has a solid metal door, two locks, an overly-complex alarm system, and a barky dog. Oh, and there’s also nosy neighbors in the adjacent offices. (Hi Beth!)

Honestly, I’m not too concerned with people knowing my “Cucumber” password since nobody can easily get past all of the other security layers.

Whatever

There are other common security practices. Like the principle of least privilege: you only have access to the things you need. Secure by default and fail securely regarding initialization and error handling. Separation of duties (aka insulation), explicit trust, multi-part authentication, break one get one, etc.

All of these concepts are great when they are used, and even better when used together. However, what we usually see is security nullified by apathy. There are really two types of security apathy: the stuff that you control and the stuff that is beyond your control.

For example, it is up to the user to choose a good password, to not use the same password twice, and to change default passwords. However, everyone reuses passwords. And if that online service really wants a password to continue, then I’ll just supply my standard “I don’t care” password. This becomes security apathy that I can control.

Similarly, I often find people who say “I don’t care if someone breaks into my computer. I don’t have anything valuable there.” That’s security apathy. It’s also myopic since the computer is usually connected to the Internet. (“Thanks! I’ll use your computer to send spam and to host my spatula porn collection!”)

Meh…

Not all security-related apathy can be blamed on the user. My cellphone has some bloatware apps that were installed by the manufacturer. Most of these apps are buggy and some have known vulnerabilities. When I install a new app, I can see what privileges it needs and I have the option to not install. But with pre-installed apps, I don’t know what any of them want to do with my data. I cannot even turn these things off. I rarely use my cellphone for maps, but the maps app is always running. And I’ve turned off the backup/sync options, but the backup app is always sucking down my battery. Even killing the backup app is only a temporary solution since it periodically starts up all by itself.

What’s worse is that many of these undesirable and high-risk features have no patches and there is no option to delete, disable, or remove them. Every few days I get a popup asking me to update some vendor-provided app, but then it complains that there is no update available. (Yes, T-Mobile, I’m talking about your User Account app.)

With my phone, the manufacturer has demonstrated Security by Apathy. They failed to provide secure options and failed to give me the ability to remove the stuff I don’t want. I cannot make my phone secure, even if I wanted to.

A least privilege approach would be to install nothing but the bare essentials. Then I could add in the various apps that I want. I think only Google’s Android One tries to do this. Every other phone is preloaded with bloatware that directly impacts usability, battery life, and device security.

It isn’t just mobile devices that have weak security that is out of our control. When the nude celebrity photo scandal first came out, it was pointed out that Apple permitted an unlimited number of login retries. (Reportedly now fixedkind of.) In this case, it doesn’t matter how strong the password is if I can guess as many times as I want. Every first-semester computer security student knows this. Apple’s disregard toward basic security practices and a lack of desire to address the issue in a timely fashion (i.e., years before the exploit) shows nothing but apathy toward the user.

Yada Yada

Then again, there are plenty of online services that still use the dreaded security question as a backdoor to your account.

“What is your mother’s maiden name?”
“Where did you go to high school?”
“What is your pet’s name?”

Everyone who does security knows that public information should never be used to protect private data. Yet Apple and Facebook and Yahoo and nearly every other major online service still asks these questions as an alternate authentication system. (As far as I know, Google is the only company to stop using these stupid questions that offer no real security.)

It isn’t that there are no other options for validating a user. Rather, these companies typically do not care enough to provide secure alternatives. There’s usually some marketeer with a checklist: “Do we have security questions? Check!” — There’s no checkbox for “is it a good idea?”

Moreover, we cannot assume that the users will be smart enough to not provide the real answers. If the system asks for your favorite color, then most people will enter their favorite color. (Security-minded people will enter an unrelated response, random characters, or a long phrase that is unlikely to be guessed. What’s my favorite color? “The orbit of Neptune on April 27th.”)

Talk to the Hand

There are a few things that enable most of today’s security exploits. First, there is bad software that has not been through a detailed security audit but is widely deployed. Then there is the corporate desire to check off functionality regardless of the impact to security. And finally, there are users who do not care enough to take security seriously.

Educating the user is much easier said than done. In the 35+ years that I have worked with computers, I have yet to see anyone come up with a viable way to educate users. Rather, software developers should make their code idiot-proof. If users should not enter a bad password, then force the password to be strong. If you know that security questions are stupid, then don’t use them. And if you see that someone can guess the password as many times as they want, then implement a limit.
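“Force the password to be strong” can be as simple as a server-side check that rejects short or single-character-class passwords. The thresholds below are arbitrary examples, not a recommendation:

```python
# Sketch: minimal server-side password strength gate. Requires a
# minimum length and at least three character classes.
def password_is_strong(pw):
    if len(pw) < 12:
        return False
    classes = [
        any(c.islower() for c in pw),
        any(c.isupper() for c in pw),
        any(c.isdigit() for c in pw),
        any(not c.isalnum() for c in pw),
    ]
    return sum(classes) >= 3
```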

Yes, some code is complex, some bugs get released, and some mistakes make it all the way out the door. But that doesn’t mean that we shouldn’t try. The biggest issue facing computer security and personal privacy today is not a bug or an oversight. It’s corporate, developer, and user apathy.

Darknet - The Darkside: iSniff-GPS – Passive Wifi Sniffing Tool With Location Data

This post was syndicated from: Darknet - The Darkside and was written by: Darknet. Original post: at Darknet - The Darkside

iSniff GPS is a passive wifi sniffing tool which sniffs for SSID probes, ARPs and MDNS (Bonjour) packets broadcast by nearby iPhones, iPads and other wireless devices. The aim is to collect data which can be used to identify each device and determine previous geographical locations, based solely on information each device discloses about…

Read the full post at darknet.org.uk

Krebs on Security: We Take Your Privacy and Security. Seriously.

This post was syndicated from: Krebs on Security and was written by: BrianKrebs. Original post: at Krebs on Security

“Please note that [COMPANY NAME] takes the security of your personal data very seriously.” If you’ve been on the Internet for any length of time, chances are very good that you’ve received at least one breach notification email or letter that includes some version of this obligatory line. But as far as lines go, this one is about as convincing as the classic break-up line, “It’s not you, it’s me.”

coxletter

I was reminded of the sheer emptiness of this corporate breach-speak approximately two weeks ago, after receiving a snail mail letter from my Internet service provider — Cox Communications. In its letter, the company explained:

“On or about Aug. 13, 2014, we learned that one of our customer service representatives had her account credentials compromised by an unknown individual. This incident allowed the unauthorized person to view personal information associated with a small number of Cox accounts. The information which could have been viewed included your name, address, email address, your Secret Question/Answer, PIN and in some cases, the last four digits only of your Social Security number or drivers’ license number.”

The letter ended with the textbook offer of free credit monitoring services (through Experian, no less), and the obligatory “Please note that Cox takes the security of your personal data very seriously.” But I wondered how seriously they really take it. So, I called the number on the back of the letter, and was directed to Stephen Boggs, director of public affairs at Cox.

Boggs said that the trouble started after a female customer account representative was “socially engineered” or tricked into giving away her account credentials to a caller posing as a Cox tech support staffer. Boggs informed me that I was one of just 52 customers whose information the attacker(s) looked up after hijacking the customer service rep’s account.

The nature of the attack described by Boggs suggested two things: 1) That the login page that Cox employees use to access customer information is available on the larger Internet (i.e., it is not an internal-only application); and that 2) the customer support representative was able to access that public portal with nothing more than a username and a password.

Boggs either did not want to answer or did not know the answer to my main question: were Cox customer support employees required to use multi-factor or two-factor authentication to access their accounts? Boggs promised to call back with a definitive response. To Cox’s credit, he did call back a few hours later, and confirmed my suspicions.

“We do use multifactor authentication in various cases,” Boggs said. “However, in this situation there was not two-factor authentication. We are taking steps based on our investigation to close this gap, as well as to conduct re-training of our customer service representatives to close that loop as well.”

This sad state of affairs is likely the same across multiple companies that claim to be protecting your personal and financial data. In my opinion, any company — particularly one in the ISP business — that isn’t using more than a username and a password to protect their customers’ personal information should be publicly shamed.

Unfortunately, most companies will not proactively take steps to safeguard this information until they are forced to do so — usually in response to a data breach.  Barring any pressure from Congress to find proactive ways to avoid breaches like this one, companies will continue to guarantee the security and privacy of their customers’ records, one breach at a time.

Matthew Garrett: My free software will respect users or it will be bullshit

This post was syndicated from: Matthew Garrett and was written by: Matthew Garrett. Original post: at Matthew Garrett

I had dinner with a friend this evening and ended up discussing the FSF’s four freedoms. The fundamental premise of the discussion was that the freedoms guaranteed by free software are largely academic unless you fall into one of two categories – someone who is sufficiently skilled in the arts of software development to examine and modify software to meet their own needs, or someone who is sufficiently privileged[1] to be able to encourage developers to modify the software to meet their needs.

The problem is that most people don’t fall into either of these categories, and so the benefits of free software are often largely theoretical to them. Concentrating on philosophical freedoms without considering whether these freedoms provide meaningful benefits to most users risks these freedoms being perceived as abstract ideals, divorced from the real world – nice to have, but fundamentally not important. How can we tie these freedoms to issues that affect users on a daily basis?

In the past the answer would probably have been along the lines of “Free software inherently respects users”, but reality has pretty clearly disproven that. Unity is free software that is fundamentally designed to tie the user into services that provide financial benefit to Canonical, with user privacy as a secondary concern. Despite Android largely being free software, many users are left with phones that no longer receive security updates[2]. Textsecure is free software but the author requests that builds not be uploaded to third party app stores because there’s no meaningful way for users to verify that the code has not been modified – and there’s a direct incentive for hostile actors to modify the software in order to circumvent the security of messages sent via it.

We’re left in an awkward situation. Free software is fundamental to providing user privacy. The ability for third parties to continue providing security updates is vital for ensuring user safety. But in the real world, we are failing to make this argument – the freedoms we provide are largely theoretical for most users. The nominal security and privacy benefits we provide frequently don’t make it to the real world. If users do wish to take advantage of the four freedoms, they frequently do so at a potential cost of security and privacy. Our focus on the four freedoms may be coming at a cost to the pragmatic freedoms that our users desire – the freedom to be free of surveillance (be that government or corporate), the freedom to receive security updates without having to purchase new hardware on a regular basis, the freedom to choose to run free software without having to give up basic safety features.

That’s why projects like the GNOME safety and privacy team are so important. This is an example of tying the four freedoms to real-world user benefits, demonstrating that free software can be written and managed in such a way that it actually makes life better for the average user. Designing code so that users are fundamentally in control of any privacy tradeoffs they make is critical to empowering users to make informed decisions. Committing to meaningful audits of all network transmissions to ensure they don’t leak personal data is vital in demonstrating that developers fundamentally respect the rights of those users. Working on designing security measures that make it difficult for a user to be tricked into handing over access to private data is going to be a necessary precaution against hostile actors, and getting it wrong is going to ruin lives.

The four freedoms are only meaningful if they result in real-world benefits to the entire population, not a privileged minority. If your approach to releasing free software is merely to ensure that it has an approved license and throw it over the wall, you’re doing it wrong. We need to design software from the ground up in such a way that those freedoms provide immediate and real benefits to our users. Anything else is a failure.

(title courtesy of My Feminism will be Intersectional or it will be Bullshit by Flavia Dzodan. While I’m less angry, I’m solidly convinced that free software that does nothing to respect or empower users is an absolute waste of time)

[1] Either in the sense of having enough money that you can simply pay, having enough background in the field that you can file meaningful bug reports or having enough followers on Twitter that simply complaining about something results in people fixing it for you

[2] The free software nature of Android often makes it possible for users to receive security updates from a third party, but this is not always the case. Free software makes this kind of support more likely, but it is in no way guaranteed.


Darknet - The Darkside: CloudFlare Introduces SSL Without Private Key

This post was syndicated from: Darknet - The Darkside and was written by: Darknet. Original post: at Darknet - The Darkside

Handing over your private key to a cloud provider so they can terminate your SSL connections and you can work at scale has always been a fairly contentious issue — a necessary evil, you might say. Because if your private key gets compromised, it’s a big deal, and without it (previously) there’s no way a cloud [...]

The post CloudFlare…

Read the full post at darknet.org.uk

Schneier on Security: Security for Vehicle-to-Vehicle Communications

This post was syndicated from: Schneier on Security and was written by: schneier. Original post: at Schneier on Security

The National Highway Traffic Safety Administration (NHTSA) has released a report titled “Vehicle-to-Vehicle Communications: Readiness of V2V Technology for Application.” It’s very long, and mostly not interesting to me, but there are security concerns sprinkled throughout: both authentication to ensure that all the communications are accurate and can’t be spoofed, and privacy to ensure that the communications can’t be used to track cars. It’s nice to see this sort of thing thought about in the beginning, when the system is first being designed, and not tacked on at the end.

Darknet - The Darkside: tinfoleak – Get Detailed Info About Any Twitter User

This post was syndicated from: Darknet - The Darkside and was written by: Darknet. Original post: at Darknet - The Darkside

tinfoleak is basically an OSINT tool for Twitter; there’s not a lot of stuff like this around – the only one that comes to mind, in fact, is creepy – Geolocation Information Aggregator. tinfoleak is a simple Python script that allows you to obtain: basic information about a Twitter user (name, picture, location, followers, etc.) devices…

Read the full post at darknet.org.uk

The Hacker Factor Blog: Eight Is Enough

This post was syndicated from: The Hacker Factor Blog and was written by: The Hacker Factor Blog. Original post: at The Hacker Factor Blog

I must be one of those people who lives in a cave. (Well, at least it’s a man-cave.) I didn’t even realize that Apple’s iOS 8 was released until I heard all of the hoopla in the news.

When Apple did their recent big presentation, I heard about the new watch and the new iPhone, but not about the new operating system. The smart-watch didn’t impress me. At CACC last month, I saw a few people wearing devices that told the time, maintained their calendar, synced with their portable devices, and even checked their heart rates and sleep cycles. In this regard, Apple seems a little late to the game, over-priced, and limited in functionality.

The new iPhone also didn’t impress me. The only significant difference that I have heard about is the bigger screen. I find it funny that pants pockets are getting smaller and phones are getting bigger… So, where do you put this new iPhone? You can’t be expected to carry it everywhere by hand when you’re also holding a venti pumpkin spice soy latte with whip, no room. Someone really needs to build an iPhone protector that doubles as a cup-holder. (Oh wait, it exists.) Or maybe an iBelt… that hangs the iPhone like a codpiece since it is more of a symbol of geek virility than a useful mobile device.

Then again, I’m not an Apple fanatic. I use a Mac, but I don’t go out of the way to worship at the foot of the latest greatest i-device.

Sight Seeing

Apple formally announced all of these new devices on September 9th. I decided to look over the FotoForensics logs for any iOS 8 devices. Amazingly, I’ve had a few sightings… and they started months before the formal announcement.

The first place I looked was in my web server’s log files. Every browser sends its user-agent string with their web request. This usually identifies the operating system and browser. The intent is to allow web services to collect metrics about usage. If I see a bunch of people using some new web browser, then I can test my site with that browser and ensure a good user experience.

iOS devices also encode the operating system version in their user-agent strings. So I just looked for anything claiming to be an iOS 8 device. Here are the date/time and user-agent strings that match iOS 8. I’m only showing the first instance per day:

[18/Mar/2014:18:40:39 -0500] "Mozilla/5.0 (iPad; CPU OS 8_0 like Mac OS X) AppleWebKit/538.22 (KHTML, like Gecko) Mobile/12A214"

[29/Apr/2014:13:27:58 -0500] "Mozilla/5.0 (iPhone; CPU iPhone OS 8_0 like Mac OS X) AppleWebKit/538.30.1 (KHTML, like Gecko) Mobile/12W252a"

[02/Jun/2014:16:56:45 -0500] "Mozilla/5.0 (iPhone; CPU iPhone OS 8_0 like Mac OS X) AppleWebKit/538.34.9 (KHTML, like Gecko) Version/7.0 Mobile/12A4265u Safari/9537.53"

[03/Jun/2014:16:44:38 -0500] "Mozilla/5.0 (iPhone; CPU iPhone OS 8_0 like Mac OS X) AppleWebKit/538.34.9 (KHTML, like Gecko) Version/7.0 Mobile/12A4265u Safari/9537.53"

After June 3rd, it basically became a daily appearance. The list includes iPhones and iPads. And, yes, the first few sightings came from Cupertino, California, where Apple is headquartered.
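Pulling the iOS version out of user-agent strings like the ones above takes only a short regex. This is a rough sketch, not the actual FotoForensics code; the regex and the sample string are mine, and real log processing would first need to extract the user-agent field from the combined log format:

```python
import re

# Apple user-agents embed the iOS version as "CPU OS 8_0" (iPad)
# or "CPU iPhone OS 8_0" (iPhone).
IOS_RE = re.compile(r"CPU (?:iPhone )?OS (\d+)_(\d+)")

def ios_version(user_agent):
    """Return (major, minor) for an iOS user-agent, or None otherwise."""
    m = IOS_RE.search(user_agent)
    return (int(m.group(1)), int(m.group(2))) if m else None

ua = ('Mozilla/5.0 (iPhone; CPU iPhone OS 8_0 like Mac OS X) '
      'AppleWebKit/538.30.1 (KHTML, like Gecko) Mobile/12W252a')
print(ios_version(ua))  # (8, 0)
```

Filtering a log for pre-release sightings is then just a matter of keeping lines where the major version is higher than the latest shipping release.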

Even though iOS 8 is new, it looks like a few people have been using it for months. Product testers, demos, beta testers, etc.

Pictures?

When Apple released iOS 7, they added a new metadata field to their pictures. This field records the active-use time since the last reboot. I suspect that it is a useful metric for Apple. It also makes me wonder if iOS 8 added anything new.

As a research service, every picture uploaded to FotoForensics gets indexed for rapid searching. I searched the archive for any pictures that claim to be from an iOS 8 device. So far, there have only been five sightings. (Each photo shows personally identifiable information, selfies or pictures of text, so I won’t be linking to them.)

Amazingly, none of these initial iOS 8 photos are camera-original files. Adobe, Microsoft Windows, and other applications were used to save the picture. The earliest picture was uploaded on 2014-07-30 at 21:32:39 GMT by someone in California, and the picture’s metadata says it was photographed on 2014-07-19.

Each of these iOS 8 photos came from an iPhone 5 or 5s device. I have yet to see any photos from an iPhone 6 device. (There was one sighting of an “iPhone 6Z” on 2013-01-30. But since it was uploaded by someone in France, I suspect that the metadata was altered.)

With the iPhone 5 and iOS 7, Apple introduced a “purple flare” problem. I don’t have many iOS 8 samples to compare against, and none are camera-originals. However, I’m not seeing the extreme artificial color correction that caused the purple flare. There’s still a distinct color correction, but it’s not as extreme. Perhaps the purple problem is fixed.

New Privacy

As far as I can tell, there is one notable new thing about iOS 8. Apple has publicly announced a change to their privacy policy. Specifically, they claim to have strong cryptography in the phones and no back doors. As a result, they will not be able to turn over any iPhone information to law enforcement, even if they have a valid subpoena. By implementing a technically strong solution and not retaining any keys, they forced their stance: it isn’t that they don’t want to help unlock a phone, it is that they technically cannot crack it in a realistic time frame.

While this stops Apple from assisting with iPhone and iPad devices that use iOS 8, it does nothing to stop Apple from turning over information uploaded to Apple’s iCloud service. (You do have the “backup to iCloud” option enabled, right?) This also does nothing to stop brute-force account guessing attacks, like the kind reportedly used to compromise celebrity nude photos. The newly deployed two-factor authentication seems like a much better solution even if it is too little too late.

Then again, I can also foresee new services that will handle your encryption keys for you, in case you lose them. After a few hundred complaints like “I lost my password and cannot access my precious kitty photos! Please help me!”, I expect that an entire market of back door options will become available for Apple users.

Behind the Eight Ball

I didn’t really pay attention to Apple’s latest releases until after they were out. However, it wouldn’t take much to make a database of known user agents and trigger an automated alert when the next Apple product first appears. It’s one thing to read about iOS 8 on Mac Rumors a few months before the release; it’s another thing to see it in my logs six months earlier.
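The alert idea amounts to keeping a set of operating-system tokens already seen and flagging any request whose token is new. A minimal sketch (the token regex and seed data are illustrative, not a production design):

```python
import re

# Capture the platform/OS portion of an Apple UA, e.g.
# "iPhone; CPU iPhone OS 8_0" out of "(iPhone; CPU iPhone OS 8_0 like Mac OS X)".
OS_TOKEN_RE = re.compile(r"\(([^;)]+; [^)]+?) like Mac OS X\)")

# Tokens already known; in practice this would be a persistent database.
seen = {"iPhone; CPU iPhone OS 7_1", "iPad; CPU OS 7_1"}

def check(user_agent):
    """Return a never-before-seen OS token (worth an alert), else None."""
    m = OS_TOKEN_RE.search(user_agent)
    if m and m.group(1) not in seen:
        seen.add(m.group(1))
        return m.group(1)
    return None

print(check("Mozilla/5.0 (iPhone; CPU iPhone OS 8_0 like Mac OS X) Mobile/12A214"))
# iPhone; CPU iPhone OS 8_0 -- the first sighting fires; repeats return None
```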

While I don’t think much of Apple’s latest offerings, that doesn’t mean it won’t drive the market. Sometimes it’s not the product itself that drives the innovation; sometimes it’s the spaces that need filling.

TorrentFreak: Mega Demands Apology Over “Defamatory” Cyberlocker Report

This post was syndicated from: TorrentFreak and was written by: Andy. Original post: at TorrentFreak

Yesterday the Digital Citizens Alliance released a new report that looks into the business models of “shadowy” file-storage sites.

Titled “Behind The Cyberlocker Door: A Report on How Shadowy Cyberlockers Use Credit Card Companies to Make Millions,” the report attempts to detail the activities of some of the world’s most-visited hosting sites.

While it’s certainly an interesting read, the NetNames study provides a few surprises, not least the decision to include New Zealand-based cloud storage site Mega.co.nz. There can be no doubt that there are domains of dubious standing detailed in the report, but the inclusion of Mega stands out as especially odd.

Mega was without doubt the most-scrutinized file-hosting startup in history and as a result has had to comply fully with every detail of the law. And, unlike some of the other sites listed in the report, Mega isn’t hiding away behind shell companies and other obfuscation methods. It also complies fully with all takedown requests, to the point that it even took down its founder’s music, albeit following an erroneous request.

With these thoughts in mind, TorrentFreak alerted Mega to the report and asked how its inclusion amid the terminology used has been received at the company.

Grossly untrue and highly defamatory

mega“We consider the report grossly untrue and highly defamatory of Mega,” says Mega CEO Graham Gaylard.

“Mega is a privacy company that provides end-to-end encrypted cloud storage controlled by the customer. Mega totally refutes that it is a cyberlocker business as that term is defined and discussed in the report prepared by NetNames for the Digital Citizens Alliance.”

Gaylard also strongly rejects the implication in the report that as a “cyberlocker”, Mega is engaged in activities often associated with such sites.

“Mega is not a haven for piracy, does not distribute malware, and definitely does not engage in illegal activities,” Gaylard says. “Mega is running a legitimate business alongside other cloud storage providers in a highly competitive market.”

The Mega CEO told us that one of the perplexing things about the report is that none of the criteria set out by the report for “shadowy” sites is satisfied by Mega, yet the decision was still taken to include it.

Infringing content and best practices

One of the key issues is, of course, the existence of infringing content. All user-uploaded sites suffer from that problem, from YouTube to Facebook to Mega and thousands of sites in between. But, as Gaylard points out, it’s the way those sites handle the issue that counts.

“We are vigorous in complying with best practice legal take-down policies and do so very quickly. The reality though is that we receive a very low number of take-down requests because our aim is to have people use our services for privacy and security, not for sharing infringing content,” he explains.

“Mega acts very quickly to process any take-down requests in accordance with its Terms of Service and consistent with the requirements of the USA Digital Millennium Copyright Act (DMCA) process, the European Union Directive 2000/31/EC and New Zealand’s Copyright Act process. Mega operates with a very low rate of take-down requests; less than 0.1% of all files Mega stores.”

Affiliate schemes that encourage piracy

One of the other “rogue site” characteristics as outlined in the report is the existence of affiliate schemes designed to incentivize the uploading and sharing of infringing content. In respect of Mega, Gaylard rejects that assertion entirely.

“Mega’s affiliate program does not reward uploaders. There is no revenue sharing or credit for downloads or Pro purchases made by downloaders. The affiliate code cannot be embedded in a download link. It is designed to reward genuine referrers and the developers of apps who make our cloud storage platform more attractive,” he notes.

The PayPal factor

As detailed in many earlier reports (1,2,3), over the past few years PayPal has worked hard to seriously cut down on the business it conducts with companies in the file-sharing space.

Companies, Mega included, now have to obtain pre-approval from the payment processor in order to use its services. The suggestion in the report is that large “shadowy” sites aren’t able to use PayPal due to its strict acceptance criteria. Mega, however, has a good relationship with PayPal.

“Mega has been accepted by PayPal because we were able to show that we are a legitimate cloud storage site. Mega has a productive and respected relationship with PayPal, demonstrating the validity of Mega’s business,” Gaylard says.

Public apology and retraction – or else

Gaylard says that these are just some of the points that Mega finds unacceptable in the report. The CEO adds that at no point was the company contacted by NetNames or Digital Citizens Alliance for its input.

“It is unacceptable and disappointing that supposedly reputable organizations such as Digital Citizens and NetNames should see fit to attack Mega when it provides the user end to end encryption, security and privacy. They should be promoting efforts to make the Internet a safer and more trusted place. Protecting people’s privacy. That is Mega’s mission,” Gaylard says.

“We are requesting that Digital Citizens Alliance withdraw Mega from that report entirely and issue a public apology. If they do not then we will take further action,” he concludes.

TorrentFreak asked NetNames to comment on Mega’s displeasure and asked the company if it stands by its assertion that Mega is a “shadowy” cyberlocker. We received a response (although not directly to our questions) from David Price, NetNames’ head of piracy analysis.

“The NetNames report into cyberlocker operation is based on information taken from the websites of the thirty cyberlockers used for the research and our own investigation of this area, based on more than a decade of experience producing respected analysis exploring digital piracy and online distribution,” Price said.

That doesn’t sound like a retraction or an apology, so this developing dispute may have a way to go.

Source: TorrentFreak, for the latest info on copyright, file-sharing and anonymous VPN services.

LWN.net: Simply Secure announces itself

This post was syndicated from: LWN.net and was written by: jake. Original post: at LWN.net

A new organization to “make security easy and fun” has announced itself in a blog post entitled “Why Hello, World!”. Simply Secure is targeting the usability of security solutions: “If privacy and security aren’t easy and intuitive, they don’t work. Usability is key.”
The organization was started by Google and Dropbox; it also has the Open Technology Fund as one of its partners.
“To build trust and ensure quality outcomes, one core component of our work will be public audits of interfaces and code. This will help validate the security and usability claims of the efforts we support.

More generally, we aim to take a page from the open-source community and make as much of our work transparent and widely-accessible as possible. This means that as we get into the nitty-gritty of learning how to build collaborations around usably secure software, we will share our developing methodologies and expertise publicly. Over time, this will build a body of community resources that will allow all projects in this space to become more usable and more secure.”

TorrentFreak: Copyright Holders Want Netflix to Ban VPN Users

This post was syndicated from: TorrentFreak and was written by: Ernesto. Original post: at TorrentFreak

netflixWith the launch of legal streaming services such as Netflix, movie and TV fans have less reason to turn to pirate sites.

At the same time, however, these legal options invite people from other countries where the legal services are more limited. This is also the case in Australia where up to 200,000 people are estimated to use the U.S. version of Netflix.

Although Netflix has geographical restrictions in place, these are easy to bypass with a relatively cheap VPN subscription. To keep these foreigners out, entertainment industry companies are now lobbying for a global ban on VPN users.

Simon Bush, CEO of AHEDA, an industry group that represents Twentieth Century Fox, Warner Bros., Universal, Sony Pictures and other major players said that some members are actively lobbying for such a ban.

Bush didn’t name any of the companies involved, but he confirmed to Cnet that “discussions” to block Australian access to the US version of Netflix “are happening now”.

If implemented, this would mean that all VPN users worldwide will no longer be able to access Netflix. That includes the millions of Americans who are paying for a legitimate account. They can still access Netflix, but would not be allowed to do so securely via a VPN.

According to Bush the discussions to keep VPN users out are not tied to Netflix’s arrival in Australia. The distributors and other rightsholders argue that they are already being deprived of licensing fees, because some Aussies ignore local services such as Quickflix.

“I know the discussions are being had…by the distributors in the United States with Netflix about Australians using VPNs to access content that they’re not licensed to access in Australia,” Bush said.

“They’re requesting for it to be blocked now, not just when it comes to Australia,” he adds.

While blocking VPNs would solve the problem for distributors, it creates a new one for VPN users in the United States.

The same happened with Hulu a few months ago, when Hulu started to block visitors who access the site through a VPN service. This blockade also applies to hundreds of thousands of U.S. citizens.

Hulu’s blocklist was implemented a few months ago and currently covers the IP-ranges of all major VPN services. People who try to access the site through one of these IPs are not allowed to view any content on the site, and receive the following notice instead:

“Based on your IP-address, we noticed that you are trying to access Hulu through an anonymous proxy tool. Hulu is not currently available outside the U.S. If you’re in the U.S. you’ll need to disable your anonymizer to access videos on Hulu.”
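A blocklist like Hulu’s typically boils down to checking the client address against published CIDR ranges for VPN and proxy exit nodes. A minimal sketch of that lookup (the ranges below are documentation examples, not real VPN ranges; commercial services buy curated proxy-range feeds):

```python
import ipaddress

# Hypothetical blocklist of VPN exit ranges.
VPN_RANGES = [ipaddress.ip_network(c) for c in ("198.51.100.0/24", "203.0.113.0/24")]

def is_vpn(addr):
    """True if the client address falls inside any blocked CIDR range."""
    ip = ipaddress.ip_address(addr)
    return any(ip in net for net in VPN_RANGES)

print(is_vpn("198.51.100.77"))  # True  -> show the "anonymous proxy" notice
print(is_vpn("192.0.2.10"))     # False -> serve content normally
```

This is also why the collateral damage falls on legitimate VPN users: the check sees only the exit address, not where the person actually is.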

It seems that VPNs are increasingly attracting the attention of copyright holders. Just a week ago BBC Worldwide argued that ISPs should monitor VPN users for excessive bandwidth use, assuming they would then be pirates.

Considering the above we can expect the calls for VPN bans to increase in the near future.

Source: TorrentFreak, for the latest info on copyright, file-sharing and anonymous VPN services.

Krebs on Security: LinkedIn Feature Exposes Email Addresses

This post was syndicated from: Krebs on Security and was written by: BrianKrebs. Original post: at Krebs on Security

One of the risks of using social media networks is having information you intend to share with only a handful of friends be made available to everyone. Sometimes that over-sharing happens because friends betray your trust, but more worrisome are the cases in which a social media platform itself exposes your data in the name of marketing.

leakedinlogoLinkedIn has built much of its considerable worth on the age-old maxim that “it’s all about who you know”: As a LinkedIn user, you can directly connect with those you attest to knowing professionally or personally, but also you can ask to be introduced to someone you’d like to meet by sending a request through someone who bridges your separate social networks. Celebrities, executives or any other LinkedIn users who wish to avoid unsolicited contact requests may do so by selecting an option that forces the requesting party to supply the personal email address of the intended recipient.

LinkedIn’s entire social fabric begins to unravel if any user can directly connect to any other user, regardless of whether or how their social or professional circles overlap. Unfortunately for LinkedIn (and its users who wish to have their email addresses kept private), this is the exact risk introduced by the company’s built-in efforts to expand the social network’s user base.

According to researchers at the Seattle, Wash.-based firm Rhino Security Labs, at the crux of the issue is LinkedIn’s penchant for making sure you’re as connected as you possibly can be. When you sign up for a new account, for example, the service asks if you’d like to check your contacts lists at other online services (such as Gmail, Yahoo, Hotmail, etc.). The service does this so that you can connect with any email contacts that are already on LinkedIn, and so that LinkedIn can send invitations to your contacts who aren’t already users.

LinkedIn assumes that if an email address is in your contacts list, that you must already know this person. But what if your entire reason for signing up with LinkedIn is to discover the private email addresses of famous people? All you’d need to do is populate your email account’s contacts list with hundreds of permutations of famous people’s names — including combinations of last names, first names and initials — in front of @gmail.com, @yahoo.com, @hotmail.com, etc. With any luck and some imagination, you may well be on your way to an A-list LinkedIn friends list (or a fantastic set of addresses for spear-phishing, stalking, etc.).
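Generating those permutations takes only a few lines. This is an illustrative sketch (the local-part patterns and domains are my own choices, not the researchers’ actual macro):

```python
from itertools import product

def permute_addresses(first, last,
                      domains=("gmail.com", "yahoo.com", "hotmail.com")):
    """Generate plausible email-address guesses for a name (illustration only)."""
    f, l = first.lower(), last.lower()
    # Common local-part patterns: firstlast, lastfirst, first.last,
    # first-initial+last, first+last-initial, first_last.
    local_parts = {f + l, l + f, f + "." + l, f[0] + l, f + l[0], f + "_" + l}
    return sorted(u + "@" + d for u, d in product(local_parts, domains))

guesses = permute_addresses("Mark", "Cuban")
print(len(guesses))  # 18: six local parts across three domains
```

Import a file of such guesses as "contacts" and the stage is set for the step described next.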

LinkedIn lets you know which of your contacts aren’t members.

When you import your list of contacts from a third-party service or from a stand-alone file, LinkedIn will show you any profiles that match addresses in your contacts list. More significantly, LinkedIn helpfully tells you which email addresses in your contacts lists are not LinkedIn users.

It’s that last step that’s key to finding the email address of the targeted user to whom LinkedIn has just sent a connection request on your behalf. The service doesn’t explicitly tell you that person’s email address, but by comparing your email account’s contact list to the list of addresses that LinkedIn says don’t belong to any users, you can quickly figure out which address(es) on the contacts list correspond to the user(s) you’re trying to find.
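The deduction at that last step is plain set arithmetic: everything you uploaded, minus what LinkedIn reports back as non-members, leaves the guesses that matched real accounts. A sketch with made-up addresses:

```python
# Addresses uploaded as fake "contacts" (hypothetical examples).
uploaded = {"mark.cuban@gmail.com", "mcuban@yahoo.com", "markc@hotmail.com"}

# Addresses LinkedIn reported as NOT belonging to any user.
not_members = {"mcuban@yahoo.com", "markc@hotmail.com"}

# What remains must correspond to an existing LinkedIn account.
likely_member_addresses = uploaded - not_members
print(likely_member_addresses)  # {'mark.cuban@gmail.com'}
```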

Rhino Security founders Benjamin Caudill and Bryan Seely have a recent history of revealing how trust relationships between and among online services can be abused to expose or divert potentially sensitive information. Last month, the two researchers detailed how they were able to de-anonymize posts to Secret, an app-driven online service that allows people to share messages anonymously within their circle of friends, friends of friends, and publicly. In February, Seely more famously demonstrated how to use Google Maps to intercept FBI and Secret Service phone calls.

This time around, the researchers picked on Dallas Mavericks owner Mark Cuban to prove their point with LinkedIn. Using their low-tech hack, the duo was able to locate the Webmail address Cuban had used to sign up for LinkedIn. Seely said they found success in locating the email addresses of other celebrities using the same method about nine times out of ten.

“We created several hundred possible addresses for Cuban in a few seconds, using a Microsoft Excel macro,” Seely said. “It’s just a brute-force guessing game, but 90 percent of people are going to use an email address that includes components of their real name.”

The Rhino guys really wanted Cuban’s help in spreading the word about what they’d found, but instead of messaging Cuban directly, Seely pursued a more subtle approach: He knew Cuban’s latest start-up was Cyber Dust, a chat messenger app designed to keep your messages private. So, Seely fired off a tweet complaining that “Facebook Messenger crosses all privacy lines,” and that as a result he was switching to Cyber Dust.

When Mark Cuban retweeted Seely’s endorsement of Cyber Dust, Seely reached out to Cyber Dust CEO Ryan Ozonian, letting him know that he’d discovered Cuban’s email address on LinkedIn. In short order, Cuban was asking Rhino to test the security of Cyber Dust.

“Fortunately no major faults were found and those he found are already fixed in the coming update,” Cuban said in an email exchange with KrebsOnSecurity. “I like working with them. They look to help rather than exploit.. We have learned from them and I think their experience will be valuable to other app publishers and networks as well.”

Whether LinkedIn will address the issues highlighted by Rhino Security remains to be seen. In an initial interview earlier this month, the social networking giant sounded unlikely to change anything in response.

Corey Scott, director of information security at LinkedIn, said very few of the company’s members opt in to the requirement that all new potential contacts supply the invitee’s email address before sending an invitation to connect. He added that email address-to-user mapping is a fairly common design pattern, not particularly unique to LinkedIn, and that nothing the company does will prevent people from blasting emails to lists of addresses that might belong to a targeted user, hoping that one of them will hit home.

“Email address permutators, of which there are many of them on the ‘Net, have existed much longer than LinkedIn, and you can blast an email to all of them, knowing that most likely one of those will hit your target,” Scott said. “This is kind of one of those challenges that all social media companies face in trying to prevent the abuse of [site] functionality. We have rate limiting, scoring and abuse detection mechanisms to prevent frequent abusers of this service, and to make sure that people can’t validate spam lists.”

In an email sent to this reporter last week, however, LinkedIn said it was planning at least two changes to the way its service handles user email addresses.

“We are in the process of implementing two short-term changes and one longer term change to give our members more control over this feature,” Linkedin spokeswoman Nicole Leverich wrote in an emailed statement. “In the next few weeks, we are introducing new logic models designed to prevent hackers from abusing this feature. In addition, we are making it possible for members to ask us to opt out of being discoverable through this feature. In the longer term, we are looking into creating an opt-out box that members can choose to select to not be discoverable using this feature.”

Darknet - The Darkside: Google DID NOT Leak 5 Million E-mail Account Passwords

This post was syndicated from: Darknet - The Darkside and was written by: Darknet. Original post: at Darknet - The Darkside

So a big panic hit the Internet a couple of days ago when it was alleged that Google had leaked 5 Million e-mail account passwords – and these had been posted on a Russian Bitcoin forum. I was a little sceptical, as Google tends to be pretty secure on that front and they had made [...]

The post Google DID NOT Leak 5 Million E-mail Account…

Read the full post at darknet.org.uk