Posts tagged ‘ip address’

AWS DevOps Blog: Quickly Explore the Chef Environment in AWS OpsWorks

This post was syndicated from: AWS DevOps Blog and was written by: Daniel Huesch. Original post: at AWS DevOps Blog

AWS OpsWorks recently launched support for Chef 12 Linux. This release changes the way that information about the stacks, layers, and instances provided by OpsWorks is made available during a Chef run. In this post, I show how to interactively explore this information using the OpsWorks agent command line interface (CLI) and Pry, a shell for Ruby. Our documentation shows you what’s available; this post shows you how to explore that data interactively.

OpsWorks manages EC2 or on-premises instances by triggering Chef runs. Before running your Chef recipes, OpsWorks prepares an environment. This environment includes a number of data bags that provide information about your stack, its instances, and other resources. You can use data bags to write cookbooks that adapt to changes in your infrastructure.

When an instance has finished its setup or when it leaves the online state, OpsWorks triggers a Configure event. You can register your own custom recipes to run during Configure events and use them as a lightweight service discovery mechanism. For example, a custom recipe could grant database access to an app server after it’s started, revoke that access after it’s stopped, or discover the IP address of the database server within your stack.
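That kind of discovery step boils down to filtering the instances data bag. Here’s a minimal plain-Ruby sketch of the filtering logic, runnable outside Chef; the "db-server" layer shortname, the "online" status value, and the helper itself are illustrative assumptions, with record fields mirroring the aws_opsworks_instance data bag:

```ruby
# Given instance records shaped like aws_opsworks_instance data bag items,
# return the private IPs of online members of the hypothetical "db-server" layer.
def db_server_ips(instances)
  instances
    .select { |i| i["status"] == "online" }
    .select { |i| i["layer_ids"].include?("db-server") }
    .map    { |i| i["private_ip"] }
end
```

In a real Configure recipe you would obtain the instance records through Chef search and then feed the resulting IPs into, say, a database grant or an app server’s configuration template.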

Typically, you access data about stacks, layers, and instances through Chef search. For earlier supported versions of Chef on Linux, this data was made available as attributes. In Chef 12 Linux, the data is available in data bags.

To access this data, I’m going to use only tools that are already present on OpsWorks instances: the OpsWorks agent CLI and Pry. Here’s the elevator pitch for Pry, taken from the Pry website:

Pry is a powerful alternative to the standard IRB shell for Ruby. It features syntax highlighting, a flexible plugin architecture, runtime invocation and source and documentation browsing.

Because Pry is already present on OpsWorks instances, there’s no need to install it.

I execute all terminal commands shown in the rest of this post as the root user.

How Do You Use Pry with OpsWorks?

First, let’s take a look at the OpsWorks agent CLI. The agent CLI lets you explore and repeat Chef runs on an instance.

To see a list of completed runs, use opsworks-agent-cli list:

[root@nodejs-server1 ~]# opsworks-agent-cli list
2015-12-16T13:37:2        setup
2015-12-16T13:40:56       configure

For an instance that has just finished booting, you should see a successful Setup event, followed by a successful Configure event.

Let’s repeat the Chef run for the Configure event. To repeat the last run, use opsworks-agent-cli run:

[root@nodejs-server1 ~]# opsworks-agent-cli run
[2015-12-16 13:44:55]  INFO [opsworks-agent(26261)]: About to re-run 'configure' from 2015-12-16T13:40:56
[2015-12-16 13:45:01]  INFO [opsworks-agent(26261)]: Finished Chef run with exitcode 0

Because the agent CLI can only repeat Chef runs, it doesn’t allow me to execute arbitrary recipes. I can do that in the OpsWorks console with the Run command. For demo purposes, I’ll use a custom cookbook named explore-opsworks-data to trigger a Chef run so I can then execute a recipe during the run.

That Chef run failed because I tried to execute a recipe that didn’t exist yet. Let’s create the recipe and run it again, this time in a way that opens up a Pry session.

[root@nodejs-server1 ~]# mkdir -p /var/chef/cookbooks/explore-opsworks-data/recipes
[root@nodejs-server1 ~]# echo 'require "pry"; binding.pry' > /var/chef/cookbooks/explore-opsworks-data/recipes/default.rb
[root@nodejs-server1 ~]# opsworks-agent-cli run
[2015-12-16T13:55:32+00:00] INFO: Storing updated cookbooks/explore-opsworks-data/recipes/default.rb in the cache.
From: /var/chef/runs/35e8a98a-c81e-46a9-84e3-1bbd105f07dd/local-mode-cache/cache/cookbooks/explore-opsworks-data/recipes/default.rb @ line 1 Chef::Mixin::FromFile#from_file:
 => 1: require "pry"; binding.pry

That doesn’t look very good; in fact, the output appears truncated. That’s because I’m now in an interactive shell, Pry, right in the middle of the Chef run. But this means I can use Pry to run arbitrary Ruby code within the recipe I created. I’ll try searching the data bags for the stack, layer, and instance.
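Inside the Pry session, Chef’s search method is available just as it is in a recipe. The lookups that produce the truncated output below are along these lines (a Chef-DSL fragment, only meaningful inside a Chef run):

```ruby
# Each call returns an array of items from the named data bag.
search(:aws_opsworks_stack)     # the stack: region, custom cookbook source, ...
search(:aws_opsworks_layer)     # the layers: names, volume configurations, ...
search(:aws_opsworks_instance)  # the instances: OS, IP addresses, ...
```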

The aws_opsworks_stack data bag contains details about the stack, like the region and the custom cookbook source, as shown in the following example:

=> [{"data_bag_item('aws_opsworks_stack', '8bd5b1e5-6f45-4d3d-9eb1-5cdaecaf77b8')"=>
    "custom_cookbooks_source"=>{"type"=>"archive", "url"=>"", "username"=>nil, "password"=>nil, "ssh_key"=>nil, "revision"=>nil},
    "name"=>"My Sample Stack (Linux)",

The aws_opsworks_layer data bag contains details about layers, like the layer name and Amazon Elastic Block Store (Amazon EBS) volume configurations:

=> [{"data_bag_item('aws_opsworks_layer', 'nodejs-server')"=>
   {"layer_id"=>"a8127c0d-749a-4192-aad7-8e512c8942b4", "name"=>"Node.js App Server", "packages"=>[], "shortname"=>"nodejs-server", "type"=>"custom", "volume_configurations"=>[], "id"=>"nodejs-server", "chef_type"=>"data_bag_item", "data_bag"=>"aws_opsworks_layer"}}]

The aws_opsworks_instance data bag contains details about instances, like the operating system and IP addresses:

=> [{"data_bag_item('aws_opsworks_instance', 'nodejs-server1')"=>

Now I’ll access a data bag directly. As the following example shows, the data I get this way is identical to the data the search command returns:

data_bag("aws_opsworks_stack")
=> ["8bd5b1e5-6f45-4d3d-9eb1-5cdaecaf77b8"]
data_bag_item("aws_opsworks_stack", "8bd5b1e5-6f45-4d3d-9eb1-5cdaecaf77b8")
=> {"data_bag_item('aws_opsworks_stack', '8bd5b1e5-6f45-4d3d-9eb1-5cdaecaf77b8')"=>
   "custom_cookbooks_source"=>{"type"=>"archive", "url"=>"", "username"=>nil, "password"=>nil, "ssh_key"=>nil, "revision"=>nil},
   "name"=>"My Sample Stack (Linux)",

As a practical example of how I would use search in one of my recipes, I’ll look up the current instance’s root device type and layer ID:

myself = search(:aws_opsworks_instance, "self:true").first
... "My root device type is #{myself['root_device_type']}"
[2015-12-16T18:19:55+00:00] INFO: My root device type is ebs
... "I am a member of layer #{myself['layer_ids'].first}"
[2015-12-16T18:20:17+00:00] INFO: I am a member of layer a8127c0d-749a-4192-aad7-8e512c8942b4

And just to make it clear that this shell isn’t just about Chef, but about Ruby code in general, here’s a Ruby snippet that would list all files and directories below /tmp, without using Chef:
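The snippet itself didn’t survive syndication; a one-liner along these lines (my reconstruction, not necessarily the author’s exact code) produces that kind of listing:

```ruby
# Plain Ruby, no Chef: list every file and directory directly below /tmp.
Dir["/tmp/*"]
```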

=> ["/tmp/npm-1967-e4f411bc", "/tmp/hsperfdata_root"]

After I’m done exploring, I can leave the shell by typing exit or by pressing Ctrl+D.


By using Pry in the middle of a Chef run, you can inspect the data that’s available during the run. If you’ve been troubleshooting failed runs by making a change on your workstation, updating the cookbooks on your instance, and triggering another deployment, this approach can save you a significant amount of time.

There’s no need to limit yourself to a single Pry session. If there are more areas in your code you need to explore, just put binding.pry in the appropriate place in your cookbook. Keep in mind, though, that you don’t want to permanently include this in your recipe, so don’t put this kind of change under version control.

Errata Security: All app developers should learn from WhatsApp-v-Brazil incident and defend against it

This post was syndicated from: Errata Security and was written by: Robert Graham. Original post: at Errata Security

So Brazil forced the ISPs to shut down WhatsApp (a chat app) for 48 hours, causing more than a million of their customers to move to Telegram (another chat app). Apparently, this was to punish WhatsApp for not helping in a criminal investigation.

Well, this is similar to how ISPs block botnets. Botnets, the most common form of malware these days, have a command-channel back to the hacker that controls all the bots in the network. ISPs try to block the IP address and/or DNS name in order to block access to the botnet.

Botnets use two ways around this. One is “fast-flux DNS”, where a single hostname changes its IP address every few minutes. This produces too many IP addresses for ISPs to block. WhatsApp can keep spinning up new cloud instances at places like Amazon Web Services or Rackspace faster than ISPs can play whack-a-mole.

But ISPs can also block the domain name itself instead of the IP address. Therefore, an app can also choose to use a “domain generation algorithm”, or “domain flux”. This generates a new domain name based on the current time, changing several times per day, using a predictable but “pseudo-random” algorithm. This would generate too many names for ISPs to block, even assuming the algorithm were public. In practice, in situations like this, the ISPs wouldn’t know the algorithm, and therefore wouldn’t know the list of names they needed to block.
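A domain generation algorithm is only a few lines of code. Here’s a minimal sketch; the hash scheme, seed string, 8-hour window, and .example.com suffix are all illustrative assumptions, not any real app’s or botnet’s actual algorithm:

```ruby
require "digest"

# Derive the rendezvous domain from the current time. The name changes
# every 8 hours; anyone who knows the secret seed computes the same name,
# while an ISP that doesn't know the seed can't predict the next one.
def rendezvous_domain(time = Time.now.utc, seed: "example-seed")
  window = time.to_i / (8 * 3600)                    # 8-hour window number
  label  = Digest::MD5.hexdigest("#{seed}:#{window}")[0, 12]
  "#{label}.example.com"
end
```

Because client and server derive the name independently from the clock and the shared seed, no coordination channel is needed to agree on the next domain.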

The cool thing is that companies like WhatsApp can deploy such measures in their software really easily, but not tell anybody. The first time a government like Brazil tried to punish them, the ISPs would mysteriously fail at blocking the app. It would take days of research for anybody to figure out why.

This highlights two important points.

The first is that “governments”, not just “hackers”, need to be part of your threat model when developing apps/services. The second is that evil “malware” or “viruses” are often indistinguishable from good software. That’s why things like the Wassenaar Arms Control export restrictions are doomed to fail: it’s impossible for regulations to clarify the difference.

Note: Apparently the court order specified WhatsApp’s domains, all subdomains, and the IP addresses used by those domains.

Errata Security: No, you can’t shut down parts of the Internet

This post was syndicated from: Errata Security and was written by: Robert Graham. Original post: at Errata Security

In tonight’s Republican debate, Donald Trump claimed we should shut down parts of the Internet in order to disable ISIS. This would not work. I thought I’d create some quick notes on why.

This post claims it would be easy: just forge a BGP announcement, which would redirect all Syrian traffic to the United States instead of Syria. That view is too simplistic.

Technically, the BGP attack described in the above post wouldn’t even work. BGP announcements in the United States would only disrupt traffic to/from the United States; traffic between Turkey and ISIS would remain unaffected. The Internet is based on trust — abusing that trust this way could only work temporarily, before everyone else stopped trusting the United States. Legally, this couldn’t work either, as the United States lacks sufficient legal authority to compel such an action. Congress would have to pass a law, which it wouldn’t do.

But “routing” is just a logical layer built on top of telecommunications links. Syria and Iraq own their respective IP address space. ISIS doesn’t have any “ASN” of their own. (If you think otherwise, then simply tell us the ASN that ISIS uses). Instead, ISIS has to pay for telecommunications links to route traffic through other countries. This causes ISIS to share the IP address space of those countries. Since we are talking about client access to the Internet, these are probably going through NATs of some kind. Indeed, that’s how a lot of cellphone access works in third world countries — the IP address of your phone frequently does not match that of your country, but of the country of the company providing the cellphone service (which is often outsourced).

Any attempt to shut those down is going to have a huge collateral impact on other Internet users. You could take a scorched-earth approach and disrupt everyone’s traffic, but that’s just going to increasingly isolate the United States while having little impact on ISIS. Satellite and other private radio links can be set up as fast as you bomb them.

In any event, a scorched-earth approach to messing with IP routing is still harder than just cutting off the land-line links they already have. In other words, attacking ISIS at Layer 3 (routing) is foolish when attacking at Layer 1 (physical links) is so much easier.

You could probably bomb fiber optic cables and satellite links as quickly as they got reestablished. But then, you could disable ISIS by doing the same thing with roads, bridges, oil wells, electrical power, and so on. Disabling critical infrastructure is considered a war crime, because it disproportionately affects the populace rather than the enemy. The same likely applies to Internet connections — you’d do little but annoy ISIS while harming the population.

Indeed, cutting off the population from the Internet is what dictators do. It’s what ISIS wants to do, but doesn’t, because it would turn the populace against them. Our strategy shouldn’t be to help ISIS.

Note that I’ve been focused on clients, because the servers ISIS uses to interact with the rest of the world are located outside of ISIS-controlled areas. That’s because Internet access there is so slow and expensive that they use it only for client browsing, not for services. Trump tried to back off his crazy proposal by insisting it applied only to ISIS-controlled areas, but that’s not how the Internet works. ISIS equipment is worldwide — the only way to shut them down is a huge, First Amendment-violating censorship campaign.

Here’s the deal. The Internet routes around censorship. Of the many options we have, censoring the Internet in ISIS-controlled territories is neither something we can do nor something we would want to do. Simply null-routing AS numbers in BGP and bombing satellite uplinks would certainly not do it. Cutting the physical links is certainly possible, but even ISIS’s neighbors, all of whom oppose ISIS, have not taken that step.

Update: In response to Weev’s comment below, I thought I’d make a few points. The Pakistan goof did not disable all of YouTube, just areas with a shorter route to Pakistan than to the United States, such as Europe. Also, while it’s possible to create disruption, it’s impossible to do so for a long period of time, as the Pakistan incident showed when, after a bit, everyone just ignored Pakistan. It hurt Pakistan more than YouTube. Lastly, ISIS has no ASN to null-route. If you disagree with me, then name the ASN. Instead, the ASNs in ISIS-controlled areas are those from Syria, neighbors like Turkey and Iran, and possibly other countries like China. Trying to block them all would cause huge collateral damage.

Update: If you think you can wage war by spoofing BGP, then it means ISIS-friendly ISPs can retaliate by spoofing back. It’s not a precedent you want to establish.

TorrentFreak: Sky Users Receive Porn Piracy Threats in Time For Christmas

This post was syndicated from: TorrentFreak and was written by: Andy. Original post: at TorrentFreak

Last month news broke that a brand new flood of copyright infringement threats was about to land on UK-based Internet users.

“A company called Golden Eye International, which owns rights to several copyrighted films, has claimed that a number of Sky Broadband customers engaged in unlawful file sharing of some of its films,” ISP Sky told its subscribers in a warning letter.

“It’s likely that Golden Eye International will contact you directly and may ask you to pay them compensation.”

It’s taken several weeks, but as promised, Sky subscribers are now receiving letters from Golden Eye (GEIL) and partner firm Ben Dover Productions (BDP).

“It is with regret that we are writing this letter to you. However, GEIL and BDP are very concerned at the illicit distribution of films over the Internet,” the letter begins.

GEIL then explains that it is not the content owner but “the licensee authorized to enforce breach of copyright” on the adult movie titles referenced in the letters. To protect our sources we aren’t publishing the movie titles, but to get an idea of the embarrassment some people are feeling right now, a full list of the movies can be found in GEIL’s license arrangement with BDP, available here (PDF).

As usual GEIL points out that it has hired a “forensic computer analyst” to track alleged infringers. However, in more than one instance it appears that GEIL is accusing people of downloading and sharing content in the summer of 2014. Expecting people to remember what happened so long ago could be a tall order.


“On 26 August 2015 Master Bowles, sitting in the High Court, ordered that SKY UK LTD give disclosure of your name and address, for the purpose of enabling us to send you this letter and if necessary bring legal proceedings against you,” the letter continues.

“In accordance with that Order, SKY UK identified you as the subscriber noted in their systems as on their network associated with the IP address on the date and at the time in question.”

As noted this past weekend, ISPs can make mistakes too, but GEIL’s letter nevertheless clearly states that the account holder is assumed to be both the infringer and the user of the relevant computer at the date and time in question.

The company cannot possibly know this for certain, since any number of people can have access to a household’s Internet connection. Interestingly, they immediately admit that too.


Of course, if people are unaware of any infringement taking place they cannot reasonably be expected to furnish GEIL with that information. And while GEIL say they “may” ask the court to conclude that the account holder was the user of an unspecified computer on a date 18 months ago, the court is also free to reject that assertion.

It’s also worth noting that GEIL have never engaged in a contested case in court, despite threatening to do so many times previously. What the company actually wants is a confession and hard cash.

“Once your response to this letter is received, GEIL and BDP will be prepared, if we believe that you have behaved unlawfully, to give you the opportunity to avoid legal action by proposing a settlement out of court,” the letter notes.

As previously instructed by the court, GEIL is not allowed to ask for a specific amount in its initial letter, but recipients of second letters from the company will probably receive demands of £600 to £700 to put the matter to rest.

However, GEIL also tries to lure letter recipients in by suggesting that accidental infringement or that carried out by a child might result in a lower settlement amount being offered.


GEIL concludes by asking for a detailed confession or for the account holder to point the finger at members of their family or friends who have had access to their network.

“Please state whether you admit that you have downloaded the Work and/or made it available for download by others and if so the extent to which you have done so and whether you are prepared in principle to enter into a settlement of the kind outlined above,” GEIL adds.

“If you deny that you have downloaded the Work or made it available for download by others, please explain the basis upon which you deny it, and provide the information that we have requested above about other users of the computer.”

TorrentFreak has spoken to several letter recipients in the past few days. Only one said he was thinking of settling with GEIL.

People looking for legal advice can contact Southampton-based solicitor Michael Coyle who is handling these cases for a fraction of the amount requested by GEIL.

Source: TorrentFreak, for the latest info on copyright, file-sharing, torrent sites and ANONYMOUS VPN services.

TorrentFreak: No Copyright Trolls, Your Evidence Isn’t Flawless

This post was syndicated from: TorrentFreak and was written by: Andy. Original post: at TorrentFreak

Earlier this month TF broke the news that Sky Broadband in the UK were sending letters out to some of their customers, warning them they’re about to be accused of downloading and sharing movies without permission.

When they arrive the threats will come from Golden Eye International (GEIL), the company behind the ‘Ben Dover’ porn brand that has already targeted hundreds of people with allegations of Internet piracy.

“It’s likely that Golden Eye International will contact you directly and may ask you to pay them compensation,” the ISP warned.

In fact, GEIL will definitely ask for money, largely based on their insistence that the evidence they hold is absolutely irrefutable. It’s the same tune they’ve been singing for years now, without ever venturing to back up their claims in court. Sadly, other legal professionals are happy to sing along with them.

“Don’t do anything illegal and you won’t get a letter,” intellectual property specialist Iain Connor told The Guardian last week.

“Golden Eye will only have gotten details of people that they can prove downloaded content and so whether the ‘invoice’ demand is reasonable will depend on how much they downloaded that infringed copyright material.”

Quite aside from the fact that none of these cases are about downloading copyrighted material (they’re about uploading), one has to presume that Connor isn’t personally familiar with the details of these cases; otherwise he would’ve declared that interest. Secondly, he is absolutely wrong.

Companies like GEIL sometimes get it wrong, the anti-piracy trackers they use get things wrong, and ISPs get things wrong too. An IP address is NOT a person but innocent parties have to go to huge lengths to prove that. IT worker Harri Salminen did just that and this week finally managed to publicly clear his family’s name.

It started two years ago when his wife – the Internet account payer – was accused by an anti-piracy outfit (unconnected to GEIL) of pirating on a massive scale.

“They claimed that thousands of music tracks had been illegally distributed from our Internet connection,” Salminen told local media.

“The letter came addressed to my wife and she became very anxious, since she didn’t understand what this was all about. According to the letter, the matter was going to the court and we were advised to make contact to agree on compensation.”

Sound familiar? Read on.

The Salminen family has two children, so the first step was to ensure they hadn’t uploaded anything illegally. Harri Salminen, who works in the IT industry, established that they had not, and began to conduct his own investigation. Faced with similar “irrefutable” IP address-based evidence to that presented in all of these ‘troll’ cases, what could’ve possibly gone wrong?

Attached to the letter of claim was a page from Salminen’s ISP which detailed his wife’s name, the IP address from which the piracy allegedly took place, and a date of infringement. This kind of attachment is common in such cases and allows trolls to imply that their evidence is somehow endorsed by their target’s ISP.

Then Salminen struck gold. On the day that the alleged infringement took place the IT worker was operating from home while logged into his company’s computer systems. Knowing that his company keeps logs of the IP addresses accessing the system, Salminen knew he could prove which IP address he’d been using on the day.

“I looked into my employer’s system logs for IP-addresses over several weeks and I was able to show that our home connection’s IP address at the time of the alleged act was quite different from the IP address mentioned in the letter,” he explained.
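The comparison Salminen describes is easy to automate. Here’s a sketch; the log format, the timestamps, the IPs, and the ip_at helper are all hypothetical illustrations, not his actual data:

```ruby
require "time"

# Given access logs as [timestamp, client_ip] pairs, return the IP the
# connection was using at (i.e., most recently before) a given moment.
def ip_at(log, at)
  log.select { |ts, _ip| ts <= at }.max_by { |ts, _ip| ts }&.last
end

log = [
  [Time.parse("2014-07-01 09:00 UTC"), "85.23.10.77"],
  [Time.parse("2014-07-01 18:30 UTC"), "85.23.99.12"],
]

alleged_ip = "62.142.8.4"  # hypothetical IP from a letter of claim
actual_ip  = ip_at(log, Time.parse("2014-07-01 12:00 UTC"))
puts actual_ip == alleged_ip ? "match" : "mismatch: connection used #{actual_ip}"
```

If the logged IP at the alleged time differs from the IP in the letter, the accusation cannot refer to your connection.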

So what caused Salminen’s household to be wrongly identified? Well, showing how things can go wrong at any point, it appears that there was some kind of screw-up between the anti-piracy company and Salminen’s ISP.

Instead of identifying the people who had the IP address at the time of the actual offense, the ISP looked up the people using the address when the inquiry came in.

“The person under employment of the ISP inputs a date, time, and IP-address to the system based on a court order,” anti-piracy group TTVK now explains.

“And of course, when a human is doing something, there is always a possibility for an error. But even one error is too much.”

Salminen says that it was only his expertise in IT that saved him from having to battle it out in court, even though his family was entirely innocent. Sadly, those about to be accused by Golden Eye probably won’t have access to similar resources.

“We have only written to those account holders for whom we have evidence of copyright infringement,” Golden Eye’s Julian Becker said confidently last week.

Trouble is, Golden Eye only has an IP address and the name of the account holder. They have no evidence that person is the actual infringer, even presuming there hasn’t been a screw-up like the one detailed above.

“We have written to account holders accusing them of copyright infringement, even though it’s entirely possible they personally did nothing wrong and shouldn’t have to pay us a penny,” is perhaps what he should’ve said.

But that’s not only way too frank, it’s also a sure-fire way of punching a huge hole in GEIL’s bottom line. And for a troll like GEIL, that would be a disaster.

[$] Supporting secure DNS in glibc

This post was syndicated from: and was written by: corbet. Original post: at

One of the many weak links in Internet security is the domain name system (DNS); it is subject to attacks that, among other things, can mislead applications regarding the IP address of a system they wish to connect to. That, in turn, can cause connections to go to the wrong place, facilitating man-in-the-middle attacks and more. The DNSSEC protocol extensions are meant to address this threat by setting up a cryptographically secure chain of trust for DNS information. When DNSSEC is set up properly, applications should be able to trust the results of domain lookups. As the discussion over an attempt to better integrate DNSSEC into the GNU C Library shows, though, ensuring that DNS lookups are safe is still not a straightforward task.

Darknet - The Darkside: SpiderFoot – Open Source Intelligence Automation Tool (OSINT)

This post was syndicated from: Darknet - The Darkside and was written by: Darknet. Original post: at Darknet - The Darkside

SpiderFoot is an open source intelligence automation tool. Its goal is to automate the process of gathering intelligence about a given target, which may be an IP address, domain name, hostname or network subnet. SpiderFoot can be used offensively, i.e. as part of a black-box penetration test to gather information about the target or defensively…

Read the full post at

Linux How-Tos and Linux Tutorials: An Introduction to Uncomplicated Firewall (UFW)

This post was syndicated from: Linux How-Tos and Linux Tutorials and was written by: Jack Wallen. Original post: at Linux How-Tos and Linux Tutorials

One of the many heralded aspects of Linux is its security. From the desktop to the server, you’ll find every tool you need to keep those machines locked down as tightly as possible. For the longest time, the security of Linux was in the hands of iptables (which works with the underlying netfilter system). Although incredibly powerful, iptables is complicated—especially for newer users. To truly make the most out of that system, it may take weeks or months to get up to speed. Thankfully, a much simpler front end for iptables is ready to help get your system as secure as you need.

That front end is Uncomplicated Firewall (UFW). UFW provides a much more user-friendly framework for managing netfilter and a command-line interface for working with the firewall. On top of that, if you’d rather not deal with the command line, UFW has a few GUI tools that make working with the system incredibly simple.

But, before we find out what GUI tools are available, it’s best to understand how the UFW command-line system works.

Working with the Command

The fundamental UFW command structure looks like this:

ufw [--dry-run] [options] [rule syntax]

Notice the --dry-run option. Including this argument informs the command not to make any changes; instead, you will see the results your changes would produce in the output.

As for working with the command, UFW can be used in two ways:

  • Simple syntax: Specifies a port and (optionally) the protocol

  • Full syntax: Specifies source, destination, port, and (optionally) the protocol

Let’s look at the simple syntax first. Say, for example, you want to allow traffic on port 22 (SSH). To do this with UFW, you’d run a command like:

sudo ufw allow 22

NOTE: I added sudo to the command because you must have admin privileges to run ufw. If you’re using a distribution that doesn’t take advantage of sudo, you’d first have to su to root and then run the same command (minus sudo).

Conversely, say you want to prevent traffic on port 22. To do this, the command would look like:

sudo ufw deny 22

Should you want to add a protocol to this, the command would look like:

sudo ufw deny 22/tcp

What happens if you don’t know the port number for a service? The developers have taken that into consideration. UFW checks /etc/services, so you can define a rule using a service name instead of a port number. To allow SSH traffic, that command would look like:

sudo ufw allow ssh

Pretty simple, right? You can also add protocols to the above command, in the same way you did when defining a rule via port number.

sudo ufw allow ssh/tcp

Of the available arguments, the ones you’ll use the most with the ufw command are:

  • allow: adds an allow rule

  • deny: adds a deny rule

  • reject: adds a reject rule (the sender is informed that the traffic was rejected)

  • limit: adds a rate-limiting rule

  • status: displays if the firewall is active or inactive

  • show: displays the current running rules on your firewall

  • reset: disables and resets the firewall to default

  • reload: reloads the current running firewall

  • disable: disables the firewall

If you want to use a fuller syntax, you can then begin to define a source and a destination for a rule. Say, for example, you have an IP address you’ve discovered has been attempting to get into your machine (for whatever reason) through port 25 (SMTP). Let’s say that address is (even though it’s an internal address) and your machine address is To block that address from gaining access (through any port), you could create the rule like so:

sudo ufw deny from to port 25

Let’s look at the limit option. Suppose you have reason for concern that someone might be attempting a denial of service attack on your machine via port 80. You can limit connections to that port with UFW, like so:

sudo ufw limit 80/tcp

By default, the connection will be blocked after six attempts in a 30-second period.

You might also need to allow outgoing traffic on a certain port but deny incoming traffic on the same port. To do this, you use the directional arguments out and in. To allow outgoing traffic on port 25 (SMTP), issue the command:

sudo ufw allow out on eth0 to any port 25 proto tcp

You could then add the next rule to block incoming traffic on the same interface and port:

sudo ufw deny in on eth0 to any port 25 proto tcp

GUI Tools

Now that you understand the basics of UFW, it’s time to find out what GUI tools are available to make using this handy firewall even easier. There aren’t many which are actively maintained, and many distributions default to one in particular. That GUI is…

Gufw is one of the most popular GUI front ends for UFW. It’s available for Ubuntu, Linux Mint, openSUSE, Arch Linux, and Salix OS. With Gufw, you can easily create profiles to match different uses for a machine (home, public, office, etc.). As you might expect from such a tool, Gufw offers an interface that would make any level of user feel right at home (see Figure 1 above).

Some distributions, such as Ubuntu, don’t install Gufw by default. You will, however, find it in the Ubuntu Software Center. Search for gufw and install with a single click.

If your distribution happens to be Elementary OS Freya, there’s a new front end for UFW built into the settings tool that allows you to very easily add rules to UFW (Figure 2). You can learn more about the Elementary OS Freya UFW front end from my post “Get to Know the Elementary OS Freya Firewall Tool.”

You might also come across another front end called ufw-frontends. That particular GUI hasn’t been in development for some time now, so it’s best to avoid it.

For most users, there is no need to spend the time learning iptables—not when there’s a much more user-friendly front end (that also happens to include solid GUI tools) that’ll get the job done. Of course, if you’re looking for business- or enterprise-class firewalling, you should certainly spend the time and effort to gain a full understanding of iptables.

Which is right for your needs, UFW or iptables?

AWS Official Blog: EC2 VPC VPN Update – NAT Traversal, Additional Encryption Options, and More

This post was syndicated from: AWS Official Blog and was written by: Jeff Barr. Original post: at AWS Official Blog

You can use Amazon Virtual Private Cloud to create a logically isolated section of the AWS Cloud. Within the VPC, you can define your desired IP address range, create subnets, configure route tables, and so forth. You can also use a network gateway to connect the VPC to your existing on-premises network using a hardware Virtual Private Network (VPN) connection. The VPN running in the AWS Cloud (also known as a VPN gateway or VGW) communicates with a customer gateway (CGW) on your network or in your data center (read about Your Customer Gateway to learn more).

Today we are adding several new features to the VPN. Here’s a summary:

  • NAT Traversal
  • Additional Encryption Options
  • Reusable IP addresses for the CGW

In order to take advantage of any of these new features, you will need to create a new VGW and then create new VPN tunnels with the desired attributes.

NAT Traversal
Network Address Translation (NAT) maps one range of IP addresses to another. Let’s say that you have created a VPC and assigned it a desired IP address range, and then split that range into a couple of subnets. Then you launch some EC2 instances within the VPC, each bound to one of those subnets. You can now use Network Address Translation to map the VPC’s IP address range to a different range when seen from your existing network. This mapping process takes place across the VPN connection and is known as NAT-T, or NAT Traversal. NAT-T allows you to create IP connections that originate on-premises and connect to an EC2 instance (or vice versa) using addresses that have been translated.

You can set this up when you create a new VPN connection in the AWS Management Console. You will need to open up UDP port 4500 in your firewall in order to make use of NAT-T.
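
For reference, the same setup can also be scripted with the AWS CLI. The sketch below assumes a configured CLI; the customer gateway IP, ASN, and resource IDs are placeholders, and NAT-T itself is negotiated by the endpoints once UDP 4500 is reachable:

```shell
# Create a new virtual private gateway (required to pick up the
# new VPN features)
aws ec2 create-vpn-gateway --type ipsec.1

# Register your on-premises endpoint; the public IP and BGP ASN
# below are placeholders for your own values
aws ec2 create-customer-gateway --type ipsec.1 \
    --public-ip --bgp-asn 65000

# Create the VPN connection between the two gateways
# (the IDs are placeholders for the ones returned above)
aws ec2 create-vpn-connection --type ipsec.1 \
    --customer-gateway-id cgw-0123456789abcdef0 \
    --vpn-gateway-id vgw-0123456789abcdef0
```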

Additional Encryption Options
You can now make use of several new encryption options.

When the VPC’s hardware VPN is in the process of establishing a connection with your on-premises VPN, it proposes several different encryption options, each with a different strength. You can now configure the VPN on the VPC to propose AES256 as an alternative to the older and weaker AES128. If you decide to make use of this new option, you should configure your device so that it no longer accepts a proposal to use AES128 encryption.

The two endpoints participate in a Diffie-Hellman key exchange in order to establish a shared secret. The Diffie-Hellman groups used in the exchange will determine the strength of the hash on the keys. You can now configure the use of a wider range of groups:

  • Phase 1 can now use DH groups 2, 14-18, 22, 23, and 24.
  • Phase 2 can now use DH groups 1, 2, 5, 14-18, 22, 23, and 24.

Packets that flow across the VPN connection are verified using a hash algorithm. A matching hash gives a very high-quality indicator that the packet has not been maliciously modified along the way. You can now configure the VPN on the VPC to use the SHA-2 hashing algorithm with a 256 bit digest (also known as SHA-256). Again, you should configure your device to disallow the use of the weaker hash algorithms.

Reusable CGW IP Addresses
You no longer need to specify a unique IP address for each customer gateway connection that you create. Instead, you can now reuse an existing IP address. Many VPC users have been asking for this feature and I expect it to be well-used.

To learn more, read our FAQ and the VPC Network Administrator Guide.


SANS Internet Storm Center, InfoCON: green: Victim of its own success and (ab)used by malwares, (Wed, Oct 28th)

This post was syndicated from: SANS Internet Storm Center, InfoCON: green and was written by: SANS Internet Storm Center, InfoCON: green. Original post: at SANS Internet Storm Center, InfoCON: green

This morning, I faced an interesting case. We were notified that one of our computers was making potentially malicious HTTP requests to We quickly checked and detected that many hosts were sending requests to this API. It is a website hosted in France which provides geolocalisation services via a text/json/xml API. Usage is quick: a simple HTTP request from the command line is enough to query it.
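
For illustration, a lookup against the service looked roughly like this (the api subdomain and URL scheme are assumptions; the service has since disappeared):

```shell
# Query the geolocation API for an IP address; the hostname and
# path format are assumptions about the service's URL scheme.
# The service answers with a two-letter country code.
```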

You provide an IP address and it returns the corresponding two-letter country code. They also provide a paid version with more features. We investigated deeper and found that one request was indeed performed by a single host using a fake User-Agent. Emerging Threats publishes signatures that detect this traffic; trimmed to their essentials, they look like this:

alert http $HOME_NET any -> $EXTERNAL_NET any (msg:"ET POLICY External IP Lookup Attempt To Wipmania"; content:"Host|3A 20||0d 0a|";)
alert http $HOME_NET any -> $EXTERNAL_NET any (msg:"ET TROJAN Dorkbot GeoIP Lookup to wipmania"; content:"User-Agent|3a| Mozilla/4.0|0d 0a|Host|3a||0d 0a|";)

I found references to in the following malware families:

  • Dorkbot
  • Ruskill

VT reported 97 occurrences of the domain in malicious files:

Conclusion: if you provide online services and they become popular, be careful not to be (ab)used by malware! It could affect your overall reputation and get you flagged or blocked on blacklists.

Xavier Mertens
ISC Handler – Freelance Security Consultant

(c) SANS Internet Storm Center. Creative Commons Attribution-Noncommercial 3.0 United States License.

SANS Internet Storm Center, InfoCON: green: Typo Squatting Charities for Fake Tech Support Schemes, (Mon, Oct 26th)

This post was syndicated from: SANS Internet Storm Center, InfoCON: green and was written by: SANS Internet Storm Center, InfoCON: green. Original post: at SANS Internet Storm Center, InfoCON: green

Joe wrote this weekend that:

A customer called me yesterday to make me aware of their computer that was compromised by one of those scam websites, that pops up an 800 numbers and tells them to call. Against her knowing better, she STILL called in…. ugh.

The site I wanted to make you aware of was amvets.COM. She wanted to make a donation, but the real website is amvets.ORG.

It is always sad to see how people with good intentions, willing to donate to a deserving cause, are taken advantage of. So I took a bit of time to investigate this particular case.

First of all: I do NOT recommend you go to the .com version of the site above. I didn’t see anything outright malicious, other than popups advertising the fake tech support service, but you never know what they are going to send next.

The content returned from the page is very variable. Currently, I am getting index pages linking to various veterans-related pages. Typically these pages are auto-created using keywords people used to get to the page, or keywords entered in the search field on the page. So it is no surprise that this page knows it is mistaken for a veterans charity.

When it does display the Fake Virus Warning page, it does so very convincingly:

– the look and feel is adapted to match the user’s OS and browser
– even on mobile devices, like my iPad, the page emulates the browser used

After a couple of visits to the site, it no longer displayed the virus warning to me, even if I changed systems and IPs. So I am not sure if they ran out of ad impressions or if they time them to only show up so often.

According to Farsight Security’s DNS database, 10,000 different hostnames resolve to this one IP address. Most of them look like obvious typo squatting domains.


For some of them, I still get ads for do-nothing-ware like MacKeeper (looking at the page from a Mac).

Johannes B. Ullrich, Ph.D.

(c) SANS Internet Storm Center. Creative Commons Attribution-Noncommercial 3.0 United States License.

Krebs on Security: IBM Runs World’s Worst Spam-Hosting ISP?

This post was syndicated from: Krebs on Security and was written by: BrianKrebs. Original post: at Krebs on Security

This author has long sought to shame Web hosting and Internet service providers who fail to take the necessary steps to keep spammers, scammers and other online ne’er-do-wells off their networks. Typically, the companies on the receiving end of this criticism are little-known Internet firms. But according to anti-spam activists, the title of the Internet’s most spam-friendly provider recently has passed to networks managed by IBM — one of the more recognizable and trusted names in technology and security.

In March 2010, not long after I began working on my new book Spam Nation: The Inside Story of Organized Cybercrime, From Global Epidemic to Your Front Door, I ran a piece titled Naming and Shaming Bad ISPs. That story drew on data from 10 different groups that track spam and malware activity by ISP. At the time, a cloud computing firm called Softlayer was listed prominently in six out of 10 of those rankings.

The top spam-friendly ISPs and hosting providers in early 2010.


Softlayer gradually cleaned up its act, and began responding more quickly to abuse reports filed by anti-spammers and security researchers. In July 2013, the company was acquired by IBM. More recently, however, the trouble at networks managed by Softlayer has returned. Last month, anti-spam group Spamhaus listed Softlayer as the “#1 spam hosting ISP,” putting Softlayer at the very top of its World’s Worst Spam Support ISPs index. Spamhaus said the number of abuse issues at the ISP has “rapidly reached rarely previously seen numbers.”

Contacted by KrebsOnSecurity, Softlayer for several weeks did not respond to requests for comment. After reaching out to IBM earlier this week, I received the following statement from Softlayer Communications Director Andre Fuochi:

“With the growth of Softlayer’s global footprint, as expected with any fast growing service, spammers have targeted our platform. We are aggressively working with authorities, groups like The Spamhaus Project, and IBM Security analysts to shut down this recent, isolated spike. Just in the past month we’ve shut down 95 percent of the spam accounts identified by Spamhaus, and continue to actively eliminate this activity.”

But according to Spamhaus, Softlayer still has more than 600 unaddressed abuse issues. Spamhaus says it is true that Softlayer has been responding to its abuse complaints, but that the scammers and spammers are moving much faster.

In a blog post published earlier this month, Spamhaus explained that the bulk of the trouble appears to have come from cybercriminal customers in Brazil who have been rapidly registering large numbers of domain names daily tied to fake but plausible-sounding companies or organizations.

“This Brazilian malware gang was so active that many listed [Softlayer Internet] ranges were being reassigned to the same spam gang immediately after re-entering the pool of available [Internet] addresses,” Spamhaus explained. “After observing the same [Internet] address ranges being reassigned repeatedly to the same spammers, Spamhaus contacted the SoftLayer abuse department and told them that [Spamhaus listings] for these specific issues would not be removed until SoftLayer was able to get control of the overall problem with these spammers.”

Spamhaus said it doesn’t know why Softlayer is having this problem, but it has a few guesses.

“We believe that SoftLayer, perhaps in an attempt to extend their business in the rapidly-growing Brazilian market, deliberately relaxed their customer vetting procedures,” the organization posited. “Cybercriminals from Brazil took advantage of SoftLayer’s extensive resources and lax vetting procedures. In particular, the malware operation exploited loopholes in Softlayer’s automated provisioning procedures to obtain an impressive number of IP address ranges, which they then used to send spam and host malware sites. Unfortunately, what happened to Softlayer can easily happen to any ISP that makes certain unwise choices.”

IBM/Softlayer did not comment on those allegations. But as I show in my book, Spam Nation, spammers and malware purveyors continuously seek out and patronize ISPs and hosting providers which erect the fewest barriers to rapidly setting up massive numbers of scammy sites simultaneously.

It is true that if you make it harder for spammers to operate, they don’t just go away; rather, they move someplace else where it’s easier to ply their trade. But there is little reason that these Internet bottom feeders should have made a home for themselves at a company owned by IBM, which bills itself as the fastest growing vendor in the worldwide security software market. Physician: Heal Thyself!

TorrentFreak: Popcorn Time & YTS Global Outages Cause Concern

This post was syndicated from: TorrentFreak and was written by: Andy. Original post: at TorrentFreak

This week a new service called BrowserPopcorn debuted and then shut down. Just to get things absolutely clear: that has nothing whatsoever to do with today’s situation. With that out of the way, let’s move on.

As reported earlier this week, the main fork of Popcorn Time is undergoing transition. Disputes over money and power have hit the project at its core. As a result key members have now left the project while others insist they will continue.

Today’s Popcorn problems

With that turmoil already concerning the community, this morning there are fresh issues making waves. Several hours ago the project’s official domain went completely offline.

The domain itself is owned by David Lemarier, aka ‘phnz’, one of the devs who left earlier this week. With that in mind it’s certainly possible (or even likely) that the outage is connected to his departure. While there are some reports that the domain is now slowly coming back online, the site itself is still accessible by direct IP address.

However, the outage is affecting more than just the Popcorn Time domain. For reasons that are not yet clear the project’s official Facebook and Twitter accounts (both named ‘popcorntimetv’) have also gone down, with the latter now complaining that it simply does not exist.


But the problems go deeper still. An externally-hosted status page for the project reveals additional problems.

Not only are there issues with nameservers, the website and the forum, some of the APIs on which the entire PopcornTime application relies are also non-functioning, causing the whole system to fall over.


The reason the movie API is down is the issue currently causing most concern, since it’s likely to keep Popcorn Time offline even when the project’s own domain returns. Here’s why.

The YTS connection

Most if not all versions of Popcorn Time rely on the YTS website for their movie libraries, and if YTS has problems, Popcorn Time has problems too. At the time of writing, and for the past several hours, the YTS site has been completely offline as well, meaning that Popcorn Time is pretty much broken.

TorrentFreak contacted the admin of YTS but did not immediately receive a response to our request for comment. However, when accessed directly, a public facing server operated by YTS does return the message – “Be Back Soon :)” – but it’s unclear if that relates directly to the current situation or is a generic downtime message.


At least for now it appears that the problems facing the Popcorn Time project and its team are separate from those being experienced by YTS. However, due to Popcorn Time’s reliance on YTS for content, the system falls down when the latter doesn’t function as it should.

More on the situation as we have it.

Source: TorrentFreak, for the latest info on copyright, file-sharing, torrent sites and ANONYMOUS VPN services.

Linux How-Tos and Linux Tutorials: Tips and Tricks for Using the Two Best E-Readers for Linux

This post was syndicated from: Linux How-Tos and Linux Tutorials and was written by: Jack Wallen. Original post: at Linux How-Tos and Linux Tutorials

It is 2015 and your home library that once resided on bookshelves and milk crates now exists on either a handheld reader, your laptop, or your desktop. That, of course, is not to say the end of physical books is nigh. But when you want the most convenient way to either read or keep your library with you, e-books are the way to go. This is especially true for larger, heavier textbooks.

The good news for Linux users is that there are plenty of outstanding apps to make reading e-books quite easy. And, because these tools happen to be offered on the Linux platform, they offer some really cool features to make your e-book life even better.

Let’s take a look at two of the best e-book readers available for Linux, as well as a trick or two for each.


Calibre

Calibre is the mack daddy of e-book tools on Linux. Not only does it offer an outstanding e-reader, it also helps you to convert your .html files into e-book format (it’ll convert .odt and other files…just not as well). The Calibre reader does a great job of displaying your e-books (from a vast array of formats). Calibre also offers a number of really handy features, such as:

  • Bookmarks

  • Reference mode (when in this mode, if you hover your cursor over a paragraph, it will show you the reference number in the form of Chapter.ParagraphNumber)

  • Table of Contents (view the book TOC as a sidebar)

  • Full-screen mode

  • Themes

  • And so much more

There is, however, one feature that places Calibre head and shoulders above the competition: the Calibre server. With this, you can run a server and access your books from any device. Let’s set this up and access the Calibre server from another machine. I will assume you’ve already installed Calibre (it can be found in your distribution’s standard repositories). The steps are simple:

  1. Open Calibre and click on the Preferences button

  2. Click Sharing over the net (under the Sharing section)

  3. Configure the necessary port (if applicable)

  4. Give the server a username (passwords can cause some devices to not work with the server)

  5. Click Start Server

  6. Click Test Server

When you click Test Server, your default web browser should pop up to display the web-based Calibre Library interface (Figure 1 above).

With the server running, locate the IP address of the machine hosting your Calibre server. You can now access the server in the form of http://SERVER_IP:8080 (8080 is the server’s default port). From that page, you can open a book by locating what you want to read and then clicking the associated Get button (Figure 2).

Once you click Get, the e-book file will download, and you can then open it in your local copy of Calibre (or whatever e-reader you choose).

The one caveat: when you start the server this way, it stops the second you close the app. If you want to leave the server running (without the GUI open), you can run it with the following command:

calibre-server --daemonize

This command will allow you to run the server without having to open Calibre. You can then set it to run as a startup service. How you do this will depend on what startup service your distribution uses (systemd or init).
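
On a systemd-based distribution, a minimal unit file might look like the sketch below. The binary path, user name, and library location are assumptions to adapt for your system; note that under systemd you omit --daemonize so the process stays in the foreground:

```ini
# /etc/systemd/system/calibre-server.service (illustrative)
Description=Calibre content server

# Adjust the binary path, user, and library directory for your setup
ExecStart=/usr/bin/calibre-server --with-library /srv/calibre-library

```

Then enable and start it with sudo systemctl enable calibre-server followed by sudo systemctl start calibre-server.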

There are even Android apps that let you access your Calibre library from anywhere (if you happen to save your Calibre Library in a cloud location). One particular app, Calibre Cloud, does a great job of accessing your Calibre Library from the likes of Google Drive, Dropbox, etc. Both a free version and a Pro version ($1.99 USD) are available. The Pro version also contains a built-in reader. If you opt for the free version, you’ll need to also install an e-book reader to use for viewing.


Lucidor

Lucidor doesn’t offer all the power and features that come along with Calibre, but it is one of the best straight-up e-readers you’ll find for Linux. This tool is strictly a reader. Even without all that power under the hood, Lucidor delivers an outstanding e-reader experience. One of the coolest features of Lucidor is its tabbed interface, which allows you to open not only multiple books, but also multiple books from multiple sources.

You won’t find Lucidor in your standard repository. In fact, you’ll have to download the file for installation on your distribution. Let’s install Lucidor on Ubuntu. Here’s how:

  1. Download the .deb file

  2. Open a terminal window

  3. Issue the command sudo dpkg -i lucidor_XXX_all.deb (where XXX is the release number)

  4. Hit Enter

  5. Type your sudo password

  6. Hit Enter

  7. Allow the installation to complete
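
If dpkg complains about unmet dependencies during step 3, the usual remedy on Ubuntu is to let apt pull them in and finish the configuration (both commands require sudo):

```shell
# Install the package; dpkg may stop here with unmet dependencies
sudo dpkg -i lucidor_XXX_all.deb

# Fetch whatever dependencies dpkg reported missing and finish
# configuring the partially-installed package
sudo apt-get install -f
```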

You should now see the Lucidor launcher in your Dash (or menu, depending upon your desktop). Run the app and you will be greeted by the minimal welcome screen (Figure 3).

The interface is quite simple to use. You click on the Links drop-down and select what you want to open. Let’s open up the Personal bookcase in a tab and then add a book. Click Links > Bookcase and the new tab will open, defaulting to the Personal Bookcase. Now click File > Open File. Locate the .epub file you want to add and then click Open. When the file opens in the Lucidor tab, you will be prompted as to whether you want to add the file to the current Bookcase (Figure 4). Click Add and the book will now be available in your personal bookcase.

At this point, you can click the Tab button, click Open Bookcase, and start the process over to open a new book.

You can also add annotations to books for easy note-taking. Here’s how:

  1. Open the book in question

  2. Locate a section of the book you want to annotate

  3. Click the Contents drop-down

  4. Select Annotations

  5. Highlight the portion of the text you want to annotate

  6. Click Create Note

  7. Enter your note for the annotation (Figure 5)

  8. Select Highlight (if you want the selected text to be highlighted)

  9. Select Mark Annotations to place a mark on the text where the annotation starts

  10. When you’re finished, click Add

There are several other features you can enjoy with either Calibre or Lucidor. Most importantly, however, is that you can simply read your books. Other e-readers are available for the Linux platform, but once you’ve used either of these, you won’t settle for anything less.

TorrentFreak: ISP Will Disconnect Pirates Following Hollywood Pressure

This post was syndicated from: TorrentFreak and was written by: Andy. Original post: at TorrentFreak

Around the world, Internet service providers are being placed under pressure to do something about the millions of customers who download and share copyright-infringing content.

Major ISPs in the United States, for example, are currently participating in the so-called “six strikes” scheme operated by the major studios and recording labels. Under this project users are given up to six warnings before a range of measures are applied to their accounts.

The measures – which copyright holders insist need to be punitive – are problematic for ISPs. Customers keep their companies in business, and punishing them too hard isn’t good for that business, especially since disappointed customers can simply sign up with a new provider.

However, according to a report coming out of Italy, a local ISP is so concerned about threats from Hollywood that it is now threatening to disconnect customers who download and share copyright-infringing material.

According to news outlet Repubblica, one of its readers who subscribes to an ISP in the north of the country received correspondence after being caught downloading movies and TV shows by various Hollywood studios.

“We have received numerous reports of misuse from multiple rights holders (Viacom, Paramount, MGM and other distribution companies) by direct means or from their legal offices,” the mail begins.

“They contain precise details about the material downloaded, download times, the IP address used, the ownership rights of the person making the report, contact details and valid certificate digital signatures plus confirmation of the authenticity of the sender and message content.”

Of course, these kinds of warning notices are nothing new and millions are sent every month. However, the ISP warns that if it does not take measures to stop the infringements, it could be considered “an accomplice to the offenses”. On that basis alone it must take action.

“We ask you to kindly give us feedback within 48 hours. In the case of failure [to respond] or incorrect feedback we’ll be forced to proceed with the cancellation of the service,” the correspondence concludes.

In a comment, copyright lawyer Guido Scorza said that while the ISP’s threats are unusual, they are permissible.

“The conduct of the provider is curious but not illegal. In the event that the ISP cancels the user agreement you could outline, however, an abuse of rights,” Scorza explains.

“Withdrawing unilaterally due to Internet problems is one thing, doing it because a third party claims that a customer is responsible for piracy is something else. Only a court can decide if a user is a pirate or not,” the lawyer says.

Repubblica hasn’t yet named the ISP in question but did speak to its CEO who told the publication that it receives hundreds of complaints from rightsholders and their lawfirms. The notice being sent out to subscribers underlines the company’s position.

“What is required now from us is to ensure that the connection is no longer used for illegal downloading activity and to protect the connection in the event that [the customer] is sure he has never downloaded anything,” it reads.

The question of ISP liability is a thorny one, but increasingly the world’s largest entertainment companies are moving towards an insistence that repeat infringers must be dealt with. Clearly some are taking those threats to their logical conclusion.

Source: TorrentFreak, for the latest info on copyright, file-sharing, torrent sites and ANONYMOUS VPN services.

TorrentFreak: 2,800 Cloudflare IP Addresses Blocked By Court Order

This post was syndicated from: TorrentFreak and was written by: Andy. Original post: at TorrentFreak

Otherwise known as the Stop Online Piracy Act, SOPA would have enabled U.S. authorities to have allegedly infringing websites blocked at the DNS level, thereby rendering them inaccessible.

In 2012 the proposals were met with outrage, especially by those in the tech community who feared that meddling with the very systems that keep the Internet running would eventually end in disaster.

The legislation never passed but nonetheless over the past three years many hundreds of websites have been blocked around the world. In fact, in some recent cases in the United States entertainment industry companies have achieved some of the things they claimed to need SOPA for.

So did any of this “break the Internet”? According to a piece published in The Hill this week, absolutely not.

“[Policymakers] should not accept the falsehood that blocking a website or taking other actions to shut down infringing sites equates to an assault on the security and reliability of the Internet as a whole,” wrote Daniel Castro, vice president of the Information Technology and Innovation Foundation.

“Instead, they should recognize that selective targeting of websites dedicated to infringement is an effective strategy to combat piracy, and encourage the recording industry, the film industry and online intermediaries to work in partnership to block, shut down and cut off revenue to these websites wherever possible.”

Mr Castro probably feels the Internet is unaffected by blocking since every website he accesses works just fine. However, new data on blocking activity elsewhere presents an altogether more worrying state of affairs.

In 2013, Russia activated a streamlined mechanism for having sites blocked at the ISP level. As a result many hundreds of sites have been rendered inaccessible to the public and some, such as torrent site RUTracker, are now facing the possibility of being blocked forever. But how has this affected the wider Internet?

Well, according to data obtained by web-blocking watchdog RUBlacklist, those doing the blocking in Russia are using their powers to the full while having little concern for collateral damage. As a result more than a third of all IP addresses on the country’s website blocking list belong not to illegal services, but to US-based CDN company Cloudflare.

RUBlacklist says the numbers are significant. As of October 5, 2015, a total of 8,284 IP addresses were on the national blocklist but a head-shaking 2,831 of them – more than 34% – have been registered to Cloudflare.


The problem is due to how CDNs (Content Delivery Networks) like Cloudflare are set up. Instead of a site’s own IP address facing the world, once Cloudflare is deployed it is the service’s IP addresses that are seen in public.

Sadly, when a complaint is filed at the Moscow City Court, no one really cares whether the IP addresses belong to a legitimate service or not, or whether they stay on the blacklist even after they fall out of use.

“Roskomnadzor simply fills and fills its database of banned IP-addresses, risking blocking dozens if not hundreds of thousands of innocent websites,” RUBlacklist explains.

Of course, the blocking of Cloudflare and its customers is nothing new. Earlier this year innocent websites were blocked in the UK by ISP Sky simply because a Pirate Bay proxy was hosted behind the same IP-address.

But while the Sky problems were probably down to human error, the current over-blocking situation in Russia could have been avoided if concerns had been considered in December 2013.

People knew Cloudflare was at risk then, yet no one appears to have taken the warnings seriously. In April 2014, Roskomnadzor admitted there was a problem but simply warned people not to use the service since Cloudflare didn’t respond to correspondence on the matter.

“CloudFlare representatives refused to cooperate and did not respond to the formal notices of Roskomnadzor,” the watchdog explained.

“In the absence of a reaction from CloudFlare many conscientious Internet resources using the CDN-service will be blocked by ISPs in Russia.”

But as the finger-pointing continues the people who really suffer are regular Internet users themselves. Fears that Joe Public would be caught in the copyright crossfire were just the kind of concerns that came to the forefront and fueled the SOPA protests.

Blocking might not have broken the Internet yet, but already parts of it need fixing. And, despite it all, the pirates continue their business largely as before.

Source: TorrentFreak, for the latest info on copyright, file-sharing, torrent sites and ANONYMOUS VPN services.

Raspberry Pi: A new version of Scratch for Raspberry Pi: now with added GPIO

This post was syndicated from: Raspberry Pi and was written by: Clive Beale. Original post: at Raspberry Pi

There are many excellent things to be found in last week’s release of Raspbian Jessie and we’ve been keeping one of the best ones tucked under our big Raspberry Pi-shaped hat. In the Programming menu on the desktop you’ll find a new version of Scratch, our favourite programming language for beginners.

Breadboard and Scratch on Raspberry Pi

Connect buttons, sensors, cameras, LEDs, goblin sticks and other gubbins to your Pi using Scratch

Tim Rowledge, who has been “vigorously wrangling Scratch into shape over the last couple of years” (thanks Eben), tells us what’s new:


Along with the new Raspbian release we are including the latest Scratch system update. It might have seemed a bit quiet on the Scratch front since March, but lots has happened here in the rainforests of Vancouver Island. There are two primary changes you will notice:

  • a significant increase in speed
  • the addition of a built-in GPIO server.

Speedier Scratch

One of the big projects last year was to modernize the Scratch code to run in a current Squeak Smalltalk system rather than the very old original version; this improved performance a fair bit all on its own, since the newer Squeak benefited from a lot of work over the years. [The Scratch world is created using Squeak, a dialect of the Smalltalk programming language, and Squeak itself runs on a software emulation of a computer system called a virtual machine -Ed.] It also built us a Scratch that could run on the very latest Squeak virtual machines that have dynamic code translation, generating machine code at run-time. Along with work to improve the Squeak code that implements Scratch, we then had a noticeably faster system.

A major project this year has been building such a virtual machine for the Pi; until now, only x86 machines have been able to run this version. With a very generous amount of support from Eliot Miranda – the original author of the Cog virtual machine and all-round software deity – the ARM Cog VM has been whirring away since around June.

Benchmarks are always a nastily slippery subject, but we feel that Squeak performance is typically between 3 and 10 times faster, obviously depending on what exactly one is measuring. Things will get even faster in the future as we iron out wrinkles in the code generation, and soon we hope to start benefiting from another project that does code optimization on the fly; early hints suggest at least a doubling of performance. Since Scratch uses a lot of graphics and UI code it doesn’t always speed up so much; but we already did a lot of graphics speed improvements for prior releases.

Our favourite “scary big demo” is Andrew Oliver’s implementation of Pac-Man. The original release of Scratch on the Raspberry Pi Model B could manage almost one frame per second, at best. The same Model B with the latest Scratch system can manage about 12-15 frames per second, and on a Raspberry Pi 2 we can get a bit over 30, making a very playable Pac-Man.


The new GPIO server for Pi Scratch is a first pass at a new and hopefully simpler way for users to connect Scratch to the Raspberry Pi’s GPIO pins or to add-on boards plugged into them. It is modelled on the mesh/network server and uses the same internal API so that either or both can be used at any time – indeed, you can have both working and use a Pi as a sort of GPIO server or data source. We have not introduced any new blocks at this point.
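
The mesh/network server that the GPIO server is modelled on speaks Scratch 1.4's remote sensor protocol: a TCP connection on port 42001 carrying messages framed as a 4-byte big-endian length followed by the payload. A minimal Python sketch of that framing (the `gpio17on` broadcast name follows the convention used by Scratch GPIO schemes and is an assumption here, not something introduced by this release):

```python
import socket
import struct

def frame(message: str) -> bytes:
    """Frame a message for Scratch's remote sensor protocol:
    a 4-byte big-endian length header followed by the payload."""
    payload = message.encode("utf-8")
    return struct.pack(">I", len(payload)) + payload

def send_broadcast(name: str, host: str = "127.0.0.1", port: int = 42001) -> None:
    """Send a broadcast (e.g. a hypothetical 'gpio17on') to a running Scratch."""
    with socket.create_connection((host, port)) as sock:
        sock.sendall(frame('broadcast "%s"' % name))
```

With Scratch running and the server enabled, `send_broadcast("gpio17on")` would deliver the same broadcast a Scratch script could react to; the framing function alone can be checked without a running Scratch.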

The server also provides access to the Pi camera, the IP address, and the date and time, and supports more complex functionality. For example, the following scripts provide (along with a suitably configured breadboard) the ability to turn LEDs on and off according to a button, to take a photo with a countdown provided by a progressively brightening LED, and ways to check the time etc.

Examples of using Scratch to control the camera module as well as LEDs and sensors connected to the Raspberry Pi’s GPIO pins

Add-On Hardware

We can also plug in Pi add-on cards such as the Sense HAT, Pibrella, Explorer Hat, PiFace, PiLite and Ryanteck motor boards.

Each card has its own set of commands layered on top of the basic GPIO facilities described above.

Demo project scripts

In the Scratch Examples directory (found via the File–>Open dialogue and the Examples shortcut) you will find a Sensors and Motors directory; several new GPIO scripts are included, including the one above.


Closing notes from Clive

We’re really pleased that GPIO is now built into the Pi version of Scratch. It means that users can access the GPIO pins “out of the box,” and so get into physical computing that much more easily. We’ve also introduced the GPIO pin numbering system (also known as BCM numbering) for consistency across our resources, and having our own version of GPIO support gives us finer control over functionality and support for add-on boards in the future.

All of our resources using Scratch will use this version from now on, and existing resources will be rewritten. Tim’s reference guide details all of the commands and functionality, and there will be a simplified beginner’s tutorial along this week.

Last of all, there’s no way I can end this post without taking the opportunity to thank our community who have supported (and continue to support) GPIO in Scratch on the Pi. In particular, a big thanks to Simon Walters, aka @cymplecy, for all of his work on Scratch GPIO over the last few years.

The post A new version of Scratch for Raspberry Pi: now with added GPIO appeared first on Raspberry Pi.

TorrentFreak: New York Judge Puts Brakes on Copyright Troll Subpoenas

This post was syndicated from: TorrentFreak and was written by: Andy. Original post: at TorrentFreak

For the past seven or eight years alleged file-sharers in the United States have found themselves at the mercy of so-called copyright trolls and right at the very forefront are those from the adult movie industry.

By a country mile, adult video outfit Malibu Media (X-Art) is the most litigious after filing over 4,500 cases in less than 4 years, but news coming out of New York should give this notorious troll pause for thought.

Events began in June when Malibu filed suit in the Eastern District of New York against a so-called John Doe defendant known only by his Verizon IP address. The porn outfit claimed that the individual was responsible for 18 counts of copyright infringement between February and May 2015.

In early August the defendant received a letter from Verizon informing him that a subpoena had been received which required the ISP to identify the individual using the IP address on May 23, 2015. This prompted the defendant to fight back.

“Since Defendant’s IP addresses were assigned dynamically by the ISP, even if Defendant was identified as the subscriber assigned the IP address at 03:31:54 on May 23, 2015, it doesn’t mean that Defendant is the same subscriber who was assigned the IP address at the other seventeen occasions,” the defendant’s motion to quash reads.

“If Defendant’s identifying information is given to Plaintiff, Plaintiff, as part of their business model, will seek settlements of thousands of dollars claiming Defendant’s responsibility for eighteen downloads of copyright protected works under the threat of litigation and public exposure with no serious intention of naming Defendant.”

Case specifics aside, the motion also contains broad allegations about Malibu Media’s entire business model, beginning with the manner in which it collects evidence on alleged infringers using BitTorrent networks.

Citing a University of Washington study which famously demonstrated a printer receiving a DMCA notice for copyright infringement, the motion concludes that the techniques employed by Malibu for tracking down infringers are simply not up to the job.

“The research concludes that the common approach for identifying infringing users in the popular BitTorrent file sharing network is not conclusive,” the motion notes.

“Even if Plaintiff could definitively trace the BitTorrent activity in question to the IP-registrant, Malibu conspicuously fails to present any evidence that John Doe either uploaded, downloaded, or even possessed a complete copyrighted video file.”

While detection is rightfully put under the spotlight, the filing places greater emphasis on the apparent extortion-like practices demonstrated by copyright trolls such as Malibu Media.

Citing the earlier words of Judge Harold Baer, the motion notes that “troll” cases not only risk the public embarrassment of a misidentified defendant, but also create the likelihood that he or she will be “coerced into an unjust settlement with the plaintiff to prevent the dissemination of publicity surrounding unfounded allegations.”

The motion continues by describing Malibu as an aggressive litigant which deliberately tries to embarrass and shame defendants with the aim of receiving cash payments.

“[Malibu] seeks quick, out-of-court settlements which, because they are hidden, raise serious questions about misuse of court procedure. Judges regularly complain about Malibu,” the motion reads.

“Malibu’s strategy and its business models are to extort, harass, and embarrass defendants to persuade defendants to pay settlements with plaintiffs instead of paying for legal assistance while attempting to keep their anonymity and defending against allegations which can greatly damage their reputations.”

Following receipt of the motion, Judge Steven I. Locke yesterday handed down his order, and it represents a potentially serious setback for Malibu.

“Because the arguments advanced in the Doe Defendant’s Motion to Quash raise serious questions as to whether good cause exists in these actions to permit the expedited pre-answer discovery provided for in the Court’s September 4, 2015 Order, the relief and directives provided for in that Order are stayed pending resolution of the Doe Defendant’s Motion to Quash,” Judge Locke writes.

If putting the brakes on one discovery subpoena wasn’t enough, the Judge’s order lists 19 other cases that are now the subject of an indefinite stay. However, as highlighted by FightCopyrightTrolls, the actual exposure is much greater, with a total of 88 subpoenas in the Eastern District now placed on hold.

As a result, ISPs are now under strict orders not to hand over the real identities of their subscribers until the Court gives the instruction following a ruling by Judge Locke. In the meantime, Malibu has until October 27 to respond to the Verizon user’s motion.

Source: TorrentFreak, for the latest info on copyright, file-sharing, torrent sites and ANONYMOUS VPN services.

AWS Official Blog: Amazon RDS Update – MariaDB is Now Available

This post was syndicated from: AWS Official Blog and was written by: Jeff Barr. Original post: at AWS Official Blog

We launched the Amazon Relational Database Service (RDS) almost six years ago, in October of 2009. The initial launch gave you the power to launch a MySQL database instance from the command line. From that starting point we have added a multitude of features, along with support for the SQL Server, Oracle Database, PostgreSQL, and Amazon Aurora databases. We have made RDS available in every AWS region, and on a very wide range of database instance types. You can now run RDS in a geographic location that is well-suited to the needs of your user base, on hardware that is equally well-suited to the needs of your application.

Hello, MariaDB
Today we are adding support for the popular MariaDB database, beginning with version 10.0.17. This engine was forked from MySQL in 2009, and has developed at a rapid clip ever since, adding support for two storage engines (XtraDB and Aria) and other leading-edge features. Based on discussions with potential customers, some of the most attractive features include parallel replication and thread pooling.

As is the case with all of the databases supported by RDS, you can launch MariaDB from the Console, AWS Command Line Interface (CLI), AWS Tools for Windows PowerShell, via the RDS API, or from a CloudFormation template.

I started out with the CLI and launched my database instance like this:

$ rds-create-db-instance jeff-mariadb-1 \
  --engine mariadb \
  --db-instance-class db.r3.xlarge \
  --db-subnet-group-name dbsub \
  --allocated-storage 100 \
  --publicly-accessible false \
  --master-username root --master-user-password PASSWORD

Let’s break this down, option by option:

  • Line 1 runs the rds-create-db-instance command and specifies the name (jeff-mariadb-1) that I have chosen for my instance.
  • Line 2 indicates that I want to run the MariaDB engine, and line 3 says that I want to run it on a db.r3.xlarge instance type.
  • Line 4 points to the database subnet group that I have chosen for the database instance. This group lists the network subnets within my VPC (Virtual Private Cloud) that are suitable for my instance.
  • Line 5 requests 100 gigabytes of storage, and line 6 specifies that I don’t want the database instance to have a publicly accessible IP address.
  • Finally, line 7 provides the name and credentials for the master user of the database.
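
The instance name in line 1 also has to satisfy RDS's identifier rules: 1 to 63 letters, digits, or hyphens, starting with a letter, with no trailing hyphen and no two consecutive hyphens. A minimal pre-flight check, as a sketch (the function name is ours, not part of any AWS tooling):

```python
import re

# Identifier rules RDS enforces on DB instance names, per AWS docs:
# 1-63 letters, digits, or hyphens; must start with a letter;
# no trailing hyphen; no two consecutive hyphens.
_IDENTIFIER_RE = re.compile(r"^[A-Za-z][A-Za-z0-9-]{0,62}$")

def is_valid_db_instance_identifier(name: str) -> bool:
    if not _IDENTIFIER_RE.match(name):
        return False
    if name.endswith("-") or "--" in name:
        return False
    return True
```

Checking the name locally before calling `rds-create-db-instance` saves a round trip when the API would reject it anyway.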

The command displays the following information to confirm my launch:

DBINSTANCE  jeff-mariadb-1  db.r3.xlarge  mariadb  100  root  creating  1  ****  db-QAYNWOIDPPH6EYEN6RD7GTLJW4  n  10.0.17  general-public-license  n  standard  n
      VPCSECGROUP  sg-ca2071af  active
SUBNETGROUP  dbsub  DB Subnet for Testing  Complete  vpc-7fd2791a
      SUBNET  subnet-b8243890  us-east-1e  Active
      SUBNET  subnet-90af64e7  us-east-1b  Active
      SUBNET  subnet-b3af64c4  us-east-1b  Active
      PARAMGRP  default.mariadb10.0  in-sync
      OPTIONGROUP  default:mariadb-10-0  in-sync

The RDS CLI includes a full set of powerful, high-level commands, all documented here. For example, I can create read replicas (rds-create-db-instance-read-replicas) and take snapshot backups (rds-create-db-snapshot) in minutes.

Here’s how I would launch the same instance using the AWS Management Console:

Get Started Today
You can launch RDS database instances running MariaDB today in all AWS regions. Supported database instance types include M3 (standard), R3 (memory optimized), and T2 (standard).


AWS Official Blog: AWS Import/Export Snowball – Transfer 1 Petabyte Per Week Using Amazon-Owned Storage Appliances

This post was syndicated from: AWS Official Blog and was written by: Jeff Barr. Original post: at AWS Official Blog

Even though high speed Internet connections (T3 or better) are available in many parts of the world, transferring terabytes or petabytes of data from an existing data center to the cloud remains challenging. Many of our customers find that the data migration aspect of an all-in move to the cloud presents some surprising issues. In many cases, these customers are planning to decommission their existing data centers after they move their apps and their data; in such a situation, upgrading their last-generation networking gear and boosting connection speeds makes little or no sense.

We launched the first-generation AWS Import/Export service way back in 2009. As I wrote at the time, “Hard drives are getting bigger more rapidly than internet connections are getting faster.” I believe that remains the case today. In fact, the rapid rise in Big Data applications, the emergence of global sensor networks, and the “keep it all just in case we can extract more value later” mindset have made the situation even more dire.

The original AWS Import/Export model was built around devices that you had to specify, purchase, maintain, format, package, ship, and track. While many AWS customers have used (and continue to use) this model, some challenges remain. For example, it does not make sense for you to buy multiple expensive devices as part of a one-time migration to AWS. In addition to data encryption requirements and device durability issues, creating the requisite manifest files for each device and each shipment adds additional overhead and leaves room for human error.

New Data Transfer Model with Amazon-Owned Appliances
After gaining significant experience with the original model, we are ready to unveil a new one, formally known as AWS Import/Export Snowball. Built around appliances that we own and maintain, the new model is faster, cleaner, simpler, more efficient, and more secure. You don’t have to buy storage devices or upgrade your network.

Snowball is designed for customers that need to move lots of data (generally 10 terabytes or more) to AWS on a one-time or recurring basis. You simply request one or more appliances from the AWS Management Console and wait a few days for them to be delivered to your site. If you want to import a lot of data, you can order multiple Snowball appliances and run them in parallel.

The new Snowball appliance is purpose-built for efficient data storage and transfer. It is rugged enough to withstand a 6 G jolt, and (at 50 lbs) light enough for one person to carry. It is entirely self-contained, with 110 Volt power and a 10 Gb network connection on the back and an E Ink display/control panel on the front. It is weather-resistant and serves as its own shipping container; it can go from your mail room to your data center and back again with no packing or unpacking hassle to slow things down. In addition to being physically rugged and tamper-resistant, AWS Snowball detects tampering attempts. Here’s what it looks like:

Once you receive a Snowball, you plug it in, connect it to your network, configure the IP address (you can use your own or the device can fetch one from your network using DHCP), and install the AWS Snowball client. Then you return to the Console to download the job manifest and a 25-character unlock code. With all of that info in hand you start the appliance with one command:

$ snowball start -i DEVICE_IP -m PATH_TO_MANIFEST -u UNLOCK_CODE

At this point you are ready to copy data to the Snowball. The data will be 256-bit encrypted on the host and stored on the appliance in encrypted form. The appliance can be hosted on a private subnet with limited network access.

From there you simply copy up to 50 terabytes of data to the Snowball and disconnect it (a shipping label will automatically appear on the E Ink display), and ship it back to us for ingestion. We’ll decrypt the data and copy it to the S3 bucket(s) that you specified when you made your request. Then we’ll sanitize the appliance in accordance with National Institute of Standards and Technology Special Publication 800-88 (Guidelines for Media Sanitization).

At each step along the way, notifications are sent to an Amazon Simple Notification Service (SNS) topic and email address that you specify. You can use the SNS notifications to integrate the data import process into your own data migration workflow system.

Creating an Import Job
Let’s step through the process of creating an AWS Snowball import job from the AWS Management Console. I create a job by entering my name and address (or choosing an existing one if I have done this before):

Then I give the job a name (mine is import-photos), and select a destination (an AWS region and one or more S3 buckets):

Next, I set up my security (an IAM role and a KMS key to encrypt the data):

I’m almost ready! Now I choose the notification options. I can create a new SNS topic and create an email subscription to it, or I can use an existing topic. I can also choose the status changes that are of interest to me:

After I review and confirm my choices, the job becomes active:

The next step (which I didn’t have time for in the rush to re:Invent) would be to receive the appliance, install it and copy my data over, and ship it back.

In the Works
We are launching AWS Import/Export Snowball with import functionality so that you can move data to the cloud. We are also aware of many interesting use cases that involve moving data the other way, including large-scale data distribution, and plan to address them in the future.

We are also working on other enhancements including continuous, GPS-powered chain-of-custody tracking.

Pricing and Availability
There is a usage charge of $200 per job, plus shipping charges that are based on your destination and the selected shipment method. As part of this charge, you have up to 10 days (starting the day after delivery) to copy your data to the appliance and ship it out. Extra days are $15 each.

You can import data to the US Standard and US West (Oregon) regions, with more on the way.


AWS Official Blog: New – AWS WAF

This post was syndicated from: AWS Official Blog and was written by: Jeff Barr. Original post: at AWS Official Blog

Have you ever taken the time to watch the access and error logs from your web server scroll past? In addition to legitimate well-formed requests from users and spiders, you will probably see all sorts of unseemly and downright scary requests far too often. For example, I checked the logs for one of my servers and found that someone or something was looking for popular packages that are often installed at well-known locations (I have changed the source IP address for illustrative purposes):

If any of those probes had succeeded, the attacker could then try a couple of avenues to gain access to my server. They could run through a list of common (or default) user names and passwords, or they could attempt to exploit a known system, language, or application vulnerability (perhaps powered by SQL injection or cross-site request forgery) as the next step.

Like it or not, these illegitimate requests are going to be flowing in 24×7. Even if you keep your servers well-patched and do what you can to keep the attack surface as small as possible, there’s always room to add an additional layer of protection.

In order to help you to do this, we are launching AWS WAF today. As you will see when you read this post, AWS WAF will allow you to protect your AWS-powered web applications from application-layer attacks such as those I described above.

You can set it up and start protecting your applications in minutes. You simply create one or more web Access Control Lists (web ACLs), each containing rules (sets of conditions defining acceptable or unacceptable requests or IP addresses) and actions to take when a rule is satisfied. Then you attach the web ACL to your application’s Amazon CloudFront distribution.

From that point forward, incoming HTTP and HTTPS requests that arrive via the distribution will be checked against each rule in the associated web ACL. The conditions within the rules can be positive (allow certain requests or IP addresses) or negative (block certain requests or IP addresses).

I can use the rules and the conditions in many different ways. For example, I could create a rule that would block all access from the IP address shown above. If I were getting similar requests from many different IP addresses, I could choose to block on one or more strings in the URI such as “/typo3/” or “/xampp/.” I could also choose to create rules that would allow access to the actual functioning URIs within my application, and block all others. I can also create rules that guard against various forms of SQL injection.

AWS WAF Concepts
Let’s talk about conditions, rules, web ACLs, and actions. I’ll illustrate some of my points with screen shots of the AWS WAF console.

Conditions inspect incoming requests. They can look at the request URI, the query string, a specific HTTP header, or the HTTP method (GET, PUT, and so forth):

Because attackers often attempt to camouflage their requests in devious ways, conditions can also include transformations that are performed on the request before the content is inspected:

Conditions can also look at the incoming IP address, and can match a /8, /16, or /24 range. They can also use a /32 to match a single IP address:
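
As a sketch of how such prefix matches behave (this is Python's standard `ipaddress` module, not the WAF API; note that WAF restricts you to the four prefix lengths named above):

```python
import ipaddress

def ip_matches(addr: str, cidr: str) -> bool:
    """True if addr falls inside the given CIDR range."""
    return ipaddress.ip_address(addr) in ipaddress.ip_network(cidr)

# A /32 matches exactly one address; /24, /16 and /8 match
# progressively larger ranges (256, 65,536 and 16,777,216 addresses).
```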

Rules reference one or more conditions, all of which must be satisfied in order to make the rule active. For example, one rule could reference an IP-based condition and a request-based condition in order to block access to certain content. Each rule also generates Amazon CloudWatch metrics.

Actions are part of rules, and denote the action to be taken when a request matches all of the conditions in a rule. An action can allow a request to go through, block it, or simply count the number of times that the rule matches (this is good for evaluating potential new rules before using a more decisive action).

Web ACLs in turn reference one or more rules, along with an action for each rule. Each incoming request for a distribution is evaluated against successive rules until a request matches all of the conditions in a rule, at which point the action associated with that rule is taken. If no rule matches, the default action (block or allow the request) is taken.
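
The evaluation order can be sketched in Python. The types and names here are ours, not the WAF API's, and the treatment of COUNT (tally but keep evaluating) reflects its described use for trialling rules:

```python
from dataclasses import dataclass
from typing import Callable, Dict, List

# Hypothetical types for illustration; not part of the AWS WAF API.
@dataclass
class Rule:
    name: str
    conditions: List[Callable[[Dict], bool]]  # all must match
    action: str  # "ALLOW", "BLOCK", or "COUNT"

def evaluate_web_acl(request: Dict, rules: List[Rule], default_action: str) -> str:
    """Evaluate rules in order; the first fully-matching ALLOW or BLOCK
    rule wins. COUNT rules are tallied elsewhere but don't stop evaluation."""
    for rule in rules:
        if all(cond(request) for cond in rule.conditions):
            if rule.action == "COUNT":
                continue  # count-only rules let the request flow onward
            return rule.action
    return default_action

# Example: block a single (documentation-range, made-up) IP address.
bad_company = Rule("BadCompany", [lambda r: r["ip"] == "198.51.100.7"], "BLOCK")
```

With a default action of ALLOW, this mirrors the BadIP/BadCompany setup below: the one listed address is blocked and everything else goes through.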

WAF in Action
Let’s go through the process of creating a condition, a rule, and a web ACL. I’ll do this through the console, but you can also use the AWS Command Line Interface (CLI), AWS Tools for Windows PowerShell, or the Web Application Firewall API.

The console leads me through the steps. I start by creating a web ACL called ProtectSite:

Then I create conditions that will allow or block content:

I can create an IP match condition called BadIP to block the (fake) IP address from my server log:

And then I use the condition to create a rule called BadCompany:

And now I select the rule and choose the action (a single web ACL can use multiple rules; my example uses just one):

As you can see above, the default action is to allow requests through. The net effect is that this combination (condition + rule + web ACL) will block incoming traffic from that IP address and allow everything else to go through.

The next step is to associate my new web ACL with a CloudFront distribution (we’ll add more services over time):

A single web ACL can be associated with any number of distributions. However, each distribution can be associated with one web ACL.

The web ACL will take effect within minutes. I can inspect its CloudWatch metrics to understand how often each rule and each web ACL is activated.

API Power
Everything that I have shown you above can also be accessed from your own code:

  • CreateIPSet, CreateByteMatchSet, and CreateSqlInjectionMatchSet are used to create conditions.
  • CreateRule is used to create rules from conditions.
  • CreateWebACL is used to create web ACLs from rules.
  • UpdateWebACL is used to associate a web ACL with a CloudFront distribution.

There are also functions to list, update, and delete conditions, rules, and web ACLs.

The GetSampledRequests function gives you access to up to 5,000 of the requests that were evaluated against a particular rule within a time period that you specify. The response includes detailed information about each of the requests, including the action taken (ALLOW, BLOCK, or COUNT).

Available Now
AWS WAF is available today anywhere CloudFront is available. Pricing is $5 per web ACL per month, $1 per rule per month, and $0.60 per million HTTP requests.

— Jeff;

TorrentFreak: Anti-Piracy Activities Get VPNs Banned at Torrent Sites

This post was syndicated from: TorrentFreak and was written by: Andy. Original post: at TorrentFreak

For the privacy-conscious Internet user, VPNs and similar services are now considered must-have tools. In addition to providing much-needed security, VPNs also allow users to side-step geo-blocking technology, a useful ability for today’s global web-trotter.

While VPNs are often associated with file-sharing activity, it may be of interest to learn that they are also used by groups looking to crack down on the practice. Just like file-sharers, anti-piracy groups appear to prefer to work undetected, as events during the past few days have shown.

Earlier this week while doing our usual sweep of the world’s leading torrent sites, it became evident that at least two popular portals were refusing to load. Finding no complaints that the sites were down, we were able to access them via publicly accessible proxies and as a result thought no more of it.

A day later, however, comments began to surface on Twitter that some VPN users were having problems accessing certain torrent sites. Sure enough, after we disabled our VPN the affected sites sprang into action. Shortly after, reader emails to TF revealed that other users were experiencing similar problems.

Eager to learn more, TF opened up a dialog with one of the affected sites and in return for granting complete anonymity, its operator agreed to tell us what had been happening.

“The IP range you mentioned was used for massive DMCA crawling and thus it’s been blocked,” the admin told us.

Intrigued, we asked the operator more questions. How do DMCA crawlers manifest themselves? Are they easy to spot and deal with?

“If you see 15,000 requests from the same IP address after integrity checks on the IP’s browsers for the day, you can safely assume it’s a [DMCA] bot,” the admin said.
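
That heuristic amounts to a per-IP request counter with a daily threshold. A hedged sketch (the function name and default threshold are ours, taken from the quote above; real sites reportedly layer further checks on top):

```python
from collections import Counter
from typing import Iterable, Set

def flag_probable_bots(request_ips: Iterable[str], threshold: int = 15_000) -> Set[str]:
    """Return IPs whose request count for the day exceeds the threshold."""
    counts = Counter(request_ips)
    return {ip for ip, n in counts.items() if n > threshold}
```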

From the above we now know that anti-piracy bots use commercial VPN services, but do they also access the sites by other means?

“They mostly use rented dedicated servers. But sometimes I’ve even caught them using Hola VPN,” our source adds. Interestingly, it appears that the anti-piracy activities were directed through the IP addresses of Hola users without them knowing.

Once spotted, the IP addresses used by the aggressive bots are banned. The site admin wouldn’t tell TF how his system works. However, he did disclose that sizable computing resources are deployed to deal with the issue and that the intelligence gathered proves extremely useful.

Of course, just because an IP address is banned at a torrent site it doesn’t necessarily follow that a similar anti-DMCA system is being deployed. IP addresses are often excluded after being linked to users uploading spam, fakes and malware. Additionally, users can share IP addresses, particularly in the case of VPNs. Nevertheless, the banning of DMCA notice-senders is a documented phenomenon.

Earlier this month Jonathan Bailey at Plagiarism Today revealed his frustrations when attempting to get so-called “revenge porn” removed from various sites.

“Once you file your copyright or other notice of abuse, the host, rather than remove the material at question, simply blocks you, the submitter, from accessing the site,” Bailey explains.

“This is most commonly done by blocking your IP address. This means, when you come back to check and see if the site’s content is down, it appears that the content, and maybe the entire site, is offline. However, in reality, the rest of the world can view the content, it’s just you that can’t see it,” he notes.

Perhaps unsurprisingly, Bailey advises a simple way of regaining access to a site using these methods.

“I keep subscriptions with multiple VPN providers that give access to over a hundred potential IP addresses that I can use to get around such tactics,” he reveals.

The good news for both file-sharers and anti-piracy groups alike is that IP address blocks like these don’t last forever. The site we spoke with said that blocks on the VPN range we inquired about had already been removed. Still, the cat and mouse game is likely to continue.

Source: TorrentFreak, for the latest info on copyright, file-sharing, torrent sites and ANONYMOUS VPN services.

SANS Internet Storm Center, InfoCON: green: BizCN gate actor update, (Fri, Oct 2nd)

This post was syndicated from: SANS Internet Storm Center, InfoCON: green and was written by: SANS Internet Storm Center, InfoCON: green. Original post: at SANS Internet Storm Center, InfoCON: green


The actor using gates registered through BizCN (always with privacy protection) continues using the Nuclear exploit kit (EK) to deliver malware.

My previous diary on this actor documented the actor's switch from Fiesta EK to Nuclear EK in early July 2015 [1]. Since then, the BizCN gate actor briefly switched to Neutrino EK; however, it appears to be using Nuclear EK again.

Our thanks to Paul, who submitted a pcap of traffic associated with this actor to the ISC.


Paul's pcap showed us a Google search leading to the compromised website, as seen in the image below.
Shown above: A pcap of the traffic filtered by HTTP request.

No payload was found in this EK traffic, so the Windows host viewing the compromised website didn't get infected. The Windows host from this pcap was running IE 11, and URLs for the EK traffic stop after the last two HTTP POST requests. These URL patterns are what I've seen every time IE 11 crashes after getting hit with Nuclear EK.

A key thing to remember with the BizCN gate actor is the referer line from the landing page. It will always show the compromised website, and it won't indicate the BizCN-registered gate that gets you there. Paul's pcap didn't include traffic to the BizCN-registered gate, but I found a reference to it in the traffic.
Shown above: Flow chart for EK traffic associated with the BizCN gate actor.

How did I find the gate in this example? First, I checked the referer on the HTTP GET request to the EK landing page.
Shown above: TCP stream for the HTTP GET request to the Nuclear EK landing page.
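The same referer check is easy to script when you have a lot of requests to review. Below is a minimal sketch in Python; the request text is a made-up sample for illustration, not data from Paul's pcap.

```python
# Pull the Referer header out of a raw HTTP request.
# SAMPLE_REQUEST is a hypothetical stand-in, not real EK traffic.
SAMPLE_REQUEST = (
    "GET /search?q=abc123 HTTP/1.1\r\n"
    "Host: ek-landing.example\r\n"
    "Referer: http://compromised-site.example/index.html\r\n"
    "User-Agent: Mozilla/5.0\r\n"
    "\r\n"
)

def get_referer(raw_request):
    """Return the Referer header value, or None if absent."""
    for line in raw_request.split("\r\n"):
        name, sep, value = line.partition(":")
        if sep and name.strip().lower() == "referer":
            return value.strip()
    return None

print(get_referer(SAMPLE_REQUEST))
# -> http://compromised-site.example/index.html
```

In real traffic you would feed this the TCP stream text exported from Wireshark rather than a hard-coded string.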

That referer page should have had injected script pointing to the BizCN gate URL, so I exported that page from the pcap.
Shown above: The object I exported from the pcap.

I searched the HTML text and found the injected script pointing to the gate.
Shown above: Malicious script in a page from the compromised website pointing to a URL on the BizCN-registered gate domain.
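Hunting for that kind of injected script in an exported page can also be automated. A quick sketch follows; the HTML is a hypothetical sample, and the tag/attribute combination is just the common shape of these injections.

```python
import re

# Made-up sample of a page with script injected before the closing tags.
SAMPLE_HTML = """
<html><body>
<p>Legitimate page content.</p>
<script src="http://bizcn-gate.example/counter.js"></script>
</body></html>
"""

# Match script or iframe tags and capture the src URL.
INJECT_RE = re.compile(r'<(?:script|iframe)[^>]*\bsrc=["\']([^"\']+)["\']', re.I)

for url in INJECT_RE.findall(SAMPLE_HTML):
    print(url)
# -> http://bizcn-gate.example/counter.js
```

This only flags candidate URLs; you would still eyeball each hit, since plenty of legitimate pages load remote script the same way.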

I ran a whois lookup on the BizCN-registered gate domain, and pinging it showed its IP address.
Shown above: Whois information on the BizCN-registered gate domain.

This completes my flow chart for the BizCN gate actor. The domains associated from Paul's pcap were:

  • – Compromised website
  • – BizCN-registered gate
  • – Nuclear EK

Final words

Recently, I've had a hard time getting a full chain of infection traffic from the BizCN gate actor. Paul's pcap also had this issue, because there was no payload. However, the BizCN gate actor is still active, and many of the compromised websites I've noted in previous diaries [1, 4] are still compromised.

We continue to track the BizCN gate actor, and we'll let you know if we discover any significant changes.

Brad Duncan
Security Researcher at Rackspace
Blog: – Twitter: @malware_traffic



(c) SANS Internet Storm Center. Creative Commons Attribution-Noncommercial 3.0 United States License.

TorrentFreak: Comcast User Hit With 112 DMCA Notices in 48 Hours

This post was syndicated from: TorrentFreak and was written by: Andy. Original post: at TorrentFreak

Every day, DMCA-style notices are sent to regular Internet users who use BitTorrent to share copyrighted material. These notices are delivered to users’ Internet service providers who pass them on in the hope that customers correct their behavior.

The most well-known notice system in operation in the United States is the so-called “six strikes” scheme, in which the leading recording labels and movie studios send educational warning notices to presumed pirates. Not surprisingly, six-strikes refers to users receiving a maximum of six notices. However, content providers outside the scheme are not bound by its rules – sometimes to the extreme.

According to a lawsuit filed this week in the United States District Court for the Western District of Pennsylvania (pdf), one unlucky Comcast user was subjected not only to a barrage of copyright notices on an unprecedented scale, but during one of the narrowest time frames yet.

The complaint comes from Rotten Records who state that the account holder behind a single Comcast IP address used BitTorrent to share the discography of Dog Fashion Disco, a long-since defunct metal band previously known as Hug the Retard.

“Defendant distributed all of the pieces of the Infringing Files allowing others to assemble them into a playable audio file,” Rotten Records’ attorney Flynn Wirkus Young explains.

Considering Rotten Records have been working with Rightscorp on other cases this year, it will come as no surprise that the anti-piracy outfit is also involved in this one. And boy have they been busy tracking this particular user. In a single 48 hour period, Rightscorp hammered the Comcast subscriber with more than two DMCA notices every hour over a single torrent.

“Rightscorp sent Defendant 112 notices via Defendant’s ISP Comcast from June 15, 2015 to June 17, 2015 demanding that Defendant stop illegally distributing Plaintiff’s work,” the lawsuit reads.
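The headline numbers are easy to sanity-check with a quick back-of-the-envelope calculation, using the figures from the complaint:

```python
# Figures from the lawsuit: 112 notices between June 15 and June 17, 2015.
notices = 112
hours = 48  # the roughly two-day window

rate_per_hour = notices / hours
print(round(rate_per_hour, 2))
# -> 2.33, i.e. more than two notices every hour
```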

“Defendant ignored each and every notice and continued to illegally distribute Plaintiff’s work.”


While it’s clear that the John Doe behind the IP address shouldn’t have been sharing the works in question (if he indeed was the culprit and not someone else), the suggestion to the Court that he or she systematically ignored 112 demands to stop infringing copyright is stretching the bounds of reasonableness, to say the least.

In fact, Court documents state that after infringement began sometime on June 15, the latest infringement took place on June 16 at 11:49am, meaning that the defendant may well have acted on Rightscorp’s notices within 24 hours – and that’s presuming that Comcast passed them on right away, or even at all.

Either way, the attempt here is to portray the defendant as someone who had zero respect for Rotten Records’ rights, even after being warned by Rightscorp more than a hundred and ten times. Trouble is, all of those notices covered an alleged infringing period of less than 36 hours – hardly a reasonable time in which to react.

Still, it’s unlikely the Court will be particularly interested and will probably issue an order for Comcast to hand over their subscriber’s identity so he or she can be targeted by Rotten Records for a cash settlement.

Rotten has targeted Comcast users on several earlier occasions, despite being able to sue the subscribers of any service provider. Notably, while Comcast does indeed pass on Rightscorp’s DMCA takedown notices, it strips the cash settlement demand from the bottom.

One has to wonder whether Rightscorp and its client are trying to send the ISP a message with these lawsuits.


SANS Internet Storm Center, InfoCON: green: Recent trends in Nuclear Exploit Kit activity, (Thu, Oct 1st)

This post was syndicated from: SANS Internet Storm Center, InfoCON: green and was written by: SANS Internet Storm Center, InfoCON: green. Original post: at SANS Internet Storm Center, InfoCON: green


Since mid-September 2015, I've generated a great deal of Nuclear exploit kit (EK) traffic after checking compromised websites. This summer, I usually found Angler EK. Now I'm seeing more Nuclear.

Nuclear EK has also been sending dual payloads. I documented dual payloads at least three times last year [1, 2, 3], but I hadn't noticed it again from Nuclear EK until recently. This time, one of the payloads appears to be ransomware. I saw Filecoder on 2015-09-18 [4] and TeslaCrypt 2.0 on 2015-09-29 [5]. In both cases, ransomware was a component of the dual payloads from Nuclear EK.

To be clear, Nuclear EK isn't always sending two payloads, but I've noticed a dual payload trend with this recent increase in Nuclear EK traffic.

Furthermore, on Wednesday 2015-09-30, the URL pattern for Nuclear EK's landing page changed. With that in mind, let's take a look at what's happening with Nuclear.

URL patterns

The images below show some examples of URL patterns for Nuclear EK from recent traffic.

Shown above: Some URLs from Nuclear EK on 2015-09-15.
Shown above: Some URLs from Nuclear EK on 2015-09-16.
Shown above: Some URLs from Nuclear EK on 2015-09-18.
Shown above: Some URLs from Nuclear EK on 2015-09-22.
Shown above: Some URLs from Nuclear EK on 2015-09-29.

In the above images, the initial HTTP GET request always starts with /search?q= for the landing page URL.

Shown above: Some URLs from Nuclear EK on 2015-09-30.

The initial HTTP GET request now starts with /url?sa= instead of /search?q= for the landing page URL. I saw the same thing from three different examples of Nuclear EK on 2015-09-30. Windows hosts from these examples all had the same configuration.

Nuclear EK examples from 2015-09-30

I had some trouble infecting a Windows 7 host running IE 11. The browser always crashed before the EK payload was sent. So I tried three different configurations to generate traffic for this diary. The first run had a Windows 7 host running IE 10. The second run had a Windows 7 host running IE 8. The third run had a Windows 7 host running IE 11. All hosts were also running Flash player. I found a compromised website with an injected iframe leading to Nuclear EK. The screenshot below shows an example of the malicious script at the bottom of the page. It's right before the closing body and HTML tags.
Shown above: Malicious script at the bottom of a page from the compromised website.

The first run used IE 10 with Flash player. The host was infected, and the ransomware changed the desktop background.
Shown above: Desktop background from the infected host.

Decrypt instructions were left as a text file on the desktop. The authors behind this ransomware provided two email addresses for further decryption instructions.
Shown above: Decryption instructions from the ransomware.

Playing around with the pcap in Wireshark, I got a decent representation of the traffic. Below, you'll see the compromised website, Nuclear EK, and some of the post-infection traffic. TLS activity on ports 443 and 9001 with random characters for the server names is Tor traffic. Several other attempted TCP connections can be found in the pcap, but none of those were successful, and they're not shown below.
Shown above: Some of the infection traffic from the pcap in Wireshark (from a Windows host using IE 10 and Flash player).

Below are alerts on the infection traffic when I used tcpreplay on Security Onion with the EmergingThreats (ET) and ET Pro rulesets.

Shown above: Alerts from the traffic using Sguil in Security Onion.

For the second run, I infected a different Windows host running IE 8 and Flash player. This generated Nuclear EK traffic from the same IP address and a slightly different domain name. However, I didn't see the same traffic that triggered alerts during the first run.
Shown above: Nuclear EK traffic using IE 8 and Flash player.

For the third run, I used a Windows host with IE 11 and Flash player. As mentioned earlier, the browser would crash before the EK sent the payload, so this host didn't get infected with malware. I tried it once with Flash player and once without, both times running an unpatched version of IE 11. Each time, the browser crashed. Nuclear EK was still using the same IP address, but the domain names were different. Within a 4-minute timespan in the pcap, you'll find traffic from both attempts.
Shown above: Nuclear EK traffic using IE 11 and Flash player (tried twice, but the browser crashed each time).
Shown above: Nuclear EK sends the first malware payload.
Shown above: Nuclear EK sends the second malware payload.

Other than the landing page URL pattern and dual payload, Nuclear EK looks remarkably similar to the last time we reviewed it in August 2015 [6].
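If you want to flag both generations of these landing-page URLs in your own proxy logs, the two prefixes described above are simple to match. A minimal sketch follows; only the /search?q= and /url?sa= prefixes come from the observed traffic, and the example paths are hypothetical.

```python
import re

# Old-style (through 2015-09-29) and new-style (2015-09-30) Nuclear EK
# landing page URLs start with one of these two prefixes.
LANDING_RE = re.compile(r"^/(?:search\?q=|url\?sa=)")

# Hypothetical example paths for illustration.
paths = [
    "/search?q=dmFyaWFudA",   # old pattern -> matches
    "/url?sa=t",              # new pattern -> matches
    "/index.html",            # unrelated   -> no match
]

for p in paths:
    print(p, bool(LANDING_RE.match(p)))
```

Note that both prefixes mimic legitimate Google search and redirect URLs, so a match on the path alone is only a starting point, not proof of EK traffic.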

Preliminary malware analysis

The first and second runs generated a full infection chain and post-infection traffic. The malware payload was the same during the first and second run. The first run had additional malware on the infected host. The third run using IE 11 didn't generate any malware payload.

Nuclear EK malware payload 1 of 2: