Posts tagged ‘ip address’

TorrentFreak: No Copyright Trolls, Your Evidence Isn’t Flawless

This post was syndicated from: TorrentFreak and was written by: Andy. Original post: at TorrentFreak

Earlier this month TF broke the news that Sky Broadband in the UK were sending letters out to some of their customers, warning them they're about to be accused of downloading and sharing movies without permission.

When they arrive the threats will come from Golden Eye International (GEIL), the company behind the ‘Ben Dover’ porn brand that has already targeted hundreds of people with allegations of Internet piracy.

“It’s likely that Golden Eye International will contact you directly and may ask you to pay them compensation,” the ISP warned.

In fact, GEIL will definitely ask for money, largely based on their insistence that the evidence they hold is absolutely irrefutable. It’s the same tune they’ve been singing for years now, without ever venturing to back up their claims in court. Sadly, other legal professionals are happy to sing along with them.

“Don’t do anything illegal and you won’t get a letter,” intellectual property specialist Iain Connor told The Guardian last week.

“Golden Eye will only have gotten details of people that they can prove downloaded content and so whether the ‘invoice’ demand is reasonable will depend on how much they downloaded that infringed copyright material.”

Quite aside from the fact that none of these cases are about downloading copyrighted material (they’re about uploading), one has to presume that Connor isn’t personally familiar with details of these cases otherwise he would’ve declared that interest. Secondly, he is absolutely wrong.

Companies like GEIL sometimes get it wrong, the anti-piracy trackers they use get things wrong, and ISPs get things wrong too. An IP address is NOT a person but innocent parties have to go to huge lengths to prove that. IT worker Harri Salminen did just that and this week finally managed to publicly clear his family’s name.

It started two years ago when his wife – the Internet account payer – was accused by an anti-piracy outfit (unconnected to GEIL) of pirating on a massive scale.

“They claimed that thousands of music tracks had been illegally distributed from our Internet connection,” Salminen told local media.

“The letter came addressed to my wife and she became very anxious, since she didn’t understand what this was all about. According to the letter, the matter was going to the court and we were advised to make contact to agree on compensation.”

Sound familiar? Read on.

The Salminen family has two children, so the couple took time to make sure the kids hadn't uploaded anything illegally. Harri Salminen, who works in the IT industry, established that they had not, so began to conduct his own investigation. Faced with similar "irrefutable" IP address-based evidence to that presented in all of these 'troll' cases, what could've possibly gone wrong?

Attached to the letter of claim was a page from Salminen’s ISP which detailed the name of his wife, the IP address from where the piracy took place, and a date of infringement. This kind of attachment is common in such cases and allows trolls to imply that their evidence is somehow endorsed by their target’s ISP.

Then Salminen struck gold. On the day that the alleged infringement took place the IT worker was operating from home while logged into his company’s computer systems. Knowing that his company keeps logs of the IP addresses accessing the system, Salminen knew he could prove which IP address he’d been using on the day.

“I looked into my employer’s system logs for IP-addresses over several weeks and I was able to show that our home connection’s IP address at the time of the alleged act was quite different from the IP address mentioned in the letter,” he explained.

So what caused Salminen’s household to be wrongly identified? Well, showing how things can go wrong at any point, it appears that there was some kind of screw-up between the anti-piracy company and Salminen’s ISP.

Instead of identifying the people who had the IP address at the time of the actual offense, the ISP looked up the people using the address when the inquiry came in.

“The person under employment of the ISP inputs a date, time, and IP-address to the system based on a court order,” anti-piracy group TTVK now explains.

“And of course, when a human is doing something, there is always a possibility for an error. But even one error is too much.”

Salminen says that it was only his expertise in IT that saved him from having to battle it out in court, even though his family was entirely innocent. Sadly, those about to be accused by Golden Eye probably won't have access to similar resources.

“We have only written to those account holders for whom we have evidence of copyright infringement,” Golden Eye’s Julian Becker said confidently last week.

Trouble is, Golden Eye only has an IP address and the name of the account holder. They have no evidence that person is the actual infringer, even presuming there hasn’t been a screw-up like the one detailed above.

“We have written to account holders accusing them of copyright infringement, even though it’s entirely possible they personally did nothing wrong and shouldn’t have to pay us a penny,” is perhaps what he should’ve said.

But that's not only far too frank, it's also a sure-fire way of punching a huge hole in GEIL's bottom line. And for a troll like GEIL, that would be a disaster.

Source: TorrentFreak, for the latest info on copyright, file-sharing, torrent sites and ANONYMOUS VPN services.

[$] Supporting secure DNS in glibc

This post was syndicated from: LWN.net and was written by: corbet. Original post: at LWN.net

One of the many weak links in Internet security is the domain name system (DNS); it is subject to attacks that, among other things, can mislead applications regarding the IP address of a system they wish to connect to. That, in turn, can cause connections to go to the wrong place, facilitating man-in-the-middle attacks and more. The DNSSEC protocol extensions are meant to address this threat by setting up a cryptographically secure chain of trust for DNS information. When DNSSEC is set up properly, applications should be able to trust the results of domain lookups. As the discussion over an attempt to better integrate DNSSEC into the GNU C Library shows, though, ensuring that DNS lookups are safe is still not straightforward.
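
One quick way to see DNSSEC at work from the client side is to ask a validating resolver for a signed name and check whether it sets the AD (authenticated data) flag on the answer. The sketch below uses the dig utility; the domain and resolver address are only placeholders, and the flag means something only if the resolver you query actually performs DNSSEC validation:

# Request DNSSEC records along with the answer from a validating resolver
dig +dnssec example.com A @8.8.8.8

# In the response header, look for the "ad" flag, e.g.:
#   ;; flags: qr rd ra ad; QUERY: 1, ANSWER: 2, ...
# "ad" indicates the resolver validated the chain of trust for this answer

The glibc discussion referenced above is, in part, about whether applications calling the resolver library can rely on that sort of validation having been done for them.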

Darknet - The Darkside: SpiderFoot – Open Source Intelligence Automation Tool (OSINT)

This post was syndicated from: Darknet - The Darkside and was written by: Darknet. Original post: at Darknet - The Darkside

SpiderFoot is an open source intelligence automation tool. Its goal is to automate the process of gathering intelligence about a given target, which may be an IP address, domain name, hostname or network subnet. SpiderFoot can be used offensively, i.e. as part of a black-box penetration test to gather information about the target or defensively…
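
For anyone who wants to try it, SpiderFoot is typically run from a checkout of its source, which then serves a local web interface for configuring and launching scans. The repository URL and the listen flag below reflect the upstream project as best I recall rather than anything stated in the excerpt, so treat them as assumptions:

# Grab the source and start the built-in web UI on localhost (listen address is assumed)
git clone https://github.com/smicallef/spiderfoot.git
cd spiderfoot
python ./sf.py -l 127.0.0.1:5001
# Then browse to http://127.0.0.1:5001 and define a scan target (IP, domain, hostname or subnet)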

Read the full post at

Linux How-Tos and Linux Tutorials: An Introduction to Uncomplicated Firewall (UFW)

This post was syndicated from: Linux How-Tos and Linux Tutorials and was written by: Jack Wallen. Original post: at Linux How-Tos and Linux Tutorials

One of the many heralded aspects of Linux is its security. From the desktop to the server, you’ll find every tool you need to keep those machines locked down as tightly as possible. For the longest time, the security of Linux was in the hands of iptables (which works with the underlying netfilter system). Although incredibly powerful, iptables is complicated—especially for newer users. To truly make the most out of that system, it may take weeks or months to get up to speed. Thankfully, a much simpler front end for iptables is ready to help get your system as secure as you need.

That front end is Uncomplicated Firewall (UFW). UFW provides a much more user-friendly framework for managing netfilter and a command-line interface for working with the firewall. On top of that, if you’d rather not deal with the command line, UFW has a few GUI tools that make working with the system incredibly simple.

But, before we find out what GUI tools are available, it’s best to understand how the UFW command-line system works.

Working with the Command

The fundamental UFW command structure looks like this:

ufw [--dry-run] [options] [rule syntax]

Notice the --dry-run option. Including this argument informs ufw not to make any changes; instead, the output shows you what the results of your changes would have been.

As for working with the command, UFW can be used in two ways:

  • Simple syntax: Specifies a port and (optionally) the protocol

  • Full syntax: Specifies source, destination, port, and (optionally) the protocol

Let’s look at the simple syntax first. Say, for example, you want to allow traffic on port 22 (SSH). To do this with UFW, you’d run a command like:

sudo ufw allow 22

NOTE: I added sudo to the command because you must have admin privileges to run ufw. If you’re using a distribution that doesn’t take advantage of sudo, you’d first have to su to root and then run the same command (minus sudo).

Conversely, say you want to prevent traffic on port 22. To do this, the command would look like:

sudo ufw deny 22

Should you want to add a protocol to this, the command would look like:

sudo ufw deny 22/tcp

What happens if you don’t happen to know the port number for a service? The developers have taken that into consideration. UFW will run against /etc/services in such a way that you can define a rule using a service instead of a port. To allow SSH traffic, that command would look like:

sudo ufw allow ssh

Pretty simple, right? You can also add protocols to the above command, in the same way you did when defining a rule via port number.

sudo ufw allow ssh/tcp

Of the available arguments, the ones you’ll use the most with the ufw command are:

  • allow

  • deny

  • reject

  • limit

  • status: displays if the firewall is active or inactive

  • show: displays the current running rules on your firewall

  • reset: disables and resets the firewall to default

  • reload: reloads the current running firewall

  • disable: disables the firewall
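
One subcommand the list above doesn't mention is enable; the rules do nothing until the firewall is actually switched on. A minimal first session might look like this (the exact output varies by version):

sudo ufw enable           # switch the firewall on and have it start at boot
sudo ufw status verbose   # confirm it is active and list the current rules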

If you want to use a fuller syntax, you can then begin to define a source and a destination for a rule. Say, for example, you have an IP address you’ve discovered has been attempting to get into your machine (for whatever reason) through port 25 (SMTP). Let’s say that address is 192.168.1.151 (an internal address, used here purely as an example) and your machine’s address is 192.168.1.118. To block that address from gaining access on port 25, you could create the rule like so:

sudo ufw deny from 192.168.1.151 to 192.168.1.118 port 25

Let’s look at the limit option. If you have any reason to suspect that someone might be attempting a denial of service attack on your machine via port 80, you can limit connections to that port with UFW, like so:

sudo ufw limit 80/tcp

By default, the connection will be blocked after six attempts in a 30-second period.

You might also have a need to allow outgoing traffic on a certain port but deny incoming traffic on the same port. To do this, you would use the directional argument like so. To allow outgoing traffic on port 25 (SMTP), issue the command:

sudo ufw allow out on eth0 to any port 25 proto tcp

You could then add the next rule to block incoming traffic on the same interface and port:

sudo ufw deny in on eth0 from any to any port 25 proto tcp
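
If you later want to check or undo rules like these, ufw can list them with index numbers and delete them either by number or by restating the original rule. A short sketch (the index shown is simply whatever position the rule happens to occupy on your system):

sudo ufw status numbered   # each rule is shown with an index such as [ 2 ]
sudo ufw delete 2          # remove the rule at that index (you will be asked to confirm)
sudo ufw delete allow out on eth0 to any port 25 proto tcp   # or remove a rule by restating it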

GUI Tools

Now that you understand the basics of UFW, it’s time to find out what GUI tools are available to make using this handy firewall even easier. There aren’t many which are actively maintained, and many distributions default to one in particular. That GUI is…

Gufw is one of the most popular GUI front ends for UFW. It’s available for Ubuntu, Linux Mint, openSUSE, Arch Linux, and Salix OS. With Gufw, you can easily create profiles to match different uses for a machine (home, public, office, etc.). As you might expect from such a tool, Gufw offers an interface that would make any level of user feel right at home (see Figure 1 above).

Some distributions, such as Ubuntu, don’t install Gufw by default. You will, however, find it in the Ubuntu Software Center. Search for gufw and install with a single click.
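
If you prefer to install it from a terminal instead, the package ships in the standard Ubuntu repositories under the name gufw (the package name is an assumption based on the project name, so check your distribution's repositories):

sudo apt-get update
sudo apt-get install gufw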

If your distribution happens to be Elementary OS Freya, there’s a new front end for UFW built into the settings tool that allows you to very easily add rules to UFW (Figure 2). You can learn more about the Elementary OS Freya UFW front end from my post “Get to Know the Elementary OS Freya Firewall Tool.”

You might also come across another front end called ufw-frontends. That particular GUI hasn’t been in active development for some time now, so it’s best to avoid that particular app.

For most users, there is no need to spend the time learning iptables—not when there’s a much more user-friendly front end (that also happens to include solid GUI tools) that’ll get the job done. Of course, if you’re looking for business- or enterprise-class firewalling, you should certainly spend the time and effort to gain a full understanding of iptables.

Which is right for your needs, UFW or iptables?

AWS Official Blog: EC2 VPC VPN Update – NAT Traversal, Additional Encryption Options, and More

This post was syndicated from: AWS Official Blog and was written by: Jeff Barr. Original post: at AWS Official Blog

You can use Amazon Virtual Private Cloud to create a logically isolated section of the AWS Cloud. Within the VPC, you can define your desired IP address range, create subnets, configure route tables, and so forth. You can also use a network gateway to connect the VPC to your existing on-premises network using a hardware Virtual Private Network (VPN) connection. The VPN running in the AWS Cloud (also known as a VPN gateway or VGW) communicates with a customer gateway (CGW) on your network or in your data center (read about Your Customer Gateway to learn more).

Today we are adding several new features to the VPN. Here’s a summary:

  • NAT Traversal
  • Additional Encryption Options
  • Reusable IP addresses for the CGW

In order to take advantage of any of these new features, you will need to create a new VGW and then create new VPN tunnels with the desired attributes.

NAT Traversal
Network Address Translation (NAT) maps one range of IP addresses to another. Let’s say that you have created a VPC and assigned it a desired IP address range, and then split that range into a couple of subnets. Then you launch some EC2 instances within the VPC, each bound to one of those subnets. You can now use Network Address Translation to map the VPC’s IP address range to a different range when seen from your existing network. This mapping process takes place across the VPN connection and is known as NAT-T, or NAT Traversal. NAT-T allows you to create IP connections that originate on-premises and connect to an EC2 instance (or vice versa) using addresses that have been translated.

You can set this up when you create a new VPN connection in the AWS Management Console. You will need to open up UDP port 4500 in your firewall in order to make use of NAT-T.
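
What "open up UDP port 4500" looks like depends entirely on the firewall in front of your customer gateway. As a rough sketch, on a Linux box filtering with iptables the relevant rules might be the following (IKE itself uses UDP 500, so both ports are shown; adapt the interfaces and policies to your own setup):

# Allow IKE negotiation and IPsec NAT-Traversal traffic to reach the customer gateway
sudo iptables -A INPUT -p udp --dport 500 -j ACCEPT
sudo iptables -A INPUT -p udp --dport 4500 -j ACCEPT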

Additional Encryption Options
You can now make use of several new encryption options.

When the VPC’s hardware VPN is in the process of establishing a connection with your on-premises VPN, it proposes several different encryption options, each with a different strength. You can now configure the VPN on the VPC to propose AES256 as an alternative to the older and weaker AES128. If you decide to make use of this new option, you should configure your device so that it no longer accepts a proposal to use AES128 encryption.

The two endpoints participate in a Diffie-Hellman key exchange in order to establish a shared secret. The Diffie-Hellman groups used in the exchange determine the strength of the resulting keys. You can now configure the use of a wider range of groups:

  • Phase 1 can now use DH groups 2, 14-18, 22, 23, and 24.
  • Phase 2 can now use DH groups 1, 2, 5, 14-18, 22, 23, and 24.

Packets that flow across the VPN connection are verified using a hash algorithm. A matching hash gives a very high-quality indicator that the packet has not been maliciously modified along the way. You can now configure the VPN on the VPC to use the SHA-2 hashing algorithm with a 256 bit digest (also known as SHA-256). Again, you should configure your device to disallow the use of the weaker hash algorithms.

Reusable CGW IP Addresses
You no longer need to specify a unique IP address for each customer gateway connection that you create. Instead, you can now reuse an existing IP address. Many VPC users have been asking for this feature and I expect it to be well-used.

To learn more, read our FAQ and the VPC Network Administrator Guide.


SANS Internet Storm Center, InfoCON: green: Victim of its own success and (ab)used by malwares, (Wed, Oct 28th)

This post was syndicated from: SANS Internet Storm Center, InfoCON: green and was written by: SANS Internet Storm Center, InfoCON: green. Original post: at SANS Internet Storm Center, InfoCON: green

This morning, I faced an interesting case. We were notified that one of our computers was making potentially malicious HTTP requests to the WIPmania geolocation API. We quickly checked and detected that many hosts were sending requests to this API. It is a website hosted in France which provides geolocalisation services via a text/json/xml API. Usage is quick and simple: you provide an IP address and it returns its two-letter country code. They also offer a paid version with more features.

We investigated deeper and found that one request was indeed performed by a single host using a fake User-Agent, matching the following Emerging Threats signatures:

alert http $HOME_NET any -> $EXTERNAL_NET any (msg:"ET POLICY External IP Lookup Attempt To Wipmania"; content:"Host|3A 20|api.wipmania.com|0d 0a|";)
alert http $HOME_NET any -> $EXTERNAL_NET any (msg:"ET TROJAN Dorkbot GeoIP Lookup to wipmania"; content:"User-Agent|3a| Mozilla/4.0|0d 0a|Host|3a|api.wipmania.com|0d 0a|";)
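
For reference, querying the service is a one-liner. The endpoint below is inferred from the signature names above rather than quoted from the original post, so treat it as an assumption:

# Returns the two-letter country code for the supplied IP address
curl http://api.wipmania.com/8.8.8.8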

I found references to wipmania.com in the following malware families:

  • Dorkbot
  • Ruskill

VT reported 97 occurrences of the domain wipmania.com in malicious files.

Conclusion: if you provide online services and they become popular, be careful not to be (ab)used by malware! It could affect your overall reputation and get you flagged or blocked on blacklists.

Xavier Mertens
ISC Handler – Freelance Security Consultant

(c) SANS Internet Storm Center. Creative Commons Attribution-Noncommercial 3.0 United States License.

SANS Internet Storm Center, InfoCON: green: Typo Squatting Charities for Fake Tech Support Schemes, (Mon, Oct 26th)

This post was syndicated from: SANS Internet Storm Center, InfoCON: green and was written by: SANS Internet Storm Center, InfoCON: green. Original post: at SANS Internet Storm Center, InfoCON: green

Joe wrote this weekend that:

A customer called me yesterday to make me aware of their computer that was compromised by one of those scam websites, that pops up an 800 numbers and tells them to call. Against her knowing better, she STILL called in…. ugh.

The site I wanted to make you aware of was amvets.COM. She wanted to make a donation, but the real website is amvets.ORG.

It is always sad to see how people with good intentions, willing to donate to a deserving cause, are being taken advantage of. So I took a bit of time to investigate this particular case.

First of all: I do NOT recommend you go to the .com version of the site above. I didn’t see anything outright malicious, other than popups advertising the fake tech support service, but you never know what they are going to send next.

The content returned from the page is very variable. Currently, I am getting index pages linking to various veterans related pages. Typically these pages are auto-created using key words people used to get to the page, or keywords entered in the search field on the page. So no surprise that this page knows it is mistaken for a veteran charity.

When it does display the Fake Virus Warning page, then it does so very convincingly:

– the look and feel is adapted to match the user’s OS and browser
– even on mobile devices, like my iPad, the page emulates the browser used

After a couple of visits to the site, it no longer displayed the virus warning to me, even if I changed systems and IPs. So I am not sure if they ran out of ad impressions or if they time them to only show up so often.

According to Farsight Security’s DNS database, 10,000 different hostnames resolve to this one IP address. Most of them look like obvious typo squatting domains:

For example:,,

For some of them, I still get ads for do-nothing-ware like MacKeeper (looking at the page from a Mac).

Johannes B. Ullrich, Ph.D.

(c) SANS Internet Storm Center. Creative Commons Attribution-Noncommercial 3.0 United States License.

Krebs on Security: IBM Runs World’s Worst Spam-Hosting ISP?

This post was syndicated from: Krebs on Security and was written by: BrianKrebs. Original post: at Krebs on Security

This author has long sought to shame Web hosting and Internet service providers who fail to take the necessary steps to keep spammers, scammers and other online ne’er-do-wells off their networks. Typically, the companies on the receiving end of this criticism are little-known Internet firms. But according to anti-spam activists, the title of the Internet’s most spam-friendly provider recently has passed to networks managed by IBM — one of the more recognizable and trusted names in technology and security.

In March 2010, not long after I began working on my new book Spam Nation: The Inside Story of Organized Cybercrime, From Global Epidemic to Your Front Door, I ran a piece titled Naming and Shaming Bad ISPs. That story drew on data from 10 different groups that track spam and malware activity by ISP. At the time, a cloud computing firm called Softlayer was listed prominently in six out of 10 of those rankings.

The top spam-friendly ISPs and hosting providers in early 2010.

Softlayer gradually cleaned up its act, and began responding more quickly to abuse reports filed by anti-spammers and security researchers. In July 2013, the company was acquired by IBM. More recently, however, the trouble at networks managed by Softlayer has returned. Last month, anti-spam group Spamhaus listed Softlayer as the “#1 spam hosting ISP,” putting Softlayer at the very top of its World’s Worst Spam Support ISPs index. Spamhaus said the number of abuse issues at the ISP has “rapidly reached rarely previously seen numbers.”

Contacted by KrebsOnSecurity, Softlayer for several weeks did not respond to requests for comment. After reaching out to IBM earlier this week, I received the following statement from Softlayer Communications Director Andre Fuochi:

“With the growth of Softlayer’s global footprint, as expected with any fast growing service, spammers have targeted our platform. We are aggressively working with authorities, groups like The Spamhaus Project, and IBM Security analysts to shut down this recent, isolated spike. Just in the past month we’ve shut down 95 percent of the spam accounts identified by Spamhaus, and continue to actively eliminate this activity.”

But according to Spamhaus, Softlayer still has more than 600 abuse issues unaddressed. Spamhaus says it is true that Softlayer has been responding to its abuse complaints, but that the scammers and spammers are moving much faster.

In a blog post published earlier this month, Spamhaus explained that the bulk of the trouble appears to have come from cybercriminal customers in Brazil who have been rapidly registering large numbers of domain names daily tied to fake but plausible-sounding companies or organizations.

“This Brazilian malware gang was so active that many listed [Softlayer Internet] ranges were being reassigned to the same spam gang immediately after re-entering the pool of available [Internet] addresses,” Spamhaus explained. “After observing the same [Internet] address ranges being reassigned repeatedly to the same spammers, Spamhaus contacted the SoftLayer abuse department and told them that [Spamhaus listings] for these specific issues would not be removed until SoftLayer was able to get control of the overall problem with these spammers.”

Spamhaus said it doesn’t know why Softlayer is having this problem, but it has a few guesses.

“We believe that SoftLayer, perhaps in an attempt to extend their business in the rapidly-growing Brazilian market, deliberately relaxed their customer vetting procedures,” the organization posited. “Cybercriminals from Brazil took advantage of SoftLayer’s extensive resources and lax vetting procedures. In particular, the malware operation exploited loopholes in Softlayer’s automated provisioning procedures to obtain an impressive number of IP address ranges, which they then used to send spam and host malware sites. Unfortunately, what happened to Softlayer can easily happen to any ISP that makes certain unwise choices.”

IBM/Softlayer did not comment on those allegations. But as I show in my book, Spam Nation, spammers and malware purveyors continuously seek out and patronize ISPs and hosting providers which erect the fewest barriers to rapidly setting up massive numbers of scammy sites simultaneously.

It is true that if you make it harder for spammers to operate, they don’t just go away; rather, they move someplace else where it’s easier to ply their trade. But there is little reason that these Internet bottom feeders should have made a home for themselves at a company owned by IBM, which bills itself as the fastest growing vendor in the worldwide security software market. Physician: Heal Thyself!

TorrentFreak: Popcorn Time & YTS Global Outages Cause Concern

This post was syndicated from: TorrentFreak and was written by: Andy. Original post: at TorrentFreak

This week a new service called BrowserPopcorn debuted and then shut down. Just to get things absolutely clear, that has nothing whatsoever to do with today’s situation. With that out of the way, let’s move on.

As reported earlier this week, the main fork of Popcorn Time is undergoing transition. Disputes over money and power have hit the project at its core. As a result key members have now left the project while others insist they will continue.

Today’s Popcorn problems

With that turmoil already concerning the community, this morning there are fresh issues making waves. Several hours ago the project’s official domain went completely offline.

The domain itself is owned by David Lemarier, aka ‘phnz’, one of the devs who left earlier this week. With that in mind it’s certainly possible (or even likely) that the outage is connected to his departure. While there are some reports that the domain is now slowly coming back online, the site itself is still accessible by direct IP address.

However, the outage is affecting more than just the Popcorn Time domain. For reasons that are not yet clear the project’s official Facebook and Twitter accounts (both named ‘popcorntimetv’) have also gone down, with the latter now complaining that it simply does not exist.


But the problems go deeper still. An externally-hosted status page for the project reveals additional problems.

Not only are there issues with nameservers, the website and the forum, some of the APIs on which the entire PopcornTime application relies are also non-functioning, causing the whole system to fall over.


The reason the movie API is down is the issue currently causing most concern, since it’s likely to keep Popcorn Time offline even when the official domain returns. Here’s why.

The YTS connection

Most if not all versions of Popcorn Time rely on the YTS website for their movie libraries, and if YTS has problems, Popcorn Time has problems too. At the time of writing, and for the past several hours, YTS has been completely offline too, meaning that Popcorn Time is pretty much broken.

TorrentFreak contacted the admin of YTS but did not immediately receive a response to our request for comment. However, when accessed directly, a public facing server operated by YTS does return the message – “Be Back Soon :)” – but it’s unclear if that relates directly to the current situation or is a generic downtime message.


At least for now it appears that the problems facing Popcorn Time and its team are separate from the problems being experienced by YTS. However, due to Popcorn Time’s reliance on YTS for content, the system falls down when the latter doesn’t function as it should.

More on the situation as we have it.

Source: TorrentFreak, for the latest info on copyright, file-sharing, torrent sites and ANONYMOUS VPN services.

Linux How-Tos and Linux Tutorials: Tips and Tricks for Using the Two Best E-Readers for Linux

This post was syndicated from: Linux How-Tos and Linux Tutorials and was written by: Jack Wallen. Original post: at Linux How-Tos and Linux Tutorials

It is 2015 and your home library that once resided on bookshelves and milk crates now exists on either a handheld reader, your laptop, or your desktop. That, of course, is not to say the end of physical books is nigh. But when you want the most convenient way to either read or keep your library with you, e-books are the way to go. This is especially true for larger, heavier textbooks.

The good news for Linux users is that there are plenty of outstanding apps to make reading e-books quite easy. And, because these tools happen to be offered on the Linux platform, they offer some really cool features to make your e-book life even better.

Let’s take a look at two of the best e-book readers available for Linux, as well as a trick or two for each.


Calibre is the mack daddy of e-book tools on Linux. Not only does it offer an outstanding e-reader, it also helps you to convert your .html files into e-book format (it’ll convert .odt and other files…just not as well). The Calibre reader does a great job of displaying your e-books (from a vast array of formats). Calibre also offers a number of really handy features, such as:

  • Bookmarks

  • Reference mode (when in this mode, if you hover your cursor over a paragraph, it will show you the reference number in the form of Chapter.ParagraphNumber)

  • Table of Contents (view the book TOC as a sidebar)

  • Full-screen mode

  • Themes

  • And so much more

There is, however, one feature that places Calibre heads above all other competition: the Calibre server. With this, you can run a server and access your books from any device. Let’s set this up and access the Calibre server from another machine. I will assume you’ve already installed Calibre (it can be found in your distribution’s standard repositories). The steps are simple:

  1. Open Calibre and click on the Preferences button

  2. Click Sharing over the net (under the Sharing section)

  3. Configure the necessary port (if applicable)

  4. Give the server a username (passwords can cause some devices to not work with the server)

  5. Click Start Server

  6. Click Test Server

When you click Test Server, your default web browser should pop up to display the web-based Calibre Library interface (Figure 1 above).

With the server running, locate the IP address of the machine hosting your Calibre server. You can now access that server from any device on your network by pointing a browser at that IP address and the server’s port (8080 by default, e.g. http://192.168.1.100:8080). From that page, you can open a book by locating what you want to read and then clicking the associated Get button (Figure 2).

Once you click Get, the e-book file will download and you can then open it in your local copy of Calibre (or whatever e-reader you choose).

The one caveat to this is, by starting the server in this way, it will stop the second you close the app. If you want to leave the server running (without the GUI open), you can run it with the following command:

calibre-server --daemonize

This command will allow you to run the server without having to open Calibre. You can then set it to run as a startup service. How you do this will depend on what startup service your distribution uses (systemd or init).
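
On a systemd-based distribution, one way to do that is with a small unit file. Everything below (the user name, the port, and the library path) is a placeholder you would adapt to your own setup; treat it as a sketch rather than an official unit:

# Create a service that runs the content server at boot
sudo tee /etc/systemd/system/calibre-server.service > /dev/null <<'EOF'
[Unit]
Description=Calibre content server
After=network.target

[Service]
User=jack
ExecStart=/usr/bin/calibre-server --port 8080 "/home/jack/Calibre Library"
Restart=on-failure

[Install]
WantedBy=multi-user.target
EOF

sudo systemctl daemon-reload
sudo systemctl enable --now calibre-server.service

Note that when systemd manages the process you don’t need --daemonize; the service manager keeps the server running in the background for you.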

There are even Android apps that let you access your Calibre library from anywhere (if you happen to save your Calibre Library in a cloud location). One particular app, Calibre Cloud, does a great job of accessing your Calibre Library from the likes of Google Drive, Dropbox, etc. Both a free version and a Pro version ($1.99 USD) are available. The Pro version also contains a built-in reader. If you opt for the free version, you’ll need to also install an e-book reader to use for viewing.


Lucidor doesn’t offer all the power and features that come along with Calibre, but it is one of the best straight-up e-readers you’ll find for Linux. This tool is strictly a reader. Even without all that power under the hood, Lucidor delivers an outstanding e-reader experience. One of the coolest features of Lucidor is its tabbed interface, which allows you to open not only multiple books, but also multiple books from multiple sources.

You won’t find Lucidor in your standard repository. In fact, you’ll have to download the file for installation on your distribution. Let’s install Lucidor on Ubuntu. Here’s how:

  1. Download the .deb file

  2. Open a terminal window

  3. Issue the command sudo dpkg -i lucidor_XXX_all.deb (where XXX is the release number)

  4. Hit Enter

  5. Type your sudo password

  6. Hit Enter

  7. Allow the installation to complete

You should now see the Lucidor launcher in your Dash (or menu, depending upon your desktop). Run the app and you will be greeted by the minimal welcome screen (Figure 3).

The interface is quite simple to use. You click on the Links drop-down and select what you want to open. Let’s open up the Personal bookcase in a tab and then add a book. Click Links > Bookcase and the new tab will open, defaulting to the Personal Bookcase. Now click File > Open File. Locate the .epub file you want to add and then click Open. When the file opens in the Lucidor tab, you will be prompted as to whether you want to add the file to the current Bookcase (Figure 4). Click Add and the book will now be available in your personal bookcase.

At this point, you can click the Tab button, click Open Bookcase, and start the process over to open a new book.

You can also add annotations to books for easy note-taking. Here’s how:

  1. Open the book in question

  2. Locate a section of the book you want to annotate

  3. Click the Contents drop-down

  4. Select Annotations

  5. Highlight the portion of the text you want to annotate

  6. Click Create Note

  7. Enter your note for the annotation (Figure 5)

  8. Select Highlight (if you want the selected text to be highlighted)

  9. Select Mark Annotations to place a mark on the text where the annotation starts

  10. When you’re finished, click Add

There are several other features you can enjoy with either Calibre or Lucidor. Most importantly, however, is that you can simply read your books. Other e-readers are available for the Linux platform, but once you’ve used either of these, you won’t settle for anything less.

TorrentFreak: ISP Will Disconnect Pirates Following Hollywood Pressure

This post was syndicated from: TorrentFreak and was written by: Andy. Original post: at TorrentFreak

Around the world Internet service providers are being placed under pressure to do something about the millions of customers who download and share copyright infringing content.

Major ISPs in the United States, for example, are currently participating in the so-called “six strikes” scheme operated by the major studios and recording labels. Under this project users are given up to six warnings before a range of measures are applied to their accounts.

The measures – which copyright holders insist need to be punitive – are problematic for ISPs. Customers keep them in business, and punishing those customers too hard isn’t good for that business, especially since disappointed subscribers can simply sign up with a new provider.

However, according to a report coming out of Italy, a local ISP is so concerned about threats coming out of Hollywood it is now threatening to discard customers who download and share copyright infringing material.

According to news outlet Repubblica, one of its readers who subscribes to an ISP in the north of the country received correspondence after being caught downloading movies and TV shows by various Hollywood studios.

“We have received numerous reports of misuse from multiple rights holders (Viacom, Paramount, MGM and other distribution companies) by direct means or from their legal offices,” the mail begins.

“They contain precise details about the material downloaded, download times, the IP address used, the ownership rights of the person making the report, contact details and valid certificate digital signatures plus confirmation of the authenticity of the sender and message content.”

Of course, these kinds of warning notices are nothing new and millions are sent every month. However, the ISP warns that if it does not take measures to stop the infringements, it could be considered “an accomplice to the offenses”. On that basis alone it must take action.

“We ask you to kindly give us feedback within 48 hours. In the case of failure [to respond] or incorrect feedback we’ll be forced to proceed with the cancellation of the service,” the correspondence concludes.

In a comment, copyright lawyer Guido Scorza said that while the ISP’s threats are unusual, they are permissible.

“The conduct of the provider is curious but not illegal. In the event that the ISP cancels the user agreement you could outline, however, an abuse of rights,” Scorza explains.

“Withdrawing unilaterally due to Internet problems is one thing, doing it because a third party claims that a customer is responsible for piracy is something else. Only a court can decide if a user is a pirate or not,” the lawyer says.

Repubblica hasn’t yet named the ISP in question but did speak to its CEO, who told the publication that it receives hundreds of complaints from rightsholders and their law firms. The notice being sent out to subscribers underlines the company’s position.

“What is required now from us is to ensure that the connection is no longer used for illegal downloading activity and to protect the connection in the event that [the customer] is sure he has never downloaded anything,” it reads.

The question of ISP liability is a thorny one, but increasingly the world’s largest entertainment companies are moving towards an insistence that repeat infringers must be dealt with. Clearly some are taking those threats to their logical conclusion.

Source: TorrentFreak, for the latest info on copyright, file-sharing, torrent sites and ANONYMOUS VPN services.

TorrentFreak: 2,800 Cloudflare IP Addresses Blocked By Court Order

This post was syndicated from: TorrentFreak and was written by: Andy. Original post: at TorrentFreak

Otherwise known as the Stop Online Piracy Act, SOPA would have enabled U.S. authorities to have allegedly infringing websites blocked at the DNS level, thereby rendering them inaccessible.

In 2012 the proposals were met with outrage, especially by those in the tech community who feared that meddling with the very systems that keep the Internet running would eventually end in disaster.

The legislation never passed but nonetheless over the past three years many hundreds of websites have been blocked around the world. In fact, in some recent cases in the United States entertainment industry companies have achieved some of the things they claimed to need SOPA for.

So did any of this “break the Internet”? According to a piece published in The Hill this week, absolutely not.

“[Policymakers] should not accept the falsehood that blocking a website or taking other actions to shut down infringing sites equates to an assault on the security and reliability of the Internet as a whole,” wrote Daniel Castro, vice president of the Information Technology and Innovation Foundation.

“Instead, they should recognize that selective targeting of websites dedicated to infringement is an effective strategy to combat piracy, and encourage the recording industry, the film industry and online intermediaries to work in partnership to block, shut down and cut off revenue to these websites wherever possible.”

Mr Castro probably feels the Internet is unaffected by blocking since every website he accesses works just fine. However, new data on blocking activity elsewhere presents an altogether more worrying state of affairs.

In 2013, Russia activated a streamlined mechanism for having sites blocked at the ISP level. As a result many hundreds of sites have been rendered inaccessible to the public and some, such as torrent site RUTracker, are now facing the possibility of being blocked forever. But how has this affected the wider Internet?

Well, according to data obtained by web-blocking watchdog RUBlacklist, those doing the blocking in Russia are using their powers to the full while having little concern for collateral damage. As a result more than a third of all IP addresses on the country’s website blocking list belong not to illegal services, but to US-based CDN company Cloudflare.

RUBlacklist says the numbers are significant. As of October 5, 2015, a total of 8,284 IP addresses were on the national blocklist but a head-shaking 2,831 of them – more than 34% – have been registered to Cloudflare.


The problem is due to how CDNs (Content Delivery Networks) like Cloudflare are set up. Instead of a site’s own IP address facing the world, once Cloudflare is deployed it is the service’s IP addresses that are seen in public.

Sadly, when a complaint is filed at the Moscow City Court, no one really cares whether the IP addresses belong to a legitimate service or not, or whether they stay on the blacklist even after they fall out of use.

“Roskomnadzor simply fills and fills its database of banned IP-addresses, risking blocking dozens if not hundreds of thousands of innocent websites,” RUBlacklist explains.

Of course, the blocking of Cloudflare and its customers is nothing new. Earlier this year innocent websites were blocked in the UK by ISP Sky simply because a Pirate Bay proxy was hosted behind the same IP-address.

But while the Sky problems were probably down to human error, the current over-blocking situation in Russia could have been avoided if concerns had been considered in December 2013.

People knew Cloudflare was at risk then, yet no one appears to have taken the warnings seriously. In April 2014, Roskomnadzor admitted there was a problem but simply warned people not to use the service since Cloudflare didn’t respond to correspondence on the matter.

“CloudFlare representatives refused to cooperate and did not respond to the formal notices of Roskomnadzor,” the watchdog explained.

“In the absence of a reaction from CloudFlare many conscientious Internet resources using the CDN-service will be blocked by ISPs in Russia.”

But as the finger-pointing continues the people who really suffer are regular Internet users themselves. Fears that Joe Public would be caught in the copyright crossfire were just the kind of concerns that came to the forefront and fueled the SOPA protests.

Blocking might not have broken the Internet yet, but already parts of it need fixing. And, despite it all, the pirates continue their business largely as before.

Source: TorrentFreak, for the latest info on copyright, file-sharing, torrent sites and ANONYMOUS VPN services.

Raspberry Pi: A new version of Scratch for Raspberry Pi: now with added GPIO

This post was syndicated from: Raspberry Pi and was written by: Clive Beale. Original post: at Raspberry Pi

There are many excellent things to be found in last week’s release of Raspbian Jessie and we’ve been keeping one of the best ones tucked under our big Raspberry Pi-shaped hat. In the Programming menu on the desktop you’ll find a new version of Scratch, our favourite programming language for beginners.

Breadboard and Scratch on Raspberry Pi

Connect buttons, sensors, cameras, LEDs, goblin sticks and other gubbins to your Pi using Scratch

Tim Rowledge, who has been “vigorously wrangling Scratch into shape over the last couple of years” (thanks Eben), tells us what’s new:


Along with the new Raspbian release we are including the latest Scratch system update. It might have seemed a bit quiet on the Scratch front since March, but lots has happened here in the rainforests of Vancouver Island. There are two primary changes you will notice:

  • a significant increase in speed
  • the addition of a built-in GPIO server.

Speedier Scratch

One of the big projects last year was to modernize the Scratch code to run in a current Squeak Smalltalk system rather than the very old original version; this improved performance a fair bit all on its own, since the newer Squeak benefited from a lot of work over the years. [The Scratch world is created using Squeak, a dialect of the Smalltalk programming language, and Squeak itself runs on a software emulation of a computer system called a virtual machine -Ed.] It also built us a Scratch that could run on the very latest Squeak virtual machines that have dynamic code translation, generating machine code at run-time. Along with work to improve the Squeak code that implements Scratch, we then had a noticeably faster system.

A major project this year has been building such a virtual machine for the Pi; until now, only x86 machines have been able to run this version. With a very generous amount of support from Eliot Miranda – the original author of the Cog virtual machine and all-round software deity – the ARM Cog VM has been whirring away since around June.

Benchmarks are always a nastily slippery subject, but we feel that Squeak performance is typically between 3 and 10 times faster, obviously depending on what exactly one is measuring. Things will get even faster in the future as we iron out wrinkles in the code generation, and soon we hope to start benefiting from another project that does code optimization on the fly; early hints suggest at least a doubling of performance. Since Scratch uses a lot of graphics and UI code it doesn’t always speed up so much; but we already did a lot of graphics speed improvements for prior releases.

Our favourite “scary big demo” is Andrew Oliver’s implementation of Pac-Man. The original release of Scratch on the Raspberry Pi Model B could manage almost one frame per second, at best. The same Model B with the latest Scratch system can manage about 12-15 frames per second, and on a Raspberry Pi 2 we can get a bit over 30, making a very playable Pac-Man.


The new GPIO server for Pi Scratch is a first pass at a new and hopefully simpler way for users to connect Scratch to the Raspberry Pi’s GPIO pins or to add-on boards plugged into them. It is modelled on the mesh/network server and uses the same internal API so that either or both can be used at any time – indeed, you can have both working and use a Pi as a sort of GPIO server or data source. We have not introduced any new blocks at this point.

The server also allows access to the Pi camera, the IP address, and the date and time, and supports more complex functionality. For example, the following scripts provide (along with a suitably configured breadboard) the ability to turn LEDs on and off according to a button, to take a photo with a countdown provided by a progressively brightening LED, and ways to check the time, etc.

Examples of using Scratch to control the camera module as well as LEDs and sensors connected to the Raspberry Pi’s GPIO pins

Add-On Hardware

We can also plug in Pi add-on cards such as the Sense HAT, Pibrella, Explorer Hat, PiFace, PiLite and Ryanteck motor boards.

Each card has its own set of commands layered on top of the basic GPIO facilities described above.

Demo project scripts

In the Scratch Examples directory (found via the File–>Open dialogue and the Examples shortcut) you will find a Sensors and Motors directory; several new GPIO scripts are included, including the one above.


Closing notes from Clive

We’re really pleased that GPIO is now built in to the Pi version of Scratch. It means that users can access the GPIO pins “out of the box,” and so get into physical computing that much more easily. We’ve also introduced the GPIO pin numbering system (also known as BCM numbering) for consistency across our resources, and having our own version of GPIO support gives us finer control over functionality and support for add-on boards in future.

All of our resources using Scratch will use this version from now on, and existing resources will be rewritten. Tim’s reference guide details all of the commands and functionality, and there will be a simplified beginner’s tutorial along this week.

Last of all, there’s no way I can end this post without taking the opportunity to thank our community who have supported (and continue to support) GPIO in Scratch on the Pi. In particular, a big thanks to Simon Walters, aka @cymplecy, for all of his work on Scratch GPIO over the last few years.

The post A new version of Scratch for Raspberry Pi: now with added GPIO appeared first on Raspberry Pi.

TorrentFreak: New York Judge Puts Brakes on Copyright Troll Subpoenas

This post was syndicated from: TorrentFreak and was written by: Andy. Original post: at TorrentFreak

For the past seven or eight years alleged file-sharers in the United States have found themselves at the mercy of so-called copyright trolls and right at the very forefront are those from the adult movie industry.

By a country mile, adult video outfit Malibu Media (X-Art) is the most litigious after filing over 4,500 cases in less than 4 years, but news coming out of New York should give this notorious troll pause for thought.

Events began in June when Malibu filed suit in the Eastern District of New York against a so-called John Doe defendant known only by his Verizon IP address. The porn outfit claimed that the individual was responsible for 18 counts of copyright infringement between February and May 2015.

Early August the defendant received a letter from Verizon informing him that a subpoena had been received which required the ISP to identify the individual using the IP address on May 23, 2015. This caused the defendant to fight back.

“Since Defendant’s IP addresses were assigned dynamically by the ISP, even if Defendant was identified as the subscriber assigned the IP address at 03:31:54 on May 23, 2015, it doesn’t mean that Defendant is the same subscriber who was assigned the IP address at the other seventeen occasions,” the defendant’s motion to quash reads.

“If Defendant’s identifying information is given to Plaintiff, Plaintiff, as part of their business model, will seek settlements of thousands of dollars claiming Defendant’s responsibility for eighteen downloads of copyright protected works under the threat of litigation and public exposure with no serious intention of naming Defendant.”

Case specifics aside, the motion also contains broad allegations about Malibu Media’s entire business model, beginning with the manner in which it collects evidence on alleged infringers using BitTorrent networks.

Citing a University of Washington study which famously demonstrated a printer receiving a DMCA notice for copyright infringement, the motion concludes that the techniques employed by Malibu for tracking down infringers are simply not up to the job.

“The research concludes that the common approach for identifying infringing users in the popular BitTorrent file sharing network is not conclusive,” the motion notes.

“Even if Plaintiff could definitively trace the BitTorrent activity in question to the IP-registrant, Malibu conspicuously fails to present any evidence that John Doe either uploaded, downloaded, or even possessed a complete copyrighted video file.”

While detection is rightfully put under the spotlight, the filing places greater emphasis on the apparent extortion-like practices demonstrated by copyright trolls such as Malibu Media.

Citing the earlier words of Judge Harold Baer, the motion notes that “troll” cases not only risk the public embarrassment of a misidentified defendant, but also create the likelihood that he or she will be “coerced into an unjust settlement with the plaintiff to prevent the dissemination of publicity surrounding unfounded allegations.”

The motion continues by describing Malibu as an aggressive litigant which deliberately tries to embarrass and shame defendants in the aim of receiving cash payments.

“[Malibu] seeks quick, out-of-court settlements which, because they are hidden, raise serious questions about misuse of court procedure. Judges regularly complain about Malibu,” the motion reads.

“Malibu’s strategy and its business models are to extort, harass, and embarrass defendants to persuade defendants to pay settlements with plaintiffs instead of paying for legal assistance while attempting to keep their anonymity and defending against allegations which can greatly damage their reputations.”

Following receipt of the motion, yesterday Judge Steven I. Locke handed down his order and it represents a potentially serious setback for Malibu.

“Because the arguments advanced in the Doe Defendant’s Motion to Quash raise serious questions as to whether good cause exists in these actions to permit the expedited pre-answer discovery provided for in the Court’s September 4, 2015 Order, the relief and directives provided for in that Order are stayed pending resolution of the Doe Defendant’s Motion to Quash,” Judge Locke writes.

If putting the brakes on one discovery subpoena wasn’t enough, the Judge’s order lists 19 other cases that are now the subject of an indefinite stay. However, as highlighted by FightCopyrightTrolls, the actual exposure is much greater, with a total of 88 subpoenas in the Eastern District now placed on hold.

As a result, ISPs are now under strict orders not to hand over the real identities of their subscribers until the Court gives the instruction following a ruling by Judge Locke. In the meantime, Malibu has until October 27 to respond to the Verizon user’s motion.

Source: TorrentFreak, for the latest info on copyright, file-sharing, torrent sites and ANONYMOUS VPN services.

AWS Official Blog: Amazon RDS Update – MariaDB is Now Available

This post was syndicated from: AWS Official Blog and was written by: Jeff Barr. Original post: at AWS Official Blog

We launched the Amazon Relational Database Service (RDS) almost six years ago, in October of 2009. The initial launch gave you the power to launch a MySQL database instance from the command line. From that starting point we have added a multitude of features, along with support for the SQL Server, Oracle Database, PostgreSQL, and Amazon Aurora databases. We have made RDS available in every AWS region, and on a very wide range of database instance types. You can now run RDS in a geographic location that is well-suited to the needs of your user base, on hardware that is equally well-suited to the needs of your application.

Hello, MariaDB
Today we are adding support for the popular MariaDB database, beginning with version 10.0.17. This engine was forked from MySQL in 2009, and has developed at a rapid clip ever since, adding support for two storage engines (XtraDB and Aria) and other leading-edge features. Based on discussions with potential customers, some of the most attractive features include parallel replication and thread pooling.

As is the case with all of the databases supported by RDS, you can launch MariaDB from the Console, AWS Command Line Interface (CLI), AWS Tools for Windows PowerShell, via the RDS API, or from a CloudFormation template.

I started out with the CLI and launched my database instance like this:

$ rds-create-db-instance jeff-mariadb-1 \
  --engine mariadb \
  --db-instance-class db.r3.xlarge \
  --db-subnet-group-name dbsub \
  --allocated-storage 100 \
  --publicly-accessible false \
  --master-username root --master-user-password PASSWORD

Let’s break this down, option by option:

  • Line 1 runs the rds-create-db-instance command and specifies the name (jeff-mariadb-1) that I have chosen for my instance.
  • Line 2 indicates that I want to run the MariaDB engine, and line 3 says that I want to run it on a db.r3.xlarge instance type.
  • Line 4 points to the database subnet group that  I have chosen for the database instance. This group lists the network subnets within my VPC (Virtual Private Cloud) that are suitable for my instance.
  • Line 5 requests 100 gigabytes of storage, and line 6 specifies that I don’t want the database instance to have a publicly accessible IP address.
  • Finally, line 7 provides the name and credentials for the master user of the database.

The command displays the following information to confirm my launch:

DBINSTANCE  jeff-mariadb-1  db.r3.xlarge  mariadb  100  root  creating  1  ****  db-QAYNWOIDPPH6EYEN6RD7GTLJW4  n  10.0.17  general-public-license  n  standard  n
      VPCSECGROUP  sg-ca2071af  active
SUBNETGROUP  dbsub  DB Subnet for Testing  Complete  vpc-7fd2791a
      SUBNET  subnet-b8243890  us-east-1e  Active
      SUBNET  subnet-90af64e7  us-east-1b  Active
      SUBNET  subnet-b3af64c4  us-east-1b  Active
      PARAMGRP  default.mariadb10.0  in-sync
      OPTIONGROUP  default:mariadb-10-0  in-sync

The RDS CLI includes a full set of powerful, high-level commands, all documented here. For example, I can create read replicas (rds-create-db-instance-read-replica) and take snapshot backups (rds-create-db-snapshot) in minutes.
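If you prefer the AWS SDK to the CLI, here is a minimal sketch in Node.js of those same two operations; the replica and snapshot identifiers are hypothetical names chosen for illustration:

// Sketch: create a read replica and a manual snapshot of the MariaDB instance
// launched above. Identifiers other than "jeff-mariadb-1" are hypothetical.
var AWS = require('aws-sdk');
var rds = new AWS.RDS({region: 'us-east-1'});

// Create a read replica of the source instance
rds.createDBInstanceReadReplica({
  DBInstanceIdentifier: 'jeff-mariadb-replica-1',   // hypothetical name
  SourceDBInstanceIdentifier: 'jeff-mariadb-1'
}, function (err, data) {
  if (err) { return console.error(err); }
  console.log('Replica status:', data.DBInstance.DBInstanceStatus);
});

// Take a manual snapshot of the source instance
rds.createDBSnapshot({
  DBSnapshotIdentifier: 'jeff-mariadb-snap-1',      // hypothetical name
  DBInstanceIdentifier: 'jeff-mariadb-1'
}, function (err, data) {
  if (err) { return console.error(err); }
  console.log('Snapshot status:', data.DBSnapshot.Status);
});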

Here’s how I would launch the same instance using the AWS Management Console:

Get Started Today
You can launch RDS database instances running MariaDB today in all AWS regions. Supported database instance types include M3 (standard), R3 (memory optimized), and T2 (standard).


AWS Official Blog: AWS Import/Export Snowball – Transfer 1 Petabyte Per Week Using Amazon-Owned Storage Appliances

This post was syndicated from: AWS Official Blog and was written by: Jeff Barr. Original post: at AWS Official Blog

Even though high speed Internet connections (T3 or better) are available in many parts of the world, transferring terabytes or petabytes of data from an existing data center to the cloud remains challenging. Many of our customers find that the data migration aspect of an all-in move to the cloud presents some surprising issues. In many cases, these customers are planning to decommission their existing data centers after they move their apps and their data; in such a situation, upgrading their last-generation networking gear and boosting connection speeds makes little or no sense.

We launched the first-generation AWS Import/Export service way back in 2009. As I wrote at the time, “Hard drives are getting bigger more rapidly than internet connections are getting faster.” I believe that remains the case today. In fact, the rapid rise in Big Data applications, the emergence of global sensor networks, and the “keep it all just in case we can extract more value later” mindset have made the situation even more dire.

The original AWS Import/Export model was built around devices that you had to specify, purchase, maintain, format, package, ship, and track. While many AWS customers have used (and continue to use) this model, some challenges remain. For example, it does not make sense for you to buy multiple expensive devices as part of a one-time migration to AWS. In addition to data encryption requirements and device durability issues, creating the requisite manifest files for each device and each shipment adds additional overhead and leaves room for human error.

New Data Transfer Model with Amazon-Owned Appliances
After gaining significant experience with the original model, we are ready to unveil a new one, formally known as AWS Import/Export Snowball. Built around appliances that we own and maintain, the new model is faster, cleaner, simpler, more efficient, and more secure. You don’t have to buy storage devices or upgrade your network.

Snowball is designed for customers that need to move lots of data (generally 10 terabytes or more) to AWS on a one-time or recurring basis. You simply request one or more appliances from the AWS Management Console and wait a few days for the appliance to be delivered to your site. If you want to import a lot of data, you can order several Snowball appliances and run them in parallel.

The new Snowball appliance is purpose-built for efficient data storage and transfer. It is rugged enough to withstand a 6 G jolt, and (at 50 lbs) light enough for one person to carry. It is entirely self-contained, with 110 Volt power and a 10 Gigabit network connection on the back and an E Ink display/control panel on the front. It is weather-resistant and serves as its own shipping container; it can go from your mail room to your data center and back again with no packing or unpacking hassle to slow things down. In addition to being physically rugged and tamper-resistant, AWS Snowball detects tampering attempts. Here’s what it looks like:

Once you receive a Snowball, you plug it in, connect it to your network, configure the IP address (you can use your own or the device can fetch one from your network using DHCP), and install the AWS Snowball client. Then you return to the Console to download the job manifest and a 25-character unlock code. With all of that info in hand, you start the appliance with one command:

$ snowball start -i DEVICE_IP -m PATH_TO_MANIFEST -u UNLOCK_CODE

At this point you are ready to copy data to the Snowball. The data is encrypted with 256-bit encryption on the host and stored on the appliance in encrypted form. The appliance can be hosted on a private subnet with limited network access.

From there you simply copy up to 50 terabytes of data to the Snowball and disconnect it (a shipping label will automatically appear on the E Ink display), and ship it back to us for ingestion. We’ll decrypt the data and copy it to the S3 bucket(s) that you specified when you made your request. Then we’ll sanitize the appliance in accordance with National Institute of Standards and Technology Special Publication 800-88 (Guidelines for Media Sanitization).

At each step along the way, notifications are sent to an Amazon Simple Notification Service (SNS) topic and email address that you specify. You can use the SNS notifications to integrate the data import process into your own data migration workflow system.
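As a rough illustration of that integration point, here is a minimal, hypothetical sketch of a Node.js Lambda handler subscribed to the notification topic; it only logs each status change, and any real workflow logic is left as a placeholder:

// Sketch: handle AWS Import/Export Snowball job notifications delivered via SNS.
// The downstream processing shown in comments is hypothetical.
exports.handler = function (event, context) {
    // Standard SNS event shape: one or more records, each carrying a message
    event.Records.forEach(function (record) {
        var subject = record.Sns.Subject;
        var message = record.Sns.Message;
        console.log('Snowball job update:', subject);
        console.log('Details:', message);
        // A real workflow system could update a job-tracking database here,
        // or kick off post-ingestion processing once the import completes.
    });
    context.succeed('processed ' + event.Records.length + ' notification(s)');
};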

Creating an Import Job
Let’s step through the process of creating an AWS Snowball import job from the AWS Management Console. I create a job by entering my name and address (or choosing an existing one if I have done this before):

Then I give the job a name (mine is import-photos), and select a destination (an AWS region and one or more S3 buckets):

Next, I set up my security (an IAM role and a KMS key to encrypt the data):

I’m almost ready! Now I choose the notification options. I can create a new SNS topic and create an email subscription to it, or I can use an existing topic. I can also choose the status changes that are of interest to me:

After I review and confirm my choices, the job becomes active:

The next step (which I didn’t have time for in the rush to re:Invent) would be to receive the appliance, install it and copy my data over, and ship it back.

In the Works
We are launching AWS Import/Export Snowball with import functionality so that you can move data to the cloud. We are also aware of many interesting use cases that involve moving data the other way, including large-scale data distribution, and plan to address them in the future.

We are also working on other enhancements including continuous, GPS-powered chain-of-custody tracking.

Pricing and Availability
There is a usage charge of $200 per job, plus shipping charges that are based on your destination and the selected shipment method. As part of this charge, you have up to 10 days (starting the day after delivery) to copy your data to the appliance and ship it out. Extra days are $15 each.

You can import data to the US Standard and US West (Oregon) regions, with more on the way.


AWS Official Blog: New – AWS WAF

This post was syndicated from: AWS Official Blog and was written by: Jeff Barr. Original post: at AWS Official Blog

Have you ever taken the time to watch the access and error logs from your web server scroll past? In addition to legitimate well-formed requests from users and spiders, you will probably see all sorts of unseemly and downright scary requests far too often. For example, I checked the logs for one of my servers and found that someone or something was looking for popular packages that are often installed at well-known locations (I have changed the source IP address for illustrative purposes):

If any of those probes had succeeded, the attacker could then try a couple of avenues to gain access to my server. They could run through a list of common (or default) user names and passwords, or they could attempt to exploit a known system, language, or application vulnerability (perhaps powered by SQL injection or cross-site request forgery) as the next step.

Like it or not, these illegitimate requests are going to be flowing in 24×7. Even if you keep your servers well-patched and do what you can to keep the attack surface as small as possible, there’s always room to add an additional layer of protection.

In order to help you to do this, we are launching AWS WAF today. As you will see when you read this post, AWS WAF will allow you to protect your AWS-powered web applications from application-layer attacks such as those I described above.

You can set it up and start protecting your applications in minutes. You simply create one or more web Access Control Lists (web ACLs), each containing rules (sets of conditions defining acceptable or unacceptable requests or IP addresses) and actions to take when a rule is satisfied. Then you attach the web ACL to your application’s Amazon CloudFront distribution.

From that point forward, incoming HTTP and HTTPS requests that arrive via the distribution will be checked against each rule in the associated web ACL. The conditions within the rules can be positive (allow certain requests or IP addresses) or negative (block certain requests or IP addresses).

I can use the rules and the conditions in many different ways. For example, I could create a rule that would block all access from the IP address shown above. If I were getting similar requests from many different IP addresses, I could choose to block on one or more strings in the URI such as “/typo3/” or “/xampp/.” I could also choose to create rules that would allow access to the actual functioning URIs within my application, and block all others. I can also create rules that guard against various forms of SQL injection.

AWS WAF Concepts
Let’s talk about conditions, rules, web ACLs, and actions. I’ll illustrate some of my points with screen shots of the AWS WAF console.

Conditions inspect incoming requests. They can look at the request URI, the query string, a specific HTTP header,  or the HTTP method (GET, PUT, and so forth):

Because attackers often attempt to camouflage their requests in devious ways, conditions can also include transformations that are performed on the request before the content is inspected:

Conditions can also look at the incoming IP address, and can match a /8, /16, or /24 range. They can also use a /32 to match a single IP address:

Rules reference one or more conditions, all of which must be satisfied in order to make the rule active. For example, one rule could reference an IP-based condition and a request-based condition in order to block access to certain content. Each rule also generates Amazon CloudWatch metrics.

Actions are part of rules, and denote the action to be taken when a request matches all of the conditions in a rule. An action can allow a request to go through, block it, or simply count the number of times that the rule matches (this is good for evaluating potential new rules before using a more decisive action).

Web ACLs in turn reference one or more rules, along with an action for each rule. Each incoming request for a distribution is evaluated against successive rules until the request matches all of the conditions in a rule; the action associated with that rule is then taken. If no rule matches, the default action (block or allow the request) is taken.

WAF in Action
Let’s go through the process of creating a condition, a rule, and a web ACL. I’ll do this through the console, but you can also use the AWS Command Line Interface (CLI), AWS Tools for Windows PowerShell, or the Web Application Firewall API.

The console leads me through the steps. I start by creating a web ACL called ProtectSite:

Then I create conditions that will allow or block content:

I can create an IP match condition called BadIP to block the (fake) IP address from my server log:

Then I use the condition to create a rule called BadCompany:

And now I select the rule and choose the action (a single web ACL can use multiple rules; my example uses just one):

As you can see above, the default action is to allow requests through. The net effect is that this combination (condition + rule + web ACL) will block incoming traffic from the offending IP address and allow everything else to go through.

The next step is to associate my new web ACL with a CloudFront distribution (we’ll add more services over time):

A single web ACL can be associated with any number of distributions. However, each distribution can be associated with one web ACL.

The web ACL will take effect within minutes. I can inspect its CloudWatch metrics to understand how often each rule and each web ACL is activated.

API Power
Everything that I have shown you above can also be accessed from your own code:

  • CreateIPSet, CreateByteMatchSet, and CreateSqlInjectionMatchSet are used to create conditions.
  • CreateRule is used to create rules from conditions.
  • CreateWebACL is used to create web ACLs from rules.
  • UpdateWebACL is used to associate a web ACL with a CloudFront distribution.

There are also functions to list, update, and delete conditions, rules, and web ACLs.

The GetSampledRequests function gives you access to up to 5,000 of the requests that were evaluated against a particular rule within a time period that you specify. The response includes detailed information about each of the requests, including the action taken (ALLOW, BLOCK, or COUNT).
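As a rough sketch of how those functions fit together, the following Node.js example (using the classic AWS WAF API via the AWS SDK for JavaScript) recreates the console walkthrough above: an IP match condition, a rule, and a web ACL. The IP address is a documentation placeholder rather than the one from my server log, the metric names are hypothetical, and most error handling is omitted:

// Sketch: build a BadIP condition, a BadCompany rule, and a ProtectSite web ACL
// via the API. The 192.0.2.44 address is a documentation placeholder.
var AWS = require('aws-sdk');
var waf = new AWS.WAF({region: 'us-east-1'});

// Every mutating WAF call needs a fresh change token.
function withToken(fn) {
  return waf.getChangeToken({}).promise().then(function (t) {
    return fn(t.ChangeToken);
  });
}

var ipSetId, ruleId, webAclId;

withToken(function (token) {
  return waf.createIPSet({Name: 'BadIP', ChangeToken: token}).promise();
}).then(function (res) {
  ipSetId = res.IPSet.IPSetId;
  // Add the offending address (placeholder) to the IP set
  return withToken(function (token) {
    return waf.updateIPSet({
      IPSetId: ipSetId,
      ChangeToken: token,
      Updates: [{Action: 'INSERT',
                 IPSetDescriptor: {Type: 'IPV4', Value: '192.0.2.44/32'}}]
    }).promise();
  });
}).then(function () {
  return withToken(function (token) {
    return waf.createRule({Name: 'BadCompany', MetricName: 'BadCompany',
                           ChangeToken: token}).promise();
  });
}).then(function (res) {
  ruleId = res.Rule.RuleId;
  // The rule matches requests whose source IP is in the BadIP set
  return withToken(function (token) {
    return waf.updateRule({
      RuleId: ruleId,
      ChangeToken: token,
      Updates: [{Action: 'INSERT',
                 Predicate: {Negated: false, Type: 'IPMatch', DataId: ipSetId}}]
    }).promise();
  });
}).then(function () {
  // Default action ALLOW; the attached rule will BLOCK matching requests
  return withToken(function (token) {
    return waf.createWebACL({Name: 'ProtectSite', MetricName: 'ProtectSite',
                             DefaultAction: {Type: 'ALLOW'},
                             ChangeToken: token}).promise();
  });
}).then(function (res) {
  webAclId = res.WebACL.WebACLId;
  return withToken(function (token) {
    return waf.updateWebACL({
      WebACLId: webAclId,
      ChangeToken: token,
      DefaultAction: {Type: 'ALLOW'},
      Updates: [{Action: 'INSERT',
                 ActivatedRule: {Priority: 1, RuleId: ruleId,
                                 Action: {Type: 'BLOCK'}}}]
    }).promise();
  });
}).then(function () {
  // The web ACL must still be associated with a CloudFront distribution
  console.log('Web ACL', webAclId, 'created and ready to attach');
}).catch(console.error);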

Available Now
AWS WAF is available today anywhere CloudFront is available. Pricing is $5 per web ACL, $1 per rule, and $0.60 per million HTTP requests.

— Jeff;

TorrentFreak: Anti-Piracy Activities Get VPNs Banned at Torrent Sites

This post was syndicated from: TorrentFreak and was written by: Andy. Original post: at TorrentFreak

For the privacy-conscious Internet user, VPNs and similar services are now considered must-have tools. In addition to providing much needed security, VPNs also allow users to side-step geo-blocking technology, a useful ability for today’s global web-trotter.

While VPNs are often associated with file-sharing activity, it may be of interest to learn that they are also used by groups looking to crack down on the practice. Just like file-sharers it appears that anti-piracy groups prefer to work undetected, as events during the past few days have shown.

Earlier this week while doing our usual sweep of the world’s leading torrent sites, it became evident that at least two popular portals were refusing to load. Finding no complaints that the sites were down, we were able to access them via publicly accessible proxies and as a result thought no more of it.

A day later, however, comments began to surface on Twitter that some VPN users were having problems accessing certain torrent sites. Sure enough, after we disabled our VPN the affected sites sprang into action. Shortly after, reader emails to TF revealed that other users were experiencing similar problems.

Eager to learn more, TF opened up a dialog with one of the affected sites and in return for granting complete anonymity, its operator agreed to tell us what had been happening.

“The IP range you mentioned was used for massive DMCA crawling and thus it’s been blocked,” the admin told us.

Intrigued, we asked the operator more questions. How do DMCA crawlers manifest themselves? Are they easy to spot and deal with?

“If you see 15,000 requests from the same IP address after integrity checks on the IP’s browsers for the day, you can safely assume it’s a [DMCA] bot,” the admin said.

From the above we now know that anti-piracy bots use commercial VPN services, but do they also access the sites by other means?

“They mostly use rented dedicated servers. But sometimes I’ve even caught them using Hola VPN,” our source adds. Interestingly, it appears that the anti-piracy activities were directed through the IP addresses of Hola users without them knowing.

Once spotted, the IP addresses used by the aggressive bots are banned. The site admin wouldn’t tell TF how his system works. However, he did disclose that sizable computing resources are deployed to deal with the issue and that the intelligence gathered proves extremely useful.

Of course, just because an IP address is banned at a torrent site it doesn’t necessarily follow that a similar anti-DMCA system is being deployed. IP addresses are often excluded after being linked to users uploading spam, fakes and malware. Additionally, users can share IP addresses, particularly in the case of VPNs. Nevertheless, the banning of DMCA notice-senders is a documented phenomenon.

Earlier this month Jonathan Bailey at Plagiarism Today revealed his frustrations when attempting to get so-called “revenge porn” removed from various sites.

“Once you file your copyright or other notice of abuse, the host, rather than remove the material at question, simply blocks you, the submitter, from accessing the site,” Bailey explains.

“This is most commonly done by blocking your IP address. This means, when you come back to check and see if the site’s content is down, it appears that the content, and maybe the entire site, is offline. However, in reality, the rest of the world can view the content, it’s just you that can’t see it,” he notes.

Perhaps unsurprisingly, Bailey advises a simple way of regaining access to a site using these methods.

“I keep subscriptions with multiple VPN providers that give access to over a hundred potential IP addresses that I can use to get around such tactics,” he reveals.

The good news for both file-sharers and anti-piracy groups alike is that IP address blocks like these don’t last forever. The site we spoke with said that blocks on the VPN range we inquired about had already been removed. Still, the cat and mouse game is likely to continue.

Source: TorrentFreak, for the latest info on copyright, file-sharing, torrent sites and ANONYMOUS VPN services.

SANS Internet Storm Center, InfoCON: green: BizCN gate actor update, (Fri, Oct 2nd)

This post was syndicated from: SANS Internet Storm Center, InfoCON: green and was written by: SANS Internet Storm Center, InfoCON: green. Original post: at SANS Internet Storm Center, InfoCON: green


The actor using gates registered through BizCN (always with privacy protection) continues using the Nuclear exploit kit (EK) to deliver malware.

My previous diary on this actor documented the actor's switch from Fiesta EK to Nuclear EK in early July 2015 [1]. Since then, the BizCN gate actor briefly switched to Neutrino EK; however, it now appears to be using Nuclear EK again.

Our thanks to Paul, who submitted a pcap of traffic associated with this actor to the ISC.


Paul's pcap showed us a Google search leading to the compromised website, as you can also see in the image below.
Shown above: A pcap of the traffic filtered by HTTP request.

No payload was found in this EK traffic, so the Windows host viewing the compromised website didn't get infected. The Windows host from this pcap was running IE 11, and URLs for the EK traffic stop after the last two HTTP POST requests. These URL patterns are what I've seen every time IE 11 crashes after getting hit with Nuclear EK.

A key thing to remember with the BizCN gate actor is the referer line from the landing page. This will always show the compromised website, and it won't indicate the BizCN-registered gate that gets you there. Paul's pcap didn't include traffic to the BizCN-registered gate, but I found a reference to it in the traffic.
Shown above: Flow chart for EK traffic associated with the BizCN gate actor.

How did I find the gate in this example? First, I checked the referer on the HTTP GET request to the EK landing page.
Shown above: TCP stream for the HTTP GET request to the Nuclear EK landing page.

That referer should have an injected script pointing to the BizCN gate URL, so I exported that page from the pcap.
Shown above: The object I exported from the pcap.

I searched the HTML text for a reference to the gate.
Shown above: Malicious script in the page from the compromised website pointing to a URL on the BizCN-registered gate domain.

The BizCN-registered gate domain could be resolved, and pinging it showed its current IP address.
Shown above: Whois information on the BizCN-registered gate domain.

This completes my flow chart for the BizCN gate actor. The domains associated with Paul's pcap were:

  • – Compromised website
  • – BizCN-registered gate
  • – Nuclear EK

Final words

Recently, I've had a hard time getting a full chain of infection traffic from the BizCN gate actor. Paul's pcap also had this issue, because there was no payload. However, the BizCN gate actor is still active, and many of the compromised websites I've noted in previous diaries [1, 4] are still compromised.

We continue to track the BizCN gate actor, and we'll let you know if we discover any significant changes.

Brad Duncan
Security Researcher at Rackspace
Blog: – Twitter: @malware_traffic



(c) SANS Internet Storm Center. Creative Commons Attribution-Noncommercial 3.0 United States License.

TorrentFreak: Comcast User Hit With 112 DMCA Notices in 48 Hours

This post was syndicated from: TorrentFreak and was written by: Andy. Original post: at TorrentFreak

Every day, DMCA-style notices are sent to regular Internet users who use BitTorrent to share copyrighted material. These notices are delivered to users’ Internet service providers who pass them on in the hope that customers correct their behavior.

The most well-known notice system in operation in the United States is the so-called “six strikes” scheme, in which the leading recording labels and movie studios send educational warning notices to presumed pirates. Not surprisingly, six-strikes refers to users receiving a maximum of six notices. However, content providers outside the scheme are not bound by its rules – sometimes to the extreme.

According to a lawsuit filed this week in the United States District Court for the Western District of Pennsylvania (pdf), one unlucky Comcast user was subjected not only to a barrage of copyright notices on an unprecedented scale, but during one of the narrowest time frames yet.

The complaint comes from Rotten Records who state that the account holder behind a single Comcast IP address used BitTorrent to share the discography of Dog Fashion Disco, a long-since defunct metal band previously known as Hug the Retard.

“Defendant distributed all of the pieces of the Infringing Files allowing others to assemble them into a playable audio file,” Rotten Records’ attorney Flynn Wirkus Young explains.

Considering Rotten Records have been working with Rightscorp on other cases this year, it will come as no surprise that the anti-piracy outfit is also involved in this one. And boy have they been busy tracking this particular user. In a single 48 hour period, Rightscorp hammered the Comcast subscriber with more than two DMCA notices every hour over a single torrent.

“Rightscorp sent Defendant 112 notices via Defendant’s ISP Comcast from June 15, 2015 to June 17, 2015 demanding that Defendant stop illegally distributing Plaintiff’s work,” the lawsuit reads.

“Defendant ignored each and every notice and continued to illegally distribute Plaintiff’s work.”


While it’s clear that the John Doe behind the IP address shouldn’t have been sharing the works in question (if he indeed was the culprit and not someone else), the suggestion to the Court that he or she systematically ignored 112 demands to stop infringing copyright is stretching the bounds of reasonable to say the least.

In fact, Court documents state that after infringement began sometime on June 15, the latest infringement took place on June 16 at 11:49am, meaning that the defendant may well have acted on Rightscorp’s notices within 24 hours – and that’s presuming that Comcast passed them on right away, or even at all.

Either way, the attempt here is to portray the defendant as someone who had zero respect for Rotten Records’ rights, even after being warned by Rightscorp more than a hundred and ten times. Trouble is, all of those notices covered an alleged infringing period of less than 36 hours – hardly a reasonable time in which to react.

Still, it’s unlikely the Court will be particularly interested and will probably issue an order for Comcast to hand over their subscriber’s identity so he or she can be targeted by Rotten Records for a cash settlement.

Rotten has targeted Comcast users on several earlier occasions, despite being able to sue the subscribers of any service provider. Notably, while Comcast does indeed pass on Rightscorp’s DMCA takedown notices, it strips the cash settlement demand from the bottom.

One has to wonder whether Rightscorp and its client are trying to send the ISP a message with these lawsuits.

Source: TorrentFreak, for the latest info on copyright, file-sharing, torrent sites and ANONYMOUS VPN services.

SANS Internet Storm Center, InfoCON: green: Recent trends in Nuclear Exploit Kit activity, (Thu, Oct 1st)

This post was syndicated from: SANS Internet Storm Center, InfoCON: green and was written by: SANS Internet Storm Center, InfoCON: green. Original post: at SANS Internet Storm Center, InfoCON: green


Since mid-September 2015, I've generated a great deal of Nuclear exploit kit (EK) traffic after checking compromised websites. This summer, I usually found Angler EK. Now I'm seeing more Nuclear.

Nuclear EK has also been sending dual payloads. I documented dual payloads at least three times last year [1, 2, 3], but I hadn't noticed it again from Nuclear EK until recently. This time, one of the payloads appears to be ransomware. I saw Filecoder on 2015-09-18 [4] and TeslaCrypt 2.0 on 2015-09-29 [5]. In both cases, ransomware was a component of the dual payloads from Nuclear EK.

To be clear, Nuclear EK isn't always sending two payloads, but I've noticed a dual payload trend with this recent increase in Nuclear EK traffic.

Furthermore, on Wednesday 2015-09-30, the URL pattern for Nuclear EK's landing page changed. With that in mind, let's take a look at what's happening with Nuclear.

URL patterns

The images below show some examples of URL patterns for Nuclear EK.
Shown above: Some URLs from Nuclear EK on 2015-09-15.
Shown above: Some URLs from Nuclear EK on 2015-09-16.
Shown above: Some URLs from Nuclear EK on 2015-09-18.
Shown above: Some URLs from Nuclear EK on 2015-09-22.
Shown above: Some URLs from Nuclear EK on 2015-09-29. Pcap available here.

In the above images, the initial HTTP GET request always starts with /search?q= for the landing page URL.
Shown above: Some URLs from Nuclear EK on 2015-09-30.

The initial HTTP GET request now starts with /url?sa= instead of /search?q= for the landing page URL. I saw the same thing from three different examples of Nuclear EK on 2015-09-30. Windows hosts from these examples all had the exact same configuration.

Nuclear EK examples from 2015-09-30

I had some trouble infecting a Windows 7 host running IE 11. The browser always crashed before the EK payload was sent. So I tried three different configurations to generate traffic for this diary. The first run had a Windows 7 host running IE 10. The second run had a Windows 7 host running IE 8. The third run had a Windows 7 host running IE 11. All hosts were also running Flash Player.

I found a compromised website with an injected iframe leading to Nuclear EK. The screenshot below shows an example of the malicious script at the bottom of the page. It's right before the closing body and HTML tags.
Shown above: Malicious script injected at the bottom of the compromised page.

The first run used IE 10 with Flash Player.
Shown above: Desktop background from the infected host.

Decryption instructions were left as a text file on the desktop. The authors behind this ransomware listed two email addresses for further decryption instructions.
Shown above: Decryption instructions from the ransomware.

Playing around with the pcap in Wireshark, I got a decent representation of the traffic. Below, you'll see the compromised website, Nuclear EK, and some of the post-infection traffic. TLS activity on ports 443 and 9001 with random characters for the server names is Tor traffic. Several other attempted TCP connections can be found in the pcap, but none of those were successful, and they're not shown below.
Shown above: Some of the infection traffic from the pcap in Wireshark (from a Windows host using IE 10 and Flash Player).

Below are alerts on the infection traffic when I used tcpreplay on Security Onion with the Emerging Threats (ET) and ET Pro rule sets.
Shown above: Alerts from the traffic using Sguil in Security Onion.

For the second run, I infected a different Windows host running IE 8 and Flash Player. This generated Nuclear EK traffic from the same IP address and a slightly different domain name; however, I didn't see the same traffic that triggered alerts during the first run.
Shown above: Nuclear EK traffic using IE 8 and Flash Player.

For the third run, I used a Windows host with IE 11 and Flash Player. As mentioned earlier, the browser would crash before the EK sent the payload, so this host didn't get infected with malware. I tried it once with Flash Player and once without Flash Player, both times running an unpatched version of IE 11. Each time, the browser crashed. Nuclear EK was still using the same IP address, but the domain names were different. Within a 4 minute timespan in the pcap, you'll find the traffic from both attempts.
Shown above: Nuclear EK traffic using IE 11 and Flash Player (tried twice, but the browser crashed each time).
Shown above: Nuclear EK sends the first malware payload.
Shown above: Nuclear EK sends the second malware payload.

Other than the landing page URL pattern and the dual payload, Nuclear EK looks remarkably similar to the last time we reviewed it in August 2015 [6].

Preliminary malware analysis

The first and second runs generated a full infection chain and post-infection traffic. The malware payload was the same during the first and second run. The first run had additional malware on the infected host. The third run using IE 11 didn't generate any malware payload.

Nuclear EK malware payload 1 of 2:

AWS Official Blog: New – Receive and Process Incoming Email with Amazon SES

This post was syndicated from: AWS Official Blog and was written by: Jeff Barr. Original post: at AWS Official Blog

We launched the Amazon Simple Email Service (SES) way back in 2011, with a focus on deliverability — getting mail through to the intended recipients. Today, the service is used by Amazon and our customers to send billions of transactional and marketing emails each year.

Today we are launching a much-requested new feature for SES. You can now use SES to receive email messages for entire domains or for individual addresses within a domain. This will allow you to build scalable, highly automated systems that can programmatically send, receive, and process messages with minimal human intervention.

You use sophisticated rule sets and IP address filters to control the destiny of each message. Messages that match a rule can be augmented with additional headers, stored in an S3 bucket, routed to an SNS topic, passed to a Lambda function, or bounced.

Receiving and Processing Email
In order to make use of this feature you will need to verify that you own the domain of interest. If you have already done this in order to use SES to send email, then you are already good to go.

Now you need to route your incoming email to SES for processing. You have two options here. You can set the domain’s MX (Mail Exchange) record to point to the SES SMTP endpoint in the region where you want to process incoming email. Or, you can configure your existing mail handling system to forward mail to the endpoint.

The next step is to figure out what you want to do with the messages. To do this, you need to create some receipt rules. Rules are grouped into rule sets (order matters within the set) and can apply to multiple domains. Like most aspects of AWS, rules and rule sets are specific to a particular region. You can have one active rule set per AWS region; if you have no such set, then all incoming email will be rejected.

Rules have the following attributes:

  • Enabled – A flag that enables or disables the rule.
  • Recipients – A list of email addresses and/or domains that the rule applies to. If this attribute is not supplied, the rule matches all addresses in the domain.
  • Scan – A flag to request spam and virus scans (default is true).
  • TLS – A flag to require that mail matching this rule is delivered over a connection that is encrypted with TLS.
  • Action List – An ordered list of actions to perform on messages that match the rule.

When SES receives a message, it performs several checks before it accepts the message for further processing. Here’s what happens:

  • The source IP address is checked against an internal block list maintained by SES, and the message is rejected if the address is on the list (this list can be overridden using an IP address filter that explicitly allows the IP address).
  • The source IP address is checked against your IP address filters, and the message is rejected if so directed by a filter (see the sketch after this list).
  • The message is checked to see if it matches any of the recipients specified in a rule, or if there's a domain-level match, and is accepted if so.
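To illustrate the IP address filters mentioned in the checks above, here is a minimal sketch (AWS SDK for JavaScript) that creates a filter blocking a CIDR range; the filter name and the range are hypothetical placeholders:

// Sketch: create an IP address filter that blocks incoming mail from a CIDR
// range. The filter name and range below are hypothetical placeholders.
var AWS = require('aws-sdk');
var ses = new AWS.SES({region: 'us-east-1'});

ses.createReceiptFilter({
  Filter: {
    Name: 'block-known-spammer',
    IpFilter: {
      Policy: 'Block',          // or 'Allow' to override the SES internal list
      Cidr: '198.51.100.0/24'   // documentation range, not a real spam source
    }
  }
}, function (err) {
  if (err) { return console.error(err); }
  console.log('Receipt filter created');
});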

Messages that do not match a rule do not cost you anything. After a message has been accepted, SES will perform the actions associated with the matching rule.  The following actions are available:

  • Add a header to the message.
  • Store the message in a designated S3 bucket, with optional encryption using a key stored in AWS Key Management Service (KMS). The entire message (headers and body) must be no larger than 30 megabytes in size for this action to be effective.
  • Publish the message to a designated SNS topic. The entire message (headers and body) must be no larger than 150 kilobytes in size for this action to be effective.
  • Invoke a Lambda function. The invocation can be synchronous or asynchronous (the default).
  • Return a specified bounce message to the sender.
  • Stop processing the actions in the rule.

The actions are run in the order specified by the rule. Lambda actions have access to the results of the spam and virus scans and can take action accordingly. If the Lambda function needs access to the body of the message, a preceding action in the rule must store the message in S3.
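For teams that prefer to script this setup, here is a minimal sketch (again using the AWS SDK for JavaScript) that creates a receipt rule combining several of the actions above, storing the message in S3 before invoking Lambda so the function can read the body; the rule set name, domain, bucket, topic, and function ARN are all hypothetical placeholders:

// Sketch: create a receipt rule that stores the message in S3 (so a later
// Lambda action can read the body), invokes a Lambda function synchronously,
// and publishes a notification to SNS. All names/ARNs are placeholders.
var AWS = require('aws-sdk');
var ses = new AWS.SES({region: 'us-east-1'});

ses.createReceiptRule({
  RuleSetName: 'my-rule-set',                        // hypothetical rule set
  Rule: {
    Name: 'process-incoming',
    Enabled: true,
    ScanEnabled: true,                               // spam and virus scanning
    TlsPolicy: 'Optional',
    Recipients: ['example.com'],                     // whole-domain match
    Actions: [
      {S3Action: {BucketName: 'my-inbound-mail',
                  KmsKeyArn: 'arn:aws:kms:us-east-1:123456789012:alias/aws/ses'}},
      {LambdaAction: {FunctionArn: 'arn:aws:lambda:us-east-1:123456789012:function:SpamFilter',
                      InvocationType: 'RequestResponse'}},
      {SNSAction: {TopicArn: 'arn:aws:sns:us-east-1:123456789012:incoming-mail'}}
    ]
  }
}, function (err) {
  if (err) { return console.error(err); }
  console.log('Receipt rule created');
});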

A Quick Demo
Here’s how I would create a rule that passes incoming email messages to a Lambda function (MyFunction), notifies an SNS topic (MyTopic), and then stores the messages in an S3 bucket (MyBucket) after encrypting them with a KMS key (aws/ses):

I can see all of my rules at a glance:

Here’s a Lambda function that will stop further processing if a message fails any of the spam or virus checks. In order for this function to perform as expected, it must be invoked in synchronous (RequestResponse) fashion.

exports.handler = function(event, context) {
    console.log('Spam filter');
    var sesNotification = event.Records[0].ses;
    console.log("SES Notification:\n", JSON.stringify(sesNotification, null, 2));

    // Check if any spam or virus check failed
    if (sesNotification.receipt.spfVerdict.status      === 'FAIL'
        || sesNotification.receipt.dkimVerdict.status  === 'FAIL'
        || sesNotification.receipt.spamVerdict.status  === 'FAIL'
        || sesNotification.receipt.virusVerdict.status === 'FAIL') {
        console.log('Dropping spam');
        // Stop processing the rule set, dropping the message
        context.succeed({'disposition': 'STOP_RULE_SET'});
    } else {
        context.succeed(null);
    }
};

To learn more about this feature, read Receiving Email in the Amazon SES Developer Guide.

Pricing and Availability
You will pay $0.10 for every 1000 emails that you receive. Messages that are 256 KB or larger are charged for the number of complete 256 KB chunks in the message, at the rate of $0.09 per 1000 chunks. A 768 KB message counts for 3 chunks. You’ll also pay for any S3, SNS, or Lambda resources that you consume. Refer to the Amazon SES Pricing page for more information.

This new feature is available now and you can start using it today. Amazon SES is available in the US East (Northern Virginia), US West (Oregon), and Europe (Ireland) regions.

— Jeff;

AWS Security Blog: Use AWS Services to Comply with Security Best Practices—Minus the Inordinate Time Investment

This post was syndicated from: AWS Security Blog and was written by: Jonathan Desrocher. Original post: at AWS Security Blog

As security professionals, it is our job to be sure that our decisions comply with best practices. Best practices, though, tend to be time consuming, which means we either don’t get around to following best practices, or we spend too much time on tedious, manual tasks. This blog post includes two examples where AWS services can help achieve compliance with security best practices, minus the inordinate time investment.

One AWS Identity and Access Management (IAM) best practice is to delete or regularly rotate access keys. However, knowing which AWS access keys are in use has usually involved poring over AWS CloudTrail logs. In my May 30 webinar, I highlighted the then recently launched access key last used feature that makes access key rotation easier. By knowing the date, service, and region of the last usage, you can much more easily identify which keys are in use and where. You can also identify those keys that haven’t been used in a long time; this helps to maintain good security posture by retiring and deleting old, unused access keys.
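As a rough sketch of how you might pull that information programmatically (using the AWS SDK for JavaScript; the user name is a placeholder), you can list a user's access keys and then ask for each key's last-used details:

// Sketch: report when each of a user's access keys was last used.
// The user name is a hypothetical placeholder.
var AWS = require('aws-sdk');
var iam = new AWS.IAM();

iam.listAccessKeys({UserName: 'example-user'}, function (err, data) {
  if (err) { return console.error(err); }
  data.AccessKeyMetadata.forEach(function (key) {
    iam.getAccessKeyLastUsed({AccessKeyId: key.AccessKeyId}, function (err, usage) {
      if (err) { return console.error(err); }
      var last = usage.AccessKeyLastUsed;
      console.log(key.AccessKeyId, key.Status,
                  'last used:', last.LastUsedDate || 'never',
                  'service:', last.ServiceName, 'region:', last.Region);
    });
  });
});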

If you have a Windows environment on AWS and need to join each Amazon EC2 instance to the Windows domain, the usual approach is to either do it manually or embed credentials in the Amazon Machine Image (AMI). In this Auto Scaling Lifecycle Policies for Security Practitioners video, I show you how you can use Auto Scaling lifecycle policies to, among other things, join a server to a Windows domain without sharing credentials across instances.

These are just two examples of how using AWS services helps you comply with best practices, reduce risk, and spend less time on manual tasks. If you have questions or comments, either post them below or go to the IAM forum.

– Jonathan

Application Management Blog: Using AWS OpsWorks to Customize and Automate App Deployment on Windows

This post was syndicated from: Application Management Blog and was written by: Daniel Huesch. Original post: at Application Management Blog

Using OpsWorks and Chef on Windows helps you optimize your use of Windows by reliably automating configuration tasks to enforce instance compliance. Automating instance configuration enables software engineering best practices, like code-reviews and continuous integration, and allows smaller, faster delivery than with manual configuration. With automated instance configuration, you depend less on golden images or manual changes. OpsWorks also ships with features that ease operational tasks, like user management or scaling your infrastructure by booting additional machines.

In this post, I show how you can use OpsWorks to customize your instances and deploy your apps on Windows. To show how easy application management is when using OpsWorks, we will deploy a Node.JS app to Microsoft Internet Information Server (IIS). You can find both the cookbooks and the app source code in the Amazon Web Services – Labs repository on GitHub.

To follow this example, you need to understand how OpsWorks uses recipes and lifecycle events. For more information, see What is AWS OpsWorks?.

Create the Stack and Instance

First, let’s create a stack in the OpsWorks console. Navigate to the Add Stack page at

For the name, type Node.JS Windows Demo. For the Default operating system, choose Microsoft Windows Server 2012 R2 Base.

Next, configure the cookbook source. Choose Advanced to display the Configuration Management section. Enable Use custom Chef cookbooks, choose the Git version control system as the Repository type, and enter as the Repository URL. The cookbooks that we just configured describe how a Node.JS app is installed.

Choose Add Stack to create the stack.

In the Layers section, choose Add a Layer. Choose any name and short name you like. For Security Groups, choose AWS-OpsWorks-Web-Server as an additional security group. This ensures that HTTP traffic is allowed and you can connect to the demo app with your browser. For this example, you don’t need RDP access, so you can safely ignore the warning. Choose Add Layer.

Before we add instances to this layer, we have to wire up the Chef code with the lifecycle events. To do that, we edit the layer's recipes. On the Layers page, choose Recipes for your layer, and on the Layer Recipes page, choose Edit. For the Setup lifecycle event, choose webserver_nodejs::setup, and for the Deploy event, choose webserver_nodejs::deploy.

Confirm your changes by choosing Save.

Now we are ready to add an instance. Switch to the Instances page by choosing Instances, and then choose Add an Instance. OpsWorks suggests a Hostname based on the layer name; for this example, webserver-node1. Choose Add Instance to confirm.

Don’t start the instance yet. We need to tell OpsWorks about the app that we want to deploy first. This ensures that the app is deployed when the instance starts. (When an instance executes the Setup event during boot, OpsWorks deploys apps automatically. Later, you can deploy new and existing apps to any running instances.)
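If you later prefer to script those deployments rather than click through the console, a minimal sketch using the AWS SDK for JavaScript might look like the following; the stack and app IDs are hypothetical placeholders you would copy from your own stack:

// Sketch: trigger a Deploy lifecycle event for an app on a stack's instances.
// StackId and AppId below are hypothetical placeholders.
var AWS = require('aws-sdk');
var opsworks = new AWS.OpsWorks({region: 'us-east-1'});

opsworks.createDeployment({
  StackId: '2f92ff9d-0000-0000-0000-EXAMPLE',
  AppId: '3b4a9f7e-0000-0000-0000-EXAMPLE',
  Command: { Name: 'deploy' },
  Comment: 'Deploy triggered from a script'
}, function (err, data) {
  if (err) { return console.error(err); }
  console.log('Deployment started:', data.DeploymentId);
});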

Create the App

In the left pane, choose Apps to switch to the Apps page, and then choose Add an app. Give the app a name, choose Git as the Repository type, and enter as the Repository URL. The demo app supports environment variables, so you can use the APP_ADMIN_EMAIL key to set the mail address displayed on the demo app’s front page. Use any value you like.

To save the app, choose Add App. Return to the Instances page. Now start the instance.

OpsWorks reports setup progress, switching from “requested” to “pending,” and then to “booting.” When the instance is “booting,” it is running on Amazon EC2. After the OpsWorks agent is installed and has picked up its first lifecycle event, the status changes to “running_setup.” After OpsWorks processes the Setup lifecycle event, it shows the instance as “online.” When an instance reaches the online state, OpsWorks fires a Configure lifecycle event to inform all instances in the stack about the new instance.

Booting a new instance can take a few minutes, the total time depending on the instance size and your Chef recipes. While you wait, get a cup of coffee or tea. Then, let’s take a look at the code that Chef will execute and the general structure of the opsworks-windows-example-cookbooks.

Cookbook Deep Dive – Setup Event

Let’s take a look at the webserver_nodejs::setup recipe, which we wired to the Setup event when we chose layer settings:

# Recipes to install software on initial boot

include_recipe "opsworks_iis"
include_recipe "opsworks_nodejs"
include_recipe "opsworks_iisnode"

This recipe simply includes other recipes. As you can see, Chef installs the IIS, Node.JS, and the IIS-to-Node.JS bridge iisnode. By taking a closer look at the opsworks_nodejs cookbook, we can learn how to install apps. The general folder structure of a Chef cookbook is:

├── attributes
│   └── default.rb
├── definitions
│   └── opsworks_nodejs_npm.rb
├── metadata.rb
└── recipes
    └── default.rb

The opsworks_nodejs cookbook uses attributes to define settings that you can override when using the cookbook. This makes it easy to update to a new version of Node.JS or npm, the node package manager, by just setting the attribute to another value.

The file opsworks_nodejs/attributes/default.rb defines these version attributes:

default["opsworks_nodejs"]["node_version"] = "0.12.7"
default["opsworks_nodejs"]["npm_version"] = "2.13.0"

The default recipe in the opsworks_nodejs cookbook uses the node_version and npm_version. It uses node_version as part of the download URL construction and npm_version in the batch code.

version = node["opsworks_nodejs"]["node_version"]

download_file = "node-v#{version}-x86.msi"
download_path = ::File.join(Chef::Config["file_cache_path"], download_file)

# Download the installer from the official Node.JS distribution site
remote_file download_path do
  source "https://nodejs.org/dist/v#{version}/#{download_file}"
  retries 2
end

# Install the MSI package
windows_package "nodejs" do
  source download_path
end

# Install the requested npm version
batch "install npm version #{node['opsworks_nodejs']['npm_version']}" do
  code "\"%programfiles(x86)%\\nodejs\\npm\" -g install npm@#{node['opsworks_nodejs']['npm_version']}"
end

The cookbook installs Node.JS in two steps. First, it uses the Chef remote_file resource to download the installation package from the official Node.JS website and save it to the local disk. The cookbook also sets the retries attribute to enable retries, so the code is more resilient to short-term networking issues.

After the cookbook saves the file, the windows_package resource installs the MSI. Then, the cookbook installs the requested npm version using the batch resource.

Chef resources provide many attributes for fine-tuning their behavior. For more information, see the Chef Resources Reference.

Cookbook Deep Dive – Deploy Event

As I mentioned, OpsWorks doesn’t prepopulate your Chef run with recipes or resources. This gives you fine-grained control and complete flexibility over how to deploy your app. However, there are some common tasks, like checking your app out of Git or downloading it from Amazon S3. To make performing common tasks easier, the example cookbooks ship with a custom Chef resource that handles these steps for you, opsworks_scm_checkout.

As it does with the Setup recipe, OpsWorks uses the webserver_nodejs::deploy recipe only to include other recipes. The opsworks_app_nodejs cookbook’s default recipe does the heavy lifting.

The slimmed-down version of the recipe looks like the following.

apps = search(:aws_opsworks_app, "deploy:true")

apps.each do |app|
  # Check the app out of its repository
  opsworks_scm_checkout app["shortname"] do
    # ...
  end

  # Create the deployment directory (app_deploy holds the target path;
  # its definition is omitted from this slimmed-down excerpt)
  directory app_deploy do
    # ...
  end

  # Copy app to deployment directory
  batch "copy #{app["shortname"]}" do
    code "Robocopy.exe ..."
  end

  # Run 'npm install'
  opsworks_nodejs_npm app["shortname"] do
    cwd app_deploy
  end

  # Generate web.config from a template, passing in the app's environment
  template "#{app['shortname']}/web.config" do
    variables({ :environment => app["environment"] })
  end

  # Register the IIS site and bring the app online
  powershell_script "register #{app["shortname"]}" do
    # ...
  end
end
Reviewing the code will help you understand how to write your own deployment cookbooks, so let’s walk through it. First, we use Chef search to fetch information about the apps to be deployed. The AWS OpsWorks User Guide lists all exposed attributes.

For each app, Chef then executes the following steps:

  • It checks out the app to the local file system using the opsworks_scm_checkout resource.
  • It creates the target directory and copies the app to that directory. To transfer only the files that have changed, it uses Robocopy.
  • Running npm install, it downloads all required third-party libraries. It does this by using the opsworks_nodejs_npm custom resource, which is defined in the opsworks_nodejs cookbook.
  • The web.config file is generated using a template, taking OpsWorks environment variables into account. The file configures the IIS-to-Node.JS integration.
  • A Windows PowerShell run registers the IIS site and brings the app online.

Check Out the Node.JS Demo App

Your booted instance should be online now:

To access the demo app, under Public IP, click the IP address. The app allows you to leave comments that other visitors can see.

Notice that the page is not static, but includes information about the hostname, the browser you used to request it, the system time, and the Node.JS version used. At the bottom of the page, you can see that the APP_ADMIN_EMAIL environment variable you configured in the app was picked up.

Leave a comment on the app, and choose Send. Because this is a minimal app for demo purposes only, the comment is saved on the instance in a plain JSON file.

To see the details of the customizations OpsWorks just applied to your instance, choose the hostname on the Layer overview page. On the very bottom of the Instance detail page, you will find the last 30 commands the instance received. To see the corresponding Chef log, choose show.


Using OpsWorks with Chef 12 on Windows gives you a reliable framework for customizing your Windows Server 2012 R2 instances and managing your apps. By attaching Chef recipes to lifecycle events, you can customize your instances. You can use open source community cookbooks or create your own cookbooks. Chef’s support for PowerShell allows you to reuse automation for Windows. OpsWorks’ built-in operational tools, like user management, permissions, and temporary RDP access, help you manage day-to-day operations.

The example showed how to deploy a Node.JS app using IIS. By referring to typical cookbooks that were included in the example like opsworks_nodejs, which is used to install Node.JS, or opsworks_app_nodejs, which is used to deploy the app, you can learn how to write your own cookbooks.

SANS Internet Storm Center, InfoCON: green: The WordPress Plugins Playground, (Mon, Sep 14th)

This post was syndicated from: SANS Internet Storm Center, InfoCON: green and was written by: SANS Internet Storm Center, InfoCON: green. Original post: at SANS Internet Storm Center, InfoCON: green

This morning, I had a quick look at my web server log file and searched for malicious activity. Attacks like brute-force generate a lot of entries and thus can be easily detected. Other scanners work below the radar and search for very specific vulnerabilities. In this case, a single request is often sent to the server and generates a simple 404 error without triggering any alert. My blog being based on the WordPress CMS, I searched for non-HTTP/200 hits for plugin URLs (/wp-content/plugins/).

CMS, or Content Management Systems, have become very popular today. It's easy to deploy a WordPress, Drupal or Joomla on top of a UNIX server, and there are also shared platforms which offer you some online space. If a CMS is delivered with standard options, it is easy for the owner to customize or to tune it... just like cars. Modern CMS offer a way to extend the features or the look'n'feel with plugins. From a security perspective, plugins are today the weakest point of a CMS. While most of the CMS source code is regularly audited and well maintained, it's not the same for their plugins. By deploying and using a plugin, you install third-party code into your website and grant some rights to it. Not all plugins are developed by skilled developers or with security in mind. Today, most vulnerabilities reported in CMS environments are due to plugins. A quick review of my logs showed:

  • 8000+ hits for uninstalled/non-existent plugins
  • 899 unique plugins tested

Just for information, I compiled a Top-20 of the tested plugins. If the popularity of a plugin is a good indicator, do not trust them! (Popularity != Security). WordPress also has a hardening guide. As general advice regarding 4xx HTTP errors, do not implement checks for single errors but search for multiple 4xx (or 5xx) errors generated in a short amount of time from a single IP address. This is helpful to detect ongoing scans! (A log management solution can do that very easily; a minimal script along these lines is sketched below.)
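Here is a minimal, illustrative Node.js sketch of that idea: it parses an access log in the common/combined format and flags any client IP that produced more than a threshold of 4xx/5xx responses. The log path and threshold are placeholders, and a real deployment would also bucket the hits by time window:

// Sketch: flag IPs generating many 4xx/5xx responses in an access log.
// The file name and threshold are placeholders for illustration.
var fs = require('fs');
var readline = require('readline');

var LOG_FILE = 'access.log';
var THRESHOLD = 50;

// Common/combined log format: IP ... [timestamp] "METHOD /path HTTP/1.x" STATUS ...
var LINE_RE = /^(\S+) \S+ \S+ \[[^\]]+\] "[^"]*" (\d{3}) /;

var errorsPerIp = {};

var rl = readline.createInterface({ input: fs.createReadStream(LOG_FILE) });

rl.on('line', function (line) {
  var m = LINE_RE.exec(line);
  if (!m) { return; }
  var ip = m[1];
  var status = parseInt(m[2], 10);
  if (status >= 400) {
    errorsPerIp[ip] = (errorsPerIp[ip] || 0) + 1;
  }
});

rl.on('close', function () {
  Object.keys(errorsPerIp).forEach(function (ip) {
    if (errorsPerIp[ip] >= THRESHOLD) {
      console.log(ip + ' generated ' + errorsPerIp[ip] + ' 4xx/5xx responses');
    }
  });
});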
Xavier Mertens
ISC Handler – Freelance Security Consultant

(c) SANS Internet Storm Center. Creative Commons Attribution-Noncommercial 3.0 United States License.