Posts tagged ‘ip address’

TorrentFreak: Court Orders Cox to Expose “Most Egregious” BitTorrent Pirates

This post was syndicated from: TorrentFreak and was written by: Ernesto. Original post: at TorrentFreak

Last year BMG Rights Management and Round Hill Music sued Cox Communications, arguing that the ISP fails to terminate the accounts of repeat infringers.

The companies, which control the publishing rights to songs by Katy Perry, The Beatles and David Bowie among others, claim that Cox has given up its DMCA safe harbor protections due to this inaction.

The case revolves around the “repeat infringer” clause of the DMCA, which prescribes that Internet providers must terminate the accounts of persistent pirates.

As part of the discovery process the music outfits requested details on the accounts which they caught downloading their content. In total there are 150,000 alleged pirates, but as a compromise BMG and Round Hill limited their initial request to 500 IP-addresses.

Cox refused to hand over the requested information arguing that the Cable Privacy Act prevents the company from disclosing this information.

The matter was discussed during a court hearing late last week. After a brief deliberation Judge John Anderson ruled that the ISP must hand over the personal details of 250 subscribers.

“Defendants shall produce customer information associated with the Top 250 IP Addresses recorded to have infringed in the six months prior to filing the Complaint,” Judge Anderson writes.

“This production shall include the information as requested in Interrogatory No.13, specifically: name, address, account number, the bandwidth speed associated with each account, and associated IP address of each customer.”

The order

The music companies also asked for the account information of the top 250 IP-addresses connected to the piracy of their files after the complaint was filed, but this request was denied. Similarly, if the copyright holders want information on any of the 149,500 other Cox customers they need a separate order.

The music companies previously informed the court that the personal details are crucial to prove their direct infringement claims, but it’s unclear how they plan to use the data.

While none of the Cox customers are facing any direct claims as of yet, it’s not unthinkable that some may be named in the suit to increase pressure on the ISP.

The full list of IP-addresses is available for download here (PDF).

Source: TorrentFreak, for the latest info on copyright, file-sharing, torrent sites and anonymous VPN services.

Schneier on Security: USPS Tracking Queries to Its Package Tracking Website

This post was syndicated from: Schneier on Security and was written by: schneier. Original post: at Schneier on Security

A man was arrested for drug dealing based on the IP address he used while querying the USPS package tracking website.

SANS Internet Storm Center, InfoCON: green: Upatre/Dyre malspam – Subject: eFax message from “unknown”, (Wed, May 20th)

This post was syndicated from: SANS Internet Storm Center, InfoCON: green and was written by: SANS Internet Storm Center, InfoCON: green. Original post: at SANS Internet Storm Center, InfoCON: green


Yesterday on 2015-05-19, I attended a meeting of my local chapter of the Information Systems Security Association (ISSA). During the meeting, one of the speakers discussed different levels of incident response by Security Operations Center (SOC) personnel. For non-targeted issues like botnet-based malicious spam (malspam) infecting a Windows host, you probably won't waste valuable time investigating every little detail. In most cases, you'll probably start the process to re-image the infected computer and move on. Other suspicious events await, and they might reveal a more serious, targeted threat.

However, we can still recover useful information about these malspam campaigns. Traffic patterns evolve, and the changes should be documented.

Today's example of malspam

Searching through my employer's blocked spam filters, I found the following Upatre/Dyre wave of malspam:

  • Date/Time: 2015-05-19 from 12:00 AM to 5:47 AM CST
  • Number of messages: 20
  • Sender (spoofed):
  • Subject: eFax message from "unknown"

    As shown in the above image, these messages were tailored for the recipients. You'll also notice some of the recipient email addresses contain random characters and numbers. Nothing new here. It's just one of the many waves of malspam our filters block every day. I reported a similar wave earlier this month [1].

    The attachment is a typical example of Upatre, much like we've seen before. Let's see what this malware does in a controlled environment.

    Indicators of compromise (IOC)

    I ran the malware on a physical host and generated the following traffic:

    • 2015-05-19 15:16:12 UTC – port 80 – – GET /
    • 2015-05-19 15:16:13 UTC – port 13410 – SYN packet to server, no response
    • 2015-05-19 15:16:16 UTC – port 443 – two SYN packets to server, no response
    • 2015-05-19 15:16:58 UTC – port 443 – two SYN packets to server, no response
    • 2015-05-19 15:17:40 UTC – port 443 – SSL traffic – approx 510 KB sent from server to infected host
    • 2015-05-19 15:17:56 UTC – port 3478 – UDP STUN traffic to:
    • 2015-05-19 15:17:58 UTC – port 443 – SSL traffic – approx 256 KB sent from server to infected host
    • 2015-05-19 15:18:40 UTC – port 13409 – SYN packet to server, no response

    In my last post about Upatre/Dyre, we saw Upatre-style HTTP GET requests to but no HTTP response from the server [1]. That's been the case for quite some time now.
    Shown above: Attempted TCP connections to the same IP address now reset (RST) by the server

    How can we tell this is Upatre?

    As I've mentioned before, this is a service run by one of my fellow Rackspace employees [2]. By itself, it's not malicious. Unfortunately, malware authors use this and similar services to check an infected computer's IP address.

    What alerts trigger on this traffic?

    Related files on the infected host include:

    • C:\Users\username\AppData\Local\PwTwUwWTWcqBhWG.exe (Dyre)
    • C:\Users\username\AppData\Local\ne9bzef6m8.dll
    • C:\Users\username\AppData\Local\Temp\~TP95D5.tmp (encrypted or otherwise obfuscated)
    • C:\Users\username\AppData\Local\Temp\Jinhoteb.exe (where Upatre copied itself after it was run)

    Some Windows registry changes for persistence:

    • Key name: HKEY_CURRENT_USER\Software\Microsoft\Windows\CurrentVersion\Run
    • Key name: HKEY_USERS\S-1-5-21-52162474-342682794-3533990878-1000\Software\Microsoft\Windows\CurrentVersion\Run
    • Value name: GoogleUpdate
    • Value type: REG_SZ
    • Value data: C:\Users\username\AppData\Local\PwTwUwWTWcqBhWG.exe

    A pcap of the infection traffic is available at:

    A zip file of the associated Upatre/Dyre malware is available at:

    The zip file is password-protected with the standard password. If you don't know it, email and ask.

    Final words

    This was yet another wave of Upatre/Dyre malspam. No real surprises, but it's always interesting to note the small changes from these campaigns.

    Brad Duncan, Security Researcher at Rackspace
    Blog: – Twitter: @malware_traffic



    (c) SANS Internet Storm Center. Creative Commons Attribution-Noncommercial 3.0 United States License.

Linux How-Tos and Linux Tutorials: Install Linux on a Modern WiFi Router: Linksys WRT1900AC and OpenWrt

This post was syndicated from: Linux How-Tos and Linux Tutorials and was written by: Ben Martin. Original post: at Linux How-Tos and Linux Tutorials

linksyswrt1900ac router

The Linksys WRT1900AC is a top-end modern router that gets even sweeter when you unleash Linux on it and install OpenWrt. OpenWrt includes the opkg package management system giving you easy access to a great deal of additional open source software to use on your router. If you want the pleasure of SSH access on your router, the ability to use iptables on connections passing through it, and the ability to run various small servers on it, the Linksys WRT1900AC and OpenWrt are a powerful combination.

From a hardware perspective, the Linksys WRT1900AC includes simultaneous dual band with support for 802.11n (2.4 GHz) at up to 600 Mbps and 802.11ac (5 GHz) at up to 1.3 Gbps. This lets you connect your older devices to 802.11n, while newer hardware can take advantage of the faster and less congested 802.11ac signal.

The router has a dual-core Marvell Armada 370/XP CPU with 256 MB of RAM and 128 MB of flash storage. You can also attach more storage to the WRT1900AC using its USB 3.0 and eSATA ports. When using OpenWrt you might also like to attach a webcam and printer to the router. The Linksys WRT1900AC has a 4 port gigabit switch and a gigabit upstream WAN port.

Initial setup

The stock firmware that comes with the Linksys WRT1900AC uses a very simple four-step procedure for initial setup. I only partially followed the recommended setup steps.

Step 1: Connect the antennae and power.

Step 2: Connect your upstream “Internet” link to the appropriate port on the router.

Step 3: Connect to the wifi signal from the router. You are given a custom wireless network name and password, which appear to be set differently for each individual router. This step nicely removes the security vulnerability inherent in initial router setup, because your router has a custom password right from the first time you power it on.

Step 4: Log in to and set up the router.

Instead of directly connecting to the Internet port, I used one of the 4 gigabit switch ports to attach the router to the local LAN. This meant the website in step 4 did not work for me. I could create an account on the smartwifi site, but it wanted me to be connected through the wifi router in order to adjust the settings.

You can, however, set up the router without needing to use any remote websites. The Linksys will appear at and connecting a laptop to the router while manually forcing the laptop's IP address to allowed me to access the router configuration page. At that stage the Connectivity/Local Network page lets you set the IP address of the router to something that fits into your LAN in a non-conflicting manner (and on the subnet you are using), and also disable the DHCP server if you already have one configured.

The initial screen I got when I was connecting directly using again wanted to take me off to a remote website, though you can click through to avoid having to do that if you want.

I tried to attach a 120 GB SanDisk Extreme SSD to test eSATA storage. Unfortunately, ext4 is not a supported filesystem for External Storage in the stock firmware: it could see /dev/sda1 but showed 0 KB used of 0 KB total space. Using a 16 GB flash pen drive formatted with a FAT filesystem was fine; the FTP service was started and the drive showed up as a Samba share, too.

Switching over to OpenWrt

At the time of writing the options for installing OpenWrt on the device were changing. There were four images which offered Linux kernel version 3.18 or 4.0 and some level of extra fixes and updates depending on the image you choose. I used Kaloz’s evolving snapshots of trunk linked at openwrt_wrt1900ac_snapshot.img.

Flashing the image onto the router is very simple as you use the same web interface that is used to manually install standard firmware updates. The fun, and moments of anxiety that occur after the router reboots are familiar to anyone who has ever flashed a device.

When the router reboots you will not have any wifi signals at all from it. The router will come up at a default IP address of The easiest method to talk to the router is to use a laptop and force the ethernet interface to an address of Using a trunk distribution of OpenWrt you are likely not to have a useful web interface on the router. Visiting will likely show an empty web server with no files.

When falling back to an SSH or telnet login to the router, another little surprise awaits. Trying to SSH into the router showed that a connection was possible, but I was unable to log in. OpenWrt sets the default root password to nothing, and SSH does not allow logins with an empty password, so connection seemed impossible. The saving grace is that telnet is also running on the router, and after installing a telnet client on the laptop I could log in with no password and no issue. Gaining access to the router again was a tremendous relief.

In the telnet session you can use the passwd command to set a password and then you should be able to login using SSH. I opted to test the SSH login while the telnet session was still active so that I had a fallback in case login failed for some reason.

To make the web interface operational you will have to install the LuCI package; the commands below will do that for you. If you need to use a proxy to get to the Internet, the http_proxy, https_proxy, and ftp_proxy environment variables will be of use. You might run into another little obstacle here: with the router on the subnet, it might not be able to talk to your existing network if that network is on the often-used subnet. I found that manually forcing the IP address to a 192.168.0.x address using ifconfig on br-lan changed the address for the bridged ports, and everything moved to that subnet. This is not a permanent change, so if it doesn't work, rebooting the router gets you back to again. It is easy to change this for good using LuCI once you have it installed.

export http_proxy=
opkg update
opkg install luci

Once you have LuCI installed, the rest of the router setup becomes point-and-click by visiting the web server on your router. To enable WiFi signals, go to the Network/Wifi page, which gives you access to the two radios: one for 2.4 GHz and one for the newer 5 GHz 802.11ac standard. Each radio is disabled by default. Oddly, when I clicked edit for a radio and scrolled down to the Interface Configuration and its Wireless Security tab, the default security was "No Encryption." I would have thought WPA2-PSK a better default choice. So getting a radio up and running involved setting an ESSID, checking the Mode (I used Access Point), and setting the Wireless Security to something other than nothing along with a password.

Many of the additional features you might install with opkg also have a LuCI support package available. For example, if you want to run a DLNA server on the Linksys WRT1900AC the minidlna package is available, and a luci-app-minidlna package will let you manage the server right from the LuCI web interface.

opkg install minidlna
opkg install luci-app-minidlna

Although the Linksys WRT1900AC has 128 MB of flash storage, it is broken up into many smaller partitions. The core /overlay partition had a size of only 24.6 MB with /tmp/syscfg being another 30 MB partition of which only around 300 KB was being used. While this provides plenty of space to install precompiled software, there isn’t enough space to install gcc onto the Linksys WRT1900AC/OpenWrt installation. I have a post up asking if there is a simple method to use more of the flash on the Linksys WRT1900AC from the OpenWrt file system. Another method to gain more space on an OpenWrt installation is to use an extroot, where the main system is stored on external storage. Perhaps with the Linksys WRT1900AC this could be a partition on an eSATA SSD.

If you don’t want to use extroot right away, another approach is to use a second ARM machine running a soft floating point distribution to compile static binaries. Those can be transferred over using rsync to the OpenWrt installation on the Linksys WRT1900AC. An ARM system uses either soft or hard floating point, and generally everything is compiled to work with one or the other. To see which version of floating point your hardware expects, you can use the readelf tool to inspect a few existing binaries, as shown below. Note the soft-float ABI line in the output.

root@linksys1900ac:~# readelf -a /bin/bash|grep ABI
  OS/ABI:                            UNIX - System V
  ABI Version:                       0
  Flags:                             0x5000202, has entry point, Version5 EABI, soft-float ABI

I tried to get push-button WPS setup to work from OpenWrt, without success. I had used that feature under the stock firmware, so the hardware supports it, and it makes connecting new devices to the router much simpler.

I also noticed that there are serial TTL headers on the Linksys WRT1900AC, and a post shows a method to reflash the firmware directly from uboot. I haven't tried this, but it is nice to see as a possible last-ditch method to resurrect a device with non-functioning firmware.

Another useful step is to set up users other than root on the OpenWrt installation, so that you have less risk of interrupting normal router activity. You might like to install the shadow utilities and sudo in order to do this, as shown below:

  root@wrt1900ac:/dev# opkg install sudo
  root@wrt1900ac:/dev# opkg install shadow-useradd shadow-groupadd
  root@wrt1900ac:/dev# sudo -u ben bash

I found that the fan came on when the Linksys WRT1900AC was booting into OpenWrt. The fan was turned off again soon after. The temperature readings are available using the sensors command as shown below.

root@wrt1900ac:~# sensors 
Adapter: mv64xxx_i2c adapter
ddr:          +52.8 C  
wifi:         +55.1 C  
Adapter: Virtual device
cpu:          +61.7 C  


On an LG G3 phone running Android 5, the Wifi Network Analyzer app indicated a speed of 433 Mbps with the phone about a foot from the router. That speed dropped back to around 200 Mbps when I moved several rooms away. The results were the same with the stock firmware and the OpenWrt image.

Running iperf (2.0.5) between the OpenWrt installation and a mid-2012 MacBook Air gave a bandwidth of 120 Mbps. The same client and server going through a D-Link DIR-855 at a similar distance on 5 GHz gave only 82 Mbps. Unfortunately, the Mac only has 802.11n; 802.11ac was added in the following year's model.

The LG G3 running Android 5 and connected to the 802.11ac network could reach 102 Mbps with the iperf app. These tests were run by starting the server with ‘-s’ and the client with ‘-c server-ip-address’. The server, running on the Linksys WRT1900AC/OpenWrt machine, chose a default TCP window size of 85 KB for these runs. Playing with window sizes, I could get about 10 percent additional speed on the G3 without too much effort.

I connected a 120 GB SanDisk Extreme SSD to test the eSATA performance. For sequential I/O, Bonnie++ could write about 89 Mbps, read 148 Mbps, and rewrite blocks at about 55 Mbps. Overall, about 5,200 seeks/s were achieved. This compares well for read and rewrite with the eSATA on the CuBox, which got 150 Mbps and 50 Mbps respectively. The CuBox could write at 120 Mbps, which is about 35 percent faster than the Linksys WRT1900AC. This was using the same ext4 filesystem on both machines; the drive was simply moved between them.

bonnie++ -n 0 -f -m Linksys1900ac -d `pwd` 

OpenSSL performance for digests was in a similar ballpark to the BeagleBone Black and CuBox i4Pro. For ciphers the story was very different depending on the algorithm: DES and AES-256 were considerably slower than on other ARM machines, whereas Blowfish and CAST ran at similar speeds to many other ARM CPUs. For 1,024-bit RSA signatures the Linksys WRT1900AC was around 25-30 percent of the performance of the more budget ARM CPUs.

digests linksys router  

ciphers linksys router

rsa sign Linksys router

Final Thoughts

It is great to see that LuCI gives easy access to the router features and even has “app” packages to let you configure some of the additional software that you might like to install on your OpenWrt device. OpenWrt images for the Linksys WRT1900AC are a relatively recent development. Once a recommended stable image with LuCI included is released it should mitigate some of the tense moments that reflashing can present at the moment. The 177+ pages on the OpenWrt forum for the Linksys WRT1900AC are testament to the community interest in running OpenWrt on the device.

It is wonderful to see the powerful hardware of the Linksys WRT1900AC being able to run OpenWrt. The pairing of Linux/FLOSS and contemporary hardware lets you customize the device to fit your usage needs. You can not only SSH in, but rsync is ready for you, and your programming language of choice can be installed for those little programs you want available all the time without leaving a full machine running to do it. There are also some applications which work well on the router itself, for example packet filtering: a single policy on the router can block tablets and phones from connecting to your work machines.

We would like to thank Linksys for providing the WRT1900AC hardware used in this article.

The Hacker Factor Blog: Email Delivery Errors

This post was syndicated from: The Hacker Factor Blog and was written by: The Hacker Factor Blog. Original post: at The Hacker Factor Blog

Email seems like a necessary evil. While I dislike spam, I like the non-immediate nature of the communication and the fact that messages can queue up (for better or for worse). And best of all, I can script actions for handling emails. If the email matches certain signatures, then the script can mark it as spam. If it comes from certain colleagues, then the script can mark it as urgent. In this regard, I think email is better than most communication methods.
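As a minimal sketch of that kind of scripted handling (the signature strings and trusted sender below are illustrative placeholders, not my actual rules), the classification step might look like this in Python:

```python
import email
from email import policy

# Hypothetical rules -- real filters would be far more elaborate.
SPAM_SIGNATURES = ("You have won", "eFax message from")
URGENT_SENDERS = ("colleague@example.com",)

def classify(raw_message: bytes) -> str:
    """Return a folder label ("spam", "urgent", or "inbox") for one message."""
    msg = email.message_from_bytes(raw_message, policy=policy.default)
    subject = msg.get("Subject", "")
    sender = msg.get("From", "")
    if any(sig in subject for sig in SPAM_SIGNATURES):
        return "spam"
    if any(addr in sender for addr in URGENT_SENDERS):
        return "urgent"
    return "inbox"
```

A delivery agent or an IMAP polling loop could call classify() on each new message and file it accordingly.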

Other forms of communication have their niche, but they also have their limitations. For example:

  • Phone. If the phone is there for my convenience, then why do I have to drop everything to answer it? (Dropping everything is not convenient.) And I have never had an answering machine show me the subject of a call before listening to it. Most answering machines require you to listen to messages in the order they were received.

  • Chat rooms. Does anyone still use IRC or Jabber/XMPP? Real-time chat rooms are good if everyone is online at the same time. But if we’re all online and working together on a project, then it is just as easy to do a conference call on the phone, via Skype, or using any of those other VoIP protocols. Then again, most chat rooms do have ways to log conversations — which can be great for documenting progress.
  • Twitter. You rarely see details in 140 characters. It’s also hard to go back and see previous messages. And if you are following lots of people (or a few prolific people), then you might miss something important. (I view Twitter like the UDP of social communications… it’s alright to miss a few packets.)
  • Text messages. These are almost as bad as Twitter. At least with Twitter, I’m not charged per message.
  • Message boards. Whether it’s forum software, comments on a blog, or a private wall page at Facebook, message boards are everywhere. You can set topics, have threaded replies, etc. However, messages are restricted to members. If I am not a member of the message board, then I cannot leave you a message. (Message boards without membership requirements are either moderated or flooded with spam.) And there may be no easy way for someone to search or recall previous discussions.
  • Private messages. LinkedIn, Facebook, Flickr, Imgur, Reddit… Most services have ways to send private messages between members. This is fine if everyone you know uses those services. But messages are limited to the service.

In contrast, email permits large messages to be sent in a timely manner to people who use different services. If I cannot get to the message immediately, then it will sit in my mailbox — I will get to it when it is convenient. I can use my home email system to write to friends, regardless of whether they use Gmail, Yahoo, or Facebook. There are even email-to-anything and anything-to-email crossover systems, like If-this-then-that. Even Google Voice happily sends me email when someone leaves a message. (Google Voice also tries to translate the voice mail to text in the email. I know it’s from my brother when Google writes, “Unable to transcribe this message.”)

Clear Notifications

As automated tasks go, it is very common to have email sent as a notification. My RAID emails me monthly with the current status (“everything’s shiny!”) When one of my Linux servers had a memory failure, it emailed me.

Over at FotoForensics, I built an internal messaging system. As an administrator, I get notices about certain system events. I’ve even linked these messages with email — administrators get email when something important is queued up and needs a response. This really helps simplify maintenance — I usually get an email from the system every few days.

When users submit comments, I get a message. And I’ve designed the system to allow me to respond to the user via email. (This is why the comment form asks for an email address.) For the FotoForensics Lab service, I even configured a double-opt-in system so users can request accounts without my assistance.
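A double-opt-in flow can be sketched as a signed-token scheme (the secret key and URL format here are hypothetical, not the FotoForensics implementation): the first email carries a link only the server could have generated, and the account is created only when that link is confirmed.

```python
import hashlib
import hmac

# Illustrative secret -- a real deployment would load this from protected storage.
SECRET_KEY = b"replace-with-a-real-server-secret"

def make_token(address: str) -> str:
    """Sign the address so the confirmation link cannot be forged."""
    return hmac.new(SECRET_KEY, address.encode(), hashlib.sha256).hexdigest()

def confirmation_link(address: str) -> str:
    """Build the link sent in the first (opt-in request) email."""
    return f"https://example.com/confirm?addr={address}&token={make_token(address)}"

def verify(address: str, token: str) -> bool:
    """Create the account only if the token matches the address."""
    return hmac.compare_digest(make_token(address), token)
```

Because the token is an HMAC over the address, the server does not even need to store pending requests; any tampered link simply fails verification.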

And therein lies a problem… The easier it is to send messages, the easier it is to abuse it with spam. Over the decades, people have employed layers upon layers of spam detectors and heuristics to mitigate abuse.

With all of the layers of anti-spam crap that people use, creating a system that can send a status email or a double-opt-in message to anyone who requests contact can get complicated. It's not as simple as calling a PHP function to send an email. In my experience, the PHP mail() function succeeds less than half of the time; its messages usually get discarded by spam filters.

Enabling Email

Even though my system works most of the time, I still have to fight with it occasionally to make sure that users receive responses to inquiries. Some of the battles I have fought so far:

  • Blacklists. Before you begin, make sure that your network address is not on any blacklists. If your network address was previously used by a spammer, then you’ve inherited a blacklisted address and nobody will receive your emails. Getting removed from blacklists ranges from difficult to impossible. And as long as your system is blacklisted, most people will not receive your emails.

  • Scripts. Lots of spammers use scripts. If you use a pre-packaged script to generate outgoing email, then it is likely to be identified as spam. This happens because different tools generate different signatures. If your tool matches the profile of a tool known to send spam, then it will be filtered. And chances are really good that spammers have already abused any pre-packaged scripts for sending spam.
  • Real mail. The email protocols (SMTP and ESMTP) are pretty straightforward. However, most scripts to send email only do the bare minimum. In particular, they usually don’t handle email errors very well. I ended up using a PHP script that communicates with my real mail server (Postfix). The postfix server properly delivers email and handles errors correctly. I’ve configured my postfix server to send email, but it never receives email. (Incoming email goes to a different mail server.)

At this point — with no blacklists, custom scripts, and a real outgoing email server — I was able to send email replies to about half of the people who requested service information. (Replying to people who fill out the contact form or who request a Lab account.) However, I still could not send email to anyone using Gmail, AOL, Microsoft Outlook, etc.

  • SPF. By itself, email is unauthenticated; anyone can send email as anyone. There are a handful of anti-spam additions to email that attempt to authenticate the sender. One of the most common is SPF, the Sender Policy Framework (originally "Sender Permitted From"). This is a DNS record (TXT field) that lists the network addresses permitted to send email on behalf of a domain. If the recipient server sees that the sender does not match the SPF record, then the message can be immediately discarded as spam.

    Many professional email services require an SPF record. Without it, they will assume that the email is unauthenticated and from a spammer. Enabling SPF approaches the 90% deliverable mark. Email can be delivered to Gmail, but not AOL or anyone using the Microsoft Outlook service.
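    As a concrete sketch (the domain and address are placeholders, not my records), an SPF record is a single TXT entry in the domain's DNS zone that authorizes one sending host and tells recipients to reject everything else:

```
example.com.  IN  TXT  "v=spf1 ip4:203.0.113.25 -all"
```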

  • Reverse hostnames. When emailing users at AOL, the AOL server would respond with a cryptic error message:

    521 5.2.1 : AOL will not accept delivery of this message.

    This is not one of AOL’s documented error codes. It took a lot of research, but I finally discovered that this is related to the reverse network address. Both AOL and Microsoft require the sender’s reverse hostname to resolve to the sender’s domain name. (Or in the case of AOL, it can resolve to anything except an IP address. If a lookup of your network address returns a hostname with the network address in it, then AOL will reject the email.) If you have a residential service (like Comcast or Verizon), then the reverse DNS lookup will not be permitted — you cannot send email to AOL directly from most residential ISPs. Fortunately, my hosting provider for FotoForensics was able to set my reverse DNS so I could send email from the FotoForensics server.

  • Microsoft. With everything else done, I could send email to all users except those who use the Microsoft Outlook service. The error message Microsoft returns says (with recipient information redacted):
    <recipient@recipient.domain>: host[213.x.x.x] said: 550 5.7.1
    Service unavailable; Client host [65.x.x.x] blocked using FBLW15; To
    request removal from this list please forward this message to (in reply to RCPT TO command)

    This cryptic warning is Microsoft’s way of saying that I need to contact them first and get permission to email their users.

    In my experience, writing in to ask permission gets you nowhere. Most services won't answer the phone, will ignore emails about delivery issues, and won't help you at all. With Microsoft, however, I really had no other option; they didn't give me any other way to contact them.

    With nothing left to lose, I bounced the entire email with the error message, original email, and headers, to Microsoft. I was actually amazed when I received an automated email with a trouble ticket number and telling me to wait 24 hours. I was even more amazed when, after 10 hours, I received a confirmation that the block was removed. I resent the FotoForensics contact form reply to the user… and it was delivered.
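The reverse-hostname test described above can be approximated with a small helper (a sketch; the matching heuristic is an assumption based on the AOL behavior, and in practice the hostname would come from a reverse lookup such as socket.gethostbyaddr):

```python
def ptr_embeds_ip(hostname: str, ip: str) -> bool:
    """True if the PTR hostname embeds the IP address itself
    (e.g. 203-0-113-25.res.example.net), the pattern AOL rejects."""
    parts = ip.split(".")
    candidates = ("-".join(parts), ip, "".join(parts))
    return any(c in hostname for c in candidates)
```

A sender could run this check against its own address before going live, rather than discovering the problem through cryptic 521 errors.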

While I am thrilled to see that my server can now send replies to requests at every major service, I certainly hope other services do not adopt the Microsoft method. If my server needs to send replies to users at 100 different domains, then I do not want to spend time contacting each domain first and begging for permission to contact their users.

(Fortunately, this worked. If writing to Microsoft had not worked, then I was prepared to detect email addresses that use Outlook as a service and just blacklist them. “Please use a different email service since your provider will not accept email from us.”)

The dog ate it

While email is a convenient form of communication, I still have no idea whether I’ve fixed all of the delivery issues. Many sites will silently drop email rather than sending back a delivery error notice. Although I believe my outgoing email system now works with Gmail, Microsoft, Yahoo, AOL, and most other providers, the message may still be filtered somewhere down the line. (Email lacks a reliable delivery confirmation system. Hacks like web bugs and return receipts are unsupported by many email services.) It’s very possible for a long reply to never reach the recipient, and I’ll never know it.

Currently, the site sends about a half-dozen emails per day (max). These are responses to removal and unban requests, replies to comments, and double-opt-in messages (you requested an account; click on this link to confirm and create the account). I honestly never see a future when I will use email to promote new services or features. (Having spent decades tracking down spammers and developing anti-spam solutions, I cannot see myself joining the dark side.)

Of course, email is not the only option for communication. I’ve just started learning about WebRTC and HTML5 — I want to be able to give online training sessions and host voice calls via the web browser.

Schneier on Security: More on the NSA’s Capabilities

This post was syndicated from: Schneier on Security and was written by: schneier. Original post: at Schneier on Security

Ross Anderson summarizes a meeting in Princeton where Edward Snowden was “present.”

Third, the leaks give us a clear view of an intelligence analyst’s workflow. She will mainly look in Xkeyscore which is the Google of 5eyes comint; it’s a federated system hoovering up masses of stuff not just from 5eyes own assets but from other countries where the NSA cooperates or pays for access. Data are “ingested” into a vast rolling buffer; an analyst can run a federated search, using a selector (such as an IP address) or fingerprint (something that can be matched against the traffic). There are other such systems: “Dancing oasis” is the middle eastern version. Some xkeyscore assets are actually compromised third-party systems; there are multiple cases of rooted SMS servers that are queried in place and the results exfiltrated. Others involve vast infrastructure, like Tempora. If data in Xkeyscore are marked as of interest, they’re moved to Pinwale to be memorialised for 5+ years. This is one function of the MDRs (massive data repositories, now more tactfully renamed mission data repositories) like Utah. At present storage is behind ingestion. Xkeyscore buffer times just depend on volumes and what storage they managed to install, plus what they manage to filter out.

As for crypto capabilities, a lot of stuff is decrypted automatically on ingest (e.g. using a “stolen cert,” presumably a private key obtained through hacking). Else the analyst sends the ciphertext to CES and they either decrypt it or say they can’t. There’s no evidence of a “wow” cryptanalysis; it was key theft, or an implant, or a predicted RNG or supply-chain interference. Cryptanalysis has been seen of RC4, but not of elliptic curve crypto, and there’s no sign of exploits against other commonly used algorithms. Of course, the vendors of some products have been coopted, notably skype. Homegrown crypto is routinely problematic, but properly implemented crypto keeps the agency out; gpg ciphertexts with RSA 1024 were returned as fails.


What else might we learn from the disclosures when designing and implementing crypto? Well, read the disclosures and use your brain. Why did GCHQ bother stealing all the SIM card keys for Iceland from Gemalto, unless they have access to the local GSM radio links? Just look at the roof panels on US or UK embassies, that look like concrete but are actually transparent to RF. So when designing a protocol ask yourself whether a local listener is a serious consideration.


On the policy front, one of the eye-openers was the scale of intelligence sharing — it’s not just 5 eyes, but 15 or 35 or even 65 once you count all the countries sharing stuff with the NSA. So how does governance work? Quite simply, the NSA doesn’t care about policy. Their OGC has 100 lawyers whose job is to “enable the mission”; to figure out loopholes or new interpretations of the law that let stuff get done. How do you restrain this? Could you use courts in other countries, that have stronger human-rights law? The precedents are not encouraging. New Zealand’s GCSB was sharing intel with Bangladesh agencies while the NZ government was investigating them for human-rights abuses. Ramstein in Germany is involved in all the drone killings, as fibre is needed to keep latency down low enough for remote vehicle pilots. The problem is that the intelligence agencies figure out ways to shield the authorities from culpability, and this should not happen.


The spooks’ lawyers play games saying for example that they dumped content, but if you know IP address and file size you often have it; and IP address is a good enough pseudonym for most intel / LE use. They deny that they outsource to do legal arbitrage (e.g. NSA spies on Brits and GCHQ returns the favour by spying on Americans). Are they telling the truth? In theory there will be an MOU between NSA and the partner agency stipulating respect for each others’ laws, but there can be caveats, such as a classified version which says “this is not a binding legal document.” The sad fact is that law and legislators are losing the capability to hold people in the intelligence world to account, and also losing the appetite for it.

Worth reading in full.

Krebs on Security: Who’s Scanning Your Network? (A: Everyone)

This post was syndicated from: Krebs on Security and was written by: BrianKrebs. Original post: at Krebs on Security

Not long ago I heard from a reader who wanted advice on how to stop someone from scanning his home network, or at least recommendations about to whom he should report the person doing the scanning. I couldn’t believe that people actually still cared about scanning, and I told him as much: These days there are countless entities — some benign and research-oriented, and some less benign — that are continuously mapping and cataloging virtually every device that’s put online.

One of the more benign is a data repository of research findings collected through continuous scans of the public Internet. The project, hosted by the ZMap Team at the University of Michigan, includes huge, regularly updated results grouped around scanning for Internet hosts running some of the most commonly used “ports” or network entryways, such as Port 443 (think Web sites protected by the lock icon denoting SSL/TLS Web site encryption); Port 21, or file transfer protocol (FTP); and Port 25, or simple mail transfer protocol (SMTP), used by many businesses to send email.
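The “open door” check at the heart of such a scan can be sketched in a few lines of Python. This is a toy illustration only (real scanners like ZMap send raw packets at enormous scale); to stay polite, it probes a listener we create ourselves rather than anyone else’s host:

```python
import socket

def port_open(host, port, timeout=0.5):
    """Return True if a TCP connection to (host, port) succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:          # refused, timed out, unreachable, etc.
        return False

# Demonstrate against a listener we control, not someone else's network:
srv = socket.socket()
srv.bind(("127.0.0.1", 0))   # let the OS pick a free port
srv.listen(1)
open_port = srv.getsockname()[1]

print(port_open("127.0.0.1", open_port))   # True: something is listening
srv.close()
print(port_open("127.0.0.1", open_port))   # False: the door is closed again
```

A full scan is then just this check repeated across an address range and a port list, which is why hiding unprotected devices behind obscurity no longer works.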

When I was first getting my feet wet on the security beat roughly 15 years ago, the practice of scanning networks you didn’t own looking for the virtual equivalent of open doors and windows was still fairly frowned upon — if not grounds to get one into legal trouble. These days, complaining about being scanned is about as useful as griping that the top of your home is viewable via Google Earth. Trying to put devices on the Internet and then hoping that someone or something won’t find them is one of the most futile exercises in security-by-obscurity.

To get a gut check on this, I spoke at length last week with University of Michigan researchers Michael D. Bailey (MB) and Zakir Durumeric (ZD) about their ongoing and very public project to scan all the Internet-facing things. I was curious to get their perspective on how public perception of widespread Internet scanning has changed over the years, and how targeted scanning can actually lead to beneficial results for Internet users as a whole.

MB: Because of the historic bias against scanning and this debate between disclosure and security-by-obscurity, we’ve approached this very carefully. We certainly think that the benefits of publishing this information are huge, and that we’re just scratching the surface of what we can learn from it.

ZD: Yes, there are close to two dozen papers published now based on broad, Internet-wide scanning. People who are more focused on comprehensive scans tend to be the more serious publications that are trying to do statistical or large-scale analyses that are complete, versus just finding devices on the Internet. It’s really been in the last year that we’ve started ramping up and adding scans [to the site] more frequently.

BK: What are your short- and long-term goals with this project?

ZD: I think long-term we do want to add coverage of additional protocols. A lot of what we’re focused on is different aspects of a protocol. For example, if you’re looking at hosts running the “https://” protocol, there are many different ways you can ask questions depending on what perspective you come from. You see different attributes and behavior. So a lot of what we’ve done has revolved around https, which is of course hot right now within the research community.

MB: I’m excited to add other protocols. There are a handful of protocols that are critical to the operation of the Internet, and I’m very interested in understanding the deployment of DNS, BGP, and TLS’s interaction with SMTP. Right now, there’s a pretty long tail to all of these protocols, and so that’s where it starts to get interesting. We’d like to start looking at things like programmable logic controllers (PLCs) and things that are responding from industrial control systems.

ZD: One of the things we’re trying to pay more attention to is the world of embedded devices, or this ‘Internet of Things’ phenomenon. As Michael said, there are also industrial protocols, and there are different protocols that these embedded devices are supporting, and I think we’ll continue to add protocols around that class of devices as well because from a security perspective it’s incredibly interesting which devices are popping up on the Internet.

BK: What are some of the things you’ve found in your aggregate scanning results that surprised you?

ZD: I think one thing in the “https://” world that really popped out was we have this very large certificate authority ecosystem, and a lot of the attention is focused on a small number of authorities, but actually there is this very long tail — there are hundreds of certificate authorities that we don’t really think about on a daily basis, but that still have permission to sign for any Web site. That’s something we didn’t necessarily expect. We knew there were a lot, but we didn’t really know what would come up until we looked at those.

There also was work we did a couple of years ago on cryptographic keys and how those are shared between devices. In one example, primes were being shared between RSA keys, and because of this we were able to factor a large number of keys, but we really wouldn’t have seen that unless we started to dig into that aspect [their research paper on this is available here].
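The shared-prime weakness they describe can be demonstrated with small numbers. This is a toy sketch (the primes below are tiny placeholders; real RSA moduli are hundreds of digits), but the mechanics are the same: if two moduli share a prime factor, a single GCD computation factors both keys.

```python
import math

# Two "RSA moduli" that accidentally share the prime p (demonstration values only)
p, q1, q2 = 1000003, 1000033, 1000037
n1, n2 = p * q1, p * q2

shared = math.gcd(n1, n2)          # GCD of the two public moduli
print(shared == p)                 # True: the shared prime falls right out
print(n1 // shared == q1)          # True: dividing recovers the other factor
```

Computing GCDs pairwise across millions of collected public keys is cheap, which is why this large-scale scan surfaced a weakness that no single-key inspection could reveal.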

MB: One of things we’ve been surprised about is when we measure these things at scale in a way that hasn’t been done before, often times these kinds of emergent behaviors become clear.

BK: Talk about what you hope to do with all this data.

ZD: We were involved a lot in the analysis of the Heartbleed vulnerability. One of the surprising developments there wasn’t that there were lots of vulnerable people; it was interesting to see who patched, how, and how quickly. What we found was that by taking the data from these scans and actually sending vulnerability notifications to everybody, we were able to increase patching for the Heartbleed bug by 50 percent. So there was an interesting kind of surprise there: not what you learn from looking at the data, but what actions you take from that analysis. And that’s something we’re incredibly interested in: how can we spur progress within the community to improve security, whether that be through vulnerability notification or helping with configurations.

BK: How do you know your notifications helped speed up patching?

MB: With the Heartbleed vulnerability, we took the known vulnerable population from scans, and ran an A/B test. We split the population that was vulnerable in half and notified one half of the population, while not notifying the other half, and then measured the difference in patching rates between the two populations. We did end up after a week notifying the second population…the other half.

BK: How many people did you notify after going through the data from the Heartbleed vulnerability scanning? 

ZD: We took everyone on the IPv4 address space, found those that were vulnerable, and then contacted the registered abuse contact for each block of IP space. We used data from 200,000 hosts, which corresponded to 4,600 abuse contacts, and then we split those into an A/B test. [Their research on this testing was published here].
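The A/B split they describe can be sketched in a few lines of Python. The contact addresses below are placeholders (the researchers’ real data came from abuse contacts for vulnerable IP blocks); the point is simply a reproducible random split into a notified group and a control group whose patch rates can later be compared:

```python
import random

# Hypothetical stand-ins for the 4,600 abuse contacts mentioned above
contacts = [f"abuse@as{i}.example" for i in range(4600)]

rng = random.Random(42)        # fixed seed so the experiment is reproducible
rng.shuffle(contacts)
half = len(contacts) // 2
notified, control = contacts[:half], contacts[half:]

print(len(notified), len(control))   # 2300 2300
```

Because only the notified half receives email, any later difference in patch rates between the two groups can be attributed to the notification itself.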

So, that’s the other thing that’s really exciting about this data. Notification is one thing, but the other is we’ve been building models that are predictive of organizational behavior. So, if you can watch, for example, how an organization runs their Web server, how they respond to certificate revocation, or how fast they patch — that actually tells you something about the security posture of the organization, and you can start to build models of risk profiles of those organizations. It moves away from this sort of patch-and-break or patch-and-pray game we’ve been playing. So, that’s the other thing we’ve been starting to see, which is the potential for being more proactive about security.

BK: How exactly do you go about the notification process? That’s a hard thing to do effectively and smoothly even if you already have a good relationship with the organization you’re notifying….

MB: I think one of the reasons why the Heartbleed notification experiment was so successful is that we did notifications on the heels of a broad vulnerability disclosure. The press and the general atmosphere and culture provided the impetus for people to be excited about patching. The responses we received from those notifications were overwhelmingly positive. A lot of people we reached out to said, “Hey, this is great; please scan me again, and let me know if I’m patched.” Pretty much everyone was excited to have the help.

Another interesting challenge was that we did some filtering as well in cases where the IP address had no known patches. So, for example, where we got information from a national CERT [Computer Emergency Response Team] that this was an embedded device for which there was no patch available, we withheld that notification because we felt it would do more harm than good since there was no path forward for them. We did some aggregation as well, because it was clear there were a lot of DSL and dial-up pools affected, and we did some notifications to ISPs directly.

BK: You must get some pushback from people about being included in these scans. As for the idea that scanning is inherently bad or should somehow prompt some kind of reaction in and of itself: do you think that ship has sailed?

ZD: There is some small subset that does have issues. What we try to do with this is be as transparent as possible. All of the hosts we use for scanning, if you look at them in WHOIS records or just visit them with a browser, will tell you right away that the machine is part of this research study, here’s the information we’re collecting, and here’s how you can be excluded. A very small percentage of people who visit that page will read it and then contact us and ask to be excluded. If you send us an email [and request removal], we’ll remove you from all future scans. A lot of this comes down to education; a lot of people to whom we explain our process and motives are okay with it.

BK: Are those that object and ask to be removed more likely to be companies and governments, or individuals?

ZD: It’s a mix of all of them. I do remember offhand there were a fair number of academic institutions and government organizations, but there were a surprising number of home users. Actually, when we broke down the numbers last year (PDF), the largest category was small to mid-sized businesses. This time last year, we had excluded only 157 organizations that had asked for it.

BK: Was there any pattern to those that asked to be excluded?

ZD: I think that actually is somewhat interesting: The exclusion requests aren’t generally coming from large corporations, which likely notice our scanning but don’t have an issue with it. A lot of emails we get are from these small businesses and organizations that really don’t know how to interpret their logs, and often times just choose the most conservative route.

So we’ve been scanning for several years now, and I think when we originally started scanning, we expected all the people who were watching for this to contact us at once and say, “Please exclude us.” And then we sort of expected that the number of people asking to be excluded would plateau, and we wouldn’t have problems again. But what we’ve seen is almost the exact opposite. We still get [exclusion request] emails each day, but what we’re really finding is that people aren’t discovering these scans proactively. Instead, they’re going through their logs while trying to troubleshoot some other issue, they see a scan coming from us there, and they don’t know who we are or why we’re contacting their servers. And so it’s not the organizations that are watching, it’s the ones who really aren’t watching who are contacting us.

BK: Do you guys go back and delete historic records associated with network owners that have asked to be excluded from scans going forward?

ZD: At this point we haven’t gone back and removed data. One reason is that there are published research results based on those data sets, and so it’s very hard to change that information after the fact: if another researcher went back and tried to confirm an experiment or perform something similar, there would be no easy way of doing that.

BK: Is this what you’re thinking about for the future of your project? How to do more notification and build on the data you have for those purposes? Or are you going in a different or additional direction?

MB: When I think about the ethics of this kind of activity, I have a very utilitarian view: I’m interested in doing as much good as we possibly can with the data we have. I think that lies in notifications, being proactive, helping organizations that run networks to better understand what their external posture looks like, and in building better safe defaults. But I’m most interested in a handful of core protocols that are under-serviced and not well understood. And so I think we should spend the majority of our effort focusing on a small handful of those, including BGP, TLS, and DNS.

ZD: In many ways, we’re just kind of at the tip of this iceberg. We’re just starting to see what types of security questions we can answer from these large-scale analyses. I think in terms of notifications, it’s very exciting that there are things beyond the analysis that we can use to actually trigger actions, but that’s something that clearly needs a lot more analysis. The challenge is learning how to do this correctly. Every time we look at another protocol, we start seeing these weird trends and behavior we never noticed before. With every protocol we look at there are these endless questions that seem to need to be answered. And at this point there are far more questions than we have hours in the day to answer.

Raspberry Pi: Another Raspbian Desktop User Interface Update

This post was syndicated from: Raspberry Pi and was written by: Simon Long. Original post: at Raspberry Pi

Hopefully the dust has now settled on the first batch of changes to the Raspbian desktop which were made available at Christmas, and you’ve either a) decided you like them, or b) decided you hate them and have rolled back to a previous version, never to upgrade again. If you are in group b), I suggest you stop reading now…

The next set of desktop changes are now available. The overall look and feel hasn’t changed, but there have been a few major new features added, and a lot of small tweaks and bug fixes. The new features include the following:

New Wifi Interface

Connecting to a wifi network has been made much simpler in most cases by including a version of Roy Marples’ excellent dhcpcd and dhcpcd-ui packages. If you look towards the right-hand end of the menu bar, there is now a network icon. This shows the current state of your network connection – if a wifi connection is in use, it shows the signal strength; if not, it shows the connection state of your Ethernet connection. While connections are being set up, the icon flashes to indicate that a connection is being established.

To connect to a wifi network (assuming you have a USB wifi dongle connected to your Pi), left-click on the network icon and you should see a list of all visible wireless networks. Select the one you want, and if it is secured, you will be prompted for the network password. Enter the password, press OK and wait a few seconds for the wifi icon to stop flashing; if you then left-click the icon again, the selected network should be shown at the top of the list with a tick next to it, and you are then good to go.


If you right-click the network icon and choose the Wifi Networks (dhcpcdui) Settings option from the pop-up menu, you can manually enter IP addresses – choose the SSID option from the Configure drop-down, select your network and enter the IP addresses you want. From the same dialog, you can manually enter an IP address for your Ethernet connection by selecting the Interface option from the drop-down and choosing eth0 as the interface. Once you’ve clicked Apply, you may need to reboot your Pi for the new settings to take effect.
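For those who prefer editing files to clicking through the dialog, dhcpcd can also be given a static address directly in its configuration file. A minimal sketch of `/etc/dhcpcd.conf` might look like the following (the interface name and all addresses here are placeholders; substitute values for your own network):

```
# /etc/dhcpcd.conf -- example static configuration (placeholder addresses)
interface eth0
static ip_address=192.168.1.50/24
static routers=192.168.1.1
static domain_name_servers=192.168.1.1
```

As with the dialog, a reboot is the simplest way to make sure the new settings take effect.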

Speaking of rebooting, the first time you plug in a WiFi dongle, you will need to reboot your Pi to have it recognised by the new UI. Once that is done, with most dongles, you should be able to hot-plug them, or even to have two dongles connected at once if you wish to connect to two networks. (A few dongles won’t work when hot-plugged – the Edimax dongle is one, due to a bug in its driver. The official Raspberry Pi dongle and the PiHut dongle should both hot-plug without problems.)

If you prefer to use the original wpa_gui application, it is still installed – just type:

wpa_gui

in a terminal window to open it.

New Volume / Audio Interface

Next to the network interface icon on the menu bar, you will see a volume icon. This works exactly the same way as on Windows or MacOS – left-click it to drop down the volume control, which you can then slide with the mouse. There’s a mute checkbox at the bottom to quickly mute or unmute the audio output. (Note that not all devices support mute, so with some external soundcards this may be greyed-out.)

If you right-click the volume icon, a pop-up menu appears to allow you to select which audio output is used – on a standard Pi, you have the choice of HDMI or Analog. If you have installed a USB audio interface, or an external soundcard, these will also appear as options in this menu.

To allow more detailed control of any external audio devices, we’ve included a custom version of xfce4-mixer, the mixer dialog from the XFCE desktop environment – you can access this either under Device Settings from the volume right-click menu (the option only appears if there is an external audio device connected) or from the Preferences section of the main menu, where it is listed as Audio Device Settings.


From this dialog, select the device you want to control from the drop-down at the top, and then press the Select Controls button to choose which of the controls the device offers that you want to display. Pressing the Make Default button on this window has the same effect as choosing an output source in the volume right-click menu.

New Appearance Settings

There is a new custom Appearance Settings dialog, available from the Preferences section of the main menu, which works with a new custom theme for Openbox and GTK called PiX. This allows you to set how your desktop looks (colour, background picture), how the menu bar looks (including an easy option to move it to the top or bottom of the screen) and how window title bars look.


This groups together the most useful settings from the original Openbox and LXSession appearance dialog boxes, which are now hidden in the menu. They are still on the system, so if you would prefer to use them, access them from a terminal window by typing obconf or lxappearance, respectively. (Note that using both the new dialog and the old obconf and lxappearance dialogs may result in strange combinations of settings; I’d strongly recommend you use one or the other, not all three…)

Other Stuff

There are a lot of other small changes, mostly to the appearance rather than functionality, and some minor bug fixes. For example, dialog buttons have a cleaner appearance due to the removal of icons and shortcut underlines. (The shortcuts are still there – just hold down the Alt key and they will appear on the buttons.)

By popular demand, you can now set the foreground and background colours of the CPU monitor on the menu bar – just right-click it and choose CPU Usage Monitor Settings, and you can colour it however you like. Also back by popular demand, the ability to use an arbitrary format for the menu bar clock (right-click and choose Digital Clock Settings) – it should also now resize and align properly if you do so…

Various other applications have been updated – the Epiphany browser has had speed and compatibility improvements; the Minecraft Python API is now compatible with Python 3, and there is a new version of Sonic Pi.

How Do I Get It?

If this all sounds good to you, it’s very easy to add to your Pi. Just open a terminal and type:

sudo apt-get update

sudo apt-get dist-upgrade

Updating will overwrite your desktop configuration files (found in /home/pi/.config). Just in case you had customised something in there for your own use, the original files are backed up as part of the upgrade process, and can be found in /home/pi/oldconffiles.

The new network interface is a separate package – we know that some people will have carefully customised their network setups, so you may not want the new changes, as they will overwrite your network configuration. If you do want the new network interface, type:

sudo apt-get install raspberrypi-net-mods

You will be prompted as part of the install to confirm that you want the new version of the network configuration file – press Y when prompted. (Or N if you’ve had a last-minute change of mind!)

As ever, user interface design isn’t an exact science – my hope is that the changes make your Pi nicer to use, but feedback (good or bad) is always welcome, so do post a comment and let me know your thoughts!

SANS Internet Storm Center, InfoCON: green: The Art of Logging, (Thu, May 7th)

This post was syndicated from: SANS Internet Storm Center, InfoCON: green and was written by: SANS Internet Storm Center, InfoCON: green. Original post: at SANS Internet Storm Center, InfoCON: green

[This is a Guest Diary by Xavier Mertens]

Handling log files is not a new topic. People have known for a long time that taking care of your logs is a must-have: they are very valuable when you need to investigate an incident. But if collecting events and storing them for later processing is one point, events must also be properly generated so that suspicious activities can be investigated! Take a firewall as an example: logging all the accepted traffic is one step, but what’s really important is to log all the rejected traffic. Most modern security devices (IDS, firewalls, web application firewalls, …) can integrate dynamic blacklists maintained by external organizations, and there are plenty of useful blacklists on the Internet with IP addresses, domain names, etc.

Consider a web application firewall with this kind of feature. If it drops all connections from a (reported as) suspicious IP address at the very beginning of the policy, it does so without recording any further details. If we block the malicious IP addresses at the beginning of the policy, we’ll never know which kind of attack was tried. By blocking malicious IP addresses at the end instead, we know that if an IP is blocked, our policy was not effective enough to block the attack! Maybe a new type of attack was tried and we need to add a new pattern. Blocking attackers is good, but it’s more valuable to know why they were blocked.
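The ordering argument can be illustrated with a small first-match evaluator in Python (a toy sketch with made-up rule names and a trivial rule format, not any real WAF API): rules are evaluated top to bottom, so placing the blacklist rule last means a request from a blacklisted IP that carries a known attack is logged under the specific attack pattern it matched, not under the generic blacklist entry.

```python
def evaluate(request, rules):
    """Return the name of the first matching rule, or 'allow'.

    In a real device, the matched rule name is what ends up in the log.
    """
    for name, predicate in rules:
        if predicate(request):
            return name
    return "allow"

blacklist = {"203.0.113.7"}                       # externally maintained list
rules = [
    ("sqli-pattern", lambda r: "' OR 1=1" in r["payload"]),
    ("blacklist",    lambda r: r["src_ip"] in blacklist),   # last on purpose
]

attack = {"src_ip": "203.0.113.7", "payload": "x' OR 1=1--"}
print(evaluate(attack, rules))    # sqli-pattern: we learn *what* was tried

probe = {"src_ip": "203.0.113.7", "payload": "harmless"}
print(evaluate(probe, rules))     # blacklist: only now does the generic rule fire
```

With the blacklist first, both requests would have logged only “blacklist”, and the SQL injection attempt would have gone unnoticed.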

(c) SANS Internet Storm Center. Creative Commons Attribution-Noncommercial 3.0 United States License.

Anchor Cloud Hosting: Rebuilding An OpenStack Instance and Keeping the Same Fixed IP

This post was syndicated from: Anchor Cloud Hosting and was written by: Craige McWhirter. Original post: at Anchor Cloud Hosting

OpenStack and in particular the compute service, Nova, has a useful rebuild function that allows you to rebuild an instance from a fresh image while maintaining the same fixed and floating IP addresses, amongst other metadata.

However if you have a shared storage back end, such as Ceph, you’re out of luck as this function is not for you.

Fortunately, there is another way.

Prepare for the Rebuild:

Note the fixed IP address of the instance that you wish to rebuild and the network ID:

$ nova show demoinstance0 | grep network
| DemoTutorial network                       |,                     |
$ export FIXED_IP=
$ neutron floatingip-list | grep
| ee7ecd21-bd93-4f89-a220-b00b04ef6753 |                  |      |
$ export FLOATIP_ID=ee7ecd21-bd93-4f89-a220-b00b04ef6753
$ neutron net-show DemoTutorial | grep " id "
| id              | 9068dff2-9f7e-4a72-9607-0e1421a78d0d |
$ export OS_NET=9068dff2-9f7e-4a72-9607-0e1421a78d0d

You now need to delete the instance that you wish to rebuild:

$ nova delete demoinstance0
Request to delete server demoinstance0 has been accepted.

Manually Prepare the Networking:

Now you need to re-create the port and re-assign the floating IP, if it had one:

$ neutron port-create --name demoinstance0 --fixed-ip ip_address=$FIXED_IP $OS_NET
Created a new port:
| Field                 | Value                                                                                 |
| admin_state_up        | True                                                                                  |
| allowed_address_pairs |                                                                                       |
| binding:vnic_type     | normal                                                                                |
| device_id             |                                                                                       |
| device_owner          |                                                                                       |
| fixed_ips             | {"subnet_id": "eb5db27f-edad-480e-92cb-1f8fec8848a8", "ip_address": ""}  |
| id                    | c1927578-451b-4682-8888-55c7163898a4                                                  |
| mac_address           | fa:16:3e:5a:39:67                                                                     |
| name                  | demoinstance0                                                                         |
| network_id            | 9068dff2-9f7e-4a72-9607-0e1421a78d0d                                                  |
| security_groups       | 5898c15a-4670-429b-a414-9f59671c4d8b                                                  |
| status                | DOWN                                                                                  |
| tenant_id             | gsu7j52c50804cf3aad71b92e6ced65e                                                      |
$ export OS_PORT=c1927578-451b-4682-8888-55c7163898a4
$ neutron floatingip-associate $FLOATIP_ID $OS_PORT
Associated floating IP ee7ecd21-bd93-4f89-a220-b00b04ef6753
$ neutron floatingip-list | grep $FIXED_IP
| ee7ecd21-bd93-4f89-a220-b00b04ef6753 |   |     | c1927578-451b-4682-8888-55c7163898a4 |


Now you need to boot the instance again and specify the port you created:

$ nova boot --flavor=m1.tiny --image=MyImage --nic port-id=$OS_PORT demoinstance0
$ nova show demoinstance0 | grep network
| DemoTutorial network                       |,                     |

Now your rebuild is complete: you’ve got your old IPs back and you’re done. Enjoy :-)

The post Rebuilding An OpenStack Instance and Keeping the Same Fixed IP appeared first on Anchor Cloud Hosting.

SANS Internet Storm Center, InfoCON: green: Upatre/Dyre – the daily grind of botnet-based malspam, (Tue, May 5th)

This post was syndicated from: SANS Internet Storm Center, InfoCON: green and was written by: SANS Internet Storm Center, InfoCON: green. Original post: at SANS Internet Storm Center, InfoCON: green

Malicious spam (malspam) delivering Upatre/Dyre has been an ongoing issue for quite some time. Many organizations have posted articles about this malware. I've read good information on Dyre last year [1, 2] and this year [3].

Upatre is the malware downloader that retrieves Dyre (Dyreza), an information stealer described as a Zeus-like banking Trojan [4]. Earlier this year, EmergingThreats reported Upatre and Dyre are under constant development [5], while SecureWorks told us banking botnets continue to deliver this malspam despite previous takedowns [6].

Botnets sending waves of malspam with Upatre as zip file attachments are a near-daily occurrence. Most organizations won't see these emails, because the messages are almost always blocked by spam filters.

Because security researchers find Upatre/Dyre malspam nearly every day, it's a bit tiresome to write about, and we sometimes gloss over the information when it comes our way. After all, the malspam is being blocked, right?

Nonetheless, we should continue to document some waves of Upatre/Dyre malspam to see if anything is changing or evolving.

Here's one wave we found after searching through our blocked spam filters at Rackspace within the past 24 hours:

  • Start date/time: 2015-05-04 13:48 UTC
  • End date/time: 2015-05-04 16:40 UTC
  • Timespan: 2 hours and 52 minutes
  • Number of emails: 212

We searched for subject lines starting with the word Holded and found 31 different subjects:

  • Holded account alert
  • Holded account caution
  • Holded account message
  • Holded account notification
  • Holded account report
  • Holded account warning
  • Holded bank operation alert
  • Holded bank operation caution
  • Holded bank operation message
  • Holded bank operation notification
  • Holded bank operation report
  • Holded bank operation warning
  • Holded operation alert
  • Holded operation caution
  • Holded operation message
  • Holded operation notification
  • Holded operation report
  • Holded operation warning
  • Holded payment alert
  • Holded payment caution
  • Holded payment message
  • Holded payment notification
  • Holded payment report
  • Holded payment warning
  • Holded transaction alert
  • Holded transaction caution
  • Holded transaction message
  • Holded transaction notification
  • Holded transaction report
  • Holded transaction warning

The 212 messages had different attachments. Here's a small sampling of the different file names:


Emails sent by this botnet came from different IP addresses before they hit our mail servers. Senders and message ID headers were all spoofed. Each of the email headers shows the same Google IP address spoofed as the previous sender. In the images below, the source IP address (right before the message hit our email servers) is outlined in red. The spoofed Google IP address is highlighted in blue. The only true items are the IP addresses before these emails hit our mail servers. Everything else cannot be verified and can be considered spoofed.

This wave sent dozens of different attachment names with hundreds of different file hashes. I took a random sample and infected a host to generate some traffic. This Dyre malware is VM-aware, so I had to use a physical host for the infection traffic. It shows the usual Upatre URLs, Dyre SSL certs and STUN traffic.
Shown above: Filtered infection traffic.
Shown above: EmergingThreats-based Snort events on the infection traffic using Security Onion.

Of note is a service run by one of my fellow Rackspace employees [7]. By itself, it's not malicious. It is merely a free service that reports your host's IP address. Unfortunately, malware authors use this and similar services to check an infected computer's IP address. Because of that, you'll often find alerts that report any traffic to these domains as an indicator of compromise (IOC).
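The alerting described above can be sketched as a simple log filter. The domains below are examples of public IP-check services, not necessarily the service mentioned in this diary:

```python
# Flag any request to a known IP-check service as a possible
# post-infection check-in. The domain list is illustrative, not
# exhaustive, and legitimate traffic can trigger it too.
IP_CHECK_SERVICES = {"icanhazip.com", "checkip.dyndns.org", "ipinfo.io"}

def ip_check_iocs(requested_hosts):
    """Return hosts from a traffic log that match known IP-check services."""
    return [h for h in requested_hosts if h.lower() in IP_CHECK_SERVICES]

print(ip_check_iocs(["example.com", "icanhazip.com"]))  # ['icanhazip.com']
```

As the diary notes, such hits are only an indicator, not proof of compromise, so an analyst would correlate them with other events before acting.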

The Upatre HTTP GET requests didn't return anything. Apparently, the follow-up Dyre malware was downloaded over one of the SSL connections. Dyre was first saved to: C:\Users\username\AppData\Local\Temp\vwlsrAgtqYXVcRW.exe
Dyre was then moved to:

The zip file is password-protected with the standard password. If you don't know it, email and ask.

Final words

It's a daily grind reviewing this information, and most security professionals have higher-priority issues to deal with. However, if we don't periodically review these waves of Upatre/Dyre, our front-line analysts and other security personnel might not recognize the traffic and may miss the IOCs.

Brad Duncan, Security Researcher at Rackspace
Blog: – Twitter: @malware_traffic



(c) SANS Internet Storm Center. Creative Commons Attribution-Noncommercial 3.0 United States License.

TorrentFreak: Microsoft Logs IP Addresses to Catch Windows 7 Pirates

This post was syndicated from: TorrentFreak and was written by: Andy. Original post: at TorrentFreak

Because an operating system needs to be present on most computers in order for them to work at all, operating systems are among the most pirated types of software around.

There can be little doubt that across its 29-year history, Microsoft’s Windows operating systems have been pirated countless millions of times. It’s a practice that on some levels Microsoft has come to accept, with regular consumers largely avoiding the company’s aggression.

However, as one or perhaps more pirates are about to find out, the same cannot be said of those pirating the company’s products on a commercial scale.

In a lawsuit filed this week at a district court in Seattle, Microsoft targets individuals behind a single Verizon IP address. Who he, she or they are is unknown at this point, but according to Microsoft they’re responsible for some serious Windows pirating.

“As part of its cyberforensic methods, Microsoft analyzes product key activation data voluntarily provided by users when they activate Microsoft software, including the IP address from which a given product key is activated,” the lawsuit reads.

Microsoft says that its forensic tools allow the company to analyze billions of activations of Microsoft software and identify patterns “that make it more likely than not” that an IP address associated with activations is one through which pirated software is being activated.

“Microsoft’s cyberforensics have identified hundreds of product key activations originating from IP address…which is presently assigned to Verizon Online LLC. These activations have characteristics that, on information and belief, establish that Defendants are using the IP address to activate pirated software.”

Microsoft says that the defendant(s) have activated hundreds of copies of Windows 7 using product keys that have been “stolen” from the company’s supply chain or have never been issued with a valid license, or keys used more times than their license allows.
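As a hypothetical sketch of the kind of pattern analysis described above (Microsoft's actual forensic methods are not public), one could group activation records by source IP and flag addresses tied to an unusually large number of distinct product keys. All names, records and the threshold here are illustrative:

```python
# Hypothetical activation records: (product_key, activating_ip).
# The data, key names and threshold are made up for illustration;
# they do not reflect Microsoft's real telemetry or methodology.
activations = [
    ("KEY-A", "203.0.113.5"),
    ("KEY-B", "203.0.113.5"),
    ("KEY-C", "203.0.113.5"),
    ("KEY-D", "198.51.100.7"),
]

def suspicious_ips(records, threshold=3):
    """Flag IPs that activated an unusually large number of distinct keys."""
    keys_per_ip = {}
    for key, ip in records:
        keys_per_ip.setdefault(ip, set()).add(key)
    return [ip for ip, keys in keys_per_ip.items() if len(keys) >= threshold]

print(suspicious_ips(activations))  # ['203.0.113.5']
```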

In addition to immediate injunctive relief and the impounding of all infringing materials, the company demands profits attributable to the infringements, treble damages and attorney fees or, alternatively, statutory damages.

This week’s lawsuit (pdf) follows similar action in December 2014 in which Microsoft targeted the user behind an AT&T account.

Source: TorrentFreak, for the latest info on copyright, file-sharing, torrent sites and anonymous VPN services.

Schneier on Security: Detecting QUANTUMINSERT

This post was syndicated from: Schneier on Security and was written by: schneier. Original post: at Schneier on Security

Fox-IT has a blog post (and has published Snort rules) on how to detect man-on-the-side Internet attacks like the NSA’s QUANTUMINSERT.

From a Wired article:

But hidden within another document leaked by Snowden was a slide that provided a few hints about detecting Quantum Insert attacks, which prompted the Fox-IT researchers to test a method that ultimately proved to be successful. They set up a controlled environment and launched a number of Quantum Insert attacks against their own machines to analyze the packets and devise a detection method.

According to the Snowden document, the secret lies in analyzing the first content-carrying packets that come back to a browser in response to its GET request. One of the packets will contain content for the rogue page; the other will be content for the legitimate site sent from a legitimate server. Both packets, however, will have the same sequence number. That, it turns out, is a dead giveaway.

Here’s why: When your browser sends a GET request to pull up a web page, it sends out a packet containing a variety of information, including the source and destination IP address of the browser as well as so-called sequence and acknowledge numbers, or ACK numbers. The responding server sends back a response in the form of a series of packets, each with the same ACK number as well as a sequential number so that the series of packets can be reconstructed by the browser as each packet arrives to render the web page.

But when the NSA or another attacker launches a Quantum Insert attack, the victim’s machine receives duplicate TCP packets with the same sequence number but with a different payload. “The first TCP packet will be the ‘inserted’ one while the other is from the real server, but will be ignored by the [browser],” the researchers note in their blog post. “Of course it could also be the other way around; if the QI failed because it lost the race with the real server response.”

Although it’s possible that in some cases a browser will receive two packets with the same sequence number from a legitimate server, they will still contain the same general content; a Quantum Insert packet, however, will have content with significant differences.
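A minimal sketch of the detection idea above, assuming a capture has already been reduced to (sequence number, payload) pairs for one TCP stream: flag any sequence number seen with two different payloads, while ignoring ordinary retransmissions that carry identical content.

```python
def find_quantum_insert(packets):
    """Flag TCP segments that share a sequence number but carry
    different payloads -- the giveaway described above.
    `packets` is a list of (seq_number, payload_bytes) tuples,
    a simplified stand-in for a real packet capture."""
    seen = {}
    suspects = []
    for seq, payload in packets:
        if seq in seen and seen[seq] != payload:
            suspects.append(seq)  # duplicate seq, different content
        seen.setdefault(seq, payload)
    return suspects

# Two responses racing for the same sequence number:
stream = [(1000, b"<html>rogue</html>"),
          (1000, b"<html>legit</html>"),
          (1460, b"more legit content")]
print(find_quantum_insert(stream))  # [1000]
```

A real detector (like Fox-IT's Snort rules) works on live traffic and must also handle reordering and overlapping segments; this sketch only shows the core comparison.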

It’s important we develop defenses against these attacks, because everyone is using them.

SANS Internet Storm Center, InfoCON: green: Dalexis/CTB-Locker malspam campaign, (Thu, Apr 30th)

This post was syndicated from: SANS Internet Storm Center, InfoCON: green and was written by: SANS Internet Storm Center, InfoCON: green. Original post: at SANS Internet Storm Center, InfoCON: green

Malware Every Day

Malicious spam (malspam) is sent by botnets every day. These malspam campaigns send malware designed to infect Windows computers. I'll see Dridex or Upatre/Dyre campaigns on a daily basis. Fortunately, most of these emails are blocked by our spam filters.

This diary concerns a recent malspam wave on Tuesday 2015-04-28 from a botnet pushing Dalexis/CTB-Locker.

What is Dalexis/CTB-Locker?

Dalexis is a malware downloader. It drops a CAB file with an embedded document that's opened on a user's computer [1], then downloads more malware. Dalexis is often used to deliver CTB-Locker [2][3]. CTB-Locker is ransomware that encrypts files on your computer. In exchange for a ransom payment, the malware authors will provide a key to decrypt your files. Behavior of this malware is well-documented, but small changes often occur as new waves of malspam are sent out.

A similar wave of malspam from Monday 2015-04-27 was reported by [4]. The next day saw similar activity. This campaign will likely continue. Below is a flow chart from Tuesday 2015-04-28.

The messages have slightly different subject lines, and each email attachment has a different file hash. I infected a host using one of the attachments. Below are links to the associated files:

The ZIP file is password-protected with the standard password. If you don't know it, email and ask.

Infection as Seen from the Desktop

Extracted malware from these email attachments is an SCR file with an Excel icon.

I had to download a Tor browser to get at the decryption instructions. The bitcoin address for the ransom payment is: 18GuppWVuZGqutYvZz9uaHxHcostrU6Upc


Dalexis uses an HTTP GET request to download CTB-Locker. The file is encrypted in transit, but I retrieved a decrypted copy from the infected host. Dalexis reports to a command and control (CnC) server after the malware is successfully downloaded.

In the image below, you'll find HTTP POST requests to different servers as Dalexis tries to find a CnC server that will respond.

For indicators of compromise (IOCs), a list of domains unique to this infection follows:

(Read: IP address – domain name)

  • –
  • –
  • –
  • –
  • –
  • –
  • –
  • various –
  • various –

Example of Malspam From Tuesday 2015-04-28

From: Eda Uhrhammer
Date: Tuesday, April 28, 2015 at 16:16 UTC
To: [redacted]
Subject: [Issue 5261CC6247C37550] Account #295030013990 Temporarily Locked

Dear user,

We detect unauthorized Login Attempts to your ID #295030013990 from other IP Address.
Please re-confirm your identity. See attached docs for full information.

Eda Uhrhammer
Millard Peter
111 Hunter Street East, Peterborough, ON K9H 1G7



NOTE: The emails contain various international names, addresses, and phone numbers in the signature block.

Emails Collected

Start time: 2015-04-28 10:00:13 UTC
End time: 2015-04-28 16:16:28 UTC
Emails found: 24

Senders and Subject Lines

  • Sender: – Subject: [Issue 35078504EBA94667] Account #59859805294 Temporarily Locked
  • Sender: – Subject: [Issue 84908E27DF477852] Account #40648428303 Temporarily Locked
  • Sender: – Subject: [Issue 8694097116D18193] Account #257547165590 Temporarily Locked
  • Sender: – Subject: [Issue 11123E749D533902] Account #621999149649 Temporarily Locked
  • Sender: – Subject: [Issue 24789101648C8407] Account #250874039146 Temporarily Locked
  • Sender: – Subject: [Issue 6412D16736356564] Account #238632826769 Temporarily Locked
  • Sender: – Subject: [Issue 9139F9678C9A7466] Account #216021389500 Temporarily Locked
  • Sender: – Subject: [Issue 982886631E9E7489] Account #114654416120 Temporarily Locked
  • Sender: – Subject: [Issue 4895D8D81ADE1399] Account #843871639720 Temporarily Locked
  • Sender: – Subject: [Issue 72986FD85CE93134] Account #622243029178 Temporarily Locked
  • Sender: – Subject: [Issue 27883AA546718876] Account #475770363394 Temporarily Locked
  • Sender: – Subject: [Issue 5384A21F5AB26075] Account #717973552140 Temporarily Locked
  • Sender: – Subject: [Issue 5694B0643FCD587] Account #642271991381 Temporarily Locked
  • Sender: – Subject: [Issue 8219423F8CFB6864] Account #692223104314 Temporarily Locked
  • Sender: – Subject: [Issue 70308834A3929842] Account #339648082242 Temporarily Locked
  • Sender: – Subject: [Issue 33190977A2D04088] Account #831865092451 Temporarily Locked
  • Sender: – Subject: [Issue 706584024E142555] Account #196387638377 Temporarily Locked
  • Sender: – Subject: [Issue 830689BB76F4615] Account #162723085828 Temporarily Locked
  • Sender: – Subject: [Issue 46714D12FB834480] Account #526735661562 Temporarily Locked
  • Sender: – Subject: [Issue 39494AFE933A5158] Account #552561607876 Temporarily Locked
  • Sender: – Subject: [Issue 974641F53DD66126] Account #325636779394 Temporarily Locked
  • Sender: – Subject: [Issue 7505716EA6244832] Account #603263972311 Temporarily Locked
  • Sender: – Subject: [Issue 50438E220A5D7432] Account #906152957589 Temporarily Locked
  • Sender: – Subject: [Issue 5261CC6247C37550] Account #295030013990 Temporarily Locked

NOTE: The sending email addresses might be spoofed.


  • – 19,135 bytes – MD5 hash: 1a9fdce6b6efd094af354a389b0e04da
  • – 20,688 bytes – MD5 hash: a1b066361440a5ff6125f15b1ba2e1b1
  • – 20,681 bytes – MD5 hash: 01f8976034223337915e4900b76f9f26
  • – 19,135 bytes – MD5 hash: ab9a07054a985c6ce31c7d53eee90fbe
  • – 19,135 bytes – MD5 hash: 899689538df49556197bf1bac52f1b84
  • – 19,135 bytes – MD5 hash: eea0fd780ecad755940110fc7ee6d727
  • – 19,114 bytes – MD5 hash: f236e637e17bc44764e43a8041749e6c
  • – 20,168 bytes – MD5 hash: eda8075438646c617419eda13700c43a
  • – 20,177 bytes – MD5 hash: d00861c5066289ea9cca3f0076f97681
  • – 20,703 bytes – MD5 hash: 657e3d615bb1b6e7168319e1f9c5039f
  • – 19,113 bytes – MD5 hash: b7fe085962dc7aa7622bd15c3a303b41
  • – 20,642 bytes – MD5 hash: 2ba4d511e07090937b5d6305af13db68
  • – 20,710 bytes – MD5 hash: 24698aa84b14c42121f96a22fb107d00
  • – 20,709 bytes – MD5 hash: 04abf53d3b4d7bb7941a5c8397594db7
  • – 19,071 bytes – MD5 hash: b2ca48afbc0eb578a9908af8241f2ae8
  • – 20,175 bytes – MD5 hash: fa43842bda650c44db99f5789ef314e3
  • – 19,135 bytes – MD5 hash: 802d9abf21c812501400320f2efe7040
  • – 20,681 bytes – MD5 hash: 0687f63ce92e57a76b990a8bd5500b69
  • – 20,644 bytes – MD5 hash: 0918c8bfed6daac6b63145545d911c72
  • – 20,703 bytes – MD5 hash: 2e90e6d71e665b2a079b80979ab0e2cb
  • – 20,721 bytes – MD5 hash: 5b8a27e6f366f40cda9c2167d501552e
  • – 20,718 bytes – MD5 hash: 9c1acc3f27d7007a44fc0da8fceba120
  • – 20,713 bytes – MD5 hash: 1a6b20a5636115ac8ed3c4c4dd73f6aa
  • – 20,134 bytes – MD5 hash: b9d19a68205f2a7e2321ca3228aa74d1

Extracted Malware

  • 114654416120.scr – 98,304 bytes – MD5 hash: 46838a76fbf59e9b78d684699417b216
  • 162723085828.scr – 90,112 bytes – MD5 hash: 8f5df86fdf5f3c8e475357bab7bc38e8
  • 196387638377.scr – 90,112 bytes – MD5 hash: 59f71ef10861d1339e9765fb512d991c
  • 216021389500.scr – 98,304 bytes – MD5 hash: 0baa21fab10c7d8c64157ede39453ae5
  • 238632826769.scr – 98,304 bytes – MD5 hash: f953b4c8093276fbde3cfa5e63f990eb
  • 250874039146.scr – 98,304 bytes – MD5 hash: 6580e4ee7d718421128476a1f2f09951
  • 257547165590.scr – 94,208 bytes – MD5 hash: 6a15d6fa9f00d931ca95632697e5ba70
  • 295030013990.scr – 86,016 bytes – MD5 hash: 54c1ac0d5e8fa05255ae594adfe5706e
  • 325636779394.scr – 94,208 bytes – MD5 hash: 08a0c2aaf7653530322f4d7ec738a3df
  • 339648082242.scr – 94,208 bytes – MD5 hash: 1aaecdfd929725c195a7a67fc6be9b4b
  • 40648428303.scr – 94,208 bytes – MD5 hash: f51fcf418c973a94a7d208c3a8a30f19
  • 475770363394.scr – 81,920 bytes – MD5 hash: dbea4b3fb5341ce3ca37272e2b8052ae
  • 526735661562.scr – 94,208 bytes – MD5 hash: c0dc49296b0aec09c5bfefcf4129c29b
  • 552561607876.scr – 98,304 bytes – MD5 hash: 9239ec6fe6703279e959f498919fdfb0
  • 59859805294.scr – 86,016 bytes – MD5 hash: a9d11a69c692b35235ce9c69175f0796
  • 603263972311.scr – 94,208 bytes – MD5 hash: bcaf9ce1881f0f282cec5489ec303585
  • 621999149649.scr – 98,304 bytes – MD5 hash: 70a63f45eb84cb10ab1cc3dfb4ac8a3e
  • 622243029178.scr – 90,112 bytes – MD5 hash: d1b1e371aebfc3d500919e9e33bcd6c1
  • 642271991381.scr – 81,920 bytes – MD5 hash: 15a5acfbccbb80b01e6d270ea8af3789
  • 692223104314.scr – 94,208 bytes – MD5 hash: fa0fe28ffe83ef3dcc5c667bf2127d4c
  • 717973552140.scr – 98,304 bytes – MD5 hash: 646640f63f327296df0767fd0c9454d4
  • 831865092451.scr – 98,304 bytes – MD5 hash: ec872872bff91040d2bc1e4c4619cbbc
  • 843871639720.scr – 98,304 bytes – MD5 hash: b8e8e3ec7f4d6efee311e36613193b8d
  • 906152957589.scr – 94,208 bytes – MD5 hash: 36abcedd5fb6d17038bd7069808574e4


Brad Duncan, Security Researcher at Rackspace
Blog: – Twitter: @malware_traffic



(c) SANS Internet Storm Center. Creative Commons Attribution-Noncommercial 3.0 United States License.

Krebs on Security: A Day in the Life of a Stolen Healthcare Record

This post was syndicated from: Krebs on Security and was written by: BrianKrebs. Original post: at Krebs on Security

When your credit card gets stolen because a merchant you did business with got hacked, it’s often quite easy for investigators to figure out which company was victimized. The process of divining the provenance of stolen healthcare records, however, is far trickier because these records typically are processed or handled by a gauntlet of third party firms, most of which have no direct relationship with the patient or customer ultimately harmed by the breach.

I was reminded of this last month, after receiving a tip from a source at a cyber intelligence firm based in California who asked to remain anonymous. My source had discovered a seller on the darknet marketplace AlphaBay who was posting stolen healthcare data into a subsection of the market called “Random DB ripoffs,” (“DB,” of course, is short for “database”).

Eventually, this same fraudster leaked a large text file titled, “Tenet Health Hilton Medical Center,” which contained the name, address, Social Security number and other sensitive information on dozens of physicians across the country.


An AlphaBay user named “Boogie” giving away dozens of healthcare records he claims to have stolen.

Contacted by KrebsOnSecurity, Tenet Health officials said the data was not stolen from its databases, but rather from a company called InCompass Healthcare. Turns out, InCompass disclosed a breach in August 2014, which reportedly occurred after a subcontractor of one of the company’s service providers failed to secure a computer server containing account information. The affected company was 24 ON Physicians, an affiliate of InCompass Healthcare.

“The breach affected approximately 10,000 patients treated at 29 facilities throughout the U.S. and approximately 40 employed physicians,” wrote Rebecca Kirkham, a spokeswoman for InCompass.

“As a result, a limited amount of personal information may have been exposed to the Internet between December 1, 2013 and April 17, 2014,” Kirkham wrote in an emailed statement. “Information that may have been exposed included patient names, invoice numbers, procedure codes, dates of service, charge amounts, balance due, policy numbers, and billing-related status comments. Patient social security number, home address, telephone number and date of birth were not in the files that were subject to possible exposure. Additionally, no patient medical records or bank account information were put at risk. The physician information that may have been exposed included physician name, facility, provider number and social security number.”

Kirkham said up until being contacted by this reporter, InCompass “had received no indication that personal information has been acquired or used maliciously.”

So who was the subcontractor that leaked the data? According to (and now confirmed by InCompass), the subcontractor responsible was PST Services, a McKesson subsidiary providing medical billing services, which left more than 10,000 patients’ information exposed via Google search for over four months.

As this incident shows, a breach at one service provider or healthcare billing company can have a broad impact across the healthcare system, but can be quite challenging to piece together.

Still, not all breaches involving health information are difficult to backtrack to the source. In September 2014, I discovered a fraudster on the now-defunct Evolution Market dark web community who was selling life insurance records for less than $7 apiece. That breach was fairly easily tied back to Torchmark Corp., an insurance holding company based in Texas; the name of the company’s subsidiary was plastered all over stolen records listing applicants’ medical histories.


Health records are huge targets for fraudsters because they typically contain all of the information thieves would need to conduct mischief in the victim’s name — from fraudulently opening new lines of credit to filing phony tax refund requests with the Internal Revenue Service. Last year, a great many physicians in multiple states came forward to say they’d been apparently targeted by tax refund fraudsters, but could not figure out the source of the leaked data. Chances are, the scammers stole it from hacked medical providers like PST Services and others.

In March 2015, HealthCare IT News published a list of healthcare providers that experienced data breaches since 2009, using information from the Department of Health and Human Services. That data includes HIPAA breaches reported by 1,149 covered entities and business associates, and covers some 41 million Americans. Curiously, the database does not mention some 80 million Social Security numbers and other data jeopardized in the Anthem breach that went public in February 2015 (nor 11 million records lost in the Premera breach that came to light in mid-March 2015).

Sensitive stolen data posted to cybercrime forums can rapidly spread to miscreants and ne’er-do-wells around the globe. In an experiment conducted earlier this month, security firm Bitglass synthesized 1,568 fake names, Social Security numbers, credit card numbers, addresses and phone numbers that were saved in an Excel spreadsheet. The spreadsheet was then transmitted through the company’s proxy, which automatically watermarked the file. The researchers set it up so that each time the file was opened, the persistent watermark (which Bitglass says survives copy, paste and other file manipulations), “called home” to record view information such as IP address, geographic location and device type.

The company posted the spreadsheet of manufactured identities anonymously to cyber-crime marketplaces on the Dark Web. The result was that in less than two weeks, the file had traveled to 22 countries on five continents, was accessed more than 1,100 times. “Additionally, time, location, and IP address analysis uncovered a high rate of activity amongst two groups of similar viewers, indicating the possibility of two cyber crime syndicates, one operating within Nigeria and the other in Russia,” the report concluded.

Source: Bitglass

Source: Bitglass

SANS Internet Storm Center, InfoCON: green: Actor using Fiesta exploit kit, (Tue, Apr 28th)

This post was syndicated from: SANS Internet Storm Center, InfoCON: green and was written by: SANS Internet Storm Center, InfoCON: green. Original post: at SANS Internet Storm Center, InfoCON: green

An Enduring Adversary

This diary entry documents a criminal group using the Fiesta exploit kit (EK) to infect Windows computers. I previously wrote a guest diary about this group on 2014-12-26 [1] and provided some updated information on my personal blog [2]. I first noticed this group in 2013, and it's likely been active well before then.

The group is currently using a gate that generates traffic from compromised websites to a Fiesta EK domain. I'm calling this group the BizCN gate actor because all its gate domains are registered through the Chinese registrar BizCN, and they all reside on a single IP address. The registrant data is privacy-protected through Wuxi Yilian LLC.

Earlier this month, the BizCN gate actor changed its gate IP address [3]. We're currently seeing the gate lead to Fiesta EK. Below is a flow chart for the traffic.

Traffic From an Infected Host

The following image shows traffic from the gate that occurred on 2015-04-26.

Within the past week or so, Fiesta EK has modified its URL structure. Now you'll find dashes and underscores in the URLs (something that wasn't seen before).

A pcap of this traffic is available at:

The malware payload on the infected host copied itself to a directory under the user's AppData\Local folder.

A copy of the malware payload is available at:

Below is an image from Sguil on Security Onion for EmergingThreats and ETPRO snort events caused by the infection.

Indicators of Compromise (IOCs)

Passive DNS shows at least 100 domains registered through BizCN hosted on this IP address. Each domain is paired with a compromised website. Below is a list of the gate domains and their associated compromised websites I've found so far this month:

(Read: gate on – compromised website)

  • –
  • –
  • –
  • –
  • –
  • –
  • –
  • –
  • –
  • –
  • –
  • –
  • –
  • –
  • –
  • –
  • –
  • –
  • –
  • –

How can you determine if your clients saw traffic associated with this actor? Organizations with web proxy logs can search for the gate domains to see the HTTP requests. Those HTTP headers should include a referer line with the compromised website. Many of these compromised websites use vBulletin.
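As an illustrative sketch of that proxy-log search (the gate domain and sites are placeholders, since the real indicators are redacted above), one could pull every request to the gate and report its referer:

```python
import re

# Hypothetical proxy log lines; "gate.example" stands in for a real
# gate domain, which is redacted in the list above.
log_lines = [
    "203.0.113.9 GET http://gate.example/in.php "
    "referer=http://forum.example/showthread.php?t=123",
    "203.0.113.9 GET http://cdn.example/logo.png referer=http://news.example/",
]

GATE = re.compile(r"GET\s+http://gate\.example/")
REFERER = re.compile(r"referer=(\S+)")

def compromised_referers(lines):
    """For each request to the gate domain, pull the referer header,
    which should name the compromised site that sent the client there."""
    hits = []
    for line in lines:
        if GATE.search(line):
            m = REFERER.search(line)
            hits.append(m.group(1) if m else "(no referer)")
    return hits

print(compromised_referers(log_lines))
# ['http://forum.example/showthread.php?t=123']
```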

Final Notes

Researchers may have a hard time generating infection traffic from compromised websites associated with this actor. Most often, HTTP GET requests to the gate domain return a 404 Not Found. In some cases, the gate domain might not appear in traffic at all. Other times, the HTTP GET request for the Fiesta EK landing page doesn't return anything. It's tough to get a full infection chain when you're trying to do it on purpose.

The BizCN gate actor occasionally changes the IP address for these gate domains. Since their information is now public through this diary entry, the actor will likely change the gate's IP address and domains again.

Unless there's a drastic change in their pattern of operations, this BizCN gate actor will be found relatively soon after any upcoming changes.

Brad Duncan, Security Researcher at Rackspace
Blog: – Twitter: @malware_traffic



(c) SANS Internet Storm Center. Creative Commons Attribution-Noncommercial 3.0 United States License.

Errata Security: The hollow rhetoric of nation-state threats

This post was syndicated from: Errata Security and was written by: Robert Graham. Original post: at Errata Security

The government is using the threat of nation-state hackers to declare a state-of-emergency and pass draconian laws in congress. However, as the GitHub DDoS shows, the government has little interest in actually defending us.
It took 25 days to blame North Korea for the Sony hack, between the moment “Hacked by the #GOP” appeared on Sony computers and when President Obama promised retaliation in a news conference — based on flimsy evidence of North Korea’s involvement. In contrast, it’s been more than 25 days since we’ve had conclusive proof the Chinese government was DDoSing GitHub, and our government has remained silent. China stopped the attacks after two weeks of its own volition, because GitHub defended itself, not because of anything the United States government did.
The reason for the inattention is that GitHub has no lobbyists. Sony spends several million dollars every year in lobbying, as well as hundreds of thousands in campaign contributions. When Sony gets hacked, politicians listen. In contrast, GitHub spends zero on either lobbying or contributions.
It’s not that GitHub isn’t important — it’s actually key infrastructure to the Internet. All computer nerds know the site. It’s the largest repository of source-code on the net. It’s so important that China couldn’t simply block the IP address, because China needs the site, too. That’s why China had to use a convoluted attack in order to pressure GitHub to censor content.
Despite GitHub’s importance, few in Washington D.C. have heard of it. If you don’t spend money on lobbying and donors, you just don’t exist. Even if the government heard of GitHub, they still wouldn’t respond. We have over half a trillion dollars of trade with China every year, not to mention a ton of diplomatic disputes. Our government won’t risk upsetting any of those issues, which do have tons of lobbying dollars behind them, in order to defend GitHub. At most, GitHub will become a bargaining chip, such as encouraging China to stop subsidizing tire exports in order to satisfy the steel workers union.
The point of this post isn’t to bang the drums of cyberwar, to claim that our government should retaliate against China for their nation-state attack. Quite the opposite. I’m trying to point out the hollow rhetoric of “nation-state threats”. You can’t use “nation-state defense” to justify sanctions on North Korea while ignoring the nation-state attack on GitHub.
The next time somebody uses “nation-state threats” in order to justify government policy that increases the police-state and military-industrial complex, the first question we should ask is to have them explain government’s inaction in the nation-state attack against GitHub.

SANS Internet Storm Center, InfoCON: green: Quantum Insert Attack, (Sun, Apr 26th)

This post was syndicated from: SANS Internet Storm Center, InfoCON: green and was written by: SANS Internet Storm Center, InfoCON: green. Original post: at SANS Internet Storm Center, InfoCON: green

The Dutch company Fox-IT has revealed detailed information about the Quantum Insert attack: an HTML redirection attack that injects malicious content into a specific TCP session. A session is selected for injection based on selectors, such as a persistent tracking cookie that identifies a user for a longer period of time.

The attack works by sniffing an HTTP request and then spoofing a crafted HTTP response. To craft the spoofed response, the attacker needs to know the following:

  • Source and Destination IP address
  • Source and Destination TCP port
  • Sequence and Acknowledgment Number
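To illustrate the bookkeeping involved (this is a plain-Python sketch with hypothetical field names, not an attack tool), the values above follow mechanically from the sniffed request: addresses and ports are mirrored, the spoofed SEQ is the ACK the client advertised, and the spoofed ACK must cover the request's payload.

```python
def spoofed_response_fields(request):
    """Given a sniffed client->server HTTP request (a dict of hypothetical
    field names), return the values a spoofed server->client response
    would need in order to be accepted by the victim's TCP stack."""
    return {
        # Addresses and ports are mirrored: the fake "server" answers the client.
        "src_ip": request["dst_ip"],
        "dst_ip": request["src_ip"],
        "src_port": request["dst_port"],
        "dst_port": request["src_port"],
        # The response's SEQ is the ACK the client expects next;
        # its ACK must cover the request's payload.
        "seq": request["ack"],
        "ack": request["seq"] + request["payload_len"],
    }

sniffed = {
    "src_ip": "10.0.0.5", "dst_ip": "93.184.216.34",
    "src_port": 51234, "dst_port": 80,
    "seq": 1000, "ack": 5000, "payload_len": 120,
}
fields = spoofed_response_fields(sniffed)
print(fields["seq"], fields["ack"])  # -> 5000 1120
```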

Once the spoofed packet is sent, a race condition occurs: if the attacker wins the race, the victim receives the malicious content instead of the legitimate response.

Performing a Quantum Insert attack requires that the attacker can monitor the traffic and has very fast infrastructure to win the race condition.

To detect Quantum Insert, look for the following:

  1. Duplicate sequence numbers with two different payloads. Since the attacker spoofs the response, the victim will see two packets with the same sequence number but different payloads.
  2. TTL anomalies. The spoofed packets will usually show a different time-to-live value than the real packets. TTL differences can be legitimate given the nature of Internet traffic, but because the attacker must be closer to the target to win the race condition, an unusual difference between the legitimate and spoofed packets can be a giveaway.
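A rough sketch of detection idea #1, in plain Python over made-up (seq, payload) tuples (a real monitor would pull these from a packet capture): record each sequence number's payload per flow and flag any sequence number seen again with a different payload.

```python
def find_insert_candidates(segments):
    """segments: (seq, payload) tuples for one TCP flow, in arrival order.
    Ordinary retransmissions (same seq, same payload) are not flagged."""
    seen = {}
    suspicious = []
    for seq, payload in segments:
        if seq in seen and seen[seq] != payload:
            suspicious.append(seq)
        seen.setdefault(seq, payload)
    return suspicious

flow = [
    (1000, b"HTTP/1.1 302 Found\r\n"),  # spoofed response wins the race
    (1000, b"HTTP/1.1 200 OK\r\n"),     # real response arrives late
    (2000, b"more data"),
    (2000, b"more data"),               # plain retransmission: ignored
]
print(find_insert_candidates(flow))  # -> [1000]
```

The same bookkeeping could be extended to cover idea #2 by also recording each packet's TTL per flow and flagging large deviations.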


(c) SANS Internet Storm Center. Creative Commons Attribution-Noncommercial 3.0 United States License.

TorrentFreak: Leaked Piracy Report Details Fascinating Camcording Investigations

This post was syndicated from: TorrentFreak and was written by: Andy. Original post: at TorrentFreak

This week the UK’s Federation Against Copyright Theft (FACT) released its latest report detailing the rewards presented to cinema workers who disrupt so-called movie “cammers”. FACT is the main group to release this kind of report; no equivalent is regularly made available in any other English-speaking country.

While the insight is useful for building a picture of “anti-camming” activity in the UK, FACT is obviously selective about the information it releases. While big successes receive maximum publicity, relative failures tend to be brushed under the carpet. Something else the group would like to keep secret is a set of presentations made to Sony Pictures in 2010, but thanks to a trove of leaked emails that is no longer possible.

The presentation begins with FACT stating that it’s the “best known and most respected industry enforcement body of its kind in the UK” and one that has forged “excellent relationships” with public enforcement agencies and within the criminal justice system.


FACT goes on to give Sony several examples of situations in which it has been involved in information-sharing exercises with the authorities. The exact details aren’t provided, but somewhat surprisingly FACT says they include murder, kidnap and large-scale missing persons investigations.

But perhaps of most interest are the details of how the group pursues those who illegally ‘cam’ and then distribute movies online. The presentation focuses on the “proven” leak of five movies in 2010, the total from UK cinemas for that year.

Vue Cinemas, North London

First up are ‘cams’ of Alice in Wonderland and Green Zone that originated from a Vue Cinema in North London. Noting that both movies had been recorded on their first day using an iPhone (one during a quiet showing, the other much more busy), the presentation offers infra-red photographic evidence of the suspect recording the movies.

Alice in Wonderland camming


Green Zone camming


Cineworld – Glasgow

The documentation behind this Scotland-based investigation is nothing short of fascinating. FACT determined that their suspect was the holder of a Cineworld Unlimited pass which at the time he had used 14 times.

On three occasions the suspect had viewed the movie Kick-Ass, including on the opening day. The ‘cammed’ copy that leaked online came from that viewing. The suspect also viewed Clash of the Titans, with a camcorded version later appearing online from that session. The man also attended three Iron Man 2 viewings at times which coincided with watermarks present on the online ‘cammed’ copies.

Working in collaboration with the cinema, FACT then obtained CCTV footage of the man approaching a cash desk.


Putting it all together

The most interesting document in the entire presentation is without doubt FACT’s investigative chart. It places the holder of the Cineworld Unlimited pass together with a woman found as a friend on his Facebook page. Described as IC1 (police code for white/caucasian), FACT note that the pair attended the Cineworld Cinema together on at least one occasion.

The unnamed female is listed at a property in Glasgow and from there things begin to unravel. An IP address connected with that residence uploaded a copy of Kick-Ass which was later made available by an online release group. The leader of that group was found to have communicated with the then-unidentified ‘cammer’ of the movie, whom FACT strongly suspected to be the man in the images taken at the cinema. He was later arrested and confessed to his crimes.


The full document provides a fascinating insight into FACT’s operations, not only in camming mitigation but also in bringing down websites. Another notable chart shows the operations of an unnamed “video streaming” site.


While no names are mentioned, a later edition of the same presentation blanks out key details, suggesting a level of sensitivity. However, after examining the chart it appears likely that it refers to Surf the Channel, the site previously run by Anton Vickerman.

Considering the depth and presentation of the above investigations it will come as no surprise to most that many FACT investigators are former police officers. For the curious, the full document can be found here on Wikileaks.

Source: TorrentFreak, for the latest info on copyright, file-sharing, torrent sites and anonymous VPN services.

Linux How-Tos and Linux Tutorials: How to Configure Your Dev Machine to Work From Anywhere (Part 3)

This post was syndicated from: Linux How-Tos and Linux Tutorials and was written by: Jeff Cogswell. Original post: at Linux How-Tos and Linux Tutorials

In the previous articles, I talked about my mobile setup and how I’m able to continue working on the go. In this final installment, I’ll talk about how to install and configure the software I’m using. Most of what I’m talking about here is on the server side, because the Android and iPhone apps are pretty straightforward to configure.

Before we begin, however, I want to mention that this setup I’ve been describing really isn’t for production machines. This should only be limited to development and test machines. Also, there are many different ways to work remotely, and this is only one possibility. In general, you really can’t beat a good command-line tool and SSH access. But in some cases, that didn’t really work for me. I needed more; I needed a full Chrome JavaScript debugger, and I needed better word processing than was available on my Android tablets.

Here, then, is how I configured the software. Note, however, that I’m not writing this as a complete tutorial, simply because that would take too much space. Instead, I’m providing overviews, and assuming you know the basics and can google to find the details. We’ll take this step by step.

Spin up your server

First, we spin up the server on a host. There are several hosting companies; I’ve used Amazon Web Services, Rackspace, and DigitalOcean. My own personal preference for the operating system is Ubuntu Linux with LXDE. LXDE is a full desktop environment that includes the OpenBox window manager. I personally like OpenBox because of its simplicity while maintaining visual appeal. And LXDE is nice because, as its name suggests (Lightweight X11 Desktop Environment), it’s lightweight. However, many different environments and window managers will work. (I tried a couple tiling window managers such as i3, and those worked pretty well too.)

The usual order of installation goes like this: You use the hosting company’s website to spin up the server, and you provide a key file that will be used for logging into the server. You can usually use your own key that you generate, or have the service generate a key for you, in which case you download the key and save it. Typically when you provide a key, the server will automatically be configured to allow logins only over SSH with the key file. If not, you’ll want to disable password logins yourself.

Connect to the server

The next step is to actually log into the server through an SSH command line and first set up a user for yourself that isn’t root, and then set up the desktop environment. You can log in from your desktop Linux, but if you like, this is a good chance to try out logging in from an Android or iOS tablet. I use JuiceSSH; a lot of people like ConnectBot. And there are others. But whichever you get, make sure it allows you to log in using a key file. (Key files can be created with or without a password. Also make sure the app you use allows you to use whichever key file type you created–password or no password.)

Copy your key file to your tablet. The best way is to connect the tablet to your computer, and transfer the file. However, if you want a quick and easy way to do it, you can email it. But be aware that you’re sending the private key file through an email system that other people could potentially access. It’s your call whether you want to do that. Either way, get the file installed on the tablet, and then configure the SSH app to log in using the key file, using the app’s instructions.

Then using the app, connect to your server. You’ll need the username, even though you’re using a key file (the server needs to know who you’re logging in as with the key file, after all); AWS typically uses “ubuntu” for the username for Ubuntu installations; others simply give you the root user. For AWS, to do the installation you’ll need to type sudo before each command since you’re not logged in as root, but won’t be asked for a password when running sudo. On other cloud hosts you can run the commands without sudo since you’re logged in as root.

Oh and by the way, because we don’t yet have a desktop environment, you’ll be typing commands to install the software. If you’re not familiar with the package installation tools, now is a chance to learn about them. For Debian-based systems (including Ubuntu), you’ll use apt-get. Other systems use yum, which is a command-line interface to the RPM package manager.

Install LXDE

From the command-line, it’s time to set up LXDE, or whichever desktop you prefer. One thing to bear in mind is that while you can run something big like Cinnamon, ask yourself if you really need it. Cinnamon is big and cumbersome. I use it on my desktop, but not on my hosted servers, opting instead for more lightweight desktops like LXDE. And if you’re familiar with desktops such as Cinnamon, LXDE will feel very similar.

There are lots of instructions online for installing LXDE or other desktops, and so I won’t reiterate the details here. DigitalOcean has a fantastic blog with instructions for installing a similar desktop, XFCE.

Install a VNC server

Then you need to install a VNC server. Instead of using TightVNC, which a lot of people suggest, I recommend vnc4server because it allows for easy resolution changes, as I’ll describe shortly.

While setting up the VNC server, you’ll create a VNC username. You can just use a username and password for VNC, and from there you’re able to connect from a VNC client app to the system. However, the connection won’t be secure. Instead, you’ll want to connect through what’s called an SSH tunnel. The SSH tunnel is basically an SSH session into the server that is used for passing connections that would otherwise go directly over the internet.

When you connect to a server over the Internet, you use a protocol and a port. VNC usually uses 5900 or 5901 for the port. But with an SSH tunnel, the SSH app listens on a port on the same local device, such as 5900 or 5901. Then the VNC app, instead of connecting to the remote server, connects locally to the SSH app. The SSH app, in turn, passes all the data on to the remote system. So the SSH serves as a go-between. But because it’s SSH, all the data is secure.
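To make the go-between mechanics concrete, here is a toy, unencrypted port-forwarder in Python (the name forward_once is invented for this sketch). It does what the SSH app does, minus the crucial encryption on the middle leg, which is exactly why in practice you use a real SSH tunnel (for example, ssh -L 5901:localhost:5901 user@host) rather than anything like this:

```python
# Toy, UNENCRYPTED port-forwarder illustrating the "go-between" role of a
# tunnel: listen on a local port, relay bytes to a target host/port.
# A real SSH tunnel does the same relaying but carries the bytes inside an
# encrypted SSH session; this sketch only shows the mechanics.
import socket
import threading

def relay(src, dst):
    # Copy bytes one way until the sending side closes its end.
    while True:
        data = src.recv(4096)
        if not data:
            break
        dst.sendall(data)
    dst.close()

def forward_once(listen_port, target_host, target_port):
    # Accept a single local connection and relay it to the target.
    srv = socket.socket()
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind(("127.0.0.1", listen_port))
    srv.listen(1)
    client, _ = srv.accept()
    upstream = socket.create_connection((target_host, target_port))
    threading.Thread(target=relay, args=(client, upstream), daemon=True).start()
    relay(upstream, client)  # relay the return direction in this thread
```

A VNC client pointed at 127.0.0.1 and the listening port never needs to know a remote server is involved; that indirection is what the SSH tunnel provides, with encryption added on the remote leg.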

So the key is setting up a tunnel on your tablet. Some VNC apps can create the tunnel; others can’t and you need to use a separate app. JuiceSSH can create a tunnel, which you can use from other apps. My preferred VNC app, Remotix, on the other hand, can do the tunnel itself for you. It’s your choice how you do it, but you’ll want to set it up.

The app will have instructions for the tunnel. In the case of JuiceSSH, you specify the server you’re connecting to and the port, such as 5900 or 5901. Then you also specify the local port number the tunnel will be listening on. You can use any available port, but I’ll usually use the same port as the remote one. If I’m connecting to 5901 on the remote, I’ll have JuiceSSH also listen on 5901. That makes it easier to keep straight. Then you’ll open up your VNC app, and instead of connecting to a remote server, you connect to the port on the same tablet. For the server you just use 127.0.0.1, which is the IP address of the device itself. So to reiterate:

  1. JuiceSSH connects, for example, to 5901 on the remote host. Meanwhile, it opens up 5901 on the local device.
  2. The VNC app connects to 5901 on the local device. It doesn’t need to know anything about what remote server it’s connecting to.

But some VNC apps don’t need another app to do the tunneling, and instead provide the tunnel themselves. Remotix can do this; if you set up your app to do so, make sure you understand that you’re still tunneling. You provide the information needed for the SSH tunnel, including the key file and username. Then Remotix does the rest for you.

Once you get the VNC app going, you’ll be in. You should see a desktop open with the LXDE logo in the background. Next, you’ll want to go ahead and configure the VNC client to your liking; I prefer to control the mouse using drags that simulate a trackpad; other people like to control the mouse by tapping exactly where you want to click. Remotix and several other apps let you choose either configuration.

Configuring the Desktop

Now let’s configure the desktop. One issue I had was getting the desktop to look good on my 10-inch tablet. This involved configuring the look and feel by clicking the taskbar menu > Preferences > Customize Look and Feel (or running lxappearance from the command line).


I also used OpenBox’s own configuration tool by clicking the taskbar menu > Preferences > OpenBox Configuration Manager (or running obconf).


My larger tablet’s screen isn’t huge at 10 inches, so I configured the menu bars and buttons and such to be somewhat large for a comfortable view. One issue is the tablet has such a high resolution that if I used the maximum resolution, everything was tiny. As such, I needed to be able to change resolutions based on the work I was doing, as well as based on which tablet I was using. This involved configuring the VNC server, though, not LXDE and OpenBox. So let’s look at that.

In order to change resolution on the fly, you need a program that can manage the RandR extensions, such as xrandr. But the TightVNC server that seems popular doesn’t work with RandR. Instead, I found that the vnc4server program works with xrandr, which is why I recommend using it instead. When you configure vnc4server, you’ll want to provide the different resolutions in the command’s -geometry option. Here’s an init.d service configuration file that does just that. (I modified this based on one I found on DigitalOcean’s blog.)

export USER="jeff"
DISPLAY="1"
OPTIONS="-depth 16 -geometry 1920x1125 -geometry 1240x1920 -geometry 2560x1500 -geometry 1920x1080 -geometry 1774x1040 -geometry 1440x843 -geometry 1280x1120 -geometry 1280x1024 -geometry 1280x750 -geometry 1200x1100 -geometry 1024x768 -geometry 800x600 :1"
. /lib/lsb/init-functions
case "$1" in
start)
log_action_begin_msg "Starting vncserver for user '${USER}' on localhost:${DISPLAY}"
su ${USER} -c "/usr/bin/vnc4server ${OPTIONS}"
;;
stop)
log_action_begin_msg "Stopping vncserver for user '${USER}' on localhost:${DISPLAY}"
su ${USER} -c "/usr/bin/vnc4server -kill :${DISPLAY}"
;;
restart)
$0 stop
$0 start
;;
esac
exit 0

The key here is the OPTIONS line with all the -geometry options. These will show up when you run xrandr from the command line:


You can use your VNC login to modify the file in the init.d directory (and indeed I did, using the editor called scite). But then after making these changes, you’ll need to restart the VNC service just this one time, since you’re changing its service settings. Doing so will end your current VNC session, and it might not restart correctly. So you might need to log in through JuiceSSH to restart the VNC server. Then you can log back in with the VNC server. (You also might need to restart the SSH tunnel.) After you do, you’ll be able to configure the resolution. And from then on, you can change the resolution on the fly without restarting the VNC server.

To change resolutions without having to restart the VNC server, just type:

xrandr -s 1

Replace 1 with the number for the resolution you want. This way you can change the resolution without restarting the VNC server.

Server Concerns

After everything is configured, you’re free to use the software you’re familiar with. The only catch is that hosts charge a good bit for servers with plenty of RAM and disk space, so you might be limited in what you can run based on the amount of RAM and the number of cores. Still, I’ve found that with just 2GB of RAM and 2 cores, running Ubuntu and LXDE, I can keep open Chrome with a few pages, LibreOffice with a couple of documents, Geany for my code editing, my own server software running under node.js for testing, and a MySQL server. Occasionally, if I get too many Chrome tabs open, the system will suddenly slow way down and I have to shut down tabs to free up memory. Sometimes I run MySQL Workbench and it can bog things down a bit too, but it isn’t bad if I close LibreOffice and leave only one or two Chrome tabs open. But in general, for most of my work, I have no problems at all.

And on top of that, if I do need more horsepower, I can spin up a bigger server with 4GB or 8GB and four cores or eight cores. But that gets costly and so I don’t do it for too many hours.

Multiple Screens

For fun, I did manage to get two screens going on a single desktop, one on my bigger 10-inch ASUS transformer tablet, and one on my smaller Nexus 7 all from my Linux server running on a public cloud host, complete with a single mouse moving between the two screens. To accomplish this, I started two VNC sessions, one from each tablet, and then from the one with the mouse and keyboard, I ran:

x2x -east -to :1

This basically connected the single mouse and keyboard to both displays. It was a fun experiment, but in my case provided little practical value because it wasn’t like a true dual display on a desktop computer. I couldn’t slide windows between the displays, and the Chrome browser won’t open under more than one X display. In my case, for web development, I wanted to be able to open up the Chrome browser on one tablet, and the Chrome JavaScript debug window on the other, but that didn’t work out.

Instead, what I found more useful was to have an SSH command-line shell on the smaller tablet, and that’s where I would run my node.js server code, which was printing out debug information. Then on the other I would have the browser running. That way I can glance back and forth without switching between windows on the single VNC login on the bigger tablet.

Back to Security

I can’t overstate the importance of making sure you have your security set up and that you understand how the security works and what the ramifications are. I highly recommend using SSH with a key-file login only, with no password logins allowed. And treat this as a development or test machine; don’t put customer data on it that could open you up to lawsuits in the event the machine gets compromised.

Instead, for production machines, allocate your production servers using all the best practices laid out by your own IT department security rules, and the host’s own rules. One issue I hit is my development machine needs to log into git, which requires a private key. My development machine is hosted, which means that private key is stored on a hosted server. That may or may not be a good idea in your case; you and your team will need to decide whether to do it. In my case, I decided I could afford the risk because the code I’m accessing is mostly open-source and there’s little private intellectual property involved. So if somebody broke into my development machine, they would have access to the source code for a small but non-vital project I’m working on, and drafts of these articles–no private or intellectual data.

Web Developers and A Pesky Thing Called Windows

Before I wrap this up, I want to present a topic for discussion. Over the past few years I’ve noticed that a lot of individual web developers use a setup quite similar to what I’m describing. In a lot of cases they use Windows instead of Linux, but the idea is the same regardless of operating system. But where they differ from what I’m describing is they host their entire customer websites and customer data on that one machine, and there is no tunneling; instead, they just type in a password. That is not what I’m advocating here. If you are doing this, please reconsider. (I personally know at least three private web developers who do this.)

Regardless of operating system, take some time to understand the ramifications here. First, by logging in with a full desktop environment, you’re possibly slowing down your machine for your dev work. And if you mess something up and have to reboot, your clients’ websites aren’t available during that time. Are you using replication? Are you using private networking? Are you running MySQL or some other database on the same machine instead of behind a virtual private network? Entire books could be (and have been) written on such topics and what the best practices are. Learn about replication; learn about virtual private networking and how to shield your database servers from outside traffic; and so on. And most importantly, consider the security issues. Are you hosting customer data on a site that could easily be compromised? That could spell L-A-W-S-U-I-T. And that brings me to my conclusion for this series.

Concluding Remarks

Some commenters on the previous articles have brought up some valid points; one even used the phrase “playing.” While I really am doing development work, I’m definitely not doing this on production machines. If I were, that would indeed be playing and not be a legitimate use for a production machine. Use SSH for the production machines, and pick an editor to use and learn it. (I like vim, personally.) And keep the customer data on a server that is accessible only from a virtual private network. Read this to learn more.

Learn how to set up and configure SSH. And if you don’t understand all this, then please, practice and learn it. There are a million web sites out there to teach this stuff. But if you do understand and can minimize the risk, then you really can get some work done from nearly anywhere. I’ve become far more productive. If I want to run to a coffee shop and do some work, I can, without having to take a laptop along. Times are good! Learn the rules, follow the best practices, and be productive.

See the previous tutorials:

How to Set Up Your Linux Dev Station to Work From Anywhere

Choosing Software to Work Remotely from Your Linux Dev Station

SANS Internet Storm Center, InfoCON: green: MS15-034: HTTP.sys (IIS) DoS And Possible Remote Code Execution. PATCH NOW, (Wed, Apr 15th)

This post was syndicated from: SANS Internet Storm Center, InfoCON: green and was written by: SANS Internet Storm Center, InfoCON: green. Original post: at SANS Internet Storm Center, InfoCON: green

Denial of Service (DoS) exploits are widely available for CVE-2015-1635, a vulnerability in HTTP.sys affecting Internet Information Server (IIS). The patch was released on Tuesday (April 14th) as part of Microsoft’s Patch Tuesday. We raised the InfoCON to Yellow as these scans use the DoS version, not the detection version, of the exploit. The scans appear to be Internet-wide.

[We will have a webcast live from SANS 2015 in Orlando at 6pm ET. For details, see . If you are attending SANS 2015: Osprey Room 1 at the Swan hotel]

Updated Section 6 information regarding Information Disclosure issue.

Based on posts on Twitter, is also sending the exploit code in somewhat targeted scans.

Version of the exploit seen used in these scans:

GET /%7Bwelcome.png HTTP/1.1
User-Agent: Wget/1.13.4 (linux-gnu)
Accept: */*
Host: [server-ip]
Connection: Keep-Alive
Range: bytes=18-18446744073709551615


1 – Which Versions of Windows are affected?

2 – Will an IPS protect me?

A Snort rule along these lines can flag the oversized Range header:

alert tcp $EXTERNAL_NET any -> $HOME_NET 80 (msg:"MS15-034 Range Header HTTP.sys Exploit"; content:"|0d 0a|Range: bytes="; byte_test:10,>,1000000000,0,relative,string,dec;)

(byte_test is limited to 10 bytes, so I just check if the first 10 bytes are larger than 1000000000)

Watch out: there are some tricks to bypass simple rules, like adding whitespace to the Range: header’s value. More info here.

3 – Will the exploit work over SSL?

Yes, which may be used to bypass your IDS or other network protections.

4 – Have you seen active exploits in the wild?

Not yet. We have seen working DoS exploits, but have not detected them in our honeypots. Erratasec conducted a (partial) scan of the Internet using a non-DoS exploit with the intent to enumerate vulnerable systems.

5 – How do I know if I am vulnerable?

Send the following request to your IIS server:

GET / HTTP/1.1
Host: MS15034
Range: bytes=0-18446744073709551615

If the server responds with “Requested Range Not Satisfiable”, then you may be vulnerable.

Test Scripts:

(powershell removed as it doesn’t support 64-bit integers… worked without error for me, but something else may have been wrong with it)

curl -v [ipaddress]/ -H "Host: test" -H "Range: bytes=0-18446744073709551615"

wget -O /dev/null --header="Range: bytes=0-18446744073709551615" http://[ip address]/
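The same check can be scripted with nothing but the Python standard library. This is only a sketch of the test described above (looks_vulnerable is a made-up name), and like the curl/wget versions it should only be pointed at servers you are authorized to test:

```python
# Send the oversized Range request and inspect the status line.
# A 416 ("Requested Range Not Satisfiable") is the tell-tale response;
# anything else suggests a patched server or something other than IIS.
import http.client

def looks_vulnerable(host, port=80):
    conn = http.client.HTTPConnection(host, port, timeout=10)
    conn.request("GET", "/", headers={
        "Host": "MS15034",
        "Range": "bytes=0-18446744073709551615",
    })
    status = conn.getresponse().status
    conn.close()
    return status == 416
```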

6 – Can this vulnerability be exploited to do more than a DoS?

In its advisory, Microsoft classifies the vulnerability as a remote code execution vulnerability. But at this point, no exploit has been made public that executes code. Only DoS exploits are available.
There also appears to be an information disclosure vulnerability. If the lower end of the range is one byte less than the size of the retrieved file, kernel memory is appended to the output before the system reboots. In my own testing, I was not able to achieve consistent information leakage. Most of the time, the server just crashes.

[Turns out, the file does not have to be 4GB. Tried it with a short file and it worked. The 4GB figure came from my misreading of the Chinese article in the Resources section]

7 – How do I launch the DoS exploit?

In the example PoC above, change the 0- to 20-. (It has to be smaller than the size of the file retrieved, but larger than 0.)

8 – What is special about the large number in the PoC exploit?

It is 2^64-1, the largest 64-bit number (hex: 0xFFFFFFFFFFFFFFFF).
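A quick sanity check of that arithmetic in Python:

```python
# The magic number in the PoC is the largest unsigned 64-bit integer.
n = 18446744073709551615
print(n == 2**64 - 1)   # -> True
print(hex(n))           # -> 0xffffffffffffffff
```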

9 – Any Other Workarounds?

In IIS 7, you can disable kernel caching.

10 – Is only IIS vulnerable? Or are other components affected as well?

Potentially, anything using HTTP.sys and kernel caching is vulnerable. HTTP.sys is the Windows library used to parse HTTP requests. However, IIS is the most common program exposing HTTP.sys. You may find potentially vulnerable components by typing:

netsh http show servicestate

Will IIS Request Filtering block the exploit? No. IIS Request Filtering happens after the Range header is parsed.

References: (Chinese)

Thanks to Threatstop for providing an IIS server for testing.

Johannes B. Ullrich, Ph.D.

(c) SANS Internet Storm Center. Creative Commons Attribution-Noncommercial 3.0 United States License.

SANS Internet Storm Center, InfoCON: green: Odd POST Request To Web Honeypot, (Tue, Apr 14th)

This post was syndicated from: SANS Internet Storm Center, InfoCON: green and was written by: SANS Internet Storm Center, InfoCON: green. Original post: at SANS Internet Storm Center, InfoCON: green

I just saw this odd POST request to our honeypots:
Host: [IP Address of Honeypot]
Cache-Control: no-cache


The payload looks Base64 encoded, but decoding doesn’t help much either. The payload also looks like the + (which would be a space if URL encoded) marks a delimiter.

0000010: df8a f237 8362 2b86 f6cb 6213 2905 1354  ...7.b+...b.)..t
0000020: 037f da99 fbb0 d64a ab78 9f35 979e c8c1  .......j.x.5....
0000030: d725 ed06 af0b 9fb9 e3d8 5131 8639 b125  .%........q1.9.%
0000040: c453 60b0 ae0f 2067 44ff 1ca6 a380 949a  .s`... gd.......
0000050: cceb c944 6ad8 c04b 83b6 6e94 156e d547
0000060: 0327 edb5 df4b 72c6 83be 4603 0fd5 1a39
0000070: 5727 0a70 d927 1ab4 b783 398b d59a 26eb  w.p.....9....
0000080: 2b48 1349 ae2f 5baf d085 8c43 3b22 b2f1  +h.i.=..
0000090: 4b56 544b 8951 eae9 16d7 246f 30c7 b5b1  kvtk.q....$o0...
00000a0: e4ae 8c16 df06 aad9 6e28 c51d a1f8 78ae  ........n(....x.
00000b0: 5370 7e80 2469 2292 06a2 54b8 bab8 0070  sp~.$i...t....p
00000c0: 8675 fe54 bf4d cdfb 2cf9 1473 0b89 f585  .u.t.m..,..s....
00000d0: 7eb3 1ca9 f8f9 77e8 0881 ecde 115b 8143  ~.....w......[.c
00000e0: a3a9 0c76 bcfb ce7a 4dfe cb67 c06e c9dd
00000f0: 8aac fc48 2ccc 252c d6d1 0c41 5ba2 63bb  ...h,.%,...a[.c.
0000100: c68d afe9 fdf9 e942 43ad 2f44 cfd2 4486  .......bc. d..d.
0000110: e5

Any ideas?

Johannes B. Ullrich, Ph.D.

(c) SANS Internet Storm Center. Creative Commons Attribution-Noncommercial 3.0 United States License.

Linux How-Tos and Linux Tutorials: The CuBox: Linux on ARM in Around 2 Inches Cubed

This post was syndicated from: Linux How-Tos and Linux Tutorials and was written by: Ben Martin. Original post: at Linux How-Tos and Linux Tutorials

cuboxThe CuBox is a 2-inch cubed ARM machine that can be used as a set-top box, a small NAS or database server, or in many other interesting applications. In my ongoing comparison of ARM machines, including the BeagleBone Black, Cubieboard, and others, the CuBox has the fastest IO performance for SSD that I’ve tested so far.

There are a few models, and some ways to customize each, giving you a choice of dual or quad cores, 1GB or 2GB of RAM, 100-megabit or gigabit Ethernet, and optional WiFi and Bluetooth. This gives you a price range from $90 to $140 depending on which features you’re after. We’ll take a look at the CuBox i4Pro, which is the top-of-the-line model with all the bells and whistles.

CuBox Features

Most of the connectors on the CuBox are on the back side. The connectors include gigabit ethernet, two USB 2.0 ports, a full sized HDMI connector, eSATA, power input, and a microSD slot. Another side of the CuBox also features an Optical S/PDIF Audio Out. The power supply is a 5 Volt/3 Amp unit and connects using a DC jack input on the CuBox.

One of the first things I noticed when unpacking the CuBox is that it is small, coming in at 2 by 2 inches in length and width and around 1 3/4 inches tall. By contrast, a Raspberry Pi 2 in a case comes out at around 3.5 inches long and just under 2.5 inches wide. The CuBox stands taller on the table than the Raspberry Pi.

When buying the CuBox you can choose to get either Android 4.4 or OpenELEC/XBMC on your microSD card. You can also install Debian, Fedora, openSUSE, and others, when it arrives.

The CuBox i4Pro had Android 4.4.2 pre-installed. The first boot up sat at the “Android” screen for minutes, making me a little concerned that something was amiss. After the delay you are prompted to select the app launcher that you want to use and then you’re in business. A look at the apps available by default shows Google Keep and Drive as well as the more expected apps like YouTube, Gmail, and the Play Store. The YouTube app was recent enough to include an option to Chromecast the video playback. Some versions of Android distributed with small ARM machines do not come with the Play Store by default, so it’s good to see it here right off the bat.

One app that I didn’t expect was the Ethernet app. This lets you check what IP address, DNS settings, and proxy server, if any, are in use at the moment. You can also specify to use DHCP (the default) or a static IP address and nominate a proxy server as well as a list of machines that the CuBox shouldn’t use the proxy to access.

When switching applications the graphical transitions were smooth. The mouse wheel worked as expected in the App/Widgets screen, the settings menu, and the YouTube app. The Volume +/- keys on a multimedia keyboard changed the volume but only in increments of fully on or fully off. That might not be an issue if you are controlling the volume with your television or amp instead of the CuBox. Playback in the YouTube app was smooth and transitioned to full screen playback without any issues.

The Chrome browser (version 31.0.1650.59) got 2,445 overall for the Octane 2.0 benchmark. To contrast, on a 3-year-old Mac Air, Chrome (version 41.0.2272.89) got 13,542 overall.

Installing Debian

The microSD slot in the CuBox is not spring loaded, so to remove the microSD card you have to use your fingernail to carefully prise it out of the slot.

Switching to Debian can be done by downloading the image and using a command like the one below to copy that image to a microSD card. I kept the original card and used a new, second microSD card to write Debian onto so I could easily switch between Debian and Android. Once writing is done, slowly prise out the original microSD card and insert the newly created Debian microSD card.

dd if=Cubox-i_Debian_2.6_wheezy_3.14.14.raw of=/dev/sdX bs=4M && sync

(Here /dev/sdX stands for your microSD card’s device node; double-check it with lsblk before running the command, as dd will overwrite whatever device you point it at.)
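Before booting the new card it is worth checking that the write actually succeeded. One way to sketch this, assuming the same image file and a hypothetical card device /dev/sdX: compare the card against the image, limiting the comparison to the image’s length since the card is larger.

```shell
# Verify the written card byte-for-byte against the image file.
# IMG and /dev/sdX are placeholders for your image and card device.
IMG=Cubox-i_Debian_2.6_wheezy_3.14.14.raw
if [ -r "$IMG" ] && [ -r /dev/sdX ]; then
    # cmp -n limits the comparison to the image's size in bytes
    cmp -n "$(stat -c %s "$IMG")" "$IMG" /dev/sdX && echo "image verified"
else
    echo "skipping verify: image or device not present"
fi
```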

There is also support for installing and running a desktop on your CuBox/Debian setup. That extends to experimental support for accelerated GPU and VPU on the CuBox. On my Debian installation I tried to hardware decode the Big Buck Bunny video, but it seems some more tinkering is needed to get hardware decoding working. Using the “GeexBox XBMC - A Kodi Media Center” version 3.1 distribution the same file played fine, so hardware decoding is supported by the CuBox; it just might take a little more tinkering to get at it if you want to run Debian.

The Debian image boots to a text console by default. This is easily overcome by installing a desktop environment; I found that Xfce worked well on the CuBox.

CuBox Performance

Digging around in /sys one should find the directory /sys/devices/system/cpu/cpu0/cpufreq, which contains interesting files like cpuinfo_cur_freq and cpuinfo_max_freq. For me these showed about 0.8 GHz and 1.2 GHz respectively.
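A quick way to read those values without poking at each file by hand is a small loop over the standard cpufreq sysfs entries (the values are reported in kHz, so 1200000 corresponds to 1.2 GHz; note that cpuinfo_cur_freq is often readable only by root):

```shell
# Print the current and maximum CPU clock from the kernel's cpufreq
# sysfs interface. Values are in kHz (1200000 kHz = 1.2 GHz).
cpufreq_info() {
    dir=/sys/devices/system/cpu/cpu0/cpufreq
    for f in cpuinfo_cur_freq cpuinfo_max_freq; do
        if [ -r "$dir/$f" ]; then
            printf '%s: %s kHz\n' "$f" "$(cat "$dir/$f")"
        else
            # cpuinfo_cur_freq is frequently root-only; on most kernels
            # scaling_cur_freq is a world-readable alternative
            printf '%s: not readable\n' "$f"
        fi
    done
}

cpufreq_info
```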

The OpenSSL benchmark is a single core test. Some other ARM machines like the ODroid-XU are clocked much faster than the CuBox, which will have an impact on the OpenSSL benchmark.

Compiling OpenSSL 1.0.1e on four cores took around 6.5 minutes. Performance for digests and ciphers was in a similar ballpark to the BeagleBone Black. For 1,024-bit RSA signatures the CuBox beat the BeagleBone Black, at about 200 versus 160 signatures per second respectively.
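The signature figures above come from OpenSSL’s built-in benchmark, which you can reproduce on any machine; openssl speed runs each operation for a fixed interval and reports operations per second:

```shell
# Benchmark 1024-bit RSA on a single core. The summary line at the end
# reports signatures and verifications per second, e.g.:
#   rsa 1024 bits ... sign/s verify/s
openssl speed rsa1024

# To load all four cores at once (as in the power measurements later),
# run four instances in parallel:
#   for i in 1 2 3 4; do openssl speed rsa1024 & done; wait
```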

Cubox ciphers

Cubox digests

cubox rsa sign

Iceweasel 31.5 gets an overall Octane score of 2,015. For comparison, Iceweasel 31.4.0esr-1 on the Raspberry Pi 2 got an overall Octane score of 1,316.

To test 2D graphics performance I used version 1.0.1 of the Cairo Performance Demos. The gears test runs three turning gears; the chart runs four line graphs; the fish test is a simulated fish tank with many fish swimming around; gradient is a filled, curved-edged path that moves around the screen; and flowers renders rotating flowers that move up and down the screen. For comparison I used a desktop machine running an Intel 2600K CPU with an NVidia GTX 570 card which drives two screens, one at 2560 x 1440 and the other at 1080p.

[Table: Cairo Performance Demos frame rates per test (gears, chart, fish, gradient, flowers) for the Radxa at 1080, the BeagleBone Black at 720 and LVDS at 768, the desktop 2600K/nv570 driving two screens, the Raspberry Pi 2 at 1080, and the CuBox i4Pro at 1080.]
The CuBox also features an eSATA port, freeing you from microSD cards by making considerably faster SSD storage available. The eSATA port, multiple cores, and gigabit ethernet port make the CuBox and an external 2.5-inch SSD an interesting choice for a small NAS.

I connected a 120 GB SanDisk Extreme SSD to test the eSATA performance. For sequential IO, Bonnie++ could write about 120 megabytes/second, read 150 MB/s, and rewrite blocks at about 50 MB/s. Overall about 6,000 seeks/second were achieved.
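If you don’t have bonnie++ installed, a single dd run gives a rough sequential-write sanity check. A sketch, with TARGET standing in for wherever the eSATA drive is mounted (the /tmp fallback here is just so the command runs anywhere):

```shell
# Write 64 MB of zeros and force it to disk with fsync, so the reported
# rate reflects the drive rather than the page cache. Point TARGET at
# the SSD's mount point; /tmp is only a placeholder fallback.
TARGET=${TARGET:-/tmp}
dd if=/dev/zero of="$TARGET/ddtest.bin" bs=1M count=64 conv=fsync 2>&1 | tail -n 1
rm -f "$TARGET/ddtest.bin"
```

Note this only approximates bonnie++’s sequential-write number; it says nothing about rewrite or seek performance.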

For price comparison, a 120 GB SanDisk SSD currently goes for about $70 while a 128 GB SanDisk microSD card is around $100. The microSD card packaging mentions up to 48 MB/s transfer rates. This is without considering that the SSD should perform better for server loads and for workloads with data rewrites, such as database servers.

For comparison, this is the same SSD I used when reviewing the Cubieboard. Although the CuBox and Cubieboard have similar sounding names they are completely different machines. Back then I found that the Cubieboard could write about 41 MB/s and read 104 MB/s from the drive, with 1,849 seeks/s performed. The same SSD on the TI OMAP5432 got 66 MB/s write, 131 MB/s read, and could do 8,558 seeks/s. It is strange that the CuBox can transfer more data to and from the drive than the TI OMAP5432 while the OMAP5432 has better seek performance.

As far as eSATA data transfer goes, the CuBox is the ARM machine with the fastest IO performance for this SSD I have tested so far.

Power usage

At an idle graphical login with a mouse and keyboard plugged in, the CuBox drew 3.2 Watts. Disconnecting the keyboard and mouse dropped power to 2.8 W. With the keyboard and mouse reconnected for the remainder of the readings, running a single instance of OpenSSL speed pushed that to 4 W, and running four OpenSSL speed tests at once drew up to 6.3 W. When running Octane the power ranged up to 5 W on occasion.

Final Words

While the smallest ARM machines try to attach directly to an HDMI port, if you plan to add a realistic number of connections to the machine such as power, ethernet, and some USB cables, then the HDMI dongle form factor becomes a disadvantage. Instead, the CuBox opts to have (almost) all the connectors coming out of one side of the machine and to make that machine extremely small.

Being able to select from three base machines, and configure if you want (and want to pay for) wifi and bluetooth lets you customize the machine for the application you have in mind. The availability of eSATA and a gigabit ethernet connection allow the CuBox to be a small server — be that a NAS or a database server. The availability of two XBMC/Kodi disk images offering hardware video decoding also makes the CuBox an interesting choice for media playback.

We would like to thank SolidRun for supplying the CuBox hardware used in this review.

TorrentFreak: Judge: IP-Address Doesn’t Identify a Movie Pirate

This post was syndicated from: TorrentFreak and was written by: Ernesto. Original post: at TorrentFreak

ip-addressWhile relatively underreported, many U.S. district courts are still swamped with lawsuits against alleged film pirates.

Among the newcomers this year are the makers of the action movie Manny. Over the past few months “Manny Film” has filed 215 lawsuits across several districts.

Like all copyright holders, the makers of the film rely on IP-addresses as evidence. They then ask the courts to grant a subpoena, forcing Internet providers to hand over the personal details of the associated account holders.

In most cases the courts sign off on these requests, but in Florida this isn’t as straightforward.

When District Court Judge Ursula Ungaro was assigned a Manny Film case she asked the company to explain how an IP-address can pinpoint the actual person who downloaded a pirated film. In addition, she asked them to show that geolocation tools are good enough to prove that the alleged pirate resides in the Court’s district.

In a detailed reply the filmmakers argued that IP-addresses can identify the defendant and that a refusal to grant a subpoena would set a “dangerous precedent.” Manny Film further stated that “all other courts” disagreed with the notion that an IP-address is not a person.

This last remark didn’t go down well with Judge Ungaro. In an order handed down this week she cites various cases where courts ruled that IP-addresses don’t always identify the alleged offenders.

“Due to the risk of ‘false positives,’ an allegation that an IP address is registered to an individual is not sufficient in and of itself to support a claim that the individual is guilty of infringement,” wrote the Judge citing a 2012 case, one of many examples.

The referenced cases clearly refute Manny Film’s claim that all other courts disagreed with Judge Ungaro’s concerns, and the Judge is not convinced by any of the other arguments either.

“As in those cases, Plaintiff here fails to show how geolocation software can establish the identity of the Defendant. Specifically, there is nothing linking the IP address location to the identity of the person actually downloading and viewing the copyrighted material and nothing establishing that the person actually lives in this district,” Judge Ungaro writes.

“Even if this IP address is located within a residence, geolocation software cannot identify who has access to that residence’s computer and who would actually be using it to infringe Plaintiff’s copyright,” she adds.

As a result, the Court refused to issue a subpoena and dismissed the case against the IP-address for improper venue.

While not all judges may come to the same conclusion, the order makes it harder for rightsholders to run their “copyright troll” scheme in the Southern District of Florida. At the same time, it provides future defendants with a good overview for fighting similar claims elsewhere.

Source: TorrentFreak, for the latest info on copyright, file-sharing, torrent sites and anonymous VPN services.