
TorrentFreak: VPN Providers Respond To Allegations of Data Leakage

This post was syndicated from: TorrentFreak and was written by: Andy. Original post: at TorrentFreak

As Internet users seek to bypass censorship, boost privacy and achieve a level of anonymity, VPN services have stepped in with commercial solutions to assist with these aims. The uptake among consumers has been impressive.

Reviews of VPN services are commonplace and usually base their ratings on price and speed. At TorrentFreak we examine many services annually, but with a focus on privacy issues instead.

Now a team of researchers from universities in London and Rome has published a paper titled A Glance through the VPN Looking Glass: IPv6 Leakage and DNS Hijacking in Commercial VPN Clients (pdf) after investigating 14 popular services on the market today.

“Our findings confirm the criticality of the current situation: many of these providers leak all, or a critical part of the user traffic in mildly adversarial environments. The reasons for these failings are diverse, not least the poorly defined, poorly explored nature of VPN usage, requirements and threat models,” the researchers write.

While noting that all providers are able to successfully send data through an encrypted tunnel, the paper claims that problems arise during the second stage of the VPN client’s operation: traffic redirection.

“The problem stems from the fact that routing tables are a resource that is concurrently managed by the operating system, which is unaware of the security requirements of the VPN client,” the researchers write.

This means that changes to the routing table (whether they are malicious or accidental) could result in traffic circumventing the VPN tunnel and leaking to other interfaces.

IPv6 VPN Traffic Leakage

“The vulnerability is driven by the fact that, whereas all VPN clients manipulate the IPv4 routing table, they tend to ignore the IPv6 routing table. No rules are added to redirect IPv6 traffic into the tunnel. This can result in all IPv6 traffic bypassing the VPN’s virtual interface,” the researchers explain.
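The leak the researchers describe can be checked for mechanically on Linux by inspecting the kernel's IPv6 routing table. Below is a minimal sketch, not the researchers' tooling, which parses the /proc/net/ipv6_route format and flags any IPv6 default route that does not point at the VPN's virtual interface; the interface name tun0 and the sample table are illustrative assumptions.

```python
# Sketch: flag IPv6 default routes that bypass a VPN tunnel, using the
# /proc/net/ipv6_route line format (Linux). "tun0" is an assumed name.

def leaky_ipv6_routes(route_table: str, tunnel_iface: str = "tun0"):
    """Return interfaces holding an IPv6 default route outside the tunnel."""
    leaks = []
    for line in route_table.strip().splitlines():
        fields = line.split()
        dest, prefixlen, iface = fields[0], fields[1], fields[-1]
        # A default route has an all-zero destination and prefix length 0.
        if dest == "0" * 32 and prefixlen == "00" and iface != tunnel_iface:
            leaks.append(iface)
    return leaks

# Example table: a default route via eth0 (a leak) and a /64 on the tunnel.
sample = (
    "00000000000000000000000000000000 00 "
    "00000000000000000000000000000000 00 "
    "fe800000000000000000000000000001 0400 00000001 00000000 00000003 eth0\n"
    "20010db8000000000000000000000000 40 "
    "00000000000000000000000000000000 00 "
    "00000000000000000000000000000000 0400 00000001 00000000 00000001 tun0\n"
)
print(leaky_ipv6_routes(sample))  # ['eth0']
```

In a live check the table would come from reading /proc/net/ipv6_route; here it is inlined so the logic is clear.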


As illustrated by the chart above, the paper claims that all desktop clients (except for those provided by Private Internet Access, Mullvad and VyprVPN) leaked “the entirety” of IPv6 traffic, while all providers except Astrill were vulnerable to IPv6 DNS hijacking attacks.

The paper was covered yesterday by The Register with the scary-sounding title “VPNs are so insecure you might as well wear a KICK ME sign” but without any input from the providers in question. We decided to contact a few of them for their take on the paper.

PureVPN told TF that they “take the security of our customers very seriously and thus, a dedicated team has been assigned to look into the matter.” Other providers had already received advance notice of the paper.

“At least for AirVPN the paper is outdated,” AirVPN told TorrentFreak.

“We think that the researchers, who kindly sent the paper to us many months in advance and were warned about that, had no time to fix [the paper] before publication. There is nothing to worry about for AirVPN.”

“Current topology allows us to have the same IP address for VPN DNS server and VPN gateway, solving the vulnerability at its roots, months before the publication of the paper.”

TorGuard also knew of the whitepaper and has been working to address the issues it raises. The company adds that while The Register’s “the sky is falling” coverage of yesterday is “deceptive”, the study does illustrate the need for providers to stay vigilant. Specifically, TorGuard says that it has launched a new IPv6 leak prevention feature on Windows, Mac and Linux.

“Today we have released a new feature that will address this issue by giving users the option of capturing ALL IPv6 traffic and forcing it through the OpenVPN tunnel. During our testing this method proved highly effective in blocking potential IPv6 leaks, even in circumstances when these services were active or in use on the client’s machine,” the company reports.

On the DNS hijacking issue, TorGuard provides the following detail.

“It is important to note that the potential for this exploit only exists (in theory) if you are connected to a compromised WiFi network in which the attacker has gained full control of the router. If that is the case, DNS hijacking is only the beginning of one’s worries,” TorGuard notes.

“During our own testing of TorGuard’s OpenVPN app, we were unable to reproduce this when using private DNS servers because any DNS queries can only be accessed from within the tunnel itself.”

Noting that they released IPv6 Leak Protection in October 2013, leading VPN provider Private Internet Access told TorrentFreak that they feel the paper is lacking.

“While the article purported to be an unbiased and intricate look into the security offered by consumer VPN services, it was greatly flawed since the inputs or observations made by the researchers were inaccurate,” PIA said.

“While a scientific theory or scientific test can be proven by a logical formula or algorithm, if the observed or collected data is incorrect, the conclusion will be in error as well.”

PIA criticizes the report on a number of fronts, including incorrect claims about its DNS resolver.

“Contrary to the report, we have our own private DNS daemon running on the Choopa network. Additionally, the DNS server that is reported, while it is a real DNS resolver, is not the actual DNS that your system will use when connected to the VPN,” the company explains.

“Your DNS requests are handled by a local DNS resolver running on the VPN gateway you are connected to. This can be easily verified through a site like Additionally… we do not allow our DNS servers to report IPv6 (AAAA records) results. We’re very serious about security and privacy.”

Finally, in a comprehensive response (now published here) in which it notes that its Windows client is safe, PIA commends the researchers for documenting the DNS hijacking method but criticizes how it was presented to the VPN community.

“The DNS Hijacking that the author describes [..] is something that has recently been brought to light by these researchers and we commend them on their discovery. Proper reporting routines would have been great, however. Shamefully, this is improper security disclosure,” PIA adds.

While non-IPv6 users have nothing to fear, all users looking for a simple fix can disable IPv6 by following instructions for Windows, Linux and Mac.
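On Linux, for example, disabling IPv6 typically comes down to two standard sysctl knobs (a sketch; the equivalent steps on Windows and Mac go through the adapter settings instead):

```
# /etc/sysctl.conf — disable IPv6 on all interfaces
net.ipv6.conf.all.disable_ipv6 = 1
net.ipv6.conf.default.disable_ipv6 = 1
```

The settings take effect after running `sysctl -p` or rebooting.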

Source: TorrentFreak, for the latest info on copyright, file-sharing, torrent sites and the best VPN services.

TorrentFreak: Cloudflare Reveals Pirate Site Locations in an Instant

This post was syndicated from: TorrentFreak and was written by: Andy. Original post: at TorrentFreak

Five years ago, discovering the physical location of almost any ‘pirate’ site was achievable in a matter of seconds using widely available online tools. All one needed was an IP address and a simple lookup.

As sites became more aware of the need for security, cloaking efforts became more commonplace. Smaller sites, private trackers in particular, began using tunnels and proxies to hide their true locations, hampering anti-piracy efforts in the process. Later these kinds of techniques were used on even the largest sites, The Pirate Bay for example.

In the meantime the services of a rising company called Cloudflare had begun to pique the interest of security-minded site owners. Designed to optimize the performance of sites while blocking various kinds of abuse, Cloudflare-enabled sites get to exchange their regular IP address for one operated by Cloudflare, a neat side-effect for a site wishing to remain in the shadows.


Today, Cloudflare ‘protects’ dozens – perhaps hundreds – of ‘pirate’ sites. Some use Cloudflare for its anti-DDoS capabilities but all get to hide their real IP addresses from copyright holders. This has the potential to reduce the number of DMCA notices and other complaints filtering through to their real hosts.

Surprisingly, however, belief persists in some quarters that Cloudflare is an impenetrable shield that allows ‘pirate’ sites to operate completely unhindered. In fact, nothing could be further from the truth.

In recent days a perfect example appeared in the shape of Sparvar (Sparrows), a Swedish torrent site that has been regularly hounded by anti-piracy outfit Rights Alliance. Sometime after moving to Canada in 2014, Sparvar began using the services of Cloudflare, which effectively cloaked the site’s true location from the world. Well, that was the theory.

According to an announcement from the site, Rights Alliance lawyer Henrik Pontén recently approached Cloudflare in an effort to uncover Sparvar’s email address and the true location of its servers. The discussions between Rights Alliance and Cloudflare were seen by Sparvar, which set alarm bells ringing.

“After seeing the conversations between Rights Alliance and server providers / CloudFlare we urge staff of other Swedish trackers to consider whether the risk they’re taking is really worth it,” site staff said.

“All that is required is an email to CloudFlare and then [anti-piracy companies] will have your IP address.”

As a result of this reveal, Sparvar is now offline. No site or user data has been compromised but it appears that the site felt it best to close down, at least for now.


This obviously upset users of the site, some of whom emailed TorrentFreak to express disappointment at the way the situation was handled by Cloudflare. However, Cloudflare’s terms and conditions should leave no doubt as to how the company handles these kinds of complaints.

In one clause, in which Cloudflare reserves the right to investigate not only sites but also their operators, it is made crystal clear what information may be given up to third parties.

“You acknowledge that CloudFlare may, at its own discretion, reveal the information about your web server to alleged copyright holders or other complainants who have filed complaints with us,” the company writes.

The situation is further underlined when Cloudflare receives DMCA notices from copyright holders and forwards an alert to a site using its services.

“We have provided the name of your hosting provider to the reporter. Additionally, we have forwarded this complaint to your hosting provider as well,” the company’s abuse team regularly advises.

While Cloudflare itself tends not to take direct action against sites it receives complaints about, problems can mount if a copyright holder is persistent enough. Just recently Cloudflare was ordered by a U.S. court to discontinue services to a Grooveshark replacement. That site is yet to reappear.

Finally, Sparvar staff have some parting advice for other site operators hoping to use Cloudflare services without being uncovered.

“We hope that you do not have your servers directly behind CloudFlare which means a big security risk. We hope and believe that you are also running some kind of reverse proxy,” the site concludes.

At the time of publication, Henrik Pontén of Rights Alliance had not responded to our requests for comment.


The Hacker Factor Blog: Bot Spotting

This post was syndicated from: The Hacker Factor Blog and was written by: The Hacker Factor Blog. Original post: at The Hacker Factor Blog

There’s a common problem that impacts every web service out there: bots. The problem isn’t that web bots crawl web sites. Rather, the problem is that there are so many bots, and many bots behave poorly. A solid 40% of traffic to my blog comes from various kinds of bots. It used to be over 90%, but I took steps to mitigate bot traffic.

Some bots just want to crawl the entire web site. The poorly behaved bots (including GoogleBot and MSNBOT) sometimes get stuck in virtual directories and try to traverse indefinitely.

Other bots just want to hit the same URL over and over and over. One bot (from China) hit the same URL over a thousand times in one hour. I ended up creating a rule to detect and block rapid repeaters. This same rule stops RSS updaters that try to refresh more than a few times per hour. (I update this blog 1-2 times a week. There is no reason for an RSS system to request an update every 15 seconds.)
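A rule like the one described above can be sketched as a sliding-window counter per (client, URL) pair; the class name and thresholds below are illustrative, not the blog's actual implementation.

```python
# Sketch of a "rapid repeater" rule: block a client that requests the same
# URL more than `limit` times within `window` seconds.
from collections import defaultdict, deque

class RapidRepeaterFilter:
    def __init__(self, limit=10, window=3600.0):
        self.limit = limit
        self.window = window
        self.hits = defaultdict(deque)  # (client_ip, url) -> request times

    def allow(self, client_ip, url, now):
        q = self.hits[(client_ip, url)]
        # Drop timestamps that have aged out of the window.
        while q and now - q[0] > self.window:
            q.popleft()
        if len(q) >= self.limit:
            return False  # too many repeats: block this request
        q.append(now)
        return True

# Allow at most 3 hits on the same URL per 60 seconds.
f = RapidRepeaterFilter(limit=3, window=60.0)
results = [f.allow("1.2.3.4", "/feed", t) for t in (0, 10, 20, 30, 90)]
print(results)  # [True, True, True, False, True]
```

The fourth request is rejected because three hits already sit inside the window; by t=90 the old timestamps have expired and the client is allowed again.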

By blocking these bots and automated abuses, I cut my network traffic usage by more than half, freed up CPU cycles, and made real user requests much more responsive.

Finding Bots

There are a couple of different methods for finding bots.

The first method evaluates the user-agent string sent by the bot. If you see strings like “AhrefsBot”, “PostDesk-ReadBot”, or “OpenHoseBot”, then it is likely a bot crawling the web site. Of course, not every bot uses the term ‘bot’ in its name. There’s crawler4j, Digg Deeper, Jamie’s Spider, and many more. But this type of heuristic does make for a good first guess. If you see “googlebot”, then you can probably assume that the request is coming from Google. (Sure, some bots lie and most browsers can be configured to display an arbitrary user-agent string. But as a first-pass heuristic, this is a pretty good one.)
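This first-pass heuristic is a simple substring match; the token list below is illustrative and deliberately incomplete.

```python
# Flag a request as a likely bot if the user-agent contains a known
# crawler token. Case-insensitive substring match; not exhaustive.
KNOWN_BOT_TOKENS = [
    "bot", "crawler", "spider",        # generic markers
    "ahrefsbot", "openhosebot",        # named crawlers from the text
    "digg deeper", "crawler4j",
]

def looks_like_bot(user_agent: str) -> bool:
    ua = user_agent.lower()
    return any(token in ua for token in KNOWN_BOT_TOKENS)

print(looks_like_bot("Mozilla/5.0 (compatible; Googlebot/2.1)"))   # True
print(looks_like_bot("Mozilla/5.0 (Windows NT 10.0) Firefox/38.0"))  # False
```

As the text notes, this is only a first guess: a spoofed user-agent sails right through it.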

The second option is to filter by source network address. For example, Baiduspider and the Internet Archive’s archive.org_bot each come from very specific network ranges. If I see a request coming from one of those ranges, then I can be pretty confident which bot it is.
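The address-range check can be sketched with the standard library's ipaddress module; the ranges below are illustrative placeholders, not the bots' authoritative published subnets.

```python
# Map source addresses to bots by network range. The ranges here are
# placeholders for illustration only.
import ipaddress

BOT_RANGES = {
    "archive.org_bot": [ipaddress.ip_network("207.241.224.0/20")],
    "Baiduspider": [ipaddress.ip_network("180.76.0.0/16")],
}

def bot_for_address(addr: str):
    """Return the bot name whose range contains addr, or None."""
    ip = ipaddress.ip_address(addr)
    for bot, networks in BOT_RANGES.items():
        if any(ip in net for net in networks):
            return bot
    return None

print(bot_for_address("180.76.5.1"))   # 'Baiduspider'
print(bot_for_address("192.0.2.10"))   # None
```

Note that this approach inherits the problem discussed later in the article: if the configured range is wider than the range the bot really uses, real users get misclassified.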

More Complex

Of course, if there is a way to make a situation more complicated, then you can be certain that Google will find it… and Microsoft will then try to out-complex Google.

For example, Google does not use just one type of bot. The main crawler is Googlebot, but there’s also AdsBot-Google, Googlebot-Mobile, and AppEngine-Google. Unless you look for all of these different strings, you might end up with a confused Google web crawler infinitely traversing your site. (And I’m not convinced that these are all of the different types of bots by Google.) By the same means, Microsoft uses many different bot names. There’s “msnbot”, “bingbot”, “BingPreview”, and others. Unless you have a long list of Microsoft strings, you are unlikely to find them all.

Another approach is to maintain a permitted list of user-agent strings. But this leads to two big problems. First, most devices have different user-agent strings. You are unlikely to find every device that should be white-listed. The other problem is that many bots like to pretend to be other devices. Here are two user-agent strings, where BingBot pretends to be an iPhone running iOS 7, and Google impersonates an iPhone running iOS 6:

Mozilla/5.0 (iPhone; CPU iPhone OS 7_0 like Mac OS X) AppleWebKit/537.51.1 (KHTML, like Gecko) Version/7.0 Mobile/11A465 Safari/9537.53 (compatible; bingbot/2.0; +

Mozilla/5.0 (iPhone; CPU iPhone OS 6_0 like Mac OS X) AppleWebKit/536.26 (KHTML, like Gecko) Version/6.0 Mobile/10A5376e Safari/8536.25 (compatible; Googlebot/2.1; +

(Yes, Google and Microsoft bots frequently impersonate Apple devices.) If you manage to white-list Apple iPhones, then you’ll be letting these bots in.

Adding to the problem, both Microsoft and Google refuse to publish a list of addresses used by their bots. As Microsoft stated:

Bing does not publish a list of IP addresses or ranges from which we crawl the Internet. The reason is simple: the IP addresses or ranges we use can change any time, so responding to requests differently based on a hardcoded list is not a recommended approach and may cause problems down the line.

This refusal to publish is very similar to Google’s policy:

Google doesn’t post a public list of IP addresses for webmasters to whitelist. This is because these IP address ranges can change, causing problems for any webmasters who have hard coded them. The best way to identify accesses by Googlebot is to use the user-agent (Googlebot).

Google’s advice to check the user-agent string for “Googlebot” is clearly not enough since it will miss AdsBot-Google and others. Microsoft gives similar advice, saying to look for a user-agent string containing “bingbot”. But that will miss msnbot, BingPreview, and other Microsoft bots.

Both Google and Microsoft do offer an alternative… They both suggest doing a reverse hostname lookup. This is where you identify the hostname from the network address. (As a web service, you always know the network address used by the client.) If the client’s network address resolves to a name ending in “search.msn.com”, then Microsoft says it is one of their bots. Google says to look for “googlebot.com” in the hostname. However, both of these options have some significant limitations. For example:

  • Resale and Fakes. Google sells their search engine solutions to various organizations. I spotted the Brazilian government running their own Googlebot service. The user-agent clearly says Googlebot and the crawling acted like Googlebot. However, the associated hostname did not end in Google’s domain. Then again, these non-Google Googlebots may actually be fake Googlebots… the only real way to tell is by doing a hostname lookup.

  • Wrong bot. I caught a Google address accessing my site whose reverse hostname does not match Google’s suggested domain. In this case, the user-agent string identified the bot as “Google-SearchByImage”.
  • Speed. Let’s assume that these companies do return the correct hostname when doing a reverse lookup for one of their bots. A single DNS query is relatively fast. However, automating the lookup for each incoming query will add processing time. This will result in a measurable delay for even a moderately high number of requests.
  • Quota. Most DNS servers consider more than a few dozen rapid lookups to be a network attack. This is why most include rate limiting features. If I want to stop every Microsoft and Google bot from getting caught in virtual directory mirroring loops, then I will need to look up every incoming query in order to see if it is actually from Google or Microsoft. This means querying DNS hundreds of times per hour. I am certain to trigger the rate limiting thresholds and become blocked from querying DNS.

Requiring web services to perform hundreds of DNS queries in order to identify real bots is impractical and self-limiting.
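The check both vendors suggest is forward-confirmed reverse DNS: resolve the client address to a hostname, check the hostname's suffix, then resolve that hostname forward and confirm it maps back to the client. The sketch below takes the DNS answers as arguments so the logic stays clear and deterministic; a real implementation would obtain them via socket.gethostbyaddr() and socket.gethostbyname_ex(), and would cache results to avoid the quota problem described above.

```python
# Forward-confirmed reverse DNS sketch. DNS answers are passed in as
# plain values; the suffixes below reflect the vendors' stated domains.

def verify_bot(client_ip, reverse_hostname, forward_ips, official_suffixes):
    """A client is a verified bot only if its reverse hostname ends in an
    official suffix AND that hostname resolves back to the client's IP."""
    if not any(reverse_hostname.endswith(s) for s in official_suffixes):
        return False
    return client_ip in forward_ips

SUFFIXES = [".googlebot.com", ".search.msn.com"]

# A genuine Googlebot: PTR ends in googlebot.com and resolves back.
print(verify_bot("66.249.66.1", "crawl-66-249-66-1.googlebot.com",
                 ["66.249.66.1"], SUFFIXES))   # True
# An impostor sending a Googlebot user-agent from an unrelated host.
print(verify_bot("203.0.113.9", "host9.example.net",
                 ["203.0.113.9"], SUFFIXES))   # False
```

The forward confirmation step matters: anyone who controls their own reverse DNS can make an address claim to be "fake.googlebot.com", but they cannot make Google's zone resolve that name back to their address.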

Arbitrary Limitations

What I find really ironic is that Google and Microsoft happily publish their list of cloud service addresses. Google documents them in DNS TXT records. Retrieving this list of address ranges requires less than a dozen DNS queries, and not one per address. Similarly, Microsoft posts their cloud addresses on a web site. Yet, neither company will identify the subnets where their bots come from. While there is no standard method for publishing these lists, at least they are documented and published somewhere.

I actually think there is an ulterior motive and “IP address ranges can change” is nothing more than an excuse. Both Microsoft and Google make a significant income from their search engine technologies. If they published their network ranges, then many sites would either block access or offer alternate content. In effect, these companies just want to make it more difficult to detect and block.

The downside from not having an official listing is that many third-party sites have reverse-engineered the subnets used by these bots and published lists of subnets used by common web crawlers. Unfortunately, the quality of these lists varies dramatically. Many of them have not been maintained and list old addresses. And most of them list large subnets when only small subnets are used. For example, one list assigns MSNBOT a range far larger than the one I have actually recorded MSNBOT using.

Using excessively large ranges can lead to big problems. I had been using one of these third-party lists to identify bots. However, I recently saw a (human) user with a Windows Phone trying to access my site from one of the ranges the list associated with MSNBOT. The user saw my automated “no bots allowed” message. Using Microsoft’s documented reverse hostname check, I determined that the list had identified an overly large network range. I have since decided to use tighter network ranges, but I may still be over-blocking some subnets.

Managing Bots

If bots worked politely, then this wouldn’t be a problem. Unfortunately, many bots come in fast and furious. They can quickly suck up bandwidth and drive up the CPU load. (And if you’re on Amazon Cloud, where they charge for every network bit and every CPU cycle, then bots can result in real money being spent.)

There are a few options for mitigating bot abuse. Having a robots.txt exclusion file is a good start. Google and Microsoft seem to obey it, but other bots, like Getty’s Picscout, do not even bother to retrieve the file. And some attack bots use the robots.txt listing as starting points for vulnerability scans.

Both Google and Microsoft have webmaster tools where you can claim ownership of your domain and specify scan rates. But even these are not consistently used. I have seen Microsoft ignore the scan rate settings and flood my server with requests.

There are other little tricks that may alter web bot behavior. For example, you might add “nofollow” or “noindex” meta attributes in the HTML code, or generate an HTTP “X-Robots-Tag: noindex” response. However, none of these other options are widely supported.
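For illustration, the two mechanisms mentioned here look roughly like this (a sketch; as noted, crawler support varies):

```
<!-- meta attribute in the page's HTML <head> -->
<meta name="robots" content="noindex, nofollow">

# or the equivalent HTTP response header
X-Robots-Tag: noindex
```

The meta tag only works for HTML pages; the header form also covers non-HTML resources such as PDFs and images.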

Even profiling web actions may not be enough to identify a bot. Real humans with web browsers will retrieve my web page style sheet (CSS) and associated dependencies (images, javascript, etc.), while bots usually do not access support links. However, even this may be difficult to track. For example, if the user reads my blog via the RSS feed, then they may not retrieve my CSS file. If they obey my web page cache settings, then they should not retrieve the CSS file every time. And if they use a corporate proxy network, then one web request may use one exit proxy (one network address), and a subsequent query may use a different proxy (a different network address). According to my logs, I have seen employees at Amazon request my web page from one address and the style sheet from a different address as they rotate through corporate proxies. (I know they were employees because I was on the phone with them at the time. “Now go to this URL on my web site. Wait… your network address changed!”)
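The profiling idea above can be sketched as a log correlation: clients that fetch pages but never any support assets are bot candidates. As the paragraph explains, RSS readers, caching, and rotating proxies make this a weak signal rather than a verdict; the log format below is illustrative.

```python
# Clients that fetch pages but no CSS/JS/image assets are bot candidates.

ASSET_SUFFIXES = (".css", ".js", ".png", ".jpg", ".gif")

def bot_candidates(log):
    """log: iterable of (client_ip, request_path) pairs."""
    fetched_page, fetched_asset = set(), set()
    for ip, path in log:
        if path.endswith(ASSET_SUFFIXES):
            fetched_asset.add(ip)
        else:
            fetched_page.add(ip)
    return fetched_page - fetched_asset

log = [
    ("10.0.0.1", "/blog/entry-1"), ("10.0.0.1", "/style.css"),
    ("10.0.0.2", "/blog/entry-1"), ("10.0.0.2", "/blog/entry-2"),
]
print(sorted(bot_candidates(log)))  # ['10.0.0.2']
```

Note that the proxy-rotation problem described above breaks the per-IP grouping: a human whose page and CSS requests exit through different proxies would wrongly appear in the candidate set.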

I am still looking for other options to identify and mitigate web bots. A lot of them get stuck in my blog because the blog software has multiple ways to identify the same blog entry. If I could better identify bots, then I would have them stick to the permalinks rather than trying to index entries by page numbers that change whenever I post a new blog entry.

Pid Eins: The new sd-bus API of systemd

This post was syndicated from: Pid Eins and was written by: Lennart Poettering. Original post: at Pid Eins

With the new v221 release of systemd we are declaring the sd-bus API
shipped with systemd stable. sd-bus is our minimal D-Bus C library,
supporting as back-ends both classic socket-based D-Bus and
kdbus. The library has been part of systemd for a while, but has only
been used internally, since we wanted to have the liberty to still
make API changes without affecting external consumers of the
library. However, now we are confident to commit to a stable API for
it, starting with v221.

In this blog story I hope to provide you with a quick overview on
sd-bus, a short reiteration on D-Bus and its concepts, as well as a
few simple examples of how to write D-Bus clients and services with it.

What is D-Bus again?

Let’s start with a quick reminder what
D-Bus actually is: it’s a
powerful, generic IPC system for Linux and other operating systems. It
knows concepts like buses, objects, interfaces, methods, signals,
properties. It provides you with fine-grained access control, a rich
type system, discoverability, introspection, monitoring, reliable
multicasting, service activation, file descriptor passing, and
more. There are bindings for numerous programming languages that are
used on Linux.

D-Bus has been a core component of Linux systems for more than 10
years. It is certainly the most widely established high-level local
IPC system on Linux. Since systemd’s inception it has been the IPC
system it exposes its interfaces on. And even before systemd, it was
the IPC system Upstart used to expose its interfaces. It is used by
GNOME, by KDE and by a variety of system components.

D-Bus refers to both a specification and a reference
implementation. The reference implementation provides both a bus
server component and a client library.
reimplementations of the client library – for both C and other
programming languages –, the only commonly used server side is the
one from the reference implementation. (However, the kdbus project is
working on providing an alternative to this server implementation as a
kernel component.)

D-Bus is mostly used as local IPC, on top of AF_UNIX sockets. However,
the protocol may be used on top of TCP/IP as well. It does not
natively support encryption, hence using D-Bus directly on TCP is
usually not a good idea. It is possible to combine D-Bus with a
transport like ssh in order to secure it. systemd uses this to make
many of its APIs accessible remotely.

A frequently asked question about D-Bus is why it exists at all,
given that AF_UNIX sockets and FIFOs already exist on UNIX and have
been used for a long time successfully. To answer this question let’s
make a comparison with popular web technology of today: what
AF_UNIX/FIFOs are to D-Bus, TCP is to HTTP/REST. While AF_UNIX
sockets/FIFOs only shovel raw bytes between processes, D-Bus defines
actual message encoding and adds concepts like method call
transactions, an object system, security mechanisms, multicasting and more.

From our 10+ years of experience with D-Bus we know today that while there
are some areas where we can improve things (and we are working on
that, both with kdbus and sd-bus), it generally appears to be a very
well designed system that stood the test of time, aged well and is
widely established. Today, if we’d sit down and design a completely
new IPC system incorporating all the experience and knowledge we
gained with D-Bus, I am sure the result would be very close to what
D-Bus already is.

Or in short: D-Bus is great. If you hack on a Linux project and need a
local IPC, it should be your first choice. Not only because D-Bus is
well designed, but also because there aren’t many alternatives that
can cover similar functionality.

Where does sd-bus fit in?

Let’s discuss why sd-bus exists, how it compares with the other
existing C D-Bus libraries and why it might be a library to consider
for your project.

For C, there are two established, popular D-Bus libraries: libdbus, as
it is shipped in the reference implementation of D-Bus, as well as
GDBus, a component of GLib, the low-level tool library of GNOME.

Of the two libdbus is the much older one, as it was written at the
time the specification was put together. The library was written with
a focus on being portable and to be useful as back-end for higher-level
language bindings. Both of these goals required the API to be very
generic, resulting in a relatively baroque, hard-to-use API that lacks
the bits that make it easy and fun to use from C. It provides the
building blocks, but few tools to actually make it straightforward to
build a house from them. On the other hand, the library is suitable
for most use-cases (for example, it is OOM-safe making it suitable for
writing lowest level system software), and is portable to operating
systems like Windows or more exotic UNIXes.

GDBus is a much newer implementation. It was written after considerable
experience with using a GLib/GObject wrapper around libdbus. GDBus is
implemented from scratch and shares no code with libdbus. Its design
differs substantially from libdbus: it contains code generators to
make it specifically easy to expose GObject objects on the bus, or to
talk to D-Bus objects as GObject objects. It translates D-Bus data
types to GVariant, which is GLib’s powerful data serialization
format. If you are used to GLib-style programming then you’ll feel
right at home, hacking D-Bus services and clients with it is a lot
simpler than using libdbus.

With sd-bus we now provide a third implementation, sharing no code
with either libdbus or GDBus. For us, the focus was on providing kind
of a middle ground between libdbus and GDBus: a low-level C library
that actually is fun to work with, that has enough syntactic sugar to
make it easy to write clients and services with, but on the other hand
is more low-level than GDBus/GLib/GObject/GVariant. To be able to use
it in systemd’s various system-level components it needed to be
OOM-safe and minimal. Another major point we wanted to focus on was
supporting a kdbus back-end right from the beginning, in addition to
the socket transport of the original D-Bus specification (“dbus1”). In
fact, we wanted to design the library closer to kdbus’ semantics than
to dbus1’s, wherever they are different, but still cover both
transports nicely. In contrast to libdbus or GDBus portability is not
a priority for sd-bus, instead we try to make the best of the Linux
platform and expose specific Linux concepts wherever that is
beneficial. Finally, performance was also an issue (though a secondary
one): neither libdbus nor GDBus will win any speed records. We wanted
to improve on performance (throughput and latency) — but simplicity
and correctness are more important to us. We believe the result of our
work delivers our goals quite nicely: the library is fun to use,
supports kdbus and sockets as back-ends, is relatively minimal, and
the performance is substantially better than both libdbus and GDBus.

To decide which of the three APIs to use for your C project, here are
short guidelines:

  • If you hack on a GLib/GObject project, GDBus is definitely your
    first choice.

  • If portability to non-Linux kernels — including Windows, Mac OS and
    other UNIXes — is important to you, use either GDBus (which more or
    less means buying into GLib/GObject) or libdbus (which requires a
    lot of manual work).

  • Otherwise, sd-bus would be my recommended choice.

(I am not covering C++ specifically here, this is all about plain C
only. But do note: if you use Qt, then QtDBus is the D-Bus API of
choice, being a wrapper around libdbus.)

Introduction to D-Bus Concepts

To the uninitiated D-Bus usually appears to be a relatively opaque
technology. It uses lots of concepts that appear unnecessarily complex
and redundant on first sight. But actually, they make a lot of
sense. Let’s have a look:

  • A bus is where you look for IPC services. There are usually two
    kinds of buses: a system bus, of which there’s exactly one per
    system, and which is where you’d look for system services; and a
    user bus, of which there’s one per user, and which is where you’d
    look for user services, like the address book service or the mail
    program. (Originally, the user bus was actually a session bus — so
    that you get multiple of them if you log in many times as the same
    user — and on most setups it still is, but we are working on
    moving things to a true user bus, of which there is only one per
    user on a system, regardless how many times that user happens to
    log in.)

  • A service is a program that offers some IPC API on a bus. A
    service is identified by a name in reverse domain name
    notation. Thus, the org.freedesktop.NetworkManager service on the
    system bus is where NetworkManager’s APIs are available and
    org.freedesktop.login1 on the system bus is where
    systemd-logind‘s APIs are exposed.

  • A client is a program that makes use of some IPC API on a bus. It
    talks to a service, monitors it and generally doesn’t provide any
    services on its own. That said, lines are blurry and many services
    are also clients to other services. Frequently the term peer is
    used as a generalization to refer to either a service or a client.

  • An object path is an identifier for an object on a specific
    service. In a way this is comparable to a C pointer, since that’s
    how you generally reference a C object, if you hack object-oriented
    programs in C. However, C pointers are just memory addresses, and
    passing memory addresses around to other processes would make
    little sense: since they refer to the address space of the
    service, the client couldn’t make sense of them. Thus, the D-Bus
    designers came up with the object path concept, which is just a
    string that looks like a file system path. Example:
    /org/freedesktop/login1 is the object path of the ‘manager’
    object of the org.freedesktop.login1 service (which, as we
    remember from above, is the service that systemd-logind
    exposes). Because object paths are structured like file system
    paths they can be neatly arranged in a tree, so that you end up
    with a tidy tree of objects. For example, you’ll find all user
    sessions systemd-logind manages below the
    /org/freedesktop/login1/session sub-tree, for example called
    /org/freedesktop/login1/session/_55 and so on. How services
    precisely label their objects and arrange them in a tree is
    completely up to the developers of the services.

  • Each object that is identified by an object path has one or more
    interfaces. An interface is a collection of signals, methods, and
    properties (collectively called members), that belong
    together. The concept of a D-Bus interface is actually pretty
    much identical to what you know from programming languages such as
    Java, which also know an interface concept. Which interfaces an
    object implements is up to the developers of the service. Interface
    names are in reverse domain name notation, much like service
    names. (Yes, that’s admittedly confusing, in particular since it’s
    pretty common for simpler services to reuse the service name string
    also as an interface name.) A couple of interfaces are standardized
    though and you’ll find them available on many of the objects
    offered by the various services. Specifically, those are
    org.freedesktop.DBus.Introspectable, org.freedesktop.DBus.Peer
    and org.freedesktop.DBus.Properties.

  • An interface can contain methods. The word “method” is more or
    less just a fancy word for “function”, and is a term used pretty
    much the same way in object-oriented languages such as Java. The
    most common interaction between D-Bus peers is that one peer
    invokes one of these methods on another peer and gets a reply. A
    D-Bus method takes a couple of parameters, and returns others. The
    parameters are transmitted in a type-safe way, and the type
    information is included in the introspection data you can query
    from each object. Usually, method names (and the other member
    types) follow a CamelCase syntax. For example, systemd-logind
    exposes an ActivateSession method on the
    org.freedesktop.login1.Manager interface that is available on the
    /org/freedesktop/login1 object of the org.freedesktop.login1
    service.

  • A signature describes a set of parameters a function (or signal,
    property, see below) takes or returns. It’s a series of characters
    that each encode one parameter by its type. The set of types
    available is pretty powerful. For example, there are simpler types
    like s for string, or u for 32bit integer, but also complex
    types such as as for an array of strings or a(sb) for an array
    of structures consisting of one string and one boolean each. See
    the D-Bus specification
    for the full explanation of the type system. The
    ActivateSession method mentioned above takes a single string as
    parameter (the parameter signature is hence s), and returns
    nothing (the return signature is hence the empty string). Of
    course, the signature can get a lot more complex, see below for
    more examples.

  • A signal is another member type that the D-Bus object system
    knows. Much like a method it has a signature. However, the two serve
    different purposes. While in a method call a single client issues a
    request on a single service, and that service sends back a response
    to the client, signals are for general notification of
    peers. Services send them out when they want to tell one or more
    peers on the bus that something happened or changed. In contrast to
    method calls and their replies they are hence usually broadcast
    over a bus. While method calls/replies are used for duplex
    one-to-one communication, signals are usually used for simplex
    one-to-many communication (note however that that’s not a
    requirement, they can also be used one-to-one). Example:
    systemd-logind broadcasts a SessionNew signal from its manager
    object each time a user logs in, and a SessionRemoved signal
    every time a user logs out.

  • A property is the third member type that the D-Bus object system
    knows. It’s similar to the property concept known by languages like
    C#. Properties also have a signature, and are more or less just
    variables that an object exposes, that can be read or altered by
    clients. Example: systemd-logind exposes a property Docked of
    the signature b (a boolean). It reflects whether systemd-logind
    thinks the system is currently in a docking station of some form
    (only applies to laptops …).

So much for the various concepts D-Bus knows. Of course, all these new
concepts might be overwhelming. Let’s look at them from a different
perspective. I assume many of the readers have an understanding of
today’s web technology, specifically HTTP and REST. Let’s try to
compare the concept of an HTTP request with the concept of a D-Bus
method call:

  • An HTTP request is issued on a specific network. It could be the
    Internet, or it could be your local LAN, or a company
    VPN. Depending on which network you issue the request on, you’ll be
    able to talk to a different set of servers. This is not unlike the
    “bus” concept of D-Bus.

  • On the network you then pick a specific HTTP server to talk
    to. That’s roughly comparable to picking a service on a specific bus.

  • On the HTTP server you then ask for a specific URL. The “path” part
    of the URL (by which I mean everything after the host name of the
    server, up to the last “/”) is pretty similar to a D-Bus object path.

  • The “file” part of the URL (by which I mean everything after the
    last slash, following the path, as described above), then defines
    the actual call to make. In D-Bus this could be mapped to an
    interface and method name.

  • Finally, the parameters of an HTTP call follow the path after the
    “?”, they map to the signature of the D-Bus call.

Of course, comparing an HTTP request to a D-Bus method call is a bit
like comparing apples and oranges. However, I think it’s still useful to
get a bit of a feeling of what maps to what.

From the shell

So much about the concepts and the gray theory behind them. Let’s make
this exciting, let’s actually see how this feels on a real system.

For a while now, systemd has included a tool called busctl that is
useful to explore and interact with the D-Bus object system. When
invoked without parameters, it will show you a list of all peers
connected to the system bus. (Use --user to see the peers of your
user bus instead.)

$ busctl
NAME                                       PID PROCESS         USER             CONNECTION    UNIT                      SESSION    DESCRIPTION
:1.1                                         1 systemd         root             :1.1          -                         -          -
:1.11                                      705 NetworkManager  root             :1.11         NetworkManager.service    -          -
:1.14                                      744 gdm             root             :1.14         gdm.service               -          -
:1.4                                       708 systemd-logind  root             :1.4          systemd-logind.service    -          -
:1.7200                                  17563 busctl          lennart          :1.7200       session-1.scope           1          -
org.freedesktop.NetworkManager             705 NetworkManager  root             :1.11         NetworkManager.service    -          -
org.freedesktop.login1                     708 systemd-logind  root             :1.4          systemd-logind.service    -          -
org.freedesktop.systemd1                     1 systemd         root             :1.1          -                         -          -
org.gnome.DisplayManager                   744 gdm             root             :1.14         gdm.service               -          -

(I have shortened the output a bit to keep things brief.)

The list begins with a list of all peers currently connected to the
bus. They are identified by peer names like “:1.11”. These are called
unique names in D-Bus nomenclature. Basically, every peer has a
unique name, and they are assigned automatically when a peer connects
to the bus. They are much like an IP address if you so will. You’ll
notice that a couple of peers are already connected, including our
little busctl tool itself as well as a number of system services. The
list then shows all actual services on the bus, identified by their
service names (as discussed above; to discern them from the unique
names these are also called well-known names). In many ways
well-known names are similar to DNS host names, i.e. they are a
friendlier way to reference a peer, but on the lower level they just
map to an IP address, or in this comparison the unique name. Much like
you can connect to a host on the Internet by either its host name or
its IP address, you can also connect to a bus peer either by its
unique or its well-known name. (Note that each peer can have as many
well-known names as it likes, much like an IP address can have
multiple host names referring to it).

OK, that’s already kinda cool. Try it for yourself, on your local
machine (all you need is a recent, systemd-based distribution).

Let’s now go the next step. Let’s see which objects the
org.freedesktop.login1 service actually offers:

$ busctl tree org.freedesktop.login1

└─/org/freedesktop/login1
  ├─/org/freedesktop/login1/seat
  │ ├─/org/freedesktop/login1/seat/seat0
  │ └─/org/freedesktop/login1/seat/self
  └─/org/freedesktop/login1/session
    ├─/org/freedesktop/login1/session/_31
    └─/org/freedesktop/login1/session/self

Pretty, isn’t it? What’s actually even nicer, and which the output
does not show, is that there’s full command line completion
available: as you press TAB the shell will auto-complete the service
names for you. It’s a real pleasure to explore your D-Bus objects
that way!
The output shows some objects that you might recognize from the
explanations above. Now, let’s go further. Let’s see what interfaces,
methods, signals and properties one of these objects actually exposes:

$ busctl introspect org.freedesktop.login1 /org/freedesktop/login1/session/_31
NAME                                TYPE      SIGNATURE RESULT/VALUE                             FLAGS
org.freedesktop.DBus.Introspectable interface -         -                                        -
.Introspect                         method    -         s                                        -
org.freedesktop.DBus.Peer           interface -         -                                        -
.GetMachineId                       method    -         s                                        -
.Ping                               method    -         -                                        -
org.freedesktop.DBus.Properties     interface -         -                                        -
.Get                                method    ss        v                                        -
.GetAll                             method    s         a{sv}                                    -
.Set                                method    ssv       -                                        -
.PropertiesChanged                  signal    sa{sv}as  -                                        -
org.freedesktop.login1.Session      interface -         -                                        -
.Activate                           method    -         -                                        -
.Kill                               method    si        -                                        -
.Lock                               method    -         -                                        -
.PauseDeviceComplete                method    uu        -                                        -
.ReleaseControl                     method    -         -                                        -
.ReleaseDevice                      method    uu        -                                        -
.SetIdleHint                        method    b         -                                        -
.TakeControl                        method    b         -                                        -
.TakeDevice                         method    uu        hb                                       -
.Terminate                          method    -         -                                        -
.Unlock                             method    -         -                                        -
.Active                             property  b         true                                     emits-change
.Audit                              property  u         1                                        const
.Class                              property  s         "user"                                   const
.Desktop                            property  s         ""                                       const
.Display                            property  s         ""                                       const
.Id                                 property  s         "1"                                      const
.IdleHint                           property  b         true                                     emits-change
.IdleSinceHint                      property  t         1434494624206001                         emits-change
.IdleSinceHintMonotonic             property  t         0                                        emits-change
.Leader                             property  u         762                                      const
.Name                               property  s         "lennart"                                const
.Remote                             property  b         false                                    const
.RemoteHost                         property  s         ""                                       const
.RemoteUser                         property  s         ""                                       const
.Scope                              property  s         "session-1.scope"                        const
.Seat                               property  (so)      "seat0" "/org/freedesktop/login1/seat... const
.Service                            property  s         "gdm-autologin"                          const
.State                              property  s         "active"                                 -
.TTY                                property  s         "/dev/tty1"                              const
.Timestamp                          property  t         1434494630344367                         const
.TimestampMonotonic                 property  t         34814579                                 const
.Type                               property  s         "x11"                                    const
.User                               property  (uo)      1000 "/org/freedesktop/login1/user/_1... const
.VTNr                               property  u         1                                        const
.Lock                               signal    -         -                                        -
.PauseDevice                        signal    uus       -                                        -
.ResumeDevice                       signal    uuh       -                                        -
.Unlock                             signal    -         -                                        -

As before, the busctl command supports command line completion, hence
both the service name and the object path used are easily put together
on the shell simply by pressing TAB. The output shows the methods,
properties, signals of one of the session objects that are currently
made available by systemd-logind. There’s a section for each
interface the object knows. The second column tells you what kind of
member is shown in the line. The third column shows the signature of
the member. In case of method calls that’s the input parameters, the
fourth column shows what is returned. For properties, the fourth
column encodes the current value of them.

So far, we just explored. Let’s take the next step now: let’s become
active – let’s call a method:

# busctl call org.freedesktop.login1 /org/freedesktop/login1/session/_31 org.freedesktop.login1.Session Lock

I don’t think I need to mention this anymore, but anyway: again
there’s full command line completion available. The third argument is
the interface name, the fourth the method name, both can be easily
completed by pressing TAB. In this case we picked the Lock method,
which activates the screen lock for the specific session. And yupp,
the instant I pressed enter on this line my screen lock turned on
(this only works on DEs that correctly hook into systemd-logind.
GNOME works fine, and KDE should work too).

The Lock method call we picked is very simple, as it takes no
parameters and returns none. Of course, it can get more complicated
for some calls. Here’s another example, this time using one of
systemd’s own bus calls, to start an arbitrary system unit:

# busctl call org.freedesktop.systemd1 /org/freedesktop/systemd1 org.freedesktop.systemd1.Manager StartUnit ss "cups.service" "replace"
o "/org/freedesktop/systemd1/job/42684"

This call takes two strings as input parameters, as we denote in the
signature string that follows the method name (as usual, command line
completion helps you get this right). Following the signature, the
next two parameters are simply the two strings to pass. The specified
signature string hence indicates what comes next. systemd’s StartUnit
method call takes the unit name to start as first parameter, and the
mode in which to start it as second. The call returned a single object
path value. It is encoded the same way as the input parameter: a
signature (just o for the object path) followed by the actual value.

Of course, some method call parameters can get a ton more complex, but
with busctl it’s relatively easy to encode them all. See the man
page for details.

busctl knows a number of other operations. For example, you can use
it to monitor D-Bus traffic as it happens (including generating a
.cap file for use with Wireshark!) or you can set or get specific
properties. However, this blog story was supposed to be about sd-bus,
not busctl, hence let’s cut this short here, and let me direct you
to the man page in case you want to know more about the tool.

busctl (like the rest of systemd) is implemented using the sd-bus
API. Thus it exposes many of the features of sd-bus itself. For
example, you can use it to connect to remote or container buses. It
understands both kdbus and classic D-Bus, and more!


But enough! Let’s get back on topic, let’s talk about sd-bus itself.

The sd-bus set of APIs is mostly contained in the header file
sd-bus.h.
Here’s a random selection of features of the library that make it
compare well with the other implementations available:

  • Supports both kdbus and dbus1 as back-end.

  • Has high-level support for connecting to remote buses via ssh, and
    to buses of local OS containers.

  • Powerful credential model, to implement authentication of clients
    in services. Currently 34 individual fields are supported, from the
    PID of the client to the cgroup or capability sets.

  • Support for tracking the life-cycle of peers in order to release
    local objects automatically when all peers referencing them have
    disconnected.

  • The client builds an efficient decision tree to determine which
    handlers to deliver an incoming bus message to.

  • Automatically translates D-Bus errors into UNIX style errors and
    back (this is lossy though), to ensure best integration of D-Bus
    into low-level Linux programs.

  • Powerful but lightweight object model for exposing local objects on
    the bus. Automatically generates introspection as necessary.

The API is currently not fully documented, but we are working on
completing the set of manual pages. For details
see all pages starting with sd_bus_.

Invoking a Method, from C, with sd-bus

So much about the library in general. Here’s an example for connecting
to the bus and issuing a method call:

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <systemd/sd-bus.h>

int main(int argc, char *argv[]) {
        sd_bus_error error = SD_BUS_ERROR_NULL;
        sd_bus_message *m = NULL;
        sd_bus *bus = NULL;
        const char *path;
        int r;

        /* Connect to the system bus */
        r = sd_bus_open_system(&bus);
        if (r < 0) {
                fprintf(stderr, "Failed to connect to system bus: %s\n", strerror(-r));
                goto finish;
        }

        /* Issue the method call and store the response message in m */
        r = sd_bus_call_method(bus,
                               "org.freedesktop.systemd1",           /* service to contact */
                               "/org/freedesktop/systemd1",          /* object path */
                               "org.freedesktop.systemd1.Manager",   /* interface name */
                               "StartUnit",                          /* method name */
                               &error,                               /* object to return error in */
                               &m,                                   /* return message on success */
                               "ss",                                 /* input signature */
                               "cups.service",                       /* first argument */
                               "replace");                           /* second argument */
        if (r < 0) {
                fprintf(stderr, "Failed to issue method call: %s\n", error.message);
                goto finish;
        }

        /* Parse the response message */
        r = sd_bus_message_read(m, "o", &path);
        if (r < 0) {
                fprintf(stderr, "Failed to parse response message: %s\n", strerror(-r));
                goto finish;
        }

        printf("Queued service job as %s.\n", path);

finish:
        sd_bus_error_free(&error);
        sd_bus_message_unref(m);
        sd_bus_unref(bus);

        return r < 0 ? EXIT_FAILURE : EXIT_SUCCESS;
}

Save this example as bus-client.c, then build it with:

$ gcc bus-client.c -o bus-client `pkg-config --cflags --libs libsystemd`

This will generate a binary bus-client you can now run. Make sure to
run it as root though, since access to the StartUnit method is
privileged:
# ./bus-client
Queued service job as /org/freedesktop/systemd1/job/3586.

And that’s it already, our first example. It showed how we invoked a
method call on the bus. The actual function call of the method is very
close to the busctl command line we used before. I hope the code
excerpt needs little further explanation. It’s supposed to give you a
taste of how to write D-Bus clients with sd-bus. For more
information please have a look at the header file, the man page or
even the sd-bus sources.

Implementing a Service, in C, with sd-bus

Of course, just calling a single method is a rather simplistic
example. Let’s have a look at how to write a bus service. We’ll write
a small calculator service that exposes a single object, which
implements an interface that exposes two methods: one to multiply two
64bit signed integers, and one to divide one 64bit signed integer by
another:
#include <errno.h>
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <systemd/sd-bus.h>

static int method_multiply(sd_bus_message *m, void *userdata, sd_bus_error *ret_error) {
        int64_t x, y;
        int r;

        /* Read the parameters */
        r = sd_bus_message_read(m, "xx", &x, &y);
        if (r < 0) {
                fprintf(stderr, "Failed to parse parameters: %s\n", strerror(-r));
                return r;
        }

        /* Reply with the response */
        return sd_bus_reply_method_return(m, "x", x * y);
}

static int method_divide(sd_bus_message *m, void *userdata, sd_bus_error *ret_error) {
        int64_t x, y;
        int r;

        /* Read the parameters */
        r = sd_bus_message_read(m, "xx", &x, &y);
        if (r < 0) {
                fprintf(stderr, "Failed to parse parameters: %s\n", strerror(-r));
                return r;
        }

        /* Return an error on division by zero */
        if (y == 0) {
                sd_bus_error_set_const(ret_error, "net.poettering.DivisionByZero", "Sorry, can't allow division by zero.");
                return -EINVAL;
        }

        return sd_bus_reply_method_return(m, "x", x / y);
}

/* The vtable of our little object, implements the net.poettering.Calculator interface */
static const sd_bus_vtable calculator_vtable[] = {
        SD_BUS_VTABLE_START(0),
        SD_BUS_METHOD("Multiply", "xx", "x", method_multiply, SD_BUS_VTABLE_UNPRIVILEGED),
        SD_BUS_METHOD("Divide",   "xx", "x", method_divide,   SD_BUS_VTABLE_UNPRIVILEGED),
        SD_BUS_VTABLE_END
};

int main(int argc, char *argv[]) {
        sd_bus_slot *slot = NULL;
        sd_bus *bus = NULL;
        int r;

        /* Connect to the user bus this time */
        r = sd_bus_open_user(&bus);
        if (r < 0) {
                fprintf(stderr, "Failed to connect to user bus: %s\n", strerror(-r));
                goto finish;
        }

        /* Install the object */
        r = sd_bus_add_object_vtable(bus,
                                     &slot,
                                     "/net/poettering/Calculator",  /* object path */
                                     "net.poettering.Calculator",   /* interface name */
                                     calculator_vtable,
                                     NULL);
        if (r < 0) {
                fprintf(stderr, "Failed to install object: %s\n", strerror(-r));
                goto finish;
        }

        /* Take a well-known service name so that clients can find us */
        r = sd_bus_request_name(bus, "net.poettering.Calculator", 0);
        if (r < 0) {
                fprintf(stderr, "Failed to acquire service name: %s\n", strerror(-r));
                goto finish;
        }

        for (;;) {
                /* Process requests */
                r = sd_bus_process(bus, NULL);
                if (r < 0) {
                        fprintf(stderr, "Failed to process bus: %s\n", strerror(-r));
                        goto finish;
                }
                if (r > 0) /* we processed a request, try to process another one, right-away */
                        continue;

                /* Wait for the next request to process */
                r = sd_bus_wait(bus, (uint64_t) -1);
                if (r < 0) {
                        fprintf(stderr, "Failed to wait on bus: %s\n", strerror(-r));
                        goto finish;
                }
        }

finish:
        sd_bus_slot_unref(slot);
        sd_bus_unref(bus);

        return r < 0 ? EXIT_FAILURE : EXIT_SUCCESS;
}

Save this example as bus-service.c, then build it with:

$ gcc bus-service.c -o bus-service `pkg-config --cflags --libs libsystemd`

Now, let’s run it:

$ ./bus-service

In another terminal, let’s try to talk to it. Note that this service
is now on the user bus, not on the system bus as before. We do this
for simplicity reasons: on the system bus access to services is
tightly controlled so unprivileged clients cannot request privileged
operations. On the user bus, however, things are simpler: as only
processes of the user owning the bus can connect, no further policy
enforcement will complicate this example. Because the service is on
the user bus, we have to pass the --user switch on the busctl
command line. Let’s start with looking at the service’s object tree.

$ busctl --user tree net.poettering.Calculator

└─/net/poettering/Calculator

As we can see, there’s only a single object on the service, which is
not surprising, given that our code above only registered one. Let’s
see the interfaces and the members this object exposes:

$ busctl --user introspect net.poettering.Calculator /net/poettering/Calculator
NAME                                TYPE      SIGNATURE RESULT/VALUE FLAGS
net.poettering.Calculator           interface -         -            -
.Divide                             method    xx        x            -
.Multiply                           method    xx        x            -
org.freedesktop.DBus.Introspectable interface -         -            -
.Introspect                         method    -         s            -
org.freedesktop.DBus.Peer           interface -         -            -
.GetMachineId                       method    -         s            -
.Ping                               method    -         -            -
org.freedesktop.DBus.Properties     interface -         -            -
.Get                                method    ss        v            -
.GetAll                             method    s         a{sv}        -
.Set                                method    ssv       -            -
.PropertiesChanged                  signal    sa{sv}as  -            -

The sd-bus library automatically added a couple of generic interfaces,
as mentioned above. But the first interface we see is actually the one
we added! It shows our two methods, and both take “xx” (two 64bit
signed integers) as input parameters, and return one “x”. Great! But
does it work?

$ busctl --user call net.poettering.Calculator /net/poettering/Calculator net.poettering.Calculator Multiply xx 5 7
x 35

Woohoo! We passed the two integers 5 and 7, and the service actually
multiplied them for us and returned a single integer 35! Let’s try the
other method:

$ busctl --user call net.poettering.Calculator /net/poettering/Calculator net.poettering.Calculator Divide xx 99 17
x 5

Oh, wow! It can even do integer division! Fantastic! But let’s trick
it into dividing by zero:

$ busctl --user call net.poettering.Calculator /net/poettering/Calculator net.poettering.Calculator Divide xx 43 0
Sorry, can't allow division by zero.

Nice! It detected this nicely and returned a clean error about it. If
you look in the source code example above you’ll see how precisely we
generated the error.

And that’s really all I have for today. Of course, the examples I
showed are short, and I don’t get into detail here on what precisely
each line does. However, this is supposed to be a short introduction
into D-Bus and sd-bus, and it’s already way too long for that …

I hope this blog story was useful to you. If you are interested in
using sd-bus for your own programs, I hope this gets you started. If
you have further questions, check the (incomplete) man pages, and
inquire us on IRC or the systemd mailing list. If you need more
examples, have a look at the systemd source tree, all of systemd’s
many bus services use sd-bus extensively.

Darknet - The Darkside: Just-Metadata – Gathers & Analyse IP Address Metadata

This post was syndicated from: Darknet - The Darkside and was written by: Darknet. Original post: at Darknet - The Darkside

Just-Metadata is a tool that can be used to passively gather metadata about a large number of IP addresses, and attempt to extrapolate relationships that might not otherwise be seen. Just-Metadata has “gather” modules which are used to gather metadata about IPs loaded into the framework across multiple resources on the…

Read the full post at

SANS Internet Storm Center, InfoCON: green: How much is your IPv4 Space Worth, (Wed, Jun 10th)

This post was syndicated from: SANS Internet Storm Center, InfoCON: green and was written by: SANS Internet Storm Center, InfoCON: green. Original post: at SANS Internet Storm Center, InfoCON: green

Thanks to Rob for reminding me of IPv4 auction websites again. I looked at them a couple of years ago, but there was very little real activity at the time. Looks like that has changed now. ARIN is essentially out of IPv4 space, and very restrictive in handing out any additional addresses. It has gotten very hard, if not impossible, to obtain a larger block of IPv4 space. So it is no surprise that markets for IPv4 space are coming up.

These markets are not in line with registry policies [1]. If someone receives an IP address assignment, they don’t technically own the addresses. Once they are no longer needed, they are supposed to be returned to ARIN to be handed to the next applicant in line. But there has been little enforcement, and there have always been grey areas. For example, a company may buy another company, and in the process obtain access to that company’s IP address space. Later, assets other than the IP address space could be sold off, leaving the buyer with the rights to the IP address space.

Here are some of the sites offering IP address space (I am not endorsing them, and have no idea how real they are):

– Currently three offers for space up to a /20 at $7-$10 per address. There are a couple of bids.
– There are a number of auctions with IP addresses for sale and for rent. Looks like they are going for about the same price as the addresses at [2]
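At those prices, the market value of a block is simple arithmetic: a /20 contains 2^(32-20) = 4,096 addresses, so at $7–$10 each it would fetch roughly $29,000–$41,000. A quick sketch (the function name is just for illustration):

```python
# Market value of an IPv4 block at a given per-address price.
def block_value(prefix_len, price_per_address):
    addresses = 2 ** (32 - prefix_len)  # a /20 holds 2**12 = 4096 addresses
    return addresses * price_per_address

print(block_value(20, 7))   # 28672
print(block_value(20, 10))  # 40960
```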

Some sites have done so in the past but have already shut down. In other cases, the nanog mailing list was used to offer IP address space, or IP addresses were purchased as part of bankruptcy auctions [3].


Johannes B. Ullrich, Ph.D.

(c) SANS Internet Storm Center. Creative Commons Attribution-Noncommercial 3.0 United States License.

TorrentFreak: Kim Dotcom’s MegaNet Preps Jan 2016 Crowdfunding Campaign

This post was syndicated from: TorrentFreak and was written by: Andy. Original post: at TorrentFreak

For many years Kim Dotcom was associated with a crazy lifestyle, but these days he prefers to be seen more as a family man.

Regularly posting pictures of his children on Twitter and playing down his wild past, Dotcom seems unlikely to entertain a recent request from Pirate Bay founder Peter Sunde to join him on the Gumball Rally.

But while yachts and fast cars might be a thing of the past, Dotcom has certainly not lost the fire in his belly when it comes to his current predicament. As he fights off a ravenous U.S. government determined to bring him to justice by any means possible, spying included, the Megaupload founder has positioned himself as a champion of Internet privacy.

On January 19, 2013, Dotcom marked the anniversary of the raid on his empire by launching a privacy-focused cloud-storage service. Next year on the same date, the tenacious German says he will deliver again.

Thus far, details are thin on the ground, but what we do know is that Dotcom is planning a new anti-censorship network he calls MegaNet.

“How would you like a new Internet that can’t be controlled, censored or destroyed by Governments or Corporations?” Dotcom teased in February.

MegaNet’s precise mechanism is yet to be revealed, but Dotcom has already stated that the network will be non-IP address based and that blockchain technology will play an important role.

What we also know is that users’ mobile phones will play a crucial role, although at launch other devices will participate in the network.

“All your mobile phones become an encrypted network,” Dotcom notes. “You’d be surprised how much idle storage & bandwidth capacity mobile phones have. MegaNet will turn that idle capacity into a new network.”

At this stage it appears that Dotcom envisions a totally decentralized system, an essential quality if he is to deliver on his claims of absolute privacy.

With the earlier promise that participants in MegaNet “become the MegaNet”, Dotcom’s announcement this morning that the project will seek monetary contributions from the masses seems entirely fitting.

“MegaNet details will be revealed and equity will be available via crowd funding on 20 Jan 2016, the fourth anniversary of the raid [on Dotcom and Megaupload],” Dotcom confirmed.

And for now, that is all. Dotcom has become something of an expert at dripping small details to the masses as and when he sees fit while allowing the media to fill in the blanks. It’s an effective strategy which provides millions in free advertising for close to zero marketing outlay.

The big question now is how much equity MegaNet will need to get off the ground and how many of Dotcom’s supporters will believe that privacy is a commodity worth supporting with their wallets. People were happy to support Peter Sunde’s project on the same premise, but as recently revealed, the amount of cash required to compete can be considerable.

However, Dotcom probably won’t attempt this entirely on his own. Given his history there’s a significant chance that the entrepreneur will pull in heavyweights such as Julian Assange and Glenn Greenwald to support the campaign. That will definitely help to boost the coffers.

Update: Kim Dotcom has sent TorrentFreak additional details on how MegaNet will operate.

“MegaNet has a unique file crystallization and recreation protocol utilizing the blockchain. You can load entire websites with this new technology and it makes them immune to almost all hacker attacks and ddos,” Dotcom informs TF.

“In the beginning MegaNet will still utilize the current Internet as a dumb pipe but in 10 years it will run exclusively on smartphones with hopefully over 500 million users carrying the network.

“A network by the people for the people. Not controlled by any government or corporations. MegaNet will be a powerful tool to guard our privacy and freedoms and it will also be my legacy,” Dotcom concludes.

On the finance front, MegaNet will partner with and Max Keiser to raise capital.

Source: TorrentFreak, for the latest info on copyright, file-sharing, torrent sites and anonymous VPN services.

TorrentFreak: Court Orders Cloudflare to Disconnect ‘New’ Grooveshark

This post was syndicated from: TorrentFreak and was written by: Ernesto. Original post: at TorrentFreak

Last month the long-running lawsuit between the RIAA and Grooveshark came to an end. However, within days a new site was launched aiming to take its place.

The RIAA wasn’t happy with this development and quickly obtained a restraining order, preventing domain registrars and hosting companies from offering their services to the site.

This was somewhat effective, as Namecheap quickly suspended the original domain name. However, not all parties were as cooperative.

Popular CDN-service CloudFlare refused to take action on the basis that it is not “aiding and abetting” piracy. The RIAA disagreed and asked New York District Court Judge Alison Nathan to rule on the matter.

In an order (pdf) just published, Judge Nathan agrees with the music group.

CloudFlare argued that they were not bound to the restraining order since they were not in “active concert or participation” due to the automated nature of its services. In addition, the company countered that even if it disconnected Grooveshark, the site would still be accessible.

In her order Nathan notes that she finds neither argument persuasive. The fact that CloudFlare is aware of the infringements and provides services that help the public easily access the infringing site shows otherwise.

“Connecting internet users to in this manner benefits Defendants and quite fundamentally assists them in violating the injunction because, without it, users would not be able to connect to Defendants’ site unless they knew the specific IP address for the site,” Judge Nathan writes.

“Beyond the authoritative domain name server, CloudFlare also provides additional services that it describes as improving the performance of the site,” she adds.

The argument that the ‘new’ Grooveshark will still be around after CloudFlare suspends the account was found to be irrelevant. A third-party can still be bound by a restraining order even if terminating its services doesn’t render a website inaccessible.

“… just because another third party could aid and abet the Defendants in violating the injunction does not mean that CloudFlare is not doing so,” the order reads.

Finally, the Judge agrees that there may be other services that are not covered by the order. However, in this case CloudFlare is directly facilitating Grooveshark, with specific knowledge of the accounts that are responsible.

For CloudFlare the ruling comes as a disappointment, opening the door for a slew of similar requests. The CDN has several of the largest pirate sites as clients, including The Pirate Bay, which is now a relatively easy target.

At the time of writing is no longer accessible, suggesting that CloudFlare has already complied with the order.

Source: TorrentFreak, for the latest info on copyright, file-sharing, torrent sites and anonymous VPN services.

SANS Internet Storm Center, InfoCON: green: Guest Diary: Xavier Mertens – Playing with IP Reputation with Dshield & OSSEC, (Tue, Jun 2nd)

This post was syndicated from: SANS Internet Storm Center, InfoCON: green and was written by: SANS Internet Storm Center, InfoCON: green. Original post: at SANS Internet Storm Center, InfoCON: green


When investigating incidents or searching for malicious activity in your logs, IP reputation is a nice way to increase the reliability of generated alerts. It can help to prioritize incidents. Let’s take an example with a WordPress blog. It will, sooner or later, be targeted by a brute-force attack on the default /wp-admin page. In this case, IP reputation can be helpful: an attack performed from an IP address reported as actively scanning the Internet will not (or will less) attract my attention. On the contrary, if the same kind of attack is coming from an unknown IP address, this could be more suspicious…

By using a reputation system, our monitoring tool can tag an IP address with a label like “reported as malicious” based on a repository. The real value of this repository depends directly on the value of the collected information. I’m a big fan of dshield.org, a free service provided by the SANS Internet Storm Center. The service works thanks to the data submitted by many people across the Internet. For years, I have also been pushing my firewall logs to dshield.org from my OSSEC server; I wrote a tool to achieve this: ossec2dshield. By contributing to the system, it’s now time to get some benefits from my participation: I’m re-using the database to automatically check the reputation of the IP addresses attacking me. We come full circle!

To achieve this, let’s use the API available on isc.sans.edu and the OSSEC feature called Active-Response, which allows a script to be triggered upon a set of conditions. In this example, we call the reputation script with our attacker’s address for any alert with a level >= 6.

(Check the Active-Response documentation for details.)

The ISC API can be used to query information about an IP address. The returned results are:

{"ip":{"abusecontact":"unknown","number":"","country":"FR","as":"12876","asname":"AS12876 ONLINE S.A.S.,FR","network":"\/16","comment":null}}

The most interesting fields are:

count – the number of times the IP address has been reported as an attacker
attacks – the number of targeted IP addresses
mindate – the first report
maxdate – the last report
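A minimal sketch of pulling those fields out of the API’s JSON reply. The sample below mirrors the response format shown above; the numbers are illustrative (borrowed from the example later in this diary), not live API output:

```python
import json

# Parse an ISC IP-lookup reply shaped like the response shown above.
sample = json.loads("""
{"ip": {"count": 148, "attacks": 97,
        "mindate": "2015-04-21", "maxdate": "2015-05-27",
        "country": "FR", "as": 12876,
        "asname": "AS12876 ONLINE S.A.S.,FR", "comment": null}}
""")

info = sample["ip"]
reports    = info["count"]    # times this IP was reported as an attacker
targets    = info["attacks"]  # number of targeted IP addresses
first_seen = info["mindate"]  # first report
last_seen  = info["maxdate"]  # last report
print(reports, targets, first_seen, last_seen)  # 148 97 2015-04-21 2015-05-27
```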

The script can be used from the command line or from an OSSEC Active-Response configuration block. To reduce the requests against the API, a SQLite database is created and populated with a local copy of the data. Existing IP addresses will be checked again after a specified TTL (time-to-live), by default 5 days. Data are also dumped in a flat file or to Syslog for further processing by another tool. Here is an example of an entry:

$ tail -f /var/log/ipreputation.log
[2015-05-27 23:30:07,769] DEBUG No data found, fetching from ISC
[2015-05-27 23:30:07,770] DEBUG Using proxy:
[2015-05-27 23:30:07,772] DEBUG Using user-agent: isc-ipreputation/1.0 (
[2015-05-27 23:30:09,760] DEBUG No data found, fetching from ISC
[2015-05-27 23:30:09,761] DEBUG Using proxy:
[2015-05-27 23:30:09,762] DEBUG Using user-agent: isc-ipreputation/1.0 (
[2015-05-27 23:30:10,138] DEBUG Saving

[2015-05-27 23:30:10,145] INFO IP=, AS=6848(TELENET-AS Telenet N.V.,BE), Network=, Country=BE, Count=148, AttackedIP=97, Trend=0, FirstSeen=2015-04-21, LastSeen=2015-05-27, Updated=2015-05-27 18:37:15

In this example, you can see that this IP address started to attack on the 21st of April. It was reported 148 times while attacking 97 different IP addresses (this IP is certainly part of a botnet).
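The TTL-based local cache described above can be sketched in a few lines; the table layout and function names here are illustrative, not the actual isc-ipreputation schema:

```python
import sqlite3, time

TTL_DAYS = 5  # default time-to-live mentioned above

# Illustrative cache table; the real script's schema may differ.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE cache (ip TEXT PRIMARY KEY, count INTEGER, updated REAL)")

def lookup(ip, fetch):
    """Return the report count for ip, calling `fetch` only after the TTL expires."""
    row = db.execute("SELECT count, updated FROM cache WHERE ip = ?", (ip,)).fetchone()
    now = time.time()
    if row and now - row[1] < TTL_DAYS * 86400:
        return row[0]                       # fresh enough: use the local copy
    count = fetch(ip)                       # stale or unknown: ask the ISC API
    db.execute("INSERT OR REPLACE INTO cache VALUES (?, ?, ?)", (ip, count, now))
    return count

calls = []
def fake_fetch(ip):                         # stand-in for the real API call
    calls.append(ip)
    return 148

print(lookup("192.0.2.1", fake_fetch))      # 148 (fetched)
print(lookup("192.0.2.1", fake_fetch))      # 148 (served from the local cache)
print(len(calls))                           # 1  (only one "API" request made)
```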

The script can be configured with a YAML configuration file (default: /etc/isc-ipreputation.conf) which is very easy to understand:


debug: yes

path: /data/ossec/logs/isc-ipreputation.db
exclude-ip: 192\.168\..*|172\.16\..*|10\..*|fe80:.*
ttl-days: 5
user-agent: isc-ipreputation/1.0 (
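The exclude-ip value is an ordinary regular expression; a quick check of what the pattern from the configuration above filters out:

```python
import re

# The exclude-ip pattern from the configuration above. Note it covers the
# 10.0.0.0/8 and 192.168.0.0/16 private ranges plus IPv6 link-local, but
# only 172.16.x.x rather than the full 172.16.0.0/12 block.
exclude = re.compile(r"192\.168\..*|172\.16\..*|10\..*|fe80:.*")

print(bool(exclude.match("192.168.1.10")))  # True  (private, never queried)
print(bool(exclude.match("fe80::1")))       # True
print(bool(exclude.match("172.17.0.1")))    # False (outside the pattern)
print(bool(exclude.match("8.8.8.8")))       # False (public, will be checked)
```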
Finally, the SQLite database can be used to get interesting statistics. For example, to get the top 10 suspicious IP addresses that attacked me (and their associated countries):

$ sqlite3 isc-ipreputation.db
SQLite version 3.8.2 2013-12-06 14:53:30
Enter ".help" for instructions
Enter SQL statements terminated with a ";"


It is also very easy to generate dynamic lists of IP addresses (or CDB lists, as used by OSSEC). The following command will generate a CDB list with my top 10 malicious IP addresses:

$ sqlite3 isc-ipreputation.db … | while read IP
do
  echo $IP:Suspicious
done > /data/ossec/lists/bad-ips
$ cat /data/ossec/lists/bad-ips
$ ossec-makelists
* File lists/bad-ips.cdb needs to be updated
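Assuming an illustrative table layout (the real isc-ipreputation schema may differ), the CDB source list can also be produced with a short script:

```python
import sqlite3

# Illustrative schema and data: the real isc-ipreputation layout may differ.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE ip (address TEXT, count INTEGER)")
db.executemany("INSERT INTO ip VALUES (?, ?)",
               [("198.51.100.7", 148), ("203.0.113.9", 42), ("192.0.2.1", 7)])

# Top attackers, one "ip:Suspicious" line each, ready for ossec-makelists.
top = db.execute("SELECT address FROM ip ORDER BY count DESC LIMIT 2").fetchall()
lines = ["%s:Suspicious" % addr for (addr,) in top]
print("\n".join(lines))
```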

Based on this list, you can add more granularity to your alerts by correlating the attacks with the CDB list. Note that dshield.org also proposes a recommended block list. A few months ago, Richard Porter showed how to integrate one of them in a Palo Alto Networks firewall. This is a great resource, but I think that both are complementary.

The script is available in my GitHub repository.

“If the enemy leaves a door open, you must rush in.”

(c) SANS Internet Storm Center. Creative Commons Attribution-Noncommercial 3.0 United States License.

TorrentFreak: Hola VPN Already Exploited By “Bad Guys”, Security Firm Says

This post was syndicated from: TorrentFreak and was written by: Andy. Original post: at TorrentFreak

After a flurry of reports, last week the people behind geo-unblocking software Hola were forced to concede that their users’ bandwidth is being sold elsewhere for commercial purposes. But for the Israel-based company, that was the tip of the iceberg.

Following an initial unproven report that the software operates as a botnet, this weekend researchers published an advisory confirming serious problems with the tool.

“The Hola Unblocker Windows client, Firefox addon, Chrome extension and Android application contain multiple vulnerabilities which allow a remote or local attacker to gain code execution and potentially escalate privileges on a user’s system,” the advisory reads.

Yesterday, after several days of intense pressure, Hola published a response in which it quoted Steve Jobs and admitted that mistakes had been made. Hola said that it would now be making it “completely clear” to its users that their resources are being used elsewhere in exchange for a free product.

Hola also confirmed that two vulnerabilities found by the researchers at Adios-Hola had now been fixed, but the researchers quickly fired back.

“We know this to be false,” they wrote in an update. “The vulnerabilities are *still* there, they just broke our vulnerability checker and exploit demonstration. Not only that; there weren’t two vulnerabilities, there were six.”

With Hola saying it now intends to put things right (it says it has committed to an external audit with “one of the big 4 auditing companies”) the company stood by its claims that its software does not turn users’ computers into a botnet. Today, however, an analysis by cybersecurity firm Vectra is painting Hola in an even more unfavorable light.

In its report Vectra not only insists that Hola behaves like a botnet, but it’s possible it has malicious features by design.

“While analyzing Hola, Vectra Threat Labs researchers found that in addition to behaving like a botnet, Hola contains a variety of capabilities that almost appear to be designed to enable a targeted, human-driven cyber attack on the network in which an Hola user’s machine resides,” the company writes.

“First, the Hola software can download and install any additional software without the user’s knowledge. This is because in addition to being signed with a valid code-signing certificate, once Hola has been installed, the software installs its own code-signing certificate on the user’s system.”

If the implications of that aren’t entirely clear, Vectra assists on that front too. On Windows machines, the certificate is added to the Trusted Publishers Certificate Store which allows *any code* to be installed and run with no notification given to the user. That is frightening.

Furthermore, Vectra found that Hola contains a built-in console (“zconsole”) that is not only constantly active but also has powerful functions including the ability to kill running processes, download a file and run it whilst bypassing anti-virus software, plus read and write content to any IP address or device.

“These capabilities enable a competent attacker to accomplish almost anything. This shifts the discussion away from a leaky and unscrupulous anonymity network, and instead forces us to acknowledge the possibility that an attacker could easily use Hola as a platform to launch a targeted attack within any network containing the Hola software,” Vectra says.

Finally, Vectra says that while analyzing the protocol used by Hola, its researchers found five different malware samples on VirusTotal that contain the Hola protocol. Worryingly, they existed before the recent bad press.

“Unsurprisingly, this means that bad guys had realized the potential of Hola before the recent flurry of public reports by the good guys,” the company adds.

For now, Hola is making a big show of the updates being made to its FAQ as part of its efforts to be more transparent. However, items in the FAQ are still phrased in a manner that portrays criticized elements of the service as positive features, something that is likely to mislead non-tech oriented users.

“Since [Hola] uses real peers to route your traffic and not proxy servers, it makes you more anonymous and more secure than regular VPN services,” one item reads.

How Hola will respond to Vectra’s latest analysis remains to be seen, but at this point there appears little that the company can say or do to pacify much of the hardcore tech community. That being said, if Joe Public still can’t see the harm in a free “community” VPN operating a commercial division with full access to his computer, Hola might settle for that.

Source: TorrentFreak, for the latest info on copyright, file-sharing, torrent sites and anonymous VPN services.

TorrentFreak: Court Orders Cox to Expose “Most Egregious” BitTorrent Pirates

This post was syndicated from: TorrentFreak and was written by: Ernesto. Original post: at TorrentFreak

Last year BMG Rights Management and Round Hill Music sued Cox Communications, arguing that the ISP fails to terminate the accounts of repeat infringers.

The companies, which control the publishing rights to songs by Katy Perry, The Beatles and David Bowie among others, claim that Cox has given up its DMCA safe harbor protections due to this inaction.

The case revolves around the “repeat infringer” clause of the DMCA, which prescribes that Internet providers must terminate the accounts of persistent pirates.

As part of the discovery process the music outfits requested details on the accounts which they caught downloading their content. In total there are 150,000 alleged pirates, but as a compromise BMG and Round Hill limited their initial request to 500 IP-addresses.

Cox refused to hand over the requested information arguing that the Cable Privacy Act prevents the company from disclosing this information.

The matter was discussed during a court hearing late last week. After a brief deliberation Judge John Anderson ruled that the ISP must hand over the personal details of 250 subscribers.

“Defendants shall produce customer information associated with the Top 250 IP Addresses recorded to have infringed in the six months prior to filing the Complaint,” Judge Anderson writes.

“This production shall include the information as requested in Interrogatory No.13, specifically: name, address, account number, the bandwidth speed associated with each account, and associated IP address of each customer.”


The music companies also asked for the account information of the top 250 IP-addresses connected to the piracy of their files after the complaint was filed, but this request was denied. Similarly, if the copyright holders want information on any of the 149,500 other Cox customers they need a separate order.

The music companies previously informed the court that the personal details are crucial to prove their direct infringement claims, but it’s unclear how they plan to use the data.

While none of the Cox customers are facing any direct claims as of yet, it’s not unthinkable that some may be named in the suit to increase pressure on the ISP.

The full list of IP-addresses is available for download here (PDF).

Source: TorrentFreak, for the latest info on copyright, file-sharing, torrent sites and anonymous VPN services.

Schneier on Security: USPS Tracking Queries to Its Package Tracking Website

This post was syndicated from: Schneier on Security and was written by: schneier. Original post: at Schneier on Security

A man was arrested for drug dealing based on the IP address he used while querying the USPS package tracking website.

SANS Internet Storm Center, InfoCON: green: Upatre/Dyre malspam – Subject: eFax message from “unknown”, (Wed, May 20th)

This post was syndicated from: SANS Internet Storm Center, InfoCON: green and was written by: SANS Internet Storm Center, InfoCON: green. Original post: at SANS Internet Storm Center, InfoCON: green


Yesterday on 2015-05-19, I attended a meeting from my local chapter of the Information Systems Security Association (ISSA). During the meeting, one of the speakers discussed different levels of incident response by Security Operations Center (SOC) personnel. For non-targeted issues like botnet-based malicious spam (malspam) infecting a Windows host, you probably won’t waste valuable time investigating every little detail. In most cases, you’ll probably start the process to re-image the infected computer and move on. Other suspicious events await, and they might reveal a more serious, targeted threat.

However, we still recover information about these malspam campaigns. Traffic patterns evolve, and changes should be documented.

Today’s example of malspam

Searching through my employer’s blocked spam filters, I found the following Upatre/Dyre wave of malspam:

  • Date/Time: 2015-05-19 from 12:00 AM to 5:47 AM CST
  • Number of messages: 20
  • Sender (spoofed):
  • Subject: eFax message from “unknown”

    As shown in the above image, these messages were tailored for the recipients. You’ll also notice some of the recipient email addresses contain random characters and numbers. Nothing new here. It’s just one of the many waves of malspam our filters block every day. I reported a similar wave earlier this month [1].

    The attachment is a typical example of Upatre, much like we’ve seen before. Let’s see what this malware does in a controlled environment.

    Indicators of compromise (IOC)

    I ran the malware on a physical host and generated the following traffic:

    • 2015-05-19 15:16:12 UTC – port 80 – – GET /
    • 2015-05-19 15:16:13 UTC – port 13410 – SYN packet to server, no response
    • 2015-05-19 15:16:16 UTC – port 443 – two SYN packets to server, no response
    • 2015-05-19 15:16:58 UTC – port 443 – two SYN packets to server, no response
    • 2015-05-19 15:17:40 UTC – port 443 – SSL traffic – approx 510 KB sent from server to infected host
    • 2015-05-19 15:17:56 UTC – port 3478 – UDP STUN traffic to:
    • 2015-05-19 15:17:58 UTC – port 443 – SSL traffic – approx 256 KB sent from server to infected host
    • 2015-05-19 15:18:40 UTC – port 13409 – SYN packet to server, no response

    In my last post about Upatre/Dyre, we saw Upatre-style HTTP GET requests to the IP check service but no HTTP response from the server [1]. That’s been the case for quite some time now.
    Shown above: Attempted TCP connections to the same IP address now reset (RST) by the server

    How can we tell this is Upatre?

    As I’ve mentioned before, this is a service run by one of my fellow Rackspace employees [2]. By itself, it’s not malicious. Unfortunately, malware authors use this and similar services to check an infected computer’s IP address.

    What alerts trigger on this traffic?

    Related files on the infected host include:

    • C:\Users\username\AppData\Local\PwTwUwWTWcqBhWG.exe (Dyre)
    • C:\Users\username\AppData\Local\ne9bzef6m8.dll
    • C:\Users\username\AppData\Local\Temp\~TP95D5.tmp (encrypted or otherwise obfuscated)
    • C:\Users\username\AppData\Local\Temp\Jinhoteb.exe (where Upatre copied itself after it was run)

    Some Windows registry changes for persistence:

    • Key name: HKEY_CURRENT_USER\Software\Microsoft\Windows\CurrentVersion\Run
    • Key name: HKEY_USERS\S-1-5-21-52162474-342682794-3533990878-1000\Software\Microsoft\Windows\CurrentVersion\Run
    • Value name: GoogleUpdate
    • Value type: REG_SZ
    • Value data: C:\Users\username\AppData\Local\PwTwUwWTWcqBhWG.exe

    A pcap of the infection traffic is available at:

    A zip file of the associated Upatre/Dyre malware is available at:

    The zip file is password-protected with the standard password. If you don’t know it, email and ask.

    Final words

    This was yet another wave of Upatre/Dyre malspam. No real surprises, but it’s always interesting to note the small changes from these campaigns.

    Brad Duncan, Security Researcher at Rackspace
    Blog: – Twitter: @malware_traffic



    (c) SANS Internet Storm Center. Creative Commons Attribution-Noncommercial 3.0 United States License.

Linux How-Tos and Linux Tutorials: Install Linux on a Modern WiFi Router: Linksys WRT1900AC and OpenWrt

This post was syndicated from: Linux How-Tos and Linux Tutorials and was written by: Ben Martin. Original post: at Linux How-Tos and Linux Tutorials


The Linksys WRT1900AC is a top-end modern router that gets even sweeter when you unleash Linux on it and install OpenWrt. OpenWrt includes the opkg package management system, giving you easy access to a great deal of additional open source software to use on your router. If you want the pleasure of SSH access on your router, the ability to use iptables on connections passing through it, and the ability to run various small servers on it, the Linksys WRT1900AC and OpenWrt are a powerful combination.

From a hardware perspective, the Linksys WRT1900AC supports simultaneous dual-band operation: 802.11n (2.4 GHz) at up to 600 Mbps and 802.11ac (5 GHz) at up to 1.3 Gbps. This lets you connect your older devices to 802.11n while newer hardware can take advantage of the faster, less congested 802.11ac signal.

The router has a dual-core Marvell Armada 370/XP CPU with 256 MB of RAM and 128 MB of flash storage. You can also attach more storage to the WRT1900AC using its USB 3.0 and eSATA ports. When using OpenWrt you might also like to attach a webcam and printer to the router. The Linksys WRT1900AC has a 4 port gigabit switch and a gigabit upstream WAN port.

Initial setup

The stock firmware that comes with the Linksys WRT1900AC uses a very simple four-step procedure for initial setup. I only partially followed the recommended setup steps.

Step 1: Connect the antennae and power.

Step 2: Connect your upstream “Internet” link to the appropriate port on the router.

Step 3: Connect to the wifi signal from the router. You are given a custom wireless network name and password which appear to be set differently for each individual router. This step nicely removes the security vulnerability inherent in initial router setup, because your router will have a custom password right from the first time you power it on.

Step 4: Log in to and set up the router.

Instead of directly connecting to the Internet port, I used one of the 4 gigabit switch ports to attach the router to the local LAN. This meant the website at step 4 did not work for me. I could create an account on the smartwifi site, but it wanted me to be connected through the wifi router in order to adjust the settings.

You can, however, set up the router without needing to use any remote websites. The Linksys will appear at its default address, and connecting a laptop to the wifi router and manually forcing the laptop’s IP address onto the same subnet allowed me to access the router configuration page. At that stage the Connectivity/Local Network page lets you set the IP address of the router to something that will fit into your LAN in a non-conflicting manner (and on the subnet you are using) and also disable the DHCP server if you already have one configured.

The initial screen I got when I was connecting directly using again wanted to take me off to a remote website, though you can click through to avoid having to do that if you want.

I tried to attach a 120 GB SanDisk Extreme SSD to test eSATA storage. Unfortunately ext4 is not a supported filesystem for External Storage in the stock firmware. It could see /dev/sda1 but reported 0 KB used of 0 KB total space. Using a 16 GB flash pen drive formatted with a FAT filesystem was fine; the FTP service was started and the drive showed up as a Samba share, too.

Switching over to OpenWrt

At the time of writing, the options for installing OpenWrt on the device were changing. There were four images, offering Linux kernel version 3.18 or 4.0 and some level of extra fixes and updates depending on the image you chose. I used Kaloz’s evolving snapshots of trunk linked at openwrt_wrt1900ac_snapshot.img.

Flashing the image onto the router is very simple as you use the same web interface that is used to manually install standard firmware updates. The fun, and moments of anxiety that occur after the router reboots are familiar to anyone who has ever flashed a device.

When the router reboots you will not have any wifi signals at all from it. The router will come up at its default IP address. The easiest method to talk to the router is to use a laptop and force its ethernet interface to an address on the same subnet. Using a trunk distribution of OpenWrt you are likely not to have a useful web interface on the router; visiting it will likely show an empty web server with no files.

When falling back to an SSH or network login to the router, another little surprise awaits. Trying to SSH into the router showed that a connection was possible, but SSH does not allow login with an empty password, and OpenWrt sets the default password to nothing, so connection seemed impossible. The saving grace is that telnet is also running on the router, and after installing the telnet client on the laptop I could log in without any password. Gaining access to the router again was a tremendous relief.

In the telnet session you can use the passwd command to set a password and then you should be able to login using SSH. I opted to test the SSH login while the telnet session was still active so that I had a fallback in case login failed for some reason.

To make the web interface operational you will have to install the LuCI package. The commands below will do that for you. If you need to use a proxy to get to the Internet, the http_proxy, https_proxy, and ftp_proxy environment variables will be of use. Again you might run into a little obstacle here: with the router on its default subnet, it might not be able to talk with your existing network if that network is on a different, commonly used subnet. I found that manually forcing the IP address to a 192.168.0.X address using ifconfig on br-lan changed the address for the bridged ports and everything moved to that subnet. This is not a permanent change, so if it doesn’t work, rebooting the router gets you back to the default address again. It is easy to change this for good using LuCI once you have that installed.

export http_proxy=
opkg update
opkg install luci
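The temporary readdressing described above amounts to a single command on the router (the address is an example; pick a free one on your own subnet):

```shell
# On the router: move the br-lan bridge onto the laptop's subnet.
# This is not persisted; a reboot restores the previous address.
ifconfig br-lan 192.168.0.2 netmask 255.255.255.0
```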

Once you have LuCI installed, the rest of the router setup becomes point and click by visiting the web server on your router. To enable WiFi, go to the Network/Wifi page, which gives you access to the two radios: one for 2.4 GHz and one for the newer 5 GHz 802.11ac standard. Each radio is disabled by default. Oddly, after clicking Edit for a radio and scrolling down to the Interface Configuration and its Wireless Security tab, I found the default security set to "No Encryption." I would have thought WPA2-PSK a better default choice. So getting a radio up and running involved setting an ESSID, checking the Mode (I used Access Point), changing the Wireless Security to something other than nothing, and setting a password.

Many of the additional features you might install with opkg also have a LuCI support package available. For example, if you want to run a DLNA server on the Linksys WRT1900AC the minidlna package is available, and a luci-app-minidlna package will let you manage the server right from the LuCI web interface.

opkg install minidlna
opkg install luci-app-minidlna

Although the Linksys WRT1900AC has 128 MB of flash storage, it is broken up into many smaller partitions. The core /overlay partition had a size of only 24.6 MB with /tmp/syscfg being another 30 MB partition of which only around 300 KB was being used. While this provides plenty of space to install precompiled software, there isn’t enough space to install gcc onto the Linksys WRT1900AC/OpenWrt installation. I have a post up asking if there is a simple method to use more of the flash on the Linksys WRT1900AC from the OpenWrt file system. Another method to gain more space on an OpenWrt installation is to use an extroot, where the main system is stored on external storage. Perhaps with the Linksys WRT1900AC this could be a partition on an eSATA SSD.

If you don’t want to use extroot right away, another approach is to use another ARM machine that is running a soft floating point distribution to compile static binaries. Those can be transferred over using rsync to the OpenWrt installation on the Linksys WRT1900AC. An ARM machine is either using soft or hard floating point, and generally everything is compiled to work with one or the other. To see which version of floating point your hardware is expecting you can use the readelf tool to sniff at a few existing binaries as shown below. Note the soft-float ABI line in the output.

root@linksys1900ac:~# readelf -a /bin/bash|grep ABI
  OS/ABI:                            UNIX - System V
  ABI Version:                       0
  Flags:                             0x5000202, has entry point, Version5 EABI, soft-float ABI

I tried to get push-button WPS setup to work from OpenWrt without success. I had used that feature under the standard firmware, so the hardware is capable of it, and it makes connecting new devices to the router much simpler.

I also noticed that there are serial TTL headers on the Linksys WRT1900AC, and a forum post shows a method to reflash the firmware directly from U-Boot. I haven't tried this out, but it is nice to have as a possible last-ditch method to resurrect a device with non-functioning firmware.

Another useful thing is to set up users other than root on the OpenWrt installation, so that there is less risk of interrupting normal router activity. You might like to install the shadow utilities and sudo in order to do this, as shown below:

  root@wrt1900ac:/dev# opkg install sudo
  root@wrt1900ac:/dev# opkg install shadow-useradd shadow-groupadd
  root@wrt1900ac:/dev# useradd -m ben
  root@wrt1900ac:/dev# sudo -u ben bash

I found that the fan came on when the Linksys WRT1900AC was booting into OpenWrt, and it turned off again soon after. Temperature readings are available using the sensors command, as shown below.

root@wrt1900ac:~# sensors 
Adapter: mv64xxx_i2c adapter
ddr:          +52.8 C  
wifi:         +55.1 C  
Adapter: Virtual device
cpu:          +61.7 C  


Using an LG G3 phone with Android 5, the Wifi Network Analyzer app indicated a speed of 433 Mbps with the phone about a foot from the router. That speed dropped to around 200 Mbps when I moved several rooms away. The results were the same with the stock firmware and the OpenWrt image.

Running iperf (2.0.5) between the OpenWrt installation and a mid-2012 MacBook Air gave a bandwidth of 120 Mbps. The same client and server going through a DLink DIR-855 at a similar distance on 5 GHz gave only 82 Mbps. Unfortunately the Mac only has wifi-n, as wifi-ac was added in the following year's model.

The LG G3 running Android 5, connected to the wifi-ac network and using the iperf app, could reach 102 Mbps. These tests were run by starting the server with '-s' and the client with '-c server-ip-address'. The server, running on the Linksys WRT1900AC/OpenWrt machine, chose a default TCP window size of 85 KB for these runs. Playing with window sizes, I could get about 10 percent additional speed on the G3 without too much effort.
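For reference, the test setup amounts to the following commands (the server address is a placeholder):

```shell
# On the router: start the iperf server
iperf -s

# On the client: run a test, then experiment with a larger TCP window
iperf -c 192.168.1.1
iperf -c 192.168.1.1 -w 256k
```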

I connected a 120 GB SanDisk Extreme SSD to test the eSATA performance. For sequential IO, Bonnie++ could write about 89 MB/s, read 148 MB/s, and rewrite blocks at about 55 MB/s. Overall, 5,200 seeks/s were achieved. This compares well for read and rewrite with the eSATA on the CuBox, which got 150 MB/s and 50 MB/s respectively. The CuBox could write at 120 MB/s, which is about 35 percent faster than the Linksys WRT1900AC. The same ext4 filesystem was used on both machines; the drive was simply moved from one to the other.

bonnie++ -n 0 -f -m Linksys1900ac -d `pwd` 

OpenSSL performance for digests was in a similar ballpark to the BeagleBone Black and CuBox i4Pro. For ciphers the story was very different depending on the algorithm: DES and AES-256 were considerably slower than on other ARM machines, whereas Blowfish and CAST ran at similar speeds to many other ARM CPUs. For 1,024-bit RSA signatures, the Linksys WRT1900AC delivered around 25-30 percent of the performance of the more budget ARM CPUs.

[Chart: OpenSSL digest performance on the Linksys router]

[Chart: OpenSSL cipher performance on the Linksys router]

[Chart: 1,024-bit RSA signing performance on the Linksys router]

Final Thoughts

It is great to see that LuCI gives easy access to the router features and even has "app" packages that let you configure some of the additional software you might like to install on your OpenWrt device. OpenWrt images for the Linksys WRT1900AC are a relatively recent development; once a recommended stable image with LuCI included is released, it should ease some of the tense moments that reflashing can present at the moment. The 177+ pages on the OpenWrt forum for the Linksys WRT1900AC are a testament to the community interest in running OpenWrt on the device.

It is wonderful to see the powerful hardware of the Linksys WRT1900AC able to run OpenWrt. The pairing of Linux/FLOSS with contemporary hardware lets you customize the device to fit your needs. You can not only SSH in, but rsync is ready for you, and your programming language of choice can be installed for those little programs that you want available all the time but don't want to leave a full machine running for. Some applications also work particularly well on the router itself, for example packet filtering: a single policy on the router can block tablets and phones from connecting to your work machines.
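As a sketch of that kind of policy, a single forwarding rule on the router could look like this (both subnets are made-up examples; on OpenWrt you would normally make such a rule permanent through /etc/config/firewall rather than raw iptables):

```shell
# Drop forwarded traffic from the phone/tablet subnet to the work subnet
iptables -A FORWARD -s 192.168.2.0/24 -d 192.168.1.0/24 -j DROP
```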

We would like to thank Linksys for providing the WRT1900AC hardware used in this article.

The Hacker Factor Blog: Email Delivery Errors

This post was syndicated from: The Hacker Factor Blog and was written by: The Hacker Factor Blog. Original post: at The Hacker Factor Blog

Email seems like a necessary evil. While I dislike spam, I like the non-immediate nature of the communication and the fact that messages can queue up (for better or for worse). And best of all, I can script actions for handling emails. If the email matches certain signatures, then the script can mark it as spam. If it comes from certain colleagues, then the script can mark it as urgent. In this regard, I think email is better than most communication methods.

Other forms of communication have their niche, but they also have their limitations. For example:

  • Phone. If the phone is there for my convenience, then why do I have to drop everything to answer it? (Dropping everything is not convenient.) And I have never had an answering machine show me the subject of a call before listening to it. Most answering machines require you to listen to messages in the order they were received.

  • Chat rooms. Does anyone still use IRC or Jabber/XMPP? Real-time chat rooms are good if everyone is online at the same time. But if we’re all online and working together on a project, then it is just as easy to do a conference call on the phone, via Skype, or using any of those other VoIP protocols. Then again, most chat rooms do have ways to log conversations — which can be great for documenting progress.
  • Twitter. You rarely see details in 140 characters. It’s also hard to go back and see previous messages. And if you are following lots of people (or a few prolific people), then you might miss something important. (I view Twitter like the UDP of social communications… it’s alright to miss a few packets.)
  • Text messages. These are almost as bad as Twitter. At least with Twitter, I’m not charged per message.
  • Message boards. Whether it’s forum software, comments on a blog, or a private wall page at Facebook, message boards are everywhere. You can set topics, have threaded replies, etc. However, messages are restricted to members. If I am not a member of the message board, then I cannot leave you a message. (Message boards without membership requirements are either moderated or flooded with spam.) And there may be no easy way for someone to search or recall previous discussions.
  • Private messages. LinkedIn, Facebook, Flickr, Imgur, Reddit… Most services have ways to send private messages between members. This is fine if everyone you know uses those services. But messages are limited to the service.

In contrast, email permits large messages to be sent in a timely manner to people who use different services. If I cannot get to the message immediately, then it will sit in my mailbox — I will get to it when it is convenient. I can use my home email system to write to friends, regardless of whether they use Gmail, Yahoo, or Facebook. There are even email-to-anything and anything-to-email crossover systems, like If-this-then-that. Even Google Voice happily sends me email when someone leaves a message. (Google Voice also tries to translate the voice mail to text in the email. I know it’s from my brother when Google writes, “Unable to transcribe this message.”)

Clear Notifications

As automated tasks go, it is very common to have email sent as a notification. My RAID emails me monthly with the current status (“everything’s shiny!”) When one of my Linux servers had a memory failure, it emailed me.

Over at FotoForensics, I built an internal messaging system. As an administrator, I get notices about certain system events. I’ve even linked these messages with email — administrators get email when something important is queued up and needs a response. This really helps simplify maintenance — I usually get an email from the system every few days.

When users submit comments, I get a message. And I’ve designed the system to allow me to respond to the user via email. (This is why the comment form asks for an email address.) For the FotoForensics Lab service, I even configured a double-opt-in system so users can request accounts without my assistance.
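The heart of a double-opt-in flow is an unguessable token that is mailed out and must come back before the account is created. A minimal shell sketch of the token step (the file name and address are invented; this is not FotoForensics' actual implementation):

```shell
# Generate a random, unguessable 32-character confirmation token
token=$(openssl rand -hex 16)

# Record which address the token was issued to; the account is only
# created when a request comes back carrying this exact token
echo "$token user@example.com" >> pending-confirmations.txt

echo "confirmation token: $token"
```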

And therein lies a problem… The easier it is to send messages, the easier it is to abuse it with spam. Over the decades, people have employed layers upon layers of spam detectors and heuristics to mitigate abuse.

With all of the layers of anti-spam crap that people use, creating a system that can send a status email or a double-opt-in message to anyone who requests contact can get complicated. It’s not as simple as calling a PHP function to send an email. In my experience, the PHP mail() function will succeed less than half of the time; usually the PHP mail() messages get discarded by spam filters.

Enabling Email

Even though my system works most of the time, I still have to fight with it occasionally to make sure that users receive responses to inquiries. Some of the battles I have fought so far:

  • Blacklists. Before you begin, make sure that your network address is not on any blacklists. If your network address was previously used by a spammer, then you’ve inherited a blacklisted address and nobody will receive your emails. Getting removed from blacklists ranges from difficult to impossible. And as long as your system is blacklisted, most people will not receive your emails.

  • Scripts. Lots of spammers use scripts. If you use a pre-packaged script to generate outgoing email, then it is likely to be identified as spam. This happens because different tools generate different signatures. If your tool matches the profile of a tool known to send spam, then it will be filtered. And chances are really good that spammers have already abused any pre-packaged scripts for sending spam.
  • Real mail. The email protocols (SMTP and ESMTP) are pretty straightforward. However, most scripts that send email only do the bare minimum. In particular, they usually don’t handle email errors very well. I ended up using a PHP script that communicates with my real mail server (Postfix). The Postfix server properly delivers email and handles errors correctly. I’ve configured my Postfix server to send email, but it never receives email. (Incoming email goes to a different mail server.)

At this point — with no blacklists, custom scripts, and a real outgoing email server — I was able to send email replies to about half of the people who requested service information. (Replying to people who fill out the contact form or who request a Lab account.) However, I still could not send email to anyone using Gmail, AOL, Microsoft Outlook, etc.

  • SPF. By itself, email is unauthenticated; anyone can send email as anyone. There are a handful of anti-spam additions to email that attempt to authenticate the sender. One of the most common is SPF, the Sender Policy Framework. This is a DNS record (TXT field) that lists the network addresses allowed to send email on behalf of a domain. If the recipient's server sees that the sender does not match the SPF record, the message can be immediately discarded as spam.

    Many professional email services require an SPF record. Without it, they will assume that the email is unauthenticated and from a spammer. Enabling SPF approached the 90 percent deliverable mark: email could be delivered to Gmail, but not to AOL or anyone using the Microsoft Outlook service.

  • Reverse hostnames. When emailing users at AOL, the AOL server would respond with a cryptic error message:

    521 5.2.1 : AOL will not accept delivery of this message.

    This is not one of AOL’s documented error codes. It took a lot of research, but I finally discovered that this is related to the reverse network address. Both AOL and Microsoft require the sender’s reverse hostname to resolve to the sender’s domain name. (Or in the case of AOL, it can resolve to anything except an IP address. If a lookup of your network address returns a hostname with the network address in it, then AOL will reject the email.) If you have a residential service (like Comcast or Verizon), then the reverse DNS lookup will not be permitted — you cannot send email to AOL directly from most residential ISPs. Fortunately, my hosting provider for FotoForensics was able to set my reverse DNS so I could send email from the FotoForensics server.

  • Microsoft. With everything else done, I could send email to all users except those who use the Microsoft Outlook service. The error message Microsoft returns says (with recipient information redacted):
    <recipient@recipient.domain>: host [213.x.x.x] said: 550 5.7.1
    Service unavailable; Client host [65.x.x.x] blocked using FBLW15; To
    request removal from this list please forward this message to (in reply to RCPT TO command)

    This cryptic warning is Microsoft’s way of saying that I need to contact them first and get permission to email their users.

    In my experience, writing in to ask permission gets you nowhere. Most services won't answer the phone, ignore emails about delivery issues, and won't help you at all. With Microsoft, however, I had no choice; they didn't give me any other way to contact them.

    With nothing left to lose, I bounced the entire email, including the error message, original message, and headers, to Microsoft. I was actually amazed when I received an automated email with a trouble ticket number telling me to wait 24 hours. I was even more amazed when, after 10 hours, I received confirmation that the block had been removed. I resent the FotoForensics contact form reply to the user… and it was delivered.
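Both of the DNS-side requirements above, the SPF record and the reverse hostname, can be checked from the command line. A sketch with a placeholder domain and a documentation address:

```shell
# Does the sending domain publish an SPF record?
dig +short TXT example.com

# What hostname does the sending address reverse to?
dig +short -x 203.0.113.10
# If this returns a name embedding the address itself (for example
# 203-0-113-10.static.isp.net), AOL-style filters will reject the mail
```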

While I am thrilled to see that my server can now send replies to requests at every major service, I certainly hope other services do not adopt the Microsoft method. If my server needs to send replies to users at 100 different domains, then I do not want to spend time contacting each domain first and begging for permission to contact their users.

(Fortunately, this worked. If writing to Microsoft had not worked, then I was prepared to detect email addresses that use Outlook as a service and just blacklist them. “Please use a different email service since your provider will not accept email from us.”)

The dog ate it

While email is a convenient form of communication, I still have no idea whether I’ve fixed all of the delivery issues. Many sites will silently drop email rather than sending back a delivery error notice. Although I believe my outgoing email system now works with Gmail, Microsoft, Yahoo, AOL, and most other providers, the message may still be filtered somewhere down the line. (Email lacks a reliable delivery confirmation system. Hacks like web bugs and return receipts are unsupported by many email services.) It’s very possible for a long reply to never reach the recipient, and I’ll never know it.

Currently, the site sends about a half-dozen emails per day (max). These are responses to removal and unban requests, replies to comments, and double-opt-in messages (you requested an account; click on this link to confirm and create the account). I honestly never see a future when I will use email to promote new services or features. (Having spent decades tracking down spammers and developing anti-spam solutions, I cannot see myself joining the dark side.)

Of course, email is not the only option for communication. I’ve just started learning about WebRTC and HTML5 — I want to be able to give online training sessions and host voice calls via the web browser.

Schneier on Security: More on the NSA’s Capabilities

This post was syndicated from: Schneier on Security and was written by: schneier. Original post: at Schneier on Security

Ross Anderson summarizes a meeting in Princeton where Edward Snowden was “present.”

Third, the leaks give us a clear view of an intelligence analyst’s workflow. She will mainly look in Xkeyscore which is the Google of 5eyes comint; it’s a federated system hoovering up masses of stuff not just from 5eyes own assets but from other countries where the NSA cooperates or pays for access. Data are “ingested” into a vast rolling buffer; an analyst can run a federated search, using a selector (such as an IP address) or fingerprint (something that can be matched against the traffic). There are other such systems: “Dancing oasis” is the middle eastern version. Some xkeyscore assets are actually compromised third-party systems; there are multiple cases of rooted SMS servers that are queried in place and the results exfiltrated. Others involve vast infrastructure, like Tempora. If data in Xkeyscore are marked as of interest, they’re moved to Pinwale to be memorialised for 5+ years. This is one function of the MDRs (massive data repositories, now more tactfully renamed mission data repositories) like Utah. At present storage is behind ingestion. Xkeyscore buffer times just depend on volumes and what storage they managed to install, plus what they manage to filter out.

As for crypto capabilities, a lot of stuff is decrypted automatically on ingest (e.g. using a “stolen cert,” presumably a private key obtained through hacking). Else the analyst sends the ciphertext to CES and they either decrypt it or say they can’t. There’s no evidence of a “wow” cryptanalysis; it was key theft, or an implant, or a predicted RNG or supply-chain interference. Cryptanalysis has been seen of RC4, but not of elliptic curve crypto, and there’s no sign of exploits against other commonly used algorithms. Of course, the vendors of some products have been coopted, notably skype. Homegrown crypto is routinely problematic, but properly implemented crypto keeps the agency out; gpg ciphertexts with RSA 1024 were returned as fails.


What else might we learn from the disclosures when designing and implementing crypto? Well, read the disclosures and use your brain. Why did GCHQ bother stealing all the SIM card keys for Iceland from Gemalto, unless they have access to the local GSM radio links? Just look at the roof panels on US or UK embassies, that look like concrete but are actually transparent to RF. So when designing a protocol ask yourself whether a local listener is a serious consideration.


On the policy front, one of the eye-openers was the scale of intelligence sharing — it’s not just 5 eyes, but 15 or 35 or even 65 once you count all the countries sharing stuff with the NSA. So how does governance work? Quite simply, the NSA doesn’t care about policy. Their OGC has 100 lawyers whose job is to “enable the mission”; to figure out loopholes or new interpretations of the law that let stuff get done. How do you restrain this? Could you use courts in other countries, that have stronger human-rights law? The precedents are not encouraging. New Zealand’s GCSB was sharing intel with Bangladesh agencies while the NZ government was investigating them for human-rights abuses. Ramstein in Germany is involved in all the drone killings, as fibre is needed to keep latency down low enough for remote vehicle pilots. The problem is that the intelligence agencies figure out ways to shield the authorities from culpability, and this should not happen.


The spooks’ lawyers play games saying for example that they dumped content, but if you know IP address and file size you often have it; and IP address is a good enough pseudonym for most intel / LE use. They deny that they outsource to do legal arbitrage (e.g. NSA spies on Brits and GCHQ returns the favour by spying on Americans). Are they telling the truth? In theory there will be an MOU between NSA and the partner agency stipulating respect for each others’ laws, but there can be caveats, such as a classified version which says “this is not a binding legal document.” The sad fact is that law and legislators are losing the capability to hold people in the intelligence world to account, and also losing the appetite for it.

Worth reading in full.

Krebs on Security: Who’s Scanning Your Network? (A: Everyone)

This post was syndicated from: Krebs on Security and was written by: BrianKrebs. Original post: at Krebs on Security

Not long ago I heard from a reader who wanted advice on how to stop someone from scanning his home network, or at least recommendations about to whom he should report the person doing the scanning. I couldn’t believe that people actually still cared about scanning, and I told him as much: These days there are countless entities — some benign and research-oriented, and some less benign — that are continuously mapping and cataloging virtually every device that’s put online.

One of the more benign is a data repository of research findings collected through continuous scans of the public Internet. The project, hosted by the ZMap Team at the University of Michigan, includes huge, regularly updated results grouped around scanning for Internet hosts running some of the most commonly used “ports” or network entryways, such as Port 443 (think Web sites protected by the lock icon denoting SSL/TLS Web site encryption); Port 21, or file transfer protocol (FTP); and Port 25, or simple mail transfer protocol (SMTP), used by many businesses to send email.
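You can reproduce a tiny slice of such a survey yourself with any port scanner, against a host you control; for example (the hostname is a placeholder):

```shell
# Probe the same well-known ports mentioned above on a host you own
nmap -p 21,25,443 myhost.example.com
```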

When I was first getting my feet wet on the security beat roughly 15 years ago, the practice of scanning networks you didn’t own looking for the virtual equivalent of open doors and windows was still fairly frowned upon — if not grounds to get one into legal trouble. These days, complaining about being scanned is about as useful as griping that the top of your home is viewable via Google Earth. Putting devices on the Internet and then hoping that someone or something won’t find them is one of the most futile exercises in security-by-obscurity.

To get a gut check on this, I spoke at length last week with University of Michigan researchers Michael D. Bailey (MB) and Zakir Durumeric (ZD) about their ongoing and very public project to scan all the Internet-facing things. I was curious to get their perspective on how public perception of widespread Internet scanning has changed over the years, and how targeted scanning can actually lead to beneficial results for Internet users as a whole.

MB: Because of the historic bias against scanning and this debate between disclosure and security-by-obscurity, we’ve approached this very carefully. We certainly think that the benefits of publishing this information are huge, and that we’re just scratching the surface of what we can learn from it.

ZD: Yes, there are close to two dozen papers published now based on broad, Internet-wide scanning. People who are more focused on comprehensive scans tend to be the more serious publications that are trying to do statistical or large-scale analyses that are complete, versus just finding devices on the Internet. It’s really been in the last year that we’ve started ramping up and adding scans [to the site] more frequently.

BK: What are your short- and long-term goals with this project?

ZD: I think long-term we do want to add coverage of additional protocols. A lot of what we’re focused on is different aspects of a protocol. For example, if you’re looking at hosts running the “https://” protocol, there are many different ways you can ask questions depending on what perspective you come from. You see different attributes and behavior. So a lot of what we’ve done has revolved around https, which is of course hot right now within the research community.

MB: I’m excited to add other protocols. There are a handful of protocols that are critical to the operation of the Internet, and I’m very interested in understanding the deployment of DNS, BGP, and TLS’s interaction with SMTP. Right now, there’s a pretty long tail to all of these protocols, and so that’s where it starts to get interesting. We’d like to start looking at things like programmable logic controllers (PLCs) and things that are responding from industrial control systems.

ZD: One of the things we’re trying to pay more attention to is the world of embedded devices, or this ‘Internet of Things’ phenomenon. As Michael said, there are also industrial protocols, and there are different protocols that these embedded devices are supporting, and I think we’ll continue to add protocols around that class of devices as well because from a security perspective it’s incredibly interesting which devices are popping up on the Internet.

BK: What are some of the things you’ve found in your aggregate scanning results that surprised you?

ZD: I think one thing in the “https://” world that really popped out was we have this very large certificate authority ecosystem, and a lot of the attention is focused on a small number of authorities, but actually there is this very long tail — there are hundreds of certificate authorities that we don’t really think about on a daily basis, but that still have permission to sign for any Web site. That’s something we didn’t necessarily expect. We knew there were a lot, but we didn’t really know what would come up until we looked at those.

There also was work we did a couple of years ago on cryptographic keys and how those are shared between devices. In one example, primes were being shared between RSA keys, and because of this we were able to factor a large number of keys, but we really wouldn’t have seen that unless we started to dig into that aspect [their research paper on this is available here].

MB: One of the things we’ve been surprised about is that when we measure these things at scale, in a way that hasn’t been done before, these kinds of emergent behaviors often become clear.

BK: Talk about what you hope to do with all this data.

ZD: We were involved a lot in the analysis of the Heartbleed vulnerability. And one of the surprising developments there wasn’t that there were lots of people vulnerable, but it was interesting to see who patched, how and how quickly. What we were able to find was by taking the data from these scans and actually doing vulnerability notifications to everybody, we were able to increase patching for the Heartbleed bug by 50 percent. So there was an interesting kind of surprise there, not what you learn from looking at the data, but in terms of what actions do you take from that analysis? And that’s something we’re incredibly interested in: Which is how can we spur progress within the community to improve security, whether that be through vulnerability notification, or helping with configurations.

BK: How do you know your notifications helped speed up patching?

MB: With the Heartbleed vulnerability, we took the known vulnerable population from scans, and ran an A/B test. We split the population that was vulnerable in half and notified one half of the population, while not notifying the other half, and then measured the difference in patching rates between the two populations. We did end up after a week notifying the second population…the other half.

BK: How many people did you notify after going through the data from the Heartbleed vulnerability scanning? 

ZD: We took everyone on the IPv4 address space, found those that were vulnerable, and then contacted the registered abuse contact for each block of IP space. We used data from 200,000 hosts, which corresponded to 4,600 abuse contacts, and then we split those into an A/B test. [Their research on this testing was published here].

So, that’s the other thing that’s really exciting about this data. Notification is one thing, but the other is we’ve been building models that are predictive of organizational behavior. So, if you can watch, for example, how an organization runs their Web server, how they respond to certificate revocation, or how fast they patch — that actually tells you something about the security posture of the organization, and you can start to build models of risk profiles of those organizations. It moves away from this sort of patch-and-break or patch-and-pray game we’ve been playing. So, that’s the other thing we’ve been starting to see, which is the potential for being more proactive about security.

BK: How exactly do you go about the notification process? That’s a hard thing to do effectively and smoothly even if you already have a good relationship with the organization you’re notifying….

MB: I think one of the reasons why the Heartbleed notification experiment was so successful is that we did notifications on the heels of a broad vulnerability disclosure. The press and the general atmosphere and culture provided the impetus for people to be excited about patching. The response we received from those notifications was overwhelmingly positive. A lot of people we reached out to said, ‘Hey, this is great, please scan me again and let me know if I’m patched.’ Pretty much everyone was excited to have the help.

Another interesting challenge was that we did some filtering as well in cases where the IP address had no known patches. So, for example, where we got information from a national CERT [Computer Emergency Response Team] that this was an embedded device for which there was no patch available, we withheld that notification because we felt it would do more harm than good since there was no path forward for them. We did some aggregation as well, because it was clear there were a lot of DSL and dial-up pools affected, and we did some notifications to ISPs directly.

BK: You must get some pushback from people about being included in these scans. As for the idea that scanning is inherently bad or should somehow prompt some kind of reaction in and of itself: do you think that ship has sailed?

ZD: There is some small subset that does have issues. What we try to do is be as transparent as possible. If you look up any of the hosts we use for scanning in WHOIS records, or just visit them with a browser, you'll see right away that the machine is part of this research study, here's the information we're collecting, and here's how you can be excluded. A very small percentage of people who visit that page will read it and then contact us and ask to be excluded. If you send us an email [and request removal], we'll remove you from all future scans. A lot of this comes down to education; a lot of people to whom we explain our process and motives are okay with it.

BK: Are those that object and ask to be removed more likely to be companies and governments, or individuals?

ZD: It’s a mix of all of them. I do remember offhand there were a fair number of academic institutions and government organizations, but there were a surprising number of home users. Actually, when we broke down the numbers last year (PDF), the largest category was small to mid-sized businesses. This time last year, we had excluded only 157 organizations that had asked for it.

BK: Was there any pattern to those that asked to be excluded?

ZD: I think that actually is somewhat interesting: the exclusion requests aren't generally coming from large corporations, which likely notice our scanning but don't have an issue with it. A lot of the emails we get are from small businesses and organizations that really don't know how to interpret their logs, and often just choose the most conservative route.

So we've been scanning for several years now, and I think when we originally started, we expected all the people who were watching for this to contact us at once and say, "Please exclude us." And then we sort of expected that the number of people asking to be excluded would plateau, and we wouldn't have problems again. But what we've seen is almost the exact opposite. We still get [exclusion request] emails each day, but what we're really finding is that people aren't discovering these scans proactively. Instead, they're going through their logs while trying to troubleshoot some other issue, they see a scan coming from us there, and they don't know who we are or why we're contacting their servers. And so it's not the organizations that are watching, it's the ones who really aren't watching who are contacting us.

BK: Do you guys go back and delete historic records associated with network owners that have asked to be excluded from scans going forward?

ZD: At this point we haven't gone back and removed data. One reason is that there are published research results based on those data sets, and it's very hard to change the underlying data after the fact: if another researcher went back and tried to confirm an experiment or perform something similar, there would be no easy way of doing that.

BK: Is this what you’re thinking about for the future of your project? How to do more notification and build on the data you have for those purposes? Or are you going in a different or additional direction?

MB: When I think about the ethics of this kind of activity, I have a very utilitarian view: I'm interested in doing as much good as we possibly can with the data we have. I think that lies in notifications, being proactive, helping organizations that run networks to better understand what their external posture looks like, and in building better safe defaults. But I'm most interested in a handful of core protocols that are under-serviced and not well understood. And so I think we should spend the majority of our effort focusing on a small handful of those, including BGP, TLS, and DNS.

ZD: In many ways, we’re just kind of at the tip of this iceberg. We’re just starting to see what types of security questions we can answer from these large-scale analyses. I think in terms of notifications, it’s very exciting that there are things beyond the analysis that we can use to actually trigger actions, but that’s something that clearly needs a lot more analysis. The challenge is learning how to do this correctly. Every time we look at another protocol, we start seeing these weird trends and behavior we never noticed before. With every protocol we look at there are these endless questions that seem to need to be answered. And at this point there are far more questions than we have hours in the day to answer.

Raspberry Pi: Another Raspbian Desktop User Interface Update

This post was syndicated from: Raspberry Pi and was written by: Simon Long. Original post: at Raspberry Pi

Hopefully the dust has now settled on the first batch of changes to the Raspbian desktop which were made available at Christmas, and you’ve either a) decided you like them, or b) decided you hate them and have rolled back to a previous version, never to upgrade again. If you are in group b), I suggest you stop reading now…

The next set of desktop changes are now available. The overall look and feel hasn’t changed, but there have been a few major new features added, and a lot of small tweaks and bug fixes. The new features include the following:

New Wifi Interface

Connecting to a wifi network has been made much simpler in most cases by including a version of Roy Marples’ excellent dhcpcd and dhcpcd-ui packages. If you look towards the right-hand end of the menu bar, there is now a network icon. This shows the current state of your network connection – if a wifi connection is in use, it shows the signal strength; if not, it shows the connection state of your Ethernet connection. While connections are being set up, the icon flashes to indicate that a connection is being established.

To connect to a wifi network (assuming you have a USB wifi dongle connected to your Pi), left-click on the network icon and you should see a list of all visible wireless networks. Select the one you want, and if it is secured, you will be prompted for the network password. Enter the password, press OK and wait a few seconds for the wifi icon to stop flashing; if you then left-click the icon again, the selected network should be shown at the top of the list with a tick next to it, and you are then good to go.


If you right-click the network icon and choose the Wifi Networks (dhcpcdui) Settings option from the pop-up menu, you can manually enter IP addresses – choose the SSID option from the Configure drop-down, select your network and enter the IP addresses you want. From the same dialog, you can manually enter an IP address for your Ethernet connection by selecting the Interface option from the drop-down and choosing eth0 as the interface. Once you’ve clicked Apply, you may need to reboot your Pi for the new settings to take effect.
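For headless setups, the same static-address settings can be written directly into dhcpcd's configuration file instead of using the dialog. A minimal sketch, assuming the standard /etc/dhcpcd.conf location; all addresses are placeholders for your own network:

```
# /etc/dhcpcd.conf -- example static configuration (addresses are placeholders)
interface eth0
static ip_address=192.168.1.50/24
static routers=192.168.1.1
static domain_name_servers=192.168.1.1
```

As with the dialog, a reboot (or restarting dhcpcd) is needed for the new settings to take effect.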

Speaking of rebooting, the first time you plug in a WiFi dongle, you will need to reboot your Pi to have it recognised by the new UI. Once that is done, with most dongles, you should be able to hot-plug them, or even to have two dongles connected at once if you wish to connect to two networks. (A few dongles won’t work when hot-plugged – the Edimax dongle is one, due to a bug in its driver. The official Raspberry Pi dongle and the PiHut dongle should both hot-plug without problems.)

If you prefer to use the original wpa_gui application, it is still installed – just type:

wpa_gui

in a terminal window to open it.

New Volume / Audio Interface

Next to the network interface icon on the menu bar, you will see a volume icon. This works exactly the same way as on Windows or MacOS – left-click it to drop down the volume control, which you can then slide with the mouse. There’s a mute checkbox at the bottom to quickly mute or unmute the audio output. (Note that not all devices support mute, so with some external soundcards this may be greyed-out.)

If you right-click the volume icon, a pop-up menu appears to allow you to select which audio output is used – on a standard Pi, you have the choice of HDMI or Analog. If you have installed a USB audio interface, or an external soundcard, these will also appear as options in this menu.

To allow more detailed control of any external audio devices, we’ve included a custom version of xfce4-mixer, the mixer dialog from the XFCE desktop environment – you can access this either under Device Settings from the volume right-click menu (the option only appears if there is an external audio device connected) or from the Preferences section of the main menu, where it is listed as Audio Device Settings.


From this dialog, select the device you want to control from the drop-down at the top, and then press the Select Controls button to choose which of the controls the device offers that you want to display. Pressing the Make Default button on this window has the same effect as choosing an output source in the volume right-click menu.

New Appearance Settings

There is a new custom Appearance Settings dialog, available from the Preferences section of the main menu, which works with a new custom theme for Openbox and GTK called PiX. This allows you to set how your desktop looks (colour, background picture), how the menu bar looks (including an easy option to move it to the top or bottom of the screen) and how window title bars look.


This groups together the most useful settings from the original Openbox and LXSession appearance dialog boxes, which are now hidden in the menu. They are still on the system, so if you would prefer to use them, access them from a terminal window by typing obconf or lxappearance, respectively. (Note that using both the new dialog and the old obconf and lxappearance dialogs may result in strange combinations of settings; I’d strongly recommend you use one or the other, not all three…)

Other Stuff

There are a lot of other small changes, mostly to the appearance rather than functionality, and some minor bug fixes. For example, dialog buttons have a cleaner appearance due to the removal of icons and shortcut underlines. (The shortcuts are still there – just hold down the Alt key and they will appear on the buttons.)

By popular demand, you can now set the foreground and background colours of the CPU monitor on the menu bar – just right-click it and choose CPU Usage Monitor Settings, and you can colour it however you like. Also back by popular demand is the ability to use an arbitrary format for the menu bar clock (right-click and choose Digital Clock Settings); it should now resize and align properly if you do so…

Various other applications have been updated – the Epiphany browser has had speed and compatibility improvements; the Minecraft Python API is now compatible with Python 3, and there is a new version of Sonic Pi.

How Do I Get It?

If this all sounds good to you, it’s very easy to add to your Pi. Just open a terminal and type:

sudo apt-get update

sudo apt-get dist-upgrade

Updating will overwrite your desktop configuration files (found in /home/pi/.config). Just in case you had customised something in there for your own use, the original files are backed up as part of the upgrade process, and can be found in /home/pi/oldconffiles.

The new network interface is a separate package – we know that some people will have carefully customised their network setups, so you may not want the new changes, as they will overwrite your network configuration. If you do want the new network interface, type:

sudo apt-get install raspberrypi-net-mods

You will be prompted as part of the install to confirm that you want the new version of the network configuration file – press Y when prompted. (Or N if you’ve had a last-minute change of mind!)

As ever, user interface design isn’t an exact science – my hope is that the changes make your Pi nicer to use, but feedback (good or bad) is always welcome, so do post a comment and let me know your thoughts!

SANS Internet Storm Center, InfoCON: green: The Art of Logging, (Thu, May 7th)

This post was syndicated from: SANS Internet Storm Center, InfoCON: green and was written by: SANS Internet Storm Center, InfoCON: green. Original post: at SANS Internet Storm Center, InfoCON: green

[This is a Guest Diary by Xavier Mertens]

Handling log files is not a new topic. People have known for a long time that taking care of your logs is a must-have: they are very valuable when you need to investigate an incident. But if collecting events and storing them for later processing is one point, events must also be properly generated if you want to be able to investigate suspicious activities! Take a firewall as an example: logging all the accepted traffic is one step, but what's really important is to log all the rejected traffic. Most modern security devices (IDS, firewalls, web application firewalls, …) can integrate dynamic blacklists maintained by external organizations, and there are plenty of useful blacklists on the internet with IP addresses, domain names, etc.

Let's assume a web application firewall with this kind of feature. If we block the malicious IP addresses at the beginning of the policy, connections from a (reported as) suspicious IP address are dropped from the start without more details, and we'll never know which kind of attack was attempted. By blocking the malicious IP addresses at the end of the policy instead, we know that if an IP is blocked, the rest of our policy was not effective enough to block the attack! Maybe a new type of attack was tried and we need to add a new pattern. Blocking attackers is good, but it's more valuable to know why they were blocked.
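The rule-ordering argument can be sketched in pseudocode-style Python; the rule names, sample signature patterns and blacklist entry are illustrative assumptions, not taken from any real policy:

```python
# Hedged sketch: evaluate attack signatures BEFORE the IP blacklist, so a
# blocked request carries the reason. A hit on the blacklist rule alone means
# our signatures missed the attack and the policy needs a new pattern.
BLACKLIST = {"203.0.113.5"}                      # example reputation data
SIGNATURES = {"sql_injection": "UNION SELECT",   # example patterns only
              "path_traversal": "../"}

def evaluate(src_ip, request):
    for name, pattern in SIGNATURES.items():
        if pattern in request:
            return ("block", f"signature:{name}")   # we learn *why*
    if src_ip in BLACKLIST:
        # blocked only by reputation: the signature set was not enough
        return ("block", "blacklist")
    return ("allow", "")

print(evaluate("203.0.113.5", "GET /?q=UNION SELECT 1"))
```

With the ordering reversed, the same request would be logged only as "blacklist" and the attack pattern would stay invisible.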

(c) SANS Internet Storm Center. Creative Commons Attribution-Noncommercial 3.0 United States License.

Anchor Cloud Hosting: Rebuilding An OpenStack Instance and Keeping the Same Fixed IP

This post was syndicated from: Anchor Cloud Hosting and was written by: Craige McWhirter. Original post: at Anchor Cloud Hosting

OpenStack and in particular the compute service, Nova, has a useful rebuild function that allows you to rebuild an instance from a fresh image while maintaining the same fixed and floating IP addresses, amongst other metadata.

However if you have a shared storage back end, such as Ceph, you’re out of luck as this function is not for you.

Fortunately, there is another way.

Prepare for the Rebuild:

Note the fixed IP address of the instance that you wish to rebuild and the network ID:

$ nova show demoinstance0 | grep network
| DemoTutorial network                       |,                     |
$ export FIXED_IP=
$ neutron floatingip-list | grep
| ee7ecd21-bd93-4f89-a220-b00b04ef6753 |                  |      |
$ export FLOATIP_ID=ee7ecd21-bd93-4f89-a220-b00b04ef6753
$ neutron net-show DemoTutorial | grep " id "
| id              | 9068dff2-9f7e-4a72-9607-0e1421a78d0d |
$ export OS_NET=9068dff2-9f7e-4a72-9607-0e1421a78d0d

You now need to delete the instance that you wish to rebuild:

$ nova delete demoinstance0
Request to delete server demoinstance0 has been accepted.

Manually Prepare the Networking:

Now you need to re-create the port and re-assign the floating IP, if it had one:

$ neutron port-create --name demoinstance0 --fixed-ip ip_address=$FIXED_IP $OS_NET
Created a new port:
| Field                 | Value                                                                                 |
| admin_state_up        | True                                                                                  |
| allowed_address_pairs |                                                                                       |
| binding:vnic_type     | normal                                                                                |
| device_id             |                                                                                       |
| device_owner          |                                                                                       |
| fixed_ips             | {"subnet_id": "eb5db27f-edad-480e-92cb-1f8fec8848a8", "ip_address": ""}  |
| id                    | c1927578-451b-4682-8888-55c7163898a4                                                  |
| mac_address           | fa:16:3e:5a:39:67                                                                     |
| name                  | demoinstance0                                                                         |
| network_id            | 9068dff2-9f7e-4a72-9607-0e1421a78d0d                                                  |
| security_groups       | 5898c15a-4670-429b-a414-9f59671c4d8b                                                  |
| status                | DOWN                                                                                  |
| tenant_id             | gsu7j52c50804cf3aad71b92e6ced65e                                                      |
$ export OS_PORT=c1927578-451b-4682-8888-55c7163898a4
$ neutron floatingip-associate $FLOATIP_ID $OS_PORT
Associated floating IP ee7ecd21-bd93-4f89-a220-b00b04ef6753
$ neutron floatingip-list | grep $FIXED_IP
| ee7ecd21-bd93-4f89-a220-b00b04ef6753 |   |     | c1927578-451b-4682-8888-55c7163898a4 |
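If you script this procedure, the new port ID can be pulled out of the CLI's table output rather than copied by hand. A hedged sketch (the helper name is illustrative; the parsing follows the table format shown above):

```python
# Hedged sketch: extract a field value from a `neutron port-create`-style
# ASCII table, e.g. the `id` row, so it can be exported as OS_PORT.
def extract_field(table_text, field):
    for line in table_text.splitlines():
        # rows look like: | field | value |
        parts = [p.strip() for p in line.strip().strip("|").split("|")]
        if len(parts) == 2 and parts[0] == field:
            return parts[1]
    return None

sample = """
| id                    | c1927578-451b-4682-8888-55c7163898a4 |
| name                  | demoinstance0                        |
"""
print(extract_field(sample, "id"))
```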


Now you need to boot the instance again and specify the port you created:

$ nova boot --flavor=m1.tiny --image=MyImage --nic port-id=$OS_PORT demoinstance0
$ nova show demoinstance0 | grep network
| DemoTutorial network                       |,                     |

Now that your rebuild is complete, you've got your old IPs back and you're done. Enjoy :-)

The post Rebuilding An OpenStack Instance and Keeping the Same Fixed IP appeared first on Anchor Cloud Hosting.

SANS Internet Storm Center, InfoCON: green: Upatre/Dyre – the daily grind of botnet-based malspam, (Tue, May 5th)

This post was syndicated from: SANS Internet Storm Center, InfoCON: green and was written by: SANS Internet Storm Center, InfoCON: green. Original post: at SANS Internet Storm Center, InfoCON: green

Malicious spam (malspam) delivering Upatre/Dyre has been an ongoing issue for quite some time. Many organizations have posted articles about this malware. I've read good information on Dyre from last year [1, 2] and this year [3].

Upatre is the malware downloader that retrieves Dyre (Dyreza), an information stealer described as a Zeus-like banking Trojan [4]. Earlier this year, EmergingThreats reported Upatre and Dyre are under constant development [5], while SecureWorks told us banking botnets continue to deliver this malspam despite previous takedowns [6].

Botnets sending waves of malspam with Upatre as zip file attachments are a near-daily occurrence. Most organizations won't see these emails, because the messages are almost always blocked by spam filters.

Because security researchers find Upatre/Dyre malspam nearly every day, it's a bit tiresome to write about, and we sometimes gloss over the information when it comes our way. After all, the malspam is being blocked, right?

Nonetheless, we should continue to document some waves of Upatre/Dyre malspam to see if anything is changing or evolving.

Here's one wave we found after searching through our blocked spam filters at Rackspace within the past 24 hours:

  • Start date/time: 2015-05-04 13:48 UTC
  • End date/time: 2015-05-04 16:40 UTC
  • Timespan: 2 hours and 52 minutes
  • Number of emails: 212

We searched for subject lines starting with the word Holded and found 31 different subjects:

  • Holded account alert
  • Holded account caution
  • Holded account message
  • Holded account notification
  • Holded account report
  • Holded account warning
  • Holded bank operation alert
  • Holded bank operation caution
  • Holded bank operation message
  • Holded bank operation notification
  • Holded bank operation report
  • Holded bank operation warning
  • Holded operation alert
  • Holded operation caution
  • Holded operation message
  • Holded operation notification
  • Holded operation report
  • Holded operation warning
  • Holded payment alert
  • Holded payment caution
  • Holded payment message
  • Holded payment notification
  • Holded payment report
  • Holded payment warning
  • Holded transaction alert
  • Holded transaction caution
  • Holded transaction message
  • Holded transaction notification
  • Holded transaction report
  • Holded transaction warning
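The subjects above follow a single template, so a mail filter could match the whole wave with one pattern. A hedged sketch derived from the list (this regex is inferred from the samples, not taken from the campaign or from Rackspace's actual filters):

```python
# Hedged sketch: one regex covering the "Holded ..." subject-line template
# seen in this malspam wave (pattern derived from the observed subjects).
import re

HOLDED = re.compile(
    r"^Holded (account|bank operation|operation|payment|transaction) "
    r"(alert|caution|message|notification|report|warning)$")

print(bool(HOLDED.match("Holded payment warning")))   # True
print(bool(HOLDED.match("Holded refund alert")))      # False
```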

The 212 messages had different attachments. Here's a small sampling of the different file names:


Emails sent by this botnet came from different IP addresses before they hit our mail servers. Senders and message ID headers were all spoofed. The headers of each email show the same Google IP address spoofed as the previous sender. In the images below, the source IP address (right before the message hit our email servers) is outlined in red, and the spoofed Google IP address is highlighted in blue. The only true items are the IP addresses seen before these emails hit our mail servers; everything else cannot be verified and can be considered spoofed.

This wave sent dozens of different attachment names with hundreds of different file hashes. I took a random sample and infected a host to generate some traffic. This Dyre malware is VM-aware, so I had to use a physical host for the infection traffic. It shows the usual Upatre URLs, Dyre SSL certs and STUN traffic.
Shown above: Filtered view of the infection traffic.
Shown above: EmergingThreats-based Snort events on the infection traffic using Security Onion.

Of note is a service run by one of my fellow Rackspace employees [7]. By itself, it's not malicious; it is merely a free service that reports your host's IP address. Unfortunately, malware authors use this and similar services to check an infected computer's IP address. Because of that, you'll often find alerts that report any traffic to these domains as an indicator of compromise (IOC).

The Upatre HTTP GET requests didn't return anything. Apparently, the follow-up Dyre malware was downloaded over one of the SSL connections.

Dyre was first saved to: C:\Users\username\AppData\Local\Temp\vwlsrAgtqYXVcRW.exe
Dyre was then moved to:

The zip file is password-protected with the standard password. If you don't know it, email and ask.

Final words

It's a daily grind reviewing this information, and most security professionals have higher-priority issues to deal with. However, if we don't periodically review these waves of Upatre/Dyre, our front-line analysts and other security personnel might not recognize the traffic and may miss the IOCs.

Brad Duncan, Security Researcher at Rackspace
Blog: – Twitter: @malware_traffic



(c) SANS Internet Storm Center. Creative Commons Attribution-Noncommercial 3.0 United States License.

TorrentFreak: Microsoft Logs IP Addresses to Catch Windows 7 Pirates

This post was syndicated from: TorrentFreak and was written by: Andy. Original post: at TorrentFreak

microsoft-pirateDue to the fact that one needs to be present on most computers in order for them to work at all, operating systems are among the most pirated types of software around.

There can be little doubt that, across their range and Microsoft's 29-year history, Windows operating systems have been pirated countless millions of times. It's a practice that on some levels Microsoft has come to accept, with regular consumers largely avoiding the company's aggression.

However, as one or perhaps more pirates are about to find out, the same cannot be said of those pirating the company’s products on a commercial scale.

In a lawsuit filed this week at a district court in Seattle, Microsoft targets the individual or individuals behind a single Verizon IP address. Who he, she or they are is unknown at this point, but according to Microsoft they're responsible for some serious Windows pirating.

“As part of its cyberforensic methods, Microsoft analyzes product key activation data voluntarily provided by users when they activate Microsoft software, including the IP address from which a given product key is activated,” the lawsuit reads.

Microsoft says that its forensic tools allow the company to analyze billions of activations of Microsoft software and identify patterns “that make it more likely than not” that an IP address associated with activations is one through which pirated software is being activated.
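Microsoft's actual forensic pipeline is not public, but the idea of flagging an IP address by activation volume can be illustrated. Everything in this sketch (the threshold, names and sample data) is an assumption for the example, not a description of Microsoft's method:

```python
# Illustrative sketch only: count product-key activations per source IP and
# flag addresses whose volume suggests activations of pirated software.
from collections import Counter

def flag_suspect_ips(activations, threshold=100):
    # activations: iterable of (source_ip, product_key) pairs
    counts = Counter(ip for ip, key in activations)
    return [ip for ip, n in counts.items() if n >= threshold]

logs = [("198.51.100.7", f"KEY-{i % 3}") for i in range(250)] + \
       [("192.0.2.1", "KEY-9")]
print(flag_suspect_ips(logs))   # ['198.51.100.7']
```

A real system would also weigh key provenance (stolen vs. never-issued keys) rather than raw counts alone.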

“Microsoft’s cyberforensics have identified hundreds of product key activations originating from IP address…which is presently assigned to Verizon Online LLC. These activations have characteristics that, on information and belief, establish that Defendants are using the IP address to activate pirated software.”

Microsoft says that the defendant(s) have activated hundreds of copies of Windows 7 using product keys that have been “stolen” from the company’s supply chain or have never been issued with a valid license, or keys used more times than their license allows.

In addition to immediate injunctive relief and the impounding of all infringing materials, the company demands profits attributable to the infringements, treble damages and attorney fees or, alternatively, statutory damages.

This week’s lawsuit (pdf) follows similar action in December 2014 in which Microsoft targeted the user behind an AT&T account.

Source: TorrentFreak, for the latest info on copyright, file-sharing, torrent sites and anonymous VPN services.

Schneier on Security: Detecting QUANTUMINSERT

This post was syndicated from: Schneier on Security and was written by: schneier. Original post: at Schneier on Security

Fox-IT has a blog post (and has published Snort rules) on how to detect man-on-the-side Internet attacks like the NSA’s QUANTUMINSERT.

From a Wired article:

But hidden within another document leaked by Snowden was a slide that provided a few hints about detecting Quantum Insert attacks, which prompted the Fox-IT researchers to test a method that ultimately proved to be successful. They set up a controlled environment and launched a number of Quantum Insert attacks against their own machines to analyze the packets and devise a detection method.

According to the Snowden document, the secret lies in analyzing the first content-carrying packets that come back to a browser in response to its GET request. One of the packets will contain content for the rogue page; the other will be content for the legitimate site sent from a legitimate server. Both packets, however, will have the same sequence number. That, it turns out, is a dead giveaway.

Here’s why: When your browser sends a GET request to pull up a web page, it sends out a packet containing a variety of information, including the source and destination IP address of the browser as well as so-called sequence and acknowledge numbers, or ACK numbers. The responding server sends back a response in the form of a series of packets, each with the same ACK number as well as a sequential number so that the series of packets can be reconstructed by the browser as each packet arrives to render the web page.

But when the NSA or another attacker launches a Quantum Insert attack, the victim’s machine receives duplicate TCP packets with the same sequence number but with a different payload. “The first TCP packet will be the ‘inserted’ one while the other is from the real server, but will be ignored by the [browser],” the researchers note in their blog post. “Of course it could also be the other way around; if the QI failed because it lost the race with the real server response.”

Although it’s possible that in some cases a browser will receive two packets with the same sequence number from a legitimate server, they will still contain the same general content; a Quantum Insert packet, however, will have content with significant differences.
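The giveaway described above (duplicate sequence number, different payload) can be illustrated with a toy checker. This is not Fox-IT's detector; the function name, flow representation and sample payloads are all assumptions for the sketch:

```python
# Illustrative sketch: flag TCP segments in one flow that share a sequence
# number but carry different payloads -- the QUANTUMINSERT giveaway. A
# legitimate retransmission repeats the SAME content and is not flagged.
def find_insert_candidates(segments):
    seen = {}          # seq number -> first payload observed
    suspicious = []
    for seq, payload in segments:
        if seq in seen and seen[seq] != payload:
            # duplicate sequence number, different content: possible insert
            suspicious.append(seq)
        seen.setdefault(seq, payload)
    return suspicious

flow = [(1000, b"HTTP/1.1 302 Found..."),   # injected packet wins the race
        (1000, b"HTTP/1.1 200 OK...")]      # legitimate server response
print(find_insert_candidates(flow))         # [1000]
```

A production detector would of course work per-connection on captured traffic and compare payload overlap rather than exact equality.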

It’s important we develop defenses against these attacks, because everyone is using them.

SANS Internet Storm Center, InfoCON: green: Dalexis/CTB-Locker malspam campaign, (Thu, Apr 30th)

This post was syndicated from: SANS Internet Storm Center, InfoCON: green and was written by: SANS Internet Storm Center, InfoCON: green. Original post: at SANS Internet Storm Center, InfoCON: green

Malware Every Day

Malicious spam (malspam) is sent by botnets every day. These malspam campaigns send malware designed to infect Windows computers. I'll see Dridex or Upatre/Dyre campaigns on a daily basis. Fortunately, most of these emails are blocked by our spam filters.

This diary concerns a recent malspam wave on Tuesday 2015-04-28 from a botnet pushing Dalexis/CTB-Locker.

What is Dalexis/CTB-Locker?

Dalexis is a malware downloader. It drops a CAB file with an embedded document that's opened on the user's computer [1], then downloads more malware. Dalexis is often used to deliver CTB-Locker [2][3]. CTB-Locker is ransomware that encrypts files on your computer. In exchange for a ransom payment, the malware authors will provide a key to decrypt your files. The behavior of this malware is well documented, but small changes often occur as new waves of malspam are sent out.

A similar wave of malspam from Monday 2015-04-27 was reported by [4]. The next day saw similar activity. This campaign will likely continue. Below is a flow chart of Tuesday's activity.

The messages have slightly different subject lines, and each email attachment has a different file hash. I infected a host using one of the attachments. Below are links to the associated files:

The ZIP file is password-protected with the standard password. If you don't know it, email and ask.

Infection as Seen from the Desktop

The malware extracted from these email attachments is an SCR file with an Excel icon.

I had to download a Tor browser to get at the decryption instructions. The bitcoin address for the ransom payment is: 18GuppWVuZGqutYvZz9uaHxHcostrU6Upc


Dalexis uses an HTTP GET request to download CTB-Locker. The file is encrypted in transit, but I retrieved a decrypted copy from the infected host. Dalexis reports to a command and control (CnC) server after the malware is successfully downloaded.

In the image below, you'll find HTTP POST requests to different servers as Dalexis tries to find a CnC server that will respond.

For indicators of compromise (IOCs), a list of domains unique to this infection follows:

(Read: IP address – domain name)

  • –
  • –
  • –
  • –
  • –
  • –
  • –
  • various –
  • various –

Example of Malspam From Tuesday 2015-04-28

From: Eda Uhrhammer
Date: Tuesday, April 28, 2015 at 16:16 UTC
To: [redacted]
Subject: [Issue 5261CC6247C37550] Account #295030013990 Temporarily Locked

Dear user,

We detect unauthorized Login Attempts to your ID #295030013990 from other IP Address.
Please re-confirm your identity. See attached docs for full information.

Eda Uhrhammer
Millard Peter
111 Hunter Street East, Peterborough, ON K9H 1G7



NOTE: The emails contain various international names, addresses, and phone numbers in the signature block.

Emails Collected

Start time: 2015-04-28 10:00:13 UTC
End time: 2015-04-28 16:16:28 UTC
Emails found: 24

Senders and Subject Lines

  • Sender: – Subject: [Issue 35078504EBA94667] Account #59859805294 Temporarily Locked
  • Sender: – Subject: [Issue 84908E27DF477852] Account #40648428303 Temporarily Locked
  • Sender: – Subject: [Issue 8694097116D18193] Account #257547165590 Temporarily Locked
  • Sender: – Subject: [Issue 11123E749D533902] Account #621999149649 Temporarily Locked
  • Sender: – Subject: [Issue 24789101648C8407] Account #250874039146 Temporarily Locked
  • Sender: – Subject: [Issue 6412D16736356564] Account #238632826769 Temporarily Locked
  • Sender: – Subject: [Issue 9139F9678C9A7466] Account #216021389500 Temporarily Locked
  • Sender: – Subject: [Issue 982886631E9E7489] Account #114654416120 Temporarily Locked
  • Sender: – Subject: [Issue 4895D8D81ADE1399] Account #843871639720 Temporarily Locked
  • Sender: – Subject: [Issue 72986FD85CE93134] Account #622243029178 Temporarily Locked
  • Sender: – Subject: [Issue 27883AA546718876] Account #475770363394 Temporarily Locked
  • Sender: – Subject: [Issue 5384A21F5AB26075] Account #717973552140 Temporarily Locked
  • Sender: – Subject: [Issue 5694B0643FCD587] Account #642271991381 Temporarily Locked
  • Sender: – Subject: [Issue 8219423F8CFB6864] Account #692223104314 Temporarily Locked
  • Sender: – Subject: [Issue 70308834A3929842] Account #339648082242 Temporarily Locked
  • Sender: – Subject: [Issue 33190977A2D04088] Account #831865092451 Temporarily Locked
  • Sender: – Subject: [Issue 706584024E142555] Account #196387638377 Temporarily Locked
  • Sender: – Subject: [Issue 830689BB76F4615] Account #162723085828 Temporarily Locked
  • Sender: – Subject: [Issue 46714D12FB834480] Account #526735661562 Temporarily Locked
  • Sender: – Subject: [Issue 39494AFE933A5158] Account #552561607876 Temporarily Locked
  • Sender: – Subject: [Issue 974641F53DD66126] Account #325636779394 Temporarily Locked
  • Sender: – Subject: [Issue 7505716EA6244832] Account #603263972311 Temporarily Locked
  • Sender: – Subject: [Issue 50438E220A5D7432] Account #906152957589 Temporarily Locked
  • Sender: – Subject: [Issue 5261CC6247C37550] Account #295030013990 Temporarily Locked

NOTE: The sending email addresses might be spoofed.
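When triaging a malspam run like the one collected above, the standard-library email module is enough to pull sender and subject lines in bulk. A minimal sketch (the sender address below is a hypothetical placeholder, since the real addresses may be spoofed anyway):

```python
from email import message_from_string

def sender_and_subject(raw):
    """Extract the From and Subject headers from a raw RFC 822 message."""
    msg = message_from_string(raw)
    return msg.get("From"), msg.get("Subject")

# Hypothetical reconstruction of one of the collected messages
raw = (
    "From: Eda Uhrhammer <sender@example.com>\n"
    "Subject: [Issue 5261CC6247C37550] Account #295030013990 Temporarily Locked\n"
    "\n"
    "Dear user, ...\n"
)
```

Mapping this over a mail spool quickly surfaces the Issue/Account numbering pattern seen in the subject lines above.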


Email Attachments
  • – 19,135 bytes – MD5 hash: 1a9fdce6b6efd094af354a389b0e04da
  • – 20,688 bytes – MD5 hash: a1b066361440a5ff6125f15b1ba2e1b1
  • – 20,681 bytes – MD5 hash: 01f8976034223337915e4900b76f9f26
  • – 19,135 bytes – MD5 hash: ab9a07054a985c6ce31c7d53eee90fbe
  • – 19,135 bytes – MD5 hash: 899689538df49556197bf1bac52f1b84
  • – 19,135 bytes – MD5 hash: eea0fd780ecad755940110fc7ee6d727
  • – 19,114 bytes – MD5 hash: f236e637e17bc44764e43a8041749e6c
  • – 20,168 bytes – MD5 hash: eda8075438646c617419eda13700c43a
  • – 20,177 bytes – MD5 hash: d00861c5066289ea9cca3f0076f97681
  • – 20,703 bytes – MD5 hash: 657e3d615bb1b6e7168319e1f9c5039f
  • – 19,113 bytes – MD5 hash: b7fe085962dc7aa7622bd15c3a303b41
  • – 20,642 bytes – MD5 hash: 2ba4d511e07090937b5d6305af13db68
  • – 20,710 bytes – MD5 hash: 24698aa84b14c42121f96a22fb107d00
  • – 20,709 bytes – MD5 hash: 04abf53d3b4d7bb7941a5c8397594db7
  • – 19,071 bytes – MD5 hash: b2ca48afbc0eb578a9908af8241f2ae8
  • – 20,175 bytes – MD5 hash: fa43842bda650c44db99f5789ef314e3
  • – 19,135 bytes – MD5 hash: 802d9abf21c812501400320f2efe7040
  • – 20,681 bytes – MD5 hash: 0687f63ce92e57a76b990a8bd5500b69
  • – 20,644 bytes – MD5 hash: 0918c8bfed6daac6b63145545d911c72
  • – 20,703 bytes – MD5 hash: 2e90e6d71e665b2a079b80979ab0e2cb
  • – 20,721 bytes – MD5 hash: 5b8a27e6f366f40cda9c2167d501552e
  • – 20,718 bytes – MD5 hash: 9c1acc3f27d7007a44fc0da8fceba120
  • – 20,713 bytes – MD5 hash: 1a6b20a5636115ac8ed3c4c4dd73f6aa
  • – 20,134 bytes – MD5 hash: b9d19a68205f2a7e2321ca3228aa74d1

Extracted Malware

  • 114654416120.scr – 98,304 bytes – MD5 hash: 46838a76fbf59e9b78d684699417b216
  • 162723085828.scr – 90,112 bytes – MD5 hash: 8f5df86fdf5f3c8e475357bab7bc38e8
  • 196387638377.scr – 90,112 bytes – MD5 hash: 59f71ef10861d1339e9765fb512d991c
  • 216021389500.scr – 98,304 bytes – MD5 hash: 0baa21fab10c7d8c64157ede39453ae5
  • 238632826769.scr – 98,304 bytes – MD5 hash: f953b4c8093276fbde3cfa5e63f990eb
  • 250874039146.scr – 98,304 bytes – MD5 hash: 6580e4ee7d718421128476a1f2f09951
  • 257547165590.scr – 94,208 bytes – MD5 hash: 6a15d6fa9f00d931ca95632697e5ba70
  • 295030013990.scr – 86,016 bytes – MD5 hash: 54c1ac0d5e8fa05255ae594adfe5706e
  • 325636779394.scr – 94,208 bytes – MD5 hash: 08a0c2aaf7653530322f4d7ec738a3df
  • 339648082242.scr – 94,208 bytes – MD5 hash: 1aaecdfd929725c195a7a67fc6be9b4b
  • 40648428303.scr – 94,208 bytes – MD5 hash: f51fcf418c973a94a7d208c3a8a30f19
  • 475770363394.scr – 81,920 bytes – MD5 hash: dbea4b3fb5341ce3ca37272e2b8052ae
  • 526735661562.scr – 94,208 bytes – MD5 hash: c0dc49296b0aec09c5bfefcf4129c29b
  • 552561607876.scr – 98,304 bytes – MD5 hash: 9239ec6fe6703279e959f498919fdfb0
  • 59859805294.scr – 86,016 bytes – MD5 hash: a9d11a69c692b35235ce9c69175f0796
  • 603263972311.scr – 94,208 bytes – MD5 hash: bcaf9ce1881f0f282cec5489ec303585
  • 621999149649.scr – 98,304 bytes – MD5 hash: 70a63f45eb84cb10ab1cc3dfb4ac8a3e
  • 622243029178.scr – 90,112 bytes – MD5 hash: d1b1e371aebfc3d500919e9e33bcd6c1
  • 642271991381.scr – 81,920 bytes – MD5 hash: 15a5acfbccbb80b01e6d270ea8af3789
  • 692223104314.scr – 94,208 bytes – MD5 hash: fa0fe28ffe83ef3dcc5c667bf2127d4c
  • 717973552140.scr – 98,304 bytes – MD5 hash: 646640f63f327296df0767fd0c9454d4
  • 831865092451.scr – 98,304 bytes – MD5 hash: ec872872bff91040d2bc1e4c4619cbbc
  • 843871639720.scr – 98,304 bytes – MD5 hash: b8e8e3ec7f4d6efee311e36613193b8d
  • 906152957589.scr – 94,208 bytes – MD5 hash: 36abcedd5fb6d17038bd7069808574e4
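
The MD5 hashes above can be checked against local samples with a few lines of Python; this sketch reads files in chunks so large samples don't have to fit in memory:

```python
import hashlib

def md5_of_file(path, chunk_size=65536):
    """Compute the MD5 hex digest of a file, reading it in chunks."""
    h = hashlib.md5()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()
```

Comparing `md5_of_file("295030013990.scr")` against the list above confirms whether a quarantined sample matches this campaign.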


Brad Duncan, Security Researcher at Rackspace
Blog: – Twitter: @malware_traffic



(c) SANS Internet Storm Center. Creative Commons Attribution-Noncommercial 3.0 United States License.

Krebs on Security: A Day in the Life of a Stolen Healthcare Record

This post was syndicated from: Krebs on Security and was written by: BrianKrebs. Original post: at Krebs on Security

When your credit card gets stolen because a merchant you did business with got hacked, it’s often quite easy for investigators to figure out which company was victimized. The process of divining the provenance of stolen healthcare records, however, is far trickier because these records typically are processed or handled by a gauntlet of third party firms, most of which have no direct relationship with the patient or customer ultimately harmed by the breach.

I was reminded of this last month, after receiving a tip from a source at a cyber intelligence firm based in California who asked to remain anonymous. My source had discovered a seller on the darknet marketplace AlphaBay who was posting stolen healthcare data into a subsection of the market called “Random DB ripoffs,” (“DB,” of course, is short for “database”).

Eventually, this same fraudster leaked a large text file titled, “Tenet Health Hilton Medical Center,” which contained the name, address, Social Security number and other sensitive information on dozens of physicians across the country.

An AlphaBay user named “Boogie” giving away dozens of healthcare records he claims to have stolen.

Contacted by KrebsOnSecurity, Tenet Health officials said the data was not stolen from its databases, but rather from a company called InCompass Healthcare. Turns out, InCompass disclosed a breach in August 2014, which reportedly occurred after a subcontractor of one of the company’s service providers failed to secure a computer server containing account information. The affected company was 24 ON Physicians, an affiliate of InCompass Healthcare.

“The breach affected approximately 10,000 patients treated at 29 facilities throughout the U.S. and approximately 40 employed physicians,” wrote Rebecca Kirkham, a spokeswoman for InCompass.

“As a result, a limited amount of personal information may have been exposed to the Internet between December 1, 2013 and April 17, 2014,” Kirkham wrote in an emailed statement. “Information that may have been exposed included patient names, invoice numbers, procedure codes, dates of service, charge amounts, balance due, policy numbers, and billing-related status comments. Patient Social Security number, home address, telephone number and date of birth were not in the files that were subject to possible exposure. Additionally, no patient medical records or bank account information were put at risk. The physician information that may have been exposed included physician name, facility, provider number and Social Security number.”

Kirkham said up until being contacted by this reporter, InCompass “had received no indication that personal information has been acquired or used maliciously.”

So who was the subcontractor that leaked the data? According to earlier reporting (since confirmed by InCompass), the subcontractor responsible was PST Services, a McKesson subsidiary providing medical billing services, which left more than 10,000 patients’ information exposed via Google search for over four months.

As this incident shows, a breach at one service provider or healthcare billing company can have a broad impact across the healthcare system, but can be quite challenging to piece together.

Still, not all breaches involving health information are difficult to backtrack to the source. In September 2014, I discovered a fraudster on the now-defunct Evolution Market dark web community who was selling life insurance records for less than $7 apiece. That breach was fairly easily tied back to Torchmark Corp., an insurance holding company based in Texas; the name of the company’s subsidiary was plastered all over stolen records listing applicants’ medical histories.


Health records are huge targets for fraudsters because they typically contain all of the information thieves would need to conduct mischief in the victim’s name — from fraudulently opening new lines of credit to filing phony tax refund requests with the Internal Revenue Service. Last year, a great many physicians in multiple states came forward to say they’d been apparently targeted by tax refund fraudsters, but could not figure out the source of the leaked data. Chances are, the scammers stole it from hacked medical providers like PST Services and others.

In March 2015, HealthCare IT News published a list of healthcare providers that experienced data breaches since 2009, using information from the Department of Health and Human Services. That data includes HIPAA breaches reported by 1,149 covered entities and business associates, and covers some 41 million Americans. Curiously, the database does not mention some 80 million Social Security numbers and other data jeopardized in the Anthem breach that went public in February 2015 (nor 11 million records lost in the Premera breach that came to light in mid-March 2015).

Sensitive stolen data posted to cybercrime forums can rapidly spread to miscreants and ne’er-do-wells around the globe. In an experiment conducted earlier this month, security firm Bitglass synthesized 1,568 fake names, Social Security numbers, credit card numbers, addresses and phone numbers that were saved in an Excel spreadsheet. The spreadsheet was then transmitted through the company’s proxy, which automatically watermarked the file. The researchers set it up so that each time the file was opened, the persistent watermark (which Bitglass says survives copy, paste and other file manipulations), “called home” to record view information such as IP address, geographic location and device type.

The company posted the spreadsheet of manufactured identities anonymously to cyber-crime marketplaces on the Dark Web. The result was that in less than two weeks, the file had traveled to 22 countries on five continents and was accessed more than 1,100 times. “Additionally, time, location, and IP address analysis uncovered a high rate of activity amongst two groups of similar viewers, indicating the possibility of two cyber crime syndicates, one operating within Nigeria and the other in Russia,” the report concluded.

Source: Bitglass