Posts tagged ‘ip address’

TorrentFreak: Google Publishes Chrome Fix For Serious VPN Security Hole

This post was syndicated from: TorrentFreak and was written by: Andy. Original post: at TorrentFreak

As large numbers of Internet users wise up to seemingly endless online privacy issues, security products are increasingly being viewed as essential for even basic tasks such as web browsing.

In addition to regular anti-virus, firewall and ad-busting products, users wishing to go the extra mile often invest in a decent VPN service which allows them to hide their real IP addresses from the world. Well, that's the theory, at least.

In January this year, details of a serious vulnerability revealed that in certain situations third parties were able to discover the real IP addresses of Chrome and Firefox users even though they were connected to a VPN.

This wasn’t the fault of any VPN provider though. The problem was caused by features present in WebRTC, an open-source project supported by Google, Mozilla and Opera.

By placing a few lines of code on a website and using a STUN server, it became possible to reveal not only users' true IP addresses but also their local network addresses.

While users were immediately alerted to broad blocking techniques that could mitigate the problem, it’s taken many months for the first wave of ‘smart’ solutions to arrive.

Following on the heels of a Chrome fix published by Rentamob earlier this month which protects against VPN leaks while leaving WebRTC enabled, Google has now thrown its hat into the ring.

Titled 'WebRTC Network Limiter', the tiny Chrome extension (just 7.31KB) disables the WebRTC multiple-routes option in Chrome's privacy settings while configuring WebRTC not to use certain IP addresses.

In addition to hiding local IP addresses that are normally inaccessible to the public Internet (such as those in private address ranges), the extension also stops other public IP addresses being revealed.

“Any public IP addresses associated with network interfaces that are not used for web traffic (e.g. an ISP-provided address, when browsing through a VPN) [are hidden],” Google says.

“Once the extension is installed, WebRTC will only use public IP addresses associated with the interface used for web traffic, typically the same addresses that are already provided to sites in browser HTTP requests.”

While both the Google and Rentamob solutions provide more elegant responses to the problem than previously available, both admit to having issues.

“Some WebRTC functions, like VOIP, may be affected by the multiple routes disabled setting. This is unavoidable,” Rentamob explains.

Google details similar problems, including issues directly linked to funneling traffic through a VPN.

“This extension may affect the performance of applications that use WebRTC for audio/video or real-time data communication. Because it limits the potential network paths, WebRTC may pick a path that results in significantly longer delay or lower quality (e.g. through a VPN). We are attempting to determine how common this is,” the company concludes.

After applying the blocks and fixes detailed above, Chrome users can check for IP address leaks by using sites including IPLeak and BrowserLeaks.

Source: TorrentFreak, for the latest info on copyright, file-sharing, torrent sites and the best VPN services.

Schneier on Security: Fugitive Located by Spotify

This post was syndicated from: Schneier on Security and was written by: schneier. Original post: at Schneier on Security

The latest in identification by data:

Webber said a tipster had spotted recent activity from Nunn on the Spotify streaming service and alerted law enforcement. He scoured the Internet for other evidence of Nunn and Barr’s movements, eventually filling out 12 search warrants for records at different technology companies. Those searches led him to an IP address that traced Nunn to Cabo San Lucas, Webber said.

Nunn, he said, had been avidly streaming television shows and children’s programs on various online services, giving the sheriff’s department a hint to the couple’s location.

SANS Internet Storm Center, InfoCON: green: Angler’s best friends, (Mon, Jul 27th)

This post was syndicated from: SANS Internet Storm Center, InfoCON: green and was written by: SANS Internet Storm Center, InfoCON: green. Original post: at SANS Internet Storm Center, InfoCON: green

Nope, not the kind of angler whose best friends are rubber boots, strings tied into flies, or a tape measure that starts with 5 inches where others have a zero. This is about the Angler Exploit Kit, which currently makes rampant use of the recent Adobe Flash zero-days to exploit the computers of unsuspecting users, and to push Cryptowall 3.0 onto them. Fellow ISC Handler Brad has covered before how this works.

Looking through our quite exhaustive (but likely nowhere near complete) list of IP addresses that were seen hosting Angler EK over the past 30 days or so, it is obvious that the crooks behind this exploit kit have a pretty savvy operation going on. First of all, they seem to test the waters at a new hosting provider, probably to see how quickly they get evicted. If no or slow action is forthcoming, the same provider will likely become the main Angler hoster a couple of days down the road. Obviously, this is bound to create some ruckus and lead to some complaints with said provider, but by the time the provider gets around to investigating, the bad guys usually have hopped one house down the road.

Amazingly, they seem to get away with this – staying at the same provider, but just switching to another IP address. With most providers these days touting the features of their cloud, including the ability to "spin up your image in any of our 20 data centers around the globe within a matter of seconds", this isn't really surprising. But it sure is highly unwelcome from a malware-fighting point of view. We used to hate the fast-flux domain name switcheroo, but now increasingly we're getting "fast instance", where the exploit hosting site itself moves every hour or two.

The statistics from this month also look like it takes the average hoster/provider about a week to catch on that the bad guys are simply moving onto the adjacent vacant lot, and to start evicting them for good. Though even this is hard to tell from the data – it could well also be that the providers never really caught on, and the bad guys just moved on their own to a new neighbourhood, for opsec reasons.

Without further ado, here's an excerpt from the list of Angler hosting sites that we've observed recently.

July 1 - 148.251.167.57 - Hetzner Online AG, Germany
July 1 - Hetzner Online AG, Germany
July 8 - Hetzner Online AG, Germany
July 9 - Hetzner Online AG, Germany
July 10 - Hetzner Online AG, Germany
July 12 - Hetzner Online AG, Germany
July 14 - 206.190.134.189 - Westhost, Salt Lake City, USA
July 15 - Netherlands
July 16 - Westhost, Salt Lake City, USA
July 16 - Westhost, Salt Lake City, USA
July 17 - Limestone Networks, Dallas, USA
July 19 - Limestone Networks, Dallas, USA
July 20 - Limestone Networks, Dallas, USA
July 20 - Netherlands and Czech Republic
July 21 - Limestone Networks, Dallas, USA
July 23 - Limestone Networks, USA and Netherlands
July 23 - Limestone Networks, Dallas, USA
July 23 - Limestone Networks, Dallas, USA
July 24 - 216.245.213.138 - Limestone Networks, USA and Netherlands
July 24 - Netherlands and Czech Republic
July 25 - Netherlands and Czech Republic

Now, of course, I'm not insinuating that this misuse occurs with the tacit or implicit approval of the providers; most likely, they are just being taken for a ride. But if you are such a provider, and you receive a complaint about one of your IPs hosting Angler EK, how about:

– checking ALL your IPs, not just the one that was reported, and continuing to check over the next week or two?
– correlating the data used to purchase these IPs, and proactively suspending, or at least activating a full packet trace on, all others that match similar info?

Icing on the cake would be if you as the provider could spend some brain cycles to translate the awesome Emerging Threats signatures from matching on client traffic to matching on server traffic (no big deal: primarily, you just need to flip $HOME_NET and $EXTERNAL_NET, and maybe adjust the from_server flow direction, depending on the rule match) and then apply these to your inbound stream. You know, 20+ days after a signature became available for the current Angler EK landing page traffic, one would think that you, as a professional web hoster, had some way to detect such traffic into your datacenters, and that it would take you less than a week to put a lid on it?
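
The flip described above can be sketched in a few lines of Python. This is a rough illustration of the variable/direction swap only, not a drop-in rule converter; real Emerging Threats signatures may still need per-rule review:

```python
import re

# Swap the network variables and flow keywords so a rule written to
# match a victim *client* fetching an EK landing page instead matches
# the same traffic arriving at the hosting provider's *server* side.
SWAPS = {
    "$HOME_NET": "$EXTERNAL_NET",
    "$EXTERNAL_NET": "$HOME_NET",
    "to_server": "to_client",
    "to_client": "to_server",
    "from_server": "from_client",
    "from_client": "from_server",
}
PATTERN = re.compile(
    r"\$HOME_NET|\$EXTERNAL_NET|to_server|to_client|from_server|from_client")

def flip_rule(rule: str) -> str:
    """Return the rule with $HOME_NET/$EXTERNAL_NET and flow direction swapped."""
    return PATTERN.sub(lambda m: SWAPS[m.group(0)], rule)

client_rule = ('alert tcp $HOME_NET any -> $EXTERNAL_NET $HTTP_PORTS '
               '(msg:"Example EK landing"; flow:established,to_server;)')
print(flip_rule(client_rule))
```

Run over a downloaded ruleset, this emits server-side variants; anything beyond the variable and direction swap still needs a human eye, as the diary hints.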

Also, it would help a lot if all you hosters could submit ALL your intelligence on this incident to Law Enforcement. Eventually (like, 3 years down the road…), the law will catch up with the perps, and decent evidence is what makes a conviction stick. I also suspect that it would work wonders if Law Enforcement could stop by for a chat with the CEOs of the hosters who seem to be having a hard time keeping the Angler from fishing in their waters, and offer suitable assistance. Most of these hosters are in cut-throat competition, and any revenue seems to be good revenue, but a little visit from the Feds might help to put things into perspective.

(c) SANS Internet Storm Center. Creative Commons Attribution-Noncommercial 3.0 United States License.

Linux How-Tos and Linux Tutorials: How to Create a Streaming Media Server with Linux Using Plex

This post was syndicated from: Linux How-Tos and Linux Tutorials and was written by: Jack Wallen. Original post: at Linux How-Tos and Linux Tutorials

Figure 1: The Plex web interface.

Media is king—it has been for a very long time. It helps inform, enlighten, inspire, and entertain. More and more, users aren't content with sitting in front of a television or desktop to enjoy their media. Or collections have become so large that transferring them from drive to drive has become cumbersome at best. So what do you do when you want to take advantage of that massive media collection from multiple locations and your target devices either don't have space for it or you don't want to take the time to transfer it?

You set up a media streaming server.

With this type of server you can enjoy your media from a desktop, laptop, smart phone, or tablet. Naturally, each media streaming server offers different features and there are plenty of available servers (from bare bones to full-featured solutions). I want to demonstrate the process of setting up a streaming server using Plex. Why Plex? Because it is one of the most feature-rich media servers available that also happens to be cross-platform, has a built-in media player (and transcoder), and also offers apps for both Android and iOS. With Plex, you can also sign into your account and stream outside of your local network (and enjoy other features). Do note: Some of the Plex features require a premium membership fee.

With all of that in mind, let’s begin the process of setting up the Plex server. I will be installing on the latest release of openSUSE, but the server can be installed on Fedora, Ubuntu, Debian, and more.

Installation

You will be surprised to find out that installation is the easiest piece of the Plex puzzle—even on a non-Ubuntu distribution. Let’s walk through the steps (with a reminder, I’m using the latest version of openSUSE):

  1. Open up your web browser and download the installer that meets your needs.

  2. Open up your file manager (in my case, on openSUSE, that’d be Files).

  3. Change into the directory housing the downloaded installer.

  4. Double-click on the downloaded installer.

  5. Type your admin password and hit Enter.

  6. Allow the installation to complete.

That’s it! The installation of the Plex media server is done.

Starting the server

Now you must start up the Plex server. This is done manually at this point (a reboot should start the server automatically); so open up a terminal window and su to the root user. Once you have admin power at your fingertips, issue the following command:

/etc/init.d/plexmediaserver start

The Plex server will start running in the background and you can connect to the web-based interface to set up your server.

Set up

You might not be surprised to know that the setup of the Plex server will take the most time. It’s not terribly difficult, but can be time consuming. I’ll cut to the chase and illustrate the important pieces of this puzzle.

The first thing you should do is sign up for a Plex account (even the free account) so you are able to take advantage of some of the extra features. You can sign up here. Once you have your account created, you’re ready to go.

Open up your browser (on the PC housing the Plex server) and point it to http://localhost:32400/web. You will be presented with the Plex web-based administration tool (Figure 1, above).

NOTE: You can also configure the Plex server from a remote machine. To do this, you simply replace localhost with the IP address of the Plex server.

The first thing you want to do is click on the user drop-down and select Sign In. When prompted, enter your Plex credentials and then click on the Config icon (the wrench to the left of the user drop-down). In the Settings page (Figure 2), click on the Server tab.

Figure 2: The Plex server settings.

You will see a configuration option called Friendly name. Enter a name for your Plex server and then click SAVE CHANGES.

Now it's time to configure the locations of your libraries. This is one issue that warrants a bit of explanation. Yes, you can configure the root location of your Music and Video libraries; but there are guidelines for naming files and folders. Here are some tips:

  • Separate media into appropriate folders (Music, Movies, TV, etc.)

  • Movies should be named as follows: [Movie_Name (Release_Year)].mp4

  • TV shows should include season and episode numbers in the name: [Show Name – sXXeYY].mp4

  • Each TV Show episode file should be stored in a set of folders as follows: ~/TV Shows/Show Name/Season/episodes (NOTE: For TV Shows, the folder structure is crucial.)

  • Music content should be stored as follows: ~/Music/Artist/Album/tracks

NOTE: Your root folder does not have to be housed under your home user directory.
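
If you want to sanity-check a library before pointing Plex at it, the naming guidelines above are easy to encode. A quick sketch follows; the file extensions and exact patterns here are my own assumptions for illustration, not Plex requirements:

```python
import re

# "Movie_Name (Release_Year).ext" -- year in parentheses before the extension
MOVIE_RE = re.compile(r"^.+ \((19|20)\d{2}\)\.(mp4|mkv|avi)$")
# "Show Name - sXXeYY.ext" -- season/episode tag before the extension
EPISODE_RE = re.compile(r"^.+ - s\d{2}e\d{2}\.(mp4|mkv|avi)$", re.IGNORECASE)

def looks_like_movie(filename: str) -> bool:
    """True if the filename follows the movie naming guideline."""
    return bool(MOVIE_RE.match(filename))

def looks_like_episode(filename: str) -> bool:
    """True if the filename follows the TV episode naming guideline."""
    return bool(EPISODE_RE.match(filename))

print(looks_like_movie("Metropolis (1927).mp4"))     # True
print(looks_like_episode("Some Show - s01e02.mp4"))  # True
print(looks_like_movie("metropolis.mp4"))            # False
```

Walking your media root with os.walk and flagging files that fail both checks is a cheap way to find items Plex will likely mismatch.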

In order to be able to stream, you have to create Libraries for each type of media. Here’s how:

  1. Open up the Plex web admin tool

  2. Click on the Home button

  3. Click the + button associated with PLEXSERVER (or whatever friendly name you gave your Plex server) in the left navigation

  4. Click the icon for the media type associated with the library to be added (We’ll add a music library for example)

  5. Give the library a name, select a language, and click NEXT (Figure 3)


  6. Locate the media folder and, once you’ve selected it, click NEXT

  7. Select Create a basic music library and click NEXT

  8. Scan through the presented options on the last page (the defaults work fine) and click ADD LIBRARY.

Figure 3: Naming a Plex media library.

That’s it. After a refresh, your media should show up (depending on how large the folder is, this could take some time).

Repeat this process for every type of media you want to add and your Plex streaming server is ready.

Using your server

Out of the box, you can point any desktop or laptop device on your network to the IP of the Plex server (in the form http://IP_ADDRESS:32400/web) and Plex will appear, ready to stream media (Figure 4).

Figure 4: Streaming media through the Plex web interface.

Connecting to your Plex streaming server from the mobile app is incredibly simple—you open the app, select your server from the dropdown, and your stream-able media will appear (Figure 5).

Figure 5: Plex running on a Verizon-branded Motorola Droid Turbo.

NOTE: Unless you pay the $4.99 license fee for the app, your media will be limited in playback (music will stop after 1 minute and all videos/images will be watermarked).

If you’re looking for one of the most feature-rich and well supported media streaming servers on the market, you can’t go wrong with Plex. Yes, there are plenty of other streaming servers available, but you’ll be hard-pressed to find one as robust and ready to serve.

Schneier on Security: Remotely Hacking a Car While It’s Driving

This post was syndicated from: Schneier on Security and was written by: schneier. Original post: at Schneier on Security

This is a big deal. Hackers can remotely hack the Uconnect system in cars just by knowing the car’s IP address. They can disable the brakes, turn on the AC, blast music, and disable the transmission:

The attack tools Miller and Valasek developed can remotely trigger more than the dashboard and transmission tricks they used against me on the highway. They demonstrated as much on the same day as my traumatic experience on I-64. After narrowly averting death by semi-trailer, I managed to roll the lame Jeep down an exit ramp, re-engaged the transmission by turning the ignition off and on, and found an empty lot where I could safely continue the experiment.

Miller and Valasek’s full arsenal includes functions that at lower speeds fully kill the engine, abruptly engage the brakes, or disable them altogether. The most disturbing maneuver came when they cut the Jeep’s brakes, leaving me frantically pumping the pedal as the 2-ton SUV slid uncontrollably into a ditch. The researchers say they’re working on perfecting their steering control — for now they can only hijack the wheel when the Jeep is in reverse. Their hack enables surveillance too: They can track a targeted Jeep’s GPS coordinates, measure its speed, and even drop pins on a map to trace its route.

In related news, there’s a Senate bill to improve car security standards. Honestly, I’m not sure our security technology is enough to prevent this sort of thing if the car’s controls are attached to the Internet.

Krebs on Security: Hacking Team Used Spammer Tricks to Resurrect Spy Network

This post was syndicated from: Krebs on Security and was written by: BrianKrebs. Original post: at Krebs on Security

Last week, hacktivists posted online 400 GB worth of internal emails, documents and other data stolen from Hacking Team, an Italian security firm that has earned the ire of privacy and civil liberties groups for selling spy software to governments worldwide. New analysis of the leaked Hacking Team emails suggests that in 2013 the company used techniques perfected by spammers to hijack Internet address space from a spammer-friendly Internet service provider in a bid to regain control over a spy network it apparently had set up for the Italian National Military Police.


Hacking Team is in the business of selling exploits that allow clients to secretly deploy spyware on targeted systems. In just the past week since the Hacking Team data was leaked, for example, Adobe has fixed two previously undocumented zero-day vulnerabilities in its Flash Player software that Hacking Team had sold to clients as spyware delivery mechanisms.

The spyware deployed by Hacking Team’s exploits is essentially a remote-access Trojan horse program designed to hoover up stored data, recorded communications, keystrokes, etc. from infected devices, giving the malware’s operator full control over victim machines.

Systems infested with Hacking Team’s malware are configured to periodically check for new instructions or updates at a server controlled by Hacking Team and/or its clients. This type of setup is very similar to the way spammers and cybercriminals design “botnets,” huge collections of hacked PCs that are harvested for valuable data and used for a variety of nefarious purposes.

No surprise, then, that Hacking Team placed its control servers in this case at an ISP that was heavily favored by spammers. Leaked Hacking Team emails show that in 2013, the company set up a malware control server for the Special Operations Group of the Italian National Military Police (INMP), an entity focused on investigating organized crime and terrorism. One or both of these organizations chose to position that control at Santrex, a notorious Web hosting provider that at the time served as a virtual haven for spammers and malicious software downloads.

But that decision backfired. As I documented in October 2013, Santrex unexpectedly shut down all of its servers, following a series of internal network issues and extensive downtime. Santrex made that decision after several months of incessant attacks, hacks and equipment failures at its facilities caused massive and costly problems for the ISP and its customers. The company’s connectivity problems essentially made it impossible for either Hacking Team or the INMP to maintain control over the machines infected with the spyware.

According to research published Sunday by OpenDNS Security Labs, around that same time the INMP and Hacking Team cooked up a plan to regain control over the Internet addresses abandoned by Santrex. The plan centered around a traffic redirection technique known as “BGP hijacking,” which involves one ISP fraudulently “announcing” to the rest of the world’s ISPs that it is in fact the rightful custodian of a dormant range of Internet addresses that it doesn’t actually have the right to control.

IP address hijacking is hardly a new phenomenon. Spammers sometimes hijack Internet address ranges that go unused for periods of time (see this story from 2014 and this piece I wrote in 2008 for The Washington Post for examples of spammers hijacking Internet space). Dormant or “unannounced” address ranges are ripe for abuse partly because of the way the global routing system works: Miscreants can “announce” to the rest of the Internet that their hosting facilities are the authorized location for given Internet addresses. If nothing or nobody objects to the change, the Internet address ranges fall into the hands of the hijacker.

Apparently nobody detected the BGP hijack at the time, and that action eventually allowed Hacking Team and its Italian government customer to reconnect with the Trojaned systems that once called home to their control server at Santrex. OpenDNS said it was able to review historic BGP records and verify the hijack, which at the time allowed Hacking Team and the INMP to migrate their malware control server to another network.

This case is interesting because it sheds new light on the potential dual use of cybercrime-friendly hosting providers. For example, law enforcement agencies have been known to allow malicious ISPs like Santrex to operate with impunity because the alternative — shutting the provider down or otherwise interfering with its operations — can interfere with the ability of investigators to gather sufficient evidence of wrongdoing by bad actors operating at those ISPs. Indeed, the notoriously bad and spammer-friendly ISPs McColo and Atrivo were perfect examples of this prior to their being ostracized and summarily shut down by the Internet community in 2008.

But this example shows that some Western law enforcement agencies may also seek to conceal their investigations by relying on the same techniques and hosting providers that are patronized by the very criminals they are investigating.

SANS Internet Storm Center, InfoCON: green: Detecting Random – Finding Algorithmically chosen DNS names (DGA), (Thu, Jul 9th)

This post was syndicated from: SANS Internet Storm Center, InfoCON: green and was written by: SANS Internet Storm Center, InfoCON: green. Original post: at SANS Internet Storm Center, InfoCON: green

Most normal user traffic communicates via a hostname and not an IP address. So looking at traffic communicating directly by IP with no associated DNS request is a good thing to do. Some attackers use DNS names for their communications. There is also malware, such as Skybot and the Styx exploit kit, that uses algorithmically chosen host names rather than IP addresses for its command-and-control channels. This malware uses what has been called DGA, or Domain Generation Algorithms, to create random-looking host names for its TLS command and control channel or to digitally sign its SSL certificates. These do not look like normal host names. A human being can easily pick them out of our logs and traffic, but it turns out to be a somewhat challenging thing to do in an automated process. Natural Language Processing or measuring the randomness don't seem to work very well. Here is a video that illustrates the problem and one possible approach to solving it.

One way you might try to solve this is with a tool called ent, a great Linux tool for measuring the entropy of its input. Feeding it a megabyte of random data reports close to the maximum of 8 bits per byte (Entropy = 7.999982 bits per byte), while a megabyte of a single repeated character sits at the other extreme:

[~]$ python -c "print 'A'*1000000" | ent
Entropy = 0.000021 bits per byte. — 0 = not random

So 8 is highly random and 0 is not random at all. Now measure some legitimate host names:

[~]$ echo google | ent
Entropy = 2.235926 bits per byte.
[~]$ echo clearing-house | ent
Entropy = 3.773557 bits per byte. — Valid hosts are in the 2 to 4 range

Google scores 2.23 and clearing-house scores 3.77. So it appears as though legitimate host names will be in the 2 to 4 range. Now check some known-malicious DGA host names:

[~]$ echo e6nbbzucq2zrhzqzf | ent
Entropy = 3.503258 bits per byte.
[~]$ echo sdfe3454hhdf | ent
Entropy = 3.085055 bits per byte. — Malicious hosts from Skybot and Styx malware are in the same range as valid hosts

That's no good. Known malicious host names are also in the 2 to 4 range. They score just about the same as normal host names. We need a different approach to this problem.
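
The figure ent reports is just the Shannon entropy of the byte distribution, which is easy to reproduce in Python if you want to experiment without the tool (note that echo appends a trailing newline, which is included in the numbers above):

```python
import math
from collections import Counter

def shannon_entropy(data: bytes) -> float:
    """Shannon entropy of the byte distribution, in bits per byte (as ent reports)."""
    if not data:
        return 0.0
    total = len(data)
    return -sum((n / total) * math.log2(n / total) for n in Counter(data).values())

print(round(shannon_entropy(b"google\n"), 6))             # 2.235926 -- matches ent
print(round(shannon_entropy(b"clearing-house\n"), 6))     # 3.773557
print(round(shannon_entropy(bytes(range(256)) * 10), 6))  # 8.0 -- maximally random
```

As the transcripts show, this measure simply cannot separate DGA names from legitimate ones: both land in the same 2-to-4 band.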

Normal readable English has some pairs of characters that appear more frequently than others. TH, QU and ER appear very frequently, but other pairs like WZ appear very rarely. Specifically, there is approximately a 40% chance that a T will be followed by an H. There is approximately a 97% chance that a Q will be followed by the letter U. There is approximately a 19% chance that E is followed by R. With regard to unlikely pairs, there is approximately a 0.004% chance that W will be followed by a Z. So here is the idea: let's analyze a bunch of text and figure out what normal looks like, then measure the host names against those tables. I'm making this script and a Windows executable version of this tool available for you to try out. Let me know how it works. Here is a look at how to use the tool.
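
To make the idea concrete, here is a minimal Python sketch of the same character-pair approach. This is my own simplified illustration; the actual freq.py tool adds case handling, weighting and table persistence on top of this:

```python
from collections import defaultdict

def build_table(corpus: str):
    """Count, for every letter, how often each following letter appears."""
    pair_counts = defaultdict(int)   # e.g. pair_counts["th"]
    first_counts = defaultdict(int)  # how often a letter starts any pair
    letters = [c for c in corpus.lower() if c.isalpha()]
    for a, b in zip(letters, letters[1:]):
        pair_counts[a + b] += 1
        first_counts[a] += 1
    return pair_counts, first_counts

def measure(name: str, pair_counts, first_counts) -> float:
    """Average probability (percent) of each adjacent letter pair in name."""
    letters = [c for c in name.lower() if c.isalpha()]
    if len(letters) < 2:
        return 0.0
    probs = [100.0 * pair_counts[a + b] / first_counts[a] if first_counts[a] else 0.0
             for a, b in zip(letters, letters[1:])]
    return sum(probs) / len(probs)

# Tiny stand-in corpus; real tables should be built from lots of normal text.
SAMPLE = ("normal readable english has some pairs of characters that appear "
          "more frequently than others the quick brown fox jumps over the lazy dog")
pairs, firsts = build_table(SAMPLE)
print(measure("other", pairs, firsts) > measure("zqxwv", pairs, firsts))  # True
```

Readable names are built from common pairs and score high; DGA-style names are full of pairs the table has never seen and score near zero.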

Step 1) You need a frequency table. I've shared two of them on my GitHub; if you want to use them, you can download them and skip to step 2.

1a) Create the table: I'm creating a table called custom.freq.

C:\freq>freq.exe --create custom.freq

1b) You can optionally turn ON case sensitivity if you want the frequency table to count uppercase letters and lowercase letters separately. Without this option the tool will convert everything to lowercase before counting character pairs.

C:\freq>freq.exe -t custom.freq

1c) Next, fill the frequency table with normal text. You might load it with known legitimate host names, like the Alexa top 1 million most commonly accessed websites. I will just load it up with famous works of literature.

C:\freq>for %i in (txtdocs\*.*) do freq.exe --normalfile %i custom.freq
C:\freq>freq.exe --normalfile txtdocs\center_earth custom.freq
C:\freq>freq.exe --normalfile txtdocs\defoe-robinson-103.txt custom.freq
C:\freq>freq.exe --normalfile txtdocs\dracula.txt custom.freq
C:\freq>freq.exe --normalfile txtdocs\freck10.txt custom.freq

Step 2) Measure badness!

Once the frequency table is filled with data, you can start to measure strings to see how probable they are according to our frequency tables.

C:\freq>freq.exe --measure google custom.freq
C:\freq>freq.exe --measure clearing-house custom.freq

So normal host names have a probability above 5 (at least these two and most others do). We will consider anything above 5 to be good for our tests.

C:\freq>freq.exe --measure asdfl213u1 custom.freq
C:\freq>freq.exe --measure po24sf92cxlk custom.freq

Our malicious hosts score less than 5, so 5 seems to be a pretty good benchmark. In my testing it seems to work pretty well for picking out these abnormal host names. But it isn't perfect. Nothing is. One problem is that very short host names and acronyms that are not in the source files you use to build your frequency tables will score below 5. For example, fbi and cia both come up below 5 when I just use classic literature to build my frequency tables. But I am not limited to classic literature. That leads us to step 3.

Step 3) Tune for your organization.

The real power of frequency tables comes when you tune them to match normal traffic for your network. The tool offers two options for this: --normal and --odd. --normal can be given a normal string, and it will update the frequency table with that string. Both --normal and --odd can be used with the --weight option to control how much influence the given string has on the probabilities in the frequency table. Its effectiveness is demonstrated by the accompanying YouTube video. Note that marking random host names as --odd is not a good strategy; it simply injects noise into the frequency table. Like everything else in security, identifying all the bad in the world is a losing proposition. Instead, focus on learning normal and identifying anomalies. So passing --normal cia --weight 10000 adds 10000 counts of the pair "ci" and the pair "ia" to the frequency table and increases the probability of "cia":

C:\freq>freq.exe --normal cia --weight 10000 custom.freq

The source code and a Windows Executable version of this program can be downloaded from here:

Tomorrow in my diary I will show you some other cool things you can do with this approach and how you can incorporate it into your own tools.

Follow me on twitter @MarkBaggett

Want to learn to use this code in your own scripts or build tools of your own? Join me for Python SEC573 in Las Vegas this September 14th! Click here for more information.

What do you think? Leave a comment.


SANS Internet Storm Center, InfoCON: green: BizCN gate actor changes from Fiesta to Nuclear exploit kit, (Mon, Jul 6th)

This post was syndicated from: SANS Internet Storm Center, InfoCON: green and was written by: SANS Internet Storm Center, InfoCON: green. Original post: at SANS Internet Storm Center, InfoCON: green


An actor using gates registered through BizCN recently switched from the Fiesta exploit kit to the Nuclear exploit kit (EK). The switch happened around the middle of last month, and we first noticed the change on 2015-06-15.

I started writing about this actor in 2014 [1, 2] and recently posted an ISC diary about it on 2015-04-28 [3]. I've been calling this group the "BizCN gate actor" because the domains used for the gates have all been registered through the Chinese registrar BizCN.

We collected traffic and malware samples related to this actor from Friday 2015-07-03 through Sunday 2015-07-05. This traffic has the following characteristics:

  • Compromised servers are usually (but not limited to) forum-style websites.
  • Gate domains have all been registered through the Chinese registrar BizCN using privacy protection.
  • The domains for Nuclear EK change every few hours and were registered through
  • Nuclear EK for this actor is on, which is an IP registered to Vultr, a hosting provider specializing in SSD cloud servers [4].
  • The payload occasionally changes and includes malware identified as Yakes [5], Boaxxe [6], and Kovter.

NOTE: For now, Kovter is relatively easy to spot, since it's the only malware I've noticed that updates the infected host's Flash Player [7].

Chain of events

During a full infection chain, the traffic follows a specific chain of events. The compromised website has malicious javascript injected into the page that points to a URL hosted on a BizCN-registered gate domain. The gate domain redirects traffic to Nuclear EK. If a Windows host running the web browser is vulnerable, Nuclear EK will infect it. Simply put, the chain of events is:

  • Compromised website
  • BizCN-registered gate domain
  • Nuclear EK

Lets take a closer look at how this happens.

Compromised website

Compromised websites are the first step in an infection chain.

In most cases, the malicious javascript will be injected on any page from the site, assuming you get to it from a search engine or other referrer.

BizCN-registered gate domain

The gate directs traffic from the compromised website to the EK. The HTTP GET request to the gate domain returns javascript. In my last diary discussing this actor [3], you could easily figure out the URL for the EK landing page.

Weve found at least four IP addresses hosting the BizCN-registered gate domain. They are:


If you have proxy logs or other records of your HTTP traffic, search for these IP addresses. If you find the referrers, you might discover other websites compromised by this actor.
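As a sketch of that kind of search, the following assumes a simple space-separated log format with the destination IP in the third field; both the format and the IP list (RFC 5737 documentation addresses) are placeholders to adapt to your own logs and indicators.

```python
# Placeholder gate IPs (RFC 5737 documentation addresses) -- substitute
# the actual indicators for this actor.
GATE_IPS = {"203.0.113.10", "203.0.113.11"}

def find_gate_hits(log_lines):
    """Return log lines whose destination IP (assumed third field)
    matches a known gate IP."""
    hits = []
    for line in log_lines:
        fields = line.split()
        if len(fields) > 2 and fields[2] in GATE_IPS:
            hits.append(line)
    return hits
```

The referrer field of any matching line should then point at a compromised website.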

Nuclear EK

Examples of infection traffic generated from 2015-07-03 through 2015-07-05 all show the same IP address hosting Nuclear EK. This IP address is registered to Vultr, a hosting provider specializing in SSD cloud servers [4].

Finally, Nuclear EK sends the malware payload.

Malware sent by this actor

During the three-day period, we infected ten hosts, saw two different Flash exploits, and retrieved five different malware payloads. Most of these payloads were Kovter (ad fraud malware).

Below are links to reports from for the individual pieces of malware:

Final words

It's usually difficult to generate a full chain of infection traffic from compromised websites associated with this BizCN gate actor. We often see HTTP GET requests to the gate domain return a 404 Not Found. In some cases, the gate domain might not appear in the traffic at all.

We believe the BizCN gate actor will continue to make changes as a way to evade detection. Fortunately, the ISC and other organizations do their best to track these actors, and we'll let you know if we discover any significant changes.

Examples of the traffic and malware can be found at:

As always, the zip file is password-protected with the standard password. If you don't know it, email and ask.

Brad Duncan
Security Researcher at Rackspace and ISC Handler
Blog: – Twitter: @malware_traffic



(c) SANS Internet Storm Center. Creative Commons Attribution-Noncommercial 3.0 United States License.

TorrentFreak: VPN Providers Respond To Allegations of Data Leakage

This post was syndicated from: TorrentFreak and was written by: Andy. Original post: at TorrentFreak

As Internet users seek to bypass censorship, boost privacy and achieve a level of anonymity, VPN services have stepped in with commercial solutions to assist with these aims. The uptake among consumers has been impressive.

Reviews of VPN services are commonplace and usually base their ratings on price and speed. At TorrentFreak we examine many services annually, but with a focus on privacy issues instead.

Now a team of researchers from universities in London and Rome has published a paper titled “A Glance through the VPN Looking Glass: IPv6 Leakage and DNS Hijacking in Commercial VPN clients” (pdf) after investigating 14 popular services on the market today.

“Our findings confirm the criticality of the current situation: many of these providers leak all, or a critical part of the user traffic in mildly adversarial environments. The reasons for these failings are diverse, not least the poorly defined, poorly explored nature of VPN usage, requirements and threat models,” the researchers write.

While noting that all providers are able to successfully send data through an encrypted tunnel, the paper claims that problems arise during the second stage of the VPN client’s operation: traffic redirection.

“The problem stems from the fact that routing tables are a resource that is concurrently managed by the operating system, which is unaware of the security requirements of the VPN client,” the researchers write.

This means that changes to the routing table (whether they are malicious or accidental) could result in traffic circumventing the VPN tunnel and leaking to other interfaces.

IPv6 VPN Traffic Leakage

“The vulnerability is driven by the fact that, whereas all VPN clients manipulate the IPv4 routing table, they tend to ignore the IPv6 routing table. No rules are added to redirect IPv6 traffic into the tunnel. This can result in all IPv6 traffic bypassing the VPN’s virtual interface,” the researchers explain.
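One quick way to gauge exposure is to ask the operating system which source address it would pick for outbound IPv6 traffic. The Python sketch below sends no packets; if it returns a global address that does not belong to the VPN's virtual interface while the tunnel is up, IPv6 traffic is likely bypassing it. This is an illustrative check, not part of the paper's methodology.

```python
import socket

def global_ipv6_source():
    """Return the local IPv6 address the OS would use for outbound
    traffic, or None if IPv6 is unroutable on this machine."""
    try:
        s = socket.socket(socket.AF_INET6, socket.SOCK_DGRAM)
        try:
            # connect() on a UDP socket only selects a route; nothing is sent.
            # 2001:db8::1 is an RFC 3849 documentation address.
            s.connect(("2001:db8::1", 53))
            return s.getsockname()[0]
        finally:
            s.close()
    except OSError:
        return None
```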


As illustrated by the chart above, the paper claims that all desktop clients (except for those provided by Private Internet Access, Mullvad and VyprVPN) leaked “the entirety” of IPv6 traffic, while all providers except Astrill were vulnerable to IPv6 DNS hijacking attacks.

The paper was covered yesterday by The Register with the scary-sounding title “VPNs are so insecure you might as well wear a KICK ME sign” but without any input from the providers in question. We decided to contact a few of them for their take on the paper.

PureVPN told TF that they “take the security of our customers very seriously and thus, a dedicated team has been assigned to look into the matter.” Other providers had already received advance notice of the paper.

“At least for AirVPN the paper is outdated,” AirVPN told TorrentFreak.

“We think that the researchers, who kindly sent the paper to us many months in advance and were warned about that, had no time to fix [the paper] before publication. There is nothing to worry about for AirVPN.”

“Current topology allows us to have the same IP address for VPN DNS server and VPN gateway, solving the vulnerability at its roots, months before the publication of the paper.”

TorGuard also knew of the whitepaper and has been working to address the issues it raises. The company adds that while The Register's “the sky is falling” coverage yesterday was “deceptive”, the study does illustrate the need for providers to stay vigilant. Specifically, TorGuard says that it has launched a new IPv6 leak prevention feature on Windows, Mac and Linux.

“Today we have released a new feature that will address this issue by giving users the option of capturing ALL IPv6 traffic and forcing it through the OpenVPN tunnel. During our testing this method proved highly effective in blocking potential IPv6 leaks, even in circumstances when these services were active or in use on the client’s machine,” the company reports.

On the DNS hijacking issue, TorGuard provides the following detail.

“It is important to note that the potential for this exploit only exists (in theory) if you are connected to a compromised WiFi network in which the attacker has gained full control of the router. If that is the case, DNS hijacking is only the beginning of one’s worries,” TorGuard notes.

“During our own testing of TorGuard’s OpenVPN app, we were unable to reproduce this when using private DNS servers because any DNS queries can only be accessed from within the tunnel itself.”

Noting that they released IPv6 Leak Protection in October 2013, leading VPN provider Private Internet Access told TorrentFreak that they feel the paper is lacking.

“While the article purported to be an unbiased and intricate look into the security offered by consumer VPN services, it was greatly flawed since the inputs or observations made by the researchers were inaccurate,” PIA said.

“While a scientific theory or scientific test can be proven by a logical formula or algorithm, if the observed or collected data is incorrect, the conclusion will be in error as well.”

PIA criticizes the report on a number of fronts, including incorrect claims about its DNS resolver.

“Contrary to the report, we have our own private DNS daemon running on the Choopa network. Additionally, the DNS server that is reported, while it is a real DNS resolver, is not the actual DNS that your system will use when connected to the VPN,” the company explains.

“Your DNS requests are handled by a local DNS resolver running on the VPN gateway you are connected to. This can be easily verified through a site like Additionally… we do not allow our DNS servers to report IPv6 (AAAA records) results. We’re very serious about security and privacy.”

Finally, in a comprehensive response (now published here) in which it notes that its Windows client is safe, PIA commends the researchers for documenting the DNS hijacking method but criticizes how it was presented to the VPN community.

“The DNS Hijacking that the author describes [..] is something that has recently been brought to light by these researchers and we commend them on their discovery. Proper reporting routines would have been great, however. Shamefully, this is improper security disclosure,” PIA adds.

While non-IPv6 users have nothing to fear, all users looking for a simple fix can disable IPv6 by following instructions for Windows, Linux and Mac.

Source: TorrentFreak, for the latest info on copyright, file-sharing, torrent sites and the best VPN services.

TorrentFreak: Cloudflare Reveals Pirate Site Locations in an Instant

This post was syndicated from: TorrentFreak and was written by: Andy. Original post: at TorrentFreak

Five years ago, discovering the physical location of almost any ‘pirate’ site was achievable in a matter of seconds using widely available online tools. All one needed was an IP address and a simple lookup.

As sites became more aware of the need for security, cloaking efforts became more commonplace. Smaller sites, private trackers in particular, began using tunnels and proxies to hide their true locations, hampering anti-piracy efforts in the process. Later these kinds of techniques were used on even the largest sites, The Pirate Bay for example.

In the meantime the services of a rising company called Cloudflare had begun to pique the interest of security-minded site owners. Designed to optimize the performance of sites while blocking various kinds of abuse, Cloudflare-enabled sites get to exchange their regular IP address for one operated by Cloudflare, a neat side-effect for a site wishing to remain in the shadows.


Today, Cloudflare ‘protects’ dozens – perhaps hundreds – of ‘pirate’ sites. Some use Cloudflare for its anti-DDoS capabilities but all get to hide their real IP addresses from copyright holders. This has the potential to reduce the number of DMCA notices and other complaints filtering through to their real hosts.

Surprisingly, however, belief persists in some quarters that Cloudflare is an impenetrable shield that allows ‘pirate’ sites to operate completely unhindered. In fact, nothing could be further from the truth.

In recent days a perfect example appeared in the shape of Sparvar (Sparrows), a Swedish torrent site that has been regularly hounded by anti-piracy outfit Rights Alliance. Sometime after moving to Canada in 2014, Sparvar began using the services of Cloudflare, which effectively cloaked the site’s true location from the world. Well, that was the theory.

According to an announcement from the site, Rights Alliance lawyer Henrik Pontén recently approached Cloudflare in an effort to uncover Sparvar’s email address and the true location of its servers. The discussions between Rights Alliance and Cloudflare were seen by Sparvar, which set alarm bells ringing.

“After seeing the conversations between Rights Alliance and server providers / CloudFlare we urge staff of other Swedish trackers to consider whether the risk they’re taking is really worth it,” site staff said.

“All that is required is an email to CloudFlare and then [anti-piracy companies] will have your IP address.”

As a result of this reveal, Sparvar is now offline. No site or user data has been compromised but it appears that the site felt it best to close down, at least for now.


This obviously upset users of the site, some of whom emailed TorrentFreak to express disappointment at the way the situation was handled by Cloudflare. However, Cloudflare’s terms and conditions should leave no doubt as to how the company handles these kinds of complaints.

In one clause, in which Cloudflare reserves the right to investigate not only sites but also their operators, it is made crystal clear what information may be given up to third parties.

“You acknowledge that CloudFlare may, at its own discretion, reveal the information about your web server to alleged copyright holders or other complainants who have filed complaints with us,” the company writes.

The situation is further underlined when Cloudflare receives DMCA notices from copyright holders and forwards an alert to a site using its services.

“We have provided the name of your hosting provider to the reporter. Additionally, we have forwarded this complaint to your hosting provider as well,” the site’s abuse team regularly advises.

While Cloudflare itself tends not to take direct action against sites it receives complaints about, problems can mount if a copyright holder is persistent enough. Just recently Cloudflare was ordered by a U.S. court to discontinue services to a Grooveshark replacement. That site is yet to reappear.

Finally, Sparvar staff have some parting advice for other site operators hoping to use Cloudflare services without being uncovered.

“We hope that you do not have your servers directly behind CloudFlare which means a big security risk. We hope and believe that you are also running some kind of reverse proxy,” the site concludes.

At the time of publication, Henrik Pontén of Rights Alliance had not responded to our requests for comment.

Source: TorrentFreak, for the latest info on copyright, file-sharing, torrent sites and the best VPN services.

The Hacker Factor Blog: Bot Spotting

This post was syndicated from: The Hacker Factor Blog and was written by: The Hacker Factor Blog. Original post: at The Hacker Factor Blog

There’s a common problem that impacts every web service out there: bots. The problem isn’t that web bots crawl web sites. Rather, the problem is that there are so many bots, and many bots behave poorly. A solid 40% of traffic to my blog comes from various kinds of bots. It used to be over 90%, but I took steps to mitigate bot traffic.

Some bots just want to crawl the entire web site. The poorly behaved bots (including GoogleBot and MSNBOT) sometimes get stuck in virtual directories and try to traverse indefinitely.

Other bots just want to hit the same URL over and over and over. One bot (from China) hit the same URL over a thousand times in one hour. I ended up creating a rule to detect and block rapid repeaters. This same rule stops RSS updaters that try to refresh more than a few times per hour. (I update this blog 1-2 times a week. There is no reason for an RSS system to request an update every 15 seconds.)
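A rule like that can be sketched as a small sliding-window counter keyed on client address and URL; the limits below are illustrative, not the exact thresholds used on this blog.

```python
import time
from collections import defaultdict, deque

class RepeatLimiter:
    """Block clients that request the same URL more than `limit` times
    within `window` seconds -- a sketch of a rapid-repeater rule."""
    def __init__(self, limit=10, window=3600.0):
        self.limit = limit
        self.window = window
        self.hits = defaultdict(deque)  # (ip, url) -> timestamps

    def allow(self, client_ip, url, now=None):
        now = time.monotonic() if now is None else now
        q = self.hits[(client_ip, url)]
        # Drop timestamps that have aged out of the window.
        while q and now - q[0] > self.window:
            q.popleft()
        if len(q) >= self.limit:
            return False  # rapid repeater: block
        q.append(now)
        return True
```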

By blocking these bots and automated abuses, I cut my network traffic usage by more than half, freed up CPU cycles, and made real user requests much more responsive.

Finding Bots

There are a couple of different methods for finding bots.

The first method evaluates the user-agent string sent by the bot. If you see strings like “AhrefsBot”, “PostDesk-ReadBot”, or “OpenHoseBot”, then it is likely a bot crawling the web site. Of course, not every bot uses the term ‘bot’ in its name. There’s crawler4j, Digg Deeper, Jamie’s Spider, and many more. But this type of heuristic does make for a good first guess. If you see “googlebot”, then you can probably assume that the request is coming from Google. (Sure, some bots lie and most browsers can be configured to display an arbitrary user-agent string. But as a first-pass heuristic, this is a pretty good one.)
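That first-pass heuristic amounts to a substring check over a handful of markers; the marker list below is illustrative and, as noted, it both misses bots that avoid the word ‘bot’ and can be fooled by forged strings.

```python
# Illustrative marker list; extend it with names like "crawler4j" as you
# encounter them in your logs.
BOT_MARKERS = ("bot", "crawler", "spider", "slurp")

def looks_like_bot(user_agent):
    """First-pass heuristic only: UA strings can be forged, and not
    every crawler identifies itself with one of these markers."""
    ua = user_agent.lower()
    return any(marker in ua for marker in BOT_MARKERS)
```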

The second option is to filter by source network address. For example, Baiduspider and the Internet Archive’s archive.org_bot each come from very specific network ranges. If I see a request coming from –, then I can be pretty confident that it is
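Python's stdlib `ipaddress` module makes that range check straightforward. The network below is an RFC 5737 documentation range standing in for whatever ranges you have actually observed for a given crawler.

```python
import ipaddress

# Placeholder range (RFC 5737) -- substitute the ranges you have
# actually recorded for a given bot.
KNOWN_BOT_NETS = [ipaddress.ip_network("198.51.100.0/24")]

def from_known_bot_net(addr):
    """True if the client address falls inside a recorded bot range."""
    ip = ipaddress.ip_address(addr)
    return any(ip in net for net in KNOWN_BOT_NETS)
```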

More Complex

Of course, if there is a way to make a situation more complicated, then you can be certain that Google will find it… and Microsoft will then try to out-complex Google.

For example, Google does not use just one type of bot. The main crawler is Googlebot, but there’s also AdsBot-Google, Googlebot-Mobile, and AppEngine-Google. Unless you look for all of these different strings, you might end up with a confused Google web crawler infinitely traversing your site. (And I’m not convinced that these are all of the different types of bots by Google.) By the same means, Microsoft uses many different bot names. There’s “msnbot”, “bingbot”, “BingPreview”, and others. Unless you have a long list of Microsoft strings, you are unlikely to find them all.

Another approach is to maintain a permitted list of user-agent strings. But this leads to two big problems. First, most devices have different user-agent strings. You are unlikely to find every device that should be white-listed. The other problem is that many bots like to pretend to be other devices. Here are two user-agent strings, where BingBot pretends to be an iPhone running iOS 7, and Googlebot impersonates an iPhone running iOS 6:

Mozilla/5.0 (iPhone; CPU iPhone OS 7_0 like Mac OS X) AppleWebKit/537.51.1 (KHTML, like Gecko) Version/7.0 Mobile/11A465 Safari/9537.53 (compatible; bingbot/2.0; +

Mozilla/5.0 (iPhone; CPU iPhone OS 6_0 like Mac OS X) AppleWebKit/536.26 (KHTML, like Gecko) Version/6.0 Mobile/10A5376e Safari/8536.25 (compatible; Googlebot/2.1; +

(Yes, Google and Microsoft bots frequently impersonate Apple devices.) If you manage to white-list Apple iPhones, then you’ll be letting these bots in.

Adding to the problem, both Microsoft and Google refuse to publish a list of addresses used by their bots. As Microsoft stated:

Bing does not publish a list of IP addresses or ranges from which we crawl the Internet. The reason is simple: the IP addresses or ranges we use can change any time, so responding to requests differently based on a hardcoded list is not a recommended approach and may cause problems down the line.

This refusal to publish is very similar to Google’s policy:

Google doesn’t post a public list of IP addresses for webmasters to whitelist. This is because these IP address ranges can change, causing problems for any webmasters who have hard coded them. The best way to identify accesses by Googlebot is to use the user-agent (Googlebot).

Google’s advice to check the user-agent string for “Googlebot” is clearly not enough since it will miss AdsBot-Google and others. Microsoft gives similar advice, saying to look for a user-agent string containing “bingbot”. But that will miss msnbot, BingPreview, and other Microsoft bots.

Both Google and Microsoft do offer an alternative… They both suggest doing a reverse hostname lookup. This is where you identify the hostname from the network address. (As a web service, you always know the network address used by the client.) If the client’s network address resolves to “a name ending in”, then Microsoft says it is one of their bots. Google says to look for “” in the hostname. However, both of these options have some significant limitations. For example:

  • Resale and Fakes. Google sells their search engine solutions to various organizations. I spotted the Brazilian government running their own Googlebot service. The user-agent clearly says Googlebot and the crawling acted like Googlebot. However, the associated hostname ended with “”. Then again, these non-Google Googlebots may actually be fake Googlebots… the only real way to tell is by doing a hostname lookup.

  • Wrong bot. I caught accessing my site. It resolves to “”, so this does not match Google’s suggested “” domain. In this case, the user-agent string identified the bot as “Google-SearchByImage”.
  • Speed. Let’s assume that these companies do return the correct hostname when doing a reverse lookup for one of their bots. A single DNS query is relatively fast. However, automating the lookup for each incoming query will add processing time. This will result in a measurable delay for even a moderately high number of requests.
  • Quota. Most DNS servers consider more than a few dozen rapid lookups to be a network attack. This is why most include rate limiting features. If I want to stop every Microsoft and Google bot from getting caught in virtual directory mirroring loops, then I will need to look up every incoming query in order to see if it is actually from Google or Microsoft. This means querying DNS hundreds of times per hour. I am certain to trigger the rate limiting thresholds and become blocked from querying DNS.
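The reverse-then-forward check that both vendors suggest can be sketched as follows; the hostname suffixes are illustrative, and the resolver hooks are injectable so the logic can be exercised without live DNS.

```python
import socket

# Illustrative suffixes; extend per vendor documentation.
BOT_SUFFIXES = (".googlebot.com", ".google.com", ".search.msn.com")

def verify_bot(addr, reverse=None, forward=None):
    """Forward-confirmed reverse DNS: reverse-resolve the client
    address, check the hostname suffix, then forward-resolve that name
    and confirm it maps back to the same address."""
    reverse = reverse or (lambda a: socket.gethostbyaddr(a)[0])
    forward = forward or socket.gethostbyname
    try:
        host = reverse(addr).lower().rstrip(".")
    except OSError:
        return False
    if not host.endswith(BOT_SUFFIXES):
        return False
    try:
        return forward(host) == addr
    except OSError:
        return False
```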

Requiring web services to perform hundreds of DNS queries in order to identify real bots is impractical and self-restricting.
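Caching softens both the speed and quota problems: each distinct client address then costs at most one verification, however many requests it makes. In this sketch, `expensive_dns_check` is a stand-in for a real reverse/forward lookup.

```python
from functools import lru_cache

lookups = {"count": 0}

def expensive_dns_check(addr):
    # Stand-in for a real reverse/forward DNS verification.
    lookups["count"] += 1
    return addr.startswith("192.0.2.")  # placeholder "bot" range (RFC 5737)

@lru_cache(maxsize=4096)
def cached_verdict(addr):
    """One DNS verification per distinct client address, keeping the
    query rate well under resolver rate limits."""
    return expensive_dns_check(addr)
```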

Arbitrary Limitations

What I find really ironic is that Google and Microsoft happily publish their list of cloud service addresses. Google documents them in DNS TXT records. Retrieving this list of address ranges requires less than a dozen DNS queries, and not one per address. Similarly, Microsoft posts their cloud addresses on a web site. Yet, neither company will identify the subnets where their bots come from. While there is no standard method for publishing these lists, at least they are documented and published somewhere.

I actually think there is an ulterior motive and “IP address ranges can change” is nothing more than an excuse. Both Microsoft and Google make a significant income from their search engine technologies. If they published their network ranges, then many sites would either block access or offer alternate content. In effect, these companies just want to make their bots more difficult to detect and block.

The downside from not having an official listing is that many third-party sites have reverse-engineered the subnets used by these bots.,, and others have published lists of subnets used by common web bots. Unfortunately, the quality of these lists varies dramatically. Many of them have not been maintained and list old addresses. And most of them list large subnets when only small subnets are used. For example, says that MSNBOT uses the range – However, I have only recorded MSNBOT coming from – (a significantly smaller network range).

Using excessively large ranges can lead to big problems. I had been using to identify bots. However, I recently saw a (human) user with a Windows Phone trying to access my site from one of the ranges associated with MSNBOT. The user saw my automated “no bots allowed” message. Using Microsoft’s documented reverse hostname check, I determined that the list had identified an overly large network range. I have since decided to use tighter network ranges, but I may still be over-blocking some subnets.

Managing Bots

If bots worked politely, then this wouldn’t be a problem. Unfortunately, many bots come in fast and furious. They can quickly suck up bandwidth and drive up the CPU load. (And if you’re on Amazon Cloud, where they charge for every network bit and every CPU cycle, then bots can result in real money being spent.)

There are a few options for mitigating bot abuse. Having a robots.txt exclusion file is a good start. Google and Microsoft seem to obey it, but other bots, like Getty’s PicScout, do not even bother to retrieve the file. And some attack bots use the robots.txt listing as starting points for vulnerability scans.
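For comparison, the well-behaved crawler's side of the bargain is simple; Python's stdlib `urllib.robotparser` shows what obeying the file looks like (the rules here are parsed from a literal rather than fetched over the network).

```python
from urllib import robotparser

# Parse an example robots.txt from a literal instead of fetching it.
rp = robotparser.RobotFileParser()
rp.parse("""
User-agent: *
Disallow: /private/
""".splitlines())

# A polite crawler checks before every fetch.
allowed = rp.can_fetch("ExampleBot/1.0", "https://example.com/public/page")
blocked = rp.can_fetch("ExampleBot/1.0", "https://example.com/private/data")
```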

Both Google and Microsoft have webmaster tools where you can claim ownership of your domain and specify scan rates. But even these are not consistently used. I have seen Microsoft ignore the scan rate settings and flood my server with requests.

There are other little tricks that may alter web bot behavior. For example, you might add “nofollow” or “noindex” meta attributes in the HTML code, or generate an HTTP “X-Robots-Tag: noindex” response. However, none of these other options are widely supported.

Even profiling web actions may not be enough to identify a bot. Real humans with web browsers will retrieve my web page style sheet (CSS) and associated dependencies (images, javascript, etc.), while bots usually do not access support links. However, even this may be difficult to track. For example, if the user reads my blog via the RSS feed, then they may not retrieve my CSS file. If they obey my web page cache settings, then they should not retrieve the CSS file every time. And if they use a corporate proxy network, then one web request may use one exit proxy (one network address), and a subsequent query may use a different proxy (a different network address). According to my logs, I have seen employees at Amazon request my web page from one address and the style sheet from a different address as they rotate through corporate proxies. (I know they were employees because I was on the phone with them at the time. “Now go to this URL on my web site. Wait… your network address changed!”)
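With all of those caveats, the asset-fetch heuristic can still be sketched as a weak signal: flag addresses that requested pages but never any support files. The log shape and extension list below are assumptions for illustration; RSS readers, caching, and rotating proxies will all produce false positives.

```python
def flag_no_asset_clients(log, asset_exts=(".css", ".js", ".png")):
    """Flag client addresses that requested pages but never any support
    assets. A weak bot signal only, for the reasons given above."""
    pages, assets = set(), set()
    for ip, path in log:
        (assets if path.endswith(asset_exts) else pages).add(ip)
    return pages - assets
```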

I am still looking for other options to identify and mitigate web bots. A lot of them get stuck in my blog because the blog software has multiple ways to identify the same blog entry. If I could better identify bots, then I would have them stick to the permalinks rather than trying to index entries by page numbers that change whenever I post a new blog entry.

Pid Eins: The new sd-bus API of systemd

This post was syndicated from: Pid Eins and was written by: Lennart Poettering. Original post: at Pid Eins

With the new v221 release of systemd we are declaring the sd-bus API
shipped with systemd stable. sd-bus is our minimal D-Bus C library,
supporting as back-ends both classic socket-based D-Bus and
kdbus. The library has been part of systemd for a while, but has only
been used internally, since we wanted to have the liberty to still
make API changes without affecting external consumers of the
library. However, now we are confident enough to commit to a stable
API for it, starting with v221.

In this blog story I hope to provide you with a quick overview on
sd-bus, a short reiteration on D-Bus and its concepts, as well as a
few simple examples how to write D-Bus clients and services with it.

What is D-Bus again?

Let’s start with a quick reminder what
D-Bus actually is: it’s a
powerful, generic IPC system for Linux and other operating systems. It
knows concepts like buses, objects, interfaces, methods, signals,
properties. It provides you with fine-grained access control, a rich
type system, discoverability, introspection, monitoring, reliable
multicasting, service activation, file descriptor passing, and
more. There are bindings for numerous programming languages that are
used on Linux.

D-Bus has been a core component of Linux systems for more than 10
years. It is certainly the most widely established high-level local
IPC system on Linux. Since systemd’s inception it has been the IPC
system it exposes its interfaces on. And even before systemd, it was
the IPC system Upstart used to expose its interfaces. It is used by
GNOME, by KDE and by a variety of system components.

D-Bus refers to both a specification and a reference implementation. The
reference implementation provides both a bus server component, as well
as a client library. While there are multiple other, popular
reimplementations of the client library – for both C and other
programming languages –, the only commonly used server side is the
one from the reference implementation. (However, the kdbus project is
working on providing an alternative to this server implementation as a
kernel component.)

D-Bus is mostly used as local IPC, on top of AF_UNIX sockets. However,
the protocol may be used on top of TCP/IP as well. It does not
natively support encryption, hence using D-Bus directly on TCP is
usually not a good idea. It is possible to combine D-Bus with a
transport like ssh in order to secure it. systemd uses this to make
many of its APIs accessible remotely.

A frequently asked question about D-Bus is why it exists at all,
given that AF_UNIX sockets and FIFOs already exist on UNIX and have
been used for a long time successfully. To answer this question let’s
make a comparison with popular web technology of today: what
AF_UNIX/FIFOs are to D-Bus, TCP is to HTTP/REST. While AF_UNIX
sockets/FIFOs only shovel raw bytes between processes, D-Bus defines
actual message encoding and adds concepts like method call
transactions, an object system, security mechanisms, multicasting and
more.

From our 10+ years of experience with D-Bus we know today that while there
are some areas where we can improve things (and we are working on
that, both with kdbus and sd-bus), it generally appears to be a very
well designed system, that stood the test of time, aged well and is
widely established. Today, if we’d sit down and design a completely
new IPC system incorporating all the experience and knowledge we
gained with D-Bus, I am sure the result would be very close to what
D-Bus already is.

Or in short: D-Bus is great. If you hack on a Linux project and need a
local IPC, it should be your first choice. Not only because D-Bus is
well designed, but also because there aren’t many alternatives that
can cover similar functionality.

Where does sd-bus fit in?

Let’s discuss why sd-bus exists, how it compares with the other
existing C D-Bus libraries and why it might be a library to consider
for your project.

For C, there are two established, popular D-Bus libraries: libdbus, as
it is shipped in the reference implementation of D-Bus, as well as
GDBus, a component of GLib, the low-level tool library of GNOME.

Of the two libdbus is the much older one, as it was written at the
time the specification was put together. The library was written with
a focus on being portable and to be useful as back-end for higher-level
language bindings. Both of these goals required the API to be very
generic, resulting in a relatively baroque, hard-to-use API that lacks
the bits that make it easy and fun to use from C. It provides the
building blocks, but few tools to actually make it straightforward to
build a house from them. On the other hand, the library is suitable
for most use-cases (for example, it is OOM-safe making it suitable for
writing lowest level system software), and is portable to operating
systems like Windows or more exotic UNIXes.

GDBus is a much newer implementation. It has been written after considerable
experience with using a GLib/GObject wrapper around libdbus. GDBus is
implemented from scratch, shares no code with libdbus. Its design
differs substantially from libdbus, it contains code generators to
make it specifically easy to expose GObject objects on the bus, or
talking to D-Bus objects as GObject objects. It translates D-Bus data
types to GVariant, which is GLib’s powerful data serialization
format. If you are used to GLib-style programming then you’ll feel
right at home, hacking D-Bus services and clients with it is a lot
simpler than using libdbus.

With sd-bus we now provide a third implementation, sharing no code
with either libdbus or GDBus. For us, the focus was on providing kind
of a middle ground between libdbus and GDBus: a low-level C library
that actually is fun to work with, that has enough syntactic sugar to
make it easy to write clients and services with, but on the other hand
is more low-level than GDBus/GLib/GObject/GVariant. To be able to use
it in systemd’s various system-level components it needed to be
OOM-safe and minimal. Another major point we wanted to focus on was
supporting a kdbus back-end right from the beginning, in addition to
the socket transport of the original D-Bus specification (“dbus1”). In
fact, we wanted to design the library closer to kdbus’ semantics than
to dbus1’s, wherever they are different, but still cover both
transports nicely. In contrast to libdbus or GDBus portability is not
a priority for sd-bus, instead we try to make the best of the Linux
platform and expose specific Linux concepts wherever that is
beneficial. Finally, performance was also an issue (though a secondary
one): neither libdbus nor GDBus will win any speed records. We wanted
to improve on performance (throughput and latency) — but simplicity
and correctness are more important to us. We believe the result of our
work delivers our goals quite nicely: the library is fun to use,
supports kdbus and sockets as back-end, is relatively minimal, and the
performance is substantially better than both libdbus and GDBus.

To decide which of the three APIs to use for your C project, here are
short guidelines:

  • If you hack on a GLib/GObject project, GDBus is definitely your
    first choice.

  • If portability to non-Linux kernels — including Windows, Mac OS and
    other UNIXes — is important to you, use either GDBus (which more or
    less means buying into GLib/GObject) or libdbus (which requires a
    lot of manual work).

  • Otherwise, sd-bus would be my recommended choice.

(I am not covering C++ specifically here; this is all about plain C
only. But do note: if you use Qt, then QtDBus is the D-Bus API of
choice, being a wrapper around libdbus.)

Introduction to D-Bus Concepts

To the uninitiated D-Bus usually appears to be a relatively opaque
technology. It uses lots of concepts that appear unnecessarily complex
and redundant on first sight. But actually, they make a lot of
sense. Let’s have a look:

  • A bus is where you look for IPC services. There are usually two
    kinds of buses: a system bus, of which there’s exactly one per
    system, and which is where you’d look for system services; and a
    user bus, of which there’s one per user, and which is where you’d
    look for user services, like the address book service or the mail
    program. (Originally, the user bus was actually a session bus, so
    that you got multiple of them if you logged in many times as the
    same user, and on most setups it still is; but we are working on
    moving things to a true user bus, of which there is only one per
    user on a system, regardless of how many times that user happens to
    log in.)

  • A service is a program that offers some IPC API on a bus. A
    service is identified by a name in reverse domain name
    notation. Thus, the org.freedesktop.NetworkManager service on the
    system bus is where NetworkManager’s APIs are available and
    org.freedesktop.login1 on the system bus is where
    systemd-logind's APIs are exposed.

  • A client is a program that makes use of some IPC API on a bus. It
    talks to a service, monitors it and generally doesn’t provide any
    services on its own. That said, lines are blurry and many services
    are also clients to other services. Frequently the term peer is
    used as a generalization to refer to either a service or a client.

  • An object path is an identifier for an object on a specific
    service. In a way this is comparable to a C pointer, since that’s
    how you generally reference a C object, if you hack object-oriented
    programs in C. However, C pointers are just memory addresses, and
    passing memory addresses around to other processes would make
    little sense, since they of course refer to the address space of
    the service; the client couldn't make sense of them. Thus, the D-Bus
    designers came up with the object path concept, which is just a
    string that looks like a file system path. Example:
    /org/freedesktop/login1 is the object path of the ‘manager’
    object of the org.freedesktop.login1 service (which, as we
    remember from above, is still the service systemd-logind
    exposes). Because object paths are structured like file system
    paths they can be neatly arranged in a tree, so that you end up
    with a veritable tree of objects. For example, you'll find all user
    sessions systemd-logind manages below the
    /org/freedesktop/login1/session sub-tree, with individual sessions
    called /org/freedesktop/login1/session/_55 and so on. How services
    precisely label their objects and arrange them in a tree is
    completely up to the developers of the services.

  • Each object that is identified by an object path has one or more
    interfaces. An interface is a collection of signals, methods, and
    properties (collectively called members), that belong
    together. The concept of a D-Bus interface is actually pretty
    much identical to what you know from programming languages such as
    Java, which also know an interface concept. Which interfaces an
    object implements is up to the developers of the service. Interface
    names are in reverse domain name notation, much like service
    names. (Yes, that’s admittedly confusing, in particular since it’s
    pretty common for simpler services to reuse the service name string
    also as an interface name.) A couple of interfaces are standardized
    though and you’ll find them available on many of the objects
    offered by the various services. Specifically, those are
    org.freedesktop.DBus.Introspectable, org.freedesktop.DBus.Peer
    and org.freedesktop.DBus.Properties.

  • An interface can contain methods. The word “method” is more or
    less just a fancy word for “function”, and is a term used pretty
    much the same way in object-oriented languages such as Java. The
    most common interaction between D-Bus peers is that one peer
    invokes one of these methods on another peer and gets a reply. A
    D-Bus method takes a couple of parameters, and returns others. The
    parameters are transmitted in a type-safe way, and the type
    information is included in the introspection data you can query
    from each object. Usually, method names (and the other member
    types) follow a CamelCase syntax. For example, systemd-logind
    exposes an ActivateSession method on the
    org.freedesktop.login1.Manager interface that is available on the
    /org/freedesktop/login1 object of the org.freedesktop.login1
    service.

  • A signature describes a set of parameters a function (or signal,
    property, see below) takes or returns. It’s a series of characters
    that each encode one parameter by its type. The set of types
    available is pretty powerful. For example, there are simpler types
    like s for string, or u for 32-bit unsigned integer, but also complex
    types such as as for an array of strings or a(sb) for an array
    of structures consisting of one string and one boolean each. See
    the D-Bus specification
    for the full explanation of the type system. The
    ActivateSession method mentioned above takes a single string as
    parameter (the parameter signature is hence s), and returns
    nothing (the return signature is hence the empty string). Of
    course, the signature can get a lot more complex, see below for
    more examples.

  • A signal is another member type that the D-Bus object system
    knows. Much like a method it has a signature. However, they serve
    different purposes. While in a method call a single client issues a
    request on a single service, and that service sends back a response
    to the client, signals are for general notification of
    peers. Services send them out when they want to tell one or more
    peers on the bus that something happened or changed. In contrast to
    method calls and their replies they are hence usually broadcast
    over a bus. While method calls/replies are used for duplex
    one-to-one communication, signals are usually used for simplex
    one-to-many communication (note however that that’s not a
    requirement, they can also be used one-to-one). Example:
    systemd-logind broadcasts a SessionNew signal from its manager
    object each time a user logs in, and a SessionRemoved signal
    every time a user logs out.

  • A property is the third member type that the D-Bus object system
    knows. It’s similar to the property concept known by languages like
    C#. Properties also have a signature, and are more or less just
    variables that an object exposes, that can be read or altered by
    clients. Example: systemd-logind exposes a property Docked of
    the signature b (a boolean). It reflects whether systemd-logind
    thinks the system is currently in a docking station of some form
    (only applies to laptops …).
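Since signature strings are so central, it may help to see how little machinery their grammar actually needs. Here is a minimal sketch of my own (not part of sd-bus, and it skips some spec rules, e.g. that dict-entry keys must be basic types) that validates a signature as a sequence of complete types:

```c
#include <stddef.h>

/* Consume one complete D-Bus type starting at sig[*i].
 * Returns 1 on success (advancing *i past the type), 0 on error. */
static int one_type(const char *sig, size_t *i) {
        switch (sig[*i]) {
        case 'y': case 'b': case 'n': case 'q': case 'i': case 'u':
        case 'x': case 't': case 'd': case 's': case 'o': case 'g':
        case 'v': case 'h':
                (*i)++;                 /* a single-character type code */
                return 1;
        case 'a':                       /* array: 'a' followed by element type */
                (*i)++;
                if (sig[*i] == '{') {   /* dict entry: a{KV} */
                        (*i)++;
                        if (!one_type(sig, i) || !one_type(sig, i) || sig[*i] != '}')
                                return 0;
                        (*i)++;
                        return 1;
                }
                return one_type(sig, i);
        case '(':                       /* struct: one or more types in parens */
                (*i)++;
                if (sig[*i] == ')')
                        return 0;       /* empty structs are not allowed */
                while (sig[*i] != ')')
                        if (!one_type(sig, i))
                                return 0;
                (*i)++;
                return 1;
        default:
                return 0;
        }
}

/* Returns 1 if sig is a well-formed sequence of complete D-Bus types.
 * The empty string is valid: it is the signature of "no parameters". */
int signature_is_valid(const char *sig) {
        size_t i = 0;
        while (sig[i] != '\0')
                if (!one_type(sig, &i))
                        return 0;
        return 1;
}
```

Note that the empty string is deliberately accepted: it is the valid return signature of a method that returns nothing.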

So much for the various concepts D-Bus knows. Of course, all these new
concepts might be overwhelming. Let’s look at them from a different
perspective. I assume many of the readers have an understanding of
today’s web technology, specifically HTTP and REST. Let’s try to
compare the concept of an HTTP request with the concept of a D-Bus
method call:

  • An HTTP request is issued on a specific network. It could be the
    Internet, or it could be your local LAN, or a company
    VPN. Depending on which network you issue the request on, you’ll be
    able to talk to a different set of servers. This is not unlike the
    “bus” concept of D-Bus.

  • On the network you then pick a specific HTTP server to talk
    to. That’s roughly comparable to picking a service on a specific bus.

  • On the HTTP server you then ask for a specific URL. The “path” part
    of the URL (by which I mean everything after the host name of the
    server, up to the last “/”) is pretty similar to a D-Bus object path.

  • The “file” part of the URL (by which I mean everything after the
    last slash, following the path, as described above), then defines
    the actual call to make. In D-Bus this could be mapped to an
    interface and method name.

  • Finally, the parameters of an HTTP call follow the path after the
    “?”, they map to the signature of the D-Bus call.

Of course, comparing an HTTP request to a D-Bus method call is a bit
like comparing apples and oranges. However, I think it's still useful to
get a bit of a feeling of what maps to what.
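To make the "file part" analogy a bit more concrete: a fully qualified member name like org.freedesktop.login1.Manager.ActivateSession splits at the last dot into an interface and a member, much as a URL splits at the last slash into path and file. A tiny hypothetical helper (split_member is my own name for illustration, not a D-Bus API):

```c
#include <string.h>

/* Split "org.freedesktop.login1.Manager.ActivateSession" at the last
 * dot, like splitting a URL into path and file at the last slash.
 * Copies the interface part into iface, points member at the rest.
 * Returns 0 on success, -1 if there is nothing to split. */
int split_member(const char *full, char *iface, size_t iface_len, const char **member) {
        const char *dot = strrchr(full, '.');   /* last '.' in the string */

        if (!dot || dot == full || dot[1] == '\0')
                return -1;                      /* no dot, or empty half */

        size_t n = (size_t) (dot - full);
        if (n + 1 > iface_len)
                return -1;                      /* buffer too small */

        memcpy(iface, full, n);
        iface[n] = '\0';
        *member = dot + 1;
        return 0;
}
```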

From the shell

So much about the concepts and the gray theory behind them. Let’s make
this exciting, let’s actually see how this feels on a real system.

For a while now, systemd has included a tool called busctl that is
useful to explore and interact with the D-Bus object system. When
invoked without parameters, it will show you a list of all peers
connected to the system bus. (Use --user to see the peers of your user
bus instead.)
$ busctl
NAME                                       PID PROCESS         USER             CONNECTION    UNIT                      SESSION    DESCRIPTION
:1.1                                         1 systemd         root             :1.1          -                         -          -
:1.11                                      705 NetworkManager  root             :1.11         NetworkManager.service    -          -
:1.14                                      744 gdm             root             :1.14         gdm.service               -          -
:1.4                                       708 systemd-logind  root             :1.4          systemd-logind.service    -          -
:1.7200                                  17563 busctl          lennart          :1.7200       session-1.scope           1          -
org.freedesktop.NetworkManager             705 NetworkManager  root             :1.11         NetworkManager.service    -          -
org.freedesktop.login1                     708 systemd-logind  root             :1.4          systemd-logind.service    -          -
org.freedesktop.systemd1                     1 systemd         root             :1.1          -                         -          -
org.gnome.DisplayManager                   744 gdm             root             :1.14         gdm.service               -          -

(I have shortened the output a bit, to keep things brief.)

The list begins with a list of all peers currently connected to the
bus. They are identified by peer names like ":1.11". These are called
unique names in D-Bus nomenclature. Basically, every peer has a
unique name, and they are assigned automatically when a peer connects
to the bus. They are much like IP addresses, if you will. You'll
notice that a couple of peers are already connected, including our
little busctl tool itself as well as a number of system services. The
list then shows all actual services on the bus, identified by their
service names (as discussed above; to discern them from the unique
names these are also called well-known names). In many ways
well-known names are similar to DNS host names, i.e. they are a
friendlier way to reference a peer, but on the lower level they just
map to an IP address, or in this comparison the unique name. Much like
you can connect to a host on the Internet by either its host name or
its IP address, you can also connect to a bus peer either by its
unique or its well-known name. (Note that each peer can have as many
well-known names as it likes, much like an IP address can have
multiple host names referring to it).
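The two kinds of names are trivially told apart by syntax: unique names always begin with a colon, well-known names never do (and must contain at least one dot). A sketch of that distinction (these helper names are mine, not part of any library):

```c
#include <stdbool.h>
#include <string.h>

/* Unique names like ":1.11" always begin with a colon; they are
 * assigned by the bus when a peer connects. */
bool is_unique_name(const char *name) {
        return name != NULL && name[0] == ':';
}

/* Well-known names like "org.freedesktop.login1" never begin with a
 * colon and contain at least one dot (reverse domain name notation). */
bool is_well_known_name(const char *name) {
        return name != NULL && name[0] != ':' && strchr(name, '.') != NULL;
}
```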

OK, that’s already kinda cool. Try it for yourself, on your local
machine (all you need is a recent, systemd-based distribution).

Let’s now go the next step. Let’s see which objects the
org.freedesktop.login1 service actually offers:

$ busctl tree org.freedesktop.login1
└─/org/freedesktop/login1
  ├─/org/freedesktop/login1/seat
  │ ├─/org/freedesktop/login1/seat/seat0
  │ └─/org/freedesktop/login1/seat/self
  └─/org/freedesktop/login1/session
    ├─/org/freedesktop/login1/session/_31
    └─/org/freedesktop/login1/session/self

Pretty, isn’t it? What’s actually even nicer, and which the output
does not show, is that there’s full command line completion
available: as you press TAB the shell will auto-complete the service
names for you. It’s a real pleasure to explore your D-Bus objects that
way!

The output shows some objects that you might recognize from the
explanations above. Now, let’s go further. Let’s see what interfaces,
methods, signals and properties one of these objects actually exposes:

$ busctl introspect org.freedesktop.login1 /org/freedesktop/login1/session/_31
NAME                                TYPE      SIGNATURE RESULT/VALUE                             FLAGS
org.freedesktop.DBus.Introspectable interface -         -                                        -
.Introspect                         method    -         s                                        -
org.freedesktop.DBus.Peer           interface -         -                                        -
.GetMachineId                       method    -         s                                        -
.Ping                               method    -         -                                        -
org.freedesktop.DBus.Properties     interface -         -                                        -
.Get                                method    ss        v                                        -
.GetAll                             method    s         a{sv}                                    -
.Set                                method    ssv       -                                        -
.PropertiesChanged                  signal    sa{sv}as  -                                        -
org.freedesktop.login1.Session      interface -         -                                        -
.Activate                           method    -         -                                        -
.Kill                               method    si        -                                        -
.Lock                               method    -         -                                        -
.PauseDeviceComplete                method    uu        -                                        -
.ReleaseControl                     method    -         -                                        -
.ReleaseDevice                      method    uu        -                                        -
.SetIdleHint                        method    b         -                                        -
.TakeControl                        method    b         -                                        -
.TakeDevice                         method    uu        hb                                       -
.Terminate                          method    -         -                                        -
.Unlock                             method    -         -                                        -
.Active                             property  b         true                                     emits-change
.Audit                              property  u         1                                        const
.Class                              property  s         "user"                                   const
.Desktop                            property  s         ""                                       const
.Display                            property  s         ""                                       const
.Id                                 property  s         "1"                                      const
.IdleHint                           property  b         true                                     emits-change
.IdleSinceHint                      property  t         1434494624206001                         emits-change
.IdleSinceHintMonotonic             property  t         0                                        emits-change
.Leader                             property  u         762                                      const
.Name                               property  s         "lennart"                                const
.Remote                             property  b         false                                    const
.RemoteHost                         property  s         ""                                       const
.RemoteUser                         property  s         ""                                       const
.Scope                              property  s         "session-1.scope"                        const
.Seat                               property  (so)      "seat0" "/org/freedesktop/login1/seat... const
.Service                            property  s         "gdm-autologin"                          const
.State                              property  s         "active"                                 -
.TTY                                property  s         "/dev/tty1"                              const
.Timestamp                          property  t         1434494630344367                         const
.TimestampMonotonic                 property  t         34814579                                 const
.Type                               property  s         "x11"                                    const
.User                               property  (uo)      1000 "/org/freedesktop/login1/user/_1... const
.VTNr                               property  u         1                                        const
.Lock                               signal    -         -                                        -
.PauseDevice                        signal    uus       -                                        -
.ResumeDevice                       signal    uuh       -                                        -
.Unlock                             signal    -         -                                        -

As before, the busctl command supports command line completion, hence
both the service name and the object path used are easily put together
on the shell simply by pressing TAB. The output shows the methods,
properties, signals of one of the session objects that are currently
made available by systemd-logind. There’s a section for each
interface the object knows. The second column tells you what kind of
member is shown in the line. The third column shows the signature of
the member. In case of method calls that’s the input parameters, the
fourth column shows what is returned. For properties, the fourth
column shows their current value.

So far, we just explored. Let’s take the next step now: let’s become
active – let’s call a method:

# busctl call org.freedesktop.login1 /org/freedesktop/login1/session/_31 org.freedesktop.login1.Session Lock

I don’t think I need to mention this anymore, but anyway: again
there’s full command line completion available. The third argument is
the interface name, the fourth the method name, both can be easily
completed by pressing TAB. In this case we picked the Lock method,
which activates the screen lock for the specific session. And yupp,
the instant I pressed enter on this line my screen lock turned on
(this only works on DEs that correctly hook into systemd-logind;
GNOME works fine, and KDE should work too).

The Lock method call we picked is very simple, as it takes no
parameters and returns none. Of course, it can get more complicated
for some calls. Here’s another example, this time using one of
systemd’s own bus calls, to start an arbitrary system unit:

# busctl call org.freedesktop.systemd1 /org/freedesktop/systemd1 org.freedesktop.systemd1.Manager StartUnit ss "cups.service" "replace"
o "/org/freedesktop/systemd1/job/42684"

This call takes two strings as input parameters, as we denote in the
signature string that follows the method name (as usual, command line
completion helps you get this right). Following the signature, the
next two parameters are simply the two strings to pass. The specified
signature string hence indicates what comes next. systemd’s StartUnit
method call takes the unit name to start as first parameter, and the
mode in which to start it as second. The call returned a single object
path value. It is encoded the same way as the input parameter: a
signature (just o for the object path) followed by the actual value.

Of course, some method call parameters can get a ton more complex, but
with busctl it’s relatively easy to encode them all. See the man
page for details.
busctl knows a number of other operations. For example, you can use
it to monitor D-Bus traffic as it happens (including generating a
.cap file for use with Wireshark!) or you can set or get specific
properties. However, this blog story was supposed to be about sd-bus,
not busctl, hence let’s cut this short here, and let me direct you
to the man page in case you want to know more about the tool.

busctl (like the rest of systemd) is implemented using the sd-bus
API. Thus it exposes many of the features of sd-bus itself. For
example, you can use it to connect to remote or container buses. It
understands both kdbus and classic D-Bus, and more!


But enough! Let’s get back on topic, let’s talk about sd-bus itself.

The sd-bus set of APIs is mostly contained in the header file
sd-bus.h.
Here’s a random selection of features of the library that make it
compare well with the other implementations available.

  • Supports both kdbus and dbus1 as back-end.

  • Has high-level support for connecting to remote buses via ssh, and
    to buses of local OS containers.

  • Powerful credential model, to implement authentication of clients
    in services. Currently 34 individual fields are supported, from the
    PID of the client to the cgroup or capability sets.

  • Support for tracking the life-cycle of peers in order to release
    local objects automatically when all peers referencing them
    disconnected.
  • The client builds an efficient decision tree to determine which
    handlers to deliver an incoming bus message to.

  • Automatically translates D-Bus errors into UNIX style errors and
    back (this is lossy though), to ensure best integration of D-Bus
    into low-level Linux programs.

  • Powerful but lightweight object model for exposing local objects on
    the bus. Automatically generates introspection as necessary.
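The error translation mentioned above can be pictured as a name-to-errno lookup table plus a lossy fallback. The following is only an illustration of the idea, under my own assumptions; the table and fallback here are mine, and sd-bus' real table (in its bus-error code) is far larger:

```c
#include <errno.h>
#include <string.h>

/* Illustrative only: a tiny table mapping a few well-known D-Bus error
 * names to UNIX errno values, in the spirit of what sd-bus does. */
static const struct {
        const char *name;
        int errnum;
} error_map[] = {
        { "org.freedesktop.DBus.Error.NoMemory",     ENOMEM },
        { "org.freedesktop.DBus.Error.FileNotFound", ENOENT },
        { "org.freedesktop.DBus.Error.AccessDenied", EACCES },
        { "org.freedesktop.DBus.Error.InvalidArgs",  EINVAL },
};

int dbus_error_to_errno(const char *name) {
        for (size_t i = 0; i < sizeof(error_map) / sizeof(error_map[0]); i++)
                if (strcmp(error_map[i].name, name) == 0)
                        return error_map[i].errnum;

        /* Unknown error names collapse to a generic errno; this is
         * exactly where the translation is lossy. */
        return EIO;
}
```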

The API is currently not fully documented, but we are working on
completing the set of manual pages. For details
see all pages starting with sd_bus_.

Invoking a Method, from C, with sd-bus

So much about the library in general. Here’s an example for connecting
to the bus and issuing a method call:

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <systemd/sd-bus.h>

int main(int argc, char *argv[]) {
        sd_bus_error error = SD_BUS_ERROR_NULL;
        sd_bus_message *m = NULL;
        sd_bus *bus = NULL;
        const char *path;
        int r;

        /* Connect to the system bus */
        r = sd_bus_open_system(&bus);
        if (r < 0) {
                fprintf(stderr, "Failed to connect to system bus: %s\n", strerror(-r));
                goto finish;
        }

        /* Issue the method call and store the response message in m */
        r = sd_bus_call_method(bus,
                               "org.freedesktop.systemd1",           /* service to contact */
                               "/org/freedesktop/systemd1",          /* object path */
                               "org.freedesktop.systemd1.Manager",   /* interface name */
                               "StartUnit",                          /* method name */
                               &error,                               /* object to return error in */
                               &m,                                   /* return message on success */
                               "ss",                                 /* input signature */
                               "cups.service",                       /* first argument */
                               "replace");                           /* second argument */
        if (r < 0) {
                fprintf(stderr, "Failed to issue method call: %s\n", error.message);
                goto finish;
        }

        /* Parse the response message */
        r = sd_bus_message_read(m, "o", &path);
        if (r < 0) {
                fprintf(stderr, "Failed to parse response message: %s\n", strerror(-r));
                goto finish;
        }

        printf("Queued service job as %s.\n", path);

finish:
        sd_bus_error_free(&error);
        sd_bus_message_unref(m);
        sd_bus_unref(bus);

        return r < 0 ? EXIT_FAILURE : EXIT_SUCCESS;
}
Save this example as bus-client.c, then build it with:

$ gcc bus-client.c -o bus-client `pkg-config --cflags --libs libsystemd`

This will generate a binary bus-client you can now run. Make sure to
run it as root though, since access to the StartUnit method is
privileged:
# ./bus-client
Queued service job as /org/freedesktop/systemd1/job/3586.

And that’s it already, our first example. It showed how we invoked a
method call on the bus. The actual function call of the method is very
close to the busctl command line we used before. I hope the code
excerpt needs little further explanation. It’s supposed to give you a
taste how to write D-Bus clients with sd-bus. For more
information please have a look at the header file, the man page or
even the sd-bus sources.

Implementing a Service, in C, with sd-bus

Of course, just calling a single method is a rather simplistic
example. Let’s have a look on how to write a bus service. We’ll write
a small calculator service, that exposes a single object, which
implements an interface that exposes two methods: one to multiply two
64bit signed integers, and one to divide one 64bit signed integer by
another:
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <errno.h>
#include <systemd/sd-bus.h>

static int method_multiply(sd_bus_message *m, void *userdata, sd_bus_error *ret_error) {
        int64_t x, y;
        int r;

        /* Read the parameters */
        r = sd_bus_message_read(m, "xx", &x, &y);
        if (r < 0) {
                fprintf(stderr, "Failed to parse parameters: %s\n", strerror(-r));
                return r;
        }

        /* Reply with the response */
        return sd_bus_reply_method_return(m, "x", x * y);
}

static int method_divide(sd_bus_message *m, void *userdata, sd_bus_error *ret_error) {
        int64_t x, y;
        int r;

        /* Read the parameters */
        r = sd_bus_message_read(m, "xx", &x, &y);
        if (r < 0) {
                fprintf(stderr, "Failed to parse parameters: %s\n", strerror(-r));
                return r;
        }

        /* Return an error on division by zero */
        if (y == 0) {
                sd_bus_error_set_const(ret_error, "net.poettering.DivisionByZero", "Sorry, can't allow division by zero.");
                return -EINVAL;
        }

        return sd_bus_reply_method_return(m, "x", x / y);
}

/* The vtable of our little object, implements the net.poettering.Calculator interface */
static const sd_bus_vtable calculator_vtable[] = {
        SD_BUS_VTABLE_START(0),
        SD_BUS_METHOD("Multiply", "xx", "x", method_multiply, SD_BUS_VTABLE_UNPRIVILEGED),
        SD_BUS_METHOD("Divide",   "xx", "x", method_divide,   SD_BUS_VTABLE_UNPRIVILEGED),
        SD_BUS_VTABLE_END
};

int main(int argc, char *argv[]) {
        sd_bus_slot *slot = NULL;
        sd_bus *bus = NULL;
        int r;

        /* Connect to the user bus this time */
        r = sd_bus_open_user(&bus);
        if (r < 0) {
                fprintf(stderr, "Failed to connect to user bus: %s\n", strerror(-r));
                goto finish;
        }

        /* Install the object */
        r = sd_bus_add_object_vtable(bus,
                                     &slot,
                                     "/net/poettering/Calculator",  /* object path */
                                     "net.poettering.Calculator",   /* interface name */
                                     calculator_vtable,
                                     NULL);
        if (r < 0) {
                fprintf(stderr, "Failed to install object: %s\n", strerror(-r));
                goto finish;
        }

        /* Take a well-known service name so that clients can find us */
        r = sd_bus_request_name(bus, "net.poettering.Calculator", 0);
        if (r < 0) {
                fprintf(stderr, "Failed to acquire service name: %s\n", strerror(-r));
                goto finish;
        }

        for (;;) {
                /* Process requests */
                r = sd_bus_process(bus, NULL);
                if (r < 0) {
                        fprintf(stderr, "Failed to process bus: %s\n", strerror(-r));
                        goto finish;
                }
                if (r > 0) /* we processed a request, try to process another one, right-away */
                        continue;

                /* Wait for the next request to process */
                r = sd_bus_wait(bus, (uint64_t) -1);
                if (r < 0) {
                        fprintf(stderr, "Failed to wait on bus: %s\n", strerror(-r));
                        goto finish;
                }
        }

finish:
        sd_bus_slot_unref(slot);
        sd_bus_unref(bus);

        return r < 0 ? EXIT_FAILURE : EXIT_SUCCESS;
}
Save this example as bus-service.c, then build it with:

$ gcc bus-service.c -o bus-service `pkg-config --cflags --libs libsystemd`

Now, let’s run it:

$ ./bus-service

In another terminal, let’s try to talk to it. Note that this service
is now on the user bus, not on the system bus as before. We do this
for simplicity reasons: on the system bus access to services is
tightly controlled so unprivileged clients cannot request privileged
operations. On the user bus however things are simpler: as only
processes of the user owning the bus can connect, no further policy
enforcement will complicate this example. Because the service is on
the user bus, we have to pass the --user switch on the busctl
command line. Let’s start with looking at the service’s object tree.

$ busctl --user tree net.poettering.Calculator
└─/net/poettering/Calculator

As we can see, there’s only a single object on the service, which is
not surprising, given that our code above only registered one. Let’s
see the interfaces and the members this object exposes:

$ busctl --user introspect net.poettering.Calculator /net/poettering/Calculator
NAME                                TYPE      SIGNATURE RESULT/VALUE FLAGS
net.poettering.Calculator           interface -         -            -
.Divide                             method    xx        x            -
.Multiply                           method    xx        x            -
org.freedesktop.DBus.Introspectable interface -         -            -
.Introspect                         method    -         s            -
org.freedesktop.DBus.Peer           interface -         -            -
.GetMachineId                       method    -         s            -
.Ping                               method    -         -            -
org.freedesktop.DBus.Properties     interface -         -            -
.Get                                method    ss        v            -
.GetAll                             method    s         a{sv}        -
.Set                                method    ssv       -            -
.PropertiesChanged                  signal    sa{sv}as  -            -

The sd-bus library automatically added a couple of generic interfaces,
as mentioned above. But the first interface we see is actually the one
we added! It shows our two methods, and both take “xx” (two 64bit
signed integers) as input parameters, and return one “x”. Great! But
does it work?

$ busctl --user call net.poettering.Calculator /net/poettering/Calculator net.poettering.Calculator Multiply xx 5 7
x 35

Woohoo! We passed the two integers 5 and 7, and the service actually
multiplied them for us and returned a single integer 35! Let’s try the
other method:

$ busctl --user call net.poettering.Calculator /net/poettering/Calculator net.poettering.Calculator Divide xx 99 17
x 5

Oh, wow! It can even do integer division! Fantastic! But let’s trick
it into dividing by zero:

$ busctl --user call net.poettering.Calculator /net/poettering/Calculator net.poettering.Calculator Divide xx 43 0
Sorry, can't allow division by zero.

Nice! It detected this and returned a clean error. If you look at the
source code example above you’ll see precisely how we generated the
error.

And that’s really all I have for today. Of course, the examples I
showed are short, and I don’t get into detail here on what precisely
each line does. However, this is supposed to be a short introduction
to D-Bus and sd-bus, and it’s already way too long for that …

I hope this blog story was useful to you. If you are interested in
using sd-bus for your own programs, I hope this gets you started. If
you have further questions, check the (incomplete) man pages, and
ask us on IRC or the systemd mailing list. If you need more
examples, have a look at the systemd source tree; all of systemd’s
many bus services use sd-bus extensively.

Darknet - The Darkside: Just-Metadata – Gathers & Analyse IP Address Metadata

This post was syndicated from: Darknet - The Darkside and was written by: Darknet. Original post: at Darknet - The Darkside

Just-Metadata is a tool that can be used to passively gather metadata about a large number of IP addresses and attempt to extrapolate relationships that might not otherwise be seen. Just-Metadata has “gather” modules which are used to gather metadata about IPs loaded into the framework across multiple resources on the…

Read the full post at

SANS Internet Storm Center, InfoCON: green: How much is your IPv4 Space Worth, (Wed, Jun 10th)

This post was syndicated from: SANS Internet Storm Center, InfoCON: green and was written by: SANS Internet Storm Center, InfoCON: green. Original post: at SANS Internet Storm Center, InfoCON: green

Thanks to Rob for reminding me of IPv4 auction websites again. I looked at them a couple of years ago, but there was very little real activity at the time. Looks like that has changed now. ARIN is essentially out of IPv4 space, and very restrictive in handing out any additional addresses. It has gotten very hard, if not impossible, to obtain a larger block of IPv4 space. So it is no surprise that markets for IPv4 space are coming up.

These markets are not in line with registrar policies [1]. If someone receives an IP address assignment, they don’t technically own the addresses. Once they are no longer needed, they are supposed to be returned to ARIN to be handed to the next applicant in line. But there has been little enforcement, and there have always been grey areas. For example, a company may buy another company, and in the process obtain access to that company’s IP address space. Later, assets other than the IP address space could be sold off, leaving the buyer with the rights to the IP address space.

Here are some of the sites offering IP address space (I am not endorsing them, and have no idea how real they are):

– Currently three offers for space up to a /20 at $7-$10 per address. There are a couple of bids.
– There are a number of auctions with IP addresses for sale and for rent. Looks like they are going for about the same price as the addresses at [2]

Some sites have done so in the past but have already shut down. In other cases, the NANOG mailing list was used to offer IP address space, or IP addresses were purchased as part of bankruptcy auctions [3].


Johannes B. Ullrich, Ph.D.

(c) SANS Internet Storm Center. Creative Commons Attribution-Noncommercial 3.0 United States License.

TorrentFreak: Kim Dotcom’s MegaNet Preps Jan 2016 Crowdfunding Campaign

This post was syndicated from: TorrentFreak and was written by: Andy. Original post: at TorrentFreak

For many years Kim Dotcom was associated with a crazy lifestyle, but these days he prefers to be seen more as a family man.

Regularly posting pictures of his children on Twitter and playing down his wild past, Dotcom seems unlikely to entertain a recent request from Pirate Bay founder Peter Sunde to join him on the Gumball Rally.

But while yachts and fast cars might be a thing of the past, Dotcom has certainly not lost the fire in his belly when it comes to his current predicament. As he fights off a ravenous U.S. government determined to bring him to justice by any means possible, spying included, the Megaupload founder has positioned himself as a champion of Internet privacy.

On January 19, 2013, Dotcom marked the anniversary of the raid on his empire by launching the privacy-focused cloud-storage service. Next year, on the same date, the tenacious German says he will deliver again.

Thus far, details are thin on the ground, but what we do know is that Dotcom is planning a new anti-censorship network he calls MegaNet.

“How would you like a new Internet that can’t be controlled, censored or destroyed by Governments or Corporations?” Dotcom teased in February.

MegaNet’s precise mechanism is yet to be revealed, but Dotcom has already stated that the network will be non-IP address based and that blockchain technology will play an important role.

What we also know is that users’ mobile phones will play a crucial role, although at launch other devices will participate in the network.

“All your mobile phones become an encrypted network,” Dotcom notes. “You’d be surprised how much idle storage & bandwidth capacity mobile phones have. MegaNet will turn that idle capacity into a new network.”

At this stage it appears that Dotcom envisions a totally decentralized system, an essential quality if he is to deliver on his claims of absolute privacy.

With the earlier promise that participants in MegaNet “become the MegaNet”, Dotcom’s announcement this morning that the project will seek monetary contributions from the masses seems entirely fitting.

“MegaNet details will be revealed and equity will be available via crowd funding on 20 Jan 2016, the fourth anniversary of the raid [on Dotcom and Megaupload],” Dotcom confirmed.

And for now, that is all. Dotcom has become something of an expert at dripping small details to the masses as and when he sees fit while allowing the media to fill in the blanks. It’s an effective strategy which provides millions in free advertising for close to zero marketing outlay.

The big question now is how much equity MegaNet will need to get off the ground and how many of Dotcom’s supporters will believe that privacy is a commodity worth supporting with their wallets. People were happy to support Peter Sunde’s on the same premise, but as recently revealed the amount of cash required to compete can be considerable.

However, Dotcom probably won’t attempt this entirely on his own. Given his history there’s a significant chance that the entrepreneur will pull in heavyweights such as Julian Assange and Glenn Greenwald to support the campaign. That will definitely help to boost the coffers.

Update: Kim Dotcom has sent TorrentFreak additional details on how MegaNet will operate.

“MegaNet has a unique file crystallization and recreation protocol utilizing the blockchain. You can load entire websites with this new technology and it makes them immune to almost all hacker attacks and ddos,” Dotcom informs TF.

“In the beginning MegaNet will still utilize the current Internet as a dumb pipe but in 10 years it will run exclusively on smartphones with hopefully over 500 million users carrying the network.

“A network by the people for the people. Not controlled by any government or corporations. MegaNet will be a powerful tool to guard our privacy and freedoms and it will also be my legacy,” Dotcom concludes.

On the finance front, MegaNet will partner with and Max Keiser to raise capital.

Source: TorrentFreak, for the latest info on copyright, file-sharing, torrent sites and anonymous VPN services.

TorrentFreak: Court Orders Cloudflare to Disconnect ‘New’ Grooveshark

This post was syndicated from: TorrentFreak and was written by: Ernesto. Original post: at TorrentFreak

Last month the long-running lawsuit between the RIAA and Grooveshark came to an end. However, within days a new site was launched aiming to take its place.

The RIAA wasn’t happy with this development and quickly obtained a restraining order, preventing domain registrars and hosting companies from offering their services to the site.

This was somewhat effective, as Namecheap quickly suspended the original domain name. However, not all parties were as cooperative.

Popular CDN-service CloudFlare refused to take action on the basis that it is not “aiding and abetting” piracy. The RIAA disagreed and asked New York District Court Judge Alison Nathan to rule on the matter.

In an order (pdf) just published, Judge Nathan agrees with the music group.

CloudFlare argued that it was not bound by the restraining order since it was not in “active concert or participation” with the site, given the automated nature of its services. In addition, the company countered that even if it disconnected Grooveshark, the site would still be accessible.

In her order, Judge Nathan notes that she finds neither argument persuasive. The fact that CloudFlare is aware of the infringements and provides services that help the public easily access the infringing site shows otherwise.

“Connecting internet users to in this manner benefits Defendants and quite fundamentally assists them in violating the injunction because, without it, users would not be able to connect to Defendants’ site unless they knew the specific IP address for the site,” Judge Nathan writes.

“Beyond the authoritative domain name server, CloudFlare also provides additional services that it describes as improving the performance of the site,” she adds.

The argument that the ‘new’ Grooveshark will still be around after CloudFlare suspends the account was found to be irrelevant. A third-party can still be bound by a restraining order even if terminating its services doesn’t render a website inaccessible.

“… just because another third party could aid and abet the Defendants in violating the injunction does not mean that CloudFlare is not doing so,” the order reads.

Finally, the Judge agrees that there may be other services that are not covered by the order. However, in this case CloudFlare is directly facilitating Grooveshark, with specific knowledge of the accounts that are responsible.

For CloudFlare the ruling comes as a disappointment, opening the door for a slew of similar requests. The CDN has several of the largest pirate sites as clients, including The Pirate Bay, which is now a relatively easy target.

At the time of writing is no longer accessible, suggesting that CloudFlare has already complied with the order.


SANS Internet Storm Center, InfoCON: green: Guest Diary: Xavier Mertens – Playing with IP Reputation with Dshield & OSSEC, (Tue, Jun 2nd)

This post was syndicated from: SANS Internet Storm Center, InfoCON: green and was written by: SANS Internet Storm Center, InfoCON: green. Original post: at SANS Internet Storm Center, InfoCON: green


When investigating incidents or searching for malicious activity in your logs, IP reputation is a nice way to increase the reliability of generated alerts. It can help to prioritize incidents. Let’s take an example with a WordPress blog. It will, sooner or later, be targeted by a brute-force attack on the default /wp-admin page. In this case, IP reputation can be helpful: an attack performed from an IP address already reported as actively scanning the Internet will attract little of my attention. On the contrary, if the same kind of attack is coming from an unknown IP address, this could be more suspicious…

By using a reputation system, our monitoring tool can tag an IP address with a label like “reported as malicious” based on a repository. The real value of this repository depends directly on the value of the collected information. I’m a big fan of DShield, a free service provided by the SANS Internet Storm Center. Such a service works thanks to the data submitted by many people across the Internet. For years, I’ve also been pushing my firewall logs to dshield.org from my OSSEC server. I wrote a tool to achieve this: ossec2dshield. By contributing to the system, it’s now time to get some benefits from my participation: I’m re-using the database to automatically check the reputation of the IP addresses attacking me. We come full circle!

To achieve this, let’s use the ISC API and the OSSEC feature called Active-Response, which allows a script to be triggered upon a set of conditions. In this example, we call the reputation script with our attacker’s address for any alert with a level = 6.

(Check the Active-Response documentation for details.)

The ISC API can be used to query information about an IP address. The returned results are:

{"ip":{"abusecontact":"unknown","number":"","country":"FR","as":"12876","asname":"AS12876 ONLINE S.A.S.,FR","network":"/16","comment":null}}

The most interesting fields are:

count – the number of times the IP address has been reported as an attacker
attacks – the number of targeted IP addresses
mindate – the first report
maxdate – the last report
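Pulling those fields out of an API reply takes only a few lines with any JSON parser. The sketch below is illustrative only: the sample payload and its values are invented, with field names taken from the list above.

```python
import json

# Invented sample reply, for illustration only; the field names follow
# the list above, but the values here are made up.
SAMPLE = ('{"ip": {"count": "148", "attacks": "97", '
          '"mindate": "2015-04-21", "maxdate": "2015-05-27"}}')

def reputation_summary(raw):
    """Extract (count, attacks, mindate, maxdate) from a raw JSON reply."""
    info = json.loads(raw)["ip"]
    return (int(info["count"]), int(info["attacks"]),
            info["mindate"], info["maxdate"])

count, attacks, first_seen, last_seen = reputation_summary(SAMPLE)
print(f"reported {count} times against {attacks} IPs "
      f"between {first_seen} and {last_seen}")
```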

The script can be used from the command line or from an OSSEC Active-Response configuration block. To reduce the requests against the API, a SQLite database is created and populated with a local copy of the data. Existing IP addresses will be checked again after a specified TTL (time-to-live), by default 5 days. Data are also dumped to a flat file or Syslog for further processing by another tool. Here is an example of an entry:

$ tail -f /var/log/ipreputation.log
[2015-05-27 23:30:07,769] DEBUG No data found, fetching from ISC
[2015-05-27 23:30:07,770] DEBUG Using proxy:
[2015-05-27 23:30:07,772] DEBUG Using user-agent: isc-ipreputation/1.0 (
[2015-05-27 23:30:09,760] DEBUG No data found, fetching from ISC
[2015-05-27 23:30:09,761] DEBUG Using proxy:
[2015-05-27 23:30:09,762] DEBUG Using user-agent: isc-ipreputation/1.0 (
[2015-05-27 23:30:10,138] DEBUG Saving

[2015-05-27 23:30:10,145] INFO IP=, AS=6848(TELENET-AS Telenet N.V.,BE), Network=, Country=BE, Count=148, AttackedIP=97, Trend=0, FirstSeen=2015-04-21, LastSeen=2015-05-27, Updated=2015-05-27 18:37:15

In this example, you can see that this IP address started to attack on the 21st of April. It was reported 148 times while attacking 97 different IP addresses (this IP is certainly part of a botnet).
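The TTL-based caching described above (check the local SQLite copy first, re-query the API only when the entry is missing or has expired) can be sketched as follows. This is not the script’s actual code; the table layout and names are assumptions for illustration.

```python
import sqlite3
import time

TTL_DAYS = 5  # default TTL mentioned above

def get_reputation(db, ip, fetch, ttl_days=TTL_DAYS):
    """Return cached data for ip, calling fetch(ip) only when the local
    entry is missing or older than ttl_days (hypothetical schema)."""
    db.execute("CREATE TABLE IF NOT EXISTS ip_cache "
               "(ip TEXT PRIMARY KEY, data TEXT, updated REAL)")
    row = db.execute("SELECT data, updated FROM ip_cache WHERE ip = ?",
                     (ip,)).fetchone()
    now = time.time()
    if row and now - row[1] < ttl_days * 86400:
        return row[0]          # still fresh: no API request needed
    data = fetch(ip)           # unknown or expired: hit the API again
    db.execute("INSERT OR REPLACE INTO ip_cache VALUES (?, ?, ?)",
               (ip, data, now))
    return data
```

A second lookup for the same address within the TTL window is then served entirely from the local database, which keeps the load on the API down.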

The script can be configured with a YAML configuration file (defaulting to /etc/isc-ipreputation.conf) which is very easy to understand:


debug: yes

path: /data/ossec/logs/isc-ipreputation.db
exclude-ip: 192\.168\..*|172\.16\..*|10\..*|fe80:.*
ttl-days: 5
user-agent: isc-ipreputation/1.0 (
Finally, the SQLite database can be used to get interesting statistics. For example, to get the top-10 suspicious IP addresses that attacked me (and their associated country):

$ sqlite3 isc-ipreputation.db
SQLite version 3.8.2 2013-12-06 14:53:30
Enter ".help" for instructions
Enter SQL statements terminated with a ";"

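The SQL itself was lost in syndication. Against an assumed schema (one row per attacking IP with a report count; the table and column names here are hypothetical), the top-10 query would look roughly like this:

```python
import sqlite3

# Hypothetical schema and sample rows, for illustration only.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE ip (ip TEXT, country TEXT, count INTEGER)")
db.executemany("INSERT INTO ip VALUES (?, ?, ?)",
               [("198.51.100.1", "BE", 148),
                ("192.0.2.7", "FR", 93),
                ("203.0.113.9", "US", 12)])

# Top suspicious addresses with their associated country,
# ordered by the number of reports.
top = db.execute("SELECT ip, country, count FROM ip "
                 "ORDER BY count DESC LIMIT 10").fetchall()
for ip, country, count in top:
    print(f"{ip} ({country}): {count} reports")
```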

It is also very easy to generate dynamic lists of IP addresses (or CDB lists used by OSSEC). The following command will generate a CDB list with my top-10 of malicious IP addresses:

# Note: the SQL query was lost in syndication; the table and column
# names below are reconstructed from context and may differ.
$ sqlite3 isc-ipreputation.db "select ip from ip order by count desc limit 10;" | \
while read IP
do
  echo "$IP:Suspicious"
done > /data/ossec/lists/bad-ips
$ cat /data/ossec/lists/bad-ips
$ ossec-makelists
* File lists/bad-ips.cdb needs to be updated

Based on this list, you can add more granularity to your alerts by correlating the attacks with the CDB list. Note that DShield proposes a recommended block list to be used. A few months ago, Richard Porter explained how to integrate one of them in a Palo Alto Networks firewall. This is a great resource, but I think that both are complementary.

The script is available in my GitHub repository.
“If the enemy leaves a door open, you must rush in.”
PGP Key:


TorrentFreak: Hola VPN Already Exploited By “Bad Guys”, Security Firm Says

This post was syndicated from: TorrentFreak and was written by: Andy. Original post: at TorrentFreak

After a flurry of reports, last week the people behind geo-unblocking software Hola were forced to concede that their users’ bandwidth is being sold elsewhere for commercial purposes. But for the Israel-based company, that was the tip of the iceberg.

Following an initial unproven report that the software operates as a botnet, this weekend researchers published an advisory confirming serious problems with the tool.

“The Hola Unblocker Windows client, Firefox addon, Chrome extension and Android application contain multiple vulnerabilities which allow a remote or local attacker to gain code execution and potentially escalate privileges on a user’s system,” the advisory reads.

Yesterday and after several days of intense pressure, Hola published a response in which it quoted Steve Jobs and admitted that mistakes had been made. Hola said that it would now be making it “completely clear” to its users that their resources are being used elsewhere in exchange for a free product.

Hola also confirmed that two vulnerabilities found by the researchers at Adios-Hola had now been fixed, but the researchers quickly fired back.

“We know this to be false,” they wrote in an update. “The vulnerabilities are *still* there, they just broke our vulnerability checker and exploit demonstration. Not only that; there weren’t two vulnerabilities, there were six.”

With Hola saying it now intends to put things right (it says it has committed to an external audit with “one of the big 4 auditing companies”) the company stood by its claims that its software does not turn users’ computers into a botnet. Today, however, an analysis by cybersecurity firm Vectra is painting Hola in an even more unfavorable light.

In its report, Vectra not only insists that Hola behaves like a botnet, but also suggests it may have malicious features by design.

“While analyzing Hola, Vectra Threat Labs researchers found that in addition to behaving like a botnet, Hola contains a variety of capabilities that almost appear to be designed to enable a targeted, human-driven cyber attack on the network in which an Hola user’s machine resides,” the company writes.

“First, the Hola software can download and install any additional software without the user’s knowledge. This is because in addition to being signed with a valid code-signing certificate, once Hola has been installed, the software installs its own code-signing certificate on the user’s system.”

If the implications of that aren’t entirely clear, Vectra assists on that front too. On Windows machines, the certificate is added to the Trusted Publishers Certificate Store which allows *any code* to be installed and run with no notification given to the user. That is frightening.

Furthermore, Vectra found that Hola contains a built-in console (“zconsole”) that is not only constantly active but also has powerful functions including the ability to kill running processes, download a file and run it whilst bypassing anti-virus software, plus read and write content to any IP address or device.

“These capabilities enable a competent attacker to accomplish almost anything. This shifts the discussion away from a leaky and unscrupulous anonymity network, and instead forces us to acknowledge the possibility that an attacker could easily use Hola as a platform to launch a targeted attack within any network containing the Hola software,” Vectra says.

Finally, Vectra says that while analyzing the protocol used by Hola, its researchers found five different malware samples on VirusTotal that contain the Hola protocol. Worryingly, they existed before the recent bad press.

“Unsurprisingly, this means that bad guys had realized the potential of Hola before the recent flurry of public reports by the good guys,” the company adds.

For now, Hola is making a big show of the updates being made to its FAQ as part of its efforts to be more transparent. However, items in the FAQ are still phrased in a manner that portrays criticized elements of the service as positive features, something that is likely to mislead non-tech oriented users.

“Since [Hola] uses real peers to route your traffic and not proxy servers, it makes you more anonymous and more secure than regular VPN services,” one item reads.

How Hola will respond to Vectra’s latest analysis remains to be seen, but at this point there appears little that the company can say or do to pacify much of the hardcore tech community. That being said, if Joe Public still can’t see the harm in a free “community” VPN operating a commercial division with full access to his computer, Hola might settle for that.


TorrentFreak: Court Orders Cox to Expose “Most Egregious” BitTorrent Pirates

This post was syndicated from: TorrentFreak and was written by: Ernesto. Original post: at TorrentFreak

Last year BMG Rights Management and Round Hill Music sued Cox Communications, arguing that the ISP fails to terminate the accounts of repeat infringers.

The companies, which control the publishing rights to songs by Katy Perry, The Beatles and David Bowie among others, claim that Cox has given up its DMCA safe harbor protections due to this inaction.

The case revolves around the “repeat infringer” clause of the DMCA, which prescribes that Internet providers must terminate the accounts of persistent pirates.

As part of the discovery process the music outfits requested details on the accounts which they caught downloading their content. In total there are 150,000 alleged pirates, but as a compromise BMG and Round Hill limited their initial request to 500 IP-addresses.

Cox refused to hand over the requested information arguing that the Cable Privacy Act prevents the company from disclosing this information.

The matter was discussed during a court hearing late last week. After a brief deliberation Judge John Anderson ruled that the ISP must hand over the personal details of 250 subscribers.

“Defendants shall produce customer information associated with the Top 250 IP Addresses recorded to have infringed in the six months prior to filing the Complaint,” Judge Anderson writes.

“This production shall include the information as requested in Interrogatory No.13, specifically: name, address, account number, the bandwidth speed associated with each account, and associated IP address of each customer.”


The music companies also asked for the account information of the top 250 IP-addresses connected to the piracy of their files after the complaint was filed, but this request was denied. Similarly, if the copyright holders want information on any of the 149,500 other Cox customers they need a separate order.

The music companies previously informed the court that the personal details are crucial to prove their direct infringement claims, but it’s unclear how they plan to use the data.

While none of the Cox customers are facing any direct claims as of yet, it’s not unthinkable that some may be named in the suit to increase pressure on the ISP.

The full list of IP-addresses is available for download here (PDF).


Schneier on Security: USPS Tracking Queries to Its Package Tracking Website

This post was syndicated from: Schneier on Security and was written by: schneier. Original post: at Schneier on Security

A man was arrested for drug dealing based on the IP address he used while querying the USPS package tracking website.

SANS Internet Storm Center, InfoCON: green: Upatre/Dyre malspam – Subject: eFax message from “unknown”, (Wed, May 20th)

This post was syndicated from: SANS Internet Storm Center, InfoCON: green and was written by: SANS Internet Storm Center, InfoCON: green. Original post: at SANS Internet Storm Center, InfoCON: green


Yesterday on 2015-05-19, I attended a meeting of my local chapter of the Information Systems Security Association (ISSA). During the meeting, one of the speakers discussed different levels of incident response by Security Operations Center (SOC) personnel. For non-targeted issues like botnet-based malicious spam (malspam) infecting a Windows host, you probably won’t waste valuable time investigating every little detail. In most cases, you’ll probably start the process to re-image the infected computer and move on. Other suspicious events await, and they might reveal a more serious, targeted threat.

However, we still recover information about these malspam campaigns. Traffic patterns evolve, and changes should be documented.

Today’s example of malspam

Searching through my employers blocked spam filters, I found the following Upatre/Dyre wave of malspam:

  • Date/Time: 2015-05-19 from 12:00 AM to 5:47 AM CST
  • Number of messages: 20
  • Sender (spoofed):
  • Subject: eFax message from “unknown”

    As shown in the above image, these messages were tailored for the recipients. You’ll also notice some of the recipient email addresses contain random characters and numbers. Nothing new here. It’s just one of the many waves of malspam our filters block every day. I reported a similar wave earlier this month [1].

    The attachment is a typical example of Upatre, much like we’ve seen before. Let’s see what this malware does in a controlled environment.

    Indicators of compromise (IOC)

    I ran the malware on a physical host and generated the following traffic:

    • 2015-05-19 15:16:12 UTC – port 80 – – GET /
    • 2015-05-19 15:16:13 UTC – port 13410 – SYN packet to server, no response
    • 2015-05-19 15:16:16 UTC – port 443 – two SYN packets to server, no response
    • 2015-05-19 15:16:58 UTC – port 443 – two SYN packets to server, no response
    • 2015-05-19 15:17:40 UTC – port 443 – SSL traffic – approx 510 KB sent from server to infected host
    • 2015-05-19 15:17:56 UTC – port 3478 – UDP STUN traffic to:
    • 2015-05-19 15:17:58 UTC – port 443 – SSL traffic – approx 256 KB sent from server to infected host
    • 2015-05-19 15:18:40 UTC – port 13409 – SYN packet to server, no response

    In my last post about Upatre/Dyre, we saw Upatre-style HTTP GET requests to but no HTTP response from the server [1]. That’s been the case for quite some time now.
    Shown above: Attempted TCP connections to the same IP address now reset (RST) by the server

    How can we tell this is Upatre?

    As I’ve mentioned before, is a service run by one of my fellow Rackspace employees [2]. By itself, it’s not malicious. Unfortunately, malware authors use this and similar services to check an infected computer’s IP address.

    What alerts trigger on this traffic?

    Related files on the infected host include:

    • C:\Users\username\AppData\Local\PwTwUwWTWcqBhWG.exe (Dyre)
    • C:\Users\username\AppData\Local\ne9bzef6m8.dll
    • C:\Users\username\AppData\Local\Temp\~TP95D5.tmp (encrypted or otherwise obfuscated)
    • C:\Users\username\AppData\Local\Temp\Jinhoteb.exe (where Upatre copied itself after it was run)

    Some Windows registry changes for persistence:

    • Key name: HKEY_CURRENT_USER\Software\Microsoft\Windows\CurrentVersion\Run
    • Key name: HKEY_USERS\S-1-5-21-52162474-342682794-3533990878-1000\Software\Microsoft\Windows\CurrentVersion\Run
    • Value name: GoogleUpdate
    • Value type: REG_SZ
    • Value data: C:\Users\username\AppData\Local\PwTwUwWTWcqBhWG.exe

    A pcap of the infection traffic is available at:

    A zip file of the associated Upatre/Dyre malware is available at:

    The zip file is password-protected with the standard password. If you don’t know it, email and ask.

    Final words

    This was yet another wave of Upatre/Dyre malspam. No real surprises, but it’s always interesting to note the small changes from these campaigns.

    Brad Duncan, Security Researcher at Rackspace
    Blog: – Twitter: @malware_traffic




Linux How-Tos and Linux Tutorials: Install Linux on a Modern WiFi Router: Linksys WRT1900AC and OpenWrt

This post was syndicated from: Linux How-Tos and Linux Tutorials and was written by: Ben Martin. Original post: at Linux How-Tos and Linux Tutorials

linksyswrt1900ac router

The Linksys WRT1900AC is a top-end modern router that gets even sweeter when you unleash Linux on it and install OpenWrt. OpenWrt includes the opkg package management system giving you easy access to a great deal of additional open source software to use on your router. If you want the pleasure of SSH access on your router, the ability to use iptables on connections passing through it, and the ability to run various small servers on it, the Linksys WRT1900AC and OpenWrt are a powerful combination.

From a hardware perspective, the Linksys WRT1900AC includes simultaneous dual band with support for 802.11n (2.4 GHz) at up to 600 Megabits per second and 802.11ac (5 GHz) at up to 1.3 Gigabits per second. This lets you connect your older devices to 802.11n, while newer hardware can take advantage of the faster and less congested 802.11ac signal.

The router has a dual-core Marvell Armada 370/XP CPU with 256 MB of RAM and 128 MB of flash storage. You can also attach more storage to the WRT1900AC using its USB 3.0 and eSATA ports. When using OpenWrt you might also like to attach a webcam and printer to the router. The Linksys WRT1900AC has a 4-port gigabit switch and a gigabit upstream WAN port.

Initial setup

The stock firmware that comes with the Linksys WRT1900AC uses a very simple four-step procedure for initial setup. I only partially followed the recommended setup steps.

Step 1: Connect the antennae and power.

Step 2: Connect your upstream “Internet” link to the appropriate port on the router.

Step 3: Connect to the wifi signal from the router. You are given a custom wireless network name and password, which appear to be set differently for each individual router. This nicely removes the security vulnerability inherent in initial router setup, because your router has a custom password from the first time you power it on.

Step 4: Log in to and set up the router.

Instead of directly connecting to the Internet port, I used one of the 4 gigabit switch ports to attach the router to the local LAN. This meant the website in step 4 did not work for me. I could create an account on the smartwifi site, but it wanted me to be connected through the wifi router in order to adjust the settings.

You can however set up the router without needing to use any remote websites. The Linksys will appear at and connecting a laptop to the wifi router and manually forcing the laptop’s IP address to allowed me to access the router configuration page. At that stage the Connectivity/Local Network page lets you set the IP address of the router to something that will fit into your LAN in a non-conflicting manner (and on the subnet you are using) and also disable the DHCP server if you already have one configured.

The initial screen I got when I was connecting directly using again wanted to take me off to a remote website, though you can click through to avoid having to do that if you want.

I tried to attach a 120 GB SanDisk Extreme SSD to test eSATA storage. Unfortunately, ext4 is not a supported filesystem for external storage in the stock firmware. The router could see /dev/sda1 but showed 0 KB used of 0 KB total space. Using a 16 GB flash pen drive formatted with a FAT filesystem was fine; the FTP service was started and the drive showed up as a Samba share, too.

Switching over to OpenWrt

At the time of writing, the options for installing OpenWrt on the device were changing. There were four images, offering Linux kernel version 3.18 or 4.0 and some level of extra fixes and updates depending on the image you choose. I used Kaloz’s evolving snapshots of trunk, linked as openwrt_wrt1900ac_snapshot.img.

Flashing the image onto the router is very simple: you use the same web interface that is used to manually install standard firmware updates. The fun, and the moments of anxiety after the router reboots, will be familiar to anyone who has ever flashed a device.

When the router reboots you will not have any wifi signals from it at all. The router will come up at OpenWrt's default IP address of The easiest way to talk to it is to use a laptop and force its ethernet interface to an address on that subnet (for example, With a trunk distribution of OpenWrt you are likely not to have a useful web interface on the router; visiting in a browser will likely show an empty web server with no files.
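A minimal sketch of putting a laptop onto the router's subnet with the iproute2 tools; the interface name and the chosen laptop address are illustrative, and is OpenWrt's usual default router address:

# give the laptop a temporary address on the OpenWrt router's subnet
ip addr add dev eth0
ip link set eth0 up
# the router should now answer at
ping -c 1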

When falling back to an SSH or network login to the router, another little surprise awaits. Trying to SSH in showed that a connection was possible, but the login was refused. OpenWrt ships with the root password unset, and SSH will not accept a login with an empty password, so connection seemed impossible. The saving grace is that telnet is also running on the router, and after installing a telnet client on the laptop I could log in without a password and without any issue. Gaining access to the router again was a tremendous relief.

In the telnet session you can use the passwd command to set a password and then you should be able to login using SSH. I opted to test the SSH login while the telnet session was still active so that I had a fallback in case login failed for some reason.

To make the web interface operational you will have to install the LuCI package; the commands below will do that for you. If you need a proxy to reach the Internet, the http_proxy, https_proxy, and ftp_proxy environment variables will be of use. You might run into another little obstacle here: with the router on the subnet, it might not be able to talk to your existing network if that network is on the often used subnet. I found that manually forcing a 192.168.0.x address onto br-lan using ifconfig changed the address for the bridged ports and everything moved to that subnet. This is not a permanent change, so if it doesn't work, rebooting the router gets you back to again. Once you have LuCI installed it is easy to make the change permanent there.

export http_proxy=http://your.proxy:port/   # only if you need a proxy; substitute your own
opkg update
opkg install luci
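The br-lan readdressing mentioned above can also be done from the shell, temporarily with ifconfig or permanently with OpenWrt's uci tool; a sketch with an illustrative address:

# temporary change, lost on reboot:
ifconfig br-lan netmask
# permanent change via UCI:
uci set network.lan.ipaddr=''
uci commit network
/etc/init.d/network restart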

Once you have LuCI installed, the rest of the router setup becomes point and click by visiting the web server on your router. To enable wifi signals, go to the Network/Wifi page, which gives you access to the two radios: one for 2.4 GHz and one for the newer 5 GHz 802.11n/ac standard. Each radio is disabled by default. Oddly, when I clicked edit for a radio and scrolled down to the Interface Configuration and its Wireless Security tab, the default security was "No Encryption." I would have thought WPA2-PSK a better default choice. So getting a radio up and running involved setting an ESSID, checking the Mode (I used Access Point), and setting the Wireless Security to something other than nothing, along with a password.
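The same wifi settings can also be applied from a shell session with uci; a sketch assuming the first radio and wifi interface, with example credentials:

uci set wireless.radio0.disabled='0'
uci set wireless.@wifi-iface[0].ssid='MyHomeNet'
uci set wireless.@wifi-iface[0].mode='ap'
uci set wireless.@wifi-iface[0].encryption='psk2'
uci set wireless.@wifi-iface[0].key='a-strong-passphrase'
uci commit wireless
wifi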

Many of the additional features you might install with opkg also have a LuCI support package available. For example, if you want to run a DLNA server on the Linksys WRT1900AC the minidlna package is available, and a luci-app-minidlna package will let you manage the server right from the LuCI web interface.

opkg install minidlna
opkg install luci-app-minidlna

Although the Linksys WRT1900AC has 128 MB of flash storage, it is broken up into many smaller partitions. The core /overlay partition is only 24.6 MB, with /tmp/syscfg being another 30 MB partition of which only around 300 KB was in use. While this provides plenty of space to install precompiled software, there isn't enough to install gcc on the Linksys WRT1900AC/OpenWrt installation. I have a post up asking whether there is a simple way to use more of the flash on the Linksys WRT1900AC from the OpenWrt file system. Another way to gain more space on an OpenWrt installation is to use an extroot, where the main system is stored on external storage; with the Linksys WRT1900AC this could be a partition on an eSATA SSD.
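An extroot is configured through OpenWrt's /etc/config/fstab (with the block-mount package installed); a minimal sketch, assuming an ext4 partition on the eSATA SSD at /dev/sda1:

config mount
        option target   '/overlay'
        option device   '/dev/sda1'
        option fstype   'ext4'
        option enabled  '1'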

If you don’t want to use extroot right away, another approach is to use a second ARM machine running a soft floating point distribution to compile static binaries, which can then be transferred to the OpenWrt installation on the Linksys WRT1900AC with rsync. An ARM machine uses either soft or hard floating point, and generally everything is compiled to work with one or the other. To see which your hardware expects, you can use the readelf tool to inspect a few existing binaries as shown below. Note the soft-float ABI line in the output.

root@linksys1900ac:~# readelf -a /bin/bash|grep ABI
  OS/ABI:                            UNIX - System V
  ABI Version:                       0
  Flags:                             0x5000202, has entry point, Version5 EABI, soft-float ABI

I tried to get push-button WPS setup to work from OpenWrt, without success. I had used that feature under the standard firmware, so the hardware is capable of it, and it makes connecting new devices to the router much simpler.

I also noticed that there are serial TTL headers on the Linksys WRT1900AC, and a post shows a method to reflash the firmware directly from uboot. I haven't tried this out, but it is nice to have as a possible last-ditch way to resurrect a device with non-functioning firmware.

Another useful step is to set up users other than root on the OpenWrt installation, so that you run less risk of interrupting normal router activity. You might like to install the shadow utilities and sudo in order to do this, as shown below (the new user must be created before you can switch to it):

  root@wrt1900ac:~# opkg install sudo
  root@wrt1900ac:~# opkg install shadow-useradd shadow-groupadd
  root@wrt1900ac:~# useradd -m ben
  root@wrt1900ac:~# sudo -u ben bash

I found that the fan came on while the Linksys WRT1900AC was booting into OpenWrt, and it was turned off again soon after. Temperature readings are available using the sensors command, as shown below.

root@wrt1900ac:~# sensors 
Adapter: mv64xxx_i2c adapter
ddr:          +52.8 C  
wifi:         +55.1 C  
Adapter: Virtual device
cpu:          +61.7 C  


Using an LG G3 phone with Android 5, the Wifi Network Analyzer app indicated a speed of 433 Mbps with the phone about a foot from the router. That speed dropped to around 200 Mbps when I moved several rooms away. The same results were given by the stock firmware and the OpenWrt image.

Running iperf (2.0.5) between the OpenWrt installation and a mid-2012 MacBook Air gave a bandwidth of 120 Mbps. The same client and server going through a D-Link DIR-855 at a similar distance on 5 GHz gave only 82 Mbps. Unfortunately the Mac only has wifi-n; wifi-ac was added in the next year's model.

The LG G3 running Android 5, connected to the wifi-ac network and using the iperf app, could get 102 Mbps. These tests were run by starting the server with ‘-s’ and the client with ‘-c server-ip-address’. The server, running on the Linksys WRT1900AC/OpenWrt machine, chose a default TCP window size of 85 KB for these runs. Playing with window sizes, I could get about 10 percent additional speed on the G3 without too much effort.
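The iperf invocations were along these lines; the router address is illustrative, and -w sets the TCP window size:

# on the router (server):
iperf -s
# on the client, pointed at the router:
iperf -c -t 30
# trying a larger TCP window on both ends:
iperf -s -w 256k
iperf -c -w 256k -t 30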

I connected a 120 GB SanDisk Extreme SSD to test eSATA performance. For sequential IO, Bonnie++ could write about 89 Mbps, read 148 Mbps, and rewrite blocks at about 55 Mbps, and it managed about 5,200 seeks/s overall. This compares well for read and rewrite with the eSATA on the CuBox, which got 150 Mbps and 50 Mbps respectively. The CuBox could write at 120 Mbps, about 35 percent faster than the Linksys WRT1900AC. This used the same ext4 filesystem on both machines; the drive was simply moved between them.

bonnie++ -n 0 -f -m Linksys1900ac -d `pwd` 

OpenSSL performance for digests was in a similar ballpark to the BeagleBone Black and CuBox i4Pro. For ciphers the story depended heavily on the algorithm: DES and AES-256 were considerably slower than on other ARM machines, whereas Blowfish and CAST ran at speeds similar to many other ARM CPUs. For 1,024-bit RSA signatures the Linksys WRT1900AC was at around 25-30 percent of the performance of the more budget ARM CPUs.

[Charts: OpenSSL digest, cipher, and RSA signing benchmark results for the Linksys router]

Final Thoughts

It is great to see that LuCI gives easy access to the router features and even has “app” packages to let you configure some of the additional software that you might like to install on your OpenWrt device. OpenWrt images for the Linksys WRT1900AC are a relatively recent development; once a recommended stable image with LuCI included is released, it should mitigate some of the tense moments that reflashing currently presents. The 177+ pages on the OpenWrt forum for the Linksys WRT1900AC are a testament to the community interest in running OpenWrt on the device.

It is wonderful to see the powerful hardware of the Linksys WRT1900AC able to run OpenWrt. The pairing of Linux/FLOSS with contemporary hardware lets you customize the device to fit your usage needs. You can not only SSH in; rsync is ready for you, and your programming language of choice can be installed on the device for those little programs that you want available all the time but don't want to leave a full machine running for. Some applications also work well on the router itself, for example packet filtering: a single policy on the router can block tablets and phones from connecting to your work machines.

We would like to thank Linksys for providing the WRT1900AC hardware used in this article.

The Hacker Factor Blog: Email Delivery Errors

This post was syndicated from: The Hacker Factor Blog and was written by: The Hacker Factor Blog. Original post: at The Hacker Factor Blog

Email seems like a necessary evil. While I dislike spam, I like the non-immediate nature of the communication and the fact that messages can queue up (for better or for worse). And best of all, I can script actions for handling emails. If the email matches certain signatures, then the script can mark it as spam. If it comes from certain colleagues, then the script can mark it as urgent. In this regard, I think email is better than most communication methods.
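The post doesn't name the scripting tool; as a hedged illustration, procmail-style rules for the two cases described above might look like this (the signatures and the colleague's address are hypothetical):

# file messages matching a spam signature into a spam folder
:0:
* ^Subject:.*(cheap meds|lottery winner)
spam/

# flag mail from a colleague as urgent
:0:
* ^From:.*colleague@example\.org
urgent/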

Other forms of communication have their niche, but they also have their limitations. For example:

  • Phone. If the phone is there for my convenience, then why do I have to drop everything to answer it? (Dropping everything is not convenient.) And I have never had an answering machine show me the subject of a call before listening to it. Most answering machines require you to listen to messages in the order they were received.

  • Chat rooms. Does anyone still use IRC or Jabber/XMPP? Real-time chat rooms are good if everyone is online at the same time. But if we’re all online and working together on a project, then it is just as easy to do a conference call on the phone, via Skype, or using any of those other VoIP protocols. Then again, most chat rooms do have ways to log conversations — which can be great for documenting progress.
  • Twitter. You rarely see details in 140 characters. It’s also hard to go back and see previous messages. And if you are following lots of people (or a few prolific people), then you might miss something important. (I view Twitter like the UDP of social communications… it’s alright to miss a few packets.)
  • Text messages. These are almost as bad as Twitter. At least with Twitter, I’m not charged per message.
  • Message boards. Whether it’s forum software, comments on a blog, or a private wall page at Facebook, message boards are everywhere. You can set topics, have threaded replies, etc. However, messages are restricted to members. If I am not a member of the message board, then I cannot leave you a message. (Message boards without membership requirements are either moderated or flooded with spam.) And there may be no easy way for someone to search or recall previous discussions.
  • Private messages. LinkedIn, Facebook, Flickr, Imgur, Reddit… Most services have ways to send private messages between members. This is fine if everyone you know uses those services. But messages are limited to the service.

In contrast, email permits large messages to be sent in a timely manner to people who use different services. If I cannot get to the message immediately, then it will sit in my mailbox — I will get to it when it is convenient. I can use my home email system to write to friends, regardless of whether they use Gmail, Yahoo, or Facebook. There are even email-to-anything and anything-to-email crossover systems, like If-this-then-that. Even Google Voice happily sends me email when someone leaves a message. (Google Voice also tries to translate the voice mail to text in the email. I know it’s from my brother when Google writes, “Unable to transcribe this message.”)

Clear Notifications

As automated tasks go, it is very common to have email sent as a notification. My RAID emails me monthly with the current status (“everything’s shiny!”) When one of my Linux servers had a memory failure, it emailed me.

Over at FotoForensics, I built an internal messaging system. As an administrator, I get notices about certain system events. I’ve even linked these messages with email — administrators get email when something important is queued up and needs a response. This really helps simplify maintenance — I usually get an email from the system every few days.

When users submit comments, I get a message. And I’ve designed the system to allow me to respond to the user via email. (This is why the comment form asks for an email address.) For the FotoForensics Lab service, I even configured a double-opt-in system so users can request accounts without my assistance.
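A double-opt-in message needs an unguessable confirmation token; a minimal shell sketch of generating one (the confirmation URL is hypothetical, not the site's actual endpoint):

```shell
# 16 random bytes, hex-encoded: a 32-character token.
token=$(head -c 16 /dev/urandom | od -An -tx1 | tr -d ' \n')
# Embed it in the confirmation link mailed to the requester.
echo "https://lab.fotoforensics.example/confirm?token=$token"
```

The account is only created when someone follows the link bearing a token that the server previously issued.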

And therein lies a problem… The easier it is to send messages, the easier it is to abuse it with spam. Over the decades, people have employed layers upon layers of spam detectors and heuristics to mitigate abuse.

With all of the layers of anti-spam crap that people use, creating a system that can send a status email or a double-opt-in message to anyone who requests contact can get complicated. It’s not as simple as calling a PHP function to send an email. In my experience, the PHP mail() function will succeed less than half of the time; usually the PHP mail() messages get discarded by spam filters.

Enabling Email

Even though my system works most of the time, I still have to fight with it occasionally in order to make sure that users receive responses to inquiries. Some of the battles I have had to fight so far:

  • Blacklists. Before you begin, make sure that your network address is not on any blacklists. If your network address was previously used by a spammer, then you’ve inherited a blacklisted address and nobody will receive your emails. Getting removed from blacklists ranges from difficult to impossible. And as long as your system is blacklisted, most people will not receive your emails.

  • Scripts. Lots of spammers use scripts. If you use a pre-packaged script to generate outgoing email, then it is likely to be identified as spam. This happens because different tools generate different signatures. If your tool matches the profile of a tool known to send spam, then it will be filtered. And chances are really good that spammers have already abused any pre-packaged scripts for sending spam.
  • Real mail. The email protocols (SMTP and ESMTP) are pretty straightforward. However, most scripts to send email only do the bare minimum. In particular, they usually don’t handle email errors very well. I ended up using a PHP script that communicates with my real mail server (Postfix). The postfix server properly delivers email and handles errors correctly. I’ve configured my postfix server to send email, but it never receives email. (Incoming email goes to a different mail server.)
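A send-only Postfix instance like the one described above can be sketched as a main.cf fragment; the values are illustrative assumptions, not the author's actual configuration:

# /etc/postfix/main.cf (fragment)
myhostname = mail.fotoforensics.example   # hypothetical hostname
inet_interfaces = loopback-only           # accept submissions only from local scripts
mydestination =                           # no local delivery; everything relays out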

At this point — with no blacklists, custom scripts, and a real outgoing email server — I was able to send email replies to about half of the people who requested service information. (Replying to people who fill out the contact form or who request a Lab account.) However, I still could not send email to anyone using Gmail, AOL, Microsoft Outlook, etc.

  • SPF. By itself, email is unauthenticated; anyone can send email as anyone. There are a handful of anti-spam additions to email that attempt to authenticate the sender. One of the most common is SPF, the Sender Policy Framework (originally “Sender Permitted From”). This is a DNS record (TXT field) that lists the network addresses allowed to send email on behalf of a domain. If the recipient server sees that the sender does not match the SPF record, then the message can be immediately discarded as spam.

    Many professional email services require an SPF record; without it, they assume the email is unauthenticated and from a spammer. Enabling SPF brought me close to the 90% deliverable mark: email could be delivered to Gmail, but not to AOL or anyone using the Microsoft Outlook service.

  • Reverse hostnames. When emailing users at AOL, the AOL server would respond with a cryptic error message:

    521 5.2.1 : AOL will not accept delivery of this message.

    This is not one of AOL’s documented error codes. It took a lot of research, but I finally discovered that this is related to the reverse network address. Both AOL and Microsoft require the sender’s reverse hostname to resolve to the sender’s domain name. (Or in the case of AOL, it can resolve to anything except an IP address. If a lookup of your network address returns a hostname with the network address in it, then AOL will reject the email.) If you have a residential service (like Comcast or Verizon), then the reverse DNS lookup will not be permitted — you cannot send email to AOL directly from most residential ISPs. Fortunately, my hosting provider for FotoForensics was able to set my reverse DNS so I could send email from the FotoForensics server.

  • Microsoft. With everything else done, I could send email to all users except those who use the Microsoft Outlook service. The error message Microsoft returns says (with recipient information redacted):
    <recipient@recipient.domain>: host[213.x.x.x] said: 550 5.7.1
    Service unavailable; Client host [65.x.x.x] blocked using FBLW15; To
    request removal from this list please forward this message to (in reply to RCPT TO command)

    This cryptic warning is Microsoft’s way of saying that I need to contact them first and get permission to email their users.

    In my experience, writing in to ask permission gets you nowhere. Most services won't answer the phone, will ignore emails about delivery issues, and won't help you at all. With Microsoft, however, I really had no other option; they didn't give me any other way to contact them.

    With nothing left to lose, I bounced the entire email with the error message, original email, and headers, to Microsoft. I was actually amazed when I received an automated email with a trouble ticket number and telling me to wait 24 hours. I was even more amazed when, after 10 hours, I received a confirmation that the block was removed. I resent the FotoForensics contact form reply to the user… and it was delivered.

While I am thrilled to see that my server can now send replies to requests at every major service, I certainly hope other services do not adopt the Microsoft method. If my server needs to send replies to users at 100 different domains, then I do not want to spend time contacting each domain first and begging for permission to contact their users.

(Fortunately, this worked. If writing to Microsoft had not worked, then I was prepared to detect email addresses that use Outlook as a service and just blacklist them. “Please use a different email service since your provider will not accept email from us.”)
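The two DNS-related items above (the SPF record and the reverse hostname) can be checked from a shell; the domain and address used here are examples, not the author's:

# the SPF policy is published as a TXT record on the sending domain
dig +short TXT fotoforensics.example
# the sending address's reverse DNS should resolve to a real hostname
dig +short -x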

The dog ate it

While email is a convenient form of communication, I still have no idea whether I’ve fixed all of the delivery issues. Many sites will silently drop email rather than sending back a delivery error notice. Although I believe my outgoing email system now works with Gmail, Microsoft, Yahoo, AOL, and most other providers, the message may still be filtered somewhere down the line. (Email lacks a reliable delivery confirmation system. Hacks like web bugs and return receipts are unsupported by many email services.) It’s very possible for a long reply to never reach the recipient, and I’ll never know it.

Currently, the site sends about a half-dozen emails per day (max). These are responses to removal and unban requests, replies to comments, and double-opt-in messages (you requested an account; click on this link to confirm and create the account). I honestly never see a future when I will use email to promote new services or features. (Having spent decades tracking down spammers and developing anti-spam solutions, I cannot see myself joining the dark side.)

Of course, email is not the only option for communication. I’ve just started learning about WebRTC and HTML5 — I want to be able to give online training sessions and host voice calls via the web browser.

Schneier on Security: More on the NSA’s Capabilities

This post was syndicated from: Schneier on Security and was written by: schneier. Original post: at Schneier on Security

Ross Anderson summarizes a meeting in Princeton where Edward Snowden was “present.”

Third, the leaks give us a clear view of an intelligence analyst’s workflow. She will mainly look in Xkeyscore which is the Google of 5eyes comint; it’s a federated system hoovering up masses of stuff not just from 5eyes own assets but from other countries where the NSA cooperates or pays for access. Data are “ingested” into a vast rolling buffer; an analyst can run a federated search, using a selector (such as an IP address) or fingerprint (something that can be matched against the traffic). There are other such systems: “Dancing oasis” is the middle eastern version. Some xkeyscore assets are actually compromised third-party systems; there are multiple cases of rooted SMS servers that are queried in place and the results exfiltrated. Others involve vast infrastructure, like Tempora. If data in Xkeyscore are marked as of interest, they’re moved to Pinwale to be memorialised for 5+ years. This is one function of the MDRs (massive data repositories, now more tactfully renamed mission data repositories) like Utah. At present storage is behind ingestion. Xkeyscore buffer times just depend on volumes and what storage they managed to install, plus what they manage to filter out.

As for crypto capabilities, a lot of stuff is decrypted automatically on ingest (e.g. using a “stolen cert,” presumably a private key obtained through hacking). Else the analyst sends the ciphertext to CES and they either decrypt it or say they can’t. There’s no evidence of a “wow” cryptanalysis; it was key theft, or an implant, or a predicted RNG or supply-chain interference. Cryptanalysis has been seen of RC4, but not of elliptic curve crypto, and there’s no sign of exploits against other commonly used algorithms. Of course, the vendors of some products have been coopted, notably skype. Homegrown crypto is routinely problematic, but properly implemented crypto keeps the agency out; gpg ciphertexts with RSA 1024 were returned as fails.


What else might we learn from the disclosures when designing and implementing crypto? Well, read the disclosures and use your brain. Why did GCHQ bother stealing all the SIM card keys for Iceland from Gemalto, unless they have access to the local GSM radio links? Just look at the roof panels on US or UK embassies, that look like concrete but are actually transparent to RF. So when designing a protocol ask yourself whether a local listener is a serious consideration.


On the policy front, one of the eye-openers was the scale of intelligence sharing — it’s not just 5 eyes, but 15 or 35 or even 65 once you count all the countries sharing stuff with the NSA. So how does governance work? Quite simply, the NSA doesn’t care about policy. Their OGC has 100 lawyers whose job is to “enable the mission”; to figure out loopholes or new interpretations of the law that let stuff get done. How do you restrain this? Could you use courts in other countries, that have stronger human-rights law? The precedents are not encouraging. New Zealand’s GCSB was sharing intel with Bangladesh agencies while the NZ government was investigating them for human-rights abuses. Ramstein in Germany is involved in all the drone killings, as fibre is needed to keep latency down low enough for remote vehicle pilots. The problem is that the intelligence agencies figure out ways to shield the authorities from culpability, and this should not happen.


The spooks’ lawyers play games saying for example that they dumped content, but if you know IP address and file size you often have it; and IP address is a good enough pseudonym for most intel / LE use. They deny that they outsource to do legal arbitrage (e.g. NSA spies on Brits and GCHQ returns the favour by spying on Americans). Are they telling the truth? In theory there will be an MOU between NSA and the partner agency stipulating respect for each others’ laws, but there can be caveats, such as a classified version which says “this is not a binding legal document.” The sad fact is that law and legislators are losing the capability to hold people in the intelligence world to account, and also losing the appetite for it.

Worth reading in full.

Krebs on Security: Who’s Scanning Your Network? (A: Everyone)

This post was syndicated from: Krebs on Security and was written by: BrianKrebs. Original post: at Krebs on Security

Not long ago I heard from a reader who wanted advice on how to stop someone from scanning his home network, or at least a recommendation about whom he should report the scanning to. I couldn't believe that people still cared about scanning, and I told him as much: these days there are countless entities, some benign and research-oriented, some less benign, that are continuously mapping and cataloging virtually every device that's put online.

One of the more benign is a public data repository of research findings collected through continuous scans of the public Internet. The project, hosted by the ZMap Team at the University of Michigan, includes huge, regularly updated result sets grouped around scans for Internet hosts running some of the most commonly used “ports” or network entryways, such as Port 443 (think Web sites protected by the lock icon denoting SSL/TLS Web site encryption); Port 21, or file transfer protocol (FTP); and Port 25, or simple mail transfer protocol (SMTP), used by many businesses to send email.
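Scans like the ones in that repository can be reproduced with the ZMap tool itself; a minimal sketch, where the port choice, bandwidth cap, and output file are illustrative:

# scan the public IPv4 space for hosts answering on port 443,
# with the send rate capped at 10 Mbps
zmap -p 443 -B 10M -o https-hosts.txt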

When I was first getting my feet wet on the security beat roughly 15 years ago, the practice of scanning networks you didn't own, looking for the virtual equivalent of open doors and windows, was still fairly frowned upon, if not grounds for legal trouble. These days, complaining about being scanned is about as useful as griping that the top of your home is viewable via Google Earth. Putting devices on the Internet and then hoping that someone or something won't find them is one of the most futile exercises in security-by-obscurity.

To get a gut check on this, I spoke at length last week with University of Michigan researchers Michael D. Bailey (MB) and Zakir Durumeric (ZD) about their ongoing and very public project to scan all the Internet-facing things. I was curious to get their perspective on how public perception of widespread Internet scanning has changed over the years, and how targeted scanning can actually lead to beneficial results for Internet users as a whole.

MB: Because of the historic bias against scanning and this debate between disclosure and security-by-obscurity, we’ve approached this very carefully. We certainly think that the benefits of publishing this information are huge, and that we’re just scratching the surface of what we can learn from it.

ZD: Yes, there are close to two dozen papers published now based on broad, Internet-wide scanning. People who are more focused on comprehensive scans tend to be the more serious publications that are trying to do statistical or large-scale analyses that are complete, versus just finding devices on the Internet. It’s really been in the last year that we’ve started ramping up and adding scans [to the site] more frequently.

BK: What are your short- and long-term goals with this project?

ZD: I think long-term we do want to add coverage of additional protocols. A lot of what we’re focused on is different aspects of a protocol. For example, if you’re looking at hosts running the “https://” protocol, there are many different ways you can ask questions depending on what perspective you come from. You see different attributes and behavior. So a lot of what we’ve done has revolved around https, which is of course hot right now within the research community.

MB: I’m excited to add other protocols. There are a handful of protocols that are critical to the operation of the Internet, and I’m very interested in understanding the deployment of DNS, BGP, and TLS’s intersection with SMTP. Right now, there’s a pretty long tail to all of these protocols, and so that’s where it starts to get interesting. We’d like to start looking at things like programmable logic controllers (PLCs) and things that are responding from industrial control systems.

ZD: One of the things we’re trying to pay more attention to is the world of embedded devices, or this ‘Internet of Things’ phenomenon. As Michael said, there are also industrial protocols, and there are different protocols that these embedded devices are supporting, and I think we’ll continue to add protocols around that class of devices as well because from a security perspective it’s incredibly interesting which devices are popping up on the Internet.

BK: What are some of the things you’ve found in your aggregate scanning results that surprised you?

ZD: I think one thing in the “https://” world that really popped out was we have this very large certificate authority ecosystem, and a lot of the attention is focused on a small number of authorities, but actually there is this very long tail — there are hundreds of certificate authorities that we don’t really think about on a daily basis, but that still have permission to sign for any Web site. That’s something we didn’t necessarily expect. We knew there were a lot, but we didn’t really know what would come up until we looked at those.

There also was work we did a couple of years ago on cryptographic keys and how those are shared between devices. In one example, primes were being shared between RSA keys, and because of this we were able to factor a large number of keys, but we really wouldn’t have seen that unless we started to dig into that aspect [their research paper on this is available here].

MB: One of the things we’ve been surprised about is that when we measure these things at scale, in a way that hasn’t been done before, these kinds of emergent behaviors often become clear.

BK: Talk about what you hope to do with all this data.

ZD: We were involved a lot in the analysis of the Heartbleed vulnerability. And one of the surprising developments there wasn’t that there were lots of people vulnerable, but it was interesting to see who patched, how and how quickly. What we were able to find was by taking the data from these scans and actually doing vulnerability notifications to everybody, we were able to increase patching for the Heartbleed bug by 50 percent. So there was an interesting kind of surprise there, not what you learn from looking at the data, but in terms of what actions do you take from that analysis? And that’s something we’re incredibly interested in: Which is how can we spur progress within the community to improve security, whether that be through vulnerability notification, or helping with configurations.

BK: How do you know your notifications helped speed up patching?

MB: With the Heartbleed vulnerability, we took the known vulnerable population from scans and ran an A/B test. We split the vulnerable population in half and notified one half while not notifying the other half, and then measured the difference in patching rates between the two populations. After a week, we did end up notifying the second population — the other half.

BK: How many people did you notify after going through the data from the Heartbleed vulnerability scanning? 

ZD: We took everyone on the IPv4 address space, found those that were vulnerable, and then contacted the registered abuse contact for each block of IP space. We used data from 200,000 hosts, which corresponded to 4,600 abuse contacts, and then we split those into an A/B test. [Their research on this testing was published here].

So, that’s the other thing that’s really exciting about this data. Notification is one thing, but the other is we’ve been building models that are predictive of organizational behavior. So, if you can watch, for example, how an organization runs their Web server, how they respond to certificate revocation, or how fast they patch — that actually tells you something about the security posture of the organization, and you can start to build models of risk profiles of those organizations. It moves away from this sort of patch-and-break or patch-and-pray game we’ve been playing. So, that’s the other thing we’ve been starting to see, which is the potential for being more proactive about security.

BK: How exactly do you go about the notification process? That’s a hard thing to do effectively and smoothly even if you already have a good relationship with the organization you’re notifying….

MB: I think one of the reasons why the Heartbleed notification experiment was so successful is that we did notifications on the heels of a broad vulnerability disclosure. The press and the general atmosphere and culture provided the impetus for people to be excited about patching. The response we received to those notifications was overwhelmingly positive. A lot of people we reached out to said, ‘Hey, this is great, please scan me again, and let me know if I’m patched.’ Pretty much everyone was excited to have the help.

Another interesting challenge was that we did some filtering as well in cases where the IP address had no known patches. So, for example, where we got information from a national CERT [Computer Emergency Response Team] that this was an embedded device for which there was no patch available, we withheld that notification because we felt it would do more harm than good since there was no path forward for them. We did some aggregation as well, because it was clear there were a lot of DSL and dial-up pools affected, and we did some notifications to ISPs directly.

BK: You must get some pushback from people about being included in these scans. As for the idea that scanning is inherently bad, or should somehow prompt some kind of reaction in and of itself — do you think that ship has sailed?

ZD: There is some small subset that does have issues. What we try to do with this is be as transparent as possible. For all of the hosts we use for scanning, if you look them up in WHOIS records or just visit them with a browser, it will tell you right away that this machine is part of this research study, here’s the information we’re collecting, and here’s how you can be excluded. A very small percentage of people who visit that page will read it and then contact us and ask to be excluded. If you send us an email [and request removal], we’ll remove you from all future scans. A lot of this comes down to education; most people to whom we explain our process and motives are okay with it.
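The opt-out process Durumeric describes implies an exclusion list that is consulted before any scan. A minimal sketch of how such a filter might work, using Python’s standard `ipaddress` module (the network blocks and targets below are hypothetical, drawn from documentation address ranges):

```python
import ipaddress

# Hypothetical blocks whose owners have requested exclusion.
exclusions = [
    ipaddress.ip_network("198.51.100.0/24"),
    ipaddress.ip_network("203.0.113.64/26"),
]

def is_excluded(addr: str) -> bool:
    """Return True if the address falls inside any excluded block."""
    ip = ipaddress.ip_address(addr)
    return any(ip in net for net in exclusions)

# Drop excluded targets before the scan ever touches them.
targets = ["198.51.100.7", "192.0.2.1", "203.0.113.70", "203.0.113.5"]
to_scan = [t for t in targets if not is_excluded(t)]
print(to_scan)  # ['192.0.2.1', '203.0.113.5']
```

Checking membership against network blocks rather than individual addresses matters here, because exclusion requests typically come from abuse contacts responsible for entire prefixes, not single hosts.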

BK: Are those that object and ask to be removed more likely to be companies and governments, or individuals?

ZD: It’s a mix of all of them. I do remember offhand there were a fair number of academic institutions and government organizations, but there were a surprising number of home users. Actually, when we broke down the numbers last year (PDF), the largest category was small to mid-sized businesses. This time last year, we had excluded only 157 organizations that had asked for it.

BK: Was there any pattern to those that asked to be excluded?

ZD: I think that actually is somewhat interesting: The exclusion requests aren’t generally coming from large corporations, which likely notice our scanning but don’t have an issue with it. A lot of the emails we get are from small businesses and organizations that really don’t know how to interpret their logs, and often just choose the most conservative route.

So we’ve been scanning for several years now, and I think when we originally started scanning, we expected all the people who were watching for this to contact us at once and say, ‘Please exclude us.’ And then we sort of expected that the number of people who’d ask to be excluded would plateau, and we wouldn’t have problems again. But what we’ve seen is almost the exact opposite. We still get [exclusion request] emails each day, but what we’re really finding is people aren’t discovering these scans proactively. Instead, they’re going through their logs while trying to troubleshoot some other issue, and they see a scan coming from us there and they don’t know who we are or why we’re contacting their servers. And so it’s not the organizations that are watching — it’s the ones who really aren’t watching who are contacting us.

BK: Do you guys go back and delete historic records associated with network owners that have asked to be excluded from scans going forward?

ZD: At this point we haven’t gone back and removed data. One reason is that there are published research results based on those data sets, and so it’s very hard to change that information after the fact: if another researcher went back and tried to confirm an experiment or perform something similar, there would be no easy way of doing that.

BK: Is this what you’re thinking about for the future of your project? How to do more notification and build on the data you have for those purposes? Or are you going in a different or additional direction?

MB: When I think about the ethics of this kind of activity, I have a very utilitarian view: I’m interested in doing as much good as we possibly can with the data we have. I think that lies in notifications, being proactive, helping organizations that run networks to better understand what their external posture looks like, and in building better safe defaults. But I’m most interested in a handful of core protocols that are under-serviced and not well understood. And so I think we should spend a majority of our effort focusing on a small handful of those, including BGP, TLS, and DNS.

ZD: In many ways, we’re just kind of at the tip of this iceberg. We’re just starting to see what types of security questions we can answer from these large-scale analyses. I think in terms of notifications, it’s very exciting that there are things beyond the analysis that we can use to actually trigger actions, but that’s something that clearly needs a lot more analysis. The challenge is learning how to do this correctly. Every time we look at another protocol, we start seeing these weird trends and behavior we never noticed before. With every protocol we look at there are these endless questions that seem to need to be answered. And at this point there are far more questions than we have hours in the day to answer.