Posts tagged ‘fbi’

Schneier on Security: Bizarre High-Tech Kidnapping

This post was syndicated from: Schneier on Security and was written by: schneier. Original post: at Schneier on Security

This is a story of a very high-tech kidnapping:

FBI court filings unsealed last week showed how Denise Huskins’ kidnappers used anonymous remailers, image sharing sites, Tor, and other people’s Wi-Fi to communicate with the police and the media, scrupulously scrubbing meta data from photos before sending. They tried to use computer spyware and a DropCam to monitor the aftermath of the abduction and had a Parrot radio-controlled drone standing by to pick up the ransom by remote control.

The story also demonstrates just how effective the FBI is at tracing cell phone usage these days. They had a blocked call from the kidnappers to the victim’s cell phone. First they served a search warrant on AT&T to get the actual calling number. After learning that it was an AT&T prepaid Tracfone, they called AT&T to find out where the burner was bought, what its serial numbers were, and the locations the calls were made from.

The FBI reached out to Tracfone, which was able to tell the agents that the phone was purchased from a Target store in Pleasant Hill on March 2 at 5:39 pm. Target provided the bureau with a surveillance-cam photo of the buyer: a white male with dark hair and medium build. AT&T turned over records showing the phone had been used within 650 feet of a cell site in South Lake Tahoe.

Here’s the criminal complaint. It borders on surreal. Were it an episode of CSI:Cyber, you would never believe it.

Krebs on Security: The Wheels of Justice Turn Slowly

This post was syndicated from: Krebs on Security and was written by: BrianKrebs. Original post: at Krebs on Security

On the evening of March 14, 2013, a heavily-armed police force surrounded my home in Annandale, Va., after responding to a phony hostage situation that someone had alerted authorities to at our address. I’ve recently received a notice from the U.S. Justice Department stating that one of the individuals involved in that “swatting” incident has pleaded guilty to a felony conspiracy charge.

“A federal investigation has revealed that several individuals participated in a scheme to commit swatting in the course of which these individuals committed various federal criminal offenses,” reads the DOJ letter, a portion of which is here (PDF). “You were the victim of the criminal conduct which resulted in swattings in that you were swatted.”

The letter goes on to state that one of the individuals who participated in the scheme has pleaded guilty to conspiracy charges (Title 18, Section 371) in federal court in Washington, D.C.

The notice offers little additional information about the individual who pleaded guilty or about his co-conspirators, and the case against him is sealed. It could be the individual identified at the conclusion of this story, or someone else. In any case, my own digging on this investigation suggests the government is in the process of securing charges or guilty pleas in connection with a group of young men who ran the celebrity “doxing” Web site exposed[dot]su (later renamed exposed[dot]re).

As I noted in a piece published just days after my swatting incident, the attack came not long after I wrote a story about the site, which was posting the Social Security numbers, previous addresses, phone numbers and credit reports on a slew of high-profile individuals, from the director of the FBI to Kim Kardashian, Bill Gates and First Lady Michelle Obama. Many of those individuals whose personal data were posted at the site also were the target of swatting attacks, including P. Diddy, Justin Timberlake and Ryan Seacrest.

The Web site exposed[dot]su featured the personal data of celebrities and public figures.

Sources close to the investigation say Yours Truly was targeted because this site published a story correctly identifying the source of the personal data that the hackers posted on exposed[dot]su. According to my sources, the young men, nearly all of whom are based here in the United States, obtained the personal data after hacking into a now-defunct online identity theft service called ssndob[dot]ru.

Investigative reporting first published on KrebsOnSecurity in September 2013 revealed that the same miscreants controlling ssndob[dot]ru (later renamed ssndob[dot]ms) siphoned personal data from some of America’s largest consumer and business data aggregators, including LexisNexis, Dun & Bradstreet and Kroll Background America.

The administration page of ssndob[dot]ru. Note the logged-in user is the administrator.

I look forward to the day that the Justice Department releases the names of the individuals responsible for these swatting incidents, for running exposed[dot]su, and for hacking the ssndob[dot]ru ID theft service. While that identity theft site went offline in 2013, several competing services have unfortunately sprung up in its wake, offering the ability to pull Social Security numbers, dates of birth, previous addresses and credit reports on virtually all Americans.

Further reading:

Who Built the Identity Theft Service SSNDOB[dot]RU? 

Credit Reports Sold for Cheap in the Underweb

Data Broker Giants Hacked by ID Theft Service

Data Broker Hackers Also Compromised NW3C

Swatting Incidents Tied to ID Theft Sites?

Toward a Breach Canary for Data Brokers

How I Learned to Stop Worrying and Embrace the Credit Freeze

Schneier on Security: Using Secure Chat

This post was syndicated from: Schneier on Security and was written by: schneier. Original post: at Schneier on Security

Micah Lee has a good tutorial on installing and using secure chat.

To recap: We have installed Orbot and connected to the Tor network on Android, and we have installed ChatSecure and created an anonymous secret identity Jabber account. We have added a contact to this account, started an encrypted session, and verified that their OTR fingerprint is correct. And now we can start chatting with them with an extraordinarily high degree of privacy.

FBI Director James Comey, UK Prime Minister David Cameron, and totalitarian governments around the world all don’t want you to be able to do this.

Krebs on Security: The Darkode Cybercrime Forum, Up Close

This post was syndicated from: Krebs on Security and was written by: BrianKrebs. Original post: at Krebs on Security

By now, many of you loyal KrebsOnSecurity readers have seen stories in the mainstream press about the coordinated global law enforcement takedown of Darkode[dot]me, an English-language cybercrime forum that served as a breeding ground for botnets, malware and just about every other form of virtual badness. This post is an attempt to distill several years’ worth of lurking on this forum into a narrative that hopefully sheds light on the individuals apprehended in this sting and the cybercrime forum scene in general.

To tell this tale completely would take a book the size of The Bible, but it’s useful to note that the history of Darkode — formerly darkode[dot]com — traces several distinct epochs that somewhat neatly track the rise and fall of the forum’s various leaders. What follows is a brief series of dossiers on those leaders, as well as a look at who these people are in real life.


Darkode began almost eight years ago as a pet project of Matjaz Skorjanc, a now-36-year-old Slovenian hacker best known under the hacker aliases “Iserdo” and “Netkairo.” Skorjanc was one of several individuals named in the complaints published today by the U.S. Justice Department.

Butterfly Bot customers wonder why Iserdo isn’t responding to support requests. He was arrested hours before.

Iserdo was best known as the author of the ButterFly Bot, a plug-and-play malware strain that allowed even the most novice of would-be cybercriminals to set up a global cybercrime operation capable of harvesting data from thousands of infected PCs, and using the enslaved systems for crippling attacks on Web sites. Iserdo was arrested by Slovenian authorities in 2010. According to investigators, his ButterFly Bot kit sold for prices ranging from $500 to $2,000.

In May 2010, I wrote a story titled Accused Mariposa Botnet Operators Sought Jobs at Spanish Security Firm, which detailed how Skorjanc and several of his associates actually applied for jobs at Panda Security, an antivirus and security firm based in Spain. At the time, Skorjanc and his buddies were already under the watchful eye of the Spanish police.


Following Iserdo’s arrest, control of the forum fell to a hacker known variously as “Mafi,” “Crim” and “Synthet!c,” who according to the U.S. Justice Department is a 27-year-old Swedish man named Johan Anders Gudmunds. Mafi is accused of serving as the administrator of Darkode, and creating and selling malware that allowed hackers to build botnets. The Justice Department also alleges that Gudmunds operated his own botnet, “which at times consisted of more than 50,000 computers, and used his botnet to steal data from the users of those computers on approximately 200,000,000 occasions.”

Mafi was best known for creating the Crimepack exploit kit, a prepackaged bundle of commercial crimeware that attackers can use to booby-trap hacked Web sites with malicious software. Mafi’s stewardship over the forum coincided with the admittance of several high-profile Russian cybercriminals, including “Paunch,” an individual arrested in Russia in 2013 for selling a competing and far more popular exploit kit called Blackhole.

Paunch worked with another Darkode member named “J.P. Morgan,” who at one point maintained an $800,000 budget for buying so-called “zero-day vulnerabilities,” critical flaws in widely-used commercial software like Flash and Java that could be used to deploy malicious software.

Darkode admin “Mafi” explains his watermarking system.

Perhaps unsurprisingly, Mafi’s reign as administrator of Darkode coincided with the massive infiltration of the forum by a number of undercover law enforcement investigators, as well as several freelance security researchers (including this author).

As a result, Mafi spent much of his time devising new ways to discover which user accounts on Darkode were those used by informants, feds and researchers, and which were “legitimate” cybercriminals looking to ply their wares.

For example, in mid-2013 Mafi and his associates cooked up a scheme to create a fake sales thread for a zero-day vulnerability — all in a bid to uncover which forum participants were researchers or feds who might be lurking on the forum.

That plan, which relied on a clever watermarking scheme designed to “out” any forum members who posted screen shots of the forum online, worked well but also gave investigators key clues about the forum’s hierarchy and reporting structure.


Mafi worked quite closely with another prominent Darkode member nicknamed “Fubar,” and together the two of them advertised sales of a botnet crimeware package called Ngrbot (according to Mafi’s private messages on the forum, this was short for “Niggerbot.” Oddly enough, the password databases from several of Mafi’s accounts on hacked cybercrime forums would all include variations on the word “nigger” in some form). Mafi also advertised the sale of botnets based on “Grum,” a spam botnet whose source code was leaked in 2013.


Conspicuously absent from the Justice Department’s press release on this takedown is any mention of Darkode’s most recent administrator — a hacker who goes by the handle “Sp3cialist.”

Better known to Darkode members as “Sp3c,” this individual’s principal contribution to the forum seems to have been a desire to massively expand its membership, along with an obsession with purging the community of anyone who might emit even a whiff of being a fed or researcher.

The personal signature of Sp3cialist.

Sp3c is widely known as a core member of the Lizard Squad, a group of mostly low-skilled miscreants who specialize in launching distributed denial-of-service attacks (DDoS) aimed at knocking Web sites offline.

In late 2014, the Lizard Squad took responsibility for launching a series of high-profile DDoS attacks that knocked offline the online gaming networks of Sony and Microsoft for the majority of Christmas Day.

In the first few days of 2015, KrebsOnSecurity was taken offline by a series of large and sustained denial-of-service attacks apparently orchestrated by the Lizard Squad. As I noted in a previous story, the booter service — lizardstresser[dot]su — is hosted at an Internet provider in Bosnia that is home to a large number of malicious and hostile sites. As detailed in this story, the same botnet that took Sony and Microsoft offline was built using a global network of hacked wireless routers.

That provider happens to be on the same “bulletproof” hosting network advertised by “sp3c1alist,” the administrator of the cybercrime forum Darkode. At the time, Darkode and LizardStresser shared the same Internet address.

Another key individual named in the Justice Department’s complaint against Darkode is a hacker known only to most in the underground as “KMS.” The government says KMS is a 28-year-old from Opelousas, Louisiana named Rory Stephen Guidry, who used the Jabber instant message address “” Having interacted with this individual on numerous occasions, I’d be remiss if I didn’t at least explain why this person is at once the least culpable and perhaps most interesting of the group named in the law enforcement purge.

For the past 12 months, KMS has been involved in an effort to expose the Lizard Squad members, with varying degrees of success. To call this kid a master of social engineering is probably a disservice to the term of art itself: There are few individuals I would consider more skilled at tricking people into divulging information that is not in their best interests than this guy.

Near as I can tell, KMS has worked assiduously (for his own reasons, no doubt) to expose the people behind the Lizard Squad and, by extension, the core members of Darkode. Unfortunately for KMS, his activities also appear to have ensnared him in this investigation.

To be clear, nobody is saying KMS is a saint. KMS’s best friend, a hacker from Kentucky named Ryan King (a.k.a. “Starfall” and a semi-frequent commenter on this blog), says KMS routinely had trouble seeing the line between exposing others and involving himself in their activities. The kid was a master of social engineering, almost without peer. Here’s one recording of him making a fake emergency call to the FBI, eerily disguising his voice as that of President Obama.

For example, KMS is rumored to have played a part in exposing the Lizard Squad’s February 2015 hijack of’s domain in Vietnam. The message left behind in that crime suggested this author was somehow responsible, along with Sp3c and a Rory Andrew Godfrey, the only name that KMS was known under publicly until this week’s law enforcement action.

“As far as I know, I’m the only one who knew his real name,” said King, who described himself as a close personal friend and longtime acquaintance of Guidry. “The only botnets that he operated were those that he social engineered out of [less skilled hackers], but even those he was trying to get shut down. All I know is that he and I were trying to get [root] access to Darkode and destroy it, and the feds beat us to it by about a week.”

The U.S. government sees things otherwise. Included in a heavily-redacted affidavit (PDF) related to Guidry’s case are details of a pricing structure that investigators say KMS used to sell access to hacked machines (see the screenshot below).


As mentioned earlier, I could go on for volumes about the litany of cybercrimes advertised at Darkode. Instead, it’s probably best if I just leave here a living archive of screen grabs I’ve taken over the years of various discussions on the Darkode forum.

In its final days, Darkode’s true Internet address was protected from DDoS attacks and from meddlesome researchers by CloudFlare, a content distribution network that specializes in helping Web sites withstand otherwise crippling attacks. As such, it seems fitting that at least some of my personal archive of screen shots from my time on Darkode should also be hosted there. Happy hunting.

One final note: As happens with many of these takedowns, the bad guys don’t just go away: They go someplace else. In this case, that someplace else is most likely to be a Deep Web or Dark Web forum accessible only via Tor: According to chats observed from Sp3c’s public and private online accounts, the forum is getting ready to move much further underground.

Linux How-Tos and Linux Tutorials: Give Your Raspberry Pi Night Vision With the PiNoir Camera

This post was syndicated from: Linux How-Tos and Linux Tutorials and was written by: Ben Martin. Original post: at Linux How-Tos and Linux Tutorials


The Raspberry Pi and Pi 2 are economical little ARM machines which can happily run Linux. The popularity of the Raspberry Pi and compatible Pi 2 models means that a wide range of accessories is available. These accessories include the PiNoir Camera and 4D Systems’ touch-sensitive, 3.5-inch display.

The PiNoir camera is so named because it does not have an Infrared Filter (no-IR). Without an IR filter the camera can be used at night, provided you have an infrared light source. With night vision you can use the Raspberry Pi as an around-the-clock surveillance camera monitor, baby monitor, or to give vision to a robot. The PiNoir Camera comes without a case, so you might like to pick up something to help protect it.

I’ll be setting up the 4D Systems screen and then taking a look not only at the PiNoir Camera, but how well it functions in combination with an infrared light source which offers a fairly wide beam and up to 10 meters of lighting. The camera connects to the Camera Serial Interface (CSI) port on the Raspberry Pi and the screen connects to the 40-pin expansion header on the Raspberry Pi.

The 4D Systems 3.5-Inch Screen

The 4D Systems screen runs at a 480×320 (HVGA) resolution with 65k colors. Physically the screen is about the same size as the Raspberry Pi 2. The screen mounting screw tabs extend a little beyond the Raspberry Pi on the USB connector side of the device.

The datasheet for the screen mentions that you should include some shielding to prevent accidental electrical contact between components on the back of the screen and the Raspberry Pi. I found that the socket connecting the screen, together with the USB ports, provides decent support for the screen. One chip could easily be made to touch the Ethernet connector on the Raspberry Pi, so some form of non-conducting standoff would probably be advisable to stop that from happening. There are also male pins on the back of the screen; they seem to have reasonable clearance from the Raspberry Pi, but again some form of shielding would be wise. The best solution would be to create a case with mounting holes for both the Raspberry Pi and the screen, which would keep both at the correct distance from each other.

There are drivers for both the Raspberry Pi and Pi 2 models for Raspbian which include screen and touch support. The Raspberry Pi 2 has a 40-pin expansion header running down one side of it. The back of the screen has a 26-pin female header to connect to the Raspberry Pi. The first 26 pins on the Raspberry Pi 2 header are in the same configuration as the earlier Pi models. The common 26 pins are at the end farthest away from the USB sockets on the Raspberry Pi 2.

Before connecting the screen you should install the drivers for it. These are linked from the download section of the manufacturer 4D Systems’ product page. I was running Raspbian 7 (Wheezy) and used the drivers from kernel4dpi_1.3-3_pi2.deb to test the screen. Setting up the screen drivers is done by installing the kernel package as shown below. I found that if I performed an apt-get upgrade I then had to reinstall the kernel4dpi package to get the screen working again.

root@pi:~# wget https://.../kernel4dpi_1.3-3_pi2.deb
root@pi:~# dpkg -i kernel4dpi_1.3-3_pi2.deb
Enable boot to GUI [Yn]y  

After powering down the Raspberry Pi and attaching the screen, I could see the boot messages as I restarted the Raspberry Pi, but unfortunately, once booting was complete, I saw the screen backlight but no image. I was then running kernel version Linux pi 3.18.9-v7+. After digging around, I noticed the following line in /etc/rc.local, which should have brought up an X session on a framebuffer device backed by the screen.

sudo -u pi FRAMEBUFFER=/dev/fb1 startx &

From an SSH session I decided to run startx with the selected FRAMEBUFFER and the screen came to life with a nice desktop. My installation of Raspbian had S05kdm in its default runlevel (2) startup. Disabling S05kdm and rebooting brought up a graphical session right after boot as one would hope.

The screen provides a framebuffer device which is quite handy, as it allows you to view text, images, and video without using a full desktop if that is what you want. The following commands will view an image or video directly on the screen. Toolkits such as Qt will also let you run on a framebuffer.

root@pi:/usr/share/pixmaps# fbi -T 2 -d /dev/fb1 MagicLinuxPenguins.png 
root@pi:~# mplayer -vo fbdev:/dev/fb1  -vf scale=480:320  test.mkv

Touch calibration happens at two levels: using ts_calibrate on the framebuffer and xinput_calibrator for the X Window session. Calibration in both cases is fairly simple, clicking on four or five positions on the screen when asked. The screen orientation can be changed to any of four rotations using the /boot/cmdline.txt file. All of these procedures are well documented in the datasheet for the screen.
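As a rough sketch, the two calibration steps look like the commands below. The input device path and display number are assumptions here; check /proc/bus/input/devices for the real touch device and the screen datasheet for the exact invocation.

```shell
# Console/framebuffer calibration using tslib's ts_calibrate.
# The TSLIB_* variables point tslib at the 4dpi framebuffer and the
# touchscreen input device (event0 is an assumption).
TSLIB_FBDEVICE=/dev/fb1 TSLIB_TSDEVICE=/dev/input/event0 ts_calibrate

# Calibration for the X session, run with the display that startx created:
DISPLAY=:0 xinput_calibrator
```

Both tools walk you through tapping the crosshair targets and then print the calibration values to apply.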

Dimming and turning off the backlight can apparently be controlled using GPIO18 and an exposed /sys/class/backlight/4dpi/brightness file. Although I had a jumper on the J1 connector of the screen in the PWM position, writing to the brightness file did not change the screen brightness while running an X session. Perhaps X itself was in charge of the screen dimming or I had a configuration issue with the Raspberry Pi 2.
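For completeness, here is a minimal sketch of poking that sysfs file from a root shell. The value range is an assumption: the backlight class exposes a max_brightness attribute you can read to find the real upper bound.

```shell
# Read the maximum value the 4dpi backlight driver accepts.
cat /sys/class/backlight/4dpi/max_brightness

# Dim the backlight (50 assumes a 0-255 style range; scale to the
# max_brightness value reported above).
echo 50 > /sys/class/backlight/4dpi/brightness

# Turn the backlight off entirely.
echo 0 > /sys/class/backlight/4dpi/brightness
```

As noted above, writes to this file had no visible effect for me while an X session was running, so your mileage may vary.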

The PiNoir Camera

The PiNoir Camera is capable of capturing 30 frames per second for 1080p video and still images up to 5 megapixels, which is 2592×1944. The camera comes connected to a board with a fairly short ribbon cable attached, which you then attach to the Raspberry Pi. The Raspberry Pi has a CSI port near the Ethernet port to connect the camera ribbon cable into. There is a clip which you pull upwards on each end; you can then move the clip backwards a little bit, allowing the ribbon to be inserted into the CSI port. The clip can then be moved back and pushed down to lock the ribbon into place. The manuals mention that the camera and ribbon are static sensitive. After grounding myself, I assembled the camera into its enclosure, connected the camera ribbon to the CSI port, and the camera worked fine.

Once the hardware is connected you might need to enable the CSI port in software. To do this run raspi-config and select Enable Camera from the menu. You will then have to reboot your Raspberry Pi.
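After the reboot you can confirm that the firmware sees the camera with the vcgencmd utility that ships with Raspbian:

```shell
# Reports whether camera support is enabled in the firmware and whether
# a camera module is detected on the CSI port.
vcgencmd get_camera
# A working setup reports: supported=1 detected=1
```

If detected=0, reseat the ribbon cable in the CSI connector and check that the camera was enabled in raspi-config.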

The raspistill program will let you get your first still image from the camera. The only thing you’ll need to tell it is where to save the image file, as shown below. A 5-second delay will precede the actual image capture, and the result will be saved into test.jpg. I found that the exposure was better with this delay; using a --timeout 100 parameter to take the image after only a 100 millisecond delay resulted in an image with much poorer exposure.

pi@pi~$ raspistill  -o test.jpg
pi@pi~$ raspistill --timeout 100 -o badlight.jpg

To test how well exposed an image was after camera startup I took a time lapse series of 10 images, one each second using the below command. It took about 3 seconds for the image to go from a rather green image to a much more color-rich image.

pi@pi ~ $ raspistill --timeout 10000 --timelapse 1000 \
                     --nopreview -n -o test%04d.jpg

The raspivid tool is a good place to start when you want to get video from the camera. Unfortunately the test.h264 file created by the first command will be without any video file container, that is, just the raw h264 video stream. This makes it a bit harder to play back with normal video players, so the MP4Box command can be used to create an mp4 file which contains the raw test.h264 stream. The test.mp4 can be played using mplayer and other tools.

  pi@pi ~ $ raspivid -t 5000 -o test.h264
  pi@pi ~ $
  pi@pi ~ $ sudo apt-get install -y gpac
  pi@pi ~ $ MP4Box -fps 30 -add test.h264 test.mp4

If you want to export the video stream over the network, gstreamer might be a more useful tool. There is a gst-rpicamsrc project which adds support for the Raspberry Pi Camera as a source to gstreamer. Unfortunately, at the time of writing gst-rpicamsrc is not packaged in the main Raspbian distribution. Installing the libgstreamer1.0-dev package allowed me to clone the gst-rpicamsrc repository and build and install that module just as the commands in its README describe. I’ve replicated the commands below for ease of use.

pi@pi ~/src $ sudo apt-get install libgstreamer1.0-dev  libgstreamer-plugins-base1.0-dev
pi@pi ~/src $ git clone
pi@pi ~/src $ cd ./gst-rpicamsrc/
pi@pi ~/src $./ --prefix=/usr --libdir=/usr/lib/arm-linux-gnueabihf/
pi@pi ~/src $ make
pi@pi ~/src $ sudo make install

Now that you have the gst-rpicamsrc module installed you can send the h264 stream over the network using the below commands. I found that even at 1080 resolution and a high profile there was very little CPU used on the Raspberry Pi2 to stream the camera. CPU usage was in the single-digit values in a top(1) display.

The Raspberry Pi supports encoding video in hardware and because the rpicamsrc is supplying h264 encoded video right from the source I suspect that the hardware encoding was being used to produce that video stream. The first two commands stream 720 or 1080 video with a slightly different encode profile. The final command should be run on the ‘mydesktop’ machine to catch the stream and view it in a window.

  pi@pi~$ gst-launch-1.0 -v rpicamsrc bitrate=1000000 \
  ! video/x-h264,width=1280,height=720,framerate=15/1 \
  ! rtph264pay config-interval=1 \
  ! udpsink host=mydesktop port=5000
  pi@pi~$ gst-launch-1.0 -v rpicamsrc bitrate=1000000 \
  ! video/x-h264,width=1920,height=1080,framerate=15/1,profile=high \
  ! rtph264pay config-interval=1 \
  ! udpsink host=mydesktop port=5000
  me@mydesktop ~$ gst-launch-1.0 udpsrc port=5000 \
  ! rtph264depay ! decodebin ! autovideosink

Night Vision with the PiNoir Camera and IR source

The datasheet for the PiNoir camera states that it wants infrared light at around 880nm wavelength. The TV6700 IR Illuminator operates at 850nm, so it might not be directly on the sweet spot that the PiNoir is expecting. The specifications of the IR illuminator are a 10 meter range and a 70 degree radiation angle. The LEDs are listed for up to 6,000 hours of use and there is an automatic on/off capability to turn off the illuminator during the day.

The TV6700 IR illuminator is a smallish round metal cylinder; at one end you can see the LEDs through a clear cover. The unit wants a 12-Volt power supply, and there is a Y splitter with one female and two male DC jacks. If your existing camera’s power supply is 12 V and has some spare power capacity, the splitter will let you grab power right off the camera and mount the IR illuminator nearby. A U-shaped bracket and some mounting screws are also included.

An initial test of the illuminator was done in a room about 5 by 5 yards. At night, with a 3-Watt LED desk light and a fairly dark LCD screen, I could just make out items such as a fan in the foreground (2 feet from the camera), but items on a bookshelf 2 yards from the camera were much harder to see: I could only read very large text on the spines of white books. Turning off the 3 W LED blackened the PiNoir camera display completely. It seems the light emitted by the monitor when viewing a dark image is not enough for the PiNoir Camera to provide much of an image by.

Leaving the 3 W normal LED light off and turning on the IR illuminator made the whole picture clearly visible in greyscale. Waving a hand in front of the camera came through in real time even though the room was dark. Using IR lighting and a camera that does not filter IR has the advantage of being able to stream fast movement properly. Using a still camera to take a sequence of longer-exposure images would likely result in a collection of motion-blurred images.

Moving to an outside setting with a large grass lawn at night, I could make out a person a little over 7 meters away from the Raspberry Pi, across almost the entire width of the captured video frame. Getting only 7 meters of visibility was perhaps the result of not precisely matching the infrared wavelength that the PiNoir camera expects.

RPi for Video Monitoring Applications

While the Raspberry Pi runs perfectly well without any display, some applications might benefit from being able to display information on demand. Because the 4D Systems 3.5 inch screen is about the same dimensions as the Raspberry Pi board it can offer a display without increasing the physical size of the Raspberry Pi or requiring external power for the screen.

Being able to produce and stream 1080 h264-encoded video using the PiNoir camera in low-light situations opens up the Raspberry Pi to video monitoring applications: a headless Raspberry Pi with a PiNoir camera, for example, makes a capable baby monitor. The main downside to using the Raspberry Pi as a monitor is that the CSI cable is rather short, especially when compared to a USB cable. If you want a Raspberry Pi at the heart of your robot then you can stream nice robot-eye camera vision over the network without a great impact on the robot’s CPU performance.
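As a sketch of how such a monitor might be scripted rather than driven from the command line, the picamera Python library (not covered above; assumed installable with apt-get install python-picamera) wraps the same hardware encoder that raspistill and raspivid use:

```python
# Grab a still and record a short h264 clip from the PiNoir camera
# using the picamera library. Resolution and timings mirror the
# raspistill/raspivid examples above; requires camera hardware.
import time

import picamera

with picamera.PiCamera() as camera:
    camera.resolution = (1920, 1080)
    camera.framerate = 30
    time.sleep(3)                        # let exposure/white balance settle
    camera.capture('still.jpg')          # 1080p still image
    camera.start_recording('clip.h264')  # raw h264; wrap with MP4Box as above
    camera.wait_recording(5)             # record for 5 seconds
    camera.stop_recording()
```

The output is the same containerless h264 stream that raspivid produces, so the MP4Box step shown earlier applies here too.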

The IR source is a must have if you plan to use the PiNoir camera for security monitoring. Being able to see who is doing what while they might be unaware of the camera or (infrared) light source helps you respond to the situation in an appropriate manner.

We would like to thank RS Components for supplying the hardware used in these articles. The PiNoir Camera, 4D Systems touch-sensitive 3.5-inch display, infrared light source, and camera case are available for purchase on RS Components’ Australian website while stocks last, or through RS Components’ main website if you’re ordering internationally.

TorrentFreak: FBI Assists Overseas Pirate Movie Site Raids

This post was syndicated from: TorrentFreak and was written by: Andy. Original post: at TorrentFreak

There are many thousands of so-called ‘pirate’ sites online today, each specializing in a particular area. Some choose to target movies or music, for example, while others take a more general approach.

What most sites have in common, however, is their appetite for content created in the United States. This means that wherever they are in the world, it’s likely that these sites will attract the attention of some of the world’s largest entertainment companies and their law enforcement partners. That scenario now appears to have played out in Romania.

According to a report from the prosecutor’s office at Romania’s High Court of Cassation and Justice, an investigation dating back to 2011 against a trio of movie and TV show streaming sites has just resulted in prosecutors and officers from organized crime units carrying out raids.

Although not detailed by officials, one of the country’s most popular streaming portals is now offline. Its visitors are being transparently redirected to a server operated by the Romanian Ministry of Justice that displays a seizure notice.

One of the most popular file-hosting sites in Romania is also down.

According to the prosecutor, authorities carried out searches at four locations including the homes of several suspects and companies believed to be offering services to the sites.

Local TV outlet StirileProTV says that after searching in the U.S. and Europe, the FBI and local police managed to track down the operation to an office block in the capital Bucharest.


The building contains Romanian web-hosting company Xservers which was featured in local media in connection with the case. Media allegations suggest that the company is somehow implicated in laundering money from the piracy operation but no official statement has yet been issued.

Several men were arrested on suspicion of intellectual property offenses, tax evasion and money laundering. Documents and computer hardware were also seized.

Source: TorrentFreak, for the latest info on copyright, file-sharing, torrent sites and the best VPN services.

Errata Security: ProxyHam conspiracy is nonsense

This post was syndicated from: Errata Security and was written by: Robert Graham. Original post: at Errata Security

This DEF CON conspiracy theory concerns the “ProxyHam” talk, which was canceled under mysterious circumstances. It’s nonsense.

The talk was hype to begin with. You can buy a 900 MHz bridge from Ubiquiti for $125 (or a MikroTik device for $129) and attach it to a Raspberry Pi. How you’d do this is obvious. It’s a good DEF CON talk, because it’s the application that’s important, but the technical principles here are extremely basic.
If you look carefully at the pic in the Wired story on ProxyHam, it appears they are indeed just using the Ubiquiti device. Here is the pic from Wired:
And here is the pic from Ubiquiti’s website:
I don’t know why the talk was canceled. One likely reason is that the stories (such as the one on Wired) sensationalized the thing, so maybe their employer got cold feet. Or maybe the FBI got scared and really did give them an NSL, though that’s incredibly implausible. The feds have other ways to encourage people to be silent (I’ve personally been threatened to cancel a talk), but it wouldn’t be an NSL.
Anyway, if DEF CON wants a talk on how to hook up a Raspberry Pi to a Ubiquiti NanoStation Loco M9 in order to bridge WiFi, I’ll happily give that talk. It’s just basic TCP/IP configuration, and if you want to get fancy, some VPN configuration for encryption. Just give me enough lead time to actually buy the equipment and test it out. Also, if DEF CON wants to actually set this up in order to get long-distance WiFi working to other hotels, I’ll happily buy a couple of units and set them up this way.
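The “basic TCP/IP configuration” could be sketched roughly as follows. This is an assumed layout, not anything from the canceled talk: wlan0 is joined to the distant open WiFi, and eth0 is cabled to the 900 MHz Ethernet bridge, with the Pi NATing between them.

```shell
# Enable IP forwarding, then NAT traffic arriving from the long-range
# Ethernet bridge (eth0) out through the borrowed WiFi uplink (wlan0).
sysctl -w net.ipv4.ip_forward=1
iptables -t nat -A POSTROUTING -o wlan0 -j MASQUERADE
iptables -A FORWARD -i eth0 -o wlan0 -j ACCEPT
iptables -A FORWARD -i wlan0 -o eth0 \
  -m state --state RELATED,ESTABLISHED -j ACCEPT
```

Add a VPN on top of the link if you want the traffic over the 900 MHz hop encrypted.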

Update: Accessing somebody’s open WiFi, like at Starbucks, is (probably) not a violation of the CFAA (Computer Fraud and Abuse Act). The act is vague, of course, so almost anything you do on a computer can violate the CFAA if prosecutors want to go after you, but at the same time, this sort of access is far from the original intent of the CFAA. Public WiFi at places like Starbucks is public.

This is also not a violation of FCC Part 97, which forbids ham radios from encrypting data. The device operates in the unlicensed ISM bands, so it is not covered by ham rules, despite the name “ProxyHam”.

Update: An even funner talk, which I’ve long wanted to do, is to do the same thing with cell phones. Take a cellphone, pull it apart, disconnect the antenna, then connect it to a highly directional antenna pointed at a distant cell tower — several cells away. You’d then be physically nowhere near where the cell tower thinks you are. I don’t know enough about how to block signals in other directions, though — radio waves are hard.

Update: There are other devices besides the ones I mention above.

SANS Internet Storm Center, InfoCON: green: Detecting Random – Finding Algorithmically chosen DNS names (DGA), (Thu, Jul 9th)

This post was syndicated from: SANS Internet Storm Center, InfoCON: green and was written by: SANS Internet Storm Center, InfoCON: green. Original post: at SANS Internet Storm Center, InfoCON: green

Most normal user traffic communicates via a hostname and not an IP address. So looking at traffic communicating directly by IP with no associated DNS request is a good thing to do. Some attackers use DNS names for their communications. There is also malware, such as Skybot and the Styx exploit kit, that uses algorithmically chosen host names rather than IP addresses for its command and control channels. This malware uses what has been called DGA, or Domain Generation Algorithms, to create random-looking host names for its TLS command and control channel or to digitally sign its SSL certificates. These do not look like normal host names. A human being can easily pick them out of logs and traffic, but it turns out to be a somewhat challenging thing to do in an automated process. Neither natural language processing nor measuring randomness seems to work very well. Here is a video that illustrates the problem and one possible approach to solving it.

One way you might try to solve this is with a tool called ent, a great Linux tool for measuring the entropy of data. Feeding it a megabyte of random data scores near the maximum of 8 bits per byte, while a megabyte of a single repeated character scores near zero:

Entropy = 7.999982 bits per byte.

[~]$ python -c "print 'A'*1000000" | ent
Entropy = 0.000021 bits per byte.

So 8 is highly random and 0 is not random at all. Now try some host names:

[~]$ echo google | ent
Entropy = 2.235926 bits per byte.
[~]$ echo clearing-house | ent
Entropy = 3.773557 bits per byte.

Google scores 2.23 and clearing-house scores 3.77, so it appears as though legitimate host names fall in the 2 to 4 range.

[~]$ echo e6nbbzucq2zrhzqzf | ent
Entropy = 3.503258 bits per byte.
[~]$ echo sdfe3454hhdf | ent
Entropy = 3.085055 bits per byte.

That’s no good. Known malicious host names from Skybot and Styx malware are also in the 2 to 4 range; they score just about the same as normal host names. We need a different approach to this problem.
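If you want to reproduce ent’s numbers without the tool, here is a minimal Python sketch of the same calculation (Shannon entropy in bits per byte), using the host names from the examples above:

```python
import math
from collections import Counter

def entropy_bits_per_byte(data: bytes) -> float:
    """Shannon entropy of a byte string, in bits per byte (what ent reports)."""
    total = len(data)
    counts = Counter(data)
    return -sum((n / total) * math.log2(n / total) for n in counts.values())

# echo appends a trailing newline, so include it to match ent's output:
print(round(entropy_bits_per_byte(b"google\n"), 6))             # 2.235926
print(round(entropy_bits_per_byte(b"e6nbbzucq2zrhzqzf\n"), 6))  # 3.503258
```

Note how short the inputs are: with only a handful of bytes, entropy can never get near 8, which is part of why it fails to separate DGA names from normal ones.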

Normal readable English has some pairs of characters that appear more frequently than others. TH, QU and ER appear very frequently, but other pairs like WZ appear very rarely. Specifically, there is approximately a 40% chance that a T will be followed by an H, approximately a 97% chance that a Q will be followed by a U, and approximately a 19% chance that an E will be followed by an R. As for unlikely pairs, there is approximately a 0.004% chance that a W will be followed by a Z. So here is the idea: let’s analyze a bunch of text and figure out what normal looks like, then measure host names against those tables. I’m making this script and a Windows executable version of this tool available for you to try out. Let me know how it works. Here is a look at how to use the tool.
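Purely as an illustration of the character-pair idea described above (this is not the tool’s actual code, and the tiny inline corpus stands in for real training text), a minimal pair-frequency scorer might look like this:

```python
from collections import defaultdict

def build_pair_table(corpus: str):
    """Count adjacent character pairs within each word of normal text."""
    counts = defaultdict(lambda: defaultdict(int))
    for word in corpus.lower().split():
        chars = [c for c in word if c.isalnum()]
        for a, b in zip(chars, chars[1:]):
            counts[a][b] += 1
    return counts

def score(name: str, counts) -> float:
    """Average probability (in percent) of each adjacent pair in a host name."""
    chars = [c for c in name.lower() if c.isalnum()]
    pairs = list(zip(chars, chars[1:]))
    if not pairs:
        return 0.0
    probs = []
    for a, b in pairs:
        total = sum(counts[a].values())
        probs.append(100.0 * counts[a][b] / total if total else 0.0)
    return sum(probs) / len(probs)

# A real run would train on a large corpus (classic literature, Alexa hosts, ...).
corpus = ("now is the time for all good men to come to the aid of their country "
          "the quick brown fox jumps over the lazy dog in the clearing near the house")
table = build_pair_table(corpus)
print(score("clearing-house", table) > score("zqzxjq", table))  # True
```

English-like names accumulate high-probability pairs while DGA-style strings hit pairs the table has never seen, which is exactly the separation the entropy measure failed to provide.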

Step 1) You need a frequency table. I have shared two of them on my GitHub; if you want to use them, you can download them and skip to step 2.

1a) Create the table. I’m creating a table called custom.freq:

C:\freq> freq.exe --create custom.freq

1b) You can optionally turn on case sensitivity if you want the frequency table to count uppercase and lowercase letters separately. Without this option, the tool converts everything to lowercase before counting character pairs:

C:\freq> freq.exe -t custom.freq

1c) Next, fill the frequency table with normal text. You might load it with known legitimate host names, such as the Alexa top 1 million most commonly accessed websites. I will just load it up with famous works of literature:

C:\freq> for %i in (txtdocs\*.*) do freq.exe --normalfile %i custom.freq
C:\freq> freq.exe --normalfile txtdocs\center_earth custom.freq
C:\freq> freq.exe --normalfile txtdocs\defoe-robinson-103.txt custom.freq
C:\freq> freq.exe --normalfile txtdocs\dracula.txt custom.freq
C:\freq> freq.exe --normalfile txtdocs\freck10.txt custom.freq

Step 2) Measure badness!

Once the frequency table is filled with data, you can start to measure strings to see how probable they are according to the table:

C:\freq> freq.exe --measure google custom.freq
C:\freq> freq.exe --measure clearing-house custom.freq

So normal host names have a probability above 5 (at least these two and most others do). We will consider anything above 5 to be good for our tests.

C:\freq> freq.exe --measure asdfl213u1 custom.freq
C:\freq> freq.exe --measure po24sf92cxlk custom.freq

Our malicious hosts score less than 5, so 5 seems to be a pretty good benchmark. In my testing it works pretty well for picking out these abnormal host names, but it isn’t perfect. Nothing is. One problem is that very short host names and acronyms that are not in the source files used to build your frequency tables will also score below 5. For example, fbi and cia both come up below 5 when I use only classic literature to build my frequency tables. But I am not limited to classic literature. That leads us to step 3.

Step 3) Tune for your organization.

The real power of frequency tables comes from tuning them to match the normal traffic on your network, using two options: --normal and --odd. --normal is given a normal string and updates the frequency table with it. Both --normal and --odd can be used with the --weight option to control how much influence the given string has on the probabilities in the frequency table. Its effectiveness is demonstrated in the accompanying YouTube video. Note that marking random host names as --odd is not a good strategy; it simply injects noise into the frequency table. Like everything else in security, identifying all the bad in the world is a losing proposition. Instead, focus on learning normal and identifying anomalies. For example, passing --normal cia --weight 10000 adds 10,000 counts of the pair “ci” and the pair “ia” to the frequency table and increases the probability of “cia”:

C:\freq> freq.exe --normal cia --weight 10000 custom.freq

The source code and a Windows Executable version of this program can be downloaded from here:

Tomorrow, in my diary, I will show you some other cool things you can do with this approach and how you can incorporate it into your own tools.

Follow me on twitter @MarkBaggett

Want to learn to use this code in your own scripts or build tools of your own? Join me for the Python course SEC573 in Las Vegas this September 14th! Click here for more information.

What do you think? Leave a comment.

(c) SANS Internet Storm Center. Creative Commons Attribution-Noncommercial 3.0 United States License.

Krebs on Security: Finnish Decision is Win for Internet Trolls

This post was syndicated from: Krebs on Security and was written by: BrianKrebs. Original post: at Krebs on Security

In a win for Internet trolls and teenage cybercriminals everywhere, a Finnish court has decided not to incarcerate a 17-year-old found guilty of more than 50,000 cybercrimes, including data breaches, payment fraud, operating a huge botnet and calling in bomb threats, among other violations.

Julius “Ryan” Kivimaki.

As the Finnish daily Helsingin Sanomat reports, Julius Kivimäki — a.k.a. “Ryan” and “Zeekill” — was given a two-year suspended sentence and ordered to forfeit EUR 6,558.

Kivimaki vaulted into the media spotlight late last year when he claimed affiliation with the Lizard Squad, a group of young hooligans who knocked offline the gaming networks of Microsoft and Sony for most of Christmas Day.

According to the BBC, evidence presented at Kivimaki’s trial showed that he compromised more than 50,000 computer servers by exploiting vulnerabilities in Adobe’s Cold Fusion web application software. Prosecutors also said Kivimaki used stolen credit cards to buy luxury goods and shop vouchers, and participated in a money laundering scheme that he used to fund a trip to Mexico.

Kivimaki allegedly also was involved in calling in multiple fake bomb threats and “swatting” incidents — reporting fake hostage situations at an address to prompt a heavily armed police response to that location. DailyDot quotes Blair Strater, a victim of Kivimaki’s swatting and harassment, who expressed disgust at the Finnish ruling.

Speaking with KrebsOnSecurity, Strater called Kivimaki “a dangerous sociopath” who belongs behind bars.

Although it did not factor into his trial, sources close to the Lizard Squad investigation say Kivimaki also was responsible for making an August 2014 bomb threat against Sony Online Entertainment President John Smedley that grounded an American Airlines plane.

In an online interview with KrebsOnSecurity, Kivimaki denied involvement with the American Airlines incident, and said he was not surprised by the leniency shown by the court in his trial.

“During the trial it became apparent that nobody suffered significant (if any) damages because of the alleged hacks,” he said.

The danger in a decision such as this is that it emboldens young malicious hackers by reinforcing the already popular notion that there are no consequences for cybercrimes committed by individuals under the age of 18.

Case in point: Kivimaki is now crowing about the sentence; he’s changed the description on his Twitter profile to “Untouchable hacker god.” The Twitter account for the Lizard Squad tweeted the news of Kivimaki’s non-sentencing triumphantly: “All the people that said we would rot in prison don’t want to comprehend what we’ve been saying since the beginning, we have free passes.”

It is clear that the Finnish legal system, like that of the United States, simply does not know what to do with minors who are guilty of severe cybercrimes.  The FBI has for several years now been investigating several of Kivimaki’s contemporaries, young men under the age of 18 who are responsible for a similarly long list of cybercrimes — including credit card fraud, massively compromising a long list of Web sites and organizations running Cold Fusion software, as well as swatting my home in March 2013. Sadly, to this day those individuals also remain free and relatively untouched by the federal system.

Lance James, former head of cyber intelligence for Deloitte and a security researcher who’s followed the case closely, said he was disappointed at the court’s decision given the gravity and extensiveness of the crimes.

“We’re talking about the Internet equivalent of violent crimes and assault,” James said. “This is serious stuff.”

Kivimaki said he doesn’t agree with the characterization of swatting as a violent crime.

“I don’t see how a reasonable person could possibly compare cybercrime with violent crimes,” he said. “There’s a pretty clear distinction here. As far as I’m aware nobody has ever died in such an incident. Nor have I heard of anyone suffering bodily injury.”

As serious as Kivimaki’s crimes may be, kids like him need to be monitored, mentored, and molded — not jailed, says James.

“Studying his past, he’s extremely smart, but he’s trouble, and definitely needs a better direction,” James said. “A lot of these kids have trouble in the home, such as sibling or parental abuse and abandonment. These teenagers, they aren’t evil, they are troubled. There needs to be a diversion program — the same way they treat at-risk teenagers and divert them away from gang activity — that is designed to help them get on a better path.”

TorrentFreak: FBI Wants Pirate Bay Logs to Expose Copyright Trolls

This post was syndicated from: TorrentFreak and was written by: Ernesto. Original post: at TorrentFreak

Over the past few years, copyright troll law firm Prenda has crossed the line on several occasions.

Most controversial was the clear evidence that Prenda uploaded their own torrents to The Pirate Bay, creating a honeypot for the people they later sued over pirated downloads.

The crucial evidence to back up this allegation came from The Pirate Bay, who shared upload logs with TorrentFreak that tied a user account and uploads to Prenda and its boss John Steele.

This serious allegation together with other violations piqued the interest of the FBI. For a long time there have been suspicions that the authorities are investigating the Prenda operation and today we can confirm that this is indeed the case.

The confirmation comes from Pirate Bay co-founders Peter Sunde and Fredrik Neij, who independently informed TF that they were questioned about Prenda during their stays in prison.

“I was told that Prenda Law has been under investigation for over a year, and from the printouts they showed me, I believe that,” Sunde tells TF.

Sunde was visited by Swedish police officers who identified themselves, noting that they were sent on behalf of the FBI. The officers mainly asked questions about Pirate Bay backups and logs.

“They asked many questions about the TPB backups and logs. I told them that even if they have one of the backups that it would be nearly impossible to decrypt,” Sunde says, adding that he couldn’t help them as he’s no longer associated with the site.

A short while after Sunde was questioned in prison the same happened to Neij. Again, the officers said they were gathering information about Pirate Bay’s logs on behalf of the FBI.

“They wanted to know if I could verify the accuracy of the IP-address logs, how they were stored, and how they could be retrieved,” Neij says.

The FBI’s interest in the logs was directly linked to the article we wrote on the Prenda honeypot in 2013. While it confirms that the feds are looking into Prenda, the FBI has not announced anything in public yet.

TF contacted the Swedish police a while ago asking for further details, but received no response.

It’s worth noting that the police officers also asked questions about the current state of The Pirate Bay and who’s running the site. With the recent raid in mind, it’s not unthinkable they may also have had an alternative motive.

In any case, today’s revelations show that Prenda is in serious trouble. The same copyright trolls who abused The Pirate Bay to trap pirates, may also face their demise thanks to the very same site.

Source: TorrentFreak, for the latest info on copyright, file-sharing, torrent sites and the best VPN services.

Schneier on Security: What is the DoD’s Position on Backdoors in Security Systems?

This post was syndicated from: Schneier on Security and was written by: schneier. Original post: at Schneier on Security

In May, Admiral James A. Winnefeld, Jr., vice-chairman of the Joint Chiefs of Staff, gave an address at the Joint Service Academies Cyber Security Summit at West Point. After he spoke for twenty minutes on the importance of Internet security and a good national defense, I was able to ask him a question (32:42 mark) about security versus surveillance:

Bruce Schneier: I’d like to hear you talk about this need to get beyond signatures and the more robust cyber defense and ask the industry to provide these technologies to make the infrastructure more secure. My question is, the only definition of “us” that makes sense is the world, is everybody. Any technologies that we’ve developed and built will be used by everyone — nation-state and non-nation-state. So anything we do to increase our resilience, infrastructure, and security will naturally make Admiral Rogers’s both intelligence and attack jobs much harder. Are you okay with that?

Admiral James A. Winnefeld: Yes. I think Mike’s okay with that, also. That’s a really, really good question. We call that IGL. Anyone know what IGL stands for? Intel gain-loss. And there’s this constant tension between the operational community and the intelligence community when a military action could cause the loss of a critical intelligence node. We live this every day. In fact, in ancient times, when we were collecting actual signals in the air, we would be on the operational side, “I want to take down that emitter so it’ll make it safer for my airplanes to penetrate the airspace,” and they’re saying, “No, you’ve got to keep that emitter up, because I’m getting all kinds of intelligence from it.” So this is a familiar problem. But I think we all win if our networks are more secure. And I think I would rather live on the side of secure networks and a harder problem for Mike on the intelligence side than very vulnerable networks and an easy problem for Mike. And part of that — it’s not only the right thing do, but part of that goes to the fact that we are more vulnerable than any other country in the world, on our dependence on cyber. I’m also very confident that Mike has some very clever people working for him. He might actually still be able to get some work done. But it’s an excellent question. It really is.

It’s a good answer, and one firmly on the side of not introducing security vulnerabilities, backdoors, key-escrow systems, or anything else that weakens Internet systems. It speaks to what I have seen as a split in the Second Crypto War between the NSA and the FBI: building secure systems versus building systems with surveillance capabilities.

I have written about this before:

But here’s the problem: technological capabilities cannot distinguish based on morality, nationality, or legality; if the US government is able to use a backdoor in a communications system to spy on its enemies, the Chinese government can use the same backdoor to spy on its dissidents.

Even worse, modern computer technology is inherently democratizing. Today’s NSA secrets become tomorrow’s PhD theses and the next day’s hacker tools. As long as we’re all using the same computers, phones, social networking platforms, and computer networks, a vulnerability that allows us to spy also allows us to be spied upon.

We can’t choose a world where the US gets to spy but China doesn’t, or even a world where governments get to spy and criminals don’t. We need to choose, as a matter of policy, communications systems that are secure for all users, or ones that are vulnerable to all attackers. It’s security or surveillance.

NSA Director Admiral Mike Rogers was in the audience (he spoke earlier), and I saw him nodding at Winnefeld’s answer. Two weeks later, at CyCon in Tallinn, Rogers gave the opening keynote, and he seemed to be saying the opposite.

“Can we create some mechanism where within this legal framework there’s a means to access information that directly relates to the security of our respective nations, even as at the same time we are mindful we have got to protect the rights of our individual citizens?”


Rogers said a framework to allow law enforcement agencies to gain access to communications is in place within the phone system in the United States and other areas, so “why can’t we create a similar kind of framework within the internet and the digital age?”

He added: “I certainly have great respect for those that would argue that the most important thing is to ensure the privacy of our citizens and we shouldn’t allow any means for the government to access information. I would argue that’s not in the nation’s best long-term interest, that we’ve got to create some structure that should enable us to do that, mindful that it has to be done in a legal way and mindful that it shouldn’t be something arbitrary.”

Does Winnefeld know that Rogers is contradicting him? Can someone ask JCS about this?

Schneier on Security: Reassessing Airport Security

This post was syndicated from: Schneier on Security and was written by: schneier. Original post: at Schneier on Security

News that the Transportation Security Administration missed a whopping 95% of guns and bombs in recent airport security “red team” tests was justifiably shocking. It’s clear that we’re not getting value for the $7 billion we’re paying the TSA annually.

But there’s another conclusion, inescapable and disturbing to many, but good news all around: we don’t need $7 billion worth of airport security. These results demonstrate that there isn’t much risk of airplane terrorism, and we should ratchet security down to pre-9/11 levels.

We don’t need perfect airport security. We just need security that’s good enough to dissuade someone from building a plot around evading it. If you’re caught with a gun or a bomb, the TSA will detain you and call the FBI. Under those circumstances, even a medium chance of getting caught is enough to dissuade a sane terrorist. A 95% failure rate is too high, but a 20% one isn’t.

For those of us who have been watching the TSA, the 95% number wasn’t that much of a surprise. The TSA has been failing these sorts of tests since its inception: failures in 2003, a 91% failure rate at Newark Liberty International in 2006, a 75% failure rate at Los Angeles International in 2007, more failures in 2008. And those are just the public test results; I’m sure there are many more similarly damning reports the TSA has kept secret out of embarrassment.

Previous TSA excuses were that the results were isolated to a single airport, or not realistic simulations of terrorist behavior. That almost certainly wasn’t true then, but the TSA can’t even argue that now. The current test was conducted at many airports, and the testers didn’t use super-stealthy ninja-like weapon-hiding skills.

This is consistent with what we know anecdotally: the TSA misses a lot of weapons. Pretty much everyone I know has inadvertently carried a knife through airport security, and some people have told me about guns they mistakenly carried on airplanes. The TSA publishes statistics about how many guns it detects; last year, it was 2,212. This doesn’t mean the TSA missed 44,000 guns last year; a weapon that is mistakenly left in a carry-on bag is going to be easier to detect than a weapon deliberately hidden in the same bag. But we now know that it’s not hard to deliberately sneak a weapon through.
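The arithmetic implied by that 44,000 figure is a naive extrapolation (a rough sanity check rather than a real estimate, precisely because accidentally carried and deliberately hidden weapons are not equally detectable):

```python
detected = 2212        # guns the TSA reported detecting last year
detection_rate = 0.05  # implied by the 95% red-team failure rate

total_carried = detected / detection_rate
missed = total_carried - detected
print(round(total_carried))  # 44240
print(round(missed))         # 42028
```

The essay's point is that this extrapolation is invalid: the 95% figure comes from deliberately hidden test weapons, while the 2,212 detections are mostly forgotten carry-on items that are far easier to spot.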

So why is the failure rate so high? The report doesn’t say, and I hope the TSA is going to conduct a thorough investigation as to the causes. My guess is that it’s a combination of things. Security screening is an incredibly boring job, and almost all alerts are false alarms. It’s very hard for people to remain vigilant in this sort of situation, and sloppiness is inevitable.

There are also technology failures. We know that current screening technologies are terrible at detecting the plastic explosive PETN — that’s what the underwear bomber had — and that a disassembled weapon has an excellent chance of getting through airport security. We know that some items allowed through airport security make excellent weapons.

The TSA is failing to defend us against the threat of terrorism. The only reason they’ve been able to get away with the scam for so long is that there isn’t much of a threat of terrorism to defend against.

Even with all these actual and potential failures, there have been no successful terrorist attacks against airplanes since 9/11. If there were lots of terrorists just waiting for us to let our guard down to destroy American planes, we would have seen attacks — attempted or successful — after all these years of screening failures. No one has hijacked a plane with a knife or a gun since 9/11. Not a single plane has blown up due to terrorism.

Terrorists are much rarer than we think, and launching a terrorist plot is much more difficult than we think. I understand this conclusion is counterintuitive, and contrary to the fearmongering we hear every day from our political leaders. But it’s what the data shows.

This isn’t to say that we can do away with airport security altogether. We need some security to dissuade the stupid or impulsive, but any more is a waste of money. The very rare smart terrorists are going to be able to bypass whatever we implement or choose an easier target. The more common stupid terrorists are going to be stopped by whatever measures we implement.

Smart terrorists are very rare, and we’re going to have to deal with them in two ways. One, we need vigilant passengers — that’s what protected us from both the shoe and the underwear bombers. And two, we’re going to need good intelligence and investigation — that’s how we caught the liquid bombers in their London apartments.

The real problem with airport security is that it’s only effective if the terrorists target airplanes. I generally am opposed to security measures that require us to correctly guess the terrorists’ tactics and targets. If we detect solids, the terrorists will use liquids. If we defend airports, they bomb movie theaters. It’s a lousy game to play, because we can’t win.

We should demand better results out of the TSA, but we should also recognize that the actual risk doesn’t justify their $7 billion budget. I’d rather see that money spent on intelligence and investigation — security that doesn’t require us to guess the next terrorist tactic and target, and works regardless of what the terrorists are planning next.

This essay previously appeared on

Errata Security: What’s the state of iPhone PIN guessing

This post was syndicated from: Errata Security and was written by: Robert Graham. Original post: at Errata Security

I think even some experts have gotten this wrong, so I want to ask everyone: what’s the current state-of-the-art for trying to crack Apple PIN codes?

This is how I think it works currently (in iOS 8).
To start with, there is a special “crypto-chip” inside the iPhone that holds your secrets (like a TPM or ARM TrustZone/SecurCore). I think originally it was ARM’s TrustZone, but now that Apple designs its own chips, they’ve customized it (“Secure Enclave”). I think they needed to add stuff to make Touch ID work.
All the data (on the internal flash drive) is encrypted with a random AES key that nobody, not even the NSA, can crack. This random AES key is stored on the crypto-chip. Thus, if your phone is stolen, the robbers cannot steal the data from it — as long as your phone is locked properly.
To unlock your phone, you type in a 4-digit passcode. This passcode gets sent to the crypto-chip, which verifies the code, then releases the AES key needed to decrypt the flash drive. This is all invisible, of course, but that’s what’s going on behind the scenes. Since the NSA can’t crack the AES key on the flash drive, they must instead get it from the crypto-chip.
Thus, unlocking the phone means guessing your 4 digit PIN.
This seems easy. After all, it’s only 4 digits. However, offline cracking is impossible. The only way to unlock the phone is to send guesses to the crypto-chip (a form of online cracking). This can be done over the USB port, so they (the NSA) don’t need to sit there trying to type every possible combination — they can simply write a little script to send commands over USB.
To make this more difficult, the crypto-chip will slow things down. After 6 failed guesses, the iPhone temporarily disables itself for 1-minute. Thus, it’ll take the NSA a week (6.9 days), trying all 10,000 combinations, once per minute.
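The week figure is just the throttle math; a quick check, assuming the worst case of trying every combination at one guess per minute:

```python
pin_combinations = 10 ** 4   # all 4-digit PINs
guesses_per_day = 60 * 24    # one guess per minute after throttling

days_to_exhaust = pin_combinations / guesses_per_day
print(round(days_to_exhaust, 1))  # 6.9
```

On average, the correct PIN would be found in about half that time.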
Better yet, you can configure your phone to erase itself after 10 failed attempts ([Erase Data] Passcode setting). This isn’t the default configuration, but it’s how any paranoid person (like myself) configures their phone. This is a hard lock, preventing even the NSA from ever decrypting the phone. It’s the “going dark” problem that the FBI complains about. If they get the iPhone of a terrorist, drug dealer, or pedophile, they won’t be able to decrypt it (well, beyond the 0.1% chance of guessing 10 random numbers). (Note: I don’t think it actually erases the flash drive, but simply erases the secret AES key — which is essentially the same thing).
Instead of guessing PIN numbers, there may be a way to reverse-engineer such secrets from the crypto-chip, such as by using acids in order to remove the top from the chip then use an electron microscope to read the secrets. (Physical possession of the phone is required). One of the Snowden docs implies that the NSA can sometimes do this, but that it takes a month and a hundred thousand dollars, and has a 50% chance of destroying the chip permanently without being able to extract any secrets. In any event, that may have been possible with the older chips, but the latest iPhones now include custom chips designed by Apple where this may no longer be possible.
There may be a physical exploit that gets around this. Somebody announced a device that would guess a PIN, then immediately power down the phone before the failed guess could be recorded. That allows an unlimited number of guesses, though it requires a reboot of the phone between each one. Since a reboot takes about a minute, it means hooking the phone up to the special device and waiting a week. This worked on phones up through iOS 8.1, but presumably it's something Apple has since patched (some think 8.1.1 patched it).
There may be other exploits in software. In various versions of iOS, hackers have found ways of bypassing the lock screen. Generally, these exploits require the phone to still be powered on since it was stolen. (Coming out of sleep mode is different from being powered up, even though it looks like the same unlocking process to you.) However, whenever hackers disclose such techniques, Apple quickly patches them, so it's not a practical long-term strategy. On the other hand, if they steal the phone, the FBI/NSA may simply hold it powered on in storage for several years, hoping an exploit is released. The FBI is patient; they really don't care if it takes a few years to complete a case. The last such exploit was in iOS 7.0, and Apple is about to announce iOS 9. Apple is paranoid about such exploits, so I doubt a new one will be found.
If the iPhone owner synced their phone with iTunes, then it's probable that the FBI/NSA can confiscate both the phone and the desktop in order to grab the data. They can then unlock the phone from the desktop, or simply grab the backup files from the desktop. If your desktop computer also uses disk encryption, you can prevent this. Some desktops use TPMs to protect the disk (requiring slow online cracking, similar to cracking the iPhone PIN). Others would allow offline cracking of your password, but if you chose a sufficiently long password (mine is 23 characters), even the NSA can't crack it — even at the billions of guesses per second that offline cracking makes possible.
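The scale difference between online and offline cracking is worth spelling out. A sketch, assuming a 70-symbol alphabet and 10 billion guesses per second (both figures are my own illustrative assumptions, not from any document):

```python
# Offline cracking time for a 23-character password, under the
# assumed 70-symbol alphabet and 10 billion guesses/second.
alphabet = 70
length = 23
rate = 10 ** 10                          # guesses per second
keyspace = alphabet ** length
years = keyspace / rate / (3600 * 24 * 365)
print(f"about 10^{len(str(int(years))) - 1} years")
```

Even if the rate assumption is off by several orders of magnitude, the conclusion doesn't change: the keyspace dwarfs any realistic guessing rate.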
The upshot is this. If you are a paranoid person and do things correctly (set iPhone to erase after 10 attempts, either don’t sync with desktop or sync with proper full disk encryption), then when the FBI or NSA comes for you, they won’t be able to decrypt your phone. You are safe to carry out all your evil cyber-terrorist plans.
I’m writing this up in general terms because I think this is how it works. Many of us general experts glance over the docs and make assumptions about how we think things should work, based on our knowledge of crypto, but we haven’t paid attention to the details, especially the details as the state-of-the-art changes over the years. Sadly, asking general questions gets general answers from well-meaning, helpful people who really know only just as much as I do. I’m hoping those who are up on the latest details, experts like Jonathan Zdziarski, will point out where I’m wrong.

Response: So Jonathan has written a post describing this in more detail here:

He is more confident that the NSA has 0days to get around everything. I think the point worth remembering is that nothing can be decrypted without 0days, and that whenever 0days become public, Apple patches them. Hence, you can't steal somebody's phone and take it to the local repair shop to get it read — unless it's an old phone that hasn't been updated. It also means the FBI is unlikely to get the data — at least without revealing that they've got an 0day.

Specifics: Specifically, I think this is what happens.

Unique-id (UID): When the CPU is manufactured, it's assigned a unique identifier. This is done with hardware fuses, some of which are blown to create 1s and 0s. Apple promises the following:

  • that UIDs are secret and can never be read from the chip by anybody, for any reason
  • that all UIDs are truly random (nobody can guess the random number generation)
  • that they (or suppliers) keep no record of them

This is the root of all security. If it fails, then the NSA can decrypt the phone.

Crypto-accelerator: The CPU has a built-in AES accelerator that’s mostly separate from the main CPU. One reason it exists is to quickly (with low power consumption) decrypt/encrypt everything on the flash-drive. It’s the only part of the CPU that can read the UID. It can therefore use the UID, plus the PIN/passcode, to encrypt/decrypt something.

Special flash: Either a reserved area of the flash-drive, or a wholly separate flash chip, is used to store the rest of the secrets. These are encrypted using the UID/PIN combo. Apple calls this “effaceable” storage. When it “wipes” the phone, this area is erased, but the rest of the flash drive isn’t. Information like your fingerprint (for Touch ID) is stored here.

So the steps are:

  1. iOS boots
  2. phone asks for PIN/passcode
  3. iOS sends the PIN/passcode to the crypto-accelerator to decrypt the flash-drive key (read from the “effaceable” storage area)
  4. iOS uses the flash-drive key to decrypt all your data

I’m skipping details. This is just enough to answer certain questions.

FAQ: Where is the unique hardware ID stored? On the flash memory? The answer is within the CPU itself. Flash memory will contain further keys, for example to unlock all your data, but they have to be decrypted using the unique-id plus PIN/passcode.

Schneier on Security: NSA Running a Massive IDS on the Internet Backbone

This post was syndicated from: Schneier on Security and was written by: schneier. Original post: at Schneier on Security

The latest story from the Snowden documents, co-published by The New York Times and ProPublica, shows that the NSA is operating a signature-based intrusion detection system on the Internet backbone:

In mid-2012, Justice Department lawyers wrote two secret memos permitting the spy agency to begin hunting on Internet cables, without a warrant and on American soil, for data linked to computer intrusions originating abroad — including traffic that flows to suspicious Internet addresses or contains malware, the documents show.

The Justice Department allowed the agency to monitor only addresses and “cybersignatures” — patterns associated with computer intrusions — that it could tie to foreign governments. But the documents also note that the N.S.A. sought to target hackers even when it could not establish any links to foreign powers.

To me, the big deal here is 1) the NSA is doing this without a warrant, and 2) that the policy change happened in secret, without any public policy debate.

The effort is the latest known expansion of the N.S.A.’s warrantless surveillance program, which allows the government to intercept Americans’ cross-border communications if the target is a foreigner abroad. While the N.S.A. has long searched for specific email addresses and phone numbers of foreign intelligence targets, the Obama administration three years ago started allowing the agency to search its communications streams for less-identifying Internet protocol addresses or strings of harmful computer code.


To carry out the orders, the F.B.I. negotiated in 2012 to use the N.S.A.’s system for monitoring Internet traffic crossing “chokepoints operated by U.S. providers through which international communications enter and leave the United States,” according to a 2012 N.S.A. document. The N.S.A. would send the intercepted traffic to the bureau’s “cyberdata repository” in Quantico, Virginia.

Ninety pages of NSA documents accompany the article. Here is a single OCRed PDF of them all.

Jonathan Mayer was consulted on the article. He gives more details on his blog, which I recommend you all read.

In my view, the key takeaway is this: for over a decade, there has been a public policy debate about what role the NSA should play in domestic cybersecurity. The debate has largely presupposed that the NSA’s domestic authority is narrowly circumscribed, and that DHS and DOJ play a far greater role. Today, we learn that assumption is incorrect. The NSA already asserts broad domestic cybersecurity powers. Recognizing the scope of the NSA’s authority is particularly critical for pending legislation.

This is especially important for pending information sharing legislation, which Mayer explains.

The other big news is that ProPublica’s Julia Angwin is working with Laura Poitras on the Snowden documents. I expect that this isn’t the last article we’re going to see.

EDITED TO ADD: Others are writing about these documents. Shane Harris explains how the NSA and FBI are working together on Internet surveillance. Benjamin Wittes says that the story is wrong, that “combatting overseas cybersecurity threats from foreign governments” is exactly what the NSA is supposed to be doing, and that they don’t need a warrant for any of that. And Marcy Wheeler points out that she has been saying for years that the NSA has been using Section 702 to justify Internet surveillance.

Schneier on Security: Yet Another New Biometric: Brainprints

This post was syndicated from: Schneier on Security and was written by: schneier. Original post: at Schneier on Security

New research:

In “Brainprint,” a newly published study in academic journal Neurocomputing, researchers from Binghamton University observed the brain signals of 45 volunteers as they read a list of 75 acronyms, such as FBI and DVD. They recorded the brain’s reaction to each group of letters, focusing on the part of the brain associated with reading and recognizing words, and found that participants’ brains reacted differently to each acronym, enough that a computer system was able to identify each volunteer with 94 percent accuracy. The results suggest that brainwaves could be used by security systems to verify a person’s identity.

I have no idea what the false negatives are, or how robust this biometric is over time, but the article makes the important point that unlike most biometrics this one can be updated.

“If someone’s fingerprint is stolen, that person can’t just grow a new finger to replace the compromised fingerprint — the fingerprint for that person is compromised forever. Fingerprints are ‘non-cancellable.’ Brainprints, on the other hand, are potentially cancellable. So, in the unlikely event that attackers were actually able to steal a brainprint from an authorized user, the authorized user could then ‘reset’ their brainprint,” Laszlo said.

Presumably the resetting involves a new set of acronyms.

Author’s self-archived version of the paper (pdf).

Errata Security: Uh, the only reform of domestic surveillance is dismantling it

This post was syndicated from: Errata Security and was written by: Robert Graham. Original post: at Errata Security

A lot of smart people are cheering the reforms of domestic surveillance in the USA “FREEDOM” Act. Examples include  Timothy Lee, EFF, Julian Sanchez, and Amie Stepanovich. I don’t understand why. Domestic surveillance is a violation of our rights. The only acceptable reform is getting rid of it. Anything less is the moral equivalent of forcing muggers to not wear ski masks — it doesn’t actually address the core problem (mugging, in this case).
Bulk collection still happens, and searches still happen. The only thing the act does is move ownership of the metadata databases from the NSA to the phone companies. In no way does the bill reform the idea that, on the pretext of terrorism, law enforcement can still rummage through the records, looking for everyone “two hops” away from a terrorist.
We all know the Patriot Act is used primarily to prosecute the War on Drugs rather than the War on Terror. I see nothing in FREEDOM act that reforms this. We all know the government cloaks its abuses under the secrecy of national security — and while I see lots in the act that tries to make things more transparent, the act still allows such a cloak.
I see none of the reforms I’d want. For example, I want a law that requires the disclosure, to the public, of the total number of US phone records the government has grabbed every month, regardless of which law enforcement or intelligence agency grabbed them, and regardless of which program or authority was used to grab them. Snowden caught the government using wild justifications for its metadata program — any law that doesn’t target all such collection, regardless of justification, won’t rein it in.
A vast array of other things need to be reformed regarding domestic surveillance, such as use of “Stingray” devices, the “third party doctrine” allowing the grabbing of business records even without terrorism as a justification, parallel construction, the border search exemption, license plate readers, and so on.
Bulk collection happens. Our lives are increasingly electronic. We leave a long trail of “business records” behind us whatever we do. Consider Chris Roberts, “Sindragon”, who joked about hacking a plane on Twitter and is now under investigation by the FBI. They can easily rummage through all those records. While they might not find him guilty of hacking, they may find he violated an obscure tax law or export law, and charge him with that sort of crime. That everything about our lives is being collected in bulk, allowing arbitrary searches by law enforcement, is still a cyber surveillance state — one the FREEDOM Act comes nowhere close to touching.
The fact of the matter is that the NSA’s bulk collection was the least of our problems. Indeed, the NSA’s focus on foreign targets meant, in practice, it really wasn’t used domestically. The FREEDOM Act now opens up searches of metadata to all the other law enforcement agencies. Instead of skulking in secret, occasionally searching metadata, the FBI, DEA, and ATF can now do so publicly, with the blessing of the law behind them.

TorrentFreak: Seized Megaupload Domains Link to Scam Ads and Malware

This post was syndicated from: TorrentFreak and was written by: Ernesto. Original post: at TorrentFreak

Well over three years have passed since Megaupload was shut down, but there is still little progress in the criminal proceedings against the operation.

The United States hopes that New Zealand will extradite Kim Dotcom and his colleagues, but the hearings have been delayed several times already.

Meanwhile, several domain names, including the popular Megaupload and Megavideo domains, remain under the control of the U.S. Government. At least, that should be the case. In reality, however, they’re now being exploited by ‘cyber criminals.’

Instead of a banner announcing that the domain names have been seized as part of a criminal investigation, they now direct people to a Zero-Click advertising feed. This feed often links to malware installers and other malicious ads.

One of the many malicious “ads” the Megaupload and Megavideo domain names are serving links to a fake BBC article, suggesting people can get an iPhone 6 for only £1.

And here is another example of a malicious ad prompting visitors to update their browser.


The question that immediately comes to mind is this: How can it be that the Department of Justice is allowing the domains to be used for such nefarious purposes?

Looking at the Whois records, everything seems to be in order. The domain name still lists Megaupload Limited as registrant, the same as before the seizure. Nothing out of the ordinary.

The nameserver PLEASEDROPTHISHOST15525.CIRFU.BIZ, on the other hand, triggers several alarm bells.


CIRFU refers to the FBI’s Cyber Initiative and Resource Fusion Unit, a specialized tech team tasked with handling online crime and scams. The unit used the CIRFU.NET domain name as nameserver for various seized domains, including the Mega ones.

Interestingly, the CIRFU.NET domain now lists “Syndk8 Media Limited” as registrant, which doesn’t appear to have any connections with the FBI. Similarly, CIRFU.BIZ is not an official CIRFU domain either and points to a server in the Netherlands hosted by LeaseWeb.

It appears that the domain the Department of Justice (DoJ) used as a nameserver is no longer under the control of the Government. Perhaps it expired, or was taken over by other means.

As a result, Megaupload and Megavideo are now serving malicious ads, run by the third party that controls the nameserver.

This is quite a mistake for one of the country’s top cybercrime units, to say the least. It’s also one that affects tens of thousands of people, as the domains remain frequently visited.

Commenting on the rogue domains, Megaupload founder Kim Dotcom notes that the people who are responsible should have known better.

“With U.S. Assistant Attorney Jay Prabhu the DOJ in Virginia employs a guy who doesn’t know the difference between civil & criminal law. And after this recent abuse of our seized Mega domains I wonder how this guy was appointed Chief of the Cybercrime Unit when he can’t even do the basics like safeguard the domains he has seized,” he tells TF.

“Jay Prabhu keeps embarrassing the U.S. government. I would send him back to law school and give him a crash course in ‘how the Internet works’,” Dotcom adds.

Making matters worse for the Government, the Megaupload and Megavideo domains are not the only ones affected. Various poker domains that were previously seized also now link to malicious content.

While the Government appears to have lost control of the old nameservers, it can still correct the problem through a nameserver update at their end. However, that doesn’t save those people who had their systems compromised during recent days, and it certainly won’t repair the PR damage.

Source: TorrentFreak, for the latest info on copyright, file-sharing, torrent sites and anonymous VPN services.

Schneier on Security: Why the Current Section 215 Reform Debate Doesn’t Matter Much

This post was syndicated from: Schneier on Security and was written by: schneier. Original post: at Schneier on Security

The ACLU’s Chris Soghoian explains (time 25:52-30:55) why the current debate over Section 215 of the Patriot Act is just a minor facet of a large and complex bulk collection program by the FBI and the NSA.

There were 180 orders authorized last year by the FISA Court under Section 215 — 180 orders issued by this court. Only five of those orders relate to the telephony metadata program. There are 175 orders about completely separate things. In six weeks, Congress will either reauthorize this statute or let it expire, and we’re having a debate — to the extent we’re even having a debate — but the debate that’s taking place is focused on five of the 180, and there’s no debate at all about the other 175 orders.

Now, Senator Wyden has said there are other bulk collection programs targeted at Americans that the public would be shocked to learn about. We don’t know, for example, how the government collects records from Internet providers. We don’t know how they get bulk metadata from tech companies about Americans. We don’t know how the American government gets calling card records.

If we take General Hayden at face value — and I think you’re an honest guy — if the purpose of the 215 program is to identify people who are calling Yemen and Pakistan and Somalia, where one end is in the United States, your average Somali-American is not calling Somalia from their land line phone or their cell phone for the simple reason that AT&T will charge them $7.00 a minute in long distance fees. The way that people in the diaspora call home — the way that people in the Somali or Yemeni community call their family and friends back home — they walk into convenience stores and they buy prepaid calling cards. That is how regular people make international long distance calls.

So the 215 program that has been disclosed publicly, the 215 program that is being debated publicly, is about records to major carriers like AT&T and Verizon. We have not had a debate about surveillance requests, bulk orders to calling card companies, to Skype, to voice over Internet protocol companies. Now, if NSA isn’t collecting those records, they’re not doing their job. I actually think that that’s where the most useful data is. But why are we having this debate about these records that don’t contain a lot of calls to Somalia when we should be having a debate about the records that do contain calls to Somalia and do contain records of e-mails and instant messages and searches and people posting inflammatory videos to YouTube?

Certainly the government is collecting that data, but we don’t know how they’re doing it, we don’t know at what scale they’re doing it, and we don’t know with which authority they’re doing it. And I think it is a farce to say that we’re having a debate about the surveillance authority when really, we’re just debating this very narrow usage of the statute.

Further underscoring this point, yesterday the Department of Justice’s Office of the Inspector General released a redacted version of its internal audit of the FBI’s use of Section 215: “A Review of the FBI’s Use of Section 215 Orders: Assessment of Progress in Implementing Recommendations and Examination of Use in 2007 through 2009,” following the reports of the statute’s use from 2002-2005 and 2006. (Remember that the FBI and the NSA are inexorably connected here. The order to Verizon was from the FBI, requiring it to turn data over to the NSA.)

Details about legal justifications are all in the report (see here for an important point about minimization), but detailed data on exactly what the FBI is collecting — whether targeted or bulk — is left out. We read that the FBI demanded “customer information” (p. 36), “medical and educational records” (p. 39) “account information and electronic communications transactional records” (p. 41), “information regarding other cyber activity” (p. 42). Some of this was undoubtedly targeted against individuals; some of it was undoubtedly bulk.

I believe bulk collection is discussed in detail in Chapter VI. The chapter title is redacted, as well as the introduction (p. 46). Section A is “Bulk Telephony Metadata.” Section B (pp. 59-63) is completely redacted, including the section title. There’s a summary in the Introduction (p. 3): “In Section VI, we update the information about the uses of Section 215 authority described [redacted word] Classified Appendices to our last report. These appendices described the FBI’s use of Section 215 authority on behalf of the NSA to obtain bulk collections of telephony metadata [long redacted clause].” Sounds like a comprehensive discussion of bulk collection under Section 215.

What’s in there? As Soghoian says, certainly other communications systems like prepaid calling cards, Skype, text messaging systems, and e-mails. Search history and browser logs? Financial transactions? The “medical and educational records” mentioned above? Probably all of them — and the data is in the report, redacated (p. 29) — but there’s nothing public.

The problem is that those are the pages Congress should be debating, and not the telephony metadata program exposed by Snowden.

Krebs on Security: mSpy Denies Breach, Even as Customers Confirm It

This post was syndicated from: Krebs on Security and was written by: BrianKrebs. Original post: at Krebs on Security

Last week, KrebsOnSecurity broke the news that sensitive data apparently stolen from hundreds of thousands of customers of mobile spyware maker mSpy had been posted online. mSpy has since been quoted twice by other publications denying a breach of its systems. Meanwhile, this blog has contacted multiple people whose data was published to the Deep Web, all of whom confirmed they were active or former mSpy customers.

mSpy told BBC News it had been the victim of a “predatory attack” by blackmailers, but said it had not given in to demands for money. mSpy also told the BBC that claims the hackers had breached its systems and stolen data were false.

“There is no data of 400,000 of our customers on the web,” a spokeswoman for the company told the BBC. “We believe to have become a victim of a predatory attack, aimed to take advantage of our estimated commercial achievements.”

Let’s parse that statement a bit further. No, the stolen records aren’t on the Web; rather, they’ve been posted to various sites on the Deep Web, which is only accessible using Tor. Also, I don’t doubt that mSpy was the target of extortion attempts; the fact that the company did not pay the extortionist is likely what resulted in its customers’ data being posted online.

How am I confident of this, considering mSpy has still not responded to my requests for comment? I spent the better part of the day today pulling customer records from the hundreds of gigabytes of data leaked from mSpy. I spoke with multiple customers whose payment and personal data — and that of their kids, employees and significant others — were included in the huge cache. All confirmed they are or were recently paying customers of mSpy.

Joe Natoli, director of a home care provider in Arizona, confirmed what was clear from looking at the leaked data — that he had paid mSpy hundreds of dollars a month for a subscription to monitor all of the mobile devices distributed to employees by his company. Natoli said all employees agree to the monitoring when they are hired, but that he only used mSpy for approximately four months.

“The value proposition for the cost didn’t work out,” Natoli said.

Katherine Till‘s information also was in the leaked data. Till confirmed that she and her husband had paid mSpy to monitor the mobile device of their 14-year-old daughter, and that they were still paying customers as of my call to her.

Till added that she was unaware of a breach, and was disturbed that mSpy might try to cover it up.

“This is disturbing, because who knows what someone could do with all that data from her phone,” Till said, noting that she and her husband had both discussed the monitoring software with their daughter. “As parents, it’s hard to keep up and teach kids all the time what they can and can’t do. I’m sure there are lots more people like us that are in this situation now.”

Another user whose financial and personal data was in the cache asked not to be identified, but sheepishly confirmed that he had paid mSpy to secretly monitor the mobile device of a “friend.”


News of the mSpy breach prompted renewed calls from Sen. Al Franken for outlawing products like mSpy, which the Minnesota Democrat refers to as “stalking apps.” In a letter (PDF) sent this week to the U.S. Justice Department and Federal Trade Commission, Franken urged the agencies to investigate mSpy, whose products he called “deeply troubling” and “nothing short of terrifying” when “in the hands of a stalker or abusive intimate partner.”

Last year, Franken reintroduced The Location Privacy Protection Act of 2014, legislation that would outlaw the development, operation, and sale of such products.

U.S. regulators and law enforcers have taken a dim view of companies that offer mobile spyware services like mSpy. In September 2014, U.S. authorities arrested 31-year-old Hammad Akbar, the CEO of a Lahore-based company that makes a spyware app called StealthGenie. The FBI noted that while the company advertised StealthGenie’s use for “monitoring employees and loved ones such as children,” the primary target audience was people who thought their partners were cheating. Akbar was charged with selling and advertising wiretapping equipment.

“Advertising and selling spyware technology is a criminal offense, and such conduct will be aggressively pursued by this office and our law enforcement partners,” U.S. Attorney Dana Boente said in a press release tied to Akbar’s indictment.

Akbar pleaded guilty to the charges in November 2014, and according to the Justice Department he is “the first-ever person to admit criminal activity in advertising and selling spyware that invades an unwitting victim’s confidential communications.”

Schneier on Security: More on Chris Roberts and Avionics Security

This post was syndicated from: Schneier on Security and was written by: schneier. Original post: at Schneier on Security

Last month I blogged about security researcher Chris Roberts being detained by the FBI after tweeting about avionics security while on a United flight:

But to me, the fascinating part of this story is that a computer was monitoring the Twitter feed and understood the obscure references, alerted a person who figured out who wrote them, researched what flight he was on, and sent an FBI team to the Syracuse airport within a couple of hours. There’s some serious surveillance going on.

We know a lot more of the backstory from the FBI’s warrant application. He was interviewed by the FBI multiple times previously, and was able to take control of at least some of the plane’s controls during flight.

During two interviews with F.B.I. agents in February and March of this year, Roberts said he hacked the inflight entertainment systems of Boeing and Airbus aircraft, during flights, about 15 to 20 times between 2011 and 2014. In one instance, Roberts told the federal agents he hacked into an airplane’s thrust management computer and momentarily took control of an engine, according to an affidavit attached to the application for a search warrant.

“He stated that he successfully commanded the system he had accessed to issue the ‘CLB’ or climb command. He stated that he thereby caused one of the airplane engines to climb resulting in a lateral or sideways movement of the plane during one of these flights,” said the affidavit, signed by F.B.I. agent Mike Hurley.

Roberts also told the agents he hacked into airplane networks and was able “to monitor traffic from the cockpit system.”

According to the search warrant application, Roberts said he hacked into the systems by accessing the in-flight entertainment system using his laptop and an Ethernet cable.

Wired has more.

This makes the FBI’s behavior much more reasonable. They weren’t scanning the Twitter feed for random keywords; they were watching his account.

We don’t know if the FBI’s statements are true, though. But if Roberts was hacking an airplane while sitting in the passenger seat…wow is that a stupid thing to do.

From the Christian Science Monitor:

But Roberts’ statements and the FBI’s actions raise as many questions as they answer. For Roberts, the question is why the FBI is suddenly focused on years-old research that has long been part of the public record.

“This has been a known issue for four or five years, where a bunch of us have been stood up and pounding our chest and saying, ‘This has to be fixed,'” Roberts noted. “Is there a credible threat? Is something happening? If so, they’re not going to tell us,” he said.

Roberts isn’t the only one confused by the series of events surrounding his detention in April and the revelations about his interviews with federal agents.

“I would like to see a transcript (of the interviews),” said one former federal computer crimes prosecutor, speaking on condition of anonymity. “If he did what he said he did, why is he not in jail? And if he didn’t do it, why is the FBI saying he did?”

The real issue is that the avionics and the entertainment system are on the same network. That’s an even stupider thing to do. Also last month I wrote about the risks of hacking airplanes, and said that I wasn’t all that worried about it. Now I’m more worried.

Errata Security: Our Lord of the Flies moment

This post was syndicated from: Errata Security and was written by: Robert Graham. Original post: at Errata Security

In its war on researchers, the FBI doesn’t have to imprison us. Merely opening an investigation into a researcher is enough to scare away investors and bankrupt their company, which is what happened last week with Chris Roberts. The scary thing about this process is that the FBI has all the credibility, and the researcher none — even among other researchers. After hearing only one side of the story, the FBI’s side, cybersecurity researchers quickly turned on their own, condemning Chris Roberts for endangering lives by taking control of an airplane.

As reported by Kim Zetter at Wired, though, Roberts denies the FBI’s allegations. He claims his comments were taken out of context, and that on the subject of taking control of a plane, he was talking about a simulator, not a real airplane.

I don’t know which side is telling the truth, of course. I’m not going to defend Chris Roberts in the face of strong evidence of his guilt. But at the same time, I demand real evidence of his guilt before I condemn him. I’m not going to take the FBI’s word for it.

We know how things get distorted. Security researchers are notoriously misunderstood. To the average person, what we say is all magic technobabble anyway. They find this witchcraft threatening, so when we say we “could” do something, it’s interpreted as a threat that we “would” do something, or even that we “have” done something. Important exculpatory details, like “I hacked a simulation”, get lost in all the technobabble.

Likewise, the FBI is notoriously dishonest. Until last year, they forbade audio/visual recording of interviews, preferring instead to take notes. This enshrines any misunderstandings in the official record. The FBI has long abused this, such as by threatening people to inform on friends. It is unlikely the FBI had the technical background to understand what Chris Roberts said. It’s likely they willfully misunderstood him in order to justify a search warrant.

There is a war on researchers. What we do embarrasses the powerful. They will use any means possible to stop us, such as using the DMCA to suppress publication of research, or using the CFAA to imprison researchers. Criminal prosecution is so one sided that it rarely gets that far. Instead, merely the threat of prosecution ruins lives, getting people fired or bankrupted.

When they come for us, the narrative will never be on our side. They will have constructed a story that makes us look very bad indeed. It’s scary how easily the FBI convicts people in the press. They have great leeway to concoct any story they want. Journalists then report the FBI’s allegations as fact. The targets, who need to remain silent lest their words be used against them, can do little to defend themselves. It’s like how in the Matt Dehart case, the FBI alleges child pornography. But when you look into the details, it’s nothing of the sort. The mere taint of this makes people run from supporting Dehart. Similarly with Chris Roberts, the FBI wove a tale of endangering an airplane, based on no evidence, and everyone ran from him.

We need to stand together or fall alone. No, this doesn’t mean ignoring malfeasance on our side. But it does mean that, absent clear evidence of guilt, we stand with our fellow researchers. We shouldn’t go all Lord of the Flies on the accused, eagerly devouring Piggy because we are so relieved it wasn’t us.

P.S. Alex Stamos is awesome, don’t let my bitch slapping of him make you believe otherwise.

Errata Security: Those expressing moral outrage probably can’t do math

This post was syndicated from: Errata Security and was written by: Robert Graham. Original post: at Errata Security

Many are discussing the FBI document where Chris Roberts (“the airplane hacker”) claimed to an FBI agent that at one point, he hacked the plane’s controls and caused the plane to climb sideways. The discussion hasn’t elevated itself above the level of anti-vaxxers.

It’s almost certain that the FBI’s account of events is not accurate. The technical details are garbled in the affidavit. The FBI is notorious for hearing what they want to hear from a subject, which is why for years their policy has been to forbid recording devices during interrogations. If they need Roberts to have said “I hacked a plane” in order to get a search warrant, then that’s what their notes will say. It’s like cops who will yank the collar of a drug sniffing dog in order to “trigger” on drugs so that they have an excuse to search the car.

Also, security researchers are notorious for being misunderstood. Whenever we make innocent statements about what we “could” do, others often interpret this either as a threat or a statement of what we already have done.

Assuming this scenario is true, that Roberts did indeed control the plane briefly, many claim that this is especially reprehensible because it endangered lives. That’s the wrong way of thinking about it. Yes, it would be wrong because it means accessing computers without permission, but the “endangered lives” component doesn’t necessarily make things worse.

Many operate under the principle that you can’t put a price on a human life. That is false, provably so. If you take your children with you to the store, instead of paying the neighbor $10 to babysit them, then you’ve implicitly put a price on your children’s lives. Traffic accidents near the home are the leading cause of death for children. Driving to the store is vastly more dangerous than leaving the kids at home, so you’ve priced that danger at around $10.

Likewise, society has limited resources. Every dollar spent on airline safety has to come from somewhere, such as from AIDS research. With current spending, society is effectively saying that airline passenger lives are worth more than AIDS victims.

Does pentesting an airplane put passenger lives in danger? Maybe. But then so does leaving airplane vulnerabilities untested, which is the current approach. I don’t know which one is worse — but I do know that your argument is wrong when you claim that endangering planes is unthinkable. It is thinkable, and we should be thinking about it. We should be doing the math to measure the risk, pricing each of the alternatives.
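The math being gestured at here is ordinary expected-value comparison. As an illustrative sketch only, with entirely invented probabilities and costs (none of these numbers come from the post or from any real aviation data), weighing the two policies might look like:

```python
# Illustrative expected-cost comparison between two policies.
# Every number below is invented for illustration; none is an
# estimate of real aviation risk.

def expected_cost(prob_catastrophe: float, cost_catastrophe: float,
                  fixed_cost: float = 0.0) -> float:
    """Expected cost of a policy = fixed cost + p * cost of the bad outcome."""
    return fixed_cost + prob_catastrophe * cost_catastrophe

COST_OF_CRASH = 1_000_000_000  # hypothetical dollar-equivalent cost of a crash

# Policy A: security through obscurity -- no testing, so latent
# vulnerabilities persist with some (hypothetical) chance of exploitation.
policy_obscurity = expected_cost(prob_catastrophe=1e-4,
                                 cost_catastrophe=COST_OF_CRASH)

# Policy B: allow controlled pentesting -- a tiny added risk from the
# tests themselves (1e-7), but a much lower residual risk (1e-6),
# plus the fixed cost of running the testing program.
policy_pentest = expected_cost(prob_catastrophe=1e-6 + 1e-7,
                               cost_catastrophe=COST_OF_CRASH,
                               fixed_cost=50_000)

print(f"obscurity: ${policy_obscurity:,.0f}")
print(f"pentest:   ${policy_pentest:,.0f}")
print("pentesting wins" if policy_pentest < policy_obscurity
      else "obscurity wins")
```

With these made-up inputs, obscurity’s expected cost is $100,000 against pentesting’s roughly $51,100, so the “unthinkable” option wins; the point is not the specific numbers but that the comparison is computable at all, rather than a matter of outrage.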

It’s like whistleblowers. The intelligence community hides illegal mass surveillance programs from the American public because it would be unthinkable to endanger people’s lives. The reality is that the danger from the programs is worse, and when revealed by whistleblowers, nothing bad happens.

The same is true here. Airlines assure us that planes are safe and cannot be hacked — while simultaneously saying it’s too dangerous for us to try hacking them. Both claims cannot be true, so we know something fishy is going on. The only way to pierce this bubble and find out the truth is to do something the airlines don’t want, such as whistleblowing or live pentesting.

The systems are built to be reset and manually overridden in-flight. Hacking past the entertainment system to prove one could control the airplane introduces only a tiny danger to the lives of those on-board. Conversely, the current “security through obscurity” stance of the airlines and FAA is an enormous danger. Deliberately crashing a plane just to prove it’s possible would of course be unthinkable. But running a tiny risk of crashing the plane, in order to prove it’s possible, probably will harm nobody. If never having a plane crash due to hacking is your goal, then a live test on a plane during flight is a better way of achieving it than the current official policies of keeping everything secret. The supposedly “unthinkable” option of a live pentest is still (probably) less dangerous than the “thinkable” options.

I’m not advocating anyone do it, of course. There are still better options, such as hacking the system once the plane is on the ground. My point is only that it’s not an unthinkable danger. Those claiming it is haven’t measured the dangers and alternatives.

The same is true of all security research. Those outside the industry believe in security-through-obscurity, that if only they can keep details hidden and pentesters away from computers, then they will be safe. We inside the community believe the opposite, in Kerckhoffs’s Principle of openness, and that the only trustworthy systems are those which have been thoroughly attacked by pentesters. There is a short-term cost of releasing vulns in Adobe Flash, because hackers will use them. But the long-term benefit is that this leads to a more secure Flash, and better alternatives like HTML5. If you can’t hack planes in-flight, then what you are effectively saying is that our belief in Kerckhoffs’s Principle is wrong.

Each year, people die (or get permanently damaged) from vaccines. But we do vaccines anyway because we are rational creatures who can do math, and can see that the benefit-to-risk ratio of vaccines is on the order of a million to one. We look down on the anti-vaxxers who rely upon “herd immunity” and the fact that the rest of us put our children through danger in order to protect their own. We should apply that same rationality to airline safety. If you think pentesting live airplanes is unthinkable, then you should similarly be able to do the math and prove it, rather than rely upon irrational moral outrage.

I’m not arguing hacking airplanes mid-flight is a good idea. I’m simply pointing out it’s a matter of math, not outrage.

Krebs on Security: Mobile Spy Software Maker mSpy Hacked, Customer Data Leaked

This post was syndicated from: Krebs on Security and was written by: BrianKrebs. Original post: at Krebs on Security

mSpy, the makers of a dubious software-as-a-service product that claims to help more than two million people spy on the mobile devices of their kids and partners, appears to have been massively hacked. Last week, a huge trove of data apparently stolen from the company’s servers was posted on the Deep Web, exposing countless emails, text messages, payment and location data on an undetermined number of mSpy “users.”

mSpy has not responded to multiple requests for comment left for the company over the past five days. KrebsOnSecurity learned of the apparent breach from an anonymous source who shared a link to a Web page that is only reachable via Tor, a technology that helps users hide their true Internet address and allows users to host Web sites that are extremely difficult to get taken down.

The Tor-based Web site hosting content stolen from mobile devices running mSpy.

The Tor-based site hosts several hundred gigabytes worth of data taken from mobile devices running mSpy’s products, including some four million events logged by the software. The message left by the unknown hackers who’ve claimed responsibility for this intrusion suggests that the data dump includes information on more than 400,000 users, including Apple IDs and passwords, tracking data, and payment details on some 145,000 successful transactions.

The exact number of mSpy users compromised could not be confirmed, but one thing is clear: There is a crazy amount of personal and sensitive data in this cache, including photos, calendar data, corporate email threads, and very private conversations. Also included in the data dump are thousands of support request emails from people around the world who paid between $8.33 and $799 for a variety of subscriptions to mSpy’s surveillance software.

mSpy users can track the exact location of Android and iPhone users, snoop on apps like Snapchat and Skype, and keep a record of every word the user types.

It’s unclear exactly where mSpy is based; the company’s Web site suggests it has offices in the United States, Germany and the United Kingdom, although the firm does not appear to list an official physical address. However, according to historic Web site registration records, the company is tied to a now-defunct firm called MTechnology LTD out of the United Kingdom.

Documents obtained from Companies House, an official register of corporations in the U.K., indicate that the two founding members of the company are self-described programmers Aleksey Fedorchuk and Pavel Daletski. Those records (PDF) indicate that Daletski is a British citizen, and that Mr. Fedorchuk is from Russia. Neither man could be reached for comment.

Court documents (PDF) obtained from the U.S. District Court in Jacksonville, Fla. regarding a trademark dispute involving mSpy and Daletski state that mSpy has a U.S.-based address of 800 West El Camino Real, in Mountain View, Calif. Those same court documents indicate that Daletski is a director at a firm based in the Seychelles called Bitex Group LTD. Interestingly, that lawsuit was brought by Retina-X Studios, an mSpy competitor based in Jacksonville, Fla. that makes a product called MobileSpy.

U.S. regulators and law enforcers have taken a dim view of companies that offer mobile spyware services like mSpy. In September 2014, U.S. authorities arrested 31-year-old Hammad Akbar, the CEO of a Lahore-based company that makes a spyware app called StealthGenie. The FBI noted that while the company advertised StealthGenie’s use for “monitoring employees and loved ones such as children,” the primary target audience was people who thought their partners were cheating. Akbar was charged with selling and advertising wiretapping equipment.

“Advertising and selling spyware technology is a criminal offense, and such conduct will be aggressively pursued by this office and our law enforcement partners,” U.S. Attorney Dana Boente said in a press release tied to Akbar’s indictment.

Akbar pleaded guilty to the charges in November 2014, and according to the Justice Department he is “the first-ever person to admit criminal activity in advertising and selling spyware that invades an unwitting victim’s confidential communications.”

Unlike Akbar’s StealthGenie and some other mobile spyware products, mSpy advertises that its product works even on non-jailbroken iPhones, giving users the ability to log the device holder’s contacts, call logs, text messages, browser history, events and notes.

“If you have opted to purchase mSpy Without Jailbreak, and you have the mobile user’s iCloud credentials, you will not need physical access to the device,” the company’s FAQ states. “However, there may be some instances where physical access may be necessary. If you purchase mSpy for a jailbroken iOS phone or tablet, you will need 5-15 minutes of physical access to the device for successful installation.”

A public relations pitch from mSpy to KrebsOnSecurity in March 2015 stated that approximately 40 percent of the company’s users are parents interested in keeping tabs on their kids. Assuming that is a true statement, it’s ironic that so many parents have now unwittingly exposed their kids to predators, bullies and other ne’er-do-wells thanks to this breach.

Schneier on Security: Admiral Rogers Speaking at the Joint Service Academy Cyber Security Summit

This post was syndicated from: Schneier on Security and was written by: schneier. Original post: at Schneier on Security

Admiral Mike Rogers gave the keynote address at the Joint Service Academy Cyber Security Summit today at West Point. He started by explaining the four tenets of security that he thinks about.

First: partnerships. This includes government, civilian, everyone. Capabilities, knowledge, and insight of various groups, and aligning them to generate better outcomes for everyone. Ability to generate and share insight and knowledge, and to do that in a timely manner.

Second, innovation. It’s about much more than just technology. It’s about ways to organize, values, training, and so on. We need to think about innovation very broadly.

Third, technology. This is a technologically based problem, and we need to apply technology to defense as well.

Fourth, human capital. If we don’t get people working right, all of this is doomed to fail. We need to build security workforces inside and outside of military. We need to keep them current in a world of changing technology.

So, what is the Department of Defense doing? They’re investing in cyber, both because it’s a critical part of future fighting of wars and because of the mission to defend the nation.

Rogers then explained the five strategic goals listed in the recent DoD cyber strategy:

  1. Build and maintain ready forces and capabilities to conduct cyberspace operations;
  2. Defend the DoD information network, secure DoD data, and mitigate risks to DoD missions;
  3. Be prepared to defend the U.S. homeland and U.S. vital interests from disruptive or destructive cyberattacks of significant consequence;
  4. Build and maintain viable cyber options and plan to use those options to control conflict escalation and to shape the conflict environment at all stages;
  5. Build and maintain robust international alliances and partnerships to deter shared threats and increase international security and stability.

Expect to see more detailed policy around these goals in the coming months.

What is the role of U.S. Cyber Command and the NSA in all of this? Cyber Command has three missions related to the five strategic goals. They defend DoD networks. They create the cyber workforce. And, if directed, they defend national critical infrastructure.

At one point, Rogers said that he constantly reminds his people: “If it was designed by man, it can be defeated by man.” I hope he also tells this to the FBI when they talk about needing third-party access to encrypted communications.

All of this has to be underpinned by a cultural ethos that recognizes the importance of professionalism and compliance. Every person with a keyboard is both a potential asset and a threat. There need to be well-defined processes and procedures within DoD, and a culture of following them.

What’s the threat dynamic, and what’s the nature of the world? The threat is going to increase; it’s going to get worse, not better; cyber is a great equalizer. Cyber doesn’t recognize physical geography. Four “prisms” to look at threat: criminals, nation states, hacktivists, groups wanting to do harm to the nation. This fourth group is increasing. Groups like ISIL are going to use the Internet to cause harm. Also embarrassment: releasing documents, shutting down services, and so on.

We spend a lot of time thinking about how to stop attackers from getting in; we need to think more about how to get them out once they’ve gotten in — and how to continue to operate even though they are in. (That was especially nice to hear, because that’s what I’m doing at my company.) Sony was a “wake-up call”: a nation-state using cyber for coercion. It was theft of intellectual property, denial of service, and destruction. And it was important for the US to acknowledge the attack, attribute it, and retaliate.

Last point: “Total force approach to the problem.” It’s not just about people in uniform. It’s about active duty military, reserve military, corporations, government contractors — everyone. We need to work on this together. “I am not interested in endless discussion…. I am interested in outcomes.” “Cyber is the ultimate team sport.” There’s no single entity, or single technology, or single anything, that will solve all of this. He wants to partner with the corporate world, and to do it in a way that benefits both.

First question was about the domains and missions of the respective services. Rogers talked about the inherent expertise that each service brings to the problem, and how to use cyber to extend that expertise — and the mission. The goal is to create a single integrated cyber force, but not a single service. Cyber occurs in a broader context, and that context is applicable to all the military services. We need to build on their individual expertises and contexts, and to apply it in an integrated way. Similar to how we do special forces.

Second question was about values, intention, and what’s at risk. Rogers replied that any structure for the NSA has to integrate with the nation’s values. He talked about the value of privacy. He also talked about “the security of the nation.” Both are imperatives, and we need to achieve both at the same time. The problem is that the nation is polarized; the threat is getting worse at the same time trust is decreasing. We need to figure out how to improve trust.

Third question was about DoD protecting commercial cyberspace. Rogers replied that the DHS is the lead organization in this regard, and DoD provides capability through that civilian authority. Any DoD partnership with the private sector will go through DHS.

Fourth question: How will DoD reach out to corporations, both established and start-ups? Many ways. By providing people to the private sector. Funding companies, through mechanisms like the CIA’s In-Q-Tel. And some sort of innovation capability. Those are the three main vectors, but more important is that the DoD mindset has to change. DoD has traditionally been very insular; in this case, more partnerships are required.

Final question was about the NSA sharing security information in some sort of semi-classified way. Rogers said that there are a lot of internal conversations about doing this. It’s important.

In all, nothing really new or controversial.

These comments were recorded — I can’t find them online now — and are on the record. Much of the rest of the summit was held under the Chatham House Rule. I participated in a panel on “Crypto Wars 2015” with Matt Blaze and a couple of government employees.

Errata Security: NSA: ad hominem is still a fallacy

This post was syndicated from: Errata Security and was written by: Robert Graham. Original post: at Errata Security

An ad hominem attack is where, instead of refuting a person’s arguments, you attack their character. It’s a fallacy that enlightened people avoid. I point this out because of a piece in The Intercept about how some of the NSA’s defenders have financial ties to the NSA. This is a fallacy.

The first rule of NSA club is don’t talk about NSA club. The intelligence community frequently publishes rules to this effect to all their employees, contractors, and anybody else under their thumb. They don’t want their people talking about the NSA, even in defense. Their preferred defense is lobbying politicians privately in back rooms. They hate having things out in the public. Or, when they do want something public, they want to control the messaging (they are control freaks). They don’t want their supporters muddying the waters with conflicting messaging, even if it is all positive. What they fear most is bad supporters, the type that does more harm than good. Inevitably, some defender of the NSA is going to say “ragheads must die”, and that’ll be the one thing attackers will cherry pick to smear the NSA’s reputation.

Thus, you can tell how close somebody is to the NSA by how much they talk about the NSA — the closer to the NSA they are, the less they talk about it. That’s how you know that I’m mostly an outsider — if I actually had the close ties to the NSA that some people think I do, then I couldn’t publish this blogpost.

Note that there are a few cases where this might not apply, like Michael Hayden (former head) and Stewart Baker (former chief lawyer). Presumably, these guys have such close ties with insiders that they can coordinate messaging. But they are exceptions, not the rule.

The idea of “conflict of interest” is a fallacy because it works both ways. You’d expect employees of the NSA to like the NSA. But at the same time, you’d expect that those who like the NSA would also seek a job at the NSA. Thus, it’s likely they sincerely like the NSA, and not just because they are paid to do so.

This applies even to Edward Snowden himself. In an interview, he said of the NSA, “These are good people trying to do hard work for good reasons.” He went to work for the intelligence community because he believed in their mission, that they were good people. He leaked the information because he felt the NSA overstepped their bounds, not because the mission of spying for your country was wrong.

If the “conflict of interest” fallacy were correct, then it would apply to The Intercept as well, whose entire purpose is to fan the flames of outrage over the NSA. If the conflict of interest about NSA contractors is a matter of public concern, then so is the amount Glenn Greenwald is getting paid for his stash of Snowden secrets, and how much Snowden gets paid living in Russia.

The reality is this. Those who attack the NSA, like The Intercept, are probably sincere in their attacks. Likewise, those who defend the NSA are likely sincere in their defense.

As the book To Kill a Mockingbird said, you don’t truly know somebody until you’ve walked a mile in their shoes. Many defend the NSA simply because they’ve walked a mile in the NSA’s shoes. I say this from my own personal perspective. True, I often attack the NSA, because I agree with Snowden that surveillance has gone too far. But at the same time, again like Snowden, I feel they’ve been unfairly demonized — because I’ve seen them up close and personal. In the intelligence community, it’s the NSA who takes civil rights seriously, and it’s organizations like the DEA, ATF, and FBI that’ll readily stomp on your rights. We should be hating these other organizations more than the NSA.

It’s those like The Intercept who are the questionable bigots here. They make no attempt to see things from another point of view. As a technical expert, I know their stories based on Snowden leaks are often bunk — exploited to trigger rage with little interest in understanding the truth.

Stewart Baker and Michael Hayden are fascist pieces of crap who want a police state. That doesn’t mean their arguments are always invalid, though. They know a lot about the NSA. They are worth considering, even if wrong.