Posts tagged ‘Other’

TorrentFreak: Rightscorp Offered Internet Provider a Cut of Piracy Settlements

This post was syndicated from: TorrentFreak and was written by: Ernesto. Original post: at TorrentFreak

Piracy monetization firm Rightscorp has made headlines over the past year, often because of its aggressive attempts to obtain settlements from allegedly pirating Internet users.

Working on behalf of various copyright owners, including Warner Bros. and BMG, the company sends copyright infringement notices to Internet providers in the U.S. and Canada. These notices include a settlement proposal, offering alleged downloaders an option to pay off their “debt.”

Rightscorp’s practices haven’t been without controversy. The company and its clients have been sued for abuse and harassment and various large ISPs refuse to forward the settlements to their subscribers.

Cox Communications, one of the larger Internet providers in the U.S., also chose not to work with Rightscorp. The ISP didn’t comment on this refusal initially, but now that Cox has been sued by several Rightscorp clients, it has revealed why.

In a statement that leaves little to the imagination, Cox notes that Rightscorp is “threatening” subscribers with “extortionate” letters.

“Rightscorp is in the business of threatening Internet users on behalf of copyright owners. Rightscorp specifically threatens subscribers of ISPs with loss of their Internet service — a punishment that is not within Rightscorp’s control — unless the subscribers pay a settlement demand,” Cox writes (pdf).

As a result, the ISP decided not to participate in the controversial scheme unless Rightscorp revised the notifications and removed the extortion-like language.

“Because Rightscorp’s purported DMCA notices were, in fact, improper threats against consumers to scare them into paying settlements to Rightscorp, Cox refused to accept or forward those notices, or otherwise to participate in Rightscorp’s extortionate scheme.”

“Cox expressly and repeatedly informed Rightscorp that it would not accept Rightscorp’s improper extortion threat communications, unless and until Rightscorp revised them to be proper notices.”

The two parties went back and forth over the details and somewhere in this process Rightscorp came up with a controversial proposal. The company offered Cox a cut of the settlement money its subscribers would pay, so the ISP could also profit.

“Rightscorp had a history of interactions with Cox in which Rightscorp offered Cox a share of the settlement revenue stream in return for Cox’s cooperation in transmitting extortionate letters to Cox’s customers. Cox rebuffed Rightscorp’s approach,” Cox informs the court.

This allegation had not been revealed before, and it shows the lengths to which Rightscorp is willing to go to get ISPs to comply. It’s not clear whether the same proposal was made to other ISPs as well, but that wouldn’t be a surprise.

Cox, however, didn’t take the bait and still refused to join the scheme. Rightscorp wasn’t happy with this decision and according to the ISP, the company and its clients are now getting back at them through the “repeat infringer” lawsuit.

“This lawsuit is, in effect, a bid both to punish Cox for not participating in Rightscorp’s scheme, and to gain leverage over Cox’s customers for the settlement shakedown business model that Plaintiffs and Rightscorp jointly employ,” Cox notes.

Despite the strong language and extortion accusations used by Cox, the revelations didn’t prevent the Court from granting copyright holders access to the personal details of 250 accused copyright infringers.

The case is just getting started though, and judging from the aggressive stance being taken by both sides we can expect a lot more dirt to come out in the months ahead.

Source: TorrentFreak, for the latest info on copyright, file-sharing, torrent sites and anonymous VPN services.

[$] A tale of two data-corruption bugs

This post was syndicated and was written by: corbet.

There have been two bugs causing filesystem corruption in the news
recently. One of them, a bug in ext4, has gotten the bulk of the
attention, despite the fact that it is an old bug that is hard to trigger.
The other, however, is recent and able to cause data loss on
filesystems installed on a RAID 0 array. Both are interesting
examples of how things can go wrong, and, thus, merit a closer look.

TorrentFreak: Court Orders Cox to Expose “Most Egregious” BitTorrent Pirates

This post was syndicated from: TorrentFreak and was written by: Ernesto. Original post: at TorrentFreak

Last year BMG Rights Management and Round Hill Music sued Cox Communications, arguing that the ISP fails to terminate the accounts of repeat infringers.

The companies, which control the publishing rights to songs by Katy Perry, The Beatles and David Bowie among others, claim that Cox has given up its DMCA safe harbor protections due to this inaction.

The case revolves around the “repeat infringer” clause of the DMCA, which prescribes that Internet providers must terminate the accounts of persistent pirates.

As part of the discovery process the music outfits requested details on the accounts which they caught downloading their content. In total there are 150,000 alleged pirates, but as a compromise BMG and Round Hill limited their initial request to 500 IP-addresses.

Cox refused to hand over the requested information arguing that the Cable Privacy Act prevents the company from disclosing this information.

The matter was discussed during a court hearing late last week. After a brief deliberation Judge John Anderson ruled that the ISP must hand over the personal details of 250 subscribers.

“Defendants shall produce customer information associated with the Top 250 IP Addresses recorded to have infringed in the six months prior to filing the Complaint,” Judge Anderson writes.

“This production shall include the information as requested in Interrogatory No.13, specifically: name, address, account number, the bandwidth speed associated with each account, and associated IP address of each customer.”


The music companies also asked for the account information of the top 250 IP-addresses connected to the piracy of their files after the complaint was filed, but this request was denied. Similarly, if the copyright holders want information on any of the 149,500 other Cox customers they need a separate order.

The music companies previously informed the court that the personal details are crucial to prove their direct infringement claims, but it’s unclear how they plan to use the data.

While none of the Cox customers are facing any direct claims as of yet, it’s not unthinkable that some may be named in the suit to increase pressure on the ISP.

The full list of IP-addresses is available for download here (PDF).

Source: TorrentFreak, for the latest info on copyright, file-sharing, torrent sites and anonymous VPN services.

TorrentFreak: WebHost Owner Cleared of Aiding Torrent Site Piracy

This post was syndicated from: TorrentFreak and was written by: Andy. Original post: at TorrentFreak

Following a complaint from Swedish anti-piracy group Antipiratbyrån, in November 2011 police carried out raids in two locations against private torrent site TTi, aka The Internationals.

In one location police targeted site owner Joel Larsson. In the other, they targeted Patrik Lagerman, boss of web-hosting firm PatrikWeb, the company providing hosting for the torrent site.

The case against Larsson centered around the unlawful distribution of copyrighted video content by his site’s users. Lagerman was accused of aiding that infringement after he refused to take the site down following a request (not backed by a court order) from Antipiratbyrån.

The case dragged on for more than three and a half years but concluded earlier this month. The judgment was handed down yesterday and it’s one of mixed fortunes.

Larsson previously admitted to being the operator of TTi and also the person who accepted donations from site members, an amount equivalent to around US$12,000. He also insisted that he never controlled the content shared by his site’s users.

In its judgment, however, the court noted that files found on a confiscated PC revealed details of meetings with site staff indicating that Larsson fully understood that the site was involved in the exchange of infringing content.

The Court found Larsson guilty of copyright infringement and sentenced him to 90 hours of community service. Had a custodial sentence been imposed instead, he would have served three months.

The Court also ordered the seizure of several servers connected with the site, but rejected a prosecution claim for the forfeiture of the $12,000 in site donations after it was determined that Larsson had spent the same amount keeping the site running.

For Patrik Lagerman, the site’s host, things went much better. Despite finding that Lagerman had indeed been involved in the site’s operations by providing hosting and infrastructure, he was deemed not negligent for his refusal to take down the site without a court order. He was acquitted on all charges.

Commenting on the judgment, Sara Lindbäck at Rights Alliance told TorrentFreak that getting a conviction was the important thing in this case.

“The person responsible for the illegal service was found guilty. That is the important part in the ruling. The illegal services are causing tremendous damages to the rights holders,” Lindbäck said.

“In this case the person had also received substantial amounts in donations, in other words receiving money for content that somebody else has created.”

Speaking on Lagerman’s acquittal, Lindbäck acknowledged that the situation had been less straightforward.

“Regarding the hosting provider, the court did not find him responsible for copyright infringement. The legal aspects to the responsibility for hosting providers is of course interesting legally. We will now analyze the ruling further and see what consequences it can have in the future.”

Rights Alliance did not reveal whether it intends to appeal, but considering the amount of time already passed since the arrests in 2011, that seems unlikely.

Source: TorrentFreak, for the latest info on copyright, file-sharing, torrent sites and anonymous VPN services.

Nocera: iio-sensor-proxy 1.0 is out!

This post was syndicated and was written by: n8willis.

At his blog, Bastien Nocera announces
the 1.0 release of iio-sensor-proxy,
a framework for accessing the various environmental sensors (e.g.,
accelerometer, magnetometer, proximity, or ambient-light sensors) built
into recent laptops. The proxy is a daemon that listens to the
Industrial I/O (IIO) subsystem and provides access to the sensor
readings over D-Bus. As of right now, support for ambient-light
sensors and accelerometers is working; other sensor types are in
development. The current API is based on those used by Android and
iOS, but may be expanded in the future. “For future versions,
we’ll want to export the raw accelerometer readings, so that
applications, including games, can make use of them, which might bring
up security issues. SDL, Firefox, WebKit could all do with being
adapted, in the near future.”
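The D-Bus interface described above can be exercised with a short script. The following is a minimal sketch, assuming the service name the project documents (net.hadess.SensorProxy) and the third-party pydbus binding; both are assumptions, and the function degrades gracefully on machines without the daemon:

```python
def read_light_level():
    """Return the ambient-light reading as a string, or a fallback message."""
    try:
        import pydbus  # third-party D-Bus binding (an assumption here)
        bus = pydbus.SystemBus()
        # Object path and interface as documented by iio-sensor-proxy
        proxy = bus.get("net.hadess.SensorProxy", "/net/hadess/SensorProxy")
        proxy.ClaimLight()                       # ask the daemon to start reporting
        reading = f"{proxy.LightLevel} {proxy.LightLevelUnit}"
        proxy.ReleaseLight()                     # stop reporting when done
        return reading
    except Exception:
        # No daemon, no D-Bus, or no sensor on this machine
        return "iio-sensor-proxy not available"

print(read_light_level())
```

On a laptop with a supported ambient-light sensor this should print a reading with its unit (e.g. lux); elsewhere it simply reports that the daemon is unavailable.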

The Hacker Factor Blog: The Friendly Skies

This post was syndicated from: The Hacker Factor Blog and was written by: The Hacker Factor Blog. Original post: at The Hacker Factor Blog

The alleged airplane hacking by Chris Roberts is having some significant implications. Some experts are claiming that it cannot be done, Roberts says that he did manage to compromise aircraft security, and United Airlines has banned Roberts for life. (If there isn’t a real risk, then why ban him for life?) I don’t know the actual details of the Roberts exploit, but I’ve been hearing about airplane vulnerabilities for years. Ironically, the Government Accountability Office (GAO) recently released a report which discusses potential vulnerabilities in aircraft computer systems.

Meanwhile, United Airlines just announced a bug bounty program. Many people in the media are praising United for taking this first step. However, I suspect the people praising this bug bounty program never actually read the announcement. In my opinion, the program is nothing more than entrapment. So…. I filed a bug! The bug is their bug bounty program!

Thank you for choosing United

There is always a consideration about how long to give a vendor to respond to a bug submission. Since United recently started this program, they should be well staffed (“should” being the operative word). I doubt that most people are submitting bugs related to the textual guidelines, so my bug should be routed quickly to their legal department for review. And finally, there is the question of the impact… the way it is written, people who report bugs into United’s bounty program could face litigation. In fact, the rules are explicitly written so that United can sue anyone who tries to report bugs. I view this as United Airlines putting my fellow security researchers and Good Samaritans at risk. Therefore, I think five workdays is ample time for a response.

Keep in mind, I was only looking for a response stating that they had evaluated the bug, not that they had issued changes or decided to not issue changes. To date, I have not received any response beyond the confirmation of receipt.

I submitted this bug on Monday morning. I decided to give United until close-of-business on Friday to respond with more than an automated “received” message. Since I have not received a response, I am making details of this bug public.

Here is the bug that I submitted:

Dear United Airlines,

I believe that I have found a significant bug on a United Airlines customer-facing web site. The affected page is the Bug Bounty Program page:

(I am not joking. I am also including the Electronic Frontier Foundation (EFF) on this bug report.)

There are many types of bugs. This particular bug is in the documentation. The way the terms are written, any technical bug “will result in permanent disqualification from the bug bounty program and possible criminal and/or legal investigation.”

For example:

  • The list of bugs that are eligible for submission includes cross-site request forgery and cross-site scripting exploits. However, under “Do not attempt” it explicitly excludes code injection on live systems. Cross-site request and scripting attacks are explicitly types of code injection attacks and the only sites available for evaluation are live systems. Thus, any report of this nature will be subject to permanent disqualification and possible criminal charges.

  • The list of eligible bugs includes “the ability to brute-force reservations, MileagePlus numbers, PINs or passwords”. However, the exclusions explicitly list brute-force attacks. At best, the current rules permit a researcher to say “there might possibly maybe be a brute-force bug, but I am unable to make any concrete determination since doing so would be a violation.”
  • With regards to brute-force attempts: A researcher testing any iterators may not know a priori what record will be returned. Evaluating the site for a potential brute-force exploit may result in the “compromise or testing of MileagePlus accounts that are not your own” — a prohibited act that could lead to legal action from United Airlines. This restriction is also mentioned in the Terms and Conditions: “You must not knowingly or intentionally access or acquire the personal information of any United customer or member.” While it may not be “knowingly”, any type of brute force attack identification is “intentional” and may reveal information about other United Airlines customers.
  • Eligible bugs include “potential for information disclosure” and “timing attacks that prove the existence of a private repository, user or reservation”. As with brute-force attempts, these may result in the disclosure of information that the researcher does not own.
  • The list of “bugs that are not eligible for submission” includes “bugs that only affect legacy or unsupported browsers, plugins or operating systems”. As long as a system is currently in use, it poses a potential risk to consumers. By excluding these from any evaluation, it effectively says that United Airlines is not interested in real bugs that affect real consumers/customers because they exist on systems that United uses but no longer wishes to maintain.
  • The list of ineligible bugs includes “bugs on internal sites for United employees or agents (not customer-facing)”. However, if an external user is able to access or compromise an internal site, then isn’t that a significant risk? Again, it appears as though United Airlines wishes to exclude systems that could pose a risk to customers.
  • The list of ineligible bugs includes “bugs on onboard Wi-Fi, entertainment systems or avionics”. While I have no desire to compromise the security of any airplane, shouldn’t United Airlines be willing to listen to serious security risks posed by the onboard computer network and systems? This exclusion strikes me as if corporate attorneys are excluding items so that they can claim ignorance and not need to address any potential problems.
  • The Terms and Conditions say “Bugs or potential Bugs you discover may not at any time be disclosed publicly or to a third-party.” This runs contrary to established guidelines for responsible disclosure. In particular, this clause permits United Airlines to receive bug reports and do nothing with them for an indefinite amount of time.

    Because the United Airlines reporting process includes the threat of legal action, I am including the attorneys at the EFF on this bug report. If, by including the attorneys, I have violated the United Airlines Bug Bounty Program Terms and Conditions, then I see no harm in posting these findings on my blog. Please let me know if I should not blog about this in the near future.

  • The Terms and Conditions also states that information received under the bounty program is considered “Confidential Information”. However, this clause may backfire on United. In particular, any bug that does not receive an award could be considered rejected by United. Any notice of rejection (or no notice at all) could lead to large legal headaches for United Airlines in the event that the declined bug turns out to be more serious than first determined by United Airlines.
  • The Bug Bounty Program makes no mention of any identifier tracking reported issues, time frame for a response, or any other information related to managing reports. Without this information, a security specialist performing a responsible disclosure will have no means to determine whether the issue was received or is being evaluated by United Airlines.
  • While I am not an attorney, the list of “Bugs that are not eligible for submission” appears to be an acknowledgment and acceptance of risk. United Airlines either knows or suspects of the existence of vulnerabilities related to the excluded topics. This type of acknowledgment may result in issues related to airline insurance and result in closer scrutiny by transportation regulators.

If I may be so bold, I suggest the following changes to the United Airlines bug bounty program:

  1. Replace the entire “Do not attempt” section with clauses that prohibit endangering passengers and their personal information. Short-term minor testing should be acceptable as long as it is limited to non-hostile proofs-of-concept and any disclosed information remains confidential.

  2. Remove the entire section on excluded topics. These exclusions do not impact malicious individuals — they are not bound by the exclusions. In contrast, these exclusions restrict what Good Samaritans can report on.
  3. If you are concerned with researchers causing problems on live systems, then deploy a set of test systems for the public to evaluate. The test systems should run the same code with the same configuration, but should not process flight requests or contain real customer information. (Populate it with test data.)
  4. The Defcon security conference is coming up in August. Consider providing a real airplane (or a fully-functional simulator) that attendees can evaluate and attempt to compromise. (Either the EFF or I can put you in touch with the conference organizers.) If Defcon is not a viable option at this time, then consider selecting smaller groups of security researchers.

    Currently, other companies, including Apple, Microsoft, and Tesla, have participated in “hacking challenges”. The approach for airplanes would be similar: provide a functional system to a group of talented security experts and see what they can identify. (Let me know if you are interested in this option; I know many of these researchers and can put you in touch with them.)

    Public hacking challenges permit involvement by the security community. These challenges have helped mitigate risks and increase awareness of the importance that security plays. This type of transparency would assure the flying public that United is doing everything possible to protect consumers, and demonstrate to DHS, the FAA, and other governmental bodies that this issue is important and that United is willing to have the security community become involved in order to identify and mitigate issues quickly.

  5. Explicitly describe the evaluation process at United Airlines. For example, a bug is reported, a tracking issue identifier is provided as a confirmation of receipt, and the bug will receive a preliminary evaluation and feedback within one work week. The current program page lacks these details.
  6. Consider consulting with the EFF, CERT, or others who specialize in developing responsible disclosure policies and bug bounty programs. They can provide advice on how to create a functional bug bounty program. In addition, ensure that the United Airlines Bug Bounty Program is compliant with both ISO 27035 (Security Incident Management) and ISO 30111 (Vulnerability Handling Processes); the current bug bounty program does not appear to be ISO compliant.

The program that United Airlines has currently provided appears on the surface to request cooperation with the security community. However, it effectively excludes any type of bug that poses a potential risk to customers and threatens researchers with legal action for reporting bugs.


Neal Krawetz, Ph.D.
Hacker Factor Solutions
[redacted phone number]
(If you look me up on your systems, you will find my MileagePlus number. I am not including it here since I am CC:’ing the EFF on this email.)

Yes: I really submitted that bug.

Fasten your safety belt low and tight across your lap

On one hand, United claims that submitting to its bug bounty program constitutes agreement to comply with the terms and conditions. (The actual text says, “By participating, you agree to comply with the United Terms.”) However, the bug bounty program also states a number of requirements that I did not satisfy. For example, they say that reports must include my United MileagePlus frequent flier number. Rather than supplying it, I told them to go look it up. If United attempts to sue me for making this public, then I will point out that I did not comply with their submission requirements and am therefore not bound by their terms and conditions.

It is also worth noting that United does not seem to offer any way to report bugs except via the bug bounty program. What if I want to submit a bug but do not want to participate in their bug bounty program?

The terms also state that I cannot discuss this exploit with anyone. The actual text says, “Bugs or potential Bugs you discover may not at any time be disclosed publicly or to a third-party. Doing so will disqualify you from receiving award miles.” I have a lot of problems with this part of the terms and conditions.

  • It says that I cannot discuss the bug with any third-party. “Any” appears to include legal counsel. While I am not an attorney, I would be very surprised if this clause were not thrown out by the court, since United cannot bar me from talking to legal counsel.

  • If the bug is serious enough, then it puts lives in danger. In the case of this bug, I point out that any bug report is effectively entrapment and legitimate security researchers could face jail time that would impact their lives. Thus, I would rather protect honest people than stay quiet indefinitely. As such, I am including a short description of this issue in response to a Request For Comments from the Department of Commerce.
  • United’s bug bounty program lists “payments” that are awarded to specific types of bugs. Bugs about their bug program are not one of their payment categories, so I do not expect any payment. The threat that I will not receive mileage points for violating the terms and conditions seems moot.

Be careful: objects may have shifted in the overhead compartments

In response to my bug submission on Monday, I received a short automated reply:


Thank you for your submission to our Bug Bounty Program! We wanted to confirm that we have received your submission and will review it and contact you as soon as we can. While it may take us some time to get through all of our responses, rest assured that we will respond to all messages in the order we received them.

Thank you,
United Airlines Bug Bounty Team

Even this response has problems. First, it is missing any kind of tracking number. If I submitted a dozen bugs, then there is no method to identify which bug is being discussed in any future contact. Also, if I have any updates, then I have no means to tell them to update a previous submission. For example:

  • I should have also cited ISO 29147 (Vulnerability Disclosure) in my bug report.

  • I should have also mentioned that the terms and conditions state that “The Program is not a game or competition”. Yet, they explicitly list rules, boundaries, penalties, and rewards. Those are all of the criteria necessary for turning this into a game.

Without a case number or tracking ID, I cannot tell United to update my submission.

In contrast, Microsoft, Google, GoDaddy, the Internet Storm Center, NCMEC, and every bugzilla project provides me with a tracking number. In fact, I cannot remember the last time I sent a bug report to a company where I didn’t receive a tracking number… until I submitted to United Airlines.

Second, United provided me with no timeframe for the review. Every other company includes some kind of time period. It might be “we will get back to you in 48 hours” or a week, but it is not an indefinite “as soon as we can”. In fact, FedEx and GoDaddy send periodic follow-ups. (GoDaddy’s phone support will pick up the phone and talk to you every few minutes, just to let you know that they are still working on it.) Since the typical window is usually a few days, I decided to give United five days to follow up. I submitted the report on Monday morning. I posted this blog entry on Friday after 5pm (Eastern).

Sit back and enjoy your flight

While I believe that United Airlines is honestly trying to do the right thing, I am also dumbfounded by the approach they decided to take. You don’t threaten to sue people who are trying to provide positive feedback; you don’t shoot the messenger. You don’t exclude areas because you don’t want people looking there; if anything, that creates more of an incentive to look. And you don’t forget to provide a tracking number since that gives the impression that you do not expect any follow-ups.

There are plenty of sample bug reporting systems that United could have looked at. In my opinion, Microsoft currently has one of the best reporting processes out there. GoDaddy and Google also have very good bug tracking systems. Even the evil telephone company has a great issue tracking system. (Horrible service, but they can look up issues quickly.) As it is, United appeared to just wing it — and failed miserably.

To me, this half-hearted attempt at a bug bounty program seems more like a superficial effort to appear friendly and to put a positive spin on their recent computer security problems. However, their strong emphasis on legal action, contradictory requirements, and incomplete vulnerability report tracking makes it appear as if they are not interested in any public assistance. If you plan to submit a potential vulnerability to United Airlines, please be careful… United does not know what they are doing, and that makes them more dangerous than any other airline.

TorrentFreak: Pirate Bay Loses New Domain Name, Hydra Lives On

This post was syndicated from: TorrentFreak and was written by: Ernesto. Original post: at TorrentFreak

Earlier this week the Stockholm District Court ordered the Pirate Bay’s .SE domains to be handed over to the Swedish state, arguing that they were linked to copyright crimes.

The Pirate Bay was fully prepared for the negative outcome and quickly redirected its visitors to six new domain names.

Since then the site has been accessible through the GS, LA, VG, AM, MN and GD domain names, without even a second of downtime.

Marking the change, The Pirate Bay updated its logo to the familiar Hydra, linking a TLD to each of its heads. However, we can now reveal that one head has already been chopped off.

The site’s .GS domain name has been suspended by the registry, and is now listed as “ServerHold” and “Inactive.”

The Pirate Bay informs us that the .GS domain has indeed been lost, which didn’t come as a complete shock. In fact, one of the reasons to move to six domains was to see which ones would hold up.

“We have more domain names behind, if needed. We are stronger than ever and will defend the site to the end,” the TPB team tells us.

At this point it’s unclear for how long the other domain names will remain available. Hoping to find out more, we reached out to the respective registries to discover their policies on domains being operated by The Pirate Bay.

The Mongolian .MN registry informs TF that they will process potential complaints through ICANN’s Dispute Resolution Policy, suggesting that they will not take any voluntary action.

The VG Registry referred us to their terms and conditions, specifically sections 3.4 and 7.2, which allow for an immediate termination or suspension if a domain infringes on the rights of third parties. However, it could not comment on this specific case.

“We will review any complaint and act accordingly. Please understand that we cannot make any predictions based on theoretical options,” a VG Registry spokesperson says.

It won’t be a big surprise if several more Pirate Bay domain names are suspended during the days and weeks to come. That’s a Whac-A-Mole game the site’s operators are all too familiar with now, but one that won’t bring the site to its knees.

Source: TorrentFreak, for the latest info on copyright, file-sharing, torrent sites and anonymous VPN services.

Linux How-Tos and Linux Tutorials: Master and Protect PDF Documents with PDFChain on Linux

This post was syndicated from: Linux How-Tos and Linux Tutorials and was written by: Jack Wallen. Original post: at Linux How-Tos and Linux Tutorials


If you’re a user of the Linux platform, you know there are a lot of tools at your disposal. If you work with PDF documents, you may feel as if the open source ecosystem has fallen a bit short in the PDF productivity category. Fortunately, you’d be wrong. Linux has a number of outstanding PDF tools, ranging from a full-fledged, pro-quality DTP tool in Scribus all the way down to command-line tools such as pdftotext.

Between Scribus and pdftotext lie some outstanding PDF tools, ready to serve. One such tool is PDFChain, a graphical user interface for the PDF Toolkit (pdftk). With this piece of user-friendly software you can master your PDF documents (catenate, watermark, add attachments, split a document into single pages), password-protect documents, and even control the permissions of a document. It is this last feature that might be of most interest to users. Why? Imagine creating a PDF document and being able to control whether or not a user can:

  • Print

  • Copy contents

  • Modify contents

  • Modify annotations

  • Use degraded printing

  • Use screen readers

  • and more.

Let’s walk through the process of piecing together a single PDF document (using multiple .pdf files), breaking apart a single PDF document, as well as adding a background watermark, and altering the permissions to prevent users from having complete access to the document and its features.

Installing PDFChain

Before you begin working with the tool, it must be installed. Fortunately, PDFChain can be found in most standard repositories. Open up your package manager (such as the Ubuntu Software Center or Synaptic) and search for PDFChain. You should see it listed and ready to be installed. If not, you can always download and install from source.

To install from source, follow these steps:

  1. Download the file into your Downloads directory

  2. Open up a terminal window and change into the Downloads directory with the command cd ~/Downloads

  3. Unpack the file with the command tar xvzf pdfchain-XXX.tar.gz (Where XXX is the release number)

  4. Change into the newly-created directory with the command cd pdfchain-XXX (Again, where XXX is the release number)

  5. Issue the command ./configure

  6. Compile the software with the command make

  7. Install the software with the command sudo make install

You should now be able to start the software either from your desktop menu or with the command pdfchain.

Mastering a document

Clearly, the first thing you will want to do is to start mastering a PDF document. One thing you must understand is that PDFChain is not a tool that allows you to create a PDF document from scratch (for that, you will want to give Scribus a try or export from LibreOffice). With this tool you are mastering other PDF documents into a single document (or breaking a multi-page PDF document into single page documents).

How do you catenate with PDFChain? Easy. Here’s how.

  1. Open up the PDFChain tool

  2. From the Catenate tab in the main window (Figure 1), click the + button

  3. In your file manager, locate the files you want to use and add them

  4. Arrange the files in the correct order by selecting them (individually) and moving them up or down with the arrows

  5. Click the Save As button

  6. Give the file a new name

  7. Click Save.

You should now have a full document made up of your constituent pieces. The one caveat to this is that each of the original documents will begin on its own new page of the master document. You cannot make this a continuous document (with Page Z beginning right where Page Y left off).

Add as many pages for the master as you need. You can also remove and duplicate pages for the creation of the master document.
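PDFChain is a graphical front end for PDF Toolkit (pdftk), so the same catenation can be done straight from the terminal. A sketch, with hypothetical file names:

```shell
# Join three PDFs, in order, into a single master document.
pdftk chapter1.pdf chapter2.pdf chapter3.pdf cat output master.pdf
```

The order of the input files on the command line determines the page order in the master document, just like the up/down arrows in the GUI.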

What about the opposite direction? Say you have a long PDF document and you want to break it up into individual pages. With PDFChain, you can do that. Here’s how:

  1. Open PDFChain

  2. Click the Burst tab

  3. Click the Document Browse button

  4. Locate the document to be separated

  5. If necessary, change the Prefix label

  6. Click the Save As button

  7. In your file manager, locate the folder to house the individual files

  8. Click Open.

You should now find individual .pdf files in the folder.
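Under the hood this is pdftk’s burst operation; the equivalent command, with a hypothetical input file and output prefix, is:

```shell
# Split master.pdf into page_01.pdf, page_02.pdf, and so on.
pdftk master.pdf burst output page_%02d.pdf
```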

Adding a watermark

Say you want to add a watermark (background stamp) to your document. This is often used to place a company logo in the background of a document. To do this, you will need two things:

  • Master PDF document

  • Watermark image as PDF.

NOTE: If you don’t already have your watermark image as a PDF document, you can always open up the image in The Gimp and export the file as a PDF.

Once you have everything necessary for the watermark, here’s how you master the document:

  1. Open up PDFChain

  2. Click on the Background/Stamp tab (Figure 2)

  3. Click on the Document Browse button

  4. In your file manager, locate the file that will serve as the document

  5. Click on the Background/Stamp Browse button

  6. Locate the file that will serve as the watermark

  7. Click Save As

  8. Give the new master document a file name

  9. Click Save.

pdfchain 2

Open the newly mastered document to see the watermark on each page (Figure 3).

pdfchain 3
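The same stamping can be done with pdftk directly; a sketch with hypothetical file names:

```shell
# Place watermark.pdf behind the content of every page of master.pdf.
pdftk master.pdf background watermark.pdf output watermarked.pdf
```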


Now for the fun part. Before you save your master document, click on the Permissions button to reveal the Permissions pane (Figure 4).

pdfchain 4

In this pane you can add an owner and/or a user password, as well as add or remove permissions for each of the various options. Say, for example, you don’t want to allow the contents of the PDF to be modified. For this, de-select the Modify contents checkbox to disable the feature (if there’s a check by the option, it’s enabled). You can also select the encryption level for the document (None, RC4 40, or RC4 128).

Once you’ve set the options for the master document, click Save As, give the file a name, and click Save. Your new PDF will be ready with all the bells and whistles you just added/created.
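pdftk exposes the same password, encryption, and permission switches on the command line; a sketch, with an example owner password, that permits printing and nothing else:

```shell
# 128-bit RC4 encryption, owner password set, and only printing allowed.
pdftk master.pdf output locked.pdf encrypt_128bit allow Printing owner_pw s3cret
```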

Within the realm of productivity, Linux doesn’t disappoint. Tools like PDFChain not only make your office life easier, but give you more power and flexibility than you might have thought you had. Once you get the hang of PDFChain, you’ll be mastering new PDF documents like a pro.


Raspberry Pi: Getting started with the Internet of Things

This post was syndicated from: Raspberry Pi and was written by: Clive Beale. Original post: at Raspberry Pi

By 2020 there will be twelvety gigajillion of Internet Things all shouting at each other and sulking – Alice Bevel

The Internet of Things has been around for a while (since 1982, apparently), but it’s still a bit of a mystery to many. The concept of hooking up physical devices and letting them talk to each other is great, but how do you get started? How do you get useful data from them?

A Thing


I’ve been playing around with IoT this week and came across this great starter IoT project for the Pi, a people counting project by Agustin Pelaez. It’s an oldie but goodie and worth a mention because it’s as simple as it gets in terms of IoT—a sensor sends data to a server, which then presents the data in a nice, human-friendly form.

PIR connected to Pi

A £2 PIR connected directly to the Pi with just three wires ( Photo: Agustin Pelaez)

It’s also as cheap as chips—apart from a Pi you only need a passive infra-red sensor (PIR) as used in several of our resources. We love PIRs: they cost a couple of quid, connect directly to the Pi GPIO pins and they can be used for all sorts of useful and/or mad projects. The basic Ubidots account that stores and analyses the data is free. So this is an ideal IoT beginners’ project— cheap, straightforward and can be adapted to other projects. (Note that there is a bug in the code, peopleev = 0 should read peoplecount = 0.)

node-red and thingbox

Node-RED on ThingBox, controlling LEDs on the Pi via the web

If you want to dig further without too much pain, the ThingBox has an SD card image for the Pi that allows you to “Install Internet of Things technologies on a Raspberry Pi without any technical knowledge” and has a number of basic projects to get you started. It works with Ubidots out of the box and has a number of tutorials that will help you learn common IoT tools like Node-RED on the Pi (including a PIR counter project which is a nice compare-and-contrast to the Python-based one above).

I like the ThingBox a lot. It lowers the activation energy needed to get started with IoT on the Pi (actually, it makes it easy) and it allows all Pi owners access to what appears at first glance to be an arcane … Thing. The Internet of Things is fun, useful and empowering, and a natural extension to physical computing using the GPIO pins on the Pi. Hook up some Things today and have a play.

Another Thing


The post Getting started with the Internet of Things appeared first on Raspberry Pi.

Schneier on Security: Why the Current Section 215 Reform Debate Doesn’t Matter Much

This post was syndicated from: Schneier on Security and was written by: schneier. Original post: at Schneier on Security

The ACLU’s Chris Soghoian explains (time 25:52-30:55) why the current debate over Section 215 of the Patriot Act is just a minor facet of a large and complex bulk collection program by the FBI and the NSA.

There were 180 orders authorized last year by the FISA Court under Section 215 — 180 orders issued by this court. Only five of those orders relate to the telephony metadata program. There are 175 orders about completely separate things. In six weeks, Congress will either reauthorize this statute or let it expire, and we’re having a debate — to the extent we’re even having a debate — but the debate that’s taking place is focused on five of the 180, and there’s no debate at all about the other 175 orders.

Now, Senator Wyden has said there are other bulk collection programs targeted at Americans that the public would be shocked to learn about. We don’t know, for example, how the government collects records from Internet providers. We don’t know how they get bulk metadata from tech companies about Americans. We don’t know how the American government gets calling card records.

If we take General Hayden at face value — and I think you’re an honest guy — if the purpose of the 215 program is to identify people who are calling Yemen and Pakistan and Somalia, where one end is in the United States, your average Somali-American is not calling Somalia from their land line phone or their cell phone for the simple reason that AT&T will charge them $7.00 a minute in long distance fees. The way that people in the diaspora call home — the way that people in the Somali or Yemeni community call their family and friends back home — they walk into convenience stores and they buy prepaid calling cards. That is how regular people make international long distance calls.

So the 215 program that has been disclosed publicly, the 215 program that is being debated publicly, is about records to major carriers like AT&T and Verizon. We have not had a debate about surveillance requests, bulk orders to calling card companies, to Skype, to voice over Internet protocol companies. Now, if NSA isn’t collecting those records, they’re not doing their job. I actually think that that’s where the most useful data is. But why are we having this debate about these records that don’t contain a lot of calls to Somalia when we should be having a debate about the records that do contain calls to Somalia and do contain records of e-mails and instant messages and searches and people posting inflammatory videos to YouTube?

Certainly the government is collecting that data, but we don’t know how they’re doing it, we don’t know at what scale they’re doing it, and we don’t know with which authority they’re doing it. And I think it is a farce to say that we’re having a debate about the surveillance authority when really, we’re just debating this very narrow usage of the statute.

Further underscoring this point, yesterday the Department of Justice’s Office of the Inspector General released a redacted version of its internal audit of the FBI’s use of Section 215: “A Review of the FBI’s Use of Section 215 Orders: Assessment of Progress in Implementing Recommendations and Examination of Use in 2007 through 2009,” following the reports of the statute’s use from 2002-2005 and 2006. (Remember that the FBI and the NSA are inexorably connected here. The order to Verizon was from the FBI, requiring it to turn data over to the NSA.)

Details about legal justifications are all in the report (see here for an important point about minimization), but detailed data on exactly what the FBI is collecting — whether targeted or bulk — is left out. We read that the FBI demanded “customer information” (p. 36), “medical and educational records” (p. 39), “account information and electronic communications transactional records” (p. 41), and “information regarding other cyber activity” (p. 42). Some of this was undoubtedly targeted against individuals; some of it was undoubtedly bulk.

I believe bulk collection is discussed in detail in Chapter VI. The chapter title is redacted, as well as the introduction (p. 46). Section A is “Bulk Telephony Metadata.” Section B (pp. 59-63) is completely redacted, including the section title. There’s a summary in the Introduction (p. 3): “In Section VI, we update the information about the uses of Section 215 authority described [redacted word] Classified Appendices to our last report. These appendices described the FBI’s use of Section 215 authority on behalf of the NSA to obtain bulk collections of telephony metadata [long redacted clause].” Sounds like a comprehensive discussion of bulk collection under Section 215.

What’s in there? As Soghoian says, certainly other communications systems like prepaid calling cards, Skype, text messaging systems, and e-mails. Search history and browser logs? Financial transactions? The “medical and educational records” mentioned above? Probably all of them — and the data is in the report, redacted (p. 29) — but there’s nothing public.

The problem is that those are the pages Congress should be debating, and not the telephony metadata program exposed by Snowden.

TorrentFreak: Supergirl Pilot Leaks to Torrent Sites, Six Months Early

This post was syndicated from: TorrentFreak and was written by: Andy. Original post: at TorrentFreak

After making an appearance as far back as 1958, Supergirl was intended to be a female counterpart to DC Comics’ Superman, who first appeared 20 years earlier. While successful in her own right, she never quite reached the dizzy heights of the Clark Kent-based character.

This year, however, the world is braced for the return of Supergirl in a new CBS TV series. Featuring Melissa Benoist (Glee, Homeland, Law and Order) as Kara Zor-El, an alien who has hidden her powers since escaping from Krypton, the show will see her transform into Supergirl and “the superhero she was meant to be.”

After a commitment in September 2014, the series was officially picked up by CBS earlier this month. The pilot was scheduled to debut in November, but those plans have now massively unraveled after the episode leaked online, six months ahead of schedule.

Two ‘Scene’ release groups – DiMENSiON and LOL – competed to premiere the title first this morning, with the latter beating the former by around 90 seconds. LOL’s version is a convenient 400MB, so it is likely to become the most sought-after copy. On the other hand, DiMENSiON’s is more than 15 times the size, but for 1080p connoisseurs it’ll be worth the wait.

Although it’s certainly possible that the pilot contains hidden watermarks, as far as visible identifiers go the 46-minute episode looks very clean. As illustrated by the image below, there are no tell-tale ‘property of’ warnings that are regularly seen on ‘screener’ copies of leaked movies.


The leak of the pilot came as a complete surprise a couple of hours ago, so download stats on BitTorrent sites are currently a quite modest 25,000 or so. However, given the anticipated media snowball effect during the day, the number of downloads is likely to increase dramatically, probably to more than a million by this time tomorrow.

The Supergirl leak comes just weeks after the first four episodes of the new series of Game of Thrones leaked online. That event triggered a piracy craze that continues to this day.

Whether more episodes of Supergirl will leak online in the days to come is unknown but in any event it seems likely that CBS will try to stem the current tide. The company is a prolific sender of DMCA takedown notices and regularly sends more than 100,000 each week to Google alone.

Supergirl trailer

Source: TorrentFreak, for the latest info on copyright, file-sharing, torrent sites and anonymous VPN services.

TorrentFreak: Pirate Domain Seizures Are Easy in the United States

This post was syndicated from: TorrentFreak and was written by: Andy. Original post: at TorrentFreak

One of the biggest piracy-related stories of the year broke this week after Swedish authorities succeeded in their quest to take over two key Pirate Bay domains.

The court order, handed down Tuesday, will see the site’s domains fall under the control of the Swedish government, provided no appeal is filed in the coming weeks. It’s been a long and drawn-out process but, given the site’s history, one with an almost inevitable outcome.

Over in the United States and spurred on by ‘rogue’ sites such as TPB, much attention has been focused on depriving ‘pirate’ sites of their essential infrastructure, domains included. Just last week the MPAA and RIAA appeared before the House Judiciary Committee’s Internet subcommittee complaining that ICANN isn’t doing enough to deal with infringing domains.

Of course, having ICANN quickly suspend domains would be convenient, but entertainment industry groups aren’t completely helpless. In fact, yet another complaint filed in the United States by TV company ABS-CBN shows how easy it is to take control of allegedly infringing domains.

The architect of several recent copyright infringement complaints, in its latest action ABS-CBN requested assistance from the United States District Court for the Southern District of Florida.

The TV company complained that eleven sites (listed below) have been infringing its rights by offering content without permission. To protect its business moving forward ABS-CBN requested an immediate restraining order and after an ex parte hearing, District Court Judge William P. Dimitrouleas was happy to oblige.

In an order (pdf) handed down May 15 (one day after the complaint was filed) Judge Dimitrouleas acknowledges that the sites unlawfully “advertised, promoted, offered for distribution, distributed or performed” copyrighted works while infringing on ABS-CBN trademarks. He further accepted that the sites were likely to continue their infringement and cause “irreparable injury” to the TV company in the absence of protection by the Court.

Granting a temporary order (which will become preliminary and then permanent in the absence of any defense by the sites in question) the Judge restrained the site operators from further infringing on ABS-CBN copyrights and trademarks. However, it is the domain element that provokes the most interest.

In addition to ordering the sites’ operators not to transfer any domains until the Court advises, Judge Dimitrouleas ordered the registrars of the domains to transfer their certificates to ABS-CBN’s counsel. Registrars must then lock the domains and inform their registrants what has taken place.

Furthermore, the Whois privacy protection services active on the domains and used to conceal registrant identities are ordered to hand over the site operators’ personal details to ABS-CBN so that the TV company is able to send a copy of the restraining order. If no active email address is present in Whois records, ABS-CBN is allowed to contact the defendants via their websites.

Once this stage is complete the domain registrars are ordered to transfer the domains to a new registrar of ABS-CBN’s choosing. However, if the registrars fail to act within 24 hours, the TLD registries (.COM etc) must take overriding action within five days.

The Court also ordered ABS-CBN’s registrar to redirect any visitors to the domains to a specific URL, which is supposed to contain a copy of the order. At the time of writing, however, that URL is non-functional.

Also of interest is how the Court locks down attempts to get the sites running again. In addition to expanding the restraining order to any new domains the site operators may choose to move to, the Court grants ABS-CBN access to Google Webmaster Tools so that the company may “cancel any redirection of the domains that have been entered there by Defendants which redirect traffic to the counterfeit operations to a new domain name or website.”

The domains affected are:,,,,,,,,, and

Despite the order having been issued last Thursday, at the time of writing all but one of the domains remains operational. Furthermore, and in an interesting twist, and have already skipped to fresh domains operated by none other than the Swedish administered .SE registry.

Source: TorrentFreak, for the latest info on copyright, file-sharing, torrent sites and anonymous VPN services.

Linux How-Tos and Linux Tutorials: How to Manage Amazon S3 Files From Your Server-Side Code

This post was syndicated from: Linux How-Tos and Linux Tutorials and was written by: Jeff Cogswell. Original post: at Linux How-Tos and Linux Tutorials

aws console

In this tutorial, we’re going to look at managing Amazon S3 from your server-side code. S3 (which stands for Simple Storage Service) is a part of Amazon Web Services. It’s essentially a place to store files. Amazon stores the files in its massive data centers that are distributed throughout the planet. Your files are automatically backed up and duplicated to help ensure that they don’t get lost and are always accessible. You can keep the files private so that only you can download them, or public so that anyone can access them.

In terms of software development, S3 provides a nice place to store your files without having to bog down your own servers. The price is extremely low (pennies per GB of storage and transfer), which makes it a good option for decreasing demands on your own servers. There are APIs for adding S3 access to your applications, whether they run on web servers or as mobile apps.

In order to access S3, you use a RESTful service and access it through HTTP calls, even though you’re connecting from your server, and usually not with the browser. That’s not to say you can’t access it from the browser; however, there are security issues with using it from the browser. To access AWS, you need a private key. You don’t want to pass this private key around, and by accessing AWS from the browser there’s really no way to keep the private key hidden, which would allow other people to start using your S3 account without your permission. Instead, you’ll usually want to access AWS from your server, where you keep your private key, and then you’ll provide a browser interface into your server, not directly into AWS.

Now that said, we need to decide on server-side languages. My favorite language this year is node.js, so that’s what I’ll use. However, the concepts apply to other languages.

Generally when I learn a new RESTful API, I first try to learn the direct HTTP interface; and then after that, I decide whether to use an SDK. The idea is that often the RESTful API itself might be a bit cumbersome, and as such the developers of the API then provide SDKs in different languages. These SDKs provide classes and functions to simplify the use of the API. Sometimes, however, the APIs themselves are pretty easy to use, and I don’t even bother with the SDK. Other times, the SDKs really do help.

AWS has an API that’s not too difficult to use directly, except one part, in my opinion: The security. In order to call into the AWS API, you need to sign each HTTP call. The signature is essentially an encryption hash of the parameters you’re passing, along with your private key. This way Amazon can know the call likely came from the person or application it claims to come from. But along with the aforementioned parameters, you also provide a timestamp with your call. AWS will check that timestamp and if more than 15 minutes has passed since the timestamp, AWS will issue an error. In other words, API calls expire. When you construct an API call, you need to call it quickly, or AWS won’t accept it.

Adding in the encryption hash is a bit complex without some helper code. And that’s one reason I prefer to use the SDKs when using Amazon, rather than make direct HTTP calls. The SDKs include the code to sign your calls for you. So while I usually like to master the HTTP calls directly and then only use the SDK if I find it helps, in this case, I’m skipping right to the SDK.

Using the AWS SDK

Let’s get started. Create a directory to hold a test app in node.js. Now let’s add the AWS SDK. Type:

npm install aws-sdk

In order to use the SDK, you need to store your credentials. You can either store them in a separate configuration file, or you can use them right in your code. Recently a security expert I know got very upset about programmers storing keys right in their code; however, since this is just a test, I’m going to do that anyway. The aws-sdk documentation shows you how to store the credentials in a separate file.

To get the credentials, click on your name in the upper-right corner of the console; then in the drop down click Security Credentials. From here you can manage and create your security keys. You’ll need to expand the Access Keys section and obtain or create both an Access Key ID and a Secret Access Key.

Using your favorite text editor, create a file called test1.js. In this code, we’re going to create an S3 bucket. S3 bucket names need to be unique among all users. Since I just created a bucket called s3demo2015, you won’t be able to. Substitute a name that you want to try, and you may get to see the error code you get back. Add this code to your file:

var aws = require('aws-sdk');

aws.config.update({
  accessKeyId: 'abcdef',
  secretAccessKey: '123456',
  region: 'us-west-2'
});

but replace abcdef with your access key ID, and 123456 with your secret access key.

The object returned by the require(‘aws-sdk’) call contains several functions that serve as constructors for different AWS services. One is for S3. Add the following line to the end of your file to call the S3 constructor and save the new object in a variable called s3:

var s3 = new aws.S3();

And then add this line so we can inspect the members of our new s3 object:

console.log(s3);
Now run what we have so far:

node test1.js

You should see an output that includes several members containing data such as your credentials and the endpoints. Since this is a RESTful interface, it makes use of URLs that are known as endpoints. These are the URLs that your app will be calling to manipulate your S3 buckets.

Since we don’t really need this output to use S3, go ahead and remove the console.log line.

Create a bucket

Next we’re going to add code to create a bucket. You could just use the provided endpoints and make an HTTP request yourself. But, since we’re using the SDK and it provides wrapper functions, we’ll use the wrapper functions.

The function for creating a bucket is simply called createBucket. (This function is part of the prototype for the aws.S3 constructor, which is why it didn’t show up in the console.log output.) Because node.js is asynchronous, to call the createBucket function, you provide a callback function along with your other parameters. Add the following code to your source file, but don’t change the name s3demo2015 to your own bucket; this way you can see the error you’ll get if you try to create a bucket that already exists:

s3.createBucket({Bucket: 's3demo2015'}, function(err, resp) {
    if (err) {
        console.log(err);
    }
});
Now run what you have, again with:

node test1.js

You’ll see the output of the error. We’re just writing out the raw object; in an actual app, you’ll probably want to return just the error message to your user, which is err.message, and then write the full err object to your error log files. (You do keep log files, right?)

Also, if you put in the wrong keys, instead of getting a message about the bucket already existing, you’ll see an error about the security key being wrong.

Now change the s3demo2015 string to a name you want to actually create, and update the code to print out the response:

s3.createBucket({Bucket: 's3demo2015'}, function(err, resp) {
    if (err) {
        console.log(err);
    } else {
        console.log(resp);
    }
});

Run it again, and if you found a unique name, you’ll get back a JavaScript object with a single member:

{ Location: '' }

This object contains the URL for your newly created bucket. Save this URL, because you’ll be using it later on in this tutorial.

Comment the bucket code

Now we could put additional code inside the callback where we use the bucket we created. But from a practical standpoint, we might not want that. In your own apps, you might only be occasionally creating a bucket, but mostly using a bucket that you’ve already created. So what we’re going to do is comment out the bucket creation code, saving it so we can find it later as an example, but not using it again here:

var aws = require('aws-sdk');

aws.config.update({
    accessKeyId: 'abcdef',
    secretAccessKey: '123456',
    region: 'us-west-2'
});

var s3 = new aws.S3();

/*
s3.createBucket({Bucket: 's3demo2015'}, function(err, resp) {
    if (err) {
        console.log(err);
    } else {
        for (name in resp) {
            console.log(name + ': ' + resp[name]);
        }
    }
});
*/
Add code to the bucket

Now we’ll add code that simply uses the bucket we created. S3 isn’t particularly complex; it’s mainly a storage system, providing ways to save files, read files, delete files, and list the files in a bucket. You can also list buckets and delete buckets. There are security options, as well, which you can configure, such as specifying whether a file is accessible to everyone or not. You can find the whole list here.

Let’s add code that will upload a file and make the file accessible to everyone. In S3 parlance, files are objects. The function we use for uploading is putObject. Look at this page for the options available to this function. Options are given as values in an object; you then pass this object as a parameter to the function. Members of an object in JavaScript don’t technically have an order (JavaScript tends to maintain the order in which they’re created, but you shouldn’t rely on that), so you can provide these members in any order you like. The first two listed in the documentation are required: the name of the bucket and the name to assign to the file. The file name is given as Key:

{
  Bucket: 'bucketname',
  Key: 'filename'
}

You also include in this object the security information if you want to make this file public. By default, files can only be read by yourself. You can make files readable or writable by the public; typically you won’t want to make them writable by the public. But we’ll make this file readable by the public. The member of the parameter object for specifying the security is called ACL, which stands for Access Control List (as opposed to the ligament in our knee that we tear). The ACL is a string specifying the privileges. You can see the options in the documentation. The one we want is ‘public-read’.

To really get the most out of S3, I encourage you to look through all the options for putObject. This will provide you with a good understanding of what all you can do with S3. Along with that, read over the general S3 documentation, not just the SDK documentation. One option I’ve used is the StorageClass. Sometimes I just need to share a large file with a few people, and don’t need the cloud storage capabilities of S3. To save money, I’ll save the file with StorageClass set to ‘REDUCED_REDUNDANCY’. The file isn’t saved redundantly across the AWS cloud, and costs less to store. All of these can be configured through this parameter object.
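Pulling those options together, a putObject parameter object might look like this; the bucket name, key, and body here are placeholders:

```javascript
// Example putObject parameters; the values are placeholders.
var params = {
  Bucket: 's3demo2015',                 // required: the bucket to write to
  Key: 'docs/report.pdf',               // required: the object (file) name
  ACL: 'public-read',                   // readable by everyone; default is private
  StorageClass: 'REDUCED_REDUNDANCY',   // cheaper, less redundant storage
  Body: 'the file contents go here'     // the data to upload
};
console.log(Object.keys(params).length);  // 5
```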

Upload a file to S3

Now, let’s do it. We’ll upload a file to S3. The file we upload in this case will just be some HTML stored in a text string. That way we can easily view the file in a browser once it’s uploaded. Here’s the code to add after the commented-out code:

var html = '<html><body><h1>Welcome to S3</h1></body></html>';
s3.putObject( {Bucket:'s3demo2015', Key:'myfile.html', ACL:'public-read', ContentType:'text/html', Body: html}, function(err, resp) {
    if (err) {
        console.log(err);
    } else {
        console.log(resp);
    }
});

Notice there’s one additional parameter that I included; I added this after running the test for this article, and I’ll explain it in a moment.

If all goes well, you should get back a string similar to this:

{ ETag: '"a8c49e10d2a2bbe0c3e662ee7557e79e"' }

The ETag is an identifier that can be used to determine whether the file has changed. This is typically used in web browsers. A browser may want to determine if a file has changed, and if not, just display the file from cache. But if it has changed, then re-download the file.

To make this work, the browser stores, along with the cached file, the ETag it received when it originally downloaded the file. Before reusing the cached copy, the browser asks the web server for the file’s current ETag. If the server’s ETag differs from the stored one, the browser knows the file has changed and re-downloads it. If the ETags match, the file hasn’t changed, so the browser skips the download and uses what’s in the cache. This speeds up the web in general and minimizes the amount of bandwidth used.
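In HTTP this check is usually folded into a single round trip: the browser sends the cached ETag in an If-None-Match header, and the server answers 304 Not Modified when nothing changed. Here’s a minimal sketch of that logic; the helper names are my own illustration, not part of the S3 SDK:

```javascript
// Build the header for a conditional GET: "only send the body if the
// file's ETag no longer matches the one I have cached."
function conditionalGetHeaders(cachedETag) {
    return { 'If-None-Match': cachedETag };
}

// Interpret the server's answer to that conditional request.
function interpretConditionalResponse(statusCode) {
    // 304 Not Modified: the ETag matched, so reuse the cached copy.
    // Anything else (typically 200): the file changed; use the new body.
    return statusCode === 304 ? 'use-cache' : 'download';
}
```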

Read the uploaded file

Now that the file is uploaded, you can read it. Just to be sure, you can go over to the AWS console and look at the bucket. In the image above, you can see the file I uploaded with the preceding code, as well as a couple of other files I created while preparing this article, including one I mention later.

Now let’s look at the file itself. This is where we need the URL that we got back when we created the bucket. In our example we named our file myfile.html. So let’s grab it by combining the URL and the filename. Open up a new browser tab and put this in your address bar, replacing the s3demo2015 with the name of your bucket:

(You can also get this URL by clicking on the file in the AWS console and then clicking the Properties button.) You’ll see your HTML-formatted file, as in the following image:

welcome window on Amazon S3
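You can also read the object back through the SDK instead of the browser. Here’s a sketch under the same assumptions as before (an already-configured AWS.S3 client named s3, and the bucket and file names used in this article):

```javascript
// Build the parameter object for reading an object back from S3.
function getObjectParams(bucket, key) {
    return { Bucket: bucket, Key: key };
}

// Usage (assumes `s3` is a configured AWS.S3 client):
// s3.getObject(getObjectParams('s3demo2015', 'myfile.html'), function(err, data) {
//     if (err) { return console.log(err); }
//     console.log(data.Body.toString('utf8'));  // data.Body is a Buffer
// });
```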

Use the right content type

Now for that final parameter I had to add after I first created this example. When I first made the example, and pointed my browser to the URL, instead of displaying the HTML, the browser downloaded the file. Since I’ve been doing web development for a long time, I knew what that meant: I had the content type set wrong. The content type basically tells the browser what type the file is so that the browser knows what to do with it. I checked the documentation and saw that the correct parameter object’s member is called ContentType. So I added the normal content type for HTML, which is ‘text/html’.

The content type is especially important if you’re uploading CSS and JavaScript files. If you don’t set the content type correctly and then load the CSS or JavaScript from an HTML file, the browser won’t process the files as CSS and JavaScript respectively. So always make sure you have the correct content type.
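One way to avoid getting this wrong is a small lookup helper keyed on file extension. The mapping below is my own minimal sketch covering just a handful of types, not anything provided by the SDK:

```javascript
// Pick a ContentType from the file extension; fall back to a generic
// binary type when the extension isn't recognized.
function contentTypeFor(filename) {
    var types = {
        '.html': 'text/html',
        '.css':  'text/css',
        '.js':   'application/javascript',
        '.png':  'image/png'
    };
    var dot = filename.lastIndexOf('.');
    var ext = dot === -1 ? '' : filename.slice(dot).toLowerCase();
    return types[ext] || 'application/octet-stream';
}

// Usage (assumes a configured `s3` client):
// s3.putObject({ Bucket: 's3demo2015', Key: 'style.css', Body: css,
//                ACL: 'public-read',
//                ContentType: contentTypeFor('style.css') }, callback);
```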

Also note that we’re not limited to text files; we can save binary files as well. If we were calling the RESTful API manually, this would get a little tricky, but Amazon did a good job creating the SDK and it correctly uploads our files even if they’re binary. That means we can read a binary file in through node’s filesystem (fs) module, get back a Buffer of binary data, and pass that Buffer straight into the putObject function, and it will get uploaded correctly.

Just to be sure, I compiled a small C++ program that writes out the number 1. The compiler created a binary file called a.out. I modified the preceding code to read in the file and upload it to S3. I then used wget to pull down the uploaded file and it matched the original; I was also able to execute it, showing that it did indeed upload as a binary file rather than get corrupted through some conversion to text.


S3 is quite easy to use programmatically. The key is first knowing how S3 works, including how to manage your files through buckets and how to control the security of the files. Then find the SDK for your language of choice, install it, and practice creating buckets and uploading files. JavaScript is asynchronous by design (because it needed to be, so that web pages wouldn’t freeze during network calls such as AJAX requests), and that carries forward to node. Other languages use different approaches, but the SDK calls will be similar.

Then once you master the S3 SDK, you’ll be ready to add S3 storage to your apps. And after that, you can move on to other AWS services. Want to explore more AWS services? Share your thoughts in the comments.

Schneier on Security: The Logjam (and Another) Vulnerability against Diffie-Hellman Key Exchange

This post was syndicated from: Schneier on Security and was written by: schneier. Original post: at Schneier on Security

Logjam is a new attack against the Diffie-Hellman key-exchange protocol used in TLS. Basically:

The Logjam attack allows a man-in-the-middle attacker to downgrade vulnerable TLS connections to 512-bit export-grade cryptography. This allows the attacker to read and modify any data passed over the connection. The attack is reminiscent of the FREAK attack, but is due to a flaw in the TLS protocol rather than an implementation vulnerability, and attacks a Diffie-Hellman key exchange rather than an RSA key exchange. The attack affects any server that supports DHE_EXPORT ciphers, and affects all modern web browsers. 8.4% of the Top 1 Million domains were initially vulnerable.

Here’s the academic paper.

One of the problems with patching the vulnerability is that it breaks things:

On the plus side, the vulnerability has largely been patched thanks to consultation with tech companies like Google, and updates are available now or coming soon for Chrome, Firefox and other browsers. The bad news is that the fix rendered many sites unreachable, including the main website at the University of Michigan, which is home to many of the researchers that found the security hole.

This is a common problem with version downgrade attacks; patching them makes you incompatible with anyone who hasn’t patched. And it’s the vulnerability the media is focusing on.

Much more interesting is the other vulnerability that the researchers found:

Millions of HTTPS, SSH, and VPN servers all use the same prime numbers for Diffie-Hellman key exchange. Practitioners believed this was safe as long as new key exchange messages were generated for every connection. However, the first step in the number field sieve — the most efficient algorithm for breaking a Diffie-Hellman connection — is dependent only on this prime. After this first step, an attacker can quickly break individual connections.
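To sketch why a shared prime enables this, recall finite-field Diffie-Hellman over a public prime p and generator g (a standard description, not taken from the paper itself):

A = g^a mod p,    B = g^b mod p,    K = g^(ab) = B^a = A^b mod p

Recovering K from the public values A and B means computing a discrete logarithm modulo p. In the number field sieve for discrete logs, the expensive sieving and linear-algebra stages depend only on p itself, not on any particular A or B; once that one-time precomputation is done for a widely shared prime, each individual connection falls to a comparatively cheap final “descent” step.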

The researchers believe the NSA has been using this attack:

We carried out this computation against the most common 512-bit prime used for TLS and demonstrate that the Logjam attack can be used to downgrade connections to 80% of TLS servers supporting DHE_EXPORT. We further estimate that an academic team can break a 768-bit prime and that a nation-state can break a 1024-bit prime. Breaking the single, most common 1024-bit prime used by web servers would allow passive eavesdropping on connections to 18% of the Top 1 Million HTTPS domains. A second prime would allow passive decryption of connections to 66% of VPN servers and 26% of SSH servers. A close reading of published NSA leaks shows that the agency’s attacks on VPNs are consistent with having achieved such a break.

Remember James Bamford’s 2012 comment about the NSA’s cryptanalytic capabilities:

According to another top official also involved with the program, the NSA made an enormous breakthrough several years ago in its ability to cryptanalyze, or break, unfathomably complex encryption systems employed by not only governments around the world but also many average computer users in the US. The upshot, according to this official: “Everybody’s a target; everybody with communication is a target.”


The breakthrough was enormous, says the former official, and soon afterward the agency pulled the shade down tight on the project, even within the intelligence community and Congress. “Only the chairman and vice chairman and the two staff directors of each intelligence committee were told about it,” he says. The reason? “They were thinking that this computing breakthrough was going to give them the ability to crack current public encryption.”

And remember Director of National Intelligence James Clapper’s introduction to the 2013 “Black Budget“:

Also, we are investing in groundbreaking cryptanalytic capabilities to defeat adversarial cryptography and exploit internet traffic.

It’s a reasonable guess that this is what both Bamford’s source and Clapper are talking about. It’s an attack that requires a lot of precomputation — just the sort of thing a national intelligence agency would go for.

But that requirement also speaks to its limitations. The NSA isn’t going to put this capability at collection points like Room 641A at AT&T’s San Francisco office: the precomputation table is too big, and the sensitivity of the capability is too high. More likely, an analyst identifies a target through some other means, and then looks for data by that target in databases like XKEYSCORE. Then he sends whatever ciphertext he finds to the Cryptanalysis and Exploitation Services (CES) group, which decrypts it if it can using this and other techniques.

Ross Anderson wrote about this earlier this month, almost certainly quoting Snowden:

As for crypto capabilities, a lot of stuff is decrypted automatically on ingest (e.g. using a “stolen cert”, presumably a private key obtained through hacking). Else the analyst sends the ciphertext to CES and they either decrypt it or say they can’t.

The analysts are instructed not to think about how this all works. This quote also applied to NSA employees:

Strict guidelines were laid down at the GCHQ complex in Cheltenham, Gloucestershire, on how to discuss projects relating to decryption. Analysts were instructed: “Do not ask about or speculate on sources or methods underpinning Bullrun.”

I remember the same instructions in documents I saw about the NSA’s CES.

Again, the NSA has put surveillance ahead of security. It never bothered to tell us that many of the “secure” encryption systems we were using were not secure. And we don’t know what other national intelligence agencies independently discovered and used this attack.

The good news is now that we know reusing prime numbers is a bad idea, we can stop doing it.

TorrentFreak: Court Orders Israeli ISPs to Block Popcorn Time Websites

This post was syndicated from: TorrentFreak and was written by: Ernesto. Original post: at TorrentFreak

Branded a “Netflix for Pirates,” the Popcorn Time app quickly gathered a user base of millions of people over the past year.

The application has some of the major media giants worried, including Netflix which sees the pirate app as a serious competitor to its business.

Since Popcorn Time is powered by BitTorrent it is hard to stop the downloads directly, but copyright holders can go after the websites that offer the application. In Israel the local anti-piracy outfit ZIRA went down this route.

The group, which represents several media companies, applied for an ex parte injunction ordering local Internet providers to block access to the websites of several Popcorn Time forks.

This week the Tel Aviv court granted the application, arguing that the application does indeed violate the rights of copyright holders.

The copyright holders are pleased with the outcome, which shows that services such as Popcorn Time are infringing even though they don’t host any files themselves.

“The Popcorn Time software provides users with a service to stream and download content on the Internet, including Israeli movies and foreign movies and TV series with English subtitles, without having any permission from copyright holders to do so,” attorney Presenti told local media.

The ISP blockades will prevent people from downloading Popcorn Time in the future. However, applications that have been downloaded already will continue to work for now.

To address this, ZIRA’s lawyers say they are considering additional steps, including the option to block the ports Popcorn Time uses. While that may be effective, it may also block other traffic, especially if the app switches to more common ports such as port 80.

Israel is the second country to block access to Popcorn Time websites. Last month the UK High Court issued a similar order, which also targeted the domain names of various APIs the applications use.

Source: TorrentFreak, for the latest info on copyright, file-sharing, torrent sites and anonymous VPN services.

lcamtuf's blog: Lesser-known features of afl-fuzz

This post was syndicated from: lcamtuf's blog and was written by: Michal Zalewski. Original post: at lcamtuf's blog

AFL is designed to be simple to use, but there are quite a few advanced, time-saving features that may be easy to overlook. So, here are several useful tricks that aren’t covered in the README:

  • Test case postprocessing: need to fix up checksums or length fields in a particular file format? AFL supports modular postprocessors that can take care of this for you. See experimental/post_library/ for sample code and other tips.

  • Deferred forkserver: stuck with a binary that initializes a lot of stuff before actually getting to the input data? When using clang, you can avoid this CPU overhead by instructing AFL to clone the process from an already-initialized image. It’s simpler than it sounds – have a look at llvm_mode/README.llvm for advice.

  • Helpful stats: in addition to using afl-plot to generate pretty progress graphs, you can also directly parse <out_dir>/fuzzer_stats for machine-readable statistics on any background tasks. The afl-whatsup script is a simple demo of that.

  • Faster resume: if you don’t care about detecting non-deterministic behavior in tested binaries, set AFL_NO_VAR_CHECK=1 before resuming afl-fuzz jobs. It can speed things up by a factor of ten. While you’re at it, be sure to see docs/perf_tips.txt for other performance tips.

  • Heterogeneous parallelization: the parallelization mechanism described in docs/parallel_fuzzing.txt can be very easily used to co-fuzz several different parsers using a shared corpus, or to seamlessly couple afl-fuzz to any other guided tools – say, symbolic execution frameworks.

  • Third-party tools: have a look at docs/sister_projects.txt for a collection of third-party tools that help you manage multiple instances of AFL, simplify crash triage, allow you to fuzz network servers or clients, and add support for languages such as Python or Go.

  • Minimizing stuff: when you have a crashing test case, afl-tmin will work even with non-instrumented binaries – so you can use it to shrink and simplify almost anything, even if it has nothing to do with AFL.


Krebs on Security: mSpy Denies Breach, Even as Customers Confirm It

This post was syndicated from: Krebs on Security and was written by: BrianKrebs. Original post: at Krebs on Security

Last week, KrebsOnSecurity broke the news that sensitive data apparently stolen from hundreds of thousands of customers of mobile spyware maker mSpy had been posted online. mSpy has since been quoted twice by other publications denying a breach of its systems. Meanwhile, this blog has contacted multiple people whose data was published to the deep Web, all of whom confirmed they were active or former mSpy customers.

mSpy told BBC News it had been the victim of a “predatory attack” by blackmailers, but said it had not given in to demands for money. mSpy also told the BBC that claims the hackers had breached its systems and stolen data were false.

“There is no data of 400,000 of our customers on the web,” a spokeswoman for the company told the BBC. “We believe to have become a victim of a predatory attack, aimed to take advantage of our estimated commercial achievements.”

Let’s parse that statement a bit further. No, the stolen records aren’t on the Web; rather, they’ve been posted to various sites on the Deep Web, which is only accessible using Tor. Also, I don’t doubt that mSpy was the target of extortion attempts; the fact that the company did not pay the extortionist is likely what resulted in its customers’ data being posted online.

How am I confident of this, considering mSpy has still not responded to my requests for comment? I spent the better part of the day today pulling customer records from the hundreds of gigabytes of data leaked from mSpy. I spoke with multiple customers whose payment and personal data — and that of their kids, employees and significant others — were included in the huge cache. All confirmed they are or were recently paying customers of mSpy.

Joe Natoli, director of a home care provider in Arizona, confirmed what was clear from looking at the leaked data — that he had paid mSpy hundreds of dollars a month for a subscription to monitor all of the mobile devices distributed to employees by his company. Natoli said all employees agree to the monitoring when they are hired, but that he only used mSpy for approximately four months.

“The value proposition for the cost didn’t work out,” Natoli said.

Katherine Till‘s information also was in the leaked data. Till confirmed that she and her husband had paid mSpy to monitor the mobile device of their 14-year-old daughter, and were still a paying customer as of my call to her.

Till added that she was unaware of a breach, and was disturbed that mSpy might try to cover it up.

“This is disturbing, because who knows what someone could do with all that data from her phone,” Till said, noting that she and her husband had both discussed the monitoring software with their daughter. “As parents, it’s hard to keep up and teach kids all the time what they can and can’t do. I’m sure there are lots more people like us that are in this situation now.”

Another user whose financial and personal data was in the cache asked not to be identified, but sheepishly confirmed that he had paid mSpy to secretly monitor the mobile device of a “friend.”


News of the mSpy breach prompted renewed calls from Sen. Al Franken for outlawing products like mSpy, which the Minnesota Democrat refers to as “stalking apps.” In a letter (PDF) sent this week to the U.S. Justice Department and Federal Trade Commission, Franken urged the agencies to investigate mSpy, whose products he called “deeply troubling” and “nothing short of terrifying” when “in the hands of a stalker or abusive intimate partner.”

Last year, Franken reintroduced The Location Privacy Protection Act of 2014, legislation that would outlaw the development, operation, and sale of such products.

U.S. regulators and law enforcers have taken a dim view of companies that offer mobile spyware services like mSpy. In September 2014, U.S. authorities arrested 31-year-old Hammad Akbar, the CEO of a Lahore-based company that makes a spyware app called StealthGenie. The FBI noted that while the company advertised StealthGenie’s use for “monitoring employees and loved ones such as children,” the primary target audience was people who thought their partners were cheating. Akbar was charged with selling and advertising wiretapping equipment.

“Advertising and selling spyware technology is a criminal offense, and such conduct will be aggressively pursued by this office and our law enforcement partners,” U.S. Attorney Dana Boente said in a press release tied to Akbar’s indictment.

Akbar pleaded guilty to the charges in November 2014, and according to the Justice Department he is “the first-ever person to admit criminal activity in advertising and selling spyware that invades an unwitting victim’s confidential communications.”

TorrentFreak: Google Fiber Sends Automated Piracy ‘Fines’ to Subscribers

This post was syndicated from: TorrentFreak and was written by: Ernesto. Original post: at TorrentFreak

Every month Google receives tens of millions of DMCA takedown requests from copyright holders, most of which are directed at its search engine.

However, with Google Fiber being rolled out in more cities, notices targeting allegedly pirating Internet subscribers are becoming more common as well.

These include regular takedown notices but also the more controversial settlement demands sent by companies such as Rightscorp and CEG TEK.

Instead of merely alerting subscribers that their connections have been used to share copyright infringing material, these notices serve as automated fines, offering subscribers settlements ranging from $20 to $300.

The scheme uses the standard DMCA takedown process which means that the copyright holder doesn’t have to go to court or even know who the recipient is. In fact, the affected subscriber is often not the person who shared the pirated file.

To protect customers against these practices many ISPs including Comcast, Verizon and AT&T have chosen not to forward settlement demands. However, information received by TF shows that Google does take part.

Over the past week we have seen settlement demands from Rightscorp and CEG TEK which were sent to Google Fiber customers. In an email, Google forwards the notice with an additional warning that repeated violations may result in a permanent disconnection.

“Repeated violations of our Terms of Service may result in remedial action being taken against your Google Fiber account, up to and including possible termination of your service,” Google Fiber writes.


Below Google’s message is the notification with the settlement demand, which in this example was sent on behalf of music licensing outfit BMG. In the notice, the subscriber is warned over possible legal action if the dispute is not settled.

“BMG will pursue every available remedy including injunctions and recovery of attorney’s fees, costs and any and all other damages which are incurred by BMG as a result of any action that is commenced against you,” the notice reads.


Facing such threatening language many subscribers are inclined to pay up, which led some to accuse the senders of harassment and abuse. In addition, several legal experts have spoken out against this use of the DMCA takedown process.

Mitch Stoltz, staff attorney at the Electronic Frontier Foundation (EFF), previously told us that Internet providers should carefully review what they’re forwarding to their users. Under U.S. law they are not required to forward DMCA notices, and forwarding these automated fines may not be in the best interest of consumers.

“In the U.S., ISPs don’t have any legal obligation to forward infringement notices in their entirety. An ISP that cares about protecting its customers from abuse should strip out demands for money before forwarding infringement notices. Many do this,” Stoltz said.

According to Stoltz these settlement demands are often misleading or inaccurate, suggesting that account holders are responsible for all use of their Internet connections.

“The problem with notices demanding money from ISP subscribers is that they’re often misleading. They often give the impression that the person whose name is on the ISP bill is legally responsible for all infringement that might happen on the Internet connection, which is simply not true,” he notes.

While Google is certainly not the only ISP that forwards these notices, it is the biggest name involved. TF asked Google why it has decided to forward the notices in their entirety, but unfortunately the company did not respond to our request for comment.

Source: TorrentFreak, for the latest info on copyright, file-sharing, torrent sites and anonymous VPN services.

[$] PostgreSQL: the good, the bad, and the ugly

This post was syndicated from: and was written by: corbet. Original post: at

The PostgreSQL development community is working toward the 9.5 release, currently planned for the third quarter of this year. Development activity is at peak levels as the planned feature freeze for this release approaches. While this activity is resulting in the merging of some interesting functionality, including the long-awaited “upsert” feature, it is also revealing some fault lines within the community. The fact that PostgreSQL lacks the review resources needed to keep up with its natural rate of change has been understood for years; many other projects suffer from the same problem. But the pressures on PostgreSQL seem to be becoming more acute, leading to concerns about fairness in the community and the durability of the project’s cherished reputation for high-quality software.

TorrentFreak: Netflix Needs BitTorrent Expert to Implement P2P Streaming

This post was syndicated from: TorrentFreak and was written by: Ernesto. Original post: at TorrentFreak

With roughly 60 million subscribers globally, Netflix is a giant in the world of online video entertainment.

The service moves massive amounts of data and is credited with consuming a third of all Internet traffic in North America during peak hours.

Netflix’s data use is quite costly for the company and also results in network congestion and stream buffering at times. However, thanks to P2P-powered streaming these problems may soon be a thing of the past.

In a job posting late April, Netflix says it is looking to expand its team with the addition of a Senior Software Engineer. While that’s nothing new, the description reveals information on the company’s P2P-streaming plans.

“Our team is evaluating up-and-coming content distribution technologies, and we are seeking a highly talented senior engineer to grow the knowledge base in the area of peer-to-peer technologies and lead the technology design and prototyping effort,” the application reads.

The software engineer will be tasked with guiding the project from start to finish. This includes the design and architecture phase, implementation, testing, the internal release and final evaluation.

“This is a great opportunity to enhance your full-stack development skills, and simultaneously grow your knowledge of the state of the art in peer-to-peer content distribution and network optimization techniques,” Netflix writes.

A few weeks ago Netflix told its shareholders that it sees the BitTorrent-powered piracy app Popcorn Time as a serious threat. However, the job application makes it clear that BitTorrent can be used for legal distribution as well.

Among the qualification requirements, Netflix lists experience with BitTorrent and other P2P protocols. Having contributed to the open source torrent streaming tool WebTorrent or a similar project is listed as a preferred job qualification.

In other words, existing Popcorn Time developers are well-suited candidates for the position.

– You have experience with peer-to-peer protocols such as the BitTorrent protocol

– You have strong experience in the development of peer-to-peer protocols and software

– You have contributed to a major peer-to-peer open source product such as WebTorrent

– You have strong experience in the development of web-based video applications and tools

Moving to P2P-assisted streaming appears to be a logical step. It would make it possible to stream video at higher quality than today, and it would offer a significant cost reduction.

BitTorrent inventor Bram Cohen will be happy to see that Netflix is considering using his technology. He previously said that Netflix’s video quality is really terrible, adding that BitTorrent-powered solutions are far superior.

“The fact is that by using BitTorrent it’s possible to give customers a much better experience with much less cost than has ever been possible before. It’s really not being utilized properly and that’s really unfortunate,” Cohen said.

While the job posting is yet more evidence that Netflix is seriously considering a move to P2P-powered streaming, it’s still unclear whether the new technology will ever see the light of day.

The job posting

Source: TorrentFreak, for the latest info on copyright, file-sharing, torrent sites and anonymous VPN services.

Schneier on Security: Spy Dust

This post was syndicated from: Schneier on Security and was written by: schneier. Original post: at Schneier on Security

Used by the Soviet Union during the Cold War:

A defecting agent revealed that powder containing both luminol and a substance called nitrophenyl pentadien (NPPD) had been applied to doorknobs, the floor mats of cars, and other surfaces that Americans living in Moscow had touched. They would then track or smear the substance over every surface they subsequently touched.

Krebs on Security: Security Firm Redefines APT: African Phishing Threat

This post was syndicated from: Krebs on Security and was written by: BrianKrebs. Original post: at Krebs on Security

A security firm made headlines earlier this month when it boasted it had thwarted plans by organized Russian cyber criminals to launch an attack against multiple US-based banks. But a closer look at the details behind that report suggests the actors in question were relatively unsophisticated Nigerian phishers who’d simply registered a bunch of new fake bank Web sites.

The report was released by Colorado Springs, Colo.-based security vendor root9B, which touts a number of former National Security Agency (NSA) and Department of Defense cybersecurity experts among its ranks. The report attracted coverage by multiple media outlets, including Fox News, Politico, SC Magazine and The Hill. root9B said it had unearthed plans by a Russian hacking gang known variously as the Sofacy Group and APT28. APT is short for “advanced persistent threat,” and it’s a term much used among companies that sell cybersecurity services in response to breaches from state-funded adversaries in China and Russia that are bent on stealing trade secrets via extremely stealthy attacks.

The cover art for the root9B report.

“While performing surveillance for a root9B client, the company discovered malware generally associated with nation state attacks,” root9B CEO Eric Hipkins wrote of the scheme, which he said targeted financial institutions such as Bank of America, Regions Bank and TD Bank, among others.

“It is the first instance of a Sofacy or other attack being discovered, identified and reported before an attack occurred,” Hipkins said. “Our team did an amazing job of uncovering what could have been a significant event for the international banking community. We’ve spent the past three days informing the proper authorities in Washington and the UAE, as well as the CISOs at the financial organizations.”

However, according to an analysis of the domains reportedly used by the criminals in the planned attack, perhaps root9B should clarify what it means by APT. Unless the company is holding back key details about their research, their definition of APT can more accurately be described as “African Phishing Threat.”

The report correctly identifies several key email addresses and physical addresses that the fraudsters used in common across all of the fake bank domains. But root9B appears to have scant evidence connecting the individual(s) who registered those domains to the Sofacy APT gang. Indeed, a reading of their analysis suggests their sole connection is that some of the fake bank domains used a domain name server previously associated with Sofacy activity: carbon2go[dot]com (warning: malicious host that will likely set off antivirus alerts).

The problem with that linkage is that although carbon2go[dot]com was in fact at one time associated with activity emanating from the Sofacy APT group, Sofacy is hardly the only bad actor using that dodgy name server. There is plenty of other badness unrelated to Sofacy that calls Carbon2go home for its DNS operations, including these clowns.

From what I can tell, the vast majority of the report documents activity stemming from Nigerian scammers who have been conducting run-of-the-mill bank phishing scams for almost a decade now and have left quite a trail.

For example, most of the wordage in this report from root9B discusses fake domains registered to one or two email addresses.

Each of these email addresses has long been associated with phishing sites erected by apparent Nigerian scammers. They are tied to this Facebook profile for a Showunmi Oluwaseun, who lists his job as CEO of a rather fishy-sounding organization called Rolexzad Fishery Nig. Ltd.

The domain rolexad[dot]com was flagged as early as 2008 by a volunteer group that seeks to shut down phishing sites — particularly those emanating from Nigerian scammers (hence the reference to section 419 of the Nigerian criminal code, which outlaws various confidence scams and frauds). That domain’s records also reference the above-mentioned email addresses. Here’s another phishy bank domain registered by the same scammer, dating all the way back to 2005!
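The kind of linkage described above, grouping domains by shared registrant details, is easy to reproduce from WHOIS-style data. A minimal sketch in Python, using made-up domains and addresses in place of the real ones from the report:

```python
from collections import defaultdict

# Hypothetical WHOIS-style records: (domain, registrant_email) pairs.
# These are illustrative placeholders, not the actual addresses at issue.
records = [
    ("fakebank-one.example", "registrant-a@example.com"),
    ("fakebank-two.example", "registrant-a@example.com"),
    ("fakebank-three.example", "registrant-b@example.com"),
]

def cluster_by_registrant(records):
    """Group domains that share a registrant email address."""
    clusters = defaultdict(list)
    for domain, email in records:
        clusters[email].append(domain)
    # Only clusters with more than one domain suggest a common actor.
    return {email: doms for email, doms in clusters.items() if len(doms) > 1}

print(cluster_by_registrant(records))
# {'registrant-a@example.com': ['fakebank-one.example', 'fakebank-two.example']}
```

Of course, shared registration details only tie domains to a common registrant; they say nothing by themselves about which group that registrant belongs to, which is exactly the gap in the root9B attribution.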

Bob Zito, a spokesperson for root9B, said “the root9B team stands by the report as 100 percent accurate and it has been received very favorably by the proper authorities in Washington (and others in the cyber community, including other cyber firms).”

I wanted to know if I was alone in finding fault with the root9B report, so I reached out to Jaime Blasco, vice president and chief scientist at AlienVault — one of the security firms that first published findings on the Sofacy/APT28 group back in October 2014. Blasco called the root9B research “very poor” (full disclosure: AlienVault is one of several advertisers on this blog).

“Actually, there isn’t a link between what root9B published and Sofacy activity,” he said. “The only link is there was a DNS server that was used by a Sofacy domain and the banking stuff root9B published. It doesn’t mean they are related by any means. I’m really surprised that it got a lot of media attention due to the poor research they did, and [their use] of [terms] like ‘zeroday hashes’ in the report really blew my mind. Apart from that it really looks like a ‘marketing report/we want media coverage asap,’ since days after that report they published their Q1 financial results and probably that increased the value of their penny stocks.”

Blasco’s comments may sound harsh, but it is true that root9B CEO Joe Grano bought large quantities of the firm’s stock roughly a week before issuing this report. On May 14, 2015, root9B issued its first quarter 2015 financial results.

There is an old adage: If the only tool you have is a hammer, you tend to treat everything as if it were a nail. In this case, if all you do is APT research, then you’ll likely see APT actors everywhere you look. 

SANS Internet Storm Center, InfoCON: green: Logjam – vulnerabilities in Diffie-Hellman key exchange affect browsers and servers using TLS, (Wed, May 20th)

This post was syndicated from: SANS Internet Storm Center, InfoCON: green and was written by: SANS Internet Storm Center, InfoCON: green. Original post: at SANS Internet Storm Center, InfoCON: green

There’s a new vulnerability in town… The new bug, dubbed Logjam, is a cousin of Freak, but it’s in the basic design of TLS itself, meaning all Web browsers, and some email servers, are vulnerable. [1] According to the article, Internet-security experts crafted a fix for a previously undisclosed bug in security tools used by all modern Web browsers, but deploying the fix could break the Internet for thousands of websites.

The Logjam attack can allow an attacker to significantly weaken the encrypted connection between a user and a Web or email server. In the researchers’ words: “We have uncovered several weaknesses in how Diffie-Hellman key exchange has been deployed…”

We’re starting to see news coverage from other outlets, and we’re sure more analysis will emerge. However, at this time your best source for more information on this bug is at

For now, ensure you have the most recent version of your browser installed, and check for updates frequently. If you’re a system administrator, please review the Guide to Deploying Diffie-Hellman for TLS at
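Much of the deployment guidance comes down to the size of the Diffie-Hellman group a server negotiates. A rough sketch of that size check in Python (the thresholds follow the general guidance; the 512-bit modulus below is a stand-in for an export-grade parameter, not a real DH prime):

```python
def dh_strength(prime_bits):
    """Rough classification of a Diffie-Hellman group by modulus size."""
    if prime_bits <= 512:
        return "export-grade: practically breakable"
    if prime_bits < 1024:
        return "weak"
    if prime_bits < 2048:
        return "borderline: 1024-bit groups may be within reach of well-funded attackers"
    return "acceptable: 2048 bits or more"

# A stand-in 512-bit modulus (not a real DH prime) to show the bit-length check.
p_export = (1 << 511) | 1
print(p_export.bit_length(), dh_strength(p_export.bit_length()))
# 512 export-grade: practically breakable
```

In practice you would pull the negotiated group out of a TLS handshake (e.g., with `openssl s_client`) rather than construct one by hand; the point is simply that anything at or below 512 bits is the export-grade case Logjam downgrades connections to.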

Brad Duncan
ISC Handler and Security Researcher at Rackspace



(c) SANS Internet Storm Center. Creative Commons Attribution-Noncommercial 3.0 United States License.

SANS Internet Storm Center, InfoCON: green: Upatre/Dyre malspam – Subject: eFax message from “unknown”, (Wed, May 20th)

This post was syndicated from: SANS Internet Storm Center, InfoCON: green and was written by: SANS Internet Storm Center, InfoCON: green. Original post: at SANS Internet Storm Center, InfoCON: green


Yesterday on 2015-05-19, I attended a meeting of my local chapter of the Information Systems Security Association (ISSA). During the meeting, one of the speakers discussed different levels of incident response by Security Operations Center (SOC) personnel. For non-targeted issues like botnet-based malicious spam (malspam) infecting a Windows host, you probably won’t waste valuable time investigating every little detail. In most cases, you’ll probably start the process to re-image the infected computer and move on. Other suspicious events await, and they might reveal a more serious, targeted threat.

However, we still record information about these malspam campaigns. Traffic patterns evolve, and the changes should be documented.

Today’s example of malspam

Searching through my employer’s blocked spam filters, I found the following Upatre/Dyre wave of malspam:

  • Date/Time: 2015-05-19 from 12:00 AM to 5:47 AM CST
  • Number of messages: 20
  • Sender (spoofed):
  • Subject: eFax message from “unknown”

    As shown in the above image, these messages were tailored for the recipients. You’ll also notice some of the recipient email addresses contain random characters and numbers. Nothing new here. It’s just one of the many waves of malspam our filters block every day. I reported a similar wave earlier this month [1].

    The attachment is a typical example of Upatre, much like we’ve seen before. Let’s see what this malware does in a controlled environment.

    Indicators of compromise (IOC)

    I ran the malware on a physical host and generated the following traffic:

    • 2015-05-19 15:16:12 UTC – port 80 – – GET /
    • 2015-05-19 15:16:13 UTC – port 13410 – SYN packet to server, no response
    • 2015-05-19 15:16:16 UTC – port 443 – two SYN packets to server, no response
    • 2015-05-19 15:16:58 UTC – port 443 – two SYN packets to server, no response
    • 2015-05-19 15:17:40 UTC – port 443 – SSL traffic – approx 510 KB sent from server to infected host
    • 2015-05-19 15:17:56 UTC – port 3478 – UDP STUN traffic to:
    • 2015-05-19 15:17:58 UTC – port 443 – SSL traffic – approx 256 KB sent from server to infected host
    • 2015-05-19 15:18:40 UTC – port 13409 – SYN packet to server, no response
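
Since documenting these evolving traffic patterns is the whole point, it can help to capture entries like the list above in a structured form. A minimal Python sketch (the log lines below are abbreviated copies of the entries above, with hosts omitted as in the original):

```python
import re

# Abbreviated copies of the traffic notes above (destination hosts omitted).
log_lines = [
    "2015-05-19 15:16:12 UTC - port 80 - GET /",
    "2015-05-19 15:16:13 UTC - port 13410 - SYN packet to server, no response",
    "2015-05-19 15:17:40 UTC - port 443 - SSL traffic - approx 510 KB sent",
]

PATTERN = re.compile(
    r"(?P<date>\d{4}-\d{2}-\d{2}) (?P<time>\d{2}:\d{2}:\d{2}) UTC - "
    r"port (?P<port>\d+) - (?P<note>.*)"
)

def parse_iocs(lines):
    """Turn free-text traffic notes into structured records for documentation."""
    out = []
    for line in lines:
        m = PATTERN.match(line)
        if m:
            out.append({"timestamp": f"{m['date']} {m['time']}",
                        "port": int(m["port"]),
                        "note": m["note"]})
    return out

for rec in parse_iocs(log_lines):
    print(rec["port"], rec["note"])
```

Structured records like these make it much easier to diff one wave of malspam against the next, which is exactly the kind of small change worth tracking.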

    In my last post about Upatre/Dyre, we saw Upatre-style HTTP GET requests to but no HTTP response from the server [1]. That’s been the case for quite some time now.
    Shown above: Attempted TCP connections to the same IP address now reset (RST) by the server.

    How can we tell this is Upatre?

    As I’ve mentioned before, is a service run by one of my fellow Rackspace employees [2]. By itself, it’s not malicious. Unfortunately, malware authors use this and similar services to check an infected computer’s IP address.

    What alerts trigger on this traffic?

    Related files on the infected host include:

    • C:\Users\username\AppData\Local\PwTwUwWTWcqBhWG.exe (Dyre)
    • C:\Users\username\AppData\Local\ne9bzef6m8.dll
    • C:\Users\username\AppData\Local\Temp\~TP95D5.tmp (encrypted or otherwise obfuscated)
    • C:\Users\username\AppData\Local\Temp\Jinhoteb.exe (where Upatre copied itself after it was run)

    Some Windows registry changes for persistence:

    • Key name: HKEY_CURRENT_USER\Software\Microsoft\Windows\CurrentVersion\Run
    • Key name: HKEY_USERS\S-1-5-21-52162474-342682794-3533990878-1000\Software\Microsoft\Windows\CurrentVersion\Run
    • Value name: GoogleUpdate
    • Value type: REG_SZ
    • Value data: C:\Users\username\AppData\Local\PwTwUwWTWcqBhWG.exe
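The persistence entry above illustrates a common heuristic: auto-start values whose target lives under a user’s AppData folder deserve a second look, since legitimate Run-key entries rarely point there. A minimal Python sketch of that check (the Run-key contents below are hypothetical, modeled on the entry above, so the check works on plain data rather than a live registry):

```python
import re

def suspicious_run_entries(run_values):
    """Flag Run-key values whose target lives under a user's AppData folder,
    a location legitimate auto-start entries rarely use."""
    appdata = re.compile(r"\\AppData\\(Local|Roaming)\\", re.IGNORECASE)
    return {name: path for name, path in run_values.items() if appdata.search(path)}

# Hypothetical Run-key contents modeled on the persistence entry above.
run_values = {
    "GoogleUpdate": r"C:\Users\username\AppData\Local\PwTwUwWTWcqBhWG.exe",
    "SomeVendorTray": r"C:\Program Files\SomeVendor\tray.exe",
}
print(suspicious_run_entries(run_values))
```

Note how the malware names its value “GoogleUpdate” to blend in; the path, not the name, is what gives it away. On a live Windows host you would enumerate the same keys with the standard-library `winreg` module instead of a hard-coded dict.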

    A pcap of the infection traffic is available at:

    A zip file of the associated Upatre/Dyre malware is available at:

    The zip file is password-protected with the standard password. If you don’t know it, email and ask.

    Final words

    This was yet another wave of Upatre/Dyre malspam. No real surprises, but it’s always interesting to note the small changes in these campaigns.

    Brad Duncan, Security Researcher at Rackspace
    Blog: – Twitter: @malware_traffic



    (c) SANS Internet Storm Center. Creative Commons Attribution-Noncommercial 3.0 United States License.

SANS Internet Storm Center, InfoCON: green: False Positive? resolving to Microsoft Blackhole IP, (Tue, May 19th)

This post was syndicated from: SANS Internet Storm Center, InfoCON: green and was written by: SANS Internet Storm Center, InfoCON: green. Original post: at SANS Internet Storm Center, InfoCON: green

Thanks to Xavier for bringing this to our attention. It looks like a couple of days ago, a legitimate Microsoft host name started to resolve to a Microsoft IP that is commonly used for blackholes that Microsoft operates:

$ host is an alias for is an alias for has address

Connecting to a blackhole IP like this is often an indicator of compromise, and many IDS signatures will alert on such traffic. For example: [**] [1:2016101:2] ET TROJAN DNS Reply Sinkhole – Microsoft – [**] [Classification: A Network Trojan was detected] [Priority: 1] …
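The check behind such signatures is simple: compare a name’s resolved addresses against a list of known sinkhole IPs. A minimal Python sketch (the sinkhole set below uses RFC 5737 documentation addresses as placeholders, not the actual Microsoft blackhole IP):

```python
# Known sinkhole addresses would come from threat-intel feeds or IDS rulesets;
# this set is a placeholder using RFC 5737 documentation addresses.
SINKHOLE_IPS = {"192.0.2.1", "192.0.2.2"}

def is_sinkholed(resolved_ips, sinkholes=SINKHOLE_IPS):
    """Return the subset of a name's resolved addresses that land in known sinkholes."""
    return set(resolved_ips) & sinkholes

hits = is_sinkholed(["192.0.2.1", "203.0.113.9"])
print(sorted(hits))
# ['192.0.2.1']
```

In a real monitoring script the `resolved_ips` list would come from `socket.getaddrinfo`; a non-empty result is exactly the condition the IDS rule above alerts on — though, as this incident shows, it can also be a false positive when the blackhole operator misroutes its own legitimate names.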

It is not yet clear what process causes the connection to this IP on port 443, but a number of other users are reporting similar issues. For example, see here:

At this point, I am assuming that this is some kind of configuration error at Microsoft.

Johannes B. Ullrich, Ph.D.

(c) SANS Internet Storm Center. Creative Commons Attribution-Noncommercial 3.0 United States License.