TorrentFreak: Busted Pirate Told to Get 200K YouTube Hits or Face Huge Fine

This post was syndicated from: TorrentFreak and was written by: Andy. Original post: at TorrentFreak

Over the past 15 years, countless individuals have faced financial punishments due to online copyright infringement offenses. But what happens when a case is won by copyright holders but the alleged pirate simply cannot pay?

Answers to that question vary, but over in the Czech Republic the people at the Business Software Alliance (BSA) have come up with the most unusual solution so far to settle their case with a long-time pirate.

The case involves an individual known as Jakub F who was accused by the BSA of pirating software including Microsoft’s Windows. Over many years he uploaded links to various forums which allowed others to download content from file-hosting sites. The BSA took exception to that and tracked him down. Eventually the police ended up at Jakub’s house, confiscating his computer, DVDs and an external hard drive.

The case went to trial and in September Jakub was found guilty, with a district court handing him a three-year suspended sentence and ordering the confiscation of his equipment. But for Jakub the matter was not over yet.

Various companies involved in the lawsuit including Microsoft, HBO, Sony Music and Twentieth Century Fox estimated that Jakub had caused them around $373,000 in damages, with Microsoft alone calling for $223,000. However, it appears that the court wasn’t prepared to accept the companies’ somewhat hypothetical calculations.

Whether the companies ever intended to claw back these sums remains unclear but it now transpires that the plaintiffs and Jakub F reached agreement on what they describe as an “alternative sentence.”

Instead of paying out a small fortune to his tormentors at the BSA, Jakub F agreed to star in an anti-piracy PSA about his life as a pirate. The video, which is embedded below and titled “The Story of My Piracy”, is being promoted on a site ostensibly set up by Jakub himself, with the aim of deterring others from following in his footsteps.


“I had to start this site because for eight years I spread pirated software and then they caught me. I thought that I wasn’t doing anything wrong. I thought that it didn’t hurt the big companies. I didn’t even do it for the money, I did it for fun,” Jakub begins.

“I felt that I meant something in the warez community. I was convinced that I was too small a fish for someone to come after me. But eventually, they got me. The investigators even came for me at work.”

The video is a professional affair starring Jakub himself. Set to a dramatic soundtrack, Jakub talks about the fun he had on warez forums, sharing content for the pleasure of others. However, it all came crashing down when he was told that copyright holders wanted hundreds of thousands of dollars in damages, damages he could never pay.

But while Jakub appears to have kept up his side of the bargain so far, the BSA say that the 30-year-old’s fate lies with how popular the video becomes. Unless the video gets 200,000 views on YouTube, there’s a suggestion that a huge fine will become payable.

“If I promote my story and my video gets at least 200 thousand views, I will only serve the general part of my sentence,” Jakub explains.

“In the video I play myself and this is really my story. I shot the video with a professional firm. Sharing is how this started and sharing is how I would like my story to end up.”

At the time of writing Jakub’s video has more than 80,000 views, so he needs 120,000 more to clear his debt. Needless to say, this is one propaganda film he’ll be hoping doesn’t get pirated.

Source: TorrentFreak, for the latest info on copyright, file-sharing, torrent sites and ANONYMOUS VPN services.

Software Freedom Conservancy Launches 2015 Fundraiser

This post was syndicated and was written by: ris.

Software Freedom Conservancy has announced a major fundraising effort. “Pointing to the difficulty of relying on corporate funding while pursuing important but controversial issues, like GPL compliance, Conservancy has structured its fundraiser to increase individual support. The organization needs at least 750 annual Supporters to continue its basic community services and 2500 to avoid hibernating its enforcement efforts. If Conservancy does not meet its goals, it will be forced to radically restructure and wind down a substantial portion of its operations.”

Application Management Blog: AWS CloudFormation Security Best Practices

This post was syndicated from: Application Management Blog and was written by: George Huang. Original post: at Application Management Blog

The following is a guest post by Hubert Cheung, Solutions Architect.

AWS CloudFormation makes it easy for developers and systems administrators to create and manage a collection of related AWS resources by provisioning and updating them in an orderly and predictable way. Many of our customers use CloudFormation to control all of the resources in their AWS environments so that they can succinctly capture changes, perform version control, and manage costs in their infrastructure, among other activities.

Customers often ask us how to control permissions for CloudFormation stacks. In this post, we share some of the best security practices for CloudFormation, which include using AWS Identity and Access Management (IAM) policies, CloudFormation-specific IAM conditions, and CloudFormation stack policies. Because most CloudFormation deployments are executed from the AWS command line interface (CLI) and SDK, we focus on using the AWS CLI and SDK to show you how to implement the best practices.

Limiting Access to CloudFormation Stacks with IAM

With IAM, you can securely control access to AWS services and resources by using policies and users or roles. CloudFormation leverages IAM to provide fine-grained access control.

As a best practice, we recommend that you limit service and resource access through IAM policies by applying the principle of least privilege. The simplest way to do this is to limit specific API calls to CloudFormation. For example, you may not want specific IAM users or roles to update or delete CloudFormation stacks. The following sample policy allows access to all CloudFormation APIs, but denies UpdateStack and DeleteStack on your production stack:
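A sketch of such a policy, with a placeholder ARN for the production stack (substitute your own region, account ID and stack name):

```json
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": "cloudformation:*",
            "Resource": "*"
        },
        {
            "Effect": "Deny",
            "Action": [
                "cloudformation:UpdateStack",
                "cloudformation:DeleteStack"
            ],
            "Resource": "arn:aws:cloudformation:<region>:<account-id>:stack/<production-stack-name>/*"
        }
    ]
}
```

Because an explicit Deny always overrides an Allow in IAM, the second statement wins for the production stack even though the first statement grants all CloudFormation actions.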


We know that IAM policies often need to allow the creation of particular resources, but you may not want them to be created as part of CloudFormation. This is where CloudFormation’s support for IAM conditions comes in.

IAM Conditions for CloudFormation

There are three CloudFormation-specific IAM conditions that you can add to your IAM policies:

  • cloudformation:TemplateURL
  • cloudformation:ResourceTypes
  • cloudformation:StackPolicyURL

With these three conditions, you can ensure that API calls for stack actions, such as create or update, use a specific template or are limited to specific resources, and that your stacks use a stack policy, which prevents stack resources from unintentionally being updated or deleted during stack updates.

Condition: TemplateURL

The first condition, cloudformation:TemplateURL, lets you specify where the CloudFormation template for a stack action, such as create or update, resides and enforce that it be used. In an IAM policy, it would look like this:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Deny",
            "Action": [
                "cloudformation:CreateStack",
                "cloudformation:UpdateStack"
            ],
            "Resource": "*",
            "Condition": {
                "StringNotEquals": {
                    "cloudformation:TemplateURL": [
                        "<approved-template-url>"
                    ]
                }
            }
        },
        {
            "Effect": "Deny",
            "Action": [
                "cloudformation:CreateStack",
                "cloudformation:UpdateStack"
            ],
            "Resource": "*",
            "Condition": {
                "Null": {
                    "cloudformation:TemplateURL": "true"
                }
            }
        }
    ]
}

The first statement ensures that for all CreateStack or UpdateStack API calls, users must use the specified template. The second ensures that all CreateStack or UpdateStack API calls must include the TemplateURL parameter. From the CLI, your calls need to include the --template-url parameter:

aws cloudformation create-stack --stack-name cloudformation-demo --template-url

Condition: ResourceTypes

CloudFormation also allows you to control the types of resources that are created or updated in templates with an IAM policy. The CloudFormation API accepts a ResourceTypes parameter. In your API call, you specify which types of resources can be created or updated. However, to use the new ResourceTypes parameter, you need to modify your IAM policies to enforce the use of this particular parameter by adding in conditions like this:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Deny",
            "Action": [
                "cloudformation:CreateStack",
                "cloudformation:UpdateStack"
            ],
            "Resource": "*",
            "Condition": {
                "ForAllValues:StringLike": {
                    "cloudformation:ResourceTypes": [
                        "AWS::IAM::*"
                    ]
                }
            }
        },
        {
            "Effect": "Deny",
            "Action": [
                "cloudformation:CreateStack",
                "cloudformation:UpdateStack"
            ],
            "Resource": "*",
            "Condition": {
                "Null": {
                    "cloudformation:ResourceTypes": "true"
                }
            }
        }
    ]
}

From the CLI, your calls need to include a --resource-types parameter. A call to create your stack will look like this:

aws cloudformation create-stack --stack-name cloudformation-demo --template-url --resource-types="[AWS::IAM::Group, AWS::IAM::User]"

Depending on the shell, the command might need to be enclosed in quotation marks as follows; otherwise, you’ll get a “No JSON object could be decoded” error:

aws cloudformation create-stack --stack-name cloudformation-demo --template-url --resource-types='["AWS::IAM::Group", "AWS::IAM::User"]'

The ResourceTypes conditions ensure that CloudFormation creates or updates the right resource types and templates with your CLI or API calls. In the first example, our IAM policy would have blocked the API calls because the example included AWS::IAM resources. If our template included only AWS::EC2::Instance resources, the CLI command would look like this and would succeed:

aws cloudformation create-stack --stack-name cloudformation-demo --template-url --resource-types='["AWS::EC2::Instance"]'

The third condition is the StackPolicyURL condition. Before we explain how that works, we need to provide some additional context about stack policies.

Stack Policies

Often, the worst disruptions are caused by unintentional changes to resources. To help in mitigating this risk, CloudFormation provides stack policies, which prevent stack resources from unintentionally being updated or deleted during stack updates. When used in conjunction with IAM, stack policies provide a second layer of defense against both unintentional and malicious changes to your stack resources.

The CloudFormation stack policy is a JSON document that defines what can be updated as part of a stack update operation. To set or update the policy, your IAM users or roles must first have the ability to call the cloudformation:SetStackPolicy action.
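A minimal IAM statement granting that ability might look like this (in real use you would scope Resource down to specific stacks rather than "*"):

```json
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": "cloudformation:SetStackPolicy",
            "Resource": "*"
        }
    ]
}
```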

You apply the stack policy directly to the stack; note that it is not an IAM policy. By default, setting a stack policy protects all stack resources: any update is denied unless the policy contains an explicit Allow. This means that if you want to restrict only a few resources, you must explicitly allow all updates by including an Allow on the resource "*" and a Deny for the specific resources.

For example, stack policies are often used to protect a production database because it contains live data. Depending on the field that’s changing, there are times when the entire database could be replaced during an update. In the following example, the stack policy explicitly denies attempts to update your production database:

{
  "Statement" : [
    {
      "Effect" : "Deny",
      "Action" : "Update:*",
      "Principal": "*",
      "Resource" : "LogicalResourceId/ProductionDB_logical_ID"
    },
    {
      "Effect" : "Allow",
      "Action" : "Update:*",
      "Principal": "*",
      "Resource" : "*"
    }
  ]
}
You can generalize your stack policy to include all RDS DB instances or any given ResourceType. To achieve this, you use conditions. However, note that because we used a wildcard in our example, the condition must use the "StringLike" condition and not "StringEquals":

{
  "Statement" : [
    {
      "Effect" : "Deny",
      "Action" : "Update:*",
      "Principal": "*",
      "Resource" : "*",
      "Condition" : {
        "StringLike" : {
          "ResourceType" : ["AWS::RDS::DBInstance", "AWS::AutoScaling::*"]
        }
      }
    },
    {
      "Effect" : "Allow",
      "Action" : "Update:*",
      "Principal": "*",
      "Resource" : "*"
    }
  ]
}

For more information about stack policies, see Prevent Updates to Stack Resources.

Finally, let’s ensure that all of your stacks have an appropriate pre-defined stack policy. To address this, we return to IAM policies.


From within your IAM policy, you can ensure that every CloudFormation stack has a stack policy associated with it upon creation with the StackPolicyURL condition:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Deny",
            "Action": [
                "cloudformation:SetStackPolicy"
            ],
            "Resource": "*",
            "Condition": {
                "ForAnyValue:StringNotEquals": {
                    "cloudformation:StackPolicyUrl": [
                        "https://s3.amazonaws.com/<your-bucket>/sampledenypolicy.json"
                    ]
                }
            }
        },
        {
            "Effect": "Deny",
            "Action": [
                "cloudformation:CreateStack",
                "cloudformation:UpdateStack"
            ],
            "Resource": "*",
            "Condition": {
                "ForAnyValue:StringNotEquals": {
                    "cloudformation:StackPolicyUrl": [
                        "https://s3.amazonaws.com/<your-bucket>/sampledenypolicy.json"
                    ]
                }
            }
        },
        {
            "Effect": "Deny",
            "Action": [
                "cloudformation:CreateStack",
                "cloudformation:UpdateStack",
                "cloudformation:SetStackPolicy"
            ],
            "Resource": "*",
            "Condition": {
                "Null": {
                    "cloudformation:StackPolicyUrl": "true"
                }
            }
        }
    ]
}

This policy ensures that a specific stack policy URL must be supplied any time SetStackPolicy is called. Similarly, for any create and update stack operation, it ensures that the StackPolicyURL parameter is set to the sampledenypolicy.json document in S3 and that a StackPolicyURL is always specified. From the CLI, a create-stack command would look like this:

aws cloudformation create-stack --stack-name cloudformation-demo --parameters ParameterKey=Password,ParameterValue=CloudFormationDemo --capabilities CAPABILITY_IAM --template-url --stack-policy-url

Note that if you specify a new stack policy on a stack update, CloudFormation uses the existing stack policy: it uses the new policy only for subsequent updates. For example, if your current policy is set to deny all updates, you must run a SetStackPolicy command to change the stack policy to the one that allows updates. Then you can run an update command against the stack. To update the stack we just created, you can run this:

aws cloudformation set-stack-policy --stack-name cloudformation-demo --stack-policy-url

Then you can run the update:

aws cloudformation update-stack --stack-name cloudformation-demo --parameters ParameterKey=Password,ParameterValue=NewPassword --capabilities CAPABILITY_IAM --template-url --stack-policy-url

The IAM policy that we used ensures that a specific stack policy is applied to the stack any time a stack is updated or created.


CloudFormation provides a repeatable way to create and manage related AWS resources. By using a combination of IAM policies, users, and roles, CloudFormation-specific IAM conditions, and stack policies, you can ensure that your CloudFormation stacks are used as intended and minimize accidental resource updates or deletions.

You can learn more about this topic and other CloudFormation best practices in the recording of our re:Invent 2015 session, (DVO304) AWS CloudFormation Best Practices, and in our documentation.

Security advisories for Wednesday

This post was syndicated and was written by: ris.

Debian has updated libcommons-collections3-java (unsanitized input data) and symfony (two vulnerabilities).

Debian-LTS has updated putty (memory corruption).

Fedora has updated grub2 (F23: Secure Boot circumvention), krb5 (F21: multiple vulnerabilities), libpng10 (F23; F22; F21: two vulnerabilities), sblim-sfcb (F23; F22; F21: denial of service), and wpa_supplicant (F22: denial of service).

Slackware has updated pcre (code execution).

SUSE has updated linux-3.12.32 (SLELP12: two vulnerabilities), linux-3.12.36 (SLELP12: two vulnerabilities), linux-3.12.38 (SLELP12: two vulnerabilities), linux-3.12.39 (SLELP12: two vulnerabilities), linux-3.12.43 (SLELP12: two vulnerabilities), linux-3.12.44 (SLELP12: two vulnerabilities), and linux-3.12.44 (SLELP12: two vulnerabilities).

Ubuntu has updated icedtea-web (15.10, 15.04, 14.04: applet execution) and python-django (15.10, 15.04, 14.04, 12.04: information disclosure).

Krebs on Security: Breach at IT Automation Firm LANDESK

This post was syndicated from: Krebs on Security and was written by: BrianKrebs. Original post: at Krebs on Security

LANDESK, a company that sells software to help organizations securely and remotely manage their fleets of desktop computers, servers and mobile devices, alerted employees last week that a data breach may have exposed their personal information. But LANDESK employees contacted by this author say the breach may go far deeper for the company and its customers.

The South Jordan, Utah-based LANDESK makes and markets software that helps organizations manage all users, platforms and devices from a single digital dashboard. The company’s software specializes in automating and integrating IT systems management, endpoint security management, service management, IT asset management, and mobile device management.

On Nov. 18, 2015, LANDESK sent a letter to current and former employees warning of an intrusion, stating that “it is possible that, through this compromise, hackers obtained personal information, including names and Social Security numbers, of some LANDESK employees and former Wavelink employees.”

LANDESK declined to answer questions for this story. But the company did share a written statement that mirrors much of the text in the letter sent to affected employees:

“We recently became aware of some unusual activity on our systems and immediately initiated safeguards as a precaution and began an investigation. As part of our ongoing investigation in partnership with a leading computer forensics firm, we recently learned that a small amount of personally identifiable information for a limited number of our employees may have been accessible during the breach. While no data compromises of personally identifiable information are confirmed at this point, we have reached out with information and security resources to individuals who may have been affected. The security of our networks is our top priority and we are acting accordingly.”

“The few employees who may have been affected were notified promptly, and at this point the impact appears to be quite small.”

According to a LANDESK employee who spoke on condition of anonymity, the breach was discovered quite recently, but system logs show the attackers first broke into LANDESK’s network 17 months ago, in June 2014.

The employee, we’ll call him “John,” said the company only noticed the intrusion when several co-workers started complaining of slow Internet speeds. A LANDESK software developer later found that someone in the IT department had been logging into his build server, so he asked them about it. The IT department said it knew nothing of the issue.

John said further investigation showed that the attackers were able to compromise the passwords of the global IT director in Utah and another domain administrator from China.

“LANDESK has found remnants of text files with lists of source code and build servers that the attackers compiled,” John said. “They know for a fact that the attackers have been slowly [archiving] data from the build and source code servers, uploading it to LANDESK’s web servers, and downloading it.”

The implications are potentially far reaching. This breach happened more than a year and a half ago, during which time several versions and fixes of LANDESK software have been released. LANDESK has thousands of customers in all areas of commerce. By compromising LANDESK and embedding a back door directly in its source code, the attackers could have access to a large number of computers and servers worldwide.

The wholesale theft of LANDESK source code also could make it easier for malware and exploit developers to find security vulnerabilities in the company’s software.

A LANDESK spokesperson would neither confirm nor deny the date of the breach or the source code theft, saying only that the investigation into the breach is ongoing and that the company “won’t comment on speculation.”

Schneier on Security: NSA Lectures on Communications Security from 1973

This post was syndicated from: Schneier on Security and was written by: schneier. Original post: at Schneier on Security

Newly declassified: “A History of U.S. Communications Security (Volumes I and II),” the David G. Boak Lectures, National Security Agency (NSA), 1973. (The document was initially declassified in 2008. We just got a whole bunch of additional material declassified. Both versions are in the document, so you can compare and see what was kept secret seven years ago.)

Raspberry Pi: Alex’s Nixie Clock

This post was syndicated from: Raspberry Pi and was written by: Liz Upton. Original post: at Raspberry Pi

Liz: Alex is ten years old. He lives in Texas. He shared his most recent school project with us. It’s a great project and a fantastically clear tutorial: we thought you ought to see it too.

My Mom wanted a Nixie Clock, and I needed to do a project for school. I had a Raspberry Pi I wasn’t using, so I built a Nixie Clock. It took me about 2 months.

My Dad ordered some Nixie tubes and chips from Russia, and bought a 170V power supply to power the Nixie tubes. The first thing to do was to test them:


To start with I installed a tube, chip and power supply onto a breadboard. The chip has 4 input lines (A, B, C, and D) that are used to tell it which number to light up. For example, in binary 7 is 0111, so you need to set input A to high, B to high, C to high and D to low (A=1, B=2, C=4 and D=8) to light up the number 7. I tested the first one by using a jumper cable to connect the 4 inputs to either 0V (low) or 5V (high).

Once I knew the first tube and chip worked, I wrote a program on the Raspberry Pi to test them. I used 4 GPIO pins, wired to pins A, B, C and D on the chip. My program would loop through the numbers 0 to 9, and turn the pins on/off by converting to binary using logical ANDs.

For example – for the number 7:

  • 7 AND 1 = 1, so pin A would be set high.
  • 7 AND 2 = 2, so pin B would be set high.
  • 7 AND 4 = 4, so pin C would be set high.
  • 7 AND 8 = 0, so pin D would be set low.

Once I had the program working, it was easy to test all the chips and Nixie Tubes. Everything worked, except one tube – the 3 and the 9 would light up at the same time. So I used this for the first digit for the hours, since that only ever needs to show 1.

The Program:

When the Raspberry Pi starts up, it automatically starts my clock program.

I wrote the clock program in C using the geany editor.

When the program starts, first it sets all the digital pins to OUTPUT and LOW to make sure everything is off.

Then I turn on pin 0, which turns on the high voltage power supply using a transistor.

Then I test the clock, which makes the hours show 1 to 12, and minutes 0-59.

Then I start the loop. Once every second I do the following:

  • Ask the computer the time (if it is connected to the internet, it will always show the right time).
  • The hours come back as a number between 0 and 23, so if the hour is bigger than 12, I subtract 12 from it.
  • Then I break out the hour into 2 digits, and the minutes into 2 digits. The first digit is the quotient of the hour divided by 10. The second digit is the remainder of the hour divided by 10. Then I do the same for the minutes.
  • For each number, I have to convert it into binary (for example 7 is 0111 in binary). Each number has up to 4 wires, each wire is for a binary digit. If the digit is 0 the pin/wire is set to LOW, if it is a 1 it is set to HIGH. So for the number 7 the wires are LOW, HIGH, HIGH, HIGH.
  • These wires are soldered to the driver chip. The chip has 10 switches in it, one for each number in the Nixie Tubes. These switches are connected to the chips with yellow wires. The chips look at the 4 wires to see which binary number it is, and then switches on the correct light in the Nixie Tube.

The table below shows the wires and their values for each digit.

Digit | Black Wire | Blue Wire | Grey Wire | White Wire | Binary

Here is the source code in C:

#include <wiringPi.h>   /* These are libraries */
#include <time.h>

// turns a pin on or off
void nixiePin(int p, int v){
  if (p != -1) {
    digitalWrite(p, v);
  }
}

// converts to binary and sends values to 4 pins
void nixiePins(int p1, int p2, int p4, int p8, int v){
  nixiePin(p1, (v & 1) ? HIGH : LOW);
  nixiePin(p2, (v & 2) ? HIGH : LOW);
  nixiePin(p4, (v & 4) ? HIGH : LOW);
  nixiePin(p8, (v & 8) ? HIGH : LOW);
}

// splits the time into digits
void nixieTime(int h, int m, int s) {
  nixiePins( 1, -1, -1, -1, h/10);  /* quotient of hour / 10  */
  nixiePins( 2,  3,  4,  5, h%10);  /* remainder of hour / 10 */
  nixiePins( 6,  7, 21, -1, m/10);  /* quotient of minute / 10 */
  nixiePins(22, 23, 24, 25, m%10);  /* remainder of minute / 10 */
}

// make sure all the digits work
void testClock(void){
  int i;
  for (i=1; i<=12; i++) {
    nixieTime(i, 0, 0);    /* loop bodies reconstructed: step through the hours */
    delay(250);
  }
  for (i=1; i<=59; i++) {
    nixieTime(12, i, 0);   /* and the minutes */
    delay(250);
  }
}

// set up the pins we will use
void initPin(int p) {
  pinMode(p, OUTPUT);
  digitalWrite(p, LOW);
}

// this is the main part of the program
int main (void) {
  time_t now;         /* a variable that holds time info */
  struct tm *ntm;     /* the broken-down time */
  int i;
  wiringPiSetup();    /* set up pins 0-7 and 21-29 to use */
  for (i=0; i <=7; i++) {
    initPin(i);
  }
  for (i=21; i <=29; i++) {
    initPin(i);
  }
  digitalWrite(0, HIGH);            /* turn on high voltage power */
  testClock();                      /* test all the digits */

  while (1) {                       /* start an infinite loop */
    now=time(NULL);                 /* ask the computer for the time */
    ntm=localtime(&now);            /* it formats the time */
    if (ntm->tm_hour > 12) {        /* if hour is more than 12, subtract 12 */
      ntm->tm_hour = ntm->tm_hour-12;
    }
    nixieTime(ntm->tm_hour, ntm->tm_min, ntm->tm_sec);  /* write that time to the nixie tubes */
    delay (1000);   /* wait for 1 second */
  }
  return 0;
}


The Circuit Board:


My dad drilled a piece of plastic for me for the Nixie Tubes to sit on.

The circuit board has 4 Nixie tubes, and 4 chips (one for each).

The chips are wired to the Nixie Tubes with yellow wires.

Black wires are used for Ground, and red wires for 5 and 12 Volts. 5V and Ground were wired to each chip.

The Nixie Tubes require 170V DC to work, so in one corner I have soldered a high voltage power supply. This takes 12V and turns it into 170V. All 170V wires are green.

The Nixie Tubes need resistors attached to them, so they don’t take too much current and burn out. The resistors limit the current to 2mA.

There is also a Transistor with 2 more resistors to limit the current.  This transistor acts as a switch, and lets my program turn the High Voltage Power Supply on or off.

I also added a USB port, and wired it so it has 5V and Ground. This lets me use it as a power supply for the Raspberry Pi.

Then the inputs to the chips were wired to pins on the Raspberry Pi GPIO (see code for pin numbers).

Soldering took a very long time. Before we turned it on, my Dad checked over everything, making sure the 170V was safe. He found a couple of shorts that had to be fixed.

When I turned it on the first time, the tubes just half glowed and flickered. However, if I took two chips out of the sockets, then the other two would work. This was because the 170V power supply wasn’t powerful enough. I double-checked the datasheet: I should have been using about 1.5W, well under the 5W the power supply should be able to make from 5V. Instead of running the high voltage power supply on 5V, I tried 12V (it is rated up to 16V input), and that solved the power problem.

The Case:

I made a box out of wood and plastic. I got to use a big circular miter saw with my Dad supervising to cut the wood. The plastic is cut by using a sharp blade to cut into it, and then snapping it. Then everything was screwed together:


What’s Next:

I was very nervous about taking it into school: the last boy that took an electronic clock into school in Texas got arrested, so my Dad contacted the school first to let them know. I think my teacher was impressed; I had to explain everything in detail to her.

This is only the start of the project. I want to put it in a nicer case with my Dad’s help before I give it to my Mom. I want to add an alarm. I also want to add a hidden camera, microphone and speaker, so it can run voice/face recognition. Then I can turn it into J.A.R.V.I.S. from Ironman. That may take me a while, but I’ll add more posts on my blog as I do things to it.

Liz: Have you made a school project with the Pi that you’d like to share with us? Leave us a note in the comments!

The post Alex’s Nixie Clock appeared first on Raspberry Pi.

TorrentFreak: MPAA Wins $10.5 Million Piracy Damages From MovieTube

This post was syndicated from: TorrentFreak and was written by: Ernesto. Original post: at TorrentFreak

Unauthorized movie streaming sites have been a thorn in the side of Hollywood for many years.

Responding to this threat the MPAA decided to take one of the most prominent players to court earlier this year.

MPAA members 20th Century Fox, Columbia Pictures, Disney, Paramount, Universal and Warner Bros filed a lawsuit against a group of MovieTube affiliated websites, which were operating from more than two dozen domain names.

In the complaint the MPAA listed several popular websites, all believed to be operated by the same people.

Despite facing millions of dollars in damages, the site’s operators remained silent. They swiftly pulled the targeted sites offline after the complaint was filed, but never responded to any of the claims in court.

Due to this inaction the MPAA requested a default judgment at a New York federal court, demanding a permanent injunction as well as millions of dollars in damages.

This week a federal court judge ruled in favor of the MPAA, finding MovieTube liable for copyright infringement, federal trademark counterfeiting, and unfair competition (pdf).

The court agreed to statutory damages for willful copyright infringement in the amount of $75,000 per work, which brings the total to $10.5 million.


The default judgment also includes a permanent injunction that prohibits MovieTube’s operators from offering or linking to any copyright infringing material. In addition, the movie studios will now take ownership of all domain names.

Dean Marks, the MPAA’s Executive Vice President, is happy with the outcome and says it helps to protect the livelihood of movie industry workers.

“By shutting down these illegal commercial enterprises we are protecting not only our members’ creative work and the hundreds of innovative, legal digital distribution platforms, but also the millions of people whose jobs depend on a vibrant motion picture and television industry.”

“This court order will help ensure the sites stay down and are not transferred to others for the purposes of continuing a piracy operation on a massive scale.”

While shutting down the MovieTube sites is a significant win for the MPAA, they are unlikely to see any of the money that’s been awarded to them. The true operators of the MovieTube sites remain unknown and will do their best to keep it that way.

In total, more than two dozen MovieTube-related domain names will be signed over to the MPAA.


Linux How-Tos and Linux Tutorials: How to Control Hardware With the Raspberry Pi Using WiringPi

This post was syndicated from: Linux How-Tos and Linux Tutorials and was written by: Ben Martin. Original post: at Linux How-Tos and Linux Tutorials

Raspberry Pi motor

Our last tutorial in this series used the Raspberry Pi 2’s 40 pin header to connect a touch screen to the Pi. This time around we’ll take a look at how to directly interact with hardware — in this case an electric gearmotor — from the command line using the 40 pin header. The following design can also be extended to allow a Raspberry Pi to be mounted to a small robot and move it (and itself) around.

The Raspberry Pi is a small ARM single board computer which has great community support and has many Linux distributions available for it. The Raspberry Pi 2 is the latest model of the series and includes among other things a quad core ARM, 1GB of RAM, Ethernet, USB, HDMI, microSD, and a 40 pin header for connecting hardware.

First, we’ll need to connect the Pi to the breadboard. The connecting wires typically used on breadboards are male-to-male DuPont jumpers, which won’t work with the Pi. You can get male-to-female jumpers instead; the female ends slide directly onto the pins of the Raspberry Pi 2. Another option is to get a “Wedge,” which connects the Raspberry Pi via a ribbon cable to a custom PCB that can be inserted into a breadboard. A significant advantage of using a Wedge is that the pins are labeled on the Wedge PCB — much, much simpler than trying to keep count of which pin you are at in the 20 columns of unlabeled pins on the Pi itself.

Next we’ll install the WiringPi project’s “gpio” command line tool, which allows interaction with the 40 pins on the Raspberry Pi header. I was using the Raspbian distribution on my Pi. The commands below check out the latest source; the build script then compiles and installs it for you.

pi@pi ~/src $ git clone git://

Cloning into 'wiringPi'...

remote: Counting objects: 914, done.

remote: Compressing objects: 100% (748/748), done.

remote: Total 914 (delta 654), reused 217 (delta 142)

Receiving objects: 100% (914/914), 285.58 KiB | 123 KiB/s, done.

Resolving deltas: 100% (654/654), done.

pi@pi ~/src $ cd ./wiringPi

pi@pi ~/src/wiringPi $ ./build

The WiringPi library offers easy access to the GPIO pins on the Raspberry Pi and provides both the command line tool gpio and an API for hardware interaction from your programs. It also includes support for interacting with chips connected to the Raspberry Pi; for example, a GPIO multiplexer chip can be mapped in for easy access using calls familiar to Arduino programmers, such as digitalWrite().

WiringPi has its own pin numbering scheme. As you can see from the table below, much of the time the name of the pin and the name that WiringPi uses will match. I used the SparkFun Wedge, which labels the GPIO pins using the BCM numbers. So the physical pin 12 on the Raspberry Pi header has a BCM pin name of 18, and so is labeled as G18 on the Wedge. The same pin has a WiringPi pin number of 1. It seems like there might be one too many levels of indirection in there. But, if you are using a Wedge then you should be able to read the BCM pin number and know what WiringPi (wPi) pin number you need to use in order to interact with that pin on the Wedge. The Wedge also makes it a little less likely to accidentally connect ground and voltage to the wrong places.

root@pi:~# gpio readall
+-----+-----+---------+------+---+---Pi 2---+---+------+---------+-----+-----+
| BCM | wPi |   Name  | Mode | V | Physical | V | Mode | Name    | wPi | BCM |
|     |     |    3.3v |      |   |  1 || 2  |   |      | 5v      |     |     |
|   2 |   8 |   SDA.1 |   IN | 1 |  3 || 4  |   |      | 5V      |     |     |
|   3 |   9 |   SCL.1 |   IN | 1 |  5 || 6  |   |      | 0v      |     |     |
|   4 |   7 | GPIO. 7 |   IN | 1 |  7 || 8  | 1 | ALT0 | TxD     | 15  | 14  |
|     |     |      0v |      |   |  9 || 10 | 1 | ALT0 | RxD     | 16  | 15  |
|  17 |   0 | GPIO. 0 |   IN | 0 | 11 || 12 | 1 | ALT5 | GPIO. 1 | 1   | 18  |
|  27 |   2 | GPIO. 2 |   IN | 0 | 13 || 14 |   |      | 0v      |     |     |
|  22 |   3 | GPIO. 3 |   IN | 0 | 15 || 16 | 0 | IN   | GPIO. 4 | 4   | 23  |
|     |     |    3.3v |      |   | 17 || 18 | 0 | IN   | GPIO. 5 | 5   | 24  |
|  10 |  12 |    MOSI | ALT0 | 0 | 19 || 20 |   |      | 0v      |     |     |
|   9 |  13 |    MISO | ALT0 | 0 | 21 || 22 | 0 | IN   | GPIO. 6 | 6   | 25  |
|  11 |  14 |    SCLK | ALT0 | 0 | 23 || 24 | 1 | ALT0 | CE0     | 10  | 8   |
|     |     |      0v |      |   | 25 || 26 | 1 | ALT0 | CE1     | 11  | 7   |
|   0 |  30 |   SDA.0 |   IN | 1 | 27 || 28 | 1 | IN   | SCL.0   | 31  | 1   |
|   5 |  21 | GPIO.21 |   IN | 1 | 29 || 30 |   |      | 0v      |     |     |
|   6 |  22 | GPIO.22 |   IN | 1 | 31 || 32 | 0 | IN   | GPIO.26 | 26  | 12  |
|  13 |  23 | GPIO.23 |   IN | 0 | 33 || 34 |   |      | 0v      |     |     |
|  19 |  24 | GPIO.24 |   IN | 0 | 35 || 36 | 0 | IN   | GPIO.27 | 27  | 16  |
|  26 |  25 | GPIO.25 |   IN | 0 | 37 || 38 | 0 | IN   | GPIO.28 | 28  | 20  |
|     |     |      0v |      |   | 39 || 40 | 0 | IN   | GPIO.29 | 29  | 21  |
| BCM | wPi |   Name  | Mode | V | Physical | V | Mode | Name    | wPi | BCM |
+-----+-----+---------+------+---+---Pi 2---+---+------+---------+-----+-----+
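To keep the numbering schemes straight while wiring, it can help to hard-code a few rows of the table above into a tiny lookup helper. This is purely an illustrative sketch (it is not part of WiringPi); the pairs are taken from the gpio readall output:

```shell
# Map a few BCM pin numbers to WiringPi (wPi) numbers, per the
# gpio readall table above. Illustrative only; extend as needed.
wpi_for_bcm() {
  case "$1" in
    17) echo 0 ;;        # physical pin 11
    18) echo 1 ;;        # physical pin 12 (hardware PWM capable)
    23) echo 4 ;;        # physical pin 16
    24) echo 5 ;;        # physical pin 18
    *)  echo "unknown" ;;
  esac
}
```

For example, `wpi_for_bcm 18` prints 1, which is the wPi number used with the gpio tool in the commands that follow.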

Test the Setup

Connecting an LED and resistor in series to a GPIO is a standard test to quickly see if setting a GPIO has an effect. Connecting one end of the LED-resistor combination to G18 (BCM18) on the Wedge and the other end to ground allows the below commands to turn the LED on and off.

root@pi:~# gpio mode 1 output
root@pi:~# gpio write 1 1
root@pi:~# gpio write 1 0

Pin G18/BCM18 is special on the Raspberry Pi because it can send a Pulse Width Modulated (PWM) signal. One way of thinking about a PWM signal is that it is on for a certain percentage of the time and off for the rest. For example, a value of 0 means the signal is always a low (ground) output. A value of 1023 would keep the pin high all of the time. A value of 512 would result in the pin being on half the time and off half the time.
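To make the ratio concrete, here is a tiny helper (a sketch, not from the original article) that converts a PWM value in the default 0-1023 range into an approximate on-time percentage using shell integer arithmetic:

```shell
# Convert a PWM value (0-1023) to a rough duty-cycle percentage.
duty_pct() {
  echo $(( $1 * 100 / 1023 ))
}
```

So `duty_pct 512` prints 50, matching the half-on/half-off example in the text.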

The script shown below will give a glowing pulse effect on the LED instead of just turning it on and off directly. Notice the use of the trap command which runs a cleanup function when the script is exited or closed using control-c from the command line.

root@pi:~# cat ./
#!/bin/bash
# Pin and range values were missing from the original listing; these are
# the defaults assumed throughout this article (wPi pin 1, PWM range 0-1023).
pin=1
minval=0
maxval=1023

trap "{ echo 'bye...'; gpio mode $pin output; gpio write $pin 0; exit 0; }" EXIT SIGINT SIGTERM
gpio mode $pin pwm

for i in $(seq 1 10); do
  for v in $(seq $minval 10 $maxval); do
     gpio pwm $pin $v
     sleep 0.001
  done
  for v in $(seq $maxval -10 $minval); do
     gpio pwm $pin $v
     sleep 0.001
  done
  sleep 0.5
done

exit 0

Get Your Motor Running!

The photo above shows a common method to control an electric gearmotor from a microcontroller or computer. A few complications are introduced when running gearmotors from computers. For a start, the motor is likely to want to run at a higher voltage than what the computer is using. Even if the motor can operate at the voltage that the GPIO pins on the computer operate at, the motor will likely want to draw more current than the computer is rated to supply. So operating a gearmotor directly from the GPIO pins is usually a very bad idea. Damage to the controlling computer has a fairly good chance of occurring if you try that. A common solution to this problem is to use a motor driver chip which drives the motors using a separate power supply and lets you command the chip from your computer.

The small red PCB on the left side of the photo carries a TB6612FNG motor driver chip. The TB6612FNG is not a DIP chip, so it cannot be inserted directly into a breadboard; many PCBs like the one shown carry the chip and break its pins out in a breadboard-friendly pinout. The chip lets you run two motors at different speeds and directions, using a dedicated power source for the motors while accepting control signals at the computer’s logic voltage. Each motor uses three pins on the Raspberry Pi for control: a PWM pin to set the motor rotation speed, and two pins to set the direction that the motor spins.

Shown on the lower side of the TB6612FNG chip, the motor is wired to B01 and B02. It doesn’t matter which way around you wire this, as inserting the motor the other way around will only cause it to spin in the other direction. I’m using a block of AA batteries to power the gearmotor; the battery has its positive lead connected to the VM (Voltage Motor) input and the ground is connected to the ground shared with the Raspberry Pi. Using red and green/black for power and ground is a reasonably common wire color scheme and helps to avoid accidentally connecting things that might create a short circuit. The ground of the Raspberry Pi and the battery pack are connected to establish a common ground. The battery pack supplies the Voltage Motor pin which is used to power the gearmotor. All signals sent to the TB6612FNG chip use the logic voltage level which is set by the Raspberry Pi.

The STBY (Standby) line is pulled to logic voltage high. There is an internal pull down resistor on the STBY pin, and if the STBY is low (ground) then the motors will not turn. The PWMB, BIN2, and BIN1 are connected to G18, G19, and G20 respectively. The G18 pin has a special double meaning because it can output a PWM signal using hardware on the Raspberry Pi.

The first commands shown below set the motor rotation direction and set up the controlling PWM pin, ready to start rotating the motor. The PWM setting defaults to a range of 0-1023, with higher values causing the motor to spin faster. Once the motor is stopped, the settings on pins 24 and 28 are swapped, so the motor will spin in the opposite direction.

root@pi:~# gpio mode 24 out
root@pi:~# gpio mode 28 out
root@pi:~# gpio write 24 1
root@pi:~# gpio write 28 0
root@pi:~# gpio mode 1 pwm

root@pi:~# gpio pwm 1 200
root@pi:~# gpio pwm 1 800
root@pi:~# gpio pwm 1 0
root@pi:~# gpio write 24 0
root@pi:~# gpio write 28 1
root@pi:~# gpio pwm 1 800
root@pi:~# gpio pwm 1 0
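The sequence above can be wrapped into a few reusable shell functions. This is only a sketch for the wiring used in this article (direction on wPi pins 24 and 28, PWM on wPi pin 1); the GPIO variable lets you dry-run it with GPIO=echo on a machine without the gpio tool or hardware attached:

```shell
# Motor helpers for the wiring used in this article:
#   wPi 24 / wPi 28 -> BIN1/BIN2 (direction), wPi 1 -> PWMB (speed).
# Set GPIO=echo to print the commands instead of driving hardware.
GPIO="${GPIO:-gpio}"

motor_init() {
  $GPIO mode 24 out
  $GPIO mode 28 out
  $GPIO mode 1 pwm
}

# motor_run <forward|reverse> <speed 0..1023>
motor_run() {
  if [ "$1" = "forward" ]; then
    $GPIO write 24 1
    $GPIO write 28 0
  else
    $GPIO write 24 0
    $GPIO write 28 1
  fi
  $GPIO pwm 1 "$2"
}

motor_stop() {
  $GPIO pwm 1 0
}
```

Calling `motor_init`, then `motor_run forward 800`, then `motor_stop` reproduces the command sequence above; `motor_run reverse 800` swaps the direction pins.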

The same PWM channel that drives wPi pin 1 also drives wPi pin 26. After moving the gearmotor’s PWM wire to wPi pin 26, I could still control the speed of the motor by setting the PWM signal on wPi pin 1, so these pins appear to share the same PWM signal, at least when controlled through the gpio tool. To get a second, independent PWM output, move the direction-setting pins elsewhere to free up wPi pin 24 (BCM pin 19); for example, use BCM_20 and BCM_21 to set the motor direction instead.

Final Words

The Raspberry Pi 2 has two hardware PWM outputs. It has been mentioned that one of those PWM channels is also involved in generating the Pi’s analog audio, so using it may affect audio on the Raspberry Pi. A common method of controlling a robot is differential drive, which uses two independently controlled motors and a drag wheel or ball as a third point of contact with the ground. Using the two PWM outputs and four other GPIO pins, the above design can be extended to allow a Raspberry Pi to be mounted on a small robot and move it around.

The WiringPi project can also control 595-series shift registers and GPIO expander chips like the MCP23008 and MCP23017. I hope to show interaction with some of these chips using WiringPi, as well as TWI or SPI interaction, in a future article.

Krebs on Security: Hilton Acknowledges Credit Card Breach

This post was syndicated from: Krebs on Security and was written by: BrianKrebs. Original post: at Krebs on Security

Two months after KrebsOnSecurity first reported that multiple banks suspected a credit card breach at Hilton Hotel properties across the country, Hilton has acknowledged an intrusion involving malicious software found on some point-of-sale systems.

According to a statement released after markets closed on Tuesday, the breach occurred over two periods: Nov. 18 to Dec. 5, 2014, and April 21 to July 27, 2015, roughly 17 weeks in all.

“Hilton Worldwide (NYSE: HLT) has identified and taken action to eradicate unauthorized malware that targeted payment card information in some point-of-sale systems,” the company said. “Hilton immediately launched an investigation and has further strengthened its systems.”

Hilton said the data stolen includes cardholder names, payment card numbers, security codes and expiration dates, but no addresses or personal identification numbers (PINs).

The company did not say how many Hilton locations or brands were impacted, or whether the breach was limited to compromised point-of-sale devices inside of franchised restaurants, coffee bars and gift shops within Hilton properties — as previously reported here.

The announcement from Hilton comes just five days after Starwood Hotels & Resorts Worldwide — including some 50 Sheraton and Westin locations — was hit by a similar breach that lasted nearly six months.

Starwood and Hilton join several other major hotel brands in announcing a malware-driven credit card data breach over the past year. In October 2015, The Trump Hotel Collection confirmed a report first published by KrebsOnSecurity in June about a possible card breach at the luxury hotel chain.

In March, upscale hotel chain Mandarin Oriental acknowledged a similar breach. The following month, hotel franchising firm White Lodging allowed that — for the second time in 12 months — card processing systems at several of its locations were breached by hackers.

Readers should remember that they are not liable for unauthorized debit or credit card charges, but with one big caveat: the onus is on the cardholder to spot and report any unauthorized charges. Keep a close eye on your monthly statements and report any bogus activity immediately. Many card issuers now let customers receive text alerts for each card purchase and/or for any account changes. Take a moment to review the notification options available to you from your bank or card issuer.

AWS Compute Blog: Amazon ECS improves console first run experience, ability to troubleshoot Docker errors

This post was syndicated from: AWS Compute Blog and was written by: Chris Barclay. Original post: at AWS Compute Blog

Today Amazon EC2 Container Service (ECS) added a new first run experience that streamlines getting your first containerized application running on ECS. Amazon ECS is a highly scalable, high performance container management service that supports Docker containers and allows you to easily run applications on a managed cluster of Amazon EC2 instances. Amazon ECS eliminates the need for you to install, operate, and scale your own cluster management infrastructure. With simple API calls, you can launch and stop Docker-enabled applications, query the complete state of your cluster, and access many familiar features like security groups, Elastic Load Balancing, EBS volumes, and IAM roles. You can start with the sample application or provide a Docker image and ECS will create all the resources required to run your containerized application on a cluster of Amazon EC2 instances.

For clusters created in the new first run experience, you can now scale EC2 instances up and down directly in the cluster’s ECS instances tab in the console. This gives you an easier way to manage your cluster’s capacity.

ECS also added task stopped reasons and task start and stop times. You can now see if a task was stopped by a user or stopped due to other reasons such as a failing Elastic Load Balancing health check, as well as the time the task was started and stopped.

Service scheduler error messages have additional information that describe why tasks cannot be placed in the cluster. These changes make it easier to diagnose problems.

These improvements came directly from your feedback. To get started with ECS, go to the console’s new first run wizard. And thank you for the input!

Backblaze Blog | The Life of a Cloud Backup Company: Join the Alliance – Backblaze Needs a Senior Network Engineer & Datacenter Tech

This post was syndicated from: Backblaze Blog | The Life of a Cloud Backup Company and was written by: Yev. Original post: at Backblaze Blog | The Life of a Cloud Backup Company


With the announcement of Backblaze B2, we keep on growing, and we need some help! We’re looking for two Rebels to join the Alliance, a Senior Network Engineer for our San Mateo office and a Datacenter Tech for our Sacramento datacenter. Do you have what it takes to defeat the data loss Empire? Read the description and apply below! Please remember, many Bothans died to bring us this information.

Senior Network Engineer – San Mateo, CA


  • Lead efforts in planning, provisioning, and deploying network systems within the back-end operations and across the various corporate and datacenter sites (switches, VPNs, routers, etc.)
  • Lead efforts to automate deploying & updating of network systems and equipment.
  • Lead efforts in monitoring and troubleshooting network operational issues
  • Collaborate on network security (including PCI compliance, firewalls, ACLs, HackerOne, Log Analysis, etc)
  • Participate in other Operations Automation efforts
  • Collaborate on capacity planning (manage network bandwidth and how it relates to storage burn rate)
  • Understand environment thoroughly enough to administer/debug any system in operations.
  • Collaborate on strategic planning (optimize performance, reduce cost, increase efficiency, mitigate risk)
  • Help manage infrastructure services installation/configuration (DNS, DHCP, NTP, Certificate Authority, Clonezilla, PXE, etc)
  • Help manage web services installation/configuration (Tomcat, Apache, WordPress, Java, etc)
  • Help administer database servers (MySQL, Cassandra)
  • Help debug/repair software problems (File system, RAID & boot drive repairs)
  • Participate in the 24×7 on-call pager rotation and respond to alerts as needed


  • Expert knowledge and practical experience in designing, provisioning, and deploying network systems
  • Expert knowledge of Linux system administration, Debian experience preferred
  • 4+ years of experience or equivalent
  • Bash scripting and Automation skills
  • Position based in San Mateo, CA

Required for all Backblaze Employees

  • Good attitude and willingness to do whatever it takes to get the job done
  • Strong desire to work for a small fast paced company
  • Desire to learn and adapt to rapidly changing technologies and work environment
  • Occasional visits to Backblaze datacenters necessary
  • Rigorous adherence to best practices
  • Relentless attention to detail
  • Excellent interpersonal skills and good oral/written communication
  • Excellent troubleshooting and problem solving skills

Datacenter Technician – Sacramento, CA


  • Work as Backblaze’s physical presence in Sacramento area datacenter(s)
  • Maintain physical infrastructure including racking equipment, replacing hard drives and other system components
  • Repair and troubleshoot defective equipment with minimal supervision
  • Receive deliveries, maintain accurate inventory counts/records and RMA defective components
  • Provision, test & deploy new equipment via the Linux command line and web GUIs
  • Help qualify new hardware & software configurations (load & component testing, qa, etc)
  • Help train new Datacenter Technicians
  • Follow and improve datacenter best practices and documentation
  • Maintain a clean and well organized work environment
  • On-call responsibilities include 24×7 trips to datacenter to resolve issues that can’t be handled remotely


  • Ability to learn quickly
  • Ability to lift/move 50-75 lbs and work down near the floor on a daily basis
  • Position based near Sacramento, California and may require periodic visits to the corporate office in San Mateo


  • Working knowledge of Linux
  • 1-2 years of experience in a technology-related field
  • Experience working at a datacenter in a support role


Check out these videos on our Datacenter Operations team:

Want to join our team? Follow these three steps:

  1. Send an Email to with one of the positions listed above in the subject line
  2. Include your resume
  3. Include your answers to 2 of the following 3 questions
    a. What about working at Backblaze excites you the most?
    b. Provide 3 adjectives that best describe your personal work space.
    c. How would you manage multiple facilities of 1,000+ servers each?

The post Join the Alliance – Backblaze Needs a Senior Network Engineer & Datacenter Tech appeared first on Backblaze Blog | The Life of a Cloud Backup Company.

AWS Official Blog: New AWS Quick Start – Sitecore

This post was syndicated from: AWS Official Blog and was written by: Jeff Barr. Original post: at AWS Official Blog

Sitecore is a popular enterprise content management system that also includes a multi-channel marketing automation component with an architecture that is a great fit for the AWS cloud! It allows marketers to deliver a personalized experience that takes into account the customers’ prior interaction with the site and the brand (they call this feature Context Marketing).

Today we are publishing a new Sitecore Quick Start Reference Deployment.  This 19-page document will show you how to build an AWS cluster that is fault-tolerant and highly scalable. It builds on the information provided in the Sitecore Scaling Guide and recommends an architecture that uses the Amazon Relational Database Service (RDS), Elastic Load Balancing, and Auto Scaling.

Using the AWS CloudFormation template referenced in the Quick Start, you can launch Sitecore into an Amazon Virtual Private Cloud in a matter of minutes. The template creates a fully functional deployment of Sitecore 7.2 that runs on Windows Server 2012 R2. The production configuration runs in two Availability Zones:

You can use the template as-is, or you can copy it and then modify it as you see fit. If you decide to do this, the new CloudFormation Visual Designer may be helpful:

The Quick Start includes directions for setting up a test server along with some security guidelines. It also discusses the use of Amazon CloudFront to improve site performance and AWS WAF to help improve application security.

Jeff;

[$] A journal for MD/RAID5

This post was syndicated from: and was written by: corbet. Original post: at

RAID5 support in the MD driver has been part of mainline Linux since
2.4.0 was released in early 2001. During this time it has been used
widely by hobbyists and small installations, but there has
been little evidence of any impact on the larger or “enterprise”
sites. Anecdotal evidence suggests that such sites are usually
happier with so-called “hardware RAID” configurations where a purpose-built
computer, whether attached by PCI or fibre channel or similar,
is dedicated to managing the array.
This situation could begin to change with the 4.4 kernel, which brings some
enhancements to the MD driver that should make it
more competitive with hardware-RAID controllers.

Schneier on Security: NSA Collected Americans’ E-mails Even After it Stopped Collecting Americans’ E-mails

This post was syndicated from: Schneier on Security and was written by: schneier. Original post: at Schneier on Security

In 2001, the Bush administration authorized — almost certainly illegally — the NSA to conduct bulk electronic surveillance on Americans: phone calls, e-mails, financial information, and so on. We learned a lot about the bulk phone metadata collection program from the documents provided by Edward Snowden, and it was the focus of debate surrounding the USA FREEDOM Act. E-mail metadata surveillance, however, wasn’t part of that law. We learned the name of the program — STELLAR WIND — when it was leaked in 2004. But supposedly the NSA stopped collecting that data in 2011, because it wasn’t cost-effective.

“The internet metadata collection program authorized by the FISA court was discontinued in 2011 for operational and resource reasons and has not been restarted,” Shawn Turner, the Obama administration’s director of communications for National Intelligence, said in a statement to the Guardian.

When Turner said that in 2013, we knew from the Snowden documents that the NSA was still collecting some Americans’ Internet metadata from communications links between the US and abroad. Now we have more proof. It turns out that the NSA never stopped collecting e-mail metadata on Americans. They just cancelled one particular program and changed the legal authority under which they collected it.

The report explained that there were two other legal ways to get such data. One was the collection of bulk data that had been gathered in other countries, where the N.S.A.’s activities are largely not subject to regulation by the Foreign Intelligence Surveillance Act and oversight by the intelligence court.


The N.S.A. had long barred analysts from using Americans’ data that had been swept up abroad, but in November 2010 it changed that rule, documents leaked by Edward J. Snowden have shown. The inspector general report cited that change to the N.S.A.’s internal procedures.

The other replacement source for the data was collection under the FISA Amendments Act of 2008, which permits warrantless surveillance on domestic soil that targets specific noncitizens abroad, including their new or stored emails to or from Americans.

In Data and Goliath, I wrote:

Some members of Congress are trying to impose limits on the NSA, and some of their proposals have real teeth and might make a difference. Even so, I don’t have any hope of meaningful congressional reform right now, because all of the proposals focus on specific programs and authorities: the telephone metadata collection program under Section 215, bulk records collection under Section 702, and so on. It’s a piecemeal approach that can’t work. We are now beyond the stage where simple legal interventions can make a difference. There’s just too much secrecy, and too much shifting of programs amongst different legal justifications.

The NSA continually plays this shell game with Congressional overseers. Whenever an intelligence-community official testifies that something is not being done under this particular program, or this particular authority, you can be sure that it’s being done under some other program or some other authority. In particular, the NSA regularly uses rules that allow them to conduct bulk surveillance outside the US — rules that largely evade both Congressional and Judicial oversight — to conduct bulk surveillance on Americans. Effective oversight of the NSA is impossible in the face of this level of misdirection and deception.

Grigor Gatchev - A Weblog: Tihomir Dimitrov: “Avariyata” (“The Breakdown”)

This post was syndicated from: Grigor Gatchev - A Weblog and was written by: Григор. Original post: at Grigor Gatchev - A Weblog

Quite a while ago, bored and finding myself with a little free time, I sat down to read the first thing pushed my way as some sort of science fiction. An unassuming-looking little book by an author unknown to me... By the time I finished it, though, I made a point of remembering the author’s name. The interesting combination of ideas was shaped and executed with superb writing skill. I didn’t need to be a literary specialist to sense it: a master’s career lay ahead of this author.

The book was called “Dusha Nazaem” (“A Soul on Loan”). And the author was Tihomir Dimitrov.

Later I got to know him personally. I don’t remember whether the meeting at a fan gathering came first, or whether I first found his blog on the Net. It doesn’t matter: Tisho is equally valued in both places. Always an optimist, yet never losing his sober judgment. A clear mind and an admirable personality, with a position and the arguments to back it.

After “A Soul on Loan” more books followed. Travelogues, contemporary Bulgarian short stories... As a devotee of science fiction, though, I waited impatiently for Tisho to return to it. And at last it happened.

His new novel is called “Avariyata” (“The Breakdown”), and I had the privilege of being allowed by the author to read it in advance. It truly is a privilege, not only because of the author’s trust but because of the quality of the work.

The plot is full of unexpected turns. Every time I thought things had settled into a familiar course, a deft twist followed and they went off in an entirely different direction, at once logical and believable, yet impossible to predict until it happened. Ostensibly science fiction, yet at the same time a splendid illustration of the Bulgarian mentality, with both its strengths and its weaknesses. Something I would watch as a film with the greatest pleasure. I can only express my admiration of, and envy toward, Tisho.

I don’t know whether it will be in print soon. I haven’t asked; perhaps it already is. I probably should ask the author, but I never got around to it... One thing I have firmly decided, though: whether I have read it or not, the moment it appears on paper I will buy it. Not only to support the author, but to have a copy of my own. Editions in the style of 2018 are not to my taste.

And you know what? I’ll let you in on a little secret. Tisho is hard at work on a sequel to the novel! So the tale will not end. I don’t know which way it will turn, and that is the very best thing about it!

TorrentFreak: Insurer Refuses to Cover Cox in Massive Piracy Lawsuit

This post was syndicated from: TorrentFreak and was written by: Ernesto. Original post: at TorrentFreak

Following a ruling from a Virginia federal court that Cox is not protected by the safe-harbor provisions of the DMCA, the Internet provider must now deal with another setback.

Beazley, a high risk insurance underwriter for Lloyds, is refusing to cover legal costs related to the “repeat infringer” case which goes to trial next week.

It’s a crucial case that could define how Internet providers must handle copyright infringement complaints. At the moment it’s rare for ISPs to disconnect pirating users but the case has the potential to alter the landscape.

The case also exposes Cox to dozens if not hundreds of millions in potential piracy damages plus substantial legal fees. The Internet provider hoped to cover some of the costs under an insurance policy at Lloyd’s but the insurer is refusing to cooperate.

In a request for a declaratory judgment (pdf) Lloyd’s underwriter asks the court to rule that it doesn’t have to cover the costs, which have already exceeded $1 million in legal fees alone.

In the complaint Beazley states that Cox was well aware of the potential liabilities. Rightscorp, the company that sends the copyright infringement notices, had already warned the ISP over its precarious position several years ago.

“By letter dated January 9, 2012, Cox was advised by an agent of copyright holders that if it did not forward those notices to its customers, it would be exposed to claims of contributory and vicarious copyright infringement,” the insurer writes.

Cox, however, refused to forward the millions of notices because they were bundled with settlement demands, which are seen by some as extortion. This refusal eventually led to the lawsuit filed by music rights companies BMG and Round Hill.

“Cox continued to intentionally ignore the notices and did not forward them to its customers,” the complaint notes.

In light of the above, Beazley argues that the lawsuit is the result of an intentional business policy rather than the act of rendering Internet services, which is what the insurance policy covers.

“…the BMG Claim arose out of Cox’s policy and practice of ignoring and failing to forward infringement notices and refusing to terminate or block infringing customers’ accounts, not acts in rendering internet services.”

In addition, Beazley points out that the piracy lawsuit was filed in November last year, several days before the insurance policy took effect on December 1, 2014.

If the court grants Beazley’s request for declaratory judgment then Lloyd’s policy will not cover any of the costs related to the lawsuit. This will be a costly setback for the ISP if it loses the piracy lawsuit.


Lauren Weinstein's Blog: The Three Letter Cure for Web Accessibility and Discrimination Problems

This post was syndicated from: Lauren Weinstein&#039;s Blog and was written by: Lauren. Original post: at Lauren Weinstein's Blog

A few months ago, in “UI Fail: How Our User Interfaces Help to Ruin Lives”, I discussed the many ways that modern Web and app interfaces can be frustrating, useless, and even painful for vast numbers of users who don’t fit the “majority” category for which app and Web designers tend to build their user interfaces. This doesn’t just…

Security updates for Tuesday

This post was syndicated from: and was written by: ris. Original post: at

Debian-LTS has updated openjdk-6 (multiple vulnerabilities).

Fedora has updated libsndfile (F22; F21:
buffer overflow), mingw-freeimage (F23; F22:
integer overflow), rpm (F23: denial of
service), wpa_supplicant (F21: denial of
service), and zarafa (F21: two
vulnerabilities, one from 2012).

Oracle has updated autofs (OL7:
privilege escalation), binutils (OL7:
multiple vulnerabilities), chrony (OL7:
multiple vulnerabilities), cpio (OL7:
denial of service), cups-filters (OL7:
multiple vulnerabilities), curl (OL7:
multiple vulnerabilities), file (OL7:
multiple vulnerabilities), grep (OL7: heap
buffer overrun), grub2 (OL7: Secure Boot
circumvention), krb5 (OL7: two
vulnerabilities), libreport (OL6: data
leak), libssh2 (OL7: information leak), net-snmp (OL7: denial of service), netcf (OL7: denial of service), ntp (OL7: multiple vulnerabilities), openhpi (OL7: world writable /var/lib/openhpi
directory), openldap (OL7: unintended
cipher usage), openssh (OL7: two
vulnerabilities), python (OL7: multiple
vulnerabilities), rest (OL7: denial of
service), rubygem-bundler and rubygem-thor
(OL7: installs malicious gem files), squid
(OL7: certificate validation bypass), unbound (OL7: denial of service), wireshark (OL7: multiple vulnerabilities), and
xfsprogs (OL7: information disclosure).

Scientific Linux has updated libreport (SL6: data leak).

SUSE has updated firefox
(SLES10SP4: multiple vulnerabilities).

AWS Security Blog: How to Use a Single IAM User to Easily Access All Your Accounts by Using the AWS CLI

This post was syndicated from: AWS Security Blog and was written by: Brian Wagner. Original post: at AWS Security Blog

Many AWS customers keep their environments separated from each other: development resources do not interact with production, and vice versa. One way to achieve this separation is by using multiple AWS accounts. Though this approach does help with resource isolation, it can increase your user management overhead because each AWS account can have its own AWS Identity and Access Management (IAM) users, groups, and roles.

All programmatic access to your AWS resources takes place via an API call, and all API calls must be signed for authentication and authorization. To sign an AWS API call, you need AWS access keys. Therefore, having multiple users across AWS accounts also can pose a challenge because more users can result in maintaining more AWS access keys. Furthermore, it’s important that you protect them. One way of reducing the number of credentials to manage is to leverage temporary AWS security credentials. You can do this by using AWS Security Token Service (STS) and IAM roles.

To use an IAM role, you have to make an API call to STS:AssumeRole, which will return a temporary access key ID, secret key, and security token that can then be used to sign future API calls. Formerly, to achieve secure cross-account, role-based access from the AWS Command Line Interface (CLI), an explicit call to STS:AssumeRole was required, and your long-term credentials were used. The resulting temporary credentials were captured and stored in your profile, and that profile was used for subsequent AWS API calls. This process had to be repeated when the temporary credentials expired (after 1 hour, by default).

Today, even though the actual chain of API calls is still necessary, the AWS CLI automates this workflow for you. With a simple setup, you can achieve secure cross-account access by simply appending a --profile option to your existing AWS CLI commands.

In this blog post, I will show how easy it is to use a single IAM user and the AWS CLI to access all your AWS accounts.


Let’s assume that you have two accounts, Dev and Prod. You want to give IAM users in the Dev account limited access to the Prod account via the AWS CLI. This keeps user management in the Dev account and long-term credentials out of the Prod account. To avoid making costly mistakes in your Prod account, the default CLI should leverage an IAM user in your Dev account. As shown in the following diagram, to achieve this you will use an IAM role, which has the access needed in your Prod account. An authenticated user in your Dev account will assume a privileged IAM role in the Prod account with an API call to STS:AssumeRole. This API call will return temporary security credentials that the Dev user’s AWS CLI will automatically use to create or modify resources in the Prod account.

Default behavior

It is important to understand how the AWS CLI handles authentication out of the box because this is what developers will be using for most of their day-to-day activity. After you have installed the AWS CLI, you will need to set up your default credentials. To set up your default CLI credentials, you should gather the AWS access key and secret key for your Dev user, and then run the aws configure command. You will be prompted for 4 inputs (replace the placeholder keys with your user’s keys).

AWS access key ID [None]: <YOUR_AWS_ACCESS_KEY>
AWS secret access key [None]: <YOUR_AWS_SECRET_KEY>
Default region name [None]: us-west-1
Default output format [None]: json

The AWS CLI organizes configuration and credentials into two separate files found in the home directory of your operating system. They are separated to isolate your credentials from the less sensitive configuration options of region and output.

# ~/.aws/config

[default]
region = us-west-1
output = json

# ~/.aws/credentials

[default]
aws_access_key_id = <YOUR_AWS_ACCESS_KEY>
aws_secret_access_key = <YOUR_AWS_SECRET_KEY>

As you can see, the CLI has created these two files and identified them with [default], which indicates that unless otherwise specified, these credentials will be used for any given CLI call.
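For illustration, the layout of these two files can be inspected with Python's standard-library configparser (this is just a sketch of the file format, not how the CLI itself parses them; the key values below are placeholders):

```python
# Sketch: AWS CLI-style config files are INI files, so configparser can
# read them. Section names and values here are illustrative placeholders.
from configparser import ConfigParser

config_text = """
[default]
region = us-west-1
output = json
"""

credentials_text = """
[default]
aws_access_key_id = AKIAEXAMPLE
aws_secret_access_key = wJalrEXAMPLEKEY
"""

config = ConfigParser()
config.read_string(config_text)

creds = ConfigParser()
creds.read_string(credentials_text)

# Unless another profile is named, the CLI falls back to [default].
print(config["default"]["region"])            # us-west-1
print(creds["default"]["aws_access_key_id"])  # AKIAEXAMPLE
```

The same INI structure is why the `aws configure set` commands shown later can target individual keys inside a named profile.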


The “magic” behind the CLI’s ability to assume a role is the use of named profiles. You can easily create profiles in your configuration and credentials file by using the aws configure set command:

aws configure set profile.example.aws_access_key_id myid
aws configure set profile.example.aws_secret_access_key mysecret
aws configure set profile.example.region us-west-1

This results in the following.

# ~/.aws/config

[default]
region = us-west-1
output = json

[profile example]
region = us-west-1

# ~/.aws/credentials

[default]
aws_access_key_id = <YOUR_AWS_ACCESS_KEY>
aws_secret_access_key = <YOUR_AWS_SECRET_KEY>

[example]
aws_access_key_id = myid
aws_secret_access_key = mysecret

Using aws configure will create the sections for you, if they do not already exist.

Credential providers

As noted in the AWS CLI documentation, the CLI uses a provider chain to look for AWS credentials in a number of different places, including system or user environment variables and local AWS configuration files.

The AWS CLI will look for credentials on each call in this order: command-line options, environment variables, the AWS credentials file, the CLI configuration file, and instance profiles. The first source in that order that yields credentials or configuration settings is used for that call.
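The first-match behaviour of the provider chain can be modeled with a few lines of Python (an illustrative sketch; the real chain is implemented inside the CLI, and the source names here simply mirror the order listed above):

```python
# Sketch of a first-match credential lookup, mirroring the CLI's
# documented precedence order. Values are illustrative placeholders.
PRECEDENCE = [
    "command_line_options",
    "environment_variables",
    "credentials_file",
    "config_file",
    "instance_profile",
]

def resolve_credentials(sources):
    """Return (source_name, creds) from the first source, in precedence
    order, that provides any credentials."""
    for name in PRECEDENCE:
        creds = sources.get(name)
        if creds is not None:
            return name, creds
    raise RuntimeError("no credentials found in any source")

# Environment variables win over the credentials file:
available = {
    "environment_variables": {"aws_access_key_id": "AKIAENV"},
    "credentials_file": {"aws_access_key_id": "AKIAFILE"},
}
winner, chosen = resolve_credentials(available)
print(winner)  # environment_variables
```

The practical consequence: a stray `AWS_ACCESS_KEY_ID` environment variable will silently shadow whatever you configured in `~/.aws/credentials`.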


Now that you understand how the AWS CLI works, you can apply that model for cross-account calls. As I said before, let’s assume that you already have two accounts called Dev and Prod. You can anticipate that most of the day-to-day development and activity happens in the Dev account, so this will be where the individual IAM user credentials are created and issued. The Prod account will be the one to which secure access is established from the privileged users in the Dev account.

After the accounts are established, perform three tasks:

  1. Create an IAM role in your Prod account.
  2. Create a user in your Dev account to assume that IAM role.
  3. Establish cross-account trust and access from the user in the Dev account to the role in the Prod account.

Task 1: Create an IAM role in your Prod account (the account that users want to sign in to)

You first will need to create an IAM role in your Prod account, using a user that has privileged access to do so. This role needs a trust policy attached, which specifies who is allowed to assume the role. (Replace the placeholder ID with your own Dev account ID.)


{
  "Version": "2012-10-17",
  "Statement": [{
    "Effect": "Allow",
    "Principal": {
      "AWS": "arn:aws:iam::Dev-account-ID:user/bjwagner"
    },
    "Action": "sts:AssumeRole"
  }]
}

With the trust policy as defined, you can create the role.

aws iam create-role \
  --role-name CrossAccountPowerUser \
  --assume-role-policy-document file://./prod_trust_policy.json \
  --profile prod

Running this in your terminal will produce some information about your role, including the Amazon Resource Name (ARN), which you should take note of before moving on to Task 2. The ARN should look like this: arn:aws:iam::Prod-account-ID:role/CrossAccountPowerUser.
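Because ARNs are colon-delimited, the account ID and role name can be pulled apart mechanically. Here is a small hypothetical Python helper, shown only to make the format concrete (it is not part of any AWS SDK):

```python
def parse_role_arn(arn):
    """Split an IAM ARN of the form
    arn:aws:iam::<account-id>:<type>/<name> into its parts."""
    parts = arn.split(":", 5)
    if len(parts) != 6 or parts[0] != "arn" or parts[2] != "iam":
        raise ValueError(f"not an IAM ARN: {arn}")
    account_id = parts[4]
    resource = parts[5]            # e.g. "role/CrossAccountPowerUser"
    kind, _, name = resource.partition("/")
    return account_id, kind, name

account, kind, name = parse_role_arn(
    "arn:aws:iam::123456789012:role/CrossAccountPowerUser")
print(account, kind, name)  # 123456789012 role CrossAccountPowerUser
```

Note the empty field between `iam` and the account ID: IAM is a global service, so the region slot in the ARN is blank.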

By default, IAM resources (such as roles) are created without any permissions, so you need to attach a policy that defines what this role can do when it is assumed. Here, attach the ReadOnlyAccess managed policy.

aws iam attach-role-policy \
  --role-name CrossAccountPowerUser \
  --policy-arn arn:aws:iam::aws:policy/ReadOnlyAccess \
  --profile prod

Task 2: Create a user in the Dev account with permission to assume the IAM role in the Prod account

Now that you have an appropriate IAM role in place, create a policy that allows its principal to assume the role in Prod. Using the ARN returned from Task 1 as the Resource, the policy looks like the following.


{
    "Version": "2012-10-17",
    "Statement": {
        "Effect": "Allow",
        "Action": "sts:AssumeRole",
        "Resource": "arn:aws:iam::Prod-account-ID:role/CrossAccountPowerUser"
    }
}

The trust policy in Task 1 only allowed the IAM user bjwagner from the Dev account to assume the role, so you will use create-policy to create an IAM managed policy that you will associate with an IAM user.

aws iam create-policy \
  --policy-name ProdAccountAccess \
  --policy-document file://./dev_assume_role_prod.json

Notice that you didn’t use the --profile option. This is because without that option, the CLI will use the default credentials that were defined with aws configure, which should be configured to use the Dev account.

Upon success, this will return some information about the newly created policy. You will need to take note of the ARN that is part of the output. If you are using JSON, your output format will look similar to the following.

{
    "Policy": {
        "PolicyName": "ProdAccountAccess",
        "CreateDate": "2015-11-10T15:01:32.772Z",
        "AttachmentCount": 0,
        "IsAttachable": true,
        "DefaultVersionId": "v1",
        "Path": "/",
        "Arn": "arn:aws:iam::Dev-account-ID:policy/ProdAccountAccess",
        "UpdateDate": "2015-11-10T15:01:32.772Z"
    }
}

Using the resulting ARN, your last step for this task is to associate the newly created policy with the IAM user in the Dev account. This is achieved with the attach-user-policy command.

aws iam attach-user-policy \
  --user-name bjwagner \
  --policy-arn arn:aws:iam::Dev-account-ID:policy/ProdAccountAccess

If nothing is returned, the operation was successful. At this point you have established the permissions needed to achieve cross-account access, and now must configure the AWS CLI to utilize it.

Task 3: Establish cross-account trust and access from the user to the role

Now set the AWS CLI to leverage these changes. To create a profile that will use the role in your Prod account, first apply it to your configuration.

aws configure set profile.prod.role_arn arn:aws:iam::Prod-account-ID:role/CrossAccountPowerUser

aws configure set profile.prod.source_profile default

The first command will create a new CLI profile called prod and will append the given role_arn to ~/.aws/config. The second command sets the source_profile, which references the default credentials profile so that you can use the same IAM user for Dev and Prod.

Your ~/.aws/config file will look like the following.

# ~/.aws/config

[default]
region = us-west-1
output = json

[profile prod]
role_arn = arn:aws:iam::Prod-account-ID:role/CrossAccountPowerUser
source_profile = default

And the ~/.aws/credentials file will remain the same.

# ~/.aws/credentials

[default]
aws_access_key_id = <YOUR_AWS_ACCESS_KEY>
aws_secret_access_key = <YOUR_AWS_SECRET_KEY>

Exercising your power

With this method, you not only keep your Prod account secure by keeping long-term credentials out of it, but you also simplify IAM administration by keeping your users in one account. So what does your workflow look like now?

Without specifying the --profile option in your AWS CLI command, you will automatically use the default profile, which is configured to interact with your Dev account using the long-term credentials that were input when calling aws configure.

aws ec2 describe-instances --region us-west-1

This should return the Amazon EC2 resources in your Dev account in the US West (N. California) region. Adding --profile prod to the same command should return a very different result set: your EC2 resources from the Prod account in US West (N. California).

aws ec2 describe-instances --region us-west-1 --profile prod

By simply appending --profile prod to your command, you have told the AWS CLI to use the named profile prod, which is configured for an IAM role. The CLI will automatically make an STS:AssumeRole call and store the resulting temporary credentials in the ~/.aws/cache file. All future calls made using the same named profile will use the cached temporary credentials until they expire. When the credentials do expire, the AWS CLI will automatically repeat the process to give you fresh credentials.
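The cache-until-expiry behaviour can be sketched in a few lines of Python (illustrative only, not the CLI's implementation; `assume_role` here is a stand-in for the real STS:AssumeRole call):

```python
# Sketch: cache temporary credentials, refreshing only after expiry.
import time

class TemporaryCredentialCache:
    def __init__(self, assume_role, clock=time.time):
        self._assume_role = assume_role   # stand-in for STS:AssumeRole
        self._clock = clock
        self._cached = None

    def get(self):
        # Refresh only when there is nothing cached or the cached
        # credentials have expired; otherwise reuse them.
        if self._cached is None or self._clock() >= self._cached["expiration"]:
            self._cached = self._assume_role()
        return self._cached

calls = []
now = [0.0]  # fake clock we can advance manually

def fake_assume_role():
    # Pretend STS returned credentials valid for one hour.
    calls.append(1)
    return {"access_key": f"ASIA{len(calls)}", "expiration": now[0] + 3600}

cache = TemporaryCredentialCache(fake_assume_role, clock=lambda: now[0])

first = cache.get()
second = cache.get()   # still valid: served from cache, no new STS call
now[0] += 4000         # move past expiry
third = cache.get()    # expired: triggers a second "AssumeRole"
print(len(calls))      # 2
```

The real CLI does the equivalent transparently, which is why repeated `--profile prod` calls within the credential lifetime do not hit STS each time.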


Not only does leveraging the AWS CLI’s ability to assume roles across accounts address some IAM best practices, it also gives you a central way to manage your user credentials and protect your production environment.

If you are interested in a similar concept that uses the AWS Management Console instead, see the IAM documentation about Switching to a Role (AWS Management Console).

If you have questions or comments, submit them below or on the IAM forum.

– Brian

Raspberry Pi: GPIO Zero: a friendly Python API for physical computing

This post was syndicated from: Raspberry Pi and was written by: Ben Nuttall. Original post: at Raspberry Pi

Physical computing is one of the most engaging classroom activities, and it’s at the heart of most projects we see in the community. From flashing lights to IoT smart homes, the Pi’s GPIO pins make programming objects in the real world accessible to everybody.

Some three years ago, Ben Croston created a Python library called RPi.GPIO, which he used as part of his beer brewing process. This allowed people to control GPIO pins from their Python programs, and became a hit both in education and in personal projects. We use it in many of our free learning resources.

However, recently I’ve been thinking of ways to make this code seem more accessible. I created some simple and obvious interfaces for a few of the components I had lying around on my desk – namely the brilliant CamJam EduKits. I added interfaces for LED, Button and Buzzer, and started to look at some more interesting components – sensors, motors and even a few simple add-on boards. I got some great help from Dave Jones, author of the excellent picamera library, who added some really clever aspects to the library. I decided to call it GPIO Zero as it shares the same philosophy as PyGame Zero, which requires minimal boilerplate code to get started.


This is how you flash an LED using GPIO Zero:

from gpiozero import LED
from time import sleep

led = LED(2)

while True:
    led.on()
    sleep(1)
    led.off()
    sleep(1)
(Also see the built-in blink method)

As well as controlling individual components in obvious ways, you can also connect multiple components together.


Here’s an example of controlling an LED with a push button:

from gpiozero import LED, Button
from signal import pause

led = LED(2)
button = Button(3)

button.when_pressed = led.on
button.when_released = led.off

pause()

We’ve thought really hard about getting the naming right, and hope people old and young will find the library intuitive once shown a few simple examples. The API has been designed with education in mind, and I’ve been demoing it to teachers for feedback – they love it! Another principle is minimal configuration: to use a button you don’t have to think about pull-ups and pull-downs – all you need is the pin number it’s connected to. Of course you can specify this, but the default assumes the common pull-up circuit. For example:

button_1 = Button(4)  # connected to GPIO pin 4, pull-up

button_2 = Button(5, pull_up=False)  # connected to GPIO pin 5, pull-down

Normally, if you want to detect the button being pressed you have to think about the edge falling if it’s pulled up, or rising if it’s pulled down. With GPIO Zero, the edge is configured when you create the Button object, so things like when_pressed, when_released, wait_for_press and wait_for_release just work as expected. While understanding edges is important in electronics, I don’t think it should be essential knowledge for anyone who wants to get started with physical computing.

Here’s a list of devices which are currently supported:

  • LED (also PWM LED allowing change of brightness)
  • Buzzer
  • Motor
  • Button
  • Motion Sensor
  • Light Sensor
  • Analogue-to-Digital converters MCP3004 and MCP3008
  • Robot

There are also collections of components like LEDBoard (for any collection of LEDs), FishDish, Traffic HAT, and generic traffic lights – and there are plenty more to come.

There’s a great feature Dave added which allows the value of output devices (like LEDs and motors) to be set to whatever the current value of an input device is, automatically, without having to poll in a loop. The following example allows the RGB values of an LED to be determined by three potentiometers for colour mixing:

from gpiozero import RGBLED, MCP3008
from signal import pause

led = RGBLED(red=2, green=3, blue=4)
red_pot = MCP3008(channel=0)
green_pot = MCP3008(channel=1)
blue_pot = MCP3008(channel=2)

led.red.source = red_pot.values
led.green.source = green_pot.values
led.blue.source = blue_pot.values

pause()
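
The idea of an output continuously following an input's stream of readings can be sketched in plain Python with generators – no Raspberry Pi hardware required (FakeLED and pot_values are hypothetical stand-ins for gpiozero's real classes, and this toy takes a fixed number of steps rather than running forever in a background thread):

```python
# Pure-Python sketch of the values/source idea: an output device consumes
# the endless stream of readings produced by an input device.
import itertools

def pot_values(readings):
    """Stand-in for an MCP3008's .values: yields readings forever."""
    yield from itertools.cycle(readings)

class FakeLED:
    """Stand-in for one PWM LED channel; just records its last value."""
    def __init__(self):
        self.value = None

    def follow(self, source, steps):
        # gpiozero wires this up continuously; here we take a few steps.
        for v in itertools.islice(source, steps):
            self.value = v

led = FakeLED()
led.follow(pot_values([0.0, 0.5, 1.0]), steps=4)  # 0.0, 0.5, 1.0, 0.0
print(led.value)  # 0.0 (the cycle wrapped around)
```

The point is that the input side is just an iterator of readings, so anything that yields numbers – a sensor, a spreadsheet, a web scrape – can drive an output device.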


Other wacky ways to set the brightness of an LED: from a Google spreadsheet – or according to the number of instances of the word “pies” on the BBC News homepage!

Alex Eames gave it a test drive and made a video of a security light project using a relay – coded in just 16 lines of code.

GPIO Zero Security Light in 16 lines of code


Yasmin Bey created a robot controlled by a Wii remote:


Version 1.0 is out now so the API will not change – but we will continue to add components and additional features. GPIO Zero is now pre-installed in the new Raspbian Jessie image available on the downloads page. It will also appear in the apt repo shortly.

Remember – since the release of Raspbian Jessie, you no longer need to run GPIO programs with sudo – so you can just run these programs directly from IDLE or the Python shell. GPIO Zero supports both Python 2 and Python 3. Python 3 is recommended!

Let me know your suggestions for additional components and interfaces in the comments below – and use the hashtag #gpiozero to share your project code and photos!

A huge thanks goes to Ben Croston, whose excellent RPi.GPIO library sits at the foundation of everything in GPIO Zero, and to Dave Jones whose contributions have made this new library quite special.

See the GPIO Zero documentation and recipes and check out the Getting Started with GPIO Zero resource – more coming soon.

The post GPIO Zero: a friendly Python API for physical computing appeared first on Raspberry Pi.

Schneier on Security: Policy Repercussions of the Paris Terrorist Attacks

This post was syndicated from: Schneier on Security and was written by: schneier. Original post: at Schneier on Security

In 2013, in the early days of the Snowden leaks, Harvard Law School professor and former Assistant Attorney General Jack Goldsmith reflected on the increase in NSA surveillance post 9/11. He wrote:

Two important lessons of the last dozen years are (1) the government will increase its powers to meet the national security threat fully (because the People demand it), and (2) the enhanced powers will be accompanied by novel systems of review and transparency that seem to those in the Executive branch to be intrusive and antagonistic to the traditional national security mission, but that in the end are key legitimating factors for the expanded authorities.

Goldsmith is right, and I think about this quote as I read news articles about surveillance policies with headlines like “Political winds shifting on surveillance after Paris attacks?”

The politics of surveillance are the politics of fear. As long as the people are afraid of terrorism — regardless of how realistic their fears are — they will demand that the government keep them safe. And if the government can convince them that it needs this or that power in order to keep the people safe, the people will willingly grant them those powers. That’s Goldsmith’s first point.

Today, in the wake of the horrific and devastating Paris terror attacks, we’re at a pivotal moment. People are scared, and already Western governments are lining up to authorize more invasive surveillance powers. The US wants to back-door encryption products in some vain hope that the bad guys are 1) naive enough to use those products for their own communications instead of more secure ones, and 2) too stupid to use the back doors against the rest of us. The UK is trying to rush the passage of legislation that legalizes a whole bunch of surveillance activities that GCHQ has already been doing to its own citizens. France just gave its police a bunch of new powers. It doesn’t matter that mass surveillance isn’t an effective anti-terrorist tool: a scared populace wants to be reassured.

And politicians want to reassure. It’s smart politics to exaggerate the threat. It’s smart politics to do something, even if that something isn’t effective at mitigating the threat. The surveillance apparatus has the ear of the politicians, and the primary tool in its box is more surveillance. There’s minimal political will to push back on those ideas, especially when people are scared.

Writing about our country’s reaction to the Paris attacks, Tom Engelhardt wrote:

…the officials of that security state have bet the farm on the preeminence of the terrorist ‘threat,’ which has, not so surprisingly, left them eerily reliant on the Islamic State and other such organizations for the perpetuation of their way of life, their career opportunities, their growing powers, and their relative freedom to infringe on basic rights, as well as for that comfortably all-embracing blanket of secrecy that envelops their activities.

Goldsmith’s second point is more subtle: when these power increases are made in public, they’re legitimized through bureaucracy. Together, the scared populace and their scared elected officials serve to make the expanded national security and law enforcement powers normal.

Terrorism is singularly designed to push our fear buttons in ways completely out of proportion to the actual threat. And as long as people are scared of terrorism, they’ll give their governments all sorts of new powers of surveillance, arrest, detention, and so on, regardless of whether those powers actually combat the threat. This means that those who want those powers need a steady stream of terrorist attacks to enact their agenda. It’s not that these people are actively rooting for the terrorists, but they know a good opportunity when they see it.

We know that the PATRIOT Act was largely written before the 9/11 terrorist attacks, and that the political climate was right for its introduction and passage. More recently:

Although “the legislative environment is very hostile today,” the intelligence community’s top lawyer, Robert S. Litt, said to colleagues in an August e-mail, which was obtained by The Post, “it could turn in the event of a terrorist attack or criminal event where strong encryption can be shown to have hindered law enforcement.”

The Paris attacks could very well be that event.

I am very worried that the Obama administration has already secretly told the NSA to increase its surveillance inside the US. And I am worried that there will be new legislation legitimizing that surveillance and granting other invasive powers to law enforcement. As Goldsmith says, these powers will be accompanied by novel systems of review and transparency. But I have no faith that those systems will be effective in limiting abuse any more than they have been over the last couple of decades.

TorrentFreak: Cox Has No DMCA Safe Harbor Protection, Judge Rules

This post was syndicated from: TorrentFreak and was written by: Ernesto. Original post: at TorrentFreak

Last year BMG Rights Management and Round Hill Music sued Cox Communications, arguing that the ISP fails to terminate the accounts of subscribers who frequently pirate content.

The companies, which control publishing rights to songs by Katy Perry, The Beatles and David Bowie among others, claim that Cox gave up its DMCA safe harbor protections due to this inaction.

The case is scheduled to go to trial before a jury next month, but an order just issued by District Court Judge Liam O’Grady already puts the Internet provider at a severe disadvantage.

In his order Judge O’Grady ruled on a motion for partial summary judgment from the music companies, which argued that Cox has not met the requirements for safe harbor protection under the DMCA.

Although Cox does have a policy to disconnect accounts of pirating subscribers, it discarded the copyright infringement notices from the plaintiffs. These notices are bundled with settlement requests, something Cox likens to harassment.

After reviewing the arguments from both sides Judge O’Grady has sided with the copyright holders, as HWR first reported.

“The court grants the motion with respect to defendant’s safe-harbor defense under the Digital Millennium Copyright Act (DMCA). There is no genuine issue of material fact as to whether defendants reasonably implemented a repeat-infringer policy as is required…,” the order (pdf) reads.

Judge O’Grady’s order

The judge has yet to publish his full opinion motivating the decision and we will follow this up as soon as it’s handed down. However, the ruling makes it clear that Cox is in a very tough spot.

DMCA safe harbor is a crucial protection for ISPs against copyright complaints. Aside from the liability Cox now faces in this case, the ruling also suggests that ISPs should disconnect subscribers based solely on accusations from copyright holders, which affects the entire industry.

Judge O’Grady, who’s also in charge of the criminal case against Megaupload and Kim Dotcom, doesn’t appear to be concerned about any collateral damage though.

Techdirt reports that he previously lashed out against the EFF and Public Knowledge, which submitted an amicus brief in support of Cox.

“I read the brief. It adds absolutely nothing helpful at all. It is a combination of describing the horrors that one endures from losing the Internet for any length of time,” O’Grady said, rejecting the brief.

“Frankly, it sounded like my son complaining when I took his electronics away when he watched YouTube videos instead of doing homework. And it’s completely hysterical.”

To be continued.


Linux How-Tos and Linux Tutorials: The tar Command Explained

This post was syndicated from: Linux How-Tos and Linux Tutorials and was written by: Falko Timme. Original post: at Linux How-Tos and Linux Tutorials

The Linux tar command is the Swiss army knife of the Linux admin when it comes to archiving or distributing files. GNU tar archives can contain multiple files and directories, file permissions can be preserved, and the tool supports multiple compression formats. The name tar stands for “Tape Archiver”, and the format is an official POSIX standard.

Read more at HowtoForge

TorrentFreak: Kim Dotcom Slams U.S. “Bullies” as Extradition Hearing Ends

This post was syndicated from: TorrentFreak and was written by: Andy. Original post: at TorrentFreak

Despite criticisms emanating from both sides, it will be difficult to argue that New Zealand has skimped on the amount of time it’s dedicated to the Megaupload case.

Hundreds of hours of legal resources have been expended since the fateful raid on Kim Dotcom’s mansion in 2012 and with 2016 just a few short weeks away, it will soon be determined whether the Internet entrepreneur will be shipped to the United States.

That decision now lies with Judge Nevin Dawson, who today listened to the closing submissions in an extradition hearing that has lasted (some say dragged on) for 10 long weeks after being scheduled for just four.

Both sides have demonstrated only one common ground – each feels they have a solid case and that the other is dead wrong.

Prosecutors acting for the United States say that Megaupload was a business built from the ground up for illegal purposes. They argue that Dotcom and fellow defendants Mathias Ortmann, Finn Batato and Bram van der Kolk knew that their users were breaching copyright and even financially rewarded those who infringed the most.

Rather than removing content as the law requires, Megaupload merely removed the links, leaving content intact so that it could live to infringe another day, the U.S. claims. Communications between some of the company’s executives only served to underline the above, with admissions that Megaupload was profiting from a library of files that was 90% infringing.

The former Megaupload operators see things very differently. From day one they have argued that their Internet business was a legitimate cloud storage service, originally set up to overcome the limitations of sending files via email.

Megaupload was not dissimilar to Dropbox, Dotcom et al argued, and had deals with copyright holders so that any content they wanted removed could be taken down in a timely fashion.

“Not only did Megaupload achieve 99.999% takedown compliance, numerous emails from major content owners thanked us for our compliance,” Dotcom reiterated today.

But despite complying with the laws of the land, Dotcom says that Megaupload was cut down in its prime by an aggressive and malicious U.S. government who from the very beginning has been doing Hollywood’s bidding. Those same authorities closed down his site, seized his funds and then denied him access to a fair trial.

“My defense team has shown how utterly unreliable, malicious and unethical the U.S. case against me is. They have exposed a dirty ugly bully,” Dotcom said this morning.

But allegations of dirty tricks aside, Dotcom and his former associates insist that the very basis of the U.S. case falls on stony ground, that the copyright infringement charges on which all of the other charges are based simply aren’t extraditable offenses. In any event, Internet service providers such as Megaupload can’t be prosecuted for their users’ crimes, they say.

Nevertheless, the reality remains. Dotcom and his former Megaupload operators face charges of copyright infringement, conspiracy, money laundering and racketeering in the United States and after putting up a colossal battle, U.S. prosecutors aren’t likely to be backing down anytime soon.

But for now the fate of the now famous quartet lies in the hands of Judge Nevin Dawson, the man who has sat patiently – sometimes less so – through ten weeks of hearings and hundreds of pages of submissions from both sides in this epic war of words.

“The 10 week extradition hearing has ended. My life is in the hands of Judge Nevin Dawson. He was the Judge who granted me bail. There’s hope!” Dotcom said after the hearing ended today.

When Judge Dawson will deliver his final decision is unclear, but when he does so it will be to an open court in the presence of men facing decades of jail time in the United States.

It will be an agonized wait for both sides but history shows us that no matter which side loses, neither will easily accept defeat. This show is definitely not over yet.
