Android 6.0 Marshmallow, thoroughly reviewed (Ars Technica)

This post was written by: ris.

Ars Technica presents a lengthy review of Android 6.0 “Marshmallow”. “While this is a review of the final build of Android 6.0, we’re going to cover many of Google’s apps along with some other bits that aren’t technically exclusive to Marshmallow. Indeed, big chunks of Android don’t actually live in the operating system anymore. Google offloads as much of Android as possible to Google Play Services and to the Play Store for easier updating and backporting to older versions, and this structure allows the company to retain control over its open source platform. As such, consider this a look at the shipping Google Android software package rather than just the base operating system. ‘Review: New Android stuff Google has released recently’ would be a more accurate title, though not as catchy.”

AWS Official Blog: AWS Week in Review – September 28, 2015

This post was syndicated from: AWS Official Blog and was written by: Jeff Barr. Original post: at AWS Official Blog

Let’s take a quick look at what happened in AWS-land last week:


September 28


September 29


September 30


October 1


October 2


October 4

New & Notable Open Source

  • Otto simplifies development and deployment.
  • SOPS uses KMS and PGP to manage encrypted files for distribution of secrets.
  • Interferon signals you when infrastructure or application issues arise.
  • reinvent-sessions-api is an API to the re:Invent session list.
  • eureka is an AWS Service registry for mid-tier load balancing and failover.
  • acli is an alternative CLI for AWS.
  • BasicConsumer is an example consumer for Kinesis.
  • wt-aws-spotter manages EC2 Spot instances using webtasks.
  • ec2-management is a CLI for controlling and scaling DCE Matterhorn clusters on AWS.
  • TrendingTopics discovers what is trending anywhere in the world using big data tools on AWS.

New YouTube Videos

New Customer Success Stories

New SlideShare Presentations

Upcoming Events

Upcoming Events at the AWS Loft (San Francisco)

Upcoming Events at the AWS Loft (New York)

  • October 6 – AWS Pop-up Loft Trivia Night (6 – 8 PM).
  • October 7 – AWS re:Invent at the Loft – Keynote Live Stream (11:30 AM – 1:30 PM).
  • October 8 – AWS re:Invent at the Loft – Keynote Live Stream (12:00 PM – 1:30 PM).
  • October 8 – AWS re:Invent at the Loft — re:Play Happy Hour! (7 – 9 PM).

Upcoming Events at the AWS Loft (Berlin) – Register Now

  • October 15 – An overview of Hadoop & Spark, using Amazon Elastic MapReduce (9 AM).
  • October 15 – Processing streams of data with Amazon Kinesis (and other tools) (10 AM).
  • October 15 – STUPS – A Cloud Infrastructure for Autonomous Teams (5 PM).
  • October 16 – Transparency and Audit on AWS (9 AM).
  • October 16 – Encryption Options on AWS (10 AM).
  • October 16 – Simple Security for Startups (6 PM).
  • October 19 – Introduction to AWS Directory Service, Amazon WorkSpaces, Amazon WorkDocs and Amazon WorkMail (9 AM).
  • October 19 – Amazon WorkSpaces: Advanced Topics and Deep Dive (10 AM).
  • October 19 – Building a global real-time discovery platform on AWS (6 PM).
  • October 20 – Scaling Your Web Applications with AWS Elastic Beanstalk (10 AM).

Upcoming Events at the AWS Loft (London) – Register Now

  • October 7 – Amazon DynamoDB (10 AM).
  • October 7 – Amazon Machine Learning (1 PM).
  • October 7 – Innovation & Amazon: Building New Customer Experiences for Mobile and Home with Amazon Technology (3 PM).
  • October 7 – IoT Lab Session (4 PM).
  • October 8 – AWS Lambda (10 AM).
  • October 8 – Amazon API Gateway (1 PM).
  • October 8 – A DevOps Way to Security (3 PM).
  • October 9 – AWS Bootcamp: Architecting Highly Available Applications on AWS (10 AM).
  • October 12 – Hands-on Labs Drop In (1 PM).
  • October 14 – Masterclass Live: Amazon EMR (10 AM).
  • October 14 – IoT on AWS (Noon).
  • October 14 – FinTech in the Cloud: How to build scalable, compliant and secure architecture with AWS (2 PM).
  • October 14 – AWS for Startups (6 PM).
  • October 15 – AWS Container Day (10 AM).
  • October 16 – HPC in the Cloud Workshop (2 – 4 PM).
  • October 19 – Hands-on Labs Drop In (1 PM).
  • October 20 – An Introduction to Using Amazon Web Services and the Alexa Skills Kit to Build Voice Driven Experiences + Open Hackathon (10 AM).
  • October 21 – Startup Showcase – B2C (10 AM).
  • October 21 – Chef Cookbook Workflow (6 PM).
  • October 22 – AWS Security Day (10 AM).
  • October 22 – Working with Planetary-Scale Open Data Sets on AWS (4 PM).
  • October 23 – AWS Bootcamp: Taking AWS Operations to the Next Level (10 AM).
  • October 26 – Hands-on Labs Drop In (1 PM).
  • October 27 – IoT Hack Day: AWS Pop-up Loft Hack Series – Sponsored by Intel (10 AM).

Help Wanted

Stay tuned for next week! In the meantime, follow me on Twitter and subscribe to the RSS feed.


TorrentFreak: RIAA Labels Want $22 Million Piracy Damages From MP3Skull

This post was syndicated from: TorrentFreak and was written by: Ernesto. Original post: at TorrentFreak

Earlier this year, a coalition of record labels including Capitol Records, Sony Music, Warner Bros. Records and Universal Music Group filed a lawsuit against MP3Skull.

With millions of visitors per month the MP3 download site had been one of the prime sources of pirated music for a long time.

Several months have passed since the RIAA members submitted their complaint, and since the owners of MP3Skull have failed to respond, the labels are now asking for a default judgment.

In their motion filed last Friday at a Florida District Court, the record labels describe MP3Skull as a notorious pirate site that promotes copyright infringement on a commercial scale.

“Defendants designed, promote, support and maintain the MP3Skull website for the well-known, express and overarching purpose of reproducing, distributing, performing and otherwise exploiting unlimited copies of Plaintiffs’ sound recordings without any authorization or license.

“By providing to the public the fruits of Plaintiffs’ investment of money, labor and expertise, MP3Skull has become one of the most notorious pirate websites in the world,” the labels add (pdf).

The labels accuse the site’s operators not only of offering a comprehensive database of links to music tracks, but also of actively promoting piracy through social media. Among other things, MP3Skull helped users to find pirated tracks after copyright holders removed links from the site.

Based on the blatant piracy carried out by operators and users, the labels argue that MP3Skull is liable for willful copyright infringement.

Listing 148 music tracks as evidence, the companies ask for the maximum $150,000 in statutory damages for each, bringing the total to more than $22 million.

“Under these egregious circumstances, Plaintiffs should be awarded statutory damages in the full amount of $150,000 for each of the 148 works identified in the Complaint, for a total of $22,200,000,” the motion reads.

In addition, the RIAA labels request a permanent injunction to prevent MP3Skull from continuing to operate the site. The proposed injunction (pdf) prevents domain name registrars and registries from providing services to MP3Skull and orders the transfer of existing domains to the copyright holders.

While a default judgment would be a big hit to the site, most of the damage has already been done. Last year MP3Skull was listed among the 500 most-visited websites on the Internet according to Alexa, but after Google downranked the site it quickly lost its traffic.


The site subsequently hopped from domain to domain and is currently operating from the .ninja TLD with only a fraction of the number of users it had before.

Given that MP3Skull failed to appear before the court, it’s likely that the District Court will approve the proposed default judgment. Whether the record labels will ever see a penny of the claimed millions is doubtful though, as the true owners of the MP3Skull site remain unknown.

Source: TorrentFreak, for the latest info on copyright, file-sharing, torrent sites and ANONYMOUS VPN services.

Linux How-Tos and Linux Tutorials: How to Convert Videos in Linux Using the Command Line

This post was syndicated from: Linux How-Tos and Linux Tutorials and was written by: Swapnil Bhartiya. Original post: at Linux How-Tos and Linux Tutorials

Linux users don’t need to transcode video files, because they have VLC and many other apps at their disposal that can play almost any media format out there. However, if you want to play videos on mobile devices such as your iPhone or iPad, or if you run streaming servers, then transcoding your videos into supported formats becomes essential.

In a previous article, I wrote about some GUI tools that can transcode videos with ease. There was a demand for CLI (command-line interface) tools for the same job. I confess that even though I run a headless file server at home, I never really bothered to do it via an SSH session. I would mount the removable drive on my desktop and convert files using GUI tools. It’s never too late to do it differently.

In this article, I will share how I transcode video in Linux using CLI tools. Just keep one point in mind: this is just one of the “many” ways you can do it in Linux. There are dozens of such tools out there, and I am covering the one that I frequently use, because it’s easy and I’ve been using it for a long time. Some of the popular tools include ffmpeg, mencoder, and my favorite, HandBrake, which is the tool I’ll use below to convert video files.

Using Handbrake to Transcode

First, you need to install the HandBrake CLI package on your system. Most major distributions, such as openSUSE, Arch, Fedora, and Ubuntu, have it in their main repositories. If not, then you can enable the necessary third-party repo and install the software.

Now that Handbrake is installed, let’s take a closer look. First, open the terminal. This is the command you will need to convert any file:

HandBrakeCLI -i PATH-OF-SOURCE-FILE -o NAME-OF-OUTPUT-FILE --preset="PRESET-NAME"

Ok, this is not the only command you can use; there are different ways of doing it. For example, you can give Handbrake detailed instructions on how it should deal with audio, video, what bitrate it should use, and what codec it should deploy, but that would become intimidating for a new user, and I like to keep things simple. So, I use the above pattern.

In the above command, “-i” stands for input and “-o” stands for output. It’s self-explanatory that you have to provide the path of the source file and the destination where you want to save the converted file. If you want to keep the transcoded file in the same folder, then just give the name of the file. Keep in mind that you do have to give the name and extension of the output file.

The “--preset” option is the reason I use HandBrake over the others. If you have used the GUI version of HandBrake (as you can see in this article), it comes with different presets so you can transcode your video for the targeted devices. If you don’t use a preset (and I am not sure why you wouldn’t), then you will have to specify every single thing as I explained above, and that, in my opinion, is overkill.
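To make “specify every single thing” concrete, here is a small sketch of an explicit invocation. The flags are real HandBrakeCLI options, but the particular values (encoder, quality level, audio bitrate) are illustrative choices of mine, not recommendations from this article, and encoder names vary between HandBrake versions:

```shell
# Illustrative only: an explicit HandBrakeCLI invocation without a preset.
#   -e video encoder    -q constant-quality level
#   -E audio encoder    -B audio bitrate in kbps
cmd='HandBrakeCLI -i input.flv -o output.mp4 -e x264 -q 20 -E faac -B 160'
echo "$cmd"    # inspect it first, then run it with: eval "$cmd"
```

As you can see, even a modest manual command is harder to remember than a preset name, which is exactly why I stick to presets.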

If you want to use the presets, it’s extremely easy to find what you need; just run this command:

[swapnil@arch ~]$ HandBrakeCLI --preset-list

This will give a long and detailed output of all the available presets (see Figure 1 above).
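Because that output is long, you can filter it for the device you care about. A small sketch (the search term is just an example; HandBrakeCLI, at least in versions of this era, prints its preset list on stderr, hence the redirect):

```shell
# Filter the preset list for a particular device name.
# HandBrakeCLI writes the list to stderr, so merge it into stdout first.
HandBrakeCLI --preset-list 2>&1 | grep -i "ipad"
```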

You will notice presets for iPhone, iPod, iPad, Apple TV, and other such devices. I use the last preset — “High Profile” — for two reasons: 1) In my experience, it offers the best quality; 2) It will work across devices — from mobile to HDTV. If you run Kodi instead of a Plex server, then I suggest this profile, because Kodi doesn’t transcode videos on the server side.

Let’s Do It

I downloaded the “flv” format of a Linux Foundation video called Distributed Genius and wanted to transcode it into .mp4 format, which is the format that plays everywhere — from Mac OS X to iPad and Kodi (Linux plays everything, so I am not worried about it).

The file was downloaded to the Downloads folder in my home directory, and I wanted to save the transcoded file in the Videos folder, so this is the command I ran:

[swapnil@arch ~]$ HandBrakeCLI -i "/home/swapnil/The Distributed Genius.flv" -o "/home/swapnil/Videos/the_distributed_genius.mp4" --preset="High Profile"

Lo and behold, Handbrake will start transcoding your video (Figure 2). Then, HandBrake will tell you once the transcoding is finished (Figure 3).
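If you have a whole folder of files to convert, the same command extends naturally to a loop. A minimal sketch under my own assumptions (the sources are .flv files in ~/Downloads; the echo makes it a dry run, so remove it to actually transcode):

```shell
# Dry run: print one HandBrakeCLI command per .flv in $src_dir,
# deriving each output .mp4 name from the source filename.
src_dir="$HOME/Downloads"
dst_dir="$HOME/Videos"
for f in "$src_dir"/*.flv; do
    [ -e "$f" ] || continue    # glob matched nothing; skip
    echo HandBrakeCLI -i "$f" \
         -o "$dst_dir/$(basename "${f%.flv}").mp4" --preset="High Profile"
done
```

The quoting around "$f" matters: as with the Distributed Genius example above, filenames with spaces will break an unquoted command.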


So, if you are planning to transcode some video, HandBrake is my easy-to-use, go-to solution. In the future, I will talk about other CLI tools for performing the same task. Let me know which tools you use in the comments below.

Raspberry Pi: World Maker Faire New York 2015

This post was syndicated from: Raspberry Pi and was written by: Matt Richardson. Original post: at Raspberry Pi

If you can believe it, it’s been four years since Raspberry Pi’s first appearance at World Maker Faire New York in Queens. These days, when we go to Maker Faire, we aren’t only introducing people to Raspberry Pi for the first time. We also want to show something new to those of you who know us well. And at this year’s event, we had a lot of shiny new gear to show off.

The Maker Faire booth crew (L-R) Roger, Matt, Eben, Philip, Russell, Rachel, and Ben.

Since the new touch display and Sense HAT just started shipping, it was only fitting that we brought along some demos. We had a Kivy-based multitouch demo to show off the new touch display. There was also a quick demo of the functions of the Sense HAT. For those who wanted to get more hands-on with Raspberry Pi and the Sense HAT, we had workstations set up for creating animations with the Sense HAT’s 8×8 RGB LED matrix. We owe a huge thanks to Raspberry Pi Certified Educator Richard Hayler for his work on RPi_8x8GridDraw, which we forked for Maker Faire.

Maker Faire attendees using Raspberry Pis to program animations onto the new Sense HAT.

We also had the Astro Pi flight hardware on display, mounted on some swanky new mounting hardware we picked up from B&H. It was neat to show attendees how you have to pack up a Pi to ship to space. We were absolutely chuffed (did I use that word correctly?) that Astro Pi won Best In Class from the editors of Make:

Philip Colligan on Twitter

We won best in class at #makerfaire – go @Raspberry_Pi

Maker Faire was also an excellent opportunity to share The MagPi in print with our fans. We have a great new subscription offer for the US, but we also wanted attendees to know that they can now find the official Raspberry Pi community magazine on the shelves at Barnes & Noble and Micro Center across the United States.

As with other Maker Faires, Raspberry Pis were spread far-and-wide throughout the event. In fact, one of the main showpieces used Raspberry Pi… 256 of them to be precise:

Watch the @Raspberry_Pi kinetic art in action, Zone 1 #WMF15 @makerfaire


Read more about Sam Blanchard and team’s SeeMore here.

Thanks to everyone who came by to see us! See you next year, New York!

The post World Maker Faire New York 2015 appeared first on Raspberry Pi.

Schneier on Security: Automatic Face Recognition and Surveillance

This post was syndicated from: Schneier on Security and was written by: schneier. Original post: at Schneier on Security

ID checks were a common response to the terrorist attacks of 9/11, but they’ll soon be obsolete. You won’t have to show your ID, because you’ll be identified automatically. A security camera will capture your face, and it’ll be matched with your name and a whole lot of other information besides. Welcome to the world of automatic facial recognition. Those who have access to databases of identified photos will have the power to identify us. Yes, it’ll enable some amazing personalized services; but it’ll also enable whole new levels of surveillance. The underlying technologies are being developed today, and there are currently no rules limiting their use.

Walk into a store, and the salesclerks will know your name. The store’s cameras and computers will have figured out your identity, and looked you up in both their store database and a commercial marketing database they’ve subscribed to. They’ll know your name, salary, interests, what sort of sales pitches you’re most vulnerable to, and how profitable a customer you are. Maybe they’ll have read a profile based on your tweets and know what sort of mood you’re in. Maybe they’ll know your political affiliation or sexual identity, both predictable by your social media activity. And they’re going to engage with you accordingly, perhaps by making sure you’re well taken care of or possibly by trying to make you so uncomfortable that you’ll leave.

Walk by a policeman, and she will know your name, address, criminal record, and with whom you routinely are seen. The potential for discrimination is enormous, especially in low-income communities where people are routinely harassed for things like unpaid parking tickets and other minor violations. And in a country where people are arrested for their political views, the use of this technology quickly turns into a nightmare scenario.

The critical technology here is computer face recognition. Traditionally it has been pretty poor, but it’s slowly improving. A computer is now as good as a person. Already Google’s algorithms can accurately match child and adult photos of the same person, and Facebook has an algorithm that works by recognizing hair style, body shape, and body language — and works even when it can’t see faces. And while we humans are pretty much as good at this as we’re ever going to get, computers will continue to improve. Over the next few years, they’ll continue to get more accurate, making better matches using even worse photos.

Matching photos with names also requires a database of identified photos, and we have plenty of those too. Driver’s license databases are a gold mine: all shot face forward, in good focus and even light, with accurate identity information attached to each photo. The enormous photo collections of social media and photo archiving sites are another. They contain photos of us from all sorts of angles and in all sorts of lighting conditions, and we helpfully do the identifying step for the companies by tagging ourselves and our friends. Maybe this data will appear on handheld screens. Maybe it’ll be automatically displayed on computer-enhanced glasses. Imagine salesclerks — or politicians — being able to scan a room and instantly see wealthy customers highlighted in green, or policemen seeing people with criminal records highlighted in red.

Science fiction writers have been exploring this future in both books and movies for decades. Ads followed people from billboard to billboard in the movie Minority Report. In John Scalzi’s recent novel Lock In, characters scan each other like the salesclerks I described above.

This is no longer fiction. High-tech billboards can target ads based on the gender of who’s standing in front of them. In 2011, researchers at Carnegie Mellon pointed a camera at a public area on campus and were able to match live video footage with a public database of tagged photos in real time. Already government and commercial authorities have set up facial recognition systems to identify and monitor people at sporting events, music festivals, and even churches. The Dubai police are working on integrating facial recognition into Google Glass, and more US local police forces are using the technology.

Facebook, Google, Twitter, and other companies with large databases of tagged photos know how valuable their archives are. They see all kinds of services powered by their technologies — services they can sell to businesses like the stores you walk into and the governments you might interact with.

Other companies will spring up whose business models depend on capturing our images in public and selling them to whoever has use for them. If you think this is farfetched, consider a related technology that’s already far down that path: license-plate capture.

Today in the US there’s a massive but invisible industry that records the movements of cars around the country. Cameras mounted on cars and tow trucks capture license plates along with date/time/location information, and companies use that data to find cars that are scheduled for repossession. One company, Vigilant Solutions, claims to collect 70 million scans in the US every month. The companies that engage in this business routinely share that data with the police, giving the police a steady stream of surveillance information on innocent people that they could not legally collect on their own. And the companies are already looking for other profit streams, selling that surveillance data to anyone else who thinks they have a need for it.

This could easily happen with face recognition. Finding bail jumpers could even be the initial driving force, just as finding cars to repossess was for license plate capture.

Already the FBI has a database of 52 million faces, and describes its integration of facial recognition software with that database as “fully operational.” In 2014, FBI Director James Comey told Congress that the database would not include photos of ordinary citizens, although the FBI’s own documents indicate otherwise. And just last month, we learned that the FBI is looking to buy a system that will collect facial images of anyone an officer stops on the street.

In 2013, Facebook had a quarter of a trillion user photos in its database. There’s currently a class-action lawsuit in Illinois alleging that the company has over a billion “face templates” of people, collected without their knowledge or consent.

Last year, the US Department of Commerce tried to prevail upon industry representatives and privacy organizations to write a voluntary code of conduct for companies using facial recognition technologies. After 16 months of negotiations, all of the consumer-focused privacy organizations pulled out of the process because industry representatives were unable to agree on any limitations on something as basic as nonconsensual facial recognition.

When we talk about surveillance, we tend to concentrate on the problems of data collection: CCTV cameras, tagged photos, purchasing habits, our writings on sites like Facebook and Twitter. We think much less about data analysis. But effective and pervasive surveillance is just as much about analysis. It’s sustained by a combination of cheap and ubiquitous cameras, tagged photo databases, commercial databases of our actions that reveal our habits and personalities, and — most of all — fast and accurate face recognition software.

Don’t expect to have access to this technology for yourself anytime soon. This is not facial recognition for all. It’s just for those who can either demand or pay for access to the required technologies — most importantly, the tagged photo databases. And while we can easily imagine how this might be misused in a totalitarian country, there are dangers in free societies as well. Without meaningful regulation, we’re moving into a world where governments and corporations will be able to identify people both in real time and backwards in time, remotely and in secret, without consent or recourse.

Despite protests from industry, we need to regulate this budding industry. We need limitations on how our images can be collected without our knowledge or consent, and on how they can be used. The technologies aren’t going away, and we can’t uninvent these capabilities. But we can ensure that they’re used ethically and responsibly, and not just as a mechanism to increase police and corporate power over us.

This essay previously appeared on

EDITED TO ADD: Two articles that say much the same thing.

TorrentFreak: Megaupload Accuses U.S. of Unfair Tactics, Seeks Stay

This post was syndicated from: TorrentFreak and was written by: Andy. Original post: at TorrentFreak

After suffering ten postponements, the extradition hearing of Kim Dotcom, Mathias Ortmann, Finn Batato and Bram van der Kolk was never likely to be a smooth, straightforward affair, even when it eventually got underway in the Auckland District Court.

Crown lawyer Christine Gordon QC, acting on behalf of the U.S., spent the first few days of the hearing painting a highly negative picture of the quartet. Incriminating correspondence, culled from their Skype accounts, suggested that there had been knowledge of infringement, she claimed.

After the U.S. finally wrapped up its case last Thursday, Judge Dawson was asked to decide when several applications filed by Dotcom’s team to stay the hearing would be heard. Would it be appropriate to deal with them before the accused took the stand to fight the extradition, or at another time?

In the event Judge Dawson decided that the stay applications – which cover the U.S. freeze on Dotcom’s funds and other ‘unreasonable’ behavior, plus allegations of abuse of process by Crown lawyers – should be heard first.

During this morning’s session, defense attorney Grant Illingworth QC argued for the request to stay the case. Illingworth told the Court that due to the ongoing U.S.-ordered freeze on his clients’ funds (and the prospect that any funds sent to the U.S. would meet the same fate), they are unable to retain experts on U.S. law.

“We say the issue is that they cannot use restrained funds to pay experts in US law, if those experts are not New Zealand citizens,” Illingworth said.

“Access to such expertise is necessary but being prevented by the US. It means not having the ability to call evidence but also the ability to get advice so counsel can present their case.”

Illingworth said the unfair tactics amounted to an abuse of process which has reduced New Zealand-based lawyers to a position of dealing with U.S. law on a “guess work” basis which could leave them open to accusations of being both negligent and incompetent.

“In any other case we would seek expert advice,” Illingworth said, but in this situation obtaining that is proving impossible. No defense means that the hearing is fundamentally unfair, he argued.

In this afternoon’s session, attention turned to claims by the United States that Bram van der Kolk uploaded a pre-release copy of the Liam Neeson movie ‘Taken’ to Megaupload and shared links with friends.

The U.S. says that nine people downloaded the thriller, but lawyers for van der Kolk insist that, whatever the case, the movie was not ‘pre-release’ since it was already available in more than two dozen countries during 2008. The movie premiered in the United States in 2009. Pre-release movie piracy is a criminal offense under U.S. law.

The extradition hearing is now in its third week and is expected to last another three, but that could change. If the applications currently being heard are successful, the Judge could order a new case or might even stay the extradition hearing altogether.

Source: TorrentFreak, for the latest info on copyright, file-sharing, torrent sites and ANONYMOUS VPN services.

TorrentFreak: The Confessions of a Camming Movie Pirate

This post was syndicated from: TorrentFreak and was written by: Andy. Original post: at TorrentFreak

Last week TF published an interview with Philip Danks, the West Midlands man handed the toughest ever UK sentence for recording a movie in a cinema and uploading it to the Internet. After receiving 33 months for his crimes, Danks’ message was one of caution.

“Simply put, prison isn’t worth the kudos you get from being the first to leak a movie, stay away from it all and be happy with your family!” he said last week.

But despite the words of warning, what Danks did is still pretty intriguing. So-called ‘movie camming’ is not only very dangerous but also shrouded in mystery. So what made this guy do what he did?

“My original motivation was to be the first to get a copy of a film worldwide, which would certainly drag thousands of new unique visitors to my website (Bit Buddy), increasing members and site activity, as at the time there was just over 200 members which is extremely small,” Danks informs TF.

“I decided on Fast 6 due to the popularity and demand for the film as it was by far the most highly anticipated and eagerly awaited film. It was hugely popular and I knew demand would be high for that film.”

After settling on a film to ‘cam’, Danks had to pick a location. In the end he chose a Showcase cinema, located not far from his Walsall home in central England.

Showcase – Walsall


“The cinema I chose was my local cinema just 4 – 5 miles away, and I chose the first available viewing with tickets available. I only decided on the day that I would do it, so it was spur and heat of the moment thing,” he explains.


“I went into town and bought a second-hand camcorder from a pawn shop for £70; however, it only had a 60-minute battery life and I had to return to get a better camcorder. The second one I chose had brilliant reviews online for battery life and quality, so I chose that one for £80.”

The digital camera had no storage medium, so to capture the whole thing in decent quality Danks had to spend a little extra on a 16GB SD card. However, other problems quickly reared their heads and would need attention if his cover wasn’t to be immediately blown.

“I realized quite quickly that the device was extremely visible in the dark, so I covered all LED lights with insulation tape,” Danks says.

Also proving troublesome was the brightness of the camera’s LCD screen. Covering that so early on would mean that focusing on the cinema screen would be a hit-and-miss affair and at worst a complete disaster. Danks decided to take the risk on the initial focusing setup and then used a small black bag to cover the LCD.

Location and position

As anyone who has watched a ‘cam’ copy will know, the positioning of the ‘cammer’ is vital to a decent recording. Too far to the left or right of the big screen and angles can creep in. Too far forward or back raises other issues, including other moviegoers casting annoying black silhouettes every time they move in front of the camera.

To avoid the latter and of course detection, Danks decided to enroll some accomplices.

“I decided that a few friends in front, to the side and to the back of me, was enough cover to keep the staff from seeing me, so I invited a few friends along, one of which was Michael Bell, my co-defendant,” Danks says.

“I got the focus right while the camcorder was in my lap, meaning I could hold it with my legs and keep it fairly steady, also providing a little more cover from staff as I looked like I just had my hands in my lap.”

For readers putting themselves in Danks’ shoes, fear of getting caught during the next two hours would probably be high on the list of emotions. However, Danks says he wasn’t really concerned about being discovered and was more interested in the end result.

“While I was actually recording my only thoughts were whether or not the quality would be good enough for a release, and if the sound would be in sync with the video. I blocked out any thoughts of getting caught and just got on with the job at hand,” he explains.

The Great Escape

Soon enough the movie was over. Danks hadn’t been caught in the act but there was still a possibility that he’d been monitored and a welcoming party was waiting for him in the cinema lobby. With that in mind, he set about mitigating the risks.

“As soon as the movie was over I concealed the camcorder down my trousers and the memory chip was hidden in a separate place – in my sock. I knew cinema staff could not perform a search without a police officer present,” he says.

But what if staff were outside ready to give him a hard time? Danks had thought about that too and already had an escape strategy in mind.

“I planned to simply run if I got caught. I know that any attempt to detain me without the authorities would be unlawful arrest and kidnap, so that was no concern,” he says.

In the event his exit from the cinema was trouble free. All that remained was to get the video off the card and onto the Internet.

Conversion and uploading

“I had an SD card reader on my laptop, so transferring the file was no issue, although encoding was. I had problems with audio synchronization and had to adjust the sound offset by around 0.8s to get it just right after conversion to AVI,” he recalls.

“Another problem I ran into was file size. Because the movie was so long the total file was around 6GB separated into 1GB chunks, so I first had to use a video joiner to combine all the chunks in the right order before I could compress and reduce the final size. In total it took over six hours to convert and upload the file for the first person apart from me to have a copy.”
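The chunk-joining step Danks describes is simply binary concatenation of the parts in the right order before re-encoding. A minimal sketch of that idea in Python (the file names are hypothetical; real cameras use their own naming schemes):

```python
# Join sequentially numbered recording chunks into one file by binary
# concatenation -- the order of the list determines the order on disk.
from pathlib import Path

def join_chunks(chunk_paths, out_path):
    """Concatenate the given chunk files, in order, into a single file."""
    with open(out_path, "wb") as out:
        for chunk in chunk_paths:
            out.write(Path(chunk).read_bytes())
```

Only after producing the single joined file does it make sense to compress it, since re-encoding each chunk separately would reintroduce seams at the joins.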

The whole point of recording Fast 6 was for Danks to be able to claim first place in the race to upload the movie to the Internet and he wanted his own site, Bit Buddy, to share in the glory.

“I decided to first upload the movie to Bit Buddy; date stamps would then prove my site was the first in the world to have a real copy. I then decided the best places for it to be picked up were KickassTorrents and The Pirate Bay, because I know dump sites scrape all three on a regular basis,” he says.

“Afterwards I simply went to sleep, but by this time it was 6am and I was back up at 10am to check on things. The amount of visitors to my site caused it to crash my home servers (three of them) because I simply wasn’t prepared for the traffic.”

So far so good

Danks says that in the aftermath he felt happy with what he’d done. The movie had been recorded, he hadn’t been caught, and his site had been placed on the map. That desire to be first had paid off with the feelings he’d expected.

“The next day I felt great. I felt like I had achieved something, something no one else could do, and that was get the tightest film of the year security-wise and plaster it over the Internet. By the time I checked at lunchtime it had around 50,000 seeders and was on every worthy site going, so I was very proud,” he recalls.

Game over

As detailed in our earlier article, the Federation Against Copyright Theft had been watching Danks’ activities online. That led to an oversized police response and his subsequent arrest.

“It was only five days later when I was arrested that it dawned on me and I realized how much trouble I was in. However at the time I just simply didn’t care, I had one up on the fatcats in Hollywood and that was all I was bothered about,” he recalls.

“It wasn’t until over a year later when I was actually in court that I realized I was facing a lengthy custodial sentence for what I had done, and that the Hollywood fatcats had actually beaten me at my own game.

“Going to prison is technically the ultimate punishment, and that’s what they did to me. So really I was the one who lost out long term, losing my home, my job, my car and my freedom. That’s not winning.”

Moving on

Danks is now out of prison and on license, which means he has to be home by 7pm and cannot leave again until 7am. He spends his free time programming and playing poker in an attempt to build up his funds to pre-prison levels. And, since there has been so much interest in his case, he’s also hoping to commit his experiences to film via his own documentary.

Unless someone already in the field is interested in working with him, in which case TF will be happy to forward their details.

Source: TorrentFreak, for the latest info on copyright, file-sharing, torrent sites and ANONYMOUS VPN services.

TorrentFreak: Thousands of “Spies” Are Watching Trackerless Torrents

This post was syndicated from: TorrentFreak and was written by: Ernesto. Original post: at TorrentFreak

The beauty of BitTorrent is that thousands of people can share a single file simultaneously to speed up downloading. In order for this to work, trackers announce the IP-addresses of all file-sharers in public.

The downside of this approach is that anyone can see who’s sharing a particular file. It’s not even required for monitoring outfits to actively participate.

This ‘vulnerability’ is used by dozens of tracking companies around the world, some of which send file-sharers warning letters, or worse. However, the “spies” are not just getting info from trackers, they also use BitTorrent’s DHT.

Through DHT, BitTorrent users share IP-addresses with other peers. Thus far, little was known about the volume of monitoring through DHT, but research from Peersm’s Aymeric Vitte shows that it’s rampant.

Through various experiments Vitte consistently ran into hundreds of thousands of IP-addresses that show clear signs of spying behavior.

The spies are not hard to find and many monitor pretty much all torrent hashes they can find. Blocking them is not straightforward though, as they frequently rotate IP-addresses and pollute swarms.

“The spies are organized to monitor automatically whatever exists in the BitTorrent network, they are easy to find but difficult to follow since they might change their IP addresses and are polluting the DHT with existing peers not related to monitoring activities,” Vitte writes.

The research further found that not all spies are actively monitoring BitTorrent transfers. Vitte makes a distinction between level 1 and level 2 spies, for example.

The first group is the largest; these spies spread the IP-addresses of random peers along with those of the more dangerous level 2 spies, effectively funneling file-sharers toward the latter group. They respond automatically, and even return peers for torrents that don’t exist.

The level 2 spies are the data collectors, some of which use quickly changing IP-addresses. They pretend to offer a certain file and wait for BitTorrent users to connect to them.
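One consequence of this behavior is a simple detection heuristic: a node that returns peers for a randomly generated infohash, one that cannot correspond to any real torrent, is behaving like a level 2 spy. An illustrative sketch (not Vitte’s actual tooling; `query_dht` is a hypothetical stand-in for a real DHT get_peers round-trip):

```python
# Probe a DHT node with random infohashes; an honest node should return
# no peers for a hash that was just invented, so consistent answers are
# a strong sign of spy-like behavior.
import os

def looks_like_spy(node, query_dht, probes=3):
    """Return True if the node answers every probe for a nonexistent torrent."""
    hits = 0
    for _ in range(probes):
        fake_infohash = os.urandom(20)  # random 160-bit "torrent" hash
        if query_dht(node, fake_infohash):  # returned peers for garbage?
            hits += 1
    return hits == probes
```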

The image below shows how rapidly the spies were discovered in one of the experiments and how quickly they rotate IP-addresses.


Interestingly, only a very few of the level 2 spies actually accept data from an alleged pirate, meaning that most can’t prove beyond doubt that pirates really shared something (e.g. they could just be checking a torrent without downloading).

According to Vitte, this could be used by accused pirates as a defense.

“That’s why people who receive settlement demands while using only DHT should challenge this, and ask precisely what proves that they downloaded a file,” he says.

After months of research and several experiments Vitte found that there are roughly 3,000 dangerous spies. These include known anti-piracy outfits such as Trident Media Guard, but also unnamed spies that use rotating third party IPs so they are harder to track.

Since many monitoring outfits constantly change their IP-addresses, static blocklists are useless. At TF we are no fans of blocklists in general, but Vitte believes that the dynamic blocklist he has developed provides decent protection, with near instant updates.
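The core idea behind a dynamic blocklist is that entries expire on their own, so spies that rotate away from an address don’t leave stale blocks behind. A toy illustration of that concept (not Torrent-Live’s actual code):

```python
# A minimal dynamic blocklist: an IP stays blocked only while it has
# been flagged recently; stale entries are expired lazily on lookup.
import time

class DynamicBlocklist:
    def __init__(self, ttl_seconds=3600):
        self.ttl = ttl_seconds
        self.entries = {}  # ip -> timestamp of last flag

    def flag(self, ip, now=None):
        self.entries[ip] = time.time() if now is None else now

    def is_blocked(self, ip, now=None):
        now = time.time() if now is None else now
        seen = self.entries.get(ip)
        if seen is None or now - seen > self.ttl:
            self.entries.pop(ip, None)  # expire a stale entry
            return False
        return True
```

A static list, by contrast, keeps blocking addresses long after the spies have moved on, while missing the addresses they rotate to.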

This (paid) blocklist is part of the open source Torrent-Live client, which has several built-in optimizations to prevent people from monitoring downloads. People can also use it to build and maintain a custom blocklist.

In his research paper Vitte further proposes several changes to the BitTorrent protocol which aim to make it harder to spy on users. He hopes other developers will pick this up to protect users from excessive monitoring.

Another option to stop the monitoring is to use an anonymous VPN service or proxy, which hides one’s actual IP-address.

Source: TorrentFreak, for the latest info on copyright, file-sharing, torrent sites and ANONYMOUS VPN services.

TorrentFreak: Anti-Piracy Activities Get VPNs Banned at Torrent Sites

This post was syndicated from: TorrentFreak and was written by: Andy. Original post: at TorrentFreak

For the privacy-conscious Internet user, VPNs and similar services are now considered must-have tools. In addition to providing much needed security, VPNs also allow users to side-step geo-blocking technology, a useful ability for today’s global web-trotter.

While VPNs are often associated with file-sharing activity, it may be of interest to learn that they are also used by groups looking to crack down on the practice. Just like file-sharers it appears that anti-piracy groups prefer to work undetected, as events during the past few days have shown.

Earlier this week while doing our usual sweep of the world’s leading torrent sites, it became evident that at least two popular portals were refusing to load. Finding no complaints that the sites were down, we were able to access them via publicly accessible proxies and as a result thought no more of it.

A day later, however, comments began to surface on Twitter that some VPN users were having problems accessing certain torrent sites. Sure enough, after we disabled our VPN the affected sites sprang into action. Shortly after, reader emails to TF revealed that other users were experiencing similar problems.

Eager to learn more, TF opened up a dialog with one of the affected sites and in return for granting complete anonymity, its operator agreed to tell us what had been happening.

“The IP range you mentioned was used for massive DMCA crawling and thus it’s been blocked,” the admin told us.

Intrigued, we asked the operator more questions. How do DMCA crawlers manifest themselves? Are they easy to spot and deal with?

“If you see 15,000 requests from the same IP address after integrity checks on the IP’s browsers for the day, you can safely assume it’s a [DMCA] bot,” the admin said.

From the above we now know that anti-piracy bots use commercial VPN services, but do they also access the sites by other means?

“They mostly use rented dedicated servers. But sometimes I’ve even caught them using Hola VPN,” our source adds. Interestingly, it appears that the anti-piracy activities were directed through the IP addresses of Hola users without them knowing.

Once spotted, the IP addresses used by the aggressive bots are banned. The site admin wouldn’t tell TF how his system works. However, he did disclose that sizable computing resources are deployed to deal with the issue and that the intelligence gathered proves extremely useful.
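The rate heuristic the admin quotes can be sketched very simply. The 15,000-requests-per-day threshold comes from his quote; everything else here is an illustration, not the site’s actual system:

```python
# Flag IPs whose daily request count crosses a crawler-like threshold.
from collections import Counter

DAILY_THRESHOLD = 15_000  # figure quoted by the site admin

def flag_crawlers(requests_by_ip, threshold=DAILY_THRESHOLD):
    """Return the set of IPs whose request count meets or exceeds the threshold."""
    return {ip for ip, count in requests_by_ip.items() if count >= threshold}
```

In practice (per the admin) the raw count is combined with other integrity checks on the client, since a busy shared VPN exit can also generate high volumes.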

Of course, just because an IP address is banned at a torrent site it doesn’t necessarily follow that a similar anti-DMCA system is being deployed. IP addresses are often excluded after being linked to users uploading spam, fakes and malware. Additionally, users can share IP addresses, particularly in the case of VPNs. Nevertheless, the banning of DMCA notice-senders is a documented phenomenon.

Earlier this month Jonathan Bailey at Plagiarism Today revealed his frustrations when attempting to get so-called “revenge porn” removed from various sites.

“Once you file your copyright or other notice of abuse, the host, rather than remove the material in question, simply blocks you, the submitter, from accessing the site,” Bailey explains.

“This is most commonly done by blocking your IP address. This means, when you come back to check and see if the site’s content is down, it appears that the content, and maybe the entire site, is offline. However, in reality, the rest of the world can view the content, it’s just you that can’t see it,” he notes.

Perhaps unsurprisingly, Bailey advises a simple way of regaining access to a site using these methods.

“I keep subscriptions with multiple VPN providers that give access to over a hundred potential IP addresses that I can use to get around such tactics,” he reveals.

The good news for both file-sharers and anti-piracy groups alike is that IP address blocks like these don’t last forever. The site we spoke with said that blocks on the VPN range we inquired about had already been removed. Still, the cat and mouse game is likely to continue.

Source: TorrentFreak, for the latest info on copyright, file-sharing, torrent sites and ANONYMOUS VPN services.

TorrentFreak: Torrent Sites Remove Millions of Links to Pirate Content

This post was syndicated from: TorrentFreak and was written by: Ernesto. Original post: at TorrentFreak

Entertainment industry groups including the RIAA and MPAA view BitTorrent sites as a major threat. The owners of most BitTorrent sites, however, believe they do nothing wrong.

While it’s common knowledge that The Pirate Bay refuses to remove any torrents, all of the other major BitTorrent sites do honor DMCA-style takedown requests.

Several copyright holders make use of these takedown services to remove infringing content, resulting in tens of thousands of takedown requests per month.

Bitsnoop is one of the prime targets. The site boasts one of the largest torrent databases on the Internet, more than 24 million files in total. This number could have been higher though, as the site has complied with 2,220,099 takedown requests over the years.

The overview below shows that most of the takedown notices received by Bitsnoop were sent by Remove Your Media. Other prominent names such as the RIAA and Microsoft also appear in the list of top senders.


As one of the largest torrent sites, KickassTorrents (KAT) is also frequently contacted by copyright holders.

The site doesn’t list as many torrents as Bitsnoop does, but with tens of thousands of takedown notices per month it receives its fair share of takedown requests.

The KAT team informs TF that they removed 26,060 torrents over the past month, and a total of 856,463 since they started counting.

Torrent sites are not the only ones targeted. Copyright holders also ask Google to indirectly remove access to infringing torrents that appear in its search results. Interestingly, Google receives more requests for Bitsnoop and KAT than the sites themselves do.

Google’s transparency report currently lists 3,902,882 Bitsnoop URLs and several million for KickassTorrents’ most recent domain names. The people at TorrentTags noticed this as well and recently published some additional insights from their own database.

Despite the proper takedown policies it’s hard for torrent sites to escape criticism. On the one hand users complain that their torrents are vanishing. On the other, copyright holders are not happy with the constant stream of newly uploaded torrents.

Not all torrent sites are happy with the takedown procedure either. ExtraTorrent doesn’t keep track of the number of takedown requests the site receives, but the operator informs TF that many contain errors or include links that point to different domains.

Still, most torrent sites feel obligated to accept takedown notices and will continue to do so in order to avoid further trouble.

Source: TorrentFreak, for the latest info on copyright, file-sharing, torrent sites and ANONYMOUS VPN services.

lcamtuf's blog: Subjective explainer: gun debate in the US

This post was syndicated from: lcamtuf's blog and was written by: Michal Zalewski. Original post: at lcamtuf's blog

In the wake of the tragic events in Roseburg, I decided to return to the topic of looking at the US culture from the perspective of a person born in Europe. In particular, I wanted to circle back to the topic of firearms.

Contrary to popular beliefs, the United States has witnessed a dramatic decline in violence over the past 20 years. In fact, when it comes to most types of violent crime – say, robbery, assault, or rape – the country now compares favorably to the UK and many other OECD nations. But as I explored in my earlier posts, one particular statistic – homicide – remains stubbornly high, registering about three times as high as in many other places within the EU.

The homicide epidemic in the United States has a complex nature and overwhelmingly affects ethnic minorities and other disadvantaged social groups; perhaps because of this, the phenomenon sees very little honest, public scrutiny. It is propelled into the limelight only in the wake of spree shootings and other sickening, seemingly random acts of terror; such incidents, although statistically insignificant, take a profound mental toll on the American society. Yet, the effects of such violence also seem strangely short-lived: they trigger a series of impassioned political speeches, invariably focusing on the connection between violence and guns – but the nation soon goes back to business as usual, knowing full well that another massacre will happen soon, perhaps the very same year.

On the face of it, this pattern defies all reason – angering my friends in Europe and upsetting many brilliant and well-educated progressives in the US. They utter frustrated remarks about the all-powerful gun lobby and the spineless politicians and are quick to blame the partisan gridlock for the failure to pass even the most reasonable and toothless gun control laws. I used to be in the same camp; today, I think the reality is more complex than that.

To get to the bottom of this mystery, it helps to look at the spirit of radical individualism and libertarianism that remains the national ethos of the United States – and in fact, is enjoying a degree of resurgence unseen for many decades prior. In Europe, it has long been settled that many individual liberties – be it the freedom of speech or the natural right to self-defense – can be constrained to advance even some fairly far-fetched communal goals. On the old continent, such sacrifices sometimes paid off, and sometimes led to atrocities; but the basic premise of European collectivism is not up for serious debate. In America, the same notion certainly cannot be taken for granted today.

When it comes to firearm ownership in particular, the country is facing a fundamental choice between two possible realities:

  • A largely disarmed society that depends on the state to protect it from almost all harm, and where citizens are generally not permitted to own guns without presenting a compelling cause. In this model, firearms would be less available to criminals – the resulting black market would be smaller, costlier, and more dangerous to approach. At the same time, the nation would arguably become more vulnerable to foreign invasion or domestic terror, should the state ever fail to provide adequate protection to all its citizens.

  • A well-armed society where firearms are available to almost all competent adults, and where the natural right to self-defense is subject to few constraints. In this model, the country would be likely more resilient in the face of calamity. At the same time, the citizens must probably accept some inherent, non-trivial increase in violent crime due to the prospect of firearms more easily falling into the wrong hands.

It seems doubtful that a viable middle-ground approach can exist in the United States. With more than 300 million civilian firearms in circulation, most of them in unknown hands, the premise of reducing crime through gun control would critically depend on some form of confiscation; without it, the supply of firearms to the criminal underground or to unfit individuals would not be disrupted in any meaningful way. Because of this, intellectual integrity requires us to look at many of the legislative proposals not only through the prism of their immediate utility, but to also consider which of the two societal models they are likely to advance in the long haul.

And herein lies the problem: many of the current “common-sense” gun control proposals have very little merit when considered in isolation. There is scant evidence that bans on military-looking semi-automatic rifles (“assault weapons”), or the prohibition on private sales at gun shows, would deliver measurable results. There is also no compelling reason to believe that ammo taxes, firearm owner liability insurance, mandatory gun store cameras, firearm-free school zones, or federal gun registration can have any impact on violent crime. And so, the debate often plays out like this:

At the same time, by virtue of making weapons more difficult, expensive, and burdensome to own, many of the legislative proposals floated by progressives would probably begin to gradually weaken the US gun culture; intentionally or not, their long-term product could be a society less interested in firearms and more willing to follow in the footsteps of Australia or the UK. Only once we cross that line and confiscate hundreds of millions of guns is it fathomable – yet still far from certain – that we would see a sharp drop in homicides.

This method of inquiry helps explain the visceral response from gun rights advocates: given the legislation’s unclear benefits and its predicted long-term consequences, many pro-gun folks are genuinely worried that any compromise would eventually mean giving up one of their cherished civil liberties – and on some level, they are right. It is fashionable to imply that there is a sinister corporate “gun lobby” that derails the political debate for its own financial gain; but the evidence of this is virtually non-existent – and it’s unlikely that gun manufacturers honestly care about being allowed to put barrel shrouds or larger magazines on the rifles they sell.

Another factor that poisons the debate is that despite being highly educated and eloquent, the progressive proponents of gun control measures are often hopelessly unfamiliar with the very devices they are trying to outlaw:

I’m reminded of the widespread contempt faced by Senator Ted Stevens following his attempt to compare the Internet to a “series of tubes” as he was arguing against net neutrality. His analogy wasn’t very wrong – it just struck a nerve as simplistic and out-of-date. My progressive friends did not react the same way when Representative Carolyn McCarthy – one of the key proponents of the ban on assault weapons – showed no understanding of the firearm features she was trying to eradicate. Such bloopers are not rare, either; not long ago, Mr. Bloomberg, one of the leading progressive voices on gun control in America, argued against semi-automatic rifles without understanding how they differ from the already-illegal machine guns:

There are countless dubious and polarizing claims made by the supporters of gun rights, too; but when introducing new legislation, the burden of making educated and thoughtful arguments should rest on its proponents, not other citizens. When folks such as Bloomberg prescribe sweeping changes to the American society while demonstrating striking ignorance about the topics they want to regulate, they come across as elitist and flippant – and deservedly so.

Given how controversial the topic is, I think it’s wise to start an open, national conversation about the European model of gun control and the risks and benefits of living in an unarmed society. But it’s also likely that such a debate wouldn’t last long. Progressive politicians like to say that the dialogue is impossible because of the undue influence of the National Rifle Association – but as I discussed in my earlier blog posts, the organization’s financial resources and power are often overstated: it does not even make it onto the list of top 100 lobbyists in Washington, and its support comes mostly from member dues, not from shadowy business interests or wealthy oligarchs. In reality, disarmament just happens to be a very unpopular policy in America today: the support for gun ownership is very strong and has been growing over the past 20 years – even though hunting is on the decline.

Perhaps it would serve the progressive movement better to embrace the gun culture – and then think of ways to curb its unwanted costs. Addressing inner-city violence, especially among the disadvantaged youth, would quickly bring the US homicide rate much closer to the rest of the highly developed world. But admitting the staggering scale of this social problem can be an uncomfortable and politically charged position to hold.

PS. If you are interested in a more systematic evaluation of the scale, the impact, and the politics of gun ownership in the United States, you may enjoy an earlier entry on this blog.

Krebs on Security: Scottrade Breach Hits 4.6 Million Customers

This post was syndicated from: Krebs on Security and was written by: BrianKrebs. Original post: at Krebs on Security

Welcome to Day 2 of Cybersecurity (Breach) Awareness Month! Today’s awareness lesson is brought to you by retail brokerage firm Scottrade Inc., which just disclosed a breach involving contact information and possibly Social Security numbers on 4.6 million customers.

In an email sent today to customers, St. Louis-based Scottrade said it recently heard from federal law enforcement officials about crimes involving the theft of information from Scottrade and other financial services companies.

“Based upon our subsequent internal investigation coupled with information provided by the authorities, we believe a list of client names and street addresses was taken from our system,” the email notice reads. “Importantly, we have no reason to believe that Scottrade’s trading platforms or any client funds were compromised. All client passwords remained encrypted at all times and we have not seen any indication of fraudulent activity as a result of this incident.”

The notice said that although Social Security numbers, email addresses and other sensitive data were contained in the system accessed, “it appears that contact information was the focus of the incident.” The company said the unauthorized access appears to have occurred over a period between late 2013 and early 2014.

Asked about the context of the notification from federal law enforcement officials, Scottrade spokesperson Shea Leordeanu said the company couldn’t comment on the incident much more than the information included in its Web site notice about the attack. But she did say that Scottrade learned about the data theft from the FBI, and that the company is working with agents from FBI field offices in Atlanta and New York. FBI officials could not be immediately reached for comment.

It may well be that the intruders were after Scottrade user data to facilitate stock scams, and that a spike in spam email for affected Scottrade customers will be the main fallout from this break-in.

In July 2015, prosecutors in Manhattan filed charges against five people — including some suspected of having played a role in the 2014 breach at JPMorgan Chase that exposed the contact information on more than 80 million consumers. The authorities in that investigation said they suspect that group sought to use email addresses stolen in the JPMorgan hacking to further stock manipulation schemes involving spam emails to pump up the price of otherwise worthless penny stocks.

Scottrade said despite the fact that it doesn’t believe Social Security numbers were stolen, the company is offering a year’s worth of free credit monitoring services to affected customers. Readers who are concerned about protecting their credit files from identity thieves should read How I Learned to Stop Worrying and Embrace the Security Freeze.

AWS Official Blog: Spot Fleet Update – Console Support, Fleet Scaling, CloudFormation

This post was syndicated from: AWS Official Blog and was written by: Jeff Barr. Original post: at AWS Official Blog

There’s a lot of buzz about Spot instances these days. Customers are really starting to understand the power that comes with the ability to name their own price for compute power!

After launching the Spot fleet API in May to allow you to manage thousands of Spot instances with a single request, we followed up with resource-oriented bidding in August and the option to distribute your fleet across multiple instance pools in September.

One quick note before I dig in: While the word “fleet” might make you think that this model is best-suited to running hundreds or thousands of instances at a time, everything that I have to say here applies regardless of the size of your fleet, whether it comprises one, two, three, or three thousand instances! As you will see in a moment, you get a console that’s flexible and easy to use, along with the ability to draw resources from multiple pools of Spot capacity, when you create and run a Spot fleet.

Today we are adding three more features to the roster: a new Spot console, the ability to change the size of a running fleet, and CloudFormation support.

New Spot Console (With Fleet Support)
In addition to CLI and API support, you can now design and launch Spot fleets using the new Spot Instance Launch Wizard. The new wizard allows you to create resource-oriented bids that are denominated in instances, vCPUs, or arbitrary units that you can specify when you design your fleet.  It also helps you to choose a bid price that is high enough (given the current state of the Spot market) to allow you to launch instances of the desired types.

I start by choosing the desired AMI (stock or custom), the capacity unit (I’ll start with instances), and the amount of capacity that I need. I can specify a fixed bid price across all of the instance types that I select, or set it to be a percentage of the On-Demand price for each type. Either way, the wizard will indicate (with the “caution” icon) any bid prices that are too low to succeed:

When I find a set of prices and instance types that satisfies my requirements, I can select them and click on Next to move forward.

I can also make resource-oriented bids using a custom capacity unit. When I do this I have even more control over the bid. First, I can specify the minimum requirements (vCPUs, memory, instance storage, and generation) for the instances that I want in my fleet:

The display will update to indicate the instance types that meet my requirements.

The second element that I can control is the amount of capacity per instance type (as I explained in an earlier post, this might be driven by the amount of throughput that a particular instance type can deliver for my application). I can control this by clicking in the Weighted Capacity column and entering the designated amount of capacity for each instance type:

As you can see from the screenshot above, I have chosen all of the instance types that offer weighted capacity at less than $0.35 / unit.

Now that I have designed my fleet, I can configure it by choosing the allocation strategy (diversified or lowest price), the VPC, security groups, availability zones / subnets, and a key pair for SSH access:

I can also click on Advanced to create requests that are valid only between certain dates and times, and to set other options:

After that I review my settings and click on Launch to move ahead:

My Spot fleet is visible in the Console. I can select it and see which instances were used to satisfy my request:

If I plan to make requests for similar fleets from time to time, I can download a JSON version of my settings:
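That downloaded JSON can also be replayed programmatically. A hedged sketch using boto3 (the file name is a placeholder, and the actual `request_spot_fleet` call is commented out since it needs real AWS credentials and a valid IAM fleet role):

```python
# Load a console-exported Spot fleet configuration so it can be reused
# from code instead of being re-entered in the wizard.
import json

def load_fleet_config(path):
    """Read a saved SpotFleetRequestConfig from a JSON file."""
    with open(path) as f:
        return json.load(f)

# import boto3
# ec2 = boto3.client("ec2")
# ec2.request_spot_fleet(
#     SpotFleetRequestConfig=load_fleet_config("spot-fleet-config.json"))
```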

Fleet Size Modification
We are also giving you the ability to modify the size of an existing fleet. The new ModifySpotFleetRequest function allows you to make an existing fleet larger or smaller by specifying a new target capacity.

When you increase the capacity of one of your existing fleets, new bids will be placed in accordance with the fleet’s allocation strategy (lowest price or diversified).

When you decrease the capacity of one of your existing fleets, you can request that excess instances be terminated based on the allocation strategy. Alternatively, you can leave the instances running, and manually terminate them using a strategy of your own.
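One way to picture a lowest-price scale-down (my own sketch, not AWS's actual selection algorithm) is that the cheapest instances are kept until the new, smaller target capacity is covered, and whatever remains becomes a candidate for termination:

```python
def instances_to_terminate(instances, new_target):
    """Pick instances to drop when shrinking a fleet, cheapest kept first.

    instances: list of (instance_id, weighted_capacity, spot_price) tuples.
    Walks the fleet in ascending price order, keeping instances until
    new_target units of capacity are covered; the rest are returned
    as termination candidates (a lowest-price-style strategy).
    """
    kept = 0.0
    terminate = []
    for inst_id, weight, price in sorted(instances, key=lambda t: t[2]):
        if kept < new_target:
            kept += weight
        else:
            terminate.append(inst_id)
    return terminate

# Shrinking a 24-unit fleet to 16 units drops the priciest instance.
fleet = [("i-a", 8, 0.10), ("i-b", 8, 0.30), ("i-c", 8, 0.20)]
print(instances_to_terminate(fleet, 16))  # ['i-b']
```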

You can also modify the size of your fleet using the Console:

CloudFormation Support
We are also adding support for the creation of Spot fleets via a CloudFormation template. Here’s a sample:

"SpotFleet": {
  "Type": "AWS::EC2::SpotFleet",
  "Properties": {
    "SpotFleetRequestConfigData": {
      "IamFleetRole": { "Ref": "IAMFleetRole" },
      "SpotPrice": "1000",
      "TargetCapacity": { "Ref": "TargetCapacity" },
      "LaunchSpecifications": [
        {
          "EbsOptimized": "false",
          "InstanceType": { "Ref": "InstanceType" },
          "ImageId": { "Fn::FindInMap": [ "AWSRegionArch2AMI", { "Ref": "AWS::Region" },
                       { "Fn::FindInMap": [ "AWSInstanceType2Arch", { "Ref": "InstanceType" }, "Arch" ] } ] },
          "WeightedCapacity": "8"
        },
        {
          "EbsOptimized": "true",
          "InstanceType": { "Ref": "InstanceType" },
          "ImageId": { "Fn::FindInMap": [ "AWSRegionArch2AMI", { "Ref": "AWS::Region" },
                       { "Fn::FindInMap": [ "AWSInstanceType2Arch", { "Ref": "InstanceType" }, "Arch" ] } ] },
          "Monitoring": { "Enabled": "true" },
          "SecurityGroups": [ { "GroupId": { "Fn::GetAtt": [ "SG0", "GroupId" ] } } ],
          "SubnetId": { "Ref": "Subnet0" },
          "IamInstanceProfile": { "Arn": { "Fn::GetAtt": [ "RootInstanceProfile", "Arn" ] } },
          "WeightedCapacity": "8"
        }
      ]
    }
  }
}

Available Now
The new Spot Fleet Console, the new ModifySpotFleetRequest function, and the CloudFormation support are available now and you can start using them today!


TorrentFreak: Comcast User Hit With 112 DMCA Notices in 48 Hours

This post was syndicated from: TorrentFreak and was written by: Andy. Original post: at TorrentFreak

Every day, DMCA-style notices are sent to regular Internet users who use BitTorrent to share copyrighted material. These notices are delivered to users’ Internet service providers who pass them on in the hope that customers correct their behavior.

The most well-known notice system in operation in the United States is the so-called “six strikes” scheme, in which the leading recording labels and movie studios send educational warning notices to presumed pirates. Not surprisingly, six-strikes refers to users receiving a maximum of six notices. However, content providers outside the scheme are not bound by its rules – sometimes to the extreme.

According to a lawsuit filed this week in the United States District Court for the Western District of Pennsylvania (pdf), one unlucky Comcast user was subjected not only to a barrage of copyright notices on an unprecedented scale, but during one of the narrowest time frames yet.

The complaint comes from Rotten Records who state that the account holder behind a single Comcast IP address used BitTorrent to share the discography of Dog Fashion Disco, a long-since defunct metal band previously known as Hug the Retard.

“Defendant distributed all of the pieces of the Infringing Files allowing others to assemble them into a playable audio file,” Rotten Records’ attorney Flynn Wirkus Young explains.

Considering Rotten Records have been working with Rightscorp on other cases this year, it will come as no surprise that the anti-piracy outfit is also involved in this one. And boy have they been busy tracking this particular user. In a single 48 hour period, Rightscorp hammered the Comcast subscriber with more than two DMCA notices every hour over a single torrent.

“Rightscorp sent Defendant 112 notices via Defendant’s ISP Comcast from June 15, 2015 to June 17, 2015 demanding that Defendant stop illegally distributing Plaintiff’s work,” the lawsuit reads.

“Defendant ignored each and every notice and continued to illegally distribute Plaintiff’s work.”


While it’s clear that the John Doe behind the IP address shouldn’t have been sharing the works in question (if he indeed was the culprit and not someone else), the suggestion to the Court that he or she systematically ignored 112 demands to stop infringing copyright is stretching the bounds of reasonable, to say the least.

In fact, Court documents state that after infringement began sometime on June 15, the latest infringement took place on June 16 at 11:49am, meaning that the defendant may well have acted on Rightscorp’s notices within 24 hours – and that’s presuming that Comcast passed them on right away, or even at all.

Either way, the attempt here is to portray the defendant as someone who had zero respect for Rotten Records’ rights, even after being warned by Rightscorp more than a hundred and ten times. Trouble is, all of those notices covered an alleged infringing period of less than 36 hours – hardly a reasonable time in which to react.

Still, it’s unlikely the Court will be particularly interested and will probably issue an order for Comcast to hand over their subscriber’s identity so he or she can be targeted by Rotten Records for a cash settlement.

Rotten has targeted Comcast users on several earlier occasions, despite being able to sue the subscribers of any service provider. Notably, while Comcast does indeed pass on Rightscorp’s DMCA takedown notices, it strips the cash settlement demand from the bottom.

One has to wonder whether Rightscorp and its client are trying to send the ISP a message with these lawsuits.

Source: TorrentFreak, for the latest info on copyright, file-sharing, torrent sites and ANONYMOUS VPN services.

Krebs on Security: Experian Breach Affects 15 Million Consumers

This post was syndicated from: Krebs on Security and was written by: BrianKrebs. Original post: at Krebs on Security

Kicking off National Cybersecurity Awareness Month with a bang, credit bureau and consumer data broker Experian North America disclosed Thursday that a breach of its computer systems exposed approximately 15 million Social Security numbers and other data on people who applied for financing from wireless provider T-Mobile USA Inc.

Experian said the compromise of an internal server exposed names, dates of birth, addresses, Social Security numbers and/or drivers’ license numbers, as well as additional information used in T-Mobile’s own credit assessment. The Costa Mesa-based data broker stressed that no payment card or banking details were stolen, and that the intruders never touched its consumer credit database.

Based on the wording of Experian’s public statement, many publications have reported that the breach lasted for two years from Sept. 1, 2013 to Sept. 16, 2015. But according to Experian spokesperson Susan Henson, the forensic investigation is ongoing, and it remains unclear at this point the exact date that the intruders broke into Experian’s server.

Henson told KrebsOnSecurity that Experian detected the breach on Sept. 15, 2015, and confirmed the theft of a single file containing the T-Mobile data on Sept. 22, 2015.

T-Mobile CEO John Legere blasted Experian in a statement posted to T-Mobile’s site. “Obviously I am incredibly angry about this data breach and we will institute a thorough review of our relationship with Experian, but right now my top concern and first focus is assisting any and all consumers affected,” Legere wrote.


Experian said it will be notifying affected consumers by snail mail, and that it will be offering affected consumers free credit monitoring through its “Protect MyID” service. Take them up on this offer if you want, but I would strongly encourage anyone affected by this breach to instead place a security freeze on their credit files at Experian and at the other big three credit bureaus, including Equifax, Trans Union and Innovis.

Experian’s offer to sign victims up for a credit monitoring service to address a breach of its own making is pretty rich. Moreover, credit monitoring services aren’t really built to prevent ID theft. The most you can hope for from a credit monitoring service is that they give you a heads up when ID theft does happen, and then help you through the often labyrinthine process of getting the credit bureaus and/or creditors to remove the fraudulent activity and to fix your credit score.

If, after ordering a free copy of your credit report, you find unauthorized activity on your credit file, by all means take advantage of the credit monitoring service, which should assist you in removing those inquiries from your credit file and restoring your credit score if it was dinged in the process.

But as I explain at length in my story How I Learned to Stop Worrying and Embrace the Security Freeze, credit monitoring services aren’t really built to stop thieves from opening new lines of credit in your name.

If you wish to block thieves from using your personal information to obtain new credit in your name, freeze your credit file with the major bureaus. For more on how to do that and for my own personal experience with placing a freeze, see this piece.

I will be taking a much closer look at Experian’s security (or lack thereof) in the coming days, and my guess is lawmakers on Capitol Hill will be following suit. This is hardly the first time lax security at Experian has exposed millions of consumer records. Earlier this year, a Vietnamese man named Hieu Minh Ngo was sentenced to 13 years in prison for running an online identity theft service that pulled consumer data directly from an Experian subsidiary. Experian is now fighting off a class-action lawsuit over the incident.

During the time that ID theft service was in operation, customers of Ngo’s service had access to more than 200 million consumer records. Experian didn’t detect Ngo’s activity until it was notified by federal investigators that Ngo was an ID thief posing as a private investigator based in the United States. The data broker failed to detect the anomalous activity even though Ngo’s monthly payments for the consumer data lookups that his hundreds of customers conducted came via wire transfers from a bank in Singapore.

Linux How-Tos and Linux Tutorials: Using G’MIC to Work Magic on Your Graphics

This post was syndicated from: Linux How-Tos and Linux Tutorials and was written by: Jack Wallen. Original post: at Linux How-Tos and Linux Tutorials

I’ve been doing graphic design for a long, long time. During that time, I’ve used one tool and only one tool… Gimp. Gimp has always offered all the power I need to create amazing graphics, from book covers to promotional images, photo retouching, and much more. But…

There’s always a but.

Even though Gimp has a rather powerful (and easy to use) set of filters, those filters tend to be very much one-trick-ponies. In other words, if you want to create a complex look on an image, you most likely will wind up using a combination of multiple filters to get the effect you want. This is great, simply because you have the filters at your command. However, sometimes knowing which filter to use for what effect can be a bit daunting.

That’s why GREYC’s Magic for Image Computing (aka G’MIC) is such a breath of fresh air. This particular plugin for Gimp has saved me time, effort, and hair pulling on a number of occasions. What G’MIC does is easily extend the capabilities of not just Gimp, but the Gimp user. G’MIC is a set of predefined filters and effects that make using Gimp exponentially easier.

The list of filters and effects available from G’MIC is beyond impressive. You’ll find things like:

  • Arrays & tiles

  • Bokeh

  • Cartoon

  • Chalk it up

  • Finger paint

  • Graphic novel

  • Hope poster

  • Lylejk’s painting

  • Make squiggly

  • Paint daub

  • Pen drawing

  • Warhol

  • Watercolor

  • Charcoal

  • Sketch

  • Stamp

  • Boost-fade

  • Luminance

  • Decompose channels

  • Hue lighten-darken

  • Metallic look

  • Water drops

  • Vintage style

  • Skeleton

  • Euclidean – polar

  • Reflection

  • Ripple

  • Wave

  • Wind

  • Noise

  • Old Movie Stripes

 And more. For an entire listing of the effects and filters available, check out the ascii chart here.

At this point, any Gimp user should be salivating at the thought of using this wonderful tool. With that said, let’s install and get to know G’MIC.


The good news is that you can find G’MIC in your distribution’s standard repositories. I’ll show you how to install using the Ubuntu Software Center.

The first thing to do, once you’ve opened up the Ubuntu Software Center, is to search for Gimp. Click on the entry for Gimp and then click the More Info button. Scroll down until you see the Optional add-ons (see Figure 1 above).

From within the optional add-ons listing, make sure to check the box for GREYC’s Magic for Image Computing and then click Apply Changes.

With the installation of G’MIC complete, you are ready to start using the tool.

I will warn you: I currently use the unstable version (2.9.1) of Gimp. Although unstable, there are features and improvements in this version that blow away the 2.8 branch. So… if you’re willing to work with a possibly unstable product (I find it stable), it’s worth the risk. To install the 2.9 branch of Gimp on a Ubuntu-based distribution, follow these steps:

  1. Open a terminal window

  2. Add the necessary repository with the command sudo add-apt-repository ppa:otto-kesselgulasch/gimp-edge

  3. Update apt with the command sudo apt-get update

  4. Install the development build of Gimp with the command sudo apt-get install gimp

The above should also upgrade G’MIC as well. If not, you might need to follow up the install with the command sudo apt-get upgrade.


Now it’s time to start using G’MIC. If you search through your desktop menu, you’ll not find G’MIC listed on its own. That is because it is integrated into Gimp itself. Open Gimp and you should see G’MIC listed in the Filters menu. If you click that entry, you’ll see G’MIC listed, but it’s grayed out. That is because G’MIC can only open when you’re actually working on an image (remember, this is a set of predefined filters that act on an image, not create an image). With that said, open up an image and then click Filters > G’MIC. A new window will open (Figure 2) showing the abundance of filters and effects available to you.

The first thing you need to know is the Input/Output section (bottom left corner). Here you can decide, first, what G’MIC is working on. For example, you can tell G’MIC to use the currently active layer for Input but to output to a brand new layer. This can sometimes be handy so you’re not changing the current working layer (you might not want to do destructive editing on something you’ve spent hours on). If you like what G’MIC did with the layer, you can then move it into place and delete (or hide) the original layer.

At this point, it’s all about scrolling through each of the included pre-built effects and filters to find what you want. Each filter/effect offers a varying degree of user-controlled options (Figure 3 illustrates the controls for the Dirty filter under Degradations).

One thing you must get used to is making sure to select the layer you want to work on before opening G’MIC. If you don’t, you’ll have to close G’MIC, select the correct layer, and re-open G’MIC. You also need to understand that some of the filters take much longer to work their magic than others. You’ll see a progress bar at the bottom of the Gimp window, indicating the filter/effect is being applied.

If you want to test G’MIC before installing it, or you want to test filters/effects before applying them to your own work, you can test it with this handy online demo version. This tool allows you to work with G’MIC on a demo image so you can not only see how well the effects/filters work, but get the hang of using G’MIC (it’s not hard).

If you’re a Gimp power user, G’MIC is, without a doubt, one of the single most important add-ons available for the flagship open source image editing tool. With G’MIC you can bring some real magic to your digital images… and do so with ease. Give it a go and see if it doesn’t take your Gimp work to the next level.

AWS Official Blog: Are You Well-Architected?

This post was syndicated from: AWS Official Blog and was written by: Jeff Barr. Original post: at AWS Official Blog

Seattle-born musical legend Jimi Hendrix started out his career with a landmark album titled Are You Experienced?

I’ve got a similar question for you: Are You Well-Architected? In other words, have you chosen a cloud architecture that is in alignment with the best practices for the use of AWS?

We want to make sure that your applications are well-architected. After working with thousands of customers, the AWS Solutions Architects have identified a set of core strategies and best practices for architecting systems in the cloud and have codified them in our new AWS Well-Architected Framework. This document contains a set of foundational questions that will allow you to measure your architecture against these best practices and to learn how to address any shortcomings.

The AWS Well-Architected Framework is based around four pillars:

  • Security – The ability to protect information systems and assets while delivering business value through risk assessments and mitigation strategies.
  • Reliability – The ability to recover from infrastructure or service failures, dynamically acquire computing resources to meet demand, and mitigate disruptions such as misconfigurations or transient network issues.
  • Performance Efficiency – The ability to use computing resources efficiently to meet system requirements, and to maintain that efficiency as demand changes and technologies evolve.
  • Cost Optimization – The ability to avoid or eliminate unneeded cost or suboptimal resources.

For each pillar, the guide puts forth a series of design principles, and then defines the pillar in detail. Then it outlines a set of best practices for the pillar and proffers a set of questions that will help you to understand where you are with respect to the best practices. The questions are open-ended. For example, there’s no simple answer to the question “How does your system withstand component failures?” or “How are you planning for recovery?”

As you work your way through the Framework, I would suggest that you capture and save the answers to each of the questions. This will give you a point-in-time reference and will allow you to look back later in order to measure your progress toward well-architected.

The AWS Well-Architected Framework is available at no charge. If you find yourself in need of additional help along your journey to the cloud, be sure to tap in to the accumulated knowledge and expertise of our team of Solutions Architects.


PS – If you are coming to AWS re:Invent, be sure to attend the Well-Architected Workshop at 1 PM on Wednesday, October 7th.

Schneier on Security: Stealing Fingerprints

This post was syndicated from: Schneier on Security and was written by: schneier. Original post: at Schneier on Security

The news from the Office of Personnel Management hack keeps getting worse. In addition to the personal records of over 20 million US government employees, we’ve now learned that the hackers stole fingerprint files for 5.6 million of them.

This is fundamentally different from the data thefts we regularly read about in the news, and should give us pause before we entrust our biometric data to large networked databases.

There are three basic kinds of data that can be stolen. The first, and most common, is authentication credentials. These are passwords and other information that allows someone else access into our accounts and — usually — our money. An example would be the 56 million credit card numbers hackers stole from Home Depot in 2014, or the 21.5 million Social Security numbers hackers stole in the OPM breach. The motivation is typically financial. The hackers want to steal money from our bank accounts, process fraudulent credit card charges in our name, or open new lines of credit or apply for tax refunds.

It’s a huge illegal business, but we know how to deal with it when it happens. We detect these hacks as quickly as possible, and update our account credentials as soon as we detect an attack. (We also need to stop treating Social Security numbers as if they were secret.)

The second kind of data stolen is personal information. Examples would be the medical data stolen and exposed when Sony was hacked in 2014, or the very personal data from the infidelity website Ashley Madison stolen and published this year. In these instances, there is no real way to recover after a breach. Once the data is public, or in the hands of an adversary, it’s impossible to make it private again.

This is the main consequence of the OPM data breach. Whoever stole the data — we suspect it was the Chinese — got copies of the security-clearance paperwork of all those government employees. This documentation includes the answers to some very personal and embarrassing questions, and now leaves these employees open to blackmail and other types of coercion.

Fingerprints are another type of data entirely. They’re used to identify people at crime scenes, but increasingly they’re used as an authentication credential. If you have an iPhone, for example, you probably use your fingerprint to unlock your phone. This type of authentication is increasingly common, replacing a password — something you know — with a biometric: something you are. The problem with biometrics is that they can’t be replaced. So while it’s easy to update your password or get a new credit card number, you can’t get a new finger.

And now, for the rest of their lives, 5.6 million US government employees need to remember that someone, somewhere, has their fingerprints. And we really don’t know the future value of this data. If, in twenty years, we routinely use our fingerprints at ATM machines, that fingerprint database will become very profitable to criminals. If fingerprints start being used on our computers to authorize our access to files and data, that database will become very profitable to spies.

Of course, it’s not that simple. Fingerprint readers employ various technologies to prevent being fooled by fake fingers: detecting temperature, pores, a heartbeat, and so on. But this is an arms race between attackers and defenders, and there are many ways to fool fingerprint readers. When Apple introduced its iPhone fingerprint reader, hackers figured out how to fool it within days, and have continued to fool each new generation of phone readers equally quickly.

Not every use of biometrics requires the biometric data to be stored in a central server somewhere. Apple’s system, for example, only stores the data locally: on your phone. That way there’s no central repository to be hacked. And many systems don’t store the biometric data at all, only a mathematical function of the data that can be used for authentication but can’t be used to reconstruct the actual biometric. Unfortunately, OPM stored copies of actual fingerprints.
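To make “a mathematical function of the data” concrete, here is a deliberately simplified Python sketch: the server stores only a salted hash of the enrolled template, so stealing the database yields nothing directly reusable. (This exact-match approach works for password-like secrets; real biometric systems need error-tolerant schemes such as fuzzy extractors, because two scans of the same finger never match bit-for-bit. The function names and sample data are mine, purely for illustration.)

```python
import hashlib
import hmac
import os

def enroll(template: bytes):
    """Store a salted hash of the template, never the template itself."""
    salt = os.urandom(16)
    digest = hashlib.sha256(salt + template).digest()
    return salt, digest  # this pair is all the server keeps

def verify(template: bytes, salt: bytes, digest: bytes) -> bool:
    """Recompute the hash from a fresh reading; compare in constant time."""
    candidate = hashlib.sha256(salt + template).digest()
    return hmac.compare_digest(candidate, digest)

salt, stored = enroll(b"example-template")
print(verify(b"example-template", salt, stored))  # True
print(verify(b"different-finger", salt, stored))  # False
```

A breach of (salt, digest) pairs forces re-enrollment at worst; a breach of raw fingerprint images, as at OPM, cannot be remedied at all.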

Ashley Madison has taught us all the dangers of entrusting our intimate secrets to a company’s computers and networks, because once that data is out there’s no getting it back. All biometric data, whether it be fingerprints, retinal scans, voiceprints, or something else, has that same property. We should be skeptical of any attempts to store this data en masse, whether by governments or by corporations. We need our biometrics for authentication, and we can’t afford to lose them to hackers.

This essay previously appeared on Motherboard.

Raspberry Pi: Astro Pi: Mission Update 6 – Payload Handover

This post was syndicated from: Raspberry Pi and was written by: David Honess. Original post: at Raspberry Pi

Those of you who regularly read our blog will know all about Astro Pi. If not then, to briefly recap, two specially augmented Raspberry Pis (called Astro Pis) are being launched to the International Space Station (ISS) as part of British ESA Astronaut Tim Peake’s mission starting in December. The launch date is December the 15th.

British ESA astronaut Tim Peake with Astro Pi – Image credit ESA

The Astro Pi competition

Last year we joined forces with the UK Space Agency, ESA and the UK Space Trade Association to run a competition that gave school-age students in the UK the chance to devise computer science experiments for Tim to run aboard the ISS.

Here is our competition video voiced by Tim Peake himself:

Astro Pi

This is “Astro Pi” by Raspberry Pi Foundation on Vimeo.

This ran from December 2014 to July 2015 and produced seven winning programs that will be run on the ISS by Tim. You can read about those in a previous blog post here. They range from fun reaction-time games to real science experiments looking at the radiation environment in space. The results will be downloaded back to Earth and made available online for all to see.

During the competition we saw kids with little or no coding experience become so motivated by the possibility of having their code run in space that they learned programming from scratch and grew proficient enough to submit an entry.

Flight safety testing and laser etching

Meanwhile we were working with ESA and a number of the UK space companies to get the Astro Pi flight hardware (below) certified for space.

An Astro Pi unit in its space-grade aluminium flight case

This was a very long process which began in September 2014 and is only now coming to an end. Read all about it in the blog entry here.

The final step in this process was to get some laser engraving done. This is to label every port and every feature that the crew can interact with. Their time is heavily scheduled up there and they use step-by-step scripts to explicitly coordinate everything from getting the Astro Pis out and setting them up, to getting data off the SD cards and packing them away again.


So this labelling (known within ESA as Ops Noms) allows the features of the flight cases to exactly match what is written in those ISS deployment scripts. There can be no doubt about anything this way.


In order to do this we asked our CAD guy, Jonathan Wells, to produce updated drawings of the flight cases showing the labels. We then took those to a company called Cut Tec up in Barnsley to do the work.

They have a machine, rather like a plotter, which laser etches according to the CAD file provided. The process actually involves melting the metal of the cases to leave a permanent, hard wearing, burn mark.

They engraved four of our ground Astro Pi units (used for training and verification purposes) followed by the two precious flight units that went through all the safety testing. Here is a video:

Private Video on Vimeo


After many months of hard work the only thing left to do was to package up the payload and ship it to ESA! This was done on Friday of last week.

Raspberry Pi on Twitter

The final flight @astro_pi payload has left the building! @gsholling @astro_timpeake @spacegovuk @esa

The payload is now with a space contractor company in Italy called ALTEC. They will be cleaning the units, applying special ISS bar codes, and packaging them into Nomex pouch bags for launch. After that the payload will be shipped to the Baikonur Cosmodrome in Kazakhstan to be loaded onto the same launch vehicle that Tim Peake will use to get into space: the Soyuz 45S.

This is not the last you’ll hear of Astro Pi!

We have a range of new Astro Pi educational resources coming up. There will be opportunities to examine the results of the winning competition experiments, and a data analysis activity where you can obtain a CSV file full of time-stamped sensor readings direct from Tim.

Tim has also said that, during the flight, he wants to use some of his free time on Saturday afternoons to do educational outreach. While we can’t confirm anything at this stage we are hopeful that some kind of interactive Astro Pi activities will take place. There could yet be more opportunities to get your code running on the ISS!

If you want to participate in this we recommend that you prepare by obtaining a Sense HAT and maybe even building a mock-up of the Astro Pi flight unit like the students of Cranmere Primary School did to test their competition entry.

Richard Hayler ☀ on Twitter

We’ve built a Lego version of the @astro_pi flight case to make sweaty-astronaut testing as realistic as possible.

It’s been about 25 years since we last had a British Astronaut (Helen Sharman in 1991) and we all feel that this is a hugely historic and aspirational moment for Great Britain. To be so intimately involved thus far has been an honour and a privilege for us. We’ve made some great friends at the UK Space Agency, ESA, CGI, Airbus Defence & Space and Surrey Satellite Technology to name a few.

We wish Tim Peake all the best for what remains of his training and for the mission ahead. Thanks for reading, and please watch this short video if you want to find out a bit more about the man himself:

Tim Peake: How to be an Astronaut – Preview – BBC Two

Programme website: An intimate portrait of the man behind the visor – British astronaut Tim Peake. Follow Tim Peake @BBCScienceClub, as he prepares for take off. #BritInSpace

The Astro Pis are staying on the ISS until 2022 when the coin cell batteries in their real time clocks reach end of life. So we sincerely hope that other crew members flying to the ISS will use them in the future.


Columbus ISS Training Module in Germany – Image credit ESA

The post Astro Pi: Mission Update 6 – Payload Handover appeared first on Raspberry Pi.

TorrentFreak: Copyright Trolls Announce UK Anti-Piracy Invasion

This post was syndicated from: TorrentFreak and was written by: Ernesto. Original post: at TorrentFreak

So-called copyright trolls were a common occurrence in the UK half a decade ago, when many Internet subscribers received settlement demands for allegedly downloading pirated files.

After one of the key players went bankrupt the focus shifted to other countries, but now they’re back. One of the best known trolling outfits has just announced the largest anti-piracy push in the UK for many years.

The renewed efforts began earlier this year when the makers of “The Company You Keep” began demanding cash from many Sky Broadband customers.

This action was spearheaded by Maverick Eye, a German outfit that tracks and monitors BitTorrent piracy data that forms the basis of these campaigns. Today, the company says that this was just the beginning.

Framed as one of the largest anti-piracy campaigns in history, Maverick Eye says it teamed up with law firm Hatton & Berkeley and other key players to launch a new wave of settlement demands.

“Since July this year, Hatton & Berkeley and Maverick Eye have been busy working with producers, lawyers, key industry figures, investors, partners, and supporters to develop a program to protect the industry and defend the UK cinema against rampant piracy online,” Maverick Eye says.

“The entertainment industry can expect even more from these experts as they continue the fight against piracy in the UK.”

The companies have yet to announce which copyright holders are involved, but Maverick Eye is already working with the makers of the movies Dallas Buyers Club, The Cobbler and Survivor in other countries.

Most recently, they supported a series of lawsuits against several Popcorn Time users in the U.S., and they also targeted BitTorrent users in Canada and Australia.

Hatton & Berkeley commonly offers administrative services and says it will provide “essential infrastructure” for the UK anti-piracy campaign.

“Hatton and Berkeley stands alongside our colleagues in an international operation that has so far yielded drastic reductions in streaming, torrenting and illegal downloads across Europe,” the company announces.

In the UK it is relatively easy for copyright holders to obtain the personal details of thousands of subscribers at once, which means that tens of thousands of people could be at risk of being targeted.


AWS Official Blog: New – Amazon Elasticsearch Service

This post was syndicated from: AWS Official Blog and was written by: Jeff Barr. Original post: at AWS Official Blog

Elasticsearch is a real-time, distributed search and analytics engine that fits nicely into a cloud environment. It is document-oriented and does not require a schema to be defined up-front. It supports structured, unstructured, and time-series queries and serves as a substrate for other applications and visualization tools including Kibana.

Today we are launching the new Amazon Elasticsearch Service (Amazon ES for short). You can launch a scalable Elasticsearch cluster from the AWS Management Console in minutes, point your client at the cluster’s endpoint, and start to load, process, analyze, and visualize data shortly thereafter.

Creating a Domain
Let’s go ahead and create an Amazon ES domain (as usual, you also can do this using the AWS Command Line Interface (CLI), AWS Tools for Windows PowerShell, or the Amazon Elasticsearch Service API). Simply click on the Get Started button on the splash page and enter a name for your domain (I chose my-es-cluster):

Select an instance type and an instance count (both can be changed later if necessary):

Here are some guidelines to help you to choose appropriate instance types:

  • T2 – Dev and test (also good for dedicated master nodes).
  • R3 – Processing loads that are read-heavy or that have complex queries (e.g. nested aggregations).
  • I2 – High-write, large-scale data storage.
  • M3 – Balanced read/write loads.

If you check Enable dedicated master, Amazon ES will create a separate master node for the cluster. This node will not hold data or respond to upload requests. We recommend that you enable this option and use at least three master nodes to ensure maximum cluster stability. Also, clusters should always have an odd number of master nodes in order to protect against split-brain scenarios.

If you check Enable zone awareness, Amazon ES will distribute the nodes across multiple Availability Zones in the region to increase availability. If you choose to do this, you will also need to set up replicas using the Elasticsearch Index API; you can also use the same API to do this when you create new indexes (learn more).

I chose to use EBS General Purpose (SSD) storage for my data nodes. I could have chosen to store the data on the instance, or to use another type of EBS volume. Using EBS allows me to store more data and to run on less costly instances; however, on-instance storage offers better write performance. Large data sets can run on I2 instances (they have up to 1.6 terabytes of SSD storage per node).

Next, set the access policy. I chose to make mine wide-open in order to simplify testing (don’t do this for your cluster); I could have used one of the IP-based or user-based templates and a wizard to create a more restrictive policy.

Finally, review the settings and click on Confirm and create:

The cluster will be created in a couple of minutes, and will be listed on the Elasticsearch Service dashboard (I added some documents before I took this screenshot):

And that’s it!
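The same domain creation can be sketched programmatically. The structure below mirrors the console choices described above; the domain name, instance types, counts, and volume size are all illustrative assumptions, not values from the post:

```python
import json

# Hypothetical domain configuration mirroring the console walk-through:
# dedicated masters enabled (an odd count of 3, per the guidance above),
# zone awareness on, and EBS General Purpose (SSD) storage.
domain_config = {
    "DomainName": "my-es-cluster",
    "ElasticsearchClusterConfig": {
        "InstanceType": "m3.medium.elasticsearch",
        "InstanceCount": 2,
        "DedicatedMasterEnabled": True,
        "DedicatedMasterType": "t2.micro.elasticsearch",
        "DedicatedMasterCount": 3,
        "ZoneAwarenessEnabled": True,
    },
    "EBSOptions": {
        "EBSEnabled": True,
        "VolumeType": "gp2",  # General Purpose (SSD)
        "VolumeSize": 20,     # GiB, illustrative
    },
}

# With boto3 installed and credentials configured, the same structure
# is passed to the CreateElasticsearchDomain API:
#   import boto3
#   boto3.client("es").create_elasticsearch_domain(**domain_config)
print(json.dumps(domain_config, indent=2))
```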

Loading Documents
I knew next to nothing about Elasticsearch before I started to write this blog post, but that didn’t stop me from trying it out. Following the steps in Having Fun: Python and Elasticsearch, Part 1, I installed the Python library for Elasticsearch, and returned to the AWS Management Console to locate the endpoint for my cluster.

I performed the status check outlined in the blog post, and everything worked as described therein. Then I pasted the Python code from the post into a file, and ran it to create some sample data. I was able to see the new index in the Console:

That was easy!
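A minimal sketch of what loading a document involves, assuming a hypothetical cluster endpoint (copy yours from the console). The equivalent call with the Python Elasticsearch library used in the referenced post is shown in a comment:

```python
import json

# Endpoint is a placeholder; use the one shown for your domain.
endpoint = "https://search-my-es-cluster-abc123.us-east-1.es.amazonaws.com"

doc = {"title": "Hello, Amazon ES", "body": "First test document"}

# Indexing a document is an HTTP PUT of the JSON body:
#   PUT {endpoint}/posts/post/1
url = "%s/posts/post/1" % endpoint
payload = json.dumps(doc)

# With the elasticsearch Python library installed:
#   from elasticsearch import Elasticsearch
#   es = Elasticsearch(endpoint)
#   es.index(index="posts", doc_type="post", id=1, body=doc)
print(url)
print(payload)
```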

Querying Documents
With the data successfully loaded, I clicked on the Kibana link for my cluster to see what else I could do:

Kibana (v4) opened in another browser tab and I configured it to index my posts:

Kibana confirmed the fields in the domain:

From there (if I had more time and actually knew what I was doing) I could visualize my data using Kibana.

Version 3 of Kibana is also available. To access it, simply append _plugin/kibana3/ to the endpoint of your cluster.

Other Goodies
You can scale your cluster using the CLI (aws es update-elasticsearch-domain-configuration), API (UpdateElasticsearchDomainConfig), or the console. You simply set the new configuration, and Amazon ES will create the new cluster and copy your data to it with no downtime.
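As a sketch, the parameters for such a scaling update look like the following; the instance count is illustrative:

```python
# Hypothetical parameters for the UpdateElasticsearchDomainConfig API;
# this grows the cluster to three data nodes.
update_params = {
    "DomainName": "my-es-cluster",
    "ElasticsearchClusterConfig": {"InstanceCount": 3},
}

# With boto3 installed and credentials configured:
#   import boto3
#   boto3.client("es").update_elasticsearch_domain_config(**update_params)
print(update_params)
```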

As part of today’s launch of Amazon ES, we are launching integration with CloudWatch Logs. You can arrange to route your CloudWatch Logs to Amazon ES by creating an Amazon ES domain, navigating to the CloudWatch Logs Console and clicking on Subscribe to Lambda / Amazon ES, then stepping through the wizard:

The wizard will help you to set up a subscription filter pattern for the incoming logs (the pattern is optional, but having one allows you to define a schema for the logs). Here are some sample Kibana dashboards that you can use to view several different types of logs, along with the filter patterns that you’ll need to use when you route the logs to Amazon ES:

  • VPC Flow Dashboard – use this filter pattern to map the log entries:
    [version, account_id, interface_id, srcaddr, dstaddr, srcport, dstport,
    protocol, packets, bytes, start, end, action, log_status]
  • Lambda Dashboard – use this filter pattern to map the log entries:
    [timestamp=*Z, request_id="*-*", event].
  • CloudTrail Dashboard – no filter pattern is needed; the log entries are in self-identifying JSON form.

Amazon ES supports the ICU Analysis Plugin and the Kuromoji plugin. You can configure these normally through the Elasticsearch Mapping API. Amazon ES does not currently support commercial plugins like Shield or Marvel. The AWS equivalents for these plugins are AWS Identity and Access Management (IAM) and CloudWatch.

Amazon ES automatically takes a snapshot of your cluster every day and stores it durably for 14 days. You can contact us to restore your cluster from a stored backup. You can set the hour of the day during which that backup occurs via the “automated snapshot hour.” You can also use the Elasticsearch Snapshot API to take a snapshot of your cluster and store it in your S3 bucket or restore an Elasticsearch snapshot (Amazon ES or self-managed) to an Amazon ES cluster from your S3 bucket.

Each Amazon ES domain also forwards 17 separate metrics to CloudWatch. You can view these metrics on the Amazon ES console’s monitoring tab or in the CloudWatch console. The cluster Status metrics (green, yellow, and red) expose the underlying cluster’s status: green means all shards are assigned to a node; yellow means that at least one replica shard is not assigned to any node; red means that at least one primary shard is not assigned to a node. One common occurrence is for a cluster to go yellow when it has a single data node and replication is set to 1 (Logstash does that by default). The simple fix is to add another node to the cluster.

CPU Utilization is most directly affected by request processing (reads or writes). When this metric is high, you should increase replication and add instances to the cluster to allow for additional parallel processing. The same applies to JVM Memory Pressure: increase the instance count or change to R3 instances. You should set CloudWatch alarms on these metrics so that you maintain 10-20% free storage and free CPU at all times.
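One way such an alarm might look, sketched with an illustrative threshold, a hypothetical domain name, and a placeholder account ID (the ClientId dimension):

```python
# Hypothetical alarm on the FreeStorageSpace metric that each Amazon ES
# domain publishes to CloudWatch under the AWS/ES namespace.
alarm = {
    "AlarmName": "my-es-cluster-low-storage",
    "Namespace": "AWS/ES",
    "MetricName": "FreeStorageSpace",
    "Dimensions": [
        {"Name": "DomainName", "Value": "my-es-cluster"},
        {"Name": "ClientId", "Value": "123456789012"},  # AWS account ID
    ],
    "Statistic": "Minimum",
    "Period": 300,
    "EvaluationPeriods": 1,
    "ComparisonOperator": "LessThanThreshold",
    "Threshold": 2048.0,  # alarm when free storage drops below ~2 GB
}

# With boto3 installed and credentials configured:
#   import boto3
#   boto3.client("cloudwatch").put_metric_alarm(**alarm)
print(alarm["AlarmName"])
```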

Available Now
You can create your own Amazon ES clusters today in the US East (Northern Virginia), US West (Northern California), US West (Oregon), Asia Pacific (Tokyo), Asia Pacific (Singapore), Asia Pacific (Sydney), South America (Brazil), Europe (Ireland), and Europe (Frankfurt) regions.

If you qualify for the AWS Free Tier, you can use a t2.micro.elasticsearch node for up to 750 hours per month, along with up to 10 gigabytes of Magnetic or SSD-backed EBS storage at no charge.


Backblaze Blog | The Life of a Cloud Backup Company: Adding Google Drive To Your Backup Plan

This post was syndicated from: Backblaze Blog | The Life of a Cloud Backup Company and was written by: Dave Greenbaum. Original post: at Backblaze Blog | The Life of a Cloud Backup Company

Backing up your Google documents seems rather redundant, doesn’t it? Google does that stuff for you, right? However, forgetting to back up your Google items is something you could deeply regret.

There are a few common reasons for this error and some easy ways to prevent it.

Risk of Accidental Deletion or Removal

Although Google keeps track of changes, they don’t keep deleted items forever. Most business and consumer accounts have 25 days to restore a deleted file. After that point, you’re out of luck. If someone is sharing a document with you, you may not be able to recover it after the other person deletes it. They ultimately maintain control of the file unless you transfer ownership.

Risk of Hackers or Lockouts

Like any online account, Google accounts run the risk of being hacked. Google will help you regain control, but you’ll temporarily be locked out. An equally likely threat is that someone you know, like a vindictive ex or a savvy child, completely locks you out of your account. Again, Google might be able to help you get your stuff back, but that takes time.

Risk of Denied Access

If your account is owned by someone else, like your employer or school, the administrator can close your account after you leave. Depending on your agreement, they may not give you an opportunity to take your data with you. If it’s a personal account, your family may need access to your stuff after you die. In that case, Google lets you transfer ownership after going through some hoops.

Manual Backup: Google Takeout
Google’s “Takeout” service gives you the ability to download everything Google is keeping for you, including your documents and pictures. Takeout is available to all personal Gmail accounts; your school or business might disable the feature, so ask your administrator. When you order Takeout, Google puts all your items in a .zip file and lets you download that archive for about a week. These files can then be part of your regular computer backup strategy. Like any manual backup, you need to remember to do it. The big advantage of Takeout is that it’s free. This isn’t the same as Google’s Drive software: that software downloads links to your Google files, but they aren’t usable without your Google account.

Google Drive and Online Backup Services

Services like Backblaze back up your computer, not your cloud. To automate backups of your Google data to another cloud service, you’ll need a different type of backup company. Companies like Spanning, Spinbackup, Backupify, and Syscloud will back up your Google Drive for around $40 per year for personal users. You can also back up your Google Drive to another free cloud service like Box or Dropbox with Zapier. Zapier’s pricing depends on how often you’d like to back up and how many services you use; if all you need is an occasional Google Drive backup, the service is free.

Windows and Mac Backup Options

I recommend this option to most people. It’s part of a tiered backup strategy. Copying your Google data to your local computer means you always have an offline copy if your Google account is inaccessible. It also lets you integrate into other backup strategies. My Google Drive is backed up to my Mac; my Mac then backs up to Backblaze and Time Machine. Yes, this means I have four different backups of my Google Drive. That’s not a bad thing.

For Mac users, I recommend CloudPull ($24.99); on Windows I’ve worked with both SyncBackPro ($54.99) and GoodSync ($29.99). All of these programs will back up multiple Google accounts. SyncBackPro and GoodSync back up not just your Google Drive but also other cloud services like Box and Dropbox. Again, once the data is on your hard drive, you can back it up once more to a cloud service like Backblaze. It may seem like overkill, but a good backup strategy has many layers.

It’s unlikely that Google will suffer a hard drive crash that causes you data loss. That doesn’t mean you shouldn’t be backing up. The risks are out there, but so are easy solutions.

The post Adding Google Drive To Your Backup Plan appeared first on Backblaze Blog | The Life of a Cloud Backup Company.

AWS Official Blog: New – AWS CloudFormation Designer + Support for More Services

This post was syndicated from: AWS Official Blog and was written by: Jeff Barr. Original post: at AWS Official Blog

AWS CloudFormation makes it easy for you to create and manage a collection of related AWS resources (which we call a stack). Starting from a template, CloudFormation creates the resources in an orderly and predictable fashion, taking into account dependencies between them and connecting them together as defined in the template.

A CloudFormation template is nothing more than a text file. Inside the file, data in JSON format defines the AWS resources: their names, properties, relationships to other resources, and so forth. While the text-based model is very powerful, it is effectively linear and, as such, does not make relationships between the resources very obvious.
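As a minimal, hypothetical illustration of this structure (the AMI ID and resource name are placeholders), a template defining a single EC2 instance looks like this:

```json
{
  "AWSTemplateFormatVersion": "2010-09-09",
  "Description": "Illustrative minimal template: one EC2 instance",
  "Resources": {
    "MyInstance": {
      "Type": "AWS::EC2::Instance",
      "Properties": {
        "InstanceType": "t2.micro",
        "ImageId": "ami-12345678"
      }
    }
  }
}
```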

Today we are launching the new AWS CloudFormation Designer. This visual tool allows you to create and modify CloudFormation templates using a drag-and-drop interface. You can easily add, modify, or remove resources and the underlying JSON will be altered accordingly. If you modify a template that is associated with a running stack, you can update the stack so that it conforms to the template.

We are also launching CloudFormation support for four additional AWS services.

Free Tour!
Let’s take a quick tour of the CloudFormation Designer. Here’s the big picture:

The design surface is center stage, with the resource menu on the left and a JSON editor at the bottom. I simply select the desired AWS resources on the left, drag them to the design surface, and create relationships between them. Here’s an EC2 instance and 3 EBS volumes:

I created the relationships between the instance and the volumes by dragging the “dot” on the lower right corner of the instance (labeled AWS::EC2::Volume) to each volume in turn.

I can select an object and then edit its properties in the JSON editor:

Here’s a slightly more complex example:

The dotted blue lines denote resource-to-resource references (this is the visual equivalent of CloudFormation’s Ref attribute). For example, the DBSecurityGroup (middle row, left) refers to the EC2 SecurityGroup (top row, left); here’s the JSON:
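As a rough sketch with hypothetical resource names and properties, a Ref-based reference of this kind appears in the Resources section of the template like so:

```json
{
  "WebServerSecurityGroup": {
    "Type": "AWS::EC2::SecurityGroup",
    "Properties": { "GroupDescription": "Web tier" }
  },
  "DBSecurityGroup": {
    "Type": "AWS::RDS::DBSecurityGroup",
    "Properties": {
      "GroupDescription": "Database tier",
      "DBSecurityGroupIngress": [
        { "EC2SecurityGroupName": { "Ref": "WebServerSecurityGroup" } }
      ]
    }
  }
}
```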

I should point out that this tool does not perform any magic!  You will still need to have a solid understanding of the AWS resources that you include in your template, including a sense of how to put them together to form a complete system. You can right-click on any resource to display a menu; from there you can click on the ? to open the CloudFormation documentation for the resource:

Clicking on the eye icon will allow you to edit the resource’s properties. Once you have completed your design you can launch a stack from within the Designer.

You can also open up the sample CloudFormation templates and examine them in the Designer:

The layout data (positions and sizes) for the AWS resources is stored within the template.

Support for Additional Services
We are also adding support for the following services today:

Visit the complete list of supported services and resources to learn more.

Available Now
The new CloudFormation Designer is available now and you can start using it today by opening the CloudFormation Console. Like CloudFormation itself, there is no charge to use the Designer; you pay only for the AWS resources that you use when you launch a stack.


AWS Security Blog: Learn About the Rest of the Security and Compliance Track Sessions Being Offered at re:Invent 2015

This post was syndicated from: AWS Security Blog and was written by: Craig Liebendorfer. Original post: at AWS Security Blog

Previously, I mentioned that the re:Invent 2015 Security & Compliance track sessions had been announced, and I also discussed the AWS Identity and Access Management (IAM) sessions that will be offered as part of the Security & Compliance track.

Today, I will highlight the remainder of the sessions that will be presented as part of the Security & Compliance track. If you are going to re:Invent 2015, you can add these sessions to your schedule now. If you won’t be attending re:Invent in person this year, keep in mind that all sessions will be available on YouTube (video) and SlideShare (slide decks) after the conference.


SEC314: Full Configuration Visibility and Control with AWS Config

With AWS Config, you can discover what is being used on AWS, and understand how resources are configured and how their configurations have changed over time—all without disrupting end-user productivity on AWS. You can use this visibility to assess continuous compliance with best practices, and integrate with IT service management, configuration management, and other ITIL tools. In this session, AWS Senior Product Manager Prashant Prahlad will discuss:

  • Mechanisms to aggregate this deep visibility to gain insights into your overall security and operational posture.
  • Ways to leverage notifications from the service to stay informed, trigger workflows, or graph your infrastructure.
  • Integrating AWS Config with ticketing and workflow tools to help you maintain compliance with internal practices or industry guidelines.
  • Aggregating this data with other configuration management tools to move toward a single source of truth solution for configuration management.

This session is best suited for administrators and developers with a focus on audit, security, and compliance.

SEC318: AWS CloudTrail Deep Dive

Ever wondered how you can find out which user made a particular API call, when the call was made, and which resources were acted upon? In this session, you will learn from AWS Senior Product Manager Sivakanth Mundru how to turn on AWS CloudTrail for hundreds of AWS accounts in all AWS regions to ensure you have full visibility into API activity in all your AWS accounts. We will demonstrate how to use CloudTrail Lookup in the AWS Management Console to troubleshoot operational and security issues and how to use the AWS CLI or SDKs to integrate your applications with CloudTrail.

We will also demonstrate how you can monitor for specific API activity by using Amazon CloudWatch and receive email notifications, when such activity occurs. Using CloudTrail Lookup and CloudWatch Alarms, you can take immediate action to quickly remediate any security or operational issues. We will also share best practices and ready-to-use scripts, and dive deep into new features that help you configure additional layers of security for CloudTrail log files.
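As a sketch of the kind of lookup the session describes, the parameters below query recent API activity for a given user; the username and the boto3 call in the comment are illustrative:

```python
# Hypothetical CloudTrail lookup: which API calls did user "alice"
# make recently? LookupAttributes filters the event stream.
lookup_params = {
    "LookupAttributes": [
        {"AttributeKey": "Username", "AttributeValue": "alice"}
    ],
    "MaxResults": 50,
}

# With boto3 installed, credentials configured, and CloudTrail enabled:
#   import boto3
#   events = boto3.client("cloudtrail").lookup_events(**lookup_params)
#   for event in events["Events"]:
#       print(event["EventName"], event["EventTime"])
print(lookup_params)
```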

SEC403: Timely Security Alerts and Analytics: Diving into AWS CloudTrail Events by Using Apache Spark on Amazon EMR

Do you want to analyze AWS CloudTrail events within minutes of them arriving in your Amazon S3 bucket? Would you like to learn how to run expressive queries over your CloudTrail logs? AWS Senior Security Engineer Will Kruse will demonstrate Apache Spark and Apache Spark Streaming as two tools to analyze recent and historical security logs for your accounts. To do so, we will use Amazon Elastic MapReduce (EMR), your logs stored in S3, and Amazon SNS to generate alerts. With these tools at your fingertips, you will be the first to know about security events that require your attention, and you will be able to quickly identify and evaluate the relevant security log entries.


SEC306: Defending Against DDoS Attacks

In this session, AWS Operations Manager Jeff Lyon and AWS Software Development Manager Andrew Kiggins will address the current threat landscape, present DDoS attacks that we have seen on AWS, and discuss the methods and technologies we use to protect AWS services. You will leave this session with a better understanding of:

  • DDoS attacks on AWS as well as the actual threats and volumes that we typically see.
  • What AWS does to protect our services from these attacks.
  • How this all relates to the AWS Shared Responsibility Model.

Incident Response

SEC308: Wrangling Security Events in the Cloud

Have you prepared your AWS environment for detecting and managing security-related events? Do you have all the incident response training and tools you need to rapidly respond to, recover from, and determine the root cause of security events in the cloud? Even if you have a team of incident response rock stars with an arsenal of automated data acquisition and computer forensics capabilities, there is likely a thing or two you will learn from several step-by-step demonstrations of wrangling various potential security events within an AWS environment, from detection to response to recovery to investigating root cause. At a minimum, show up to find out who to call and what to expect when you need assistance with applying your existing, already awesome incident response runbook to your AWS environment. Presenters are AWS Principal Security Engineer Don “Beetle” Bailey and AWS Senior Security Consultant Josh Du Lac.

SEC316: Harden Your Architecture with Security Incident Response Simulations (SIRS)

Using Security Incident Response Simulations (SIRS—also commonly called IR Game Days) regularly keeps your first responders in practice and ready to engage in real events. SIRS help you identify and close security gaps in your platform and application layers, then validate your ability to respond. In this session, AWS Senior Technical Program Manager Jonathan Miller and AWS Global Security Architect Armando Leite will share a straightforward method for conducting SIRS. Then AWS enterprise customers will take the stage to share their experience running joint SIRS with AWS on their AWS architectures. Learn about detection, containment, data preservation, security controls, and more.

Key Management

SEC301: Strategies for Protecting Data Using Encryption in AWS

Protecting sensitive data in the cloud typically requires encryption. Managing the keys used for encryption can be challenging as your sensitive data passes between services and applications. AWS offers several options for using encryption and managing keys to help simplify the protection of your data at rest. In this session, AWS Principal Product Manager Ken Beer and Adobe Systems Principal Scientist Frank Wiebe will help you understand which features are available and how to use them, with emphasis on AWS Key Management Service and AWS CloudHSM. Adobe Systems Incorporated will present their experience using AWS encryption services to solve data security needs.

SEC401: Encryption Key Storage with AWS KMS at Okta

One of the biggest challenges in writing code that manages encrypted data is developing a secure model for obtaining keys and rotating them when an administrator leaves. AWS Key Management Service (KMS) changes the equation by offering key management as a service, enabling a number of security improvements over conventional key storage methods. Okta Senior Software Architect Jon Todd will show how Okta uses the KMS API to secure a multi-region system serving thousands of customers. This talk is oriented toward developers looking to secure their applications and simplify key management.

Overall Security

SEC201: AWS Security State of the Union

Security must be at the forefront for any online business. At AWS, security is priority number one. AWS Vice President and Chief Information Security Officer Stephen Schmidt will share his insights into cloud security and how AWS meets customers’ demanding security and compliance requirements—and in many cases helps them improve their security posture. Stephen, with his background with the FBI and his work with AWS customers in the government, space exploration, research, and financial services organizations, will share an industry perspective that’s unique and invaluable for today’s IT decision makers.

SEC202: If You Build It, They Will Come: Best Practices for Securely Leveraging the Cloud

Cloud adoption is driving digital business growth and enabling companies to shift to processes and practices that make innovation continual. As with any paradigm shift, cloud computing requires different rules and a different way of thinking. This presentation will highlight best practices to build and secure scalable systems in the cloud and capitalize on the cloud with confidence and clarity.

In this session, Sumo Logic VP of Security/CISO Joan Pepin will cover:

  • Key market drivers and advantages for leveraging cloud architectures.
  • Foundational design principles to guide strategy for securely leveraging the cloud.
  • The “Defense in Depth” approach to building secure services in the cloud, whether it’s private, public, or hybrid.
  • Real-world customer insights from organizations who have successfully adopted the "Defense in Depth" approach.

Session sponsored by Sumo Logic.

SEC203: Journey to Securing Time Inc’s Move to the Cloud

Learn how Time Inc. met security requirements as they transitioned from their data centers to the AWS cloud. Colin Bodell, CTO of Time Inc., will start off this session by presenting Time’s objective to move away from on-premise and co-location data centers to AWS, and the cost savings that have been realized with this transition. Chris Nicodemo from Time Inc. and Derek Uzzle from Alert Logic will then share lessons learned in the journey to secure dozens of high-volume media websites during the migration, and how it has enhanced overall security flexibility and scalability. They will also provide a deep dive on the solutions Time has leveraged for their enterprise security best practices, and show you how they were able to execute their security strategy.

Who should attend: InfoSec and IT management. Session sponsored by Alert Logic.

SEC303: Architecting for End-to-End Security in the Enterprise

This session will tell the story of how security-minded enterprises provide end-to-end protection of their sensitive data in AWS. Learn about the enterprise security architecture decisions made by Fortune 500 organizations during actual sensitive workload deployments as told by the AWS professional service security, risk, and compliance team members who lived them. In this technical walkthrough, AWS Principal Consultant Hart Rossman and AWS Principal Security Solutions Architect Bill Shinn will share lessons learned from the development of enterprise security strategy, security use-case development, end-to-end security architecture and service composition, security configuration decisions, and the creation of AWS security operations playbooks to support the architecture.

SEC321: AWS for the Enterprise—Implementing Policy, Governance, and Security for Enterprise Workloads

CSC Director of Global Cloud Portfolio Kyle Falkenhagen will demonstrate enterprise policy, governance, and security products to deploy and manage enterprise and industry applications on AWS. CSC will demonstrate automated provisioning and management of big data platforms and industry-specific enterprise applications, with automatically provisioned secure network connectivity from the datacenter to AWS over a layer 2 routed AT&T NetBond connection (which provides AWS Direct Connect access). CSC will also demonstrate how applications blueprinted on CSC’s Agility Platform can be re-hosted on AWS in minutes or re-instantiated across multiple AWS regions, and how CSC can provide agile, consumption-based endpoint security for workloads in any cloud or virtual infrastructure, providing enterprise management and 24×7 monitoring of workload compliance, vulnerabilities, and potential threats.

Session sponsored by CSC.

SEC402: Enterprise Cloud Security via DevSecOps 2.0

Running enterprise workloads with sensitive data in AWS is hard and requires an in-depth understanding of software-defined security risks. At re:Invent 2014, Intuit and AWS presented "Enterprise Cloud Security via DevSecOps" to help the community understand how to embrace AWS features and a software-defined security model. Since then, we’ve learned quite a bit more about running sensitive workloads in AWS.

We’ve evaluated new security features, worked with vendors, and generally explored how to develop security-as-code skills. Come join Intuit DevSecOps Leader Shannon Lietz and AWS Senior Security Consultant Matt Bretan to learn about second-year lessons and see how DevSecOps is evolving. We’ve built skills in security engineering, compliance operations, security science, and security operations to secure AWS-hosted applications. We will share stories and insights about DevSecOps experiments, and show you how to crawl, walk, and then run into the world of DevSecOps.

Security Architecture

SEC205: Learn How to Hackproof Your Cloud Using Native AWS Tools

The cloud requires us to rethink much of what we do to secure our applications. The idea of physical security morphs as infrastructure becomes virtualized by AWS APIs. In a new world of ephemeral, autoscaling infrastructure, you need to adapt your security architecture to meet both compliance and security threats. And AWS provides powerful tools that enable users to confidently overcome these challenges.

In this session, CloudCheckr Founder and CTO Aaron Newman will discuss leveraging native AWS tools as he covers topics including:

  • Minimizing attack vectors and surface area.
  • Conducting perimeter assessments of your virtual private clouds (VPCs).
  • Identifying internal vs. external threats.
  • Monitoring threats.
  • Reevaluating intrusion detection, activity monitoring, and vulnerability assessment in AWS.

Session sponsored by CloudCheckr.

Enjoy re:Invent!

– Craig