Posts tagged ‘research’

Lauren Weinstein's Blog: Research Request: Seeking Facebook or Other “Real Name” Identity Policy Abuse Stories

This post was syndicated from: Lauren Weinstein's Blog and was written by: Lauren. Original post: at Lauren Weinstein's Blog

Facebook and potentially other social media “real name” identity policies, such as discussed in: continue to raise the specter of individuals being targeted and harmed as a result of these policies — not just online but physically as well — especially persons who are already vulnerable in various ways. While we know of some specific examples where those so…

Schneier on Security: SHA-1 Freestart Collision

This post was syndicated from: Schneier on Security and was written by: schneier. Original post: at Schneier on Security

There’s a new cryptanalysis result against the hash function SHA-1:

Abstract: We present in this article a freestart collision example for SHA-1, i.e., a collision for its internal compression function. This is the first practical break of the full SHA-1, reaching all 80 out of 80 steps, while only 10 days of computation on a 64 GPU cluster were necessary to perform the attack. This work builds on a continuous series of cryptanalytic advancements on SHA-1 since the theoretical collision attack breakthrough in 2005. In particular, we extend the recent freestart collision work on reduced-round SHA-1 from CRYPTO 2015 that leverages the computational power of graphic cards and adapt it to allow the use of boomerang speed-up techniques. We also leverage the cryptanalytic techniques by Stevens from EUROCRYPT 2013 to obtain optimal attack conditions, which required further refinements for this work. Freestart collisions, like the one presented here, do not directly imply a collision for SHA-1.

However, this work is an important milestone towards an actual SHA-1 collision and it further shows how graphics cards can be used very efficiently for these kind of attacks. Based on the state-of-the-art collision attack on SHA-1 by Stevens from EUROCRYPT 2013, we are able to present new projections on the computational/financial cost required by a SHA-1 collision computation. These projections are significantly lower than previously anticipated by the industry, due to the use of the more cost efficient graphics cards compared to regular CPUs. We therefore recommend the industry, in particular Internet browser vendors and Certification Authorities, to retract SHA-1 soon. We hope the industry has learned from the events surrounding the cryptanalytic breaks of MD5 and will retract SHA-1 before example signature forgeries appear in the near future. With our new cost projections in mind, we strongly and urgently recommend against a recent proposal to extend the issuance of SHA-1 certificates by a year in the CAB/forum (the vote closes on October 16 2015 after a discussion period ending on October 9).

Especially note this bit: “Freestart collisions, like the one presented here, do not directly imply a collision for SHA-1. However, this work is an important milestone towards an actual SHA-1 collision and it further shows how graphics cards can be used very efficiently for these kind of attacks.” In other words: don’t panic, but prepare for a future panic.

This is not that unexpected. We’ve long known that SHA-1 is broken, at least theoretically. All the major browsers are planning to stop accepting SHA-1 signatures by 2017. Microsoft is retiring it on that same schedule. What’s news is that our previous estimates may be too conservative.

There’s a saying inside the NSA: “Attacks always get better; they never get worse.” This is obviously true, but it’s worth explaining why. Attacks get better for three reasons. One, Moore’s Law means that computers are always getting faster, which means that any cryptanalytic attack gets faster. Two, we’re forever making tweaks in existing attacks, which make them faster. (Note above: “…due to the use of the more cost efficient graphics cards compared to regular CPUs.”) And three, we regularly invent new cryptanalytic attacks. The first of those is generally predictable, the second is somewhat predictable, and the third is not at all predictable.

Way back in 2004, I wrote: “It’s time for us all to migrate away from SHA-1.” Since then, we have developed an excellent replacement: SHA-3 has been agreed on since 2012, and just became a standard.
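
For code that still calls SHA-1, the migration is mechanical in most languages. Here is a minimal Python sketch comparing the deprecated hash against its SHA-3 replacement (assumes Python 3.6+, which exposes the SHA-3 family in hashlib):

```python
import hashlib

# Same input, old and new hash functions side by side.
msg = b"hello world"
sha1 = hashlib.sha1(msg).hexdigest()      # 160-bit digest: being retired
sha3 = hashlib.sha3_256(msg).hexdigest()  # SHA-3 (FIPS 202), standardized 2015
print(len(sha1) * 4, len(sha3) * 4)       # 160 256
```

In practice the change is usually a one-line swap of the constructor, which is part of why the authors' call to retire SHA-1 quickly is realistic.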

This new result is important right now:

Thursday’s research showing SHA1 is weaker than previously thought comes as browser developers and certificate authorities are considering a proposal that would extend the permitted issuance of the SHA1-based HTTPS certificates by 12 months, that is through the end of 2016 rather than no later than January of that year. The proposal argued that some large organizations currently find it hard to move to a more secure hashing algorithm for their digital certificates and need the additional year to make the transition.

As the paper’s authors note, approving this proposal is a bad idea.

More on the paper here.

TorrentFreak: New York Judge Puts Brakes on Copyright Troll Subpoenas

This post was syndicated from: TorrentFreak and was written by: Andy. Original post: at TorrentFreak

For the past seven or eight years alleged file-sharers in the United States have found themselves at the mercy of so-called copyright trolls and right at the very forefront are those from the adult movie industry.

By a country mile, adult video outfit Malibu Media (X-Art) is the most litigious after filing over 4,500 cases in less than 4 years, but news coming out of New York should give this notorious troll pause for thought.

Events began in June when Malibu filed suit in the Eastern District of New York against a so-called John Doe defendant known only by his Verizon IP address. The porn outfit claimed that the individual was responsible for 18 counts of copyright infringement between February and May 2015.

In early August the defendant received a letter from Verizon informing him that a subpoena had been received which required the ISP to identify the individual using the IP address on May 23, 2015. This caused the defendant to fight back.

“Since Defendant’s IP addresses were assigned dynamically by the ISP, even if Defendant was identified as the subscriber assigned the IP address at 03:31:54 on May 23, 2015, it doesn’t mean that Defendant is the same subscriber who was assigned the IP address at the other seventeen occasions,” the defendant’s motion to quash reads.

“If Defendant’s identifying information is given to Plaintiff, Plaintiff, as part of their business model, will seek settlements of thousands of dollars claiming Defendant’s responsibility for eighteen downloads of copyright protected works under the threat of litigation and public exposure with no serious intention of naming Defendant.”

Case specifics aside, the motion also contains broad allegations about Malibu Media’s entire business model, beginning with the manner in which it collects evidence on alleged infringers using BitTorrent networks.

Citing a University of Washington study which famously demonstrated a printer receiving a DMCA notice for copyright infringement, the motion concludes that the techniques employed by Malibu for tracking down infringers are simply not up to the job.

“The research concludes that the common approach for identifying infringing users in the popular BitTorrent file sharing network is not conclusive,” the motion notes.

“Even if Plaintiff could definitively trace the BitTorrent activity in question to the IP-registrant, Malibu conspicuously fails to present any evidence that John Doe either uploaded, downloaded, or even possessed a complete copyrighted video file.”

While detection is rightfully put under the spotlight, the filing places greater emphasis on the apparent extortion-like practices demonstrated by copyright trolls such as Malibu Media.

Citing the earlier words of Judge Harold Baer, the motion notes that “troll” cases not only risk the public embarrassment of a misidentified defendant, but also create the likelihood that he or she will be “coerced into an unjust settlement with the plaintiff to prevent the dissemination of publicity surrounding unfounded allegations.”

The motion continues by describing Malibu as an aggressive litigant which deliberately tries to embarrass and shame defendants with the aim of receiving cash payments.

“[Malibu] seeks quick, out-of-court settlements which, because they are hidden, raise serious questions about misuse of court procedure. Judges regularly complain about Malibu,” the motion reads.

“Malibu’s strategy and its business models are to extort, harass, and embarrass defendants to persuade defendants to pay settlements with plaintiffs instead of paying for legal assistance while attempting to keep their anonymity and defending against allegations which can greatly damage their reputations.”

Following receipt of the motion, yesterday Judge Steven I. Locke handed down his order and it represents a potentially serious setback for Malibu.

“Because the arguments advanced in the Doe Defendant’s Motion to Quash raise serious questions as to whether good cause exists in these actions to permit the expedited pre-answer discovery provided for in the Court’s September 4, 2015 Order, the relief and directives provided for in that Order are stayed pending resolution of the Doe Defendant’s Motion to Quash,” Judge Locke writes.

If putting the brakes on one discovery subpoena wasn’t enough, the Judge’s order lists 19 other cases that are now the subject of an indefinite stay. However, as highlighted by FightCopyrightTrolls, the actual exposure is much greater, with a total of 88 subpoenas in the Eastern District now placed on hold.

As a result, ISPs are now under strict orders not to hand over the real identities of their subscribers until the Court gives the instruction following a ruling by Judge Locke. In the meantime, Malibu has until October 27 to respond to the Verizon user’s motion.

Source: TorrentFreak, for the latest info on copyright, file-sharing, torrent sites and ANONYMOUS VPN services.

Backblaze Blog | The Life of a Cloud Backup Company: Data Storage Technologies of the Future

This post was syndicated from: Backblaze Blog | The Life of a Cloud Backup Company and was written by: Melanie Pinola. Original post: at Backblaze Blog | The Life of a Cloud Backup Company


If someone from the future–two decades or two centuries from now–traveled back in time to today, they’d probably chuckle at our use of hard drives and USB sticks, the way we now wonder how we ever survived with floppy disks and Zip drives. Want a peek at the kinds of storage devices we’ll be using in the future? From helium hard drives to DNA digital storage, here’s what the future of data storage technology might look like.

Inventors and researchers continue to push the envelope when it comes to capacity, performance, and the physical size of our storage media. Today, Backblaze stores 150 petabytes of customer data in its data centers, but in the future, they’ll likely be able to store an almost incomprehensible amount of data–zettabytes if not domegemegrottebytes. (Nice names, right? A petabyte is equivalent to one million gigabytes, a zettabyte equals one million petabytes, and a domegemegrottebyte equals 1,000 zettabytes.) With the human race creating and saving data at an exponential rate, this is a great thing, and the future of data storage is pretty exciting. Here are a few of the emerging storage technologies that may be signs of what’s on the horizon.

Helium Drives

Helium-filled hard drives have lately been pushing the capacity boundaries of hard drives, which are typically filled with air. Last September, Western Digital announced the world’s first 10TB hard drive, just a few weeks after Seagate announced its 8TB air-filled hard drive (the largest hard drive at the time). By using helium instead of air, helium-filled drives use less power to spin the disks (which spin more easily thanks to less resistance compared to air), they run cooler, and they can pack in more disks. This summer, Backblaze created a 360TB Storage Pod with 45 HGST 8TB drives and found them to be top performers in data load tests. At $0.068 per GB for the 8TB HGST helium drive (about $550 on Amazon; Seagate helium drives have a lower cost per GB, however), the technology is still expensive. Still, these high-performance drives will likely only get cheaper and even more expansive–perhaps affordable enough even for consumer use.
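
As a sanity check on the quoted price, the cost-per-gigabyte figure falls out of simple arithmetic (decimal, drive-maker gigabytes assumed):

```python
# Cost per GB for the 8 TB helium drive at roughly $550 street price.
price_usd = 550
capacity_gb = 8 * 1000  # 8 TB in decimal gigabytes
print(round(price_usd / capacity_gb, 3))  # 0.069, i.e. about $0.068-0.069/GB
```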

Shingled Magnetic Recording (SMR)

SMR is a new hard drive recording technology. As with helium-filled drives, SMR technology allows for higher capacity on hard drives than traditional storage methods. As Seagate explains it:

SMR achieves higher areal densities by squeezing tracks closer together. Tracks overlap one another, like shingles on a roof, allowing more data to be written to the same space. As new data is written, the drive tracks are trimmed, or shingled. Because the reader element on the drive head is smaller than the writer, all data can still be read off the trimmed track without compromise to data integrity or reliability. In addition, traditional reader and writer elements can be used for SMR. This does not require significant new production capital to be used in a product, and will enable SMR-enabled HDDs to help keep costs low.

In 2014, Seagate introduced the first SMR hard drive, which improved hard drive density by 25%. At $260 for 8TB (about three cents per GB), it’s a cost-effective drive for backups and archiving–though not necessarily for performance, since the drive only has a 5,900 rpm spindle speed.

DNA Storage
Perhaps the strangest new storage technology of the future is DNA. Yes, the molecule that stores biological information could be used to store other kinds of data. Harvard researchers in 2012 were able to encode DNA with digital information, including a 53,400-word book in HTML, eleven JPEG images, and one JavaScript program. DNA offers incredible storage density, 2.2 petabytes per gram, which means that a DNA hard drive about the size of a teaspoon could fit all of the world’s data on it–every song ever composed, book ever written, video ever shared. Besides the space savings, DNA is ideal for long-term storage: While you’re lucky if your hard drive lasts four years and optical disks are susceptible to heat and humidity, lead Harvard researcher George Church says “You can drop DNA wherever you want, in the desert or your backyard, and it will be there 400,000 years later.”

DNA takes a long time to read and write to and, as you might imagine, the technology is still too expensive to be usable now. According to New Scientist, in one recent study the cost to encode 83 kilobytes was £1000 (about $1,500 US dollars). Still, scientists are encoding information into artificial DNA and adding it to bacteria. It’s like a sci-fi novel that’s currently being written and lived. DNA could be the ultimate eternal drive one day.
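
Taking the quoted figure at face value (and assuming decimal units), the encoding cost scales up dramatically:

```python
# £1000 to encode 83 kB, per the New Scientist figure quoted above.
cost_per_kb = 1000 / 83                    # pounds per kilobyte
print(round(cost_per_kb * 1_000_000))      # cost per gigabyte: roughly £12 million
```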

Other Futuristic Storage Technologies

Not all innovative storage technologies end up becoming mainstream or widely used beyond just research, of course.

Scientists and tech companies have been working on holographic data storage for at least a decade. In 2011, GE demonstrated its holographic disc storage: DVD-sized discs that could store 500GB thanks to cramming the data onto layers of tiny holograms (unlike Blu-ray discs, which store data just on the surface). These discs also had a relatively long lifespan prediction of 30 or more years. Not much has been said about the Holographic Versatile Disc (HVD) lately, though, and one of the biggest developers of holographic drives, InPhase Technologies, went bankrupt in 2010. That’s not to say the technology won’t be a prominent storage technology in the future (what says “future” more than “holographic” anyway?).

Well, maybe quantum storage. Scientists are currently investigating ways to store data using quantum physics, e.g., a bit of data attached to the spin of an electron. Right now this technology can only store tiny amounts of data for a very short time (not even a day yet), but if it works and takes off, we could see instant data syncing between two points anywhere, thanks to quantum entanglement.

Wonder what they’ll come up with next.

The post Data Storage Technologies of the Future appeared first on Backblaze Blog | The Life of a Cloud Backup Company.

The Real-Time Linux Collaborative Project

This post was syndicated from: and was written by: corbet. Original post: at

The Linux Foundation has announced the formation of a collaborative project to support the ongoing development of the realtime kernel patch set. “The RTL Collaborative Project will focus on pushing critical code upstream to be reviewed and eventually merged into the mainline Linux kernel where it will receive ongoing support. This will save the industry millions of dollars in research and development. It will also improve quality of the code through robust upstream kernel test infrastructure, since anything maintained in the mainline kernel is collectively supported by thousands of developers and hundreds of companies around the world.” As part of the project, the Foundation has appointed Thomas Gleixner to a Fellow position.

TorrentFreak: Thousands of “Spies” Are Watching Trackerless Torrents

This post was syndicated from: TorrentFreak and was written by: Ernesto. Original post: at TorrentFreak

The beauty of BitTorrent is that thousands of people can share a single file simultaneously to speed up downloading. In order for this to work, trackers announce the IP-addresses of all file-sharers in public.

The downside of this approach is that anyone can see who’s sharing a particular file. It’s not even required for monitoring outfits to actively participate.

This ‘vulnerability’ is used by dozens of tracking companies around the world, some of which send file-sharers warning letters, or worse. However, the “spies” are not just getting info from trackers, they also use BitTorrent’s DHT.

Through DHT, BitTorrent users share IP-addresses with other peers. Thus far, little was known about the volume of monitoring through DHT, but research from Peersm’s Aymeric Vitte shows that it’s rampant.

Through various experiments Vitte consistently ran into hundreds of thousands of IP-addresses that show clear signs of spying behavior.

The spies are not hard to find and many monitor pretty much all torrent hashes they can find. Blocking them is not straightforward though, as they frequently rotate IP-addresses and pollute swarms.

“The spies are organized to monitor automatically whatever exists in the BitTorrent network, they are easy to find but difficult to follow since they might change their IP addresses and are polluting the DHT with existing peers not related to monitoring activities,” Vitte writes.

The research further found that not all spies are actively monitoring BitTorrent transfers. Vitte makes a distinction between level 1 and level 2 spies, for example.

The first group is the largest; these spies spread the IP-addresses of random peers along with those of the more dangerous level 2 spies, and so serve to connect file-sharers to the latter group. They respond automatically, and even return peers for torrents that don’t exist.

The level 2 spies are the data collectors, some of which use quickly changing IP-addresses. They pretend to offer a certain file and wait for BitTorrent users to connect to them.
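
The behavior described above suggests a simple probe heuristic. The following is a hypothetical illustration, not Vitte's actual detection code; the reply dictionaries loosely mimic decoded DHT get_peers responses:

```python
import os

def random_infohash():
    # A fresh random 20-byte info-hash almost certainly matches no real torrent.
    return os.urandom(20)

def looks_like_spy(get_peers_reply):
    # An honest DHT node queried for a nonexistent torrent should return
    # closer "nodes" to keep the lookup going, not "values" (peer contacts).
    # Returning peers for a torrent that cannot exist is monitoring behavior.
    return bool(get_peers_reply.get("values"))

honest = {"nodes": b"...compact node info..."}
spy = {"values": [b"\x7f\x00\x00\x01\x1a\xe1"]}  # fake peer for a fake hash
print(looks_like_spy(honest), looks_like_spy(spy))  # False True
```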

The image below shows how rapidly the spies were discovered in one of the experiments and how quickly they rotate IP-addresses.


Interestingly, only very few of the level 2 spies actually accept data from an alleged pirate, meaning that most can’t prove beyond a doubt that pirates really shared something (e.g. they could just be checking a torrent without downloading).

According to Vitte, this could be used by accused pirates as a defense.

“That’s why people who receive settlement demands while using only DHT should challenge this, and ask precisely what proves that they downloaded a file,” he says.

After months of research and several experiments Vitte found that there are roughly 3,000 dangerous spies. These include known anti-piracy outfits such as Trident Media Guard, but also unnamed spies that use rotating third party IPs so they are harder to track.

Since many monitoring outfits constantly change their IP-addresses, static blocklists are useless. At TF we are no fans of blocklists in general, but Vitte believes that the dynamic blocklist he has developed provides decent protection, with near instant updates.
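
The idea of a dynamic blocklist with fast expiry can be sketched in a few lines. This is a hypothetical design illustrating the approach, not Torrent-Live's actual implementation:

```python
import time

class DynamicBlocklist:
    """Time-windowed blocklist sketch: entries expire so that rotated
    IP-addresses age out instead of accumulating in a stale static list."""

    def __init__(self, ttl_seconds=3600):
        self.ttl = ttl_seconds
        self.entries = {}  # ip -> timestamp of last spying report

    def report(self, ip, now=None):
        # Record a fresh sighting of spying behavior from this address.
        self.entries[ip] = now if now is not None else time.time()

    def is_blocked(self, ip, now=None):
        now = now if now is not None else time.time()
        seen = self.entries.get(ip)
        if seen is None:
            return False
        if now - seen > self.ttl:
            del self.entries[ip]  # stale entry: the IP has likely rotated away
            return False
        return True
```

A real client would feed `report()` from live detection (such as the nonexistent-torrent probe) so updates propagate near-instantly, which is the property a static list cannot provide.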

This (paid) blocklist is part of the Open Source Torrent-Live client, which has several built-in optimizations to prevent people from monitoring downloads. People can also use it to build and maintain a custom blocklist.

In his research paper Vitte further proposes several changes to the BitTorrent protocol which aim to make it harder to spy on users. He hopes other developers will pick this up to protect users from excessive monitoring.

Another option to stop the monitoring is to use an anonymous VPN service or proxy, which hides ones actual IP-address.


AWS Security Blog: Learn About the Rest of the Security and Compliance Track Sessions Being Offered at re:Invent 2015

This post was syndicated from: AWS Security Blog and was written by: Craig Liebendorfer. Original post: at AWS Security Blog

Previously, I mentioned that the re:Invent 2015 Security & Compliance track sessions had been announced, and I also discussed the AWS Identity and Access Management (IAM) sessions that will be offered as part of the Security & Compliance track.

Today, I will highlight the remainder of the sessions that will be presented as part of the Security & Compliance track. If you are going to re:Invent 2015, you can add these sessions to your schedule now. If you won’t be attending re:Invent in person this year, keep in mind that all sessions will be available on YouTube (video) and SlideShare (slide decks) after the conference.


SEC314: Full Configuration Visibility and Control with AWS Config

With AWS Config, you can discover what is being used on AWS, understand how resources are configured and how their configurations changed over time—all without disrupting end-user productivity on AWS. You can use this visibility to assess continuous compliance with best practices, and integrate with IT service management, configuration management, and other ITIL tools. In this session, AWS Senior Product Manager Prashant Prahlad will discuss:

  • Mechanisms to aggregate this deep visibility to gain insights into your overall security and operational posture.
  • Ways to leverage notifications from the service to stay informed, trigger workflows, or graph your infrastructure.
  • Integrating AWS Config with ticketing and workflow tools to help you maintain compliance with internal practices or industry guidelines.
  • Aggregating this data with other configuration management tools to move toward a single source of truth solution for configuration management.

This session is best suited for administrators and developers with a focus on audit, security, and compliance.
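
To give a flavor of the rule-based compliance checking AWS Config enables, here is a small self-contained sketch; the resource records and the rule are illustrative only and do not use the real Config API:

```python
# Hypothetical snapshot of recorded resource configurations, in the spirit
# of what a configuration-recording service collects.
resources = [
    {"id": "sg-1", "type": "SecurityGroup", "open_to_world": False},
    {"id": "sg-2", "type": "SecurityGroup", "open_to_world": True},
]

def noncompliant(resources):
    # Example rule: flag security groups that allow unrestricted ingress.
    return [r["id"] for r in resources
            if r["type"] == "SecurityGroup" and r["open_to_world"]]

print(noncompliant(resources))  # ['sg-2']
```

The service's value is running checks like this continuously against recorded configuration history rather than as a one-off audit script.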

SEC318: AWS CloudTrail Deep Dive

Ever wondered how you can find out which user made a particular API call, when the call was made, and which resources were acted upon? In this session, you will learn from AWS Senior Product Manager Sivakanth Mundru how to turn on AWS CloudTrail for hundreds of AWS accounts in all AWS regions to ensure you have full visibility into API activity in all your AWS accounts. We will demonstrate how to use CloudTrail Lookup in the AWS Management Console to troubleshoot operational and security issues and how to use the AWS CLI or SDKs to integrate your applications with CloudTrail.

We will also demonstrate how you can monitor for specific API activity by using Amazon CloudWatch and receive email notifications when such activity occurs. Using CloudTrail Lookup and CloudWatch Alarms, you can take immediate action to quickly remediate any security or operational issues. We will also share best practices and ready-to-use scripts, and dive deep into new features that help you configure additional layers of security for CloudTrail log files.
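
The core lookup task the session describes, finding which user made a particular API call, can be illustrated offline. The sample event below follows the shape of CloudTrail records (eventName, eventTime, userIdentity) but is fabricated for illustration:

```python
import json

# A fabricated CloudTrail-style log file with two API call records.
sample_log = json.dumps({
    "Records": [
        {"eventName": "DeleteBucket", "eventTime": "2015-10-08T12:00:00Z",
         "userIdentity": {"userName": "alice"}},
        {"eventName": "RunInstances", "eventTime": "2015-10-08T12:05:00Z",
         "userIdentity": {"userName": "bob"}},
    ]
})

def who_called(log_json, api_call):
    # Return (user, time) pairs for every record matching the given API call.
    records = json.loads(log_json)["Records"]
    return [(r["userIdentity"].get("userName"), r["eventTime"])
            for r in records if r["eventName"] == api_call]

print(who_called(sample_log, "DeleteBucket"))  # [('alice', '2015-10-08T12:00:00Z')]
```

In production you would run this kind of filter over the log objects CloudTrail delivers to S3, or let the console's Lookup feature do it for you.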

SEC403: Timely Security Alerts and Analytics: Diving into AWS CloudTrail Events by Using Apache Spark on Amazon EMR

Do you want to analyze AWS CloudTrail events within minutes of them arriving in your Amazon S3 bucket? Would you like to learn how to run expressive queries over your CloudTrail logs? AWS Senior Security Engineer Will Kruse will demonstrate Apache Spark and Apache Spark Streaming as two tools to analyze recent and historical security logs for your accounts. To do so, we will use Amazon Elastic MapReduce (EMR), your logs stored in S3, and Amazon SNS to generate alerts. With these tools at your fingertips, you will be the first to know about security events that require your attention, and you will be able to quickly identify and evaluate the relevant security log entries.


SEC306: Defending Against DDoS Attacks

In this session, AWS Operations Manager Jeff Lyon and AWS Software Development Manager Andrew Kiggins will address the current threat landscape, present DDoS attacks that we have seen on AWS, and discuss the methods and technologies we use to protect AWS services. You will leave this session with a better understanding of:

  • DDoS attacks on AWS as well as the actual threats and volumes that we typically see.
  • What AWS does to protect our services from these attacks.
  • How this all relates to the AWS Shared Responsibility Model.

Incident Response

SEC308: Wrangling Security Events in the Cloud

Have you prepared your AWS environment for detecting and managing security-related events? Do you have all the incident response training and tools you need to rapidly respond to, recover from, and determine the root cause of security events in the cloud? Even if you have a team of incident response rock stars with an arsenal of automated data acquisition and computer forensics capabilities, there is likely a thing or two you will learn from several step-by-step demonstrations of wrangling various potential security events within an AWS environment, from detection to response to recovery to investigating root cause. At a minimum, show up to find out who to call and what to expect when you need assistance with applying your existing, already awesome incident response runbook to your AWS environment. Presenters are AWS Principal Security Engineer Don “Beetle” Bailey and AWS Senior Security Consultant Josh Du Lac.

SEC316: Harden Your Architecture with Security Incident Response Simulations (SIRS)

Using Security Incident Response Simulations (SIRS—also commonly called IR Game Days) regularly keeps your first responders in practice and ready to engage in real events. SIRS help you identify and close security gaps in your platform and application layers, then validate your ability to respond. In this session, AWS Senior Technical Program Manager Jonathan Miller and AWS Global Security Architect Armando Leite will share a straightforward method for conducting SIRS. Then AWS enterprise customers will take the stage to share their experience running joint SIRS with AWS on their AWS architectures. Learn about detection, containment, data preservation, security controls, and more.

Key Management

SEC301: Strategies for Protecting Data Using Encryption in AWS

Protecting sensitive data in the cloud typically requires encryption. Managing the keys used for encryption can be challenging as your sensitive data passes between services and applications. AWS offers several options for using encryption and managing keys to help simplify the protection of your data at rest. In this session, AWS Principal Product Manager Ken Beer and Adobe Systems Principal Scientist Frank Wiebe will help you understand which features are available and how to use them, with emphasis on AWS Key Management Service and AWS CloudHSM. Adobe Systems Incorporated will present their experience using AWS encryption services to solve data security needs.

SEC401: Encryption Key Storage with AWS KMS at Okta

One of the biggest challenges in writing code that manages encrypted data is developing a secure model for obtaining keys and rotating them when an administrator leaves. AWS Key Management Service (KMS) changes the equation by offering key management as a service, enabling a number of security improvements over conventional key storage methods. Okta Senior Software Architect Jon Todd will show how Okta uses the KMS API to secure a multi-region system serving thousands of customers. This talk is oriented toward developers looking to secure their applications and simplify key management.
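
The envelope-encryption pattern behind a key management service can be illustrated with a toy, stdlib-only sketch. Nothing here is a real KMS API, and the keystream "cipher" is deliberately simplistic; the point is the key hierarchy: bulk data encrypted under a fresh data key, and only the small data key wrapped under the long-lived master key.

```python
import hashlib, secrets

def keystream_xor(key, data):
    # Toy stream cipher: XOR with a SHA-256-derived keystream. Illustration
    # only; real systems use an authenticated cipher such as AES-GCM.
    out = bytearray()
    counter = 0
    while len(out) < len(data):
        out.extend(hashlib.sha256(key + counter.to_bytes(8, "big")).digest())
        counter += 1
    return bytes(b ^ k for b, k in zip(data, out))

master_key = secrets.token_bytes(32)   # held by the key service, never leaves it
data_key = secrets.token_bytes(32)     # generated fresh for each object
ciphertext = keystream_xor(data_key, b"secret payload")
wrapped_key = keystream_xor(master_key, data_key)  # stored beside the ciphertext

# Decrypt: unwrap the data key with the master key, then decrypt the data.
recovered = keystream_xor(keystream_xor(master_key, wrapped_key), ciphertext)
assert recovered == b"secret payload"
```

Because only wrapped data keys reference the master key, rotating or revoking the master key never requires re-encrypting the bulk data, which is one of the improvements over conventional key storage the session abstract alludes to.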

Overall Security

SEC201: AWS Security State of the Union

Security must be at the forefront for any online business. At AWS, security is priority number one. AWS Vice President and Chief Information Security Officer Stephen Schmidt will share his insights into cloud security and how AWS meets customers’ demanding security and compliance requirements—and in many cases helps them improve their security posture. Stephen, with his background with the FBI and his work with AWS customers in the government, space exploration, research, and financial services organizations, will share an industry perspective that’s unique and invaluable for today’s IT decision makers.

SEC202: If You Build It, They Will Come: Best Practices for Securely Leveraging the Cloud

Cloud adoption is driving digital business growth and enabling companies to shift to processes and practices that make innovation continual. As with any paradigm shift, cloud computing requires different rules and a different way of thinking. This presentation will highlight best practices to build and secure scalable systems in the cloud and capitalize on the cloud with confidence and clarity.

In this session, Sumo Logic VP of Security/CISO Joan Pepin will cover:

  • Key market drivers and advantages for leveraging cloud architectures.
  • Foundational design principles to guide strategy for securely leveraging the cloud.
  • The “Defense in Depth” approach to building secure services in the cloud, whether it’s private, public, or hybrid.
  • Real-world customer insights from organizations who have successfully adopted the “Defense in Depth” approach.

Session sponsored by Sumo Logic.

SEC203: Journey to Securing Time Inc’s Move to the Cloud

Learn how Time Inc. met security requirements as it transitioned from its data centers to the AWS cloud. Colin Bodell, CTO of Time Inc., will start off this session by presenting Time’s objective to move away from on-premises and co-location data centers to AWS, and the cost savings that have been realized with this transition. Chris Nicodemo from Time Inc. and Derek Uzzle from Alert Logic will then share lessons learned in the journey to secure dozens of high-volume media websites during the migration, and how it has enhanced overall security flexibility and scalability. They will also provide a deep dive on the solutions Time has leveraged for their enterprise security best practices, and show you how they were able to execute their security strategy.

Who should attend: InfoSec and IT management. Session sponsored by Alert Logic.

SEC303: Architecting for End-to-End Security in the Enterprise

This session will tell the story of how security-minded enterprises provide end-to-end protection of their sensitive data in AWS. Learn about the enterprise security architecture decisions made by Fortune 500 organizations during actual sensitive workload deployments as told by the AWS professional service security, risk, and compliance team members who lived them. In this technical walkthrough, AWS Principal Consultant Hart Rossman and AWS Principal Security Solutions Architect Bill Shinn will share lessons learned from the development of enterprise security strategy, security use-case development, end-to-end security architecture and service composition, security configuration decisions, and the creation of AWS security operations playbooks to support the architecture.

SEC321: AWS for the Enterprise—Implementing Policy, Governance, and Security for Enterprise Workloads

CSC Director of Global Cloud Portfolio Kyle Falkenhagen will demonstrate enterprise policy, governance, and security products used to deploy and manage enterprise and industry applications on AWS. CSC will demonstrate automated provisioning and management of big data platforms and industry-specific enterprise applications, with automatically provisioned secure network connectivity from the datacenter to AWS over a layer-2 routed AT&T NetBond connection (which provides AWS Direct Connect access). CSC will also demonstrate how applications blueprinted on CSC’s Agility Platform can be re-hosted on AWS in minutes or re-instantiated across multiple AWS regions, and how CSC can provide agile, consumption-based endpoint security for workloads in any cloud or virtual infrastructure, providing enterprise management and 24×7 monitoring of workload compliance, vulnerabilities, and potential threats.

Session sponsored by CSC.

SEC402: Enterprise Cloud Security via DevSecOps 2.0

Running enterprise workloads with sensitive data in AWS is hard and requires an in-depth understanding of software-defined security risks. At re:Invent 2014, Intuit and AWS presented "Enterprise Cloud Security via DevSecOps" to help the community understand how to embrace AWS features and a software-defined security model. Since then, we’ve learned quite a bit more about running sensitive workloads in AWS.

We’ve evaluated new security features, worked with vendors, and generally explored how to develop security-as-code skills. Come join Intuit DevSecOps Leader Shannon Lietz and AWS Senior Security Consultant Matt Bretan to learn about second-year lessons and see how DevSecOps is evolving. We’ve built skills in security engineering, compliance operations, security science, and security operations to secure AWS-hosted applications. We will share stories and insights about DevSecOps experiments, and show you how to crawl, walk, and then run into the world of DevSecOps.

Security Architecture

SEC205: Learn How to Hackproof Your Cloud Using Native AWS Tools

The cloud requires us to rethink much of what we do to secure our applications. The idea of physical security morphs as infrastructure becomes virtualized by AWS APIs. In a new world of ephemeral, autoscaling infrastructure, you need to adapt your security architecture to meet both compliance and security threats. And AWS provides powerful tools that enable users to confidently overcome these challenges.

In this session, CloudCheckr Founder and CTO Aaron Newman will discuss leveraging native AWS tools as he covers topics including:

  • Minimizing attack vectors and surface area.
  • Conducting perimeter assessments of your virtual private clouds (VPCs).
  • Identifying internal vs. external threats.
  • Monitoring threats.
  • Reevaluating intrusion detection, activity monitoring, and vulnerability assessment in AWS.

Session sponsored by CloudCheckr.

Enjoy re:Invent!

– Craig

Krebs on Security: With Stolen Cards, Fraudsters Shop to Drop

This post was syndicated from: Krebs on Security and was written by: BrianKrebs. Original post: at Krebs on Security

A time-honored method of extracting cash from stolen credit cards involves “reshipping” scams, which manage the purchase, reshipment and resale of carded consumer goods from America to Eastern Europe — primarily Russia. A new study suggests that some 1.6 million credit and debit cards are used to commit at least $1.8 billion in reshipping fraud each year, and identifies some choke points for disrupting this lucrative money laundering activity.

Many retailers long ago stopped allowing direct shipments of consumer goods from the United States to Russia and Eastern Europe, citing the high rate of fraudulent transactions for goods destined to those areas. As a result, fraudsters have perfected the reshipping service, a criminal enterprise that allows card thieves and the service operators to essentially split the profits from merchandise ordered with stolen credit and debit cards.

Source: Drops for Stuff research paper.

Much of the insight in this story comes from a study released last week called “Drops for Stuff: An Analysis of Reshipping Mule Scams,” which has multiple contributors (including this author). To better understand the reshipping scheme, it helps to have a quick primer on the terminology thieves use to describe the different actors in the scam.

The “operator” of the reshipping service specializes in recruiting “reshipping mules” or “drops” — essentially unwitting consumers in the United States who are enlisted through work-at-home job scams and promised up to $2,500 per month salary just for receiving and reshipping packages.

In practice, virtually all drops are cut loose within approximately 30 days of their first shipment — just before the promised paycheck is due. Because of this constant churn, the operator must be constantly recruiting new drops.

The operator sells access to his stable of drops to card thieves, also known as “stuffers.” The stuffers use stolen cards to purchase high-value products from merchants and have the merchants ship the items to the drops’ addresses. Once the drops receive the packages, the stuffers provide them with prepaid shipping labels that the mules will use to ship the packages to the stuffers themselves. After they receive the packages relayed by the drops, the stuffers then sell the products on the local black market.

The reshipping service operator typically takes a percentage cut (up to 50 percent), with stuffers paying a portion of the product’s retail value to the site operator as the reshipping fee. Operations that target lower-priced products (clothing, e.g.) may instead charge a flat-rate fee of $50 to $70 per package. Depending on the sophistication of the reshipping service, stuffers can either buy shipping labels directly from the service — generally at a volume discount — or provide their own [for a discussion of ancillary criminal services that resell stolen USPS labels purchased wholesale, check out this story from 2014].

The researchers found that reshipping sites typically guarantee a certain level of customer satisfaction for successful package delivery, with some important caveats. If a drop who is not marked as problematic embezzles the package, reshipping sites offer free shipping for the next package or pay up to 15% of the item’s value as compensation to stuffers (e.g., as compensation for “burning” the credit card or the already-paid reshipping label).

However, in cases where the authorities identify the drop and intercept the package, the reshipping sites provide no compensation — they call these incidents “acts of God” over which they have no control.

“For a premium, stuffers can rent private drops that no other stuffers will have access to,” the researchers wrote. “Such private drops are presumably more reliable and are shielded from interference by other stuffers and, in turn, have a reduced risk to be discovered (hence, lower risk of losing packages).”


One of the key benefits of cashing out stolen cards using a reshipping service is that many luxury consumer goods that are typically bought with stolen cards — gaming consoles, iPads, iPhones and other Apple devices, for instance — can be sold in Russia for a 30 percent to 50 percent markup on top of the original purchase price, allowing the thieves to increase their return on each stolen card.

For example, an Apple MacBook selling for $1,000 in the United States typically retails for about $1,400 in Russia, because a variety of customs duties, taxes and other fees increase its price.

It’s not hard to see how this can become a very lucrative form of fraud for everyone involved (except the drops). According to the researchers, the average damage from a reshipping scheme per cardholder is $1,156.93. In this case, the stuffer buys a card off the black market for $10, turns around and purchases more than $1,100 worth of goods. After the reshipping service takes its cut (~$550) and the stuffer pays for his reshipping label (~$100), the stuffer receives the stolen goods and sells them on the black market in Russia for $1,400. He has just turned a $10 investment into more than $700. Rinse, wash, and repeat.
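
The arithmetic above can be sketched as a quick back-of-the-envelope calculation, using the approximate figures quoted by the researchers:

```python
# Back-of-the-envelope economics of one reshipping transaction,
# using the approximate figures from the study's example.
card_cost = 10        # stolen card bought on the black market
service_cut = 550     # ~50 percent cut taken by the reshipping service
label_cost = 100      # prepaid shipping label paid for by the stuffer
resale_price = 1_400  # resale price of the goods in Russia

profit = resale_price - (card_cost + service_cut + label_cost)
print(profit)  # 740 -- a $10 investment turned into more than $700
```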

The study examined the inner workings of seven different reshipping services over a period of five years, from 2010 to 2015, and involved data shared by the FBI and the U.S. Postal Investigative Service. The analysis showed that at least 85 percent of packages being reshipped via these schemes were being sent to Moscow or to the immediate surrounding areas of Moscow.

The researchers wrote that “although it is often impossible to apprehend criminals who are abroad, the patterns of reshipping destinations can help to intercept the international shipping packages before they leave the country, e.g., at a USPS International Service Center. Focusing inspection efforts on the packages destined to the stuffers’ prime destination cities can increase the success of intercepting items from reshipping scams.”

The research team wrote that disrupting the reshipping chains of these scams has the potential to cripple the underground economy by affecting a major income stream of cybercriminals. By way of example, the team found that a single criminal-operated reshipping service  can earn a yearly revenue of over 7.3 million US dollars, most of which is profit.

A copy of the full paper is available here (PDF).

Krebs on Security: Bidding for Breaches, Redefining Targeted Attacks

This post was syndicated from: Krebs on Security and was written by: BrianKrebs. Original post: at Krebs on Security

A growing community of private and highly-vetted cybercrime forums is redefining the very meaning of “targeted attacks.” These bid-and-ask forums match crooks who are looking for access to specific data, resources or systems within major corporations with hired muscle who are up to the task or who already have access to those resources.

A good example of this until recently could be found at a secretive online forum called “Enigma,” a now-defunct community that was built as a kind of eBay for data breach targets. Vetted users on Enigma were either bidders or buyers — posting requests for data from or access to specific corporate targets, or answering such requests with a bid to provide the requested data. The forum, which operated on the open Web for months, was apparently scuttled when the forum administrators (rightly) feared that the community had been infiltrated by spies.

The screen shot below shows several bids on Enigma from March through June 2015, requesting data and services related to HSBC UK, Citibank, Air Berlin and Bank of America:

Enigma, an exclusive forum for cyber thieves to buy and sell access to or data stolen from companies.

One particularly active member, shown in the screen shot above and the one below using the nickname “Demander,” posts on Jan. 10, 2015 that he is looking for credentials from Cisco and that the request is urgent (it’s unclear from the posting whether he’s looking for access to Cisco Corp. or simply to a specific Cisco router). Demander also was searching for services related to Bank of America ATMs and unspecified data or services from Wells Fargo.

More bids on Enigma forum for services, data, and access to major corporations.

Much of the information about Enigma comes from Noam Jolles, a senior intelligence expert at Diskin Advanced Technologies. The employees at Jolles’ firm are all former members of Shin Bet, a.k.a. the Israel Security Agency/General Security Service — Israel’s counterespionage and counterterrorism agency, and similar to the British MI5 or the American FBI. The firm’s namesake comes from its founder, Yuval Diskin, who headed Shin Bet from 2005 to 2011.

“On Enigma, members post a bid and call on people to attack certain targets or that they are looking for certain databases for which they are willing to pay,” Jolles said. “And people are answering it and offering their merchandise.”

Those bids can take many forms, Jolles said, from requests to commit a specific cyberattack to bids for access to certain Web servers or internal corporate networks.

“I even saw bids regarding names of people who could serve as insiders,” she said. “Lists of people who might be susceptible to being recruited or extorted.”

Many experts believe the breach that exposed tens of millions of user accounts at AshleyMadison.com — an infidelity site that promises to hook up cheating spouses — originated from or was at least assisted by an insider at the company. Interestingly, on June 25, 2015 — three weeks before news of the breach broke — a member on a related secret data-trading forum called the “Gentlemen’s Club” solicits “data and service” related to AshleyMadison, saying “Don’t waste time if you don’t know what I’m talking about. Big job opportunity.”

On June 26, 2015, a “Gentlemen’s Club” forum member named “Diablo” requests data and services related to AshleyMadison.

Cybercrime forums like Enigma vet new users and require non-refundable deposits of virtual currency (such as Bitcoin). More importantly, they have strict rules: If the forum administrators notice you’re not trading with others on the forum, you’ll soon be expelled from the community. This policy means that users who are not actively involved in illicit activities — such as buying or selling access to hacked resources — aren’t allowed to remain on the board for long.


In some respects, the above-mentioned forums — as exclusive as they appear to be — are a logical extension of cybercrime forum activity that has been maturing for more than a decade.

As I wrote in my book, Spam Nation: The Inside Story of Organized Cyber Crime — From Global Epidemic to Your Front Door, “crime forums almost universally help lower the barriers to entry for would-be cybercriminals. Crime forums offer crooks with disparate skills a place to market and test their services and wares, and in turn to buy ill-gotten goods and services from others.”

The interesting twist with forums like Enigma is that they focus on connecting miscreants seeking specific information or access with those who can be hired to execute a hack or supply the sought-after information from a corpus of already-compromised data. Based on her interaction with other buyers and sellers on these forums, Jolles said a great many of the requests for services seem to be people hiring others to conduct spear-phishing attacks — those that target certain key individuals within companies and organizations.

“What strikes me the most about these forums is the obvious use of spear-phishing attacks, the raw demand for people who know how to map targets for phishing, and the fact that so many people are apparently willing to pay for it,” Jolles said. “It surprises me how much people are willing to pay for good fraudsters and good social engineering experts who are hooking the bait for phishing.”

Jolles believes Enigma and similar bid-and-ask forums are helping to blur international and geographic boundaries between attackers responsible for stealing the data and those who seek to use it for illicit means.

“We have seen an attack be committed by an Eastern European gang, for example, and the [stolen] database will eventually get to China,” Jolles said. “In this data-trading arena, the boundaries are getting warped within it. I can be a state-level buyer, while the attackers will be eastern European criminals.”


Jolles said she began digging deeper into these forums in a bid to answer the question of what happens to what she calls the “missing databases.” Avivah Litan, a fraud analyst with Gartner Inc., wrote about Jolles’ research in July 2015, and explained it this way:

“Where has all the stolen data gone and how is it being used? 

We have all been bombarded by weekly, if not daily reports of breaches and theft of sensitive personal information at organizations such as Anthem, JP Morgan Chase and OPM. Yet, despite the ongoing onslaught of reported breaches (and we have to assume that only the sloppy hackers get caught and that the reported breaches are just a fraction of the total breach pie) – we have not seen widespread identity theft or personal damage inflicted from these breaches.

Have any of you heard of direct negative impacts from these thefts amongst your friends, family, or acquaintances? I certainly have not.”

Jolles said a good example of a cybercriminal actor who helps to blur the typical geographic lines in cybercrime is a mysterious mass-purchaser of stolen data known to many on Enigma and other such forums by a number of nicknames, including “King,” but most commonly “The Samurai.”

“According to what I can understand so far, this was a nickname given to him and not one he picked himself,” Jolles said. “He is looking for any kind of large volumes of stolen data. Of course, I am getting my information from people who are actually trading with him, not me trading with him directly. But they all say he will buy it and pay immediately, and that he is from China.”

What other clues are there that The Samurai could be affiliated with a state-sponsored actor? Jolles said this actor pays immediately for good, verifiable databases, and generally doesn’t haggle over the price.

“People think he’s Chinese, that he’s government because the way he pays,” Jolles said. “He pays immediately and he’s not negotiating.”

The Samurai may be just some guy in a trailer park in the middle of America, or an identity adopted by a group of individuals, for all I know. Alternatively, he could be something of a modern-day Keyser Söze, a sort of virtual boogeyman who gains mythical status among investigators and criminals alike.

Nevertheless, new forums like The Gentlemen’s Club and Enigma are notable because they’re changing the face of targeted attacks, building crucial bridges between far-flung opportunistic hackers, hired guns and those wishing to harness those resources.

AWS Official Blog: New AWS Public Data Set – 3000 Rice Genome

This post was syndicated from: AWS Official Blog and was written by: Jeff Barr. Original post: at AWS Official Blog

My colleague Angel Pizarro wrote the guest post below to tell you about an amazing new AWS Public Data Set!

— Jeff;

You can now access the genome sequence data of 3,024 rice varieties that have been aligned and analyzed against five different reference genomes as an AWS Public Data Set.  The data contains over 30 million genetic variations that span across all known and predicted rice genes, as well as potential regulatory regions surrounding these genes. Through analysis of this data, researchers can potentially identify genes associated with important agronomic traits such as crop yield, climate stress tolerance, and disease resistance. Together, they represent an unprecedented resource for advancing rice science and breeding technology.

Rice is a staple food source for half the world’s population, and accounts for over 20% of all calories per capita. In order to keep up with global population increases, we must find some way to increase rice crop yields by 25% by 2030. The current rate of increasing rice yield by traditional breeding is insufficient, especially when taking into account observed trends in climate change and pollution. In order to meet the world’s projected demand for a stable food supply, modern methods of breeding that take into account the underlying genetic information must be adopted by the community at large.

The 3,000 Rice Genome sequencing project is an international effort to sequence the genomes of 3,024 rice varieties from 89 countries. The collaborating centers involved are the Chinese Academy of Agricultural Sciences, BGI Shenzhen, and the International Rice Research Institute (IRRI). The consortium partnered with DNAnexus to analyze the sequence data of the 3,024 different rice varieties against five published draft genome builds of the rice genome. Partnering with DNAnexus allowed them to take advantage of the scalable computing capability at AWS to process all of the source genomic data across 37,000 compute cores working together in just two days — more than 200 times faster than would have been possible on local computing infrastructure. In addition, the data are accessible via DNAnexus for further analysis. For more details on accessing the data within DNAnexus, refer to the project documentation.
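
To put that speedup in perspective, here is a quick sanity check on those numbers (a rough estimate derived from the figures above, not data from the project itself):

```python
# Rough sanity check: 2 days of processing on ~37,000 cloud cores,
# versus the quoted ">200x slower" on local infrastructure.
cloud_days = 2
speedup = 200
local_days = cloud_days * speedup
print(local_days)                  # 400 -- days, i.e. well over a year
print(round(local_days / 365, 1))  # 1.1 -- roughly 13 months
```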

More in-depth analyses of this dataset could lead to inferences about higher yield and stress tolerance to pests, diseases, and climate change. You can learn more about the data and how to access it on the 3000 Rice Genome Public Data Set page.

Working with the Genomic Data Set on AWS
Because the data are hosted on S3 and accessible over common HTTP protocols, researchers have already done some amazing integrations within pre-existing tools. I’ve included some initial examples here and we’ll work with IRRI to share more examples as they emerge.
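
Because public S3 objects are plain HTTPS resources, any HTTP client (curl, wget, a browser, or a short script) can read them without AWS credentials or an SDK. A minimal sketch of how a public bucket/key pair maps to a fetchable URL — the bucket and key names here are hypothetical placeholders, not the data set’s actual paths:

```python
# A publicly readable S3 object is reachable at a predictable HTTPS URL,
# so standard HTTP tooling can fetch it. Bucket and key are placeholders.
def public_s3_url(bucket: str, key: str) -> str:
    """Build the virtual-hosted-style HTTPS URL for a public S3 object."""
    return f"https://{bucket}.s3.amazonaws.com/{key}"

url = public_s3_url("example-rice-bucket", "alignments/sample.bam")
print(url)  # https://example-rice-bucket.s3.amazonaws.com/alignments/sample.bam
```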

Visualizing the data using SNP-Seek
The International Rice Informatics Consortium (IRIC) has made the data available for querying and visualization through their SNP-Seek portal. Users are now able to query across all of the strains and narrow down regions of interest that show diversity across multiple genome references, integrated with the rice research community’s genomic annotation data.

Open Source Tools
In addition to the rich set of AWS partner offerings for life sciences, the full genomics open source ecosystem is available for use with the data. From command line applications such as samtools to rich user interfaces such as Galaxy or iobio, researchers can get started right away to analyze the data.

What’s Next?
The challenge for the research community is now to comprehensively and systematically mine this dataset to link genotypic variation to functional variation, with the ultimate goal of creating new and sustainable rice varieties. Combining these efforts with other studies — such as careful trait phenotyping in controlled and wild environments, as well as environmental studies based on satellite imagery like the Landsat data you can already access on AWS — can help us to keep up with the demands of the future world’s population growth.

Visit the 3000 Rice Genome Public Data Set page to access the data and sign up for project updates.

Angel Pizarro, Technical Business Development Manager, AWS Scientific Computing

TorrentFreak: Copyright Scares University Researchers From Sharing Their Findings

This post was syndicated from: TorrentFreak and was written by: Ernesto. Original post: at TorrentFreak

A few weeks ago I spotted the abstract of an article that had just been published in an academic journal.

The article was relevant to the topics we cover here at TorrentFreak, but unfortunately it was hidden behind a paywall, like most scientific articles are.

To bypass this hurdle I usually ask the author for a review copy. Not to publish it online, but to get a better picture of the findings and perhaps cover them in a news piece.

In this case the author in question was kind enough to respond, although not with a copy of the paper. Instead, he encouraged me to contact the publisher noting that they now control the rights.

“We no longer own the copyright of our work,” the author wrote back.

This certainly wasn’t the first time that a researcher had shown reluctance to share work, so I didn’t complain and gave the publisher a call. The publisher, one of the largest in the world, then informed me that the person responsible for these matters was not available.

A bit frustrated, I decided to reach out to the author of the article again. Instead of requesting a copy of the paper I sent over a few questions regarding the methodology and results of the study, which would be enough to begin a piece.

But, instead of commenting on the findings the author asked if the publisher had given permission to discuss the matter, fearing that it would otherwise lead to “trouble.”

Baffled by what had happened I lost all interest in writing an article and decided to move on to something else.

While the above is an extreme example, it does signal a problem that many scientists face. They are literally scared they’ll get into trouble if they share their own papers with the rest of the world.

The author above was a junior researcher with little experience, but even established researchers encounter similar problems. For example, we previously reported that the American Society of Civil Engineers cracked down on researchers who posted their articles on their personal websites.

So where is this coming from?

Well, in order to get published in subscription based journals researchers have to sign away their copyrights. A typical “copyright transfer” agreement (pdf) prohibits them from sharing the final article in public, even on their own websites.


Accepted articles are separately sold for dozens of dollars per piece, so if the researchers shared these for free the publishers could lose income. It’s a commercial decision.

That said, most publishers do allow authors to talk about their work, so the author in our example had no real reason to be worried. Similarly, it’s often permitted to share pre-print copies in public without restrictions.

Still, the reluctance among researchers and the restrictions they face are not helping knowledge to spread, which is a key goal of science.

So why aren’t a few bright minds starting a non-profit publishing outlet then?

Well, these already exist and there are several initiatives to promote “open access” publications, where everyone can read the articles freely. However, in many research fields the most prominent (high impact) journals are controlled by commercial publishers and placed behind paywalls.

Journals get a high impact rating if they publish a lot of frequently cited articles so it’s hard for new ones to gain ground.
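
For illustration, the standard two-year impact factor behind that rating is a simple ratio: citations received this year to a journal’s articles from the previous two years, divided by the number of articles it published in those two years. (The journal figures below are made up.)

```python
# Two-year journal impact factor: this year's citations to items published
# in the previous two years, divided by the number of those items.
def impact_factor(citations_this_year: int, items_prev_two_years: int) -> float:
    return citations_this_year / items_prev_two_years

# Hypothetical journal: 400 citations in 2015 to the 160 articles
# it published during 2013-2014.
print(impact_factor(400, 160))  # 2.5
```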

And since researchers are often evaluated based on the impact factor of the journals they publish in, “open access” doesn’t appeal to a wide audience yet. In a way, science is trapped in a copyright stranglehold controlled by a few large publishers.

It’s an absurd situation in which universities pay researchers to write articles, the copyrights to which are signed over to publishers. Those publishers then demand a licensing fee from the same universities to access the articles written by their own employees.

Please read the paragraph above once more, and keep in mind that some researchers are actually scared to share their work…

Meanwhile, Elsevier enjoys a net income of more than $1 billion per year, while suing websites that dare to infringe on the copyrights that researchers are ‘forced’ to sign over.

Source: TorrentFreak, for the latest info on copyright, file-sharing, torrent sites and ANONYMOUS VPN services.

TorrentFreak: MPA Reveals 500+ Instances of Pirate Site Blocking in Europe

This post was syndicated from: TorrentFreak and was written by: Andy. Original post: at TorrentFreak

Over the past several years Hollywood and its counterparts in the worldwide music industry have made huge strides in their efforts to complicate user access to so-called ‘pirate’ sites.

The theory is that if consumers find sites like The Pirate Bay more difficult to find, then the chances of those people buying official content will increase.

The first unlicensed site (AllofMP3) was ordered blocked in Denmark in 2006, and ever since rightsholders have been thirsty for more.

For almost a decade and with increasing frequency since 2010, site-blocking has been in the news, mainly centered around actions against torrent sites. In most cases of rightsholders testing the judicial waters around Europe, The Pirate Bay has been used as the guinea pig. History tells us that once The Pirate Bay gets blocked, the floodgates are well and truly open.

Although we’ve reported on every site-blocking court battle around Europe (including some that have been held behind closed doors), there are no publicly available central resources that provide an accurate overview of how many sites are blocked in each country. It doesn’t help that in the UK, for example, rightsholders add sites to existing court orders without any fresh announcement.

Yesterday, however, the MPAA’s international variant, MPA Europe, provided some interesting numbers which highlight the extent of site-blocking on copyright grounds on the continent. The presentation, made by Deputy General Counsel Okke Visser at the iCLIC Conference in Southampton, UK, included the slide below.


What the image shows is a total of 504 instances of web-blocking across Europe. It’s worth noting that some of the instances are duplicates, since sites like Pirate Bay and KickassTorrents are blocked in multiple regions. Also, it appears that proxies aren’t included in the total.


Italy

The region with by far the greatest number of blockades is Italy, down in the south of Europe, with 238 instances. The country’s AGCOM agency has been ordering sites to be blocked at an alarming rate, with no trials needed for a blackout.

However, things haven’t necessarily been going to plan. Research carried out in Italy found that blocking only increased blocked websites’ popularity, via the so-called “Streisand Effect”.

United Kingdom

It’s no surprise that the UK takes second place with 135 instances of blocking. Today they’re being ordered on behalf of Hollywood, the music industry, book publishers, sports broadcasters and even watch manufacturers.

The very first site to be blocked in the country on copyright grounds was defunct Usenet indexer Newzbin/2. The official process began in 2010 when MPA Europe, citing legal action in Denmark, asked local ISP BT to block the site. Subsequent court action resulted in an injunction and the floodgates were open for dozens of additional demands.


Denmark

After being the site-blocking pioneer of Europe, Denmark now has 41 instances of site-blocking according to the MPAA. Earlier this year a large batch of torrent and streaming sites were blocked, followed by a second wave in August.


Spain

When new legislation came into effect in Spain in January, site-blocking was bound to follow.

Sure enough, in March 2015 local ISPs were given 72 hours to block The Pirate Bay and in April a block of a popular music site followed. According to MPA Europe, Spain now has 24 instances of blocking.

The rest

While blocking measures are in place across the whole of the far west of Europe, plenty of countries are thus far holding their ground. In the north, Sweden is currently block-free, but that could all change depending on the outcome of pending legal action.

After putting up a tremendous fight against the odds, the Netherlands also has no blocks in place. However, a case against local ISPs still has some way to run.

Slightly to the east, Germany has no blocks and to date there has been little discussion on the topic in Poland or Romania. However, neighbor Austria now has six instances of blocking after the movie industry won a protracted legal battle against The Pirate Bay and other sites.

Instances of copyright-related site-blocking across Europe

#1 – Italy (238)
#2 – United Kingdom (135)
#3 – Denmark (41)
#4 – Spain (24)
#5 – France (18) (ref)
#6 – Portugal (15) (ref 1,2)
#7 – Belgium (13) (ref 1,2)
#8 – Norway (7) (ref)
#9 – Austria (6)
#10 – Ireland (2) (ref 1,2)
#10 – Greece (2) (ref)
#10 – Iceland (2) (ref)
#11 – Finland (1) (ref)


TorrentFreak: Science “Pirate” Attacks Elsevier’s Copyright Monopoly in Court

This post was syndicated from: TorrentFreak and was written by: Ernesto. Original post: at TorrentFreak

“Information wants to be free” is a commonly used phrase in copyright debates. While it may not apply universally, in the academic world it’s certainly relevant.

Information and knowledge are the cornerstones of science. Yet, most top research is locked up behind expensive paywalls.

As with most digital content, however, there are specialized sites that offer free and unauthorized access. In the academic world, Library Genesis and Sci-Hub are two of the main ‘pirate’ outlets, and their presence hasn’t gone unnoticed.

Earlier this year publishing company Elsevier filed a complaint at a New York District Court, hoping to shut down the two portals. According to the publisher the sites willingly offer millions of pirated scientific articles.

The court has yet to decide on Elsevier’s request for an injunction and allowed the operators time to respond. This week, Sci-Hub founder Alexandra Elbakyan submitted her first response.

While Elbakyan’s letter doesn’t address the legality of her website she does place the case in a wider context, explaining how the site came to be.

“When I was a student in Kazakhstan university, I did not have access to any research papers. Papers I needed for my research project,” Elbakyan writes (pdf), explaining that it was impossible as a student to pay for access.

“Payment of 32 dollars is just insane when you need to skim or read tens or hundreds of these papers to do research. I obtained these papers by pirating them,” she adds.

As explained in an earlier interview with TF, Elbakyan then decided to help other researchers to obtain research articles, which eventually grew to become a library of millions of works.

Elbakyan continues her letter by informing the court that unlike in other industries, the authors of these papers don’t get paid. Elsevier requires researchers to sign the copyright over to the company and collects money from their work through licensing and direct sales.

“All papers on their website are written by researchers, and researchers do not receive money from what Elsevier collects. That is very different from the music or movie industry, where creators receive money from each copy sold,” she notes.

Researchers often have no other option than to agree because a career in academia often depends on publications in top journals, many of which are owned by Elsevier.

“They feel pressured to do this, because Elsevier is an owner of so-called ‘high-impact’ journals. If a researcher wants to be recognized, make a career – he or she needs to have publications in such journals,” Elbakyan writes.

Sci-Hub’s operator notes that she’s not alone in her opinion, pointing to several top researchers who have also criticized the model. Most prominently, in 2012 more than 15,000 researchers demanded that Elsevier change its business practices.

Adding another illustration, Elbakyan notes that she never received any complaints from the academic community, except from Elsevier.

While many researchers will agree with Sci-Hub’s operator, the case seems almost impossible to win. Elbakyan pretty much admits to breaking the law and the court has little room to ignore that.

Elsevier hopes the court will soon issue the preliminary injunction so the domain names can be seized. The publisher disagrees with Elbakyan’s comments and notes that it participates in several initiatives to provide free and cheap access to researchers in low-income countries.

If the court sides with Elsevier, Library Genesis and Sci-Hub will likely lose access to their U.S. controlled domain names. However, taking the sites offline will prove to be more difficult as their servers and operator are not in the United States.


AWS Official Blog: New Spot Fleet Option – Distribute Your Fleet Across Multiple Capacity Pools

This post was syndicated from: AWS Official Blog and was written by: Jeff Barr. Original post: at AWS Official Blog

Last week I spoke to a technically-oriented audience at the Pacific Northwest PHP Conference. As part of my talk, I described cloud computing as a mix of technology and business, and made my point by talking about Spot instances. The audience looked somewhat puzzled at first, but as I explained further I could see their eyes light up as they started to think about the ways that they could save money for their companies by way of creative coding!

Earlier this year I wrote about the Spot Fleet API, and showed you how to use it to manage thousands of Spot instances with a single call to the RequestSpotFleet function. Today we are introducing a new “allocation strategy” option for that API. This option will allow you to create a Spot fleet that contains instances drawn from multiple capacity pools (a set of instances of a given type, within a particular region and Availability Zone).

As part of your call to RequestSpotFleet, you can include up to 20 launch specifications. If you make an untargeted request (by not specifying an Availability Zone or a subnet), you can target multiple capacity pools within an AWS region. This gives you access to a lot of EC2 capacity, and allows you to set up fleets that are a good match for your application.

You can set the allocation strategy to either of the following values:

  • lowestPrice – This is the default strategy. It will result in a Spot fleet that contains instances drawn from the lowest priced pool(s) specified in your request.
  • diversified – This is the new strategy, and it must be specified as part of your request. It will result in a Spot fleet that contains instances drawn from all of the pools specified in your request, with the exception of those where the current Spot price is above the On-Demand price.

This option allows you to choose the strategy that most closely matches your goals for each Spot fleet. The following table can be used as a guide:

  • Fleet Size – lowestPrice: fine for modest-sized fleets, although a request for a large fleet can affect pricing in the pool with the lowest price. diversified: works well with larger fleets.
  • Total Fleet Operating Cost – lowestPrice: can be unexpectedly high if pricing in the pool spikes. diversified: should average 70%-80% off of On-Demand over time.
  • Consequence of Capacity Fluctuation in a Pool – lowestPrice: the entire fleet is subject to possible interruption and subsequent replenishment. diversified: only a fraction of the fleet (1/Nth of total capacity) is subject to possible interruption and subsequent replenishment.
  • Application Characteristics – lowestPrice: short-running, not time sensitive. diversified: time sensitive.
  • Typical Applications – lowestPrice: scientific simulations, research computations. diversified: transcoding, customer-facing web servers, HPC, CI/CD.

If you create a fleet using the diversified strategy and use it to host your web servers, it is a good idea to select multiple pools and to have a fallback option in case all of them become unavailable.

Diversified allocation works really well in conjunction with the new resource-oriented bidding feature that we launched last month. When you use resource-oriented bidding and specify diversified allocation, each of the capacity pools in your launch specification will include the same number of capacity units.

To make use of this new strategy, include it in your CLI or API-driven request. If you are using the CLI, add the following entry to your configuration file:

"AllocationStrategy": "diversified"

If you are using the API, specify the same value in your SpotFleetRequestConfigData.
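
If you drive EC2 through an SDK rather than the CLI, the same value goes into the SpotFleetRequestConfigData structure. A minimal sketch in Python with boto3; the AMI ID, IAM role ARN, bid price, target capacity, and instance types below are placeholders rather than values from this post, and the actual call is shown commented out since it requires real credentials:

```python
# Sketch of a diversified Spot fleet request for the EC2 API. All
# identifiers below (AMI ID, role ARN, prices, types) are placeholders.
config = {
    "SpotPrice": "0.10",                  # default bid per instance-hour
    "TargetCapacity": 30,
    "IamFleetRole": "arn:aws:iam::123456789012:role/my-spot-fleet-role",
    "AllocationStrategy": "diversified",  # draw from all pools, not just the cheapest
    # Three launch specifications with no Availability Zone or subnet,
    # i.e. an untargeted request spanning multiple capacity pools.
    "LaunchSpecifications": [
        {"ImageId": "ami-12345678", "InstanceType": t}
        for t in ("m3.medium", "m3.large", "c3.large")
    ],
}

# Submitting the request needs boto3 and AWS credentials:
#   import boto3
#   boto3.client("ec2").request_spot_fleet(SpotFleetRequestConfig=config)
```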

This option is available now and you can start using it today.


AWS Official Blog: AWS Week in Review – September 7, 2015

This post was syndicated from: AWS Official Blog and was written by: Jeff Barr. Original post: at AWS Official Blog

Let’s take a quick look at what happened in AWS-land last week:

Monday, September 7
Tuesday, September 8
Wednesday, September 9
Thursday, September 10
Friday, September 11

New & Notable Open Source

  • redshift-udfs contains SQL for many helpful Redshift UDFs.
  • cfn-flow is a command-line tool for developing CloudFormation templates and deploying stacks.
  • s3-backup-script tars up local files and folders, dumps a MySQL database, and uploads the resulting files to S3.
  • md5s3stash implements content-addressable storage in S3.
  • PhotoEncryptionInCloud is an Android app that stores encrypted photos in AWS.
  • auto-simple-calculator is an auto-pilot for the AWS Simple Monthly Calculator.
  • aws-cloudwatch-chart is a Node module that draws charts for CloudWatch metrics.
  • awsqr generates QR codes for AWS MFA logins.
  • grails-aws is an AWS plugin for Grails.
  • NFLX-Security-Monkey monitors policy changes and alerts on insecure configurations in an AWS account.

New Customer Success Stories

New SlideShare Content

New YouTube Videos

New Marketplace Applications

Upcoming Events

Upcoming Events at the AWS Loft (San Francisco)

Upcoming Events at the AWS Loft (New York)

Help Wanted

Stay tuned for next week! In the meantime, follow me on Twitter and subscribe to the RSS feed.


Schneier on Security: Friday Squid Blogging: The Chemistry of Squid Camouflage

This post was syndicated from: Schneier on Security and was written by: schneier. Original post: at Schneier on Security

Interesting research.

As usual, you can also use this squid post to talk about the security stories in the news that I haven’t covered.

TorrentFreak: Police Raid Fails to Dent UK Top 40 Music Piracy

This post was syndicated from: TorrentFreak and was written by: Andy. Original post: at TorrentFreak

Early last Thursday morning the UK’s Police Intellectual Property Crime Unit (PIPCU) were again mobilizing against online piracy.

Following a joint investigation with licensing outfit PRS for Music, officers from PIPCU and Merseyside police raided an address in Everton, Liverpool. Their target was a 38-year-old man believed to be involved in the unlawful distribution of music online.

In addition to uploading the UK’s Top 40 Singles to various torrent sites each week, police said the man also ran his own website offering ‘acapella’ audio tracks. Police further added that the man generated “significant” advertising revenue from his endeavors while possibly costing the industry “millions” in lost revenue.

A tip received by TF indicated that the man was connected to several accounts on the world’s major torrent sites, including The Pirate Bay and KickassTorrents. We can now reveal that the accounts were registered in the name of ‘OldSkoolScouse’. For those outside the UK, the term ‘scouse’ refers to the accent found primarily in and around the Liverpool area.


As shown in the KickassTorrents screenshot above, the profile links to a domain. Up until last Friday (the day after the raid) that domain linked to another site, DeeJayPortal, which was billed as the “Number #1 community and resource, for DJs & Producers.”

As can be seen from the image below, DeeJayPortal featured acapella tracks as described by the police.


It remains unclear how many users each domain had, but in the bigger picture the numbers are very small indeed. At its height DeeJayPortal appears to have barely scraped the world’s top 200,000 most popular sites while OldSkoolScouse is currently outside the top three million.

Both domains went down last Friday, as did the OldSkoolScouse Twitter account and Facebook page. As illustrated below, the former regularly announced torrent uploads of the UK Top 40 to DeeJayPortal.


Yet again it appears that the arrest last week was a case of rightsholders and police targeting low-hanging fruit. Using widely available research tools we were able to quickly uncover important names plus associated addresses, both email and physical. It seems likely that he made close to no effort to conceal his identity.

With its operator in the police spotlight, it will come as little surprise that there was no weekly upload of the UK’s Top 40 most-popular tracks from OldSkoolScouse last Friday, something which probably disappointed the releaser’s fans. However, any upset would have been very temporary indeed.

As shown below, at least four other releases of exactly the same content were widely available on public torrent sites within hours of the UK chart results being announced last Friday, meaning the impact on availability was almost non-existent.


However, perhaps of more interest to the police and rightsholders is the impact the arrest will have on the public’s perception of how risky it is to engage in online piracy in the UK. Certainly, more people are being arrested in the UK file-sharing scene than in the United States currently, quite a surprise considering the aggressive anti-piracy stance usually taken in the U.S.

Finally, it will be really interesting to see if the arrest last week will conclude with a case going to court. PIPCU have made many arrest announcements connected to online piracy in the past two years, yet to our knowledge not one person has gone to trial.


TorrentFreak: It’s Impossible to Torrent Anonymously, Lawyer Says

This post was syndicated from: TorrentFreak and was written by: Ernesto. Original post: at TorrentFreak

Hollywood lawyer Carl Crowell has filed numerous lawsuits against online pirates in Oregon. He works on behalf of the makers of Dallas Buyers Club, among others, and has been active for years.

These cases never go to trial but are typically settled for a few thousand dollars. In a recent interview with a local newspaper he justifies this approach.

“The media calls what I do a scam, a fraud,” Crowell says, while pointing at the losses his clients suffer from online piracy.

According to Crowell many of the defendants falsely assume that not responding to a lawsuit will make it go away, but he stresses that this is not the case.

“There are stories online that tell people the worst thing you can do is respond, but it’s not a problem that’s going to go away by being ignored,” he notes.

Indeed, there have been various examples of default judgments against pirates in these types of cases, with judges handing out damages as high as $30,000. This means that ignoring a lawsuit is certainly not wise.

While the lawyer has no sympathy for pirates, he doesn’t just sue people left and right. Crowell says he focuses on people who upload content persistently, seeding files for several months.

“If you just download and don’t upload, you don’t cross my radar. I’m interested in persistent involvement over several months,” he says.

The above sounds reasonable for someone from his side of the bench, but soon after the interview takes a surprise turn. According to Crowell, it is impossible for pirates to escape his crosshairs as online anonymity is a myth.

“There is no anonymity online. If you want to pirate content, you have to do so publicly,” Crowell says.

This statement conflicts with reality, as there are plenty of ways for people to hide their IP-addresses. In fact, research shows that pirates tend to be very keen on their privacy.

For example, an academic survey revealed that 70 percent of The Pirate Bay users utilize a VPN service or proxy, or were interested in doing so in future.

Perhaps Crowell is misrepresenting reality on purpose as a scare tactic, or he simply doesn’t know better. Luckily for him, not all pirates are aware of these options either, otherwise his business would be ruined.


Schneier on Security: China’s “Great Cannon”

This post was syndicated from: Schneier on Security and was written by: schneier. Original post: at Schneier on Security

Interesting research: “An Analysis of China’s ‘Great Cannon.’”

Abstract: On March 16th, 2015, the Chinese censorship apparatus employed a new tool, the “Great Cannon”, to engineer a denial-of-service attack on an organization dedicated to resisting China’s censorship. We present a technical analysis of the attack and what it reveals about the Great Cannon’s working, underscoring that in essence it constitutes a selective nation-state Man-in-the-Middle attack tool. Although sharing some code similarities and network locations with the Great Firewall, the Great Cannon is a distinct tool, designed to compromise foreign visitors to Chinese sites. We identify the Great Cannon’s operational behavior, localize it in the network topology, verify its distinctive side-channel, and attribute the system as likely operated by the Chinese government. We also discuss the substantial policy implications raised by its use, including the potential imposition on any user whose browser might visit (even inadvertently) a Chinese web site.

[$] Debsources as a platform

This post was syndicated from: and was written by: n8willis. Original post: at

Debsources is a project that provides a web-based interface into the source code of every package in the Debian software archive—not a small task by any means. But, as Stefano Zacchiroli and Matthieu Caneill explained in their DebConf 2015 session, Debsources is far more than a source-code browsing tool. It provides a searchable viewport into 20 years of free-software history, which makes it viable as a platform for many varieties of research and experimentation.

lcamtuf's blog: Understanding the process of finding serious vulns

This post was syndicated from: lcamtuf's blog and was written by: Michal Zalewski. Original post: at lcamtuf's blog

Our industry tends to glamorize vulnerability research, with a growing number of bug reports accompanied by flashy conference presentations, media kits, and exclusive interviews. But for all that grandeur, the public understands relatively little about the effort that goes into identifying and troubleshooting the hundreds of serious vulnerabilities that crop up every year in the software we all depend on. It certainly does not help that many of the commercial security testing products are promoted with truly bombastic claims – and that some of the most vocal security researchers enjoy the image of savant hackers, seldom talking about the processes and toolkits they depend on to get stuff done.

I figured it may make sense to change this. Several weeks ago, I started trawling through the list of public CVE assignments, and then manually compiling a list of genuine, high-impact flaws in commonly used software. I tried to follow three basic principles:

  • For pragmatic reasons, I focused on problems where the nature of the vulnerability and the identity of the researcher is easy to ascertain. For this reason, I ended up rejecting entries such as CVE-2015-2132 or CVE-2015-3799.

  • I focused on widespread software – e.g., browsers, operating systems, network services – skipping many categories of niche enterprise products, WordPress add-ons, and so on. Good examples of rejected entries in this category include CVE-2015-5406 and CVE-2015-5681.

  • I skipped issues that appeared to be low impact, or where the credibility of the report seemed unclear. One example of a rejected submission is CVE-2015-4173.

To ensure that the data isn’t skewed toward more vulnerable software, I tried to focus on research efforts, rather than on individual bugs; where a single reporter was credited for multiple closely related vulnerabilities in the same product within a narrow timeframe, I would use only one sample from the entire series of bugs.

For the qualifying CVE entries, I started sending out anonymous surveys to the researchers who reported the underlying issues. The surveys open with a discussion of the basic method employed to find the bug:

  How did you find this issue?

  ( ) Manual bug hunting
  ( ) Automated vulnerability discovery
  ( ) Lucky accident while doing unrelated work

If “manual bug hunting” is selected, several additional options appear:

  ( ) I was reviewing the source code to check for flaws.
  ( ) I studied the binary using a disassembler, decompiler, or a tracing tool.
  ( ) I was doing black-box experimentation to see how the program behaves.
  ( ) I simply noticed that this bug is being exploited in the wild.
  ( ) I did something else: ____________________

Selecting “automated discovery” results in a different set of choices:

  ( ) I used a fuzzer.
  ( ) I ran a simple vulnerability scanner (e.g., Nessus).
  ( ) I used a source code analyzer (static analysis).
  ( ) I relied on symbolic or concolic execution.
  ( ) I did something else: ____________________

Researchers who relied on automated tools are also asked about the origins of the tool and the computing resources used:

  Name of tool used (optional): ____________________

  Where does this tool come from?

  ( ) I created it just for this project.
  ( ) It's an existing but non-public utility.
  ( ) It's a publicly available framework.

  At what scale did you perform the experiments?

  ( ) I used 16 CPU cores or less.
  ( ) I employed more than 16 cores.

Regardless of the underlying method, the survey also asks every participant about the use of memory diagnostic tools:

  Did you use any additional, automatic error-catching tools - like ASAN
  or Valgrind - to investigate this issue?

  ( ) Yes. ( ) Nope!

…and about the lengths to which the reporter went to demonstrate the bug:

  How far did you go to demonstrate the impact of the issue?

  ( ) I just pointed out the problematic code or functionality.
  ( ) I submitted a basic proof-of-concept (say, a crashing test case).
  ( ) I created a fully-fledged, working exploit.

It also touches on the communications with the vendor:

  Did you coordinate the disclosure with the vendor of the affected
  software?

  ( ) Yes. ( ) No.

  How long have you waited before having the issue disclosed to the
  public?

  ( ) I disclosed right away. ( ) Less than a week. ( ) 1-4 weeks.
  ( ) 1-3 months. ( ) 4-6 months. ( ) More than 6 months.

  In the end, did the vendor address the issue as quickly as you would
  have hoped?

  ( ) Yes. ( ) Nope.

…and the channel used to disclose the bug – an area where we have seen some stark changes over the past five years:

  How did you disclose it? Select all options that apply:

  [ ] I made a blog post about the bug.
  [ ] I posted to a security mailing list (e.g., BUGTRAQ).
  [ ] I shared the finding on a web-based discussion forum.
  [ ] I announced it at a security conference.
  [ ] I shared it on Twitter or other social media.
  [ ] We made a press kit or reached out to a journalist.
  [ ] Vendor released an advisory.

The survey ends with a question about the motivation and the overall amount of effort that went into this work:

  What motivated you to look for this bug?

  ( ) It's just a hobby project.
  ( ) I received a scientific grant.
  ( ) I wanted to participate in a bounty program.
  ( ) I was doing contract work.
  ( ) It's a part of my full-time job.

  How much effort did you end up putting into this project?

  ( ) Just a couple of hours.
  ( ) Several days.
  ( ) Several weeks or more.

So far, the response rate for the survey is approximately 80%; because I only started in August, I currently don’t have enough answers to draw particularly detailed conclusions from the data set – this should change over the next couple of months. Still, I’m already seeing several well-defined if preliminary trends:

  • The use of fuzzers is ubiquitous (incidentally, of named projects, afl-fuzz leads the fray so far); the use of other automated tools, such as static analysis frameworks or concolic execution, appears to be unheard of – despite the undivided attention that such methods receive in academic settings.

  • Memory diagnostic tools, such as ASAN and Valgrind, are extremely popular – and are an untold success story of vulnerability research.

  • Most public vulnerability research appears to be done by people who work on it full-time, employed by vendors; hobby work and bug bounties follow closely.

  • Only a small minority of serious vulnerabilities appear to be disclosed anywhere outside a vendor advisory, making it extremely dangerous to rely on press coverage (or any other casual source) for evaluating personal risk.
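
To make the first trend above concrete, the core loop of a mutation fuzzer can be reduced to a toy: apply random byte flips to a seed input, feed the result to the target, and keep every input that makes it crash. The parser and harness below are hypothetical illustrations; real tools such as afl-fuzz add coverage-guided feedback and far more sophisticated mutation strategies.

```python
import random

def mutate(data: bytes, rng: random.Random) -> bytes:
    """Return a copy of the seed with a few random bytes overwritten."""
    buf = bytearray(data)
    for _ in range(rng.randint(1, 4)):
        buf[rng.randrange(len(buf))] = rng.randrange(256)
    return bytes(buf)

def fuzz(target, seed: bytes, iterations: int = 2000, rng_seed: int = 0):
    """Feed mutated inputs to target(); collect the inputs that raise."""
    rng = random.Random(rng_seed)
    crashes = []
    for _ in range(iterations):
        sample = mutate(seed, rng)
        try:
            target(sample)
        except Exception:
            crashes.append(sample)
    return crashes

def toy_parser(data: bytes):
    """A deliberately fragile parser standing in for real target code."""
    if not data.startswith(b"HDR"):
        return None
    length = data[3]                           # one-byte payload length
    return data[4:4 + length].decode("ascii")  # raises on non-ASCII bytes

# Each entry in `crashes` is a mutated input that made the parser raise.
crashes = fuzz(toy_parser, b"HDR\x05hello")
```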

Of course, some security work happens out of public view; for example, some enterprises have well-established and meaningful security assurance programs that likely prevent hundreds of security bugs from ever shipping in the reviewed code. Since it is difficult to collect comprehensive and unbiased data about such programs, there is always some speculation involved when discussing the similarities and differences between this work and public security research.

Well, that’s it! Watch this space for updates – and let me know if there’s anything you’d change or add to the questionnaire.

Anchor Cloud Hosting: Is Your Website Ready for Click Frenzy 2015?

This post was syndicated from: Anchor Cloud Hosting and was written by: Jessica Field. Original post: at Anchor Cloud Hosting

With Click Frenzy happening again on November 17th, and online sales trends continuing to climb, it is once more predicted to break online sales records—as well as any participating websites that aren’t prepared.

Last year’s Click Frenzy event saw a 27.7% rise in online sales over the previous year, with the average customer order increasing 15.2% to $151.02. Any sudden rise in online activity, particularly when compressed into a single 24-hour period, can trigger a frenzy of technical issues for some retailers. According to IBM research, the first Click Frenzy in November 2012 saw 37% more online sales than even the Boxing Day sales. This unprecedented flurry of online transactions in such a short period of time saw many retailers crash under the pressure, losing sales and frustrating customers.

That first event revealed how easy it is for online retailers to snatch failure from the jaws of success by underestimating the ability of their website infrastructure to cope with the stampede of eager customers.

Online retailer Just Bedding decided against participating in the first Click Frenzy campaign, as the risk of making a bad impression outweighed the attraction of a huge retail opportunity. The sudden increase in traffic across a single day, and the unpredictability of the demand, would have required too much effort from the technical team to keep the site from falling over, and with no guarantee of success.

But, in 2013, Anchor’s server support gave Just Bedding the confidence to take advantage of the massive sales opportunity. Gabriel Luis, Just Bedding’s website administrator, called up Anchor to let the team know a potential flood of traffic was coming. Forewarned, Anchor was able to work with Just Bedding’s technical team to seamlessly scale the website.

“Anchor simply upped the server package to cope with the increase in traffic on this day,” says Luis. “They did create a precautionary landing page, just in case there were any issues with customers reaching the site, but it wasn’t needed.”

The website was able to withstand a huge spike in traffic on the day, eliminating much of the risk while maximising the opportunity.

If you’re one of the many retailers already signed up or still contemplating whether to join Click Frenzy this year, now is the time to talk to your hosting provider and your technical team. With a little forward planning, you can enjoy a frenzy of sales, not a frenzy of technical issues.

The post Is Your Website Ready for Click Frenzy 2015? appeared first on Anchor Cloud Hosting.

Schneier on Security: Using Samsung’s Internet-Enabled Refrigerator for Man-in-the-Middle Attacks

This post was syndicated from: Schneier on Security and was written by: schneier. Original post: at Schneier on Security

This is interesting research:

Whilst the fridge implements SSL, it FAILS to validate SSL certificates, thereby enabling man-in-the-middle attacks against most connections. This includes those made to Google’s servers to download Gmail calendar information for the on-screen display.

So, MITM the victim’s fridge from next door, or on the road outside and you can potentially steal their Google credentials.

The notable exception to the rule above is when the terminal connects to the update server — we were able to isolate the URL which is the same used by TVs, etc. We generated a set of certificates with the exact same contents as those on the real website (fake server cert + fake CA signing cert) in the hope that the validation was weak but it failed.

The terminal must have a copy of the CA and is making sure that the server’s cert is signed against that one. We can’t hack this without access to the file system where we could replace the CA it is validating against. Long story short we couldn’t intercept communications between the fridge terminal and the update server.
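
The failure described here, accepting any certificate rather than validating the chain and hostname, is easy to show in miniature. A sketch using Python’s ssl module, purely an illustration of the two client configurations rather than anything resembling the appliance’s actual code:

```python
import ssl

# What a correctly configured TLS client does: validate the certificate
# chain against trusted CAs and check the server's hostname.
good = ssl.create_default_context()
assert good.verify_mode == ssl.CERT_REQUIRED
assert good.check_hostname is True

# A client configured the way the fridge behaves for most connections:
# any certificate is accepted, so a machine on the path can present a
# self-signed certificate and read or rewrite the traffic.
bad = ssl.create_default_context()
bad.check_hostname = False          # must be disabled before CERT_NONE
bad.verify_mode = ssl.CERT_NONE     # never do this outside a demo
```

A socket wrapped with the second context will complete a handshake with anyone, including an attacker sitting between the terminal and Google’s servers, which is exactly the man-in-the-middle condition described above.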

When I think about the security implications of the Internet of things, this is one of my primary worries. As we connect things to each other, vulnerabilities in one of them affect the security of another. And because so many of the things we connect to the Internet will be poorly designed and low-cost, there will be lots of vulnerabilities in them. Expect a lot more of this kind of thing as we move forward.

TorrentFreak: Tech Giants Want to Punish DMCA Takedown Abusers

This post was syndicated from: TorrentFreak and was written by: Ernesto. Original post: at TorrentFreak

copyright-brandedEvery day copyright holders send millions of DMCA takedown notices to various Internet services.

Most of these requests are legitimate, aimed at disabling access to copyright-infringing material. However, there are also many overbroad and abusive takedown notices which lead to unwarranted censorship.

These abuses are a thorn in the side of major tech companies such as Google, Facebook and Microsoft. These companies face serious legal consequences if they fail to take content down, but copyright holders who don’t play by the rules often walk free.

This problem is one of the main issues highlighted in a new research report (pdf) published by the CCIA, a trade group which lists many prominent tech companies among its members.

The report proposes several changes to copyright legislation that should bring it in line with the current state of the digital landscape. One of the suggestions is to introduce statutory damages for people who abuse the takedown process.

“One shortcoming of the DMCA is that the injunctive-like remedy of a takedown, combined with a lack of due process, encourages abuse by individuals and entities interested in suppressing content,” CCIA writes.

“Although most rightsholders make good faith use of the DMCA, there are numerous well-documented cases of misuse of the DMCA’s extraordinary remedy. In many cases, bad actors have forced the removal of material that did not infringe copyright.”

The report lists several examples, including DMCA notices which are used to chill political speech by demanding the takedown of news clips, suppress consumer reviews, or retaliate against critics.

Many Internet services are hesitant to refuse these types of takedown requests as it may cause them to lose their safe harbor protection, while the abusers themselves don’t face any serious legal risk.

The CCIA proposes to change this by introducing statutory damage awards for abusive takedown requests. This means that the senders would face the same consequences as the copyright infringers.

“To more effectively deter intentional DMCA abuse, Congress should extend Section 512(f) remedies for willful misrepresentations under the DMCA to include statutory awards, as it has for willful infringement under Section 504(c),” CCIA writes.

In addition to tackling DMCA abuse the tech companies propose several other changes to copyright law.

One of the suggestions is to change the minimum and maximum statutory damages for copyright infringement, which are currently $750 and $150,000 per work.

According to the CCIA the minimum should be lowered to suit cases that involve many infringements, such as a user who hosts thousands of infringing works on a cloud storage platform.
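The scaling problem the CCIA points to is easy to see with a back-of-the-envelope calculation. A minimal sketch in Python, using the statutory range quoted above (the 2,000-work cloud-storage scenario is a hypothetical, not a figure from the report):

```python
# Statutory damages range per infringed work under current U.S. copyright law.
MIN_PER_WORK = 750
MAX_PER_WORK = 150_000

# Hypothetical scenario: a user hosts 2,000 infringing works on a cloud platform.
works = 2_000

# Even at the statutory minimum, total exposure reaches $1.5 million.
print(f"minimum exposure: ${works * MIN_PER_WORK:,}")  # $1,500,000
print(f"maximum exposure: ${works * MAX_PER_WORK:,}")  # $300,000,000
```

Because damages multiply per work rather than per case, the current minimum produces enormous awards in exactly the many-small-infringements cases the CCIA describes.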

The $150,000 maximum, on the other hand, is open to abuse by copyright trolls and rightsholders who may use it as a pressure tool.

The tech companies hope that U.S. lawmakers will consider these and other suggestions put forward in the research paper, to improve copyright law and make it future-proof.

“Since copyright law was written more than 100 years ago, the goal has been to encourage creativity to benefit the overall public good. It’s important as copyright is modernized to ensure that reforms continue to benefit not just rightsholders, but the overall public good,” the CCIA concludes.

Source: TorrentFreak, for the latest info on copyright, file-sharing, torrent sites and ANONYMOUS VPN services.

SANS Internet Storm Center, InfoCON: green: Actor that tried Neutrino exploit kit now back to Angler, (Wed, Aug 26th)

This post was syndicated from: SANS Internet Storm Center, InfoCON: green and was written by: SANS Internet Storm Center, InfoCON: green. Original post: at SANS Internet Storm Center, InfoCON: green


Last week, we saw the group behind a significant amount of Angler exploit kit (EK) traffic switch to Neutrino EK [1]. We didn't know if the change was permanent, and I also noted that criminal groups using EKs have quickly changed tactics in the past. This week, the group is back to Angler EK.

Over the past few days, I've noticed several examples of Angler EK pushing TeslaCrypt 2.0 ransomware. For today's diary, we'll look at four examples of Angler EK on Tuesday 2015-08-25 from 16:42 to 18:24 UTC. All examples delivered the same sample of TeslaCrypt 2.0 ransomware.

TeslaCrypt 2.0

TeslaCrypt is a recent family of ransomware that first appeared early this year. It's been known to mimic CryptoLocker, and we've seen it use the names TeslaCrypt and AlphaCrypt in previous infections [2,3,4]. According to Kaspersky Lab, version 2.0 of TeslaCrypt uses the same type of decrypt instructions as CryptoWall [5]; however, artifacts and traffic from the infected host reveal this is actually TeslaCrypt.

Kafeine from Malware Don't Need Coffee first tweeted about the new ransomware on 2015-07-13 [6]. The next day, Kaspersky Lab released details on this most recent version of TeslaCrypt [5].

I saw my first sample of TeslaCrypt 2.0 sent from Nuclear EK on 2015-07-20 [7]. Most TeslaCrypt 2.0 samples we've run across since then were delivered by Nuclear EK; however, we haven't seen a great deal of it. Until recently, most of the ransomware delivered by Angler EK was CryptoWall 3.0. This time, however, the iframes pointed to Angler EK. In most cases, the iframe led directly to the Angler EK landing page.
Shown above: From the third example, the iframe pointing to an Angler EK landing page.
Shown above: From the fourth example, the iframe pointing to an Angler EK landing page.

Looking at the traffic in Wireshark, we find two different IPs and four different domains from the four Angler infections during a 1 hour and 42 minute time span. Although Angler EK sends its payload encrypted, I was able to grab a decrypted copy from an infected host before it deleted itself.

  • File name: 2015-08-25-Angler-EK-payload-TeslaCrypt-2.0.exe
  • File size: 346.9 KB (355,239 bytes)
  • MD5 hash: 4321192c28109be890decfa5657fb3b3
  • SHA1 hash: 352f81f9f7c1dcdb5dbfe9bee0faa82edba043b9
  • SHA256 hash: 838f89a2eead1cfdf066010c6862005cd3ae15cf8dc5190848b564352c412cfa
  • Detection ratio: 3 / 49
  • First submission: 2015-08-25 19:51:01 UTC
  • VirusTotal analysis: link
  • analysis: link
  • analysis: link
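Analysts who retrieve a copy of the payload can confirm it matches the indicators above by recomputing the hashes locally. A minimal sketch using Python's standard `hashlib` (the file path in the comment is the sample name from this diary, used as a placeholder):

```python
import hashlib

def file_hashes(path, chunk_size=65536):
    """Compute MD5, SHA1 and SHA256 of a file in a single streaming pass."""
    digests = {name: hashlib.new(name) for name in ("md5", "sha1", "sha256")}
    with open(path, "rb") as f:
        # Read in chunks so large payloads don't have to fit in memory.
        for chunk in iter(lambda: f.read(chunk_size), b""):
            for d in digests.values():
                d.update(chunk)
    return {name: d.hexdigest() for name, d in digests.items()}

# Example usage (placeholder path):
# print(file_hashes("2015-08-25-Angler-EK-payload-TeslaCrypt-2.0.exe"))
```

Comparing all three digests against the published values guards against both accidental corruption and a mismatched sample.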

The following post-infection traffic was seen from the four infected hosts:

  • – TCP port 80 (http) – IP address check
  • – TCP port 80 (http) – – post-infection callback
  • – TCP port 80 (http) – – post-infection callback

Malwr.com's analysis of the payload reveals additional IP addresses and hosts:

  • – TCP port 80 (http) –
  • – TCP port 80 (http) –
  • – TCP port 80 (http) –
  • – TCP port 80 (http) –
  • – TCP port 443 (encrypted) –
  • – TCP port 443 (encrypted) –

Snort-based alerts on the traffic

I played back the pcap on Security Onion using Suricata with the EmergingThreats (ET) and ET Pro rule sets. The results show alerts for Angler EK and AlphaCrypt. The AlphaCrypt alerts triggered on callback traffic from TeslaCrypt 2.0.
Shown above: Got a captcha when trying one of the URLs.
Shown above: Final decrypt instructions with a bitcoin address for the ransom payment.

Final words

On the same cloned host with the same malware, we saw a different URL for the decrypt instructions each time. Every infection resulted in a different bitcoin address for the ransom payment, even though it was the same sample infecting the same cloned host.

We continue to see EKs used by this and other criminal groups to spread malware. Although we haven't seen as much CryptoWall this week, the situation could easily change in a few days' time.

Traffic and malware for this diary are listed below:

  • A zip archive of four pcap files with the infection traffic from Tuesday 2015-08-25 is available here. (4.14 MB)
  • A zip archive of the malware and other artifacts is available here. (957 KB)

The zip archive for the malware is password-protected with the standard password. If you don't know it, email and ask.

Brad Duncan
Security Researcher at Rackspace
Blog: – Twitter: @malware_traffic



(c) SANS Internet Storm Center. Creative Commons Attribution-Noncommercial 3.0 United States License.