Posts tagged ‘research’

TorrentFreak: Hilarious Remixers Hand Out Copyright Smackdown

This post was syndicated from: TorrentFreak and was written by: Andy. Original post: at TorrentFreak

Considering the amount of publicity a wrongful DMCA notice can generate these days, it’s no surprise that when a gift of a story presents itself, people are happy to jump on board.

Unfortunately, however, some stories are more complex than they first appear and when that complexity is borne out of a deliberate desire to mislead, chaos is bound to ensue.

On November 25 a tantalizing piece appeared in Electronic Beats detailing how in an apparent desire to protect copyright, Soundcloud had finally gone too far. A follow-up piece from YourEDM put meat on the bones.

“Just when you thought Soundcloud couldn’t get any worse, they strike again harder than ever. Now reaching an all time low, Soundcloud has removed a track that is nothing but 4 minutes of pure silence due to ‘Copyright Infringement’ claims,” it declared.

The track in question was uploaded to an account operated by D.J. Detweiler and consisted of a remix (if such a thing is possible) of the John Cage ‘track’ 4’33”, a famous performance consisting of nothing but silence.


“That’s right, a song that has literally no sound was flagged for removal. How? Because Soundcloud is lazy and takes shortcuts to flag and remove content,” the YourEDM piece continued.

“Instead of crawling the uploaded content for copyright material, which takes a decent amount of CPU power, Soundcloud has resorted into cutting that process out entirely and beginning to flag content based on JUST the track title.”

As recipes for outrage go, this was an absolute doozy and no wonder it was picked up by several publications in the days that followed. However, as is now becoming painfully obvious, the whole thing was a giant stunt. A statement from Soundcloud obtained by Engadget revealed the cringe-worthy truth.

“The upload referenced in the screenshot was not a track of silence and was taken down because it included Justin Bieber’s What Do You Mean without the rightsholder’s permission,” the company said.

“The respective user uploaded the track under the title “4’33”,” which is also the name of John Cage’s famous piece of silence but it was not, in fact, silence.”

So what were D.J. Detweiler’s aims? Well, trolling the press appears to be one. In a biting follow-up amid several retweets of regurgitated articles on the same topic, D.J. Detweiler posted the following image.

Another aim appears to be recreating the work of Cage to prove a point. Although Cage’s track 4’33” was supposed to be silent, ‘performers’ are expected to be present but not play. Unless performed in a vacuum, the resulting ‘performance’ therefore includes ambient noise. Equally, it appears that D.J. Detweiler’s ‘silence’ is now intentionally causing noise around the Internet too.

“We are making a remix of the original performance of John Cage. The only different thing is that we are making it on the internet in 2015, instead of doing it in a space like a theater, like John Cage did. The whole environment around what we’re doing is the performance because everybody’s reacting.”

But trolling and frivolity aside, it does appear that DJ Detweiler have a copyright message to deliver.

“When John Cage wrote that piece, one of the main reasons was because he was trying to ask, who owns the silence? Who has the copyright for the silence?” they ask. “The laws surrounding copyright at this point seem highly outdated and need some sort of reformation, and we just want to push that.”

While the group have certainly achieved their aims, it’s perhaps a shame that this was achieved at the expense of publications that mainly appeared sympathetic to concerns about often overreaching copyright law.

That being said, when one looks at DJ Detweiler’s Facebook and homepages (epilepsy warning!), the value of doing a little more research becomes clear.

DJ Detweiler are taking part in a panel discussion about “branding, hype and trends” this Thursday at the 3hd Festival in Berlin. He’s described as an individual there but at this point, who knows?

In the meantime enjoy his/their remix of Sandstorm, Smack My Bitch Up, and my personal favorite, DJ Hazard’s Mr Happy.

Source: TorrentFreak, for the latest info on copyright, file-sharing, torrent sites and ANONYMOUS VPN services.

Krebs on Security: DHS Giving Firms Free Penetration Tests

This post was syndicated from: Krebs on Security and was written by: BrianKrebs. Original post: at Krebs on Security

The U.S. Department of Homeland Security (DHS) has been quietly launching stealthy cyber attacks against a range of private U.S. companies — mostly banks and energy firms. These digital intrusion attempts, commissioned in advance by the private sector targets themselves, are part of a little-known program at DHS designed to help “critical infrastructure” companies shore up their computer and network defenses against real-world adversaries. And it’s all free of charge (well, on the U.S. taxpayer’s dime).


Organizations participating in DHS’s “Cyber Hygiene” vulnerability scans. Source: DHS

KrebsOnSecurity first learned about DHS’s National Cybersecurity Assessment and Technical Services (NCATS) program after hearing from a risk manager at a small financial institution in the eastern United States. The manager was comparing the free services offered by NCATS with private sector offerings and was seeking my opinion. I asked around to a number of otherwise clueful sources who had no idea this DHS program even existed.

DHS declined requests for an interview about NCATS, but the agency has published some information about the program. According to DHS, the NCATS program offers full-scope penetration testing capabilities in the form of two separate programs: a “Risk and Vulnerability Assessment,” (RVA) and a “Cyber Hygiene” evaluation. Both are designed to help the partner organization better understand how external systems and infrastructure appear to potential attackers.

“The Department of Homeland Security (DHS) works closely with public and private sector partners to strengthen the security and resilience of their systems against evolving threats in cyberspace,” DHS spokesperson Sy Lee wrote in an email response to an interview request. “The National Cybersecurity Assessments and Technical Services (NCATS) team focuses on proactively engaging with federal, state, local, tribal, territorial and private sector stakeholders to assist them in improving their cybersecurity posture, limit exposure to risks and threats, and reduce rates of exploitation. As part of this effort, the NCATS team offers cybersecurity services such as red team and penetration testing and vulnerability scanning at no cost.”

The RVA program reportedly includes scans of the target’s operating systems, databases, and Web applications for known vulnerabilities, and then tests to see if any of the weaknesses found can be used to successfully compromise the target’s systems. In addition, RVA program participants receive scans for rogue wireless devices, and their employees are tested with “social engineering” attempts to see how they respond to targeted phishing attacks.

The Cyber Hygiene program — which is currently mandatory for agencies in the federal civilian executive branch but optional for private sector and state, local and tribal stakeholders — includes both internal and external vulnerability and Web application scanning.

The reports show detailed information about the organization’s vulnerabilities, including suggested steps to mitigate the flaws.  DHS uses the aggregate information from each client and creates a yearly non-attributable report. The FY14 End of Year report created with data from the Cyber Hygiene and RVA program is here (PDF).

Among the findings in that report, which drew information from more than 100 engagements last year:

-Manual testing was required to identify 67 percent of the RVA vulnerability findings (as opposed to off-the-shelf, automated vulnerability scans);

-More than 50 percent of the total 344 vulnerabilities found during the scans last year earned a severity rating of “high” (40 percent) or “critical” (13 percent).

-RVA phishing emails resulted in a click rate of 25 percent.


Data from NCATS FY 2014 Report.


I was curious to know how many private sector companies had taken DHS up on its rather generous offers, since these services can be quite expensive if conducted by private companies. In response to questions from this author, DHS said that in Fiscal Year 2015 NCATS provided support to 53 private sector partners. According to data provided by DHS, the majority of the program’s private sector participation comes from the financial services and energy sectors — typically at regional or smaller institutions.

DHS has taken its lumps over the years for not doing enough to get its own cybersecurity house in order, let alone helping industry fix its problems. In light of its past cybersecurity foibles, the NCATS program on the surface would seem like a concrete step toward blunting those criticisms.

I wondered how someone in the penetration testing industry would feel about the government throwing its free services into the ring. Dave Aitel is chief technology officer at Immunity Inc., a Miami Beach, Fla.-based security firm that offers many of the same services NCATS bundles in its product.


Aitel said one of the major benefits for DHS in offering NCATS is that it can use the program to learn about real-world vulnerabilities in critical infrastructure companies.

“DHS is a big player in the ‘regulation’ policy area, and the last thing we need is an uninformed DHS that has little technical expertise in the areas that penetration testing covers,” Aitel said. “The more DHS understands about the realities of information security on the ground – the more it treats American companies as their customers – the better and less impactful their policy recommendations will be. We always say that Offense is the professor of Defense, and in this case, without having gone on the offense DHS would be helpless to suggest remedies to critical infrastructure companies.”

Of course, the downsides are that sometimes you get what you pay for, and the NCATS offering raises some interesting questions, Aitel said.

“Even if the DHS team doing the work is great, part of the value of an expensive penetration test is that companies feel obligated to follow the recommendations and improve their security,” he said. “Does the data found by a DHS testing team affect a company’s SEC liabilities in any way? What if the Government gets access to customer data during a penetration test – what legal ramifications does that have? This is a common event and pre-CISPA it may carry significant liability.”

As far as the potential legal ramifications of any mistakes DHS may or may not make in its assessments, the acceptance letter (PDF) that all NCATS customers must sign says DHS provides no warranties of any kind related to the free services. The rules of engagement letter from DHS further lays out ground rules and specifics of the NCATS testing services.

Aitel, a former research scientist at the National Security Agency (NSA), raised another issue: Any vulnerabilities found anywhere within the government — for example, in a piece of third party software — are supposed to go to the NSA for triage, and sometimes the NSA is later able to use those vulnerabilities in clandestine cyber offensive operations.

But what about previously unknown vulnerabilities found by DHS examiners?

“This may be less of an issue when DHS uses a third party team, but if they use a DHS team, and they find a bug in Microsoft IIS (Web server), that’s not going to the customer – that’s going to the NSA,” Aitel said.

And then there are potential legal issues with the government competing with private industry.

Alan Paller, director of research at the SANS Institute, a Bethesda, Md.-based security training group, isn’t so much concerned about the government competing with the private sector for security audits. But he said DHS is giving away something big with its free assessments: an excuse for the leadership at scanned organizations to do nothing after the assessment, and a way to use the results to justify spending less on security.

“The NCATS program could be an excellent service that does a lot of good but it isn’t,” Paller said. “The problem is that it measures only a very limited subset of the vulnerability space but comes with a gold plated get out of jail free card: ‘The US government came and checked us.’ They say they are doing it only for organizations that cannot afford commercial assessments, but they often go to organizations that have deep enough pockets.”

According to Paller, despite what the NCATS documents say, the testers do not perform active penetration testing against the network. Rather, he said, they are constrained by their rules of engagement.

“Mostly they do architectural assessments and traffic analysis,” he said. “They get a big packet capture and they baseline and profile and do some protocol analysis (wireless).”

Paller said the sort of network architecture review offered by DHS’s scans can only tell you so much, and that the folks doing it do not have deep experience with one of the more arcane aspects of critical infrastructure systems: Industrial control systems of the sort that might be present in an energy firm that turns to NCATS for its cybersecurity assessment.

“In general the architectural reviews are done by younger folks with little real world experience,” Paller said. “The big problem is that the customer is not fully briefed on the limitations of what is being done in their assessment and testing.”

Does your organization have experience with NCATS assessments? Are you part of a critical infrastructure company that might use these services? Would you? Sound off in the comments below.

Krebs on Security: Security Bug in Dell PCs Shipped Since 8/15

This post was syndicated from: Krebs on Security and was written by: BrianKrebs. Original post: at Krebs on Security

All new Dell laptops and desktops shipped since August 2015 contain a serious security vulnerability that exposes users to online eavesdropping and malware attacks. Dell says it is prepping a fix for the issue, but experts say the threat may ultimately need to be stomped out by the major Web browser makers.

At issue is a root certificate installed on newer Dell computers that also includes the private cryptographic key for that certificate. Clever attackers can use this key from Dell to sign phony browser security certificates for any HTTPS-protected site.

Translation: A malicious hacker could exploit this flaw on open, public networks (think WiFi hotspots, coffee shops, airports) to impersonate any Web site to a Dell user, and to quietly intercept, read and modify all of a vulnerable Dell system’s Web traffic.
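For readers who want to check a machine themselves, here is a minimal sketch (an illustration only, not Dell’s or Durumeric’s tooling) that looks through the Windows trusted root store for a certificate whose subject mentions eDellRoot. It assumes Python 3 on the affected Windows system plus the third-party cryptography package:

    import ssl                      # ssl.enum_certificates() is Windows-only
    from cryptography import x509   # third-party: pip install cryptography

    found = False
    for cert_der, encoding, trust in ssl.enum_certificates("ROOT"):
        if encoding != "x509_asn":  # skip non-X.509 entries in the store
            continue
        cert = x509.load_der_x509_certificate(cert_der)
        if "eDellRoot" in cert.subject.rfc4514_string():
            found = True
            break

    print("eDellRoot is in the trusted ROOT store" if found
          else "eDellRoot not found in the ROOT store")

Anything such a check turns up should then be dealt with by following Dell’s own published removal instructions.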

Joe Nord, the computer security researcher credited with discovering the problem, said the trouble stems from a certificate Dell installed named “eDellRoot.”

Dell says the eDellRoot certificate was installed on all new desktops and laptops shipped from August 2015 to the present day. According to the company, the certificate was intended to make it easier for Dell customer support to assist customers in troubleshooting technical issues with their computers.

“We began loading the current version on our consumer and commercial devices in August to make servicing PC issues faster and easier for customers,” Dell spokesperson David Frink said. “When a PC engages with Dell online support, the certificate provides the system service tag allowing Dell online support to immediately identify the PC model, drivers, OS, hard drive, etc. making it easier and faster to service.”

“Unfortunately, the certificate introduced an unintended security vulnerability,” the company said in a written statement. “To address this, we are providing our customers with instructions to permanently remove the certificate from their systems via direct email, on our support site and Technical Support.”

In the meantime, Dell says it is removing the certificate from all Dell systems going forward.

“Note, commercial customers who image their own systems will not be affected by this issue,” the company’s statement concluded. “Dell does not pre-install any adware or malware. The certificate will not reinstall itself once it is properly removed using the recommended Dell process.”


The vulnerable certificate from Dell. Image: Joe Nord

It’s unclear why nobody at Dell saw this as a potential problem, especially since Dell’s competitor Lenovo suffered a very similar security nightmare earlier this year when it shipped an online ad tracking component called Superfish with all new computers.

Researchers later discovered that Superfish exposed users to having their Web traffic intercepted by anyone else who happened to be on that user’s local network. Lenovo later issued a fix and said it would no longer ship computers with the vulnerable component.

Dell’s Frink said the company would not divulge how many computers it has shipped in the vulnerable state. But according to industry watcher IDC, the third-largest computer maker shipped a little more than 10 million computers worldwide in the third quarter of 2015.

Zakir Durumeric, a Ph.D. student and research fellow in computer science and engineering at the University of Michigan, helped build a tool on his site — — which should tell Dell users if they’re running a vulnerable system.

Durumeric said the major browser makers will most likely address this flaw in upcoming updates.

“My guess is this has to be addressed by the browser makers, and that we’ll see them blocking” the eDellRoot certificate, he said. “My advice to end users is to make sure their browsers are up-to-date.”

Further reading:

An in-depth discussion of this issue on Reddit.

Dan Goodin‘s coverage over at Ars Technica.

Dell’s blog advisory.

Update, 1:15 a.m. ET: Added link to Dell’s instructions for removing the problem.

AWS Security Blog: s2n and Lucky 13

This post was syndicated from: AWS Security Blog and was written by: Colm MacCarthaigh. Original post: at AWS Security Blog

Great security research combines extremely high levels of creativity, paranoia, and attention to detail. All of these qualities are in evidence in two new research papers about how s2n, our Open Source implementation of the SSL/TLS protocols, handles the Lucky 13 attack from 2013. 

The research identified issues with how s2n mitigates Lucky 13, along with improvements that could be made. These issues did not impact Amazon, AWS, or our customers, and are not the kind that could be exploited in the real world, but the research shows that the early versions of s2n’s mitigations made Lucky 13 attacks tens of millions of times more difficult for attackers, rather than the trillions of times more difficult that we had intended. In any event, the versions of s2n concerned have never been used in production, and improvements that completely prevent the issue were included in the versions of s2n available on GitHub from July 11th, 2015.

The two papers are from:

  • Martin Albrecht  and Kenny Paterson of the Information Security Group at Royal Holloway, University of London, who have a paper concerning s2n’s initial implementation of Lucky 13 CBC verification, and s2n’s secondary mitigation mechanism using timing delays.  Kenny Paterson is also one of the original authors of the Lucky 13 paper.
  • Manuel Barbosa (HASLab – INESC TEC and FCUP) has discovered a bug introduced in the process of making improvements related to the first paper, and has upcoming publications in collaboration with other researchers (see below) covering the limits of human review, other interesting implications of this programming error, and formal verification solutions to prevent such errors in the first place.

We would like to thank Albrecht, Paterson and Barbosa for reporting their research through our vulnerability reporting process and for discussing and reviewing the changes we made to s2n in response to that research. Both papers are valuable contributions towards improving the protections in s2n and cryptography generally, and we’re grateful for the work involved.

Our summary: s2n includes two different mitigations against the Lucky 13 attack and although the original implementations in s2n were effective against real-world attacks, each could be improved to be more robust against theoretical attacks.

Lucky 13: a quick recap

In February 2013, Nadhem J. AlFardan and Kenny Paterson of the Information Security Group at Royal Holloway, University of London, published the Lucky 13 attack against TLS. Adam Langley’s blog post on the topic is a great, detailed summary of the attack, how it operates, and how it was mitigated in OpenSSL.

A brief synopsis is that Lucky 13 is an “Active Person in the Middle” attack against block ciphers where an attacker who is already capable of intercepting and modifying your traffic may tamper with that traffic in ways that allow the attacker to determine which portions of your traffic are encrypted data, and which portions are padding (bytes included to round up to a certain block size).

The attacker’s attempt to tamper with the traffic is detected by the design of the TLS protocol, and triggers an error message. The Lucky 13 research showed that receiving this error message can take a different amount of time depending on whether real data or padding was modified. That information can be combined with other cryptographic attacks to recover the original plaintext.

How the Lucky 13 attack interacts with TLS

Fortunately, there are a number of factors that make the Lucky 13 attack very difficult in real world settings against SSL/TLS.

Firstly, the timing differences involved are extremely small, commonly smaller than a microsecond. The attacker must intercept, modify, pass on traffic and intercept the errors on a network that is stable to the point that differences of less than a microsecond are measurable. Unsecured Wi-Fi networks are easiest to intercept traffic on, but are neither fast enough nor stable enough for the attack to work. Wide-area networks are also too inconsistent so an attacker must be close to their target.  Even within a single high-performance data center network, it is very difficult to obtain the conditions required for this attack to succeed.  With the security features of Amazon Virtual Private Cloud (VPC), it is impossible to surreptitiously intercept TLS traffic within AWS data centers.

Making things harder still for the attacker, attempts to accelerate the attack by trying multiple connections in parallel will contend network queues and can obscure the very measurements needed for the attack to succeed. Our own experiments with real-world networks show an exponential distribution of packet delays in the microsecond to tens of microseconds range.

Secondly, the Lucky 13 attack requires a client to attempt to send or receive the same information over and over again; hundreds to millions of times depending on what is known about the data already along with the exact details of the implementation being targeted. Web browsers typically limit the number of attempts they make to three, as do clients such as the AWS SDKs. Still, as the Lucky 13 paper points out: more advanced and involved attacks may use JavaScript or other techniques to generate that kind of request pattern.

Thirdly, the errors generated by the Lucky 13 attack are fatal in SSL/TLS and cause the connection to be closed. An attack would depend on neither the client nor the server noticing incredibly high connection error rates. At AWS, monitoring and alarming on far lower error rates than would be required to mount a Lucky 13 attack is a standard part of our operational readiness.

Lastly, in response to the original Lucky 13 research, the CBC cipher suites impacted by the vulnerability are no longer as common as they were. Today’s modern web browsers and servers no longer prefer these cipher suites. At AWS, we now see less than 10% of connections use CBC cipher suites.

Considering the limitations on the attack, it is difficult to conceive of circumstances where it is exploitable in the real-world against TLS. Indeed, some implementations of the TLS protocol have chosen not to mitigate the Lucky 13 attack.

With s2n, we decided to be cautious, as attacks only improve over time and to follow the statement of the risk from Matthew Green’s blogpost on the topic: “The attack is borderline practical if you’re using the Datagram version of TLS (DTLS). It’s more on the theoretical side if you’re using standard TLS. However, with some clever engineering, that could change in the future. You should probably patch!“.

Lucky 13, s2n, and balancing trade-offs

Despite the significant difficulty of any practical attack, as described above, s2n has included two different forms of mitigation against the Lucky 13 attack since release: first by minimizing the timing difference mentioned above, and second, by masking any difference by including a delay of up to 10 seconds whenever any error is triggered.

Martin Albrecht and Kenny Paterson, again of the Information Security Group at Royal Holloway, University of London, got in touch to make us aware that our first mitigation could be made more effective on its own, and then after learning about the additional mitigation strategy also performed some impressive research on the effectiveness of the delay technique.

One obvious question was raised: why doesn’t s2n mitigate Lucky 13 in the same way that OpenSSL does? OpenSSL’s change introduced significant complexity, but s2n’s primary goal is to implement the TLS/SSL protocols in a secure way, which includes risk reduction via reduced complexity. Besides hard-to-attack timing issues, our observation is that other software coding errors can lead to much more practical forms of attacks including remote execution, such as the ShellShock vulnerability, or memory disclosure such as the HeartBleed vulnerability in OpenSSL. In short: these latter kinds of vulnerabilities, due to small coding errors, can be catastrophic and rank as higher impact threats.

We must carefully balance the risk of additional complexity and code against the benefit. This leads us to be extremely conservative in adding code to s2n and to prioritize readability, auditability and simplicity in our implementation, which we emphasize in s2n’s development guide.  Simply put, we keep things simple for better security.

Some modern cryptographic implementations, most notably NaCl, strive to make all operations take the same amount of time to execute regardless of the input, using techniques such as branch-free programming and bitwise operations. Where appropriate s2n employs these techniques, too.

For example, s2n verifies CBC padding in constant time regardless of the amount of padding, an important aspect of mitigating Lucky 13 on its own.  Wherever s2n uses these techniques, we include explanatory code comments, in addition to analyzing and auditing the machine code emitted by multiple common compilers.
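To give a flavor of what branch-free code looks like, here is a tiny illustrative sketch of comparing two byte strings without branching on their contents. It is written in Python purely for readability; s2n itself is written in C, and this is not s2n’s code:

    def constant_time_equals(a: bytes, b: bytes) -> bool:
        # Fold every byte difference together with OR so the amount of work
        # does not depend on where (or whether) the inputs differ.
        if len(a) != len(b):
            return False
        diff = 0
        for x, y in zip(a, b):
            diff |= x ^ y
        return diff == 0

In C this pattern compiles down to straight-line code with no data-dependent branches; in Python it only illustrates the shape of the idea, since the interpreter makes no such timing guarantees (the standard library’s hmac.compare_digest exists for exactly this purpose).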

Unfortunately, the design of the CBC algorithm that was impacted by Lucky 13 pre-dates these techniques, and while it is feasible to do constant-time padding verification, it is not possible to apply the technique to HMAC verification without changing the interface to the HMAC algorithm in quite a complex way, and the interaction between CBC and HMAC verification is where the timing issue shows up.

As the research from Albrecht and Paterson found, retrofitting constant-time practices into OpenSSL’s CBC handling required hundreds of lines of code; inter-weaving the TLS handling code and the HMAC code, and changing the interface to HMAC in a non-standard way. Although this was done with great care and in a manner appropriate to OpenSSL, we do not believe a similar approach is suitable for s2n, where it would lead to an almost 10% increase in code and, in our view, would have a significant impact on clarity, for a very marginal benefit. Additionally, we are also formally verifying our implementation of the HMAC algorithm against the published HMAC standard, so we try hard to use HMAC via its standard interface as closely as possible.

In other words: changing the HMAC interface to be similar to OpenSSL would be adding hard-to-follow code and would defeat much of the benefit of formally verifying our implementation of the HMAC algorithm. With these trade-offs in mind, s2n first mitigated Lucky 13 by counting the number of bytes handled by HMAC and making this equal in all cases, a “close to constant time” technique that completely preserves the standard HMAC interface. This is an improvement over several alternative TLS implementations, which do not equalize the input to HMAC, and we could not observe any timing differences in our network testing.
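As a rough illustration of that byte-counting idea (a hedged sketch only, again in Python rather than C, not s2n’s actual code, and with the bounds checking a real implementation needs left out): after verifying the HMAC over the record’s payload, the bytes that were not covered (the MAC and padding region) are run through a second HMAC call, so the total number of bytes handled by the hash is the same for every record of a given length, whatever the padding claims to be.

    import hmac, hashlib

    def check_record_mac(mac_key: bytes, record_plaintext: bytes, mac_len: int) -> bool:
        # Assumed layout: payload || MAC || padding || padding_length_byte
        pad_len = record_plaintext[-1]
        payload_end = len(record_plaintext) - mac_len - pad_len - 1
        payload = record_plaintext[:payload_end]
        received_mac = record_plaintext[payload_end:payload_end + mac_len]

        ok = hmac.compare_digest(
            hmac.new(mac_key, payload, hashlib.sha256).digest(), received_mac)

        # Equalize the byte count: hash whatever was not covered above, so the
        # hash always processes len(record_plaintext) bytes regardless of padding.
        hmac.new(mac_key, record_plaintext[payload_end:], hashlib.sha256).digest()
        return ok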

Albrecht and Paterson pointed out to us that this technique does not account for some internal differences in how the HMAC algorithm behaves depending on the size of the data processed.  We were aware of this limitation, but our validation experiments – attacking s2n the same way an attacker would, and with our secondary mitigation turned off – had not been able to measure a time difference using this approach.

Separate targeted measurements, using a synthetic benchmarking environment and direct instrumentation of the extracted code, illustrated the timing difference better, and their latest paper includes some analysis of how this timing difference may be used in an attack.

Albrecht and Paterson were also able to help us avoid making complex changes by suggesting a small and clever change to the HMAC interface: add a single new call that always performs two internal rounds of compression, even if one may not be necessary. With this change in place, the timing differences would be unobservable even in the synthetic environment. Considering the low risk and the experimental evidence of our previous testing we made this change in s2n as part of a commit on July 13th, just less than two weeks after s2n’s public launch on GitHub.

Underscoring the principle that “with more code comes more risk,” this change had a small bug itself; instead of expecting it to take 9 bytes of space to finalize a checksum, the original change specified 8. Manuel Barbosa, José Almeida (HASLab – INESC TEC and Univ. Minho), Gilles Barthe, and François Dupressoir (IMDEA Software Institute) have put together a comprehensive paper detailing the issue and the value of more testing in this area and how properties such as constant time of execution can be verified more systematically.

Perhaps most interesting about this is that the code author, reviewers, and researchers involved are all extremely familiar with the internals of HMAC and hash algorithms, and this small detail still escaped review (though no formal code audit had yet begun on the changes). The same authors are also working on a formal verification tool specific for constant time implementations in collaboration with Michael Emmi (IMDEA Software Institute). We are working with vendors  to apply these techniques more generally.

s2n’s timing blinding mitigation

Lucky 13 is not the only attack against the TLS protocol to use timing differences, and so, as part of developing s2n, we decided to include a general mitigation against timing attacks. Whenever a TLS error is encountered, including Lucky 13 attack attempts, s2n pauses for a random duration of between 1ms and 10 seconds. The goal here is to produce a general high-level mitigation which would have rendered any known timing attack impractical, even if the underlying issue were unpatched, or, as is the case here, if another intended mitigation is not fully effective.

The effect is to hide a signal (the timing difference) in a very large sea of noise. This doesn’t make it impossible to find the original signal, but it takes more measurements. The math involved is straightforward and is included in the original Lucky 13 paper. If the timing issue is 1 microsecond, then a delay of between 1 millisecond and 10 seconds increases the number of attempts required by about 8.3 trillion; far outside the realm of the practical. However this is only the case if the delay is uniform.
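The arithmetic behind that figure is easy to reproduce. A back-of-the-envelope sketch, assuming the number of measurements needed to detect a mean shift d buried in noise of variance sigma^2 grows on the order of sigma^2 / d^2:

    a, b = 1e-3, 10.0                     # uniform delay between 1 ms and 10 s
    noise_variance = (b - a) ** 2 / 12    # variance of a uniform distribution: ~8.33 s^2
    signal = 1e-6                         # the timing difference being hidden: ~1 microsecond
    print(noise_variance / signal ** 2)   # ~8.3e12, i.e. roughly 8.3 trillion attempts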

Initially our random delay duration was implemented using the usleep routine (or by the calling process if it is handling multiple connections), which is granular to microseconds.  One of our concerns with the usleep approach is that it may be too coarse: a timing issue smaller than a microsecond may still “slip through” if the implementation of usleep is consistent enough.

This issue is not fatal; merely by adding 5 seconds of delay on average to each attempt the Lucky 13 attack is slowed down from a roughly 64-hour per byte attack to an attack which would take at least a year of attack time per byte to succeed (again, assuming idealized circumstances for the attacker).

Based on the renewed concerns, and with the encouragement of Albrecht and Paterson, we changed s2n to use nanosleep() for our delaying mechanism on July 11th. Indeed this change proved beneficial, as further research by Albrecht and Paterson over the coming months showed that observable timing issues could indeed slip through the old microsecond-granular sleep mechanism and that the plaintext could be recovered in an idealized synthetic environment by making tens of millions of attempts per byte, which is less than the intended trillions.

So, for an over-simplified assessment of how impractical an attack of this nature would be: assuming an attacker was in the right place, at the right time, and could convince their victim to send their password to a server, over and over again as fast as the network would allow, it would take in the region of 20 years to recover an 8-character password via the Lucky 13 attack against s2n. The real-world wall-clock time required may be shortened here by trying attempts in parallel, but the contention of network queues and cumulative delay of packets also interfere with measurements, and it is also an even harder attack to mount against clients in the first place.

The change to nanoseconds means that this is now prevented. We are also examining other changes, including a mode in which s2n emulates hardware by handling errors only at set “tick” intervals, as well as a change to enforce a higher minimum timing delay in all cases.


As mentioned earlier, some implementations of TLS do not include mitigations against Lucky 13. In some cases this is because they are implemented in higher level languages, where constant time coding techniques are not possible.

These languages are generally safer than low-level ones when it comes to the kinds of remote-execution and memory disclosure risks presented by coding errors. As technologists, we would find it extremely unfortunate to be forced to choose between safer programming languages and safer cryptography. Timing blinding may be a useful way to bridge this gap, and we look forward to further research in this area.

– Colm

Linux How-Tos and Linux Tutorials: Best in Breed Twitter Clients for Linux

This post was syndicated from: Linux How-Tos and Linux Tutorials and was written by: Jack Wallen. Original post: at Linux How-Tos and Linux Tutorials

The Choqok Twitter client on Linux.

Twitter is a social networking service that is a bit of a conundrum to many. At one moment it can be used to connect with people of a like mind; at another it’s an exercise in frustration, thanks to the never-ending stream of data. But for those who depend upon the service as a means to stay connected, promote a product or service, or even (on certain levels) research a given topic, it’s a boon.

But for Linux users, the client side of things has lagged behind for some time. Thankfully, you can now find solid clients that do the job and do it well. Back in “the day,” amazing tools like Tweetdeck were client-based tools for Windows and Mac. Running the best of the best required you to install WINE, download the .msi installer, and cross your fingers. That was then…this is now. Most services like Tweetdeck now run flawlessly in nearly every browser (even Midori).

But even though the browser has become King of the apps, there are still desktop and command line clients for the likes of Twitter available—each of which offers a variety of features. But if one of those desktop clients won’t do it for you, I’ll show you a handy trick to help make one browser-based client behave a bit more like a desktop client.

Let’s first look at what I consider to be the two best in breed Twitter clients for Linux.

Choqok
Choqok is the Persian word for sparrow and is a Twitter client that has an impressive list of features. Although it is a KDE-centric app (and does run a bit better in its native environment), Choqok will perform splendidly in nearly all desktop environments and it supports the latest Twitter API. Choqok also enjoys panel integration (even with Ubuntu Unity), where you can do quick posts, update your timeline, and configure the app. But what is most impressive about this desktop client is its interface. Unlike Tweetdeck or Hootsuite (which can both very quickly become overwhelming), Choqok simplifies the Twitter experience and even helps to curtail the insanely fast flowing stream of tweets (that can cause you to miss out on something you actually want to see).

Installing Choqok is actually very simple, as it is found in your standard repositories. You can open up the Ubuntu Software Center (or whatever you happen to use—AppGrid, Synaptic, etc.) and, with a single click, install the client. The nice thing about installing from the Ubuntu Software Center is that this is one instance where you actually get the latest release.

Once you’ve installed it, I recommend logging into your Twitter account using the desktop’s default browser. With that out of the way, fire up Choqok and then (when prompted) request an authentication token. Once you have the token, copy/paste it into the requesting Choqok window and grant the app permission to your Twitter account. You will finally be greeted by the Choqok main window (Figure A).

Beyond the interface, one feature you will want to make use of is the Choqok filtering system. With the help of this filtering system you can make it far easier to see exactly what you want from your Twitter feeds. This is actually one area where Choqok excels. Here’s how it works.

Open up Choqok and then click Tools > Configure Filters. When the new window opens, click the + button to open the filter definition window (Figure B).

At this point you have to make a few choices. The first is the Filter field. There are four options:

  • Post Text: filter the text of a post

  • Author Username: filter the name of a Twitter user

  • Reply to User: filter replies from a user

  • Author Client: filter through the client used by the author.

Let’s say you want to set up a filter for posts containing the keyword linux. To do that you would set the following options:

  • Select Post Text from the Filter field

  • Select Contain from the Filter type

  • Enter linux in the Text field

  • Select Highlight Posts from the filter action.

Once you’re done, click OK and the filter is ready. These are considered quick filters, so they are applied immediately. How do they work? Simple. Since we selected Highlight Posts from the Filter action, all posts that match a filter will be highlighted with a red box as your timeline updates (Figure C).

Choqok filters highlighting matching posts in the timeline.

This makes it incredibly easy to scan through your main Twitter feed to find posts related to your filters.

Corebird
Corebird is another simple-to-use Linux Twitter client with an eye for an outstanding interface. Once again you won’t be inundated with a blindingly fast timeline that’s nearly impossible to follow. Corebird is to GNOME what Choqok is to KDE…and it does so with a bit more zip. It offers a very similar feature set to Choqok and can be installed from a specific PPA. Here are the steps for installation:

  1. Open up a terminal window

  2. Add the PPA with the command sudo apt-add-repository ppa:ubuntuhandbook1/corebird

  3. Update apt with the command sudo apt-get update 

  4. Install Corebird with the command sudo apt-get install corebird

  5. Allow the installation to complete

  6. If you run into dependency errors during installation, solve the errors with the command sudo apt-get install -f

One of the best features of Corebird is the inclusion of lists. This makes following a collection of users so much easier (especially when you have thousands of people you follow). Say, for example, you follow a number of users interested in (or posting about) linux and you want to be able to quickly see what they’ve posted on a regular basis. You can create a list for these users by doing the following:

  1. Open Corebird

  2. Click on the list icon in the left navigation (third up from the bottom)

  3. Enter a name for the list

  4. Click Create.

Now that the list is created, you have to add users. To do this, simply find a user in your timeline (there’s a handy search function for that) and then, from the drop-down in their profile, select Add to/Remove from list. In the popup window, locate the newly created list(s), select the list, and click Save (Figure D). The user has now been added to the list. Continue adding users until your list is complete.

The Corebird Twitter client.

To read posts associated with that list, click on the List icon, locate the list in question, and double click its name. All posts from users on the list will appear in the feed.

Tweetdeck
At one time, the only way you could enjoy Tweetdeck was to install the Chrome addon and view it from your browser. Now, however, Tweetdeck works perfectly from within Firefox. However, if you happen to be a Chrome (or Chromium) user, here’s a cool trick. You can create a launcher for Tweetdeck such that it will open the webpage in its own app-like window (without the extraneous web browser bits and pieces). I’ll demonstrate how to do that in Elementary OS Freya.

  1. Install the Tweetdeck addon to Chrome (or Chromium)

  2. From within Chrome, click the Apps button

  3. Locate the Tweetdeck icon

  4. Right-click the Tweetdeck icon

  5. Click Create Shortcuts

  6. De-select Desktop

  7. Right-click the Tweetdeck icon again

  8. Select Open as window

  9. Click on the desktop menu (aka Slingshot Menu)

  10. Locate and click the Tweetdeck entry to open the “app”

That’s it. You should see Tweetdeck open in its very own app-like window (Figure E).

Tweetdeck running in its own window on Linux.

While the Tweetdeck window is open, you can right-click its icon on the dock and select Keep In Dock to add a launcher on the dock.

TTYtter
For those who prefer the command line over a GUI, you’re in luck. The TTYtter application is a simple tool you can use to quickly post to Twitter from the command line. It’s easy to install, a bit tricky to set up, and very simple to use.

To install TTYtter, do the following:

  1. Open a terminal window

  2. Issue the command sudo apt-get install ttytter 

  3. Type your sudo password and hit Enter

  4. Type y to continue

  5. Allow the installation to complete

Once installed, you run the app with the command ttytter. On first run, the app will request a token and then return a URL that you must paste into a browser (one that is already logged into your Twitter account). When prompted (in your browser), click the Authorize App button, which will present you with an authorization PIN. Enter that PIN into the waiting command prompt (Figure F) and hit Enter. Run the ttytter command again and you will be logged in with your Twitter account.

The TTYtter command line client.

To post to your account with TTYtter, you simply issue a command as such:

ttytter -status="The Linux Foundation rocks!"

You can also issue the command ttytter and then hit Enter to get a TTYtter prompt, where you can post a status update simply by typing your post and hitting Enter (without having to add ttytter -status=""). To exit out of TTYtter, hit CTRL+c.

Personally, of the three GUI options, Tweetdeck is by far the best—but it’s not truly a desktop client. If you’re looking for a straight up desktop Twitter client for Linux, you can’t go wrong with either Choqok or Corebird. If you’re okay with a web-based client, you can always trick Tweetdeck into behaving like a desktop app with my handy little trick (which also works for most desktop environments).

Happy tweeting!

Schneier on Security: Voter Surveillance

This post was syndicated from: Schneier on Security and was written by: schneier. Original post: at Schneier on Security

There hasn’t been that much written about surveillance and big data being used to manipulate voters. In Data and Goliath, I wrote:

Unique harms can arise from the use of surveillance data in politics. Election politics is very much a type of marketing, and politicians are starting to use personalized marketing’s capability to discriminate as a way to track voting patterns and better “sell” a candidate or policy position. Candidates and advocacy groups can create ads and fund-raising appeals targeted to particular categories: people who earn more than $100,000 a year, gun owners, people who have read news articles on one side of a particular issue, unemployed veterans…anything you can think of. They can target outraged ads to one group of people, and thoughtful policy-based ads to another. They can also fine-tune their get-out-the-vote campaigns on Election Day, and more efficiently gerrymander districts between elections. Such use of data will likely have fundamental effects on democracy and voting.

A new research paper looks at the trends:

Abstract: This paper surveys the various voter surveillance practices recently observed in the United States, assesses the extent to which they have been adopted in other democratic countries, and discusses the broad implications for privacy and democracy. Four broad trends are discussed: the move from voter management databases to integrated voter management platforms; the shift from mass-messaging to micro-targeting employing personal data from commercial data brokerage firms; the analysis of social media and the social graph; and the decentralization of data to local campaigns through mobile applications. The de-alignment of the electorate in most Western societies has placed pressures on parties to target voters outside their traditional bases, and to find new, cheaper, and potentially more intrusive, ways to influence their political behavior. This paper builds on previous research to consider the theoretical tensions between concerns for excessive surveillance, and the broad democratic responsibility of parties to mobilize voters and increase political engagement. These issues have been insufficiently studied in the surveillance literature. They are not just confined to the privacy of the individual voter, but relate to broader dynamics in democratic politics.

TorrentFreak: Google Asked to Remove 1,500 “Pirate Links” Per Minute

This post was syndicated from: TorrentFreak and was written by: Ernesto. Original post: at TorrentFreak

In recent years copyright holders have flooded Google with DMCA takedown notices, asking the company to delete links to pirated content.

The number of requests issued has increased dramatically. In 2011, the search engine received only a few hundred takedown notices per day, but it now processes more than two million “pirate” links per day.

This translates to 1,500 links per minute, or 25 per second, and is double the amount being handled last year around the same time. The graph below illustrates the continuing increase.

Google takedown surge
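Those per-minute and per-second figures are simply the daily total broken down, as a quick back-of-the-envelope check (not additional data from Google) shows:

    links_per_minute = 1500
    print(links_per_minute / 60)          # 25 links per second
    print(links_per_minute * 60 * 24)     # 2,160,000 per day, i.e. "more than two million"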

Over the past month Google received takedown notices from 5,609 different copyright holders targeting 65 million links, together spanning 68,484 different domain names.

Most of the reported URLs indeed point to pirated content and the associated links are often swiftly removed from Google’s search results. However, with the massive volume of reports coming in, mistakes and duplicate requests are also common.

The availability of pirated content in search results is a hot button issue for copyright holders, who believe that Google sometimes steers legitimate customers to unauthorized sites.

Google addressed this issue last year by implementing a significant change to its search algorithm, which downranks sites that receive many copyright infringement notices.

These efforts helped to make most large torrent sites less visible, but recent research shows that many streaming sites are still among the top results.

According to industry groups such as the MPAA and RIAA, Google should take a more aggressive approach and blacklist the worst offenders entirely. However, Google believes that this type of site-wide censorship goes too far.

For now, the dispute between both camps remains unresolved, which means that the takedown surge and purge is likely to continue.

Source: TorrentFreak, for the latest info on copyright, file-sharing, torrent sites and ANONYMOUS VPN services.

TorrentFreak: Sci-Hub, BookFi and LibGen Resurface After Being Shut Down

This post was syndicated from: TorrentFreak and was written by: Ernesto. Original post: at TorrentFreak

Last month a New York District Court issued a preliminary injunction against several sites that provide unauthorized access to academic journals.

As a result the operators were ordered to quit offering access to any Elsevier content and the associated registries had to suspend their domain names.

After a few days the registries did indeed disable the domain names mentioned in the lawsuit, which are currently all unavailable, much to the disappointment of the sites’ users.

However, the operators of Sci-Hub, BookFi and LibGen have no intention of complying with the U.S. court order. Instead, they’re rendering the domain suspensions ineffective by switching to several new ones.

At the time of writing LibGen is readily available again via several alternative domains. Apart from the new URL, not much has changed and the site is fully operational. Similarly, BookFi is also accessible via various alternative domains.

The same is true for Sci-Hub, which changed its address to a .io domain. TF spoke with the site’s operator, Alexandra Elbakyan, who confirmed the move and is still hopeful that she can get the original domain back.

“Several new domains are operating already,” Elbakyan says. “For some reason, I think that in future justice will prevail and all our domains will be unblocked.”

To make sure that the site remains accessible, Sci-Hub also added an .onion address which allows users to access the site via Tor, and bypass any future domain name suspensions.

Despite the domain problems and a disappointing court order, Elbakyan is glad that the case brought attention to the paywall problems academia faces.

“In some sense, this case was helpful: more people now agree that copyright should be destroyed, and that academic publishing needs serious reform,” Sci-Hub’s operator says.

“Before, many people would say: why bother acting against copyright laws if they can be so easily bypassed? Or what is the point in an open access movement if anyone can download any paid article for free?”

Elsevier may have the law on their side, but the largest academic publisher can’t count on universal support from the academic community.

In recent weeks many scientists and scholars have come out in support of Sci-Hub, BookFi and LibGen, arguing that access to academic research should be free and universal.

For Elbakyan and others this support offers enough motivation to continue what they do.

Source: TorrentFreak, for the latest info on copyright, file-sharing, torrent sites and ANONYMOUS VPN services.

TorrentFreak: YouTube Pays Users’ Legal Bills to Defend Fair Use

This post was syndicated from: TorrentFreak and was written by: Andy. Original post: at TorrentFreak

According to Google, more than half a million hours of video are uploaded to YouTube every day. Although the company tries with ContentID, determining the copyright status of every single minute of it is an almost impossible task.

While identifying copyrighted movies, TV shows and music is well within the company’s abilities, when used in certain ways all of those things can be legally shown on YouTube, even without copyright holders’ permission.

Under U.S. law the concept is known as ‘fair use’ and it enables copyrighted material to be used for purposes including criticism, news reporting, teaching and research. However, some copyright holders like to contest the use of their content on YouTube no matter what the context, issuing DMCA takedown notices and landing YouTube users with a ‘strike’ against their account.

YouTube has been criticized in the past for not doing enough to protect its users against wrongful claims but now the company appears to be drawing a line in the sand, albeit a limited one, in defense of those legally using copyrighted content in transformative ways.

In a blog post Google’s Copyright Legal Director says that YouTube will showcase several user-created videos in its Copyright Center and cover all legal costs should rightsholders challenge how each uses copyrighted content.

“YouTube will now protect some of the best examples of fair use on YouTube by agreeing to defend them in court if necessary,” Fred von Lohmann said.

“We’re doing this because we recognize that creators can be intimidated by the DMCA’s counter notification process, and the potential for litigation that comes with it.”

The first four titles showcased can be found here and each presents a classic demonstration of fair use. For example, the first uses game clips for the purposes of review, while the second offers a critique of third-party UFO videos.

Google hopes that by standing behind videos such as these, YouTubers and those seeking to take down content will become educated on what is and isn’t appropriate when it comes to using other people’s copyrighted content.

“In addition to protecting the individual creator, this program could, over time, create a ‘demo reel’ that will help the YouTube community and copyright owners alike better understand what fair use looks like online and develop best practices as a community,” Google’s Copyright Legal Director adds.

Perhaps needless to say, Google isn’t in a position to offer legal support to everyone uploading content to YouTube but it has pledged to “resist legally unsupported DMCA takedowns” as part of its normal processes.

“We believe even the small number of videos we are able to protect will make a positive impact on the entire YouTube ecosystem, ensuring YouTube remains a place where creativity and expression can be rewarded,” Fred von Lohmann concludes.

Of course, it’s unlikely that any video showcased by Google will experience any legal problems so the defense offer from the company is largely symbolic. However, the overall gesture indicates that the company is paying attention to the fair use debate and is prepared to help its users stand up for their rights. That will be gratefully received.

Source: TorrentFreak, for the latest info on copyright, file-sharing, torrent sites and ANONYMOUS VPN services.

AWS Official Blog: New AWS Public Data Sets – TCGA and ICGC

This post was syndicated from: AWS Official Blog and was written by: Jeff Barr. Original post: at AWS Official Blog

My colleagues Ariel Gold and Angel Pizarro wrote the incredibly interesting guest post below.

— Jeff;

Today we are pleased to announce that qualified researchers can now access two of the world’s largest collections of cancer genome data at no cost on AWS as part of the AWS Public Data Sets program. Providing access to these petabyte-scale genomic data as shared resources on AWS lowers the barrier to entry, thus expanding the research community and accelerating the pace of research and discovery in the development of new treatments for cancer patients.

The Cancer Genome Atlas (TCGA) corpus of raw and processed genomic, transcriptomic, and epigenomic data from thousands of cancer patients is now freely available on Amazon S3 to users of the Cancer Genomics Cloud, a cloud pilot program funded by the National Cancer Institute and powered by the Seven Bridges Genomics platform.

The International Cancer Genome Consortium (ICGC) PanCancer dataset generated by the Pancancer Analysis of Whole Genomes (PCAWG) study is also now available on AWS, giving cancer researchers access to over 2,400 consistently analyzed genomes corresponding to over 1,100 unique ICGC donors. These data will also be freely available on Amazon S3 for credentialed researchers subject to ICGC data sharing policies.

These two data sets represent the first controlled-access genomic data that have been redistributed to the wider research audience on the cloud. Previously, researchers needed to download and store their own copies of the data before they could begin their experiments. Now, with this data hosted on AWS for the community, researchers can begin their work right away. Researchers will also have access to a broader toolset hosted and shared by the community within AWS. This translates into a much lower barrier to entry and more time for science.

Making these data and tools available in the cloud will also enable a greater level of collaboration across research groups, since they will have a common place to access and share data. Finally, researchers will also be able to securely bring their own data and tools into AWS, and combine these with the existing public data for more robust analysis. No-cost data access, a broader set of available tools, and increased collaborative capabilities will enable researchers to focus on their science and not infrastructure, allowing them to get more done in shorter periods, and ultimately accelerating the pace of research and discovery in the study of cancer.

Accessing TCGA and ICGC on AWS
What distinguishes TCGA and ICGC from previously released AWS Public Data Sets such as the National Institutes of Health (NIH) 1000 Genomes Project, Genome in a Bottle (GIAB), and the 3000 Rice Genome is the need to limit access to researchers who have gone through a review process for their intended use of the data. Because of this requirement, access to TCGA and ICGC on AWS will be administered by our third-party partners, Seven Bridges Genomics and the Ontario Institute for Cancer Research, respectively. These partners have the rights to redistribute the data on behalf of the original data providers. The partners will also curate and update the data over time, as well as develop a community of users who can share cloud-based tools and best practices in order to accelerate use of the data and advance our understanding of cancer.

You can learn more about the data sets, and specifics on how to access them, on our TCGA on AWS page and ICGC on AWS page.

Tools and Resources for Working with the Data
The TCGA data will be available to users of the Cancer Genomics Cloud (CGC). Researchers can apply for early access here. Once accepted, users will be able to access the data via the CGC Web portal or use the CGC’s API for programmatic access. The CGC will have a set of data analysis pipelines already integrated into the platform so that users can start working right away with the most common toolsets.

The ICGC data will be generally accessible via the use of a downloadable command line tool. Users can search for files using the ICGC Data Portal and access individual or related sets of alignment and variant files through the ICGC Storage Client. The alignments and a selection of Sanger somatic variant calls are currently available in Amazon S3. Further variant calls will be released following additional quality checking, validation, and analysis. For more information see the ICGC on the Cloud page and ICGC Storage Client documentation.
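
For credentialed researchers who prefer scripting, simple object listing can also be done with the AWS SDKs once access has been granted. The sketch below uses Python’s boto3; the bucket name and prefix are placeholders rather than the real ICGC locations, so consult the ICGC on AWS page and Storage Client documentation for the actual details.

```python
# Minimal sketch: list objects under a prefix in an S3 bucket that your
# (credentialed) AWS identity has been granted access to. The bucket name
# and prefix are placeholders, not the real ICGC locations.
import boto3

s3 = boto3.client("s3")

def list_objects(bucket, prefix, max_keys=25):
    """Print key name and size for the first few objects under a prefix."""
    paginator = s3.get_paginator("list_objects_v2")
    count = 0
    for page in paginator.paginate(Bucket=bucket, Prefix=prefix):
        for obj in page.get("Contents", []):
            print(f'{obj["Key"]}\t{obj["Size"]} bytes')
            count += 1
            if count >= max_keys:
                return

if __name__ == "__main__":
    list_objects("example-icgc-pcawg-bucket", "alignments/")  # placeholder names
```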

As always, when working with sensitive genomics data on AWS, you should take care to secure your storage and computational resources. The Architecting for Genomic Data Security and Compliance in AWS whitepaper is a good starting point if you are unfamiliar with the service features and tools necessary to work with data in a secure manner. Genomics platforms such as the CGC take care to meet these types of requirements as part of their value proposition. For example, DNAnexus has provided user documentation on how to leverage the ICGC Storage Client within their platform here.

Recognizing that it is no easy task to work with data at this scale, the PCAWG group is also releasing the PanCancer Launcher, an open source system that creates EC2 instances, enqueues the analysis work items, triggers Docker-based analysis pipelines, and cleans up the launched resources as computational tasks complete.

Currently, the PanCancer Launcher includes support for the BWA-mem-based alignment pipeline and its associated quality control steps. Future releases will add the project’s variant-calling pipelines, which encompass current best-practice pipelines from four academic organizations: the German Cancer Research Center (DKFZ), the European Molecular Biology Laboratory (EMBL) in Heidelberg, the Wellcome Trust Sanger Institute, and the Broad Institute. You can read more about how to use the PanCancer Launcher in the Launcher HOWTO Guide.
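
The Launcher itself is documented in the HOWTO Guide, but as a rough illustration of the pattern it automates (launch an instance, hand it a Docker-based task via user data, and let it terminate when the work is done), a minimal boto3 sketch might look like this. The AMI ID, key pair, and container image are placeholders, not values from the project.

```python
# Rough illustration of the pattern the PanCancer Launcher automates: launch an
# EC2 instance whose user-data script runs a Docker-based analysis container,
# then shuts the instance down when the task completes. This is NOT the
# Launcher itself; the AMI ID, key name, image, and S3 path are placeholders.
import boto3

USER_DATA = """#!/bin/bash
set -e
yum install -y docker
service docker start
docker run --rm example/alignment-pipeline:latest --input s3://example-bucket/sample.bam
shutdown -h now
"""

ec2 = boto3.resource("ec2")

instances = ec2.create_instances(
    ImageId="ami-00000000",           # placeholder Amazon Linux AMI
    InstanceType="c4.8xlarge",        # alignment workloads benefit from larger instance types
    MinCount=1,
    MaxCount=1,
    KeyName="my-keypair",             # placeholder key pair
    UserData=USER_DATA,
    InstanceInitiatedShutdownBehavior="terminate",  # clean up when the task finishes
)
print("Launched", instances[0].id)
```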

Genomics in the Era of Cloud Computing
It has been interesting to witness the parallel evolution of genomics and cloud computing over the last decade. Both have been driven by new technologies that leverage economies of scale. Both have fundamentally changed the types of questions that can be asked simply because we can now collect and analyze the data in the same place.

The genomics research community, which has watched its storage and compute requirements double overnight when new chemistry kits are released, realized long ago that scalable cloud computing models are a better fit than large capital purchases that have to be planned for and amortized over 3-5 years. Today, it is common practice to work with data sets that reach into the hundreds of terabytes, and a few important ones, such as TCGA and ICGC, reach into the petabytes. For genomics, the cloud has become the new normal for how science gets done.

You can learn more about how thought leaders are innovating in genomics through the use of the cloud in this new video:

Be sure to also visit the Scientific Computing on AWS and Genomics on AWS pages for more user stories and tools.

Thank You
We’d like to thank our collaborators at the Ontario Institute for Cancer Research and Seven Bridges Genomics who helped us launch these public data sets and will be curating the data, administering access, and cultivating the ecosystem of tools around them. We look forward to working with many more organizations and researchers who will share their expertise and tools in order to accelerate the development of new treatments for cancer patients. Tell us how you’re using the data via the TCGA on AWS and ICGC on AWS pages and sign up for project updates.

Ariel Gold (AWS Public Data Sets) and Angel Pizarro (AWS Scientific Computing)

Krebs on Security: Report: Everyone Should Get a Security Freeze

This post was syndicated from: Krebs on Security and was written by: BrianKrebs. Original post: at Krebs on Security

This author has frequently urged readers to place a security freeze on their credit files as a means of proactively preventing identity theft. Now, a major consumer advocacy group is recommending the same: The U.S. Public Interest Research Group (US-PIRG) recently issued a call for all consumers to request credit file freezes before becoming victims of ID theft.


Each time news of a major data breach breaks, the hacked organization arranges free credit monitoring for all customers potentially at risk from the intrusion. But as I’ve echoed time and again, credit monitoring services do little if anything to stop thieves from stealing your identity. The best you can hope for from these services is that they will alert you when a thief opens or tries to open a new line of credit in your name.

But with a “security freeze” on your credit file at the four major credit bureaus, creditors won’t even be able to look at your file in order to grant that phony new line of credit to ID thieves.

Thankfully, US-PIRG — the federation of state public interest research groups — also is now recommending that consumers file proactive security freezes on their credit files.

“These constant breaches reveal what’s wrong with data security and data breach response. Agencies and companies hold too much information for too long and don’t protect it adequately,” the organization wrote in a report (PDF) issued late last month. “Then, they might wait months or even years before informing victims. Then, they make things worse by offering weak, short-term help such as credit monitoring services.”

The report continues: “Whether your personal information has been stolen or not, your best protection against someone opening new credit accounts in your name is the security freeze (also known as the credit freeze), not the often-offered, under-achieving credit monitoring. Paid credit monitoring services in particular are not necessary because federal law requires each of the three major credit bureaus to provide a free credit report every year to all customers who request one. You can use those free reports as a form of do-it-yourself credit monitoring.”

Check out US-PIRG’s full report, Why You Should Get Security Freezes Before Your Information is Stolen (PDF), for more good advice. In case anything in that report is unclear, in June I posted a Q&A on security freezes, explaining how they work, how to place them and the benefits and potential drawbacks of placing a freeze.

Have you frozen your credit file? If so, sound off about the experience in the comments. If not, why not?

Schneier on Security: Personal Data Sharing by Mobile Apps

This post was syndicated from: Schneier on Security and was written by: schneier. Original post: at Schneier on Security

Interesting research:

“Who Knows What About Me? A Survey of Behind the Scenes Personal Data Sharing to Third Parties by Mobile Apps,” by Jinyan Zang, Krysta Dummit, James Graves, Paul Lisker, and Latanya Sweeney.

We tested 110 popular, free Android and iOS apps to look for apps that shared personal, behavioral, and location data with third parties.

73% of Android apps shared personal information such as email address with third parties, and 47% of iOS apps shared geo-coordinates and other location data with third parties.

93% of Android apps tested connected to a mysterious domain, likely due to a background process of the Android phone.

We show that a significant proportion of apps share data from user inputs such as personal information or search terms with third parties without Android or iOS requiring a notification to the user.

Did the FBI Pay a University to Attack Tor Users? (Tor blog)

This post was syndicated from: and was written by: jake. Original post: at

The Tor blog is carrying a post from interim executive director Roger Dingledine that accuses Carnegie Mellon University (CMU) of accepting $1 million from the FBI to de-anonymize Tor users.
“There is no indication yet that they had a warrant or any institutional oversight by Carnegie Mellon’s Institutional Review Board. We think it’s unlikely they could have gotten a valid warrant for CMU’s attack as conducted, since it was not narrowly tailored to target criminals or criminal activity, but instead appears to have indiscriminately targeted many users at once.

Such action is a violation of our trust and basic guidelines for ethical research. We strongly support independent research on our software and network, but this attack crosses the crucial line between research and endangering innocent users.”

Cryptographer Matthew Green has also weighed in (among others, including Forbes and Ars Technica): “If CMU really did conduct Tor de-anonymization research for the benefit of the FBI, the people they identified were allegedly not doing the nicest things. It’s hard to feel particularly sympathetic.

Except for one small detail: there’s no reason to believe that the defendants were the only people affected.”

Krebs on Security: The Lingering Mess from Default Insecurity

This post was syndicated from: Krebs on Security and was written by: BrianKrebs. Original post: at Krebs on Security

The Internet of Things is fast turning into the Internet-of-Things-We-Can’t-Afford. Almost daily now we are hearing about virtual shakedowns wherein attackers demand payment in Bitcoin virtual currency from a bank, e-retailer or online service. Those who don’t pay the ransom see their sites knocked offline in coordinated cyberattacks.  This story examines one contributor to the problem, and asks whether we should demand better security from ISPs, software and hardware makers.

armyThese attacks are fueled in part by an explosion in the number of Internet-connected things that are either misconfigured or shipped in a default insecure state. In June I wrote about robot networks or “botnets” of hacked Internet routers that were all made and shipped by networking firm Ubiquiti. Attackers were able to compromise the routers because Ubiquiti shipped them with remote administration switched on by default and protected by a factory default password pair (ubnt/ubnt or no password at all).

That story followed on reports from security firm Imperva (see Lax Security Opens the Door for Mass-Scale Hijacking of SOHO Routers) which found a botnet of tens of thousands of hijacked Ubiquiti routers being used to launch massive ransom-based denial-of-service attacks. Imperva discovered that those tens of thousands of hacked devices were so easy to remotely control that each router was being exploited by several different extortion groups or individual criminal actors. The company also found those actors used the hacked routers to continuously scan the Internet for more vulnerable routers.

Last week, researchers at Vienna, Austria-based security firm SEC Consult released data suggesting that there are more than 600,000 vulnerable Ubiquiti routers in use by Internet service providers (ISPs) and their customers. All are sitting on the Internet wide open and permitting anyone to abuse them for these digital shakedowns.

These vulnerable devices tend to coalesce in distinct geographical pools with deeper pools in countries with more ISPs that shipped them direct to customers without modification. SEC Consult said it found heavy concentrations of the exposed Ubiquiti devices in Brazil (480,000), Thailand (170,000) and the United States (77,000).

SEC Consult cautions that the actual number of vulnerable Ubiquiti systems may be closer to 1.1 million. Turns out, the devices ship with a cryptographic certificate embedded in the router’s built-in software (or “firmware”) that further weakens security on the devices and makes them trivial to discover on the open Internet. Indeed, the Censys Project, a scan-driven Internet search engine that allows anyone to quickly find hosts that use that certificate, shows exactly where each exposed router resides online.

The Imperva research from May 2015 touched a nerve among some Ubiquiti customers who thought the company should be doing more to help customers secure these routers. In a May 2015 discussion thread on the company’s support site, Ubiquiti’s vice president of technology applications Matt Hardy said the router maker briefly disabled remote access on new devices, only to reverse that move after pushback from ISPs and other customers who wanted the feature turned back on.

In a statement sent to KrebsOnSecurity via email, Hardy said the company doesn’t market its products to home users, and that it sells its products to industry professionals and ISPs.

“Because of this we originally shipped with the products’ configurations as flexible as possible and relied on the ISPs to secure their equipment appropriately,” he said. “Some ISPs use self-built provisioning scripts and intentionally locking down devices out of the box would interfere with the provisioning workflows of many customers.”

Hardy said it’s common in the networking equipment industry to ship with a default password for initial use. While this may be true, it seems far less common that networking companies ship hardware that allows remote administration over the Internet by default. He added that beginning with firmware version 5.5.2 — originally released in August 2012 — Ubiquiti devices have included very persistent messaging in the user interface to remind customers to follow best practices and change their passwords.

“Any devices shipping since then would have this reminder and users would have to intentionally ignore it to install equipment with default credentials,” he wrote.  Hardy noted that the company also provides a management platform that ISPs can use to change all default device passwords in bulk.

Ubiquiti’s nag screen asking users to change the default credentials. The company’s devices still ship with remote administration turned on.


When companies ship products, software or services with built-in, by-design vulnerabilities, good citizens of the Internet suffer for it. Protonmail — an email service dedicated to privacy enthusiasts — has been offline for much of the past week thanks to one of these shakedowns.

[NB: While no one is claiming that compromised routers were involved in the Protonmail attacks, the situation with Ubiquiti is an example of the type of vulnerability that allows attackers to get in and abuse these devices for nefarious purposes without the legitimate users ever even knowing they are unwittingly facilitating criminal activity (and also making themselves a target of data theft)].

Protonmail received a ransom demand: Pay Bitcoins or be knocked offline. The sad part? The company paid the ransom and soon got hit by what appears to be a second extortion group that likely smelled blood in the water.

The criminal or group that extorted Protonmail, which self-identifies as the “Armada Collective,” also tried to extort VFEmail, another email service provider.  VFE’s Rick Romero blogged about the extortion demand, which turned into a full-blown outage for his ISP when he ignored it. The attack caused major disruption for other customers on his ISP’s network, and now Romero says he’s having to look for another provider. But he said he never paid the ransom.

“It took out my [hosting] provider and THEIR upstream providers,” he said in an email. “After the 3rd attack took down their datacenter, I got kicked out.”

For his part, Romero places a large portion of the blame for the attacks on the ISP community.

“Who can see this bandwidth? Who can stop this,” Romero asked in his online column. “I once had an argument with a nice German fellow – they have very strict privacy laws – about what the ISP can block.  You can’t block anything in the EU.  In the US we’re fighting for open access, and for good reason – but we still have to be responsible netizens. I think the ISP should have the flexibility to block potentially harmful traffic – whether it be email spam, fraud, or denial of service attacks.”

So, hardware makers definitely could be doing more, but ISPs probably have a much bigger role to play in fighting large scale attacks. Indeed, many security experts and recent victims of these Bitcoin shakedowns say the ISP community could be doing a lot more to make it difficult for attackers to exploit these exposed devices.

This is how the former cyber advisor to Presidents Clinton and Bush sees it. Richard Clarke, now chairman and CEO of Good Harbor Consulting, said at a conference last year that the ISPs could stop an awful lot of what’s going on with malware and denial-of-service attacks, but they don’t.

“They don’t, they ship it on, and in some cases they actually make money by shipping it on,” Clarke said at a May 2014 conference by the Information Systems Security Association (ISSA). “Denial-of-service attacks actually make money for the ISPs, huge volumes of data coming down the line. Why don’t we require ISPs to do everything that the technology allows to stop [denial-of-service] attacks and to identify and kill malware before it gets to its destination. They could do it.”

One basic step that many ISPs could be taking, but are not, to blunt these attacks involves a network security standard that was developed and released more than a dozen years ago. Known as BCP38, it prevents abusable resources on an ISP’s network (e.g., hacked Ubiquiti routers) from being leveraged in especially destructive and powerful denial-of-service attacks.

Back in the day, attackers focused on having huge armies of bot-infected computers they controlled from afar. These days an attacker needs far fewer resources to launch even more destructive attacks that let the assailant both mask his true origin online and amplify the bandwidth of his attacks.

Using a technique called traffic amplification, the attacker reflects his traffic from one or more third-party machines toward the intended target. In this type of assault, the attacker sends a message to a third party, while spoofing the Internet address of the victim. When the third party replies to the message, the reply is sent to the victim — and the reply is much larger than the original message, thereby amplifying the size of the attack.

BCP38 is designed to filter such spoofed traffic, so that it never even traverses the network of an ISP that’s adopted the anti-spoofing measures. This blog post from the Internet Society does a good job of explaining why many ISPs ultimately decide not to implement BCP38.
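
As a toy illustration of the idea behind BCP38 (not an implementation of the standard itself), source-address validation amounts to forwarding a packet only if its source IP falls inside a prefix the network could legitimately have assigned. The prefixes below are examples from the documentation ranges.

```python
# Toy illustration of BCP38-style source-address validation: a packet leaving
# the network is forwarded only if its source IP falls within one of the
# prefixes actually assigned to customers. Prefixes below are examples only.
from ipaddress import ip_address, ip_network

CUSTOMER_PREFIXES = [ip_network(p) for p in ("198.51.100.0/24", "203.0.113.0/24")]

def permits_egress(src_ip: str) -> bool:
    """Return True if the source address is one this network could have assigned."""
    addr = ip_address(src_ip)
    return any(addr in prefix for prefix in CUSTOMER_PREFIXES)

# A packet with a legitimate customer source is forwarded; a packet spoofing
# someone else's address (the reflection trick described above) is dropped.
print(permits_egress("198.51.100.42"))  # True  -> forward
print(permits_egress(""))      # False -> drop (spoofed source)
```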

As the Internet of Things grows, we can scarcely afford a massive glut of things that are insecure-by-design.  One reason is that this stuff has far too long a half-life, and it will remain in our Internet’s land and streams for many years to come.

Okay, so maybe that’s putting it a bit too dramatically, but I don’t think by much. Mass-deployed, insecure-by-default devices are difficult and expensive to clean up and/or harden for security, and the costs of that vulnerability are felt across the Internet and around the globe.

Continue reading ‘Krebs on Security: The Lingering Mess from Default Insecurity’ »

Anchor Cloud Hosting: Magento a Perfect Fit for Princess Polly

This post was syndicated from: Anchor Cloud Hosting and was written by: Eddy Respondek. Original post: at Anchor Cloud Hosting

Hand-tailored by Anchor and Acidgreen

Anchor Hosting Back in Fashion

Thwarted by the limitations of its existing CMS, Princess Polly chose Magento as the flexible foundation for the fashion retailer’s future ecommerce plans. However, for the project to succeed, Princess Polly also needed a digital agency to perform the necessary alterations to create a personalised designer platform. Plus, they needed a hosting partner to tailor the bespoke hosting ensemble to Magento’s particular shape, for a perfect fit with plenty of room in all the right places.

“Working with Anchor is like having an extra developer in house. We consider them to be part of our development team”
James Lowe: Senior Technical Consultant, Acidgreen


Princess Polly started as a fashion boutique in Surfers Paradise back in 2005, launching an online store in 2010. “Within a year we were one of Hitwise’s Top 20 Power Retailers,” says Wez Bryett, General Manager at Princess Polly. “We experienced rapid growth for the first three years. Now the business has matured so we’re looking for improved stability while planning the future.”
Those plans include a global expansion, starting with the US and then moving into parts of Asia. Unfortunately, Princess Polly’s previous ecommerce platform was no longer supported.
“The existing system was pretty limiting,” explains James Lowe, Senior Technical Consultant for Acidgreen, the digital agency that took on the Princess Polly project. “Wez and the team at Princess Polly have big ideas. But they were constantly being told it couldn’t be done or that it was going to cost a considerable amount of money to make even just minor changes.”
“We needed an ecommerce platform we could stay on for good,” continues Bryett. “Easy to change, and with plenty of plugins available so you don’t have to custom build everything yourself.”
Bryett’s research reached one clear conclusion. “Magento offers enterprise-level features without the enterprise-level pricing. It’s also geared towards international ecommerce, with multi-site options, an integrated currency converter, and so on.”
Bryett was adamant that performance should not be compromised to achieve this increased functionality. Page load times had to be fast and the infrastructure robust and scalable enough to handle the busiest periods with ease. However, Bryett knew his limitations. “I have an IT management background, but I wouldn’t be comfortable managing the platform and the hosting myself. Instead of learning how to configure, maintain and fix everything in house, making mistakes along the way, we wanted to work with a team that has dealt with these problems before.”

“On a normal hosting setup, I found Magento to be quite slow. Extremely slow, even. But with the tweaks made by Anchor and Acidgreen, it’s become much faster.”
Wez Bryett: General Manager, Princess Polly


Acidgreen services a wide clientele, from start-ups to major brands. Now an official Magento partner, Acidgreen specialises in ecommerce websites for some of Australia’s biggest online retailers. The digital agency already worked closely with Anchor, having adopted managed hosting for previous projects. However, Princess Polly initially chose a different hosting provider.
Unfortunately, not every hosting provider can adapt, customise and manage a complex hosting infrastructure without hitting some obstacles. “They just couldn’t get the infrastructure right. They had a lot of problems configuring everything to be fast enough,” says Lowe. “It was quite a costly decision for the client to switch hosting providers a second time, but for the project to succeed we had to transfer to Anchor.”
Anchor tailored a bespoke hosting stack to fit the new Magento website and worked closely with Acidgreen to plan the migration. “Our first objective was to replicate what we already had. No redesign, new features or anything,” says Bryett.
Lowe agrees. “We didn’t want to freak out regular customers with a new user interface as well as a new platform. So we took baby steps, migrating the existing website across with only minor changes. Then we could compare apples with apples, giving us a benchmark and making any improvements measurable. Then we identified each piece of functionality and customisation that was needed, making the changes while keeping the front end user experience pretty similar.”


Anchor is also an official Magento partner, specialising in custom hosting environments. The team unstitched the various layers of Magento to understand how it works at every level, before rebuilding the hosting stack as a perfect fit for the resource-hungry CMS.
The result is a fully managed clustered environment, with load balancers and three front-end machines backed onto a high-availability back-end. The made-to-measure application stack eliminates performance bottlenecks anywhere in the hosting infrastructure.
The stack includes Varnish, Turpentine and New Relic to track and cache over 90 per cent of the website content, dramatically improving page load speeds while reducing database requests. Meanwhile, load times for assets such as CSS, images and JavaScript improved with the addition of a PageSpeed optimisation module to Nginx, automatically applying web performance best practice across the site. Even the shopping cart was optimised and cached where possible, by rebuilding it in AJAX.
Anchor provided the necessary DevOps expertise, allowing Acidgreen to deliver a lightning fast website without relying on less-experienced internal resources. “Just the way Anchor designed the infrastructure made the difference between a four second and two second page load,” said Lowe.


“On a normal hosting setup, I found Magento to be quite slow,” says Bryett. “Extremely slow, even. But with the tweaks made by Anchor and Acidgreen, it’s much faster. So stage one was the successful backend migration.
“Now that’s done, we’re making enhancements and adding new features all the time. For example, we just put in a Google Shopping Feed, which was a Magento extension. That would have cost us $15,000 to custom build in the old platform.”
Lowe says that the global expansion plans are now a lot easier to implement. “Create a new store in Magento with the multi-site feature, configure it with Varnish and CloudFlare, and bang — you’ve got a new US site. Simple.” Princess Polly was a tricky project, says Lowe, but he credits its success to the close working relationship between Acidgreen and Anchor. “We consider Anchor to be part of our development team. At any time, there’s always someone we can call. Whatever the task or issue, we both look at it and collaborate on sorting it out.
“The end result is a happy client that will stick with us.”

ANCHOR: Managed Operations

An Anchor hosting infrastructure represents the very latest in hosting couture – fresh, clean and bang up to date.

Stop patching your servers like an old pair of jeans because you lack the time, resources or expertise to update or replace them. Managed Operations means never putting up with unfashionable hosting ever again.

“Managed hosting frees up time and provides us with expertise we would otherwise have to hire in. It’s a huge competitive advantage.”
James Lowe: Senior Technical Consultant, Acidgreen


An off-the-peg hosting plan is often tight where you need freedom to move and baggy where you’ll never grow into it. If you’ve invested in a top quality website or application, you need a top quality hosting provider to tailor your environment to the perfect fit, and perform seamless repairs when necessary.
Your website or application can strut its stuff, confident that a wrong move won’t have you busting the seams of your bandwidth.


Anchor is responsible for your entire hosting environment, right up to your code. We’ll fix it so you’re never caught in public in threadbare hosting.
We take care of the operating system and application stack, security hardening, performance optimisation, patches and security updates, configuration changes, backups, monitoring, auto-scaling, troubleshooting and emergency response.
All your code needs to do is wear it well.
Don't get stitched up by off-the-peg hosting plans

The post Magento a Perfect Fit for Princess Polly appeared first on Anchor Cloud Hosting.

TorrentFreak: Blizzard Sues Bot Maker For Copyright Infringement

This post was syndicated from: TorrentFreak and was written by: Ernesto. Original post: at TorrentFreak

blizzOver the years video game developer and publisher Blizzard Entertainment has released many popular game titles.

However, to the disappointment of the developer and the majority of its customers there exists a small subgroup of players who are happy to deceive their opponents to get an edge in Blizzard’s games. Through hacks and cheats these players are often able to dominate the competition with minimal effort.

In an attempt to stamp out this type of abuse Blizzard has now filed a lawsuit against James Enright (aka “Apoc”), the individual behind a popular series of gaming bots. Enright’s software allows users to cheat in World of Warcraft, Diablo and Heroes of the Storm, among others.

In a complaint filed at a California federal court, Blizzard notes that the “HonorBuddy,” “DemonBuddy” and “StormBuddy” bots infringe on its copyrights. In addition, the bots ruin the fun for other players, which causes financial damage to the company.

“The Bots created by Enright and his team have caused, and are continuing to cause, massive harm to Blizzard. Blizzard’s business depends upon its games being enjoyable and balanced for players of all skill levels,” the complaint (pdf) reads.

“The Bots that Enright has programmed and helps distribute destroy the integrity of the Blizzard Games, alienating and frustrating legitimate players, and diverting revenue from Blizzard to Defendants,” they add.

Blizzard believes that the bots cause legitimate players to lose interest, costing the company millions in lost revenue. The bot maker, meanwhile, is generating a significant profit.

“As a result of Enright’s conduct, Blizzard has lost millions or tens of millions of dollars in revenue and in consumer goodwill. Meanwhile, Enright and his team have been massively and unjustly enriched at Blizzard’s expense,” Blizzard adds.

Blizzard believes that Enright may have made millions through the bot sales, which start at €24.98 ($27) for the most basic World of Warcraft version.

The WoW Honorbot

Aside from breach of contract for violating the EULA, which prohibits the use of bots and cheats, Enright and his team are accused of copyright infringement.

“Defendants have infringed, and are continuing to infringe, Blizzard’s copyrights by reproducing, adapting, distributing, and/or authorizing others to reproduce, adapt, and distribute copyrighted elements of the Blizzard Games without authorization,” Blizzard writes.

Blizzard asks the court to issue an order against the defendants to prevent them from distributing the software. In addition, they demand actual or statutory damages for the alleged copyright infringements, which could add up to tens of millions of dollars.

The company’s claimed losses are supported by research which has shown that WoW bots can create a massive amount of in-game gold, which raises the prices of items for legitimate users. These users may then lose their motivation and stop playing, hurting Blizzard’s revenue.

At the time of writing the Buddy Forum and the associated website remain operational, claiming that “botting is not against any law.”

Source: TorrentFreak, for the latest info on copyright, file-sharing, torrent sites and ANONYMOUS VPN services.

TorrentFreak: Google’s “Pirate Update” Fails to Punish Streaming Sites

This post was syndicated from: TorrentFreak and was written by: Ernesto. Original post: at TorrentFreak

google-bayOver the past few years the entertainment industries have repeatedly asked Google to step up its game when it comes to anti-piracy efforts.

These remarks haven’t fallen on deaf ears and in response Google has slowly implemented various new anti-piracy measures.

Last year Google made changes to its core algorithms aimed at lowering the visibility of “pirate” sites. Using the number of accurate DMCA requests as an indicator, these sites are now demoted in search results for certain key phrases.

This “Pirate Update” hit torrent sites hard, as early analysis previously showed. However, new research from streaming search engine JustWatch shows that the effect is limited.

Using search engine visibility data from SearchMetrics as well as SimilarWeb’s page statistics, the company evaluated how much search engine traffic the top torrent and streaming sites received before and after the algorithm change.

The findings confirm that torrent sites did indeed lose a lot of Google traffic. The graph below shows the search engine visibility ranking for the top 20 torrent sites including The Pirate Bay, KickassTorrents and Torrentz.

Torrent site search engine visibility

However, while Google’s update had a dramatic effect on torrent sites, frequently visited streaming portals are seemingly unaffected by the change.

The graph below shows that on average the top streaming sites increased their search engine presence. This means that these streaming portals, including Solarmovie, Couchtuner and Movie4k, remain frequently featured in the top search results.

Streaming sites remain strong

While SearchMetrics data doesn’t directly measure traffic, the report estimates that the visibility of streaming sites is 15 times larger than that of torrent sites.

Translated to a traffic number, JustWatch estimates that roughly a third of all visits to the top streaming sites come from search engines, amounting to nearly three billion visits since Google’s “pirate” update.

Search engine traffic to illegal streaming sites

The critique is not new. In recent months several entertainment industry groups have urged Google to improve its downranking methods or completely remove pirate sites from search results.

Google, however, has stated that removing entire domain names goes too far and could possibly be counterproductive.

As a streaming search engine JustWatch of course has a significant interest in the results they report. Perhaps unsurprisingly, they also encourage Google to improve.

“Google should stand by their word to use the DMCA takedown requests per domain and factor it stronger into their ranking signals,” JustWatch CEO David Croyé tells TF.

According to JustWatch, pirate streaming sites still dominate the top search results when people enter phrases such as “300 watch online”.

Croyé understands that the algorithms rank pirate sites higher because this is what people are actually looking for. However, that makes it harder for legal services to compete.

“We are a self-funded startup that wants to connect fans with their favorite movie content worldwide and make it easier also for Google to show more legal offers. We’ve already aggregated them in a structured way for them.”

“So it’s in our own interest as a startup to be able to compete with free, but illegal alternatives on Google,” Croyé adds.

Source: TorrentFreak, for the latest info on copyright, file-sharing, torrent sites and ANONYMOUS VPN services.

SANS Internet Storm Center, InfoCON: green: Application Aware and Critical Control 2, (Wed, Nov 4th)

This post was syndicated from: SANS Internet Storm Center, InfoCON: green and was written by: SANS Internet Storm Center, InfoCON: green. Original post: at SANS Internet Storm Center, InfoCON: green

Have you ever considered how many Critical Controls your contextual (e.g. Next Generation) platform applies to? I bet it is more than you think. Consider your application-aware platform’s headline feature: deep layer 7 packet inspection that identifies applications. Wait a second, didn’t the inventory Control mean going around to every workstation and assessing what was installed? Sure, that is a critical component, but with application-aware platforms, your platform can quickly be turned into an audit device. Set up a span/mirror/tap on a spare port and assess VLANs. Pull reports on ingress/egress segments. This is all part of implementing critical controls.

First, you can run analysis and identify what applications and services are leaving your environment. Add encryption inspection on top of that and the platform becomes an effective shadow Information Technology (IT) audit device. Imagine for a moment business unit X, which we will call Marketing for the moment (secretly I like to pick on marketing because they are maverick thinkers). Marketing decides they need a new website with explosive new features. They know that IT Security will have a cow about this, but it will drive business, they say. Now, how many of us fall into one of these categories:

A) Have seen this

B) Have had a colleague tell us about it

C) Can at least imagine it

We will go one step further for this illustration and say super important Event Y is in 3 weeks. This event is the biggest XYZ event of our industry.

For this scenario we will even go a few strides further and say the event and the launch are a smashing success. Now ask yourself: does Marketing go back and update? Do they contract maintenance? Does all the regular order of what it takes to maintain an IT application occur? Who knows!

People drive business, and features and function drive revenue, to be sure! Now, let’s get back to Critical Control number 2: know thyself (e.g. software). For sure, you should inventory what software is deployed in your environment. That would include more than what your contextual next generation platform can see, but let’s stop for a moment. Some software stays local on systems, while a great deal of software talks to the cloud. What if there was a platform that could pervasively identify (wait, I’m setting myself up too well here) applications? Such a platform can provide you insight into the applications and services running in your environment and serve as an analysis platform. This can clearly aid in Critical Control 2, as well as serve as an audit and control platform.

Let us say there is a research and development network segment that needs inventory. There is an effort underway to assess, pragmatically, each workstation. Now imagine if you could also have a view into what applications were in regular use. Application-aware platforms are a Critical Control 2 enabler!
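
As a small, vendor-neutral illustration of the audit idea: suppose your application-aware platform can export per-VLAN application logs as CSV. A few lines of Python turn that export into a rough inventory; the column names here are assumptions, so adapt them to whatever your platform actually produces.

```python
# Toy example: summarize an application-aware platform's CSV export into a
# per-VLAN application inventory. The export format (columns "vlan" and
# "application") is an assumption -- adjust to match your platform's output.
import csv
from collections import defaultdict

def inventory_from_export(path):
    """Return a mapping of VLAN -> set of application names seen on that VLAN."""
    apps_by_vlan = defaultdict(set)
    with open(path, newline="") as handle:
        for row in csv.DictReader(handle):
            apps_by_vlan[row["vlan"]].add(row["application"])
    return apps_by_vlan

if __name__ == "__main__":
    for vlan, apps in sorted(inventory_from_export("app_export.csv").items()):
        print(f"VLAN {vlan}: {', '.join(sorted(apps))}")
```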

Richard Porter

— ISC Handler on Duty

(c) SANS Internet Storm Center. Creative Commons Attribution-Noncommercial 3.0 United States License.

SANS Internet Storm Center, InfoCON: green: Internet Wide Scanners Wanted, (Wed, Nov 4th)

This post was syndicated from: SANS Internet Storm Center, InfoCON: green and was written by: SANS Internet Storm Center, InfoCON: green. Original post: at SANS Internet Storm Center, InfoCON: green

In our data, we often find researchers performing internet-wide scans. To better identify these scans, we would like to add a label to these IPs identifying them as part of a research project. If you are part of such a project, or if you know of a project, please let me know. You can submit any information as a comment or via our contact form. If the IP addresses change often, then a URL with a parseable list would be appreciated to facilitate automatic updates.

Johannes B. Ullrich, Ph.D.

(c) SANS Internet Storm Center. Creative Commons Attribution-Noncommercial 3.0 United States License.

Raspberry Pi: Putting a Code Club in every community

This post was syndicated from: Raspberry Pi and was written by: Philip Colligan. Original post: at Raspberry Pi

Raspberry Pi Foundation and Code Club join forces

I am delighted to announce that Raspberry Pi Foundation and Code Club are joining forces in a merger that will give many more young people the opportunity to learn how to make things with computers.

Raspberry Pi Foundation and Code Club were both created as responses to the collective failure to prepare young people for life and work in a world that is shaped by digital technologies.

We’re part of a growing worldwide movement that is trying to solve that problem by equipping people with the knowledge and confidence to be digital makers, not just consumers.

Children at a Code Club

We’ve made a good start.

Since launching our first product in 2012, we have sold 7 million Raspberry Pi computers and reached hundreds of thousands of young people through our educational programmes, resources, open source software, and teacher training.

Since its launch in 2012, Code Club has helped establish over 3,800 clubs in the UK and over 1,000 clubs in 70 other countries. Run by volunteers, Code Clubs focus on giving 9-11 year olds the opportunity to make things with computers. Right now over 44,000 young people regularly attend Code Clubs in the UK alone, around 40% of whom are girls.

But we’ve got much more to do.

Research by Nesta shows that in the UK, many young people who want to get involved in digital making lack the opportunity to do so. We want to solve that problem, ensuring that there is a Code Club in every community in the UK and, ultimately, across the world.

A child absorbed in a task at a Code Club

In many ways, the decision to join forces was an obvious step. We share a common mission and values, we hugely respect each other’s work, and there are clear benefits from combining our capabilities, particularly if we want to have impact at a serious scale.

Code Club and Raspberry Pi share one other important characteristic: we’re both, at heart, community efforts, only possible thanks to the huge numbers of volunteers and educators who share our passion to get kids involved in digital making. One of our main goals is to support that community to grow.

Code Club – volunteer with us!

Code Club is building a network of coding clubs for children aged 9-11 across the UK. Can you help us inspire the next generation to get excited about digital making? Find out more about how to get involved at

The other critical part of Code Club’s success has been the generous philanthropic partners who have provided the resources and practical support that have enabled it to grow quickly, while being free for kids. ARM, Google, Cabinet Office, Nesta, Samsung and many other organisations have been brilliant partners already, and they will be just as important to the next stage of Code Club’s growth.

So what does this all mean in practice?

Technically, Code Club will become a wholly owned subsidiary of the Raspberry Pi Foundation. Importantly, its brand and approach will continue unchanged. It’s a proven model that works incredibly well and we don’t want to change it.

For the teachers and volunteers who run Code Clubs, nothing will change. Code Club HQ will continue to create awesome projects that you can use in your clubs. You will still use whatever hardware and software works best for your kids. We’ll still be working hard to match volunteers and schools to set up new clubs across the country, and developing partnerships that launch Code Clubs in other countries around the world.

For Raspberry Pi Foundation, this is an important step in diversifying our educational programmes. Of course, a lot of our work focuses on the Raspberry Pi computer as a tool for education (and it always will), but our mission and activities are much broader than that, and many of our programmes, like Code Club, are designed to be platform-neutral.

Code Club robot and Raspberry Pi robot: high five!

Personally, I’m really excited about working more closely with Code Club and helping them grow. I’ve been a big fan of their work for a long time, and over the past few weeks I’ve had the opportunity to visit Code Clubs across the country. I’ve been blown away by the energy and enthusiasm of the teachers, volunteers and young people involved.

If you don’t know them already, check them out at and, if you can, get involved. I know that many people in the Raspberry Pi community already volunteer at their local Code Club. I’d love to see that number grow!

The post Putting a Code Club in every community appeared first on Raspberry Pi.

TorrentFreak: Court Orders Shutdown of Libgen, Bookfi and Sci-Hub

This post was syndicated from: TorrentFreak and was written by: Ernesto. Original post: at TorrentFreak

libgenWith a net income of more than $1 billion Elsevier is one of the largest academic publishers in the world.

Through its ScienceDirect portal the company offers access to millions of scientific articles spread out over 2,200 journals, most of which are behind a paywall.

Websites such as Sci-Hub and The Library Genesis Project, or Libgen for short, have systematically breached this barrier by hosting pirated copies of scientific publications as well as mainstream books.

Earlier this year one of the largest publishers went into action to stop this threat. Elsevier filed a complaint at a New York District Court, accusing the sites’ operators of systematic copyright infringement.

The publisher requested damages and asked for a preliminary injunction to prevent the sites from distributing their articles while the case is ongoing.

Late last week District Court Judge Robert Sweet approved the request (pdf), ordering the operators of,, and several sister sites to cease their activities.

In addition, the responsible domain name registries are ordered to suspend the associated domain names until further notice.

Previously the Public Interest Registry (.ORG) refused to do so when Elsevier put in a request, noting that it would require a valid court order to suspend a domain name.


According to the order Elsevier showed that it’s likely to succeed based on its copyright infringement claims. In addition, there’s enough evidence to suggest that the defendants violated the Computer Fraud and Abuse Act.

“The balance of hardships clearly tips in favor of the Plaintiffs. Elsevier has shown that it is likely to succeed on the merits, and that it continues to suffer irreparable harm due to the Defendants’ making its copyrighted material available for free,” Judge Sweet writes.

The site’s operators have few grounds on which to fight the injunction, as they don’t have the right to distribute most of the articles in the first place.

“The Defendants cannot be legally harmed by the fact that they cannot continue to steal the Plaintiff’s content, even if they tried to do so for public-spirited reasons,” the order reads.

Alexandra Elbakyan, the founder of Sci-Hub, is the only person who responded to Elsevier’s complaint. In a letter she sent to the court before the injunction hearing, she argued that the publisher is exploiting researchers and blocking access to knowledge.

Judge Sweet agrees that there is a public interest to safeguard broad access to scientific research. However, simply putting all research online without permission is not the answer.

“Elbakyan’s solution to the problems she identifies, simply making copyrighted content available for free via a foreign website, disserves the public interest,” Judge Sweet writes.

The Judge notes that under current law researchers and the public are allowed to publicly share “ideas and insights” from the articles without restrictions. People can also freely use the copyrighted articles for research or educational purposes under the fair use doctrine.

“Under this doctrine, Elsevier’s articles themselves may be taken and used, but only for legitimate purposes, and not for wholesale infringement,” the order reads.

At the time of writing several of the websites, including and, are still online. It is expected that they will be suspended by the registry in a matter of days.

Time will tell whether the site operators will also stop offering copyrighted articles, or if they will simply move to a new domain name and continue business as usual.

Source: TorrentFreak, for the latest info on copyright, file-sharing, torrent sites and ANONYMOUS VPN services.

SANS Internet Storm Center, InfoCON: green: This Article is Brought to You By the Letter , (Fri, Oct 30th)

This post was syndicated from: SANS Internet Storm Center, InfoCON: green and was written by: SANS Internet Storm Center, InfoCON: green. Original post: at SANS Internet Storm Center, InfoCON: green

Recently, I managed to register the domain name This domain name uses the Japanese character, which looks somewhat like a slash typically used at the end of the domain name. As a result, an unsuspecting user may mistake the host name for the page at

International domain names and lookalikes are nothing new. As a result, registrars as well as browsers implemented various safeguards. But even with these safeguards, it is still possible to come up with creative domain names. Even without international characters, we do see typo squatting domains like rnicrosoft (this is r and n instead of m). There are a number of tools available that are trying to find all look-alike domains. For example, Domaintools provides a simple online tool [1]. Some companies attempt to register all look-alike domains. But a domain like could be used to impersonate arbitrary .com domain names.

The DNS protocol does not understand anything but plain ASCII. To encode IDNs, punycode is used. Punycode-encoded domain names start with xn--, followed by all the ASCII letters in the domain name, followed by a dash and the international letters in an encoded format. For example, my domain encodes to just such an xn-- form. To mitigate the risks of IDNs, some browsers use punycode to display the domain name if they consider it invalid.
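
You can see the mapping for yourself with Python’s built-in idna codec; note that it implements the older IDNA2003 rules, so some characters are handled differently than under IDNA2008, and this is only meant as a quick illustration.

```python
# Quick look at punycode/IDNA encoding using Python's built-in codec.
# The built-in "idna" codec implements IDNA2003; some characters are treated
# differently under IDNA2008, so treat this as illustrative only.
label = "bücher"                 # a label containing a non-ASCII character
encoded = label.encode("idna")   # ASCII-compatible form as it appears in DNS
print(encoded)                   # b'xn--bcher-kva'
print(encoded.decode("idna"))    # back to 'bücher'
```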

Punycode and other related standards are described in a document commonly referred to as IDNA2008 (Internationalized Domain Names for Applications, 2008), which is reflected in RFCs 5890-5895. You may still find references to an earlier version in RFCs 3490-3492. The RFCs mention some of the character confusion issues, but for the most part refer to registrars to apply appropriate policies.

Similarly, there is no clear standard for browsers. Different browsers implement IDNs differently.

Safari: Safari renders most international characters, with few exceptions. For example, Cyrillic and Greek characters are excluded as they are particularly easily confused with English characters [2].

Firefox: Firefox maintains a whitelist of top-level domains for which it will render international characters. See about:config for details. .com is not on the whitelist by default, but .org is. Country-level TLDs are on the whitelist.

Chrome: Chrome’s policy is a bit more granular [3].

Internet Explorer: Similar to Chrome. Also, international characters are only supported if the respective language support is enabled in Windows [4]. The document on Microsoft’s MSDN website was written for Internet Explorer 7, but still appears to remain valid.

Microsoft Edge: I couldn’t find any details about Microsoft Edge, but it appears to follow Internet Explorer’s policy.

And finally, here is a quick matrix of what users reported with my test URL:

Chrome: displays punycode.
Firefox: displays Unicode.
Safari: displays Unicode (users of Safari on OS X 10.10 report seeing punycode).
Opera: only a small number of Opera users participated, most reporting Unicode.
Internet Explorer: displays punycode.

Mobile browsers behave just like their desktop versions. For example, Google Chrome on Android does not display Unicode, but Safari on iOS does.

For summaries of Unicode security issues, also see and,_locale_and_Unicode (among other OWASP documents)


NB: Sorry for any RSS feeds that the title may break.

Johannes B. Ullrich, Ph.D.

(c) SANS Internet Storm Center. Creative Commons Attribution-Noncommercial 3.0 United States License.

AWS Official Blog: Amazon RDS Update – Cross-Account Snapshot Sharing

This post was syndicated from: AWS Official Blog and was written by: Jeff Barr. Original post: at AWS Official Blog

Today I would like to tell you about a new cross-account snapshot sharing feature for Amazon Relational Database Service (RDS). You can share the snapshots with specific AWS accounts or you can make them public.

Cross-Account Snapshot Sharing
I often create snapshot backups as part of my RDS demos:

The snapshots are easy to create and can be restored to a fresh RDS database instance with a couple of clicks.

Today’s big news is that you can now share unencrypted MySQL, Oracle, SQL Server, and PostgreSQL snapshots with other AWS accounts. If you, like many sophisticated AWS customers, use separate AWS accounts for development, testing, and production, you can now share snapshots between AWS accounts in a controlled fashion. If a late-breaking bug is discovered in a production system, you can create a database snapshot and then share it with select developers so that they can diagnose the problem without having to have access to the production account or system.

Each snapshot can be shared with up to 20 other accounts (we can raise this limit for your account if necessary; just ask). You can also mark snapshots as public so that any RDS user can restore a database containing your data. This is a great way to share data sets and research results!

Here is how you share a snapshot with another AWS account using the RDS Console (you can also do this from the command line or the RDS API):

Here’s how a snapshot appears in the accounts that it is shared with (again, this functionality is also accessible from the command line and the RDS API):

Here is how you create a public snapshot:

Snapshot sharing works across regions, but does not apply to the China (Beijing) region or to AWS GovCloud (US).
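
For those taking the command line or API route mentioned above, sharing boils down to modifying the snapshot’s “restore” attribute. Here is a rough boto3 sketch; the snapshot identifier and account ID are placeholders.

```python
# Rough sketch of cross-account snapshot sharing via the RDS API (boto3).
# The snapshot identifier and account ID below are placeholders.
import boto3

rds = boto3.client("rds")

# Share an unencrypted manual snapshot with one other AWS account:
rds.modify_db_snapshot_attribute(
    DBSnapshotIdentifier="my-demo-snapshot",
    AttributeName="restore",
    ValuesToAdd=["123456789012"],   # the account allowed to restore the snapshot
)

# Or make the snapshot public, so any AWS account can restore it:
rds.modify_db_snapshot_attribute(
    DBSnapshotIdentifier="my-demo-snapshot",
    AttributeName="restore",
    ValuesToAdd=["all"],
)
```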



TorrentFreak: Spotify Helps to Beat Music Piracy, European Commission Finds

This post was syndicated from: TorrentFreak and was written by: Ernesto. Original post: at TorrentFreak

spotifyWhen Spotify launched its first beta in the fall of 2008 we branded it “an alternative to music piracy.”

With the option to stream millions of tracks supported by an occasional ad, or free of ads for a small subscription fee, Spotify appeared to be a serious competitor to unauthorized downloading.

While there has been plenty of anecdotal support for this claim, actual research on the topic has been lacking. A new study published by the European Commission’s Joint Research Centre aims to fill this gap.

In the study, researchers Luis Aguiar (IPTS) and Joel Waldfogel (NBER) compare Spotify streaming data with download numbers for 8,000 artists pirated via torrent sites, as well as with legal digital track sales.

Based on this data the researchers conclude that Spotify has a clear displacement effect on piracy. For every 47 streams the number of illegal downloads decreases by one.

This is in line with comments from Spotify’s Daniel Ek, who previously argued that the streaming service helps to convert pirates into paying customers.

“According to these results, an additional 47 streams reduces by one the number of tracks obtained without payment,” the paper reads (pdf).

“This piracy displacement is consistent with Ek’s claim that Spotify’s bundled offering harvests revenue from consumers who – or at least from consumption instances – were previously not generating revenue,” the researchers add.

While that’s good news for the music industry, it doesn’t necessarily mean that more revenue is being generated. In addition to piracy, streaming services also impact legal track sales on iTunes and other platforms.

According to the researchers, 137 Spotify streams reduce the number of individual digital track sales by one. Factoring in the revenue per stream and download, the overall impact is relatively neutral.

“Given the current industry’s revenue from track sales ($0.82 per sale) and the average payment received per stream ($0.007 per stream), our sales displacement estimates show that the losses from displaced sales are roughly outweighed by the gains in streaming revenue.”

“In other words, our analysis shows that interactive streaming appears to be revenue-neutral for the recorded music industry,” the researchers add.
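As a back-of-the-envelope check using the figures quoted above: the 137 streams that displace one sale bring in roughly 137 × $0.007 ≈ $0.96 in streaming payments, compared with the $0.82 lost from the displaced track sale, which is why the researchers describe the net effect as roughly revenue-neutral.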

More studies are needed to see how streaming services impact the music industry in the long run, but for now it’s safe to conclude that they do indeed help to beat online piracy, as often suggested.

Source: TorrentFreak, for the latest info on copyright, file-sharing, torrent sites and ANONYMOUS VPN services.

AWS Official Blog: New AWS Public Data Set – Real-Time and Archived NEXRAD Weather Data

This post was syndicated from: AWS Official Blog and was written by: Jeff Barr. Original post: at AWS Official Blog

My colleague Ariel Gold wrote the guest post below to introduce the newest AWS Public Data Set.



You can now access real-time and archival NEXRAD weather radar data as an AWS Public Data Set.

The Next Generation Weather Radar (NEXRAD) is a network of 160 high-resolution Doppler radar sites that detects precipitation and atmospheric movement and disseminates data from each site at approximately five-minute intervals. NEXRAD enables severe storm prediction and is used by researchers and commercial enterprises to study and address the impact of weather across multiple sectors. As part of our research agreement with the US National Oceanic and Atmospheric Administration (NOAA), we are making NEXRAD data freely available on Amazon S3.

The real-time feed and the full historical archive of NEXRAD Level II data, from June 1991 to the present, are now available for anyone to use. Level II is the original-resolution base data from the NEXRAD system.

This is the first time the full NEXRAD Level II archive has been accessible to the public on demand. A wide range of customers have expressed interest in this data, including insurance providers, climate researchers, logistics companies, and weather companies. We’re excited to see what you do with it!

You can learn more about the data and how to access it on our NEXRAD on AWS page.

How We’re Sharing the Data
We’ve been testing out new ways to make NEXRAD data easy to use in the cloud. Before I get into some of the details of our approach, here are a couple of radar data terms for the uninitiated. First, “volume scan” refers to the data collected by the Doppler radar site as it scans the atmosphere. The NEXRAD site breaks these volume scans into “chunks” – small packages of data that are quickly transmitted as a real-time feed. The NEXRAD network generates about 1,200 chunks per hour.

We are storing the Level II data in two public Amazon S3 buckets: one for the real-time chunks and one for the archive (volume scan files). Data flows into the chunks bucket via Unidata’s Local Data Manager (LDM) system with minimal latency from the NEXRAD sites. Using event-based processing with AWS Lambda, the chunks are assembled into volume scan files and added to the archive bucket within seconds or minutes of production. This creates a continuously updated, near-real-time archive of volume scan files.

You can find information on the data structure on our NEXRAD on AWS page. You’ll see that the real-time data is hosted in the “unidata-nexrad-level2-chunks” Amazon S3 bucket. Unidata provides data services, tools, and cyberinfrastructure leadership for the earth science community and they have been fantastic collaborators on this project. You can read more about their experience setting up the NEXRAD real-time feed on AWS on their blog.
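Since the chunks bucket is public, you can browse it with anonymous (unsigned) requests. Here is a minimal boto3 sketch; the bucket name comes from the paragraph above, while the region and the amount of output are assumptions:

import boto3
from botocore import UNSIGNED
from botocore.client import Config

# The bucket is public, so unsigned (anonymous) requests are sufficient.
# Region assumed to be us-east-1; see the NEXRAD on AWS page for details.
s3 = boto3.client("s3", region_name="us-east-1",
                  config=Config(signature_version=UNSIGNED))

# List a handful of real-time chunk objects.
resp = s3.list_objects_v2(Bucket="unidata-nexrad-level2-chunks", MaxKeys=10)
for obj in resp.get("Contents", []):
    print(obj["Key"], obj["Size"])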

Getting Started with NEXRAD on AWS
Unidata, The Climate Corporation, and CartoDB have contributed tutorials to help you get started using NEXRAD on AWS. For example, this tutorial from The Climate Corporation shows you how to read and display the NEXRAD Level II archive data from your Python programs.
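The tutorial itself is not reproduced here, but as one illustration of that kind of workflow, the open-source Py-ART library can read a Level II volume scan once you have downloaded it from the archive; the filename below is a placeholder:

import matplotlib.pyplot as plt
import pyart

# A Level II volume scan file downloaded from the archive
# (placeholder name; real files are organized by date and radar site).
radar = pyart.io.read_nexrad_archive("KTLX_example_V06.gz")
print(radar.nsweeps)

# Plot the lowest-elevation reflectivity sweep.
display = pyart.graph.RadarDisplay(radar)
display.plot("reflectivity", sweep=0)
plt.show()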

Unidata has also made the NEXRAD Level II archive data available via their THREDDS Data Server. You can also browse the archive contents via the AWS JavaScript S3 Explorer:

Learn more about ways to use the data on our NEXRAD on AWS page.

Thank You
We’d like to thank our collaborators at NOAA, CICS-NC, Unidata, and The Weather Company who helped us launch this public data set and continue to help make it available. Many others helped test and contribute tools to the data set and we welcome additional contributions. Tell us how you’re using the data via the NEXRAD on AWS page and sign up for updates on the NOAA Big Data Project here.

Ariel Gold, Program Manager, AWS Open Data