Posts tagged ‘research’

Errata Security: x86 is a high-level language

This post was syndicated from: Errata Security and was written by: Robert Graham. Original post: at Errata Security

Just so you know, x86 machine-code is now a “high-level” language. What instructions say, and what they do, are very different things.

I mention this because of those commenting on this post on OpenSSL’s “constant-time” calculations, designed to avoid revealing secrets due to variations in compute time. The major comment is that it’s hard to do this perfectly in C. My response is that it’s hard to do this even in x86 machine code.

Consider registers, for example. Everyone knows that the 32-bit x86 was limited to 8 registers, while 64-bit expanded that to 16 registers. This isn’t actually true. The latest Intel processors have 168 registers. The name of the register in x86 code is really just a variable name, similar to how variables work in high-level languages.

So many registers are needed because the processor has 300 instructions “in flight” at any point in time in various stages of execution. It rearranges these instructions, executing them out-of-order. Everyone knows that processors can execute things slightly out-of-order, but that’s an understatement. Today’s processors are massively out-of-order.

Consider the traditional branch pair of a CMP (compare) followed by a JMPcc (conditional jump). While this is defined as two separate instructions as far as we humans are concerned, it’s now a single instruction as far as the processor is concerned.

Consider the “xor eax, eax” instruction, which is how we’ve traditionally cleared registers. This is never executed as an instruction; it simply marks “eax” as no longer used, so that the next time an instruction needs the register, the processor allocates a new (zeroed) register from that pool of 168.

Consider “mov eax, ebx”. Again, this doesn’t do anything, except rename the register as far as the processor is concerned, so that from this point on, what was referred to as ebx is now eax.

The processor has to stop and wait 5 clock cycles to read something from L1 cache, 12 cycles for L2 cache, or 30 cycles for L3 cache. But because the processor is massively out-of-order, it can continue executing instructions in the future that don’t depend upon this memory read. This includes other memory reads. Inside the CPU, the results always appear as if the processor executed everything in-order, but outside the CPU, things happen in a strange order.

This means any attempt to get smooth, predictable execution out of the processor is very difficult. That means “side-channel” attacks on x86 leaking software crypto secrets may always be with us.

One solution to these problems is the CMOV, “conditional move”, instruction. It’s like a normal “MOV” instruction, except that the move only happens when the condition flags match. It can be used in some cases to replace branches, which makes pipelined code more efficient. Currently, it takes constant time: when moving from memory, it still waits for the data to arrive, even when it knows it’s going to throw it away. As Linus Torvalds famously pointed out, CMOV doesn’t always speed up code. However, that’s not the point here — it does make code execution time more predictable. But, at the same time, Intel can arbitrarily change the behavior on future processors, making it less predictable.
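
To make the idea concrete, here is a minimal C sketch (my own illustration, not code from the original post) of the branch-free selection that CMOV enables. The mask trick avoids a data-dependent branch; compilers will often lower the equivalent ternary expression to a CMOV, though, as argued above, nothing guarantees the timing behavior will stay the same on future processors.

    #include <stdint.h>

    /* Branchless 32-bit select: returns a if cond is non-zero, b otherwise.
     * The mask is all-ones or all-zeros, so there is no data-dependent branch;
     * a compiler may compile the equivalent (cond ? a : b) down to a CMOV. */
    static uint32_t ct_select(uint32_t cond, uint32_t a, uint32_t b)
    {
        uint32_t mask = (uint32_t)0 - (cond != 0);  /* 0xFFFFFFFF or 0x00000000 */
        return (a & mask) | (b & ~mask);
    }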

The upshot is this: Intel’s x86 is a high-level language. Coding everything up according to Agner Fog’s instruction timings still won’t produce the predictable, constant-time code you are looking for. There may be some solutions, like using CMOV, but it will take research.

Schneier on Security: BIOS Hacking

This post was syndicated from: Schneier on Security and was written by: schneier. Original post: at Schneier on Security

We’ve learned a lot about the NSA’s abilities to hack a computer’s BIOS so that the hack survives reinstalling the OS. Now we have a research presentation about it.

From Wired:

The BIOS boots a computer and helps load the operating system. By infecting this core software, which operates below antivirus and other security products and therefore is not usually scanned by them, spies can plant malware that remains live and undetected even if the computer’s operating system were wiped and re-installed.

[…]

Although most BIOS have protections to prevent unauthorized modifications, the researchers were able to bypass these to reflash the BIOS and implant their malicious code.

[…]

Because many BIOS share some of the same code, they were able to uncover vulnerabilities in 80 percent of the PCs they examined, including ones from Dell, Lenovo and HP. The vulnerabilities, which they’re calling incursion vulnerabilities, were so easy to find that they wrote a script to automate the process and eventually stopped counting the vulns it uncovered because there were too many.

From ThreatPost:

Kallenberg said an attacker would need to already have remote access to a compromised computer in order to execute the implant and elevate privileges on the machine through the hardware. Their exploit turns down existing protections in place to prevent re-flashing of the firmware, enabling the implant to be inserted and executed.

The devious part of their exploit is that they’ve found a way to insert their agent into System Management Mode, which is used by firmware and runs separately from the operating system, managing various hardware controls. System Management Mode also has access to memory, which puts supposedly secure operating systems such as Tails in the line of fire of the implant.

From the Register:

“Because almost no one patches their BIOSes, almost every BIOS in the wild is affected by at least one vulnerability, and can be infected,” Kovah says.

“The high amount of code reuse across UEFI BIOSes means that BIOS infection can be automatic and reliable.

“The point is less about how vendors don’t fix the problems, and more how the vendors’ fixes are going un-applied by users, corporations, and governments.”

From Forbes:

Though such “voodoo” hacking will likely remain a tool in the arsenal of intelligence and military agencies, it’s getting easier, Kallenberg and Kovah believe. This is in part due to the widespread adoption of UEFI, a framework that makes it easier for the vendors along the manufacturing chain to add modules and tinker with the code. That’s proven useful for the good guys, but also made it simpler for researchers to inspect the BIOS, find holes and create tools that find problems, allowing Kallenberg and Kovah to show off exploits across different PCs. In the demo to FORBES, an HP PC was used to carry out an attack on an ASUS machine. Kovah claimed that in tests across different PCs, he was able to find and exploit BIOS vulnerabilities across 80 per cent of machines he had access to and he could find flaws in the remaining 10 per cent.

“There are protections in place that are supposed to prevent you from flashing the BIOS and we’ve essentially automated a way to find vulnerabilities in this process to allow us to bypass them. It turns out bypassing the protections is pretty easy as well,” added Kallenberg.

The NSA has a term for vulnerabilities it thinks are exclusive to it: NOBUS, for “nobody but us.” Turns out that NOBUS is a flawed concept. As I keep saying: “Today’s top-secret programs become tomorrow’s PhD theses and the next day’s hacker tools.” By continuing to exploit these vulnerabilities rather than fixing them, the NSA is keeping us all vulnerable.

Two Slashdot threads. Hacker News thread. Reddit thread.

Schneier on Security: Understanding the Organizational Failures of Terrorist Organizations

This post was syndicated from: Schneier on Security and was written by: schneier. Original post: at Schneier on Security

New research: Max Abrahms and Philip B.K. Potter, “Explaining Terrorism: Leadership Deficits and Militant Group Tactics,” International Organization.

Abstract: Certain types of militant groups — those suffering from leadership deficits — are more likely to attack civilians. Their leadership deficits exacerbate the principal-agent problem between leaders and foot soldiers, who have stronger incentives to harm civilians. We establish the validity of this proposition with a tripartite research strategy that balances generalizability and identification. First, we demonstrate in a sample of militant organizations operating in the Middle East and North Africa that those lacking centralized leadership are prone to targeting civilians. Second, we show that when the leaderships of militant groups are degraded from drone strikes in the Afghanistan-Pakistan tribal regions, the selectivity of organizational violence plummets. Third, we elucidate the mechanism with a detailed case study of the al-Aqsa Martyrs Brigade, a Palestinian group that turned to terrorism during the Second Intifada because pressure on the leadership allowed low-level members to act on their preexisting incentives to attack civilians. These findings indicate that a lack of principal control is an important, underappreciated cause of militant group violence against civilians.

I have previously blogged Max Abrahms’s work here, here, and here.

Schneier on Security: How We Become Habituated to Security Warnings on Computers

This post was syndicated from: Schneier on Security and was written by: schneier. Original post: at Schneier on Security

New research: “How Polymorphic Warnings Reduce Habituation in the Brain – Insights from an fMRI Study.”

Abstract: Research on security warnings consistently points to habituation as a key reason why users ignore security warnings. However, because habituation as a mental state is difficult to observe, previous research has examined habituation indirectly by observing its influence on security behaviors. This study addresses this gap by using functional magnetic resonance imaging (fMRI) to open the “black box” of the brain to observe habituation as it develops in response to security warnings. Our results show a dramatic drop in the visual processing centers of the brain after only the second exposure to a warning, with further decreases with subsequent exposures. To combat the problem of habituation, we designed a polymorphic warning that changes its appearance. We show in two separate experiments using fMRI and mouse cursor tracking that our polymorphic warning is substantially more resistant to habituation than conventional warnings. Together, our neurophysiological findings illustrate the considerable influence of human biology on users’ habituation to security warnings.

Webpage.

[Медийно право] [Нели Огнянова]: World university rankings 2014 – 2015

This post was syndicated from: [Медийно право] [Нели Огнянова] and was written by: nellyo. Original post: at [Медийно право] [Нели Огнянова]

The Times Higher Education World University Rankings 2014-2015

University rankings

2015 rank 2014 rank Institution
1 1 Harvard University (US)
2 4 University of Cambridge (UK)
3 5 University of Oxford (UK)
4 2 Massachusetts Institute of Technology (US)
5 3 Stanford University (US)
6 6 University of California, Berkeley (US)
7 7 Princeton University (US)
8 8 Yale University (US)
9 9 California Institute of Technology (US)
10 12 Columbia University (US)

University rankings by subject area

Top 100 for the social sciences

And if you are interested in the social sciences in Europe – here is the top of the list:

 

Rank Institution Location Overall score
3 University of Oxford United Kingdom 93.2
5 University of Cambridge United Kingdom 92.0
9 Imperial College London United Kingdom 87.5
13 ETH Zürich – Swiss Federal Institute of Technology Zürich Switzerland 84.6
22 University College London (UCL) United Kingdom 78.7
29 Ludwig Maximilian University of Munich Germany 71.9
34 École Polytechnique Fédérale de Lausanne Switzerland 70.9
34 London School of Economics and Political Science (LSE) United Kingdom 70.9
36 University of Edinburgh United Kingdom 70.4
40 King’s College London United Kingdom 69.4
44 Karolinska Institute Sweden 66.8
52 University of Manchester United Kingdom 64.5
55 KU Leuven Belgium 63.7
61 École Polytechnique France 62.2
63 Scuola Normale Superiore di Pisa Italy 61.9
64 Leiden University Netherlands 61.3
67 Georg-August-Universität Göttingen Germany 61.0
70 Heidelberg University Germany 59.6
71 Delft University of Technology Netherlands 59.2
72 Erasmus University Rotterdam Netherlands 59.1
73 Wageningen University and Research Center Netherlands 59.0
74 University of Bristol United Kingdom 58.9
75 Universität Basel Switzerland 58.4
77 University of Amsterdam Netherlands 58.2
78 École Normale Supérieure France 58.1
79 Utrecht University Netherlands 58.0
80 Humboldt University of Berlin Germany 57.9
81 Free University of Berlin Germany 57.6
83 Durham University United Kingdom 56.9
90 Ghent University Belgium 56.2
94 University of Glasgow United Kingdom 55.3
98 Stockholm University Sweden 54.6
98 Technical University of Munich Germany 54.6
98 Uppsala University Sweden 54.6
101 Maastricht University Netherlands 54.3
103 University of Helsinki Finland 53.9
103 Université Pierre et Marie Curie France 53.9
103 University of Warwick United Kingdom 53.9
103 University of Zürich Switzerland 53.9
107 Queen Mary University of London United Kingdom 53.8
107 University of Geneva Switzerland 53.8
111 University of St Andrews United Kingdom 53.6
111 University of Sussex United Kingdom 53.6
113 University of York United Kingdom 53.4
113 Eberhard Karls Universität Tübingen Germany 53.4
117 University of Groningen Netherlands 53.1
118 Royal Holloway, University of London United Kingdom 53.0
119 Lund University

TorrentFreak: Granny Pirate Busted For Torrents at 63 Years Young

This post was syndicated from: TorrentFreak and was written by: Andy. Original post: at TorrentFreak

Even as new services like Spotify and Netflix gain traction, people are still flocking to file-sharing networks in their millions. These days people are increasingly likely to get a warning letter in the post advising them to mend their ways or face bigger trouble, but tougher approaches still exist.

While being targeted by a copyright troll must be a pretty miserable experience, being arrested has to be a lot worse. It only happens rarely and when it does it tends to affect the tech savvy 18-to-35s who grew up with the social norm of sharing files online. On occasion, however, it happens to those much older.

In 2011, a 58-year-old grandmother from Scotland was arrested and eventually sentenced to three years’ probation for sharing files online. However, a new case in Europe has cast that earlier one into the shadows.

According to police in Romania, a 63-year-old woman has just been arrested for sharing files using BitTorrent. The raid took place in Cluj-Napoca (commonly known as Cluj), the second most populous city in Romania after the capital Bucharest.

“Following investigations by the economic crime investigation unit, police in Cluj…prosecuted a 63-year-old woman,” a police statement reads.

“This investigation was about the offense of making content available to the public, including via the Internet or other computer networks so that the public can access it anywhere and at any time individually chosen.”

Local police say their research revealed that the woman had been making available significant quantities of movies, music and other content without the necessary permission from rights holders. While that doesn’t sound out of the ordinary, the country doesn’t have much of a record for this kind of action. In fact, many torrent sites themselves operate out of Romania trouble free.

A source familiar with the copyright and enforcement scene in Romania told TorrentFreak that while it is indeed unusual for someone so old to be prosecuted for file-sharing, in Romania the prosecution of file-sharers of any age is “very very rare.”

“The police are doing this on their own? Never,” he said. “They only follow [pressure from] companies.”

The suggestion that complaints from rightsholders prompted the arrest is not an unusual one and Romanian media notes that entertainment company involvement in the case will continue as potential damages claims are assessed.

The lady at the center of this Romanian case is quite possibly the oldest file-sharer to be prosecuted anywhere in the world. The case featuring the youngest alleged pirate – just 9 years old – became infamous following the confiscation of a Winnie-the-Pooh laptop.

Source: TorrentFreak, for the latest info on copyright, file-sharing, torrent sites and anonymous VPN services.

TorrentFreak: Mega Ponders Legal Action in Response to Damaging Paypal Ban

This post was syndicated from: TorrentFreak and was written by: Ernesto. Original post: at TorrentFreak

In September last year, the Digital Citizens Alliance and NetNames released a report that looked into the business models of “shadowy” file-storage sites.

Titled “Behind The Cyberlocker Door: A Report on How Shadowy Cyberlockers Use Credit Card Companies to Make Millions,” the report offers insight into the money streams that end up at these alleged pirate sites.

The research claims that the sites in question are mostly used for copyright infringement. But while there are indeed many shadowy hosting services, many were surprised to see the Kim Dotcom-founded Mega.co.nz on there.

For entertainment industry groups the report offered an opportunity to put pressure on Visa and MasterCard. In doing so they received support from U.S. Senator Patrick Leahy, who was also the lead sponsor of the controversial, now-defunct Protect IP Act (PIPA).

Senator Leahy wrote a letter to the credit card companies claiming that the sites mentioned in the report have “no legitimate purpose or activity,” hoping they would cut their connections to the mentioned sites.

Visa and MasterCard took these concerns to heart and pressed PayPal to cut off its services to Mega, which eventually happened late last month. Interestingly, PayPal cited Mega’s end-to-end-encryption as one of the key problems, as that would make it harder to see what files users store.

The PayPal ban has been a huge blow for Mega, both reputation-wise and financially. And the realization that the controversial NetNames report is one of the main facilitators of the problems is all the more frustrating.

TorrentFreak spoke with CEO Graham Gaylard, who previously characterized the report as “grossly untrue and highly defamatory,” to discuss whether Mega still intends to take steps against the UK-based NetNames for their accusations.

Initially, taking legal action against NetNames for defamation was difficult, as UK law requires the complaining party to show economic damage. However, after the PayPal ban this shouldn’t be hard to do.

Gaylard is traveling through Europe at the moment and he notes that possible repercussions against the damaging report are high on the agenda.

“Yes, I am here to see Mega’s London-based legal counsel to discuss the next steps in progressing the NetNames’ response,” Gaylard informs TF.

Mega’s CEO couldn’t release any details on a possible defamation lawsuit, but he stressed that his company will fiercely defend itself against smear campaigns.

“Mega has been operating, and continues to operate a completely legitimate and transparent business. Unfortunately now, with the blatant, obvious, political pressure and industry lobbying against Mega, Mega needs to defend itself and will now cease taking a passive stance,” Gaylard says.

According to the CEO, Mega is running a perfectly legal business. The allegation that it’s a piracy haven is completely fabricated. Like any other storage provider, there is copyrighted content on Mega’s servers, but that’s a tiny fraction of the total stored.

To illustrate this, Gaylard mentions that they only receive a few hundred takedown notices per month. In addition, he notes more than 99.7% of the 18 million files that are uploaded per day are smaller than 20MB in size, not enough to share a movie or TV-show.

These statistics are certainly not the hallmark of a service with “no legitimate purpose or activity,” as was claimed.

While the PayPal ban is a major setback, Mega is still doing well in terms of growth. They have 15 million registered customers across 200 countries, and hundreds of thousands of new users join every month.

Source: TorrentFreak, for the latest info on copyright, file-sharing, torrent sites and anonymous VPN services.

Schneier on Security: Geotagging Twitter Users by Mining Their Social Graphs

This post was syndicated from: Schneier on Security and was written by: schneier. Original post: at Schneier on Security

New research: “Geotagging One Hundred Million Twitter Accounts with Total Variation Minimization,” by Ryan Compton, David Jurgens, and David Allen.

Abstract: Geographically annotated social media is extremely valuable for modern information retrieval. However, when researchers can only access publicly-visible data, one quickly finds that social media users rarely publish location information. In this work, we provide a method which can geolocate the overwhelming majority of active Twitter users, independent of their location sharing preferences, using only publicly-visible Twitter data.

Our method infers an unknown user’s location by examining their friend’s locations. We frame the geotagging problem as an optimization over a social network with a total variation-based objective and provide a scalable and distributed algorithm for its solution. Furthermore, we show how a robust estimate of the geographic dispersion of each user’s ego network can be used as a per-user accuracy measure which is effective at removing outlying errors.

Leave-many-out evaluation shows that our method is able to infer location for 101,846,236 Twitter users at a median error of 6.38 km, allowing us to geotag over 80% of public tweets.
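
The core of the method is easy to picture: treat users with self-reported locations as fixed, and repeatedly pull every other user toward their friends. Below is a heavily simplified C sketch of that idea (my own illustration; the paper’s actual algorithm optimizes a total-variation objective in a distributed fashion and uses a robust dispersion measure to discard bad estimates). The coordinate-wise median of friends’ positions is used because medians, unlike means, correspond to an L1-style objective and resist outliers.

    #include <stdlib.h>

    #define MAX_FRIENDS 64

    struct user {
        double lat, lon;            /* current location estimate */
        int    known;               /* 1 if the user self-reports a location */
        int    nfriends;
        int    friends[MAX_FRIENDS];
    };

    static int cmp_double(const void *a, const void *b)
    {
        double d = *(const double *)a - *(const double *)b;
        return (d > 0) - (d < 0);
    }

    static double median(double *v, int n)
    {
        qsort(v, n, sizeof(double), cmp_double);
        return (n % 2) ? v[n / 2] : 0.5 * (v[n / 2 - 1] + v[n / 2]);
    }

    /* One propagation round: every user without a known location moves to the
     * coordinate-wise median of their friends' current estimates.  Iterating
     * this a few dozen times is a crude stand-in for the paper's solver. */
    void propagate(struct user *users, int n)
    {
        for (int i = 0; i < n; i++) {
            if (users[i].known || users[i].nfriends == 0)
                continue;
            double lats[MAX_FRIENDS], lons[MAX_FRIENDS];
            for (int j = 0; j < users[i].nfriends; j++) {
                lats[j] = users[users[i].friends[j]].lat;
                lons[j] = users[users[i].friends[j]].lon;
            }
            users[i].lat = median(lats, users[i].nfriends);
            users[i].lon = median(lons, users[i].nfriends);
        }
    }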

Schneier on Security: Data and Goliath’s Big Idea

This post was syndicated from: Schneier on Security and was written by: schneier. Original post: at Schneier on Security

Data and Goliath is a book about surveillance, both government and corporate. It’s an exploration in three parts: what’s happening, why it matters, and what to do about it. This is a big and important issue, and one that I’ve been working on for decades now. We’ve been on a headlong path of more and more surveillance, fueled by fear – of terrorism mostly – on the government side, and convenience on the corporate side. My goal was to step back and say “wait a minute; does any of this make sense?” I’m proud of the book, and hope it will contribute to the debate.

But there’s a big idea here too, and that’s the balance between group interest and self-interest. Data about us is individually private, and at the same time valuable to all of us collectively. How do we decide between the two? If President Obama tells us that we have to sacrifice the privacy of our data to keep our society safe from terrorism, how do we decide if that’s a good trade-off? If Google and Facebook offer us free services in exchange for allowing them to build intimate dossiers on us, how do we know whether to take the deal?

There are a lot of these sorts of deals on offer. Waze gives us real-time traffic information, but does it by collecting the location data of everyone using the service. The medical community wants our detailed health data to perform all sorts of health studies and to get early warning of pandemics. The government wants to know all about you to better deliver social services. Google wants to know everything about you for marketing purposes, but will “pay” you with free search, free e-mail, and the like.

Here’s another one I describe in the book: “Social media researcher Reynol Junco analyzes the study habits of his students. Many textbooks are online, and the textbook websites collect an enormous amount of data about how – and how often – students interact with the course material. Junco augments that information with surveillance of his students’ other computer activities. This is incredibly invasive research, but its duration is limited and he is gaining new understanding about how both good and bad students study – and has developed interventions aimed at improving how students learn. Did the group benefit of this study outweigh the individual privacy interest of the subjects who took part in it?”

Again and again, it’s the same trade-off: individual value versus group value.

I believe this is the fundamental issue of the information age, and solving it means careful thinking about the specific issues and a moral analysis of how they affect our core values.

You can see that in some of the debate today. I know hardened privacy advocates who think it should be a crime for people to withhold their medical data from the pool of information. I know people who are fine with pretty much any corporate surveillance but want to prohibit all government surveillance, and others who advocate the exact opposite.

When possible, we need to figure out how to get the best of both: how to design systems that make use of our data collectively to benefit society as a whole, while at the same time protecting people individually.

The world isn’t waiting; decisions about surveillance are being made for us – often in secret. If we don’t figure this out for ourselves, others will decide what they want to do with us and our data. And we don’t want that. I say: “We don’t want the FBI and NSA to secretly decide what levels of government surveillance are the default on our cell phones; we want Congress to decide matters like these in an open and public debate. We don’t want the governments of China and Russia to decide what censorship capabilities are built into the Internet; we want an international standards body to make those decisions. We don’t want Facebook to decide the extent of privacy we enjoy amongst our friends; we want to decide for ourselves.”

In my last chapter, I write: “Data is the pollution problem of the information age, and protecting privacy is the environmental challenge. Almost all computers produce personal information. It stays around, festering. How we deal with it – how we contain it and how we dispose of it – is central to the health of our information economy. Just as we look back today at the early decades of the industrial age and wonder how our ancestors could have ignored pollution in their rush to build an industrial world, our grandchildren will look back at us during these early decades of the information age and judge us on how we addressed the challenge of data collection and misuse.”

That’s it; that’s our big challenge. Some of our data is best shared with others. Some of it can be ‘processed’ – anonymized, maybe – before reuse. Some of it needs to be disposed of properly, either immediately or after a time. And some of it should be saved forever. Knowing what data goes where is a balancing act between group and self-interest, a trade-off that will continually change as technology changes, and one that we will be debating for decades to come.

This essay previously appeared on John Scalzi’s blog Whatever.

Krebs on Security: Intuit Failed at ‘Know Your Customer’ Basics

This post was syndicated from: Krebs on Security and was written by: BrianKrebs. Original post: at Krebs on Security

Intuit, the makers of TurboTax, recently introduced several changes to beef up the security of customer accounts following a spike in tax refund fraud at the state and federal level. Unfortunately, those changes don’t go far enough. Here’s a look at some of the missteps that precipitated this mess, and what the company can do differently going forward.

As The Wall Street Journal noted in a story this week, competitors H&R Block and TaxAct say they haven’t seen a similar surge in fraud this year. Perhaps the bad guys are just picking on the industry leader. But with 29 million customers last year — far more than H&R Block or TaxAct (which each had about seven million) — TurboTax should also be leading the industry in security.

Keep in mind that none of the security steps described below are going to stop fraud alone. But taken together, they do or would provide more robust security for TurboTax accounts, and significantly raise the costs for criminals engaged in this type of fraud.

NO EMAIL VALIDATION

Intuit fails to take basic steps to validate key account information, such as email addresses and mobile numbers, and these failures have limited the company’s ability to enact stricter account security measures. In fact, TurboTax still does not require new users to verify their email address, a basic security precaution that even random Internet forums which don’t collect nearly as much sensitive data require of all new users.
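
For contrast, the basic email-verification flow that forums and webmail providers use is simple to sketch. The example below is an illustration of the general pattern, not Intuit’s or anyone else’s actual code: a new account stays in a pending state until the user echoes back a random token that was emailed to them. The token must come from a cryptographically secure random source, and the mail-sending function here is a hypothetical stub.

    #include <stdio.h>
    #include <string.h>

    #define TOKEN_LEN 32

    struct account {
        char email[256];
        char pending_token[TOKEN_LEN + 1];  /* token mailed to the address */
        int  verified;                      /* flips to 1 only after echo-back */
    };

    /* Stand-in for whatever mail delivery the site uses (hypothetical). */
    static void send_verification_email(const char *email, const char *token)
    {
        printf("To: %s -- your verification code is %s\n", email, token);
    }

    void start_verification(struct account *acct, const char *random_token)
    {
        /* random_token must be produced by a CSPRNG elsewhere. */
        snprintf(acct->pending_token, sizeof(acct->pending_token), "%s", random_token);
        acct->verified = 0;
        send_verification_email(acct->email, acct->pending_token);
    }

    int confirm_verification(struct account *acct, const char *submitted)
    {
        if (strcmp(acct->pending_token, submitted) == 0) {
            acct->verified = 1;   /* only now should the account be usable */
            return 1;
        }
        return 0;
    }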

Last month, KrebsOnSecurity featured an in-depth story that stemmed from information provided by two former Intuit security employees who accused the company of making millions of dollars knowingly processing tax refund requests filed by cybercriminals. Those individuals shared a great deal about Intuit’s internal discussions on how best to handle a spike in account takeovers and fraudsters using stolen personal information to file tax refund requests on unwitting consumers.

Both whistleblowers said the lack of email verification routinely led to bizarre scenarios in which customers would complain of seeing other peoples’ tax data in their accounts. These were customers who’d forgotten their passwords and entered their email address at the site to receive a password reset link, only to find their email address tied to multiple identities that belonged to other victims of stolen identity refund fraud.

In mid-February, Intuit announced that it would begin the process of prompting all users to validate their accounts, either by validating their email address, answering a set of knowledge-based authentication questions, or entering a code sent to their mobile phone.

In an interview today, Intuit’s leadership sidestepped questions about why the company still does not validate email addresses. But TurboTax Chief Information Security Officer Indu Kodukula did say TurboTax will no longer display multiple profiles tied to a single email address when users attempt to reset their passwords by supplying an email address.

“We had an option where when you entered an email address, we’d show you a list of user IDs that were associated with that address,” Kodukula said. “We’ve removed that option, so now if you try to do password recovery, you have to go back to the email associated with you.”

NO PHONE VALIDATION

As previously stated, TurboTax doesn’t require users to enter a valid mobile phone number, so multi-factor authentication will not be available for many new and existing customers. More importantly, in failing to require customers to supply mobile numbers, Intuit is passing up a major tool to combat fraud and account takeovers.

Verifying customers by sending a one-time code to their mobile that they then have to enter into the Web site before their account is created can dramatically drive up the costs for fraudsters. I’ve written several stories on academic research that looked at the market for bulk-created online accounts sought after by spammers, such as free Webmail and Twitter accounts. That research showed that bulk-created accounts at services which required phone verification were far more expensive than accounts at providers that lacked this requirement.

True, fraudsters can outsource this account validation process to freelancers, but there is no denying that it increases the cost of creating new accounts because scammers must have a unique mobile number for every account they create. TurboTax should require all users to supply a working mobile phone number.
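
A minimal sketch of the verification half of that flow is below (my own illustration, not TurboTax code). The six-digit code is assumed to be generated elsewhere from a cryptographic random source and delivered by SMS; the check deliberately avoids an early exit so that a wrong first digit and a wrong last digit take the same time to reject.

    #include <stddef.h>

    /* Compare a submitted one-time code against the expected one without
     * short-circuiting, accumulating the differences instead of returning on
     * the first mismatch.  Returns 1 on an exact match, 0 otherwise. */
    int otp_matches(const char *expected, const char *submitted, size_t len)
    {
        unsigned char diff = 0;
        for (size_t i = 0; i < len; i++)
            diff |= (unsigned char)(expected[i] ^ submitted[i]);
        return diff == 0;
    }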

NO NOTICE OF ACCOUNT CHANGES

Until very recently, if hackers broke into your TurboTax account and made important changes, you might never know about it until you went to file your return and received a notification that someone had already filed it for you. This allowed fraudsters who had hijacked an account to wait until the legitimate user had filled out their personal data, and then change the bank account to which the refund would be credited.

On Feb. 26, 2015, Intuit said it would begin notifying customers via email if any user profile data is altered, including the account password, email address, security question, login name, phone number, name or address.

NO ‘KNOW YOUR CUSTOMER’ VALIDATION

According to the interviews with Intuit’s former security employees, much of the tax refund fraud being perpetrated through TurboTax stems from a basic weakness: The company does not require new customers to do anything to prove their identity before signing up for a TurboTax account. During the account sign-up, you’re whoever you want to be. There is no identity proofing, such as a requirement to answer so-called “out-of-wallet” or “knowledge-based authentication” questions.

Out-of-wallet questions are hardly an insurmountable hurdle for fraudsters. Indeed, some of the major providers of these challenges have been targeted by underground identity theft services. But these questions do complicate things for fraudsters. Intuit should take a cue from credit score and credit file monitoring service creditkarma.com, which asks a series of these questions before allowing users to create an account. And, unlike turbotax.com — which will happily let multiple users create accounts with the same Social Security number and other information — creditkarma.com blocks this activity.

Kodukula said Intuit is considering requiring out-of-wallet questions at account signup. This is good news, because as I noted in last month’s story, Intuit’s anti-fraud efforts have been tempered by a focus on zero tolerance for “false positives” — the problem of incorrectly flagging a legitimate customer refund request as suspicious. Given that focus, Intuit should do everything it can to prevent fraudsters from signing up with its service in the first place.

LAX ACCOUNT RECOVERY TOOLS

In an interview with KrebsOnSecurity last month, Kodukula said a recent spike in tax refund fraud at the state level was due in part to an increase in account takeovers. Kodukula said a big part of that increase stemmed from the tendency for people to re-use passwords across multiple sites. “This technique works because a fair percentage of users re-use passwords at multiple sites,” I wrote in that article. “When a breach at one site exposes the email addresses and passwords of its users, fraudsters will invariably try the stolen account credentials at other sites, knowing that a small percentage of them will work.”

But according to the whistleblowers, Intuit has historically made it quite easy for fraudsters to hijack accounts by abusing TurboTax’s procedures for helping customers recover access to accounts when they forgot their account password and the email address used to register the account. Users who forget both of these things are prompted to supply their name, address, date of birth, Social Security number and ZIP code, information that is not terribly difficult to obtain cheaply from multiple ID theft services in the cybercrime underground.

In fact, the whistleblowers related a story about how they sought to raise awareness of the problem internally at Intuit by using TurboTax’s account recovery tools to hijack the TurboTax account of the company’s CEO Brad Smith.

Kodukula said that pursuant to changes made in the last two weeks, users who try to recover their passwords will now need to successfully answer a series of out-of-wallet questions to complete that process.

UNLINKED STATE RETURNS

As I wrote last month, a big reason why the spike in tax refund fraud disproportionately affected TurboTax is that until very recently, TurboTax was the only major do-it-yourself online tax prep company that allowed so-called “unlinked” state tax filings.

States allow unlinked returns because most taxpayers owe taxes at the federal level but are due refunds from their state. Thus, unlinked returns allow taxpayers who owe money to the IRS to pay some or all of that off with state refund money.

Unlinked returns typically have made up a very small chunk of Intuit’s overall returns, Intuit’s Kodukula explained. However, so far in this year’s tax filing season, Intuit has seen between three- and 37-fold increases in unlinked, state-only returns. Convinced that most of those requests are fraudulent, the company now blocks users from filing unlinked returns via TurboTax. According to The Wall Street Journal, neither TaxAct nor H&R Block allowed users to file unlinked returns.

Schneier on Security: Now Corporate Drones are Spying on Cell Phones

This post was syndicated from: Schneier on Security and was written by: schneier. Original post: at Schneier on Security

The marketing firm Adnear is using drones to track cell phone users:

The capture does not involve conversations or personally identifiable information, according to director of marketing and research Smriti Kataria. It uses signal strength, cell tower triangulation, and other indicators to determine where the device is, and that information is then used to map the user’s travel patterns.

“Let’s say someone is walking near a coffee shop,” Kataria said by way of example.

The coffee shop may want to offer in-app ads or discount coupons to people who often walk by but don’t enter, as well as to frequent patrons when they are elsewhere. Adnear’s client would be the coffee shop or other retailers who want to entice passersby.

[…]

The system identifies a given user through the device ID, and the location info is used to flesh out the user’s physical traffic pattern in his profile. Although anonymous, the user is “identified” as a code. The company says that no name, phone number, router ID, or other personally identifiable information is captured, and there is no photography or video.

Does anyone except this company believe that device ID is not personally identifiable information?

SANS Internet Storm Center, InfoCON: green: Let’s Encrypt!, (Fri, Feb 27th)

This post was syndicated from: SANS Internet Storm Center, InfoCON: green and was written by: SANS Internet Storm Center, InfoCON: green. Original post: at SANS Internet Storm Center, InfoCON: green

As I have stated in the past, I am not a fan of all of the incomprehensible warning messages that average users are inundated with, and almost universally fail to understand, and the click-thru culture these dialogs are propagating.

Unfortunately this is not just confined to websites on the Internet. With the increased use of HTTPS for web-based management, this issue is increasingly appearing on corporate networks.

The issue in most cases is caused by what is called a self-signed certificate. Essentially, a certificate not backed by a recognized certificate authority. The fact is that recognized certificates are not cheap. For vendors to supply valid certificates for every device they sell would add significant cost to the product and would require the vendor to manage those certificates on all of their machines.

The Internet Security Research Group (ISRG), a public benefit corporation sponsored by the Electronic Frontier Foundation (EFF), Mozilla and other heavy hitters, aims to help reduce this problem and clean up the invalid certificate warning dialogs.

Their project, Let’s Encrypt, aims to provide certificates for free, and automate the deployment and expiry of certificates.

Essentially, a piece of software is installed on the server which will talk to the Let’s Encrypt certificate authority. From Let’s Encrypt’s website:

The Let’s Encrypt management software will:

  • Automatically prove to the Let’s Encrypt CA that you control the website
  • Obtain a browser-trusted certificate and set it up on your web server
  • Keep track of when your certificate is going to expire, and automatically renew it
  • Help you revoke the certificate if that ever becomes necessary.

While there is still some complexity involved, it should make it a lot easier, and cheaper, for vendors to deploy legitimate certificates into their products. I am interested to see how they will stop bad guys from using their certificates for phishing sites, and what the process will be to report fraudulent use, but I am sure all of that will come.
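
The automation is the interesting part: once the client can tell how close a certificate is to expiry, renewal becomes a boring scheduled job. A minimal sketch of that decision in C follows (my own illustration; the 30-day threshold is a common rule of thumb I am assuming, not something taken from the Let’s Encrypt announcement, and obtaining not_after from the certificate is left to whatever TLS library the server uses).

    #include <stdio.h>
    #include <time.h>

    /* Return 1 if the certificate expiring at not_after (a Unix timestamp)
     * should be renewed now, i.e. it has fewer than threshold_days left. */
    int needs_renewal(time_t not_after, int threshold_days)
    {
        double seconds_left = difftime(not_after, time(NULL));
        return seconds_left < (double)threshold_days * 24.0 * 60.0 * 60.0;
    }

    int main(void)
    {
        /* Example: a certificate that expires 10 days from now. */
        time_t not_after = time(NULL) + 10 * 24 * 60 * 60;
        printf("renew now? %s\n", needs_renewal(not_after, 30) ? "yes" : "no");
        return 0;
    }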

Currently, it sounds like the Let’s Encrypt certificate authority will start issuing certificates in mid-2015.

– Rick Wanner MSISE – rwanner at isc dot sans dot edu – http://namedeplume.blogspot.com/ – Twitter:namedeplume (Protected)

(c) SANS Internet Storm Center. https://isc.sans.edu Creative Commons Attribution-Noncommercial 3.0 United States License.

Schneier on Security: Cell Phones Leak Location Information through Power Usage

This post was syndicated from: Schneier on Security and was written by: schneier. Original post: at Schneier on Security

New research on tracking the location of smart phone users by monitoring power consumption:

PowerSpy takes advantage of the fact that a phone’s cellular transmissions use more power to reach a given cell tower the farther it travels from that tower, or when obstacles like buildings or mountains block its signal. That correlation between battery use and variables like environmental conditions and cell tower distance is strong enough that momentary power drains like a phone conversation or the use of another power-hungry app can be filtered out, Michalevsky says.

One of the machine-learning tricks the researchers used to detect that “noise” is a focus on longer-term trends in the phone’s power use rather than those that last just a few seconds or minutes. “A sufficiently long power measurement (several minutes) enables the learning algorithm to ‘see’ through the noise,” the researchers write. “We show that measuring the phone’s aggregate power consumption over time completely reveals the phone’s location and movement.”
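
In other words, the attack boils down to matching the shape of a measured power curve against a library of curves recorded in advance along known routes. The C sketch below is my own simplified illustration of that matching step, not the paper’s algorithm: traces are normalized so that absolute battery drain matters less than shape, and the closest pre-recorded route (by plain Euclidean distance) wins. The real system uses more robust classifiers and handles traces of different lengths.

    #include <float.h>
    #include <math.h>
    #include <stddef.h>

    /* Normalize a power trace to zero mean and unit variance so that the
     * overall power level matters less than its shape over time. */
    static void normalize(double *v, size_t n)
    {
        double mean = 0.0, var = 0.0;
        for (size_t i = 0; i < n; i++) mean += v[i];
        mean /= (double)n;
        for (size_t i = 0; i < n; i++) var += (v[i] - mean) * (v[i] - mean);
        double sd = sqrt(var / (double)n);
        for (size_t i = 0; i < n; i++) v[i] = (v[i] - mean) / (sd > 0.0 ? sd : 1.0);
    }

    /* refs holds nrefs pre-recorded traces of n samples each, laid out
     * back-to-back and already normalized the same way.  Returns the index
     * of the reference route whose trace is closest to the observation. */
    size_t closest_route(double *observed, const double *refs, size_t nrefs, size_t n)
    {
        normalize(observed, n);
        size_t best = 0;
        double best_d = DBL_MAX;
        for (size_t r = 0; r < nrefs; r++) {
            double d = 0.0;
            for (size_t i = 0; i < n; i++) {
                double diff = observed[i] - refs[r * n + i];
                d += diff * diff;
            }
            if (d < best_d) { best_d = d; best = r; }
        }
        return best;
    }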

Even so, PowerSpy has a major limitation: It requires that the snooper pre-measure how a phone’s power use behaves as it travels along defined routes. This means you can’t snoop on a place you or a cohort has never been, as you need to have actually walked or driven along the route your subject’s phone takes in order to draw any location conclusions.

I’m not sure how practical this is, but it’s certainly interesting.

The paper.

Linux How-Tos and Linux Tutorials: How to Use KDE Plasma Desktop Like a Pro

This post was syndicated from: Linux How-Tos and Linux Tutorials and was written by: Swapnil Bhartiya. Original post: at Linux How-Tos and Linux Tutorials

kde desktop layout

KDE is, in my opinion, the most advanced desktop environment around; and I am going to uncover why in this article. There are so many features hidden in plain sight which can expand the functionality of KDE many times over. Let’s get started.

Customize the KDE desktop layout

Change begins at home, they say, and KDE lives that philosophy. The first thing that we see is the default desktop, and in Plasma you have complete control over it. You can change the entire layout of the desktop instead of being stuck with the default one.

Right-click anywhere on the desktop and choose the last option, ‘Default desktop’, from the context menu. It will open the ‘desktop settings’ options; you can play with each of them. I like the ‘Search and Launch’ option. It provides a neat interface with quick access to apps or documents right from the desktop – similar to the home screen of Android or iOS.

You can pin your frequently-used applications or locations for quick access. The most exciting tool there is the ‘search’ box.

When you enter any term in the search box it works more or less like the Dash of Gnome or Unity and offers apps, files, locations related to that term in an overlay – it’s fast, responsive and very neat.

Add a dash search feature

If you don’t want to change the entire desktop layout and yet want a ‘dash search’ feature then Homerun is what you are looking for. You can install it for your distro and then add it to the panel. Click on the ‘cashew’ > add widget and search for ‘Homerun’. Click on the one with the full screen overlay. If you want, you can replace the default menu launcher with Homerun; you can also go ahead and move the panel to the left edge of your monitor to get a Unity-like experience.

kde desktop homerun

Change the default application launcher

You can customize the default application launcher without having to use Homerun or change the default desktop layout. Right-click on the launcher icon and change it to ‘classic menu style’ if you want a simpler menu which resembles the one from lightweight desktop environments like LXDE or the good old Gnome 2.

kde desktop menu

I don’t much like the default launcher and always replace it with the Lancelot menu, which I find faster and more responsive. Lancelot doesn’t come pre-installed so, depending on the distro, you may have to install it. Try it out and you won’t regret it.

Krunner

What if there were a way to open applications or documents without having to open the launcher or menu? Plasma has something up its sleeve. It’s called Krunner, and it can be triggered by hitting Alt+F2.

It works as a ‘jack of all trades’ tool. You can open apps, by typing their names there; you can also ‘kill’ any app with it, too. Just type ‘kill’ followed by the name of the app.

When I said ‘jack of all trades’, I meant it. There are so many things that can be done with it. There is a wrench icon on krunner and when you click on it you will see a long list of plugins that add more features to krunner.

You can uncheck to disable any plugin; as you can see all the additional ‘features’ come through these plugins.

Krunner can do more than just open apps or files and folders; you can perform many tasks from it, such as calculation and conversion. Let’s say, for example, you want to calculate 483 times 8. Use Krunner as a calculator and type ‘483×8=’ and that will give you the answer.

kde desktop krunner

How about converting temperature or distance from one system to another? Type ‘100 cm’ and it will show you the converted numbers; try the same with any currency and it will show the converted rates in different currencies.

Krunner calculator

You can open websites, bookmarks, and also search your Kmail. Just keep one point in mind: Krunner doesn’t autocomplete, so you have to provide the full command. You can play movies or music directly from Krunner – just enter the complete path of the file. You can open any directory just by entering the entire path of that directory.

If you think that’s all, I have a surprise for you; you can ssh into your server through Krunner, or you can open a Samba share – just use the appropriate protocol. Krunner will open the file, directory or location using the default applications.

Krunner has so much to offer that it can’t be covered in one article, so go ahead and start playing and exploring.

Getting started

It often happens that I have to shut down or restart my system while there is a lot running on it: dozens of applications, windows and websites open, and I don’t want to lose what I am working on. What do I do? In Plasma you can very easily save the entire session, and when you boot into your system again all of that work will open as it was before.

It can be accessed through System Settings > Startup and Shutdown. The option called ‘Session Management’ at the bottom allows you to save the current session or change it to ‘start with an empty session’.

The Autostart option on the same window makes it easier for users to manage which applications or scripts start at boot or login. If you notice some programs configured to start at system boot are slowing down your system, you can disable them. At the same time, if there are applications that you want to start at boot time, just add them – one such program is Transmission, which I use to download torrents for Linux distros.

Ksnapshot

As a writer who extensively writes about Linux, I need screenshots for my stories. I am not the only user who needs screenshots, and Plasma once again beats everything out there. KDE’s screen-capture tool, called Ksnapshot, is one of the best screenshot tools around.

Ksnapshot offers the flexibility of choosing the capture mode, including ‘full screen’, window under the cursor, a rectangular region, a freehand region and a section of a window. The last option is interesting as you can capture a screenshot of a particular section of a window.

kde desktop ksnapshot

Unlike Gnome’s screenshot tool, Ksnapshot remembers the mode you chose last time, whereas Gnome keeps forgetting and going back to the default one. Ksnapshot also allows a user to give a name to screenshots, and if you are recording a series of screenshots – to show some steps of an application – it saves them with your chosen name in sequence. With Gnome it defaults to the same ‘ScreenShot ….’ name. Plasma, once again, gives complete control to the user.

A multi-monitor bliss

Plasma is bliss for those with multiple monitor setups. Plasma gives each monitor a personality of its own – which is quite limited on other DEs. You can give each monitor a different wallpaper (you can actually set different wallpapers for virtual desktops and activities as well). Each monitor can have its own desktop layout and panels. The widgets on these panels and desktops can be configured differently, which means that if you work with clients who are in different time zones, you can set the clock on each desktop to that particular time zone.

No other DE, in my knowledge, is capable of doing that.

Panels and widgets

Two core components of the Plasma desktop experience are panels and widgets which enhance the user experience.

On a Plasma desktop you can move the panel wherever you want – bottom, top, left or right. You can have more than one panel. I often place the panel on the left edge of the screen – which makes better use of wide screen monitors – and then add the Homerun widget to get an Ubuntu Unity-like experience.

To access the extra features of the panel, click on the cashew icon on the right hand side of each panel and then configure it. I don’t really know why they use a ‘cashew’; a gear icon might be more appropriate so a user gets a hint of what it does.

If you want to add more panels, just right-click on the empty desktop and choose ‘add panel’ from the context menu.

KDE’s widgets take the customization of the desktop to the next level. These widgets allow you to access information quickly on the desktop, as well as on the panel. The widgets embedded on the panel are not mere icons that open an app – they work like the widgets you have seen on Android.

Widgets can also be added to the desktop – the way you do with Android. Depending on the distro, a Plasma desktop comes with a set of widgets, but you can always install more widgets which are being developed by the community. I installed a couple of widgets such as Play Control (which allows me to control the music player), RSS reader, Weather, etc. Go ahead, explore and you will find something new.

kde desktop widget

Dolphin, smarter than others

Dolphin is one of the many gems that KDE has; it is by far the best file manager and can perform tasks that others can’t.

The basic functionality of Dolphin can be further enhanced by adding new services. And any third party developer can create a new package for Dolphin to integrate a service, such as Dropbox, with the Plasma desktop.

One area where I cringe whenever I use other DEs is their inability to bulk or mass rename files. I am an avid photographer and end up with hundreds of images with names like DCS323.NEF on my PC. That makes it extremely hard to search and find the right images when you need them. I wish there were some standard ‘tagging’ for images which could be used across platforms; sadly, there is none. So giving proper names to images is the best, time-tested solution.

I was looking for Linus Torvalds images from LinuxCon 2014 and the file names helped. In Dolphin I can easily select multiple images and change their names, something that can’t be done on Gnome or even Mac OS X. To rename files, just select them and hit F2; that will open a file rename dialog.

Dolphin is also capable of showing thumbnails of different file types including images, videos, and text – which improves the overall desktop environment.

Some mysterious activities seen on Plasma desktop

Activities are one of the most mysterious and lesser known features of the Plasma desktop. I must admit that even I don’t make full use of Activities.

So what are these activities? The short, and not so accurate, answer is that they are more or less an extension of ‘virtual desktops’.

However, each activity can have its own virtual desktop. Now what does that mean? I will try to keep it as simple as possible. Let’s say I am writing my novel in Sublime Text on one monitor, with research material open on the second. This pristine setup gets disturbed every time I open another window or application to check my mail or chat with a colleague. That’s where virtual desktops, aka workspaces, come into play on a Linux desktop. Activities take that experience to the next level.

I can create a new activity for my novel writing and configure all the monitors and workspaces as I want – opening the apps where I need them. The second activity can be set up for my journalism work, with a text editor, all my RSS feeds and email configured. And so on and so forth: I can create different activities for different kinds of work and easily switch between them without disturbing the others.

kde desktop activities

Now, where it gets even more interesting is that each activity can have its own panels, widgets and background wallpaper. The name and icon of each Activity can be changed to further personalize them.

Conclusion

KDE Plasma remains one of the best open source technologies that Linux users can enjoy. Plasma allows us to exploit the full potential of our PCs and shows that we don’t have to make compromises with what we want to do on them.

You don’t have to compromise if you are using a Linux desktop. That’s the whole point!

lcamtuf's blog: Reflecting on visibility

This post was syndicated from: lcamtuf's blog and was written by: Michal Zalewski. Original post: at lcamtuf's blog

In a recent Twitter conversation, Julien Vanegue asked me why I don’t put more thought into formally documenting and disseminating much of my security work – say, in the form of journal-published research papers, conference presentations, or similar rigidly-structured, long-lasting docs. It’s an interesting question, and one that I struggled to answer in 140 characters or less.

In the most basic sense, I think I simply find thrill in trying to solve practical problems; I also enjoy describing my thought process and comparing notes with others as I go. The security community is very unique in its openness, giving us many ways to accomplish that goal and to stay in touch with practitioners who are most likely to benefit from our work (or can critique it well). Because of this, I always felt that communicating with my peers through conferences, research journals, or press releases would be one of the least efficient ways of actually making contributions that stick.

Of course, such venues do offer a claim to immortality – be it in the form of seeing your name in the mainstream press, or witnessing a steady stream of citations for decades to come. When I was making my first steps in the field, I used to enjoy this sort of attention and I actively sought it to some extent (please don’t dig up the videos). But as I have grown older, it started to ring a bit hollow.

In my view, the progression of computer science in general, and infosec in particular, is a very incremental and collaborative process. We sometimes celebrate the engineers behind individual milestones – be it based on their skill, on their charisma, or on pure happenstance. But in doing so, we often do disservice to those who carried the torch and developed the technologies into what they are today. When reciting the names of the fathers of the modern Internet, few readers will mention the masterminds behind such complex engineering feats as BGP, DNS, TLS, SSH, TCP performance extensions, JPEG, or CSS.

Compared to computer science, becoming a household name in information security is not hard; perhaps I’d still have a shot at it. But in retrospect, I would not want to be known forever only as the guy who developed Fenris or did some obscure work on TCP fifteen years ago. In fact, I sort of find pride in being able to look at my research from five years back and recognize the subtle marks it left on other people’s projects – but also see all the painful mistakes I have made, knowing that I can do much better today.

And yup, my current work will be once again unimportant 10 or 15 years from now. Hopefully, I will be working on something more interesting by then.

SANS Internet Storm Center, InfoCON: green: A Different Kind of Equation, (Tue, Feb 17th)

This post was syndicated from: SANS Internet Storm Center, InfoCON: green and was written by: SANS Internet Storm Center, InfoCON: green. Original post: at SANS Internet Storm Center, InfoCON: green

Both the mainstream media and the security media are abuzz with Kaspersky’s disclosure of its research on the Equation group and the associated malware. You can find the original blog post here: http://www.kaspersky.com/about/news/virus/2015/equation-group-the-crown-creator-of-cyber-espionage

But if you want some real detail, check out the Q&A: http://securelist.com/files/2015/02/Equation_group_questions_and_answers.pdf

It has way more detail, and it is much more sobering to see that this family of malware goes all the way back to 2001, and includes code to map disconnected networks (using USB keys as a command-and-control channel, as Stuxnet did), as well as the disk-firmware facet that’s everyone’s headline today.

Some Indicators of Compromise (IoCs) – something we can use to identify whether our organizations or clients are affected – are included in the PDF. The DNS IoCs in particular are easy to use, either as checks against logs or as black-hole entries.
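
For teams that want to act on those DNS IoCs, the check can be scripted in a few lines. The sketch below is a generic illustration, not something from the report: the file names and the log column layout are assumptions, and the IoC list would have to be exported from the PDF by hand.

    import csv

    # Hypothetical inputs: IOC_FILE holds one IoC domain per line (exported from
    # the report); DNS_LOG is a CSV export of DNS queries.
    IOC_FILE = "equation_dns_iocs.txt"
    DNS_LOG = "dns_queries.csv"          # assumed columns: timestamp, client_ip, qname

    def load_iocs(path):
        with open(path) as fh:
            return {line.strip().lower().rstrip(".") for line in fh if line.strip()}

    def is_match(qname, iocs):
        """True if qname equals an IoC domain or is a subdomain of one."""
        qname = qname.strip().lower().rstrip(".")
        return any(qname == d or qname.endswith("." + d) for d in iocs)

    if __name__ == "__main__":
        iocs = load_iocs(IOC_FILE)
        with open(DNS_LOG, newline="") as fh:
            for timestamp, client_ip, qname in csv.reader(fh):
                if is_match(qname, iocs):
                    print(f"{timestamp} {client_ip} queried {qname}")

The same domain list can also feed black-hole entries in an internal resolver, which is the other use mentioned above.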

===============
Rob VandenBrink
Metafore

(c) SANS Internet Storm Center. https://isc.sans.edu Creative Commons Attribution-Noncommercial 3.0 United States License.

Krebs on Security: The Great Bank Heist, or Death by 1,000 Cuts?

This post was syndicated from: Krebs on Security and was written by: BrianKrebs. Original post: at Krebs on Security

I received a number of media requests and emails from readers over the weekend to comment on a front-page New York Times story about an organized gang of cybercriminals pulling off “one of the largest bank heists ever.” Turns out, I reported on this gang’s activities in December 2014, although my story ran minus many of the superlatives in the Times piece.

The Times’ story, “Bank Hackers Steal Millions Via Malware,” looks at the activities of an Eastern European cybercrime group that Russian security firm Kaspersky Lab calls the “Carbanak” gang. According to Kaspersky, this group deployed malware via phishing scams to get inside of computers at more than 100 banks and steal upwards of USD $300 million — possibly as high as USD $1 billion.

Image: Kaspersky

Such jaw-dropping numbers were missing from a story I wrote in December 2014 about this same outfit, Gang Hacked ATMs From Inside Banks. That piece was based on similar research published (PDF) jointly by Dutch security firm Fox-IT and by Group-IB, a Russian computer forensics company. Fox-IT and Group-IB called the crime group “Anunak,” and described how the crooks sent malware-laced Microsoft Office attachments in spear phishing attacks to compromise specific users inside targeted banks.

“Most cybercrime targets consumers and businesses, stealing account information such as passwords and other data that lets thieves cash out hijacked bank accounts, as well as credit and debit cards,” my December 2014 story observed. “But this gang specializes in hacking into banks directly, and then working out ingenious ways to funnel cash directly from the financial institution itself.”

I also noted that a source told me this group of hackers is thought to be the same criminal gang responsible for several credit and debit card breaches at major retailers across the United States, including women’s clothier Bebe Stores Inc., western wear store Sheplers, and office supply store Staples Inc.

Andy Chandler, Fox-IT’s general manager and senior vice president, said the group profiled in its December report and in the Kaspersky study are the same.

“Anunak or Carbanak are the same,” Chandler said. “We continue to track this organization but there are no major revelations since December. So far in 2015, the financial industry have been kept busy by other more creative criminal groups,” such as those responsible for spreading the Dyre and Dridex banking malware, he said.

ANALYSIS

Certainly, learning that this group stole possibly close to USD $1 billion advances the story, even if the Kaspersky report is a couple of months late, or generous to the attackers by a few hundred million bucks. The Kaspersky report also references (but doesn’t name) victim banks in the United States, although the New York Times story notes that the majority of the targeted financial institutions were in Russia. The Group-IB/Fox-IT report did not mention US banks as victims.

Two readers at different financial institutions asked whether The Times was accurate in stating that employees at victim banks had their computers infected merely after opening booby-trapped emails. “The cybercriminals sent their victims infected emails — a news clip or message that appeared to come from a colleague — as bait,” The  Times’ story reads. “When the bank employees clicked on the email, they inadvertently downloaded malicious code.”

As the Kaspersky report (and my earlier reporting) notes, the attackers leveraged vulnerabilities in Microsoft Office products for which Microsoft had already produced patches many months prior — targeting organizations that had fallen behind on patching. Victims had to open booby trapped attachments within spear phishing emails.

“Despite increased awareness of cybercrime within the financial services sector, it appears that spear phishing attacks and old exploits (for which patches have been disseminated) remain effective against larger companies,” Kaspersky’s report concludes. “Attackers always use this minimal effort approach in order to bypass a victim’s defenses.”

Minimal effort. That’s an interesting choice of words to describe the activities of crime groups like this one. The Kaspersky report is titled “The Great Bank Robbery,” but the work of this gang could probably be more accurately described as “Death by 1,000 cuts.”

Why should crime groups like this one expend more than minimal effort? After all, there are thousands of financial institutions here in the United States alone, and it’s a fair bet that on any given day a decent number of those banks are months behind on installing security updates. They’re mostly running IT infrastructure entirely based on Microsoft Windows, and probably letting employees browse the Web with older versions of Internet Explorer from the same computers used to initiate wire transfers (I witnessed this firsthand just last week at the local branch of a major U.S. bank). It’s worth noting that most of the crime gang’s infrastructure appears to be Linux-based.

This isn’t intended as a dig at Microsoft, but to illustrate a point: Most organizations — even many financial institutions — aren’t set up to defeat skilled attackers; their network security is built around ease-of-use, compliance, and/or defeating auditors and regulators. Organizations architected around security (particularly banks) are expecting these sorts of attacks, assuming that attackers are going to get in, and focusing their non-compliance efforts on breach response. This “security maturity” graphic nicely illustrates the gap between these two types of organizations.

As I wrote in my December story, the attacks from the Anunak/Carbanak gang showcase once again how important it is for organizations to refocus more resources away from preventing intrusions toward detecting intrusions as quickly as possible and stopping the bleeding. According to the Fox-IT/Group-IB report, the average time from the moment this group breaks into bank internal networks and the successful theft of cash is a whopping 42 days.

Kaspersky’s report notes a similar time range: “There is evidence indicating that in most cases the network was compromised for between two to four months, and that many hundreds of computers within a single victim organization may have been infected.” Both the Kaspersky and Group-IB/Fox-IT reports contain pages and pages of threat indicators, including digital signatures and network infrastructure used by this group.

So those are some takeaways for financial institutions, but what about banking customers? Sadly, these developments should serve as yet another wake-up call for small to mid-sized businesses based in the U.S. and banking online. While consumers in the United States are shielded by law against unauthorized online banking transactions, businesses have no such protection.

Russian hacking gangs like this one have stolen hundreds of millions of dollars from small- to mid-sized businesses in the U.S. and Europe over the past five years (for dozens of examples, see my series, Target: Small Businesses). In the vast majority of those cyberheists, the malware that thieves used to empty business accounts was on the victim organization’s computers — not the bank’s.

Now, add to that risk the threat of the business’s bank getting compromised from within and the inability of the institution to detect the breach for months on end.

“Advanced control and fraud detection systems have been used for years by the financial services industry,” the Kaspersky report observed. “However, these focus on fraudulent transactions within customer accounts. The Carbanak attackers bypassed these protections, by for example, using the industry-wide funds transfer (the SWIFT network), updating balances of account holders and using disbursement mechanisms (the ATM network). In neither of these cases did the attackers exploit a vulnerability within the service. Instead, they studied the victim’s internal procedures and pinpointed who they should impersonate locally in order to process fraudulent transactions through the aforementioned services. It is clear that the attackers were very familiar with financial services software and networks.”

Do you run your own business and bank online but are unwilling to place all of your trust in your bank’s security? Consider adopting some of the advice I laid out in Online Banking Best Practices for Businesses and Banking on a Live CD.

lcamtuf's blog: Bi-level TIFFs and the tale of the unexpectedly early patch

This post was syndicated from: lcamtuf's blog and was written by: Michal Zalewski. Original post: at lcamtuf's blog

Today’s release of MS15-016 (CVE-2015-0061) fixes another of the series of browser memory disclosure bugs found with afl-fuzz – this time, related to the handling of bi-level (1-bpp) TIFFs in Internet Explorer (yup, MSIE displays TIFFs!). You can check out a simple proof-of-concept here, or simply enjoy this screenshot of eight subsequent renderings of the same TIFF file:

The vulnerability is conceptually similar to other previously-identified problems with GIF and JPEG handling in popular browsers (example 1, example 2), with the SOS handling bug in libjpeg, or the DHT bug in libjpeg-turbo (details here) – so I will try not to repeat the same points in this post.
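
If you want to build a similar corpus yourself, a bi-level seed file is trivial to generate. The snippet below is a minimal sketch assuming the Pillow library; it is my own illustration, not the corpus or tooling used in the original research.

    from PIL import Image  # assumes the Pillow package is installed
    import random

    # Mode "1" is Pillow's 1-bit-per-pixel (bi-level) mode; the result is a tiny,
    # valid TIFF that can seed a fuzzer's input corpus.
    img = Image.new("1", (32, 32), color=0)
    for _ in range(64):                      # sprinkle a few set pixels
        img.putpixel((random.randrange(32), random.randrange(32)), 1)
    img.save("seed_bilevel.tif")
    print("wrote seed_bilevel.tif")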

Instead, I wanted to take note of what really sets this bug apart: Microsoft has addressed it in precisely 60 days, counting from my initial e-mail to the availability of a patch! This struck me as a big deal: although vulnerability research is not my full-time job, I do have a decent sample size – and I don’t think I have seen this happen for any of the few dozen MSIE bugs that I reported to MSRC over the past few years. The average patch time always seemed to be closer to 6+ months – coupled with the somewhat odd practice of withholding attribution in security bulletins and engaging in seemingly punitive PR outreach if the reporter ever went public before that.

I am very excited and hopeful that rapid patching is the new norm – and huge thanks to MSRC folks if so :-)

Krebs on Security: Anthem Breach May Have Started in April 2014

This post was syndicated from: Krebs on Security and was written by: BrianKrebs. Original post: at Krebs on Security

Analysis of open source information on the cybercriminal infrastructure likely used to siphon 80 million Social Security numbers and other sensitive data from health insurance giant Anthem suggests the attackers may have first gained a foothold in April 2014, nine months before the company says it discovered the intrusion.

The Wall Street Journal reported last week that security experts involved in the ongoing forensics investigation into the breach say the servers and attack tools used in the attack on Anthem bear the hallmark of a state-sponsored Chinese cyber espionage group known by a number of names, including “Deep Panda,” “Axiom,” “Group 72,” and the “Shell_Crew,” to name but a few.

Deep Panda is the name given to this group by security firm CrowdStrike. In November 2014, Crowdstrike published a snapshot of a graphic showing the malware and malicious Internet servers used in what security experts at PriceWaterhouseCoopers dubbed the ScanBox Framework, a suite of tools that have been used to launch a number of cyber espionage attacks.

A Maltego transform published by CrowdStrike. The graphic is intended to illustrate some tools and Internet servers thought to be closely tied to a Chinese cyber espionage group that CrowdStrike calls “Deep Panda.”

Crowdstrike’s snapshot (produced with the visualization tool Maltego) lists many of the tools the company has come to associate with activity linked to Deep Panda, including a password stealing Trojan horse program called Derusbi, and an Internet address — 198[dot]200[dot]45[dot]112.

CrowdStrike’s image curiously redacts the resource tied to that Internet address (note the black box in the image above), but a variety of open source records indicate that this particular address was until very recently the home for a very interesting domain: we11point.com. The third and fourth characters in that domain name are the numeral one, but it appears that whoever registered the domain was attempting to make it look like “Wellpoint,” the former name of Anthem before the company changed its corporate name in late 2014.

We11point[dot]com was registered on April 21, 2014 to a bulk domain registration service in China. Eight minutes later, someone changed the site’s registration records to remove any trace of a connection to China.

Intrigued by the fake Wellpoint domains, Rich Barger, chief information officer for Arlington, Va. security firm ThreatConnect Inc., dug deeper into so-called “passive DNS” records — historic records of the mapping between numeric Internet addresses and domain names. That digging revealed a host of other subdomains tied to the suspicious we11point[dot]com site. In the process, Barger discovered that these subdomains — including myhr.we11point[dot]com and hrsolutions.we11point[dot]com — mimicked components of Wellpoint’s actual network as it existed in April 2014.

“We were able to verify that the evil we11point infrastructure is constructed to masquerade as legitimate Wellpoint infrastructure,” Barger said.
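
For readers unfamiliar with the technique, the pivot Barger describes boils down to asking a passive DNS database for everything it has ever seen under a domain. The sketch below is purely illustrative: the endpoint, parameters, and response fields are hypothetical stand-ins, since real providers (Farsight DNSDB, for instance) each define their own API and require their own credentials.

    import requests  # assumes the requests package is installed

    # Hypothetical passive DNS service; swap in a real provider's API.
    PASSIVE_DNS_URL = "https://passivedns.example/api/lookup"

    def historical_records(pattern, api_key):
        """Return (rrname, rdata, first_seen) tuples observed for a domain pattern."""
        resp = requests.get(
            PASSIVE_DNS_URL,
            params={"query": pattern},
            headers={"X-API-Key": api_key},
            timeout=30,
        )
        resp.raise_for_status()
        return [(r["rrname"], r["rdata"], r["time_first"]) for r in resp.json()["records"]]

    if __name__ == "__main__":
        # The kind of pivot described above: list everything seen under the domain.
        for name, data, first_seen in historical_records("*.we11point.com", api_key="..."):
            print(first_seen, name, "->", data)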

Another fishy subdomain that Barger discovered was extcitrix.we11point[dot]com. The “citrix” portion of that domain likely refers to Citrix, a software tool that many large corporations use to let employees remotely access internal networks over a virtual private network (VPN).

Interestingly, that extcitrix.we11point[dot]com domain, first put online on April 22, 2014, was referenced in a malware scan from a malicious file that someone uploaded to malware scanning service Virustotal.com. According to the writeup on that malware, it appears to be a backdoor program masquerading as Citrix VPN software. The malware is digitally signed with a certificate issued to an organization called DTOPTOOLZ Co. According to CrowdStrike and other security firms, that digital signature is the calling card of the Deep Panda Chinese espionage group.

CONNECTIONS TO OTHER VICTIMS?

As noted in a story in HealthITSecurity.com, Anthem has been sharing information about the attack with the Health Information Trust Alliance (HITRUST) and the National Health Information Sharing and Analysis Center (NH-ISAC), industry groups whose mission is to disseminate information about cyber threats to the healthcare industry.

A news alert published by HITRUST last week notes that Anthem has been sharing so-called “indicators of compromise” (IOCs) — Internet addresses, malware signatures and other information associated with the breach. “It was quickly determined that the IOCs were not found by other organizations across the industry and this attack was targeted a specific organization,” HITRUST wrote in its alert. “Upon further investigation and analysis it is believed to be a targeted advanced persistent threat (APT) actor. With that information, HITRUST determined it was not necessary to issue a broad industry alert.”

An alert released by the Health Information Trust Alliance (HITRUST) about the APT attack on Anthem.

But a variety of data points suggest that the same infrastructure used to attack Anthem may have been leveraged against a Reston, Va.-based information technology firm that primarily serves the Department of Defense.

A writeup on a piece of malware that Symantec calls “Mivast” was produced on Feb. 6, 2015. It describes a backdoor Trojan that Symantec says may call out to one of a half-dozen domains, including the aforementioned extcitrix.we11point[dot]com domain and another — sharepoint-vaeit.com. Other domains on the same server include ssl-vaeit.com, and wiki-vaeit.com. Once again, it appears that we have a malware sample calling home to a domain designed to mimic the internal network of an organization — most likely VAE Inc. (whose legitimate domain is vaeit.com).

Barger and his team at ThreatConnect discovered that the sharepoint-vaeit.com domain also was tied to a malware sample made to look like it was VPN software made by networking giant Juniper. That malware was created in May 2014, and was also signed with the DTOPTOOLZ Co. digital certificate that CrowdStrike has tied to Deep Panda.

[Image: DTOPTOOLZ Co. digital signature]

In response to an inquiry from KrebsOnSecurity, VAE said it detected a targeted phishing attack in May 2014 that used malware which phoned home to those domains, but the company said it was not aware of any successful compromise of its users.

In any case, the Symantec writeup on Mivast also says the malware tries to contact the Internet address 192[dot]199[dot]254[dot]126, which resolved to just one Web domain: topsec2014[dot]com. That domain was registered on May 6, 2014 to a bulk domain reseller who immediately changed the registration records and assigned the domain to the email address topsec_2014@163.com. That address appears to be the personal email of one Song Yubo, a professor with the Information Security Research Center at the Southeast University in Nanjing, Jiangsu, China.

Yubo and his university were named in a March 2012 report, “Occupying the Information High Ground: Chinese Capabilities for Computer Network Operations and Cyber Espionage,” (PDF) produced by U.S. defense contractor Northrop Grumman Corp. for the U.S.-China Economic and Security Review Commission. According to the report, Yubo’s center is one of a handful of civilian universities in China that receive funding from the Chinese government to conduct sensitive research and development with information security and information warfare applications.

ANALYSIS

Of course, it could well be that this is all a strange coincidence, and/or that the basic information on Deep Panda is flawed. But that seems unlikely given the number of connections and patterns emerging in just this small data set.

It’s remarkable that the security industry so seldom learns from past mistakes. For example, one of the more confounding and long-running problems in the field of malware detection and prevention is the proliferation of varying names for the same threat. We’re seeing this once again with the nicknames assigned to various cyberespionage groups (see the second paragraph of this story for examples).

It’s also incredible that so many companies could see the outlines of a threat against such a huge target, and that it took until just this past week for the target to become aware of it. For its part, ThreatConnect tweeted about its findings back in November 2014, and shared the information out to its user base.

CrowdStrike declined to confirm whether the resource blanked out in the above pictured graphic from November 2014 was in fact we11point[dot]com.

“What I can tell you is that this domain is a Deep Panda domain, and that we always try to alert victims whenever we discover them,” said Dmitri Alperovitch, co-founder of CrowdStrike.

Also, it’s myopic for an industry information sharing and analysis center (ISAC) to decide not to share indicators of compromise with other industry ISACs, let alone its own members. This should not be a siloed effort. Somehow, we need to figure out a better — and more timely — way to share threat intelligence and information across industries.

Perhaps the answer is crowdsourcing threat intelligence, or maybe it’s something we haven’t thought of yet. But one thing is clear: there is a yawning gap between the time it takes for an adversary to compromise a target and the length of time that typically passes before the victim figures out they’ve been had.

The most staggering and telling statistic included in Verizon’s 2014 Data Breach Investigations Report (well worth a read) is the graphic showing the difference between the “time to compromise” and the “time to discovery.” TL;DR: That gap is not improving, but instead is widening.

[Graphic: Verizon 2014 DBIR, time to compromise vs. time to discovery]

Then again, maybe this breach at Anthem isn’t as bad as it seems. After all, if the above data and pundits are to be believed, the attackers were likely looking for a needle in a haystack — searching for data on a few individuals that might give Chinese spies a way to better siphon military technology or infiltrate some U.S. defense program.

Perhaps, as Barger wryly observed, the Anthem breach was little more than the product of a class assignment — albeit an expensive and aggravating one for Anthem and its 80 million affected members. In May 2014, the aforementioned Southeast University Professor Song Yubo posted a “Talent Cup” tournament challenge to his information security students.

“Just as the OSS [Office of Strategic Services] and CIA used professors to recruit spies, it could be that this was all just a class project,” Barger mused.

lcamtuf's blog: Symbolic execution in vuln research

This post was syndicated from: lcamtuf's blog and was written by: Michal Zalewski. Original post: at lcamtuf's blog

There is no serious disagreement that symbolic execution has a remarkable potential for programmatically detecting broad classes of security vulnerabilities in modern software. Fuzzing, in comparison, is an extremely crude tool: it’s the banging-two-rocks-together way of doing business, as contrasted with brain surgery.
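
To make the contrast concrete, here is a toy sketch using the z3 Python bindings (my choice of illustration, not a tool discussed in this post). A random fuzzer would hit the guarded branch below with probability around 2^-32 per try; a constraint solver derives a triggering input in a single query.

    from z3 import BitVec, Solver, sat  # assumes the z3-solver package is installed

    # Toy "program": the interesting branch is guarded by a narrow arithmetic
    # condition on a 32-bit input.
    x = BitVec("x", 32)
    s = Solver()
    s.add(x * 7 + 3 == 0xDEADBEEF)   # path condition for the "buggy" branch

    if s.check() == sat:
        print("input reaching the branch:", hex(s.model()[x].as_long()))
    else:
        print("branch unreachable under these constraints")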

Because of this, it comes as no surprise that for the past decade or so, the topic of symbolic execution and related techniques has been the mainstay of almost every single self-respecting security conference around the globe. The tone of such presentations is often lofty: the slides and research papers are frequently accompanied by claims of extraordinary results and the proclamations of the imminent demise of less sophisticated tools.

Yet, despite the crippling and obvious limitations of fuzzing and the virtues of symbolic execution, there is one jarring discord: I’m fairly certain that probably around 70% of all remote code execution vulnerabilities disclosed in the past few years trace back to fairly “dumb” fuzzing tools, with the pattern showing little change over time. The remaining 30% is attributable almost exclusively to manual work – be it systematic code reviews, or just aimlessly poking the application in hopes of seeing it come apart. When you dig through public bug trackers, vendor advisories, and CVE assignments, the mark left by symbolic execution can be seen only with a magnifying glass.

This is an odd discrepancy, and one that is sometimes blamed on practitioners being backward, stubborn, and ignorant. This may be true, but only to a very limited extent; ultimately, most geeks are quick to embrace the tools that serve them well. I think that the disconnect has its roots elsewhere:

  1. The code behind many of the most-cited, seminal publications on security-themed symbolic execution remains non-public; this is particularly true for Mayhem and SAGE. Implementation secrecy is fairly atypical in the security community, is usually viewed with distrust, and makes it difficult to independently evaluate, replicate, or build on top of the published results.

  2. The research often fails to fully acknowledge the limitations of the underlying methods – while seemingly being designed to work around these flaws. For example, the famed Mayhem experiment helped identify thousands of bugs, but most of them seemed to be remarkably trivial and affected only very obscure, seldom-used software packages with no significance to security. It is likely that the framework struggled with more practical issues in higher-value targets – a prospect that, especially if not addressed head-on, can lead to cynical responses and discourage further research.

  3. Any published comparisons to more established vulnerability-hunting techniques are almost always retrospective; for example, after the discovery of Heartbleed, several teams have claimed that their tools would have found the bug. But analyses that look at ways to reach an already-known fault condition are very susceptible to cognitive bias. Perhaps more importantly, it is always tempting to ask why the tools are not tasked with producing a steady stream of similarly high-impact, headline-grabbing bugs.

The uses of symbolic execution, concolic execution, static analysis, and other emerging technologies to spot substantial vulnerabilities in complex, unstructured, and non-annotated code are still in their infancy. The techniques suffer from many performance trade-offs and failure modes, and while there is no doubt that they will shape the future of infosec, thoughtful introspection will probably get us there sooner than bold claims with little or no follow-through. We need to move toward open-source frameworks, verifiable results, and solutions that work effortlessly and reliably for everyone, against almost any target. That’s the domain where the traditional tools truly shine, and that’s why they scale so well.

Ultimately, the key to winning the hearts and minds of practitioners is very simple: you need to show them how the proposed approach finds new, interesting bugs in the software they care about.

SANS Internet Storm Center, InfoCON: green: Another Network Forensic Tool for the Toolbox – Dshell, (Tue, Feb 3rd)

This post was syndicated from: SANS Internet Storm Center, InfoCON: green and was written by: SANS Internet Storm Center, InfoCON: green. Original post: at SANS Internet Storm Center, InfoCON: green

This is a guest diary written by Mr. William Glodek, Chief, Network Security Branch, U.S. Army Research Laboratory.

As a network analysis practitioner, I analyze multiple gigabytes of pcap data across multiple files on a daily basis. I have encountered many challenges where the standard tools (tcpdump, tcpflow, Wireshark/tshark) were either not flexible enough or could not be prototyped quickly enough to do specialized analyses in a timely manner. Either the analysis could not be done without recompiling the tool itself, or the plugin system was difficult to work with via command-line tools.

Dshell, a Python-based network forensic analysis framework developed by the U.S. Army Research Laboratory, can help make that job a little easier [1]. The framework handles stream reassembly of both IPv4 and IPv6 network traffic and also includes geolocation and IP-to-ASN mapping data for each connection. It also enables the development of network analysis plug-ins that help analysts understand network traffic and present results in a concise, useful manner by letting users parse and present data of interest from multiple levels of the network stack. Plug-in development can range from tweaking an existing decoder to extract slightly different information from an existing protocol, to writing a new parser for a completely novel protocol. Here are two scenarios where Dshell has decreased the time required to identify and respond to network forensic challenges.

  1. Malware authors will frequently embed a domain name in a piece of malware for improved command and control or resiliency to security countermeasures such as IP blocking. When the attackers have completed their objective for the day, they minimize the network activity of the malware by updating the DNS record for the hostile domain to point to a non-Internet-routable IP address (e.g. 127.0.0.1). The reservedips decoder finds exactly this pattern:

       Dshell decode -d reservedips *.pcap

    The reservedips module will find all of the DNS request/response pairs for domains that resolve to a non-routable IP address, and display each pair on a single line. Because each result is on a single line, I can use other command-line utilities like awk or grep to filter the results further. Dshell can also present the output in CSV format, which can be imported into many security information and event management (SIEM) tools or other analytic platforms. (A standalone Python sketch of the same reserved-IP check appears after these scenarios.)

  2. A drive-by-download attack is successful and a malicious executable is downloaded [2]. I need to find the network flow of the download of the malicious executable and extract the executable from the network traffic.
    Using the web module, I can inspect all the web traffic contained in the sample file. In the example below, a request for xzz1.exe with a successful server response is likely the malicious file.

    I can then extract the executable from the network traffic by using the rip-http module. The rip-http module will reassemble the IP/TCP/HTTP stream, identify the filename being requested, strip the HTTP headers, and write the data to disk with the appropriate filename.

    [Screenshot: Dshell extracting the executable stream from the pcap]

    There are additional modules within the Dshell framework that solve other challenges faced in network forensics. The ability to rapidly develop and share analytical modules is a core strength of Dshell. If you are interested in using or contributing to Dshell, please visit the project at https://github.com/USArmyResearchLab/Dshell.
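
As promised above, here is a standalone sketch of the reserved-IP check. It is not Dshell code: it approximates what the reservedips decoder does using scapy (an assumption on my part), handles only A records, and prints one match per line so the output stays grep- and awk-friendly.

    from scapy.all import rdpcap, DNS, DNSRR  # assumes scapy is installed
    import glob
    import ipaddress

    def reserved_answers(pcap_path):
        """Yield (queried name, answer IP) pairs where the answer is non-routable."""
        for pkt in rdpcap(pcap_path):
            if not pkt.haslayer(DNS):
                continue
            rr = pkt[DNS].an                 # first answer record, if any
            while isinstance(rr, DNSRR):
                if rr.type == 1:             # A record
                    try:
                        ip = ipaddress.ip_address(rr.rdata)
                    except ValueError:
                        ip = None
                    if ip is not None and (ip.is_loopback or ip.is_private or ip.is_reserved):
                        name = rr.rrname
                        if isinstance(name, bytes):
                            name = name.decode(errors="replace")
                        yield name.rstrip("."), str(ip)
                rr = rr.payload              # answer records are chained

    if __name__ == "__main__":
        for path in glob.glob("*.pcap"):
            for name, ip in reserved_answers(path):
                print(f"{path} {name} -> {ip}")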

    [1] Dshell https://github.com/USArmyResearchLab/Dshell
    [2] http://malware-traffic-analysis.net/2015/01/03/index.html

    (c) SANS Internet Storm Center. https://isc.sans.edu Creative Commons Attribution-Noncommercial 3.0 United States License.

TorrentFreak: Hollywood’s Release Delays Breed Pirates

This post was syndicated from: TorrentFreak and was written by: Ernesto. Original post: at TorrentFreak

Hollywood has a message to all those pirates who keep making excuses to download and stream films illegally.

“You have no excuse.”

The major movie studios have done enough to make their content legally available, launching thousands of convenient movie services worldwide, they claim.

“We need to bust the myth that legal content is unavailable. Creative industries are tirelessly experimenting with new business models that deliver films, books, music, TV programs, newspapers, games and other creative works to consumers,” Stan McCoy noted on the MPAA blog this week.

“In Europe, there are over 3,000 on-demand audio-visual services available to European citizens,” he adds.

So is the MPA right? Is “availability” an imaginary problem that pirates use as an excuse not to pay?

We decided to investigate the issue by looking at the online availability of the ten most downloaded films of last week. Since the MPAA’s blog post talked about Europe and the UK we decided to use Findanyfilm.com which focuses on UK content. The results of our small survey speak for themselves.

Of the ten most pirated movies, only Gone Girl is available to buy or rent online. A pretty weak result, especially since it’s still missing from the most popular video subscription service, Netflix.

Ranking  Movie                                       Available to buy or rent online?
1        Interstellar                                NO
2        American Sniper                             NO
3        Taken 3                                     NO
4        The Hobbit: The Battle of the Five Armies   NO
5        John Wick                                   NO
6        Into The Woods                              NO
7        Fury                                        NO
8        Gone Girl                                   YES (rent/buy)
9        American Heist                              NO
10       The Judge                                   NO

Yes, the results above are heavily skewed because they only include movies that were released recently. Looking up films from 2011 will result in a much more favorable outcome in terms of availability.

But isn’t that the problem exactly? Most film fans are not interested in last year’s blockbusters; they want to be able to see the new stuff at home too. And since the movie industry prefers to keep its windowing business model intact, piracy is often the only option for watching recent movies online.

So when the MPA’s Stan McCoy says that lacking availability is a myth, he’s ignoring the elephant in the room.

For as long as the film industry keeps its windowing business model intact, releasing films online months after their theatrical release, people will search for other ways to access content, keeping their piracy habit alive.

Admittedly, changing a business that has relied on complex licensing schemes and windowing strategies for decades isn’t easy. But completely ignoring that these issues play a role is a bit shortsighted.

There’s no doubt that the movie studios are making progress. It’s also true that many people choose to pirate content that is legally available, simply because it’s free. There is no good excuse for these freeriders, but it’s also a myth that Hollywood has done all it can to eradicate piracy.

Even its own research proves them wrong.

Earlier this year a KPMG report, commissioned by NBC Universal, showed that only 16% of the most popular and critically acclaimed films are available via Netflix and other on-demand subscription services. The missing 84% includes recent titles but also older ones that are held back due to rights issues.

Clearly, availability is still an issue.

So if Hollywood accuses Google of breeding pirates, then it’s safe to say the same about Hollywood.

Source: TorrentFreak, for the latest info on copyright, file-sharing, torrent sites and anonymous VPN services.

SANS Internet Storm Center, InfoCON: green: Improving SSL Warnings, (Sun, Feb 1st)

This post was syndicated from: SANS Internet Storm Center, InfoCON: green and was written by: SANS Internet Storm Center, InfoCON: green. Original post: at SANS Internet Storm Center, InfoCON: green

One of the things that has concerned me for the last few years is how we are slowly creating a click-thru culture.

I honestly believe the intent is correct, but the implementation is faulty. The messages are not in tune with the average Internet user’s knowledge level. In other words, the warnings are incomprehensible to my sister, my parents and my grandparents, the average Internet users of today. Given a choice between going to their favorite website or trusting an incomprehensible warning message… well, you know what happens next.

A team at Google has been looking at these issues and is driving browser changes in Chrome based on its research. As the team points out, the vast majority of these errors are attributable to webmaster mistakes, with only a very small fraction being actual attacks.

The paper is Improving SSL Warnings: Comprehension and Adherence, and there is an accompanying presentation.

– Rick Wanner MSISE – rwanner at isc dot sans dot edu – http://namedeplume.blogspot.com/ – Twitter:namedeplume (Protected)

(c) SANS Internet Storm Center. https://isc.sans.edu Creative Commons Attribution-Noncommercial 3.0 United States License.

lcamtuf's blog: Technical analysis of Qualys’ GHOST

This post was syndicated from: lcamtuf's blog and was written by: Michal Zalewski. Original post: at lcamtuf's blog

This morning, a leaked note from Qualys’ external PR agency made us aware of GHOST. In this blog entry, our crack team of analysts examines the technical details of GHOST and makes a series of recommendations to better protect your enterprise from mishaps of this sort.


Figure 1: The logo of GHOST, courtesy of Qualys PR.

Internally, GHOST appears to be implemented as a lossy representation of a two-dimensional raster image, combining YCbCr chroma subsampling and DCT quantization techniques to achieve high compression rates; among security professionals, this technique is known as JPEG/JFIF. This compressed datastream maps to an underlying array of 8-bpp RGB pixels, arranged sequentially into a rectangular shape that is 300 pixels wide and 320 pixels high. The image is not accompanied by an embedded color profile; we must note that this poses a considerable risk that on some devices, the picture may not be rendered faithfully and that crucial information may be lost.

In addition to the compressed image data, the file also contains APP12, EXIF, and XMP sections totaling 818 bytes. This metadata tells us that the image has been created with Photoshop CC on Macintosh. Our security personnel notes that Photoshop CC is an obsolete version of the application, superseded last year by Photoshop CC 2014. In line with industry best practices and OWASP guidelines, we recommend all users to urgently upgrade their copy of Photoshop to avoid exposure to potential security risks.

The image file modification date returned by the HTTP server at community.qualys.com is Thu, 02 Oct 2014 02:40:27 GMT (Last-Modified, link). The roughly 90-day delay between the creation of the image and the release of the advisory probably corresponds to the industry-standard period needed to test the materials with appropriate focus groups.

Removal of the metadata allows the JPEG image to be shrunk from 22,049 to 21,192 bytes (-4%) without any loss of image quality; enterprises wishing to conserve vulnerability-disclosure-related bandwidth may want to consider running jhead -purejpg to accomplish this goal.
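
For what it is worth, the saving claimed above is easy to measure by wrapping the same jhead invocation in a few lines of Python; the local file name below is of course a hypothetical stand-in for a saved copy of the logo.

    import os
    import subprocess

    SRC = "ghost.jpg"  # hypothetical local copy of the image

    before = os.path.getsize(SRC)
    subprocess.run(["jhead", "-purejpg", SRC], check=True)  # strips non-image metadata in place
    after = os.path.getsize(SRC)
    print(f"{before} -> {after} bytes ({100.0 * (before - after) / before:.1f}% smaller)")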

Of course, all this mundane technical detail about JPEG images distracts us from the broader issue highlighted by the GHOST report. We’re talking here about the fact that the JPEG compression is not particularly suitable for non-photographic content such as logos, especially when the graphics need to be reproduced with high fidelity or repeatedly incorporated into other work. To illustrate the ringing artifacts introduced by the lossy compression algorithm used by the JPEG file format, our investigative team prepared this enhanced visualization:


Figure 2: A critical flaw in GHOST: ringing artifacts.

Artifacts aside, our research has conclusively shown that the JPEG format offers an inferior compression rate compared to some of the alternatives. In particular, when converted to a 12-color PNG and processed with pngcrush, the same image can be shrunk to 4,229 bytes (-80%):


Figure 3: Optimized GHOST after conversion to PNG.

PS. Tavis also points out that “>_” is not a standard unix shell prompt. We believe that such design errors can be automatically prevented with commercially-available static logo analysis tools.

PPS. On a more serious note, check out this message to get a sense of the risk your server may be at. Either way, it’s smart to upgrade.

TorrentFreak: Netflix Sees Popcorn Time As a Serious Competitor

This post was syndicated from: TorrentFreak and was written by: Ernesto. Original post: at TorrentFreak

The Popcorn Time app brought peer-to-peer streaming to a mainstream public last year.

Branded the “Netflix for Pirates” it became an instant hit by offering BitTorrent-powered streaming in an easy-to-use Netflix-style interface.

This was cause for concern for many Hollywood executives and Netflix itself is now also starting to worry. In a letter to the company’s shareholders Popcorn Time gets a special mention.

“Piracy continues to be one of our biggest competitors,” Netflix CEO Reed Hastings writes.

“This graph of Popcorn Time’s sharp rise relative to Netflix and HBO in the Netherlands, for example, is sobering,” he adds, referencing the Google trends data below showing Popcorn Time quickly catching up with Netflix.

[Graph: Google Trends, Popcorn Time vs. Netflix and HBO in the Netherlands]

While it’s a relatively small note, Hastings’ comments do mark a change in attitude for a company that previously described itself as a piracy killer.

Netflix’s CEO previously noted that piracy might even help the company, as many torrent users would eventually switch to Netflix as it offers a much better user experience.

“Certainly there’s some torrenting that goes on, and that’s true around the world, but some of that just creates the demand,” Hastings said last year.

“Netflix is so much easier than torrenting. You don’t have to deal with files, you don’t have to download them and move them around. You just click and watch,” he added.

The problem with Popcorn Time is that it’s just as easy as Netflix, if not easier. And in terms of recent movies and TV-shows the pirated alternative has a superior content library too.

A study published by research firm KPMG previously revealed that only 16% of the most popular and critically acclaimed films are available via Netflix and other on-demand subscription services.

While Netflix largely depends on the content creators when it comes to what content they can make available, this is certainly one of the areas where they have to “catch up.”

Despite the Popcorn Time concerns, business is going well for Netflix. The company announced its results for the fourth quarter of 2014 which resulted in $1.48 billion in revenue, up 26%, and a profit of $83 million.

Source: TorrentFreak, for the latest info on copyright, file-sharing, torrent sites and anonymous VPN services.