Posts tagged ‘chrome’

SANS Internet Storm Center, InfoCON: green: POODLE: Turning off SSLv3 for various servers and clients, (Wed, Oct 15th)

This post was syndicated from: SANS Internet Storm Center, InfoCON: green and was written by: SANS Internet Storm Center, InfoCON: green. Original post: at SANS Internet Storm Center, InfoCON: green

Before you start: While adjusting your SSL configuration, you should also check for various other SSL related configuration options. A good outline can be found at as well as at (for web servers in particular)

Here are some configuration directives to turn off SSLv3 support on servers:

Apache: Add -SSLv3 to the SSLProtocol line. It should already contain -SSLv2 unless you list specific protocols.

nginx: list specific allowed protocols in the ssl_protocols line. Make sure SSLv2 and SSLv3 are not included. For example: ssl_protocols TLSv1 TLSv1.1 TLSv1.2;

Postfix: Disable SSLv3 support in the smtpd_tls_mandatory_protocols configuration line. For example: smtpd_tls_mandatory_protocols=!SSLv2,!SSLv3

Dovecot: similarly, disable SSLv2 and SSLv3 in the ssl_protocols line. For example: ssl_protocols = !SSLv2 !SSLv3

HAProxy Server: the bind configuration line should include no-sslv3 (this line also lists allowed ciphers)

Puppet: see for instructions
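
Pulled together, the server-side changes above might look like this (hostnames, file paths and cipher settings are illustrative placeholders, not complete configurations):

```
# Apache (httpd-ssl.conf or the relevant vhost)
SSLProtocol all -SSLv2 -SSLv3

# nginx (server block)
ssl_protocols TLSv1 TLSv1.1 TLSv1.2;

# Postfix (main.cf)
smtpd_tls_mandatory_protocols = !SSLv2, !SSLv3

# Dovecot (conf.d/10-ssl.conf)
ssl_protocols = !SSLv2 !SSLv3

# HAProxy (bind line in the frontend)
bind 0.0.0.0:443 ssl crt /etc/haproxy/site.pem no-sslv3
```

Remember to restart or reload each daemon after the change, and re-test with one of the tools below.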

For clients, turning off SSLv3 can be a bit trickier, or simply impossible.

Google Chrome: you need to start Google Chrome with the --ssl-version-min=tls1 option.

Internet Explorer: You can turn off SSLv3 support in the advanced internet option dialog.

Firefox: check the security.tls.version.min setting in about:config and set it to 1. Oddly enough, in our testing, the default setting of 0 will allow SSLv3 connections, but Firefox refuses to connect to our SSLv3-only server.

For Microsoft Windows, you can use group policies. For details, see Microsoft's advisory:

To test, continue to use our POODLE Test page at or the QualysSSLLabs page at

To detect the use of SSLv3, you can try the following filters:

tshark/wireshark display filter: ssl.handshake.version == 0x0300

tcpdump filter: (1) accounting for variable TCP header length: tcp[((tcp[12]>>4)*4)+9:2]=0x0300
(2) assuming the TCP header length is 20: tcp[29:2]=0x0300
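
The offset arithmetic in filter (1) can be sanity-checked against a synthetic packet: the top nibble of TCP byte 12 is the data offset in 32-bit words, and offset 9 into the payload is where the 2-byte client_version sits in a ClientHello record. The header bytes below are made up for illustration:

```python
# 20-byte TCP header (data offset nibble = 5), followed by an SSLv3 ClientHello.
tcp_header = bytes([0x01, 0xBB, 0xC0, 0x00,   # ports (made up)
                    0, 0, 0, 0, 0, 0, 0, 0,   # seq / ack numbers
                    0x50, 0x18, 0xFF, 0xFF,   # data offset = 5 (x4 = 20), flags, window
                    0, 0, 0, 0])              # checksum, urgent pointer
record = bytes([0x16, 0x03, 0x00,             # handshake record, record-layer version
                0x00, 0x31,                   # record length (arbitrary)
                0x01, 0x00, 0x00, 0x2D,       # ClientHello, handshake length
                0x03, 0x00])                  # client_version = 0x0300 (SSLv3)
pkt = tcp_header + record

hdr_len = (pkt[12] >> 4) * 4                  # same as tcp[12]>>4 in the BPF filter
version = int.from_bytes(pkt[hdr_len + 9 : hdr_len + 11], "big")
assert version == 0x0300                      # what the filter matches on
print(hex(version))                           # prints: 0x300
```

With a 20-byte header, hdr_len + 9 is byte 29, which is exactly what the simplified filter (2) reads.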

We will also have a special webcast at 3pm ET. For details see

The webcast will probably last 20-30 minutes and summarize the highlights of what we know so far.

Johannes B. Ullrich, Ph.D.

(c) SANS Internet Storm Center. Creative Commons Attribution-Noncommercial 3.0 United States License.

Krebs on Security: Microsoft, Adobe Push Critical Security Fixes

This post was syndicated from: Krebs on Security and was written by: BrianKrebs. Original post: at Krebs on Security

Adobe, Microsoft and Oracle each released updates today to plug critical security holes in their products. Adobe released patches for its Flash Player and Adobe AIR software. A patch from Oracle fixes at least 25 flaws in Java. And Microsoft pushed patches to fix at least two-dozen vulnerabilities in a number of Windows components, including Office, Internet Explorer and .NET. One of the updates addresses a zero-day flaw that reportedly is already being exploited in active cyber espionage attacks.

Earlier today, iSight Partners released research on a threat the company has dubbed “Sandworm” that exploits one of the vulnerabilities being patched today (CVE-2014-4114). iSight said it discovered that Russian hackers have been conducting cyber espionage campaigns using the flaw, which is apparently present in every supported version of Windows. The New York Times carried a story today about the extent of the attacks against this flaw.

In its advisory on the zero-day vulnerability, Microsoft said the bug could allow remote code execution if a user opens a specially crafted malicious Microsoft Office document. According to iSight, the flaw was used in targeted email attacks against NATO, Ukrainian and Western government organizations, and firms in the energy sector.

More than half of the other vulnerabilities fixed in this month’s patch batch address flaws in Internet Explorer. Additional details about the individual Microsoft patches released today are available at this link.

Separately, Adobe issued its usual round of updates for its Flash Player and AIR products. The patches plug at least three distinct security holes in these products. Adobe says it’s not aware of any active attacks against these vulnerabilities. Updates are available for Windows, Mac and Linux versions of Flash.

Adobe says users of the Adobe Flash Player desktop runtime for Windows and Macintosh should update to the latest version of Adobe Flash Player. To see which version of Flash you have installed, check this link. IE10/IE11 on Windows 8.x and Chrome should auto-update their versions of Flash, although my installation of Chrome says it is up-to-date and yet is still running v. (with no outstanding updates available, and no word yet from Chrome about when the fix might be available).

The most recent versions of Flash are available from the Flash home page, but beware potentially unwanted add-ons, like McAfee Security Scan. To avoid this, uncheck the pre-checked box before downloading, or grab your OS-specific Flash download from here.

Windows users who browse the Web with anything other than Internet Explorer may need to apply this patch twice, once with IE and again using the alternative browser (e.g., Firefox or Opera). If you have Adobe AIR installed, you’ll want to update this program. AIR ships with an auto-update function that should prompt users to update when they start an application that requires it; the newest, patched version is v. for Windows, Mac, and Android.

Finally, Oracle is releasing an update for its Java software today that corrects more than two-dozen security flaws in the software. Oracle says 22 of these vulnerabilities may be remotely exploitable without authentication, i.e., may be exploited over a network without the need for a username and password. Java SE 8 updates are available here; the latest version of Java SE 7 is here.

If you really need and use Java for specific Web sites or applications, take a few minutes to update this software. Updates are available from or via the Java Control Panel. I don’t have an installation of Java handy on the machine I’m using to compose this post, but keep in mind that updating via the control panel may auto-select the installation of third-party software, so de-select that if you don’t want the added crapware.

Otherwise, seriously consider removing Java altogether. I’ve long urged end users to junk Java unless they have a specific use for it (this advice does not scale for businesses, which often have legacy and custom applications that rely on Java). This widely installed and powerful program is riddled with security holes, and is a top target of malware writers and miscreants.

If you have an affirmative use or need for Java, unplug it from the browser unless and until you’re at a site that requires it (or at least take advantage of click-to-play). The latest versions of Java let users disable Java content in web browsers through the Java Control Panel. Alternatively, consider a dual-browser approach, unplugging Java from the browser you use for everyday surfing, and leaving it plugged in to a second browser that you only use for sites that require Java.

For Java power users — or for those who are having trouble upgrading or removing a stubborn older version — I recommend JavaRa, which can assist in repairing or removing Java when other methods fail (requires the Microsoft .NET Framework, which also received updates today from Microsoft).

SANS Internet Storm Center, InfoCON: green: Content Security Policy (CSP) is Growing Up., (Wed, Sep 10th)

This post was syndicated from: SANS Internet Storm Center, InfoCON: green and was written by: SANS Internet Storm Center, InfoCON: green. Original post: at SANS Internet Storm Center, InfoCON: green

We have talked here about Content Security Policy (CSP) in the past. CSP is trying to tackle a pretty difficult problem. When it comes to cross-site scripting (XSS), the browser and the user are usually the victims, not so much the server that is susceptible to XSS. As a result, it makes a lot of sense to add protections to the browser to prevent XSS. This isn’t easy, because the browser has no idea what Javascript (or other content) to expect from a particular site. Microsoft implemented a simple filter in IE 8 and later, matching content submitted by the user to content reflected back by the site, but this approach is quite limited.

CSP is an attempt to define a policy informing the browser about what content to expect from a site. Initially, only Firefox supported CSP. But lately, CSP has evolved into a standard, and other browsers started to implement it [1]. The very granular language defined by CSP allows sites to specify exactly what content is “legal” on a particular site.

Implementing CSP on a new site isn’t terribly hard, and may actually lead to a cleaner site. But the difficult part is implementing CSP on existing sites (like this one). Sites grow “organically” over the years, and it is difficult to come back later and define a policy. You are bound to run into false positives, or your policy becomes relaxed to the point where it is meaningless.

Luckily, CSP has a mechanism to help us. You are able to define a “report URL”, and browsers will report any violations they encounter to that URL. The reports are reasonably easy-to-read JSON snippets including the page that caused the problem, the policy it violated, and even an excerpt from the part of the page that caused the problem.
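
For example, a site might send a policy like the one below, and a violating page would then trigger a POST of a JSON report roughly like the snippet that follows (the URLs and values are invented for illustration):

```
Content-Security-Policy: default-src 'self'; script-src 'self'; report-uri /csp-report

{
  "csp-report": {
    "document-uri": "https://www.example.com/page.html",
    "blocked-uri": "https://ads.example.net/track.js",
    "violated-directive": "script-src 'self'",
    "original-policy": "default-src 'self'; script-src 'self'; report-uri /csp-report"
  }
}
```

During a rollout on an existing site, the same policy can be sent as Content-Security-Policy-Report-Only, so violations are reported without anything actually being blocked.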

Recently, a few nice tools have cropped up to make it easier to parse these reports and build CSPs. For example, Stuart Larsen implemented “CASPR” [2], a plugin for Chrome that was built to create CSPs and to analyze the reports. Tools like this make implementing CSPs a lot easier.

Are there any other tools or resources you like for implementing CSPs?

Update: We got a couple of additional resources in via Twitter:

Using “Virtual Patching” to implement CSP on your Web Application Firewall
Twitter account focusing on CSP:

Thanks to @imeleven for pointing out that Firefox was the first browser to support CSP. He also pointed to this slide deck:





Johannes B. Ullrich, Ph.D.

(c) SANS Internet Storm Center. Creative Commons Attribution-Noncommercial 3.0 United States License.

Krebs on Security: Critical Fixes for Adobe, Microsoft Software

This post was syndicated from: Krebs on Security and was written by: BrianKrebs. Original post: at Krebs on Security

Adobe today released updates to fix at least a dozen critical security problems in its Flash Player and AIR software. Separately, Microsoft pushed four update bundles to address at least 42 vulnerabilities in Windows, Internet Explorer, Lync and .NET Framework. If you use any of these, it’s time to update!

Most of the flaws Microsoft fixed today (37 of them) are addressed in an Internet Explorer update — the only patch this month to earn Microsoft’s most-dire “critical” label. A critical update wins that rating if the vulnerabilities fixed in the update could be exploited with little to no action on the part of users, save for perhaps visiting a hacked or malicious Web site with IE.

I’ve experienced trouble installing Patch Tuesday packages along with .NET updates, so I make every effort to update .NET separately. To avoid any complications, I would recommend that Windows users install all other available recommended patches except for the .NET bundle; after installing those updates, restart Windows and then install any pending .NET fixes. Your mileage may vary.

For more information on the rest of the updates released today, see this post at the Microsoft Security Response Center Blog.

Adobe’s critical update for Flash Player fixes at least 12 security holes in the program. Adobe is urging Windows and Macintosh users to update to the latest Adobe Flash Player by visiting the Adobe Flash Player Download Center, or via the update mechanism within the product when prompted. If you’d rather not be bothered with downloaders and software “extras” like antivirus scanners, you’re probably best off getting the appropriate update for your operating system from this link.

To see which version of Flash you have installed, check this link. IE10/IE11 on Windows 8.x and Chrome should auto-update their versions of Flash.

Windows users who browse the Web with anything other than Internet Explorer may need to apply this patch twice, once with IE and again using the alternative browser (e.g., Firefox or Opera). If you have Adobe AIR installed (required by some programs like Pandora Desktop), you’ll want to update this program. AIR ships with an auto-update function that should prompt users to update when they start an application that requires it; the newest, patched version is v. 15 for Windows, Mac, and Android.

Adobe had also been scheduled to release updates today for Adobe Reader and Acrobat, but the company said it was pushing that release date back to Sept. 15 to address some issues that popped up during testing of the patches.

As always, if you experience any issues updating these products, please leave a note about your troubles in the comments below.

SANS Internet Storm Center, InfoCON: green: Dodging Browser Zero Days – Changing your Org’s Default Browser Centrally, (Mon, Sep 1st)

This post was syndicated from: SANS Internet Storm Center, InfoCON: green and was written by: SANS Internet Storm Center, InfoCON: green. Original post: at SANS Internet Storm Center, InfoCON: green

In a recent story about “what’s a sysadmin to do?“, we suggested that, since our browsers seem to take turns with zero days lately, system administrators should have processes in place to prepare for when their corporate standard browser has a major vulnerability that doesn’t yet have a patch.  Administrators should be able to “push” out a change for their user community’s default browser within a few minutes of a zero day being confirmed.

So – How exactly would you do this in an Active Directory Domain?

First of all, have a desktop or start menu shortcut that uses http:// or https:// – usually pointed to one or more corporate applications.  It’s not uncommon to also see corporate web apps in the start menu.   Be sure that none of these links point to the browser programs themselves, just the URIs.  This gets folks in the habit of punching that shortcut every morning (or having it auto-start for them), starting them off on the right foot – with the browser you’ve selected for them.  Having people start their browser by the actual link to the executable defeats the purpose of setting the defaults.

It turns out that the default browser can be changed by updating just 5 registry keys – the preferred application for htm and html files, as well as the preferred application for the ftp, http and https protocols.


============ Registry keys for Firefox  – reg-ff.reg ==============





============  Registry keys for Internet Explorer – reg-ie.reg ==============


============  Registry keys for Chrome – reg-goo.reg ==============



You can dig and find lots of other registry keys that will influence the browser, but these 5 will nail most things in a hurry – which is the goal.  You can also find more registry keys that will change the default browser, but these are the keys set by the control panel (in Windows 7 anyway), so for me they’re likely the safest keys – the ones that, for today at least, will be most likely to work reliably in most environments.
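
As a sketch, the Firefox file (reg-ff.reg) might contain something like the following on Windows 7. The key paths match what the Default Programs control panel writes, but treat the ProgId values as assumptions: verify them by exporting the keys from a correctly configured reference machine before pushing anything out.

```
Windows Registry Editor Version 5.00

[HKEY_CURRENT_USER\Software\Microsoft\Windows\CurrentVersion\Explorer\FileExts\.htm\UserChoice]
"ProgId"="FirefoxHTML"

[HKEY_CURRENT_USER\Software\Microsoft\Windows\CurrentVersion\Explorer\FileExts\.html\UserChoice]
"ProgId"="FirefoxHTML"

[HKEY_CURRENT_USER\Software\Microsoft\Windows\Shell\Associations\UrlAssociations\ftp\UserChoice]
"ProgId"="FirefoxURL"

[HKEY_CURRENT_USER\Software\Microsoft\Windows\Shell\Associations\UrlAssociations\http\UserChoice]
"ProgId"="FirefoxURL"

[HKEY_CURRENT_USER\Software\Microsoft\Windows\Shell\Associations\UrlAssociations\https\UserChoice]
"ProgId"="FirefoxURL"
```

The IE and Chrome versions would follow the same pattern with their own ProgIds (IE.HTTP / ChromeHTML and friends).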

So, what’s the easiest way to push these settings out?   There are a few ways to go.  First, save the above into three text-based REG files.

The easiest way in my book is to update everyone’s startup – in a Group Policy, add the following to User Configuration / Windows Settings / Scripts (Logon/Logoff)

regedit /s browser-chrome.reg  (or whichever REG file is your target).

The trick then is to get folks to log out and log in – hopefully you are forcing folks to log out each day by setting a hard logout time (a good thing to consider if you’re not doing that today), so if you get your change in before folks typically start, they’ll get your update.

If you need to push this out with GPO in mid-stream, you can set registry keys directly in Group Policy, under GPO > User Configuration > Preferences > Windows Settings > Registry

Microsoft publishes a “right way” to set the default browser on a few different pages, but it typically involves importing settings from a known correct station ( ).  This can be a problem if you’ve got multiple operating systems or want a more script-controlled approach.

There are certainly many other ways to push settings out using Group Policy (using ADM/ADMX files for instance), or by scripting using sysinternals or powershell commands.  The sysinternals approach has a lot of appeal because many admins already have a sysinternals “go fix it” approach in their toolbelt.  Powershell appeals because it’s the whiz-bang-shiny new tool, but lots of admins are still learning this language, so it might not fall into the “get it done quick” bucket so neatly.  ADMs will absolutely do the job nicely – I didn’t have the time to cobble together an ADM or ADMX file for this, but will give it a shot over the next few days (unless one of our readers beats me to it, that is!)

Once set, each browser can be configured using Group Policy with a vendor-supplied or open-source ADM or ADMX file.  Import the vendor ADM(X) file into GPO, and you’ll be able to configure or restrict 3rd party browsers just as easily as you do IE.

This article was meant as a set of “quick and dirty” ways to make this change for a large number of your user community in a hurry.  If you’ve got a neat script or an ADM file that does this job in a more elegant way than I’ve described, please share using our comment form!


Rob VandenBrink

(c) SANS Internet Storm Center. Creative Commons Attribution-Noncommercial 3.0 United States License.

Linux How-Tos and Linux Tutorials: How to Install the Netflix Streaming Client On Linux

This post was syndicated from: Linux How-Tos and Linux Tutorials and was written by: Jack Wallen. Original post: at Linux How-Tos and Linux Tutorials


Netflix is one of the biggest video streaming services on the planet. You’ll find movies, television, documentaries, and more streamed to mobile devices, televisions, laptops, desktops, and much more. What you won’t find, however, is an official Linux client for the service. This is odd, considering Netflix so heavily relies upon FreeBSD.

This is Linux, though, so as always the adage ‘Where there’s a will, there’s a way’ very much applies. With just a few quick steps, you can have a Netflix client on your desktop. This client does require the installation of the following extras:

  • Wine

  • Mono

  • msttcorefonts

  • Gecko

I will walk you through the installation of this on an Ubuntu 14.04 desktop. I have also tested this same installation on both Linux Mint and Deepin – all with the same success. If you like living on the bleeding edge, you can get the full Netflix experience, without having to go through the steps I outline here. For that, you must be running the latest developer or beta release of Google Chrome with the Ubuntu 14.04 distribution. NOTE: You will also have to upgrade libnss3 (32 bit or 64 bit). Once you’ve installed all of that, you then have to modify the user-agent string of the browser so Netflix thinks you are accessing its services with a supported browser. The easiest way to do this is to install the User Agent Switcher Extension. The information you’ll need for the HTTP string is:

  • Name: Netflix Linux

  • String: Mozilla/5.0 (Windows NT 6.3; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/38.0.2114.2 Safari/537.36

  • Group: (is filled in automatically)

  • Append?: Select ‘Replace’

  • Flag: IE

If dealing with bleeding edge software and user agent strings isn’t for you, the method below works like a champ. The majority of this installation will happen through the command line, so be prepared to either type or cut and paste. Let’s begin.

Installing the repository and preparing apt-get

The first thing you must do is open up a terminal window. Once that is opened, issue the following commands to add the correct repository, update apt-get, and install the software.

  • sudo apt-add-repository ppa:ehoover/compholio

  • sudo apt-get update

Now, you’re ready to start installing software. There are two pieces of software to be installed. The first is the actual Netflix Desktop app. The second is the msttcorefonts package that cannot be installed by the Netflix Desktop client (all other dependencies are installed through the Netflix Desktop client). The two commands you need to issue are:

  • sudo apt-get install netflix-desktop

  • sudo apt-get install msttcorefonts

The installation of the netflix-desktop package will take some time (as there are a number of dependencies it must first install). Once that installation completes, install the msttcorefonts package and you’re ready to continue.

First run

You’re ready to fire up the Netflix Desktop Client. To do this (in Ubuntu), open up the Dash and type netflix. When you see the launcher appear, click on it to start the client. When you first run the Netflix Desktop Client, you will be required to install Mono. Wine will take care of this for you, but you do have to okay the installer. When prompted, click Install (Figure 1) and the Wine installer will take care of the rest.

Figure 1: The Wine Mono installer prompt.

You will also be prompted to allow Wine to install Gecko as well. When prompted, click Install for this action to complete.

At this point, all you have to do is sign in to Netflix and enjoy streaming content on your Linux desktop. You will notice that the client opens in full screen mode. To switch this to window mode, hit F11 and the client will appear in a window.

Although this isn’t an ideal situation, and there may be those who balk at installing Mono, by following these steps, you can have the Netflix streaming video service on your Linux desktop. It works perfectly and you won’t miss a single feature (you can enjoy profiles, searching, rating, and much more).

Linux is an incredible desktop that offers everything the competition has and more. Give this installation of Netflix a go and see if you’re one step closer to dropping the other platforms from your desktop or laptop for good.

Darknet - The Darkside: Twitter Patents Technique To Detect Mobile Malware

This post was syndicated from: Darknet - The Darkside and was written by: Darknet. Original post: at Darknet - The Darkside

So it was discovered that Twitter has been granted a patent which covers detection of mobile malware on websites to protect its user base. The patent was filed back in 2012, but well – as we know these things take time. The method is something like the technology Google uses in Chrome to warn you [...]

The post Twitter Patents Technique To…

Read the full post at

Krebs on Security: Adobe, Microsoft Push Critical Security Fixes

This post was syndicated from: Krebs on Security and was written by: BrianKrebs. Original post: at Krebs on Security

Adobe and Microsoft today each independently released security updates to fix critical problems with their products. Adobe issued patches for Adobe Reader/Acrobat, Flash Player and AIR, while Microsoft pushed nine security updates to address at least 37 security holes in Windows and related software.

Microsoft's recommended patch deployment priority for enterprises, Aug. 2014.


Two of the nine update bundles Microsoft released today earned the company’s most-dire “critical” label, meaning the vulnerabilities fixed in the updates can be exploited by bad guys or malware without any help from users. A critical update for Internet Explorer accounts for the bulk of flaws addressed this month, including one that was actively being exploited by attackers prior to today, and another that was already publicly disclosed, according to Microsoft.

Other Microsoft products fixed in today’s release include Windows Media Center, OneNote, SQL Server and SharePoint. Check out the TechNet roundup here and the Microsoft Bulletin Summary Web page at this link.

There are a couple other important changes from Microsoft this month: The company announced that it will soon begin blocking out-of-date ActiveX controls for Internet Explorer users, and that it will support only the most recent versions of the .NET Framework and IE for each supported operating system (.NET is a programming platform required by a great many third-party Windows applications and is therefore broadly installed).

These changes are both worth mentioning because this month’s patch batch also includes Flash fixes (an ActiveX plugin on IE) and another .NET update. I’ve had difficulties installing large Patch Tuesday packages along with .NET updates, so I try to update them separately. To avoid any complications, I would recommend that Windows users install all other available recommended patches except for the .NET bundle; after installing those updates, restart Windows and then install any pending .NET fixes.

Finally, I should note that Microsoft released a major new version (version 5) of its Enhanced Mitigation Experience Toolkit (EMET), a set of tools designed to protect Windows systems even before new and undiscovered threats against the operating system and third-party software are formally addressed by security updates and antimalware software. I’ll have more on EMET 5.0 in an upcoming blog post (my review of EMET 4 is here) but this is a great tool that can definitely help harden Windows systems from attacks. If you already have EMET installed, you’ll want to remove the previous version and reboot before upgrading to 5.0.


Adobe’s critical update for Flash Player fixes at least seven security holes in the program. Which version of Flash you should have on your system in order to get the protection from these latest fixes depends on which operating system and which browser you use, so consult the (admittedly complex) chart below for your appropriate version number.

To see which version of Flash you have installed, check this link. IE10/IE11 on Windows 8.x and Chrome should auto-update their versions of Flash, although my installation of Chrome says it is up-to-date and yet is still running v. (with no outstanding updates available, and no word yet from Chrome about when the fix might be available).

The most recent versions of Flash are available from the Flash home page, but beware potentially unwanted add-ons, like McAfee Security Scan. To avoid this, uncheck the pre-checked box before downloading, or grab your OS-specific Flash download from here.

Windows users who browse the Web with anything other than Internet Explorer may need to apply this patch twice, once with IE and again using the alternative browser (e.g., Firefox or Opera). If you have Adobe AIR installed (required by some programs like Tweetdeck and Pandora Desktop), you’ll want to update this program. AIR ships with an auto-update function that should prompt users to update when they start an application that requires it; the newest, patched version is v. for Windows, Mac, and Android.


Adobe said it is not aware of any exploits in the wild that target any of the issues addressed in this month’s Flash update. However, the company says there are signs that attackers are already targeting the lone bug fixed in an update released today for Windows versions of Adobe Reader and Acrobat (Adobe Reader and Acrobat for Apple’s OS X are not affected).


Experience technical issues during or after applying any of these updates, or with the instructions above? Please feel free to sound off in the comments below.

lcamtuf's blog: A bit more about american fuzzy lop

This post was syndicated from: lcamtuf's blog and was written by: Michal Zalewski. Original post: at lcamtuf's blog

Fuzzing is one of the most powerful strategies for identifying security issues in real-world software. Unfortunately, it also offers fairly shallow coverage: it is impractical to exhaustively cycle through all possible inputs, so even something as simple as setting three separate bytes to a specific value to reach a chunk of unsafe code can be an insurmountable obstacle to a typical fuzzer.

There have been numerous attempts to solve this problem by augmenting the process with additional information about the behavior of the tested code. These techniques can be divided into three broad groups:

  • Simple coverage maximization. This approach boils down to trying to isolate initial test cases that offer diverse code coverage in the targeted application – and then fuzzing them using conventional techniques.

  • Control flow analysis. A more sophisticated technique that leverages instrumented binaries to focus the fuzzing efforts on mutations that generate distinctive sequences of conditional branches within the instrumented binary.

  • Static analysis. An approach that attempts to reason about potentially interesting states within the tested program and then make educated guesses about the input values that could possibly trigger them.

The first technique is surprisingly powerful when used to pre-select initial test cases from a massive corpus of valid data – say, the result of a large-scale web crawl. Unfortunately, coverage measurements provide only a very simplistic view of the internal state of the program, making them less suited for creatively guiding the fuzzing process later on.

The latter two techniques are extremely promising in experimental settings. That said, in real-world applications, they are not only very slow, but frequently lead to irreducible complexity: most of the high-value targets will have a vast number of internal states and possible execution paths, and deciding which ones are interesting and substantially different from the rest is an extremely difficult challenge that, if not solved, usually causes the “smart” fuzzer to perform no better than a traditional one.

American fuzzy lop tries to find a reasonable middle ground between sophistication and practical utility. In essence, it’s a fuzzer that relies on a form of edge coverage measurements to detect subtle, local-scale changes to program control flow without having to perform complex global-scale comparisons between series of long and winding execution traces – a common failure point for similar tools.

In almost-plain English, the fuzzer does this by instrumenting every effective line of C or C++ code (or any other GCC-supported language) to record a tuple in the following format:

[ID of current code location], [ID of previously-executed code location]

The ordering information for tuples is discarded; the primary signal used by the fuzzer is the appearance of a previously-unseen tuple in the output dataset; this is also coupled with coarse magnitude count for tuple hit rate. This method combines the self-limiting nature of simple coverage measurements with the sensitivity of control flow analysis. It detects both explicit conditional branches, and indirect variations in the behavior of the tested app.
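
In sketch form, the bookkeeping described above can be pictured like this (a toy Python simulation of the coverage map, not afl's actual implementation; the location IDs and map size are made up):

```python
MAP_SIZE = 65536
coverage = [0] * MAP_SIZE        # one hit counter per (prev, cur) tuple slot
prev_location = 0

def visit(cur_location):
    """Called at every instrumented code location."""
    global prev_location
    coverage[(cur_location ^ prev_location) % MAP_SIZE] += 1
    prev_location = cur_location >> 1   # shift so A->B and B->A map differently

def run(path):
    """Simulate one execution and return the previously-unseen tuple slots."""
    global prev_location
    prev_location = 0
    seen_before = {i for i, c in enumerate(coverage) if c}
    for loc in path:
        visit(loc)
    return {i for i, c in enumerate(coverage) if c} - seen_before

# Two runs visiting the same locations in a different order produce
# different tuples -- something plain line coverage would not distinguish.
new_a = run([3, 7, 9])
new_b = run([3, 9, 7])
print(len(new_a), len(new_b))    # prints: 3 1
```

The xor-of-adjacent-IDs trick is what lets the fuzzer notice a new edge cheaply, without storing or comparing full execution traces.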

The output from this instrumentation is used as a part of a simple, vaguely “genetic” algorithm:

  1. Load user-supplied initial test cases into the queue,

  2. Take input file from the queue,

  3. Repeatedly mutate the file using a balanced variety of traditional fuzzing strategies (see later),

  4. If any of the generated mutations resulted in a new tuple being recorded by the instrumentation, add mutated output as a new entry in the queue.

  5. Go to 2.
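
The five steps above can be sketched in a few lines of hypothetical Python; `mutate` and `run_with_instrumentation` stand in for the real fuzzing strategies and the instrumented target, and none of this is afl's actual code:

```python
from collections import deque

def fuzz(initial_cases, mutate, run_with_instrumentation, rounds=1000):
    """Toy version of the queue-driven loop described above."""
    queue = deque(initial_cases)          # step 1: seed the queue
    seen_tuples = set()
    crashes = []
    for _ in range(rounds):
        if not queue:
            break
        case = queue.popleft()            # step 2: take an input file
        for mutant in mutate(case):       # step 3: mutate it
            tuples, crashed = run_with_instrumentation(mutant)
            if crashed:
                crashes.append(mutant)
            if tuples - seen_tuples:      # step 4: new tuple -> keep the mutant
                seen_tuples |= tuples
                queue.append(mutant)
        queue.append(case)                # step 5: cycle back to step 2
    return crashes, seen_tuples
```

With a trivial mutator and a fake target whose "tuples" are just input lengths, the queue grows only when a mutant exercises something new, which is the essence of the algorithm.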

The discovered test cases are also periodically culled to eliminate ones that have been made obsolete by more inclusive finds discovered later in the fuzzing process. Because of this, the fuzzer is useful not only for identifying crashes, but is exceptionally effective at turning a single valid input file into a reasonably-sized corpus of interesting test cases that can be manually investigated for non-crashing problems, handed over to valgrind, or used to stress-test applications that are harder to instrument or too slow to fuzz efficiently. In particular, it can be extremely useful for generating small test sets that may be programmatically or manually examined for anomalies in a browser environment.


Of course, there are countless “smart” fuzzer designs that look good on paper, but fail in real-world applications. I tried to make sure that this is not the case here: for example, afl can easily tackle security-relevant and tough targets such as gzip, xz, lzo, libjpeg, libpng, giflib, libtiff, or webp – all with absolutely no fine-tuning and while running at blazing speeds. The control flow information is also extremely useful for accurately de-duping crashes, so the tool does that for you.

In fact, I spent some time running it on a single machine against libjpeg, giflib, and libpng – some of the most robust, best-tested image parsing libraries out there. So far, the tool found:

  • CVE-2013-6629: JPEG SOS component uninitialized memory disclosure in jpeg6b and libjpeg-turbo,

  • CVE-2013-6630: JPEG DHT uninitialized memory disclosure in libjpeg-turbo,

  • MSRC 0380191: A separate JPEG DHT uninitialized memory disclosure in Internet Explorer,

  • CVE-2014-1564: Uninitialized memory disclosure via GIF images in Firefox,

  • CVE-2014-1580: Uninitialized memory disclosure via <canvas> in Firefox,

  • Chromium bug #398235, Mozilla bug #1050342: Probable library-related JPEG security issues in Chrome and Firefox (pending),

  • PNG zlib API misuse bug in MSIE (DoS-only),

  • Several browser-crashing images in WebKit browsers (DoS-only).

More is probably to come. In other words, you should probably try it out. The most significant limitation today is that the current fuzzing strategies are optimized for binary files; the fuzzer does:

  • Walking bitflips – 1, 2, and 4 bits,

  • Walking byte flips – 1, 2, and 4 bytes,

  • Walking addition and subtraction of small integers – byte, word, dword (both endians),

  • Walking insertion of interesting integers (-1, MAX_INT, etc) – byte, word, dword (both endians),

  • Random stacked flips, arithmetics, block cloning, insertion, deletion, etc,

  • Random splicing of synthesized test cases – pretty unique!

All these strategies have been specifically selected for an optimal balance between fuzzing cost and yields measured in terms of the number of discovered execution paths with binary formats; for highly-redundant text-based formats such as HTML or XML, syntax-aware strategies (template- or ABNF-based) will obviously yield better results. Plugging them into AFL would not be hard, but requires work.
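As an illustration of the first strategy, a walking bitflip slides an N-bit inversion across the whole input, one bit position at a time (a hypothetical sketch, not AFL's actual code):

```python
def walking_bitflips(data, width=1):
    """Yield one mutant per bit position, with `width` consecutive
    bits inverted, stepping the window one bit at a time."""
    nbits = len(data) * 8
    for start in range(nbits - width + 1):
        b = bytearray(data)
        for bit in range(start, start + width):
            b[bit // 8] ^= 0x80 >> (bit % 8)   # bit 0 = MSB of byte 0
        yield bytes(b)

muts = list(walking_bitflips(b"\x00", width=1))
# a single zero byte yields eight mutants: 0x80, 0x40, ..., 0x01
```

The 2- and 4-bit variants come out of the same loop by changing `width`; a 16-bit input with `width=4` produces 13 overlapping mutants.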

TorrentFreak: Chrome Extension Turns Amazon Into a Pirate eBook Site

This post was syndicated from: TorrentFreak and was written by: Ernesto. Original post: at TorrentFreak

As one of the largest online retailers, Amazon is the go-to store for many people.

Amazon became big by selling books and in recent years eBooks have become some of the fastest selling items. However, pirates are now directly targeting the company’s successful business model.

With a new Chrome extension pirates are entering Amazon, effectively transforming it into a pirate ‘store.’

When the LibGen extension is installed, it adds a new row on top of the Amazon product page of books that are also available through unauthorized sources.

The plugin uses data from the LibGen search engine, which lists over a million books. Below is a screenshot of an Amazon book page, with a new row on the top linking to pirated downloads of the same title.


LibGen, short for Library Genesis, lists a wide variety of pirate sources for most books, including direct downloads, torrents and magnet links. It appears to work well, although there are occasional mismatches where links to books with similar titles are listed.

Needless to say, book publishers are not going to be pleased with Amazon’s unofficial feature. Whether Amazon plans to take any action to stop the extension remains to be seen.

The idea to transform Amazon into a pirate site is not entirely new. A few years ago a Firefox plugin integrated Pirate Bay download links into the site, which also worked for music and movies. That plugin was quickly taken offline after the news was picked up by the mainstream media.

There are still other extensions floating around with the same functionality. Torrent This, for example, enhances Amazon with links to Pirate Bay download pages for all sorts of media, much like the “Pirates of the Amazon” plugin did.

Source: TorrentFreak, for the latest info on copyright, file-sharing and anonymous VPN services.

Linux How-Tos and Linux Tutorials: How To Install And Use The Chrome Remote Desktop Sharing Feature In Ubuntu

This post was syndicated from: Linux How-Tos and Linux Tutorials and was written by: Falko Timme. Original post: at Linux How-Tos and Linux Tutorials

Chrome Remote Desktop Sharing feature in Ubuntu

In this tutorial I will introduce you to the Chrome Remote Desktop sharing feature, a TeamViewer-like alternative for sharing your screen with remote clients. It is very useful for remote desktop control. I will install the web plugin in Ubuntu 14.04.

Read more at HowtoForge

TorrentFreak: Chrome Blocks uTorrent as Malicious and Harmful Software

This post was syndicated from: TorrentFreak and was written by: Ernesto. Original post: at TorrentFreak

With millions of new downloads per month uTorrent is without a doubt the most used BitTorrent client around.

However, since this weekend the number of installs must have dropped quite a bit after Google Chrome began warning users away from the software. According to Chrome the BitTorrent client poses a serious risk.

“uTorrent.exe is malicious and Chrome has blocked it,” the browser informs those who attempt to download the latest stable release.

Chrome does give users the option to restore the file but not without another warning. The browser is convinced that the file is harmful and suggests that the uTorrent website may have been hacked.

“This file will harm your computer. Even if you have downloaded files from this website before, the website may have been hacked. Instead of recovering this file you can retry the download later.”


The first reports of Chrome’s block came in three days ago and at the time of writing the problems persist. The warnings appear for the latest stable release, and no other releases appear to be affected.

Currently there is no indication why the software has been flagged, but a scan by more than 50 of the most popular anti-virus services reveals no active threats.

Google’s safe browsing diagnostic page claims that the uTorrent website was involved in malware distribution in recent months, but no further details on the nature of the supposed malware are provided.

“This site has hosted malicious software over the past 90 days. It infected 4 domain(s), including,,,” the diagnostics page reads.

This isn’t the first time that uTorrent has reported problems with Chrome. The same happened late last year when the malware blocking feature was still in beta. At the time uTorrent parent company BitTorrent Inc. managed to resolve the issues after several days.

Thus far, none of the developers have responded to user complaints in the uTorrent forums.

Update: We discovered that uTorrent occasionally serves other versions as well; these are not blocked. The vast majority of the downloads are still blocked, though.

Source: TorrentFreak, for the latest info on copyright, file-sharing and anonymous VPN services.

Linux How-Tos and Linux Tutorials: How to Operate Linux Spycams With Motion

This post was syndicated from: Linux How-Tos and Linux Tutorials and was written by: Carla Schroder. Original post: at Linux How-Tos and Linux Tutorials

(Figure 1: spycam)

When you want something a little simpler and more lightweight than Zoneminder for operating surveillance cameras, try Motion.

Motion is a nice lightweight, yet capable application for operating surveillance cameras on Linux. It works with any Linux-supported video camera, including all V4L Webcams, many IP cameras, Axis cameras, and it controls pan and tilt functions. Motion records movies and snapshots in JPEG, PPM, and MPEG formats, and you can view these remotely in a Web browser thanks to Motion’s built-in HTTP server. It stores image files in a directory of your choosing, and it does not require a database, though it supports MySQL and PostgreSQL if you do want to use one.

First let’s look at how to get an IP camera working with Motion using my trusty Foscam FI8905W (figure 1), and then we’ll add a USB Webcam.

Installation is easy on Debian and Debian derivatives, because Motion is included in the standard software repositories. So all you need to do is run apt-get install motion. You also need libav-tools, which is a fork of ffmpeg. Many moons ago, Debian dropped ffmpeg and replaced it with libav-tools (See Is FFmpeg missing from the official repositories in 14.04? to learn the gory details, and how to get ffmpeg itself if that’s what you really want). On other distros, check the downloads page and installation guide for instructions. Most other distros still include ffmpeg.

The installer should create a motion group and user, and add the motion user to the video group. If it doesn’t, then you must create them yourself. Add yourself to the video group as well, to get around permissions hassles.

Now run motion to see if it works:

$ sudo motion
[0] Processing thread 0 - config file /etc/motion/motion.conf
[0] Motion 3.2.12 Started
[0] Thread 1 is from /etc/motion/motion.conf
[1] Thread 1 started
[0] motion-httpd/3.2.12 running, accepting connections
[1] Failed to open video device /dev/video0: No such file or directory
[0] motion-httpd: waiting for data on port TCP 8080
[1] Could not fetch initial image from camera
[1] Motion continues using width and height from config file(s)
[1] Resizing pre_capture buffer to 1 items
[1] Started stream webcam server in port 8081

It will go on for many more lines, until you see:

[1] Failed to open video device /dev/video0: No such file or directory
[1] Video signal lost - Adding grey image

Point your Web browser to localhost:8081 and you will see a gray image:

(Figure 2: gray image)

This is good, as it means Motion is installed correctly, and all you have to do is configure it. Press Ctrl+C to stop it. Then create a .motion directory in your home directory, copy the default configuration file into it, and change ownership to you:

~$ mkdir .motion
~$ sudo cp /etc/motion/motion.conf .motion/
~$ sudo chown carla:carla .motion/motion.conf

You also need a directory to store images captured by motion:

~$ mkdir motion-images

When you start Motion it looks for a configuration file in the current directory, then in ~/.motion, and finally in /etc/motion. Now edit your ~/.motion/motion.conf file. This example includes basic configurations, and the lines relevant to my Foscam IP camera:

# Start in daemon (background) mode and release terminal (default: off)
daemon on
# Output 'normal' pictures when motion is detected (default: on)
output_normal off
# File to store the process ID, also called pid file. (default: not defined)
process_id_file /var/run/motion/ 
# Image width (pixels). Valid range: Camera dependent, default: 352
width 640
# Image height (pixels). Valid range: Camera dependent, default: 288
height 480
# Maximum number of frames to be captured per second.
# Valid range: 2-100. Default: 100 (almost no limit).
framerate 7
# URL to use if you are using a network camera, size will be autodetected (incl http:// ftp:// or file:///)
# Must be a URL that returns single jpeg pictures or a raw mjpeg stream. Default: Not defined
netcam_url http://
# Username and password for network camera (only if required). Default: not defined
# Syntax is user:password
netcam_userpass admin:mypassword
# Target base directory for pictures and films
# Recommended to use absolute path. (Default: current working directory)
target_dir /home/carla/motion-images
# Codec to used by ffmpeg for the video compression.
ffmpeg_video_codec mpeg4

You need to create the directory for storing the PID file, as it says in motion.conf:

$ sudo mkdir /var/run/motion

Now try starting it up again:

$ sudo motion
[0] Processing thread 0 - config file /home/carla/.motion/motion.conf
[0] Motion 3.2.12 Started
[0] Motion going to daemon mode

Good so far, now try localhost:8081 again:

(Figure 3: driveway)

Well look, there is my driveway. Now I will have plenty of warning when visitors come, so I can loose the moat monsters. Run around in front of your camera to trigger motion detection, and when you come back your images directory should have some .avi movies in it. You should also find a simple Motion control panel at localhost:8080.

IP Camera Settings

How to Operate Your Spycams with ZoneMinder on Linux (part 1) goes into some detail on setting up your camera. You must follow the vendor’s instructions for the initial setup, such as assigning a login and password, and setting the IP address. You may have other options as well, such as frame size, motion sensitivity, and color depth or black and white.

Getting the correct netcam_url is sometimes a hassle. For my Foscam I brought up its control panel in Firefox, right-clicked on the image (figure 4), then left-clicked View Image Info. This opens a screen like figure 5, which shows the exact URL of the videostream. In the Chrome browser use “Inspect element.”

(Figure 4: control panel)

(Figure 5: Foscam videostream URL)

Fine-tuning Configuration Values

You can make all kinds of adjustments in your configuration file such as image size, image quality, frame rate, sensitivity to movement, greater sensitivity in selected areas of the frame, file paths, HTTP server settings, and time stamp formats. Motion Guide – Alphabetical Option Reference Manual gives detailed information on each option. Remember to harmonize your Motion settings with the settings in your camera’s control panel, if it has one.

USB Cameras

Any V4L-supported USB Webcam should work with little fuss. The video device will be /dev/video0. Note that /dev/video0 will be present only when a video camera is connected directly to your computer. This is a basic example configuration for my Logitech Webcam:

videodevice /dev/video0
width 640
height 480
framerate 24
output_normal off
ffmpeg_video_codec mpeg4
target_dir /home/carla/motion

And again, remember that settings such as frame rate and size are dependent on what your camera supports.

Daemonizing Motion

Once you have everything working, make Motion run as a daemon by editing /etc/default/motion, and changing start_motion_daemon=no to start_motion_daemon=yes. Now Motion will start automatically when you start your computer, and you can start and stop it like any other daemon.

Controlling Multiple Cameras

Motion manages multiple cameras with ease — all you do is give each camera its own configuration file, named thread1.conf, thread2.conf, and so on. You still need your main motion.conf for common options such as daemon on and filepaths. Then each “thread” file has configurations specific to each camera.
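A minimal sketch of that layout (the camera address, file paths, and port numbers here are hypothetical examples, not values from this setup):

```
# ~/.motion/motion.conf -- shared options
daemon on
target_dir /home/carla/motion-images
thread /home/carla/.motion/thread1.conf
thread /home/carla/.motion/thread2.conf

# ~/.motion/thread1.conf -- the IP camera
netcam_url http://192.168.1.10/videostream.cgi
netcam_userpass admin:mypassword
webcam_port 8081

# ~/.motion/thread2.conf -- the USB webcam
videodevice /dev/video0
webcam_port 8082
```

Each thread gets its own stream port, so you can watch the cameras separately in a browser.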

Krebs on Security: Microsoft, Adobe Push Critical Fixes

This post was syndicated from: Krebs on Security and was written by: BrianKrebs. Original post: at Krebs on Security

If you use Microsoft products or Adobe Flash Player, please take a moment to read this post and update your software. Adobe today issued a critical update that plugs at least three security holes in Flash Player. Separately, Microsoft released six security updates that address 29 vulnerabilities in Windows and Internet Explorer.

Most of the bugs that Microsoft addressed with today’s updates (24 of the 29 flaws) are fixed in a single patch for the company’s Internet Explorer browser. According to Microsoft, one of those 24 flaws (a weakness in the way IE checks Extended Validation SSL certificates) was already publicly disclosed prior to today’s bulletins.

The other critical patch fixes a security problem with the way that Windows handles files meant to be opened and edited by Windows Journal, a note-taking application built in to more recent versions of the operating system (including Windows Vista, 7 and 8).

More details on the rest of the updates that Microsoft released today can be found at Microsoft’s Technet blog, Qualys’s site, and the SANS Internet Storm Center.

Adobe’s Flash Player update brings Flash to version on Windows, Mac and Linux systems. Adobe said it is not aware of exploits in the wild for any of the vulnerabilities fixed in this release.

To see which version of Flash you have installed, check this link. IE10/IE11 on Windows 8.x and Chrome should auto-update their versions of Flash, although my installation of Chrome says it is up-to-date and yet is still running v.

Flash has a built-in auto-updater, but you might wait days or weeks for it to prompt you to update, regardless of its settings. The most recent versions of Flash are available from the Adobe download center, but beware potentially unwanted add-ons, like McAfee Security Scan. To avoid this, uncheck the pre-checked box before downloading, or grab your OS-specific Flash download from here.

Windows users who browse the Web with anything other than Internet Explorer may need to apply this patch twice, once with IE and again using the alternative browser (e.g., Firefox or Opera). If you have Adobe AIR installed (required by some programs like Tweetdeck and Pandora Desktop), you’ll want to update this program. AIR ships with an auto-update function that should prompt users to update when they start an application that requires it; the newest, patched version is v. for Windows, Mac, and Android.


Errata Security: Products endorsed by cybersec experts

This post was syndicated from: Errata Security and was written by: Robert Graham. Original post: at Errata Security

The idea came up in Twitter, so I thought I’d write a quick blog post answering the question: “What products do cybersec experts endorse as being secure?”

The answer, of course, is none. It’s a fallacy, because perfect security is impossible. If you want your computer data to be perfectly secure, then smash your device to pieces, run them through a blender, and drop the bits into volcanic lava.
With that said, we cybersec experts do use stuff. From this you can derive some sort of implicit endorsement. I use Windows, iPhone, and GMail, from which you can assume they are probably “secure enough”.
I use an iPhone because it has excellent security. For all I criticize Apple’s security, the fact is that they have very smart people solving the toughest problems. For example, their most recent operating system will randomize MAC addresses when looking for WiFi in order to avoid disclosing your identity. This is a security problem I’ve blogged about for years, and it’s gratifying that Apple is the first company to tackle this problem.
If you do the right thing, such as locking your iPhone with a complex code, you are likely safe enough. If a thief steals your phone, they will likely not get your private secrets from it.
On the other hand, if you don’t lock your iPhone, then the thief can steal everything from your phone, including things your phone has access to, like your email. That’s the problem with “security endorsements”: I as an expert can’t help if you don’t help yourself. Your biggest threat isn’t the products you use, but you yourself. Your top threats are getting easily tricked by “phishing emails”, drive-by downloads, lack of patches, and reusing the same password across many websites. Choosing a more or less secure product doesn’t matter much in the face of bad decisions you make with those products.
With that said, there are some recommendations I can make. Public wifi, such as at Starbucks or the airport, is very very bad. Among the things I’m known for is demonstrating just how bad this can be (“sidejacking”). The safest thing is not to use it — tether through your phone instead. But, if you have to use it, use a VPN. This encrypts your data to a remote site across the Internet, so that local people near you can’t decrypt it. There are lots of free/cheap VPN providers. Another option is “Tor”, which acts like a VPN, but also anonymizes your identity. These are a little bit technical and hard to use, but can make using public WiFi secure.

We in the security industry know that some things are exceptionally bad. Browser apps using Java and ActiveX, the thing found in most corporate environments, are very bad. Adobe products Flash and PDF are likewise insecure in the browser. These technologies aren’t bad in and of themselves, but only bad when hackers have direct access to them via the web browser. What you want instead is a browser like Chrome using JavaScript applets, HTML5 replacing Flash, and built-in viewers for PDF rather than Adobe’s viewer.

We experts know that the standard way of building web apps on the backend using the “LAMP” stack is inherently insecure. PHP, in particular, is a nightmare. Pasting strings together to form SQL queries is bad. Not whitelisting output characters is bad. If programmers just heeded these last three sentences, they’d stop 99% of the ways hackers break into websites.
Microsoft, Apple, and Google care about cybersecurity. They are really the only companies I can point to that really do care. Their problems stem from the fact that they are also popular, and therefore, the top targets of hackers. Their problems also stem from the fact that security is a tradeoff: caring too much about security makes products unusable.
Tradeoffs are why Android is less secure than iPhone. Apple limits apps to only those they’ve approved, whereas Android allows apps to be downloaded from anywhere. Android’s policy is better: it gives control over the phone to the user rather than the fascist control Apple has over their phones. But the price is additional risk, as users frequently download apps from dodgy websites that “infect” their phone with a “virus”. Thus, if you want a secure phone, choose iPhone, but if you want a phone that you can control yourself, choose Android. Note that Microsoft makes technically excellent phones, but nobody cares, because they don’t have the apps, so I don’t mention them in the comparison :).
I use GMail. Google’s web apps have the best track record of security, being the first to adopt SSL everywhere all the time. There are still problems, of course, but their track record is better than others.
As an operating system, I currently use Win7, Mac OS X, and Ubuntu (using Windows the majority of the time). I use them with full disk encryption. They are all equally secure as far as I’m concerned. I use Microsoft’s Office, on both Windows and Mac, as well as their cloud apps.
Finally, I want to discuss the security community’s historic dislike of Microsoft. It’s not valid. It’s always been a political dislike of Microsoft’s monopolistic control over the desktop, and an elitist preference for things like Linux that aren’t usable by the mainstream. I point this out because I can’t endorse the advice from security experts; their advice is more often going to be political rather than technical.

The Hacker Factor Blog: Not So Bright

This post was syndicated from: The Hacker Factor Blog and was written by: The Hacker Factor Blog. Original post: at The Hacker Factor Blog

There’s a feature that I’ve wanted to enable at FotoForensics for quite a while: brightness control. Some pictures are just too dark, and some analysis algorithms generate dark results that could benefit from a little brightening. (Not ELA, where dark means “low quality”. But other algorithms could use some brightness control.)

The one thing I don’t want to do is use the server to manage minor color alterations. That would just be too high of a load and consume far too much disk space with temporary files. (If I permit increments of 10%, then that’s 11 potential brightness renderings per picture. And there may be dozens of pictures: original, ELA, full-size, scaled, etc.)

Fortunately, FotoForensics requires HTML5. That means JavaScript, CSS filters, and other fun options to offset this work to the client’s browser and not task the server.

Unfortunately, I’m still hitting some problems…

Problem #1: Browser Support

CSS and CSS3 have a huge amount of support. To paraphrase Visa’s slogan: CSS3 is everywhere I want to be. And most CSS3 filters have support on most browsers. Filters like rotation, translation, and skewing are very common.

Unfortunately, the brightness filter lacks widespread support. The web site maintains a great list of HTML and CSS features and their support across most web browsers. For brightness, only Chrome and Opera really have full support. Safari supports it as of 6.0, but if you have an older version of Safari then you’re out of luck. (All of my Macs are too old, so none of them support CSS3 brightness.) And unless you have the most recent mobile device, you probably don’t support it. (People who use Firefox or Internet Explorer are out of luck.)

Since I consider brightness to be a “nice to have” feature and not a mandatory requirement, I’ve built a little JavaScript code to detect if it is supported. If the browser supports it, then it is enabled. But if it isn’t supported, then the user never sees the option.

I have similar conditional JavaScript checks for the rotation and flip buttons. You only see the buttons if your browser supports the feature. But unlike brightness, rotate and flip are almost universally supported.

Problem #2: Brightness Parameters

According to the CSS3 specifications, brightness takes one parameter in one of two formats. You can either specify a percentage or a fraction. Code like el.style['-webkit-filter']="brightness(110%)"; (where el is the image element) will make the picture brighter. The values range from 0% (black) to 200% (white) with 100% being “no adjustment”. When using fractions, the values range from -1 (black) to +1 (white) with 0 being unaltered.

This seems simple enough… except that the implementation seems inconsistent. Older Chrome browsers (those with version numbers in the 20s) only support the numerical values. Newer Chrome browsers only consistently support the percentages. As far as I can tell, brightness is a “work in progress” on Chrome, and different versions support different parameters.

The nice thing about JavaScript is that unsupported styles are never added to the element. I ended up making a complicated JavaScript detector that checks if the browser supports brightness and identifies which parameters to use. It works by adding various styles and checks to see if they were actually added:

var properties = [ '-webkit-filter' ];
var p;
var el;
var BrightnessVal=-100; // undesirable value
var BrightnessMin=100;
var BrightnessMax=100;
var BrightnessStep=10;
var BrightnessChar='';
while((BrightnessVal==-100) && (p = properties.shift()))
  {
  // My properties are currently only -webkit-filter.
  // Check if the browser supports -webkit-filter.
  if (p in document.body.style)
    {
    // create a temporary image element to test with
    el = document.createElement('img');
    // See if it supports the percent parameters
    el.style[p]='brightness(200%)'; // set value
    if (el.style[p]=='brightness(200%)') // check value
      {
      // Supported! Set the range and notations
      // (range values below reconstructed from the 0%-200% range described above)
      BrightnessVal=100; BrightnessMin=0; BrightnessMax=200;
      BrightnessStep=10; BrightnessChar='%';
      }
    else
      {
      // No percent support? Check for fractions!
      el.style[p]='brightness(0.5)';
      if (el.style[p]=='brightness(0.5)')
        {
        // Supports fractions! Set the range and notations
        BrightnessVal=0; BrightnessMin=-1; BrightnessMax=1;
        BrightnessStep=0.1; BrightnessChar='';
        }
      }
    delete el;
    }
  }
// Now see if it is supported...
if (BrightnessVal > -100)
  {
  // it's supported, so add the user controls
  }

With this code, I know if the browser supports the brightness parameter as well as the minimum and maximum values and whether to include the “%” character. This means that I can support Chrome, Opera, and some versions of Safari.

Problem #3: Fractions

JavaScript has made huge strides over the last decade. It used to be a huge security risk and really slowed down the computer. But with better compiler designs and implementation decisions, it really isn’t the same huge security risk that it previously was. And as far as speed goes: wow — I can do most tasks in real time and with minimal computer resources.

So it really makes me shake my head when I hit a really bad, fundamental problem with JavaScript. JavaScript sucks at fractions.

I created two buttons. One increases the brightness by 10%, and the other decreases it by 10%. My basic test is to click on each button 3 times. When using the integer range (0% – 200%): 100% + 10% + 10% + 10% – 10% – 10% – 10% = 100%. There is no floating point error. This works fine.

However, I hit a problem when I use the fractional range (-1 to +1): 0 + 0.1 + 0.1 + 0.1 – 0.1 – 0.1 – 0.1 = 0.0000000000004. Different browsers generate different values for the floating point error. But seriously, why am I seeing any floating point error when I’m adding and subtracting tenths??? At this point, I have two options. I can either do everything as integers (10 times all values and just divide by 10 before use), or I can call Math.round(val*10)/10 to strip the error out after each addition and subtraction. (I went with the latter option since it mitigates long-term error accumulation.)
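The effect, and the rounding mitigation, are easy to reproduce outside the browser (shown here in Python, which uses the same IEEE-754 doubles as JavaScript):

```python
# Accumulate +0.1 three times, then -0.1 three times: the result
# should be zero, but binary doubles cannot represent 0.1 exactly.
val = 0.0
for step in (0.1, 0.1, 0.1, -0.1, -0.1, -0.1):
    val += step
print(val)        # a tiny nonzero residue, not 0.0

# Mitigation: snap to one decimal place after every step.
val = 0.0
for step in (0.1, 0.1, 0.1, -0.1, -0.1, -0.1):
    val = round((val + step) * 10) / 10
print(val)        # exactly 0.0
```

Rounding after each operation keeps the error from ever accumulating past half a step.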

Back in college, I collected buttons. I have a button that says “2+2=5 for sufficiently large quantities of 2”. I fear that this is really the case with JavaScript.

Update: Stuart and Justin left great comments about why this happens. I still think JavaScript could take steps to mitigate the issue, but at least it is understandable.

Problem #4: Applying Brightness

In computer graphics, there are two common ways to brighten up an image. The first way scales the RGB values. For example, I can multiply every value by 1.1 and achieve a 10% increase in brightness. The value “1” becomes “1.1” which rounds to 1, 2 becomes 2.2 which is 2, 3 becomes 3, but 5 becomes 5.5 which rounds to 6. 7 becomes 8, 8 becomes 9, … and 200 becomes 220. The bigger the number is, the more the number moves.

Think of this like elementary school gym class. Everyone stands in a line and then the teacher says to stand one arm’s length apart. People at the front of the line barely move. People at the end of the line have to walk a long distance in order to remain one arm’s length apart. (This always sucked for the kids whose last names began with Z. And the kids with names that begin with “A” always wondered why this simple “spread out” task took so long…)

This scaling approach emphasizes differences in the darker regions. By moving darker colors more, minor gradient differences among the darker colors are scaled larger. Brightening up the picture permits you to see details within the darkness more clearly.

The other common approach is to convert the colors to HSV, YUV, or some other colorspace where the intensity (V or Y) is separate from the hue. The intensity is adjusted linearly and then the colors are converted back to RGB. This approach typically brightens the image but mutes details a little since differences that are only in the chrominance (the color and not the intensity) are unchanged.

However… there is a method for brightening images that I have only seen used on the web — real graphics applications never use this approach and I don’t think it is taught in any graphics courses. It’s a fake brightening algorithm because it doesn’t “brighten” so much as “wash out” the image. Here’s how the algorithm works: apply a completely white image as a transparency over the image. The amount of transparency controls the brightness. That’s right: it combines the picture with “white”.

Using this pseudo-brightness approach, big numbers barely change while little numbers change a lot. For example, at 10% brighter, the value 200 gets combined with a white (255) transparency: 10%×255 + 90%×200 = 205.5 and rounds to 206, so it moves 6 values in intensity. The value 2 becomes 10%×255 + 90%×2 = 27.3 and rounds to 27, so it moves 25 values in intensity. This makes dark values look lighter, but it removes details because the gradients between adjacent dark values are reduced. The result is a “brighter” picture but with less detail. If the purpose of brightening the image is to see minor differences in the dark regions, then this brightness function won’t help you.
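The contrast between real scaling and this wash-out blend can be checked numerically (a sketch; the function names are mine, and the values follow the examples above):

```python
def scale_brighten(v, pct=10):
    """Real brightening: multiply the channel value, clamp to 255."""
    return min(255, round(v * (1 + pct / 100)))

def blend_brighten(v, pct=10):
    """Web pseudo-brightness: alpha-blend the channel toward white (255)."""
    a = pct / 100
    return round(a * 255 + (1 - a) * v)

for v in (2, 200):
    print(v, scale_brighten(v), blend_brighten(v))
# Bright values move most under scaling (200 -> 220) but least under
# blending (200 -> 206); dark values barely move under scaling (2 -> 2)
# yet jump under blending (2 -> 27), washing out the dark-region detail.
```

This is why the blend approach mutes exactly the gradients that forensic brightening is supposed to reveal.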

Other Options

At this point, we have a “brightness” function that is not widely supported, has inconsistent parameters, and is poorly implemented. As a result, I have no desire to release this as an option at FotoForensics. It is rarely supported and doesn’t do the job when it is supported.

I’ve seen a couple of forums where people have tried to do workarounds. The most common suggestion is to use a white background and to adjust the image’s transparency. Every HTML5 browser seems to support transparencies, so this is a functionally applicable option. However, this is the same pseudo-brightness algorithm that mutes details rather than actually brightening the image. This is not a practical solution.

A better option is to use an HTML5 canvas object. Canvas is widely supported and gives me (the developer) full control over every pixel. I can easily implement the scaling RGB function. However, this introduces another problem… Some pictures can be very large, and not every browser implementation supports HTML5 with OpenGL functions. (OpenGL provides speed to graphical rendering.) As a result, increasing the brightness may be very slow. I have a few sample pictures where each click on the “increase brightness by 10%” button takes 2-3 seconds. (Longer if you’re on a mobile device.) This speed issue hinders usability, so I’ve ruled it out as an option.

(JavaScript libraries like Raphaël and jQuery are dependent on the canvas object, so they have the exact same speed limitations.)

I’m still looking for alternative methods to implement a real ‘brightness’ function in JavaScript. However, it looks like I will have to wait for JavaScript to grow up a little more. Right now, I’m out of bright ideas.

Krebs on Security: Adobe, Microsoft Push Critical Security Fixes

This post was syndicated from: Krebs on Security and was written by: BrianKrebs. Original post: at Krebs on Security

Adobe and Microsoft today each released updates to fix critical security vulnerabilities in their software. Adobe issued patches for Flash Player and AIR, while Microsoft’s Patch Tuesday batch includes seven update bundles to address a whopping 66 distinct security holes in Windows and related products.

The vast majority of the vulnerabilities addressed by Microsoft today are in Internet Explorer, the default browser on Windows machines. A single patch for IE this month (MS14-035) shores up at least 59 separate security issues scattered across virtually every supported version of IE. Other patches fix flaws in Microsoft Word, as well as other components of the Windows operating system itself.

Most of the vulnerabilities Microsoft fixed today earned the company’s “critical” rating, meaning malware or bad guys could exploit the flaws to seize control over vulnerable systems without any help from users, save perhaps for having the Windows or IE user visit a hacked or booby-trapped Web site. For more details on the individual patches, see this roundup at the Microsoft Technet blog.

Adobe’s update for Flash Player fixes at least a half-dozen bugs in the widely-used browser plugin. The Flash update brings the media player to v. on Windows and Mac systems, and v. for Linux users. To see which version of Flash you have installed, check this link.

IE10/IE11 and Chrome should auto-update their versions of Flash. If your version of Flash on Chrome (on Windows, Mac, or Linux) is not yet updated, you may just need to close and restart the browser. Chrome version 35.0.1916.153 includes this Flash update; to see which version of Chrome you’re running, click the 3-bars icon to the right of the address bar and select “About Google Chrome.”

The most recent versions of Flash are available from the Adobe download center, but beware of potentially unwanted add-ons like McAfee Security Scan. To avoid these, uncheck the pre-checked box before downloading, or grab your OS-specific Flash download from here.

Windows users who browse the Web with anything other than Internet Explorer will need to apply this patch twice, once with IE and again using the alternative browser (e.g., Firefox or Opera). If you have Adobe AIR installed (required by some programs like Tweetdeck and Pandora Desktop), you’ll want to update that program as well. AIR ships with an auto-update function that should prompt users to update when they start an application that requires it; the newest, patched version is v. for Windows, Mac, and Android.

Making end-to-end encryption easier to use (Google Online Security Blog)

This post was syndicated from: and was written by: jake. Original post: at

The Google Online Security Blog has announced the alpha release of an OpenPGP-compliant end-to-end encryption extension for the Chrome/Chromium browser.
“While end-to-end encryption tools like PGP and GnuPG have been around for a long time, they require a great deal of technical know-how and manual effort to use. To help make this kind of encryption a bit easier, we’re releasing code for a new Chrome extension that uses OpenPGP, an open standard supported by many existing encryption tools.

However, you won’t find the End-to-End extension in the Chrome Web Store quite yet; we’re just sharing the code today so that the community can test and evaluate it, helping us make sure that it’s as secure as it needs to be before people start relying on it. (And we mean it: our Vulnerability Reward Program offers financial awards for finding security bugs in Google code, including End-to-End.)”

Клошкодил: 2014-06-01 My talk from Websummit 2014, “Features that all web services should have”

This post was syndicated from: Клошкодил and was written by: Vasil Kolev. Original post: at Клошкодил


[slide 2 The idea behind this talk]

The idea for this talk came to me out of several conversations with different people and from our experience building pCloud. Very often we would look at how the competition had implemented something and be horrified at first.
(and then be glad that these are the people we’re up against)

For all the things I will talk about there are easy solutions: simple as ideas and relatively easy to implement, and doing them should lead to a better world.

History is also full of problems that have happened before and that everyone has forgotten about, or has simply refused to pay attention to.

Nothing I am going to say is particularly new, brilliant, or any kind of great discovery. All of it can be found with a bit of reading on google, wikipedia, and the occasional book. I apologize in advance that we have not provided pillows for the people who will get sleepy from the boring things I will be telling.
(if you are reading this as text, I have nothing against you using it as a sleep aid)

[slide 3 Why we are the ones talking about this]

We like well-written things. We also try to be as open as possible about our API, code, and so on, which is why we insist on building things we are not ashamed of, or are even proud of.

I am also quite aware that this talk sounds like endless bragging. At least half of it is :) So I want to note that we have made many of these mistakes ourselves, in this or in a previous project.

[slide 4 Who we are]

A minute of advertising: we are a cloud storage service, done right. We are a startup, we have been around for about a year, we have about 600k users and 600TB of used space, and we do all kinds of interesting things which, however, would take me a three-hour slot to enumerate :)

We try to be as open as possible: we have complete documentation of our API, as well as code that talks to it. For the applications we write, we use the same API ourselves, i.e. we hide nothing from developers :)

[slide 5 Authentication]

Almost all services need to store their users’ passwords in some form and let users log into the system to get access to some additional functionality. One would think this is all sorted out and clarified and there should be no problems…

[slide 6 The standard auth method]

In principle, in most cases logging into the system is implemented by sending a user and a password and receiving a response/token to work with afterwards.

This is a bad idea, because your password travels over the network in some form, and you cannot be sure that nobody is eavesdropping or pretending to be the remote server.

[slide 7 The right method]

The right method, which uses a simple cryptographic primitive, is called challenge-response. There is an implementation of it in the HTTP protocol itself, so-called digest authentication, as well as in many other systems (SIP, for example). On the web it is rarely seen, because of the extra request it needs (or because developers don’t know about it).

It works as follows:
The server generates a nonce: some value that is random and will never be repeated. You compute a cryptographic hash over it and the password and send the result to the server. The server can perform the same computation and confirm that you know the same password.

This way no information about your password itself crosses the network, so even if someone impersonates the server, there is no way for them to learn it.

The method is slower than the other one, since it requires one more request (to fetch the nonce you will work with), i.e. it adds one more round trip. That is not a problem, because this shouldn’t be a particularly fast process anyway: the faster it is, the easier it can be used for brute-force attacks.

With a small extension, this scheme also lets the client confirm that the server knows the password (i.e. you get mutual authentication), which I’ll leave as homework :) We don’t implement that yet, but it’s in the plans for the bright future.
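The exchange described above can be sketched in a few lines. A minimal sketch only (the helper names are mine; a real deployment would add HMAC, nonce expiry, and server-side storage of hashed rather than raw passwords):

```python
import hashlib
import hmac
import os


def new_nonce():
    """Server side: a random value that is never reused."""
    return os.urandom(16).hex()


def respond(nonce, password):
    """Client side: hash the nonce together with the password."""
    return hashlib.sha256((nonce + password).encode()).hexdigest()


def verify(nonce, password, answer):
    """Server side: redo the same computation; compare in constant time."""
    return hmac.compare_digest(respond(nonce, password), answer)


nonce = new_nonce()                  # server -> client
answer = respond(nonce, "hunter2")   # client -> server
assert verify(nonce, "hunter2", answer)
assert not verify(nonce, "wrong", answer)
```

The password itself never appears on the wire; only the nonce and the digest do, and the digest is useless once the nonce expires.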

[slide 9 Plaintext in the database]

Storing passwords turns out to be a problem that too few people manage to solve.

The way I see it, it’s 2014, and yet it still happens that some serious service stores its passwords in plaintext. At certain moments this is convenient: the lost-password functionality can send you your password directly by mail (the security of which I will not discuss).

On the other hand, at least a few services get hacked every year and their passwords fall into the hands of bad people (or onto the whole internet). This is quite unpleasant for the users, because despite the advice of experts and other misguided folk, they still use the same password in many places, which leads to further breaches and similarly unpleasant problems.
(a very recent example: a week or two or three ago, eBay’s passwords were siphoned off)

[slide 10 First easy (and wrong) option]

The first solution to the problem, which is easy and wrong, is to simply store some cryptographic hash of the password: md5 (a very-bad-idea) or sha1 (a not-so-bad idea). This is trivially attacked with rainbow tables.

In short, rainbow tables are precomputed tables of (almost) all possible passwords, so that for a given hash you can easily find some valid string in them that matches it. They also have a method for storing less data; in general the md5 tables are under 2TB, and for sha1 there are tables with most passwords in about the same size.

In short: a bad idea.

[slide 11 Second easy (and correct) option]

The second solution (and the first one that works) has been known to the world since at least 1970, from password hashing in the old unix systems; in most languages even the name of the function, crypt(), comes from there (even though it doesn’t encrypt anything).

The idea is simple: the generated hash is over some salt and the password. The salt is chosen to be different for every password and is placed in front of the resulting hash. That way the hash is easy to compute, but a rainbow table cannot be generated for it (or rather, if one were generated, it would also have to cover all the possible salts).
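The salt-in-front scheme can be sketched like this (sha256 for brevity; as the “Further reading” slide argues, a deliberately slow function is better in practice, and a real check would use a constant-time comparison):

```python
import hashlib
import os


def hash_password(password, salt=None):
    """Store a per-password random salt in front of the hash."""
    if salt is None:
        salt = os.urandom(8).hex()
    digest = hashlib.sha256((salt + password).encode()).hexdigest()
    return salt + "$" + digest


def check_password(password, stored):
    """Recover the salt from the stored value and redo the hash."""
    salt, _ = stored.split("$", 1)
    return hash_password(password, salt) == stored


stored = hash_password("hunter2")
assert check_password("hunter2", stored)
assert not check_password("hunter3", stored)
# The same password hashed twice yields different records,
# so a single precomputed rainbow table no longer helps:
assert hash_password("hunter2") != hash_password("hunter2")
```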

[slide 12 Third not-so-easy (but correct) option]

The third option that can be used (and the one we have implemented) is to store a “partial” hash of the password (more precisely, the full state of the hash function) and some salt, so that challenge-response can also be implemented. This took us a bit more time to build, but this way we do not store passwords in plaintext and we can do challenge-response, guaranteeing maximum security for our users.

It is very close to the length-extension attack on hashes.

(A correction after the talk itself: strictly speaking, challenge-response can be implemented even with passwords hashed the previous way)

[slide 13 Fourth (great and hard to do) option]

And the last method, which gives the greatest security and which I have not seen implemented anywhere except in one talk. Its idea is roughly that of the password-authenticated key agreement protocols, most of which were patented until recently and are generally not seen very often.

The password is used to seed a pseudo-random generator, from which an RSA key pair (private and public) is then generated. The server is given only the public key, and the client does not need to store anything, because it can generate the same pair from the password whenever it wants. Authentication happens via the standard RSA schemes, guards against all the attacks above, and gives one more level of security: if someone gets hold of the server’s database, they will not even be able to log in as you with that information, i.e. the users will not even need to change their passwords.

Unfortunately I have not seen an implementation of this scheme anywhere. I saw its description in a talk by Dan Kaminsky a few years ago.

[slide 14 Further reading]

And since the topic is quite broad, you can look up this standard, the password-based key derivation function, and also check out the competition for new password-storage algorithms; both may be useful to you.

PBKDF2 in particular demonstrates a rather important property of all these schemes. It is important that checking a password not be the fastest possible operation, but take a small yet sufficient amount of time, so that brute-force attacks become harder.
PBKDF itself is a scheme for applying a hash function many times (1000, 10000, or so) over the input data and a salt, producing a set of bits to be used either as the stored password or as a key for something else. Because of the large number of operations, which cannot be shortcut, you can easily pick parameters that cost you something like 10ms per computation, which makes it impossible to run more than 100 guesses per second per core.
By the way, an example of such a scheme in use is WPA in wi-fi networks: there the communication key is generated with pbkdf2, 4096 iterations of sha1, with the essid (the name) of the network as the salt. Anyone who has tried to crack such passwords knows it goes much slower than cracking plain hashes.
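Both uses are one call in Python’s standard library. A sketch (the password, salt, and iteration count for the first call are illustrative; the second call shows the WPA2-PSK parameters: PBKDF2-HMAC-SHA1, 4096 iterations, the network name as salt, 32 bytes of output):

```python
import hashlib

# Password storage: the iteration count is the tunable cost knob;
# raise it until one check takes ~10ms on your hardware.
stored = hashlib.pbkdf2_hmac("sha256", b"hunter2", b"per-user-salt", 100_000).hex()

# WPA2-PSK pairwise master key derivation uses the same construction:
pmk = hashlib.pbkdf2_hmac("sha1", b"wifi passphrase", b"MyHomeNetwork", 4096, 32)
assert len(pmk) == 32
```

The same passphrase on a network with a different essid yields a different key, which is why precomputed WPA tables only work per network name.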

[slide 15 Communication]

The “blunt axe” expression comes from a sentence by Dijkstra: “A pencil cannot be sharpened with a blunt axe, and the same goes for 10 blunt axes.” A similar expression that is perhaps more fitting (and which came to me during the talk) is the “Manchester screwdriver”, which is actually a hammer: the perfect tool for solving any problem.

[slide 16 Standard things]

In general, everything alive uses JSON and HTTP. They are slow protocols, text-based, with a lot of overhead and not-all-that-convenient to parse (at least there are libraries for them in almost every language).

Not to mention that HTTP was designed for quite different things than what it is used for today.

[slide 17 An alternative is needed]

It is good to have alternatives. Not all of our clients are browsers, but all of them are forced to use interfaces written for browsers, which is quite unpleasant and at times very inefficient (but since people are used to it, they no longer notice). The best example is what mobile applications use: HTTP-based APIs everywhere, even though these are supposedly devices that should save as much battery and CPU as possible.

Even inside browsers this is no longer the only suitable and efficient way: websockets and webrtc are gaining ground right now, for exactly this reason.

(I got a question from the audience afterwards about why I am not very optimistic about webrtc; the person (Лъчко, actually) was hoping it could replace skype. What skype really manages to do is work across all kinds of networks and NATs, which is hard for the rest of telephony and which webrtc doesn’t handle as well. Webrtc hasn’t yet gotten around to seriously fighting such problems, and it will probably take quite a while before it starts coping with them.)

[slide 18 An example binary protocol]

For this reason we have added the extra option of talking to our interfaces over a simple binary API.
And when I say simple, it really is: its description fits in a few pages, and the implementation (which we have published on github) is 700 lines of C or 536 lines of java, comments included. It has far less overhead and is processed very easily on any platform. It also contains a few tricks; for example, if a given string occurs more than once in a packet, its contents are stored in only one place and the other occurrences are references.

Of course, there are other options: in a previous project we used protobuf (which turned out to be quite convenient), and there are msgpack and who knows how many more open projects with public implementations for most platforms.
(we used protobuf for another project, and even I, who normally don’t write code, managed quite easily to parse my data with it)
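The string-reference trick can be illustrated with a toy encoder (this is not pCloud’s actual wire format; the tuple representation stands in for real byte-level framing): the first occurrence of a string is written out, and every repeat becomes a back-reference by index.

```python
def pack_strings(strings):
    """Toy encoding: first occurrence emits ('S', text), repeats emit ('R', index)."""
    table, out = {}, []
    for s in strings:
        if s in table:
            out.append(("R", table[s]))        # repeated string: reference only
        else:
            table[s] = len(table)              # remember its slot
            out.append(("S", s))               # first occurrence: full contents
    return out


def unpack_strings(packed):
    """Rebuild the original list, resolving references against the table."""
    table, out = [], []
    for kind, value in packed:
        if kind == "S":
            table.append(value)
            out.append(value)
        else:
            out.append(table[value])
    return out


fields = ["name", "size", "name", "name"]
packed = pack_strings(fields)
assert packed == [("S", "name"), ("S", "size"), ("R", 0), ("R", 0)]
assert unpack_strings(packed) == fields
```

In a packet full of repeated field names, each repeat costs one small integer instead of the full string.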

[slide 19 QUIC and the way forward]

And let’s go further down the stack: one of the problems with the existing protocols is that establishing a connection is slow. It takes 3 packets (and 1 RTT) to establish a TCP connection between two points, and when we add SSL/TLS, we add another 2-3 RTTs depending on some protocol settings.

One of the existing solutions is a Google project called QUIC, Quick UDP Internet Connections: a protocol that replaces the TCP+TLS combination. It has all the useful parts of TCP, such as the right algorithms for rate adaptation and congestion avoidance, but encryption is on by default, and establishing a connection, key negotiation included, happens within a single rtt.
It also has some interesting features, such as multiplexing: handling several requests within one connection (as in SCTP). Another is forward error correction, the ability to repair corrupted or lost packets using data already sent.

We are in the process of evaluating the protocol, and if we like it enough, we intend to use it. It is currently supported in chrome and opera; if it becomes widespread enough, it will also solve the firewall problem. Google’s implementation is released under a BSD license and is a relatively small amount of C++ code, so it should soon appear elsewhere too.

All in all, the description is worth reading: the protocol is an example of what the next generation of internet protocols should look like.

[slide 20 Using SSL/TLS]

Unless you have been living under a rock for the past year and a half, you are probably aware of why it is important to encrypt connections and to use SSL/TLS in general.
(if you have been, ask the search engines about Edward Snowden)

Most services now support TLS; it took years and a lot of lobbying/whining from the community, but it is common now. It is not always implemented correctly, though…

[slide 21 Using SSL/TLS correctly]

The first issue is that in too many cases no attention is paid to the ciphers being used, so the security of the connection to some sites is almost the same as having no cryptography at all. An example: about a year ago Android changed its list of supported ciphers/algorithms and their priorities, because “that’s how it is in the java standard” (their previous list was taken from openssl). Java 6 is a standard roughly 10 years old, and the newer one has this fixed…

Another important thing is so-called forward secrecy: the ability of the two parties, once the connection is established, to exchange keys via DH or another algorithm that are valid only for this session and are forgotten after it. This ensures that even if the server’s main keys leak, previously recorded traffic cannot be decrypted and will remain secret.

For the interested people who run servers: on this site you can check your support for the appropriate algorithms and ciphers; the result may be interesting.

And of course, SSL/TLS is not a panacea and has problems of its own (such as heartbleed recently).

[slide 22 SSL/TLS session caching]

For services that have more than one physical server terminating encrypted sessions (for the same domain), there is another useful trick: implementing a global SSL session cache that keeps certain parameters so they are not renegotiated every time. This saves one RTT per request and is definitely noticeable.

You may need some backend storage for this (memcache or redis) in which all your servers keep these sessions, but it is not particularly complicated to set up and can have a visible effect for your users.

[slide 23 Privacy]

Many services make the mistake of revealing information about their users that really should not leave the service. For example, through functionality like lost password or invites you can often find out whether a given user is registered with the service.
This is unpleasant because it facilitates all kinds of attacks on your users, especially if your business is not oriented around spreading and selling your users’ information (like almost all social networks).
(correction from the audience at the talk: like all social networks)

The example of revealing whether a given file exists (which btw is a problem that comes mostly from deduplication in most such services) is how with one particular service you could tell it “I’m going to upload a file with this MD5” and it would reply “ah, I already have that one, here you go”. For a while this was seriously used for sharing files without anyone seeing what was actually going on…

[slide 24 Not all clients are equal]

In general, I sometimes get the feeling that many interfaces are designed by someone whose girlfriend dumped him for a developer, and he is now determined to get his own back. I picture a man sitting in a dark little room, writing a specification and thinking “and the same to you, buddy”…

There is no one right interface, no one right way, no one ring to rule them all, and so on. No matter how often certain people with no imagination repeat it, it will not become true :) There is a need for interfaces that are convenient for different people, in different ways, from different backgrounds and with different ways of thinking.

[slide 25 Passing parameters]

For example, passing parameters. With us, a parameter can be passed wherever is convenient for you: as GET, as POST, in the URL of the POST, or in a cookie. We give you the opportunity to shoot yourself in the foot, but on the other hand we give you a way to use us from the simplest clients in this world, the ones that don’t have POST.

[slide 26 Processing requests on the fly]

Something more in the category of “I have a blunt axe and I want to sharpen pencils with it”: an awful lot of services process a request only after they have received all of it. If you upload a file that then has to go to other machines, there is an obvious latency caused by the processing that follows.

This is mostly due to the use of HTTP and badly-written servers, and in general it does not have to be this way: most content can be processed in real time while it is being uploaded: copied to other places, checksummed, and who knows what else. There is no need to make users wait just because we were too lazy to make a few simple improvements.

[slide 27 blatant self-promotion]

The remaining part of the presentation is nice things that are specific to cloud storage services, or “why we are better than the rest”. We are not the only ones doing these things, but we seem to be the only ones doing all of them.

[slide 28 Thumbnail combining]

Something that again comes from the blunt axe of HTTP/AJAX and company: if you have a gallery of thumbnails to show, there is no point in fetching them one by one. It is much more efficient to combine them into one big file and display them as they arrive.

The concrete methods for this are quite ugly (the thumbs listed one per line in base64, decoded afterwards), but the difference in speed is obvious.
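The one-thumbnail-per-line base64 trick can be sketched as follows (helper names are mine; on the real client the decoding would happen incrementally in JavaScript as each line of the response arrives):

```python
import base64


def combine_thumbnails(thumbs):
    """Server side: one response body, one base64-encoded thumbnail per line."""
    return b"\n".join(base64.b64encode(t) for t in thumbs)


def split_thumbnails(payload):
    """Client side: every complete line can be decoded and displayed immediately,
    without waiting for the rest of the response."""
    return [base64.b64decode(line) for line in payload.split(b"\n") if line]


thumbs = [b"\x89PNG-first", b"\x89PNG-second", b"\x89PNG-third"]
assert split_thumbnails(combine_thumbnails(thumbs)) == thumbs
```

One request replaces N requests, and since base64 never contains a newline, the separator is unambiguous.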

[slide 29 On-the-fly zip]

Something that comes mostly from developer laziness: you don’t need to generate a zip file first and only then hand it to the user; you can stream it to them directly, because zip is such a convenient format (apparently it comes from the times when it had to be writable to tape). It is not the only such format, but it is an example of how some things can happen instantly instead of being waited for.
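A sketch of the idea using Python’s zipfile, which can write to a non-seekable sink (it then streams with data descriptors instead of seeking back to patch headers); each chunk could be sent to the client the moment it is produced. The sink class is mine:

```python
import io
import zipfile


class ChunkCollector:
    """A write-only, non-seekable sink: each write() becomes a chunk to send."""
    def __init__(self):
        self.chunks = []

    def write(self, data):
        self.chunks.append(bytes(data))
        return len(data)

    def flush(self):
        pass


sink = ChunkCollector()
# zipfile notices the sink has no tell()/seek() and streams the archive,
# so each entry can go out to the client as soon as it is compressed.
with zipfile.ZipFile(sink, "w", zipfile.ZIP_DEFLATED) as zf:
    zf.writestr("a.txt", b"first file")
    zf.writestr("b.txt", b"second file")

# The concatenated chunks form a valid archive:
archive = zipfile.ZipFile(io.BytesIO(b"".join(sink.chunks)))
assert archive.namelist() == ["a.txt", "b.txt"]
```

The central directory goes at the very end of the file, which is exactly why the format streams so well.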

[slide 30 An rsync-like protocol]

Rsync has existed as an idea since the times when I was a junior system administrator, but for some reason the idea does not appear often enough in most services.

The idea is very simple: if on one side you have part of a given file, or an earlier version of it, it is much cheaper to copy only the differences rather than the whole file. We support it in both directions, for uploading as well as downloading files, and the idea itself is not hard to implement, even in web applications. We do not use the rsync protocol directly, because it does not fit well into our interface, but we are fairly close, and for some of the hashes we use a different function, because it turns out to be more convenient.

(we have not uploaded the documentation for the method yet, because we ran out of time; I promise we will upload it very soon. We will also publish code that implements it.)
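The core idea can be shown with a toy fixed-block version (all names are mine, and this is not pCloud’s protocol; real rsync adds a rolling checksum so matching blocks can be found at any byte offset, not just block boundaries):

```python
import hashlib

BLOCK = 8  # real implementations use kilobyte-sized blocks


def signatures(data):
    """Receiver side: one hash per fixed-size block of the file it already has."""
    return [hashlib.md5(data[i:i + BLOCK]).digest()
            for i in range(0, len(data), BLOCK)]


def make_delta(new, old_sigs):
    """Sender side: reference blocks the receiver already has, send the rest."""
    known = {sig: i for i, sig in enumerate(old_sigs)}
    delta = []
    for i in range(0, len(new), BLOCK):
        block = new[i:i + BLOCK]
        sig = hashlib.md5(block).digest()
        delta.append(("copy", known[sig]) if sig in known else ("data", block))
    return delta


def apply_delta(old, delta):
    """Receiver side: rebuild the new file from local blocks plus literals."""
    out = b""
    for op, arg in delta:
        out += old[arg * BLOCK:(arg + 1) * BLOCK] if op == "copy" else arg
    return out


old = b"0123456789abcdefghijklmnop"
new = b"0123456789XXXXXXghijklmnop"
delta = make_delta(new, signatures(old))
assert apply_delta(old, delta) == new
# Only the one changed block travels as literal data:
assert sum(1 for op, _ in delta if op == "data") == 1
```

The exchange is signatures one way, delta the other, so a small edit to a large file costs roughly one block of traffic instead of the whole file.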

[slide 31 Video with adaptive bit rate]

A nice feature that many video services could offer: adapting the video bitrate to your connection (i.e. not sending more than you can receive), which makes it possible to watch videos even on a rather unpleasant connection.

[slide 32 File operations]

And finally, something that is more interesting than important: an interface for small changes to files, editing in place, is quite convenient and helps carry various applications and their operations directly over to the storage service.

[slide 33 Other cool things for the future]

I wanted to mention other things here as well, but I did not have the time to prepare them.

Here is one plan for the bright future (within the year): to implement end-to-end encryption of our users’ files, i.e. to give them functionality with which they encrypt everything on their own side so that we cannot read it, and so that they can be reasonably sure we are not substituting or mangling their files. Everyone should be doing this: realistically, there is no reason for things that are private to the user to be knowable by the operator of the service, and there is quite a bit of work on the subject.
For example, there are even a few papers on the topic of encrypted databases: the database sits encrypted on a server (which does not have the keys), you send an encrypted query, the server does some computations and gives you an answer derived from your data, also encrypted, that only you can understand. Things are still at an early stage, but they are a good idea for the future.

[slide 34 We are open to other ideas]

I am sure there are things I have missed in this talk and that we have not thought of putting into our service. We accept all kinds of ideas and criticism, and this talk can always be extended (into a one-semester course, for example).

We accept all corrections and ideas; in principle we have a backlog of about 3-4 years of work, but that has not managed to scare us yet :)

Errata Security: Can I drop a pacemaker 0day?

This post was syndicated from: Errata Security and was written by: Robert Graham. Original post: at Errata Security

Can I drop a pacemaker 0day at DefCon that is capable of killing people?

Computers now run our cars. It’s now possible for a hacker to infect your car with a “virus” that can slam on the brakes in the middle of the freeway. Computers now run medical devices like pacemakers and insulin pumps; it’s now becoming possible to assassinate somebody by stopping their pacemaker with a bluetooth exploit.

The problem is that manufacturers are 20 years behind in terms of computer “security”. They don’t just have vulnerabilities, they have obvious vulnerabilities. That means not only can these devices be hacked, they can easily be hacked by teenagers. Vendors do something like put a secret backdoor password in a device believing nobody is smart enough to find it; then a kid finds it in under a minute using a simple program like “strings”.
Telling vendors about the problem rarely helps because vendors don’t care. If they cared at all, they wouldn’t have been putting the vulnerabilities in their product to begin with. 30% of such products have easily discovered backdoors, which is something they should already care about, so telling them you’ve discovered they are one of the 30% won’t help.
Historically, we’ve dealt with vendor unresponsiveness through the process of “full disclosure”. If a vendor was unresponsive after we gave them a chance to first fix the bug, we simply published the bug (“drop 0day”), either on a mailing list, or during a talk at a hacker convention like DefCon. Only after full disclosure does the company take the problem seriously and fix it.
This process has worked well. If we look at the evolution of products from Windows to Chrome, the threat of 0day has caused them to vastly improve their products. Moreover, now they court 0day: Google pays you a bounty for Chrome 0day, with no strings attached on how you might also maliciously use it.
So let’s say I’ve found a pacemaker with an obvious BlueTooth backdoor that allows me to kill a person, and a year after notifying the vendor, they still ignore the problem, continuing to ship vulnerable pacemakers to customers. What should I do? If I do nothing, more and more such pacemakers will ship, endangering more lives. If I disclose the bug, then hackers may use it to kill some people.
The problem is that dropping a pacemaker 0day is so horrific that most people would readily agree it should be outlawed. But, at the same time, without the threat of 0day, vendors will ignore the problem.
This is the question for groups that defend “coder’s rights”, like the EFF. Will they really defend coders in this hypothetical scenario, declaring that releasing 0day code is free speech that reveals problems of public concern? Or will they agree that such code should be suppressed in the name of public safety?
I ask this question because right now they are avoiding the issue, because whichever stance they take will anger a lot of people. This paper from the EFF on the issue seems to support disclosing 0days, but only in the abstract, not in the concrete scenario I describe. The EFF has a history of backing away from previous principles when they become unpopular. For example, they once fought against regulating the Internet as a public utility; now they fight for it in the name of net neutrality. Another example is selling 0days to the government, which the EFF criticizes. I doubt the EFF will continue to support disclosing 0days when they can kill people. The first time a child dies in a car crash caused by a hacker, every organization is going to run from “coder’s rights”.
By the way, it should be clear in the above post on which side of this question I stand: for coder’s rights.

Update: Here’s another scenario. In Twitter discussions, people have said that the remedy for unresponsive vendors is to contact an organization like ICS-CERT, the DHS organization responsible for “control systems”. That doesn’t work, because ICS-CERT is itself a political, unresponsive organization.

The ICS-CERT doesn’t label “default passwords” as a “vulnerability”, despite the fact that it’s a leading cause of hacks, and a common feature of exploit kits. They claim that it’s the user’s responsibility to change the password, and not the fault of the vendor if they don’t.

Yet, disclosing default passwords is one of the things that vendors try to suppress. When a researcher reveals a default password in a control system, and a hacker exploits it to cause a power outage, it’s the researcher who will get blamed for revealing information that was not-a-vulnerability.

I say this because I was personally threatened by the FBI to suppress something that was not-a-vulnerability, yet which they claimed would hurt national security if I revealed it to Chinese hackers.

Again, the only thing that causes change is full disclosure. Everything else allows politics to suppress information vital to public safety.

Update: Some have suggested that moral and legal are two different arguments: that someone can call full disclosure immoral without necessarily arguing that it should be illegal.

That’s not true. That’s like saying that speech is immoral when Nazis do it. It isn’t: the content may be vile, but the act of speaking is never immoral.

The “immoral but legal” argument is too subtle for politics; you really have to pick one or the other. We saw that happen with the EFF. They originally championed the idea that the Internet should not be regulated. Then they championed the idea of net neutrality, which is Internet regulation. They originally claimed there was no paradox, because they were merely saying that net neutrality was moral, not that it should be law. Now they’ve discarded that charade, and are actively lobbying Congress to make net neutrality law.

Sure, sometimes full disclosure will produce bad results, but more often, those with political power will seek to suppress vital information with reasons that sound good at the time, like “think of the children!”. We need to firmly defend full disclosure as free speech, in all circumstances.

Update: Some have suggested that instead of disclosing details, a researcher can inform the media.

This has been tried. It doesn’t work. Vendors have more influence on the media than researchers.

We saw this happen in the Apple WiFi fiasco. It was an obvious bug (SSIDs longer than 97 bytes), but at the time Apple kernel exploitation wasn’t widely known. Therefore, the researchers tried to avoid damaging Apple by not disclosing the full exploit. Thus, people could know about the bug without being able to exploit it.

This didn’t work. Apple’s marketing department claimed the entire thing was fake. They did later fix the bug, claiming it was something they found unrelated to the “fake” vulns from the researchers.

Another example was two years ago when researchers described bugs in airplane control systems. The FAA said the vulns were fake, and the press took the FAA’s line on the problem.

The history of “going to the media” has demonstrated that only full-disclosure works.

Krebs on Security: Why You Should Ditch Adobe Shockwave

This post was syndicated from: Krebs on Security and was written by: BrianKrebs. Original post: at Krebs on Security

This author has long advised computer users who have Adobe‘s Shockwave Player installed to junk the product, mainly on the basis that few sites actually require the browser plugin, and because it’s yet another plugin that requires constant updating. But I was positively shocked this week to learn that this software introduces a far more pernicious problem: Turns out, it bundles a component of Adobe Flash that is more than 15 months behind on security updates, and which can be used to backdoor virtually any computer running it.

shockwaveMy re-education on this topic comes courtesy of Will Dormann, a computer security expert who writes threat advisories for Carnegie Mellon University’s CERT. In a recent post on the release of the latest bundle of security updates for Adobe’s Flash player, Dormann commented that Shockwave actually provides its own version of the Flash runtime, and that the latest Shockwave version released by Adobe has none of the recent Flash fixes.

Worse yet, Dormann said, the current version of Shockwave for both Windows and Mac systems lacks any of the Flash security fixes released since January 2013. By my count, Adobe has issued nearly 20 separate security updates for Flash since then, including fixes for several dangerous zero-day vulnerabilities.

“Flash updates can come frequently, but Shockwave not so much,” Dormann said. “So architecturally, it’s just flawed to provide its own Flash.”

Dormann said he initially alerted the public to this gaping security hole in 2012 via this advisory, but that he first told Adobe about this lackluster update process back in 2010.

As if that weren’t bad enough, Dormann said it may actually be easier for attackers to exploit Flash vulnerabilities via Shockwave than it is to exploit them directly against the standalone Flash plugin itself. That’s because Shockwave has several modules that don’t opt in to trivial exploit mitigation techniques built into Microsoft Windows, such as SafeSEH.

“So not only are the vulnerabilities there, but they’re easier to exploit as well,” Dormann said. “One of the things that helps make a vulnerability more difficult [to exploit] is how many of the exploit mitigations a vendor opts in to. In the case of Shockwave, there are some mitigations missing in a number of modules, such as SafeSEH. Because of this, it may be easier to exploit a vulnerability when Flash is hosted by Shockwave, for example.”

Adobe spokeswoman Heather Edell confirmed that CERT’s information is correct, and that the next release of Shockwave Player will include the updated version of Flash Player.

“We are reviewing our security update process in order to mitigate risks in Shockwave Player,” Edell said.

For those who need Shockwave Player installed for some reason, Microsoft’s Enhanced Mitigation Experience Toolkit (EMET 4.1 or higher) can help prevent the exploitation of this weakness.

Not sure whether your computer has Shockwave installed? If you visit this link and see a short animation, it should tell you which version of Shockwave you have installed. If it prompts you to download Shockwave (or in the case of Google Chrome for some reason just automatically downloads the installer), then you don’t have Shockwave installed. To remove Shockwave, grab Adobe’s uninstall tool here. Mozilla Firefox users should note that the presence of the “Shockwave Flash” plugin listed in the Firefox Add-ons section denotes an installation of Adobe Flash Player plugin — not Adobe Shockwave Player.

Krebs on Security: The Mad, Mad Dash to Update Flash

This post was syndicated from: Krebs on Security and was written by: BrianKrebs. Original post: at Krebs on Security

An analysis of how quickly different browser users patch Adobe Flash vulnerabilities shows a marked variation among browser makers. The data suggest that Google Chrome and Mozilla Firefox users tend to get Flash updates relatively quickly, while many users on Microsoft’s Internet Explorer browser consistently lag behind.

The information comes from ThreatMetrix, a company that helps retailers and financial institutions detect and block patterns of online fraud. ThreatMetrix Chief Technology Officer Andreas Baumhof looked back over the past five months across 10,000+ sites the company serves, to see how quickly visitors were updating to the latest versions of Flash.

Baumhof measured the rates of update adoption for these six Flash patches:

Jan 14, 2014 – APSB14-02 Security updates available for Adobe Flash Player (2 critical vulnerabilities)

Feb 4, 2014 – APSB14-04 Security updates available for Adobe Flash Player (2 critical flaws, including 1 zero-day)

Feb 20, 2014 – APSB14-07 Security updates available for Adobe Flash Player (1 zero-day)

Mar 11, 2014 – APSB14-08 Security updates available for Adobe Flash Player (2 critical vulnerabilities)

Apr 8, 2014, – APSB14-09 Security updates available for Adobe Flash Player (4 critical vulnerabilities)

Apr 28, 2014 – APSB14-13 Security updates available for Adobe Flash Player (1 zero-day)

Overall, Google Chrome users were protected the fastest. According to Baumhof, Chrome usually takes just a few days to push the latest update out to 90 percent of users. Chrome pioneered auto-updates for Flash several years ago, with Firefox and newer versions of IE both following suit in recent years.

The adoption rate, broken down by browser type, of the last six Adobe Flash updates.

Interestingly, the data show that IE users tend to receive updates at a considerably slower clip (although there are a few times in which IE surpasses Firefox users in adoption of the latest Flash updates).  This probably has to do with the way Flash is updated on IE, and the legacy versions of IE that are still out there. Flash seems to have more of a seamless auto-update process on IE 10 and 11 on Windows 8 and above, and more of a manual one on earlier versions of the browser and operating system.

Another explanation for IE’s performance here is that it is commonly used in business environments, which tend to take a few days at least to test patches before rolling them out in a coordinated fashion across the enterprise along with the rest of the Patch Tuesday updates.

The following graphic depicts Flash patch adoption by IE version for Period #4 in the image above (Mar 11, 2014 – APSB14-08 Security updates available for Adobe Flash Player, 2 critical vulnerabilities):

Adoption of Flash patch APSB14-08 (Mar. 11, 2014), broken down by IE version.

“In the period 4 you can see that IE11 is nicely up to 90% – which is in line with Chrome, but obviously the older the browser version, the less updated Flash is,” Baumhof said.

It’s unclear what might explain the apparent slow uptake of Flash patches for IE and Firefox users following the January and early April Flash updates. It’s worth noting, however, that the Flash patches which saw the fastest uptake regardless of browser type included fixes for zero-day vulnerabilities (see periods 2, 3 and 6 in the first graphic above).

While Chrome appears to have the speediest update process for Flash patches (the company frequently pushes Flash updates out even before Adobe releases them publicly), it’s important to remember that applying any auto-pushed Flash patches in Chrome requires a restart of the browser.

“I use Chrome and I typically never close my browser as I always just hibernate my computer,” Baumhof said. “I noticed that it took me almost seven days to apply a Flash update because Chrome could only do this when you restart the browser, and I simply wasn’t aware of it.”

Flash is a buggy security risk, but a great many Web sites simply won’t work or display certain content without the Flash plugin installed. As such, I’ve urged readers to take advantage of Click-to-Play, which blocks plugin activity by default, replacing the plugin content on the page with a blank box. Users who wish to view the blocked content need only click the boxes to enable the Flash content inside of them.

Krebs on Security: Adobe, Microsoft Issue Critical Security Fixes

This post was syndicated from: Krebs on Security and was written by: BrianKrebs. Original post: at Krebs on Security

Adobe and Microsoft today each released software updates to plug dangerous security holes in their products. Adobe pushed patches to fix holes in Adobe Acrobat/Reader as well as Flash Player. Microsoft issued eight update bundles to nix at least 13 security vulnerabilities in Windows and software that runs on top of the operating system.

A majority of the patches released by Microsoft are fixes for products that run in enterprise environments. Chief among the consumer-facing Microsoft updates is a cumulative patch for Internet Explorer that fixes a pair of flaws in all supported versions of IE. This patch also includes the emergency update that Microsoft released earlier this month to address a zero-day vulnerability in IE. Microsoft also issued fixes for several Office vulnerabilities. This month’s batch also includes a .NET fix, which in my experience is best installed separately.

Adobe released a fix for its Flash Player software that corrects at least six security flaws. The Flash update brings the media player to v. on Windows and Mac systems, and v. for Linux users. To see which version of Flash you have installed, check this link.
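For readers curious how such version-check pages work: they typically read the installed plugin's description string out of `navigator.plugins` and parse a version from it. A minimal sketch of that parsing step follows; the function name is made up for illustration, and the "Shockwave Flash 13.0 r0" description format is the Netscape-style string Firefox and Chrome report, not something from this article.

```javascript
// Parse a Flash version out of a navigator.plugins description string
// such as "Shockwave Flash 13.0 r0". Returns null when no version-like
// token is found. This is a sketch, not a robust detector.
function parseFlashVersion(description) {
  var match = /(\d+)\.(\d+)(?:\s*r(\d+))?/.exec(description);
  if (!match) return null;
  return {
    major: parseInt(match[1], 10),
    minor: parseInt(match[2], 10),
    revision: match[3] ? parseInt(match[3], 10) : 0
  };
}

// In a browser you would feed it the real plugin entry, e.g.:
//   var p = navigator.plugins["Shockwave Flash"];
//   var v = p ? parseFlashVersion(p.description) : null;
```

Comparing the parsed version against Adobe's latest release number is then a simple field-by-field comparison.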

IE10/IE11 and Chrome should auto-update their versions of Flash. If your version of Flash on Chrome (on either Windows, Mac or Linux) is not yet updated, you may just need to close and restart the browser.

The most recent versions of Flash are available from the Adobe download center, but beware of potentially unwanted add-ons like McAfee Security Scan. To avoid these, uncheck the pre-checked box before downloading, or grab your OS-specific Flash download from here. Windows users who browse the Web with anything other than Internet Explorer will need to apply this patch twice, once with IE and again using the alternative browser (Firefox or Opera, for example).

In addition, there is an update available that fixes at least 11 security holes in versions of Adobe Acrobat and Adobe Reader. Windows and Mac users should update to the latest version (11.0.07).

SANS Internet Storm Center, InfoCON: green: And the Web it keeps Changing: Recent security relevant changes to Browsers and HTML/HTTP Standards, (Tue, May 6th)

This post was syndicated from: SANS Internet Storm Center, InfoCON: green and was written by: SANS Internet Storm Center, InfoCON: green. Original post: at SANS Internet Storm Center, InfoCON: green

As we all know, web standards only leave “draft” status once they start becoming irrelevant. It is a constant challenge to keep up with how web browsers interpret standards and how the standards themselves keep changing. We are just going through one of the perpetual updates for our “Defending Web Applications” class, and I was reminded again of some of the changes we had to make over the last year or so.


Autocomplete

This weekend we had yet another post about people picking bad passwords. The only real way around this problem is a password manager. For a long time, browsers have included features to let you save passwords. Historically, these features were not well liked, as they tended to protect the password inadequately. But with the number of leaked passwords going up and up, and browser makers feeling more confident about their built-in password safe features, some browsers have started to ignore this setting. For example, recent versions of Chrome and Safari will offer to save your password whether or not the “autocomplete=off” attribute is set.

BTW: You may still need to keep the autocomplete=off attribute in your forms to pass a PCI audit. After all, in this case you are not defending against hackers but against auditors, and the attribute still works great to fend off auditor questions.
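For reference, the attribute can be set on the whole form or on individual fields. A minimal sketch (the action URL and field names are illustrative):

```html
<!-- autocomplete="off" may be declared on the form element or on a
     single input; as noted above, recent Chrome and Safari ignore it
     for password fields and offer to save the password anyway -->
<form action="/login" method="post" autocomplete="off">
  <input type="text" name="user">
  <input type="password" name="pass" autocomplete="off">
  <input type="submit" value="Log in">
</form>
```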

In the end, this means it is up to the user to decide whether to enable or disable this feature, and which password safe to use. Personally, I don’t think you can do without a password safe. But some people still think they can remember more than 100 random passwords/passphrases. (I have a hard time with one or two.)

Cookie2 Headers

“Nobody” ever really used the Cookie2 header. It was supposed to address privacy concerns people had with regular cookies. Cookies set via the Cookie2 mechanism are essentially session cookies: they cannot be set “cross domain”, and they expire as soon as you close the browser. But that was back in the day when people still considered privacy something attainable. RFC 6265 officially obsoleted Cookie2 back in 2011. I guess nobody noticed (me neither) because nobody uses it.
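For those who never saw it in the wild, the RFC 2965 exchange looked roughly like this (cookie name and value are illustrative): the client advertised support with a Cookie2 request header, and the server set cookies with Set-Cookie2 in the response.

```http
Cookie2: $Version="1"

Set-Cookie2: SID="31d4d96e"; Version="1"; Path="/"; Discard
```

The Discard attribute (and the absence of Max-Age) is what made these effectively session-only cookies; a plain RFC 6265 Set-Cookie achieves the same thing by simply omitting Expires/Max-Age.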

URL Bars

Another “good old days” feature of many browsers is the URL bar, and it is slowly disappearing. The simple reason is that most users (no, you are not “most users”, since you are reading this post) have no idea what a URL is or how to decipher it. It all started with mobile browsers, which pushed the URL off the screen as soon as possible to save the few pixels it would take to render the URL bar. I think it was Internet Explorer 8 where I first noticed that the URL bar got squished into a corner in order to provide more space for the search bar. Google is now trying to make this change more official by only showing the hostname, not the full URL, in recent beta releases of Chrome. The idea is that the hostname is what matters, and the other parts of the URL are usually just used by phishers to confuse the user as to the actual location of the page.

Anything I missed? Not looking for brand new features like HTTP/2.0 but for old features that no longer work in new browsers and are somewhat security related. I may add a couple more items to this post or as a comment as I remember them.


Johannes B. Ullrich, Ph.D.
SANS Technology Institute

(c) SANS Internet Storm Center. Creative Commons Attribution-Noncommercial 3.0 United States License.

Linux How-Tos and Linux Tutorials: How to Use Google Web Designer for HTML5 Design on Linux

This post was syndicated from: Linux How-Tos and Linux Tutorials and was written by: Linux How-Tos and Linux Tutorials. Original post: at Linux How-Tos and Linux Tutorials

Google Web Designer is a GUI tool created by Google for designing advanced HTML5 content using an integrated visual editor interface. It can create interactive HTML5 web pages as well as animated graphic ads that can run on any device. The tool is finally available for Linux, though it is still in beta. […]
Continue reading…

The post How to use Google Web Designer for HTML5 design on Linux appeared first on Xmodulo.

Read more at Xmodulo