SANS Internet Storm Center, InfoCON: green: ISC StormCast for Monday, April 21st 2014 http://isc.sans.edu/podcastdetail.html?id=3943, (Mon, Apr 21st)

This post was syndicated from: SANS Internet Storm Center, InfoCON: green and was written by: SANS Internet Storm Center, InfoCON: green. Original post: at SANS Internet Storm Center, InfoCON: green

(c) SANS Internet Storm Center. http://isc.sans.edu Creative Commons Attribution-Noncommercial 3.0 United States License.

SANS Internet Storm Center, InfoCON: green: Heartbleed hunting, (Mon, Apr 21st)

This post was syndicated from: SANS Internet Storm Center, InfoCON: green and was written by: SANS Internet Storm Center, InfoCON: green. Original post: at SANS Internet Storm Center, InfoCON: green

Yes, I know that by now you are really tired of hear and read about Heartbleed. You probably already got all testing scripts and tools and are looking on your network for vulnerable servers. 

I was just playing with the Shodan transformer for Maltego and looking for some specific versions of OpenSSL. The results are not good…

Somethings to keep in mind when checking your network is that the tools may not detect all vulnerable hosts since they may be buggy themselves :)

According some research, one of the first scripts released to test the vulnerability, and that most of people still use to identify vulnerable servers contains some bugs that may not detect correctly the vulnerable servers.

The heartbeat request generated on the proof of concept script is:

18 03 02 00 03 01 40 00 <– the bold bytes basically tell the server to use TLS 1.1, so if the server only supports TLS 1.0 or TLS 1.2 it won’t work. Of course that 1.0 and 1.2 are not widely used, so the chance of you having it on your network is small, but still, there is a chance. 

Early last week, while testing different online and offline tools, I also came across different results, so you may want to use different tools on your check.

Network signatures may also provide additional check to help you identify vulnerable hosts. Again there is a chance of False Positives, but it is worth to provide you with more info on your checks.

Snort signatures and network parses for products like Netwitness can also be very effective to detect not only when an exploit was used against your hosts, but most importantly, when your vulnerable host provided information back (the leaked info). 

Happy hunting to you because the bad guys are already hunting!

——————–

Pedro Bueno (pbueno /%%/ isc. sans. org)
Twitter: http://twitter.com/besecure

(c) SANS Internet Storm Center. http://isc.sans.edu Creative Commons Attribution-Noncommercial 3.0 United States License.

TorrentFreak: The Copyright Monopoly’s Fundamental Problem Remains The Same…

This post was syndicated from: TorrentFreak and was written by: Rick Falkvinge. Original post: at TorrentFreak

copyright-brandedWhen we share knowledge and culture in order to manufacture our own copies of it, this happens in private communications – it happens as part of the ones and zeroes that arrive at and are transmitted from our computers.

However, some part of these transmissions may be in violation of the copyright monopoly. The only way to find if any are is to listen to them and break the postal secret; to open all the digital letters and violate the privacy of correspondence.

There is no way to enforce the copyright monopoly without reading all the private communications in transit – mass eavesdropping and mass surveillance. There is no magic way to just wiretap the violations and ignore the rest; the act of finding which communications may violate the copyright monopoly requires that you sort all correspondence into legal and illegal. The act of sorting requires observation; you cannot determine if something is legal or illegal without looking at it. At that point, the postal secret and the privacy of correspondence have been broken.

(Some proponents of the copyright monopoly would argue that the act of sharing knowledge and culture wouldn’t classify as private correspondence. This is irrelevant, as in any case, it is intermixed with private correspondence that must still be unpacked and looked at in the sorting process.)

So we’re at a crossroads where we as a society must determine which is more important – the right to communicate in private at all, or the obsolete distribution and manufacturing monopoly of an entertainment industry. These two are completely mutually exclusive and cannot coexist. This is, and has been, the problem since the cassette tape.

The copyright industry understands this perfectly, which is why they have been working hard, long, and tenaciously to eliminate the concept of private correspondence online and introduce ubiquitous mass surveillance. A few examples:

In Ireland, the copyright industry (in the shape of the big four record labels) sued the country’s largest ISP, Eircom, for the right to install wiretapping and censorship equipment in the deepest of their core Internet switches: they demanded the ability to detect and prevent communications they didn’t like. Yes, you read that right: a private industry full-out demanded the right to examine all (and prevent any) private correspondence in the entire country.

In Sweden, the copyright industry did a two-pronged approach to get their own access to ISP access logs through a ridiculous over-implementation of the IPRED directive, along with working feverently to get mandatory ISP logging in the shape of Data Retention passed (the mass surveillance mechanism that was just now declared in violation of basic human rights by the highest EU court). The copyright industry (in the shape of IFPI) even demanded independent, extrajudicial access to the mass surveillance data from the Data Retention mechanisms. Yes, you read that right: a private industry demanded independent and unfiltered access to surveillance records of practically every footstep and every correspondence you make in your everyday life.

The copyright industry is very much a part of the mass surveillance industry. Mass surveillance is the only way they can maintain their crumbling monopoly on manufacturing copies.

At the end of the day, these two mechanisms must be weighed against one another: do we prefer the ability to communicate in private at all, or do we prefer the distribution and manufacturing monopoly of an entertainment industry? As is today, they can’t coexist, and this has always been and still remains the key point of contention.

One of the reasons that we’ve gotten to this point is that these two mechanisms are usually handled by different departments. The copyright monopoly tends to be under the Department of Commerce, whereas the fundamental rights such as freedom of speech and privacy of correspondence falls under the Department of Justice in most countries. This means that there has never been anybody with the responsibility of weighing them against each other, and coming to the obvious conclusion that the right to private correspondence far outweighs the distribution and manufacturing monopoly of an entertainment industry.

We need to keep kicking politicians out of office until they realize this enormous blind spot of theirs.

About The Author

Rick Falkvinge is a regular columnist on TorrentFreak, sharing his thoughts every other week. He is the founder of the Swedish and first Pirate Party, a whisky aficionado, and a low-altitude motorcycle pilot. His blog at falkvinge.net focuses on information policy.

Book Falkvinge as speaker?

Source: TorrentFreak, for the latest info on copyright, file-sharing and anonymous VPN services.

LWN.net: Kernel prepatch 3.15-rc2

This post was syndicated from: LWN.net and was written by: corbet. Original post: at LWN.net

The second 3.15 kernel prepatch is
available for testing. “And on the seventh day the rc release rose
again, in accordance with the Scriptures laid down at the kernel summit of
the year 2004.

TorrentFreak: Google Asked to Censor Two Million Pirate Bay URLs

This post was syndicated from: TorrentFreak and was written by: Ernesto. Original post: at TorrentFreak

google-bayIn an effort to make it difficult for the public to find pirated content, copyright holders send millions of takedown notices to Google every week.

One of the top domains listed in these notices is thepiratebay.se. Since the notorious torrent site doesn’t accept takedown requests itself, copyright holders have to turn to Google to do something about the appearance of their work on the infamous torrent site.

This week the number of thepiratebay.se URLs submitted to Google reached the two million mark. Nearly all of these links have been removed and can no longer be accessed through search results.

The chart below shows the number of links that have been submitted per week. There is a sharp decline towards the end of 2013 when The Pirate Bay used another domain name. The requests increased again in December when the torrent site switched back.

Google-removed-tpb

In total, the two million URLs were submitted in 93,070 separate takedown notices, averaging more than 20 links per takedown request. A staggering number, but one that pales in comparison to other sites.

Looking at the list of domains that received the most URLs removal requests, The Pirate Bay ends up in 29th place. The top spot goes to filestube.com with more than 11 million URLs, followed by dilandau.eu, rapidgator.net, zippyshare.com and 4shared.com.Torrentz.eu, the first torrent site in the list, comes in 8th with 4.4 million URLs.

The million dollar question is of course whether all these takedown requests have had a significant impact on the availability of pirated content.

According to Google, the two million URLs represent between one and five percent of all links that are indexed, so it’s safe to say that there’s still plenty of Pirate Bay content available via Google. Similarly, removing the search results doesn’t hinder people from going to the notorious torrent site directly.

The Pirate Bay itself isn’t particularly concerned about this development. The site’s traffic has increased steadily over the past few years, and so has the number of files being uploaded to the site.

In common with the many ISP blockades of The Pirate Bay, it’s safe to conclude that people can find plenty of alternative routes to end up where they want to be…

blocktpb1

Source: TorrentFreak, for the latest info on copyright, file-sharing and anonymous VPN services.

The Hacker Factor Blog: A Bit Off-Color

This post was syndicated from: The Hacker Factor Blog and was written by: The Hacker Factor Blog. Original post: at The Hacker Factor Blog

FotoForensics just hit another major milestone yesterday. It received it’s 500,000th unique image upload! That’s a half-million pictures! There were a bunch of things that I had wanted to release to celebrate this achievement, but most of it isn’t done yet. (Either you guys uploaded too fast, or I am programming to slow.) So I enabled the one feature that is currently ready: Digest. It’s listed as an analyzer option and there’s also a tutorial for it.

A digest is a basic summary that can be used to identify a specific image file. Usually it’s a complex hash, but it can be as simple as the picture’s dimensions and file size. The digest lets the analyst know that they are analyzing the correct file. If the file is altered, then the digest changes. With cryptographic digests, such as MD5 or SHA1, a modification as small as 1 bit is detectable. (You will know that the file is different, but not what was changed, who changed it, or when it was changed.)

For people who analyze files for a living, this makes perfect sense. Recording the cryptographic hash value and image attributes are pretty much the first thing you do. And if you are handed a file to evaluate, the first thing you are supposed to do is make sure the digest information matches. The last thing an analyst needs is to waste hours evaluating the wrong file.

These digests also go toward tamper detection and chain of evidence handling. If the digest ever changes, then you know something is wrong.

Of course, a lot of people just go by what the picture shows. For example, I can show you a picture of some Easter Eggs.

Since we are all looking at the same picture, it is easy enough for us to talk about it. For example, there are three eggs. The left egg is the largest and the front egg is leaning on it’s side. And from front to back, they are colored purple, yellow, and blue.

What? You do see purple, yellow, and blue, right? Uh… you don’t? What colors do you see? Let’s make sure that we are talking about the right file…

Colorful Eggs

The colors seen when a picture is displayed are not limited to the pixels in the image. Most common picture formats also support the addition of a color profile. The International Color Consortium defines the common color profile format (ICC Profiles). The profiles allow the application to shift the colors based on the display.

The easiest way to describe this is to walk into an electronics store and look at the wall of TVs. Every TV may be displaying the same show, but the colors will all be a little different. Red on one TV will be vibrant, while another TV may be dull or even a little purplish. With ICC color profiles, the video signal can be shifted to a uniform color space and then converted to colors that better match the monitor. This way, red on your screen will look like red on my screen.

As it turns out, there are two common versions for ICC Profiles. Version 2 is the most widely adopted, but version 4 is also supported by some applications. If the ICC Profile contains version 2 and version 4 information, then the application will use whatever it supports. If your browser supports version 4, then it will use that color correction information. If it only supports version 2, then it will apply the version 2 information. And if it does not support color corrections, then it will display the basic picture.

There’s a couple of web sites that offer pictures designed to identify your application’s level of compatibility. There’s the ICC Test with four pictures, the same test at Microsoft, and a cool car photo at PetaPixel.

The egg picture is my own extreme version of this same color management test. It contains a custom ICC Profile that screws up the colors based on which color management is used.

Here’s the test picture again:

And here’s what it looks like based on your browser’s support:

No color profile support (most mobile devices)
ICC Profile version 2 support (most versions of Firefox and Chrome)
ICC Profile version 4 support

However, I didn’t stop there. A few vendors, like Microsoft, have created their own proprietary extensions to the ICC Profile format. If Microsoft sees their custom extension, then they will use it instead of the version 2 or version 4 information. My egg picture has one of these extensions. I only tested it on Windows 8.1, but it should work on any Windows Vista or later system. The biggest issue is that Microsoft is inconsistent in their application. The same file will look different under Internet Explorer compared to Microsoft Photo Viewer.

Microsoft’s proprietary ICC Profile extension support (under Internet Explorer on Windows 8.1)
Microsoft’s proprietary ICC Profile extension support (under Microsoft Photo Viewer on Windows 8.1)

Unfortunately, I cannot find any documentation about this custom Microsoft extension. I really have no idea what type of transformation it is performing. I only know that Microsoft uses their extension and renders pictures differently based on the application.

Without the Microsoft extension, Internet Explorer and Microsoft Photo Viewer will render the ICC Profile version 4 image.

On every operating system and application, a color management system (CMS) is used to apply the ICC Profile. There’s a few open source CMS systems, but Microsoft, Apple, and Adobe each have their own systems. With each system comes compatibility issues and computational differences. For example, Apple’s CMS on 10.5.8 mostly looks like the ICC version 2 image, except that Apple’s blue egg looks a little more purplish. The purplish value may be due to my monitor’s own color profile, so your picture may look different if you use a Mac.

Apple CMS

When loading the picture into Adobe Photoshop CS5 for the Mac, I am prompted to “keep” the current profile, “convert” it to Adobe’s color space, or “discard” the profile information. Each option creates very different results:

Adobe CS5 for Mac with “convert” profile
Adobe CS5 for Mac with “kept” profile
Adobe CS5 for Mac with “discard” profile (the result is the same as using no profile)

But even minor changes made by the file format can result in significant differences. Here is the original egg picture saved twice: the right is a PNG and the left is a JPEG made from the PNG. Below each picture is what it looks like when colorized using the open source Little Color Management System (LCMS):

LCMS supports ICC version 4, so the result on the PNG is as expected. However, JPEG is a lossy file format and can cause pixels to shift in value a little. These shifts appear as a new color due to a little numerical error in LCMS. (If LCMS has a little numerical error that causes some purple on the background, then Adobe must have a ton of error since they have a solid purple background!) Then again, since no two applications seem to render the exact same image, who is to say which one is “incorrect”?

The problems with color transformations do not end with the ICC Profile; ICC Profiles are not the only way to store transformation information. For example, the PNG file format also supports it’s own color profile information in cHRM and gAMA tags. If a PNG contains both internal color information and an ICC Profile (like my egg example), then the application should use whatever it can. If it supports ICCv4, then use that information, otherwise use ICCv2, else use the PNG color information. And if all else fails, render the picture without any color corrections. Here’s what the image looks like if your application only supports the PNG color information (e.g., Apple’s Quicktime player):

Impacting Forensics

While my egg example is an extreme case, it demonstrates that different applications perform different color transformations. When doing digital photo forensics, the application that you use may alter the data that you are evaluating. This can dramatically impact test results.

For example, applying an ICC Profile is explicitly a color transformation. In general, each pixel has the opportunity to shift a little in the color space. Since pixels can change values, any shifting results in a higher error level potential. Applying an ICC Profile or PNG color transform will alter the results from an error level analysis.

Color transformations can also impact signal-to-noise ratios, light direction, and even cloned-region detection.

When performing any type of digital forensics, you want to minimize the amount of transformation made when trying to render the picture. (At FotoForensics, we explicitly do not apply any color transformations. This way, we do not evaluate any artifacts that could be created during the color transform.)

Digesting Colors

It took me a couple of days, but this one egg picture contains a complex set of color profiles. Each profile makes the picture look different. The color that you see all depends on the tools that you use. With a little more time, I can easily add in at least 8 more transformation tests into this one picture. Currently, it contains an ICC Profile with both version 2 and version 4 information, plus a Microsoft-specific ICC Profile extension and a PNG color correction.

The result is pretty clear: I have one picture (a PNG) that yields NINE different color sets! (Ten if you convert it to JPEG and use LCMS to render it.) The colors that you see are strictly dependent on the specific program that you use to view the image. Even something as minor as calibrating your video driver or patching your software could alter how the image is displayed.

You may be evaluating one file, but it can appear at least ten different ways!


All of this comes back to the digest. You and I are likely seeing two different sets of egg colors. So how do we know that we are looking at the same file? The answer is: check the digests.

My egg picture is a PNG that is 295×224 and uses 3 color channels. The file size is 33,492 bytes. It has an MD5 checksum of b513db97c45848fae40ef55c5e0fdfd5, and the SHA1 checksum is e53cae13e50d5556b6f2021000ff7fbe171561f8. If your file has all of these same attributes, then we are looking at the same file. Now we can start asking why the picture looks different if it is the exact same file.

And remember: it does not take much to alter the picture. If someone uploads this egg picture to Facebook, Twitter, or some other social media site, then the digest will probably change. And a change means that you will probably be seeing different colors.

TorrentFreak: Popcorn Time Devs Drop Like Flies, But No One Will Talk

This post was syndicated from: TorrentFreak and was written by: Andy. Original post: at TorrentFreak

popcorncensorThe Popcorn Time phenomenon hardly needs an introduction but it’s safe to say this application really shook things up after its launch in March.

In a nutshell, Popcorn Time delivered no new content whatsoever. What it did present was existing movies in an incredibly simple and elegant way, making it an extremely attractive proposition to file-sharing veterans and newcomers alike. But very quickly the honeymoon period was over. In mid-March the original developers said they would cease their operations.

“Popcorn Time is shutting down today. Not because we ran out of energy, commitment, focus or allies. But because we need to move on with our lives,” they announced.

“Our experiment has put us at the doors of endless debates about piracy and copyright, legal threats and the shady machinery that makes us feel in danger for doing what we love. And that’s not a battle we want a place in.”

It has proven impossible to get definitive proof as to who was behind the legal threats, since no one wants to talk either on or off the record. However, if one adds two and two (while calling on history) all fingers point to the owners of the content Popcorn Time exploits – Hollywood.

Popcorn

Since it was open source, Popcorn Time had the strength to recover and it didn’t take long for numerous alternative forks of the popular software began to appear. The first main contender was created by a developer from YTS/YIFY, although it later transpired that it would be a lone project rather than one backed by the site.

It continued for a while with several supporting contributors, including some who had worked on the original project. Then, after releasing a new version of the client in late March, things got strange. Suddenly the app was deleted from its Github repository and a previously very enthusiastic developer went completely silent. From being super-chatty, not a single email or instant message was returned.

Something had definitely changed. People don’t flip like that in a matter of a few hours unless there has been some kind of event. Information subsequently received by TF that everything was absolutely fine and normal simply did not match reality.

In the weeks that followed, TorrentFreak chatted with other developers, each working on their own version of the software. The main developer behind Popcorn-Time.tv told us that he’d created his site after the one detailed above had disappeared.

“A few days ago..[..]..the other developer went missing, the main repository and its website were shutdown as well,” he explained.

Then, just a few days after setting up to replace a mysteriously discontinued fork of Popcorn Time, this new developer also had a dramatic change of heart. Suddenly his version of Popcorn Time also disappeared from Github. He followed the first guy and dropped off the radar.

Gitgone

Efforts at contact failed. Emails from TorrentFreak went unanswered. Then, a day ago, there was a surprise reappearance in a discussion thread on Reddit.

“All you need to know is that I’m still alive and moved on to others projects,” he wrote. “I can’t really tell you anything more than that and I won’t contribute anymore to popcorn-time.”

Somewhere in the middle of all this we were contacted by another developer of yet another fork of Popcorn Time. Just like the others, he approached us with much enthusiasm. Then, just a couple of days later, he too had gone, with rapid email exchanges being replaced by complete silence.

We have no definite proof as to what has caused all of these developers to close down their work and refuse to talk, but the circumstances are suspicious to say the least. What they all had in common was their talent, enthusiasm, eloquence and a willingness to push their projects forward. They were all happy to talk too, then all of a sudden no one wanted to say anything.

Why everything should change almost overnight may never be officially revealed, but if it walks like a duck….

Source: TorrentFreak, for the latest info on copyright, file-sharing and anonymous VPN services.

___: Христос Возкресе

This post was syndicated from: ___ and was written by: j. Original post: at ___

Христос Возкресе!

Happy Easter!

TorrentFreak: Is Your VPN / Proxy Working? Check Your Torrent IP-Address

This post was syndicated from: TorrentFreak and was written by: Ernesto. Original post: at TorrentFreak

boxedEvery day dozens of millions of people share files using BitTorrent, willingly exposing their IP-addresses to the rest of the world.

For those who value their privacy this is a problem, so many sign up with a VPN provider or torrent proxy service. This is fine, but some people then forget to check whether their setup is actually working.

While it’s easy enough to test your web IP-address through one of the many IP-checking services, checking the IP-address that’s broadcasted via your torrent client is more complex.

There are a few services that offer a “torrent IP check” tool, but for the truly paranoid there’s now an Open Source solution as well.

The developer, who goes by the nickname “cbdev”, found most of the existing tools to be somewhat “fishy,” so he coded one for himself and those who want to run their own torrent IP checkers.

“I’d rather have something I can control entirely,” cbdev tells TF.

“So, I wrote a tool people can install on their own servers, with the added bonus of it using magnet links, so ‘Tracking torrent’ files are required,” he adds.

The ipMagnet tool allows BitTorrent users to download a magnet link which they can then load into their BitTorrent client. When the magnet link connects to the tracker, the user’s IP-address will be displayed on the site, alongside a time-stamp and the torrent client version.

ip-magnet

Alternatively, users can check out the tracker tab in their torrent clients, where the IP-address will be displayed as well.

For users who are connected to a VPN, the IP-address should be the same as the one they see in their web browser, and different from the IP-address that’s displayed when the VPN is disconnected.

Proxy users, on the other hand, should see a different IP-address than their browser displays, since torrent proxies only work through the torrent client.

torrent-ip

People are free to use the ipMagnet tool demo here, but are encouraged to run a copy on their own server. The whole project is less than 500 lines of code, so those with basic knowledge of PHP, JavaScript and HTML can verify that it’s not doing anything nefarious.

If you’re setting up a copy of your own, feel free to promote it in the comments below. Those who want more tips can read up on how to make a VPN more secure, and which VPN providers and torrent proxies really take anonymity seriously.

Source: TorrentFreak, for the latest info on copyright, file-sharing and anonymous VPN services.

TorrentFreak: ‘Notorious Market’ Blocks Piracy in its P2P Streaming Player

This post was syndicated from: TorrentFreak and was written by: Andy. Original post: at TorrentFreak

ustrEvery year the United States Trade Representative calls out countries, companies and services that step over the line when it comes to copyright enforcement. Year after year the same core players appear and China is one of the countries regularly subjected to criticism.

Chinese companies such as Baidu have been fixtures in the USTR’s reporting for many years, but changes to its operations in 2011 meant that it was able to stay off the list, although at home it is still the subject of various legal clashes. Now, just two months after the USTR published its 2013 Out-of-Cycle Review of Notorious Markets, another Chinese company is hoping to please both local and US interests by ditching its pirate reputation.

In its last publication, sandwiched between KickassTorrents and MP3Skull, the USTR called out a site called Kuaibo. The company behind that site is the Shenzhen QVOD Technology Co. It’s the creator of QVOD, a technology originally designed to enable small and medium sized business to distribute their content online using BitTorrent, P2P, and streaming technology.

With an estimated userbase of 25 million (100 million on its mobile app) the company’s player software is undoubtedly popular. However, many of its users are now using QVOD to share unauthorized content via what appears to be a Popcorn Time-style P2P streaming feature.

QVOD

“QVOD has become a leading facilitator of wide-scale distribution of copyright-infringing content and of other content considered illicit in China,” the USTR wrote, referring to pirate movies/music and pornography.

However, in an announcement this week, Shenzhen QVOD Technology Co reported that it had taken steps to stop the unlawful distribution of both copyright-infringing and adult content via its software. All illegal content will be blocked and the company will move to a commercial and fully-licensed footing.

“From now on, the previous ‘fast play mode’ [of QVOD’s Nora Player) will come to an end,” a company spokesman said. “Nora is willing to work with counterparts to jointly promote the development of the genuine video industry.”

The motivation for “going legal” appears to be financial. Analysts quoted in Chinese media say that its become increasingly difficult for QVOD to get advertisers who are happy for their brands to appear alongside infringing content. Since the company is pledging to spend more than $16m on licenses it needs money quickly, but whether its millions of pirates are ready to spend is far from clear.

Source: TorrentFreak, for the latest info on copyright, file-sharing and anonymous VPN services.

Велоеволюция: Карането на велосипед в София ще спасява около 200 живота годишно

This post was syndicated from: Велоеволюция and was written by: hap4oteka. Original post: at Велоеволюция

velo1sКарането на велосипед може да създаде над 76 хил. работни места и да спасява повече от 9 хил. човешки живота годишно, сочи съвместно проучване на Комисията за Европа на ООН (КЕОН) и на Световната здравна организация (СЗО).
Понеже КЕОН включва САЩ и Канада, а Израел и всички бивши съветски републики в Азия принадлежат към европейския регион на СЗО, данните за тези страни също фигурират в проучването, което обхваща само столиците.

Замърсяването на въздуха поради интензивния трафик отнема 500 хил. живота, докато други 90 хил. загиват годишно при автомобилни катастрофи.

Доколкото ползването на градския транспорт не изисква системни физически упражнения, това го прави един от факторите, които ежегодно причиняват смъртта на 1 млн. души.
Шумът от градското движение има вредни последици за близо 70 млн. жители на столични градове, сочат данните на ООН.
Според проучването здравословен и икономически ефект ще има повишеното каране на велосипед в региона. Засега то е най-предпочитано в столиците на Холандия – 33% от средствата за придвижване и Дания – 26%.

До тях се доближават Германия с 13 на сто, Швейцария – 11% и Словения – 10%. Във всички останали столици придвижването с велосипед е между 0 – 9%, като в нулевата графа са само два града – Минск, Беларус и Рим, Италия.

София е с велосипедна активност от 1% в компанията на още 12 столици, като за три от тях – тези на Таджикистан, на Украйна и на Узбекистан, се смята, че данните са завишени.

Прогнозира се, че столицата на България има потенциал да създаде1332 работни места (срещу 53 в момента), свързани с продажбата и поддръжката на велосипеди и на екипировка за тях, както и с необходимите градоустройствени промени.
Предпочитанието към велосипеда ще спасява ежегодно в София 195 човешки живота.

Най-голямо увеличение на работните места и намаляване на жертвите ще бъде постигнато в Москва – 12 085 и 2912, Лондон – 8196 и 613 и Анкара – 5122 и 565. Общо за региона прогнозата е 76 658 нови работни места и 9401 спасени живота.

Източник: http://www.vesti.bg/

LWN.net: Ars Technica: Tor network’s ranks of relay servers cut because of Heartbleed bug

This post was syndicated from: LWN.net and was written by: n8willis. Original post: at LWN.net

Ars Technica reports
on the impact that the “Heartbleed” bug in OpenSSL has had for the Tor
anonymizing network. “The Tor Project team has been moving to
provide patches for all of the components, and most of the core
network was quickly secured. However, a significant percentage of the relay servers, many of which serve countries with heavy Internet censorship, have remained unpatched. These systems are operated by volunteers and may run unattended.

Schneier on Security: Friday Squid Blogging: Squid Jigging

This post was syndicated from: Schneier on Security and was written by: schneier. Original post: at Schneier on Security

Good news from Malaysia:

The Terengganu International Squid Jigging Festival (TISJF) will be continued and become an annual event as one of the state’s main tourism products, said Menteri Besar Datuk Seri Ahmad Said.

He said TISJF will become a signature event intended to enhance the branding of Terengganu as a leading tourism destination in the region.

“Beside introducing squid jigging as a leisure activity, the event also highlights the state’s beautiful beaches, lakes and islands and also our arts, culture and heritage,” he said.

I assume that Malaysian squid jigging is the same as American squid jigging. But I don’t really know.

As usual, you can also use this squid post to talk about the security stories in the news that I haven’t covered.

Errata Security: xkcd is wrong about "free speech"

This post was syndicated from: Errata Security and was written by: Robert Graham. Original post: at Errata Security

The usually awesome XKCD cartoon opines that “the right to free speech means only that the government cannot arrest you for what you say“. This is profoundly wrong in every way that something can be wrong.

The First Amendment to the constitution says that “Congress shall pass no law abridging freedom of speech”. This wording is important. It doesn’t say that congress shall pass laws protecting our speech, but that congress shall not abridge it. “Free speech” is not a right given to us by government. Instead, “free speech” is a right we have — the stipulation is only that government should not infringe it.
The forces that want to restrict your speech include others than just government. For example, cartoonists around the world draw pictures of Jesus and Buddha, but do not draw pictures of Mohamed, because they are afraid of being murdered by Islamic fundamentalists. South Park depicted Jesus as addicted to Internet porn, and Buddha with a cocaine habit, but the censors forced them to cover up a totally innocuous picture of Mohamed. It’s a free-speech issue, but not one that involves government.

In oppressive countries like Russia, the threats to speech rarely come directly from the government. It’s not the police arresting people for speech. Instead, it’s the local young thugs beating up journalists, with the police looking the other way.
In the United States, society has gone overboard on “political correctness” that silences speech. A good example in cybersec community is when the “Ada Initiative” got that talk canceled at “BSidesSF” last year. That sort of thing is very much a “free speech” issue, even though the official government wasn’t involved.
The “responsible-disclosure” debate is about “free-speech”, where some try to use the hammer of “ethical behavior” to control speech. Last night I tweeted a line of code from the OpenSSL source code that demonstrates a hilariously funny bug. I was attacked for my speech from OpenSSL defenders who want me to quietly submit bug patches rather than making OpenSSL look bad on Twitter.
That’s why so many of us oppose the idea of “responsible-disclosure” — the principle of “free-speech” means “free-disclosure”. If you’ve found a vulnerability, keep it secret, sell it, notify the vendor, notify the press, do whatever the heck you want to do with it. Only “free-disclosure” advances security — “responsible” disclosure that tries to control the process holds back security.
Such debates do circle back to government. For example, in the Andrew ‘weev’ Auernheimer case, the government cited Andrew’s behavior (notifying the press) as an example of irresponsible behavior, because it didn’t fit within the white-hat security norms of “responsible-disclosure”. Andrew was sent to jail for speech that embarrassed the powerful — and it was your anti-free-speech arguments of “responsible-disclosure” that helped put him there.
Certainly, it’s technically inaccurate to cite “First Amendment” rights universally, as that’s only a restriction on government. But the “free speech” is distinct: you can certainly cite your “right to free speech” in cases that have nothing to do with government.
Update: I shoulda just started this post by citing the Wikipedia entry on rights: “Rights are legal, social, or ethical principles of freedom“. In other words, it’s perfectly valid to use the word “right” in contexts other than “legal”.

Update: I mention South Park because the XKCD mentions “when your show gets canceled”. If your show gets canceled because nobody watches it, then that’s certainly not a free-speech issue. But, when your show gets canceled because of threats from Islamists, then it certainly is a free-speech issue.

Schneier on Security: Metaphors of Surveillance

This post was syndicated from: Schneier on Security and was written by: schneier. Original post: at Schneier on Security

There’s a new study looking at the metaphors we use to describe surveillance.

Over 62 days between December and February, we combed through 133 articles by 105 different authors and over 60 news outlets. We found that 91 percent of the articles contained metaphors about surveillance. There is rich thematic diversity in the types of metaphors that are used, but there is also a failure of imagination in using literature to describe surveillance.

Over 9 percent of the articles in our study contained metaphors related to the act of collection; 8 percent to literature (more on that later); about 6 percent to nautical themes; and more than 3 percent to authoritarian regimes.

On the one hand, journalists and bloggers have been extremely creative in attempting to describe government surveillance, for example, by using a variety of metaphors related to the act of collection: sweep, harvest, gather, scoop, glean, pluck, trap. These also include nautical metaphors, such as trawling, tentacles, harbor, net, and inundation. These metaphors seem to fit with data and information flows.

The only literature metaphor used is the book 1984.

This is sad. I agree with Daniel Solove that Kafka’s The Trial is a much better literary metaphor. This article suggests some other literary metaphors, most notably Philip K. Dick. And this one suggests the Eye of Sauron.

Спирт, есенция и умора: 2014-04-18 “IPv6 Deployment in Sofia University” lecture at RIPE-SEE-3

This post was syndicated from: Спирт, есенция и умора and was written by: Vasil Kolev. Original post: at Спирт, есенция и умора

This is my talk from RIPE-SEE3, on the deployment of IPv6 in Sofia University. The presentation can be downloade in pdf or odt format.

I’ve put the slide number or the notes/comments I have in [], the rest is as close to what I said as I remember, with some corrections.

[1]

Hello. I’m going to talk about the IPv6 deployment in Sofia University. It’s an important example how this can be done on a medium-to-large scale, with no hardware upgrades or hidden costs.

This presentation was prepared by Vesselin Kolev, who did most of the architecture of the network there and was involved in its operations until recently.

(I want to clarify, we’re not the same person, and we’re not brothers, I get that question a lot)

[2]

Please note that I’m only doing this presentation, I was not involved in the deployment or any network operations of the university’s network.

[3]

These are the people that did and operated this deployment. Most of them aren’t there any more – Vesselin Kolev is at Technion in Israel, Nikolay Nikolov and Vladislav Rusanov are at Tradeo, Hristo Dragolov is at Manson, Radoslav Buchakchiev is at Aalbord University, Ivan Yordanov is at Interoute, Georgi Naidenov is a freelancer. Stefan Dimitrov, Vladislav Georgiev and Mariana Petkova are still at the university, and some of them (and some of the new people there) are at this conference, so if you have questions related to the current operations there, they’ll probably be able to answer better than me.

[4]

So, let me start with the addressing.

[5]

This is the current unicast network that’s in use in the university. It was assigned around February 2011 and was used from the 11th of February 2011 onwards. It’s a /47, later on I’ll explain why.

Also please note that the maintenance of the records in the RIPE database for the network is done only with PGP-signed emails, to protect this from hijacking. As noted by one previous speaker, there are a lot of cases in RIPE NCC, where bad people try to steal prefixes (as their value is rising), and that’s possible mostly because there aren’t good security measures in place in a lot of LIRs.

[6]

Before that, the university used a /35 from SpectrumNet (since then bought by Mobiltel), until they got their own allocation.

It should be noted that in IPv6, the renumbering is a lot easier than in IPv4 and this was done very fast.

[7]

Now, on the allocation policy,

[8]

this is how unicast addresses are assigned. It’s based on RFC4291, and for the basic entities in the university (faculties) there’s a /60, for each backbone network, server farm or virtual machine bridge there’s a /64, and all the additional allocations are the same as the initial size (on request).
Also, allocations of separate /64 are available for special/specific projects.

[9]

The university also utilizes RFC4139 addresses, for local restricted resources. The allocations are done on /32 basis.

[10]

Now, onto the intra- and inter-AS routing,

[11]

The software used is Quagga on CentOS.
There is a specific reason for using CentOS – the distribution is a recompilation of RHEL, and follows it closely, which means that it’s as stable as it gets – if there’s a security or other important update, just the pieces are backported to the current version of the software, which effectively means that you can have automatic updates running on your servers and routers and be confident that they won’t break horribly.
That’s in stark contrast with almost every router vendor I can think of.

[12]

The current transit providers for the university network are Unicom-B (the academic network in Bulgaria), and SpectrumNet (now owned by Mobiltel).
The university has private peering with Digital systems, Evolink (it changed it’s name from Lirex), ITD, Netissat and Neterra. It also provides an AS112 stub node.

(For the people that don’t know, AS112 is somewhat volunteer project run anycast nodes for the DNS servers for some zones that generate crap traffic, e.g. “example.com”, or the reverse zones for the RFC1918 addresses)

[13]

This is the basic schema of the external connectivity. The university has two border router, each of which is connected to both upstream providers (and to the private peers, but that would’ve cluttered the drawing too much). They’re interconnected through one of the MAN networks, and are in the Lozenetz campus (where the University Computing Center is, the main operator of the network) and the Rectorate.

[14]

These are the prefixes that the university originates. Here we can see why the university has a /47 – so it could be de-aggregated, for traffic engineering of the inbound traffic. That’s one problem that nothing has solved yet, and that would plague us for a lot more years…
Here each border router announces a /48, and both announce the /47, so they can effectively split the inbound traffic.

There’s also the IPv6 prefix for AS112, announced from both borders.

[15]

This is what every router should do for the prefix announces it receives. Basically, everything from a private ASN is dropped, and all prefixes that are not in 2000::/3 (the unicast part of the IPv6 space), are shorter than /3 or longer than /49 are also dropped.

[16]

Here you can see the schema of the backbone network – the two border routers, and the access routers of the faculties. They’re all under the administrative control of the network operations team.

[17]

For this schema to work efficiently, the two border routers do the job of route reflectors, and the access routers are route reflector clients – e.g. each access router has two BGP sessions, one with each border router, and that way it learns all the routes coming from the rest of the access routers.
This setup would’ve been unmanageable otherwise, as doing a full mesh of so many routers would’ve resulted in full mess.
[Ok, I didn't think of that pun at the presentation]

[back to 16]

The initial idea was to actually have the border routers be route reflectors only, and have all the access routers in the VLANs of the external BGP peers, for the traffic to flow directly and not have the two borders as bottlenecks. This wasn’t implemented because of administrative/layer8-9 issues.

[19]

This is how the core network connects with the faculty networks – the access router is connected in a network with the routers of the faculty, and via OSPF (or RIP in some cases) it redistributes a default route to them.

(Yes, RIP is used and as someone told me a few hours ago, if it’s stupid and it works, maybe it’s not that stupid)

[20]

Now here’s something I would’ve done differently :)

Both OSPF and RIP are secured using IPSec, here’s how the config looks like.

[21]

This is not something that you usually see, mostly because it’s harder to configure and the weirdness in the interoperability of different IPSec implementations. For an university network, where risk factors like students exist, this provides a layer of protection of the routing daemons (which, to be frank, aren’t the most secure software you can find) from random packets that can be sent in those segments.
It’s reasonable to accept that the kernel’s IPSec code is more secure than the security code of the routing daemons.

This the only thing that this setup provides more than the other options – a pre-shared key is used, as there aren’t any implementations that have IKEv1 and can be used for this task.

Also, this is harder to operate and configure for untrained personnel, but the team decided to go ahead with the extra security.

[22]

And of course, there has to be some filtering.

[23]

Here’s a schema of one external link – in this case, to Neterra.

[24]

Here we can see the configuration of the packet filtering. This is basically an implementation of BCP38.

First, let me ask, how many of you know what BCP38 is?
[about 25-30% of the audience raise their hands]
And how many of you do it in their own networks?
[Three times less people raise their hands]
OK, for the rest – please google BCP38, and deploy it :)
[Actually, this RFC has a whole site dedicated to it]

Basically, what BCP38 says is that you must not allow inbound traffic from source addresses that are not routed through that interface. In the Cisco world it’s known as RPF (Reverse path filtering), AFAIK in Juniper it’s the same.

In Linux, this is done best using the routing subsystem. Here what we can see is that on the external interface we block everything that’s coming from addresses in our network, and on the internal – anything that’s not from the prefixes we originate.

[25]

Here we can see the setup on an access router (as we can see, BCP38 is deployed on both the border and access routers). Here we can differentiate and allow only the network of the end-users.

[26]

On the border routers, there’s also some filtering with IPtables, to disallow anyone outside of the backbone network to connect to the BGP daemon, and also to disallow anyone to query the NTP server.
(what’s not seen here is that connection tracking is disabled for the rest of the traffic, not to overload it)

[27]

On the access routers, we also have the filtering for the BGP daemon and the NTP server, but also

[28]

we filter out any traffic that’s not related to a connection that was established from the outside. This is the usually cited “benefit” of NAT and the reason for some ill-informed people to ask for NAT in IPv6.

With this, the big-bad-internet can’t connect directly to the workstations, which, let’s be frank, is a very good idea, knowing how bad the workstation OS security is.

[29]

The relevant services provided by the university have IPv6 enabled.

[30]

Most of the web server have had support since 2007.

The DNS servers too, and there’s also a very interesting anycast implementation for the recursive resolvers that I’ll talk in detail about later.

The email services also have supported IPv6 since 2007, and they got their first spam message over IPv6 in on the 12th of December, 2007 (which should be some kind of record, I got mine two years ago, from a server in some space-related institute in Russia).

[31]

ftp.uni-sofia.bg has been IPv6 enabled since 2010, it’s a service for mirroring different projects (for example Debian).

The university also operates some OpenVPN concentrators, that are accessible over IPv6 and can provide IPv6 services.

[32]

The coverage of the deployment is very good – in the first year they had more than 50% of the workstations in the Lozenetz campus with IPv6 connectivity, and today more than 85% of the workstation in the whole university have IPv6 connectivity.
(the rest don’t have it because of two reasons – either they are too old to support it, or they are not turning it on because they had problems with it at the beginning and are refusing to use it)

Around 20% of the traffic that the university does is IPv6.
[question from the public - what is in that traffic? We should be showing people that there's actually IPv6 content out there, like facebook and youtube]

From my understanding (the university doesn’t have a policy to snoop on the users’ traffic) this is either to google(youtube), or to the CERN grid (which is IPv6 only). Yes, there actually exists a lot of content over IPv6, netflix/hulu have already deployed it, and it’s just a few of the big sites (twitter is an example) that are still holding back.

The university provides both ways for the configuration of end-nodes – Router Advertisement (RA) and DHCPv6. For most cases they’re both needed, as RA can’t provide DNS servers, and DHCPv6 can’t provide a gateway (which is something that’s really annoying to most people doing deployments).

[33]

Here’s how the default route is propagated in the network.

[34]

This was actually a surprise for me – there are actually two default routes in IPv6. One is ::/0, which includes the whole address space, and there is 2000::/3, which includes only the unicast space. There is a use for sending just 2000::/3, to be able to fence devices/virtual machines from the internet, but to leave them access to local resources (so they can update after a security breach, for example).

[35]

Here is how you redistribute ::/0, by first creating a null route and adding it in the configuration.

[36]

And here’s how it’s propagated in the network, using OSPF.

[37]

We can do the same for 2000::/3,

[38]

and push it to the virtual machines, who also have connectivity to the local resources through a different path.

[39]

Now, this is something very interesting that was done to have a highly-available recursive DNS resolver for the whole network.

[40 and 41 are the same, sorry for the mistake]

[41]

This is how a node works. Basically, there’s a small program (“manager”) that checks every millisecond if the BIND daemon responds, and if it’s just starting, it adds the anycast resolver addresses to the loopback interface and redistributes a route through OSPF.

(note the addresses used – FEC0::.. – turns out that this is the default resolver for MS windows, so this was the easiest choice. Seems that even in the land of IPv6 we still get dictated by MS what to do)

[42]

If the name server daemon dies, the manager withdraws the addresses.

Also, if the whole nodes dies, the OSPF announce will be withdrawn with it.

[43][44][43][44][43][44]

Here you can see what changes when a node is active. If one node is down, the routers will see the route to the other one, and send traffic accordingly.

[45]

And if they’re both operational, the routers in both campuses get the routes to the node in the other campus with lower local preference, so if they route traffic just to their own, and this way the traffic is load-balanced.

[46]

Thank you, and before the questions, I just want to show you again

[3]

the people that made this deployment. They deserve the credit for it :)

[In the questions section, Stefan Dimitrov from SU stood up and explained that now they have three border routers, the third one is at the 4th kilometer campus]

TorrentFreak: MPAA and RIAA Members Uploaded Over 2,000 Gigabytes to Megaupload

This post was syndicated from: TorrentFreak and was written by: Ernesto. Original post: at TorrentFreak

megauploadFollowing in the footsteps of the U.S. Government, this month the major record labels and Hollywood’s top movie studios filed lawsuits against Megaupload and Kim Dotcom.

While the legal action doesn’t come as a surprise, there is a double standard that has not been addressed thus far.

The entertainment industry groups have always been quick to brand Megaupload as a pirate haven, designed to profit from massive copyright infringement. The comment below from MPAA’s general counsel Steve Fabrizio is a good example.

“Megaupload was built on an incentive system that rewarded users for uploading the most popular content to the site, which was almost always stolen movies, TV shows and other commercial entertainment content,” Fabrizio commented when the MPAA filed its suit.

However, data from Megaupload’s database shared with TorrentFreak shows that employees of MPAA and RIAA member companies had hundreds of accounts at the file-storage site. This includes people working at Disney, Warner Bros., Paramount, 20th Century Fox, Universal Music Group, Sony, and Warner Music.

In total, there were 490 Megaupload accounts that were connected to MPAA and RIAA members, who sent 181 premium payments in total. Together, these users uploaded 16,455 files which are good for more than 2,097 gigabytes in storage.

Remember, those are only from addresses that could be easily identified as belonging to a major movie studio or record label, so the real numbers should be much higher.

MPAA / RIAA member accounts
mega-mpaariaa

But there’s more. The same companies that are now asking for millions of dollars in damages due to massive copyright infringement were previously eager to work with Megaupload and Megavideo.

As we noted previously, Disney, Warner Brothers, Fox and others contacted Kim Dotcom’s companies to discuss advertising and distribution deals.

For example, Shelina Sayani, Digital Marketing Coordinator for Warner Bros, offered a deal to syndicate “exciting” Warner content to Megaupload’s Megavideo site.

Subject: Warner Bros. – Looking for Content Manager
Date: Wed, 14 Jan 2009 08:55:50 -0800
From: Sayani, Shelina
To: demand@megavideo.com

Dear Megavideo,

I’m writing from Warner Bros., offering opportunities to syndicate our exciting entertainment content (e.g. Dark Knight, Harry Potter, Sex and the City clips and trailer) for your users. Could you please pass on my information to the appropriate content manager or forward me to them? Thanks so much for your time.

Shelina Sayani
WB Advanced Digital Services
3300 W Olive Ave, Bldg 168 Room 4-023
Burbank, CA 91505
818.977.4668

Similarly, Disney attorney Gregg Pendola reached out to Megaupload, not to threaten or sue the company, but to set up a deal to have Disney content posted on the Megavideo site.

Subject: Posting on Megavideo.com
From: “Pendola, Gregg”
Date: 8/13/2008 10:06 AM
To: love@megavideo.com

My name is Gregg Pendola. I am Executive Counsel for The Walt Disney Company. Certain properties of The Walt Disney Company have content that they would like to post on your site.

However, we are uncomfortable with a couple of the provisions of your Terms of Use that we feel may jeopardize our rights in our content. We were hoping that you would be amenable to reviewing a 1-page agreement we have drafted that we would like to use in place of your Terms of Use.

Is there someone I can contact to discuss this? Or someone I can email the Agreement to for review?

Thanks. Gregg

Gregg Pendola
Executive Counsel
The Walt Disney Company

For Fox, the interest in Megaupload wasn’t necessarily aimed at spreading studio content, but at utilizing Megaupload’s considerable reach through an advertising deal. In this email, former Senior Director Matt Barash touts FAN, the Fox Audience Network.

Subject: Fox Ad Partnership
Date: Mon, 23 Feb 2009 08:09:14 -0800
From: Matt Barash
To: sales@megaupload.com

I’m reaching out to see if you have a few minutes to discuss the recently launched Fox Audience Network.

FAN is now up and running and fully operational, utilizing best of breed optimization technology to bring cutting edge relevancy to the ad network landscape.
We are scaling rapidly and seeking the right 3rd party publishers to add as partners to our portfolio.

Please let me know if you have some time to chat this week about how we can work together to better monetize your inventory.

Best,
Matt

Matt Barash
Director, Publisher Development
Fox Audience Network

The above are just a few examples of major industry players who wanted to team up with Kim Dotcom. Now, several years later, the same companies accuse the site of being one of the largest piracy vehicles the Internet has ever seen.

If the MPAA and RIAA cases proceed, Megaupload’s defense will probably present some of these examples to highlight the apparent double standard. That will be an interesting narrative to follow, for sure.

Source: TorrentFreak, for the latest info on copyright, file-sharing and anonymous VPN services.

SANS Internet Storm Center, InfoCON: green: Testing your website for the heartbleed vulnerability with nmap, (Fri, Apr 18th)

This post was syndicated from: SANS Internet Storm Center, InfoCON: green and was written by: SANS Internet Storm Center, InfoCON: green. Original post: at SANS Internet Storm Center, InfoCON: green

We have received reports from many readers about buggy tools to test for the heartbleed vulnerability. Today I want to show you how easy it is to check for this vulnerability using a reliable tool such as nmap.

You just need to trigger a version scan (-sV) along with the ssl-heartbleed script. The following example shows a command that will scan 192.168.0.107 for this bug:

nmap -sV 192.168.0.107 --script=ssl-heartbleed

This will be the output for a non-vulnerable website. As you can see, no warnings are shown:

[Screenshot: ssl-heartbleed script output for a non-vulnerable host]

If you are vulnerable, you will get the following:

[Screenshot: nmap reporting the host as VULNERABLE to Heartbleed]

For vulnerability testing, always use reliable tools that won’t infect your computer with malicious code and won’t give you false positives.
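
If you have more than a handful of hosts to check, the same invocation is easy to wrap in a script. A minimal sketch in Python (the host list is made up; it keys off the VULNERABLE state line the script prints, as in the screenshot above):

#!/usr/bin/env python3
# Run the same nmap check against several hosts and flag any output
# that contains the ssl-heartbleed script's VULNERABLE state.
import subprocess

hosts = ["192.168.0.107", "192.168.0.108"]  # hypothetical targets

for host in hosts:
    scan = subprocess.run(
        ["nmap", "-sV", "--script=ssl-heartbleed", host],
        capture_output=True, text=True,
    )
    # No warning in the output means the host passed the check.
    status = "VULNERABLE" if "VULNERABLE" in scan.stdout else "no warning shown"
    print(f"{host}: {status}")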

Manuel Humberto Santander Peláez
SANS Internet Storm Center – Handler
Twitter:@manuelsantander
Web:http://manuel.santander.name
e-mail: msantand at isc dot sans dot org

(c) SANS Internet Storm Center. http://isc.sans.edu Creative Commons Attribution-Noncommercial 3.0 United States License.

Backblaze Blog: Lasers, Forklifts, and the Backblaze B

This post was syndicated from: Backblaze Blog and was written by: Yev. Original post: at Backblaze Blog

Backblaze had a large presence at Macworld 2014 this year, and we had a great time at the show. We love going to Macworld and spreading the good word about online backup. We spoke with lots of interesting people, met with tons of Backblaze users, and even got some of them on film. One of […]

LWN.net: Friday’s security updates

This post was syndicated from: LWN.net and was written by: n8willis. Original post: at LWN.net

Debian has updated openssl (multiple vulnerabilities), qemu (code execution), and qemu-kvm (code execution).

Mageia has updated apache-mod_security (rules bypass), cups-filters (M4: code execution), openjpeg (code execution), php (denial of service), and rsync (M4: denial of service).

Oracle has updated kernel (2.6.39 – OL5; OL6: privilege escalation) and kernel (3.8.13 – OL6: privilege escalation).

SUSE has updated jakarta-commons-fileupload (SLES11 SP3: denial of service).

LWN.net: Debian 6.0 to get long-term support

This post was syndicated from: LWN.net and was written by: corbet. Original post: at LWN.net

The Debian project has announced that the security support period for the 6.0 (“squeeze”) release has been extended by nearly two years; it now runs out in February 2016. At the end, squeeze will have received a full five years of security support. “squeeze-lts is only going to support i386 and amd64. If you’re running a different architecture you need to upgrade to Debian 7 (wheezy). Also there are going to be a few packages which will not be supported in squeeze-lts (e.g. a few web-based applications which cannot be supported for five years). There will be a tool to detect such unsupported packages.”
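
Until that tool arrives, a quick way to check whether a host is even eligible is to look at its architecture. A minimal sketch in Python (assuming a Debian system with dpkg available):

#!/usr/bin/env python3
# squeeze-lts covers only i386 and amd64, per the announcement above.
import subprocess

arch = subprocess.run(["dpkg", "--print-architecture"],
                      capture_output=True, text=True).stdout.strip()
if arch in ("i386", "amd64"):
    print(f"{arch}: eligible for squeeze-lts")
else:
    print(f"{arch}: not covered -- plan an upgrade to Debian 7 (wheezy)")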

Schneier on Security: Reverse Heartbleed

This post was syndicated from: Schneier on Security and was written by: schneier. Original post: at Schneier on Security

Heartbleed can affect clients as well as servers.

Schneier on Security: Overreacting to Risk

This post was syndicated from: Schneier on Security and was written by: schneier. Original post: at Schneier on Security

This is a crazy overreaction:

A 19-year-old man was caught on camera urinating in a reservoir that holds Portland’s drinking water Wednesday, according to city officials.

Now the city must drain 38 million gallons of water from Reservoir 5 at Mount Tabor Park in southeast Portland.

I understand the natural human disgust reaction, but do these people actually think that their normal drinking water is any more pure? That a single human is that much worse than all the normal birds and other animals? A few ounces distributed amongst 38 million gallons is negligible.
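
To put a rough number on it (assuming a typical void of about 16 fluid ounces and 128 fluid ounces per US gallon):

\[
\frac{16\ \text{fl oz}}{38{,}000{,}000\ \text{gal} \times 128\ \text{fl oz/gal}} \approx 3.3 \times 10^{-9} \approx 3\ \text{parts per billion}
\]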

Another story.

TorrentFreak: Five Years Before Any New U.S. Anti-Piracy Laws, MP Predicts

This post was syndicated from: TorrentFreak and was written by: Andy. Original post: at TorrentFreak

sam-pirateUnder immense pressure from powerful entertainment companies, in 2011 it looked almost inevitable that the United States would introduce powerful new legislation to massively undermine Internet piracy.

Championed by Hollywood and the world’s leading record labels, the Stop Online Piracy Act made headlines around the world for putting super-aggressive tools into the government’s arsenal. At the same time, however, proper consideration wasn’t given to their potential impact on innovation.

As a result, citizens and technology companies teamed up to stage the biggest protest the Internet has ever seen resulting in a back-down by the government – and Hollywood in particular – on an unprecedented scale.

The fallout became obvious in the months that followed. The usual anti-piracy rhetoric from the MPAA and RIAA was massively toned down, at times becoming non-existent. In its place emerged a new and softer approach, one aimed at making peace with the very technology companies that had stood in their way.

This week an intellectual property enforcement leader very familiar with the big studios and record labels revealed just how much damage the SOPA defeat is responsible for.

Speaking in Los Angeles at an event hosted by the Motion Picture Licensing Corp., UK MP and Prime Minister’s Intellectual Property Advisor Mike Weatherley said that it would be a very long time before anyone dared to push for new legislation in the United States.

weatherley“It’s going to be five years before anybody puts his head above the parapet again,” Weatherley told executives.

If Weatherley’s predictions are correct, that takes us beyond 2020 before any new legislation gets put in place, a comparative lifetime online and a timescale during which almost anything can happen.

But Hollywood and the labels aren’t sitting still in this apparent ‘quiet’ period. A new strategy has been adopted, one that seeks voluntary cooperation with technology-based companies, the “six-strikes” deal with United States ISPs being a prime example.

Cooperation has also been sought from advertising companies in an attempt to strangle the revenues of so-called pirate sites, a move that has been gathering momentum in recent months. Weatherley told the meeting that existing laws might need to be “beefed up” a little, but from his overall tone those tweaks seem unlikely to provoke any SOPA-like backlash.

Also generating interest is Weatherley’s attitude towards Google. The world’s leading search engine has been under intense pressure to do something about the infringing results that appear in its listings. At times the rhetoric, especially from the music industry, has been intense, and could’ve easily spilled over into aggression if Google had decided to bite back. However, the UK Prime Minister’s IP advisor says he sees things differently.

“I know in America [Google] are considered much more of a pariah than they are perhaps in the U.K. But I have to say they are engaging with me and they recognize that something has got to be done,” Weatherley told the meeting.

But while Weatherley talks peace and cooperation and the MPAA and RIAA keep their heads down in the States, much anti-piracy work is being conducted through their proxies FACT and the BPI in the UK. Instead of tackling the world’s leading file-sharing sites from U.S. soil, the job has been transferred to the City of London Police Intellectual Property Crime Unit. Not only does it keep the controversy down at home, it also costs much less, with the British taxpayer footing much of the bill.

TorrentFreak has learned that only last week a new batch of letters went out to file-sharing related sites, with yet more demands for them to shut down or face the consequences. Things might appear quiet in the United States, but that doesn’t mean things aren’t happening.

Source: TorrentFreak, for the latest info on copyright, file-sharing and anonymous VPN services.

SANS Internet Storm Center, InfoCON: green: Heartbleed CRL Activity Spike Found, (Wed, Apr 16th)

This post was syndicated from: SANS Internet Storm Center, InfoCON: green and was written by: SANS Internet Storm Center, InfoCON: green. Original post: at SANS Internet Storm Center, InfoCON: green

Update: CloudFlare posted in their blog twice today claiming responsibility for the majority of this spike. Quoting: “If you assume that the global average price for bandwidth is around $10/Mbps, just supporting the traffic to deliver the CRL would have added $400,000USD to Globalsign’s monthly bandwidth bill.”

Update: We’ve also seen articles from ZDNet and WIRED today in response to the below insights, with further analysis therein.

It looks like, as I had suspected, the CRL activity numbers we have been seeing did not reflect the real volume caused by the OpenSSL Heartbleed bug.

This evening I noticed a massive spike in the amount of revocations being reported by this CRL: http://crl.globalsign.com/gs/gsorganizationvalg2.crl

The spike is so large that we initially thought it was a mistake, but we have since confirmed that it’s real! We’re talking about over 50,000 unique revocations from a single CRL.

This is, by an order of magnitude, the largest spike in revocation activity seen in years, according to our current data.

I have set up a new page for everyone to monitor the activity as well as see how we are obtaining this data. The page can be found at https://isc.sans.edu/crls.html.
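
If you want to reproduce counts like these yourself, a CRL is just a signed list of revoked serial numbers. A minimal sketch in Python (assuming a recent version of the cryptography package; the file at this URL is typically DER-encoded, with a PEM fallback just in case):

#!/usr/bin/env python3
# Download a CRL and count its revocation entries.
import urllib.request
from cryptography import x509

URL = "http://crl.globalsign.com/gs/gsorganizationvalg2.crl"

data = urllib.request.urlopen(URL).read()
try:
    crl = x509.load_der_x509_crl(data)   # CRLs at URLs like this are usually DER
except ValueError:
    crl = x509.load_pem_x509_crl(data)   # fall back to PEM just in case

print(f"{len(crl)} revoked certificates listed")

Polling that count on a schedule and recording the deltas is essentially the kind of data the page above charts.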

How will you use this page in your projects or general analysis? We’d love to hear some ideas.

If you know of other CRLs that we can add, please let us know in the comments! Additionally, if you would like to see an API call added so that you can automatically query us for this information, please let us know so that we are aware of the demand.

On a side note, we can see a clear upward trend in revocations over the past 3 or 4 years.

What do you attribute this consistent growth in revocations to? What do you think caused the previous spikes?

– 
Alex Stanford – GIAC GWEB,
Research Operations Manager,
SANS Internet Storm Center
/in/alexstanford | @alexstanford

(c) SANS Internet Storm Center. http://isc.sans.edu Creative Commons Attribution-Noncommercial 3.0 United States License.