Posts tagged ‘chrome’

LWN.net: EFF: Lenovo is breaking HTTPS security on its recent laptops

This post was syndicated from: LWN.net and was written by: corbet. Original post: at LWN.net

Here is a statement from the Electronic Frontier Foundation on the revelation that Lenovo has been shipping insecure man-in-the-middle malware on its laptops. “Lenovo has not just injected ads in a wildly inappropriate manner, but engineered a massive security catastrophe for its users. The use of a single certificate for all of the MITM attacks means that all HTTPS security for at least Internet Explorer, Chrome, and Safari for Windows, on all of these Lenovo laptops, is now broken.” For additional amusement, see Lenovo’s statement on the issue.

There are a lot of Lenovo users in LWN’s audience. Presumably most of them have long since done away with the original software, but those who might have kept it around would be well advised to look into the issue; this site can evidently indicate whether a machine is vulnerable or not.

Errata Security: Some notes on SuperFish

This post was syndicated from: Errata Security and was written by: Robert Graham. Original post: at Errata Security

What’s the big deal?

Lenovo, a huge maker of laptops, bundles software on laptops for the consumer market (it doesn’t for business laptops). Much of this software is from vendors who pay Lenovo to be included. Such software is usually limited versions, hoping users will pay to upgrade. Other software is ad supported. Some software, such as the notorious “Ask.com Toolbar”, hijacks the browser to display advertisements.

Such software is usually bad, especially the ad-supported software, but the SuperFish software is particularly bad. It’s designed to intercept all encrypted connections, things it shouldn’t be able to see. It does this in such a poor way that it leaves the system open to hackers or NSA-style spies. For example, it can spy on your private bank connections, as shown in this picture.

Marc Rogers has a post where he points out that the software hijacks your connections, monitors them, collects personal information, injects advertising into legitimate pages, and causes popup advertisements.


Who discovered this mess?

People had noticed the malware before, but it’s Chris Palmer (@fugueish) who noticed the implications. He’s a security engineer for Google who had just bought a new Lenovo laptop and noticed how it was man-in-the-middling his Bank of America connection. He spread the word to the rest of the security community, who immediately recognized how bad this is.

What’s the technical detail?

It does two things. The first is that SuperFish installs a transparent-proxy (MitM) service on the computer that intercepts browser connections. It appears to be based on Komodia’s “SSL Digestor”, described in detail here.

But such interception alone still cannot decrypt SSL. Therefore, SuperFish installs its own root CA certificate in the Windows system. It then generates certificates on the fly for each attempted SSL connection. Thus, on a Lenovo computer it appears as if SuperFish is the root CA of all the websites you visit. This allows SuperFish to intercept an encrypted SSL connection, decrypt it, and then re-encrypt it again.

Only the traffic from the browser to the SuperFish internal proxy uses these forged certificates. The traffic on the Internet still uses the normal website’s certificate, so we can’t tell if a machine is infected by SuperFish by looking at this traffic. However, SuperFish makes queries to additional webpages to download JavaScript, which may be detectable.

SuperFish’s advertising works by injecting JavaScript code into web pages. This is known to cause a lot of problems on websites.

The same root CA private key is used on every computer. This means that hackers at your local cafe WiFi hotspot, or the NSA eavesdropping on the Internet, can use that private key to likewise intercept all SSL connections from SuperFish users.

SuperFish is “adware” or “malware”

The company claims it’s providing a useful service, helping users do price comparisons. This is false. It’s really adware. They don’t even offer the software for download from their own website, and it’s hard to find a copy by Googling for it because your search results will be filled with help on removing it. The majority of companies that track adware label this as adware.

The company earns its money from those ads, and it pays companies (like Lenovo) to bundle the software against users’ will. It relies on the fact that unsophisticated users don’t know how to get rid of it, and will therefore endure the ads.

Lenovo’s response

Lenovo’s response is here. They have stopped including the software on new systems.

However, they still defend the concept of the software, claiming it’s helpful and wanted by users, when it’s clear to everyone else that most users do not want this software.

It’s been going on since at least June 2014

The earliest forum posting is from June of 2014. However, other people report that it’s not installed on their mid-2013 Lenovo laptops.

Here is a post from September 2014.

It’s legal

According to Lenovo, users are presented with a license agreement to accept the first time they load the browser. Thus, they have accepted this software, and it’s not a “hacker virus”. 
But this has long been the dodge of most adware: users rarely understand what they are agreeing to. So while it may be legal, the fact that users agreed to it doesn’t mean it isn’t bad.

Firefox is affected differently

Internet Explorer and Chrome use the Windows default certificates. Firefox has its own separate root certificates. Therefore, the tool apparently updates the Firefox certificate file separately.

Update: Following reports on the Internet, I said Firefox wasn’t affected. This tweet corrected me.

Uninstalling SuperFish leaves behind the root certificate

The problem persists even if the user uninstalls the software. They have to go into the Windows system and remove the certificate manually.

Other adware software does similar things

This post lists other software that does similar things.

How to uninstall the software?

First, run the “uninstall.exe” program that comes with the software. One way is from the installed programs list in Windows. Another way is to go to “C:\Program Files (x86)\Lenovo\VisualDiscovery\uninstall.exe”.
This removes the bad software, but the certificate is still left behind.
For Internet Explorer and Chrome, click on the “Start” menu and select “Run”. Run the program “certmgr.msc”. Select “Trusted Root Certification Authorities”, scroll down to “Superfish, Inc.” and delete it (right-click and select “Delete”, or select it and hit the delete key).
For Firefox, click on the main menu “Options”, “Advanced”, “Certificates”. The Certificate Manager pops up. Scroll down, select “Superfish, Inc.”, then “Delete or Distrust”. This is shown in the image below.
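
If you prefer the command line, roughly the same check and removal can be done with Windows’ built-in certutil tool. This is a hedged sketch, run from an elevated command prompt; the exact name match may vary:

rem list any Superfish certificate in the machine's trusted root store
certutil -store Root | findstr /i "Superfish"
rem remove it by its common name (only needed if the line above finds a match)
certutil -delstore Root "Superfish, Inc."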

How can you tell if you’re vulnerable? or if the removal worked?

Some sites test for you, like https://filippo.io/Badfish.

Do all machines have the same root certificate?

The EFF SSL observatory suggests yes, all the infected machines have the same root certificate. This means they can all be attacked. If they all had different certificates, then they wouldn’t be attackable.

Linux How-Tos and Linux Tutorials: Spinning Up a Server with the OpenStack API

This post was syndicated from: Linux How-Tos and Linux Tutorials and was written by: Jeff Cogswell. Original post: at Linux How-Tos and Linux Tutorials

In the previous article, we looked at the OpenStack API and how to get started using it. The idea is that you make HTTP calls to a server, which performs the requested command and gives back some information. These commands can simply gather information, such as giving you a list of your running servers, or do something more complex such as allocating servers. OpenStack includes an entire API for such operations, including managing storage, managing databases, allocating servers, de-allocating servers, creating images (which are used in the creation of servers), networking (such as allocating private networking), and more. In future articles we can look at tasks such as allocating private networking.

To make use of the API, you can take several routes. The simplest is to not make the calls through code, but rather use a console application. One such console application is the Horizon dashboard. Horizon is a fully-featured dashboard that runs in the browser, and allows you to perform OpenStack tasks. However, the dashboard is interactive in that you click and choose what you want to do. Some tasks you need to automate, in which case you’ll want to write scripts and programs. That’s where the APIs and SDKs come in. So let’s continue our discussion of the API, and try out a few more tasks.

Places to Test the API

As with the previous article, I’m using Rackspace for these examples. I’m in no way endorsing Rackspace, but simply using them because their OpenStack API is a nearly complete implementation of OpenStack. And if you want to practice with Rackspace’s OpenStack API, you can do so at very little cost. But there are other options as well.

If you head over to the OpenStack website, you’ll find a page for getting started with OpenStack which includes a section on premium clouds (such as Rackspace) as well as a nice local environment you can download and install called DevStack. (If you’re really interested in OpenStack, I would recommend DevStack so you can get some experience actually installing everything.) There’s also a pretty cool website called TryStack that you can check out. But for now I’ll keep it simple by using Rackspace.

Find the images

Let’s spin up a server with just an API call. To accomplish the API call, I’m going to use the cURL command-line tool. For this you’ll need to get an authentication token from Rackspace, as described in the previous article. Incidentally, here’s a quick tip that I didn’t provide last time: When you request the token, you get back a rather sizable JSON object that is not formatted at all. There are different ways to get this thing formatted into something we humans can read; I found a Chrome plugin that I like called JavaScript Unpacker and Beautifier, which you can find at http://jsbeautifier.org/. Note also that you might see backslashes before the forward slashes, because the JSON strings are escaped. You’ll need to remove those backslashes before using the URLs in actual API calls.

beautifier

In order to spin up a server, we also need to know what images are available. Images have different meanings depending on what cloud is using them. Here they’re essentially ISO images containing, for example, an Ubuntu installer. Here’s how we can list the publicly-available images using cURL:

curl -s https://iad.images.api.rackspacecloud.com/v2/images \
    -H 'X-Auth-Token: abcdef123456'

You would replace the abcdef123456 with your token. Note also that because we’re requesting information, we use a GET method instead of a POST. (GET is the default for cURL, so we don’t have to specify it.) When I run this command, I get back another big JSON object. This one lists 25 available images. (But there are actually more, as you’ll see shortly.)
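
If you want that object formatted without leaving the terminal, one option (a sketch, assuming a standard Python install) is to pipe the same call through Python’s built-in pretty-printer:

# pretty-print the JSON response on the command line
curl -s https://iad.images.api.rackspacecloud.com/v2/images \
    -H 'X-Auth-Token: abcdef123456' | python -m json.tool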

Now here’s another tip for dealing with these JSON objects: Go into the Chrome browser, and open up the dev tools by pressing F12. Then in the console, type

x =

and paste in the JSON text you got back from the cURL call. This will store the JSON object into a variable called x. Then you can explore the members of the object by expanding the array items and the object members, as shown in the following figure.

chrome json

Notice at the very end of the JSON object is a member called next. That’s because we’ve reached the limit of how many images Rackspace will return in a single response. Rackspace pages the data, so let’s request another page of data. To do so, we start with the URL given by the next field:

"next": "/v2/images?marker=abc123-f20f-454d-9f7d-abcdef"

This is the URL we use for the cURL command, prepended with https and the domain name, as in the sketch below. And then we get 25 more, as well as yet another next link. Looking through the 50 so far, I’m finding different images such as Debian Wheezy. I don’t really want to dig through all of these looking for the one I want, so let’s try another cURL call, but this time we’ll include some parameters.
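
A sketch of that follow-up request, reusing the placeholder marker value and token from above:

curl -s 'https://iad.images.api.rackspacecloud.com/v2/images?marker=abc123-f20f-454d-9f7d-abcdef' \
    -H 'X-Auth-Token: abcdef123456'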

If you go to this page in Rackspace’s documentation, we can see what the parameters are. There are two places we can find the parameters: we can find those that OpenStack in general supports by going to the OpenStack documentation, but providers may include additional operations beyond OpenStack. So I’ll look at Rackspace’s own documentation.

If you look at the JSON objects we got back, there are even more members than are listed in the documentation. One such parameter is os_distro. Let’s try searching on that. For these parameters, we tack them onto the URL as query parameters. Let’s find the Ubuntu distros:

curl -s https://iad.images.api.rackspacecloud.com/v2/images?os_distro=ubuntu \
    -H 'X-Auth-Token: abcdef123456'

It worked. I got back a big JSON object. Pasting it into Chrome’s dev tools, I can see I got back 10 objects. Now let’s suppose we’re working on a project that requires a 12.04 version of Ubuntu. It turns out Rackspace also has that information in the objects, so we can search on that as well. I’m going to add another parameter to my URL, which requires an ampersand. I don’t want the bash shell to interpret the ampersand, so I’ll add single quotes around my URL. Here goes:

curl -s 'https://iad.images.api.rackspacecloud.com/v2/images?os_distro=ubuntu&org.openstack__1__os_version=12.04' \
    -H 'X-Auth-Token: abcdef123456'

You can see how I included both the os_distro and a parameter for the version. Now I just got back three images, and I can pick one. Again I’ll pull these into Chrome to see what’s what. Of course, this is still totally interactive, which means we’ll need to figure out a way to grind through these in code instead of copying them into Chrome. We’ll take that up in a future article. For now, I’m going to pick the one with the name “Ubuntu 12.04 LTS (Precise Pangolin) (PV)”.

Choose a Flavor

Before we can spin up a server, we need to choose a type of server, which is called a flavor. Just as we listed the available images, we can list the available flavors:

curl -s https://iad.servers.api.rackspacecloud.com/v2/12345/flavors \
    -H 'X-Auth-Token: abcdef123456'

You would replace 12345 with your tenant ID and as usual the abcdef123456 with your authentication token. Notice that the second word in the URL is “servers” because flavors fall under the servers section of the API. When I ran this, I got back a JSON object with 38 different delicious flavors. For this test server, I’ll pick a small one. Here’s the second in the list of flavors:

{
    "id": "2",
    "links": [{
        "href": "https://iad.servers.api.rackspacecloud.com/v2/12345/flavors/2",
        "rel": "self"
    },
    {
        "href": "https://iad.servers.api.rackspacecloud.com/12345/flavors/2",
        "rel": "bookmark"
    }],
    "name": "512MB Standard Instance"
}

Now a quick point about this response: notice there are fields such as href and rel. This is in line with one common approach to a RESTful interface, whereby you get back an array of links that each include an href (the address) and a rel (a description or, more precisely, a relationship).
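
As a quick illustration (a sketch that assumes jq is installed, with $FLAVORS_JSON standing in for the list response captured above), you could pull out just the “self” links like this:

# extract the "self" href for every flavor in the list response
echo "$FLAVORS_JSON" | jq -r '.flavors[].links[] | select(.rel == "self") | .href'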

Using the first href, I can get back detailed information about this flavor:

curl -s https://iad.servers.api.rackspacecloud.com/v2/12345/flavors/2 \
    -H 'X-Auth-Token: abcdef123456'

This gives me back the following details:

{
    "flavor": {
        "OS-FLV-WITH-EXT-SPECS:extra_specs": {
            "policy_class": "standard_flavor",
            "class": "standard1",
            "disk_io_index": "2",
            "number_of_data_disks": "0"
        },
        "name": "512MB Standard Instance",
        "links": [{
            "href": "https://iad.servers.api.rackspacecloud.com/v2/12345/flavors/2",
            "rel": "self"
        },
        {
            "href": "https://iad.servers.api.rackspacecloud.com/12345/flavors/2",
            "rel": "bookmark"
        }],
        "ram": 512,
        "vcpus": 1,
        "swap": 512,
        "rxtx_factor": 80.0,
        "OS-FLV-EXT-DATA:ephemeral": 0,
        "disk": 20,
        "id": "2"
    }
}

That flavor should work for our test. Now finally, before we spin up the server, I need to make one more point. You might be noticing that while it would be nice to be able to automate all this through scripts, there’s also a certain amount of interactivity here that could lend itself to a simple application. You might, for example, build a small app that requests flavors and available Ubuntu images and provides a list of choices for a user (or even yourself). You could make the same API calls we did here, provide the user the option to choose the flavor and image, and then finally spin up the server. There are many possibilities here. But note that by nature of the RESTful interface, we start with an API call that returns to us a set of data as well as additional URLs for other API calls. We then use those URLs for future calls.
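
For the scripted route, here is a small sketch of how the image lookup might be automated. It assumes jq is installed; the token and filter parameters are the placeholders used throughout this article:

# list the id and name of every matching Ubuntu 12.04 image
TOKEN='abcdef123456'
curl -s 'https://iad.images.api.rackspacecloud.com/v2/images?os_distro=ubuntu&org.openstack__1__os_version=12.04' \
    -H "X-Auth-Token: $TOKEN" | jq -r '.images[] | "\(.id)  \(.name)"'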

Spin up the server

Now let’s finally spin up the server. You need the id of the image and the id of the flavor. Both of these are included in the JSON objects, both with the member name id. You also have to provide a name for your server: 

  • The id for the image is “71893ec7-b625-44a5-b333-ca19885b941d”.
  • The id for the flavor is 2.
  • The name we’ll go with is Ubuntu-1.

(Please don’t hardcode the image IDs, though, if you’re writing an app. Cloud hosts are continually updating their images and replacing old images, meaning this ID might not be valid tomorrow. That’s why you’ll want to traverse down through the results you get from the starting API calls.)

Creating a server requires a POST method. We use the same URL as listing servers, but the POST method tells Rackspace to create a server instead of listing it. For our ids and name, we construct a JSON object that we pass in through the -d parameter. Make sure you conform to true JSON, with member names enclosed in double-quotes. Here we go:

curl -X POST -s https://iad.servers.api.rackspacecloud.com/v2/12345/servers \
    -d '{"server": { "name": "Ubuntu-1", "imageRef":"71893ec7-b625-44a5-b333-ca19885b941d", "flavorRef":"2" }}' \
    -H 'X-Auth-Token: abcdef123456' \
    -H "Content-Type: application/json"

If you type this incorrectly, you’ll get an error message describing what went wrong (such as malformed request body, which can happen if your JSON isn’t coded right). But if done correctly, you’ll get back a JSON object with information about your server that’s being built:

{
    "server": {
        "OS-DCF:diskConfig": "AUTO",
        "id": "abcdef-02d0-41db-bb9f-abcdef",
        "links": [{
            "href": "https://iad.servers.api.rackspacecloud.com/v2/12345/servers/abcdef-02d0-41db-bb9f-abcdef",
            "rel": "self"
        },
        {
            "href": "https://iad.servers.api.rackspacecloud.com/12345/servers/f02de705-02d0-41db-bb9f-75a5eb5ebaf4",
            "rel": "bookmark"
        }],
        "adminPass": "abcdefXuS7KD34a"
    }
}

Pay close attention to the adminPass field. You’ll need that for logging into your server!

Then you can use the first href to get information about the server:

curl -s https://iad.servers.api.rackspacecloud.com/v2/12345/servers/abcdef-02d0-41db-bb9f-abcdef \
    -H 'X-Auth-Token: abcdef123456'

This tells me a lot of detail about the server, including its IP addresses. Here’s the first part of the JSON object:

{
    "server": {
        "status": "ACTIVE",
        "updated": "2015-02-09T19:35:41Z",
        "hostId": "abcdef4157ab9f2fca7d5ae77720b952565c9bb45023f0a44abcdef",
        "addresses": {
            "public": [{
                "version": 6,
                "addr": "2001:4802:7800:2:be76:4eff:fe20:4fba"
            },
            {
                "version": 4,
                "addr": "162.209.107.187"
            }],
            "private": [{
                "version": 4,
                "addr": "10.176.66.51"
            }]
        },

I can log into this using ssh, as shown in this screenshot:

server ssh
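
For reference, that login is just a plain ssh session to the public IPv4 address returned above, as root, using the adminPass from the create response (the values here are the illustrative ones from the earlier output):

# log in as root; you will be prompted for the adminPass returned at creation time
ssh root@162.209.107.187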

Now don’t forget to delete the server. We can do that through the Rackspace web portal, but why not use the API since we’re here? Here’s the cURL:

curl -X DELETE \
    -s https://iad.servers.api.rackspacecloud.com/v2/12345/servers/abcdef-02d0-41db-bb9f-abcdef \
    -H 'X-Auth-Token: abcdef123456'

And we’re done!

Conclusion

Spinning up a server is easy, if you follow the process of first obtaining information about images and flavors, and then using the ids from the image and flavor you choose. Make sure to use the URLs that you get back inside the JSON responses, as this will help your app conform to the rules of a RESTful interface. Next up, we’ll try using an SDK in a couple of languages.

Linux How-Tos and Linux Tutorials: How to Edit Images on Chromebook Like a Pro

This post was syndicated from: Linux How-Tos and Linux Tutorials and was written by: Swapnil Bhartiya. Original post: at Linux How-Tos and Linux Tutorials

Chromebooks are becoming extremely popular among users – both individuals and enterprises. These inexpensive devices are capable of doing almost everything that one can do on a full-fledged desktop PC.

I know it because my wife has become a full-time Chromebook user (she used to be on Mac OS X and then moved to Ubuntu, before switching to Chromebook full time). I myself am a Chromebook user and use it often – mostly to write my stories, which also require serious image editing. If you are a Chromebook user, this article will help you become a pro at image editing on your Chromebook.

Chrome’s native image editing tool

There is no dearth of image editing applications for Chrome OS powered devices. However, like any other image editing software, these apps are a bit resource intensive, so if you are looking for a decent experience you should be running one of the latest Chromebooks with a faster processor and as much RAM as possible.

It’s a lesser known fact that Chrome OS comes with a built-in image editor which allows one to do basic image editing. It’s a perfect tool for me when I adjust images for my articles.

chrome os built in

Open ‘Files’ and then click on the image that you want to edit. Once the image is open, you will notice an edit icon; click on it to enter the edit mode. In the edit mode you will see several editing actions. There is a checkbox which says ‘overwrite original’; if you don’t want to make any changes to the original image (and that’s highly recommended), uncheck the box and the editor will work on a copy, leaving the original image intact.

The first option is autofix, where the editor will try to adjust the image; I never do that. I trust my artistic skills more than I trust some algorithm. Other options are: cropping, brightness/contrast control, rotate and undo/redo. Here are some useful shortcuts for each action:

a – autofix
c – crop
b – brightness and contrast
r/l – rotate the image right or left
e – toggle between view and edit mode
Ctrl+z – undo changes
Ctrl+y – redo changes

If you make any mistake you can always undo the changes. The toolbar has some nifty preview options which allow you to see all images in a ‘Mosaic’ layout or as a slideshow. If you want to rename any of the images you are currently working on, click on the image name and give it a new name.

3rd party image editing tools

If the built-in image editor doesn’t suffice, there are many third-party applications to do the job.

1. Pixlr

One tool that comes closest to offering a Photoshop-like experience on a Chromebook is Autodesk’s Pixlr. However, they recently became self-destructive and embedded a wide skyscraper advertisement which leaves very little space for image editing.

chrome os pixlr

Autodesk should offer a paid, ad-free version of Pixlr; I wouldn’t mind paying $5 for a decent image editing tool. However, don’t get too upset about the ad: it doesn’t show up in full-screen mode. So if you want to focus on your work without being distracted by the ad, go to ‘View > Fullscreen’.

It comes with a plethora of tools and options; the left panel resembles the one from Photoshop or GIMP, including crop, move, marquee, lasso, wand tool, pencil, pen, quick select, brush, etc. That’s not all: the app comes with quite a good selection of filters and adjustment options.

chrome os pixlr

It supports layers, so you can stack layer upon layer to work with several images. If you have used GIMP or Photoshop, you won’t struggle to use the app. In most cases you may even be tempted to use this one over GIMP. This is by far the best image editing tool on Chrome OS devices.

2. Sumo Paint

Another neat image editing app is Sumo Paint. This app, unlike Pixlr, doesn’t have any ads, so you get more real estate to work with. The app has a similar UI, and also features several filters and adjustment layers.

chrome os sumo

Sumo and Pixlr are neck and neck when it comes to features; I found Sumo to be more useful than Pixlr, but that’s my personal preference. Since both apps are available free of cost, install both and see which one works best for you.

3. Polarr

As a photographer I depend heavily on Lightroom and Photoshop for my professional work, whereas Darktable and GIMP are deployed for personal projects. The good news for Chromebook users is that there is a nifty app, called Polarr, which can be a great replacement for Lightroom (don’t get me wrong, Lightroom and Photoshop are the best-of-breed image editing software, and if you are a professional user, you know there is nothing that beats them).

The best part of Polarr is that it also supports RAW image formats. That’s the format any serious photographer would use – it keeps all the image data from the sensor because the camera does minimal processing of the data. RAW formats like NEF can be opened directly in Polarr – no conversion needed! They are also working on version 2, which has an improved UI (you may mistake it for Lightroom).

Polarr comes with several filters/presets to give your images the desired treatment. The left panel features some of the image ‘effect’ presets – want that ‘Instagram’ look?

chrome os polarr instagram

The right panel offers the ‘most important’ tools one needs to fine-tune an image. Since the camera doesn’t process the RAW images, you can increase or decrease the exposure, and adjust the temperature and tint. Balance the lighting using Light & Shadow; increase black and white, add clarity and vibrance. That’s not all, there is much more.

chrome os polarr

chrome os polarr 2

Polarr is by far the best photograph management tool; you can import your entire folder into it and work on your images. Install the app from the Web Store, and try it yourself. If you have ever done any serious work in Lightroom, you will be amazed by this application.

Conclusion

In a nutshell, these apps allow me to work on my images without having to go to my Linux or Mac OS X boxes. I would not underrate these apps by calling them ‘basic’ image editing tools; they are way too advanced for that category. From the built-in image editing tool to Polarr, Chromebooks cover the entire spectrum of image editing – with Pixlr and Sumo Paint somewhere in the middle.

There is only one caveat though. These _are_ image editing apps and thus are quite resource hungry. If you want to do some serious image editing work and are planning to buy a Chromebook, make sure to get the most powerful processor and as much RAM as you can get. Try these apps and let us know which one works best for you.

Krebs on Security: Yet Another Flash Patch Fixes Zero-Day Flaw

This post was syndicated from: Krebs on Security and was written by: BrianKrebs. Original post: at Krebs on Security

For the third time in two weeks, Adobe has issued an emergency security update for its Flash Player software to fix a dangerous zero-day vulnerability that hackers already are exploiting to launch drive-by download attacks.

The newest update, version 16.0.0.305, addresses a critical security bug (CVE-2015-0313) present in the version of Flash that Adobe released on Jan. 27 (v. 16.0.0.296). Adobe said it is aware of reports that this vulnerability is being actively exploited in the wild via drive-by-download attacks against systems running Internet Explorer and Firefox on Windows 8.1 and below.

Adobe’s advisory credits both Trend Micro and Microsoft with reporting this bug. Trend Micro published a blog post three days ago warning that the flaw was being used in malvertising attacks – booby-trapped ads uploaded by criminals to online ad networks. Trend also published a more in-depth post examining this flaw’s use in the Hanjuan Exploit Kit, a crimeware package made to be stitched into hacked Web sites and foist malware on visitors via browser plug-in flaws like this one.

To see which version of Flash you have installed, check this link. Windows users who browse the Web with anything other than Internet Explorer may need to apply this patch twice, once with IE and again using the alternative browser (Firefox, Opera, e.g.).

The most recent versions of Flash should be available from the Flash home page, but beware potentially unwanted add-ons, like McAfee Security Scan. To avoid this, uncheck the pre-checked box before downloading, or grab your OS-specific Flash download from here.

IE10/IE11 on Windows 8.x and Chrome should auto-update their versions of Flash. Google Chrome version 40.0.2214.111 includes this update, and is available now. To check for updates in Chrome, click the stacked three bars to the right of the address bar in Chrome, and look for a listing near the bottom that says “Update Chrome.”

As I noted in a previous Flash post, short of removing Flash altogether — which may be impractical for some users — there are intermediate solutions. Script-blocking applications like Noscript and ScriptSafe are useful in blocking Flash content, but script blockers can be challenging for many users to handle.

My favorite in-between approach is click-to-play, which is a feature available for most browsers (except IE, sadly) that blocks Flash content from loading by default, replacing the content on Web sites with a blank box. With click-to-play, users who wish to view the blocked content need only click the boxes to enable Flash content inside of them (click-to-play also blocks Java applets from loading by default).

Windows users also should take full advantage of the Enhanced Mitigation Experience Toolkit (EMET), a free tool from Microsoft that can help Windows users beef up the security of third-party applications.

TorrentFreak: Google Chrome Dragged Into Internet Censorship Fight

This post was syndicated from: TorrentFreak and was written by: Ernesto. Original post: at TorrentFreak

Helped by the MPAA, Mississippi State Attorney General Jim Hood launched a secret campaign to revive SOPA-like censorship efforts in the United States.

The MPAA and Hood want Internet services to bring website blocking and search engine filtering back to the table after the controversial law failed to pass.

The plan became public through various emails that were released in the Sony Pictures leaks, and in a response Google said that it was “deeply concerned” about the developments.

To counter the looming threat Google filed a complaint against Hood last December, asking the court to quash a pending subpoena that addresses Google’s failure to take down or block access to illegal content, including pirate sites.

Recognizing the importance of this case, several interested parties have written to the court to share their concerns. There’s been support for both parties with some siding with Google and others backing Hood.

In a joint amicus curiae brief (pdf) the Consumer Electronics Association (CEA), Computer & Communications Industry Association (CCIA) and advocacy organization Engine warn that Hood’s efforts endanger free speech and innovation.

“No public official should have discretion to filter the Internet. Where the public official is one of fifty state attorneys general, the danger to free speech and to innovation is even more profound,” they write.

According to the tech groups it would be impossible for Internet services to screen and police the Internet for questionable content.

“Internet businesses rely not only on the ability to communicate freely with their consumers, but also on the ability to give the public ways to communicate with each other. This communication, at the speed of the Internet, is impossible to pre-screen.”

Not everyone agrees with this position though. On the other side of the argument we find outfits such as Stop Child Predators, Digital Citizens Alliance, Taylor Hooton Foundation and Ryan United.

In their brief they point out that Google’s services are used to facilitate criminal practices such as illegal drug sales and piracy. Blocking content may also be needed to protect children from other threats.

“Google’s YouTube service has been used by those seeking to sell steroids and other illegal drugs online,” they warn, adding that the video platform is also “routinely used to distribute other content that is harmful to minors, such as videos regarding ‘How to Buy Smokes Under-Age’, and ‘Best Fake ID Service Around’.”

Going a step further, the groups also suggest that Google should filter content in its Chrome browser. The brief mentions that Google recently removed Pirate Bay apps from its Play Store, but failed to block the site in search results or Chrome.

“In December 2014, responding to the crackdown on leading filesharing website PirateBay, Google removed a file-sharing application from its mobile software store, but reports indicate that Google has continued to allow access to the same and similar sites through its search engine and Chrome browser,” they write.

The Attorney General should be allowed to thoroughly investigate these threats and do something about it, the groups add.

“It is simply not tenable to suggest that the top law enforcement officials of each state are powerless even to investigate whether search engines or other intermediaries such as Google are being used—knowingly or unknowingly—to facilitate the distribution of illegal content…”

In addition to the examples above, several other organizations submitted amicus briefs arguing why the subpoena should or shouldn’t be allowed under the First Amendment and Section 230 of the CDA, including the International AntiCounterfeiting Coalition, EFF, the Center for Democracy & Technology and Public Knowledge.

Considering the stakes at hand, both sides will leave no resource untapped to defend their positions. In any event, this is certainly not the last time we’ll hear of the case.

Source: TorrentFreak, for the latest info on copyright, file-sharing, torrent sites and anonymous VPN services.

SANS Internet Storm Center, InfoCON: green: Improving SSL Warnings, (Sun, Feb 1st)

This post was syndicated from: SANS Internet Storm Center, InfoCON: green and was written by: SANS Internet Storm Center, InfoCON: green. Original post: at SANS Internet Storm Center, InfoCON: green

One of the things that has concerned me for the last few years is how we are slowly creating a click-thru culture.

I honestly believe the intent is correct, but the implementation is faulty. The messages are not in tune with the average Internet user’s knowledge level. In other words, the warnings are incomprehensible to my sister, my parents and my grandparents, the average Internet users of today. Given a choice between going to their favorite website or trusting an incomprehensible warning message… well, you know what happens next.

A team at Google has been looking at these issues and is driving browser changes in Chrome based on their research. As they point out, the vast majority of these errors are attributable to webmaster mistakes, with only a very small fraction being actual attacks.

The paper is Improving SSL Warnings: Comprehension and Adherence, and there is an accompanying presentation.

– Rick Wanner MSISE – rwanner at isc dot sans dot edu – http://namedeplume.blogspot.com/ – Twitter:namedeplume (Protected)

(c) SANS Internet Storm Center. https://isc.sans.edu Creative Commons Attribution-Noncommercial 3.0 United States License.

TorrentFreak: Huge Security Flaw Leaks VPN Users’ Real IP-Addresses

This post was syndicated from: TorrentFreak and was written by: Ernesto. Original post: at TorrentFreak

The Snowden revelations have made it clear that online privacy is certainly not a given.

Just a few days ago we learned that the Canadian Government tracked visitors of dozens of popular file-sharing sites.

As these stories make headlines around the world interest in anonymity services such as VPNs has increased, as even regular Internet users don’t like the idea of being spied on.

Unfortunately, even the best VPN services can’t guarantee to be 100% secure. This week a very concerning security flaw revealed that it’s easy to see the real IP-addresses of many VPN users through a WebRTC feature.

With a few lines of code websites can make requests to STUN servers and log users’ VPN IP-address and the “hidden” home IP-address, as well as local network addresses.

The vulnerability affects WebRTC-supporting browsers including Firefox and Chrome and appears to be limited to Windows machines.

A demo published on GitHub by developer Daniel Roesler allows people to check if they are affected by the security flaw.

IP-address leak
nkoreaip

The demo claims that browser plugins can’t block the vulnerability, but luckily this isn’t entirely true. There are several easy fixes available to patch the security hole.

Chrome users can install the WebRTC block extension or ScriptSafe, which both reportedly block the vulnerability.

Firefox users should be able to block the request with the NoScript addon. Alternatively, they can type “about:config” in the address bar and set the “media.peerconnection.enabled” setting to false.

peerconn
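
For a persistent version of that Firefox fix, the same preference can also be set in a user.js file in the Firefox profile directory. This is a hedged sketch; the profile path varies per installation:

// disables WebRTC peer connections, closing the STUN-based IP leak described above
user_pref("media.peerconnection.enabled", false);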

TF asked various VPN providers to share their thoughts and tips on the vulnerability. Private Internet Access told us that they are currently investigating the issue to see what they can do on their end to address it.

TorGuard informed us that they issued a warning in a blog post along with instructions on how to stop the browser leak. Ben Van Der Pelt, TorGuard’s CEO, further informed us that tunneling the VPN through a router is another fix.

“Perhaps the best way to be protected from WebRTC and similar vulnerabilities is to run the VPN tunnel directly on the router. This allows the user to be connected to a VPN directly via Wi-Fi, leaving no possibility of a rogue script bypassing a software VPN tunnel and finding one’s real IP,” Van der Pelt says.

“During our testing Windows users who were connected by way of a VPN router were not vulnerable to WebRTC IP leaks even without any browser fixes,” he adds.

While the fixes above are all reported to work, the leak is a reminder that anonymity should never be taken for granted.

As is often the case with these types of vulnerabilities, VPN and proxy users should regularly check if their connection is secure. This also includes testing against DNS leaks and proxy vulnerabilities.

Source: TorrentFreak, for the latest info on copyright, file-sharing, torrent sites and anonymous VPN services.

Krebs on Security: Yet Another Emergency Flash Player Patch

This post was syndicated from: Krebs on Security and was written by: BrianKrebs. Original post: at Krebs on Security

For the second time in a week, Adobe has issued an emergency update to fix critical security flaws that crooks are actively exploiting in its Flash Player software. Updates are available for Flash Player on Windows and Mac OS X.

Last week, Adobe released an out-of-band Flash Patch to fix a dangerous bug that attackers were already exploiting. In that advisory, Adobe said it was aware of yet another zero-day flaw that also was being exploited, but that last week’s patch didn’t fix that flaw.

Earlier this week, Adobe began pushing out Flash v. 16.0.0.296 to address the outstanding zero-day flaw. Adobe said users who have enabled auto-update for Flash Player will be receiving the update automatically this week. Alternatively, users can manually update by downloading the latest version from this page.

Adobe said it is working with its distribution partners to make the update available in Google Chrome and Internet Explorer 10 and 11. Google Chrome version 40.0.2214.93 includes this update, and is available now. To check for updates in Chrome, click the stacked three bars to the right of the address bar in Chrome, and look for a listing near the bottom that says “Update Chrome.”

To see which version of Flash you have installed, check this link. Windows users who browse the Web with anything other than Internet Explorer may need to apply this patch twice, once with IE and again using the alternative browser (Firefox, Opera, e.g.).

Linux How-Tos and Linux Tutorials: How to Install a Seafile Server to Run a Private Cloud

This post was syndicated from: Linux How-Tos and Linux Tutorials and was written by: Swapnil Bhartiya. Original post: at Linux How-Tos and Linux Tutorials

Cloud is a buzzword these days; everyone is moving to the cloud even if most of us don’t even know what it actually means. To me, cloud is a fictional place that processes and stores my data; in the process it liberates me from that one device where my data is stored. With ‘Cloud’ I can access my data from any networked device.

What actually happens is that my data moves from my local machine to a remote machine or a remote cluster of machines – the storage and processing of the data happens at those machines.

This ‘movement’ of data changes things dramatically. If I don’t ‘own’ those remote machines, the one who does also becomes the ‘co-owner’ of my data. The ‘co-owner’ will scan my private data to see if it infringes upon any copyrights and it may block access to my own data for numerous, unclear reasons.

There was one incident where Microsoft allegedly blocked a user from accessing their own data after the company found some objectionable content in the user’s private folder. I wonder what Microsoft was doing in a private folder?

The point is, I don’t trust third party cloud providers, and cases like these further reinforce my belief to not trust them.

That’s why I keep all of my private data on a cloud that I run and own. I have used a couple of open source file sync and storage solutions, including ownCloud, and recently came to know about Seafile which is quickly becoming my favorite.

A few weeks ago I installed Seafile on my server and made it my primary cloud. Since open source is all about sharing, let’s share the procedure I followed to install Seafile on a server.

My server

I am running Seafile on a Virtual Private Server (VPS) running fully patched Ubuntu 14.04. So get yourself an Ubuntu or Debian machine and let’s get started.

Step #1 Install and secure MariaDB

I don’t use MySQL and strongly recommend MariaDB instead. To get the latest version of MariaDB, which is 10.x (I don’t recommend the 5.x branch), on Ubuntu you need to enable extra repositories. Check out this page to get instructions for adding the appropriate repository for your OS. Since I am using Ubuntu 14.04, I added the repo through the following steps:

sudo apt-get install software-properties-common
sudo apt-key adv --recv-keys --keyserver hkp://keyserver.ubuntu.com:80 0xcbcb082a1bb943db
sudo add-apt-repository 'deb http://nyc2.mirrors.digitalocean.com/mariadb/repo/10.0/ubuntu trusty main'

Update the repositories and install MariaDB:

sudo apt-get update
sudo apt-get install mariadb-server

During the installation, MariaDB will ask for a root password for the database, which is different from the system root password. Enter the desired password to proceed.

mariadb

Now we need to secure the database, but we need to kill the database server daemon before we proceed to the next step or you will encounter an error:

sudo killall mysqld

Now run the following command:

sudo mysql_install_db

Once it runs successfully start the database server:

sudo service mysql start

Then run this command:

sudo mysql_secure_installation

It will ask you to provide the root password. In the next step, it will ask whether you want to change the root password for the database: say no. In the rest of the steps, say ‘yes’ to everything. If everything works fine then you will see this message:

Thanks for using MariaDB!

Step #2 Install Apache

Now it’s time to install the web server and enable the needed modules. On this server I am using Apache with FastCGI. Since FastCGI is not available through the default repositories, we have to enable the Multiverse repository. In most cases, depending on your VPS provider, the multiverse repos are available in the sources list but commented out. Open the sources list file and uncomment them:

sudo nano /etc/apt/sources.list

If the repositories are not in the sources.list file, then add them from this page of the Ubuntu Wiki.

The default Ubuntu repositories look like the ones below, but you may want to find a mirror closer to your server for better performance:

deb http://us.archive.ubuntu.com/ubuntu/ trusty multiverse
deb-src http://us.archive.ubuntu.com/ubuntu/ trusty multiverse
deb http://us.archive.ubuntu.com/ubuntu/ trusty-updates multiverse
deb-src http://us.archive.ubuntu.com/ubuntu/ trusty-updates multiverse

Once the multiverse repos are enabled, run an update and install the two packages:

sudo apt-get update
sudo apt-get install apache2 libapache2-mod-fastcgi

Then enable these modules:

a2enmod rewrite
a2enmod fastcgi
a2enmod proxy_http

Step #3 Configure Vhost

Before we move ahead let’s create the web directory where we will download Seafile packages. On Ubuntu it should be under /var/www/

sudo mkdir -p /var/www/directory_name

example

sudo mkdir -p /var/www/sea

Now we have to create a vhost file for the seafile server:

nano /etc/apache2/sites-available/your_vhost_name.conf

Example

nano /etc/apache2/sites-available/sea.conf

The vhost file should look something like the one below:

<VirtualHost *:80>
 ServerName www.your-domain-name.com
 # Use "DocumentRoot /var/www/html" for Centos/Fedora
 # Use "DocumentRoot /var/www" for Ubuntu/Debian
 DocumentRoot /var/www/your-directory/
 Alias /media /var/www/your-directory/seafile-server-latest/seahub/media
 RewriteEngine On
    <Location /media>
        Require all granted
    </Location>
 # seafile fileserver
 ProxyPass /seafhttp http://127.0.0.1:8082
 ProxyPassReverse /seafhttp http://127.0.0.1:8082
 RewriteRule ^/seafhttp - [QSA,L]
 # seahub
 RewriteRule ^/(media.*)$ /$1 [QSA,L,PT]
 RewriteCond %{REQUEST_FILENAME} !-f
 RewriteRule ^(.*)$ /seahub.fcgi$1 [QSA,L,E=HTTP_AUTHORIZATION:%{HTTP:Authorization}]
</VirtualHost>

In your vhost you have to change three things:

  • ServerName: change it to reflect the URL of your server.
  • DocumentRoot: provide the path to the directory we created above.
  • Alias: /media /var/www/your_directory_path/seafile-server-latest/seahub/media

Open the apache.conf file

nano /etc/apache2/apache2.conf

and add this line at the end (don’t forget to change the path of the directory):

FastCGIExternalServer /var/www/your_directory_path/seahub.fcgi -host 127.0.0.1:8000

Step #4 Install Seafile

First install the packages needed by Seafile:

apt-get install python2.7 python-setuptools python-imaging python-mysqldb python-flup

Now let’s ‘cd’ to the newly created directory where we will install Seafile:

cd /var/www/sea/

wget the latest Seafile packages into this directory (you should check the download page for the latest release):

sudo wget https://bitbucket.org/haiwen/seafile/downloads/seafile-server_4.0.5_x86-64.tar.gz

Extract the files:

tar xzvf seafile-server*

Then cd to the extracted ’seafile-server’ directory

cd seafile-server*

Run this script which will create the required databases and directories for the Seafile server:

./setup-seafile-mysql.sh

This script will guide you through setting up your Seafile server using MySQL. Choose the default options for steps 3–6:

Press “ENTER” to continue, then:
1: Give the server name
2: Server IP or domain
3: Default port
4: Where do you want to put your seafile data?
5: Which port do you want to use for the seafile server?
6: Which port do you want to use for the seafile fileserver?
7: Create user (if you don’t have users, choose option [1], which will automatically create the databases and users.)

If you chose option [1] to create the databases, you will come across the following options. In option 4, instead of using ‘root’ as the user for the Seafile database, create a new user. In my case, I created a user named ‘seau’. Leave everything else as is.

1 What is the host of mysql server?
[ default “localhost” ] 
2 What is the port of mysql server?
[ default “3306” ] 
3 What is the password of the mysql root user?
[ root password ] 
verifying password of user root … done
4 Enter the name for mysql user of seafile. It would be created if not exists.
[ default “root” ] seau
5 Enter the password for mysql user “seau”:
[ password for seau ] 
6 Enter the database name for ccnet-server:
[ default “ccnet-db” ] 
7 Enter the database name for seafile-server:
[ default “seafile-db” ] 
8 Enter the database name for seahub:
[ default “seahub-db” ]

Once done the script will give you a summary of the tasks performed.

Now we have to edit two configuration files: ccnet.conf and seahub_settings.py. These files reside in the document root directory.

Open ccnet.conf with desired editor, I use nano:

sudo nano /var/www/your-directory/ccnet/ccnet.conf

In this file check that the ‘SERVICE_URL’ points to the correct domain.

SERVICE_URL = http://www.your_domain.com:8000

Now edit the second config file:

sudo nano /var/www/your-directory/seahub_settings.py

and add the following line before DATABASES

FILE_SERVER_ROOT = 'http://www.your-domain.com/seafhttp'

Step #5 Start the server

First we have to enable the site we configured within the Apache configuration in Step #3 Configure Vhost:

a2ensite your_vhost_name.conf

In my case it was:

a2ensite sea.conf

Then restart apache:

service apache2 restart

Now let’s run the Seafile server:

/var/www/your-directory/seafile-server-latest/seafile.sh start
/var/www/your-directory/seafile-server-latest/seahub.sh start-fastcgi

The second command will ask you to create an admin account for your Seafile server, which will be an existing email ID and password. This email ID and password will be used to log into your server.

That’s it. You are all set.
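
Before opening the browser, you can quickly confirm that both daemons came up (a hedged check; ports 8082 and 8000 are the defaults chosen during setup):

# seafile's fileserver should be listening on port 8082 and seahub's fastcgi on port 8000
sudo netstat -tlnp | grep -E ':8000|:8082'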

Open any web browser (Chrome is recommended) and enter the site URL or IP address of your server.

Example:

www.your-domain.com

or

10.20.11.11

seafile

This will open the login page of your Seafile server. Enter the username and password which you created above, and you will be logged into your very own Seafile server! Bye bye Dropbox!

Getting started with Seafile server

Seafile uses a different model. Unlike Dropbox or ownCloud, everything is a library here. You can think of these as directories. These Libraries are the ones that are synced between different machines using desktop clients.

You can either create desired folders inside the default ‘My Library’ or create new Libraries if you want more flexibility with syncing. I simply deleted the default ‘My Library’ and created a couple of Libraries such as Images, Documents, eBooks, Music, Movies, etc. The great news about Seafile is that you can encrypt these libraries right from the web browser.

Go ahead and download the desktop client for your OS. When you run the client for the first time it will ask for the location where you would like the client to keep files.

seafile

Enter the account details for the server. Then right click on the library that you want to sync with this machine.

seafile file sync

seafile desktop

The client will give you the option to choose the desired location for this library. This is the one part that I love the most about Seafile, as I can have different Libraries synced with folders on different partitions.

That’s all! Enjoy your very own ‘Seafile Cloud Server’.

SANS Internet Storm Center, InfoCON: green: Adobe updates Security Advisory for Adobe Flash Player, Infocon returns to green, (Mon, Jan 26th)

This post was syndicated from: SANS Internet Storm Center, InfoCON: green and was written by: SANS Internet Storm Center, InfoCON: green. Original post: at SANS Internet Storm Center, InfoCON: green

On Saturday, 24 JAN 2015, Adobe updated their Security Advisory for Adobe Flash Player specific to CVE-2015-0311. From the update:

Users who have enabled auto-update for the Flash Player desktop runtime will be receiving version 16.0.0.296 beginning on January 24. This version includes a fix for CVE-2015-0311. Adobe expects to have an update available for manual download during the week of January 26, and we are working with our distribution partners to make the update available in Google Chrome and Internet Explorer 10 and 11. For more information on updating Flash Player please refer to this post.

To that end we are returning the InfoCon to GREEN. Please ensure you apply updates as soon as possible and stay tuned here as additional related information becomes available. | @holisticinfosec

(c) SANS Internet Storm Center. https://isc.sans.edu Creative Commons Attribution-Noncommercial 3.0 United States License.

SANS Internet Storm Center, InfoCON: yellow: Flash 0-Day: Deciphering CVEs and Understanding Patches, (Fri, Jan 23rd)

This post was syndicated from: SANS Internet Storm Center, InfoCON: yellow and was written by: SANS Internet Storm Center, InfoCON: green. Original post: at SANS Internet Storm Center, InfoCON: yellow

(updated with the Jan 24th update)

Over the last two weeks, we have so far had two different Adobe advisories (one regularly scheduled, and one out of band), and three new vulnerabilities. I would like to help our readers decipher some of the CVEs and patches that you may have seen.

[Table: each CVE, the Flash version that fixes it, and whether it is being exploited in the wild; the still-unpatched issue is tracked in advisory APSA15-01.]

So in short: there is still one unpatched Flash vulnerability. Systems running Windows 8 or below with Firefox or Internet Explorer are vulnerable. You are not vulnerable if you are running Windows 8.1, and the vulnerability is not exposed via Chrome. EMET appears to help, as may other tools like Malwarebytes Anti-Exploit.


Johannes B. Ullrich, Ph.D.
STI|Twitter|LinkedIn

(c) SANS Internet Storm Center. https://isc.sans.edu Creative Commons Attribution-Noncommercial 3.0 United States License.

Linux How-Tos and Linux Tutorials: How to Install and Update Software on openSUSE Like a Pro

This post was syndicated from: Linux How-Tos and Linux Tutorials and was written by: Jack Wallen. Original post: at Linux How-Tos and Linux Tutorials

There are so many reasons why you might be considering the migration to SUSE or openSUSE. For some, it’s the logical step to integrating Linux into a business environment (SUSE paid support is phenomenal and the openSUSE community is always at the ready to help). To others, it’s one of the most power-user friendly Linux distributions on the market.

Regardless of why you are considering a move to the SUSE ecosystem (be it through SUSE or openSUSE), it’s best you know the tools of the trade before you make the leap. Fortunately, as with the whole of the Linux landscape, package management is an incredibly user-friendly task ─ when you know what you’re looking for.

Some distributions make the process of managing software incredibly easy. Take, for instance, Ubuntu Linux. Front and center on the Launcher is the Ubuntu Software Center icon. Click that icon and search hundreds of thousands of apps to install. With openSUSE, you won’t find that launcher so up front and center, but the tool is easy to locate and easy to use.

Let’s dive into the world of package management with openSUSE, from the GUI perspective. After giving this a read, you should be able to easily install software, update your machine, and even add repositories (so you can install third-party applications).

YaST2 is all you need

One outstanding element of the SUSE-verse is that they centralize the vast majority of their system management into a single tool called YaST2 (Yet Another Setup Tool). From within YaST2 you can do a great many things ─ one of which is manage the software on your system.

I’m going to be working with the latest release of openSUSE (13.2) and the KDE desktop. If you’ve opted for the GNOME desktop environment, YaST2 itself will be the same ─ only how you get to it changes.

The easiest way to get to YaST2 is to open up the KDE “K” menu and type “yast” in the search field (Figure 1). When the YaST2 entry appears, click it to fire up the tool.

yast 1

 Once YaST2 is open, click on the Software entry in the left navigation (Figure 2) to reveal all of the available software-related entries.

yast 2 

Installing software

The first thing I want to demonstrate is how to install a piece of software. This is quite simple. From within the Software section of YaST2, click Software Management and wait for the software management tool to open.

  1. Enter the title of the software you want to install in the Search field.

  2. Click Search.

  3. When the software appears in the main panel, click the associated check box (Figure 3).

  4. Click Accept.

  5. Read through the dependencies (a popup will appear).

  6. If the dependencies are acceptable, click Continue.

  7. Allow the installation to complete.

  8. When the installation is complete, click Finish.

yast 3

That’s it! You’ve officially installed your first piece of software on openSUSE.
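
If you happen to be working on a machine without a desktop (or simply prefer a terminal), zypper can handle the same task from the command line. A quick sketch, with gimp standing in for whatever package you’re actually after:

$ zypper search gimp
$ sudo zypper install gimp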

Updating software

One of the most important things you can do with YaST2 is update your system. Updates are crucial as they often contain security patches and bug fixes. Updates are handled from within the same YaST2 sub-section (Software). Within that sub-section, you will find an entry called Online Update. Click that and YaST2 will check for available updates. When the check is complete, you will be presented with a full listing of what is available (Figure 4).

Upgrading your system with YaST2.

By default, all available upgrades will be selected for processing. You can comb through the package listing and de-select any packages you might not want to upgrade. However, if you opt to remove packages from the upgrade list, know that doing so can impact other upgrades as well. If you’re okay with the list, click Accept and the upgrade will begin.

NOTES: In some instances (as with the upgrade of any Adobe packages), you may have to accept an End User License Agreement (EULA). There may also be conflict resolution to deal with. To resolve any issues, click Continue when presented with the dependency resolutions. If the kernel is being updated, YaST will inform you that a reboot will be necessary. To continue after this warning, you must click Continue (Figure 5).

yast 5

Depending upon how many updates are available, the process can take a while. Sit back and enjoy or go about administering your other machines or network. Once the update completes, reboot the machine (if prompted) and enjoy the latest iteration of your software packages.
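
If you’d rather handle updates from a terminal (say, over SSH), zypper offers the same functionality. A rough equivalent of Online Update ─ refresh the repositories, then apply the official patches ─ looks like this:

$ sudo zypper refresh
$ sudo zypper patch

By contrast, zypper update pulls in all newer package versions rather than just the official fixes.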

Adding repositories

Now we get into something that may be a bit more challenging for newcomers. First and foremost, what is a software repository? Software repositories are simply online locations that house packages for installation. The openSUSE platform has its own official repositories, and many other applications have their own. When you search for a piece of software to install within YaST2 ─ a software title you know exists for Linux ─ and it doesn’t appear in the search results, most likely YaST2 simply doesn’t know where to find it. Because of this, you have to tell YaST2 where that software can be found: a software repository.
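
Incidentally, if you want to see which repositories your installation already knows about, zypper will list them, URLs included:

$ zypper lr --uri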

Let’s say, for instance, you want to install the Google Chrome browser onto openSUSE. To do this, you will have to first add the official Google repository. Here are the steps:

  1. Open YaST2

  2. Click on Software (left panel)

  3. Click on Software Repositories (right panel)

  4. From the Software Repositories click Add (Figure 6)

    Figure 6: Adding a new software repository.

  5. Select Specify URL and click Next

  6. Name the repository Google Chrome

  7. Enter the URL http://dl.google.com/linux/rpm/stable/i386 (Figure 7)

    yast 7

  8. Click Next

  9. Click OK

  10. Click Yes (when prompted) to accept the GnuPG Key.

NOTE: If you are using a 64-bit machine, the above URL would change to http://dl.google.com/linux/rpm/stable/x86_64
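
If you’d rather do the whole thing from a terminal, zypper can add the repository and install the browser in one go. This is just a sketch: the google-chrome alias is arbitrary, google-chrome-stable is the package Google publishes in that repository, and you’d swap in the i386 URL on a 32-bit machine:

$ sudo zypper ar -f http://dl.google.com/linux/rpm/stable/x86_64 google-chrome
$ sudo zypper refresh
$ sudo zypper in google-chrome-stable

During the refresh, zypper will prompt you to trust Google’s signing key, much like the GnuPG prompt in YaST2.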

At this point, you can now go back to the Software Management section, search for Google Chrome, and install (Figure 8).

Figure 8: You can now install Google Chrome on openSUSE.

If you find a package you want to install on openSUSE, and it doesn’t show up in YaST2, a bit of googling should locate an available repository for the platform.

Managing software on openSUSE is not in the least bit challenging. Once you know where to look and what to do, you can be installing and updating software like a pro.

 

 

Krebs on Security: Flash Patch Targets Zero-Day Exploit

This post was syndicated from: Krebs on Security and was written by: BrianKrebs. Original post: at Krebs on Security

Adobe today released an important security update for its Flash Player software that fixes a vulnerability which is already being exploited in active attacks. Compounding the threat, the company said it is investigating reports that crooks may have developed a separate exploit that gets around the protections in this latest update.

Early indicators of a Flash zero-day vulnerability came this week in a blog post by Kafeine, a noted security researcher who keeps close tabs on new innovations in “exploit kits.” Often called exploit packs, exploit kits are automated software tools that help thieves booby-trap hacked sites to deploy malicious code.

Kafeine wrote that a popular crimeware package called the Angler Exploit Kit was targeting a previously undocumented vulnerability in Flash that appears to work against many different combinations of the Internet Explorer browser and Microsoft Windows systems.

Attackers may be targeting Windows and IE users now, but the vulnerability fixed by this update exists in versions of Flash that run on Mac and Linux as well. The Flash update brings the media player to version 16.0.0.287 on Mac and Windows systems, and 11.2.202.438 on Linux.

While Flash users should definitely update as soon as possible, there are indications that this fix may not plug all of the holes in Flash for which attackers have developed exploits. In a statement released along with the Flash update today, Adobe said its patch addresses a newly discovered vulnerability that is being actively exploited, but that there appears to be another active attack this patch doesn’t address.

“Adobe is aware of reports that an exploit for CVE-2015-0310 exists in the wild, which is being used in attacks against older versions of Flash Player,” Adobe said. “Additionally, we are investigating reports that a separate exploit for Flash Player 16.0.0.287 and earlier also exists in the wild.”

To see which version of Flash you have installed, check this link. IE10/IE11 on Windows 8.x and Chrome should auto-update their versions of Flash, although as of this writing it seems that the latest version of Chrome (40.0.2214.91) is still running v. 16.0.0.257.

The most recent versions of Flash are available from the Flash home page, but beware potentially unwanted add-ons, like McAfee Security Scan. To avoid this, uncheck the pre-checked box before downloading, or grab your OS-specific Flash download from here.

Windows users who browse the Web with anything other than Internet Explorer may need to apply this patch twice, once with IE and again using the alternative browser (e.g., Firefox or Opera).

I am looking forward to the day when far fewer sites require Flash Player to view content and instead rely on HTML5 for rendering video. For now, it’s probably impractical for most users to remove Flash altogether, but there are in-between options to limit automatic rendering of Flash content in the browser. My favorite is click-to-play, a feature available for most browsers (except IE, sadly) that blocks Flash content from loading by default, replacing the content on Web sites with a blank box. With click-to-play, users who wish to view the blocked content need only click the boxes to enable Flash content inside of them (click-to-play also blocks Java applets from loading by default).

Windows users also should take full advantage of the Enhanced Mitigation Experience Toolkit (EMET), a free tool from Microsoft that can help Windows users beef up the security of third-party applications.

Darknet - The Darkside: Flash Zero Day Being Exploited In The Wild

This post was syndicated from: Darknet - The Darkside and was written by: Darknet. Original post: at Darknet - The Darkside

This is not the first Flash Zero Day and it certainly won’t be the last. Thanks to the sandbox implemented in Chrome since 2011, users of that browser are fairly safe. Those using IE are in danger (as usual), as are certain versions of Firefox. It has been rolled into the popular Angler Exploit Kit, […]

The post Flash Zero Day Being…

Read the full post at darknet.org.uk

SANS Internet Storm Center, InfoCON: green: Flash 0-Day Exploit Used by Angler Exploit Kit, (Wed, Jan 21st)

This post was syndicated from: SANS Internet Storm Center, InfoCON: green and was written by: SANS Internet Storm Center, InfoCON: green. Original post: at SANS Internet Storm Center, InfoCON: green

The Angler exploit kit is a tool frequently used in drive-by download attacks to probe the browser for different vulnerabilities, and then exploit them to install malware. The exploit kit is very flexible and new exploits are added to it constantly.

However, the blog post below shows how this exploit kit is currently using an unpatched Flash 0-day to install malware. Current versions of Windows (e.g. Windows 8 + IE 10) appear to be vulnerable. Windows 8.1 and Google Chrome do not appear to be vulnerable.

This is still a developing story, but typically we see these exploits more in targeted attacks, not in widely used exploit kits. This flaw could affect a large number of users very quickly. Please refer to the original blog for details.

[1] http://malware.dontneedcoffee.com/2015/01/unpatched-vulnerability-0day-in-flash.html


Johannes B. Ullrich, Ph.D.
STI|Twitter|LinkedIn

(c) SANS Internet Storm Center. https://isc.sans.edu Creative Commons Attribution-Noncommercial 3.0 United States License.

Linux How-Tos and Linux Tutorials: How to Stream Content from a Linux System to Chromecast

This post was syndicated from: Linux How-Tos and Linux Tutorials and was written by: Swapnil Bhartiya. Original post: at Linux How-Tos and Linux Tutorials

chromecast app login

Chromecast is one of the most used devices in my household. After using it for over a year now, I believe there is no longer a market for the so-called ‘smart TV’. Inexpensive devices like Chromecast can turn any HDMI-enabled TV into a smart TV with immense possibilities to expand its features.

Google continues to add new features to Chromecast, except for one much-needed feature: native support for playback of local content. There is no _easy_ way to stream content sitting on your smart phone or desktop to Chromecast. Let me be honest, there are some Chrome apps which can play videos stored on your computer, but none offer a desirable solution.

However, nothing is impossible for a Linux user. 

What’s desirable? The Chromecast is plugged into the TV in the living room whereas my PCs and hard drives are in my office. There are three doors between these two rooms and I don’t want to shuttle between my living room and office to play movies. I want the control to be in my hands while I lie on the couch. The data remains on my PCs and I can use my Android devices to stream content to Chromecast, without having to get up. I am lazy!

Well, that’s exactly what I have done. I have created a local file server on my Linux box, which allows me to access movies, music and images from any device over the local network. Then I use an Android app which works as a remote to access and stream these files to the Chromecast. And I will show you how to do this, too.

Let’s get started. First things first: let’s make our data accessible over the local network, and there is nothing better than setting up a Samba server. There are different ways of installing and configuring Samba on different distributions. Since I run openSUSE, Arch Linux and Kubuntu on my PCs, in this tutorial I will focus on openSUSE and the Ubuntu family (Arch users can refer to the official wiki).

Install Samba Server

The chances are that Samba is already installed on your system; in that case, skip this section and fast forward to the ‘Grab file manager’ section:

Step #1: Install Samba

openSUSE:

 $ sudo zypper in samba

Kubuntu/Ubuntu family:

 $ sudo apt-get install samba

chromecast file selection

Step #2: Now we need to add a user to a Samba group so it will have the desired permissions to access the shared data. Since I don’t let guests access my file server, I don’t bother with creating a separate user. In this tutorial we are using the system user for the Samba share.

openSUSE:

We need to create a Samba group in openSUSE and add the user to that group.

$ sudo groupadd smbgroup
$ sudo usermod -a -G smbgroup name_of_user
$ sudo smbpasswd -a name_of_user

Ubuntu/Kubuntu:

$ sudo smbpasswd -a name_of_user

Step #3: Now we have to edit the Samba configuration file to tell Samba which directories are shared. This step is the same for all distributions:

$ sudo nano /etc/samba/smb.conf

In this file, leave the entire [global] section intact and comment everything below it. Right after the end of the [global] section add a few lines using the following pattern:

[4TB] -> The name of the shared directory
path = /media/4tb/ -> The path of the shared directory
read only = No -> Ensures that it's not read-only
browsable = yes -> Ensures that the subfolders of the directory are browsable
writeable = yes -> Ensures that users can write to it from networked devices
valid users = swapnil -> The system user

In my case it looks something like this:

[4TB]
path = /media/4tb/
read only = No
browsable = yes
writeable = yes
valid users = swapnil

Add a new section for each directory you want to share over the network.
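
Before moving on, it’s worth letting Samba check your work; the testparm utility (part of the standard Samba packages) parses smb.conf and reports anything it doesn’t like:

$ testparm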

Step #4: Start the Samba server.

Now we have to start the server and also ensure that it kicks in at system boot.

openSUSE:

Start the Samba services:

sudo systemctl start smb.service
sudo systemctl start nmb.service

chromecast play video

Then enable the services to start at system boot:

sudo systemctl enable smb.service
sudo systemctl enable nmb.service

Ubuntu/Kubuntu:

sudo service nmbd restart
sudo service smbd restart

You should now be able to access these directories over the local network.
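
A quick way to verify the share from the server itself ─ assuming the smbclient tool is installed (it usually lives in a samba-client or smbclient package) ─ is to list the available shares:

$ smbclient -L localhost -U name_of_user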

Grab file manager

I use Android because I find iOS to be a sub-standard and extremely restricted OS when it comes to getting real work done. I couldn’t find a decent free file explorer on the App Store that could compete with the ones available on Android. ES File Manager is one of the best applications out there for our setup.

Download and install ES File Manager and its Chromecast plugin from the Google Play Store.

Open the app and go to the ‘Network’ option in the menu.

Select LAN and run ‘scan’.

It will detect your Samba server; provide the app with the username and password (the system user for your PC where Samba is installed). (See Image 1, above.)

Once connected, open the network directory where the media is saved and choose the file that you want to play on Chromecast (Image 2). Long press on the file and it will show a checkbox. Tick the checkbox and then tap the ‘more’ option at the bottom left. You will see ‘Chromecast’ in the menu. Select Chromecast and it will scan for the Chromecasts available on your network. Tap the name of your device when it pops up and your video will start playing on the Chromecast. (Image 3)

Now you can just lie back on your couch and play movies, music and images right from your palm. Linux and open source just turned you into a couch potato.

The Hacker Factor Blog: Two Steps Forward, One Step Back

This post was syndicated from: The Hacker Factor Blog and was written by: The Hacker Factor Blog. Original post: at The Hacker Factor Blog

Today I moved FotoForensics from the original server to a new server. Back when I first took on this project (Feb 2012), there were a few immediate hurdles. Getting the legal issues covered, designing and developing the server software, and most importantly: finding a hosting site.

I initially tried to get a price quote for using the Amazon Cloud. But after a few hours with their online pricing system, I realized that I could not get a straight answer. It would cost me somewhere between $10 and $1000 a month, but I wouldn’t know more until after I got the bill.

I also priced out a couple of other hosting sites. Nothing was in a range I could afford. And it didn’t help that, starting up a new online service, I had no idea about bandwidth requirements.

Fortunately, my friend Chris came to my rescue. He had a server and offered me a virtual machine on the system. I had one CPU, a good amount of RAM, and a 250 gig partition for storing files.

After nearly three years, FotoForensics has outgrown that system. Today, I moved everything to a new server. Instead of one CPU, the kernel sees six. Instead of a gig of RAM, I’ve allocated four gigs. And disk space? I’ve allocated 1.5T. I left some room to expand, just in case it’s needed. The first server could handle a flood of requests from Reddit. This new server? Should have plenty of power for the next few years.

Thanks Chris — I wouldn’t have been able to do this without your help.

Nothing’s ever easy

The transfer of the FotoForensics site, from the original server to the new hardware, seemed painless. The new OS installed without problems. Files transferred without issue, and the DNS updated properly. Total downtime was about 15 minutes. The new system is really snappy! While the network isn’t any faster, computing time is noticeably reduced.

Of course, there were a few hiccups. I installed the latest-greatest Ubuntu LTS. The original server was running Ubuntu 10.04. Since LTS releases only have 5 years of support, and “10” came out in 2010, it didn’t make sense to stick with an obsolete system. The problem is, between 2010 and 2014, Apache was updated and changed its configuration files. Rather than the confusing “Order allow,deny” rules, it now has the confusing “Require all denied/granted” rules. (Less confusing, but a pain to debug on the fly.)
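
For reference, where a 2.2-era config opened up a directory with the old pair of directives, 2.4 wants a single Require line instead:

Apache 2.2:

Order allow,deny
Allow from all

Apache 2.4:

Require all granted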

After the server was up and running, everything looked great. Until the first segfault happened. Then another. And another… Here’s what they look like:

[Sun Jan 18 14:08:57.803056 2015] [core:notice] [pid 840] AH00052: child pid 1602 exit signal Segmentation fault (11)
[Sun Jan 18 14:10:38.001748 2015] [core:notice] [pid 840] AH00052: child pid 1604 exit signal Segmentation fault (11)
[Sun Jan 18 14:10:38.001984 2015] [core:notice] [pid 840] AH00052: child pid 1606 exit signal Segmentation fault (11)
[Sun Jan 18 14:17:29.337708 2015] [core:notice] [pid 840] AH00052: child pid 1851 exit signal Segmentation fault (11)
[Sun Jan 18 14:19:54.796962 2015] [core:notice] [pid 2134] AH00052: child pid 2138 exit signal Segmentation fault (11)
[Sun Jan 18 14:37:12.774170 2015] [core:notice] [pid 11312] AH00052: child pid 11613 exit signal Segmentation fault (11)
[Sun Jan 18 15:10:55.751700 2015] [core:notice] [pid 11312] AH00052: child pid 12417 exit signal Segmentation fault (11)
[Sun Jan 18 15:10:55.751901 2015] [core:notice] [pid 11312] AH00052: child pid 12433 exit signal Segmentation fault (11)
[Sun Jan 18 15:13:47.985333 2015] [core:notice] [pid 11312] AH00052: child pid 12592 exit signal Segmentation fault (11)
[Sun Jan 18 15:18:53.698946 2015] [core:notice] [pid 12854] AH00052: child pid 12902 exit signal Segmentation fault (11)
[Sun Jan 18 15:19:43.765232 2015] [core:notice] [pid 12854] AH00052: child pid 12887 exit signal Segmentation fault (11)
[Sun Jan 18 15:38:32.076192 2015] [core:notice] [pid 13150] AH00052: child pid 13346 exit signal Segmentation fault (11)
[Sun Jan 18 15:54:40.371988 2015] [core:notice] [pid 13150] AH00052: child pid 13636 exit signal Segmentation fault (11)
[Sun Jan 18 15:54:40.372105 2015] [core:notice] [pid 13150] AH00052: child pid 13651 exit signal Segmentation fault (11)
[Sun Jan 18 16:31:44.588575 2015] [core:notice] [pid 15416] AH00052: child pid 25734 exit signal Segmentation fault (11)
[Sun Jan 18 17:02:39.581156 2015] [core:notice] [pid 4928] AH00052: child pid 5114 exit signal Segmentation fault (11)
[Sun Jan 18 17:14:55.486788 2015] [core:notice] [pid 4928] AH00052: child pid 5283 exit signal Segmentation fault (11)
[Sun Jan 18 17:15:07.505491 2015] [core:notice] [pid 4928] AH00052: child pid 5122 exit signal Segmentation fault (11)

I searched for these errors online and found literally hundreds of people who see the same problem. There’s lots of guesswork about the cause, but nobody has a solution. Some people think it’s an Apache problem. The Apache community says it is a PHP problem. The PHP people just have an open bug.

There’s a wide variety of suggestions. Increase the number of worker threads, remove unused modules, etc. I’ve tried them all. Nothing solves the issue.

I even tried to regress the version of PHP, but that caused other problems. (Seriously: don’t try regressing.) Looking over the changelogs, it looks like the most recent PHP versions fixed various memory leaks. I grabbed the last two stable PHP releases and tried to compile them. They compile fine, but both fail the self tests. I’m not going to install “stable” code that fails a self-test.

I was just about to reinstall with 10.04 LTS (since it was stable and didn’t have these errors), but then I noticed something… My site has one visitor every 1-2 seconds, so it’s easy to match the error to the visitor. So far, 100% of the time, my site identifies an iPhone/iPad user visiting the site a fraction of a second before the segfault occurs. Firefox doesn’t have a problem. Chrome is fine. Only iPhone/iPad browsers. It isn’t every iPhone/iPad, and I’m not seeing any access logs showing an error result. So as far as I can tell, users are not seeing this — only me.

I searched for this same bug associated with iPhone devices and found one great hint: Apache-2.4 Gives Segmentation Fault On Apple-clients. In this posting from 2012, Pascal describes the problem and the symptoms perfectly. He concludes by speculating about an AppleWebKit issue. However, I’m not sure that the problem is related to AppleWebKit. I think it might be related to how user-agent strings are processed in .htaccess files.

The closest thing I could find is in /etc/apache2/mods-enabled/setenvif.conf. This file contains a bunch of special handling rules for specific user-agents. For example:

BrowserMatch "Mozilla/2" nokeepalive
BrowserMatch "MSIE 4.0b2;" nokeepalive downgrade-1.0 force-response-1.0
BrowserMatch "RealPlayer 4.0" force-response-1.0
BrowserMatch "Java/1.0" force-response-1.0
BrowserMatch "JDK/1.0" force-response-1.0

I went ahead and disabled these special exceptions. I also added in code to check for undefined superglobals, as Pascal identified. The net result is that these steps reduced the problem from every few seconds to once every 10-30 minutes. The crashes are not gone, but they’re less often. And they are still related to iOS devices. I cannot help but wonder if iOS is doing something weird with the network socket. Maybe sending unexpected packets, not closing, or sending something out of band? Or maybe it’s a problem with MPM?
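
If you want to try the same workaround on your own server, the gentle route is to comment out the BrowserMatch lines in /etc/apache2/mods-enabled/setenvif.conf and reload Apache; the blunt route (assuming nothing else on the box relies on those environment variables) is to disable the module outright:

sudo a2dismod setenvif
sudo service apache2 restart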

I’m very open to suggestions, recommendations, and possible solutions.

Krebs on Security: Adobe, Microsoft Push Critical Security Fixes

This post was syndicated from: Krebs on Security and was written by: BrianKrebs. Original post: at Krebs on Security

Microsoft on Tuesday posted eight security updates to fix serious security vulnerabilities in computers powered by its Windows operating system. Separately, Adobe pushed out a patch to plug at least nine holes in its Flash Player software.

Leading the batch of Microsoft patches for 2015 is a drama-laden update to fix a vulnerability in Windows 8.1 that Google researchers disclosed just two days ago. Google has a relatively new policy of publicly disclosing flaws 90 days after they are reported to the responsible software vendor — whether or not that vendor has fixed the bug yet. That 90-day period elapsed over the weekend, causing Google to spill the beans and potentially help attackers develop an exploit in advance of Patch Tuesday.

For its part, Microsoft issued a strongly-worded blog post chiding Google for what it called a “gotcha” policy that leaves Microsoft users in the lurch. Somehow I doubt this is the last time we’ll see this tension between these two software giants. But then again, who said patching had to be boring? For a full rundown of updates fixed in today’s release, see this link.

Adobe, as it is prone to do on Patch Tuesday, issued an update to fix a whole mess of security problems with its Flash Player program. Adobe’s update brings the Player to v. 16.0.0.257 for Windows and Mac users, and fixes at least nine critical bugs in the software. Adobe said it is not aware of exploits that exist in the wild for any of the vulnerabilities fixed in this release.

To see which version of Flash you have installed, check this link. IE10/IE11 on Windows 8.x and Chrome should auto-update their versions of Flash. If your version of Chrome doesn’t show the latest version of Flash, you may need to restart the browser or manually force Chrome to check for updates (click the three-bar icon to the right of the address bar, select “About Google Chrome” and it should check then).

The most recent versions of Flash are available from the Flash home page, but beware potentially unwanted add-ons, like McAfee Security Scan. To avoid this, uncheck the pre-checked box before downloading, or grab your OS-specific Flash download from here.

Windows users who browse the Web with anything other than Internet Explorer may need to apply this patch twice, once with IE and again using the alternative browser (e.g., Firefox or Opera).

As always, please feel free to sound off in the comments section below with your experience about applying any of these security patches.

SANS Internet Storm Center, InfoCON: green: Adobe Patch Tuesday – January 2015, (Tue, Jan 13th)

This post was syndicated from: SANS Internet Storm Center, InfoCON: green and was written by: SANS Internet Storm Center, InfoCON: green. Original post: at SANS Internet Storm Center, InfoCON: green

Adobe released one bulletin today, affecting Flash Player. The update should be applied to the Windows, OS X and Linux versions of Adobe’s Flash Player. It is rated with a priority of 1 for most Windows versions of Flash Player.

Adobe AIR, as well as browsers like Chrome and Internet Explorer, are affected as well.

http://helpx.adobe.com/security/products/flash-player/apsb15-01.html


Johannes B. Ullrich, Ph.D.
STI|Twitter|LinkedIn

(c) SANS Internet Storm Center. https://isc.sans.edu Creative Commons Attribution-Noncommercial 3.0 United States License.

Errata Security: A Call for Better Vulnerability Response

This post was syndicated from: Errata Security and was written by: Robert Graham. Original post: at Errata Security

Microsoft forced a self-serving vulnerability disclosure policy on the industry 10 years ago, but cries foul when Google does the same today.

Ten years ago, Microsoft dominated the cybersecurity industry. It employed, directly or through consultancies, the largest chunk of security experts. The ability to grant or withhold business meant influencing those consulting companies — Microsoft didn’t even have to explicitly ask for consulting companies to fire Microsoft critics for that to happen. Every product company depended upon Microsoft’s goodwill in order to develop security products for Windows, engineering and marketing help that could be withheld on a whim.

This meant, among other things, that Microsoft dictated the “industry standard” of how security problems (“vulnerabilities”) were reported. Cybersecurity researchers who found such bugs were expected to tell the vendor in secret, and give the vendor as much time as they needed in order to fix the bug. Microsoft sometimes sat on bugs for years before fixing them, relying upon their ability to blacklist researchers to keep them quiet. Security researchers who didn’t toe the line found bad things happening to them.

I experienced this personally. We found a bug in a product called TippingPoint that allowed us to decrypt their “signatures”, which we planned to release at the BlackHat hacker convention, after giving the vendor months to fix the bug. According to rumors, Microsoft had a secret program with TippingPoint with special signatures designed to track down cybercriminals. Microsoft was afraid that if we disclosed how to decrypt those signatures, that their program would be found out.

Microsoft contacted our former employer, ISS, which sent us legal threats. Microsoft sent FBI agents to threaten us in the name of national security. A Microsoft consultant told the BlackHat organizer, Jeff Moss, that our research was made up, that it didn’t work, so I had to sit down with Jeff at the start of the conference to prove it worked before I was allowed to speak.

My point is that a decade ago in the cybersecurity industry, Microsoft dictated terms.

Today, the proverbial shoe is on the other foot. Microsoft’s products are now legacy, so Windows security is becoming as relevant as IBM mainframe security. Today’s cybersecurity researchers care about Apple, Google Chrome, Android, and the cloud. Microsoft is powerless to threaten the industry. It’s now Google who sets the industry’s standard for reporting vulnerabilities. Their policy is that after 90 days, vulnerabilities will be reported regardless if the vendor has fixed the bug. This applies even to Google itself when researchers find bugs in products like Chrome.

This is a nasty trick, of course. Google uses modern “agile” processes to develop software. That means that after making a change, the new software is tested automatically and shipped to customers within 24 hours. Microsoft is still mired in antiquated 1980s development processes, so that it takes three months and expensive manual testing before a change is ready for release. Google’s standard doesn’t affect everyone equally — it hits old vendors like Microsoft the hardest.

We saw the effect this last week, where after notifying Microsoft of a bug 90 days ago, Google dumped the 0day (the information hackers need to exploit the bug) on the Internet before Microsoft could release a fix.

I enjoyed reading Microsoft’s official response to this event, full of high-minded rhetoric why Google is bad, and why Microsoft should be given more time to fix bugs. It’s just whining — Microsoft’s alternative disclosure policy is even more self-serving than Google’s. They are upset over their inability to adapt and fix bugs in a timely fashion. They resent how Google exploits its unfair advantage. Since Microsoft can’t change their development, they try to change public opinion to force Google to change.

But Google is right. Since we can’t make perfect software, we must make fast and frequent fixes the standard. Nobody should be in the business of providing “secure” software that can’t turn around bugs quickly. Rather than 90 days being too short, it’s really too long. Microsoft either needs to move forward with the times and adopt “agile” methodologies, or just accept its role of milking legacy for the next few decades as IBM does with mainframes.

Krebs on Security: Lizard Stresser Runs on Hacked Home Routers

This post was syndicated from: Krebs on Security and was written by: BrianKrebs. Original post: at Krebs on Security

The online attack service launched late last year by the same criminals who knocked Sony and Microsoft’s gaming networks offline over the holidays is powered mostly by thousands of hacked home Internet routers, KrebsOnSecurity.com has discovered.

Just days after the attacks on Sony and Microsoft, a group of young hoodlums calling themselves the Lizard Squad took responsibility for the attack and announced the whole thing was merely an elaborate commercial for their new “booter” or “stresser” site — a service designed to help paying customers knock virtually any site or person offline for hours or days at a time. As it turns out, that service draws on Internet bandwidth from hacked home Internet routers around the globe that are protected by little more than factory-default usernames and passwords.


The Lizard Stresser’s add-on plans. Despite this site’s claims, it is *not* sponsored by this author.

In the first few days of 2015, KrebsOnSecurity was taken offline by a series of large and sustained denial-of-service attacks apparently orchestrated by the Lizard Squad. As I noted in a previous story, the booter service — lizardstresser[dot]su — is hosted at an Internet provider in Bosnia that is home to a large number of malicious and hostile sites.

That provider happens to be on the same “bulletproof” hosting network advertised by “sp3c1alist,” the administrator of the cybercrime forum Darkode. Until a few days ago, Darkode and LizardStresser shared the same Internet address. Interestingly, one of the core members of the Lizard Squad is an individual who goes by the nickname “Sp3c.”

On Jan. 4, KrebsOnSecurity discovered the location of the malware that powers the botnet. Hard-coded inside of that malware was the location of the LizardStresser botnet controller, which happens to be situated in the same small swath of Internet address space occupied by the LizardStresser Web site (217.71.50.x).

The malicious code that converts vulnerable systems into stresser bots is a variation on a piece of rather crude malware first documented in November by Russian security firm Dr. Web, but the malware itself appears to date back to early 2014 (Google’s Chrome browser should auto-translate that page; for others, a Google-translated copy of the Dr. Web writeup is here).

As we can see in that writeup, in addition to turning infected hosts into attack zombies, the malicious code uses the infected system to scan the Internet for additional devices that also allow access via factory-default credentials, such as “admin/admin,” or “root/12345”. In this way, each infected host is constantly trying to spread the infection to new home routers and other devices accepting incoming connections (via telnet) with default credentials.

The botnet is not made entirely of home routers; some of the infected hosts appear to be commercial routers at universities and companies, and there are undoubtedly other devices involved. The preponderance of routers represented in the botnet probably has to do with the way that the botnet spreads and scans for new potential hosts. But there is no reason the malware couldn’t spread to a wide range of devices powered by the Linux operating system, including desktop servers and Internet-connected cameras.

KrebsOnSecurity had extensive help on this project from a team of security researchers who have been working closely with law enforcement officials investigating the LizardSquad; those researchers asked to remain anonymous in this story. They are also working with law enforcement officials and ISPs to get the infected systems taken offline.

This is not the first time members of LizardSquad have built a botnet. Shortly after their attack on Sony and Microsoft, the group’s members came up with the brilliant idea to mess with the Tor network, an anonymity system that bounces users’ connections between multiple networks around the world, encrypting the communications at every step of the way. Their plan was to set up many hundreds of servers to act as Tor relays, and somehow use that access to undermine the integrity of the Tor network.


This graphic reflects a sharp uptick in Tor relays stood up at the end of 2014 in a failed bid by the Lizard Squad to mess with Tor.

According to sources close to the LizardSquad investigation, the group’s members used stolen credit cards to purchase thousands of instances of Google’s cloud computing service — virtual computing resources that can be rented by the day or longer. That scheme failed shortly after the bots were stood up, as Google quickly became aware of the activity and shut down the computing resources that were purchased with stolen cards.

A Google spokesperson said he was not able to discuss specific incidents, noting only that, “We’re aware of these reports, and have taken the appropriate actions.” Nevertheless, the incident was documented in several places, including this Pastebin post listing the Google bots that were used in the failed scheme, as well as a discussion thread on the Tor Project mailing list.

ROUTER SECURITY 101

Wireless and wired Internet routers are very popular consumer devices, but few users take the time to make sure these integral systems are locked down tightly. Don’t make that same mistake. Take a few minutes to review these tips for hardening your hardware.

For starters, make sure you change the default credentials on the router. This is the username and password that were factory installed by the router maker. The administrative page of most commercial routers can be accessed by typing 192.168.1.1, or 192.168.0.1 into a Web browser address bar. If neither of those work, try looking up the documentation at the router maker’s site, or checking to see if the address is listed here. If you still can’t find it, open the command prompt (Start > Run, or search for “cmd”) and then enter ipconfig. The address you need should be next to Default Gateway under your Local Area Connection.
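
Linux users can skip ipconfig; the default gateway (and therefore the router’s administrative address) should show up with a single command:

ip route | grep default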

If you don’t know your router’s default username and password, you can look it up here. Leaving these as-is out-of-the-box is a very bad idea. Most modern routers will let you change both the default user name and password, so do both if you can. But it’s most important to pick a strong password.

When you’ve changed the default password, you’ll want to encrypt your connection if you’re using a wireless router (one that broadcasts your modem’s Internet connection so that it can be accessed via wireless devices, like tablets and smart phones). Onguardonline.gov has published some video how-tos on enabling wireless encryption on your router. WPA2 is the strongest encryption technology available in most modern routers, followed by WPA and WEP (the latter is fairly trivial to crack with open source tools, so don’t use it unless it’s your only option).

But even users who have a strong router password and have protected their wireless Internet connection with a strong WPA2 passphrase may have the security of their routers undermined by security flaws built into these routers. At issue is a technology called “Wi-Fi Protected Setup” (WPS) that ships with many routers marketed to consumers and small businesses. According to the Wi-Fi Alliance, an industry group, WPS is “designed to ease the task of setting up and configuring security on wireless local area networks. WPS enables typical users who possess little understanding of traditional Wi-Fi configuration and security settings to automatically configure new wireless networks, add new devices and enable security.”

But WPS also may expose routers to easy compromise. Read more about this vulnerability here. If your router is among those listed as vulnerable, see if you can disable WPS from the router’s administration page. If you’re not sure whether it can be, or if you’d like to see whether your router maker has shipped an update to fix the WPS problem on their hardware, check this spreadsheet. If your router maker doesn’t offer a firmware fix, consider installing an open source alternative, such as DD-WRT (my favorite) or Tomato.

While you’re monkeying around with your router settings, consider changing the router’s default DNS servers to those maintained by OpenDNS. The company’s free service filters out malicious Web page requests at the domain name system (DNS) level. DNS is responsible for translating human-friendly Web site names like “example.com” into numeric, machine-readable Internet addresses. Anytime you send an e-mail or browse a Web site, your machine is sending a DNS look-up request to your Internet service provider to help route the traffic.

Most Internet users use their ISP’s DNS servers for this task, either explicitly because the information was entered when signing up for service, or by default because the user hasn’t specified any external DNS servers. By creating a free account at OpenDNS.com, changing the DNS settings on your machine, and registering your Internet address with OpenDNS, the company will block your computer from communicating with known malware and phishing sites. OpenDNS also offers a fairly effective adult content filtering service that can be used to block porn sites on an entire household’s network.

The above advice on router security was taken from a broader tutorial on how to stay safe online, called “Tools for a Safer PC.”

TorrentFreak: PirateSnoop Browser Unblocks Torrent Sites

This post was syndicated from: TorrentFreak and was written by: Andy. Original post: at TorrentFreak

pirate-cardBlocking of file-sharing related sites is becoming widespread in Europe, particularly so in the UK. In fact, it’s now almost impossible to access a top torrent site from any of the country’s leading ISPs, with the notable exception of OldPirateBay since the site is so new.

Users in the United States can’t rest easy either. As reported here in December, the MPAA is working hard to introduce site-blocking by utilizing creative interpretations of existing law. It seems unlikely that Hollywood will stop until it gets its way.

It’s becoming clear that Internet users everywhere will need to prepare if they want unfettered access to the Internet. While that can be achieved using premium services such as VPNs, there will always be those looking for a free solution. Today we have news of one such product.

In appearance PirateSnoop looks a lot like the popular Chrome browser. In fact the only immediate giveaway that things are a little different is the existence of a small pirate-themed button on the right hand side of its toolbar.

pirate-unblock

Underneath, however, PirateSnoop is based on the freeware web browser SRWare Iron which aims to eliminate some of the privacy-compromising features present in Google Chrome. PirateSnoop is then augmented with special extensions to enable its site unblocking features.

PirateSnoop (PS) was created by the team at public torrent site RARBG. While certainly less referenced by the mainstream media than The Pirate Bay for example, RARBG is now the 7th most popular torrent site in the world and a force to be reckoned with. It was also blocked by major UK ISPs recently.

Anti-censorship agenda

rarbg-logo“Nazi Germany had less censorship than we have today on the Internet,” the PS team informs TorrentFreak.

“However you are not paying for the Internet itself to your ISPs, but for the carrying of the Internet connectivity. ISPs are legally enforced by their countries to block content and what we are worried about is that little to none of the ISPs decided to fight any blocking court order.”

PirateSnoop vs PirateBrowser

The web-blocking features of PirateSnoop are similar to those of The Pirate Bay’s PirateBrowser, but there are some important differences. Although users are not rendered anonymous, PirateBrowser uses the TOR network. PirateSnoop sees this as problematic as torrent sites are increasingly blocking TOR IPs.

“The TOR network is abused by a lot of people – uploading fakes for example. It’s also used by DMCA agencies to scan sites. TOR is no longer an option to access sites. Its blocked on almost every site I know,” a dev explains.

Instead, PirateSnoop uses its own custom proxy network which utilizes full HTTPS instead of the HTTP used by basic proxies. Just like a regular browser to website connection, PS allows websites to see their users’ IP addresses (unless they’re using a VPN) in order to cut down on abuse.

Overall, PirateSnoop should be a faster browsing solution than PirateBrowser, its creators say.

Limitations and future upgrades

Currently several major blocked sites are supported by PirateSnoop but there are a couple of omissions. However, the team is prepared to expand the browser’s reach based on user demand.

“Any site that is requested to be added will be added immediately with no questions asked,” the team note.

The PirateSnoop team say they are committed to upgrades of their software to include proxy updates (added automatically upon browser restart) and full browser updates following any Iron browser core updates.

PirateSnoop can be downloaded here (using BitTorrent, of course).

Source: TorrentFreak, for the latest info on copyright, file-sharing, torrent sites and anonymous VPN services.

Linux How-Tos and Linux Tutorials: How to Configure a Touchscreen on Linux

This post was syndicated from: Linux How-Tos and Linux Tutorials and was written by: Jack Wallen. Original post: at Linux How-Tos and Linux Tutorials

Ah the touchscreen ─ that piece of hardware that promises to finally strip humanity of an interface very much long in the tooth. I’m talking about the mouse. It’s that piece of technology that is being threatened with extinction, thanks to the touchscreen. And with good reason. Once you’ve used a touchscreen, you fully understand that it is, in fact, a much-needed breath of fresh air.

But in Linux-land, all isn’t exactly rosy. Once you get your hands on a supported device (such as the fantastic System76 Sable Touch running Ubuntu 14.10), you’ll find that not everything works as you’d expect. Sure there are some handy three and four finger multi-touch gestures that work out of the box, but the go-to gestures (such as right mouse click and Firefox scrolling) simply don’t work.

The good news is that getting those very necessary gestures to work isn’t all that challenging. It does, however, require the installation of an app and a Firefox extension. The bad news is that not all distributions respond the same way to these workarounds. Ultimately, this falls into the hands of the Linux community to resolve, as touchscreens aren’t going away (and, in fact, will continue to rise in popularity). With that said, let’s take a look at what you can do to get that shiny new touchscreen device working in a way that actually makes sense.

What you will need

First we’re going to address the browser ─ since that is one of the most-used tools of the desktop trade. There’s a bit more bad news on that front ─ you’re going to have to scrap Google Chrome. Why? Because, at least as of this writing, Google Chrome and Linux touchscreens do not play well together. With that said, we’re going to focus our efforts on Firefox and a simple extension.

Second, you will need to install and use a handy app called Touchegg. This app will serve as a means to configure specific events for touchscreen interaction.

With that said, let’s begin.

Firefox

Out of the box, Firefox doesn’t much care for touchscreens. However, there is an extension you can install that will overcome that issue. The extension is called Grab and Drag. This extension will enable grab and drag scrolling as well as flick scrolling and momentum scrolling.

To install this extension click Tools > Add-ons and then click Get Add-ons. In the search bar of the new tab, enter “grab and drag.” When the results appear (Figure 1), click the Install button associated with the Grab and Drag extension.

touchscreen 1

You will be prompted to restart Firefox. Do this and then, when it reopens go back into the Add-ons window, tap Extensions, select Grab and Drag, and then tap Preferences. In the Preferences screen, you can ignore the Momentum tab (as this feature doesn’t work with touchscreens). You will, most likely, want to open the More Options tab and play with the Drag Multiplier setting (Figure 2). By default, the scrolling is rather slow. I’ve found a Drag Multiplier of 1.6 to be ideal for using touchscreens and Firefox.

touchscreen 2

Now that you have Firefox enabled, let’s install an app that (in some instances) will allow you to control nearly every multi-touch gesture on Linux.

Touchegg

I’ll demonstrate how to install this app on Ubuntu 14.10. I will also add a GUI tool that allows easier control over the configuration of gestures. The GUI tool, touchegg-gce, does have a number of dependencies that must first be installed.

Before we install the GUI, let’s install the base tool. Touchegg can be found in the standard repositories, so a single command will install:

sudo apt-get install touchegg

Once that installation completes, let’s install the dependencies for the GUI tool. The command for this is:

sudo apt-get install build-essential libqt4-dev libx11-6

After the dependencies are installed, download the Touchegg-gce file and place it in a directory that gives you write access (such as ~/). Here are the steps to install this app (the full command sequence is collected just after the list):

  1. Change to the directory holding the .zip file.

  2. Issue the command unzip Touchegg-gce-master.zip to extract the file.

  3. Change into the Touchegg-gce-master folder.

  4. Issue the command qmake

  5. Issue the command make

  6. Copy the touchegg-gce file to /usr/bin
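
Put together ─ and assuming the .zip file landed in your home directory ─ the whole sequence looks something like this (the final copy into /usr/bin requires sudo):

cd ~/
unzip Touchegg-gce-master.zip
cd Touchegg-gce-master
qmake
make
sudo cp touchegg-gce /usr/bin/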

That’s it. You can now issue the command touchegg-gce from any directory and the app will run. When the app starts, you must first choose your language (this happens every time you run the app). From the app main window (Figure 3), tap the Load button to load your Touchegg configuration file (the default should be ~/.config/touchegg/).

touchscreen 3

At this point, you can either modify an existing gesture or add a new gesture. What you need to know about this process is the configuration options available. With each entry, there are four options:

  • Fingers: How many fingers make up the entry

  • Gesture: What is the actual gesture (tap, drag, pinch, rotate, Tap & Hold, Double Tap)

  • Direction: The direction of the gesture (All, Up, Down, Left, Right)

  • Action: What is the action associated with the gesture (e.g. Mouse Click, Scroll, Minimize, Maximize, Close, etc.).

Tap (or click) the Add button to create a new gesture. For the purpose of example, we’ll create a two finger drag for scrolling up. We’ll create this gesture under the All Group (which means it will apply to all applications ─ more on this in a bit). From the popup window (Figure 4), configure the following:

  • Fingers: 2

  • Gesture: Drag

  • Directions: Up

  • Action: Scroll.

When you’ve configured this, tap OK and the gesture is ready to try out.

touchscreen 4

Let’s say, however, you want to associate a specific gesture with a specific application (or group of applications). For that you must create a new Group. To do this, tap the Add button under the groups (on the left side of the window). In the popup (Figure 5), you have to configure three options:

  • Applications: The applications this gesture will use

  • Add to: Select <New Group> to create a new group

  • Take gestures from: You can import gestures from another group to serve as a template.

touchscreen 5

Once you’ve created the new group, you can create new gestures that will work only for that group.

After you’ve completed the process of creating gestures and groups, make sure to tap (or click) the Save button. If you do not do this final step, your configurations will be lost when you close the app. When you save the configuration, Touchegg will be restarted and your new gestures should work.

Even with the help of apps like Grab and Drag and Touchegg, Linux and the touchscreen have a long way to go. Not every gesture will work on every device and, in some cases, you might still find yourself grabbing a mouse more often than not. Hopefully, over the next year, we’ll see major improvement on this front ─ otherwise Linux will struggle as more and more touchscreen devices are adopted.

Schneier on Security: How Browsers Store Passwords

This post was syndicated from: Schneier on Security and was written by: schneier. Original post: at Schneier on Security

Good information on how Internet Explorer, Chrome, and Firefox store user passwords.