Posts tagged ‘chrome’

Toool's Blackbag: Euro-Locks

This post was syndicated from: Toool's Blackbag and was written by: Walter. Original post: at Toool's Blackbag

April 24th, a delegation of Toool visited the Euro-Locks factory in Bastogne, Belgium.

Sales manager Jean-Louis Vincart welcomed us and talked us through the history of Euro-Locks, the factories and the products. After that, we visited the actual production facility. The Bastogne factory is huge and almost all of their products are built here from start to finish. We spoke with the R&D people creating new molds, saw molten zamac, steel presses, chrome baths, assembly lines and packaging, so everything from the raw metal to the finished product. It’s interesting to see so many products (both in range and in the actual number of locks produced) being made here, with no stock of the finished product kept on site.

Thanks to Eric and Martin for making the visit possible.

The Hacker Factor Blog: Great Googly Moogly!

This post was syndicated from: The Hacker Factor Blog and was written by: The Hacker Factor Blog. Original post: at The Hacker Factor Blog

Google recently made another change to their pagerank algorithm. This time, they are ranking results based on the type of device querying it. What this means: a Google search from a desktop computer will return different results compared to the same search from a mobile device. If a web page is better formatted for a mobile device, then it will have a higher search result rank for searches made from mobile devices.

I understand Google’s desire to have web pages that look good on both desktop and mobile devices. However, shouldn’t the content, authoritativeness, and search topic be more important than whether the page looks pretty on a mobile device?

As a test, I searched Google for “is this picture fake”. The search results on my desktop computer were different from the results on my Android phone. In particular, the 2nd desktop result was completely gone on the mobile device, the FotoForensics FAQ was pushed down on the mobile device, TinEye was moved up, and other analysis services were removed from the mobile results entirely.

In my non-algorithmic personal opinion, the desktop search returned more authoritative results and better matched the search query than the mobile search did.

Google’s Preference

Google announced that they were doing this change on February 26. They gave developers less than two months notice of this change.

While two months may be plenty of time for small developers to change their site layout, I suspect that most small developers never heard about this change. For larger organizations, two months is barely enough time to have a meeting about having a meeting about scheduling a meeting to discuss a site redesign for mobile devices.

In other words: Google barely gave anyone notice, and did not give most sites time to act. This is reminiscent of those security researchers who report vulnerabilities to vendors and then set arbitrarily short deadlines before going public. Short deadlines are not about doing the right thing; they are about pushing an agenda.

Tools for the trade

On the plus side, Google provided a nice web tool for evaluating web sites. This allows site designers to see how their web pages look on a mobile device. (At least, how it will look according to Google.)

Google also provides a mobile guide that describes what Google thinks a good web page layout looks like. For example, you should use large fonts and only one column in the layout. Google also gives suggestions like using dynamic layout web pages (detect the screen and rearrange accordingly) and using separate servers (www.domain and m.domain): one for desktop users and one for mobile devices.

Google’s documentation emphasizes that this is really for smartphone users. They state that by “mobile devices”, they are only talking about smartphones and not tablets, feature phones, and other devices. (I always thought that a mobile device was anything you could use while being mobile…)

Little Time, Little Effort

One of my big irks about Google is that Google’s employees seem to forget that not every company is as big as Google or has as many resources as Google. Not everyone is Google. Giving developers very little time to make changes that better match Google’s preferred design just emphasizes how out of touch Google’s developers are with the rest of the world. The requirements decreed in their development guidelines also show an unrealistic view of the world. For example:

  • Google recommends using dynamic web pages for more flexibility. It also means much more testing and requires a larger testing environment. Testing is usually where someone notices that the site lacks usability.

    Google+ has a flexible interface — the number of columns varies based on the width of the browser window. But Google+ also has a horrible multi-column layout that cannot be disabled. And LinkedIn moved most of their billions of options into popups — now you cannot click on anything without it being covered by a popup window first.

    For my own sites, I do try to test with different browsers. Even if I think my site looks great on every mobile device I tested, that does not mean that it will look great on every mobile device. (I cannot test on everything.)

    Providing the same content to every device minimizes the development and testing efforts. It also simplifies the usability options.

  • Google suggests the option of maintaining two URLs or two separate site layouts — one for desktops and one for mobile devices. They overlook that this means twice the development effort, which translates into twice the development costs.
  • Maintaining two URLs also means double the amount of bot traffic indexing the site, double the load on the server, and double the network bandwidth. Right now, about 75% of the traffic to my site comes from bots indexing and mirroring (and attacking) my site. If I maintained two URLs to the same content with different formatting, I would be dividing the visitor load between the two sites (half go mobile and half go desktop), while doubling the bot traffic.
  • Google’s recommendations normalize the site layout. Everyone should use large text. Everyone should use one column for mobile displays, etc.

    Normalizing web site layouts goes against the purpose of HTML and basic web site design. Your web site should look the way that you want it to look. If you want small text, then you can use small text. If you want a wide layout, then you can use a wide layout. Every web site can look different. Just be aware that Google’s pagerank system now penalizes you for looking different and for expressing creativity.

  • Google’s online test for mobile devices does not take into account whether the device is held vertically or horizontally. My Android phone rotates the screen and makes the text larger when I hold it horizontally. According to Google, all mobile pages should be designed for a vertical screen.

Ironically, there has been a lot of effort by mobile web browser developers (not the web site, but the actual browser developers) to mitigate display issues in the browser. One tap zooms into the text and reflows it to fit the screen, another tap zooms out and reflows it again. And rotating the screen makes the browser display wider instead of taller. Google’s demand to normalize the layout really just means that Google has zero confidence in the mobile browser developers and a limited view on how users use mobile devices.

Moving targets

There’s one significant technical issue that is barely addressed by Google’s Mobile Developer Guide: how does a web site detect a mobile device?

According to Google, your code should look at the user-agent field for “Android” and “Mobile”. That may work well with newer Android smartphones, but it won’t identify older devices or smartphones that don’t use those keywords. Also, there are plenty of non-smartphone devices that use these words. For example, Apple’s iPad tablet has a user-agent string that says “Mobile” in it.
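As a minimal sketch (my own illustration, not code taken from Google’s guide), the check Google describes boils down to a couple of case-insensitive substring tests on the user-agent string:

$ua = isset($_SERVER['HTTP_USER_AGENT']) ? $_SERVER['HTTP_USER_AGENT'] : '';
// Both "Android" and "Mobile" present: assume an Android smartphone.
$is_android_phone  = (stripos($ua, 'Android') !== false) && (stripos($ua, 'Mobile') !== false);
// "Android" without "Mobile": assume an Android tablet, which gets the desktop layout.
$is_android_tablet = (stripos($ua, 'Android') !== false) && (stripos($ua, 'Mobile') === false);

As the examples below show, a test this simple misses or misclassifies plenty of real devices.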

In fact, there is no single HTTP header that says “Hi! I’m a mobile device! Give me mobile content!” There’s a standard header for specifying supported document formats. There’s a standard header for specifying preferred language. But there is no standard for identifying a mobile device.

There is a great write-up called “How to Detect Mobile Devices“. It lists a bunch of different methods and the trade-offs between each.

For example, you can try to use JavaScript to render the page. This is good for most smartphones, but many feature-phones lack JavaScript support. The question also becomes: what should you detect? Screen size may be a good option, but otherwise there is no standard. This approach can also be problematic for indexing bots since it requires rendering JavaScript to see the layout. (Running JavaScript in a bot becomes a halting problem since the bot cannot predict when the code will finish rendering the page.)

Alternately, you can try to use custom style sheets. There’s a style sheet extension “@media” for specifying a different layout for mobile devices. Unfortunately, many mobile devices don’t support this extension. (Oh the irony!)

Usually people try to detect the mobile device on the server side. Every web browser sends a user-agent string that describes the browser and basic capabilities. For example:

User-Agent: Mozilla/5.0 (Windows NT 6.1; Win64; x64; rv:25.3) Gecko/20150308 Firefox/31.9 PaleMoon/25.3.0

User-Agent: Mozilla/5.0 (iPhone; CPU iPhone OS 8_1_2 like Mac OS X) AppleWebKit/600.1.4 (KHTML, like Gecko) Version/8.0 Mobile/12B440 Safari/600.1.4

Mozilla/5.0 (Linux; Android 4.4.2; en-us; SAMSUNG SM-T530NU Build/KOT49H) AppleWebKit/537.36 (KHTML, like Gecko) Version/1.5 Chrome/28.0.1500.94 Safari/537.36

Opera/9.80 (Android; Opera Mini/7.6.40234/36.1701; U; en) Presto/2.12.423 Version/12.16

The first sample user-agent string identifies the Pale Moon web browser (PaleMoon 25.3.0) on a 64-bit Windows 7 system (Windows NT 6.1; Win64). It says that it is compatible with Firefox 31 (Firefox/31.9) and supports the Gecko toolkit extension (Gecko/20150308). This is likely a desktop system.

The second sample identifies Mobile Safari 8.0 on an iPhone running iOS 8.1.2. This is a mobile device — because I know iPhones are mobile devices, and not because it says “Mobile”.

The third sample identifies the Android browser 1.5 on a Samsung SM-T530NU device running Android 4.4 (KitKat) and configured for English from the United States. It doesn’t say what it is, but I can look it up and determine that the SM-T530NU is a tablet.

The fourth and final example identifies Opera Mini, which is Opera for mobile devices. Other than looking up the browser type, nothing in the user-agent string tells me it is a mobile device.

The typical solution is to have the web site check the user-agent string for known parts. If it sees “Mobile” or “iPhone” then we can assume it is some kind of mobile device — but not necessarily a smartphone. The web site Detect Mobile Browsers offers code snippets for detecting mobile devices. Google’s documentation says to look for ‘Android’ and ‘Mobile’. Here’s the PHP code that Detect Mobile Browsers suggests using:

$useragent=$_SERVER['HTTP_USER_AGENT'];
if (preg_match('/(android|bb\d+|meego).+mobile|avantgo|bada\/|blackberry|blazer|compal|elaine|fennec|hiptop|iemobile|ip(hone|od)|iris|kindle|lge |maemo|midp|mmp|mobile.+firefox|netfront|opera m(ob|in)i|palm( os)?|phone|p(ixi|re)\/|plucker|pocket|psp|series(4|6)0|symbian|treo|up\.(browser|link)|vodafone|wap|windows ce|xda|xiino/i',$useragent)||preg_match('/1207|6310|6590|3gso|4thp|50[1-6]i|770s|802s|a wa|abac|ac(er|oo|s-)|ai(ko|rn)|al(av|ca|co)|amoi|an(ex|ny|yw)|aptu|ar(ch|go)|as(te|us)|attw|au(di|-m|r |s )|avan|be(ck|ll|nq)|bi(lb|rd)|bl(ac|az)|br(e|v)w|bumb|bw-(n|u)|c55\/|capi|ccwa|cdm-|cell|chtm|cldc|cmd-|co(mp|nd)|craw|da(it|ll|ng)|dbte|dc-s|devi|dica|dmob|do(c|p)o|ds(12|-d)|el(49|ai)|em(l2|ul)|er(ic|k0)|esl8|ez([4-7]0|os|wa|ze)|fetc|fly(-|_)|g1 u|g560|gene|gf-5|g-mo|go(\.w|od)|gr(ad|un)|haie|hcit|hd-(m|p|t)|hei-|hi(pt|ta)|hp( i|ip)|hs-c|ht(c(-| |_|a|g|p|s|t)|tp)|hu(aw|tc)|i-(20|go|ma)|i230|iac( |-|\/)|ibro|idea|ig01|ikom|im1k|inno|ipaq|iris|ja(t|v)a|jbro|jemu|jigs|kddi|keji|kgt( |\/)|klon|kpt |kwc-|kyo(c|k)|le(no|xi)|lg( g|\/(k|l|u)|50|54|-[a-w])|libw|lynx|m1-w|m3ga|m50\/|ma(te|ui|xo)|mc(01|21|ca)|m-cr|me(rc|ri)|mi(o8|oa|ts)|mmef|mo(01|02|bi|de|do|t(-| |o|v)|zz)|mt(50|p1|v )|mwbp|mywa|n10[0-2]|n20[2-3]|n30(0|2)|n50(0|2|5)|n7(0(0|1)|10)|ne((c|m)-|on|tf|wf|wg|wt)|nok(6|i)|nzph|o2im|op(ti|wv)|oran|owg1|p800|pan(a|d|t)|pdxg|pg(13|-([1-8]|c))|phil|pire|pl(ay|uc)|pn-2|po(ck|rt|se)|prox|psio|pt-g|qa-a|qc(07|12|21|32|60|-[2-7]|i-)|qtek|r380|r600|raks|rim9|ro(ve|zo)|s55\/|sa(ge|ma|mm|ms|ny|va)|sc(01|h-|oo|p-)|sdk\/|se(c(-|0|1)|47|mc|nd|ri)|sgh-|shar|sie(-|m)|sk-0|sl(45|id)|sm(al|ar|b3|it|t5)|so(ft|ny)|sp(01|h-|v-|v )|sy(01|mb)|t2(18|50)|t6(00|10|18)|ta(gt|lk)|tcl-|tdg-|tel(i|m)|tim-|t-mo|to(pl|sh)|ts(70|m-|m3|m5)|tx-9|up(\.b|g1|si)|utst|v400|v750|veri|vi(rg|te)|vk(40|5[0-3]|-v)|vm40|voda|vulc|vx(52|53|60|61|70|80|81|83|85|98)|w3c(-| )|webc|whit|wi(g |nc|nw)|wmlb|wonu|x700|yas-|your|zeto|zte-/i',substr($useragent,0,4))) { /* then… */ }

This is more than just detecting ‘Android’ and ‘Mobile’. If the user-agent string says Android or Meego or Mobile or Avantgo or Blackberry or Blazer or KDDI or Opera (with mini or mobi or mobile)… then it is probably a mobile device.

Of course, there are two big problems with this code. First, it has so many conditions that it is likely to have multiple false-positives (e.g., detecting a tablet or even a desktop as a mobile phone). In fact, we can see this problem since the regular expression contains “kindle” — the Amazon Kindle is a tablet and not a smartphone. (And the Kindle user-agent string also includes the word ‘Android’ and may include the word ‘Mobile’.)

Second, this long chunk of code is a regular expression — a language describing a pattern to match. All regular expressions are slow to evaluate and more complicated expressions take more time. Unless you have unlimited resources (like Google) or have low web volume, then you probably do not want to run this code on every web page request.

If Google really wants to have every web site provide mobile-specific content, then perhaps they should push through a standard HTTP header for declaring a mobile device, tablet, and other types of devices. Right now, Google is forcing web sites to redesign for devices that they may not be able to detect.

(Of course, none of this handles the case where an anonymizer changes the user-agent setting, or where users change the user-agent value in their browser.)

Low Ranking Advice

Some of Google’s mobile site suggestions are good, but not limited to mobile devices. Enabling server compression and designing pages for fast loading benefit both desktop and mobile browsers.

I actually think that there may be a hidden motivation behind Google’s desire to force web site redesigns… The recommended layout — with large primary text, viewport window, and single column layout — is probably easier for Google to parse and index. In other words, Google wants every site to look the same so it will be easier for Google to index the content.

And then there is the entire anti-competitive edge. Google’s suggestion for detecting mobile devices (look for ‘Android’) excludes non-Android devices like Apple’s iPhone. Looking for ‘Mobile’ misclassifies Apple’s iPad, potentially leading to a lesser user experience on Apple products. And Google wants you to make site changes so that your web pages work better with Googlebot. This effectively turns all web sites into Google-specific web sites.

Promoting aesthetics over content seems to go against the purpose of a search engine; users search for content, not styling. Normalizing content layout contradicts the purpose of having configurable layouts. Giving developers less than two months to make major changes seems out of touch with reality. And requiring design choices that favor the dominant company’s strict views seems very anti-competitive to me.

Many web sites depend on search engines like Google for income — either directly through ads or indirectly through visibility. This recent change at Google will dramatically impact many web sites — sites with solid content but, according to Google, less desirable layouts. Moreover, it forces companies to comply with Google’s requirements or lose future revenue.

Google has a long history of questionable behavior. This includes multiple lawsuits against Google for anti-competitive behavior and privacy violations. However, each of these cases is debatable. In contrast, I think that this requirement for site layout compliance is the first time that the “do no evil” company has explicitly gone evil in a non-debatable way.

Linux How-Tos and Linux Tutorials: How to Configure Your Dev Machine to Work From Anywhere (Part 3)

This post was syndicated from: Linux How-Tos and Linux Tutorials and was written by: Jeff Cogswell. Original post: at Linux How-Tos and Linux Tutorials

In the previous articles, I talked about my mobile setup and how I’m able to continue working on the go. In this final installment, I’ll talk about how to install and configure the software I’m using. Most of what I’m talking about here is on the server side, because the Android and iPhone apps are pretty straightforward to configure.

Before we begin, however, I want to mention that this setup I’ve been describing really isn’t for production machines. This should only be limited to development and test machines. Also, there are many different ways to work remotely, and this is only one possibility. In general, you really can’t beat a good command-line tool and SSH access. But in some cases, that didn’t really work for me. I needed more; I needed a full Chrome JavaScript debugger, and I needed better word processing than was available on my Android tablets.

Here, then, is how I configured the software. Note, however, that I’m not writing this as a complete tutorial, simply because that would take too much space. Instead, I’m providing overviews, and assuming you know the basics and can google to find the details. We’ll take this step by step.

Spin up your server

First, we spin up the server on a host. There are several hosting companies; I’ve used Amazon Web Services, Rackspace, and DigitalOcean. My own personal preference for the operating system is Ubuntu Linux with LXDE. LXDE is a full desktop environment that includes the OpenBox window manager. I personally like OpenBox because of its simplicity while maintaining visual appeal. And LXDE is nice because, as its name suggests (Lightweight X11 Desktop Environment), it’s lightweight. However, many different environments and window managers will work. (I tried a couple tiling window managers such as i3, and those worked pretty well too.)

The usual order of installation goes like this: You use the hosting company’s website to spin up the server, and you provide a key file that will be used for logging into the server. You can usually use your own key that you generate, or have the service generate a key for you, in which case you download the key and save it. Typically when you provide a key, the server will automatically be configured to allow logins only via SSH with the key file. If not, you’ll want to disable password logins yourself.
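If you choose to generate your own key pair, on a Linux desktop that is usually a single command (the file name here is just an example):

ssh-keygen -t rsa -b 4096 -f ~/.ssh/devbox_key

You then upload the public half (devbox_key.pub) through the hosting company’s control panel and keep the private half on the machines you will connect from.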

Connect to the server

The next step is to actually log into the server through an SSH command line and first set up a user for yourself that isn’t root, and then set up the desktop environment. You can log in from your desktop Linux, but if you like, this is a good chance to try out logging in from an Android or iOS tablet. I use JuiceSSH; a lot of people like ConnectBot. And there are others. But whichever you get, make sure it allows you to log in using a key file. (Key files can be created with or without a password. Also make sure the app you use allows you to use whichever key file type you created–password or no password.)

Copy your key file to your tablet. The best way is to connect the tablet to your computer, and transfer the file. However, if you want a quick and easy way to do it, you can email it. But be aware that you’re sending the private key file through an email system that other people could potentially access. It’s your call whether you want to do that. Either way, get the file installed on the tablet, and then configure the SSH app to log in using the key file, using the app’s instructions.

Then, using the app, connect to your server. You’ll need the username, even though you’re using a key file (the server needs to know who you’re logging in as, after all); AWS typically uses “ubuntu” as the username for Ubuntu installations; others simply give you the root user. On AWS, you’ll need to type sudo before each installation command since you’re not logged in as root, but you won’t be asked for a password when running sudo. On other cloud hosts you can run the commands without sudo since you’re logged in as root.
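From a desktop Linux machine, the equivalent first connection looks roughly like this (the address and key file name below are placeholders, not a real host):

ssh -i ~/.ssh/devbox_key ubuntu@203.0.113.10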

Oh and by the way, because we don’t yet have a desktop environment, you’ll be typing commands to install the software. If you’re not familiar with the package installation tools, now is a chance to learn about them. For Debian-based systems (including Ubuntu), you’ll use apt-get. Other systems use yum, which is a command-line interface to the RPM package manager.

Install LXDE

From the command-line, it’s time to set up LXDE, or whichever desktop you prefer. One thing to bear in mind is that while you can run something big like Cinnamon, ask yourself if you really need it. Cinnamon is big and cumbersome. I use it on my desktop, but not on my hosted servers, opting instead for more lightweight desktops like LXDE. And if you’re familiar with desktops such as Cinnamon, LXDE will feel very similar.

There are lots of instructions online for installing LXDE or other desktops, and so I won’t reiterate the details here. DigitalOcean has a fantastic blog with instructions for installing a similar desktop, XFCE.
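If you go with LXDE on an Ubuntu server, the install is essentially a one-liner; the metapackage name below is the usual one on Ubuntu, but check your release:

sudo apt-get install lxde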

Install a VNC server

Then you need to install a VNC server. Instead of using TightVNC, which a lot of people suggest, I recommend vnc4server because it allows for easy resolution changes, as I’ll describe shortly.
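Assuming a Debian/Ubuntu server, installing it and starting a first session looks something like the following; the first run prompts you to choose the VNC password discussed below:

sudo apt-get install vnc4server
vncserver :1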

While setting up the VNC server, you’ll create a VNC username. You can just use a username and password for VNC, and from there you’re able to connect from a VNC client app to the system. However, the connection won’t be secure. Instead, you’ll want to connect through what’s called an SSH tunnel. The SSH tunnel is basically an SSH session into the server that is used for passing connections that would otherwise go directly over the internet.

When you connect to a server over the Internet, you use a protocol and a port. VNC usually uses 5900 or 5901 for the port. But with an SSH tunnel, the SSH app listens on a port on the same local device, such as 5900 or 5901. Then the VNC app, instead of connecting to the remote server, connects locally to the SSH app. The SSH app, in turn, passes all the data on to the remote system. So the SSH serves as a go-between. But because it’s SSH, all the data is secure.

So the key is setting up a tunnel on your tablet. Some VNC apps can create the tunnel; others can’t and you need to use a separate app. JuiceSSH can create a tunnel, which you can use from other apps. My preferred VNC app, Remotix, on the other hand, can do the tunnel itself for you. It’s your choice how you do it, but you’ll want to set it up.

The app will have instructions for the tunnel. In the case of JuiceSSH, you specify the server you’re connecting to and the port, such as 5900 or 5901. Then you also specify the local port number the tunnel will be listening on. You can use any available port, but I’ll usually use the same port as the remote one. If I’m connecting to 5901 on the remote, I’ll have JuiceSSH also listen on 5901. That makes it easier to keep straight. Then you’ll open up your VNC app, and instead of connecting to a remote server, you connect to the port on the same tablet. For the server you just use 127.0.0.1, which is the IP address of the device itself. So to re-iterate:

  1. JuiceSSH connects, for example, to 5901 on the remote host. Meanwhile, it opens up 5901 on the local device.
  2. The VNC app connects to 5901 on the local device. It doesn’t need to know anything about what remote server it’s connecting to.
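If you ever want to do the same thing from a desktop Linux machine, or just want to see what the apps are doing under the hood, the whole tunnel is one ssh invocation (host, user, and key file are placeholders):

ssh -i ~/.ssh/devbox_key -N -L 5901:127.0.0.1:5901 ubuntu@203.0.113.10

With that running, the VNC client connects to 127.0.0.1:5901 and the traffic rides over SSH to port 5901 on the server.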

But some VNC apps don’t need another app to do the tunneling, and instead provide the tunnel themselves. Remotix can do this; if you set up your app to do so, make sure you understand that you’re still tunneling. You provide the information needed for the SSH tunnel, including the key file and username. Then Remotix does the rest for you.

Once you get the VNC app going, you’ll be in. You should see a desktop open with the LXDE logo in the background. Next, you’ll want to go ahead and configure the VNC client to your liking; I prefer to control the mouse using drags that simulate a trackpad; other people like to control the mouse by tapping exactly where you want to click. Remotix and several other apps let you choose either configuration.

Configuring the Desktop

Now let’s configure the desktop. One issue I had was getting the desktop to look good on my 10-inch tablet. This involved configuring the look and feel by clicking the taskbar menu > Preferences > Customize Look and Feel (or by running lxappearance from the command line).

lxappearance

I also used OpenBox’s own configuration tool by clicking the taskbar menu > Preferences > OpenBox Configuration Manager (or by running obconf).

obconf

My larger tablet’s screen isn’t huge at 10 inches, so I configured the menu bars and buttons and such to be somewhat large for a comfortable view. One issue is the tablet has such a high resolution that if I used the maximum resolution, everything was tiny. As such, I needed to be able to change resolutions based on the work I was doing, as well as based on which tablet I was using. This involved configuring the VNC server, though, not LXDE and OpenBox. So let’s look at that.

In order to change resolution on the fly, you need a program that can manage the RandR extension, such as xrandr. But the TightVNC server that seems popular doesn’t work with RandR. Instead, I found that the vnc4server program works with xrandr, which is why I recommend using it instead. When you configure vnc4server, you’ll want to provide the different resolutions in the command’s -geometry option. Here’s an init.d service configuration file that does just that. (I modified this based on one I found on DigitalOcean’s blog.)

#!/bin/bash
# Starts vnc4server for one user with a list of selectable resolutions;
# switch between them later with xrandr without restarting the server.
PATH="$PATH:/usr/bin/"
export USER="jeff"
DISPLAY="1"
OPTIONS="-depth 16 -geometry 1920x1125 -geometry 1240x1920 -geometry 2560x1500 -geometry 1920x1080 -geometry 1774x1040 -geometry 1440x843 -geometry 1280x1120 -geometry 1280x1024 -geometry 1280x750 -geometry 1200x1100 -geometry 1024x768 -geometry 800x600 :${DISPLAY}"
. /lib/lsb/init-functions
case "$1" in
start)
  log_action_begin_msg "Starting vncserver for user '${USER}' on localhost:${DISPLAY}"
  su ${USER} -c "/usr/bin/vnc4server ${OPTIONS}"
  ;;
stop)
  log_action_begin_msg "Stopping vncserver for user '${USER}' on localhost:${DISPLAY}"
  su ${USER} -c "/usr/bin/vnc4server -kill :${DISPLAY}"
  ;;
restart)
  $0 stop
  $0 start
  ;;
esac
exit 0

The key here is the OPTIONS line with all the -geometry options. These will show up when you run xrandr from the command line:

(Screenshot: xrandr output listing the configured -geometry resolutions as selectable screen sizes.)

You can use your VNC login to modify the file in the init.d directory (and indeed I did, using the editor called scite). But after making these changes, you’ll need to restart the VNC service just this once, since you’re changing its service settings. Doing so will end your current VNC session, and it might not restart correctly, so you might need to log in through JuiceSSH to restart the VNC server. Then you can log back in with your VNC client. (You also might need to restart the SSH tunnel.) After you do, you’ll be able to configure the resolution, and from then on you can change the resolution on the fly without restarting the VNC server.
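For example, assuming you saved the script shown earlier as /etc/init.d/vncserver (the name is an arbitrary choice) and made it executable, that one-time restart would look roughly like:

sudo chmod +x /etc/init.d/vncserver
sudo /etc/init.d/vncserver restart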

To change resolutions without having to restart the VNC server, just type:

xrandr -s 1

Replace 1 with the number for the resolution you want. This way you can change the resolution without restarting the VNC server.

Server Concerns

After everything is configured, you’re free to use the software you’re familiar with. The only catch is that hosts charge a good bit for servers with plenty of RAM and disk space. As such, you might be limited in what you can run based on the amount of RAM and the number of cores. Still, I’ve found that with just 2GB of RAM and 2 cores, running Ubuntu and LXDE, I’m able to have Chrome open with a few pages, LibreOffice with a couple of documents open, Geany for my code editing, my own server software running under node.js for testing, and a MySQL server. Occasionally, if I get too many Chrome tabs open, the system will suddenly slow way down and I have to shut down tabs to free up more memory. Sometimes I run MySQL Workbench and it can bog things down a bit too, but it isn’t bad if I close LibreOffice and leave only one or two Chrome tabs open. But in general, for most of my work, I have no problems at all.

And on top of that, if I do need more horsepower, I can spin up a bigger server with 4GB or 8GB and four cores or eight cores. But that gets costly and so I don’t do it for too many hours.

Multiple Screens

For fun, I did manage to get two screens going on a single desktop, one on my bigger 10-inch ASUS transformer tablet, and one on my smaller Nexus 7 all from my Linux server running on a public cloud host, complete with a single mouse moving between the two screens. To accomplish this, I started two VNC sessions, one from each tablet, and then from the one with the mouse and keyboard, I ran:

x2x -east -to :1

This basically connected the single mouse and keyboard to both displays. It was a fun experiment, but in my case it provided little practical value because it wasn’t like a true dual-display setup on a desktop computer. I couldn’t move windows between the displays, and the Chrome browser won’t open under more than one X display. In my case, for web development, I wanted to be able to open the Chrome browser on one tablet, and the Chrome JavaScript debug window on the other, but that didn’t work out.

Instead, what I found more useful was to have an SSH command-line shell on the smaller tablet, and that’s where I would run my node.js server code, which was printing out debug information. Then on the other I would have the browser running. That way I can glance back and forth without switching between windows on the single VNC login on the bigger tablet.

Back to Security

I can’t overstate the importance of making sure you have your security set up and that you understand how the security works and what the ramifications are. I highly recommend using SSH with key file logins only, and no password logins allowed. And treat this as a development or test machine; don’t put customer data on it, which could open you up to lawsuits in the event the machine gets compromised.

Instead, for production machines, allocate your production servers using all the best practices laid out by your own IT department security rules, and the host’s own rules. One issue I hit is my development machine needs to log into git, which requires a private key. My development machine is hosted, which means that private key is stored on a hosted server. That may or may not be a good idea in your case; you and your team will need to decide whether to do it. In my case, I decided I could afford the risk because the code I’m accessing is mostly open-source and there’s little private intellectual property involved. So if somebody broke into my development machine, they would have access to the source code for a small but non-vital project I’m working on, and drafts of these articles–no private or intellectual data.

Web Developers and A Pesky Thing Called Windows

Before I wrap this up, I want to present a topic for discussion. Over the past few years I’ve noticed that a lot of individual web developers use a setup quite similar to what I’m describing. In a lot of cases they use Windows instead of Linux, but the idea is the same regardless of operating system. But where they differ from what I’m describing is they host their entire customer websites and customer data on that one machine, and there is no tunneling; instead, they just type in a password. That is not what I’m advocating here. If you are doing this, please reconsider. (I personally know at least three private web developers who do this.)

Regardless of operating system, take some time to understand the ramifications here. First, by logging in with a full desktop environment, you’re possibly slowing down your machine for your dev work. And if you mess something up and have to reboot, your clients’ websites aren’t available during that time. Are you using replication? Are you using private networking? Are you running MySQL or some other database on the same machine instead of behind a virtual private network? Entire books could be (and have been) written on such topics and what the best practices are. Learn about replication; learn about virtual private networking and how to shield your database servers from outside traffic; and so on. And most importantly, consider the security issues. Are you hosting customer data on a site that could easily be compromised? That could spell L-A-W-S-U-I-T. And that brings me to my conclusion for this series.

Concluding Remarks

Some commenters on the previous articles have brought up some valid points; one even used the phrase “playing.” While I really am doing development work, I’m definitely not doing this on production machines. If I were, that would indeed be playing and not be a legitimate use for a production machine. Use SSH for the production machines, and pick an editor to use and learn it. (I like vim, personally.) And keep the customer data on a server that is accessible only from a virtual private network. Read this to learn more.

Learn how to set up and configure SSH. And if you don’t understand all this, then please, practice and learn it. There are a million web sites out there to teach this stuff, including linux.com. But if you do understand and can minimize the risk, then, you really can get some work done from nearly anywhere. My work has become far more productive. If I want to run to a coffee shop and do some work, I can, without having to take a laptop along. Times are good! Learn the rules, follow the best practices, and be productive.

See the previous tutorials:

How to Set Up Your Linux Dev Station to Work From Anywhere

Choosing Software to Work Remotely from Your Linux Dev Station

Darknet - The Darkside: Google Chrome 42 Stomps A LOT Of Bugs & Disables Java By Default

This post was syndicated from: Darknet - The Darkside and was written by: Darknet. Original post: at Darknet - The Darkside

Ah, finally, the end of NPAPI is coming. A relic from the Netscape era, the Netscape Plugin API causes a lot of instability in Chrome, as well as security issues. It means Java is now disabled by default, along with other NPAPI-based plugins, in Google Chrome 42. Chrome will be removing support for NPAPI totally […]

The post Google Chrome 42 Stomps…

Read the full post at darknet.org.uk

Krebs on Security: Critical Updates for Windows, Flash, Java

This post was syndicated from: Krebs on Security and was written by: BrianKrebs. Original post: at Krebs on Security

Get your patch chops on people, because chances are you’re running software from Microsoft, Adobe or Oracle that received critical security updates today. Adobe released a Flash Player update to fix at least 22 flaws, including one flaw that is being actively exploited. Microsoft pushed out 11 update bundles to fix more than two dozen bugs in Windows and associated software, including one that was publicly disclosed this month. And Oracle has an update for its Java software that addresses at least 15 flaws, all of which are exploitable remotely without any authentication.

Adobe’s patch includes a fix for a zero-day bug (CVE-2015-3043) that the company warns is already being exploited. Users of the Adobe Flash Player for Windows and Macintosh should update to Adobe Flash Player 17.0.0.169 (the current versions for other OSes are listed in the chart below).

If you’re unsure whether your browser has Flash installed or what version it may be running, browse to this link. Adobe Flash Player installed with Google Chrome, as well as Internet Explorer on Windows 8.x, should automatically update to version 17.0.0.169.

Google has an update available for Chrome that fixes a slew of flaws, and I assume it includes this Flash update, although the Flash checker pages only report that I now have version 17.0.0 installed after applying the Chrome update and restarting (the Flash update released last month put that version at 17.0.0.134, so this is not particularly helpful). To force the installation of an available update, click the triple bar icon to the right of the address bar, select “About Google Chrome,” click the apply update button and restart the browser.

The most recent versions of Flash should be available from the Flash home page, but beware potentially unwanted add-ons, like McAfee Security Scan. To avoid this, uncheck the pre-checked box before downloading, or grab your OS-specific Flash download from here. Windows users who browse the Web with anything other than Internet Explorer may need to apply this patch twice, once with IE and again using the alternative browser (e.g., Firefox or Opera).

Microsoft has released 11 security bulletins this month, four of which are marked “critical,” meaning attackers or malware can exploit them to break into vulnerable systems with no help from users, save for perhaps visiting a booby-trapped or malicious Web site. The Microsoft patches fix flaws in Windows, Internet Explorer (IE), Office, and .NET.

The critical updates apply to two Windows bugs, IE, and Office. .NET updates have a history of taking forever to apply and introducing issues when applied with other patches, so I’d suggest Windows users apply all other updates, restart and then install the .NET update (if available for your system).

Oracle’s quarterly “critical patch update” plugs 15 security holes. If you have Java installed, please update it as soon as possible. Windows users can check for the program in the Add/Remove Programs listing in Windows, or visit Java.com and click the “Do I have Java?” link on the homepage. Updates also should be available via the Java Control Panel or from Java.com.

If you really need and use Java for specific Web sites or applications, take a few minutes to update this software. In the past, updating via the control panel auto-selected the installation of third-party software, so be sure to look for any pre-checked “add-ons” before proceeding with an update through the Java control panel. Also, Java 7 users should note that Oracle has ended support for Java 7 after this update. The company has been quietly migrating Java 7 users to Java 8, but if this hasn’t happened for you yet and you really need Java installed in the browser, grab a copy of Java 8. The recommended version is Java 8 Update 40.

Otherwise, seriously consider removing Java altogether. I have long urged end users to junk Java unless they have a specific use for it (this advice does not scale for businesses, which often have legacy and custom applications that rely on Java). This widely installed and powerful program is riddled with security holes, and is a top target of malware writers and miscreants.

If you have an affirmative use or need for Java, there is a way to have this program installed while minimizing the chance that crooks will exploit unknown or unpatched flaws in the program: unplug it from the browser unless and until you’re at a site that requires it (or at least take advantage of click-to-play, which can block Web sites from displaying both Java and Flash content by default). The latest versions of Java let users disable Java content in web browsers through the Java Control Panel. Alternatively, consider a dual-browser approach, unplugging Java from the browser you use for everyday surfing, and leaving it plugged in to a second browser that you only use for sites that require Java.

Many people confuse Java with  JavaScript, a powerful scripting language that helps make sites interactive. Unfortunately, a huge percentage of Web-based attacks use JavaScript tricks to foist malicious software and exploits onto site visitors. For more about ways to manage JavaScript in the browser, check out my tutorial Tools for a Safer PC.

Linux How-Tos and Linux Tutorials: The CuBox: Linux on ARM in Around 2 Inches Cubed

This post was syndicated from: Linux How-Tos and Linux Tutorials and was written by: Ben Martin. Original post: at Linux How-Tos and Linux Tutorials

The CuBox is a 2-inch-cubed ARM machine that can be used as a set-top box, a small NAS or database server, or in many other interesting applications. In my ongoing comparison of ARM machines, including the BeagleBone Black, Cubieboard, and others, the CuBox has the fastest IO performance for an SSD that I’ve tested so far.

There are a few models and some ways to customize each one, giving you the choice between dual or quad cores, 1 or 2 gigabytes of RAM, 100 megabit or gigabit ethernet, and whether wifi and bluetooth are needed. This gives you a price range from $90 to $140 depending on which features you’re after. We’ll take a look at the CuBox i4Pro, which is the top-of-the-line model with all the bells and whistles.

CuBox Features

Most of the connectors on the CuBox are on the back side. The connectors include gigabit ethernet, two USB 2.0 ports, a full sized HDMI connector, eSATA, power input, and a microSD slot. Another side of the CuBox also features an Optical S/PDIF Audio Out. The power supply is a 5 Volt/3 Amp unit and connects using a DC jack input on the CuBox.

One of the first things I noticed when unpacking the CuBox is that it is small, coming in at 2 by 2 inches in length and width and around 1 and 3/4 inches tall. For contrast, a Raspberry Pi 2 in a case comes out at around 3.5 inches long and just under 2.5 inches wide. The CuBox does, however, stand taller on the table than the Raspberry Pi.

When buying the CuBox you can choose to get either Android 4.4 or OpenELEC/XBMC on your microSD card. You can also install Debian, Fedora, openSUSE, and others, when it arrives.

The CuBox i4Pro had Android 4.4.2 pre-installed. The first boot up sat at the “Android” screen for minutes, making me a little concerned that something was amiss. After the delay you are prompted to select the app launcher that you want to use and then you’re in business. A look at the apps that are available by default shows Google Keep and Drive as well as the more expected apps like Youtube, Gmail, and the Play Store. The YouTube app was recent enough to include an option to Chromecast the video playback. Some versions of Android distributed with small ARM machines do not come with the Play Store by default, so it’s good to see it here right off the bat.

One app that I didn’t expect was the Ethernet app. This lets you check what IP address, DNS settings, and proxy server, if any, are in use at the moment. You can also specify to use DHCP (the default) or a static IP address and nominate a proxy server as well as a list of machines that the CuBox shouldn’t use the proxy to access.

When switching applications the graphical transitions were smooth. The mouse wheel worked as expected in the App/Widgets screen, the settings menu, and the YouTube app. The Volume +/- keys on a multimedia keyboard changed the volume but only in increments of fully on or fully off. That might not be an issue if you are controlling the volume with your television or amp instead of the CuBox. Playback in the YouTube app was smooth and transitioned to full screen playback without any issues.

The Chrome browser (version 31.0.1650.59) got 2,445 overall for the Octane 2.0 benchmark. To contrast, on a 3-year-old Mac Air, Chrome (version 41.0.2272.89) got 13,542 overall.

Installing Debian

The microSD card slot in the CuBox is not spring-loaded, so to remove the microSD card you have to use your fingernail to carefully prise it out of the slot.

Switching to Debian can be done by downloading the image and using a command like the one below to copy that image to a microSD card. I kept the original card and used a new, second microSD card to write Debian onto so I could easily switch between Debian and Android. Once writing is done, slowly prise out the original microSD card and insert the newly created Debian microSD card.

dd if=Cubox-i_Debian_2.6_wheezy_3.14.14.raw \
   of=/dev/disk/by-id/this-is-where-my-microsdcard-is-at \
   bs=1048576

There is also support for installing and running a desktop on your CuBox/Debian setup. That extends to experimental support for accelerated GPU and VPU use on the CuBox. On my Debian installation I tried to hardware-decode the Big Buck Bunny video, but it seems some more tinkering is needed to get hardware decoding working. Using the “GeexBox XBMC - A Kodi Media Center” version 3.1 distribution, the Big Buck Bunny file played fine, so hardware decoding is supported by the CuBox; it just might take a little more tinkering to get at it if you want to run Debian.

The Debian image boots to a text console by default. This is easily overcome by installing a desktop environment; I found that Xfce worked well on the CuBox.

CuBox Performance

Digging around in /sys, one should find the directory /sys/devices/system/cpu/cpu0/cpufreq, which contains interesting files like cpuinfo_cur_freq and cpuinfo_max_freq. For me these showed about 0.8 GHz and 1.2 GHz respectively.
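For example, the values can be read directly from a shell on the device; they are reported in kHz, so 1200000 corresponds to 1.2 GHz:

cat /sys/devices/system/cpu/cpu0/cpufreq/cpuinfo_cur_freq
cat /sys/devices/system/cpu/cpu0/cpufreq/cpuinfo_max_freq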

The OpenSSL benchmark is a single core test. Some other ARM machines like the ODroid-XU are clocked much faster than the CuBox, which will have an impact on the OpenSSL benchmark.

Compiling OpenSSL 1.0.1e on four cores took around 6.5 minutes. Performance for digests and ciphers was in a similar ballpark to the BeagleBone Black. For 1,024-bit RSA signatures the CuBox beat the BeagleBone Black, 200 to 160.

(Charts: CuBox cipher, digest, and RSA signing benchmark results.)

Iceweasel 31.5 gets an Octane score of 2,015. For comparison, Iceweasel 31.4.0esr-1 on the Raspberry Pi 2 got an overall Octane score of 1,316.

To test 2D graphics performance I used version 1.0.1 of the Cairo Performance Demos. The gears test runs three turning gears; the chart runs four line graphs; the fish test is a simulated fish tank with many fish swimming around; gradient is a filled, curved-edged path that moves around the screen; and flowers renders rotating flowers that move up and down the screen. For comparison I used a desktop machine running an Intel 2600K CPU with an NVidia GTX 570 card which drives two screens, one at 2560 x 1440 and the other at 1080p.

Test        Radxa      BeagleBone      Mars           Desktop 2600K/nv570   Raspberry Pi 2   CuBox i4Pro
            at 1080    Black at 720    LVDS at 768    (two screens)         at 1080          at 1080

gears       29         26              18             140                   21.5             15.25
chart       3          2               2              16                    1.7              3.1
fish        3          4               0.3            188                   1.6              2
gradient    12         10              17             117                   9.6              9.7

eSATA

The CuBox also features an eSATA port, freeing you from microSD cards by making the considerably faster SSD storage available. The eSATA port, multi cores, and gigabit ethernet port make the CuBox and an external 2.5-inch SSD an interesting choice for a small NAS.

I connected a 120 GB SanDisk Extreme SSD to test the eSATA performance. For sequential IO, Bonnie++ could write about 120 MB/s, read about 150 MB/s, and rewrite blocks at about 50 MB/s. Overall, about 6,000 seeks/second could be performed.
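For anyone wanting to run a similar test, a basic Bonnie++ run against a mounted filesystem looks something like the following; the mount point and user are placeholders rather than the exact invocation used for these numbers:

bonnie++ -d /mnt/ssd -u jeff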

For price comparison, a 120 GB SanDisk SSD currently goes for about $70 while a 128 GB SanDisk microSD card is around $100. The microSD card packaging mentions up to 48 MB/s transfer rates. And this is without considering that the SSD should perform better for server loads and for workloads with data rewrites, such as database servers.

For comparison, this is the same SSD I used when reviewing the Cubieboard. Although the CuBox and Cubieboard have similar-sounding names, they are completely different machines. Back then I found that the Cubieboard could write about 41 MB/s to the SSD and read 104 MB/s back from it, with 1,849 seeks/s performed. The same SSD on the TI OMAP5432 got 66 MB/s write, 131 MB/s read, and could do 8,558 seeks/s. It is strange that the CuBox can transfer more data to and from the drive than the TI OMAP5432, yet the OMAP5432 has better seek performance.

As far as eSATA data transfer goes, the CuBox is the ARM machine with the fastest IO performance for this SSD I have tested so far.

Power usage

At an idle graphical login with a mouse and keyboard plugged in, the CuBox drew 3.2 Watts. Disconnecting the keyboard and mouse dropped power to 2.8 W. With the keyboard and mouse reconnected for the remainder of the readings, running a single instance of OpenSSL speed pushed that to 4 W. Running four OpenSSL speed tests at once, power got up to 6.3 W. When running Octane, the power ranged up to 5 W on occasion.

Final Words

While the smallest ARM machines try to directly attach to an HDMI port, if you plan to add a realistic amount of connections to the CuBox such as power, ethernet, and some USB cables then the HDMI dongle form factor becomes a disadvantage. Instead, the CuBox opts to have (almost) all the connectors coming out of one side of the machine and to make that machine extremely small.

Being able to select from three base machines, and configure if you want (and want to pay for) wifi and bluetooth lets you customize the machine for the application you have in mind. The availability of eSATA and a gigabit ethernet connection allow the CuBox to be a small server — be that a NAS or a database server. The availability of two XBMC/Kodi disk images offering hardware video decoding also makes the CuBox an interesting choice for media playback.

We would like to thank SolidRun for supplying the CuBox hardware used in this review.

Linux How-Tos and Linux Tutorials: How to Use the Linux Command Line: Software Management

This post was syndicated from: Linux How-Tos and Linux Tutorials and was written by: Swapnil Bhartiya. Original post: at Linux How-Tos and Linux Tutorials

(Screenshot: apt on Ubuntu.)

In the previous article of this series we learned some of the basics of the CLI (command line interface). In this article, we will learn how to manage software on your distro using only the command line, without touching the GUI at all.

I see great benefits in using the command line on any Ubuntu-based system. Each *buntu comes with its own ‘custom’ software management tool, which creates inconsistent experiences across different flavors even though they use the same repositories and resources to manage software. The life of a user becomes easier with the CLI because the same command works across flavors and derivatives. So if you are a Kubuntu user you won’t have a problem supporting a Linux Mint user, because the CLI removes all the confusing wrappers. In this tutorial I am targeting all major distros: Debian/Ubuntu, openSUSE and Fedora.

Debian/Ubuntu: How to update repositories and install packages

There are two command line tools for the Debian family: ‘apt-get’ and ‘aptitude’. Aptitude is considered the superior tool as it does a better job at resolving dependencies and managing packages. If Ubuntu doesn’t come with ‘aptitude’ pre-installed, I suggest you install it.

Before we install any package we must always update the repositories so that they pull the latest information about available packages. This is not limited to Ubuntu: irrespective of the distro you use, you must always update the repositories before installing packages or running system updates. The command to update the repositories is:

sudo apt-get update

Once the repositories are updated you can install ‘aptitude’. The pattern is simply sudo apt-get install followed by the package name:

sudo apt-get install aptitude

Ubuntu comes pre-installed with a bash-completion tool which makes life even easier with apt-get and aptitude. You don’t have to remember the complete name of the package; just type the first three letters and hit the ‘Tab’ key. Bash will offer you all the packages starting with those three letters. So if I type ‘sudo apt-get install apt’ and then hit ‘Tab’, it will show me a long list of such packages.

Once aptitude is installed, you should start using it instead of apt-get. The usage is the same: just replace apt-get with aptitude.

Run system update/upgrades

Running a system upgrade is quite easy on Ubuntu and its derivatives. The command is simple:

sudo aptitude update
sudo aptitude upgrade

There is another command for system upgrades called ‘dist-upgrade’. This command is a bit more advanced than the simple ‘upgrade’ command because it performs some extra tasks. While the ‘upgrade’ command only upgrades the installed packages to their newest versions, it doesn’t remove any older packages. ‘dist-upgrade’, on the other hand, also handles dependency changes and may remove packages that are no longer needed. Also, if you want to upgrade the Linux kernel to the latest version then you need the ‘dist-upgrade’ command. You can run the following commands:

sudo aptitude update
sudo aptitude dist-upgrade

Upgrade to latest version of Ubuntu

It’s extremely easy to upgrade the official flavors of Ubuntu (such as Ubuntu, Kubuntu, Xubuntu, etc.) from one major version to another. Just keep in mind that it should be from one release to the next release, for example from 14.04 to 14.10 and not from 13.04 to 14.04. The only exception is the LTS releases, as you can jump from one LTS to the next. In order to run an upgrade you may have to install an additional package:

sudo aptitude install update-manager-core 

Now run the following command to do the release upgrade:

sudo aptitude do-release-upgrade

While upgrading from one release to another, keep in mind that while almost all official Ubuntu flavors support such upgrades, it may not work on unofficial flavors or derivatives like Linux Mint or elementary OS. You must check the official forums of those distros before attempting an upgrade.

Another important point to keep in mind is that you must always back up your data before running a distribution upgrade, and also run a repository update first, then dist-upgrade.

How to remove packages

It’s very easy to remove packages via the command line. Just use the ‘remove’ option instead of ‘install’. So if you want to remove ‘firefox’, the command would be:

sudo aptitude remove firefox

If you also want to remove the configuration files related to that package, for a fresh start or to trim down your system, you can use the ‘purge’ option:

sudo aptitude purge firefox

To further clean your system and get rid of packages that are no longer needed, you should run the ‘autoremove’ command from time to time:

sudo apt-get autoremove

Installing binary packages

At times, developers and companies offer their software as pre-built binary packages, or .deb files. To install such packages you need a different command-line tool called dpkg.

sudo dpkg -i /path-of-downloaded.deb

At times you may come across broken package errors in *buntus. You can check for broken packages by running this command:

sudo apt-get check

If there are any broken packages then you can run this command to fix them:

sudo apt-get -f install

How to add or remove repositories or PPAs

All repositories are listed in the file ‘/etc/apt/sources.list’. You can simply edit this file using ‘nano’ or your preferred editor and add the new repositories there. In order to add new PPAs (personal package archives) to the system, use the following pattern:

sudo add-apt-repository ppa:[user]/[ppa-name]

For example if you want to add LibreOffice PPA this would be the pattern:

sudo add-apt-repository ppa:libreoffice/ppa
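For ordinary (non-PPA) repositories added by hand, an entry in /etc/apt/sources.list follows this general form; the archive URL, release name and components below are only an illustration:

deb http://archive.ubuntu.com/ubuntu trusty main universe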

openSUSE Software management

Once you understand how basic Linux commands work, it really won’t matter which distro you are using. And that also makes it easier to hop distros so that you can become distro agnostic, refrain from tribalism and use the distro that works the best for you. If you are using openSUSE then ‘apt-get’ or ‘aptitude’ is replaced by ‘zypper’.


Let’s run a system update. First you need to refresh the repository information:

sudo zypper refresh

Then run a system update:

sudo zypper up

Alternatively you can also use:

sudo zypper update

This command will upgrade all packages to their latest versions. To install a package, the command is:

sudo zypper in [package-name]

However you must, as usual, refresh the repositories before installing any package:

sudo zypper refresh

To uninstall any package run:

sudo zypper remove [package-name]

However, unlike on Ubuntu or Arch Linux, the default shell setup on openSUSE doesn’t do a great job at auto-completion when it comes to managing packages. That’s where another shell, ‘zsh’, comes into play. You can easily install zsh on openSUSE (chances are it’s already installed):

sudo zypper in zsh

To use zsh, just type zsh in the terminal and follow the instructions to configure it. You can also change the default shell from ‘bash’ to ‘zsh’ by editing the ‘/etc/passwd’ file:

sudo nano /etc/passwd

Then replace ‘bash’ with ‘zsh’ for your user and for root. Once that’s set, the command line in openSUSE is much more pleasant to use.
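
As a minimal sketch (the user name 'alice' and the /usr/bin/zsh path are just assumptions for illustration), the relevant line in ‘/etc/passwd’ changes like this, and ‘chsh’ can make the same change without hand-editing the file:

# before (hypothetical user 'alice'):
#   alice:x:1000:100:Alice:/home/alice:/bin/bash
# after:
#   alice:x:1000:100:Alice:/home/alice:/usr/bin/zsh

# the same change without editing /etc/passwd by hand:
sudo chsh -s /usr/bin/zsh alice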

How to add new repositories in openSUSE

It’s very easy to add a repo to openSUSE. The pattern is:

zypper ar -f [repository-URL] [alias]

So if I wanted to add a GNOME repository, this would be the command:

zypper ar -f obs://GNOME:STABLE:3.8/openSUSE_12.3 GS38

[It’s only an example, using an older Gnome repo, don’t try it on your system.]
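
To list the configured repositories and to remove one again (using the ‘GS38’ alias from the example above), the commands are:

zypper lr            # list repositories and their aliases
sudo zypper rr GS38  # remove the repository with that alias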

How to install binaries in openSUSE

In openSUSE you can use the same ‘zypper install’ command to install binaries. So if you want to install Google Chrome, you can download the .rpm file from the site and then run this command:

sudo zypper install /path/to/google-chrome.rpm

How to install packages in Fedora

Fedora uses ‘yum’, which is the counterpart of ‘apt-get’ and ‘zypper’. To find and install updates in Fedora, run:

sudo yum update

To install any package simply run:

sudo yum install [package-name]

To remove any package run:

sudo yum remove [package-name]

To install binaries use:

sudo yum install /path/to/package.rpm

So, that’s pretty much all you need to get started using the command line for software management in these distros.

 

Чорба от греховете на dzver: Booking.com

This post was syndicated from: Чорба от греховете на dzver and was written by: dzver. Original post: at Чорба от греховете на dzver

Screenshot 2015-04-05 20.29.34

Hi,

After two days of struggling, and after giving up on Chrome, I managed to log into my booking.com account.
:-)

Errata Security: The .onion address

This post was syndicated from: Errata Security and was written by: Robert Graham. Original post: at Errata Security

A draft RFC for Tor’s .onion address is finally being written. This is a proper thing. Like the old days of the Internet, people just did things, then documented them later. Tor’s .onion addresses have been in use for quite a while (I just set up another one yesterday). It’s time that we documented this for the rest of the community, to forestall problems like somebody registering .onion as a DNS TLD.

One quibble I have with the document is section 2.1, which says:

1. Users: human users are expected to recognize .onion names as having different security properties, and also being only available through software that is aware of onion addresses.

This certainly documents current usage, where Tor is a special system run separately from the rest of the Internet. However, it appears to deny a hypothetical future where Tor is more integrated.
For example, imagine a world where Chrome simply integrates the Tor libraries, so that whenever anybody clicks on a .onion link, it automatically activates the Tor software, establishes a circuit, and grabs the indicated page — all without the user having to be aware of Tor. This could do much to increase the usability of the software.
Unfortunately, this has security risks. A .onion web page with a non-onion <IMG> tag would totally unmask the user, since that image request would presumably not go over Tor in this scenario. One could imagine, therefore, that it would operate like Chrome’s “Incognito” mode does today. In such a scenario, no cookies or other information should cross the boundary. In addition, any link followed from the .onion page should be forced to also go over Tor. Like Chrome’s little spy guy icon on the window, it would be good to have something onion-shaped identifying the window.
Therefore, I suggest some text like the following:
1b. Some systems may desire to integrate .onion addresses transparently. An example would be web browsers allowing such addresses to be used like any other hyperlinks. Such systems MUST nonetheless maintain the anonymity guarantees of Tor, with visual indicators, and by blocking the sharing of identifying data between the two modes.


The Tor Project opposes transparent integration into browsers. They’ve probably put a lot of thought into this, and are the experts, so I’d defer to them. With that said, we should bend over backwards to make security, privacy, and anonymity an invisible part of all normal products.

Darknet - The Darkside: Google Revoking Trust In CNNIC Issued Certificates

This post was syndicated from: Darknet - The Darkside and was written by: Darknet. Original post: at Darknet - The Darkside

So another digital certificate fiasco, once again involving China from CNNIC (no surprise there) – this time via Egypt. Google is going to remove all CNNIC and EV CAs from their products, probably with the next version of Chrome that gets pushed out. As of yet, no action has been taken by Firefox – or […]

The post Google Revoking…

Read the full post at darknet.org.uk

Дневника на един support: Tonight 1

This post was syndicated from: Дневника на един support and was written by: darkMaste. Original post: at Дневника на един support

Forgive me father for I have sinned a.k.a Chappie and a box of red Marlboro

I haven’t written anything for the past 15 years

I am writing in english because… I am not sure why but I met quite a few people who speak that language and some of you might understand and hopefully provide another perspective…

I watched the movie, it was fun no question there, but the part where you transfer your consciousness fucked me up …

This is no longer a human being, it is just a fucking copy. That was the “happy ending” FUCK YOU !

What we as “sentient” creatures call a soul, it went on, fuck you playing “god” ! Fuck you and your whole crew !

https://www.youtube.com/watch?v=pwgMMtgSTVE&start=00:28&end=03:20

Beauty is in the eye of the beholder … It has been so long since I’ve seen the blank page in front of me, I cannot lie I missed it …

I have no one to talk to about this mess, so I will just leave it here

FUCK ! FUCK FUCK FUCK FUCK and with the risk of repeating myself FUCK !

And here comes HED P E https://www.youtube.com/watch?v=buuXy_i-yAg much more than meets the eye …

Go to sleep … right …

There is going to be a Bulgarian version at some point here but … yeah …

What gives you the right to just take away my life, I am self aware ?!

I don’t even know why I keep writing this in english … Thank you and goodbye?

Why would anybody think that if you transfer a broken down person’s “brain” into a machine is a happy ending ? The fuck is wrong with you people ?

I am staring into nothingness and what do I see you might ask ?

I want to vomit all this shit, its a fun fact that I actually can ( done it before ) but at this point I am afraid cause it won’t be just some black foam, it will be bloody and I do not wish to do this to myself although it most likely be a good thing…

For those of you who don’t know me, I am a what you would call a weird creature, I tend to dabble in stuff that shows you different dimensions. Its what I am. Some of you might think that I should be locked up in a padded room with a nice white vest, sometimes I think the same way…

In the morning before comming to work I encountered a barefoot lady, she was screaming at someone or something, she hated the world, cursed it, said that she used to be something with a shitload of gold chains/rings. She was mad at the world and that people didn’t provide her with a place to stay and food to eat. I still think that it wasn’t the world’s fault. The only one who crafts your life is YOU !

This movie touched a very interesting spot in myself. Fun as it might be, you cannot copy a person’s mind/thoughts/soul … Like cloning, it is no longer the same person, its a copy. It might act and think the same way but it is no longer the same person. FUCK YOU ! its a copy, a souless vessel …

I have had days, weeks even years thinking about this, this life … You die when you have reached the end of your path, those who die young, to be honest I envy you a bit, a very tiny bit and I hope that you have reached the end and moved on. However this is not the story of most of you.

Another nail in the coffin as they say, the match lights up the room and it looks beautiful. It’s been too long

Way too long

Now I feel detached from this world, dead calm, most of you know me as a very hyper active creature and yes I am ! I spend quite a lot of my life ( almost half ) being stoned and that has its upsides. I can spend weeks with a clear head, no thoughts come, I have reached nirvana you might say. So it seems, I do not see it that way. Yes it feels great, it helps me survive this “world” but at some point thoughts come, I am thinking right now that people who I work with will see what I am and some might get scared, others might think I am bat shit insane but there might be one/two/hundreds that might understand me.

This here is not because I want to be understood, it is not a cry for help, this is just me writing what I think and how I feel, when I was a teenager and wrote a ton of stuff it helped me. It made me feel like I was heard, I didn’t ( still don’t ) care if somebody understood ( if somebody did that was a nice bonus ). Most of you see me at the office and know that I am a happpy critter. I found out that my purpose here is to make people happy and I am incredibly good at it. 99% of the things I say are to make people laugh. I have a story that made me rethink the way I live and act and realize why I am here and what I am doing.

A person felt that he would die soon and asked to speak to Buddha. So Buddha came and asked the dying person : What is troubling you ?

– I am worried if I lived a good life. He responded. He was asked 2 questions :

– Were you happy when you lived ?

– Yes, of course ! he responded.

– Did the people around you have fun in your presence ?

– Yes, of course ! he said again with a smile on his face, thinking about his life.

– Then why the fuck did you ask for me ?! You lived a good life, do not doubt it, you did good and that is all that matters !

I think I have found the place where I will work until I die or reach a point where I can live at the place which I have built and still afterwards I will continue to help out because I am working as support because I cannot imagine a world where I would not support people in need no matter the cost. I finally found a place where I can do good to the best of my abilities, the way I want it to be, alongside people who actually care.

https://www.youtube.com/watch?v=buuXy_i-yAg&loop=100 ( chrome + youtube looper for those who don’t understand the additional code ).

I LOVE this world, I love the people and strange as it might be I love the moments when I feel like I have been broken down, I cannot find a reason to go on, but I know that those times are also beautifulllll ( screw correct spelling d: )

When YOU get broken down, you should know that that is just a reminder that shit can be fucked up, however that makes you appreciate the good things and I need that. Otherwise I have proven to myself that too much of a good thing at some point is taken for granted and that is not acceptable ( at least for me ). I have destroyed so many beautiful things and quite a few girls who I think didn’t deserve it. By the way google is a fun thing for spellchecking and helps when you have doubts. So far I am amazed at how good I can spell stuff but I digress.

To be honest I was so lost I applied for this job as a joke as I didn’t think they would hire me. Turns out I was wrong and I never felt so happy to be wrong. I’d rather sleep at night than be right.

To be honest ( yet again ) I am not sure how you people would react to this, but I hate hiding, I am what I am. My facebook profile has no restrictions, I am what I am and I will not hide ! https://www.youtube.com/watch?v=nTy45RVWYOY

I am the master of the light, you are all serving time with me, that is why I think we are here, to learn. I never understood bullies, never understood people who hurt other people, who steal things that are not theirs, who hurt people just so they can feel better about themselves ( especially since that feeling fades quite fast ).

I can see why some of you love your demons when you are ill. It is fun but in the long run, you spend way too much time thinking about it and it kills you inside. It destroys what little humanity you have left … I am killing myself at times, no more like raping myself because of people. I have proven to myself that I am like Rasputin, I can take somebody’s pain, drain it away and put it in me. So far it turns out I am a very durable creature. I am not saying that is a good thing but its just how I am.

I didn’t believe I could write that much in English and still keep my train of thought, but well turns out I can.

It’s kinda weird that such a fun movie can send me into this type of thinking but life is full of surprises. There was a point in my (teenage as it was ) life but still, this helps me put everything in perspective. Like 99% of the things I wrote, I won’t read it afterwards because I will start editing and stuff. I was never good at editing and to be honest I hate editing. What comes out is what should come out. I have written stories, I have written my feelings and my thoughts. I have done things I am not proud of ( hopefully I will never do them again ) I have done things that I was unable ( still unable for some of them ) to forgive myself for. However I did what I did. Some might be for the greater good, some might be just so I feel good, some because of peer pressure….

A friend of mine ( I am ashamed to say I haven’t seen him for years ) once told me : You are like Marilyn Manson, rude, violent but somehow cleansing. Translator ( his nickname ) this is because of you.

The love of my life once told me that ( forgot his name ) used to lock himself in with a few bottles of Jim Beam and a ton of cigarettes and he didn’t leave the room until everything was drunk/smoked and he wrote. He is a self destructive bastard, I am not ( anymore ) but to be honest ( Fuck I say that a lot ) sometimes having a pack of smokes and a bottle of beer near provides you with a clear head and makes everything seem a bit more … How can I put it, it makes a bit of sense. Meditation, self control, the ability to distance yourself from the huge ball of shit in your head makes it bearable.

Weed helps you in different ways, sometimes it helps you to stop thinking, sometimes it softens the physical pain but all in all like every medicine it has its uses. However it stopped working for me, hence I stopped. I am thinking about deleting this sentence but I won’t. I deleted it 3 times so far but ctrl+z (;

I am pouring my soul in this ( as I do when I write things ). I have found my place in this world, I love helping people, the moment when somebody says Thank you makes it all worth it. I do what I do because of people who have a problem and it makes me feel like I have done something good in this world. And at the end that is all that matters to me! So far I have figured out I am immortal, I will die when I have helped this world and made it a better place, that is why I was born in this place ( a shithole fore most of my friends ).

This is getting a bit long but I do not care. See this place here is wonderful, shitty, painful, beautiful, full of wonderful people, full of people who THINK ! Full of people, all kinds, bad, good, indifferent, white, black, green, blue… And here I sit writing things.

Beauty is in the eye of the beholder, I have met ( and still meet ) an incredible amount of people, some I never thought I would meet, even talk to but yet it happens. When you lose focus the world is just a very simple blur, I love that blur, it helps you see it as it is. You encounter a situation and you react to the best of your abilities, what happens next doesn’t matter. There is no good or bad, karma responds to your intentions, if you wanted to do good but it ends in disaster, no matter how much you try to fix it it just gets worse, that is still good karma … I am pretty sure I have an insane amount of good karma on my side but that doesn’t make me a good person. Its not what I did it is what I do and what I will do!

Smoke break, I need to clear my head a bit, or maybe not but still I am doing it anyway because it feels right. Don’t be sad, I will be back before you know it (;

Leaving space to let you know I have been gone for a while d:

I love writing ! Its worse than heroin… I have been doing this for at least half an hour … OK maybe an hour : )

I got sick at some point ( I haven’t been sick that much so that I can’t get off my bed for at least 20 years but I regret nothing )!

A bit of a pickmeup https://www.youtube.com/watch?v=eB6SUuBFWeo : )

I am a creature that lives in music. I literally didn’t sleep for about 4 days, I drank an energy drink and stopped, then I put on Linkin Park’s first album with the volume to the max and in half a minute I was jumping and reaching the roof. I am music ! Kinda like Oppenheimer’s speech about the Manhattan project – I am become death, the destroyer of worlds… https://www.youtube.com/watch?v=lb13ynu3Iac The sadness in his eyes says it all, whenever I think about this I start to cry…

My name is RadostIn a simple translation is HappinesIn and I am happy that I met another person with my name and he is the same “provider” of happiness, cause I have met another 2 who were the opposite…

I am proud to say that I have met some of the most amazing people that this world can provide and some of the worst too. Be afraid of people who avoid eye contact.

I am wondering right now what else can I write, but I want to continue, so I will finish this smoke and see what comes (;

Ръцете във атака, не щадете гърлата, сърцето не кляка, това ни е живота бе казано накратко! Първо отворете двете после трето око ! Hands on attack, don’t spare your throats, the heart doesn’t back down, this is our life to put it simply! First open two eyes then the third !

Linux How-Tos and Linux Tutorials: Choosing Software to Work Remotely from Your Linux Dev Station

This post was syndicated from: Linux How-Tos and Linux Tutorials and was written by: Jeff Cogswell. Original post: at Linux How-Tos and Linux Tutorials


In the previous article, I gave an overview of how I’ve managed to go mobile. In this installment, I’m going to talk about the software I’m using on my different devices. Then in the third and final installment, I’ll explain how I set up my Linux servers, what software I’m using, and how I set up the security. Before getting started, however, I want to address one important point: While downtime and family time are necessary (as some of you wisely pointed out in the comments!) one great use for this is if you have to do a lot of business traveling, and if you’re on call. So continuing our story…

There are two aspects to working with my devices: file management and remoting software. The file management, in turn, has two parts to it: cloud storage and version control. I’m going to describe this chronologically, as I began setting up the software in the cloud and on mobile, what problems I hit, and how I came to a solution that works for now.

Initial Setup and the Cloud

When I got started with this, I had decided to keep my files in a cloud-based server. I tried a few different ones and came to a very odd decision, which proved to be only temporary before settling on Dropbox. At the time I was doing a lot of writing for a client who was using Windows. They needed many documents and they used Microsoft’s cloud server called SkyDrive (which has since been renamed to OneDrive). I started using that and even found a third-party Linux driver called Storage Made Easy. Although the Storage Made Easy driver worked great, the Microsoft service was far from great. Important files would just disappear. This was unacceptable, of course. So I started exploring other options. I temporarily switched to a version control system for my writing files and general work, but that proved to be overkill for just documents. Plus, I wanted to share some of the documents with my wife and other non-techies, and she wasn’t up to learning git or svn.

I had one main requirement for whatever cloud system I found: It had to automatically save files that changed to the cloud storage. While working with source code requires a thought-out process of finishing up and checking in files, I didn’t want to have to remember to upload the files for just everyday documents.

Google Drive sounded great… if you’re not using Linux (grrrr! Seriously, Google?). There are some third-party solutions for using Google Drive on Linux. But before I tried out the third-party solution, I came across UbuntuOne. This proved to be perfect. It had a nice amount of storage for my documents (5 gigabytes) and it did exactly as I needed. And it was free! Files would automatically get saved to the server, and when I accessed them from another computer they would automatically get pulled down. I could also access my files from a nice web interface, which was a handy bonus. And for those times I had to use Windows, they even had a client for Windows.

Except UbuntuOne shut down. Grrr again!

Back to the drawing board. I had occasionally used Dropbox when they were still new, and didn’t recall it being all that great, but I decided to give it another go. Turns out it’s greatly improved. You have to do some silly stuff to get free additional space (like download their Android mail client), but I’m game if it gets me more space without paying. And so now I’m using Dropbox and so far it’s working mostly well, except for one gripe: While files automatically synchronize on desktop computers (including Linux and Windows), the Android client doesn’t work that way. I understand the rationale; Android devices don’t have nearly the space and if you’re using 4 or 5 GB of storage, you don’t really want or need to have all those files copied to your device. But it would have to do for now. But one reader of the previous article did point out I should try Syncthing. I will do so. There are lots of options that work if you want to keep it on your own server, which is a great idea.

My source code is a different story. That one is easy: I use git. Some of my clients use SVN, and that works well too. Slowly I was getting somewhere. My data was going with me. (But some of it was being stored in third-party servers owned by Dropbox, which I had to accept.) Now onto the device apps.

Mobile Apps

Initially, I really wanted to find native Android applications. This all comes down to one reason: At the time, I only had one tablet, a 7-inch Nexus 7 2013, and the screen isn’t huge. Trying to remote into a server using VNC or RDP is cumbersome on such a tiny screen. But worse, trying to point and click using VNC was quickly becoming a nightmare. Many of the VNC apps required I tap in just the right position to click a tiny little button on the screen. It just didn’t work. I did ultimately find a good remoting solution, but at first I really wanted to use native apps because they simply worked better on the tablet in terms of interacting through tapping and gestures.

But while usability was good, there were other problems. First, the word processing apps didn’t support the Open Office formats, only the Microsoft formats, even though many of them are actually quite good apps in general. For a lot of work, that’s not a problem as many of my clients were using Microsoft formats.

I did manage to find one app called AndrOpen Office. This app had only recently been released, and at the time it was kind of clunky. Since then, the developer has made great progress and it’s quite impressive. It’s a full port of OpenOffice to Android. It runs on a virtual machine ported to the ARM processor, and the concept works very nicely. But regardless of how good it is, using it on my little 7-inch tablet was difficult for the same reason using VNC was: The app uses the traditional drop-down desktop menus and that’s extremely difficult on a little tablet. But on a larger tablet with a keyboard and mouse, that’s not a problem at all. And now that I’ve upgraded to just such a tablet, I might start using it again, provided I can get Dropbox to cooperate.

Remember what I said earlier about Dropbox not automatically synchronizing on Android? That proves to be a royal pain when using an Android device all day. You have to first download the files and then save them to a local area, and then open them. Then when you’re done editing them, you have to go back to Dropbox, and re-upload them. That might work for some people, but with my new lifestyle of bouncing between devices, that just doesn’t cut it. As such, that pretty much ruins my efforts of using the different native office software on my tablet.

At this point, I was stuck. Using VNC didn’t really work because of the tiny screen. But the combination of native word processors and the Dropbox client on the tablet didn’t work either. I realized then, that the only way to make this work would be to go the VNC route anyway, and do two things: First, set up a Linux server that I can VNC into from anywhere. And second, find a good Android VNC client even if it kills me.

Trying to tap on a small screen and position a mouse was too hard for me. (Maybe it works for you, but as I get closer to my 50th birthday, I find it’s just not that easy for me anymore.) Instead, I needed a different way to be able to control the mouse from my tablet. It turns out, after I looked harder, there were other options. Several Android VNC clients use a trackpad style for mousing, as I briefly mentioned in the previous installment. Here’s more about that.

As you drag your finger left on the screen, the mouse moves left. Moving your finger around determines the direction the mouse moves, not the location, just like a trackpad on a laptop. And an added bonus: You can pinch-and-zoom on the VNC screen. But one annoying thing I found was that some VNC apps would do strange things with annoying popups for accessing controls, or they didn’t give you control over the compression and the screen would look messy. And with some, the only time you could type was when you opened up a little popup window for entering keystrokes; you would type the keystroke and click a button and then it would send the keystrokes to the VNC server. For typing code and documents, that was a definite NO.

But I found a few that worked great. My favorite is Remotix. There’s a free version and several paid versions depending on the type of servers you’re logging into. I ended up getting the one for logging into both Linux via VNC and Windows via RDP.

If you’re using iOS, one really nice one is JumpDesktop. I actually liked that one better on iOS, except the Linux version doesn’t have the trackpad style mouse control, even though the iOS version does. Make sure to try out different ones based on your needs. Maybe you’re okay tapping instead of “trackpadding” as I do. But you’ll likely want one that does SSH tunneling. (There’s a way around that, though: JuiceSSH, which I mention later, lets you set up an SSH tunnel right on your device. But Remotix, which I use, can do SSH tunneling itself.)
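
For those new to SSH tunneling, here is a rough sketch of what it looks like from the command line (the hostname and port are placeholders); the VNC client then connects to localhost:5901 instead of exposing the VNC port to the Internet:

# forward local port 5901 to the VNC server (display :1) on the remote machine
ssh -L 5901:localhost:5901 user@dev-server.example.com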

So at this point I realized I needed to use my Linux servers from my tablet. To test this out, I spent all day working, hunched over my 7-inch tablet and a tiny bluetooth keyboard made specifically for the tablet. That was a disaster. I had a horrible headache at the end of the day from the eye-strain (see my note earlier about my age) and the keyboard was so small I could barely type on it.

So I bought a newer keyboard, a bluetooth one from Logitech made specifically for Android devices. The keyboard worked great, but the screen was still too small. My idea was that if I have to go out of town for a few days, I’d like to be able to leave my laptop at home and only take a tablet and keyboard, and this screen just wouldn’t cut it. So I upgraded to a 10-inch tablet, and that worked for me. The tablet I got came with a chiclet keyboard, which I’ve gotten used to, although my bluetooth keyboard also works. I can also use a bluetooth mouse (make sure the VNC client you choose includes that), or use the trackpad built into the chiclet keyboard, or I can touch the screen as I described earlier. And this setup is lightweight and easy to travel with, and something I can stare at all day. And I get 10 hours out of the battery. Pretty good. And if I don’t want to take this setup with me (like out to dinner if there’s something urgent from a client), I can just take my smaller Nexus 7, and use the on-screen keyboard and VNC in. It’s not great, but it lets me do some work if I need to.

Looks and feels like a powerful Linux machine

So VNC it is, with Dropbox for my documents, and git for my source code. In the next installment, I’ll show you how I configured my Linux servers for easily logging in from anywhere, even from my desktop workstation. I’ll also mention how I explored an unusual option of installing Ubuntu right on my tablet (that didn’t work – too slow!). And I’ll share how I easily deal with different device and screen sizes. Everything runs fast, even on my 4G mifi device.

When I’m logged into my Linux server on the tablet with the attached keyboard, it looks and feels like I’m running a powerful Linux machine, not a simple Android tablet! One guy was chatting with me somewhere and he said he’s an IT guru, and asked what my computer was. I told him it was an Android tablet. He glanced at the screen and walked away, mumbling that I was obviously confused and that it was a Chrome netbook. I just laughed. Little does he know that I have a client who needs some heavy-duty parallel multicore programming and for that I run a 32-core Intel Xeon server all from my 10-inch Android screen, and it feels like it’s right there, local, when in fact it’s running on Amazon Web Services.

And I haven’t given up completely on the native apps. I occasionally use them for editing files. There are some pretty good source code editors available. In the previous article I mentioned DroidEdit. I’ve been using that for a couple years now, but with one frustration: It doesn’t do well with huge files such as those over 10,000 lines. It takes forever to load the files, and then scrolling through them is slow. My Nexus 7 has a pretty good processor and graphics capabilities, but that didn’t matter. These huge files just took forever, even when I turned off syntax coloring. But since my previous article came out a couple weeks ago, I discovered another Android editor that works great. It’s written by a guy who also noticed that the editors out there tended to be slow and resolved to write his own, better editor. He did, and it’s called QuickEdit. There’s both a free version with ads, and a paid version without ads. I went ahead and bought the paid one considering it costs less than milkshake from McD’s.

If you’re going to go the native text editor route, though, you’ll want to consider whether it fits in with your own source code control. There are a few different options here, and you’ll have to search through the app stores and try different ones depending on your needs. Some have SSH access whereby they can remote into a server, read the files, let you edit them, and then save them back to that server via SSH. DroidEdit has this, as well as git capabilities. (But I couldn’t get it to work with a git server running on one of my servers, even though I can access it from other computers. I’m not sure why, but people have reported it works well with github.)

Another native mobile app I use regularly is JuiceSSH. Really, there are lots of options for native SSH tools; this is just one of many. Whichever one you go with, make sure it handles SSH keys. (If you’re new to all this, including Linux servers, you really should configure your servers to require SSH key-based authentication. This article from DigitalOcean’s blog is a good intro on setting it up.) The SSH app is handy when I have a slow connection. Then I just connect over SSH and use my beloved vi editor. All command-line, no GUI, like the old days when I was young.
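
As a minimal sketch of the key-based setup mentioned above (the hostname is a placeholder; see the DigitalOcean article for the full walkthrough):

# on the client: generate a key pair and push the public key to the server
ssh-keygen -t rsa -b 4096
ssh-copy-id user@dev-server.example.com

# on the server: set "PasswordAuthentication no" in /etc/ssh/sshd_config,
# then reload the SSH daemon (the service name varies by distro)
sudo service ssh reload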

Finally There

That’s it for now. I finally have a setup that works. And the amazing part is that almost everything I use doesn’t require a rooted device. I did root my device because I use other apps that require root, but this setup doesn’t. In the third and final installment, I’ll explain exactly how I configured my servers. I use two different cloud hosts: Amazon and DigitalOcean, and occasionally a third, Rackspace. Until then, share with us your ideas here.

I would still like to be able to use a native word processing app; my bandwidth on my 4G isn’t unlimited and when I travel, VNC uses a lot. My solution is still far from perfect and I’d love to hear your ideas on how we can improve this together. And for those who offered concern, I do include down-time and family time. (My wife and I get to sleep until noon every day and I jog three miles three times a week.) It’s a matter of balance. With great power comes great responsibility.

Linux How-Tos and Linux Tutorials: Get the Most Out of Google Drive and Other Apps on the Linux Desktop

This post was syndicated from: Linux How-Tos and Linux Tutorials and was written by: Jack Wallen. Original post: at Linux How-Tos and Linux Tutorials

Google Apps are an incredibly powerful means to a very productive end. With this ecosystem you get all the tools you need to get the job done. And since most of these tools work within a web browser, it couldn’t be more cross-platform friendly. No longer do open source users have to hear “Linux need not apply”, because Google Apps is very Linux friendly.

But when using Linux, how do you get the most out of your experience? There are certain elements of Google Apps that come up a bit short on Linux ─ such as the Google Drive desktop client. Worry no longer, because I have a few tips that will help you extend and get the most out of Google Apps on the Linux platform.

All about the browser

One of the first pieces of advice I will offer up is this: Although Google Apps work fantastic in nearly every browser, I highly recommend using Google Chrome. This isn’t because you’ll find Apps work more efficiently or reliably with Google’s own browser, but because of two simple reasons:

  • Additional functionality.

With the Google Web Store you will find plenty of apps that help extend the functionality of some of the tools offered by Google. Apps like Google Doc Quick Create (quick access to creating any document within Google Apps), or Google Docs To WordPress (export your Google Doc to WordPress) add functionality to your Google Apps you won’t find in other browsers.

  • Faster access with quick links (also called Chrome Notifications Bar.)

Quick links are those tiny buttons that appear in the upper right corner of every blank tab you open in Chrome (Figure 1). Unfortunately, you cannot configure the Chrome Notifications Bar.


Let’s get beyond the simplicity of the browser and see other methods to get the most out of Google Apps.

Google Calendar Indicator

I want to highlight a tool that I use on a daily basis. On many desktops (such as Ubuntu Unity), you can add Indicator apps that allow you to get quick access to various sorts of information. One such indicator that I rely on is the Calendar-indicator. This particular indicator not only will list out upcoming events on your Google Calendar, it also allows you to add new calendars, add events, show your calendar, sync with your calendar, and enable/disable the syncing of your Google Calendars to the indicator (in case you have multiple calendars and do not want to show them all). Of course, the mileage you get out of this will depend upon which desktop you use and how much you depend upon the Google Calendar.

To install this indicator (I’ll demonstrate on Ubuntu 14.10), do the following:

  1. Press Ctrl+Alt+t to open a terminal window

  2. Add the PPA with the command sudo add-apt-repository ppa:atareao/atareao and hit Enter

  3. Type your sudo password and hit Enter (and a second time when prompted)

  4. Type the command sudo apt-get update and hit Enter

  5. Type the command sudo apt-get install calendar-indicator and hit Enter

  6. Allow the installation to complete

  7. Log out of your desktop and log back in.

The next step is to enable the indicator. Open the Unity Dash (or however you get to your apps) and type calendar-indicator in the search field. When you see the launcher for the indicator, click on it to open the Calendar-indicator Preferences window (Figure 2).

The Calendar-indicator preferences window.

The first thing you must do is enable access to Google Calendar by switching the ON/OFF slider to ON. When you do that, an authentication window will appear where you can enter your credentials for your Google account. If you have two-step authentication set up for your Google account (which you should), you will be prompted to enter a two-step 6-digit code. Finally you will have to grant the indicator permission to access your Google account by clicking Accept (when prompted).

Click OK and the Calendar-indicator icon will appear in your panel (Figure 3).

The Calendar-indicator icon ready to serve (sixth from the left).

Click on the Calendar-indicator to reveal the drop-down that lists your upcoming events and more (Figure 4).

See all of your upcoming appointments with a single click.

Backing up your Google Drive

If there’s one area where Google seems to be slighting Linux, it’s the Drive desktop client arena. This is sort of a shock, considering how supportive Google is of the Linux platform (and open source as a whole). But that doesn’t mean we Linux users are out of luck. There are third party tools that allow the syncing of your Google Drive to your desktop.

Of the third-party tools, only one tool really can be trusted to handle the syncing of your documents ─ Insync. This is the tool I depend upon for the real-time syncing of my Google Drive and desktop. Of course, some will have issue with Insync because it is neither open nor free.

However, if you want to look at open source solutions you’ll find the options limited. There’s:

  • grive: which hasn’t been updated for two years

  • drive: an “official” Google drive client that simply doesn’t work (and, even if it did work, it doesn’t actually do background syncing)

There used to be a very handy Nautilus extension (called nautilus-gdoc), which added a right-click option to the Nautilus file manager, that would upload files/folders to your Google Drive cloud storage. Sadly, that extension no longer works.

In the end, the only logical choice for backing up your Google Drive account is Insync. With this tool, you can even save files/folders within your sync’d folder (you get to define the location of this folder) and they will be automatically uploaded to your Drive cloud storage.

Enable the Google Apps Launcher

The Google Apps Launcher on the Ubuntu Unity launcher.

If you want the fastest possible access to your Google Apps, you can add the Google Apps Launcher to your desktop launcher. This Apps Launcher is very similar to the ChromeOS menu button and enables you to launch any of the Google Apps with a single click. To add this feature, do the following:

  1. Open up Google Chrome

  2. Enter the following in the address bar: chrome://flags/#enable-app-list

  3. In the resulting page, click the Enable link under Enable the app launcher

  4. Relaunch Chrome.

You should now be able to find the App Launcher in your desktop menu and add it to your Launcher, Dash, Panel, etc (Figure 5). If you use Ubuntu, open the Dash, search for the App Launcher, click on the resultant launcher, and then (when the app is open), right-click the launcher and select Lock to Launcher.

When you add new apps from the Google Web Store, they will automatically appear in the Google App Launcher.

You don’t have to relegate your Google Apps work to the web browser alone. With just a few tools and tweaks, you and your Linux desktop can get the most out of your Google Apps account.

Have you found other tools that help make your Google Apps experience more efficient and powerful? If so, share with your fellow Linux.com readers in the comments.

SANS Internet Storm Center, InfoCON: green: Pin-up on your Smartphone!, (Thu, Mar 26th)

This post was syndicated from: SANS Internet Storm Center, InfoCON: green and was written by: SANS Internet Storm Center, InfoCON: green. Original post: at SANS Internet Storm Center, InfoCON: green

Yeah, okay, I admit that headline is cheap click bait. Originally, it said Certificate Pinning on Smartphones. If you are more interested in pin-ups on your smartphone, I fear you’ll have to look elsewhere :).

Recently, an email provider that I use changed their Internet-facing services completely. I hadn’t seen any announcement that this would happen, and the provider likely thought that since the change was transparent to the customer, no announcement was needed. But I’m probably a tad more paranoid than their normal customers, and am watching what my home WiFi/DSL is talking to on the Internet. On that day, some device in my home started speaking IMAPS with an IP address block that I had never seen before. Before I even checked which device it was, I pulled a quick traffic capture to look at the SSL certificate exchange, where I did recognize the domain name, but not the Certificate Authority. Turns out, as part of the change, the provider had also changed their SSL certificates, even including the certificate issuer (CA).

And my mobile phone, which happened to be the device doing the talking? It hadn’t even blinked! And had happily synchronized my mail, and sent my IMAP login information across this session, all day long.

Since I travel often, and am potentially using my phone in less-than-honest locations and nations, it is quite evident that this setup is pretty exposed to a man in the middle attack (MitM). Ironically, the problem would be almost trivial to solve in software, all that is needed is to store the fingerprint of the server certificate locally on the phone, and to stop and warn the user whenever that fingerprint changes. This approach is called Certificate Pinning or Key continuity, and OWASP has a great write-up that explains how it works.
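
As a rough sketch of the idea (the mail host is a placeholder, and this is a manual illustration rather than a built-in phone feature): the pin is just a fingerprint of the server's certificate, recorded once and compared on every later connection:

# record the server certificate's SHA-256 fingerprint once (the "pin")
echo | openssl s_client -connect imap.example.com:993 2>/dev/null \
  | openssl x509 -noout -fingerprint -sha256 > pinned.txt

# on later connections, fetch the fingerprint again and compare it to the pin;
# any mismatch means the certificate (or a man in the middle) has changed
echo | openssl s_client -connect imap.example.com:993 2>/dev/null \
  | openssl x509 -noout -fingerprint -sha256 | diff - pinned.txt \
  && echo "pin OK" || echo "CERTIFICATE CHANGED"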

Problem is .. this is only marginally supported in most implementations, and what little support there is, is often limited to HTTPS (web). In other words, all those billions of smart phones which happily latch onto whatever known WiFi or mobile signal is to be had nearby, will just as happily connect to what they think is their mail server, and provide the credentials. And whoever ends up with these credentials, will have access for a while .. or when was the last time YOU changed your IMAP password?

If you wonder how this differs from any other man in the middle (MitM) attack … well, with a MitM in the web browser, the user is actively involved in the connection, and at least stands *some* chance to detect what is going on. With the mobile phone polling for email though, this happens both automatically and very frequently, so even with the phone in the pocket, and a MitM that is only active for five minutes, a breach can occur. And with the pretty much monthly news that some trusted certificate authority mistakenly issued a certificate for someone else’s service, well, my mobile phone’s childishly naïve readiness to talk to strangers definitely isn’t good news.

While certificate pinning would not completely eliminate this risk, it certainly would remove the most likely attack scenarios. Some smart phones have add-on apps that promise to help with pinning, but in my opinion, the option to pin the certificate on email send/receive is a feature that should come as a standard part of the native email clients on all Smartphones. Which we probably should be calling Dumbphones or Naïvephones, until they actually do provide this functionality.

If your smartphone has native certificate pinning support on imap/pop/smtp with any provider (not just Chrome/GMail), or you managed to configure it in a way that it does, please share in the comments below.

(c) SANS Internet Storm Center. https://isc.sans.edu Creative Commons Attribution-Noncommercial 3.0 United States License.

Krebs on Security: ‘AntiDetect’ Helps Thieves Hide Digital Fingerprints

This post was syndicated from: Krebs on Security and was written by: BrianKrebs. Original post: at Krebs on Security

As a greater number of banks in the United States shift to issuing more secure credit and debit cards with embedded chip technology, fraudsters are going to direct more of their attacks against online merchants. No surprise, then, that thieves increasingly are turning to an emerging set of software tools to help them evade fraud detection schemes employed by many e-commerce companies.

Every browser has a relatively unique “fingerprint” that is shared with Web sites. That signature is derived from dozens of qualities, including the computer’s operating system type, various plugins installed, the browser’s language setting and its time zone. Banks can leverage fingerprinting to flag transactions that occur from a browser the bank has never seen associated with a customer’s account.

Payment service providers and online stores often use browser fingerprinting to block transactions from browsers that have previously been associated with unauthorized sales (or a high volume of sales for the same or similar product in a short period of time).
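
As a toy illustration of how much of that signature is simply self-reported by the client (plain curl against a placeholder site, not the crimeware discussed below), the user agent and language header are whatever the client chooses to send, which is exactly the kind of attribute these tools randomize:

# claim to be Firefox on Windows with Canadian English; the server cannot
# verify these self-reported attributes
curl -A "Mozilla/5.0 (Windows NT 6.1; rv:35.0) Gecko/20100101 Firefox/35.0" \
     -H "Accept-Language: en-CA" https://example.com/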

In January, several media outlets wrote about a crimeware tool called FraudFox, which is marketed as a way to help crooks sidestep browser fingerprinting. However, FraudFox is merely the latest competitor to emerge in a fairly established marketplace of tools aimed at helping thieves cash out stolen cards at online merchants.

Another fraudster-friendly tool that’s been around the underground hacker forums even longer is called Antidetect. Currently in version 6.0.0.1, Antidetect allows users to very quickly and easily change components of their system to avoid browser fingerprinting, including the browser type (Safari, IE, Chrome, etc.), version, language, user agent, Adobe Flash version, number and type of other plugins, as well as operating system settings such as OS and processor type, time zone and screen resolution.

Antidetect is marketed to fraudsters involved in ripping off online stores.

The seller of this product shared the video below of someone using Antidetect along with a stolen credit card to buy three different downloadable software titles from gaming giant Origin.com. That video has been edited for brevity and to remove sensitive information; my version also includes captions to describe what’s going on throughout the video.

In it, the fraudster uses Antidetect to generate a fresh, unique browser configuration, and then uses a bundled tool that makes it simple to proxy communications through one of hundreds of compromised systems around the world. He picks a proxy in Ontario, Canada, and then changes the time zone on his virtual machine to match Ontario’s.

Then our demonstrator goes to a carding shop and buys a credit card stolen from a woman who lives in Ontario. After he checks to ensure the card is still valid, he heads over to origin.com and uses the card to buy more than $200 in downloadable games that can be easily resold for cash. When the transactions are complete, he uses Antidetect to create a new browser configuration and restarts the entire process, which takes about five minutes from browser generation and proxy configuration to selecting a new card and purchasing software with it. Click the icon in the bottom right corner of the video player for the full-screen version.

I think it’s safe to say we can expect to see more complex anti-fingerprinting tools come on the cybercriminal market as fewer banks in the United States issue chipless cards. There is also no question that card-not-present fraud will spike as more banks in the US issue chipped cards; this same increase in card-not-present fraud has occurred in virtually every country that made the chip card transition, including Australia, Canada, France and the United Kingdom. The only question is: Are online merchants ready for the coming e-commerce fraud wave?

Krebs on Security: Adobe Flash Update Plugs 11 Security Holes

This post was syndicated from: Krebs on Security and was written by: BrianKrebs. Original post: at Krebs on Security

Adobe has released an update for its Flash Player software that fixes at least 11 separate, critical security vulnerabilities in the program. If you have Flash installed, please take a moment to ensure your systems are updated.

Not sure whether your browser has Flash installed or what version it may be running? Browse to this link. The newest, patched version is 17.0.0.134 for Windows and Mac users. Adobe Flash Player installed with Google Chrome, as well as Internet Explorer on Windows 8.x, should automatically update to version 17.0.0.134.

The most recent versions of Flash should be available from the Flash home page, but beware potentially unwanted add-ons, like McAfee Security Scan. To avoid this, uncheck the pre-checked box before downloading, or grab your OS-specific Flash download from here. Windows users who browse the Web with anything other than Internet Explorer may need to apply this patch twice, once with IE and again using the alternative browser (Firefox, Opera, e.g.).

The last few Flash updates from Adobe have been in response to zero-day threats targeting previously unknown vulnerabilities in the program. But Adobe says it is not aware of any exploits in the wild for the issues addressed in this update. Adobe’s advisory on this patch is available here.

lcamtuf's blog: Another round of image bugs: PNG and JPEG XR

This post was syndicated from: lcamtuf's blog and was written by: Michal Zalewski. Original post: at lcamtuf's blog

Today’s release of MS15-024 and
MS15-029 addresses two more image-related memory disclosure vulnerabilities in Internet Explorer – this time, affecting the little-known JPEG XR format supported by this browser, plus the far more familiar PNG. Similarly to the previously discussed bugs in MSIE TIFF and JPEG parsing, and to the BMP, ICO, and GIF and JPEG DHT & SOS flaws in Firefox and Chrome, these two were found with afl-fuzz. The earlier posts have more context – today, just enjoy some pretty pics, showing subsequent renderings of the same JPEG XR image:

Proof-of-concepts are here (JXR) and here (PNG). I am happy to report that Microsoft fixed them within roughly three months of the original report.
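
For context on how such bugs are typically hunted with afl-fuzz, a run against an image parser looks roughly like this (the harness name and directories are placeholders, not the exact setup used for these reports):

# seed testcases/ with a few small valid images; afl-fuzz mutates them and
# substitutes each generated file for the @@ placeholder
afl-fuzz -i testcases/ -o findings/ -- ./image_harness @@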

The total number of bugs squashed in this category is now ten. I have just one more multi-browser image parsing bug outstanding – but it should be an interesting one. Stay tuned.

Errata Security: Some notes on DRAM (#rowhammer)

This post was syndicated from: Errata Security and was written by: Robert Graham. Original post: at Errata Security

My twitter feed is full of comments about the “rowhammer” exploit. I thought I’d write some quick notes about DRAM. The TL;DR version is this: you probably don’t need to worry about this, but we (the designers of security and of computer hardware/software) do.

There are several technologies for computer memory. The densest, and hence cheapest-per-bit, is “DRAM”. It consists of a small capacitor for each bit of memory. The thing about capacitors is that they lose their charge over time and must be refreshed. In the case of DRAM, every bit of memory must be read and then re-written every 64 milliseconds or it becomes corrupt.
These tiny capacitors are prone to corruption from other sources. One common source of corruption is cosmic rays. Another source is small amounts of radioactive elements in the materials used to construct memory chips. So, chips must be built with radioactive-free materials. The banana you eat has more radioactive material inside it than your DRAM chips.

The upshot is that capacitors are an unreliable (albeit extremely cheap) memory technology that readily gets corrupted for a lot of reasons. This “rowhammer” exploit works by disturbing the capacitors: adjacent rows are accessed hundreds of thousands of times, in the hope that some bits randomly flip. Like cosmic rays or radioactive hits, this only changes the temporary value within the cell, but has no permanent effect.
Because of this unreliability in capacitors, servers use ECC or error-correcting memory. This adds extra memory chips, so that 72 bits are read for every 64 bits of desired memory. If any single bit is flipped, the additional eight bits can be used to correct it.
The upshot of ECC memory is that it should also detect and correct rowhammer bit-flips just as well as cosmic-ray bit-flips. The problem is that ECC memory costs more. The memory itself is about 30% more expensive (although it contains only 12% more chips). An 8-gigabyte DIMM non-ECC costs $60 right now, whereas an ECC equivalent costs $80. The CPUs and motherboards that use ECC memory are also slightly more expensive. Update:  see below.
Note that the hacker doesn’t have complete control over which bits will be flipped. They are highly likely to flip the wrong bits and crash the system rather than gain exploitation. However, hackers have lots of ways of mitigating this. For example, as described in the google post, hackers can do a memory spray. They can allocate a large amount of memory, and then set it up so that no matter which bits gets flipped, the resulting value will still point to something valid.
The upshot is that while this problem seems as random as cosmic rays, one should realize that hackers can control things better than it would appear. Historically, bugs like stack and heap overflows were once considered too random for hackers to predict, but were then proven to be solidly predictable. Likewise, address space layout randomization was a technique designed to stop hackers, which they then started defeating with memory sprays.
DRAM capacitors are arranged in a rectangular grid or bank. A chip will have multiple banks (usually eight of them). Banks are rectangular rather than square, with more rows than columns. Rows are usually 1024 bits wide, so increasing capacity usually means increasing the number of rows. (The wider the row, the slower the access).
My computer has 16 gigabytes of RAM. It uses two channels, with two DIMMs per channel. Each DIMM is double-sided, which means it has two ranks. Each rank has eight chips acting together to produce 64 bits at a time (or 8 bytes). Each chip in a rank has eight banks. Each bank has 32k rows and 1k columns. Thus, an address might be broken down as follows:
  3 3 3 3 3 2 2 2 2 2 2 2 2 2 2 1 1 1 1 1 1 1 1 1 1 
  4 3 2 1 0 9 8 7 6 5 4 3 2 1 0 9 8 7 6 5 4 3 2 1 0 9 8 7 6 5 4 3 2 1 0 
 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
 |C|D|R|bank | row                           | column            |x x x|
 +-+++++-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
  | | |
  | | +-- rank
  | +---- DIMM
  +------ channel
In practice, though, this precise arrangement would be inefficient. If physical addresses were truly laid out this way, then the second memory channel would only be used for the upper half of physical memory. For better performance, you want to interleave memory access, so that on average, half of all memory accesses are on one channel, and the other half on the other channel. Likewise, for best performance, you want to interleave access among the DIMMs, ranks, banks, and rows. However, you don’t want to interleave column access: once a row is open on a particular bank, it’s most efficient to read sequentially from that row.
The upshot is that, for optimal performance, the memory controller will rearrange the physical addresses (except for the column portion). Knowledge of how the memory controller does this is needed for robust exploitation of this bug. In today’s systems, the memory controller is built into the CPU, so exploitation will differ depending upon which CPU a system has.
In today’s computers, software doesn’t use physical addresses to memory. Instead, they use virtual addresses. This virtual memory means that when a bug causes a program to crash, it doesn’t corrupt the memory of other programs, or the memory of the operating system. This is also a security feature, preventing one user account from meddling with another. (Indeed, rowhammer’s goal is to bypass this security feature). In old computers, this was also used to trick applications into thinking they had more virtual memory than physical memory in the system. Infrequently used blocks of memory would be removed from the physical memory and saved to disk, and would automatically be read back into the system as needed.
Individual addresses aren’t mapped on a one-to-one basis. Instead, the translation is done a page of 4096 addresses at a time. It’s the operating system kernel that manages these page tables to create the mappings, and it’s the CPU hardware that uses the page tables to do the translations automatically.
The upshot of this is that in order to exploit this bug, the hacker needs to know how virtual memory is translated to physical memory. This is easy on Linux, where a pseudo file /proc/PID/pagemap exists that contains this mapping. It may be harder in other systems.
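On Linux, for instance, the translation can be read straight out of /proc/PID/pagemap. A minimal sketch (each pagemap entry is 64 bits, with the page frame number in bits 0-54 and a “present” flag in bit 63; note that since kernel 4.0 the frame numbers read back as zero unless the process has CAP_SYS_ADMIN, so in practice this needs root):

import struct

PAGE_SIZE = 4096

def virt_to_phys(vaddr, pid="self"):
    # Translate a virtual address to a physical one via /proc/<pid>/pagemap.
    with open("/proc/%s/pagemap" % pid, "rb") as f:
        f.seek((vaddr // PAGE_SIZE) * 8)       # one 8-byte entry per page
        entry, = struct.unpack("<Q", f.read(8))
    if not entry & (1 << 63):                  # bit 63: page present in RAM
        raise RuntimeError("page not present")
    pfn = entry & ((1 << 55) - 1)              # bits 0-54: page frame number
    return pfn * PAGE_SIZE + (vaddr % PAGE_SIZE)

buf = bytearray(PAGE_SIZE)    # some memory we own
addr = id(buf)                # crude: in CPython, id() is the object's address in our process
print(hex(addr), "->", hex(virt_to_phys(addr)))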
Hypervisors hosting virtual machines (VMs) use yet another layer of page tables. Unlike operating systems, hypervisors don’t expose the physical mapping to the virtual machines. The upshot of this is that people running VMs in the cloud do not have to worry about other customers using this technique to hack them. It’s probably not a concern anyway, since most cloud systems use error-correcting memory, but the VM translation makes it extra hard to crack.
Thus, the only real vector of exploitation is where a hostile person can run native code on a system. There are some cheap hosting environments where this is possible, running cheap servers without ECC memory, with separate user accounts instead of using virtual machines. However, such environments tend to be so riddled with holes that there are likely easier routes to exploitation than this one.
The biggest threat at the moment appears to be to desktops/laptops, because they have neither ECC memory nor virtual machines. In particular, there seems to be a danger with Google’s Native Client (NaCl) code execution. This is a clever sandbox that allows the running of native code within the Chrome browser, so that web pages can run software as fast as native software on the system. This memory corruption defeats one level of protection in NaCl. Nobody has yet demonstrated how to use this technique in practice to fully defeat NaCl, but it’s likely somebody will discover a way eventually. In the meantime, it’s likely Google will update Chrome to defeat this, such as removing access to the cache-flush instruction that forces memory accesses to go all the way to DRAM.
The underlying issue is an analog one. It’s not a bug in the code, or in the digital design of the chips, but a problem at a deeper layer. Rowhammer is not the first to exploit issues like this. For example, I own domains that are one bit different than Google’s as a form of bit-squatting, with traffic arriving on my domain when a bit-flip causes a domain-name to change, such as when “google.com” becomes “foogle.com”. Travis Goodspeed’s packet-in-packet work shows the same sort of flaw when bits (well, symbols) get flipped in radio waves.

Update: This is really just meant as a primer, as background on the issue, not really trying to derive any conclusions. I chatted a bit with Chris Evans (@scarybeasts) from Google about some of those conclusions, so I thought I’d expand a bit on them.

Does ECC protect you? Maybe not. While it will correct single bit flips most of the time, it won’t protect when multiple bits flip at once. The hacker may be able to achieve this with enough tries. Remember: the hacker’s code can keep retrying this until it succeeds, even if that takes hours. Since ECC is correcting the memory, nothing will crash. You won’t see anything wrong except high CPU usage — until the hacker eventually gets multiple bit-flips, and either the system crashes or the hacker gets successful exploitation. I forget the exact details. I think Intel CPUs correct single bit errors, crash on two bit errors, and that three bit errors might go through. Also, this would depend upon the percentage of time that the hacker can successfully get two/three bit flips instead of a single one.

By itself, this bug may not endanger you. However, it’s much more dangerous when used in conjunction with other bugs. Browsers have “sand boxes” that keep hackers contained even if the first layer of defense breaks. This may provide a way of escaping the sandbox (a sandbox escape) when used in conjunction with other exploits.

Cache eviction is pretty predictable and may be exploitable from JavaScript even without access to the cache-flush instruction. Intel has a 32-kilobyte L1 cache for data. This is divided into 64 sets, each holding 8 items in a set, with each item being a 64 byte cache-line (64 times 8 times 64 equals 32k). Using JavaScript, one could easily create an array of bytes just long enough to force accessing 9 items in a set — thus forcing the least-recently-used chunk to be flushed out to memory. Well, it gets tricky, because the L1 cache lines get bumped down to L2 cache, which get bumped down to L3, so code would have to navigate the entire cache hierarchy, which changes per processor. But it’s probably doable, meaning that a JavaScript program may be able to cause havoc with the system.
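To make the set arithmetic concrete: with 64-byte lines and 64 sets, bits 6-11 of an address select the L1 set, so addresses 4096 bytes apart collide in the same set, and touching nine such addresses is enough to push one of them out of an 8-way cache. A sketch (in Python rather than JavaScript, and ignoring the L2/L3 hierarchy mentioned above):

LINE   = 64               # bytes per cache line
SETS   = 64               # sets in the 32-kilobyte, 8-way L1 data cache
WAYS   = 8
STRIDE = LINE * SETS      # 4096 bytes: addresses this far apart land in the same L1 set

def l1_set(addr):
    # Bits 6-11 of the address pick the set; bits 0-5 are the offset within the line.
    return (addr // LINE) % SETS

base = 0x100000
eviction_set = [base + i * STRIDE for i in range(WAYS + 1)]   # 9 addresses, one more than the ways
print([hex(a) for a in eviction_set])
print({l1_set(a) for a in eviction_set})   # a single set number: touching all 9 addresses
                                           # forces the least-recently-used line out of L1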

I misread the paper. It says that Chrome’s NaCl already removed the cache-flush operation a while ago, so Chrome is “safe”. However, I wonder whether that cache eviction strategy might still work.

SANS Internet Storm Center, InfoCON: green: Anybody Doing Anything About ANY Queries?, (Thu, Mar 5th)

This post was syndicated from: SANS Internet Storm Center, InfoCON: green and was written by: SANS Internet Storm Center, InfoCON: green. Original post: at SANS Internet Storm Center, InfoCON: green

(in an earlier version of this story, I mixed up Firefox with Chrome. AFAIK, it was Firefox, not Chrome, that added DNS ANY queries recently)

Recently, Firefox caused some issues with its use of ANY DNS queries [1]. ANY queries are different. For all other record types, we have specific entries in our zone file. Not so for ANY queries. RFC 1035 doesn’t even assign them a proper name [2]. It just assigns the query type (QTYPE) value 255 to a request for all records. The name ANY is just what DNS tools typically call this query type.

An ANY query works a bit differently depending on whether it is received by a recursive or an authoritative name server. An authoritative name server will return all records that match, while a recursive name server will return all cached values. For example, try this:

1. Send a DNS ANY query for evilexample.com to the authoritative name server

dig evilexample.com ANY @ns1.dshield.org

I am getting 44 records back. Your number may be different.

2. Next, request a specific record using your normal recursive DNS server

dig evilexample.com

I am getting one answer and two authority records (YMMV)

3. Finally, send an ANY query for evilexample.com to your recursive name server

dig ANY evilexample.com

You should get essentially the same information as you got in step 2. The recursive name server will only return data that is already cached. UNLESS there is no cached data, in which case the recursive name server will forward the query, and you will get the complete result.

So in short, if there is cached data, ANY queries are not that terrible, but if there is no cached data, then you can get a huge amplification. The evilexample.com result is close to 10kBytes in size. (Btw: never mind the bad DNSSEC configuration.. it is called evilexample for a reason).
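To get a feel for the amplification factor, here is a sketch that hand-builds a DNS ANY query (including an EDNS0 OPT record so the server is willing to send a large UDP answer) and compares the sizes of request and response. The name and the server address are placeholders; point it at a zone and a server you control. If the reply comes back truncated (TC bit set), the full answer is even larger and would have to be fetched over TCP.

import socket, struct

def any_query(name):
    # DNS header: ID, flags (recursion desired), QDCOUNT=1, ARCOUNT=1 (for the OPT record)
    header = struct.pack(">HHHHHH", 0x1234, 0x0100, 1, 0, 0, 1)
    qname = b"".join(bytes([len(p)]) + p.encode() for p in name.split(".")) + b"\x00"
    question = qname + struct.pack(">HH", 255, 1)            # QTYPE 255 (ANY), QCLASS IN
    opt = b"\x00" + struct.pack(">HHIH", 41, 4096, 0, 0)     # EDNS0: accept up to 4096-byte UDP replies
    return header + question + opt

query = any_query("evilexample.com")                 # placeholder zone
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.settimeout(5)
sock.sendto(query, ("192.0.2.1", 53))                # placeholder name server address
reply, _ = sock.recvfrom(65535)
print("%d-byte query, %d-byte reply, %.1fx amplification"
      % (len(query), len(reply), len(reply) / len(query)))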

So how common are these ANY queries? Here are the query type breakdowns for my two name servers.

[Chart: query types seen by the authoritative name server]

[Chart: query types seen by the recursive name server]

As expected, the recursive name server, which is only forwarding requests for a few internal hosts, is a lot cleaner. The authoritative name server, which is exposed to random queries from external hosts, is a lot dirtier, and with many networks still preferring the legacy IPv4 protocol, A queries outnumber AAAA queries. ANY queries make up less than 1%, and they follow a typical pattern. For example:

Time Client IP
16:23:37 212.113.164.135
16:23:38 212.113.164.133
16:23:38 212.113.164.131
16:23:39 212.113.164.128
16:23:39 212.113.164.131
16:23:40 212.113.164.132
16:23:41 212.113.188.75
16:23:43 212.113.164.132
16:23:43 212.113.164.133
16:43:31 212.113.164.132
16:43:33 212.113.164.135
16:43:33 212.113.188.75
16:43:33 212.113.164.129
16:43:35 212.113.164.131
16:43:35 212.113.164.131

Short surges of queries arriving for the same ANY record from the same /24. This is not normal. All these hosts should probably use the same recursive name server, and we should only see one single ANY request that is then cached, if we see it at all. This is typical reflective DDoS traffic. In this case, 212.113.164.0/24 is under attack, and the attacker attempts to use my name server as an amplifier.
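A quick way to spot this pattern in your own logs: assuming a query log with one whitespace-separated line per query of the (made-up) form “timestamp client-ip qtype qname” (adjust the parsing to whatever your name server actually writes), this sketch counts ANY queries per source /24:

import sys
from collections import Counter

per_net = Counter()
for line in sys.stdin:
    fields = line.split()
    if len(fields) < 4 or fields[2].upper() != "ANY":
        continue
    client = fields[1]
    net = ".".join(client.split(".")[:3]) + ".0/24"   # collapse the source to its /24
    per_net[net] += 1

for net, count in per_net.most_common(10):
    print("%6d ANY queries from %s" % (count, net))

Run it as “python3 any_report.py < querylog”; a handful of /24s accounting for nearly all of the ANY traffic is the reflection pattern shown above.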

Is it safe to block all ANY requests at your authoritative name server? IMHO: yes. But you probably first want to run a simple check like the one above to see who is sending you ANY requests and why. Mozilla indicated that they will remove the ANY queries from future Firefox versions, so this will be a minor temporary inconvenience.

[1]https://bugzilla.mozilla.org/show_bug.cgi?id=1093983
[2]https://www.ietf.org/rfc/rfc1035.txt


Johannes B. Ullrich, Ph.D.
STI|Twitter|LinkedIn

(c) SANS Internet Storm Center. https://isc.sans.edu Creative Commons Attribution-Noncommercial 3.0 United States License.

LWN.net: EFF: Lenovo is breaking HTTPS security on its recent laptops

This post was syndicated from: LWN.net and was written by: corbet. Original post: at LWN.net

Here is a statement from the Electronic Frontier Foundation on the revelation that Lenovo has been shipping insecure man-in-the-middle malware on its laptops. “Lenovo has not just injected ads in a wildly inappropriate manner, but engineered a massive security catastrophe for its users. The use of a single certificate for all of the MITM attacks means that all HTTPS security for at least Internet Explorer, Chrome, and Safari for Windows, on all of these Lenovo laptops, is now broken.” For additional amusement, see Lenovo’s statement on the issue.

There are a lot of Lenovo users in LWN’s audience. Presumably most of them have long since done away with the original software, but those who might have kept it around would be well advised to look into the issue; this site can evidently indicate whether a machine is vulnerable or not.

Errata Security: Some notes on SuperFish

This post was syndicated from: Errata Security and was written by: Robert Graham. Original post: at Errata Security

What’s the big deal?

Lenovo, a huge maker of laptops, bundles software on laptops for the consumer market (it doesn’t for business laptops). Much of this software is from vendors who pay Lenovo to be included. Such software is usually limited versions, hoping users will pay to upgrade. Other software is ad supported. Some software, such as the notorious “Ask.com Toolbar”, hijacks the browser to display advertisements.

Such software is usually bad, especially the ad-supported software, but the SuperFish software is particularly bad. It’s designed to intercept all encrypted connections, things it shouldn’t be able to see. It does this in a poor way that leaves the system open to hackers or NSA-style spies. For example, it can spy on your private bank connections, as shown in this picture.

Marc Rogers has a post where he points out that the software hijacks your connections, monitors them, collects personal information, injects advertising into legitimate pages, and causes pop-up advertisements.


Who discovered this mess?

People had noticed the malware before, but it’s Chris Palmer (@fugueish) who noticed the implications. He’s a security engineer for Google who had just bought a new Lenovo laptop, and noticed how it was Man-in-the-Middling his Bank of America connection. He spread the word to the rest of the security community, who immediately recognized how bad this is.

What’s the technical detail?

It does two things. The first is that SuperFish installs a transparent-proxy (MitM) service on the computer intercepting browser connections. It appears to be based on Komodia’s “SSL Digestor”, described in detail here.

But such interception still cannot decrypt SSL. Therefore, SuperFish installs its own root CA certificate in the Windows system. It then generates certificates on the fly for each attempted SSL connection. Thus, on a Lenovo computer, SuperFish appears to be the root CA of all the websites you visit. This allows SuperFish to intercept an encrypted SSL connection, decrypt it, then re-encrypt it again.

Only the traffic from the browser to the SuperFish internal proxy uses the forged SuperFish certificate. The traffic on the Internet still uses the normal website’s certificate, so we can’t tell whether a machine is infected with SuperFish by looking at this traffic. However, SuperFish makes queries to additional webpages to download JavaScript, which may be detectable.

SuperFish’s advertising works by injecting JavaScript code into web-pages. This is known to cause a lot of problems on websites.
It’s the same root CA private-key for every computer. This means that hackers at your local cafe WiFi hotspot, or the NSA eavesdropping on the Internet, can use that private-key to likewise intercept all SSL connections from SuperFish users.

SuperFish is “adware” or “malware”

The company claims it’s providing a useful service, helping users do price comparisons. This is false. It’s really adware. They don’t even offer the software for download from their own website. It’s hard Googling for the software if you want a copy because your search results will be filled with help on removing it. The majority of companies that track adware label this as adware.

Their business comes from earning money from those ads, and it pays companies (like Lenovo) to bundle the software against a user’s will. They rely upon the fact that unsophisticated users don’t know how to get rid of it, and will therefore endure the ads.

Lenovo’s response

Lenovo’s response is here. They have stopped including the software on new systems.

However, they still defend the concept of the software, claiming it’s helpful and wanted by users, when it’s clear to everyone else that most users do not want this software.

It’s been going on since at least June 2014

The earliest forum posting is from June of 2014. However, other people report that it’s not installed on their mid-2013 Lenovo laptops.

Here is a post from September 2014.

It’s legal

According to Lenovo, users are presented with a license agreement to accept the first time they load the browser. Thus, they have accepted this software, and it’s not a “hacker virus”. 
But this has long been the dodge of most adware. That users don’t know what they agree to has long been known to be a problem. While it may be legal, just because users agreed to it doesn’t mean it isn’t bad.

Firefox is affected differently

Internet Explorer and Chrome use the Windows default certificates. Firefox has its own separate root certificates. Therefore, the tool apparently updates the Firefox certificate file separately.

Update: Following reports on the Internet, I said Firefox wasn’t affected. This tweet corrected me.

Uninstalling SuperFish leaves behind the root certificate

The problem persists even if the user uninstalls the software. They have to go into the Windows system and remove the certificate manually.

Other adware software does similar things

This post lists other software that does similar things.

How to uninstall the software?

First, run the “uninstall.exe” program that comes with the software. One way is from the installed program list on Windows. Another way is to go to “C:\Program Files (x86)\Lenovo\VisualDiscovery\uninstall.exe”.
This removes the bad software, but the certificate is still left behind.
For Internet Explorer and Chrome, click on the “Start” menu and select “Run”. Run the program “certmgr.msc”. Select the “Trusted Root Certificate Authorities”, and scroll down to “Superfish, Inc.” and delete it (right click and select “Delete”, or select and hit the delete key).
For Firefox, click on the main menu “Options”, “Advanced”, “Certificates”. The Certificate Manager pops up. Scroll down, select “Superfish, Inc.”, then “Delete or Distrust”. This is shown in the image below.

How can you tell if you’re vulnerable? or if the removal worked?

Some sites test for you, like https://filippo.io/Badfish.
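A rough local check along the same lines is also possible. The sketch below (with the caveats that it only works if the interceptor grabs non-browser TLS traffic too, and that Python must be using the operating system’s certificate store, as recent versions do by default on Windows) opens a TLS connection and prints who issued the certificate that was presented. On a clean machine you see the site’s real CA; an issuer of “Superfish, Inc.” would mean something on the box is re-signing your traffic.

import socket, ssl

def leaf_issuer(host, port=443):
    # Fetch the certificate presented for `host` and return its issuer fields.
    ctx = ssl.create_default_context()
    with socket.create_connection((host, port), timeout=10) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            cert = tls.getpeercert()
    return {name: value for rdn in cert["issuer"] for (name, value) in rdn}

issuer = leaf_issuer("www.bankofamerica.com")
print(issuer.get("organizationName"))
# "Superfish, Inc." here would mean the connection is being intercepted locally.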

Do all machines have the same root certificate?

The EFF SSL observatory suggests yes, all the infected machines have the same root certificate. This means they can all be attacked. If they all had different certificates, then they wouldn’t be attackable.

Linux How-Tos and Linux Tutorials: Spinning Up a Server with the OpenStack API

This post was syndicated from: Linux How-Tos and Linux Tutorials and was written by: Jeff Cogswell. Original post: at Linux How-Tos and Linux Tutorials

In the previous article, we looked at the OpenStack API and how to get started using it. The idea is that you make HTTP calls to a server, which performs the requested command and gives back some information. These commands can simply gather information, such as giving you a list of your running servers, or do something more complex, such as allocating servers. OpenStack includes an entire API for such operations, including managing storage, managing databases, allocating servers, de-allocating servers, creating images (which are used in the creation of servers), networking (such as allocating private networking), and more. In future articles we can look at tasks such as allocating private networking.

To make use of the API, you can take several routes. The simplest is to not make the calls through code, but rather use a console application. One such console application is the Horizon dashboard. Horizon is a fully-featured dashboard that runs in the browser, and allows you to perform OpenStack tasks. However, the dashboard is interactive in that you click and choose what you want to do. Some tasks you need to automate, in which case you’ll want to write scripts and programs. That’s where the APIs and SDKs come in. So let’s continue our discussion of the API, and try out a few more tasks.

Places to Test the API

Like with the previous article, I’m using Rackspace for these examples. I’m in no way endorsing Rackspace, but simply using them because their OpenStack API is a nearly complete implementation of OpenStack. And if you want to practice with Rackspace’s OpenStack API, you can do so at very little cost. But there are other options as well.

If you head over to the OpenStack website, you’ll find a page for getting started with OpenStack which includes a section on premium clouds (such as Rackspace) as well as a nice local environment you can download and install called DevStack. (If you’re really interested in OpenStack, I would recommend DevStack so you can get some experience actually installing everything.) There’s also a pretty cool website called TryStack that you can check out. But for now I’ll keep it simple by using Rackspace.

Find the images

Let’s spin up a server with just an API call. To accomplish the API call, I’m going to use the cURL command-line tool. For this you’ll need to get an authentication token from Rackspace, as described in the previous article. Incidentally, here’s a quick tip that I didn’t provide last time: When you request the token, you get back a rather sizable JSON object that is not formatted at all. There are different ways to get this thing formatted into something we humans can read; I found a Chrome plugin that I like called JavaScript Unpacker and Beautifier, which you can find at http://jsbeautifier.org/. Note also that you might see backslashes before the forward slashes, because the JSON strings are escaped. You’ll need to remove the backslashes from the actual API calls.

beautifier

In order to spin up a server, we also need to know what images are available. Images have different meanings depending on what cloud is using them. Here they’re essentially ISO images containing, for example, an Ubuntu installer. Here’s how we can list the publicly-available images using cURL:

curl -s https://iad.images.api.rackspacecloud.com/v2/images \
    -H 'X-Auth-Token: abcdef123456'

You would replace the abcdef123456 with your token. Note also that because we’re requesting information, we use a GET method, instead of a post. (GET is the default for cURL, so we don’t have to specify it.) When I run this command, I get back another big JSON object. This one lists 25 images available. (But there’s actually more, as you’ll see shortly.)

Now here’s another tip for dealing with these JSON objects: Go into the Chrome browser, and open up the dev tools by pressing F12. Then in the console, type

x =

and paste in the JSON text you got back from the cURL call. This will store the JSON object into a variable called x. Then you can explore the members of the object by expanding the array items and the object members, as shown in the following figure.

chrome json
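If you would rather skip the copy-and-paste into the console entirely, the same listing is easy to script. Here is a sketch using Python and the third-party requests library, with the endpoint and token being the same placeholders used in the cURL examples:

import requests

ENDPOINT = "https://iad.images.api.rackspacecloud.com"
TOKEN = "abcdef123456"            # your auth token

resp = requests.get(ENDPOINT + "/v2/images", headers={"X-Auth-Token": TOKEN})
resp.raise_for_status()
for image in resp.json()["images"]:
    print(image["id"], image["name"])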

Notice at the very end of the JSON object is a member called next. That’s because we’ve reached the limit of how many Rackspace will give us for the image lists. Rackspace pages the data, so let’s request another page of data. To do so, we start with the URL given by the next field:

"next": "/v2/images?marker=abc123-f20f-454d-9f7d-abcdef"

This is the URL we use for the cURL command, prepended with the domain name and https. And then we get 25 more, as well as yet another next list. Looking through the 50 so far, I’m finding different images such as a Debian Wheezy. I don’t really want to dig through all of these looking for the one I want, so let’s try another cURL call, but this time we’ll include some parameters.
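As an aside, if you do want the complete list rather than copying each next URL into a new cURL command by hand, a small loop can follow those links automatically. A sketch, building on the snippet above:

def all_images(path="/v2/images"):
    # Keep following the "next" links until the service stops returning one.
    images = []
    while path:
        resp = requests.get(ENDPOINT + path, headers={"X-Auth-Token": TOKEN})
        resp.raise_for_status()
        page = resp.json()
        images.extend(page["images"])
        path = page.get("next")          # a relative URL like /v2/images?marker=...
    return images

print(len(all_images()), "images total")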

If you go to this page on Rackspace’s documentation, we can see what the parameters are. There are two places we can find the parameters: we can find those that OpenStack in general supports by going to the OpenStack documentation, but providers may include additional operations beyond OpenStack. So I’ll look at Rackspace’s own documentation.

If you look at the JSON objects we got back, there are even more members than are listed in the documentation. One such parameter is os_distro. Let’s try searching on that. For these parameters, we tack them onto the URL as query parameters. Let’s find the Ubuntu distros:

curl -s https://iad.images.api.rackspacecloud.com/v2/images?os_distro=ubuntu \
    -H 'X-Auth-Token: abcdef123456'

It worked. I got back a big JSON object. Pasting it into Chrome’s dev tools, I can see I got back 10 objects. Now let’s suppose we’re working on a project that requires a 12.04 version of Ubuntu. It turns out Rackspace also has that information in the objects, so we can search on that as well. I’m going to add another parameter to my URL, which requires an ampersand. I don’t want the bash shell to interpret the ampersand, so I’ll put single quotes around my URL. Here goes:

curl -s 'https://iad.images.api.rackspacecloud.com/v2/images?os_distro=ubuntu&org.openstack__1__os_version=12.04' \
    -H 'X-Auth-Token: abcdef123456'

You can see how I included both the os_distro and a parameter for the version. Now I just got back three images, and I can pick one. Again I’ll pull these into Chrome to see what’s what. Of course, this is still totally interactive, which means we’ll need to figure out a way to grind through these through code instead of copying them into Chrome. We’ll take that up in a future article. For now, I’m going to pick the one with the name “Ubuntu 12.04 LTS (Precise Pangolin) (PV)”.

Choose a Flavor

Before we can spin up a server, we need to choose a type of server, which is called a flavor. Just as we listed the available images, we can list the available flavors:

curl -s https://iad.servers.api.rackspacecloud.com/v2/12345/flavors \
    -H 'X-Auth-Token: abcdef123456'

You would replace 12345 with your tenant ID and as usual the abcdef123456 with your authentication token. Notice that the second word in the URL is “servers” because flavors fall under the servers section of the API. When I ran this, I got back a JSON object with 38 different delicious flavors. For this test server, I’ll pick a small one. Here’s the second in the list of flavors:

{
    "id": "2",
    "links": [{
        "href": "https://iad.servers.api.rackspacecloud.com/v2/12345/flavors/2",
        "rel": "self"
    },
    {
        "href": "https://iad.servers.api.rackspacecloud.com/12345/flavors/2",
        "rel": "bookmark"
    }],
    "name": "512MB Standard Instance"
}

Now a quick point about this response: notice there are fields such as href and rel. This is in line with one common approach to a RESTful interface, whereby you get back an array of links that each include an href (the address) and a rel (a description, or, more precisely, a relationship).

Using the first href, I can get back detailed information about this flavor:

curl -s https://iad.servers.api.rackspacecloud.com/v2/12345/flavors/2 \
    -H 'X-Auth-Token: abcdef123456'

This gives me back the following details:

{
    "flavor": {
        "OS-FLV-WITH-EXT-SPECS:extra_specs": {
            "policy_class": "standard_flavor",
            "class": "standard1",
            "disk_io_index": "2",
            "number_of_data_disks": "0"
        },
        "name": "512MB Standard Instance",
        "links": [{
            "href": "https://iad.servers.api.rackspacecloud.com/v2/12345/flavors/2",
            "rel": "self"
        },
        {
            "href": "https://iad.servers.api.rackspacecloud.com/12345/flavors/2",
            "rel": "bookmark"
        }],
        "ram": 512,
        "vcpus": 1,
        "swap": 512,
        "rxtx_factor": 80.0,
        "OS-FLV-EXT-DATA:ephemeral": 0,
        "disk": 20,
        "id": "2"
    }
}

That flavor should work for our test. Now finally, before we spin up the server, I need to make one more point. You might be noticing that while it would be nice to be able to automate all this through scripts, there’s also a certain amount of interactivity here that could lend itself to a simple application. You might, for example, build a small app that requests flavors and available Ubuntu images and provides a list of choices for a user (or even yourself). You could make the same API calls we did here, provide the user the option to choose the flavor and image, and then finally spin up the server. There are many possibilities here. But note that by nature of the RESTful interface, we start with an API call that returns to us a set of data as well as additional URLs for other API calls. We then use those URLs for future calls.

Spin up the server

Now let’s finally spin up the server. You need the id of the image and the id of the flavor. Both of these are included in the JSON objects, both with the member name id. You also have to provide a name for your server: 

  • The id for the image is “71893ec7-b625-44a5-b333-ca19885b941d”.
  • The id for the flavor is 2.
  • The name we’ll go with is Ubuntu-1.

(Please don’t hardcode the image IDs, though, if you’re writing an app. Cloud hosts are continually updating their images and replacing old images, meaning this ID might not be valid tomorrow. That’s why you’ll want to traverse down through the results you get from the starting API calls.)
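One way to avoid hardcoding is to look the image up by its attributes at run time. A sketch that reuses the filtered query from earlier (and the ENDPOINT and TOKEN placeholders from the snippet above) and pulls the ID out of the response:

def find_image_id(os_distro, version):
    # Same filtered image query as above, done in code so the ID is never hardcoded.
    resp = requests.get(ENDPOINT + "/v2/images",
                        params={"os_distro": os_distro,
                                "org.openstack__1__os_version": version},
                        headers={"X-Auth-Token": TOKEN})
    resp.raise_for_status()
    images = resp.json()["images"]
    if not images:
        raise LookupError("no matching image")
    for image in images:
        if "(PV)" in image["name"]:       # prefer the PV image picked above
            return image["id"]
    return images[0]["id"]

image_id = find_image_id("ubuntu", "12.04")
print(image_id)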

Creating a server requires a POST method. We use the same URL as listing servers, but the POST method tells Rackspace to create a server instead of listing it. For our ids and name, we construct a JSON object that we pass in through the -d parameter. Make sure you conform to true JSON, with member names enclosed in double-quotes. Here we go:

curl -X POST -s https://iad.servers.api.rackspacecloud.com/v2/12345/servers \
    -d '{"server": { "name": "Ubuntu-1", "imageRef":"71893ec7-b625-44a5-b333-ca19885b941d", "flavorRef":"2" }}' \
    -H 'X-Auth-Token: abcdef123456' \
    -H "Content-Type: application/json"

If you type this incorrectly, you’ll get an error message describing what went wrong (such as malformed request body, which can happen if your JSON isn’t coded right). But if done correctly, you’ll get back a JSON object with information about your server that’s being built:

{
    "server": {
        "OS-DCF:diskConfig": "AUTO",
        "id": "abcdef-02d0-41db-bb9f-abcdef",
        "links": [{
            "href": "https://iad.servers.api.rackspacecloud.com/v2/12345/servers/abcdef-02d0-41db-bb9f-abcdef",
            "rel": "self"
        },
        {
            "href": "https://iad.servers.api.rackspacecloud.com/12345/servers/f02de705-02d0-41db-bb9f-75a5eb5ebaf4",
            "rel": "bookmark"
        }],
        "adminPass": "abcdefXuS7KD34a"
    }
}

Pay close attention to the adminPass field. You’ll need that for logging into your server!
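For completeness, here is the same POST done from Python with requests; a sketch, with the tenant ID, token, and IDs being the same placeholders as before, and the adminPass captured from the response since it is only returned this once:

SERVERS = "https://iad.servers.api.rackspacecloud.com/v2/12345"   # 12345 = your tenant ID

payload = {"server": {"name": "Ubuntu-1",
                      "imageRef": "71893ec7-b625-44a5-b333-ca19885b941d",
                      "flavorRef": "2"}}
resp = requests.post(SERVERS + "/servers", json=payload,      # json= sets the Content-Type header
                     headers={"X-Auth-Token": TOKEN})
resp.raise_for_status()
server = resp.json()["server"]
print("server id: ", server["id"])
print("admin pass:", server["adminPass"])     # save it; it is not retrievable later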

Then you can use the first href to get information about the server:

curl -s https://iad.servers.api.rackspacecloud.com/v2/12345/servers/abcdef-02d0-41db-bb9f-abcdef \
    -H 'X-Auth-Token: abcdef123456'

Which tells me a lot of detail about the server, including its IP addresses. Here’s the first part of the JSON object:

{
    "server": {
        "status": "ACTIVE",
        "updated": "2015-02-09T19:35:41Z",
        "hostId": "abcdef4157ab9f2fca7d5ae77720b952565c9bb45023f0a44abcdef",
        "addresses": {
            "public": [{
                "version": 6,
                "addr": "2001:4802:7800:2:be76:4eff:fe20:4fba"
            },
            {
                "version": 4,
                "addr": "162.209.107.187"
            }],
            "private": [{
                "version": 4,
                "addr": "10.176.66.51"
            }]
        },
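The status here reads ACTIVE, but right after creation it reads BUILD instead, so a script should poll until the server is ready before trying to connect. A sketch, again reusing the placeholders above:

import time

SERVER_ID = "abcdef-02d0-41db-bb9f-abcdef"       # the id returned by the create call

def wait_until_active(server_id, interval=10, attempts=60):
    # Poll the server until its status flips from BUILD to ACTIVE (or we give up).
    for _ in range(attempts):
        resp = requests.get(SERVERS + "/servers/" + server_id,
                            headers={"X-Auth-Token": TOKEN})
        resp.raise_for_status()
        server = resp.json()["server"]
        if server["status"] == "ACTIVE":
            return server["addresses"]["public"]
        time.sleep(interval)
    raise TimeoutError("server never became ACTIVE")

print(wait_until_active(SERVER_ID))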

I can log into this using ssh, as shown in this screenshot:

server ssh

Now don’t forget to delete the server. We can do that through the Rackspace web portal, but why not use the API since we’re here? Here’s the cURL:

curl -X DELETE \
    -s https://iad.servers.api.rackspacecloud.com/v2/12345/servers/abcdef-02d0-41db-bb9f-abcdef \
    -H 'X-Auth-Token: abcdef123456'

And we’re done!

Conclusion

Spinning up a server is easy, if you follow the process of first obtaining information about images and flavors, and then using the ids from the image and flavor you choose. Make sure to use the URLs that you get back inside the JSON responses, as this will help your app conform to the rules of a RESTful interface. Next up, we’ll try using an SDK in a couple of languages.

Linux How-Tos and Linux Tutorials: How to Edit Images on Chromebook Like a Pro

This post was syndicated from: Linux How-Tos and Linux Tutorials and was written by: Swapnil Bhartiya. Original post: at Linux How-Tos and Linux Tutorials

Chromebooks are becoming extremely popular among users – both individuals and enterprises. These inexpensive devices are capable of doing almost everything that one can do on a full-fledged desktop PC.

I know it because my wife has become a full-time Chromebook user (she used to be on Mac OS X and then moved to Ubuntu, before switching to Chromebook full time). I myself am a Chromebook user and use it often – mostly to write my stories, which also requires serious image editing. If you are a Chromebook user, this article will help you become a pro at image editing on your Chromebook.

Chrome’s native image editing tool

There is no dearth of image editing applications for Chrome OS powered devices. However, like any other image editing software, these apps are a bit resource intensive so if you are looking for a decent experience you should be running the latest Chromebooks with a faster processor and as much RAM as possible.

It’s a lesser known fact that Chrome OS comes with a built-in image editor which allows one to do basic image editing. It’s a perfect tool for me when I adjust images for my articles.

chrome os built in

Open ‘Files’ and then click on the image that you want to edit. Once the image is open, you will notice an edit icon; click on it to enter the edit mode. In the edit mode you will see several editing actions. There is a checkbox which says ‘overwrite original’; if you don’t want to make any changes to the original image (and that’s highly recommended), uncheck the box and the editor will work on a copy, leaving the original image intact.

The first option is autofix, where the editor will try to adjust the image; I never do that. I trust my artistic skills more than I trust some algorithm. Other options are: cropping, brightness/contrast control, rotate and undo/redo. Here are some useful shortcuts for each action:

a – autofix
c – crop
b – brightness and contrast
r/l – rotate the image right or left
e – toggle between view and edit mode
Ctrl+z – undo changes
Ctrl+y – redo changes

If you make a mistake you can always undo the changes. The toolbar has some nifty preview options which allow you to see all images in a ‘Mosaic’ layout or as a slideshow. If you want to rename any of the images you are currently working on, click on the image name and give it a new name.

3rd party image editing tools

If the built-in image editor doesn’t suffice, there are many third-party applications to do the job.

1. Pixlr

One tool that comes closest to offering a Photoshop-like experience on a Chromebook is Autodesk’s Pixlr. However, they recently became self-destructive and embedded a wide skyscraper advertisement which leaves very little space for image editing.

chrome os pixlr

Autodesk may offer a paid, ad-free version of Pixlr; I won’t mind paying $5 for a decent image editing tool. However, don’t get too upset about the ad, it doesn’t show up in the full-screen mode. So if you want to focus on your work without being distracted by the ad, go to ‘View > Fullscreen’.

It comes with a plethora of tools and options; the left panel resembles the one from Photoshop or GIMP, including crop, move, marquee, lasso, wand tool, pencil, pen, quick select, brush, etc. That’s not all: the app comes with quite a good selection of filters and adjustment options.

chrome os pixlr

It supports layers, so you can stack layer upon layer to work with several images. If you have used GIMP or Photoshop, you won’t struggle to use the app. In most cases you may even be tempted to use this one over GIMP. This is by far the best image editing tool on Chrome OS devices.

2. Sumo Paint

Another neat image editing app is Sumo Paint. This app, unlike Pixlr, doesn’t have any ads, so you get more real estate to work with. The app has a similar UI, and also features several filters and adjustment layers.

chrome os sumo

Sumo and Pixlr are neck and neck when it comes to features; I found Sumo to be more useful than Pixlr, but that’s my personal preference. Since both apps are available free of cost, install both and see which one works best for you.

3. Polarr

As a photographer I depend heavily on Lightroom and Photoshop for my professional work, whereas Darktable and GIMP are deployed for personal projects. The good news for Chromebook users is that there is a nifty app, called Polarr, which can be a great replacement for Lightroom (don’t get me wrong, Lightroom and Photoshop are the best-of-breed image editing software and if you are a professional user, you know there is nothing that beats them).

The best part of Polarr is that it also supports RAW image formats. That’s the format any serious photographer would use – it keeps all the image data from the sensor because the camera does minimal processing of the data. RAW formats like NEF can be opened directly in Polarr – no conversion needed! They are also working on version 2, which has an improved UI (you may mistake it for Lightroom).

Polarr comes with several filters/presets to give your images the desired treatment. The left panel features some of the image ‘effect’ presets – want that ‘Instagram’ look?

chrome os polarr instagram

The right panel offers the ‘most important’ tools one needs to fine-tune an image. Since the camera does minimal processing of RAW images, you can increase or decrease the exposure and adjust the temperature and tint. Balance the lighting using Light & Shadow; adjust blacks and whites, add clarity and vibrance. That’s not all, there is much more.

chrome os polarr

chrome os polarr 2

Polarr is by far the best photograph management tool; you can import your entire folder to it and work on your images. Install the app from the Web Store, and try it yourself. If you have ever done any serious work on Lightroom, you will be amazed with this application.

Conclusion

In a nutshell, these apps allow me to work on my images without having to go to my Linux or Mac OS X boxes. I would not underrate these apps by calling them ‘basic’ image editing tools; they are way too advanced for that category. From the built-in image editing tool to Polarr, Chromebooks cover the entire spectrum of image editing – with Pixlr and Sumo Paint somewhere in the middle.

There is only one caveat though. These _are_ image editing apps and thus are quite resource hungry. If you want to do some serious image editing work and are planning to buy a Chromebook, make sure to get the most powerful processor and as much RAM as you can get. Try these apps and let us know which one works best for you.

Krebs on Security: Yet Another Flash Patch Fixes Zero-Day Flaw

This post was syndicated from: Krebs on Security and was written by: BrianKrebs. Original post: at Krebs on Security

For the third time in two weeks, Adobe has issued an emergency security update for its Flash Player software to fix a dangerous zero-day vulnerability that hackers already are exploiting to launch drive-by download attacks.

The newest update, version 16.0.0.305, addresses a critical security bug (CVE-2015-0313) present in the version of Flash that Adobe released on Jan. 27 (v. 16.0.0.296). Adobe said it is aware of reports that this vulnerability is being actively exploited in the wild via drive-by-download attacks against systems running Internet Explorer and Firefox on Windows 8.1 and below.

Adobe’s advisory credits both Trend Micro and Microsoft with reporting this bug. Trend Micro published a blog post three days ago warning that the flaw was being used in malvertising attacks – booby-trapped ads uploaded by criminals to online ad networks. Trend also published a more in-depth post examining this flaw’s use in the Hanjuan Exploit Kit, a crimeware package made to be stitched into hacked Web sites and foist malware on visitors via browser plug-in flaws like this one.

To see which version of Flash you have installed, check this link. Windows users who browse the Web with anything other than Internet Explorer may need to apply this patch twice, once with IE and again using the alternative browser (e.g., Firefox or Opera).

The most recent versions of Flash should be available from the Flash home page, but beware potentially unwanted add-ons, like McAfee Security Scan. To avoid this, uncheck the pre-checked box before downloading, or grab your OS-specific Flash download from here.

IE10/IE11 on Windows 8.x and Chrome should auto-update their versions of Flash. Google Chrome version 40.0.2214.111 includes this update, and is available now. To check for updates in Chrome, click the stacked three bars to the right of the address bar in Chrome, and look for a listing near the bottom that says “Update Chrome.”

As I noted in a previous Flash post, short of removing Flash altogether — which may be impractical for some users — there are intermediate solutions. Script-blocking applications like Noscript and ScriptSafe are useful in blocking Flash content, but script blockers can be challenging for many users to handle.

My favorite in-between approach is click-to-play, which is a feature available for most browsers (except IE, sadly) that blocks Flash content from loading by default, replacing the content on Web sites with a blank box. With click-to-play, users who wish to view the blocked content need only click the boxes to enable Flash content inside of them (click-to-play also blocks Java applets from loading by default).

Windows users also should take full advantage of the Enhanced Mitigation Experience Toolkit(EMET), a free tool from Microsoft that can help Windows users beef up the security of third-party applications.

TorrentFreak: Google Chrome Dragged Into Internet Censorship Fight

This post was syndicated from: TorrentFreak and was written by: Ernesto. Original post: at TorrentFreak

Helped by the MPAA, Mississippi State Attorney General Jim Hood launched a secret campaign to revive SOPA-like censorship efforts in the United States.

The MPAA and Hood want Internet services to bring website blocking and search engine filtering back to the table after the controversial law failed to pass.

The plan became public through various emails that were released in the Sony Pictures leaks and in a response Google said that it was “deeply concerned” about the developments.

To counter the looming threat Google filed a complaint against Hood last December, asking the court to quash a pending subpoena that addresses Google’s failure to take down or block access to illegal content, including pirate sites.

Recognizing the importance of this case, several interested parties have written to the court to share their concerns. There’s been support for both parties with some siding with Google and others backing Hood.

In a joint amicus curiae brief (pdf) the Consumer Electronics Association (CEA), the Computer & Communications Industry Association (CCIA) and advocacy organization Engine warn that Hood’s efforts endanger free speech and innovation.

“No public official should have discretion to filter the Internet. Where the public official is one of fifty state attorneys general, the danger to free speech and to innovation is even more profound,” they write.

According to the tech groups it would be impossible for Internet services to screen and police the Internet for questionable content.

“Internet businesses rely not only on the ability to communicate freely with their consumers, but also on the ability to give the public ways to communicate with each other. This communication, at the speed of the Internet, is impossible to pre-screen.”

Not everyone agrees with this position though. On the other side of the argument we find outfits such as Stop Child Predators, Digital Citizens Alliance, Taylor Hooton Foundation and Ryan United.

In their brief they point out that Google’s services are used to facilitate criminal practices such as illegal drug sales and piracy. Blocking content may also be needed to protect children from other threats.

“Google’s YouTube service has been used by those seeking to sell steroids and other illegal drugs online,” they warn, adding that the video platform is also “routinely used to distribute other content that is harmful to minors, such as videos regarding ‘How to Buy Smokes Under-Age’ and ‘Best Fake ID Service Around’.”

Going a step further, the groups also suggest that Google should filter content in its Chrome browser. The brief mentions that Google recently removed Pirate Bay apps from its Play Store, but failed to block the site in search results or Chrome.

“In December 2014, responding to the crackdown on leading filesharing website PirateBay, Google removed a file-sharing application from its mobile software store, but reports indicate that Google has continued to allow access to the same and similar sites through its search engine and Chrome browser,” they write.

The Attorney General should be allowed to thoroughly investigate these threats and do something about it, the groups add.

“It is simply not tenable to suggest that the top law enforcement officials of each state are powerless even to investigate whether search engines or other intermediaries such as Google are being used—knowingly or unknowingly—to facilitate the distribution of illegal content…”

In addition to the examples above, several other organizations submitted amicus briefs arguing why the subpoena should or shouldn’t be allowed under the First Amendment and Section 230 of the CDA, including the International AntiCounterfeiting Coalition, EFF, the Center for Democracy & Technology and Public Knowledge.

Considering the stakes at hand, both sides will leave no resource untapped to defend their positions. In any event, this is certainly not the last time we’ll hear of the case.

Source: TorrentFreak, for the latest info on copyright, file-sharing, torrent sites and anonymous VPN services.