Posts tagged ‘Other’

TorrentFreak: UK Govt. Warns Google, Microsoft & Yahoo Over Piracy

This post was syndicated from: TorrentFreak and was written by: Andy. Original post: at TorrentFreak

Developments over the past 12 months have sent the clearest message yet that the UK government is not only prepared to morally support the creative industries, but also spend public money on anti-piracy enforcement.

The government-funded City of London Intellectual Property Crime Unit is definitely showing no signs of losing interest, carrying out yet another arrest yesterday morning on behalf of video rightsholders. In the afternoon during the BPI’s Annual General Meeting in London, the unit was being praised by both government officials and a music sector also keen to bring piracy under control.

“We’ve given £2.5 million to support the City of London Police Intellectual Property Crime Unit, PIPCU,” Culture Secretary Sajid Javid told those in attendance.

“The first unit of its kind in the world, PIPCU is working with industry groups – including the BPI – on the Infringing Websites List. The list identifies sites that deliberately and consistently breach copyright, so brand owners can avoid advertising on them.”

Referencing rampant online piracy, Javid said that no industry or government could stand by and let “massive, industrial scale” levels of infringement continue.

“I know some people say the IP genie is out of the bottle and that no amount of wishing will force it back in. But I don’t agree with them,” he said.

“We don’t look at any other crimes and say ‘It’s such a big problem that it’s not worth bothering with.’ We wouldn’t stand idly by if paintings worth hundreds of millions of pounds were being stolen from the National Gallery. Copyright infringement is theft, pure and simple. And it’s vital we try to reduce it.”

Going on to detail the Creative Content initiative which the government is supporting to the tune of £3.5m, Javid said the system would deliver a “robust, fair and effective enforcement regime”.

That, however, is only one part of the puzzle. Infringing sites need to be dealt with, directly and by other means, he added.

“Copyright crooks don’t love music. They love money, and they’ve been attracted to the industry solely by its potential to make them rich. Take away their profits and you take away their reason for being. Of course, it’s not just up to the government and music industry to deal with this issue,” he noted.

Putting search engines on notice, the MP said that they have an important role to play.

“They must step up and show willing. That’s why [Business Secretary] Vince Cable and I have written to Google, Microsoft and Yahoo, asking them to work with [the music industry] to stop search results sending people to illegal sites,” Javid said.

“And let me be perfectly clear: if we don’t see real progress, we will be looking at a legislative approach. In the words of [Beggars Group chairman] Martin Mills, ‘technology companies should be the partners of rights companies, not their masters’.”

The Culture Secretary said that when it comes to tackling piracy, the government, music industry and tech companies are “three sides of the same triangle.” But despite that expectation of togetherness, only time will tell if the search engines agree to the point of taking voluntary action to support it.

Source: TorrentFreak, for the latest info on copyright, file-sharing and anonymous VPN services.

TorrentFreak: Tribler Makes BitTorrent Anonymous With Built-in Tor Network

This post was syndicated from: TorrentFreak and was written by: Ernesto. Original post: at TorrentFreak

The Tribler client has been around for nearly a decade already, and during that time it’s developed into the only truly decentralized BitTorrent client out there.

Even if all torrent sites were shut down today, Tribler users would still be able to find and add new content.

But the researchers want more. One of the key problems with BitTorrent is the lack of anonymity. Without a VPN or proxy all downloads can easily be traced back to an individual internet connection.

The Tribler team hopes to fix this problem with a built-in Tor network, routing all data through a series of peers. In essence, Tribler users then become their own Tor network helping each other to hide their IP-addresses through encrypted proxies.

“The Tribler anonymity feature aims to make strong encryption and authentication the Internet default,” Tribler leader Dr. Pouwelse tells TF.

For now the researchers have settled on three proxies between the sender of the data and the recipient. This minimizes the risk of being monitored by a rogue peer and significantly improves privacy.

“Adding three layers of proxies gives you more privacy. Three layers of protection make it difficult to trace you. Proxies no longer need to be fully trusted. A single bad proxy can not see exactly what is going on,” the Tribler team explains.

“The first proxy layer encrypts the data for you and each next proxy adds another layer of encryption. You are the only one who can decrypt these three layers correctly. Tribler uses three proxy layers to make sure bad proxies that are spying on people can do little damage.”
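
The quoted description maps onto standard onion-style layered encryption. The sketch below is a conceptual illustration only, not Tribler's actual tunnel code: it uses Python's cryptography package (Fernet) as a stand-in cipher, and it hand-waves how the downloader and each proxy would negotiate their keys.

```python
# Conceptual sketch of three-layer ("onion") encryption, NOT Tribler's code.
# Each proxy holds one symmetric key; data picked up from a seeder gains one
# encryption layer per hop, and only the downloader, who negotiated all three
# keys, can peel the layers off again.
from cryptography.fernet import Fernet

hop_keys = [Fernet.generate_key() for _ in range(3)]   # one key per proxy

packet = b"piece of a torrent transfer"
for key in hop_keys:                        # each proxy adds its own layer
    packet = Fernet(key).encrypt(packet)

# A single proxy only ever sees ciphertext it cannot fully strip; the
# downloader removes the layers in reverse order.
for key in reversed(hop_keys):
    packet = Fernet(key).decrypt(packet)

assert packet == b"piece of a torrent transfer"
```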

Tribler’s encrypted Tor routing

Today Tribler opens up its technology to the public for the first time. The Tor network is fully functional but for now it is limited to a 50 MB test file. This will allow the developers to make some improvements before the final release goes out next month.

There has been an increased interest in encryption technologies lately. The Tribler team invites interested developers to help them improve their work, which is available on Github.

“We hope all developers will unite inside a single project to defeat the forces that have destroyed the Internet essence. We really don’t need a hundred more single-person projects on ‘secure’ chat applications that still fully expose who you talk to,” Pouwelse says.

For users, the Tor-like security means an increase in bandwidth usage. After all, they themselves also become proxies who have to pass on other users’ transfers. According to the researchers this shouldn’t result in any slowdowns though, as long as people are willing to share.

“Tribler has always been for social and sharing people. Like private tracker communities with plenty of bandwidth to go around we think we can offer anonymity without slow downs, if we can incentivize people to leave their computers on overnight and donate,” Pouwelse says.

“People who share will have superior anonymous speeds,” he adds.

Those interested in testing Tribler’s anonymity feature can download the latest version. Bandwidth statistics are also available. Please bear in mind that only the test file can be transferred securely at the moment.

Source: TorrentFreak, for the latest info on copyright, file-sharing and anonymous VPN services.

TorrentFreak: Popcorn Time Installed on 1.4 Million Devices in The U.S.

This post was syndicated from: TorrentFreak and was written by: Ernesto. Original post: at TorrentFreak

The Popcorn Time app brought BitTorrent streaming to the masses earlier this year.

The software became an instant hit by offering BitTorrent-powered streaming in an easy-to-use Netflix-style interface.

While the original app was shut down by the developers after a few weeks, the project was quickly picked up by others. This resulted in several popular forks that have gained a steady user-base in recent months.

Just how popular the application is has remained a mystery, until now. TorrentFreak reached out to one of the most popular Popcorn Time forks at time4popcorn.eu to find out how many installs and active users there are in various parts of the world.

The Popcorn Time team was initially reluctant to share exact statistics on the app’s popularity across the globe, but they’re now ready to lift the veil.

Data shared with TorrentFreak shows that most users come from the United States where the application is installed on more than 1.4 million devices. There are currently over 100,000 active users in the U.S. and the number of new installs per day hovers around 15,000.

“At the beginning of August there were between 17-18K installations a day on all operating systems and last weekend there were somewhere between 13-15K a day,” the Popcorn Time team informs us.

The application has a surprisingly large user base in the Netherlands too, as Android Planet found out. The country comes in second place with 1.3 million installs. That’s a huge number for a country with a population of less than 17 million people.

Brazil completes the top three at a respectable distance with 700,000 installed applications and around 56,000 active users.

The United Kingdom just missed a spot in the top three. The Popcorn Time fork has been installed on 500,000 devices there, with 30,000 active users and 4,500 new installs per day.

Australia, which generally has a very high piracy rate, is lagging behind a little with 93,000 installs thus far, and “only” 6,500 active users.

The statistics above only apply to the time4popcorn.eu application. While it’s probably the most used, other forks such as popcorntime.io also have a large following to add to the total Popcorn Time user base.

The team behind time4popcorn.eu, meanwhile, says that it will continue to add new features and support for more operating systems. They are currently finishing up the first iOS version which is expected to be released in a few days.

Aside from the technical challenges, the developers stay motivated thanks to the large audience they’ve gathered in a relatively short period.

“We really love and appreciate all our devoted users from all over the world, and we want to emphasize to them once more that this is only the beginning of the beginning. We have so many awesome plans for the future,” they stress.

As long as there are no legal troubles down the road, this user base is expected to grow even further during the months to come.

Source: TorrentFreak, for the latest info on copyright, file-sharing and anonymous VPN services.

The Hacker Factor Blog: The Naked Truth

This post was syndicated from: The Hacker Factor Blog and was written by: The Hacker Factor Blog. Original post: at The Hacker Factor Blog

Warning: This blog entry discusses adult content.

In my previous blog entry, I wrote about the auto-ban system at FotoForensics. This system is designed to detect network attacks and prohibited content. Beginning yesterday, the system has been getting a serious workout. Over 600 people have been auto-banned. After 30 hours, the load is just beginning to ebb.

Yesterday on 4chan (the land of trolls), someone posted a long list of “celebrity nude photos”. Let me be blunt: they are all fakes. Some have heads pasted onto bodies, others have clothing digitally removed — and it’s all pretty poorly done. (Then again: if it came from the site that gave us Rickrolling and Pedobear, did anyone expect them to be real?)

Plenty of news outlets are reporting on this as if it was a massive data security leak. Except that there was no exploit beyond some very creepy and disturbed person with Photoshop. (Seriously: to create this many fakes strikes me as a mental disorder from someone who is likely a sex offender.) When actress Victoria Justice tweeted that the pictures are fake, she was telling the truth. They are all fakes.

Unfortunately in this case, when people think photos may be fake, they upload them to FotoForensics. Since FotoForensics has a zero-tolerance policy related to porn, nudity, and sexually explicit content, every single person who uploads any of these pictures is banned. All of them. Banned for three months. And if they don’t get the hint and visit during the three-month ban, then the ban counter resets — it’s a three month ban from your last visit.

Why Ban?

I have previously written about why FotoForensics bans some content. To summarize the main reasons: we want less-biased content (not “50% porn”), we want to stay off blacklists that would prevent access from our desired user base, and we want to reduce the amount of child porn uploaded to the site.

As a service provider, I am a mandatory reporter. I don’t have the option to not report people who upload child porn. Either I turn you in and you get a felony, or I don’t turn you in and I get a felony. So, I’m turning you in ASAP. (As one law enforcement officer remarked after reviewing a report I submitted, “Wait… you’re telling me that they’re uploading child porn to a site named ‘Forensics’ and run by a company called ‘Hacker’?” I could hear her partners laughing in the background. “We don’t catch the smart ones.”)

Banning all porn, nudity, and sexually explicit content dramatically reduces the number of users who upload child porn. It also keeps the site workplace-safe and stops porn from biasing the data archive.

The zero-tolerance policy at FotoForensics is really no different from the terms of service at Google, Facebook, Yahoo, Twitter, Reddit, and every other major service provider. All of them explicitly forbid child porn (because it’s a felony), and most just forbid all pornography and sexually explicit content because they know that sites without filters have problems with child porn.

Unfortunately, there’s another well-established trend at FotoForensics. Whenever there is a burst of activity, it is followed by people who upload porn, and then by people uploading child porn. This current trend (uploading fake nude celebrities) is a huge one. Already, we are seeing the switch over to regular porn. That means we are gearing up to report tons of child porn that will likely show up over the next few days. (This is the part of my job that I hate. I don’t hate reporting people — that’s fun and I hope they all get arrested. I hate having my admins and research partners potentially come across child porn.)

Coming Soon…

Over at FotoForensics, we have a lot of different research projects. Some of them are designed to identify fads and trends, while others are looking for ways to better conduct forensics. One of the research projects is focused on more accurately identifying prohibited content. These are all part of the auto-ban system.

Auto-ban has a dozen independent functions and a couple of reporting levels. Some people get banned instantly. Others get flagged for review based on suspicious activity or content. Some flagged content generates a warning for the user. The warning basically says that this is a family friendly site and makes the user agree that they are not uploading prohibited content. Other times content is silently flagged — the user never notices it, but it goes into the list of content for manual review and potential banning. (Even the review process is simplified: one person can easily review a few thousand files per hour.)

We typically deploy a new function as a flagging tool until it is well-tested. We want zero false-positives before we make banning automated. (Over the last 48 hours, auto-ban has banned over 600 people and flagged another 400 for review and manual banning.)

One of the current flagging rules is a high-performance and high-accuracy search engine that identifies visually similar content. (I’m not using the specific algorithms mentioned in my blog entry, but they are close enough to understand the concept.) This system can compare one BILLION hashes per second per CPU per gigahertz, and it scales linearly. (One 3.3GHz CPU can process nearly 3 billion hashes per second — it would be faster if it wasn’t I/O bound. And I don’t use a GPU because loading and unloading the GPU would take more time than just doing the comparisons on the basic CPU.) To put it simply, it will take a fraction of a second to compare every new upload against the list of known prohibited content. And if there’s a strong match, then we know it is the same picture, even if it has been resized, recolored, cropped, etc.
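
The throughput figure is consistent with the standard way visually similar images are matched: each image is reduced to a fixed-width perceptual hash, and two hashes are compared with a single XOR plus a bit count. The sketch below shows only that comparison step and is illustrative; the 64-bit width and the distance threshold are assumptions, not FotoForensics’ actual parameters.

```python
# Illustrative matching step for perceptual hashes (assumed 64-bit values).
# A small Hamming distance between two hashes approximates "visually similar",
# which is what lets a resized, recolored or cropped copy still match.
def hamming(h1: int, h2: int) -> int:
    """Count the differing bits between two hashes (XOR + popcount)."""
    return bin(h1 ^ h2).count("1")

def matches_banned(upload_hash: int, banned_hashes, threshold: int = 10) -> bool:
    """Flag an upload if it lies within `threshold` bits of any known-bad hash."""
    return any(hamming(upload_hash, h) <= threshold for h in banned_hashes)
```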

The last two days have been a great stress test for this new profiling system. I don’t think we missed banning any of these prohibited pictures. Later this week, it is going to graduate and become fully automated. Then we can begin banning people as fast as they upload.

TorrentFreak: Hustler Hustles Tor Exit-Node Operator Over Piracy

This post was syndicated from: TorrentFreak and was written by: Ernesto. Original post: at TorrentFreak

Faced with the growing threat of online file-sharing, Hustler committed to “turning piracy into profit” several years ago.

The company has not been very active on this front in the United States, but it has been in Europe. In Finland, for example, Hustler is sending out settlement demands for hundreds of euros to alleged pirates.

A few days ago one of these letters arrived at the doorstep of Sebastian Mäki, identifying the IP-address through which he offers a Tor exit-node. According to Hustler the IP-address had allegedly transferred a copy of Hustler’s “This Ain’t Game Of Thrones XXX.”

The letter was sent by law firm Hedman Partners, which urges Mäki to pay 600 euros ($800) in damages or face worse.

However, Mäki has no intention of paying up. Besides running a Tor exit-node and an open wireless network through the connection, he also happens to be Vice-President of a local Pirate Party branch. As such, he has a decent knowledge of how to counter these threats.

“All we can do at the moment is fight against these trolls, and they are preying on easy victims, who have no time nor energy to fight and often are afraid of the embarrassment that could follow, because apparently porn is still a taboo somewhere,” Mäki tells TorrentFreak.

So instead of paying up, the Tor exit-node operator launched a counterattack. He wrote a lengthy reply to Hustler’s lawyers accusing them of blackmail.

“According to Finnish law, wrongfully forcing someone to dispose of their financial interests is known as blackmail. Threatening to make known one’s porn watching habits unless someone coughs up money sounds to me like activities for which you can get a sentence.”

Mäki explains that an IP-address is not necessarily a person and that Hustler’s copyright trolling is likely to affect innocent Internet users. Because of this, he has decided to report these dubious practices to the police.

“I am also concerned that other innocent citizens might not have as much time, energy, or wealth to fight back. Because your actions have the potential to cause so much damage to innocent bystanders, I find it morally questionable and made a police report.”

Whether the police will follow up on the complaint remains to be seen, but Hustler will have to take its hustling elsewhere for now. They clearly targeted the wrong person here, in more ways than one.

Source: TorrentFreak, for the latest info on copyright, file-sharing and anonymous VPN services.

TorrentFreak: ISP Alliance Accepts Piracy Crackdown, With Limits

This post was syndicated from: TorrentFreak and was written by: Andy. Original post: at TorrentFreak

Following last week’s leaked draft from Hollywood, Aussie ISPs including Telstra, iiNet and Optus have published their submission in response to a request by Attorney-General George Brandis and Communications Minister Malcolm Turnbull.

While the movie industry’s anti-piracy proposal demonstrates a desire to put ISPs under pressure in respect of their pirating customers, it comes as no surprise that their trade group, the Communications Alliance, has other things in mind.

The studios would like to see a change in copyright law to remove service providers’ safe harbor if they even suspect infringement is taking place on their networks but fail to take action, but the ISPs reject that.

ISP liability

“We urge careful consideration of the proposal to extend the authorization liability within the Copyright Act, because such an amendment has the potential to capture many other entities, including schools, universities, internet cafes, retailers, libraries and cloud-based services in ways that may hamper their legitimate activities and disadvantage consumers,” they write.

But while the ISPs are clear they don’t want to be held legally liable for customer piracy, they have given the clearest indication yet that they are in support of a piracy crackdown involving subscribers. Whether one would work is up for debate, however.

Graduated response

“[T]here is little or no evidence to date that [graduated response] schemes are successful, but no shortage of examples where such schemes have been distinctly unsuccessful. Nonetheless, Communications Alliance remains willing to engage in good faith discussions with rights holders, with a view to agreeing on a scheme to address online copyright infringement, if the Government maintains that such a scheme is desirable,” they write.

If such a scheme could be agreed on, the ISPs say it would be a notice-and-notice system that didn’t carry the threat of ISP-imposed customer sanctions.

“Communications Alliance notes and supports the Government’s expectation, expressed in the paper, that an industry scheme, if agreed, should not provide for the interruption of a subscriber’s internet access,” they note.

However, the appointment of a “judicial/regulatory/arbitration body” with the power to apply “meaningful sanctions” to repeat infringers is supported by the ISPs, but what those sanctions might be remains a mystery.

On the thorny issue of costs the ISPs say that the rightsholders must pay for everything. Interestingly, they turn the copyright holders’ claims of huge piracy losses against them, by stating that if just two-thirds of casual infringers change their ways, the video industry alone stands to generate AUS$420m (US$392m) per year. On this basis they can easily afford to pay, the ISPs say.

Site blocking

While warning of potential pitfalls and inadvertent censorship, the Communications Alliance accepts that done properly, the blocking of ‘pirate’ sites could help to address online piracy.

“Although site blocking is a relatively blunt instrument and has its share of weaknesses and limitations, we believe that an appropriately structured and safeguarded injunctive relief scheme could play an important role in addressing online copyright infringement in Australia,” the Alliance writes.

One area in which the ISPs agree with the movie studios is in respect of ISP “knowledge” of infringement taking place in order for courts to order a block. The system currently employed in Ireland, where knowledge is not required, is favored by both parties, but the ISPs insist that the copyright holders should pick up the bill, from court procedures to putting the blocks in place.

The Alliance also has some additional conditions. The ISPs say they are only prepared to block “clearly, flagrantly and totally infringing websites” that exist outside Australia, and only those which use piracy as their main source of revenue.

Follow the Money

Pointing to the project currently underway in the UK coordinated by the Police Intellectual Property Crime Unit, the Communications Alliance says that regardless of the outcome on blocking, a “follow the money” approach should be employed against ‘pirate’ sites. This is something they already have an eye on.

“Some ISP members of Communications Alliance already have policies in place which prevent any of their advertising spend being directed to sites that promote or facilitate improper file sharing. Discussions are underway as to whether a united approach could be adopted by ISPs whereby the industry generally agrees on measures or policies to ensure the relevant websites do not benefit from any of the industry’s advertising revenues,” the ISPs note.

Better access to legal content

The Communications Alliance adds that rightsholders need to do more to serve their customers, noting that improved access to affordable content combined with public education on where to find it is required.

“We believe that for any scheme designed to address online copyright infringement to be sustainable it must also stimulate innovation by growing the digital content market, so Australians can continue to access and enjoy new and emerging content, devices and technologies.

“The ISP members of Communications Alliance remain willing to work toward an approach that balances the interests of all stakeholders, including consumers,” they conclude.

Conclusion

While some harmonies exist, the submissions from the movie studios and ISPs carry significant points of contention, with each having the power to completely stall negotiations. With legislative change hanging in the air, both sides will be keen to safeguard their interests on the key issues, ISP liability especially.

Source: TorrentFreak, for the latest info on copyright, file-sharing and anonymous VPN services.

Pid Eins: Revisiting How We Put Together Linux Systems

This post was syndicated from: Pid Eins and was written by: Lennart Poettering. Original post: at Pid Eins

In a previous blog story I discussed
Factory Reset, Stateless Systems, Reproducible Systems & Verifiable Systems.
I now want to take the opportunity to explain a bit where we want to
take this with systemd in the longer run, and what we want to build out
of it. This is going to be a longer story, so better grab a cold bottle
of Club Mate before you start reading.

Traditional Linux distributions are built around packaging systems
like RPM or dpkg, and an organization model where upstream developers
and downstream packagers are relatively clearly separated: an upstream
developer writes code, and puts it somewhere online, in a tarball. A
packager then grabs it and turns it into RPMs/DEBs. The user then
grabs these RPMs/DEBs and installs them locally on the system. For a
variety of uses this is a fantastic scheme: users have a large
selection of readily packaged software available, in mostly uniform
packaging, from a single source they can trust. In this scheme the
distribution vets all software it packages, and as long as the user
trusts the distribution all should be good. The distribution takes the
responsibility of ensuring the software is not malicious, of timely
fixing security problems and helping the user if something is wrong.

Upstream Projects

However, this scheme also has a number of problems, and doesn’t fit
many use-cases of our software particularly well. Let’s have a look at
the problems of this scheme for many upstreams:

  • Upstream software vendors are fully dependent on downstream
    distributions to package their stuff. It’s the downstream
    distribution that decides on schedules, packaging details, and how
    to handle support. Often upstream vendors want much faster release
    cycles than the downstream distributions follow.

  • Realistic testing is extremely unreliable and next to
    impossible. Since the end-user can run a variety of different
    package versions together, and expects the software he runs to just
    work on any combination, the test matrix explodes. If upstream tests
    its version on distribution X release Y, then there’s no guarantee
    that that’s the precise combination of packages that the end user
    will eventually run. In fact, it is very unlikely that the end user
    will, since most distributions probably updated a number of
    libraries the package relies on by the time the package ends up being
    made available to the user. The fact that each package can be
    individually updated by the user, and each user can combine library
    versions, plug-ins and executables relatively freely, results in a high
    risk of something going wrong.

  • Since there are so many different distributions in so many different
    versions around, if upstream tries to build and test software for
    them it needs to do so for a large number of distributions, which is
    a massive effort.

  • The distributions are actually quite different in many ways. In
    fact, they are different in a lot of the most basic
    functionality. For example, the path where to put x86-64 libraries
    is different on Fedora and Debian-derived systems.

  • Developing software for a number of distributions and versions is
    hard: if you want to do it, you need to actually install them, each
    one of them, manually, and then build your software for each.

  • Since most downstream distributions have strict licensing and
    trademark requirements (and rightly so), any kind of closed source
    software (or otherwise non-free) does not fit into this scheme at
    all.

All of this together makes it really hard for many upstreams to work
nicely with the way Linux currently works. Often they try to improve
the situation for them, for example by bundling libraries, to make
their test and build matrices smaller.

System Vendors

The toolbox approach of classic Linux distributions is fantastic for
people who want to put together their individual system, nicely
adjusted to exactly what they need. However, this is not really how
many of today’s Linux systems are built, installed or updated. If you
build any kind of embedded device, a server system, or even user
systems, you frequently do your work based on complete system images,
that are linearly versioned. You build these images somewhere, and
then you replicate them atomically to a larger number of systems. On
these systems, you don’t install or remove packages, you get a defined
set of files, and besides installing or updating the system there are
no ways how to change the set of tools you get.

The current Linux distributions are not particularly good at providing
for this major use-case of Linux. Their strict focus on individual
packages as well as package managers as end-user install and update
tool is incompatible with what many system vendors want.

Users

The classic Linux distribution scheme is frequently not what end users
want, either. Many users are used to app markets like the ones Android,
Windows or iOS/Mac have. Markets are a platform that doesn’t package, build or
maintain software like distributions do, but simply allows users to
quickly find and download the software they need, with the app vendor
responsible for keeping the app updated, secured, and all that on the
vendor’s release cycle. Users tend to be impatient. They want their
software quickly, and the fine distinction between trusting a single
distribution or a myriad of app developers individually is usually not
important for them. The companies behind the marketplaces usually try
to improve this trust problem by providing sand-boxing technologies: as
a replacement for the distribution that audits, vets, builds and
packages the software and thus allows users to trust it to a certain
level, these vendors try to find technical solutions to ensure that
the software they offer for download can’t be malicious.

Existing Approaches To Fix These Problems

Now, all the issues pointed out above are not new, and there are
sometimes quite successful attempts to do something about it. Ubuntu
Apps, Docker, Software Collections, ChromeOS, CoreOS all fix part of
this problem set, usually with a strict focus on one facet of Linux
systems. For example, Ubuntu Apps focus strictly on end user (desktop)
applications, and don’t care about how we built/update/install the OS
itself, or containers. Docker OTOH focuses on containers only, and
doesn’t care about end-user apps. Software Collections tries to focus
on the development environments. ChromeOS focuses on the OS itself,
but only for end-user devices. CoreOS also focuses on the OS, but
only for server systems.

The approaches they find are usually good at specific things, and use
a variety of different technologies, on different layers. However,
none of these projects tried to fix these problems in a generic way,
for all uses, right in the core components of the OS itself.

Linux has come to tremendous success because its kernel is so
generic: you can build supercomputers and tiny embedded devices out of
it. It’s time we come up with a basic, reusable scheme for solving
the problem set described above, one that is equally generic.

What We Want

The systemd cabal (Kay Sievers, Harald Hoyer, Daniel Mack, Tom
Gundersen, David Herrmann, and yours truly) recently met in Berlin
about all these things, and tried to come up with a scheme that is
somewhat simple, but tries to solve the issues generically, for all
use-cases, as part of the systemd project. All that in a way that is
somewhat compatible with the current scheme of distributions, to allow
a slow, gradual adoption. Also, and that’s something one cannot stress
enough: the toolbox scheme of classic Linux distributions is
actually a good one, and for many cases the right one. However, we
need to make sure we make distributions relevant again for all
use-cases, not just those of highly individualized systems.

Anyway, so let’s summarize what we are trying to do:

  • We want an efficient way that allows vendors to package their
    software (regardless if just an app, or the whole OS) directly for
    the end user, and know the precise combination of libraries and
    packages it will operate with.

  • We want to allow end users and administrators to install these
    packages on their systems, regardless which distribution they have
    installed on it.

  • We want a unified solution that ultimately can cover updates for
    full systems, OS containers, end user apps, programming ABIs, and
    more. These updates shall be double-buffered (at least). This is an
    absolute necessity if we want to prepare the ground for operating
    systems that manage themselves, that can update safely without
    administrator involvement.

  • We want our images to be trustable (i.e. signed). In fact we want a
    fully trustable OS, with images that can be verified by a full
    trust chain from the firmware (EFI SecureBoot!), through the boot loader, through the
    kernel, and initrd. Cryptographically secure verification of the
    code we execute is relevant on the desktop (like ChromeOS does), but
    also for apps, for embedded devices and even on servers (in a post-Snowden
    world, in particular).

What We Propose

So much about the set of problems, and what we are trying to do. So,
now, let’s discuss the technical bits we came up with:

The scheme we propose is built around the variety of concepts of btrfs
and Linux file system name-spacing. btrfs at this point already has a
large number of features that fit neatly in our concept, and the
maintainers are busy working on a couple of others we want to
eventually make use of.

As first part of our proposal we make heavy use of btrfs sub-volumes and
introduce a clear naming scheme for them. We name snapshots like this:

  • usr:<vendorid>:<architecture>:<version> — This refers to a full
    vendor operating system tree. It’s basically a /usr tree (and no
    other directories), in a specific version, with everything you need to boot
    it up inside it. The <vendorid> field is replaced by some vendor
    identifier, maybe a scheme like
    org.fedoraproject.FedoraWorkstation. The <architecture> field
    specifies a CPU architecture the OS is designed for, for example
    x86-64. The <version> field specifies a specific OS version, for
    example 23.4. An example sub-volume name could hence look like this:
    usr:org.fedoraproject.FedoraWorkstation:x86_64:23.4

  • root:<name>:<vendorid>:<architecture> — This refers to an
    instance of an operating system. It’s basically a root directory,
    containing primarily /etc and /var (but possibly more). Sub-volumes
    of this type do not contain a populated /usr tree though. The
    <name> field refers to some instance name (maybe the host name of
    the instance). The other fields are defined as above. An example
    sub-volume name is
    root:revolution:org.fedoraproject.FedoraWorkstation:x86_64.

  • runtime:<vendorid>:<architecture>:<version> — This refers to a
    vendor runtime. A runtime here is supposed to be a set of
    libraries and other resources that are needed to run apps (for the
    concept of apps see below), all in a /usr tree. In this regard this
    is very similar to the usr sub-volumes explained above, however,
    while a usr sub-volume is a full OS and contains everything
    necessary to boot, a runtime is really only a set of
    libraries. You cannot boot it, but you can run apps with it. An
    example sub-volume name is: runtime:org.gnome.GNOME3_20:3.20.1

  • framework:<vendorid>:<architecture>:<version> — This is very
    similar to a vendor runtime, as described above, it contains just a
    /usr tree, but goes one step further: it additionally contains all
    development headers, compilers and build tools, that allow
    developing against a specific runtime. For each runtime there should
    be a framework. When you develop against a specific framework in a
    specific architecture, then the resulting app will be compatible
    with the runtime of the same vendor ID and architecture. Example:
    framework:org.gnome.GNOME3_20:3.20.1

  • app:<vendorid>:<runtime>:<architecture>:<version> — This
    encapsulates an application bundle. It contains a tree that at
    runtime is mounted to /opt/<vendorid>, and contains all the
    application’s resources. The <vendorid> could be a string like
    org.libreoffice.LibreOffice, the <runtime> refers to the
    vendor id of one specific runtime the application is built for, for
    example org.gnome.GNOME3_20:3.20.1. The <architecture> and
    <version> refer to the architecture the application is built for,
    and of course its version. Example:
    app:org.libreoffice.LibreOffice:GNOME3_20:x86_64:133

  • home:<user>:<uid>:<gid> — This sub-volume shall refer to the home
    directory of the specific user. The <user> field contains the user
    name, the <uid> and <gid> fields the numeric Unix UIDs and GIDs
    of the user. The idea here is that in the long run the list of
    sub-volumes is sufficient as a user database (but see
    below). Example: home:lennart:1000:1000.

btrfs partitions that adhere to this naming scheme should be clearly
identifiable. It is our intention to introduce a new GPT partition type
ID for this.
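
As a rough illustration of how machine-readable this naming is, here is a hypothetical parser for the scheme; the function and its error handling are mine, not part of the proposal, and note that some of the examples in this post elide the <architecture> field.

```python
# Hypothetical helper, not part of the proposal itself: split a sub-volume
# name into its kind and the remaining colon-separated fields. The meaning
# of the fields depends on the kind, as defined in the list above.
KINDS = {"usr", "root", "runtime", "framework", "app", "home"}

def parse_subvolume(name: str):
    kind, *fields = name.split(":")
    if kind not in KINDS:
        raise ValueError(f"unknown sub-volume kind in {name!r}")
    return kind, fields

print(parse_subvolume("usr:org.fedoraproject.FedoraWorkstation:x86_64:23.4"))
# ('usr', ['org.fedoraproject.FedoraWorkstation', 'x86_64', '23.4'])
```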

How To Use It

After we introduced this naming scheme let’s see what we can build of
this:

  • When booting up a system we mount the root directory from one of the
    root sub-volumes, and then mount /usr from a matching usr
    sub-volume. Matching here means it carries the same <vendor-id>
    and <architecture>. Of course, by default we should pick the
    matching usr sub-volume with the newest version (see the version
    selection sketch right after this list).

  • When we boot up an OS container, we do exactly the same as when
    we boot up a regular system: we simply combine a usr sub-volume
    with a root sub-volume.

  • When we enumerate the system’s users we simply go through the
    list of home snapshots.

  • When a user authenticates and logs in we mount his home
    directory from his snapshot.

  • When an app is run, we set up a new file system name-space, mount the
    app sub-volume to /opt/<vendorid>/, and the appropriate runtime
    sub-volume the app picked to /usr, as well as the user’s
    /home/$USER to its place.

  • When a developer wants to develop against a specific runtime he
    installs the right framework, and then temporarily transitions into
    a name space where /usr is mounted from the framework sub-volume, and
    /home/$USER from his own home directory. In this name space he then
    runs his build commands. He can build in multiple name spaces at the
    same time, if he intends to build software for multiple runtimes or
    architectures in parallel.
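
As referenced in the first bullet above, picking the usr sub-volume to mount boils down to filtering by vendor id and architecture and sorting by version. The sketch below is a hypothetical illustration of that selection; the naive version ordering (for example, how "25beta" should sort against "24.9") is an open policy question, not something the proposal specifies.

```python
# Hypothetical selection step: among all "usr" sub-volumes, keep those
# matching the booted root's vendor id and architecture, then pick the
# highest version using a deliberately naive numeric ordering.
def pick_usr(subvolumes, vendor_id: str, arch: str) -> str:
    def version_key(name: str):
        version = name.split(":")[3]
        return [int(p) if p.isdigit() else -1 for p in version.split(".")]

    candidates = [s for s in subvolumes
                  if s.startswith(f"usr:{vendor_id}:{arch}:")]
    if not candidates:
        raise LookupError(f"no usr sub-volume for {vendor_id}/{arch}")
    return max(candidates, key=version_key)

print(pick_usr(
    ["usr:org.fedoraproject.WorkStation:x86_64:24.7",
     "usr:org.fedoraproject.WorkStation:x86_64:24.8",
     "usr:org.fedoraproject.WorkStation:x86_64:24.9"],
    "org.fedoraproject.WorkStation", "x86_64"))
# usr:org.fedoraproject.WorkStation:x86_64:24.9
```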

Instantiating a new system or OS container (which is exactly the same
in this scheme) just consists of creating a new appropriately named
root sub-volume. Naturally, you can share one vendor OS
copy in one specific version with a multitude of container instances.

Everything is double-buffered (or actually, n-ary-buffered), because
usr, runtime, framework, app sub-volumes can exist in multiple
versions. Of course, by default the execution logic should always pick
the newest release of each sub-volume, but it is up to the user to keep
multiple versions around, and possibly execute older versions, if he
desires to do so. In fact, like on ChromeOS this could even be handled
automatically: if a system fails to boot with a newer snapshot, the
boot loader can automatically revert back to an older version of the
OS.

An Example

Note that as a result this allows installing not only multiple end-user
applications into the same btrfs volume, but also multiple operating
systems, multiple system instances, multiple runtimes, multiple
frameworks. Or to spell this out in an example:

Let’s say Fedora, Mandriva and ArchLinux all implement this scheme,
and provide ready-made end-user images. Also, the GNOME, KDE, SDL
projects all define a runtime+framework to develop against. Finally,
both LibreOffice and Firefox provide their stuff according to this
scheme. You can now trivially install all of these into the same btrfs
volume:

  • usr:org.fedoraproject.WorkStation:x86_64:24.7
  • usr:org.fedoraproject.WorkStation:x86_64:24.8
  • usr:org.fedoraproject.WorkStation:x86_64:24.9
  • usr:org.fedoraproject.WorkStation:x86_64:25beta
  • usr:com.mandriva.Client:i386:39.3
  • usr:com.mandriva.Client:i386:39.4
  • usr:com.mandriva.Client:i386:39.6
  • usr:org.archlinux.Desktop:x86_64:302.7.8
  • usr:org.archlinux.Desktop:x86_64:302.7.9
  • usr:org.archlinux.Desktop:x86_64:302.7.10
  • root:revolution:org.fedoraproject.WorkStation:x86_64
  • root:testmachine:org.fedoraproject.WorkStation:x86_64
  • root:foo:com.mandriva.Client:i386
  • root:bar:org.archlinux.Desktop:x86_64
  • runtime:org.gnome.GNOME3_20:3.20.1
  • runtime:org.gnome.GNOME3_20:3.20.4
  • runtime:org.gnome.GNOME3_20:3.20.5
  • runtime:org.gnome.GNOME3_22:3.22.0
  • runtime:org.kde.KDE5_6:5.6.0
  • framework:org.gnome.GNOME3_22:3.22.0
  • framework:org.kde.KDE5_6:5.6.0
  • app:org.libreoffice.LibreOffice:GNOME3_20:x86_64:133
  • app:org.libreoffice.LibreOffice:GNOME3_22:x86_64:166
  • app:org.mozilla.Firefox:GNOME3_20:x86_64:39
  • app:org.mozilla.Firefox:GNOME3_20:x86_64:40
  • home:lennart:1000:1000
  • home:hrundivbakshi:1001:1001

In the example above, we have three vendor operating systems
installed. All of them in three versions, and one even in a beta
version. We have four system instances around, two of them Fedora:
maybe one of them we usually boot from, and the other we run for very
specific purposes in an OS container. We also have the runtimes for
two GNOME releases in multiple versions, plus one for KDE. Then, we
have the development trees for one version of KDE and GNOME around, as
well as two apps, that make use of two releases of the GNOME
runtime. Finally, we have the home directories of two users.

Now, with the name-spacing concepts we introduced above, we can
actually relatively freely mix and match apps and OSes, or develop
against specific frameworks in specific versions on any operating
system. It doesn’t matter if you booted your ArchLinux instance, or
your Fedora one, you can execute both LibreOffice and Firefox just
fine, because at execution time they get matched up with the right
runtime, and all of them are available from all the operating systems
you installed. You get the precise runtime that the upstream vendor of
Firefox/LibreOffice did their testing with. It doesn’t matter anymore
which distribution you run, and which distribution the vendor prefers.

Also, given that the user database is actually encoded in the
sub-volume list, it doesn’t matter which system you boot, the
distribution should be able to find your local users automatically,
without any configuration in /etc/passwd.
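
A hypothetical sketch of that lookup, built on the existing `btrfs subvolume list` command; the parsing here is illustrative, and a real implementation would live in the system's user enumeration machinery (e.g. an NSS module) rather than a script.

```python
# Illustrative only: enumerate users from home:<user>:<uid>:<gid> sub-volume
# names instead of /etc/passwd. `btrfs subvolume list` prints one sub-volume
# per line, with the sub-volume path in the last column.
import subprocess

def enumerate_users(volume_mount: str):
    out = subprocess.run(["btrfs", "subvolume", "list", volume_mount],
                         capture_output=True, text=True, check=True).stdout
    users = []
    for line in out.splitlines():
        path = line.split()[-1]
        if path.startswith("home:"):
            _, user, uid, gid = path.split(":")
            users.append((user, int(uid), int(gid)))
    return users

# e.g. enumerate_users("/vol") -> [('lennart', 1000, 1000), ('hrundivbakshi', 1001, 1001)]
```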

Building Blocks

With this naming scheme, plus the way we can combine sub-volumes at
execution time, we have already come quite far. But how do we actually get these
sub-volumes onto the final machines, and how do we update them? Well,
btrfs has a feature they call “send-and-receive”. It basically allows
you to “diff” two file system versions, and generate a binary
delta. You can generate these deltas on a developer’s machine and then
push them into the user’s system, and he’ll get the exact same
sub-volume too. This is how we envision installation and updating of
operating systems, applications, runtimes, frameworks. At installation
time, we simply deserialize an initial send-and-receive delta into
our btrfs volume, and later, when a new version is released we just
add in the few bits that are new, by dropping in another
send-and-receive delta under a new sub-volume name. And we do it
exactly the same for the OS itself, for a runtime, a framework or an
app. There’s no technical distinction anymore. The underlying
operation for installing apps, runtimes, frameworks, vendor OSes, as well
as the operation for updating them is done the exact same way for all.
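
In terms of the existing btrfs tooling, the vendor and user sides of this could look roughly like the sketch below. The paths, file names and the Python wrapper are hypothetical; a real updater would stream the delta over the network and verify a signature before deserializing anything.

```python
# Minimal sketch of the delivery mechanism described above, expressed with
# the existing `btrfs send` / `btrfs receive` commands. All names are
# placeholders.
import subprocess

def publish_delta(old_snapshot: str, new_snapshot: str, out_file: str) -> None:
    """Vendor side: serialize only what changed between two versions."""
    with open(out_file, "wb") as f:
        subprocess.run(["btrfs", "send", "-p", old_snapshot, new_snapshot],
                       stdout=f, check=True)

def apply_delta(delta_file: str, volume_mount: str) -> None:
    """User side: drop the delta in as a new read-only sub-volume."""
    with open(delta_file, "rb") as f:
        subprocess.run(["btrfs", "receive", volume_mount],
                       stdin=f, check=True)

# e.g. publish_delta("/vol/usr:org.fedoraproject.WorkStation:x86_64:24.8",
#                    "/vol/usr:org.fedoraproject.WorkStation:x86_64:24.9",
#                    "fedora-24.8-to-24.9.btrfs")
#      apply_delta("fedora-24.8-to-24.9.btrfs", "/vol")
```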

Of course, keeping multiple full /usr trees around sounds like an
awful lot of waste; after all, they will contain a lot of very similar
data, since a lot of resources are shared between distributions,
frameworks and runtimes. However, thankfully btrfs actually is able to
de-duplicate this for us. If we add in a new app snapshot, this simply
adds in the new files that changed. Moreover different runtimes and
operating systems might actually end up sharing the same tree.

Even though the example above focuses primarily on the end-user,
desktop side of things, the concept is also extremely powerful in
server scenarios. For example, it is easy to build your own usr
trees and deliver them to your hosts using this scheme. The usr
sub-volumes are supposed to be something that administrators can put
together. After deserializing them into a couple of hosts, you can
trivially instantiate them as OS containers there, simply by adding a
new root sub-volume for each instance, referencing the usr tree you
just put together. Instantiating OS containers hence becomes as easy
as creating a new btrfs sub-volume. And you can still update the images
nicely, get fully double-buffered updates and everything.

And of course, this scheme also applies nicely to embedded
use-cases. Regardless of whether you build a TV, an IVI system or a phone: you
can put together your OS versions as usr trees, and then use
btrfs-send-and-receive facilities to deliver them to the systems, and
update them there.

Many people when they hear the word “btrfs” instantly reply with “is
it ready yet?”. Thankfully, most of the functionality we really need
here is strictly read-only. With the exception of the home
sub-volumes (see below) all snapshots are strictly read-only, and are
delivered as immutable vendor trees onto the devices. They never are
changed. Even if btrfs might still be immature, for this kind of
read-only logic it should be more than good enough.

Note that this scheme also enables doing fat systems: for example,
an installer image could include a Fedora version compiled for x86-64,
one for i386, one for ARM, all in the same btrfs volume. Due to btrfs’
de-duplication they will share as much as possible, and when the image
is booted up the right sub-volume is automatically picked. Something
similar of course applies to the apps too!

This also allows us to implement something that we like to call
Operating-System-As-A-Virus. Installing a new system is little more
than:

  • Creating a new GPT partition table
  • Adding an EFI System Partition (FAT) to it
  • Adding a new btrfs volume to it
  • Deserializing a single usr sub-volume into the btrfs volume
  • Installing a boot loader
  • Rebooting
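
Expressed with today's tooling, those steps could look roughly like the sketch below. The device name, partition sizes and the GPT type code used for the btrfs partition are placeholders (the dedicated type ID mentioned above does not exist yet), and the boot loader step is left as a comment because it is vendor-specific.

```python
# Rough sketch of the "Operating-System-As-A-Virus" install steps, driven
# from Python. Everything here is illustrative, not an actual installer.
import subprocess

def run(*cmd: str) -> None:
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

def replicate_os(disk: str, usr_delta: str) -> None:
    run("sgdisk", "--zap-all", disk)                        # fresh GPT
    run("sgdisk", "-n", "1:0:+512M", "-t", "1:ef00", disk)  # EFI System Partition
    run("sgdisk", "-n", "2:0:0", "-t", "2:8300", disk)      # btrfs partition (type ID TBD)
    run("mkfs.vfat", disk + "1")
    run("mkfs.btrfs", disk + "2")
    run("mount", disk + "2", "/mnt")
    with open(usr_delta, "rb") as f:                        # deserialize the usr sub-volume
        subprocess.run(["btrfs", "receive", "/mnt"], stdin=f, check=True)
    # install a boot loader onto the ESP here (vendor-specific), then reboot

# replicate_os("/dev/sdb", "usr-org.fedoraproject.WorkStation-x86_64-24.9.btrfs")
```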

Now, since the only real vendor data you need is the usr sub-volume,
you can trivially duplicate this onto any block device you want. Let’s
say you are a happy Fedora user, and you want to provide a friend with
his own installation of this awesome system, all on a USB stick. All
you have to do for this is follow the steps above, using your installed
usr tree as source to copy. And there you go! And you don’t have to
be afraid that any of your personal data is copied too, as the usr
sub-volume is the exact version your vendor provided you with. Or in
other words: there’s no distinction anymore between installer images
and installed systems. It’s all the same. Installation becomes
replication, not more. Live-CDs and installed systems can be fully
identical.

Note that in this design apps are actually developed against a single,
very specific runtime, that contains all libraries it can link against
(including a specific glibc version!). Any library that is not
included in the runtime the developer picked must be included in the
app itself. This is similar to how apps on Android declare one very
specific Android version they are developed against. This greatly
simplifies application installation, as there’s no dependency hell:
each app pulls in one runtime, and the app is actually free to pick
which one, as you can have multiple installed, though only one is used
by each app.

Also note that operating systems built this way will never see
“half-updated” systems, as is common when a system is updated using
RPM/dpkg. When updating the system the code will either run the old or
the new version, but it will never see part of the old files and part
of the new files. This is the same for apps, runtimes, and frameworks,
too.

Where We Are Now

We are currently working on a lot of the groundwork necessary for
this. This scheme relies on the ability to monopolize the
vendor OS resources in /usr, which is the key of what I described in
Factory Reset, Stateless Systems, Reproducible Systems & Verifiable Systems
a few weeks back. Then, of course, for the full desktop app concept we
need a strong sandbox that does more than just hide files from the
file system view. After all, with an app concept like the above the
primary interfacing between the executed desktop apps and the rest of the
system is via IPC (which is why we work on kdbus and teach it all
kinds of sand-boxing features), and the kernel itself. Harald Hoyer has
started working on generating the btrfs send-and-receive images based
on Fedora.

Getting to the full scheme will take a while. Currently we have many
of the building blocks ready, but some major items are missing. For
example, we push quite a few problems into btrfs that other solutions
try to solve in user space. One of them is actually
signing/verification of images. The btrfs maintainers are working on
adding this to the code base, but currently nothing exists. This
functionality is essential though to come to a fully verified system
where a trust chain exists all the way from the firmware to the
apps. Also, to make the home sub-volume scheme fully workable we
actually need encrypted sub-volumes, so that the sub-volume’s
pass-phrase can be used for authenticating users in PAM. This doesn’t
exist either.

Working towards this scheme is a gradual process. Many of the steps we
require for this are useful outside of the grand scheme though, which
means we can slowly work towards the goal, and our users can already
benefit from what we are working on as we go.

Also, and most importantly, this is not really a departure from
traditional operating systems:

Each app and each OS sees a traditional Unix hierarchy with
/usr, /home, /opt, /var, /etc. It executes in an environment that is
pretty much identical to how it would be run on traditional systems.

There’s no need to fully move to a system that uses only btrfs and
follows strictly this sub-volume scheme. For example, we intend to
provide implicit support for systems that are installed on ext4 or
xfs, or that are put together with traditional packaging tools such as
RPM or dpkg: if the user tries to install a
runtime/app/framework/os image on a system that doesn’t use btrfs so
far, it can just create a loop-back btrfs image in /var, and push the
data into that. Even we developers will run our stuff like this for a
while; after all, this new scheme is not particularly useful for highly
individualized systems, and we developers usually tend to run
systems like that.

Also note that this is in no way a departure from packaging systems like
RPM or DEB. Even if the new scheme we propose is used for installing
and updating a specific system, it is RPM/DEB that is used to put
together the vendor OS tree initially. Hence, even in this scheme
RPM/DEB are highly relevant, though not strictly as an end-user tool
anymore, but as a build tool.

So Let’s Summarize Again What We Propose

  • We want a unified scheme for how we can install and update OS images,
    user apps, runtimes and frameworks.

  • We want a unified scheme for how you can relatively freely mix OS
    images, apps, runtimes and frameworks on the same system.

  • We want a fully trusted system, where cryptographic verification of
    all executed code can be done, all the way to the firmware, as a
    standard feature of the system.

  • We want to allow app vendors to write their programs against very
    specific frameworks, under the knowledge that they will end up being
    executed with the exact same set of libraries chosen.

  • We want to allow parallel installation of multiple OSes and versions
    of them, multiple runtimes in multiple versions, as well as multiple
    frameworks in multiple versions. And of course, multiple apps in
    multiple versions.

  • We want everything double-buffered (or actually n-ary-buffered), to
    ensure we can reliably update/rollback versions, in particular to
    safely do automatic updates.

  • We want a system where updating a runtime, OS, framework, or OS
    container is as simple as adding in a new snapshot and restarting
    the runtime/OS/framework/OS container.

  • We want a system where we can easily instantiate a number of OS
    instances from a single vendor tree, with zero difference whether
    we boot them on bare metal, in a VM or as a container.

  • We want to enable Linux to have an open scheme that people can use
    to build app markets and similar schemes, not restricted to a
    specific vendor.

Final Words

I’ll be talking about this at LinuxCon Europe in October. I originally
intended to discuss this at the Linux Plumbers Conference (which I
assumed was the right forum for this kind of major plumbing level
improvement), and at linux.conf.au, but there was no interest in my
session submissions there…

Of course this is all work in progress. These are our current ideas we
are working towards. As we progress we will likely change a number of
things. For example, the precise naming of the sub-volumes might look
very different in the end.

Of course, we are developers of the systemd project. Implementing this
scheme is not just a job for the systemd developers. This is a
reinvention of how distributions work, and hence needs great support from
the distributions. We really hope we can trigger some interest by
publishing this proposal now, to get the distributions on board. This
after all is explicitly not supposed to be a solution for one specific
project or one specific vendor; we care about making this open, and
solving it for the generic case, without cutting corners.

If you have any questions about this, you know how you can reach us
(IRC, mail, G+, …).

The future is going to be awesome!

TorrentFreak: The Next-Generation Copyright Monopoly Wars Will Be Much Worse

This post was syndicated from: TorrentFreak and was written by: Rick Falkvinge. Original post: at TorrentFreak

We’ve been manufacturing our own copies of knowledge and culture without a license for quite some time now, a practice known first as mixtaping and then as file-sharing.

Home mass manufacturing of copies of culture and knowledge started some time in the 1980s with the Cassette Tape, the first widely available self-contained unit capable of recording music. It made the entire copyright industry go up in arms and demand “compensation” for activities that were not covered by their manufacturing monopoly, which is why we now pay protection money to the copyright industry in many countries for everything from cellphones to games consoles.

The same industry demanded harsh penalties – criminal penalties – for those who manufactured copies at home without a license rather than buying the expensive premade copies. Over the next three decades, such criminal penalties gradually crept into law, mostly because no politician thinks the issue is important enough to defy anybody on.

A couple of key patent monopolies on 3D printing are expiring as we speak, making next-generation 3D printing much, much higher quality. 3D printers such as this one are now appearing on Kickstarter, “printers” (more like fabs) that use laser sintering and similar technologies instead of layered melt deposit.

We’re now somewhere in the 1980s-equivalent of the next generation of copyright monopoly wars, which is about to spread to physical objects. The copyright industry is bad – downright atrociously cynically evil, sometimes – but nobody in the legislature gives them much thought. Wait until this conflict spreads outside the copyright industry, spreads to pretty much every manufacturing industry.

People are about to be sued out of their homes for making their own slippers instead of buying a pair.

If you think that sounds preposterous, consider that this is exactly what has been going on in the copyright monopoly wars so far, with people manufacturing their own copies of culture and knowledge instead of buying ready-made copies. Legally, there is no difference between that and manufacturing a pair of slippers without a license.

To be fair, a pair of slippers may be covered by more monopolies than just the copyright monopoly (the drawing) – it may be covered by a utility patent monopoly, a design patent monopoly, possibly a geographic indication if it’s some weird type of slipper, and many more arcane and archaic types of monopolies. Of course, people in general can’t tell the difference between a “utility patent”, a “design patent”, a “copyright duplication right”, a “copyright broadcast right”, a “related right”, and so on. To most people, it’s all just “the copyright monopoly” in broad strokes.

Therefore, it’s irrelevant to most people whether the person who gets sued out of their home for fabbing their own slippers from a drawing they found is technically found guilty of infringing the copyright monopoly (maybe) or a design patent (possibly). To 95% or more, it’s just “more copyright monopoly bullshit”. And you know what? Maybe that’s good.

The next generation of wars over knowledge, culture, drawings, information, and data is just around the corner, and it’s going to get much uglier, with more at stake on all sides. We have gotten people elected to parliaments (and kept there) on this conflict just as it stands now. As the divide deepens, and nothing suggests it won’t, people will start to pay more attention.

And maybe, just maybe, that will be the beginning of the end of these immoral and deeply unjust monopolies known as copyrights and patents.

About The Author

Rick Falkvinge is a regular columnist on TorrentFreak, sharing his thoughts every other week. He is the founder of the Swedish and first Pirate Party, a whisky aficionado, and a low-altitude motorcycle pilot. His blog at falkvinge.net focuses on information policy.

Book Falkvinge as speaker?

Source: TorrentFreak, for the latest info on copyright, file-sharing and anonymous VPN services.

TorrentFreak: Patent Allows Watermarking of Already Encrypted Movies

This post was syndicated from: TorrentFreak and was written by: Andy. Original post: at TorrentFreak

While the name Verance might not be particularly well known, the company’s anti-piracy technology is present in millions of DVD and Blu-ray players and the media they play.

Every licensed Blu-ray playback device since 2012 has supported the technology which is designed to limit the usefulness of pirated content. Illicit copies of movies protected by Cinavia work at first, but after a few minutes playback is halted and replaced by a warning notice.

This is achieved by a complex watermarking system that not only protects retail media but also illicit recordings of first-run movies. Now Verance has been awarded a patent for a new watermarking system with fresh aims in mind.

The patent, ‘Watermarking in an encrypted domain’, begins with a description of how encryption can protect multimedia content from piracy during storage or while being transported from one location to another.

“The encrypted content may be securely broadcast over the air, through the Internet, over cable networks, over wireless networks, distributed via storage media, or disseminated through other means with little concern about piracy of the content,” Verance begins.

Levels of security vary, Verance explains, depending on the strength of encryption algorithms and encryption key management. However, at some point content needs to be decrypted in order for it to be processed or consumed, and at this point it is vulnerable to piracy and distribution.

“This is particularly true for multimedia content that must inevitably be converted to audio and/or visual signals (e.g., analog format) in order to reach an audience,” Verance explains.

While the company notes that at this stage content is vulnerable to copying, solutions are available to help protect against what it describes as the “analog hole”. As the creator of Cinavia, it’s no surprise Verance promotes watermarking.

“Digital watermarking is typically referred to as the insertion of auxiliary information bits into a host signal without producing perceptible artifacts,” Verance explains.

In other words, content watermarked effectively will carry such marks regardless of further distribution, copying techniques, or deliberate attacks designed to remove them. Cinavia is one such example, the company notes.
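
As a toy illustration of that definition, and emphatically not Verance’s patented method, here is a minimal Python sketch that hides a short bit string in the least-significant bits of 16-bit audio samples. The payload and sample data are invented; a real forensic mark like Cinavia is designed to be far more robust.

    import numpy as np

    def embed_bits(samples, bits):
        """Overwrite the least-significant bit of the first len(bits) samples."""
        marked = samples.copy()
        for i, bit in enumerate(bits):
            marked[i] = (int(marked[i]) & ~1) | int(bit)
        return marked

    def extract_bits(samples, count):
        return "".join(str(int(samples[i]) & 1) for i in range(count))

    host = np.random.randint(-2**15, 2**15, size=1024, dtype=np.int16)  # stand-in "audio"
    payload = "10110010"   # e.g. a distributor or transaction ID
    marked = embed_bits(host, payload)
    assert extract_bits(marked, len(payload)) == payload
    # The marked signal differs from the host by at most one LSB per sample,
    # which is inaudible; unlike Cinavia, though, this naive mark would not
    # survive lossy re-encoding or an analog capture.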

However, Verance admits that watermarking has limitations. In a supply chain, for example, the need to watermark already encrypted content can trigger time-intensive operations. For this, the company says it has a solution.

Verance has come up with a system with the ability to insert watermarks into content that has already been compressed and encrypted, without the need for decryption, decompression, or subsequent re-compression and re-encryption.

In terms of an application, Verance describes an example workflow in which movie content could be watermarked and then encrypted in order to protect it during distribution. The system has the ability to further watermark encrypted content as it passes through various supply chain stages and locations without compromising its security.

“In a forensic tracking application, a digital movie, after appropriate post production processing, may be encrypted at the movie studio or post production house, and sent out for distribution to movie theaters, to on-line retailers, or directly to the consumer,” Verance explains.

“In such applications, it is often desired to insert forensic or transactional watermarks into the movie content to identify each entity or node in the distribution channel, including the purchasers of the content, the various distributors of the content, the presentation venue and the time/date/location of each presentation or purchase.”

Verance believes that being able to track distribution points, sales locations such as movie theaters or stores, and even end users will be a big plus to adopters. Those up to the complex analysis can see how the company intends to work its magic by viewing its extremely technical and lengthy patent.

Source: TorrentFreak, for the latest info on copyright, file-sharing and anonymous VPN services.

TorrentFreak: “Six Strikes” Anti-Piracy Warnings Double This Year

This post was syndicated from: TorrentFreak and was written by: Ernesto. Original post: at TorrentFreak

February last year, five U.S. Internet providers started sending Copyright Alerts to customers who use BitTorrent to pirate movies, TV-shows and music.

These efforts are part of the Copyright Alert System, an anti-piracy plan that aims to educate the public. Through a series of warnings suspected pirates are informed that their connections are being used to share copyrighted material without permission, and told where they can find legal alternatives.

During the first ten months of the program, more than 1.3 million anti-piracy alerts were sent out. That was just a ramp-up phase though. This year the number of alerts will grow significantly.

“The program doubles in size this year,” says Jill Lesser, Executive Director of the overseeing Center for Copyright Information (CCI).

Lesser joined a panel at the Technology Policy Institute’s Aspen Forum where the Copyright Alert System was the main topic of discussion. While the media has focused a lot on the punishment side, Lesser notes that the main goal is to change people’s norms and regain their respect for copyright.

“The real goal here is to shift social norms and behavior. And to almost rejuvenate the notion of the value of copyright that existed in the world of books and vinyl records,” Lesser said.

The notifications are a “slap on the wrist” according to Lesser, but one which is paired with information explaining where people can get content legally.

In addition to sending more notices, the CCI will also consider adding more copyright holders and ISPs to the mix. Thus far the software and book industries have been left out, for example, and the same is true for smaller Internet providers.

“We’ve had lots of requests from content owners in other industries and ISPs to join, and how we do that is I think going to be a question for the year coming up,” Lesser noted.

Also present at the panel was Professor Chris Sprigman, who noted that the piracy problem is often exaggerated by copyright holders. Among other things, he gave various examples of how creative output has grown in recent years.

“This problem has been blown up into something it’s not. Do I like piracy? Not particularly. Do I think it’s a threat to our creative economy? Not in any area that I’ve seen,” Sprigman noted.

According to the professor, the Copyright Alert System is very mild and incredibly easy to evade, which is a good thing in his book.

The professor believes that it’s targeted at casual pirates, telling them that they are being watched. This may cause some to sign up for a VPN or proxy, but others may in fact change their behavior in the long run.

“Do I think that this is a solution to the piracy problem? No. But I think this is a way of reducing the size of it over time, possibly changing social norms over time. That could be productive. Not perfect but an admirable attempt,” Sprigman said.

Just how effective this attempt will be at changing people’s piracy habits is something that has yet to be seen.

Source: TorrentFreak, for the latest info on copyright, file-sharing and anonymous VPN services.

Schneier on Security: Cell Phone Kill Switches Mandatory in California

This post was syndicated from: Schneier on Security and was written by: schneier. Original post: at Schneier on Security

California passed a kill-switch law, meaning that all cell phones sold in California must have the capability to be remotely turned off. It was sold as an antitheft measure. If the phone company could remotely render a cell phone inoperative, there would be less incentive to steal one.

I worry more about the side effects: once the feature is in place, it can be used by all sorts of people for all sorts of reasons.

The law raises concerns about how the switch might be used or abused, because it also provides law enforcement with the authority to use the feature to kill phones. And any feature accessible to consumers and law enforcement could be accessible to hackers, who might use it to randomly kill phones for kicks or revenge, or to perpetrators of crimes who might — depending on how the kill switch is implemented — be able to use it to prevent someone from calling for help.

“It’s great for the consumer, but it invites a lot of mischief,” says Hanni Fakhoury, staff attorney for the Electronic Frontier Foundation, which opposes the law. “You can imagine a domestic violence situation or a stalking context where someone kills [a victim's] phone and prevents them from calling the police or reporting abuse. It will not be a surprise when you see it being used this way.”

I wrote about this in 2008, more generally:

The possibilities are endless, and very dangerous. Making this work involves building a nearly flawless hierarchical system of authority. That’s a difficult security problem even in its simplest form. Distributing that system among a variety of different devices — computers, phones, PDAs, cameras, recorders — with different firmware and manufacturers, is even more difficult. Not to mention delegating different levels of authority to various agencies, enterprises, industries and individuals, and then enforcing the necessary safeguards.

Once we go down this path — giving one device authority over other devices — the security problems start piling up. Who has the authority to limit functionality of my devices, and how do they get that authority? What prevents them from abusing that power? Do I get the ability to override their limitations? In what circumstances, and how? Can they override my override?

The law only affects California, but phone manufacturers won’t sell two different phones. So this means that all cell phones will eventually have this capability. And, of course, the procedural controls and limitations written into the California law don’t apply elsewhere.
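
To make the authority problem concrete, here is a minimal hypothetical Python sketch, not any carrier’s or any law’s actual protocol: a handset that only honors a kill command signed by a key on its trusted list. Everything the questions above ask about, who gets a key, how delegation and revocation work, whether the owner can override, is exactly what this sketch leaves out.

    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

    # Stand-in for whichever authority the handset is told to trust.
    authority_key = Ed25519PrivateKey.generate()
    TRUSTED_KILL_KEYS = [authority_key.public_key()]

    def should_brick(command, signature):
        """Honor the kill command only if a trusted authority signed it."""
        for key in TRUSTED_KILL_KEYS:
            try:
                key.verify(signature, command)
                return True
            except InvalidSignature:
                continue
        return False

    cmd = b"KILL device=example-handset-01"
    assert should_brick(cmd, authority_key.sign(cmd))   # properly signed: obeyed
    assert not should_brick(cmd, b"\x00" * 64)          # forged signature: ignored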

Linux How-Tos and Linux Tutorials: Photo Editing on Linux with Krita

This post was syndicated from: Linux How-Tos and Linux Tutorials and was written by: Carla Schroder. Original post: at Linux How-Tos and Linux Tutorials

Figure 1: Annabelle

Krita is a wonderful drawing and painting program, and it’s also a nice photo editor. Today we will learn how to add text to an image, and how to selectively sharpen portions of a photo.

Navigating Krita

Like all image creation and editing programs, Krita contains hundreds of tools and options, and redundant controls for exposing and using them. It’s worth taking some time to explore it and to see where everything is.

The default theme for Krita is a dark theme. I’m not a fan of dark themes, but fortunately Krita comes with a nice batch of themes that you can change anytime in the Settings > Theme menu.

Krita uses docking tool dialogues. Check Settings > Show Dockers to see your tool docks in the right and left panes, and Settings > Dockers to select the ones you want to see. The individual docks can drive you a little bit mad, as some of them open in a tiny squished aspect so you can’t see anything. You can drag them to the top and sides of your Krita window, enlarge and shrink them, and you can drag them out of Krita to any location on your computer screen. If you drop a dock onto another dock they automatically create tabs.

When you have arranged your perfect workspace, you can preserve it in the “Choose Workspace” picker. This is a button at the right end of the Brushes and Stuff toolbar (Settings > Toolbars Shown). This comes with a little batch of preset workspaces, and you can create your own (figure 2).

Figure 2: Workspaces

Krita has multiple zoom controls. Ctrl+= zooms in, Ctrl+- zooms out, and Ctrl+0 resets to 100%. You can also use the View > Zoom controls, and the zoom slider at the bottom right. There is also a dropdown zoom menu to the left of the slider.

The Tools menu sits in the left pane, and this contains your shape and selection tools. You have to hover your cursor over each tool to see its label. The Tool Options dock always displays options for the current tool you are using, and by default it sits in the right pane.

Crop Tool

Of course there is a crop tool in the Tools dock, and it is very easy to use. Draw a rectangle that contains the area you want to keep, use the drag handles to adjust it, and press the Return key. In the Tools Options dock you can choose to apply the crop to all layers or just the current layer, adjust the dimensions by typing in the size values, or size it as a percentage.

Adding Text

When you want to add some simple text to a photo, such as a label or a caption, Krita may leave you feeling overwhelmed because it contains so many artistic text effects. But it also supports adding simple text. Click the Text tool, and the Tool Options dock looks like figure 3.

Figure 3: The Text tool options

Click the Multiline button. This opens the simple text tool; first draw a rectangle to contain your text, then start typing your text. The Tool Options dock has all the usual text formatting options: font selector, font size, text and background colors, alignment, and a bunch of paragraph styles. When you’re finished click the Shape Handling tool, which is the white arrow next to the Text tool button, to adjust the size, shape, and position of your text box. The Tool Options for the Shape Handling tool include borders of various thicknesses, colors, and alignments. Figure 4 shows the gleeful captioned photo I send to my city-trapped relatives.

Figure 4: Front door

How to edit your existing text isn’t obvious. Click the Shape Handling tool, and double-click inside the text box. This opens editing mode, which is indicated by the text cursor. Now you can select text, add new text, change formatting, and so on.

Sharpening Selected Areas

Krita has a number of nice tools for making surgical edits. In figure 5 I want to sharpen Annabelle’s face and eyes. (Annabelle lives next door, but she has a crush on my dog and spends a lot of time here. My dog is terrified of her and runs away, but she is not discouraged.) First select an area with the “Select an area by its outline” tool. Then open Filter > Enhance > Unsharp Mask. You have three settings to play with: Half-Size, Amount, and Threshold. Most image editing software has Radius, Amount, and Threshold settings. A radius is half of a diameter, so Half-Size is technically correct, but perhaps needlessly confusing.

Figure 5: Annabelle

The Half-Size value controls the width of the sharpening lines. You want a large enough value to get a good effect, but not so large that it’s obvious.

The Threshold value determines how different two pixels need to be for the sharpening effect to be applied. 0 = maximum sharpening, and 99 is no sharpening.

Amount controls the strength of the sharpening effect; higher values apply more sharpening.

Sharpening is nearly always the last edit you want to make to a photo, because it is affected by anything else you do to your image: crop, resize, color and contrast… if you apply sharpening first and then make other changes it will mess up your sharpening.

And what, you ask, does unsharp mask mean? The name comes from the sharpening technique: the unsharp mask filter creates a blurred mask of the original, and then layers the unsharp mask over the original. This creates an image that appears sharper and clearer without creating a lot of obvious sharpening artifacts.
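
If you want to see the same blur-and-subtract idea outside of Krita, here is a minimal Python sketch using Pillow and NumPy. It is not Krita’s own code; the parameter names simply mirror Krita’s dialog (half_size for the blur radius, amount, threshold) for illustration.

    import numpy as np
    from PIL import Image, ImageFilter

    def unsharp_mask(path, half_size=2.0, amount=0.5, threshold=4):
        image = Image.open(path).convert("RGB")
        original = np.asarray(image, dtype=np.float32)

        # The "unsharp mask" itself: a blurred copy of the original.
        blurred = np.asarray(
            image.filter(ImageFilter.GaussianBlur(radius=half_size)),
            dtype=np.float32,
        )

        # Edges are where the original and the blur differ.
        difference = original - blurred

        # Threshold: only boost pixels that differ enough from their neighbors.
        edges = np.abs(difference) > threshold

        sharpened = original + amount * difference * edges
        return Image.fromarray(np.clip(sharpened, 0, 255).astype(np.uint8))

    # Example: sharpen a face crop exported from Krita, then re-import it.
    # unsharp_mask("face.png", half_size=2, amount=0.6, threshold=4).save("sharp.png")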

That is all for today. The documentation for Krita is abundant, but disorganized. Start at Krita Tutorials, and poke around YouTube for a lot of good video how-tos.

Move Over GIMP, Here Comes Krita 
Professional Graphics Creation on Linux 
Build A Serious Multimedia Production Workstation With Arch Linux

Raspberry Pi: Learn to solder with Carrie Anne

This post was syndicated from: Raspberry Pi and was written by: Liz Upton. Original post: at Raspberry Pi

Carrie Anne Philbin, our Education Pioneer and author of the most excellent Adventures in Raspberry Pi, had a dark secret up until last week. She was a Raspberry Pi enthusiast who didn’t know how to solder.

Here’s a new video from her award-winning Geek Gurl Diaries YouTube channel, in which she addresses the issue.

Carrie says:

Everyone tells me that soldering is easy. For a long time I’ve seen it as a barrier to be able to do a lot of electronics or maker style crafts. I usually try and buy components that are pre-soldered or ask someone else to solder them for me. Since joining Raspberry Pi, this has been a bit of a joke for the engineers. They think I’m a bit silly. I’m certain I’m not alone in this.

Recently Wednesdays at Pi Towers have become ‘Gert Wednesdays’, when Gert comes into the office to visit us and teach some of us new skills. Gert Van Loo is an engineer, and one of the first volunteers working on Raspberry Pi. He has also created lots of add-on boards for Raspberry Pi, like the well-titled ‘Gertboard’. He has also created the ‘Gertduino’. You can see where Gert Wednesdays come from, can’t you. Gert promised me he’d teach me how to solder and he didn’t disappoint. In one morning of simple tuition I was taught how to solder. THANKS GERT!

I decided that after I solder ALL THE THINGS in my office drawer that I’ve been dying to use with my Raspberry Pi, I would put my new found knowledge to good use by creating a tutorial video for GGD. It’s a short video but I hope it will help give other people the confidence to start or at least to attend a Maker Faire event where they can learn.

You can read much more from Carrie Anne on the Geek Gurl Diaries blog.

Anchor Managed Hosting: If your server is a function, is your company a library?

This post was syndicated from: Anchor Managed Hosting and was written by: Jessica Field. Original post: at Anchor Managed Hosting

Our Anchorite, Head of Engineering Andrew Cowie, will be giving a talk next week at the Commercial Users of Functional Programming conference in Gothenburg, Sweden. The abstract for the talk, titled “If your server is a function, is your company a library?”, is below. It gives an interesting perspective on some of the cool stuff our Engineering department is up to.

====

Haskell is lauded for being a good foundation for building high-quality software. The strong type system eliminates huge classes of runtime errors, laziness-forced purity aids in separating messy IO from pure computational work, and the wealth of tools like quickcheck means that individual codebases can be robustly unit tested.

That’s fine when your service runs from a single program. Building anything larger requires integration testing of components and that can be difficult when it’s a distributed system.

Over the past year, one of our projects has been building a data vault for system metrics. It has all the usual suspects: client side code in various languages, message queues and brokers, daemons ingesting data, more daemons to read data out again, analytics tools, and so on.

Testing all this was challenging; in essence we didn’t have any integration testing and overall-function testing was often manual.

As we embarked on a rewrite, a small change was made: the message broker was rewritten in Haskell and the daemon code was made a library. Soon other formerly independent Haskell programs were turned into libraries and linked in too. And before we knew it, we had the entire distributed system able to be tested as a single binary being quickchecked end-to-end. This is very cool.
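
The talk is about Haskell and QuickCheck, but the underlying move, turning daemons into libraries so the whole pipeline can be property-tested in one process, translates to other stacks too. Here is a rough Python analogue (invented, not the project’s code) using the hypothesis library: the “ingest” and “read” daemons become plain functions composed in-process, and a round-trip property is checked against generated data.

    from hypothesis import given, strategies as st

    STORE = {}   # stand-in for the metrics data vault

    def ingest(origin, points):
        """What the write-side daemon would do, exposed as a library call."""
        STORE.setdefault(origin, []).extend(points)

    def read_back(origin):
        """What the read-side daemon would do, exposed as a library call."""
        return list(STORE.get(origin, []))

    @given(st.lists(st.integers()))
    def test_round_trip(points):
        STORE.clear()
        ingest("host1", points)                 # end-to-end: write path...
        assert read_back("host1") == points     # ...then read path, no brokers needed

    test_round_trip()   # hypothesis runs the property over many generated lists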

This has, however, raised a bigger possibility: if each project is a library, can you express the entire company as the composition of these various libraries in a single executable? We have people working on internal APIs, monitoring systems, and infrastructure management tools.
These projects are all in a state of flux, but bringing the power of the Haskell compiler and type system to bear on the ecosystem as a whole has already improved interaction between teams.

There are a number of hard problems. Not everything is written in Haskell. Many systems have external dependencies (databases, third party web services, etc). This talk will describe our approaches to each of these, and our progress in abstracting the overall idea further.

====

[Pop culture reference: last year, Twitter released a paper describing how they compose services, titled "Your Server as a Function", Proceedings of the Seventh Workshop on Programming Languages and Operating Systems (PLOS '13), Article No. 5]

The post If your server is a function, is your company a library? appeared first on Anchor Managed Hosting.

TorrentFreak: MPAA Research: Blocking The Pirate Bay Works, So…..

This post was syndicated from: TorrentFreak and was written by: Ernesto. Original post: at TorrentFreak

Website blocking has become one of the favorite anti-piracy tools of the entertainment industries in recent years.

The UK is a leader on this front, with the High Court ordering local ISPs to block access to dozens of popular file-sharing sites, including The Pirate Bay and KickassTorrents.

Not everyone is equally excited about these measures and researchers have called their effectiveness into question. This prompted a Dutch court to lift The Pirate Bay blockade a few months ago. The MPAA, however, hopes to change the tide and prove these researchers wrong.

Earlier today Hollywood’s anti-piracy wish list was revealed through a leaked draft various copyright groups plan to submit to the Australian Government. Buried deep in the report is a rather intriguing statement that refers to internal MPAA research regarding website blockades.

“Recent research of the effectiveness of site blocking orders in the UK found that visits to infringing sites blocked declined by more than 90% in total during the measurement period or by 74.5% when proxy sites are included,” it reads.

MPAA internal research

In other words, MPAA’s own data shows that website blockades do help to deter piracy. Without further details on the methodology it’s hard to evaluate the findings, other than to say that they conflict with previous results.

But there is perhaps an even more interesting angle to the passage than the results themselves.

Why would the MPAA take an interest in the UK blockades when Hollywood has its own anti-piracy outfit (FACT) there? Could it be that the MPAA is planning to push for website blockades in the United States?

This is not the first sign to point in that direction. Two months ago MPAA boss Chris Dodd said that ISP blockades are one of the most effective anti-piracy tools available.

Combine the above with the fact that the United States is by far the biggest traffic source for The Pirate Bay, and slowly the pieces of the puzzle begin to fall into place.

It seems only a matter of time before the MPAA makes a move towards website blocking in the United States. Whether that’s through a voluntary agreement or via the courts, something is bound to happen.

Source: TorrentFreak, for the latest info on copyright, file-sharing and anonymous VPN services.

TorrentFreak: Leaked Draft Reveals Hollywood’s Anti-Piracy Plans

This post was syndicated from: TorrentFreak and was written by: Andy. Original post: at TorrentFreak

As the discussions over the future of anti-piracy legislation in Australia continue, a draft submission has revealed the wish-list of local movie groups and their Hollywood paymasters.

The draft, a response to a request by Attorney-General George Brandis and Communications Minister Malcolm Turnbull for submissions on current anti-piracy proposals, shows a desire to apply extreme pressure to local ISPs.

The authors of the draft (obtained by Crikey, subscription required) are headed up by the Australia Screen Association, the anti-piracy group previously known as AFACT. While local company Village Roadshow is placed front and center, members including the Motion Picture Association, Disney, Paramount, Sony, Twentieth Century Fox, Universal and Warner make for a more familiar read.

Australian citizens – the world’s worst pirates

The companies begin with scathing criticism of the Australian public, branding them the world’s worst pirates, despite the ‘fact’ that content providers “have ensured the ready availability of online digital platforms and education of consumers on where they can acquire legitimate digital content.” It’s a bold claim that will anger many Australians, who even today feel like second-class consumers who have to wait longer and pay more for their content.

So what can be done about the piracy problem?

The draft makes it clear – litigation against individuals isn’t going to work and neither is legal action against “predominantly overseas” sites. The answer, Hollywood says, can be found in tighter control of what happens on the Internet.

Increased ISP liability

In a nutshell, the studios are still stinging over their loss to ISP iiNet in 2012. So now, with the help of the government, they hope to introduce amendments to copyright law in order to remove service providers’ safe harbor if they even suspect infringement is taking place on their networks but fail to take action.

“A new provision would deem authorization [of infringement] to occur where an ISP fails to take reasonable steps – which are also defined inclusively to include compliance with a Code or Regulations – in response to infringements of copyright it knows or reasonably suspects are taking place on its network,” the draft reads.

“A provision in this form would provide great clarity around the steps that an ISP would be required to take to avoid a finding of authorization and provide the very kind of incentive for the ISP to cooperate in the development of a Code.”

With “incentives” in place for them to take “reasonable steps”, ISPs would be expected to agree to various measures (outlined by a ‘Code’ or legislation) to “discourage or reduce” online copyright infringement in order to maintain their safe harbor. It will come as no surprise that subscriber warnings are on the table.

‘Voluntary’ Graduated Response

“These schemes, known as ‘graduated response schemes’, are based on a clear allocation of liability to ISPs that do not (by complying with the scheme) take steps to address copyright infringement by their users,” the studios explain.

“While this allocation of liability does not receive significant attention in most discussions of graduated response schemes, common sense dictates that the schemes would be unlikely to exist (much less be complied with by ISPs) in the absence of this basic incentive structure.”

While pointing out that such schemes are in place in eight countries worldwide, the movie and TV companies say that a number of them contain weaknesses, a trap that Australia must avoid.

“There are flaws in a number of these models, predominantly around the allocation of costs and lack of effective mitigation measures which, if mirrored in Australia, would make such a scheme ineffective and unlikely to be used,” the paper reads.

It appears that the studios believe that the US model, the Copyright Alerts System (CAS), is what Australia should aim for since it has “effective mitigation measures” and they don’t have to foot the entire bill.

“Copyright owners would pay their own costs of identifying the infringements and notifying these to the ISP, while ISPs would bear the costs of matching the IP addresses in the infringement notices to subscribers, issuing the notices and taking any necessary technical mitigation measures,” they explain.

In common with the CAS in the United States, providers would be allowed discretion on mitigation measures for persistent infringers. However, the studios also imply that ISPs’ ‘power to prevent’ piracy should extend to the use of customer contracts.

“[Power] to prevent piracy would include both direct and indirect power and definitions around the nature of the relationship which would recognize the significance of contractual relationships and the power that they provide to prevent or avoid online piracy,” they write.

Voluntary agreements, required by law, one way or another

The key is to make ISPs liable first, the studios argue, then negotiations on a “voluntary” scheme should fall into place.

“Once the authorization liability scheme is amended to make clear that ISPs will be liable for infringements of copyright by their subscribers which they know about but do not take reasonable steps to prevent or avoid, an industry code prescribing the content of those ‘reasonable steps’ is likely to be agreed between rightsholders and ISPs without excessively protracted negotiations.”

However, any failure by the ISPs to come to the table voluntarily should be met by legislative change.

“In the absence of any current intention of and incentive for ISPs in Australia to support such a scheme (and the strong opposition from some ISPs) legislative recognition of the reasonable steps involved in such a scheme is necessary,” they write.

Site blocking

Due to “weakness” in current Australian law in respect of ISP liability, site blocking has proved problematic. What the studios want is a “no-fault” injunction (similar to the model in Ireland) which requires ISPs to block sites like The Pirate Bay without having to target the ISPs themselves.

“Not being the target of a finding against it, an ISP is unlikely to oppose the injunction – as long as the procedural requirements for the injunction are met. Once made, a blocking injunction would immediately prevent Australian internet users from being tempted to or accessing the blocked sites,” the studios explain.

Despite The Pirate Bay doubling its traffic in the face of extensive blocking across Europe, the movie companies believe that not blocking in Australia is part of the problem.

“The absence of a no-fault procedure may explain the very high rates of film and TV piracy in Australia when compared with European countries that have such a procedure,” they write.

Unsurprisingly, the studios want to keep the bar low when it comes to such injunctions.

“The extended injunctive relief provision should not require the Court to be satisfied that the dominant purpose of the website is to infringe copyright,” they urge.

“Raising the level of proof in this way would severely compromise the effectiveness of the new provision in that it would become significantly more difficult for rightsholders to obtain an injunction under the scheme: allegedly non-infringing content would be pointed to in each case, not for reasons of freedom of access to information on the internet, but purely as a basis to defeat the order.”

The studios also want the ISPs to pick up the bill on site-blocking.

“[Courts in Europe] have ordered the costs of site blocking injunctions be borne by the ISP. The Australian Film/TV Bodies submit that the same position should be adopted in Australia, especially as it is not likely that the evidence would be any different on a similar application here,” they add.

Conclusion

If the studios get everything they’ve asked for in Australia, the ensuing framework could become the benchmark for models of the future. There’s still a long way to go, however, and some ISPs – iiNet in particular – won’t be an easy nut to crack.

Source: TorrentFreak, for the latest info on copyright, file-sharing and anonymous VPN services.

Anchor Managed Hosting: Startmate 2015: Accelerating Australian Tech Start-ups

This post was syndicated from: Anchor Managed Hosting and was written by: Jessica Field. Original post: at Anchor Managed Hosting


The 2015 Startmate accelerator program is now open for applications from budding start-up entrepreneurs looking for a leg-up. Globally recognised, the Australian Startmate program has helped many tech start-ups become established and build that necessary momentum.

One side effect of the rapid evolution in digital technology is the vibrant tech start-up industry. Newspapers love to report on new start-ups that are suddenly acquired for astronomical sums because of the freshness of the idea and the elegance of the solution.

Of course, the main aim of most tech start-ups is to achieve a stable and growing business model, instead of chasing the unrealistic hope of mega deals and digital stardom. Yet, for every Instagram or Dropbox, there are hundreds of otherwise stunning ideas that never even make it to launch.

Turning an idea into a successful start-up is not easy. It requires support, funding and a head for business as well as technology innovation. This is why start-up accelerator programs like Startmate are so important.

Vero and Startmate

“If you have an idea or have been working on something you want to take to the next level, this is the most risk free and effective platform that I’ve seen,” says Chris Hexton of Vero (getvero.com).

Chris originally had a different idea and direction: developing invoicing software. But it was Chris’ involvement in the Startmate program that prompted a switch to something more commercially viable. The result was Vero — an email platform that tracks customer actions to send more relevant and personalised emails. The team quickly grew to five, working hard to establish the business without relying on external funding.

The Vero team gained the most value from working with the Startmate mentors. Each mentor is invited to contribute to the program because of his or her own start-up success, including the founders of tech brands like Atlassian, Spreets and Business Catalyst. Vero’s mentors each brought a different perspective and set of opinions to the project, which all helped Chris and the Vero team to make better, more informed and insightful decisions.

Andrew Rogers, founder of Anchor, is also a Startmate mentor. With Anchor supplying Vero with managed hosting services, Andrew has continued to follow their progress, helping Chris and the team with advice and support as the business has grown.

While the mentors provide the necessary experience and reality-checks, the other participants are also highly supportive and encouraging. With eight eager, embryonic tech companies in one room, the enthusiasm and passion is highly competitive and motivating. This is a shared experience, where individual success is felt by the entire group.

“Be passionate about your idea and your product, but don’t get too emotionally attached in the early validation stages,” says Chris. He credits the Startmate program with providing the insights his team needed to make the right decision and change direction. However, he cautions against losing sight of your core idea. “Listen to everyone, but apply your own grain of salt. You know your business better than anyone, so trust your instincts because they are what got you there in the first place.”

ScriptRock and Startmate

ScriptRock (scriptrock.com) is another Startmate alumnus, joining the program in its second year, 2012. Alan and Mike were chosen for the program because of their direct experience of the problem their idea sought to solve. The duo’s deep insight into the customer issue and their commitment to the core idea made them ideal candidates for start-up acceleration.

Alan and Mike both started out as IT infrastructure consultants for the banking sector. They realised there was a more efficient and cost effective way to provide the same system configuration and testing services. The automated platform became ScriptRock.

Both Alan and Mike quit their day jobs to work full time on the new project, relocating to California. Their willingness to take a pay cut and a career risk has come back to reward them. In August 2014, ScriptRock announced $8.7 million in funding from August Capital, Valar Ventures and Squarepeg Capital.

ScriptRock and Vero are not unique. Similar stories emerge every year from the Startmate program. Could 2015 be the start of your tech success story?

Apply now for Startmate 2015

Applications are now open for the 2015 Startmate accelerator program. To qualify, your business must have at least one technical founder in your team of up to five members.

If you think your product or business idea has the potential to go global, make sure you apply before September 30th, 2014.

The post Startmate 2015: Accelerating Australian Tech Start-ups appeared first on Anchor Managed Hosting.

LWN.net: [$] Visual legerdemain abounds in G’MIC 1.6.0

This post was syndicated from: LWN.net and was written by: n8willis. Original post: at LWN.net

A new stable release of the G’MIC image-processing
framework was recently released. Version 1.6.0 adds a number of new commands and filters useful for manipulating image data, as well as
changes to the codebase that will hopefully make G’MIC easier to
integrate into other applications.

Click below (subscribers only) for a look at the G’MIC 1.6.0 release and
associated GIMP plugin.

Linux How-Tos and Linux Tutorials: How to Install the Netflix Streaming Client On Linux

This post was syndicated from: Linux How-Tos and Linux Tutorials and was written by: Jack Wallen. Original post: at Linux How-Tos and Linux Tutorials


Netflix is one of the biggest video streaming services on the planet. You’ll find movies, television, documentaries, and more streamed to mobile devices, televisions, laptops, desktops, and much more. What you won’t find, however, is an official Linux client for the service. This is odd, considering Netflix so heavily relies upon FreeBSD.

This is Linux, though, so as always the adage ‘Where there’s a will, there’s a way’ very much applies. With just a few quick steps, you can have a Netflix client on your desktop. This client does require the installation of the following extras:

  • Wine

  • Mono

  • msttcorefonts

  • Gecko

I will walk you through the installation of this on a Ubuntu 14.04 desktop. I have also tested this same installation on both Linux Mint and Deepin – all with the same success. If you like living on the bleeding edge, you can get the full Netflix experience, without having to go through the steps I outline here. For that, you must be running the latest developer or beta release of Google Chrome with the Ubuntu 14.04 distribution. NOTE: You will also have to upgrade libnss3 (32 bit or 64 bit). Once you’ve installed all of that, you then have to modify the user-agent string of the browser so Netflix thinks you are accessing its services with a supported browser. The easiest way to do this is to install the User Agent Switcher Extension. The information you’ll need for the HTTP string is:

  • Name: Netflix Linux

  • String: Mozilla/5.0 (Windows NT 6.3; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/38.0.2114.2 Safari/537.36

  • Group: (is filled in automatically)

  • Append?: Select ‘Replace’

  • Flag: IE

If dealing with bleeding edge software and user agent strings isn’t for you, the method below works like a champ. The majority of this installation will happen through the command line, so be prepared to either type or cut and paste. Let’s begin.

Installing the repository and preparing apt-get

The first thing you must do is open up a terminal window. Once that is opened, issue the following commands to add the correct repository and update apt-get.

  • sudo apt-add-repository ppa:ehoover/compholio

  • sudo apt-get update

Now, you’re ready to start installing software. There are two pieces of software to be installed. The first is the actual Netflix Desktop app. The second is the msttcorefonts package that cannot be installed by the Netflix Desktop client (all other dependencies are installed through the Netflix Desktop client). The two commands you need to issue are:

  • sudo apt-get install netflix-desktop

  • sudo apt-get install msttcorefonts

The installation of the netflix-desktop package will take some time (as there are a number of dependencies it must first install). Once that installation completes, install the msttcorefonts package and you’re ready to continue.

First run

You’re ready to fire up the Netflix Desktop Client. To do this (in Ubuntu), open up the Dash and type netflix. When you see the launcher appear, click on it to start the client. When you first run the Netflix Desktop Client you will be required to first install Mono. Wine will take care of this for you, but you do have to okay the installer. When prompted, click Install (Figure 1) and the Wine installer will take care of the rest.

Figure 1: The Wine Mono installer

You will also be prompted to allow Wine to install Gecko. When prompted, click Install to complete this action.

At this point, all you have to do is sign in to Netflix and enjoy streaming content on your Linux desktop. You will notice that the client opens in full screen mode. To switch this to window mode, hit F11 and the client will appear in a window.

Although this isn’t an ideal situation, and there may be those that balk at installing Mono, by following these steps, you can have Netflix streaming video service on your Linux desktop. It works perfectly and you won’t miss a single feature (you can enjoy profiles, searching, rating, and much more).

Linux is an incredible desktop that offers everything the competition has and more. Give this installation of Netflix a go and see if you’re one step closer to dropping the other platforms from your desktop or laptop for good.

TorrentFreak: LA Police: Online Piracy Funds Drug Dealers and Terrorists

This post was syndicated from: TorrentFreak and was written by: Ernesto. Original post: at TorrentFreak

Earlier this month we reported how media conglomerate ABS-CBN is going after several website owners who link to pirated streams of its programming.

The Philippines-based company filed a lawsuit at a federal court in Oregon looking for millions of dollars in damages from two local residents. The court case has barely started but that didn’t prevent ABS-CBN from using its journalistic outlet to taint public opinion.

In a news report released by its American branch, the company slams the defendants, whom it lumps in with hardcore criminals.

The coverage is presented as news but offers no balance. Instead it frames online piracy as a threat to everyone, with billions of dollars in losses that negatively impact America’s education and health care budgets.

But it gets even worse. It’s not just public services that are threatened by online piracy according to the news outlet, national security is at stake as well.

“Piracy actually aids and abets organized crime. Gangs and even terrorist groups have reportedly entered the piracy market because the penalties are much lighter than traditional crimes such as drug dealing – and the profit could be much higher,” ABS-CBN’s senior reporter Henni Espinosa notes.

It’s not the first time that we have heard these allegations. However, for a news organization to present them without context to further its own cause is a line that not even the MPAA and RIAA would dare to cross today.

The Los Angeles County Sheriff’s Department, on the other hand, has also noticed the link with organized crime and terrorism.

“[Piracy is] supporting their ability to buy drugs and guns and engage in violence. And then, the support of global terrorism, which is a threat to everybody,” LA County Assistant Sheriff Todd Rogers tells the news outlet.

Los Angeles County police say that piracy is one of their top priorities. They hope to make the local neighborhoods a little safer by tracking down these pirates and potential terrorists.

“To identify bad guys that we need to take out of the community so the rest of the folks can enjoy their neighborhood and their families,” Rogers concludes.

Since the above might have to sink in for a moment, we turn to the two Oregon citizens who ABS-CBN based the report on. Are Jeff Ashby and his Filipina wife Lenie Ashby really hardcore criminals?

Based on public statistics the five sites they operated barely had any visitors. According to Jeff he created them for his wife so she could enjoy entertainment from her home country. He actually didn’t make any copies of the media but merely provided links to other websites.

“I created these websites for my wife, who is from the Philippines, so she and others who are far from the Philippines could enjoy materials from their culture that are otherwise unavailable to them,” Jeff Ashby wrote to the court.

“Since these materials were already on the web, we did not think there would be a problem to simply link to them. No content was ever hosted on our server,” he adds.

The websites were all closed as soon as the Oregon couple were informed about the lawsuit. They regret their mistake and say they didn’t know that it could get them into trouble, certainly not $10 million worth of it.

So are these really the evil drug lords or terrorists the Los Angeles County Sheriff’s Department and ABS-CBN are referring to?

Source: TorrentFreak, for the latest info on copyright, file-sharing and anonymous VPN services.

TorrentFreak: Kim Dotcom Battles to Keep Cash Sources Private

This post was syndicated from: TorrentFreak and was written by: Andy. Original post: at TorrentFreak

Back in 2012, millions of dollars of Megaupload and Kim Dotcom assets were seized in New Zealand and Hong Kong, action designed to bring the Internet entrepreneur financially to his knees.

That hasn’t been the case since, however. Dotcom has continued with his very public displays of wealth, living in one of New Zealand’s most expensive houses, flying around the country in helicopters, and bankrolling a brand new political party.

All this, 20th Century Fox, Disney, Paramount, Universal, Columbia Pictures and Warner Bros insist, is a clear sign that Dotcom is disposing of wealth that will transfer to their hands should they prevail in their legal action against him – if there is any left, of course.

Last month the High Court’s Judge Courtney agreed with the studios and ordered Dotcom to reveal all of his global assets “wherever they are located” and to identify “the nature of his interest in them.”

Needless to say, Dotcom has been putting up a fight, and has filed an appeal which will be heard in the second week of October. However, that date falls way beyond September 5, the date by which Dotcom has to comply with Judge Courtney’s disclosure order.

During a hearing today at the Court of Appeal, Dotcom’s legal team argued that their client should not have to hand over a list of his assets in advance of the October appeal as several legal points needed to be aired during the hearing.

According to Stuff, lawyer Tracey Walker said that the 2012 restraining order covered assets generated before that date, but has no scope moving forward.

“The assets that they are talking about now are new assets that were created because of my entrepreneurial skill after the raid,” Dotcom explained previously.

Dotcom has remained extremely active in the business sector since 2012, helping to create cloud storage service Mega.co.nz and then generating cash by selling shares in the company. The authorities and Hollywood are clearly trying to keep an eye on the money.

In Court, Walker said that since $11.8 million was seized from Dotcom in 2012 and other funds are currently frozen in Hong Kong, the studios have a fund to draw on should they win their case. Revealing more about his current financial situation would breach Dotcom’s privacy, Walker added.

Appearing for the US-based studios, lawyer Jack Hodder said the disclosure order was fully justified.

Ending the hearing, the Court of Appeal reserved its decision on whether Dotcom will have to comply with the High Court ruling and disclose on September 9, or whether he will indeed be able to wait until after the October hearing.

In the meantime the political mudslinging continues in New Zealand, with Kim Dotcom now preparing legal action against controversial blogger Cameron Slater who he accuses of publishing “200 plus smear stories” as part of a “character assassination” campaign handled by the ruling National Party.

Source: TorrentFreak, for the latest info on copyright, file-sharing and anonymous VPN services.

Krebs on Security: DQ Breach? HQ Says No, But Would it Know?

This post was syndicated from: Krebs on Security and was written by: BrianKrebs. Original post: at Krebs on Security

Sources in the financial industry say they’re seeing signs that Dairy Queen may be the latest retail chain to be victimized by cybercrooks bent on stealing credit and debit card data. Dairy Queen says it has no indication of a card breach at any of its thousands of locations, but the company also acknowledges that nearly all stores are franchises and that there is no established company process or requirement that franchisees communicate security issues or card breaches to Dairy Queen headquarters.

I first began hearing reports of a possible card breach at Dairy Queen at least two weeks ago, but could find no corroborating signs of it — either by lurking in shadowy online “card shops” or from talking with sources in the banking industry. Over the past few days, however, I’ve heard from multiple financial institutions that say they’re dealing with a pattern of fraud on cards that were all recently used at various Dairy Queen locations in several states. There are also indications that these same cards are being sold in the cybercrime underground.

The latest report in the trenches came from a credit union in the Midwestern United States. The person in charge of fraud prevention at this credit union reached out wanting to know if I’d heard of a breach at Dairy Queen, stating that the financial institution had detected fraud on cards that had all been recently used at a half-dozen Dairy Queen locations in and around its home state.

According to the credit union, more than 50 customers had been victimized by a blizzard of card fraud just in the past few days alone after using their credit and debit cards at Dairy Queen locations — some as far away as Florida — and that the pattern of fraud suggests the DQ stores were compromised at least as far back as early June 2014.

“We’re getting slammed today,” the fraud manager said Tuesday morning of fraud activity tracing back to member cards used at various Dairy Queen locations in the past three weeks. “We’re just getting all kinds of fraud cases coming in from members having counterfeit copies of their cards being used at dollar stores and grocery stores.”

Other financial institutions contacted by this reporter have seen recent fraud on cards that were all used at Dairy Queen locations in Florida and several other states, including Indiana, Illinois, Kentucky, Ohio, Tennessee, and Texas.

On Friday, Aug. 22, KrebsOnSecurity spoke with Dean Peters, director of communications for the Minneapolis-based fast food chain. Peters said the company had heard no reports of card fraud at individual DQ locations, but he stressed that nearly all of Dairy Queen stores were independently owned and operated. When asked whether DQ had any sort of requirement that its franchisees notify the company in the event of a security breach or problem with their card processing systems, Peters said no.

“At this time, there is no such policy,” Peters said. “We would assist them if [any franchisees] reached out to us about a breach, but so far we have not heard from any of our franchisees that they have had any kind of breach.”

Julie Conroy, research director at the advisory firm Aite Group, said nationwide companies like Dairy Queen should absolutely have breach notification policies in place for franchisees, if for no other reason than to protect the integrity of the company’s brand and public image.

“Without question this is a brand protection issue,” Conroy said. “This goes back to the eternal challenge with all small merchants. Even with companies like Dairy Queen, where the mother ship is huge, each of the individual establishments are essentially mom-and-pop stores, and a lot of these stores still don’t think they’re a target for this type of fraud. By extension, the mother ship is focused on herding a bunch of cats in the form of thousands of franchisees, and they’re not thinking that all of these stores are targets for cybercriminals and that they should have some sort of company-wide policy about it. In fact, franchised brands that have that sort of policy in place are far more the exception than the rule.”

DEJA VU ALL OVER AGAIN?

The situation apparently developing with Dairy Queen is reminiscent of similar reports last month from multiple banks about card fraud traced back to dozens of locations of Jimmy John’s, a nationwide sandwich shop chain that also is almost entirely franchisee-owned. Jimmy John’s has said it is investigating the breach claims, but so far it has not confirmed reports of card breaches at any of its 1,900+ stores nationwide.

The DHS/Secret Service advisory.

Rumblings of a card breach involving at least some fraction of Dairy Queen’s 4,500 domestic, independently-run stores come amid increasingly vocal warnings from the U.S. Department of Homeland Security and the Secret Service, which last week said that more than 1,000 American businesses had been hit by malicious software designed to steal credit card data from cash register systems.

In that alert, the agencies warned that hackers have been scanning networks for point-of-sale systems with remote access capabilities (think LogMeIn and pcAnywhere), and then installing malware on POS devices protected by weak and easily guessed passwords.  The alert noted that at least seven point-of-sale vendors/providers confirmed they have had multiple clients affected.

Around the time that the Secret Service alert went out, UPS Stores, a subsidiary of the United Parcel Service, said that it scanned its systems for signs of the malware described in the alert and found security breaches that may have led to the theft of customer credit and debit data at 51 UPS franchises across the United States (about 1 percent of its 4,470 franchised center locations throughout the United States). Incidentally, the way UPS handled that breach disclosure — clearly calling out the individual stores affected — should stand as a model for other companies struggling with similar breaches.

In June, I wrote about a rash of card breaches involving car washes around the nation. The investigators I spoke with in reporting that story said all of the breached locations had one thing in common: They were all relying on point-of-sale systems that had remote access with weak passwords enabled.

My guess is that some Dairy Queen locations owned and operated by a particular franchisee group that runs multiple stores have experienced a breach, and that this incident is limited to a fraction of the total Dairy Queen locations nationwide. Unfortunately, without better and more timely reporting from individual franchises to DQ headquarters, it may be a while yet before we find out the whole story. In the meantime, DQ franchises that haven’t experienced a card breach may see their sales suffer as a result.

CARD BLIZZARD BREWING?

Last week, this publication received a tip that a well-established fraud shop in the cybercrime underground had begun offering a new batch of stolen cards indexed for sale by U.S. state. The type of card data primarily sold by this shop — known as “dumps” — allows buyers to create counterfeit copies of the cards so that they can be used to buy goods (gift cards and other easily-resold merchandise) from big box retailers, dollar stores and grocers.

Increasingly, fraudsters who purchase stolen card data are demanding that cards for sale be “geolocated” or geographically indexed according to the U.S. state in which the compromised business is located. Many banks will block suspicious out-of-state card-present transactions (especially if this is unusual activity for the cardholder in question). As a result, fraudsters tend to prefer purchasing cards that were stolen from people who live near them.

This was an innovation made popular by the core group of cybercrooks responsible for selling cards stolen in the Dec. 2013 breach at Target Corp, which involved some 40 million compromised credit and debit cards. The same fraudsters would repeat and refine that innovation in selling tens of thousands of cards stolen in February 2014 from nationwide beauty products chain Sally Beauty.

The dumps shop pictured to the right appears to be run by a completely separate fraud group from the gang that hit Target and Sally Beauty. Nevertheless, just this month it added its first batch of cards that is searchable by U.S. state. Two different financial institutions contacted by KrebsOnSecurity said the cards they acquired from this shop under this new “geo” batch name all had been used recently at different Dairy Queen locations.

The first batch of state-searchable cards at this particular card shop appears to have first gone on sale on Aug. 11, and included slightly more than 1,000 cards. The second batch debuted a week later and introduced more than twice as many stolen cards. A third bunch of more than 5,000 cards from this batch went up for sale early this morning.

An ad in the shop pimping a new batch of geo-located cards apparently stolen from Dairy Queen locations.

The Hacker Factor Blog: Autobahn

This post was syndicated from: The Hacker Factor Blog and was written by: The Hacker Factor Blog. Original post: at The Hacker Factor Blog

When I build web sites, I don’t just make systems that serve content. Coming from the security field, I intentionally design systems that detect network attacks and react accordingly. If the web site detects a web-based attack, then it can lock out the attacker, perform reconnaissance, or just play with them in order to waste their time.

In some cases, the locked-out attacker just sees a blank page, generic 404 error message, or maybe the word “banned”. In other cases, the attacker sees the regular web page with no visible indication that the attack has been detected. But, because it was detected, they will never be able to get past the regular web page. (E.g., I serve up a static HTML page that looks like the active PHP/ASP page. But as a static page with no server-side scripting, it ignores all attacks.)

Sometimes I feel that misinformation is more damaging than no information. For example, there are some auto-exploit scripts that look for backup files or common database archive names. The attacker hopes that these might contain passwords or other exploitable information. Rather than returning nothing, my scripts may detect the attack and return a password protected archive that contains random binary data. In one case, I actually return a large binary archive that, upon cracking the password, extracts a Rick Astley video. My goal is to make the attacker waste days cracking the password, only to be Rickrolled.

(Some k1dd13 in Brussels spent over a week cracking the password, only to see the Rickrolling video. Then he told his friends. This led to a small burst of downloads as multiple people retrieved the file and applied the password in order to see the video. Ironically, this allowed me to identify his friends, the forum he used, and who posted to the forum… And since the initial poster was the only person who had acquired the file, I know who attacked my site.)

From the attacker’s viewpoint, they won’t know what they are getting. Does the password-protected archive contain valuable source code, or just random binary junk? They won’t know until they spend hours, days, or even longer trying to crack the encryption. And during that time, I’m sucking up all of their cracking resources with no effort on my end. Moreover, while they are trying to crack my password protected junk file, they are unable to use those same resources on other people’s files.

On the To-Do List

I call my automatic attack detection and response system “auto-ban”, but I pronounce it “autobahn”. It automatically stops common, automated attacks and does it really fast.

In the worst case (and typical case), my scripts automatically update my Apache web server’s .htaccess file to ban access to the site. And if the user still abuses the system, then it updates the firewall rules to ban the user.

This brings up one of my to-do list items. Eventually you need to clean up the banned addresses. Otherwise, you end up with thousands of lines in your .htaccess file. (Over at FotoForensics, the main .htaccess file grew to over 1,500 lines in a year.) This results in slower per-page processing time and users who inherit bans as the bad guys change network addresses.

I recently took the time to address this problem.

The ideal solution is to have my Apache .htaccess file include other files that contain ban information. Unfortunately, .htaccess doesn’t seem to have an “include” directive. Another solution is to have the .htaccess file redirect all traffic through a script that checks for banned users. But that’s also a pain since I don’t want to waste unnecessary processing time on users who are already banned.

I ended up finding a solution using the mod_rewrite plugin. This module permits returning different content if a particular condition is matched. One of the condition rules checks if a file exists on the system. Now my detection code just needs to touch a file in a directory to ban a network address. In this case, the file’s name contains the hostile user’s network address. The mod_rewrite rule checks if the file exists and, if it does, it blocks the user’s access to the site.

ErrorDocument 410 "Banned"
RewriteCond %{DOCUMENT_ROOT}/../banned/badbot-%{REMOTE_ADDR} -f
RewriteRule $ - [G,L]

This code checks the directory “../banned/” for a file named “badbot-networkaddress” (e.g., “badbot-192.168.5.14”). I keep the banned list outside of the web server’s document root in order to deter arbitrary file traversal. If the banned file exists, then the web request generates an “HTTP 410 Gone” return code (the “G” rule). Finally, the ErrorDocument line makes the 410 response show a small text message that simply says “Banned”, rather than whatever Apache returns by default.

The net result from this change is a much smaller .htaccess file and a directory of banned network addresses. I also have a script that runs periodically (via cron) to remove banned files after three months. This mitigates the risk of someone inheriting the ban.
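
The post doesn’t include that cleanup script, but the idea is simple enough to sketch. The Node.js snippet below is purely illustrative: the location of the ban directory is an assumption, and only the “badbot-” file prefix and the roughly three-month expiry come from the post.

var fs = require('fs');
var path = require('path');

var BAN_DIR = '/var/www/banned';              // assumed location of the ban directory
var MAX_AGE_MS = 90 * 24 * 60 * 60 * 1000;    // roughly three months

// Delete any badbot-* file older than the cutoff, lifting that ban.
fs.readdirSync(BAN_DIR).forEach(function (name) {
    if (name.indexOf('badbot-') !== 0)
        return;
    var full = path.join(BAN_DIR, name);
    var age = Date.now() - fs.statSync(full).mtime.getTime();
    if (age > MAX_AGE_MS)
        fs.unlinkSync(full);
});

Run nightly from cron, something along these lines keeps the ban directory from growing without bound while still letting old addresses age out.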

Reviewing logs

I implemented this new rule, deleted all of the banned users from the .htaccess file, and waited for new addresses to be banned. I didn’t even need to wait very long. Within a minute the first attack-bot was identified and prohibited. After a few hours, I had a big collection.

Most of the attack-bots are looking for generic WordPress exploits. I don’t run WordPress, but since these represent compromised systems (part of a botnet) that look for various exploits, I ban them all. (If you use WordPress and haven’t patched recently, then patch now. It seems like there’s a new WordPress exploit every few weeks and the attack-bots come by often. WordPress may be popular, but it isn’t very secure.)

I also see attack-bots that scan for generic web-based vulnerabilities. As I detect the scan, they get banned.

However, a couple of subnets were quickly banned:

badbot-212.224.119.139
badbot-212.224.119.140
badbot-212.224.119.142
badbot-212.224.119.177
badbot-212.224.119.178
badbot-212.224.119.179
badbot-212.224.119.181
badbot-212.224.119.182
badbot-212.224.119.183
badbot-88.198.160.50
badbot-88.198.160.51
badbot-88.198.160.52
badbot-88.198.160.53
badbot-88.198.160.55
badbot-88.198.160.56
badbot-88.198.160.57
badbot-88.198.160.59
badbot-88.198.160.60
badbot-88.198.160.61
badbot-88.198.247.163
badbot-88.198.247.164
badbot-88.198.247.165
badbot-88.198.247.166
badbot-88.198.247.167
badbot-88.198.247.168
badbot-88.198.247.169
badbot-88.198.247.170

These subnets trace to a German click-bot. This is a type of fraud where the bot submits URLs, hoping that my server will automatically get the content and appear as a user clicking on an ad or visiting a site. The bot is used to drive up click counters and generates revenue associated with artificially-increased traffic. If the payer notices, then he’ll blame my site for the click-fraud and not the German guy who’s abusing my site.

This particular click-bot is run by Xovi. They claim to provide an all-in-one marketing solution, but it really looks like a scam. They constantly submit URLs to sites like FotoForensics in order to drive up traffic to their client’s sites. What they’re really saying is that their “marketing” will drive up traffic to your site by automating click traffic. Their customers are paying this company to generate traffic without increasing human visibility — it will cost you in bandwidth, but not increase your market value. In my line of work, we call this “fraud”.

This isn’t Xovi’s only scummy action. Their bot documentation claims that they obey the robots.txt file for bot exclusion. However, they have never retrieved my robots.txt file. Their uploads are explicitly in violation of my robots.txt exclusion rules. (This makes them only slightly worse than Google. Google retrieves the robots.txt, and then accesses all of the excluded URLs.)

Even though this click-bot has been banned, they don’t seem to notice (or care) that it’s getting an HTTP error code back. It still attempts to post URLs every few minutes.

Unfortunately, I cannot poison their bot since they seem to ignore all results. The next step is usually some kind of attention-getting network attack against Xovi. However, I don’t think this small-time idiot scammer from Germany is worth the effort. (I’m sure that the German language has a single word for this. Something like “Ignorierendiekleinezeitidiotbetrüger”.)

Eventually this new version of auto-ban will catch and block botnets, anonymous proxies, and other types of systems that are used to attack web sites. But this time, it won’t cause a performance hit by increasing my .htaccess processing time.

TorrentFreak: BitTorrent’s Secure Dropbox Alternative Simplifies Sharing

This post was syndicated from: TorrentFreak and was written by: Ernesto. Original post: at TorrentFreak

There are dozens of sync and backup services available on the Internet, but most have a major drawback. They require people to store data on external cloud-based servers that are not under their control.

BitTorrent Sync is a lightweight backup tool that eliminates this drawback, and it’s much faster too.

The functionality of the Sync application is comparable to most cloud-based sync tools, except for the fact that there’s no cloud involved. Users simply share their files across their own devices, or the devices of people they share files with.

Since its launch the application has built a steady base of millions of users who have already transferred a mind-boggling amount of data.

“Since the initial Alpha launch of Sync a little over a year ago, we’ve now hit over 10 million total user installs and have transferred over 80 Petabytes of data,” BitTorrent Inc’s Erik Pounds notes.

Today marks another big step in the development of Sync. With the release of version 1.4 users are now able to share files and folders more easily, by simply sending someone a URL. Previously, people had to exchange encryption keys, a more complicated process.

Sharing a Sync file or folder

People who receive a Sync URL will be directed to a download page where they are prompted to install Sync, if it isn’t already installed, and can start downloading files right away.

Sync offers a wide variety of sharing options. Users have complete control over where their data is going and how it is used. This includes setting read/write permissions and the option to give access to approved devices only.

“Sync gives you full ownership over your data. With no third parties involved in storing or arbitrating your data, you know exactly where your files go,” Pounds explains.

In addition to the easier sharing options and various other improvements, the latest release also has a completely redesigned interface.

For those who are interested, the latest version of BitTorrent Sync is now available for download here, completely free of charge.

Source: TorrentFreak, for the latest info on copyright, file-sharing and anonymous VPN services.

Linux How-Tos and Linux Tutorials: How to Control a 3 Wheel Robot from a Tablet With BeagleBone Black

This post was syndicated from: Linux How-Tos and Linux Tutorials and was written by: Ben Martin. Original post: at Linux How-Tos and Linux Tutorials

Give the BeagleBone Black its own wheels, completely untether any cables from the whole thing and control the robot from a tablet.

The 3 Wheel Robot kit includes all the structural material required to create a robot base. To the robot base you have to add two gearmotors, batteries, and some way to control it. Leaving the motors out of the 3 wheel kit allows you to choose the motor with a torque and RPM suitable for your application.

In this article we’ll use the Web server on the BeagleBone Black and some Bonescript to allow simple robot control from any tablet. No ‘apps’ required. Bonescript is a nodejs environment which comes with the BeagleBone Black.

Shown in the image at left is the 3 wheel robot base with an added pan and tilt mechanism. All items above the long black line and all batteries, electronics, cabling, and the two gearmotors were additions to the base. Once you can control the base using the BeagleBone Black, adding other hardware is relatively easy.

This article uses two 45 rpm Precision Gear Motors. The wheels are 6 inches in diameter so the robot will be speed-limited to about 86 feet per minute (26 meter/min). These motors can run from 6-12 volts and have a maximum stall current of 1 amp. The large stall current draw happens when the motor is trying to turn but is unable to. For example, if the robot has run into a wall and the tires do not slip. It is a good idea to detect cases that draw stall current and turn off power to avoid overheating and/or damaging the motor.

In this Linux.com series on the BeagleBone Black we have also seen how to use the Linux interface allowing us to access chips over SPI and receive interrupts when the voltage on a pin changes, and how to drive servo motors.

Constructing the 3 Wheel Robot

3 wheel robot kit parts

The parts for the 3 Wheel Robot kit are shown above (with the two gearmotors in addition to the raw kit). You can assemble the robot base in any order you choose. A fair number of the parts are used, together with whichever motors you selected, to mount the two powered front wheels. The two pieces of channel are connected using the hub spacer and the swivel hub is used to connect the shorter piece of channel at an angle at the rear of the robot. I’m assuming the two driving wheels are at the ‘front’. I started construction at the two drive wheels as that used up a hub adapter, screw hub, and two motor mount pieces. Taking those parts out of the mix left less clutter for the subsequent choice of pieces.

Powering Everything

In a past article I covered how the BeagleBone Black needs roughly 2.5 to a little over 3 Watts of power to operate. The power requirements for the BeagleBone Black can be met in many ways. I chose to use a single 3.7-Volt 18650 lithium battery and a 5 V step-up board. The BeagleBone Black has a power jack expecting 5 V. At a high CPU load the BeagleBone Black could take up to 3.5 W of power, so the battery and step-up converter have to be able to comfortably supply 5 V at 700 mA. The battery is rated at about 3 amp-hours so the BeagleBone Black should be able to run for hours on a single charge.

The gearmotors for the wheels can operate on 6 to 12 V. I used a second battery source for the motors so that they wouldn’t interfere with the power of the BeagleBone Black. For the motors I used a block of 8 NiMH rechargeable AA batteries. This only offered around 9.5 V, so the gearmotors would not achieve their maximum performance, but it was a cheap supply to get going. I have manually avoided stalling either motor in testing so as not to draw too much power from the AA batteries. Some form of stall protection that cuts power to the gearmotors should be used to protect the batteries, or a more robust motor battery source chosen. For example, monitoring current and turning off the motors if they attempt to draw too much.
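
As a rough illustration of that sort of stall protection, the sketch below samples an analog input with Bonescript and cuts the enable PWM when the reading gets too high. It assumes an H-bridge board with a current-sense resistor wired, through a suitable divider (the AIN pins must stay below 1.8 V), to an analog pin; the pin names and threshold are invented for the example and are not the wiring used on this robot.

var b = require('bonescript');

var SENSE_PIN  = 'P9_40';   // assumed: AIN pin reading the scaled current-sense voltage
var ENABLE_PIN = 'P8_46';   // assumed: PWM pin driving one H-bridge enable line
var THRESHOLD  = 0.6;       // assumed: normalised reading that indicates a stall

// Poll the sense voltage a few times a second and kill the motor if it looks stalled.
setInterval(function () {
    b.analogRead(SENSE_PIN, function (x) {
        if (x.value > THRESHOLD) {
            b.analogWrite(ENABLE_PIN, 0, 2000);   // drop the duty cycle to zero
            console.log('Possible stall detected, motor disabled');
        }
    });
}, 250);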

The motor power supply was connected to the H-bridge board, making the ground terminal on the H-bridge a convenient location for a common ground connection to the BeagleBone Black.

Communicating without Wires

The BeagleBone Black does not have on-board wifi. One way to allow easy communication with the BeagleBone Black is to flash a TP-Link WR-703N with openWRT and use that to provide a wifi access point for access to the BeagleBone Black. The WR-703N is mounted to the robot base and is connected to the ethernet port of the BeagleBone Black. The tablets and laptops can then connect to the access point offered by the onboard WR-703N.

I found it convenient to set up the WR-703N to be a DHCP server and to assign the same IP address to the BeagleBone Black as it would have obtained when connected to my wired network. This way the tablet can communicate with the robot both in a wired prototyping setup and when the robot is untethered.

Controlling Gearmotors from the BeagleBone Black

Unlike the servo motors discussed in the previous article, gearmotors do not have a Pulse Width Modulation (PWM) control line that sets an angle to rotate to. There is only power and ground to connect. If you connect the gearmotor directly to a 12 V power source it will spin up to turn as fast as it can. To turn the gearmotor a little bit slower, say at 70 percent of its maximum speed, you need to supply power only 70 percent of the time. So we want to perform PWM on the power supply wire to the gearmotor. Unlike the PWM used to control the servo, we do not have any fixed 20 millisecond time slots forced on us. We can divide up time any way we want, for example running full power for 0.7 seconds then no power for 0.3 s, though a time slice shorter than 1 s will produce smoother motion.

An H-Bridge chip lets you switch a high-voltage, high-current wire on and off from a 3.3 V signal connected to the BeagleBone Black. A single H-Bridge will let you control one gearmotor. Some chips like the L298 contain two H-Bridges; this is because two H-Bridges are useful if you want to control stepper motors. A board containing an L298, heatsink and connection terminals can be had for as little as $5 from a China-based shop, up to more than $30 for a fully populated configuration made in Canada that includes resistors to allow you to monitor the current being drawn by each motor.

The L298 has two pins to control the configuration of the H-Bridge and an enable pin. With the two control pins you can configure the H-Bridge to flow power through the motor in either direction. So you can turn the motor forwards and backwards depending on which of the two control pins is set high. When the enable pin is high then power flows from the motor batteries through the motor in the direction that the H-Bridge is configured for. The enable pin is where to use PWM in order to turn the motors at a rate slower than their top speed.

The two control lines and the enable line allow you to control one H-Bridge and thus one gearmotor. The L298 has a second set of enable and control lines so you can control a second gearmotor. Other than those lines the BeagleBone Black has to connect ground and 3.3 V to the H-Bridge.
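
Putting those lines together for a single motor looks roughly like the sketch below. The pin names follow the server code shown later in this article, but which control pins pair up with which motor is an assumption here, so treat it as an illustration rather than the exact wiring.

var b = require('bonescript');

// One gearmotor: two H-bridge control lines set the direction, the enable line gets PWM.
var IN1    = 'P8_37';   // control line 1 (assumed pairing)
var IN2    = 'P8_39';   // control line 2 (assumed pairing)
var ENABLE = 'P8_46';   // enable line, driven by hardware PWM

b.pinMode(IN1, b.OUTPUT);
b.pinMode(IN2, b.OUTPUT);

// Forwards: one control line high, the other low.
b.digitalWrite(IN1, b.HIGH);
b.digitalWrite(IN2, b.LOW);

// Run at roughly 70 percent of full speed: power flows 70 percent of the time.
b.analogWrite(ENABLE, 0.7, 2000);

// Swapping HIGH and LOW on the control lines reverses the motor;
// writing a duty cycle of 0 to the enable line stops it.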

When I first tried to run the robot in a straight line I found that it gradually turned left. After some experimenting I found that at full power the left motor was rotating at a slightly slower RPM relative to the right one. I’m not sure where this difference was being introduced but, having found it early in testing, the software was designed to allow such calibration to be performed behind the scenes. You select 100 percent speed straight ahead and the software runs the right motor at only 97 percent power (or whatever calibration adjustment is currently applied).

To allow simple control of the two motors I used two concepts: the speed (0-100) and heading (0-100). A heading of 50 means that the robot should progress straight ahead. This mimics a car interface where steering (heading) and velocity are adjusted and the robot takes care of the details.

I have made the full source code available on github. Note the branch linux.com-article which is frozen in time at the point of the article. The master branch contains some new goodies and a few changes to the code structure, too.

The Server

Because the robot base was “T” shaped, over time it was referred to as TerryTee. The TerryTee nodejs class uses bonescript to control the PWM for the two gearmotors.

The constructor takes the pin identifier to use for the left and right motor PWM signals and a reduction to apply to each motor, with 1.0 being no reduction and 0.95 meaning the motor runs at only 95 percent of the specified speed. The reduction is there so you can compensate if one motor runs slightly slower than the other.

function TerryTee( leftPWMpin, rightPWMpin, leftReduction, rightReduction )
{
    TerryTee.running = 1;
    TerryTee.leftPWMpin = leftPWMpin;
    TerryTee.rightPWMpin = rightPWMpin;
    TerryTee.leftReduction = leftReduction;
    TerryTee.rightReduction = rightReduction;
    TerryTee.speed = 0;
    TerryTee.heading = 50;
}

The setPWM() method shown below is the lowest level one in TerryTee, and other methods use it to change the speed of each motor. The PWMpin selects which motor to control and the ‘perc’ is the percentage of time that motor should be powered. I also made perc able to be from 0-100 as well as from 0.0 – 1.0 so the web interface could deal in whole numbers.

When an emergency stop is active, running is false so setPWM will not change the current signal. The setPWM also applies the motor strength calibration automatically so higher level code doesn’t need to be concerned with that. As the analogWrite() Bonescript call uses the underlying PWM hardware to output the signal, the PWM does not need to be constantly refreshed from software; once you set 70 percent, the robot motor will continue to try to rotate at that speed until you tell it otherwise.

TerryTee.prototype.setPWM = function (PWMpin,perc) 
{
    if( !TerryTee.running )
	return;
    if( PWMpin == TerryTee.leftPWMpin ) {
	perc *= TerryTee.leftReduction;
    } else {
	perc *= TerryTee.rightReduction;
    }
    if( perc >  1 )   
	perc /= 100;
    console.log("awrite PWMpin:" + PWMpin + " perc:" + perc  );
    b.analogWrite( PWMpin, perc, 2000 );
};

The setSpeed() call takes the current heading into consideration and updates the PWM signal for each wheel to reflect the heading and speed you have currently set.

TerryTee.prototype.setSpeed = function ( v ) 
{
    if( !TerryTee.running )
	return;
    if( v < 40 )
    {
	TerryTee.speed = 0;
	this.setPWM( TerryTee.leftPWMpin,  0 );
	this.setPWM( TerryTee.rightPWMpin, 0 );
	return;
    }
    var leftv  = v;
    var rightv = v;
    var heading = TerryTee.heading;
    
    if( heading > 50 )
    {
	if( heading >= 95 )
	    leftv = 0;
	else
	    leftv *= 1 - (heading-50)/50;
    }
    if( heading < 50 )
    {
	if( heading <= 5 )
	    rightv = 0;
	else
	    rightv *= 1 - (50-heading)/50;
    }
    console.log("setSpeed v:" + v + " leftv:" + leftv + " rightv:" + rightv );
    this.setPWM( TerryTee.leftPWMpin,  leftv );
    this.setPWM( TerryTee.rightPWMpin, rightv );
    TerryTee.speed = v;
};

The server itself creates a TerryTee object and then offers a Web socket to control that Terry. The ‘stop’ message is intended as an emergency stop which forces Terry to stop moving and ignore input for a period of time so that you can get to it and disable the power in case something has gone wrong.

var terry = new TerryTee('P8_46', 'P8_45', 1.0, 0.97 );
terry.setSpeed( 0 );
terry.setHeading( 50 );
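// Configure the four H-bridge control lines as outputs and set a fixed direction for both motors.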
b.pinMode     ('P8_37', b.OUTPUT);
b.pinMode     ('P8_38', b.OUTPUT);
b.pinMode     ('P8_39', b.OUTPUT);
b.pinMode     ('P8_40', b.OUTPUT);
b.digitalWrite('P8_37', b.HIGH);
b.digitalWrite('P8_38', b.HIGH);
b.digitalWrite('P8_39', b.LOW);
b.digitalWrite('P8_40', b.LOW);
io.sockets.on('connection', function (socket) {
  ...
  socket.on('stop', function (v) {
      terry.setSpeed( 0 );
      terry.setHeading( 0 );
      terry.forceStop();
  });
  socket.on('speed', function (v) {
      console.log('set speed to ', v );
      console.log('set speed to ', v.value );
      if( typeof v.value === 'undefined')
	  return;
      terry.setSpeed( v.value );
  });
  ...

The code on github is likely to evolve over time to move the various fixed cutoff numbers to be configurable and allow Terry to be reversed from the tablet.

The Client (Web page)

To quickly create a Web interface I used Bootstrap and jQuery. If the interface became more advanced then perhaps something like AngularJS would be a better fit. To control the speed and heading with an easy touch interface I also used the bootstrap-slider project.

<div class="inner cover">
  <div class="row">
    <div class="col-md-1"><p class="lead">Speed</p></div>
    <div class="col-md-8"><input id="speed" data-slider-id='speedSlider' 
                    type="text" data-slider-min="0" data-slider-max="100" 
                    data-slider-step="1" data-slider-value="0"/></div>
  </div>
  <div class="row">
    <div class="col-md-1"><p class="lead">Heading</p></div>
    <div class="col-md-8"><input id="heading" data-slider-id='headingSlider' 
                    type="text" data-slider-min="0" data-slider-max="100" 
                    data-slider-step="1" data-slider-value="50"/></div>
  </div>
</div>
<div class="inner cover">
    <div class="btn-group">
	<button id="rotateleft" type="button" class="btn btn-default btn-lg" >
	  <span class="glyphicon glyphicon-hand-left"></span>&nbsp;Rot&nbsp;Left</button>
	<button id="straightahead" type="button" class="btn btn-default btn-lg" >
	  <span class="glyphicon glyphicon-arrow-up"></span>&nbsp;Straight&nbsp;ahead</button>
	<button id="rotateright" type="button" class="btn btn-default btn-lg" >
	  <span class="glyphicon glyphicon-hand-right"></span>&nbsp;Rot&nbsp;Right</button>
    </div>
</div>

With those UI elements the hook up to the server is completed using io.connect() to connect a ‘var socket’ back to the BeagleBone Black. The below code sends commands back to the BeagleBone Black as UI elements are adjusted on the page. The rotateleft command is simulated by setting the heading and speed for a few seconds and then stopping everything.

$("#speed").on('slide', function(slideEvt) {
    socket.emit('speed', {
        value: slideEvt.value[0],
        '/end': 'of-message'
    });
});
...
$('#straightahead').on('click', function (e) {
     $('#heading').data('slider').setValue(50);
})
$('#rotateleft').on('click', function (e) {
     $('#heading').data('slider').setValue(0);
     $('#speed').data('slider').setValue(70);
     setTimeout(function() {
        $('#speed').data('slider').setValue(0);
        $('#heading').data('slider').setValue(50);
     }, 2000);
})

The BeagleBone Black runs a Web server offering files from /usr/share/bone101. I found it convenient to put the whole project in /home/xuser/webapps/terry-tee and create a softlink to the project at /usr/share/bone101/terry-tee. This way http://mybeagleip/terry-tee/index.html will load the Web interface on a tablet. Cloud9 will automatically start any Bonescript files contained in /var/lib/cloud9/autorun. So two links set up Cloud9 to both serve the client and automatically start the server Bonescript for you:

root@beaglebone:/var/lib/cloud9/autorun# ls -l
lrwxrwxrwx 1 root root 39 Apr 23 07:02 terry.js -> /home/xuser/webapps/terry-tee/server.js
root@beaglebone:/var/lib/cloud9/autorun# cd /usr/share/bone101/
root@beaglebone:/usr/share/bone101# ls -l terry-tee
lrwxrwxrwx 1 root root 29 Apr 17 05:48 terry-tee -> /home/xuser/webapps/terry-tee

Wrap up

I originally tried to use the GPIO pins P8_41 to 44. I found that if I had wires connected to those ports the BeagleBone Black would not start. I could remove and reapply the wires after startup and things would function as expected. On the other hand, leaving 41-44 unconnected and using 37-40 instead, the BeagleBone Black would boot up fine. If you have a problem starting your BeagleBone Black you might be accidentally using a connector that has a reserved function during startup.

While the configuration shown in this article allows control of only the movement of the robot base, the same code could easily be extended to control other aspects of the robot you are building. For example, to control an attached arm and move things around from your tablet.

Using a BeagleBone Black to control the robot base gives the robot plenty of CPU performance. This opens the door to using a mounted camera with OpenCV to implement object tracking. For example, the robot can move itself around in order to keep facing you. While the configuration in this article used wifi to connect with the robot, another interesting possibility is to use 3G to connect to a robot that isn’t physically nearby.

The BeagleBone Black can create a great Web-controlled robot and the 3 wheel robot base together with some gearmotors should get you moving fairly easily. Though once you have the base moving around you may find it difficult to resist giving your robot more capabilities!

We would like to thank ServoCity for supplying the 3 wheel robot base, gearmotors, gearbox and servo used in this article.