LuneOS is the new name for the mobile system once known as WebOS; the first
release is available for brave testers now. “The main focus of
LuneOS is to provide an operating system which is driven by the community
and continues what we love(d) about webOS. We’re not trying to reach
feature comparison with Android or iOS but rather building a system to
satisfy basic needs in the mobile environment.” The Nexus 4
and HP TouchPad
appear to be the best devices for those wanting to try LuneOS out on real hardware.
The Tribler client has been around for nearly a decade already, and during that time it’s developed into the only truly decentralized BitTorrent client out there.
Even if all torrent sites were shut down today, Tribler users would still be able to find and add new content.
What BitTorrent transfers have always lacked, however, is anonymity. The Tribler team hopes to fix this problem with a built-in Tor-like network, routing all data through a series of peers. In essence, Tribler users then become their own anonymity network, helping each other to hide their IP-addresses behind encrypted proxies.
“The Tribler anonymity feature aims to make strong encryption and authentication the Internet default,” Tribler leader Dr. Pouwelse tells TF.
For now the researchers have settled on three proxies between the sender of the data and the recipient. This minimizes the risk of being monitored by a rogue peer and significantly improves privacy.
“Adding three layers of proxies gives you more privacy. Three layers of protection make it difficult to trace you. Proxies no longer need to be fully trusted. A single bad proxy can not see exactly what is going on,” the Tribler team explains.
“The first proxy layer encrypts the data for you and each next proxy adds another layer of encryption. You are the only one who can decrypt these three layers correctly. Tribler uses three proxy layers to make sure bad proxies that are spying on people can do little damage.”
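The layering described above can be sketched in a few lines of Python. This is purely illustrative: the XOR “cipher” below is not secure and is not Tribler’s actual protocol (Tribler uses real cryptographic primitives in its own Tor-like tunnels). It only shows the shape of the idea: the sender wraps the payload in one layer per proxy, and each hop peels off exactly one layer.

```python
import hashlib

def keystream_xor(key: bytes, data: bytes) -> bytes:
    """XOR `data` with a SHA-256-derived keystream. Illustration only:
    this is NOT a secure cipher and NOT what Tribler actually uses."""
    stream = bytearray()
    counter = 0
    while len(stream) < len(data):
        stream += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return bytes(b ^ k for b, k in zip(data, stream))

# Hypothetical per-proxy keys; in a real onion circuit each key is
# negotiated with the proxy, and only the sender knows all three.
proxy_keys = [b"proxy-1-key", b"proxy-2-key", b"proxy-3-key"]
payload = b"chunk of the 50 MB test file"

# Sender wraps the payload in three layers, innermost layer last to
# be peeled, so the first proxy removes the outermost layer.
packet = payload
for key in reversed(proxy_keys):
    packet = keystream_xor(key, packet)

# Each proxy in turn removes exactly one layer; only after the final
# hop does the plaintext reappear.
for key in proxy_keys:
    packet = keystream_xor(key, packet)

assert packet == payload
```

The point of the construction is that no single proxy ever sees both the plaintext and the sender: each hop receives ciphertext and forwards (still-wrapped) ciphertext, so a single bad proxy learns very little.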
Today Tribler opens up its technology to the public for the first time. The Tor-like network is fully functional, but for now it is limited to a 50 MB test file. This will allow the developers to make some improvements before the final release goes out next month.
There has been an increased interest in encryption technologies lately. The Tribler team invites interested developers to help them improve their work, which is available on Github.
“We hope all developers will unite inside a single project to defeat the forces that have destroyed the Internet essence. We really don’t need a hundred more single-person projects on ‘secure’ chat applications that still fully expose who you talk to,” Pouwelse says.
For users, the Tor-like security means an increase in bandwidth usage. After all, they themselves also become proxies who have to pass on the transfers of other users. According to the researchers this shouldn’t result in any slowdowns though, as long as people are willing to share.
“Tribler has always been for social and sharing people. Like private tracker communities with plenty of bandwidth to go around we think we can offer anonymity without slow downs, if we can incentivize people to leave their computers on overnight and donate,” Pouwelse says.
“People who share will have superior anonymous speeds,” he adds.
Those interested in testing Tribler’s anonymity feature can download the latest version. Bandwidth statistics are also available. Please bear in mind that only the test file can be transferred securely at the moment.
Are you a teacher? Have you got back-to-school blues after yesterday’s return to the staffroom? Are your classroom displays distinctly lacking in interaction or automation? Are you bored of taking the register the old fashioned way? Well we think that we have the perfect remedy for you!
We’re offering another two days of FREE training from the Education Team in our HQ home town of Cambridge, UK. You don’t need any experience with Raspberry Pi. We will teach you, inspire you, feed you, and give you free resources. All you need to do is get here! We are confident that you will have such a good time that you’ll shake those back-to-school blues and be excited about getting hands-on with technology in your classroom, like Raspberry Pi Certified Educators Dan Aldred and Sue Gray, who created a dancing and singing glove over the two days of training:
Apply now for September Picademy (29th & 30th September 2014). The deadline for applications for this event is on Friday 5th September, so you’ve only got a few more days. We will email all successful candidates on Monday 8th September.
Applications for October Picademy (27th & 28th October 2014) will remain open until Friday 3rd October.
We accept applications from practicing teachers from all over the world who teach any subject area. We’ve had art teachers, history teachers, science teachers and Primary non-subject specialists as well as ICT and Computing teachers visit Picademy; the course is appropriate for any teacher, no matter what their subject.
Here is what some of our Raspberry Pi Certified Educators have to say about their experience at a Picademy:
Picademy was a hard two days of CPD but was definitely the best I have been on. It is difficult to mention the best thing about it because there were so many! Unlike most CPD I have been on we were not just talked at – we were hands on, developing and creating nearly all the time. We had so many opportunities to network and share ideas – I have not used Twitter so much and am seeing more value in it now. The time simply flew by, especially when we were working on our projects, during which we were writing code, debugging, bouncing ideas around, sharing, creating, swearing, laughing, tweeting, eating sweets, learning, googling, performing bear surgery and collaborating. Although the two days finished last week for Picademy#3 it hasn’t stopped – ideas are still flowing and the tweets and emails are pinging about the internet. – Matthew Parry – CAS Master Teacher
It was an epic journey. For some present, they had never plugged in a Pi before Monday, by the end they were exploring different programming concepts not for necessity but for curiosity and intrigue. For others, we now had a colossal array of activity ideas and cross-curricular links not to mention a brilliant network of fellow interested educators. What more can you ask for from 2 free days of CPD? – Sway Grantham - Primary Teacher, UK.
I set out with the idea of riding to Kladnitsa, but my strength held out for more (actually, I ran out of time) and I rode all the way to the end of Bosnek (the lung of Pernik), where something was burning at the edge of the village and it was hard to breathe (some lung, indeed). I would have continued, but it got late, and it’s a good thing I decided to turn back, because I rode half of the 30 kilometers to Sofia in the dark. It took me about two and a half hours to reach the end of Bosnek, of which 20–25 minutes were breaks.
The stretch from the Water Canal (above Vladaya) to Kladnitsa is one of the coolest rides I’ve ever done. It’s about 15 kilometers along a narrow forest trail with a gentle descent, too narrow for two people to pass each other, but you can comfortably hold a speed of about 25 km/h going downhill. With more cyclists on the trail it would probably be dangerous, but since I was alone I enjoyed it to the fullest.
The climb from Vladaya to the Water Canal is nasty, and there it’s best to push or carry the bike. In general, wherever the road is badly rutted with big rocks and the grade is steeper, it seems best to go on foot. You don’t lose much time, and riding would burn more energy than it’s worth.
70+ kilometers of riding: 40 out through forests and rock fields, and 30 back along the highway and a “first-class” road, and naturally I punctured the rear tire five kilometers from home. To hell with Bulgaria’s first-class roads. The forest is safer…
PIPCU also relies on old-fashioned police work to deal with sites that fail to heed their warnings to toe the line. This has resulted in several arrests in the UK and the closure of dozens of domains, torrent site proxies in particular.
With key partner the Federation Against Copyright Theft and its members including the Premier League and BSkyB, piracy of TV-destined content has become an area of interest to PIPCU, particularly that involving live sports.
Early Monday, more than 200 miles away from their London base, officers from PIPCU arrested a man in Manchester in the north of England. Police say the 27-year-old is believed to have operated a series of websites which offered access to subscription-only TV services.
PIPCU say that the domains were sports-focused, so given the premium pay TV landscape in the UK it seems probable that they infringed the rights of BSkyB and possibly the Premier League. Police are yet to confirm the details.
While there are no figures available on site visitor numbers, police are using the term “industrial” to describe the size of the operation they shut down yesterday. Twelve computer servers streaming global sports were reportedly seized and their operator taken to a local police station for questioning.
“Today’s operation is the unit’s third arrest in relation to online streaming and sends out a strong message that we are homing in on those who knowingly commit or facilitate online copyright infringement,” said PIPCU chief DCI Danny Medlycott last evening.
“Not only is there a significant loss to industry with this particular operation but it is also unfair that millions of people work hard to be able to afford to pay for their subscription-only TV services when others cheat the system.”
PIPCU have not released the names of the sites in question so it’s impossible to assess their significance at this point. However, police are often quick to seize the domains of sites they close down so it’s expected that signs of that will begin to surface during the next few days enabling a more detailed assessment of the shutdown.
As pointed out by DCI Medlycott, yesterday’s arrest is the third involving a streaming site operator in the UK. Although the sites were not revealed by police at the time, TorrentFreak previously revealed that the operator of BoxingGuru.co.uk, boxingguru.eu, boxingguru.tv and nutjob.eu was arrested during April in the north of England.
In May, PIPCU had the domain of the Cricfree.tv streaming portal suspended but its operator was able to bring the site back under a new domain.
Yesterday’s arrest appears to be PIPCU’s first since the arrest of a UK-based torrent site proxy operator in early August.
The Popcorn Time app brought BitTorrent streaming to the masses earlier this year.
The software became an instant hit by offering BitTorrent-powered streaming in an easy-to-use Netflix-style interface.
While the original app was shut down by the developers after a few weeks, the project was quickly picked up by others. This resulted in several popular forks that have gained a steady user-base in recent months.
Just how popular the application is remained a mystery, until now. TorrentFreak reached out to one of the most popular Popcorn Time forks at time4popcorn.eu to find out how many installs and active users there are in various parts of the world.
The Popcorn Time team was initially reluctant to share exact statistics on the app’s popularity across the globe, but they’re now ready to lift the veil.
Data shared with TorrentFreak shows that most users come from the United States where the application is installed on more than 1.4 million devices. There are currently over 100,000 active users in the U.S. and the number of new installs per day hovers around 15,000.
“At the beginning of August there were between 17-18K installations a day on all operating systems and last weekend there were somewhere between 13-15K a day,” the Popcorn Time team informs us.
The application has a surprisingly large user base in the Netherlands too, as Android Planet found out. The country comes in second place with 1.3 million installs. That’s a huge number for a country with a population of less than 17 million people.
Brazil completes the top three at a respectable distance, with 700,000 installed applications and around 56,000 active users.
The United Kingdom just missed a spot in the top three. The Popcorn Time fork has been installed on 500,000 devices there, with 30,000 active users and 4,500 new installs per day.
Australia, which generally has a very high piracy rate, is lagging behind a little with 93,000 installs thus far, and “only” 6,500 active users.
The statistics above only apply to the time4popcorn.eu application. While it’s probably the most used, other forks such as popcorntime.io also have a large following to add to the total Popcorn Time user base.
The team behind time4popcorn.eu, meanwhile, says that it will continue to add new features and support for more operating systems. They are currently finishing up the first iOS version which is expected to be released in a few days.
Aside from the technical challenges, the developers stay motivated thanks to the large audience they’ve gathered in a relatively short period.
“We really love and appreciate all our devoted users from all over the world, and we want to emphasize to them once more that this is only the beginning of the beginning. We have so many awesome plans for the future,” they stress.
As long as there are no legal troubles down the road, this user base is expected to grow even further during the months to come.
So this leak has caused quite a furore. Normally I don’t pay attention to this stuff – but hey, it’s JLaw and it’s a LOT of celebs at the same time – which indicates some kind of underlying problem. The massive list of over 100 celebs was posted originally on 4chan (of course) by an [...]
Read the full post, “Massive Celeb Leak…”, at darknet.org.uk
Warning: This blog entry discusses adult content.
In my previous blog entry, I wrote about the auto-ban system at FotoForensics. This system is designed to detect network attacks and prohibited content. Beginning yesterday, the system has been getting a serious workout. Over 600 people have been auto-banned. After 30 hours, the load is just beginning to ebb.
Yesterday on 4chan (the land of trolls), someone posted a long list of “celebrity nude photos”. Let me be blunt: they are all fakes. Some with heads pasted onto bodies, others have clothing digitally removed — and it’s all pretty poorly done. (Then again: if it came from the site that gave us Rickrolling and Pedobear, did anyone expect them to be real?)
Plenty of news outlets are reporting on this as if it were a massive data security leak. Except that there was no exploit beyond some very creepy and disturbed person with Photoshop. (Seriously: to create this many fakes strikes me as a mental disorder from someone who is likely a sex offender.) When actress Victoria Justice tweeted that the pictures are fake, she was telling the truth. They are all fakes.
Unfortunately in this case, when people think photos may be fake, they upload them to FotoForensics. Since FotoForensics has a zero-tolerance policy on porn, nudity, and sexually explicit content, every single person who uploads any of these pictures is banned. All of them. Banned for three months. And if they don’t take the hint and visit again during the three-month ban, the ban counter resets — it’s a three-month ban from your last visit.
I have previously written about why FotoForensics bans some content. To summarize the main reasons: we want less-biased content (not “50% porn”), we want to stay off blacklists that would prevent access from our desired user base, and we want to reduce the amount of child porn uploaded to the site.
As a service provider, I am a mandatory reporter. I don’t have the option to not report people who upload child porn. Either I turn you in and you get a felony, or I don’t turn you in and I get a felony. So, I’m turning you in ASAP. (As one law enforcement officer remarked after reviewing a report I submitted, “Wait… you’re telling me that they uploaded child porn to a site named ‘Forensics’ and run by a company called ‘Hacker’?” I could hear her partners laughing in the background. “We don’t catch the smart ones.”)
Banning all porn, nudity, and sexually explicit content dramatically reduces the number of users who upload child porn. It also keeps the site workplace-safe, and it stops porn from biasing the data archive.
The zero-tolerance policy at FotoForensics is really no different from the terms of service at Google, Facebook, Yahoo, Twitter, Reddit, and every other major service provider. All of them explicitly forbid child porn (because it’s a felony), and most just forbid all pornography and sexually explicit content because they know that sites without filters have problems with child porn.
Unfortunately, there’s another well-established trend at FotoForensics. Whenever there is a burst of activity, it is followed by people who upload porn, and then by people uploading child porn. The current burst (uploading fake nude celebrities) is a huge one. Already, we are seeing the switch over to regular porn. That means we are gearing up to report tons of child porn that will likely show up over the next few days. (This is the part of my job that I hate. I don’t hate reporting people — that’s fun and I hope they all get arrested. I hate having my admins and research partners potentially come across child porn.)
Over at FotoForensics, we have a lot of different research projects. Some of them are designed to identify fads and trends, while others are looking for ways to better conduct forensics. One of the research projects is focused on more accurately identifying prohibited content. These are all part of the auto-ban system.
Auto-ban has a dozen independent functions and a couple of reporting levels. Some people get banned instantly. Others get flagged for review based on suspicious activity or content. Some flagged content generates a warning for the user. The warning basically says that this is a family friendly site and makes the user agree that they are not uploading prohibited content. Other times content is silently flagged — the user never notices it, but it goes into the list of content for manual review and potential banning. (Even the review process is simplified: one person can easily review a few thousand files per hour.)
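As a rough illustration of how such a tiered system might be wired together (the rule names, trigger conditions, and severity ordering below are invented for the sketch, not FotoForensics’ actual code), each upload can be run through a list of independent rules, with the most severe verdict winning and silent flags queued for manual review:

```python
from dataclasses import dataclass, field
from typing import Callable

BAN, WARN, FLAG, ALLOW = "ban", "warn", "flag", "allow"

@dataclass
class Upload:
    user: str
    content: bytes
    flags: list = field(default_factory=list)  # manual-review queue entries

def rule_known_prohibited(up: Upload) -> str:
    # Instant ban: content matches known prohibited material (stubbed check).
    return BAN if b"prohibited" in up.content else ALLOW

def rule_suspicious(up: Upload) -> str:
    # Suspicious content: show the "family friendly site" warning.
    return WARN if b"suspicious" in up.content else ALLOW

def rule_silent_flag(up: Upload) -> str:
    # Silently flag for review; the user never notices anything.
    return FLAG if b"odd" in up.content else ALLOW

RULES: list[Callable[[Upload], str]] = [
    rule_known_prohibited, rule_suspicious, rule_silent_flag,
]

def evaluate(up: Upload) -> str:
    """Run every independent rule; the most severe verdict wins."""
    severity = {BAN: 3, WARN: 2, FLAG: 1, ALLOW: 0}
    verdict = ALLOW
    for rule in RULES:
        result = rule(up)
        if result == FLAG:
            up.flags.append(rule.__name__)  # queued for manual review
        if severity[result] > severity[verdict]:
            verdict = result
    return verdict
```

The design point is that each rule stays independent: a new detector can be deployed in flag-only mode first, watched for false positives, and only later promoted to an instant-ban rule.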
We typically deploy a new function as a flagging tool until it is well-tested. We want zero false-positives before we make banning automated. (Over the last 48 hours, auto-ban has banned over 600 people and flagged another 400 for review and manual banning.)
One of the current flagging rules is a high-performance and high-accuracy search engine that identifies visually similar content. (I’m not using the specific algorithms mentioned in my blog entry, but they are close enough to understand the concept.) This system can compare one BILLION hashes per second per CPU per gigahertz, and it scales linearly. (One 3.3GHz CPU can process nearly 3 billion hashes per second — it would be faster if it wasn’t I/O bound. And I don’t use a GPU because loading and unloading the GPU would take more time than just doing the comparisons on the basic CPU.) To put it simply, it will take a fraction of a second to compare every new upload against the list of known prohibited content. And if there’s a strong match, then we know it is the same picture, even if it has been resized, recolored, cropped, etc.
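To illustrate the idea (this is not Hacker Factor’s actual algorithm; the hash values and the distance threshold below are made up), visually similar images can be matched by comparing compact perceptual hashes with a cheap Hamming-distance check, which is what makes billions of comparisons per second plausible:

```python
def hamming(a: int, b: int) -> int:
    """Number of differing bits between two 64-bit perceptual hashes."""
    return bin(a ^ b).count("1")

def is_match(candidate: int, known: list[int], threshold: int = 6) -> bool:
    """A new upload matches if some known-prohibited hash differs by only
    a few bits; resizing or recoloring an image flips only a handful."""
    return any(hamming(candidate, h) <= threshold for h in known)

# Hypothetical hash list of known prohibited pictures.
known_prohibited = [0xF0F0AABB12345678, 0x0123456789ABCDEF]

resized_copy = 0xF0F0AABB12345679  # one bit flipped by re-encoding
unrelated    = 0x0F0F551187654321

assert is_match(resized_copy, known_prohibited)
assert not is_match(unrelated, known_prohibited)
```

Because each comparison is a single XOR plus a popcount, the inner loop vectorizes trivially, which fits the claim that a plain CPU can outrun a GPU once transfer overhead is counted.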
The last two days have been a great stress test for this new profiling system. I don’t think we missed banning any of these prohibited pictures. Later this week, it is going to graduate and become fully automated. Then we can begin banning people as fast as they upload.
Faced with the growing threat of online file-sharing, Hustler committed to “turning piracy into profit” several years ago.
The company has not been very active on this front in the United States, but more so in Europe. In Finland, for example, Hustler is sending out settlement demands for hundreds of euros to alleged pirates.
A few days ago one of these letters arrived on the doorstep of Sebastian Mäki, identifying an IP-address through which he operates a Tor exit-node. According to Hustler, the IP-address had allegedly transferred a copy of Hustler’s “This Ain’t Game Of Thrones XXX.”
The letter was sent by law firm Hedman Partners, which urges Mäki to pay 600 euros ($800) in damages or face worse.
However, Mäki has no intention of paying up. Besides running a Tor exit-node and an open wireless network on the connection, he also happens to be Vice-President of a local Pirate Party branch. As such, he has a decent knowledge of how to counter these threats.
“All we can do at the moment is fight against these trolls, and they are preying on easy victims, who have no time nor energy to fight and often are afraid of the embarrassment that could follow, because apparently porn is still a taboo somewhere,” Mäki tells TorrentFreak.
So instead of paying up, the Tor exit-node operator launched a counterattack. He wrote a lengthy reply to Hustler’s lawyers accusing them of blackmail.
“According to Finnish law, wrongfully forcing someone to dispose of their financial interests is known as blackmail. Threatening to make known one’s porn watching habits unless someone coughs up money sounds to me like activities for which you can get a sentence.”
Mäki explains that an IP-address is not necessarily a person and that Hustler’s copyright trolling is likely to affect innocent Internet users. Because of this, he has decided to report these dubious practices to the police.
“I am also concerned that other innocent citizens might not have as much time, energy, or wealth to fight back. Because your actions have the potential to cause so much damage to innocent bystanders, I find it morally questionable and made a police report.”
Whether the police will follow up on the complaint remains to be seen, but Hustler will have to take its hustling elsewhere for now. They clearly targeted the wrong person here, in more ways than one.
Back in December 2013, we discussed our plans to develop an improved web browser for Raspberry Pi. The browser, based on Epiphany (aka GNOME Web), replaces the rather venerable version of Midori in Raspbian Wheezy.
Epiphany brings a host of neat features to Raspberry Pi, including:
- Much-improved HTML5 support
- Hardware-accelerated video decoding
- ARMv6-optimized blitting functions
- Better interactivity during page loading
- Faster scrolling
Future releases of Raspbian and NOOBS will include Epiphany as the default browser, but the necessary packages are already in our repository. To install, type:
sudo apt-get update
sudo apt-get dist-upgrade
sudo apt-get install epiphany-browser
Epiphany for Raspberry Pi was produced by our friends at Collabora. ARM assembly language optimisations were provided (as always) by Ben Avison.
Lennart Poettering has posted a
lengthy writeup of a plan put together by the “systemd cabal” (his
words) to rework Linux software distribution. It is based heavily on
namespaces and Btrfs snapshots. “Now, with the name-spacing concepts
we introduced above, we can actually relatively freely mix and match apps
and OSes, or develop against specific frameworks in specific versions on
any operating system. It doesn’t matter if you booted your ArchLinux
instance, or your Fedora one, you can execute both LibreOffice and Firefox
just fine, because at execution time they get matched up with the right
runtime, and all of them are available from all the operating systems you
installed. You get the precise runtime that the upstream vendor of
Firefox/LibreOffice did their testing with. It doesn’t matter anymore which
distribution you run, and which distribution the vendor prefers.”
The 3.17 development cycle continues with the release of 3.17-rc3. “As expected, it is larger
than rc2, since people are clearly getting back from their Kernel Summit
travels etc. But happily, it’s not *much* larger than rc2 was, and there’s
nothing particularly odd going on, so I’m going to just ignore the whole
‘it’s summer’ argument, and hope that things are just going that [...]”
Following last week’s leaked draft from Hollywood, Aussie ISPs including Telstra, iiNet and Optus have published their submission in response to a request by Attorney-General George Brandis and Communications Minister Malcolm Turnbull.
While the movie industry’s anti-piracy proposal demonstrates a desire to put ISPs under pressure in respect of their pirating customers, it comes as no surprise that their trade group, the Communications Alliance, has other things in mind.
The studios would like to see a change in copyright law to remove service providers’ safe harbor if they even suspect infringement is taking place on their networks but fail to take action, but the ISPs reject that.
“We urge careful consideration of the proposal to extend the authorization liability within the Copyright Act, because such an amendment has the potential to capture many other entities, including schools, universities, internet cafes, retailers, libraries and cloud-based services in ways that may hamper their legitimate activities and disadvantage consumers,” they write.
But while the ISPs are clear they don’t want to be held legally liable for customer piracy, they have given the clearest indication yet that they are in support of a piracy crackdown involving subscribers. Whether one would work is up for debate, however.
“[T]here is little or no evidence to date that [graduated response] schemes are successful, but no shortage of examples where such schemes have been
distinctly unsuccessful. Nonetheless, Communications Alliance remains willing to engage in good faith discussions with rights holders, with a view to agreeing on a scheme to address online copyright infringement, if the Government maintains that such a scheme is desirable,” they write.
If such a scheme could be agreed on, the ISPs say it would be a notice-and-notice system that didn’t carry the threat of ISP-imposed customer sanctions.
“Communications Alliance notes and supports the Government’s expectation, expressed in the paper that an industry scheme, if agreed, should not provide for the interruption of a subscriber’s internet access,” they note.
However, the ISPs do support the appointment of a “judicial/regulatory/arbitration body” with the power to apply “meaningful sanctions” to repeat infringers, though what those sanctions might be remains a mystery.
On the thorny issue of costs, the ISPs say that the rightsholders must pay for everything. Interestingly, they turn the copyright holders’ claims of huge piracy losses against them, stating that if just two-thirds of casual infringers change their ways, the video industry alone stands to generate AUS$420m (US$392m) per year. On this basis they can easily afford to pay, the ISPs say.
While warning of potential pitfalls and inadvertent censorship, the Communications Alliance accepts that done properly, the blocking of ‘pirate’ sites could help to address online piracy.
“Although site blocking is a relatively blunt instrument and has its share of weaknesses and limitations, we believe that an appropriately structured and safeguarded injunctive relief scheme could play an important role in addressing online copyright infringement in Australia,” the Alliance writes.
One area in which the ISPs agree with the movie studios is in respect of ISP “knowledge” of infringement taking place in order for courts to order a block. The system currently employed in Ireland, where knowledge is not required, is favored by both parties, but the ISPs insist that the copyright holders should pick up the bill, from court procedures to putting the blocks in place.
The Alliance also has some additional conditions. The ISPs say they are only prepared to block “clearly, flagrantly and totally infringing websites” that exist outside Australia, and only those which use piracy as their main source of revenue.
Follow the Money
Pointing to the project currently underway in the UK coordinated by the Police Intellectual Property Crime Unit, the Communications Alliance says that regardless of the outcome on blocking, a “follow the money” approach should be employed against ‘pirate’ sites. This is something they already have an eye on.
“Some ISP members of Communications Alliance already have policies in place which prevent any of their advertising spend being directed to sites that promote or facilitate improper file sharing. Discussions are underway as to whether a united approach could be adopted by ISPs whereby the industry generally agrees on measures or policies to ensure the relevant websites do not benefit from any of the industry’s advertising revenues,” the ISPs note.
Better access to legal content
The Communications Alliance adds that rightsholders need to do more to serve their customers, noting that improved access to affordable content combined with public education on where to find it is required.
“We believe that for any scheme designed to address online copyright infringement to be sustainable it must also stimulate innovation by growing the digital content market, so Australians can continue to access and enjoy new and emerging content, devices and technologies.
“The ISP members of Communications Alliance remain willing to work toward an approach that balances the interests of all stakeholders, including consumers,” they conclude.
While some harmonies exist, the submissions from the movie studios and ISPs carry significant points of contention, with each having the power to completely stall negotiations. With legislative change hanging in the air, both sides will be keen to safeguard their interests on the key issues, ISP liability especially.
Maleficent is the most downloaded movie for the third week in a row.
The data for our weekly download chart is estimated by TorrentFreak, and is for informational and educational reference only. All the movies in the list are BD/DVDrips unless stated otherwise.
RSS feed for the weekly movie download chart.
| Ranking | (last week) | Movie | IMDb Rating / Trailer |
|---|---|---|---|
| 1 | (1) | Maleficent | 7.4 / trailer |
| 2 | (2) | Godzilla (Webrip) | 7.1 / trailer |
| 3 | (5) | The Fault in Our Stars | 8.3 / trailer |
| 4 | (…) | Neighbors | 6.7 / trailer |
| 5 | (…) | Think Like a Man Too | 5.6 / trailer |
| 6 | (4) | Divergent | 7.2 / trailer |
| 7 | (3) | Captain America: The Winter Soldier | 8.1 / trailer |
| 8 | (7) | 22 Jump Street (TS) | 7.8 / trailer |
| 9 | (6) | X-Men: Days of Future Past (HDrip/TS) | 8.4 / trailer |
| 10 | (8) | The Prince | 4.6 / trailer |
Readers or “fans” of this blog have sent some pretty crazy stuff to my front door over the past few years, including a gram of heroin, a giant bag of feces, an enormous cross-shaped funeral arrangement, and a heavily armed police force. Last week, someone sent me a far less menacing package: an envelope full of cash. Granted, all of the cash turned out to be counterfeit money, but hey it’s the thought that counts, right?
This latest “donation” to Krebs On Security arrived via USPS Priority Mail, just days after I’d written about counterfeit cash sold online by a shadowy figure known only as “MrMouse.” These counterfeits had previously been offered on the “dark web” — sites only accessible using special software such as Tor — but I wrote about MrMouse’s funny money because he’d started selling it openly on Reddit, as well as on a half-dozen hacker forums that are quite reachable on the regular Internet.
Sure enough, the package contained the minimum order that MrMouse allows: $500, split up into four fake $100s and two phony $50 bills — all with different serial numbers. I have no idea who sent the bogus bills; perhaps it was MrMouse himself, hoping I’d write a review of his offering. After all, since my story about his service was picked up by multiple media outlets, he’s changed his sales thread on several crime forums to read, “As seen on KrebsOnSecurity, Business Insider and Ars Technica…”
Anyhow, it’s not every day that I get a firsthand look at counterfeit cash, so for better or worse, I decided it would be a shame not to write about it. Since I was preparing to turn the entire package over to the local cops, I was careful to handle the cash sparingly and only with gloves. At first glance, the cash does look and feel like the real thing. Closer inspection, however, reveals that these bills are fakes.
In the video below, I run the fake bills through two basic tests designed to determine the authenticity of U.S. currency: The counterfeit pen test, and ultraviolet light. As we’ll see in the video, the $50 bills shipped in this package sort of failed the pen test (the fake $100 more or less passed). However, both the $50s and $100s completely flopped on the ultraviolet test. It’s too bad more businesses don’t check bills with a cheapo ultraviolet light: the pen test apparently can be defeated easily (by using acid-free paper or by bleaching real bills and using them as a starting point).
Let’s check out the bogus Benjamins. In the image below, we can see a pretty big difference in the watermarks on both bills. The legitimate $100 bill — shown at the bottom of the picture — has a very defined image of Benjamin Franklin as a watermark. In contrast, the fake $100 up top has a much less detailed watermark. Still, without comparing the fake and the real $100 side by side, this deficiency probably would be difficult to spot for the untrained eye.
Granted, hardly any merchants are going to put a customer’s cash under a microscope before deciding whether to accept it as legal tender, but I wanted to have a look because I wasn’t sure when I’d have the opportunity to do so again. One security feature of the $20s, $50s and $100s is the use of “color shifting” ink, which makes the denomination noted in the lower right corner of the bill appear to shift in color from green to black when the bill is tilted at different angles. The fake cash pictured here does a so-so job mimicking that color-shifting feature, but upon closer inspection using a cheap $50 Celestron handheld digital microscope, we can see distinct differences.
Again, using a microscope to inspect cash for counterfeits is impractical for regular businesses in detecting bogus bills, but it nevertheless reveals interesting dissimilarities between real and fake money. Most of those differences come down to the definition and clarity of markings and lettering. For instance, embedded in the bottom of the portraits of Grant and Franklin on the $50 and $100 bills, respectively, is the same message in super-fine print: “The United States of America.” As we can see in the video below, that message also is present in the counterfeits, but it’s quite a bit less clear in the funny money.
In some cases, entire areas of the real bills are completely absent in the counterfeits. Take a close look at the area of the $50 just to the left of Gen. Grant’s ear and you will see a blob of text that repeats the phrase “USA FIFTY” several times. The image on the left shows a closeup of the legitimate $50, while the snapshot on the right reveals how the phony bill completely lacks this feature.
Similarly, the “100” in the lower left-hand corner of the $100 bill is filled in with the words “USA 100,” as we can see in the close-up of a real $100, pictured below left. Magnification of the same area on the phony $100 note (right) shows that this area is filled with nothing more than dots.
Like most counterfeit currency, these bills look and feel fairly real on casual inspection, but they’d quickly be revealed as fakes to anyone with a $9 ultraviolet pen light or a simple magnifying glass.
If someone sticks you with a counterfeit bill, don’t try to pass it off on someone else; the penalties for passing counterfeit currency with intent to defraud are severe (steep fines and up to 15 years in prison). Instead, contact your local police department or the nearest U.S. Secret Service field office and hand it over to them.
In a previous blog story I discussed
Factory Reset, Stateless Systems, Reproducible Systems & Verifiable Systems.
I now want to take the opportunity to explain where we want to
take this with
systemd in the
longer run, and what we want to build out of it. This is going to be a
longer story, so better grab a cold bottle of
Club Mate before you start reading.
Traditional Linux distributions are built around packaging systems
like RPM or dpkg, and an organization model where upstream developers
and downstream packagers are relatively clearly separated: an upstream
developer writes code, and puts it somewhere online, in a tarball. A
packager then grabs it and turns it into RPMs/DEBs. The user then
grabs these RPMs/DEBs and installs them locally on the system. For a
variety of uses this is a fantastic scheme: users have a large
selection of readily packaged software available, in mostly uniform
packaging, from a single source they can trust. In this scheme the
distribution vets all software it packages, and as long as the user
trusts the distribution all should be good. The distribution takes the
responsibility of ensuring the software is not malicious, of timely
fixing security problems and helping the user if something is wrong.
However, this scheme also has a number of problems, and doesn’t fit
many use-cases of our software particularly well. Let’s have a look at
the problems of this scheme for many upstreams:
Upstream software vendors are fully dependent on downstream
distributions to package their stuff. It’s the downstream
distribution that decides on schedules, packaging details, and how
to handle support. Often upstream vendors want much faster release
cycles than the downstream distributions follow.
Realistic testing is extremely unreliable and next to
impossible. Since the end-user can run a variety of different
package versions together, and expects the software he runs to just
work on any combination, the test matrix explodes. If upstream tests
its version on distribution X release Y, then there’s no guarantee
that that’s the precise combination of packages that the end user
will eventually run. In fact, it is very unlikely that the end user
will, since most distributions probably updated a number of
libraries the package relies on by the time the package ends up being
made available to the user. The fact that each package can be
individually updated by the user, and each user can combine library
versions, plug-ins and executables relatively freely, results in a high
risk of something going wrong.
Since there are so many different distributions in so many different
versions around, if upstream tries to build and test software for
them it needs to do so for a large number of distributions, which is
a massive effort.
The distributions are actually quite different in many ways. In
fact, they are different in a lot of the most basic
functionality. For example, the path where to put x86-64 libraries
is different on Fedora and Debian-derived systems.
Developing software for a number of distributions and versions is
hard: if you want to do it, you need to actually install them, each
one of them, manually, and then build your software for each.
Since most downstream distributions have strict licensing and
trademark requirements (and rightly so), any kind of closed source
software (or otherwise non-free) does not fit into this scheme at all.
All this together makes it really hard for many upstreams to work
nicely with the current way Linux works. Often they try to improve
the situation for them, for example by bundling libraries, to make
their test and build matrices smaller.
The toolbox approach of classic Linux distributions is fantastic for
people who want to put together their individual system, nicely
adjusted to exactly what they need. However, this is not really how
many of today’s Linux systems are built, installed or updated. If you
build any kind of embedded device, a server system, or even user
systems, you frequently do your work based on complete system images,
that are linearly versioned. You build these images somewhere, and
then you replicate them atomically to a larger number of systems. On
these systems, you don’t install or remove packages, you get a defined
set of files, and besides installing or updating the system there is
no way to change the set of tools you get.
The current Linux distributions are not particularly good at providing
for this major use-case of Linux. Their strict focus on individual
packages as well as package managers as end-user install and update
tool is incompatible with what many system vendors want.
The classic Linux distribution scheme is frequently not what end users
want, either. Many users are used to app markets like those that Android,
Windows or iOS/Mac have. Markets are a platform that doesn’t package, build or
maintain software like distributions do, but simply allows users to
quickly find and download the software they need, with the app vendor
responsible for keeping the app updated, secured, and all that on the
vendor’s release cycle. Users tend to be impatient. They want their
software quickly, and the fine distinction between trusting a single
distribution or a myriad of app developers individually is usually not
important for them. The companies behind the marketplaces usually try
to improve this trust problem by providing sand-boxing technologies: as
a replacement for the distribution that audits, vets, builds and
packages the software and thus allows users to trust it to a certain
level, these vendors try to find technical solutions to ensure that
the software they offer for download can’t be malicious.
Existing Approaches To Fix These Problems
Now, all the issues pointed out above are not new, and there are
sometimes quite successful attempts to do something about it. Ubuntu
Apps, Docker, Software Collections, ChromeOS, CoreOS all fix part of
this problem set, usually with a strict focus on one facet of Linux
systems. For example, Ubuntu Apps focus strictly on end user (desktop)
applications, and don’t care about how we built/update/install the OS
itself, or containers. Docker OTOH focuses on containers only, and
doesn’t care about end-user apps. Software Collections tries to focus
on the development environments. ChromeOS focuses on the OS itself,
but only for end-user devices. CoreOS also focuses on the OS, but
only for server systems.
The approaches they find are usually good at specific things, and use
a variety of different technologies, on different layers. However,
none of these projects tried to fix these problems in a generic way,
for all uses, right in the core components of the OS itself.
Linux has seen tremendous success because its kernel is so
generic: you can build supercomputers and tiny embedded devices out of
it. It’s time we come up with a basic, reusable scheme for solving
the problem set described above that is equally generic.
What We Want
The systemd cabal (Kay Sievers, Harald Hoyer, Daniel Mack, Tom
Gundersen, David Herrmann, and yours truly) recently met in Berlin
about all these things, and tried to come up with a scheme that is
somewhat simple, but tries to solve the issues generically, for all
use-cases, as part of the systemd project. All that in a way that is
somewhat compatible with the current scheme of distributions, to allow
a slow, gradual adoption. Also, and that’s something one cannot stress
enough: the toolbox scheme of classic Linux distributions is
actually a good one, and for many cases the right one. However, we
need to make sure we make distributions relevant again for all
use-cases, not just those of highly individualized systems.
Anyway, so let’s summarize what we are trying to do:
We want an efficient way that allows vendors to package their
software (regardless if just an app, or the whole OS) directly for
the end user, and know the precise combination of libraries and
packages it will operate with.
We want to allow end users and administrators to install these
packages on their systems, regardless which distribution they have
installed on it.
We want a unified solution that ultimately can cover updates for
full systems, OS containers, end user apps, programming ABIs, and
more. These updates shall be double-buffered (at least). This is an
absolute necessity if we want to prepare the ground for operating
systems that manage themselves, that can update safely without the
risk of breaking the system.
We want our images to be trustable (i.e. signed). In fact we want a
fully trustable OS, with images that can be verified by a full
trust chain from the firmware (EFI SecureBoot!), through the boot loader, through the
kernel, and initrd. Cryptographically secure verification of the
code we execute is relevant on the desktop (like ChromeOS does), but
also for apps, for embedded devices and even on servers (in a post-Snowden
world, in particular).
What We Propose
So much for the set of problems, and what we are trying to do. So,
now, let’s discuss the technical bits we came up with:
The scheme we propose is built around a variety of concepts from btrfs
and Linux file system name-spacing. btrfs at this point already has a
large number of features that fit neatly in our concept, and the
maintainers are busy working on a couple of others we want to
eventually make use of.
As the first part of our proposal we make heavy use of btrfs sub-volumes and
introduce a clear naming scheme for them. We name snapshots like this:
usr:<vendorid>:<architecture>:<version> — This refers to a full
vendor operating system tree. It’s basically a /usr tree (and no
other directories), in a specific version, with everything you need to boot
it up inside it. The <vendorid> field is replaced by some vendor
identifier, maybe a scheme like reverse DNS domain names. The
<architecture> field specifies a CPU architecture the OS is designed
for, for example x86_64. The <version> field specifies a specific OS
version, for example 23.4.
root:<name>:<vendorid>:<architecture> — This refers to an
instance of an operating system. It’s basically a root directory,
containing primarily /etc and /var (but possibly more). Sub-volumes
of this type do not contain a populated /usr tree though. The
<name> field refers to some instance name (maybe the host name of
the instance). The other fields are defined as above.
runtime:<vendorid>:<architecture>:<version> — This refers to a
vendor runtime. A runtime here is supposed to be a set of
libraries and other resources that are needed to run apps (for the
concept of apps see below), all in a /usr tree. In this regard it
is very similar to the usr sub-volumes explained above; however,
while a usr sub-volume is a full OS and contains everything
necessary to boot, a runtime is really only a set of
libraries. You cannot boot it, but you can run apps with it.
framework:<vendorid>:<architecture>:<version> — This is very
similar to a vendor runtime, as described above; it contains just a
/usr tree, but goes one step further: it additionally contains all
development headers, compilers and build tools that allow
developing against a specific runtime. For each runtime there should
be a framework. When you develop against a specific framework in a
specific architecture, then the resulting app will be compatible
with the runtime of the same vendor ID and architecture.
app:<vendorid>:<runtime>:<architecture>:<version> — This
encapsulates an application bundle. It contains a tree that at
runtime is mounted to /opt/<vendorid>, and contains all the
application’s resources. The <vendorid> field is a vendor
identifier as above. The <runtime> field refers to the vendor id of
one specific runtime the application is built for. <architecture> and
<version> refer to the architecture the application is built for,
and of course its version.
home:<user>:<uid>:<gid> — This sub-volume shall refer to the home
directory of the specific user. The <user> field contains the user
name, the <uid> and <gid> fields the numeric Unix UIDs and GIDs
of the user. The idea here is that in the long run the list of
sub-volumes is sufficient as a user database (but see below).
btrfs partitions that adhere to this naming scheme should be clearly
identifiable. It is our intention to introduce a new GPT partition type
ID for this.
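To get a feel for the naming scheme, the colon-separated fields can be handled with plain shell string operations. This is only a sketch: the vendor IDs and versions below are made up for illustration, and creating such sub-volumes for real would just be `btrfs subvolume create <name>` on an actual btrfs volume (which needs root, so it is only shown in a comment here):

```shell
#!/bin/sh
# Parse the proposed colon-separated sub-volume names with pure string
# operations; no btrfs volume is needed to experiment with the scheme itself.
subvol_type()    { printf '%s\n' "${1%%:*}"; }  # first field: usr/root/runtime/...
subvol_version() { printf '%s\n' "${1##*:}"; }  # last field: the version
                                                # (for usr/runtime/framework/app)

# Hypothetical names following the scheme:
USR="usr:org.example.os:x86_64:23.4"
RT="runtime:org.example.gnome:x86_64:3.20.1"

echo "$(subvol_type "$USR") at version $(subvol_version "$USR")"
# -> usr at version 23.4

# On a real btrfs volume the matching creation step would be (root required):
#   btrfs subvolume create "usr:org.example.os:x86_64:23.4"
```

The leading type field is what the execution logic would dispatch on, and the trailing version field is what version-selection sorts on.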
How To Use It
Having introduced this naming scheme, let’s see what we can build out of it:
When booting up a system we mount the root directory from one of the
root sub-volumes, and then mount /usr from a matching usr
sub-volume. Matching here means it carries the same
<architecture> field. Of course, by default we should pick the
usr sub-volume with the newest version.
When we boot up an OS container, we do exactly the same as when
we boot up a regular system: we simply combine a root sub-volume
with a usr sub-volume.
When we enumerate the system’s users we simply go through the list of home sub-volumes.
When a user authenticates and logs in we mount his home
directory from his snapshot.
When an app is run, we set up a new file system name-space, mount the
app sub-volume to /opt/<vendorid>/, the appropriate runtime
sub-volume the app picked to /usr, as well as the user’s home
directory to /home/$USER.
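Those app-launch mounts can be sketched as a dry run, where a `run` helper only prints the commands it would execute; the device path, the user, and the runtime version chosen are invented for illustration (in real use this would happen inside a fresh mount name-space, e.g. via `unshare --mount`):

```shell
#!/bin/sh
# Dry-run: print the mounts an app launch would perform inside a new
# mount name-space. Nothing is actually mounted.
run() { echo "+ $*"; }

DEV=/dev/disk/by-label/system                 # hypothetical btrfs device
APP="app:org.example.editor:org.example.rt:x86_64:1.0"
VENDORID=$(echo "$APP" | cut -d: -f2)         # org.example.editor
RUNTIME=$(echo "$APP" | cut -d: -f3)          # runtime vendor id the app picked

run mount -o subvol="$APP" "$DEV" "/opt/$VENDORID"
run mount -o subvol="runtime:$RUNTIME:x86_64:2.1" "$DEV" /usr
run mount -o subvol="home:joe:1000:1000" "$DEV" /home/joe
```

Note that everything the app sees is assembled from the name fields of its own sub-volume: the runtime it was built against is looked up by vendor id, never guessed.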
When a developer wants to develop against a specific runtime he
installs the right framework, and then temporarily transitions into
a name space where /usr is mounted from the framework sub-volume, and
/home/$USER from his own home directory. In this name space he then
runs his build commands. He can build in multiple name spaces at the
same time, if he intends to build software for multiple runtimes or
architectures at the same time.
Instantiating a new system or OS container (which is exactly the same
in this scheme) just consists of creating a new appropriately named
root sub-volume. Completely naturally you can share one vendor OS
copy in one specific version with a multitude of container instances.
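That instantiation step can be sketched the same dry-run way (`run` only prints); the OS identifier and machine name are invented, and the `systemd-nspawn` invocation is only meant to illustrate booting the new root as a container:

```shell
#!/bin/sh
# Dry-run: instantiating a new OS container is just creating a root
# sub-volume that references the shared vendor usr tree.
run() { echo "+ $*"; }

NAME=webserver1
ROOT="root:$NAME:org.example.os:x86_64"

run btrfs subvolume create "/volume/$ROOT"
run systemd-nspawn --directory="/volume/$ROOT" --machine="$NAME"
```

Ten containers sharing one vendor OS copy are then just ten cheap root sub-volumes next to a single usr sub-volume.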
Everything is double-buffered (or actually, n-ary-buffered), because
usr, runtime, framework and app sub-volumes can exist in multiple
versions. Of course, by default the execution logic should always pick
the newest release of each sub-volume, but it is up to the user to keep
multiple versions around, and possibly execute older versions, if he
desires to do so. In fact, like on ChromeOS this could even be handled
automatically: if a system fails to boot with a newer snapshot, the
boot loader can automatically revert to an older version of the OS.
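The “pick the newest version” logic can be sketched with GNU sort’s version ordering over the last name field; the sub-volume names here are invented for illustration:

```shell
#!/bin/sh
# Pick the newest version among sibling sub-volumes of one vendor tree,
# by sorting the fourth colon-separated field (the version) in version order.
pick_newest() { printf '%s\n' "$@" | sort -t: -k4,4V | tail -n1; }

NEWEST=$(pick_newest \
    "usr:org.example.os:x86_64:23.4" \
    "usr:org.example.os:x86_64:23.10" \
    "usr:org.example.os:x86_64:24.1")
echo "$NEWEST"   # -> usr:org.example.os:x86_64:24.1
```

Version ordering (not plain lexical ordering) matters here: lexically “23.10” would sort before “23.4”, which is exactly the kind of mistake an automatic boot fallback cannot afford.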
Note that in result this allows installing not only multiple end-user
applications into the same btrfs volume, but also multiple operating
systems, multiple system instances, multiple runtimes, multiple
frameworks. Or to spell this out in an example:
Let’s say Fedora, Mandriva and ArchLinux all implement this scheme,
and provide ready-made end-user images. Also, the GNOME, KDE, SDL
projects all define a runtime+framework to develop against. Finally,
both LibreOffice and Firefox provide their stuff according to this
scheme. You can now trivially install all of these into the same btrfs
volume.
In the example above, we have three vendor operating systems
installed. All of them in three versions, and one even in a beta
version. We have four system instances around. Two of them of Fedora,
maybe one of them we usually boot from, the other we run for very
specific purposes in an OS container. We also have the runtimes for
two GNOME releases in multiple versions, plus one for KDE. Then, we
have the development trees for one version of KDE and GNOME around, as
well as two apps, that make use of two releases of the GNOME
runtime. Finally, we have the home directories of two users.
Now, with the name-spacing concepts we introduced above, we can
actually relatively freely mix and match apps and OSes, or develop
against specific frameworks in specific versions on any operating
system. It doesn’t matter if you booted your ArchLinux instance, or
your Fedora one, you can execute both LibreOffice and Firefox just
fine, because at execution time they get matched up with the right
runtime, and all of them are available from all the operating systems
you installed. You get the precise runtime that the upstream vendor of
Firefox/LibreOffice did their testing with. It doesn’t matter anymore
which distribution you run, and which distribution the vendor prefers.
Also, given that the user database is actually encoded in the
sub-volume list, it doesn’t matter which system you boot, the
distribution should be able to find your local users automatically,
without any configuration in /etc/passwd.
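How a passwd-style record could be derived purely from such a sub-volume name can be sketched like this; the user name, IDs and shell are invented, and a real implementation would of course enumerate the actual sub-volume list rather than a literal string:

```shell
#!/bin/sh
# Turn a home:<user>:<uid>:<gid> sub-volume name into a minimal
# passwd-style entry, using only shell string splitting.
home_to_passwd() {
    oldIFS=$IFS; IFS=:
    set -- $1            # split the name on ':' into positional parameters
    IFS=$oldIFS
    # $1=home $2=user $3=uid $4=gid
    printf '%s:x:%s:%s::/home/%s:/bin/sh\n' "$2" "$3" "$4" "$2"
}

home_to_passwd "home:joe:1000:1000"
# -> joe:x:1000:1000::/home/joe:/bin/sh
```

Everything a login manager needs to enumerate users is already encoded in the name, which is the point of the “sub-volume list as user database” idea.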
With this naming scheme plus the way we can combine the sub-volumes at
execution time we already came quite far, but how do we actually get these
sub-volumes onto the final machines, and how do we update them? Well,
btrfs has a feature they call “send-and-receive”. It basically allows
you to “diff” two file system versions, and generate a binary
delta. You can generate these deltas on a developer’s machine and then
push them into the user’s system, and he’ll get the exact same
sub-volume too. This is how we envision installation and updating of
operating systems, applications, runtimes, frameworks. At installation
time, we simply deserialize an initial send-and-receive delta into
our btrfs volume, and later, when a new version is released we just
add in the few bits that are new, by dropping in another
send-and-receive delta under a new sub-volume name. And we do it
exactly the same for the OS itself, for a runtime, a framework or an
app. There’s no technical distinction anymore. The underlying
operation for installing apps, runtime, frameworks, vendor OSes, as well
as the operation for updating them is done the exact same way for all.
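The install/update flow can be sketched as a dry run (`run` just prints the shell pipelines; the paths and versions are invented): initial installation deserializes a full stream, and an update ships only the delta against the parent version already on the machine, via `btrfs send -p`:

```shell
#!/bin/sh
# Dry-run sketch of install/update via btrfs send-and-receive.
# Real use needs root and actual read-only btrfs snapshots.
run() { echo "+ $*"; }

VOL=/volume                               # target btrfs volume (hypothetical)
OLD="usr:org.example.os:x86_64:23.4"      # version already installed
NEW="usr:org.example.os:x86_64:23.5"      # freshly released version

# Initial installation: a full serialized sub-volume stream.
run "btrfs send /build/$OLD | btrfs receive $VOL"

# Update: only the binary delta between the two read-only snapshots.
run "btrfs send -p /build/$OLD /build/$NEW | btrfs receive $VOL"
```

The same two pipelines apply unchanged whether the sub-volume is an OS, a runtime, a framework or an app, which is the “no technical distinction anymore” point above.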
Of course, keeping multiple full /usr trees around sounds like an
awful lot of waste, after all they will contain a lot of very similar
data, since a lot of resources are shared between distributions,
frameworks and runtimes. However, thankfully btrfs actually is able to
de-duplicate this for us. If we add in a new app snapshot, this simply
adds in the new files that changed. Moreover different runtimes and
operating systems might actually end up sharing the same tree.
Even though the example above focuses primarily on the end-user,
desktop side of things, the concept is also extremely powerful in
server scenarios. For example, it is easy to build your own usr
trees and deliver them to your hosts using this scheme. The
sub-volumes are supposed to be something that administrators can put
together. After deserializing them into a couple of hosts, you can
trivially instantiate them as OS containers there, simply by adding a
root sub-volume for each instance, referencing the
usr tree you
just put together. Instantiating OS containers hence becomes as easy
as creating a new btrfs sub-volume. And you can still update the images
nicely, get fully double-buffered updates and everything.
And of course, this scheme also applies great to embedded
use-cases. Regardless if you build a TV, an IVI system or a phone: you
can put together your OS versions as
usr trees, and then use
btrfs-send-and-receive facilities to deliver them to the systems, and
update them there.
Many people when they hear the word “btrfs” instantly reply with “is
it ready yet?”. Thankfully, most of the functionality we really need
here is strictly read-only. With the exception of the home
sub-volumes (see below) all snapshots are strictly read-only, and are
delivered as immutable vendor trees onto the devices. They never are
changed. Even if btrfs might still be immature, for this kind of
read-only logic it should be more than good enough.
Note that this scheme also enables doing fat systems: for example,
an installer image could include a Fedora version compiled for x86-64,
one for i386, one for ARM, all in the same btrfs volume. Due to btrfs’
de-duplication they will share as much as possible, and when the image
is booted up the right sub-volume is automatically picked. Something
similar of course applies to the apps too!
This also allows us to implement something that we like to call
Operating-System-As-A-Virus. Installing a new system is little more than:
- Creating a new GPT partition table
- Adding an EFI System Partition (FAT) to it
- Adding a new btrfs volume to it
- Deserializing a single usr sub-volume into the btrfs volume
- Installing a boot loader
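Those five steps, as a dry-run sketch (`run` only prints the commands; the device name, partition sizes and the `bootctl` invocation are illustrative assumptions, and the real thing needs root):

```shell
#!/bin/sh
# Dry-run of the installation steps; nothing is actually written anywhere.
run() { echo "+ $*"; }

DEV=/dev/sdX                                 # placeholder target device
run sgdisk --zap-all "$DEV"                  # 1. fresh GPT partition table
run sgdisk -n 1:0:+512M -t 1:ef00 "$DEV"     # 2. EFI System Partition
run mkfs.fat -F32 "${DEV}1"
run sgdisk -n 2:0:0 "$DEV"                   # 3. btrfs volume on the rest
run mkfs.btrfs "${DEV}2"
run mount "${DEV}2" /mnt
run "btrfs receive /mnt < usr.stream"        # 4. deserialize the usr sub-volume
run bootctl --esp-path=/mnt/boot install     # 5. install a boot loader
```

Note how step 4 is the only step that carries any vendor data at all, which is what makes the replicate-onto-a-USB-stick trick below work.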
Now, since the only real vendor data you need is the usr sub-volume,
you can trivially duplicate this onto any block device you want. Let’s
say you are a happy Fedora user, and you want to provide a friend with
his own installation of this awesome system, all on a USB stick. All
you have to do for this is do the steps above, using your installed
usr tree as source to copy. And there you go! And you don’t have to
be afraid that any of your personal data is copied too, as the usr
sub-volume is the exact version your vendor provided you with. Or in
other words: there’s no distinction anymore between installer images
and installed systems. It’s all the same. Installation becomes
replication, not more. Live-CDs and installed systems can be fully
identical.
Note that in this design apps are actually developed against a single,
very specific runtime, that contains all libraries it can link against
(including a specific glibc version!). Any library that is not
included in the runtime the developer picked must be included in the
app itself. This is similar to how apps on Android declare one very
specific Android version they are developed against. This greatly
simplifies application installation, as there’s no dependency hell:
each app pulls in one runtime, and the app is actually free to pick
which one, as you can have multiple installed, though only one is used
by each app.
Also note that operating systems built this way will never see
“half-updated” systems, as it is common when a system is updated using
RPM/dpkg. When updating the system the code will either run the old or
the new version, but it will never see part of the old files and part
of the new files. This is the same for apps, runtimes, and frameworks, too.
Where We Are Now
We are currently working on a lot of the groundwork necessary for
this. This scheme relies on the ability to monopolize the
vendor OS resources in /usr, which is the key of what I described in
Factory Reset, Stateless Systems, Reproducible Systems & Verifiable Systems
a few weeks back. Then, of course, for the full desktop app concept we
need a strong sandbox, that does more than just hiding files from the
file system view. After all with an app concept like the above the
primary interfacing between the executed desktop apps and the rest of the
system is via IPC (which is why we work on kdbus and teach it all
kinds of sand-boxing features), and the kernel itself. Harald Hoyer has
started working on generating the btrfs send-and-receive images.
Getting to the full scheme will take a while. Currently we have many
of the building blocks ready, but some major items are missing. For
example, we push quite a few problems into btrfs, that other solutions
try to solve in user space. One of them is actually
signing/verification of images. The btrfs maintainers are working on
adding this to the code base, but currently nothing exists. This
functionality is essential though to come to a fully verified system
where a trust chain exists all the way from the firmware to the
apps. Also, to make the
home sub-volume scheme fully workable we
actually need encrypted sub-volumes, so that the sub-volume’s
pass-phrase can be used for authenticating users in PAM. This doesn’t
exist yet.
Working towards this scheme is a gradual process. Many of the steps we
require for this are useful outside of the grand scheme though, which
means we can slowly work towards the goal, and our users can already
benefit from what we are working on as we go.
Also, and most importantly, this is not really a departure from
traditional operating systems:
Each app and each OS sees a traditional Unix hierarchy with
/usr, /home, /opt, /var, /etc. It executes in an environment that is
pretty much identical to how it would be run on traditional systems.
There’s no need to fully move to a system that uses only btrfs and
follows strictly this sub-volume scheme. For example, we intend to
provide implicit support for systems that are installed on ext4 or
xfs, or that are put together with traditional packaging tools such as
RPM or dpkg: if the user tries to install a
runtime/app/framework/os image on a system that doesn’t use btrfs so
far, it can just create a loop-back btrfs image in /var, and push the
data into that. Even we developers will run our stuff like this for a
while, after all this new scheme is not particularly useful for highly
individualized systems, and we developers usually tend to run
systems like that.
Also note that this is in no way a departure from packaging systems like
RPM or DEB. Even if the new scheme we propose is used for installing
and updating a specific system, it is RPM/DEB that is used to put
together the vendor OS tree initially. Hence, even in this scheme
RPM/DEB are highly relevant, though not strictly as an end-user tool
anymore, but as a build tool.
So Let’s Summarize Again What We Propose
We want a unified scheme for how we can install and update OS images,
user apps, runtimes and frameworks.
We want a unified scheme for how you can relatively freely mix OS
images, apps, runtimes and frameworks on the same system.
We want a fully trusted system, where cryptographic verification of
all executed code can be done, all the way to the firmware, as
standard feature of the system.
We want to allow app vendors to write their programs against very
specific frameworks, in the knowledge that they will end up being
executed with the exact same set of libraries chosen.
We want to allow parallel installation of multiple OSes and versions
of them, multiple runtimes in multiple versions, as well as multiple
frameworks in multiple versions. And of course, multiple apps in
multiple versions, too.
We want everything double-buffered (or actually n-ary-buffered), to
ensure we can reliably update/rollback versions, in particular to
safely do automatic updates.
We want a system where updating a runtime, OS, framework, or OS
container is as simple as adding in a new snapshot and restarting
the runtime/OS/framework/OS container.
We want a system where we can easily instantiate a number of OS
instances from a single vendor tree, with zero difference whether we
boot them on bare metal, in a VM or as a container.
We want to enable Linux to have an open scheme that people can use
to build app markets and similar schemes, not restricted to a single
vendor.
I’ll be talking about this at LinuxCon Europe in October. I originally
intended to discuss this at the Linux Plumbers Conference (which I
assumed was the right forum for this kind of major plumbing level
improvement), and at linux.conf.au, but there was no interest in my
session submissions there…
Of course this is all work in progress. These are our current ideas we
are working towards. As we progress we will likely change a number of
things. For example, the precise naming of the sub-volumes might look
very different in the end.
Of course, we are developers of the systemd project. Implementing this
scheme is not just a job for the systemd developers. This is a
reinvention of how distributions work, and hence needs great support from
the distributions. We really hope we can trigger some interest by
publishing this proposal now, to get the distributions on board. This
after all is explicitly not supposed to be a solution for one specific
project or one specific vendor; we care about making this
open, and solving it for the generic case, without cutting corners.
If you have any questions about this, you know how you can reach us
(IRC, mail, G+, …).
The future is going to be awesome!
Home mass manufacturing of copies of culture and knowledge started some time in the 1980s with the Cassette Tape, the first widely available self-contained unit capable of recording music. It made the entire copyright industry go up in arms and demand “compensation” for activities that were not covered by their manufacturing monopoly, which is why we now pay protection money to the copyright industry in many countries for everything from cellphones to games consoles.
The same industry demanded harsh penalties – criminal penalties – for those who manufactured copies at home without a license rather than buying the expensive premade copies. Over the next three decades, such criminal penalties gradually crept into law, mostly because no politician thinks the issue is important enough to defy anybody on.
A couple of key patent monopolies on 3D printing are expiring as we speak, making next-generation 3D printing much, much higher quality. 3D printers such as this one are now appearing on Kickstarter, “printers” (more like fabs) that use laser sintering and similar technologies instead of layered melt deposit.
We’re now somewhere in the 1980s-equivalent of the next generation of copyright monopoly wars, which is about to spread to physical objects. The copyright industry is bad – downright atrociously cynically evil, sometimes – but nobody in the legislature gives them much thought. Wait until this conflict spreads outside the copyright industry, spreads to pretty much every manufacturing industry.
People are about to be sued out of their homes for making their own slippers instead of buying a pair.
If you think that sounds preposterous, consider that this is exactly what has been going on in the copyright monopoly wars so far, with people manufacturing their own copies of culture and knowledge instead of buying ready-made copies. Legally, there is no difference when it comes to manufacturing a pair of slippers without a license.
To be fair, a pair of slippers may be covered by more monopolies than just the copyright monopoly (the drawing) – it may be covered by a utility patent monopoly, a design patent monopoly, possibly a geographic indication if it’s some weird type of slipper, and many more arcane and archaic types of monopolies. Of course, people in general can’t tell the difference between a “utility patent”, a “design patent”, a “copyright duplication right”, a “copyright broadcast right”, a “related right”, and so on. To most people, it’s all just “the copyright monopoly” in broad strokes.
Therefore, it’s irrelevant to most people whether the person who gets sued out of their home for fabbing their own slippers from a drawing they found is technically found guilty of infringing the copyright monopoly (maybe) or a design patent (possibly). To 95% or more, it’s just “more copyright monopoly bullshit”. And you know what? Maybe that’s good.
The next generation of wars over knowledge, culture, drawings, information, and data is just around the corner, and it’s going to get much uglier, with higher stakes on all sides. People have already been elected to parliaments (and have stayed there) on this conflict just as it stands now. As this divide deepens, and nothing suggests it won’t, people will start to pay more attention.
And maybe, just maybe, that will be the beginning of the end of these immoral and deeply unjust monopolies known as copyrights and patents.
My brother and I didn’t get along as children. He is almost four years younger than me. The difference isn’t big, but still… Oddly enough, none of my friends has a brother; they all have sisters. So I had no basis for comparison. The discord with my brother always weighed on my soul. It shaped my character a great deal as a teenager, and later as an adult. I grew up convinced that I was not understood (though not unloved), and I always approached new social contacts with the thought: “You won’t be understood, take it slow and easy.” There was one thing, though, that bound us together strongly: football. We played together very often, and I watched with pleasure as he became a very good footballer. But he chose not to pursue a professional career.
I left my home town of Burgas at nineteen to study in Sofia. I spent six years and four months in the capital. At first, as a student, I still spent the summers by the sea. My brother and I got together only for kickabouts. No beers, no parties together. I don’t know whether that is normal or not; I had no basis for comparison. But the thought gnawed at me, the constant itch that we should go out somewhere for a beer or two. To talk about something other than football. About women, drinking, politics, our dreams, our goals, the events of our lives. Our communication was no deeper than that of two distant acquaintances who meet rarely, and only to play football together.
It hurt. I withdrew further and further into myself. I blamed myself! Because maybe… no, not maybe, certainly, I bore the greater share of the blame for it. With my domineering character and unwitting commanding tone, I simply pushed people away from me. My brother too!
In 2008 I emigrated from Bulgaria to the Czech Republic! A better job, better conditions, far from that country, and above all far from the people I had pushed away. A new beginning, perhaps?
Of course, when you live and work not 400 but 1,400 km from your family, you go home far less often. Twice a year, exactly twenty days of leave in total. Christmas and the summer! Easter and the summer! Only the summer… two weeks at a time.
There is no time left to put down roots. You are a guest… in the home where you grew up. In the town where you took your first steps.
The town changes, the streets too; people change, grow up, their minds shift.
I was even more alone! But that didn’t torment me. What tormented me was that I had a brother… and yet didn’t. Selfish! What tormented me was that I bore the greater blame for it!
On the other hand, it seemed to help. When you see someone you love only ten days a year, you learn to swallow things, not to explode over every little thing. Or maybe I simply grew up? I don’t know. It just happened. I swallowed things, I learned to let them pass. The stupid things. Life became easier.
But time went by. So many years lost in squabbles over nonsense… More time was needed.
Last summer my brother and I had a very intense, open, direct and emotional conversation. I saw things more clearly; I saw my mistakes. I returned to Germany determined to file down and polish this character of mine. To learn to listen rather than demand. To give, not to take.
A year went by! At my new job we are allowed to work from home, wherever that home may be! So I went back not for two weeks, but for almost two months. My brother was getting married! I had to be there for longer… I didn’t want to be a mere guest at his wedding!
The problem with the guest status of an emigrant home on leave is simple. Brothers, sisters and parents don’t want to bother you with everyday problems. Who said what to whom, whose cat peed on whose clothes. They want to be near you, not to burden you with their problems… but why did you come home, if not to be with them, to hear them out, to help if you can? So I decided: time was the key factor.
I arrived a month before the wedding. My brother organised a kickabout, and afterwards we each went home.
This was going to take time, and even more patience!
So I armed myself with patience! The best way to teach an impatient man to wait is to make him photograph a rare bird. It takes stalking, waiting, silence.
Burgas is the perfect place for birdwatching. With over 70% of the bird traffic through Bulgaria passing through or staying around Burgas, my chances were enormous.
I loaded up the car, the binoculars and the camera!
I stalked storks, pink pelicans, gulls, plovers… and the crown jewel was the kingfisher at Poda.
That kingfisher… we went to Poda and the girl on duty started explaining things to us. Having already learned to keep quiet and be patient, I did exactly that: I kept quiet and waited. I watched, I took pictures. Suddenly something blue-green flew past the edge of my vision (when you watch birds, you learn to use your peripheral vision a great deal). The girl said: a kingfisher; if we are patient and quiet, it sometimes lands on that pole over there. We waited patiently, but it didn’t land.
We turned to other parts of the area, waiting for a pair of sea eagles. While the others were looking away, I turned around… and there it was, the kingfisher! I shot and shot and shot… and I was happy, because I had learned patience. I don’t know how long it took; it felt like an instant from the moment we started waiting for it… later I found out it had been a full thirty minutes!
When had I ever waited patiently for anything for half an hour!
I was pleased; I had far more patience now! I was in no hurry to get anywhere. I had learned! It took years, but I was there… patiently waiting for my moment. After all, my brother was getting married, and he had a lot on his plate.
On the day of the wedding we drove maybe a hundred kilometres around Burgas for flowers, for bouquets, for this and that. I loved every minute of it! A whole day with him! For the first time in many years. And for the first time it felt light and easy. I simply enjoyed the moment… well, at the very end I nearly ruined it, but I quickly corrected my mistake! That’s still me, after all, even after plenty of practice… there is room for more. The important thing is to fix your mistakes on the spot. To say: I’m sorry!
Afterwards we spent an unforgettable day in Kotel, our favourite place since childhood! I made no mistakes there… it was wonderful!
My brother and I are very different: he is quiet, gentle, always calm, always with a comment at the ready. I am wild, hot-tempered, impulsive. I love to travel, to wake up in new places, to try new things. But that doesn’t seem to be the obstacle; the obstacle is understanding the other person, walking in his shoes. It doesn’t matter what you are like, or what the person you love is like; you simply have to understand him!
I’m looking forward to Christmas and the month I have planned for then.
Maybe we’ll even have beers with barbecued chicken in the village?
Every licensed Blu-ray playback device since 2012 has supported the technology which is designed to limit the usefulness of pirated content. Illicit copies of movies protected by Cinavia work at first, but after a few minutes playback is halted and replaced by a warning notice.
This is achieved by a complex watermarking system that not only protects retail media but also illicit recordings of first-run movies. Now Verance has been awarded a patent for a new watermarking system with fresh aims in mind.
The patent, ‘Watermarking in an encrypted domain’, begins with a description of how encryption can protect multimedia content from piracy during storage or while being transported from one location to another.
“The encrypted content may be securely broadcast over the air, through the Internet, over cable networks, over wireless networks, distributed via storage media, or disseminated through other means with little concern about piracy of the content,” Verance begins.
Levels of security vary, Verance explains, depending on the strength of encryption algorithms and encryption key management. However, at some point content needs to be decrypted in order for it to be processed or consumed, and at this point it is vulnerable to piracy and distribution.
“This is particularly true for multimedia content that must inevitably be converted to audio and/or visual signals (e.g., analog format) in order to reach an audience,” Verance explains.
While the company notes that at this stage content is vulnerable to copying, solutions are available to help protect against what it describes as the “analog hole”. As the creator of Cinavia, it’s no surprise Verance promotes watermarking.
“Digital watermarking is typically referred to as the insertion of auxiliary information bits into a host signal without producing perceptible artifacts,” Verance explains.
In other words, content watermarked effectively will carry such marks regardless of further distribution, copying techniques, or deliberate attacks designed to remove them. Cinavia is one such example, the company notes.
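The quoted definition (auxiliary bits inserted into a host signal without perceptible artifacts) can be illustrated with the simplest textbook scheme: least-significant-bit embedding. The sketch below is a generic illustration only, not Cinavia's design; Cinavia is proprietary and built to survive re-encoding and analog capture in ways that naive LSB marking cannot.

```python
# Toy least-significant-bit (LSB) watermark: embed auxiliary bits into a
# host signal with minimal perceptible change. Illustrative only; not a
# robust watermark and not Cinavia's (proprietary) scheme.

def embed_watermark(samples: list[int], bits: list[int]) -> list[int]:
    """Overwrite the least significant bit of each sample with a payload bit."""
    marked = samples.copy()
    for i, bit in enumerate(bits):
        marked[i] = (marked[i] & ~1) | bit  # clear the LSB, set the payload bit
    return marked

def extract_watermark(samples: list[int], n_bits: int) -> list[int]:
    """Read the payload back out of the least significant bits."""
    return [samples[i] & 1 for i in range(n_bits)]

host = [200, 113, 64, 37, 90, 151, 12, 255]   # e.g. 8-bit audio samples
payload = [1, 0, 1, 1]
marked = embed_watermark(host, payload)
assert extract_watermark(marked, len(payload)) == payload
# Each sample changes by at most 1, so the mark is essentially inaudible.
```

Robust watermarks like Cinavia's instead spread the payload across many samples in a transform domain, which is what lets them survive lossy compression and microphone recapture.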
However, Verance admits that watermarking has limitations. In a supply chain, for example, the need to watermark already encrypted content can trigger time-intensive operations. For this, the company says it has a solution.
Verance has come up with a system with the ability to insert watermarks into content that has already been compressed and encrypted, without the need for decryption, decompression, or subsequent re-compression and re-encryption.
In terms of an application, Verance describes an example workflow in which movie content could be watermarked and then encrypted in order to protect it during distribution. The system has the ability to further watermark encrypted content as it passes through various supply chain stages and locations without compromising its security.
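The patent does not disclose its mechanism, but the general idea of marking ciphertext without holding the key can be seen in a toy example: a XOR stream cipher is malleable, so a pattern XORed into the ciphertext by an intermediary surfaces in the plaintext after decryption. This is purely illustrative and hypothetical; malleability is normally considered a cipher weakness, and a real system would need a format-aware, key-managed design.

```python
# Toy demonstration of inserting a mark into ciphertext without the key,
# exploiting the malleability of a XOR stream cipher. NOT Verance's method;
# a conceptual sketch only.
import os

def xor_bytes(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

key = os.urandom(16)                      # keystream shared by studio and player
plaintext = b"movie frame data"           # 16 bytes of "content"
ciphertext = xor_bytes(plaintext, key)    # studio encrypts before distribution

# A distribution node, holding NO key, XORs a watermark pattern into the
# ciphertext. Because XOR commutes, the same pattern appears in the
# plaintext once the end device decrypts.
watermark = bytes([0x01] * 4 + [0x00] * 12)   # mark only the first four bytes
marked_ciphertext = xor_bytes(ciphertext, watermark)

decrypted = xor_bytes(marked_ciphertext, key)
recovered_mark = xor_bytes(decrypted, plaintext)
assert recovered_mark == watermark        # the mark survived decryption
```

The appeal for a supply chain is exactly what Verance describes: each node can stamp its identity into the stream without ever seeing the content in the clear.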
“In a forensic tracking application, a digital movie, after appropriate post production processing, may be encrypted at the movie studio or post production house, and sent out for distribution to movie theaters, to on-line retailers, or directly to the consumer,” Verance explains.
“In such applications, it is often desired to insert forensic or transactional watermarks into the movie content to identify each entity or node in the distribution channel, including the purchasers of the content, the various distributors of the content, the presentation venue and the time/date/location of each presentation or purchase.”
Verance believes that being able to track distribution points, sales locations such as movie theaters or stores, and even end users will be a big plus for adopters. Those up for the complex analysis can see how the company intends to work its magic by viewing its extremely technical and lengthy patent.
One way to prevent this from happening is by using SSL encryption. This is supported by more and more sites, and last year Google even went as far as encrypting all searches by default.
Most of the larger torrent sites such as The Pirate Bay and Torrentz also offer SSL support. However, KickassTorrents is the first to force encryption. This means that everyone who visits the site will now be sending data over a secure https connection.
TorrentFreak spoke with the KickassTorrents team who told us that the new feature was implemented by popular demand.
“We’re just thinking about those people who will feel safer when they know all the data transferred between them and KAT is completely encrypted. People requested it, so we respond,” the KAT team informs TF.
SSL encryption will prevent one’s boss, school, or ISP from monitoring which pages are visited and what data is sent to or retrieved from the site. However, it’s still possible to see that the KickassTorrents domain was accessed, and how much time was spent there.
Also, it’s worth emphasizing that it doesn’t anonymize the visitor’s IP-addresses in any way, as a VPN or proxy might.
That said, enabling encryption is a good way for KickassTorrents to offer its users a little more security. On top of that, Google recently noted that it would prioritize SSL-encrypted sites in its search results, something the site’s operators probably won’t mind either.
In February last year, five U.S. Internet providers started sending Copyright Alerts to customers who use BitTorrent to pirate movies, TV-shows and music.
These efforts are part of the Copyright Alert System, an anti-piracy plan that aims to educate the public. Through a series of warnings suspected pirates are informed that their connections are being used to share copyrighted material without permission, and told where they can find legal alternatives.
During the first ten months of the program more than 1.3 million anti-piracy alerts were sent out. That was just a ramp-up phase though. This year the number of alerts will grow significantly.
“The program doubles in size this year,” says Jill Lesser, Executive Director of the overseeing Center for Copyright Information (CCI).
Lesser joined a panel at the Technology Policy Institute’s Aspen Forum where the Copyright Alert System was the main topic of discussion. While the media has focused a lot on the punishment side, Lesser notes that the main goal is to change people’s norms and regain their respect for copyright.
“The real goal here is to shift social norms and behavior. And to almost rejuvenate the notion of the value of copyright that existed in the world of books and vinyl records,” Lesser said.
The notifications are a “slap on the wrist” according to Lesser, but one which is paired with information explaining where people can get content legally.
In addition to sending more notices, the CCI will also consider adding more copyright holders and ISPs to the mix. Thus far the software and book industries have been left out, for example, and the same is true for smaller Internet providers.
“We’ve had lots of requests from content owners in other industries and ISPs to join, and how we do that is I think going to be a question for the year coming up,” Lesser noted.
Also present at the panel was Professor Chris Sprigman, who noted that the piracy problem is often exaggerated by copyright holders. Among other things, he gave various examples of how creative output has grown in recent years.
“This problem has been blown up into something it’s not. Do I like piracy? Not particularly. Do I think it’s a threat to our creative economy? Not in any area that I’ve seen,” Sprigman noted.
According to the professor the Copyright Alert System is very mild and incredibly easy to evade, which is a good thing in his book.
The professor believes that it’s targeted at casual pirates, telling them that they are being watched. This may cause some to sign up for a VPN or proxy, but others may in fact change their behavior in the long run.
“Do I think that this is a solution to the piracy problem? No. But I think this is a way of reducing the size of it over time, possibly changing social norms over time. That could be productive. Not perfect, but an admirable attempt,” Sprigman said.
Just how effective this attempt will be at changing people’s piracy habits is something that has yet to be seen.
“Operation Creative” began with the sending of warning letters to site owners, asking them to go legit or shut down. Late last year this was followed by a campaign targeted at domain registrars, asking them to suspend the domain names of several “illegal” sites.
Most registrars ignored these letters and only five out of the 75 requests were granted. The police aren’t giving up on their efforts though, as they have now contacted the registrars again, this time with a warning.
EasyDNS was one of the companies who refused to suspend domains without a court order. This week CEO Mark Jeftovic informed TorrentFreak that his company received a new letter from City of London PIPCU titled “notice of criminality.”
Unlike the previous one, the latest letter doesn’t have any concrete demands, but simply puts the registrars on notice.
Receipt of this email serves as notice that the aforementioned domain, managed by EASYDNS TECHNOLOGIES, INC. 28/03/2014 is being used to facilitate criminal activity, including offences under:
Fraud Act 2006
Copyright, Designs and Patents Act 1988
Serious Crime Act 2007
We respectfully request that EASYDNS TECHNOLOGIES, INC. give consideration to your ongoing business relationship with the owners/purchasers of the domain to avoid any future accusations of knowingly facilitating the movement of criminal funds.
According to easyDNS the warning appears to suggest that registrars themselves could face legal trouble if they fail to take action. A rather worrying development considering that no court has deemed the sites to be violating local law.
“We think this time the intent is not to actually get the domain name taken down, but rather to build some sort of ‘case’ that we, easyDNS, by mere ‘Receipt of this email’ are now knowingly allowing domains under management to be ‘used to facilitate criminal activity’,” Jeftovic notes.
“Thus, if we don’t takedown the domains PIPCU want us to, when they want us to, then we may face accusations in the future (in their own words) ‘of knowingly facilitating the movement of criminal funds’,” he adds.
Despite the repeated threats, easyDNS doesn’t plan to take any action without a proper court order. In a blog post Jeftovic explains this stance, noting that his company will fiercely defend due process.
The file-sharing domains PIPCU wants to take offline are being treated as guilty until proven innocent, and there is no basis to act without a court order, he believes. Instead, he characterizes the warning letter as potentially libelous and an abuse of power.
“Hinting that failure to cooperate could result in adverse consequences such as being stripped of one’s trade accreditation or possibly being accused of a crime in the future, strikes me as coercive or an abuse of position on the part of PIPCU,” Jeftovic concludes.