Posts tagged ‘Other’

TorrentFreak: Fail: MPAA Makes Legal Content Unfindable In Google

This post was syndicated from: TorrentFreak and was written by: Ernesto. Original post: at TorrentFreak

The entertainment industries have gone head to head with Google in recent months, demanding tougher anti-piracy measures from the search engine.

According to the MPAA and others, Google makes it too easy for its users to find pirated content. Instead, they would prefer Google to downrank sites such as The Pirate Bay from its search results or remove them entirely.

A few weeks ago Google took additional steps to decrease the visibility of pirated content, but the major movie studios haven’t been sitting still either.

Last week the MPAA announced the launch of WhereToWatch.com, a website that lists where movies and TV-shows can be watched legally.

“WheretoWatch.com offers a simple, streamlined, comprehensive search of legitimate platforms – all in one place. It gives you the high-quality, easy viewing experience you deserve while supporting the hard work and creativity that go into making films and shows,” the MPAA’s Chris Dodd commented.

At first glance WhereToWatch offers a rather impressive database of entertainment content. It even features TorrentFreak TV, although this is listed as “not available” since the MPAA’s service doesn’t index The Pirate Bay.

Overall, however, it’s a decent service. WhereToWatch could also be an ideal platform to beat pirate sites in search results, something the MPAA desperately wants to achieve.

Sadly for the MPAA that is only a “could” since Google and other search engines currently have a hard time indexing the site. As it turns out, the MPAA’s legal platform isn’t designed with even the most basic SEO principles in mind.

For example, if Google visits the movie overview page, all links to individual pages are hidden behind JavaScript, so the search engine sees a page with no crawlable links. As a result, movie and TV-show pages on the MPAA’s legal platform are invisible to Google.

Google currently indexes only one movie page, which was most likely indexed through an external link. With Bing the problem is just as bad.


It’s worth noting that WhereToWatch doesn’t block search engines from spidering its content through the robots.txt file. It’s just the coding that makes it impossible for search engines to navigate and index the site.
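
The problem is easy to demonstrate. The sketch below is purely hypothetical markup, not the real WhereToWatch source, but it shows the pattern: when a page’s links are assembled by client-side JavaScript, a crawler that doesn’t execute scripts fetches HTML with nothing to follow.

```shell
# Hypothetical page in the style described above: the movie list is an
# empty container that a script fills in at runtime.
cat > page.html <<'EOF'
<div id="movie-list" data-source="/api/movies.json"></div>
<script src="/app.js"></script>
EOF

# A crawler that doesn't execute JavaScript parses only this HTML,
# so it finds no links to individual movie pages:
grep -c 'href' page.html || true    # prints 0
```

A crawlable version of the same page would render the `<a href=...>` anchors into the HTML on the server, so search engines could discover every movie page.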

This is a pretty big mistake, considering that the MPAA has repeatedly hammered Google to feature more legal content. With some proper search engine optimization (SEO) advice they can probably fix the problem in the near future.

Google has previously offered SEO tips to copyright holders, but it’s obvious that the search engine wasn’t consulted on this project.

To help the MPAA on its way we asked isoHunt founder Gary Fung for some input. Last year Fung lost his case to the MPAA, forcing him to shut down the site, but he was glad to offer assistance nonetheless.

“I suggest MPAA optimize for search engine keywords such as ‘download’ and ‘torrent’. For some reason when people google for movies, that’s what they actually search for,” Fung tells us.

A pretty clever idea indeed, as the MPAA’s own research shows that pirate-related search terms are often used to “breed” new pirates.

Perhaps it’s an idea for the MPAA to hire Fung or other “industry” experts for some more advice. Or better still, just look at how the popular pirate sites have optimized their sites to do well in search engines, and steal their work.

Source: TorrentFreak, for the latest info on copyright, file-sharing and anonymous VPN services.

TorrentFreak: Swedes Prepare Record File-Sharing Prosecution

This post was syndicated from: TorrentFreak and was written by: Andy. Original post: at TorrentFreak

Following a lengthy investigation by anti-piracy group Antipiratbyrån, in 2010 police raided a “warez scene” topsite known as Devil. Dozens of servers were seized containing an estimated 250 terabytes of pirate content.

One man was arrested and earlier this year was eventually charged with unlawfully making content available “intentionally or by gross negligence.”

Police say that the man acted “in consultation or concert with other persons, supplied, installed, programmed, maintained, funded and otherwise administered and managed” the file-sharing network from where the infringements were carried out. It’s claimed that the Devil topsite had around 200 members.

All told the man is accused of illegally making available 2,250 mainly Hollywood movies, a record amount according to the prosecutor.

“We have not prosecuted for this many movies in the past. There are many movies and large data set,” says prosecutor Fredrik Ingblad. “It is also the largest analysis of computers ever made in an individual case.”

Few details have been made available on the case but it’s now been revealed that Antipiratbyrån managed to trace the main Devil server back to the data center of a Stockholm-based electronics company. The site’s alleged operator, a man in his 50s from Väsby and an employee of the company, reportedly admitted being in control of the server.

While it would likely have been the intention of Devil’s operator for the content on the site to remain private, leaks inevitably occurred. Predictably some of that material ended up on public torrent sites, an aggravating factor according to Antipiratbyrån lawyer Henrik Pontén.

“This is a very big issue and it is this type of crime that is the basis for all illegal file sharing. The films available on Pirate Bay circulate from these smaller networks,” Pontén says.

The big question now concerns potential damages. Pontén says that the six main studios behind the case could demand between $673,400 and $2.69m per movie. Multiply that by 2,250 and that’s an astonishing amount, but the lawyer says that in order not to burden the justice system, a few titles could be selected.
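
To put that “astonishing amount” in numbers, here is a quick back-of-the-envelope calculation using the figures quoted above (simple shell arithmetic):

```shell
# 2,250 movies at the claimed per-movie damages range
echo $(( 2250 * 673400 ))     # 1515150000  -> roughly $1.5 billion
echo $(( 2250 * 2690000 ))    # 6052500000  -> roughly $6 billion
```

Totals on that scale explain why the studios would rather select a handful of representative titles for the damages claim.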

Henrik Olsson Lilja, a lawyer representing the defendant, declined to comment in detail but criticized the potential for high damages.

“I want to wait for the trial, but there was no intent in the sense that the prosecutor is looking for,” Lilja told Mitte.se. “In practice, these are American-style punitive damages.”


Backblaze Blog | The Life of a Cloud Backup Company: Backblaze + Time Machine = ♥

This post was syndicated from: Backblaze Blog | The Life of a Cloud Backup Company and was written by: Yev. Original post: at Backblaze Blog | The Life of a Cloud Backup Company


“Why do I need online backup if I have Time Machine already?” We get that question a lot. Our answer: use both. Backblaze strongly believes in a 3-2-1 backup policy. What’s 3-2-1? Three copies of your data, on two different media, with one copy off-site. If you have that baseline, you’re in good shape. The on-site portion of your backup strategy is typically the original copy of the data plus an external hard drive of some sort. Most of our Mac customers use Time Machine, so that’s the one we’ll focus on here.

Raising Awareness
Apple did a great job with Time Machine, and with building awareness for backups. When you plugged in your first external hard drive, your Mac would ask if you wanted to use that drive as a Time Machine backup drive, which was instrumental in teaching users about the importance and potential ease of backups. It also dramatically simplified data backup, making it automatic and continuous. Apple knew that having people manually drag and drop files into folders and drives was not a reliable backup strategy. By making it automatic, Time Machine got many people doing local backups, but this still left a hole in their backup strategy: they had nothing off-site.

Why Bother
Having an off-site backup comes in handy when your computer and local backup (Time Machine in this case) are both lost. That can occur because of fire, theft, flood, forgetfulness, or a wide variety of other unfortunate reasons. Stories of people neglecting to replace a failed Time Machine drive and then having their computer crash are well known. A current off-site backup, such as an automatic online backup, can also augment the local Time Machine backup, especially when traveling. For example, say the hard drive in your laptop crashes while you’re on vacation: Time Machine can be used to recover up to the point where you left for your trip, and your online backup can fill in the rest.

Some Limitations
One limitation of Time Machine is that a hard drive doesn’t scale with the amount of data you have. When you purchase a 500GB drive, that’s all the space you have for your backup. For example, if your Mac Pro or MacBook has a Time Machine drive connected, it will back up the data that’s on the computer. Add an additional hard drive into the mix as a storage drive, and the Time Machine drive may not be large enough to handle both data sets, from the Mac and from the additional storage. So the more data you accumulate, the larger the Time Machine drive you have to use.

Additionally, if you store data on your Time Machine drive itself, those files are not actually included in the Time Machine backup, so be wary! Apple and Backblaze strongly recommend using a separate, dedicated drive for your Time Machine backup and not keeping any original data on that drive. That way, if the drive fails, you only lose one copy and avoid potentially losing both. Backblaze works the same way: because your Backblaze backup is off-site, it’s another layer of protection from data loss.

Diversification
So use both! And if you’re on a PC, use an external hard drive as your second media type (most come with their own local-backup software). There’s no such thing as too many backups. Backing up is like a retirement or stock portfolio: the more diversification you have, the less vulnerable you are!

Author information

Yev

Social Marketing Manager at Backblaze

Yev enjoys speed-walking on the beach. Speed-dating. Speed-writing blog posts. The film Speed. Speedy technology. Speedy Gonzales. And Speedos. But mostly technology.

Follow Yev on:

Twitter: @YevP | LinkedIn: Yev Pusin | Google+: Yev Pusin

The post Backblaze + Time Machine = ♥ appeared first on Backblaze Blog | The Life of a Cloud Backup Company.

LWN.net: Introducing AcousticBrainz

This post was syndicated from: LWN.net and was written by: n8willis. Original post: at LWN.net

MusicBrainz, the not-for-profit project that maintains an assortment of “open content” music metadata databases, has announced a new effort named AcousticBrainz. AcousticBrainz is designed to be an open, crowd-sourced database cataloging various “audio features” of music, including “low-level spectral information such as tempo, and additional high level descriptors for genres, moods, keys, scales and much more.” The data collected is more comprehensive than MusicBrainz’s existing AcoustID database, which deals only with acoustic fingerprinting for song recognition. The new project is a partnership with the Music Technology Group at Universitat Pompeu Fabra, and uses that group’s free-software toolkit Essentia to perform its acoustic analyses. A follow-up post digs into the AcousticBrainz analysis of the project’s initial 650,000-track data set, including examinations of genre, mood, key, and other factors.

LWN.net: Version 2 of the kdbus patches posted

This post was syndicated from: LWN.net and was written by: jake. Original post: at LWN.net

The second version of the kdbus patches has been posted to the Linux kernel mailing list by Greg Kroah-Hartman. The biggest change since the original patch set (which we looked at in early November) is that kdbus now provides a filesystem-based interface (kdbusfs) rather than the /dev/kdbus device-based interface. There are lots of other changes in response to v1 review comments as well. “kdbus is a kernel-level IPC implementation that aims for resemblance to [the] protocol layer with the existing userspace D-Bus daemon while enabling some features that couldn’t be implemented before in userspace.”

TorrentFreak: U.S. Copyright Alert System Security Could Be Improved, Review Finds

This post was syndicated from: TorrentFreak and was written by: Ernesto. Original post: at TorrentFreak

In February last year the MPAA, RIAA and five major Internet providers in the United States launched their “six strikes” anti-piracy plan.

The Copyright Alert System’s main goal is to inform subscribers that their Internet connections are being used to share copyrighted material without permission. These alerts start out friendly in tone, but repeat infringers face a temporary disconnection from the Internet or other mitigation measures.

The evidence behind the accusations is provided by MarkMonitor, which monitors BitTorrent users’ activities on copyright holders’ behalf. The overseeing Center for Copyright Information (CCI) previously hired an impartial and independent technology expert to review the system, hoping to gain trust from the public.

Their first pick, Stroz Friedberg, turned out to be not that impartial, as the company had previously worked as a lobbyist for the RIAA. To correct this unfortunate choice, CCI assigned Professor Avi Rubin of Harbor Labs to re-examine the system.

This week CCI informed us that a summary of Harbor Labs’s findings is now available to the public. The full review is not being published due to the vast amount of confidential information it contains, but the overview of the findings does provide some interesting details.

Overall, Harbor Labs concludes that the evidence gathering system is solid and that false positives, cases where innocent subscribers are accused, are reasonably minimized.

“We conclude, based on our review, that the MarkMonitor AntiPiracy system is designed to ensure that there are no false positives under reasonable and realistic assumptions. Moreover, the system produces thorough case data for alleged infringement tracking.”

However, there is some room for improvement. For example, MarkMonitor could implement additional testing to ensure that false positives and human errors are indeed caught.

“… we believe that the system would benefit from additional testing and that the existing structure leaves open the potential for preventable failures. Additionally, we recommend that certain elements of operational security be enhanced,” Harbor Labs writes.

In addition, the collected evidence may need further protections to ensure that it can’t be tampered with or fall into the wrong hands.

“… we believe that this collected evidence and other potentially sensitive data is not adequately controlled. While MarkMonitor does protect the data from outside parties, its protection against inside threats (e.g., potential rogue employees) is minimal in terms of both policy and technical enforcement.”

The full recommendations as detailed in the report are as follows:

[Image: full list of recommendations from the Harbor Labs report]

The CCI is happy with the new results, which they say confirm the findings of the earlier Stroz Friedberg review.

“The Harbor Labs report reaffirms the findings from our first report – conducted by Stroz Friedberg – that the CAS is well designed and functioning as we hoped,” CCI informs TF.

In the months to come the operators of the Copyright Alert System will continue to work with copyright holders to make further enhancements and modifications to their processes.

“As the CAS exits the initial ramp-up period, CCI has been assured by our content owners that they have taken all recommendations made within both reports into account and are continuing to focus on maintaining the robust system that minimizes false positives and protects customer security and privacy,” CCI adds.

Meanwhile, they will continue to alert Internet subscribers to possible infringements. After nearly two years copyright holders have warned several million users, hoping to convert them to legal alternatives.

Thus far there’s no evidence that Copyright Alerts have had a significant impact on piracy rates. However, the voluntary agreement model is being widely embraced by various stakeholders and similar schemes are in the making in both the UK and Australia.


Krebs on Security: Convicted ID Thief, Tax Fraudster Now Fugitive

This post was syndicated from: Krebs on Security and was written by: BrianKrebs. Original post: at Krebs on Security

In April 2014, this blog featured a story about Lance Ealy, an Ohio man arrested last year for buying Social Security numbers and banking information from an underground identity theft service that relied in part on data obtained through a company owned by big-three credit bureau Experian. Earlier this week, Ealy was convicted of using the data to fraudulently claim tax refunds with the IRS in the names of more than 175 U.S. citizens, but not before he snipped his monitoring anklet and skipped town.

Lance Ealy, in a selfie he uploaded to Twitter before absconding.

On Nov. 18, a jury in Ohio convicted Ealy, 28, on all 46 charges, including aggravated identity theft, and wire and mail fraud. Government prosecutors presented evidence that Ealy had purchased Social Security numbers and financial data on hundreds of consumers, using an identity theft service called Superget.info (later renamed Findget.me). The jury found that Ealy used that information to fraudulently file at least 179 tax refund requests with the Internal Revenue Service, and to open up bank accounts in other victims’ names — accounts he set up to receive and withdraw tens of thousands of dollars in refund payments from the IRS.

The identity theft service that Ealy used was dismantled in 2013, after investigators with the U.S. Secret Service arrested its proprietor and began tracking and finding many of his customers. Investigators later discovered that the service’s owner had obtained much of the consumer data from data brokers by posing as a private investigator based in the United States.

In reality, the owner of Superget.info was a Vietnamese man paying for his accounts at data brokers using cash wire transfers from a bank in Singapore. Among the companies he signed up with was Court Ventures, a California company that was bought by credit bureau Experian nine months before the government shut down Superget.info.

Court records show that Ealy went to great lengths to delay his trial, and even reached out to this reporter hoping that I would write about his allegations that everyone from his lawyer to the judge in the case was somehow biased against him or unfit to participate in his trial. Early on, Ealy fired his attorney and opted to represent himself. When the court appointed him a public defender, Ealy again chose to represent himself.

“Mr. Ealy’s motions were in a lot of respects common delay tactics that defendants use to try to avoid the inevitability of a trial,” said Alex Sistla, an assistant U.S. attorney in Ohio who helped prosecute the case.

Ealy also continued to steal peoples’ identities while he was on trial (although no longer buying from Superget.info), according to the government. His bail was revoked for several months, but in October the judge in the case ordered him released on a surety bond.

It is said that a man who represents himself in court has a fool for a client, and this seems doubly true when facing criminal charges by the U.S. government. Ealy’s trial lasted 11 days, and involved more than 70 witnesses, many of them ID theft victims. His last appearance in court was on Friday. When investigators checked in on Ealy at his home over the weekend, they found his electronic monitoring bracelet but not Ealy.

Ealy faces up to 10 years in prison on each count of possessing 15 or more unauthorized access devices with intent to defraud and using unauthorized access devices to obtain items of $1,000 or more in value; up to five years in prison on each count of filing false claims for income tax refunds with the IRS; up to 20 years in prison on each count of wire fraud and each count of mail fraud; and mandatory two-year sentences on each count of aggravated identity theft that must run consecutive to whatever sentence may ultimately be handed down. Each count of conviction also carries a fine of up to $250,000.

I hope they find Mr. Ealy soon and lock him up for a very long time. Unfortunately, he is one of countless fraudsters perpetrating this costly and disruptive form of identity theft. In 2014, both my sister and I were the victims of tax ID theft, learning that unknown fraudsters had already filed tax refunds in our names when we each filed our taxes with the IRS.

I would advise all U.S. readers to request a tax filing PIN from the IRS (sadly, it turns out that I applied for mine in February, only days after the thieves filed my tax return). If approved, the PIN is required on any tax return filed for that consumer before a return can be accepted. To start the process of applying for a tax return PIN from the IRS, check out the steps at this link. You will almost certainly need to file an IRS form 14039 (PDF), and provide scanned or photocopied records, such as a driver’s license or passport.

To read more about other ID thieves who were customers of Superget.info that the Secret Service has nabbed and put on trial, check out the stories in this series. Ealy’s account on Twitter is also an eye-opener.

TorrentFreak: BitTorrent Users are Avid, Eclectic Content Buyers, Survey Finds

This post was syndicated from: TorrentFreak and was written by: Andy. Original post: at TorrentFreak

Each month 150-170 million Internet users share files using the BitTorrent protocol, a massive audience by most standards. The common perception is that these people are only interested in obtaining content for free.

However, studies have found that file-sharers are often more engaged than the average consumer, as much was admitted by the RIAA back in 2012. There’s little doubt that within those millions of sharers lie people spending plenty of money on content and entertainment.

To get a closer look, in September BitTorrent Inc. conducted a survey among a sample of its users. In all, 2,500 people responded and now the company has published the results. The figures aren’t broken down into age groups, but BitTorrent Inc. informs TF that BitTorrent users trend towards young and male.

Music

From its survey the company found that 50% of respondents buy music each month, with a sway towards albums rather than singles (44% v 32%). BitTorrent users are reported as 170% more likely to have paid for a digital music download in the past six months than Joe Public.

Citing figures from the RIAA, BitTorrent Inc. says its users are also 8x more likely than the average Internet user to pay for a streaming music service, with 16% of BitTorrent users and 2% of the general public holding such an account.

Perhaps a little unexpectedly, supposedly tech-savvy torrent users are still buying CDs and vinyl, with 45% and 10% respectively reporting a purchase in the past 12 months. BitTorrent Inc. says that the latter represents users “engaging and unpacking art as a multimedia object”, a clear reference to how the company perceives its BitTorrent Bundles.

On average, BitTorrent Inc. says its user base spends $48 a year on music, with 31% spending more than $100 annually.


Movies

When it comes to movies, 47% of respondents said they’d paid for a theater ticket in the preceding 12 months, up on the 38% who purchased a DVD or Blu-ray disc during the same period.

Users with active movie streaming accounts and those making digital movie purchases tied at 23%, with DVD rental (22%) and digital rental (16%) bringing up the rear.

All told, BitTorrent Inc. says that 52% of respondents buy movies on a monthly basis with the average annual spend amounting to $54. More than a third say they spend in excess of $100.


So do the results of the survey suggest that BitTorrent Inc.’s users have a lot to offer the market and if so, what?

“The results confirm what we knew already, that our users are super fans. They are consumers of content and are eager to reward artists for their work,” Christian Averill, BitTorrent Inc.’s Director of Communications, told TF.

“BitTorrent Bundle was started based on this premise and we have more than 10,000 artists now signed up, with more to come. With 90% of purchase going to the content creators, BitTorrent Bundle is the most artist friendly, direct-to-fan distribution platform on the market.”

It seems likely that promoting and shifting Bundles was a major motivator for BitTorrent Inc. to carry out the survey and by showing that torrent users aren’t shy to part with their cash, more artists like Thom Yorke will hopefully be prepared to engage with BitTorrent Inc.’s fanbase.

Also of note is the way BitTorrent Inc. is trying to position that fanbase or, indeed, how that fanbase has positioned itself. While rock (20%), electronic (15%) and pop (13%) took the top spots in terms of genre popularity among users, 23% described their tastes as a vague “other”. Overall, 61% of respondents described their musical tastes as “eclectic”.

“[Our] users are engaged in the creative community and they have diverse taste. They also do not define themselves by traditional genres. We feel this is a true representation about how fans view themselves universally these days. They are eclectic,” Averill concludes.

While monetizing content remains a key focus for BitTorrent Inc., the company is also making strides towards monetizing its distribution tools. Last evening uTorrent Plus was replaced by uTorrent Pro (Windows), an upgraded client offering torrent streaming, an inbuilt player, video file converter and anti-virus features. The ad-free client (more details here) is available for $19.95 per year.


Linux How-Tos and Linux Tutorials: Beginning Git and Github for Linux Users

This post was syndicated from: Linux How-Tos and Linux Tutorials and was written by: Carla Schroder. Original post: at Linux How-Tos and Linux Tutorials

[Figure 1: the author’s Github testbed repository, “playground”]

The Git distributed revision control system is a sweet step up from Subversion, CVS, Mercurial, and all those others we’ve tried and made do with. It’s great for distributed development, when you have multiple contributors working on the same project, and it is excellent for safely trying out all kinds of crazy changes. We’re going to use a free Github account for practice so we can jump right in and start doing stuff.

Conceptually Git is different from other revision control systems. Older RCS tracked changes to individual files, which you can see when you poke around in their configuration files. Git’s approach is more like filesystem snapshots, where each commit or saved state is a complete snapshot rather than a file full of diffs. Git is still space-efficient because unchanged files are stored only once and simply linked to in each new snapshot. Everything is checksummed, so you are assured of data integrity and can always reverse changes.
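
You can watch the snapshot model at work with git cat-file in a throwaway repository (the file name and commit message below are invented for the demo): a commit points to a tree, which is the directory snapshot, and the tree points to blobs holding file contents, all addressed by checksums.

```shell
# Throwaway repo with a single commit, then inspect the objects behind it.
cd "$(mktemp -d)"
git init -q demo && cd demo
git config user.email you@example.com
git config user.name "You"
echo 'hello' > README.md
git add README.md
git commit -q -m 'first snapshot'

git cat-file -p HEAD            # the commit object: tree hash, author, message
git cat-file -p 'HEAD^{tree}'   # the tree: one blob entry for README.md
```

Make a second commit that leaves README.md untouched and both commits’ trees will point at the same blob, which is how snapshots stay cheap.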

Git is very fast, because your work is all done on your local PC and then pushed to a remote repository. This makes everything you do totally safe, because nothing affects the remote repo until you push changes to it. And even then you have one more failsafe: branches. Git’s branching system is brilliant. Create a branch from your master branch, perform all manner of awful experiments, and then nuke it or push it upstream. When it’s upstream other contributors can work on it, or you can create a pull request to have it reviewed, and then after it passes muster merge it into the master branch.

So what if, after all this caution, it still blows up the master branch? No worries, because you can revert your merge.
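
As a sketch of that safety net, the throwaway-repo session below (file names and commit messages invented) merges an experimental branch into the main branch and then undoes the merge with git revert:

```shell
# Throwaway repo: make a base commit, branch, merge, then revert the merge.
cd "$(mktemp -d)"
git init -q lab && cd lab
git config user.email you@example.com
git config user.name "You"
echo 'base' > file.txt
git add file.txt
git commit -q -m 'base'
main=$(git symbolic-ref --short HEAD)   # 'master' or 'main', depending on your Git

git checkout -q -b experiment           # perform awful experiments here
echo 'risky change' >> file.txt
git commit -q -am 'risky change'

git checkout -q "$main"
git merge --no-ff -q -m 'merge experiment' experiment

# The merge turned out to be a mistake -- revert it.
# -m 1 tells revert which parent is the mainline to keep.
git revert --no-edit -m 1 HEAD
cat file.txt                            # prints: base
```

The revert is itself a new commit, so the botched merge stays in history but its changes are backed out of the branch.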

Practice on Github

The quickest way to get some good hands-on Git practice is by opening a free Github account. Figure 1 shows my Github testbed, named playground. New Github accounts come with a prefab repo populated by a README file, license, and buttons for quickly creating bug reports, pull requests, Wikis, and other useful features.

Free Github accounts only allow public repositories. This allows anyone to see and download your files. However, no one can make commits unless they have a Github account and you have approved them as a collaborator. If you want a private repo hidden from the world you need a paid membership. Seven bucks a month gives you five private repos, and unlimited public repos with unlimited contributors.

Github kindly provides copy-and-paste URLs for cloning repositories. So you can create a directory on your computer for your repository, and then clone into it:

$ mkdir git-repos
$ cd git-repos
$ git clone https://github.com/AlracWebmaven/playground.git
Cloning into 'playground'...
remote: Counting objects: 4, done.
remote: Compressing objects: 100% (4/4), done.
remote: Total 4 (delta 0), reused 0 (delta 0)
Unpacking objects: 100% (4/4), done.
Checking connectivity... done.
$ ls playground/
LICENSE  README.md

All the files are copied to your computer, and you can read, edit, and delete them just like any other file. Let’s improve README.md and learn the wonderfulness of Git branching.

Branching

Git branches are gloriously excellent for safely making and testing changes. You can create and destroy them all you want. Let’s make one for editing README.md:

$ cd playground
$ git checkout -b test
Switched to a new branch 'test'

Run git status to see where you are:

$ git status
On branch test
nothing to commit, working directory clean

What branches have you created?

$ git branch
  master
* test

The asterisk indicates which branch you are on. master is your main branch, the one you never want to make any changes to until they have been tested in a branch. Now make some changes to README.md, and then check your status again:

$ git status
On branch test
Changes not staged for commit:
  (use "git add <file>..." to update what will be committed)
  (use "git checkout -- <file>..." to discard changes in working directory)
        modified:   README.md
no changes added to commit (use "git add" and/or "git commit -a")

Isn’t that nice? Git tells you what is going on, and gives hints. To discard your changes, run

$ git checkout -- README.md

Or you can delete the whole branch:

$ git checkout master
$ git branch -D test

Or you can stage the change for the next commit:

$ git add README.md
$ git status
On branch test
Changes to be committed:
  (use "git reset HEAD <file>..." to unstage)
        modified:   README.md

At this stage the change to README.md is staged for the next commit; until you commit it, the modification lives in your working tree and index, so it follows you if you switch branches. Git gives you a helpful hint– if you change your mind and don’t want the change staged, run git reset HEAD README.md. All Git activity is tracked in the .git directory in your repository. Much of it is plain text– HEAD, config, branch refs, and logs of which user did what– while file contents are stored as compressed objects.

What if you have multiple files to add? You can list each one, for example git add file1 file2 file3, or add all files with git add *.

To remove a file from the index without deleting it from your system, use git rm --cached filename; plain git rm removes the file from disk as well. If you have deleted a lot of files by hand, git add -u stages all of those deletions at once.
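
A quick scratch-repo demonstration (hypothetical file name): after git rm --cached, the file is gone from the index but still on disk.

```shell
# Throwaway repo: commit a file, then remove it from the index only.
cd "$(mktemp -d)"
git init -q scratch && cd scratch
git config user.email you@example.com
git config user.name "You"
echo 'data' > notes.txt
git add notes.txt
git commit -q -m 'add notes'

git rm --cached -q notes.txt    # stages the removal; the file stays on disk
ls notes.txt                    # still there
git status --short              # shows a staged delete plus an untracked file
```

Commit at this point and Git forgets notes.txt while your working copy keeps it, which is handy for files that should have been in .gitignore all along.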

Committing Files

Now let’s commit our changed file. This records the change on the current branch; other branches won’t see it:

$ git commit README.md
[test 5badf67] changes to readme
 1 file changed, 1 insertion(+)

You’ll be asked to supply a commit message. It is a good practice to make your commit messages detailed and specific, but for now we’re not going to be too fussy. Now your edited file has been committed to the branch test. It has not been merged with master or pushed upstream; it’s just sitting there. This is a good stopping point if you need to go do something else.

What if you have multiple files to commit? You can commit specific files, or all available files:

$ git commit file1 file2
$ git commit -a

How do you know which commits have not yet been pushed upstream, but are still sitting in branches? git status won’t tell you, so use this command:

$ git log --branches --not --remotes
commit 5badf677c55d0c53ca13d9753344a2a71de03199
Author: Carla Schroder 
Date:   Thu Nov 20 10:19:38 2014 -0800
    changes to readme

This lists commits that exist locally but have not been pushed to any remote; when it returns nothing, all commits have been pushed upstream. Now let’s push this commit upstream:

$ git push origin test
Counting objects: 7, done.
Delta compression using up to 8 threads.
Compressing objects: 100% (3/3), done.
Writing objects: 100% (3/3), 324 bytes | 0 bytes/s, done.
Total 3 (delta 1), reused 0 (delta 0)
To https://github.com/AlracWebmaven/playground.git
 * [new branch]      test -> test

You may be asked for your Github login credentials. Git caches them for 15 minutes, and you can change this. This example sets the cache at two hours:

$ git config --global credential.helper 'cache --timeout=7200'

Now go to Github and look at your new branch. Github lists all of your branches, and you can preview your files in the different branches (figure 2).

fig-2 github

Now you can create a pull request by clicking the Compare & Pull Request button. This gives you another chance to review your changes before merging with master. You can also generate pull requests from the command line on your computer, but it’s rather a cumbersome process, to the point that you can find all kinds of tools for easing the process all over the Web. So, for now, we’ll use the nice clicky Github buttons.

Github lets you view your files in plain text, and it also supports many markup languages so you can see a generated preview. At this point you can push more changes in the same branch. You can also make edits directly on Github, but your local clone then falls behind the online version until you pull, which can lead to conflicts. When you are satisfied with your changes, click the Merge pull request button. You’ll have to click twice. Github automatically examines your pull request to see if it can be merged cleanly, and if there are conflicts you’ll have to fix them.

Another nice Github feature is when you have multiple branches, you can choose which one to merge into by clicking the Edit button at the right of the branches list (figure 3).

fig-3 github

After you have merged, click the Delete Branch button to keep everything tidy. Then on your local computer, delete the branch by first pulling the changes to master, and then you can delete your branch without Git complaining:

$ git checkout master
$ git pull origin master
$ git branch -d test
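The same merge-and-clean-up cycle can be sketched entirely locally in a scratch repo, with a plain git merge standing in for the pull-request merge done on Github:

```shell
set -e
cd "$(mktemp -d)"
git init -q
git config user.email demo@example.com
git config user.name demo
main=$(git symbolic-ref --short HEAD)   # "master" or "main", depending on Git version
echo "hello" > README.md
git add . && git commit -qm "initial"
git checkout -qb test
echo "more" >> README.md
git commit -qam "changes to readme"
git checkout -q "$main"
git merge -q test        # fast-forwards, since the main branch has not moved
git branch -d test       # lowercase -d succeeds because test is fully merged
```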

You can force-delete a branch with an uppercase -D:

$ git branch -D test

Reverting Changes

Again, the Github pointy-clicky way is easiest. It shows you a list of all changes, and you can revert any of them by clicking the appropriate button. You can even restore deleted branches.

You can also do all of these tasks exclusively from your command line, which is a great topic for another day because it’s complex. For an exhaustive Git tutorial try the free Git book, and you can test everything with your Github account.

TorrentFreak: Torrents Good For a Third of all Internet Traffic in Asia-Pacific

This post was syndicated from: TorrentFreak and was written by: Ernesto. Original post: at TorrentFreak

download-keyboardOver the years we have been following various reports on changes in Internet traffic, specifically in relation to torrents.

One of the patterns that emerged with the rise of video streaming services is that BitTorrent is losing its share of total Internet traffic, in North America at least, where good legal services are available.

This downward spiral is confirmed by the latest report from Sandvine, which reveals that torrent traffic is now responsible for ‘only’ 5% of all Internet traffic in North America during peak hours, compared to 10.3% last year.

In other countries, however, this decrease is not clearly visible. In Europe, for example, BitTorrent’s share of Internet traffic during peak hours has remained stable over the past two years at roughly 15%, while absolute traffic increased during the same period.

In the Asia-Pacific region there’s yet another trend. Here, torrents are booming, with BitTorrent traffic increasing more than 50% over the past year.

asia-pacific

According to Sandvine torrents now account for 32% of all traffic during peak hours, up from 21%. Since overall traffic use also increased during the same period, absolute traffic has more than doubled.

Looking at upstream data alone torrents are good for more than 55% of all traffic during peak hours.

One of the countries where unauthorized BitTorrent usage has been growing in recent years is Australia, which has one of the highest piracy rates in the world.

There are several reasons why torrents are growing in popularity, but the lack of good legal alternatives is expected to play an important role. It’s often hard or expensive to get access to the latest movies and TV-shows in this region.

It will be interesting to see whether this trend will reverse during the coming years as more legal services come online. Netflix’s arrival in Australia next year, for example, is bound to shake things up.

Source: TorrentFreak, for the latest info on copyright, file-sharing and anonymous VPN services.

SANS Internet Storm Center, InfoCON: green: Critical WordPress XSS Update, (Thu, Nov 20th)

This post was syndicated from: SANS Internet Storm Center, InfoCON: green and was written by: SANS Internet Storm Center, InfoCON: green. Original post: at SANS Internet Storm Center, InfoCON: green

Today, WordPress 4.0.1 was released, which addresses a critical XSS vulnerability (among other vulnerabilities). [1]

The XSS vulnerability deserves a bit more attention, as it is an all too common problem, and often underestimated. First of all, why is XSS critical? It doesn’t allow direct data access like SQL injection, and it doesn’t allow code execution on the server. Or does it?

XSS does allow an attacker to modify the HTML of the site. With that, the attacker can easily modify form tags (think about the login form, changing the URL it submits its data to), or the attacker could use XMLHttpRequest to conduct CSRF without being limited by the same-origin policy. The attacker will know what you type, and will be able to change what you type, so in short: the attacker is in full control. This is why XSS is rated critical.

The particular issue here was that WordPress allows some limited HTML tags in comments. This is always a very dangerous undertaking. The WordPress developers did attempt to implement the necessary safeguards: only certain tags are allowed, and even for these tags, the code checked for unsafe attributes. Sadly, this check wasn’t done quite right. Remember that browsers will also parse somewhat malformed HTML just fine.

A better solution would probably have been to use a standard library instead of trying to do this themselves. HTML Purifier is one such library for PHP. Many developers shy away from using it as it is pretty bulky. But it is bulky for a reason: it does try to cover a lot of ground. It not only normalizes HTML and eliminates malformed HTML, but it also provides a rather flexible configuration file. Many lightweight alternatives, like the solution WordPress came up with, rely on regular expressions. Regular expressions are typically not the right tool to parse HTML. Too much can go wrong, starting from new lines and ending somewhere around multi-byte characters. In short: don’t use regular expressions to parse HTML (or XML), in particular for security.
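To illustrate the general pitfall (this is not the actual WordPress code), here is a toy regex “sanitizer” in sed that strips on*= event handlers written in their canonical form, but misses a malformed variant with whitespace before the equals sign, which browsers still parse:

```shell
# A deliberately naive filter: remove attributes of the form on*="..."
clean() { sed -E 's/ on[a-z]+="[^"]*"//g'; }

echo '<img src="x" onerror="alert(1)">'  | clean   # handler stripped
echo '<img src="x" onerror ="alert(1)">' | clean   # handler survives!
```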

[1] https://wordpress.org/news/2014/11/wordpress-4-0-1/


Johannes B. Ullrich, Ph.D.
STI|Twitter|LinkedIn

(c) SANS Internet Storm Center. https://isc.sans.edu Creative Commons Attribution-Noncommercial 3.0 United States License.

Raspberry Pi: Northern Ireland’s first Raspberry Jams

This post was syndicated from: Raspberry Pi and was written by: Liz Upton. Original post: at Raspberry Pi

Liz: Andrew Mulholland is a first-year undergraduate student at Queen’s College Belfast, and the overall winner of 2014’s Talk Talk Digital Hero award. We’ve known him for a few years (he did work experience with us this summer – he created the Grandpa Scarer learning resource for us with Matt Timmons-Brown).

Andrew’s been setting up events to introduce other young people to computing for some years now. He’s recently been running the very first Raspberry Jams in Northern Ireland, and is doing a lot of computing outreach with local schools. I asked him how the kids who’d attended the Jams had found the experience, and he sent me the blog post below. Well done Andrew – it’s brilliant to see how much fun an introduction to computing can be. You’re doing an amazing job.

Northern Ireland November Raspberry Jam

On Saturday 8th November 20+ soon-to-be Raspberry Pi enthusiasts arrived at Farset Labs for the 6th Northern Ireland Raspberry Jam.

farsettjam

September, NI Raspberry Jam 5

This month’s main activities? Sonic Pi 2 and Minecraft Pi!

At the Jam we also have all the previous months’ activities printed out, so that if the kids want to try something else out, they are more than welcome to.

There are activities ranging from Sonic Pi, to Minecraft Pi, to physical computing projects like creating a reaction timer game in Scratch GPIO, along with quite a few others.

Lots of cool stuff to play with!

I asked a few of the kids at the jam to write down what they thought.

haley

Haley (11) having way too much fun hacking someone else’s Minecraft Pi game!

Haley:

“It was my first Raspberry Jam and I was quite nervous when I walked in but one of the mentors came over and introduced himself to me and explained what we would be getting up to. He found me a chair and showed me how to connect all the wires together and by the end of the Jam I was laughing my head off! I really enjoyed learning how to make music using Sonic Pi. I made the tune Frère Jacques. My favourite part was learning how to code while playing Minecraft. Andrew told me I should learn how to code because I had never done it before. I used a programming language called Python to hack others Minecraft games and to teleport them to a random place. I heard another kid start exclaiming after teleporting her several times, initially she had no idea it was me! Andrew and Libby were very supportive the whole day and I learnt a massive amount thanks to them. It was great fun!”

Apparently Haley enjoyed her first Raspberry Jam.


 Katie:

“I heard about the Raspberry Jam because one of the mentors volunteers at my school and the Jam was announced in Assembly as part of EU Coding Week. My friend Rachel and I decided to give it a go. I didn’t know anything about a Raspberry Pi and had no idea what to expect before I went but Andrew and the mentors have taught me loads and are very encouraging. I have just done my second Raspberry Jam and I loved it! I created a piece of music using Sonic Pi, played/hacked Minecraft and played with an LEDBorg in Scratch GPIO! Also we got doughnuts and got to make use of Farset Lab’s huge blackboard! It is the biggest blackboard I’ve ever seen. I don’t have a favorite part because everything I did was great fun and everybody was helpful. I definitely suggest anyone my age giving it a go!”

Rachel and Katie creating music with Sonic-Pi 2


Rachel

“I had a great time at my second Raspberry Jam at the weekend. The thing I enjoyed the most was learning with Scratch with the GPIO pins. This is something my school doesn’t teach so I don’t get the chance to do anything like this normally. It was great fun programming the LEDs to change different colours using a program I wrote.

The Raspberry Jam is such an amazing workshop and I am very grateful to Andrew and Libby for running it! I can’t wait till the December Jam!!”

We didn’t just have young people at the NI Raspberry Jam this month! The Jam is open to people of all ages, coding knowledge and backgrounds.

Never too old to play Minecraft! John (70) being taught how to play Minecraft Pi by Isaac (10)


A parent:

“These events are really great. It lets the kids experiment with technology that they wouldn’t otherwise have got the opportunity to use in school. Most schools in Northern Ireland don’t seem to offer any coding opportunities for the kids so stuff like this is essential. And Andrew and Libby are great, giving up their Saturdays to come and teach these kids and my son!”

Next month is the Christmas special Jam! We have some secret new activities planned and of course, lots of food!

Some awesome cupcakes baked by @baker_geek for last month’s Jam.


Want to come along to the next NI Raspberry Jam?

Northern Ireland Raspberry Jam is held on the 2nd Saturday of every month, with NI Raspberry Jam 7 (the Christmas special) on 12th December at Farset Labs, Belfast.

Tickets are free! (Although we ask for a £3 donation towards the venue if you’re able.)

The event is especially aimed at complete beginners to the Raspberry Pi or people just starting out, but we do have some more complex projects and challenges for you if you are an expert.

Special thanks to Libby (16) for helping me with this month’s Jam, and to Farset Labs for basically letting us take over the building for a Saturday afternoon!

You know when you are onto something good when you overhear one of the kids on their way out saying: “Daddy, daddy, can I borrow your phone to book next month’s tickets before they all go?”

Interested in finding a Raspberry Jam near you? Check out our Jams page!

TorrentFreak: U.S. Brands Kim Dotcom a Fugitive, ‘Spies’ on Others

This post was syndicated from: TorrentFreak and was written by: Ernesto. Original post: at TorrentFreak

megaupload-logoIt’s been nearly three years since Megaupload was taken down by the U.S. authorities but it’s still uncertain whether Kim Dotcom and his fellow defendants will be extradited overseas.

Two months ago the U.S. Government launched a separate civil action in which it asked the court to forfeit the bank accounts, cars and other seized possessions of the Megaupload defendants, claiming they were obtained through copyright and money laundering crimes.

Megaupload responded to these allegations at the federal court in Virginia with a motion to dismiss the complaint. According to Megaupload’s lawyers the U.S. Department of Justice (DoJ) is making up crimes that don’t exist.

In addition, Dotcom and his co-defendants claimed ownership of the assets U.S. authorities are trying to get their hands on. A few days ago the DoJ responded to these claims, arguing that they should be struck from the record as Dotcom and his colleagues are fugitives.

In a motion (pdf) submitted to a Virginia District Court the U.S. asks for the claims of the defendants to be disregarded based on the doctrine of fugitive disentitlement.

“Claimants Bram van der Kolk, Finn Batato, Julius Bencko, Kim Dotcom, Mathias Ortmann, and Sven Echternach, are deliberately avoiding prosecution by declining to enter the United States where the criminal case is pending,” U.S. Attorney Dana Boente writes.

“The key issue in determining whether a person is a fugitive from justice is that person’s intent. A defendant who flees with intent to avoid arrest is a fugitive from justice,” he adds.

Since Kim Dotcom and his New Zealand-based Megaupload colleagues are actively fighting their extradition they should be seen as fugitives, the DoJ concludes.

“Those claimants who are fighting extradition on the criminal charges in the related criminal case, claimants van der Kolk, Batato, Kim Dotcom, and Ortmann, are fugitives within the meaning of the statute, regardless of the reason for their opposition.”

Megaupload lawyer Ira Rothken disagrees with this line of reasoning. He told TF that the fugitive disentitlement doctrine shouldn’t apply here.

“The DOJ is trying to win the Megaupload case on procedure rather than the merits. Most people don’t realize that Kim Dotcom has never been to the United States,” Rothken says.

A person who has never been to the United States and is currently going through a lawful procedure in New Zealand shouldn’t be seen as a fugitive, according to Rothken.

The recent DoJ filing also highlights another aspect of the case. According to a declaration by special FBI agent Rodney Hays, the feds have obtained “online conversations” of Julius Bencko and Sven Echternach, the two defendants who currently reside in Europe.

These conversations were obtained by law enforcement officers and show that the authorities were ‘spying’ on some of the defendants months after Megaupload was raided.

tapped

“During a conversation that occurred on or about March 28, 2012, Bencko allegedly told a third-party, ‘I can come to Bratislava [Slovakia] if needed .. bu [sic] you know .. rather not travel around much .. ‘ Later in the conversation, Bencko states ‘i’m facing 55 years in usa’,” the declaration reads.

In addition to the two defendants, law enforcement also obtained a conversation of Kim’s wife Mona Dotcom, who is not a party in the case herself.

“During a conversation that occurred on or about February 9, 2012, a third-party told Mona Dotcom, ‘Also Julius [Bencko] wants Kim [Dotcom] to know that he will be supportive in what ever way possible that he needs’.”

According to the U.S. the ‘tapped’ conversations of Bencko and Echternach show that since they are avoiding travel to the United States, they too can be labeled fugitives.

It’s unclear how the online conversations were obtained, but Megaupload lawyer Ira Rothken told TF that he wouldn’t be surprised if civil liberties were violated in the process, as has happened before in the case.

Whether these fugitive arguments will be accepted by the court has yet to be seen. Highlighting the motion Megaupload submitted earlier, Rothken notes that regardless of these arguments the case should be dismissed because the court lacks jurisdiction.

“The United States doesn’t have a statute for criminal copyright infringement,” Rothken tells us. “We believe that the case should be dismissed based on a lack of subject matter jurisdiction.”

Source: TorrentFreak, for the latest info on copyright, file-sharing and anonymous VPN services.

Backblaze Blog | The Life of a Cloud Backup Company: There’s No I in Bryan

This post was syndicated from: Backblaze Blog | The Life of a Cloud Backup Company and was written by: Yev. Original post: at Backblaze Blog | The Life of a Cloud Backup Company

blog-bryan
Straight out of Portland, Bryan joins our Datacenter staff to help back up your world! Having had a wide variety of jobs before joining the Backblaze team, including farming and store clerking, Bryan is excited to join the tech industry, and can’t wait to help ensure your data is safe. Let’s learn some more about our fourth, and latest, “Brian”!

What is your Backblaze Title?
Datacenter Technician

Where are you originally from?
Before Sacramento, I lived in Portland, Oregon. Before that, I called upstate New York “home”.

Why did you move to Sacramento?
I moved to California to help back up your world!

What attracted you to Backblaze?
I’ve lost data before and it’s horrible. I like knowing that my stuff is backed up securely, and I’d like to help other people know their stuff is backed up too. Backblaze is the place to do this.

From the outside, Backblaze struck me as inventive and ambitious, and the data center work looked like it would switch from thinking/planning to moving/doing and back again throughout the day at a good clip. I’ve been here for a week, and it really does function that way. I love it.

Where else have you worked?
Farms, video rental stores, gas stations, radio waves, computer stores, and offices. You know, the usual.

Tell us how you currently backup your photos, music, data, etc. on your home computer?
Local backups: Time Machine
Bootable backups: Shirt Pocket’s SuperDuper! and Bombich’s Carbon Copy Cloner
Offsite backups: Backblaze

If you won the lottery tomorrow, what would you do?
I would buy you lunch!

How did you get into computers?
In sixth grade when I was 12, my grandparents bought a Packard Bell so they could make spreadsheets tracking their stats in fantasy NASCAR. Every day after school I pedaled my bicycle to their house along ATV trails through the forest, so that I could use the computer. Eventually I was given someone’s used computer. I still visited my grandparents though.

Welcome Bryan! We’re jazzed to have you on board, and will definitely look forward to that lunch after you hit it big with the lotto!

Author information

Yev

Yev

Social Marketing Manager at Backblaze

Yev enjoys speed-walking on the beach. Speed-dating. Speed-writing blog posts. The film Speed. Speedy technology. Speedy Gonzales. And Speedos. But mostly technology.

Follow Yev on:

Twitter: @YevP | LinkedIn: Yev Pusin | Google+: Yev Pusin

The post There’s No I in Bryan appeared first on Backblaze Blog | The Life of a Cloud Backup Company.

SANS Internet Storm Center, InfoCON: green: “Big Data” Needs a Trip to the Security Chiropracter!, (Wed, Nov 19th)

This post was syndicated from: SANS Internet Storm Center, InfoCON: green and was written by: SANS Internet Storm Center, InfoCON: green. Original post: at SANS Internet Storm Center, InfoCON: green

When the fine folks at Portswigger updated Burp Suite last month to 1.6.07 (Nov 3), I was really glad to see NoSQL injection in the list of new features.

What’s NoSQL, you ask? If your director is talking to you about Big Data, or your marketing department is talking to you about customer metrics, likely what they mean is an app with a back-end database that uses NoSQL instead of traditional SQL.

I’m tripping over this requirement this month in the retail space. I’ve got clients that want to track a retail customer’s visit to the store (tracking their cellphones using the store wireless access points), to see:

  • if customers visit store sections where the sale items are?
  • or, if customers visit area x, do they statistically visit area y next?
  • or, having visited the above areas, how many customers actually purchase something?
  • or, after seeing a purchase, how many feature sale purchases are net-new customers (or repeat customers)

In other words, using the wireless system to track customer movements, then correlating it back to purchase behaviour to determine how effective each feature sale might be.

So what database do folks use for applications like this? Front-runners in the NoSQL race these days include MongoDB and CouchDB. Both databases do cool things with large volumes of data, but neither is meant to be exposed directly to the Internet. MongoDB’s own security documentation advises: “Ensure that MongoDB runs in a trusted network environment and limit the interfaces on which MongoDB instances listen for incoming connections. Allow only trusted clients to access the network interfaces and ports on which MongoDB instances are available.”

CouchDB has a similar statement at http://guide.couchdb.org/draft/security.html: “it should be obvious that putting a default installation into the wild is adventurous.”

So, where do I see folks deploying these databases? Why, in PUBLIC CLOUDS, that’s where!

And what happens after you stand up your almost-free database and the analysis on that dataset is done? In most cases, the marketing folks who are using it simply abandon it, in a running state. What could possibly go wrong with that? Especially if they didn’t tell anyone in either the IT or security group that this database even existed?

Given that we’ve got hundreds of new ways to collect data that we’ve never had access to before, it’s pretty obvious that if big data infrastructures like these aren’t part of our current plans, they likely should be. All I ask is that folks do the risk assessments that they would if this server were going up in their own datacenter. Ask some questions like:

  • What data will be on this server?
  • Who is the formal custodian of that data?
  • Is the data covered under a regulatory framework such as HIPAA or PCI? Do we need to host it inside of a specific zone or vlan?
  • What happens if this server is compromised? Will we need to disclose to anyone?
  • Who owns the operation of the server?
  • Who is responsible for securing the server?
  • Does the server have a pre-determined lifetime? Should it be deleted after some point?
  • Does the developer or marketing team that’s looking at the dataset understand your regulatory requirements? Do they understand that credit card numbers and patient data are likely bad candidates for an off-prem / casual treatment like this? (Hint: NO, THEY DO NOT.)

Smartmeter applications are another big data thing I’ve come across lately. Laying this out end to end: collecting data from hundreds of thousands of embedded devices that may or may not be securable, over a public network, to be stored in an insecurable database in a public cloud. Oh, and the collected data impinges on at least two regulatory frameworks, PCI and NERC/FERC, and possibly also privacy legislation depending on the country. Ouch!

Back to the tools to assess these databases. Burp isn’t your only option to scan NoSQL database servers; in fact, Burp is more concerned with the web front-end to NoSQL itself. NoSQLMap (http://www.nosqlmap.net/) is another tool that’s seeing a lot of traction, and of course the standard usual-suspects list of tools have NoSQL scripts, components and plugins: Nessus has a nice set of compliance checks for the database itself, and Nmap has scripts for couchdb, mongodb and hadoop detection, as well as mining for database-specific information. OWASP has a good page on NoSQL injection at https://www.owasp.org/index.php/Testing_for_NoSQL_injection, and also check out http://opensecurity.in/nosql-exploitation-framework/.
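As a sketch of the Nmap side of that recon (the host name here is hypothetical, the ports are the MongoDB and CouchDB defaults, and the mongodb-* and couchdb-* NSE scripts ship with stock Nmap; only scan systems you are authorized to test):

```shell
# Probe the default MongoDB and CouchDB ports; adjust if the deployment moved them
nmap -p 27017 --script mongodb-info,mongodb-databases db.example.com
nmap -p 5984  --script couchdb-databases,couchdb-stats db.example.com
```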

Shodan is also a nice place to look in an assessment during your recon phase (for instance, take a look at http://www.shodanhq.com/search?q=MongoDB+Server+Information )

Have you used a different tool to assess a NoSQL database? Or have you had, let’s say, an interesting conversation around securing data in such a database with your management or marketing group? Please, add to the story in our comment form!

===============
Rob VandenBrink
Metafore

(c) SANS Internet Storm Center. https://isc.sans.edu Creative Commons Attribution-Noncommercial 3.0 United States License.

TorrentFreak: Artists and Labels Now Sue Chrysler Over CD-Ripping Cars

This post was syndicated from: TorrentFreak and was written by: Ernesto. Original post: at TorrentFreak

ripping-carToward the end of the last century record labels feared that home taping would kill the music industry.

To counter the threat cassette tape recorders posed at the time, they asked Congress to take action.

This eventually resulted in the Audio Home Recording Act (AHRA) of 1992. Under this law importers and manufacturers must pay royalties on “digital audio recording devices,” among other things.

The legislation is still in play today. Instead of targeting cassette recorders, however, the threats are now other copying devices. According to the Alliance of Artists and Recording Companies, this includes media entertainment systems that are built into many cars.

This week the music group, which lists major record labels and 300,000 artists among its members, sued Chrysler and its technology partner Mitsubishi (pdf) for failing to pay royalties.

The dispute revolves around Chrysler’s media entertainment systems including “MyGIG” and “Uconnect Media Center” which allow car owners to rip CDs to a hard drive.

“These devices are covered by the AHRA, but the defendants have refused to pay royalties on them or include the required serial copy protections,” AARC Executive Director Linda Bocchi comments.

The music group reached out to Chrysler and Mitsubishi hoping to settle the issue, but these talks failed. As a result AARC saw no other option than to take the case to court.

“We had hoped Chrysler and the Mitsubishi Electric companies would settle their liability and begin paying what they owe once they had an opportunity to study and assess the issues,” Bocchi says.

“But it has now become painfully clear they have no intention of complying with the law. While litigation is always a last resort, it is clear this lawsuit is the only way to protect our members’ rights.”

The current lawsuit follows an earlier case against Ford and General Motors, which is still ongoing.

In both cases artists and record labels are looking for statutory damages, which could amount to hundreds of millions of dollars. In addition, they want to prevent the manufacturers from selling these unauthorized devices in their cars.

Ford has already filed a motion to dismiss arguing that AHRA doesn’t apply to their systems, and the other defendants including Chrysler are likely to do the same. Whose side the court will agree with is expected to become clear in the months to come.

Source: TorrentFreak, for the latest info on copyright, file-sharing and anonymous VPN services.

Raspberry Pi: A collection of Pis

This post was syndicated from: Raspberry Pi and was written by: Liz Upton. Original post: at Raspberry Pi

Liz: Today’s guest post comes from Alex Eames, who runs the rather wonderful RasPi.TV. He’s been furtling through his drawers, and has discovered he owns a surprising number of Raspberry Pi variants. Thanks Alex! 

Now we have the A+, I thought it’d be a good time to celebrate its ‘birth’ by having a rundown of the various mass-produced models of Raspberry Pi.

I had a look through my collection and was somewhat surprised to see that I have 10 different variants of Raspberry Pi now. There is one I don’t have, but more about that later. Here’s the family photo. You can click it for a higher resolution version.

Raspberry_Pi_Family_A-annotated-15001

Rev 1 Model B

In row 1, column 1 we have the Rev 1 model B. Although I was up early on 29th February 2012, I didn’t get one of the first 10,000 Pis produced. This was delivered in May 2012. It’s a Farnell variant (I have an RS one as well, but it does full-time duty as my weather station). This was the original type of Pi to hit the market. It has 256 MB RAM and polyfuses on the USB.

Rev 1 Model B – With Links

In row 1, column 2 you’ll see a slightly later variant of Rev 1 model B. This one has 0 Ohm links instead of polyfuses. It helped to overcome some of the voltage drop issues associated with the original Rev 1, but it introduced the “hot-swapping USB devices will now reboot your Pi” issue, which was fixed in the B+.

Rev 2 Model B (China)

Row 2, column 1. Here we have an early Rev 2 Pi. This one was manufactured in China. It originally had a sticker on saying “made in China”, but I took it off. This one was bought some time around October 2012. The Rev 2 model B has 512 MB RAM (apart from a few early ones which had 256 MB), mounting holes and two headers called P5 and P6.

Rev 2 Model B (UK)

Row 2, column 2. This is a much later Rev 2 Pi, made at SONY in Wales, UK.

Chinese Red Pi Rev 2 Model B

Row 3, column 1. This is one of the Red Pis made especially for the Chinese market. They are not allowed to be sold in the UK, but if you import one yourself that’s not a problem. It is manufactured to a less stringent spec than the ones at SONY, and is not EMC tested. Therefore it bears no CE/FCC marks.

Limited Edition Blue Pi Rev 2 Model B

Row 3, column 2. I’m not going to go into how I got hold of this. Suffice it to say it was not at all easy, but no laws were broken, and nobody got hurt. RS had 1000 of these made in March 2013 as a special limited anniversary edition to use as prizes and awards to people who’ve made a special contribution to education etc. I know of about 5 or 6 people who have them. (At least two of those people traded for them.) They are extremely hard to get. They come in a presentation box with a certificate. I have #0041. Other than their blueness, they are a Rev 2 model B Pi.

Model A

Row 1, column 3 is a model A. The PCB is identical to the Rev 2 model B, but it has only one USB port, no ethernet port, no USB/ethernet chip and 256 MB RAM. The $25 model A was released in February 2013. On the day I got mine, the day after launch, I made a quick and dirty “I’ve got mine first” video, part of which ended up on BBC Click. The model A sold about 100k units. Demand for it was far outstripped by demand for the model B, although at one point CPC was offering a brilliant deal on a camera module and model A for £25 (I snagged a couple of those).

Compute Module

Row 2, column 3 is the Compute Module, sitting atop the Compute Module development board. This was launched 23 June 2014 as a way to enable industrial use of the Pi in a more convenient form factor. The module fits in a SODIMM connector and is essentially the BCM2835, its 512 MB RAM and 4 GB of eMMC flash memory, with all available GPIO ports broken out. It costs $30 when bought by the hundred.

Model B+

Row 3, column 3 is the model B+. This was launched on 14 July 2014 and was a major change in form factor. Rounded corners, corner mount holes, 40 GPIO pins, 4 USB ports, improved power circuitry and a complete layout redesign. The B+ was announced as the ‘final revision’ of the B. So it would appear that it’s going to be with us for some time.

Model A+

In row 4, all by itself, we have the shiny new Raspberry Pi A+, launched 10 November 2014. It’s essentially the same as a B+ with the USB end cut off. It’s the smallest, lightest, cheapest, and least power-hungry Pi of all so far: 23 g, $20, and just half a watt at idle.

So Which One Don’t I Have?

I don’t have a Rev 2 256 MB variant. If you have one and would like to trade or sell it to me, I’d be happy to hear from you (alex AT raspi.tv).

I believe there is also now a red Chinese B+. I’ve not got one of those, but it’s only a matter of time. I wonder if there will be a red A+ at some point too? We Just Don’t Know!

TorrentFreak: If Illegal Sites Get Blocked Accidentally, Hard Luck Says Court

This post was syndicated from: TorrentFreak and was written by: Andy. Original post: at TorrentFreak

The movie and music industries have obtained several High Court orders which compel UK ISPs to block dozens of websites said to facilitate access to copyright-infringing content. Recently, however, they have been joined by those seeking blockades on trademark grounds.

The lead case on this front was initiated by Cartier and Mont Blanc owner Richemont. The company successfully argued that several sites were infringing on its trademarks and should be blocked by the UK’s leading ISPs.

The case is important not only to trademark owners but also to those operating in the file-sharing arena since the High Court is using developments in one set of cases to determine the outcome of legal argument in the other.

The latest ruling concerns potential over-blocking. In some cases target sites move to IP addresses that are shared with other sites that are not covered by an injunction. As a result, these third-party sites would become blocked if ISPs filter their IP addresses as ordered by the Court.

To tackle this problem Richemont put forward a set of proposals to the Court. The company suggested that it could take a number of actions to minimize the problem including writing to the third-party sites informing them that a court order is in force and warning them that their domains could become blocked. The third party sites could also be advised to move to a new IP address.

Complicating the issue is the question of legality. While third-party sites aren’t mentioned in blocking orders, Richemont views some of them as operating unlawfully. When the company’s proposals are taken as a package and sites are operating illegally, Richemont believes ISPs should not be concerned over “collateral damage.”

Counsel for the ISPs disagreed, however, arguing that the Court had no jurisdiction to grant such an order. Mr Justice Arnold rejected that notion and supported Richemont’s efforts to minimize over-blocking in certain circumstances.

“The purpose of Richemont’s proposal is to ensure that the [blocking] order is properly targeted, and in particular to ensure that it is as effective as possible while avoiding what counsel for Richemont described as ‘collateral damage’ to other lawful website operators which share the same IP address,” the Judge wrote.

“If the websites are not engaged in lawful activity, then the Court need not be concerned about any collateral damage which their operators may suffer. It is immaterial whether the Court would have jurisdiction, or, if it had jurisdiction, would exercise it, to make an order requiring the ISPs to block access to the other websites.”

The ISPs further argued that the Court’s jurisdiction to adopt Richemont’s proposals should be limited to sites acting illegally in an intellectual property rights sense. The argument was rejected by the Court.

Also of note was the argument put forward by the ISPs that it is the Court’s position, not anyone else’s, to determine if a third-party site is acting illegally or not. Justice Arnold said he had sympathy with the submission, but rejected it anyway.

“As counsel for Richemont submitted, the evidence shows that, in at least some cases, it is perfectly obvious that a particular website which shares an IP address with a Target Website is engaged in unlawful activity. Where there is no real doubt about the matter, the Court should not be required to rule,” the Judge wrote.

“Secondly, and perhaps more importantly, Richemont’s proposal gives the operators of the affected websites the chance either to move to an alternative server or to object before the IP address is blocked. If they do object, the IP address will not be blocked without a determination by the Court.”

In summary, third-party sites that share an IP address with a site featured in a blocking order can expect no sympathy from the High Court if, in Richemont’s judgment, they are acting illegally. The fact that they are not mentioned in an order will not save them, although they will have a chance to object before being blocked by UK ISPs.

“This action is about protecting Richemont’s Maisons and its customers from the sale of counterfeit goods online through the most efficient means, it is not about restricting freedom of speech or legitimate activity,” the company previously told TF.

“When assessing a site for blocking, the Court will consider whether the order is proportionate – ISP blocking will therefore only be used to prevent trade mark infringement where the Court is satisfied that it is appropriate to do so.”

Source: TorrentFreak, for the latest info on copyright, file-sharing and anonymous VPN services.

TorrentFreak: MPAA Pays University $1,000,000 For Piracy Research

This post was syndicated from: TorrentFreak and was written by: Ernesto. Original post: at TorrentFreak

Last week the MPAA submitted its latest tax filing covering 2013. While there are few changes compared to previous years there is one number that sticks out like a sore thumb.

The movie industry group made a rather sizable gift of $912,000 to Carnegie Mellon University, a figure that neither side has made public before.

This brings the MPAA’s total investment in the University over the past two years to more than a million dollars.

The money in question goes to the University’s “Initiative for Digital Entertainment Analytics” (IDEA), which researches various piracy-related topics. During 2012 the MPAA also contributed to the program, albeit significantly less: $100,000.

TF contacted IDEA co-director Rahul Telang, who told us that much of the money is spent on hiring researchers, buying data from third parties, and covering other research-related expenses.

“For any substantial research program to progress it needs funding, and needs access to data and important stakeholders who care about this research. IDEA center has benefited from this funding significantly,” he says, emphasizing that the research adheres to academic standards.

“All research is transparent, goes through academic peer review, and published in various outlets,” Telang adds.

While IDEA’s researchers operate independently, without an obligation to produce particular studies, their output thus far is in line with Hollywood’s agenda.

One study showed that the Megaupload shutdown boosted digital sales while another reviewed academic literature to show that piracy mostly hurts revenues. The MPAA later used these results to discredit an independent study which suggested that Megaupload’s closure hurt box office revenues.

Aside from countering opponents in the press, the MPAA also uses the research to convince lawmakers that tougher anti-piracy measures are warranted.

Most recently, an IDEA paper showed that search engines can help to diminish online piracy, an argument the MPAA has been hammering on for years.

The tax filing, picked up first by Variety, confirms a new trend of the MPAA putting more money into research. Earlier this year the industry group launched a new initiative offering researchers a $20,000 grant for projects that address various piracy related topics.

The MPAA sees academic research as an important tool in its efforts to ensure that copyright protections remain in place, or are strengthened if needed.

“We want to enlist the help of academics from around the world to provide new insight on a range of issues facing the content industry in the digital age,” MPAA CEO and former U.S. Senator Chris Dodd said at the time.

The movie industry isn’t alone in funding research for ‘political’ reasons. Google, for example, heavily supports academic research on copyright-related projects in part to further its own agenda, as do many other companies.

With over a million dollars in Hollywood funding in their pocket, it’s now up to IDEA’s researchers to ensure that their work is solid.


TorrentFreak: Why Hollywood Director Lexi Alexander Sides With “Pirates”

This post was syndicated from: TorrentFreak and was written by: Ernesto. Original post: at TorrentFreak

It’s pretty obvious that Lexi Alexander isn’t your average Hollywood director. Instead of parading on the red carpet sharing redundant quotes, she prefers to challenge the powers that rule Hollywood.

A few months ago Alexander campaigned to get Pirate Bay’s Peter Sunde released from prison, pointing out that throwing people in jail is not going to stop piracy.

She believes that the MPAA and other pro-copyright groups are a bigger threat than casual pirates, and unlike some of her colleagues she is not afraid to tell the world.

Recently Alexander penned five reasons why she’s pro file-sharing and copyright reform. While she doesn’t agree with the “everything should be free” mantra of some anti-copyright activists, Alexander believes that file-sharing is mostly a symptom of Hollywood’s failures.

Over the past day or so this turned into a heated debate (e.g. 1, 2) among movie industry workers on Twitter, where various anti-piracy advocates condemned the movie director and others for siding with “pirates.”

From a Hollywood perspective Alexander’s ‘balanced’ comments may indeed appear extreme, not least since like-minded voices keep quiet for career reasons. So why has she decided to jump on the barricades then? Today, Alexander explains her motivations to us in a short interview.

TF: What triggered you to discuss file-sharing and copyright related topics in public?

Lexi: It wasn’t my intent to be that outspoken about file-sharing, at first I just wanted to expose the hypocrisy of Hollywood going after anybody for any crime. But after I had published that first blog, I was suddenly exposed to a lot more information about the issue, either from people in the copyright reform movement or through outlets like yours.

Frankly, TorrentFreak has a lot to do with the extent of my outspokenness. Sometimes I see your headlines in my Twitter feed and I think I’m in some alternative universe, where I’m the only one who swallowed the red pill. “Another kid in prison for a file-sharing”, “Anti file-sharing propaganda taught in schools”, “torrent sites reported to the state department”, etc., etc. All done in the name of an industry that is infamous for corruption. I mean, doesn’t anybody see that? Hollywood studios shaking their finger at people who illegally download stuff is like the Vatican shaking their finger at pedophiles.

TF: What’s your main motivation to support file-sharing and copyright reform?

Lexi: Well, first and foremost I will not stand for young, bright minds being hunted and locked up in my name. And since I am still part of the film & TV industry, albeit not the most popular member at this point, these acts are done in my name. Even if I would agree with this ludicrous idea that everything to do with file-sharing or downloading is theft and should be punished with prison…then I’d still insist that everybody in Hollywood who has ever stolen anything or cheated anybody needs to go to prison first. If we could somehow make that rule happen with magical fairy dust…you’d never hear another beep about imprisoning file-sharers.

Secondly, I have said this a million times and it’s like I’m talking to the wall…horrible thieves (aka the four letter acronym) are stealing 92.5 % of foreign levies from filmmakers in countries outside of the US, breaking the Berne Convention in the process. It’s actually not legal for those countries to hand any money to anybody else but the creator. But somehow, some very smart con men duped these shady collection societies into handing them all the dough. Ask me again why I need copyright reform?

See, I wish more of my colleagues would come out of the fog…but that fog is made of fears, so it is thick and consistent. Fear to upset the decision makers, fear to get blacklisted and never get to make movies again, fear to get fired by your agents, fear to become unpopular with your film-industry peers, it’s so much easier to blame the British, pimple-faced teenager, who uploaded Fast and the Furious 6, for the scarcity we experience.

I used to get frustrated about my peers’ lack of courage, but lately I feel only empathy. I don’t like seeing talented storytellers ruled by fear. I don’t even enjoy the endless admissions I get anymore from producers or Executives who whisper in my ear that they’re pro file-sharing too (this is often followed by a demonstration of their illegally downloaded goods or their torrent clients, as if they’re trying to make sure I’ll put in a good word, if the power were to shift to the other side one of these days).

TF: Do you believe that your opinions on these topics may impact your career? If so, how?

Lexi: What do you think? LOL

But my opinions on these topics are based on facts, so therefore the question I have to ask myself next is: If I keep the truth to myself and watch innovators get sent to prison by actual criminals…how does that impact my soul?

I do realize how huge the giant I decided to criticize really is whenever I read about the amount of money that’s at play here.

At the moment I still have a TV show under option, which I am currently developing and I’m getting ready to pitch another one. A few things definitely fell through right after my first piracy post and I’m not sure how many people don’t consider me for projects because of my file-sharing stance. I can’t really worry about that. First and foremost I’m still a filmmaker, so if this shit gets too real I have to force my mind down the rabbit hole (filmmaker euphemism for escaping into your screenplays or movies).

TF: File-sharing also has its downsides of course. What’s the worst side of piracy in your opinion?

Lexi: The worst part is that there are a lot of people who suddenly feel entitled to do anything they want with our work, at any given stage. I spoke to a filmmaker the other day whose film got leaked during post production. It was missing the visual effects and it had a temp score (temporary music used as a filler before the real score is ready). Then reviews started popping up about this version of the film on IMDB, yet the people who posted those reviews had no fucking clue what they were judging, revealed by the many comments about “the director ripping off the Dark Knight score”. It was the Dark Knight score, you morons.

That was really heartbreaking and whoever doesn’t understand that can go to hell. I don’t think there’s anybody in the world who’d like their work, whatever it may be, stolen when it’s half way done and paraded around the world with their name on it.

I also will never be able to respect anybody who films or watches one of those shaky cam movies. I don’t buy that there’s anybody who enjoys a movie that way, I think this is all about trying to be the shit on some forum.

TF: In what way do you think file-sharing will (and has) change(d) the movie industry?

Lexi: I entered this industry right at the beginning of the transition to digital technology. I remember insisting to shoot my first two films on film stock, by then people were already dropping the “dinosaur” and “stone age” hints. We were all beaten into submission when it came to new digital technologies, because they reduced production and distribution costs. Then the powers started realizing that those same technologies also made unauthorized duplications much easier, so the narrative changed and now we were told to hate that part of it. It’s almost comical isn’t it?

I quickly realized that file-sharing would shatter borders and as someone who considers herself a citizen of the world, rather than of one country, this made me extremely happy. I have always wanted entertainment events to be global rather than national. This is good for the world.

The more the audience becomes familiar with foreign movies and TV shows (not synchronized and released months later, but subtitled and premiering simultaneously) the sooner we will start accepting, maybe even demanding shows and movies with a diverse, global cast from the get go. And since those are the shows I create… it cannot happen fast enough.


Raspberry Pi: MagPi issue 28

This post was syndicated from: Raspberry Pi and was written by: Liz Upton. Original post: at Raspberry Pi

I’m in a bit of a rush today; we’re all at the factory in Wales where the Raspberry Pi is built to show the team that works in Cambridge how to make a Pi. So I’ll hand over to Team MagPi, who have just released their 28th edition of the free monthly Raspberry Pi magazine, written by Raspberry Pi fans for Raspberry Pi fans.

Editor Ash Stone says:

This month’s Issue is packed with hardware and programming articles.  We are pleased to present the first article in an OpenCV (open source computer vision) image recognition software series by Derek Campbell.  The robot that Derek used to test the software configuration is shown on this month’s cover.

Expanding the I/O possibilities of the Raspberry Pi is often a first step of electronics projects.  This time, Dougie Lawson presents a review of the Arduberry board from Dexter Industries.  This little board provides an ideal microcontroller interface for more complicated electronics projects.  This month’s hardware articles are rounded off by Karl-Ludwig’s third BitScope article, which includes examples of preamplifier circuits and associated test and measurement.

The Raspberry Pi provides the opportunity to run many different software applications.  Voice over IP (VoIP) allows telephone calls to be carried over an internet connection.  Walbarto Abad continues his mini-series by describing how to setup an Asterisk VoIP server.

The second application article this month continues the discussion of git (distributed version control system).  Git was originally produced for Linux kernel development, but is now a mainstay of many different development projects and has been adopted by several schools too.  Alec Clews leads us through his second tutorial on the subject.

This month’s programming article demonstrates how to build an arcade game using FUZE BASIC.  Jon Silvera includes instructions, code and images to build a horizontally scrolling game.

We are on the look out for more articles at all levels and on all subjects.  If you are interested in submitting an article, please get in touch with us by emailing articles@themagpi.com.

If you have any other comments, you can find us on Twitter (@TheMagP1) and Facebook (www.facebook.com/MagPiMagazine) too.

LWN.net: Live kernel patching for SUSE Enterprise Linux

This post was syndicated from: LWN.net and was written by: corbet. Original post: at LWN.net

SUSE has announced that it is now using kGraft to make live kernel patches available for its enterprise distribution. “Unlike some other Linux kernel live patching technologies, SUSE Linux Enterprise Live Patching doesn’t require stopping the whole system while it performs the patching. And because it is a fully open source solution, it allows for easy code review of the patch sources. SUSE is engaging with the upstream community to help ensure a sustainable future for kernel live patching on Linux in general and SUSE Linux Enterprise specifically.”

TorrentFreak: Movie Chief Describes University Piracy Fines as “Terrific”

This post was syndicated from: TorrentFreak and was written by: Andy. Original post: at TorrentFreak

In addition to their obligations under the DMCA, in 2010 a new requirement was put in place which meant that university funding in the U.S. was placed in jeopardy if establishments didn’t take their anti-piracy responsibilities seriously.

The policy hasn’t been repeated in any other key countries in Europe or elsewhere, but that hasn’t stopped educational institutions from introducing their own policies to deal with on-campus infringement. A particularly harsh example can be found in Australia.

The University of New South Wales, which is ranked among the top five universities in Australia, offers its students free Wi-Fi Internet access. Known as Uniwide, the system was upgraded last year to offer speeds of 1.3 Gigabits per second in order to cope with around 20,000 devices being regularly connected to the network.

With students achieving up to 10 megabits per second on their connections, it’s perhaps no surprise that some use the Wi-Fi network for downloading movies, TV shows and other copyrighted content. In order to curtail the practice the university has put in place tough punishments for those who flout the rules.

While disconnections and up to $1,000 in fines are serious enough, it may come as a surprise that monies collected don’t go to compensate artists. University of New South Wales pumps the money back into “student amenities” instead.

“I just find it disturbing that a university has decided how it will enforce the laws of the Commonwealth,” Michael Speck, an independent anti-piracy investigator and former NSW policeman told The Age. “It’s quite disturbing and without too much natural justice.”

Adding fuel to the fire, two parties embroiled in the general piracy debate currently raging in Australia have also weighed in with their opinions.

Steve Dalby, chief regulatory officer of Internet provider iiNet, called the fines “very strange”. The response from Dalby is predictable. The ISP famously refused to pass on infringement notices to its customers when prompted to by movie company Village Roadshow, a spat that took the pair to court.

On the other hand, comments from Graham Burke, co-chief executive of Village Roadshow, reveal that the rivals are still just as far apart in their views. Burke said it was “terrific” that the university was fining students and being “proactive and taking responsibility for the users of its network.”

“We think it is more important for students to be educated about copyright by the university imposing these fines than it is for the rights holders to collect damages for the breaches that are occurring,” Burke told The Age.

“In fact the more I think about it this action by the university is helping the future of good citizenship of its many students.”

There can be little doubt that traditionally poor students would find themselves thinking deeply about copyright when landed with a $1,000 fine, but whether that would put money back in the artists’ pockets long-term is another matter.

Fortunately not too many WiFi users are falling foul of the rules. According to the university, three students and one staff member have received punishments this year. All had their access suspended and two of the students were fined $480 each.


SANS Internet Storm Center, InfoCON: green: Updates for OS X , iOS and Apple TV, (Mon, Nov 17th)

This post was syndicated from: SANS Internet Storm Center, InfoCON: green and was written by: SANS Internet Storm Center, InfoCON: green. Original post: at SANS Internet Storm Center, InfoCON: green

Apple today released updates for iOS 8, OS X 10.10 (Yosemite) and the Apple TV. Here are some of the highlights from a security point of view:

OS X 10.10.1

(approx. listed in order of severity)

CVE-2014-4459 – Remote Code Execution (critical): A vulnerability in WebKit could allow a malicious site to execute arbitrary code.
CVE-2014-4453 – Information Leakage (important): The index Spotlight creates on a removable drive may include content from other drives. This vulnerability was recently discussed publicly in a blog post whose author discovered e-mail fragments in the Spotlight index created on a USB drive.
CVE-2014-4460 – Information Leakage (important): Safari may not delete all cached files after leaving private browsing. If a user visits a site without private browsing after visiting the same site with private browsing enabled, the site may be able to connect the two visits.
CVE-2014-4458 – Information Leakage (important): The “About This Mac” feature includes unnecessary details that are reported back to Apple to determine the system model.

iOS

CVE-2014-4452, CVE-2014-4462 – Remote Code Execution (critical): WebKit issues that could lead to arbitrary code execution when visiting a malicious webpage.
CVE-2014-4455 – Unsigned Code Execution (important): A local user may execute unsigned code.
CVE-2014-4460 – Information Leakage (important): Safari doesn’t delete all cached files when leaving private mode.
CVE-2014-4461 – Privilege Escalation (important): A malicious application may execute arbitrary code with system privileges.
CVE-2014-4451 – Security Feature Bypass (important): An attacker may be able to exceed the maximum passcode attempt limit to bypass the lock screen.
CVE-2014-4463 – Information Leakage (important): The “leave a message” feature in FaceTime may have allowed sending photos from the device.
CVE-2014-4457 – Code Execution (important): The debug feature would allow applications that were not being debugged to be spawned.
CVE-2014-4453 – Information Leakage (important): iOS would submit the device’s location to Spotlight Suggestions servers before the user entered a query.

Apple TV

CVE-2014-4462 – Code Execution (critical): A memory corruption in WebKit may be used to terminate applications or run arbitrary code.
CVE-2014-4455 – Code Execution (important): A local user may execute unsigned code.
CVE-2014-4461 – Privilege Escalation (important): A malicious application may be able to execute arbitrary code with system privileges.


Johannes B. Ullrich, Ph.D.
STI|Twitter|LinkedIn

(c) SANS Internet Storm Center. https://isc.sans.edu Creative Commons Attribution-Noncommercial 3.0 United States License.

Backblaze Blog: How Backblaze Achieved 917% Growth

This post was syndicated from: Backblaze Blog and was written by: Gleb Budman. Original post: at Backblaze Blog

With 917% revenue growth over the last 5 years, Backblaze has just secured itself the 128th spot on the list of the fastest growing technology companies in the United States. The journey has been exciting but could have come to an abrupt end at various times. Let me share a bit about how we grew and what we’ve learned.

Start
In 2007, Jeanine had a computer crash and begged Brian for help recovering her files. She had no backup. He could not help.

Five of us talked about this experience and realized that 100% of the photos, movies, and personal and work documents were going digital. But with fewer than 10% of people backing up their computers, eventually all of these digital items would vaporize. We quit our jobs and started Backblaze to solve the impending disaster.

Financing
While we had previously raised VC funding for startups, we decided to start Backblaze differently, committing to each other that we would work for a year without pay and would each put a bit of money into the business. This would effectively be the seed round.

After five years of steady growth, we decided to raise our first VC round.

Challenges
From the outside it seems like a simple, beautiful exponential growth curve up and to the right. From the inside, the challenges along the way don’t fit onto a single page. Probably not into a book either. Paul Graham has a fantastic chart of this experience he calls the “Startup Curve”.

I thought of many issues we might have: not getting the product/market fit right, not being able to build the product, not being able to attract customers, running out of cash. And some of these bumps, such as finding that many of the expected ways to find customers don’t work, we actually did run into.

But others, such as the distraction of almost being acquired or the massively impactful challenge of a flood in Thailand were harder to predict.

Successes
Despite the challenges, there were two things that kept the company succeeding: 1) focused, determined, hard work, and 2) luck.

The day of our initial beta launch on June 4th, 2007, we had glowing articles in TechCrunch and Ars Technica. People were signing up in droves and it was thrilling. But a week later the servers were bored – no one was showing up to the website. The initial external excitement vaporized and what happened next was all of us having to put our heads down and plow forward. Day after day we needed to do the small things required to build the business, that over time, add up to growth.

And then there was luck. We planned to store data on Amazon S3. Since we couldn’t afford it, we designed our own storage. Not only did that end up being a huge boon to us as it dramatically reduced our costs, but open-sourcing our Backblaze Storage Pod design hit a nerve and 1 million people read that blog post. It helped put us on the map.

Growth
Early on the data center asked us to commit to ¼ of one cabinet for one year. At the time that was a $12,000 commitment and we negotiated it down to 6 months to reduce our risk. Now we have over 100 petabytes of data stored in over 100 cabinets, adding 3 cabinets of equipment every month, and committed for several years. Sometimes growth sneaks up on you.

From 2009 through 2013 we’ve grown revenue 917%. That was good enough for 128th place in the 2014 Deloitte Technology Fast 500™ in the United States – just beating out Facebook in the 129th spot.

To qualify for the Fast 500 a business had to earn over $50K in revenue in 2009 and over $5M in revenue in 2013. We obviously exceeded those numbers. (While we don’t disclose revenue, Backblaze is in double-digit millions of dollars in revenue.)

Balance
Over the same period in which Backblaze grew 917%, an estimated 55% of companies failed. Mortality rates are even higher in the information technology space where Backblaze resides, and over the years multiple online backup companies and services have folded.

There is a saying I’ll paraphrase: Businesses don’t fail because they are unprofitable; they fail because they run out of cash.

Bootstrapping a company, especially a capital-intensive one, meant we constantly had to watch cash flow. Initially we were “afraid of customers,” because a large influx of new customers meant buying another $10,000 Storage Pod to serve customers who would each pay us $5 per month. Eventually the economics would work out, but for the first year we would be cash-flow negative. We came up with one simple way to ease this cash-flow challenge, but without raised capital you sometimes have to accept that things which make sense in the long run aren’t feasible, because you won’t make it to the ‘long run’ if you run out of cash first.
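The tension here is easy to see with a little arithmetic. The $10,000 pod cost and $5/month price come from the numbers above; how many customers a single pod serves is a made-up assumption for illustration only:

```python
# Illustrative sketch of the cash-flow math described above.
# Pod cost ($10,000) and price ($5/month) are from the post;
# customers_per_pod is a hypothetical assumption.
def months_to_break_even(pod_cost, monthly_price, customers_per_pod):
    """Months of subscription revenue needed to recoup one pod's up-front cost."""
    monthly_revenue = monthly_price * customers_per_pod
    cash = -pod_cost  # the pod is paid for up front
    months = 0
    while cash < 0:
        cash += monthly_revenue
        months += 1
    return months

# With, say, 500 customers on one $10,000 pod at $5/month,
# revenue is $2,500/month and the pod pays for itself in 4 months.
print(months_to_break_even(10_000, 5, 500))
```

The point isn’t the exact payback period; it’s that every burst of growth demands cash months before the subscriptions repay it.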

Takeaways
I’m honored that Backblaze has received this Fast 500 award, and we have learned a lot along the way. Here are 4 key takeaways:

  1. Build a sustainable business

I don’t mean a ‘green’ business; I mean a business that can last. A business can’t be high-growth if it’s out of business. Aim toward a model where customers support the company, even if at times you decide to raise funding. If customers are the cash engine, your business won’t be at the whim of the funding markets.

  2. Plan for the long term

Some companies are a flash in the pan – founded, launched, and acquired in a year. There’s a draw to this quick-buck approach. But most successful companies take years to build. Work on something you’ll be excited to do for many years. It’ll make the journey great, help overcome the bumps, and increase the chances of success.

  3. Work a day-at-a-time

    A great launch or customer-win feels fantastic. Celebrate the successes, but don’t fear the small steps. A business that makes $1 in revenue the first day and grows a mere 1% per day will only make $37 in revenue per day after an entire year…but it will make $76,240,508 in revenue per day after five years.

  4. Stay focused

    When we started Backblaze, we wrote an entire wall of products and features we wanted to build. After 7 years, we’re still working on the first one. Solving the right problem takes focus and time, and doing that is generally much better than partially solving many different problems.
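The compounding arithmetic in takeaway 3 is easy to verify. Daily revenue that starts at $1 and grows 1% per day equals 1.01 raised to the power of (day − 1):

```python
# Verify the 1%-per-day compounding claim from takeaway 3.
# Day 1 revenue is $1; each day it grows by 1%, so on day n
# revenue is 1.01 ** (n - 1) dollars.
def daily_revenue(day, start=1.0, growth=0.01):
    return start * (1 + growth) ** (day - 1)

one_year = daily_revenue(365)        # roughly $37 per day
five_years = daily_revenue(5 * 365)  # roughly $76 million per day
print(round(one_year, 2), round(five_years))
```

Small, steady daily progress really does compound into enormous numbers.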

Today is one of those exciting ‘success’ days when we celebrate an achievement. But this growth is in the rearview mirror, and tomorrow it’s time to get our heads back down and charge on.

 

Author information

Gleb Budman

Co-founder and CEO of Backblaze. Founded three prior companies. He has been a speaker at GigaOm Structure, Ignite: Lean Startup, FailCon, CloudCon; profiled by Inc. and Forbes; a mentor for Teens in Tech; and holds 5 patents on security.

Follow Gleb on: Twitter / LinkedIn / Google+

The post How Backblaze Achieved 917% Growth appeared first on Backblaze Blog.