Posts tagged ‘Other’

Raspberry Pi: GoBox: A Robotics Subscription Service

This post was syndicated from: Raspberry Pi and was written by: Matt Richardson. Original post: at Raspberry Pi

Kit maker Dexter Industries pulled the wraps off their latest Kickstarter: GoBox, the first-ever robot subscription service. It’s aimed at kids aged 7 and up, working with the help of an adult. No prior knowledge of robotics is required, and step-by-step guides and videos will be provided.

In the first month of service, kids will receive the popular GoPiGo kit to act as the core of their robot. This kit includes a Raspberry Pi, chassis, battery pack, motors, motor controller board, and wheels. Each subsequent month, they’ll receive a new component, such as a sound sensor, servo, or light sensor. Each month, they’ll also receive step-by-step instructions on how to accomplish a particular mission. See their Kickstarter page for details on the different backer rewards and a sample draft mission.

Of course, we’re delighted that Dexter Industries uses Raspberry Pi in their robotics kits. Why do they like our computer? I’ll let John Cole, Dexter’s Founder & CEO, speak for himself:

We’re using the Raspberry Pi because it’s the most open, flexible, and easy to start with hardware for learning programming. We can use Scratch to start with, which is super-easy for young learners to use. And we can walk learners all the way up to command line programming.

There are two interesting and important aspects to what makes GoBox different. The first is that we are starting with little to no background assumed. When we looked at other platforms for starting robotics, they assume you know something (maybe something about coding, about electronics, or about computers). We really wanted to minimize that, and make starting with robotics and programming as easy as possible. So that is why the Raspberry Pi is a perfect platform — because we really start the story from the beginning.

The second is that we’re trying to design the program to keep learners engaged over a long period of time with the subscription service. We’re helping learners gradually, and encouraging open-ended design problems, but with a new delivery every month, you keep learning over the course of a year, rather than rush in, try a few things, lose interest, and throw the program in a corner. A new box every month really encourages people to keep going, and to keep trying new things without overwhelming them all at once.

We think this is a powerful formula to learn some of the most important skills needed in the world today. We also are seeing the creative projects (“missions”) we have developed appeal to girls and boys alike, which is really encouraging.

Check out the GoBox Kickstarter for more details.

The post GoBox: A Robotics Subscription Service appeared first on Raspberry Pi.

Errata Security: Review: Rick and Morty

This post was syndicated from: Errata Security and was written by: Robert Graham. Original post: at Errata Security

The best sci-fi on television right now is an animated series called Rick and Morty on the Cartoon Network.

You might dismiss it, as on the surface it appears to be yet another edgy, poorly-drawn cartoon like The Simpsons or South Park. And in many ways, it is. But at the same time, hard sci-fi concepts infuse each episode. Sometimes, it’s a parody of well-known sci-fi, such as shrinking a ship to voyage through a body. In other cases, it’s wholly original sci-fi, such as creating a parallel “micro” universe whose inhabitants power your car battery. At least I think it’s original. It might be based on some obscure sci-fi story I haven’t read. Also, the car battery episode is vaguely similar to William Gibson’s latest cyberpunk book “The Peripheral”.

My point is this: it’s got that offensive South Park quality that I love, but mostly, what I really like about the series is its hard sci-fi stories and the way it either parodies or laughs at them. I know that in next year’s “Mad Kitties” slate, I’m definitely going to write in Rick and Morty for a Hugo Award.

Backblaze Blog | The Life of a Cloud Backup Company: Don’t Build a Billion-Dollar Business. Really.

This post was syndicated from: Backblaze Blog | The Life of a Cloud Backup Company and was written by: Gleb Budman. Original post: at Backblaze Blog | The Life of a Cloud Backup Company

Stack of Money

The better goal? Keep your head down, make a great product, and build a successful business.

Silicon Valley prays at the altar of the billion-dollar business. And for good reason: These companies define industries, give the founders so-called “F-you money,” and provide VCs with a return that can counterbalance a dozen lousy investments. I don’t take issue with any of this.

What I do take issue with is the fact that Silicon Valley derides almost any other kind of business. So, million-dollar businesses are referred to as “lifestyle” companies and even sizeable exits are called “acquihires.”

Here’s the reality: Almost none of the start-ups currently working away in Silicon Valley will become billion-dollar businesses. And the dirty secret? Most of the founders know this.

The Numbers

According to one New York Times report, there are 25 to 40 private tech companies worth over $1 billion. Sounds like a shockingly high number, right?

Well, consider that these businesses were started over a period of five to 10 years, and that more than 50,000 companies get funded each year by some combination of angels and VCs. In rough numbers, then, the chance that a funded company goes on to become a billion-dollar business is 0.005 percent to 0.016 percent. For all practical purposes, that’s zero.
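The odds can be checked directly. A quick sketch of the arithmetic, using the article’s own figures (25 to 40 billion-dollar companies, 50,000 funded companies per year, a five-to-10-year window):

```python
# Odds that a funded startup becomes a billion-dollar company,
# using the figures cited above.
unicorns_low, unicorns_high = 25, 40   # private tech companies worth over $1B
funded_per_year = 50_000               # companies funded annually by angels and VCs
years_low, years_high = 5, 10          # window over which those companies were started

# Worst case: fewest unicorns drawn from the largest pool of funded companies.
odds_low = unicorns_low / (funded_per_year * years_high) * 100
# Best case: most unicorns drawn from the smallest pool.
odds_high = unicorns_high / (funded_per_year * years_low) * 100

print(f"{odds_low:.3f}% to {odds_high:.3f}%")  # 0.005% to 0.016%
```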

Do You Buy Lottery Tickets?

So what happens to the other 99.995 percent of funded start-ups? At least half shut down within a couple years.

The successes? Those are the ones that develop great products, attract adoring customers, generate real revenues, and employ a significant portion of the economy.

They’re companies building profitable businesses, such as social sharing simplifier, Buffer, and online project management service, Wrike. Or companies such as online photo site SmugMug, small business collaboration service 37signals, and website-builder Jimdo, all of which have eschewed funding altogether and bootstrapped their businesses to success.

These companies generate $5 million, $10 million, even $50 million in revenue. They are not Facebook, Dropbox, or Pinterest, but they are successful on every metric.

What’s the Downside of Trying?

You may ask, “Why not try to build a billion-dollar business?” Because there are big downsides to the process:

  • You need a lot of money. Your business will require tons of cash, so you will be fundraising constantly instead of focusing on the product. Thought you’d start a company because you love the business and the product? Half of your time will be spent on finance instead.
  • You’re gambling. By aiming for hypergrowth, you have to place big bets ahead of actual data. Many of these bets will fail, which makes this method inefficient and expensive. One of the gambles is how much funding is required and if you can raise it in time. Misjudge slightly and you’re out of business in a blink.
  • You’ll lose a lot of ownership. Raising a ton of cash will mean diluting your share of the company. If you sell 80 percent of your shares before an exit, your company needs to be worth five times as much for you and your employees to make the same amount. (For a refresher on how this works, read “Understanding How Dilution Affects You At A Startup.”)
  • You’ll lose your culture. Hiring the right people and building a great culture is critical. But hypergrowth means hiring recruiters to hire managers to hire recruiters to hire as many people as possible… and having those people hire new employees only weeks after they start working at your company. This approach reduces the chances of finding the right people and building a healthy culture.
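The dilution arithmetic in the ownership point can be sketched with hypothetical numbers: founders and employees who retain 20 percent after investors take 80 percent need a five-times-larger exit to take home the same amount.

```python
# Hypothetical dilution example: what exit value keeps founder proceeds constant?
exit_value_full = 100_000_000   # assumed exit value at 100% founder ownership
retained_share = 0.20           # stake left after investors take 80%

# Exit value at which a 20% stake pays out as much as 100% of the smaller exit.
required_exit = exit_value_full / retained_share

print(required_exit)                    # 500000000.0
print(required_exit / exit_value_full)  # 5.0 (the exit must be 5x larger)
```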

It Doesn’t Have to be This Way

Most venture capital firms are focused on investing big money in companies that might provide billion-dollar exits, but not all. A few new, innovative funds acknowledge that while some of their investments may grow to this size, a lot of room for success resides below that margin. These firms run their funds like a business rather than a gamble and understand that good ROI can come from multiple mid-size successes.

Hunter Walk, partner at Homebrew, a new seed-stage VC firm, says, “Rather than ask ‘Is this a billion-dollar company?’ in terms of market size, we like to ask ‘Do these founders have the aptitude and attitude to build something big?’ If the answer is affirmative, then they’ll figure out how to create massive amounts of value along the way. Homebrew can return great results to our investors with a mix of exit scenarios, not all of which need to be at the billion-dollar level.”

500 Startups founder Dave McClure takes it further. “We think of our model sort of like Moneyball for venture capital,” he says, “where consistent hitting and getting on base frequently is more important than swinging for the fences every time. We prefer to build a model where ‘singles’ (>$10M exits) and ‘doubles’ (>$100M exits) are still meaningful outcomes, and the occasional home run [>$1 billion exit] is a fantastic upside event, but not required for us to be successful.” This model has paid off for 500 Startups, with a handful of their investments (Rapportive, Crocodoc, Moonfruit, Versly, and ZenCoder) all acquired in the $10- to $100-million range in the three years since the fund started.

EchoVC and Bullpen Capital are two other firms that invest smaller amounts in capital-efficient start-ups, knowing that a $50-million exit provides an excellent return.

So, What Size Business Should You Start?

Start a business because it addresses the problem you want to solve and produces the product you want to build. Figure out how you’ll make your first dollar. Then determine how to make the first million. Eventually, you may grow to a billion-dollar company, but it’s OK if you end up as one of the 99.995 percent. There is a whole lot of room for success between a billion and dead.

(This blog post was first published on the Venture Capital blog.)

The post Don’t Build a Billion-Dollar Business. Really. appeared first on Backblaze Blog | The Life of a Cloud Backup Company.

Schneier on Security: “The Declining Half-Life of Secrets”

This post was syndicated from: Schneier on Security and was written by: schneier. Original post: at Schneier on Security

Several times I’ve mentioned Peter Swire’s concept of “the declining half-life of secrets.” He’s finally written it up:

The nature of secrets is changing. Secrets that would once have survived the 25 or 50 year test of time are more and more prone to leaks. The declining half-life of secrets has implications for the intelligence community and other secretive agencies, as they must now wrestle with new challenges posed by the transformative power of information technology innovation as well as the changing methods and targets of intelligence collection.

Raspberry Pi: Barrel o’ Fun: Arcade machine barrel table

This post was syndicated from: Raspberry Pi and was written by: Clive Beale. Original post: at Raspberry Pi

What do you do if you are given a big old wine barrel? You could make it into a twee garden planter; go over Niagara Falls in it; or cut off the end and make a secret passage like in Scooby Doo. Or you could do the obvious thing and build a Raspberry Pi-powered arcade machine. Matt Shaw did just that. Arcade games, wine and Donkey Kong style barrels—three of our favourite things in one.


The arcade machine in all its barrelly glory

The machine itself has the benefit of a sit-down cocktail cab (you can put your drinks on top) with the standup advantage of being able to jostle your opponent. It’s a nice clean build—deliberately low tech—wired using crimps and block connectors with no soldering. The Raspberry Pi runs the excellent PiPlay, an OS for emulation and gaming.

The other great thing about this project is its scrounginess. Reusing and repurposing makes us happy and this whole project does just that: an unloved 4:3 monitor, free table glass from online classifieds and an old barrel. The main costs were the buttons, joysticks and wiring and the whole build came in at around £90.


The circuit tester is quite brilliant

Although we’ve blogged about Pi-powered arcade machines before (we have two in Pi Towers, we like them, OK? :)) the point is that if you have a Pi lying around then you can make a games machine out of almost anything. For not much money. (And as someone who spent every Saturday feeding their pocket money into arcade machines in seedy arcades in Southport, that’s an amazing thing.)

The post Barrel o’ Fun: Arcade machine barrel table appeared first on Raspberry Pi.

TorrentFreak: Twitter Suspends ‘Pirate’ Site Accounts Over Dubious Claims

This post was syndicated from: TorrentFreak and was written by: Ernesto. Original post: at TorrentFreak

As with many other online services, Twitter is regularly asked by copyright holders to remove tweets that link to pirated material.

If a user decides to post a link to a pirated blockbuster or music track there’s a good chance that it won’t be online for long. In addition, the Twitter user may have his or her account suspended.

The latter happened to the accounts of Spain’s largest torrent site EliteTorrent and the linking site Bajui recently, both following a copyright holder complaint. However, both accounts had refrained from linking to pirated material.

The takedown notices targeting the accounts were sent by the Spanish company Golem Distribución, which owns the distribution rights to the film “Cut Bank.” Both reference tweets in which the title of the film was mentioned alongside the film poster.

In its copyright and DMCA policy Twitter explains that it takes action against “tweets containing links to allegedly infringing materials,” but EliteTorrent and Bajui didn’t post any links, just text and a film poster.

The Elitetorrent tweet

Morphoide, the founder of the Elitewebs Network which includes both EliteTorrent and Bajui, initially thought that the tweets were flagged because of the image. However, the DMCA notice makes no mention of this.

Instead, Golem Distribución accuses the accounts in broken English of distributing the film on their respective websites, not Twitter.

“According to the protocol of the DMCA (Digital Millennium Copyright Act): We have noted that the websites own, is offering free downloads and/or streaming of the work ‘CUT BANK’ belonging to GOLEM DISTRIBUCIÓN,” the notice reads.

The DMCA notice

Morphoide is disappointed with Twitter’s decision and informs us that he specifically chose not to include any links to avoid this kind of trouble.

“There were no links in the tweets. I stopped linking a long time ago because I didn’t want my account to be suspended for doing so,” Morphoide says.

Apparently even tweets without links can be flagged and both sites have had their accounts suspended as a result. This means that thousands of followers are gone, just like that.

The site’s founder says he has lost faith in Twitter and doesn’t intend to appeal the suspension.

Source: TorrentFreak, for the latest info on copyright, file-sharing, torrent sites and ANONYMOUS VPN services.

Krebs on Security: OPM (Mis)Spends $133M on Credit Monitoring

This post was syndicated from: Krebs on Security and was written by: BrianKrebs. Original post: at Krebs on Security

The Office of Personnel Management (OPM) has awarded a $133 million contract to a private firm in an effort to provide credit monitoring services for three years to nearly 22 million people who had their Social Security numbers and other sensitive data stolen by cybercriminals. But perhaps the agency should be offering the option to pay for the cost that victims may incur in “freezing” their credit files, a much more effective way of preventing identity theft.

Not long after news broke that Chinese hackers had stolen SSNs and far more sensitive data on 4.2 million individuals — including background investigations, fingerprint data, addresses, medical and mental-health history, and financial history — OPM announced it had awarded a contract worth more than $20 million to Austin, Texas-based identity protection firm CSID to provide 18 months of protection for those affected.

Soon after the CSID contract was awarded, the OPM acknowledged that the breach actually impacted more than five times as many individuals as originally thought. In response, the OPM has awarded a $133 million contract to Portland, Ore.-based Identity Theft Guard Solutions LLC.

No matter how you slice it, $133 million is a staggering figure for a service that in all likelihood will do little to prevent identity thieves from hijacking the names, good credit and good faith of breach victims. While state-sponsored hackers thought to be responsible for this breach were likely interested in the data for more strategic than financial reasons (recruiting, discovering and/or thwarting spies), the OPM should not force breach victims to pay for true protection.

As I’ve noted in story after story, identity protection services like those offered by CSID, Experian and others do little to block identity theft: The most you can hope for from these services is that they will notify you after crooks have opened a new line of credit in your name. Where these services do excel is in helping with the time-consuming and expensive process of cleaning up your credit report with the major credit reporting agencies.

Many of these third party services also induce people to provide even more information than was leaked in the original breach. For example, CSID offers the ability to “monitor thousands of websites, chat rooms, forums and networks, and alerts you if your personal information is being bought or sold online.” But in order to use this service, users are encouraged to provide bank account and credit card data, passport and medical ID numbers, as well as telephone numbers and driver’s license information.

The only step that will reliably block identity thieves from accessing your credit file — and therefore applying for new loans, credit cards and otherwise ruining your good name — is freezing your credit file with the major credit bureaus. This freeze process — described in detail in the primer, How I Learned to Stop Worrying and Embrace the Security Freeze — can be done online or over the phone. Each bureau will give the consumer a unique personal identification number (PIN) that the consumer will need to provide in the event that he needs to apply for new credit in the future.

But there’s a catch: Depending on which state in which you reside, the freeze can cost $5 to $15 per credit bureau. Also, in some states consumers can be charged a fee to temporarily lift the freeze.
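As a rough illustration of those fees, assuming freezes are placed at all three major bureaus (an assumption; fees and waivers vary by state), the per-bureau range above works out to:

```python
# Total cost of placing a security freeze, assuming three major credit
# bureaus and the $5-$15 per-bureau fee range cited above.
bureaus = 3
fee_low, fee_high = 5, 15

total_low = bureaus * fee_low    # 15
total_high = bureaus * fee_high  # 45

print(f"${total_low} to ${total_high} to freeze all three files")
```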

It is true that most states allow consumers who can show they have been or are likely to be a victim of ID theft to obtain the freezes for free, but this generally requires the consumer to file a police report, obtain and mail a copy of that report along with photocopied identity documents, and submit an affidavit swearing that the victim believes his or her statement about identity theft to be true.

Unsurprisingly, many who seek the comprehensive protection offered by a freeze in the wake of a breach are more interested in securing the freeze than in untangling a huge knot of red tape, and so they pay the freeze fees and get on with their lives.

The OPM’s advisory on this breach includes the same boilerplate advice sent to countless victims in other breaches, including the admonition to monitor one’s financial statements carefully, to obtain a free copy of one’s credit report, and to consider placing a fraud alert with the three major credit bureaus. Nowhere does the agency mention the availability or merits of establishing a security freeze.

If you were affected by the OPM breach, or if you’re interested in learning more about what you can do to protect your identity, please read this story.

TorrentFreak: No Dallas Buyers Club Piracy Appeal in Oz, Company Considering Options

This post was syndicated from: TorrentFreak and was written by: Andy. Original post: at TorrentFreak

In April, Aussie file-sharers let out a collective groan when the company behind the movie Dallas Buyers Club (DBC) won the right to obtain the personal details of almost 4,800 individuals said to have downloaded and shared the movie without permission.

However, things didn’t go to plan. As six ISPs stood by ready to hand over the information, the Federal Court told DBC that before allowing the release of the identities it wanted to see the letters the company intended to send out to alleged infringers. Justice Nye Perram wanted to avoid the so-called ‘speculative invoicing’ practices seen in other countries in recent years.

In a mid-August ruling it became clear that Justice Perram was right to exercise caution.

DBC did intend to demand thousands from alleged infringers, ranging from the cost of a single purchase of the movie and a broad license to distribute, to damages for other content people may have downloaded. All but the cost of the film and some legal costs were disallowed by the Court.

In the end and in order to make it financially unviable for DBC to go against the wishes of the court, the Judge told DBC it would have to pay a AUS$600,000 bond before any subscriber information was released. The company didn’t immediately accept that offer and was given a couple of weeks to appeal. That deadline expired last Friday.

But while those who believe they might have been caught in the dragnet breathe a sigh of relief, the company is warning that it’s not done yet. Speaking with itNews, Michael Bradley of Marque Lawyers, the law firm representing DBC, said that while an appeal was considered risky, other options remain.

“Appeals are always hard, it’s an expensive course, and it’s unpredictable – if one judge has taken a particular view, you’re taking a gamble on whether three other judges are going to take a different view,” Bradley said.

“We think there may be another way of achieving the outcome [we want] without having to go through an appeal.”

DBC believes that by reworking the way it calculates its demands, the Judge will see its claim in a different light. One of the avenues being explored is the notion that pirates can not only be held liable for their own uploading, but also subsequent uploading (carried out by others) that was facilitated by theirs.

In his August ruling, Justice Perram said he had “no particular problem” with that theory but did not consider it in his ruling since DBC provided him with no information. Bradley believes that door remains open for negotiation.

“Whether an individual should be liable for damages based on other activity is not a closed subject,” Bradley says. “So there may be a different way of approaching it and coming up with something [Justice Perram] is more comfortable with.”

The idea that file-sharers should somehow be held liable for the activities of other file-sharers is an extremely complex one that will be hard if not impossible to prove from a technical standpoint.

While it could be shown that file-sharer ‘A’ entered a Dallas Buyers Club movie swarm before file-sharer ‘B’, there is no way of showing that ‘B’ benefited in any way from the activity of ‘A’. DBC has no access to any information that proves information was shared between the two, or even between the two via third parties.

The company could take a broader view of course, and claim that all pirates were equally responsible for the resulting infringement in the swarm. But that amounts to each person being held responsible for their own infringement and the judge has already determined that to be the purchase price of the movie.

While DBC may yet take a second bite at the cherry (it has little to lose having invested so much already in Australian legal action), it wouldn’t come as a surprise if the company decides to make its money elsewhere. There are easy pickings to be made in the United States and the UK, and parts of Scandinavia are now also being viewed as troll-friendly. Tipping money down the drain in Oz might not be the best option.


SANS Internet Storm Center, InfoCON: green: What’s the situation this week for Neutrino and Angler EK?, (Wed, Sep 2nd)

This post was syndicated from: SANS Internet Storm Center, InfoCON: green and was written by: SANS Internet Storm Center, InfoCON: green. Original post: at SANS Internet Storm Center, InfoCON: green


In mid-August 2015, an actor using Angler exploit kit (EK) switched to Neutrino EK [1]. A few days later, we found that actor using Angler again [2]. This week, we’re back to seeing Neutrino EK from the same actor.

Neutrino EK from this actor is sending TeslaCrypt 2.0 as the payload. We also saw another actor use Angler EK to push Bedep during the same timeframe.

Today’s diary looks at two infection chains from Tuesday 2015-09-01, one for Angler EK and another for Neutrino.

First infection chain: Angler EK

This infection chain ended with a Bedep infection. Bedep is known for carrying out advertising fraud and downloading additional malware [3, 4, 5].

A page from the compromised website had malicious script injected before the opening HTML tag. The Bedep payload, along with patterns seen in the injected script, indicates this is a different actor. It’s not the same actor currently using Neutrino EK (or Angler EK).
Shown above: Injected script in a page from the compromised website.

We saw Angler EK on Tuesday 2015-09-01.

Using tcpreplay on the pcap in Security Onion, we find alerts for Angler EK and Bedep. This setup used Suricata with the EmergingThreats (ET) and ET Pro rule sets.

Second infection chain: Neutrino EK

Our second infection chain ended in a TeslaCrypt 2.0 infection. Very little has changed from last week’s TeslaCrypt 2.0 post-infection traffic. This actor’s malicious script was injected into a page from the compromised website, and the injected script follows the same patterns seen last week [2]. Notice another URL before the Neutrino EK landing URL. We saw the same type of URL last week from injected script used by this actor [6], and it is currently hosted on a Ukrainian IP. It doesn’t come up in the traffic, and I’m not sure what purpose it serves, but that URL is another indicator pointing to this particular actor.

We’re still seeing the same type of activity from our TeslaCrypt 2.0 payload this week. The only difference? Last week, the malware included a .bmp image file with decrypt instructions on the desktop. This week, the TeslaCrypt 2.0 sample didn’t include any images.
Shown above: TeslaCrypt 2.0’s decrypt instructions, ripped off from CryptoWall 3.0, still stating “CryptoWall” in the text.

We saw Neutrino EK on Tuesday 2015-09-01. As always, Neutrino uses non-standard ports for its HTTP traffic. This type of traffic is easily blocked on an organization’s firewall, but most home users won’t have that protection.
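The firewall point can be illustrated with a minimal sketch. This is not any particular product’s syntax; the allowlist is a hypothetical organizational policy and the non-standard ports are made-up examples, but it shows the idea of flagging web traffic on unusual ports:

```python
# Minimal sketch of an egress policy check: permit outbound web traffic only
# on standard ports. The allowlist and example ports are hypothetical.
ALLOWED_WEB_PORTS = {80, 443, 8080}

def should_block(dest_port: int) -> bool:
    """Return True if outbound web traffic to this port should be blocked."""
    return dest_port not in ALLOWED_WEB_PORTS

for port in (80, 443, 3712, 7745):
    print(port, "blocked" if should_block(port) else "allowed")
```

A real deployment would enforce a rule like this at the network edge rather than in application code, which is exactly the protection most home users lack.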

Using tcpreplay on the pcap in Security Onion, we find alerts for Neutrino EK and AlphaCrypt. This setup used Suricata with the EmergingThreats (ET) and ET Pro rule sets. The TeslaCrypt 2.0 traffic triggered ET alerts for AlphaCrypt.

Final words

We can’t say how long this actor will stick with Neutrino EK. As I mentioned last time, the situation can quickly change. And despite this actor’s use of Neutrino EK, we’re still seeing other actors use Angler EK. As always, we’ll continue to keep an eye on the cyber landscape. And as time permits, we’ll let you know of any further changes we find from the various threat actors.

Traffic and malware for this diary are listed below:

  • A pcap file with the infection traffic for Angler EK from Tuesday 2015-09-01 is available here. (2.08 MB)
  • A pcap file with the infection traffic for Neutrino EK from Tuesday 2015-09-01 is available here. (594 KB)
  • A zip archive of the malware and other artifacts is available here. (692 KB)

The zip archive is password-protected with the standard password. If you don’t know it, email and ask.

Brad Duncan
Security Researcher at Rackspace
Blog: – Twitter: @malware_traffic



(c) SANS Internet Storm Center. Creative Commons Attribution-Noncommercial 3.0 United States License.

Backblaze Blog | The Life of a Cloud Backup Company: A Behind-the-Scenes Look at Designing the Next Storage Pod

This post was syndicated from: Backblaze Blog | The Life of a Cloud Backup Company and was written by: Andy Klein. Original post: at Backblaze Blog | The Life of a Cloud Backup Company

Backblaze Labs Storage Pod Design
Tim holds up a drive beam and shows the design to Matt, who is attending the meeting via a telepresence robot. They are discussing the results of a prototype Storage Pod developed using the Agile development methodology applied to hardware design. The Storage Pod they are discussing was the result of the fifth sprint (iteration) of prototyping. Based on the evaluation by the Scrum (project) team, a prototype pod is either sent to the datacenter for more testing and, hopefully, deployment, or it is decided that more work is required and the pod is salvaged for usable parts. Learning from failure, as well as from success, is a key part of the Agile methodology.

Prototyping Storage Pod Design

Backblaze Labs uses the Scrum framework within the Agile methodology to manage the Storage Pod prototyping process. To start, we form a Scrum team to create a product backlog containing the tasks we want to accomplish with the project: redesign drive grids, use tool-less lids, etc. We then divide the tasks into four-week sprints, each with the goal of designing, manufacturing, and assembling one to five production-worthy Storage Pods.

Let’s take a look at some of the tasks on the product backlog that the Scrum team is working through as they develop the next generation Storage Pod.

Drive Grids Become Drive Beams

Each Storage Pod holds 45 hard drives. Previously we used drive grids to hold the drives in place so that they connect to the backplanes. Here’s how drive grids became drive brackets, which in turn became drive beams.

Current Storage Pod 4.5 Design: Each pod has an upper and lower drive grid. Hard drives are placed in each “bay” of the grid and then a drive cover presses lightly on each row of 15 hard drives to secure them in place.
Storage Pod Drive Grids
Storage Pod Scrum – Sprint 1 Design: This drive bracket design allowed drives to be easily installed and removed, but didn’t seem to hold the drives well. A good first effort, but a better design was needed.
Drive Guides Sprint 2
Storage Pod Scrum – Sprint 3 Design: Sprint 3 introduced a new drive beam design. On the plus side, the drives were very secure. On the minus side, the beams sat too high in the chassis, so it was nearly impossible to remove a drive with your fingers.
Drive Beams Sprint 3
Storage Pod Scrum – Sprint 5 Design: The height of the drive beams was shortened in Sprint 5, making them low enough so that the hard drives could be removed by using your fingers.
Drive Beams Sprint 5
Operations Impacts the Design
Having Operations personnel on the Scrum team helped get the design of the drive beams right. For the first two sprints, the hard drives installed in the prototypes for testing purposes were small capacity drives. These drives were ¾ inch wide. Once installed, the gap between the drives provided enough space to use your fingers to remove the drives. Matt from Operations suggested that we use large capacity hard drives, like we do in production, to test the prototypes. These drives are one inch wide. When we did this in Sprint 3, we discovered there was no longer enough room between the drives for our fingers. This led to lowering the height of the drive beams in Sprint 5.

Drive Guides

In conjunction with the introduction of the new drive beam design from Sprint 3, we needed a way to keep the drives aligned and immobile within the beams. Drive guides were the proposed solution. These guides are screwed or inserted into the four mounting holes on each drive. The drive then slides into cutouts on the drive beam, securing the drive in place. There were three different drive guide designs:

Sprint 2 – The first design for drive guides was to use screws in the four mounting holes on each drive. The screw head was the right size to slide into the drive beam. Inserting 180 screws into the 45 drives (45 x 4) during assembly was “time expensive” so other solutions were examined.
Sprint 3 – The second design for drive guides was plastic “buttons” that fit into each of the four drive-mounting screw holes. This was a better solution, but when the height of the drive beams was lowered in Sprint 5, the top two drive guide buttons were now above the drive beams with nothing to attach to.
Drive Guide Buttons Sprint 3
Sprint 5 – Full-length drive guides replaced the buttons, with one guide attached to each side of the drive. The full-length guides allowed the drive to be firmly seated even though the drive beams were made shorter. They had the added benefit of taking much less time to install on the assembly line.
Drive Rails
Time Versus Money: 3D Printing
One lesson learned as we proceeded through the sprints was that we could pay extra to get the parts needed to complete the sprint on time. In other words, we could trade money for time. One example is the drive guides. About halfway through Sprint 5 we created the design for the new full-length drive guides. We couldn’t procure any in time to finish the sprint, so we made them. In fact, we printed them using a 3D printer. At $3 each to print ($270 for all 45 drives), the cost is too high for production, but for prototyping we spent the money. Later, a plastics manufacturer will make these for $0.30 each in a larger quantity – trading time to save money during production.
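The time-versus-money trade-off above is easy to put into numbers. Here is a minimal sketch using the figures from this sidebar (45 drives, two full-length guides per drive, $3.00 per printed guide versus $0.30 per molded guide); the constants and function name are ours, for illustration only:

```python
# Cost of the drive guides for one Storage Pod, using the figures above:
# 45 drives, 2 full-length guides per drive, $3.00 per 3D-printed guide
# vs. $0.30 per injection-molded guide.
DRIVES_PER_POD = 45
GUIDES_PER_DRIVE = 2

def guide_cost(unit_price):
    """Total drive-guide cost for a single pod at a given unit price."""
    return DRIVES_PER_POD * GUIDES_PER_DRIVE * unit_price

printed = guide_cost(3.00)   # prototyping: fast, but expensive per unit
molded = guide_cost(0.30)    # production: cheap per unit, longer lead time

print(f"3D-printed guides per pod: ${printed:.2f}")       # $270.00
print(f"Injection-molded guides per pod: ${molded:.2f}")  # $27.00
```

The $270 figure matches the post; at production volumes the same parts cost an order of magnitude less, which is exactly the money-for-time trade being described.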

Drive Lids

Drive lids were introduced in Storage Pod 3.0 to place a small amount of pressure on the top of the hard drives, minimizing vibrations as they sat in the drive bay. The design worked well, but the locking mechanism was clumsy and the materials and production costs were a bit high.

Sprint 2 – We wanted to make it easier to remove the drive lids yet continue to apply the lid pressure evenly across the drives. This design looked nice, but required six screws to be removed in order to access a row of drives. In addition, the screws could cross-thread and leave metal shards in the chassis. A better solution was needed.
Screw Down Drive Lids
Sprint 4 – The candidate design was tool-less; there were no screws. This drive lid flexed slightly when pressed so as to fit into cutouts on the drive beams. The Scrum team decided to test multiple designs and metals to see if any performed better and have the results at the next sprint.
Drive Lids Sprint 4
Sprint 5 – The team reviewed the different drive lids and selected three different lids to move forward. Nine copies of each drive lid were made, and three Storage Pods were given to operations for testing to see how each of the three drive lids performed.
Drive Lids Sprint 5

Backplane Tray

In Storage Pod 4.5, 3.0, and prior versions, the backplanes were screwed to standoffs attached to the chassis body itself. Assembly personnel on the Scrum team asked if backplanes and all their cables, etc. could be simplified to make assembly easier.

Sprint 5 – The proposed solution was to create a backplane tray on which all the wiring could be installed and then the backplanes attached. The resulting backplane module could be pre-built and tested as a unit and then installed in the pod in minutes during the assembly process.
Backplane Tray Sprint 5
Sprint 7 – The design of the backplane module is still being finalized. For example, variations on the location of wire tie downs and the wiring routes are being designed and tested.
Backplane Tray Sprint 7
Metal Benders Matter
One thing that may not be obvious is that the actual Storage Pod chassis, the metal portion, is fabricated from scratch each sprint. Finding a shop that can meet the one-week turnaround time for metal manufacturing required by the sprint timeline is time-consuming, but well worth it. In addition, even though these shops are not directly part of the Scrum team, they can provide input into the process. For example, one of the shops told us that if we made a small design change to the drive beams they could fabricate them out of a single piece of metal and lower the unit production cost by up to half.

The Scrum Team

Our Storage Pod Scrum team starts with a Scrum Master (Ariel/Backblaze), a designer (Petr/Evolve), and a Product Owner (Tim/Backblaze). Also included on the team are representatives from assembly (Edgar/Evolve and Greg/Evolve), operations (Matt/Backblaze), procurement (Amanda/Evolve), and an executive champion (Rich/Evolve). At the end of each four-week sprint, each person on the team helps evaluate the prototype created during that sprint. The idea is simple: identify and attack issues during the design phase. A good example of this thinking is the way we are changing the Storage Pod lids.

Current Storage Pods require the removal of twelve screws to access the drive bay and another eight screws to access the CPU bay. The screws are small and easy to lose, and they take time to remove and install. Operations suggested tool-less lids be part of the Scrum product backlog and they were introduced in Sprint 3. A couple of iterations (Sprints) later, we have CPU bay and drive bay lids that slide and latch into place and are secured with just two thumb screws each that stay attached to the lid.

Tool-less Lids Sprint 3

Going From Prototype to Production

One of the supposed drawbacks of using Scrum is that it is hard to know when you are done. Our goal for each sprint is to produce a production-worthy Storage Pod. In reality, not every sprint achieves this, since each time we introduce a major change to a component (drive beams, for example) things break. This is just like software development, where new modules often destabilize the overall system until the bugs are worked out. The key is for the Scrum team to capture any feedback and categorize it as: 1) must fix, 2) new idea to save for later, or 3) never mind.

Here’s the high-level process the Scrum team uses to move the Storage Pod design from prototype to production.

  1. Manufacture one to five Storage Pod chassis per sprint until the original product backlog tasks are addressed and, based on input from manufacturing, assembly, and operations, the group is confident in the new design.
  2. The Scrum team is responsible for the following deliverables:
    • Final product specifications
    • Bill of materials
    • Final design files
    • Final draft of the work instructions (Build Book)
  3. Negotiate production pricing to manufacture the chassis. This will be done based on a minimum six-month production run.
  4. Negotiate production pricing to acquire the components used to assemble the Storage Pod: power supplies, motherboards, hard drives, etc. This will be done based on a minimum six-month production run.

Once all of these tasks are accomplished the design can be placed into production.

The Storage Pod Scrum is intended to continue even after we have placed Storage Pods into production. We start with a product backlog, build and test a number of prototypes, and then go into production with a feature-complete, tested unit. At that point, any new ideas generated along the way are added to the product backlog, and the sprints for the next Storage Pod production unit begin.

Scrum Process Diagram

Will the ongoing Storage Pod Scrum last forever? We started the Scrum with the intention of continuing as long as one or more of the following are being accomplished:

  • We are improving product reliability
  • We are reducing the manufacturing cost
  • We are reducing the assembly cost
  • We are reducing the maintenance cost
  • We are reducing the operating cost

The Next Storage Pod – Not Yet

Where are we in the process of creating the next Storage Pod version to open source? We’re not done yet. We have a handful of one-off pods in testing right now and we’ve worked through most of the major features in the product backlog, so we’re close.

In the meantime, if you need a Storage Pod or you’re just curious, you can check out our Storage Pod page. There you can learn all about Storage Pods, including their history, the Storage Pod ecosystem, how to build or buy your very own Storage Pod, and much more.

The post A Behind-the-Scenes Look at Designing the Next Storage Pod appeared first on Backblaze Blog | The Life of a Cloud Backup Company.

TorrentFreak: US Govt. Denies Responsibility for Megaupload’s Servers

This post was syndicated from: TorrentFreak and was written by: Ernesto. Original post: at TorrentFreak

In a few months’ time it will be four years since Megaupload’s servers were raided by U.S. authorities. Since then, virtually no progress has been made in the criminal case.

Kim Dotcom and his Megaupload colleagues are still awaiting their extradition hearing in New Zealand and have yet to formally appear in a U.S. court.

Meanwhile, more than 1,000 Megaupload servers from Carpathia Hosting remain in storage in Virginia, some of which contain crucial evidence as well as valuable files from users. The question is, for how long.

Last month QTS, the company that now owns the servers after acquiring Carpathia, asked the court if it could get rid of the data, which is costing it thousands of dollars per month in storage fees.

This prompted a response from a former user who wants to preserve his data, as well as from Megaupload, which doesn’t want any of the evidence to be destroyed.

Megaupload’s legal team suggested that the U.S. Government should buy the servers and take care of the hosting costs. However, in a new filing (pdf) just submitted to the District Court the authorities deny all responsibility.

United States Attorney Dana Boente explains that the Government has already backed up the data it needs and no longer has a claim on the servers.

“…the government has already completed its acquisition of data from the Carpathia Servers authorized by the warrant, which the defendants will be entitled to during discovery,” Boente writes.

“As such, there is no basis for the Court to order the government to assume possession of the Carpathia Servers or reimburse Carpathia for ‘allocated costs’ related to their continued maintenance.”

The Government says it handed over its claim on the servers in early 2012 after the search warrant was executed, and the hosting company was informed at the time. This means that the U.S. cannot and will not determine the fate of the stored servers.

The authorities say they are willing to allow Megaupload and the other defendants to look through the data that was copied, but only after they are arraigned.

In any case, the U.S. denies any responsibility for the Megaupload servers and asks the court to keep it this way.

“…the United States continues to request that the Court deny any effort to impose unprecedented financial or supervisory obligations on the United States related to the Carpathia Servers,” the U.S. Attorney concludes.

Previously the U.S. and MPAA blocked Megaupload’s plans to buy the servers, which is one of the main reasons that there is still no solution after all those years.

The MPAA also renewed its previous position last week (pdf). The Hollywood group says it doesn’t mind if users are reunited with their files, as long as Megaupload doesn’t get hold of them.

“The MPAA members’ principal concern is assuring that adequate steps are taken [to] prevent the MPAA members’ content on the Mega Servers in Carpathia’s possession from falling back into the hands of Megaupload or otherwise entering the stream of commerce,” they write.

The above means that none of the parties is willing to move forward. The servers are still trapped in between the various parties and it appears that only District Court Judge Liam O’Grady can change this.

It appears to be a choice between saving the 25 petabytes of data and wiping all the servers clean.

Source: TorrentFreak, for the latest info on copyright, file-sharing, torrent sites and ANONYMOUS VPN services.

Krebs on Security: Like Kaspersky, Russian Antivirus Firm Dr.Web Tested Rivals

This post was syndicated from: Krebs on Security and was written by: BrianKrebs. Original post: at Krebs on Security

A recent Reuters story accusing Russian security firm Kaspersky Lab of faking malware to harm rivals prompted denials from the company’s eponymous chief executive — Eugene Kaspersky — who called the story “complete BS” and noted that his firm was a victim of such activity.  But according to interviews with the CEO of Dr.Web — Kaspersky’s main competitor in Russia — both companies experimented with ways to expose antivirus vendors who blindly accepted malware intelligence shared by rival firms.

quarantineThe Reuters piece cited anonymous, former Kaspersky employees who said the company assigned staff to reverse-engineer competitors’ virus detection software to figure out how to fool those products into flagging good files as malicious. Such errors, known in the industry as “false positives,” can be quite costly, disruptive and embarrassing for antivirus vendors and their customers.

Reuters cited an experiment that Kaspersky first publicized in 2010, in which a German computer magazine created ten harmless files and told the antivirus scanning service VirusTotal that Kaspersky detected them as malicious (VirusTotal aggregates data on suspicious files and shares them with security companies). The story said the campaign targeted antivirus products sold or given away by AVG, Avast and Microsoft.

“Within a week and a half, all 10 files were declared dangerous by as many as 14 security companies that had blindly followed Kaspersky’s lead, according to a media presentation given by senior Kaspersky analyst Magnus Kalkuhl in Moscow in January 2010,” wrote Reuters’ Joe Menn. “When Kaspersky’s complaints did not lead to significant change, the former employees said, it stepped up the sabotage.”

Eugene Kaspersky posted a lengthy denial of the story on his personal blog, calling the story a “conflation of a number of facts with a generous amount of pure fiction.”  But according to Dr.Web CEO Boris Sharov, Kaspersky was not alone in probing which antivirus firms were merely aping the technology of competitors instead of developing their own.

Dr.Web CEO Boris Sharov.

In an interview with KrebsOnSecurity, Sharov said Dr.Web conducted similar analyses and reached similar conclusions, although he said the company never mislabeled samples submitted to testing labs.

“We did the same kind of thing,” Sharov said. “We went to the [antivirus] testing laboratories and said, ‘We are sending you clean files, but a little bit modified. Could you please check what your system says about that?’”

Sharov said the testing lab came back very quickly with an answer: Seven antivirus products detected the clean files as malicious.

“At this point, we were very confused, because our explanation was very clear: ‘We are sending you clean files. A little bit modified, but clean, harmless files,’” Sharov recalled of an experiment the company said it conducted over three years ago. “We then observed the evolution of these two files, and a week later, half of the antivirus products were flagging them as bad. But we never flagged these ourselves as bad.”

Sharov said the experiments by both Dr.Web and Kaspersky — although conducted differently and independently — were attempts to expose the reality that many antivirus products are simply following the leaders.

“The security industry in that case becomes bullshit, because people believe in those products and use them in their corporate environments without understanding that those products are just following others,” Sharov said. “It’s unacceptable.”

According to Sharov, a good antivirus product actually consists of two products: one that is sold to customers in a box and/or online, and a second component that customers will never see — the back-end internal infrastructure of people, machines and databases that are constantly scanning incoming suspicious files and testing the overall product for quality assurance. Such systems, he said, include exhaustive “clean file” tests, which scan incoming samples to make sure they are not simply known, good files. Programs that have never been seen before are nearly always given more scrutiny, but they also are a frequent source of false positives.
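A “clean file” test of the kind Sharov describes amounts to a known-good lookup before deeper analysis. Here is a toy sketch; the file contents, corpus, and function names are invented for illustration, and real back-ends compare samples against enormous corpora of known software:

```python
import hashlib

def sha256(data):
    """Hex digest used as the file's identity."""
    return hashlib.sha256(data).hexdigest()

# Corpus of known-good file hashes; the contents here are made up.
KNOWN_GOOD = {sha256(b"well-known utility v1.0"),
              sha256(b"popular text editor v2.3")}

def needs_scrutiny(sample):
    """Files absent from the clean corpus get deeper analysis."""
    return sha256(sample) not in KNOWN_GOOD

print(needs_scrutiny(b"well-known utility v1.0"))   # False: known clean
print(needs_scrutiny(b"never-seen-before binary"))  # True: new, so scrutinize
```

A vendor skipping this lookup, and instead trusting other vendors' verdicts, is exactly the behavior the Dr.Web and Kaspersky experiments set out to expose.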

“We have sometimes false positives because we are unable to gather all the clean files in the world,” Sharov said. “We know that we can get some part of them, but pretty sure we never get 100 percent. Anyway, this second part of the [antivirus product] should be much more powerful, to make sure what you release to public is not harmful or dangerous.”

Sharov said some antivirus firms (he declined to name which) have traditionally not invested in all of this technology and manpower, but have nevertheless gained top market share.

“For me it’s not clear that [Kaspersky Lab] would have deliberately attacked other antivirus firm, because you can’t attack a company in this way if they don’t have the infrastructure behind it,” Sharov said.

“If you carry out your own analysis of each file you will never be fooled like this,” Sharov said of the testing Dr.Web and Kaspersky conducted. “Some products prefer just to look at what others are doing, and they are quite successful in the market, much more successful than we are. We are not mad about it, but when you think how much harm could bring to customers, it’s quite bad really.”

Sharov said he questions the timing of the anonymous sources who contributed to the Reuters report, which comes amid increasingly rocky relations between the United States and Russia. Indeed, Reuters reported today the United States is now considering economic sanctions against both Russian and Chinese individuals for cyber attacks against U.S. commercial targets.

Missing from the Reuters piece that started this hubbub is the back story to what Dr.Web and Kaspersky both say was the impetus for their experiments: a long-running debate in the antivirus industry over the accuracy, methodology and real-world relevance of staged antivirus comparison tests run by third-party testing labs.

Such tests often show many products block 99 percent of all known threats, but critics of this kind of testing say it doesn’t measure real-world attacks, and in any case doesn’t reflect the reality that far too much malware is getting through antivirus defenses these days. For an example of this controversy, check out my piece from 2010, Anti-Virus Is a Poor Substitute for Common Sense.

How does all this affect the end user? My takeaway from that 2010 story hasn’t changed one bit: If you’re depending on an anti-virus product to save you from an ill-advised decision — such as opening an attachment in an e-mail you weren’t expecting, installing random video players from third-party sites, or downloading executable files from peer-to-peer file sharing networks — you’re playing a dangerous game of Russian Roulette with your computer.

Antivirus remains a useful — if somewhat antiquated and ineffective — approach to security.  Security is all about layers, and not depending on any one technology or approach to detect or save you from the latest threats. The most important layer in that security defense? You! Most threats succeed because they take advantage of human weaknesses (laziness, apathy, ignorance, etc.), and less because of their sophistication. So, take a few minutes to browse Krebs’s 3 Rules for Online Safety, and my Tools for a Safer PC primer.

Further reading: Antivirus is Dead: Long Live Antivirus!

SANS Internet Storm Center, InfoCON: green: How to hack, (Tue, Sep 1st)

This post was syndicated from: SANS Internet Storm Center, InfoCON: green and was written by: SANS Internet Storm Center, InfoCON: green. Original post: at SANS Internet Storm Center, InfoCON: green

Agreed, this information is not overly useful. These hacks are basically on the opposite end of the threat scale from the over-hyped Advanced Persistent Threat (APT). Let’s call it the Basic Sporadic Annoyance (BSA), just to come up with a new acronym :).

The BSAs still tell us though what average wannabe hackers seem to be interested in breaking into, namely: websites, online games, wifi and phones. Cars, pacemakers, fridges and power plants are not on the list, suggesting that these targets are apparently not yet popular enough.

Being fully aware of the filter bubble, we had several people try the same search, and they largely got the same result. Looks like Facebook really IS currently the main wannabe hacker target. But Facebook doesn’t need to worry all that much, because if you just type “How to h”, the suggestions reveal that other problems are even more prominent than hacking Facebook.

If your results (of the “how to hack” query, not the latter one) differ significantly, please share in the comments below. Updated to add: Thanks, we have enough samples now :)

(c) SANS Internet Storm Center. Creative Commons Attribution-Noncommercial 3.0 United States License.

lcamtuf's blog: Understanding the process of finding serious vulns

This post was syndicated from: lcamtuf's blog and was written by: Michal Zalewski. Original post: at lcamtuf's blog

Our industry tends to glamorize vulnerability research, with a growing number of bug reports accompanied by flashy conference presentations, media kits, and exclusive interviews. But for all that grandeur, the public understands relatively little about the effort that goes into identifying and troubleshooting the hundreds of serious vulnerabilities that crop up every year in the software we all depend on. It certainly does not help that many of the commercial security testing products are promoted with truly bombastic claims – and that some of the most vocal security researchers enjoy the image of savant hackers, seldom talking about the processes and toolkits they depend on to get stuff done.

I figured it may make sense to change this. Several weeks ago, I started trawling through the list of public CVE assignments, and then manually compiling a list of genuine, high-impact flaws in commonly used software. I tried to follow three basic principles:

  • For pragmatic reasons, I focused on problems where the nature of the vulnerability and the identity of the researcher is easy to ascertain. For this reason, I ended up rejecting entries such as CVE-2015-2132 or CVE-2015-3799.

  • I focused on widespread software – e.g., browsers, operating systems, network services – skipping many categories of niche enterprise products, WordPress add-ons, and so on. Good examples of rejected entries in this category include CVE-2015-5406 and CVE-2015-5681.

  • I skipped issues that appeared to be low impact, or where the credibility of the report seemed unclear. One example of a rejected submission is CVE-2015-4173.

To ensure that the data isn’t skewed toward more vulnerable software, I tried to focus on research efforts, rather than on individual bugs; where a single reporter was credited for multiple closely related vulnerabilities in the same product within a narrow timeframe, I would use only one sample from the entire series of bugs.

For the qualifying CVE entries, I started sending out anonymous surveys to the researchers who reported the underlying issues. The surveys open with a discussion of the basic method employed to find the bug:

  How did you find this issue?

  ( ) Manual bug hunting
  ( ) Automated vulnerability discovery
  ( ) Lucky accident while doing unrelated work

If “manual bug hunting” is selected, several additional options appear:

  ( ) I was reviewing the source code to check for flaws.
  ( ) I studied the binary using a disassembler, decompiler, or a tracing tool.
  ( ) I was doing black-box experimentation to see how the program behaves.
  ( ) I simply noticed that this bug is being exploited in the wild.
  ( ) I did something else: ____________________

Selecting “automated discovery” results in a different set of choices:

  ( ) I used a fuzzer.
  ( ) I ran a simple vulnerability scanner (e.g., Nessus).
  ( ) I used a source code analyzer (static analysis).
  ( ) I relied on symbolic or concolic execution.
  ( ) I did something else: ____________________

Researchers who relied on automated tools are also asked about the origins of the tool and the computing resources used:

  Name of tool used (optional): ____________________

  Where does this tool come from?

  ( ) I created it just for this project.
  ( ) It's an existing but non-public utility.
  ( ) It's a publicly available framework.

  At what scale did you perform the experiments?

  ( ) I used 16 CPU cores or less.
  ( ) I employed more than 16 cores.

Regardless of the underlying method, the survey also asks every participant about the use of memory diagnostic tools:

  Did you use any additional, automatic error-catching tools - like ASAN
  or Valgrind - to investigate this issue?

  ( ) Yes. ( ) Nope!

…and about the lengths to which the reporter went to demonstrate the bug:

  How far did you go to demonstrate the impact of the issue?

  ( ) I just pointed out the problematic code or functionality.
  ( ) I submitted a basic proof-of-concept (say, a crashing test case).
  ( ) I created a fully-fledged, working exploit.

It also touches on the communications with the vendor:

  Did you coordinate the disclosure with the vendor of the affected
  software?

  ( ) Yes. ( ) No.

  How long have you waited before having the issue disclosed to the
  public?

  ( ) I disclosed right away. ( ) Less than a week. ( ) 1-4 weeks.
  ( ) 1-3 months. ( ) 4-6 months. ( ) More than 6 months.

  In the end, did the vendor address the issue as quickly as you would
  have hoped?

  ( ) Yes. ( ) Nope.

…and the channel used to disclose the bug – an area where we have seen some stark changes over the past five years:

  How did you disclose it? Select all options that apply:

  [ ] I made a blog post about the bug.
  [ ] I posted to a security mailing list (e.g., BUGTRAQ).
  [ ] I shared the finding on a web-based discussion forum.
  [ ] I announced it at a security conference.
  [ ] I shared it on Twitter or other social media.
  [ ] We made a press kit or reached out to a journalist.
  [ ] Vendor released an advisory.

The survey ends with a question about the motivation and the overall amount of effort that went into this work:

  What motivated you to look for this bug?

  ( ) It's just a hobby project.
  ( ) I received a scientific grant.
  ( ) I wanted to participate in a bounty program.
  ( ) I was doing contract work.
  ( ) It's a part of my full-time job.

  How much effort did you end up putting into this project?

  ( ) Just a couple of hours.
  ( ) Several days.
  ( ) Several weeks or more.

So far, the response rate for the survey is approximately 80%; because I only started in August, I currently don’t have enough answers to draw particularly detailed conclusions from the data set – this should change over the next couple of months. Still, I’m already seeing several well-defined if preliminary trends:

  • The use of fuzzers is ubiquitous (incidentally, of named projects, afl-fuzz leads the fray so far); the use of other automated tools, such as static analysis frameworks or concolic execution, appears to be unheard of – despite the undivided attention that such methods receive in academic settings.

  • Memory diagnostic tools, such as ASAN and Valgrind, are extremely popular – and are an untold success story of vulnerability research.

  • Most public vulnerability research appears to be done by people who work on it full-time, employed by vendors; hobby work and bug bounties follow closely.

  • Only a small minority of serious vulnerabilities appear to be disclosed anywhere outside a vendor advisory, making it extremely dangerous to rely on press coverage (or any other casual source) for evaluating personal risk.
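Trends like these fall out of straightforward tallies over the survey answers. A minimal sketch, with invented response records (the field names and values below are illustrative, not the actual survey data):

```python
from collections import Counter

# Invented response records; the survey's real answer format is not public.
responses = [
    {"method": "fuzzer", "memory_tools": True,  "motivation": "full-time job"},
    {"method": "fuzzer", "memory_tools": True,  "motivation": "hobby"},
    {"method": "manual source review", "memory_tools": False, "motivation": "bounty"},
    {"method": "fuzzer", "memory_tools": True,  "motivation": "full-time job"},
]

# Tally discovery methods and the share of reporters using ASAN/Valgrind-style tools.
methods = Counter(r["method"] for r in responses)
diag_rate = sum(r["memory_tools"] for r in responses) / len(responses)

print(methods.most_common(1))                      # [('fuzzer', 3)]
print(f"{diag_rate:.0%} used memory diagnostics")  # 75% used memory diagnostics
```

With a few hundred responses, the same counters distinguish "ubiquitous" from "unheard of" at a glance, which is all the preliminary trends above claim.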

Of course, some security work happens out of public view; for example, some enterprises have well-established and meaningful security assurance programs that likely prevent hundreds of security bugs from ever shipping in the reviewed code. Since it is difficult to collect comprehensive and unbiased data about such programs, there is always some speculation involved when discussing the similarities and differences between this work and public security research.

Well, that’s it! Watch this space for updates – and let me know if there’s anything you’d change or add to the questionnaire.

SANS Internet Storm Center, InfoCON: green: Encryption of "data at rest" in servers, (Tue, Sep 1st)

This post was syndicated from: SANS Internet Storm Center, InfoCON: green and was written by: SANS Internet Storm Center, InfoCON: green. Original post: at SANS Internet Storm Center, InfoCON: green

Over in the SANS ISC discussion forum, a couple of readers have started a good discussion about which threats we actually aim to mitigate if we follow the HIPAA/HITECH (and other) recommendations to encrypt data at rest that is stored on a server in a data center. Yes, it helps against outright theft of the physical server, but – as many recent prominent data breaches suggest – it doesn’t help all that much if the attacker comes in over the network and has acquired admin privileges, or if the attack exploits a SQL injection vulnerability in a web application.

There are types of encryption (mainly field or file level) that also can help against these eventualities, but they are usually more complicated and expensive, and not often applied. If you are interested in data at rest encryption for servers, please join the mentioned discussion in the Forum.

(c) SANS Internet Storm Center. Creative Commons Attribution-Noncommercial 3.0 United States License.

TorrentFreak: Torrenting “Manny” Pirate Must Pay $30,000 in Damages

This post was syndicated from: TorrentFreak and was written by: Ernesto. Original post: at TorrentFreak

While it goes relatively underreported, many U.S. district courts are still swamped with lawsuits against alleged film pirates.

Among the newcomers this year are the makers of the sports documentary Manny. Over the past few months “Manny Film” has filed 215 lawsuits across several districts.

Most cases are settled out of court, presumably for a few thousand dollars. However, if the alleged downloader fails to respond, the damages can be much worse.

District Court Judge Darrin Gayles recently issued a default judgment (pdf) against Micheal Chang, a Florida man who stood accused of pirating a copy of “Manny.”

Since Chang didn’t respond to the allegations, the Judge agreed with the filmmakers and ordered Chang to pay $30,000 in statutory damages. In addition he must pay attorneys’ fees and costs, bringing the total to $31,657.

While the damages are a heavy burden to bear for most, the filmmakers say that the defendant got off lightly. Manny Film argued that Chang was guilty of willful copyright infringement for which the damages can go up to $150,000 per work.

“Here, despite the fact of Defendant’s willful infringement, Plaintiff only seeks an award of $30,000 per work in statutory damages,” Manny Film wrote.

According to the filmmakers Chang’s Internet connection was used to pirate over 2,400 files via BitTorrent in recent years, which they say proves that he willfully pirated their movie.

“…for nearly two years, Defendant infringed over 2,400 third-party works on BitTorrent. In addition to Plaintiff’s film, Defendant downloaded scores of Hollywood films, including works such as Avatar and The Wolf of Wall Street.”

It is unlikely that the court would have issued the same damages award if Chang had defended himself. Even in cases without representation several judges have shown reluctance to issue such severe punishments.

For example, last year U.S. District Court Judge Thomas Rice ruled that $30,000 in damages per shared film is excessive, referring to the Eighth Amendment which prohibits excessive fines as well as cruel and unusual punishments.

“This Court finds an award of $30,000 for each defendant would be an excessive punishment considering the seriousness of each Defendant’s conduct and the sum of money at issue,” Judge Rice wrote.

On the other hand, it could have been even worse. The damage award pales in comparison to some other default judgments. In Illinois three men had to pay $1.5 million each for sharing seven to ten movies using BitTorrent.

Controversy aside, Manny Film believes it is certainly entitled to the $30,000. The company has requested the same amount in other cases that are still pending, arguing that the actual lost revenue is even higher.

Source: TorrentFreak, for the latest info on copyright, file-sharing, torrent sites and ANONYMOUS VPN services.

Schneier on Security: Using Samsung’s Internet-Enabled Refrigerator for Man-in-the-Middle Attacks

This post was syndicated from: Schneier on Security and was written by: schneier. Original post: at Schneier on Security

This is interesting research:

Whilst the fridge implements SSL, it FAILS to validate SSL certificates, thereby enabling man-in-the-middle attacks against most connections. This includes those made to Google’s servers to download Gmail calendar information for the on-screen display.

So, MITM the victim’s fridge from next door, or on the road outside and you can potentially steal their Google credentials.

The notable exception to the rule above is when the terminal connects to the update server — we were able to isolate the URL which is the same used by TVs, etc. We generated a set of certificates with the exact same contents as those on the real website (fake server cert + fake CA signing cert) in the hope that the validation was weak but it failed.

The terminal must have a copy of the CA and is making sure that the server’s cert is signed against that one. We can’t hack this without access to the file system where we could replace the CA it is validating against. Long story short we couldn’t intercept communications between the fridge terminal and the update server.

When I think about the security implications of the Internet of things, this is one of my primary worries. As we connect things to each other, vulnerabilities on one of them affect the security of another. And because so many of the things we connect to the Internet will be poorly designed, and low cost, there will be lots of vulnerabilities in them. Expect a lot more of this kind of thing as we move forward.
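The fridge's failure mode is easy to reproduce, and to avoid, in ordinary client code. Here is a hedged Python sketch of the two behaviours using only the standard ssl module (the fridge itself is of course not written in Python; this just illustrates the difference between a validating and a non-validating TLS client):

```python
import ssl

def broken_client_context():
    # What the fridge effectively does: skip certificate validation,
    # so any man-in-the-middle can present any certificate at all.
    ctx = ssl.create_default_context()
    ctx.check_hostname = False
    ctx.verify_mode = ssl.CERT_NONE
    return ctx

def correct_client_context():
    # What a client should do: verify the chain against trusted CAs and
    # check that the certificate matches the hostname (the defaults).
    return ssl.create_default_context()
```

Wrapped around a socket with `ctx.wrap_socket(sock, server_hostname=...)`, the first context will happily talk to an impostor; the second raises a certificate error instead.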

Matthew Garrett: Working with the kernel keyring

This post was syndicated from: Matthew Garrett and was written by: Matthew Garrett. Original post: at Matthew Garrett

The Linux kernel keyring is effectively a mechanism to allow shoving blobs of data into the kernel and then setting access controls on them. It’s convenient for a couple of reasons: the first is that these blobs are available to the kernel itself (so it can use them for things like NFSv4 authentication or module signing keys), and the second is that once they’re locked down there’s no way for even root to modify them.

But there’s a corner case that can be somewhat confusing here, and it’s one that I managed to crash into multiple times when I was implementing some code that works with this. Keys can be “possessed” by a process, and have permissions that are granted to the possessor orthogonally to any permissions granted to the user or group that owns the key. This is important because it allows for the creation of keyrings that are only visible to specific processes – if my userspace keyring manager is using the kernel keyring as a backing store for decrypted material, I don’t want any arbitrary process running as me to be able to obtain those keys[1]. As described in keyrings(7), keyrings exist at the session, process and thread levels of granularity.

This is absolutely fine in the normal case, but gets confusing when you start using sudo. sudo by default doesn’t create a new login session – when you’re working with sudo, you’re still working with key possession that’s tied to the original user. This makes sense when you consider that you often want applications you run with sudo to have access to the keys that you own, but it becomes a pain when you’re trying to work with keys that need to be accessible to a user no matter whether that user owns the login session or not.

I spent a while talking to David Howells about this and he explained the easiest way to handle this. If you do something like the following:
$ sudo keyctl add user testkey testdata @u
a new key will be created and added to UID 0’s user keyring (indicated by @u). This is possible because the keyring defaults to 0x3f3f0000 permissions, giving both the possessor and the user read/write access to the keyring. But if you then try to do something like:
$ sudo keyctl setperm 678913344 0x3f3f0000
where 678913344 is the ID of the key we created in the previous command, you’ll get permission denied. This is because the default permissions on a key are 0x3f010000, meaning that the possessor has permission to do anything to the key but the user only has permission to view its attributes. The cause of this confusion is that although we have permission to write to UID 0’s keyring (because the permissions are 0x3f3f0000), we don’t possess it – the only permissions we have for this key are the user ones, and the default state for user permissions on new keys only gives us permission to view the attributes, not change them.
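To make the mask arithmetic concrete, here is a small illustrative Python decoder (not part of keyutils; the bit layout is the one keyctl setperm uses, with the standard view/read/write/search/link/setattr bits per byte):

```python
# Decode a kernel keyring permission mask into per-category rights.
# Bytes, high to low: possessor, user, group, other.
PERM_BITS = [
    (0x01, "view"), (0x02, "read"), (0x04, "write"),
    (0x08, "search"), (0x10, "link"), (0x20, "setattr"),
]

def decode_keyperm(mask):
    out = {}
    for shift, who in ((24, "possessor"), (16, "user"), (8, "group"), (0, "other")):
        byte = (mask >> shift) & 0xFF
        out[who] = [name for bit, name in PERM_BITS if byte & bit]
    return out
```

Running `decode_keyperm(0x3f010000)` shows the possessor with all six rights and the user with only "view" – which is exactly why the setperm above fails when we don't possess the key.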

But! There’s a way around this. If we instead do:
$ sudo keyctl add user testkey testdata @s
then the key is added to the current session keyring (@s). Because the session keyring belongs to us, we possess any keys within it and so we have permission to modify the permissions further. We can then do:
$ sudo keyctl setperm 678913344 0x3f3f0000
and it works. Hurrah! Except that if we log in as root, we’ll be part of another session and won’t be able to see that key. Boo. So, after setting the permissions, we should:
$ sudo keyctl link 678913344 @u
which ties it to UID 0’s user keyring. Someone who logs in as root will then be able to see the key, as will any processes running as root via sudo. But we probably also want to remove it from the unprivileged user’s session keyring, because that’s readable/writable by the unprivileged user – they’d be able to revoke the key from underneath us!
$ sudo keyctl unlink 678913344 @s
will achieve this, and now the key is configured appropriately – UID 0 can read, modify and delete the key, other users can’t.

This is part of our ongoing work at CoreOS to make rkt more secure. Moving the signing keys into the kernel is the first step towards rkt no longer having to trust the local writable filesystem[2]. Once keys have been enrolled the keyring can be locked down – rkt will then refuse to run any images unless they’re signed with one of these keys, and even root will be unable to alter them.

[1] (obviously it should also be impossible to ptrace() my userspace keyring manager)
[2] Part of our Secure Boot work has been the integration of dm-verity into CoreOS. Once deployed this will mean that the /usr partition is cryptographically verified by the kernel at runtime, making it impossible for anybody to modify it underneath the kernel. / remains writable in order to permit local configuration and to act as a data store, and right now rkt stores its trusted keys there.


TorrentFreak: Yandex Demands Takedown of ‘Illegal’ Music Downloader

This post was syndicated from: TorrentFreak and was written by: Andy. Original post: at TorrentFreak

Yandex is a Russian Internet company that runs the country’s most popular search engine, controlling more than 60% of the market.

Making use of the free music that could be found via its search results, in 2009 Yandex introduced its first music player. A year later the company launched Yandex.Music, a new service offering enhanced legal access to around 800,000 tracks from the company’s catalog.

In 2014, after years of development, Yandex relaunched a revamped music platform with new features including a Spotify-like recommendation engine and licensing deals with Universal, EMI, Warner and Sony, among others. Today the service offers more than 20 million tracks, all available for streaming.


While the service can be reached by using an appropriate VPN, Yandex Music is technically only available to users from Russia, Ukraine, Belarus and Kazakhstan. Additionally, the service’s licensing terms allow only streaming.

Of course, there are some who don’t appreciate being so restricted and this has led to the development of third-party applications that are designed to offer full MP3 downloads.

In addition to various browser extensions, one of the most popular is Yandex Music Downloader. Hosted on Github, the program’s aims are straightforward – to provide swift downloading of music from Yandex while organizing everything from ID3 tags to cover images and playlists.

Unfortunately for its fanbase, however, the software has now attracted the attention of Yandex’s legal team.

“I am Legal Counsel of Yandex LLC, Russian Internet-company. We have learned that your service is hosting program code ‘Yandex.Music downloader’…which allows users to download content (music tracks) from the service Yandex.Music…,” a complaint from Yandex to Github reads.

“Service Yandex.Music is the biggest music service in Russia that provides users with access to the licensed music. Music that [is] placed on the service Yandex.Music is licensed from its right holders including: Sony Music, The Orchard, Universal Music, Warner Music and other,” the counsel continues.

“Service Yandex.Music does not provide users with possibility to download content from the service. Downloading content from the service Yandex.Music is illegal. This means that program code ‘Yandex.Music downloader’…provides illegal unauthorized access to the service Yandex.Music that breaches rights of Yandex LLC, right holders of the content and also breaches GitHub Terms of Service.”

As a result, users trying to obtain the application are now greeted with the following screen.


The Yandex complaint follows a similar one earlier in the month in which it targeted another variant of the software.

While the takedowns may temporarily affect the distribution of the tools, Yandex’s efforts are unlikely to affect the unauthorized downloading of MP3s from its service. A cursory Google search reveals plenty of alternative tools which provide high-quality MP3s on tap.


Darknet - The Darkside: Tiger – Unix Security Audit & Intrusion Detection Tool

This post was syndicated from: Darknet - The Darkside and was written by: Darknet. Original post: at Darknet - The Darkside

Tiger is a Unix security audit tool that can be used both for auditing and as an intrusion detection system. It supports multiple Unix platforms, and it is free and provided under a GPL license. Unlike other tools, Tiger needs only POSIX tools and is written entirely in shell. Tiger has some interesting features […]


SANS Internet Storm Center, InfoCON: green: Detecting file changes on Microsoft systems with FCIV, (Mon, Aug 31st)

This post was syndicated from: SANS Internet Storm Center, InfoCON: green and was written by: SANS Internet Storm Center, InfoCON: green. Original post: at SANS Internet Storm Center, InfoCON: green

Microsoft often releases interesting tools to help system administrators and incident handlers investigate suspicious activity on Windows systems. In 2012, they released a free tool called FCIV (File Checksum Integrity Verifier) (1). It is a stand-alone executable which does not require any DLL or other resources; just launch it from any location. Its goal is to browse a file system or some directories recursively and to generate MD5/SHA1 hashes of all the files found. The results are saved in an XML database. FCIV is used in proactive and reactive ways. The first step is to build a database of hashes on a clean computer (proactive). Then the generated database is re-used to verify a potentially compromised system (reactive).

Most big organizations work today with system images. The idea is to scan an unused clean system (but one which will of course receive patches and software updates via a system like WSUS) and to generate a baseline of hashes:

PS C:\> fciv.exe -add c:\ -r -type *.exe -type *.job -type *.jar -both -xml c:\hashdb.xml

This command will search recursively for the specified file types on the C: drive and store both hashes in the specified XML file. A small PowerShell script (2) will do the job: it generates a unique database name (based on the current date, yyyymmdd) and, at the end, also computes the SHA1 hash of this database. FCIV can then verify a system against the database:

PS D:\bin> fciv.exe -xml d:\hashdb-20150830.xml -v -bp C:\

The database being an XML file, it’s tempting to have a look at it and reuse the content with other investigation or monitoring tools. However, it’s unusable in its default format because Microsoft writes all the data on a single line and the hashes are stored in raw Base64. So they must first be Base64-decoded, then encoded in hex, to be recognized as regular MD5 or SHA1 hashes. This can be achieved very easily with a few lines of Python. Here is a small script (3) that will parse an FCIV database and generate a CSV file with 3 columns: the full path of the file, the MD5 hash and the SHA1 hash.
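A minimal sketch of such a parser, assuming FCIV's FILE_ENTRY elements contain name, MD5 and SHA1 children holding Base64 data (adjust the tag names if your database differs):

```python
import base64
import csv
import io
import xml.etree.ElementTree as ET

def fciv_to_csv(xml_text):
    # Parse an FCIV XML database and return CSV text with three columns:
    # file path, MD5 and SHA1, the hashes re-encoded from Base64 to hex.
    buf = io.StringIO()
    writer = csv.writer(buf)
    writer.writerow(["path", "md5", "sha1"])
    for entry in ET.fromstring(xml_text).iter("FILE_ENTRY"):
        path = entry.findtext("name", "")
        md5 = base64.b64decode(entry.findtext("MD5", "")).hex()
        sha1 = base64.b64decode(entry.findtext("SHA1", "")).hex()
        writer.writerow([path, md5, sha1])
    return buf.getvalue()
```

The resulting CSV diffs cleanly against a later run of the same scan, which is the whole point of the baseline approach.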

A last tip: execute a scheduled task every night on a standard computer image from a USB stick and store the generated XML database (and its .sha1sum) on a remote system. You’ll have a good starting point to investigate a compromised computer.



TorrentFreak: Court Orders Italian ISPs to Block Popcorn Time

This post was syndicated from: TorrentFreak and was written by: Ernesto. Original post: at TorrentFreak

Branded a “Netflix for Pirates,” the Popcorn Time app quickly gathered a user base of millions of people over the past year.

The application is a thorn in the side of many copyright holders who are increasingly trying to contain the threat.

In Italy this has now resulted in a new blocking order issued by the Criminal Court of Genoa. The Court ruled that Popcorn Time assists copyright infringement and has ordered local ISPs to block several domain names.

The domains listed in the ruling include those of the two most-used forks, as well as a localized download page.

While the ISP blockades will prevent people from downloading Popcorn Time from these sites, applications that have been downloaded already will continue to work for now.

Also, many other sites offering the same Popcorn Time software are still available. This means that the blockades will only have a limited effect.

Fulvio Sarzana, a lawyer with the Sarzana and Partners law firm who specializes in Internet and copyright disputes, informs TF that Popcorn Time could successfully fight the order.

Sarzana references a recent case in Israel where the Popcorn Time block was overturned because it hinders freedom of speech and says he’s willing to represent the developers.

For now the developers of the main .io Popcorn Time fork are showing little interest in fighting the decision. Instead, they’d rather put their efforts into making sure that the blockade has minimal impact.

“While they are able to block the website, Popcorn Time is a standalone program, so once a user has it downloaded it is unlikely that blocks will cause many issues other than new users getting the program from our site directly or in some cases updates.”

“However, we try our best to have things in place to make these blocks effectively null and void,” the Popcorn Time team says.

Just a few days ago the same developers urged Hollywood to start competing with Popcorn Time. However, for now we expect that blocking efforts and other legal actions will remain top priorities.


Errata Security: About the systemd controversy…

This post was syndicated from: Errata Security and was written by: Robert Graham. Original post: at Errata Security

As a troll, one of my favorite targets is “systemd“, because it generates so much hate on both sides. For bystanders, I thought I’d explain what that is. To begin with, I’ll give a little background.

An operating system like Windows, Mac OS X, or Linux comes in two parts: a kernel and userspace. The kernel is the essential bit, though on the whole, most of the functionality is in userspace.

The word “Linux” technically only refers to the kernel itself. There are many optional userspaces that go with it. The most common is called BusyBox, a small bit of userspace functionality for the “Internet of Things” (home routers, TVs, fridges, and so on). The second most common is Android (the mobile phone system), with a Java-centric userspace on top of the Linux kernel. Finally, there are the many Linux distros for desktops/servers like RedHat Fedora and Ubuntu — the ones that power most of the servers on the Internet. Most people think of Linux in terms of the distros, but in practice, they are a small percentage of the billions of BusyBox and Android devices out there.

The first major controversy in Linux was the use of what’s known as the microkernel, an idea that removes most traditional kernel functionality and puts it in userspace instead. It was all the rage among academics in the early 1990s. Linus famously rejected the microkernel approach. Apple’s Mac OS X was originally based on a microkernel, but they have since moved large bits of functionality back into the kernel, so it’s no longer a microkernel. Likewise, Microsoft has moved a lot of functionality from userspace into the Windows kernel (such as font rendering), leading to important vulnerabilities that hackers can exploit. Academics still love microkernels today, but in the real world they’re too slow.

The second major controversy in Linux is the relationship with the GNU project. The GNU project was created long before Linux in order to create a Unix-like operating system. They failed at creating a usable kernel, but produced a lot of userland code. Since most of the key parts of the userland code in Linux distros come from GNU, some insist on saying “GNU/Linux” instead of just “Linux”. If you are thinking this sounds a bit childish, then yes, you are right.

Now we come to the systemd controversy. It started as a replacement for something called init. A running Linux system has about 20 different programs running in userspace. When the system boots up, it has only one, a program called “init”. This program then launches all the remaining userspace programs.

This init system harks back to the original creation of Unix in the 1970s, and is a bit of a kludge. It worked fine back then when systems were small (when 640k of memory was enough for anybody), but works less well on today’s huge systems. Moreover, the slight differences in init details among the different Linux distros, as well as other Unix systems like Mac OS X, *BSD, and Solaris, are a constant headache for those of us who have to sysadmin these boxes.

Systemd replaces the init kludge with a new design. It’s a lot less kludgy. It runs the same across all Linux distros. It also boots the system a lot faster.

But on the flip side, it destroys the original Unix way of doing things, becoming a lot more like how the Windows equivalent (svchost.exe) works. The Unix init system ran as a bunch of scripts, allowing any administrator to change the startup sequence by changing a bit of code. This makes understanding the init process a lot easier, because at any point you can read the code that makes something happen. Init was something that anybody could understand, whereas nobody can say for certain exactly how things are being started in systemd.
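For a sense of the scale of that difference: where a classic init script is a page of shell per service, a systemd service is a short declarative unit file. A hypothetical minimal example (the service name and binary path are invented for illustration):

```ini
# /etc/systemd/system/example.service -- hypothetical minimal unit
[Unit]
Description=Example daemon
After=network.target

[Service]
ExecStart=/usr/bin/example-daemon --foreground
Restart=on-failure

[Install]
WantedBy=multi-user.target
```

The trade-off described above is visible here: the unit is terse and portable across distros, but the actual start-up logic lives inside systemd itself rather than in a script you can read.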

On top of that, the designers of systemd are a bunch of jerks. Linus handles Linux controversies with maturity. While he derides those who say “GNU/Linux”, he doesn’t insist that it’s wrong. He responds to his critics largely by ignoring them. On the flip side, the systemd engineers can’t understand how anybody can think that their baby is ugly, and vigorously defend it. Linux is a big-tent system that accepts people of differing opinions, systemd is a narrow-minded religion, kicking out apostates.

The biggest flaw of systemd is mission creep. It is slowly growing to take over more and more userspace functionality of the system. This complexity leads to problems.

One example is that it’s replaced traditional logging with a new journal system. Traditional, text-based logs were “rotated” in order to prevent the disk from filling up. This could be done because each entry in a log was a single line of text, so tools could parse the log files in order to chop them up. The new journal system is binary, so it’s not easy to parse, and hence, people don’t rotate the logs. This causes the hard drive to fill up, killing the system. This is noticeable when doing things like trying to boot a Raspberry Pi from a 4-gigabyte microSD card. It works with older, pre-systemd versions of Linux, but will quickly die with systemd if something causes a lot of logging on the system.

Another example is D-Bus. This is the core system within systemd that allows different bits of userspace to talk to each other. But it’s got problems. A demonstration of the D-Bus problem is the recent Jeep hack by researchers Charlie Miller and Chris Valasek. The root problem was that D-Bus was openly (without authentication) accessible from the Internet. Likewise, the “AllJoyn” system for the “Internet of Things” opens up D-Bus on the home network. D-Bus indeed simplifies communication within userspace, but its philosophy is to put all your eggs in one basket, then drop the basket.

Personally, I have no opinion on systemd. I hate everything. Init was an ugly kludge, and systemd appears to be just as ugly, albeit for different reasons. But the amount of hate on both sides is so large that it needs to be trolled. The thing I troll most about is that one day, “systemd will replace Linux”. As systemd replaces more and more of Linux userspace, and begins to drive kernel development, I think this joke will one day become true.

TorrentFreak: When You’re Calling Culture Content, You’re Reinforcing The Idea Of A Container

This post was syndicated from: TorrentFreak and was written by: Rick Falkvinge. Original post: at TorrentFreak

The copyright industry has consistently used the word “content” for anything creative.

Just like most other things the copyright industry does, there’s a thought behind the choice of wording – a choice they hope that other people will copy, because it reinforces their view of the world, or rather, what they would like the world to look like.

When we use certain words for metaphors, the words we use convey meaning of their own. This is why you see the pro-choice vs pro-life camps on opposite sides of the abortion debate: both camps want to portray the other camp as anti-choice and anti-life, respectively.

In the liberties debate and the culture debate, there’s nothing of the sort. The copyright industry has been allowed to establish the language completely on its own, and therefore, we’re using terms today that reinforce the idea and the notion that the copyright industry is good and that people who share are bad.

That’s insane.

Stop doing that.

Stop doing that right now.

Language matters.

You’re on the other side of the pro-life camp and you’re willingly calling yourself “anti-life”. How are you expecting to win anything from that position?

One thing you can stop saying immediately is “copyright”. Call it “the copyright monopoly”, for it is a monopoly, and that should be reinforced every time the abomination is mentioned. Also, use the term “the copyright industry” – as in manufacturing copyright monopolies and profiting off them – as often as possible. Never ever talk about “Intellectual Property”, except when describing why it’s bad to do so, as using that term reinforces the idea that ideas can not just be contained, but owned – something that’s blatantly false.

If you have to use the IP term, let it stand for Industrial Protectionism instead. That’s a much more correct description. Never ever ever use the word “property” when you’re referring to a monopoly. Doing so is so factually incorrect that courts have actually banned the copyright industry from using terms like “property” and “theft” – and yet, they keep doing so. Playing along with that game is stupid, dumb, and self-defeating.

Today, I’ll focus on the word “content”.

You’ll notice that the copyright industry uses this word consistently for everything. There’s a reason for that: If you have content, you must also have a container.

Do you need a container for a bedtime story? Do you need a container for a campfire song? Do you need a container for a train of thought? Do you need a container for cool cosplay ideas?

Of course you don’t. They’re ideas shared, songs sung, stories told. The idea that they must have a container – because they’re “content” – is so somebody can lock up those stories told and those songs sung, and so we can buy the container with the “content” we desire, instead of just singing the songs and telling the stories unfettered.

Compare the mental imagery evoked by these two sentences:

“We need to fill this website with content.”

“We need to fill this website with the stories of people in the area.”

One is locked up, controlled, locked down, devalued. The other is shared, cultural, told.

The word “content” means that there must also be a “container”, and that container is the copyright industry.

Don’t ever use the word “content”. It’s as unproductive as describing yourself as “anti-life”. Talk about songs, articles, stories, and ideas. Doing so brings new life to the stories you tell.

Above all, be aware of terms that have been established by the adversary to the Internet, to liberty, and to culture – and refuse to use them. The copyright industry is not your friend.

About The Author

Rick Falkvinge is a regular columnist on TorrentFreak, sharing his thoughts every other week. He is the founder of the Swedish and first Pirate Party, a whisky aficionado, and a low-altitude motorcycle pilot. His blog focuses on information policy.

Book Falkvinge as speaker?


TorrentFreak: Offcloud Downloads Torrents to Google Drive and Dropbox

This post was syndicated from: TorrentFreak and was written by: Ernesto. Original post: at TorrentFreak

Downloading torrents remotely is nothing new. Most of the popular torrent clients support this functionality, but for many users it’s too much of a hassle to configure it correctly.

This is where Offcloud comes in.

The new startup offers users a wide range of tools to download and back up files from video services and file-hosters, and recently added torrent support and Google Drive integration as well.

The idea is simple and straightforward. Users simply paste a torrent link into their Offcloud account and the service then downloads the files right away.

One of the main benefits to users is that they can add torrents from work, school or on the road. After the torrent is downloaded to Offcloud’s server the files can be downloaded to a local computer or synced to Google Drive, Dropbox or an FTP server.

Once the files are synced people can access or play the files directly from the cloud, since Dropbox and Google Drive support online streaming for various media formats.

Google Drive Streaming

Downloading files only to Offcloud is an option as well of course, as the service has a built-in media player.

The main downside of Offcloud is that it limits the number of downloads to two torrent links per week on a free account. This should be good enough for the casual user, but paid plans are also available starting at $1.99.

TF also asked the service about its seeding policy and the company clarified that it’s not a seedbox service.

“Offcloud does not have the ambition to be a seedbox service. We are not here to help BitTorrent uploaders, but rather to provide a simple cloud-based solution to users who simply wish to leech from BitTorrent in a fast and secure manner,” Offcloud’s spokesperson says.

The company tries to maintain torrent etiquette by uploading and downloading an equal amount of data. And thanks to its high bandwidth capacity, overall torrent swarm speeds will increase, at least temporarily.

“We usually aim at a 1:1 ratio for the sake of the BitTorrent swarm’s quality. Furthermore, our 10-Gbit nodes are truly boosting the swarm at the moment they are active on a certain torrent,” Offcloud notes.

TF tested the service which works as advertised. The torrents start quickly and download at much higher speeds than the average home connection, and they quickly appear in the designated Dropbox account or Google Drive.

In addition to torrents the service also downloads and converts videos from a range of other sites including YouTube, Vimeo, adult sites and most popular file-hosters.

People who want to take a look can head over to Offcloud to take it for a free spin.
