
Backblaze Blog | The Life of a Cloud Backup Company: Storage Pod 4.5 – Tweaking a Proven Design

This post was syndicated from Backblaze Blog | The Life of a Cloud Backup Company and was written by Andy Klein.

Storage Pod 4.5
It has been nearly a year since we published an update to our Storage Pod design. Over the last few months we have been deploying Storage Pod 4.5, which is known internally as Storage Pod Classic. The reason for the “classic” nickname comes from the fact that Storage Pod 4.5 is derived from our Storage Pod 3.0 chassis and design. Storage Pod 4.5 returns to the backplane-based design of Storage Pod 3.0 but incorporates upgrades that improve reliability and reduce cost. The result is that Storage Pod 4.5 delivers 180 TB of data storage for only $0.048 per Gigabyte, our lowest cost ever.

What’s New In Storage Pod 4.5

Storage Pod 4.5 is built on the same chassis as Storage Pod 3.0, but with an upgraded parts list. The upgraded items are:

  1. Backplanes
  2. SATA cards
  3. CPU

To the delight/dismay of many in the reddit community, we did not change the on-off switch.

1) Back to Backplanes

    One of the principal design decisions we made in Storage Pod 4.5 was to return to using 5-port SATA backplanes versus the direct-wire design of Pod 4.0. We’ll dig into why we returned to backplanes a little later in this post, but for the moment let’s cover the backplanes we are using in Pod 4.5.

    At the core of the new backplanes is the Marvell 9715 chipset. Over the years, Marvell has proven to be committed to the storage market and manufacturing the chipset fits nicely into their business. Both Sunrich and CFI use the 9715 chipset as part of their 5-port backplanes. We now have a backplane that is readily available, well supported and faster with 6 Gbps SATA-3 throughput.

2) New SATA Cards

    For Storage Pod 4.5 we upgraded to SATA cards manufactured by Sunrich using the Marvell 9235 chipset. The same chipset is also used by SYBA in their latest SATA cards. When combined with the upgraded backplanes, the new SATA cards deliver 6 Gbps SATA-3 throughput.

3) Upgraded CPU

    The CPU was upgraded from an i3-2100 to an i3-2120. The i3-2100 has been EOL’d by Intel and the i3-2120 delivers slightly better performance for the same price. We’ve also tested the i3-3240 and it worked fine. Other LGA 1155 socket CPUs should work as well, although we have not tested any beyond those mentioned.

Storage Pod Costs

Storage Pod 4.5 is less expensive to build and fill with hard drives than all its predecessors. The entire system, fully populated with 180 terabytes’ worth of hard drives, costs Backblaze less than a nickel per gigabyte. With larger capacity hard drives now coming to market, the cost/GB for hard drive storage should continue to decrease and drive down the cost of each Storage Pod we build. This will allow us to continue to charge just $5/month for our unlimited online backup service.
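As a rough sanity check on those figures (a back-of-the-envelope sketch; it assumes decimal units, 1 TB = 1,000 GB, and treats the quoted $0.048/GB as the all-in build cost):

```python
# Rough cost arithmetic for a fully populated Storage Pod 4.5.
# Assumption (not stated in the post): 1 TB = 1,000 GB (decimal units).
capacity_tb = 180
cost_per_gb = 0.048  # dollars per gigabyte, as quoted in the post

capacity_gb = capacity_tb * 1000
total_cost = capacity_gb * cost_per_gb
print(f"{capacity_gb} GB at ${cost_per_gb}/GB ≈ ${total_cost:,.0f} per pod")
# ≈ $8,640 per pod, i.e. under a nickel per gigabyte
```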

Below are the Backblaze costs for building the different Storage Pod versions.

Storage Pod Cost History

Building your own Storage Pod or having a Storage Pod built for you will likely cost somewhat more, but the cost of the hard drives will continue to be the main cost component.

Our Round Trip to Backplanes – The Inside Story

Each month Backblaze needs to build and deploy 20 to 30 Storage Pods. To do this, we employ a contract manufacturer to bend the metal and assemble the parts into a Storage Pod that we load test and qualify before placing into service. Backblaze then monitors and maintains the active Storage Pods. Prior to Storage Pod 4.0, we had built and deployed nearly 800 backplane-based Storage Pods.

Two unrelated events drove our decision to pursue the direct-wire design of Storage Pod 4.0. First, we had a dwindling supply of 5-port backplanes, as Silicon Image had stopped making the chipset used to manufacture them. Second, our existing contract manufacturer was having trouble meeting our production schedule for Storage Pods.

Into the breach stepped Protocase. Over the years, they have been a huge Backblaze Storage Pod supporter and created their 45 Drives division to sell Backblaze inspired storage servers. Protocase started building version 3.0 Storage Pods for us and when we decided to go with the direct-wire design described in the Storage Pod 4.0 blog post we used Protocase as our primary contract manufacturer.

With Storage Pod 4.0 being a completely new design, we had expected there to be “growing pains.” Indeed, looking back at Storage Pod 1.0, it went through multiple versions before it was ready for an operational environment. For Storage Pod 4.0, Protocase diligently worked with us over the course of several months to address the growing pains and get Pod 4.0 systems deployed in our data center.

While we worked to get Storage Pod 4.0 ready, we still had to deploy 20-30 operational storage pods each month. After testing out several local contract manufacturers, we found Evolve Manufacturing. They were eager, smart, located nearby, and proved to be excellent at building Storage Pods. At the same time, new vendors stepped forward with 5-port backplanes and SATA cards based on the Marvell chipsets. This meant we had everything we needed to go back to the future and build backplane-based storage pods to meet our operational needs and that’s what we did.

Late last year we finally made the decision to stop investing in the direct-wire design of Storage Pod 4.0 and proceed with upgrading our backplane-based design, and Storage Pod 4.5 was born. To date we have deployed nearly 100 Storage Pod 4.5 systems.

What Should You Buy?

If you are looking to purchase a storage server inspired by Backblaze, you have two options: Protocase or Evolve Manufacturing. Protocase, through their 45 Drives division, offers both direct-wire and backplane-based storage systems inspired by the Backblaze Storage Pod design. They are also capable of customizing a storage server to meet your specific needs.

Evolve Manufacturing is our Storage Pod 4.5 contract manufacturer. They will build you a backplane-based, Backblaze-inspired Storage Pod for $2,950.00 plus tax, packaging, shipping and handling. That price does not include the 45 hard drives; you’ll need to purchase them separately. If you’re interested, please contact Richard Smith (rich.smith [at]) for more information.

Making a Storage Pod

If you are inclined to make your own Storage Pod, we’ve included a list of the parts you’ll need in Appendix A. Most of the parts can be purchased online via Amazon, Newegg, etc. Some parts, as noted on the parts list, can be purchased through either a distributor or one of the contract manufacturers. Since Storage Pod 4.5 is similar to Storage Pod 3.0, you can still use the screenshot assembly walk-through from Protocase and its companion Storage Pod assembly overview (PDF, 1.5MB) for guidance.

As a reminder, Backblaze does not sell Storage Pods, and since the design is open source we don’t provide support or a warranty for people who choose to build their own Storage Pods. You can find fellow Pod builders in the usual online communities.

Appendix A: Storage Pod 4.5 Parts List

Below is the parts list for building Storage Pod 4.5. The price shown is the current list price of the items needed to build one storage pod. You may be able to find a lower price for some of these items.

4U Custom Case
Includes case, anti-vibration assemblies, power supply bracket, etc.
760 Watt Power Supply
Zippy PSM-5760V Power Supply
On/Off Switch
FrozenCPU ELE-272 Momentary LED Power Switch
Case Fan
Fan Connection Housing
Dampener Kits
Power Supply Vibration Dampener
Soft Fan Mount
AFM02B (1 flat end)
Soft Fan Mount
AFM03B (2 tab ends)
Supermicro MBD-X9SCL-F (MicroATX)
Intel Core i3 processor i3-2120
240P PC3-10600 CL9 18C 256X8 DD
Port Multiplier Backplanes
5 Port Backplane (Marvell 9715 Chipset)
4-PORT PCIe Express (Marvell 9235 chipset)
SATA cables RA-to-STR 3 ft locking from Nippon Labs
Boot Drive
80GB 7200RPM SATA 2.5 IN
Screw: 4-40 X 3/16 Phillips 100D FLAT SST
Screw: 6-32 X 3/16 Phillips PAN SST ROHS
Screw: 6-32 X 1/4 Phillips PAN ZPS
Screw: 4-40 X 5/16 Phillips PAN ZPS ROHS
Screw: 4-40 X 1/4 Phillips 100D Flat ZPS
Screw: 6-32 X 1/4 Phillips 100D Flat ZPS
Screw: M3 X 5MM Long Phillips, HD
Standoff: M3 X 5MM Long Hex, SS
Standoff: Round 6-32 X 1/4 Dia X 5/16 Lng
Crimp Terminal, 22-30 AWG Power (Tin)
Foam Tape, 1″ x 50′ x 1/16 in black


  1. Purchase from Evolve Manufacturing for the price listed, plus tax, packaging, handling, and shipping.
  2. Sunrich and CFI make the recommended backplanes and Sunrich and Syba make the recommended SATA Cards. These items may be purchased via Arrow (a distributor) or Evolve Manufacturing.
  3. Nippon Labs makes the recommended SATA cables. They may be purchased from Evolve Manufacturing.
  4. The Boot Drive can be any 2.5 or 3.5 inch internal drive.


Author information

Andy Klein

Andy Klein

Andy has 20+ years experience in technology marketing. He has shared his expertise in computer security and data backup at the Federal Trade Commission, Rootstech, RSA and over 100 other events. His current passion is to get everyone to back up their data before it’s too late.


TorrentFreak: WordPress Wins $25,000 From DMCA Takedown Abuser

This post was syndicated from TorrentFreak and was written by Ernesto.

Automattic, the company behind the popular WordPress blogging platform, has faced a dramatic increase in DMCA takedown notices in recent years.

Most requests are legitimate and indeed targeted at pirated content. However, there are also cases where the takedown process is clearly being abused.

To curb these fraudulent notices WordPress decided to take a stand in court, together with student journalist Oliver Hotham who had one of his articles on WordPress censored by a false takedown notice.

Hotham wrote an article about “Straight Pride UK” which included a comment he received from the organization’s press officer Nick Steiner. The latter didn’t like the article Hotham wrote, and after publication Steiner sent WordPress a takedown notice claiming that it infringed his copyrights.

WordPress and Hotham took the case to a California federal court where they asked to be compensated for the damage this abuse caused them.

The case is one of the rare instances where a service provider has taken action against DMCA abuse. The defendant, however, failed to respond in court which prompted WordPress to file a motion for default judgment.

The company argued that as an online service provider it faces overwhelming and crippling copyright liability if it fails to take down content. People such as Steiner abuse this weakness to censor critics or competitors.

“Steiner’s fraudulent takedown notice forced WordPress to take down Hotham’s post under threat of losing the protection of the DMCA safe harbor,” WordPress argued.

“Steiner did not do this to protect any legitimate intellectual property interest, but in an attempt to censor Hotham’s lawful expression critical of Straight Pride UK. He forced WordPress to delete perfectly lawful content from its website. As a result, WordPress has suffered damage to its reputation,” the company added.

After reviewing the case United States Magistrate Judge Joseph Spero wrote a report and recommendation in favor of WordPress and Hotham (pdf), and District Court Judge Phyllis Hamilton issued a default judgment this week.

“The court finds the report correct, well-reasoned and thorough, and adopts it in every respect,” Judge Hamilton writes (pdf).

“It is Ordered and Adjudged that defendant Nick Steiner pay damages in the amount of $960.00 for Hotham’s work and time, $1,860.00 for time spent by Automattic’s employees, and $22,264.00 for Automattic’s attorney’s fees, for a total award of $25,084.00.”

The case is mostly a symbolic win, but an important one. It should serve as a clear signal to other copyright holders that false DMCA takedown requests are not always left unpunished.


Krebs on Security: Intuit Failed at ‘Know Your Customer’ Basics

This post was syndicated from Krebs on Security and was written by Brian Krebs.

Intuit, the makers of TurboTax, recently introduced several changes to beef up the security of customer accounts following a spike in tax refund fraud at the state and federal level. Unfortunately, those changes don’t go far enough. Here’s a look at some of the missteps that precipitated this mess, and what the company can do differently going forward.


As The Wall Street Journal noted in a story this week, competitors H&R Block and TaxAct say they haven’t seen a similar surge in fraud this year. Perhaps the bad guys are just picking on the industry leader. But with 29 million customers last year — far more than H&R Block or TaxAct (which each had about seven million) — TurboTax should also be leading the industry in security.

Keep in mind that none of the security steps described below are going to stop fraud alone. But taken together, they do or would provide more robust security for TurboTax accounts, and significantly raise the costs for criminals engaged in this type of fraud.


Intuit fails to take basic steps to validate key account information, such as email addresses and mobile numbers, and these failures have limited the company’s ability to enact stricter account security measures. In fact, TurboTax still does not require new users to verify their email address, a basic security precaution that even random Internet forums, which don’t collect nearly as much sensitive data, require of all new users.

Last month, KrebsOnSecurity featured an in-depth story that stemmed from information provided by two former Intuit security employees who accused the company of making millions of dollars knowingly processing tax refund requests filed by cybercriminals. Those individuals shared a great deal about Intuit’s internal discussions on how best to handle a spike in account takeovers and fraudsters using stolen personal information to file tax refund requests on unwitting consumers.

Both whistleblowers said the lack of email verification routinely led to bizarre scenarios in which customers would complain of seeing other peoples’ tax data in their accounts. These were customers who’d forgotten their passwords and entered their email address at the site to receive a password reset link, only to find their email address tied to multiple identities that belonged to other victims of stolen identity refund fraud.

In mid-February, Intuit announced that it would begin the process of prompting all users to validate their accounts, either by validating their email address, answering a set of knowledge-based authentication questions, or entering a code sent to their mobile phone.

In an interview today, Intuit’s leadership sidestepped questions about why the company still does not validate email addresses. But TurboTax Chief Information Security Officer Indu Kodukula did say TurboTax will no longer display multiple profiles tied to a single email address when users attempt to reset their passwords by supplying an email address.

“We had an option where when you entered an email address, we’d show you a list of user IDs that were associated with that address,” Kodukula said. “We’ve removed that option, so now if you try to do password recovery, you have to go back to the email associated with you.”


As previously stated, TurboTax doesn’t require users to enter a valid mobile phone number, so multi-factor authentication will not be available for many new and existing customers. More importantly, in failing to require customers to supply mobile numbers, Intuit is passing up a major tool to combat fraud and account takeovers.

Verifying customers by sending a one-time code to their mobile that they then have to enter into the Web site before their account is created can dramatically drive up the costs for fraudsters. I’ve written several stories on academic research that looked at the market for bulk-created online accounts sought after by spammers, such as free Webmail and Twitter accounts. That research showed that bulk-created accounts at services which required phone verification were far more expensive than accounts at providers that lacked this requirement.
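The one-time-code flow described above can be sketched in a few lines. This is a hypothetical illustration, not Intuit’s implementation: a server generates a short random code, texts it to the user (the SMS step is stubbed out here), stores only a keyed hash, and compares the user’s reply in constant time.

```python
import hmac
import secrets

# Per-deployment secret key (assumed; in practice loaded from secure storage).
SERVER_KEY = secrets.token_bytes(32)

def issue_code() -> tuple[str, bytes]:
    """Return (6-digit code to SMS to the user, keyed hash to store server-side)."""
    code = f"{secrets.randbelow(10**6):06d}"
    digest = hmac.new(SERVER_KEY, code.encode(), "sha256").digest()
    return code, digest

def verify_code(submitted: str, stored_digest: bytes) -> bool:
    """Constant-time comparison of the submitted code against the stored hash."""
    candidate = hmac.new(SERVER_KEY, submitted.encode(), "sha256").digest()
    return hmac.compare_digest(candidate, stored_digest)

code, stored = issue_code()       # `code` would be texted to the phone number
print(verify_code(code, stored))  # True
```

The point of the exercise is economic, not cryptographic: each bulk-created account now needs a unique, working mobile number, which is exactly the cost the research above measured.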

True, fraudsters can outsource this account validation process to freelancers, but there is no denying that it increases the cost of creating new accounts because scammers must have a unique mobile number for every account they create. TurboTax should require all users to supply a working mobile phone number.


Until very recently, if hackers broke into your TurboTax account and made important changes, you might never know about it until you went to file your return and received a notification that someone had already filed one for you. This allowed fraudsters who had hijacked an account to wait until the legitimate user had filled out their personal data, and then change the bank account to which the refund would be credited.

On Feb. 26, 2015, Intuit said it would begin notifying customers via email if any user profile data is altered, including the account password, email address, security question, login name, phone number, name or address.


According to the interviews with Intuit’s former security employees, much of the tax refund fraud being perpetrated through TurboTax stems from a basic weakness: The company does not require new customers to do anything to prove their identity before signing up for a TurboTax account. During the account sign-up, you’re whoever you want to be. There is no identity proofing, such as a requirement to answer so-called “out-of-wallet” or “knowledge-based authentication” questions.

Out-of-wallet questions are hardly an insurmountable hurdle for fraudsters. Indeed, some of the major providers of these challenges have been targeted by underground identity theft services. But these questions do complicate things for fraudsters. Intuit should take a cue from credit score and credit file monitoring services, which ask a series of these questions before allowing users to create an account, and which block attempts by multiple users to create accounts with the same Social Security number and other information.

Kodukula said Intuit is considering requiring out-of-wallet questions at account signup. This is good news, because as I noted in last month’s story, Intuit’s anti-fraud efforts have been tempered by a focus on zero tolerance for “false positives” — the problem of incorrectly flagging a legitimate customer refund request as suspicious. Given that focus, Intuit should do everything it can to prevent fraudsters from signing up with its service in the first place.


In an interview with KrebsOnSecurity last month, Kodukula said a recent spike in tax refund fraud at the state level was due in part to an increase in account takeovers. Kodukula said a big part of that increase stemmed from the tendency for people to re-use passwords across multiple sites. “This technique works because a fair percentage of users re-use passwords at multiple sites,” I wrote in that article. “When a breach at one site exposes the email addresses and passwords of its users, fraudsters will invariably try the stolen account credentials at other sites, knowing that a small percentage of them will work.”

But according to the whistleblowers, Intuit has historically made it quite easy for fraudsters to hijack accounts by abusing TurboTax’s procedures for helping customers recover access to accounts when they forgot their account password and the email address used to register the account. Users who forget both of these things are prompted to supply their name, address, date of birth, Social Security number and ZIP code, information that is not terribly difficult to obtain cheaply from multiple ID theft services in the cybercrime underground.

In fact, the whistleblowers related a story about how they sought to raise awareness of the problem internally at Intuit by using TurboTax’s account recovery tools to hijack the TurboTax account of the company’s CEO Brad Smith.

Kodukula said that pursuant to changes made in the last two weeks, users who try to recover their passwords will now need to successfully answer a series of out-of-wallet questions to complete that process.


As I wrote last month, a big reason why the spike in tax refund fraud disproportionately affected TurboTax is that until very recently, TurboTax was the only major do-it-yourself online tax prep company that allowed so-called “unlinked” state tax filings.

States allow unlinked returns because most taxpayers owe taxes at the federal level but are due refunds from their state. Thus, unlinked returns allow taxpayers who owe money to the IRS to pay some or all of that off with state refund money.

Unlinked returns typically have made up a very small chunk of Intuit’s overall returns, Intuit’s Kodukula explained. However, so far in this year’s tax filing season, Intuit has seen between three- and 37-fold increases in unlinked, state-only returns. Convinced that most of those requests are fraudulent, the company now blocks users from filing unlinked returns via TurboTax. According to The Wall Street Journal, neither TaxAct nor H&R Block allowed users to file unlinked returns.

SANS Internet Storm Center, InfoCON: green: Anybody Doing Anything About ANY Queries?, (Thu, Mar 5th)

This post was syndicated from the SANS Internet Storm Center (InfoCON: green).

(In an earlier version of this story, I mixed up Firefox with Chrome. AFAIK, it was Firefox, not Chrome, that added DNS ANY queries recently.)

Recently, Firefox caused some issues with its use of ANY DNS queries [1]. ANY queries are different. For all other record types, we have specific entries in our zone file. Not so for ANY queries. RFC 1035 doesn’t even assign them a name [2]. It just assigns the query type (QTYPE) value of 255 to a request for all records. The name ANY is just what DNS tools typically call this query type.

The ANY query works a bit differently depending on whether it is received by a recursive or an authoritative name server. An authoritative name server will return all records that match, while a recursive name server will return all cached values. For example, try this:

1. Send a DNS ANY query for to the authoritative name server

dig ANY

I am getting 44 records back. Your number may be different.

2. Next, request a specific record using your normal recursive DNS server


I am getting one answer and two authority records (YMMV)

3. Finally, send an ANY query for to your recursive name server

dig ANY

You should get essentially the same information as you got in step 2. The recursive name server will only return data that is already cached. UNLESS there is no cached data, in which case the recursive name server will forward the query, and you will get the complete result.

So in short, if there is cached data, ANY queries are not that terrible, but if there is no cached data, then you can get a huge amplification. The result is close to 10 kBytes in size. (Btw: never mind the bad DNSSEC configuration; it is called evilexample for a reason.)
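To put a number on that amplification (a back-of-the-envelope sketch; the ~64-byte query size is an assumption, while the ~10 kB response is the figure above):

```python
# Rough DNS amplification factor: a small UDP query elicits a large ANY response.
query_bytes = 64            # assumed size of the ANY query over UDP (illustrative)
response_bytes = 10 * 1024  # ~10 kB ANY response, per the measurement above

amplification = response_bytes / query_bytes
print(f"Amplification factor ≈ {amplification:.0f}x")  # ≈ 160x
```

With a spoofed source address, an attacker sends 64 bytes and the victim receives roughly 160 times that, which is what makes ANY attractive for reflective DDoS.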

So how common are these ANY queries? (The original post showed query-type breakdown charts here, one for the authoritative name server and one for the recursive name server.)

As expected, the recursive name server, which is only forwarding requests for a few internal hosts, is a lot cleaner. The authoritative name server, which is exposed to random queries from external hosts, is a lot dirtier, and with many networks still preferring the legacy IPv4 protocol, A queries outnumber AAAA queries. ANY queries make up less than 1%, and they follow a typical scheme. For example:

(A table of query timestamps and client IP addresses followed here.)

Short surges of queries arrive for the same ANY record from the same /24. This is not normal; all these hosts should probably use the same recursive name server, and we should see only a single ANY request, which is then cached, if we see it at all. This is typical reflective DDoS traffic. In this case, the queried domain is under attack, and the attacker is attempting to use my name server as an amplifier.

Is it safe to block all ANY requests at your authoritative name server? IMHO: yes. But you probably first want to run a simple check like the one above to see who is sending you ANY requests and why. Mozilla has indicated that they will remove the ANY queries from future Firefox versions, so this will be a minor temporary inconvenience.
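For example, on BIND 9.11 or later one way to defang ANY queries without blocking them outright is the `minimal-any` option (a sketch, not the configuration used here; older servers can instead filter ANY at the firewall or rely on response rate limiting alone):

```
// named.conf options (BIND 9.11+) — hedged example
options {
    // Answer ANY with a single RRset instead of the full set,
    // shrinking the response an amplification attack can reflect.
    minimal-any yes;

    // Response rate limiting as a backstop against reflection.
    rate-limit {
        responses-per-second 5;
    };
};
```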


Johannes B. Ullrich, Ph.D.

(c) SANS Internet Storm Center. Creative Commons Attribution-Noncommercial 3.0 United States License.

Server Fault Blog: How we upgrade a live data center

This post was syndicated from the Server Fault Blog and was written by Nick Craver.

A few weeks ago we upgraded a lot of the core infrastructure in our New York (okay, it’s really in New Jersey now – but don’t tell anyone) data center. We love being open with everything we do (including infrastructure), and really consider it one of the best job perks we have. So here’s how and why we upgrade a data center. First, take a moment to look at what Stack Overflow started as. It’s 5 years later and hardware has come a long way.


Up until 2 months ago, we hadn’t replaced any servers since upgrading from the original Stack Overflow web stack. There just hasn’t been a need since we first moved to the New York data center (Oct 23rd, 2010 – over 4 years ago).  We’re always reorganizing, tuning, checking allocations, and generally optimizing code and infrastructure wherever we can. We mostly do this for page load performance; the lower CPU and memory usage on the web tier is usually a (welcome) side effect.

So what happened? We had a meetup. All of the Stack Exchange engineering staff got together at our Denver office in October last year and we made some decisions. One of those decisions was what to do about infrastructure hardware from a lifecycle and financial standpoint. We decided that from here on out: hardware is good for approximately 4 years. After that we will: retire it, replace it, or make an exception and extend the warranty on it. This lets us simplify a great many things from a management perspective, for example: we limit ourselves to 2 generations of servers at any given time and we aren’t in the warranty renewal business except for exceptions. We can order all hardware up front with the simple goal of 4 years of life and with a 4 year warranty.

Why 4 years? It seems pretty arbitrary. Spoiler alert: it is. We were running on 4 year old hardware at the time and it worked out pretty well so far. Seriously, that’s it: do what works for you. Most companies depreciate hardware across 3 years, making questions like “what do we do with the old servers?” much easier. For those unfamiliar, depreciated hardware effectively means “off the books.” We could re-purpose it outside production, donate it, let employees go nuts, etc. If you haven’t heard, we raised a little money recently. While the final amounts weren’t decided when we were at the company meetup in Denver, we did know that we wanted to make 2015 an investment year and beef up hardware for the next 4.

Over the next 2 months, we evaluated what was over 4 years old and what was getting close. It turns out almost all of our Dell 11th generation hardware (including the web tier) fits these criteria – so it made a lot of sense to replace the entire generation and eliminate a slew of management-specific issues with it. Managing just 12th and 13th generation hardware and software makes life a lot easier – and the 12th generation hardware will be mostly software upgradable to near equivalency to 13th gen around April 2015.

What Got Love

In those 2 months, we realized we were running on a lot of old servers (most of them from May 2010):

  • Web Tier (11 servers)
  • Redis Servers (2 servers)
  • Second SQL Cluster (3 servers – 1 in Oregon)
  • File Server
  • Utility Server
  • VM Servers (5 servers)
  • Tag Engine Servers (2 servers)
  • SQL Log Database

We also could use some more space, so let’s add on:

  • An additional SAN
  • An additional DAS for the backup server

That’s a lot of servers getting replaced. How many? This many: (photo: Greg Bray and a lot of old servers).

The Upgrade

I know what you’re thinking: “Nick, how do you go about making such a fancy pile of servers?” I’m glad you asked. Here’s how a Stack Exchange infrastructure upgrade happens in the live data center. We chose not to failover for this upgrade; instead we used multiple points of redundancy in the live data center to upgrade it while all traffic was flowing from there.

Day -3 (Thursday, Jan 22nd): Our upgrade plan was finished (this took about 1.5 days total), including everything we could think of. We had limited time on-site, so to make the best of it we itemized and planned all the upgrades in advance (most of them successfully, read on). You can read the full upgrade plan here.

Day 0 (Sunday, Jan 25th): The on-site sysadmins for this upgrade were George Beech, Greg Bray, and Nick Craver (note: several remote sysadmins were heavily involved in this upgrade as well: Geoff Dalgas online from Corvallis, OR, Shane Madden, online from Denver, CO, and Tom Limoncelli who helped a ton with the planning online from New Jersey). Shortly before flying in we got some unsettling news about the weather. We packed our snow gear and headed to New York.

Day 1 (Monday, Jan 26th): While our office is in lower Manhattan, the data center is now located in Jersey City, across the Hudson River. We knew there was a lot to get done in the time we had allotted in New York, weather or not. The thought was that if we skipped Monday we likely couldn’t get back to the data center Tuesday if the PATH (mass transit to New Jersey) shut down. This did end up happening. The team decision was: go time. We got overnight gear, then headed to the data center. Here’s what was there waiting to be installed:

  • Web, Redis, and service servers
  • New 10Gb network gear
  • FX2 blade chassis for VMs

Yeah, we were pretty excited too. Before we got started with the server upgrade though, we first had to fix a critical issue with the redis servers supporting the launching-in-24-hours Targeted Job Ads. These machines were originally for Cassandra (we broke that data store), then Elasticsearch (broke that too), and eventually redis. Curious? Jason Punyon and Kevin Montrose have an excellent blog series on Providence, you can find Punyon’s post on what broke with each data store here.

The data drives we ordered for these then-redundant systems were Samsung 840 Pro drives, which turned out to have a critical firmware bug. This was causing our server-to-server copies across dual 10Gb network connections to top out around 12MB/s (ouch). Given the hundreds of gigs of memory in these redis instances, that doesn’t really work. So we needed to upgrade the firmware on these drives to restore performance. This needed to happen online, letting the RAID 10 arrays rebuild as we went. Since you can’t really upgrade firmware over most USB interfaces, we tore apart this poor, poor little desktop to do our bidding:

Once that was kicked off, it ran in parallel with other work (since RAID 10s with data take tens of minutes to rebuild, even with SSDs). The end result was much improved 100-200MB/s file copies (we’ll see what new bottleneck we’re hitting soon – still lots of tuning to do).

Now the fun begins. In Rack C (we have high respect for our racks, they get title casing), we wanted to move from the existing SFP+ 10Gb connectivity combined with 1Gb uplinks for everything else to a single dual 10Gb BASE-T (RJ45 connector) copper solution. This is for a few reasons: The SFP+ cabling we use is called twinaxial, which is harder to work with in cable arms, has unpredictable girth when ordered, and can’t easily be gotten natively in the network daughter cards for these Dell servers. The SFP+ FEXes also don’t allow us to connect any 1Gb BASE-T items that we may have (though that doesn’t apply in this rack, it does when making it a standard across all racks like with our load balancers).

So here’s what we started with in Rack C:

What we want to end up with is:

The plan was to simplify network config, cabling, overall variety, and save 4U in the process. Here’s what the top of the rack looked like when we started: …and the middle (cable management covers already off):

Let’s get started. First, we wanted the KVMs online while working so we, ummm, “temporarily relocated” them:

Now that those are out of the way, it’s time to drop the existing SFP+ FEXes down as low as we could to install the new 10Gb BASE-T FEXes in their final home up top. The nature of how the Nexus Fabric Extenders work allows us to allocate between 1 and 8 uplinks to each FEX. This means we can unplug 4 ports from each old FEX without any network interruption, take those 4 (now dead in the VPC, or virtual port channel) out of the VPC, and assign them to the new FEX. So we go from 8/0 to 4/4 to 0/8 overall as we move from old to new through the upgrade. Here’s the middle step of that process:

With the new network in place, we can start replacing some servers. We had already yanked several old servers: one we virtualized and 2 we didn’t need anymore. Combine this with evacuating our NY-VM01 & NY-VM02 hosts and we’ve made 5U of space through the rack. On top of NY-VM01 & 02 was 1 of the 1Gb FEXes and 1U of cable management. Luckily for us, everything is plugged into both FEXes, so we could rip one out early. This means we could spin up the new VM infrastructure faster than we had planned. Yep, we’re already changing THE PLAN™. That’s how it goes. What are we replacing those aging VM servers with? I’m glad you asked. These bad boys:

There are 2 of these Dell PowerEdge FX2 blade chassis, each with 2 FC630 blades. Each blade has dual Intel E5-2698v3 18-core processors and 768GB of RAM (and that’s only half capacity). Each chassis also has 80Gbps of uplink capacity via the dual 4x 10Gb IOA modules. Here they are installed:

The split into 2 half-full chassis gives us 2 things: capacity to expand by double, and no single point of failure with the VM hosts. That was easy, right? Well, what we didn’t plan on was the network portion of the day. It turns out those IO Aggregators in the back are pretty much full switches, each with 4 external 10Gbps ports and 8 internal 10Gbps ports (2 per blade). Once we figured out what they could and couldn’t do, we got the bonding in place and the new hosts spun up.

It’s important to note here that it wasn’t any of the guys in the data center spinning up this VM architecture after the network was live. We’re set up so that Shane Madden was able to do all of this remotely. Once he had the new NY-VM01 & 02 online (now blades), we migrated all VMs over to those 2 hosts and were able to rip out the old NY-VM03-05 servers to make more room. As we ripped things out, Shane spun up the last 2 blades and brought our new beasts fully online. The net result of this upgrade was substantially more CPU, memory (from 528GB to 3,072GB overall), and network connectivity. The old hosts each had 4x 1Gb (trunk) for most access and 2x 10Gb for iSCSI access to the SAN. The new blade hosts each have 20Gb of trunk access to all networks, to split as they need.

But we’re not done yet. Here’s the new EqualLogic PS6210 SAN that went in below (that’s NY-LOGSQL01 further below going in as well):

VM Servers, SAN, and NY-LOGSQL01

Our old SAN was a PS6200 with 24x 900GB 10k drives and SFP+ only. This is a newer 10Gb BASE-T version with 24x 1.2TB 10k drives: more speed, more space, and the ability to go active/active with the existing SAN. Along with the SAN we also installed this new NY-LOGSQL01 server (replacing an aging Dell R510 never designed to be a SQL server – it was purchased as a NAS):

The additional space freed by the other VM hosts let us install a new file and utility server:

Of note here: the NY-UTIL02 utility server has a lot of drive bays, so we could install 8x Samsung 840 Pros in a RAID 0 in order to restore and test the SQL backups we make every night. It’s RAID 0 for space because all of the data is literally loaded from scratch nightly – there’s nothing to lose. An important lesson we learned last year is that the 840 Pros do not have capacitors in them, and power loss will cause data loss if they’re active, since they have a bit of DIMM for write cache on board. Given this, we opted to stick some Intel S3700 800GB drives we had from the production SQL server upgrades into our NY-DEVSQL01 box, and move the less resilient 840s to this restore server where it really doesn’t matter.
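The restore itself is SQL Server tooling, but the bookkeeping half (which backup is newest, and is anything stale?) is plain file-system logic. A minimal sketch, assuming a hypothetical layout of one folder per database full of .bak files and an invented 24-hour threshold; this is an illustration, not our actual tooling:

```python
from pathlib import Path
import time

def stale_backups(root, max_age_hours=24):
    """Return names of database folders under `root` whose newest .bak
    file is older than `max_age_hours`, or that have no .bak at all."""
    cutoff = time.time() - max_age_hours * 3600
    stale = []
    for db_dir in sorted(p for p in Path(root).iterdir() if p.is_dir()):
        baks = list(db_dir.glob("*.bak"))
        if not baks or max(b.stat().st_mtime for b in baks) < cutoff:
            stale.append(db_dir.name)
    return stale
```

On a real server the returned names would feed an alert rather than a return value; the point is just that a nightly restore test is only as good as the freshness check in front of it.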

Okay, let’s snap back to blizzard reality. At this point mass transit had shut down and all hotels within (blizzard) walking distance were booked solid. Though we started checking accommodations as soon as we arrived on site, we had no luck finding a hotel. The blizzard did far less than predicted, but it was still stout enough to shut everything down. So we decided to go as late as we could and get ahead of schedule. To be clear: this was the decision of the guys on site, not management. At Stack Exchange, employees are trusted to get things done however they think best. It’s something we really love about this job.

If life hands you lemons, ignore those silly lemons and go install shiny new hardware instead.

This is where we have to give a shout out to our data center QTS. These guys had the office manager help us find any hotel we could, set out extra cots for us to crash on, and even ordered extra pizza and drinks so we didn’t go starving. This was all without asking – they are always fantastic and we’d recommend them to anyone looking for hosting in a heartbeat.

After getting all the VMs spun up, the SAN configured, and some additional wiring ripped out, we wrapped up around 9:30am Tuesday morning as mass transit was spinning back up. To cap off the long night, this was the near-heart attack we ended on: a machine locking up mid-BIOS upgrade. Turns out a power supply was just too awesome and needed replacing. The BIOS did successfully upgrade with the defective power supply removed, and we got a replacement in before the week was done. Note: we ordered a new one rather than RMA the old one (which we did later). We keep a spare power supply for each wattage level in the data center, and try to use as few different levels as possible.

Day 2 (Tuesday, Jan 27th): We got some sleep, got some food, and arrived on site around 8pm. The web tier rebuild (a rolling build out) was kicked off first:

A stack of web servers | A line of web servers | Same line! | Inside a web server

While we rotated 3 servers at a time out for rebuilds on the new hardware, we also upgraded some existing R620 servers from 4x 1Gb network daughter cards to 2x 10Gb + 2x 1Gb NDCs. Here’s what that looks like for NY-SERVICE03:

A line of web servers | Same line! | Inside a web server

The web tier rebuilding gave us a chance to clean up some cabling. Remember those 2 SFP+ FEXes? They’re almost empty:

The last 2 items were the old SAN and that aging R510 NAS/SQL server. This is where the first major hiccup in our plan occurred. We planned to install a 3rd PCIe card in the backup server pictured here:

We knew it was a Dell R620 10-bay chassis that has 3 half-height PCIe slots. We knew it had a SAS controller for the existing DAS and a PCIe card for its SFP+ 10Gb connections (it’s in the network rack with the cores, in which all 96 ports are 10Gb SFP+). Oh hey, look at that: it’s hooked to a tape drive, which required another SAS controller we forgot about. Crap. Okay, these things happen. New plan.

We had extra 10Gb network daughter cards (NDCs) on hand, so we decided to upgrade the NDC in the backup server, remove the SFP+ PCIe card, and replace it with the new 12Gb SAS controller. We also forgot to bring the half-height mounting bracket for the new card and had to get creative with some metal snips (edit: turns out it never came with one – we feel slightly less dumb about this now). So how do we plug that new 10Gb BASE-T card into the network core? We can’t. At least not at 10Gb. Those last 2 SFP+ items in Rack C also needed a home – so we decided to make a trade. The whole backup setup (including the new MD1400 DAS) just loves its new Rack C home:

Then we could finally remove those SFP+ FEXes, bring those KVMs back to sanity, and clean things up in Rack C:

Those pesky hanging KVMs | Top of Rack C | Middle of Rack C

See? There was a plan all along. The last item to go in Rack C for the day is NY-GIT02, our new Gitlab and TeamCity server:

Signatures from the New York devs | Racked and ready to go

Note: we used to run TeamCity on Windows on NY-WEB11. Geoff Dalgas threw out the idea during the upgrade of moving it to hardware: the NY-GIT02 box. Because they are such intertwined dependencies (for which both have an offsite backup), combining them actually made sense. It gave TeamCity more power and even faster disk access (it does a lot of XML file…stuff), and made the web tier more homogeneous, all at the same time. It also made the imminent downtime of NY-WEB11 have far less impact. This made lots of sense, so we changed THE PLAN™ and went with it. More specifically, Dalgas went with it and set it all up, remotely from Oregon.

While this was happening, Greg was fighting a DSC install hang involving git on our web tier: Greg losing to DSC. Wow, that’s a lot of red; I wonder who’s winning. And that’s Dalgas in a hangout on my laptop. Hi, Dalgas!

Since the web tier builds were a relatively new process still fighting us, we took the time to address some of the recent cabling changes. The KVMs had been installed hastily not long before this because we knew a re-cable was coming. In Rack A, for example, we moved the top 10Gb FEX up a U to expand the cable management to 2U and added 1U of management space between the KVMs. Here’s that process:

A messy starting KVM | Removing the cable management to make room | Ahhhh, room! | That's better, all done.

Since we had to re-cable from the 1Gb middle FEXes in Racks A & B (all 4 being removed) to the 10Gb top-of-rack FEXes, we moved a few things around. The CloudFlare load balancers below the web tier at the bottom moved up to spots freed by the recently virtualized DNS servers, joining the other 2 public load balancers. The removal of the 1Gb FEXes as part of our all-10Gb overhaul meant that the middle of Racks A & B had much more space available. Here’s the before and after:

Web tier below a 1Gb FEX | Look at all that space!

After 2 batches of web servers, cable cleanup, and network gear removal, we called it quits around 8:30am to go grab some rest. Things were moving well and we only had half the web tier, cabling, and a few other servers left to replace.

Day 3 (Wednesday, Jan 28th): We were back in the data center just before 5pm, set up and ready to go. The last non-web servers to be replaced were the redis and “service” (tag engine, elasticsearch indexing, etc.) boxes:

A look inside redis | NY-REDIS01 and NY-SERVICE05 racked and ready for an OS

We have 3 tag engine boxes (purely for reload stalls and optimal concurrency, not load) and 2 redis servers in the New York data center. One of the tag engine boxes was a more recent R620 (this one got the 10Gb upgrade earlier) and wasn’t replaced. That left NY-SERVICE04, NY-SERVICE05, NY-REDIS01 and NY-REDIS02.

On the service boxes the process was pretty easy, though we did learn something interesting: if you put both of the drives from the RAID 10 OS array in an R610 into the new R630…it boots all the way into Windows 2012 without any issues. This threw us for a moment because we didn’t remember building it in the last 3 minutes. The rebuild itself is simple: lay down Windows 2012 R2 via our image + updates + DSC, then install the jobs they do. StackServer (from a sysadmin standpoint) is simply a Windows service, and our TeamCity build handles the install and such; it’s literally just a parameter flag. These boxes also run a small IIS instance for internal services, but that’s also a simple build out. The last thing they do is host a DFS share, which we wanted to trim down and simplify the topology of, so we left them disabled as DFS targets and tackled that the following week – we had NY-SERVICE03 in rotation for the shares and could do that work entirely remotely.

For redis we always have a slave chain going; it looks like this:

This means we can do an upgrade/failover/upgrade without interrupting service at all. After all those buildouts, here’s the super fancy new web tier installed:

To get an idea of the scale of the hardware difference: the old web tier was Dell R610s with dual Intel E5640 processors and 48GB of RAM (upgraded over the years). The new web tier has dual Intel E5-2687W v3 processors and 64GB of DDR4 memory. We re-used the same dual Intel 320 300GB SSDs for the OS RAID 1. If you’re curious about the specs on all this hardware, the next post we’ll do is a detailed writeup of our current infrastructure, including exact specs.

Day 4 (Thursday, Jan 29th): I picked a fight with the cluster rack, D. Much of the day was spent giving it a makeover now that most of the cables we needed were in; when the rack was first built out, the pieces we needed hadn’t arrived by go time. It turns out we were still short a few cat cables and power cables, as you’ll see in the photos, but we were able to get 98% of the way there.

It took a while to whip this rack into shape because we added cable arms where they were missing, replaced most of the cabling, and are fairly particular about the way we do things. For instance: how do you know things are plugged into the right port and where the other end of the cable goes? Labels. Lots and lots of labels. We label both ends of every cable and every server on both sides. It adds a bit of time now, but it saves both time and mistakes later.

Cable labels! | Web servers without labels | Web servers with labels! | Web server rear labels
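Generating label text beats hand-typing it, since the two ends of a cable must mirror each other exactly. A toy sketch (the record format here is invented for illustration, not our actual tooling): each patch record yields one label per cable end, each reading local:port -> remote:port, so you can trace the run from whichever end you’re holding:

```python
def cable_labels(patches):
    """For each (device_a, port_a, device_b, port_b) patch record, return
    the pair of label strings printed for that cable: one per end, each
    naming the local port first and the far side second."""
    labels = []
    for dev_a, port_a, dev_b, port_b in patches:
        end_a = f"{dev_a}:{port_a} -> {dev_b}:{port_b}"
        end_b = f"{dev_b}:{port_b} -> {dev_a}:{port_a}"
        labels.append((end_a, end_b))
    return labels
```

Feeding the patch list from the same source that drives switch config means the labels can’t drift from reality, which is the whole point of labeling both ends.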

Here’s what the racks ended up looking like when we ran out of time this trip:

It’s not perfect since we ran out of several cables of the proper color and length. We have ordered those and George will be tidying the last few bits up.

I know what you’re thinking. We don’t think that’s enough server eye-candy either.

Here’s the full album of our move.

And here’s the #SnowOps twitter stream which has a bit more.

What Went Wrong

  • We’d be downright lying to say everything went smoothly. Hardware upgrades of this magnitude never do. Expect it. Plan for it. Allow time for it.
  • Remember when we upgraded to those new database servers in 2010 and the performance wasn’t what we expected? Yeah, that. There is a bug we’re currently helping Dell track down in their 1.0.4/1.1.4 BIOS for these systems that seems to not respect whatever performance setting you choose. With Windows, a custom performance profile disabling C-States to stay at max performance works. In CentOS 7 it does not, but disabling the Intel PState driver does. We have even ordered and just racked a minimal R630 to test and debug issues like this, as well as to test our deployment from bare metal so we can constantly improve our build automation. Whatever is at fault with these settings not being respected, our goal is to get the vendor to release an update addressing the issue so that others don’t get the same nasty surprise.
  • We ran into an issue deploying our web tier with DSC: it locked up thinking it needed a reboot to finish, but came up in the same state after each reboot, in an endless cycle. We also hit issues with our deployment of the git client on those machines.
  • We learned that accidentally sticking a server with nothing but naked IIS into rotation is really bad. Sorry about that one.
  • We learned that if you move the drives from a RAID array from an R610 to an R630 and don’t catch the PXE boot prompt, the server will happily boot all the way into the OS.
  • We learned the good and the bad of the Dell FX2 IOA architecture and how they are self-contained switches.
  • We learned the CMC (management) ports on the FX2 chassis are effectively a switch. We knew they were suitable for daisy chaining purposes. However, we promptly forgot this, plugged them both in for redundancy and created a switching loop that reset Spanning Tree on our management network. Oops.
  • We learned the one guy on twitter who was OCD about the one upside down box was right. It was a pain to flip that web server over after opening it upside down and removing some critical box supports.
  • We didn’t mention this was a charge-only cable. Wow, that one riled twitter up. We appreciate the #infosec concern though!
  • We drastically underestimated how much twitter loves naked servers. It’s okay, we do too.
  • We learned that Dell MD1400 (13g and 12Gb/s) DAS (direct attached storage) arrays do not support hooking into their 12g servers like our R620 backup server. We’re working with them on resolving this issue.
  • We learned Dell hardware diagnostics don’t even check the power supply, even when the server has an orange light on the front complaining about it.
  • We learned that Blizzards are cold, the wind is colder, and sleep is optional.
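For reference, the CentOS 7 workaround from the BIOS bullet above (disabling the Intel PState driver) comes down to one kernel boot parameter; a sketch assuming a stock grub2 setup:

```
# /etc/default/grub -- add intel_pstate=disable to the kernel command line
# (the "..." stands for whatever arguments are already there)
GRUB_CMDLINE_LINUX="... intel_pstate=disable"

# then regenerate the grub config and reboot
grub2-mkconfig -o /boot/grub2/grub.cfg
```

After the reboot the CPUs should fall back to the acpi-cpufreq driver, where the usual governor settings behave as expected.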

The Payoff

Here’s what the average render time for question pages looks like; if you look really closely, you can guess when the upgrade happened:

Question page render times

The decrease in question render times (from approx 30-35ms to 10-15ms) is only part of the fun. The next post in this series will detail many of the other drastic performance increases we’ve seen as a result of our upgrades. Stay tuned for a lot of real-world payoffs we’ll share in the coming weeks.

Does all this sound like fun?

To us, it is fun. If you feel the same way, come do it with us. We are specifically looking for sysadmins, preferably with data center experience, to come help out in New York. We are currently hiring for 2 positions:

If you’re curious at all, please ask us questions here, on Twitter, or wherever you’re most comfortable. Really. We love Q&A.

Schneier on Security: Now Corporate Drones are Spying on Cell Phones

This post was syndicated from: Schneier on Security and was written by: schneier. Original post: at Schneier on Security

The marketing firm Adnear is using drones to track cell phone users:

The capture does not involve conversations or personally identifiable information, according to director of marketing and research Smriti Kataria. It uses signal strength, cell tower triangulation, and other indicators to determine where the device is, and that information is then used to map the user’s travel patterns.

“Let’s say someone is walking near a coffee shop,” Kataria said by way of example.

The coffee shop may want to offer in-app ads or discount coupons to people who often walk by but don’t enter, as well as to frequent patrons when they are elsewhere. Adnear’s client would be the coffee shop or other retailers who want to entice passersby.


The system identifies a given user through the device ID, and the location info is used to flesh out the user’s physical traffic pattern in his profile. Although anonymous, the user is “identified” as a code. The company says that no name, phone number, router ID, or other personally identifiable information is captured, and there is no photography or video.
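To make the triangulation piece concrete: once signal strength has been turned into a range estimate to each of three towers, position is basic algebra. A toy sketch, not Adnear’s method (planar coordinates, noise-free ranges):

```python
def trilaterate(towers, dists):
    """Locate a point from three (x, y) tower positions and the distances
    to each. Subtracting the first circle equation from the other two
    leaves a 2x2 linear system, solved here with Cramer's rule."""
    (x1, y1), (x2, y2), (x3, y3) = towers
    d1, d2, d3 = dists
    a1, b1 = 2 * (x2 - x1), 2 * (y2 - y1)
    c1 = x2**2 - x1**2 + y2**2 - y1**2 + d1**2 - d2**2
    a2, b2 = 2 * (x3 - x1), 2 * (y3 - y1)
    c2 = x3**2 - x1**2 + y3**2 - y1**2 + d1**2 - d3**2
    det = a1 * b2 - a2 * b1
    return ((c1 * b2 - c2 * b1) / det, (a1 * c2 - a2 * c1) / det)
```

Real deployments have noisy ranges and more than three towers, so this becomes a least-squares fit rather than an exact solve, but the linearization step is the same.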

Does anyone except this company believe that device ID is not personally identifiable information?

TorrentFreak: World’s Most Beautiful Pirate Movie Site Killed in Infancy

This post was syndicated from: TorrentFreak and was written by: Andy. Original post: at TorrentFreak

While sites like The Pirate Bay are stuck in a presentational time-warp, in recent times other file-sharing related domains have been revamping their appearances.

KickassTorrents is probably one of the best looking public torrent sites but there are plenty of private trackers whose presentations are something to behold. Streaming sites have also made big strides in their graphical layouts, something which has spooked entertainment industry outfits such as the MPAA.

That being said, one of the biggest problems for them at the moment is Popcorn Time. Its Netflix-style interface not only looks good but behind the scenes it works almost flawlessly. Add to the mix complete simplicity of use and the software is definitely a force to be reckoned with.

However, Popcorn Time – in whatever format chosen by the user – needs to be installed on a PC, cellphone, tablet or other device in order to work. A small obstacle for many users perhaps, but for the real novice that still has the potential to cause problems – unless people have nachos as well as popcorn, that is.

Sometime last week a brand new service hit the Internet. Titled NachoTime, it is best explained as Popcorn Time in a browser. It looked absolutely beautiful, with full color graphics for all the content it presented – mainly mainstream movies, of course.

Every single one played almost instantly in a YouTube-style interface with no noticeable buffering and it even worked flawlessly on mobile devices with no extra software required.


In addition to instant streaming, NachoTime provided links to torrents for the same content in various qualities supported by the appropriate subtitles if needed. No other public site has ever looked this good.

However, all of this is now relegated to history, as Dutch anti-piracy group BREIN acted against the service before it even got off the ground. The site now displays a notice explaining that it has been permanently shut down.

This site has been removed by BREIN because it supplies illicit entertainment content.

This website made use of an illegal online supply of films and television series.

Uploading and downloading of illegal content is prohibited by law and will therefore result in liability for the damages caused.


Go to and see where you can legally download and stream

BREIN says it contacted the site’s host, who in turn contacted the site’s owner, who took NachoTime down immediately. TorrentFreak contacted BREIN chief Tim Kuik and put it to him that his group had just taken down the best-looking site around.

“I agree, very well presented,” Kuik said. “A pity it was illegal.”

BREIN’s reaction to NachoTime’s arrival was particularly quick. That, Kuik says, is due to the nature of this new wave of Popcorn Time-like services. The anti-piracy group views them as a greater threat than torrent sites.

“Their ease of use makes them very popular so they are hurting the growth of legal online platforms,” Kuik explains.

What will happen to the individual running the site isn’t clear, but Kuik told us that he’s a 22-year-old from the Dutch city of Utrecht who signed a declaration to keep the site offline. His quick response appears to have been well received by BREIN, but others thinking of embarking on the same kind of project have already been warned.

“They will be offered a settlement or they will be sued,” Kuik explains.

“Settlement is a cease and desist undertaking with a penalty sum, a BREIN text on the site referring to that lists legal online platforms and payment of compensation depending on circumstances. A court case would include an injunction and full costs (likely between € 8,000 – 15,000) and a procedure on the merits regarding damages and again full costs.”

While Popcorn and Nachos are definitely tasty snacks for pirates, at least one is currently off the menu. Time will tell when the next one will come along and how long it will last.

Source: TorrentFreak, for the latest info on copyright, file-sharing, torrent sites and anonymous VPN services.

Linux How-Tos and Linux Tutorials: How to Install the Prestashop Open Source Ecommerce Tool on Linux

This post was syndicated from: Linux How-Tos and Linux Tutorials and was written by: Jack Wallen. Original post: at Linux How-Tos and Linux Tutorials

Prestashop is one of the most powerful open source ecommerce tools you will ever have the privilege of using. Last year I decided to test the waters of selling books directly from my web site. To do this, I turned to Prestashop and was amazed at how much power and flexibility it offered. From digital downloads, weekly specials, standard and mobile themes, templates, and modules: you name it, Prestashop can do it.

The core of Prestashop is free (there are paid modules to extend the functionality, as well as a hosted, cloud-based version of the tool). But if you want to host your own Prestashop in-house, you can. The system requirements are fairly basic:

  • Supported operating system: Windows, Mac and Linux

  • Web server: Apache 1.3, Apache 2.x, Nginx or Microsoft IIS

  • PHP 5.2+ installed and enabled

  • MySQL 5.0+ installed with a database created.

I want to walk you through the process of getting Prestashop up and running on the Linux platform. I will demonstrate on Ubuntu 14.04, but the steps are easily transferable to other distributions. I will assume you already have the requirements met (in particular … the LAMP portion). The Prestashop installer does not create the database for you, so you will have to do that manually. There are a number of ways this can be done; my preference is using the PhpMyAdmin tool.

Let’s begin the process.

Database creation

Creating the database through PhpMyAdmin is simple:

  1. Point your browser to the PhpMyAdmin install on the server that will hold the Prestashop instance.

  2. Click on the Databases tab.

  3. Enter the name of the database to be created (Figure 1).

  4. Click Create.

PHPMyAdmin database creation

If you prefer creating databases from the command line, do the following:

  1. From a terminal window, issue the command

    sudo mysql -u root -p
  2. Hit Enter

  3. Type your MySQL root password and hit enter

  4. Type the command

    create database shop;
  5. Hit Enter.

Your database should now be ready to use.

Download and install

With the database ready, you need to download the latest version of Prestashop and move it to the Apache document root. For our instance, that document root will be /var/www/html. Once you’ve downloaded the .zip file, move it to /var/www/html, change into the document root (using a terminal window and the command

cd /var/www/html

) and then unzip the package with the command

sudo unzip prestashop_XXX.zip

(where XXX is the release number). This will create a new directory in the document root called prestashop.


The remainder of the installation will be done through your web browser. So point the browser to http://ADDRESS_OF_SERVER/prestashop and start walking through the installation wizard.

Web based install

The first step in the wizard is to select your language. From the language drop-down, make the appropriate selection and click Next. You will then need to agree to the licenses (there are more than one) and click Next. At this point, you will find out what needs to be corrected for the installation to continue.

The most likely fixes necessary are the installation of the GD library, the mcrypt extension, and adding write permissions to a number of folders. Here are the quick fixes:

  1. To install the GD library, issue the command

    sudo apt-get install php5-gd 
  2. To install the mcrypt extension, issue the command

    sudo apt-get install php5-mcrypt
  3. Enable mcrypt with the command

    sudo php5enmod mcrypt
  4. Use the command

    sudo chmod -R ugo+w 

    on the directories (within the /var/www/html/prestashop directory) /config, /cache, /log, /img, /mails, /modules, /themes/default-bootstrap/lang/, /themes/default-bootstrap/pdf/lang/, /themes/default-bootstrap/cache/, /translations/, /upload/, /download/ 

Once you’ve made those corrections (if necessary), hit the Refresh these settings button again and you should see all is well (Figure 2).

prestashop install

Store information

In the next window (Figure 3), you must enter information about your store. Pay close attention to the Main Activity drop-down. If you’re going to offer digital downloads, you’ll want to select the Download option (so you don’t have to manually add that feature later).

prestashop store information

Fill out the information and click Next to continue on.

Database configuration

In the next window (Figure 4), you must enter the information for the database you created earlier. Enter the information and click Test your database connection now. If it returns Database is connected, you are good to go ─ click Next.

prestashop database

Once you click Next, all of the database tables will be created. This step can take some time (depending upon your hardware). Allow it to finish and you will be greeted with a new window with a number of different links. You can click to manage your store, view your store, find new templates or modules, and even share your successful installation on Facebook, Twitter, etc.

You will, most likely, want to head on over to the back office. However, you cannot actually visit the back office until you’ve done the following:

  • Delete the /var/www/html/prestashop/install folder

  • Rename the /var/www/html/prestashop/admin folder

Once you’ve renamed the admin folder, the URL for the Prestashop back office will be http://ADDRESS_TO_SERVER/prestashop/ADMIN_FOLDER (where ADDRESS_TO_SERVER is the URL or IP address of the server and ADMIN_FOLDER is the new name for the admin folder). Go to that address and log in with the administration credentials you created during the Store Information setup. You will find yourself at the Prestashop Dashboard (Figure 5), where you can begin to manage your ecommerce solution!

prestashop dashboard

If you’re in need of a powerful ecommerce tool, look no further than open source and Prestashop. With this powerhouse online shopping solution, you’ll be selling your products and services with ease.


Krebs on Security: Credit Card Breach at Mandarin Oriental

This post was syndicated from: Krebs on Security and was written by: BrianKrebs. Original post: at Krebs on Security

In response to questions from KrebsOnSecurity, upscale hotel chain Mandarin Oriental Hotel Group today confirmed that its hotels have been affected by a credit card breach.

Reached for comment on reports from financial industry sources of a pattern of fraudulent charges on customer cards that had all recently been used at Mandarin hotels, the company confirmed it is investigating a breach.

“We can confirm that Mandarin Oriental has been alerted to a potential credit card breach and is currently conducting a thorough investigation to identify and resolve the issue,” the company said in an emailed statement. “Unfortunately incidents of this nature are increasingly becoming an industry-wide concern. The Group takes the protection of customer information very seriously and is coordinating with credit card agencies and the necessary forensic specialists to ensure our guests are protected.” 

Mandarin isn’t saying yet how many of the company’s two dozen or so locations worldwide may be impacted, but banking industry sources say the breach almost certainly impacted most if not all Mandarin hotels in the United States, including locations in Boston, Florida, Las Vegas, Miami, New York, and Washington, D.C. Sources also say the compromise likely dates back to just before Christmas 2014.

It may well be that the cards are being stolen from compromised payment terminals at restaurants and other businesses located inside these hotels, rather than from hotel front desk systems. This was the case with hotels managed by White Lodging Services Corp., which last year disclosed a breach that impacted only restaurants and gift shops within the affected hotels.

It should be interesting to see how much the stolen cards are worth if and when they go up for sale in the underground card markets. I’m betting these cards would fetch a pretty penny. This hotel chain is frequented by high rollers who likely have high- or no-limit credit cards. According to the Forbes Travel Guide, the average price of a basic room in the New York City Mandarin hotel is $850 per night.

More on this story as it becomes available.

TorrentFreak: EZTV Suffers Extended Downtime

This post was syndicated from: TorrentFreak and was written by: Ernesto. Original post: at TorrentFreak

Founded in 2005, the TV-torrent site EZTV has served torrents for nearly a decade.

Over the past several years it has maintained a steady user-base and with millions of users it’s undoubtedly the most used TV-torrent site on the Internet.

Today, however, the site has been pretty much unreachable. Instead of the usual list of torrents users see a CloudFlare error message.


TorrentFreak contacted EZTV for additional information. The team is aware of the issues and we’ll update this article when we receive more details.

The outage doesn’t mean that there are no new releases coming out.

As always, the leading TV-torrent distribution group continues to post torrents on KickassTorrents, The Pirate Bay and other sites.

In addition, various reverse proxies are also working fine.

Source: TorrentFreak, for the latest info on copyright, file-sharing, torrent sites and anonymous VPN services.

TorrentFreak: .SO Registry Bans More “KickassTorrents” Domains

This post was syndicated from: TorrentFreak and was written by: Ernesto. Original post: at TorrentFreak

With millions of unique visitors per day KickassTorrents has become a prime target for copyright holders, many of whom would like to see the site taken offline.

Among other tactics, copyright holders ask domain name registries to suspend pirate site domain names. For a long time the Somalian .so TLD appeared to be a relatively safe haven, but this changed last month when the domain was “banned.”

Initially the action appeared to be an isolated incident, but the .SO registry wasn’t done with the Kickass brand yet.

A few days ago the .SO registry targeted a new round of “Kickass” related domains, all of which were added to the ban list.

Interestingly, none of the domains were affiliated with the notorious torrent site. One of them, for example, was a relatively low traffic streaming site that simply used the Kickass brand to gain traffic.

A similar domain name wasn't even operational. The owner had parked the domain, which wasn't linking to any infringing material. Still, this domain was banned as well for an alleged violation of .SO's policies.

“The central registry has deleted the domain ‘’. The domain was in violation of their usage policy,” Dynadot informed the owner.

The .SO registry isn’t commenting on its actions, but it’s very likely that complaints from copyright holders are the main reason. By taking away the domain names of popular sites rightsholders hope to frustrate and confuse the public.

To a certain degree this strategy seems to be working. KickassTorrents is currently hard to find on search engines such as Google. Instead, many users are redirected to scam sites where they have to leave their credit card details to register a “free” account.


Findability issues aside, the real KickassTorrents is still alive and kicking. The site continues to operate on its old domain name, which regained its spot among the 100 most visited sites on the Internet.

Source: TorrentFreak, for the latest info on copyright, file-sharing, torrent sites and anonymous VPN services.

Raspberry Pi: Duelling pianos. Literally.

This post was syndicated from: Raspberry Pi and was written by: Liz Upton. Original post: at Raspberry Pi

When someone mailed me a link to this performance, I assumed it was going to be one of those setups where two pianists play jolly tunes together in a bar. How wrong I was.

These two pianists (Alvise Sinivia and Léo Jassef from the Conservatoire National de Paris) are trying to kill each other in Streetfighter. Tunefully.

These pianos have been transformed by Eric and Cyril of Foobarflies (let me know your full names, Eric and Cyril, so I can add them here!) into Playstation controllers, using piezo triggers, a Pi, some Arduinos, and some custom Python firmware. The installation was set up for the reopening of Paris’ Maison de la Radio, now a cultural space open to the public.

Eric and Cyril have made a comprehensive writeup available (and if, like me, you’re terribly excited by the insides of pianos, you’ll love this one). This is one of my favourite projects in ages – thanks Foobarflies!

SANS Internet Storm Center, InfoCON: green: No Wireshark? No TCPDump? No Problem!, (Wed, Mar 4th)

This post was syndicated from: SANS Internet Storm Center, InfoCON: green and was written by: SANS Internet Storm Center, InfoCON: green. Original post: at SANS Internet Storm Center, InfoCON: green

Have you ever been on a pentest, or troubleshooting a customer issue, and the next step was to capture packets on a Windows host? Then you find that installing WinPcap or Wireshark was simply out of scope or otherwise not allowed on that SQL, Exchange, Oracle or other host? It used to be that this is when we'd recommend installing Microsoft's Netmon packet capture utility, but even then lots of IT managers would hesitate about using the "install" word in association with a critical server. Well, as they say in networking (and security as well), there's always another way, and this is that way.

netsh trace is your friend. And yes, it does exactly what it sounds like it does.

Type netsh trace help on any Windows 7 / Windows Server 2008 or newer box, and you'll see something like this:

C:\> netsh trace help

Commands in this context:
? – Displays a list of commands.
convert – Converts a trace file to an HTML report.
correlate – Normalizes or filters a trace file to a new output file.
diagnose – Start a diagnose session.
dump – Displays a configuration script.
help – Displays a list of commands.
show – List interfaces, providers and tracing state.
start – Starts tracing.
stop – Stops tracing.

Of course, in most cases, tracing everything on any production box is not advisable, especially if it's your main Exchange, SQL or Oracle server. We'll need to filter the capture, usually to a specific host IP, protocol or similar. To see the available capture filters:

netsh trace show capturefilterhelp

One of the examples in this output shows you how to start a filtered capture, e.g.:

netsh trace start capture=yes Ethernet.Type=IPv4 IPv4.Address=

You could also add Protocol=TCP or UDP and so on.

Full syntax and notes for netsh trace can be found here:

For instance, the following session shows me capturing an issue with a firewall that I'm working on. Note that you need admin rights to run this, the same as any capture tool. In a pentest you would likely specify an output file that isn't in the user's profile directory. Starting the trace gives:

Trace configuration:
Status: Running
Trace File: C:\Users\Administrator\AppData\Local\Temp\NetTraces\NetTrace
Append: Off
Circular: On
Max Size: 250 MB
Report: Off

When you are done capturing data, stop the trace:

C:\> netsh trace stop
Correlating traces … done
Generating data collection … done
The trace file and additional troubleshooting information have been compiled as

The cool thing about this is that it doesn't need a terminal session (with a GUI, cursor keys and so on). If all you have is a Metasploit shell, netsh trace works great!

If this is a capture for standard sysadmin work, you can simply copy the capture over to your workstation and proceed with analysis. If this is a pentest, a standard copy might still work (remember, we're on a Microsoft server), but if you need netcat-type functionality to exfiltrate your capture, take a look at PowerCat (which is a netcat port in PowerShell).

Next, open the file (which is in Microsoft's ETL format) in Microsoft's Message Analyzer app, which you can install on your workstation rather than the server we ran the capture on.

If you do need another packet analysis tool, it's easy to do a File / Save As / Export and save as a PCAP file that Wireshark, tcpdump, Snort, ngrep, standard Python or Perl scripts, or any other standard tool can read natively.
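Once exported, PCAP is simple to consume from scripts. As a generic sketch (this parses the classic PCAP global header and is not specific to Message Analyzer's output; the function name is mine), the 24-byte header can be unpacked with nothing but Python's struct module:

```python
import struct

def parse_pcap_header(data: bytes) -> dict:
    """Parse the 24-byte global header of a classic PCAP file."""
    # The magic number 0xA1B2C3D4 reveals the writer's byte order
    magic = struct.unpack("<I", data[:4])[0]
    endian = "<" if magic == 0xA1B2C3D4 else ">"
    fields = struct.unpack(endian + "IHHiIII", data[:24])
    return {
        "version": (fields[1], fields[2]),  # normally (2, 4)
        "snaplen": fields[5],               # max bytes captured per packet
        "linktype": fields[6],              # 1 = Ethernet
    }

# A minimal little-endian header, as tcpdump would write on x86:
hdr = struct.pack("<IHHiIII", 0xA1B2C3D4, 2, 4, 0, 0, 65535, 1)
print(parse_pcap_header(hdr))  # → {'version': (2, 4), 'snaplen': 65535, 'linktype': 1}
```

From here each per-packet record is just another fixed-size header followed by raw bytes, which is why so many tools read the format natively.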

Or you can convert to PCAP using PowerShell (of course you can):

$s = New-PefTraceSession -Path C:\output\path\spec\OutFile.Cap -SaveOnStop
$s | Add-PefMessageProvider -Provider C:\input\path\spec\Input.etl
$s | Start-PefTraceSession

This PowerShell cmdlet is not available in Windows 7; you'll need Windows 8, or Server 2008 or newer.
(This script was found at )

If netsh trace has solved an interesting problem for you, or was the tool that got you some interesting data in a pentest, please use our comment form to let us know how you used it (within your NDA, of course!).

Rob VandenBrink

(c) SANS Internet Storm Center. Creative Commons Attribution-Noncommercial 3.0 United States License.

TorrentFreak: ISP Categorically Refuses to Block Pirate Bay – Trial Set For October

This post was syndicated from: TorrentFreak and was written by: Andy. Original post: at TorrentFreak

Despite its current difficulties in maintaining an efficient online presence, The Pirate Bay remains the world’s most hounded website. Entertainment industry companies around the globe have made the notorious site their number one anti-piracy target and legal action continues in many regions.

Perhaps one of the most interesting at the moment is the action filed last November by Universal Music, Sony Music, Warner Music, Nordisk Film and the Swedish Film Industry. It targets Swedish ISP Bredbandsbolaget (The Broadband Company) and effectively accuses the provider of being part of the Pirate Bay’s piracy machine.

The papers filed at the Stockholm District Court demand that Bredbandsbolaget block its subscribers from accessing The Pirate Bay and popular streaming portal Swefilmer. In December the ISP gave its response, stating in very clear terms that ISPs cannot be held responsible for the traffic carried on their networks.

Last month on February 20 the parties met in the Stockholm District Court to see if some kind of agreement or settlement could be reached. But the entertainment companies’ hopes have been dashed following the confirmation that Bredbandsbolaget will not comply with its wishes.

“It is an important principle that Internet providers of Internet infrastructure shall not be held responsible for the content that is transported over the Internet. In the same way that the Post should not meddle in what people write in the letter or where people send letters,” Commercial Director Mats Lundquist says.

“We stick to our starting point that our customers have the right to freely communicate and share information over the internet.”

With no settlement or compromise to be reached, DagensMedia reports that the district court has now set a date for what is being billed as a “historic trial”.

It will begin on Thursday 23 October and the outcome has the potential to reshape provider liability in The Pirate Bay’s spiritual homeland, despite the fact that it’s now run from overseas.

Bredbandsbolaget will certainly be outnumbered. TV companies including SVT, TV4 Group, MTG TV, SBS Discovery and C More will team up with the IFPI and the Swedish Video Distributors group, which counts Fox, Paramount, Disney, Warner and Sony among its members.

Internal movie industry documents obtained by TorrentFreak reveal that IFPI and the Swedish film producers have signed a binding agreement which compels them to conduct and finance the case. However, the MPAA is exerting its influence while providing its own evidence and know-how behind the scenes.

Also of interest is that IFPI took a decision to sue Bredbandsbolaget and not Teliasonera (described by the MPAA as “the largest and also very actively ‘copy-left’ Swedish ISP”). The reason for that was that IFPI’s counsel represents Teliasonera in other matters which would have raised a conflict of interest.

There are also some intriguing political implications and MPAA nervousness concerning the part of the case involving streaming portal Swefilmer. Those will be the topic of an upcoming TF article.

Source: TorrentFreak, for the latest info on copyright, file-sharing, torrent sites and anonymous VPN services.

SANS Internet Storm Center, InfoCON: green: Freak Attack – Surprised? No. Worried? A little. , (Wed, Mar 4th)

This post was syndicated from: SANS Internet Storm Center, InfoCON: green and was written by: SANS Internet Storm Center, InfoCON: green. Original post: at SANS Internet Storm Center, InfoCON: green

There has been some press surrounding the recently published SSL issue dubbed Freak. It has been covered on [1] and other sites, but what does it really mean?

The issue relates to the use of Export Ciphers (the crypto equivalent of keeping the good biscuit yourself and giving the smaller broken one to your little brother or sister). The Export Ciphers were used as the allowed ciphers for non-US use. The ciphers are part of OpenSSL, and the researchers [2] have identified a method of forcing the exchange between a client and server to use these weak ciphers, even if the cipher suite is not officially supported [3]. This is a man-in-the-middle (MITM) attack. When you do a MITM attack you have full control over the connection anyway, so why bother decrypting anything? However, if I'm reading and interpreting the examples correctly (kind of hoping I'm wrong), it looks like this particular attack solves one challenge that a MITM has. For HTTPS intercept you usually generate a new certificate with the information of the site and re-sign the certificate before presenting it to the client. Whenever you present this newly signed certificate, the client receives an error message stating that the certificate does not match the expected certificate for the site. From the videos [2] it looks like this attack could fix that particular problem. So now when you perform a MITM attack you retain the original certificate and the user is none the wiser. This could open up a whole new avenue of attacks against clients and potentially simplify something that was quite difficult to do.

What is the impact to organisations? Well, it is quite possible that your sites will be impersonated, there won't be much that can be done about it, and you may not even know that your customers are being attacked. To prevent your site from being used in this attack you'll need to patch OpenSSL [4] (yes, again). This issue will remain until systems have been patched and updated, not just servers but also client software. Client software should be updated soon (hopefully), but there will no doubt be devices that will be vulnerable to this attack for years to come (looking at you, Android).

Matthew Green in his blog [3] describes the attack well, and he raises a very valid point: backdoors will always come back to bite.

The researchers have set up a site with more info [5].


Mark H (Thanks Ugo for bringing it to our attention).


1 -
2 -
4 -
5 -

(c) SANS Internet Storm Center. Creative Commons Attribution-Noncommercial 3.0 United States License.

Schneier on Security: <i>Data and Goliath</i>: Reviews and Excerpts

This post was syndicated from: Schneier on Security and was written by: schneier. Original post: at Schneier on Security

On the net right now, there are excerpts from the Introduction on Scientific American, Chapter 5 on the Atlantic, Chapter 6 on the Blaze, Chapter 8 on Ars Technica, Chapter 15 on Slate, and Chapter 16 on Motherboard. That might seem like a lot, but it’s only 9,000 of the book’s 80,000 words: barely 10%.

There are also a few reviews: from Boing Boing, Booklist, Kirkus Reviews, and Nature. More reviews coming.

Amazon claims to be temporarily out of stock, but that’ll only be for a day or so. There are many other places to buy the book, including Indie Bound, which serves independent booksellers.

Book website is here.

SANS Internet Storm Center, InfoCON: green: An Example of Evolving Obfuscation, (Tue, Mar 3rd)

This post was syndicated from: SANS Internet Storm Center, InfoCON: green and was written by: SANS Internet Storm Center, InfoCON: green. Original post: at SANS Internet Storm Center, InfoCON: green

Since May of 2014, I've been tracking a particular group that uses the Sweet Orange exploit kit to deliver malware. This group also uses obfuscation to make it harder to detect the infection chain of events.

By 2015, this group included more obfuscation within the initial javascript. However, the result causes more work to detect the malicious activity.

Either way, the infection chain flows according to the following block diagram:

Previous obfuscation

Below are images from an infection chain from July 2014 [1]. Here we find malicious javascript from the compromised website. In this image, I've highlighted two areas.


Recent obfuscation

Below are images from an infection chain by the same actor in February 2015 [2]. Again we find malicious javascript from the compromised website. However, in this case, there are some notable differences.

First is the function that replaces any non-hexadecimal characters with nothing and replaces various symbols with the percent symbol (%). This time, we have unicode-based hexadecimal obfuscation and some variables thrown in. It does the same basic job as the previous example, but it's now a bit harder to find when you're searching through the code.

That URL is now obfuscated with unicode-based hexadecimal characters. For example, u0074 represents the ASCII character t (lower case).
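As a sketch of that decoding step (the function name is mine, not taken from the actual script), escapes of this kind can be reversed with a one-line regex substitution:

```python
import re

def decode_unicode_escapes(obfuscated: str) -> str:
    """Replace each uXXXX-style hexadecimal escape with its character."""
    return re.sub(r"u([0-9a-fA-F]{4})",
                  lambda m: chr(int(m.group(1), 16)),
                  obfuscated)

print(decode_unicode_escapes("u0068u0074u0074u0070"))  # → http
```

Running the obfuscated string through a decoder like this is usually enough to recover the hidden URL for further analysis.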

Once again, we can decode the obfuscated javascript to reveal the URL.

However, the result causes more work for analysts to fully map the chain of events. We can expect continued evolution of the obfuscation techniques used by this and other actors.

Brad Duncan, Security Researcher at Rackspace
Blog: @malware_traffic



(c) SANS Internet Storm Center. Creative Commons Attribution-Noncommercial 3.0 United States License.

TorrentFreak: U.S. Govt Files For Default Judgment on Dotcom’s Cash and Cars

This post was syndicated from: TorrentFreak and was written by: Andy. Original post: at TorrentFreak

In the wake of the now-famous 2012 raid, the U.S. government has done everything in its power to deny Kim Dotcom access to the assets of his former Megaupload empire. Millions were seized, setting the basis for a legal battle that has dragged on for more than three years.

In a July 2014 complaint submitted at a federal court in Virginia, the Department of Justice asked for forfeiture of the bank accounts, cars and other seized possessions, claiming they were obtained through copyright and money laundering crimes.

“Kim Dotcom and Megaupload will vigorously oppose the US Department of Justice’s civil forfeiture action,” Dotcom lawyer Ira Rothken told TF at the time.

But in the final days of last month Dotcom received a blow when a ruling from the United States barred him from fighting the seizure. A Federal Court in Virginia found that Dotcom was not entitled to contest the forfeiture because he is viewed as a “fugitive” facing extradition.

“We think this is not offensive to just Kim Dotcom’s rights, but the rights of all Kiwis,” Rothken said.

Wasting no time, yesterday the United States went in for the kill. In a filing in the District Court for the Eastern District of Virginia, the Department of Justice requested an entry of default against the assets of Kim Dotcom plus co-defendants Mathias Ortmann, Bram van der Kolk, Finn Batato, Julius Bencko, and Sven Echternach.

The targets for forfeiture are six bank accounts held in Hong Kong in the names of Ortmann, der Kolk, Echternach, Bencko and Batato. New Zealand based assets include an ANZ National Bank account in the name of Megastuff Limited, an HSBC account held by der Kolk and a Cleaver Richards Limited Trust Account for Megastuff Limited held at the Bank of New Zealand. Two Mercedes-Benz vehicles (an A170 and an ML500) plus their license plates complete the claim.

The request for default judgment was entered soon after.

“In accordance with the Plaintiff’s request to enter default and the affidavit of Assistant United States Attorney Karen Ledbetter Taylor, counsel of record for the Plaintiff, the Clerk of this Court does hereby enter a default against the defendant,” Clerk of Court Fernando Galindo wrote.

Dotcom and his co-defendants will now have to wait to see if the U.S. court grants default judgment and forfeiture. However, even if that transpires it probably won’t be the end of the matter.

Since the assets are located overseas any U.S. order would have to be presented to the courts in those countries. In New Zealand, for example, the U.S. acknowledges that the forfeiture order might not be accepted and could become the subject of further litigation.

In any event the battle for Dotcom's millions will continue, both in the United States and elsewhere. And with each passing day come extra legal costs which diminish the entrepreneur's chances of mounting what is already an astronomically costly defense, a situation that plays right into the hands of the U.S.

Source: TorrentFreak, for the latest info on copyright, file-sharing, torrent sites and anonymous VPN services.

Krebs on Security: Hospital Sues Bank of America Over Million-Dollar Cyberheist

This post was syndicated from: Krebs on Security and was written by: BrianKrebs. Original post: at Krebs on Security

A public hospital in Washington state is suing Bank of America to recoup some of the losses from a $1.03 million cyberheist that the healthcare organization suffered in 2013.

In April 2013, organized cyber thieves broke into the payroll accounts of Chelan County Hospital No. 1, one of several hospitals managed by the Cascade Medical Center in Leavenworth, Wash. The crooks added to the hospital’s payroll account almost 100 “money mules,” unwitting accomplices who’d been hired to receive and forward money to the perpetrators.

On Thursday, April 19, and then again on April 20, the thieves put through a total of three unauthorized payroll payments (known as automated clearing house or ACH payments), siphoning approximately $1 million from the hospital.

Bank of America was ultimately able to claw back roughly $400,000 of the fraudulent payroll payments. But in a complaint (PDF) filed against the bank, the hospital alleges that an employee on the Chelan County Treasurer’s staff noticed something amiss the following Monday — April 22, 2013 — and alerted the bank to the suspicious activity.

“Craig Scott, a Bank of America employee, contacted the Chelan County Treasurer’s office later that morning and asked if a pending transfer request of $603,575.00 was authorized,” the complaint reads. “No funds had been transferred at the time of the phone call.  Theresa Pinneo, an employee in the Chelan County Treasurer’s Office, responded immediately that the $603,575.00 transfer request was not authorized. Nonetheless, Bank of America processed the $603,575.00 transfer request and transferred the funds as directed by the hackers.”

Chelan County alleges breach of contract, noting that the agreement between the county and the bank incorporates rules of the National Automated Clearinghouse Association (NACHA), and that those rules require financial institutions to implement a risk management program for all ACH activities; to assess the nature of Chelan County’s ACH activity; to implement an exposure limit for Chelan County; to monitor Chelan County’s ACH activity across multiple settlement dates; and to enforce that exposure limit. The lawsuit alleges that Bank of America failed on all of those counts, and that it ran afoul of a Washington state law governing authorized and verified payment orders.

In a response (PDF) filed with the U.S. District Court for the Eastern District of Washington at Spokane, Bank of America denied nearly all of the allegations in the lawsuit, including that it ignored the hospital’s warning not to process the $603,575 payment batch.

The bank noted that its contractual obligations with the county are governed by the Uniform Commercial Code (UCC), which has been adopted by most states (including Washington). The UCC holds that a payment order received by the [bank] is “effective as the order of the customer, whether or not authorized, if the security procedure is a commercially reasonable method of providing security against unauthorized payment orders, and the bank proves that it accepted the payment order in good faith and in compliance with the security procedure and any written agreement or instruction of the customer restricting acceptance of payment orders issued in the name of the customer.”

This cyberheist mirrors attacks against dozens of other businesses over the past five years that have lost tens of millions of dollars at the hands of crooks armed with powerful banking Trojans such as ZeuS. It’s not clear what strain of malware was used in this attack, but the money was funneled through a cashout gang that this blog has tied to cyberheists orchestrated by organized crooks who distributed ZeuS via email spam campaigns.

Business and consumers operate under vastly different rules when it comes to banking online. Consumers are protected by Regulation E, which dramatically limits the liability for those who lose money from unauthorized account activity online (provided the victim notifies their financial institution of the fraudulent activity within 60 days of receiving a disputed account statement).

Businesses, however, do not enjoy such protections. The victim organization’s bank may decide to reimburse the victim for some of the losses, but beyond that the only recourse for the victim is to sue their bank. Under state interpretations of the UCC, the most that a business hit with a cyberheist can hope to recover is the amount that was stolen. That means it’s generally not in the business’s best interests to sue their bank unless the amount of theft was quite high, because the litigation fees required to win a court battle can quickly equal or surpass the amount stolen.

So, if you run a business and you’re expecting your bank to protect your assets should you or one of your employees fall victim to a malware phishing scheme, you could be in for a rude awakening. Keep a close eye on your books, require that more than one employee sign off on all large transfers, and consider adopting some of these: Online Banking Best Practices for Businesses.

Raspberry Pi: Launch day – what happened to the website?

This post was syndicated from: Raspberry Pi and was written by: Liz Upton. Original post: at Raspberry Pi

Liz: As you may recall, back on February 2 we launched a new product. This website buckled a little under the strain (as did some of our partners’ websites). At the time, we promised you a post about what happened here and how we dealt with it, with plenty of graphs. We like graphs.

Here’s Pete Stevens from Mythic Beasts, our hosts, to explain exactly what was going on. Over to you, Pete!

On Monday, the Raspberry Pi 2 was announced, and The Register’s predictions of global geekgasm proved to be about right. Slashdot, BBC News, global trending on Twitter and many other sources covering the story resulted in quite a lot of traffic. We saw 11 million page requests from over 700,000 unique IP addresses in our logs from Monday, around 6x the normal traffic load.

The Raspberry Pi website is hosted on WordPress using the WP Super Cache plugin. This plugin generally works very well, resulting in the vast majority of page requests being served from a static file, rather than hitting PHP and MySQL. The second major part of the site is the forums and the different parts of the site have wildly differing typical performance characteristics. In addition to this, the site is fronted by four load balancers which supply most of the downloads directly and scrub some malicious requests. We can cope with roughly:

Cached WordPress 160 pages / second
Non cached WordPress 10 pages / second
Forum page 10 pages / second
Maintenance page at least 10,000 pages / second
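A quick back-of-the-envelope sketch (using the rates quoted above; the helper function is illustrative, not part of the actual setup) shows why a trickle of forum traffic can starve the cached pages: one forum page costs as much CPU as sixteen cached ones.

```python
# CPU cost per request, as fractions of one second of CPU time, derived
# from the capacities above (160/sec cached, 10/sec uncached or forum)
COST = {"cached": 1 / 160, "uncached": 1 / 10, "forum": 1 / 10}

def cpu_load(cached=0, uncached=0, forum=0):
    """Fraction of one CPU consumed per second by the given request mix."""
    return (cached * COST["cached"]
            + uncached * COST["uncached"]
            + forum * COST["forum"])

# Just 8 forum pages/sec consumes 80% of the CPU, leaving room for
# only ~32 cached pages/sec instead of the full 160:
print(cpu_load(cached=32, forum=8))  # → 1.0
```

Once the mixed load pushes this fraction past 1.0, requests queue faster than they can be served, which is exactly the load-average blowout described below.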

Back in 2012 for the original launch, we had a rather smaller server setup and we just put a maintenance page up and directed everyone to buy a Pi direct from Farnell or RS, both of whom had some trouble coping with the demand. We also launched at 6am GMT so that most of our potential customers would still be in bed, spreading the initial surge over several hours.

This time, being a larger organisation with coordination across multiple news outlets and press conferences, the launch time was fixed for 9am on Feb 2nd 2015 so everything would happen then, apart from the odd journalist with premature timing problems – you know who you are.

Our initial plan was to leave the site up as normal, but set the maintenance page to be the launch announcement. That way if the launch overwhelmed things, everyone should see the announcement served direct from the load balancers and otherwise the site should function as normal. Plan B was to disable the forums, giving more resources to the main blog so people could comment there.

The Launch


It is a complete coincidence that our director Pete took off to go to this isolated beach in the tropics five minutes after the Raspberry Pi 2 launch.

At 9:00 the announcement went live. Within a few minutes traffic volumes on the site had increased by more than a factor of five and the forum users were starting to make comments and chatter to each other. The server load increased from its usual level of 2 to over 400 – we now had a massive queue of users waiting for page requests because all of the server CPU time was being taken generating those slow forum pages which starved the main blog of server time to deliver those fast cached pages. At this point our load balancers started to kick in and deliver to a large fraction of our site users the maintenance page containing the announcement – the fall back plan. This did annoy the forum and blog users who had posted comments and received the maintenance page back having just had their submission thrown away – sorry. During the day we did a little bit of tweaking to the server to improve throughput, removing the nf_conntrack in the firewall to free up CPU for page rendering, and changing the apache settings to queue earlier so people received either their request page or maintenance page more quickly.

Disabling the forums freed up lots of CPU time for the main page and gave us a mostly working site. Sometimes it’d deliver the maintenance page, but mostly people were receiving cached WordPress pages of the announcement and most of the comments were being accepted.

Super Cache not quite so super

Unfortunately, we were still seeing problems. The site would cope with the load happily for a good few minutes, and then suddenly have a load spike to the point where pages were not being generated fast enough. It appears that WP Super Cache wasn’t behaving exactly as intended. When someone posts a comment, Super Cache invalidates its cache of the corresponding page and starts to rebuild a new one, but providing you have the relevant option ticked (we did), the now out-of-date cached page should continue to be served until it is overwritten by the newer version. After a while, we realised that the symptoms we were seeing were entirely consistent with this not working correctly, and once you hit very high traffic levels this behaviour becomes critical. If cached versions are not served whilst the page is being rebuilt, then subsequent requests will also trigger a rebuild, and you spend more and more CPU time generating copies of the missing cached page, which makes the rebuild take even longer, so you have to build more copies, each of which now takes even longer. We can build a ludicrously over-simple model of this with a short bit of perl and draw a graph of how long it takes to rebuild the main page based on hit rate. It tells us that performance quite suddenly falls off a cliff at around 60-70 hits/second. At 12 hits/sec (typical usage) a rebuild of the page completes in considerably under a second, at 40 hits/sec (very busy) it’s about 4s, at 60 hits/sec it’s 30s, and at 80 hits/sec it’s well over five minutes – at that point the load balancers kick in, display the maintenance page, and wait for the load to die down again before starting to serve traffic as normal.
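The original bit of perl isn't reproduced here, but a toy version of the same model is easy to sketch (the constant below is illustrative, chosen so the cliff lands near 80 hits/sec; it is not a measured figure): assume one rebuild costs a fixed slice of CPU, and every request arriving while the cache entry is missing spawns a competing rebuild. The completion time then behaves like t0 / (1 - r*t0), diverging as the hit rate r approaches 1/t0.

```python
def rebuild_time(hit_rate, cpu_per_rebuild=1 / 80):
    """Illustrative model: seconds to finish a page rebuild when every
    request arriving mid-rebuild triggers another competing rebuild.
    Diverges as hit_rate approaches 1 / cpu_per_rebuild (the cliff)."""
    load = hit_rate * cpu_per_rebuild
    if load >= 1:
        return float("inf")  # the rebuilds can never catch up
    return cpu_per_rebuild / (1 - load)

for rate in (12, 40, 60, 79):
    print(f"{rate:2d} hits/sec -> {rebuild_time(rate):8.3f} s")
```

The shape matters more than the numbers: rebuild time stays tiny across a wide range of hit rates and then explodes over a narrow band, which is exactly the sudden-cliff behaviour observed on the day.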
We still don’t know exactly what the cause was: either it’s something else with exactly the same symptoms, or this setting wasn’t working, or it was interacting badly with another plugin. But as soon as we’d figured out the issue, we implemented the sensible workaround: we put a rewrite hack in to serve the front page and announcement page completely statically, then regenerated the page from cron every five minutes, picking up all the newest comments. As if by magic the load returned to sensible levels, although there was now a small delay before new comments appeared.

Re-enabling the forums


With stable traffic levels, we turned the forums back on. And then immediately off again. They very quickly backed up the database server with connections, causing both the forums to cease working and the main website to run slowly. A little further investigation into the InnoDB parameters revealed some contention on database locks, so we reconfigured – and promptly took the database server down.

Our company pedant points out that actually only the database server process fell over, and it needed restarting, not rebooting. Cunningly, we’d managed to find a set of improved settings for InnoDB that allowed us to see all the tables in the database but not read any data out of them. A tiny bit of fiddling later and everything was happy.

The bandwidth graphs

We end up with a traffic graph that looks like this. On launch day it’s a bit lumpy: while we were serving the maintenance page, nobody could get to the downloads page, and downloads of operating system images and NOOBS normally dominate the traffic graphs. Over the next few days the HTML volume starts dropping and the number of system downloads for newly purchased Raspberry Pis starts increasing rapidly. At this point we were reminded of the work we did last year to build a fast distributed downloads setup, and were rather thankful for it, because we’re considerably beyond the traffic levels you can sanely serve from a single host.

Could do a bit better

The launch of Raspberry Pi 2 was a closely guarded secret, and although we were told in advance, we didn’t have a lot of time to prepare for the increased traffic. There are a few things we’d like to have improved, and we will be talking to Raspberry Pi about them over the coming months. One is to upgrade the hardware, adding some more cores and RAM to the setup. Whilst we’re doing this it would be sensible to look at splitting the parts of the site into different VMs, so that the forums/database/WordPress have some separation from each other and it is easier to scale things. It would have been really nice to have put our extremely secret test setup with HipHop Virtual Machine into production, but that’s not yet well enough tested for primetime – although a seven-fold performance increase on page rendering certainly would be nice.

Schoolboy error

Talking with Ben Nuttall, we realised that the stripped-down, minimal, super-fast maintenance page didn’t have analytics on it. So the difference between our figure of 11 million page requests and Ben’s of 1.5 million indicates how many people during the launch saw the static maintenance page rather than a WordPress-generated page with comments. In hindsight, putting analytics on the maintenance page would have been a really good idea. Not every HTTP request that received the maintenance page was necessarily a request to see the launch, nor was each definitely a different visitor. Without detailed analytics that we don’t have, we can only estimate that the number of people who saw the announcement was more than 1.5 million but less than 11 million.

Flaming, Bleeding Servers

Liz occasionally has slightly odd ideas about exactly how web-servers work: 


Now, much to her disappointment we don’t have any photographs of servers weeping blood or catching fire. [Liz interjects: it’s called METAPHOR, Pete.] But when we retire servers we like to give them a bit of a special send-off: here’s a server funeral, Mythic Beasts-style.

Delian's Tech blog: node-netflowv9 node.js module for processing of netflowv9 has been updated to 0.2.5

This post was syndicated from: Delian's Tech blog and was written by: Delian Delchev. Original post: at Delian's Tech blog

My node-netflowv9 library has been updated to version 0.2.5

There are a few new things:

  • Almost all of the IETF netflow types are decoded now, which means in practice that we support IPFIX
  • An unknown NetFlow v9 type no longer throws an error. It is decoded into a property named ‘unknown_type_XXX’, where XXX is the ID of the type
  • An unknown NetFlow v9 Options Template scope no longer throws an error. It is decoded into ‘unknown_scope_XXX’, where XXX is the ID of the scope
  • The user can overwrite how different NetFlow types are decoded and can define decoders for new types. The same goes for scopes. And this can happen “on the fly” – at any time.
  • The library has solid support for multiple netflow collectors running at the same time
  • A lot of new options and usage models for the library have been introduced
Below is the updated documentation describing how to use the library:


The usage of the netflowv9 collector library is very simple. You just have to do something like this:
var Collector = require('node-netflowv9');

Collector(function(flow) { console.log(flow) }).listen(3000);
or you can use it as an event provider:
Collector({port: 3000}).on('data',function(flow) { console.log(flow) });
The flow will be presented in a format very similar to this:
{ header:
   { version: 9,
     count: 25,
     uptime: 2452864139,
     seconds: 1401951592,
     sequence: 254138992,
     sourceId: 2081 },
  rinfo:
   { address: '',
     family: 'IPv4',
     port: 29471,
     size: 1452 },
  packet: Buffer <00 00 00 00 ....>,
  flow: [
   { in_pkts: 3,
     in_bytes: 144,
     ipv4_src_addr: '',
     ipv4_dst_addr: '',
     input_snmp: 27,
     output_snmp: 16,
     last_switched: 2452753808,
     first_switched: 2452744429,
     l4_src_port: 61538,
     l4_dst_port: 62348,
     out_as: 0,
     in_as: 0,
     bgp_ipv4_next_hop: '',
     src_mask: 32,
     dst_mask: 24,
     protocol: 17,
     tcp_flags: 0,
     src_tos: 0,
     direction: 1,
     fw_status: 64,
     flow_sampler_id: 2 } ] }
There will be one callback for each packet, which may contain more than one flow.
You can also access a NetFlow decode function directly. Do something like this:
var netflowPktDecoder = require('node-netflowv9').nfPktDecode;
Currently we support netflow versions 1, 5, 7 and 9.


You can initialize the collector with either a callback function only or a group of options within an object.
The following options are available during initialization:
port – defines the port our collector will listen on.
Collector({ port: 5000, cb: function (flow) { console.log(flow) } })
If no port is provided, then the underlying socket will not be initialized (bound to a port) until you call the listen method with a port as a parameter:
Collector(function (flow) { console.log(flow) }).listen(port)
cb – defines a callback function to be executed for every flow. If no callback function is provided, the collector fires a ‘data’ event for each received flow
Collector({ cb: function (flow) { console.log(flow) } }).listen(5000)
ipv4num – defines that we want to receive the IPv4 address as a number, instead of decoded into a readable dotted format
Collector({ ipv4num: true, cb: function (flow) { console.log(flow) } }).listen(5000)
socketType – defines the socket type we will bind to. The default is udp4. You can change it to udp6 if you like.
Collector({ socketType: 'udp6', cb: function (flow) { console.log(flow) } }).listen(5000)
nfTypes – defines your own decoders for NetFlow v9+ types
nfScope – defines your own decoders for NetFlow v9+ Options Template scopes

Define your own decoders for NetFlow v9+ types

NetFlow v9 can be extended with vendor-specific types, and many vendors define their own. No netflow collector in the world decodes all the vendor-specific types. By default this library decodes into a readable format all the types it recognises. All the unknown types are decoded as ‘unknown_type_XXX’, where XXX is the type ID, and the data is provided as a HEX string. But you can extend the library yourself. You can even replace how current types are decoded, and you can do that on the fly (dynamically changing how a type is decoded at different points in time).
To understand how to do that, you have to learn a bit about the internals of how this module works.
  • When a new flowset template is received from the NetFlow Agent, this netflow module generates and compiles (with new Function()) a decoding function
  • When a netflow is received for a known flowset template (we have a compiled function for it), the function is simply executed
This approach is quite simple and performs extremely well. The function code is as small as possible, and on first execution Node.JS compiles it with the JIT, so the result is really fast.
The function code is generated from templates that contain the javascript code to be added for each netflow type, identified by its ID.
Each template consists of an object of the following form:
{ name: 'property-name', compileRule: compileRuleObject }
compileRuleObject contains rules for how that netflow type is to be decoded, depending on its length. The reason is that some netflow types are variable-length, and you may have to execute different code to decode them depending on the length. The compileRuleObject format is simple:
length: 'javascript code as a string that decodes this value',
There is a special length key of 0: its code is used if no more specific decoder is defined for a given length. For example:
{
  4: 'code used to decode this netflow type with length of 4',
  8: 'code used to decode this netflow type with length of 8',
  0: 'code used to decode ANY OTHER length'
}

decoding code

The decoding code must be a string containing javascript code. This code is concatenated into the function body before compilation. If it contains errors or simply does not work as expected, it could crash the collector, so be careful.
There are a few variables you have to use:
$pos – this string is replaced with a number containing the current position of the netflow type within the binary buffer
$len – this string is replaced with a number containing the length of the netflow type
$name – this string is replaced with a string containing the name property of the netflow type (defined by you above)
buf – the Node.JS Buffer object containing the flow we want to decode
o – the object the decoded flow is written to
Everything else is pure javascript. It is good to know the restrictions of javascript and of Node.JS’s Function() constructor, but that is not necessary for writing simple decoders yourself.
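Putting the pieces above together, here is a deliberately simplified, self-contained sketch of the compile-once idea – substitute $name/$pos into per-type code templates, concatenate, and compile with new Function(). The type IDs, names and template shape here are invented for illustration; this is not the library’s actual internal code.

```javascript
// Simplified illustration of template-driven decoder compilation.
// The template shape (a single `code` string per type) is an
// assumption for this example, not the library's real format.
var templates = {
  1: { name: 'in_bytes', code: 'o["$name"] = buf.readUInt32BE($pos);' },
  4: { name: 'protocol', code: 'o["$name"] = buf.readUInt8($pos);' }
};

// fields: the (type, length) pairs announced by a flowset template
function compileDecoder(fields) {
  var body = 'var o = {};', pos = 0;
  fields.forEach(function (f) {
    var tpl = templates[f.type];
    body += tpl.code.replace(/\$name/g, tpl.name).replace(/\$pos/g, pos);
    pos += f.len;
  });
  return new Function('buf', body + 'return o;');  // compiled once per template
}

var decode = compileDecoder([{ type: 1, len: 4 }, { type: 4, len: 1 }]);
console.log(decode(Buffer.from([0, 0, 0, 144, 17]))); // { in_bytes: 144, protocol: 17 }
```

After the one-off compile, every subsequent packet for that template pays only the cost of a plain function call, which is where the performance comes from.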
If you want to decode a string, of variable length, you could write a compileRuleObject of the form:
0: 'o["$name"] = buf.toString("utf8",$pos,$pos+$len)'
The example above says that for this netflow type, whatever length it has, we will decode the value as a utf8 string.


Let’s assume you want to write your own code for decoding a NetFlow type, say 4444, which could be of variable length and contains an integer.
You can write code like this:
Collector({
   port: 5000,
   nfTypes: {
      4444: {  // 4444 is the NetFlow Type ID whose decoding we want to replace
         name: 'my_vendor_type4444', // The property name that will contain the decoded value; also the value of $name
         compileRule: {
            1: "o['$name']=buf.readUInt8($pos);", // decode a type of length 1 to a number
            2: "o['$name']=buf.readUInt16BE($pos);", // decode a type of length 2 to a number
            3: "o['$name']=buf.readUInt8($pos)*65536+buf.readUInt16BE($pos+1);", // decode a type of length 3 to a number
            4: "o['$name']=buf.readUInt32BE($pos);", // decode a type of length 4 to a number
            5: "o['$name']=buf.readUInt8($pos)*4294967296+buf.readUInt32BE($pos+1);", // decode a type of length 5 to a number
            6: "o['$name']=buf.readUInt16BE($pos)*4294967296+buf.readUInt32BE($pos+2);", // decode a type of length 6 to a number
            8: "o['$name']=buf.readUInt32BE($pos)*4294967296+buf.readUInt32BE($pos+4);", // decode a type of length 8 to a number
            0: "o['$name']='Unsupported Length of $len'" // any other length
         }
      }
   },
   cb: function (flow) { console.log(flow) }
});
It looks a bit complex, but actually it is not. In most cases you don’t have to define a compile rule for each different length. The following example defines a decoder for a netflow type 6789 that carries a string:
var colObj = Collector(function (flow) { console.log(flow) });
colObj.listen(5000);

colObj.nfTypes[6789] = {
   name: 'vendor_string',
   compileRule: {
      0: 'o["$name"] = buf.toString("utf8",$pos,$pos+$len)'
   }
};
As you can see, we can also change the decoding on the fly, by defining a property for that netflow type within the nfTypes property of the colObj (the Collector object). The next time the NetFlow Agent sends us a NetFlow Template definition containing this netflow type, the new rule will be used (routers usually resend templates from time to time, so even currently compiled templates get recompiled).
You could also overwrite the default property names where the decoded data is written. For example:
var colObj = Collector(function (flow) { console.log(flow) });
colObj.listen(5000);

colObj.nfTypes[14].name = 'outputInterface';
colObj.nfTypes[10].name = 'inputInterface';

Logging / Debugging the module

You can use the debug module to turn on logging in order to see how the library behaves. The following example shows how:
var Collector = require('node-netflowv9');

Collector(function(flow) { console.log(flow) }).listen(3000);

// then run with the debug module's environment variable set, e.g. DEBUG=* node app.js

Multiple collectors

The module allows you to define multiple collectors at the same time. For example:
var Collector = require('node-netflowv9');

Collector(function(flow) { console.log(flow) }).listen(5555); // Collector 1 listening on port 5555

Collector(function(flow) { console.log(flow) }).listen(6666); // Collector 2 listening on port 6666

NetFlowV9 Options Template

NetFlowV9 supports Options Templates, where an Options Flow Set contains data for predefined fields within a certain scope. This module supports Options Templates and provides their output just like any other flow. The only difference is a property isOption set to true, to remind your code that this data came from an Options Template.
Currently the following scopes are supported in nfScope – system, interface, line_card, netflow_cache. You can overwrite their decoding, or add others, in the same way (and using absolutely the same format) as you overwrite nfTypes.

Anchor Cloud Hosting: Anchor launches new managed hosting services on AWS

This post was syndicated from: Anchor Cloud Hosting and was written by: Jessica Field. Original post: at Anchor Cloud Hosting

Collaboration reflects transformation for Anchor’s cloud hosting model

Sydney, Australia, March 3, 2015 – Anchor, the hosting heavyweight behind some of Australia’s biggest online retailers, has announced a range of new cloud hosting products and services on Amazon Web Services (AWS), not previously available in Australia.

The first new Anchor service to be made available on AWS is a new management tier — DevOps Automation — bridging the gap between operations and software development teams.

Websites have become complex, feature-rich web apps that require constant updating and enhancement. Some major retailers are already deploying new code into production at an incredible pace—in some cases up to fifty times a day—compared with just once or twice a month previously. This is due to new development methodologies such as “Continuous Delivery”, resulting in real and significant competitive advantage to those practicing it.

Anchor’s new management tier allows clients who are looking to adopt Agile development methodologies to outsource the DevOps management of their AWS infrastructure to Anchor, with the goal of building and releasing software faster, with less stress.

Anchor specifically addresses the automation of responsibilities traditionally handled by sysadmins and operations teams, a defining feature of DevOps.

Anchor’s approach takes full advantage of the comprehensive API-driven cloud infrastructure services provided by AWS, combined with new, collaborative approaches to automation and managed hosting. Businesses can benefit from improved website performance, scalability and efficiency, while freeing up internal resources for more productive activities.

“A lot of hosting providers try to shoehorn traditional hosting models onto the cloud, only extracting some of the benefits of one while retaining many of the limitations of the other,” says Bart Thomas, CEO of Anchor. “However, cloud technologies, and AWS in particular, provide a huge opportunity for simplifying and automating hosting operations; including code deployment, auto-scaling, environment cloning and more.

By combining its operations and automation expertise with the power, scale and flexibility of Amazon Web Services, Anchor is able to automate the development and deployment workflows of its customers.

“The industry continues to evolve at a frantic pace, which is why Anchor has spent the last few months re-evaluating the current hosting landscape to bring about its own evolution,” says Thomas. “The relationship with AWS is the next step in our transformation, allowing us to adopt fresh methodologies, and develop new technologies. While we continue to deliver bespoke hosting services to customers in Australia and around the world, we recognise that hosting technology is only ever as powerful as the workflow it enables.”

For more information, please visit

About Anchor

Anchor is one of Australia’s leading, independent web hosting providers. Founded in 2000, Anchor provides fully managed web and application hosting services and specialises in creating innovative, open source and scalable cloud hosting solutions. Anchor works with some of Australia’s biggest brands, including BT Financial, Booktopia, Michelle Bridges, Kimberly Clark, Birdsnest and Airtasker. It has also supported the strong growth of heavyweight Silicon Valley-based technology companies such as TestFlight and Github.

Read More:

The post Anchor launches new managed hosting services on AWS appeared first on Anchor Cloud Hosting.

SANS Internet Storm Center, InfoCON: green: How Do You Control the Internet of Things Inside Your Network?, (Mon, Mar 2nd)

This post was syndicated from: SANS Internet Storm Center, InfoCON: green and was written by: SANS Internet Storm Center, InfoCON: green. Original post: at SANS Internet Storm Center, InfoCON: green

Klaus Vesthammer recently tweeted that “The Internet of Things is just like the regular Internet, just without software patches”. We have a flood of announcements about vulnerable devices, and little in terms of patches. At the same time, expect more and more of these devices to be connected to your network, whether you want it or not. Bring Your Own Device should be addressed more inclusively than just covering smartphones and tablets.

If you do have a working inventory system that recognizes and blocks unauthorized devices in real time, then stop reading and count yourself lucky. But for most of us, network maps are filed under fiction, and network access control was that great solution we tried which failed when it hit real network life. So what else is there to do?

One of the critical aspects is to figure out not just which devices are on your network, but also whether they talk to systems outside of your network. Active scanning will only get you so far. Many devices, to save power, will not connect to the network unless they have something to say. Some also use Bluetooth to connect to smartphones and use them as a gateway; such a device will not show up as an entity on your network at all.

Here are a couple of indicators to look for:

– NTP queries: some devices have hard-coded NTP servers that do not match your standard network configuration
– DNS queries: DNS knows everything
– HTTP User-Agent and Server headers

I am sure someone will provide pointers for doing this in Bro. For everybody else, some simple log parsing scripts can help. Any other methods you use to find new and dangerous devices on your network?
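As a starting point, here is a minimal sketch of such a log-parsing script, looking for the first indicator above (NTP lookups that don’t match your configuration). The dnsmasq-style log format and the “approved” server name are assumptions for the example, not something from the diary:

```javascript
// Flag clients that look up NTP hosts other than the one we hand out
// via DHCP. Log format (dnsmasq-style) and APPROVED_NTP are assumptions.
var APPROVED_NTP = 'ntp.example.internal';

function findRogueNtpClients(logText) {
  var rogue = {};
  logText.split('\n').forEach(function (line) {
    // e.g. "Mar  2 10:00:01 dnsmasq[123]: query[A] 2.pool.ntp.org from 10.0.0.42"
    var m = line.match(/query\[A+\] (\S+) from (\S+)/);
    if (m && /ntp/i.test(m[1]) && m[1] !== APPROVED_NTP) rogue[m[2]] = m[1];
  });
  return rogue;
}

var sample = [
  'Mar  2 10:00:01 dnsmasq[123]: query[A] ntp.example.internal from 10.0.0.7',
  'Mar  2 10:00:05 dnsmasq[123]: query[A] 2.pool.ntp.org from 10.0.0.42'
].join('\n');
console.log(findRogueNtpClients(sample)); // { '10.0.0.42': '2.pool.ntp.org' }
```

The same pattern extends to the other indicators: swap the regex for one matching HTTP User-Agent or Server headers in your proxy logs.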

Johannes B. Ullrich, Ph.D.

(c) SANS Internet Storm Center. Creative Commons Attribution-Noncommercial 3.0 United States License.

TorrentFreak: Pirate Bay Uploads Stop Working

This post was syndicated from: TorrentFreak and was written by: Ernesto. Original post: at TorrentFreak

More than a month has passed since The Pirate Bay returned online, but the notorious torrent site continues to face problems.

Aside from a persistent hosting whac-a-mole, the site is also dealing with failing features.

For example, a few days ago several users were surprised to find themselves redirected to other users’ accounts after logging in.

Several users panicked fearing that their accounts had been hijacked or breached, but luckily the users were only redirected. They didn’t actually gain access to the accounts of others.

Today another issue popped up, one that’s blocking new content from being added to the site. Starting roughly 12 hours ago, The Pirate Bay’s upload functionality broke, displaying a “500 Internal Server Error” instead.

The upload problem appears to be global as no new files have been added to the site since. The most recent upload listed on The Pirate Bay is from 5:15 CET.


It’s not clear what’s causing the upload issue. It appears to be a misconfiguration or related technical error, possibly the result of a recent move to a new hosting company.

TorrentFreak reached out to The Pirate Bay’s admin and we will update this article if we hear back. For the time being Pirate Bay users will have to do without fresh uploads.

Source: TorrentFreak, for the latest info on copyright, file-sharing, torrent sites and anonymous VPN services.

Raspberry Pi: Big Birthday Bash – the aftermath

This post was syndicated from: Raspberry Pi and was written by: Liz Upton. Original post: at Raspberry Pi

We are all very tender, aching and sleepy. It was a fantastic weekend.

1300 of you came to see us at the University of Cambridge Computer Laboratory over the weekend, where you listened to 24 lecture theatre talks, took part in 14 workshops, shared hundreds of incredible projects you’d made with your Pis, and ate 110 pizzas.


The workshops were amazing: thanks so much to everybody who helped run them. Here’s Imogen, age 10, who is a Scratch pro (we loved your maze game, Imogen!): this is the first time she’s ever done any robotics, and we thought her robot turned out just great.

Alan McCullagh came all the way from France, where he runs the Rhône Valley Raspberry Jams, to join the other volunteers teaching kids in the Beginners’ Workshop.


(Private note for Alan: ROWER. I said ROWER.)

The projects on display were brilliant. Phil Atkin brought along PIANATRON, his Raspberry Pi synthesiser. Pete from Mythic Beasts (you can only see his hands), who is such a good pianist I’m always too embarrassed to play in front of him, was joined by Jonathan “Penguins” Pallant on the “drums”. (Jonathan gave me an update on the penguins project: the Pis all survived the Antarctic winter; however, the solar panels did not, so some more work’s being done on how to manage power.)

We loved watching kids see the music they were making.


Some kids learned a bit of history.


Others got to work on custom devices.


Brian Corteil’s Easter Bunny (which he lent us last year for YRS) made an appearance, and laid several kilos of chocolate eggs.


We found more kids in quiet corners, hacking away together.


Workshops aren’t just for young learners: here’s Dave Hughes, the author of the PiCamera library, giving a PiCamera workshop to some grown-up users.


There were 24 talks: here’s our very own Carrie Anne explaining what we do in education.


A certain Amy Mather made a Pi photobooth, the results of which, in this particular instance, I found horrifying.


Vendors set up stands to sell Pis and add-ons on both days. Here’s Pimoroni’s stand, as gorgeous as ever.


All the cool kids played retro games.


Poly Core (Sam Aaron and Ben Smith) provided live-coded evening entertainment. (My Mum, who came along for the day, is still adamant that there must have been a tape recorder hidden in a box somewhere.) They were amazing – find more snippets on their Twitter feed.

Dan Aldred brought a newly refined version of PiGlove. The capitalisation of its name is of utmost importance.


Ben Croston from the Fuzzy Duck Brewery (and author of RPi.GPIO) uses a Raspberry Pi controller in the brewing process, and made us a batch of very toothsome, special edition beer called Irration Ale (geddit?) for the Saturday evening event.


There was cake.


It was a bit like getting married again.


There was more cake.


After the beer (and raspberry lemonade for the kids) and cake, several hundred people played Pass the Parcel.

The foyer centrepiece was a talking throne which we borrowed from an exhibition at Kensington Palace (thank you to Henry Cooke and Tim from Elkworks for making it, and for your heroic work getting it to Cambridge!). We understand a door had to be removed from its frame at Kensington Palace to get it here.


A selection of members of Team Pi were photographed on it. Please note the apposite labelling – the throne uses a Pi with RFID to read what’s on the slates out loud. (Ross has cheese on his mind because we interrupted his burger for this shot.)


And we appear to have lost Eben. He was last seen heading towards Bedford in an outsized, Pi-powered Big Trak.

Enormous thanks to all the exhibitors and volunteers – and most especially to Mike Horne, Tim Richardson and Lisa Mather, who made this weekend what it was. We can’t thank the three of you enough.

There was so much more – we were so busy we didn’t get pictures of everything, and I didn’t manage to get to talk to anything like as many of you as I’d like to have done. (Does anybody have a picture of the gerbils?) I’ll add links to other people’s accounts of the weekend’s events as they come in.

Thank you to the University of Cambridge Computer Laboratory for letting us take over the building for the weekend.

Thank you to our incredibly thoughtful and generous sponsors for the pass-the-parcel gifts, the contents of the goodie bags, and other giveaways:

  • 4tronix
  • @holdenweb
  • @ipmb
  • @whaleygeek
  • Adafruit
  • AirPi (Tom Hartley)
  • Bare Conductive
  • Brian Corteil
  • CamJam
  • CPC
  • CSR
  • Cyntech
  • Dawn Robotics
  • Dexter Industries
  • Django
  • Eduboard
  • Energenie
  • Farnell
  • GitHub
  • IQaudIO
  • Low Voltage Labs
  • Manchester Girl Geeks
  • ModMyPi
  • MyPiFi
  • NewIT
  • No Starch Press
  • O’Reilly
  • PiBorg
  • Pimoroni
  • PiSupply
  • RasPi.TV
  • RealVNC
  • RS Components
  • RyanTeck
  • Sugru
  • The Pi Hut
  • UK Space Agency
  • Watterott
  • Wiley
  • Wireless Things

Tableware and Decorations were kindly sponsored by:

  • @WileyTech
  • @RealVNC

Wood and laser cutting was generously sponsored by:

  • @fablabmcr (FabLab Manchester)