
Errata Security: Twitter has to change

This post was syndicated from: Errata Security and was written by: Robert Graham. Original post: at Errata Security

Today, Twitter announced that instead of the normal timeline of newest messages on top, they will prioritize messages they think you’ll be interested in. This angers a lot of people, but my guess is it’s something Twitter has to do.

Let me give you an example. Edward @Snowden has 1.4 million followers on Twitter. Yesterday, he retweeted a link to one of my blogposts. You’d think this would’ve caused a flood of traffic to my blog, but it hasn’t. That post still has fewer than 5000 pageviews, and is only the third most popular post on my blog this week. More people come from Reddit and news.ycombinator.com than from Twitter.

I suspect the reason is that the older Twitter gets, the more people each user follows. I’m in that boat. If you tweeted something more than 10 minutes before I last checked Twitter, I will not have seen it. I read fewer than 5% of what’s in my timeline. That’s something Twitter can actually measure, so they already know it’s a problem.

Note that the Internet is littered with websites that were once dominant in their day, but which failed to change and adapt. Internet old-timers will remember Slashdot as a good example.

Thus, Twitter has to evolve. There’s a good chance their attempts will fail, and they’ll shoot themselves in the foot. On the other hand, not attempting is guaranteed failure.

Errata Security: Is packet-sniffing illegal? (OmniCISA update)

This post was syndicated from: Errata Security and was written by: Robert Graham. Original post: at Errata Security

In the news recently, Janet Napolitano (formerly head of DHS, now head of California’s university system) had packet-sniffing software installed at the UC Berkeley campus to monitor all its traffic. This brings up the age-old question: is such packet-sniffing legal, or a violation of wiretap laws?

Setting aside the legality question for the moment, I should first point out that it’s perfectly normal. Almost all organizations use “packet-sniffers” to help manage their network. Almost all organizations have “intrusion detection systems” (IDS) that monitor network traffic looking for hacker attacks. Learning how to use packet-sniffers like “Wireshark” is part of every network engineer’s training.
Indeed, while the news articles describe this as some special and nefarious plot by Napolitano, the reality is that it’s probably just an upgrade of packet-sniffer systems that already exist.
Ironically, much packet-sniffing practice comes from UC Berkeley itself. It’s famous for having created “BPF”, the Berkeley Packet Filter, a packet-sniffing standard included in most computers. Whatever packet-sniffing system Berkeley purchased to eavesdrop on its networks almost certainly includes Berkeley’s own BPF software.
Now for the legal question. Even if everyone is doing it, that doesn’t necessarily mean it’s legal. But the wiretap law does appear to contain an exception for packet-sniffing. 18 U.S. Code § 2511(2)(a)(i) says:

It shall not be unlawful … to intercept … while engaged in any activity which is a necessary incident to the rendition of his service or to the protection of the rights or property of the provider of that service

In other words, you can wiretap your own network in order to keep it running and protect it against hackers. There is a lengthy academic paper that discusses this in more detail: http://spot.colorado.edu/~sicker/publications/issues.pdf
 
At least, that’s the state of things before OmniCISA (“Cybersecurity Act of 2015”). Section 104 (a) (1) says:
Notwithstanding any other provision of law, a private entity may, for cybersecurity purposes, monitor … an information system of such private entity;

In other words, regardless of other laws, you may monitor your computers (including the network) for the purpose of cybersecurity.
As I read OmniCISA, I see that the intent is just this: to clarify that what organizations are already doing is in fact legal. When I read the text of the bill, and translate legalese into technology, I see that what it’s really talking about is just the standard practice of monitoring log files and operating IDSs, IPSs, and firewalls. It also describes the standard practice of outsourcing security operations to a managed provider (in the terms we would use, not how the bill describes it). Much of what we’ve been doing has been ambiguous under the law, since the law is confusing as heck, so OmniCISA clarifies this.
Thus, the argument about whether packet-sniffing was legal before is now moot: according to OmniCISA, you can now packet-sniff your networks for cybersecurity, such as using IDSs.

Errata Security: Net ring-buffers are essential to an OS

This post was syndicated from: Errata Security and was written by: Robert Graham. Original post: at Errata Security

Even by OpenBSD standards, this rejection of ‘netmap’ is silly and clueless.

BSD is a Linux-like operating system that powers a lot of the Internet, from Netflix servers to your iPhone. One variant of BSD focuses on security, called “OpenBSD”. A lot of security-related projects get their start on OpenBSD. In theory, it’s for those who care a lot about security. In practice, virtually nobody uses it, because it makes too many sacrifices in the name of security.

“Netmap” is a user-space network ring-buffer. What that means is the hardware delivers network packets directly to an application, bypassing the operating system’s network stack. Netmap currently works on FreeBSD and Linux. There are projects similar to this known as “PF_RING” and “Intel DPDK”.

The problem with things like netmap is that it means the network hardware no longer is a shareable resource, but instead must be reserved for a single application. This violates many principles of a “general purpose operating system”.

In addition, it ultimately means that the application is going to have to implement its own TCP/IP stack. That means it’s going to repeat all the same mistakes of the past, such as the “ping of death”, when a packet reassembles to more than 65,536 bytes. This introduces a security problem.

But these criticisms are nonsense.

Take “microkernels” like Hurd or IBM mainframes. These things already put the networking stack in user space, for security reasons. I’ve crashed the network stack on mainframes — the crash only affects the networking process and not the kernel or other apps. No matter how badly a user-mode TCP/IP stack is written, any vulnerabilities affect just that process, and not the integrity of the system. User-mode isolation is a security feature. That today’s operating systems don’t offer user-mode stacks is a flaw.

Today’s computers are no longer multi-purpose, multi-user machines. While such machines do exist, most computers today are dedicated to a single purpose, such as supercomputer computations, or a domain controller, or memcached, or a firewall. Since single-purpose, single-application computers are the norm, “general purpose” operating systems need to be written to include that concept. There needs to be a system whereby apps can request exclusive access to hardware resources, such as GPUs, FPGAs, hardware crypto accelerators, and of course, network adapters.

These user-space ring-buffer network drivers operate with essentially zero overhead. You have no comprehension of how fast this can be. It means networking can operate 10 times to even 100 times faster than trying to move packets through the kernel. I’ve been writing such apps for over 20 years, and have constantly struggled against disbelief, as people simply cannot believe that machines can run this fast.

In today’s terms, it means it’s relatively trivial to use a desktop system (quad-core, 3 GHz) to create a 10-gbps firewall that passes 30 million packets/second (bidirectional), at wire speed. I’m assuming 10 million concurrent TCP connections here, with 100,000 rules. This is between 10 and 100 times faster than you can get through the OpenBSD kernel, even if you configured it to simply bridge two adapters with no inspection.

There are many reasons for the speed. One is hardware. In modern desktops, the 10gbps network hardware DMAs the packet directly into the CPU’s cache — actually bypassing memory. A packet can arrive, be inspected, then forwarded out the other adapter before the cache writes back changes to DRAM.

Another reason is the nature of ring-buffers themselves. The kernel’s drivers also use ring-buffers in the Ethernet hardware drivers. The problem is that the kernel must remove the packet from the driver’s ring-buffers, either by making a copy of it, or by allocating a replacement buffer. This is actually a huge amount of overhead. You think it’s insignificant, because you compare this overhead with the rest of kernel packet processing. But I’m comparing it against the zero overhead of netmap. In netmap, the packet stays within the buffer until the app is done with it.

Arriving TCP packets perform a long “pointer walk”, following a chain of pointers to get from the network structures to the file-descriptor structures. At scale (millions of concurrent TCP connections), these things no longer fit within cache. That means each time you follow a pointer you cause a cache miss, and must halt and wait 100 nanoseconds for memory.

In a specialized user-mode stack, this doesn’t happen. Instead, you put everything related to the TCP control block into a 1k chunk of memory, then pre-allocate an array of 32 million of them (using 32 gigs of RAM). Now there is only a single cache miss per packet. But actually, there are zero, because with a ring buffer, you can pre-parse future packets in the ring and issue “prefetch” instructions, such that the TCP block is already in the cache by the time you need it.

These performance issues are inherent to the purpose of the kernel. As soon as you think in terms of multiple users of the TCP/IP stack, you inherently accept processing overhead and cache misses. No amount of optimizations will ever solve this problem.

Now let’s talk multicore synchronization. On OpenBSD, it rather sucks. Adding more CPUs to the system often makes the system go slower. In user-mode stacks, synchronization often has essentially zero overhead (again that number zero). Modern network hardware will hash the address/ports of incoming packets, giving each CPU/thread their own stream. Thus, our hypothetical firewall would process packets as essentially 8 separate firewalls that only rarely need to exchange information (when doing deep inspection on things like FTP to open up dynamic ports).

Now let’s talk applications. The OpenBSD post presumes that apps needing this level of speed are rare. The opposite is true. They are painfully common, increasingly becoming the norm.

Supercomputers need this for what they call “remote DMA”. Supercomputers today are simply thousands of desktop machines, with gaming graphics cards, hooked up to 10gbps Ethernet, running in parallel. Often one process on one machine needs to send bulk data to another process on another machine. Normal kernel TCP/IP networking is too slow, though some now have specialized “RDMA” drivers trying to compensate. Pure user-space networking is just better.

My “masscan” port scanner transmits at a rate of 30 million packets per second. It’s so fast it’ll often melt firewalls, routers, and switches that fail to keep up.

Web services, either the servers themselves, or frontends like varnish and memcached, are often limited by kernel resources, such as the maximum number of connections. They would be vastly improved with user-mode stacks on top of netmap.

Back in the day, I created the first “intrusion prevention system” or “IPS”. It ran on a dual core 3 GHz machine at maximum gigabit speeds, including 2 million packets-per-second. We wrote our own user-space ring-buffer driver, since things like netmap and PF_RING didn’t exist back then.

Intel has invested a lot in its DPDK system, which is better than netmap for creating arbitrary network-centric devices out of standard desktop/server systems. That we have competing open-source ring-buffer drivers (netmap, PF_RING, DPDK), plus numerous commercial versions, means that there is a lot of interest in such things.

Conclusion

Modern network machines, whether web servers or firewalls, have two parts: the control-plane where you SSH into the box and manage it, and the data-plane, which delivers high-throughput data through the box. These things have different needs. Unix was originally designed to be a control-plane system for network switches. Trying to make it into a data-plane system is 30 years out of date. The idea persists because of the clueless thinking as expressed by the OpenBSD engineers above.

User-mode stacks based on ring-buffer drivers are the future for all high-performance network services. Eventually OpenBSD will add netmap or something similar, but as usual, they’ll be years behind everyone else.

Errata Security: How not to be a better programmer

This post was syndicated from: Errata Security and was written by: Robert Graham. Original post: at Errata Security

Over at r/programming is this post on “How to be a better programmer”. It’s mostly garbage.

Don’t repeat yourself (reuse code)

Trying to reuse code is near the top of reasons why big projects fail. The problem is that while the needs of multiple users of a module may sound similar, they are often different in profound ways that cannot be reconciled. Trying to make the same bit of code serve divergent needs is often more complex and buggy than multiple modules written from the ground up for each specific need.

Yes, we adhere to code cleanliness principles (modularity, cohesion) that make reuse easier. Yes, we should reuse code when the needs match closely enough. But that doesn’t mean we should bend over backwards trying to shove a square peg through a round hole, on the principle that all pegs and holes are the same.

Give variables/methods clear names

Programmers hate to read other code because the variable names are unclear. Hence the advice to use “clear names” that aren’t confusing.

But of course, programmers already think they are being clear. No programmer thinks to themselves “I’m going to be deliberately obtuse here so that other programmers won’t understand”. Therefore, telling them to use clear names won’t work, because they think they already are doing that.

The problem is that programmers are introverts and fail to put themselves in another’s shoes, trying to see their code as others might. Hence, they fail to communicate well with those other programmers. There’s no easy way to overcome this. Those of us who spend a lot of time reading code just have to get used to this problem.

One piece of advice is to make names longer. Cryptographers write horrible code, because they insist on using one letter variable names, as they do in mathematics. I’ve never had a problem with names being too long, but names being too short is a frequent problem.

One of the more clueful bits of advice I’ve heard is that variable names should imply their purpose, not the other way around. Too often, programmers choose a name that makes sense once you know what the variable is, but tells you nothing about the variable if you don’t already know what it is.

Don’t use magic numbers or string literals

Wrong. There are lots of reasons to use magic numbers and literals.

If I’m writing code to parse external file-formats or network-protocols, then the code should match the specification. It’s not going to change. When checking the IP (Internet Protocol) header to see if it’s version 4, then using the number ‘4’ is perfectly acceptable. Trying to create a constant, such as enum {IpVersionFour = 4}; is moronic and makes the code harder to read.

While it’s true that newbie programmers often do the wrong kind of magic numbers, that doesn’t apply to good programmers. I see magic numbers all the time in Internet code, and they almost always make the code easier to understand. Likewise, I frequently see programmers bend over backwards to avoid magic numbers that makes the code harder to read.

In short, if you are an experienced programmer, ignore this dictum.

Don’t be afraid to ask for help

Oh, god, the horror. Engineering organizations are divided into the “helper” and “helpee” sections. The “helpees” are chronically asking for help, to the point where they are basically asking better programmers to finish and debug their code for them.

Asking for help is a good thing if, when reading a book on a technical subject (networking, cryptography, OpenCL, etc.) , you want the local expert in the subject to help overcome some confusion. Or, it’s good to ask for help on how to use that confusing feature of the debugger.

But stop asking for others to do your work for you. It’s your responsibility to debug your own code. It’s your responsibility to be an expert in the programming language you are using. It’s your responsibility for writing the code, unit tests, and documentation.

If you see some buggy or messy code, fix it

No, no, no, no.

This advice only makes sense in modules that already have robust unit/regression tests that will quickly catch any bugs introduced by such cleanups. But if the code is messy, then chances are the tests are messy too.

Avoid touching code that doesn’t have robust tests. Instead, go in first and write those unit tests. The tests act as a safety net, catching any bugs your later cleanups introduce before they escape.

Only once the unit/regression tests are robust can you start doing arbitrary cleanups.

Share knowledge and help others

This is bad for several reasons.

When programmers don’t complete their code on schedule (i.e. the norm), one of their excuses is that they were helping others.

Engineering organizations are dominated by political battles as engineers fight for things. This often masquerades as “sharing knowledge”, as you help others understand the power of LISP over C++, for example.

As pointed out above, the lazy/bad programmers will exploit your good nature to shift their responsibilities onto you. That’s toxic.

The upshot is this. You have a job: to complete your code on schedule. Only once you have done that do you have time to become a subject-matter expert in something (networking, crypto, graphics), and to share your expertise on those subjects with others.

Conclusion

Beware anything that boils programming down to simple rules like “don’t use magic numbers”. Code is more subtle than that.

The way to become a better programmer is this: (1) write lots of code, (2) work on big projects (more than 10kloc), (3) spend more time reading open-source. Over time, you’ll figure out for yourself what to do, and what not to do.

Errata Security: Some notes on C in 2016

This post was syndicated from: Errata Security and was written by: Robert Graham. Original post: at Errata Security

On r/programming was this post called “How to C (as of 2016)“. It has some useful advice, but also some bad advice. I thought I’d write up comments on the topic. As somebody mentioned while I was writing this, only responsible programmers should be writing in C. Irresponsible programmers should write other languages that have more training wheels. These are the sorts of things responsible programmers do.


Use a debugger

The #1 thing you aren’t doing, that you should be doing, is stepping through each line of code in a source level debugger as soon as you write it. If you only pull out the debugger to solve particularly difficult problems, then you are doing it wrong.

That means using an IDE like Visual Studio, XCode, or Eclipse. If you are only using an editor (without debugging capabilities), you are doing it wrong. I mention this because so many people are coding in editors that don’t have debuggers. I don’t even.

It’s a concern for all languages, but especially with C. When memory gets corrupted, you need to be able to dump structures and memory in order to see that. Why is x some weird value like 37653? Using printf()-style debugging won’t tell you, but looking at a hexdump of the stack will clearly show you how the entire chunk of memory was overwritten.

And debug your own code

Because C has no memory protection, a bug in one place can show up elsewhere, in an unrelated part of code. This makes debugging some problems really hard. In such cases, many programmers throw up their hands, say “I can’t fix this”, and lean on other programmers to debug their problem for them.

Don’t be that person. Once you’ve gone through the pain of such bugs, you quickly learn to write better code. This includes better self-checking code that makes such bugs show up quicker, or better unit tests that cover boundary cases.

Code offensively

I once worked on a project where the leaders had decided to put “catch(…)” (in C++) everywhere, so that the program wouldn’t crash. Exceptions, even memory corruption, would be silently masked and the program would continue. They thought they were making the code more robust. They thought it was defensive programming, a good principle.

No, that isn’t defensive programming, just stupid programming. It masks bugs, making them harder to find in the long run.

You want to do the reverse. You want offensive code such that bugs cannot survive long undetected.

One way is assert(), double-checking assumptions that you know must always be true. This catches bugs before they have a chance to mysteriously corrupt memory. Indeed, when debugging mystery C bugs, I’ll often begin by adding assert() everywhere I suspect there might be a problem. (Although, don’t go overboard on asserts.)

The best offensive coding is unit tests. Any time something is in doubt, write a unit test that stresses it. The C language has a reputation for going off in the weeds when things scale past what programmers anticipated, so write a test for such cases.

Code for quality


On a related note, things like unit tests, regression tests, and even fuzz testing are increasingly becoming the norm. If you have an open-source project, you should expect to have “make test” that adequately tests it. This should unit test the code with high code coverage. It’s become the standard for major open-source projects. Seriously, “unit test with high code coverage” should be the starting point for any new project. You’ll see that in all my big open-source projects, where I start writing unit tests early and often (albeit, because I’m lazy, I have inadequate code coverage).

AFL fuzzer is relatively new, but it’s proving itself useful at exposing bugs in all sorts of open-source projects. C is the world’s most dangerous language for parsing external input. Crashing because of a badly formatted file or bad network packet was common in the past. But in 2016, such nonsense is no longer tolerated. If you aren’t nearly certain no input will crash your code, you are doing it wrong.

And, if you think “quality” is somebody else’s problem, then you are doing it wrong.


Stop with the globals

When I work with open-source C/C++ projects, I tear my hair out from all the globals. The reason your project is tough to debug and impossible to make multithreaded is that you’ve littered it with global variables. The reason refactoring your code is a pain is because you overuse globals.

There’s occasionally a place for globals, such as the debug/status logging system, but otherwise they’re a bad, bad thing.


A bit OOP, a bit functional, a bit Java

As they say, “you can program in X in any language”, referring to the fact that programmers often fail to use the language as it was intended, but instead try to coerce it into some other language they are familiar with. But that’s just saying that dumb programmers can be dumb in any language. Sometimes counter-language paradigms are actually good.

The thing you want from object-oriented programming is how a structure conceptually has both data and methods that act on the data. For struct Foobar, you create a series of functions that look like foo_xxxx(). You have a constructor foo_create(), a destructor foo_destroy(), and a bunch of functions that act on the structure.

Most importantly, when reasonable, define struct Foobar in the C file, not the header file. Make the functions public, but keep the precise format of the structure hidden. Everywhere else refer to the structure using forward references. This is especially important for libraries, where exporting their headers destroys ABI compatibility (because structure size changes). If you must export the structure, then put a version or size as its first parameter.

No, the goal here isn’t to emulate the full OOP paradigm of inheritance and polymorphism. Instead, it’s just a good way of modularizing code that’s similar to OOP.

Similarly, there are some good ideas to pull from functional programming, namely that functions don’t have “side effects”. They consume their inputs, and return an output, changing nothing else. The majority of your functions should look like this. A function that looks like “void foobar(void);” is the opposite of this principle, being a side-effect-only function.

One area of side-effects to avoid is global variables. Another is system calls that affect the state of the system. Globals are similar to deep variables within structures, where you call something like “int foobar(struct Xyz *p);” that hunts deeply in p to find the parameters it acts on. It’s better to bring them up to the top, such as calling “foobar(p->length, p->socket->status, p->bbb)“. Yes, it makes the parameter lists long and annoying, but now the function “foobar()” depends on simple types, not a complex structure.

Part of this functional attitude to programming is being aggressively const correct, where pointers are passed in as const, so that the function can’t change them. It communicates clearly which parts are output (the return value and non-const pointers), and which are the inputs.

C is a low-level systems language, but except for dire circumstances, you should avoid those C-isms. Instead of C specific, you should write your code in terms of the larger C-like language ecosystem, where code could be pasted into JavaScript, Java, C#, and so on.

That means no pointer arithmetic. Yes, in the 1980s, this made code slightly faster, but since the 1990s, it provides no benefit, especially with modern optimizing compilers. It makes code hard to read. Whenever there is a cybersecurity vulnerability in open-source code (Heartbleed, Shellshock, etc.), it’s almost always in pointer-arithmetic code. Instead, define an integer index variable, and iterate through arrays that way — as if you were writing this in Java.

This ideal also means stop casting structs/integers for network-protocol/file-format parsing. Yes, that networking book you love so much taught you to do this, using things like “ntohs(*(short*)p)”, but it was wrong when the book was written, and is even more wrong now. Parse the integers like you would have to in Java, such as “p[0] * 256 + p[1]”. You think casting a packed structure on top of data to parse it is the most “elegant” way of doing it, but it’s not.

Ban unsafe functions


Stop using deprecated functions like strcpy() and sprintf(). When I find a security vulnerability, it’ll be in these functions. Moreover, they make your code horribly expensive to audit, because I’m going to have to look at every one of them and make sure you don’t have a buffer-overflow. You may know it’s safe, but it’ll take me a horrendously long time to figure out for myself. Instead, use strlcpy()/strcpy_s(), and snprintf()/sprintf_s().

More generally, you need to be really comfortable knowing what both a buffer-overflow and an integer-overflow are. Go read OpenBSD’s reallocarray(), understand why it solves the integer-overflow problem, then use it instead of malloc() in all your code. If you have to, copy the reallocarray() source from OpenBSD and stick it in your code.

You know how your code mysteriously crashes on some input? Unsafe code is probably the reason. Also, it’s why hackers break into your code. Do the right things, and these problems disappear.

The “How to C” post above tells you to use calloc() everywhere. That’s wrong, it still leaves open the integer overflow bug on many platforms. Also, get used to variable-sized thingies, which means using realloc() a lot — hence reallocarray().

There’s much more to writing secure code, but if you do these, you’ll solve most problems. In general, always distrust input, even when it’s from a local file or USB port you control.

Stop with the weird code

What every organization should do is organize an after-work meeting, where anybody can volunteer to come and hash out an agreement for a common style-guide for the code written in the organization. Then fire every employee who shows up. It’s a stupid exercise.

The only correct “style” is to make code look unsurprisingly like the rest of the code on the Internet. This applies to private code, as well as any open-source you do. The only decision you have to make is to pick an existing, well-known style guide to follow, like Linux, BSD, WebKit, or Gnu.

The nice thing about other languages, especially Python, is that there isn’t the plethora of common styles like there is in C. That was one of the shocking things about the Heartbleed vulnerability: OpenSSL uses the “Whitesmiths” style of braces, which was at one time common but is now rare and weird. LibreSSL restyled it to the BSD format. That’s probably a good decision: if your C style is fringe/old, it may be worth restyling it to something common/average.

You know that really cool thing you’ve thought of, and think everyone will adopt once they see the beauty of it in your code? Yeah, remove that thing, it just pisses everyone off. Or, if you must use that technique (it happens sometimes), document it.

The future is multicore


CPUs aren’t going to get any faster. Instead, we’ll increasingly get more CPU cores on the chip. That doesn’t mean you need to worry about making your code multithreaded today, but it means you should probably consider what that might look like in the future.

No, mutexes and critical sections don’t make your code more multicore. Sure, they solve the safety problem, but at an enormous cost to performance. It means your code might get faster for 2 or 3 cores, but after that, adding cores will instead make your software go slower. Fixing this, getting multicore scalability, is second only to security in importance to the C programmer today.

One of these days I’ll write a huge document on multicore scalability, but in the meanwhile, just follow the advice above: get rid of global variables and the invisible sharing of deep data structures. When we have to go in later and refactor the code to make it scale, our jobs will be significantly easier.

Stop using true/false for success/failure

The “How to C” document tells you that success always means true. This is garbage. True means true, success means success. They don’t mean each other. You can’t avoid the massive code out there that returns zero on success and some other integer for failure.

Yeah, it sucks that there is no standard, but there is never going to be one. Instead, this wrong advice in the “How to C” doc is a good example of the “stop with the weird code” principle. The author thinks that if only we could get everyone to do it his way, if we just do it hard enough in our own code to enlighten others, then the problem will be solved. That’s bogus; programmers aren’t ever going to agree on the same way. Your code has to exist in a world where ambiguity exists, where both true and 0 are common indicators of success, despite being opposite values. The way to do that is to unambiguously define SUCCESS and FAILURE values.

When code does this:

   if (foobar(x,y)) {
      …;
   } else {
      …;
   }

There’s no way that I, reading your code, can easily know which case is “success” and which is failure. There are just too many standards. Instead, do something like this:

   if (foobar(x,y) == Success) {
      …;
   } else {
      …;
   }



The thing about integers

The “How to C” guide claims there’s no good reason to use naked “int” or “unsigned“, and that you should use better defined types like int32_t or uint32_t. This is nonsense. The input to many library functions is ‘int‘ or ‘long‘, and compilers are getting increasingly type-savvy, warning you about the difference even when both are the same size.

Frankly, getting integers ‘wrong’ isn’t a big source of problems. That even applies to 64-bit and 32-bit issues. Yes, using ‘int‘ to hold a pointer will break 64-bit code (use intptr_t or ptrdiff_t or size_t instead), but I’m astonished how little this happens in practice. Just mmap() the first 4-gigabytes as invalid pages on startup and run your unit/regression test suite, and you’ll quickly resolve any problems. Such bugs are so easy to find this way that I don’t really need to tell you how best to fix them.

But the biggest annoyance in code is that programmers want to redefine integer types. Stop doing that. I know having “u32” in your code makes it pretty for you, but it just makes it more annoying for me, who has to read your code. Please use something standard, such as “uint32_t” or “unsigned int“. Even worse is arbitrarily creating integer types like “filesize“. I know you want to decorate this integer with additional meaning, but the point of C programming is “low level”, and this just annoys the heck out of programmers.

Use static and dynamic analysis

The old hotness in C was “warning levels” and “lint”, but in modern C we have “static analysis”. Clang has dramatically increased the state-of-the-art of the sorts of things compilers can warn about, and gcc is busy catching up. Unknown to many, Microsoft has also had some Clang-quality static analysis in its compilers. XCode’s ability to visualize Clang’s analysis is incredible, though you get the same sort of thing with Clang’s web tools.

But that’s just basic static analysis. There are many security-focused tools that take static analysis to the next level, like Coverity, Veracode, and HP Fortify.

All these things produce copious “false positives”, but that’s a misnomer. Fixing code to accommodate the false positive cleans up the code and makes it dramatically more robust. In other words, such things are often “this code is confusing”, and the solution is to clean it up. Coding under the constraint of a static analyzer makes you a better programmer.

Dependency hell

In corporations, after a few years, projects become unbuildable, except on the current build system. There are just too many poorly documented dependencies. One company I worked for joked they should just give their source to their competitors, because they’d never be able to figure out how to get the thing to build.

And companies perpetuate this for very good sounding reasons. Another company proposed standardizing on compiler versions, to avoid the frequent integration problems from different teams using different compilers. But that’s just solving a relatively minor problem by introducing a major problem down the road. Solving integration problems keeps the code healthy.

Open-source has related problems. Dependencies are rarely fully documented, and indeed are often broken. Many is the time you end up having to install two incompatible versions of the same dependency in order to get the resulting code finally compiled.

The fewer the dependencies, the more popular the code. There are ways to achieve this.

  • Remove the feature. As a whole, most dependencies exist for some feature that only 1% of the users want, but which burden the other 99% of the user base.
  • Include just the source file you need. Instead of depending on the entire OpenSSL library (and its dependencies), just include the sha2.c file if that’s the only functionality from OpenSSL that you need.
  • Include their entire source in your tree. For example, Lua is a great scripting language in 25kloc that really needs no updates. Instead of forcing users to hunt down the right lua-dev dependency, just include the Lua source with your source.
  • Load libraries at runtime, through the use of dlopen(), and include their interface .h files as part of your project source. That means they aren’t burdened by the dependency unless they use that feature. Or, if it’s a necessary feature, you can spit out error messages with more help on fixing the dependency.


Understand undefined C

You are likely wrong about how C works. Consider the expression (x + 1 < x). A valid result of this is ‘5’. It’s because C doesn’t define what happens when ‘x‘ is the maximum value for an integer and you add 1 to it, causing it to overflow. One thing many compilers do is treat this as 2s complement overflow, similar to how other languages (such as Java) define it. But some compilers have been known to treat this as an impossibility, and remove the code depending on it completely.

Thus, instead of relying upon how the current version of your C compiler happens to work, you really need to code according to the larger spec of how any conforming C compiler may behave.

Conclusion

Don’t program in C unless you are responsible. Responsible means understanding buffer-overflows, integer overflows, thread synchronization, undefined behaviors, and so on. Responsible means coding for quality that aggressively tries to expose bugs early. This is the nature of C in 2016.

Errata Security: The Schelling Game

This post was syndicated from: Errata Security and was written by: Robert Graham. Original post: at Errata Security

At the Shmoocon conference, a vendor (“Breach Intelligence”) is putting a card with an “IoC” in every schwag bag. The IoCs are handed out in matching pairs: if you find the person holding your matching IoC and come to their booth, they’ll give you a free quadcopter.

This is like the “Schelling Point“, a question in game theory. You are supposed to meet somebody in New York City, but neither of you has been told where to meet. So where do you go? The trick is to estimate the most logical choice that the other person, using the same information as you, would make. Most people agree that the answer is the “information booth at Grand Central Station”.
So how do you find your matching IoC to win the prize? One guy is walking around asking strangers to match cards. That’s useful, because a lot of people who don’t want to play the game simply give him their cards, so he’s got an ever expanding list of possible matches.
My solution is to tweet the IoC, and of course, blog about it:
If my partner searches Twitter, they will find it. That’s because Twitter’s search engine is instantaneous. Google, on the other hand, will take a few days to find and index this page, by which time either Shmoocon will be over, or the vendor will have run out of prizes.
At first I tweeted the number bare, because my partner has only to search for it to find me, while a bare number hides the purpose, so that others don’t get onto the trick, find their matches, and exhaust the prizes. But that doesn’t work, because the hiding applies to my partner as well. So instead, I want to publicize the technique widely.
So, should my partner choose to find me, then searching on Twitter or (in time) Google should be possible. Sadly, though, I hear they’ve already run out of quadcopters.
BTW, an IoC, or “indicator of compromise”, is a checksum or pattern retrieved in analyzing a breach, which can then maybe be used to detect similar breaches elsewhere. It’s the thing that OmniCISA was designed to share. These are IoCs of real attacks. If you google the number on your card, not only may you find your partner, you may also find a website describing the original virus or attack that the IoC applies to.

AWS DevOps Blog: Quickly Explore the Chef Environment in AWS OpsWorks

This post was syndicated from: AWS DevOps Blog and was written by: Daniel Huesch. Original post: at AWS DevOps Blog

AWS OpsWorks recently launched support for Chef 12 Linux. This release changes the way that information about the stacks, layers, and instances provided by OpsWorks is made available during a Chef run. In this post, I show how to interactively explore this information using the OpsWorks agent command line interface (CLI) and Pry, a shell for Ruby. Our documentation shows you what’s available; this post shows you how to explore that data interactively.

OpsWorks manages EC2 or on-premises instances by triggering Chef runs. Before running your Chef recipes, OpsWorks prepares an environment. This environment includes a number of data bags that provide information about your stack, instances, and other resources in your stack. You can use data bags to write cookbooks that adapt to changes in your infrastructure.

When an instance has finished its setup or when it leaves the online state, OpsWorks triggers a Configure event. You can register your own custom recipes to run during Configure events, and use a custom recipe as a light-weight service discovery mechanism. For example, you could use custom recipes to grant database access to an app server after it’s started, or revoke access after it’s stopped, or discover the IP address of the database server within your stack.

Typically, you access data about stacks, layers, and instances through Chef search. For earlier supported versions of Chef on Linux, this data was made available as attributes. In Chef 12 Linux, the data is available in data bags.

To access this data, I’m going to use only tools that are already present on OpsWorks instances: the OpsWorks agent CLI and Pry. Here’s the elevator pitch for Pry, taken from the Pry website:

Pry is a powerful alternative to the standard IRB shell for Ruby. It features syntax highlighting, a flexible plugin architecture, runtime invocation and source and documentation browsing.

Because Pry is already present on OpsWorks instances, there’s no need to install it.

I execute all terminal commands shown in the rest of this post as the root user.

How Do You Use Pry with OpsWorks?

First, let’s take a look at the OpsWorks agent CLI. The agent CLI lets you explore and repeat Chef runs on an instance.

To see a list of completed runs, use opsworks-agent-cli list:

[root@nodejs-server1 ~]# opsworks-agent-cli list
2015-12-16T13:37:2        setup
2015-12-16T13:40:56       configure

For an instance that has just finished booting, you should see a successful Setup event, followed by a successful Configure event.

Let’s repeat the Chef run for the Configure event. To repeat the last run, use opsworks-agent-cli run:

[root@nodejs-server1 ~]# opsworks-agent-cli run
[2015-12-16 13:44:55]  INFO [opsworks-agent(26261)]: About to re-run 'configure' from 2015-12-16T13:40:56
...
[2015-12-16 13:45:01]  INFO [opsworks-agent(26261)]: Finished Chef run with exitcode 0

Because the agent CLI can only repeat Chef runs, it doesn’t allow me to execute arbitrary recipes. I can do that in the OpsWorks console with the Run command. For demo purposes, I’ll use a custom cookbook named explore-opsworks-data to trigger a Chef run so I can then execute a recipe during the run.

The Chef run failed because I tried to execute a recipe that doesn’t exist yet. Let’s create the recipe and run it again, this time in a way that opens up a Pry session.

[root@nodejs-server1 ~]# mkdir -p /var/chef/cookbooks/explore-opsworks-data/recipes
[root@nodejs-server1 ~]# echo 'require "pry"; binding.pry' > /var/chef/cookbooks/explore-opsworks-data/recipes/default.rb
[root@nodejs-server1 ~]# opsworks-agent-cli run
...
[2015-12-16T13:55:32+00:00] INFO: Storing updated cookbooks/explore-opsworks-data/recipes/default.rb in the cache.
From: /var/chef/runs/35e8a98a-c81e-46a9-84e3-1bbd105f07dd/local-mode-cache/cache/cookbooks/explore-opsworks-data/recipes/default.rb @ line 1 Chef::Mixin::FromFile#from_file:
 => 1: require "pry"; binding.pry

That doesn’t look very good. In fact, the output appears truncated. That’s because I’m now using an interactive shell, Pry, right in the middle of the Chef run. But, I can now use Pry to run arbitrary Ruby code within the recipe I created. I’ll try searching on the data bags for the stack, layer, and instance.

The aws_opsworks_stack data bag contains details about the stack, like the region and the custom cookbook source, as shown in the following example:

search(:aws_opsworks_stack)
=> [{"data_bag_item('aws_opsworks_stack', '8bd5b1e5-6f45-4d3d-9eb1-5cdaecaf77b8')"=>
   {"arn"=>"arn:aws:opsworks:us-west-2:153700967203:stack/8bd5b1e5-6f45-4d3d-9eb1-5cdaecaf77b8/",
    "custom_cookbooks_source"=>{"type"=>"archive", "url"=>"https://s3.amazonaws.com/opsworks-demo-assets/opsworks-linux-demo-cookbooks-nodejs.tar.gz", "username"=>nil, "password"=>nil, "ssh_key"=>nil, "revision"=>nil},
    "name"=>"My Sample Stack (Linux)",
    …
"data_bag"=>"aws_opsworks_stack"}}]

The aws_opsworks_layer data bag contains details about layers, like the layer name and Amazon Elastic Block Store (Amazon EBS) volume configurations:

search(:aws_opsworks_layer)
=> [{"data_bag_item('aws_opsworks_layer', 'nodejs-server')"=>
   {"layer_id"=>"a8127c0d-749a-4192-aad7-8e512c8942b4", "name"=>"Node.js App Server", "packages"=>[], "shortname"=>"nodejs-server", "type"=>"custom", "volume_configurations"=>[], "id"=>"nodejs-server", "chef_type"=>"data_bag_item", "data_bag"=>"aws_opsworks_layer"}}]

The aws_opsworks_instance data bag contains details about instances, like the operating system and IP addresses:

search(:aws_opsworks_instance)
=> [{"data_bag_item('aws_opsworks_instance', 'nodejs-server1')"=>
   {"ami_id"=>"ami-d93622b8",
    "architecture"=>"x86_64",
    …
    "id"=>"nodejs-server1",
    "chef_type"=>"data_bag_item",
    "data_bag"=>"aws_opsworks_instance"}}]

Now I’ll access a data bag directly. As the following example shows, the data I get this way is identical to the data the search command returns:

data_bag("aws_opsworks_stack")
=> ["8bd5b1e5-6f45-4d3d-9eb1-5cdaecaf77b8"]
data_bag_item("aws_opsworks_stack", "8bd5b1e5-6f45-4d3d-9eb1-5cdaecaf77b8")
=> {"data_bag_item('aws_opsworks_stack', '8bd5b1e5-6f45-4d3d-9eb1-5cdaecaf77b8')"=>
  {"arn"=>"arn:aws:opsworks:us-west-2:153700967203:stack/8bd5b1e5-6f45-4d3d-9eb1-5cdaecaf77b8/",
   "custom_cookbooks_source"=>{"type"=>"archive", "url"=>"https://s3.amazonaws.com/opsworks-demo-assets/opsworks-linux-demo-cookbooks-nodejs.tar.gz", "username"=>nil, "password"=>nil, "ssh_key"=>nil, "revision"=>nil},
   "name"=>"My Sample Stack (Linux)",
   …
    "data_bag"=>"aws_opsworks_stack"}}

As a practical example of how I would use search in one of my recipes, I’ll look up the current instance’s root device type and layer ID:

myself = search(:aws_opsworks_instance, "self:true").first
...
Chef::Log.info "My root device type is #{myself['root_device_type']}"
[2015-12-16T18:19:55+00:00] INFO: My root device type is ebs
...
Chef::Log.info "I am a member of layer #{myself['layer_ids'].first}"
[2015-12-16T18:20:17+00:00] INFO: I am a member of layer a8127c0d-749a-4192-aad7-8e512c8942b4
...

And just to make it clear that this shell isn’t just about Chef, but about Ruby code in general, here’s a Ruby snippet that would list all files and directories below /tmp, without using Chef:

Dir.glob("/tmp/*")
=> ["/tmp/npm-1967-e4f411bc", "/tmp/hsperfdata_root"]

After I’m done exploring, I can leave the shell by typing exit or by pressing Ctrl+D.

Summary

By using Pry in the middle of a Chef run, you can inspect the data that’s available during the run. If you’re troubleshooting a failed run by making a change on your workstation, updating cookbooks on your instance, and triggering another deployment, using this approach can save you a significant amount of time.

There’s no need to limit yourself to a single Pry session. If there are more areas in your code you need to explore, just put binding.pry in the appropriate place in your cookbook. Keep in mind, though, that you don’t want to permanently include this in your recipe, so don’t put this kind of a change under version control.

Errata Security: Powerball lessons for infosec

This post was syndicated from: Errata Security and was written by: Robert Graham. Original post: at Errata Security

“Powerball” is a 44-state lottery whose prize now exceeds $1 billion, so there is much attention on it. I thought I’d draw some lessons for infosec.

The odds of a ticket winning the top prize are 1 in 292-million. However, last week 440-million tickets were purchased. Why did nobody win?
Because most people choose their own numbers. Humans choose numbers that are meaningful and lucky to them, such as birthdays, while avoiding meaningless or unlucky numbers, like 13. Such numbers clump. Thus, while theory tells us there should’ve been at least one winner if everyone chose their number randomly, in practice a large percentage of possible numbers go unchosen. (Letting the computer choose random numbers doesn’t increase your odds of winning, but does decrease the odds of having to share the prize).
The same applies to passwords. The reason we can crack passwords, even the tough ones using salted hashes, is because we rely upon the fact that humans choose passwords themselves. This makes password guessing a tractable human problem, rather than an intractable mathematical problem.
The average adult in lottery states spends $300 a year on the lottery. The amount spent on lotteries is more than sports, movies, music, and books combined. Buying a single lottery ticket can be justified on the argument that it’s entertainment, but the vast spending (primarily by the poor) points to a much graver problem, such as gambling addiction and bad planning.
Organizations have much the same bad planning. The decision makers at the top, those with the least cybersecurity knowledge, convince themselves about what they want to believe. Corporate executives live a fantasy world where they won’t get hacked, or they won’t pay the consequences, similar to the fantasies of lottery players.
Even at the bottom of the organization, among techies, planning is often no better. They have a passion for infosec that leads to emotional decisions, rather than a dispassionate view of risk. They’ll often treat it as binary, something is either secure or insecure, much like the lottery player’s view of their own chances of winning (“you can’t win if you don’t play”). We need to become more dispassionate and less prejudicial about our own risk analysis.
States justify lotteries by claiming the profits go to worthwhile causes, like schools. In practice, every time lotteries fund schools, states reduce school funding to compensate, spending the money on other things. Thus, no matter how much you try to earmark lottery funds, they really become just another tax. That’s a good thing because this tax is voluntary, though also a bad thing because more than half of all tickets are purchased by those in the lower third of income levels (“a tax on the poor”).
Something similar happens in security. The security team will spend a lot of money upgrading the network with better firewalls and intrusion prevention, but that just means everyone else will just take more risks. They’ll stop doing code audits for SQL injection because the WAF handles it, or they’ll allow executable attachments because the email antivirus will catch it. Thus, the money spent on cybersecurity is more fungible than you realize. It may mean saving even more money somewhere else, or get defeated by insecure practices somewhere else.
The lottery is big business, not only for the huge companies that run the lottery, but also throughout the retailers who sell tickets. Many local shops are otherwise merely “break even” except for the money they earn from the lottery. These companies spend a huge amount of money lobbying government, which is why it won’t be made illegal, and why other forms of gambling (“competition”) remain illegal. Despite all the ills of corrupt state-run gambling, there is simply no way to dislodge it.
The same thing is true in infosec. We don’t have an independent infosec community, but instead one that is wholly dominated by vendors of security products and services. You see that as the posh sales rep from Big Firm gives the manager in charge two box seat tickets to the next local sports game. You see that in phrases like “defense in depth”, which you think is a technical concept, but which is really how every vendor talks about their product: you never have enough layers, but could always use one more, such as this fancy product I’m selling you. You’d think that analysts like Gartner would be independent, but they have been wholly corrupted by the system, and perpetuate the problem. Vendors never call out Gartner’s “Magic Quadrant” for the snake oil that it is because they are all still hoping that Gartner will put them in that quadrant.
Finally, there is this last bit, where I get all judgey on you for buying that ticket. People play the lottery because they fantasize about being ultra rich — but without having to earn it. Teenage hackers have the same dream, wanting to be elite hackers that can walk ninja-like through computer systems — but without having to learn all those unnecessary technical details. I think this is a horrible view on life. It’s grand achievements we should work toward. And we can achieve grand things. I look throughout the landscape of infosec and see it on a daily basis. I go to conferences, not for the talks (which are boring), but for “bar-con” and “hallway-con”, where I get one-on-one discussion of people doing great things that makes me feel really jealous. Yes, we see young kids get lucky with that really cool thing they didn’t put much work into, but the majority of cool things come from people who put in the long slog to get there.
So yea, that billion dollar cash prize is out there, waiting to be claimed by somebody. But you’ve got twenty times more chance of getting eaten by a shark or becoming president, so don’t go for it. Instead, spend the time and effort becoming better at infosec, and fantasize how you are going to take down ISIS by hacking the shit out of them.

Krebs on Security: Account Takeovers Fueling ‘Warranty Fraud’

This post was syndicated from: Krebs on Security and was written by: BrianKrebs. Original post: at Krebs on Security

Cybercrime takes many forms, but one of the more insidious and perhaps less obvious manifestations is warranty fraud. This scheme involves con artists who assume the identity of a consumer, complain that a given product has ceased to operate as expected, and demand that the retailer replace the article in question. Such claims turn into a loss for targeted merchants when the scammer hacks an unwitting customer’s account and replaces the customer’s email address with his own address and demands that the retailer ship him a brand new device.

Leakforums is a big source of account takeover and warranty fraud for a variety of products.

Fitness tracking giant FitBit recently found itself the target of such fraud in the last few months of 2015, when the company noticed large caches of data from customer accounts being posted to Pastebin. To the untrained eye, such data might seem at first glance to indicate that FitBit had experienced a breach that exposed their user account data. Included in the data dumps posted to Pastebin were details about the make and model number of each user’s fitness tracker, as well as information about the last time the user had synced the device.

But a more nuanced look at the information posted to Pastebin and other public data dump sites indicates that FitBit is just the latest victim of customer account takeovers powered by breaches at other e-commerce providers.

Hacked FitBit user accounts sell for about $2 apiece.

I reached out to FitBit about this and the company’s security chief Marc Bown said the data appears to be coming from a couple of sources: customer computers that have been compromised by password-stealing malware, and customers who re-use the same credentials across a broad swath of sites online.

“They’re mainly interested in the premium devices,” Bown said, referring to the most expensive devices that FitBit sells — such as the Surge, which retails for about $250. “Those are the ones that we’re seeing are most targeted for warranty fraud.”

Bown said the fraudsters will log in to the customer’s account and change the email address on the account. The scammers then call FitBit’s customer service folks, claim that their device has stopped working, and demand a replacement.

“Basically, they start a support case with customer service, but before they do that, they change the email address on the account they hacked to an address that they control, and at that point they are the customer,” Bown said. “For a lot of customers, this ends up creating a pretty negative experience.”

Bown said after several weeks of battling warranty fraud, the company has more or less solved the problem by educating their customer service employees and assigning risk scores to all warranty replacement requests.

“Account takeover is a thing for all online organizations,” Bown said. “If we see an account that was used in a suspicious way, or a large number of login requests for accounts coming from a small group of Internet addresses, we’ll lock the account and have the customer reconfirm specific information.”

E-commerce companies can increase the level of security for user accounts by requiring two-step or two-factor authentication, which usually involves sending a one-time code to the user’s mobile device that needs to be inputted in addition to the customer’s username and password. Bown said FitBit is considering adding this capability to user accounts.

“I’m not sure the type of user who is using the same password at every site is the great target for that,” Bown said. “But we should offer it, and it’s something we plan to offer in 2016 natively.”

Errata Security: In defense of Paul Graham’s "Inequality"

This post was syndicated from: Errata Security and was written by: Robert Graham. Original post: at Errata Security

The simplest way of trolling people is to defend that which everyone hates. That’s what Paul Graham discovered this week in his support for “inequality“. As a troll, I of course agree with his position.

When your startup is a success, you are suddenly rich after living like a pauper for many years. You naturally feel entitled to exploit all those tax loopholes and exemptions that rich people get. But then your accountant gives you the bad news: those loopholes don’t exist. You’ll have to give more than half of your new wealth to the government. The argument that the “rich don’t pay their fair share of taxes” is based on cherry picking exceptional cases that apply to a tiny few. They certainly don’t apply to you, the startup founder. Statistically, the top 1% earn ~20% of the nation’s income but pay ~40% of taxes [*], twice their “fair share”. There’s nothing a successful entrepreneur can do to evade these taxes.
I point this out because the point of To Kill a Mockingbird is that to understand a person, you need to walk around in their shoes. That’s the backstory of Paul Graham’s piece. He regularly hears statements like “the rich don’t pay their fair share of taxes”, which are at complete odds with his personal experience. This is just bigotry, an argument made by people who envy and hate the rich. They’ll continue to cherry pick the exceptions rather than look at it from another point of view. Paul Graham’s entire piece, though, is looking at the debate from another point of view — that of a Silicon Valley entrepreneur. Specifically, the more you confiscate the winnings of startup founders, the fewer startups you’ll get, and consequently, fewer life changing innovations like the iPhone.
Ezra Klein at Vox attempts to rebut Paul Graham’s piece, but falls victim to the same sort of cherry picking. He points to Sweden, which has less income inequality while still having a vibrant tech startup culture (Skype, Minecraft, Spotify, Candy Crush, etc.). But he’s only picking the part of Sweden that agrees with his argument.
While Sweden has better income equality than the United States, it has worse wealth inequality [*]. A greater percentage of wealth is held by the rich in Sweden than in America. Moreover, while most millionaires in America are self-made, having earned their money, most millionaires in Sweden inherited their money. This seems counterintuitive, because Sweden has exactly the sort of confiscatory taxes that anti-inequality activists want for the United States, to prevent accumulations of wealth among rich families. But that’s because the rich don’t follow the law, and evade taxes. Silicon Valley entrepreneurs pay their tax bill; the Swedish rich do not.
More to the point, income inequality is rising in Sweden. Activists want you to believe that rising income inequality is a uniquely American thing, due to deficiencies in the American system. That’s not true: it’s rising in every other rich country as well [*][*]. It’s probably due to technology advances: the more advanced the technology, the more a person can produce. An engineer using modern computers is vastly more productive today than they were 20 years ago, while those performing manual labor are no more productive.
Nobody wants to live in a “plutocracy”, a system where the rich govern. Not even the rich want this. It would mean spending all their time and wealth bribing politicians instead of relaxing on their yacht.
But confiscating their hard-earned wealth is not the only answer. When you raise taxes to confiscatory levels, you encourage the corrupt system (as in Sweden) rather than doing anything to stop the influence of money in politics. If you could somehow make everyone obey the law, unlike Sweden, then all you’d get is less productivity and fewer startups. It’s pointless going through all the pain that startups entail (and it’s a lot of pain) if the government takes all your earnings anyway. Without these startups, everyone will be worse off, not getting the innovations these startups create.
Better answers are those that would apply equally to France, Germany, Sweden, and Japan — countries that are likewise seeing dramatic increases in income inequality. They can’t confiscate even more wealth, so have to look to other solutions, such as increasing STEM education, or making it easier for everyone to share in the wealth generated by startups.

Errata Security: Mythical vuln-disclosure program

This post was syndicated from: Errata Security and was written by: Robert Graham. Original post: at Errata Security

In the olden days (the 1990s), we security people would try to do the “right thing” and notify companies about the security vulnerabilities we’d find. It was possible then, because the “Internet” team was a small part of the company. Contacting the “webmaster” was a straightforward process — indeed their email address was often on the webpage. Whatever the problem, you could quickly get routed to the person responsible for fixing it.

Today, the Internet suffuses everything companies do. There is no one person responsible. If companies haven’t set up a disclosure policy (such as an email account “security@example.com”), they simply cannot handle disclosure. Assuming you could tell everyone in the company about the problem, from the CEO on down to the sysadmins and developers, you still won’t have found the right person to tell — because such a person doesn’t exist. There’s simply no process for dealing with the issue.

I point this out in response to the following Twitter discussion:

Josh’s assertion is wrong. There is nobody at American Airlines that can handle a bug report. At some point, a product management team is going to have to prioritize fixing this bug compared to other features they want to implement, and they’ll likely convince themselves that this bug isn’t important, and it won’t get fixed.

Josh is imagining that somebody at American Airlines has both the competence and authority to handle such a bug. But if that were true, then they’d already have a vuln-disclosure program, and emails sent to “security@aa.com” would get answered. In other words, Josh is asserting that they do handle vulnerability reports — but using a super-secret process that nobody knows about.

Large companies all deal with risk the same way. It doesn’t matter if the risk is hackers, an implosion in the housing market, or the explosion of oil refineries. First, they look at “best practices”: what their peers/competitors in the industry do. Second, they respond to bad things that happen to them.

In other words, the only way American Airlines will get a vuln-disclosure/bug-bounty program is (1) if many other airlines create such programs, or (2) they get bitten hard by a vulnerability.

So far, United Airlines is the leader, having created a bug-bounty program that has rewarded security researchers with millions of frequent-flyer miles. Other airlines will eventually catch up. In the meanwhile, the only way for American Airlines to respond to a vuln is for the bug to be reported on a full-disclosure mailing list. This will either cause people in the company to panic and fix the bug before it bites them, or hackers will exploit the bug and cause millions of dollars of damage. Either way, it’s how American Airlines decided to do business, how they chose to respond to risk.

…and I’m not saying this because I want to be mean to the company. I don’t even think it’s a wrong way of doing business. Sure, it sounds bad relative to the risks I understand (hacking), but it’s consistent with how companies handle all their other risks. Waiting to be bitten by a risk is often a better strategy than trying to anticipate all possible unknown risks.

Krebs on Security: Fraudsters Automate Russian Dating Scams

This post was syndicated from: Krebs on Security and was written by: BrianKrebs. Original post: at Krebs on Security

Virtually every aspect of cybercrime has been made into a service or plug-and-play product. That includes dating scams — among the oldest and most common of online swindles. Recently, I had a chance to review a package of dating scam emails, instructions, pictures, videos and love letter templates that are sold to scammers in the underground, and was struck by how commoditized this type of fraud has become.

The dating scam package is assembled for and marketed to Russian-speaking hackers, with hundreds of email templates written in English and a variety of European languages. Many of the sample emails read a bit like Mad Libs or choose-your-own-adventure texts, featuring decision templates that include advice for ultimately tricking the mark into wiring money to the scammer.

The romance scam package is designed for fraudsters who prey on lonely men via dating Web sites and small spam campaigns. The vendor of the fraud package advertises a guaranteed response rate of at least 1.2 percent, and states that customers who average 30 scam letters per day can expect to earn roughly $2,000 a week. The proprietor also claims that his method is more than 20% effective within three replies and over 60% effective after eight.

One of hundreds of sample template files in the dating scam package.

The dating scam package advises customers to stick to a tried-and-true approach. For instance, scammers are urged to include an email from the mother of the girl in the first 10 emails between the scammer and a target. The scammer often pretends to be a young woman in an isolated or desolate region of Russia who is desperate for a new life, and the email from the girl’s supposed mother is intended to add legitimacy to the scheme.

Then there are dozens of pre-fabricated excuses for not talking on the phone, an activity reserved for the final stretch of the scam when the fraudster typically pretends to be stranded at the airport or somewhere else en route to the target’s home town.

“Working with dozens of possible outcomes, they carefully lay out every possible response, including dealing with broke guys who fell in love online,” said Alex Holden, the security expert who intercepted the romance scam package. “If the mark doesn’t have money, the package contains advice for getting him credit, telling the customer to restate his love and discuss credit options.”

A sample letter with multiple-choice options for creating unique love letter greetings.

Interestingly, although Russia is considered by many to be among the most hostile countries toward homosexuals, the makers of this dating scam package also include advice and templates for targeting gay men.

Also included in the dating scam tutorial is a list of email addresses and pseudonyms favored by anti-scammer vigilantes who try to waste the scammers’ time and otherwise prevent them from conning real victims. In addition, the package bundles several photos and videos of attractive Russian women, some of whom are holding up blank signs onto which the scammer can later Photoshop whatever message he wants.

Holden said that an enterprising fraudster with the right programming skills or the funds to hire a coder could easily automate the scam using bots that are programmed to respond to emails from the targets with content-specific replies.

CALL CENTERS TO CLOSE THE DEAL

The romance scam package urges customers to send at least a dozen emails to establish a rapport and relationship before even mentioning the subject of traveling to meet the target. It is in this critical, final part of the scam that the fraudster is encouraged to take advantage of criminal call centers that staff women who can be hired to play the part of the damsel in distress.

The login page for a criminal call center.

“When you get down to the final stage, there has to be a crisis, some compelling reason why the target should send the money,” said Holden, founder of Hold Security [full disclosure: Yours Truly is an uncompensated adviser to Holden’s company]. “Usually this is something like the girl is stranded at the airport or needs money to get a travel visa. There has to be some kind of distress situation for this person to be duped into wiring money, which can be anywhere between $200 and $2,000 on average.”

Crooked call centers like the one pictured in the screen shot above employ male and female con artists who speak a variety of languages. When the call center employees are not being hired to close the deal on a romance scam, very often they are used to assist in bank account takeovers, redirecting packages with shipping companies, or handling fraudulent new credit applications that require phone verification.

Another reason that call centers aren’t used earlier in romance scams: Hiring one is expensive. The call center pictured above charges $10 per call, payable only in Bitcoin.

“If you imagine the cost of doing by phone every part of the scam, it’s rather high, so they do most of the scam via email,” Holden said. “What we tend to see with these dating scams is the scammer will tell the call center operator to be sure to mention special nicknames and to remind him of specific things they talked about in their email correspondence.”

An ad for a criminal call center that specializes in online dating scams. This one, run by a cybercrook who uses the nickname “Sparta,” says “Only the best calls for you.”

Check back later this week for a more in-depth story about criminal call centers.

Krebs on Security: Happy 6th Birthday, KrebsOnSecurity!

This post was syndicated from: Krebs on Security and was written by: BrianKrebs. Original post: at Krebs on Security

You know you’re getting old when you can’t remember your own birthday (a reader tipped me off). Today is the sixth anniversary of this site’s launch! KrebsOnSecurity turns 6! I’m pretty sure that’s like middle age in Internet years.

Absolutely none of this would be possible without you, Dear Reader. You have supported, encouraged and inspired me in too many ways to count these past years. The community that’s sprung up around here has been a joy to watch, and essential to the site’s success. Thank you!

I tried for at least one post per weekday in 2015, and came close, publishing some 206 entries this year (not counting this one). The frequency of new posts suffered a bit from September to November, when I was on the road nearly 24/7 for a series of back-to-back speaking gigs. Fun fact: Since its inception, this site has featured some 1,200 stories that generated more than 62,000 reader comments.

Here’s wishing you all a very happy, healthy, wealthy and safe New Year.  Below are some of the KrebsOnSecurity posts that readers found most popular in 2015 (minus the Ashley Madison and Lizard Squad stuff), along with one or two of my personal favorites in no particular order.

How I Learned to Stop Worrying and Embrace the Security Freeze — Credit monitoring services offered in the wake of umpteen breaches this year won’t stop ID thieves from stealing your good name.

What’s in a Boarding Pass Barcode? – Sometimes the stories intended to be written in a “hey-did-you-know” format turn into national news. Who knew?

How Carders Can Use eBay as a Virtual ATM – “Triangulation fraud” is big business.

Sign Up at the IRS Before Crooks Do It For You – This story about how ID thieves used the IRS’s own site to steal taxpayer data was published three months before the IRS acknowledged that some 330,000 taxpayers had been impacted.

Intuit Failed at Know-Your-Customer Basics – Much of the tax refund fraud problem can be traced back to poor or non-existent authentication at online tax preparation firms, like TurboTax.

Hacker Who Sent Me Heroin Faces Charges in the U.S. – A stranger-than-fiction story about a cybercrime kingpin who tried to frame me for drug possession and failed spectacularly.

Bluetooth ATM Skimming Series in Mexico – I traveled to Cancun in September to chronicle the work of an ATM skimming gang that was bribing ATM technicians to get access to the insides of the cash machines.

Gas Theft Gangs Fuel Pump Skimming Scams – It’s truly remarkable how much effort crooks will put into extracting value from stolen credit and debit cards.

Inside Target Corp., Days After 2013 Breach – I got to look at a confidential, internal penetration test that Target commissioned just days after learning it had lost 40 million credit cards. It wasn’t pretty.

A Day in the Life of a Stolen Healthcare Record – Healthcare organizations have some serious and difficult security challenges ahead of them. I think that explains the reader interest in this story, coupled with the fact that there are so few stories out there about stolen medical info showing up for sale in the cybercrime underground.

Krebs on Security: Flash Player Patch Fixes 0-Day, 18 Other Flaws

This post was syndicated from: Krebs on Security and was written by: BrianKrebs. Original post: at Krebs on Security

Adobe has shipped a new version of its Flash Player browser plugin to close at least 19 security holes in the program, including one that is already being exploited in active attacks.

The new Flash version, v. 20.0.0.267 for most Mac and Windows users, includes a fix for a vulnerability (CVE-2015-8651) that Adobe says is being used in “limited, targeted attacks.” If you have Flash installed, please update it.

Better yet, get rid of Flash altogether, or at least disable it until and unless you need it. Doing without Flash just makes good security sense, and it isn’t as difficult as you might think: See my post, A Month Without Adobe Flash Player, for tips on how to minimize the risks of having Flash installed.

The most recent versions of Flash should be available from the Flash home page. Windows users who browse the Web with anything other than Internet Explorer may need to apply this patch twice, once with IE and again using the alternative browser (e.g., Firefox or Opera). This link should tell you whether your system has Flash and if so which version of Flash is installed in your browser.

Let's Encrypt - Free SSL/TLS Certificates: OVH Sponsors Let’s Encrypt

This post was syndicated from: Let's Encrypt - Free SSL/TLS Certificates and was written by: Let's Encrypt - Free SSL/TLS Certificates. Original post: at Let's Encrypt - Free SSL/TLS Certificates

We’re pleased to announce that OVH has become a Platinum sponsor of Let’s Encrypt.

According to OVH CTO and Founder Octave Klaba, “OVH is delighted to become a Platinum sponsor. With Let’s Encrypt, OVH will be able to set a new standard for security by offering end-to-end encrypted communications by default to all its communities.”

The Web is an increasingly integral part of our daily lives, and encryption by default is critical in order to provide the degree of security and privacy that people expect. Let’s Encrypt’s mission is to encrypt the Web and our sponsors make pursuing that mission possible.

OVH’s sponsorship will help us to pay for staff and other operating costs in 2016.

If your company or organization would like to sponsor Let’s Encrypt, please email us at sponsor@letsencrypt.org.

Errata Security: Where do bitcoins go when you die? (sci-fi)

This post was syndicated from: Errata Security and was written by: Robert Graham. Original post: at Errata Security

A cyberpunk writer asks this, so I thought I’d answer it:

Note that it’s asked in a legal framework, about “wills” and “heirs”, but law isn’t the concern. Instead, the question is:

What happens to the bitcoins if you don’t pass on the wallet and password?

Presumably, your heirs will inherit your computer, and if they scan it, they’ll find your bitcoin wallet. But the wallet is encrypted, and the password is usually not written down anywhere, but memorized by the owner. Without the password, they can do nothing with the wallet.

Now, they could “crack” the password. Half the population will choose easy-to-remember passwords, which means that anybody can crack them. Many, though, will choose complex passwords that essentially mean nobody can crack them.

As a science-fiction writer, you might make up a new technology for cracking passwords. For example, “quantum computers” are becoming scarily real, scarily fast. But here’s the thing: any technology that makes it easy to crack this password also makes it easy to crack all of bitcoin to begin with.

But let’s go back a moment and look at precisely how bitcoin works. Sci-fi writers imagine future currency as something exchanged between two devices, such as me holding up my phone to yours, and some data is exchanged. The “coins” are data that exist on one device, then flow to another device.

This actually doesn’t work, because of the “double spending” problem. Unlike real coins, data can be copied. Any data I have on a device that I give to you, I can also keep, and then spend a second time to give to somebody else.

The solution is a ledger. When my phone squirts coins to your phones, both our phones contact the bank and inform it of the transfer. The bank then debits my account and credits yours. And that’s how your credit card works with the “chip and pin”. It’s actually a small computer on the credit card that verifies a transaction, and then your bank records that transaction in a ledger, debiting your account.

Bitcoin is simply that ledger, but without banks. It’s a public ledger, known as the blockchain.

The point is that you don’t have any bitcoins yourself. Instead, there is an entry in the public-ledger/blockchain that says you have bitcoins.
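The ledger model can be sketched in a few lines of Python. This is a toy illustration only, not how Bitcoin actually stores data (real transactions are cryptographically signed, and there is no central table), but it shows why a shared ledger defeats double-spending:

```python
# Toy ledger: balances are entries in a shared table, not data held on a
# device. A transfer debits one entry and credits another, so the same
# coin cannot be spent twice -- the double-spending problem a ledger solves.
ledger = {"alice": 10, "bob": 0}

def transfer(ledger, sender, receiver, amount):
    if ledger.get(sender, 0) < amount:
        raise ValueError("insufficient funds: can't double-spend")
    ledger[sender] -= amount
    ledger[receiver] = ledger.get(receiver, 0) + amount

transfer(ledger, "alice", "bob", 4)
print(ledger)  # {'alice': 6, 'bob': 4}
```

Bitcoin replaces the trusted bank maintaining this table with the public blockchain, and replaces the bank's permission check with the private-key signatures described below.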

What’s in a bitcoin wallet is not any bitcoins, but the secret crypto keys that control the associated entries in the public ledger. Only the person with the private key can add a transaction to the public-ledger/blockchain reassigning those bitcoins to somebody else. Such a private key looks something like:

E9873D79C6D87DC0FB6A5778633389F4453213303DA61F20BD67FC233AA33262

Without this key, the associated entries in the blockchain become stale. There’s no way to create new entries passing bitcoins to somebody else. If somebody dies without passing this key to somebody else, then the bitcoins essentially die with them.

In theory, somebody can memorize their private key, but in practice, nobody does. Instead, they put this into a file, and then encrypt the file with a password that’s more easily memorized. For example, they might use as their password the first line of text from Neuromancer. It’s long and hard to guess, yet something that is either easily memorized, or if forgotten, easily recovered. In other words, the password (or passphrase in this case) to encrypt the file containing the private key might be:

The sky above the port was the color of television, tuned to a dead channel.
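In code, turning such a passphrase into an encryption key is typically done with a key-derivation function. Here is a minimal sketch using Python’s standard library; the salt and iteration count are illustrative values, not what any particular wallet software uses:

```python
import hashlib
import os

# Derive a 256-bit encryption key from a memorable passphrase.
# The salt and iteration count below are illustrative; real wallet
# software picks its own. The derived key -- not the passphrase itself --
# is what actually encrypts the file holding the private key.
passphrase = b"The sky above the port was the color of television, tuned to a dead channel."
salt = os.urandom(16)
key = hashlib.pbkdf2_hmac("sha256", passphrase, salt, 200_000)
print(len(key))  # 32 bytes = 256 bits
```

The deliberately slow iteration count is what makes guessing expensive: each wrong guess costs the attacker 200,000 hash computations, while the legitimate owner pays that cost only once.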

So now our deceased has to pass on both the wallet file and the password that will decrypt the wallet. Presumably, though, the deceased’s heirs will find the computer and the wallet, so practically the only problem becomes cracking the password.

Cracking is an exponential problem. The trope in sci-fi is to wave aside this problem and “reroute the encryptions”, and instantly decrypt such things, but in the real world, it’s a lot harder. Passwords become exponentially harder to crack the longer they are.

The classic story here is that of a knave who plays chess with a king. The king tells his opponent that he can have anything he wants, within reason, should he win. The knave chooses this as his prize: one grain of rice for the first square, two for the second, four grains of rice for the third square, and so on, doubling each time for all 64 squares on the chessboard. The king, thinking this to be a minor amount, agrees. When the knave wins, the king finds he cannot pay off the winnings — because of exponential growth.

The first ten squares have the following number of rice grains:

1 2 4 8 16 32 64 128 256 512

This is 1,023 grains of rice in total. Using ‘k’ to mean ‘a thousand’ (kilo-grains), the next 10 squares look like this:

1k 2k 4k 8k 16k 32k 64k 128k 256k 512k

This is about a million grains of rice. Using ‘m’ to mean ‘a million’ (mega-grains of rice), the next 10 squares look like this:

1m 2m 4m 8m 16m 32m 64m 128m 256m 512m

This is about a billion grains of rice. The next 10 squares become a trillion grains of rice, and we are only 40 out of 64 squares.

As the Wikipedia article discusses, filling the chessboard requires a heap of rice larger than Mt. Everest, or a thousand years at the current rate of growing rice.
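The chessboard arithmetic is easy to verify: square n holds 2^(n-1) grains, so all 64 squares together hold 2^64 − 1 grains, a 20-digit number:

```python
# Square n (1-indexed) holds 2**(n-1) grains; the running total roughly
# doubles with every square, which is what makes the king's debt explode.
total = sum(2 ** (n - 1) for n in range(1, 65))
# Equivalently, the closed form: total == 2**64 - 1
print(total)  # 18446744073709551615
```

Password cracking faces the same curve: each additional bit of key material, like each additional chessboard square, doubles the attacker’s work.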

One ending of this story is that the knave gets the daughter in marriage and half the kingdom. In the other version of this story, the king beheads the knave for his impudence.

The same applies to password cracking. Short passwords are easily cracked. Because of exponential growth, long passwords become impossible to crack, even at sci-fi levels of imagined technology. If such a magic technology existed, then it would defeat the underlying cryptography of the blockchain as well — if you could crack the password encrypting the key, you could just crack the key. If you could do that, then you could steal everyone’s bitcoins, not just the deceased’s.

In the above example, the sci-fi writer in question imagines an artificial intelligence that, in order to make money, tracks down dead people and harvests all the bitcoins they haven’t passed on. This can’t be done by harvesting the blockchain — it’d need the private keys.

One way this might happen is for the AI to own a company that recycles computers. Before recycling, it automatically scans them for such files. While it can’t break the encryption normally, a large percentage of people choose weak passwords. Also, the AI might know some tricks that make it smarter at figuring out how people choose passwords. It still won’t crack everything, but even cracking half the possible coins would lead to a good amount of income.
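Harvesting weakly protected wallets like this amounts to a dictionary attack: try the common passwords first and keep whatever falls. A toy sketch (the wordlist and the plain SHA-256 check are illustrative; real attacks use specialized tools with far larger lists, targeting the wallet’s actual key-derivation scheme):

```python
import hashlib

def crack(target_hash, wordlist):
    """Try each candidate password; return the one whose hash matches."""
    for candidate in wordlist:
        if hashlib.sha256(candidate.encode()).hexdigest() == target_hash:
            return candidate
    return None  # not in the list -- a strong passphrase survives this

# A weak password falls instantly, even to a tiny wordlist.
wordlist = ["123456", "password", "letmein", "hunter2"]
target = hashlib.sha256(b"hunter2").hexdigest()
print(crack(target, wordlist))  # hunter2
```

The asymmetry is the whole story: the half of the population on the wordlist gets harvested cheaply, while the Neuromancer-passphrase crowd never appears on any list.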

Or, let’s tackle this problem from another angle, a legal angle. One of the hot topics these days is something known as “crypto backdoors”. The police claim (erroneously in my opinion) that such unbreakable encryption prevents them from investigating some crimes, because even when they have a warrant to get computers, phones, and files, they can’t possibly decrypt them. Thus, they claim, technology needs a “backdoor” that only the police can access with a warrant.

In its simplest form, this is technically easy. Indeed, it’s often a feature for corporations, so that they can get at encrypted files and messages when employees leave the firm, or more often, when stupid employees forget their passwords and need to have the IT department recover their data.

In a practical form, it’s unreasonable, because it means outlawing any software that doesn’t have a backdoor. Since crypto is just math, and software is something anybody can write, this means a drastic police-state measure. But, if you are a cyberpunk writer about future dystopias, well then, this would be perfectly reasonable.

Thus, in this case, the police, using their secret backdoor key, would be able to decrypt the wallet, and recover any secret key.

But then at the same time, the police could in theory impose this rule on the blockchain itself. Instead of simply trusting a single person’s key, it can trust multiple keys, so that any of them can transfer bitcoins to somebody else. One of those keys could be a secret backdoor key held by the police, so they could step in and grab bitcoins any time they want.

This would, of course, largely defeat the purpose of the bitcoin blockchain, because now you have central control. But things can go halfway. Bitcoin is transnational, so it really can’t be controlled by even a dystopic government, which is why it’s currently popular in places like Russia. However, a government can still force the citizens of its own country to backdoor their transactions with that country’s public backdoor key (which matches a secret police key). Thus, the American police would be able to grab bitcoins from any law-abiding American who chose to sign their transactions with the FBI’s key.

The point I’m making here is that if you are a sci-fi writer, while a naive approach to the topic might not have a good answer, some thinking, and discussing it with a bunch of people, might yield something fruitful.

Errata Security: Force Awakens review: adequacity

This post was syndicated from: Errata Security and was written by: Robert Graham. Original post: at Errata Security

The film is worth seeing. See it quickly before everyone tells you the spoilers. The two main characters, Rey and Fin, are rather awesome. There was enough cheering in the theater, at the appropriate points, that I think fans and non-fans alike will enjoy it. Director JarJar Abrams did not, as I feared, ruin the franchise (as he did previously with Star Trek).

On the other hand, there’s so much to hate. The plot is a rip-off of the original Star Wars movie, so much so that the decision to “go in and blow it up” is a soul-killing perfunctory scene. Rather than being on the edge of your seat, you really just don’t care, because you know how that part ends.

While JarJar Abrams thankfully cut down on the lens flare, there’s still too much of it, and it ruins every scene he applies it to. Critics keep hammering him on how much this sucks, but JarJar will never give up his favorite moviemaking technique.

The universe is flat and boring. In the original trilogy, things happen for a purpose. Everything that transpires is according to Palpatine’s design. And even while we find his plans confusing, we still get the sense that there are plans. In this movie, the bad guys seem to act haphazardly, with no real plan. Deus Ex Machina is out in force, with JarJar Abrams conjuring things out of thin air to serve his purpose, even though if you think them through, they make no sense.

Many settings were similarly flat. Both JarJar Abrams and George Lucas create places that look fantastically beautiful from afar. But JarJar often leaves it at that, whereas Lucas goes on to explore his creations. We saw a lot of Naboo, Coruscant, Tatooine, Endor, Lando’s Cloud City, and so on; they weren’t just still pictures painted on a screen. In Force Awakens, we just touch down in a place and then leave again, without fully exploring it.

In other words, the latest episode doesn’t have the soul or depth of Lucas’s original. Lucas had a huge story, and let us peek at it with each movie. JarJar Abrams is small minded — he’s only interested in this movie, and doesn’t care about the larger story. If we couldn’t see it on the screen, JarJar put little thought into it. The difference is palpable.

But the characters of Rey and Fin save the movie. The actors are fantastically cast, especially Daisy Ridley’s (Rey) quirkiness. Their yearning to grow is the same as Luke’s that drove the original film, which is the part I identified with most as a child. It’s what made me a Star Wars fan. It’s beautifully portrayed at the start of the movie, where each of the characters reaches a point where they first say “no,” and from that moment on starts growing according to their own design, and not the design of those around them. And each character finds that their own decisions actually make a difference. JJ Abrams deserves a lot of credit for how well this turned out. I don’t care much about this new Star Wars — except I want to know what these two characters do next.

In short, despite all that I hate about JarJar Abrams, the movie is still worth watching. As everyone else goes to see the movie, the spoilers will start leaking out. Any single spoiler isn’t going to ruin the movie, but the more you hear, the less interesting the movie will become. You really need to go see it this week, before all that happens.

Errata Security: No, you can’t shut down parts of the Internet

This post was syndicated from: Errata Security and was written by: Robert Graham. Original post: at Errata Security

In tonight’s Republican debate, Donald Trump claimed we should shut down parts of the Internet in order to disable ISIS. This would not work. I thought I’d create some quick notes why.

This post claims it would be easy: just forge a BGP announcement. Doing so would then redirect all Syrian traffic to the United States instead of Syria. This view is too simplistic.

Technically, the BGP attack described in the above post wouldn’t even work. BGP announcements in the United States would only disrupt traffic to/from the United States. Traffic between Turkey and ISIS would remain unaffected. The Internet is based on trust — abusing trust this way could only work temporarily, before everyone else stopped trusting the United States. Legally, this couldn’t work either, as the United States lacks the legal authority to compel such an action. Congress would have to pass a law, which it wouldn’t do.

But “routing” is just a logical layer built on top of telecommunications links. Syria and Iraq own their respective IP address space. ISIS doesn’t have any “ASN” of their own. (If you think otherwise, then simply tell us the ASN that ISIS uses). Instead, ISIS has to pay for telecommunications links to route traffic through other countries. This causes ISIS to share the IP address space of those countries. Since we are talking about client access to the Internet, these are probably going through NATs of some kind. Indeed, that’s how a lot of cellphone access works in third world countries — the IP address of your phone frequently does not match that of your country, but of the country of the company providing the cellphone service (which is often outsourced).

Any attempt to shut those down is going to have a huge collateral impact on other Internet users. You could take a scorched earth approach and disrupt everyone’s traffic, but that’s just going to increasingly isolate the United States while having little impact on ISIS. Satellite and other private radio links can be setup as fast as you bomb them.

In any event, a scorched earth approach to messing with IP routing is still harder than just cutting off the land-line links they already have. In other words, attacking ISIS at Layer 3 (routing) is foolish when attacking at Layer 1 (physical links) is so much easier.

You could probably bomb fiber optic cables and satellite links as quickly as they got reestablished. But then, you could disable ISIS by doing the same thing with roads, bridges, oil wells, electrical power, and so on. Disabling critical infrastructure is considered a war crime, because it disproportionately affects the populace rather than the enemy. The same likely applies to Internet connections — you’d do little but annoy ISIS while harming the population.

Indeed, cutting off the population from the Internet is what dictators do. It’s what ISIS wants to do, but doesn’t, because it would turn the populace against them. Our strategy shouldn’t be to help ISIS.

Note that I’ve been focused on clients, because the servers ISIS uses to interact with the rest of the world are located outside of ISIS-controlled areas. That’s because Internet access is so slow and expensive, they use it only for client browsing, not for services. Trump tried to back off his crazy proposal by insisting it applied only to ISIS-controlled areas, but that’s not how the Internet works. ISIS equipment is worldwide — the only way to shut them down is a huge, First Amendment-violating censorship campaign.

Here’s the deal. The Internet routes around censorship. Of the many options we have, censoring the Internet in ISIS-controlled territories is neither something we can do nor something we would want to do. Simply null-routing AS numbers in BGP and bombing satellite uplinks would certainly not do it. Cutting the physical links is certainly possible, but even ISIS’s neighbors, all of whom oppose ISIS, have not taken that step.


Update: In response to Weev’s comment below, I thought I’d make a few points. The Pakistan goof did not disable all of YouTube, just areas with a shorter route to Pakistan than to the United States, such as Europe. Also, while it’s possible to create disruption, it’s impossible to sustain it for long, as the Pakistan incident showed: after a bit, everyone just ignored Pakistan. It hurt Pakistan more than YouTube. Lastly, ISIS has no ASN to null-route. If you disagree with me, then name the ASN. Instead, the ASNs in ISIS-controlled areas are those from Syria, neighbors like Turkey and Iran, and possibly other countries like China. Trying to block them all would cause huge collateral damage.

Update: If you think you can wage war by spoofing BGP, then it means ISIS-friendly ISPs can retaliate by spoofing back. It’s not a precedent you want to establish.

DevOps Blog: Continuous Delivery for a PHP Application Using AWS CodePipeline, AWS Elastic Beanstalk, and Solano Labs

This post was syndicated from: DevOps Blog and was written by: David Nasi. Original post: at DevOps Blog

My colleague Joseph Fontes, an AWS Solutions Architect, wrote the guest post below to discuss continuous delivery for a PHP Application Using AWS CodePipeline, AWS Elastic Beanstalk, and Solano Labs.


Solano Labs recently integrated Solano CI with AWS CodePipeline, a continuous delivery service for fast and reliable application updates. Solano Labs provides CI/CD capabilities to a variety of organizations, such as Airbnb, Change.org, and Apptio.  Solano CI is an enterprise-grade, scalable continuous integration (CI) and continuous deployment (CD) SaaS solution.  CodePipeline builds, tests, and deploys your code every time there is a code change, based on release process models that you define. You can now take advantage of the CI/CD capabilities long enjoyed by Solano Labs customers from within CodePipeline and with all of the ease of using the AWS Management Console.

In this post, we demonstrate how to use Solano CI with CodePipeline to test a PHP application using PHPUnit, and then deploy the application to AWS Elastic Beanstalk (Elastic Beanstalk). 

You will learn how to:

  • Deploy a sample PHP application to Elastic Beanstalk
  • Create a CD tool chain to push your code to Elastic Beanstalk
  • Connect your GitHub source repository to CodePipeline
  • Set up a Solano CI build stage to build your application and perform integration tests
  • Deploy your tested application to Elastic Beanstalk 

After you have completed these steps, your code will be continuously tested and delivered safely in an automated fashion.

To follow along, you need to set up an account with Solano Labs and have a GitHub account and a repository for the demo. 

1. To create an account with Solano Labs, go to http://docs.solanolabs.com/introduction/.

2. If you don’t already have a GitHub account, create one. Also, create the repository for this demo. For instructions for both, go to

https://help.github.com/articles/signing-up-for-a-new-github-account/

https://help.github.com/articles/creating-a-new-repository/

As a part of the deployment process, you will use the PHPUnit testing framework on the demonstration application we’ve posted to our Elastic Beanstalk environment. For more information about PHPUnit and Solano Labs PHPUnit integration, go to https://phpunit.de/ and http://docs.solanolabs.com/ConfiguringLanguage/php/.

Now, let’s start the demo.

1. Clone the application code from the following location into your Git repository: https://github.com/awslabs/aws-demo-php-simple-app.git.

2. Create the destination application in Elastic Beanstalk by following the instructions at http://docs.aws.amazon.com/gettingstarted/latest/deploy/deploying-with-elastic-beanstalk.html.

You need to choose specific options for your configuration. Instead of Node.js for the Predefined configuration, choose PHP.

When asked to provide an archive of an application to get started, download the following .zip file to your local machine and upload into the Elastic Beanstalk application.

3. Finish configuring the Elastic Beanstalk application and environment.

4. Ensure that the configuration is running properly by testing it in a web browser.  You should see a page similar to this:

5. Next, create a CodePipeline pipeline to establish the continuous delivery process.  From the AWS Management Console, choose Services, and then, choose AWS CodePipeline.  If this is the first time you’ve created a pipeline, CodePipeline displays the following page:


6. Choose Get started.

7. Enter a name for your pipeline. For this example, we use “solano-eb-build”.   Choose Next step.

8. Now define your source provider.  CodePipeline provides direct integration between GitHub repositories and versioned Amazon S3 locations.  After you define your source, CodePipeline tracks changes committed to the source and performs actions that you will define.

For Source provider, choose GitHub. You may be asked to log in to your GitHub account to proceed.  Under Connect to GitHub, you’ll see a variety of repositories and branches.  Choose the repository and branch that you just created, and then choose Next step.

9. For Build provider, choose Solano CI, and then choose Connect.  You may be asked to log in to your Solano CI account.  When you are redirected to the Solano site, confirm the connection between CodePipeline and Solano CI by choosing Connect.  Then, choose Next step on the Create pipeline page.

10. For Deployment provider, choose AWS Elastic Beanstalk, and then choose the Application name and the Environment name of the Elastic Beanstalk environment you created earlier.  Choose Next step.

An AWS Identity and Access Management (IAM) role provides the permissions CodePipeline needs to perform the build actions and service calls.  If you already have a role that you want to use with the pipeline, choose it for Role name. Otherwise, choose Create role to create a role with sufficient permissions to perform the build and push tasks.  Review the predefined permissions, and then accept them.  For information about IAM, see http://docs.aws.amazon.com/codepipeline/latest/userguide/access-permissions.html. Then choose Next step.

11. Review the information and make any necessary changes, and then choose Create pipeline.

You’ll get confirmation that the pipeline’s been created:

12. Now that you have a pipeline and the initial version of the application running, let’s make some changes.  In your Git directory, open the www/index.php file with a text editor.  Change the value of:

$AppName = "Demo Web App";

to

$AppName = "PHP Web App";

Save the file and check your changes into Git, as follows.

Index.php prior to alteration:

Index.php after alteration:


Outcome of running the git commit command:

You can see that the name of the application rendered in the web browser has been changed:

13. Check on the status of the build process by viewing the CodePipeline console:

You can also see the build status on the Solano Labs dashboard and build details page:

You have leveraged Solano’s built-in capabilities to test the functionality of the updated code.

In the Git repository, there are six tests spread across two files.  If you look at the phpunit directory in the Git working directory, you will see two files, indexTest.php and loadGenTest.php.  loadGenTest.php tests the load-generation feature of the demo application.  Testing this functionality ordinarily means generating load, which takes time.  Instead, we can take advantage of Solano CI’s parallel PHPUnit testing to spread these tests across workers and run them concurrently.

PHPUnit test definitions:

These are the top three tests performed:

In the next screenshots, you can see the file responsible for PHPUnit test configuration and the solano.yml configuration file that invokes the PHPUnit testing:

This walkthrough demonstrates just a few of the many unique capabilities available when you integrate Solano CI into the CodePipeline build process.  Using Solano CI expands the portfolio of technologies available for use within your AWS CI/CD implementations.  You can expand the capabilities of this one example by pushing unique GitHub branches to different development, testing, and production environments.  You can also leverage other CodePipeline integrations to take advantage of AWS CodeDeploy and a collection of specialized testing services.

Now that you have the build process running, make some changes and observe the process flow from within the CodePipeline console, AWS Elastic Beanstalk console, and the Solano Labs’ Solano CI console.  After you check changes into the GitHub repository that you used for this demonstration, updates are automatically dispatched to your new Elastic Beanstalk application.

Errata Security: Policy wonks aren’t computer experts

This post was syndicated from: Errata Security and was written by: Robert Graham. Original post: at Errata Security

This Politico story polls “cybersecurity experts” on a range of issues. But they weren’t experts; they were mostly policy wonks and politicians. Almost none of them have ever configured a firewall, written code, exploited SQL injection, analyzed a compromise, or in any other way acquired technical expertise in cybersecurity. It’s like polling a group of “medical experts”, none of whom has a degree in medicine, or having a “council of economic advisers” consisting of nobody with an economics degree, but instead of representatives from labor unions and corporations.

As an expert, a real expert, I thought I’d answer the questions in the poll. After each question, I’ll post my answer (yes/no), the percentage from the Politico poll of those agreeing with me, and then a discussion.

Should the government mandate minimum cybersecurity requirements for private-sector firms?

No (39%). This question is biased because they asked policy wonks, most of whom will answer “yes” to any question that starts with “should government mandate”. It’s also biased because if you ask anybody involved in X whether we need more X, they’ll say “yes”, regardless of the subject.

But the best answer is “no”, for three reasons.

Firstly, we experts don’t know what “minimum requirements” should be. The most common attacks on the Internet are SQL injection, phishing, and password reuse. We experts don’t know how to solve these problems. Even if everyone followed minimum requirements, it wouldn’t make a difference in hacking.

Secondly, “requirements” have a huge cost. The government already mandates minimum requirements for the products it buys, called “Common Criteria”. It costs millions of dollars to get a product certified and makes no difference in cybersecurity.

Finally, it would kill innovation. The industry is in a headlong rush to “IoT”, the “Internet of Things”, where every device in your home, including hair dryers and Barbie dolls, is Internet-enabled. I’ll be at the forefront pointing out the laughable security in these devices, and how easily they allow hackers into your home. But forcing innovation to halt for the next decade while vendors addressed cybersecurity instead would be a travesty. A better model is for them to ship crap first, for us in the industry to laugh and mock their obvious bugs, and for them to fix it later.

Should companies provide a “back door” for law enforcement to gain access to a program or computer?

No (85%). This one is a no brainer. Even the most pro-law-enforcement among us recognize the problems with this one.

If passed, would the cybersecurity legislation under negotiation result in the appreciable reduction in cyber breaches of U.S. firms?

No (74%). This one surprised me, since most of the responses are from Washington D.C. policy wonks. But then the truth of CISA is that nobody cares whether it actually works — they want it firstly so that they appear to be addressing the problem, and secondly as a platform to stick amendments onto.

If passed, would the cybersecurity legislation under negotiation present a significant loss of privacy for Americans?

Yes (35%). Sadly, I’m in the minority. The reason is that policy wonks believe that the intention of CISA isn’t to invade privacy, so they’ll answer “no”. However, privacy invasion is an unintended consequence of information sharing, which is why privacy advocates answer “yes”.

Do you expect a major cyberattack against U.S. critical infrastructure to occur within the …

Century (0%). The only choices they gave were Next year (9%), Next five years (48%), and Next decade (43%). They are all morons. It’s roughly the same answer “experts” have been giving for the last 15 years, which shows they’ve been consistently wrong.

Hacking into a power company and causing a blackout is deceptively easy. A lot of these people are privy to “pen test” reports showing how hackers easily broke into a power grid and put their virtual fingers on the proverbial button to turn off the power.

But just because it’s possible doesn’t mean that people will do it. It’s equally possible for Al Qaeda, the North Koreans, or the French to send sleeper agents into the United States to create explosives from off-the-shelf ingredients, and then bomb key power distribution points to cause mass blackouts throughout the country. Attacking the grid with cyber is easy, but attacking it “kinetically” is still even easier. I’ve done pentests of the power grid. If you hired me to cause mass blackouts, I’d predominantly use explosives.

The biggest issue, though, is that the United States critical infrastructure is incredibly diverse, involving 10,000 different companies. Small, temporary blackouts are easy, but a “major” blackout affecting a large part of the grid is impractical, at least, unless you spent many years on the problem.

Eventually something might happen. But what we’ll see is a range of minor attacks against critical infrastructure long before we see a major attack. Those minor attacks haven’t happened yet, and until they do, we shouldn’t get worried about it.

Does working for the U.S. government now mean accepting that your personal information will be accessed by foreign governments?

Yes (77%), but really, it’s always been this way. Throughout the cold war, the biggest thing spies did was figure out everyone working for foreign intelligence agencies. It’s always been known that if you get clearance, you get put on a list that our adversaries (Russia, China, the French) would know about, meaning that even casually traveling to those countries as a tourist might get your hotel room bugged.

The OPM breach changes none of this. I suspect the OPM breach was by much lower level hackers, and they are finding it hard selling the information because all the potential buyers already have it.

Should the U.S. government pardon Edward Snowden?

No (91%), but not for the reasons you think.

I’m on the side that thinks Snowden is a hero. However, breaking your word should have consequences. I’d like to think that, given the same situation as Snowden, I’d’ve leaked that Verizon court order, but I would have stayed to face the consequences and go to jail.

Anybody in government who has taken solemn oaths (especially the military) is likely to agree with me, regardless of what they think about mass surveillance.

Is cybersecurity over-hyped as a problem?

Yes (19%), of course it is. It’s obvious the Internet is secure enough, or people wouldn’t be putting everything on the Internet. No matter the costs of hacking/insecurity, they are less than the benefits of the Internet.

For example, credit card fraud is the biggest cybersecurity problem today, but is so small that we get “cash back” from credit cards, because the amount of fraud is still less than the fees they charge designed to compensate for fraud.

Of course, this question has the same biases I mentioned above. If you ask anybody involved in X if the public needs more awareness of X, they’ll almost always say “yes”.

Has the U.S. military been too hesitant to conduct offensive cyber operations?

No (77%). The other 23% say “yes” because they’ve seen situations where we could’ve, but didn’t.

But “no” is the right answer. By itself, the mass global cyber surveillance uncovered by Snowden is evidence that we are the most aggressive actor in cyberspace. But beyond surveillance, we have a very active program of offensive cyber operations.

Will we reach an agreement on international rules of the road in cyberspace?

Blerg (0%). That’s sort of a nonsense question. Will we reach agreements? Yes. That’s the sort of thing politicians do. Will they have any meaning? any teeth? Will countries abide by them? Probably not.

We already have one instance, the Wassenaar Arrangement controlling “cyber weapons”, and it’s turning out horribly, in ways nobody expected.

Are U.S. government officials too hesitant to publicly attribute cyberattacks to other countries?

No (39%). The reason policy wonks answer “yes” is that they can point to examples where the government was hesitant, such as that DDoS attack against GitHub that was clearly by the Chinese government.

But at the same time, we can point to many opposite cases where the government is too eager to attribute attacks to other countries, such as the Sony hack attributed to North Korea.

It’s hard to say which happens more often, but in my experience, attacks that genuinely come from “other countries” aren’t actually directed by those countries. Governments foster an environment that makes attacking the U.S. easy, but don’t actually direct the attacks.

It’s like the terrorist attacks in Paris and San Bernardino. ISIS claims credit, but it’s unclear how much was directed and supported by ISIS, and how much the attacks were planned by locals in ISIS’s name. In much the same way, there are lots of cyberattacks from China and Russia against the United States, but I’m not sure how much they are directed by their respective governments.

Is the no-commercial-cyberspying agreement between President Barack Obama and Chinese President Xi Jinping likely to lead to a reduction in economic hacking by China?

No (60%). At most, it’ll stop the direct attacks from the Chinese Army, but hacking is rife in Chinese society, so I’m not sure how much that will stop. On the other hand, information about who in society is hacking percolates up the food chain, so it’s possible the central government could crack down on those hackers if it wants. I imagine some hacker who has been living in a mansion for a decade, selling secrets he’s hacked with collusion from Chinese officials, being surprised by the secret police showing up one day and arresting him.

Errata Security: Some notes on fast grep

This post was syndicated from: Errata Security and was written by: Robert Graham. Original post: at Errata Security

This thread on the FreeBSD mailing list discusses why GNU grep (what you get on Linux) is faster than the grep on FreeBSD. I thought I’d write up some notes on this.

I come from the world of “network intrusion detection”, where we search network traffic for patterns indicating hacker activity. In many cases, this means solving the same problem of grep with complex regexes, but doing so very fast, at 10gbps on desktop-class hardware (quad-core Core i7). We in the intrusion-detection world have seen every possible variation of the problem. Concepts like “Boyer-Moore” and “Aho-Corasick” may seem new to you, but they are old-hat to us.

Zero-copy

Your first problem is getting the raw data from the filesystem into memory. As the thread suggests, one way of doing this is “memory-mapping” the file. Another option would be “asynchronous I/O”. When done right, either solution gets you “zero-copy” performance. On modern Intel CPUs, the disk controller will DMA the block directly into the CPU’s L3 cache. Network cards work the same way, which is why getting 10-gbps from the network card is trivial, even on slow desktop systems.

Double-parsing

Your next problem: stop with the line parsing, idiots. All these command-line tools first parse to the end-of-line, either explicitly (with memchr()) or implicitly (by reading input with fgets()). This double-parses the data, and even memchr() is likely slower than the regex algorithm, unless you are using the SSE 4.2 “string and text” instructions that can process 16 bytes per clock cycle.

I mention this because all the command-line tools, from grep to awk to wc, suffer from this problem. Consider wc, “word count”, as the baseline for a simple command-line, text-processing utility. What can be simpler than counting the number of words in an input file? In fact, it’s needlessly complex and slow, double-parsing end-of-lines like the rest. It therefore represents a sort of lower limit on acceptable parsing speed: you can almost always parse text files faster than wc can. Benchmark your text parser, and if it fails to be faster than wc, go back and fix it.

I’ve created a DNS server that must parse the 8 gigabyte “.com” zone file. It does so several times faster than wc, even though the parsing (and building an in-memory database) is a much harder task. This demonstrates the problem that parsing end-of-line causes in code.

NFAs and DFAs

A regex gets converted into a finite automaton, either an NFA (nondeterministic finite automaton) or a DFA (deterministic finite automaton).

An NFA uses a small amount of memory, but a lot more CPU power. Some complicated regexes cause unbounded amounts of CPU to be used. That’s actually a vulnerability in systems that allow users to submit regexes: they can submit patterns that send the CPU into a nearly infinite loop.

A DFA uses a tiny amount of CPU power, but a correspondingly large amount of memory. Some complicated regexes cause the amount of memory to explode, though these are typically different expressions than those which cause NFA problems. Again, this is a security problem, as hackers can submit regexes that consume all memory.

The perfect regex system combines DFA and NFA. The DFA portion handles the patterns that encode well in a DFA with low memory, plus the leading parts of the patterns that will eventually match using the NFA. It’s very fast in the normal case, while also being memory efficient. It can also refuse hostile patterns that would cause memory or CPU to explode.

DFA speed

A DFA is essentially just a big table, with a state variable pointing to the current row. Each new byte of data then looks up in the current row to find the next row to point the state at.

The speed of DFA is about 9 instructions per byte input, regardless of the size of the table. Since Intel CPUs can easily execute 3 instructions per clock cycle, that’s roughly 3 clocks per byte of input.

However, the limit is not the number of instructions executed, but the speed of the L1 cache. Each new byte of input requires reading a new table row from memory. In random input, these rows will be in L1 cache. Modern x86 processors require 4 clock cycles to read the L1 cache. Thus, each byte of input costs 4 clock cycles.

Consider Intel’s latest “Skylake” Core i7 CPU that runs a quad-core at 4.0 GHz. That translates to a DFA running at 1 gigabyte per second, or 8 gbps. On four cores, that’s essentially a theoretical speed of 32 gbps. That’s why modern desktop CPUs are easily fast enough for something like a 10-gbps network intrusion detection system.

Note that much the same logic applies to low-end ARM CPUs, such as those found in cellphones and microservers like the Raspberry Pi.

Boyer-Moore

The original grep thread, however, started with the Boyer-Moore algorithm. Consider if you are searching for the pattern “xxxxxxxxxx”. This means that instead of looking at each byte of input, you can skip forward every 10 characters and test for an ‘x’. If there’s no ‘x’, then you know that the pattern can’t fit within the previous 10 bytes, and you can just skip forward again. But, if there is an ‘x’, then you have to stop and search backwards for the start of the string, then test for a full match.

More complicated patterns like “abracadabra” mean skipping forward, then testing whether the character is one of “abcdr”. Testing for multiple words at the same time works the same way: each skip is for the shortest word, and the characters tested are a combination of all the words.

Thus, for a 10 character pattern, Boyer-Moore is essentially 10x faster than a regex DFA. On the other hand, the system quickly breaks down when there are lots of patterns or any short patterns. As soon as a 3 byte pattern is entered into the mix, or there are enough characters that they start matching on random input, then the entire system becomes much slower than a DFA.

Summary

The perfect grep would therefore look like the following:

  • gets data into memory with zero-copy (either memory-map or async I/O)
  • doesn’t parse newlines first
  • uses a mix of DFA and NFA for regexes
  • uses Boyer-Moore instead for simple patterns

Errata Security: Joking aside: Trump is Unreasonable

This post was syndicated from: Errata Security and was written by: Robert Graham. Original post: at Errata Security

Orin Kerr writes an excellent post repudiating Donald Trump. As a right-of-center troll, sometimes it looks like I support Trump. I don’t — I repudiate everything about Trump.

I often defend Trump, but only because I defend fairness. Sometimes people attack Trump for identical policies supported by their own favorite politicians. Sometimes they take Trump’s bad policies and make them even worse by creating “strawman” versions of them. Because I believe in fairness, I’ll defend even Trump from unfair attacks.

But Trump is an evil politician. Trump is “fascism-lite”. You’ll quickly cite Godwin’s Law, but fascism is indeed the proper comparison. He’s nationalistic, racist, populist, and promotes the idea of a “strongman” — all the distinctive hallmarks of Nazism and Italian Fascism.

Scoundrels, like Trump, make it appear that opposition is unreasonable, that they are somehow sabotaging progress, and that all it takes is a strongman with the “will” to overcome them. But the truth is that in politics, reasonable people disagree. I’ll vigorously defend my politics and call yours wrong, but at the end of the day, we can go out and have a beer together without hating each other. Trump-style politicians, on the other hand, do everything in their power to delegitimize or dehumanize their opponents, stoking the fires of hate.

If only we had a strong leader, one able to overcome the illegitimate opposition, then progress could be made. That was the fundamental argument of Mussolini’s Fascist party, and later Hitler’s. It’s a morally bankrupt position, as Benito and Adolf showed us. Gridlock often happens in a democracy, and for all that you don’t get what you want because of it, the more democratic a society, and the more “political” everything is, the more prosperity it enjoys.

Trump’s racism is almost childlike in its simplicity. But even here, there’s an undercurrent of fascism. Trump describes the Mexicans and Chinese as “clever” people who take advantage of us. Despite his protestations that he likes the Mexicans and Chinese, this comes uncomfortably close to Nazism. Hitler killed Gypsies and Slavs in huge numbers, but the particular hatred he had for Jews was that they were a “clever” people taking advantage of Aryans.

While we reject Trump, we still need to take his positions seriously. What do you call somebody who is stupid, uneducated, crazy, and bigoted? The answer is “voter”. Trump knows this, and appeals to these people, who are just fed up with being called stupid, crazy, and bigoted. No, that doesn’t mean we enact the racist policies Trump proposes. But it does mean we take voters seriously, explaining yet again why bigotry matters, rather than simply shrugging them off.

Anyway, this post is just trying to make it clear that I don’t support Trump in any way. All the remaining Democratic and Republican candidates in the race are reasonable people who would make adequate presidents, except for Trump. Sure, they all have their downsides, but they are all about average for politicians. The only one that’s exceptional, and unreasonable, is Trump.

Errata Security: Tesla is copying Apple’s business model

This post was syndicated from: Errata Security and was written by: Robert Graham. Original post: at Errata Security

One of the interesting things about Tesla is that the company is trying to copy Apple’s business model. As a Silicon Valley entrepreneur myself, and an owner of a Tesla car, I thought I’d write up what that means.

There are two basic business models in the world. The first is cheap, low-quality, high-volume products: you don’t make much profit per unit, but you sell a ton of them. The second is expensive, high-quality (luxury), low-volume products: you don’t sell many units, but you make a lot of profit per unit.
It’s really hard to split the difference, selling high-volume, high-quality products. If you spend 1% more on quality, your customers can’t tell the difference (without more research on their part), so you’ll lose the 10% of your customers who won’t accept the higher price. Or, if you are selling to the luxury market, lowering the price to sell more units means lowering quality standards, destroying your brand.
Rarely, though, companies can split the difference. A prime example is Costco. While the average person who shops at Walmart (low-quality, high-volume store) earns less than $20,000 per year, the average income of a Costco customer is over $90,000 per year. Costco sells high-quality products to these customers, but it does so at high-volume, keeping the prices low.
Apple is another company that succeeds at this, selling higher quality products at enormous volumes, at mainstream prices.
It’s at this point that those who don’t like Apple laugh at me for calling it “quality” products. They are wrong. While many aspects of quality are subjective, leading some to dislike Apple, other aspects are objective.
Most luxury products are really only subjectively quality products. Take Ferrari cars, for example. Sure, they go fast, but they also spend a lot of time in the shop. Likewise, a lot of high-fashion falls apart if you wash it. The biggest lie in luxury is Whole Foods, which often sells crap products like bottled tap water for high prices.
At the same time, some quality measurements are objective. That’s how Costco works. For every product category, their buyers apply rigorous quality tests before selling something under their “Kirkland” brand, whether it’s soap, cola, vodka, luggage, or shoes.
Likewise, Apple is objectively a quality product. Take an Apple power supply, remove any branding, and give it to an engineer to compare against other power supplies. The engineer will tell you that the Apple product is better designed and uses higher quality components.
But being higher quality doesn’t work if customers don’t know it. That’s why every other company has crappy power supplies, because it’s not a value that companies can communicate to their customers. The customers don’t care.
That’s where branding comes in. The business models of Costco and Apple are precarious. As soon as customers fail to recognize their better quality, they’ll leave these companies for cheaper products. That makes these companies focus obsessively on maintaining both subjective and objective quality. This communicates the brand of quality even when customers can’t judge for themselves.
Look at the Apple power supply, on the outside. It screams “APPLE”. It’s not (just) the logo that does this. It’s the fact that the power supply has the same white plastic, curved-edge design of the first iPods and MacBooks. Subjectively, every bit of the power supply feels different from the standard industrial bricks sourced from random vendors. Even if it’s not actual quality, subjectively it feels different, and hence (if you like Apple) better.
The problem with all this “quality” is that it gets expensive. It can easily double the price. Customers impressed with Apple’s quality wouldn’t be willing to pay for it. Sure, they’ll pay 30% more, because it’s a status symbol and “cool”, but they won’t pay double. Therefore, Apple has to tackle the cost issue.
They do this with “NRE” or “up-front” payments. The reason quality components are expensive is because they are produced in low volume, the same business model duality described above. Apple has to push its business model down through the supply chain. That means going to vendor, giving them a bunch of money (Non-Recurring Engineering) to design a higher quality part, then capital so they can build a factory to produce that part in volume. In exchange, Apple then gets to buy that part at a low price.
Apple is so good at this that they can produce a high-quality iPhone at the same cost as low-quality competitors. This produces huge profits per iPhone. Even though Apple sells less than 20% of all mobile phones, it earns most of the industry’s profits. Nobody can compete with them. Another vendor wishing to enter the market doesn’t have enough capital to create the same deals Apple gets, so can’t produce a quality phone as cheaply, and thus must sell in lower volumes for lower profits. And even then, they still can’t compete because such a low volume product can’t generate enough profits for the engineering required. And, there is certainly no money left over to create the luxury branding needed to support the marketing.
Thus, not only is Apple’s model unique, nobody else can replicate it. At least, not in any market where Apple competes.

Tesla

Now let’s talk about Tesla. Their endgame is to be like Apple, but for cars. That means selling a high-margin product, but at volume competing against other lower-priced competitors.
That car will probably be the Model 3, a $35k car that sells against a Chevy Volt, Nissan Leaf, and BMW i3.
To get there, Tesla needs to first create a brand, namely “it’s what the cool people drive“. Branding isn’t your name, logo, motto, or anything conscious. Branding is about unconconscious emotions. People move from Android to iPhones (and rarely the other direction) simply because of the emotional feeling that it’s why the cool kids own. It’s like buying a kid an XBox for Christmas, which objectively meets the kid’s needs better any other console, but having the kid cry because all the cool kids at school have PlayStations. Tesla is trying to create a brand that’ll cause kids to cry if you don’t buy them one when they turn 18.
Part of that is their rebranding of “internal combustion engines”, or “ICE”, as uncool. It’s weird talking to Tesla owners and their disdain for ICE, as if they all went to the same cult. It’s like some shameful cooties that other car makers have that they’ll never be able to get rid of. Even though BMW produces an all-electric i3, they still can’t shake their ICE heritage.
And indeed, it is a hard heritage to move beyond, as this story describes. Existing car companies sell through dealers, which make their profits by servicing cars, which electric cars need less of. Thus, the sales people steer customers toward gasoline cars, or try to trick them into paying for a “service” plan that includes free oil changes — something electric cars don’t need. It’s like watching Microsoft flail around with its tragicly un-cool “Zune” against the iPod. Objectively, it was just as good or better. Subjectively, they failed in branding against Apple in every possible way marketing people can fail.
Ultimately, what Tesla is trying to do with the current model (Model S) is to create a “cool” factor that it can later apply to the later mainstream model (Model 3). It’ll take them time to ramp up production and support network, so the number of cars they can build is limited anyway. Therefore, they make the coolest car possible for under $100,000.
And they succeeded. The Model S is better than every other sedan on the market, and also better than most all sports cars. It’s better in every single metric but one (long distance driving). The huge battery means it drives three times further than any other electric car. Because of the huge battery, it can generate faster acceleration than any car costing less than $1 million. Because of the huge battery sitting at the bottom of the car, lowering the center of gravity, it’s handling is better than any other car not specifically tuned for the track. It’s not just this, but a long list of other cool features, like the central control unit, the aluminum body, the self-driving features, and so on.
In short, the Model S is iconic, like Apple. It’s the mostly highly rated car in car enthusiast magazines ever.
The mainstream Model 3 won’t be as iconic, because it’ll be cheaper. But yet, the brand will be established. For example, the high-end Model S is nearly all aluminum, but the cheaper Model 3 will be mostly steel. But yet, marketing will still focus on the few remaining light-weight parts, extolling their virtues, even though in practice they are little different than competitors. The competitors won’t be able to get into a fight over whose car is lightest, because then Tesla will always fight back with the Model S. Apple has been doing this for years with things like processor speed — objectively, it’s no faster, but subjectively, they convince the faithful it’s somehow better.
In much the same way that Apple became the biggest consumer of flash memory, and used it’s capital to guarantee it paid the lowest price in the industry, Tesla is doing the same with batteries. The Model S has three times the battery per car as any other electric vehicle, and sells more electric cars than anyone else. Thus, it drives the battery market.
That’s why they are spending so much capital on the “Gigafactory” to produce batteries, currently partnering with Panasonic. Just like Apple has to spend capital to get low-cost parts and flash memory, Tesla has to spend capital to guarantee cheap batteries. That means when the mainstream Model 3 starts competing against the Volt, Leaf, and i3, it’ll have larger batteries for a cheaper cost than its competitors.
It’s weird watching business models like this unfold. Existing car companies aren’t willing to bet that much capital in an unproven market. Tesla’s investors, on the other hand, are betting everything to create that market. Thus, Tesla can do things that entrenched companies cannot. Assuming Tesla continues to be competent, and that the electric car market grows, then they should command the lion’s share of it — just like Apple.
Recently, industry veteran Bob Lutz wrote an op-ed claiming Tesla was doomed because it didn’t have a dealer network like at traditional car company. It’s just like reading the op-eds from Nokia, Microsoft, and Blackberry when Apple released the iPhone. Lutz might be partly right that Tesla needs dealers to provide capital to for inventory management, but he’s otherwise profoundly wrong. Tesla breaks dealership model even if it didn’t want to, such as different way electrics need servicing. Dealerships are corrupt quasi-monopolies, and nobody likes dealing with them. Sure, Tesla may lose some sales because customers can’t drive a car instantly off the lot, but they’ll also gain customers fed up with corrupt businesses. Putting showrooms in shopping malls instead is just one more way that Tesla easily makes itself distinctly different from its internal combustion competitors.
With all the good ways Tesla is executing on Apple’s business model, it’s also making a lot of mistakes. There are lots of small design flaws in the Model S, and some clearly lacking areas. For example, the voice command system is decade old crap. Tesla desperately needs to license a better one from Apple (Siri), Microsoft (Cortana), or Google (Ok Google).
What these flaws show is that Tesla doesn’t have Musk’s full attention. He’s off dreaming about hyperloops, solar panels, and SpaceX. Tesla doesn’t have somebody like a Steve Jobs, or even a John Ivy, who obsesses over every small detail to make everything perfect. This flaw can be fatal. The Tesla Model S driving experience is so awesome is makes us look past the small flaws, but there’s no excuse for those flaws to exist. If they persist, they’ll kill the Model 3. Imagine test driving a Nissan Leaf with Apple Siri embedded, where you can ask about last night’s game scores, and then step into a Model 3 which can’t even dial a phone properly. Car innovation is continuing beyond the electric model and self-driving features — Tesla needs to be up near the front on all of them.
Conclusion

When Apple released the iPhone during the recession, I bought a bunch of Apple stock — enough to buy my Tesla Model S from the gains. Just by looking at the product, business model, and the market, it should’ve been obvious to anybody that Apple had changed everything.
Electrics aren’t quite the same game changer — they are still cars. The challenges of charging them, and the inability of pure electrics to drive long distances, mean that they won’t take over the market. In a decade, though, even without government subsidies, they’ll command a good 30% of the market. Even if Tesla isn’t one of the top car companies, there’s a good chance it’ll be one of the most profitable — if it can continue to execute on this model. High margins means that even if it’s not selling the most cars, it could be earning the most profits in the industry.
Their stock is already high, and Musk doesn’t seem to be executing as well as Jobs, so I’m not interested in buying their stock. But really, the Model S is an awesome car to drive.

Errata Security: Tesla is copying Apple’s business model

This post was syndicated from: Errata Security and was written by: Robert Graham. Original post: at Errata Security

One of the interesting things about Tesla is that the company is trying to copy Apple’s business model. As a Silicon Valley entrepreneur myself, and an owner of a Tesla car, I thought I’d write up what that means.

There are two basic business models in the world. The first is cheap, low-quality, high-volume products. You don’t make much profit per unit, but you sell a ton of them. The second is expensive, high-quality (luxury), low-volume products. You don’t sell many units, but you make a lot of profit per unit.
It’s really hard to split the difference, selling high-volume, high-quality products. If you spend 1% more on quality, your customers can’t tell the difference (without doing more research), so you’ll lose the 10% of customers who won’t accept the higher price. Or, if you are selling to the luxury market, lowering the price to sell more units means lowering quality standards, destroying your brand.
Rarely, though, companies can split the difference. A prime example is Costco. While the average person who shops at Walmart (low-quality, high-volume store) earns less than $20,000 per year, the average income of a Costco customer is over $90,000 per year. Costco sells high-quality products to these customers, but it does so at high-volume, keeping the prices low.
Apple is another company that succeeds at this, selling higher quality products at enormous volumes, at mainstream prices.
It’s at this point that those who don’t like Apple laugh at me for calling its products “quality” products. They are wrong. While many aspects of quality are subjective, leading some to dislike Apple, other aspects are objective.
Most luxury products are quality products only subjectively. Take Ferrari cars, for example. Sure, they go fast, but they also spend a lot of time in the shop. Likewise, a lot of high fashion falls apart if you wash it. The biggest lie in luxury is Whole Foods, which often sells crap products, like bottled tap water, at high prices.
At the same time, some quality measurements are objective. That’s how Costco works. For every product category, their buyers apply rigorous quality tests before selling something under their “Kirkland” brand, whether it’s soap, cola, vodka, luggage, or shoes.
Likewise, Apple is objectively a quality product. Take an Apple power supply, remove any branding, and give it to an engineer to compare against other power supplies. The engineer will tell you that the Apple product is better designed and uses higher quality components.
But being higher quality doesn’t work if customers don’t know it. That’s why every other company ships crappy power supplies: it’s not a value they can communicate to their customers. The customers don’t care.
That’s where branding comes in. The business models of Costco and Apple are precarious. As soon as customers fail to recognize their better quality, they’ll leave these companies for cheaper products. That makes these companies focus obsessively on maintaining both subjective and objective quality. This communicates the brand of quality even when customers can’t judge for themselves.
Look at the Apple power supply from the outside. It screams “APPLE”. It’s not (just) the logo that does this. It’s the fact that the power supply has the same white plastic, curved-edged design of the first iPods and MacBooks. Subjectively, every bit of the power supply feels different from the standard industrial bricks sourced from random vendors. Even if it’s not actual quality, subjectively it feels different, and hence (if you like Apple) better.
The problem with all this “quality” is that it gets expensive. It can easily double the price. Even customers impressed with Apple’s quality wouldn’t be willing to pay that much for it. Sure, they’ll pay 30% more, because it’s a status symbol and “cool”, but they won’t pay double. Therefore, Apple has to tackle the cost issue.
They do this with “NRE” or “up-front” payments. The reason quality components are expensive is that they are produced in low volume, the same business-model duality described above. Apple has to push its business model down through the supply chain. That means going to a vendor, giving them a bunch of money (Non-Recurring Engineering) to design a higher-quality part, then capital so they can build a factory to produce that part in volume. In exchange, Apple then gets to buy that part at a low price.
Apple is so good at this that it can produce a high-quality iPhone at the same cost as low-quality competitors. This produces huge profits per iPhone. Even though Apple sells less than 20% of all mobile phones, it earns most of the industry’s profits. Nobody can compete with them. Another vendor wishing to enter the market doesn’t have enough capital to strike the same deals Apple gets, so it can’t produce a quality phone as cheaply, and thus must sell in lower volumes at lower profits. And even then, it still can’t compete, because such a low-volume product can’t generate enough profit to cover the engineering required. And there is certainly no money left over to create the luxury branding needed to support the marketing.
Thus, not only is Apple’s model unique, nobody else can replicate it. At least, not in any market where Apple competes.

Tesla

Now let’s talk about Tesla. Their endgame is to be like Apple, but for cars: selling a high-margin product, at volume, against lower-priced competitors.
That car will probably be the Model 3, a $35k car that will sell against the Chevy Volt, Nissan Leaf, and BMW i3.
To get there, Tesla first needs to create a brand, namely “it’s what the cool people drive”. Branding isn’t your name, logo, motto, or anything conscious. Branding is about unconscious emotions. People move from Android to iPhones (and rarely the other direction) simply because of the emotional feeling that it’s what the cool kids own. It’s like buying a kid an Xbox for Christmas, which objectively meets the kid’s needs better than any other console, but having the kid cry because all the cool kids at school have PlayStations. Tesla is trying to create a brand that’ll cause kids to cry if you don’t buy them one when they turn 18.
Part of that is their rebranding of “internal combustion engines”, or “ICE”, as uncool. It’s weird talking to Tesla owners about their disdain for ICE; it’s as if they all joined the same cult. It’s like some shameful cooties that other carmakers will never be able to get rid of. Even though BMW produces an all-electric i3, they still can’t shake their ICE heritage.
And indeed, it is a hard heritage to move beyond, as this story describes. Existing car companies sell through dealers, which make their profits by servicing cars, something electric cars need less of. Thus, the salespeople steer customers toward gasoline cars, or try to trick them into paying for a “service” plan that includes free oil changes, which electric cars don’t need. It’s like watching Microsoft flail around with its tragically un-cool “Zune” against the iPod. Objectively, it was just as good or better. Subjectively, Microsoft failed in branding against Apple in every possible way marketing people can fail.
Ultimately, what Tesla is trying to do with the current car (the Model S) is create a “cool” factor that it can later apply to the mainstream Model 3. It’ll take time to ramp up production and a support network, so the number of cars they can build is limited anyway. Therefore, they make the coolest car possible for under $100,000.
And they succeeded. The Model S is better than every other sedan on the market, and also better than most sports cars. It’s better in every single metric but one (long-distance driving). The huge battery means it drives three times further than any other electric car. Because of the huge battery, it can generate faster acceleration than any car costing less than $1 million. And because the huge battery sits at the bottom of the car, lowering the center of gravity, its handling is better than any other car not specifically tuned for the track. It’s not just this, but a long list of other cool features: the central control unit, the aluminum body, the self-driving features, and so on.
In short, the Model S is iconic, like Apple’s products. It’s the most highly rated car ever in car-enthusiast magazines.
The mainstream Model 3 won’t be as iconic, because it’ll be cheaper. And yet, the brand will already be established. For example, the high-end Model S is nearly all aluminum, but the cheaper Model 3 will be mostly steel. Even so, marketing will still focus on the few remaining lightweight parts, extolling their virtues, even though in practice they are little different from competitors’. The competitors won’t be able to get into a fight over whose car is lightest, because then Tesla will always fight back with the Model S. Apple has been doing this for years with things like processor speed: objectively, it’s no faster, but subjectively, they convince the faithful it’s somehow better.
In much the same way that Apple became the biggest consumer of flash memory, and used its capital to guarantee it paid the lowest price in the industry, Tesla is doing the same with batteries. The Model S has three times the battery capacity per car of any other electric vehicle, and Tesla sells more electric cars than anyone else. Thus, it drives the battery market.
That’s why they are spending so much capital on the “Gigafactory” to produce batteries, currently in partnership with Panasonic. Just as Apple spends capital to get cheap parts and flash memory, Tesla spends capital to guarantee cheap batteries. That means when the mainstream Model 3 starts competing against the Volt, Leaf, and i3, it’ll have larger batteries at a lower cost than its competitors.
It’s weird watching business models like this unfold. Existing car companies aren’t willing to bet that much capital in an unproven market. Tesla’s investors, on the other hand, are betting everything to create that market. Thus, Tesla can do things that entrenched companies cannot. Assuming Tesla continues to be competent, and that the electric car market grows, then they should command the lion’s share of it — just like Apple.
Recently, industry veteran Bob Lutz wrote an op-ed claiming Tesla was doomed because it didn’t have a dealer network like a traditional car company. It’s just like reading the op-eds from Nokia, Microsoft, and Blackberry when Apple released the iPhone. Lutz might be partly right that Tesla needs dealers to provide capital for inventory management, but he’s otherwise profoundly wrong. Tesla breaks the dealership model even if it didn’t want to, if only because of the different way electrics need servicing. Dealerships are corrupt quasi-monopolies, and nobody likes dealing with them. Sure, Tesla may lose some sales because customers can’t drive a car instantly off the lot, but they’ll also gain customers fed up with corrupt businesses. Putting showrooms in shopping malls instead is just one more way that Tesla easily makes itself distinctly different from its internal-combustion competitors.
With all the good ways Tesla is executing on Apple’s business model, it’s also making a lot of mistakes. There are lots of small design flaws in the Model S, and some clearly lacking areas. For example, the voice command system is decade-old crap. Tesla desperately needs to license a better one from Apple (Siri), Microsoft (Cortana), or Google (OK Google).
What these flaws show is that Tesla doesn’t have Musk’s full attention. He’s off dreaming about hyperloops, solar panels, and SpaceX. Tesla doesn’t have somebody like a Steve Jobs, or even a Jonathan Ive, who obsesses over every small detail to make everything perfect. This flaw can be fatal. The Tesla Model S driving experience is so awesome it makes us look past the small flaws, but there’s no excuse for those flaws to exist. If they persist, they’ll kill the Model 3. Imagine test-driving a Nissan Leaf with Apple’s Siri embedded, where you can ask about last night’s game scores, and then stepping into a Model 3 which can’t even dial a phone properly. Car innovation is continuing beyond electric drivetrains and self-driving features; Tesla needs to be up near the front on all of them.
Conclusion

When Apple released the iPhone during the recession, I bought a bunch of Apple stock — enough to buy my Tesla Model S from the gains. Just by looking at the product, business model, and the market, it should’ve been obvious to anybody that Apple had changed everything.
Electrics aren’t quite the same game-changer; they are still cars. The challenges of charging them, and the inability of pure electrics to drive long distances, mean that they won’t take over the market. In a decade, though, even without government subsidies, they’ll command a good 30% of the market. Even if Tesla isn’t one of the top car companies, there’s a good chance it’ll be one of the most profitable, if it can continue to execute on this model. High margins mean that even if it’s not selling the most cars, it could be earning the most profits in the industry.
Their stock is already high, and Musk doesn’t seem to be executing as well as Jobs, so I’m not interested in buying their stock. But really, the Model S is an awesome car to drive.

Raspberry Pi: Astro Pi: Launch is tonight!

This post was syndicated from: Raspberry Pi and was written by: David Honess. Original post: at Raspberry Pi

Tonight, two specially augmented Raspberry Pi computers, called Astro Pis, will launch into SPAAAAACE! The Astro Pis will be running experimental Python programs written by school-age students; the results will be downloaded back to Earth and made available online for all to see.

  • When: 22:55:41 GMT (first launch window opens)
  • Where: Cape Canaveral, Florida, USA
  • Coverage: NASA TV live stream (below, and keep an eye out – Astro Pi may get a mention in the launch commentary)

If you’ve been following the Astro Pi project, you’ll know that we were bumped from Tim Peake’s launch vehicle due to a cargo overbooking back in October.

The OA-4 Cygnus Spacecraft, image credit: Orbital ATK

We’re now going to launch on Orbital Sciences’ Cygnus cargo freighter (an unmanned spacecraft, above) on its fourth supply mission to the ISS. Orbital Sciences have contracted ULA to launch it into space on an Atlas V rocket (below).

The Atlas V rocket, image credit: United Launch Alliance

When you need to rendezvous with an object in orbit, the timing of the launch is often critical to ensure that you get into the right orbital trajectory. This is often achieved with an instantaneous launch window where the rocket has to lift off at a precisely calculated time, otherwise the two objects will never meet in space.

Obviously, this approach can significantly limit the probability of an on-time launch. For instance, you may need to wait for a rain shower to pass by, a technical problem to be resolved or a boat in restricted waters to be chased away.

However, this is not the case for our launch! The Atlas V has so much performance capability that it provides a 30-minute launch window each day, and it’s all thanks to energy. The rocket has enough available energy to accommodate a very large off-nominal time of launch: 15 minutes early or 15 minutes late. The extra power is then used by clever steering algorithms to compensate for the rotation of the Earth relative to the orbital target.
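
To get a feel for why those steering algorithms are needed, here is a rough back-of-the-envelope sketch (my own illustration, not from the post; the latitude and Earth model are approximations) of how far the Earth’s rotation carries the launch pad during a 15-minute off-nominal launch:

```python
import math

# One full rotation of the Earth takes one sidereal day (~86164 s)
SIDEREAL_DAY_S = 86164.0
EQUATORIAL_RADIUS_M = 6_378_137.0   # WGS-84 equatorial radius
CAPE_CANAVERAL_LAT_DEG = 28.5       # approximate latitude of the pad

# Eastward ground speed: fastest at the equator, scaled by cos(latitude)
equatorial_speed = 2 * math.pi * EQUATORIAL_RADIUS_M / SIDEREAL_DAY_S
site_speed = equatorial_speed * math.cos(math.radians(CAPE_CANAVERAL_LAT_DEG))

# How far the pad moves during a 15-minute off-nominal launch time
displacement_km = site_speed * 15 * 60 / 1000

print(f"ground speed at the pad: {site_speed:.0f} m/s")
print(f"pad displacement over 15 minutes: ~{displacement_km:.0f} km")
```

A 15-minute slip moves the pad by several hundred kilometres relative to the target orbital plane, which is roughly the error the extra energy and steering have to absorb.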

Below is the final configuration of the rocket. It will fly in the 401 vehicle configuration with a four-metre fairing, no solid rocket boosters, and a single-engine Centaur upper stage. The Astro Pis are sitting inside a small cargo transfer bag within the Cygnus spacecraft at the top.

OA-4 Launch Configuration, image credit: United Launch Alliance

And here is the ascent profile, with the main events numbered.

OA-4 Ascent Profile, image credit: United Launch Alliance

These are the event descriptions, with their times relative to lift-off.

OA-4 Main Events, image credit: United Launch Alliance

And finally, this is the ground trace; note that it will pass over the UK around 23 minutes into the mission. However, it will be in the Earth’s shadow and so almost impossible to spot with the naked eye.

OA-4 Ground Trace, image credit: United Launch Alliance

If everything goes according to plan, the Cygnus spacecraft will arrive at the ISS on the 6th of December at 9:00 GMT. The docking is a fascinating process and really worth watching if you’re interested; NASA TV will show it. It involves one of the crew operating the Canadarm2 to grab onto the incoming spacecraft before pulling it in.

Cygnus being moved onto the pad yesterday – the two Raspberry Pis are in the cone at the top.

Should the launch be delayed for any reason, here is a list of subsequent launch windows that occur over the next few days:

  • December 4
    Launch: 22:33 GMT
    ISS arrival: Dec 7 or 8
  • December 5
    Launch: 22:10 GMT
    ISS arrival: Dec 9
  • December 6
    Launch: 21:44 GMT
    ISS arrival: Dec 19
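
As a quick sanity check, the drift in the window opening times listed above can be computed directly (my own arithmetic; the primary attempt’s date and the year are assumed from context, not stated explicitly in the post):

```python
from datetime import datetime

# Launch window openings (GMT) from the list above; the date of the
# primary attempt (3 December 2015) is an assumption from context
windows = [
    datetime(2015, 12, 3, 22, 55, 41),
    datetime(2015, 12, 4, 22, 33),
    datetime(2015, 12, 5, 22, 10),
    datetime(2015, 12, 6, 21, 44),
]

# Minutes by which each day's window opens earlier than 24 h after the last
shifts = [
    24 * 60 - (later - earlier).total_seconds() / 60
    for earlier, later in zip(windows, windows[1:])
]

print([round(s, 1) for s in shifts])  # → [22.7, 23.0, 26.0]
```

The window creeps earlier by roughly 23 to 26 minutes per day, consistent with the launch site having to line up each day with the ISS’s slowly precessing orbital plane.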

If it’s delayed to the 6th, it will have to loiter in orbit for a few weeks before it can dock with the ISS. This is because of other visiting vehicle traffic, such as the Soyuz 45S launching on December 15th, carrying Tim Peake and his crew mates.

I will be attending the launch at Kennedy Space Center, along with Matt Richardson and Jonathan Bell (AKA jdb on the forums). We will be live-tweeting from our personal Twitter accounts (@Dave_Spice and @MattRichardson), from the Raspberry Pi account and also from the main Astro Pi account.

Please follow the official Twitter account below for the very latest updates on the launch.

This has been a long road for us and our partners, so please keep fingers and toes crossed!

The post Astro Pi: Launch is tonight! appeared first on Raspberry Pi.