Posts tagged ‘education’

TorrentFreak: Spanish Government Claims Success in Internet Piracy Fight

This post was syndicated from: TorrentFreak and was written by: Andy. Original post: at TorrentFreak

For many years Spain was regarded as something of a piracy safe haven, but in recent times the country has taken steps to repair its fractured relationship with the entertainment industries.

Since 2012, Spain has implemented a series of changes and adjustments to local copyright law, each aimed at clamping down on the online distribution of copyrighted content. January 1, 2015 saw the most notable development, with the introduction of tough new legislation aimed at quickly shutting down pirate sites.

Now the country’s Ministry of Education, Culture and Sports is reporting success in its battle with the Internet pirates in a new report highlighting achievements since the beginning of legislative change three years ago.

According to the Ministry, more than 95% of the 444 complaints filed with the Intellectual Property Commission by creators and rightsholders have been resolved.

In total, 252 websites were ordered by the Commission to remove illegal content with 247 (98%) responding positively to the demands. According to the Ministry, 31 ‘pirate’ sites chose to shut down completely.

Last December and following a complaint filed by 20th Century Fox, Warner Bros, Disney, Universal, Paramount and Sony, police also raided two of the country’s leading video streaming sites. Two men were arrested.

In addition to these voluntary and forced shutdowns, Spanish courts have recently ordered local ISPs to block several sites after rightsholders took advantage of a recent change in the law. Unsurprisingly, The Pirate Bay was the first site to be targeted.

In its report the Ministry reports that a total of five websites have now been ordered to be blocked in this manner following two High Court judgments. They include Goear, the first unlicensed music site to be tackled by the legislation.

Given the scale of the problem the gains being reported by the Spanish government seem relatively modest. Nevertheless, the Ministry insists that progress is definitely being made.

Citing figures from Alexa showing that three years ago 30 ‘pirate’ sites were among the top 250 most-visited sites in Spain, the Ministry says that now just 13 are present. Furthermore, those 13 are lower placed than they were before.

“It is clear from this data that pirate websites are losing their share of total Internet traffic in Spain,” the Ministry reports.

But while the claimed shuttering of dozens of sites and the removal of copyrighted content following complaints is being portrayed as a success story, the real test is whether Spaniards are buying more content.

According to figures published this week by local music industry group Promusicae, they are. Music sales in Spain totaled €70.6 million ($78 million) in the first half of 2015, an increase of almost 11%.

However, rather than solely attributing the successes to anti-piracy measures, Promusicae praised streaming as the industry’s savior. According to the group, streaming revenues increased 40% in the first six months of 2015 when compared to the same period last year.

With music industry successes ringing in their ears, later this year the TV and movie industries will learn whether Spaniards have a similar appetite for their products ‘on demand’. After a seemingly endless wait, Netflix will launch locally in the second half of 2015.

Beating piracy in Spain will be a tall order, but Netflix CEO Reed Hastings is upbeat.

“We can think of this as the bottled water business,” Hastings said. “Tap water can be drunk and is free, but there is still a public that demands bottled water.”

Source: TorrentFreak, for the latest info on copyright, file-sharing, torrent sites and the best VPN services.

TorrentFreak: UK Anti-Piracy ‘Education’ Campaign Starts This Summer

This post was syndicated from: TorrentFreak and was written by: Ernesto. Original post: at TorrentFreak

In an effort to curb online piracy, early last year the movie and music industries reached agreement with the UK’s leading ISPs to send ‘warnings’ to alleged pirates.

As we previously revealed, the Voluntary Copyright Alert Programme (VCAP) will monitor illegal P2P file-sharing with a strong focus on repeat infringers.

The alerts program is part of the larger Creative Content UK (CCUK) initiative, which will kick off with a broad anti-piracy PR campaign targeted at the general public.

The education part is nearly ready for launch, and TF is informed that it will officially kick off this summer.

“…work has started on the education component of the campaign, which helps to lay the ground and is designed to inform and raise consumer awareness and to engage with people around their love of content. The first activities are scheduled to start later this summer,” a Creative Content UK spokesperson tells TF.

The education part is aimed at steering people away from piracy sites by pointing out how convenient and accessible legal services are.

The associated alerts campaign has no hard start date yet but is also being finalized and will begin at a later date.

“The education campaign will show consumers how to easily access content – such as music, film, TV, books, games, magazines and sport – from authorized online sources which provide a superior user experience. So it makes sense for this to happen before the alerts program starts,” CCUK informs us.

Both programs are supported by the UK Government with millions in funding. The Government justifies this contribution with an expected increase in sales, and thus tax revenue.

The ultimate goal is to bring down local piracy rates and during the months following the rollout the file-sharing habits of UK Internet users will be frequently polled to measure the impact of the campaign.

“The aim of Creative Content UK is to encourage greater use of legal content services and to reduce online copyright infringement. There will be regular measurements of legal and illegal consumption of content throughout the duration of the initiative, which will be compared with levels before the launch of the program,” CCUK tells TF.

To what degree the PR campaign and alerts will convert pirates into paying customers remains to be seen. In any case, it won’t go unnoticed.

Source: TorrentFreak, for the latest info on copyright, file-sharing, torrent sites and the best VPN services.

Errata Security: Software and the bogeyman

This post was syndicated from: Errata Security and was written by: Robert Graham. Original post: at Errata Security

This post about the July 8 glitches (United, NYSE, WSJ failed) keeps popping up in my Twitter timeline. It’s complete nonsense.

What’s being argued here is that these glitches were due to some sort of “moral weakness”, like laziness, politics, or stupidity. It’s a facile and appealing argument, so scoundrels make it often — to great applause from the audience. But it’s not true.

Legacy

Layers and legacies exist because working systems are precious. More than half of big software projects are abandoned, because getting new things to work is a hard task. We place so much value on legacy, working old systems, because the new replacements usually fail.

An example of this is the failed BIND10 project. BIND, the Berkeley Internet Name Daemon, is the oldest and most popular DNS server. It is the de facto reference standard for how DNS works, more so than the actual RFCs. Version 9 of the project is 15 years old. Therefore, the consortium that maintains it funded development for version 10. They completed the project, then effectively abandoned it, as it was worse in almost every way than the previous version.

The reason legacy works well is the enormous regression testing that goes on. In robust projects, every time there is a bug, engineers create one or more tests to exercise the bug, then add that to the automated testing, so that from now on, that bug (or something similar) can never happen again. You can look at a project like BIND9 and point to the thousands of bugs it’s had over the last decade. So many bugs might make you think it’s bad, but the opposite is true: it means that it’s got an enormous regression test system that stresses the system in peculiar ways. A new replacement will have just as many bugs — but no robust test that will find them.
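
The pattern described above can be made concrete with a minimal sketch (in Python rather than BIND's C): every previously reported bug becomes an assertion that the suite runs forever after. The `parse_ttl` function and its "bug history" here are invented for illustration, not BIND code.

```python
# Hypothetical example: a DNS-style TTL parser plus the regression
# tests that accumulated around it. Each assertion pins a bug that
# was reported and fixed once, so it can never silently return.

def parse_ttl(value: str) -> int:
    """Parse a TTL like '300' or '5m' into seconds."""
    units = {"s": 1, "m": 60, "h": 3600, "d": 86400}
    value = value.strip().lower()   # bug fix: tolerate whitespace/case
    if value and value[-1] in units:
        return int(value[:-1]) * units[value[-1]]
    return int(value)

def test_ttl_regressions():
    assert parse_ttl("300") == 300       # plain seconds
    assert parse_ttl("5m") == 300        # unit suffix once crashed
    assert parse_ttl(" 1h ") == 3600     # surrounding whitespace
    assert parse_ttl("2D") == 172800     # uppercase unit

test_ttl_regressions()
```

The value is cumulative: a rewrite that discards the code but keeps these tests inherits a decade of discovered edge cases for free.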

A regression test is often more important than the actual code. If you want to build a replacement project, start with the old regression test. If you are a software company and want to steal your competitor’s intellectual property, ignore their source code and steal their regression test instead.

People look at the problems of legacy and believe that we’d be better off without it, if only we had the will (the moral strength) to do the right thing and replace old systems. That’s rarely true. Legacy is what’s reliable and working; it’s new stuff that ultimately is untrustworthy and likely to break. You should worship legacy, not fear it.

Technical debt

Almost all uses of the phrase “technical debt” treat it as a bad thing. The opposite is true: the term was coined to refer to a good thing.

The analogy is financial debt. That, too, is often used incorrectly as a pejorative. People focus on the negatives, the tiny percentage of bankruptcies. They don’t focus on the positives: what that debt finances, like factories, roads, and education. Our economic system is “capitalism”, where “capital” just means “debt”. The dollar bills in your wallet are a form of debt. When you contribute to society, it is indebted to you, so it gives you a marker, which you can then redeem by exchanging it with society for something that you want, like a beer at your local bar.

The same is true of technical debt. It’s a good thing, a necessary thing. The reason we talk about technical debt isn’t so that we can get rid of it, but so that we can keep track of it and exploit it.
The Medium story claims:

A lot of new code is written very very fast, because that’s what the intersection of the current wave of software development (and the angel investor / venture capital model of funding) in Silicon Valley compels people to do.

This is nonsense. Every software project of every type has technical debt. Indeed, it’s open-source software that overwhelmingly has the most technical debt. Most open-source software starts as somebody wanting to solve a small problem now. If people like the project, then it survives, and more code and features are added. If people don’t like it, the project disappears. By sheer evolution, that which survives has technical debt. Sure, some projects are better than others at going back and cleaning up their debt, but it’s something intrinsic to all software engineering.

Figuring out what users want is 90% of the problem; how the code works is only 10%. Most software fails because nobody wants to use it. Focusing on removing technical debt, investing many times more effort in creating the code, just magnifies the cost of failure when your code still doesn’t do what users want. The overwhelmingly correct development methodology is to incur lots of technical debt at the start of every project.

Technical debt isn’t about bugs. People like to equate the two, as both are seen as symptoms of moral weakness. Instead, technical debt is about the fact that fixing bugs (or adding features) becomes more expensive the more technical debt you have. If a section of the code is bug-free, and unlikely to be extended in the future, then there will be no payback for cleaning up its technical debt. On the other hand, if you are constantly struggling with a core piece of code, making lots of changes to it, then you should refactor it, cleaning up the technical debt so that changes become easier.

In summary, technical debt is not some sort of weakness in code that needs to be fought, but merely an aspect of code that needs to be managed.

Complexity

More and more, software ends up interacting with other software. This causes unexpected things to happen.

That’s true, but the alternative is worse. As a software engineer building a system, you can either link together existing bits of code, or try to “reinvent the wheel” and write those bits yourself. Reinventing is sometimes good, because you get something tailored for your purpose without all the unnecessary complexity. But more often you experience the rewriting problem I describe above: your new code is untested and buggy, as opposed to the well-tested, robust, albeit complex module that you avoided.

The reality of complexity is that we demand it of software. We all want Internet-connected lightbulbs in our homes that we can turn on and off with a smartphone app while vacationing in Mongolia. This demands a certain level of complexity. We like such complexity; arguing that we should get rid of it and go back to a simpler time of banging rocks together is unacceptable.

When you look at why glitches at United, NYSE, and WSJ happen, it’s because once they’ve got a nice robust system working, they can’t resist adding more features to it. It’s like bridges. Over decades, bridge builders get more creative and less conservative. Then a bridge fails because builders were too aggressive, and the entire industry moves back to being more conservative, overbuilding bridges and being less creative about new designs. It’s been like that for millennia. It’s a good thing; have you seen the new bridges lately? Sure, this approach has a long-term cost, but it also has benefits that more than make up for it. Yes, NYSE will go down for a few hours every couple of years because of a bug they’ve introduced into their system, but the new features are worth it.
By the way, I want to focus on the author’s quote:

Getting rid of errors in code (or debugging) is a beast of a job

There are two types of software engineers. One type avoids debugging, finding it an unpleasant aspect of the job. The other kind thinks debugging is the job, that writing code is just the brief interlude before the debugging starts. The first kind often gives up on bugs, finding them unsolvable. The second kind quickly finds every bug they encounter, even the most finicky. Every large organization is split between these two camps: those busy writing code that causes bugs, and those fixing them. You can tell which camp the author of this Medium story falls into, and I have enormous disrespect for such people.

“Lack of interest in fixing the actual problem”

The NYSE already agrees that uptime and reliability are the problem, above all others, that they have to solve. If they have a failure, it doesn’t mean they aren’t focused on failures as the problem.

But in truth, it’s not as big a problem as they think. The stock market doesn’t actually need to be that robust. It’s more likely to “fail” for other reasons. For example, every time a former President dies (as in the case of Ford, Nixon, and Reagan), the markets close for a day in mourning. Likewise, wild market swings caused by economic conditions will automatically shut down the market, as they did in China recently.

Insisting that code be perfect is absurd, and impossible. Instead, the only level of perfection the NYSE needs is for glitches in code to shut down the market less often than dead presidents or economic crashes do.

The same is true of United Airlines. Sure, a glitch grounded their planes, but weather and strikes are a bigger problem. If you think grounded airplanes are such an unthinkable event, then the first thing you need to do is ban all unions. I’m not sure I disagree with you, since it seems every flight I’ve had through Charles de Gaulle airport in Paris has been delayed by a strike (seriously, what is wrong with the French?). But that’s the sort of thing you are effectively demanding.

The only people who think that reliability and uptime are “the problem” that needs to be fixed are fascists. They made trains “run on time” by imposing huge taxes on the people to overbuild the train system, then putting guns to the heads of the conductors, making them indentured servants. The reality is that “glitches” are not “the problem”; making systems people want to use is the problem. Nobody likes it when software fails, of course, but that’s like saying nobody likes losing money when playing poker. It’s a question of risk versus reward: we can make software more reliable, but at such a huge cost that it would, overall, make software less desirable.

Conclusion

Security and reliability are tradeoffs. Problems happen not because engineers are morally weak (political, stupid, lazy), but because small gains in security and reliability would require enormous sacrifices in other areas, such as features, cost, performance, and usability. But “tradeoffs” are a complex answer, requiring people to think. “Moral weakness” is a much easier and more attractive answer that doesn’t require much thought, since everyone is convinced that everyone else (but them) is morally weak. This is why so many people in my Twitter timeline keep mentioning that stupid Medium article.

TorrentFreak: Researcher Receives Copyright Threat After Exposing Security Hole

This post was syndicated from: TorrentFreak and was written by: Andy. Original post: at TorrentFreak

Last month researcher Zammis Clark (known online as ‘Slipstream’) discovered a security flaw in Impero Education Pro (IEP), a not insignificant find given the software’s application.

IEP is widely used in UK schools to monitor and restrict students’ Internet activities. According to Slipstream, the flaw had the potential to expose the personal details of thousands of users to hackers.

Early last month the researcher announced his find on Twitter while noting that it would allow for remote code execution on all Windows clients. Within the tweet he posted a link to his proof-of-concept code.


“[Impero] had a booth at BETT back in January. They gave out donuts. Those were nice,” Slipstream wrote.

“Unfortunately, when I asked about their security, nobody answered me. Some reversing later, looks like Impero is completely pwned amirite.”

While Slipstream ultimately advised against using Impero’s product, he says he didn’t immediately inform the company of the vulnerability.

“Not being a customer, I wouldn’t have known where to send it, or whether they’d even reply to me,” the researcher told TF. “And, given the severity of the issue, I figured that full disclosure would cause some sort of fix pretty quickly.”

In fact, that prediction proved correct, with Impero issuing a temporary security patch to fix the flaw.

“We immediately released a hot fix, as a short-term measure, to address the issue and since then we have been working closely with our customers and penetration testers to develop a solid long-term solution,” the company said.

“All schools will have the new version, including the long-term fix, installed in time for the new school term.”

However, Slipstream claims the patch wasn’t effective.

“Of course, their fix turned out to be inadequate. After speaking to Impero users on a forum who advised me to email Impero support, I did just that, responsibly disclosing to them exactly how their fix was inadequate and that I had an updated PoC that worked against it,” he told us.

At this point it appears that relations between Slipstream and Impero had already taken a turn for the worse. After disclosing the issues with the patch almost a week ago, this week he received a legal threat from the company.

“In breach of the license terms, you have modified the software without our client’s authority, you have decompiled the software for purposes otherwise than to achieve interoperability and you have published confidential information about our client’s software,” Impero’s legal team state.

“By publicising the encryption key on the internet and on social media and other confidential information, you have enabled anyone to breach the security of our client’s software program and write destructive files to disrupt numerous software systems throughout the UK.”


Impero’s lawyers say that Slipstream’s actions have caused “direct loss and damage” in addition to “reputational damage” and “potential damage” to numerous IT systems used by schools throughout the UK.

“The loss and damage to our clients caused by your activities is significant and will in any legal action taken in the civil courts be the subject of applications to the court for restraining orders to restrict you from further copyright infringement and breach of confidence as well as court orders for monetary compensation,” the letter adds.

After advising Slipstream to seek legal advice and setting a deadline of July 17, Impero’s lawyers suggest that the damage to their clients could be mitigated if the Github posting and all associated Tweets are taken down. That has not yet happened.

Slipstream is disappointed by the threats and informs TF that taking action against researchers like himself could even prove counter-productive.

“Legal threats here would just be ‘shooting the messenger’ so to speak, and would discourage security researchers from actively reporting any issues,” he explains.

“Such legal threats to security researchers would certainly not prevent any malicious individuals from finding issues themselves, and using them for malicious purposes.”

Indeed, this last point is particularly relevant. Slipstream says that he knows someone who has found two other security issues in Impero’s software. Whether they will be tempted to speak to the company, considering its aggressive legal response, remains to be seen.

Source: TorrentFreak, for the latest info on copyright, file-sharing, torrent sites and the best VPN services.

Raspberry Pi: Astro Pi: Mission Update 4

This post was syndicated from: Raspberry Pi and was written by: David Honess. Original post: at Raspberry Pi


Just over a week ago we closed the Secondary School phase of the Astro Pi competition, after a one-week extension to the deadline. Students from all over the UK have uploaded their code, hoping that British ESA Astronaut Tim Peake will run it on the ISS later this year!

Last week folks from the leading UK Space companies, the UK Space Agency and ESERO UK met with us at Pi Towers in Cambridge to do the judging. We used the actual flight Astro Pi units to test run the submitted code. You can see one of them on the table in the picture below:

The standard of entries was incredibly high and we were blown away by how clever some of them were!

Doug Liddle of SSTL said:

“We are delighted that the competition has reached so many school children and we hope that this inspires them to continue coding and look to Space for great career opportunities”

British ESA Astronaut Tim Peake - photo provided by UK Space Agency under CC BY-ND


Jeremy Curtis, Head of Education at the UK Space Agency, said:

“We’re incredibly impressed with the exciting and innovative Astro Pi proposals we’ve received and look forward to seeing them in action aboard the International Space Station.

Not only will these students be learning incredibly useful coding skills, but will get the chance to translate those skills into real experiments that will take place in the unique environment of space.”

When Tim Peake flies to the ISS in December he will have the two Astro Pis in his personal cargo allowance. He’ll also have 10 specially prepared SD cards which will contain the winning applications. Time is booked into his operations schedule to deploy the Astro Pis and set the code running, and afterwards he will recover any output files created. These will then be returned to their respective owners and made available online for everyone to see.

Code was received for all secondary school key stages and we even have several entries from key stage 2 primary schools. These were judged along with the key stage 3 entries. So without further ado, here is a breakdown of who won and what their code does:

Each of these programs has been assigned an operational code name that will be used when talking about them over the space-to-ground radio. These are essentially arbitrary, so don’t read into them too much!

Ops name: FLAGS

  • School: Thirsk School
  • Team name: Space-Byrds
  • Key stage: 3
  • Teacher: Dan Aldred
  • The judges had a lot of fun with this. Their program uses telemetry data provided by NORAD along with the Real Time Clock on the Astro Pi to computationally predict the location of the ISS (so it doesn’t need to be online). It then works out what country that location is within and shows its flag on the LED matrix along with a short phrase in the local language.

Ops name: MISSION CONTROL

  • School: Cottenham Village College
  • Team name: Kieran Wand
  • Key stage: 3
  • Teacher: Christopher Butcher
  • Kieran’s program is an environmental system monitor and could be used to cross check the ISS’s own life support system. It continually measures the temperature, pressure and humidity and displays these in a cycling split-screen heads up display. It has the ability to raise alarms if these measurements move outside of acceptable parameters. We were especially impressed that code had been written to compensate for thermal transfer between the Pi CPU and Astro Pi sensors.
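
The thermal-compensation trick mentioned above can be sketched simply: the Sense HAT's temperature sensor sits near the Pi's CPU, so a common correction subtracts a fraction of the CPU's excess heat from the raw reading. The readings, the 0.5 factor, and the alarm bands below are invented placeholders, not Kieran's actual code.

```python
# Illustrative sketch of an environment monitor with CPU-heat
# compensation and out-of-range alarms. All numbers are made up.

def compensate_temperature(sensor_c: float, cpu_c: float,
                           factor: float = 0.5) -> float:
    """Remove estimated thermal transfer from the nearby CPU."""
    return sensor_c - (cpu_c - sensor_c) * factor

def check_alarms(temp_c, pressure_mb, humidity_pct):
    """Return alarm messages for readings outside acceptable bands."""
    alarms = []
    if not 18 <= temp_c <= 27:
        alarms.append("TEMPERATURE out of range")
    if not 950 <= pressure_mb <= 1050:
        alarms.append("PRESSURE out of range")
    if not 30 <= humidity_pct <= 70:
        alarms.append("HUMIDITY out of range")
    return alarms

temp = compensate_temperature(sensor_c=34.0, cpu_c=52.0)  # -> 25.0
print(check_alarms(temp, 1013.0, 45.0))   # -> []
print(check_alarms(temp, 1013.0, 80.0))   # -> ['HUMIDITY out of range']
```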

Andy Powell of the Knowledge Transfer Network said:

“All of the judges were impressed by the quality of work and the effort that had gone into the winning KS3 projects and they produced useful, well thought through and entertaining results”

Ops name: TREES

  • School: Westminster School
  • Team name: EnviroPi
  • Key stage: 4 (and equivalent)
  • Teacher: Sam Page
  • This entry will be run in the cupola module of the ISS with the Astro Pi NoIR camera pointing out of the window. The aim is to take pictures of the ground and to later analyse them using false-colour image processing. This will produce a Normalised Difference Vegetation Index (NDVI) for each image, which is a measure of plant health. They have one piece of code which will run on the ISS to capture the images, and another that will run on the ground after the mission to post-process and analyse the images captured. They even tested their code by going up in a light aircraft to take pictures of the ground!
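
The arithmetic behind NDVI is simple: healthy vegetation reflects near-infrared light strongly, so the normalised difference (NIR - VIS) / (NIR + VIS) approaches 1 over plants and drops to 0 or below over bare ground. The sketch below is a hypothetical pure-Python illustration with invented pixel values, not the team's actual processing pipeline.

```python
# Per-pixel NDVI calculation over a made-up image row of
# (near-infrared, visible) intensity pairs.

def ndvi(nir: float, vis: float) -> float:
    """Normalised Difference Vegetation Index for one pixel."""
    total = nir + vis
    return 0.0 if total == 0 else (nir - vis) / total

row = [(200, 40), (120, 100), (60, 90)]  # plant, bare soil, water-ish
print([round(ndvi(n, v), 2) for n, v in row])  # -> [0.67, 0.09, -0.2]
```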

Ops name: REACTION GAMES

  • School: Lincoln UTC
  • Team name: Team Terminal
  • Key stage: 4 (and equivalent)
  • Teacher: Mark Hall
  • These students have made a whole suite of various reaction games complete with a nice little menu system to let the user choose. The games also record your response times with the eventual goal to investigate how crew reaction time changes over the course of a long term space flight. This entry caused all work to cease during the judging for about half an hour!

Lincoln UTC have also won the prize for the best overall submission in the Secondary School competition. This earns them a photograph of their school taken from space by an Airbus or SSTL satellite. Go and make a giant space invader please!

Ops name: RADIATION

  • School: Magdalen College School
  • Team name: Arthur, Alexander and Kiran
  • Key stage: 5 (and equivalent)
  • Teacher: Dr Jesse Petersen
  • This team have successfully made a radiation detector using the Raspberry Pi camera module, a possibility hinted at in our Astro Pi animation video from a few months ago. The camera lens is blanked off to prevent light from getting in, but high-energy space radiation still passes through. Due to the design of the camera, the sensor sees the impacts of these particles as tiny specks of light. The code then uses OpenCV to measure the intensity of these specks and produces an overall measurement of the radiation level.

What blew us away was that they had taken their Astro Pi and camera module along to the Rutherford Appleton Laboratory and fired a neutron cannon at it to test it was working!!!

The code can even compensate for dead pixels in the camera sensor. I am wondering if they killed some pixels with the neutron cannon and then had to add that code out of necessity? Brilliant.
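
A toy version of that detection logic, including the dead-pixel compensation, might look like the following: count pixels brighter than a dark-frame noise threshold while skipping a known set of stuck-on pixels. The frame, threshold, and dead-pixel coordinates are invented; the real entry works on camera frames with OpenCV.

```python
# Illustrative strike counter over a tiny made-up dark frame.
# A particle impact shows up as a pixel far brighter than noise.

def count_strikes(frame, threshold=50, dead_pixels=frozenset()):
    """Count bright pixels, ignoring known dead (stuck-on) pixels."""
    hits = 0
    for y, line in enumerate(frame):
        for x, value in enumerate(line):
            if (x, y) not in dead_pixels and value > threshold:
                hits += 1
    return hits

frame = [
    [2, 3, 1, 240],   # one genuine strike at (3, 0)
    [1, 255, 2, 4],   # a stuck-on dead pixel at (1, 1)
    [0, 2, 3, 1],
]
print(count_strikes(frame, dead_pixels={(1, 1)}))  # -> 1
```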

These winning programs will be joined on the ISS by the winners of the Primary School Competition which closed in April:

Ops name: MINECRAFT

  • School: Cumnor House Girl’s School
  • Team name: Hannah Belshaw
  • Key stage: 2
  • Teacher: Peter Kelly
  • Hannah’s entry is to log data from the Astro Pi sensors and to visualise it later using structures in a Minecraft world. So columns of blocks are used to represent environmental measurements, and a giant blocky model of the ISS itself (that moves) is used to represent movement and orientation. The code was written, under Hannah’s guidance, by Martin O’Hanlon, who runs Stuff About Code. The data logging program that will run on the ISS produces a CSV file that can be consumed later by the visualisation code to play back what happened when Tim Peake was running it in space. The code is already online here.
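
The log-then-replay split described above boils down to two small pieces: one program writes timestamped CSV rows, and another reads them back later for visualisation. The column names and values below are invented stand-ins, not Hannah's actual format.

```python
# Minimal sketch of a CSV sensor log and its playback, using only
# the standard library. io.StringIO stands in for a real file.
import csv
import io

FIELDS = ["timestamp", "temperature", "humidity", "pitch", "roll", "yaw"]

def write_log(rows):
    """Serialise sensor readings (list of dicts) to CSV text."""
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=FIELDS)
    writer.writeheader()
    writer.writerows(rows)
    return buf.getvalue()

def play_back(text):
    """Read the CSV text back as a list of dicts for visualisation."""
    return list(csv.DictReader(io.StringIO(text)))

log = write_log([{"timestamp": "0.0", "temperature": "24.1",
                  "humidity": "41.0", "pitch": "1.2", "roll": "0.3",
                  "yaw": "88.9"}])
print(play_back(log)[0]["temperature"])  # -> 24.1
```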

Ops name: SWEATY ASTRONAUT

  • School: Cranmere Primary School
  • Team name: Cranmere Code Club
  • Key stage: 2
  • Teacher: Richard Hayler
  • Although they were entitled to have their entry coded by us at Raspberry Pi, the kids of the Cranmere Code Club are collectively writing their program themselves. The aim is to try to detect the presence of a crew member by monitoring the environmental sensors of the Astro Pi, particularly humidity. If a fluctuation is detected, it will scroll a message asking if someone is there. They even made a Lego replica of the Astro Pi flight case for their testing!
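
One simple way to implement that idea is to keep a short rolling baseline of humidity readings and flag any sudden rise above it, which may mean a crew member is breathing nearby. The readings, window size, and 2% jump threshold below are invented, not the Code Club's actual values.

```python
# Illustrative presence detector: flag readings that jump well
# above the average of the last few samples.
from collections import deque

def detect_presence(readings, window=5, jump=2.0):
    """Yield indices where humidity jumps above the recent average."""
    recent = deque(maxlen=window)
    for i, h in enumerate(readings):
        if len(recent) == recent.maxlen:
            baseline = sum(recent) / len(recent)
            if h - baseline > jump:
                yield i
        recent.append(h)

humidity = [40.1, 40.0, 40.2, 40.1, 40.0, 40.1, 45.3, 45.1]
print(list(detect_presence(humidity)))  # -> [6, 7]
```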

Obviously the main prize is to have your code flown and run on the ISS. However, the UK space companies also offered a number of thematic prizes, which were awarded independently of the entries chosen to fly. Some crossover with the other winners was expected here.

  • Space Sensors
    Hannah Belshaw, from Cumnor House Girl’s School with her idea for Minecraft data visualisation.
  • Space Measurements
    Kieran Wand from Cottenham Village College for his ISS environment monitoring system.
  • Imaging and Remote Sensing
    The EnviroPi team from Westminster School with their experiment to measure plant health from space using NDVI images.
  • Space Radiation
    Magdalen College, Oxford with their Space Radiation Detector.
  • Data Fusion
    Nicole Ashworth, from Reading, for her weather reporting system; comparing historical weather data from the UK with the environment on the ISS.
  • Coding Excellence
    Sarah and Charlie Maclean for their multiplayer Labyrinth game.

Pat Norris of CGI said:

“It has been great to see so many schools getting involved in coding and we hope that this competition has inspired the next generation to take up coding, space systems or any of the many other opportunities the UK space sector offers. We were particularly impressed by the way Charlie structured his code, added explanatory comments and used best practice in developing the functionality”

We’re aiming to have all the code that was submitted to the competition on one of the ten SD cards that will fly. So your code will still fly even if it isn’t scheduled to run in space. The hope is that, during periods of downtime, Tim may have a look through some of the other entries and run them manually. But this depends on a lot of factors outside of our control, so we can’t promise anything.

But wait, there’s more!

There is still opportunity for all schools to get involved with Astro Pi!

There will be an on-orbit activity during the mission (probably in January or February) that you can all do at the same time as Tim. After the competition-winning programs have all finished, the Astro Pi will enter a phase of flight data recording, just like the black box on an aircraft.

This will make the Astro Pi continually record everything from all its sensors and save the data into a file that you can get! If you set up your own Astro Pi in the same way (the software will be provided by us), you can compare the measurements taken in space with yours taken on the ground.

There is then a lot of educational value in looking at the differences and understanding why they occur. For instance, you could look at the accelerometer data to find out when ISS reboosts occurred, or study the magnetometer data to see how the Earth’s magnetic field changes as the station orbits the Earth. A number of free educational resources will be provided to help you get the most out of this exercise.
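The exact logging format hasn’t been announced yet, but as a rough sketch of the kind of analysis possible (the function name, data layout and threshold below are all hypothetical), you could scan the accelerometer log for readings that deviate sharply from the baseline – candidate reboost events:

```python
import statistics

def find_anomalies(samples, z_threshold=3.0):
    """Flag timestamps where the acceleration magnitude deviates sharply
    from the baseline -- e.g. a candidate ISS reboost event."""
    mags = [m for _, m in samples]
    mean = statistics.mean(mags)
    stdev = statistics.pstdev(mags)
    if stdev == 0:
        return []
    return [t for t, m in samples if abs(m - mean) / stdev > z_threshold]

# Synthetic accelerometer log: (seconds, acceleration magnitude in g).
# A brief "reboost-like" burst is injected at t=40.
log = [(t, 1.0) for t in range(0, 40)]
log += [(40, 1.4)]
log += [(t, 1.0) for t in range(41, 80)]

print(find_anomalies(log))  # [40] -- the injected burst stands out
```

On real data you would read the (timestamp, magnitude) pairs from the downloaded sensor file rather than building them inline, but the comparison idea is the same.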

The general public can also get involved when the Sense HAT goes on general sale in a few weeks time.

Libby Jackson of the UK Space Agency said:

“Although the competition is over, the really exciting part of the project is just beginning. All of the winning entries will get see their code run in space and thousands more can take part in real life space experiments through the Flight Data phase”


The post Astro Pi: Mission Update 4 appeared first on Raspberry Pi.

TorrentFreak: Bogus “Copyright Trademark” Complaint Fails to Censor the BBC

This post was syndicated from: TorrentFreak and was written by: Andy. Original post: at TorrentFreak

Google receives millions of requests every week to have links delisted from its search results, largely following claims from third parties that the referenced content infringes their rights.

While it’s difficult to say what proportion of these claims are erroneous or duplicate, it’s likely to run into thousands per month. Other claims, like the one we’re highlighting today, underline why we absolutely need Google’s Transparency Report and the DMCA notice archive maintained by Chilling Effects.

The episode began on July 1, 2015 when an individual contacted Google with a complaint about a page hosted by the BBC. Found here, the page carries a news report from 2009 which reveals how a man called Kevin Collinson with two failed disability scooter businesses behind him was allegedly (and potentially illegally) running a third.

The article is a typical “rogue trader” affair, with tales of aggressive sales techniques, broken promises, faulty goods, out-of-pocket customers and companies that dissolve only to reappear debt-free shortly after. Unpleasant to say the least.

So what prompted the complaint to Google that was subsequently published on Chilling Effects? Well, it was sent to the search giant by a gentleman calling himself (you guessed it) Kevin Collinson. Nevertheless, the important thing is this – has the BBC infringed his rights? Collinson thinks so.

The notice sent to Google by Collinson

collinson-dmca

As highlighted by the image above, when asked for the source of the infringed material, Kevin Collinson links to a page on his domain kevincollinson.com. It contains the image below which apparently proves that Collinson owns a “copyright name trademark” to his own name, whatever one of those might be.

copyright-trademark

Reading between the lines, Collinson seems to suggest that since he has a trademark on his name (searches in UK databases draw a blank incidentally), outlets such as the BBC aren’t allowed to report news containing his name. Complete nonsense of course, and Google hasn’t removed the page either.

That said, under UK law people are indeed allowed to trademark their names.

Perhaps surprisingly, trademark UK00002572177 (EU009734096) is registered to Wikileaks’ Julian Assange and protects him in the areas of public speaking, news reporting, journalism, publication of texts, education and entertainment services.

Professor Stephen Hawking also has a couple of trademarks protecting his name. Coincidentally (and possibly of interest to Mr Collinson) one of those covers mobility scooters and wheelchairs.

KEVIN JOSEPH COLLINSON did not respond to TorrentFreak’s request for comment.

Source: TorrentFreak, for the latest info on copyright, file-sharing, torrent sites and the best VPN services.

Backblaze Blog | The Life of a Cloud Backup Company: What You Would Do With a Storage Pod

This post was syndicated from: Backblaze Blog | The Life of a Cloud Backup Company and was written by: Andy Klein. Original post: at Backblaze Blog | The Life of a Cloud Backup Company


A few weeks ago, we held a contest offering a free Storage Pod chassis as a prize to people who came up with creative ways to use/reuse a Backblaze Storage Pod chassis. The response was outstanding! We reviewed all the submissions and selected the 20 we thought most deserving – it was hard work. Here are some of the winning entries, with all the winners listed at the end.

Storage Pods in education

Over the years, students have built Storage Pods to store data for research projects and similar data-intensive activities. Here are a couple of submissions where the students most likely will not be using the Storage Pods to store data – and that’s just fine with us.

    “I have three kids ages 9, 7 and 5. What would we do with these? They would immediately be incorporated into their ongoing quasi-engineering to build various things out of parts of all kinds, both indoors and outdoors, as they continue to develop their imagination, creativity and engineering ability.”

    “We are building a Makerspace and Tinkering lab at our SF school and are trying to use as much up-cycled and repurposed material as possible. Our students would love to think of creative and innovative uses for the pods in their new spaces.”

A second career for the Storage Pods

The Storage Pods being retired have worked 24/7 for the last six years. That’s equivalent to working 40 hours a week for 26 years. While these Storage Pod chassis are technically in retirement, some of them want to continue to work. Several of the contest winners suggested excellent second careers.

    Magician’s Assistant – “I am a magician. The storage pods would be easily convertible into a mini sword box, where I could put something inside and stick swords through the item, then open it up and see the item is still in one piece with no holes.”

    Roadie – “I would use it for storage for all my musical equipment and I will be able to route cables and ports through the holes so that way I can make it a one stop shop for all my outboard gear for recording.”

    Senior Roadie – A sturdy box for cables and other gear for guitar gigs, which I’d then place under my 2×12 guitar cabinet to elevate it. A metal box is sturdy and couples well to the ground; it’s important that the cab rests on a solid base so it won’t move around, and the good contact means the low-end guitar sound is propagated through the floor.

    Skydiving Assistant – I would make it into a skydiving gear box, including a monitor to play back the action after each jump. Many skydivers are geeky enough that they would immediately recognize and be envious of this unique and awesome piece of history.

blog-pod-contest-winner-2

Courtesy of Angel

A leisurely Storage Pod life

A full time second career may not be what every retiring Storage Pod wants. Here are some suggestions from our contest winners that would let Storage Pods leisurely pass the time.

    Popcorn Dispenser – Design a Storage Pod to “distribute popcorn to 3 cups at once.”

    Boombox – “A sweet boombox to turn my famous server room parties up to 11.”

    Bookshelf – Repurpose the Storage Pod into a little free library in front of my historic New Orleans home. Use solar power to charge batteries to illuminate it at night.

Fish and zombies

Of course there are some Storage Pods looking for something a little different in their retirement. Here are a couple of suggestions that have an interesting twist…

    A wagon – Construct a wagon from a Storage Pod so “I can take my pet fish, Ruth Bader Ginsberg, out for walks. She always complains we never take her anywhere.”

    A doll house – Build a doll house out of a Storage Pod so it can be used as a safe place for dolls during a zombie apocalypse. Playful, yet practical.

blog-pod-contest-winner-3

Courtesy of Kirk (left) and Bret (right)

What’s next?

Over the next few days, we’ll match each Storage Pod chassis to its appropriate retirement opportunity. Each Pod is different, so this could take a while. Then we’ll ship the Storage Pods out to their new owners. That will be a happy yet sad day here at Backblaze.

The Winners

The people below have been contacted and we will be shipping out their Storage Pods shortly.

    Wayne, Kent, Frank, Nicholas, Tristan, Bret, Nathan, Paul, Jorge, Yon, Franz, Angel, Kirk, and Alan.

The following people are winners, but we’ve been unable to reach them. If your name is below and you’re interested in receiving a Storage Pod chassis, contact us at (andy at backblaze.com) and let us know. If we don’t hear from you by July 15th, we’ll select another winner.

    Nepal, Don, Samantha, Michael, Alan, Gaëtan, and Marius.

No losers

If you didn’t win a Storage Pod this time, don’t fret – there will be more Storage Pod chassis becoming available over the next few months. We’ll post updates to our Facebook page as they become available and let you know how you can scoop one up!

Thanks to everyone who sent in a submission; we appreciate your very creative and entertaining ideas.

The post What You Would Do With a Storage Pod appeared first on Backblaze Blog | The Life of a Cloud Backup Company.

lcamtuf's blog: Poland vs the United States: immigration

This post was syndicated from: lcamtuf's blog and was written by: Michal Zalewski. Original post: at lcamtuf's blog

This is the eleventh article in a short series about Poland, Europe, and the United States. To explore the entire series, start here.

There are quite a few corners of the world where the ratio of immigrants to native-born citizens is remarkably high. Many of these places are small or rapidly growing countries – say, Monaco or Qatar. Some others, including several European states, just happen to be on the receiving end of transient, regional demographic shifts; for example, in the past decade, over 500,000 people moved from Poland to the UK. But on the list of foreigner-friendly destinations, the US deserves a special spot: it is an enduring home to by far the largest, most diverse, and quite possibly best-assimilated migrant population in the world.

The inner workings of the American immigration system are a fascinating mess – a tangle of complex regulation, of multiple overlapping bureaucracies, and of quite a few unique social norms. The bureaucratic machine itself is ruthlessly efficient, issuing several million non-tourist visas and processing over 700,000 naturalization applications every year. But the system is also marred by puzzling dysfunction: for example, it allows highly skilled foreign students to attend US universities, sometimes granting them scholarships – only to show many of them the door the day they graduate. It runs a restrictive H-1B visa program that ties foreign workers to their petitioning employers, preventing them from seeking better wages – and thus needlessly making the American labor market a bit less competitive. It also neglects the countless illegal immigrants who, with the tacit approval of legislators and business owners, prop up many facets of the economy – but are denied the ability to join the society even after decades of staying out of trouble and doing honest work.

Despite being fairly picky about the people it admits into its borders, in many ways, the United States is still an exceptionally welcoming country: very few other developed nations unconditionally bestow citizenship onto all children born on their soil, run immigration lotteries, or allow newly-naturalized citizens to invite their parents, siblings, and adult children over, no questions asked. At the same time, the US immigration system has a shameful history of giving credence to populist fears about alien cultures – and of implementing exclusionary policies that, at one time or another, targeted anyone from the Irish, to Poles, to Arabs, to people from many parts of Asia or Africa. Some pundits still find this sort of scaremongering fashionable, now seeing Mexico as the new threat to the national identity and to the American way of life. The claim made very little sense 15 years ago – and makes even less of it today, as the migration from the region has dropped precipitously and has been eclipsed by the inflow from other parts of the world.

The contradictions, the dysfunction, and the occasional prejudice aside, what always struck me about the United States is that immigration is simply a part of the nation’s identity; the principle of welcoming people from all over the world and giving them a fair chance is an axiom that is seldom questioned in any serious way. When surveyed, around 80% of Americans can identify their own foreign ancestry – and they often do this with enthusiasm and pride. Europe is very different, with national identity being a more binary affair; I always felt that over there, accepting foreigners is seen as a humanitarian duty, not an act of nation-building – and that this attitude makes it harder for the newcomers to truly integrate into the society.

In the US, as a consequence of treating contemporary immigrants as equals, many newcomers face a strong social pressure to make it on their own, to accept American values, and to adopt the American way of life; it is a powerful, implicit social contract that very few dare to willingly renege on. In contrast to this, post-war Europe approaches the matter differently, seeing greater moral value in letting the immigrants preserve their cultural identity and customs, with the state stepping in to help them jumpstart their new lives through a variety of education programs and financial benefits. It is a noble concept, although I’m not sure if the compassionate European approach always worked better than the more ruthless and pragmatic American method: in France and in the United Kingdom, massive migrant populations have been condemned to a life of exclusion and hopelessness, giving rise to social unrest and – in response – to powerful anti-immigrant sentiments and policies. I think this hasn’t happened to nearly the same extent in the US, perhaps simply because the social contract is structured in a different way – but then, I know eminently reasonable folks who would disagree.

As for my own country of origin, it occupies an interesting spot. Historically a cosmopolitan nation, Poland has lost much of its foreign population and ethnic minorities to the horrors of World War II and to the policies implemented within the Soviet Bloc – eventually becoming one of the most culturally and ethnically homogeneous nations on the continent. Today, migrants comprise less than 1% of its populace, and most of them come from the neighboring, culturally similar Slavic states. Various flavors of xenophobia run deep in the society, playing right into the recent pan-European anti-immigration sentiments. As I’m writing this, Poland is fighting the European Commission tooth and nail not to take three thousand asylum seekers from Syria; many politicians and pundits want to first make sure that all the refugees are of Christian faith.

For the next article in the series, click here.

TorrentFreak: Police Let Seized ‘Pirate’ Domains Expire, Some Up For Sale

This post was syndicated from: TorrentFreak and was written by: Andy. Original post: at TorrentFreak

For the past several years the Police Intellectual Property Crime Unit (PIPCU) has been at the forefront of Internet-focused anti-piracy activity in the UK. The government-funded unit has been responsible for several high-profile operations and has been praised by a broad range of entertainment industry companies.

After carrying out raids against the operators of dozens of sites, PIPCU likes to take control of their domains. They do this for two key reasons – one, so that the sites can no longer operate as they did before and two, so they can be used to ‘educate’ former users of the downed sites.

That ‘education’ takes place when visitors to the now-seized ‘pirate’ domains are confronted not with a torrent, proxy, streaming or links site, but with a banner published by PIPCU itself. Its aim is to send a message that sites offering copyrighted content will be dealt with under the law, and to suggest that their visitors have been noted.

Earlier comments by PIPCU suggest that its banner has been seen millions of times by people who tried to access a ‘pirate’ site but subsequently discovered that it no longer exists. Last month, in an announcement on Twitter, the unit revealed that it has diverted more than 11m ‘pirate’ site visits since July 2015.


While the hits continue to mount for many domains PIPCU has seized (or gained control over by forcing site operators or registrars into compliance), it’s now likely that the group’s educational efforts will reach a smaller audience. Tests carried out by TorrentFreak reveal that PIPCU has somehow lost influence over several previously controlled domains.

Instead of the now-familiar PIPCU ‘busted’ banner, visitors to a range of defunct sites are now greeted with expired, advert-laden or ‘for sale’ domains.

MP3lemon.org, for example, currently displays ads/affiliate links. The same goes for Boxingguru.tv, a domain that was linked to a high-profile PIPCU raid in 2014. Former proxies Katunblock.com and Fenopyreverse.info, plus former streaming links site Potlocker.re complete the batch.

boxing-guru

Other domains don’t carry ads but are instead listed for sale. They include former anti-censorship tool site Torrenticity.com, proxy index PirateReverse.info and H33T proxy h33tunblock.info.

The fate of the final set of domains is much less glamorous. Movie2KProxy.com, Movie4KProxy.com, EZTVProxy.net, Metricity.org, YIFYProxy.net and TorrentProxies.com all appear to have simply expired.

Whether these domains will be snapped up at the first opportunity or left to die will largely hinge on whether people believe they can make a profit from them. Some have already changed hands and are now being touted for a couple of thousand dollars each but others are lying in limbo.

In any event, none of these domains seem destined to display PIPCU’s banner in the future. Whether or not the unit cares right now is up for debate, but if any of the domains spring back into life with a ‘pirate’ mission, that could soon change.

Unlike Megaupload’s old domains, they don’t appear to be linked to obvious scams, so that’s probably the main thing.


Raspberry Pi: Naturebytes wildlife cam kit

This post was syndicated from: Raspberry Pi and was written by: Liz Upton. Original post: at Raspberry Pi

Liz: The wildlife cam kit has landed. If you’re a regular reader you’ll know we’ve been following the Naturebytes team’s work with great interest; we think there’s massive potential for bringing nature to life for kids and for adults with a bit of smart computing. Digital making for nature is here.

Naturebytes is a tiny organisation, but it’s made up of people whose work you’ll recognise if you follow Raspberry Pi projects closely; they’ve worked with bodies like the Horniman Museum, who have corals to examine; and with the Zoological Society of London (ZSL). Pis watching for rhino poachers in Kenya? Pis monitoring penguins in Antarctica? People on the Naturebytes team have worked on those projects, and have a huge amount of experience in wildlife observation with the Pi. They’ve also worked closely with educators and with kids on this Kickstarter offering, making sure that what they’re doing fits perfectly with what nature-lovers want. 

Today’s guest post is from Naturebytes’ Alasdair Davies. Good luck with the Kickstarter, folks: we’re incredibly excited about the potential of what you’re doing, and we think lots of other people will be too.

We made it! (quite literally). Two years after first being supported by the Raspberry Pi Foundation’s Education Fund and the awesome folk over at Nesta, we finally pressed the big red button and went into orbit by launching the Naturebytes Wildlife Cam Kit – now available via Kickstarter.
This is the kit that will fuel our digital making for nature vision – a community of Raspberry Pi enthusiasts using the Pi to help monitor, count, and conserve wildlife, and having a hell of a lot of fun learning how to code and hack their cam kits to do so much more. Yes, you can even set it up to take chicken selfies.

We’ve designed it for a wide range of audiences, whether you’re a beginner, an educator, or a grandma who just wants to capture photos of the bird species in the garden and share them with her grandchildren – there’s something for everyone.


This was the final push for the small team of three over at Naturebytes HQ. A few badgers, 2,323 coffees, 24 foxes, and a Real Time Clock later, we signed off the prototype cam kit last week, and we’re proud of what we’ve achieved thanks to the support of the Raspberry Pi Foundation, which assisted us in getting there.

We also get the very privileged opportunity of appearing in this follow-up guest blog, and my, how things have changed since our first appearance back in September 2014. We thought we’d take you on a quick tour to show you what we’ve changed on the kit since then, and to share the lessons learnt during our R&D, before ending with a look at some of the creative activities people have suggested the kit be used for. Suggest your own in the comments, and please do share our Kickstarter far and wide so we can get the kits into the hands of as many people as possible.

Then and now – the case.

Our earlier prototype was slick and thin, with a perspex back. Once we exposed it to the ravages of British weather, we soon had to batten down the hatches and toughen up the hinges to create the version you see today. The bird feeder arm was also reinforced, and a clip-on mechanism was added for easy removal – just one of the lessons learnt while trialling and testing.

The final cam kit case:


The final cam kit features:

Schools and Resources

A great deal of our development time has focused on the creation of a useful website back end and resource packs for teachers and educators. For Naturebytes to be a success, we knew from the start that we’d need to support teachers wishing to deliver activities, and it’s paramount to us that we get this right. To that end, we tagged along with the Foundation’s Picademy to understand the needs of teachers and to create resources that will be both helpful and accessible.

Print your own

We’ve always wanted to make it as easy as possible for experienced digital makers to join in, so the necessary 3D print files will now be released as open source assets. For those with their own Pi, Pi cam and custom components, we’ve created a developer’s kit too that contains everything you need to finish a printed version of the cam kit (note – it won’t be waterproof if you 3D print it yourself).

You can get the Developer’s Kit on Kickstarter.

The Experience


Help us develop a fantastic experience for Naturebytes users. We hope to make a GUI and customised Raspbian OS to help users get the most from the cam kit.

It’s not much fun if you can’t share your wildlife sightings with others, so we’re looking at how to build an experience on the Pi itself. It will most likely be in the form of a Python GUI that boots at startup, with a modified Raspbian OS to theme up the desktop. Our end goal is the creation of what we are calling “Fantastic Fox” – a simple-to-use Raspbian OS with pre-loaded software and activities, together with a simple interface to submit your photos etc. This will be a community-driven build, so if you want to help with its development, please contact us and we’ll get you on board.

Creative activities

This is where the community aspect of Naturebytes comes into play. As everyone starts with the same wildlife cam kit, whether you get the complete kit from us or print your own, there are a number of activities to get you started. Here are just a few of the ones we love:

Participate in an official challenge

We’ll be hosting challenges for the whole community. Join us on a hedgehog hunt (photo hunt!) together with hundreds of others, and upload your sightings for the entire community to see. There will be hacking challenges to see who can keep their cams powered the longest, and even case modification design competitions too.

Identify another school’s species (from around the globe!)

Hook up a WiFi connection and you’ll be able to share your photos on the internet. This means that a school in Washington DC could pair up with a school in Rochdale and swap photos once a day – an exciting opportunity to connect with schools globally, and to discover wildlife you thought you might never encounter by peeking into the garden of a school a long way away.

Build a better home (for wildlife)

It’s not just digital making that you can get your hands into. Why not build a garden residence for the species you most want to attract, and use the camera to monitor whether they moved in (or just visited to inspect)? A great family project, fuelled by the excitement of discovering that someone, or something, liked what you built for them.

Stamp the weather on it

There’s an official Raspberry Pi weather station that we love – in fact, we were one of its early beta testers and have always wanted to incorporate it into Naturebytes. A great activity would be connecting to the weather station to receive a snapshot of data and stamping it onto the JPEG your camera just created. Then you’ll have an accurate weather reading together with your photo!
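Stamping text onto the image itself would need an imaging library such as Pillow; as a dependency-free sketch of the same idea, you could save each weather snapshot as a JSON “sidecar” file next to the photo. The field names below are made up for illustration:

```python
import json
import tempfile
from pathlib import Path

def stamp_photo(photo_path, weather):
    """Write a weather snapshot next to a photo as a .json 'sidecar',
    so the reading travels with the image."""
    sidecar = Path(photo_path).with_suffix(".json")
    sidecar.write_text(json.dumps(weather, indent=2))
    return sidecar

# Hypothetical snapshot from the weather station at capture time.
snapshot = {"temp_c": 14.2, "humidity_pct": 81, "pressure_hpa": 1012}

with tempfile.TemporaryDirectory() as d:
    photo = Path(d) / "fox_2015-07-01.jpg"
    photo.touch()  # stand-in for the camera writing the JPEG
    sidecar = stamp_photo(photo, snapshot)
    stamped = json.loads(sidecar.read_text())

print(stamped["temp_c"])  # 14.2
```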

Time-lapse a pond, tree or wild space

It’s fantastic to look through a year’s worth of photographic data within 60 seconds. Why not take a look at the species visiting your pond, tree or a wild space near you by setting up a time-lapse and comparing it with other Naturebytes users near you?
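The scheduling side of a time-lapse can be sketched in a few lines; the capture function here is a stand-in, and on a real kit it would trigger the camera and return the saved filename:

```python
import time

def timelapse(capture, interval_s, frames, sleep=time.sleep):
    """Call capture(index) once per frame, waiting interval_s seconds
    between shots, and return the list of captured filenames."""
    names = []
    for i in range(frames):
        names.append(capture(i))
        if i < frames - 1:
            sleep(interval_s)
    return names

# Stand-in capture function for illustration: just produces filenames.
# The no-op sleep keeps this demo instant.
shots = timelapse(lambda i: f"frame_{i:04d}.jpg",
                  interval_s=600, frames=3, sleep=lambda s: None)
print(shots)  # ['frame_0000.jpg', 'frame_0001.jpg', 'frame_0002.jpg']
```

Passing the sleep function in as a parameter makes the loop easy to test without actually waiting ten minutes between frames.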

We’d love to hear your ideas for collaborative projects – please leave a note in the comments if you’ve got something to add!

 

The post Naturebytes wildlife cam kit appeared first on Raspberry Pi.

Raspberry Pi: Skycademy – Free High Altitude CPD

This post was syndicated from: Raspberry Pi and was written by: James Robinson. Original post: at Raspberry Pi

We’re looking for 24 teachers (or youth leaders) to take part in a FREE two-and-a-half day Continuing Professional Development (CPD) event aiming to provide experience of high altitude ballooning to educators, and demonstrating how it can be used as an engaging teaching opportunity.

Over the last few years I’ve seen many awesome uses of the Raspberry Pi, but one of my favourites by far is seeing the Pi used as a payload tracker for High Altitude Ballooning (HAB) projects.

One of the most prolific HAB enthusiasts is Dave Akerman, who has launched many flights using the Raspberry Pi, from the first flight back in 2012

…to the launch of a potato for Heston Blumenthal’s “Great British Food”…

…and even capturing some amazing images of the recent Solar Eclipse from 30km up.

Many schools are also seeing the opportunities for learning that a HAB flight presents, incorporating physics, maths, computing and geography into one project.

Here’s a project from William Howard School in Cumbria, whose students built their own tracker connected to a Pi.

In my previous life as a teacher, I organised a launch with my own students, and we had help from Dave Akerman on the day. This turned out to be super helpful, as it takes some planning and there’s a lot to remember.

One of the hardest parts of running a flight is the number of different aspects you have to plan and manage. You can test the hardware and software to a certain point, but there’s limited opportunity for a practice flight. Having experience is really helpful.

For this reason we’re running our first “Skycademy”, during which we will be giving attendees hands-on experience of a flight. The event will be free to attend and will be spread over two and a half days between the 24th and 26th of August.

  • Day 1 – Planning and workshop sessions on all aspects of HAB flights.
  • Day 2 – Each team launches their payload, tracks, follows and recovers it.
  • Day 3 – Teams gather together for a plenary morning.

Our aim is to support and inspire teachers and other adults working with young people. The hope is that those who attend will return to lead a project with their groups and do something amazing.

Attendees will be supported throughout the course by experienced HAB enthusiasts and the Raspberry Pi Education Team. If you are a UK teacher or work with young people (scout leader, youth leader etc), you can apply here.

The post Skycademy – Free High Altitude CPD appeared first on Raspberry Pi.

Raspberry Pi: Announcing Picademy USA

This post was syndicated from: Raspberry Pi and was written by: Matt Richardson. Original post: at Raspberry Pi

I’m often asked when Picademy, our teacher professional development program, is coming to the United States. It’s been an incredible success within the UK and there’s clearly huge demand for it within the US. Today, we’re happy to announce a new partnership with the Computer History Museum to launch a pilot of Picademy in the United States. Located in Mountain View, California, the Computer History Museum makes an incredible partner and we’re excited to incorporate their educational content into the program.

We’re piloting Picademy USA with 4 sessions starting in early 2016. Our goal is to give 100 US-based educators free, hands-on experience with Raspberry Pi and induct them into a growing group of Raspberry Pi Certified Educators worldwide. The first Picademy in the US will take place at The Computer History Museum. Exact dates and locations for the workshops are being confirmed. To express interest in an upcoming Picademy, please complete this form. It will help us get a sense of where in the US there’s demand for professional development and you’ll be signed up to receive updates when venues and dates are confirmed.

We’re especially proud to announce this pilot in response to President Obama’s call to action to create a Nation of Makers. Since a major focus of this call to action is STEM education, it was a natural fit for Picademy to be our commitment to supporting efforts to use computers in the classroom for tinkering, coding, and project-based learning. I’ll be with the Computer History Museum’s Kate McGregor at the White House for a kick-off event this morning. Keep an eye on @Raspberry_Pi for ongoing updates, and check back here later in the day for photos from the event.

Click here to express interest in Picademy USA and to find out more about the program.

The post Announcing Picademy USA appeared first on Raspberry Pi.

Raspberry Pi: EuroPython 2015 – Education Summit

This post was syndicated from: Raspberry Pi and was written by: Ben Nuttall. Original post: at Raspberry Pi

This year’s EuroPython conference takes place in July in Bilbao, Spain. Not only is our own Carrie Anne Philbin presenting a keynote (alongside the creator of Python, Guido van Rossum), but the Raspberry Pi Foundation will be running an Education Summit, in partnership with the EuroPython Society.

We have a dedicated track of education-focused talks, as well as training sessions, workshops and discussion groups for teachers and educators. There’ll also be two days of sprints for developers to contribute to educational projects.

As well as Carrie Anne’s keynote, I’ll be giving a talk and workshop on physical computing, and James will be giving talks on the Raspberry Pi Weather Station and his experience as a teacher at PyConUK. We’re really excited to be attending, and helping facilitate the first Education Summit at EuroPython.

Reduced ticket prices

Good news – teachers can pick up conference tickets at the student ticket price of €120, and the organisers have also arranged coupons for kids on request – please email helpdesk@europython.eu.

Raspberry Pi Certified Educators – we’d love to see you there!

Community Pythonistas

We’re not only appealing to teachers to attend EuroPython – anyone interested in Python should feel welcome to attend, and we’d love to see more of the wonderful Raspberry Pi community in Bilbao. Come see some great talks and learn from the best in the industry, show people what you do with Python, and of course come and take part in our Education sprints! Personal tickets are available at €340, and Bilbao is lovely in July.

I’d like to take this opportunity to congratulate Carrie Anne, who has just been elected to the Python Software Foundation’s Board of Directors – we wish her success in her role on the board.

The post EuroPython 2015 – Education Summit appeared first on Raspberry Pi.

The Hacker Factor Blog: Continuing Education

This post was syndicated from: The Hacker Factor Blog and was written by: The Hacker Factor Blog. Original post: at The Hacker Factor Blog

While error level analysis (ELA) seems like a simple enough concept, budding analysts need to understand what the algorithm does, how it works, and how to apply it. By itself, ELA highlights the various compression level potentials across an image (analogous to adding dye to a petri dish). However, the analyst needs to know what to look for in the results. There’s the straightforward “significantly different” coloring, the more subtle chrominance separation (rainbowing), and other artifacts that alter the compression rate across the image.

To help with this learning curve, I developed tutorials and challenges. The tutorials describe how the algorithm works and the challenges allow people to test their knowledge. Since different teaching methods work better for different people, it is good to offer a variety of training methods.

I recently ran the stats on the FotoForensics tutorials. I checked the weekly and monthly distributions and the results were consistent: about a third of visitors to the site (35% on average) visit the tutorials page. However, only about a tenth of all visitors actually spend more than a few seconds on the tutorials pages in a given week. The challenges average about 7%, but those users appear to work through at least one challenge puzzle.

I also looked for longer trends. A solid third (34%) of unique network addresses have spent time with the tutorials and/or challenges in the previous year. (I keep thinking: a free site where a third of the users are actually reading the training materials? WOW!)

I even see people applying what they learned. FotoForensics has been very popular with the Reddit community. A few years ago, people gave a lot of bad interpretations (e.g., “white means modified” or “color means fake” or “ELA doesn’t work”). However, those ~30% of users who took the time to learn have become a dominant force at Reddit. When someone posts a link to FotoForensics, it is usually followed by someone asking what it means, and someone else giving an intelligent answer.

On one hand, this tells me that the tutorials and challenges are easy enough for users to find on the site. And about 3 in 10 users are interested enough to take the time to learn how it works. (I am open to suggestions for other possible training options that could engage more of the other 70%.)

Unfortunately, I still see people misapplying the technology or giving really bad advice on how it works.

It’s all about compression

ELA does one thing: it quantifies the lossy compression error potential across the image and returns it as a map. It does not return a numerical value (“7”) or a summary (“green” or “true”) because different types of alterations generate different compression level signatures. For example, if a picture is 95% unaltered, then would you call it real or fake? With a map of the picture, you can identify the abnormal area.


Over at Reddit, user tom_beale posted to “mildly infuriating” some sidewalk covers that were put back wrong. User Afterfx21 “fixed it”. The compression map generated by ELA makes it easy to identify how it was “fixed”.

But let’s go back a moment and talk about compression…

JPEG is based on a lossy compression system. By “lossy”, we mean that the decompressed data does not look exactly like the pre-compressed data. What comes out is similar, but not exactly like what went in. Since there’s a little difference, it loses quality. Even saving a JPEG at “100%” will result in a little data loss; what most tools call “100%” is actually closer to 99%. The purpose of the lossy compression is to make as many repetitive zeros and small values as possible in the encoded sequence, while remaining visually similar to the source image. More repetition leads to better compression.

The lossy compression works by quantizing the values, effectively turning a smooth curve into stair steps. For example, the quantization value “3” would make the values “40 20 10 5 1” become “13 6 3 1 0”. JPEG uses integer math, so the fractions left over after dividing by 3 are lost.

To restore the sequence, the values are multiplied by the quantization value: “39 18 9 3 0”. Each of these decoded values is close enough to the source value. When talking about pixels, the human eye is unlikely to notice any difference. (The actual JPEG encoding method is a little more complicated and includes 64 quantization values as well as some other compression steps. For much more detail about JPEG encoding, see JPEG Bitstream Bytes.)

Additional JPEG compression loss

If we just use one quantization value and repeatedly cycle between encoding and decoding, then the first encoding will cause data loss but the remainder will not.

Encoding “40 20 10 5 1” with quantizer “3” generates “13 6 3 1 0”. (Encoding is a division with integer math.)
Decoding “13 6 3 1 0” with quantizer “3” generates “39 18 9 3 0”. (Decoding is a multiplication.)
Encoding “39 18 9 3 0” with quantizer “3” generates “13 6 3 1 0”. (Same values as the first encoding.)
Decoding “13 6 3 1 0” with quantizer “3” generates “39 18 9 3 0”. (Same values as the first decoding.)
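The cycle above can be sketched in a few lines of Python. This is a toy model of the quantization step only, not a real JPEG codec:

```python
# Toy model of JPEG's quantization step: encoding divides with integer
# math (losing the fractions), decoding multiplies back.

def encode(values, q):
    return [v // q for v in values]  # integer division discards fractions

def decode(values, q):
    return [v * q for v in values]

source = [40, 20, 10, 5, 1]
q = 3

first = decode(encode(source, q), q)   # first save: lossy
second = decode(encode(first, q), q)   # second save: no further loss

print(first)   # [39, 18, 9, 3, 0] -- close to the source, but not identical
print(second)  # [39, 18, 9, 3, 0] -- stable after the first encode
```

With quantization alone, all of the loss happens on the first save; every later encode/decode cycle reproduces the same values.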

If JPEG encoded RGB values, then this would be it: the first encoding would generate a little loss, but repeated encoding/decoding cycles would not. Unfortunately, JPEG does not encode RGB values. Instead, it first converts the values from RGB to YUV (an alternate color space). This conversion is lossy and causes values to shift a little. This means two things. First, JPEG cannot store true 24-bit color. Second, the values may shift a little between the first decoding and second encoding steps, so the next encoding may produce values that are a little different.
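That color space shift can be demonstrated with the standard JFIF RGB/YCbCr conversion formulas. This sketch is an illustration, not the exact JPEG pipeline; real codecs differ in rounding details:

```python
# RGB -> YCbCr -> RGB round trip with integer rounding at each stage,
# using the JFIF conversion formulas. Many 24-bit colors do not survive.

def rgb_to_ycbcr(r, g, b):
    y  = round( 0.299    * r + 0.587    * g + 0.114    * b)
    cb = round(-0.168736 * r - 0.331264 * g + 0.5      * b + 128)
    cr = round( 0.5      * r - 0.418688 * g - 0.081312 * b + 128)
    return y, cb, cr

def ycbcr_to_rgb(y, cb, cr):
    clamp = lambda v: max(0, min(255, round(v)))
    r = clamp(y + 1.402    * (cr - 128))
    g = clamp(y - 0.344136 * (cb - 128) - 0.714136 * (cr - 128))
    b = clamp(y + 1.772    * (cb - 128))
    return r, g, b

# Pure green shifts slightly in the round trip:
print(ycbcr_to_rgb(*rgb_to_ycbcr(0, 255, 0)))  # -> (0, 255, 1)
```

Because the stored YCbCr values decode to a slightly different RGB triple, the next save starts from shifted data, which is why a second encoding can produce slightly different results.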

But JPEG doesn’t stop there. It also converts the colors from the 8×8 pixel grid space to an 8×8 frequency space. This conversion uses a discrete cosine transform (DCT). When you see the word “cosine”, you should be thinking “floating point values”. JPEG, however, does everything with integers, so fractions get truncated. Repeatedly encoding and decoding the DCT values with integer math will result in constant degradation. When combined with the quantization step, it results in significant degradation.

I say that the compression “constantly” degrades, but it really does stop eventually. With JPEG encoding, the first save at a given quality level (e.g., save as 80%) causes the most data loss. Subsequent decode and re-encode cycles at the same quality level will result in less and less loss. The first save causes the most loss. The second causes some loss, but not as much as the first time. The third save causes less loss than the second, etc. You would probably have to resave a JPEG over a dozen times to see it normalize, but it should eventually stop degrading, unless you use Photoshop. With Adobe products, JPEGs may take thousands of resaves to normalize, and they will look very distorted.

Detecting loss

The impact from this lossy compression is detectable. For this example, I’ll use a photo that I took yesterday…

Camera original.
Resaved with Photoshop CS5 at “high” quality (first resave).
Resaved the first resave with Photoshop CS5 at “high” quality (second resave).

With this picture, the second resave is only a little different from the first resave. However, the amount of change between the first and second resaves really depends on the picture. The only consistency is that the second resave will not change more than the first resave.

Because nothing else was altered between saves, the first and second resaves are very similar. The first save removed most of the artifacts and the second save removed a few artifacts. (If you look in the ELA map at the cup’s lid, you may notice that some of the small white squares are gone in the second resave.)

While the picture’s content may not be very exciting, it does have a couple of great attributes:

  • There are large areas of mostly white and mostly black. Solid colors compress very efficiently. As a result, the white on the lid, white sunlight on the floor, and part of the black border on the laptop’s monitor all appear solid black under ELA. These areas were so efficiently compressed in the original image that they didn’t change between resaves.

  • There are visible high-contrast edges. For example, the white cup against the brown table, black laptop against the brown table, and the ribs in the dark brown chairs against the light brown wall. All of these edges have similar ELA intensities.
  • There are lots of mostly flat surfaces. The white cup, the lid, most of the black laptop, the wall in the background, and even the low-contrast table (where the sunlight is not bringing out details). These are all surfaces and they are all at the same ELA intensity.
  • There are textured surfaces, denoted as small regions with high-detail patterns: the text on the cup, the computer screen, the keyboard letters are visually similar (white/black) and have similar intensities.

With ELA, you want to compare similar attributes with similar attributes. Each of these areas (surfaces, edges, and textures) may compress at different rates. But in general, all similar surfaces should compress at the same rate. Edges should compress at the same rate as similar edges, and textures should compress at the same rate as similar textures.

When a picture is edited, the modified areas are likely at a different compression level than the rest of the picture. This is how we know that the sidewalk picture (beginning of this post) was digitally altered. We do not make the determination by saying “white means edited”. Instead, we identify that a section of each sidewalk cover is inconsistent with the rest of the picture. This inconsistency permits identifying the edit.

The thing to remember is that ELA maps out the error level potential — the amount of expected loss during a resave. If a picture is resaved too many times, then the compression level becomes normalized. At that point, subsequent resaves at the same quality level will not alter the picture. This results in a black, or near black ELA map.
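This idea can be illustrated with a toy model, again using simple quantization as a stand-in for JPEG compression. This is my own simplified sketch of the concept, not the FotoForensics implementation: blocks that have already been through a save cycle lose almost nothing on a resave, while freshly spliced, never-compressed data loses much more.

```python
# Toy "error level" map: quantization stands in for JPEG compression.

def save(block, q):
    # one lossy "save": quantize, then dequantize
    return [(v // q) * q for v in block]

def error_level(block, q):
    # how much would this block change if it were resaved at quality q?
    return sum(abs(a - b) for a, b in zip(block, save(block, q)))

q = 5
# eight blocks that have all been "saved" once already
image = [save(list(range(i, i + 8)), q) for i in range(0, 64, 8)]
image[3] = list(range(100, 108))  # splice in data that was never compressed

ela_map = [error_level(block, q) for block in image]
print(ela_map)  # -> [0, 0, 0, 13, 0, 0, 0, 0]: the spliced block stands out
```

The already-saved blocks report zero expected loss (they are “normalized”), while the spliced block shows a high error level potential: exactly the kind of inconsistency an analyst looks for.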

Original resaved at a low quality

Unfortunately, it is still common to see people who haven’t read the tutorials claim that ELA does not work, uploading a low-quality picture as their proof. Alternately, they upload a picture that has undergone global modifications (e.g., scaling or recoloring) that change all pixel values, resulting in a higher/whiter ELA compression map. But even in these cases, ELA still functions properly — it still generates a topology map that represents the potential compression rate across the picture. This may not be useful for identifying if a picture was spliced, but it is useful for detecting what happened during the last save.

Bad Analysis

A few days ago, a group called “Bellingcat” published a report where they tried to do some digital photo forensics. They were trying to show that some satellite photos were digitally altered. They used FotoForensics to evaluate the picture, but unfortunately ended up misinterpreting the results.



In Bellingcat’s analysis, they claim that the picture was altered because the five regions (A-E) look different. However, they failed to compare similar attributes:

  • Region “A” shows clouds and is uniformly white. Solid colors compress really well, so the ELA result is solid black. This indicates that the uniformly colored region is already optimally compressed.

  • Region “E” has a little noise surrounded by black in the ELA — just like the lid in the coffee cup example. This is where the colors blend from solid white to near white.
  • Region “C” has a consistent texture. It shows land and buildings.
  • Region “D” has a different texture from C. It is a smoother surface. Clouds with no texture are relatively smooth and compress better than complex textures. This results in the expected lower error level potential. This area also appears consistent with the lower-left region of “C”, where the clouds partially cover the land.
  • Region “B” has… well, I see no difference between B and D.

The one thing that ELA really pulled out is the annotations. They are at a much higher error level potential, indicating that they have not been resaved as many times as the rest of the picture.

Using ELA, we cannot determine the authenticity of this picture: we cannot tell if it is real, and we cannot tell if it is fake. We can only conclude that this is a low quality picture and that the black text on white annotations were added last. If there was a higher quality version of this picture (without the annotations), then we would have a better chance at detecting any potential alterations.

Everyone’s a critic

A number of people have pointed out flaws in the Bellingcat analysis. A forensic examiner in Australia used different tools and methods than I did and found other inconsistencies in the Bellingcat findings. I think Myghty has one of the most thorough debunkings of the Bellingcat report.

Unfortunately, other forensic experts chose to blame the tool rather than the uneducated users (yes Bellingcat, I’m calling you uneducated). For example, Spiegel quoted German image forensics expert Jens Kriese as saying:

From the perspective of forensics, the Bellingcat approach is not very robust. The core of what they are doing is based on so-called Error Level Analysis (ELA). The method is subjective and not based entirely on science. This is why there is not a single scientific paper that addresses it.

The ignorance spouted by Kriese offends me. In particular:

  • Kriese is correct that the results from the ELA system at FotoForensics are subjective — it is up to the analyst to draw a conclusion from the generated compression map. However, this is no different than requiring a human to look through a microscope to identify cancer in a tissue sample. The scientific method is both objective and subjective. Tools should be repeatable and predictable — that is objective. ELA generates a consistent, repeatable, and predictable map of JPEG’s lossy compression potential.

    In order to interpret results, we use two types of reasoning: inductive and deductive. Deductive is objective, while inductive is subjective. Inductive reasoning is often used for predicting, forecasting, and behavioral analysis. (“Did someone alter this picture?” or “did a camera generate this?” are behaviors.)

    As an example, if you have ever broken a bone then you likely had an X-ray. The X-ray permits an analyst to view details that would otherwise go unseen. The X-ray is objective, not subjective. However, when the X-ray technician says, “I cannot tell you that it is broken because a diagnosis requires a doctor”, then you enter the realm of the subjective. (This is why you can ask for a “second opinion” — opinions are subjective.) Similarly, ELA acts like an X-ray, permitting unseen attributes to become visible. However, the interpretation of the ELA results is not automated and requires a human to make a subjective determination based on specific factors.

  • Identifying artifacts is part of the scientific process. In fact, it’s the first step: observation. Given that ELA works consistently and predictably, it can also be used to test a hypothesis. Specific tests include: Do similar edges have similar ELA intensities? Do similar surfaces appear similar? And do similar textures appear similar? If the hypothesis is that the picture was altered and the ELA generates consistent error level potentials, then it fails to confirm the hypothesis. An alternative is to hypothesize that the picture is real and see an inconsistent ELA image. Inconsistency would prove the hypothesis is false, enabling an analyst to detect alterations.

    For Kriese to question whether ELA is based on science, or to criticize the subjective portion of the evaluation, makes me question his understanding of the scientific method.

  • Kriese says that “there is not a single scientific paper” covering ELA. Clearly Kriese has not read my blog. Four years ago I wrote about a Chinese researcher who plagiarized my work and had it published in a scientific journal: Lecture Notes in Computer Science, 2011, Volume 6526/2011, 1-11.

    ELA is also mentioned in the “Digital Photo Forensics” section of the Handbook of Digital Imaging (John Wiley & Sons, Ltd). I wrote this encyclopedia’s section and it was technically reviewed prior to acceptance.

    In fact, ELA was first introduced in a white paper that was presented at the Black Hat Briefings computer security conference in 2007. Since computer forensics is part of computer security, this technology was presented to peers.

    That makes three scientific papers that cover ELA. I can only assume that Kriese did not bother to look anything up before making this false claim.

  • The entire argument, that research is not scientific unless it is published in a scientific paper, is fundamentally flawed. I have multiple blog entries about various problems with the academic publication process. Journal publication is not timely, authors often leave out critical information necessary to recreate or verify results, papers typically lack readability, trivial alterations are considered novel, and papers frequently discuss positives and omit limitations.

    There are also significant flaws with the peer review process. Peer reviews often dismiss new discoveries when they conflict with the peer’s personal interests. And if peer review actually worked, then why are plagiarism, false reporting, retractions, and even fake peer reviews so prevalent?

    In addition, many companies have proprietary technologies that have not been publicly published. This does not mean that the technologies are unscientific. It only means that the details are not public. (In the case of ELA, the details are public.)

    It is extremely myopic for Kriese to (1) believe that something is only scientific if it is published, and (2) attribute more credibility to published science articles than they deserve.

Similarly, Hany Farid repeated his misunderstanding by saying:

The reliance on error level analysis is fatally flawed as this technique is riddled with problems that mis-characterize authentic images as altered and failed to detect alterations.

As I have repeatedly stated, the automated portion of ELA does not “detect” anything. Detection means drawing a conclusion. ELA highlights artifacts in the image, explicitly quantifies the JPEG error level potential across the image, and does it in a provable, repeatable, predictable way. The resulting compression map generated by ELA is deterministic, idempotent, and independent of personal opinion.

I also find it a little ironic that Farid’s statements, “mis-characterize authentic images as altered” and “failed to detect alterations”, can be explicitly applied to his own “izitru” and “FourMatch” commercial products. Unless the picture is a camera original, izitru will report that it could be altered. In effect, virtually everything online could be altered.

Both Kriese and Farid are correct that the Bellingcat report is bogus. However, they both incorrectly blame the problem on ELA. It’s not the tool that is in error, it’s the authors of the Bellingcat report.

See one, do one, teach one

I do not believe it is possible to teach everyone. Some people have no incentive to learn, while others have ingrained beliefs that are personally biased or based on false premises. However, this does not mean that I will stop trying to help those who want to learn.

Occasionally I debunk algorithms published in scientific journals. In the near future, I’ll cover a widely deployed forensic algorithm — one that was published in a peer-reviewed journal. This algorithm is used by many forensic analysts and is even taught in a few classes. But it is so unreliable that it has virtually no practical value.

Krebs on Security: How I Learned to Stop Worrying and Embrace the Security Freeze

This post was syndicated from: Krebs on Security and was written by: BrianKrebs. Original post: at Krebs on Security

If you’ve been paying attention in recent years, you might have noticed that just about everyone is losing your personal data. Even if you haven’t noticed (or maybe you just haven’t actually received a breach notice), I’m here to tell you that if you’re an American, your basic personal data is already for sale. What follows is a primer on what you can do to avoid becoming a victim of identity theft as a result of all this data (s)pillage.

Click here for a primer on identity theft protection services.


A seemingly never-ending stream of breaches at banks, healthcare providers, insurance companies and data brokers has created a robust market for thieves who sell identity data. Even without the help of mega breaches like the 80 million identities leaked in the Anthem compromise or last week’s news about 4 million records from the U.S. Office of Personnel Management gone missing, crooks already have access to the information needed to open new lines of credit or file phony tax refund requests in your name.

If your response to this breachapalooza is to do what each of the breached organizations suggests — to take them up on one or two years’ worth of free credit monitoring services — you might sleep better at night, but you will probably not be any more protected against crooks stealing your identity. As I discussed at length in this primer, credit monitoring services aren’t really built to prevent ID theft. The most you can hope for from a credit monitoring service is that it gives you a heads up when ID theft does happen, and then helps you through the often labyrinthine process of getting the credit bureaus and/or creditors to remove the fraudulent activity and fix your credit score.

In short, if you have already been victimized by identity theft (fraud involving existing credit or debit cards is not identity theft), it might be worth paying for these credit monitoring and repair services (although more than likely, you are already eligible for free coverage thanks to a recent breach at any one of dozens of companies that have lost your information over the past year). Otherwise, I’d strongly advise you to consider freezing your credit file at the major credit bureaus. 

There is shockingly little public knowledge or education about the benefits of a security freeze, also known as a “credit freeze.” I routinely do public speaking engagements in front of bankers and other experts in the financial industry, and I’m amazed at how often I hear from people in this community who are puzzled to learn that there is even such a thing as a security freeze (to be fair, most of these people are in the business of opening new lines of credit, not blocking such activity).

Also, there is a great deal of misinformation and/or bad information about security freezes available online. As such, I thought it best to approach this subject in the form of a Q&A, which is the most direct way I know to impart knowledge about a subject in a form that is easy for readers to digest.

Q: What is a security freeze?

A: A security freeze essentially blocks any potential creditors from being able to view or “pull” your credit file, unless you affirmatively unfreeze or thaw your file beforehand. With a freeze in place on your credit file, ID thieves can apply for credit in your name all they want, but they will not succeed in getting new lines of credit in your name because few if any creditors will extend that credit without first being able to gauge how risky it is to loan to you (i.e., view your credit file). And because each credit inquiry caused by a creditor has the potential to lower your credit score, the freeze also helps protect your score, which is what most lenders use to decide whether to grant you credit when you truly do want it and apply for it. 

Q: What’s involved in freezing my credit file?

A: Freezing your credit involves notifying each of the major credit bureaus that you wish to place a freeze on your credit file. This can usually be done online, but in a few cases you may need to contact one or more credit bureaus by phone or in writing. Once you complete the application process, each bureau will provide a unique personal identification number (PIN) that you can use to unfreeze or “thaw” your credit file in the event that you need to apply for new lines of credit sometime in the future. Depending on your state of residence and your circumstances, you may also have to pay a small fee to place a freeze at each bureau. There are four consumer credit bureaus: Equifax, Experian, Innovis and Trans Union.

Q: How much is the fee, and how can I know whether I have to pay it?

A: The fee ranges from $0 to $15 per bureau, meaning that it can cost up to $60 to place a freeze at all four credit bureaus (recommended). However, in most states, consumers can freeze their credit file for free at each of the major credit bureaus if they also supply a copy of a police report and in some cases an affidavit stating that the filer believes he/she is or is likely to be the victim of identity theft. In many states, that police report can be filed and obtained online. The fee covers a freeze as long as the consumer keeps it in place. Equifax has a decent breakdown of the state laws and freeze fees/requirements.

Q: What’s involved in unfreezing my file?

A: The easiest way to unfreeze your file for the purposes of gaining new credit is to spend a few minutes on the phone with the company from which you hope to gain the line of credit (or perhaps research the matter online) to see which credit bureau they rely upon for credit checks. It will most likely be one of the major bureaus. Once you know which bureau the creditor uses, contact that bureau either via phone or online and supply the PIN they gave you when you froze your credit file with them. The thawing process should not take more than 24 hours.

Q: I’ve heard about something called a fraud alert. What’s the difference between a security freeze and a fraud alert on my credit file?

A: With a fraud alert on your credit file, lenders or service providers should not grant credit in your name without first contacting you to obtain your approval — by phone or whatever other method you specify when you apply for the fraud alert. To place a fraud alert, merely contact one of the credit bureaus via phone or online, fill out a short form, and answer a handful of multiple-choice, out-of-wallet questions about your credit history. Assuming the application goes through, the bureau you filed the alert with must by law share that alert with the other bureaus.

Consumers also can get an extended fraud alert, which remains on your credit report for seven years. Like the free freeze, an extended fraud alert requires a police report or other official record showing that you’ve been the victim of identity theft.

An active duty alert is another alert available if you are on active military duty. The active duty alert is similar to an initial fraud alert except that it lasts 12 months and your name is removed from pre-approved firm offers of credit or insurance (prescreening) for 2 years.

Q: Why would I pay for a security freeze when a fraud alert is free?

A: Fraud alerts only last for 90 days, although you can renew them as often as you like. More importantly, while lenders and service providers are supposed to seek and obtain your approval before granting credit in your name if you have a fraud alert on your file, they’re not legally required to do this.

Q: Hang on: If I thaw my credit file after freezing it so that I can apply for new lines of credit, won’t I have to pay to refreeze my file at the credit bureau where I thawed it?

A: Yes (unless you’ve previously qualified for a free freeze). However, even if you have to do this once or twice a year, the cost of doing so is almost certainly less than paying for a year’s worth of credit monitoring services.

Q: Is there anything I should do in addition to placing a freeze that would help me get the upper hand on ID thieves?

A: Yes: Periodically order a free copy of your credit report. By law, each of the three major credit reporting bureaus must provide a free copy of your credit report each year — via a government-mandated site: annualcreditreport.com. The best way to take advantage of this right is to make a notation in your calendar to request a copy of your report every 120 days, to review the report and to report any inaccuracies or questionable entries when and if you spot them.

Q: I’ve heard that tax refund fraud is a big deal now. Would having a fraud alert or security freeze prevent thieves from filing phony tax refund requests in my name with the states and with the Internal Revenue Service?

A: Neither would stop thieves from fraudulently requesting a refund in your name. However, a freeze on your credit file would have prevented thieves from using the IRS’s own Web site to request a copy of your previous year’s tax transcript — a problem the IRS said led to tax fraud on 100,000 Americans this year and that prompted the agency to suspend online access to the information. For more information on what you can do to minimize your exposure to tax refund fraud, see this primer.

Q: Okay, I’ve got a security freeze on my file, what else should I do?

A: It’s also a good idea to notify a company called ChexSystems to keep an eye out for fraud committed in your name. Thousands of banks rely on ChexSystems to verify customers who are requesting new checking and savings accounts, and ChexSystems lets consumers place a security alert on their credit data to make it more difficult for ID thieves to fraudulently obtain checking and savings accounts. For more information on doing that with ChexSystems, see this link.

Q: If I freeze my file, won’t I have trouble getting new credit going forward? 

A: If you’re in the habit of applying for a new credit card each time you see a 10 percent discount for shopping in a department store, a security freeze may cure you of that impulse. Other than that, as long as you already have existing lines of credit (credit cards, loans, etc) the credit bureaus should be able to continue to monitor and evaluate your creditworthiness should you decide at some point to take out a new loan or apply for a new line of credit.

Q: Anything else?

A: ID thieves like to intercept offers of new credit and insurance sent via postal mail, so it’s a good idea to opt out of pre-approved credit offers. If you decide that you don’t want to receive prescreened offers of credit and insurance, you have two choices: You can opt out of receiving them for five years or opt out of receiving them permanently.

To opt out for five years: Call toll-free 1-888-5-OPT-OUT (1-888-567-8688) or visit www.optoutprescreen.com. The phone number and website are operated by the major consumer reporting companies.

To opt out permanently: You can begin the permanent Opt-Out process online at www.optoutprescreen.com. To complete your request, you must return the signed Permanent Opt-Out Election form, which will be provided after you initiate your online request. 

PERSONAL EXPERIENCE

A couple of years back, I was signed up for a credit monitoring service and had several unauthorized applications for credit filed in my name in rapid succession. Over a period of weeks, I fielded numerous calls from the credit monitoring firm, and spent many grueling hours on the phone with the firm’s technicians and with the banks that had been tricked into granting the credit — all in a bid to convince the latter that I had not in fact asked them for a new credit line.

The banks in question insisted that I verify my identity by giving them all of the personal information they didn’t already have, and I was indignant that they hadn’t been that careful before opening the fraudulent accounts in the first place. Needless to say, the experience was extremely frustrating and massively time-consuming.

We eventually got that straightened out, but it took weeks. Not long after that episode, I decided to freeze my credit and my wife’s at all of the major bureaus. Turns out, I did that none too soon: A few weeks later, I broke a story about a credit card breach at nationwide beauty chain Sally Beauty, detailing how the cards stolen from Sally Beauty customers had wound up for sale on Rescator[dot]cc, the same fraud shop that had been principally responsible for selling cards stolen in the wake of the massive data breaches at Home Depot and Target.

Rescator’s message to his customers urging them to steal my identity.

In response to my reporting about him and his site, Rescator changed his site’s home page to a photoshopped picture of my driver’s license, and linked his customers (mostly identity thieves and credit card hustlers) to a full copy of my credit report along with links to dozens of sites where one can apply for instant credit. Rescator also encouraged his friends and customers to apply for new credit in my name.

Over the next few weeks, I received multiple rejection letters from various financial firms, stating that although they had hoped to be able to grant my application for new credit, they were unable to do so because they could not view my credit file. The freeze had done its job.

In summary, credit monitoring services are helpful in digging you out of an identity theft ditch. But if you want true peace of mind, freeze your credit file.

TorrentFreak: How You Can Help to Fix EU Copyright Law

This post was syndicated from: TorrentFreak and was written by: Amelia Andersdotter. Original post: at TorrentFreak

The pro-copyright lobbies are among the best organised in the world, second only to the tobacco lobby. They gather up employees and contractors and tell them that real people and real internet users are bad people who want to harm them.

When I was in the Parliament, I was at one time visited by a young mother of two who wondered why I was trying to put her children out on the streets without food or education. She was a script-writer for ARTE, the tax-payer-financed Franco-German TV station.

Even though I understand that her wages don’t come from copyright licenses, even indirectly, and even though she appeared not to have thought of that, it was uncomfortable to be accused of harming someone else’s children.

Had I not been ten years younger than her, and convinced that there are ways for her to make money that don’t involve destroying the internet or putting file-sharers out of their homes, I might have changed my political opinion because of her heartfelt accusation.

Many individuals like her are currently visiting our legislators. Many politicians are presently being accused of harming children should they consider progressive copyright proposals.

What these politicians aren’t hearing are the stories of the people who get cease-and-desist letters, get sued, are put through criminal trials, or are handed damages so large that no single individual could reasonably pay them off in a lifetime. They’re not hearing the stories of those who’ve built networks for millions of Europeans where, for want of better words, cultural affinity arises.

File-sharing and peer-to-peer culture, like no other culture in modern times, has created a common cultural base in Europe. Although I hope that even without my idealistic formulation of these matters, you’re all convinced copyright at least somehow needs to change.

Politics too often gets stuck in the realm of the possible. It is possible that a 35-year-old mother could have her income impacted by a legislative reform that in no way influences her employer. It is not possible, but real, that many individuals in the European Union every year are caused heavy, even impossible, costs due to file-sharing trials and cease-and-desist letters.

It is not possible, but real, that copyright laws are increasingly forcing technology companies to innovate to the disadvantage of the freedom of the users.

The European Parliament needs to be taken back down to reality, and away from the realm of possible dangers before June 16th.

If you are presently in the European Union, or if you can reach out to people in the European Union in any way at all: this is the time to ask them to contact their representatives in the European Parliament. Tell the Members about yourselves, your lives, your children and the world in which you want to live. Give them a taste of the reality which exists away from the speculative possibilities of professional lobbies.

Whenever we’re too tired or too scared to tell our politicians what is important, whoever has the resources will weave them stories of realms of possibilities instead. The future of copyright, and of all of the Internet, is too important to leave in the hands of such story-tellers.

Go to copywrongs.eu and figure out the specific demands you want to put to your MEP, but remember – your biggest asset is that you’re real, and the lobby stories mostly aren’t.

About The Author

ameliaa

Amelia Andersdotter represented the Swedish Pirate Party in the European Parliament between December 2011 and July 2014. She’s an expert on topics related to the Internet, intellectual property and IT-policy.

Source: TorrentFreak, for the latest info on copyright, file-sharing, torrent sites and anonymous VPN services.

Raspberry Pi: Apply now for Picademy in July

This post was syndicated from: Raspberry Pi and was written by: Helen Lynn. Original post: at Raspberry Pi

At this very moment, the Raspberry Pi education team are in Exeter nearing the end of the second day of Picademy #10; Day 2 is Project Day, and the #picademy hashtag on Twitter is full of photos of biscuit-tin robots, papercrafts, Babbage bears and breadboards as the teams share their projects.

Primary, secondary and post-16 teachers can be part of CPD like this: Picademy #11 takes place on 13 and 14 July at Pi Towers in Cambridge, UK, and you can apply here. As ever, it’s completely free to attend, and you don’t need any experience: we’ll teach you, inspire you, feed you, and give you free resources, and at the end, you’ll join the friendly and enthusiastic ranks of Raspberry Pi Certified Educators (you even get a badge).

But what’s it like, exactly? Raspberry Pi Certified Educators have blogged and tweeted quite a lot about their experiences of Picademy:

Babbage drums

I decided to create a Blue Peter-inspired drum kit made out of paper cups and tin foil which played different noises when hit, using the pygame library in Python […] that really was what made Picademy stand out from other courses – the fact that everyone was given time to play and practice away from the school environment, build a network of like-minded educationalists and also have hands-on guidance from the experts. I feel more confident in letting pupils loose on the Pi now… – David Williams at https://computingcondo.wordpress.com/2015/06/03/picademynorth-day-2-and-overview/

Sway Grantham, a Year 5 class teacher and one of our earliest Picademy attendees, joined us at the recent Picademy North in York and used Storify to bring together glimpses of the event from Twitter.

One of our favourite things about Picademy is watching the effects ripple out afterwards, as people write about its impact on their teaching and network with one another to organise events local to them. Spencer Organ, a chemistry teacher from a school in Birmingham who attended Picademy in October 2014 blogged about the training itself and, later that term, about the impact of his two days at Pi Towers:

Impacts of Picademy

David Saunders travelled from overseas to attend Picademy in April, and went home inspired:

In short, my mind is ablaze with ideas and enthusiasm for how to positively impact my community. – David Saunders at https://medium.com/@DesignSaunders/raspberry-pi-academy-8001c43087d5

More Raspberry Pi Certified Educators have written about their Picademy experiences and their effects than I can quote here, and you’ll find more of them listed on our Picademy home page.

We always finish Picademy with a group photo. It’s an occasion of dignity and gravitas.

February Picademy

So, what are you waiting for?

Picademy

The post Apply now for Picademy in July appeared first on Raspberry Pi.

Raspberry Pi: Maker Faire Bay Area 2015

This post was syndicated from: Raspberry Pi and was written by: Helen Lynn. Original post: at Raspberry Pi

Three weeks ago Ben, Eben, Liz, Matt, Pete, Rachel and I headed to San Mateo, California for Maker Faire Bay Area 2015. We thought it might get a bit busy, so we roped in Paul from Pimoroni to give us a hand, and we also had lots of help from fantastic local volunteer Dean over the weekend.

Maker Faire events are a showcase of making, where people who make everything from elaborate marble runs to drawing robots to needle-felted T-Rexes to backyard rollercoasters get to show their projects, and visitors get to experience the tremendous potential of making and to try things out for themselves. They call Maker Faire Bay Area The Greatest Show & Tell On Earth, and it’s easy to see why as soon as you arrive. It’s full of this kind of stuff:

MegaBot

MegaBot is a bit terrifying even when it’s unpowered

This giant metal-plated fire-breathing rhino has headlamps and a registration plate

We peeled ourselves away from the giant fire-breathing animals and got our stand set up. This year the event opened earlier than usual, with a special preview day on Friday to give educators, school groups and others a chance to meet and talk with makers for an afternoon before the big crowds arrived on Saturday and Sunday. About half an hour before the doors opened, Pete and I tested out the DOT board activity that we’d be doing with children.

The DOT board, which Rachel created, made its debut at SXSW Create, and it’s great: you use electrically conductive paint to complete a connect-the-dots picture on a printed circuit board, and then connect the board to a Raspberry Pi and run a Python program to see the Pi respond to the connections you’ve painted – in this case, you see an image of an aeroplane in your choice of colour. It went down very well with the school groups whom we loved meeting on Friday:

Group of children painting DOT boards

My absolute favourite part of this activity is watching children add a dab of conductive paint to the DOT board to select an extra colour while it’s connected to a Pi with the Python program running. A lot of the time we did this in response to kids asking, “But what happens if I…?” It was a jaw-dropping moment for many of them when we suggested they try it and see, and they watched the image on the screen change colour. It was great to see how the DOT board showed children part of the relationship between hardware and software, inputs and outputs in a very direct way that made a real impression.

On Saturday morning we regrouped ready for the big crowds.

We soon learned that having just eight or nine students around the DOT boards counted as a lull. We went through 1200 DOT boards over the weekend, and as well as helping children with the activity, there were usually at least two of us talking to visitors who were interested in other aspects of Raspberry Pi, answering questions and handing out resource cards to whet people’s appetite for our growing collection of free, high-quality online resources. We gave out around 7000 stickers to visitors, and Eben, Matt, Ben and Paul gave talks at several of the event’s 12 stages. You can see how busy the Faire got in an interview that Eben gave in front of our stand – you’ll spot a number of us in the background if you look closely!

We knew that children were getting excited about hardware with our DOT activity, and we felt very proud that Maker Faire rated it too; here’s a Maker Faire Editor’s Choice blue ribbon hanging beside one of our banners.

I was lucky enough to be able to spend some time away from our stand and visit other makers. Plenty were using Raspberry Pi in their own projects, and I enjoyed saying hello to Acrobotic and Weaved, finally meeting Mugbot in person, and eyeing up cute wheeled gardening robots that do your weeding for you:

It was a truly outstanding weekend, and, while bigger than most, this year’s Maker Faire is just one of many events we attend to talk to people and introduce them to learning and teaching with Raspberry Pi. Within a week of packing up, our team headed off to introduce the DOT boards to a Raspberry Jam in Utah and to prepare for a Picademy in York and an exhibition in Liverpool. And, of course, there’s always a Jam coming up somewhere in the world, and although we can’t get to all of them, we think of them all often! Why not see if there’s one near you?

The post Maker Faire Bay Area 2015 appeared first on Raspberry Pi.

LWN.net: Conservancy Seeks Your Questions on GPL Enforcement

This post was syndicated from: LWN.net and was written by: ris. Original post: at LWN.net

Software Freedom Conservancy has announced
a long-term campaign to increase education and understanding about
community-driven GPL enforcement processes. “Conservancy invites
developers and other Open Source and Free Software contributors to email
their questions on GPL enforcement to
<enforcement-questions@sfconservancy.org>. Conservancy cannot promise
to answer every question; Conservancy will use the collected questions over
the coming months to provide more educational and informational materials
about GPL enforcement, and in particular about Conservancy’s GPL Compliance Project for Linux Developers.”

TorrentFreak: Court Orders VPN, TOR & Proxy Advice Site to be Blocked

This post was syndicated from: TorrentFreak and was written by: Andy. Original post: at TorrentFreak

While there is still much resistance to the practice in the United States, having websites blocked at the ISP level is becoming easier in many other countries around the world.

One country where the process is becoming ever more streamlined is Russia. The country blocks hundreds of websites on many grounds, from copyright infringement to the publication of extremist propaganda, suicide discussion and the promotion of drugs.

Keeping a close eye on Russia’s constantly expanding website blocklist is RosKomSvoboda. The project advocates human rights and freedoms on the Internet, monitors and publishes data on blockades, and provides assistance to Internet users and website operators who are wrongfully subjected to restrictions.

Now, however, RosKomSvoboda will have to fight for its own freedoms after a local court ordered ISPs to block an advice portal operated by the group.

The site, RUBlacklist, is an information resource aimed at users who wish to learn about tools that can be used to circumvent censorship. It doesn’t host any tools itself but offers advice on VPNs, proxies, TOR and The Pirate Bay’s Pirate Browser.

Also detailed are various anonymizer services (which are presented via a linked Google search), Opera browser’s ‘turbo mode’ (which is often used in the UK to unblock torrent sites) and open source anonymous network I2P (soon to feature in a Popcorn Time fork).

Unfortunately, Russian authorities view this education as problematic. During an investigation carried out by the Anapa district’s prosecutor’s office it was determined that RosKomSvoboda’s advice undermines government blocks.

“Due to anonymizer sites, in particular http://rublacklist.net/bypass, users can have full access to all the banned sites anonymously and via spoofing. That is, with the help of this site, citizens can get unlimited anonymous access to banned content, including extremist material,” a ruling from the Anapa Court reads.

Describing the portal as an anonymization service, the Court ordered RosKomSvoboda’s advice center to be blocked at the ISP level.

Needless to say the operators of RosKomSvoboda are outraged that their anti-censorship efforts will now be censored. Group chief Artyom Kozlyuk slammed the decision, describing both the prosecutor’s lawsuit and the Court ruling as “absurd”.

“Law enforcement has demonstrated its complete incompetence in the basic knowledge of all the common technical aspects of the Internet, though even youngsters can understand it,” Kozlyuk says.

“Anonymizers, proxies and browsers are multitask instruments, helping to search for information on the Internet. If we follow the reasoning of the prosecutor and the court, then the following stuff should be prohibited as well: knives, as they can become a tool for murder; hammers, as they can be used as a tool of torture; planes, because if they fall they can lead to many deaths.

“To conclude, I would love to ask the prosecutor of Anapa to consider the possibility of prohibiting paper and ink, because with these tools one can draw a very melancholic picture of this ruling’s complete ignorance.”

RosKomSvoboda’s legal team say they intend to appeal the ruling, which was the result of a legal procedure that took place without their knowledge.

“We can only guess why the project is considered to be an anonymizer. It’s likely that no one in Anapa city court understands what they are dealing with,” says RosKomSvoboda lawyer Sarkis Darbinian.

“We see that these kinds of rulings are being stamped on a legal conveyor belt. Moreover, we see the obvious violation of the fundamental principles of civil procedure – an adversarial system.”

The court ruling against RUBlacklist arrives at the same time as a report from the United Nations which urges member states to do everything they can to encourage encryption and anonymity online.

Source: TorrentFreak, for the latest info on copyright, file-sharing, torrent sites and anonymous VPN services.

Raspberry Pi: Astro Pi Mission Update 3

This post was syndicated from: Raspberry Pi and was written by: James Robinson. Original post: at Raspberry Pi

Here at Pi Towers, Astro Pi fever is taking hold! Over the last few weeks there have been a number of things happening which we’re really excited about, so it’s time for an update.

astro pi

The first and most crucial bit of news for those of you furiously writing your competition entries is that you now have a little more time. UK Space, the organisation managing the competition, have decided to give secondary students a whole extra week. As we were a week late in shipping the kits to our phase 1 winners, the deadline is now 10am on Monday 6 July, so get cracking!

Put this date in your diary, don’t forget.

To help you get to grips with programming the Astro Pi HAT we’ve created a couple of helpful resources.

Firstly, you can find out all about the hardware, its capabilities and get a detailed breakdown of the Python library using our Astro Pi Guide. It explains in detail all aspects of the library and provides some examples of how to use them.

However, if you just want to have a play and learn as you go, check out our Getting Started with Astro Pi resource, which works through a series of examples and explores most of the Python library.

Astro Pi Teachers' Guide
Getting Started with Astro Pi

You will create a series of interesting programs which make use of all of the Astro Pi sensors like this reaction game.

There’s a great interactive demo for exploring the IMU (movement) sensor and you can also experiment with pressure, following Dave’s example:

Astro Pi HATs are also starting to appear in the wild — loads of competition entrants have been receiving their Astro Pi HATs and excitedly tweeting about it.

Others within our community have been playing around creating examples and resources. The awesome Martin O’Hanlon has put together a getting started tutorial as well as building this amazing interactive Astro Pi in Minecraft:

Dan Aldred, one of our Raspberry Pi Certified Educators, has put together some great resources for using Astro Pi with his students. Visit his website to find a great language reference booklet.

We also have a number of examples compiled by Ben Nuttall and available on Github.

If you’re looking for more technical help with Astro Pi check out our Astro Pi forum.

As you can see, we’re all super excited about the Astro Pi launch! You can keep an eye on our progress as we get closer to lift-off by following our social media channels. In the next few weeks the flight hardware is going to be assembled and tested!

PS

SPAAAAAAAAAAACCCCEEEE!!!!!!

The post Astro Pi Mission Update 3 appeared first on Raspberry Pi.

Raspberry Pi: Bank Holiday kind of blog

This post was syndicated from: Raspberry Pi and was written by: Clive Beale. Original post: at Raspberry Pi

The blog today is that there is no blog today. It’s Spring bank holiday here in the UK which means that we are all too busy rolling cheeses down hills, re-enacting obscure battles against baddies and doing stuff like this:

StompMorrisdance

Also, the education team are all en route to York for Picademy North. See you Tuesday!

The post Bank Holiday kind of blog appeared first on Raspberry Pi.

Raspberry Pi: The Sense HAT: headgear for the terminally curious

This post was syndicated from: Raspberry Pi and was written by: Clive Beale. Original post: at Raspberry Pi

Having looked at the chunky outside goodness of the Astro Pi case yesterday it seems only fair to take another look at the heart of the Astro Pi, the Sense HAT. (This is not a conical cap that you put on the really clever kid and stand him in the corner but our add-on board for the Pi bristling with sensors and other useful things.) It’s currently going out to schools and organisations who took part in our recent competition but we also plan to sell it.

A Raspberry Pi wearing a Sense HAT

The full tech specs are here but basically it has:

  • 8×8 LED matrix display
  • accelerometer, gyroscope and magnetometer
  • air pressure sensor
  • temperature and humidity sensor
  • a teeny joystick
  • real time clock

The Astro Pi site explains what these all do and how they could be used.

I’m really excited about the Sense HAT. With all of those sensors on a single board it’s obviously a brilliant tool for making stuff (I have in mind a self-balancing attack robot that senses humans, aggressively hunts them down and then gently dispenses Wagon Wheels from its slot-like mouth). But it’s the potential for science that’s making me think. In particular I’d love to see it flourish in the science classroom.

A typical school science classroom

Despite the teacher recruitment ads that inevitably show zany antics with Van de Graaff generators, explosions and dancing bonobos, the reality is that much of high school science is about experimentation and observation (which is a good thing!). But lab kit such as sensors, controllers and data loggers doesn’t come cheap (I was once told by a class that their usual science teacher never let them use the data loggers because “they were too expensive”). Nor is it easy to get bits of kit to talk to each other or the Internet of Things (with the potential benefits that come from that, such as improved assessment, parental involvement, and sharing and consolidating data).

data logger

A data logger I found in a school skip. The size of a cash register yet only logs temperature. On paper.

A Pi wearing a Sense HAT could do everything from monitoring plant growth to controlling and logging experimental variables. A series of experiments using the accelerometer/gyroscope to investigate forces and equations of motion is mandatory. Feel free to add your own ideas below, and if any science teachers would like to get involved then please get in touch.
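To give a sense of how little code such a data logger needs: the sketch below is plain Python with stand-in readings, and on a real Sense HAT you would replace `read_sensors` with calls like `sense.get_temperature()`, `sense.get_pressure()` and `sense.get_humidity()` from the `sense_hat` library.

```python
import csv
import io

def read_sensors():
    # Stand-in readings; on a real Sense HAT these would come from
    # sense.get_temperature(), get_pressure() and get_humidity().
    return {"temperature": 21.3, "pressure": 1013.2, "humidity": 45.0}

def log_rows(samples, out):
    """Write (seconds, reading) samples to `out` as CSV rows."""
    writer = csv.DictWriter(out, fieldnames=["time", "temperature",
                                             "pressure", "humidity"])
    writer.writeheader()
    for t, reading in samples:
        writer.writerow(dict(reading, time=t))

buf = io.StringIO()  # in a classroom you'd open('log.csv', 'w') instead
log_rows([(0, read_sensors()), (60, read_sensors())], buf)
print(buf.getvalue())
```

Swap the `StringIO` buffer for a file and wrap the sampling in a timed loop, and that skip-rescued data logger is replaced for the price of a HAT.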

If you are lucky enough to already have a Sense HAT, Martin “When does that man sleep?” O’Hanlon has written an excellent getting started tutorial. If not, it’s worth taking a look anyway to get a sense (yeah, yeah :)) of what it can do.

Astro Pi sense HAT LED

Up above the streets and the houses, Sense HAT climbing high.

The final price is yet to be announced but we’re confident that there will be nothing else out there to rival it for value, potential, support and resources. Keep your eyes peeled for more news on the Sense HAT soon.

The post The Sense HAT: headgear for the terminally curious appeared first on Raspberry Pi.

Krebs on Security: Who’s Scanning Your Network? (A: Everyone)

This post was syndicated from: Krebs on Security and was written by: BrianKrebs. Original post: at Krebs on Security

Not long ago I heard from a reader who wanted advice on how to stop someone from scanning his home network, or at least recommendations about to whom he should report the person doing the scanning. I couldn’t believe that people actually still cared about scanning, and I told him as much: These days there are countless entities — some benign and research-oriented, and some less benign — that are continuously mapping and cataloging virtually every device that’s put online.

One of the more benign is scans.io, a data repository of research findings collected through continuous scans of the public Internet. The project, hosted by the ZMap Team at the University of Michigan, includes huge, regularly updated results grouped around scanning for Internet hosts running some of the most commonly used “ports” or network entryways, such as Port 443 (think Web sites protected by the lock icon denoting SSL/TLS Web site encryption); Port 21, or file transfer protocol (FTP); and Port 25, or simple mail transfer protocol (SMTP), used by many businesses to send email.
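At its simplest, the probe behind such repositories is an attempted TCP connection per host and port. A minimal, single-host sketch is below; this is not how ZMap itself works (ZMap sends raw packets and scans the whole IPv4 space in hours), just the basic idea:

```python
import socket

def port_open(host, port, timeout=0.5):
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:  # refused, unreachable, or timed out
        return False

# Probe a few of the ports mentioned above on your own machine.
for port in (21, 25, 443):
    print(port, port_open("127.0.0.1", port))
```

Only run this against hosts you own; at Internet scale the same check, repeated billions of times, is what populates datasets like scans.io.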

When I was first getting my feet wet on the security beat roughly 15 years ago, the practice of scanning networks you didn’t own looking for the virtual equivalent of open doors and windows was still fairly frowned upon — if not grounds to get one into legal trouble. These days, complaining about being scanned is about as useful as griping that the top of your home is viewable via Google Earth. Trying to put devices on the Internet and then hoping that someone or something won’t find them is one of the most futile exercises in security-by-obscurity.

To get a gut check on this, I spoke at length last week with University of Michigan researchers Michael D. Bailey (MB) and Zakir Durumeric (ZD) about their ongoing and very public project to scan all the Internet-facing things. I was curious to get their perspective on how public perception of widespread Internet scanning has changed over the years, and how targeted scanning can actually lead to beneficial results for Internet users as a whole.

MB: Because of the historic bias against scanning and this debate between disclosure and security-by-obscurity, we’ve approached this very carefully. We certainly think that the benefits of publishing this information are huge, and that we’re just scratching the surface of what we can learn from it.

ZD: Yes, there are close to two dozen papers published now based on broad, Internet-wide scanning. People who are more focused on comprehensive scans tend to be the more serious publications that are trying to do statistical or large-scale analyses that are complete, versus just finding devices on the Internet. It’s really been in the last year that we’ve started ramping up and adding scans [to the scans.io site] more frequently.

BK: What are your short- and long-term goals with this project?

ZD: I think long-term we do want to add coverage of additional protocols. A lot of what we’re focused on is different aspects of a protocol. For example, if you’re looking at hosts running the “https://” protocol, there are many different ways you can ask questions depending on what perspective you come from. You see different attributes and behavior. So a lot of what we’ve done has revolved around https, which is of course hot right now within the research community.

MB: I’m excited to add other protocols. There are a handful of protocols that are critical to the operation of the Internet, and I’m very interested in understanding the deployment of DNS, BGP, and TLS’s interaction with SMTP. Right now, there’s a pretty long tail to all of these protocols, and so that’s where it starts to get interesting. We’d like to start looking at things like programmable logic controllers (PLCs) and things that are responding from industrial control systems.

ZD: One of the things we’re trying to pay more attention to is the world of embedded devices, or this ‘Internet of Things’ phenomenon. As Michael said, there are also industrial protocols, and there are different protocols that these embedded devices are supporting, and I think we’ll continue to add protocols around that class of devices as well because from a security perspective it’s incredibly interesting which devices are popping up on the Internet.

BK: What are some of the things you’ve found in your aggregate scanning results that surprised you?

ZD: I think one thing in the “https://” world that really popped out was we have this very large certificate authority ecosystem, and a lot of the attention is focused on a small number of authorities, but actually there is this very long tail — there are hundreds of certificate authorities that we don’t really think about on a daily basis, but that still have permission to sign for any Web site. That’s something we didn’t necessarily expect. We knew there were a lot, but we didn’t really know what would come up until we looked at those.

There also was work we did a couple of years ago on cryptographic keys and how those are shared between devices. In one example, primes were being shared between RSA keys, and because of this we were able to factor a large number of keys, but we really wouldn’t have seen that unless we started to dig into that aspect [their research paper on this is available here].
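The shared-prime weakness mentioned here is easy to demonstrate: if two RSA moduli share a prime factor (say, because two devices generated keys with poor entropy), a plain GCD of the two public moduli recovers that prime and breaks both keys. A toy illustration with tiny primes (the researchers used a batch-GCD algorithm over millions of real 1024-bit-plus moduli; these numbers are purely illustrative):

```python
from math import gcd

# Toy primes; imagine two devices with bad entropy both picked p.
p, q1, q2 = 101, 103, 107
n1, n2 = p * q1, p * q2  # the two public RSA moduli

shared = gcd(n1, n2)     # recovers the shared prime: factoring for free
print(shared, n1 // shared, n2 // shared)
```

No factoring algorithm is needed at all once two keys share a prime; Euclid’s algorithm does the whole job.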

MB: One of things we’ve been surprised about is when we measure these things at scale in a way that hasn’t been done before, often times these kinds of emergent behaviors become clear.

BK: Talk about what you hope to do with all this data.

ZD: We were involved a lot in the analysis of the Heartbleed vulnerability. And the surprising development there wasn’t that lots of hosts were vulnerable; it was seeing who patched, how, and how quickly. What we found was that by taking the data from these scans and actually sending vulnerability notifications to everybody, we were able to increase patching for the Heartbleed bug by 50 percent. So there was an interesting kind of surprise there: not just what you learn from looking at the data, but what actions you can take based on that analysis. And that’s something we’re incredibly interested in: how can we spur progress within the community to improve security, whether that be through vulnerability notification, or helping with configurations.

BK: How do you know your notifications helped speed up patching?

MB: With the Heartbleed vulnerability, we took the known vulnerable population from scans, and ran an A/B test. We split the population that was vulnerable in half and notified one half of the population, while not notifying the other half, and then measured the difference in patching rates between the two populations. We did end up after a week notifying the second population…the other half.
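The design Bailey describes is a standard randomized controlled trial over the vulnerable population. A simplified sketch, with hypothetical stand-in addresses (the real study worked with far larger populations and rescanned to measure patching):

```python
import random

def ab_split(hosts, seed=0):
    """Randomly split a vulnerable population into a notified
    (treatment) half and an unnotified (control) half."""
    rng = random.Random(seed)
    shuffled = hosts[:]          # don't mutate the caller's list
    rng.shuffle(shuffled)
    mid = len(shuffled) // 2
    return shuffled[:mid], shuffled[mid:]

def patch_rate(hosts, is_patched):
    """Fraction of hosts a rescan shows are no longer vulnerable."""
    return sum(is_patched(h) for h in hosts) / len(hosts)

# Example addresses from a documentation range.
vulnerable = ["192.0.2.%d" % i for i in range(1, 101)]
treatment, control = ab_split(vulnerable)

# After notifying only `treatment`, rescan both groups and compare
# patch_rate(treatment, check) against patch_rate(control, check).
```

Randomizing the split is what lets the difference in patch rates be attributed to the notification rather than to, say, which networks happened to be listed first.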

BK: How many people did you notify after going through the data from the Heartbleed vulnerability scanning? 

ZD: We took everyone in the IPv4 address space, found those that were vulnerable, and then contacted the registered abuse contact for each block of IP space. We used data from 200,000 hosts, which corresponded to 4,600 abuse contacts, and then we split those into an A/B test. [Their research on this testing was published here].
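Because the emails went to abuse contacts rather than to individual hosts, the unit of randomization was the contact, not the IP. A sketch of that aggregation step, assuming a hypothetical `contact_of()` lookup (a real one would query WHOIS):

```python
import random
from collections import defaultdict

def group_by_contact(hosts, lookup):
    """Map each vulnerable host to its registered abuse contact,
    so one notification covers every affected address in a block."""
    groups = defaultdict(list)
    for host in hosts:
        groups[lookup(host)].append(host)
    return groups

def contact_of(ip):
    # Hypothetical lookup: treat the /24 as the registered block.
    return "abuse@" + ".".join(ip.split(".")[:3]) + ".example"

hosts = ["198.51.100.%d" % i for i in range(1, 9)] + ["203.0.113.5"]
groups = group_by_contact(hosts, contact_of)

# Randomize at the contact level, then split into notify / hold-back.
contacts = sorted(groups)
random.Random(0).shuffle(contacts)
notify = contacts[:len(contacts) // 2]
hold_back = contacts[len(contacts) // 2:]
```

Grouping first avoids contaminating the control group: a contact who is notified about one host would likely patch all of the hosts in the same block.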

So, that’s the other thing that’s really exciting about this data. Notification is one thing, but the other is we’ve been building models that are predictive of organizational behavior. So, if you can watch, for example, how an organization runs their Web server, how they respond to certificate revocation, or how fast they patch — that actually tells you something about the security posture of the organization, and you can start to build models of risk profiles of those organizations. It moves away from this sort of patch-and-break or patch-and-pray game we’ve been playing. So, that’s the other thing we’ve been starting to see, which is the potential for being more proactive about security.

BK: How exactly do you go about the notification process? That’s a hard thing to do effectively and smoothly even if you already have a good relationship with the organization you’re notifying….

MB: I think one of the reasons the Heartbleed notification experiment was so successful is that we did notifications on the heels of a broad vulnerability disclosure. The press and the general atmosphere and culture provided the impetus for people to be excited about patching. The response we received to those notifications was overwhelmingly positive. A lot of people we reached out to said, “Hey, this is great, please scan me again, and let me know if I’m patched.” Pretty much everyone was excited to have the help.

Another interesting challenge was that we did some filtering as well in cases where the IP address had no known patches. So, for example, where we got information from a national CERT [Computer Emergency Response Team] that this was an embedded device for which there was no patch available, we withheld that notification because we felt it would do more harm than good since there was no path forward for them. We did some aggregation as well, because it was clear there were a lot of DSL and dial-up pools affected, and we did some notifications to ISPs directly.

BK: You must get some pushback from people about being included in these scans. As for the idea that scanning is inherently bad or should somehow prompt some kind of reaction in and of itself, do you think that ship has sailed?

ZD: There is some small subset that does have issues. What we try to do with this is be as transparent as possible. For all of the hosts we use for scanning, if you look at their WHOIS records or just visit them with a browser, you’ll see right away that the machine is part of this research study, here’s the information we’re collecting, and here’s how you can be excluded. A very small percentage of people who visit that page will read it and then contact us and ask to be excluded. If you send us an email [and request removal], we’ll remove you from all future scans. A lot of this comes down to education; most people to whom we explain our process and motives are okay with it.
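Honoring opt-outs like this typically means keeping a blocklist of networks and filtering scan targets against it before each run. A minimal sketch using the standard `ipaddress` module (the excluded blocks here are made-up documentation ranges):

```python
import ipaddress

# Networks whose owners have asked to be excluded from future scans.
EXCLUDED = [
    ipaddress.ip_network("198.51.100.0/24"),
    ipaddress.ip_network("203.0.113.0/24"),
]

def is_excluded(addr):
    """True if the address falls inside any opted-out network."""
    ip = ipaddress.ip_address(addr)
    return any(ip in net for net in EXCLUDED)

targets = ["192.0.2.10", "198.51.100.7", "203.0.113.200"]
scannable = [t for t in targets if not is_excluded(t)]
print(scannable)  # prints: ['192.0.2.10']
```

In practice the blocklist would be applied inside the scanner itself (ZMap supports a blacklist file), so excluded networks are never probed at all.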

BK: Are those that object and ask to be removed more likely to be companies and governments, or individuals?

ZD: It’s a mix of all of them. I do remember offhand there were a fair number of academic institutions and government organizations, but there were a surprising number of home users. Actually, when we broke down the numbers last year (PDF), the largest category was small to mid-sized businesses. This time last year, we had excluded only 157 organizations that had asked for it.

BK: Was there any pattern to those that asked to be excluded?

ZD: I think that actually is somewhat interesting: The exclusion requests aren’t generally coming from large corporations, which likely notice our scanning but don’t have an issue with it. A lot of emails we get are from these small businesses and organizations that really don’t know how to interpret their logs, and oftentimes just choose the most conservative route.

So we’ve been scanning for several years now, and I think when we originally started, we expected all the people who were watching for this to contact us at once and say, “Please exclude us.” And then we sort of expected that the number of people asking to be excluded would plateau, and we wouldn’t have problems again. But what we’ve seen is almost the exact opposite. We still get [exclusion request] emails each day, but what we’re really finding is that people aren’t discovering these scans proactively. Instead, they’re going through their logs while trying to troubleshoot some other issue, and they see a scan coming from us and don’t know who we are or why we’re contacting their servers. So it’s not the organizations that are watching; it’s the ones that really aren’t watching who are contacting us.

BK: Do you guys go back and delete historic records associated with network owners that have asked to be excluded from scans going forward?

ZD: At this point we haven’t gone back and removed data. One reason is that there are published research results based on those data sets, and so it’s very hard to change that information after the fact: if another researcher went back and tried to confirm an experiment or perform something similar, there would be no easy way of doing that.

BK: Is this what you’re thinking about for the future of your project? How to do more notification and build on the data you have for those purposes? Or are you going in a different or additional direction?

MB: When I think about the ethics of this kind of activity, I have a very utilitarian view: I’m interested in doing as much good as we possibly can with the data we have. I think that lies in notifications, in being proactive, in helping organizations that run networks better understand what their external posture looks like, and in building better safe defaults. But I’m most interested in a handful of core protocols that are under-served and not well understood. And so I think we should spend the majority of our effort focusing on a small handful of those, including BGP, TLS, and DNS.

ZD: In many ways, we’re just kind of at the tip of this iceberg. We’re just starting to see what types of security questions we can answer from these large-scale analyses. I think in terms of notifications, it’s very exciting that there are things beyond the analysis that we can use to actually trigger actions, but that’s something that clearly needs a lot more analysis. The challenge is learning how to do this correctly. Every time we look at another protocol, we start seeing these weird trends and behavior we never noticed before. With every protocol we look at there are these endless questions that seem to need to be answered. And at this point there are far more questions than we have hours in the day to answer.

Raspberry Pi: Happy Scratch Day 2015!

This post was syndicated from: Raspberry Pi and was written by: Clive Beale. Original post: at Raspberry Pi

A quick blip of a blog to say Happy Scratch Day!

Scratch cat logo

Help! I’m trapped in a white box. Fetch the magic wand tool!

We’re huge fans of Scratch here at the Foundation. It was designed to teach young people how to program but it’s a great learning tool at any age: you can build your first program in minutes and pick up fundamental concepts very quickly. Whilst having fun. Sneaky!

Raspberry Pi Scratch workshop.

Raspberry Pi Scratch workshop. Yes, that is Mitch Resnick at the back!

If you’ve never tried Scratch before then today is the day to boot up your Pi and have a play. If you are Pi-less then you can use it online, but you’ll be missing out on the best bit of all: physical computing with Scratch. It’s probably the easiest way to hook up sensors, LEDs, buttons and motors to the Pi, and resources such as our reaction game and Santa detector are great intro projects for the weekend.

various components connected to raspberry pi and Scratch

Connect buttons, sensors, cameras, LEDs, goblin sticks and other gubbins to your Pi

The Foundation has supported Scratch since our early days and we’ve put a lot of resources into making Scratch on the Pi better, faster and funner. We’re just putting the finishing touches to some new stuff that we think you’ll find really exciting (I’m excited anyway :)) so watch this space. We’ll have some news this summer. We’ll also be at the Scratch 2015 conference in Amsterdam in August so if you are there come and talk to us.

Raspberry Pi Scratch workshop.

Raspberry Pi Scratch workshop. A young man shows adults why flying hippos are essential in LED traffic light projects.

I’ll finish now. My sesame seed bagel has just popped up (and you know how easily they burn) and I’m stopping you from making things. This weekend, give the repeats of Thundercats a miss and go and have a play with Scratch instead—it’s a beautiful thing.

The post Happy Scratch Day 2015! appeared first on Raspberry Pi.