SANS Internet Storm Center, InfoCON: green: ISC StormCast for Wednesday, September 2nd 2015 http://isc.sans.edu/podcastdetail.html?id=4639, (Wed, Sep 2nd)

This post was syndicated from: SANS Internet Storm Center, InfoCON: green and was written by: SANS Internet Storm Center, InfoCON: green. Original post: at SANS Internet Storm Center, InfoCON: green

(c) SANS Internet Storm Center. https://isc.sans.edu Creative Commons Attribution-Noncommercial 3.0 United States License.

SANS Internet Storm Center, InfoCON: green: What’s the situation this week for Neutrino and Angler EK?, (Wed, Sep 2nd)

This post was syndicated from: SANS Internet Storm Center, InfoCON: green and was written by: SANS Internet Storm Center, InfoCON: green. Original post: at SANS Internet Storm Center, InfoCON: green

Introduction

Last month in mid-August 2015, an actor using Angler exploit kit (EK) switched to Neutrino EK [1]. A few days later, we found that actor using Angler again [2]. This week, we're back to seeing Neutrino EK from the same actor.

Neutrino EK from this actor is sending TeslaCrypt 2.0 as the payload. We also saw another actor use Angler EK to push Bedep during the same timeframe.

Today's diary looks at two infection chains from Tuesday 2015-09-01, one for Angler EK and another for Neutrino.

First infection chain: Angler EK

This infection chain ended with a Bedep infection. Bedep is known for carrying out advertising fraud and downloading additional malware [3, 4, 5].

A page from the compromised website had malicious script injected before the opening HTML tag. The Bedep payload, along with patterns seen in the injected script, indicates this is a different actor. It's not the same actor currently using Neutrino EK (or Angler EK).
Shown above: Injected script in a page from the compromised website.

We saw Angler EK on 148.251.98.68.
Click on the above image for a full-size view.

Using tcpreplay on the pcap in Security Onion, we find alerts for Angler EK and Bedep. This setup used Suricata with the EmergingThreats (ET) and ET Pro rule sets.
Click on the above image for a full-size view.

Second infection chain: Neutrino EK

Our second infection chain ended in a TeslaCrypt 2.0 infection. Very little has changed from last week's TeslaCrypt 2.0 post-infection traffic. This actor's malicious script was injected into a page from the compromised website. The injected script follows the same patterns seen last week [2]. Take another look at the image above, and notice another URL with the domain heriaisi.work before the Neutrino EK landing URL. We saw the same type of URL using heriaisi.work last week in injected script used by this actor. The heriaisi.work domain [6] is currently hosted on a Ukrainian IP at 93.171.205.64. It doesn't come up in the traffic. I'm not sure what purpose it serves, but that URL is another indicator pointing to this particular actor.

We're still seeing the same type of activity from our TeslaCrypt 2.0 payload this week. The only difference? Last week, the malware included a .bmp image file with decrypt instructions on the desktop. This week, the TeslaCrypt 2.0 sample didn't include any images.
Shown above: A user's view of the infected host.
Shown above: TeslaCrypt 2.0's decrypt instructions, ripped off from CryptoWall 3.0, still stating "CryptoWall" in the text.

We saw Neutrino EK on 46.108.156.181. As always, Neutrino uses non-standard ports for its HTTP traffic. This type of traffic is easily blocked on an organization's firewall, but most home users won't have that protection.
Click on the above image for a full-size view.

Using tcpreplay on the pcap in Security Onion, we find alerts for Neutrino EK and AlphaCrypt. This setup used Suricata with the EmergingThreats (ET) and ET Pro rule sets. The TeslaCrypt 2.0 traffic triggered ET alerts for AlphaCrypt.
Click on the above image for a full-size view.
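
If you want to poke at the captures yourself, here is a rough Python sketch (assuming the scapy package is installed; the pcap filename is a placeholder for whichever capture listed below you download) that pulls out the traffic to the two EK IP addresses noted in this diary:

  # Rough sketch: list flows to the EK hosts seen in this diary.
  # Assumes scapy is installed; the pcap filename below is a placeholder.
  from scapy.all import rdpcap, IP, TCP

  EK_HOSTS = {"148.251.98.68", "46.108.156.181"}  # Angler EK and Neutrino EK IPs from above

  for pkt in rdpcap("infection-traffic.pcap"):
      if IP in pkt and TCP in pkt and pkt[IP].dst in EK_HOSTS:
          # Neutrino's use of non-standard HTTP ports shows up in the destination port.
          print(pkt[IP].src, "->", pkt[IP].dst, "dport", pkt[TCP].dport)

The same pcaps replay cleanly through the Security Onion/Suricata setup mentioned above if you prefer to work from the IDS alerts instead.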

Final words

We can't say how long this actor will stick with Neutrino EK. As I mentioned last time, the situation can quickly change. And despite this actor's use of Neutrino EK, we're still seeing other actors use Angler EK. As always, we'll continue to keep an eye on the cyber landscape. And as time permits, we'll let you know of any further changes we find from the various threat actors.

Traffic and malware for this diary are listed below:

  • A pcap file with the infection traffic for Angler EK from Tuesday 2015-09-01 is available here. (2.08 MB)
  • A pcap file with the infection traffic for Neutrino EK from Tuesday 2015-09-01 is available here. (594 KB)
  • A zip archive of the malware and other artifacts is available here. (692 KB)

The zip archive is password-protected with the standard password. If you don't know it, email admin@malware-traffic-analysis.net and ask.


Brad Duncan
Security Researcher at Rackspace
Blog: www.malware-traffic-analysis.net – Twitter: @malware_traffic

References:

[1] https://isc.sans.edu/forums/diary/Actor+using+Angler+exploit+kit+switched+to+Neutrino/20059/
[2] https://isc.sans.edu/forums/diary/Actor+that+tried+Neutrino+exploit+kit+now+back+to+Angler/20075/
[3] http://blog.trendmicro.com/trendlabs-security-intelligence/bedep-malware-tied-to-adobe-zero-days/
[4] https://threatpost.com/angler-exploit-kit-bedep-malware-inflating-video-views/112611
[5] http://www.microsoft.com/security/portal/threat/encyclopedia/entry.aspx?Name=Win32/Bedep#tab=2
[6] http://whois.domaintools.com/heriaisi.work

(c) SANS Internet Storm Center. https://isc.sans.edu Creative Commons Attribution-Noncommercial 3.0 United States License.

Backblaze Blog | The Life of a Cloud Backup Company: A Behind-the-Scenes Look at Designing the Next Storage Pod

This post was syndicated from: Backblaze Blog | The Life of a Cloud Backup Company and was written by: Andy Klein. Original post: at Backblaze Blog | The Life of a Cloud Backup Company

Backblaze Labs Storage Pod Design
Tim holds up a drive beam and shows the design to Matt who is attending the meeting courtesy of a telepresence robot. They are discussing the results of a prototype Storage Pod developed using Agile development methodology applied to hardware design. The Storage Pod they are discussing was the result of the fifth sprint (iteration) of prototyping. Based on the evaluation by the Scrum (project) team, the prototype pod is either sent to the datacenter for more testing and hopefully deployment, or it is decided that more work is required and the pod is salvaged for usable parts. Learning from failure, as well as from success, is a key part of the Agile methodology.

Prototyping Storage Pod Design

Backblaze Labs uses the Scrum framework within the Agile methodology to manage the Storage Pod prototyping process. To start, we form a Scrum team to create a product backlog containing the tasks we want to accomplish with the project: redesign drive grids, use tool-less lids, etc. We then divide the tasks into four-week sprints, each with the goal of designing, manufacturing, and assembling one to five production-worthy Storage Pods.

Let’s take a look at some of the tasks on the product backlog that the Scrum team is working through as they develop the next generation Storage Pod.

Drive Grids Become Drive Beams

Each Storage Pod holds 45 hard drives. Previously we used drive grids to hold the drives in place so that they connect to the backplanes. Here’s the process of how drive grids became drive brackets that became drive beams.

Current Storage Pod 4.5 Design: Each pod has an upper and lower drive grid. Hard drives are placed in each “bay” of the grid and then a drive cover presses lightly on each row of 15 hard drives to secure them in place.
Storage Pod Drive Grids
Storage Pod Scrum – Sprint 1 Design: This drive bracket design allowed drives to be easily installed and removed, but didn’t seem to hold the drives well. A good first effort, but a better design was needed.
Drive Guides Sprint 2
Storage Pod Scrum – Sprint 3 Design: Sprint 3 introduced a new drive beam design. On the plus side, the drives were very secure. On the minus side, the beams sat too high in the chassis, so it was nearly impossible to remove a drive with your fingers.
Drive Beams Sprint 3
Storage Pod Scrum – Sprint 5 Design: The height of the drive beams was shortened in Sprint 5, making them low enough so that the hard drives could be removed by using your fingers.
Drive Beams Sprint 5
Operations Impacts the Design
Having Operations personnel on the Scrum team helped get the design of the drive beams right. For the first two sprints, the hard drives installed in the prototypes for testing purposes were small capacity drives. These drives were ¾ inch wide. Once installed, the gap between the drives provided enough space to use your fingers to remove the drives. Matt from Operations suggested that we use large capacity hard drives, like we do in production, to test the prototypes. These drives are one inch wide. When we did this in Sprint 3, we discovered there was no longer enough room between the drives for our fingers. This led to lowering the height of the drive beams in Sprint 5.

Drive Guides

In conjunction with the introduction of the new drive beam design from Sprint 3, we needed a way to keep the drives aligned and immobile within the beams. Drive guides were the proposed solution. These guides are screwed or inserted into the four mounting holes on each drive. The drive then slides into cutouts on the drive beam securing the drive in place. There were three different drive guide designs:

Sprint 2 – The first design for drive guides was to use screws in the four mounting holes on each drive. The screw head was the right size to slide into the drive beam. Inserting 180 screws into the 45 drives (45 x 4) during assembly was “time expensive” so other solutions were examined.
Sprint 3 – The second design for drive guides was plastic “buttons” that fit into each of the four drive-mounting screw holes. This was a better solution, but when the height of the drive beams was lowered in Sprint 4, the top two drive guide buttons were now above the drive beams with nothing to attach to.
Drive Guide Buttons Sprint 3
In Sprint 5, full-length drive guides replaced the buttons, with one guide attached to each side of the drive. The full-length guides allowed the drive to be firmly seated even though the drive beams were made shorter. They had the added benefit of taking much less time to install on the assembly line.
Drive Rails
Time Versus Money: 3D Printing
One lesson learned as we proceeded through the sprints was that we could pay extra to get the parts needed to complete the sprint on time. In other words, we could trade money for time. One example is the drive guides. About halfway through Sprint 5 we created the design for the new full-length drive guides. We couldn’t procure any in time to finish the sprint, so we made them. In fact, we printed them using a 3D printer. At $3 each to print ($270 for all 45 drives), the cost is too high for production, but for prototyping we spent the money. Later, a plastics manufacturer will make these for $0.30 each in a larger quantity – trading time to save money during production.

Drive Lids

Drive lids were introduced in Storage Pod 3.0 to place a small amount of pressure on the top of the hard drives, minimizing vibrations as they sat in the drive bay. The design worked well, but the locking mechanism was clumsy and the materials and production costs were a bit high.

Sprint 2 – We wanted to make it easier to remove the drive lids yet continue to apply the lid pressure evenly across the drives. This design looked nice, but required 6 screws to be removed in order to access a row of drives. In addition, the screws could cross-thread and leave metal shards in the chassis. A better solution was needed.
Screw Down Drive Lids
Sprint 4 – The candidate design was tool-less; there were no screws. This drive lid flexed slightly when pressed so as to fit into cutouts on the drive beams. The Scrum team decided to test multiple designs and metals to see if any performed better and have the results at the next sprint.
Drive Lids Sprint 4
Sprint 5 – The team reviewed the different drive lids and selected three different lids to move forward. Nine copies of each drive lid were made, and three Storage Pods were given to operations for testing to see how each of the three drive lids performed.
Drive Lids Sprint 5

Backplane Tray

In Storage Pod 4.5, 3.0, and prior versions, the backplanes were screwed to standoffs attached to the chassis body itself. Assembly personnel on the Scrum team asked if backplanes and all their cables, etc. could be simplified to make assembly easier.

Sprint 5 – The proposed solution was to create a backplane tray on which all the wiring could be installed and then the backplanes attached. The resulting backplane module could be pre-built and tested as a unit and then installed in the pod in minutes during the assembly process.
Backblaze Tray Sprint 5
Sprint 7 – The design of the backplane module is still being finalized. For example, variations on the location of wire tie downs and the wiring routes are being designed and tested.
Backblaze Tray Sprint 7
Metal Benders Matter
One thing that may not be obvious is that the actual Storage Pod chassis, the metal portion, is fabricated from scratch each sprint. Finding a shop that can meet the one-week turnaround time for metal manufacturing required by the sprint timeline is time-consuming, but well worth it. In addition, even though these shops are not directly part of the Scrum team, they can provide input into the process. For example, one of the shops told us that if we made a small design change to the drive beams they could fabricate them out of a single piece of metal and lower the unit production cost by up to half.

The Scrum team

Our Storage Pod Scrum team starts with a Scrum Master (Ariel/Backblaze), a designer (Petr/Evolve), and a Product Owner (Tim/Backblaze). Also included on the team are representatives from assembly (Edgar/Evolve and Greg/Evolve), operations (Matt/Backblaze), procurement (Amanda/Evolve), and an executive champion (Rich/Evolve). At the end of each four-week sprint, each person on the team helps evaluate the prototype created during that sprint. The idea is simple: identify and attack issues during the design phase. A good example of this thinking is the way we are changing the Storage Pod lids.

Current Storage Pods require the removal of twelve screws to access the drive bay and another eight screws to access the CPU bay. The screws are small and easy to lose, and they take time to remove and install. Operations suggested tool-less lids be part of the Scrum product backlog and they were introduced in Sprint 3. A couple of iterations (Sprints) later, we have CPU bay and drive bay lids that slide and latch into place and are secured with just two thumb screws each that stay attached to the lid.

Tool-less Lids Sprint 3

Going From Prototype to Production

One of the supposed drawbacks of using Scrum is that it is hard to know when you are done. Our goal for each sprint is to produce a production-worthy Storage Pod. In reality, this will not be the case, since each time we introduce a major change to a component (drive beams, for example) things break. This is just like doing software development, where new modules often destabilize the overall system until the bugs are worked out. The key is for the Scrum team to capture any feedback and categorize it as: 1) must fix, 2) new idea to save for later, or 3) never mind.

Here’s the high-level process the Scrum team uses to move the Storage Pod design from prototype to production.

  1. Manufacture one to five Storage Pod chassis per sprint until the original product backlog tasks are addressed and, based on input from manufacturing, assembly and operations, the group is confident in the new design.
  2. The Scrum team is responsible for the following deliverables:
    • Final product specifications
    • Bill of materials
    • Final design files
    • Final draft of the work instructions (Build Book)
  3. Negotiate production pricing to manufacture the chassis. This will be done based on a minimum six-month production run.
  4. Negotiate production pricing to acquire the components used to assemble the Storage Pod: power supplies, motherboards, hard drives, etc. This will be done based on a minimum six-month production run.

Once all of these tasks are accomplished the design can be placed into production.

The Storage Pod Scrum is intended to continue even after we have placed Storage Pods into production. We start with a product backlog, build and test a number of prototypes and then go into production with a feature complete, tested unit. At that point, any new ideas generated along the way would then be added to the product backlog and the sprints for the next Storage Pod production unit would begin.

Scrum Process Diagram

Will the ongoing Storage Pod Scrum last forever? We started the Scrum with the intention of continuing as long as one or more of the following are being accomplished:

  • We are improving product reliability
  • We are reducing the manufacturing cost
  • We are reducing the assembly cost
  • We are reducing the maintenance cost
  • We are reducing the operating cost

The Next Storage Pod – Not Yet

Where are we in the process of creating the next Storage Pod version to open source? We’re not done yet. We have a handful of one-off pods in testing right now and we’ve worked through most of the major features in the product backlog, so we’re close.

In the meantime, if you need a Storage Pod or you’re just curious, you can check out https://www.backblaze.com/storage-pod.html. There you can learn all about Storage Pods, including their history, the Storage Pod ecosystem, how to build or buy your very own Storage Pod and much more.

The post A Behind-the-Scenes Look at Designing the Next Storage Pod appeared first on Backblaze Blog | The Life of a Cloud Backup Company.

TorrentFreak: US Govt. Denies Responsibility for Megaupload’s Servers

This post was syndicated from: TorrentFreak and was written by: Ernesto. Original post: at TorrentFreak

In a few months’ time it will be four years since Megaupload’s servers were raided by U.S. authorities. Since then, virtually no progress has been made in the criminal case.

Kim Dotcom and his Megaupload colleagues are still awaiting their extradition hearing in New Zealand and have yet to formally appear in a U.S. court.

Meanwhile, more than 1,000 Megaupload servers from Carpathia Hosting remain in storage in Virginia, some of which contain crucial evidence as well as valuable files from users. The question is, for how long.

Last month QTS, the company that now owns the servers after acquiring Carpathia, asked the court if it can get rid of the data which is costing them thousands of dollars per month in storage fees.

This prompted a response from a former user who wants to preserve his data, as well as Megaupload, who don’t want any of the evidence to be destroyed.

Megaupload’s legal team suggested that the U.S. Government should buy the servers and take care of the hosting costs. However, in a new filing (pdf) just submitted to the District Court the authorities deny all responsibility.

United States Attorney Dana Boente explains that the Government has already backed up the data it needs and no longer has a claim on the servers.

“…the government has already completed its acquisition of data from the Carpathia Servers authorized by the warrant, which the defendants will be entitled to during discovery,” Boente writes.

“As such, there is no basis for the Court to order the government to assume possession of the Carpathia Servers or reimburse Carpathia for ‘allocated costs’ related to their continued maintenance.”

The Government says it handed over its claim on the servers in early 2012 after the search warrant was executed, and the hosting company was informed at the time. This means that the U.S. cannot and will not determine the fate of the stored servers.

The authorities say they are willing to allow Megaupload and the other defendants to look through the data that was copied, but only after they are arraigned.

In any case, the U.S. denies any responsibility for the Megaupload servers and asks the court to keep it this way.

“…the United States continues to request that the Court deny any effort to impose unprecedented financial or supervisory obligations on the United States related to the Carpathia Servers,” the U.S. Attorney concludes.

Previously the U.S. and MPAA blocked Megaupload’s plans to buy the servers, which is one of the main reasons that there is still no solution after all those years.

The MPAA also renewed its previous position last week (pdf). The Hollywood group says it doesn’t mind if users are reunited with their files as long as Megaupload doesn’t get hold of them.

“The MPAA members’ principal concern is assuring that adequate steps are taken [to] prevent the MPAA members’ content on the Mega Servers in Carpathia’s possession from falling back into the hands of Megaupload or otherwise entering the stream of commerce,” they write.

The above means that none of the parties is willing to move forward. The servers are still trapped in between the various parties and it appears that only District Court Judge Liam O’Grady can change this.

It appears to be a choice between saving the 25 Petabytes of data or wiping all servers clean.

Source: TorrentFreak, for the latest info on copyright, file-sharing, torrent sites and ANONYMOUS VPN services.

LWN.net: Microsoft, Google, Amazon, others, aim for royalty-free video codecs (Ars Technica)

This post was syndicated from: LWN.net and was written by: ris. Original post: at LWN.net

Ars Technica reports that Microsoft, Google, Mozilla, Cisco, Intel, Netflix, and Amazon have launched a new consortium, the Alliance for Open Media. “The Alliance for Open Media would put an end to this problem [of patent licenses and royalties]. The group’s first aim is to produce a video codec that’s a meaningful improvement on HEVC. Many of the members already have their own work on next-generation codecs; Cisco has Thor, Mozilla has been working on Daala, and Google on VP9 and VP10. Daala and Thor are both also under consideration by the IETF’s netvc working group, which is similarly trying to assemble a royalty-free video codec.”

Krebs on Security: Like Kaspersky, Russian Antivirus Firm Dr.Web Tested Rivals

This post was syndicated from: Krebs on Security and was written by: BrianKrebs. Original post: at Krebs on Security

A recent Reuters story accusing Russian security firm Kaspersky Lab of faking malware to harm rivals prompted denials from the company’s eponymous chief executive — Eugene Kaspersky — who called the story “complete BS” and noted that his firm was a victim of such activity.  But according to interviews with the CEO of Dr.Web — Kaspersky’s main competitor in Russia — both companies experimented with ways to expose antivirus vendors who blindly accepted malware intelligence shared by rival firms.

The Reuters piece cited anonymous, former Kaspersky employees who said the company assigned staff to reverse-engineer competitors’ virus detection software to figure out how to fool those products into flagging good files as malicious. Such errors, known in the industry as “false positives,” can be quite costly, disruptive and embarrassing for antivirus vendors and their customers.

Reuters cited an experiment that Kaspersky first publicized in 2010, in which a German computer magazine created ten harmless files and told antivirus scanning service Virustotal.com that Kaspersky detected them as malicious (Virustotal aggregates data on suspicious files and shares them with security companies). The story said the campaign targeted antivirus products sold or given away by AVG, Avast and Microsoft.

“Within a week and a half, all 10 files were declared dangerous by as many as 14 security companies that had blindly followed Kaspersky’s lead, according to a media presentation given by senior Kaspersky analyst Magnus Kalkuhl in Moscow in January 2010,” wrote Reuters’ Joe Menn. “When Kaspersky’s complaints did not lead to significant change, the former employees said, it stepped up the sabotage.”

Eugene Kaspersky posted a lengthy denial of the story on his personal blog, calling the story a “conflation of a number of facts with a generous amount of pure fiction.”  But according to Dr.Web CEO Boris Sharov, Kaspersky was not alone in probing which antivirus firms were merely aping the technology of competitors instead of developing their own.

Dr.Web CEO Boris Sharov.

In an interview with KrebsOnSecurity, Sharov said Dr.Web conducted similar analyses and reached similar conclusions, although he said the company never mislabeled samples submitted to testing labs.

“We did the same kind of thing,” Sharov said. “We went to the [antivirus] testing laboratories and said, ‘We are sending you clean files, but a little bit modified. Could you please check what your system says about that?’”

Sharov said the testing lab came back very quickly with an answer: Seven antivirus products detected the clean files as malicious.

“At this point, we were very confused, because our explanation was very clear: ‘We are sending you clean files. A little bit modified, but clean, harmless files,’” Sharov recalled of an experiment the company said it conducted over three years ago. “We then observed the evolution of these two files, and a week later, half of the antivirus products were flagging them as bad. But we never flagged these ourselves as bad.”

Sharov said the experiments by both Dr.Web and Kaspersky — although conducted differently and independently — were attempts to expose the reality that many antivirus products are simply following the leaders.

“The security industry in that case becomes bullshit, because people believe in those products and use them in their corporate environments without understanding that those products are just following others,” Sharov said. “It’s unacceptable.”

According to Sharov, a good antivirus product actually consists of two products: one that is sold to customers in a box and/or online, and the second component that customers will never see — the back-end internal infrastructure of people, machines and databases that are constantly scanning incoming suspicious files and testing the overall product for quality assurance. Such systems, he said, include exhaustive “clean file” tests, which scan incoming samples to make sure they are not simply known, good files. Programs that have never been seen before are nearly always given more scrutiny, but they also are a frequent source of false positives.
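
As a toy illustration of the “clean file” idea Sharov describes (not any vendor's actual pipeline), a back-end can hash each incoming sample and skip anything already known to be good. In the Python sketch below, the whitelist entry is a placeholder:

  # Toy "clean file" check: skip samples whose hashes match a known-good set.
  # The whitelist entry below is a placeholder, not a real hash.
  import hashlib

  KNOWN_GOOD_SHA256 = {
      "0000000000000000000000000000000000000000000000000000000000000000",  # placeholder
  }

  def is_known_good(path: str) -> bool:
      with open(path, "rb") as f:
          digest = hashlib.sha256(f.read()).hexdigest()
      return digest in KNOWN_GOOD_SHA256

Anything that fails this kind of lookup gets the deeper scrutiny Sharov mentions, which is also where most false positives come from.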

“We have sometimes false positives because we are unable to gather all the clean files in the world,” Sharov said. “We know that we can get some part of them, but pretty sure we never get 100 percent. Anyway, this second part of the [antivirus product] should be much more powerful, to make sure what you release to public is not harmful or dangerous.”

Sharov said some antivirus firms (he declined to name which) have traditionally not invested in all of this technology and manpower, but have nevertheless gained top market share.

“For me it’s not clear that [Kaspersky Lab] would have deliberately attacked other antivirus firm, because you can’t attack a company in this way if they don’t have the infrastructure behind it,” Sharov said.

“If you carry out your own analysis of each file you will never be fooled like this,” Sharov said of the testing Dr.Web and Kaspersky conducted. “Some products prefer just to look at what others are doing, and they are quite successful in the market, much more successful than we are. We are not mad about it, but when you think how much harm it could bring to customers, it’s quite bad really.”

Sharov said he questions the timing of the anonymous sources who contributed to the Reuters report, which comes amid increasingly rocky relations between the United States and Russia. Indeed, Reuters reported today the United States is now considering economic sanctions against both Russian and Chinese individuals for cyber attacks against U.S. commercial targets.

Missing from the Reuters piece that started this hubbub is the back story to what Dr.Web and Kaspersky both say was the impetus for their experiments: a long-running debate in the antivirus industry over the accuracy, methodology and real-world relevance of staged antivirus comparison tests run by third-party firms like AV-Test.org and Av-Comparatives.org.

Such tests often show many products block 99 percent of all known threats, but critics of this kind of testing say it doesn’t measure real-world attack, and in any case doesn’t reflect the reality that far too much malware is getting through antivirus defenses these days. For an example of this controversy, check out my piece from 2010, Anti-Virus Is a Poor Substitute for Common Sense.

How does all this affect the end user? My takeaway from that 2010 story hasn’t changed one bit: If you’re depending on an anti-virus product to save you from an ill-advised decision — such as opening an attachment in an e-mail you weren’t expecting, installing random video players from third-party sites, or downloading executable files from peer-to-peer file sharing networks — you’re playing a dangerous game of Russian Roulette with your computer.

Antivirus remains a useful — if somewhat antiquated and ineffective — approach to security.  Security is all about layers, and not depending on any one technology or approach to detect or save you from the latest threats. The most important layer in that security defense? You! Most threats succeed because they take advantage of human weaknesses (laziness, apathy, ignorance, etc.), and less because of their sophistication. So, take a few minutes to browse Krebs’s 3 Rules for Online Safety, and my Tools for a Safer PC primer.

Further reading: Antivirus is Dead: Long Live Antivirus!

LWN.net: Tuesday’s security advisories

This post was syndicated from: LWN.net and was written by: ris. Original post: at LWN.net

Fedora has updated qemu (F21: multiple vulnerabilities).

Oracle has updated gdk-pixbuf2 (OL7; OL6: code execution), jakarta-taglibs-standard (OL7; OL6: code execution), and nss-softokn (OL7; OL6: signature forgery).

Red Hat has updated nss-softokn (RHEL6,7: signature forgery) and pcs (RHEL6,7: privilege escalation).

Ubuntu has updated expat (15.04, 14.04, 12.04: denial of service) and gnutls28 (15.04: two vulnerabilities).

LWN.net: OpenSSL Security: A Year in Review

This post was syndicated from: LWN.net and was written by: corbet. Original post: at LWN.net

The OpenSSL project looks at its security record for the last year. “The acceptable timeline for disclosure is a hot topic in the community: we meet CERT’s 45-day disclosure deadline more often than not, and we’ve never blown Project Zero’s 90-day baseline. Most importantly, we met the goal we set ourselves and released fixes for all HIGH severity issues in well under a month. We also landed mitigation for two high-profile protocol bugs, POODLE and Logjam. Those disclosure deadlines weren’t under our control but our response was prepared by the day the reports went public.”

Raspberry Pi: Fish tank temperature probe: an ideal beginner’s project

This post was syndicated from: Raspberry Pi and was written by: Clive Beale. Original post: at Raspberry Pi

Determined to redress the moggie-doggie bias of the internet, Lauren Orsini decided to use a Raspberry Pi and a waterproof temperature sensor to monitor her fish tank.


It’s not a recent project but it deserves a place here because it’s such a brilliant introduction to physical computing on the Raspberry Pi: one sensor, one purpose and a few lines of “English with a funny syntax” (aka Python). It’s a great tutorial too—Lauren writes clearly and shares her beginner’s point of view, documenting things that more experienced people might take for granted. The setup is based on a tutorial from Adafruit and although Lauren hadn’t done any “hardware hacking” before, she says that the hardest part was “taping the wires inside the temperature sensor to the wires that fit inside the breadboard.”
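
For anyone who wants to follow along, here is a minimal Python sketch of the usual approach from the Adafruit-style tutorials: once the w1-gpio and w1-therm kernel modules are loaded, the DS18B20 waterproof sensor shows up under the 1-Wire sysfs tree. The device path glob and the temperature threshold are assumptions, not values from Lauren's build:

  # Minimal sketch: read a DS18B20 1-Wire temperature sensor on a Raspberry Pi.
  # Assumes the w1-gpio and w1-therm kernel modules are loaded; device ID varies.
  import glob
  import time

  def read_temp_c():
      # DS18B20 devices appear as /sys/bus/w1/devices/28-*/w1_slave
      device_file = glob.glob("/sys/bus/w1/devices/28-*/w1_slave")[0]
      with open(device_file) as f:
          lines = f.readlines()
      if not lines[0].strip().endswith("YES"):   # CRC check failed, try again later
          return None
      raw = lines[1].split("t=")[-1]             # e.g. "t=23562" means 23.562 C
      return int(raw) / 1000.0

  TOO_HOT_C = 28.0  # assumed threshold for a tropical tank

  while True:
      temp = read_temp_c()
      if temp is not None:
          print("Tank temperature: %.1f C" % temp)
          if temp > TOO_HOT_C:
              print("Warning: tank is too hot!")  # hook an email or SMS alert in here
      time.sleep(60)

The threshold check at the end is where Lauren's "text me when the tank gets too hot" extension would plug in.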


So it’s a real beginner’s project but one that can be expanded as you learn. Lauren, for instance, extended the project to turn it into a true Internet of Things device that texts her when the fish tank gets too hot. All in all it’s a great way to slowly build your Raspberry Pi computing skills.

It’s also pocket money cheap. In fact if you already have the CamJam EduKit #2 then you already have the kit needed for this project. And of course the sensor doesn’t have to be in a fish tank. Monitor the temperature of your bathwater; your cup of tea; the fridge; your dad’s armpit while he dozes in front of the TV. If you’re looking for something to do with your Pi on the last day of your summer holiday then this comes highly recommended.


 

Bonus back to school question #1: If ‘dogs’ = 5; ‘cats’ = 2; and ‘cheese’ = 1, what is the value of ‘fish’? Answer tomorrow…

The post Fish tank temperature probe: an ideal beginner’s project appeared first on Raspberry Pi.

Schneier on Security: What Can you Learn from Metadata?

This post was syndicated from: Schneier on Security and was written by: schneier. Original post: at Schneier on Security

An Australian reporter for the ABC, Will Ockenden, published a bunch of his metadata and asked people to derive various elements of his life. They did pretty well, even though they were amateurs, which should give you some idea of what professionals can do.

SANS Internet Storm Center, InfoCON: green: How to hack, (Tue, Sep 1st)

This post was syndicated from: SANS Internet Storm Center, InfoCON: green and was written by: SANS Internet Storm Center, InfoCON: green. Original post: at SANS Internet Storm Center, InfoCON: green

[Screenshot: search autocomplete suggestions for "how to hack"]
Agreed, this information is not overly useful. These hacks are basically on the opposite end of the threat scale from the over-hyped Advanced Persistent Threat (APT). Let's call it the Basic Sporadic Annoyance (BSA), just to come up with a new acronym :).

The BSAs still tell us, though, what average wannabe hackers seem to be interested in breaking into, namely: websites, online games, wifi and phones. Cars, pacemakers, fridges and power plants are not on the list, suggesting that these targets are apparently not yet popular enough.

Being fully aware of the filter bubble (https://en.wikipedia.org/wiki/Filter_bubble), we had several people try the same search, and they largely got the same result. It looks like Facebook really IS currently the main wannabe hacker target. But Facebook doesn't need to worry all that much, because if you just type "How to h", the suggestions reveal that other problems are even more prominent than hacking Facebook.

If your results (of the "how to hack" query, not the latter one) differ significantly, please share in the comments below.

Updated to add: Thanks, we have enough samples now :)

(c) SANS Internet Storm Center. https://isc.sans.edu Creative Commons Attribution-Noncommercial 3.0 United States License.

TorrentFreak: Movie Studios and Record Labels Target Pirate Bay in New Lawsuit

This post was syndicated from: TorrentFreak and was written by: Andy. Original post: at TorrentFreak

In 2009, the IFPI and several local movie studios demanded that Norwegian ISP Telenor should block The Pirate Bay. The ISP refused and legal action commenced.

A subsequent ruling determined that there was no legal basis for site blocking and in 2010 a rightsholder appeal also failed. If sites were to be blocked, a change in the law was required.

In May 2011 the Ministry of Culture announced that it had put forward proposals for amendments to the Copyright Act, to include web blocking, and on July 1, 2013 the new law came into effect.

After more than two years of threats, local and international copyright holders have now made good on their promises to use the new legislation to stamp down on piracy.

In a lawsuit filed at the Oslo District Court, Disney, Warner Bros. and Sony plus local producers and representatives from the recording industry are teaming up to sue eleven local ISPs. Also targeted in the action are the alleged operators of eight ‘pirate’ sites.

Although the sites are yet to be publicly revealed, The Pirate Bay is among them and site co-founder Fredrik Neij is named as a party in the case.

According to Dagens Næringsliv, studios and labels filed an initial complaint with ISPs back in April via anti-piracy outfit Rights Alliance. It was sent to the country’s largest ISP Telenor plus others including Get, NextGenTel and Altibox.

The rightsholders’ demands are familiar. All the main local ISPs must block The Pirate Bay and related sites so that subscribers can no longer access the domains directly.

“We understand licensees’ struggle for their rights. For us it is important that the court must take these decisions, and that we do not assume a censorship role,” says Telenor communications manager Tormod Sandstø.

Also of interest is how the legal process is being handled. The Oslo District Court is dealing with the case in writing so the whole process is completely closed to the public. After processing the case during the summer, early estimations suggest that the court will have made its decision within the next 10 days.

The news follows several key Norwegian anti-piracy developments in 2015. In March, an investigation by Rights Alliance culminated in a police raid against local pirate site Norskfilm.

In July, Rights Alliance placed the blame for a piracy explosion firmly on the shoulders of Popcorn Time, with the group announcing last week that up to 75,000 users of the application could now be contacted by mail. The message they will receive remains unclear but comments from Rights Alliance during the past few days have leaned away from lawsuits.

Interestingly, Popcorn Time related sites are not among the batch of domains currently under consideration by the Oslo District Court as the service was not considered a priority when the original Rights Alliance complaint was being put together. Should the current blocking attempt prove successful, expect Popcorn Time domains to appear in an upcoming lawsuit.

Source: TorrentFreak, for the latest info on copyright, file-sharing, torrent sites and ANONYMOUS VPN services.

Велоеволюция: Bike Count 2015

This post was syndicated from: Велоеволюция and was written by: Radost. Original post: at Велоеволюция

In May and July 2015, the first stages of a study of bicycle traffic flows in Sofia, conducted by the Велоеволюция association, were completed.
The most interesting findings from the study can be found here.

The uneven distribution of cyclists across the city's different routes leads to interesting conclusions about the effect of the existing cycling infrastructure and the need for broader measures to encourage cycling as transport. The data clearly show:

  • the most significant cycling routes
  • trends that the cycling infrastructure will need to accommodate in the coming years
  • the potential of specific bike lanes to attract cyclists.

The bike counts are part of the Велоеволюция project "Bike Data for Sofia" and are carried out with the financial support of the "Europe" 2015 Programme of Sofia Municipality. They are a result of the association's efforts to commit the municipal authorities to concrete steps for encouraging bicycle transport.

lcamtuf's blog: Understanding the process of finding serious vulns

This post was syndicated from: lcamtuf's blog and was written by: Michal Zalewski. Original post: at lcamtuf's blog

Our industry tends to glamorize vulnerability research, with a growing number of bug reports accompanied by flashy conference presentations, media kits, and exclusive interviews. But for all that grandeur, the public understands relatively little about the effort that goes into identifying and troubleshooting the hundreds of serious vulnerabilities that crop up every year in the software we all depend on. It certainly does not help that many of the commercial security testing products are promoted with truly bombastic claims – and that some of the most vocal security researchers enjoy the image of savant hackers, seldom talking about the processes and toolkits they depend on to get stuff done.

I figured it may make sense to change this. Several weeks ago, I started trawling through the list of public CVE assignments, and then manually compiling a list of genuine, high-impact flaws in commonly used software. I tried to follow three basic principles:

  • For pragmatic reasons, I focused on problems where the nature of the vulnerability and the identity of the researcher is easy to ascertain. For this reason, I ended up rejecting entries such as CVE-2015-2132 or CVE-2015-3799.

  • I focused on widespread software – e.g., browsers, operating systems, network services – skipping many categories of niche enterprise products, WordPress add-ons, and so on. Good examples of rejected entries in this category include CVE-2015-5406 and CVE-2015-5681.

  • I skipped issues that appeared to be low impact, or where the credibility of the report seemed unclear. One example of a rejected submission is CVE-2015-4173.

To ensure that the data isn’t skewed toward more vulnerable software, I tried to focus on research efforts, rather than on individual bugs; where a single reporter was credited for multiple closely related vulnerabilities in the same product within a narrow timeframe, I would use only one sample from the entire series of bugs.

For the qualifying CVE entries, I started sending out anonymous surveys to the researchers who reported the underlying issues. The surveys open with a discussion of the basic method employed to find the bug:

  How did you find this issue?

  ( ) Manual bug hunting
  ( ) Automated vulnerability discovery
  ( ) Lucky accident while doing unrelated work

If “manual bug hunting” is selected, several additional options appear:

  ( ) I was reviewing the source code to check for flaws.
  ( ) I studied the binary using a disassembler, decompiler, or a tracing tool.
  ( ) I was doing black-box experimentation to see how the program behaves.
  ( ) I simply noticed that this bug is being exploited in the wild.
  ( ) I did something else: ____________________

Selecting “automated discovery” results in a different set of choices:

  ( ) I used a fuzzer.
  ( ) I ran a simple vulnerability scanner (e.g., Nessus).
  ( ) I used a source code analyzer (static analysis).
  ( ) I relied on symbolic or concolic execution.
  ( ) I did something else: ____________________

Researchers who relied on automated tools are also asked about the origins of the tool and the computing resources used:

  Name of tool used (optional): ____________________

  Where does this tool come from?

  ( ) I created it just for this project.
  ( ) It's an existing but non-public utility.
  ( ) It's a publicly available framework.

  At what scale did you perform the experiments?

  ( ) I used 16 CPU cores or less.
  ( ) I employed more than 16 cores.

Regardless of the underlying method, the survey also asks every participant about the use of memory diagnostic tools:

  Did you use any additional, automatic error-catching tools - like ASAN
  or Valgrind - to investigate this issue?

  ( ) Yes. ( ) Nope!

…and about the lengths to which the reporter went to demonstrate the bug:

  How far did you go to demonstrate the impact of the issue?

  ( ) I just pointed out the problematic code or functionality.
  ( ) I submitted a basic proof-of-concept (say, a crashing test case).
  ( ) I created a fully-fledged, working exploit.

It also touches on the communications with the vendor:

  Did you coordinate the disclosure with the vendor of the affected
  software?

  ( ) Yes. ( ) No.

  How long have you waited before having the issue disclosed to the
  public?

  ( ) I disclosed right away. ( ) Less than a week. ( ) 1-4 weeks.
  ( ) 1-3 months. ( ) 4-6 months. ( ) More than 6 months.

  In the end, did the vendor address the issue as quickly as you would
  have hoped?

  ( ) Yes. ( ) Nope.

…and the channel used to disclose the bug – an area where we have seen some stark changes over the past five years:

  How did you disclose it? Select all options that apply:

  [ ] I made a blog post about the bug.
  [ ] I posted to a security mailing list (e.g., BUGTRAQ).
  [ ] I shared the finding on a web-based discussion forum.
  [ ] I announced it at a security conference.
  [ ] I shared it on Twitter or other social media.
  [ ] We made a press kit or reached out to a journalist.
  [ ] Vendor released an advisory.

The survey ends with a question about the motivation and the overall amount of effort that went into this work:

  What motivated you to look for this bug?

  ( ) It's just a hobby project.
  ( ) I received a scientific grant.
  ( ) I wanted to participate in a bounty program.
  ( ) I was doing contract work.
  ( ) It's a part of my full-time job.

  How much effort did you end up putting into this project?

  ( ) Just a couple of hours.
  ( ) Several days.
  ( ) Several weeks or more.

So far, the response rate for the survey is approximately 80%; because I only started in August, I currently don’t have enough answers to draw particularly detailed conclusions from the data set – this should change over the next couple of months. Still, I’m already seeing several well-defined if preliminary trends:

  • The use of fuzzers is ubiquitous (incidentally, of named projects, afl-fuzz leads the fray so far); the use of other automated tools, such as static analysis frameworks or concolic execution, appears to be unheard of – despite the undivided attention that such methods receive in academic settings.

  • Memory diagnostic tools, such as ASAN and Valgrind, are extremely popular – and are an untold success story of vulnerability research.

  • Most of public vulnerability research appears to be done by people who work on it full-time, employed by vendors; hobby work and bug bounties follow closely.

  • Only a small minority of serious vulnerabilities appear to be disclosed anywhere outside a vendor advisory, making it extremely dangerous to rely on press coverage (or any other casual source) for evaluating personal risk.

Of course, some security work happens out of public view; for example, some enterprises have well-established and meaningful security assurance programs that likely prevent hundreds of security bugs from ever shipping in the reviewed code. Since it is difficult to collect comprehensive and unbiased data about such programs, there is always some speculation involved when discussing the similarities and differences between this work and public security research.

Well, that’s it! Watch this space for updates – and let me know if there’s anything you’d change or add to the questionnaire.

SANS Internet Storm Center, InfoCON: green: ISC StormCast for Tuesday, September 1st 2015 http://isc.sans.edu/podcastdetail.html?id=4637, (Tue, Sep 1st)

This post was syndicated from: SANS Internet Storm Center, InfoCON: green and was written by: SANS Internet Storm Center, InfoCON: green. Original post: at SANS Internet Storm Center, InfoCON: green

(c) SANS Internet Storm Center. https://isc.sans.edu Creative Commons Attribution-Noncommercial 3.0 United States License.

Anchor Cloud Hosting: Is Your Website Ready for Click Frenzy 2015?

This post was syndicated from: Anchor Cloud Hosting and was written by: Jessica Field. Original post: at Anchor Cloud Hosting

With Click Frenzy happening again on November 17th, and online sales trends continuing to climb, it is once more predicted to break online sales records—as well as any participating websites that aren’t prepared.

Last year’s Click Frenzy event saw a 27.7% rise in online sales over the previous year, with the average customer order increasing 15.2% to $151.02. Any sudden rise in online activity, particularly when compressed into a single 24-hour period, can trigger a frenzy of technical issues for some retailers. According to IBM research, the first Click Frenzy in November 2012 saw 37% more online sales than even the Boxing Day sales. This unprecedented flurry of online transactions in such a short period of time saw many retailers crash under the pressure, losing sales and frustrating customers.

That first event revealed how easy it is for online retailers to snatch failure from the jaws of success by underestimating the ability of their website infrastructure to cope with the stampede of eager customers.

Online retailer Just Bedding decided against participating in the first Click Frenzy campaign, as the risk of making a bad impression outweighed the attraction of a huge retail opportunity. The sudden increase in traffic across a single day, and the unpredictability of the demand, would have required too much effort from the technical team to keep the site from falling over, and with no guarantee of success.

But, in 2013, Anchor’s server support gave Just Bedding the confidence to take advantage of the massive sales opportunity. Gabriel Luis, Just Bedding’s website administrator, called up Anchor to let the team know a potential flood of traffic was coming. Forewarned, Anchor was able to work with Just Bedding’s technical team to seamlessly scale the website.

“Anchor simply upped the server package to cope with the increase in traffic on this day,” says Luis. “They did create a precautionary landing page, just in case there were any issues with customers reaching the site, but it wasn’t needed.”

The website was able to withstand a huge spike in traffic on the day, eliminating much of the risk while maximising the opportunity.

If you’re one of the many retailers already signed up or still contemplating whether to join Click Frenzy this year, now is the time to talk to your hosting provider and your technical team. With a little forward planning, you can enjoy a frenzy of sales, not complaints.

The post Is Your Website Ready for Click Frenzy 2015? appeared first on Anchor Cloud Hosting.

SANS Internet Storm Center, InfoCON: green: Encryption of "data at rest" in servers, (Tue, Sep 1st)

This post was syndicated from: SANS Internet Storm Center, InfoCON: green and was written by: SANS Internet Storm Center, InfoCON: green. Original post: at SANS Internet Storm Center, InfoCON: green

Over in the SANS ISC discussion forum, a couple of readers have started a good discussion https://isc.sans.edu/forums/Encryption+at+rest+what+am+I+missing/959 about which threats we actually aim to mitigate if we follow the HIPAA/HITECH (and other) recommendations to encrypt data at rest that is stored on a server in a data center. Yes, it helps against outright theft of the physical server, but – as many recent prominent data breaches suggest – it doesn't help all that much if the attacker comes in over the network and has acquired admin privileges, or if the attack exploits a SQL injection vulnerability in a web application.

There are types of encryption (mainly field or file level) that also can help against these eventualities, but they are usually more complicated and expensive, and not often applied. If you are interested in data at rest encryption for servers, please join the mentioned discussion in the Forum.
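
To make the distinction concrete, here is a minimal sketch of field-level encryption using Python's "cryptography" package (an assumption; key management, which is the hard part, is out of scope):

  # Minimal sketch of field-level encryption with the "cryptography" package.
  # In practice the key must live outside the database (KMS/HSM or similar),
  # or the network-borne attacker described above simply reads it alongside the data.
  from cryptography.fernet import Fernet

  key = Fernet.generate_key()
  f = Fernet(key)

  ciphertext = f.encrypt(b"123-45-6789")   # value stored in the sensitive column
  plaintext = f.decrypt(ciphertext)        # only code holding the key can recover it
  print(ciphertext, plaintext)

Unlike full-disk encryption, this protects individual values even when the attacker reaches the database over the network, provided the key is kept elsewhere.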

(c) SANS Internet Storm Center. https://isc.sans.edu Creative Commons Attribution-Noncommercial 3.0 United States License.

SANS Internet Storm Center, InfoCON: green: Gift card from Marriott?, (Tue, Sep 1st)

This post was syndicated from: SANS Internet Storm Center, InfoCON: green and was written by: SANS Internet Storm Center, InfoCON: green. Original post: at SANS Internet Storm Center, InfoCON: green

Always nice when the spammers are so forthcoming as to send their latest crud directly to our SANS ISC honeypot account. The current incarnation

Subject: Re: Your complimentary 3-night stay giftcard (Expires 09
From: Marriott Gift Card marriottgiftcard@summerallstar.review

came from

Received: from summerallstar.review (50.22.145.13-static.reverse.softlayer.com [50.22.145.13])

which kinda figures; Softlayer is among the cloud computing providers whose "get a virtual server FREE for one month" is an offering that scammers can't resist. The "Marriott" email said:

Marriott Special Gift Card:
=======================================================
Expires 09/15/15
Notification: #2595319
=======================================================

ALERT: Your Marriott-Gift Card will expire 09/15/15.

Please claim your gift-card at the link below:
http://seespecial.summerallstar[dot]review

This gift-card is only good for one-person to claim
at once with participation required. Please respect the
rules of the special-giftpromo.

=======================================================
Expires 09/15/15
Notification: #2595319
=======================================================

End-GiftCard Notification

.review? How lovely! Let's use the opportunity to again *thank* ICANN for their moronic money grab, and all the shiny new useless top level domains that honest users and corporations now have to avoid and block. The lesson learned a couple of years ago, when .biz and .info came online, should have been enough to know that the new cyber real estate would primarily get occupied by crooks. But here we are. I guess ICANN and most domain name pimps don't care.

Somewhere along the way, it seems like the connection to Marriott got lost. Which is maybe all the better…

(c) SANS Internet Storm Center. https://isc.sans.edu Creative Commons Attribution-Noncommercial 3.0 United States License.

LWN.net: ownCloud Contributor Conference Announcements

This post was syndicated from: LWN.net and was written by: ris. Original post: at LWN.net

The ownCloud Contributor Conference 2015 (August 28-September 3 in Berlin, Germany) started off with some big announcements, including the publishing of the User Data Manifesto 2.0, the creation of the ownCloud Security Bug Bounty Program, and the release of the ownCloud Proxy app. “Designed for those of you who want your own private, secure “Dropbox” and don’t want the hassle of configuring routers, firewalls and DNS entries for access from anywhere, at any time, ownCloud Proxy is for you. It comes installed as an ownCloud community app in the new ownCloud community appliance, connects to relay servers in the cloud, and provides anytime, anywhere access to your files, on your PC running in your home network, quickly and easily. And, of course, you can grab it from the ownCloud app store and add it to an existing ownCloud server if you already have one running.”

TorrentFreak: Torrenting “Manny” Pirate Must Pay $30,000 in Damages

This post was syndicated from: TorrentFreak and was written by: Ernesto. Original post: at TorrentFreak

While relatively underreported, many U.S. district courts are still swamped with lawsuits against alleged film pirates.

Among the newcomers this year are the makers of the sports documentary Manny. Over the past few months “Manny Film” has filed 215 lawsuits across several districts.

Most cases are settled out of court, presumably for a few thousand dollars. However, if the alleged downloader fails to respond, the damage can be much worse.

District Court Judge Darrin Gayles recently issued a default judgment (pdf) against Micheal Chang, a Florida man who stood accused of pirating a copy of “Manny.”

Since Chang didn’t respond to the allegations the Judge agreed with the filmmakers and ordered Chang to pay $30,000 in statutory damages. In addition he must pay attorneys’ fees and costs bringing the total to $31,657.

While the damages are a heavy burden to bear for most, the filmmakers say that the defendant got off lightly. Manny Film argued that Chang was guilty of willful copyright infringement for which the damages can go up to $150,000 per work.

“Here, despite the fact of Defendant’s willful infringement, Plaintiff only seeks an award of $30,000 per work in statutory damages,” Manny Film wrote.

According to the filmmakers Chang’s Internet connection was used to pirate over 2,400 files via BitTorrent in recent years, which they say proves that he willfully pirated their movie.

“…for nearly two years, Defendant infringed over 2,400 third-party works on BitTorrent. In addition to Plaintiff’s film, Defendant downloaded scores of Hollywood films, including works such as Avatar and The Wolf of Wall Street.”

It is unlikely that the court would have issued the same damages award if Chang had defended himself. Even in cases without representation several judges have shown reluctance to issue such severe punishments.

For example, last year U.S. District Court Judge Thomas Rice ruled that $30,000 in damages per shared film is excessive, referring to the Eighth Amendment which prohibits excessive fines as well as cruel and unusual punishments.

“This Court finds an award of $30,000 for each defendant would be an excessive punishment considering the seriousness of each Defendant’s conduct and the sum of money at issue,” Judge Rice wrote.

On the other hand, it could have been even worse. The damage award pales in comparison to some other default judgments. In Illinois three men had to pay $1.5 million each for sharing seven to ten movies using BitTorrent.

Controversy aside, Manny Film believes it is certainly entitled to the $30,000. The company has requested the same amount in other cases that are still pending, arguing that the actual lost revenue is even higher.

Source: TorrentFreak, for the latest info on copyright, file-sharing, torrent sites and ANONYMOUS VPN services.

Schneier on Security: Using Samsung’s Internet-Enabled Refrigerator for Man-in-the-Middle Attacks

This post was syndicated from: Schneier on Security and was written by: schneier. Original post: at Schneier on Security

This is interesting research:

Whilst the fridge implements SSL, it FAILS to validate SSL certificates, thereby enabling man-in-the-middle attacks against most connections. This includes those made to Google’s servers to download Gmail calendar information for the on-screen display.

So, MITM the victim’s fridge from next door, or on the road outside and you can potentially steal their Google credentials.

The notable exception to the rule above is when the terminal connects to the update server — we were able to isolate the URL https://www.samsungotn.net which is the same used by TVs, etc. We generated a set of certificates with the exact same contents as those on the real website (fake server cert + fake CA signing cert) in the hope that the validation was weak but it failed.

The terminal must have a copy of the CA and is making sure that the server’s cert is signed against that one. We can’t hack this without access to the file system where we could replace the CA it is validating against. Long story short we couldn’t intercept communications between the fridge terminal and the update server.

When I think about the security implications of the Internet of things, this is one of my primary worries. As we connect things to each other, vulnerabilities on one of them affect the security of another. And because so many of the things we connect to the Internet will be poorly designed, and low cost, there will be lots of vulnerabilities in them. Expect a lot more of this kind of thing as we move forward.
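
For contrast with the fridge’s behavior, here is a minimal sketch of the certificate validation a TLS client is normally expected to perform, written against the OpenSSL 1.1.x API. The host name, function name and use of the system CA bundle are illustrative assumptions, not details taken from the research.

/* Hypothetical sketch of proper client-side certificate validation.
 * Assumes OpenSSL 1.1.x; the host name passed in is illustrative. */
#include <openssl/ssl.h>
#include <openssl/err.h>
#include <stdio.h>

int connect_verified(int sockfd, const char *hostname)
{
    SSL_CTX *ctx = SSL_CTX_new(TLS_client_method());
    if (!ctx)
        return -1;

    /* Refuse the handshake if the certificate chain does not verify. */
    SSL_CTX_set_verify(ctx, SSL_VERIFY_PEER, NULL);

    /* Trust the system CA bundle; a pinned deployment (like the fridge's
     * update channel) would instead load a single CA with
     * SSL_CTX_load_verify_locations(). */
    SSL_CTX_set_default_verify_paths(ctx);

    SSL *ssl = SSL_new(ctx);
    SSL_set_fd(ssl, sockfd);

    /* Require the certificate to actually match this host, not merely
     * be signed by some trusted CA. */
    SSL_set1_host(ssl, hostname);

    if (SSL_connect(ssl) != 1 || SSL_get_verify_result(ssl) != X509_V_OK) {
        ERR_print_errors_fp(stderr);
        SSL_free(ssl);
        SSL_CTX_free(ctx);
        return -1;   /* abort: possible man-in-the-middle */
    }

    /* ... exchange application data over ssl ... */
    SSL_free(ssl);
    SSL_CTX_free(ctx);
    return 0;
}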

Matthew Garrett: Working with the kernel keyring

This post was syndicated from: Matthew Garrett and was written by: Matthew Garrett. Original post: at Matthew Garrett

The Linux kernel keyring is effectively a mechanism to allow shoving blobs of data into the kernel and then setting access controls on them. It’s convenient for a couple of reasons: the first is that these blobs are available to the kernel itself (so it can use them for things like NFSv4 authentication or module signing keys), and the second is that once they’re locked down there’s no way for even root to modify them.

But there’s a corner case that can be somewhat confusing here, and it’s one that I managed to crash into multiple times when I was implementing some code that works with this. Keys can be “possessed” by a process, and have permissions that are granted to the possessor orthogonally to any permissions granted to the user or group that owns the key. This is important because it allows for the creation of keyrings that are only visible to specific processes – if my userspace keyring manager is using the kernel keyring as a backing store for decrypted material, I don’t want any arbitrary process running as me to be able to obtain those keys[1]. As described in keyrings(7), keyrings exist at the session, process and thread levels of granularity.
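
As a rough illustration of those levels (assuming libkeyutils and its keyutils.h header are available), the thread, process, session and user keyrings can be resolved like this:

/* Sketch: resolving the thread, process, session and user keyrings via
 * libkeyutils. Build with -lkeyutils. */
#include <keyutils.h>
#include <stdio.h>

int main(void)
{
    /* The second argument asks the kernel to create the keyring if it
     * does not exist yet (thread/process keyrings are created lazily). */
    printf("thread keyring:  %d\n", keyctl_get_keyring_ID(KEY_SPEC_THREAD_KEYRING, 1));
    printf("process keyring: %d\n", keyctl_get_keyring_ID(KEY_SPEC_PROCESS_KEYRING, 1));
    printf("session keyring: %d\n", keyctl_get_keyring_ID(KEY_SPEC_SESSION_KEYRING, 1));
    printf("user keyring:    %d\n", keyctl_get_keyring_ID(KEY_SPEC_USER_KEYRING, 1));
    return 0;
}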

This is absolutely fine in the normal case, but gets confusing when you start using sudo. sudo by default doesn’t create a new login session – when you’re working with sudo, you’re still working with key possession that’s tied to the original user. This makes sense when you consider that you often want applications you run with sudo to have access to the keys that you own, but it becomes a pain when you’re trying to work with keys that need to be accessible to a user no matter whether that user owns the login session or not.

I spent a while talking to David Howells about this and he explained the easiest way to handle this. If you do something like the following:
$ sudo keyctl add user testkey testdata @u
a new key will be created and added to UID 0’s user keyring (indicated by @u). This is possible because the keyring defaults to 0x3f3f0000 permissions, giving both the possessor and the user read/write access to the keyring. But if you then try to do something like:
$ sudo keyctl setperm 678913344 0x3f3f0000
where 678913344 is the ID of the key we created in the previous command, you’ll get permission denied. This is because the default permissions on a key are 0x3f010000, meaning that the possessor has permission to do anything to the key but the user only has permission to view its attributes. The cause of this confusion is that although we have permission to write to UID 0’s keyring (because the permissions are 0x3f3f0000), we don’t possess it – the only permissions we have for this key are the user ones, and the default state for user permissions on new keys only gives us permission to view the attributes, not change them.
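
For reference, here is a small sketch (assuming libkeyutils and its keyutils.h header are installed) showing how those two masks decompose into the named permission constants:

/* Sketch: the two permission masks discussed above, assembled from the
 * keyutils.h constants. Each byte covers possessor, user, group and
 * other in turn; 0x3f is view|read|write|search|link|setattr. */
#include <keyutils.h>
#include <stdio.h>

int main(void)
{
    key_perm_t keyring_default = KEY_POS_ALL | KEY_USR_ALL;   /* 0x3f3f0000 */
    key_perm_t key_default     = KEY_POS_ALL | KEY_USR_VIEW;  /* 0x3f010000 */

    printf("keyring default: 0x%08x\n", (unsigned)keyring_default);
    printf("new key default: 0x%08x\n", (unsigned)key_default);
    return 0;
}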

But! There’s a way around this. If we instead do:
$ sudo keyctl add user testkey testdata @s
then the key is added to the current session keyring (@s). Because the session keyring belongs to us, we possess any keys within it and so we have permission to modify the permissions further. We can then do:
$ sudo keyctl setperm 678913344 0x3f3f0000
and it works. Hurrah! Except that if we log in as root, we’ll be part of another session and won’t be able to see that key. Boo. So, after setting the permissions, we should:
$ sudo keyctl link 678913344 @u
which ties it to UID 0’s user keyring. Someone who logs in as root will then be able to see the key, as will any processes running as root via sudo. But we probably also want to remove it from the unprivileged user’s session keyring, because that’s readable/writable by the unprivileged user – they’d be able to revoke the key from underneath us!
$ sudo keyctl unlink 678913344 @s
will achieve this, and now the key is configured appropriately – UID 0 can read, modify and delete the key, other users can’t.
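
The same sequence can also be driven programmatically. Here is a rough C sketch using libkeyutils; it assumes the program is run as root (e.g. under sudo) and is built with -lkeyutils:

/* Rough sketch of the keyctl sequence above, using libkeyutils.
 * Assumes the program runs as root (e.g. under sudo). */
#include <keyutils.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
    const char *payload = "testdata";

    /* keyctl add user testkey testdata @s
     * Create the key in the session keyring, which we possess. */
    key_serial_t key = add_key("user", "testkey", payload, strlen(payload),
                               KEY_SPEC_SESSION_KEYRING);
    if (key < 0) {
        perror("add_key");
        return 1;
    }

    /* keyctl setperm <key> 0x3f3f0000
     * Allowed because we possess the key via the session keyring. */
    if (keyctl_setperm(key, KEY_POS_ALL | KEY_USR_ALL) < 0) {
        perror("keyctl_setperm");
        return 1;
    }

    /* keyctl link <key> @u
     * Make the key visible from UID 0's user keyring. */
    if (keyctl_link(key, KEY_SPEC_USER_KEYRING) < 0) {
        perror("keyctl_link");
        return 1;
    }

    /* keyctl unlink <key> @s
     * Drop it from the unprivileged user's session keyring so that user
     * can no longer revoke it out from under us. */
    if (keyctl_unlink(key, KEY_SPEC_SESSION_KEYRING) < 0) {
        perror("keyctl_unlink");
        return 1;
    }

    printf("key %d installed in UID 0's user keyring\n", key);
    return 0;
}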

This is part of our ongoing work at CoreOS to make rkt more secure. Moving the signing keys into the kernel is the first step towards rkt no longer having to trust the local writable filesystem[2]. Once keys have been enrolled the keyring can be locked down – rkt will then refuse to run any images unless they’re signed with one of these keys, and even root will be unable to alter them.

[1] (obviously it should also be impossible to ptrace() my userspace keyring manager)
[2] Part of our Secure Boot work has been the integration of dm-verity into CoreOS. Once deployed this will mean that the /usr partition is cryptographically verified by the kernel at runtime, making it impossible for anybody to modify it underneath the kernel. / remains writable in order to permit local configuration and to act as a data store, and right now rkt stores its trusted keys there.

comment count unavailable comments

LWN.net: Security updates for Monday

This post was syndicated from: LWN.net and was written by: ris. Original post: at LWN.net

Debian has updated drupal7 (multiple vulnerabilities) and iceweasel (multiple vulnerabilities).

Mageia has updated audit (MG4,5: unsafe escape-sequence handling), firefox (MG4,5: multiple vulnerabilities), and glusterfs (MG5; MG4: two vulnerabilities).

openSUSE has updated ansible (13.2: broken python parsing) and thunderbird (13.2; 13.1: multiple vulnerabilities).

Red Hat has updated gdk-pixbuf2 (RHEL6,7: code execution) and jakarta-taglibs-standard (RHEL6,7: code execution).

Scientific Linux has updated firefox (SL5,6,7: two vulnerabilities), gdk-pixbuf2 (SL6,7: code execution), and jakarta-taglibs-standard (SL6,7: code execution).

Slackware has updated firefox (multiple vulnerabilities).

SUSE has updated kvm (SLE11SP4: code execution).

TorrentFreak: Yandex Demands Takedown of ‘Illegal’ Music Downloader

This post was syndicated from: TorrentFreak and was written by: Andy. Original post: at TorrentFreak

Yandex is a Russian Internet company that runs the country’s most popular search engine, controlling more than 60% of the market.

Making use of the free music that could be found via its search results, in 2009 Yandex introduced its first music player. A year later the company launched Yandex.Music, a new service offering enhanced legal access to around 800,000 tracks from the company’s catalog.

In 2014 and after years of development, Yandex relaunched a revamped music platform with new features including a Spotify-like recommendation engine and licensing deals with Universal, EMI, Warner and Sony, among others. Today the service offers more than 20 million tracks, all available for streaming from music.yandex.ru.

While the service can be reached by using an appropriate VPN, Yandex Music is technically only available to users from Russia, Ukraine, Belarus and Kazakhstan. Additionally, the service’s licensing terms allow only streaming.

Of course, there are some who don’t appreciate being so restricted and this has led to the development of third-party applications that are designed to offer full MP3 downloads.

In addition to various browser extensions, one of the most popular is Yandex Music Downloader. Hosted on Github, the program’s aims are straightforward – to provide swift downloading of music from Yandex while organizing everything from ID3 tags to cover images and playlists.

Unfortunately for its fanbase, however, the software has now attracted the attention of Yandex’s legal team.

“I am Legal Counsel of Yandex LLC, Russian Internet-company. We have learned that your service is hosting program code ‘Yandex.Music downloader’…which allows users to download content (music tracks) from the service Yandex.Music…,” a complaint from Yandex to Github reads.

“Service Yandex.Music is the biggest music service in Russia that provides users with access to the licensed music. Music that [is] placed on the service Yandex.Music is licensed from its right holders including: Sony Music, The Orchard, Universal Music, Warner Music and other,” the counsel continues.

“Service Yandex.Music does not provide users with possibility to download content from the service. Downloading content from the service Yandex.Music is illegal. This means that program code ‘Yandex.Music downloader’…provides illegal unauthorized access to the service Yandex.Music that breaches rights of Yandex LLC, right holders of the content and also breaches GitHub Terms of Service.”

As a result, users trying to obtain the application are now greeted with the following screen.

(Screenshot: the takedown notice GitHub now displays in place of the repository)

The Yandex complaint follows a similar one earlier in the month in which it targeted another variant of the software.

While the takedowns may temporarily affect the distribution of the tools, Yandex’s efforts are unlikely to affect the unauthorized downloading of MP3s from its service. A cursory Google search reveals plenty of alternative tools which provide high-quality MP3s on tap.

Source: TorrentFreak, for the latest info on copyright, file-sharing, torrent sites and ANONYMOUS VPN services.

Beyond Bandwidth: Attracting the Eye of Millennials: The Workforce Engagement Impact for Corporate CSR Programs

This post was syndicated from: Beyond Bandwidth and was written by: Ashley Pritchard. Original post: at Beyond Bandwidth

I am a millennial in every sense of the word. Not only was I born in the appropriate time frame, between 1982 and the early 2000s, but I am a glass-half-full, rose-tinted-glasses kind of gal. I have actively volunteered since I was in high school. I can’t say for certain what sparked my interest in…

The post Attracting the Eye of Millennials: The Workforce Engagement Impact for Corporate CSR Programs appeared first on Beyond Bandwidth.