TorrentFreak: Anti-Piracy Activities Get VPNs Banned at Torrent Sites

This post was syndicated from: TorrentFreak and was written by: Andy. Original post: at TorrentFreak

For the privacy-conscious Internet user, VPNs and similar services are now considered must-have tools. In addition to providing much-needed security, VPNs also allow users to side-step geo-blocking technology, a useful ability for today’s global web-trotter.

While VPNs are often associated with file-sharing activity, it may be of interest to learn that they are also used by groups looking to crack down on the practice. Just like file-sharers, it appears that anti-piracy groups prefer to work undetected, as events during the past few days have shown.

Earlier this week while doing our usual sweep of the world’s leading torrent sites, it became evident that at least two popular portals were refusing to load. Finding no complaints that the sites were down, we were able to access them via publicly accessible proxies and as a result thought no more of it.

A day later, however, comments began to surface on Twitter that some VPN users were having problems accessing certain torrent sites. Sure enough, after we disabled our VPN the affected sites sprang into action. Shortly after, reader emails to TF revealed that other users were experiencing similar problems.

Eager to learn more, TF opened up a dialog with one of the affected sites and in return for granting complete anonymity, its operator agreed to tell us what had been happening.

“The IP range you mentioned was used for massive DMCA crawling and thus it’s been blocked,” the admin told us.

Intrigued, we asked the operator more questions. How do DMCA crawlers manifest themselves? Are they easy to spot and deal with?

“If you see 15,000 requests from the same IP address after integrity checks on the IP’s browsers for the day, you can safely assume it’s a [DMCA] bot,” the admin said.
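Purely as an illustration (the site wouldn’t reveal how its real system works), a threshold heuristic like the one the admin describes could be sketched in a few lines of Python; the request-log format here is hypothetical:

# Illustrative sketch only -- not the site's actual system.
# Count the day's requests per IP and flag any address that exceeds
# the 15,000-request threshold quoted by the admin.
from collections import Counter

DAILY_LIMIT = 15_000  # the figure quoted by the site admin

def suspected_bots(request_log):
    """request_log: iterable of (ip, passed_integrity_check) pairs."""
    counts = Counter(ip for ip, passed in request_log if passed)
    return {ip for ip, n in counts.items() if n > DAILY_LIMIT}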

From the above we now know that anti-piracy bots use commercial VPN services, but do they also access the sites by other means?

“They mostly use rented dedicated servers. But sometimes I’ve even caught them using Hola VPN,” our source adds. Interestingly, it appears that the anti-piracy activities were directed through the IP addresses of Hola users without them knowing.

Once spotted, the IP addresses used by the aggressive bots are banned. The site admin wouldn’t tell TF how his system works. However, he did disclose that sizable computing resources are deployed to deal with the issue and that the intelligence gathered proves extremely useful.

Of course, just because an IP address is banned at a torrent site it doesn’t necessarily follow that a similar anti-DMCA system is being deployed. IP addresses are often excluded after being linked to users uploading spam, fakes and malware. Additionally, users can share IP addresses, particularly in the case of VPNs. Nevertheless, the banning of DMCA notice-senders is a documented phenomenon.

Earlier this month Jonathan Bailey at Plagiarism Today revealed his frustrations when attempting to get so-called “revenge porn” removed from various sites.

“Once you file your copyright or other notice of abuse, the host, rather than remove the material in question, simply blocks you, the submitter, from accessing the site,” Bailey explains.

“This is most commonly done by blocking your IP address. This means, when you come back to check and see if the site’s content is down, it appears that the content, and maybe the entire site, is offline. However, in reality, the rest of the world can view the content, it’s just you that can’t see it,” he notes.

Perhaps unsurprisingly, Bailey advises a simple way of regaining access to a site using these methods.

“I keep subscriptions with multiple VPN providers that give access to over a hundred potential IP addresses that I can use to get around such tactics,” he reveals.

The good news for both file-sharers and anti-piracy groups alike is that IP address blocks like these don’t last forever. The site we spoke with said that blocks on the VPN range we inquired about had already been removed. Still, the cat and mouse game is likely to continue.

Source: TorrentFreak, for the latest info on copyright, file-sharing, torrent sites and ANONYMOUS VPN services.

Stable kernels 4.2.3 and 4.1.10

This post was syndicated and was written by: jake.

Greg Kroah-Hartman has released the 4.2.3 and 4.1.10 stable kernels. The fix for the deadlocks reported for 4.1.9 did not make it into 4.1.10. As usual, these stable kernels contain fixes throughout the tree.

yovko in a nutshell: The Future We Are Not Ready For

This post was syndicated from: yovko in a nutshell and was written by: Yovko Lambrev. Original post: at yovko in a nutshell

To do nothing is within the power of all men
– Samuel Johnson

This text will run a bit long and take more of your time than usual, not because I haven’t written longer ones, but because it is supplemented with the thoughts of several people far smarter than me. And, above all, influential people, with their vision of the things around us and of the future.

Before we continue, I’d like to ask you to watch the following video, if you haven’t already. Peter Diamandis is an entrepreneur and scientist with an MD from Harvard Medical School who also studied Molecular Genetics and Aerospace Engineering at MIT. Together with Ray Kurzweil, he created a very unusual university, one that does not fit the frame of conventional education at all, but which (perhaps precisely because of that) has already produced several projects with the revolutionary potential to change the lives of many people around the world.

The video is not the shortest, it runs 15 minutes, but please endure the contrast between the first 30 seconds and the rest… And then come back to my text.

Let me recall when the first iPhone was announced: January 9, 2007. That was not long ago at all. Only 8 years have passed since then, yet I’m willing to bet that for at least the past 4-5 years many people around the world have forgotten what daily life without a smartphone is like. Incidentally, the iPhone appeared just 9 days after Bulgaria became part of the European Union. And so much has changed since then that many Bulgarians have conveniently forgotten what things were like before. Only 8 years ago… However much we think things happen slowly, the truth is that we embrace change quickly, especially when it is positive. OK, both the iPhone and the EU have plenty of haters :)

Do we think about the future of the next 8 years? And I don’t mean the people who wait for change to catch up with them. I’m asking those who carry the change: entrepreneurs, people of science, people of the word (I deliberately avoid the word journalists, and that is intentional)… We live in a world where the pace of evolution is revolutionary, unseen before. Our global connectedness, technology and new business models are transforming industry after industry, far beyond tech alone, and that is wonderful; but neither the world nor the local communities we live in are ready for changes this grand and this fast.

We see it in ourselves, drowning in trivia on the social networks over local elections, idiocies on the morning shows, or the drivel in newspapers like Duma or Telegraf… We fail to give all of this the ignore it deserves in response so it can sink into oblivion. We fail at the personal filters against our own over-informedness and the informational fog. And so we become amplifiers of this madhouse. Yet there are far more important things to worry about. Here are a few…

I’m willing to bet that most of today’s leading companies will not survive the next 15-20 years. I’m almost certain that, say, Yahoo and HP will not survive past 2017, at least not in their current form. IBM will follow very soon. In fact, I started this text about a week ago, and before I finished it, the latest splitting of HP in two was already in the news.

I’m talking about technology giants because that is the sector I mainly follow, but… what lies ahead in manufacturing? Robots and 3D printers will make production ever cheaper, so much cheaper that they will bring it back to the US and Europe. I mean wonderful workers like those from ABB, or the UR10, or my smart favorite Baxter, which is not programmed but trained, and even shares its emotional state via its display so that its potential human colleagues perceive it more naturally. Baxter’s price starts at only $25,000, and the UR10 and many others are even cheaper. At the same time they are extremely precise, work 24×7, and can be retooled from one functional role to another faster than any human. In other words, right now, today, at this moment, their operating costs are lower than the cost of human labor. Add to that the psychological factors: they are guaranteed to work for years within their service life, they don’t join unions, they don’t strike, and they don’t quit…

Literally every year these robots get better and cheaper, and they are able to perform ever more complex human tasks. Globally this sounds good for Europe, the US, and to some extent Asia, where there is local manufacturing, but… it certainly does not sound good for China.

And within just the next decade it may turn out that we won’t even need those robots, because of 3D printers, which by then may quite realistically be able to “print” consumer electronics as well. For the technologically uninitiated: imagine being able to “print” your iPhone at home.

Now let’s peek into energy. Until recently there was a real chance that we would exhaust the planet’s oil reserves for fuel in the foreseeable future. Now there is a chance we won’t, and even that part of those reserves will never need to be used. The reason is solar energy, whose price has literally collapsed (97% over the last 35 years, the statistics say) and keeps falling. In the US they expect that producing your own energy from panels at home will become cheaper than using the grid practically any moment now, and certainly before the end of this decade. Here is something related (again from yesterday). Add the Tesla Powerwall to the picture and things look wonderful for the person of the future, but not for the energy utilities. That is why the resistance against solar energy is now being waged by every means, including in Bulgaria. Because it is the only alternative energy with the potential to make today’s fossil-fuel energy sector completely redundant.

The good news is that if developments around solar energy continue at this pace, soon enough we will have practically unlimited energy at a negligible price. That will also solve the problem of drinking-water shortages, because it will be extremely cheap to purify ocean and sea water in whatever quantities are needed. Which in turn means that desiccated regions can be restored and people can feed themselves from local vegetable production in so-called vertical farms. Let’s not mention, for now, the possibility of “printing” dinner on a 3D printer, lest the chefs beat us up. But a revolution in the food industry and agriculture is certainly coming.

And to shorten my text and leave everyone room for their own reflections, I will only skim through 2-3 more key sectors.

Communications: connectivity, which in the near future will weigh on us as we realize how different and divided (including in our progress) people around the world are, and which will be a tool for global manipulation, but which at the same time will be the power of ordinary people, who will increasingly stand as an equal force against the media, the politicians and the big corporations. The Internet is our most important power and platform for spreading knowledge.

Education must be the great revolution of tomorrow.

We must restore the weight and authority of the teacher, the scholar, knowledge, science. Knowledge is becoming ever more accessible, and it is a crime against humanity to let it be replaced by pseudoscience, superstition, religion or television. We must develop zero tolerance for people with collections of university degrees, and even academic titles, who quote Vanga or global conspiracies. And religion should at best be reduced to some remote fifth-order attribute of the personality, and under no circumstances a foundational one.

The journalists of the future will most likely be all of us; that profession will disappear, or at least change drastically. The journalists will be the experts, and they will make their living not from journalism but from their actual profession or role in society.

Financial experiments like Bitcoin will keep multiplying and getting more interesting, but what has real potential to upend the capital markets and the finance of the future are instruments like the crowdfunding platforms, which are already seriously shaking the venture-capital segment and the banks by making it quick and easy to finance various social and business ideas. There are already successful experiments with crowdfunded real-estate deals (including in Bulgaria), publicly shared business risk, local financial systems, and more. It may well turn out that our future will have no banks as we know them today.

Add to that healthcare, with the possibility of early diagnostics and continuous monitoring of our vital signs through the sensors of our “smart” devices, robotic surgery, or the prospect of transplanting 3D-printed organs.

But… we will also have many new problems to solve, one of which is literally pounding on our door: the disappearance of many human professions. Many jobs will be taken over by robots and computers, including positions now held by skilled people. Almost no industry will be left untouched. And that will create new social problems, because people will not manage to adapt or retrain quickly enough and may be left without honest ways to support themselves. At the same time we will have an industry able to produce goods and wealth far more easily and cheaply, which cannot happen without consumption. So new economic structures and mechanisms will be needed to distribute wealth, because the market may not be able to cope on its own.

Psychologically, without work, people will lose their social engagement and their sense of fulfillment and belonging to society. Life, independence and the pursuit of happiness and comfort will no longer be attainable through labor, and we will have to find other ways. With that in mind, it is important to grasp how deep these processes run so that we can be ready with solutions.

Some of today’s visionaries offer the reassurance that, as with the processes around the Industrial Revolution and the Luddites, temporary upheaval is normal, but in the end industry will create the need for new professions, everything will fall into place, and people will live even better. The only problem is that we measure the pace of the Industrial Revolution in centuries, while that of the technological revolution is counted in years and sometimes months. And new knowledge is not acquired that easily or quickly, even when people are disposed to learn.

It is not Uber but self-driving cars that are the big problem for drivers, whose profession is among the most endangered, and on a very short horizon. Such cars and trucks are already driving on roads experimentally, and they are remarkably safe and far less harmful. They are not science fiction but reality, and within a few years they will make many taxi drivers, truckers, even tractor operators and couriers redundant. Again, remember when Steve Jobs showed you the iPhone… it will be that soon. Oh, and Uber will be able to offer an even better price.

With the advance of artificial intelligence, big data and cloud computing, every position that requires analyzing information will be done better by a computer, and that includes physicists, accountants, lawyers, stockbrokers… Eventually a few people will be needed to interact with those who prefer human contact, but the machines will need very little human help.

If we step outside the frame of the social problem, this can actually be an opportunity for us humans. Do we really need to work 40 or 60 hours a week if the factories and machines can manage without us? Perhaps humanity will finally have the time to rethink itself and devote itself to truly creative work instead of the mindless hours at the office? What if 10-15 hours a week earned us enough, leaving time for rest, volunteering, or yet more knowledge and education?

Perhaps there are more things to be excited about and await with anticipation than to worry about? If we are smart enough to solve so many problems tied to disease, hunger, energy and education, we should probably be able to handle the social problems too. But it is certainly time to lift our heads and look forward, not tomorrow but now, today… to make sure we will not be the next Luddites. To challenge ourselves and learn to look at (and to see!) the future.

Over roughly the past two months, several friends and acquaintances have been quoting and praising reads like those of Ayn Rand to me, and it saddens me that they try to explain the world around them with those. Okay, it is certainly healthier to read Ayn Rand than Marx, but the market is, in the end, only one possible mechanism for redistributing wealth; let’s not ascribe moral functions to it, please… And let’s not look for the answers to all questions behind our backs. It is time to leave all the utopias in the last century, where they belong… in history.

It is high time we started asking ourselves questions whose answers lie in the future, not in the past…

The future does not fit into the frame of libertarianism, nor even of capitalism as it is today. Whether we like it or not, whether we believe it or not, we need another evolution of society’s socio-economic order. And whatever we call it, it will happen. And it must not contain religious artifacts like those of the monotheistic religions: the Market, Equality, what have you… with capital letters.

The new world will need a more balanced model, in which man rethinks himself as a creator rather than a consumer, less an individualist and egoist and more a part of his community. More in sync with everything around him. Growth cannot be the only measure of success. Growth for growth’s sake is ruinous. The market cannot regulate itself while chasing growth and profit. Or by the time it manages to, it will be too late. Because we may not exhaust all the oil for energy, but we may exhaust it producing who knows what… or use up many of the planet’s other endangered and finite resources. The market is finite; growth cannot help but be finite too.

We must realize our weight as citizens and our potential for change, which we can amplify by organizing with one another and through the Internet. Look around: sometimes a single person is enough for a difficult social change to happen, sometimes even on a global scale. Examples abound… Look at what Humans of New York has become; today it is a social instrument more meaningful than many integration projects, UN programs and what not… one person…

Here is one more video: about one man, Lawrence Lessig; about one man, Aaron Swartz; and about one person, Granny D.

Yes, yes… most such efforts fail, the cynics and pragmatists will say. But the truth is that success is not always a spike on the cardiogram, and we will succeed more and more often if the cynics and pragmatists move their backsides instead of airing out their mouths. Because the Internet is an amplifier, but the real work is to go out onto the square, to write to the mayor (even if he doesn’t thrill you), to help your neighbor.

We laugh at VW (not that they don’t deserve a beating as well as the laughter), yet meanwhile we burn wood and coal through the winter; the energy spent convincing ourselves how scary e-voting is could have set something creative in motion, even at a profit, or, as Ivan Bedrov wrote recently on another occasion… in the meantime, people discovered water on Mars…

We desperately need to believe in ourselves. In the strength and potential of each one of us, and in what we are capable of together. Even as a small part of the whole. The great changes always begin with very few people.

Our political elites are worn out, but they are a reflection of those who elected them. And of those who have resigned themselves to having such elites. Political elites the world over are good for nothing. But they, too, are a function and a product of economic interests, PR and marketing.

There is no way to change them unless we change ourselves. Unless we realize our role as citizens of the future that is pounding on the door. Open it and let it in…

(P.S. The same videos can also be watched at , where unlike on YouTube they can have Bulgarian subtitles. A hint for those who struggle with English.)

Original link: “Бъдещето, за което не сме готови” – Some rights reserved

TorrentFreak: Torrent Sites Remove Millions of Links to Pirate Content

This post was syndicated from: TorrentFreak and was written by: Ernesto. Original post: at TorrentFreak

Entertainment industry groups including the RIAA and MPAA view BitTorrent sites as a major threat. The owners of most BitTorrent sites, however, believe they do nothing wrong.

While it’s common knowledge that The Pirate Bay refuses to remove any torrents, all of the other major BitTorrent sites do honor DMCA-style takedown requests.

Several copyright holders make use of these takedown services to remove infringing content, resulting in tens of thousands of takedown requests per month.

Bitsnoop is one of the prime targets. The site boasts one of the largest torrent databases on the Internet, more than 24 million files in total. This number would have been higher, though, as the site has complied with 2,220,099 takedown requests over the years.

The overview below shows that most of the takedown notices received by Bitsnoop were sent by Remove Your Media. Other prominent names such as the RIAA and Microsoft also appear in the list of top senders.


As one of the largest torrent sites, KickassTorrents (KAT) is also frequently contacted by copyright holders.

The site doesn’t list as many torrents as Bitsnoop does, but with tens of thousands of takedown notices per month it receives its fair share of takedown requests.

The KAT team informs TF that they removed 26,060 torrents over the past month, and a total of 856,463 since they started counting.

Torrent sites are not the only ones targeted. Copyright holders also ask Google to indirectly remove access to infringing torrents that appear in its search results. Interestingly, Google receives more requests for Bitsnoop and KAT than the sites themselves do.

Google’s transparency report currently lists 3,902,882 Bitsnoop URLs and several million for KickassTorrents’ most recent domain names. The people at TorrentTags noticed this as well and recently published some additional insights from their own database.

Despite their proper takedown policies, it’s hard for torrent sites to escape criticism. On the one hand, users complain that their torrents are vanishing. On the other, copyright holders are not happy with the constant stream of newly uploaded torrents.

Not all torrent sites are happy with the takedown procedure either. ExtraTorrent doesn’t keep track of the number of takedown requests the site receives, but the operator informs TF that many contain errors or include links that point to different domains.

Still, most torrent sites feel obligated to accept takedown notices and will continue to do so in order to avoid further trouble.

Source: TorrentFreak, for the latest info on copyright, file-sharing, torrent sites and ANONYMOUS VPN services.

lcamtuf's blog: Subjective explainer: gun debate in the US

This post was syndicated from: lcamtuf's blog and was written by: Michal Zalewski. Original post: at lcamtuf's blog

In the wake of the tragic events in Roseburg, I decided to return to the topic of looking at the US culture from the perspective of a person born in Europe. In particular, I wanted to circle back to the topic of firearms.

Contrary to popular belief, the United States has witnessed a dramatic decline in violence over the past 20 years. In fact, when it comes to most types of violent crime – say, robbery, assault, or rape – the country now compares favorably to the UK and many other OECD nations. But as I explored in my earlier posts, one particular statistic – homicide – remains stubbornly high, registering about three times as high as in many other places within the EU.

The homicide epidemic in the United States has a complex nature and overwhelmingly affects ethnic minorities and other disadvantaged social groups; perhaps because of this, the phenomenon sees very little honest, public scrutiny. It is propelled into the limelight only in the wake of spree shootings and other sickening, seemingly random acts of terror; such incidents, although statistically insignificant, take a profound mental toll on the American society. Yet, the effects of such violence also seem strangely short-lived: they trigger a series of impassioned political speeches, invariably focusing on the connection between violence and guns – but the nation soon goes back to business as usual, knowing full well that another massacre will happen soon, perhaps the very same year.

On the face of it, this pattern defies all reason – angering my friends in Europe and upsetting many brilliant and well-educated progressives in the US. They utter frustrated remarks about the all-powerful gun lobby and the spineless politicians and are quick to blame the partisan gridlock for the failure to pass even the most reasonable and toothless gun control laws. I used to be in the same camp; today, I think the reality is more complex than that.

To get to the bottom of this mystery, it helps to look at the spirit of radical individualism and libertarianism that remains the national ethos of the United States – and in fact, is enjoying a degree of resurgence unseen for many decades prior. In Europe, it has long been settled that many individual liberties – be it the freedom of speech or the natural right to self-defense – can be constrained to advance even some fairly far-fetched communal goals. On the old continent, such sacrifices sometimes paid off, and sometimes led to atrocities; but the basic premise of European collectivism is not up for serious debate. In America, a similar notion is far from being settled today.

And so, when it comes to firearm ownership, the country is facing a fundamental choice between two possible realities:

  • A largely disarmed society that depends on the state to protect it from almost all harm, and where citizens are generally not permitted to own guns without presenting a compelling cause. In this model, firearms would be less available to criminals – the resulting black market would be smaller, costlier, and more dangerous. At the same time, the nation would arguably become more vulnerable to foreign invasion or domestic terror, should the state ever fail to provide adequate protection to all its citizens.

  • A well-armed society where firearms are available to almost all competent adults, and where the natural right to self-defense is subject to few constraints. In this model, the country would be likely more resilient in the face of calamity. At the same time, the model must probably accept some inherent, non-trivial increase in violent crime due to the prospect of firearms more easily falling into the wrong hands.

It seems doubtful that a viable middle-ground approach can exist in the United States. With more than 300 million civilian firearms in circulation, most of them in unknown hands, the premise of reducing crime through gun control would critically depend on some form of confiscation; without it, the supply of firearms to the criminal underground or to unfit individuals would not be disrupted in any meaningful way. Because of this, intellectual integrity requires us to look at many of the legislative proposals not only through the prism of their immediate utility, but also through the prism of the societal model they are likely to advance in the long haul.

And herein lies the problem: many of the current “common-sense” gun control proposals have very little merit when considered in isolation. There is scant evidence that bans on military-looking semi-automatic rifles (“assault weapons”), or the prohibition on private sales at gun shows, would deliver measurable results. There is also no compelling reason to believe that ammo taxes, firearm owner liability insurance, mandatory gun store cameras, firearm-free school zones, or federal gun registration can have any impact on violent crime. At the same time, simply by the virtue of making weapons more expensive and burdensome to own, such regulation would be likely to gradually undermine the US gun culture – and in a matter of a decade or two, would make it easier for the country to follow in the footsteps of Australia or the UK. Only as we cross that line, it’s fathomable – yet still far from certain – that we would see a sharp drop in homicides.

This line of reasoning helps explain the visceral response from gun rights advocates: given the legislation’s unclear benefits and its suspected societal impact, many pro-gun folks are genuinely worried that any compromise would eventually mean giving up their civil liberties – and on some level, they are right. It is fashionable to imply that there is a sinister corporate “gun lobby” that derails the political debate for its own financial gain; but the evidence of this is virtually non-existent – and it’s unlikely that gun manufacturers honestly care about being allowed to put barrel shrouds or larger magazines on the rifles they sell.

Another factor that poisons the debate is that despite being highly educated and eloquent, the progressive proponents of gun control measures are often hopelessly unfamiliar with the very devices they are trying to outlaw:

I’m reminded of the widespread contempt faced by Senator Ted Stevens following his attempt to compare the Internet to a “series of tubes” as he was arguing against net neutrality. His analogy wasn’t very wrong – it just struck a nerve as simplistic and out-of-date. My progressive friends did not react the same way when Representative Carolyn McCarthy – one of the key proponents of the ban on assault weapons – showed no understanding of the firearm features she was trying to eradicate. Such bloopers are not rare, either; not long ago, Mr. Bloomberg, one of the leading progressive voices on gun control in America, argued against semi-automatic rifles without understanding how they differ from the already-illegal machine guns:

There are countless dubious and polarizing claims made by the supporters of gun rights, too; but when introducing new legislation, the burden of making educated and thoughtful arguments should rest on its proponents, not other citizens. When folks such as Bloomberg prescribe sweeping changes to the American society while demonstrating striking ignorance about the topics they want to regulate, they come across as elitist and flippant – and deservedly so.

Given how controversial the topic is, I think it’s wise to start an open, national conversation about the European model of gun control and the risks and benefits of living in an unarmed society. But it’s also likely that such a debate wouldn’t take off; progressive politicians like to say that the dialogue is impossible because of the undue influence of the National Rifle Association – but as I discussed in my earlier blog posts, the organization’s financial resources and power are often overstated, and the NRA is not bankrolled by shadowy business interests or wealthy oligarchs. In reality, disarmament just happens to be a very unpopular policy in America today: the support for gun ownership is very strong and has been growing over the past 20 years – even though hunting is on the decline.

Perhaps it would serve the progressive movement better to embrace the gun culture – and then think of ways to curb its unwanted costs. Addressing inner-city violence, especially among the disadvantaged youth, would quickly bring the US homicide rate much closer to the rest of the highly developed world. But admitting the staggering scale of this social problem can be an uncomfortable and politically charged position to hold.

PS. If you are interested in a more systematic evaluation of the scale, the impact, and the politics of gun ownership in the United States, you may enjoy an earlier entry on this blog.

Ad-blocking extension AdBlock sold to new owner

This post was syndicated and was written by: n8willis.

Many online media outlets are reporting the news that ownership of the popular ad-blocking browser extension AdBlock has been sold to a new owner. Not to be confused with the similarly named projects AdBlock Plus and AdBlock Edge, this AdBlock announced the news of the sale to its users in a pop-up window. TheNextWeb reports that AdBlock employees refused to identify the buyer. In related news, the new owner has decided to join the “Acceptable Ads” whitelisting program run by rival AdBlock Plus. An announcement on the AdBlock Plus site confirms the move, and notes that an “independent review board” will now decide which advertisements are included in the Acceptable Ads whitelist. Public nominations for the board are said to be open.

Schneier on Security: Friday Squid Blogging: Bobtail Squid Keeps Bacteria to Protect Its Eggs

This post was syndicated from: Schneier on Security and was written by: schneier. Original post: at Schneier on Security

The Hawaiian Bobtail Squid deposits bacteria on its eggs to keep them safe.

As usual, you can also use this squid post to talk about the security stories in the news that I haven’t covered.

AWS Compute Blog: Amazon EC2 Container Service at AWS re:Invent

This post was syndicated from: AWS Compute Blog and was written by: Deepak Singh. Original post: at AWS Compute Blog


AWS re:Invent is just a few days away. The Amazon ECS team will be there. To talk to us about how you are using Amazon ECS, or to find out more, drop by the Compute booth and the developer lounge.

There are also a number of Amazon ECS related talks next week. Come hear from AWS customers and the ECS team about how you can use Amazon ECS in production today.

In the Compute track

CMP302 – Amazon EC2 Container Service: Distributed Applications at Scale (also being live streamed)
CMP406 – Amazon ECS at Coursera: Powering a general-purpose near-line execution microservice, while defending against untrusted code (by Coursera)

In the Devops track

DVO305 – Turbocharge Your Continuous Deployment Pipeline with Containers
DVO308 – Docker & ECS in Production: How We Migrated Our Infrastructure from Heroku to AWS (by Remind)
DVO313 – Building Next-Generation Applications with Amazon ECS (by Meteor)
DVO317 – From Local Docker Development to Production Deployments (by Docker)

You can also drop by to watch a lightning talk on Amazon ECS and continuous delivery.

We look forward to seeing you in Las Vegas next week.

— The Amazon ECS Team

Darknet - The Darkside: HookME – API Based TCP Proxy Including SSL

This post was syndicated from: Darknet - The Darkside and was written by: Darknet. Original post: at Darknet - The Darkside

HookME is an API-based TCP proxy designed for intercepting communications by hooking the desired process and its API calls for sending and receiving network data (even SSL cleartext data). HookME provides a nice graphical user interface allowing you to change packet content in real time, and to drop or forward packets….
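The summary is brief, so here is a purely generic sketch of the TCP-proxy idea in Python (my own illustration, not HookME’s code; HookME works by API hooking rather than a listening socket, and the addresses below are placeholders). It shows the “hook point” where a proxy can inspect, modify, or drop traffic before forwarding it:

# Generic illustration of a TCP proxy's hook point (not HookME's implementation).
import socket
import threading

LISTEN_ADDR = ("127.0.0.1", 8080)   # hypothetical local endpoint
SERVER_ADDR = ("example.com", 80)   # hypothetical upstream server

def pump(src, dst):
    """Forward bytes one way, with a hook point for tampering."""
    try:
        while data := src.recv(4096):
            # Hook point: inspect, modify, or drop `data` before forwarding.
            dst.sendall(data)
    except OSError:
        pass  # connection reset or closed by the other direction
    finally:
        src.close()
        dst.close()

def handle(client):
    upstream = socket.create_connection(SERVER_ADDR)
    threading.Thread(target=pump, args=(client, upstream), daemon=True).start()
    pump(upstream, client)

with socket.create_server(LISTEN_ADDR) as listener:
    while True:
        conn, _ = listener.accept()
        threading.Thread(target=handle, args=(conn,), daemon=True).start()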

Read the full post at

Schneier on Security: Resilient Systems News

This post was syndicated from: Schneier on Security and was written by: schneier. Original post: at Schneier on Security

Former Raytheon chief scientist Bill Swanson has joined our board of directors.

For those who don’t know, Resilient Systems is my company. I’m the CTO, and we sell an incident-response management platform that…well…helps IR teams to manage incidents. It’s a single hub that allows a team to collect data about an incident, assign and manage tasks, automate actions, integrate intelligence information, and so on. It’s designed to be powerful, flexible, and intuitive — if your HR or legal person needs to get involved, she has to be able to use it without any training. I’m really impressed with how well it works. Incident response is all about people, and the platform makes teams more effective. This is probably the best description of what we do.

We have lots of large- and medium-sized companies as customers. They’re all happy, and we continue to sell this thing at an impressive rate. Our Q3 numbers were fantastic. It’s kind of scary, really.

Krebs on Security: Scottrade Breach Hits 4.6 Million Customers

This post was syndicated from: Krebs on Security and was written by: BrianKrebs. Original post: at Krebs on Security

Welcome to Day 2 of Cybersecurity (Breach) Awareness Month! Today’s awareness lesson is brought to you by retail brokerage firm Scottrade Inc., which just disclosed a breach involving contact information and possibly Social Security numbers on 4.6 million customers.

In an email sent today to customers, St. Louis-based Scottrade said it recently heard from federal law enforcement officials about crimes involving the theft of information from Scottrade and other financial services companies.

“Based upon our subsequent internal investigation coupled with information provided by the authorities, we believe a list of client names and street addresses was taken from our system,” the email notice reads. “Importantly, we have no reason to believe that Scottrade’s trading platforms or any client funds were compromised. All client passwords remained encrypted at all times and we have not seen any indication of fraudulent activity as a result of this incident.”

The notice said that although Social Security numbers, email addresses and other sensitive data were contained in the system accessed, “it appears that contact information was the focus of the incident.” The company said the unauthorized access appears to have occurred over a period between late 2013 and early 2014.

Asked about the context of the notification from federal law enforcement officials, Scottrade spokesperson Shea Leordeanu said the company couldn’t comment on the incident much more than the information included in its Web site notice about the attack. But she did say that Scottrade learned about the data theft from the FBI, and that the company is working with agents from FBI field offices in Atlanta and New York. FBI officials could not be immediately reached for comment.

It may well be that the intruders were after Scottrade user data to facilitate stock scams, and that a spike in spam email for affected Scottrade customers will be the main fallout from this break-in.

In July 2015, prosecutors in Manhattan filed charges against five people — including some suspected of having played a role in the 2014 breach at JPMorgan Chase that exposed the contact information on more than 80 million consumers. The authorities in that investigation said they suspect that group sought to use email addresses stolen in the JPMorgan hacking to further stock manipulation schemes involving spam emails to pump up the price of otherwise worthless penny stocks.

Scottrade said despite the fact that it doesn’t believe Social Security numbers were stolen, the company is offering a year’s worth of free credit monitoring services to affected customers. Readers who are concerned about protecting their credit files from identity thieves should read How I Learned to Stop Worrying and Embrace the Security Freeze.

AWS Official Blog: Spot Fleet Update – Console Support, Fleet Scaling, CloudFormation

This post was syndicated from: AWS Official Blog and was written by: Jeff Barr. Original post: at AWS Official Blog

There’s a lot of buzz about Spot instances these days. Customers are really starting to understand the power that comes with the ability to name their own price for compute power!

After launching the Spot fleet API in May to allow you to manage thousands of Spot instances with a single request, we followed up with resource-oriented bidding in August and the option to distribute your fleet across multiple instance pools in September.

One quick note before I dig in: While the word “fleet” might make you think that this model is best-suited to running hundreds or thousands of instances at a time, everything that I have to say here applies regardless of the size of your fleet, whether it comprises one, two, three, or three thousand instances! As you will see in a moment, you get a console that’s flexible and easy to use, along with the ability to draw resources from multiple pools of Spot capacity, when you create and run a Spot fleet.

Today we are adding three more features to the roster: a new Spot console, the ability to change the size of a running fleet, and CloudFormation support.

New Spot Console (With Fleet Support)
In addition to CLI and API support, you can now design and launch Spot fleets using the new Spot Instance Launch Wizard. The new wizard allows you to create resource-oriented bids that are denominated in instances, vCPUs, or arbitrary units that you can specify when you design your fleet.  It also helps you to choose a bid price that is high enough (given the current state of the Spot market) to allow you to launch instances of the desired types.

I start by choosing the desired AMI (stock or custom), the capacity unit (I’ll start with instances), and the amount of capacity that I need. I can specify a fixed bid price across all of the instance types that I select, or set it to be a percentage of the On-Demand price for the type. Either way, the wizard will indicate (with the “caution” icon) any bid prices that are too low to succeed:

When I find a set of prices and instance types that satisfies my requirements, I can select them and click on Next to move forward.

I can also make resource-oriented bids using a custom capacity unit. When I do this I have even more control over the bid. First, I can specify the minimum requirements (vCPUs, memory, instance storage, and generation) for the instances that I want in my fleet:

The display will update to indicate the instance types that meet my requirements.

The second element that I can control is the amount of capacity per instance type (as I explained in an earlier post, this might be driven by the amount of throughput that a particular instance type can deliver for my application). I can control this by clicking in the Weighted Capacity column and entering the designated amount of capacity for each instance type:

As you can see from the screen shot above, I have chosen all of the instance types that offer weighted capacity at less than $0.35 / unit.
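To make the arithmetic concrete (my numbers, not AWS’s): if an instance type is assigned a weighted capacity of 8 and the fleet’s target capacity is 24 units, three such instances fulfill the request, and a Spot price of $2.40 for that instance type works out to $0.30 per unit when compared against my $0.35 ceiling.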

Now that I have designed my fleet, I can configure it by choosing the allocation strategy (diversified or lowest price), the VPC, security groups, availability zones / subnets, and a key pair for SSH access:

I can also click on Advanced to create requests that are valid only between certain dates and times, and to set other options:

After that I review my settings and click on Launch to move ahead:

My Spot fleet is visible in the Console. I can select it and see which instances were used to satisfy my request:

If I plan to make requests for similar fleets from time to time, I can download a JSON version of my settings:

Fleet Size Modification
We are also giving you the ability to modify the size of an existing fleet. The new ModifySpotFleetRequest function allows you to make an existing fleet larger or smaller by specifying a new target capacity.

When you increase the capacity of one of your existing fleets, new bids will be placed in accordance with the fleet’s allocation strategy (lowest price or diversified).

When you decrease the capacity of one of your existing fleets, you can request that excess instances be terminated based on the allocation strategy. Alternatively, you can leave the instances running, and manually terminate them using a strategy of your own.
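If you drive your fleets from code rather than the console, the resize boils down to a single API call. Here is a minimal boto3 sketch; the fleet request ID, region, and capacity values are placeholders:

# Minimal sketch: resize an existing Spot fleet with boto3.
# The request ID, region, and target capacity below are hypothetical.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

response = ec2.modify_spot_fleet_request(
    SpotFleetRequestId="sfr-12345678-1234-1234-1234-123456789012",
    TargetCapacity=20,
    # "default" terminates excess instances per the allocation strategy;
    # "noTermination" leaves them running for manual cleanup.
    ExcessCapacityTerminationPolicy="noTermination",
)
print(response["Return"])  # True if the modification was accepted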

You can also modify the size of your fleet using the Console:

CloudFormation Support
We are also adding support for the creation of Spot fleets via a CloudFormation template. Here’s a sample:

"SpotFleet": {
  "Type": "AWS::EC2::SpotFleet",
  "Properties": {
    "SpotFleetRequestConfigData": {
      "IamFleetRole": { "Ref": "IAMFleetRole" },
      "SpotPrice": "1000",
      "TargetCapacity": { "Ref": "TargetCapacity" },
      "LaunchSpecifications": [
        "EbsOptimized": "false",
        "InstanceType": { "Ref": "InstanceType" },
        "ImageId": { "Fn::FindInMap": [ "AWSRegionArch2AMI", { "Ref": "AWS::Region" },
                     { "Fn::FindInMap": [ "AWSInstanceType2Arch", { "Ref": "InstanceType" }, "Arch" ] }
        "WeightedCapacity": "8"
        "EbsOptimized": "true",
        "InstanceType": { "Ref": "InstanceType" },
        "ImageId": { "Fn::FindInMap": [ "AWSRegionArch2AMI", { "Ref": "AWS::Region" },
                     { "Fn::FindInMap": [ "AWSInstanceType2Arch", { "Ref": "InstanceType" }, "Arch" ] }
        "Monitoring": { "Enabled": "true" },
        "SecurityGroups": [ { "GroupId": { "Fn::GetAtt": [ "SG0", "GroupId" ] } } ],
        "SubnetId": { "Ref": "Subnet0" },
        "IamInstanceProfile": { "Arn": { "Fn::GetAtt": [ "RootInstanceProfile", "Arn" ] } },
        "WeightedCapacity": "8"

Available Now
The new Spot Fleet Console, the new ModifySpotFleetRequest function, and the CloudFormation support are available now and you can start using them today!


SANS Internet Storm Center, InfoCON: green: BizCN gate actor update, (Fri, Oct 2nd)

This post was syndicated from: SANS Internet Storm Center, InfoCON: green and was written by: SANS Internet Storm Center, InfoCON: green. Original post: at SANS Internet Storm Center, InfoCON: green


The actor using gates registered through BizCN (always with privacy protection) continues using the Nuclear exploit kit (EK) to deliver malware.

My previous diary on this actor documented the actor’s switch from Fiesta EK to Nuclear EK in early July 2015 [1]. Since then, the BizCN gate actor briefly switched to Neutrino EK; however, it appears to be using Nuclear EK again.

Our thanks to Paul, who submitted a pcap of traffic associated with this actor to the ISC.


Paul’s pcap showed us a Google search leading to the compromised website. In the image below, you can also see the EK traffic that followed.

Shown above: A pcap of the traffic filtered by HTTP request.

No payload was found in this EK traffic, so the Windows host viewing the compromised website didn’t get infected. The Windows host from this pcap was running IE 11, and URLs for the EK traffic stop after the last two HTTP POST requests. These URL patterns are what I’ve seen every time IE 11 crashes after getting hit with Nuclear EK.

A key thing to remember with the BizCN gate actor is the referer line from the landing page. This will always show the compromised website, and it won’t indicate the BizCN-registered gate that got you there. Paul’s pcap didn’t include traffic to the BizCN-registered gate, but I found a reference to it in the traffic.

Shown above: Flow chart for EK traffic associated with the BizCN gate actor.

How did I find the gate in this example? First, I checked the referer on the HTTP GET request to the EK landing page.

Shown above: TCP stream for the HTTP GET request to the Nuclear EK landing page.
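For readers who want to run the same referer check themselves, here is a hedged sketch using scapy’s HTTP layer (requires scapy 2.4.3 or later; the capture file name is a placeholder):

# Sketch: list each HTTP request in a capture with its Referer header,
# to spot which page pointed the browser at the EK landing URL.
from scapy.all import rdpcap
from scapy.layers.http import HTTPRequest

for pkt in rdpcap("traffic.pcap"):  # hypothetical capture file
    if pkt.haslayer(HTTPRequest):
        req = pkt[HTTPRequest]
        host = (req.Host or b"").decode(errors="replace")
        path = (req.Path or b"").decode(errors="replace")
        referer = (req.Referer or b"").decode(errors="replace")
        print(f"{host}{path}  <-  referer: {referer or '(none)'}")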

That referer should have had injected script pointing to the BizCN gate URL, so I exported the page from the pcap.

Shown above: The object I exported from the pcap.

I searched the HTML text of that page and found the injected script.

Shown above: Malicious script in a page from the compromised website pointing to a URL on the BizCN-registered gate domain.

Pinging the BizCN-registered gate domain showed its IP address.

Shown above: Whois information on the BizCN-registered gate domain.

This completes my flow chart for the BizCN gate actor. The domains associated with Paul’s pcap were:

  • – Compromised website
  • – – BizCN-registered gate
  • – – Nuclear EK

Final words

Recently, I’ve had a hard time getting a full chain of infection traffic from the BizCN gate actor. Paul’s pcap also had this issue, because there was no payload. However, the BizCN gate actor is still active, and many of the compromised websites I’ve noted in previous diaries [1, 4] are still compromised.

We continue to track the BizCN gate actor, and we’ll let you know if we discover any significant changes.

Brad Duncan
Security Researcher at Rackspace
Blog: – Twitter: @malware_traffic



(c) SANS Internet Storm Center. Creative Commons Attribution-Noncommercial 3.0 United States License.

TorrentFreak: Comcast User Hit With 112 DMCA Notices in 48 Hours

This post was syndicated from: TorrentFreak and was written by: Andy. Original post: at TorrentFreak

Every day, DMCA-style notices are sent to regular Internet users who use BitTorrent to share copyrighted material. These notices are delivered to users’ Internet service providers who pass them on in the hope that customers correct their behavior.

The most well-known notice system in operation in the United States is the so-called “six strikes” scheme, in which the leading recording labels and movie studios send educational warning notices to presumed pirates. Not surprisingly, six-strikes refers to users receiving a maximum of six notices. However, content providers outside the scheme are not bound by its rules – sometimes to the extreme.

According to a lawsuit filed this week in the United States District Court for the Western District of Pennsylvania (pdf), one unlucky Comcast user was subjected not only to a barrage of copyright notices on an unprecedented scale, but during one of the narrowest time frames yet.

The complaint comes from Rotten Records who state that the account holder behind a single Comcast IP address used BitTorrent to share the discography of Dog Fashion Disco, a long-since defunct metal band previously known as Hug the Retard.

“Defendant distributed all of the pieces of the Infringing Files allowing others to assemble them into a playable audio file,” Rotten Records’ attorney Flynn Wirkus Young explains.

Considering Rotten Records have been working with Rightscorp on other cases this year, it will come as no surprise that the anti-piracy outfit is also involved in this one. And boy have they been busy tracking this particular user. In a single 48 hour period, Rightscorp hammered the Comcast subscriber with more than two DMCA notices every hour over a single torrent.

“Rightscorp sent Defendant 112 notices via Defendant’s ISP Comcast from June 15, 2015 to June 17, 2015 demanding that Defendant stop illegally distributing Plaintiff’s work,” the lawsuit reads.

“Defendant ignored each and every notice and continued to illegally distribute Plaintiff’s work.”


While it’s clear that the John Doe behind the IP address shouldn’t have been sharing the works in question (if he indeed was the culprit and not someone else), the suggestion to the Court that he or she systematically ignored 112 demands to stop infringing copyright stretches the bounds of reason, to say the least.

In fact, Court documents state that after infringement began sometime on June 15, the latest infringement took place on June 16 at 11:49am, meaning that the defendant may well have acted on Rightscorp’s notices within 24 hours – and that’s presuming that Comcast passed them on right away, or even at all.

Either way, the attempt here is to portray the defendant as someone who had zero respect for Rotten Records’ rights, even after being warned by Rightscorp more than a hundred and ten times. Trouble is, all of those notices covered an alleged infringing period of less than 36 hours – hardly a reasonable time in which to react.

Still, it’s unlikely the Court will be particularly interested and will probably issue an order for Comcast to hand over their subscriber’s identity so he or she can be targeted by Rotten Records for a cash settlement.

Rotten has targeted Comcast users on several earlier occasions, despite being able to sue the subscribers of any service provider. Notably, while Comcast does indeed pass on Rightscorp’s DMCA takedown notices, it strips the cash settlement demand from the bottom.

One has to wonder whether Rightscorp and its client are trying to send the ISP a message with these lawsuits.

Source: TorrentFreak, for the latest info on copyright, file-sharing, torrent sites and ANONYMOUS VPN services.

Krebs on Security: Experian Breach Affects 15 Million Consumers

This post was syndicated from: Krebs on Security and was written by: BrianKrebs. Original post: at Krebs on Security

Kicking off National Cybersecurity Awareness Month with a bang, credit bureau and consumer data broker Experian North America disclosed Thursday that a breach of its computer systems exposed approximately 15 million Social Security numbers and other data on people who applied for financing from wireless provider T-Mobile USA Inc.

Experian said the compromise of an internal server exposed names, dates of birth, addresses, Social Security numbers and/or driver’s license numbers, as well as additional information used in T-Mobile’s own credit assessment. The Costa Mesa-based data broker stressed that no payment card or banking details were stolen, and that the intruders never touched its consumer credit database.

Based on the wording of Experian’s public statement, many publications have reported that the breach lasted for two years from Sept. 1, 2013 to Sept. 16, 2015. But according to Experian spokesperson Susan Henson, the forensic investigation is ongoing, and it remains unclear at this point the exact date that the intruders broke into Experian’s server.

Henson told KrebsOnSecurity that Experian detected the breach on Sept. 15, 2015, and confirmed the theft of a single file containing the T-Mobile data on Sept. 22, 2015.

T-Mobile CEO John Legere blasted Experian in a statement posted to T-Mobile’s site. “Obviously I am incredibly angry about this data breach and we will institute a thorough review of our relationship with Experian, but right now my top concern and first focus is assisting any and all consumers affected,” Legere wrote.


Experian said it will be notifying affected consumers by snail mail, and that it will be offering affected consumers free credit monitoring through its “Protect MyID” service. Take them up on this offer if you want, but I would strongly encourage anyone affected by this breach to instead place a security freeze on their credit files at Experian and at the other big three credit bureaus, including Equifax, Trans Union and Innovis.

Experian’s offer to sign victims up for a credit monitoring service to address a breach of its own making is pretty rich. Moreover, credit monitoring services aren’t really built to prevent ID theft. The most you can hope for from a credit monitoring service is that they give you a heads up when ID theft does happen, and then help you through the often labyrinthine process of getting the credit bureaus and/or creditors to remove the fraudulent activity and to fix your credit score.

If after ordering a free copy of your credit report you find unauthorized activity on your credit file, by all means take advantage of the credit monitoring service, which should assist you in removing those inquiries from your credit file and restoring your credit score if it was dinged in the process.

But as I explain at length in my story How I Learned to Stop Worrying and Embrace the Security Freeze, credit monitoring services aren’t really built to stop thieves from opening new lines of credit in your name.

If you wish to block thieves from using your personal information to obtain new credit in your name, freeze your credit file with the major bureaus. For more on how to do that and for my own personal experience with placing a freeze, see this piece.

I will be taking a much closer look at Experian’s security (or lack thereof) in the coming days, and my guess is lawmakers on Capitol Hill will be following suit. This is hardly the first time lax security at Experian has exposed millions of consumer records. Earlier this year, a Vietnamese man named Hieu Minh Ngo was sentenced to 13 years in prison for running an online identity theft service that pulled consumer data directly from an Experian subsidiary. Experian is now fighting off a class-action lawsuit over the incident.

During the time that ID theft service was in operation, customers of Ngo’s service had access to more than 200 million consumer records. Experian didn’t detect Ngo’s activity until it was notified by federal investigators that Ngo was an ID thief posing as a private investigator based in the United States. The data broker failed to detect the anomalous activity even though Ngo paid monthly, via wire transfers from a bank in Singapore, for the consumer data lookups that his hundreds of customers conducted.

Friday’s security updates

This post was syndicated from: and was written by: n8willis. Original post: at

CentOS has updated thunderbird (C6; C5; C7: multiple vulnerabilities).

Debian-LTS has updated binutils (multiple vulnerabilities).

Fedora has updated freeimage (F22; F21: integer overflow), golang (F22; F21: multiple vulnerabilities), jakarta-commons-httpclient (F22; F21: denial of service), and openjpeg2 (F22; F21: use-after-free vulnerability).

Mageia has updated thunderbird (M5: multiple vulnerabilities).

openSUSE has updated bind (11.4: denial of service).

Oracle has updated thunderbird (O6; O7: multiple vulnerabilities).

Red Hat has updated mod_proxy_fcgi (RHEL6: denial of service).

Scientific Linux has updated thunderbird (SL5, 6, 7: multiple vulnerabilities).

Slackware has updated mozilla-thunderbird (14.0, 14.1, current: multiple vulnerabilities), php (14.0, 14.1, current: multiple vulnerabilities), and seamonkey (14.0, 14.1, current: multiple vulnerabilities).

Ubuntu has updated kernel (12.04: multiple vulnerabilities) and linux-ti-omap4 (12.04: multiple vulnerabilities).

Linux How-Tos and Linux Tutorials: Using G’MIC to Work Magic on Your Graphics

This post was syndicated from: Linux How-Tos and Linux Tutorials and was written by: Jack Wallen. Original post: at Linux How-Tos and Linux Tutorials

I’ve been doing graphic design for a long, long time. During that time, I’ve used one tool and only one tool… Gimp. Gimp has always offered all the power I need to create amazing graphics, from book covers to promotional images, photo retouching, and much more. But…

There’s always a but.

Even though Gimp has a rather powerful (and easy to use) set of filters, those filters tend to be one-trick ponies. In other words, if you want to create a complex look on an image, you will most likely wind up chaining together multiple filters to get the effect you want. That’s great, because the filters are at your command; however, knowing which filter to use for which effect can be a bit daunting.

That’s why GREYC’s Magic for Image Computing (aka G’MIC) is such a breath of fresh air. This particular plugin for Gimp has saved me time, effort, and hair pulling on a number of occasions. What G’MIC does is easily extend the capabilities of not just Gimp, but the Gimp user. G’MIC is a set of predefined filters and effects that make using Gimp exponentially easier.

The list of filters and effects available from G’MIC is beyond impressive. You’ll find things like:

  • Arrays & tiles

  • Bokeh

  • Cartoon

  • Chalk it up

  • Finger paint

  • Graphic novel

  • Hope poster

  • Lylejk’s painting

  • Make squiggly

  • Paint daub

  • Pen drawing

  • Warhol

  • Watercolor

  • Charcoal

  • Sketch

  • Stamp

  • Boost-fade

  • Luminance

  • Decompose channels

  • Hue lighten-darken

  • Metallic look

  • Water drops

  • Vintage style

  • Skeleton

  • Euclidean – polar

  • Reflection

  • Ripple

  • Wave

  • Wind

  • Noise

  • Old Movie Stripes

And more. For an entire listing of the effects and filters available, check out the ASCII chart here.

At this point, any Gimp user should be salivating at the thought of using this wonderful tool. With that said, let’s install and get to know G’MIC.


The good news is that you can find G’MIC in your distribution’s standard repositories. I’ll show you how to install using the Ubuntu Software Center.

The first thing to do, once you’ve opened up the Ubuntu Software Center, is to search for Gimp. Click on the entry for Gimp and then click the More Info button. Scroll down until you see the Optional add-ons (see Figure 1 above).

From within the optional add-ons listing, make sure to check the box for GREYC’s Magic for Image Computing and then click Apply Changes.

With the installation of G’MIC complete, you are ready to start using the tool.

I will warn you: I currently use the unstable version (2.9.1) of Gimp. Although unstable, there are features and improvements in this version that blow away the 2.8 branch. So… if you’re willing to work with a possibly unstable product (I find it stable), it’s worth the risk. To install the 2.9 branch of Gimp on a Ubuntu-based distribution, follow these steps:

  1. Open a terminal window

  2. Add the necessary repository with the command sudo add-apt-repository ppa:otto-kesselgulasch/gimp-edge

  3. Update apt with the command sudo apt-get update

  4. Install the development build of Gimp by issuing the command sudo apt-get install gimp

The above should upgrade G’MIC as well. If not, you might need to follow up the install with the command sudo apt-get upgrade.
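If you would rather do the whole thing from a terminal, the same sequence can be pasted in as one block (these are just the commands from the steps above, collected for convenience):

# Add the gimp-edge PPA, refresh the package lists, and install Gimp 2.9
sudo add-apt-repository ppa:otto-kesselgulasch/gimp-edge
sudo apt-get update
sudo apt-get install gimp
# If G'MIC was not upgraded along with Gimp, upgrade it explicitly
sudo apt-get upgrade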


Now it’s time to start using G’MIC. If you search through your desktop menu, you’ll not find G’MIC listed. That is because it is integrated into Gimp itself: open Gimp and look in the Filters menu, where you should see G’MIC listed, but grayed out. That is because G’MIC can only open when you’re actually working on an image (remember, this is a set of predefined filters that act on an image, not create an image). With that said, open up an image and then click Filters > G’MIC. A new window will open (Figure 2) showing the abundance of filters and effects available to you.

The first thing you need to know is the Input/Output section (bottom left corner). Here you can decide, first, what G’MIC is working on. For example, you can tell G’MIC to use the currently active layer for Input but to output to a brand new layer. This can sometimes be handy so you’re not changing the current working layer (you might not want to do destructive editing on something you’ve spent hours on). If you like what G’MIC did with the layer, you can then move it into place and delete (or hide) the original layer.

At this point, it’s all about scrolling through each of the included pre-built effects and filters to find what you want. Each filter/effect offers a varying degree of user-controlled options (Figure 3 illustrates the controls for the Dirty filter under Degradations).

One thing you must get used to is making sure to select the layer you want to work on before opening G’MIC. If you don’t, you’ll have to close G’MIC, select the correct layer, and re-open G’MIC. You also need to understand that some of the filters take much longer to work their magic than others. You’ll see a progress bar at the bottom of the Gimp window, indicating the filter/effect is being applied.

If you want to test G’MIC before installing it, or you want to test filters/effects before applying them to your own work, you can test it with this handy online demo version. This tool allows you to work with G’MIC on a demo image so you can not only see how well the effects/filters work, but get the hang of using G’MIC (it’s not hard).

If you’re a Gimp power user, G’MIC is, without a doubt, one of the single most important add-ons available for the flagship open source image editing tool. With G’MIC you can bring some real magic to your digital images… and do so with ease. Give it a go and see if it doesn’t take your Gimp work to the next level.

AWS Official Blog: Are You Well-Architected?

This post was syndicated from: AWS Official Blog and was written by: Jeff Barr. Original post: at AWS Official Blog

Seattle-born musical legend Jimi Hendrix started out his career with a landmark album titled Are You Experienced?

I’ve got a similar question for you: Are You Well-Architected? In other words, have you chosen a cloud architecture that is in alignment with the best practices for the use of AWS?

We want to make sure that your applications are well-architected. After working with thousands of customers, the AWS Solutions Architects have identified a set of core strategies and best practices for architecting systems in the cloud and have codified them in our new AWS Well-Architected Framework. This document contains a set of foundational questions that will allow you to measure your architecture against these best practices and to learn how to address any shortcomings.

The AWS Well-Architected Framework is based around four pillars:

  • Security – The ability to protect information systems and assets while delivering business value through risk assessments and mitigation strategies.
  • Reliability – The ability to recover from infrastructure or service failures, dynamically acquire computing resources to meet demand, and mitigate disruptions such as misconfigurations or transient network issues.
  • Performance Efficiency – The ability to use computing resources efficiently to meet system requirements, and to maintain that efficiency as demand changes and technologies evolve.
  • Cost Optimization – The ability to avoid or eliminate unneeded cost or suboptimal resources.

For each pillar, the guide puts forth a series of design principles, and then defines the pillar in detail. Then it outlines a set of best practices for the pillar and proffers a set of questions that will help you to understand where you are with respect to the best practices. The questions are open-ended. For example, there’s no simple answer to the question “How does your system withstand component failures?” or “How are you planning for recovery?”

As you work your way through the Framework, I would suggest that you capture and save the answers to each of the questions. This will give you a point-in-time reference and will allow you to look back later in order to measure your progress toward being well-architected.

The AWS Well-Architected Framework is available at no charge. If you find yourself in need of additional help along your journey to the cloud, be sure to tap into the accumulated knowledge and expertise of our team of Solutions Architects.


PS – If you are coming to AWS re:Invent, be sure to attend the Well-Architected Workshop at 1 PM on Wednesday, October 7th.

Блогът на Юруков: Digitize Me an Institution

This post was syndicated from: Блогът на Юруков and was written by: Боян Юруков. Original post: at Блогът на Юруков


A public tender for new management software. Those words are invariably followed by a yawn and a dose of cynicism. It is “normal” for the money to be spent by whoever it has to go through, and for something to be delivered that nobody uses. That is how it has always been, and serious change is unlikely any time soon. There are signs, however, that we have stumbled upon one of the rare exceptions to this rule.

The purpose of these tenders is to improve the work of agencies and ministries and to speed up communication both between directorates and between institutions. Probably the biggest problem any administration, private or public, faces is information management. That includes databases of tasks and inventory, as well as the tracking of processes, results, costs and security. In many organizations these things are done mostly on paper or, at best, in Excel spreadsheets, which is a small improvement but creates problems of an entirely different kind.

In March of this year I received an email from a team. They mentioned that they were building such a management system, had seen my work with open data, and wanted to talk. Their idea was to build that principle into the new platform. We discussed their plans and the problems they were running into while gathering information and digitizing processes. I was barely able to add anything to their ideas, something that had never happened to me before.

Two days ago the Road Infrastructure Agency (АПИ) officially presented the platform. The public part includes a map and a mobile app through which we can follow road works, closures and road problems in real time. Many media outlets reported that the map is meant to display the traffic data that will be collected by another, still unfinished system. In reality, this map shows Google’s traffic layer as an optional extra; its purpose has little to do with that.

It was the job of the agency’s PR people to explain all of this, and they clearly did not pull it off. So allow me to illustrate it with a few examples.

Didn’t we already know these things?


Above you can see АПИ’s news items about closed roads and road works. On their site you will also find the road conditions bulletin. In theory, anyone can use it to get a picture of what is happening where. Presented as plain text, though, every driver has to monitor the bulletin constantly, work out where exactly kilometer 172 after Sofia is, and decide whether they even care. In that form the information cannot be fed to a navigation system or any other algorithm. Publishing the news in a machine-readable format, with geographic coordinates, categorization and a description, is the only solution.

To deliver the information that way, however, it is not enough to put up a map for officials to click on. Until now all of this was analog, which means there was no usable digital record of where exactly the roads, signs, restrictions, pavement and road works are. In other words, the information infrastructure for our infrastructure was missing. Besides changing the form through which local directorates feed the bulletin, all the data and processes in the agency had to be captured and pushed into the 21st century. That is as much an organizational challenge as a technological one.


The result is that we can now see all the announcements laid out on a map like this one. To make that possible, though, it is not enough to build just a website, which is in fact the only element visible to us. We like to think the state administration has all the data and hoards it like a family inheritance. In quite a few cases that really is so. But as I said at TEDxBG a year ago, far too often the information simply does not exist, or is scattered across folders all over the country. As someone who collects data from institutions, I can tell you this is common practice and by no means a Bulgarian invention.

Almost a million for one site and an app? Actually, no

The main criticism of the project concerns the price and who stands behind the contractor. I cannot speak to the latter, as I am not familiar with the company or its history. Капитал has already written about it, and we can expect more details soon. I can only speak about the work delivered and the expected effect of the system. In the context of everything written above, I hope it is clear that this is far from being just one app. In fact, yesterday I spoke again with the project’s implementers and they told me what they have delivered. From other sources I also learned that what they built goes well beyond what АПИ ordered. They even had to push certain decisions through, such as public open data, against the resistance of individual officials.

The public part of the platform rests on a comprehensive management system replacing all the processes and communication inside the agency. Where information about road works and orders for new signs used to be sent around on paper, this now happens electronically. The example with the signs is telling. There are several mobile applications built for internal use that are not available to the rest of us. One of them lets their employees quickly map all the road signs in Bulgaria. During the tests alone, about 100 thousand of them were placed on the map.


You know how we constantly see illogical signs lowering the speed limit, or road-work signs where no work is being done? Hardly anyone would be surprised that АПИ does not know about them. They get forgotten through oversights by the local directorates. The explanation is that they simply had no suitable information system, something the agency admitted not long ago. Meanwhile, tens of millions are spent every year on more and more new signs.

Well, now such a system exists and is reportedly already delivering results. An employee at any level can check where a sign is, whether it is needed, and whether it can be taken down and used elsewhere. It seems a small thing at first glance, but it will, first, improve road safety and, second, save money. By АПИ’s preliminary estimates, recovered and reused road signs alone will save the state about a million leva per year. That means this one component of the platform would pay for the entire project in roughly 9-10 months.

Google doesn’t know everything; it goes and finds it

Another interesting comment online was that Google also shows these road works and has far better navigation. That is not true, at least as far as the first part goes. Google works with local partners who feed it whatever information they have about traffic or incidents on the road. Because there was no reliable official service until now, that information in Bulgaria was incomplete and inaccurate. Even the traffic data is more a prediction based on historical data and mobile-network cells than a picture of the actual current state. With the new platform, even services like Google’s maps will start drawing on up-to-date information.


Another important part of the platform is the submission of reports and the feedback around them. Over the past two days there have been quite a few, and some people share that they are already getting answers about planned road works. Whether that will last, and whether the system will have a real effect on improving our roads, we will see over the next year. Both those who file reports and the rest of us, through the map, will be able to track whether the reports lead to real action.

For the integration and the civic oversight to happen, however, we need the open data and public services I have been writing about for so long. Their documentation is not public yet, but I have assurances that it will be added to the government open data portal in November. Then we will be able to include the information published by АПИ in all kinds of applications and analyses.

Information is power when it exists and is used

Although the map and the app are useful, they far from exhaust what the collected data can do. One idea is to build a system for dispatching ambulances. In fact, there apparently already is agreement in principle from the Ministry of Health, and it may well become a real project. The advantage is that no new equipment would be needed; the systems would only have to be connected with a suitable algorithm between them.

At present, a one-off decision is made about where ambulances should patrol in cities like Sofia and Plovdiv. When a call comes in to 112, the nearest ambulance heads to the scene and leaves its region empty. If another call comes from that region, the remaining ambulances are far away and take longer to arrive. Reportedly, an ambulance will sometimes even abandon one call for another if there are indications that the first is less urgent. I think we all have our own stories of ambulances arriving late. The audit of the emergency services points to precisely this poor planning and the lack of connectivity between the systems as the main cause.


Using the data about road conditions, road works and congestion, a relatively simple algorithm could compute the zone a given ambulance can cover, that is, how far it can get from its current position within 10-15 minutes given the situation on the roads. By tracking the locations of all the ambulances, the algorithm could work out which zones of the city are not covered and redirect one or more ambulances to patrol elsewhere, so they are closer to a potential incident. If one is called out on an emergency, the others would be repositioned to cover the region better. The chance of a serious delay, caused by an ambulance having to come from the other end of the city or being stuck in congestion or road works the driver did not know about, all but disappears.

The fact that the team behind the system planned for, and pushed through on their own initiative, the release of open data means that private companies can build applications and offer the same services to security firms, taxi companies and couriers. The traffic and road-work data will also allow a proper analysis of new infrastructure projects: whether they actually reduced traffic and incidents, and whether the work on them created more problems and forced expensive repairs later on. This was possible before too, but it required a whole army of people to collect information from bulletins and folders across the country, enter it into spreadsheets and hope that nobody along the chain got a digit wrong. In about a month we should be getting all of this with one click.

Some money changed hands here…, but with a result

It is entirely natural for there to be questions about the tender, the contractor, related parties and the execution. It is good that there are journalists who look closely at such tenders, and hardly anyone doubts that most of them are steeped in corruption and incompetence. You will also rarely hear me praise any software system in the administration. They are usually built like a thesis project at a Bulgarian university, just well enough to get past a committee. Afterwards they are not maintained, which nobody notices, because nobody uses them. In truth, anyone who has been involved in rolling out systems of this scale knows this happens often in the private sector too.

From a technical standpoint, and judging by what I have seen and learned, I would argue that this platform will be of great benefit not only to АПИ but to all of us, through its public information. Of course, I may be wrong: we have not seen the system’s code, since the tender was issued before the requirement that all new systems be open source. So we cannot know how well it will work in the future, or whether the accompanying problems will push employees back to paper, as happens again and again in the state administration. We will find that out fairly quickly, though, and it will depend to a large degree on the agency’s management.

At this stage I can only praise the implementers and hope that we will see the data and the public services soon.

AWS Compute Blog: Dynamic Scaling with EC2 Spot Fleet

This post was syndicated from: AWS Compute Blog and was written by: Vyom Nagrani. Original post: at AWS Compute Blog

Tipu Qureshi, AWS Senior Cloud Support Engineer

The RequestSpotFleet API allows you to launch and manage an entire fleet of EC2 Spot Instances with one request. A fleet is a collection of Spot Instances that are all working together as part of a distributed application and providing cost savings. With the ModifySpotFleetRequest API, it’s possible to dynamically scale a Spot fleet’s target capacity according to changing capacity requirements over time. Let’s take, as an example, a batch processing application that uses a Spot fleet and Amazon SQS. As discussed in our previous blog post on Additional CloudWatch Metrics for Amazon SQS and Amazon SNS, you can scale up when the ApproximateNumberOfMessagesVisible SQS metric starts to grow too large for one of your SQS queues, and scale down once it returns to a more normal value.

There are multiple ways to accomplish this dynamic scaling. As an example, a script can be scheduled (e.g. via cron) to periodically get the value of the ApproximateNumberOfMessagesVisible SQS metric and then scale the Spot fleet according to defined thresholds. The current size of the Spot fleet can be obtained using the DescribeSpotFleetRequests API, and the scaling can be carried out using the new ModifySpotFleetRequest API. A sample script written for NodeJS is available here, and following is a sample IAM policy for an IAM role that could be used on an EC2 instance for running the script (the action list grants the EC2 and CloudWatch calls the script makes):

  "Version": "2012-10-17",
  "Statement": [
      "Sid": "Stmt1441252157702",
      "Action": [
      "Effect": "Allow",
      "Resource": "*"

By leveraging the IAM role on an EC2 instance, the script uses the AWS API methods described above to scale the Spot fleet dynamically. You can configure variables such as the Spot fleet request, SQS queue name, SQS metric thresholds and instance thresholds according to your application’s needs. In the example configuration below we have set the minimum number of instances threshold (minCount) at 2 to ensure that the instance count for the spot fleet never goes below 2. This is to ensure that a new job is still processed immediately after an extended period with no batch jobs.

// Sample script for dynamically scaling a Spot fleet
// define configuration
var config = {
    spotFleetRequest: 'sfr-c8205d41-254b-4fa9-9843-be06585e5cda', // Spot fleet request Id
    queueName: 'demojobqueue',   // SQS queue name
    maxCount: 100,               // maximum number of instances
    minCount: 2,                 // minimum number of instances
    stepCount: 5,                // increment of instances
    scaleUpThreshold: 20,        // CW metric threshold at which to scale up
    scaleDownThreshold: 10,      // CW metric threshold at which to scale down
    period: 900,                 // period in seconds for CW
    region: 'us-east-1'          // AWS region
};

// dependencies
var AWS = require('aws-sdk');
var ec2 = new AWS.EC2({region: config.region, maxRetries: 5});
var cloudwatch = new AWS.CloudWatch({region: config.region, maxRetries: 5});

console.log('Loading function');

// main function: read the SQS metric and decide whether to scale
function main() {
    var now = new Date();
    var startTime = new Date(now - (config.period * 1000));
    console.log('Timestamp: ' + now);
    var cloudWatchParams = {
        StartTime: startTime,
        EndTime: now,
        MetricName: 'ApproximateNumberOfMessagesVisible',
        Namespace: 'AWS/SQS',
        Period: config.period,
        Statistics: ['Average'],
        Dimensions: [
            {
                Name: 'QueueName',
                Value: config.queueName
            }
        ],
        Unit: 'Count'
    };
    cloudwatch.getMetricStatistics(cloudWatchParams, function(err, data) {
        if (err) console.log(err, err.stack); // an error occurred
        else if (data.Datapoints.length === 0) {
            console.log('no datapoints returned for the metric');
        }
        else {
            // set metric variable
            var metricValue = data.Datapoints[0].Average;
            console.log('CloudWatch metric value is: ' + metricValue);
            var up = 1;
            var down = -1;
            // check if scaling is required
            if (metricValue < config.scaleUpThreshold && metricValue >= config.scaleDownThreshold)
                console.log('metric not breached for scaling action');
            else if (metricValue >= config.scaleUpThreshold)
                scale(up);   // scale up
            else
                scale(down); // scale down
        }
    });
}

// defining the scaling function
function scale(direction) {
    // adjust stepCount depending upon whether we are scaling up or down
    config.stepCount = Math.abs(config.stepCount) * direction;
    console.log('attempting to adjust capacity by: ' + config.stepCount);
    var describeParams = {
        DryRun: false,
        SpotFleetRequestIds: [
            config.spotFleetRequest
        ]
    };
    // get current fleet capacity
    ec2.describeSpotFleetRequests(describeParams, function(err, data) {
        if (err) {
            console.log('Unable to describeSpotFleetRequests: ' + err); // an error occurred
            return;
        }
        // set current capacity variable
        var currentCapacity = data.SpotFleetRequestConfigs[0].SpotFleetRequestConfig.TargetCapacity;
        console.log('current capacity is: ' + currentCapacity);
        // set desired capacity variable
        var desiredCapacity = currentCapacity + config.stepCount;
        console.log('desired capacity is: ' + desiredCapacity);
        // find out if the spot fleet is already being modified
        var fleetModifyState = data.SpotFleetRequestConfigs[0].SpotFleetRequestState;
        console.log('current state of the spot fleet is: ' + fleetModifyState);
        // only proceed if maxCount or minCount won't be breached
        // and the spot fleet isn't currently being modified
        if (fleetModifyState == 'modifying')
            console.log('fleet is currently being modified');
        else if (desiredCapacity < config.minCount)
            console.log('capacity already at min count');
        else if (desiredCapacity > config.maxCount)
            console.log('capacity already at max count');
        else {
            console.log('scaling');
            var modifyParams = {
                SpotFleetRequestId: config.spotFleetRequest,
                TargetCapacity: desiredCapacity
            };
            ec2.modifySpotFleetRequest(modifyParams, function(err, data) {
                if (err)
                    console.log('unable to modify spot fleet due to: ' + err);
                else
                    console.log('successfully modified capacity to: ' + desiredCapacity);
            });
        }
    });
}

// kick off a single check-and-scale pass
main();
You can modify this sample script to meet your application’s requirements.
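For example, to run the check every 15 minutes (matching the 900-second CloudWatch period in the configuration above), a crontab entry along these lines would work; the script and log file paths are placeholders, not part of the sample:

# Run the Spot fleet scaling check every 15 minutes
*/15 * * * * /usr/bin/node /home/ec2-user/scale-spot-fleet.js >> /var/log/scale-spot-fleet.log 2>&1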

You could also leverage AWS Lambda for dynamically scaling your Spot fleet; see the sketch after the diagram below. As depicted in the diagram, an AWS Lambda function can be scheduled (e.g. using AWS Data Pipeline, cron or any other form of scheduling) to get the ApproximateNumberOfMessagesVisible SQS metric for the SQS queue in a batch processing application. This Lambda function will check the current size of a Spot fleet using the DescribeSpotFleetRequests API, and then scale the Spot fleet using the ModifySpotFleetRequest API, after also checking certain constraints such as the state or size of the Spot fleet, similar to the script discussed above.

Dynamic Spot Fleet Scaling Architecture
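The entry point for such a function can be very thin. Here is a minimal sketch, assuming the scaling logic above is bundled in the deployment package as scale.js and adapted to export a main() function that accepts a completion callback (both are assumptions for illustration, not part of the sample package):

// Minimal sketch of a scheduled Lambda entry point.
// Assumes scale.js exports main(callback), built from the script above.
var scale = require('./scale.js');

exports.handler = function(event, context) {
    // Run one metric check / scaling pass, then signal completion.
    scale.main(function(err) {
        if (err) context.fail(err);
        else context.succeed('scaling check complete');
    });
};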

You could also use the sample IAM policy provided above to create an IAM role for the AWS Lambda function. A sample Lambda deployment package for dynamically scaling a Spot fleet based on the value of the ApproximateNumberOfMessagesVisible SQS metric can be found here. However, you could modify it to use any CloudWatch metric based on your use case. The sample script and Lambda function provided are only for reference and should be tested before using in a production environment.

yovko in a nutshell: UX Design for Mobile Apps

This post was syndicated from: yovko in a nutshell and was written by: Yovko Lambrev. Original post: at yovko in a nutshell

Hey, this may be of interest to anyone who works on (or is tempted to start working on) mobile applications, and especially on UX design.

A good friend of mine, whom I met in Barcelona, and, more importantly, an expert with real international experience in the field, has planned a trip to Sofia, where he will lead a two-day hands-on training course titled “UX Design for Mobile Apps”. His name is Javier Cuello, and besides being pleasant company he has worked on projects for companies such as Yahoo and Telefónica, teaches at the Universitat Autònoma de Barcelona, and loves sharing his experience and knowledge.

You can read more about the workshop by following the link; I would also recommend his book Designing Mobile Apps and his excellent recent article Thinking Like An App Designer in Smashing Magazine.

And one more thing, especially for my readers and those following me on social networks: if you enter the code mobile when registering for the workshop, you will get a 30% discount off the price. I, meanwhile, will be envious that you can take part while I am bored in Barcelona ;)

Original link: “UX Design for Mobile Apps” – Some rights reserved

Schneier on Security: Stealing Fingerprints

This post was syndicated from: Schneier on Security and was written by: schneier. Original post: at Schneier on Security

The news from the Office of Personnel Management hack keeps getting worse. In addition to the personal records of over 20 million US government employees, we’ve now learned that the hackers stole fingerprint files for 5.6 million of them.

This is fundamentally different from the data thefts we regularly read about in the news, and should give us pause before we entrust our biometric data to large networked databases.

There are three basic kinds of data that can be stolen. The first, and most common, is authentication credentials. These are passwords and other information that allows someone else access into our accounts and — usually — our money. An example would be the 56 million credit card numbers hackers stole from Home Depot in 2014, or the 21.5 million Social Security numbers hackers stole in the OPM breach. The motivation is typically financial. The hackers want to steal money from our bank accounts, process fraudulent credit card charges in our name, or open new lines of credit or apply for tax refunds.

It’s a huge illegal business, but we know how to deal with it when it happens. We detect these hacks as quickly as possible, and update our account credentials as soon as we detect an attack. (We also need to stop treating Social Security numbers as if they were secret.)

The second kind of data stolen is personal information. Examples would be the medical data stolen and exposed when Sony was hacked in 2014, or the very personal data from the infidelity website Ashley Madison stolen and published this year. In these instances, there is no real way to recover after a breach. Once the data is public, or in the hands of an adversary, it’s impossible to make it private again.

This is the main consequence of the OPM data breach. Whoever stole the data — we suspect it was the Chinese — got copies of the security-clearance paperwork of all those government employees. This documentation includes the answers to some very personal and embarrassing questions, and leaves these employees open to blackmail and other types of coercion.

Fingerprints are another type of data entirely. They’re used to identify people at crime scenes, but increasingly they’re used as an authentication credential. If you have an iPhone, for example, you probably use your fingerprint to unlock your phone. This type of authentication is increasingly common, replacing a password — something you know — with a biometric: something you are. The problem with biometrics is that they can’t be replaced. So while it’s easy to update your password or get a new credit card number, you can’t get a new finger.

And now, for the rest of their lives, 5.6 million US government employees need to remember that someone, somewhere, has their fingerprints. And we really don’t know the future value of this data. If, in twenty years, we routinely use our fingerprints at ATM machines, that fingerprint database will become very profitable to criminals. If fingerprints start being used on our computers to authorize our access to files and data, that database will become very profitable to spies.

Of course, it’s not that simple. Fingerprint readers employ various technologies to prevent being fooled by fake fingers: detecting temperature, pores, a heartbeat, and so on. But this is an arms race between attackers and defenders, and there are many ways to fool fingerprint readers. When Apple introduced its iPhone fingerprint reader, hackers figured out how to fool it within days, and have continued to fool each new generation of phone readers equally quickly.

Not every use of biometrics requires the biometric data to be stored in a central server somewhere. Apple’s system, for example, only stores the data locally: on your phone. That way there’s no central repository to be hacked. And many systems don’t store the biometric data at all, only a mathematical function of the data that can be used for authentication but can’t be used to reconstruct the actual biometric. Unfortunately, OPM stored copies of actual fingerprints.

Ashley Madison has taught us all the dangers of entrusting our intimate secrets to a company’s computers and networks, because once that data is out, there’s no getting it back. All biometric data, whether it be fingerprints, retinal scans, voiceprints, or something else, has that same property. We should be skeptical of any attempts to store this data en masse, whether by governments or by corporations. We need our biometrics for authentication, and we can’t afford to lose them to hackers.

This essay previously appeared on Motherboard.

Raspberry Pi: Astro Pi: Mission Update 6 – Payload Handover

This post was syndicated from: Raspberry Pi and was written by: David Honess. Original post: at Raspberry Pi

Those of you who regularly read our blog will know all about Astro Pi. If not, to briefly recap: two specially augmented Raspberry Pis (called Astro Pis) are being launched to the International Space Station (ISS) as part of British ESA astronaut Tim Peake’s mission, which starts in December. The launch date is December the 15th.

British ESA Astronaut Tim Peake with Astro Pi

British ESA astronaut Tim Peake with Astro Pi – Image credit ESA

The Astro Pi competition

Last year we joined forces with the UK Space Agency, ESA and the UK Space Trade Association to run a competition that gave school-age students in the UK the chance to devise computer science experiments for Tim to run aboard the ISS.

Here is our competition video voiced by Tim Peake himself:

[Video: “Astro Pi” by Raspberry Pi Foundation on Vimeo]

This ran from December 2014 to July 2015 and produced seven winning programs that will be run on the ISS by Tim. You can read about those in a previous blog post here. They range from fun reaction-time games to real science experiments looking at the radiation environment in space. The results will be downloaded back to Earth and made available online for all to see.

During the competition we saw kids with little or no coding experience become so motivated by the possibility of having their code run in space that they learned programming from scratch and grew proficient enough to submit an entry.

Flight safety testing and laser etching

Meanwhile we were working with ESA and a number of the UK space companies to get the Astro Pi flight hardware (below) certified for space.

An Astro Pi unit in its flight case

An Astro Pi unit in its space-grade aluminium flight case

This was a very long process which began in September 2014 and is only now coming to an end. Read all about it in the blog entry here.

The final step in this process was to get some laser engraving done. This is to label every port and every feature that the crew can interact with. Their time is heavily scheduled up there and they use step-by-step scripts to explicitly coordinate everything from getting the Astro Pis out and setting them up, to getting data off the SD cards and packing them away again.


So this labelling (known within ESA as Ops Noms) allows the features of the flight cases to exactly match what is written in those ISS deployment scripts. There can be no doubt about anything this way.


In order to do this we asked our CAD guy, Jonathan Wells, to produce updated drawings of the flight cases showing the labels. We then took those to a company called Cut Tec up in Barnsley to do the work.

They have a machine, rather like a plotter, which laser etches according to the CAD file provided. The process actually involves melting the metal of the cases to leave a permanent, hard wearing, burn mark.

They engraved four of our ground Astro Pi units (used for training and verification purposes) followed by the two precious flight units that went through all the safety testing. Here is a video:

[Private video on Vimeo]

After many months of hard work the only thing left to do was to package up the payload and ship it to ESA! This was done on Friday of last week.

Raspberry Pi on Twitter: “The final flight @astro_pi payload has left the building! @gsholling @astro_timpeake @spacegovuk @esa”

The payload is now with a space contractor company in Italy called ALTEC. They will be cleaning the units, applying special ISS bar codes, and packaging them into Nomex pouch bags for launch. After that the payload will be shipped to the Baikonur Cosmodrome in Kazakhstan to be loaded onto the same launch vehicle that Tim Peake will use to get into space: the Soyuz 45S.

This is not the last you’ll hear of Astro Pi!

We have a range of new Astro Pi educational resources coming up. There will be opportunities to examine the results of the winning competition experiments, and a data analysis activity where you can obtain a CSV file full of time-stamped sensor readings direct from Tim.

Tim has also said that, during the flight, he wants to use some of his free time on Saturday afternoons to do educational outreach. While we can’t confirm anything at this stage we are hopeful that some kind of interactive Astro Pi activities will take place. There could yet be more opportunities to get your code running on the ISS!

If you want to participate in this we recommend that you prepare by obtaining a Sense HAT and maybe even building a mock-up of the Astro Pi flight unit like the students of Cranmere Primary School did to test their competition entry.

Richard Hayler ☀ on Twitter: “We’ve built a Lego version of the @astro_pi flight case to make sweaty-astronaut testing as realistic as possible.”

It’s been about 25 years since we last had a British astronaut (Helen Sharman in 1991) and we all feel that this is a hugely historic and aspirational moment for Great Britain. To be so intimately involved thus far has been an honour and a privilege for us. We’ve made some great friends at the UK Space Agency, ESA, CGI, Airbus Defence & Space and Surrey Satellite Technology, to name a few.

We wish Tim Peake all the best for what remains of his training and for the mission ahead. Thanks for reading, and please watch this short video if you want to find out a bit more about the man himself:

Tim Peake: How to be an Astronaut – Preview – BBC Two

Programme website: An intimate portrait of the man behind the visor – British astronaut Tim Peake. Follow Tim Peake @BBCScienceClub, as he prepares for take off. #BritInSpace

The Astro Pis are staying on the ISS until 2022 when the coin cell batteries in their real time clocks reach end of life. So we sincerely hope that other crew members flying to the ISS will use them in the future.


Columbus ISS Training Module in Germany – Image credit ESA

The post Astro Pi: Mission Update 6 – Payload Handover appeared first on Raspberry Pi.

TorrentFreak: Copyright Trolls Announce UK Anti-Piracy Invasion

This post was syndicated from: TorrentFreak and was written by: Ernesto. Original post: at TorrentFreak

So-called copyright trolls were a common occurrence in the UK half a decade ago, when many Internet subscribers received settlement demands for allegedly downloading pirated files.

After one of the key players went bankrupt, the focus shifted to other countries, but now they’re back. One of the best known trolling outfits has just announced the largest anti-piracy push in the UK for many years.

The renewed efforts began earlier this year when the makers of “The Company You Keep” began demanding cash from many Sky Broadband customers.

This action was spearheaded by Maverick Eye, a German outfit that tracks and monitors BitTorrent swarms and supplies the piracy data that forms the basis of these campaigns. Today, the company says that this was just the beginning.

Framed as one of the largest anti-piracy campaigns in history, Maverick Eye says it teamed up with law firm Hatton & Berkeley and other key players to launch a new wave of settlement demands.

“Since July this year, Hatton & Berkeley and Maverick Eye have been busy working with producers, lawyers, key industry figures, investors, partners, and supporters to develop a program to protect the industry and defend the UK cinema against rampant piracy online,” Maverick Eye says.

“The entertainment industry can expect even more from these experts as they continue the fight against piracy in the UK.”

The companies have yet to announce which copyright holders are involved, but Maverick Eye is already working with the makers of the movies Dallas Buyers Club, The Cobbler and Survivor in other countries.

Most recently, they supported a series of lawsuits against several Popcorn Time users in the U.S., and they also targeted BitTorrent users in Canada and Australia.

Hatton & Berkeley commonly offers administrative services and says it will provide “essential infrastructure” for the UK anti-piracy campaign.

“Hatton and Berkeley stands alongside our colleagues in an international operation that has so far yielded drastic reductions in streaming, torrenting and illegal downloads across Europe,” the company announces.

In the UK it is relatively easy for copyright holders to obtain the personal details of thousands of subscribers at once, which means that tens of thousands of people could be at risk of being targeted.

Source: TorrentFreak, for the latest info on copyright, file-sharing, torrent sites and ANONYMOUS VPN services.

AWS Official Blog: Amazon WorkSpaces Update – BYOL, Chromebooks, Encryption

This post was syndicated from: AWS Official Blog and was written by: Jeff Barr. Original post: at AWS Official Blog

As I have noted in the past, I am a huge fan and devoted user of Amazon WorkSpaces. In fact, every blog post that I have written and illustrated over the last 6 or 7 months has been written on my WorkSpace. The most recent set of AWS podcasts were edited on the same WorkSpace.

Several months ago the hard drive in my laptop crashed and was replaced. In the past, I would have spent several hours installing and customizing my apps and my environment. All of my work in progress is stored in Amazon WorkDocs, so that aspect of the recovery would have been painless. At this point, the only truly personal items on my laptop are the 12-character registration code for my WorkSpace and my hard-won set of stickers. My laptop has become little more than a generic display and I/O device (with some awesome stickers).

I have three pieces of good news for Amazon WorkSpaces users:

  1. You can now bring your Windows 7 Desktop license to Amazon WorkSpaces.
  2. There’s a new Amazon WorkSpaces Client App for Chromebook.
  3. The storage volumes used by WorkSpaces (both root and user) can now be encrypted.

Bring Your Windows 7 Desktop License to Amazon WorkSpaces (BYOL)
You can now bring your existing Windows 7 Desktop license to Amazon WorkSpaces and run the Windows 7 Desktop OS on hardware that is physically dedicated to you. This new option entitles you to a discount of $4.00 per month per WorkSpace (a savings of up to 16%) and also allows you to use the same Windows 7 Desktop golden image both on-premises and in the AWS cloud. The newly launched images can be activated using new or existing Microsoft activation servers running in your VPC, or ones that can be reached from your VPC.

To take advantage of this option, at a minimum your organization must have an active Enterprise Agreement (EA) with Microsoft and you must commit to running at least 200 WorkSpaces in a given AWS region each month. To learn more, take a look at the WorkSpaces FAQ.

In order to ensure that you have adequate dedicated capacity allocated to your account and to get started with BYOL, please reach out to your AWS account manager or sales representative or create a Technical Support case with Amazon WorkSpaces.

New Amazon WorkSpaces Client App for Chromebook
Today we are making Amazon WorkSpaces even more flexible and accessible by adding support for the Google Chromebook. These low-cost “thin client” laptops are simple and easy to manage. They run Chrome OS and were designed specifically for internet users. This makes them a great match for Amazon WorkSpaces because you can access your cloud desktops, your productivity apps, and your corporate network from devices that are simple to manage, secure, and available at a low cost.

The newest Amazon WorkSpaces client app runs on Chromebooks (version 45 of Chrome OS and newer) with ARM and Intel chipsets, and supports both touch and non-touch devices.  You can download the WorkSpaces client for Chromebook now and install it on your Chromebook today.

The Amazon WorkSpaces client app is also available for Mac OS X, iPad, Windows, Android Tablet, and Fire Tablet environments.

Encrypted Storage Volumes Using KMS
Amazon WorkSpaces enables you to deliver a high quality desktop experience to your end-users and can also help you to address regulatory requirements or to conform to organizational security policies.

Today we are announcing an additional security option: encryption for WorkSpaces data in motion and at rest (this includes the disk volume and the snapshots associated with it). The WorkSpaces administrator now has the option to encrypt the C: and D: drives as part of the launch and configuration process for each newly created WorkSpace.  This encryption is performed using a customer master key (CMK) stored in AWS Key Management Service (KMS).

Encryption is supported for all types of Amazon WorkSpace bundles including custom bundles created within your organization, but must be set up when the WorkSpace is created (encrypting an existing WorkSpace is not supported). Each customer master key from KMS can be used to encrypt up to 30 WorkSpaces.

Launching a WorkSpace with an encrypted root volume can take additional time. Once launched, you can expect to see a minimal impact on latency or IOPS. Here is how you (or your WorkSpaces administrator) choose the volumes to be encrypted along with the KMS key at launch time:

The encryption status of each WorkSpace is also visible from within the WorkSpaces Console:
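If you would rather script the launch than click through the console, the same launch-time choice can be expressed through the WorkSpaces API. Here is a minimal sketch using the AWS SDK for JavaScript; the directory, bundle, user, and key identifiers are placeholders, and the encryption flags reflect the CreateWorkspaces parameters as I understand them:

// Minimal sketch: launch a WorkSpace with encrypted root (C:) and user (D:)
// volumes. All identifiers below are placeholders, not real resources.
var AWS = require('aws-sdk');
var workspaces = new AWS.WorkSpaces({region: 'us-east-1'});

var params = {
    Workspaces: [{
        DirectoryId: 'd-0000000000',        // placeholder directory
        UserName: 'jdoe',                   // placeholder user
        BundleId: 'wsb-00000000',           // placeholder bundle
        RootVolumeEncryptionEnabled: true,  // encrypt the C: drive
        UserVolumeEncryptionEnabled: true,  // encrypt the D: drive
        VolumeEncryptionKey: 'arn:aws:kms:us-east-1:111122223333:key/placeholder'
    }]
};

workspaces.createWorkspaces(params, function(err, data) {
    if (err) console.log(err, err.stack);
    else console.log('pending requests:', data.PendingRequests);
});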

There’s no charge for the encryption feature, but you will pay the standard KMS charges for any keys that you create.


PS – Before you ask, I am planning to ditch my laptop in favor of a Chromebook immediately after AWS re:Invent!