Posts tagged ‘ip address’

Linux How-Tos and Linux Tutorials: How to Manage Linux Virtualization Using ConVirt Open Source Deployed on Amazon Web Services

This post was syndicated from: Linux How-Tos and Linux Tutorials and was written by: Jaydeep Marfatia. Original post: at Linux How-Tos and Linux Tutorials

With the latest version of ConVirt Open Source Version 2.5, management of KVM and Xen-based virtual machines is now possible from an Amazon Web Services (AWS) account. ConVirt is available as an Amazon Machine Image (AMI) on an existing Amazon account. As a result, IT managers can add the ease and flexibility of cloud-based management to their virtualization toolset.

ConVirt deployed in the Amazon cloud connects to all of the instances of KVM and Xen in the data center via the “ConVirt Connector,” a secure network interface that is installed in the data center. Now, the IT manager has a sophisticated management tool for his entire virtual infrastructure that is easily accessed and simple to deploy on the Amazon cloud.

By deploying the ConVirt management layer into the cloud, IT managers can access and manage multiple virtual resources located in on-premises data centers – even if those are scattered across different geographic locations. In addition, by deploying ConVirt into the cloud, there is no need to allocate any additional computing or human resources toward setting up and configuring the management functions locally. Rather, the IT admin can spin up an instance of ConVirt in their Amazon account and start managing KVM and Xen servers immediately, including monitoring, configuration management, template-based provisioning, and live migration.

ConVirt Open Source running in Amazon also provides the ability for third-party management of virtual resources, including by managed service providers and IT outsourcers.

ConVirt Open Source is free to use and is available immediately here. Now, let’s walk through the setup process.

Prerequisite: Amazon Account (NOTE: You will be charged by Amazon for this usage.)

There are two basic steps:

1. Starting the ConVirt Appliance in Amazon EC2

2. Providing access to Infrastructure via the ConVirt-Connector

1. Starting ConVirt Appliance in Amazon EC2

Locate and Launch AMI

  • Log in to your Amazon account.

  • Click EC2 on the AWS Console, or select EC2 from the Services drop-down if you are in another console.

  • Select the N. California (US-West) region from the drop-down at the top right.

  • Click AMIs under Images from the left navigator.

  • Select Public AMI from the drop-down and search for “ConVirt-OSS” on Amazon EC2 in the N. California (US-West) region. Pick the latest release and build.

  • If required, copy the AMI to the region of your choice.

  • Launch the AMI, making the following choices in the wizard.

    • Select the ‘t1.micro’ instance type.

    • Select appropriate details on the ‘Configure Instance Details’ page; the defaults are fine.

    • Skip the ‘Add Storage’ page.

    • On the ‘Tag Instance’ page, enter an appropriate value for the Name tag.

    • On the ‘Security group’ page:

      • Change the name and description of the newly created security group shown on the page and make sure it has the following three rules (an equivalent AWS CLI sketch appears after this list):

      • SSH (TCP port 22) from Anywhere as source

      • Custom UDP Rule, UDP port 1194 from Anywhere as source (for secure VPN connectivity to the enterprise)

      • Custom TCP Rule, TCP port 8082 from Anywhere as source (for the ConVirt management web application)

  • When you press the ‘Review and Launch’ button, you will be prompted to generate a new key pair or use an existing one. You will need this key to access the ConVirt-Appliance, so don’t forget to download and save it (e.g., ~/ec2_creds/keys/my-convirt-appliance-key.pem). We will refer to it as the ConVirt-Appliance Key.

  • Go to the Instances pane and wait for the instance to initialize completely.

  • Now go to the Elastic IPs option under Network and Security in the left navigator.

  • Allocate a new Elastic IP address or select one from the existing list. Use the Associate button and select the ConVirt-Appliance instance you just started.

  • Note down the Elastic IP; we will refer to it as the ConVirt-Appliance IP.
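If you prefer to script these steps, the security group rules and the Elastic IP can also be set up from the command line. The following is only a rough sketch, assuming the AWS CLI is installed and configured for the us-west-1 region; the security group ID and instance ID are placeholders you would replace with your own:

aws ec2 authorize-security-group-ingress --group-id sg-xxxxxxxx --protocol tcp --port 22 --cidr 0.0.0.0/0
aws ec2 authorize-security-group-ingress --group-id sg-xxxxxxxx --protocol udp --port 1194 --cidr 0.0.0.0/0
aws ec2 authorize-security-group-ingress --group-id sg-xxxxxxxx --protocol tcp --port 8082 --cidr 0.0.0.0/0
aws ec2 allocate-address
aws ec2 associate-address --instance-id i-xxxxxxxx --public-ip <ConVirt-Appliance IP>

(For an instance launched into a VPC, associate-address expects the --allocation-id returned by allocate-address rather than --public-ip.)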

Validate the ConVirt-Appliance instance

  • Use ssh (or PuTTY on Windows) to log in to the ConVirt-Appliance:

Syntax: ssh -i <ConVirt-Appliance Key> ubuntu@<ConVirt-Appliance IP>
e.g., ssh -i ~/ec2_creds/keys/my-convirt-appliance-key.pem ubuntu@54.241.22.142
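If ssh refuses the key because its file permissions are too open, tighten them first. The path below matches the example key location used above:

chmod 400 ~/ec2_creds/keys/my-convirt-appliance-key.pem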

It will prompt you to accept the host key fingerprint. Say yes.

Execute the following commands:

service openvpn status (Expected result: The response should validate that openvpn is running.)

ps -ef | grep paster (Expected result: You should see a process running with name paster.)

netstat -an | grep 8082 | grep LISTEN (Expected Result: You should see one entry containing 8082.)

Logout from the ConVirt-Appliance.

If you see the expected results in all cases, your appliance is set and ready to go to the next step.
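If you would rather run all three checks in a single pass from your workstation, something like the following should work (just a sketch; the bracketed grep pattern keeps grep from matching its own process):

ssh -i ~/ec2_creds/keys/my-convirt-appliance-key.pem ubuntu@<ConVirt-Appliance IP> 'service openvpn status; ps -ef | grep [p]aster; netstat -an | grep 8082 | grep LISTEN'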

Login to ConVirt Management Server

Use a browser and point it to https://<ConVirt-Appliance IP>:8082/login. This should bring up the ConVirt application in the browser. You will be prompted with a security warning, as the default SSL certificate is self-signed. Follow your browser-specific prompts to continue past the warning.
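If you want a quick command-line check before opening a browser, curl can confirm that the management application is answering on port 8082; the -k flag tells curl to accept the self-signed certificate (this assumes curl is installed on your workstation):

curl -k -I https://<ConVirt-Appliance IP>:8082/login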

Login using ‘admin’ user and ‘admin’ password.

Change the default password

Use the ‘Change Password’ option from the top right to change the default password. (NOTE: Treat this step as mandatory for security reasons.)

2. Providing access to Infrastructure via the ConVirt-Connector

For ConVirt to manage your virtualization infrastructure from ConVirt Appliance in EC2, you need to have connectivity between the two. If you already have a VPC (Virtual Private Cloud) with secure connectivity to your enterprise infrastructure and administrators, you can skip this section. For those who do not have this setup, Convirture provides a Connector to establish secure connectivity to the ConVirt-Appliance. For those instructions, go here.

Conclusion

From this point, you need to continue with preparing managed servers as you would with an on-premises installation of ConVirt. For those steps, go here.

# # #

Jaydeep Marfatia is Executive Vice President of Engineering and Founder of Convirture. Jaydeep is responsible for all aspects of product development at Convirture. He brings a wealth of industry experience to his current role, including over 10 years in systems management. Prior to co-founding Convirture, Jaydeep held a senior engineering management position in the ASLM division at Oracle, and was one of the principal architects of the Oracle Enterprise Manager 10g product suite. He holds a degree in Computer Science from the University of Mumbai.

Errata Security: Using masscan to scan for heartbleed vulnerability

This post was syndicated from: Errata Security and was written by: Robert Graham. Original post: at Errata Security

I’ve updated my port scanner, “masscan”, to specifically look for Neel Mehta’s “HeartBleed” vulnerability. Masscan is good for scanning very large networks (like the entire Internet).

Remember that the trick with masscan is that it has its own TCP/IP stack. This means that on Linux and Mac OS X (but not Windows), the operating system will send back RST packets in response to the SYN-ACKs the targets return. Therefore, on Linux, you have to either configure firewall rules to block a range of ports that masscan can use without generating resets, or better yet, just set masscan to “spoof” an otherwise unused IP address on the local network.
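For the firewall-rule approach, one common pattern is to pin masscan to a fixed source port and tell the kernel to drop inbound traffic to that port, so the OS never sees the SYN-ACKs and never sends resets. A sketch (check your masscan build’s --help output for the exact option names):

iptables -A INPUT -p tcp --dport 61000 -j DROP
masscan 10.0.0.0/8 -p443 --source-port 61000 --rate 100000 --heartbleed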

Here is how you might use it:

masscan 10.0.0.0/8 -p443 -S 10.1.2.53 --rate 100000 --heartbleed

This translates to:

  • 10.0.0.0/8 = the network you want to scan, which is all 10.x.x.x
  • -p443 = the port(s) you want to scan, in this case the one assigned to SSL
  • -S 10.1.2.53 = an otherwise unused local IP address to scan from
  • --rate 100000 = 100,000 packets/second, which scans the entire Class A range in a few minutes
  • --heartbleed = the new option that reconfigures masscan to look for this vulnerability

The output on the command-line will look like the following:

Discovered open port 443/tcp on 10.20.30.143
Banner on port 443/tcp on 10.20.30.143: [ssl] cipher:0xc014
Banner on port 443/tcp on 10.20.30.143: [vuln] SSL[heartbeat] SSL[HEARTBLEED]

There are three pieces of output for each IP address. The first is that the open port exists (the scanner received a SYN-ACK). The second is that an SSL service exists: the scanner was able to get back a reasonable SSL result (reporting which cipher suite it’s using). The third line is the “vulnerability” information the scanner found. In this case, it’s found two separate vulnerabilities. The first is that SSL “heartbeats” are enabled, which really isn’t a vulnerability, but something some people might want to remove from their network. The second is the important part, notifying you that the “HEARTBLEED” vulnerability exists (in all caps, ’cause it’s important).
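If you are scanning a large range, you will probably want a plain list of vulnerable hosts rather than the raw output. A rough way to pull that out, assuming you saved the command-line output to a file and that the field positions match the sample lines above:

masscan 10.0.0.0/8 -p443 -S 10.1.2.53 --rate 100000 --heartbleed > scan.txt
grep 'SSL\[HEARTBLEED\]' scan.txt | awk '{print $6}' | tr -d ':' | sort -u > vulnerable.txt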

Some researchers would like to capture the bled (disclosed) information. To do that, add the option “--capture heartbleed” to the command-line. This will add a fourth line of output per IP address:

Banner on port 443/tcp on 10.20.30.143: [heartbleed] AwJTQ1uQnZtyC7wMvCuSqEiXz705BMwWCoUDkJ93BDPU…

This line will be BASE64 encoded, and be many kilobytes in size (I think masscan truncates it to the first 4k).
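Since the capture is a base64 blob of raw process memory, a quick way to eyeball one host’s capture for leaked plaintext (cookies, session IDs, credentials) is to decode it and filter for printable strings. This assumes the blob sits in the eighth whitespace-separated field, as in the sample line above:

grep '\[heartbleed\]' scan.txt | head -1 | awk '{print $8}' | base64 -d | strings | less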

TorrentFreak: Document Reveals When Copyright Trolls Drop Piracy Cases

This post was syndicated from: TorrentFreak and was written by: Andy. Original post: at TorrentFreak

It’s well known that while copyright trolls may suggest they are going to pursue all of their cases to the bitter end, they simply do not. Plenty of cases are dropped or otherwise terminated, although the precise reasons why this happens usually remain a closely guarded secret.

Today, however, we have a much clearer idea of what happens behind the scenes at Malibu Media, one of the main companies in the United States currently chasing down BitTorrent users for cash settlements.

The company was required by Illinois Judge Milton Shadur to submit a summary of its activities in Illinois and, as spotted by troll watcher SJD over at Fight Copyright Trolls, there was an agreement that it could remain under seal.

Somehow, however, that document has now become available on Pacer and it reveals some rather interesting details about Malibu’s operations.

Overall, Malibu Media reports that it filed cases in Illinois against 886 defendants. According to the company, just 174 have paid up so far, with 150 of those hiring a lawyer to do so.

While 100 cases are still open (including 42 still at discovery stage and 30 in negotiations), for various reasons a total of 612 defendants paid nothing at all and the cases against them were dismissed. Malibu reveal the reasons for this in their filing, and they’re quite eye-opening to say the least.

Hardship

“Hardship is when a defendant may be liable for the conduct, but has extenuating circumstances where Plaintiff does not wish to proceed against him or her,” the Malibu document explains.

“Examples are when a defendant has little or no assets, defendant has serious illness or has recently deceased, defendant is currently active duty US military, defendant is a charitable organization or school, etc.”

Out of 886 defendants, Malibu reports that cases against 49 were dropped on hardship grounds.

Insufficient Evidence

It has long been said that an IP address alone isn’t enough to identify an infringer and Malibu’s own submission to the court underlines this in grand fashion.

“Insufficient evidence is defined as when Plaintiff’s evidence does not raise a strong presumption that the defendant is the infringer or some other ambiguity causes Malibu to question the Defendant’s innocence,” the company writes.

So, in an attempt to boost the value of the IP address evidence, Malibu says it investigates further to determine whether the account holder is in fact the infringer. The company says it looks in three areas.

1. Length of the infringement, i.e. how long it took place, when it began, when it ended, whether it took place during the day or night, and any other patterns.

2. Location of the residence where the infringement occurred, i.e. whether it is in a remote location or with other dwellings within wireless access range.

3. Profiling suspected pirates using social media (Facebook, Twitter)

The third element is of particular interest. Malibu says that since July 2012 it has been monitoring not just its own content online, but also piracy on music, movies, ebooks and software. It compares the IP addresses it spots downloading other pirate content with the IP addresses known to be infringing copyright on its own titles.

The data collected is then used to profile the person behind the IP address and this is compared with information gleaned from sites including Facebook and Twitter.

“Oftentimes, a subscriber will publicly admit on social media to enjoying sports teams, music groups, or favorite TV shows. Malibu will compare their likes and interests to their [downloads of other content] and determine whether the interests match,” the company explains.

So in what circumstances will Malibu dismiss a case on evidence grounds?

In the company’s own words:

-Multiple roommates within one residence with similar profiles and interests share a single Internet connection

-The defendant has left the country and cannot be located

-The results of additional surveillance do not specifically match profile interests or occupation of Defendant or other authorized users of the Internet connection

-The subscriber is a small business with public Wi-Fi access, etc

From a total of 886 defendants, cases against 259 were dropped due to insufficient evidence.

The Polygraph Defense

In the absence of any other supporting evidence, how can a subscriber prove a negative, i.e. that he or she did not carry out any unlawful file-sharing? Quite bizarrely, Malibu says that it will accept the results of a lie detector test.

“[M]alibu will dismiss its claims against any Defendant who agrees to and passes a polygraph administered by a licensed examiner of the Defendant’s choosing,” the company told the court.

So has anyone taken the bait? Apparently so.

“Out of the entirety of polygraphs administered within the United States by Malibu, no Defendant has passed and all such examinations have subsequently led to the Defendant settling the case,” Malibu writes.

No discovery

In order for Malibu to pressure account holders into settling, it first needs to find out who they are from their ISPs. Malibu’s submission reveals that this is not always possible due to:

- ISPs not retaining logging data for a long enough period
- Subpoenas being quashed due to cases being severed
- Information held on file at ISPs not matching the identities of an address’s occupants
- ISPs being unable to match the IP address with a subscriber at the time and date stipulated by Malibu

From a total of 886 defendants, cases against 304 were dropped due to failed discovery.

Cases dismissed due to settlement / actual judgments obtained

In total, 174 cases were settled by defendants without need for a trial but the amounts paid are not included in the document. However, the submission does reveal that two cases did go to court resulting in statutory damages awards of $26,250 and $15,000 respectively.

Conclusion

Malibu’s submission points to a few interesting conclusions, not least that the vast majority of their cases get dismissed for one reason or another and a significant proportion simply do not pay up.

The document also suggests that Malibu are working under the assumption that an IP address alone isn’t enough to secure a settlement and that additional social media-sourced evidence is required to back it up.

This information, plus the reasons listed by Malibu for not pursuing cases, should ensure that even fewer people are prompted to pay up in future.

Source: TorrentFreak, for the latest info on copyright, file-sharing and anonymous VPN services.

Linux How-Tos and Linux Tutorials: Three Alternatives to Ubuntu One Cloud Service

This post was syndicated from: Linux How-Tos and Linux Tutorials and was written by: Jack Wallen. Original post: at Linux How-Tos and Linux Tutorials

Many of us had hoped it was an April Fool’s prank. But Ubuntu One will, in fact, no longer be available as of June 1, 2014 and all data will be wiped July 31, 2014. This will leave a great number of Ubuntu users without a cloud service. Fear not, intrepid users, there are plenty of cloud services and tools available – each with native Linux clients – ready and willing to take your Ubuntu One data and keep it in the cloud.

But out of the many services, which might be the best to suit your needs? Given what happened with Ubuntu One, many are growing leery of using services hosted by smaller companies (who could easily fold in the coming years). With larger companies making it impossible for the small guys to compete, the best bet for long-term cloud storage is to go with the proven companies with a track record of keeping the lights on and the data safe. With that in mind, which of the large companies work best on the Linux platform? Let’s take a look.

Google Drive (with Insync)

Because I have become more and more reliant on Google Drive, the obvious solution for me was to shift all of my Ubuntu One data to Google’s cloud service. After seeing the prices (100GB for $1.99/month), it was a no-brainer. There was one catch – syncing. For reasons unknown to me, a native Linux client has yet to appear from behind the magic curtain that is Google. No problem…a company called Insync has us covered. With a robust and reliable native client for Linux, Insync easily syncs your Google Drive data to a location of your choice on your Linux box.

The Insync client is one of the best clients available to the Linux desktop. It does have a price associated with it, but for anyone looking to sync their Google Drive account with Linux, it is very much worth it. Pricing ranges from a one-time $15.00 for a single consumer account license to a business license at $15.00 per account, per year.

Installation is simple. I will walk you through the process on the Ubuntu 13.10 platform (Insync is available for Ubuntu, Debian, and Fedora). The one requirement for Insync is Python.

  1. Download the .deb file into your ~/Downloads directory

  2. Open a terminal window

  3. Issue the command cd Downloads

  4. Issue the command sudo dpkg -i insync_XXX_xxx.deb (Where XXX is the release number and xxx is the architecture)

  5. Type your sudo password and hit Enter

  6. Hit ‘y’ when prompted

  7. Allow the installation to complete.
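If dpkg stops on unmet dependencies (Python being the one requirement called out above), letting apt pull them in and finish the install is usually enough. A sketch, using the same placeholder filename as step 4:

sudo dpkg -i insync_XXX_xxx.deb || sudo apt-get -f install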

Once the command line installation is complete, it’s time to walk through the GUI wizard. This will open up automatically. In the resulting window, click Start Insync.

You’ll be required to log into your Google account to complete it and then give Insync permission to work with your Google Drive account. Click Accept, when prompted, and then click Start syncing. The synced folder will be named after the address associated with your Gmail account and will reside in your home directory. If you want to relocate this folder, click Advanced setup and you can place that folder anywhere you like. In the Advanced setup, you can also give your synced folder a different name and make sure all Google Docs are not automatically converted. Before the wizard is complete, you will be asked if you want to integrate Insync with Nautilus. I highly recommend you do this, as it will make it incredibly easy to add folders to Insync with a simple right-click within Nautilus. To integrate, just click Yes when prompted (Figure 1).


You will then be prompted for your sudo password, so Insync can download and install the necessary components for Nautilus integration. Finally, click Done in the installer window and, when prompted, click Restart Nautilus.

When Insync is running you’ll see an icon in the notification area. From there you can interact with the application to check status, pause, add accounts, and more.

Dropbox

This has been the de facto standard cloud storage for a long time – with good reason. It works with nearly everything. So you can be sure to have your data synced to all of your devices – regardless of platform. The downside of Dropbox is that it’s costlier than Google Drive and you’re boxed into a single folder. A free account will get you 2 GB of space and for $9.99/month you get 100 GB. Not much else needs to be said about Dropbox (as it has been covered extensively), but the installation of the client is fairly simple. You will first need to sign up for an account, or have a pre-existing account to log into. Once you have that information, do the following (we’ll stick with Ubuntu):

  1. Download the appropriate installer file for your platform into your ~/Downloads folder

  2. Open up a terminal window

  3. Issue the command cd Downloads

  4. Issue the command sudo dpkg -i dropbox_XXX_xxx.deb (Where XXX is the release number and xxx is the architecture)

  5. Type your sudo password and hit Enter

  6. Allow the installation to complete

  7. When prompted click the Start Dropbox button (Figure 2)


Now it’s time to walk through the official Dropbox install Wizard. This is quite simple – it will first ask you for your account credentials (or, if you don’t have an account, allow you to set one up). When prompted, log into your Dropbox account (from within the install wizard) and then tell Dropbox where to place the syncing folder. By default, the folder will be ~/Dropbox. You cannot change the name of the folder and you can only sync that one folder. You do get to choose which Dropbox sub-folders to sync (this can be helpful if you have a large Dropbox folder and a smaller SSD drive).

Once the installation is complete, you’ll be prompted to restart Nautilus. Dropbox’s Nautilus integration is limited: it lets you move a folder into your Dropbox folder, but you cannot sync folders outside of Dropbox.
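Once the daemon is running, you can also keep an eye on it from a terminal. The .deb typically installs a small dropbox command-line wrapper alongside the daemon; assuming it is present on your system, checks like these work:

dropbox status
dropbox exclude list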

If you’re not using Google Drive, Dropbox is one of the better solutions available for the Linux desktop – especially if you use other platforms and want to sync data across every device.

ownCloud

ownCloud is a bit different from the competition in that it requires you to connect to your own ownCloud server. On the plus side, ownCloud is open source, so anyone can set up their own cloud server. The downside is that you will need an IP address accessible from the outside world in order to make use of this. If you have that available, ownCloud is an incredibly powerful solution that you control. Setting up an ownCloud server is beyond the scope of this article, but installing the client is simple (again, sticking with the Ubuntu platform):

  1. Open up a terminal window

  2. Issue the command sudo sh -c "echo 'deb http://download.opensuse.org/repositories/isv:/ownCloud:/desktop/xUbuntu_13.10/ /' >> /etc/apt/sources.list.d/owncloud-client.list"

  3. Issue the command sudo apt-get update

  4. Issue the command sudo apt-get install owncloud-client

  5. Enter your password when prompted and hit Enter.
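Note that step 2 adds the repository but not its signing key, so apt-get update may warn about unauthenticated packages. On the openSUSE Build Service the key is published alongside the repository, so importing it should look roughly like this (the URL follows the same path as the repo line in step 2):

wget -qO - http://download.opensuse.org/repositories/isv:/ownCloud:/desktop/xUbuntu_13.10/Release.key | sudo apt-key add -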

After the install completes, issue the command owncloud and then enter the server address for your ownCloud 5 or 6 server. You will then be prompted for your username and password. Upon successful authentication, you can configure where you want your ownCloud folder to exist. Once you set that folder, click Connect and the ownCloud client will prompt you to either open the ownCloud folder or the ownCloud web interface. Click Finish and you’re done.

The ownCloud client also has a notification icon that allows you to Sign in, quit, or go to the ownCloud settings. The ownCloud settings window (Figure 3) allows you to: 

  • Add a folder

  • Check storage usage

  • Set up ignored files

  • Modify your accounts

  • Check activity

  • Set ownCloud to launch at start

  • Set up a proxy

  • Limit bandwidth


If you want to install your own server, you can download and install it from the ownCloud installer page. NOTE: The web installer is the easiest method for new users. If you don’t want to set up your own server, there are plenty of ownCloud service providers available. Check out this page for a listing of supported service providers. Some of the plans (such as on OwnDrive) are free (1GB of space).

Ubuntu users need not fear the loss of Ubuntu One. With so many cloud services available – most of which offer native Linux clients – there is no shortage of choices ready to host your data. Give one of these options a try and see if they don’t meet your needs.

Which cloud service will take over the hosting of your Ubuntu One data? Or do you plan on setting up your own cloud server?

SANS Internet Storm Center, InfoCON: green: Dealing with Disaster – A Short Malware Incident Response, (Fri, Apr 4th)

This post was syndicated from: SANS Internet Storm Center, InfoCON: green and was written by: SANS Internet Storm Center, InfoCON: green. Original post: at SANS Internet Storm Center, InfoCON: green

I had a client call me recently with a full on service outage – his servers weren't reachable, his VOIP phones were giving him more static than voice, and his Exchange server wasn't sending or receiving mail – pretty much everything was offline.

I VPN'd in (I was not onsite) and started with the firewall, because things were bad enough that it was all I could initially get to from a VPN session. No surprise, I saw thousands of events per second flying by in the firewall log.

So right away this looks like malware, broadcasting on UDP ports 137 and 138 (NetBIOS name services and datagrams). You'll usually have some level of these in almost any network, but the volumes in this case were high enough to DoS just about everything; I was lucky to keep my SSH sessions (see below) going long enough to get things under control. And yes, that was me behind Monday's post on this, if it sounds familiar.

To get the network to some semblance of usability, I ssh'd to each switch in turn and put broadcast limits on each and every switch port:

On Cisco:
interface gigabitethernet0/x
  storm-control broadcast level 20  (for 20 percent)
or
  storm-control broadcast level pps 100  (for packets per second)

On HP Procurve:
interface x
   broadcast-limit 5  (the value is a percentage of the total possible traffic on the port)

On HP Comware:
int gig x/0/y
 broadcast-suppression pps 200
or
storm-constrain broadcast pps 200 200
(you can do these in percent as well if you want)

Where I can, I try to do this in packets per second, so that the discussion with the client can be "of course we shut that port down – there's no production traffic in your environment that should generate more than 100 broadcasts per second."

With that done, I now could get to the syslog server.  What we needed was a quick-and-dirty list of the infected hosts, in order of how much grief they were causing.

First, let's filter out the records of interest – everything that has a broadcast address and a matching NetBIOS port in it. The syslog server is a Windows host, so we'll use Windows commands (plus some GNU commands):

type syslogcatchall.txt | find "172.xx.yy.255/13"

But we don't really want the whole syslog record, and this short filter still leaves us with thousands of events to go through.

Let's narrow this down. First, let's use "cut" to pull out just the source IP from these events of interest:

cut -d " " -f 7

Unfortunately, that field also includes the source port, so let's remove that by using "/" as the field delimiter and taking only the source IP address (field one):

cut -d "/" -f 1

Use sort and uniq -c (the -c gives you a count for each source IP), then use sort /r to do a reverse sort based on the record count.

Putting it all together, we get:

type syslogcatchall.txt | find "172.xx.yy.255/13" | cut -d " " -f 7 | cut -d "/" -f 1 | sort | uniq -c | sort /r > infected.txt

This gave us a 15-line file, sorted so that the worst offenders were at the top of the list, with a record count for each. My client took these 15 stations offline and started the hands-on assessment and "nuke from orbit" routine on them, since their AV package didn't catch the offending malware.

What else did we learn during this incident?

  • Workstations should never be on server VLANs.
  • Each and every switch port needs basic security configured on it (broadcast limits today)

and, in related though less direct lessons…

  • Their guest wireless network was being used as a base for torrent downloads
  • Their guest wireless network also had an infected workstation (we popped a shun on the firewall for that one).
  • Their syslog server wasn't being patched by WSUS – that poor server hadn't seen a patch since December's patch Tuesday


What did we miss?
By the time I got onsite, the infected machines had all been re-imaged, so we didn't get a chance to assess the actual malware.  We don't know what it was doing besides this broadcast activity, and we don't have any good way of working out for sure how it got into the environment.  Though since this distilled down to one infected laptop, my guess would be that it's malware that got picked up at home, but that's just a guess at the moment.

Just a side note, but an important one – cut, uniq, sed and grep are all on your syslog server if it's a *nix host, but if you run syslog on Windows, these commands are still pretty much a must-have.  As you can see, with these commands we were able to distill a couple of million records down to 15 usable, actionable lines of text within a few minutes – a REALLY valuable toolset and skill to have during an incident.  Microsoft provides these tools in their "SUA" – Subsystem for UNIX-based Applications, which has been available for many years and for almost every version of Windows.
Or if you need to drop just a few of these commands on to a server during an incident, especially if you don't own that server, you can get what you need from gnutools (gnuwin32.sourceforge.net) – I keep this toolset on my laptop for just this sort of situation.
Once you get the hang of using these tools, you'll find that your fingers will type "grep" instead of find or findstr in no time!
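For instance, the distillation from earlier could be written on a *nix syslog host with the GNU versions of the same tools; a rough equivalent (field positions will depend on your syslog format) would be:

grep "172.xx.yy.255/13" syslogcatchall.txt | cut -d " " -f 7 | cut -d "/" -f 1 | sort | uniq -c | sort -rn > infected.txt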

===============
Rob VandenBrink
Metafore

(c) SANS Internet Storm Center. http://isc.sans.edu Creative Commons Attribution-Noncommercial 3.0 United States License.

Schneier on Security: Mass Surveillance by Eavesdropping on Web Cookies

This post was syndicated from: Schneier on Security and was written by: schneier. Original post: at Schneier on Security

Interesting research:

Abstract: We investigate the ability of a passive network observer to leverage third-party HTTP tracking cookies for mass surveillance. If two web pages embed the same tracker which emits a unique pseudonymous identifier, then the adversary can link visits to those pages from the same user (browser instance) even if the user’s IP address varies. Using simulated browsing profiles, we cluster network traffic by transitively linking shared unique cookies and estimate that for typical users over 90% of web sites with embedded trackers are located in a single connected component. Furthermore, almost half of the most popular web pages will leak a logged-in user’s real-world identity to an eavesdropper in unencrypted traffic. Together, these provide a novel method to link an identified individual to a large fraction of her entire web history. We discuss the privacy consequences of this attack and suggest mitigation strategies.

Blog post.

Errata Security: We may have witnessed a NSA "Shotgiant" TAO-like action

This post was syndicated from: Errata Security and was written by: Robert Graham. Original post: at Errata Security

Last Friday, the New York Times reported that the NSA has hacked/infiltrated Huawei, a big Chinese network hardware firm. We may have witnessed something related to this.

In 2012, during an incident, we watched in real time as somebody logged into an account reserved for Huawei tech support, from the Huawei IP address space in mainland China. We watched as they executed a very narrow SQL query targeting specific data. That person then encrypted the results, emailed them to a Hotmail account, removed the log files, and logged out. Had we not been connected live to the system and watched it in real-time, there would have been no trace of the act.

The compelling attribute of the information they grabbed is that it was useful only to American intelligence. The narrowness of the SQL query clearly identified why they wanted the data. It wasn’t support information, something that a support engineer would want. It wasn’t indiscriminate information, something a hacker might grab. It wasn’t something that would interest other intelligence services — except to pass it on to the Americans.

I point this out to demonstrate the incompleteness of the New York Times story. The story takes the leaked document with the phrase “Leverage Huawei presence to gain access to networks of interest” and assumes it’s referring to the existing narratives of “hardware backdoors” or “finding 0days”. In fact, these documents can mean so much more, such as “exploiting support contracts”.

A backdoor or 0day for a Huawei router would be of limited use to the NSA, because the control ports are behind firewalls. Hacking behind firewalls would likely give full access to the target network anyway, making any backdoors/0days in routers superfluous.

But embedding themselves inside the support infrastructure would give the NSA nearly unlimited access to much of the world. Huawei claims that a third of the Internet is running their devices. Almost all of it is under support contract. This means a Huawei support engineer, or a spy, can at any time reach out through cyberspace and take control of a third of the Internet hardware, located in data centers behind firewalls. Most often, it’s the Huawei device or management server the NSA would target. In other cases, the Huawei product is just one hop away from the desired system, without a firewall in between.

You want to know who the Pakistani president called in the hours after the raid on the Bin Laden compound? Easy, just use the Huawei support account to query all the telephone switches. You want the contents of Bashar al-Assad’s emails? Easy, just log into the Huawei management servers that share accounts with the email servers.

This isn’t just a Huawei issue, but a universal principle of hacking. An example of this was last Christmas’s breach of the retailer Target, where 40 million credit cards were stolen by hackers. Apparently, hackers first breached the HVAC (air conditioning) company, then leveraged their VPN connection to the Target network to hack into servers.

By the way, I doubt this was actually the NSA. It’s more likely the CIA, who has “assets” at Huawei (support engineers they’ve bribed), or the intelligence service for a friendly country. The intelligence community is so huge it’d be unreasonable to assume the NSA is lurking behind every rock. I’m just pointing out that there are other ways to interpret that NYTimes story.

TorrentFreak: Judge: IP-Address Is Not a Person and Can’t Identify a BitTorrent Pirate

This post was syndicated from: TorrentFreak and was written by: Ernesto. Original post: at TorrentFreak

Over the past several years, hundreds of thousands of alleged BitTorrent pirates have been sued by so-called ‘copyright trolls’ in the United States.

The rightsholders bringing these cases generally rely on an IP address as evidence. They then ask the courts to grant a subpoena, forcing Internet providers to hand over the personal details of the associated account holder.

The problem, however, is that the person listed as the account holder is often not the person who downloaded the infringing material. Although not many judges address this crucial issue early on, there are exceptions, such as the one raised by Florida District Court Judge Ursula Ungaro.

Judge Ungaro was presented with a case brought by Malibu Media, who accused IP-address “174.61.81.171” of sharing one of their films using BitTorrent without their permission. The Judge, however, was reluctant to issue a subpoena, and asked the company to explain how they could identify the actual infringer.

Responding to this order to show cause, Malibu Media gave an overview of their data gathering techniques. Among other things they explained that geo-location software was used to pinpoint the right location, and how they made sure that it was a residential address, and not a public hotspot.

Judge Ungaro welcomed the additional details, but saw nothing that actually proves that the account holder is the person who downloaded the file.

“Plaintiff has shown that the geolocation software can provide a location for an infringing IP address; however, Plaintiff has not shown how this geolocation software can establish the identity of the Defendant,” Ungaro wrote in an order last week.

“There is nothing that links the IP address location to the identity of the person actually downloading and viewing Plaintiff’s videos, and establishing whether that person lives in this district,” she adds.

The order

Even if Malibu Media can accurately show that the copyright infringer used the Internet connection of the account holder connected to IP-address 174.61.81.171, they still can’t prove who shared the file.

“Even if this IP address is located within a residence, the geolocation software cannot identify who has access to that residence’s computer and who would actually be using it to infringe Plaintiff’s copyright,” Judge Ungaro explains.

As a result, the court decided to dismiss the case for improper venue. The ruling is crucial as it’s another unique order confirming that an IP address alone is not enough to launch a copyright infringement lawsuit.

Copyright Troll watcher SJD points out that the same Judge has also issued orders to show cause in two other Malibu Media cases, which are also likely to be closed.

While not all judges may come to the same conclusion, the order definitely limits the options for copyright holders in the Southern District of Florida. Together with several similar rulings on the insufficiency of IP-address evidence, accused downloaders have yet more ammunition to fight back.

Source: TorrentFreak, for the latest info on copyright, file-sharing and anonymous VPN services.

Linux How-Tos and Linux Tutorials: How to Manage VMs with OpenStack Command Line Tools

This post was syndicated from: Linux How-Tos and Linux Tutorials and was written by: Linux How-Tos and Linux Tutorials. Original post: at Linux How-Tos and Linux Tutorials

OpenStack is an industry-standard open-source cloud management platform. Using OpenStack, one can build public, private or hybrid clouds easily. Due to the purely open nature of the platform, major IT vendors including Red Hat, Rackspace, IBM and HP are betting on its future, actively contributing to OpenStack development. In OpenStack, there are two different interfaces […]

The post How to manage VMs with OpenStack command line tools appeared first on Xmodulo.

Read more at Xmodulo