Tag Archives: cloud-infrastructure

A Look at Our Development Process of the Cloud Resource Enrichment API

Post Syndicated from Rapid7 original https://blog.rapid7.com/2023/09/07/a-look-at-our-development-process-of-the-cloud-resource-enrichment-api/

In today’s ever-evolving cybersecurity landscape, detecting and responding to cyber threats is paramount for organizations in cloud environments. At the same time, investigating cyber threat alerts can be arduous due to the time-consuming and complex process of data collection. To tackle this pain point, Rapid7 developed a new Cloud Resource Enrichment API that streamlines data retrieval from various cloud resources. The API empowers security analysts to swiftly respond to cyber threats and improve incident response time.

Identifying the Need for a Unified API

Protecting cloud resources from cyber attacks is a growing challenge. Security analysts must grapple with gathering relevant data spread across multiple systems and APIs, leading to incident response inefficiencies. Presented with this challenge, we recognized a pressing need for a unified API that collects all relevant data types related to a cloud resource during a cyber threat action. This API streamlines data access, enabling analysts to piece together a comprehensive view of incidents rapidly, enhancing cybersecurity operations.

Defining the Vision and Scope

Our development team worked closely with security analysts to tailor the API’s functionalities to meet real-world needs. Defining the API’s scope involved meticulous prioritization of features, striking the right balance between usability and data abundance. By involving analysts from the outset, we laid a solid foundation for the API’s success.

The Development Journey

Adopting agile methodologies, our team iteratively developed the API, adapting and fine-tuning as we progressed. The iterative development process played a vital role in ensuring the API’s success. By breaking down the project into smaller, manageable tasks, we could focus on specific features, implement them efficiently, and gather feedback from early prototypes. With a comprehensive design phase, we defined the API’s architecture and capabilities based on insights from security analysts. Regular meetings and feedback gathering facilitated continuous improvements, streamlining the data retrieval process.

The API utilizes RESTful API design principles for data integration and communication between cloud systems. It collects the following types of data:

  • Harvested cloud resource properties (image, IP, network interfaces, region, cloud organization and account, security groups, and much, much more)
  • Permissions data (permissions on the resource, permissions of the resource)
  • Security insights (risks, misconfigurations, vulnerabilities)
  • Security alerts (“threat findings”)
  • First-level related cloud resources
  • Application context (tagging made by the client in the cloud environment)

Each data type required collaboration with a different team responsible for collecting and processing that data. The result was a feature that involved developers from six different teams! Regular meetings and continuous communication with the development teams and the product manager allowed us to incorporate suggestions and make iterative improvements to the API’s design and functionality.

Conclusion

The development journey of our Cloud Resource Enrichment API has been both challenging and rewarding. With a user-centric approach, we have crafted a powerful tool that empowers security teams to respond effectively to cyber threats. As we continue to enhance the API, we remain committed to fortifying organizations’ cyber defenses and elevating incident response capabilities. Together, we can better equip security analysts to face the ever-changing cyber war with confidence.

Why Your AWS Cloud Container Needs Client-Side Security

Post Syndicated from Rapid7 original https://blog.rapid7.com/2023/08/24/why-your-aws-cloud-container-needs-client-side-security/

With increasingly complicated network infrastructure and organizations needing to deploy applications across various environments, cloud containers are necessary for companies to stay agile and innovative. Containers are packages of software that hold all of the necessary components for an app to run in any environment. One of the biggest benefits of cloud containers? They virtualize an operating system, enabling users to run applications from private data centers, public clouds, and even laptops.

According to recent research by Faction, 92% of organizations have a multi-cloud strategy in place or are in the process of adopting one. In addition to the ubiquity of cloud computing, there are a variety of cloud container providers, including Google Cloud Platform (GCP), Amazon Web Services (AWS), and Microsoft Azure. Nearly 80% of all containers on the cloud, however, run on AWS, which is known for its security, reliability, and scalability.

When it comes to cloud container security, AWS works on a shared responsibility model. This means that security and compliance are shared between AWS and the client. AWS protects the infrastructure running the services offered in the cloud — the hardware, software, networking, and facilities.

Unfortunately, many AWS users stop here. They believe that the security provided by AWS is sufficient to protect their cloud containers. While it is true that the level of customer responsibility for security differs depending on the AWS product, each product does require the customer to assume some level of security responsibility.

To avoid this mistake, let’s examine why your AWS cloud container needs additional client-side security and how Rapid7 can help.

Top reasons why your AWS container needs client-side security

Visibility and monitoring

Some of the same qualities that make containers ideal for agility and innovation also create difficulty in visibility and monitoring. Cloud containers are ephemeral, which means they’re easy to establish and destroy. This is convenient for quickly moving workloads and applications, but it also makes it difficult to track changes. Many AWS containers share memory and CPU resources with a variety of hosts (physical and cloud) in your ecosystem. Consequently, monitoring resource consumption and assessing container performance and application health can be difficult — after all, how can you know how much memory is being utilized by the container versus the physical host?

Traditional monitoring tools and solutions also fail to collect the necessary metrics or provide the crucial insights needed for monitoring and troubleshooting container health and performance. While AWS offers protection for the cloud container structure, visualizing and monitoring what happens within the container is the responsibility of your organization.

Alert contextualization and remediation

As your company grows and you scale your cloud infrastructure, your DevOps teams will continue to create containers. For example, Google runs everything in containers and launches an enormous number of them (several billion per week!) to keep up with their developer and client needs. While you might not be launching quite as many containers, it’s still easy to lose track of them all. Organizations utilize alerts to keep track of container performance and health to resolve problems quickly. While alerting policies differ, most companies use metric- or log-based alerting.

It can be overwhelming to manage and remediate all of your organization’s container alerts. Not only do these alerts need to be routed to the proper developer or resource owner, but they also need to be remediated quickly to ensure the security and continued good performance of the container.
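The routing and prioritization step described above can be sketched in a few lines. This is a minimal illustration, assuming alerts carry a severity field and resources are tagged with an owner; the team names and tag keys are made up for the example.

```python
# Minimal sketch of tag-based alert routing and severity triage.
# Tag keys, team names, and the alert shape are illustrative assumptions.

SEVERITY_ORDER = {"critical": 0, "high": 1, "medium": 2, "low": 3}

def route_alert(alert, default_owner="cloud-sec-oncall"):
    """Send the alert to the owner recorded in the resource's tags,
    falling back to a default on-call queue when no owner is tagged."""
    owner = alert.get("resource_tags", {}).get("owner", default_owner)
    return {"owner": owner, "severity": alert["severity"], "id": alert["id"]}

def triage(alerts):
    """Order alerts so the most severe are handled first."""
    return sorted(alerts, key=lambda a: SEVERITY_ORDER.get(a["severity"], 99))

alerts = [
    {"id": 1, "severity": "low", "resource_tags": {"owner": "team-payments"}},
    {"id": 2, "severity": "critical", "resource_tags": {}},
]
for alert in triage(alerts):
    print(route_alert(alert))
```

In practice the routing table and severity model live in your alerting platform; the point is that ownership metadata has to exist on the resource for routing to work at all.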

Cybersecurity standards

While AWS provides security for your foundational services in containerized applications — computing, storage, databases, and networking — it’s your responsibility to develop sufficient security protocols to protect your data, applications, operating system, and firewall. In the same way that your organization follows external cybersecurity standards for security and compliance across the rest of your digital ecosystem, it’s best to align your client-side AWS container security with a well-known industry framework.

Adopting a standardized cybersecurity framework will work in concert with AWS’s security measures by providing guidelines and best practices — preventing a haphazard application of security controls that creates coverage gaps.

How Rapid7 can help with AWS container security

Now that you know why your organization needs client-side security, here’s how Rapid7 can help.

  • Visibility and monitoring: Rapid7’s InsightCloudSec continuously scans your cloud’s infrastructure, orchestration platforms, and workloads to provide a real-time assessment of health, performance, and risk. With the ability to scan containers in less than 60 seconds, your team will be able to quickly and accurately track changes in your containers and view the data in a single, convenient platform, perfect for collaborating across teams and quickly remediating issues.
  • Alert contextualization and remediation: Client-side security measures are key to processing and remediating system alerts in your AWS containers, but this can’t be accomplished manually. Automation is key for alert contextualization and remediation. InsightCloudSec integrates with AWS services like Amazon GuardDuty to analyze logs for malicious activity. The tool also integrates with your larger enterprise security systems to automate the remediation of critical risks in real time — often within 60 seconds.
  • Cybersecurity standards: While aligning your cloud containers with an industry-standard cybersecurity framework is a necessity, it’s often a struggle. Maintaining security and compliance requirements requires specialized knowledge and expertise. With record staff shortages, this often falls by the wayside. InsightCloudSec automates cloud compliance for well-known industry standards like the National Institute of Standards and Technology’s (NIST) Cybersecurity Framework (CSF) with out-of-the-box policies that map back to specific NIST directives.

Secure your container (and its contents)

AWS’s shared responsibility model of security helps relieve operational burdens for organizations operating cloud containers. AWS clients don’t have to worry about the infrastructure security of their cloud containers. The contents in the cloud containers, however, are the owner’s responsibility and require additional security considerations.

Client-side security is necessary for proper monitoring and visibility, reduction in alert fatigue and real-time troubleshooting, and the application of external cybersecurity frameworks. The right tools, like Rapid7’s InsightCloudSec, can provide crucial support in each of these areas and beyond, filling crucial expertise and staffing gaps on your team and empowering your organization to confidently (and securely) utilize cloud containers.

Want to learn more about AWS container security? Download Fortify Your Containerized Apps With Rapid7 on AWS.

CIEM is Required for Cloud Security and IAM Providers to Compete: Gartner® Report

Post Syndicated from Aaron Wells original https://blog.rapid7.com/2023/02/15/ciem-is-required-for-cloud-security-and-iam-providers-to-compete-gartner-r-report/

In an ongoing effort to help security organizations stay competitive, we’re pleased to offer this complimentary Gartner® report, Emerging Tech: CIEM Is Required for Cloud Security and IAM Providers to Compete. The research in the report demonstrates the need for Cloud Infrastructure Entitlement Management (CIEM) product leaders to adopt trends that can help deliver value across Cloud Security and Identity and Access Management (IAM) enterprises.

CIEM product leaders looking to remain competitive in Cloud Security and IAM practices should consider prioritizing specific capabilities in their planning in order to address new and emerging threats and, as Gartner says:

  • Gain a further competitive edge in the CIEM market by investing in more-advanced guided remediation capabilities, such as automated downsizing of over-privileged accounts.
  • Appeal to a larger audience beyond cloud security teams by positioning CIEM as part of broader enterprise security controls.

Businesses not currently prioritizing CIEM capabilities, however, can’t simply “do a 180” and expect to be successful. Managing entitlements in the current sophisticated age of attacks and digital espionage can feel impossible. It is imperative, though, for security organizations to adopt updated access practices — not only to remain competitive but to remain secure.

Where Least Privileged Access (LPA) approaches fall short, CIEM tools can provide support through advanced enforcement and remediation. Gartner says:

“The anomaly-detection capabilities leveraged by CIEM tools can be extended to analyze the misconfigurations and vulnerabilities in the IAM stack. With overprivileged account discovery, and some guided remediation, CIEM tools can help organizations move toward a security posture where identities have least privileges.”
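The over-privilege analysis at the heart of this kind of discovery reduces to comparing what an identity was granted against what it actually used over an observation window. A minimal sketch, with made-up permission names:

```python
# Sketch of the rightsizing comparison a CIEM tool performs: flag
# entitlements that were granted but never exercised. Permission
# names are illustrative, not tied to any one cloud provider.

def unused_permissions(granted, used):
    """Return permissions granted to an identity but never observed in use."""
    return sorted(set(granted) - set(used))

granted = ["s3:GetObject", "s3:PutObject",
           "iam:PassRole", "ec2:TerminateInstances"]
used = ["s3:GetObject"]
print(unused_permissions(granted, used))
# A guided-remediation step would then propose a tighter policy
# without the unused actions.
```

Real tools weight this by how risky each unused action is (e.g. `iam:PassRole` vs. a read-only call) before proposing a downsized policy.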

Broadening the portfolio

Within cloud security, identity-verification practices are more critical than ever. Companies developing and leveraging SaaS applications must constantly grapple with varying business priorities, thus identity permissions across these applications can become inconsistent. This can leave applications — and the business — open to vulnerabilities and other challenges.

When it comes to dynamic multi- and hybrid-cloud environments, it can become prohibitively difficult to monitor identity administration and governance. Challenges can include:

  • Prevention of misuse from privileged accounts
  • Poor visibility for performing compliance and oversight
  • Added complexity from short-term cloud entitlements
  • Inconsistency across multiple cloud infrastructures
  • Accounts with excessive access permissions

Multi-cloud IAM requires a more refined approach, and CIEM tools can capably address the challenges above, which is why they must be adopted as part of a suite of broader enterprise security controls.

Accelerating cloud adoption

Technology and service providers fulfilling IAM services are in critical need of capabilities that can address specific cloud use cases. Gartner says:

“It is a natural extension to assist existing customers in their digital transformation and cloud adoption journey. These solutions are able to bridge both on-premises identity implementations and cloud to support hybrid use cases. This will also translate existing IAM policies and apply relevant elements for the cloud while adding additional use cases unique to the cloud environment.”

In fact, a key finding from the report is that “visibility of entitlements and rightsizing of permissions are quickly becoming ‘table stakes’ features in the CIEM market.”

Mature CIEM vendors can typically be expected to also offer additional capabilities like cloud security posture management (CSPM). InsightCloudSec from Rapid7 is a CIEM solution that also offers CSPM capabilities to effectively manage the perpetual shift, adoption, and innovation of cloud infrastructure. Businesses and security organizations can more effectively compete when they offer strong solutions that support and aid existing CIEM capabilities.

Download the report

Rapid7 is pleased to continually offer leading research to help you gain clarity into ways you can stand out in this ultra-competitive landscape. Read the entire complimentary Gartner report now to better understand just how in-demand CIEM capabilities are becoming and how product leaders can tailor strategies to Cloud Security and IAM enterprises.

Gartner, “Emerging Tech: CIEM Is Required for Cloud Security and IAM Providers to Compete”

Swati Rakheja, Mark Wah. 13 July 2022.

Gartner is a registered trademark and service mark of Gartner, Inc. and/or its affiliates in the U.S. and internationally, and is used herein with permission. All rights reserved.

Unifying Threat Findings to Elevate Your Runtime Cloud Security

Post Syndicated from Alon Berger original https://blog.rapid7.com/2022/11/29/unifying-threat-findings-to-elevate-your-runtime-cloud-security/

The widespread growth in cloud adoption in recent years has given businesses across all industries the ability to transform and scale in ways never before possible. However, the speed of those changes, combined with the drastically increased volume and complexity of resources in cloud environments, often forces organizations to choose between slowing the pace of their innovation or taking on massive amounts of unmanaged risk.

Cloud security teams still struggle to gather all the relevant insights — such as alerts, threat findings, and notifications — in a single, consolidated place. Even when they succeed, these findings are often missing much of the context needed to move quickly and conduct proper investigations with confidence.

A Single Pane of Glass for Runtime Security Threats

To address and overcome these challenges, we’ve introduced a series of agentless cloud detection and response (CDR) capabilities, empowering our customers to utilize better observability and context for proactive and collaborative investigations.

As part of our new CDR capabilities, we first introduced a unified threat findings view that curates runtime threat detections from various customer resources and cloud service providers to allow faster intelligence analysis and detection of potential risks.

This offers frictionless workflow integrations with third-party cloud vendors, collecting cloud events, alerts, and threat intelligence feeds from associated services, such as AWS GuardDuty. The new unified view not only consolidates all runtime threat detections from various sources, but also provides richer security context by associating the findings with the affected cloud resources and their properties, all in a single place.

These seamless integrations also ensure that companies are able to leverage their CSP’s newest security tools and capabilities, as well as keep up with the latest developments in the ever-changing world of cloud infrastructure.

In addition to consolidating third-party threat findings, we’ve also built native detection for suspicious events in customer cloud environments. These native detection capabilities, which are based on research from Rapid7 cloud security experts and detect suspicious events within 90 seconds, include identifying potential threat actor behaviors such as:

  • A user marking an existing resource as publicly accessible/exposed to the world
  • A user making a resource unencrypted at rest
  • A user removing transit encryption for a resource
  • A user removing cloud protective measures, such as password policy
  • A user adding overly permissive policies to an existing resource
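The behaviors above amount to a rule set evaluated against each incoming cloud event. The sketch below is illustrative only: the event field names are assumptions for the example, not a specific cloud provider's audit-log schema or Rapid7's actual detection logic.

```python
# Illustrative rules mirroring the suspicious behaviors listed above.
# Field names are assumptions, not a real audit-log schema.

SUSPICIOUS_RULES = {
    "make_public": lambda e: e.get("public_access") is True,
    "disable_encryption_at_rest": lambda e: e.get("encryption_at_rest") is False,
    "remove_transit_encryption": lambda e: e.get("tls_required") is False,
    "weaken_password_policy": lambda e: e.get("action") == "DeletePasswordPolicy",
    "overly_permissive_policy": lambda e: "*" in e.get("policy_actions", []),
}

def detect(event):
    """Return the names of all rules the event trips."""
    return [name for name, rule in SUSPICIOUS_RULES.items() if rule(event)]

print(detect({"action": "PutBucketAcl", "public_access": True}))
# → ['make_public']
```

Each hit would then surface as a finding attached to the affected resource, so the alert arrives with the resource context already in place.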

Along with providing individual alerts for these detections, admins can now also filter resources to get a view of only those assets that have seen a suspicious event in the last 24 hours. This allows flexibility in how individuals and teams are able to review, investigate, and report on recent threats across their cloud environment.

Simplify Mitigation at Scale

Runtime security is key to providing visibility and detecting a variety of threats that piggyback on network resources. With Rapid7’s continuous monitoring and analysis of native and third-party threat findings, teams are able to leverage advanced automated remediation of risks in their environment, including misconfigured resources and hygiene drifts, known and unknown vulnerabilities, uncontrolled access (secrets, tokens, credentials, etc.), and more.

Along with identifying threats, teams are now able to leverage intelligent automated notifications for third-party integrations such as SIEM, ticketing platforms, or chat solutions. This enables a much faster remediation process, isolating relevant resources and preventing further suspicious activity until a thorough investigation is completed.

Take a Holistic Approach to Runtime Security in the Cloud

Rapid7 is on a mission to help drive cloud security forward across the entire industry and community. With this new set of capabilities, including our recently launched unified threats findings view, getting visibility into risks and threats is easier and more powerful than ever. Ultimately, we aim for our customers to benefit from our current and upcoming offerings, helping them to create greater impact and to drive business forward faster and at scale.

Want to learn more? Click here.

Reducing Risk In The Cloud with Agentless Vulnerability Management

Post Syndicated from Alon Berger original https://blog.rapid7.com/2022/11/28/reducing-risk-with-agentless-cloud-vulnerability-management/

In order to gain visibility into vulnerabilities in their public cloud environments, many organizations still rely on agent or network-based scanning technology that was initially built for traditional infrastructure and endpoints.

These methods often struggle to keep up with the speed of change and scale of complex, constantly changing cloud environments, forcing infrastructure teams to play catch-up to avoid significant blind spots caused by unprotected workloads.

Vulnerability management in the cloud starts with continuous discovery of the container images and host workloads that may contain vulnerabilities, along with the supporting resources that control how those workloads are launched. The assessment step produces long lists of vulnerabilities that can lack the necessary context to help prioritize and accurately route each issue to the correct owners for remediation.

Getting Better Visibility and Control

Rapid7’s InsightCloudSec now addresses all these challenges and provides agentless vulnerability assessment capabilities for cloud-based container workloads and hosts. Building on InsightCloudSec’s industry-leading cloud resource discovery technology, we’ve unleashed the latest generation of agentless methods for assessing vulnerabilities on containers using side-scanning and on hosts using image snapshotting. Combined, this fully enables security teams to quickly identify where vulnerabilities exist across their cloud infrastructure and what resources are responsible for managing the dynamic workloads that launch them, and gives them the tools to manage response prioritization and remediation.

InsightCloudSec’s vulnerability management capabilities are purpose-built for cloud-native environments and leverage Rapid7’s proven vulnerability management expertise and intelligence. Our agentless approach reduces the unnecessary overhead of agent management on highly ephemeral cloud resources.

Vulnerability Management with Rapid7’s InsightCloudSec

Vulnerability management with InsightCloudSec focuses on container and host-based workloads found in production environments, where the risk of exploitation is the highest. The solution leverages event-driven detection capabilities, allowing teams to maintain an up-to-the-minute inventory of all resources in production. This in turn minimizes blind spots and allows for more trustworthy reporting.

The solution automatically analyzes new container images and host instances upon deployment and provides detailed intelligence and remediation guidance for known vulnerabilities. InsightCloudSec then periodically revalidates running hosts against the newest vulnerability data to detect and protect against drift.

Our comprehensive vulnerability detection spans operating systems, installed software packages, network services, and open-source software libraries and packages typically used as dependencies in these environments, providing customers with the broadest coverage available in the market.

Agentless Container and Host Workload Assessment

With agentless vulnerability assessment, security teams gain robust, continuous visibility into what vulnerabilities exist in their cloud environment, without having to include an agent in their container and host golden images. We discover new container images and host instances in near-real-time and immediately gather the information necessary to perform the assessment without waiting for a scheduled scan window or impacting the performance of the live workloads.

When new container images are detected in the monitored registries, InsightCloudSec performs a side-scan on them to index the inventory of the operating system, installed software packages, and any other dependent libraries on which we can detect vulnerabilities.

In the same way, once a new running host (VM) instance is detected, InsightCloudSec fetches the workload’s runtime storage layer using remote harvesting and automated snapshot triggering to gather the data required for vulnerability assessment.

By combining workload metadata gathered from cloud provider APIs with container and host vulnerability data, we are able to provide contextualized vulnerability reports and deep visibility into where vulnerabilities exist in cloud environments, allowing security teams to respond first to those impacting the most critical applications and cloud accounts.
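That combination step is conceptually a join between the workload's package inventory and a vulnerability database, enriched with cloud metadata. The sketch below is a simplified illustration; the package versions, report fields, and the use of real CVE IDs here are for the example only, not InsightCloudSec's actual data model.

```python
# Sketch of joining a workload's package inventory with vulnerability
# data and cloud metadata. Fields and the tiny in-memory "database"
# are illustrative; a real system queries a maintained vuln feed.

VULN_DB = {
    ("openssl", "1.1.1k"): ["CVE-2021-3711"],
    ("log4j", "2.14.1"): ["CVE-2021-44228"],
}

def contextualized_report(workload):
    """Attach account and application context to each vulnerable package."""
    findings = []
    for pkg in workload["packages"]:
        for cve in VULN_DB.get((pkg["name"], pkg["version"]), []):
            findings.append({
                "cve": cve,
                "package": pkg["name"],
                "account": workload["account"],      # cloud-provider metadata
                "application": workload.get("app", "unknown"),
            })
    return findings

workload = {"account": "prod-123", "app": "checkout",
            "packages": [{"name": "log4j", "version": "2.14.1"},
                         {"name": "curl", "version": "8.0"}]}
print(contextualized_report(workload))
```

Because each finding carries the account and application it belongs to, prioritization ("fix production checkout first") falls out of the data rather than requiring a separate lookup.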

Conclusion

Rapid7 and InsightCloudSec strive to help security and operation teams apply proper processes and procedures across the deployment pipeline, allowing them to quickly respond to vulnerabilities of any sort and severity.

With an accurate assessment of detected vulnerabilities and intelligent, automated routing for faster remediation, our solution gives teams robust, continuous visibility into the vulnerabilities that exist in their cloud environments.

Want to learn more? Click here.

Real-Time Risk Mitigation in Google Cloud Platform

Post Syndicated from Ben Austin original https://blog.rapid7.com/2022/10/12/real-time-risk-mitigation-in-google-cloud-platform/

With Google Cloud Next happening this week, there’s been some recent water cooler talk – okay, informal, ad hoc Zoom calls – about what makes Google Cloud Platform (GCP) unique when it comes to security. A few specific differences have popped up here and there (default data encryption, the way IAM is handled, etc.), but, generally speaking, many of the principles that apply to all other cloud providers apply to GCP environments.

For one, due to the speed and scale of these environments, it’s simultaneously very difficult and extremely critical to maintain an up-to-date inventory of the state of all resources in your environment. This means constantly monitoring your environment for resources being created, deleted, or modified in as close to real time as possible.

And in an effort to avoid ambiguity or hide behind marketing buzz terms, when I’m referring to “real time” here, I’m talking about sub-5-minute intervals based on activity happening in the environment. This is not to be confused with “near real time” approaches some vendors tout, which, in reality, still only pull in data once or twice a day on a static schedule.

In GCP, like in AWS, Azure, and all other cloud environments, simply getting a snapshot once a day to identify misconfigurations, vulnerabilities, or suspicious behaviors like you might with an on-prem data center just isn’t a scalable strategy. It’s a common cliche, but the ephemeral nature and rate of change in public cloud environments makes that kind of scanning strategy extremely ineffective when it comes to monitoring, analyzing, and eliminating actual risk in a cloud environment.

Let me lay out a couple examples where this kind of real-time monitoring can provide significant, potentially necessary, value to security teams working to make their cloud risk management programs more effective.

Identification of high-risk resources

As an example, say a developer is in a GCP project associated with your company’s revenue-generating application and they spin up a Cloud Storage instance that is, whether mistakenly or maliciously, open to the public internet.

If your security team is reliant on a scan to happen 12 hours later to get visibility into this activity, your organization will constantly be left open to significant risk. Take away the hyperbole here and assume it’s a much smaller risk or compliance violation. Even in that situation, your team is still working from behind and, presumably, almost always facing some level of stress about what issues are out there in the environment that they won’t know about for another 12-18 hours.

Worst of all, with this type of scanning you’re generally just getting a point-in-time snapshot of the environment and usually don’t know who made the change or how long ago it happened. This makes it much more difficult and time consuming for your team to actually assess the risk or get their hands on the right information to make an informed decision about how the situation should be addressed.
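Concretely, an event-driven monitor for the Cloud Storage example above inspects each `storage.setIamPermissions` audit-log entry as it arrives and checks whether the IAM change grants public access. The sketch below follows the general layout of GCP's Cloud Audit Log JSON (`serviceData.policyDelta.bindingDeltas`), but it is a simplified assumption for illustration, not a complete parser or InsightCloudSec's detection code.

```python
# Minimal check an event-driven monitor might apply to a Cloud Audit Log
# entry for storage.setIamPermissions. The entry layout is a simplified
# assumption based on GCP's audit-log JSON, not a full parser.

PUBLIC_MEMBERS = {"allUsers", "allAuthenticatedUsers"}

def grants_public_access(entry):
    """True if the IAM delta in the log entry adds a public member."""
    deltas = (entry.get("serviceData", {})
                   .get("policyDelta", {})
                   .get("bindingDeltas", []))
    return any(d.get("action") == "ADD" and d.get("member") in PUBLIC_MEMBERS
               for d in deltas)

entry = {
    "methodName": "storage.setIamPermissions",
    "authenticationInfo": {"principalEmail": "dev@example.com"},
    "serviceData": {"policyDelta": {"bindingDeltas": [
        {"action": "ADD", "role": "roles/storage.objectViewer",
         "member": "allUsers"},
    ]}},
}
print(grants_public_access(entry))  # → True
```

Note that the log entry already answers the "who and when" questions: the principal email (and the entry timestamp) ride along with the change itself, which is exactly the context a once-a-day snapshot loses.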

When a team is working with real-time data, however, they can be much more diligent and confident that they’re prioritizing the right issues at any given moment, with all the necessary context about who made the change and when it occurred. This not only helps teams stay ahead of issues and reduce the risk of a breach in their environment, but also helps keep individuals and teams feeling positive about the impact that the program is having on the organization.

Delayed remediation workflows

Building off of the previous example, it’s not only that teams can’t respond to risk they haven’t been notified of, it’s also that any automated response workflows your team may have built out to be more efficient are significantly less effective when they’re triggered by hours-old data. A 12-hour delay in an automation workflow all but eliminates the value of the automation itself, and it can actually cause headaches and confusion that detract from your team’s efficiency, rather than improving it (more on this in the next example).

In contrast, if you’re able to detect risky changes to your environment as they happen, you can automatically respond to that issue as it happens. In the case of this all being a mistake caused by a developer working a little too quickly, you’re able to automatically notify them of their error within a matter of minutes, likely while they’re still working within that project. Giving your development team this kind of feedback in the moment, rather than forcing them to context switch and go back into the project to fix the error a day later, is an excellent way to build stronger relationships and rapport with that team.

In the more rare case that this is indeed a malicious internal or external actor, enabling your automated remediation workflows to kick into gear within seconds and potentially stop the behavior could mean the difference between a minor incident and a breach requiring public disclosure from your organization.

Minimizing false positives and cross-team friction

Speaking of relationships with the development team (sorry, #DevSecOps), I can almost guarantee that working with data from scans or snapshots that occur every 12 or 24 hours in your cloud will cause friction between your two teams. Whether it’s tied to manual identification of risky resources or automated workflows notifying them of a non-compliant asset, working with stale data will inevitably lead to false positives that will both annoy and distract your already overburdened development team.

Take the example highlighted above, but instead, let’s say the developer actually spun up that Cloud Storage instance for a short time in a dev project with no actual customer data as part of a testing exercise. By the time your team gets visibility into this and either reaches out manually or has an automated notification sent to the developer, that instance could have been deleted for hours. Now your team is looking at a set of old data and seeing an issue, while the developer insists the storage container doesn’t even exist anymore. As mentioned above, this causes headaches and frustration for both parties and costs your team credibility with the dev team.

At this point, you can probably guess where this is going. With real-time monitoring in your environment, this situation can be avoided altogether: both teams are looking at the same up-to-date information, and yours can see that the storage container was shut down or removed from the project rather than spending time chasing down a false positive.

Earlier this month we released event-driven harvesting for GCP in InsightCloudSec. This agentless, real-time monitoring helps your security team achieve every one of the benefits outlined above while also avoiding API rate limiting. In addition, we’ve recently added GCP CIS Benchmarks v1.3.0, added GCP threat findings into our console, and added support for Google Directory to give visibility into IAM factors such as user last login, MFA status, group association and more.

If you want to learn more about how Rapid7 can help you secure Google Cloud Platform, or any other public cloud environment, sign up for our live bi-weekly demo of InsightCloudSec.

No Damsels in Distress: How Media and Entertainment Companies Can Secure Data and Content

Post Syndicated from Ryan Blanchard original https://blog.rapid7.com/2022/08/08/no-damsels-in-distress-how-media-and-entertainment-companies-can-secure-data-and-content/


Streaming is king in the media and entertainment industry. According to the Motion Picture Association’s Theatrical and Home Entertainment Market Environment Report, the global number of streaming subscribers grew to 1.3 billion in 2021. Consumer demand for immediate digital delivery is skyrocketing. Producing high-quality content at scale is a challenge media companies must step up to on a daily basis. One thing is for sure: Meeting these expectations would be unmanageable left to human hands alone.

Fortunately, cloud adoption has enabled entertainment companies to meet mounting customer and business needs more efficiently. With the high-speed workflow and delivery processes that the cloud enables, distributing direct-to-consumer is now the industry standard.

As media and entertainment companies grow their cloud footprints, they’re also opening themselves up to vulnerabilities threat actors can exploit — and the potential consequences can be financially devastating.

Balancing cloud security with production speed

In 2021, a Twitch data breach showed the impact cyberattacks can have on intellectual property at media and entertainment companies. Attackers stole 128 gigabytes of data from the popular streaming site and posted the collection on 4chan. The released torrent file contained:

  • The history of Twitch’s source code
  • Programs Twitch used to test its own vulnerabilities
  • Proprietary software development kits
  • An unreleased online games store intended to compete with Steam
  • Amazon Game Studios’ next title

Ouch. In mere moments, the attackers stole a trove of sensitive IP along with details of Twitch’s own security testing. How did they manage this? By exploiting a single misconfigured server.

Before you think, “Well, that couldn’t happen to us,” consider that cloud misconfigurations are the most common source of data breaches.

Yet, media and entertainment businesses can’t afford to slow down their adoption and usage of public cloud infrastructure if they hope to remain relevant. Consumers demand timely content, whether it’s the latest midnight album drop from Taylor Swift or breaking news on the war in Ukraine.

Media and entertainment organizations must mature their cloud security postures alongside their content delivery and production processes to maintain momentum while protecting their most valuable resources: intellectual property, content, and customer data.

We’ve outlined three key cloud security strategies media and entertainment companies can follow to secure their data in the cloud.

1. Expand and consolidate visibility

You can’t protect what you can’t see. There are myriad production, technical, and creative teams working on a host of projects at a media and entertainment company – and they all interact with cloud and container environments throughout their workflow. This opens the door for potential misconfigurations (and then breaches) if these environments aren’t carefully tracked, secured, or even known about.

Here are some key questions to consider:

  • Do you know exactly what platforms are being used across your organization?
  • Do you know how they are being used and whether they are secure?

Most enterprises lack visibility into all the cloud and container environments their teams use throughout each step of their digital supply chain. Implementing a system to continuously monitor all cloud and container services gives you better insight into associated risks. Putting these processes into place will enable you to tightly monitor – and therefore protect – your growing cloud footprint.
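As a sketch of what that continuous monitoring might look like in code, here is a toy inventory review that merges resource listings into one view and flags anything outside a sanctioned service list. The service names and the inventory fetcher are made up; a real implementation would page through each provider’s asset APIs:

```python
# Services the organization has explicitly approved (illustrative names).
SANCTIONED = {"gcs", "gke", "cloudsql"}

def fetch_inventory() -> list[dict]:
    # Stand-in for real provider asset APIs; returns a merged snapshot.
    return [
        {"id": "bucket-1", "service": "gcs", "public": False},
        {"id": "vm-weird", "service": "shadow-vm", "public": True},
    ]

def review(resources: list[dict]) -> list[str]:
    """Flag unsanctioned services and public exposure in one pass."""
    findings = []
    for r in resources:
        if r["service"] not in SANCTIONED:
            findings.append(f"{r['id']}: unsanctioned service {r['service']!r}")
        if r.get("public"):
            findings.append(f"{r['id']}: exposed to the public internet")
    return findings

for finding in review(fetch_inventory()):
    print(finding)
```

Run continuously rather than on a schedule, a pass like this is what turns “unknown environments” into a tracked, reviewable footprint.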

How to get started: Improve visibility by introducing a plan for cloud workload protection.

2. Shift left to prevent risk earlier

Cloud, container, and other infrastructure misconfigurations are a major area of concern for most security teams. More than 30% of the data breaches studied in our 2022 Cloud Misconfigurations Report were caused by relaxed security settings and misconfigurations in the cloud. These misconfigurations are alarmingly common across industries and can cause critical exposures, as evidenced in the following example:

In 2021, a server misconfiguration on Sky.com (a UK-based media company) revealed access credentials to a production-level database and IP addresses to development endpoints. This meant that anyone with those released credentials or addresses could easily access a mountain of proprietary data from the Comcast subsidiary.

One way to avoid these types of breaches is to prevent misconfigurations in your Infrastructure as Code (IaC) templates. Scanning IaC templates, such as Terraform, reduces the likelihood of cloud misconfigurations by ensuring that any templates that are built and deployed are already vetted against the same security and compliance checks as your production cloud infrastructure and services.

By leveraging IaC scanning that provides fast, context-rich results to resource owners, media and entertainment organizations can build a stronger security foundation while reducing friction across DevOps and security teams and cutting down on the number of 11th-hour fixes. Solving problems in the CI/CD pipeline improves efficiency by correcting issues once rather than fixing them over and over again at runtime.
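To make this concrete, here is a toy check in the spirit of IaC scanning, run against a heavily simplified version of the JSON that `terraform show -json <plan>` produces. The rule, the resource, and the fix text are illustrative, not a real scanner’s output:

```python
import json

# Simplified stand-in for a Terraform plan rendered as JSON.
PLAN = json.loads("""
{
  "planned_values": {"root_module": {"resources": [
    {"address": "aws_s3_bucket.assets",
     "type": "aws_s3_bucket",
     "values": {"acl": "public-read"}}
  ]}}
}
""")

def check_public_buckets(plan: dict) -> list[dict]:
    """Flag planned buckets whose ACL would grant public access."""
    findings = []
    for res in plan["planned_values"]["root_module"]["resources"]:
        if res["type"] == "aws_s3_bucket" and res["values"].get("acl", "").startswith("public"):
            findings.append({
                "resource": res["address"],
                "issue": "bucket ACL grants public access",
                "fix": 'set acl = "private" before this plan is applied',
            })
    return findings

for f in check_public_buckets(PLAN):
    print(f"{f['resource']}: {f['issue']} -> {f['fix']}")
```

Because the check runs against the plan, the misconfiguration is caught before any infrastructure exists, which is exactly the once-at-the-source fix the paragraph above describes.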

How to get started: Learn about the first step of shifting left with Infrastructure as Code in the CI/CD pipeline.

3. Create a culture of security

As the saying goes, a chain is only as strong as its weakest link. Cloud security practices won’t be as effective or efficient if an organization’s workforce doesn’t understand and value secure processes. A culture of collaboration between DevOps and security is a good start, but the entire organization must understand and uphold security best practices.

Fostering a culture that prioritizes the protection of digital content empowers all parts of (and people in) your supply chain to work with secure practices front-of-mind.

What’s the tell-tale sign that you’ve created a culture of security? When all employees, no matter their department or role, see it as simply another part of their job. This is obviously not to say that you need to turn all employees, or even developers, into security experts, but they should understand how security influences their role and the negative consequences to the business if security recommendations are avoided or ignored.

How to get started: Share this curated set of resources on cloud security for media and entertainment companies with your team.

Achieving continuous content security

Media and entertainment companies can’t afford to slow down if they hope to meet consumer demands. They can’t afford to neglect security, either, if they want to maintain consumer trust.

Remember, the ultimate offense is a strong defense. Building security into your cloud infrastructure processes from the beginning dramatically decreases the odds that an attacker will find a chink in your armor. Moreover, identifying and remediating security issues sooner plays a critical role in protecting consumer data and your intellectual property and other media investments.

Want to learn more about how media and entertainment companies can strengthen their cloud security postures?

Read our eBook: Protecting IP and Consumer Data in the Streaming Age: A Guide to Cloud Security for Digital Media & Entertainment.


Shift Left: Secure Your Innovation Pipeline

Post Syndicated from Ryan Blanchard original https://blog.rapid7.com/2022/08/01/shift-left-secure-your-innovation-pipeline/


There’s no shortage of buzzwords in the tech world. Some are purely marketing spin. But others are colloquial ways for the industry to talk about complex topics that have a massive impact on how organizations and teams drive innovation and work more efficiently. Here at Rapid7, we believe the “shift left” movement very much falls in the latter category.

Because we see shifting left as so critical to an effective cloud security strategy, we’re kicking off a new blog series covering how organizations can seamlessly incorporate security best practices and technologies into their existing DevOps workflows — and, of course, how InsightCloudSec and the brilliant team here at Rapid7 can help.

What does “shift left” actually mean?

For those who might not be familiar with the term, “shift left” is closely tied to DevOps methodologies. The idea is to “shift” tasks that have typically been performed by centralized, dedicated operations teams earlier in the software development life cycle (SDLC). In the case of security, this means weaving security guardrails and checks into development and fixing problems at the source, rather than waiting to do so at deployment or in production.


Historically, security was centered around applying checks and scanning for known vulnerabilities after software was built as part of the test and release processes. While this is an important step in the cycle, there are many instances in which this is too late to begin thinking about the integrity of your software and supporting infrastructure — particularly as organizations adopt DevOps practices, resources are increasingly provisioned declaratively, and the development cycle becomes a more iterative, continuous process.

Our philosophy on shift left

One of the most commonly cited concerns we hear from organizations attempting to shift left is the potential to create a bottleneck in development, as developers need to complete additional steps to clear compliance and security hurdles. This is a crucial consideration, given that accelerating software development and increasing efficiency is often the driving force behind adopting DevOps practices in the first place. Security must catch up to the pace of development, not slow it down.

Shift left is very much about decentralizing security to match the speed and scale of the cloud, and when done poorly, it can erode trust and be viewed as a gating factor to releasing high-quality code. This is what drives Rapid7’s fundamental belief that in order to effectively shift security left, you need to avoid adding friction into the process, and instead embrace the developer experience and meet devs where they are today.

How do you accomplish this? Here are a few core concepts that we at Rapid7 endorse:

Provide real-time feedback with clear remediation guidance

The main goal of DevOps is to accelerate the pace of software development and improve operating efficiency. In order to accomplish this without compromising quality and security, you must make sure that insights derived from your tooling are actionable and made available to the relevant stakeholders in real time. For instance, if an issue is detected in an IaC template, the developer should be immediately notified and provided with step-by-step guidance on how to fix the issue directly in the template itself.
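A minimal sketch of that feedback loop: turning a scanner finding into the kind of step-by-step comment a developer might see directly on their pull request. The finding fields, rule name, and remediation steps are hypothetical:

```python
def remediation_comment(finding: dict) -> str:
    """Format a finding as actionable, in-context guidance for the author."""
    return "\n".join([
        f"**{finding['rule']}** failed in `{finding['file']}` (line {finding['line']})",
        f"Why: {finding['why']}",
        "How to fix:",
        *[f"  {i}. {step}" for i, step in enumerate(finding["steps"], 1)],
    ])

# Illustrative finding; a real scanner would supply these fields.
finding = {
    "rule": "storage-bucket-public-access",
    "file": "modules/storage/main.tf",
    "line": 12,
    "why": "uniform bucket-level access is disabled, allowing per-object ACLs",
    "steps": [
        "Set uniform_bucket_level_access = true on the bucket resource",
        "Re-run the plan; the check passes once the flag is enabled",
    ],
}
print(remediation_comment(finding))
```

The point of the format is that the developer never has to leave their review tool: the what, why, and how arrive together, in real time.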

Establish clear and consistent security and compliance standards

It’s important for an organization to have a clear and consistent definition of what “good” looks like. A clearly defined set of security and compliance controls establishes a common standard for the entire organization, making compliance and risk easier to measure and report on. Working from a single, centrally managed policy set makes it that much easier to ensure that teams are building compliant workloads from the start, and limits the time wasted repeatedly fixing issues after they reach production. A common standard for security that everyone is accountable for also establishes trust with the development community.

Integrate seamlessly with existing tool chains and processes

When adding any tools or additional steps into the development life cycle, it is critically important to integrate them with existing tools and processes to avoid adding friction and creating bottlenecks. This means that your security tools must be compatible with existing CI/CD tools (e.g., GitHub, Jenkins, Puppet, etc.) to make the process of scanning resources and remediating issues seamless, and to enable developers to complete their tasks without ever leaving the tools they are most comfortable working with.

Enable automation by shifting security left

Automation can be a powerful tool for teams managing sprawling and complex cloud environments. Shifting security left with IaC scanning allows you to catch faulty source templates before they’re ever used, allowing teams to leverage automation to deploy their cloud infrastructure resources with the confidence that they will align to organizational security standards.

Shifting cloud security left with IaC scanning

Infrastructure as code (IaC) refers to the ability to provision cloud infrastructure resources declaratively, by writing code in the same development environments used to write the software it is intended to support. IaC is a critical component of shifting left, as it empowers developers to write, test, and release software and infrastructure resources programmatically in a highly integrated process. This is typically done through pre-configured templates based on policies determined by operations teams, making development a shared and reproducible process.

When it comes to IaC security, we’re primarily talking about integrating checks that ensure IaC templates won’t result in non-compliant infrastructure. But it shouldn’t stop there. In a perfect world, the IaC scanning tool will not only identify why a given template is non-compliant, but also tell you how to fix it (bonus points if it can fix the problem for you!).

IaC scanning with InsightCloudSec

By this point, it should be clear that we here at Rapid7 strongly believe in incorporating security and compliance as early as possible in the development process, but we know this can be a daunting task. That’s why we built powerful capabilities into the InsightCloudSec platform to make integrating IaC scanning into your development workflows as easy and seamless as possible.

With IaC scanning in InsightCloudSec, your teams can identify and evaluate risk before infrastructure is ever built, stopping non-compliant or misconfigured resources from ever reaching production, and improving efficiency by fixing problems at the source once and for all, rather than repeatedly addressing them at runtime. With out-of-the-box support for popular IaC tools like Terraform and CloudFormation, InsightCloudSec provides teams with a common understanding of what “good” looks like that is consistent throughout the entire development life cycle.

Shifting security left requires consistency

Consistency is critical when shifting left, because if you’re scanning IaC templates with checks against policies that differ from those being applied in production, there’s a high likelihood that after some — likely short — period of time, those policy sets are going to drift, leading to missed vulnerabilities, misconfigurations, and/or non-compliant workloads. That may not seem like the end of the world, but it creates real problems for communicating issues across teams and increases the risk of inconsistent application of policies. When you lack consistency, it creates confusion among your stakeholders and erodes confidence in the effectiveness of your security program.

To address this, InsightCloudSec applies the same exact set of configuration standards and security policies across your entire CI/CD pipeline and even across your various cloud platforms (if your organization is one of the many that employ a hybrid cloud strategy). That means teams using IaC templates to provision infrastructure resources for their cloud-native applications can be confident they are deploying workloads that are in line with existing compliance and security standards — without having to apply a distinct set of checks, or cross-reference them with those being used in production environments.

Sounds amazing, right?! There’s a whole lot more that InsightCloudSec has to offer cloud security teams that we don’t have time to cover in this post, so follow this link if you’d like to learn more.


Cloud Complexity Requires a Unified Approach to Assessing Risk

Post Syndicated from Shalini Subbiah original https://blog.rapid7.com/2022/07/05/cloud-complexity-requires-a-unified-approach-to-assessing-risk/


There has been an unprecedented acceleration in the shift to the cloud as a result of the COVID-19 pandemic. McKinsey experts estimate companies have moved to the cloud “24 times faster […] than they thought” over the past two years. As organizations move quickly to scale, drive innovation, and revamp the way they engage with their consumers by moving to the public cloud, there is an increasing need for a security strategy that aligns with the varied states of organizations’ maturity in their usage and adoption of the cloud.

Modern cloud environments are complex and require multiple areas of focus, including security, application modernization, reduction of infrastructure overhead, accelerating software delivery, maintaining compliance, and countless more. All of these are critical to realizing the end goals of cloud adoption: increased speed, flexibility, and performance. Rapid cloud adoption without the appropriate visibility and automated security controls will lead to imminent exposure and vulnerability.

Whose responsibility is cloud security?

When it comes to cloud environments, security and compliance are a shared responsibility between the cloud service provider (CSP) and the customer’s internal security team. In the typical shared responsibility model, cloud providers are responsible for the security OF the cloud. This essentially means they are on the hook to make sure the actual underlying resources you’re using – such as a storage bucket or a compute instance – or the physical hardware sitting in their data centers aren’t a security threat.

So, if the provider is responsible for security of the cloud itself, what falls to the customer? Customers are responsible for security IN the cloud, meaning they are responsible for making sure their own data — and their customers’ data — is properly secured. Generally speaking, this means making sure resources and services are configured properly; identifying and remediating exploitable vulnerabilities; managing identity and access management (IAM) policies to maintain least-privilege access; and encrypting data, whether it’s at rest, in transit, or even in use.
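As an illustration of those customer-side duties, here is a toy audit of a simplified resource record against three of them: encryption at rest, least-privilege grants, and open vulnerabilities. The field names and checks are invented for the example:

```python
# Remediation text keyed by the customer-side duty it maps to (illustrative).
DUTIES = {
    "encrypted_at_rest": "enable default encryption on the resource",
    "least_privilege": "trim wildcard grants from the IAM policy",
    "patched": "remediate known CVEs on the underlying image",
}

def audit(resource: dict) -> list[str]:
    """Return the customer-side duties this resource currently fails."""
    gaps = []
    if not resource.get("encrypted_at_rest"):
        gaps.append(DUTIES["encrypted_at_rest"])
    if any("*" in grant for grant in resource.get("iam_grants", [])):
        gaps.append(DUTIES["least_privilege"])
    if resource.get("open_cves", 0) > 0:
        gaps.append(DUTIES["patched"])
    return gaps

vm = {"encrypted_at_rest": False, "iam_grants": ["roles/*:alice"], "open_cves": 2}
for gap in audit(vm):
    print("TODO:", gap)
```

Everything this check covers sits on the customer’s side of the shared responsibility line; none of it is the provider’s job to catch.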


So why is it that such a large majority of breaches are the fault of the customer if the responsibility is shared? There are a few drivers behind this, but it’s primarily because the goal of most bad actors is to gain access to sensitive and potentially lucrative customer data, which falls outside of the responsibility of the cloud provider.

I know what you’re thinking: “The answer is simple – just don’t leave a cloud resource housing sensitive data exposed to the public internet.” That’s, of course, the intent of any well-meaning engineer. That said, mistakes are unfortunately quite common. As engineers and developers work at light speed to bring new products to market, it can be very easy for security and compliance to fall through the cracks, especially as powerful new cloud capabilities enable infrastructure to be implemented with the click of a mouse.

This is consistent with our own research in our 2022 Cloud Misconfigurations Report, in which we found the most commonly breached resources were those that were secure and private by default, such as a storage bucket. This suggests human error played a pivotal role in leaving data exposed.

Prioritizing risk requires a unified approach to cloud security

The scale and complexity of modern cloud environments make it impossible to respond to every alert and potential issue that arises. So, what can you and your team do to make sure you’re not vulnerable to attack?

The key is context.

It is imperative for organizations to think of cloud security holistically so that they can understand their true risk exposure. Organizations need to be able to easily prioritize the issues that are the most critical to fix right away from the flood of alerts and incidents that are calling for their teams’ attention.

The question that needs answering seems simple, yet can be quite complex: “What are the biggest threats to my cloud environment today, and how do I mitigate them?” As mentioned earlier, it is not sufficient anymore to look at an issue through a single lens. Without a unified approach to cloud security, you could be leaving your organization and the systems it relies on in jeopardy.

This means examining not just the risks associated with a workload itself, but a holistic mapping of all resource interdependencies across your environment to understand how one compromised resource may impact others. It means taking into consideration whether or not a given resource is connected to the public internet, or whether there is potential for improper access to sensitive information. There is also business context that needs to be taken into account, where an understanding of resource ownership and accountability can highlight relevant stakeholders that need to be looped in for remediation or audits and provide color as to potential business impact.
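A hedged sketch of that interdependency mapping: model resources as a graph and walk outward from each internet-facing node to estimate what a compromise there could reach. The resource names and edges are made up for illustration:

```python
from collections import deque

# "Can reach" edges between resources (illustrative environment).
EDGES = {
    "lb-public": ["web-vm"],
    "web-vm": ["orders-db", "cache"],
    "cache": [],
    "orders-db": [],
}
EXPOSED = ["lb-public"]  # nodes connected to the public internet

def blast_radius(start: str) -> set[str]:
    """Breadth-first walk: everything reachable from a compromised node."""
    seen, queue = set(), deque([start])
    while queue:
        node = queue.popleft()
        for nxt in EDGES.get(node, []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return seen

for entry in EXPOSED:
    print(f"{entry} can reach: {sorted(blast_radius(entry))}")
```

The same vulnerability on `cache` and on `lb-public` would score very differently under this lens, which is precisely the context a siloed, per-resource view cannot provide.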

See? Simple – yet complex.

Extend this concept across millions of resources spanning hundreds of cloud environments, architectures, and technologies, and you have the complexity of cloud security today. It is therefore a non-negotiable starting point to have a consolidated, weighted, and standardized view of risk to one’s cloud estate. This can only be accomplished by gathering and analyzing all of the relevant data in a single solution that helps you see the full context – and passing that context along to other teams like DevOps – so that organizations can start being more strategic about prioritizing and remediating risks in their environment.

While there are many cloud security tools and vendors that focus on various aspects of cloud security, such as misconfigurations, vulnerabilities, access permissions, and exposure to the internet, very few offer a holistic understanding of all of the above combined to provide a “true” understanding of risk.

A holistic approach to cloud security with InsightCloudSec

Maintaining visibility can only get you so far from a security perspective. Given the sheer volume of monitored resources, chances are that, without an effective strategy to prioritize the flood of alerts cloud environments produce, your teams won’t know where to start.

The cloud is here to stay, and it is ever-changing. As cloud security and technologies evolve, so do attempts by bad actors to breach it. It is crucial for organizations to invest in best practices and automated cloud security throughout the development lifecycle. Cloud architectures and initiatives must be built on solid risk detection, prioritization and management processes, and platforms that provide seamless and real-time visibility into the true risk posture of the organization.

Increasingly, organizations want to focus their efforts on activities that increase their bottom line and competitive advantage. They simply don’t have the time to sift through multiple lines of code, teams, and repositories to understand the breadth and depth of risks associated with their cloud estate. Cloud security has to be looked at holistically to understand its true impact and threat to the organization.

That’s the difference with InsightCloudSec. We go beyond providing visibility to help organizations uncover the most critical threats facing their cloud ecosystem and provide guidance toward prioritization and response based on the true, holistic risk across multiple security domains. With a higher signal-to-noise ratio, development teams will be able to detect, understand, prevent, prioritize, and respond to threats better and faster, enabling them to build safely and efficiently in a multi-cloud environment.

Interested in learning more? Don’t hesitate to request a free demo!


Security Is Shifting in a Cloud-Native World: Insights From RSAC 2022

Post Syndicated from Jesse Mack original https://blog.rapid7.com/2022/06/16/security-is-shifting-in-a-cloud-native-world-insights-from-rsac-2022/


The cloud has become the default for IT infrastructure and resource delivery, allowing an unprecedented level of speed and flexibility for development and production pipelines. This helps organizations compete and innovate in a fast-paced business environment. But as the cloud becomes more ingrained, the ephemeral nature of cloud infrastructure is presenting new challenges for security teams.

Several talks by our Rapid7 presenters at this year’s RSA Conference touched on this theme. Here’s a closer look at what our RSAC 2022 presenters had to say about adapting security processes to a cloud-native world.

A complex picture

As Lee Weiner, SVP Cloud Security and Chief Innovation Officer, pointed out in his RSA briefing, “Context Is King: The Future of Cloud Security,” cloud adoption is not only increasing — it’s growing more complex. Many organizations are bringing on multiple cloud vendors to meet a variety of different needs. One report estimates that a whopping 89% of companies that have adopted the cloud have chosen a multicloud approach.

This model is so popular because of the flexibility it offers organizations to utilize the right technology, in the right cloud environment, at the right cost — a key advantage in today’s marketplace.

“Over the last decade or so, many organizations have been going through a transformation to put themselves in a position to use the scale and speed of the cloud as a strategic business advantage,” Jane Man, Director of Product Management for VRM, said in her RSA Lounge presentation, “Adapting Your Vulnerability Management Program for Cloud-Native Environments.”

While DevOps teams can move more quickly than ever before with this model, security pros face a more complex set of questions than with traditional infrastructure, Lee noted. How many of our instances are exposed to known vulnerabilities? Do they have proper identity and access management (IAM) controls established? What levels of access do those permissions actually grant users in our key applications?

New infrastructure, new demands

The core components of vulnerability management remain the same in cloud environments, Jane said in her talk. Security teams must:

  • Get visibility into all assets, resources, and services
  • Assess, prioritize, and remediate risks
  • Communicate the organization’s security and compliance posture to management

But because of the ephemeral nature of the cloud, the way teams go about completing these requirements is shifting.

“Running a scheduled scan, waiting for it to complete and then handing a report to IT doesn’t work when instances may be spinning up and down on a daily or hourly basis,” she said.

In his presentation, Lee expressed optimism that the cloud itself may help provide the new methods we need for cloud-native security.

“Because of the way cloud infrastructure is built and deployed, there’s a real opportunity to answer these questions far faster, far more efficiently, far more effectively than we could with traditional infrastructure,” he said.

Calling for context

For Lee, the goal is to enable secure adoption of cloud technologies so companies can accelerate and innovate at scale. But there’s a key element needed to achieve this vision: context.

What often prevents teams from fully understanding the context around their security data is the fact that it is siloed, and the lack of integration between disparate systems requires a high level of manual effort to put the pieces together. To really get a clear picture of risk, security teams need to be able to bring their data together with context from each layer of the environment.

But what does context actually look like in practice, and how do you achieve it? Jane laid out a few key strategies for understanding the context around security data in your cloud environment.

  • Broaden your scope: Set up your VM processes so that you can detect more than just vulnerabilities in the cloud — you want to be able to see misconfigurations and issues with IAM permissions, too.
  • Understand the environment: When you identify a vulnerable instance, identify if it is publicly accessible and what its business application is — this will help you determine the scope of the vulnerability.
  • Catch early: Aim to find and fix vulnerabilities before they reach production by shifting security left, earlier in the development cycle.
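The prioritization cues above can be sketched as a simple scoring pass over findings. The multipliers here are arbitrary and purely illustrative; the takeaway is that context can reorder findings that raw severity alone would rank differently:

```python
def risk_score(finding: dict) -> float:
    """Weight raw severity by exposure and business context (toy weights)."""
    score = finding["cvss"]
    if finding.get("public"):
        score *= 1.5  # reachable from the internet
    if finding.get("business_critical"):
        score *= 1.3  # tied to a key application
    return round(score, 1)

findings = [
    {"id": "vm-a", "cvss": 9.8, "public": False, "business_critical": False},
    {"id": "vm-b", "cvss": 6.5, "public": True, "business_critical": True},
]
for f in sorted(findings, key=risk_score, reverse=True):
    print(f["id"], risk_score(f))
```

Note that the lower-CVSS finding on the public, business-critical asset ends up ranked first, which is the point of contextual prioritization.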

4 best practices for context-driven cloud security

Once you’re able to better understand the context around security data in your environment, how do you fit those insights into a holistic cloud security strategy? For Lee, this comes down to four key components that make up the framework for cloud-native security.

1. Visibility and findings

You can’t secure what you can’t see — so the first step in this process is to take a full inventory of your attack surface. With different kinds of cloud resources in place and providers releasing new services frequently, understanding the security posture of these pieces of your infrastructure is critical. This includes understanding not just vulnerabilities and misconfigurations but also access, permissions, and identities.

“Understanding the layer from the infrastructure to the workload to the identity can provide a lot of confidence,” Lee said.

2. Contextual prioritization

Not everything you discover in this inventory will be of equal importance, and treating it all the same way just isn’t practical or feasible. The vast amount of data that companies collect today can easily overwhelm security analysts — and this is where context really comes in.

With integrated visibility across your cloud infrastructure, you can make smarter decisions about what risks to prioritize. Then, you can assign ownership to resource owners and help them understand how those priorities were identified, improving transparency and promoting trust.

3. Prevent and automate

The cloud is built with automation in mind through Infrastructure as Code — and this plays a key role in security. Automation can help boost efficiency by minimizing the time it takes to detect, remediate, or contain threats. A shift-left strategy can also help with prevention by building security into deployment pipelines, so development teams can identify vulnerabilities before they reach production.

Jane echoed this sentiment in her talk, recommending that companies “automate to enable — but not force — remediation” and use tagging to drive remediation of vulnerabilities found running in production.
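Jane's "enable, but not force, remediation" advice can be sketched as a tag-driven routing rule. The tag keys (`owner`, `auto-remediate`) and the default owner below are invented for illustration; real pipelines would read provider tags or labels:

```python
def route_finding(finding, default_owner="security-team"):
    """Route a security finding to its resource owner based on tags.

    Notify by default; remediate automatically only where the
    resource owner has opted in via tagging.
    """
    tags = finding.get("tags", {})
    owner = tags.get("owner", default_owner)
    action = "auto-remediate" if tags.get("auto-remediate") == "true" else "notify"
    return {"owner": owner, "action": action}

finding = {
    "resource": "bucket-logs",
    "tags": {"owner": "data-eng", "auto-remediate": "true"},
}
```

Untagged resources fall back to the security team and a notification-only action, which keeps remediation opt-in rather than forced.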

4. Runtime monitoring

The next step is to continually monitor the environment for vulnerabilities and threat activity — and as you might have guessed, monitoring looks a little different in the cloud. For Lee, it’s about leveraging the increased number of signals to understand if there’s any drift away from the way the service was originally configured.

He also recommended using behavioral analysis to detect threat activity and setting up purpose-built detections that are specific to cloud infrastructure. This will help ensure the security operations center (SOC) has the most relevant information possible, so they can perform more effective investigations.
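In the simplest case, the drift check Lee describes reduces to diffing a recorded baseline against the live configuration. This flat-dict sketch is illustrative only; real services have nested configurations and richer change feeds:

```python
def config_drift(baseline, current):
    """Return keys whose values have drifted from the recorded baseline.

    Both arguments are flat dicts of configuration settings; nested
    configs would need a recursive diff.
    """
    drifted = {}
    for key, expected in baseline.items():
        actual = current.get(key)
        if actual != expected:
            drifted[key] = {"expected": expected, "actual": actual}
    return drifted

baseline = {"public_access": False, "encryption": "aes-256", "logging": True}
current  = {"public_access": True,  "encryption": "aes-256", "logging": True}
```

A non-empty result is exactly the "drift away from the way the service was originally configured" signal that runtime monitoring should surface to the SOC.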

Lee stressed that in order to carry out the core components of cloud security and achieve the outcomes companies are looking for, having an integrated ecosystem is absolutely essential. This will help prevent data from becoming siloed, enable security pros to obtain that ever-important context around their data, and let teams collaborate with less friction.

Looking for more insights on how to adapt your security program to a cloud-native world? Check out Lee’s presentation on demand, or watch our replays of Rapid7 speakers’ sessions from RSAC 2022.


NEVER MISS A BLOG

Get the latest stories, expertise, and news about security today.

Update for CIS Google Cloud Platform Foundation Benchmarks – Version 1.3.0

Post Syndicated from Ryan Blanchard original https://blog.rapid7.com/2022/05/13/update-for-cis-google-cloud-platform-foundation-benchmarks-version-1-3-0/


The Center for Internet Security (CIS) recently released an updated version of their Google Cloud Platform Foundation Benchmarks – Version 1.3.0. Expanding on previous iterations, the update adds 21 new benchmarks covering best practices for securing Google Cloud environments.

The updates were broad in scope, with recommendations covering configurations and policies ranging from resource segregation to Compute and Storage. In this post, we’ll briefly cover what CIS Benchmarks are, dig into a few key highlights from the newly released version, and highlight how Rapid7 InsightCloudSec can help your teams implement and maintain compliance with new guidance as it becomes available.

What are CIS Benchmarks?

In the rare case that you’ve never come across them, the CIS Benchmarks are a set of recommendations and best practices determined by contributors across the cybersecurity community intended to provide organizations and security practitioners with a baseline of configurations and policies to better protect their applications, infrastructure, and data.

While not a regulatory requirement, the CIS Benchmarks provide a foundation for establishing a strong security posture, and as a result, many organizations use them to guide the creation of their own internal policies. As new benchmarks are created and updates are announced, many throughout the industry sift through the recommendations to determine whether or not they should be implementing the guidelines in their own environments.

CIS Benchmarks can be even more beneficial to practitioners taking on emerging technology areas where they may not have the background knowledge or experience to confidently implement security programs and policies. In the case of the GCP Foundation Benchmarks, they can prove to be a vital asset for folks looking to get started in cloud security, or for those taking on added responsibility for their organizations' cloud environments.

Key highlights from CIS GCP Foundational Benchmarks 1.3.0

Relative to benchmarks created for more traditional security areas, such as endpoint operating systems and Linux, those developed for cloud service providers (CSPs) are relatively new. As a result, when updates are released they tend to be fairly substantial in the volume of new recommendations they introduce. Let's dig a bit further into some of the key highlights from version 1.3.0 and why they're important to consider for your own environment.

2.13 – Ensure Cloud Asset Inventory is enabled

Enabling Cloud Asset Inventory is critical to maintaining visibility into your entire environment, providing a real-time and retroactive (5 weeks of history retained) view of all assets across your cloud estate. This is critical because in order to effectively secure your cloud assets and data, you first need to gain insight into everything that’s running within your environment. Beyond providing an inventory snapshot, Cloud Asset Inventory also surfaces metadata related to those assets, providing added context when assessing the sensitivity and/or integrity of your cloud resources.
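To make the snapshot-plus-retention idea concrete: this is not the Cloud Asset Inventory API itself, just a local sketch over a hypothetical exported asset list, with a five-week window mirroring the retained history mentioned above:

```python
from collections import Counter
from datetime import datetime, timedelta, timezone

RETENTION = timedelta(weeks=5)  # mirrors the inventory's retained history window

def inventory_snapshot(assets, now):
    """Summarize an exported asset list by type, keeping only records
    updated inside the retention window. Record shape is hypothetical."""
    recent = [a for a in assets if now - a["updated"] <= RETENTION]
    return Counter(a["type"] for a in recent)

now = datetime(2022, 6, 1, tzinfo=timezone.utc)
assets = [
    {"type": "compute.Instance", "updated": now - timedelta(days=3)},
    {"type": "storage.Bucket",   "updated": now - timedelta(days=10)},
    {"type": "compute.Instance", "updated": now - timedelta(weeks=8)},  # outside window
]
```

Grouping by asset type is the first step toward the "insight into everything that's running" that the benchmark calls for; the real service also attaches per-asset metadata for deeper context.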

4.11 – Ensure that compute instances have Confidential Computing enabled

This is a really powerful new configuration that enables organizations to secure their mission-critical data throughout its lifecycle, including while it is actively in use. Typically, encryption is only available while data is at rest or in transit. Making use of AMD's Secure Encrypted Virtualization (SEV) technology, Confidential Computing allows customers to keep data encrypted even while it is being processed, such as while it is indexed or queried.

A dozen new recommendations for securing GCP databases

The new benchmarks added 12 new recommendations targeted at securing GCP databases, each of which is geared toward addressing issues related to data loss or leakage. This aligns with Verizon's most recent Data Breach Investigations Report, which found that data stores remained the most commonly exploited cloud service, with more than 40% of breaches resulting from misconfiguration of cloud data stores such as AWS S3 buckets, Azure Blob Storage, and Google Cloud Storage buckets.

How InsightCloudSec can help your team align to new CIS Benchmarks

In response to the recent update, Rapid7 has released a new compliance pack – GCP 1.3.0 – for InsightCloudSec to ensure that security teams can easily check their environment for adherence to the new benchmarks. The new pack contains 57 Insights to help organizations reconcile their existing GCP configurations against the new recommendations and best practices. Should your team need to make any adjustments based on the benchmarks, InsightCloudSec users can leverage bots to notify the necessary team(s) or remediate the issues automatically.

In subsequent releases, we will continue to update the pack as more filters and Insights become available. If you have specific questions on this capability or a supported GCP resource, reach out to us through the Customer Portal.


To the Left: Your Guide to Infrastructure as Code for Shifting Left

Post Syndicated from Marla Rosner original https://blog.rapid7.com/2021/09/27/to-the-left-your-guide-to-infrastructure-as-code-for-shifting-left/


It’s the cloud’s world now, and we’re all just living in it. The mass migration of organizational infrastructure to the cloud isn’t slowing down any time soon — and really, why would it? Cloud computing has allowed developers to move at vastly greater speeds than ever before. And this in turn lets businesses move at greater speeds than ever before. What could go wrong?

If you’re reading this blog, you probably already know the answer: data security and regulatory compliance. With so much development, testing, and deployment happening all the time, it’s far too easy for infrastructure misconfigurations, compliance violations, or other risks to slip through the cracks.

Right now, these risks are most often found and addressed at runtime, after the proverbial barn door has already been left open and the horses are long gone. It’s obviously not ideal to have developers racing around trying to fix security issues that have already gone live and put the organization at risk. It’s also not all that optimal for those developers to constantly have to drop their current projects to put out security fires.

So our beleaguered security teams are stuck acting as the organizational killjoys constantly pumping the brakes on development, while developers are left unable to take full advantage of the speed cloud offers them. The bottom line: No one’s happy.

The power of infrastructure as code

This, of course, is where our favorite catchy slogan “shift left” comes into play. What organizations need to address all these issues is to shift security left, earlier in the development cycle. This shift allows teams to catch misconfigurations before they go live and expose an organization to risk. In this way, shifting left also keeps security from becoming a bottleneck for development. And it keeps both DevOps and SecOps happy — with their processes and with each other.

So how do you make this rosy picture a reality for yourself and your organization? The key is infrastructure as code (IaC). Traditionally, you would need to create and configure infrastructure by hand. The IaC approach replaces that manual work with declarative statements that define the infrastructure needed to run code. Essentially, IaC turns the creation of infrastructure into a shared, programmatic task within and between teams, one that can easily be replicated as often as needed.

By evaluating these IaC templates before runtime, developers are empowered to build more secure applications. The IaC templates provide the structure and feedback developers need to understand and resolve risks, and integrate security and compliance into all parts of the CI/CD process. With this, DevOps can become the primary safeguard against misconfigurations and risk without overly disturbing their established workflows.
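Evaluating a template before runtime amounts to walking its parsed resources and applying policy checks. The resource schema below is invented for illustration and matches no particular provider or tool:

```python
def check_template(template):
    """Flag common misconfigurations in a parsed IaC template.

    The template is a plain dict; resource shapes here are hypothetical.
    """
    issues = []
    for name, resource in template.get("resources", {}).items():
        if resource.get("type") == "storage_bucket" and resource.get("public_read"):
            issues.append(f"{name}: bucket allows public read access")
        if resource.get("type") == "compute_instance" and not resource.get("encrypted_disk", True):
            issues.append(f"{name}: instance disk is not encrypted")
    return issues

template = {
    "resources": {
        "logs": {"type": "storage_bucket", "public_read": True},
        "web":  {"type": "compute_instance", "encrypted_disk": True},
    }
}
```

Wiring a check like this into the CI/CD pipeline is what lets the misconfigured bucket fail the build instead of going live.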

All in all, IaC helps increase the speed of deployment, reduce misconfiguration and compliance errors, improve the relationship between developers and security, and lower costs.

Getting started with IaC

At this point, you may be thinking, “OK, shifting left with IaC sounds great — but how do I make that happen?”

There’s no one-size-fits-all answer to that question. Not all tools for IaC and shifting cloud security left are created equal. And the type of tool you need will depend on the specific characteristics of your own organization. But if you’re looking to bring the IaC revolution home to your organization, there are a few crucial points to keep in mind.

  • Make use of both static and dynamic analysis. These are the two main approaches for shifting left using IaC. Static analysis is faster but more limited in scope, whereas dynamic analysis is slower but more accurate, as it analyzes not only the IaC template but any environments or services the template works with. You’ll need access to both approaches to properly safeguard your cloud.
  • Be sure the cloud security tool you choose supports not only the cloud environment you currently use, but also any you might expand to in the future. Not all tools support all cloud environments. Whatever tool you invest in needs to fit the current state of your organization, without imposing limits on your future growth. This also applies to CI/CD orchestration tools: Any IaC tool you acquire should be compatible with your CI/CD tools, and not all of them will be.
  • Keep the total number of IaC tools to a minimum. Modern cloud security is complex enough as it is, and excess tools will only multiply the headaches for Dev and SecOps exponentially.
  • Consider where and how different security policies need to be used. Depending on their nature (say, public vs. private) and the kind of information they hold, different clouds may need different policies. Policies may also need to be enforced differently for different stages of the CI/CD pipeline. You’ll need a tool that’s flexible enough to manage all of these scenarios effectively.
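The last point, mapping policies to cloud type and pipeline stage, can be sketched as a lookup with a strict fallback. The policy names here are placeholders; real compliance packs are far larger:

```python
POLICIES = {
    # Illustrative policy sets keyed by (cloud kind, pipeline stage).
    ("public", "deploy"):  {"no_public_buckets", "require_encryption", "require_mfa"},
    ("public", "build"):   {"no_public_buckets", "require_encryption"},
    ("private", "deploy"): {"require_encryption"},
}

def policies_for(cloud_kind, stage):
    """Pick the policy set for a cloud type and pipeline stage,
    falling back to the strictest set when no exact match exists."""
    strictest = POLICIES[("public", "deploy")]
    return POLICIES.get((cloud_kind, stage), strictest)
```

Failing closed, by defaulting to the strictest set, is a deliberate choice: an unknown environment should get more scrutiny, not less.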

Rapid7’s own cloud security tool, InsightCloudSec, is a fully integrated solution enabling continuous security and compliance for complex, multi-cloud environments. InsightCloudSec allows you to shift cloud security left using IaC, catching and remediating misconfigurations and vulnerabilities before they go live.

With InsightCloudSec, security and compliance in the cloud doesn’t have to slow you down. Learn more here.

Cloud Challenges in the Age of Remote Work: Rapid7’s 2021 Cloud Misconfigurations Report

Post Syndicated from Shelby Matthews original https://blog.rapid7.com/2021/09/09/cloud-challenges-in-the-age-of-remote-work-rapid7s-2021-cloud-misconfigurations-report/


A lot changed in 2020, and the way businesses use the cloud was no exception. According to one study, 90% of organizations plan to increase their use of cloud infrastructure following the COVID-19 pandemic, and 61% are planning to optimize the way they currently use the cloud. The move to the cloud has increased organizations’ ability to innovate, but it’s also significantly impacted security risks.

Cloud misconfigurations have been among the leading sources of attacks and data breaches in recent years. One report found the top causes of cloud misconfigurations were lack of awareness of cloud security and policies, lack of adequate controls and oversight, and the presence of too many APIs and interfaces. As employees started working from home, the problem only got worse. IBM's 2021 Cost of a Data Breach report found that the average cost of a breach was 24.2% higher when remote work was a factor than when it was not.

What’s causing misconfigurations?

Rapid7 researchers found and studied 121 publicly reported cases of data exposures in 2020 that were directly caused by a misconfiguration in the organization’s cloud environment. The good news is that 62% of these cases were discovered by independent researchers and not hackers. The bad news? There are likely many more data exposures that hackers have found but the impacted organizations still don’t know about.

Here are some of our key findings:

  • A lot of misconfigurations happen because an organization wants to make access to a resource easier.
  • The top three industries impacted by data exposure incidents were information, entertainment, and healthcare.
  • AWS S3 and ElasticSearch databases accounted for 45% of the incidents.
  • On average, there were 10 reported incidents a month across 15 industries.
  • The median data exposure was 10 million records.

Traditionally, security has come at the end of the development cycle, allowing vulnerabilities to be missed — but we're here to help. InsightCloudSec is a cloud-native security platform designed to help you shift your cloud security program left, making security an earlier part of the cycle while increasing workflow automation and reducing noise in your cloud environment.

Check out our full report that goes deeper into how and why these data breaches are occurring.

Cloud Security Glossary: Key Terms and Definitions

Post Syndicated from Shelby Matthews original https://blog.rapid7.com/2021/08/11/cloud-security-glossary-key-terms-to-know/


When navigating the complexities of the public cloud, it’s easy to get lost in the endless acronyms, industry jargon, and vendor-specific terms. From K8s to IaC to Shift Left, it can be helpful to have a map to navigate the nuances of this emerging segment of the market.

That’s why a few cloud security experts here at Rapid7 created a list of terms that cover the basics — the key terms and concepts that help you continue your journey into cloud security and DevSecOps with clarity and confidence. Here are the most important entries in your cloud security glossary.


Application Program Interface (API): A set of functions and procedures allowing for the creation of applications that can access the features or data of an operating system, application, or other service.

  • The InsightCloudSec API can be used to create insights and bots, modify compliance packs, and perform other functions outside of the InsightCloudSec user interface.

Cloud Security Posture Management (CSPM): CSPM solutions continuously manage cloud security risk. They detect, log, report, and provide automation to address common issues. These can range from cloud service configurations to security settings and are typically related to governance, compliance, and security for cloud resources.

Cloud Service Provider (CSP): A third-party company that offers a cloud-based platform, infrastructure, application, or storage services. The most popular CSPs are AWS, Azure, Alibaba, and GCP.

Cloud Workload Protection Program (CWPP): CWPPs help organizations protect their capabilities or workloads (applications, resources, etc.) running in a cloud instance.

Container Security: A container represents a software application and may contain all necessary code, run-time, system tools, and libraries needed to run the application. Container hosts can be packed with risk, so properly securing them means maintaining visibility into vulnerabilities associated with their components and layers.

Entitlements: Entitlements, or permissions entitlements, give domain users control over basic users’ and organization admins’ permissions to access certain parts of a tool.

Identity and Access Management (IAM): A framework of policies and technologies for ensuring the right users have the appropriate access to technology resources. It is closely related to Cloud Infrastructure Entitlement Management (CIEM), which provides identity and access governance controls with the goal of reducing excessive cloud infrastructure entitlements and streamlining least-privileged access (LPA) protocols across dynamic, distributed cloud environments.

Infrastructure: With respect to cloud computing, infrastructure refers to an enterprise’s entire cloud-based or local collection of resources and services. This term is used synonymously with “cloud footprint.”

Infrastructure as Code (IaC): The process of managing and provisioning computer data centers through machine-readable definition files, rather than physical hardware configuration or interactive configuration tools. With IaC, configuration files contain your infrastructure specifications, making it easier to edit and distribute configurations.

Kubernetes: A portable, extensible open-source platform for deploying, managing, and orchestrating containerized workloads and services at scale.

Least-Privileged Access (LPA): A security and access control concept that gives users the minimum necessary permissions based on the functions required for their particular roles.

Shared Responsibility Model: A framework in cloud computing that defines who is responsible for the security and compliance of each component of the cloud architecture. With on-premise data centers, the responsibility is solely on your organization to manage and maintain security for the entire technology stack, from the physical hardware to the applications and data. Because public cloud computing purposefully abstracts layers of that tech stack, this model acts as an agreement between the CSP and their customer as to who takes on the responsibility of managing and maintaining proper hygiene and security within the cloud infrastructure.

Shift Left: A concept that refers to building security into an earlier stage of the development cycle. Traditionally, security checks occurred at the end of the cycle. By shifting left, organizations can ensure their applications are more secure from the start — and at a much lower cost.

BECOME FLUENT IN CLOUD SECURITY

Read the full glossary now

Why More Teams are Shifting Security Analytics to the Cloud This Year

Post Syndicated from Margaret Zonay original https://blog.rapid7.com/2021/02/17/why-more-teams-are-shifting-security-analytics-to-the-cloud-this-year/


As the threat landscape continues to evolve in size and complexity, so does the security skills and resource gap, leaving organizations both understaffed and overwhelmed. An ESG study found that 63% of organizations say security is more difficult than it was two years ago. Teams cite the growing attack surface, increasing alerts, and bandwidth as key reasons.

For their research, ESG surveyed hundreds of IT and cybersecurity professionals to gain more insights into strategies for driving successful security analytics and operations. Read the highlights of their study below, and check out the full ebook, “The Rise of Cloud-Based Security Analytics and Operations Technologies,” here.

The attack surface continues to grow as cloud adoption soars

Many organizations have been adopting cloud solutions, giving teams more visibility across their environments, while at the same time expanding their attack surface. The trend toward the cloud is only continuing to increase—ESG’s research found that 82% of organizations are dedicated to moving a significant amount of their workload and applications to the public cloud. The surge in remote work over the past year has arguably only amplified this, making it even more critical for teams to have detection and response programs that are more effective and efficient than ever before.

Organizations are looking toward consolidation to streamline incident detection and response

ESG found that 70% of organizations are using a SIEM tool today, as well as an assortment of other point solutions, such as an EDR or network traffic analysis solution. While this fixes the visibility issue plaguing security teams today, it doesn't help with streamlining detection and response, which is likely why 36% of cybersecurity professionals say integrating disparate security analytics and operations tools is one of their organization's highest priorities. Consolidating solutions drastically cuts down on false-positive alerts, eliminating the noise and confusion of managing multiple tools.

Combat complexity and drive efficiency with the right cloud solution

A detection and response solution that can correlate all of your valuable security data in one place is key for accelerated detection and response across the sprawling attack surface. Rapid7's InsightIDR provides advanced visibility by automatically ingesting data from across your environment—including logs, endpoints, network traffic, cloud, and user activity—into a single solution, eliminating the need to jump in and out of multiple tools and giving hours back to your team. And with pre-built automation workflows, you can take action directly from within InsightIDR.

See ESG’s findings on cloud-based solutions, automation/orchestration, machine learning, and more by accessing “The Rise of Cloud-Based Security Analytics and Operations Technologies” ebook.
