
Idan Perez

CTO and co-founder

Michael St.Onge

Head of Technical Services

Joseph Barringhaus

VP of Marketing


Key Findings

A long long time ago, we had computers sitting in racks in the backs of our offices.

These computers served our software far and wide, allowing our users to rejoice and marvel at our creations.

When we worried something nefarious was going on in our systems, a veteran member of our team would step into the backroom––a holy space permitting entry only to the greyest of the greybeards. The greybeard would run some pieces of software known only to them, maybe peek their head out to ask the developers a few questions, work some magic, and then walk back into the open space full of cubicles as a wave of calm swept through the room.

This is what security remediation looked like in the good old days.

(Or at least how we remember it to be).

Remediating security issues in the age of the cloud is far from a stroll through the server farm. Today’s engineers are so far removed from the actual infrastructure, and that infrastructure has grown so complex and virtual, that “getting your hands dirty” to fix a problem is much harder than it once was.

To solve that problem, we adopted tools that promised to “handle compliance” for our clouds. These tools evolved into CSPMs and later into CNAPPs—platforms expected to pinpoint exactly what needs fixing.

We invested in these shiny new platforms that deliver more alerts, more data, and more… well, everything. Yet, in practice, the CNAPPs proved almost too good at their job––they’ve become so effective at detecting issues that they’re finding more vulnerabilities than most security teams can realistically address, no matter how skilled or well-staffed. Meanwhile, security hasn’t gotten any easier over the years—threat models have evolved, new technologies and use cases demand attention, and breaches continue to happen year after year.

This is usually the point where we’d say, “And let’s not forget how scary AI is.”

Unpopular opinion: AI in and of itself is not intimidating. The real challenge? The recent AI boom has dramatically increased our attackers’ efficiency—making defense harder than ever.

 

To truly understand what cloud remediation looks like today, we at Tamnoon analyzed over 4.76 million individual CNAPP alerts, collected over a period of 12 months from leading CNAPPs and CSPMs before any Tamnoon remediation took place. The major takeaways are presented below; if you’re curious about how we define each term, where the data comes from, and some of the backstory, check out the methodology section at the end for a detailed breakdown.

Let’s dive in.

What We Know Today

Takeaway 1:

Different CNAPPs Classify the Same Misconfigurations Differently

One would think that an alert based on the same underlying asset misconfiguration would carry the same severity level across different CNAPPs.

We also thought, at one point, that cloud services were inherently secure. Some thoughts can be misguided.

In practice, CNAPPs may assign different severity rankings to alerts based on contextual factors like the underlying asset type. While this can be beneficial, it highlights the importance of configuring custom rules within your CNAPP to ensure alert severity actually reflects (potential) real-world impact.
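To make that concrete, here is a minimal sketch of what a custom severity-override layer might look like, assuming you can export alerts from your CNAPP as JSON-like records. The field names (title, resource_type, tags), the rule conditions, and the public-by-design tag are illustrative assumptions, not any specific CNAPP’s API or rule syntax.

# Minimal sketch of a custom severity-override layer applied to exported
# CNAPP alerts. Field names and rules are illustrative assumptions, not any
# specific CNAPP's API or rule syntax.

OVERRIDES = [
    # (condition the alert must match, severity to assign)
    (lambda a: a["resource_type"] == "security_group"
               and "unrestricted" in a["title"].lower(), "critical"),
    (lambda a: a["resource_type"] == "s3_bucket"
               and a.get("tags", {}).get("public-by-design") == "true", "low"),
]

def apply_overrides(alert: dict) -> dict:
    """Return a copy of the alert with severity set by the first matching rule."""
    for matches, new_severity in OVERRIDES:
        if matches(alert):
            return {**alert, "severity": new_severity, "severity_overridden": True}
    return alert

alerts = [
    {"title": "Security group allows unrestricted access to port 22",
     "resource_type": "security_group", "severity": "informational"},
]
print([apply_overrides(a)["severity"] for a in alerts])  # ['critical']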

The following is a list of misconfigurations that were classified very differently across CNAPPs. These cases represent nearly 2% of our “Top 35” dataset, which comprises the most frequently occurring alerts in our data (see the methodology for detailed definitions).

Table: Underlying Misconfiguration / Severity Min / Severity Max / % of Total

  • Implement comprehensive monitoring for configuration changes
  • Regular rotation of access keys for enhanced security
  • EC2 Launch Templates Should Not Assign Public IPs to Network Interfaces
  • Security Groups Should Not Allow Unrestricted Access to Ports with High Risk
  • Outbound Traffic to Malicious IP Addresses
  • Overprivileged IAM Role

Perhaps the most glaring example is the fourth item: a security group issue that was labeled as “informational” by two CNAPPs and “critical” by three others.

This stark difference appears despite the alerts referring to the exact same underlying misconfiguration. We believe it stems from each CNAPP’s unique scan engine design, which reflects individual provider considerations rather than adherence to a unified standard. In other words, someone on each CNAPP’s research team decided what deserves attention and what doesn’t, and those decisions produce different severity ratings across the various CNAPPs.
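If you are comparing tools yourself, here is a small sketch of how that disagreement could be quantified, given alerts from several CNAPPs already mapped to the same underlying misconfiguration. The numeric severity ranking and the record layout are assumptions for illustration and do not reflect the normalization used for this report.

# Sketch: measuring how much CNAPPs disagree on severity for the same
# underlying misconfiguration. The severity ranking and record layout are
# assumptions for illustration.

SEVERITY_RANK = {"informational": 0, "low": 1, "medium": 2, "high": 3, "critical": 4}

def severity_spread(alerts):
    """Group alerts by misconfiguration and report the min/max severity seen."""
    by_misconfig = {}
    for a in alerts:
        by_misconfig.setdefault(a["misconfiguration"], []).append(a["severity"])
    return {
        misconfig: (min(sevs, key=SEVERITY_RANK.get), max(sevs, key=SEVERITY_RANK.get))
        for misconfig, sevs in by_misconfig.items()
    }

alerts = [
    {"misconfiguration": "Unrestricted access to high-risk ports", "cnapp": "A", "severity": "informational"},
    {"misconfiguration": "Unrestricted access to high-risk ports", "cnapp": "B", "severity": "critical"},
]
print(severity_spread(alerts))
# {'Unrestricted access to high-risk ports': ('informational', 'critical')}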

Takeaway 2:

10+ Years into Cloud Security—And We’re Still (Wildly) Exposed

There are very few organizations that have no risk exposure.

If you’re building anything in the cloud, it’s likely that somewhere along the road you will configure something that could be construed as dangerous, or use a few pieces of infrastructure that will inevitably have vulnerabilities found in them.

Cloud security is not about eliminating all risk—it’s about managing the risk you have effectively and making informed trade-offs between business goals and that risk. It turns out that in cloud security, despite state-of-the-art CNAPPs and 10+ years of technological advancements, we remain very much behind where we need to be in securing our cloud environments.

Let’s take a wider view for a moment––looking at all closed alerts in our “All Alerts” dataset (see the methodology section), the average MTTR is as follows:

Chart: Cloud Alert Resolution Time: Severity Matters (All Alerts Dataset, Closed Alerts)

You can clearly see that our exposure time to these misconfigurations is measured in months, not days. Even for critical alerts, the average MTTR remains extremely high at 128 days––or over 4 months of exposure.

Cloud security has a remediation problem, not an alert problem. More visibility hasn’t led to faster fixes; it’s only created bigger backlogs. The real opportunity is helping teams resolve issues efficiently with the right urgency, not just detecting more of them.

Mike Privette

CISO and Cybersecurity Economist, Return on Security

To take a different perspective on the problem, let’s break down the activities that occur (or don’t) once an alert is generated.

To analyze this, we broke down each closed critical alert’s remediation process into steps as defined by the CNAPP. The following timeline shows how much time each critical alert spent in different stages: idle (awaiting engagement), triage/owner identification, and remediation planning & execution.

Chart: Critical Alerts Still Take 128 Days to Solve (All Alerts Dataset, Closed Alerts)
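If you want to reproduce this kind of breakdown on your own data, here is a rough sketch of how time-in-stage could be computed from an alert’s status-change history. The stage names mirror the ones above, but the event schema, status values, and the example durations are hypothetical; every CNAPP exposes this history differently.

# Sketch: computing how long a closed alert spent in each lifecycle stage,
# given an ordered list of timestamped status changes. Status names, schema,
# and the example durations are hypothetical.
from datetime import datetime

STAGES = ["idle", "triage", "remediation"]

def time_per_stage(events):
    """events: ordered (iso_timestamp, stage) pairs, ending with a 'closed' entry."""
    durations = {stage: 0.0 for stage in STAGES}
    for (start, stage), (end, _) in zip(events, events[1:]):
        if stage in durations:
            delta = datetime.fromisoformat(end) - datetime.fromisoformat(start)
            durations[stage] += delta.total_seconds() / 86400  # days
    return durations

events = [
    ("2024-01-01T00:00:00", "idle"),         # alert created, awaiting engagement
    ("2024-03-01T00:00:00", "triage"),       # someone starts hunting for an owner
    ("2024-03-15T00:00:00", "remediation"),  # fix planned and executed
    ("2024-05-08T00:00:00", "closed"),
]
print(time_per_stage(events))  # {'idle': 60.0, 'triage': 14.0, 'remediation': 54.0}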

Based on our data analysis and experience at Tamnoon, three main factors contribute to the extended time spent on each step:

  1. It’s hard to understand what is more important: We saw organizations attempting to manage hundreds or thousands of critical alerts simultaneously. With such volume, prioritizing what to do next becomes challenging, causing many critical alerts to remain in the backlog for months at a time.
  2. It’s hard to find the owner: Who owns a piece of infrastructure is not a trivial question; resources often serve multiple functions, and multiple people (of varying degrees of importance and influence in the organization) are connected to each resource. Finding a single decision maker can be difficult.
  3. It’s hard to plan remediation safely: Just as identifying the owner is hard, evaluating possible outcomes and analyzing the blast radius of a specific remediation path is a significant undertaking. It’s not only technically challenging—organizations often lack a clear understanding of how each infrastructure component fits into the broader context, making it hard to estimate cost, operational impact, performance, complexity, and a back-out plan.

With the volume of alerts most organizations see in cloud computing, it’s going to take ruthless prioritization (to know what to fix first), detailed context (to know who needs to fix it), and precise remediation guidance (to know how to fix it) for organizations to get MTTR to where we want it to be.

Neil Carpenter

Field CTO, Orca Security


Takeaway 3:

You’ll Never Close Your Highs Alone

While criticals are the most urgent category, they aren’t even close to being the largest one—they account for only about 1.36% of all the alerts generated by the CNAPPs in our “All Alerts” dataset (see the methodology section).

Chart: High Alerts Make Up the Majority of Your Alerts (All Alerts Dataset)

The only time I’ve ever seen alert counts decrease is by creating generous ignore rules – fixing security findings is undoubtedly the most difficult part of cloud security, not finding them.

James Berthoty

Security Engineer Turned Analyst/Founder at Latio Tech

This figure is a result of the conflicting motivations that CNAPP platforms face: they need to surface as many potential risks as possible to demonstrate coverage, while keeping the critical queue small enough for security teams to actually act on.

Most CNAPPs maintain this balance by adhering to a well-known benchmark: keeping critical alerts below 2% of total alerts.

Note that this is a hard limit that remains fixed regardless of the total alert volume. This limit keeps your critical alert backlog manageable by essentially “shifting” many borderline-critical alerts into the high alerts pile.

This is reflected in the proportion of high-priority alerts: almost 34% of all alerts are classified as high. While some organizations manage to stay on top of their critical alerts, they often find themselves drowning in the sea of high-priority ones. And who can blame them? When your high-priority queue is 17 times larger than your critical one, even the most efficient security teams can feel overwhelmed.

What’s Hiding in Your High Alerts?

To put some things into perspective, let’s look at a couple of the alerts classified as “high” in our dataset that you’re most likely not getting to, and some of the incidents they were associated with in the past:

Before we move on, it’s worth noting: Tamnoon’s cloud security experts regularly help customers address these severity level inconsistencies: the practical solution involves implementing custom rules and configurations to ensure appropriate alert tagging.

We’ve observed that both severity levels and alert volumes can vary significantly between CNAPPs. The priority is maintaining a manageable workload for your cloud security engineers – sometimes the best way out of backlog hell is “resetting” the rules determined by the CNAPPs and rebuilding them around the nuances of your own cloud environment and business.

Takeaway 4:

Remediation Time Varies By Alert Category

Each piece of cloud infrastructure has unique properties, eccentricities, and inherent complexities.

To analyze remediation time patterns, we categorized the underlying misconfiguration of the alerts in our “All Alerts” dataset based on keywords found in both the alert names and their associated resources.

CI/CD (Continuous Integration and Continuous Deployment): Includes assets like CodeBuild (build automation), CodePipeline (orchestrating CI/CD workflows), and Elastic Beanstalk (application deployment).

Compute: Includes well-known resources like EC2 (virtual machines), Lambda (serverless compute), and ECS (container orchestration).

Identity & Access: Examples include IAM Users (individual access), IAM Roles (temporary access permissions), and Secrets Manager (securely storing sensitive data).

Networking: Examples include VPC (virtual private cloud for networking), API Gateway (API management and integrations), and CloudFront (content delivery network).

Monitoring & Governance: Includes CloudTrail (logging and governance), CloudWatch (monitoring and observability), and Organizations (account management).

Storage: Includes S3 (object storage), EBS (block storage for EC2), and RDS (managed relational databases like MySQL/PostgreSQL).

Streaming & Messaging: Examples include Kinesis (real-time data streaming), MSK (managed Kafka for event streaming), and SQS (queue service for decoupled architectures).
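As a rough illustration of the keyword-based bucketing described above, the sketch below maps an alert name and resource type to one of these categories. The keyword lists are our own simplified assumptions and are far coarser than the classification model actually used for this report.

# Simplified sketch of keyword-based category bucketing for alerts, using the
# alert name and associated resource type. The keyword lists are illustrative
# and far coarser than the classification model used in the report.

CATEGORY_KEYWORDS = {
    "CI/CD":                   ["codebuild", "codepipeline", "beanstalk"],
    "Compute":                 ["ec2", "lambda", "ecs", "instance"],
    "Identity & Access":       ["iam", "secrets manager", "access key"],
    "Networking":              ["vpc", "api gateway", "cloudfront", "security group", "port"],
    "Monitoring & Governance": ["cloudtrail", "cloudwatch", "organizations"],
    "Storage":                 ["s3", "ebs", "rds", "bucket"],
    "Streaming & Messaging":   ["kinesis", "msk", "sqs", "kafka"],
}

def categorize(alert_name: str, resource_type: str) -> str:
    """Return the first category whose keywords appear in the alert name or resource."""
    haystack = f"{alert_name} {resource_type}".lower()
    for category, keywords in CATEGORY_KEYWORDS.items():
        if any(keyword in haystack for keyword in keywords):
            return category
    return "Other"

print(categorize("Restrict Public Access and Permissions for Cloud Storage Buckets", "aws_s3_bucket"))
# Storage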

We analyzed the misconfigurations by category to identify both their frequency in the dataset and the average alert open duration for each of the alerts. Let’s first examine the distribution of alerts across misconfiguration categories:

Chart: Compute Misconfigurations Make Up Over Half of CNAPP Alerts (All Alerts Dataset)

Unsurprisingly, the most frequently issued alerts are for the most commonly used components of your infrastructure: compute, storage, and networking. What might be more surprising is the significant gap between the first and second place contenders—compute-related misconfigurations occur more than twice as often as storage-related ones.

Let’s now consider the average alert open duration by misconfiguration category: 

Chart: Some Misconfigurations Sit Open for Almost 3 Years (All Alerts Dataset, Open Alerts)

This graph leads to a very clear conclusion: 

An alert that sits open for years isn’t just an oversight or part of the backlog—it’s an attack waiting to happen. The longer a vulnerability sits open, the greater the risk of exploitation.

Pramod Gosavi

Sr. Principal, Blumberg Capital

Takeaway 5:

Critical Alerts Wait 3 Months Longer to Be Solved

One would assume that critical alerts––since they are, well, critical––would stay open for a short amount of time, as they’re tackled first by the cloud security team. That tracks with the SLAs most security teams are bound by: if you’re attempting to stay compliant with any major compliance standard, you must maintain strict SLAs for handling potentially dangerous cloud misconfigurations.

In practice, when we look at the “Top 35” dataset (see the methodology section), it appears that critical alerts stay open for 87 days longer on average than high alerts.

Chart: Remediation Delays: Critical Alerts Stay Open for Nearly a Year (Top 35 Dataset, Open Alerts)


Note that these figures are for alerts that are waiting to be solved. What about ones that we actually managed to solve?

The situation is much worse—out of the same dataset (the top 35 most frequently recurring alerts), it takes about 151 more days on average to solve a critical alert than it does to solve a high alert.

Chart: Faster Fixes Needed: Critical Alerts Take Over 2x Longer to Resolve (Top 35 Dataset, Closed Alerts)

This finding, while surprising, makes sense when we look deeper. Critical alerts often require complex solutions rather than simple, “checkbox” fixes. Our analysis of the top 5 critical alerts reveals issues that appear straightforward on the surface but involve complicated remediation processes:

Table: Policy Name / Severity Level / Resource Type / Category / % of Total Alerts

  • Restrict Public Access and Permissions for Cloud Storage Buckets
  • Restrict Default Network Access in Cloud Environments
  • Restrict Ingress Access to Critical Ports and Protocols
  • Restrict Public Access to Custom Images
  • ECS Services Exposed to Internet Through Load Balancer

Consider the most common critical alert: “Restrict Public Access and Permissions for Cloud Storage Buckets.” While adjusting bucket permissions is technically straightforward, a bucket’s public accessibility often exists for legitimate business purposes: hosting a static website, serving public downloads or assets, or sharing open datasets with partners and customers.

A CNAPP will flag these scenarios as problematic since it cannot automatically determine their legitimacy without specific configuration or human input.
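As an illustration, here is what a guarded remediation for this alert could look like with boto3: block public access unless the bucket is explicitly marked as intentionally public. The public-by-design tag is a hypothetical convention of our own, not an AWS or CNAPP standard, and any real fix should still go through the kind of blast-radius analysis discussed earlier.

# Sketch of a guarded remediation for publicly accessible S3 buckets. The
# "public-by-design" tag is a hypothetical convention for buckets that are
# intentionally public; it is not an AWS or CNAPP standard.
import boto3
from botocore.exceptions import ClientError

s3 = boto3.client("s3")

def is_public_by_design(bucket: str) -> bool:
    """Check for an (assumed) tag marking the bucket as intentionally public."""
    try:
        tags = s3.get_bucket_tagging(Bucket=bucket)["TagSet"]
    except ClientError:
        return False  # bucket has no tags at all
    return any(t["Key"] == "public-by-design" and t["Value"] == "true" for t in tags)

def block_public_access(bucket: str) -> None:
    """Apply the standard S3 public access block settings to the bucket."""
    s3.put_public_access_block(
        Bucket=bucket,
        PublicAccessBlockConfiguration={
            "BlockPublicAcls": True,
            "IgnorePublicAcls": True,
            "BlockPublicPolicy": True,
            "RestrictPublicBuckets": True,
        },
    )

def remediate(bucket: str) -> None:
    if is_public_by_design(bucket):
        print(f"Skipping {bucket}: marked as intentionally public, route to human review")
    else:
        block_public_access(bucket)
        print(f"Blocked public access on {bucket}")

In practice, a script like this would run behind the prioritization, owner-identification, and planning steps described in Takeaway 2, not as a blanket auto-fix.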

Working our way down the list, “Restrict Ingress Access to Critical Ports and Protocols” might actually involve a legacy service that no one in the current organization knows how to work with––one of those magic “black boxes” that just works and no one wants to touch.

And, perhaps––and that’s the most recurring theme in our conversations––these critical alerts are just plain hard to fix. Given the amount of context switching, the time spent per alert, and the sheer complexity of some cloud environments, it can be hard to maintain a good pace of remediation.


How Tamnoon Can Help

Tamnoon helps cloud security teams reduce their cloud exposure at scale without adding another tool to manage. We offer a fully managed service that combines AI-powered technology with human cloud security experts to reduce your risk, get to zero critical alerts, and streamline your cloud security operations.


We don’t just identify problems—we help you fix them. Each initiative includes detailed research into findings, a full impact analysis of proposed solutions, and customized, step-by-step remediation plans.


With Tamnoon, cloud security becomes smarter, faster, and easier to manage.


Methodology

Population

This study analyzes real-world cloud security data from dozens of cloud environments before any Tamnoon-led remediations took place, providing empirical evidence rather than relying on survey responses.

Our analysis encompasses two comprehensive datasets:

  1. The “All Alerts” dataset: Contains 4,762,029 CNAPP alerts generated over a 12-month period, representing the complete spectrum of cloud security findings
  2. The “Top 35” dataset: Comprises 3,249,548 alerts focusing on the 35 most frequent alert types, enabling detailed analysis of common security patterns

The study employs a point-in-time analysis methodology. We established a fixed date as our reference point and analyzed the preceding 12 months of data from that position.

To ensure confidentiality, all data underwent comprehensive anonymization to protect organizational privacy and security.

The analyzed alerts originate from leading Cloud-Native Application Protection Platforms (CNAPPs) and cloud provider security tools:

  • AWS Security Hub
  • CloudGuard CNAPP (Check Point)
  • Falcon for Cloud (CrowdStrike)
  • FortiCNAPP (prev. Lacework)
  • Orca Security
  • Prisma Cloud (Palo Alto Networks)
  • Wiz

Please note that all alerts were generated from AWS environments only.

Classifying Alerts By Categories

Our AI analysis identified 1,860 unique misconfiguration types (referred to as “alert categories” in this report) from over 10,000 alert definitions. Here’s how we processed and analyzed the data:

– Collected alerts from all participating organizations
– Processed each alert through our misconfiguration classification model
– Assigned specific misconfiguration types to standardize alerts across different platforms
– Aggregated alerts based on their classified titles

Classifying Alerts By Status

We analyzed alert remediation through two distinct measurements:

  1. Open Alerts: We measured the Average Alert Open Duration for alerts that remained active at the time of analysis, providing insight into ongoing security issues.
  2. Closed Alerts: We calculated the Mean Time to Remediate (MTTR) for resolved alerts, allowing us to understand typical remediation timeframes for different types of alerts.

Definitions Used in This Report

Alert

A notification generated by a CNAPP or CSPM indicating a potential misconfiguration in a cloud environment.

Misconfiguration

A security issue caused by incorrect settings or configurations of cloud assets. Misconfigurations are grouped into categories using Tamnoon AI’s classification model to standardize and simplify their analysis. Each alert has one or more underlying misconfigurations associated with it.

Note: In this report, we use the term “misconfiguration” to refer to any issue detected by CNAPP platforms or cloud provider security tools, regardless of whether it’s a configuration error, a vulnerability, or another type of security finding. While these tools detect various types of issues (including software vulnerabilities, compliance violations, and identity misconfigurations), we’ve chosen to use “misconfiguration” as an umbrella term to maintain consistency throughout the report and avoid terminology confusion.

Category

Groupings of misconfigurations based on their asset type, functionality, or impact.

Severity

A unified risk level assigned to alerts by different CNAPPs & CSPMs. Each CNAPP/CSPM has its own version of severity scoring.

Key Metrics

MTTR (Mean Time to Remediate)

The average time it takes to close an alert, measured from when it is opened to when it is resolved. This calculation includes only closed alerts.

Average Alert Open Duration

The average time an alert remains open, measured from the time it is created by a CNAPP to the last time it is fetched with a status of “Open.” This metric naturally focuses only on open alerts.
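To make the distinction between these two metrics concrete, here is a minimal sketch that computes both from a list of alert records. The field names (status, opened_at, closed_at) and the reference date are assumptions for illustration, not our actual pipeline.

# Sketch: computing MTTR (closed alerts) and Average Alert Open Duration
# (open alerts) from simplified alert records. Field names are assumptions.
from datetime import datetime

def _days(start: str, end: str) -> float:
    return (datetime.fromisoformat(end) - datetime.fromisoformat(start)).total_seconds() / 86400

def mttr_days(alerts) -> float:
    """Mean Time to Remediate: average open-to-close time, closed alerts only."""
    closed = [a for a in alerts if a["status"] == "closed"]
    return sum(_days(a["opened_at"], a["closed_at"]) for a in closed) / len(closed)

def avg_open_duration_days(alerts, as_of: str) -> float:
    """Average Alert Open Duration: average age of still-open alerts at the last fetch."""
    still_open = [a for a in alerts if a["status"] == "open"]
    return sum(_days(a["opened_at"], as_of) for a in still_open) / len(still_open)

alerts = [
    {"status": "closed", "opened_at": "2024-01-01", "closed_at": "2024-05-08"},
    {"status": "open",   "opened_at": "2024-06-01"},
]
print(mttr_days(alerts))                             # 128.0
print(avg_open_duration_days(alerts, "2024-12-01"))  # 183.0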

A special thank you to Tom Granot for providing his analysis and takeaway on this report.
