Amazon CloudWatch is a popular AWS service for monitoring cloud applications that collects performance metrics and logs, creates telemetry dashboards, and responds to cloud events. In fact, it’s one of the key components in AWS that provides visibility into your AWS application performance. This visibility, when combined with code-to-cloud insights, allows organizations to connect telemetry data back to the underlying code and architectural decisions, ensuring a holistic understanding of their cloud environment.
That’s the good news. The bad news? While it provides observability and actionable insights that may be useful for decreasing your AWS expenditures, CloudWatch itself can be a cost driver. Without proper management, organizations can see their CloudWatch bills escalate rapidly, sometimes unexpectedly accounting for a significant portion of their overall AWS spend.
Understanding CloudWatch costs
To effectively control CloudWatch expenditures, it’s important to understand where all the costs come from:
Logs
Amazon CloudWatch is used to collect, monitor, and access logs from your EC2 instances, applications, and AWS services. Log costs are primarily driven by the volume of data ingested (per GB) and the duration of storage (per GB-month).
Here’s a closer look at the common causes of high CloudWatch log costs:
High log-ingestion volumes: The amount of logs you record is directly related to the number of applications/services you run, log verbosity, and application traffic/usage.
Log retention duration: By default, logs stay in CloudWatch indefinitely if a retention policy isn’t specified. Without a proper retention policy, logs can quickly grow to huge volumes, drastically increasing your storage costs.
Search and retrieval frequency: CloudWatch Logs Insights supports advanced querying of ingested logs through a domain-specific query language, filtering, and insights generation. Log querying is essential for monitoring and analytics pipelines, but it can incur additional costs if it’s not used carefully.
Metrics
Amazon CloudWatch enables the collection of metrics from more than 70 AWS services. While metrics are essential for observability, their cost scales with both volume and frequency. For example, collecting metrics at 1-minute intervals instead of 5-minute intervals increases the number of data points, significantly raising both storage and alarm costs.
Metric costs are based on the number of custom metrics, the resolution (standard or high), and the number of API requests for GetMetricData or PutMetricData.
Dashboards
In Amazon CloudWatch, dashboards come with specific cost considerations. Dashboard costs are associated with the number of dashboards exceeding the free tier and API operations related to their management.
Alarms
Alarms are another source of potential waste, and a high volume can substantially increase CloudWatch costs. Alarms based on high-resolution metrics (evaluated every 10 or 30 seconds) are particularly cost-intensive because they increase the frequency of metric evaluations and, consequently, alarm evaluations.
Alarm costs are incurred per alarm, per month, with higher rates for high-resolution alarms and composite alarms.
CloudWatch cost optimization strategies
To tackle the inefficiencies we’ve discussed and maximize cloud cost savings, follow these strategies:
CloudWatch logs optimization
Log ingestion management
Reducing the volume of logs sent to CloudWatch is one of the best ways to cut costs. Here’s how:
Evaluate all logs recorded by your services and applications and create appropriate log groups based on log importance, environment, and other factors specific to your organization (a quick inventory sketch follows this list).
Based on that evaluation, remove unnecessary logs and/or reduce verbosity (unless extra information is required for debugging).
Tap specific team members to review new logs generated by your applications and services.
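If it helps to put numbers behind that evaluation, here is a minimal boto3 sketch (assuming default AWS credentials and region) that lists log groups ordered by stored bytes so the heaviest producers stand out; the top-20 cutoff is just an illustration.

```python
import boto3

logs = boto3.client("logs")

# Collect every log group along with how much data it currently stores.
groups = []
paginator = logs.get_paginator("describe_log_groups")
for page in paginator.paginate():
    for group in page["logGroups"]:
        groups.append(
            {
                "name": group["logGroupName"],
                "stored_gb": group.get("storedBytes", 0) / 1e9,
                "retention_days": group.get("retentionInDays"),  # None = never expire
            }
        )

# Print the heaviest log groups first so you know where to focus cleanup.
for g in sorted(groups, key=lambda g: g["stored_gb"], reverse=True)[:20]:
    print(f"{g['stored_gb']:8.2f} GB  retention={g['retention_days']}  {g['name']}")
```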
Log retention settings
CloudWatch supports log retention periods ranging from one day to ten years. Follow these practical tips to choose a log retention policy that best fits your needs:
Apply retention periods to different log groups based on log information and factors like debugging value and origin environment.
Set up automated log expiration policies to remove stale data before it accumulates (see the sketch after this list).
Avoid unnecessary log duplication and delivery across different regions and accounts unless you have specific operational or compliance requirements.
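To automate the second point above, a hedged boto3 sketch like the following applies a default retention period to any log group that still retains data indefinitely. The 30-day value is an assumption; choose periods per log group based on the factors above.

```python
import boto3

logs = boto3.client("logs")
DEFAULT_RETENTION_DAYS = 30  # assumed default; tune per log group

paginator = logs.get_paginator("describe_log_groups")
for page in paginator.paginate():
    for group in page["logGroups"]:
        # Log groups without a retentionInDays setting keep data forever.
        if "retentionInDays" not in group:
            logs.put_retention_policy(
                logGroupName=group["logGroupName"],
                retentionInDays=DEFAULT_RETENTION_DAYS,
            )
            print(f"Set {DEFAULT_RETENTION_DAYS}-day retention on {group['logGroupName']}")
```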
CloudWatch metrics optimization
Metrics ingestion
Here, your first step is to carefully evaluate custom metrics sent to CloudWatch. Some of the metrics may no longer serve any business or operational purpose and can be safely removed. You might also discover that some metrics provide duplicate information and/or can be aggregated at the application level.
Another promising cloud cost-saving strategy is creating metrics from log events using filtering. Specifically, you can use filters to search for certain terms and patterns in log data. Many logs already record useful information that can be extracted instead of creating a new custom metric.
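As a sketch of what that looks like in practice (the log group name, filter pattern, namespace, and metric name below are hypothetical), a metric filter can turn existing error log lines into a metric without publishing a separate custom metric:

```python
import boto3

logs = boto3.client("logs")

# Hypothetical example: count application error lines in an existing log group
# instead of emitting a separate custom "error count" metric via PutMetricData.
logs.put_metric_filter(
    logGroupName="/app/orders-service",        # assumed log group name
    filterName="orders-service-error-count",
    filterPattern='"ERROR"',                   # match log lines containing ERROR
    metricTransformations=[
        {
            "metricName": "OrderServiceErrors",
            "metricNamespace": "MyApp/Orders",  # assumed namespace
            "metricValue": "1",                 # each matching line counts as 1
            "defaultValue": 0,
        }
    ],
)
```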
Metrics resolution
Amazon CloudWatch offers two main schedules for metrics collection: standard and high resolution. Standard resolution collects data at one-minute intervals, while high resolution gathers data as frequently as once per second. Since high-resolution metrics can significantly increase costs, it's important to use them only when necessary. For less critical monitoring tasks, consider using standard resolution or event-driven metrics to reduce your metrics ingestion volume and lower alarm evaluation costs.
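The resolution is chosen when each data point is published. In the hedged boto3 sketch below (namespace and metric name are placeholders), the difference comes down to a single StorageResolution field, which is why it's worth confirming that a metric really needs high resolution before setting it to 1:

```python
import boto3

cloudwatch = boto3.client("cloudwatch")

# Publish a data point at standard (60-second) resolution; setting
# StorageResolution to 1 instead would opt this metric into high resolution
# and into the higher pricing tier for alarms evaluated against it.
cloudwatch.put_metric_data(
    Namespace="MyApp/Checkout",  # assumed namespace
    MetricData=[
        {
            "MetricName": "CheckoutLatencyMs",  # assumed metric name
            "Value": 182.0,
            "Unit": "Milliseconds",
            "StorageResolution": 60,  # 60 = standard, 1 = high resolution
        }
    ],
)
```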
Alarm optimization
Alarm costs are driven by two factors: the alarm’s resolution setting (standard or high) and the number of metrics the alarm references.
You can lower alarm spend by…
Using standard-resolution alarms on high-resolution metrics: To control costs, you can set a standard-resolution alarm (evaluated every minute) on a high-resolution metric. This reduces the evaluation frequency and can be a great fit for less critical metrics (see the sketch after this list).
Removing unused alarms: Regularly audit alarms and remove alarms tied to unused resources (like those in an insufficient data state) or outdated thresholds.
Using metric math alarms carefully: Alarms on metric math expressions evaluate multiple metrics and are billed based on the number of metrics analyzed. You can reduce this cost by aggregating metrics on the source side before they are sent to CloudWatch.
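For the first point above, a minimal boto3 sketch might look like the following (alarm name, namespace, metric name, and threshold are assumptions); the 60-second Period is what keeps the alarm at standard resolution even if the underlying metric is published every second:

```python
import boto3

cloudwatch = boto3.client("cloudwatch")

# Evaluate a high-resolution metric with a standard-resolution (60-second) alarm.
cloudwatch.put_metric_alarm(
    AlarmName="checkout-latency-standard-res",  # assumed alarm name
    Namespace="MyApp/Checkout",                 # assumed namespace
    MetricName="CheckoutLatencyMs",             # assumed metric name
    Statistic="Average",
    Period=60,                # 60s evaluation keeps this a standard-resolution alarm
    EvaluationPeriods=3,
    Threshold=500.0,          # assumed threshold in milliseconds
    ComparisonOperator="GreaterThanThreshold",
    TreatMissingData="notBreaching",
)
```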
Dashboard management
As we’ve seen, CloudWatch dashboards incur charges when their usage exceeds the AWS free-tier’s limit, which includes up to three dashboards. Additional costs can also originate from frequent API calls to query dashboards, including listing, modifying, and getting metric information.
To reduce costs, regularly audit and remove unused dashboards and limit programmatic API calls where possible. Viewing dashboards in the AWS Management Console can trigger requests for the underlying metric data, but dashboard views themselves don’t add API request charges beyond that initial data load.
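A small, read-only boto3 sketch like the one below can list your dashboards with their last-modified timestamps, which is often enough to spot candidates for removal beyond the three included in the free tier:

```python
import boto3

cloudwatch = boto3.client("cloudwatch")

# List every dashboard with its last-modified timestamp so stale ones stand out.
dashboards = []
token = None
while True:
    kwargs = {"NextToken": token} if token else {}
    resp = cloudwatch.list_dashboards(**kwargs)
    dashboards.extend(resp.get("DashboardEntries", []))
    token = resp.get("NextToken")
    if not token:
        break

for entry in sorted(dashboards, key=lambda d: d["LastModified"]):
    print(f"{entry['LastModified']}  {entry['DashboardName']}")

print(f"Total dashboards: {len(dashboards)} (the free tier covers 3)")
```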
Tools to support CloudWatch cost optimization
AWS native tools
AWS provides several built-in tools to help you monitor and optimize CloudWatch spending. These services offer granular visibility into usage patterns and highlight waste.
AWS Cost Explorer helps analyze and visualize CloudWatch costs over time. Costs can be filtered by query requests, metric streams, custom metrics, logs, region, and more.
AWS Cost and Usage Reports (CUR) with Amazon Athena allows you to create detailed cost and usage reports, delivering them to an S3 bucket where Athena can query them for analysis.
CloudWatch Logs Insights lets you run queries to identify high-volume log groups, analyze log information, and explore log access patterns to guide your decisions about retention and filtering (an example query follows this list).
AWS Trusted Advisor runs continuous checks against your AWS environment to identify underutilized resources, suggest cleanup actions, and provide recommendations based on AWS best practices.
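As an example of the Logs Insights approach, here is a hedged boto3 sketch that ranks the log streams in one (assumed) log group by approximate bytes written over the last 24 hours:

```python
import time
import boto3

logs = boto3.client("logs")

# Rank log streams by approximate bytes written over the last 24 hours.
query = """
stats sum(strlen(@message)) as approxBytes by @logStream
| sort approxBytes desc
| limit 20
"""

end = int(time.time())
start = end - 24 * 3600

query_id = logs.start_query(
    logGroupName="/app/orders-service",  # assumed log group name
    startTime=start,
    endTime=end,
    queryString=query,
)["queryId"]

# Poll until the query finishes, then print the results.
while True:
    result = logs.get_query_results(queryId=query_id)
    if result["status"] in ("Complete", "Failed", "Cancelled"):
        break
    time.sleep(2)

for row in result.get("results", []):
    print({field["field"]: field["value"] for field in row})
```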
Observability-focused platforms
Beyond AWS-native options, third-party observability tools can provide a more contextual view of CloudWatch usage and offer additional features such as fine-grained visualization and cost analysis.
Usage visualization and insights: Platforms like Datadog and New Relic offer log and metric heatmaps, making it easier to see which parts of your system are generating the most log and metrics data. For a more comprehensive and proactive approach, solutions that offer agentless scanning can provide immediate insights into CloudWatch configurations and associated resources across your entire cloud estate, identifying misconfigurations or excessive logging without requiring intrusive agents.
Spend attribution and ownership mapping: Datadog and Dynatrace integrate with AWS Cost Explorer to map telemetry usage and costs by team or environment using tags.
Cloud cost management and optimization solutions
The most efficient way to optimize CloudWatch costs? Use a cloud cost management and optimization solution that monitors CloudWatch spend and automatically surfaces cost optimization opportunities. An ideal solution provides granular visibility into CloudWatch spend broken down by storage, metrics, I/O, and requests.
These tools go beyond CloudWatch and help you monitor and optimize costs across all clouds and cloud services. They are designed to provide comprehensive visibility and reporting for FinOps and finance teams, along with granular visibility and cloud resource context for the development teams implementing cost optimizations.
Enforcing ownership and telemetry tagging
As your CloudWatch environment scales, knowing which applications, services, and/or teams produce specific logs and metrics becomes a prerequisite for effective cost optimization.
Tagging best practices
Apply meaningful tags to log groups and custom metrics at the time of creation; a minimal tagging sketch follows the list below. Always tag log groups and metrics by team, environment, and application or service:
Team tags help identify which engineering or DevOps team owns the resource.
Environment tags allow you to identify production, staging, development, or test workloads. (This is useful for prioritizing different types of logs and metrics.)
Application or service tags link telemetry to the specific service or microservice generating it.
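A minimal boto3 sketch under those conventions might look like the following (all names and tag values are placeholders). Note that custom metrics don’t accept resource tags the way log groups do, so the usual workaround is to encode the same ownership context in the metric namespace or dimensions:

```python
import boto3

logs = boto3.client("logs")
cloudwatch = boto3.client("cloudwatch")

# Create a log group with ownership tags applied at creation time.
logs.create_log_group(
    logGroupName="/app/orders-service",  # assumed log group name
    tags={
        "team": "payments",              # placeholder values
        "environment": "production",
        "application": "orders-service",
    },
)

# Custom metrics don't take resource tags, so carry the same ownership
# context in the namespace and dimensions instead.
cloudwatch.put_metric_data(
    Namespace="Payments/OrdersService",  # team/application encoded in the namespace
    MetricData=[
        {
            "MetricName": "OrdersProcessed",
            "Value": 1.0,
            "Unit": "Count",
            "Dimensions": [{"Name": "Environment", "Value": "production"}],
        }
    ],
)
```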
How to enforce tagging
To make tagging efficient, it should be enforced when log group or metric resources are created. Tagging standards can be enforced through tools like AWS Organizations service control policies (SCPs), AWS Config rules, or IaC frameworks. These tools only allow a new log group or custom metric to be created if tagging is applied.
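As an illustration of the SCP route (a sketch only; confirm that the aws:RequestTag condition keys behave as expected for logs:CreateLogGroup in your organization before relying on it), the policy below denies log group creation whenever no team tag is supplied:

```python
import json

# Sketch of an SCP that denies creating log groups without a "team" tag.
# Attach via AWS Organizations after verifying condition-key support.
require_team_tag_scp = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "DenyUntaggedLogGroups",
            "Effect": "Deny",
            "Action": "logs:CreateLogGroup",
            "Resource": "*",
            # "Null": true means the request carries no "team" tag at all.
            "Condition": {"Null": {"aws:RequestTag/team": "true"}},
        }
    ],
}

print(json.dumps(require_team_tag_scp, indent=2))
```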
Automating CloudWatch cost controls
Manual CloudWatch cost optimization can be cumbersome and inefficient. The main reasons?
Logs and metrics are spread across multiple services, teams, and environments.
New logs and metrics are created continuously, which means that optimization should also be a continuous process.
This is where automation shines: Automated processes ensure that cost controls are applied consistently, reducing the risk of unbounded telemetry growth.
Scheduled cleanups
Over time, CloudWatch artifacts can accumulate and significantly affect your costs, so it’s important to perform a regular cleanup of stale logs, dashboards, and metrics. This is especially true for the artifacts from services you no longer use and temporary environments. To automate the cleanup process, you can use cron jobs and AWS Lambda functions.
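A hedged sketch of the Lambda side might look like the handler below, which only reports log groups whose most recent event is older than a chosen cutoff (the 90-day value is an assumption) rather than deleting anything automatically:

```python
import time
import boto3

logs = boto3.client("logs")
STALE_AFTER_DAYS = 90  # assumed cutoff; adjust to your environment


def handler(event, context):
    """Report log groups with no events newer than the cutoff (cleanup candidates)."""
    cutoff_ms = (time.time() - STALE_AFTER_DAYS * 86400) * 1000
    stale = []

    paginator = logs.get_paginator("describe_log_groups")
    for page in paginator.paginate():
        for group in page["logGroups"]:
            # The newest stream's last event tells us when the group was last written to.
            streams = logs.describe_log_streams(
                logGroupName=group["logGroupName"],
                orderBy="LastEventTime",
                descending=True,
                limit=1,
            )["logStreams"]
            last_event = streams[0].get("lastEventTimestamp", 0) if streams else 0
            if last_event < cutoff_ms:
                stale.append(group["logGroupName"])

    # Deletion (logs.delete_log_group) is left as a deliberate manual step.
    print(f"Stale log group candidates: {stale}")
    return {"staleLogGroups": stale}
```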
Retention automation settings
Retention policies can be adjusted per log group using the AWS CLI (with the PutRetentionPolicy API call) or through the CloudWatch interface. A better way? Use infrastructure-as-code (IaC) tools like Terraform or CloudFormation to enforce standard retention periods across environments. IaC abstracts the process of calling PutRetentionPolicy for you when you create a log group.
Alert teams
Automation enables fast response to unexpected events that might affect the security, stability, and performance of your services and applications. It’s a good idea to set up automated alerts for…
High-volume log write spikes: Always notify your team about a sudden spike in log output, which may indicate performance and/or security issues (see the sketch after this list).
Custom metric thresholds: Make sure alerts trigger when the number of custom metrics for a team, service, or account exceeds a defined threshold. This can signal runaway metric creation due to misconfigured applications.
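For the log-spike case, one option is an alarm on the AWS/Logs IncomingBytes metric for a given log group. The sketch below uses assumed names and a placeholder threshold:

```python
import boto3

cloudwatch = boto3.client("cloudwatch")

# Alarm when a single log group ingests more than ~5 GB in an hour.
cloudwatch.put_metric_alarm(
    AlarmName="orders-service-log-ingestion-spike",  # assumed alarm name
    Namespace="AWS/Logs",
    MetricName="IncomingBytes",
    Dimensions=[{"Name": "LogGroupName", "Value": "/app/orders-service"}],  # assumed
    Statistic="Sum",
    Period=3600,
    EvaluationPeriods=1,
    Threshold=5 * 1024**3,  # ~5 GB/hour; placeholder threshold
    ComparisonOperator="GreaterThanThreshold",
    TreatMissingData="notBreaching",
    AlarmDescription="Sudden spike in log ingestion volume for orders-service",
)
```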
Why context is key for CloudWatch optimization
CloudWatch cost optimization starts with cutting log volume, removing unnecessary logs and metrics, and configuring a sound retention policy. But these steps alone can’t solve the deeper issues that led to the inefficiencies in the first place, which often include alert fatigue from irrelevant data, difficulty pinpointing root causes due to fragmented visibility, and challenges in assigning accountability for resource overspending.
To truly optimize your CloudWatch spend, context is critical. Keep tabs on who is generating telemetry; whether the data is actionable; and how it ties into service performance, architectural dependencies, operational risk, and cost. With the right tagging, configuration, automation, and contextual analytics in place, you can transform CloudWatch into a cost-effective and strategically valuable monitoring platform.
Ready to take things to the next level? Wiz is an all-in-one platform that surfaces AWS costs alongside cloud architecture, usage, and security insights. Schedule a demo today, and see firsthand how Wiz powers cloud cost management.