Jeff’s Note #
Unlike generic exam dumps, ADH analyzes this scenario through the lens of a Real-World Site Reliability Engineer.
For SOA-C02 candidates, the confusion usually lies in choosing between subscription-based processing and metric filtering. In production, it comes down to knowing how to build reliable aggregate metrics (like p90 latency) from raw log data without unnecessary overhead or complexity. Let’s drill down.
The Certification Drill (Simulated Question) #
Scenario #
AirStream Tech operates a critical user-facing service that continuously sends logs to Amazon CloudWatch Logs. Each log entry records several fields, including an application response latency measured in milliseconds. The SRE team needs to observe the 90th percentile (p90) latency over time to proactively catch performance issues and trigger alerts before users notice degradation.
The Requirement: #
As a Site Reliability Engineer, you want to monitor the p90 statistic for the latency field extracted from this log data. How should you implement this in AWS to meet the monitoring goals?
The Options #
- A) Create an Amazon CloudWatch Contributor Insights rule on the log data.
- B) Create a metric filter on the log data.
- C) Create a subscription filter on the log data.
- D) Create an Amazon CloudWatch Application Insights rule for the workload.
Correct Answer #
B) Create a metric filter on the log data.
Quick Insight: The SOA-C02 Imperative #
- The core AWS exam focus here is on knowing how to translate logs into meaningful metrics efficiently.
- Metric filters extract numerical data points (like latency) from logs and emit CloudWatch metrics that support percentile statistics.
- Subscription filters forward raw logs to other services, but don’t generate metrics themselves.
- Contributor Insights and Application Insights serve different purposes and do not provide direct p90 latency metrics from log fields.
You’ve identified the answer. But do you know the implementation details that separate a Junior from a Senior?
The Expert’s Analysis #
Correct Answer #
Option B - Create a metric filter on the log data.
The Winning Logic #
Metric filters parse CloudWatch Logs using pattern matching to extract scalar values (such as latency) and create custom metrics from those values. These metrics can then be used in CloudWatch dashboards and alarms that support percentile statistics like p90 over time. This aligns precisely with the need to monitor the 90th percentile of application latency. The approach leverages CloudWatch’s native ability to compute percentiles on emitted custom metrics.
- The metric filter pattern targets the specific log field containing the latency value.
- Once the metric filter publishes the latency metric, CloudWatch’s Statistics tab allows viewing the p90 aggregation.
- This keeps the monitoring pipeline simple, efficient, and fully managed without additional infrastructure.
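To make the pattern matching concrete, here is a hypothetical log event (all fields other than `latencyMS` are illustrative) that a JSON metric filter pattern like `{ $.latencyMS = * }` would match, publishing the numeric value as a metric sample:

```json
{
  "timestamp": "2024-05-01T12:00:00Z",
  "requestId": "abc-123",
  "path": "/api/stream",
  "latencyMS": 142
}
```

Non-JSON log lines, or JSON events without the `latencyMS` field, simply produce no data point.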
The Trap (Distractor Analysis) #
- Why not A (Contributor Insights)? Contributor Insights analyzes log data to detect contributors to patterns or troubleshoot spike causes, but it doesn’t create metrics for percentile calculations directly.
- Why not C (Subscription Filter)? Subscription filters send a copy of the logs to destinations like Lambda or Kinesis but do not themselves create CloudWatch metrics. You would need additional processing to extract percentile metrics.
- Why not D (Application Insights)? Application Insights is for monitoring application health and automatically detects anomalies but does not parse and create custom percentile metrics from log fields.
The Technical Blueprint #
SysOps CLI Command to Create Metric Filter (Example) #
```shell
aws logs put-metric-filter \
  --log-group-name "/airstream-tech/app-logs" \
  --filter-name "LatencyMetricFilter" \
  --filter-pattern '{ $.latencyMS = * }' \
  --metric-transformations 'metricName=AppLatency,metricNamespace=AirStream/Latency,metricValue=$.latencyMS'
```
- This extracts the `latencyMS` field from JSON logs and emits it as a CloudWatch metric named `AppLatency`.
- The metric namespace groups related metrics.
- You can then configure alarms or dashboards using the `p90` statistic on `AppLatency`.
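Once the `AppLatency` metric is flowing, the p90 alarm can be created from the CLI as well. Note that percentile statistics go through `--extended-statistic`, not `--statistic`; the threshold, period, and SNS topic ARN below are illustrative assumptions, not values from the scenario:

```shell
aws cloudwatch put-metric-alarm \
  --alarm-name "AppLatencyP90High" \
  --namespace "AirStream/Latency" \
  --metric-name "AppLatency" \
  --extended-statistic "p90" \
  --period 60 \
  --evaluation-periods 5 \
  --threshold 500 \
  --comparison-operator GreaterThanThreshold \
  --treat-missing-data notBreaching \
  --alarm-actions "arn:aws:sns:us-east-1:123456789012:sre-alerts"
```

This alarms when p90 latency exceeds 500 ms for five consecutive one-minute periods; `notBreaching` avoids false alarms during quiet periods when the metric filter emits no data points.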
The Comparative Analysis #
| Option | Operational Overhead | Automation Level | Impact on Monitoring |
|---|---|---|---|
| A | Low | Fully Managed | Good for contributor insights, no direct percentile metrics |
| B | Low | Fully Managed | Direct extraction of latency metric, supports percentile stats |
| C | Medium | Requires downstream | Forwards raw logs, no built-in p90 metric without extra tools |
| D | Low | Managed | Monitors health, no custom percentile latency metric |
Real-World Application (Practitioner Insight) #
Exam Rule #
For the exam, always pick Metric Filters when you see the keyword “extract numeric log field to monitor statistics.”
Real World #
In production, you may use subscription filters paired with Lambda to enrich or transform logs before metric creation if metric filters cannot capture complex patterns. But this adds complexity and latency. Metric filters are the go-to for simple numeric field extraction.
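As a rough sketch of that subscription-filter path: CloudWatch Logs delivers events to Lambda as a gzipped, base64-encoded payload under `awslogs.data`. The handler below is a minimal illustration, assuming the same `latencyMS` JSON field as the metric filter example; the function name, namespace, and metric name are carried over from this scenario, not a standard API.

```python
import base64
import gzip
import json


def extract_latencies(event):
    """Decode a CloudWatch Logs subscription event and pull latencyMS values."""
    payload = json.loads(gzip.decompress(base64.b64decode(event["awslogs"]["data"])))
    values = []
    for log_event in payload.get("logEvents", []):
        try:
            record = json.loads(log_event["message"])
        except json.JSONDecodeError:
            continue  # skip non-JSON log lines
        if "latencyMS" in record:
            values.append(float(record["latencyMS"]))
    return values


def handler(event, context):
    # boto3 is imported lazily so the extraction logic above stays testable offline
    import boto3

    values = extract_latencies(event)
    if values:
        boto3.client("cloudwatch").put_metric_data(
            Namespace="AirStream/Latency",
            MetricData=[
                {"MetricName": "AppLatency", "Value": v, "Unit": "Milliseconds"}
                for v in values
            ],
        )
```

Note the trade-off the text describes: this pipeline adds a Lambda invocation (cost, latency, failure modes) for every log batch, which is only worth it when the extraction logic exceeds what filter patterns can express.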
Disclaimer
This is a study note based on simulated scenarios for the SOA-C02 exam.