
The Linux Foundation PCA Exam

Prometheus Certified Associate Exam Online Practice

Last updated: November 17, 2025

You can use the online practice questions to gauge how well you know the material covered by The Linux Foundation PCA exam before deciding whether to register for it.

To pass the exam on your first attempt and save roughly 35% of your preparation time, use the PCA dumps (latest real exam questions), which currently include 60 up-to-date exam questions and answers.


Question No : 1


Which of the following PromQL queries is invalid?

Answer: max on (instance) (up)
Explanation:
The max operator in PromQL is an aggregation operator, not a binary vector matching operator.
Therefore, the valid syntax for aggregation uses by() or without(), not on().
✅ max by (instance) up → Valid; aggregates maximum values per instance.
✅ max without (instance) up and max without (instance, job) up → Valid; aggregates over all labels except those listed.
❌ max on (instance) (up) → Invalid; the keyword on() is only valid in binary operations (e.g., +, -, and, or, unless), where two vectors are being matched on specific labels.
Hence, max on (instance) (up) is a syntax error in PromQL because on() cannot be used directly with aggregation operators.
Reference: Verified from Prometheus documentation: Aggregation Operators, Vector Matching (on()/ignoring()), and PromQL Language Syntax Reference sections.
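For a quick contrast, a minimal sketch using only the standard up metric (illustrative expressions, not the exam's answer options):
max by (instance) (up)        # valid: by() scopes the aggregation to the instance label
max without (instance) (up)   # valid: without() aggregates over all labels except instance
up and on (instance) up       # valid: on() restricts matching labels in a binary set operation
max on (instance) (up)        # invalid: on() is not an aggregation modifier, so this fails to parse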

Question No : 2


With the following metrics over the last 5 minutes:
up{instance="localhost"} 1 1 1 1 1
up{instance="server1"} 1 0 0 0 0
What does the following query return:
min_over_time(up[5m])

Answer: {instance="localhost"} 1 and {instance="server1"} 0
Explanation:
The min_over_time() function in PromQL returns the minimum sample value observed within the specified time range for each time series.
In the given data:
For up{instance="localhost"}, all samples are 1. The minimum value over 5 minutes is therefore 1.
For up{instance="server1"}, the sequence is 1 0 0 0 0. The minimum observed value is 0.
Thus, the query min_over_time(up[5m]) returns two series ― one per instance:
{instance="localhost"} 1
{instance="server1"} 0
This query is commonly used to check uptime consistency. If the minimum value over the time window is 0, it indicates at least one scrape failure (target down).
Reference: Verified from Prometheus documentation: PromQL Range Vector Functions, min_over_time() definition, and up Metric Semantics sections.
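As a usage note, a minimal alerting-style sketch built on this behavior (not part of the exam question):
# returns every instance that failed at least one scrape during the last 5 minutes
min_over_time(up[5m]) == 0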

Question No : 3


Which metric type uses the delta() function?

Answer: Gauge
Explanation:
The delta() function in PromQL calculates the difference between the first and last samples in a range vector over a specified time window. This function is primarily used with gauge metrics, as they can move both up and down, and delta() captures that net change directly.
For example, if a gauge metric like node_memory_Active_bytes changes from 1000 to 1200 within a 5-minute window, delta(node_memory_Active_bytes[5m]) returns 200.
Unlike rate() or increase(), which are designed for monotonically increasing counters, delta() is ideal for metrics representing resource levels, capacities, or instantaneous measurements that fluctuate over time.
Reference: Verified from Prometheus documentation: PromQL Range Functions (delta()), Gauge Semantics and Usage, and Comparing delta() and rate() sections.
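To make the contrast concrete, a minimal sketch using standard example metrics (node_memory_Active_bytes as the gauge, http_requests_total as the counter):
delta(node_memory_Active_bytes[5m])   # gauge: net change over the window, may be negative
rate(http_requests_total[5m])         # counter: per-second rate of increase, handles resets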

Question No : 4


Which kind of metrics are associated with the function deriv()?

Answer: Gauges
Explanation:
The deriv() function in PromQL calculates the per-second derivative of a time series using linear regression over the provided time range. It estimates the instantaneous rate of change for metrics that can both increase and decrease ― which are typically gauges.
Because counters can only increase (except when reset), rate() or increase() functions are more appropriate for them. deriv() is used to identify trends in fluctuating metrics like CPU temperature, memory utilization, or queue depth, where values rise and fall continuously.
In contrast, summaries and histograms consist of multiple sub-metrics (e.g., _count, _sum, _bucket) and are not directly suited for derivative calculation without decomposition.
Reference: Extracted and verified from Prometheus documentation: PromQL Functions (deriv()), Understanding Rates and Derivatives, and Gauge Metric Examples.
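For illustration, a minimal sketch assuming the standard node_exporter gauge node_memory_Active_bytes:
# per-second slope of active memory usage over 10 minutes, estimated by linear regression
deriv(node_memory_Active_bytes[10m])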

Question No : 5


Given the following Histogram metric data, how many requests took less than or equal to 0.1 seconds?
apiserver_request_duration_seconds_bucket{job="kube-apiserver", le="+Inf"} 3
apiserver_request_duration_seconds_bucket{job="kube-apiserver", le="0.05"} 0
apiserver_request_duration_seconds_bucket{job="kube-apiserver", le="0.1"} 1
apiserver_request_duration_seconds_bucket{job="kube-apiserver", le="1"} 3
apiserver_request_duration_seconds_count{job="kube-apiserver"} 3
apiserver_request_duration_seconds_sum{job="kube-apiserver"} 0.554003785

Answer: 1
Explanation:
In Prometheus, histogram metrics use cumulative buckets to record the count of observations that fall within specific duration thresholds. Each bucket has a label le (“less than or equal to”), representing the upper bound of that bucket.
In the given metric, the bucket labeled le="0.1" has a value of 1, meaning exactly one request took less than or equal to 0.1 seconds. Buckets are cumulative, so:
le="0.05" → 0 requests ≤ 0.05 seconds
le="0.1" → 1 request ≤ 0.1 seconds
le="1" → 3 requests ≤ 1 second
le="+Inf" → all 3 requests total
The _sum and _count values represent total duration and request count respectively, but the number of requests below a given threshold is read directly from the bucket’s le value.
Reference: Verified from Prometheus documentation: Understanding Histograms and Summaries, Bucket Semantics, and Histogram Query Examples sections.
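Building on the same bucket data, a common SLO-style sketch of how cumulative buckets are queried over time (not part of the exam question):
# fraction of requests completing within 0.1 seconds over the last 5 minutes
sum(rate(apiserver_request_duration_seconds_bucket{le="0.1"}[5m]))
/
sum(rate(apiserver_request_duration_seconds_count[5m]))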

Question No : 6


If the vector selector foo[5m] contains 1 1 NaN, what would max_over_time(foo[5m]) return?

Answer: 1
Explanation:
In PromQL, range vector functions like max_over_time() compute an aggregate value (in this case, the maximum) over all samples within a specified time range. The function ignores NaN (Not-a-Number) values when computing the result.
Given the range vector foo[5m] containing samples [1, 1, NaN], the maximum value among the valid numeric samples is 1. Therefore, max_over_time(foo[5m]) returns 1.
Prometheus functions handle missing or invalid data points gracefully; ignoring NaN keeps the calculation stable even when intermittent collection issues or resets occur. The query only fails if the selector is syntactically invalid; if no samples exist in the range at all, the series is simply absent from the result rather than causing an error.
Reference: Verified from Prometheus documentation: PromQL Range Vector Functions, Aggregation Over Time Functions, and Handling NaN Values in PromQL sections.

Question No : 7


How do you calculate the average request duration during the last 5 minutes from a histogram or summary called http_request_duration_seconds?

Answer: rate(http_request_duration_seconds_sum[5m]) / rate(http_request_duration_seconds_count[5m])
Explanation:
In Prometheus, histograms and summaries expose metrics with _sum and _count suffixes to represent total accumulated values and sample counts, respectively. To compute the average request duration over a given time window (for example, 5 minutes), you divide the rate of increase of _sum by the rate of increase of _count:
rate(http_request_duration_seconds_sum[5m]) / rate(http_request_duration_seconds_count[5m])
Here,
http_request_duration_seconds_sum represents the total accumulated request time, and
http_request_duration_seconds_count represents the number of requests observed.
By dividing these rates, you obtain the average request duration per request over the specified time range.
Reference: Extracted and verified from Prometheus documentation: Querying Histograms and Summaries, PromQL Rate Function, and Metric Naming Conventions sections.
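In practice the same ratio is usually aggregated across all series; a minimal sketch:
# average request duration across all instances and handlers over the last 5 minutes
sum(rate(http_request_duration_seconds_sum[5m]))
/
sum(rate(http_request_duration_seconds_count[5m]))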

Question No : 8


You’d like to monitor a short-lived batch job.
What Prometheus component would you use?

Answer: The Pushgateway
Explanation:
Prometheus normally operates on a pull-based model, where it scrapes metrics from long-running targets. However, short-lived batch jobs (such as cron jobs or data processing tasks) often finish before Prometheus can scrape them. To handle this scenario, Prometheus provides the Pushgateway component.
The Pushgateway allows ephemeral jobs to push their metrics to an intermediary gateway. Prometheus then scrapes these metrics from the Pushgateway like any other target. This ensures short-lived jobs have their metrics preserved even after completion.
The Pushgateway should not be used for continuously running applications because it breaks Prometheus’s usual target lifecycle semantics. Instead, it is intended solely for transient job metrics, like backups or CI/CD tasks.
Reference: Verified from Prometheus documentation: Pushing Metrics (The Pushgateway) and Use Cases for Short-Lived Jobs sections.
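A minimal sketch of the Prometheus side of this setup, assuming a Pushgateway reachable at the hypothetical host pushgateway.example.com (9091 is the Pushgateway's default port):
scrape_configs:
  - job_name: 'pushgateway'
    honor_labels: true   # keep the job/instance labels that the batch jobs pushed
    static_configs:
      - targets: ['pushgateway.example.com:9091']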

Question No : 9


What Prometheus component would you use if targets are running behind a Firewall/NAT?

Answer: PushProx
Explanation:
When Prometheus targets are behind firewalls or NAT and cannot be reached directly by the Prometheus server’s pull mechanism, the recommended component to use is PushProx.
PushProx works by reversing the usual pull model. It consists of a PushProx Proxy (accessible by Prometheus) and PushProx Clients (running alongside the targets). The clients establish outbound connections to the proxy, which allows Prometheus to “pull” metrics indirectly. This approach bypasses network restrictions without compromising the Prometheus data model.
Unlike the Pushgateway (which is used for short-lived batch jobs, not network-isolated targets), PushProx maintains the Prometheus “pull” semantics while accommodating environments where direct scraping is impossible.
Reference: Verified from Prometheus documentation and official PushProx design notes: Monitoring Behind NAT/Firewall, PushProx Overview, and Architecture and Usage Scenarios sections.
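A minimal scrape-configuration sketch of this pattern, with hypothetical host names for the proxy and the client (9100 is the node_exporter default port):
scrape_configs:
  - job_name: 'node'
    proxy_url: http://pushprox-proxy.example.com:8080/   # Prometheus scrapes via the PushProx proxy
    static_configs:
      - targets: ['client.internal.example.com:9100']    # the client registers with the proxy using this name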

Question No : 10


Which of the following metrics is unsuitable for a Prometheus setup?

Answer: user_last_login_timestamp_seconds{email="[email protected]"}
Explanation:
The metric user_last_login_timestamp_seconds{email="[email protected]"} is unsuitable for Prometheus because it includes a high-cardinality label (email). Each unique email address would generate a separate time series, potentially numbering in the millions, which severely impacts Prometheus performance and memory usage.
Prometheus is optimized for low- to medium-cardinality metrics that represent system-wide behavior rather than per-user data. High-cardinality metrics cause data explosion, complicating queries and overwhelming the storage engine.
By contrast, the other metrics (prometheus_engine_query_log_enabled, promhttp_metric_handler_requests_total{code="500"}, and http_response_total{handler="static/*filepath"}) adhere to Prometheus best practices. They represent operational or service-level metrics with limited, manageable label value sets.
Reference: Extracted and verified from Prometheus documentation: Metric and Label Naming Best Practices, Cardinality Management, and Anti-Patterns for Metric Design sections.
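A short contrast sketch; the redesigned metric name and label below are hypothetical:
# avoid: one time series per user (unbounded label values)
user_last_login_timestamp_seconds{email="[email protected]"}
# prefer: aggregate activity with a small, bounded label set
user_logins_total{auth_method="oauth"}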

Question No : 11


How do you configure the rule evaluation interval in Prometheus?

Answer: Set evaluation_interval in prometheus.yml (globally or per rule group)
Explanation:
Prometheus evaluates alerting and recording rules at a regular cadence determined by the evaluation_interval setting. This can be defined globally in the main Prometheus configuration file (prometheus.yml) under the global: section or overridden for specific rule groups in the rule configuration files.
The global evaluation_interval specifies how frequently Prometheus should execute all configured rules, while rule-specific intervals can fine-tune evaluation frequency for individual groups.
For instance:
global:
evaluation_interval: 30s
This means Prometheus evaluates rules every 30 seconds unless a rule file specifies otherwise.
This parameter is distinct from scrape_interval, which governs metric collection frequency from targets. It has no relation to TSDB, service discovery, or command-line flags.
Reference: Verified from Prometheus documentation: Configuration File Reference, Rule Evaluation and Recording Rules sections.
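A slightly fuller sketch showing both levels; the rule file and group names are hypothetical:
# prometheus.yml
global:
  evaluation_interval: 30s
rule_files:
  - rules.yml
# rules.yml
groups:
  - name: example-recording-rules
    interval: 15s            # overrides the global 30s for this group only
    rules:
      - record: job:up:count
        expr: count by (job) (up)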

Question No : 12


What is an example of a single-target exporter?

Answer: The Redis Exporter
Explanation:
A single-target exporter in Prometheus is designed to expose metrics for a specific service instance rather than multiple dynamic endpoints. The Redis Exporter is a prime example ― it connects to one Redis server instance and exports its metrics (like memory usage, keyspace hits, or command statistics) to Prometheus.
By contrast, exporters like the SNMP Exporter and Blackbox Exporter can probe multiple targets dynamically, making them multi-target exporters. The Node Exporter, while often deployed per host, is considered a host-level exporter, not a true single-target one in configuration behavior.
The Redis Exporter is instrumented specifically for a single Redis endpoint per configuration, aligning it with Prometheus’s single-target exporter definition. This design simplifies monitoring and avoids dynamic reconfiguration.
Reference: Verified from Prometheus documentation and official exporter guidelines: Writing Exporters, Exporter Types, and Redis Exporter Overview sections.

Question No : 13


Which of the following is a valid metric name?

Answer: go_goroutines
Explanation:
According to Prometheus naming rules, metric names must match the regex [a-zA-Z_:][a-zA-Z0-9_:]*. This means metric names must begin with a letter, underscore, or colon, and may contain only letters, digits, underscores, and colons thereafter.
The valid metric name among the options is go_goroutines, which follows all these rules. It starts with a letter (g), uses underscores to separate words, and contains only allowed characters.
By contrast:
go routines is invalid because it contains a space.
go.goroutines is invalid because it contains a dot (.), which is not a permitted character in metric names; colons, not dots, are the characters reserved for recording rule names.
99_goroutines is invalid because metric names cannot start with a number.
Following these conventions ensures compatibility with PromQL syntax and Prometheus’ internal data model.
Reference: Extracted from Prometheus documentation: Metric Naming Conventions and Data Model Rules sections.

Question No : 14


What is api_http_requests_total in the following metric?
api_http_requests_total{method="POST", handler="/messages"}

Answer: The metric name
Explanation:
In Prometheus, the part before the curly braces {} represents the metric name. Therefore, in the metric api_http_requests_total{method="POST", handler="/messages"}, the term api_http_requests_total is the metric name. Metric names describe the specific quantity being measured ― in this example, the total number of HTTP requests received by an API.
The portion within the braces defines labels, which provide additional dimensions to the metric. Here, method="POST" and handler="/messages" are labels describing request attributes. The metric name should follow Prometheus conventions: lowercase letters, numbers, and underscores only, and it should end in _total for counters.
This naming scheme ensures clarity and standardization across instrumented applications. The metric type (e.g., counter, gauge) is declared separately in the exposition format, not within the metric name itself.
Reference: Verified from Prometheus documentation: Metric and Label Naming, Data Model, and Instrumentation Best Practices sections.

Question No : 15


How can you send metrics from your Prometheus setup to a remote system, e.g., for long-term storage?

Answer: Remote Write
Explanation:
Prometheus provides a feature called Remote Write to transmit scraped and processed metrics to an external system for long-term storage, aggregation, or advanced analytics. When configured, Prometheus continuously pushes time series data to the remote endpoint defined in the remote_write section of the configuration file.
This mechanism is often used to integrate with long-term data storage backends such as Cortex, Thanos, Mimir, or InfluxDB, enabling durable retention and global query capabilities beyond Prometheus’s local time series database limits.
In contrast, “scraping” refers to data collection from targets, while “federation” allows hierarchical Prometheus setups (pulling metrics from other Prometheus instances) but does not serve as long-term storage. Using “S3 Buckets” directly is also unsupported in native Prometheus configurations.
Reference: Extracted and verified from Prometheus documentation: Remote Write/Read APIs and Long-Term Storage Integrations sections.
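A minimal remote_write sketch with a hypothetical endpoint URL:
remote_write:
  - url: "https://metrics-store.example.com/api/v1/write"   # long-term storage backend's write endpoint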
