Confluent Certified Administrator for Apache Kafka Online Practice
Last updated: June 18, 2025
You can use these online practice questions to gauge your knowledge of the Confluent CCAAK exam before deciding whether to register for it.
To pass the exam and cut your preparation time by about 35%, consider the CCAAK question set (based on the latest real exam questions), which currently contains 54 questions and answers.
Answer:
Explanation:
Kafka ensures message immutability for data integrity. Once a message is written to a Kafka topic and persisted to disk, it cannot be modified. This immutability guarantees that consumers always receive the original message content, which is critical for auditability, fault tolerance, and data reliability.
Answer:
Explanation:
A graceful shutdown ensures that logs are flushed to disk, minimizing recovery time during restart. Kafka performs controlled leader migration during a graceful shutdown to avoid disruption and ensure availability.
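As a sketch (paths vary by installation), stopping a broker with the bundled script initiates a controlled shutdown, and in ZooKeeper-based clusters the broker setting controlled.shutdown.enable (true by default) governs this behavior:

  # Flushes logs and migrates partition leadership before the process exits
  bin/kafka-server-stop.sh

  # server.properties (ZooKeeper-based brokers; enabled by default)
  controlled.shutdown.enable=true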
Answer:
Explanation:
The kafka-consumer-groups.sh script is used to inspect consumer group details, including consumer lag, which indicates how far behind a consumer is from the latest data in the partition.
The typical usage is bin/kafka-consumer-groups.sh --bootstrap-server <broker> --describe --group <group_id>
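A concrete invocation (broker address and group name are illustrative):

  bin/kafka-consumer-groups.sh --bootstrap-server localhost:9092 \
    --describe --group my-consumer-group
  # The output includes CURRENT-OFFSET, LOG-END-OFFSET, and LAG columns;
  # per partition, LAG = LOG-END-OFFSET - CURRENT-OFFSET.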
Answer:
Explanation:
Kafka ACLs (Access Control Lists) perform authorization checks every time a client attempts to access a resource (e.g., topic, consumer group). This ensures continuous enforcement of permissions, not just at connection time or intervals. This approach provides fine-grained security, preventing unauthorized actions at any time during a session.
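For example, granting a principal read access to a topic and a consumer group (principal, topic, and group names are illustrative):

  bin/kafka-acls.sh --bootstrap-server localhost:9092 --add \
    --allow-principal User:alice --operation Read \
    --topic orders --group my-consumer-group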
Answer:
Explanation:
Increasing batch.size allows the producer to accumulate more messages into a single batch, improving compression and reducing the number of requests sent to the broker.
Increasing linger.ms gives the producer more time to fill up batches before sending them, which improves batching efficiency and throughput.
This combination is a best practice for maximizing throughput, especially when message volume is high or consistent latency is not a strict requirement.
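A sketch of producer settings tuned for throughput (values are illustrative starting points, not universal recommendations):

  # producer.properties
  batch.size=65536       # default 16384 bytes; larger batches compress better
  linger.ms=20           # default 0; wait up to 20 ms to fill a batch
  compression.type=lz4   # optional; compression is applied per batch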
Answer:
Explanation:
In a Kafka cluster, only one broker acts as the Controller at any given time. The Controller is responsible for managing cluster metadata, such as partition leadership and broker status. Even if the cluster has multiple brokers (in this case, four), only one is elected as the Controller, and others serve as regular brokers. If the current Controller fails, another broker is automatically elected to take its place.
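In a ZooKeeper-based cluster, you can check which broker currently holds the controller role (a sketch; the ZooKeeper address is illustrative):

  bin/zookeeper-shell.sh localhost:2181 get /controller
  # Returns JSON such as {"version":1,"brokerid":2,"timestamp":"..."},
  # where brokerid identifies the current controller.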
Answer:
Explanation:
acks=0 provides the highest throughput because the producer does not wait for any acknowledgment from the broker. This minimizes latency and maximizes performance.
However, it comes at the cost of no durability guarantees: messages may be lost if the broker fails before writing them. This setting is suitable when throughput is critical and occasional data loss is acceptable, such as in some log ingestion use cases where logs are also stored elsewhere.
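A minimal sketch of the trade-off in producer configuration:

  # producer.properties
  acks=0    # fire-and-forget: highest throughput, no delivery guarantee
  # Compare: acks=1 waits for the partition leader only;
  # acks=all waits for all in-sync replicas (strongest durability).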
Answer:
Explanation:
Partitions are the primary mechanism for achieving load balancing in Kafka. When a topic has multiple partitions, Kafka clients (producers and consumers) can distribute the load across brokers hosting these partitions.
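For example, creating a topic whose partitions (and therefore load) can be spread across brokers (topic name and counts are illustrative):

  bin/kafka-topics.sh --bootstrap-server localhost:9092 --create \
    --topic orders --partitions 6 --replication-factor 3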
Answer:
Explanation:
Kafka Streams is a client library for building real-time applications that process and analyze data stored in Kafka.
ksqlDB enables event stream processing using SQL-like queries, allowing real-time transformation and analysis of Kafka topics.
Answer:
Explanation:
When a broker goes down, the Controller detects the failure and triggers a leader election for all partitions that had their leader on the failed broker. The leader is chosen from the in-sync replicas (ISRs) of each partition.
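You can observe per-partition leadership and ISR membership with the topics tool (topic name is illustrative):

  bin/kafka-topics.sh --bootstrap-server localhost:9092 --describe --topic orders
  # Each partition line shows Leader: <broker id> and Isr: <replica ids>;
  # after a broker failure, the new leader is drawn from the Isr list.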
Answer:
Explanation:
Kafka relies on ZooKeeper's ephemeral znodes (the /controller node) to detect when the current controller broker goes down and to elect a new controller.
The controller manages partition leadership assignments and handles leader election when a broker fails.
The epoch number ensures coordination and avoids outdated controllers acting on stale data.
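In a ZooKeeper-based cluster, this state is visible directly (a sketch; the ZooKeeper address is illustrative, and the ephemeral /controller node shown earlier identifies the current controller):

  bin/zookeeper-shell.sh localhost:2181 get /controller_epoch
  # Returns a monotonically increasing integer; brokers reject requests
  # from a controller carrying an older (stale) epoch.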
Answer:
Explanation:
If the broker's JVM garbage collection (GC) pause is too long, it may fail to send heartbeats to ZooKeeper within the expected interval. As a result, ZooKeeper considers the broker dead, and the broker may be removed from the cluster, triggering leader elections and partition reassignments.
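Typical mitigations are tuning the JVM and, if needed, lengthening the session timeout (a sketch; values are illustrative and version-dependent):

  # server.properties: allow longer pauses before ZooKeeper declares the broker dead
  zookeeper.session.timeout.ms=18000

  # Broker heap is set via the environment; a right-sized heap shortens GC pauses
  export KAFKA_HEAP_OPTS="-Xms6g -Xmx6g"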
Answer:
Explanation:
processing.guarantee=exactly_once ensures that messages are processed exactly once by ksqlDB, preventing both duplicates and message loss.
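A sketch of the server-side setting; note that ksqlDB commonly passes Kafka Streams configs with the ksql.streams. prefix, and newer Kafka Streams versions also offer exactly_once_v2 (verify both against your version's documentation):

  # ksql-server.properties
  ksql.streams.processing.guarantee=exactly_once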
Answer:
Explanation:
Kafka quotas allow administrators to control and limit the rate of data production and consumption per client (producer/consumer), ensuring fair use of broker resources among multiple clients.
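For example, setting per-client byte-rate quotas (client id and limits are illustrative):

  bin/kafka-configs.sh --bootstrap-server localhost:9092 --alter \
    --add-config 'producer_byte_rate=1048576,consumer_byte_rate=2097152' \
    --entity-type clients --entity-name my-client
  # Caps the client at ~1 MB/s produce and ~2 MB/s fetch, enforced by the brokers.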
Answer:
Explanation:
Avro is the original and most commonly used schema format supported by Schema Registry.
Confluent Schema Registry supports JSON Schema for validation and compatibility checks.
Protocol Buffers (Protobuf) are supported for schema management in Schema Registry.
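For example, registering a JSON Schema under a subject through the Schema Registry REST API (URL and subject name are illustrative; when schemaType is omitted, AVRO is assumed):

  curl -X POST -H "Content-Type: application/vnd.schemaregistry.v1+json" \
    --data '{"schemaType": "JSON", "schema": "{\"type\": \"object\"}"}' \
    http://localhost:8081/subjects/orders-value/versions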