Exam Dumps
Every month, we help more than 1,000 people prepare well for their exams and pass them.

Confluent CCAAK Exam

Confluent Certified Administrator for Apache Kafka Online Practice

Last updated: June 18, 2025

You can work through these online practice questions to gauge your knowledge of the Confluent CCAAK exam topics before deciding whether to register for the exam.

To pass the exam with a 100% success rate and save 35% of your preparation time, choose the CCAAK dumps (latest real exam questions), which currently include the 54 most recent exam questions and answers.


Question No : 1


How does Kafka guarantee message integrity after a message is written on a disk?

Answer:
Explanation:
Kafka ensures message immutability for data integrity. Once a message is written to a Kafka topic and persisted to disk, it cannot be modified. This immutability guarantees that consumers always receive the original message content, which is critical for auditability, fault tolerance, and data reliability.
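As an illustration, a persisted segment can be read (but never modified) with the kafka-dump-log.sh tool; the data directory and topic name below are assumptions for a typical installation:

# Dump the records of segment 0 for partition 0 of topic t1 (read-only inspection)
bin/kafka-dump-log.sh --print-data-log --files /var/lib/kafka/data/t1-0/00000000000000000000.log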

Question No : 2


What are the benefits of gracefully shutting down brokers? (Choose two.)

Answer:
Explanation:
A graceful shutdown ensures that logs are flushed to disk, minimizing recovery time during restart. Kafka performs controlled leader migration during a graceful shutdown to avoid disruption and ensure availability.
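A minimal sketch of a graceful shutdown, assuming a standard installation (controlled shutdown is enabled by default via controlled.shutdown.enable=true in server.properties):

# Sends SIGTERM to the broker process; the broker flushes its logs and
# migrates partition leadership before exiting
bin/kafka-server-stop.sh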

Question No : 3


An employee in the reporting department needs assistance because their data feed is slowing down.
You start by quickly checking the consumer lag for the clients on the data stream.
Which command will allow you to quickly check for lag on the consumers?

Answer:
Explanation:
The kafka-consumer-groups.sh script is used to inspect consumer group details, including consumer lag, which indicates how far behind a consumer is from the latest data in the partition.
Typical usage:
bin/kafka-consumer-groups.sh --bootstrap-server <broker> --describe --group <group_id>
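For example, assuming a broker at localhost:9092 and a consumer group named reporting-app:

bin/kafka-consumer-groups.sh --bootstrap-server localhost:9092 --describe --group reporting-app
# The output includes a LAG column per partition: the difference between the
# log end offset and the group's last committed offset.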

Question No : 4


When using Kafka ACLs, when is the resource authorization checked?

Answer:
Explanation:
Kafka ACLs (Access Control Lists) perform authorization checks every time a client attempts to access a resource (e.g., topic, consumer group). This ensures continuous enforcement of permissions, not just at connection time or intervals. This approach provides fine-grained security, preventing unauthorized actions at any time during a session.
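For example, an ACL can be added with the kafka-acls.sh tool (the principal, topic, and broker address below are illustrative); the broker then evaluates this ACL on every subsequent request that touches the topic:

bin/kafka-acls.sh --bootstrap-server localhost:9092 --add \
  --allow-principal User:reporting --operation Read --topic logs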

Question No : 5


You want to increase Producer throughput for the messages it sends to your Kafka cluster by tuning the batch size (‘batch.size’) and the time the Producer waits before sending a batch (‘linger.ms’). According to best practices, what should you do?

Answer:
Explanation:
Increasing batch.size allows the producer to accumulate more messages into a single batch, improving compression and reducing the number of requests sent to the broker.
Increasing linger.ms gives the producer more time to fill up batches before sending them, which improves batching efficiency and throughput.
This combination is a best practice for maximizing throughput, especially when message volume is high or consistent latency is not a strict requirement.
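A sketch of producer settings along these lines; the values are illustrative starting points, not recommendations:

# producer.properties
batch.size=65536      # default 16384 bytes; larger batches mean fewer, fuller requests
linger.ms=20          # default 0; wait up to 20 ms for a batch to fill
compression.type=lz4  # optional; compression benefits grow with batch size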

Question No : 6


Your Kafka cluster has four brokers. The topic t1 on the cluster has two partitions, and it has a replication factor of three. You create a Consumer Group with four consumers, which subscribes to t1.
In the scenario above, how many Controllers are in the Kafka cluster?

Answer:
Explanation:
In a Kafka cluster, only one broker acts as the Controller at any given time. The Controller is responsible for managing cluster metadata, such as partition leadership and broker status. Even if the cluster has multiple brokers (in this case, four), only one is elected as the Controller, and others serve as regular brokers. If the current Controller fails, another broker is automatically elected to take its place.
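In a ZooKeeper-based cluster, the current Controller can be looked up in the /controller znode (the ZooKeeper address is illustrative):

bin/zookeeper-shell.sh localhost:2181 get /controller
# Example output: {"version":1,"brokerid":2,"timestamp":"..."}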

Question No : 7


A company is setting up a log ingestion use case where they will consume logs from numerous systems. The company wants to tune Kafka for maximum throughput. In this scenario, which acknowledgment setting makes the most sense?

Answer:
Explanation:
acks=0 provides the highest throughput because the producer does not wait for any acknowledgment from the broker. This minimizes latency and maximizes performance.
However, it comes at the cost of no durability guarantees: messages may be lost if the broker fails before writing them. This setting is suitable when throughput is critical and occasional data loss is acceptable, such as in some log ingestion use cases where logs are also stored elsewhere.
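A minimal sketch of such a fire-and-forget producer configuration (values are illustrative):

# producer.properties
acks=0   # do not wait for any broker acknowledgment (highest throughput, no durability)
# For comparison: acks=1 waits for the leader only; acks=all waits for all in-sync replicas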

Question No : 8


How can load balancing of Kafka clients across multiple brokers be accomplished?

Answer:
Explanation:
Partitions are the primary mechanism for achieving load balancing in Kafka. When a topic has multiple partitions, Kafka clients (producers and consumers) can distribute the load across brokers hosting these partitions.
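For instance, creating a topic with several partitions lets Kafka spread the partition leaders across brokers (the broker address, topic name, and counts are illustrative):

bin/kafka-topics.sh --bootstrap-server localhost:9092 --create \
  --topic web-logs --partitions 6 --replication-factor 3
# Producers distribute records across the six partitions; consumers in one
# group split those partitions among themselves.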

Question No : 9


Which technologies can be used to perform event stream processing? (Choose two.)

Answer:
Explanation:
Kafka Streams is a client library for building real-time applications that process and analyze data stored in Kafka.
ksqlDB enables event stream processing using SQL-like queries, allowing real-time transformation and analysis of Kafka topics.
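As a sketch, a simple ksqlDB stream transformation might look like this at the ksql prompt (the stream and column names are hypothetical):

CREATE STREAM error_logs AS
  SELECT host, message
  FROM logs
  WHERE level = 'ERROR';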

Question No : 10


When a broker goes down, what will the Controller do?

Answer:
Explanation:
When a broker goes down, the Controller detects the failure and triggers a leader election for all partitions that had their leader on the failed broker. The leader is chosen from the in-sync replicas (ISRs) of each partition.
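The effect can be observed with kafka-topics.sh (the broker address and topic name are illustrative):

bin/kafka-topics.sh --bootstrap-server localhost:9092 --describe --topic t1
# The output lists, per partition, the Leader, Replicas, and Isr; after a broker
# failure, the Leader changes to one of the surviving ISR members.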

Question No : 11


You are managing a Kafka cluster with five brokers (broker ids '0', '1', '2', '3', '4') and three ZooKeeper nodes. There are 100 topics, five partitions for each topic, and a replication factor of three on the cluster. Broker id '0' is currently the Controller, and this broker suddenly fails.
Which statements are correct? (Choose three.)

Answer:
Explanation:
Kafka relies on ZooKeeper’s ephemeral nodes to detect if a broker (controller) goes down and to elect a new controller.
The controller manages partition leadership assignments and handles leader election when a broker fails.
The epoch number ensures coordination and avoids outdated controllers acting on stale data.
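Both pieces of state live in ZooKeeper and can be inspected directly (the ZooKeeper address is illustrative):

bin/zookeeper-shell.sh localhost:2181 get /controller        # ephemeral znode naming the active Controller
bin/zookeeper-shell.sh localhost:2181 get /controller_epoch  # epoch counter, incremented for each new Controller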

Question No : 12


If a broker's JVM garbage collection takes too long, what can occur?

Answer:
Explanation:
If the broker's JVM garbage collection (GC) pause is too long, it may fail to send heartbeats to ZooKeeper within the expected interval. As a result, ZooKeeper considers the broker dead, and the broker may be removed from the cluster, triggering leader elections and partition reassignments.
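The relevant window is the broker's ZooKeeper session timeout; a sketch of the setting (the value shown is the default in recent Kafka versions, not a tuning recommendation):

# server.properties
zookeeper.session.timeout.ms=18000  # broker is declared dead if no heartbeat arrives within this window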

Question No : 13


A customer has a use case for a ksqlDB persistent query. You need to make sure that duplicate messages are not processed and messages are not skipped.
Which property should you use?

Answer:
Explanation:
processing.guarantee=exactly_once ensures that messages are processed exactly once by ksqlDB, preventing both duplicates and message loss.
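At the ksql prompt, the property can be set for the session before creating the persistent query (a minimal sketch; the server-wide equivalent is the ksql.streams.processing.guarantee server setting):

SET 'processing.guarantee' = 'exactly_once';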

Question No : 14


Multiple clients are sharing a Kafka cluster.
As an administrator, how would you ensure that Kafka resources are distributed fairly to all clients?

Answer:
Explanation:
Kafka quotas allow administrators to control and limit the rate of data production and consumption per client (producer/consumer), ensuring fair use of broker resources among multiple clients.
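For example, per-client byte-rate quotas can be set with kafka-configs.sh (the client ID and limits are illustrative):

bin/kafka-configs.sh --bootstrap-server localhost:9092 --alter \
  --add-config 'producer_byte_rate=1048576,consumer_byte_rate=2097152' \
  --entity-type clients --entity-name client-1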

Question No : 15


You are using Confluent Schema Registry to provide a RESTful interface for storing and retrieving schemas.
Which types of schemas are supported? (Choose three.)

Answer:
Explanation:
Avro is the original and most commonly used schema format supported by Schema Registry.
Confluent Schema Registry supports JSON Schema for validation and compatibility checks.
Protocol Buffers (Protobuf) are supported for schema management in Schema Registry.
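For instance, a schema can be registered through the REST interface (the URL, subject, and schema body are illustrative; schemaType defaults to AVRO and can instead be JSON or PROTOBUF):

curl -X POST -H "Content-Type: application/vnd.schemaregistry.v1+json" \
  --data '{"schemaType": "AVRO", "schema": "{\"type\":\"record\",\"name\":\"Log\",\"fields\":[{\"name\":\"msg\",\"type\":\"string\"}]}"}' \
  http://localhost:8081/subjects/logs-value/versions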
