The exactly-once guarantee in Kafka Streams applies to which flow of data?
Correct Answer:
A
Kafka Streams can only guarantee exactly-once processing for a Kafka-to-Kafka topology, i.e. when both the input and the output are Kafka topics.
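As a minimal sketch of such a Kafka-to-Kafka topology with exactly-once enabled (the application id, broker address, and topic names below are placeholders, not from the question):

```java
import java.util.Properties;
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.streams.kstream.KStream;

public class ExactlyOnceStreamsExample {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "eos-demo");          // placeholder app id
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // placeholder broker
        // Exactly-once is turned on via processing.guarantee; older clients use "exactly_once".
        props.put(StreamsConfig.PROCESSING_GUARANTEE_CONFIG, StreamsConfig.EXACTLY_ONCE_V2);
        props.put(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass());
        props.put(StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG, Serdes.String().getClass());

        StreamsBuilder builder = new StreamsBuilder();
        // Read from one Kafka topic and write to another: both ends are Kafka,
        // so the consume -> process -> produce path is covered by the guarantee.
        KStream<String, String> source = builder.stream("input-topic");
        source.mapValues(value -> value.toUpperCase())
              .to("output-topic");

        new KafkaStreams(builder.build(), props).start();
    }
}
```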
How does a consumer commit offsets in Kafka?
Correct Answer:
B
Consumers do not write directly to the __consumer_offsets topic. Instead, they send their offset commits to the broker that has been elected to manage their group, the Group Coordinator, which persists them to that topic.
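A minimal sketch of a consumer committing offsets manually (broker address, group id, and topic name are placeholders); the commitSync() call is what sends the OffsetCommit request to the Group Coordinator:

```java
import java.time.Duration;
import java.util.List;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;

public class ManualCommitConsumer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // placeholder broker
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "demo-group");              // placeholder group
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        props.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, "false"); // commit manually instead

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(List.of("demo-topic")); // placeholder topic
            while (true) {
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(500));
                for (ConsumerRecord<String, String> record : records) {
                    System.out.printf("offset=%d value=%s%n", record.offset(), record.value());
                }
                // The commit goes to the Group Coordinator broker, which writes it to
                // __consumer_offsets; the consumer never produces to that topic itself.
                consumer.commitSync();
            }
        }
    }
}
```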
There are two consumers C1 and C2 belonging to the same group G, subscribed to topics T1 and T2. Each topic has 3 partitions. How will the partitions be assigned to the consumers when the partition assignment strategy is the RoundRobinAssignor?
Correct Answer:
A
The RoundRobinAssignor lists all partitions across the subscribed topics and deals them out to the consumers one by one, so the correct option is the only one where the two consumers end up with an equal number of partitions (three each) spread across both topics. An interesting article to read is https://medium.com/@anyili0928/what-i-have-learned-from-kafka-partition-assignment-strategy-799fdf15d3ab
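A minimal sketch of enabling the round-robin strategy on a consumer (broker address is a placeholder); with two such consumers in group G the six partitions are dealt out alternately:

```java
import java.time.Duration;
import java.util.List;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.clients.consumer.RoundRobinAssignor;
import org.apache.kafka.common.serialization.StringDeserializer;

public class RoundRobinConsumer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // placeholder broker
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "G");
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        // Use the round-robin strategy instead of the default assignor.
        props.put(ConsumerConfig.PARTITION_ASSIGNMENT_STRATEGY_CONFIG,
                  RoundRobinAssignor.class.getName());

        KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props);
        consumer.subscribe(List.of("T1", "T2"));
        // Assignment happens when the consumer joins the group on its first poll.
        // With C1 and C2 both in group G, the assignor interleaves the partitions
        // of T1 and T2, e.g. C1 -> T1-0, T1-2, T2-1 and C2 -> T1-1, T2-0, T2-2,
        // giving each consumer three partitions across the two topics.
        consumer.poll(Duration.ofMillis(100));
    }
}
```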
You want to sink data from a Kafka topic to S3 using Kafka Connect. There are 10 brokers in the cluster, and the topic has 2 partitions with a replication factor of 3. How many tasks will you configure for the S3 connector?
Correct Answer:
D
You cannot have more sink tasks (i.e. consumers) than the number of topic partitions, so the maximum useful setting is 2. The number of brokers and the replication factor are irrelevant here.
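A minimal sketch of such a connector config, assuming the Confluent S3 sink connector; the connector name, topic, bucket, and region are placeholders. The key setting is tasks.max: any value above 2 would only create idle tasks.

```json
{
  "name": "s3-sink-demo",
  "config": {
    "connector.class": "io.confluent.connect.s3.S3SinkConnector",
    "topics": "my-topic",
    "tasks.max": "2",
    "s3.bucket.name": "my-bucket",
    "s3.region": "us-east-1",
    "storage.class": "io.confluent.connect.s3.storage.S3Storage",
    "format.class": "io.confluent.connect.s3.format.json.JsonFormat",
    "flush.size": "1000"
  }
}
```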
What is true about Kafka brokers and clients from version 0.10.2 onwards?
Correct Answer:
C
Kafka's new bidirectional client compatibility introduced in 0.10.2 allows this. Read more here: https://www.confluent.io/blog/upgrading-apache-kafka-clients-just-got-easier/