

CCDAK Sample Questions and Answers

Questions 4

In Avro, adding an element to an enum without a default is a __ schema evolution

Options:

A.

breaking

B.

full

C.

backward

D.

forward

Questions 5

What happens if you write the following code in your producer? producer.send(producerRecord).get()

Options:

A.

Compression will be increased

B.

Throughput will be decreased

C.

It will force all brokers in Kafka to acknowledge the producerRecord

D.

Batching will be increased

Questions 6

If I want to send binary data through the REST proxy, it needs to be base64 encoded. Which component needs to encode the binary data into base64?

Options:

A.

The Producer

B.

The Kafka Broker

C.

Zookeeper

D.

The REST Proxy

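The encoding step above happens before the data ever reaches Kafka: the client of the REST proxy (the producer side) must base64-encode the raw bytes itself. A minimal sketch of that encoding and the matching decode, using only the standard library:

```java
import java.util.Base64;

public class RestProxyBase64Demo {
    // The REST proxy client (the HTTP "producer") must base64-encode the
    // raw bytes before embedding them in the JSON request body.
    public static String encode(byte[] raw) {
        return Base64.getEncoder().encodeToString(raw);
    }

    // A consumer reading through the REST proxy gets the string back and
    // must decode it to recover the original bytes.
    public static byte[] decode(String encoded) {
        return Base64.getDecoder().decode(encoded);
    }

    public static void main(String[] args) {
        byte[] payload = {0x4B, 0x61, 0x66, 0x6B, 0x61}; // "Kafka" in ASCII
        String wire = encode(payload);
        System.out.println(wire); // S2Fma2E=
    }
}
```

Note that a consumer reading the topic directly (not through the proxy) receives the raw binary bytes, since the proxy decodes base64 before producing.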
Questions 7

In the Kafka consumer metrics it is observed that fetch-rate is very high and each fetch is small. What steps will you take to increase throughput?

Options:

A.

Increase fetch.max.wait

B.

Increase fetch.max.bytes

C.

Decrease fetch.max.bytes

D.

Decrease fetch.min.bytes

E.

Increase fetch.min.bytes

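The fix hinges on making the broker batch more data per fetch: raising fetch.min.bytes tells the broker to hold a fetch response until enough data has accumulated, which lowers the fetch-rate and raises throughput. A minimal sketch of the relevant consumer properties (the property names are standard Kafka consumer configs; the bootstrap address and 64 KB threshold are illustrative choices):

```java
import java.util.Properties;

public class FetchTuningConfig {
    public static Properties consumerProps() {
        Properties props = new Properties();
        props.setProperty("bootstrap.servers", "localhost:9092"); // placeholder address
        // Ask the broker to hold each fetch until at least 64 KB is available...
        props.setProperty("fetch.min.bytes", "65536");
        // ...but never wait longer than 500 ms, bounding added latency.
        props.setProperty("fetch.max.wait.ms", "500");
        return props;
    }

    public static void main(String[] args) {
        System.out.println(consumerProps().getProperty("fetch.min.bytes"));
    }
}
```

The trade-off is latency: the broker may delay responses up to fetch.max.wait.ms while waiting to reach fetch.min.bytes.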
Questions 8

Using the Confluent Schema Registry, where are Avro schemas stored?

Options:

A.

In the Schema Registry embedded SQL database

B.

In the Zookeeper node /schemas

C.

In the message bytes themselves

D.

In the _schemas topic

Questions 9

What is the risk of increasing max.in.flight.requests.per.connection while also enabling retries in a producer?

Options:

A.

At least once delivery is not guaranteed

B.

Message order not preserved

C.

Reduce throughput

D.

Less resilient

Questions 10

Your manager would like to have topic availability over consistency. Which setting do you need to change in order to enable that?

Options:

A.

compression.type

B.

unclean.leader.election.enable

C.

min.insync.replicas

Questions 11

If I supply the setting compression.type=snappy to my producer, what will happen? (select two)

Options:

A.

The Kafka brokers have to de-compress the data

B.

The Kafka brokers have to compress the data

C.

The Consumers have to de-compress the data

D.

The Consumers have to compress the data

E.

The Producers have to compress the data

Questions 12

Select all the ways for one consumer to subscribe simultaneously to the following topics: topic.history, topic.sports, topic.politics. (select two)

Options:

A.

consumer.subscribe(Pattern.compile("topic\..*"));

B.

consumer.subscribe("topic.history"); consumer.subscribe("topic.sports"); consumer.subscribe("topic.politics");

C.

consumer.subscribePrefix("topic.");

D.

consumer.subscribe(Arrays.asList("topic.history", "topic.sports", "topic.politics"));

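The pattern form in option A is worth unpacking: in Java source the regex must be written "topic\\..*" (the backslash escaped), and subscribing by pattern covers any topic whose name matches, including topics created later. A small sketch, using only java.util.regex, of which topic names that pattern actually covers (subscribing itself would need a live consumer, which is omitted here):

```java
import java.util.Arrays;
import java.util.List;
import java.util.regex.Pattern;

public class TopicPatternDemo {
    // Same regex as consumer.subscribe(Pattern.compile("topic\\..*"))
    static final Pattern TOPIC_PATTERN = Pattern.compile("topic\\..*");

    public static boolean matches(String topic) {
        return TOPIC_PATTERN.matcher(topic).matches();
    }

    public static void main(String[] args) {
        List<String> topics = Arrays.asList(
                "topic.history", "topic.sports", "topic.politics", "other.topic");
        for (String t : topics) {
            System.out.println(t + " -> " + matches(t));
        }
    }
}
```

Option B does not work because each subscribe() call replaces the previous subscription rather than adding to it; the list form in option D is the other correct approach.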
Questions 13

You are running a Kafka Streams application in a Docker container managed by Kubernetes. Upon application restart, it takes a long time for the container to replicate the state and get back to processing data. How can you dramatically improve the application restart time?

Options:

A.

Mount a persistent volume for your RocksDB

B.

Increase the number of partitions in your inputs topic

C.

Reduce the Streams caching property

D.

Increase the number of Streams threads

Questions 14

In Java, Avro SpecificRecords classes are

Options:

A.

automatically generated from an Avro Schema

B.

written manually by the programmer

C.

automatically generated from an Avro Schema + a Maven / Gradle Plugin

Questions 15

Which is an optional field in an Avro record?

Options:

A.

doc

B.

name

C.

namespace

D.

fields

Questions 16

Once sent to a topic, a message can be modified

Options:

A.

No

B.

Yes

Questions 17

We would like to be in an at-most-once consuming scenario. Which offset commit strategy would you recommend?

Options:

A.

Commit the offsets on disk, after processing the data

B.

Do not commit any offsets and read from beginning

C.

Commit the offsets in Kafka, after processing the data

D.

Commit the offsets in Kafka, before processing the data

Questions 18

A consumer failed to process record #10 but succeeded in processing record #11. Which course of action should you choose to guarantee at-least-once processing?

Options:

A.

Commit offsets at 10

B.

Do not commit until successfully processing the record #10

C.

Commit offsets at 11

Questions 19

What information is not stored in Zookeeper? (select two)

Options:

A.

Schema Registry schemas

B.

Consumer offset

C.

ACL information

D.

Controller registration

E.

Broker registration info

Questions 20

A Zookeeper ensemble contains 5 servers. What is the maximum number of servers that can go missing and the ensemble still run?

Options:

A.

3

B.

4

C.

2

D.

1

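The reasoning behind this question is majority quorum: an ensemble of n servers stays available as long as floor(n/2) + 1 servers are alive, so it tolerates floor((n-1)/2) failures. A one-function sketch of that arithmetic:

```java
public class ZookeeperQuorum {
    // A ZooKeeper ensemble of n servers needs a majority (floor(n/2) + 1)
    // alive, so it tolerates floor((n-1)/2) failures -- 2 out of 5.
    public static int maxFailures(int ensembleSize) {
        return (ensembleSize - 1) / 2;
    }

    public static void main(String[] args) {
        System.out.println(maxFailures(5)); // prints 2
    }
}
```

This is also why ensembles use odd sizes: a 6-node ensemble tolerates no more failures than a 5-node one.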
Questions 21

If a topic has a replication factor of 3...

Options:

A.

3 replicas of the same data will live on 1 broker

B.

Each partition will live on 4 different brokers

C.

Each partition will live on 2 different brokers

D.

Each partition will live on 3 different brokers

Questions 22

What is true about partitions? (select two)

Options:

A.

A broker can have a partition and its replica on its disk

B.

You cannot have more partitions than the number of brokers in your cluster

C.

A broker can have different partitions numbers for the same topic on its disk

D.

Only out-of-sync replicas are replicas; the remaining partitions that are in sync are also leaders

E.

A partition has one replica that is a leader, while the other replicas are followers

Questions 23

To get acknowledgement of writes to only the leader partition, we need to use the config...

Options:

A.

acks=1

B.

acks=0

C.

acks=all

Questions 24

You are using JDBC source connector to copy data from a table to Kafka topic. There is one connector created with max.tasks equal to 2 deployed on a cluster of 3 workers. How many tasks are launched?

Options:

A.

3

B.

2

C.

1

D.

6

Questions 25

In Avro, removing a field that does not have a default is a __ schema evolution

Options:

A.

breaking

B.

full

C.

backward

D.

forward

Questions 26

I am producing Avro data on my Kafka cluster that is integrated with the Confluent Schema Registry. After a schema change that is incompatible, I know my data will be rejected. Which component will reject the data?

Options:

A.

The Confluent Schema Registry

B.

The Kafka Broker

C.

The Kafka Producer itself

D.

Zookeeper

Questions 27

Which of the following is true regarding thread safety in the Java Kafka Clients?

Options:

A.

One Producer can be safely used in multiple threads

B.

One Consumer can be safely used in multiple threads

C.

One Consumer needs to run in one thread

D.

One Producer needs to be run in one thread

Questions 28

Which of these joins does not require the input topics to share the same number of partitions?

Options:

A.

KStream-KTable join

B.

KStream-KStream join

C.

KStream-GlobalKTable

D.

KTable-KTable join

Questions 29

To allow consumers in a group to resume at the previously committed offset, I need to set the proper value for...

Options:

A.

value.deserializer

B.

auto.offset.reset

C.

group.id

D.

enable.auto.commit

Questions 30

How will you find out all the partitions where one or more of the replicas for the partition are not in-sync with the leader?

Options:

A.

kafka-topics.sh --bootstrap-server localhost:9092 --describe --unavailable-partitions

B.

kafka-topics.sh --zookeeper localhost:2181 --describe --unavailable-partitions

C.

kafka-topics.sh --broker-list localhost:9092 --describe --under-replicated-partitions

D.

kafka-topics.sh --zookeeper localhost:2181 --describe --under-replicated-partitions

Questions 31

The kafka-console-consumer CLI, when used with the default options

Options:

A.

uses a random group id

B.

always uses the same group id

C.

does not use a group id

Questions 32

Which of the following statements are true regarding the number of partitions of a topic?

Options:

A.

The number of partitions in a topic cannot be altered

B.

We can add partitions in a topic by adding a broker to the cluster

C.

We can add partitions in a topic using the kafka-topics.sh command

D.

We can remove partitions in a topic by removing a broker

E.

We can remove partitions in a topic using the kafka-topics.sh command

Questions 33

When using plain JSON data with Connect, you see the following error message: org.apache.kafka.connect.errors.DataException: JsonDeserializer with schemas.enable requires "schema" and "payload" fields and may not contain additional fields. How will you fix the error?

Options:

A.

Set key.converter, value.converter to JsonConverter and the schema registry url

B.

Use Single Message Transforms to add schema and payload fields in the message

C.

Set key.converter.schemas.enable and value.converter.schemas.enable to false

D.

Set key.converter, value.converter to AvroConverter and the schema registry url

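The fix in option C corresponds to turning off the converter's schema envelope, since plain JSON carries no "schema"/"payload" wrapper. A sketch of the relevant Connect worker settings (the converter class name is the standard Kafka Connect JSON converter):

```properties
key.converter=org.apache.kafka.connect.json.JsonConverter
value.converter=org.apache.kafka.connect.json.JsonConverter
# Plain JSON has no {"schema": ..., "payload": ...} envelope, so turn schemas off:
key.converter.schemas.enable=false
value.converter.schemas.enable=false
```

With schemas.enable=true the converter expects every message to be wrapped in that envelope, which is what triggers the DataException above.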
Questions 34

Two consumers share the same group.id (consumer group id). Each consumer will

Options:

A.

Read mutually exclusive offsets blocks on all the partitions

B.

Read all the data on mutual exclusive partitions

C.

Read all data from all partitions

Questions 35

You are doing complex calculations using a machine learning framework on records fetched from a Kafka topic. It takes about 6 minutes to process a record batch, and the consumer group enters a rebalance even though the application is still running. How can you improve this scenario?

Options:

A.

Increase max.poll.interval.ms to 600000

B.

Increase heartbeat.interval.ms to 600000

C.

Increase session.timeout.ms to 600000

D.

Add consumers to the consumer group and kill them right away

Questions 36

Which of the following errors are retriable from a producer perspective? (select two)

Options:

A.

MESSAGE_TOO_LARGE

B.

INVALID_REQUIRED_ACKS

C.

NOT_ENOUGH_REPLICAS

D.

NOT_LEADER_FOR_PARTITION

E.

TOPIC_AUTHORIZATION_FAILED

Questions 37

A customer has many consumer applications that process messages from a Kafka topic. Each consumer application can only process 50 MB/s. Your customer wants to achieve a target throughput of 1 GB/s. What is the minimum number of partitions you will suggest to the customer for that particular topic?

Options:

A.

10

B.

20

C.

1

D.

50

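The sizing logic here is that within a consumer group each partition is consumed by at most one consumer, so aggregate throughput is capped at (partitions × per-consumer rate). The minimum partition count is therefore the ceiling of target over per-consumer throughput; a one-function sketch:

```java
public class PartitionSizing {
    // Minimum partitions so consumers at perConsumerMBps can jointly reach
    // targetMBps: each partition feeds at most one consumer in a group,
    // so partitions >= ceil(target / perConsumer).
    public static int minPartitions(int targetMBps, int perConsumerMBps) {
        return (targetMBps + perConsumerMBps - 1) / perConsumerMBps;
    }

    public static void main(String[] args) {
        // 1 GB/s target over 50 MB/s consumers
        System.out.println(minPartitions(1000, 50)); // prints 20
    }
}
```

With 20 partitions and 20 consumers at 50 MB/s each, the group can just reach the 1 GB/s target.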
Questions 38

You are sending messages with keys to a topic. To increase throughput, you decide to increase the number of partitions of the topic. Select all that apply.

Options:

A.

All the existing records will get rebalanced among the partitions to balance load

B.

New records with the same key will get written to the partition where old records with that key were written

C.

New records may get written to a different partition

D.

Old records will stay in their partitions

Questions 39

Compaction is enabled for a topic in Kafka by setting log.cleanup.policy=compact. What is true about log compaction?

Options:

A.

After cleanup, only one message per key is retained with the first value

B.

Each message stored in the topic is compressed

C.

Kafka automatically de-duplicates incoming messages based on key hashes

D.

After cleanup, only one message per key is retained with the latest value

E.

Compaction changes the offset of messages

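Log compaction can be pictured as keeping only the newest value seen for each key. A toy model of that cleanup step, using a map that overwrites earlier values (a deliberate simplification: real compaction also keeps the active segment uncompacted and preserves each surviving record's original offset):

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class CompactionSketch {
    // Toy model of log compaction: after cleanup, only the latest value
    // per key survives.
    public static Map<String, String> compact(String[][] log) {
        Map<String, String> latest = new LinkedHashMap<>();
        for (String[] record : log) {
            latest.put(record[0], record[1]); // later values overwrite earlier ones
        }
        return latest;
    }

    public static void main(String[] args) {
        String[][] log = {{"user1", "v1"}, {"user2", "v1"}, {"user1", "v2"}};
        System.out.println(compact(log)); // {user1=v2, user2=v1}
    }
}
```

Note this happens only at cleanup time; incoming duplicates are not de-duplicated on arrival, which is why option C is wrong.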
Questions 40

If I want to send binary data through the REST proxy to topic "test_binary", it needs to be base64 encoded. A consumer connecting directly to the Kafka topic "test_binary" will receive

Options:

A.

binary data

B.

avro data

C.

json data

D.

base64 encoded data, and it will need to decode it

Questions 41

To import data from external databases, I should use

Options:

A.

Confluent REST Proxy

B.

Kafka Connect Sink

C.

Kafka Streams

D.

Kafka Connect Source

Questions 42

We want the average of all events in every five-minute window updated every minute. What kind of Kafka Streams window will be required on the stream?

Options:

A.

Session window

B.

Tumbling window

C.

Sliding window

D.

Hopping window

Questions 43

To continuously export data from Kafka into a target database, I should use

Options:

A.

Kafka Producer

B.

Kafka Streams

C.

Kafka Connect Sink

D.

Kafka Connect Source

Questions 44

A bank uses a Kafka cluster for credit card payments. What should be the value of the property unclean.leader.election.enable?

Options:

A.

FALSE

B.

TRUE

Questions 45

You have a Kafka cluster and all the topics have a replication factor of 3. One intern at your company stopped a broker, and accidentally deleted all the data of that broker on the disk. What will happen if the broker is restarted?

Options:

A.

The broker will start, and other topics will also be deleted as the broker data on the disk got deleted

B.

The broker will start, and won't be online until all the data it needs to have is replicated from other leaders

C.

The broker will crash

D.

The broker will start, and won't have any data. If the broker becomes leader, we have data loss

Exam Code: CCDAK
Exam Name: Confluent Certified Developer for Apache Kafka Certification Examination
Last Update: Nov 23, 2024
Questions: 150