Apache Kafka:
1. Apache Kafka originated at LinkedIn and was open sourced as an Apache project in 2011
2. It became a first-class (top-level) Apache project in 2012
3. Kafka is written in Scala and Java
4. It is a publish-subscribe based, fault-tolerant messaging system
5. It is fast, scalable and distributed by design.
6. Apache Kafka is a distributed publish-subscribe messaging system and a robust queue that can handle a high volume of data, enabling you to pass messages from one endpoint to another
7. Kafka is suitable for both offline and online message
consumption.
8. Kafka messages are persisted on disk and replicated within the cluster to prevent data loss.
9. Kafka is built on top of the
ZooKeeper synchronization service.
10. It integrates very well with
Apache Storm and Spark for real-time streaming data analysis.
Benefits:
1. Reliability − Kafka is distributed, partitioned, replicated and fault tolerant.
2. Scalability − The Kafka messaging system scales easily without downtime. Kafka is very fast and is designed for zero downtime and zero data loss.
3. Durability − Kafka uses a distributed commit log, which means messages are persisted on disk as quickly as possible; hence it is durable.
4. Performance − Kafka has high throughput for both publishing and subscribing to messages. It maintains stable performance even when many terabytes of messages are stored.
Use cases:
1. Metrics − Kafka is
often used for operational monitoring data. This involves aggregating
statistics from distributed applications to produce centralized feeds of
operational data.
2. Log Aggregation Solution −
Kafka can be used across an organization to collect logs from multiple services
and make them available in a standard format to multiple consumers.
3. Stream Processing − Popular frameworks such as Storm and Spark Streaming read data from a topic, process it, and write the processed data to a new topic, where it becomes available to users and applications. Kafka’s strong durability is also very useful in the context of stream processing.
Components in Kafka
1. Topics
A stream of messages belonging to a particular category is
called a topic. Data is stored in topics.
Topics are split into partitions. For each topic, Kafka
keeps a minimum of one partition. Each such partition contains messages in an
immutable ordered sequence. A partition is implemented as a set of segment
files of equal size.
2. Partition
Topics may have many partitions, so a topic can handle an arbitrary amount of data.
3. Partition offset
Each message within a partition has a unique sequence ID called an offset.
4. Replicas of partition
Replicas are nothing but backups of a partition. Replicas never read or write data themselves; they are used to prevent data loss.
5. Brokers
- Brokers are simple systems responsible for maintaining the published data. Each broker may have zero or more partitions per topic. Assume there are N partitions in a topic and N brokers; then each broker will have one partition.
- Assume there are N partitions in a topic and more than N brokers (N + M); then the first N brokers will have one partition each, and the remaining M brokers will not have any partition for that particular topic.
- Assume there are N partitions in a topic and fewer than N brokers (N − M); then the partitions are shared among the brokers, with each broker holding one or more. This scenario is not recommended due to unequal load distribution among the brokers.
6. Kafka Cluster
A Kafka deployment having more than one broker is called a Kafka cluster. A Kafka cluster can be expanded without downtime. These clusters are used to manage the persistence and replication of message data.
7. Producers
Producers publish messages to one or more Kafka topics. Producers send data to Kafka brokers. Every time a producer publishes a message to a broker, the broker simply appends the message to the last segment file. More precisely, the message is appended to a partition. Producers can also send messages to a partition of their choice.
8. Consumers
Consumers read data from brokers. Consumers subscribe to one or more topics and consume published messages by pulling data from the brokers.
9. Leader
The leader is the node responsible for all reads and writes for a given partition. Every partition has one server acting as its leader.
10. Follower
A node which follows the leader’s instructions is called a follower. If the leader fails, one of the followers will automatically become the new leader. A follower acts as a normal consumer: it pulls messages and updates its own data store.
Role of Zookeeper
A critical dependency of Apache Kafka is Apache Zookeeper,
which is a distributed configuration and synchronization service. Zookeeper
serves as the coordination interface between the Kafka brokers and consumers.
The Kafka servers share information via a Zookeeper cluster. Kafka stores basic
metadata in Zookeeper such as information about topics, brokers, consumer
offsets (queue readers) and so on.
Since all of the critical information is stored in Zookeeper, and Zookeeper normally replicates this data across its ensemble, the failure of a Kafka broker or a Zookeeper node does not affect the state of the Kafka cluster. Kafka will restore the state once Zookeeper restarts. This gives zero downtime for Kafka. Leader election among the Kafka brokers, in the event of a leader failure, is also done using Zookeeper.
Prerequisites:
1. Java: JDK 1.6+
2. Zookeeper
Apache Kafka installation
Download and extract the Kafka binaries to the desired
folder
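For example, assuming the downloaded archive is named kafka_2.11-1.0.0.tgz (substitute the Scala/Kafka version you actually downloaded):
tar -xzf kafka_2.11-1.0.0.tgz
cd kafka_2.11-1.0.0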
Starting the server
Kafka uses Zookeeper, so we need to start the Zookeeper server first.
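Zookeeper can be started with the convenience script that ships with Kafka:
bin/zookeeper-server-start.sh config/zookeeper.properties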
Starting Kafka server
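In a new terminal, start the Kafka broker with its default configuration:
bin/kafka-server-start.sh config/server.properties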
Creating a Topic
Let's create a topic named "test" with
a single partition and only one replica:
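bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 1 --partitions 1 --topic test
(Newer Kafka releases replace the --zookeeper option with --bootstrap-server localhost:9092.)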
We
can now see that topic if we run the list topic command:
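bin/kafka-topics.sh --list --zookeeper localhost:2181
This should print the single topic name, test.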
Alternatively, instead of manually creating topics you can
also configure your brokers to auto-create topics when a non-existent topic is
published to.
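This behavior is controlled by the broker property auto.create.topics.enable, which defaults to true, in config/server.properties:
auto.create.topics.enable=true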
Starting Producer to
send some messages
Syntax
bin/kafka-console-producer.sh --broker-list localhost:9092
--topic topic-name
Kafka comes with a command line client that will take input
from a file or from standard input and send it out as messages to the Kafka
cluster.
By default, each line will be sent as a separate message.
Broker-list − The list of brokers that we want to send the messages to. In this case we have only one broker. The config/server.properties file contains the broker port ID; since we know our broker is listening on port 9092, we can specify it directly.
Topic name − The topic to which the messages are sent; here we use the test topic created above.
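For the test topic created earlier, a sample session looks like this (the two message lines are arbitrary examples typed on standard input):
bin/kafka-console-producer.sh --broker-list localhost:9092 --topic test
Hello
This is my first message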
Starting Consumer to
receive messages
Similar to the producer, the default consumer properties are specified in the config/consumer.properties file. Open a new terminal and type the below syntax for consuming messages.
Syntax
bin/kafka-console-consumer.sh --zookeeper
localhost:2181 --topic topic-name --from-beginning
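For the test topic, this replays everything published so far, for example:
bin/kafka-console-consumer.sh --zookeeper localhost:2181 --topic test --from-beginning
Hello
This is my first message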
Single Node-Multiple
Brokers Configuration
Note: Start the Zookeeper server first
Create multiple Kafka brokers:
Copy the existing server.properties file into two new config files, running from the Kafka installation directory:
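cp config/server.properties config/server1.properties
cp config/server.properties config/server2.properties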
Add or modify the newly created config files as below; each broker needs a unique broker.id, listening port, and log directory.
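A minimal sketch of the required edits (ports 9093 and 9094 and the /tmp log directories are assumptions; older releases use port=9093 in place of the listeners form):
server1.properties:
broker.id=1
listeners=PLAINTEXT://:9093
log.dirs=/tmp/kafka-logs-1
server2.properties:
broker.id=2
listeners=PLAINTEXT://:9094
log.dirs=/tmp/kafka-logs-2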
Starting the multiple brokers as below; broker 0 runs from the unmodified config/server.properties, which keeps broker.id=0 on port 9092:
Broker0:
bin/kafka-server-start.sh config/server.properties
Broker1:
bin/kafka-server-start.sh config/server1.properties
Broker2:
bin/kafka-server-start.sh config/server2.properties
Creating a Topic
Let us set the replication factor to three for this topic because we have three different brokers running. If you have two brokers, the assigned replica value will be two.
Syntax
bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 3 --partitions 1 --topic topic-name
The Describe command is used to check which broker is listening on the currently created topic, as shown below:
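bin/kafka-topics.sh --describe --zookeeper localhost:2181 --topic topic-name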
Starting Producer to
send messages
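Assuming the broker ports used above (9092, 9093 and 9094), the console producer can be pointed at all three brokers:
bin/kafka-console-producer.sh --broker-list localhost:9092,localhost:9093,localhost:9094 --topic topic-name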
Starting consumer to
receive messages
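The consumer command is unchanged from the single-broker case, because it connects through Zookeeper rather than to an individual broker:
bin/kafka-console-consumer.sh --zookeeper localhost:2181 --topic topic-name --from-beginning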
Basic topic
operations
Modifying a Topic
Syntax
bin/kafka-topics.sh --zookeeper localhost:2181 --alter --topic topic_name --partitions count
Deleting a Topic
Syntax
bin/kafka-topics.sh --zookeeper localhost:2181 --delete
--topic topic_name
By default, topic deletion is disabled, so the delete command only marks the topic for deletion. To resolve this, set delete.topic.enable=true in config/server.properties on the Kafka brokers. Restart the Kafka server with the new config and try the delete again.