Introduction to Kafka

Krishna Chaitanya Sarvepalli
3 min read · Feb 19, 2022

This post is a 101 session on Kafka. I will cover a few more advanced Kafka topics in later posts.

The idea behind asynchronous processing is that the caller does not wait for an immediate response; in some cases, we simply need to send events to another system to perform some action.

For example, in an order processing system, if the inventory for a specific item falls below a certain threshold while processing an order, we can send an event to the inventory system to restock that item.
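The restock example above can be sketched with a plain in-memory queue before any Kafka enters the picture (all names here are hypothetical, for illustration only): the order flow drops an event and moves on without waiting; the inventory system consumes it later.

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

public class RestockEventDemo {
    static final int THRESHOLD = 5;
    // Stand-in for a message broker: the order flow writes, inventory reads.
    static final BlockingQueue<String> inventoryEvents = new LinkedBlockingQueue<>();

    static void processOrder(String item, int stockLeft) {
        // ... normal order processing happens here ...
        if (stockLeft < THRESHOLD) {
            // Fire-and-forget: we do not wait for the inventory system to react.
            inventoryEvents.offer("RESTOCK:" + item);
        }
    }

    public static void main(String[] args) {
        processOrder("item-42", 3);
        System.out.println(inventoryEvents.poll()); // the inventory system would pick this up
    }
}
```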

There are many messaging platforms in the industry: RabbitMQ, ActiveMQ, IBM MQ, and JMS-based brokers, to name a few.

Which messaging platform to use depends purely on the use case, and the following factors can play a crucial role:

  1. Broadcast message to multiple consumers
  2. Horizontal scaling of producers and consumers
  3. Parallel Processing of messages.
  4. Streaming Ability
  5. Message Ordering

For example, if we need per-message filtering, Kafka does not support it directly (you can approximate it with multiple topics and filtered streams), so you might need to look at other messaging platforms like RabbitMQ.

Kafka Highlights:

  1. Kafka is a distributed messaging platform.
  2. Kafka can store messages durably and reliably for a configured retention period.
  3. Kafka provides the ability to publish/subscribe to messages.
  4. Kafka provides the ability to replay messages from a specific position or from the start.
  5. Producers and consumers are not bound to the same programming language.
  6. Kafka provides a parallel processing mechanism for consumers (consumer groups).

Kafka Ecosystem:

  1. Producers: Producers are clients that send messages to Kafka without needing to know who consumes them.
  2. Consumers: Consumers subscribe to topics and receive messages from brokers. Consumers are typically part of a consumer group.
  3. Brokers: The integral component of the Kafka ecosystem, responsible for receiving messages from producers, storing them, and coordinating with the other brokers.
  4. Clusters: Multiple brokers coordinating together form a Kafka cluster. Brokers can easily be added to or removed from the cluster.
  5. ZooKeeper: ZooKeeper is a distributed store that provides discovery for Kafka brokers. (There are plans to remove the ZooKeeper dependency in later versions of Kafka.)

Kafka Components:

  1. Topic: Messages in Kafka are categorized into topics. A topic is similar to a database table in that it logically divides messages per use case.
  2. Offset: An offset is a message's position within a partition; each consumer tracks the offsets it has processed.
  3. Partition: Topics are further broken down into partitions to provide redundancy, scalability, and parallelism.
  4. Replication: Each partition is replicated across the cluster to maintain high availability and to survive broker-loss scenarios.
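To make partitions and replication concrete, a topic can be created with an explicit partition count and replication factor through the Kafka Java AdminClient. A minimal sketch (broker address and topic name are assumptions; requires kafka-clients and a cluster with at least two brokers for this replication factor):

```java
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.NewTopic;

public class CreateTopicDemo {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // assumed broker

        try (AdminClient admin = AdminClient.create(props)) {
            // "orders" topic: 3 partitions, each replicated to 2 brokers
            NewTopic topic = new NewTopic("orders", 3, (short) 2);
            admin.createTopics(Collections.singletonList(topic)).all().get();
        }
    }
}
```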

Kafka Concepts:

  1. Leader: One of the brokers acts as the leader for a partition and is responsible for writes and for replication to the other brokers.
  2. Followers: Followers are brokers responsible for staying in sync with the leader's replica; if the leader fails, one of the in-sync followers is elected as the new leader.
  3. Commit Log: Kafka stores messages in a commit log; messages are always appended to the end of the log file.
  4. Group Coordinator: One of the brokers is designated as the group coordinator, which manages consumer group membership and rebalances.

Kafka Java Client Examples:

Producer Example:

Writing a producer is pretty easy; Kafka provides a client jar (kafka-clients).

  1. Create Properties specifying the Kafka brokers and how to serialize the key and message.
  2. Create a record with the topic name and message.
  3. Send the message using KafkaProducer. If we don't specify a key, messages are distributed across partitions; with a key, the key's hash determines the partition.
  4. Acknowledgment from Kafka can be handled synchronously or asynchronously with a callback handler.
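The producer steps above can be sketched as follows; the broker address, topic name, key, and message value are assumptions, and kafka-clients must be on the classpath.

```java
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

public class SimpleProducer {
    public static void main(String[] args) {
        // Step 1: brokers plus key/value serializers
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // assumed broker
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            // Step 2: a record with topic, key, and message
            ProducerRecord<String, String> record =
                new ProducerRecord<>("orders", "order-123", "restock item-42");

            // Steps 3 and 4: asynchronous send with a callback for the acknowledgment
            producer.send(record, (metadata, exception) -> {
                if (exception != null) {
                    exception.printStackTrace();
                } else {
                    System.out.printf("Sent to partition %d at offset %d%n",
                        metadata.partition(), metadata.offset());
                }
            });
        } // close() flushes any buffered messages
    }
}
```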

Consumer Example:

  1. Create Properties specifying the Kafka brokers and how to deserialize the key and message. We can specify a consumer group property, which represents a set of consumers working in parallel under one group.
  2. We can subscribe to a single topic or multiple topics. We can also specify a regex pattern on topic names to subscribe and retrieve messages from the Kafka brokers.
  3. Since it's a stream of data, we need to handle a continuous flow. A consumer polls Kafka and retrieves a batch of records (bounded by the max.poll.records property).
  4. Each retrieved record carries its offset, partition number, and the message itself, which can be used to run the business logic.
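The consumer steps above can be sketched as follows; the broker address, group id, and topic name are assumptions, and kafka-clients must be on the classpath.

```java
import java.time.Duration;
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;

public class SimpleConsumer {
    public static void main(String[] args) {
        // Step 1: brokers, deserializers, and the consumer group
        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // assumed broker
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "order-processors");        // assumed group
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            // Step 2: subscribe to one topic (a list or Pattern also works)
            consumer.subscribe(Collections.singletonList("orders"));

            // Step 3: continuously poll for batches of records
            while (true) {
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(500));
                // Step 4: each record exposes its partition, offset, and payload
                for (ConsumerRecord<String, String> record : records) {
                    System.out.printf("partition=%d offset=%d value=%s%n",
                        record.partition(), record.offset(), record.value());
                }
            }
        }
    }
}
```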

I will cover Kafka architecture in the next blog post. Please let me know your feedback in the comments, and whether you want me to cover any extra content.


Krishna Chaitanya Sarvepalli

Solution Architect @TSYS. Good at Java, Kubernetes, Kafka, AWS cloud, DevOps, architecture, and complex problems.