Kafka - Consumer Groups
Description
All the consumers in an application read data as a consumer group. Each consumer within a group reads from exclusive partitions of a topic, which allows data to be processed in parallel. Consumer groups are dynamically scalable: consumers can be added to or removed from a group at any time. To create distinct consumer groups, use a unique group.id for each application.

Kafka stores the offsets at which a consumer group has been reading in an internal topic named __consumer_offsets. When a consumer in a group has processed the data it received from Kafka, it should periodically commit its offsets. If a consumer dies, it can resume reading from where it left off thanks to the committed consumer offsets.
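A minimal sketch of a consumer joining a group with the Java client. The broker address (localhost:9092), topic name (my_topic), and group.id (my-application) are placeholder values, not anything prescribed by Kafka itself:

```java
import java.time.Duration;
import java.util.List;
import java.util.Properties;

import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;

public class ConsumerGroupDemo {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");            // placeholder broker address
        props.put("group.id", "my-application");                     // unique group.id => distinct consumer group
        props.put("key.deserializer", StringDeserializer.class.getName());
        props.put("value.deserializer", StringDeserializer.class.getName());
        props.put("auto.offset.reset", "earliest");                   // where to start if no committed offset exists

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            // All consumers sharing this group.id split the topic's partitions between them.
            consumer.subscribe(List.of("my_topic"));                  // placeholder topic name
            while (true) {
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(100));
                for (ConsumerRecord<String, String> record : records) {
                    System.out.printf("partition=%d offset=%d value=%s%n",
                            record.partition(), record.offset(), record.value());
                }
            }
        }
    }
}
```

Running two instances of this program with the same group.id splits the topic's partitions between them; running them with different group.id values makes each instance read all of the data independently.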
There are 3 delivery semantics for committing consumer offsets:
- at least once: offsets are committed after the message is processed (see the sketch after this list). If the consumer dies before committing, it will read back from where it left off, so the same message may be processed more than once. This is the default delivery semantics for consumer offsets.
- at most once: offsets are committed as soon as messages are received. If processing then fails, those messages are lost because they will not be read again.
- exactly once: for Kafka-to-Kafka workflows this can be achieved with the transactional API (easiest via the Kafka Streams API); for Kafka-to-external-system workflows it requires an idempotent consumer.
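A minimal sketch of the at-least-once pattern, assuming the same placeholder broker, topic, and group id as above: auto-commit is disabled and offsets are committed only after the records have been processed, so a crash before the commit means the batch is re-read rather than lost.

```java
import java.time.Duration;
import java.util.List;
import java.util.Properties;

import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;

public class AtLeastOnceConsumer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");             // placeholder broker address
        props.put("group.id", "my-application");                      // placeholder group id
        props.put("key.deserializer", StringDeserializer.class.getName());
        props.put("value.deserializer", StringDeserializer.class.getName());
        props.put("enable.auto.commit", "false");                      // commit manually, after processing

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(List.of("my_topic"));                   // placeholder topic name
            while (true) {
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(500));
                for (ConsumerRecord<String, String> record : records) {
                    process(record);                                    // placeholder for real processing logic
                }
                // Committing only after the batch is processed gives at-least-once semantics:
                // a crash before this line means the batch is re-read (and re-processed) on restart.
                consumer.commitSync();
            }
        }
    }

    private static void process(ConsumerRecord<String, String> record) {
        System.out.println("processed offset " + record.offset());
    }
}
```

Committing before processing instead (or letting auto-commit fire while records are still being processed asynchronously) flips this into at-most-once behavior, where a failure during processing loses those messages.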