Kafka

1. traditional message queue issues

1.1. brokers need to record client positions

1.2. spaghetti of queues

1.3. skip messages

2. solutions

2.1. vs ActiveMQ/RabbitMQ

2.1.1. maintaining explicit per-message order is overkill

2.1.1.1. Kafka does not guarantee that messages are processed in the order they were received (ordering holds only within a partition)

2.1.2. how it deals with message consumption

2.1.2.1. messages are not removed from the system when they have been consumed; instead, Kafka relies on consumers to keep track of the offset of the last message consumed

2.1.2.1.1. the last offset can be tracked in ZooKeeper

2.2. vs Flume

2.2.1. Flume

2.2.1.1. Flume is a more complete Hadoop ingestion solution

2.2.1.1.1. has good support for writing data to Hadoop, including HDFS, HBase, and Solr

2.2.1.1.2. handles many common issues in writing data to HDFS, such as reliability, optimal file sizes, file formats, metadata updates, and partitioning

2.2.2. Kafka

2.2.2.1. generic message broker

2.2.2.1.1. a better fit if the requirements involve consumers other than Hadoop

2.2.2.2. you need to develop your own producers and consumers

3. features

3.1. fast

3.1.1. hundreds of MB/s from thousands of clients

3.2. scalable

3.2.1. easily scale up and down without downtime

3.3. durable

3.3.1. messages are persisted on disk to prevent data loss

3.4. developed at LinkedIn, written in Scala

4. internal structures

4.1. logical structure

4.1.1. Topic

4.1.1.1. def

4.1.1.1.1. the main unit of application-level data separation

4.1.1.2. physical storage

4.1.1.2.1. consists of partitions

4.1.1.3. Log-structured storage

4.1.1.3.1. append-only mechanism

4.1.1.3.2. maintains a separate log for each partition of each topic

4.1.1.4. partitions of the same topic can be spread across many brokers

4.1.1.4.1. why: lets a topic scale beyond the capacity of a single broker and enables parallel reads and writes

4.1.1.5. each partition is replicated across multiple brokers

4.2. API name and usage

4.2.1. publish data onto a topic

4.2.1.1. $publish topic data

4.2.2. consume from a topic

4.2.2.1. $consume topic offset
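
The two pseudo-commands above correspond to the produce and fetch paths of the real client API. Below is a minimal sketch using the Java client; the broker address, topic name, and group id are illustrative assumptions, not values from this map.

```java
import java.time.Duration;
import java.util.List;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class PublishConsumeSketch {
    public static void main(String[] args) {
        // "publish topic data": send one record to a topic
        Properties prodProps = new Properties();
        prodProps.put("bootstrap.servers", "localhost:9092"); // illustrative address
        prodProps.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        prodProps.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        try (KafkaProducer<String, String> producer = new KafkaProducer<>(prodProps)) {
            producer.send(new ProducerRecord<>("my-topic", "data"));
        }

        // "consume topic offset": poll records; the client tracks the offset
        Properties consProps = new Properties();
        consProps.put("bootstrap.servers", "localhost:9092");
        consProps.put("group.id", "sketch-group"); // illustrative group id
        consProps.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        consProps.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(consProps)) {
            consumer.subscribe(List.of("my-topic"));
            ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(1));
            for (ConsumerRecord<String, String> r : records) {
                System.out.printf("offset=%d value=%s%n", r.offset(), r.value());
            }
        }
    }
}
```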

4.3. brokers

4.3.1. def

4.3.1.1. physical processes that make up a Kafka cluster

4.3.1.2. each broker typically runs on its own server

4.4. producers

4.4.1. responsible for sending messages to Kafka brokers

4.5. consumers

4.5.1. consumers pull messages from Kafka brokers

4.5.2. can use an offset to target a specific message in a partition

4.5.2.1. by default, points to the latest offset

4.5.2.2. can set the offset to an older value to re-read old data

4.5.3. the consumer maintains its own message state

4.5.4. no per-message ACKs are sent back to the broker
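
A sketch of offset targeting with the Java consumer: the partition is assigned manually and the position rewound to an older offset to re-read data. The broker address, topic, partition number, and offset value are illustrative.

```java
import java.time.Duration;
import java.util.List;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.TopicPartition;

public class SeekSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // illustrative
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            TopicPartition tp = new TopicPartition("my-topic", 0);
            consumer.assign(List.of(tp)); // manual assignment, no group needed
            consumer.seek(tp, 42L);       // rewind to an older offset to re-read
            for (ConsumerRecord<String, String> r : consumer.poll(Duration.ofSeconds(1))) {
                System.out.println(r.offset() + ": " + r.value());
            }
        }
    }
}
```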

4.6. consumer groups

4.6.1. one or more consumers

4.6.1.1. each consumer reads one or more partitions of a topic

4.6.2. why

4.6.2.1. used for load balancing

4.6.2.1.1. handles throughput higher than a single consumer could

4.6.2.2. used for high availability

4.6.2.2.1. if one consumer fails, the partitions it was reading are reassigned to other consumers in the group

4.6.3. each message will be delivered to only one consumer within a group
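
Group membership is driven entirely by the group.id setting: consumers that share it divide the topic's partitions among themselves. A minimal sketch with illustrative names; running this program twice yields two members of one group, each receiving a disjoint subset of partitions.

```java
import java.time.Duration;
import java.util.List;
import java.util.Properties;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class GroupMemberSketch {
    public static void main(String[] args) {
        // Start this program twice: both instances share group.id
        // "billing-service", so Kafka splits the topic's partitions
        // between them; if one exits, its partitions are rebalanced
        // to the surviving member.
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // illustrative
        props.put("group.id", "billing-service");         // shared group id
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(List.of("my-topic")); // group coordination happens on poll()
            while (true) {
                consumer.poll(Duration.ofSeconds(1)).forEach(r ->
                    System.out.println(r.partition() + "/" + r.offset() + ": " + r.value()));
            }
        }
    }
}
```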

5. architecture

5.1. stores all messages for a preconfigured retention period

5.2. pull vs push

5.2.1. push model

5.2.1.1. high throughput

5.2.1.2. complex server logic

5.2.2. pull model

5.2.2.1. replay feature (consumers can re-read old messages)

5.2.2.2. simple server logic

5.2.3. Kafka uses the pull model

5.2.3.1. by storing all messages for a period of time and not tracking acknowledgements for individual consumers and messages, Kafka can

5.2.3.1.1. scale to more than 10,000 consumers

5.2.3.1.2. support consumers that read data in batches
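
On the consumer side, batch-friendly pulling is controlled by a few real configuration keys; a sketch with illustrative, untuned values:

```java
import java.util.Properties;

public class BatchPullConfigSketch {
    // Consumer configs that make pulling batch-friendly; the keys are
    // real Kafka consumer settings, the values here are illustrative.
    static Properties batchFriendly() {
        Properties props = new Properties();
        props.put("fetch.min.bytes", "65536");  // broker holds the fetch until ~64 KB accumulate...
        props.put("fetch.max.wait.ms", "500");  // ...or 500 ms pass, whichever comes first
        props.put("max.poll.records", "1000");  // upper bound on records returned by one poll()
        return props;
    }
}
```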

5.3. log file format

5.3.1. vs RabbitMQ/ActiveMQ

5.3.1.1. they use a push model

5.3.2. each partition is a physical folder on disk

5.3.3. one message (legacy on-disk format)

5.3.3.1. offset, 8 bytes

5.3.3.2. message length, 4 bytes

5.3.3.3. CRC value, 4 bytes

5.3.3.4. magic value, 1 byte

5.3.3.5. payload, n bytes
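
A sketch of decoding one entry with java.nio, following the simplified field list above; the real legacy format also carries an attributes byte and key/value length fields, so treat this as illustrative rather than a wire-accurate parser.

```java
import java.nio.ByteBuffer;

public class LogEntrySketch {
    // Parses one entry using the simplified layout listed above.
    static void readEntry(ByteBuffer buf) {
        long offset = buf.getLong(); // 8 bytes: logical position in the partition
        int length = buf.getInt();   // 4 bytes: size of the rest of the entry
        int crc = buf.getInt();      // 4 bytes: checksum over the message body
        byte magic = buf.get();      // 1 byte: format version
        byte[] payload = new byte[length - 4 - 1]; // n bytes: the message itself
        buf.get(payload);
        System.out.printf("offset=%d magic=%d payload=%d bytes%n",
                offset, magic, payload.length);
    }
}
```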

5.3.4. overview

5.3.4.1. active segment list maintains the mapping to segment files

5.3.4.2. log entries in segment files

5.4. IO optimization

5.4.1. Append-only writing

5.4.1.1. reads do not block writes

5.4.1.2. very fast writes because each partition is a single append-only log

5.4.2. zerocopy

5.4.2.1. original

5.4.2.1.1. OS reads data from disk into the page cache in kernel space

5.4.2.1.2. application reads data from kernel space into user space

5.4.2.1.3. application writes data back to kernel space, into the socket buffer

5.4.2.1.4. OS copies data from the socket buffer to the NIC buffer

5.4.2.2. improved

5.4.2.2.1. zero-copy copies data into the page cache only once and reuses it for every send
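
In the JVM this zero-copy path is exposed as FileChannel.transferTo, which maps to sendfile on Linux. A minimal sketch of streaming a log segment to a socket without the data ever entering user space; the file path and peer address are illustrative.

```java
import java.io.IOException;
import java.net.InetSocketAddress;
import java.nio.channels.FileChannel;
import java.nio.channels.SocketChannel;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

public class ZeroCopySketch {
    // Sends a file to a socket via sendfile: data moves from the page
    // cache straight to the NIC buffer, never copied into user space.
    static void serve(Path segment, InetSocketAddress peer) throws IOException {
        try (FileChannel file = FileChannel.open(segment, StandardOpenOption.READ);
             SocketChannel socket = SocketChannel.open(peer)) {
            long position = 0, remaining = file.size();
            while (remaining > 0) {
                long sent = file.transferTo(position, remaining, socket);
                position += sent;
                remaining -= sent;
            }
        }
    }
}
```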

5.5. data replication

5.5.1. handled at the topic level

5.5.1.1. process

5.5.1.1.1. the producer writes through the partition leader

5.5.1.1.2. the partition leader writes the message to its local disk

5.5.1.1.3. partition followers pull from the partition leader

5.5.1.1.4. once the leader has received the configured number of ACKs from followers, the message counts as written

5.5.1.2. structure

5.5.1.2.1. each partition of a topic has a leader (the remaining replicas are followers)

5.5.2. ISR (in-sync replicas)

5.5.2.1. replicas do not need to be completely consistent

5.5.2.2. a nice balance between synchronous and asynchronous replication

5.5.2.3. configurable lag threshold

5.5.2.3.1. if a replica falls too far behind, it is removed from the ISR

5.5.2.4. followers can batch-read from the leader

5.5.3. when writing data, the producer can choose (see the sketch after this list)

5.5.3.1. methods

5.5.3.1.1. whether the write will be acknowledged by all synchronized replicas

5.5.3.1.2. whether the write will be acknowledged by a specified number of replicas

5.5.3.1.3. whether the write will be acknowledged by just the leader

5.5.3.1.4. whether to wait for an acknowledgement at all

5.5.3.2. criteria

5.5.3.2.1. whether reliability is a concern (trading latency for durability)
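
In the Java producer these choices map to the acks setting; "a specified number of replicas" has no direct knob in modern Kafka and is approximated by acks=all combined with the broker/topic config min.insync.replicas. A sketch with illustrative values:

```java
import java.util.Properties;

public class AcksSketch {
    static Properties forReliability(String level) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // illustrative
        switch (level) {
            case "all":    props.put("acks", "all"); break; // wait for all in-sync replicas
            case "leader": props.put("acks", "1");   break; // leader's local write only
            case "none":   props.put("acks", "0");   break; // fire-and-forget, no wait
        }
        // "a specified number of replicas" is expressed indirectly:
        // acks=all together with min.insync.replicas=N on the topic/broker.
        return props;
    }
}
```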

5.5.4. when reading data

5.5.4.1. consumers only read committed messages

5.5.4.1.1. i.e., messages that have been written to all in-sync replicas

5.6. delivery mechanism

5.6.1. at least once

5.6.1.1. achieved if the consumer advances the offset after processing the message

5.6.1.2. if the consumer crashes after processing the data but before advancing the offset

5.6.1.2.1. a new consumer will re-read messages from the last committed offset

5.6.2. at most once

5.6.2.1. achieved if the consumer first advances the offset and then processes the message

5.6.2.2. if the consumer crashes after advancing the offset but before processing the data

5.6.2.2.1. a new consumer will read from the new offset, so the unprocessed messages are lost

5.6.3. exactly once

5.6.3.1. achieved if the consumer uses a two-phase commit to process data and advance the offset atomically

5.6.3.1.1. usually done in batch consumers, not in streaming consumers, because of the overhead
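
The at-least-once recipe (process first, then advance the offset) looks like this with the Java consumer; auto-commit is disabled so the offset only moves after processing succeeds. Broker address, topic, and group id are illustrative.

```java
import java.time.Duration;
import java.util.List;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class AtLeastOnceSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // illustrative
        props.put("group.id", "at-least-once-group");
        props.put("enable.auto.commit", "false");         // we advance offsets ourselves
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(List.of("my-topic"));
            while (true) {
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(1));
                for (ConsumerRecord<String, String> r : records) {
                    process(r); // 1. process first...
                }
                consumer.commitSync(); // 2. ...then advance the offset
                // A crash between 1 and 2 re-reads records: at-least-once.
                // Committing before processing would instead give at-most-once.
            }
        }
    }
    static void process(ConsumerRecord<String, String> r) {
        System.out.println(r.value());
    }
}
```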

6. Use cases

6.1. in place of a traditional message broker to decouple services

6.2. log aggregation

6.2.1. forward all logs into a dedicated Kafka topic

6.2.2. set up a consumer/consumer group on the topic and forward to an indexing service

6.2.3. query the data through Kibana

6.3. real-time analysis

6.3.1. forward the data of interest into a Kafka topic

6.3.2. integrate with Spark/Samza for stream processing