It is assumed that Java is already installed on all three machines and that passwordless SSH between them has been set up.
1. Download Kafka
http://kafka.apache.org/downl...

2. Copy the archive to the cluster nodes and extract it to the target directory /usr/local

tar -zxvf kafka_2.11-0.10.0.1.tar.gz -C /usr/local

3. Enter the kafka_2.11-0.10.0.1 directory, create a folder named zk_kfk_data (any name will do), and create a myid file inside it. Its content differs across the three nodes: 1, 2, and 3 respectively.

cd kafka_2.11-0.10.0.1
mkdir zk_kfk_data
vi zk_kfk_data/myid
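
The file can also be written non-interactively instead of with vi. A minimal sketch for centos1 (the values 2 and 3 go into the same file on the other two nodes):

echo 1 > zk_kfk_data/myid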


4. Create the log directories

mkdir logs
mkdir kafka-logs-1


5. Edit the config/zookeeper.properties file

cd config
vi zookeeper.properties
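
The post does not show the edited file, so here is a minimal sketch of what config/zookeeper.properties usually contains for this three-node setup. It assumes the zk_kfk_data directory created in step 3 and the hostnames centos1/centos2/centos3; adjust paths and ports to your environment.

dataDir=/usr/local/kafka_2.11-0.10.0.1/zk_kfk_data
clientPort=2181
maxClientCnxns=0
tickTime=2000
initLimit=10
syncLimit=5
server.1=centos1:2888:3888
server.2=centos2:2888:3888
server.3=centos3:2888:3888

The server.N entries must match the numbers written into each node's myid file.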


6. Edit server.properties

vi server.properties
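
Again the changed keys are not listed in the post; below is a minimal sketch of the entries in config/server.properties that typically need attention on centos1, assuming the kafka-logs-1 directory from step 4 and the ZooKeeper ensemble above. broker.id must be unique per node.

broker.id=1
listeners=PLAINTEXT://centos1:9092
log.dirs=/usr/local/kafka_2.11-0.10.0.1/kafka-logs-1
num.partitions=1
zookeeper.connect=centos1:2181,centos2:2181,centos3:2181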




7. Distribute the whole Kafka directory to the other two nodes

scp -r /usr/local/kafka_2.11-0.10.0.1 hadoop@centos2:/usr/local
scp -r /usr/local/kafka_2.11-0.10.0.1 hadoop@centos3:/usr/local

8. Edit myid on centos2 and centos3

ssh centos2
cd /usr/local/kafka_2.11-0.10.0.1/zk_kfk_data
vi myid

ssh centos3
cd /usr/local/kafka_2.11-0.10.0.1/zk_kfk_data
vi myid
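
The only difference between nodes is the number inside the file; a sketch of the expected contents:

# on centos2
echo 2 > /usr/local/kafka_2.11-0.10.0.1/zk_kfk_data/myid
# on centos3
echo 3 > /usr/local/kafka_2.11-0.10.0.1/zk_kfk_data/myid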

9. Edit server.properties on centos2 and centos3

ssh centos2
cd /usr/local/kafka_2.11-0.10.0.1/config
vi server.properties

ssh centos3
cd /usr/local/kafka_2.11-0.10.0.1/config
vi server.properties
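
Only the broker identity and the listener address need to change on each node; a sketch of the affected lines, under the same assumptions as in step 6:

# centos2: config/server.properties
broker.id=2
listeners=PLAINTEXT://centos2:9092

# centos3: config/server.properties
broker.id=3
listeners=PLAINTEXT://centos3:9092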


Installation complete!
Test run (all of the following commands are executed from /usr/local/kafka_2.11-0.10.0.1 by default).
10. Start ZooKeeper on each of the three nodes:

./bin/zookeeper-server-start.sh config/zookeeper.properties & 
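
To confirm each ZooKeeper instance is up, the standard four-letter-word check can be used (assuming nc is available on the hosts):

echo ruok | nc centos1 2181    # a healthy node replies: imok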

11. Start the Kafka brokers on all three nodes

nohup ./bin/kafka-server-start.sh config/server.properties &>> kafka.log &
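
Whether each broker came up cleanly can be checked in the redirected log; a quick sketch (the exact log line may differ by version):

grep "started (kafka.server.KafkaServer)" kafka.log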

12. Create a topic:

./bin/kafka-topics.sh --create --zookeeper centos1:2181,centos2:2181,centos3:2181  --replication-factor 1 --partitions 1 --topic test
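
kafka-topics.sh can also report partition leaders and replica placement for the new topic via --describe; with three brokers a higher replication factor (e.g. --replication-factor 3) is usually preferable outside of a quick test:

./bin/kafka-topics.sh --describe --zookeeper centos1:2181,centos2:2181,centos3:2181 --topic test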

13. List topics:

./bin/kafka-topics.sh --list --zookeeper localhost:2181


14. Produce messages:

./bin/kafka-console-producer.sh --broker-list centos1:9092,centos2:9092,centos3:9092 --topic test
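
The console producer reads from stdin and sends one message per line; type a few arbitrary lines (sample input only) and leave the terminal open, for example:

hello kafka
message from centos1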


15. Consume messages:

./bin/kafka-console-consumer.sh --zookeeper centos1:2181,centos2:2181,centos3:2181 --from-beginning --topic test
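
If the cluster is healthy, this prints every line typed in step 14 from the beginning of the topic (Ctrl+C to stop). The command above uses the old ZooKeeper-based consumer; on 0.10.x the new consumer, which talks to the brokers directly, should also work (a sketch, flag names may vary slightly across versions):

./bin/kafka-console-consumer.sh --new-consumer --bootstrap-server centos1:9092,centos2:9092,centos3:9092 --from-beginning --topic test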