<code>/spark-kafka/spark-2.1.1-bin-hadoop2.6# ./bin/spark-submit --jars ~/spark-streaming-kafka-0-8-assembly_2.11-2.2.0.jar examples/src/main/python/streaming/kafka_wordcount.py localhost:2181 test</code>
Here, spark-streaming-kafka-0-8-assembly_2.11-2.2.0.jar can be downloaded from http://search.maven.org/#search%7Cga%7C1%7Cspark-streaming-kafka-0-8-assembly
Kafka (this walkthrough uses version 0.11):
This tutorial assumes you are starting fresh and have no existing Kafka or ZooKeeper data. Since Kafka console scripts are different for Unix-based and Windows platforms, on Windows platforms use <code>bin\windows\</code> instead of <code>bin/</code>, and change the script extension to <code>.bat</code>.
<code>> tar -xzf kafka_2.11-0.11.0.0.tgz</code>
<code>> cd kafka_2.11-0.11.0.0</code>
Kafka uses ZooKeeper, so start a ZooKeeper server first; you can use the convenience script packaged with Kafka:
<code>> bin/zookeeper-server-start.sh config/zookeeper.properties</code>
<code>[2013-04-22 15:01:37,495] INFO Reading configuration from: config/zookeeper.properties (org.apache.zookeeper.server.quorum.QuorumPeerConfig)</code>
<code>...</code>
Now start the Kafka server:
<code>> bin/kafka-server-start.sh config/server.properties</code>
<code>[2013-04-22 15:01:47,028] INFO Verifying properties (kafka.utils.VerifiableProperties)</code>
<code>[2013-04-22 15:01:47,051] INFO Property socket.send.buffer.bytes is overridden to 1048576 (kafka.utils.VerifiableProperties)</code>
Let's create a topic named "test" with a single partition and only one replica:
<code>> bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 1 --partitions 1 --topic test</code>
We can now see that topic if we run the list topic command:
<code>> bin/kafka-topics.sh --list --zookeeper localhost:2181</code>
<code>test</code>
Alternatively, instead of manually creating topics you can also configure your brokers to auto-create topics when a non-existent topic is published to.
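Topic auto-creation is governed by a broker setting in <code>config/server.properties</code>; the fragment below is a sketch of the relevant line (in Kafka 0.11 this setting defaults to true):

```properties
# config/server.properties
# When true, the broker automatically creates a topic (with the broker's
# default partition and replication settings) the first time a producer
# or consumer references a topic that does not yet exist.
auto.create.topics.enable=true
```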
Kafka comes with a command line client that will take input from a file or from standard input and send it out as messages to the Kafka cluster. By default, each line will be sent as a separate message.
Run the producer and then type a few messages into the console to send to the server.
<code>> bin/kafka-console-producer.sh --broker-list localhost:9092 --topic test</code>
<code>This is a message</code>
<code>This is another message</code>
Kafka also has a command line consumer that will dump out messages to standard output.
<code>> bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic test --from-beginning</code>
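The kafka_wordcount.py example submitted at the top consumes this same topic and counts words in each streaming batch. Stripped of the Spark Streaming machinery, the per-batch logic amounts to a flatMap over message lines followed by a per-word reduce; the plain-Python sketch below illustrates that logic (it is an illustration, not the example's actual code):

```python
from collections import Counter

def count_words(lines):
    """Mimic flatMap(split) + map(word -> 1) + reduceByKey(+) on one batch."""
    # flatMap: split every message line into individual words
    words = [word for line in lines for word in line.split()]
    # map + reduceByKey: tally occurrences per word
    return dict(Counter(words))

# Messages typed into the console producer above form one batch:
batch = ["This is a message", "This is another message"]
print(count_words(batch))  # e.g. {'This': 2, 'is': 2, 'a': 1, 'message': 2, 'another': 1}
```

In the real example the same transformation runs on each micro-batch of the DStream returned by KafkaUtils.createStream.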
This article is reposted from the 张昺华-sky blog on Cnblogs; original link: http://www.cnblogs.com/bonelee/p/7435506.html. Please contact the original author before reprinting.