The examples in this repository demonstrate running Beam pipelines with SamzaRunner locally, in a YARN cluster, or in a standalone cluster with Zookeeper. More complex pipelines can be built from here and run in a similar manner.
The following examples are included:
- WordCount reads a file as input (bounded data source) and computes word frequencies. A minimal sketch of this pipeline is shown after this list.
- KafkaWordCount does the same word-count computation but reads from a Kafka stream (unbounded data source). It uses a fixed 10-second window to aggregate the counts.
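For reference, the core of the word-count computation looks roughly like the following sketch, modeled on the Beam Java SDK's minimal word-count example; the actual WordCount.java in this repo is organized differently, and the input/output paths here are only placeholders:

```java
import java.util.Arrays;

import org.apache.beam.sdk.Pipeline;
import org.apache.beam.sdk.io.TextIO;
import org.apache.beam.sdk.options.PipelineOptions;
import org.apache.beam.sdk.options.PipelineOptionsFactory;
import org.apache.beam.sdk.transforms.Count;
import org.apache.beam.sdk.transforms.Filter;
import org.apache.beam.sdk.transforms.FlatMapElements;
import org.apache.beam.sdk.transforms.MapElements;
import org.apache.beam.sdk.values.KV;
import org.apache.beam.sdk.values.TypeDescriptors;

public class WordCountSketch {
  public static void main(String[] args) {
    // The runner (e.g. --runner=SamzaRunner) is picked up from the command-line args.
    PipelineOptions options = PipelineOptionsFactory.fromArgs(args).create();
    Pipeline p = Pipeline.create(options);

    p.apply(TextIO.read().from("pom.xml"))                  // placeholder input path
        // Split each line into words.
        .apply(FlatMapElements.into(TypeDescriptors.strings())
            .via((String line) -> Arrays.asList(line.split("[^\\p{L}]+"))))
        .apply(Filter.by((String word) -> !word.isEmpty()))
        // Count occurrences of each word.
        .apply(Count.perElement())
        // Format as "word: count" and write out.
        .apply(MapElements.into(TypeDescriptors.strings())
            .via((KV<String, Long> wc) -> wc.getKey() + ": " + wc.getValue()))
        .apply(TextIO.write().to("counts"));                // placeholder output prefix

    p.run().waitUntilFinish();
  }
}
```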
Each example can be run locally, in a YARN cluster, or in a standalone cluster. Here we use WordCount as an example.
- Download and install JDK version 8. Verify that the JAVA_HOME environment variable is set and points to your JDK installation.
- Download and install Apache Maven by following Maven's installation guide for your specific operating system.
A script named "grid" is included in this project which allows you to easily download and install Zookeeper, Kafka, and YARN. You can run the following to bring them all up and running on your local machine:
$ scripts/grid bootstrap
All the downloaded package files will be put under the deploy folder. Once the grid command completes, you can verify that YARN is up and running by going to http://localhost:8088. You can also choose to bring the components up separately, e.g.:
$ scripts/grid install zookeeper
$ scripts/grid start zookeeper
You can run the example directly within the project using Maven:
$ mvn package
$ mvn compile exec:java -Dexec.mainClass=org.apache.beam.examples.WordCount \
-Dexec.args="--inputFile=pom.xml --output=counts --runner=SamzaRunner" -P samza-runner
To execute the example in either a YARN or standalone cluster, you need to package it first. After packaging, we deploy and explode the tgz in the deploy folder:
$ mkdir -p deploy/examples
$ mvn package && tar -xvf target/samza-beam-examples-0.1-dist.tar.gz -C deploy/examples/
You can use the run-beam-standalone.sh script included in this repo to run an example in standalone mode. The config file is provided as config/standalone.properties. Note that by default we create a single input partition for the whole input. To set the number of partitions, you can add the "--maxSourceParallelism=" argument. For example, "--maxSourceParallelism=2" will create two partitions of the input file, based on size.
$ deploy/examples/bin/run-beam-standalone.sh org.apache.beam.examples.WordCount \
--configFilePath=$PWD/deploy/examples/config/standalone.properties \
--inputFile=$PWD/pom.xml --output=word-counts.txt \
--maxSourceParallelism=2
If the example consumes from Kafka, we can set a large "maxSourceParallelism" value so that each Kafka partition is assigned to its own Samza task (the total number of tasks is bounded by maxSourceParallelism). For example:
$ deploy/examples/bin/run-beam-standalone.sh org.apache.beam.examples.KafkaWordCount \
--configFilePath=$PWD/deploy/examples/config/standalone.properties \
--maxSourceParallelism=1024
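For context, reading from Kafka with the fixed 10-second window described above looks roughly like the following sketch; the broker address, topic name, and deserializers are assumptions for illustration, not necessarily what the KafkaWordCount example uses:

```java
import org.apache.beam.sdk.Pipeline;
import org.apache.beam.sdk.io.kafka.KafkaIO;
import org.apache.beam.sdk.options.PipelineOptionsFactory;
import org.apache.beam.sdk.transforms.Count;
import org.apache.beam.sdk.transforms.Values;
import org.apache.beam.sdk.transforms.windowing.FixedWindows;
import org.apache.beam.sdk.transforms.windowing.Window;
import org.apache.kafka.common.serialization.StringDeserializer;
import org.joda.time.Duration;

public class KafkaWindowedCountSketch {
  public static void main(String[] args) {
    Pipeline p = Pipeline.create(PipelineOptionsFactory.fromArgs(args).create());

    p.apply(KafkaIO.<String, String>read()
            .withBootstrapServers("localhost:9092")   // assumption: the local Kafka started by the grid script
            .withTopic("input-text")                   // hypothetical topic name
            .withKeyDeserializer(StringDeserializer.class)
            .withValueDeserializer(StringDeserializer.class)
            .withoutMetadata())
        .apply(Values.create())                        // keep the message values; word splitting omitted here
        // Aggregate counts over fixed 10-second windows, as described for KafkaWordCount.
        .apply(Window.into(FixedWindows.of(Duration.standardSeconds(10))))
        .apply(Count.perElement());

    p.run().waitUntilFinish();
  }
}
```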
Similar to running standalone, we can use the run-beam-yarn.sh script to run the examples in a YARN cluster. The config file is provided as config/yarn.properties. Note that for YARN, we don't need to wait after submitting the job, so there is no need for waitUntilFinish(). Please change p.run().waitUntilFinish() to p.run() in the WordCount.java class.
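In other words, the last step of the pipeline changes from a blocking to a non-blocking submission:

```java
// Local/standalone: block until the pipeline finishes.
p.run().waitUntilFinish();

// YARN: submit the job and return without waiting.
p.run();
```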
To run the WordCount example in YARN:
$ deploy/examples/bin/run-beam-yarn.sh org.apache.beam.examples.WordCount \
--configFilePath=$PWD/deploy/examples/config/yarn.properties \
--inputFile=$PWD/pom.xml \
--output=/tmp/word-counts.txt --maxSourceParallelism=2
As with standalone mode, we can provide a large "maxSourceParallelism" value for better parallelism in the Kafka case.
Feel free to build more complex pipelines based on the examples above, and reach out to us:
- Subscribe and mail to [email protected] for any Beam questions.
- Subscribe and mail to [email protected] for any Samza questions.
- Apache Beam
- Apache Samza
- Quickstart: Java, Python, Go