Wednesday, July 3, 2013

Apache Oozie - Part 5: Oozie workflow with streaming map reduce (python) action

1.0. What's covered in the blog?

1. Documentation on the Oozie map-reduce streaming action
2. A sample Oozie workflow that includes a map-reduce streaming action to process syslog-generated log files using a Python regex mapper. Instructions on loading sample data and running the workflow are provided, along with some notes based on my learnings.

Version:
Oozie 3.3.0; Pig 0.10.0

Related blogs:
Blog 1: Oozie workflow - hdfs and email actions
Blog 2: Oozie workflow - hdfs, email and hive actions
Blog 3: Oozie workflow - sqoop action (Hive-mysql; sqoop export)
Blog 4: Oozie workflow - java map-reduce (new API) action
Blog 5: Oozie workflow - streaming map-reduce (python) action 
Blog 6: Oozie workflow - java main action
Blog 7: Oozie workflow - Pig action
Blog 8: Oozie sub-workflow
Blog 9a: Oozie coordinator job - time-triggered sub-workflow, fork-join control and decision control
Blog 9b: Oozie coordinator jobs - file triggered 
Blog 9c: Oozie coordinator jobs - dataset availability triggered
Blog 10: Oozie bundle jobs
Blog 11a: Oozie Java API for interfacing with oozie workflows
Blog 11b: Oozie Web Service API for interfacing with oozie workflows


Your thoughts/updates:
If you want to share your thoughts/updates, email me at airawat.blog@gmail.com.

2.0. About the Oozie map-reduce streaming action

Apache documentation at: http://archive.cloudera.com/cdh4/cdh/4/oozie/WorkflowFunctionalSpec.html#a3.2.2.2_Streaming


Excerpts from Apache documentation....

2.0.1. Map-Reduce Action

The map-reduce action starts a Hadoop map/reduce job from a workflow. Hadoop jobs can be Java Map/Reduce jobs or streaming jobs.

A map-reduce action can be configured to perform file system cleanup and directory creation before starting the map-reduce job. This capability enables Oozie to retry a Hadoop job in the case of a transient failure (Hadoop checks for the non-existence of the job output directory and creates it when the job is starting, so a retry without cleanup of the job output directory would fail).

The workflow job will wait until the Hadoop map/reduce job completes before continuing to the next action in the workflow execution path.

The counters of the Hadoop job and the job exit status (FAILED, KILLED or SUCCEEDED) must be available to the workflow job after the Hadoop job ends. This information can be used from within decision nodes and other actions' configurations.
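
For example, a downstream decision node can branch on the job's counters through the hadoop:counters() EL function. A minimal sketch, assuming an upstream action named 'streaming-node' and assumed transition targets 'end' and 'fail':

<decision name="check-output">
    <switch>
        <!-- RECORDS/REDUCE_OUT are the Oozie EL constants for the reducer output record counter;
             'streaming-node', 'end' and 'fail' are illustrative node names -->
        <case to="end">${hadoop:counters('streaming-node')[RECORDS][REDUCE_OUT] gt 0}</case>
        <default to="fail"/>
    </switch>
</decision>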

The map-reduce action has to be configured with all the necessary Hadoop JobConf properties to run the Hadoop map/reduce job.

Hadoop JobConf properties can be specified in a JobConf XML file bundled with the workflow application or they can be indicated inline in the map-reduce action configuration.

The configuration properties are loaded in the following order: streaming, job-xml and configuration, and later values override earlier values.

Streaming and inline property values can be parameterized (templatized) using EL expressions.

The Hadoop mapred.job.tracker and fs.default.name properties must not be present in the job-xml and inline configuration.
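
As a sketch, the skeleton of a map-reduce action with inline configuration might look like the following; the prepare block handles the cleanup mentioned above, and ${inputDir}/${outputDir} are assumed, parameterized properties supplied via job.properties:

<map-reduce>
    <job-tracker>${jobTracker}</job-tracker>
    <name-node>${nameNode}</name-node>
    <prepare>
        <!-- delete the output directory so a retried run does not fail -->
        <delete path="${nameNode}${outputDir}"/>
    </prepare>
    <configuration>
        <property>
            <name>mapred.input.dir</name>
            <value>${inputDir}</value>
        </property>
        <property>
            <name>mapred.output.dir</name>
            <value>${outputDir}</value>
        </property>
    </configuration>
</map-reduce>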


2.0.2. Adding Files and Archives for the Job

The file and archive elements make files and archives available to map-reduce jobs. If the specified path is relative, it is assumed the file or archive is within the application directory, in the corresponding sub-path. If the path is absolute, the file or archive is expected at the given absolute path.

Files specified with the file element will be symbolic links in the home directory of the task.

If a file is a native library (an '.so' or a '.so.#' file), it will be symlinked as an '.so' file in the task running directory, and thus available to the task JVM.

To force a symlink for a file on the task running directory, use a '#' followed by the symlink name. For example 'mycat.sh#cat'.

Refer to the Hadoop distributed cache documentation for more details on files and archives.
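
A short sketch of how the two elements look inside an action; the paths and symlink names below are illustrative:

<!-- relative path, resolved against the workflow application directory;
     symlinked as 'mapper.py' in the task's working directory -->
<file>scripts/mapper.py#mapper.py</file>
<!-- absolute HDFS path; the archive is unpacked and symlinked as 'geolib' -->
<archive>/user/someuser/lib/geolib.tar.gz#geolib</archive>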


2.0.3. Streaming

Streaming information can be specified in the streaming element.

The mapper and reducer elements are used to specify the executable/script to be used as mapper and reducer.

User-defined scripts must be bundled with the workflow application and declared in the files element of the streaming configuration. If they are not declared in the files element of the configuration, it is assumed they will already be available (and in the command PATH) on the Hadoop slave machines.

Some streaming jobs require files found on HDFS to be available to the mapper/reducer scripts. This is done using the file and archive elements described in the previous section.

The mapper/reducer can be overridden by the mapred.mapper.class or mapred.reducer.class properties in the job-xml file or configuration elements.
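
Putting the above together, a streaming action with Python scripts might be structured as sketched below; the node name, script names and parameterized directories are assumptions for illustration:

<action name="streaming-node">
    <map-reduce>
        <job-tracker>${jobTracker}</job-tracker>
        <name-node>${nameNode}</name-node>
        <streaming>
            <mapper>python mapper.py</mapper>
            <reducer>python reducer.py</reducer>
        </streaming>
        <configuration>
            <property>
                <name>mapred.input.dir</name>
                <value>${inputDir}</value>
            </property>
            <property>
                <name>mapred.output.dir</name>
                <value>${outputDir}</value>
            </property>
        </configuration>
        <!-- scripts bundled with the workflow application and declared here,
             so they are shipped to the task nodes -->
        <file>mapper.py#mapper.py</file>
        <file>reducer.py#reducer.py</file>
    </map-reduce>
    <ok to="end"/>
    <error to="fail"/>
</action>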


3.0. Sample workflow application

Components:
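
The workflow application bundles the usual pieces for a streaming action - job.properties, workflow.xml and the Python mapper/reducer scripts - along with sample syslog-generated log data and the commands to load it into HDFS and run the workflow. As a rough illustration only (the regex, field names and emitted key below are assumptions, not necessarily the exact script used in this sample application), a syslog-parsing streaming mapper could look like this:

#!/usr/bin/env python
# mapper.py - read syslog lines from stdin, parse them with a regex,
# and emit tab-separated "process<TAB>1" pairs for a downstream count
import re
import sys

# Matches lines like:
#   May  3 11:52:54 somehost init: tty (/dev/tty6) main process ended
syslog_pattern = re.compile(
    r'^(\w+\s+\d+\s+\d{2}:\d{2}:\d{2})\s+'   # timestamp
    r'(\S+)\s+'                               # host
    r'([\w\-/\.]+)(?:\[\d+\])?:\s*'           # process name, optional [pid]
    r'(.*)$'                                  # message
)

for line in sys.stdin:
    match = syslog_pattern.match(line.strip())
    if not match:
        continue  # skip lines that do not look like syslog records
    timestamp, host, process, message = match.groups()
    sys.stdout.write('%s\t%s\n' % (process, 1))

The script can be smoke-tested locally, outside Hadoop, with something like: cat sample.log | python mapper.py | sort | head
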
Pictorial overview of workflow:


Sample application:


Oozie web console:
Screenshots from application execution

Comments:

  1. Hi,
    I need help regarding this issue.
    I wanted to create a streaming job from the Hue UI, where the mapper and reducer were shell scripts that perform a word count (term frequency), and submitted the job.
    The error is:

    2013-12-16 19:21:24,278 ERROR [main] org.apache.hadoop.streaming.PipeMapRed: configuration exception
    java.io.IOException: Cannot run program "/hadoop/yarn/local/usercache/root/appcache/application_1387201627160_0006/container_1387201627160_0006_01_000002/./maptf.sh": java.io.IOException: error=2, No such file or directory

    This means it cannot find the mapper and reducer in the path that Oozie creates on the fly. Can you check my Oozie configuration and workflow (by email: sandeepboda91083@gmail.com) and let me know if there is any config issue?

    In HDFS, I have all paths and files set up correctly under the root user.

    Note: I can run streaming jobs without Oozie, as follows:
    cd /root/mrtest/
    ls
    -rwxrwxrwx 1 root root 235 Dec 11 11:37 maptf.sh
    -rwxrwxrwx 1 root root 273 Dec 11 11:37 redtf.sh

    hadoop jar /usr/lib/hadoop-mapreduce/hadoop-streaming-2.2.0.2.0.6.0-76.jar -D stream.num.map.output.key.fields=1 -input crane_in1 -output crane_out2 -file ./maptf.sh -mapper maptf.sh -file ./redtf.sh -reducer redtf.sh
