1.0. What's in this blog?
In my previous blog on creating custom UDFs in Hive, I covered a sample basic UDF. This blog covers creating a generic UDF that mimics the same NVL2 functionality covered in the previous blog. It includes sample data, Java code for creating the UDF, expected results, the commands to execute, and the output.
[hive 0.10]
About UDFs:
UDF stands for User Defined Function. In Hive, there are (a) reusable functions that ship as part of core Hive (out of the box) and can be used in Hive queries; they are called UDFs even though they are not user-defined. And then there are (b) functions that one can create in Java, also called UDFs, for use in Hive queries. The focus of this blog is custom UDFs (b), specifically generic UDFs.
About generic UDF:
UDFs in Hive are extensions of either the UDF or the GenericUDF class. GenericUDFs perform better, because they use short-circuit and lazy evaluation, whereas UDFs rely on reflection. GenericUDFs also support non-primitive Hive types like arrays, structs and maps in addition to primitive types, unlike UDFs, which support only primitive types.
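To illustrate why reflection-based dispatch carries overhead, here is a toy comparison in plain Java (the class and method names are my own, purely for illustration): a direct call is resolved at compile time, while a reflective call performs a runtime method lookup and boxing, which is roughly the cost difference between the two dispatch styles.

```java
import java.lang.reflect.Method;

public class ReflectionDemo {
    // A toy "UDF" with an evaluate method, loosely analogous to how
    // Hive's UDF base class locates evaluate() via reflection.
    public static class ToyUdf {
        public String evaluate(String s) {
            return s == null ? "was null" : s;
        }
    }

    public static void main(String[] args) throws Exception {
        ToyUdf udf = new ToyUdf();
        // Direct call: resolved at compile time (akin to GenericUDF dispatch).
        String direct = udf.evaluate("x");
        // Reflective call: runtime method lookup (akin to UDF dispatch).
        Method m = ToyUdf.class.getMethod("evaluate", String.class);
        String reflective = (String) m.invoke(udf, "x");
        System.out.println(direct.equals(reflective)); // true
    }
}
```

Both calls return the same value; only the dispatch mechanism differs.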
About NVL2:
NVL2 takes three parameters, which we will refer to as expr1, expr2 and expr3.
NVL2 lets you determine the value returned by a query based on whether a specified expression is null or not null. If expr1 is not null, then NVL2 returns expr2. If expr1 is null, then NVL2 returns expr3.
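In plain Java terms, the NVL2 semantics boil down to a single null check. The sketch below is a hypothetical helper (not Hive code) just to pin down the behavior:

```java
public class Nvl2Demo {
    // Plain-Java analogue of NVL2's semantics:
    // returns expr2 when expr1 is non-null, otherwise expr3.
    static <T> T nvl2(Object expr1, T expr2, T expr3) {
        return (expr1 != null) ? expr2 : expr3;
    }

    public static void main(String[] args) {
        System.out.println(nvl2("some value", "not null", "null")); // not null
        System.out.println(nvl2(null, "not null", "null"));         // null
    }
}
```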
2.0. NVL2 generic UDF in Hive
1: Create the test data file for a Hive external table
2: Create the Hive table
3: Create the UDF in Java
4: Expected results
5: Try out the UDF
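Step 3 can be sketched as a GenericUDF subclass along the following lines. This is a minimal sketch, not necessarily the exact code used in this blog: the class and package placement are my own, and initialize simply reuses expr2's object inspector instead of reconciling the types of expr2 and expr3 (a production implementation would typically use GenericUDFUtils.ReturnObjectInspectorResolver for that). Compiling it requires hive-exec on the classpath.

```java
import org.apache.hadoop.hive.ql.exec.Description;
import org.apache.hadoop.hive.ql.exec.UDFArgumentException;
import org.apache.hadoop.hive.ql.exec.UDFArgumentLengthException;
import org.apache.hadoop.hive.ql.metadata.HiveException;
import org.apache.hadoop.hive.ql.udf.generic.GenericUDF;
import org.apache.hadoop.hive.serde2.objectinspector.ObjectInspector;

@Description(name = "nvl2",
    value = "_FUNC_(expr1, expr2, expr3) - returns expr2 if expr1 is not null, else expr3")
public class NVL2GenericUDF extends GenericUDF {

  @Override
  public ObjectInspector initialize(ObjectInspector[] arguments)
      throws UDFArgumentException {
    if (arguments.length != 3) {
      throw new UDFArgumentLengthException("NVL2 requires exactly 3 arguments");
    }
    // Simplification: assume the return type matches expr2's type.
    return arguments[1];
  }

  @Override
  public Object evaluate(DeferredObject[] arguments) throws HiveException {
    // Lazy evaluation: only the branch that is needed is materialized.
    return (arguments[0].get() != null)
        ? arguments[1].get()
        : arguments[2].get();
  }

  @Override
  public String getDisplayString(String[] children) {
    return "NVL2(" + children[0] + ", " + children[1] + ", " + children[2] + ")";
  }
}
```

Once packaged into a jar, the function can be registered in the Hive shell with an "add jar ..." statement followed by a CREATE TEMPORARY FUNCTION statement pointing at the fully qualified class name.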
3.0. Making the UDF permanently available when you launch the hive shell
There are several ways to make a custom UDF available when you launch the Hive shell, bypassing the need to execute an "add jar..." statement before each use of the UDF. I have listed a couple of them.
Option 1:
From "Programming Hive"
Your function may also be added permanently to Hive, however this requires a small modification to a Hive Java file and then rebuilding Hive.
Inside the Hive source code, a one-line change is required to the FunctionRegistry class found at ql/src/java/org/apache/hadoop/hive/ql/exec/FunctionRegistry.java. Then you rebuild Hive following the instructions that come with the source distribution.
While it is recommended that you redeploy the entire new build, only the hive-exec-*.jar, where * is the version number, needs to be replaced.
Option 2:
Add it to the .hiverc file on each node from where hive queries will be run.
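As a concrete illustration of this option, the .hiverc file could contain lines like the following; the jar path and class name here are hypothetical placeholders, so substitute your own:

```sql
-- Hypothetical paths and names; adjust to your jar and UDF class.
ADD JAR /home/hadoop/lib/hive-udf-nvl2.jar;
CREATE TEMPORARY FUNCTION NVL2 AS 'com.example.hive.udf.NVL2GenericUDF';
```

Hive executes these statements automatically on shell startup, so the function is ready to use in queries without any manual setup.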
Check out my blog - http://hadooped.blogspot.com/2013/08/hive-hiverc-file.html
4.0. References
Apache documentation:
http://hive.apache.org/docs/r0.10.0/api/org/apache/hadoop/hive/ql/udf/generic/GenericUDF.html
https://cwiki.apache.org/confluence/display/Hive/OperatorsAndFunctions
A good article on creating a UDF that involves non-primitive types - link
Programming Hive - from O'Reilly
That's it for this blog. Do share any additional insights with me.
Cheers!