Executing Spark UDFs
To push a UDF down to the OmniData service, you must first deploy the UDF dependency package. The following steps use huawei-udf as an example.
- Deploy huawei_udf.jar to the local /opt/boostkit directory.
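The deployment step above can be sketched as a small script. The BOOSTKIT_DIR and UDF_JAR variables are illustrative parameters, not part of the product: in a real cluster you would set BOOSTKIT_DIR=/opt/boostkit, use the actual jar, and repeat the copy on every node (for example with scp). A stand-in jar is created here only so the sketch runs end to end.

```shell
# Illustrative staging of the UDF jar; paths default to a scratch
# directory so the sketch is safely runnable. In production, set
# BOOSTKIT_DIR=/opt/boostkit and distribute the jar to all nodes.
DEST="${BOOSTKIT_DIR:-/tmp/boostkit-demo}"
JAR="${UDF_JAR:-/tmp/huawei_udf.jar}"
touch "$JAR"        # stand-in for the real huawei_udf.jar
mkdir -p "$DEST"
cp "$JAR" "$DEST/"
ls "$DEST"
```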

- Register the UDF with the MetaStore before running it. There are multiple registration methods; this section registers AdDecryptNew as a temporary function:
CREATE TEMPORARY FUNCTION AdDecryptNew AS "com.huawei.udf.AdDecryptNew";
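A temporary function is visible only in the current session. If the UDF should persist across sessions, one of the other registration methods is a permanent function backed by the jar; the statement below is a sketch, and the HDFS jar path is an assumption for illustration:

```sql
-- Hypothetical permanent registration; the jar location is an assumption.
CREATE FUNCTION AdDecryptNew AS "com.huawei.udf.AdDecryptNew"
  USING JAR "hdfs:///opt/boostkit/huawei_udf.jar";
```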
- Push down the Spark UDFs.
/usr/local/spark/bin/spark-sql \
  --driver-class-path '/opt/boostkit/*' \
  --jars '/opt/boostkit/*' \
  --conf 'spark.executor.extraClassPath=./*' \
  --name udf_sqls/UDF_AdDecryptNew.sql \
  --driver-memory 50G \
  --driver-java-options '-Dlog4j.configuration=file:../conf/log4j.properties' \
  --executor-memory 32G \
  --num-executors 30 \
  --executor-cores 18 \
  --properties-file tpch_query.conf \
  -f UDF_AdDecryptNew.sql
The command output shows the query result of the pushed-down UDF.
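The contents of UDF_AdDecryptNew.sql are not reproduced in this section. A minimal file would simply invoke the registered function in a query; the table and column names below are hypothetical:

```sql
-- Hypothetical query file; the real UDF_AdDecryptNew.sql, table,
-- and column names will differ in your environment.
SELECT AdDecryptNew(ad_info) FROM ads LIMIT 10;
```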