Deploying the UDF Plugin (Row-by-Row Processing)
The following operations must be performed on the management node and all compute nodes.
- Upload the previously mentioned packages to the /opt/omni-operator/hive-udf directory on the management node and all compute nodes.
- Decompress the related packages.
unzip udf.zip
rm -f udf.zip
unzip conf.zip
rm -f conf.zip
- In the /opt/omni-operator/conf/omni.conf file, update the following content:
- Open /opt/omni-operator/conf/omni.conf.
vim /opt/omni-operator/conf/omni.conf
- Update the UDF configuration.
enableBatchExprEvaluate=false // false indicates row-by-row expression processing; true indicates batch expression processing.
hiveUdfPropertyFilePath=/opt/omni-operator/hive-udf/udf.properties // Path of the UDF trustlist file.
hiveUdfDir=/opt/omni-operator/hive-udf/udf // Directory of the Hive UDF JAR files.
- Press Esc, type :wq!, and press Enter to save the file and exit.
- Open the ~/.bashrc file in vim and add LD_LIBRARY_PATH to update the environment variables.
- Open the ~/.bashrc file.
vim ~/.bashrc
- Add LD_LIBRARY_PATH to update environment variables.
export LD_LIBRARY_PATH=${JAVA_HOME}/jre/lib/aarch64/server:$LD_LIBRARY_PATH
- Press Esc, type :wq!, and press Enter to save the file and exit.
- Update the environment variables.
source ~/.bashrc
The package names udf.zip and conf.zip are examples; you can customize them based on your service requirements.
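The per-node steps above can be sketched as a small helper script, assuming the packages have already been uploaded to /opt/omni-operator/hive-udf. The function names and the idea of scripting the steps are illustrative, not part of the product; adjust paths and values for your environment.

```shell
#!/usr/bin/env bash
set -euo pipefail

# Unpack udf.zip and conf.zip in the given directory, then remove the archives.
unpack_udf_packages() {
  local dir="$1"
  (cd "$dir" && unzip -o udf.zip && rm -f udf.zip && unzip -o conf.zip && rm -f conf.zip)
}

# Set one key=value entry in an omni.conf-style file: replace the value if the
# key already exists, otherwise append a new line.
set_udf_conf() {
  local conf="$1" key="$2" value="$3"
  if grep -q "^${key}=" "$conf"; then
    sed -i "s|^${key}=.*|${key}=${value}|" "$conf"
  else
    echo "${key}=${value}" >> "$conf"
  fi
}

# Append the LD_LIBRARY_PATH export to the given rc file only once.
append_ld_library_path() {
  local rc="$1"
  local line='export LD_LIBRARY_PATH=${JAVA_HOME}/jre/lib/aarch64/server:$LD_LIBRARY_PATH'
  grep -qF "$line" "$rc" || echo "$line" >> "$rc"
}

# Example usage (run on the management node and every compute node):
# unpack_udf_packages /opt/omni-operator/hive-udf
# set_udf_conf /opt/omni-operator/conf/omni.conf enableBatchExprEvaluate false
# set_udf_conf /opt/omni-operator/conf/omni.conf hiveUdfPropertyFilePath /opt/omni-operator/hive-udf/udf.properties
# set_udf_conf /opt/omni-operator/conf/omni.conf hiveUdfDir /opt/omni-operator/hive-udf/udf
# append_ld_library_path ~/.bashrc && source ~/.bashrc
```

Idempotent helpers like these make it safe to rerun the script on a node where some steps were already completed.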
Parent topic: Deploying the OmniOperator UDF Plugin