Configuring OmniData
Scenario 1: Ceph/HDFS Access Configuration (Mandatory)
When the OmniData service starts, it reads the HDFS/Ceph configuration files. You need to upload the hdfs-site.xml and core-site.xml configuration files to the etc directory of OmniData. You can find these files on the ceph1/hdfs1, ceph2/hdfs2, and ceph3/hdfs3 nodes, in the etc/hadoop/ directory under the Hadoop installation directory.
You can add file transfer operations. Take Ceph as an example: the following figure shows how to transfer hdfs-site.xml from the local path to /home/omm/haf-install/haf-target/run/haf_user/omnidata/etc/ on the selected servers (ceph1, ceph2, and ceph3).

The method of uploading the core-site.xml file is the same.
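The two upload steps above can be sketched as a small loop. This is a minimal sketch, not part of the product tooling: the host names, the omm user, and the assumption that both files sit in the current directory are all example values. The sketch only prints the transfer commands so they can be reviewed before running them.

```shell
# Build the scp commands that would push both client configs to every
# offload node. Host names, user, and destination path are assumptions.
transfer_cmds() {
  dest=/home/omm/haf-install/haf-target/run/haf_user/omnidata/etc
  for host in ceph1 ceph2 ceph3; do
    echo "scp hdfs-site.xml core-site.xml omm@${host}:${dest}/"
  done
}
transfer_cmds
```

Remove the echo wrapper (or pipe the output to sh) once the printed commands look right for your environment.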
To access Ceph, you need to prepare the following dependencies on the offload nodes (ceph1 to ceph3).
- Upload hdfs-ceph-3.2.0.jar and librgw_jni.so to the server using SmartKit and ensure that they can be loaded by HAF.
- Copy the keyring file on any node (agent1 to agent3) to the same path (default path: /var/lib/ceph/radosgw/ceph-admin/keyring) on ceph1 to ceph3.
- Set the permissions for the keyring: chmod -R 755 /var/lib/ceph; chmod 644 keyring.
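The permission scheme above can be sanity-checked locally before touching the real nodes. The following is a throwaway sketch on a temporary directory, not the real /var/lib/ceph (on the offload nodes the commands must be run as a user with permission on that path):

```shell
# Exercise the documented permission scheme on a scratch copy of the layout.
root=$(mktemp -d)
mkdir -p "${root}/var/lib/ceph/radosgw/ceph-admin"
touch "${root}/var/lib/ceph/radosgw/ceph-admin/keyring"
chmod -R 755 "${root}/var/lib/ceph"                            # tree traversable by all
chmod 644 "${root}/var/lib/ceph/radosgw/ceph-admin/keyring"    # keyring readable, owner-writable
stat -c '%a' "${root}/var/lib/ceph/radosgw/ceph-admin/keyring" # prints 644
```

The final mode on the keyring must be 644; if it still shows 755, the second chmod was skipped.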
Scenario 2: Kerberos Configuration (When HDFS and ZooKeeper in the Cluster Are in Security Mode)
On offload nodes:
Add the following configurations to /home/omm/haf-install/haf-target/run/haf_user/omnidata/etc/config.properties on all nodes where the OmniData service is deployed, and copy the related configuration files (krb5.conf, hdfs.keytab, and client_jaas.conf) to the etc directory.
- Configure Kerberos and copy the related configuration files to the specified directory.
- Go to the directory where the config.properties file is stored and edit the configuration file.
cd /home/omm/haf-install/haf-target/run/haf_user/omnidata/etc
vi config.properties
- Add the following content to the file:
hdfs.authentication.type=KERBEROS
hdfs.krb5.conf.path=/home/omm/haf-install/haf-target/run/haf_user/omnidata/etc/krb5.conf
hdfs.krb5.keytab.path=/home/omm/haf-install/haf-target/run/haf_user/omnidata/etc/hdfs.keytab
hdfs.krb5.principal=hdfs/server1@EXAMPLE.COM
- Copy the related configuration files to the specified directory.
cp xxx/krb5.conf /home/omm/haf-install/haf-target/run/haf_user/omnidata/etc/
cp xxx/hdfs.keytab /home/omm/haf-install/haf-target/run/haf_user/omnidata/etc/
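For reference, the krb5.conf referenced above typically has the following shape. The realm and KDC host shown here are placeholders; use the values from your cluster's KDC, and do not hand-edit the file if your cluster distributes it automatically.

```
[libdefaults]
    default_realm = EXAMPLE.COM

[realms]
    EXAMPLE.COM = {
        kdc = kdc.example.com
        admin_server = kdc.example.com
    }
```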
- If the engine is Spark, you also need to configure a secure ZooKeeper connection. Edit config.properties again and add the following content:
zookeeper.krb5.enabled=true
zookeeper.java.security.auth.login.config=/home/omm/haf-install/haf-target/run/haf_user/omnidata/etc/client_jaas.conf
zookeeper.krb5.conf=/home/omm/haf-install/haf-target/run/haf_user/omnidata/etc/krb5.conf
cp xxx/client_jaas.conf /home/omm/haf-install/haf-target/run/haf_user/omnidata/etc/
cp xxx/krb5.conf /home/omm/haf-install/haf-target/run/haf_user/omnidata/etc/
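For reference, a typical client_jaas.conf for a Kerberos-secured ZooKeeper client has the following shape. The keytab path and principal shown here are the example values used earlier in this section; substitute the ones from your cluster.

```
Client {
    com.sun.security.auth.module.Krb5LoginModule required
    useKeyTab=true
    keyTab="/home/omm/haf-install/haf-target/run/haf_user/omnidata/etc/hdfs.keytab"
    principal="hdfs/server1@EXAMPLE.COM"
    useTicketCache=false;
};
```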
- Grant permission on the configuration file directory. In the following command, omm is the current HAF installation user; replace it with the actual user.
chown omm /home/omm/haf-install/haf-target/run/haf_user/omnidata/etc/*
The paths in the preceding commands need to be modified based on the actual paths in the cluster environment.