Configuring OmniData

Scenario 1: Ceph/HDFS Access Configuration (Mandatory)

When the OmniData service starts, it reads the HDFS/Ceph configuration files. Upload the hdfs-site.xml and core-site.xml configuration files to the etc directory of OmniData. These files can be found on the ceph1/hdfs1, ceph2/hdfs2, and ceph3/hdfs3 nodes, in the etc/hadoop/ directory of the Hadoop installation directory.

As shown in the following figure, you can add file transfer operations. Taking Ceph as an example, the figure shows how to transfer hdfs-site.xml from the local path to /opt/haf-target/run/haf_user/omnidata/etc/ on the selected servers (ceph1, ceph2, and ceph3).

The core-site.xml file is uploaded in the same way.
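If you prefer the command line to the file-transfer dialog, the same upload can be sketched with scp. The node names and destination path are taken from this guide; the source directory below is a placeholder for wherever your local copies of the Hadoop configuration files live.

```shell
# Sketch only: push hdfs-site.xml and core-site.xml to every offload node.
# SRC_DIR is an assumption -- point it at your local copy of the Hadoop config.
SRC_DIR=/path/to/hadoop/etc/hadoop
DEST_DIR=/opt/haf-target/run/haf_user/omnidata/etc
for node in ceph1 ceph2 ceph3; do
    scp "${SRC_DIR}/hdfs-site.xml" "${SRC_DIR}/core-site.xml" "${node}:${DEST_DIR}/"
done
```

This assumes SSH access from the local machine to ceph1 through ceph3.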

To access Ceph, the following dependencies must be prepared on the offload nodes (ceph1 to ceph3):

  1. hdfs-ceph-3.2.0.jar and librgw_jni.so have already been uploaded to the servers by SmartKit and are loaded by the HAF.
  2. Copy the keyring file from any node (agent1 to agent3) to the same path (default path: /var/lib/ceph/radosgw/ceph-admin/keyring) on ceph1 to ceph3.
  3. Set the keyring permissions: chmod -R 755 /var/lib/ceph; chmod 644 /var/lib/ceph/radosgw/ceph-admin/keyring.
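Steps 2 and 3 above can be sketched as a single loop, assuming passwordless SSH between the nodes; the scp -3 flag routes the agent-to-ceph copy through the local host so the two remote nodes need no direct trust. Node names and paths come from this guide, with agent1 standing in for any agent node.

```shell
# Sketch: distribute the Ceph keyring to the offload nodes and fix permissions.
KEYRING=/var/lib/ceph/radosgw/ceph-admin/keyring
for node in ceph1 ceph2 ceph3; do
    ssh "${node}" "mkdir -p /var/lib/ceph/radosgw/ceph-admin"
    scp -3 "agent1:${KEYRING}" "${node}:${KEYRING}"
    ssh "${node}" "chmod -R 755 /var/lib/ceph && chmod 644 ${KEYRING}"
done
```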

Scenario 2: Kerberos Configuration (When HDFS and ZooKeeper in the Cluster Are in Security Mode)

On offload nodes:

Add the following configurations to /opt/haf-install/haf-target/run/haf_user/omnidata/etc/config.properties on all nodes where the OmniData service is deployed, and copy the related configuration files (krb5.conf, hdfs.keytab, and client_jaas.conf) to the etc directory.

  1. Configure Kerberos and copy the related configuration files to the specified directory.
    cd /opt/haf-install/haf-target/run/haf_user/omnidata/etc
    vi config.properties
    hdfs.authentication.type=KERBEROS
    hdfs.krb5.conf.path=/opt/haf-install/haf-target/run/haf_user/omnidata/etc/krb5.conf
    hdfs.krb5.keytab.path=/opt/haf-install/haf-target/run/haf_user/omnidata/etc/hdfs.keytab
    hdfs.krb5.principal=hdfs/server1@EXAMPLE.COM
    cp xxx/krb5.conf /opt/haf-install/haf-target/run/haf_user/omnidata/etc/
    cp xxx/hdfs.keytab /opt/haf-install/haf-target/run/haf_user/omnidata/etc/
  2. If the engine is Spark, you need to configure a secure ZooKeeper connection.
    zookeeper.krb5.enabled=true
    zookeeper.java.security.auth.login.config=/opt/haf-install/haf-target/run/haf_user/omnidata/etc/client_jaas.conf
    zookeeper.krb5.conf=/opt/haf-install/haf-target/run/haf_user/omnidata/etc/krb5.conf
    cp xxx/client_jaas.conf /opt/haf-install/haf-target/run/haf_user/omnidata/etc/
    cp xxx/krb5.conf /opt/haf-install/haf-target/run/haf_user/omnidata/etc/
  3. Set the owner of the files in the configuration directory.
    chown haf_user:haf /opt/haf-install/haf-target/run/haf_user/omnidata/etc/*
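Taken together, steps 1 and 2 leave a config.properties similar to the fragment below. The principal is the example value from step 1 and must be replaced with the one used by your cluster; the ZooKeeper lines are needed only for the Spark engine.

```
hdfs.authentication.type=KERBEROS
hdfs.krb5.conf.path=/opt/haf-install/haf-target/run/haf_user/omnidata/etc/krb5.conf
hdfs.krb5.keytab.path=/opt/haf-install/haf-target/run/haf_user/omnidata/etc/hdfs.keytab
hdfs.krb5.principal=hdfs/server1@EXAMPLE.COM
# Required only when the engine is Spark:
zookeeper.krb5.enabled=true
zookeeper.java.security.auth.login.config=/opt/haf-install/haf-target/run/haf_user/omnidata/etc/client_jaas.conf
zookeeper.krb5.conf=/opt/haf-install/haf-target/run/haf_user/omnidata/etc/krb5.conf
```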

The placeholder values in the preceding commands (such as xxx and the Kerberos principal) need to be modified based on the actual paths and accounts in the cluster environment.