OmniAdvisor Configuration File
The configuration files of OmniAdvisor 2.0 are common_config.ini and native_config.ini.
common_config.ini
common_config.ini is the basic configuration file of OmniAdvisor 2.0. It is used to select the tuning method, set the number of retests, and configure database information and Spark history server parameters.
When TLS mutual authentication is enabled for the backend PostgreSQL database and the Spark history server, the client private key or private key password in the common_config.ini file must be stored in plain text, which may pose a risk of information leakage. The related code is open source, and you can harden system security based on your requirements.
[common]
# Number of retests
tuning.retest.times=1
# Threshold for evaluating a configuration failure. If the number of configuration execution failures is greater than or equal to this threshold, the configuration becomes invalid.
config.fail.threshold=1
# Tuning policy
tuning.strategy=[["transfer", 1],["expert", 2],["iterative", 10]]
# Queue used for background retests. If this parameter is not set, the user queue is retained.
backend.retest.queue=
[database]
# Name of the backend PostgreSQL database
postgresql.database.name=
# User of the backend PostgreSQL database
postgresql.database.user=
# Host name of the backend PostgreSQL database
postgresql.database.host=
# Port of the backend PostgreSQL database
postgresql.database.port=
# SSL mode of the backend PostgreSQL database
postgresql.database.sslmode=verify-full
# Path to the server CA root certificate of the backend PostgreSQL database
postgresql.database.sslrootcert=
# Path to the client certificate of the backend PostgreSQL database
postgresql.database.sslcert=
# Path to the client private key of the backend PostgreSQL database
postgresql.database.sslkey=
# Client private key password of the backend PostgreSQL database
postgresql.database.sslpassword=
[spark]
# URL of the Spark history server, used only in REST mode
spark.history.rest.url=http://localhost:18080
# User name for the Spark history server URL (set this parameter only when necessary)
spark.history.username=
# Timeout for Spark to fetch traces from the history server
spark.fetch.trace.timeout=30
# Interval for Spark to fetch traces from the history server
spark.fetch.trace.interval=5
# Spark task timeout interval divided by the baseline time
spark.exec.timeout.ratio=10.0
# Indicates whether to separate stdout and stderr in the Spark output result
spark.output.merge.switch=False
# SSL mutual verification switch for the Spark history server (enabled by default; mandatory and must not be empty)
spark.history.sslverify = True
# Files related to the SSL mutual verification certificates of the Spark history server (valid only when sslverify is set to True)
spark.history.sslrootca =
spark.history.sslcrt =
spark.history.sslkey =
[crypto]
# Semaphore key used by the KMC. Valid values range from 0x1111 to 0x9999. Only hexadecimal numbers are supported.
kmc.sem.key=0x1111
- If the value of spark.fetch.trace.timeout is too small, fetching the trace information of a Spark task may occasionally fail.
This does not affect the subsequent tuning process, but you can increase the value to reduce the probability of fetch failures.
In a stable network environment, the default value 30 ensures a relatively high success rate of fetching trace information.
- If the value of spark.exec.timeout.ratio is too small, tuning configurations are more likely to fail due to timeouts, especially when the baseline performance value is small.
The default value 10 meets the tuning requirements in most cases.
If tuning failures frequently occur due to timeouts, increase the value of this parameter.
- The value of the tuning.retest.times parameter must be greater than 0.
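The relationship described above for spark.exec.timeout.ratio can be sketched as follows: the effective task timeout is the ratio multiplied by the baseline time. The helper function name is an assumption for illustration only.

```python
def exec_timeout(baseline_time, timeout_ratio=10.0):
    """Effective Spark task timeout: spark.exec.timeout.ratio * baseline time.

    Illustrative helper; the function name is not part of OmniAdvisor.
    A small baseline time yields a small absolute timeout, which is why
    timeouts are more likely when the baseline performance value is small.
    """
    if timeout_ratio <= 0:
        raise ValueError("timeout ratio must be positive")
    return baseline_time * timeout_ratio
```

For example, with the default ratio of 10.0, a task whose baseline time is 120 units is allowed to run for 1200 units before it times out.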
native_config.ini
native_config.ini is the configuration file for the native tuning method. You can edit this file to specify the OmniOperator deployment path and version.
[native]
# Root directory for deploying OmniOperator
omnioperator_home = /opt/omni-operator
# OmniOperator version
omnioperator_version = 1.7.0
# Spark version associated with the OmniOperator version, used to form file names such as boostkit-omniop-spark-3.3.1-1.7.0-aarch64.jar
omnioperator_spark_version = 3.3.1
# Root directory for deploying Hadoop
hadoop_home = /usr/local/hadoop
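To make the naming pattern concrete, the sketch below composes the jar file name from the version fields, following the example given in the configuration comment (boostkit-omniop-spark-3.3.1-1.7.0-aarch64.jar). The helper and the fixed "aarch64" suffix are assumptions inferred from that single example.

```python
def omniop_jar_name(spark_version="3.3.1", omniop_version="1.7.0",
                    arch="aarch64"):
    """Compose an OmniOperator jar name from native_config.ini fields.

    Illustrative helper; the pattern is inferred from the example
    boostkit-omniop-spark-3.3.1-1.7.0-aarch64.jar and may not cover
    every OmniOperator release.
    """
    return f"boostkit-omniop-spark-{spark_version}-{omniop_version}-{arch}.jar"
```

With the default values from the sample file above, this yields the file name shown in the configuration comment.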