Using OmniScheduler
- Go to the /home/hadoop/loadsmetric-install/loadsmetric-server/etc directory.
cd /home/hadoop/loadsmetric-install/loadsmetric-server/etc
- Optional: Modify the LoadsMetric configuration items as required. (Skip any configuration items that are already set correctly.)
- Open the configuration file.
vim application.properties
- Press i to enter insert mode. Modify the usage thresholds and usage weights as required. The thresholds affect the determination of overloaded nodes in the cluster, and the weights affect the node load sorting result. For details about the parameters, see References.
# limits for multi resource usage
load.limit.cpu=80
load.limit.mem=80
load.limit.diskio=80
load.limit.netio=80
# weights for multi resource usage
load.weight.cpu=0.3
load.weight.mem=0.3
load.weight.diskio=0.2
load.weight.netio=0.2
- Press Esc, type :wq!, and press Enter to save the file and exit.
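The interaction between the thresholds and weights above can be sketched as follows. The linear weighted-sum score and the "any resource over its limit" overload rule are assumptions for illustration only; LoadsMetric's actual scoring formula may differ.

```python
# Illustrative sketch only: this scoring model is an assumption,
# not LoadsMetric's documented algorithm.
limits = {"cpu": 80, "mem": 80, "diskio": 80, "netio": 80}       # load.limit.*
weights = {"cpu": 0.3, "mem": 0.3, "diskio": 0.2, "netio": 0.2}  # load.weight.*

def node_score(usage):
    """Weighted sum of per-resource usage percentages; affects load sorting."""
    return sum(weights[r] * usage[r] for r in weights)

def is_overloaded(usage):
    """A node counts as overloaded if any resource exceeds its limit."""
    return any(usage[r] > limits[r] for r in limits)

usage = {"cpu": 85, "mem": 40, "diskio": 10, "netio": 5}
print(node_score(usage))     # weighted load score used for sorting
print(is_overloaded(usage))  # True: cpu usage (85) exceeds its limit (80)
```

Raising a `load.limit.*` value makes the corresponding resource less likely to mark a node as overloaded; raising a `load.weight.*` value makes that resource dominate the sorting order.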
- Restart LoadsMetric to make the updated configuration items take effect.
cd /home/hadoop/loadsmetric-software
sh loadsmetric_deploy.sh restart
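If the updated values do not seem to take effect after the restart, a quick sanity check is to re-read application.properties and confirm the weights parse as numbers. This sketch parses simple key=value lines; the expectation that the four `load.weight.*` values sum to 1.0 is an assumption for illustration.

```python
# Sketch: sanity-check the edited properties file contents.
# The "weights sum to 1.0" rule is an assumption, not a documented requirement.
def parse_properties(text):
    """Parse simple key=value lines, skipping blanks and # comments."""
    props = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        key, _, value = line.partition("=")
        props[key.strip()] = value.strip()
    return props

def weights_ok(props):
    """Return True if the four load.weight.* values sum to 1.0."""
    keys = ["load.weight.cpu", "load.weight.mem",
            "load.weight.diskio", "load.weight.netio"]
    total = sum(float(props[k]) for k in keys)
    return abs(total - 1.0) < 1e-9

sample = """
# weights for multi resource usage
load.weight.cpu=0.3
load.weight.mem=0.3
load.weight.diskio=0.2
load.weight.netio=0.2
"""
print(weights_ok(parse_properties(sample)))  # True
```

In practice you would read the real file (for example with `open("application.properties").read()`) instead of the inline sample string.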
- Run a computing task (for example, a Spark task) to trigger Yarn resource scheduling:
spark-sql --deploy-mode client \
  --driver-cores 8 \
  --driver-memory 20g \
  --num-executors 3 \
  --executor-cores 7 \
  --executor-memory 26g \
  --master yarn \
  --conf spark.task.cpus=1 \
  --conf spark.sql.orc.impl=native \
  --conf spark.sql.shuffle.partitions=1000 \
  --conf spark.network.timeout=600 \
  --conf spark.sql.adaptive.enabled=true \
  --conf spark.sql.adaptive.skewedJoin.enabled=true \
  --conf spark.serializer=org.apache.spark.serializer.KryoSerializer \
  --conf spark.sql.autoBroadcastJoinThreshold=100M \
  --properties-file /home/spark.conf \
  --database tpcds_bin_partitioned_decimal_orc_3000
Parent topic: Using the Feature