Executing Spark Tasks
Verify that Gluten takes effect and run a test case to demonstrate the performance optimization. Ensure that Spark engine tasks are running properly before you begin.
Spark executes SQL tasks through an interactive command line. To check whether Gluten has taken effect, add EXPLAIN before the SQL statement or view the operator names in the execution plan on the Spark UI. If an operator name starts with Omni or ends with Transformer, Gluten has taken effect.
This example uses tables from the tpcds_bin_partitioned_varchar_orc_2 database as the test data. Table 1 describes the test tables. The test SQL statement is query Q82 from the TPC-DS test dataset.
Table 1 Test tables

| Table | Format | Rows |
|---|---|---|
| item | orc | 26000 |
| inventory | orc | 16966305 |
| date_dim | orc | 73049 |
| store_sales | orc | 5760749 |
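You can optionally confirm that the test data matches Table 1 from the Spark SQL CLI before running the test. This is a minimal sketch and assumes the tpcds_bin_partitioned_varchar_orc_2 database used in the startup commands below:
# Compare the row counts with Table 1.
/usr/local/spark/bin/spark-sql --database tpcds_bin_partitioned_varchar_orc_2 -e "select 'item', count(*) from item union all select 'inventory', count(*) from inventory union all select 'date_dim', count(*) from date_dim union all select 'store_sales', count(*) from store_sales;"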
- Start the Spark SQL CLI.
- Command for starting open source Spark SQL:
/usr/local/spark/bin/spark-sql --deploy-mode client --driver-cores 8 --driver-memory 20g --master yarn --executor-cores 8 --executor-memory 26g --num-executors 36 --conf spark.executor.extraJavaOptions='-XX:+UseG1GC -XX:+UseNUMA' --conf spark.locality.wait=0 --conf spark.network.timeout=600 --conf spark.serializer=org.apache.spark.serializer.KryoSerializer --conf spark.sql.adaptive.enabled=true --conf spark.sql.autoBroadcastJoinThreshold=100M --conf spark.sql.broadcastTimeout=600 --conf spark.sql.shuffle.partitions=1000 --conf spark.sql.orc.impl=native --conf spark.task.cpus=1 --database tpcds_bin_partitioned_varchar_orc_2
- Perform the following operations to start the Gluten plugin.
- Go to the /usr/local/spark/conf directory and create the spark-defaults-omnioperator.conf file.
cd /usr/local/spark/conf
cp spark-defaults.conf spark-defaults-omnioperator.conf
- Change the permission on spark-defaults-omnioperator.conf to 640.
chmod 640 spark-defaults-omnioperator.conf
- Open spark-defaults-omnioperator.conf.
vi spark-defaults-omnioperator.conf
- Press i to enter the insert mode and add the following content to the end of the file:
spark.plugins org.apache.gluten.GlutenPlugin
spark.shuffle.manager org.apache.spark.shuffle.sort.ColumnarShuffleManager
spark.executor.memoryOverhead=3g
spark.memory.offHeap.enabled true
spark.memory.offHeap.size 35g
spark.gluten.sql.columnar.backend.lib omni
spark.executor.extraClassPath ${PWD}/omni/omni-operator/lib/gluten-omni-bundle-spark3.3_2.12-openEuler_22.03_aarch_64-1.3.0.jar
spark.driver.extraClassPath /opt/omni-operator/lib/gluten-omni-bundle-spark3.3_2.12-openEuler_22.03_aarch_64-1.3.0.jar
spark.executorEnv.LD_LIBRARY_PATH ${PWD}/omni/omni-operator/lib
spark.executorEnv.OMNI_HOME ${PWD}/omni/omni-operator
spark.driverEnv.LD_LIBRARY_PATH /opt/omni-operator/lib
spark.driverEnv.OMNI_HOME /opt/omni-operator
spark.executorEnv.MALLOC_CONF narenas:2
spark.driverEnv.MALLOC_CONF tcache:false
spark.driverEnv.LD_PRELOAD /opt/omni-operator/lib/libjemalloc.so.2
spark.executorEnv.LD_PRELOAD ${PWD}/omni/omni-operator/lib/libjemalloc.so.2
spark.gluten.sql.columnar.libpath /opt/omni-operator/lib/libspark_columnar_plugin.so
spark.gluten.sql.columnar.executor.libpath ${PWD}/omni/omni-operator/lib/libspark_columnar_plugin.so
spark.gluten.sql.native.union true
spark.gluten.sql.columnar.forceShuffledHashJoin true
spark.sql.ansi.enabled false
spark.executorEnv.MALLOC_CONF tcache:false
spark.driverEnv.MALLOC_CONF tcache:false
spark.sql.parquet.datetimeRebaseModeInRead CORRECTED
spark.sql.parquet.int96RebaseModeInRead CORRECTED
spark.sql.optimizer.runtime.bloomfilter.enabled false
spark.gluten.sql.columnar.backend.omni.combineJoinedAggregates true
spark.gluten.sql.columnar.backend.omni.joinReorderEnhance true
spark.gluten.sql.columnar.backend.omni.dedupLeftSemiJoin true
spark.gluten.sql.columnar.backend.omni.pushOrderedLimitThroughAggEnable true
spark.gluten.sql.columnar.backend.omni.adaptivePartialAggregation true
spark.gluten.sql.columnar.backend.omni.filterMerge true
spark.gluten.sql.columnar.backend.omni.preferShuffledHashJoin true
spark.gluten.sql.columnar.backend.omni.aggregationSpillEnabled false
spark.gluten.sql.columnar.backend.omni.vec.predicate.enabled true
spark.sql.optimizer.runtime.bloomFilter.enabled false
spark.gluten.sql.columnar.backend.omni.rewriteSelfJoinInInPredicate true
spark.gluten.sql.columnar.physicalJoinOptimizeEnable true
spark.gluten.sql.columnar.physicalJoinOptimizationLevel 19
spark.driver.maxResultSize 2G
spark.network.timeout 600
spark.serializer org.apache.spark.serializer.KryoSerializer
spark.sql.adaptive.enabled true
spark.sql.adaptive.skewedJoin.enabled true
spark.sql.autoBroadcastJoinThreshold 100M
spark.sql.broadcastTimeout 600
spark.sql.shuffle.partitions 200
spark.sql.orc.impl native
spark.task.cpus 1
spark.sql.sources.parallelPartitionDiscovery.parallelism 60
spark.sql.shuffle.partitions 1000
spark.sql.adaptive.coalescePartitions.minPartitionNum 400
spark.sql.adaptive.coalescePartitions.initialPartitionNum 400
spark.kryoserializer.buffer.max 1024m
spark.reducer.maxSizeInFlight 128m
spark.gluten.sql.columnar.maxBatchSize 8192
- Press Esc, type :wq!, and press Enter to save the file and exit.
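- (Optional) Verify that the driver-side files referenced in the configuration exist on the node. A minimal check, assuming the default /opt/omni-operator installation directory used above:
# Confirm the bundle jar, the columnar plugin library, and jemalloc are present on the driver node.
ls -l /opt/omni-operator/lib/gluten-omni-bundle-spark3.3_2.12-openEuler_22.03_aarch_64-1.3.0.jar /opt/omni-operator/lib/libspark_columnar_plugin.so /opt/omni-operator/lib/libjemalloc.so.2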
- Run the startup command.
/usr/local/spark/bin/spark-sql --archives hdfs://server1:9000/user/root/omni-operator.tar.gz#omni --deploy-mode client --driver-cores 8 --driver-memory 40g --master yarn --executor-cores 12 --executor-memory 5g --conf spark.memory.offHeap.enabled=true --conf spark.memory.offHeap.size=35g --num-executors 24 --conf spark.executor.extraJavaOptions='-XX:+UseG1GC' --conf spark.locality.wait=0 --conf spark.network.timeout=600 --conf spark.serializer=org.apache.spark.serializer.KryoSerializer --conf spark.sql.adaptive.enabled=true --conf spark.sql.adaptive.skewedJoin.enabled=true --conf spark.sql.autoBroadcastJoinThreshold=100M --conf spark.sql.broadcastTimeout=600 --conf spark.sql.shuffle.partitions=600 --conf spark.sql.orc.impl=native --conf spark.task.cpus=1 --properties-file /usr/local/spark/conf/spark-defaults-omnioperator.conf --database tpcds_bin_partitioned_varchar_orc_2
- hdfs://server1:9000/user/root/omni-operator.tar.gz#omni: Set hdfs://server1:9000 based on the actual value of fs.defaultFS in the Hadoop core-site.xml file. You can replace /user/root/omni-operator.tar.gz with a custom directory; this directory must be consistent with the operations in 2. #omni indicates the directory to which the omni-operator.tar.gz package is extracted, and you can customize it.
- The preceding startup command applies to Yarn mode. To start the SparkExtension plugin in local mode, change --master yarn to --master local. Before starting the plugin, add export LD_PRELOAD=/opt/omni-operator/lib/libjemalloc.so.2 to the ~/.bashrc file on all nodes and reload the environment variables, and replace ${PWD}/omni in the startup command with /opt, as outlined in the sketch below.
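A minimal sketch of this local-mode preparation (the paths follow the driver-side settings in spark-defaults-omnioperator.conf; treat them as assumptions if your installation directory differs):
# On every node: preload jemalloc for local mode, then reload the shell environment.
echo 'export LD_PRELOAD=/opt/omni-operator/lib/libjemalloc.so.2' >> ~/.bashrc
source ~/.bashrc
# Start the CLI with --master local; executor-side ${PWD}/omni paths are replaced with /opt in the properties file.
/usr/local/spark/bin/spark-sql --master local --properties-file /usr/local/spark/conf/spark-defaults-omnioperator.conf --database tpcds_bin_partitioned_varchar_orc_2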
Table 2 describes the Gluten startup parameters.
Table 2 Gluten startup parameters

| Parameter | Default Value | Description |
|---|---|---|
| spark.plugins | org.apache.gluten.GlutenPlugin | Enables Gluten. |
| spark.shuffle.manager | sort | Indicates whether to enable columnar shuffle. If you enable this function, configure the shuffleManager class of OmniShuffle by adding the configuration item --conf spark.shuffle.manager="org.apache.spark.shuffle.sort.OmniColumnarShuffleManager". By default, the open source sort-based shuffle is used. |
| spark.gluten.sql.columnar.hashagg | true | Indicates whether to enable columnar HashAgg. true: yes; false: no. |
| spark.gluten.sql.columnar.project | true | Indicates whether to enable columnar Project. true: yes; false: no. |
| spark.gluten.sql.columnar.filter | true | Indicates whether to enable columnar Filter. true: yes; false: no. |
| spark.gluten.sql.columnar.sort | true | Indicates whether to enable columnar Sort. true: yes; false: no. |
| spark.gluten.sql.columnar.window | true | Indicates whether to enable columnar Window. true: yes; false: no. |
| spark.gluten.sql.columnar.broadcastJoin | true | Indicates whether to enable columnar BroadcastHash Join. true: yes; false: no. |
| spark.gluten.sql.columnar.filescan | true | Indicates whether to enable columnar NativeFilescan for ORC and Parquet file formats. true: yes; false: no. |
| spark.gluten.sql.columnar.sortMergeJoin | true | Indicates whether to enable columnar SortMerge Join. true: yes; false: no. |
| spark.gluten.sql.columnar.takeOrderedAndProject | true | Indicates whether to enable columnar TakeOrderedAndProject. true: yes; false: no. |
| spark.gluten.sql.columnar.shuffledHashJoin | true | Indicates whether to enable columnar ShuffledHash Join. true: yes; false: no. |
| spark.gluten.sql.columnar.backend.omni.shuffleSpillBatchRowNum | 10000 | Specifies the number of rows in each batch output by shuffle. Adjust the value based on the actual memory specifications. You can increase the value to reduce the number of batches written to drive files and increase the write speed. |
| spark.gluten.sql.columnar.backend.omni.shuffleTaskSpillMemoryThreshold | 2147483648 | Specifies the upper limit of shuffle spill, in bytes. When the shuffle memory reaches this limit, data is spilled. Adjust the value based on the actual memory specifications. You can increase the value to reduce the number of shuffle spills to drives and drive I/O operations. |
| spark.gluten.sql.columnar.backend.omni.compressBlockSize | 65536 | Specifies the size of a compressed shuffle data block, in bytes. Adjust the value based on the actual memory specifications. The default value is recommended. |
| spark.gluten.sql.columnar.backend.omni.shuffleSpillBatchRowNum | 10000 | Specifies the size of the initialized buffer for columnar shuffle, in bytes. Adjust the value based on the actual memory specifications. You can increase the value to reduce the number of shuffle reads/writes and improve performance. |
| spark.shuffle.compress | true | Indicates whether to compress the shuffle output. true: yes; false: no. |
| spark.io.compression.codec | lz4 | Specifies the compression format for the shuffle output. Possible values are uncompressed, zlib, snappy, lz4, and zstd. |
| spark.gluten.sql.columnar.backend.omni.sortSpill.rowThreshold | 214783647 | Specifies the threshold that triggers spilling for the Sort operator, in rows. When the number of data rows to be processed exceeds this value, data is spilled. Adjust the value based on the actual memory specifications. You can increase the value to reduce the number of Sort operator spills to drives and drive I/O operations. |
| spark.gluten.sql.columnar.backend.omni.memFraction | 90 | Specifies the threshold that triggers spilling for the Sort operator. When the off-heap memory usage for data processing exceeds this value, data is spilled. This parameter is used together with spark.memory.offHeap.size, which specifies the total off-heap memory size. Adjust the value based on the actual memory specifications. You can increase the value to reduce the number of Sort operator spills to drives and drive I/O operations. |
| spark.gluten.sql.columnar.backend.omni.broadcastJoin.sharehashtable | true | Indicates whether the builder constructs only one hash table that is shared by all lookup joins in Broadcast Join. true: yes; false: no. |
| spark.gluten.sql.columnar.backend.omni.spill.dirDiskReserveSize | 10737418240 | Specifies the size of the available drive space reserved for data spilling of the Sort operator, in bytes. If the actual available space is less than this value, an exception is thrown. Adjust the value based on the actual drive capacity and service scenario. It is recommended that the value be less than or equal to the service data size; the upper limit is the actual drive capacity. |
| spark.gluten.sql.columnar.backend.omni.joinReorderEnhance | true | Indicates whether to enable the join reordering optimization policy. true (default): yes; false: no. The heuristic join reordering algorithm automatically optimizes the join order based on the number of where filter criteria and the table size. |
| spark.default.parallelism | 200 | Specifies the number of tasks concurrently executed by Spark. |
| spark.sql.shuffle.partitions | 200 | Specifies the number of shuffle partitions when Spark performs aggregation or join operations. |
| spark.sql.adaptive.enabled | false | Indicates whether to enable adaptive query optimization, which dynamically adjusts the execution plan during query execution. true: yes; false: no. |
| spark.executorEnv.MALLOC_CONF | narenas:1 | Controls the memory allocation policy of each Executor process in Spark. |
| spark.sql.autoBroadcastJoinThreshold | 10M | Specifies the threshold for using Broadcast Join to join small tables during join operations. |
| spark.sql.broadcastTimeout | 300 | Specifies the timeout duration for broadcasting small tables to other nodes. |
| spark.locality.wait | 3 | Specifies the waiting duration for data localization. |
| spark.sql.cbo.enabled | false | Indicates whether to enable CBO. true: yes; false: no. |
| spark.sql.codegen.wholeStage | true | Indicates whether to enable whole-stage code generation. true: yes; false: no. |
| spark.sql.orc.impl | native | native indicates that the open source ORC library is used; hive indicates that the ORC library in Hive is used. |
| spark.serializer | - | Specifies serialization with Kryo. |
| spark.executor.extraJavaOptions | - | Specifies the path to the local Hadoop library that the Executor uses for acceleration. |
| spark.driver.extraJavaOptions | - | Specifies the path to the local Hadoop library that the driver uses for acceleration. |
| spark.network.timeout | 120 | Specifies the default timeout duration of all network interactions, in seconds. |
| spark.gluten.sql.columnar.backend.omni.rewriteSelfJoinInInPredicate | false | Indicates whether to convert Self Join in the in expression to HashAgg so that unused columns are deleted to reduce the data volume. true: yes; false: no. |
| spark.gluten.sql.columnar.backend.omni.filterMerge | false | Indicates whether to combine expressions with similar structures in the same table to reduce the scanned data volume. true: yes; false: no. |
| spark.gluten.sql.columnar.backend.omni.dedupLeftSemiJoin | false | Indicates whether to deduplicate the right table of a LeftSemi Join to reduce the join data volume. true: yes; false: no. |
| spark.gluten.sql.columnar.backend.omni.preferShuffledHashJoin | false | Indicates whether to use ShuffledHashJoin whenever possible. true: yes; false: no. |
| spark.sql.adaptive.skewedJoin.enabled | false | Indicates whether to enable adaptive skewed join optimization, which uses special join algorithms to process skewed data, if any, and improve join efficiency. true: yes; false: no. |
| spark.sql.adaptive.coalescePartitions.minPartitionNum | 1 | Specifies the minimum number of shuffle partitions after merging. If this parameter is not set, the default parallelism of the Spark cluster is used. |
| spark.gluten.sql.columnar.backend.omni.adaptivePartialAggregation | false | Indicates whether to adaptively skip the partial-stage HashAgg group aggregation at runtime. The partial stage of group aggregation is skipped and data is output directly to the downstream operator if sampling identifies a high-cardinality scenario and the aggregation contains no first/last aggregation. true: yes; false: no. |
| spark.gluten.sql.columnar.backend.omni.pushOrderedLimitThroughAggEnable | false | Indicates whether to enable pushOrderedLimitThroughAgg optimization. If the execution plan contains the Sort+Limit operator and the sorting field is a subset of the grouping field of the group aggregation, the TopNSort operator is pushed down to the partial stage of the group aggregation, reducing the data volume processed by the downstream operator. true: yes; false: no. This optimization and the adaptivePartialAggregation optimization do not take effect at the same time. |
| spark.gluten.sql.columnar.backend.omni.combineJoinedAggregates | false | Indicates whether to enable combineJoinedAggregates optimization, which reduces repeated table reads by merging subqueries based on the same data. true: yes; false: no. |
| spark.gluten.sql.columnar.wholeStage.fallback.threshold | -1 | When AQE is enabled, if the number of operators rolled back in a stage is greater than or equal to this threshold, all operators of the stage (except OmniColumnarToRow and OmniAQEShuffleReadExec) are rolled back to open source operators. The value -1 disables this function. |
| spark.gluten.sql.columnar.query.fallback.threshold | -1 | When AQE is disabled, if the number of operators rolled back in the execution plan is greater than or equal to this threshold, all operators of the stage are rolled back to open source operators. The value -1 disables this function. |
| spark.gluten.sql.columnar.backend.omni.unixTimeFunc.enabled | true | Indicates whether to enable the from_unixtime and unix_timestamp expressions. true: yes; false: no. |
| spark.sql.orc.filterPushdown | true | Indicates whether to enable predicate pushdown for data queries in ORC format. |
| spark.gluten.sql.columnar.backend.omni.catalog.cache.size | 128 | Specifies the cache space size for catalog metadata. If the value is less than or equal to 0, caching is disabled. |
| spark.gluten.sql.columnar.backend.omni.catalog.cache.expire.time | 600 | Specifies the expiration time of cached catalog metadata. The default value is 600 seconds. |
| spark.gluten.sql.columnar.backend.omni.vec.predicate.enabled | false | Indicates whether to enable vectorized predicate pushdown. true: yes; false: no. |
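Any parameter in Table 2 can also be overridden for a single session by passing --conf at startup instead of editing the properties file; in Spark, values passed with --conf take precedence over the properties file. A minimal sketch, where the value shown is only an example and not a tuning recommendation:
/usr/local/spark/bin/spark-sql --properties-file /usr/local/spark/conf/spark-defaults-omnioperator.conf --conf spark.gluten.sql.columnar.backend.omni.shuffleSpillBatchRowNum=20000 --database tpcds_bin_partitioned_varchar_orc_2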
- Check whether Gluten takes effect.
Run the following SQL statement in the Gluten CLI and open source Spark SQL CLI:
set spark.sql.adaptive.enabled=false;
explain select i_item_id, i_item_desc, i_current_price
from item, inventory, date_dim, store_sales
where i_current_price between 76 and 76+30
  and inv_item_sk = i_item_sk
  and d_date_sk = inv_date_sk
  and d_date between cast('1998-06-29' as date) and cast('1998-08-29' as date)
  and i_manufact_id in (512,409,677,16)
  and inv_quantity_on_hand between 100 and 500
  and ss_item_sk = i_item_sk
group by i_item_id, i_item_desc, i_current_price
order by i_item_id
limit 100;
The following figure shows the execution plan output by Gluten. If an operator name starts with Omni or ends with Transformer, Gluten has taken effect.

The following figure shows the execution plan output by the open source Spark SQL CLI:

- Run the following SQL statement.
Run the following SQL statement in the Gluten CLI and open source Spark SQL CLI:
set spark.sql.adaptive.enabled=false;
select i_item_id, i_item_desc, i_current_price
from item, inventory, date_dim, store_sales
where i_current_price between 76 and 76+30
  and inv_item_sk = i_item_sk
  and d_date_sk = inv_date_sk
  and d_date between cast('1998-06-29' as date) and cast('1998-08-29' as date)
  and i_manufact_id in (512,409,677,16)
  and inv_quantity_on_hand between 100 and 500
  and ss_item_sk = i_item_sk
group by i_item_id, i_item_desc, i_current_price
order by i_item_id
limit 100;
- Compare the query results of the TPC-DS test dataset Q82 executed by open source Spark SQL and Gluten, and check the performance differences before and after Gluten is enabled.
Execution result comparison: The query results of the two runs are identical, and the SQL execution time is shorter after Gluten is enabled. Gluten improves the Q82 query execution efficiency without affecting the query result.
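One way to quantify the difference is to run the same Q82 statement non-interactively in both CLIs and compare the wall-clock time. A minimal sketch, assuming the query has been saved to a local file named q82.sql (a file name chosen here for illustration) and that the remaining resource options from the startup commands above are kept:
# Open source Spark SQL
time /usr/local/spark/bin/spark-sql --master yarn --database tpcds_bin_partitioned_varchar_orc_2 -f q82.sql
# Gluten
time /usr/local/spark/bin/spark-sql --master yarn --archives hdfs://server1:9000/user/root/omni-operator.tar.gz#omni --properties-file /usr/local/spark/conf/spark-defaults-omnioperator.conf --database tpcds_bin_partitioned_varchar_orc_2 -f q82.sql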

