Known Issues
| Trouble Ticket No. | DTS2024060329127 |
|---|---|
| Severity | Minor |
| Symptom | When Spark executes INSERT statements with only one data partition and a Sort Merge Join (SMJ) is performed across 50 consecutive tables, off-heap memory is exhausted. The OmniOperator SMJ operator then calls `new` to allocate vector memory, and the failed allocation causes a core dump. |
| Root Cause | |
| Impact | This problem occurs only under high load. Spark jobs are designed to leverage the concurrent processing capability of large clusters, so in normal scenarios a single task (single thread) does not execute JOINs across such a large number of tables. The problem has not occurred in actual service scenarios and has little impact on customers. |
| Workaround | |
| Solution | |