Known Issues
| Trouble Ticket No. | DTS2024060329127 |
|---|---|
| Severity | Minor |
| Symptom | When Spark OmniOperator executes an INSERT statement with only one data partition and a Sort Merge Join is performed across 50 consecutive tables, the SMJ operator calls `new` to allocate vector memory after the off-heap memory is exhausted. As a result, a core dump occurs. |
| Root Cause | |
| Impact | This problem occurs only in a rare scenario. Spark jobs are designed to leverage the concurrent processing capability of large clusters, so in normal scenarios a single task (single thread) does not execute JOINs across such a large number of tables. The problem has not been observed in actual service scenarios and has little impact on customers. |
| Workaround | |
| Solution | |
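For context on the symptom above: Spark's off-heap memory pool is governed by standard configuration keys, and the failure occurs once that pool is used up. The sketch below shows how those keys are typically set at submission time. It is illustrative only, not a documented workaround for this ticket; the size value and the job script name are assumptions.

```shell
# Illustrative sketch: standard Spark off-heap memory settings.
# The 8g size and the script name are assumptions, not values from this ticket.
spark-submit \
  --conf spark.memory.offHeap.enabled=true \
  --conf spark.memory.offHeap.size=8g \
  insert_with_many_joins.py
```

A larger `spark.memory.offHeap.size` delays exhaustion of the off-heap pool, but the defect described in the Symptom row can still be triggered once the pool runs out.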
Parent topic: V1.5.0