Solution to the Occasional Coredump Issue When Spark Queries a Parquet Data Source Using libhdfs.so from Hadoop 3.2.0
Symptom
When Spark queries a Parquet data source with OmniOperator enabled and the job depends on libhdfs.so from Hadoop 3.2.0, an occasional coredump may occur. The error stack is as follows:
Stack: [0x00007fb9e8e5d000,0x00007fb9e8f5e000], sp=0x00007fb9e8f5cd40, free space=1023k
Native frames: (J=compiled Java code, j=interpreted, Vv=VM code, C=native code)
C  [libhdfs.so+0xcd39]  hdfsThreadDestructor+0xb9

------------------ PROCESS ------------------

VM state:not at safepoint (normal execution)

VM Mutex/Monitor currently owned by a thread: ([ mutex/lock event])
[0x00007fbbc00119b0] CodeCache_lock - owner thread: 0x00007fbbc00d9800
[0x00007fbbc0012ab0] AdapterHandlerLibrary_lock - owner thread: 0x00007fba04451800

heap address: 0x00000000c0000000, size: 1024 MB, Compressed Oops mode: 32-bit
Narrow klass base: 0x0000000000000000, Narrow klass shift: 3
Compressed class space size: 1073741824 Address: 0x0000000100000000
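For reference, the crash surfaces during ordinary Parquet scans that go through libhdfs.so. The following Scala sketch is a minimal, hypothetical example of such a query; the HDFS path, column name, and application name are placeholders, and the OmniOperator plugin is assumed to be enabled separately (for example, through spark-defaults.conf), so no plugin-specific settings are shown.

import org.apache.spark.sql.SparkSession

object ParquetScanExample {
  def main(args: Array[String]): Unit = {
    // Ordinary Spark session; OmniOperator is assumed to be enabled
    // elsewhere in the cluster configuration.
    val spark = SparkSession.builder()
      .appName("parquet-scan-example")
      .getOrCreate()

    // Hypothetical Parquet data set on HDFS, read through libhdfs.so
    // when a native reader is in use.
    val df = spark.read.parquet("hdfs:///tmp/sample_parquet")

    // A simple aggregation is enough to exercise the native read path
    // in which the occasional coredump was observed.
    df.groupBy("col1").count().show()

    spark.stop()
  }
}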
Parent topic: OmniOperator operator acceleration