Details
- Type: Bug
- Status: Resolved
- Priority: Major
- Resolution: Fixed
- Affects Version: 1.10.0
- Labels: None
- Environment: linux, cluster with 5 servers over hdfs/parquet
Description
We have a Drill cluster with five servers over hdfs/parquet. Each machine has 8 cores, and all cores are at 100% utilization. Each thread is spinning in the while loop at line 314 of AsyncPageReader.java, inside the clear() method:
https://github.com/apache/drill/blob/1.10.0/exec/java-exec/src/main/java/org/apache/drill/exec/store/parquet/columnreaders/AsyncPageReader.java#L314
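The stack trace below shows the loop blocked-and-throwing in LinkedBlockingQueue.take(): if the fragment thread's interrupt flag is already set (e.g. after a query cancellation), take() throws InterruptedException immediately on every iteration, and a catch block that swallows the exception without leaving the loop spins at 100% CPU. A minimal sketch of a drain loop that avoids this, assuming a hypothetical page queue (class and field names are illustrative, not the actual Drill code):

```java
import java.util.concurrent.LinkedBlockingQueue;

public class DrainDemo {

    // Drain with poll() instead of take(): poll() never blocks and
    // never throws InterruptedException, so an already-interrupted
    // thread cannot spin here. It simply returns null when empty.
    static int drain(LinkedBlockingQueue<String> queue) {
        int drained = 0;
        while (queue.poll() != null) {
            drained++;
        }
        return drained;
    }

    public static void main(String[] args) {
        LinkedBlockingQueue<String> q = new LinkedBlockingQueue<>();
        q.add("page-1");
        q.add("page-2");

        // Simulate the state of the hung fragment threads: the
        // interrupt flag is set before clear() runs. take() would
        // throw immediately here; poll() completes normally.
        Thread.currentThread().interrupt();
        int n = drain(q);
        System.out.println("drained=" + n);
    }
}
```

The key difference from the reported code path is that a blocking take() inside the cleanup loop must either propagate the InterruptedException, restore the interrupt flag and break, or be replaced by a non-blocking poll(); silently catching and retrying is what produces the busy spin.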
jstack -l 19255|grep -A 50 $(printf "%x" 29250)
"271d6262-ff19-ad24-af36-777bfe6c6375:frag:1:4" daemon prio=10 tid=0x00007f5b2adec800 nid=0x7242 runnable [0x00007f5aa33e8000]
java.lang.Thread.State: RUNNABLE
at java.lang.Throwable.fillInStackTrace(Native Method)
at java.lang.Throwable.fillInStackTrace(Throwable.java:783)
- locked <0x00000007374bfcb0> (a java.lang.InterruptedException)
at java.lang.Throwable.<init>(Throwable.java:250)
at java.lang.Exception.<init>(Exception.java:54)
at java.lang.InterruptedException.<init>(InterruptedException.java:57)
at java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireInterruptibly(AbstractQueuedSynchronizer.java:1219)
at java.util.concurrent.locks.ReentrantLock.lockInterruptibly(ReentrantLock.java:340)
at java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:439)
at org.apache.drill.exec.store.parquet.columnreaders.AsyncPageReader.clear(AsyncPageReader.java:317)
at org.apache.drill.exec.store.parquet.columnreaders.ColumnReader.clear(ColumnReader.java:140)
at org.apache.drill.exec.store.parquet.columnreaders.ParquetRecordReader.close(ParquetRecordReader.java:632)
at org.apache.drill.exec.physical.impl.ScanBatch.next(ScanBatch.java:183)
at org.apache.drill.exec.record.AbstractRecordBatch.next(AbstractRecordBatch.java:119)
at org.apache.drill.exec.record.AbstractRecordBatch.next(AbstractRecordBatch.java:109)
at org.apache.drill.exec.record.AbstractSingleRecordBatch.innerNext(AbstractSingleRecordBatch.java:51)
at org.apache.drill.exec.physical.impl.project.ProjectRecordBatch.innerNext(ProjectRecordBatch.java:135)
at org.apache.drill.exec.record.AbstractRecordBatch.next(AbstractRecordBatch.java:162)
at org.apache.drill.exec.record.AbstractRecordBatch.next(AbstractRecordBatch.java:119)
at org.apache.drill.exec.record.AbstractRecordBatch.next(AbstractRecordBatch.java:109)
at org.apache.drill.exec.record.AbstractSingleRecordBatch.innerNext(AbstractSingleRecordBatch.java:51)
at org.apache.drill.exec.record.AbstractRecordBatch.next(AbstractRecordBatch.java:162)
at org.apache.drill.exec.record.AbstractRecordBatch.next(AbstractRecordBatch.java:119)
at org.apache.drill.exec.record.AbstractRecordBatch.next(AbstractRecordBatch.java:109)
at org.apache.drill.exec.record.AbstractSingleRecordBatch.innerNext(AbstractSingleRecordBatch.java:51)
at org.apache.drill.exec.physical.impl.svremover.RemovingRecordBatch.innerNext(RemovingRecordBatch.java:93)
at org.apache.drill.exec.record.AbstractRecordBatch.next(AbstractRecordBatch.java:162)
at org.apache.drill.exec.record.AbstractRecordBatch.next(AbstractRecordBatch.java:119)
at org.apache.drill.exec.record.AbstractRecordBatch.next(AbstractRecordBatch.java:109)
at org.apache.drill.exec.record.AbstractSingleRecordBatch.innerNext(AbstractSingleRecordBatch.java:51)
at org.apache.drill.exec.physical.impl.project.ProjectRecordBatch.innerNext(ProjectRecordBatch.java:135)
at org.apache.drill.exec.record.AbstractRecordBatch.next(AbstractRecordBatch.java:162)
at org.apache.drill.exec.record.AbstractRecordBatch.next(AbstractRecordBatch.java:119)
at org.apache.drill.exec.record.AbstractRecordBatch.next(AbstractRecordBatch.java:109)
at org.apache.drill.exec.record.AbstractSingleRecordBatch.innerNext(AbstractSingleRecordBatch.java:51)
at org.apache.drill.exec.physical.impl.limit.LimitRecordBatch.innerNext(LimitRecordBatch.java:115)
at org.apache.drill.exec.record.AbstractRecordBatch.next(AbstractRecordBatch.java:162)
at org.apache.drill.exec.record.AbstractRecordBatch.next(AbstractRecordBatch.java:119)
at org.apache.drill.exec.record.AbstractRecordBatch.next(AbstractRecordBatch.java:109)
at org.apache.drill.exec.record.AbstractSingleRecordBatch.innerNext(AbstractSingleRecordBatch.java:51)
at org.apache.drill.exec.physical.impl.svremover.RemovingRecordBatch.innerNext(RemovingRecordBatch.java:93)
at org.apache.drill.exec.record.AbstractRecordBatch.next(AbstractRecordBatch.java:162)
at org.apache.drill.exec.physical.impl.BaseRootExec.next(BaseRootExec.java:104)
at org.apache.drill.exec.physical.impl.SingleSenderCreator$SingleSenderRootExec.innerNext(SingleSenderCreator.java:92)
at org.apache.drill.exec.physical.impl.BaseRootExec.next(BaseRootExec.java:94)
at org.apache.drill.exec.work.fragment.FragmentExecutor$1.run(FragmentExecutor.java:232)
at org.apache.drill.exec.work.fragment.FragmentExecutor$1.run(FragmentExecutor.java:226)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
Attachments
Issue Links
- is duplicated by
  - DRILL-5609 Resources leak on parquet table when the query hangs with CANCELLATION_REQUESTED state (Closed)
- is related to
  - DRILL-5435 Using Limit causes Memory Leaked Error since 1.10 (Open)
  - DRILL-5569 NullPointerException in Async Parquet reader (Open)
  - DRILL-5160 Memory leak in Parquet async reader when Snappy fails (Resolved)