- Type: Improvement
- Status: Resolved
- Priority: Minor
- Resolution: Fixed
- Affects Version/s: 7.10
- Component/s: Cache
When using an S3 binary store, lock contention can be observed during mass import/indexing:
During fulltext extraction:

    at org.nuxeo.common.file.LRUFileCache.getFile(LRUFileCache.java:263)
    - waiting to lock <0x000000064b890f08> (a org.nuxeo.common.file.LRUFileCache)
    at org.nuxeo.ecm.core.blob.binary.CachingBinaryManager.getFile(CachingBinaryManager.java:162)
    at org.nuxeo.ecm.core.blob.binary.LazyBinary.getFile(LazyBinary.java:76)
    at org.nuxeo.ecm.core.blob.binary.LazyBinary.getStream(LazyBinary.java:68)
    at org.nuxeo.ecm.core.blob.binary.BinaryBlob.getStream(BinaryBlob.java:65)
    at org.nuxeo.ecm.core.api.impl.blob.AbstractBlob.getByteArray(AbstractBlob.java:110)
    at org.nuxeo.ecm.core.storage.FulltextExtractorWork.blobsToText(FulltextExtractorWork.java:181)
During indexing:

    - waiting to lock <0x000000064b890f08> (a org.nuxeo.common.file.LRUFileCache)
    at org.nuxeo.ecm.core.blob.binary.CachingBinaryManager.getLengthFromCache(CachingBinaryManager.java:198)
    at org.nuxeo.ecm.core.blob.binary.CachingBinaryManager.getLength(CachingBinaryManager.java:186)
    at org.nuxeo.ecm.core.blob.binary.LazyBinary.getLength(LazyBinary.java:92)
    at org.nuxeo.ecm.core.blob.binary.BinaryBlobProvider.readBlob(BinaryBlobProvider.java:91)
    at org.nuxeo.ecm.core.storage.sql.S3BinaryManager.readBlob(S3BinaryManager.java:581)
    at org.nuxeo.ecm.core.blob.BlobManagerComponent.readBlob(BlobManagerComponent.java:234)
    at org.nuxeo.ecm.core.storage.BaseDocument.getValueBlob(BaseDocument.java:460)
    at org.nuxeo.ecm.core.storage.BaseDocument.readComplexProperty(BaseDocument.java:646)
    at org.nuxeo.ecm.core.storage.BaseDocument.readComplexProperty(BaseDocument.java:664)
    at org.nuxeo.ecm.core.storage.sql.coremodel.SQLDocumentLive.readDocumentPart(SQLDocumentLive.java:172)
    at org.nuxeo.ecm.core.api.DocumentModelFactory.createDataModel(DocumentModelFactory.java:208)
    at org.nuxeo.ecm.core.api.AbstractSession.getDataModel(AbstractSession.java:1934)
    at org.nuxeo.ecm.core.api.impl.DocumentModelImpl$1.run(DocumentModelImpl.java:489)
    at org.nuxeo.ecm.core.api.impl.DocumentModelImpl$1.run(DocumentModelImpl.java:486)
    at org.nuxeo.ecm.core.api.impl.DocumentModelImpl$RunWithCoreSession.execute(DocumentModelImpl.java:400)
    at org.nuxeo.ecm.core.api.impl.DocumentModelImpl.loadDataModel(DocumentModelImpl.java:491)
    at org.nuxeo.ecm.core.api.impl.DocumentModelImpl.getDataModel(DocumentModelImpl.java:500)
    at org.nuxeo.ecm.core.api.impl.DocumentModelImpl.getPart(DocumentModelImpl.java:1331)
    at org.nuxeo.automation.jaxrs.io.documents.JsonDocumentWriter.writeProperties(JsonDocumentWriter.java:234)
The lock is owned by the writer:
    at sun.nio.fs.UnixNativeDispatcher.stat0(Native Method)
    at sun.nio.fs.UnixNativeDispatcher.stat(UnixNativeDispatcher.java:286)
    at sun.nio.fs.UnixFileAttributes.get(UnixFileAttributes.java:70)
    at sun.nio.fs.UnixFileAttributeViews$Basic.readAttributes(UnixFileAttributeViews.java:52)
    at sun.nio.fs.UnixFileSystemProvider.readAttributes(UnixFileSystemProvider.java:144)
    at sun.nio.fs.LinuxFileSystemProvider.readAttributes(LinuxFileSystemProvider.java:99)
    at java.nio.file.Files.readAttributes(Files.java:1737)
    at java.nio.file.Files.isRegularFile(Files.java:2229)
    at org.nuxeo.common.file.LRUFileCache$RegularFileFilter.accept(LRUFileCache.java:104)
    at org.nuxeo.common.file.LRUFileCache$RegularFileFilter.accept(LRUFileCache.java:98)
    at sun.nio.fs.UnixDirectoryStream$UnixDirectoryIterator.readNextEntry(UnixDirectoryStream.java:189)
    at sun.nio.fs.UnixDirectoryStream$UnixDirectoryIterator.hasNext(UnixDirectoryStream.java:201)
    - locked <0x00000007b913f158> (a sun.nio.fs.UnixDirectoryStream$UnixDirectoryIterator)
    at org.nuxeo.common.file.LRUFileCache.clearOldEntries(LRUFileCache.java:162)
    at org.nuxeo.common.file.LRUFileCache.putFile(LRUFileCache.java:247)
    - locked <0x000000064b890f08> (a org.nuxeo.common.file.LRUFileCache)
    at org.nuxeo.ecm.core.blob.binary.CachingBinaryManager.getFile(CachingBinaryManager.java:170)
This should be avoided: readers should take no lock at all, and only one writer at a time should perform the cleaning, ideally no more than one cleaning every 2 seconds. A sketch of such a scheme follows.
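For illustration only (the actual fix is tracked in NXP-18369), here is a minimal Java sketch of that scheme. The class name ThrottledFileCache, the maxAgeMs parameter, and the age-based eviction are hypothetical simplifications; the real LRUFileCache evicts by size. The point is the combination of a timestamp check and tryLock so readers never block and cleaning is throttled:

    import java.io.File;
    import java.io.IOException;
    import java.nio.file.Files;
    import java.nio.file.StandardCopyOption;
    import java.util.concurrent.atomic.AtomicLong;
    import java.util.concurrent.locks.ReentrantLock;

    /**
     * Hypothetical sketch of the locking scheme requested in this issue:
     * lock-free readers, and cleaning performed by at most one writer,
     * at most once every 2 seconds. Not the actual LRUFileCache code.
     */
    public class ThrottledFileCache {

        private static final long CLEAN_INTERVAL_MS = 2_000; // "every 2 seconds"

        private final File dir;
        private final long maxAgeMs; // illustrative eviction policy: by age

        // Timestamp of the last cleaning; read and updated without any lock.
        private final AtomicLong lastCleanMs = new AtomicLong();

        // Taken with tryLock only, so writers never queue up behind the cleaner.
        private final ReentrantLock cleanLock = new ReentrantLock();

        public ThrottledFileCache(File dir, long maxAgeMs) {
            this.dir = dir;
            this.maxAgeMs = maxAgeMs;
        }

        /** Readers: no lock at all, just a filesystem lookup plus an LRU touch. */
        public File getFile(String key) {
            File file = new File(dir, key);
            if (!file.isFile()) {
                return null;
            }
            file.setLastModified(System.currentTimeMillis()); // record recency
            return file;
        }

        /** Writers: store the file, then opportunistically trigger a cleaning. */
        public void putFile(String key, File source) throws IOException {
            Files.copy(source.toPath(), new File(dir, key).toPath(),
                    StandardCopyOption.REPLACE_EXISTING);
            maybeCleanOldEntries();
        }

        /** At most one cleaner at a time, at most one cleaning per interval. */
        private void maybeCleanOldEntries() {
            long now = System.currentTimeMillis();
            long last = lastCleanMs.get();
            if (now - last < CLEAN_INTERVAL_MS || !cleanLock.tryLock()) {
                return; // cleaned recently, or someone else is already cleaning
            }
            try {
                // Re-check under the lock, then publish the new timestamp.
                if (lastCleanMs.compareAndSet(last, now)) {
                    clearOldEntries(now);
                }
            } finally {
                cleanLock.unlock();
            }
        }

        /** The directory scan happens without any lock shared with readers. */
        private void clearOldEntries(long now) {
            File[] files = dir.listFiles();
            if (files == null) {
                return;
            }
            for (File f : files) {
                if (f.isFile() && now - f.lastModified() > maxAgeMs) {
                    f.delete(); // best-effort: a concurrent reader may re-fetch it
                }
            }
        }
    }

With this scheme, getFile never synchronizes on the cache, and a burst of concurrent putFile calls triggers at most one directory scan per 2-second window; every other writer returns immediately instead of queuing behind the cleaner.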
–
Some contention was removed when using the S3 binary store.
- depends on: NXP-18369 Fix LRUFileCache file cleaning (Resolved)