- install Nuxeo
- install S3 addon
- create a File document and upload a file to it
- check the tmp folder
- observe an nxbincache folder is present
- wait for some time (even an hour)
- observe the file is still present
- stop Nuxeo
- observe the file has been purged from tmp folder
Expected behavior: in production, some environments cannot be restarted for the sole purpose of cleaning the tmp folder; the cleanup should happen on the fly.
Note: this is because the nxbincache file is created as a JVM temporary file, and is therefore deleted only when the JVM stops:
https://github.com/nuxeo/nuxeo/blob/master/nuxeo-core/nuxeo-core-api/src/main/java/org/nuxeo/ecm/core/blob/binary/CachingBinaryManager.java#L105
using:
https://github.com/nuxeo/nuxeo/blob/master/nuxeo-runtime/nuxeo-runtime/src/main/java/org/nuxeo/runtime/api/Framework.java#L612
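The shutdown-only deletion can be reproduced with the plain JDK mechanism underlying such temp-file helpers (a minimal sketch; the class name and file prefix are illustrative, and Nuxeo's own tracking may differ in detail, but the effect is the same):

```java
import java.io.File;
import java.io.IOException;

// Sketch of the JDK pattern: a temp file registered with deleteOnExit()
// is only removed when the JVM terminates, never while the server runs.
public class TempFileDemo {
    public static void main(String[] args) throws IOException {
        File f = File.createTempFile("nxbincache", ".tmp");
        f.deleteOnExit(); // deletion deferred to JVM shutdown
        System.out.println("exists while JVM runs: " + f.exists());
    }
}
```

As long as the JVM stays up, nothing ever revisits files registered this way, which matches the observed behavior: the cache survives indefinitely and disappears only on server stop.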
Also, the folder is declared as an LRU cache, but the getSize method defined here:
https://github.com/nuxeo/nuxeo/blob/master/nuxeo-common/src/main/java/org/nuxeo/common/file/LRUFileCache.java#L136
is never called except in testLRUFileCache:
https://github.com/nuxeo/nuxeo/blob/master/nuxeo-common/src/test/java/org/nuxeo/common/file/TestLRUFileCache.java#L58
So the maximum size is never evaluated, and no size-based eviction takes place while the server runs.
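For illustration, an on-the-fly eviction pass of the kind such a cache would need is sketched below. This is hypothetical code, not Nuxeo's implementation (the class and method names are invented): it sorts the cache files oldest-first by modification time and deletes them until the directory fits under the configured maximum size.

```java
import java.io.File;
import java.io.IOException;
import java.nio.file.Files;
import java.util.Arrays;
import java.util.Comparator;

// Hypothetical sketch: evict least-recently-used files from a cache
// directory until its total size drops to maxSize bytes or below.
public class LruEvictionSketch {

    static void evict(File dir, long maxSize) {
        File[] files = dir.listFiles(File::isFile);
        if (files == null) {
            return;
        }
        // oldest last-modified first = least recently used first
        Arrays.sort(files, Comparator.comparingLong(File::lastModified));
        long total = Arrays.stream(files).mapToLong(File::length).sum();
        for (File f : files) {
            if (total <= maxSize) {
                break;
            }
            long len = f.length();
            if (f.delete()) {
                total -= len;
            }
        }
    }

    public static void main(String[] args) throws IOException {
        File dir = Files.createTempDirectory("nxbincache-demo").toFile();
        for (int i = 0; i < 5; i++) { // 5 files of 100 bytes each
            File f = new File(dir, "blob" + i);
            Files.write(f.toPath(), new byte[100]);
            f.setLastModified(1_000_000L * (i + 1)); // distinct ages
        }
        evict(dir, 250); // cap the directory at 250 bytes
        System.out.println("files left: " + dir.listFiles().length);
    }
}
```

Running such a pass periodically (or after each cache write) would give the on-the-fly cleanup the expected behavior above asks for, instead of relying on JVM shutdown.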
- is related to: NXP-26382 "LRUFileCache (use for S3 cache) should be resilient to directory removal" (Resolved)