Affects Version/s: None
Component/s: BlobManager, S3
LRUFileCache (used for instance by the S3 blob store) deletes files from the filesystem once they are no longer referenced by the Java server. However, because the Binary objects held in the various VCS caches keep references to the files, this can take a very long time; if there is no memory pressure in the server, a file may never be deleted at all.
-> Fix the implementation to avoid this.
Refactor the LRUFileCache to rely only on filesystem information, instead of depending on references becoming unreachable. When a new file is written to the cache, old files should be deleted until the total size fits under the configured maximum. To determine file ages and sizes, the cache directory will be scanned (this will happen on each new write).
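A minimal sketch of this filesystem-based eviction, with hypothetical names (this is not the actual LRUFileCache API): on each write the directory is scanned, files are ordered oldest-first by modification time, and files past a safety margin are deleted until the size constraint holds.

```java
import java.io.File;
import java.io.IOException;
import java.nio.file.Files;
import java.util.Arrays;
import java.util.Comparator;

// Sketch only: names and the safety-margin value are assumptions,
// not the real Nuxeo implementation.
public class FsLruEvictionSketch {

    // Assumed grace period so files still in use are never deleted.
    static final long SAFETY_MARGIN_MS = 60_000;

    /** Delete oldest files until the directory's total size is <= maxSize. */
    static void evict(File dir, long maxSize) {
        File[] files = dir.listFiles(File::isFile);
        if (files == null) {
            return;
        }
        // Oldest first, based purely on filesystem metadata.
        Arrays.sort(files, Comparator.comparingLong(File::lastModified));
        long total = Arrays.stream(files).mapToLong(File::length).sum();
        long now = System.currentTimeMillis();
        for (File f : files) {
            if (total <= maxSize) {
                break;
            }
            // Skip very recent files: they may still be in use by a caller.
            if (now - f.lastModified() < SAFETY_MARGIN_MS) {
                continue;
            }
            long len = f.length();
            if (f.delete()) {
                total -= len;
            }
        }
    }

    public static void main(String[] args) throws IOException {
        File dir = Files.createTempDirectory("lru-sketch").toFile();
        for (int i = 0; i < 5; i++) {
            File f = new File(dir, "blob" + i);
            Files.write(f.toPath(), new byte[100]);
            // Backdate past the safety margin; lower index = older file.
            f.setLastModified(System.currentTimeMillis() - SAFETY_MARGIN_MS - (5 - i) * 10_000L);
        }
        evict(dir, 250); // 5 x 100 bytes written, so the 3 oldest are evicted
        System.out.println(dir.listFiles(File::isFile).length); // prints 2
    }
}
```

Because eviction only reads filesystem metadata, any process that can see the directory can apply the same logic, which is what enables the manual-deletion and shared-directory consequences described below.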
As a consequence of this design, it will now be possible to:
- delete files manually from the cache at any time (provided they are older than a few seconds/minutes, so there is no risk of them being in use at that moment),
- share the cache directory between several Nuxeo instances (although this isn't really possible at the moment, because the cache directory gets a random name at startup, see