- Type: Improvement
- Status: Resolved
- Priority: Minor
- Resolution: Not A Bug
- Affects Version/s: None
- Fix Version/s: None
- Epic Link:
- Tags:
- Team: UI
- Sprint: UI - 2020-06 3
Given the intrinsic limitations of AWS S3, such as:
- Maximum number of parts per upload: 10,000
- Part size: 5 MB to 5 GB
we need a proper heuristic to determine the chunk size to use for an upload, based on the file size.
Similar work has been done in the Drive client (see https://github.com/nuxeo/nuxeo-python-client/blob/f27d798e0d6df951809309ad26fd751e55bfb967/nuxeo/utils.py#L38-L72), so we should try to take a unified approach to this issue; a rough sketch of such a heuristic is given below.
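
A minimal sketch of what the heuristic could look like, assuming only the two S3 limits listed above (the constants and the `chunk_size_for` name are illustrative, not taken from the Drive client code):

```python
from math import ceil

# Constants mirroring the S3 limits quoted above (names are illustrative).
MAX_PARTS = 10_000
MIN_PART_SIZE = 5 * 1024 * 1024          # 5 MB
MAX_PART_SIZE = 5 * 1024 * 1024 * 1024   # 5 GB


def chunk_size_for(file_size: int) -> int:
    """Return a part size in bytes that keeps the upload within the S3 limits."""
    # Smallest part size that still fits the whole file into MAX_PARTS parts,
    # never going below the 5 MB minimum.
    size = max(MIN_PART_SIZE, ceil(file_size / MAX_PARTS))
    if size > MAX_PART_SIZE:
        raise ValueError("File is too large for a single S3 multipart upload")
    return size
```

With this scheme, files up to roughly 50 GB keep the 5 MB minimum part size, and larger files get proportionally bigger parts (for example, a 100 GiB file would be split into parts of about 10 MB) so the part count never exceeds 10,000.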