- Type: Bug
- Status: Resolved
- Priority: Critical
- Resolution: Fixed
- Affects Version/s: 4.0.0
- Fix Version/s: 4.4.4
- Component/s: Synchronizer
- Release Notes Summary: Improved big file upload robustness
- Release Notes Description:
- Epic Link:
- Tags:
- Sprint: nxDrive 11.1.35
- Story Points: 1
Error
The scenario is quite complex, but when it happens (most likely with big files) it is hard to understand and debug.
I managed to reproduce the issue by uploading a 100 GB file to the intranet.
Step 1/3:
- start the upload
- it must fail at chunk N
- the upload is then blacklisted and will be retried later
Step 2/3:
- for whatever reason, its batch ID is no longer valid
- resume the upload
- a new batch ID is given
- upload all chunks successfully
- a server error occurs when linking the blob to the document
- the upload is then blacklisted and will be retried later
Step 3/3:
- resume the upload
- the batch ID used to resume the upload is the first batch ID, not the second one that was used at step 2
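The failure mode in step 3 can be sketched as a minimal Python model. All names here (`Upload`, `new_batch`, `resume`) are illustrative, not Drive's actual classes: the point is that the second batch ID is only used transiently and never persisted, so a later resume reads the stale one.

```python
class Upload:
    """Hypothetical model of the transfer state kept in the local database."""

    def __init__(self, batch_id):
        self.batch_id = batch_id  # persisted batch ID
        self.chunks_uploaded = False

    def new_batch(self, batch_id):
        # Step 2: the server hands out a fresh batch ID for the retry,
        # but the code uses it only transiently and never persists it.
        transient_batch_id = batch_id
        self.chunks_uploaded = True
        return transient_batch_id

def resume(upload):
    # Step 3: the resume path reads the *persisted* batch ID,
    # which is still the stale one from step 1.
    return upload.batch_id

upload = Upload("batch-1")   # step 1: initial attempt fails at chunk N
upload.new_batch("batch-2")  # step 2: chunks re-uploaded, linking fails
assert resume(upload) == "batch-1"  # bug: resume targets the first batch ID
```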
Fix
Step 3 should simply work:
- no chunks should be uploaded
- the linking should work
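A minimal sketch of the expected behavior, assuming the new batch ID is persisted as soon as the server issues it (again, the names are hypothetical, not Drive's actual API):

```python
class Upload:
    """Hypothetical model where the current batch ID is always persisted."""

    def __init__(self, batch_id):
        self.batch_id = batch_id
        self.chunks_uploaded = False

    def switch_batch(self, batch_id):
        # Fix: persist the new batch ID as soon as the server issues it,
        # so any later resume targets the batch that actually holds the chunks.
        self.batch_id = batch_id
        self.chunks_uploaded = True

def resume(upload):
    if upload.chunks_uploaded:
        # All chunks are already on the server: skip straight to linking.
        return ("link", upload.batch_id)
    return ("upload_chunks", upload.batch_id)

upload = Upload("batch-1")        # step 1
upload.switch_batch("batch-2")    # step 2: new batch ID is persisted
assert resume(upload) == ("link", "batch-2")  # step 3: no re-upload, just link
```

With the current batch ID stored alongside the transfer, the step 3 resume uploads no chunks and goes directly to the linking call.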
Is related to:
- NXDRIVE-2185 Be more specific on HTTP error handling when fetching the batch ID associated to an upload (Resolved)
- NXDRIVE-2595 Restart transfer from the ground when resuming with an invalid batch ID (Resolved)

Is referenced in