Nuxeo Drive / NXDRIVE-2525

Do not try to list parts when an upload was already completed on S3


    Details

    • Type: Bug
    • Status: Resolved
    • Priority: Minor
    • Resolution: Fixed
    • Affects Version/s: 4.4.2
    • Fix Version/s: 5.0.0
    • Component/s: Remote client

      Description

      Problem

      • Upload a file using S3 direct upload.
      • For whatever reason, the upload fails after the file has been fully uploaded to S3, likely at the Nuxeo completion step.
      • Drive retries the upload and tries to retrieve the already-uploaded chunks using the ListParts API.
      • The retried upload ends in error.

      Here are the relevant logs:

      2021-02-17 10:10:46 6864 123145442471936 DEBUG    nxdrive.engine.processor Entered _synchronize_locally_created, parent_pair=<DocPair[2] local_path=PosixPath('devops'), local_parent_path=PosixPath('.'), remote_ref='defaultSyncRootFolderItemFactory#default#1900e411-9539-4aa6-96b3-20cd27eef8a5', local_state='synchronized', remote_state='synchronized', pair_state='synchronized', last_error=None>
      2021-02-17 10:10:46 6864 123145442471936 INFO     nxdrive.engine.processor Creating remote document '70gb.bin' in folder 'devops'
      2021-02-17 10:10:46 6864 123145442471936 DEBUG    nxdrive.client.uploader Retrieved ongoing transfer Upload(uid=1, path=PosixPath('/Users/aba/ws/nuxeo/drive/devops/70gb.bin'), name='70gb.bin', status=<TransferStatus.ONGOING: 1>, engine='aae9fe64705511ebafcbacde48001122', is_direct_edit=0, is_direct_transfer=False, progress=100.0, doc_pair=3, filesize=73400320000, transfer_type='upload', batch={'batchId': 'BATCHID', 'extraInfo': {'bucket': 'l2it-test-gl-ts-aspera', 'baseKey': '', 'endpoint': None, 'expiration': 1613512329000, 'usePathStyleAccess': False, 'region': 'us-east-1', 'useS3Accelerate': False, 'awsSecretKeyId': '...', 'awsSessionToken': '...', 'awsSecretAccessKey': '...', 'batchId': 'batchId-3e326409-c9b5-4d8c-8bae-9856d03ceb48'}, 'etag': '"4eb8a8b6b586b7e50ee654ca724f61cb-3500"', 'key': 'e948ce14-3680-40bb-ba23-bde75c653f26', 'multiPartUploadId': '...', 'provider': 's3', 'upload_idx': 1}, chunk_size=20971520, remote_parent_path='', remote_parent_ref='', batch_obj=None)
      2021-02-17 10:10:46 6864 123145442471936 DEBUG    nuxeo.client       Calling GET 'HOST/api/v1/upload/BATCHID' with headers={...}, params=None, kwargs={...} and cookies=<...>
      2021-02-17 10:10:46 6864 123145442471936 DEBUG    nuxeo.client       Response from 'HOST/api/v1/upload/BATCHID' [204]: '<no content>' with headers {...} and cookies <...>
      2021-02-17 10:10:46 6864 123145442471936 DEBUG    nxdrive.client.uploader Associated batch found, resuming the upload
      2021-02-17 10:10:46 6864 123145442471936 DEBUG    nuxeo.client       Calling POST 'HOST/api/v1/upload/BATCHID/refreshToken' with headers={...}, params=None, kwargs={...} and cookies=<...>
      2021-02-17 10:10:46 6864 123145442471936 DEBUG    nuxeo.client       Response from 'HOST/api/v1/upload/BATCHID/refreshToken' [200]: '{"expiration":1613556646000,"batchId":"BATCHID","awsSecretKeyId":"...","awsSessionToken":"...","awsSecretAccessKey":"..."}' with headers {...} and cookies <...>
      

      And the error:

      2021-02-17 10:10:47 6864 123145442471936 ERROR    nxdrive.engine.processor Unknown error
      Traceback (most recent call last):
        File "nxdrive/engine/processor.py", line 317, in _execute
        File "nxdrive/engine/processor.py", line 240, in _handle_doc_pair_sync
        File "nxdrive/engine/processor.py", line 949, in _synchronize_locally_created
        File "nxdrive/client/remote_client.py", line 582, in stream_file
        File "nxdrive/client/remote_client.py", line 446, in upload
        File "nxdrive/client/uploader/sync.py", line 31, in upload
        File "nxdrive/client/uploader/__init__.py", line 197, in upload_impl
        File "nxdrive/client/uploader/__init__.py", line 253, in upload_chunks
        File "nuxeo/models.py", line 182, in get_uploader
        File "nuxeo/uploads.py", line 349, in get_uploader
        File "nuxeo/handlers/s3.py", line 136, in __init__
        File "nuxeo/handlers/s3.py", line 223, in state
        File "nuxeo/handlers/s3.py", line 177, in _state
        File "botocore/client.py", line 357, in _api_call
        File "botocore/client.py", line 676, in _make_api_call
      botocore.exceptions.ClientError: An error occurred (AccessDenied) when calling the ListParts operation: Access Denied
      

      Analysis

      As the upload was successfully completed on S3, there are no more "parts" (chunks) in the transient bucket: everything has been purged. So when Drive tries to retrieve the parts list via ListParts, the request fails with AccessDenied.
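      A small illustrative sketch of how this failure could be told apart from a genuine permission problem. The helper and the error codes it checks are assumptions, not the actual nxdrive code; the dict shape mirrors botocore's `ClientError.response` attribute.

      ```python
      # Hypothetical helper: decide whether a failed ListParts call means the
      # multipart upload no longer exists (already completed and purged), as
      # opposed to an unrelated error. The dict layout follows the shape of
      # botocore's ClientError.response.

      def parts_listing_gone(error_response: dict) -> bool:
          code = error_response.get("Error", {}).get("Code", "")
          # After completion, S3 returns AccessDenied here (the transient
          # credentials no longer grant access to the purged parts);
          # NoSuchUpload can show up in similar "already gone" situations.
          return code in ("AccessDenied", "NoSuchUpload")
      ```

      With the error from the traceback above (`AccessDenied` on the ListParts operation), this check would return True.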

      Solution

      The logs show that the multipart upload was already completed: the stored batch carries both an etag and a multiPartUploadId. In that case, we should skip the S3 steps entirely and perform the Nuxeo completion directly.
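      A minimal sketch of the proposed check, assuming the batch details are stored in the same shape as the `batch` dict visible in the Upload record above. The function names are hypothetical and only illustrate the decision, not the actual nxdrive implementation.

      ```python
      # Hypothetical sketch of the proposed fix: if the stored batch already
      # carries both an "etag" and a "multiPartUploadId", the S3 multipart
      # upload was completed earlier, so resuming must not call ListParts.

      def s3_upload_already_completed(batch: dict) -> bool:
          """Return True when the S3 side of the upload is already done."""
          return bool(batch.get("etag")) and bool(batch.get("multiPartUploadId"))

      def next_resume_step(batch: dict) -> str:
          """Decide the next step when resuming a transfer (illustrative only)."""
          if batch.get("provider") == "s3" and s3_upload_already_completed(batch):
              # Skip ListParts and chunk re-upload entirely; only the Nuxeo
              # completion step (attaching the blob to the document) remains.
              return "nuxeo-completion"
          return "resume-s3-chunks"
      ```

      With the batch from the logs above (both an etag and a multiPartUploadId present), `next_resume_step` would go straight to the Nuxeo completion step instead of calling ListParts.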


      Time Tracking

      • Original Estimate: Not Specified
      • Remaining Estimate: 0m
      • Time Spent: 3h
