Nuxeo Drive / NXDRIVE-2590

Restart upload on still outdated refreshed AWS credentials

    Details

    • Type: Bug
    • Status: Resolved
    • Priority: Minor
    • Resolution: Fixed
    • Affects Version/s: 4.4.4
    • Fix Version/s: 5.1.1
    • Component/s: Synchronizer
    • Release Notes Summary:
      Improved Uploads Robustness With S3 Direct Upload
    • Release Notes Description:

      We improved uploads robustness when going through S3 direct upload in several situations:

      • when a non-chunked upload fails because of expired credentials;
      • when refreshed credentials are still expired because of a misconfiguration of the ARN;
      • when the original {{batchId}} is removed from the transient store while the upload is still ongoing on the S3 side.

      In these three cases, the transfer will now be restarted from scratch, since there is no way to continue an upload under a different batchId.

      Similar tickets: NXDRIVE-2595 and NXDRIVE-2598.

    • Sprint:
      nxDrive 11.2.17
    • Story Points:
      1

      Description

      Problem

      When refreshing AWS credentials for an upload, the refreshed credentials may themselves still be expired. Here are the logs:

      2021-04-02 14:18:44 39986 123145459507200 INFO     nxdrive.client.uploader.direct_transfer Direct Transfer of PosixPath('/Users/tiger-222/Downloads/a/ubuntu-20.04.1-desktop-amd64.iso') into '/default-domain/UserWorkspaces/mschoentgen/idempotent-me' ('2fadb465-a7b6-4350-8242-cecf1edbf5e9')
      2021-04-02 14:18:44 39986 123145459507200 DEBUG    nxdrive.client.uploader Retrieved ongoing transfer Upload(uid=1, path=PosixPath('/Users/tiger-222/Downloads/a/ubuntu-20.04.1-desktop-amd64.iso'), name='ubuntu-20.04.1-desktop-amd64.iso', status=<TransferStatus.ONGOING: 1>, engine='729347ec832311eb8ca2c4b301d938e1', is_direct_edit=False, is_direct_transfer=True, progress=95.63253012048193, doc_pair=115, filesize=2785017856, transfer_type='upload', batch={'batchId': 'BATCHID', 'extraInfo': {'bucket': '...', 'baseKey': 'directupload/', 'endpoint': None, 'expiration': 1617356207000, 'usePathStyleAccess': False, 'region': 'eu-west-1', 'useS3Accelerate': True, 'awsSecretKeyId': '...', 'awsSessionToken': '...', 'awsSecretAccessKey': '.../fNr32ZOVwlvuF00OaskcPyyGt'}, 'key': '01ffeb7a-b796-4202-b045-41a5f4fd1503', 'multiPartUploadId': '...', 'provider': 's3', 'upload-idx': 0}, chunk_size=20971520, remote_parent_path='/default-domain/UserWorkspaces/mschoentgen/idempotent-me', remote_parent_ref='2fadb465-a7b6-4350-8242-cecf1edbf5e9', batch_obj=None, is_dirty=False)
      2021-04-02 14:18:44 39986 123145459507200 DEBUG    nuxeo.client       Calling GET 'https://nxdev-dev01.beta.nuxeocloud.com/nuxeo/api/v1/upload/BATCHID' with headers ...
      2021-04-02 14:18:47 39986 123145459507200 DEBUG    nuxeo.client       Response from 'https://nxdev-dev01.beta.nuxeocloud.com/nuxeo/api/v1/upload/BATCHID' [204]: '<no content>' with headers ...
      2021-04-02 14:18:47 39986 123145459507200 DEBUG    nxdrive.client.uploader Associated batch found, resuming the upload
      2021-04-02 14:18:47 39986 123145459507200 DEBUG    nuxeo.client       Calling POST 'https://nxdev-dev01.beta.nuxeocloud.com/nuxeo/api/v1/upload/BATCHID/refreshToken' with headers...
      2021-04-02 14:18:47 39986 123145459507200 ERROR    nxdrive.engine.processor Unknown error
      Traceback (most recent call last):
        File "engine/processor.py", line 337, in _execute
          self._handle_doc_pair_dt(doc_pair, sync_handler)
        File "engine/processor.py", line 256, in _handle_doc_pair_dt
          sync_handler(doc_pair)
        File "engine/processor.py", line 544, in _synchronize_direct_transfer
          self.remote.upload(
        File "client/remote_client.py", line 493, in upload
          return uploader(self).upload(path, **kwargs)
        File "client/uploader/direct_transfer.py", line 86, in upload
          item = super().upload_impl(
        File "client/uploader/__init__.py", line 205, in upload_impl
          self.upload_chunks(transfer, blob, chunked)
        File "client/uploader/__init__.py", line 269, in upload_chunks
          uploader: Uploader = transfer.batch_obj.get_uploader(blob, **kwargs)
        File "/.../nuxeo/models.py", line 177, in get_uploader
          return self.service.get_uploader(self, blob, **kwargs)
        File "/.../nuxeo/uploads.py", line 353, in get_uploader
          return cls(self, batch, blob, chunk_size, callback, **kwargs)
        File "/.../nuxeo/handlers/s3.py", line 122, in __init__
          self.chunk_count, self.blob.uploadedChunkIds = self.state()
        File "/.../nuxeo/handlers/s3.py", line 242, in state
          self.chunk_size, uploaded_chunks, self._data_packs = self._state()
        File "/.../nuxeo/handlers/s3.py", line 196, in _state
          info = self.s3_client.list_parts(
        File "/.../botocore/client.py", line 357, in _api_call
          return self._make_api_call(operation_name, kwargs)
        File "/.../botocore/client.py", line 662, in _make_api_call
          http, parsed_response = self._make_request(
        File "/.../botocore/client.py", line 682, in _make_request
          return self._endpoint.make_request(operation_model, request_dict)
        File "/.../botocore/endpoint.py", line 102, in make_request
          return self._send_request(request_dict, operation_model)
        File "/.../botocore/endpoint.py", line 132, in _send_request
          request = self.create_request(request_dict, operation_model)
        File "/.../botocore/endpoint.py", line 115, in create_request
          self._event_emitter.emit(event_name, request=request,
        File "/.../botocore/hooks.py", line 356, in emit
          return self._emitter.emit(aliased_event_name, **kwargs)
        File "/.../botocore/hooks.py", line 228, in emit
          return self._emit(event_name, kwargs)
        File "/.../botocore/hooks.py", line 211, in _emit
          response = handler(**kwargs)
        File "/.../botocore/signers.py", line 90, in handler
          return self.sign(operation_name, request)
        File "/.../botocore/signers.py", line 154, in sign
          auth = self.get_auth_instance(**kwargs)
        File "/.../botocore/signers.py", line 234, in get_auth_instance
          frozen_credentials = self._credentials.get_frozen_credentials()
        File "/.../botocore/credentials.py", line 612, in get_frozen_credentials
          self._refresh()
        File "/.../botocore/credentials.py", line 507, in _refresh
          self._protected_refresh(is_mandatory=is_mandatory_refresh)
        File "/.../botocore/credentials.py", line 550, in _protected_refresh
          raise RuntimeError(msg)
      RuntimeError: Credentials were refreshed, but the refreshed credentials are still expired.
      2021-04-02 14:18:47 39986 123145459507200 INFO     nxdrive.engine.workers Increasing error [SYNC_HANDLER__synchronize_direct_transfer] (Credentials were refreshed, but the refreshed credentials are still expired.) for <DocPair[115] local_path=PosixPath('Users/tiger-222/Downloads/a/ubuntu-20.04.1-desktop-amd64.iso'), local_parent_path=PosixPath('Users/tiger-222/Downloads/a'), remote_ref='', local_state='direct', remote_state='unknown', pair_state='direct_transfer', last_error=None>
      2021-04-02 14:18:47 39986 123145459507200 INFO     nxdrive.engine.queue_manager Temporary ignore pair for 60s: <DocPair[115] local_path=PosixPath('Users/tiger-222/Downloads/a/ubuntu-20.04.1-desktop-amd64.iso'), local_parent_path=PosixPath('Users/tiger-222/Downloads/a'), remote_ref='', local_state='direct', remote_state='unknown', pair_state='direct_transfer', last_error='SYNC_HANDLER__synchronize_direct_transfer'>
      

      The transfer is retried after 60 seconds and fails again; in this scenario, the upload can never complete.

      Fix

      On such an error, the upload should be restarted from scratch, with a new batchId.
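
      The restart logic could be sketched as follows. This is a minimal illustration only: the function and field names (`resume_or_restart`, `transfer["batch"]`, the `resume`/`restart` callables) are hypothetical placeholders, not the actual Nuxeo Drive API. It assumes the error is detected by matching the botocore `RuntimeError` message seen in the traceback above.

      ```python
      # Hedged sketch, not the real implementation: detect botocore's
      # "refreshed credentials are still expired" RuntimeError and restart
      # the transfer from scratch instead of retrying forever.

      STILL_EXPIRED = "refreshed credentials are still expired"

      def resume_or_restart(transfer, resume, restart):
          """Try to resume an S3 upload; on an unrecoverable credential
          error, drop the stale batch and restart with a new batchId."""
          try:
              return resume(transfer)
          except RuntimeError as exc:
              if STILL_EXPIRED not in str(exc):
                  raise  # unrelated error: let the usual retry logic handle it
              # The old batchId cannot be reused with fresh credentials,
              # so forget it and start a brand-new upload from zero.
              transfer["batch"] = None
              transfer["progress"] = 0.0
              return restart(transfer)
      ```

      The key design point, per the release notes above, is that an upload cannot continue under a different batchId, so the only safe recovery is a full restart.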

        Attachments

          Issue Links

            Activity

              People

              • Votes: 0
              • Watchers: 3

                Dates

                • Created:
                  Updated:
                  Resolved:

                  Time Tracking

                  • Estimated: Not Specified
                  • Remaining: 0m
                  • Logged: 2h
