- Type: Bug
- Status: Resolved
- Priority: Major
- Resolution: Fixed
- Affects Version/s: 10.10, 2021.x, 11.x
- Fix Version/s: 10.10-HF47, 11.x, 2021.3
- Component/s: S3
- Release Notes Summary: The S3 Content-Type header is split into MIME type and encoding, and then saved to the blob properties.
- Tags:
- Backlog priority: 900
- Sprint: nxplatform #34
- Story Points: 3
In some cases the AWS client determines the MIME type of an uploaded file and appends "; charset=UTF-8" to the Content-Type header.
While there are a few reasons for this (https://github.com/aws/aws-cli/pull/2426, https://github.com/aws/aws-sdk-js/issues/2510), most Nuxeo code assumes that no charset parameter is appended.
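For illustration, a hedged sketch of what downstream code then sees; the values and the isPdf check are hypothetical, not taken from Nuxeo code:
// ObjectMetadata returned by amazonS3.getObjectMetadata(bucket, fileKey)
String contentType = metadata.getContentType();        // "application/pdf; charset=UTF-8"
// Any comparison against a bare MIME type then fails:
boolean isPdf = "application/pdf".equals(contentType); // false, even though the blob is a PDF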
To correct this, find this section in S3DirectBatchHandler:
ObjectMetadata metadata = amazonS3.getObjectMetadata(bucket, fileKey);
...
BlobInfo blobInfo = new BlobInfo();
blobInfo.mimeType = metadata.getContentType();
blobInfo.encoding = metadata.getContentEncoding();
blobInfo.filename = fileInfo.getFilename();
blobInfo.length = metadata.getContentLength();
blobInfo.key = key;
Scan blobInfo.mimeType for a ';' and, if one is present, truncate the value at it (a null check guards against objects that have no Content-Type at all):
if (blobInfo.mimeType != null && blobInfo.mimeType.indexOf(';') > 0) {
    blobInfo.mimeType = blobInfo.mimeType.substring(0, blobInfo.mimeType.indexOf(';'));
}
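The release notes summary above goes further: the Content-Type header is split into MIME type and encoding, and both are saved to the blob properties. A minimal sketch of that approach, assuming the charset arrives as a "charset=" parameter; the parsing below is illustrative, not the shipped implementation:
String contentType = metadata.getContentType();
String encoding = metadata.getContentEncoding();
if (contentType != null) {
    int semicolon = contentType.indexOf(';');
    if (semicolon > 0) {
        // e.g. "application/pdf; charset=UTF-8" -> "application/pdf" + "charset=UTF-8"
        String parameter = contentType.substring(semicolon + 1).trim();
        contentType = contentType.substring(0, semicolon).trim();
        if (encoding == null && parameter.regionMatches(true, 0, "charset=", 0, 8)) {
            encoding = parameter.substring("charset=".length());
        }
    }
}
blobInfo.mimeType = contentType;
blobInfo.encoding = encoding;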
- is related to: NXP-30122 Fix thumbnails computation when importing a large PDF with S3 BlobProvider (Resolved)