- Type: Bug
- Status: Resolved
- Priority: Critical
- Resolution: Fixed
- Affects Version/s: 10.10
- Fix Version/s: 10.10-HF33, 11.3, 2021.0
- Component/s: BlobManager
- Tags:
- Backlog priority: 900
- Sprint: nxplatform #17
- Story Points: 5
When performing a mass import using the binary manager from the google-storage addon, the following stack trace can occur:
2020-08-31T20:45:14,321 ERROR [http-nio-127.0.0.1-8080-exec-91] [org.nuxeo.ecm.webengine.app.WebEngineExceptionMapper] com.google.cloud.storage.StorageException: The rate of change requests to the object gfusw-qecm400-live/transient_BatchManagerCache/bdea87eb0a53c7d416645ae9a54e615b exceeds the rate limit. Please reduce the rate of create, update, and delete requests.
com.google.cloud.storage.StorageException: The rate of change requests to the object gfusw-qecm400-live/transient_BatchManagerCache/bdea87eb0a53c7d416645ae9a54e615b exceeds the rate limit. Please reduce the rate of create, update, and delete requests.
    at com.google.cloud.storage.spi.v1.HttpStorageRpc.translate(HttpStorageRpc.java:227) ~[google-cloud-storage-1.76.0.jar:1.76.0]
    at com.google.cloud.storage.spi.v1.HttpStorageRpc.create(HttpStorageRpc.java:308) ~[google-cloud-storage-1.76.0.jar:1.76.0]
    at com.google.cloud.storage.StorageImpl.create(StorageImpl.java:178) ~[google-cloud-storage-1.76.0.jar:1.76.0]
    at com.google.cloud.storage.Bucket.create(Bucket.java:975) ~[google-cloud-storage-1.76.0.jar:1.76.0]
    at org.nuxeo.ecm.core.storage.gcp.GoogleStorageBinaryManager$GCPFileStorage.storeFile(GoogleStorageBinaryManager.java:185) ~[nuxeo-core-binarymanager-gcp-10.10-HF27.jar:?]
    at org.nuxeo.ecm.core.blob.binary.CachingBinaryManager.getBinary(CachingBinaryManager.java:155) ~[nuxeo-core-api-10.10-HF29.jar:?]
    at org.nuxeo.ecm.core.blob.binary.AbstractBinaryManager.getBinary(AbstractBinaryManager.java:130) ~[nuxeo-core-api-10.10-HF29.jar:?]
    at org.nuxeo.ecm.core.blob.binary.BinaryBlobProvider.writeBlob(BinaryBlobProvider.java:133) ~[nuxeo-core-api-10.10-HF29.jar:?]
    at org.nuxeo.ecm.blob.AbstractCloudBinaryManager.writeBlob(AbstractCloudBinaryManager.java:147) ~[nuxeo-core-binarymanager-common-10.10-HF21.jar:?]
    at org.nuxeo.ecm.core.transientstore.keyvalueblob.KeyValueBlobTransientStore.putBlobs(KeyValueBlobTransientStore.java:491) ~[nuxeo-core-cache-10.10-HF21.jar:?]
    at org.nuxeo.ecm.automation.server.jaxrs.batch.Batch.addFile(Batch.java:185) ~[nuxeo-automation-server-10.10-HF29.jar:?]
    at org.nuxeo.ecm.restapi.server.jaxrs.BatchUploadObject.addBlob(BatchUploadObject.java:332) ~[nuxeo-rest-api-server-10.10-HF30.jar:?]
    at org.nuxeo.ecm.restapi.server.jaxrs.BatchUploadObject.uploadNoTransaction(BatchUploadObject.java:279) ~[nuxeo-rest-api-server-10.10-HF30.jar:?]
    at org.nuxeo.ecm.restapi.server.jaxrs.BatchUploadObject.upload(BatchUploadObject.java:187) ~[nuxeo-rest-api-server-10.10-HF30.jar:?]
    ...
Caused by: com.google.api.client.googleapis.json.GoogleJsonResponseException: 429 Too Many Requests
{
  "code" : 429,
  "errors" : [ {
    "domain" : "usageLimits",
    "message" : "The rate of change requests to the object gfusw-qecm400-live/transient_BatchManagerCache/bdea87eb0a53c7d416645ae9a54e615b exceeds the rate limit. Please reduce the rate of create, update, and delete requests.",
    "reason" : "rateLimitExceeded"
  } ],
  "message" : "The rate of change requests to the object gfusw-qecm400-live/transient_BatchManagerCache/bdea87eb0a53c7d416645ae9a54e615b exceeds the rate limit. Please reduce the rate of create, update, and delete requests."
}
    at com.google.api.client.googleapis.json.GoogleJsonResponseException.from(GoogleJsonResponseException.java:150) ~[google-api-client-1.25.0.jar:1.25.0]
    at com.google.api.client.googleapis.services.json.AbstractGoogleJsonClientRequest.newExceptionOnError(AbstractGoogleJsonClientRequest.java:113) ~[google-api-client-1.25.0.jar:1.25.0]
    at com.google.api.client.googleapis.services.json.AbstractGoogleJsonClientRequest.newExceptionOnError(AbstractGoogleJsonClientRequest.java:40) ~[google-api-client-1.25.0.jar:1.25.0]
    at com.google.api.client.googleapis.services.AbstractGoogleClientRequest.executeUnparsed(AbstractGoogleClientRequest.java:432) ~[google-api-client-1.25.0.jar:1.25.0]
    at com.google.api.client.googleapis.services.AbstractGoogleClientRequest.executeUnparsed(AbstractGoogleClientRequest.java:352) ~[google-api-client-1.25.0.jar:1.25.0]
    at com.google.api.client.googleapis.services.AbstractGoogleClientRequest.execute(AbstractGoogleClientRequest.java:469) ~[google-api-client-1.25.0.jar:1.25.0]
    at com.google.cloud.storage.spi.v1.HttpStorageRpc.create(HttpStorageRpc.java:305) ~[google-cloud-storage-1.76.0.jar:1.76.0]
    ... 135 more
The code of GoogleStorageBinaryManager.GCPFileStorage.storeFile is naïve in that, contrary to what we do for S3 or Azure, it doesn't check whether the hash is already stored in GCP before trying to upload it.
If one tries to upload the same file twice within a short time, this issue occurs, because https://cloud.google.com/storage/docs/key-terms#immutability clearly states:
However, a single particular object can only be updated or overwritten up to once per second. For example, if you have an object bar in bucket foo, then you should only upload a new copy of foo/bar about once per second. Updating the same object more frequently than once per second may result in 429 Too Many Requests errors.
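A minimal sketch of the "check before store" pattern described above, in plain Java. DedupStore and its in-memory map are hypothetical stand-ins for the GCS bucket, used only to keep the example self-contained; the real fix in GCPFileStorage.storeFile would instead probe the bucket for the digest (e.g. via a Bucket.get lookup) before calling Bucket.create, so a duplicate upload never reaches the rate-limited object:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical stand-in for a content-addressed blob store keyed by digest.
// Illustrates only the de-duplication check, not the actual Nuxeo patch.
public class DedupStore {
    private final Map<String, byte[]> bucket = new ConcurrentHashMap<>();
    private int writeCount = 0;

    // Store content under its digest only if it is not already present,
    // so the same immutable object is never overwritten repeatedly.
    public boolean storeFile(String digest, byte[] content) {
        if (bucket.containsKey(digest)) {
            return false; // already stored: skip the upload entirely
        }
        bucket.put(digest, content);
        writeCount++;
        return true;
    }

    public int getWriteCount() {
        return writeCount;
    }

    public static void main(String[] args) {
        DedupStore store = new DedupStore();
        byte[] data = "same file".getBytes();
        store.storeFile("bdea87eb", data);
        store.storeFile("bdea87eb", data); // duplicate: no second write
        System.out.println(store.getWriteCount()); // prints 1
    }
}
```

Because blobs are addressed by their hash, two uploads of identical content always target the same key, so the existence check both avoids the 429 rate-limit error and saves the redundant transfer.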