- Type: Bug
- Status: Resolved
- Priority: Minor
- Resolution: Fixed
- Affects Version/s: 2021.0
- Fix Version/s: 2021.15
- Release Notes Summary: CSV Export can now handle large metadata (over 1MB)
- Tags:
- Backlog priority: 900
- Sprint: nxplatform #53
- Story Points: 3
When running a csvExport with documents containing large metadata, the bucket of CSV lines cannot fit into a stream record, which is limited to 1MB in size, and the following exception can be observed:
2022-01-06T02:00:08,911 ERROR [bulk/csvExportPool-01,in:2,inCheckpoint:1,out:1,lastRead:1641434388890,lastTimer:0,wm:215146084804591617,loop:17924,record] [org.nuxeo.lib.stream.computation.log.ComputationRunner] org.nuxeo.lib.stream.StreamRuntimeException: Unable to send record: ProducerRecord...
	at org.nuxeo.lib.stream.log.kafka.KafkaLogAppender.append(KafkaLogAppender.java:151) ~[nuxeo-stream-2021.13.7.jar:?]
	at org.nuxeo.lib.stream.log.kafka.KafkaLogAppender.append(KafkaLogAppender.java:131) ~[nuxeo-stream-2021.13.7.jar:?]
	at org.nuxeo.lib.stream.computation.log.LogStreamManager.append(LogStreamManager.java:156) ~[nuxeo-stream-2021.13.7.jar:?]
	at org.nuxeo.lib.stream.computation.log.ComputationRunner.sendRecords(ComputationRunner.java:584) ~[nuxeo-stream-2021.13.7.jar:?]
	at org.nuxeo.lib.stream.computation.log.ComputationRunner.checkpoint(ComputationRunner.java:543) ~[nuxeo-stream-2021.13.7.jar:?]
	at org.nuxeo.lib.stream.computation.log.ComputationRunner.checkpointIfNecessary(ComputationRunner.java:532) ~[nuxeo-stream-2021.13.7.jar:?]
	at org.nuxeo.lib.stream.computation.log.ComputationRunner.processRecordWithTracing(ComputationRunner.java:424) ~[nuxeo-stream-2021.13.7.jar:?]
	at org.nuxeo.lib.stream.computation.log.ComputationRunner.processRecord(ComputationRunner.java:411) ~[nuxeo-stream-2021.13.7.jar:?]
	at org.nuxeo.lib.stream.computation.log.ComputationRunner.processLoop(ComputationRunner.java:272) ~[nuxeo-stream-2021.13.7.jar:?]
	at org.nuxeo.lib.stream.computation.log.ComputationRunner.run(ComputationRunner.java:206) [nuxeo-stream-2021.13.7.jar:?]
	at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515) [?:?]
	at java.util.concurrent.FutureTask.run(FutureTask.java:264) [?:?]
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128) [?:?]
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628) [?:?]
	at java.lang.Thread.run(Thread.java:829) [?:?]
Caused by: java.util.concurrent.ExecutionException: org.apache.kafka.common.errors.RecordTooLargeException: The message is 24718737 bytes when serialized which is larger than 1048576, which is the value of the max.request.size configuration.
	at org.apache.kafka.clients.producer.KafkaProducer$FutureFailure.<init>(KafkaProducer.java:1314) ~[kafka-clients-2.6.0.jar:?]
	at org.apache.kafka.clients.producer.KafkaProducer.doSend(KafkaProducer.java:970) ~[kafka-clients-2.6.0.jar:?]
	at org.apache.kafka.clients.producer.KafkaProducer.send(KafkaProducer.java:870) ~[kafka-clients-2.6.0.jar:?]
	at org.apache.kafka.clients.producer.KafkaProducer.send(KafkaProducer.java:758) ~[kafka-clients-2.6.0.jar:?]
	at org.nuxeo.lib.stream.log.kafka.KafkaLogAppender.append(KafkaLogAppender.java:143) ~[nuxeo-stream-2021.13.7.jar:?]
	... 14 more
Caused by: org.apache.kafka.common.errors.RecordTooLargeException: The message is 24718737 bytes when serialized which is larger than 1048576, which is the value of the max.request.size configuration.
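The failure mode suggests an obvious mitigation: split an oversized bucket of CSV lines into several records that each fit under the 1MB limit before appending them to the stream. The sketch below is illustrative only; CsvBucketSplitter and its 900KB safety threshold are hypothetical names chosen here and do not reflect the actual Nuxeo fix or the NXP-26691 code.
{code:java}
import java.nio.charset.StandardCharsets;
import java.util.ArrayList;
import java.util.List;

public class CsvBucketSplitter {

    // Hypothetical safety margin under Kafka's 1048576-byte default max.request.size,
    // leaving headroom for record framing overhead
    private static final int MAX_RECORD_BYTES = 900 * 1024;

    /** Splits a bucket of CSV lines into chunks that each fit in one stream record. */
    public static List<byte[]> split(List<String> csvLines) {
        List<byte[]> records = new ArrayList<>();
        StringBuilder chunk = new StringBuilder();
        int chunkBytes = 0;
        for (String line : csvLines) {
            int lineBytes = line.getBytes(StandardCharsets.UTF_8).length + 1; // +1 for '\n'
            // Flush the current chunk before it would exceed the record limit.
            // Note: a single line larger than the limit would still overflow
            // and would need its own handling.
            if (chunkBytes > 0 && chunkBytes + lineBytes > MAX_RECORD_BYTES) {
                records.add(chunk.toString().getBytes(StandardCharsets.UTF_8));
                chunk.setLength(0);
                chunkBytes = 0;
            }
            chunk.append(line).append('\n');
            chunkBytes += lineBytes;
        }
        if (chunkBytes > 0) {
            records.add(chunk.toString().getBytes(StandardCharsets.UTF_8));
        }
        return records;
    }
}
{code}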
Request that the same workaround that was applied for NXP-26691 be applied here.
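For reference, the 1048576-byte cap in the stack trace is the Kafka producer's max.request.size default; raising it (together with the broker's message.max.bytes, or the topic's max.message.bytes) is a possible stopgap, though it only moves the ceiling rather than removing it. A sketch using the plain Kafka client API, with an assumed broker address; this is not Nuxeo's configuration mechanism:
{code:java}
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;

public class LargeRecordProducerConfig {

    public static KafkaProducer<String, byte[]> createProducer() {
        Properties props = new Properties();
        // Assumed broker address, for illustration only
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG,
                "org.apache.kafka.common.serialization.StringSerializer");
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG,
                "org.apache.kafka.common.serialization.ByteArraySerializer");
        // Raise the 1048576-byte default reported in the stack trace;
        // the broker-side limit must be raised accordingly or sends
        // will still be rejected server-side.
        props.put(ProducerConfig.MAX_REQUEST_SIZE_CONFIG, 32 * 1024 * 1024);
        return new KafkaProducer<>(props);
    }
}
{code}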