- Type: Task
- Status: Resolved
- Priority: Minor
- Resolution: Fixed
- Affects Version/s: None
- Fix Version/s: 2023.8
- Component/s: Streams
- Release Notes Summary: Large works are now serialized by default with warnings at serialization.
- Tags:
- Team: PLATFORM
- Sprint: nxplatform #105, nxplatform #106
- Story Points: 3
When processing a large Work, we can hit the default Kafka message size limit (1MB).
This happens for example when importing a Nuxeo distribution on a Nuxeo server running the Platform Explorer package.
Yet, we don't want to increase the Kafka message size limit.
As explained in NXP-26691, Nuxeo has an overflow mechanism that processes big messages without changing the default Kafka message size limit: the message payload is stored in a transient store and only its key is transmitted on the stream, so the payload can be retrieved later.
However, this mechanism is not enabled by default. To turn it on, we need to set nuxeo.stream.work.computation.filter.enabled to true.
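The overflow mechanism described above can be sketched as a small filter: payloads over the size threshold are written to a transient store and replaced on the stream by a short key, which the consumer resolves back to the full payload. This is a minimal illustration only; the class, method names, and key prefix are hypothetical, not the actual Nuxeo StreamWorkManager API.

```java
import java.nio.charset.StandardCharsets;
import java.util.HashMap;
import java.util.Map;
import java.util.UUID;

// Hypothetical sketch of the overflow pattern from NXP-26691.
// Names ("OverflowFilter", "bigrecord:") are illustrative assumptions.
public class OverflowFilter {
    // Roughly the default Kafka message size limit (~1 MB).
    static final int THRESHOLD = 1_000_000;

    // Simulated transient store; Nuxeo uses its TransientStore service.
    static final Map<String, byte[]> transientStore = new HashMap<>();

    /** Called before appending to the stream: large payloads are stored
     *  aside and replaced by a short key. */
    static byte[] beforeAppend(byte[] payload) {
        if (payload.length <= THRESHOLD) {
            return payload;
        }
        String key = "bigrecord:" + UUID.randomUUID();
        transientStore.put(key, payload);
        return key.getBytes(StandardCharsets.UTF_8);
    }

    /** Called after reading from the stream: keys are resolved back to
     *  the original payload from the transient store. */
    static byte[] afterRead(byte[] record) {
        String maybeKey = new String(record, StandardCharsets.UTF_8);
        if (maybeKey.startsWith("bigrecord:")) {
            byte[] payload = transientStore.remove(maybeKey);
            if (payload == null) {
                throw new IllegalStateException("missing payload for " + maybeKey);
            }
            return payload;
        }
        return record;
    }

    public static void main(String[] args) {
        byte[] big = new byte[2_000_000];
        byte[] record = beforeAppend(big);
        System.out.println("record size on stream: " + record.length);
        System.out.println("resolved payload size: " + afterRead(record).length);
    }
}
```

With the mechanism enabled, only the small key crosses Kafka, so the broker's message size limit never applies to the payload itself; in Nuxeo this is switched on via `nuxeo.stream.work.computation.filter.enabled=true` in `nuxeo.conf`.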
- is related to: NXP-26691 StreamWorkManager workaround for large work (Resolved)