- Type: Improvement
- Status: Resolved
- Priority: Major
- Resolution: Fixed
- Affects Version/s: 2.1.0
- Fix Version/s: 2.1.1
- Component/s: None
- Epic Link:
- Tags:
- Sprint: nxDrive 11.1.11
- Story Points: 1
Issue
Several memory issues were spotted in Drive:
- https://sentry.io/organizations/nuxeo/issues/1048770341/
- https://sentry.io/organizations/nuxeo/issues/1048776862/
- https://sentry.io/organizations/nuxeo/issues/1048777268/
- https://sentry.io/organizations/nuxeo/issues/1048204045/
- https://sentry.io/organizations/nuxeo/issues/1048755442/
Cause
One cause is that every request made to the server will decode the whole response body (guessing the encoding when needed) as soon as its .text attribute is accessed. This answer from Stack Overflow is interesting: https://stackoverflow.com/a/24656254/1117028
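To illustrate the difference, here is a minimal sketch (the endpoint URL is purely hypothetical): .content returns the raw bytes, while .text triggers encoding handling and a full decode.

```python
import requests

# Hypothetical endpoint, for illustration only.
resp = requests.get("https://example.com/big-payload")

# Raw bytes: no decoding cost.
raw = resp.content

# Decoded text: requests picks an encoding (falling back to charset
# detection when the server did not declare one) and decodes the whole
# body into a new str object, which can be expensive on large responses.
decoded = resp.text
```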
In the client, there are two occurrences of that .text attribute access: one to retrieve the token (it is fine as is) and one to log the server's response (https://github.com/nuxeo/nuxeo-python-client/blob/b2e7ec08bdb8d7c1958d41ed0a40fd1c523a9b89/nuxeo/client.py#L205).
The second is problematic because the decoded text is only useful when the logging level is set to DEBUG. So for each and every HTTP call, we are filling memory with unnecessary data (the DEBUG mode is useful for testing only).
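The problematic pattern boils down to something like this (simplified, not the exact client code): the argument is evaluated, and thus the body decoded, even when the DEBUG record is ultimately discarded.

```python
import logging

logger = logging.getLogger("nuxeo")

def handle(response):
    # response.text is computed here no matter what the logging level is,
    # because arguments are evaluated before logger.debug() decides
    # whether to drop the record.
    logger.debug("Response from the server: %r", response.text)
```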
Fix
The fix is quite simple: retrieve the .text data and log it only when the logger is actually enabled for that level.
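A minimal sketch of the idea (the helper name and message are illustrative, not the actual client code):

```python
import logging

logger = logging.getLogger("nuxeo")

def handle(response):
    # Only pay the cost of decoding the body when the record will
    # actually be emitted.
    if logger.isEnabledFor(logging.DEBUG):
        logger.debug("Response from the server: %r", response.text)
```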
Going Further
Out of curiosity, we wanted to know whether calling the .json() method would access the .text attribute behind our backs. Looking at the code of requests, that attribute is effectively used only as a fallback, when the encoding of the response could not be guessed by the provided algorithm (https://github.com/kennethreitz/requests/blob/bedd9284c9646e50c10b3defdf519d4ba479e2c7/requests/models.py#L897).
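For reference, here is a simplified paraphrase of that logic, assuming a standard requests.Response object (this is not the verbatim requests source):

```python
import json
from requests.utils import guess_json_utf

def response_json(response, **kwargs):
    # Simplified paraphrase of requests' Response.json() behaviour.
    if not response.encoding and response.content and len(response.content) > 3:
        # Cheap guess based on the leading bytes (UTF flavour / BOM).
        encoding = guess_json_utf(response.content)
        if encoding is not None:
            try:
                return json.loads(response.content.decode(encoding), **kwargs)
            except UnicodeDecodeError:
                pass
    # Fallback path: go through .text, which may trigger the costly
    # charset detection when no encoding was declared by the server.
    return json.loads(response.text, **kwargs)
```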
Now that we are aware of it, I do not think there is much we can do about it, but it is worth keeping in mind.