Nuxeo Platform / NXP-20481

Fix unnecessary filesystem usage growth when a big zip file download times out


    Details

    • Type: Bug
    • Status: Resolved
    • Priority: Minor
    • Resolution: Duplicate
    • Affects Version/s: 7.10, 8.3
    • Fix Version/s: None
    • Component/s: Seam / JSF UI, Web Common
    • Backlog priority: 700
    • Sprint: nxSL Sprint 8.4.4, nxSL Sprint 8.4.5, nxSL Sprint 8.10.1, nxSL Sprint 8.10.2, nxSL Sprint 9.1.1
    • Story Points: 0

      Description

      When requesting big ZIP files (say 5 GB), generating the archive takes time, sometimes more than 10 minutes. If the connection between the client and Nuxeo is cut during that window, the client will never download the ZIP file, and the temporary file remains on the filesystem, consuming space until the server is restarted.

      There are at least two scenarios for this:

      1. Most load balancers have a timeout of 5 minutes; when the load balancer times out, it cuts the connection.
      2. A client may simply cut the connection itself: the machine restarts, the user closes the browser, its proxy drops the connection, and so on.

      There may be workarounds for some scenarios, such as increasing the timeout, but this is rarely acceptable.

      In ClipboardActionsBean.exportWorklistAsZip(List<DocumentModel> documents, boolean exportAllBlobs):

      HttpServletRequest request = (HttpServletRequest) context.getExternalContext().getRequest();
      request.setAttribute(NXAuthConstants.DISABLE_REDIRECT_REQUEST_KEY, true);
      String zipDownloadURL = BaseURL.getBaseURL(request);
      zipDownloadURL += DownloadService.NXBIGZIPFILE + "/";
      zipDownloadURL += tmpFile.getName();
      try {
          context.getExternalContext().redirect(zipDownloadURL);
      } catch (IOException e) {
          log.error("Error while redirecting for big file downloader", e);
      }
      
      1. When the connection is cut, the redirect throws an IOException and tmpFile is never deleted.
      2. There should be a finally block, or at least a tmpFile.delete() call inside the catch block, to ensure the space is freed.
      3. Ideally, the tmpFile name should be logged for support purposes.
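      A minimal sketch of the suggested cleanup, using plain JDK I/O instead of the Seam/Nuxeo context so it is self-contained. The class name TmpFileCleanup and the redirect helper (which here always fails, standing in for a cut connection) are hypothetical illustrations, not Nuxeo APIs:

```java
import java.io.File;
import java.io.IOException;
import java.nio.file.Files;

public class TmpFileCleanup {

    // Hypothetical stand-in for context.getExternalContext().redirect():
    // always throws, simulating a client that cut the connection.
    static void redirect(String url) throws IOException {
        throw new IOException("connection reset by peer");
    }

    // Sketch of the suggested fix: if the redirect fails, delete the
    // temporary ZIP and log its name for support purposes.
    // Returns true when the temporary file was cleaned up.
    public static boolean downloadWithCleanup(File tmpFile, String zipDownloadURL) {
        try {
            redirect(zipDownloadURL);
            return false; // redirect succeeded; the download servlet now owns the file
        } catch (IOException e) {
            // Log the temp file name so support can correlate disk usage with failures.
            System.err.println("Error while redirecting for big file downloader, deleting "
                    + tmpFile.getName() + ": " + e.getMessage());
            tmpFile.delete();
            return true;
        }
    }

    public static void main(String[] args) throws IOException {
        File tmpFile = Files.createTempFile("clipboard-", ".zip").toFile();
        downloadWithCleanup(tmpFile, "http://localhost/nxbigzipfile/" + tmpFile.getName());
    }
}
```

      A try/finally variant would free the space unconditionally, but only the error path should delete the file here, since on success the download servlet still needs it.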
