NXDRIVE-372: Create a server-side bench


    Details

    • Type: Task
    • Status: Open
    • Priority: Major
    • Resolution: Unresolved
    • Affects Version/s: None
    • Fix Version/s: Postponed
    • Component/s: Tests
    • Sprint: drive-8.1-4

      Description

      The first goal is to measure the impact of a large number of Drive clients connected to a Nuxeo instance: 1,000 or more.
      The main bottlenecks should be the remote full scan done at Drive initialization and the recurring calls to NuxeoDrive.GetChangeSummary.

      We need to run the bench on an instance containing data that reflects reality, especially regarding the audit logs, so let's code a custom platform importer that generates different users working on a randomly generated hierarchy of documents.
      We should be able to configure the importer for (see the sketch after this list):

      • number of users
      • number of levels in the hierarchy
      • number of folders / files per folder
      • file types / sizes
      • number of sync roots per user
      • ...
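
      A minimal sketch of the kind of parameter set the importer could expose; every name and default value below is illustrative only, not an existing Nuxeo API:

      // Hypothetical configuration object for the data-generation importer.
      // All field names and defaults are illustrative.
      case class ImporterConfig(
        nbUsers: Int = 1000,                       // number of generated users
        nbLevels: Int = 5,                         // depth of the document hierarchy
        nbFoldersPerFolder: Int = 10,              // folders created under each folder
        nbFilesPerFolder: Int = 20,                // files created in each folder
        fileTypes: Seq[String] = Seq("txt", "pdf", "jpg"),
        fileSizeRangeKb: (Int, Int) = (10, 1024),  // min / max size of generated binaries
        nbSyncRootsPerUser: Int = 3                // sync roots registered per user
      )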

      We can launch the importer and wait for a while (at least until we have a substantial hierarchy) before launching the actual bench simulating the Drive clients.
      This way, when first connecting, the Drive clients will perform a remote recursive scan whose impact we will measure; they will then switch to the incremental mode, asking for the change summary every 30 seconds (this interval should be configurable in the bench) on a hierarchy subject to some activity, since the importer is still running.
      We could also add an operation to generate some random activity (CRUD) while the bench is running, to simulate real use cases even better.

      The bench will be done with Gatling.
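
      A minimal Gatling sketch of the incremental polling part, assuming a Nuxeo instance at http://localhost:8080/nuxeo, basic authentication and a 1,000-user ramp; the endpoint path, request body and injection profile are illustrative assumptions, not the final simulation:

      import io.gatling.core.Predef._
      import io.gatling.http.Predef._
      import scala.concurrent.duration._

      class DriveChangeSummarySimulation extends Simulation {

        // Assumed target instance and credentials.
        val httpProtocol = http
          .baseUrl("http://localhost:8080/nuxeo")
          .basicAuth("driveuser", "secret")

        // Each virtual user polls NuxeoDrive.GetChangeSummary every 30 seconds,
        // the interval the bench should make configurable.
        val polling = scenario("Drive incremental polling")
          .forever {
            exec(
              http("GetChangeSummary")
                .post("/site/automation/NuxeoDrive.GetChangeSummary")
                .header("Content-Type", "application/json")
                .body(StringBody("""{"params": {}}"""))
            ).pause(30.seconds)
          }

        // Ramp up 1,000 clients over 10 minutes, then keep them polling for an hour.
        setUp(
          polling.inject(rampUsers(1000).during(10.minutes))
        ).protocols(httpProtocol)
          .maxDuration(1.hour)
      }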

      We can also test and measure:

      • The difference between the SQL audit backend and the Elasticsearch one.
      • The impact of adding a file on a sync root shared by all users, which will generate a large number of downloads.
      • The impact of concurrent write operations, for instance 10% of the users actively working on documents and saving every 10 seconds, hence binary uploads (see the sketch after this list).
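
      As a sketch of the concurrent-write case, building on the simulation above, roughly 10% of the virtual users could run a separate scenario that uploads a binary every 10 seconds; the Blob.Attach operation name and the multipart field are assumptions, to be replaced by the actual call Drive issues when saving a file:

      // Writers scenario: each user uploads a binary every 10 seconds.
      val writers = scenario("Concurrent writers")
        .forever {
          exec(
            http("Upload binary")
              // Assumed upload endpoint; the real bench would target existing documents.
              .post("/site/automation/Blob.Attach")
              .bodyPart(RawFileBodyPart("input", "sample.bin"))
          ).pause(10.seconds)
        }

      // Inject roughly 10% of the population as writers, alongside the polling scenario:
      // setUp(
      //   polling.inject(rampUsers(900).during(10.minutes)),
      //   writers.inject(rampUsers(100).during(10.minutes))
      // ).protocols(httpProtocol)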

      Finally, we should be able to run this bench on a cluster, once the batch upload + cluster issue is solved.
