- Type: Improvement
- Status: Resolved
- Priority: Major
- Resolution: Fixed
- Affects Version/s: None
- Component/s: Elasticsearch
- Sprint: Sprint RepoTeam 7.1-1
The standard tokenizer used in our custom analyzers does not split strings on special characters such as "_" and "." when they are not followed by whitespace.
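This can be reproduced directly against the `_analyze` API. The sketch below assumes a recent Elasticsearch version where `_analyze` accepts a JSON body (on the 1.x line the equivalent uses query-string parameters) and a node listening on the default `localhost:9200`:

```
# Ask the standard tokenizer to analyze a file name containing a dot.
curl -s -XGET 'http://localhost:9200/_analyze' \
  -H 'Content-Type: application/json' \
  -d '{ "tokenizer": "standard", "text": "york.jpg" }'

# The response contains a single token "york.jpg" rather than
# the two tokens "york" and "jpg", because the standard tokenizer
# follows UAX #29 word-boundary rules, under which "." and "_"
# do not break a word when surrounded by letters.
```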
As a result, if I have a document named "york.jpg", a search for "york" returns nothing, because the token stored in the index is "york.jpg". Moreover, a search for "york.jpg" does not work either, because Nuxeo strips the dot from the query string, so Elasticsearch receives a query for "york jpg".
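One possible direction (a sketch of the general technique, not necessarily the fix that was shipped) is to add a `word_delimiter` token filter to the analyzer, which splits tokens on intra-word punctuation such as "." and "_". The index name and analyzer name below are hypothetical:

```
# Create an index whose analyzer splits file names on punctuation.
curl -s -XPUT 'http://localhost:9200/my_index' \
  -H 'Content-Type: application/json' \
  -d '{
  "settings": {
    "analysis": {
      "analyzer": {
        "filename_fulltext": {
          "type": "custom",
          "tokenizer": "standard",
          "filter": ["word_delimiter", "lowercase"]
        }
      }
    }
  }
}'
```

With the default `word_delimiter` settings, "york.jpg" is indexed as the tokens "york" and "jpg", so both a search for "york" and the dot-stripped query "york jpg" match; setting `"preserve_original": true` on the filter would additionally keep the unsplit "york.jpg" token.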