jpountz opened a new issue, #12393: URL: https://github.com/apache/lucene/issues/12393
### Description

`StandardTokenizer` is likely our most widely used tokenizer, and it shows up as the main bottleneck for indexing in our nightly benchmarks. See e.g. the top 5 CPU users for the 1kB Wikipedia corpus on yesterday's run:

```
PERCENT  CPU SAMPLES  STACK
10.54%   51017        org.apache.lucene.analysis.standard.StandardTokenizerImpl#getNextToken()
 6.99%   33852        org.apache.lucene.index.IndexingChain$PerField#invertTokenStream()
 6.47%   31309        org.apache.lucene.index.TermsHashPerField#writeByte()
 5.00%   24183        org.apache.lucene.util.BytesRefHash#equals()
 4.38%   21215        java.lang.Character#codePointAtImpl()
```

Intuitively, this kind of workload is amenable to vectorization. Could we take advantage of vectorization to speed up text analysis and thus indexing?
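To give an idea of the shape this could take, here is a minimal sketch of bulk character classification using the incubating Panama Vector API (`jdk.incubator.vector`). The class and method names are hypothetical, and it only detects ASCII whitespace, whereas `StandardTokenizer` implements full Unicode word-break rules, so this is just an illustration of the technique rather than a proposal for the actual implementation:

```java
import jdk.incubator.vector.ByteVector;
import jdk.incubator.vector.VectorMask;
import jdk.incubator.vector.VectorOperators;
import jdk.incubator.vector.VectorSpecies;

// Hypothetical sketch: scan a byte[] of ASCII text for the next whitespace
// (i.e. a trivial token boundary) one vector at a time instead of one char
// at a time.
public class VectorBoundaryScan {
  private static final VectorSpecies<Byte> SPECIES = ByteVector.SPECIES_PREFERRED;

  static int nextWhitespace(byte[] text, int from) {
    int i = from;
    int upper = from + SPECIES.loopBound(text.length - from);
    for (; i < upper; i += SPECIES.length()) {
      ByteVector chunk = ByteVector.fromArray(SPECIES, text, i);
      // Classify a whole vector of bytes at once: space, tab, LF, CR.
      VectorMask<Byte> ws = chunk.compare(VectorOperators.EQ, (byte) ' ')
          .or(chunk.compare(VectorOperators.EQ, (byte) '\t'))
          .or(chunk.compare(VectorOperators.EQ, (byte) '\n'))
          .or(chunk.compare(VectorOperators.EQ, (byte) '\r'));
      if (ws.anyTrue()) {
        return i + ws.firstTrue();
      }
    }
    // Scalar tail for the remaining bytes.
    for (; i < text.length; i++) {
      byte b = text[i];
      if (b == ' ' || b == '\t' || b == '\n' || b == '\r') {
        return i;
      }
    }
    return text.length;
  }
}
```

The real challenge would be expressing the JFlex-generated word-break state machine (or an equivalent) in a branch-free, data-parallel form, or at least vectorizing the common ASCII fast path and falling back to the scalar automaton for everything else.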