David Smiley, sorry for my terminology; I'm used to calling the fetching of a full DB table (collection) in small parts "scrolling". Of course, in Solr, cursors (cursorMark) are designed for this and I use them. The large "rows" values in my examples (measurements) are needed to show the speed at which the data can be transferred.
>
> Isn't there a deserializer for the Solr javabin format in Python/JSON?
Well, you can try to marry
https://solr.apache.org/guide/solr/latest/query-guide/response-writers.html#smile-response-writer
+ https://github.com/jhosmer/PySmile
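A rough sketch of how those two could fit together (untested; it assumes PySmile's top-level decode() accepts the raw Smile bytes that wt=smile returns, and "mycollection" is a placeholder name):

    import requests
    import pysmile  # https://github.com/jhosmer/PySmile

    # Ask Solr for a Smile-encoded response (the Smile response writer
    # must be available on the server side).
    resp = requests.get(
        "http://localhost:8983/solr/mycollection/select",
        params={"q": "*:*", "rows": 100, "wt": "smile"},
    )
    # Decode the binary Smile payload into plain Python objects.
    data = pysmile.decode(resp.content)
    print(data["response"]["numFound"])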
On Mon, Feb 27, 2023 at 12:46 AM Fikavec F wrote:
> Thank you for your help with the slow single-threaded data retrieval from Solr.
Thank you for your help with the slow single-threaded data retrieval from Solr. Today I was able to reach a speed of 3+ Gigabit/s and got results that may be useful in the future. I turned out to be wrong in assuming that the main problem was in the FastWriter output buffer, but that was the most obvious thing to check first.
You used the word "scroll" a lot. Can you elaborate?
Search is generally optimized for returning top-X where X is not large. My
suspicion is that you want lots of results back. You might want to use
cursorMark as described here:
https://solr.apache.org/guide/solr/latest/query-guide/pagination-of
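For concreteness, a minimal cursorMark loop in Python might look like the sketch below (the collection URL, the uniqueKey field "id", and the per-doc handling are placeholders; the sort must include the uniqueKey as a tiebreaker, and you stop when nextCursorMark stops changing):

    import requests

    SOLR = "http://localhost:8983/solr/mycollection/select"  # placeholder URL
    cursor = "*"  # initial cursor value
    while True:
        body = requests.get(SOLR, params={
            "q": "*:*",
            "rows": 500,
            "sort": "id asc",        # must include the uniqueKey field
            "cursorMark": cursor,
            "wt": "json",
        }).json()
        for doc in body["response"]["docs"]:
            print(doc["id"])         # placeholder: handle each doc here
        next_cursor = body["nextCursorMark"]
        if next_cursor == cursor:    # unchanged cursor => no more results
            break
        cursor = next_cursor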
> I'm not sure there's a shortcut bypassing ordering results through heap
To expand on this a bit: the behavior Mikhail describes changes as of
Solr 9.1 (https://issues.apache.org/jira/browse/SOLR-14765), which
introduces exactly the proposed bypass. The extra overhead (pre-9.1)
scales linearly wrt the number of rows requested.
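To make that overhead concrete: pre-9.1 the collector effectively funnels every matching document through a bounded priority queue of size rows, along these lines (a Python illustration of the principle, not Solr's actual collector code):

    import heapq

    def top_x(scored_docs, x):
        # Bounded min-heap of size x: each of the n candidates pays an
        # O(log x) heap operation, and the final sort costs O(x log x),
        # so a very large x (rows) means real extra work even when the
        # caller does not care about ranking -- the case the 9.1 bypass
        # (SOLR-14765) avoids.
        heap = []
        for doc_id, score in scored_docs:
            if len(heap) < x:
                heapq.heappush(heap, (score, doc_id))
            elif score > heap[0][0]:
                heapq.heapreplace(heap, (score, doc_id))
        return sorted(heap, reverse=True)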