How long delay do you see? Is it only for query panel or for the UI in general?
A query for *:* is not necessarily a simple query; it depends on how many fields you have, how large they are, etc. Try a query with fl=id or fl=title and see if that helps.
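For example, something along these lines (collection name hypothetical) returns only the id field and keeps the response small:

curl 'http://localhost:8983/solr/mycollection/select?q=*:*&fl=id&rows=10'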
Jan
> On 13 Jan 2020, at 22:29, Webster Homer wrote:
>
Good questions. I've been having similar problems for a while; for me,
the UI in general is frozen, including the navigation buttons and query
text boxes. The delay depends on the size of the JSON: if I do a request
for 1000 rows with just 1 field each, it's a permanent 5s delay on
scrolling and t
This is the built-in JSON formatting in the Query panel; it is very slow when
requesting huge JSON. I believe we have some JS code that fetches the result JSON
via AJAX and then formats it in a pretty way, also trying to filter out XSS
traps etc. Not sure if Chrome’s native JSON renderer is being used
## 13 January 2020, Apache Solr™ 8.4.1 available
The Lucene PMC is pleased to announce the release of Apache Solr 8.4.1.
Solr is the popular, blazing fast, open source NoSQL search platform
from the Apache Lucene project. Its major features include powerful
full-text search, hit highlighting, fac
I’ve also seen the browser hang when returning large result sets even when
typing the query in the address bar, bypassing the admin UI entirely.
Browsers are simply not built to display large amounts of data, and Jan’s
comments about bypassing the admin UI’s built-in processing may help, but it’
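One way to take the browser's renderer and the admin UI's formatter out of the equation entirely is to fetch the response with curl and open it in an editor; a sketch, with a hypothetical collection name:

curl 'http://localhost:8983/solr/mycollection/select?q=*:*&rows=1000&wt=json' -o response.json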
I noticed what is, in my opinion, strange behavior in SolrCloud.
I have a collection that has 1 shard and two replicas.
When I look at the directory structure, both have the same file names
in "data/index",
BUT the contents of those files are different.
So when I query this collection, and sort
This is expected for NRT replicas. For NRT, segments are _not_
the unit of update for the replicas, documents are. So the process is:
- leader gets documents to index
- leader indexes them locally and forwards the raw documents to the replicas
The autocommit timers trigger when the first doc hits
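As an aside, if byte-identical segment files across replicas matter for your use case, TLOG and PULL replica types (available since Solr 7) copy segment files from the leader instead of re-indexing documents. A hypothetical Collections API call creating such a collection:

curl 'http://localhost:8983/solr/admin/collections?action=CREATE&name=mycollection&numShards=1&tlogReplicas=2'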
I am trying to index a CSV file into a core.
The curl command:
sudo -u solr curl 'http://localhost:8983/solr/dbtraddr/update/csv?commit=true&escape=\&encapsulator=%7C&stream.file=/tmp/csv/dbtrphon_0.csv'
The header of the CSV file:
|id|,|archive|,|contact_type|,|debtor_id|,|descr|,|group
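One detail worth noting, offered as a hedge rather than a definite fix: the backslash in escape=\ travels un-encoded in the URL, while the pipe encapsulator is already percent-encoded as %7C. Percent-encoding the backslash as %5C keeps the query string unambiguous; a sketch of the same request:

sudo -u solr curl 'http://localhost:8983/solr/dbtraddr/update/csv?commit=true&escape=%5C&encapsulator=%7C&stream.file=/tmp/csv/dbtrphon_0.csv'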
On Mon, Jan 13, 2020 at 3:42 PM Terry Steichen wrote:
> Maybe solr isn't using enough of your available memory (a rough check is
> produced by 'solr status'). Do you realize you can start solr with a
> '-m xx' parameter? (for me, xx = 1g)
>
> Terry
>
>
I changed the java_mem field in solr.in.sh,
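For reference, the corresponding setting in a stock solr.in.sh looks something like this (the 1g value just mirrors Terry's example, and is equivalent to starting with bin/solr start -m 1g):

SOLR_JAVA_MEM="-Xms1g -Xmx1g"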
I went ahead and adjusted the time_stamp field to be UTC, and that took
care of the problem.
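For anyone hitting the same thing: Solr date fields expect ISO-8601 values in UTC. A minimal sketch of an update with such a value, assuming time_stamp is a date field (the document contents here are hypothetical):

curl 'http://localhost:8983/solr/dbtraddr/update?commit=true' -H 'Content-Type: application/json' -d '[{"id":"1","time_stamp":"2020-01-13T15:42:00Z"}]'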
On Tue, Jan 14, 2020 at 10:24 AM rhys J wrote:
> I am trying to index a CSV file into a core.
>
> The curl command:
>
> sudo -u solr curl '
> http://localhost:8983/solr/dbtraddr/update/csv?commit=tru
OK, I understand better.
Solr does not "read" docs 1 to 900 to retrieve 901 to 1000, but it still
needs to compute some things (docset intersection or something like that,
right?) and sort, which is costly, and only then "read" the docs.
> Are those 10 requests happening simultaneously, or consecuti
Hi Team,
I am currently upgrading my system from Solr 6.6 to Solr 8.2:
1. I am observing increased search time in my queries, i.e. search response
time has increased along with CPU utilisation, although memory looks fine.
On analysing heap dumps I figured out that queries are taking most of the
Conceptually, asking for docs 900-1000 works something like this. Solr (well,
Lucene actually) has to keep a sorted list 1,000 items long of scores and doc
IDs, because you can’t know whether doc N+1 will be in the list, or where. So
the list manipulation is what takes the extra time. For even 1,0
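In other words, a request shaped like the one below (collection name hypothetical) makes Solr maintain a start+rows = 1,000-entry sorted list, even though only 100 docs come back:

curl 'http://localhost:8983/solr/mycollection/select?q=*:*&start=900&rows=100'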
My experience is that the size of the query doesn't seem to matter; it just has
to be run in Chrome. Moreover, this used to work fine in Chrome too: Chrome's
behavior with the admin console changed, my data hasn't. I don't see problems
in Firefox.
-----Original Message-----
From: Erick Erickson
Have you already seen Solr deep paging?
https://lucidworks.com/post/coming-soon-to-solr-efficient-cursor-based-iteration-of-large-result-sets/
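A minimal sketch of cursor-based paging, assuming a collection named mycollection whose uniqueKey is id; the first request passes cursorMark=*, and each response includes a nextCursorMark value to pass into the following request:

curl 'http://localhost:8983/solr/mycollection/select?q=*:*&sort=id+asc&rows=100&cursorMark=*'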
> On Tue, 14 Jan 2020 at 20:41, Erick Erickson wrote:
> Conceptually, asking for docs 900-1000 works something like this. Solr (well,
> Lucene actually)
Also, trie fields have been updated to point fields; will that by any chance
degrade my response time by 50 percent?
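One thing worth double-checking after such a migration: point fields generally need docValues for sorting and faceting, since they have no indexed terms to uninvert. A hypothetical Schema API call defining such a type (collection and type names are assumptions):

curl -X POST -H 'Content-Type: application/json' 'http://localhost:8983/solr/mycollection/schema' -d '{"add-field-type":{"name":"plong","class":"solr.LongPointField","docValues":true}}'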
On Tue, Jan 14, 2020 at 1:37 PM kshitij tyagi
wrote:
> Hi Team,
>
> I am currently upgrading my system from solr 6.6 to solr 8.2 :
>
> 1. I am observing increased search time in m
Please don’t cross-post; this discussion belongs in solr-user only.
Jan
> On 14 Jan 2020, at 22:22, kshitij tyagi wrote:
>
> Also, trie fields have been updated to point fields; will that by any chance
> degrade my response time by 50 percent?
>
> On Tue, Jan 14, 2020 at 1:37 PM kshitij tyagi
>
Thanks.
The issue turned out to be a little different than expected. My IntelliJ was
on Windows (my main operating system). Solr was running inside a Docker
container inside a Debian VM hosted on my Windows operating system.
The debug server that runs inside the Docker container will accept connections
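For anyone wiring up something similar, a minimal sketch using the official Solr image (image tag and debug port are assumptions; bin/solr appends whatever is in SOLR_OPTS to the JVM arguments):

docker run -p 8983:8983 -p 5005:5005 -e SOLR_OPTS="-agentlib:jdwp=transport=dt_socket,server=y,suspend=n,address=*:5005" solr:8.4

With the port published, and provided the Debian VM forwards it to the Windows host, an IntelliJ remote JVM debug configuration pointing at localhost:5005 should be able to attach.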
I am a Solr fan and implemented it in our company over 10 years ago.
I moved away from that role, and in the meanwhile the new search team
implemented a proprietary (and expensive) NoSQL-style search engine.
That project did not go well, and now I am back on the project and reviewing the
technolo