Hi
We are running Solr 6.2.1 as the server and SolrJ 6.2.0 as the client.
It's a 2-shard index with 3 replicas per shard.
We are fetching the latest document by sorting on creationTime desc with
rows=1.
At the same time we are running sanity tests that insert documents and
delete them immediately.
The weir
Hi
Thanks for the reply.
We are using
select?q=*:*&sort=creationTimestamp+desc&rows=1
So as you said we should have got results.
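For reference, here is roughly how we issue that query through SolrJ
(the zkHost and collection name below are placeholders, not our real setup):

import org.apache.solr.client.solrj.SolrQuery;
import org.apache.solr.client.solrj.impl.CloudSolrClient;
import org.apache.solr.client.solrj.response.QueryResponse;

public class LatestDocQuery {
  public static void main(String[] args) throws Exception {
    try (CloudSolrClient client =
             new CloudSolrClient.Builder().withZkHost("zk1:2181").build()) {
      SolrQuery query = new SolrQuery("*:*");
      query.setSort("creationTimestamp", SolrQuery.ORDER.desc);
      query.setRows(1); // only the most recent document
      QueryResponse rsp = client.query("collection1", query);
      System.out.println(rsp.getResults());
    }
  }
}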
Another piece of information: we use a commitWithin of 300 ms when inserting
the "sanity" doc.
And again, we delete by query.
We don't have any custom plugins or queries.
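To be concrete, the sanity flow does something like this (a sketch; the
field values are made up, and "client" is the SolrJ client from above):

import org.apache.solr.client.solrj.SolrClient;
import org.apache.solr.common.SolrInputDocument;

static void sanityCheck(SolrClient client) throws Exception {
  SolrInputDocument doc = new SolrInputDocument();
  doc.addField("key", "sanity-1"); // made-up key
  doc.addField("creationTimestamp", "2017-01-01T00:00:00Z");
  client.add("collection1", doc, 300); // commitWithin=300ms
  // Immediately delete by query, also with commitWithin=300ms.
  client.deleteByQuery("collection1", "key:sanity-1", 300);
}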
I am not sure that it's related,
but in local tests we got to a scenario where we
add a doc that somehow has an *empty key*, and then, when querying with a
sort over creationTime and rows=1, we get an empty result set.
When specifying the shard that holds the recent doc with shards=shard2, we
do get results.
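For clarity, that check is just the same query with the shards parameter
set (the shard name as it appears in our cluster state):

SolrQuery query = new SolrQuery("*:*");
query.setSort("creationTimestamp", SolrQuery.ORDER.desc);
query.setRows(1);
query.set("shards", "shard2"); // restrict the distributed query to one shard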
I don't
Thanks for the help Yonik.
Cheers
Gilad
Hi
We have solr 6.2.1.
One of the collections is receiving lots of updates.
We see the following logs:
INFO org.apache.solr.core.SolrDeletionPolicy:
SolrDeletionPolicy.onCommit: commits: num=2
commit{dir=/opt/solr-6.2.1/server/solr/collection_shard1_replica2/data/index,segFN=segments_qbmv,generation
Shawn, thanks for the reply
Please take a look at this post; it describes the same issue with ES.
They describe the issue as "dentry cache is bloating memory":
https://discuss.elastic.co/t/memory-usage-of-the-machine-with-es-is-continuously-increasing/23537/5
Thanks
Gilad
In the meantime I am removing all the explicit commits we have in the code.
Will update if it gets better.
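Roughly, the change is from committing explicitly after every batch to
letting the server-side settings handle it (a sketch; the collection name
and the 30-second commitWithin value are illustrative, not our real config):

import java.util.List;
import org.apache.solr.client.solrj.SolrClient;
import org.apache.solr.common.SolrInputDocument;

static void sendBatch(SolrClient client, List<SolrInputDocument> docs)
    throws Exception {
  // Before: explicit commit after every batch.
  //   client.add("collection1", docs);
  //   client.commit("collection1");

  // After: no explicit commit; rely on server-side autoCommit, or pass a
  // commitWithin and let Solr fold many updates into one commit.
  client.add("collection1", docs, 30000); // commitWithin=30s (illustrative)
}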
Hi
We have Solr server 6.2.1 with basic auth (rule-based authorization).
Clients are using SolrJ 6.2.
During heavy workload we see that, besides some timeouts, we also get the
following messages:
org.apache.solr.security.RuleBasedAuthorizationPlugin: request has
come without principal. failed permission {
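For reference, this is how we attach credentials on the client side; as far
as I understand they have to be set on every request (the user/password
here are placeholders):

import org.apache.solr.client.solrj.SolrClient;
import org.apache.solr.client.solrj.SolrQuery;
import org.apache.solr.client.solrj.request.QueryRequest;
import org.apache.solr.client.solrj.response.QueryResponse;

static QueryResponse authedQuery(SolrClient client) throws Exception {
  SolrQuery query = new SolrQuery("*:*");
  QueryRequest request = new QueryRequest(query);
  request.setBasicAuthCredentials("solr-user", "secret"); // placeholders
  return request.process(client, "collection1");
}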
Hi
We are using Solr 6.2.1 for the server and SolrJ 6.2.0,
with no explicit commits, and
<maxDocs>3</maxDocs>
<maxTime>30</maxTime>
for autoCommit.
Each request to Solr contains 300 small documents with different keys, with
a commitWithin of 300 ms.
We have lots of requests coming in.
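A sketch of one such bulk request, to make the shape concrete (the field
names are illustrative):

import java.util.ArrayList;
import java.util.List;
import org.apache.solr.client.solrj.SolrClient;
import org.apache.solr.common.SolrInputDocument;

static void sendBulk(SolrClient client) throws Exception {
  List<SolrInputDocument> batch = new ArrayList<>();
  for (int i = 0; i < 300; i++) {
    SolrInputDocument doc = new SolrInputDocument();
    doc.addField("key", "doc-" + i); // different key per doc
    doc.addField("creationTimestamp", System.currentTimeMillis());
    batch.add(doc);
  }
  client.add("collection1", batch, 300); // commitWithin=300ms
}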
The behavior is as follows:
Thanks Shawn.
We do specify
<maxDocs>3</maxDocs>
<maxTime>30</maxTime>
<openSearcher>false</openSearcher>
but I guess that, still, a commitWithin of 300 ms is a bad idea.
We will definitely try playing with the configs you suggested.
I still don't get the reason for the fast inserting when sending
Hi
Yes, it is SolrCloud; we saw the same behavior with 1, 2 and 4 shards. Each
shard has 3 replicas.
Each bulk contains 300 docs. We get approximately 800 docs inserted per
second.
~6000 docs are being sent per iteration across all loading threads:
we have 20 threads, each sending bulks of 300 docs.
With high entropy we see the same latency even when working with 1 shard.
Assuming that even with 1 shard Solr is still working hard to route the
documents,
what is the component that is responsible for the document routing?
Is it ZooKeeper?
And how would you verify that that's the bottleneck
Hi all
We have changed all the Solr configs and commit parameters that Shawn
mentioned,
but still: when inserting the same 300 documents from 20 threads we see no
latency,
yet when inserting 300 different docs from 20 threads it is very slow, and
none of cpu/ram/disk/network show high usage.
Hi!
We are running SolrJ 6.2.0 against server 6.2.1,
and trying to fetch 100 records at a time with nextCursorMark,
while sorting on score desc, key asc.
The collection is made of 2 shards, with 3 replicas each.
We get inconsistent results when not specifying a specific replica for each
shard.
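The fetch loop looks roughly like this (a sketch; the collection and field
names are as above, "key" being our uniqueKey):

import org.apache.solr.client.solrj.SolrClient;
import org.apache.solr.client.solrj.SolrQuery;
import org.apache.solr.client.solrj.response.QueryResponse;
import org.apache.solr.common.params.CursorMarkParams;

static void fetchAll(SolrClient client) throws Exception {
  SolrQuery query = new SolrQuery("*:*");
  query.setRows(100);
  query.addSort("score", SolrQuery.ORDER.desc);
  query.addSort("key", SolrQuery.ORDER.asc); // uniqueKey tie-breaker, required for cursors
  String cursorMark = CursorMarkParams.CURSOR_MARK_START;
  boolean done = false;
  while (!done) {
    query.set(CursorMarkParams.CURSOR_MARK_PARAM, cursorMark);
    QueryResponse rsp = client.query("collection1", query);
    // ... consume rsp.getResults() ...
    String next = rsp.getNextCursorMark();
    done = cursorMark.equals(next); // unchanged mark means we are done
    cursorMark = next;
  }
}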
S
Thanks Shawn
We tried setting
and it did help bring the scores closer among the different replicas, but as
you say, we still had
1 replica with a different score than the others.
Would you suggest using a "sticky" cursor that always queries the same
replica on each shard?
Thanks
Gilad
Hi all
We are about to alter our schema with some docValues annotations.
According to the docs, we should either delete all docs and re-insert, or
create a new collection with the new schema.
1. Is it valid to modify the schema in the current collection, where all
documents were created without docValues
Thank you Emir.
I tried this locally (changing the schema and re-indexing everything in
place)
and I wasn't able to sort on the docValues fields anymore (someone actually
mentioned this before on this forum:
https://lucene.472066.n3.nabble.com/DocValues-error-td4240116.html)
with the following error
"Error from server a