Hello there, let me introduce myself. My name is Mohammad Kevin Putra (you
can call me Kevin), from Indonesia. I am a beginner backend developer; I
use Linux Mint, Apache Solr 7.5.0 and Apache Tika 1.91.0.
I have a bit of a problem with how to send a PDF file to Solr via Apache Tika. I
understan
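(For indexing PDFs, the usual route is Solr's ExtractingRequestHandler at /update/extract, which runs Tika server-side. A minimal sketch that only builds the request URL; the host, core name, and document id below are hypothetical and need adjusting to your setup:)

```python
from urllib.parse import urlencode

# Hypothetical host, core and id -- adjust to your setup.
base = "http://localhost:8983/solr/mycore/update/extract"
params = {"literal.id": "doc1", "commit": "true"}

# Solr's ExtractingRequestHandler accepts the raw PDF bytes as the POST
# body (Content-Type: application/pdf); Tika parses the file server-side.
url = base + "?" + urlencode(params)
print(url)
```

Sending the actual file is then a plain HTTP POST of the PDF bytes to that URL, e.g. with curl's --data-binary @file.pdf and -H 'Content-Type: application/pdf'.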
Sofiya:
The interval between when a commit happens and when all the autowarm
queries are finished is 52 seconds for the filterCache. I've rarely seen
warming take that long unless something's very unusual. I'd actually be very
surprised if you're really only firing 64 autowarm queries and it's
taking almost 52 sec
: Hi Erick, thanks for your reply. No, we aren't using schemaless
: mode. schemaFactory is not explicitly declared in
: our solrconfig.xml. Also, we have only one replica and one shard.
ManagedIndexSchemaFactory has been the default since 6.0 unless an
explicit schemaFactory is defined...
https://lucene.apac
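(For reference, an explicit declaration in solrconfig.xml looks like the sketch below, if you want the classic schema.xml behavior instead of the managed-schema default:)

```xml
<schemaFactory class="ClassicIndexSchemaFactory"/>
```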
Could you please clarify what the "memory disk layer" is? Do you mean swapping
from memory to disk, reading from disk to memory, or something else?
On 29.10.18 17:20, Deepak Goel wrote:
I would then suspect performance is choking in the memory disk layer. Can
you please check the performance?
On Mon,
Hi,
To support higher select-query rates, we are planning to increase the
replication factor from 15 to 24.
Will this put too much load on the leader nodes, since each update now has
to be propagated to 24 replica nodes?
Each node is on a different IP but in the same availability region within a
d
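(Back-of-the-envelope arithmetic for the fan-out, assuming a single shard: the leader forwards each update to every other replica, so per-update forwarding goes from 14 to 23 streams, roughly a 64% increase:)

```python
old_replicas, new_replicas = 15, 24
# With one shard, the leader forwards each update to every other replica.
old_forwards = old_replicas - 1   # 14
new_forwards = new_replicas - 1   # 23
increase = new_forwards / old_forwards
print(round(increase, 2))  # 1.64
```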
Sure, I can test that; I will set it to zero now :)
We never tried a small number for the autowarming parameter, but it had
been running with zero (the default value) for a while before being changed
to 64, and the startup after a commit was a bit slow. But
overall, there was rather little differ
Speaking of your caches... Either it's a problem with the metrics
reporting or your warmup times are very, very long. 11 seconds and, er,
52 seconds! My guess is that you have your autowarm counts set to a
very high number and are consuming a lot of CPU time every time a
commit happens. Which will only
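(For reference, the autowarm count is set on the cache definitions in solrconfig.xml; a sketch with deliberately modest, illustrative values:)

```xml
<filterCache class="solr.FastLRUCache"
             size="512"
             initialSize="512"
             autowarmCount="16"/>
```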
I would then suspect performance is choking in the memory disk layer. Can
you please check the performance?
On Mon, 29 Oct 2018, 20:30 Sofiya Strochyk wrote:
> Hi Deepak and thanks for your reply,
>
> On 27.10.18 10:35, Deepak Goel wrote:
>
>
> Last, what is the nature of your request. Are the quer
Hi Zahra,
To answer your question about seeing "No such processor atomic" with
AtomicUpdateProcessorFactory:
The feature was introduced in Solr 6.6.1 and 7.0 and is available in
later versions.
I tried the below on v7.4 and it works fine, without adding any
component to solrconfig.xml:
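(The example itself is cut off in the archive. A sketch of such a request, with the core name taken from the thread and the field names and operations purely illustrative, just builds the URL with the in-place processor parameters:)

```python
from urllib.parse import urlencode

# "processor=atomic" selects AtomicUpdateProcessorFactory; each
# "atomic.<field>=<op>" names the atomic operation to apply to that field.
params = {"processor": "atomic", "atomic.text2": "set", "commit": "true"}
url = "http://localhost:8983/solr/test4/update?" + urlencode(params)
print(url)
```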
Hi Ere,
Thanks for your advice! I'm aware of the performance problems with deep
paging, but unfortunately that is not the case here, as the rows number is
always 24 and subsequent pages are hardly ever requested, from what I see in
the logs.
On 29.10.18 11:19, Ere Maijala wrote:
Hi Sofiya,
You've a
Hi Deepak and thanks for your reply,
On 27.10.18 10:35, Deepak Goel wrote:
Last, what is the nature of your requests? Are the queries the same, or
are they very random? Random queries would need more tuning than if
the queries are the same.
The search term (q) is different for each query, and fil
I am not sure. I haven't tried this particular path. Your original
question was without using SolrJ. Maybe others have.
However, I am also not sure how much sense this makes. This Atomic
processor is to make it easier to do the merge when you cannot modify
the source documents. But if you are alre
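(What Alex is hinting at: if you control the client, the classic atomic-update document format needs no custom processor at all. A sketch, with the id and field name borrowed from the thread's example URL and the value hypothetical:)

```python
import json

# Per-field operation maps sent to /update as a normal JSON add; Solr
# applies "set"/"add"/"inc" etc. against the stored document.
doc = {"id": "11", "text2": {"set": "new value"}}
payload = json.dumps([doc])
print(payload)
```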
On 10/29/2018 7:40 AM, Zahra Aminolroaya wrote:
Thanks Alex. I want to have a query for atomic update with solrj like below:
http://localhost:8983/solr/test4/update?preprocessor=atomic&atomic.text2=set&atomic.text=set&atomic.text3=set&commit=true&stream.body=%3Cadd%3E%3Cdoc%3E%3Cfield%20name=%22id%22%3E11%3C/field%3E%3Cfield%20name=%22text3%22%
Hi Shalin,
here are the stats for the caches used:
*documentCache*
class:org.apache.solr.search.LRUCache
description:LRU Cache(maxSize=128, initialSize=128)
stats:
CACHE.searcher.documentCache.cumulative_evictions:234923643
CACHE.searcher.documentCache.cumulative_hitratio:0
CACHE.searcher.documentCache
Maybe this was introduced in a later version of Solr. Check the CHANGES
file to compare your version against the released versions.
Regards,
Alex
On Mon, Oct 29, 2018, 6:37 AM Zahra Aminolroaya wrote:
> Thanks Alex. I try the following to set the atomic processor:
>
> http://localhost:8983/solr/tes
Thanks Alex. I try the following to set the atomic processor:
http://localhost:8983/solr/test4/update?processor=atomic&Atomic.text2=add
However, I get the following error:

<response>
  <lst name="responseHeader">
    <int name="status">400</int>
    <int name="QTime">4</int>
  </lst>
  <lst name="error">
    <lst name="metadata">
      <str name="error-class">org.apache.solr.common.SolrException</str>
      <str name="root-error-class">org.apache.solr.common.SolrException</str>
    </lst>
    <str name="msg">No such processor atomic</str>
    <int name="code">400</int>
  </lst>
</response>
I rea
On Mon, 2018-10-29 at 10:55 +0200, Sofiya Strochyk wrote:
> I think we could try that, but most likely it turns out that at some
> point we are receiving 300 requests per second, and are able to
> reasonably handle 150 per second, which means everything else is
> going to be kept in the growing que
Hi Sofiya,
You've already received a lot of ideas, but I think this wasn't yet
mentioned: You didn't specify the number of rows your queries fetch or
whether you're using deep paging in the queries. Both can be real
performance killers in a sharded index because a large set of records
have to
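(For completeness, the usual remedy for deep paging is cursorMark rather than large start offsets; a sketch with a hypothetical collection name — note the sort must end on the uniqueKey field:)

```python
from urllib.parse import urlencode

# Hypothetical collection; cursorMark avoids the cost of start=N deep paging.
params = {
    "q": "*:*",
    "rows": 24,
    "sort": "score desc, id asc",  # a cursor requires a sort ending on the unique key
    "cursorMark": "*",             # "*" starts the cursor; pass nextCursorMark from each response
}
url = "/solr/mycollection/select?" + urlencode(params)
print(url)
```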
I think we could try that, but most likely it turns out that at some
point we are receiving 300 requests per second, and are able to
reasonably handle 150 per second, which means everything else is going
to be kept in the growing queue and increase response times even further..
Also, if one no
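(The arithmetic behind that worry, with the thread's numbers: whenever arrivals exceed service capacity, the backlog grows linearly and response times rise without bound:)

```python
arrival_rate = 300.0   # requests per second coming in (from the thread)
service_rate = 150.0   # requests per second the cluster can handle
backlog_growth = arrival_rate - service_rate
print(backlog_growth)  # 150.0 -- the queue grows by 150 requests every second
```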
Erick,
thanks, I've been pulling my hair out over this for a long time and
have gathered a lot of information :)
Isn't there a maxIndexingThreads setting in solrconfig, with a
default value of about 8? It's not clear whether my updates are being
executed in parallel or not, but I would expect
Hi Walter,
yes, after some point it gets really slow (before reaching 100% CPU
usage), so unless G1 or further tuning helps, I guess we will have to add
more replicas or shards.
On 26.10.18 20:57, Walter Underwood wrote:
The G1 collector should improve 95th percentile performance, because it
What do your cache statistics look like? What are the hit ratio, size,
evictions, etc.?
More comments inline:
On Sat, Oct 27, 2018 at 8:23 AM Erick Erickson wrote:
> Sofiya:
>
> I haven't said so before, but it's a great pleasure to work with
> someone who's done a lot of homework before pinging