: I am wondering if there is a way to warmup new searcher on commit by
: rerunning queries processed by the last searcher. May be it happens by
: default but then I can't understand why we see high query times if those
: searchers are being warmed.
it only happens by default if you have warming configured to warm the new
searcher, so you may
need to change the auto-commit intervals.
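What the original poster describes — replaying the previous searcher's queries against the new one — is what cache autowarming does. A minimal solrconfig.xml sketch (cache class varies by Solr version; sizes and counts here are placeholders, not recommendations):

```xml
<!-- solrconfig.xml: autowarmCount replays the most recently used keys
     from the old searcher's caches against the new searcher on commit. -->
<filterCache class="solr.CaffeineCache"
             size="512" initialSize="512" autowarmCount="64"/>
<queryResultCache class="solr.CaffeineCache"
                  size="512" initialSize="512" autowarmCount="64"/>
```

Larger autowarmCount values make new searchers warmer but make commits slower, which is why the auto-commit interval matters.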
Joel Bernstein
http://joelsolr.blogspot.com/
On Wed, Jan 27, 2021 at 5:30 PM Pushkar Raste
wrote:
Hi,
A rookie question. We have a Solr cluster that doesn't get too much
traffic. We see that our queries take long time unless we run a script to
send more traffic to Solr.
We are indexing data all the time and use autoCommit.
I am wondering if there is a way to warmup new searcher on commit by rerunning queries processed by the last searcher.
On 1/21/2021 3:42 AM, Parshant Kumar wrote:
Does the value (true or false) of cold searcher play any role during the
completion of replication on the slave server? If not, please tell me in which
process in Solr it is applied.
The setting to use a cold searcher applies whenever a new searcher is
opened. It
Adding more queries:
Does the value (true or false) of cold searcher play any role during the
completion of replication on the slave server? If not, please tell me in which
process in Solr it is applied.
On Thu, Jan 21, 2021 at 3:11 PM Parshant Kumar
wrote:
Hi all,
Please help me in below queries:
1) What is the impact of setting cold searcher to false or true?
2) After full replication of data completes on the slave server, is a new
searcher opened or not?
3) If openSearcher is false in autoCommit and cold searcher is true, what
does this imply? Is there
Please don’t mess with _version_, that’s used internally for optimistic locking.
I don’t have a clue, really, whether changing the definition will be deleterious
or not.
OTOH, that field was presumably defined by the people who put the use of
_version_ in in the first place, so changing it is just asking for trouble.
So to make things clear, belows what I am expecting
I have a document with a unique id field, let's say "uniqueID".
This document has both stored/indexed and not stored/ not indexed fields
Currently I have my pop values in external files but I will instead define
a new field in schema (popVal) which
Let us know how it works. I want to be sure I’m not confusing you
though. There isn’t a “doc ID field”. The structure of an eff file is
docid:value
where docid is your <uniqueKey>. What updating numerics does is allow
you to update a field in a doc that’s identified by <uniqueKey>. That
field is any name you want as l
Hey Erick,
Thanks for the information about the doc ID field.
So our external file values are single float value fields and we do use
them in functional queries in boost parameter, so based on the definition
the above should work.
So currently we use solr 5.4.0 but are in the process of upgrading
Right, but you can use those with function queries. Assuming your eff entry is
a doc ID plus single numeric, I was wondering if you can accomplish what you
need to with function queries...
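For reference, an external file field like the one described is declared in schema.xml roughly as below; the field and type names are examples, and the file lives next to the index data as lines of uniqueKey=value:

```xml
<!-- schema.xml: one float per document, read from an external file
     (e.g. data/external_popVal) instead of the index itself. -->
<fieldType name="extPop" class="solr.ExternalFileField"
           keyField="uniqueID" defVal="0" indexed="false" stored="false"/>
<field name="popVal" type="extPop"/>
```

It can then be used in function queries as Erick suggests, e.g. a boost such as `bf=field(popVal)` or `{!boost b=field(popVal)}`.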
> On Aug 10, 2020, at 11:30 AM, raj.yadav wrote:
>
> Erick Erickson wrote
>> Ah, ok. That makes sense. I
Erick Erickson wrote
> Ah, ok. That makes sense. I wonder if your use-case would be better
> served, though, by “in place updates”, see:
> https://lucene.apache.org/solr/guide/8_1/updating-parts-of-documents.html
> This has been around since Solr 6.5…
As per the documentation, `in place update` is only supported for numeric docValues fields that are single-valued, non-indexed, and non-stored.
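Those documented constraints translate into a schema.xml definition like the sketch below (the field name is an example):

```xml
<!-- schema.xml: a field eligible for in-place updates must be
     single-valued, non-indexed, non-stored, with docValues enabled. -->
<field name="popVal" type="pfloat" indexed="false" stored="false"
       multiValued="false" docValues="true"/>
```

With such a field, an atomic update like `{"uniqueID":"42", "popVal":{"set":3.14}}` is executed as an in-place docValues update rather than a full document reindex.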
Hey,
So I have external file fields that have some data that get updated
regularly. Whenever those get updated we need the open searcher operation
to happen. The value in this external files are used in boosting and other
function/range queries.
On Mon, Aug 10, 2020 at 5:08 PM Erick Erickson
I have a use case where none of the document in my solr index is changing but
I still want to open a new searcher through the curl api.
On executing the below curl command
curl
"XXX.XX.XX.XXX:9744/solr/mycollection/update?openSearcher=true&commit=true"
it doesn't open a new searcher.
In a word, “no”. There is explicit code to _not_ open a new searcher if the
index hasn’t changed because it’s an expensive operation.
Could you explain _why_ you want to open a new searcher even though the index
is unchanged? The reason for the check in the first place is that nothing has
Hi
I am seeing searchers referenced in my logs as main and realtime. Do they
correspond to hard vs soft commit? I do not see the correlation to that based
on our commit settings.
Opening [Searcher@538abc62[xx_shard1_replica2] main]
Opening [Searcher@2e151991[xx_shard1_replica1] realtime]
>> smaller cache size.
>>
>> However, I still get these latency spikes (these changes have made no
>> difference to them).
>>
>> So the theory about them being due to the warming being too intensive is
>> wrong.
>>
>> I know the images didn
On 1/29/2020 2:48 PM, Karl Stoney wrote:
I know the images didn't load btw so when I say spike I mean p95th response
time going from 50ms to 100-120ms momentarily.
I agree with Erick on looking at what users can actually notice.
When the normal response time is 50 milliseconds, even if that d
Subject: Re: Solr Searcher 100% Latency Spike
Looking at the log, that takes one or two seconds after a complete batch reload
(master/slave). So that is loading a cold index, all new files. This is not a
big index, about a half million book titles.
wunder
Walter Underwood
wun...@wunderwood.org
http://observer.wunderwood.org/ (my blog)
Out of curiosity, could you define "fast"?
I'm wondering what sort of figures people target their searcher warm time at
From: Walter Underwood
Sent: 29 January 2020 21:13
To: solr-user@lucene.apache.org
Subject: Re: Solr Searcher 100% Latency
I use a static set of warming queries, about 20 of them. That is fast and gets
a decent amount of the index into file buffers. Your top queries won’t change
much unless you have a news site or a seasonal business.
Like this:
introduction
inter
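The XML of Walter's config was stripped by the archive (only the query terms "introduction" and "inter…" survive). A static warming setup like he describes is normally a QuerySenderListener in solrconfig.xml; a sketch, with placeholder query terms:

```xml
<!-- solrconfig.xml: send a static list of warming queries to every
     newly opened searcher, and to the first searcher at startup. -->
<listener event="newSearcher" class="solr.QuerySenderListener">
  <arr name="queries">
    <lst><str name="q">introduction</str></lst>
    <lst><str name="q">another top query</str><str name="rows">10</str></lst>
  </arr>
</listener>
<listener event="firstSearcher" class="solr.QuerySenderListener">
  <arr name="queries">
    <lst><str name="q">static firstSearcher warming query</str></lst>
  </arr>
</listener>
```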
Hey Shawn,
Thanks for the reply - funnily enough that is exactly what i'm trialing now.
I've significantly lowered the autoWarm (as well as the size) and still have a
0.95+ cache hit rate through searcher loads.
I'm going to continue to tweak these values down so long as i ke
On 1/29/2020 12:44 PM, Karl Stoney wrote:
Looking for a bit of support here. When we soft commit (every 10
minutes), we get a latency spike that means response times for solr are
loosely double, as you can see in this screenshot:
Attachments almost never make it to the list. We cannot see an
Hi All,
Looking for a bit of support here. When we soft commit (every 10 minutes), we
get a latency spike that means response times for solr are loosely double, as
you can see in this screenshot:
These do correlate to GC spikes (albeit not particularl
bq. I was under the wrong impression that autoCommit openSearcher=false would
control those too.
No, the settings in solrconfig.xml are the defaults. Like almost everything
else in the config, they govern the action in the absence of a per-command
override.
Best,
Erick
> On Mar 10, 2019, at
We do add commitWithin=XX when indexing updates, I take it that triggers
new searcher when the commit is made? I was under the wrong impression that
autoCommit openSearcher=false would control those too.
On Sat, Mar 9, 2019 at 9:00 PM Erick Erickson
wrote:
> Nothing should be opening
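As Erick notes below, commitWithin triggers its own commit regardless of the autoCommit defaults; in Solr 4 and later it defaults to a soft commit, which always opens a searcher. If that is unwanted, it can be changed in solrconfig.xml — a sketch, assuming those default semantics:

```xml
<!-- solrconfig.xml: commitWithin defaults to a soft commit (which
     opens a new searcher); this switches it to a hard commit instead. -->
<updateHandler class="solr.DirectUpdateHandler2">
  <commitWithin>
    <softCommit>false</softCommit>
  </commitWithin>
</updateHandler>
```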
On 3/9/2019 8:24 PM, John Davis wrote:
I couldn't find an answer to this in the docs: if openSearcher is set to
false in the autocommit with no softcommits, what triggers a new one to be
created? My assumption is that until a new searcher is created all the
newly indexed docs will not be vi
the system. Does
your Solr log show any updates and what are the parameters if so?
BTW, the setting for hard commit openSearcher=false _only_ applies
to autocommits. The default behavior of an explicit commit from
elsewhere will open a new searcher.
> My assumption is that until a new searcher
Hi there,
I couldn't find an answer to this in the docs: if openSearcher is set to
false in the autocommit with no softcommits, what triggers a new one to be
created? My assumption is that until a new searcher is created all the
newly indexed docs will not be visible. Based on the solr
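The setup being asked about — hard commits that never open a searcher, plus a separate control over visibility — is usually configured as below; the intervals are placeholders:

```xml
<!-- solrconfig.xml: hard commits for durability without opening a
     searcher; soft commits for visibility of new documents. -->
<autoCommit>
  <maxTime>60000</maxTime>          <!-- flush to disk every 60s -->
  <openSearcher>false</openSearcher>
</autoCommit>
<autoSoftCommit>
  <maxTime>600000</maxTime>         <!-- new searcher every 10 min -->
</autoSoftCommit>
```

Without any soft commit (or explicit commit) configured, newly indexed documents indeed stay invisible, as the poster suspects.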
On 3/1/2019 4:42 AM, Amjad Khan wrote:
We are trying to extend AbstractSolrEventListener class and override
newSearcher method. Was curious to know if we can copy the existing searcher
cache to new searcher instead of executing the query receiving from
solrconfig.. Because we are not sure
We are trying to extend the AbstractSolrEventListener class and override the
newSearcher method. We were curious to know if we can copy the existing searcher's
cache to the new searcher instead of executing the queries received from
solrconfig, because we are not sure which items were searched most often.
Will
I see, thank you very much!
> -Original Message-
> From: Mikhail Khludnev [mailto:m...@apache.org]
> Sent: Tuesday, January 15, 2019 6:45 PM
> To: solr-user
> Subject: Re: join query and new searcher on joined collection
>
> It doesn't invalidate anything. It j
The following join at collection1 just won't hit the
filter cache, and will be cached as a new entry; later the old entry will
be evicted.
On Tue, Jan 15, 2019 at 5:30 PM Vadim Ivanov <
vadim.iva...@spb.ntk-intourist.ru> wrote:
> Thanx, Mikhail for reply
> > collection1 has no idea abo
Thanx, Mikhail for reply
> collection1 has no idea about new searcher in collection2.
I suspected it. :)
So, when "join" query arrives searcher on collection1 has no chance to use
filter cache, stored before.
I suppose it invalidates filter cache, am I right?
&fq={!join s
collection1 has no idea about new searcher in collection2.
On Tue, Jan 15, 2019 at 1:18 PM Vadim Ivanov <
vadim.iva...@spb.ntk-intourist.ru> wrote:
> Sorry, I've sent an unfinished message
> So, query on collection1
> q=*:*{!join score=none from=id fromIndex=collection2 t
Sorry, I've sent an unfinished message
So, query on collection1
q=*:*{!join score=none from=id fromIndex=collection2 to=field1}*:*
The question is what happened with autowarming and new searchers on
collection1 when new searcher starts on collection2?
IMHO when request with join comes it's
Solr 6.3
I have a query like this:
q=*:*{!join score=none from=id fromIndex=hss_4 to=rpk_hdquotes v=$qq}*:*
--
Vadim
Hi Walter,
A searcher has an immutable (stale) view of the index as of when it was
created. Therefore, a soft commit always opens a new searcher, because this
new searcher will reflect the changes in the index since the last commit.
When you are doing a hard commit you have the option of not opening
It would be great if the documentation explicitly said that soft commits open a
new Searcher. That would parallel the discussion under hard commits.
wunder
Walter Underwood
wun...@wunderwood.org
http://observer.wunderwood.org/ (my blog)
> On Nov 16, 2018, at 11:21 AM, Shawn Heisey wrote:
>
>
/or
open a new Searcher. I guess I can see a use case where it would be
OK for the caches to have stale information for a while, but uncached
searches would find the new documents. And invalidating individual
entries in the document cache might be doable.
The only way you'll see changes
On 11/16/2018 11:54 AM, Walter Underwood wrote:
Does a soft commit always open a new Searcher?
In general, yes. To quote the oft-referenced blog post ... hard commits
are about durability, soft commits are about visibility.
I actually don't know if "openSearcher=false" would
Does a soft commit always open a new Searcher?
I’ve been reading all the documentation and articles I can find, and they all
say that soft commit makes documents visible for searching. They don’t
specifically say that they invalidate the caches and/or open a new Searcher. I
guess I can see a
Thank You Erick!
--
Sent from: http://lucene.472066.n3.nabble.com/Solr-User-f472068.html
ng. So if you throttle your indexing you
can maybe make it better.
2> How often do you commit such that it opens a searcher? Either soft
commits or hard commits with openSearcher=true. See:
https://lucidworks.com/2013/08/23/understanding-transaction-logs-softcommit-and-commit-in-sorlcloud/
3> H
is
completed - again searcher is faster.
65497572 [http-apr-8980-exec-40] INFO
org.apache.solr.update.processor.LogUpdateProcessor – [bbc] webapp=/solr
path=/update params=
{distrib.from=http://ser6.rit.net:8980/solr/bbc/&update.distrib=FROMLEADER&wt=javabin&version=2&update.ch
Chris:
LGTM, except maybe ;).
You'll want to look closely at your admin UI/Analysis page for the
field (or fieldType) once it's defined. Uncheck the "verbose" box when
you look the first time, it'll be less confusing. That'll show you
_exactly_ what the results are and whether they match your
Erick,
On 3/12/18 1:00 PM, Erick Erickson wrote:
> bq: which you aren't supposed to edit directly.
>
> Well, kind of. Here's why it's "discouraged":
> https://lucene.apache.org/solr/guide/6_6/schema-api.html.
>
> But as long as you don't mix-and-
People can discourage that, but we only use hand-edited schema and solrconfig
files. Those are checked into version control. I wrote some Python to load them
into Zookeeper and reload the cluster.
This allows us to use the same configs in dev, test, and prod. We can actually
test things before
bq: which you aren't supposed to edit directly.
Well, kind of. Here's why it's "discouraged":
https://lucene.apache.org/solr/guide/6_6/schema-api.html.
But as long as you don't mix-and-match hand-editing with using the
schema API you can hand edit it freely. You're then in charge of
pushing it to
All,
I'd like to add a new synthesized field that uses a phonetic analyzer
such as Beider-Morse. I'm using Solr 7.2.
When I request the current schema via the schema API, I get a list of
existing fields, dynamic fields, and analyzers, none of which
Hmmm. oddly another poster was seeing this due to permissions issues,
although I don't know why that would clear up after a while. But it's
something to check.
Erick
On Wed, Aug 30, 2017 at 3:24 PM, Sundeep T wrote:
> Hello,
>
> Occasionally we are seeing errors opening new s
Hello,
Occasionally we are seeing errors opening new searcher for certain solr
cores. Whenever this happens, we are unable to query or ingest new data
into these cores. It seems to clear up after some time though. The root
cause seems to be "org.apache.lucene.store.LockObtainFailedException
use it.
>> >
>> > What when I go into production? Should I be aware of anything?
I am trying to create a Solr core on a Google Cloud Linux server using the Bitnami
launchpad. But when I'm trying to create a new core it gives me the below
error message.
Can anyone see what is wrong?
org.apache.solr.common.SolrException: Error opening new searcher
at org.apache.solr.core.SolrCore.<init>(SolrCore.java:952)
ing D?
:
: Are there some situations where the (potentially extremely short lived)
: C searcher must be visible before D replaces it?
In theory it might make sense to throw out C, but in practice:
1) since maxWarmingSearchers is typically a small value, E (and
sometimes D) are rarely creat
On Tue, 2016-12-13 at 16:07 -0700, Chris Hostetter wrote:
> ** "warming" happens in a single-threaded executor -- so if there
> are multiple ondeck searchers, only one of them at a time is ever a
> "warming" searcher
> ** multiple ondeck searchers can be a sign of a
(disclaimer: I'm writing this all from memory; maybe there was some code
change at some point that I'm not aware of and I'm completely wrong)
: I've always understood the "on deck" searcher(s) being the same as the
: warming searcher(s). So you have the "ac
We've got a patch to prevent the exceptions:
https://issues.apache.org/jira/browse/SOLR-9712
-Yonik
On Fri, Dec 9, 2016 at 7:45 PM, Joel Bernstein wrote:
> The question about allowing more the one on-deck searcher is a good one.
> The current behavior with maxWarmingSearcher config
The question about allowing more the one on-deck searcher is a good one.
The current behavior with the maxWarmingSearchers config is to throw an
exception if searchers are being opened too frequently. There is probably a
good reason why it was done this way but I'm not sure of the history behind it.
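For reference, the setting being discussed lives in solrconfig.xml; a minimal sketch:

```xml
<!-- solrconfig.xml: cap on searchers warming concurrently; commits that
     would exceed it fail with "exceeded limit of maxWarmingSearchers". -->
<maxWarmingSearchers>2</maxWarmingSearchers>
```

Raising the limit trades memory for fewer rejected commits, but as the thread notes, the usual fix is committing less often rather than raising the cap.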
r was the next one to take off),
and was later used heavily in baseball (the "on deck" batter was the one
warming up to go next) and probably elsewhere.
I've always understood the "on deck" searcher(s) being the same as the
warming searcher(s). So you have the "active
Jihwan:
Correct. Do note that there are two distinct warnings here:
1> "Error opening new searcher. exceeded limit of maxWarmingSearchers"
2> "PERFORMANCE WARNING: Overlapping onDeckSearchers=..."
in <1>, the new searcher is _not_ opened.
in <2>, the new searcher _is_ opened; there is just more than one warming at once.
warming. If the
value is 1, the second warming will fail. More concurrent
warming requires more memory.
On Fri, Dec 9, 2016 at 9:14 AM, Erick Erickson
wrote:
> bq: because shouldn't there only be one active
> searcher at a time?
>
> Kind of. This is a total
bq: because shouldn't there only be one active
searcher at a time?
Kind of. This is a total nit, but there can be multiple
searchers serving queries briefly (one hopes at least).
S1 is serving some query when S2 becomes
active and starts getting new queries. Until the last
query S1 is servi
Hmmm, conflicting answers. Given the infamous "PERFORMANCE WARNING:
Overlapping onDeckSearchers" log message, it seems like the "they're the
same" answer is probably correct, because shouldn't there only be one active
searcher at a time?
Although it makes me curio
An on-deck searcher is not yet the active searcher. The SolrCore increments
the on-deck searcher count prior to starting the warming process. Unless
it's the first searcher, a new searcher will be warmed and then registered.
Once registered the searcher becomes active.
So, the initial que
On 12/8/2016 6:08 PM, Brent wrote:
> Is there a difference between an "on deck" searcher and a warming
> searcher? From what I've read, they sound like the same thing.
The on-deck searcher is the one that's active and serving queries. A
warming searcher is one that is
Is there a difference between an "on deck" searcher and a warming searcher?
From what I've read, they sound like the same thing.
Hi Erick
Thanks for your help, it is alright now.
Have a good day
Victor
Original message
Subject: Re: Error opening new searcher
From: Erick Erickson
To: solr-user
Date: 20/05/2016 17:57
Actually, it almost certainly _is_ in the regular Solr log file, just
which
INFO to WARN but this kind of log
> was not in the solr.log regular log file !
>
> Regards
> Victor
>
> Original message ----
> Subject: Re: Error opening new searcher
> From: Shawn Heisey
> To: solr-user@lucene.apache.org
> Date: 20/05/2
Hi Shawn
Ok, I am going to commit less often then.
I have planned to set the console log from INFO to WARN but this kind of
log was not in the solr.log regular log file !
Regards
Victor
Original message
Subject: Re: Error opening new searcher
From: Shawn Heisey
To
c:db s:shard3 r:core_node3
> x:db_shard3_replica1] o.a.s.u.p.DistributedUpdateProcessor Error
> sending update to http://10.69.212.22:8983/solr
> org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException:
> Error from server at http://10.69.212.22:8983/solr/db_shard3_replica1:
> Erro
:8983/solr
org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException:
Error from server at http://10.69.212.22:8983/solr/db_shard3_replica1:
Error opening new searcher. exceeded limit of maxWarmingSearchers=2, try
again later.
Am I supposed to resend the document or will it be inserted just fine
later?
And is it p
On Mon, Apr 18, 2016 at 8:02 PM, Erick Erickson wrote:
> This is about real-time get.
To clarify, it's used to handle real-time get type functionality in
general. It's used internally in a couple ways, not just when a user
issues a "real-time get".
-Yonik
Hi Erick,
Thanks for the info. I was under the impression that we have the extra setting
"openSearcher" to control when searchers are opened.
From what you are saying, a searcher can be opened not only as a result of
hard or soft commit.
What I am observe, to follow your example:
T0
Erick can correct me. I think "searcher" here might just sound a bit
misleading. Real time get is really about fetching by id, not issuing
searches per-se. Only after a soft or hard commit does a document truly
become searchable.
On Mon, Apr 18, 2016 at 8:02 PM Erick Erickson
wrote:
Hi,
What exactly triggers opening new "realtime" searcher?
2016-04-18_16:28:02.33289 INFO (qtp1038620625-13) [c:col1 s:shard1
r:core_node3 x:col1_shard1_replica3] o.a.s.s.SolrIndexSearcher Opening
Searcher@752e986f[col1_shard1_replica3] realtime
I am seeing above being triggered w
Hello. My company uses Solr-4.10 in a distributed environment. I have
> written a SearchComponent which contains a cache which is loaded at
> start-up. The cache is only used on the searchers, never on the
> aggregators. Is there some way I can signal that the cache should be loaded
> only if
instance in question is a searcher, not if it is an
aggregator? At present the cache is loaded in the inform() function; is
that the wrong place for it?
Thanks in advance for your help.
Jitka
See SOLR-5783.
-Original message-
> From:Alessandro Benedetti
> Sent: Wednesday 15th July 2015 14:48
> To: solr-user@lucene.apache.org
> Subject: Re: To the experts: howto force opening a new searcher?
>
> 2015-07-15 12:44 GMT+01:00 Markus Jelsma :
>
Am 15.07.2015 um 14:47 schrieb Alessandro Benedetti:
...
>>> What ever you name a problem, I just wanted to open a new searcher
>>> after several days of heavy load/searching on one of my slaves
>>> to do some testing with empty field-/document-/filter-caches.
>>
2015-07-15 12:44 GMT+01:00 Markus Jelsma :
> Well yes, a simple empty commit won't do the trick, the searcher is not
> going to reload on recent versions. Reloading the core will.
>
mmm Markus, let's assume we trigger a soft commit, even an empty one; if
openSearcher is true,
Hi Markus,
excellent, reloading the core did it.
Best regards
Bernd
Am 15.07.2015 um 13:44 schrieb Markus Jelsma:
> Well yes, a simple empty commit won't do the trick, the searcher is not going
> to reload on recent versions. Reloading the core will.
>
>
Well yes, a simple empty commit won't do the trick, the searcher is not going
to reload on recent versions. Reloading the core will.
-Original message-
> From:Bernd Fehling
> Sent: Wednesday 15th July 2015 13:42
> To: solr-user@lucene.apache.org
> Subject: Re: To t
What ever you name a problem, I just wanted to open a new searcher
after several days of heavy load/searching on one of my slaves
to do some testing with empty field-/document-/filter-caches.
Sure, I could first add, then delete a document and do a commit.
Or maybe only do a fake update of a
Triggering a commit implies the new searcher is opened in a soft
commit scenario.
With a hard commit, you can decide whether or not to open the new searcher.
But this is probably an X/Y problem.
Can you describe your real problem, and not just the way you were trying
to solve it?
Cheers
2015
On top of that, sorry, I didn't answer your question because I don't know
if that is possible.
Best,
Andrea
On 15 Jul 2015 02:51, "Andrea Gazzarini" wrote:
> What do you mean with "clean" state? A searcher is a view over a given
> index (let's say)