back to sitecore once we
> manually indexed the item with changes
>
> Regards
> David Villacorta
>
-----Original Message-----
From: Emir Arnautović [mailto:emir.arnauto...@sematext.com]
Sent: Friday, November 08, 2019 7:53 PM
To: solr-user@lucene.apache.org
Subject: Re: Commit disabled
Hi David,
Index will get updated (hard commit is happening every 15s) but changes will
not be visible until you explicitly commit or you reload core. Note that Solr
restart reloads cores.
HTH,
Emir
--
Monitoring - Log Management - Alerting - Anomaly Detection
Solr & Elasticsearch Consulting Support
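A minimal sketch of the explicit commit Emir describes (core name hypothetical): an XML update message posted to the update handler, equivalent to hitting /update?commit=true, which makes the already-flushed changes visible without restarting Solr.

```xml
<!-- POST to /solr/<core>/update (core name is a placeholder);
     waitSearcher=true blocks until the new searcher is registered -->
<commit waitSearcher="true"/>
```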
On 5/14/2018 11:29 AM, LOPEZ-CORTES Mariano-ext wrote:
> After injecting 200 documents into our Solr server, the commit
> operation at the end of the process (using ConcurrentUpdateSolrClient) takes
> 10 minutes. Is that too slow?
There is a wiki page discussing slow commits:
https://wiki.
Hi Wei,
I'm assuming the lastModified time is when the latest hard commit happens. Is
that correct?
>> Yes, that's correct.
I also sometimes see a difference between replica and leader commit
timestamps where the "diff/lag < autoCommit interval". So in your case you
noticed up to 10 mins.
My guess is
I realized that after doing the commit manually, two shards had a lot fewer
files than the 3rd shard (which failed on commit). However, with the
passage of time, the number of files continued to decrease for the shard
with more files. FWIW, each shard has exactly the same number of documents and
similar
Thanks for the reply.
The issue is, when the core is unloaded, post commit listeners on the core
are not getting called.
If you see here, the code that calls post commit listeners is commented out.
https://github.com/apache/lucene-solr/blob/master/solr/core/src/java/org/apache/solr/update/DirectU
Hi Saiks,
I am not following you.
According to the Solr documentation :
"transient=["true"|"false"]. Whether the core should be put in the LRU list
of cores that may be unloaded. NOTE: When a core is unloaded, any
outstanding operations (indexing or query) will be completed before the core
is closed.
Hi All,
We are a big public company and we are evaluating Solr to store hundreds of
terabytes of data.
Post commit listeners getting called on core close is a must for us.
It would be great if anyone can help us fix the issue or suggest a
workaround :)
Thank you
Alessandro,
I'm not sure which code reference you are asking about, but here they are:
http://lucene.apache.org/core/6_3_0/core/org/apache/lucene/index/DirectoryReader.html#openIfChanged-org.apache.lucene.index.DirectoryReader-org.apache.lucene.index.IndexWriter-boolean-
http://blog.mikemccandless.
Interesting Michael, can you pass me the code reference?
Cheers
--
View this message in context:
http://lucene.472066.n3.nabble.com/Commit-required-after-delete-tp4312697p4313692.html
Sent from the Solr - User mailing list archive at Nabble.com.
Hello, Friend!
You absolutely need to commit to make the delete visible. And even more, when
"softCommit" is issued at the Lucene level, there is a flag which ignores
deletes for the sake of performance.
On Jan 6, 2017 at 10:55, "Dorian Hoxha"
wrote:
Hello friends,
Based on what I've read, I th
It would be worth looking into the iostat output for your disks.
On Aug 22, 2016 10:11 AM, "Alessandro Benedetti"
wrote:
I agree with the suggestions so far.
The cache auto-warming doesn't seem the problem as the index is not massive
and the auto-warm is for only 10 docs.
Are you using any warming query for the new searcher?
Are you using soft or hard commit?
This can make the difference (soft commits are much cheaper, n
Midas,
I’d like further clarification as well. Are you sending commits along with each
document that you’re POSTing to Solr? If so, you’re essentially either opening
a new searcher or flushing to disk with each POST which could explain latency
between each request.
Thanks,
Esther
> On Aug 11,
bq: we post json documents through the curl it takes the time (same time i
would like to say that we are not hard committing ). that curl takes time
i.e. 1.3 sec.
OK, I'm really confused. _what_ is taking 1.3 seconds? When you said
commit, I was thinking of Solr's commit operation, which is total
Hi Midas,
1. How many indexing threads?
2. Do you batch documents and what is your batch size?
3. How frequently do you commit?
I would recommend:
1. Move commits to Solr (set auto soft commit to max allowed time)
2. Use batches (bulks)
3. tune bulk size and number of threads to achieve max perf
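Emir's first recommendation can be sketched in solrconfig.xml; the interval values below are illustrative only, not prescriptive:

```xml
<updateHandler class="solr.DirectUpdateHandler2">
  <!-- Hard commit: flush to disk for durability, but don't open a searcher -->
  <autoCommit>
    <maxTime>${solr.autoCommit.maxTime:15000}</maxTime>
    <openSearcher>false</openSearcher>
  </autoCommit>
  <!-- Soft commit: visibility only; set maxTime to the longest staleness
       the application can tolerate -->
  <autoSoftCommit>
    <maxTime>${solr.autoSoftCommit.maxTime:60000}</maxTime>
  </autoSoftCommit>
</updateHandler>
```

With this in place the client never issues commits itself; visibility latency is bounded by the autoSoftCommit interval.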
Emir,
other queries:
a) Solr cloud : NO
b)
c)
d)
e) we are using multi threaded system.
On Thu, Aug 11, 2016 at 11:48 AM, Midas A wrote:
Emir,
we post json documents through the curl it takes the time (same time i
would like to say that we are not hard committing ). that curl takes time
i.e. 1.3 sec.
On Wed, Aug 10, 2016 at 2:29 PM, Emir Arnautovic <
emir.arnauto...@sematext.com> wrote:
Hi Midas,
According to your autocommit configuration and your worry about commit
time I assume that you are doing explicit commits from client code and
that 1.3s is client observed commit time. If that is the case, then it
might be opening a searcher that is taking time.
How do you index data
Thanks for replying
index size:9GB
2000 docs/sec.
Actually it was taking less earlier, but suddenly it has increased.
Currently we do not have any monitoring tool.
On Tue, Aug 9, 2016 at 7:00 PM, Emir Arnautovic <
emir.arnauto...@sematext.com> wrote:
Hi Midas,
Can you give us more details on your index: size, number of new docs
between commits. Why do you think 1.3s for commit is too much and why do
you need it to take less? Did you do any system/Solr monitoring?
Emir
On 09.08.2016 14:10, Midas A wrote:
please reply it is urgent.
On Tue, Aug 9, 2016 at 11:17 AM, Midas A wrote:
> Hi ,
>
> Commit is taking more than 1300 ms. What should I check on the server?
>
> below is my configuration .
>
> <autoCommit>
>   <maxTime>${solr.autoCommit.maxTime:15000}</maxTime>
>   <openSearcher>false</openSearcher>
> </autoCommit>
> <autoSoftCommit>
>   <maxTime>${solr.autoSoftCommit.maxTime:-1}</maxTime>
> </autoSoftCommit>
>
Sorry, I did not see the responses here because I found out myself. It
definitely seems like a hard commit is performed when shutting down
gracefully. The info I got from production was wrong.
It is not necessarily obvious that you will lose data on "kill -9". The
tlog ought to save you, but it
On 5/20/2016 2:51 PM, Jon Drews wrote:
I would be interested in an answer to this question.
From my research it looks like it will do a hard commit if cleanly shut
down. However if you "kill -9" it you'll lose data (obviously). Perhaps
production isn't cleanly shutting down Solr?
https://dzone.com/articles/understanding-solr-soft
Jon
On 3/3/2016 11:36 PM, sangs8788 wrote:
Hi Sangeetha,
It seems to me that you are using Solr as a primary data store? If that is
true, you should not do that - you should have some other store that is
transactional and can support what you are trying to do with Solr. If
you are not using Solr as a primary store, and it is critical to hav
When a commit fails, the document doesn't get cleared out from MQ and there is
a task which runs in the background to republish the files to SOLR. If we do a
batch commit we will not know; we will end up redoing the same batch commit
again. We currently have a client-side commit which issues the command
So batch them. You get a response back from Solr on whether the document was
accepted. If that fails, there is a failure. What do you do then?
After every 100 docs or one minute, do a commit. Then delete the documents from
the input queue. What do you do when the commit fails?
wunder
Walter Underwood
If you need transactions, you should use a different system, like MarkLogic.
wunder
Walter Underwood
wun...@wunderwood.org
http://observer.wunderwood.org/ (my blog)
> On Mar 3, 2016, at 8:46 PM, sangs8788
> wrote:
>
> Hi Emir,
>
> Right now we are having only inserts into SOLR. The main rea
Hi Varun,
We don't have a SOLR Cloud setup in our system. We have a Master-Slave
architecture setup. In that case I don't see a way where SOLR can guarantee
whether a document got indexed/committed successfully or not.
We even thought about having a flag set up in the db for whichever documents
committed to SOLR
Hi Emir,
Right now we are having only inserts into SOLR. The main reason for having
commit after each document is to get a guarantee that the document has got
indexed in solr. Until the commit status is received back the document will
not be deleted from MQ. So that even if there is a commit failu
Hi Sangeetha,
Well I don't think you need to commit after every document add.
You can rely on Solr's transaction log feature. If you are using SolrCloud
it's mandatory to have a transaction log. So every document gets written
to the tlog. Now say a node crashes; even if documents were not commi
Hi Sangeetha,
What is sure is that it is not going to work - with 200-300K doc/hour,
there will be >50 commits/second, meaning there are <20ms time for
doc+commit.
What you can do is let Solr handle commits and maybe use real-time get to
verify a doc is in Solr, or do some periodic sanity checks.
Are y
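The real-time get Emir mentions relies on the update log; a sketch of the relevant solrconfig.xml pieces (the /get handler name is the conventional one, and the dir property is the usual default):

```xml
<!-- The transaction log that backs real-time get -->
<updateLog>
  <str name="dir">${solr.ulog.dir:}</str>
</updateLog>
<!-- /get?id=<docid> returns the latest version of a document,
     whether or not it has been committed yet -->
<requestHandler name="/get" class="solr.RealTimeGetHandler">
  <lst name="defaults">
    <str name="omitHeader">true</str>
  </lst>
</requestHandler>
```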
Also, is this the thread that went OOM, and what could have caused it? The
heap was doing fine and the server was live and running.
Thanks Shawn for the response.
Seeing very high CPU during this time and very high warmup times. During
this time, there were plenty of these errors logged. So, trying to find
out possible causes for this to occur. Could it be disk I/O issues or
something else as it is related to commit (writi
On 10/28/2015 2:06 PM, Rallavagu wrote:
> Solr 4.6.1, cloud
>
> Seeing following commit errors.
>
> [commitScheduler-19-thread-1] ERROR
> org.apache.solr.update.CommitTracker – auto commit
> error...:java.lang.IllegalStateException: this writer hit an
> OutOfMemoryError; cannot commit at
> org.apac
Hi Upayavira,
You were right. I only had to replace the Content-type with application/xml
and it worked correctly.
Roland
2015-08-30 11:22 GMT+02:00 Upayavira :
>
>
> On Sat, Aug 29, 2015, at 05:30 PM, Szűcs Roland wrote:
> > Hello SOLR experts,
> >
> > I am new to solr as you will see from my
Thanks Erick,
Your blog post made it clear. It was looong, but not too long.
Roland
2015-08-29 19:00 GMT+02:00 Erick Erickson :
> 1> My first guess is that your autocommit
> section in solrconfig.xml has <openSearcher>false</openSearcher>
> So the commitWithin happened but a new searcher
> was not opened thus the document
On Sat, Aug 29, 2015, at 05:30 PM, Szűcs Roland wrote:
> Hello SOLR experts,
>
> I am new to solr as you will see from my problem. I just try to
> understand
how solr works. I use one core (BandW) on my local machine and I use
> javascript for my learning purpose.
>
> I have a test schema.xml
You can probably do a custom update request processor chain and skip
the distributed component. No idea of the consequences though.
Regards,
Alex.
Personal website: http://www.outerthoughts.com/
Current project: http://www.solr-start.com/ - Accelerating your Solr proficiency
On Thu, May 22, 2
This is almost always that you're committing too often, either soft
commit or hard commit with openSearcher=true. Shouldn't have any
effect on the consistency of your index though.
It _is_ making your Solr work harder than you want it to, so consider
increasing the commit intervals substantially.
Thanks Shawn, I appreciate the information.
On Wed, Apr 9, 2014 at 10:27 AM, Shawn Heisey wrote:
On 4/9/2014 7:47 AM, Jamie Johnson wrote:
This is being triggered by adding the commitWithin param to
ContentStreamUpdateRequest (request.setCommitWithin(1);). My
configuration has autoCommit max time of 15s and openSearcher set to false.
I'm assuming that changing openSearcher to true should address this, and
adding the softCommit =
Got a clue how it's being generated? Because it's not going to show
you documents.
commit{,optimize=false,openSearcher=false,waitSearcher=true,expungeDeletes=false,softCommit=false,prepareCommit=false}
openSearcher=false and softCommit=false so the documents will be
invisible. You need one or the
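The behavior discussed here (commitWithin acting as a hard commit with openSearcher=false, leaving documents invisible) can also be steered from configuration in Solr versions that support it; a hedged sketch, assuming a version where the commitWithin block is available in the updateHandler section:

```xml
<updateHandler class="solr.DirectUpdateHandler2">
  <!-- When true, commitWithin triggers a soft commit, which opens a new
       searcher and makes the documents visible -->
  <commitWithin>
    <softCommit>true</softCommit>
  </commitWithin>
</updateHandler>
```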
Below is the log showing what I believe to be the commit
07-Apr-2014 23:40:55.846 INFO [catalina-exec-5]
org.apache.solr.update.processor.LogUpdateProcessor.finish [forums]
webapp=/solr path=/update/extract
params={uprefix=attr_&literal.source_id=e4bb4bb6-96ab-4f8f-8a2a-1cf37dc1bcce&literal.conten
What does the call look like? Are you opening a new searcher
or not? That should be in the log line where the commit is recorded...
FWIW,
Erick
On Sun, Apr 6, 2014 at 5:37 PM, Jamie Johnson wrote:
> I'm running solr 4.6.0 and am noticing that commitWithin doesn't seem to
> work when I am
You say you see the commit happen in the log, is openSearcher
specified? This sounds like you're somehow getting a commit
with openSearcher=false...
Best,
Erick
On Sun, Apr 6, 2014 at 5:37 PM, Jamie Johnson wrote:
> I'm running solr 4.6.0 and am noticing that commitWithin doesn't seem to
> work
Thank you!
-Original Message-
From: Shawn Heisey [mailto:s...@elyograg.org]
Sent: Friday, March 28, 2014 3:14 PM
To: solr-user@lucene.apache.org
Subject: Re: commit=false in Solr update URL
On 3/28/2014 1:02 PM, Joshi, Shital wrote:
> You mean default for openSearcher is false right?
On 3/28/2014 1:02 PM, Joshi, Shital wrote:
You mean default for openSearcher is false right? So unless I specify
commit=false&openSearcher=true in my Solr Update URL the current searcher and
caches will not get invalidated.
If commit=false, openSearcher does not matter -- it's part of a commi
On 3/28/2014 10:22 AM, Joshi, Shital wrote:
What happens when we use commit=false in Solr update URL?
http://$solr_url/solr/$solr_core/update/csv?commit=false&separator=|&trim=true&skipLines=2&_shard_=$shardid
1. Does it invalidate all caches? We really need to know this.
2. Nothing
Thanks for the links. I think it would be worth getting more detailed info.
Because it could be the performance threshold, or it could be something else,
such as an updated Java version, loosely related to RAM, e.g. what is held
in memory before the commit, what is cached, a leaked custom query object
On 2/8/2014 11:02 AM, Roman Chyla wrote:
I would be curious what the cause is. Samarth says that it worked for over
a year /and supposedly docs were being added all the time/. Did the index
grow considerably in the last period? Perhaps he could attach visualvm
while it is in the 'black hole' state to see what is actually going on. I
don't
On 2/8/2014 10:22 AM, Shawn Heisey wrote:
> Can you share your solrconfig.xml file? I may be able to confirm a
> couple of things I suspect, and depending on what's there, may be able
> to offer some ideas to help a little bit. It's best if you use a file
> sharing site like dropbox - the list do
On 2/8/2014 1:40 AM, samarth s wrote:
Yes it is amazon ec2 indeed.
To expand on that:
This solr deployment was working fine, handling the same load, on a 34 GB
instance on ebs storage for quite some time. To reduce the time taken by a
commit, I shifted this to a 30 GB SSD instance. It performed better in
writes and commits for sure. B
On 2/6/2014 9:56 AM, samarth s wrote:
> Size of index = 260 GB
> Total Docs = 100mn
> Usual writing speed = 50K per hour
> autoCommit-maxDocs = 400,000
> autoCommit-maxTime = 1500,000 (25 mins)
> merge factor = 10
>
> M/c memory = 30 GB, Xmx = 20 GB
> Server - Jetty
> OS - Cent OS 6
With 30GB of
On Nov 25, 2013, at 1:40 AM, adfel70 wrote:
Just to clarify how these two phrases come together:
1. "you will know when an update is rejected - it just might not be easy to
know which in the batch / stream"
2. "Documents that come in batches are added as they come / are processed -
not in some atomic unit."
If I send a batch of documents
If you want this promise and complete control, you pretty much need to do a doc
per request and many parallel requests for speed.
The bulk and streaming methods of adding documents do not have a good
fine-grained error reporting strategy yet. It’s okay for certain use cases and
especially b
Hi Mark, Thanks for the answer.
One more question though: You say that if I get a success from the update,
it’s in the system, commit or not. But when exactly do I get this feedback?
Is it one feedback per the whole request, or per each add inside the request?
I will give an example to clarify my que
I suggest you read here:
http://searchhub.org/2013/08/23/understanding-transaction-logs-softcommit-and-commit-in-sorlcloud/
Thanks;
Furkan KAMACI
2013/11/24 Mark Miller
SolrCloud does not use commits for update acceptance promises.
The idea is, if you get a success from the update, it’s in the system, commit
or not.
Soft Commits are used for visibility only.
Standard Hard Commits are used essentially for internal purposes and should be
done via auto commit ge
Take a look at solrconfig.xml. You configure filterCache,
documentCache, queryResultCache. These (and
some others I believe, but certainly these) are _not_
per-segment caches, so are invalidated on soft commit.
Any autowarming you've specified also gets executed
if applicable.
On the other hand,
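The top-level caches described above live in the query section of solrconfig.xml; the sizes and autowarm counts below are illustrative only:

```xml
<query>
  <!-- Top-level caches: discarded whenever a new searcher opens
       (soft commit, or hard commit with openSearcher=true) -->
  <filterCache class="solr.FastLRUCache" size="512" initialSize="512"
               autowarmCount="0"/>
  <queryResultCache class="solr.LRUCache" size="512" initialSize="512"
                    autowarmCount="0"/>
  <!-- documentCache cannot be autowarmed: internal doc ids change
       between searchers -->
  <documentCache class="solr.LRUCache" size="512" initialSize="512"/>
</query>
```

Larger autowarmCount values make new searchers warmer but every commit more expensive, which is the trade-off this thread keeps circling.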
Erik-
"It does invalidate the "top level" caches, including the caches you
configure in solrconfig.xml."
Could you elucidate?
--
View this message in context:
http://lucene.472066.n3.nabble.com/commit-vs-soft-commit-tp4083817p4083844.html
Sent from the Solr - User mailing list archive at Nabble.com.
Soft commits also do not rebuild certain per-segment caches
etc. It does invalidate the "top level" caches, including
the caches you configure in solrconfig.xml.
So no, it's not free at all. Your soft commits should still
be as long an interval as makes sense in your app. But
they're still much fa
Yes a new searcher is opened with every soft commit. It's still considered
faster because it does not write to the disk which is a slow IO operation
and might take a lot more time.
On Sunday, August 11, 2013, tamanjit.bin...@yahoo.co.in wrote:
> Hi,
> Some confusion in my head.
> http://
> http:/
cool.
so far I've been using the default collection 1 only.
thanks,
Jason
On Thu, Jul 11, 2013 at 7:57 AM, Erick Erickson wrote:
Just use the address in the url. You don't have to use the core name
if the defaults are set, which is usually collection1.
So it's something like http://host:port/solr/core2/update? blah blah blah
Erick
On Wed, Jul 10, 2013 at 4:17 PM, Jason Huang wrote:
> Thanks David.
>
> I am actually tryin
Thanks David.
I am actually trying to commit the database row on the fly, not DIH. :)
Anyway, if I understand you correctly, basically you are suggesting to
modify the value of the primary key and pass the new value to "id" before
committing to solr. This could probably be one solution.
What if
Hi Jason,
Assuming you're using DIH, why not build a new, unique id within the query to
use as the 'doc_id' for SOLR? We do something like this in one of our
collections. In MySQL, try this (don't know what it would be for any other db
but there must be equivalents):
select @rownum:=@rownum+1
On 5/3/2013 9:28 AM, vicky desai wrote:
Hi,
When an auto commit operation is fired I am getting the following logs:
INFO: start
commit{,optimize=false,openSearcher=false,waitSearcher=true,expungeDeletes=false,softCommit=false,prepareCommit=false}
Setting the openSearcher to false definitely gave me a lot of performance
improvement but wa
Since you have defined the commit option as "Auto Commit" for hard and soft
commit, you don't have to explicitly call commit from the SolrJ client. And
openSearcher=false for hard commit will make hard commits faster, since it
only makes sure that recent changes are flushed to disk (for durability)
an
Hi,
After using the following config
500
1000
5000
false
When a commit operation is fired I am getting the follow
Hi All,
Setting the openSearcher flag worked and gave me a visible
improvement in commit time. One thing to make note of is that while using the
solrj client we have to call server.commit(false,false), which I was doing
incorrectly and hence was not able to see the improvement earlier.
My solrconfig.xml is as follows
LUCENE_40
2147483647
simple
true
500
1000
That's not ideal.
Can you post solrconfig.xml?
On 3 May 2013 07:41, "vicky desai" wrote:
Hi Gopal,
I added the openSearcher parameter as mentioned by you, but on checking logs
I found that openSearcher was still true on commit. It is only when I
removed the autoSoftCommit parameter that the openSearcher parameter worked
and provided faster updates as well. However, I require soft commit in m
Hi sandeep,
I made the changes you mentioned and tested again for the same set of docs but
unfortunately the commit time increased.
--
View this message in context:
http://lucene.472066.n3.nabble.com/commit-in-solr4-takes-a-longer-time-tp4060396p4060622.html
Sent from the Solr - User mailing lis
Hi Vicky,
I faced this issue as well and after some playing around I found the
autowarm count in cache sizes to be a problem.
I changed that from a fixed count (3072) to a percentage (10%) and all commit
times were stable from then onwards.
HTH,
Sandeep
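Sandeep's fix can be sketched like this (the cache name and sizes are illustrative; LRUCache accepts a percentage for autowarmCount):

```xml
<!-- Before: a large fixed autowarm count replayed on every commit -->
<filterCache class="solr.LRUCache" size="4096" initialSize="1024"
             autowarmCount="3072"/>

<!-- After: autowarm only 10% of the current cache contents -->
<filterCache class="solr.LRUCache" size="4096" initialSize="1024"
             autowarmCount="10%"/>
```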
On 2 May 2013 16:31, Alexandre Rafalovitch
If you don't re-open the searcher, you will not see new changes. So,
if you only have hard commit, you never see those changes (until
restart). But if you also have soft commit enabled, that will re-open
your searcher for you.
Regards,
Alex.
Personal blog: http://blog.outerthoughts.com/
LinkedI
What happens exactly when you don't open searcher at commit?
2013/5/2 Gopal Patwa
you might want to add openSearcher=false for the hard commit, so the hard
commit also acts like a soft commit
5
30
false
On Thu, May 2, 2013 at 12:16 AM, vicky desai wrote:
> Hi,
>
> I am using 1
First, I would upgrade to 4.2.1 and remember to change to
LUCENE_42.
There were a LOT of fixes between 4.0 and 4.2.1.
wunder
On May 2, 2013, at 12:16 AM, vicky desai wrote:
> Hi,
>
> I am using 1 shard and two replicas. Document size is around 6 lakhs
>
>
> My solrconfig.xml is as follows
Hi,
I am using 1 shard and two replicas. Document size is around 6 lakhs
My solrconfig.xml is as follows
LUCENE_40
2147483647
simple
true
500
Can you explain more about your document size, shard and replica sizes, and
auto/soft commit time parameters?
2013/5/2 vicky desai
> Hi all,
>
> I have recently migrated from solr 3.6 to solr 4.0. The documents in my
> core
> are getting constantly updated and so I fire a code commit after every
collection -> Plugins / Stats -> CORE -> searcher
On Wed, Mar 13, 2013 at 4:53 AM, Arkadi Colson wrote:
> Sorry I'm quite new to solr but where exactly in the admin interface can I
> find how long it takes to warm the index?
>
> Arkadi
>
>
> On 03/13/2013 11:19 AM, Upayavira wrote:
>
>> It dep
Sorry I'm quite new to solr but where exactly in the admin interface can
I find how long it takes to warm the index?
Arkadi
On 03/13/2013 11:19 AM, Upayavira wrote:
It depends whether you are using soft commits - that changes things a
lot.
If you aren't, then you should look in the admin inte
It depends whether you are using soft commits - that changes things a
lot.
If you aren't, then you should look in the admin interface, and see how
long it takes to warm your index, and commit at least less frequently
than that (commit more often, and you'll have concurrent warming
searchers which
What would be a good value for maxTime or maxDocs knowing that we insert
about 10 docs/sec? Will it be a problem that we only use maxDocs = 1
because it's not searchable yet...
On 03/13/2013 10:00 AM, Upayavira wrote:
Auto commit would seem a good idea, as you don't want your independent
worker threads issuing overlapping commits. There's also commitWithin
that achieves the same thing.
Upayavira
On Wed, Mar 13, 2013, at 08:02 AM, Arkadi Colson wrote:
> Hi
>
> I'm filling our solr database with about 5mil docs.
Ok this is very surprising.
I just ran the curl command
curl --silent \
"http://xx.xx.xx.xx:8985/solr/collectionABC/update/?commit=true&openSearcher=false"
And on the solr log file I can see these messages:
/Dec 16, 2012 10:44:14 PM org.apache.solr.update.DirectUpdateHandler2 commit
INFO: start
c