Hi all
We are migrating from Solr 4.6 to Solr 7.7.2.
In Solr 4.6 the index size was 2.5 GB, but in Solr 7.7.2 the index size is
showing 6.8 GB with the same number of documents. Is this expected behavior, or
are there any suggestions on how to optimize the size?
> colleague who performed the rsync confirmed that it has been entirely completed.
>
> I don't see any uncompleted transactions, which normally means that the
> indexing is complete. That's why I don't understand the difference.
>
> Kind Regards
>
> Matthieu
Sent: Saturday, 9 February 2019 16:56
To: solr-user@lucene.apache.org
Subject: Re: Solr Index Size after reindex
Yes, those numbers are different and that should explain the different size. I
think you should be able to find some information in the Alfresco or Solr logs.
There must be a reason.
*Sent:* Friday, 8 February 2019 14:54
*To:* solr-user@lucene.apache.org
*Subject:* Re: Solr Index Size after reindex
Hi Mathieu,
what about the docs in the two infrastructures? Do they have the same
numbers (numdocs / maxdocs)? Any meaningful message (error or not) in
log files?
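A quick way to check (assuming a standalone Solr on localhost:8983; the core
name "alfresco" below is only an example) is the Luke request handler:
curl 'http://localhost:8983/solr/alfresco/admin/luke?numTerms=0&wt=json'
The index section of the response reports numDocs and maxDoc; a large gap
between the two means deleted documents are still taking up space.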
Andrea
On 08/02/2019 14:19, Mathieu Menard wrote:
Hello,
I would like to have your point of view about an observation we have
While searching, the nested docs are filtered out to get a proper result count.
This required duplicating the nested doc fields in the parent doc.
This duplication of fields has resulted in a huge Solr index size, and I am
planning to get rid of them and use block join for nested doc fields.
This has caused
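For reference, the block join parent query parser lets you query child fields
without copying them into the parent; a minimal sketch, with made-up field
names and assuming the documents are indexed as parent/child blocks:
q={!parent which="doc_type:parent"}child_color:red
This returns the parent documents whose children match child_color:red.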
Hello,
Is there a way to get index size statistics for a given Solr instance? For
example, broken down by each field stored or indexed. The only thing I know of
is running du on the index data files and getting counts per field
indexed/stored; however, each field can be quite different with respect to size.
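For the raw on-disk number, du works; the path below is just an assumption, so
point it at your core's data directory:
du -sh /var/solr/data/<corename>/data/index
As far as I know there is no built-in per-field size breakdown, though the Luke
handler can at least show term counts per field.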
Thanks
John
On 4/10/2017 1:57 AM, Himanshu Sachdeva wrote:
> Thanks for your time and quick response. As you said, I changed our
> logging level from SEVERE to INFO and indeed found the performance
> warning *Overlapping onDeckSearchers=2* in the logs. I am considering
> limiting the *maxWarmingSearchers* coun
On Mon, 2017-04-10 at 13:27 +0530, Himanshu Sachdeva wrote:
> Thanks for your time and quick response. As you said, I changed our
> logging level from SEVERE to INFO and indeed found the performance
> warning *Overlapping onDeckSearchers=2* in the logs.
If you only see it occasionally, it is proba
Hi Himanshu,
maxWarmingSearchers would break nothing in production. Whenever you ask
Solr to open a new searcher, it autowarms the searcher so that it can
utilize caching. After autowarming is complete, the new searcher is opened.
The questions you need to address here are:
1. Are you using soft-com
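If you do decide to cap it, it is a one-line setting in the <query> section of
solrconfig.xml (the value 2 below is just an example):
<maxWarmingSearchers>2</maxWarmingSearchers>
Note that requests which would exceed the limit fail with the "exceeded limit
of maxWarmingSearchers" error rather than queueing up.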
Hi Toke,
Thanks for your time and quick response. As you said, I changed our logging
level from SEVERE to INFO and indeed found the performance warning *Overlapping
onDeckSearchers=2* in the logs. I am considering limiting the
*maxWarmingSearchers* count in configuration but want to be sure that
n
On Thu, 2017-04-06 at 16:30 +0530, Himanshu Sachdeva wrote:
> We monitored the index size for a few days and found that it varies
> widely from 11GB to 43GB.
Lucene/Solr indexes consist of segments, each holding a number of
documents. When a document is deleted, its bytes are not removed
immediately.
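The space held by deleted documents can be reclaimed explicitly; a minimal
sketch, assuming a local core named 'products' (adjust to your core):
curl 'http://localhost:8983/solr/products/update?commit=true&expungeDeletes=true'
expungeDeletes only rewrites segments that actually contain deletions, so it is
usually cheaper than a full optimize.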
Hi all,
We use solr in our website for product search. Currently, we have 2.1
million documents in the products core and these documents each have around
350 fields. >90% of the fields are indexed. We have this master instance of
solr running on 15GB RAM and 200GB drive. We have also configured 10
Did you check if your index still contains 500 docs, or is there more?
Regards,
Edwin
On 12 March 2016 at 22:54, Toke Eskildsen wrote:
> sara hajili wrote:
> > why solr index size become bigger and bigger without adding any new doc?
>
> Solr does not change the index unprov
sara hajili wrote:
> why solr index size become bigger and bigger without adding any new doc?
Solr does not change the index unprovoked. It sounds like your external
document feeding process is still running.
- Toke Eskildsen
Hi, I have about 500 docs stored in Solr.
When I added these 500 docs the Solr index size was about 300 KB,
but it keeps getting bigger and bigger, and now, after about 2 hours, the index
size has become 3500 KB. I didn't add any new docs to Solr, but the index size
keeps getting bigger and bigger
Thanks Erick, our index is relatively static. I think the deletes must be
coming from 'reindexing' the same documents so definitely handy to recover
the space. I've seen that video before. Definitely very interesting.
Brendan
On Wed, Aug 7, 2013 at 8:04 AM, Erick Erickson wrote:
> The general
The general advice is to not merge (optimize) unless your
index is relatively static. You're quite correct, optimizing
simply recovers the space from deleted documents, otherwise
it won't change much (except having fewer segments).
Here's a _great_ video that Mike McCandless put together:
http://b
To maybe answer another one of my questions about the 50Gb recovered when
running:
curl 'http://localhost:8983/solr/update?optimize=true&maxSegments=10&waitFlush=false'
It looks to me that it was from deleted docs being completely removed from
the index.
Thanks
On Tue, Aug 6, 2013 at 11:45
Well, I guess I can answer one of my questions which I didn't exactly
explicitly state, which is: how do I force solr to merge segments to a
given maximum. I forgot about doing this:
curl 'http://localhost:8983/solr/update?optimize=true&maxSegments=10&waitFlush=false'
which reduced the number o
Hi All,
First of all, what I was actually trying to do is actually get a little
space back. So if there is a better way to do this by adjusting the
MergePolicy or something else please let me know. My index is currently
200Gb. In the past (Solr 1.4) we've found that optimizing the index will
doubl
> this is where the problem lies: I need the size of the index. I'm not finding
> the API,
> nor is the statistics printout (sysout) the same.
> How do I get the size of the index?
--
Lance Norskog
goks...@gmail.com
Hi, I was faced with the same problem.
You can get it rectified by allocating more memory to the jar process
that is running:
java -Xmx1024M -Xms1024M -jar start.jar lets you specify the amount of
memory for the process.
--
Thank you,
Vijayant Kumar
Software Engineer
Website Toolbox Inc.
http://w
Solr needs memory allocated for different operations, not for the
index size. It needs X amount of memory for a query, Y amount of
memory for the documents found by a query, and so on. Sorting needs
memory for the number of documents. Faceting needs memory for the
number of unique values in a field
Hello All,
I am trying to start the Solr server using Jetty (same as in the
Solr tutorial on their website). As the index size is around 3.5 GB,
it is returning OutOfMemoryError. Is it mandatory to satisfy the
condition java heap size > index size? If yes, is there any
solution to run Solr s
>I tried to merge the 15 indexes again, and I found out that the new merged
>index (without optimization) size was about 351 GB, but when I optimized it
>the size went back up to 411 GB. Why?
Just as a sample, IOT in Oracle...
OK, in plain terms, what does 'optimization' mean? It means that Ma
On Tue, Aug 25, 2009 at 3:30 PM, engy.ali wrote:
>
> Summary
> ===
>
> I had about 120,000 objects with a total size of 71.2 GB; those objects are already
> indexed using Lucene. The index size is about 111 GB.
>
> I tried to use a Solr 1.4 nightly build to index the same collection. I
> divided
On Sat, Aug 29, 2009 at 7:09 AM, engy.ali wrote:
> I thought that optimization would decrease or at least be equal to the same
> index size before optimization
Some index structures, like norms, are non-sparse. Index one unique
field with norms and there is a byte allocated for every document in
the index
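If you don't need length normalization or index-time boosts on a field, you can
turn that per-document byte off in schema.xml; a sketch where the field name
and type are only illustrative:
<field name="title" type="text_general" indexed="true" stored="true" omitNorms="true"/>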
"true", so that
only this field will make 1Mb if not indexed
-Original Message-
From: Silent Surfer [mailto:silentsurfe...@yahoo.com]
Sent: August-20-09 11:01 AM
To: Solr User
Subject: How to reduce the Solr index size..
Hi,
I am newbie to Solr. We recently started using Solr
uninverted index) (Yonik), term
vectors, stored=true, copyField, etc.
Do not commit per 100 docs; do it once at the end...
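That is, send all the documents without intermediate commits and issue a single
explicit commit when indexing finishes, roughly (assuming the default
single-core setup of that era):
curl 'http://localhost:8983/solr/update?commit=true'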
-Original Message-
From: engy.ali [mailto:omeshm...@hotmail.com]
Sent: August-25-09 3:31 PM
To: solr-user@lucene.apache.org
Subject: Solr index - Size and indexing
google and found out that Solr is using an inverted index, but I want
to know what the internal structure of the Solr index is; for example, if I have
a word and its stems, how will it be stored in the index?
Thanks,
Engy
On Aug 20, 2009, at 11:00 AM, Silent Surfer wrote:
Hi,
I am newbie to Solr. We recently started using Solr.
We are using Solr to process the server logs. We are creating the
indexes for each line of the logs, so that users would be able to do
a fine grain search upto second/ms.
Now what
Hi,
I am a newbie to Solr. We recently started using Solr.
We are using Solr to process server logs. We are creating indexes for
each line of the logs, so that users can do a fine-grained search
down to the second/ms.
Now what we are observing is that the index size that is being create
On Jul 17, 2009, at 8:45 PM, J G wrote:
Is it possible to obtain the SOLR index size on disk through the
SOLR API? I've read through the docs and mailing list questions but
can't seem to find the answer.
No, but it'd be a great addition to the /admin/system handler which
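For what it's worth, recent Solr versions do expose this: the CoreAdmin STATUS
call reports per-core index details, including sizeInBytes (the core name below
is just an example):
curl 'http://localhost:8983/solr/admin/cores?action=STATUS&core=mycore'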
Hello,
Is it possible to obtain the SOLR index size on disk through the SOLR API? I've
read through the docs and mailing list questions but can't seem to find the
answer.
Any help is appreciated.
Thanks.
Slightly different index sizes (even optimized) are normal - the same
document may get different internal docids in different runs. I don't
know why the number of terms is slightly different.
On Fri, Apr 3, 2009 at 7:21 PM, Jun Rao wrote:
>
>
> Hi,
>
> We built a Solr index on a set of documents a
Hi,
We built a Solr index on a set of documents a few times. Each time, we did
an optimize to reduce the index to a single segment. The index sizes are
slightly different across different runs. Even though the documents are not
inserted in the same order across runs, it seems to me that the fina
g the oldest N docs for
removal.
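There is no built-in size-based trigger, but if your documents carry a date
field you can delete the oldest ones periodically; a sketch with an
illustrative field name:
curl 'http://localhost:8983/solr/update' -H 'Content-Type: text/xml' \
  --data-binary '<delete><query>created_date:[* TO NOW-30DAYS]</query></delete>'
followed by a commit.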
Otis --
Sematext -- http://sematext.com/ -- Lucene - Solr - Nutch
- Original Message
> From: Marshall Weir <[EMAIL PROTECTED]>
> To: solr-user@lucene.apache.org
> Sent: Friday, May 23, 2008 3:58:18 PM
> Subject: SOLR index size
>
Hi,
I'm using SOLR to keep track of customer complaints. I only need to
keep recent complaints, but I want to keep as many as I can fit on my
hard drive. Is there any way I can configure SOLR to dump old entries
in the index when the index reaches a certain size? I'm using a month
old ver