Hello! We have a SolrCloud (7.4) cluster consisting of 90+ hosts (each of them
running multiple Solr nodes, e.g. on ports 8983, 8984, 8985), numerous
shards (each having several replicas) and numerous collections.
I was given the task of summarizing the total index size (on disk) of a certain
collection. First
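One hedged way to approach this, sketched below: the Metrics API exposes each
core's on-disk index size as INDEX.sizeInBytes, and the core registry names
embed the collection name, so summing across nodes gives the total. The host
list, the collection name "mycollection", the use of jq, and the exact
response shape are assumptions for illustration, not a definitive tool.

# sum one collection's index size (all replicas on all nodes)
total=0
for hostport in host1:8983 host1:8984 host1:8985; do
  n=$(curl -s "http://$hostport/solr/admin/metrics?group=core&prefix=INDEX.sizeInBytes" |
    jq '[.metrics | to_entries[]
         | select(.key | startswith("solr.core.mycollection."))
         | .value["INDEX.sizeInBytes"]] | add // 0')
  total=$((total + n))
done
echo "total index size for mycollection: $total bytes"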
Hi,
I am in the process of migrating from Solr 6.5.1 to Solr 8.6.3. The current
index size after optimisation is 2.4 TB. We use a 7 TB disk for indexing, as
the optimisation needs extra space.
Now, with the newer Solr, the un-optimised index itself got created at a size
of 5+ TB, which after optimisation
[...the index can] size up by 3-fold, and if you run out of disk space in the
process the optimize will quit (since it can't optimize) and leave the live
index pieces intact, so now you have the "current" index as well as the
"optimized" fragments.
[...If you] get an expanding disk, it will keep growing and prevent this from
happening; then the index will contract and the disk will shrink back to only
what it needs. Saved me a lot of headaches, not needing to ever worry about
disk space.
On Tue, Jun 16, 2020 at 4:43 PM Raveendra Yerraguntla wrote:
> When the optimize command is issued, the expectation after the completion of
> the optimization process is that the index size either decreases or at most
> remains the same. In a Solr 7.6 cluster with 50-plus shards, when the
> optimize command is issued, some of the shards' transient or older segment
> files are
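A minimal sketch of the defensive check this thread implies, assuming a
single-core layout under /var/solr/data (path, core name and the 3x factor
are examples, and GNU du/df are assumed): only optimize when free disk is at
least about three times the index size.

# check free space before forceMerge; a failed optimize leaves both the
# live index and the partial merged segments on disk
idx=/var/solr/data/mycore/data/index
need=$(( $(du -sb "$idx" | cut -f1) * 3 ))
free=$(df --output=avail -B1 "$idx" | tail -1)
if [ "$free" -ge "$need" ]; then
  curl "http://localhost:8983/solr/mycore/update?optimize=true"
else
  echo "not enough free disk for a safe optimize" >&2
fi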
I wouldn't worry about the index size until you get above a half terabyte or
so. Adding docValues and other features means you sacrifice things that
don't matter, like size. Memory and SSDs are cheap.
On Wed, Apr 15, 2020 at 1:21 PM Rajdeep Sahoo wrote:
Hi all,
We are migrating from Solr 4.6 to Solr 7.7.2.
In Solr 4.6 the index size was 2.5 GB, but here in Solr 7.7.2 the index size
is showing as 6.8 GB with the same number of documents. Is this expected
behavior, or are there any suggestions on how to optimize the size?
Do NOT try to get by with the smallest possible RAM or disk.
wunder
Walter Underwood
wun...@wunderwood.org
http://observer.wunderwood.org/ (my blog)
> On Feb 3, 2020, at 5:28 AM, Erick Erickson wrote:
I’ve always had trouble with that advice, that RAM size should be JVM + index
size. I’ve seen 300G indexes (as measured by the size of the data/index
directory) run in 128G of memory.
Here’s the long form:
https://lucidworks.com/post/sizing-hardware-in-the-abstract-why-we-dont-have-a
Hello All,
I want to size the RAM for my SolrCloud instance. The rule of thumb is that
your total RAM size should be = (JVM size + index size).
Now I have a simple question: how do I know my index size? Is there a simple
method, perhaps from the SolrCloud admin UI or an API?
My assumption so far is the total
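One hedged way to read it off per core (host/core names and jq are just for
illustration): the CoreAdmin STATUS response reports each core's index size
on disk, the same numbers the admin UI shows.

curl -s "http://localhost:8983/solr/admin/cores?action=STATUS&wt=json" |
  jq '.status[] | {core: .name, sizeInBytes: .index.sizeInBytes, size: .index.size}'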
> [...my] colleague, who realized the rsync confirms that it has been entirely
> completed.
>
> I don't see any incomplete transaction, which normally means that the
> indexation is completed. That's why I don't understand the difference.
>
> Kind Regards
>
> Matthieu
Sent: Saturday, 9 February 2019 16:56
To: solr-user@lucene.apache.org
Subject: Re: Solr Index Size after reindex
Yes, those numbers are different, and that should explain the different size.
I think you should be able to find some information in the Alfresco or Solr
log. There must be a reason
Hi Mathieu,
what about the docs in the two infrastructures? Do they have the same
numbers (numdocs / maxdocs)? Any meaningful message (error or not) in
log files?
Andrea
On 08/02/2019 14:19, Mathieu Menard wrote:
Hello,
I would like to have your point of view about an observation we have
While searching, the nested docs are filtered out for a proper result count.
This required duplicating the nested-doc fields in the parent doc.
This duplication of fields has resulted in a huge Solr index size, and I am
planning to get rid of them and use block join for the nested-doc fields.
This has caused
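For reference, a hedged sketch of the block-join pattern being considered
(core and field names are invented): children are indexed nested under their
parents, and a {!parent} query returns parents matched on child fields, so
the child fields no longer need to be duplicated into the parent.

# return parent docs whose nested children match child_color:red
curl "http://localhost:8983/solr/mycore/select" \
  --data-urlencode 'q={!parent which="doc_type:parent"}child_color:red' \
  --data-urlencode 'fl=id,name'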
Can't really be answered. For instance, stored data is held in *.fdt
files and is largely irrelevant to searching, since that data is only
consulted for returning stored fields of the top N docs. So if your
index consists of 90% stored data it's one answer; if 10%, it's totally
another. The stored da
Was wondering if anyone has an idea of the ratio of index size for
indexed-only vs. stored-and-indexed fields in Solr 7.x. I was going to run
some testing myself later today, but was curious what others have seen in
this regard.
Thanks,
David
About which details do you ask? Yesterday we restarted all our Solr services,
and the index size on serverX decreased from 82 GB to 60 GB, while on serverY
the index size didn't change (49 GB).
Hi,
What about the cores' segment details in the admin UI? More deleted
documents?
Regards,
Dominique
On Sun, 7 Oct 2018 at 08:22, SOLR4189 wrote:
Hi all,
We use Solr 6.5.1 and we have a very strange issue. In our collection, the
index size is very different from server to server (a 33 GB difference):
1. We have an index size of 82 GB on serverX and 49 GB on serverY.
2. ServerX shows 82 GB of used space if we run "df -h
/opt/solr/Xxx_shardX_replica1/data/
Hi,
Is there a way to monitor the size of the index broken down by individual
fields across documents? I understand there are different parts - the
inverted index and the stored fields - and an estimate would be a good start.
Thanks,
John
gathered so far.
Your situation caught our attention, and changing the order of the documents
in input definitely shouldn't affect the index size (by such a great factor).
The fact that the optimize didn't change anything is even more suspicious.
It may be an indicator that in some edge cases o
Hi Erick & Alessandro,
I have solved my problem by re-ordering the data in the SQL query. I don't
know why it works, but it does. I can consistently reproduce the problem
without changing anything else except the database table. As our Solr build
is scripted and we always build a new Solr s
I didn't mean to imply that _you'd_ changed things; the _defaults_ may
have changed. So the "string" fieldType may be defined with
docValues="true" in your new schema and "false" in your old schema
without you intentionally changing anything at _all_.
That's why the LukeRequestHandler will help
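A sketch of that check (core name is an example; jq is just for readability):
the Luke request handler reports, per field, both the schema flags and the
flags actually present in the index, so the two snapshots can be diffed for
docValues/norms differences.

curl -s "http://localhost:8983/solr/mycore/admin/luke?wt=json" |
  jq '.fields | to_entries[] | {field: .key, schema: .value.schema, index: .value.index}'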
Hi Erick,
I'm 99% sure that I haven't changed the field types between the two snapshots
as all of my test runs are completely scripted and build a new Solr server from
scratch (both the virtual machine and the Solr software). I can diff the
scripts between two runs to make sure I haven't acci
Well, I'm not entirely sure either ;)
Here's what I'm seeing (and, BTW, I'm making a couple of assumptions here).
In the one listing, your biggest segment starts with _7l, and in the other
it's _zd. The aggregate size is
2,815M for _7l and 705M for _zd. So multiplying the individual files
in _zd by 4 (p
Hi Erick,
Thinking some more about the differences between the two sort orders has
suggested another possibility. We also have a geo spatial field defined in the
index:
echo "$(date) Creating geoLocation field"
curl -X POST -H 'Content-type:application/json' --data-binary '{
"add-fiel
Hi Erick,
Below is the file listing for when the index is loaded with the table ordered
in a way that produces the smaller index.
I have checked the console, and we have no deleted docs and we have the same
number of docs in the index as there are rows in the staging table that we load
from.
Hi Alessandro,
There are 14,061,990 records in the staging table and that is how many
documents that we end up with in Solr. I would be surprised if we have a
problem with the id, as we use the primary key of the table as the id in Solr
so it must be unique.
The primary key of the staging ta
It's a silly thing, but to confirm the direction that Erick is suggesting:
how many rows are in the DB?
If updates are happening on Solr (causing the deletes), I would expect a
greater number of documents in the DB than in the Solr index.
Is the DB primary key (if any) the same as the uniqueKey field
Hi Emir,
We have no copy field definitions. To keep things simple, we have a one to one
mapping between the columns in our staging table and the fields in our Solr
index.
Regards,
David
but possible.
> The shortcut here would be to optimize afterwards. In the usual course
> of events this should _not_ be necessary (or even desirable) unless
> you do it every time you build your index for arcane reasons, see:
> https://lucidworks.com/2017/10/13/segment-merging-deleted-documents-optimize-may-bad/
But if you do optimize (forceMerge) and the size drops back to more
reasonable levels it would be a clue.
Ordering simply should not affect the final index size except for,
possibly, changing the number of deleted docs in the index largely
through chance. If you do see a dramatic difference, try th
Hi Erick,
I have the full dump of the Solr index file sizes as well if that is of any
help. I have attached it below this message.
We don't have any deleted docs in our index, as we always build it from a brand
new virtual machine with a brand new installation of Solr.
The ordering is defini
David:
Rats, the cfs files make everything I'd hoped to understand from the
sizes ambiguous, since they conceal the underlying sizes of each file
extension. We can approach it a bit differently, though. Take one
segment that's _not_ in cfs format, where the total size of all files
making up that se
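A sketch of that approach (the path and segment name _zd are examples): list
one non-cfs segment's files by extension, so the per-structure sizes are
visible (.fdt stored fields, .tim/.tip terms, .dvd/.dvm docValues, and so on).

cd /var/solr/data/mycore/data/index
# one line per file: extension and size in bytes
ls -l _zd.* | awk '{n = split($NF, a, "."); printf "%-6s %12d bytes\n", a[n], $5}'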
@Alessandro I will see if I can reproduce the same issue just by turning
off omitNorms on the field type. I'll open another mail thread if required.
Thanks.
On Thu, Feb 15, 2018 at 6:12 AM, Howe, David wrote:
Hi Alessandro,
Some interesting testing today that seems to have gotten me closer to what the
issue is. When I run the version of the index that is working correctly
against my database table that has the extra field in it, the index suddenly
increases in size. This is even though the data i
@Pratik: you should have investigated. I understand that solved your issue,
but in case you needed norms, it doesn't make sense that it caused your index
to grow by a factor of 30. You must have faced a nasty bug if it was just
the norms.
@Howe:
*Compound File* .cfs, .cfe An optional "virtual" file consisting of all the
other index files, for systems that frequently run out of file handles.
Hi Pratik,
how is it possible that just the norms for a single field were causing such
a massive index size increment in your case?
In your case I think it was for a field type used by multiple fields, but
it's still suspicious in my opinion; norms shouldn't be that big.
If I remember correctly
or sort on the field. This _will_
increase the index size on disk, but it's almost always a good
tradeoff. Here's why:
To facet, group or sort you need to "uninvert" the field. If you have
docValues=false, this uninversion is done at run-time into Java's heap.
If you have do
I had a similar issue with index size after upgrading to version 6.4.1 from
5.x. The issue for me was that the field which caused the index size to
increase disproportionately had a field type ("text_general") for which the
default value of omitNorms was not true. Turning it on explicitly
I have set docValues=false on all of the string fields in our index that have
indexed=false and stored=true. This gave a small improvement in the index size
from 13.3GB to 12.82GB.
I have also tried running an optimize, which then reduced the index to 12.6GB.
Next step is to dump the sizes
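For reference, a hedged sketch of how such a flag gets flipped (field, type
and core names are examples, not the actual schema): the Schema API's
replace-field redefines a field with docValues disabled. A full reindex is
needed afterwards.

curl -X POST -H 'Content-type:application/json' --data-binary '{
  "replace-field": {
    "name": "addressLine1",
    "type": "string",
    "indexed": false,
    "stored": true,
    "docValues": false
  }
}' http://localhost:8983/solr/mycore/schema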
Thanks Hoss. I will try setting docValues to false, as we only ever want to be
able to retrieve the value of this field.
Regards,
David
Hi Erick,
Thanks for responding. You are correct that we don't have any deleted docs.
When we want to re-index (once a fortnight), we build a brand new installation
of Solr from scratch and re-import the new data into an empty index.
I will try setting docValues to false and see if that make
Hi Alessandro,
The docker image is like a disk image of the entire server, so it includes the
operating system, the Solr installation and the data. Because we run in the
cloud and our index isn't that big, this is an easy and fast way for us to
scale our Solr cluster without having to configu
To piggyback on this, what would be the right scenarios to use
docValues='true'?
On Tue, Feb 13, 2018 at 1:10 PM, Chris Hostetter wrote:
: We are using Solr 7.1.0 to index a database of addresses. We have found
: that our index size increases massively when we add one extra field to
: the index, even though that field is stored and not indexed, and doesn’t
what about docValues?
: When we run an index load without the
David:
Right, Optimize Is Evil. Well, actually in your case it's not. In your
specific case you can optimize every time you build your index and be
OK, gory details here:
https://lucidworks.com/2017/10/13/segment-merging-deleted-documents-optimize-may-bad/
But that's just for background. The key
Hi David,
given the fact that you are actually building a new index from scratch, my
shot in the dark didn't hit any target.
When you say: "Once the import finishes we save the docker image in the
AWS docker repository. We then build our cluster using that image as the
base"
Do you mean just c
Hi Alessandro,
Thanks for responding. We rebuild the index every time, starting from a fresh
installation of Solr. Because we are running at AWS, we have automated our
deployment, so we start with the base docker image, configure Solr and then
import our data every time the data changes (it onl
I assume you re-index in full, right?
My shot in the dark is that this increment is temporary.
You re-index, so you effectively delete and add all documents (this means
that even if the new field is just stored, you re-build the entire index for
all the fields).
This creates new segments, and the old docs ar
Hi,
We are using Solr 7.1.0 to index a database of addresses. We have found that
our index size increases massively when we add one extra field to the index,
even though that field is stored and not indexed, and doesn’t contain a lot of
data. When this occurs, we also observe a significant
On 12/7/2017 1:27 PM, Natarajan, Rajeswari wrote:
> We have upgraded Solr from 4.5.1 to 4.10.4 and we see an index size
> reduction. Trying to see if any optimization was done to decrease the index
> sizes; couldn't locate it. If anyone knows why, please share.
Here's a history where you can see a summary of the changes in
Lucene's index format in various ve
Hi,
We have upgraded Solr from 4.5.1 to 4.10.4 and we see an index size reduction.
Trying to see if any optimization was done to
decrease the index sizes; we couldn't locate it. If anyone knows why, please
share.
Thank you,
Rajeswari
Hello,
Is there a way to get index size statistics for a given Solr instance? For
example, broken down by each field, stored or indexed. The only thing I know
of is running du on the index data files and getting counts per field
indexed/stored; however, each field can be quite different with respect to
size.
Thanks,
John
Hi,
I have an issue, as described below, when using document routing.
1: Queries are slower with a heavy index, for the following setup.
Config: 4 shards and 4 replicas, with an 8.5 GB index size (a 2 GB index size
for each shard).
With routing parameter:
q=worksetid_l:2028446%20AND%20modelid_l:23718&rows=1&
One additional bit: the *.fdt files contain the stored values (i.e.
stored=true). This is a verbatim, compressed copy of the input for these
fields. This data does not need to reside in any memory. Say you have
rows=10 and numFound is 10,000,000. The stored data is only accessed
for the 10 returned d
On 5/11/2017 4:59 PM, S G wrote:
> How can 50GB index be handled by a 10GB heap?
> I am a developer myself and would love to know as many details as possible.
> So a long answer would be much appreciated.
Lucene (which is what provides large pieces of Solr's functionality)
does not read the enti
> Is there a recommendation on the size of index that one should host
> per core?
No, there really isn't.
I can list off a bunch of recommendations, but a whole bunch of things
that I don't know about your install could make those recommendations
completely wrong. An index size that works really well for one person
might have terrible performance for another.
If you haven't already built it, then there are possibly even things
that YOU don't know about your install yet that can affect what you
need.
https://lucidworks.
I am curious about this as well. I have generally been using about a third
of available memory for the Java heap, so I keep 50 GB of 150 GB available
for the JVM. Do you think this should be reduced?
On Wed, May 10, 2017 at 6:36 PM, Toke Eskildsen wrote:
S G wrote:
> *Rough estimates for an initial size:*
>
> A 50 GB index is best served if all of it is in memory.
Assuming you need low latency and/or high throughput, yes. I mention this
because in many cases the requirements for the number of simultaneous users
and response times are known (at least
Hi,
Is there a recommendation on the size of index that one should host per
core?
The idea is to come up with an *initial* shard/replica setting for a load
test, and then arrive at a good cluster size based on that testing.
*Example:*
Num documents: 100 million
Average document size: 1 KB
So the total raw data is roughly 100 million x 1 KB = ~100 GB.
On 4/10/2017 1:57 AM, Himanshu Sachdeva wrote:
> Thanks for your time and quick response. As you said, I changed our
> logging level from SEVERE to INFO and indeed found the performance
> warning *Overlapping onDeckSearchers=2* in the logs. I am considering
> limiting the *maxWarmingSearchers* coun
This may
explain the index size fluctuations.
Each searcher also requires heap, which might explain why you get Out
Of Memory errors.
This all boils down to avoiding having (too many) overlapping warming
searchers:
* Reduce your auto-warming if it is high
* Prolong the time between searcher-opening commits
* C
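A hedged sketch of the second suggestion above (the interval and core name
are examples): the Config API can lengthen the time between searcher-opening
soft commits, so warming searchers overlap less often.

# open a new searcher at most every 5 minutes via auto soft commit
curl -X POST -H 'Content-type:application/json' --data-binary '{
  "set-property": { "updateHandler.autoSoftCommit.maxTime": 300000 }
}' http://localhost:8983/solr/mycore/config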
only
the slaves? What purpose do the searchers serve exactly? Your time and
guidance will be very much appreciated. Thank you.
On Thu, Apr 6, 2017 at 6:12 PM, Toke Eskildsen wrote:
On Thu, 2017-04-06 at 16:30 +0530, Himanshu Sachdeva wrote:
> We monitored the index size for a few days and found that it varies
> widely from 11GB to 43GB.
Lucene/Solr indexes consist of segments, each holding a number of
documents. When a document is deleted, its bytes are not r
[...configu]red 10 slaves
for handling the reads from the website. Slaves poll the master at an
interval of 20 minutes. We monitored the index size for a few days and found
that it varies widely from 11 GB to 43 GB.
Recently, we started getting a lot of out-of-memory errors on the master.
Every time, Solr beco
will not be normalized. I explicitly added omitNorms=true for the
field type text_general and re-indexed the data. Now, my index size is much
smaller. I haven't verified this with the complete data set yet, but I can
see that the index size is reduced. We have a large data set and it takes
about 5-6
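For reference, a hedged sketch of making that change via the Schema API
(replace-field-type must restate the whole type; the analyzer below is a
minimal assumption, not the actual text_general definition). A full reindex
is required afterwards, as noted above.

curl -X POST -H 'Content-type:application/json' --data-binary '{
  "replace-field-type": {
    "name": "text_general",
    "class": "solr.TextField",
    "omitNorms": true,
    "analyzer": {
      "tokenizer": { "class": "solr.StandardTokenizerFactory" },
      "filters": [ { "class": "solr.LowerCaseFilterFactory" } ]
    }
  }
}' http://localhost:8983/solr/mycore/schema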
Did you look in the data directories to check what index file extensions
contribute most to the difference? That could give a hint.
Regards,
Alex
On 21 Feb 2017 9:47 AM, "Pratik Patel" wrote:
Here is the same question on Stack Overflow, for better formatting:
http://stackoverflow.com/questions/42370231/solr-dynamic-field-blowing-up-the-index-size
Recently, I upgraded from Solr 5.0 to Solr 6.4.1. I can run my app fine, but
the problem is that the index size with Solr 6 is way too large. In Solr 5,
the index size was about 15 GB and in Solr 6, for
Hi Shawn,
Thanks for the information.
Regards,
Edwin
On 14 October 2016 at 20:19, Shawn Heisey wrote:
On 10/13/2016 9:58 PM, Zheng Lin Edwin Yeo wrote:
> Thanks for the reply Shawn. Currently, my heap allocation to each Solr
> instance is 22GB. Is that big enough?
I can't answer that question. I know little about your install. Even
if I *did* know a few more things about your install, I could o
On 10/13/2016 9:20 AM, Zheng Lin Edwin Yeo wrote:
> Would like to find out, will the indexing speed in a collection with a
> very large index size be much slower than one which is still empty or
> a very small index size? This is assuming that the configurations,
> indexing code and
Hi,
I would like to find out: will the indexing speed in a collection with a very
large index be much slower than in one which is still empty or has a very
small index? This is assuming that the configurations, indexing code
and the files to be indexed are the same.
Currently, I have a setup in
Hi,
I would like to check: will the index size for fields which have been defined
as string generally be smaller than for fields which have been defined as a
text field (e.g. KeywordTokenizerFactory)?
Assuming that both of them contain the same value in the fields, and there
are no additional filters for
Hi,
It is quite normal for the index size to come close to doubling during a
background merge of segments. If you have a lot of deletions and/or reindexed
docs, then the same document may also exist in multiple segments, taking up
space temporarily until a merge or optimize.
If this slows down your
Hi,
Suddenly my index size just doubles and indexing slows down badly.
After some time it reduces back to normal and indexing starts working again.
Can someone help me figure out why the index size doubles abnormally?
Did you check if your index still contains 500 docs, or are there more?
Regards,
Edwin
On 12 March 2016 at 22:54, Toke Eskildsen wrote:
sara hajili wrote:
> Why does the Solr index size become bigger and bigger without adding any
> new docs?
Solr does not change the index unprovoked. It sounds like your external
document feeding process is still running.
- Toke Eskildsen
Hi, I have about 500 docs stored in Solr.
When I added these 500 docs, the Solr index size was about 300 KB,
but it became bigger and bigger, and now, after about 2 hours, the Solr index
size has become 3500 KB. I didn't add any new docs to Solr, but the index
size keeps getting bigger
: I'm testing this on Windows, so that may be a factor too (the OS is not
: releasing file handles?!)
Specifically: Windows won't let Solr delete files on disk that have open
file handles...
https://wiki.apache.org/solr/FAQ#Why_doesn.27t_my_index_directory_get_smaller_.28immediately.29_when_i_de