Please don't do that ;) Unless you're willing to do it frequently. See:
https://lucidworks.com/2017/10/13/segment-merging-deleted-documents-optimize-may-bad/
expungeDeletes is really a variety of optimize, so the issues outlined
in that blog apply.
Best,
Erick
On Thu, Nov 9, 2017 at 12:24 PM, S
Thanks for the response Erick. I’m deleting the documents with the
expungeDeletes option set to true, so that does trigger a merge to throw away
the deleted documents.
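For reference, a minimal sketch of that operation against the update handler; the core name "core1", the delete query, and localhost are placeholders, not details from this thread:

  # Delete by query, then commit with expungeDeletes="true" so segments holding
  # the deleted documents get merged away. As Erick notes, this is effectively
  # a limited optimize, so it carries the same merge cost.
  curl 'http://localhost:8983/solr/core1/update' -H 'Content-Type: text/xml' \
    --data-binary '<delete><query>status:expired</query></delete>'
  curl 'http://localhost:8983/solr/core1/update' -H 'Content-Type: text/xml' \
    --data-binary '<commit expungeDeletes="true"/>'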
On 11/9/17, 12:17 PM, "Erick Erickson" wrote:
bq: Is there a way to distinguish between when size is being reduced
because of a delete from that of during a Lucene merge.
Not sure what you're really looking for here. Size on disk is _never_
reduced by a delete operation; the document is only 'marked as
deleted'. Only when segments are merged is the space actually reclaimed.
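One way to watch this is the CoreAdmin STATUS output, sketched below with a placeholder core name: it reports the index size on disk alongside maxDoc and numDocs, and maxDoc - numDocs is the count of documents that are only marked deleted and still take up space until a merge drops them.

  # Index size on disk plus maxDoc/numDocs for one core; deleted-but-not-yet-
  # merged documents show up as the difference maxDoc - numDocs.
  curl 'http://localhost:8983/solr/admin/cores?action=STATUS&core=core1&wt=json'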
Hi,
I wanted to get accurate metrics regarding the amount of data being indexed
in Solr. In this regard, I observe that sometimes this number decreases due to
Lucene merges. But I’m also deleting data at times. Is there a way to
distinguish between when size is being reduced because of a delete from that
of during a Lucene merge?
How are your searches/indexing performing over time? Is there any
impact?
Regards
Pravesh
> From: jame vaalet
> To: solr-user@lucene.apache.org
> Cc:
> Sent: Thursday, August 30, 2012 2:40 AM
> Subject: optimum solr core size
>
> Hi,
> I have got a single-core Solr deployment in production and the documents are
> getting added daily (around 1 million entries).
Sharding isn't necessarily decided upon by index size. Is your search
performance ok? Got enough free disk space to optimize? Then don't
shard.
But no, 150M is not a large index size.
700 cores, now that's a lot!
Erik
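For reference, a sketch of an explicit optimize (core name and host are placeholders). Because it rewrites segments, an optimize can transiently need free disk space on the order of the index size itself, which is why the "enough free disk space" question above matters.

  # Force-merge the index down to a single segment; expect heavy I/O and
  # temporary extra disk usage while the merged segment is written.
  curl 'http://localhost:8983/solr/core1/update?optimize=true&maxSegments=1'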
On Dec 17, 2009, at 1:27 PM, Matthieu Labour wrote:
Paul
Thank you for your reply
I did du -sh in /solr_env/index/data
and it shows
36G
It is distributed among 700 cores with most of them being 150M
Is that a big index that should be sharded?
2009/12/17 Noble Paul നോബിള് नोब्ळ्
> Look at the index dir and see the size of the files. It is typically
> in $SOLR_HOME/data/index.
Look at the index dir and see the size of the files. It is typically
in $SOLR_HOME/data/index.
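With many cores, something like the sketch below sums the on-disk size per core. It assumes the stock per-core layout of <core>/data/index under the Solr home; adjust the glob if dataDir is configured differently.

  # Report the on-disk size of each core's index directory.
  for d in "$SOLR_HOME"/*/data/index; do
    du -sh "$d"
  done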
On Thu, Dec 17, 2009 at 2:56 AM, Matthieu Labour wrote:
> Hi
> I am new to solr. Here is my question:
> How to find out the size of a solr core on disk ?
> Thank you
> matt
>
--
Hi
I am new to solr. Here is my question:
How to find out the size of a solr core on disk ?
Thank you
matt
> ----- Original Message -----
> From: Phil Hagelberg
> To: solr-user@lucene.apache.org
> Sent: Mon, November 16, 2009 8:42:49 PM
> Subject: core size
I'm planning out a system with large indexes and wondering what kind
of performance boost I'd see if I split out documents into many cores
rather than using a single core and splitting by a field. I've got about
500GB worth of indexes ranging from 100MB to 50GB each.
I'm assuming if we split
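Both layouts can be sketched as below; every name (the core names, the partition field, and its value) is hypothetical. Which approach wins is mostly a question of how large each Lucene index gets and how independent the partitions really are.

  # One big core, partitioned logically by a field and restricted per query
  # with a filter query:
  curl 'http://localhost:8983/solr/bigcore/select?q=*:*&fq=customer:acme'

  # Many small cores, one per partition, created through the CoreAdmin API:
  curl 'http://localhost:8983/solr/admin/cores?action=CREATE&name=acme&instanceDir=acme'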
Sent: Wednesday, November 12, 2008 7:06:15 PM
Subject: Re: Solr Core Size limit
On Tue, 11 Nov 2008 20:39:32 -0800 (PST)
Otis Gospodnetic <[EMAIL PROTECTED]> wrote:
> With Distributed Search you are limited to # of shards * Integer.MAX_VALUE.
Yeah, makes sense. And I would suspect since this is PER INDEX, it applies to
each core only (so you could have n cores in m shards).
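A sketch of that setup, with hypothetical host and core names: each entry in the shards parameter is its own Lucene index, so the Integer.MAX_VALUE ceiling applies per shard rather than to the combined result.

  # Distributed search across two shards; each shard is an independent index
  # with its own internal doc ID space.
  curl 'http://localhost:8983/solr/core1/select?q=*:*&shards=host1:8983/solr/core1,host2:8983/solr/core2'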
On Tue, 11 Nov 2008 10:25:07 -0800 (PST)
Otis Gospodnetic <[EMAIL PROTECTED]> wrote:
> Doc ID gaps are zapped during segment merges and index optimization.
>
thanks Otis :)
{Beto|Norberto|Numard} Meijome
On Tue, Nov 11, 2008 at 6:59 PM, Matthew Runo <[EMAIL PROTECTED]> wrote:
> What happens when we use another <uniqueKey> in this case? I was under the
> assumption that if we say <uniqueKey>styleId</uniqueKey> then our doc IDs
> will be our styleIds.
>
> Is there a secondary ID that's kept internal to Solr/Lucene in this case?
There is; Lucene keeps its own internal integer doc IDs regardless of what
the uniqueKey field is.
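A sketch of how to check this on a running core (the core name is a placeholder): the Luke handler's schema view shows which field is configured as the <uniqueKey>, while Lucene's internal integer doc IDs are a separate per-segment numbering that gets reassigned as segments merge.

  # Show the schema as Solr sees it, including the uniqueKeyField.
  curl 'http://localhost:8983/solr/core1/admin/luke?show=schema&wt=json'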
To: solr-user@lucene.apache.org
Sent: Monday, November 10, 2008 6:45:01 PM
Subject: Re: Solr Core Size limit
On Mon, 10 Nov 2008 10:24:47 -0800 (PST)
Otis Gospodnetic <[EMAIL PROTECTED]> wrote:
> I don't think there is a limit other than your hardware and the internal Doc
> ID which limits you to 2B docs on 32-bit machines.
Hi Otis,
just curious, is this internal doc ID reused when an optimise happens?
To: solr-user@lucene.apache.org
Sent: Monday, November 10, 2008 5:43:17 AM
Subject: Solr Core Size limit
Hi,
I'm using Solr multicore functionality in my app. I want to know the size
limit for holding the index files in each core. How can I identify the
maximum size limit of the cores?
Thanks in advance
Prabhu.K