Thanks Erick. I used implicit shards. So the right maintenance could be to
add other shards after a period of time, change the rule that fills the
partition field in the collection, and drop old shards when they are empty. Is
that right? How can I see when the 2 billion records limit is reached?
Sorry, that should have read: I have *not* tested that in solr cloud.
> On Jul 6, 2017, at 6:37 PM, Dave wrote:
>
> I have tested that out in solr cloud, but for solr master slave replication
> the config sets will not go without a reload, even if specified in the
> slave settings.
>
I have tested that out in solr cloud, but for solr master slave replication the
config sets will not go without a reload, even if specified in the slave
settings.
> On Jul 6, 2017, at 5:56 PM, Erick Erickson wrote:
>
> I'm not entirely sure what happens if the sequence is
> 1> node dro
Novin, How long is recovery taking for you? I assume the recovery completes
correctly.
Cheers-- Rick
On July 6, 2017 7:59:03 AM EDT, Novin Novin wrote:
>Hi Guys,
>
>I was just wondering if solr cloud can give information about how much
>recovery has been done by a replica while it is recovering
I'm not entirely sure what happens if the sequence is
1> node drops out due to network glitch but Solr is still running
2> you upload a new configset
3> the network glitch repairs itself
4> the Solr instance reconnects.
Certainly if the Solr node is _restarted_ or _reloaded_ the new
configs are re-read.
Oh, Cool. Thank you, Joel.
I am using Solr 6.1, where I am still facing the issue.
Anyway, it is nice to know that this is fixed in versions 6.4 and later.
Thanks,
Lewin
-----Original Message-----
From: Joel Bernstein [mailto:joels...@gmail.com]
Sent: Wednesday, July 05, 2017 12:58 PM
To: solr-user@
Ok, so although there was a configuration change and/or schema change
(during the network segmentation) that normally requires a manual core reload
(which nowadays happens automatically via the schema API), this replica will
get instructions from Zookeeper to update its configuration and schema,
reload itself
Right, every individual shard is limited to 2B records, and that does
include deleted docs. But I've never seen a shard (a Lucene index,
actually) perform satisfactorily at that scale, so while this is a
hard limit, people usually add shards long before reaching it.
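To keep an eye on per-shard document counts, one option is the Luke request handler, which reports numDocs and maxDoc for a core (maxDoc includes deleted documents, which is the number that counts against the limit). A minimal sketch, assuming a core named cdr_shard1_replica1 on localhost; substitute your own host and core name:

```shell
# Inspect per-core document counts; "index" section of the response
# contains numDocs and maxDoc. numTerms=0 keeps the response small.
curl "http://localhost:8983/solr/cdr_shard1_replica1/admin/luke?numTerms=0&wt=json"
```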
There is no technical reason to optimize every time.
right, when the node connects again to Zookeeper, it will also rejoin
the collection. At that point its index is synchronized with the
leader, and when it goes "active" it should again start serving
queries.
Best,
Erick
On Thu, Jul 6, 2017 at 2:04 PM, Lars Karlsson
wrote:
> Hi all, please
Hi all, please help clarify how solr will handle a network-segmented replica
while a configuration change and a reload of cores/nodes for one collection is
applied.
Does the replica become part of the collection after connectivity is
restored?
Hence the node is not down, but has lost the ability to communicate to
Zookeeper.
Stored should not matter. What do you see when you add &debug=query?
My bet is one of two things:
1> you're not searching against the field you think you are
2> you didn't commit when you ran your first test...
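A hedged example of the suggestion above, assuming a collection named collection1 on localhost; the parsedquery entry in the debug section of the response shows which field and query type were actually used:

```shell
# debug=query adds a "debug" section with the parsed query,
# which reveals whether field1 is really the field being searched
curl "http://localhost:8983/solr/collection1/select?q=field1:true&debug=query&wt=json"
```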
Best,
Erick
On Thu, Jul 6, 2017 at 12:38 PM, Saurabh Sethi
wrote:
> Typo in previous response
Typo in previous response. Following is correct:
On Thu, Jul 6, 2017 at 12:37 PM, Saurabh Sethi
wrote:
>
>
>
> On Thu, Jul 6, 2017 at 12:16 PM, Susheel Kumar
> wrote:
>
>> and how do you create the field? Share the line where you are
>> creating
>> the above field1
>>
On Thu, Jul 6, 2017 at 12:16 PM, Susheel Kumar
wrote:
> and how do you create the field? Share the line where you are creating
> the above field1
>
>
> On Thu, Jul 6, 2017 at 2:42 PM, Saurabh Sethi
> wrote:
>
> > Do we need to store boolean field in order to query it?
> >
> > The query
and how do you create the field? Share the line where you are creating
the above field1
wrote:
> Do we need to store boolean field in order to query it?
>
> The query I am running is "field1:true"
>
> With the following field type, where "stored=false", query returns 0
> result.
>
> stored
Hi,
I'm working on an application that indexes CDRs (Call Detail Records) into
SolrCloud with 1 collection and 3 shards.
Every day the application indexes 30 million CDRs.
I have a purge application that deletes records older than 10 days and calls
OPTIMIZE, so the collection will keep only about 300 million documents.
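The arithmetic behind that retention window is worth making explicit: at 30 million CDRs per day with a 10-day purge window, the collection levels off around 300 million live documents, about 100 million per shard with 3 shards. A quick sketch:

```python
# Back-of-the-envelope steady-state sizing for the CDR collection
docs_per_day = 30_000_000   # CDRs indexed per day
retention_days = 10         # purge window
shards = 3

steady_state = docs_per_day * retention_days
per_shard = steady_state // shards

print(steady_state)  # 300000000 docs at steady state
print(per_shard)     # 100000000 per shard, well under the ~2.1B per-shard cap
```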
Hi Erick,
1) The maxDocs setting is not defined in our config. As far as we know there
isn't a default value.
2) ramBufferSizeMB cannot be exceeded, because the written segment sizes are
less than 1 MB and we have defined a maximum of 2 GB.
3) The autocommit interval is 15 minutes. This could be the reason.
Do we need to store a boolean field in order to query it?
The query I am running is "field1:true"
With the following field type, where stored=false, the query returns 0 results.
But if I change stored to "true", the same query works.
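For reference, searchability is governed by indexed (or docValues), not stored; a sketch of a classic schema.xml definition for a field like this, where the type name "boolean" is an assumption about your schema:

```xml
<!-- indexed="true" makes field1:true matchable; stored only controls
     whether the raw value is returned in search results -->
<field name="field1" type="boolean" indexed="true" stored="false"/>
```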
Thanks,
Saurabh
will check it out, thanks-
--
*John Blythe*
Product Manager & Lead Developer
251.605.3071 | j...@curvolabs.com
www.curvolabs.com
58 Adams Ave
Evansville, IN 47713
On Thu, Jul 6, 2017 at 12:37 PM, Erick Erickson
wrote:
> Have you looked at the JSON facet capabilities? It might work for you...
Have you looked at the JSON facet capabilities? It might work for you
Best,
Erick
On Thu, Jul 6, 2017 at 9:09 AM, John Blythe wrote:
> hi all.
>
> i'm attempting to find similar purchases for a user. the volume of purchase
> helps dictate the price point that they can expect. as such, i'm at
hi all.
i'm attempting to find similar purchases for a user. the volume of purchases
helps dictate the price point that they can expect. as such, i'm attempting
to determine the sum of the quantity field across all purchases per user.
i've got something like this as of yet:
facet=on&stats=true
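The per-user quantity sum can also be expressed with the JSON Facet API rather than stats; a sketch, where user_id and quantity are assumed field names (the original message doesn't show the schema):

```json
{
  "query": "*:*",
  "limit": 0,
  "facet": {
    "users": {
      "type": "terms",
      "field": "user_id",
      "facet": { "total_qty": "sum(quantity)" }
    }
  }
}
```

POSTing that body to /select returns, per user_id bucket, the summed quantity as total_qty.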
Hi - There is no out-of-the-box integration of OpenNLP in Lucene at this
moment, but there is an ancient patch (LUCENE-2899) if you are adventurous.
Regards,
-Original message-
> From:meenu
> Sent: Thursday 6th July 2017 16:26
> To: solr-user@lucene.apache.org
> Subject: OpenNLP and
Our CDCR has been working fine for months, but we are now experiencing an issue
where each night only partial updates are made to the target.
For example: our primary (source) is updated with 4500 docs. The target
instance is out of sync and only contains 1500 of the 4500 updates.
Any idea why?
You'll get a new segment whenever your time expires as well. Segments
are created whenever the _first_ of
1> maxDocs exceeded
2> ramBufferSizeMB is exceeded
3> the autocommit _time_ interval is exceeded
4> someone does a commit
5> if someone is indexing via SolrJ and specifies a "commitWithin" for
o
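The first three triggers in the list above map onto solrconfig.xml settings; an illustrative fragment with example values, not recommendations:

```xml
<!-- triggers 1 and 3: doc-count and time-based auto commit -->
<autoCommit>
  <maxDocs>100000</maxDocs>       <!-- flush after this many docs -->
  <maxTime>900000</maxTime>       <!-- or after 15 minutes (in ms) -->
  <openSearcher>false</openSearcher>
</autoCommit>
<!-- trigger 2 lives in <indexConfig>: -->
<!-- <ramBufferSizeMB>100</ramBufferSizeMB> -->
```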
Erick has provided the details in another email. See below.
Use the _route_ field and put in "day_1" or "day_2". You've presumably
named the shards (the "shard" parameter) when you added them with the
CREATESHARD command so use the value you specified there.
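As a concrete illustration of the advice above, with the implicit router a document can be steered to a named shard via the _route_ parameter; a hedged example, assuming a collection named cdr with a shard named day_1 on localhost:

```shell
# Index a document directly into the shard named "day_1"
curl "http://localhost:8983/solr/cdr/update?_route_=day_1&commit=true" \
  -H "Content-Type: application/json" \
  -d '[{"id": "doc1"}]'
```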
Best,
Erick
On Wed, Jul 5, 2017 at
Joins and OFFSET are not currently supported with Parallel SQL.
The docs for parallel SQL cover all the supported features. Any syntax not
covered in the docs is likely not supported.
Joel Bernstein
http://joelsolr.blogspot.com/
On Thu, Jul 6, 2017 at 2:40 PM, wrote:
>
> Is it possible to join
Hi All,
Normally, when we set up a SolrCloud environment, we put a load balancer in
front of the Solr nodes, which we use for sending queries (HttpSolrServer),
and use CloudSolrServer (with the IPs of the zookeeper ensemble nodes) for
sending indexing operations.
Recently we embarked on a project
Hi All
I am stuck right now.
I am using Solr 6.6 for indexing and searching texts (from different sources
like Word/Excel/PDF...).
I want more relevance in search results, so I want to use OpenNLP.
I could not find any documentation on integrating Solr with OpenNLP.
Please help.
Thanks
Hi Guys,
Could someone explain to me why I have segments of 500 KB (with source "flush"
and only 91 documents) if I have a ramBufferSizeMB of 2 GB
and maxBufferedDocs not defined?
Thanks in advance,
Daniel
Hi Guys,
I was just wondering if solr cloud can give information about how much
recovery has been done by a replica while it is recovering; some
percentage would be handy.
Thanks,
Novin
Hi Eric,
There is no error with the current config, but soon there will be about 5000
users on the site, and with round robin and no shared session like in my
config I might have 5000 sessions on each server. So I ask this question:
maybe if I fix the jsessionid with an nginx header, everyone will have the
same session.
name="ent"
forEach="/aa/bb/cc"
[diagram: arrow pointing from the forEach path; remainder of the message lost]
--
View this message in context:
http://lucene.472066.n3.nabble.com/is-there-a-way-to-point-forEach-s-directory-tp4344640.html
Sent from the Solr - User mailing list archive at Nabble.com.