Ultraseek in Python+C because Python used reference counting and
did not do garbage collection. That is the only way to have no pauses with
automatic memory management.
wunder
Walter Underwood
wun...@wunderwood.org
http://observer.wunderwood.org/ (my blog)
> On Feb 14, 2020, at 11:35 AM, Tom Burton-West wrote:
>
> Hello,
>
> In the section on JVM tuning in the Solr 8.3 documentation
Hello,
In the section on JVM tuning in the Solr 8.3 documentation (
https://lucene.apache.org/solr/guide/8_3/jvm-settings.html#jvm-settings)
there is a paragraph which cautions about setting heap sizes over 2 GB:
"The larger the heap the longer it takes to do garbage collection. This can
Here's roughly what was going on:
1. Set up a three-node cluster with a collection. The collection has one
shard and three replicas for that shard.
2. Shut down two of the nodes and verified that the remaining node is the
leader, and that the other two nodes are registered as dead in the Solr UI.
First, be sure to wait at least 3 minutes before concluding the replicas are
permanently down; that's the default wait period for certain leader election
fallbacks. It's easy to conclude it's never going to recover, but 180 seconds is
an eternity ;).
You can try the collections API FORCELEADER command
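For reference, the FORCELEADER call looks roughly like this (host and collection name are hypothetical; it only applies when a shard still has live replicas but no elected leader):
http://localhost:8983/solr/admin/collections?action=FORCELEADER&collection=mycollection&shard=shard1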
Hi all,
I have a 3-node SolrCloud instance with a single collection. The Solr
nodes are pointed to a 3-node ZooKeeper ensemble. I was doing some basic
disaster recovery testing and have encountered a problem that hasn't been
obvious to me how to fix.
After I started back up the three Solr jav
manage this or overcome such a scenario
please suggest.
Regards,
Vishal
From: Erick Erickson
Sent: Friday, January 3, 2020 8:39 PM
To: solr-user@lucene.apache.org
Subject: Re: Solr 8.3
Well, you can’t do what you want. The Admin UI is
intended as a way for new
>
> Actually we want it like this:
> shard1 10.38.33.28
> replica1 10.38.33.31
> shard2 10.38.33.30
> replica2 10.38.33.29
>
> Regards,
> Vishal
>
From: Sankar Panda
Sent: Friday, January 3, 2020 12:36 PM
To: solr-user@lucene.apache.org
Subject: Re: Solr 8.3
Hi Vishal,
You can go to the collection in the admin console. Mann
From: Erick Erickson
Sent: Thursday, January 2, 2020 7:40 PM
To: solr-user@lucene.apache.org
Subject: Re: Solr 8.3
No, you cannot change the IP of an existing replica. Either do as Sankar
mentioned when you first create the collection or use the MOVEREPLICA
collections API command.
MOVEREPLICA has existed for quite a long time, but if it’s not available, you
can
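A MOVEREPLICA call looks roughly like this (collection, shard, and node names here are illustrative; node names must match the entries in live_nodes, i.e. host:port_solr):
http://localhost:8983/solr/admin/collections?action=MOVEREPLICA&collection=mycollection&shard=shard1&sourceNode=10.38.33.30:8983_solr&targetNode=10.38.33.29:8983_solr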
ts replica is 10.38.33.219.
>>
>> I want to create a new collection on the same. Cannot change the shard IP
>> for the new collection. How can I?
From: sudhir kumar
Sent: Thursday, January 2, 2020 2:01 PM
To: solr-user@lucene.apache.org
Subject: Re: Solr 8.3
sample url to create collection:
http://host:8080/solr/admin/collections?action=CREATE&name=collectionname&numShards=2&replicationFactor=3&maxShardsPerNode=2&createNodeSet=
host:8080_solr,host:8080_solr,host:8080_solr,host:8080_solr
&collection.configName=collectionconfig
On Thu, Jan 2, 2020 at 1:
Hey Vishal,
You can use the createNodeSet property while creating a collection, which
allows you to create shards on the specified nodes.
/admin/collections?action=CREATE&name=<name>&numShards=<number>
&replicationFactor=<number>&maxShardsPerNode=<number>&createNodeSet=
<nodelist>&collection.configName=
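As a concrete sketch (hosts, ports, and names below are illustrative, not taken from the thread): adding createNodeSet.shuffle=false keeps replica assignment in the listed node order, which is the closest you can get to pinning shards to particular machines at creation time.
http://10.38.33.28:8983/solr/admin/collections?action=CREATE&name=mycollection&numShards=2&replicationFactor=2&maxShardsPerNode=1&createNodeSet=10.38.33.28:8983_solr,10.38.33.31:8983_solr,10.38.33.30:8983_solr,10.38.33.29:8983_solr&createNodeSet.shuffle=false&collection.configName=myconfig
If the resulting placement still isn't what you want, you can also ADDREPLICA with an explicit node parameter afterwards and delete the misplaced replica.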
When I am creating 2 shards and 2 replicas using the admin panel, it
automatically assigns each shard or replica to any IP.
I want to assign a specific shard or replica to a specific Solr instance at the
time of creating a collection. Can I?
Regards,
Vishal
ev/lucene/lucene-solr-8.3.1-RC2-reva3d456fba2cd1b9892defbcf46a0eb4d4bb4d01f/solr/Re-index>
> on it, and see if you still have issues.
>
> On Sun, 1 Dec 2019 at 17:35, Odysci wrote:
>
> > Hi,
> > I have a solr cloud setup using solr 8.3 and zookeeper, which I recently
> > conver
Dec 2019 at 17:35, Odysci wrote:
> Hi,
> I have a solr cloud setup using solr 8.3 and zookeeper, which I recently
> converted from solr 7.7. I converted the index using the index updater and
> it all went fine. My index has about 40 million docs.
> I used a separate program to chec
Hi,
I have a solr cloud setup using solr 8.3 and zookeeper, which I recently
converted from solr 7.7. I converted the index using the index updater and
it all went fine. My index has about 40 million docs.
I used a separate program to check the values of all fields in the solr
docs, for
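For anyone following along, the index conversion referred to here is typically done with Lucene's IndexUpgrader tool; a rough sketch of the invocation (jar versions and the index path are illustrative):
java -cp lucene-core-8.3.0.jar:lucene-backward-codecs-8.3.0.jar \
  org.apache.lucene.index.IndexUpgrader -verbose /var/solr/data/mycore/data/index
Note that the upgrader rewrites the segments in place and only bridges one major version, so it doesn't replace the usual advice to fully re-index when moving between majors.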
Thanks Colvin.
Can you share the details in the ticket?
I plan to debug this today.
It's unlikely to be a synchronization issue because
serialization/deserialization usually happens in a single thread.
On Sun, Nov 24, 2019, 4:09 AM Colvin Cowie
wrote:
> https://issues.apache.org/jira/browse/SOLR
https://issues.apache.org/jira/browse/SOLR-13963
I'll see about modifying the test I have to fit in with the existing tests,
and if there's a better option then I'm open to whatever
On Sat, 23 Nov 2019 at 16:43, Colvin Cowie
wrote:
I've found the problem: JavaBinCodec has a CharArr, arr, which is
modified in two different locations, but only one of which is protected
with a synchronized block.
getStringProvider(), which is used when you call getValue() rather than
getRawValue() on the string-based SolrInputFields, synchroni
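To illustrate the pattern being described (a hypothetical sketch in plain Java, not Solr's actual JavaBinCodec code): a single reusable buffer shared by two call paths, only one of which guards it.
// Hypothetical illustration only, not Solr's actual code: a reusable buffer
// shared by two call paths, only one of which synchronizes on it.
class SharedBufferDecoder {
    private final StringBuilder buf = new StringBuilder(); // stand-in for the shared CharArr

    // Guarded path: concurrent callers cannot interleave their writes.
    String decodeGuarded(char[] chars) {
        synchronized (buf) {
            buf.setLength(0);
            buf.append(chars);
            return buf.toString();
        }
    }

    // Unguarded path: a racing caller can reset or append to the buffer
    // between these statements, so the returned string can be corrupted.
    String decodeUnguarded(char[] chars) {
        buf.setLength(0);
        buf.append(chars);
        return buf.toString();
    }
}
The fix being described above is essentially to take the same monitor in both places (or give each thread its own buffer).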
AFAIK these collection properties are not tracked that faithfully and can get
out of sync, mostly because they are used only during collection CREATE and
BACKUP / RESTORE and not during other collection operations or during searching
/ indexing. SPLITSHARD doesn't trust them; instead it checks t
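If you want to see the state Solr actually acts on, rather than the stored properties, the live view is available from the Collections API (collection name here is hypothetical):
http://localhost:8983/solr/admin/collections?action=CLUSTERSTATUS&collection=mycollection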
> the difference is because the _default config has the dynamic schema
> building in it, which I assume is pushing it down a different code path.
Also to add to that, I assumed initially that this just meant that it was
working because the corrupted field names would just cause it to create a
fie
I've been a bit snowed under, but I've found the difference is because the
_default config has the dynamic schema building in it, which I assume is
pushing it down a different code path.
I'm using the vanilla Solr 8.3.0 binary:
8.3.0 2aa586909b911e66e1d8863aa89f173d69f86cd2 - ishan - 2019-10-25
Very curious what the config change that's related to reproducing this
looks like. Maybe it's something that is worth adding
test-randomization around? Just thinking aloud.
It seems like an issue to me. Can you open a JIRA with these details?
On Fri, Nov 15, 2019 at 10:51 AM Jacek Kikiewicz wrote:
>
> I found interesting situation, I've created a collection with only one
> replica.
> Then I scaled solr-cloud cluster, and run 'addreplica' call to add 2 more.
> So
> > > > have
> > > > reliably fails within 50 iterations of indexing 2500 documents, with
> > > > getRawValue() it succeeds for the 500 iterations I'm running it for)
> > > >
> > > > I'll see about providing a test that can be shared that demonstrates the
> > > > problem, and see if we can find what is going wrong in the codec...
2019 at 13:48, Colvin Cowie wrote:
Hello
Apologies for the lack of actual detail in this, we're still digging into
it ourselves. I will provide more detail, and maybe some logs, once I have
a better idea of what is actually happening.
But I thought I might as well ask if anyone knows of changes that were made
in the Sol
After upgrading to Solr 8.3, I am also facing the same issue as SOLR-13523.
I have added these to my schema.
Document present in Solr:
[{
  "id": "parent1",
  "isInStock": 1,
  "parent": true,
  "_childDocuments_": [
    {
I found an interesting situation: I've created a collection with only one replica.
Then I scaled the SolrCloud cluster and ran an 'addreplica' call to add 2 more.
So I have a collection with 3 tlog replicas; the cluster status page shows
them, but it also shows this:
"core_node2":{
"
I created a JIRA for this:
https://issues.apache.org/jira/browse/SOLR-13894
On Wed, Nov 6, 2019 at 10:45 AM Jörn Franke wrote:
> I have checked now Solr 8.3 server in admin UI. Same issue.
>
> Reproduction:
> select(search(testcollection,q=“test”,df=“Default”,defType=“edismax”,f
Whew! I often work in a private window to lessen these kinds of “surprises”…..
> On Nov 6, 2019, at 4:35 AM, Jörn Franke wrote:
>
> Never mind. Restart of browser worked.
>
>> On 06.11.2019 at 10:32, Jörn Franke wrote:
>>
>> Hi,
>>
>> After u
I have now checked the Solr 8.3 server in the admin UI. Same issue.
Reproduction:
select(search(testcollection,q="test",df="Default",defType="edismax",fl="id",
qt="/export", sort="id asc"),id,if(eq(1,1),Y,N) as found)
In 8.3 it returns only the id field.
In 8.2 it returns the id and found fields.
Since found
Never mind. Restart of browser worked.
> On 06.11.2019 at 10:32, Jörn Franke wrote:
>
> Hi,
>
> After upgrading to Solr 8.3 I observe that in the Admin UI the collection
> selector is greyed out. I am using Chrome. The core selector works as
> expected.
>
> Any
Hi,
After upgrading to Solr 8.3 I observe that in the Admin UI the collection
selector is greyed out. I am using Chrome. The core selector works as expected.
Any idea why this is happening?
Thank you.
Best regards
Thanks, I will check and come back to you. As far as I remember (but I have to
check), the queries generated by Solr were correct.
Just to be clear, the same thing works with a Solr 8.2 server and Solr 8.2 client.
It shows the odd behaviour with a Solr 8.2 server and Solr 8.3 client.
> On 05.11.2019
I'll probably need some more details. One thing that's useful is to look at
the logs and see the underlying Solr queries that are generated. Then try
those underlying queries against the Solr index and see what comes back. If
you're not seeing the fields with the plain Solr queries then we know it'
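For example, with the reproduction above, the search() wrapped by the expression translates to an export-style request along these lines (host is hypothetical); running it directly shows what the streaming client actually receives before the select()/if() decoration is applied:
http://localhost:8983/solr/testcollection/export?q=test&df=Default&defType=edismax&fl=id&sort=id+asc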
Most likely this issue can also be reproduced in the admin UI for the
streaming handler of a collection.
> On 04.11.2019 at 13:32, Jörn Franke wrote:
>
> Hi,
>
> I use streaming expressions, e.g.
> Sort(Select(search(...),id,if(eq(1,1),Y,N) as found), by=“field A asc”)
> (Using export handl
Hi,
I use streaming expressions, e.g.
Sort(Select(search(...),id,if(eq(1,1),Y,N) as found), by="field A asc")
(Using the export handler; sort is not really mandatory, I will remove it later
anyway.)
This works perfectly fine if I use Solr 8.2.0 (server + client). It returns
Tuples in the form { "id