Hi...
I'm trying to get atomic updates working and am seeing some strangeness.
Here's my JSON with the data to update ..
[{"id":"/unique/path/id",
"field1":{"set","newvalue1"},
"field2":{"set","newvalue2"}
}]
If I use the REST API via curl it works fine. With the following
command, the f...?
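Roughly, a curl call along these lines is what works here (the commit
parameter and exact flags are just one way to do it, not necessarily the
precise command used):

curl -X POST -H 'Content-Type: application/json' \
  'http://localhost:8983/solr/core01/update?commit=true' \
  --data-binary @/home/xtech/solrtest/test1.json

Posted this way, the JSON goes to the regular /update handler, which
understands the atomic-update "set" syntax.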
Thanks!
...scott
On 8/29/18 3:02 PM, Scott Prentice wrote:
Hi...
I'm trying to get atomic updates working and am seeing some
strangeness. Here's my JSON with the data to update ..
[{"id":"/unique/path/id",
"field1":{"set","newvalue1&quo
.../update/json
So I think the post gets treated as generic JSON parsing.
Can you try the same endpoint?
Regards,
Alex
On Fri, Aug 31, 2018, 7:05 PM Scott Prentice wrote:
Just bumping this post from a few days ago.
Is anyone using atomic updates? If so, how are you passing the updates
to ...
Ah .. is this done with the -url parameter? As in ..
./bin/post -url http://localhost:8983/solr/core01/update/json
/home/xtech/solrtest/test1.json
Will test.
Thanks,
...scott
On 8/31/18 5:15 PM, Scott Prentice wrote:
Hmm. That makes sense .. but where do you provide the endpoint to post ...
... "field.set".
I'm clearly flailing. If you have any thoughts on this, do let me know.
Thanks!
...scott
On 8/31/18 5:20 PM, Scott Prentice wrote:
Ah .. is this done with the -url parameter? As in ..
./bin/post -url http://localhost:8983/solr/core01/update/json
/home/xtech/solrtest/t...
...tes. But nice to see that it works!
Thanks for your help!
...scott
On 8/31/18 6:04 PM, Alexandre Rafalovitch wrote:
Ok,
Try "-format solr" instead of "-url ...".
Regards,
Alex.
On 31 August 2018 at 20:54, Scott Prentice wrote:
Nope. That's not it. It compla...
On 9/1/18 9:26 PM, Shawn Heisey wrote:
On 8/31/2018 7:18 PM, Scott Prentice wrote:
Yup. That does the trick! Here's my command line ..
$ ./bin/post -c core01 -format solr /home/xtech/solrtest/test1b.json
I saw that "-format solr" option, but it wasn't clear what it did.
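As far as I can tell (worth double-checking against the bin/post usage
text), the difference comes down to which endpoint the JSON is sent to:

# default for *.json: treated as arbitrary/custom JSON documents
# (posted to /update/json/docs), so "set" ends up inside a literal
# field name like "field1.set"
./bin/post -c core01 /home/xtech/solrtest/test1b.json

# with -format solr: posted as Solr update commands (to /update),
# so the atomic-update syntax is honored
./bin/post -c core01 -format solr /home/xtech/solrtest/test1b.json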
Using Solr 7.2.0 and Zookeeper 3.4.11
In an effort to move to a more robust Solr environment, I'm setting up a
prototype system of 3 Solr servers and 3 Zookeeper servers. For now,
this is all on one machine, but will eventually be 3 machines.
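For a one-machine prototype like this, the nodes are typically started
along these lines (ports and home directories below are placeholders, and
the ZooKeeper ensemble is assumed to already be running on 2181-2183):

./bin/solr start -cloud -p 8983 -s /var/solr/node1 -z "localhost:2181,localhost:2182,localhost:2183"
./bin/solr start -cloud -p 8984 -s /var/solr/node2 -z "localhost:2181,localhost:2182,localhost:2183"
./bin/solr start -cloud -p 8985 -s /var/solr/node3 -z "localhost:2181,localhost:2182,localhost:2183"

Each -s directory needs its own solr.xml.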
This works fine on a Ubuntu 5.4.0-6 VM on my local...
On 1/29/18 12:44 PM, Shawn Heisey wrote:
On 1/29/2018 1:13 PM, Scott Prentice wrote:
But when I do the same thing on the Red Hat system it fails. Through
the UI, it'll first time out with this message ..
Connection to Solr lost
Then after a refresh, the collection appears to have ...
...ration to use localhost ports.
-----Original Message-----
From: Scott Prentice [mailto:s...@leximation.com]
Sent: Monday, January 29, 2018 3:13 PM
To: solr-user@lucene.apache.org
Subject: SolrCloud installation troubles...
Using Solr 7.2.0 and Zookeeper 3.4.11
In an effort to move to a more robust...
...'ve seen localhost start to resolve to ::1, the IPv6 equivalent
of 127.0.0.1.
I guess some environments can be strict enough to restrict communication on
localhost; seems hard to imagine, but it does happen.
-----Original Message-----
From: Scott Prentice [mailto:s...@leximation.com]
Sent: M...
On 1/29/18 1:31 PM, Shawn Heisey wrote:
On 1/29/2018 2:02 PM, Scott Prentice wrote:
Thanks, Shawn. I was wondering if there was something going on with
IP redirection that was causing confusion. Any thoughts on how to
debug? And, what do you mean by "extreme garbage collection pauses"?
We initially tested our Solr Cloud implementation on a single VM with 3
Solr servers and 3 Zookeeper servers. Once that seemed good, we moved to
3 VMs with 1 Solr/Zookeeper on each. That's all looking good, but in the
Solr Admin > Cloud > Graph, all of my shard replicas are on "127.0.1.1"
.. wi...
...t let the other instances fully start up.
These were brand new, fresh, Ubuntu installs. Strange that the
/etc/hosts isn't set up to handle this.
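For what it's worth, one way to keep a node from registering under
127.0.1.1 is to pin the hostname it advertises to ZooKeeper, either in
solr.in.sh or at startup (the hostname below is just a placeholder):

# in /etc/default/solr.in.sh (service install) or bin/solr.in.sh
SOLR_HOST="solr1.example.com"

# or on the command line
./bin/solr start -cloud -h solr1.example.com -z "zk1:2181,zk2:2181,zk3:2181"

Combined with an /etc/hosts entry (or DNS) that maps the name to the real
address, the Cloud graph should then show proper hostnames.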
Cheers,
...scott
On 2/28/18 8:48 PM, Shawn Heisey wrote:
On 2/28/2018 5:42 PM, Scott Prentice wrote:
We initially tested our Solr Cloud implement...
We're in the process of moving from 12 single-core collections
(non-cloud Solr) on 3 VMs to a SolrCloud setup. Our collections aren't
huge, ranging in size from 50K to 150K documents with one at 1.2M docs.
Our max query frequency is rather low .. probably no more than
10-20/min. We do update fr...
...e shard setup 4X that size. You can still have replicas of this
shard for redundancy / availability purposes.
I'm not an expert, but I think one of the deciding factors is if your index
can fit into RAM (not JVM Heap, but OS cache). What are the sizes of your
indexes?
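If it helps, a quick way to check index size on disk (the path below
assumes the default service-install layout; adjust for your setup):

du -sh /var/solr/data/*/data/index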
On 14 March 2018 at 11:0...
...Support Training - http://sematext.com/
On 14 Mar 2018, at 01:01, Scott Prentice wrote:
We're in the process of moving from 12 single-core collections (non-cloud Solr)
on 3 VMs to a SolrCloud setup. Our collections aren't huge, ranging in size
from 50K to 150K documents with one at 1.2M do...
We might be going at this wrong, but we've got Solr set up as a service,
so if the machine goes down it'll restart. But without Zookeeper running
as a service, that's not much help. I found the zookeeperd install,
which in theory seems like it should do the trick, but that installs a
new instan...
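One option, as a rough sketch (the paths and the 'zookeeper' user are
assumptions for a standalone ZooKeeper unpacked under /opt/zookeeper), is
a small systemd unit instead of the zookeeperd package:

sudo tee /etc/systemd/system/zookeeper.service >/dev/null <<'EOF'
[Unit]
Description=Apache ZooKeeper
After=network.target

[Service]
User=zookeeper
ExecStart=/opt/zookeeper/bin/zkServer.sh start-foreground
Restart=on-failure

[Install]
WantedBy=multi-user.target
EOF
sudo systemctl daemon-reload
sudo systemctl enable --now zookeeper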
...on those machines by using the
Collections API ADDREPLICA.
3> Once the new replicas are healthy, DELETEREPLICA on the old hardware.
No down time. No configuration to deal with, SolrCloud will take care
of it for you.
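Roughly, the Collections API calls for the ADDREPLICA/DELETEREPLICA steps
look like this (collection, shard, node, and replica names below are
placeholders; the actual replica name comes from CLUSTERSTATUS):

# add a replica of shard1 on the new node
curl 'http://localhost:8983/solr/admin/collections?action=ADDREPLICA&collection=core01&shard=shard1&node=newhost:8983_solr'

# once it shows as active, remove the old one
curl 'http://localhost:8983/solr/admin/collections?action=DELETEREPLICA&collection=core01&shard=shard1&replica=core_node2'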
Best,
Erick
On Wed, Mar 14, 2018 at 9:32 AM, Scott Prentice wrote:
Emi...
...Heisey wrote:
On 3/14/2018 12:24 PM, Scott Prentice wrote:
We might be going at this wrong, but we've got Solr set up as a
service, so if the machine goes down it'll restart. But without
Zookeeper running as a service, that's not much help.
You're probably going to be very u...
...1, G1 collector). No faceting, but we get very long queries;
average length is 25 terms.
wunder
Walter Underwood
wun...@wunderwood.org
http://observer.wunderwood.org/ (my blog)
On Mar 14, 2018, at 12:50 PM, Scott Prentice wrote:
Erick...
Thanks. Yes. I think we were just going shard-happy wi...