I have the following structure for the class used for indexing a Solr
document. I am using SolrJ 5.5.2 (the same Solr version is being used on the
cluster, with the collection in SolrCloud mode having 3 shards).
I added @Field(child = true) to the ChangedAttribute object, and even
though my document is indexed
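A minimal sketch of the kind of bean layout being described; the class and
field names (AuditEvent, ChangedAttribute) are my own guesses, not the
poster's actual code:

import java.util.List;
import org.apache.solr.client.solrj.beans.Field;

public class AuditEvent {
    @Field
    public String id;

    @Field
    public String eventType;

    // Marks the nested objects as child documents for the DocumentObjectBinder.
    @Field(child = true)
    public List<ChangedAttribute> changedAttributes;
}

class ChangedAttribute {
    @Field
    public String id;

    @Field
    public String attributeName;

    @Field
    public String newValue;
}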
Update:
I checked with the following example as well, and it also flattens the
results. I took the example from here:
https://issues.apache.org/jira/browse/SOLR-1945

package com.airplus.poc.edl.spark.auditeventindexer;
import java.io.IOException;
import org.apache.solr.client.solrj.SolrServer;
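For reference, here is a minimal sketch (not the actual SOLR-1945 example) of
indexing a parent document with nested children through SolrJ; the collection
name, ZooKeeper address, and field values are placeholders:

import org.apache.solr.client.solrj.impl.CloudSolrClient;
import org.apache.solr.common.SolrInputDocument;

public class IndexNestedDocs {
    public static void main(String[] args) throws Exception {
        try (CloudSolrClient client = new CloudSolrClient("localhost:9983")) {
            client.setDefaultCollection("audit_events");

            SolrInputDocument parent = new SolrInputDocument();
            parent.addField("id", "parent-1");
            parent.addField("content_type", "parent");

            SolrInputDocument child = new SolrInputDocument();
            child.addField("id", "child-1");
            child.addField("attributeName", "price");

            // Children are sent in the same block as their parent.
            parent.addChildDocument(child);

            client.add(parent);
            client.commit();
        }
    }
}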
Hello,
You need to use
https://cwiki.apache.org/confluence/display/solr/Other+Parsers#OtherParsers-BlockJoinQueryParsers
and
https://cwiki.apache.org/confluence/display/solr/Transforming+Result+Documents#TransformingResultDocuments-[child]-ChildDocTransformerFactory
to get the nested data back.
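Roughly, the two pieces combine like this in SolrJ (a hedged sketch; the
"content_type:parent" flag field and the collection name are assumptions about
the schema, not something from this thread):

import org.apache.solr.client.solrj.SolrQuery;
import org.apache.solr.client.solrj.impl.CloudSolrClient;
import org.apache.solr.client.solrj.response.QueryResponse;

public class NestedQuerySketch {
    public static void main(String[] args) throws Exception {
        try (CloudSolrClient client = new CloudSolrClient("localhost:9983")) {
            client.setDefaultCollection("audit_events");
            SolrQuery query = new SolrQuery();
            // Block join parent parser: return parents whose children match the inner query.
            query.setQuery("{!parent which=\"content_type:parent\"}attributeName:price");
            // [child] transformer: re-attach the matching child documents to each parent.
            query.setFields("*", "[child parentFilter=\"content_type:parent\"]");
            QueryResponse rsp = client.query(query);
            System.out.println(rsp.getResults());
        }
    }
}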
Hi,
Mikhail Khludnev wrote:
> Hello,
>
> You need to use
> https://cwiki.apache.org/confluence/display/solr/Other+Parsers#OtherParsers-BlockJoinQueryParsers
> and
> https://cwiki.apache.org/confluence/display/solr/Transforming+Result+Documents#TransformingResultDocuments-[child]-ChildDocTransform
Hello, Scenario: currently we have 2 Solr servers running on 2 different
(Linux) machines. Is there any way we can make the core be located on a NAS or
network shared drive so that both Solr instances use the same index?
Let me know if there are any performance issues; our index size is approximately 1 GB.
Thanks
Ravi
I've always wanted to experiment with this, but you have to be very careful
that only one of the cores, or neither, does ANY writes. Also, if you have
a suggester index you need to make sure that each core builds its own
independently. In any case, from everything I've read the general answer is
I'm executing a streaming expr and get this error:
Caused by: org.apache.solr.common.SolrException: Could not load
collection from ZK:
MovieLens_Ratings_f2e6f8b0_3199_11e7_b8ab_0242ac110002
at
org.apache.solr.common.cloud.ZkStateReader.getCollectionLive(ZkStateReader.java:1098)
at
Wait, if I understand correctly, the documents would be indexed like that, but
we can get the document back as nested if we use block join query parsing?
So if I query normally with the default parser, I would get all the documents
separately?
Did I understand correctly?
Thanks & regards
Biplob
You get this every time you run the expression?
Joel Bernstein
http://joelsolr.blogspot.com/
On Fri, May 19, 2017 at 10:44 AM, Timothy Potter
wrote:
> I'm executing a streaming expr and get this error:
>
> Caused by: org.apache.solr.common.SolrException: Could not load
> collection from ZK:
> M
For an experiment, mount the NAS filesystem ro (read-only). Is there any way to
tell Solr not to bother with a lock file? And what happens if an update or add
gets requested by mistake; does it take down Solr?
Why not do all this the simple way, and just replicate?
On May 19, 2017 10:41:19 AM EDT
Yes! And the join queries get complicated. Yonik has some good blogs on this.
On May 19, 2017 11:05:52 AM EDT, biplobbiswas
wrote:
>Wait, if I understand correctly, the documents would be indexed like
>that but
>we can get back the document as nested if we perform the
>blockjoinqueryparsing?
>
You're better off just doing replication to the slave using the replication handler.
However, if there is no network connectivity, e.g. this is an offsite
cold/warm spare, then here is a solution:
The NAS likely supports some copy-on-write/snapshotting capabilities. If your
systems people will work
The reason I want to try it is that replication is not possible
on the single machine, as the index size is around 350 GB plus another 400 GB, and
I don't have enough SSD to cover a replication from the master node. Also I
have a theory, and heard this as well from a presentation at the LR
conference
On 19.05.2017 16:33, Ravi Kumar Taminidi wrote:
> Hello, Scenario: Currently we have 2 Solr Servers running in 2 different
> servers (linux), Is there any way can we make the Core to be located in NAS
> or Network shared Drive so both the solrs using the same Index.
>
> Let me know if any perfo
No, not every time, but there was no GC pause on the Solr side (no
gaps in the log, nothing in the gc log) ... in the zk log, I do see
this around the same time:
2017-05-05T13:59:52,362 - INFO
[NIOServerCxn.Factory:0.0.0.0/0.0.0.0:9983:NIOServerCnxn@1007] -
Closed socket connection for client /127
Hi,
Good day. I have a SolrCloud cluster which has a collection with RF=2 and
NumShards=3 on 6 nodes. We want to test how to recover from unexpected
situations like shard loss, so we will probably execute an rm -rf on the Solr
data directory on one of the replicas or the master. Now the question is,
We need to clearly distinguish between losing a _shard_ and losing a
_replica_. You have RF=2 so you have two replicas in each shard.
If you stop a Solr node hosting one of the two replicas you'll see that the
leader function will switch to the running replica.
Now if you nuke the data directory
Docker has a "layered" filesystem strategy, where new writes go to a
top layer, so maybe there's a way to do this with Docker.
Pretty speculative, but:
- Start a Docker container based on an image containing Solr, but no index data.
- Build your index within the image.
- Shut down Solr an
Odd, I haven't run into this behavior. Are you getting the disconnect from
the client side, or is this happening in a stream being run inside Solr?
Joel Bernstein
http://joelsolr.blogspot.com/
On Fri, May 19, 2017 at 1:40 PM, Timothy Potter
wrote:
> No, not every time, but there was no GC pau
If you remove the contents of the data directory, it should sync up
automatically. Give it a try.
Thanks,
Susheel
On Fri, May 19, 2017 at 7:51 AM, Sudershan Madhavan <
smadha...@digitalriver.com> wrote:
> Hi,
>
> Good day. I have a solrcloud cluster which has a collection with RF=2 and
> NumShards=
The reason is GC pauses, mostly on the client side and not the server side.
I guess you are using a SolrJ client, and this exception is thrown in the
client logs.
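If client-side GC pauses are expiring the ZooKeeper session, one mitigation (a
hedged sketch, assuming a CloudSolrClient on SolrJ 6.x where these setters are
available; the ZK hosts, timeout values, and collection name are placeholders)
is to give the client more ZK timeout headroom, alongside tuning the client
JVM's GC:

import org.apache.solr.client.solrj.impl.CloudSolrClient;

public class ClientTimeoutSketch {
    public static void main(String[] args) throws Exception {
        try (CloudSolrClient client = new CloudSolrClient("zk1:2181,zk2:2181,zk3:2181")) {
            // More headroom before ZooKeeper declares the client session dead.
            client.setZkConnectTimeout(15000); // ms
            client.setZkClientTimeout(30000);  // ms
            client.setDefaultCollection("mycollection");
            // ... run the streaming expression / queries as before ...
        }
    }
}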
On Fri, May 19, 2017 at 11:46 PM, Joel Bernstein wrote:
> Odd, I haven't run into this behavior. Are you getting the disconnect from
> th
On 5/17/2017 9:15 AM, Jason Gerlowski wrote:
> A strawman new message could be: "Performance warning: Overlapping
> onDeckSearchers=2; consider reducing commit frequency if performance
> problems encountered"
>
> Happy to create a JIRA/patch for this; just wanted to get some
> feedback first in cas
When is the Solr 6.6 release date?
I am waiting for the ZooKeeper 3.4.10 upgrade in the 6.6 release.
Regards,
Chiradeep
> multiple solr instances on one machine performs better than multiple
Does the machine have enough RAM to support all the instances? Again, time for
an experiment!
--
Sorry for being brief. Alternate email is rickleir at yahoo dot com
My thought would be that the machine would need only the same amount of RAM
minus the heap size of the second instance of Solr, since it will be file
caching the index into memory only once (it's the same files, read by both
Solr instances). My Solr slaves have about 150 GB each.
On Fri, Ma
One problem here is how to open new searchers on the r/o core.
Consider the autocommit setting. The cycle is:
> when the first doc comes in, start your timer
> x milliseconds later, do a commit and (perhaps) open a new searcher.
But the core referencing the index in R/O mode doesn't have any updates
It's under way now, perhaps as early as next week some time.
Best,
Erick
On Fri, May 19, 2017 at 11:41 AM, Chiradeep Das
wrote:
> When is the Solr 6.6 release date?
>
> Waiting for the Zookeeper 3.4.10 upgrade in 6.6 version.
>
>
> Regards,
>
> Chiradeep
On 5/17/2017 10:42 AM, Rick Leir wrote:
> Chris, Shawn,
> I am using 5.2.1 . Neither the array (Shawn) nor the document list (Chris)
> works for me in the Admin panel. However, CSV works fine.
>
> Clearly we are long overdue for an upgrade.
I checked the PDF reference guide for 5.2 and it looks
I agree completely; it was just something I've always wanted to try doing.
If my indexes were smaller I'd just fire up a bunch of slaves on a single
machine and nginx them out, but even 2 TB SSDs are somewhat expensive and
there aren't always enough ports on the servers to keep adding more.
On Fri,
Hello,
I am trying to set up a SolrCloud cluster (6.5.0/6.5.1). I have installed Solr as a
service.
Every time I start the Solr servers, they come up, but one by one the
CoreContainers start shutting down on their own within 1-2 minutes of coming
up.
Here are the Solr logs:
2017-05-19 20:45:30.926 INFO
I have the exact same issue on my box. Basic auth works in 6.4.2 but fails
in 6.5.1. I assume it's a bug that probably just hasn't been acknowledged yet.
On Sun, May 14, 2017 at 2:37 PM Xie, Sean wrote:
> Configured the JVM:
>
> -Dsolr.httpclient.builder.factory=org.apache.solr.client.solrj.impl.P
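For anyone needing a workaround while the property-based factory misbehaves,
here is a hedged sketch of setting the credentials per request in SolrJ
instead (the ZK address, user, password, and collection name are placeholders):

import org.apache.solr.client.solrj.SolrQuery;
import org.apache.solr.client.solrj.impl.CloudSolrClient;
import org.apache.solr.client.solrj.request.QueryRequest;
import org.apache.solr.client.solrj.response.QueryResponse;

public class PerRequestBasicAuth {
    public static void main(String[] args) throws Exception {
        try (CloudSolrClient client = new CloudSolrClient("localhost:9983")) {
            QueryRequest req = new QueryRequest(new SolrQuery("*:*"));
            // Credentials travel with this request only, independent of the JVM-level factory.
            req.setBasicAuthCredentials("solr", "SolrRocks");
            QueryResponse rsp = req.process(client, "mycollection");
            System.out.println(rsp.getResults().getNumFound());
        }
    }
}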
I added a ticket:
https://issues.apache.org/jira/browse/SOLR-10718
We'll see what happens.
On Fri, May 19, 2017 at 3:03 PM Shawn Feldman
wrote:
> I have the same exact issue on my box. Basic auth works in 6.4.2 but
> fails in 6.5.1. I assume its a bug. probably just hasn't been
> acknowledged
I found the reason why this is happening!
I am using Chef and running install_solr_service.sh with the options -n and -f. So,
every time chef-client runs, it stops the already running Solr
instance. Now I have removed the -f option (no upgrade) but am running into an
error.
I have a question on the following
On 5/19/2017 5:05 PM, Chetas Joshi wrote:
> If I don't wanna upgrade and there is an already installed service, why
> should it be exit 1 and not exit 0? Shouldn't it be like
>
> if [ ! "$SOLR_UPGRADE" = "YES" ]; then
>
> if [ -f "/etc/init.d/$SOLR_SERVICE" ]; then
>
> print_usage "/etc/i