Thank You Erick!
Hi All,
We are facing some issues searching with special characters. Can you please
help with the query when the search uses the following characters:
• “&”
Example – Tata & Sons
• AND
Example – Tata AND Sons
• (
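Characters like "&" and "(" are query-parser syntax in Lucene/Solr, so a literal search needs them backslash-escaped (or the whole phrase quoted). A minimal sketch of client-side escaping — the function name and character set are mine, based on the classic query parser's documented special characters:

```python
# Backslash-escape Lucene/Solr query-syntax characters so that
# "&", "(", ":", etc. are treated as literal text, not operators.
SOLR_SPECIAL = set('+-&|!(){}[]^"~*?:\\/')

def escape_solr_term(term: str) -> str:
    return ''.join('\\' + ch if ch in SOLR_SPECIAL else ch for ch in term)

print(escape_solr_term('Tata & Sons'))   # prints: Tata \& Sons
```

Note that an uppercase AND is a boolean operator to the query parser; to match it literally, lowercase it or put the whole phrase in quotes.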
On 9/26/2018 2:39 PM, Terry Steichen wrote:
To the best of my knowledge, I'm not using SolrJ at all. Just
Solr-out-of-the-box. In this case, if I understand you below, it
"should indicate an error status"
I think you'd know if you were using SolrJ directly. You'd have written
the indexing p
Alex,
Please look at my embedded responses to your questions.
Terry
On 09/26/2018 04:57 PM, Alexandre Rafalovitch wrote:
> The challenge here is to figure out exactly what you are doing,
> because the original description could have been 10 different things.
>
> So:
> 1) You are using bin/post
Yes, it uses the autoscaling policies to achieve the same. Please refer
to the documentation here
https://lucene.apache.org/solr/guide/7_5/solrcloud-autoscaling-policy-preferences.html
On Thu, Sep 27, 2018, 02:11 Chuck Reynolds wrote:
> Noble,
>
> Are you saying in the latest version of Solr t
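For reference, a Solr 7.x autoscaling rule of the kind the linked documentation describes can be posted to the `/api/cluster/autoscaling` endpoint. This is only an illustrative sketch (the rule values are mine) that caps replicas of each shard at one per node, so they spread across hosts:

```json
{
  "set-cluster-policy": [
    { "replica": "<2", "shard": "#EACH", "node": "#ANY" }
  ]
}
```

Zone-aware placement would need additional attributes; check the linked policy-preferences page for the exact syntax in your version.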
This is true.
I am thinking if Solr says 8 and up, it really is 8 and up; there is no
other reference I can find against using the G1 collector.
The Java support situation for old versions is really a mess right now.
Currently, if you want ongoing support patches without an Oracle support
contract, the only way to ac
The challenge here is to figure out exactly what you are doing,
because the original description could have been 10 different things.
So:
1) You are using bin/post command (we just found this out)
2) You are indexing a bunch of files (what format? all same or different?)
3) You are indexing them i
Shawn,
To the best of my knowledge, I'm not using SolrJ at all. Just
Solr-out-of-the-box. In this case, if I understand you below, it
"should indicate an error status"
But it doesn't.
Let me try to clarify a bit - I'm just using bin/post to index the files
in a directory. That indexing proce
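One way to surface per-file failures is to check the `responseHeader.status` that Solr returns for each update request (bin/post echoes these responses). A small sketch — the helper name is mine, and this only parses a response body rather than performing the post:

```python
import json

def update_succeeded(response_body: str) -> bool:
    """True when a Solr update response reports status 0 in its responseHeader."""
    header = json.loads(response_body).get('responseHeader', {})
    return header.get('status', -1) == 0

update_succeeded('{"responseHeader":{"status":0,"QTime":12}}')   # True
```

A non-zero status (often 400) indicates the document was rejected; logging the filename alongside each failed response would identify the ~10% of files that never reached the index.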
Jeff,
On 9/26/18 11:35, Jeff Courtade wrote:
> My concern with using g1 is solely based on finding this. Does
> anyone have any information on this?
>
> https://wiki.apache.org/lucene-java/JavaBugs#Oracle_Java_.2F_Sun_Java_.2F_OpenJDK_Bugs
>
> "D
On 9/26/2018 1:23 PM, Terry Steichen wrote:
I'm pretty sure this was covered earlier. But I can't find references
to it. The question is how to make indexing errors clear and obvious.
If there's an indexing error and you're NOT using the concurrent client
in SolrJ, the response that Solr ret
I'm pretty sure this was covered earlier. But I can't find references
to it. The question is how to make indexing errors clear and obvious.
(I find that there are maybe 10% more files in a directory than end up
in the index. I presume they were indexing errors, but I have no idea
which ones or
Hey all,
We're trying to use SOLR for our document store and are facing some issues
with the Realtime Get api. Basically, we're doing an api call from multiple
endpoint to retrieve configuration data. The document that we are
retrieving does not change at all but sometimes the API returns a null
d
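One thing worth ruling out when `/get` intermittently returns null: Real Time Get depends on the update log (and a `uniqueKey`) being configured on every core that serves the request. A hedged solrconfig.xml fragment showing the standard update-log setup (the surrounding handler config is illustrative):

```xml
<!-- solrconfig.xml: Real Time Get needs the update log enabled -->
<updateHandler class="solr.DirectUpdateHandler2">
  <updateLog>
    <str name="dir">${solr.ulog.dir:}</str>
  </updateLog>
</updateHandler>
```

With that in place, `/solr/<collection>/get?id=<docid>` should return the latest version of the document even before a commit.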
Can you tell me where I could get insight into the testing cycles and
results?
On Wed, Sep 26, 2018, 1:03 PM Erick Erickson
wrote:
> There are consistent failures under JDK 11 in the automated tests that
> Solr/Lucene runs that do not happen for other releases. I personally
> haven't tried divi
The CMS settings are very nearly what we use. After tons of load testing we
changed NewRatio to 2 and it cut the 10-second pauses way down for us.
Huge heap, though.
On Wed, Sep 26, 2018, 2:17 PM Shawn Heisey wrote:
> On 9/26/2018 9:35 AM, Jeff Courtade wrote:
> > My concern with using g1 is so
Hi,
Thanks for the reply. Actually, we are planning to optimize for the huge
volume of data.
For example, in our current system we have the data below, so we can do a
facet pivot or stats to get the sum of asset_td for each acct, but the data
grows a lot whenever more assets get added.
Id | Accts| assetid
APX = approximately, sorry.
On Wed, Sep 26, 2018, 2:09 PM Shawn Heisey wrote:
> On 9/26/2018 9:45 AM, Jeff Courtade wrote:
> > We are considering a move to solr 7.x my question is Must we use cloud?
> We
> > currently do not and all is well. It seems all work is done referencing
> > cloud imple
On 9/26/2018 12:20 PM, Balanathagiri Ayyasamypalanivel wrote:
Currently I am storing json object type of values in string field in solr.
Using this field, in the code I am parsing json objects and doing sum of
the values under it.
In solr, do we have any option in doing it by default when using
Hi,
Currently I am storing json object type of values in string field in solr.
Using this field, in the code I am parsing json objects and doing sum of
the values under it.
In solr, do we have any option in doing it by default when using the json
object field values.
Regards,
Bala.
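As far as I know, Solr's built-in aggregations (stats, JSON facets) operate on real numeric fields, not on JSON blobs stored in a string field, so the choice is between flattening the values into numeric fields at index time or summing client-side as you do now. A sketch of the client-side sum — field and key names here are mine, not from the thread:

```python
import json

def sum_json_field(docs, json_field, key):
    """Sum a numeric `key` across JSON objects stored as strings in `json_field`."""
    return sum(float(json.loads(d[json_field]).get(key, 0)) for d in docs)

docs = [{'payload': '{"amount": 10.5}'}, {'payload': '{"amount": 4.5}'}]
sum_json_field(docs, 'payload', 'amount')   # 15.0
```

Indexing the numbers into their own field (e.g. a `pdouble` with docValues) would let Solr do this server-side instead.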
On 9/26/2018 9:35 AM, Jeff Courtade wrote:
My concern with using g1 is solely based on finding this.
Does anyone have any information on this?
https://wiki.apache.org/lucene-java/JavaBugs#Oracle_Java_.2F_Sun_Java_.2F_OpenJDK_Bugs
I have never had a single problem with Solr running with the G1
On 9/26/2018 9:45 AM, Jeff Courtade wrote:
We are considering a move to solr 7.x my question is Must we use cloud? We
currently do not and all is well. It seems all work is done referencing
cloud implementations.
You do not have to use cloud.
For most people who are starting from scratch, I w
Hi,
I am trying to use ManagedSynonymGraphFilterFactory and want to add
"tokenizerFactory" attribute into Managed
Resources(_schema_analysis_synonyms_*.json under conf directory).
To do this, is it OK to update json file manually?
If should not, is there any way to update ManagedResources except R
bq. In all my solr servers I have 40% free space
Well, clearly that's not enough if you're getting this error: "No
space left on device"
Solr/Lucene need _at least_ as much free space as the indexes occupy.
In some circumstances it can require more. It sounds like you're
having an issue with full
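The rule of thumb above (free space at least equal to the index size, since merges can transiently double disk usage) is easy to monitor. A minimal sketch — the function name is mine:

```python
import shutil

def merge_headroom_ok(index_dir: str, index_size_bytes: int) -> bool:
    """Segment merges can transiently need about as much free space as the
    index itself, so require at least 1x the index size free on the volume."""
    return shutil.disk_usage(index_dir).free >= index_size_bytes
```

Running this against the data directory before large indexing jobs (or an optimize, which can need even more headroom) would catch "No space left on device" before Solr does.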
5 zookeepers is overkill for 4 nodes. 3 should be more than adequate.
But that's a tangent.
Sure. Configs to tune:
1> indexing rate. If you're flooding the cluster with updates at a
very high rate, the CPU cycles needed to index the docs are going to
take away from query processing. So if you thro
I’m still learning Telegraf/InfluxDB, but I like it so far. Does anybody have
experience adding simple URL-based probes? For example, I’d like to graph this
for each collection.
"http://mycluster:8983/solr/mycollection/select?q=${query}&rows=0&wt=json" | jq -r .response.numFound
And this for e
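One way to wire such a probe into Telegraf is the `exec` input plugin, which runs a script on an interval and stores its output as a metric. A hedged sketch — the script path and measurement name are mine:

```toml
# telegraf.conf — run the numFound probe and store the result as an integer gauge
[[inputs.exec]]
  commands = ["/usr/local/bin/solr_numfound.sh mycollection"]
  data_format = "value"
  data_type = "integer"
  name_override = "solr_numfound"
  interval = "60s"
```

The script would wrap the curl/jq pipeline above and print just the number; Telegraf then ships one point per collection per interval to InfluxDB.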
There are consistent failures under JDK 11 in the automated tests that
Solr/Lucene runs that do not happen for other releases. I personally
haven't tried diving into them to know whether they're test artifacts
or not.
JDK 9 and JDK 10 also have open issues, especially around Hadoop integration.
I
How long does the query take when it is run directly, without Solr?
For our DIH queries, Solr was not the slow part. It took 90 minutes
directly or with DIH. With our big cluster, I’ve seen indexing rates of
one million docs per minute.
wunder
Walter Underwood
wun...@wunderwood.org
http://observe
With DIH you are doing indexing single-threaded. You should be able to
configure multiple DIH's on the same collection and then partition the data
between them, issuing slightly different SQL to each. But I don't exactly know
what that would look like.
--
Jan Høydahl, search solution architect
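The partitioning Jan describes can be as simple as giving each DIH config the same query with a different modulus clause. An illustrative sketch (table and column names are mine, not from the thread):

```xml
<!-- data-config for DIH instance 0: index the even half of the rows -->
<entity name="part0"
        query="SELECT id, name FROM docs WHERE MOD(id, 2) = 0"/>

<!-- data-config for DIH instance 1: index the odd half -->
<entity name="part1"
        query="SELECT id, name FROM docs WHERE MOD(id, 2) = 1"/>
```

Each handler is registered separately in solrconfig.xml and triggered independently, so the two full-imports run in parallel against the same collection.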
We have solr cloud 4 nodes with 5 zookeepers.
Usually search requests are super fast! But when we add docs to the leader
Solr, it starts pushing updates to other nodes, causing search requests to
respond at snail speed :( :( :(
We see tons of such logs for period of 2-3 mins and then once it
Noble,
Are you saying in the latest version of Solr that this would work with three
instances of Solr running on each server?
If so how?
Thanks again for your help.
On 9/26/18, 9:11 AM, "Noble Paul" wrote:
I'm not sure if it is pertinent to ask you to move to the latest Solr
which h
Neel,
I do not think there is a way to entirely bypass spellchecking if there are
results returned, and I'm not so sure performance would noticeably improve if
it did this. Clients can easily check to see if results were returned and can
ignore the spellcheck response in these cases, if desire
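The client-side check suggested here is a one-liner against the response: show suggestions only when nothing matched. A sketch (function name mine, parsing a JSON response body):

```python
import json

def spellcheck_to_show(response_body: str):
    """Return spellcheck suggestions only when the query matched no documents."""
    rsp = json.loads(response_body)
    if rsp.get('response', {}).get('numFound', 0) > 0:
        return []                     # results exist: ignore the spellcheck block
    return rsp.get('spellcheck', {}).get('suggestions', [])
```

This keeps the spellcheck component's cost on the Solr side unchanged but makes the UI behave as if spellchecking were bypassed whenever results exist.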
Agree with Walter. I personally really like the master slave set up for my use
cases.
David J. Hastings | Lead Developer
dhasti...@wshein.com | 716.882.2600 x 176
William S. Hein & Co., Inc.
2350 North Forest Road | Getzville, NY 14068
www.wshein.com/contact-us
Thanks ..!
On Wed, Sep 26, 2018 at 11:44 AM Markus Jelsma
wrote:
> Indeed, but JDK-8038348 has been fixed very recently for Java 9 or higher.
>
> -Original message-
> > From:Jeff Courtade
> > Sent: Wednesday 26th September 2018 17:36
> > To: solr-user@lucene.apache.org
> > Subject: Re:
Cloud is very useful if you shard or need near real-time indexing.
For non-sharded, non real time collections, I really like master/slave.
The loose coupling between master and slave makes it trivial to scale
out. Just clone a slave and fire it up.
wunder
Walter Underwood
wun...@wunderwood.org
h
Hi,
We are considering a move to Solr 7.x. My question is: must we use cloud?
We currently do not, and all is well. It seems all work is done referencing
cloud implementations.
We have
solr 4.3.0 master/slave
14 servers RHEL 32 core 96 gb ram 7 shards one replica per shard
Total index is 333Gb aro
Indeed, but JDK-8038348 has been fixed very recently for Java 9 or higher.
-Original message-
> From:Jeff Courtade
> Sent: Wednesday 26th September 2018 17:36
> To: solr-user@lucene.apache.org
> Subject: Re: Java version 11 for solr 7.5?
>
> My concern with using g1 is solely based on f
My concern with using g1 is solely based on finding this.
Does anyone have any information on this?
https://wiki.apache.org/lucene-java/JavaBugs#Oracle_Java_.2F_Sun_Java_.2F_OpenJDK_Bugs
"Do not, under any circumstances, run Lucene with the G1 garbage collector.
Lucene's test suite fails with the
I'm not sure if it is pertinent to ask you to move to the latest Solr
which has the policy based replica placement. Unfortunately, I don't
have any other solution I can think of
On Wed, Sep 26, 2018 at 11:46 PM Chuck Reynolds wrote:
>
> Noble,
>
> So other than manually moving replicas of shard d
We’ve been running G1 in prod for at least 18 months. Our biggest cluster
is 48 machines, each with 36 CPUs, running 6.6.2. We also run it on our
4.10.4 master/slave cluster.
wunder
Walter Underwood
wun...@wunderwood.org
http://observer.wunderwood.org/ (my blog)
> On Sep 26, 2018, at 7:37 AM, Je
Thanks for that...
I am just starting to look at this I was unaware of the license debacle.
Automated testing up to 10 is great.
I am still curious about the GC1 being supported now...
On Wed, Sep 26, 2018 at 10:25 AM Zisis T. wrote:
> Jeff Courtade wrote
> > Can we use GC1 garbage collection
Jeff Courtade wrote
> Can we use GC1 garbage collection yet or do we still need to use CMS?
I believe you should be safe to go with G1. We've applied it in a Solr
6.6 cluster with 10 shards, 3 replicas per shard and an index of about 500GB
(1.5T counting all replicas) and it works extremely wel
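For anyone wanting to try this, Solr's include script exposes a `GC_TUNE` variable for the JVM flags. An illustrative starting point only (the pause target is mine and should be tuned for your heap size, not copied):

```sh
# solr.in.sh — hedged G1 starting point; adjust MaxGCPauseMillis per heap
GC_TUNE="-XX:+UseG1GC \
  -XX:MaxGCPauseMillis=250 \
  -XX:+ParallelRefProcEnabled"
```

Comparing GC logs before and after the switch (pause counts and durations) is the only reliable way to validate the change under your own load.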
The minimum required, as per the CHANGES file, is 1.8 (8.x). I believe
there is automated testing up to 10, and there were some issues
related to the modules, but AFAIK they were resolved. So, you may be ok
until then.
However, Java 11 is a bit of a different beast, both because the
public vers
Hello,
We are looking to migrate to solr 7.5 with java 11 from solr 4.3.0 with
java 7.
I have a couple basic questions...
What version of Java is current solr 7.5 development and testing based on?
Can we use java 11 with solr 7.5? any known issues?
Can we use GC1 garbage collection yet or do w
Noble,
So other than manually moving replicas of shard do you have a suggestion of how
one might accomplish the multiple availability zone with multiple instances of
Solr running on each server?
Thanks
On 9/26/18, 12:56 AM, "Noble Paul" wrote:
The rules suggested by Steve is correct. I
I saw something like this a year ago which i reported as a possible bug (
https://issues.apache.org/jira/browse/SOLR-10840, which has a full
description and stack traces)
This occurred very randomly on an AWS instance; moving the index directory
to a different file system did not fix the problem
Also, are you using Solr data import? That will be much slower compared to
writing your own little indexer that indexes in batches and with
multiple threads.
On Wed, Sep 26, 2018 at 8:00 AM Vincenzo D'Amore wrote:
> Hi, I know this is the shortest way but, had you tried to add more core
Hi, I know this is the shortest way but, have you tried adding more cores or CPUs
to your Solr instances? How big is your collection in terms of GB and number of
documents?
Ciao,
Vincenzo
> On 26 Sep 2018, at 08:36, Krizelle Mae Hernandez
> wrote:
>
> Hi.
>
> Our SOLR currently is running appr
Hi,
The documents might be too long to highlight, I think.
See "hl.maxAnalyzedChars" in reference guide.
https://lucene.apache.org/solr/guide/7_4/highlighting.html
Try to increase hl.maxAnalyzedChars value
or to use hl.alternateField, hl.maxAlternateFieldLength to create
snippets even if Solr fai
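A sketch of a request combining those parameters (collection, field, and values are illustrative; `hl.maxAnalyzedChars` defaults to analyzing only the first ~51K characters of a field):

```
/solr/mycollection/select?q=body:solr&hl=on&hl.fl=body
  &hl.maxAnalyzedChars=1000000
  &hl.alternateField=body&hl.maxAlternateFieldLength=300
```

Raising `hl.maxAnalyzedChars` costs CPU per request, so the alternate-field fallback is the cheaper option when a snippet merely needs to be non-empty.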
Hi,
We are running a 3-node Solr Cloud (4.4) in our production infrastructure. We
recently moved our Solr server host from SoftLayer to a Digital Ocean server
with the same configuration as production.
Now we are facing some slowness in the searcher when we index documents; when
we stop indexing then searches i