Another question: how can I use multiple spellcheckers at the same time,
based on a condition?
Currently we are using two spellcheckers [spellcheck.dictionary=wordbreak,
spellcheck.dictionary=en].
If the wordbreak dictionary has a suggestion, it will make a second call for
fetching the result and in
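A sketch of what such a two-checker setup could look like in solrconfig.xml (the checker names come from the message above; the field name and classes are assumptions, not from this thread):

```xml
<!-- Sketch only: "text_en" is a hypothetical field name. -->
<searchComponent name="spellcheck" class="solr.SpellCheckComponent">
  <!-- conventional per-word checker -->
  <lst name="spellchecker">
    <str name="name">en</str>
    <str name="classname">solr.DirectSolrSpellChecker</str>
    <str name="field">text_en</str>
  </lst>
  <!-- word-break checker: splits/joins tokens rather than respelling them -->
  <lst name="spellchecker">
    <str name="name">wordbreak</str>
    <str name="classname">solr.WordBreakSolrSpellChecker</str>
    <str name="field">text_en</str>
    <str name="combineWords">true</str>
    <str name="breakWords">true</str>
  </lst>
</searchComponent>
```

Both checkers can then be requested in one query by passing spellcheck.dictionary twice (spellcheck.dictionary=wordbreak&spellcheck.dictionary=en), which may avoid the second call described above.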
On 9/12/18 8:21 PM, Shawn Heisey wrote:
> On 9/12/2018 7:38 AM, Gu, Steve (CDC/OD/OADS) (CTR) wrote:
>> I am upgrading our solr to 7.4 and would like to set up solrcloud for
>> failover and load balance. There are three zookeeper servers
>> (zk1:2181, zk1:2182) and two solr instance solr1:8983, s
Thanks for the suggestions and responses, Erick and Shawn. Erick, I only return
30 records irrespective of the query (not the entire payload); I removed some of
my configuration settings for readability. The parameter "allResults" was a
little misleading, I apologise for that, but I appreciate your
Your use case should not start from the data you store, but from the queries
you want to run. Then you massage your data to fit that.
Don't worry too much about 'duplicates' at this stage. You could delete the
historical records if needed. Or index them without storing.
What you should try to
My recommendation is to put that data in a relational database. That does not
look like an appropriate use for Solr.
wunder
Walter Underwood
wun...@wunderwood.org
http://observer.wunderwood.org/ (my blog)
> On Sep 12, 2018, at 1:10 PM, Samatha Sajja
> wrote:
>
> Hi,
>
> I have a use case wh
Hi,
I have a use case where I am not sure which type of fields to use.
Use case: for every order line we would like to store statuses and quantities.
For example, I placed an order with an item quantity of 6; 4 of them got
shipped and 2 of them are in processing. I would like to search on status an
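One way to model per-line statuses like this is with nested child documents; a minimal sketch of what the indexed document could look like (all field names here are assumptions, not from the thread):

```python
# Sketch: an order line with one child document per status bucket,
# using Solr's _childDocuments_ JSON convention. Field names assumed.
order_line = {
    "id": "line-1",
    "item": "widget",
    "quantity_total": 6,
    "_childDocuments_": [
        {"id": "line-1-shipped", "status": "shipped", "quantity": 4},
        {"id": "line-1-processing", "status": "processing", "quantity": 2},
    ],
}

# The child quantities partition the line's total quantity.
assert sum(c["quantity"] for c in order_line["_childDocuments_"]) == order_line["quantity_total"]
```

Searching on status plus quantity then becomes a block-join query against the children; whether that beats flattened dynamic fields depends on how many statuses a line can have.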
Hi,
I am using Solr version 4.6.
My document contains "iphone 7", and when I search with "iphone7" I get the
result because WordDelimiterFilterFactory takes care of it via the
split-on-numerics functionality (iphone7 --> iphone 7).
But I want Solr to return a spellcheck suggestio
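For reference, a sketch of the request that would ask a word-break checker for that kind of suggestion (the collection name and the "wordbreak" dictionary name are assumptions; the checker itself must be configured in solrconfig.xml):

```python
from urllib.parse import urlencode

# Build the spellcheck request parameters; "products" is a hypothetical
# collection name.
params = {
    "q": "iphone7",
    "spellcheck": "true",
    "spellcheck.dictionary": "wordbreak",
    "spellcheck.count": 5,
}
query_string = urlencode(params)
url = "http://localhost:8983/solr/products/select?" + query_string
```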
thanks, shawn. yep, i saw the multi term synonym discussion when googling
around a bit after your first reply. pretty jazzed about finally getting to
tinker w that instead of creating our regex duct-tape solution
for multi term synonyms!
thanks again-
--
John Blythe
On Wed, Sep 12, 2018 at 2:15
On 9/12/2018 7:38 AM, Gu, Steve (CDC/OD/OADS) (CTR) wrote:
I am upgrading our Solr to 7.4 and would like to set up SolrCloud for failover
and load balancing. There are three ZooKeeper servers (zk1:2181, zk1:2182) and
two Solr instances, solr1:8983 and solr2:8983. So what Solr URL should
hi (again!). hopefully this will be the last question for a while—i've
really gotten my money's worth the last day or two :)
searches like "foo bar" aren't working the same way they used to for us
since our 7.4 upgrade this weekend.
in both cases our phrase was wrapped in double quotes. the case
On 9/12/2018 8:12 AM, John Blythe wrote:
shawn: at first, no. we rsynced data up after running it through the
migration tool. we'd gotten errors when using WDF so updated all instances
of it to WDGF (and subsequently added FlattenGraphFilterFactory to each
index analyzer that used WDGF to avoid e
On 9/12/2018 7:43 AM, Dominique Bejean wrote:
Are you aware of the issues with Java applications in Docker when the Java
version is not 10?
https://blog.docker.com/2018/04/improved-docker-container-integration-with-java-10/
Solr explicitly sets heap size when it starts, so Java is *NOT*
determining th
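The heap pinning mentioned here is normally done through Solr's own startup configuration; a sketch of the usual knobs (values are examples only):

```shell
# In solr.in.sh (the include file read by bin/solr):
SOLR_HEAP="2g"          # sets both -Xms and -Xmx explicitly

# or, equivalently, on the command line:
# bin/solr start -m 2g
```

Because Solr always passes an explicit heap size, the JVM's container-memory heuristics (the subject of the Docker/Java 10 article above) do not decide the heap.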
well, it's our general text field that things get dumped into. this special
use case that is sku specific just ends up being done on the general input.
ended up raising the Xss value and i'm able to get results :)
i imagine this is a n00b or stupid question but imma go for it: what would
be the v
Looks like your SKU field is points-based? Strings would probably be
better; if you switched to points-based, be aware that it's new code.
And maxBooleanClauses is so old-school ;) You're better off with
TermsQueryParser, especially if you pre-sort the tokens. see:
https://lucene.apache.org/solr/guide/6_6/ot
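A sketch of building such a TermsQueryParser filter on the client side, with the tokens pre-sorted as suggested (the "sku" field name is an assumption):

```python
def terms_filter(field, values):
    """Build a {!terms} filter query string. Pre-sorting the token list
    is the optimization Erick mentions; the field name is hypothetical."""
    return "{{!terms f={0}}}{1}".format(field, ",".join(sorted(values)))

fq = terms_filter("sku", ["SKU-9", "SKU-1", "SKU-5"])
# fq == "{!terms f=sku}SKU-1,SKU-5,SKU-9"
```

Unlike a giant OR query, a `{!terms}` filter is not subject to maxBooleanClauses, which is why it suits the 19k-SKU case described below.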
David,
On 9/12/18 12:21 PM, David Hastings wrote:
>> On Sep 12, 2018, at 12:15 PM, Christopher Schultz
>> <ch...@christopherschultz.net> wrote:
>>
>> David,
>>
>> On 9/12/18 11:03 AM, David Hastings wrote:
>>> is there a way to start the
Use case is we are upgrading our servers, and have been running solr 5 and 7
side by side on the same machines to make sure we got 7 to reflect the results
of our current install. However to finally make the switch, it would require
changing many many scripts and servers that have already been m
David,
On 9/12/18 11:03 AM, David Hastings wrote:
> is there a way to start the default solr installation on more than
> one port? Only thing I could find was adding another connector to
> Jetty, via
> https://stackoverflow.com/questions/6905098/h
: WARN: (main) AbstractLifeCycle FAILED org.eclipse.jetty.server.Server@...
: java.io.FileNotFoundException: /opt/solr-5.4.1/server (Is a directory)
: java.io.FileNotFoundException: /opt/solr-5.4.1/server (Is a directory)
: at java.io.FileInputStream.open0(Native Method)
: at java
I updated an older Solr 4.10 core to Solr 7.1 recently. In so doing, I took an
old 'gradeLevel_enum' field of type EnumField and made it an EnumFieldType,
since the former has been deprecated. The old core was able to facet on
gradeLevel_enum, but the new 7.1 core just returns no facet values wh
hey all!
i'm having an issue w large queries. one of our use cases is for users to
drop in an untold amount of product skus. we previously had our
maxBooleanClause limit set to 20k (eek!). but it worked phenomenally well
and i think our record amount from a user was ~19k items.
we're now on 7.4 C
is there a way to start the default solr installation on more than one
port? Only thing I could find was adding another connector to Jetty, via
https://stackoverflow.com/questions/6905098/how-to-configure-jetty-to-listen-to-multiple-ports
however the default solr start command takes the -p parame
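Rather than adding Jetty connectors, the usual route is to run a second Solr instance on another port with its own home directory; a sketch (paths are assumptions):

```shell
# Two independent instances of the same installation, one per port.
bin/solr start -p 8983 -s /var/solr/home1
bin/solr start -p 8984 -s /var/solr/home2
```

Each instance gets its own solr home (`-s`), so cores/collections and logs don't collide.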
Vadim,
That makes perfect sense.
Thanks
Steve
-Original Message-
From: Vadim Ivanov
Sent: Wednesday, September 12, 2018 10:23 AM
To: solr-user@lucene.apache.org
Subject: RE: how to access solr in solrcloud
Hi, Steve
If you are using solr1:8983 to access solr and solr1 is down IMHO n
Thanks, Walter
-Original Message-
From: Walter Underwood
Sent: Wednesday, September 12, 2018 10:41 AM
To: solr-user@lucene.apache.org
Subject: Re: how to access solr in solrcloud
Use a load balancer. It doesn’t have to be fancy, we use the Amazon ALB because
our clusters are in AWS.
Z
Thanks, David
-Original Message-
From: David Santamauro
Sent: Wednesday, September 12, 2018 10:28 AM
To: solr-user@lucene.apache.org
Cc: David Santamauro
Subject: Re: how to access solr in solrcloud
... or haproxy.
On 9/12/18, 10:23 AM, "Vadim Ivanov" wrote:
Hi, Steve
If y
Use a load balancer. It doesn’t have to be fancy, we use the Amazon ALB because
our clusters are in AWS.
Zookeeper never handles queries. It coordinates cluster changes with the Solr
instances.
wunder
Walter Underwood
wun...@wunderwood.org
http://observer.wunderwood.org/ (my blog)
> On Sep 12
... or haproxy.
On 9/12/18, 10:23 AM, "Vadim Ivanov" wrote:
Hi, Steve
If you are using solr1:8983 to access Solr and solr1 is down, IMHO nothing
will help you reach a dead IP.
You should switch to any other live node in the cluster, or I'd propose
having nginx as a frontend to
Hi, Steve
If you are using solr1:8983 to access Solr and solr1 is down, IMHO nothing
will help you reach a dead IP.
You should switch to any other live node in the cluster, or I'd propose
having nginx as a frontend to access
SolrCloud.
--
BR, Vadim
-Original Message-
From: Gu, Steve (CDC
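The nginx front end proposed here could look roughly like this (host names taken from the thread; everything else is a sketch, not a tested config):

```nginx
# Sketch: round-robin two Solr nodes behind one address.
upstream solrcloud {
    server solr1:8983;
    server solr2:8983;
}

server {
    listen 80;
    location /solr/ {
        proxy_pass http://solrcloud;
    }
}
```

Clients then use the nginx address; nginx stops sending traffic to a node that goes down.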
You will run into significant problems if, when returning "all results",
you return large result sets. For regular queries I like to limit the
return to 100, although 1,000 is sometimes OK.
Millions will blow you out of the water; use CursorMark or Streaming
for very large result sets. CursorMark
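A sketch of the cursorMark request parameters (collection and field names are assumptions; the key constraint is that the sort must end on the uniqueKey field):

```python
from urllib.parse import urlencode

def page_params(cursor="*", rows=100):
    """Parameters for one cursorMark page. The first request uses '*';
    each response carries a nextCursorMark to feed into the next call.
    'id' is assumed to be the uniqueKey field."""
    return urlencode({
        "q": "*:*",
        "rows": rows,
        "sort": "score desc, id asc",   # must include the uniqueKey
        "cursorMark": cursor,
    })

first = page_params()
# Loop until nextCursorMark stops changing between responses.
```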
hey guys.
preeti: good thought, but this was something we were already aware of and
had accounted for. thanks tho!
shawn: at first, no. we rsynced data up after running it through the
migration tool. we'd gotten errors when using WDF so updated all instances
of it to WDGF (and subsequently added
Hi,
Are you aware of the issues with Java applications in Docker when the Java
version is not 10?
https://blog.docker.com/2018/04/improved-docker-container-integration-with-java-10/
Regards.
Dominique
On Wed, Sep 12, 2018 at 05:42, Shawn Heisey wrote:
> On 9/11/2018 9:20 PM, solrnoobie wrote:
> >
Hi, all
I am upgrading our Solr to 7.4 and would like to set up SolrCloud for failover
and load balancing. There are three ZooKeeper servers (zk1:2181, zk1:2182) and
two Solr instances, solr1:8983 and solr2:8983. So what Solr URL should the
client use for access? Will it be solr1:89
Hello gurus,
I am using SolrCloud with DIH for indexing my data.
Testing 7.4.0 with an implicitly sharded collection, I have noticed that any
indexing run longer than 2 minutes always fails, with many timeout records in
the log coming from all replicas in the collection.
Such as:
x:Mycol_s_0_replica_t40 Reque
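If the failures really are distributed-update timeouts between replicas, one knob worth checking is the pair of timeouts in solr.xml; a sketch (the values, and whether they are the culprit here, are assumptions):

```xml
<!-- Sketch only (solr.xml): raise the distributed update timeouts if
     long imports trip them. Values are illustrative, not recommendations. -->
<solrcloud>
  <int name="distribUpdateConnTimeout">60000</int>
  <int name="distribUpdateSoTimeout">600000</int>
</solrcloud>
```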
On 9/12/2018 5:47 AM, Dwane Hall wrote:
Good afternoon Solr brains trust I'm seeking some community advice if somebody
can spare a minute from their busy schedules.
I'm attempting to use the switch query parser to influence client search
behaviour based on a client specified request parameter.
Hello, Dominik.
IIRC, fl=field(foo_field_1)
On Wed, Sep 12, 2018 at 1:49 PM Dominik Safaric
wrote:
> Hi,
>
> I've implemented a custom FieldType, BinaryDocValuesField, that stores
> binary values as doc-values.
>
> I am interested onto how can I retrieve the stored values via a Solr query
> as b
Good afternoon Solr brains trust I'm seeking some community advice if somebody
can spare a minute from their busy schedules.
I'm attempting to use the switch query parser to influence client search
behaviour based on a client specified request parameter.
Essentially I want the following to occu
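A sketch of what driving a filter through the switch query parser could look like (the parameter name, field, and cases are all assumptions, since the wanted behaviour is cut off above):

```python
from urllib.parse import urlencode

# Hypothetical: clients send userType, and the switch parser maps it to a
# filter; unknown values fall through to the default case.
user_type = "internal"
qs = urlencode({
    "q": "*:*",
    "fq": "{!switch case.internal=*:* "
          "case.external=visibility:public "
          "default=visibility:public v=$userType}",
    "userType": user_type,
})
```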
Hi,
I've implemented a custom FieldType, BinaryDocValuesField, that stores
binary values as doc-values.
I am interested in how I can retrieve the stored values via a Solr query
as a base64 encoded string. Because if I issue a "*" query with all fields
selected, I get all fields except the binary
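Assuming the field can be coaxed into the response (e.g. via fl=field(...), as suggested in the reply above), the client-side handling of the base64 payload is straightforward; a sketch with a made-up payload:

```python
import base64

# Hypothetical binary doc-value payload, round-tripped through the base64
# encoding Solr would use in a JSON/XML response.
raw = b"\x00\x01\x02payload"
encoded = base64.b64encode(raw).decode("ascii")   # what the response carries
decoded = base64.b64decode(encoded)               # client-side decode
assert decoded == raw
```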
On 9/11/2018 10:15 PM, Zahra Aminolroaya wrote:
Thanks Erick. We used to use TrieLongField for our unique id, and the
documentation says that all Trie* field types are casting to
*PointField types. What would be the alternative solution?
I've never heard of Trie casting to Point.
Point is the
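For reference, the points-based counterpart of TrieLongField looks like this in the schema (a sketch; note that for a uniqueKey field, a plain string type is often recommended instead):

```xml
<!-- Sketch only: LongPointField is the points replacement for
     TrieLongField; docValues are needed to sort/facet on it. -->
<fieldType name="plong" class="solr.LongPointField" docValues="true"/>
```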
On 9/11/2018 8:32 PM, John Blythe wrote:
we recently migrated to cloud. part of that migration jumped us from 6.1 to
7.4.
one example query between our old solr instance and our new cloud instance
produces 42 results and 19k results.
the analyzer is the same aside from WordDelimiterFilterFactor
Hi all,
I have to update a bunch of documents, but the update requests contain
only parts of the documents.
Atomic update is not a feasible option, because sometimes I have to
remove one or more instances of a dynamic field, and I can't know
their name(s) at update time.
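When atomic updates can't express "remove whatever dynamic fields exist", the usual fallback is fetch, merge, reindex; a sketch of the merge step (the `attr_` dynamic-field prefix is an assumption for illustration):

```python
def merge_partial(stored_doc, partial, dynamic_prefix="attr_"):
    """Merge a partial update into the stored document: drop all old
    dynamic fields (whose names we can't know up front), then overlay
    the partial doc. The full result is then reindexed as a whole."""
    merged = {k: v for k, v in stored_doc.items()
              if not k.startswith(dynamic_prefix)}
    merged.update(partial)
    return merged

doc = {"id": "1", "title": "old", "attr_color": "red"}
merged = merge_partial(doc, {"title": "new", "attr_size": "L"})
# merged == {"id": "1", "title": "new", "attr_size": "L"}
```

This requires every field to be stored (or have docValues) so the fetched document is complete enough to reindex.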