Finally, I was able to implement the desired behavior using your suggestions,
as follows (a rough config sketch is below):
- Added StatelessScriptUpdateProcessorFactory before
SignatureUpdateProcessorFactory in order to analyze "field1" and store the
analyzed value in "field1_tmp_ss"
- Passed "field1_tmp_ss" to SignatureUpdateProcessorFactory
Hi all,
I want to set up a SolrCloud cluster with x nodes and have 3 ZooKeeper servers.
As I understand it, the following parties need to know about all ZooKeeper servers:
* All ZooKeeper servers
* All SolrCloud nodes
* All SolrJ cloud smart clients
So let's say I hard-code it and then want to add 2 z
Hi,
Hard-coding your ZK server addresses is a key factor in the stability of your
cluster.
If this were some kind of magic, and the magic failed, EVERYTHING would come to
a halt :)
And since changing ZK is something you do very seldom, I think it is not too
hard to:
1. Push a new solr.in.sh file to all
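For the record, "all" here is just the ZK_HOST line in solr.in.sh on each node,
so the actual change is a single line (hostnames below are made up):

# solr.in.sh
ZK_HOST="zk1.example.com:2181,zk2.example.com:2181,zk3.example.com:2181/solr"

(The trailing /solr chroot is optional; keep whatever your cluster already uses.)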
Or you can administer the nodes via configuration management software such as
Salt, Puppet, etc. If we add a ZooKeeper to our list of ZooKeepers, it is
automatically updated in the solr.in.sh file on all nodes and across separate
clusters. If you're looking for easy maintenance, that is :)
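A minimal sketch of how that can look with Puppet, assuming puppetlabs-stdlib
and the ZooKeeper list coming from Hiera (paths and names are illustrative):

# keep the ZK_HOST line in solr.in.sh in sync with the Hiera list
$zk_hosts = join(lookup('solr::zk_hosts'), ',')

file_line { 'solr_zk_host':
  path   => '/etc/default/solr.in.sh',
  line   => "ZK_HOST=\"${zk_hosts}\"",
  match  => '^#?ZK_HOST=',
  notify => Service['solr'],
}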
Markus
-
Hi Markus and Jan,
Thanks for the quick response and good ideas.
I will look into the Puppet direction. We already use Puppet, so this is
easy to add.
Thanks a lot,
David
On Thu, Jan 26, 2017 at 3:38 PM Markus Jelsma wrote:
> Or you can administer the nodes via configuration management software
On 1/26/2017 6:30 AM, David Michael Gang wrote:
> I want to set up a SolrCloud cluster with x nodes and have 3 ZooKeeper servers.
> As I understand it, the following parties need to know about all ZooKeeper servers:
> * All ZooKeeper servers
> * All SolrCloud nodes
> * All SolrJ cloud smart clients
>
> So let's
Hi Alex,
Just tested the DIH example in 6.4 (bin/solr -e dih)
Getting the same “No dataimport-handler defined!” message for every one of the
cores installed as part of the example.
Cheers,
Chris
On 24/01/2017, 15:07, "Alexandre Rafalovitch" wrote:
Strange.
If you run a pre-built DIH ex
Chris,
Shawn has already provided a workaround and a JIRA reference earlier
in this thread. Could you review his message and see if his solution
solves it for you? There might be a 6.4.1 soon, and it will be fixed
there as well.
Regards,
Alex
http://www.solr-start.com/ - Resources for Solr
On 1/26/2017 7:44 AM, Chris Rogers wrote:
> Just tested the DIH example in 6.4 (bin/solr -e dih)
>
> Getting the same “No dataimport-handler defined!” for every one of the cores
> installed as part of the example.
Repeating a reply already posted elsewhere on this thread:
It's a bug.
https://is
Solr 5.4.1: I am running a query with multiple facet fields.
_snip_
select?q=*%3A*&sort=metatag.date.prefix4+DESC&rows=7910&fl=metatag.date.prefix7&wt=json&indent=true&facet=true&facet.field=metatag.date.prefix7
&facet.field=metatag.date.prefix4&facet.field=metatag.doctype
field metatag.date.pr
Hi All,
I have migrated Solr from the older version 3.6 to SolrCloud 6.2 and all is good,
but almost every second there are some WARN messages in the logs.
HttpParser
bad HTTP parsed: 400 HTTP/0.9 not supported for
HttpChannelOverHttp@16a84451{r=0,c=false,a=IDLE,uri=null}
Does anyone know where these
You probably have an old HTTP jar somewhere in your classpath that's
being found first. Or you have some client somewhere using an old HTTP
version.
Best,
Erick
On Thu, Jan 26, 2017 at 7:49 AM, marotosg wrote:
> Hi All,
> I have migrated Solr from the older version 3.6 to SolrCloud 6.2 and all good b
If you can't figure it out, you can dynamically change the log level for
the particular package, and/or you could enable access logs to see the
requests.
Or run Wireshark on the network and see what's going on (when you have
a hammer!!!).
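If you go the logging route, the Logging > Level screen in the admin UI can
change levels on the fly; the equivalent static line in log4j.properties for
the Jetty parser class would be something along the lines of:

# surface the offending requests from Jetty's HTTP parser
log4j.logger.org.eclipse.jetty.http.HttpParser=DEBUG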
Regards,
Alex.
http://www.solr-start.com/ - Resources for Solr users, new and experienced
facet.limit?
f.<field_name>.facet.limit? (not sure how that would work with a field
name that contains dots)
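Something like this, built from the fields in your query (assuming the
per-field form does accept the dotted names), where the f.* parameter
overrides facet.limit for that one field:

&facet=true
&facet.field=metatag.doctype
&facet.field=metatag.date.prefix4
&f.metatag.doctype.facet.limit=50
&facet.limit=100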
Docs are at: https://cwiki.apache.org/confluence/display/solr/Faceting
Regards,
Alex.
http://www.solr-start.com/ - Resources for Solr users, new and experienced
On 26 January 2017 at 10:36, KR
Hi,
I am experiencing a similar issue. We have tried facet.method=uif, but that
didn't help much. There is still some performance degradation.
Perhaps there are some underlying changes in the Lucene version it's using.
Will switching to the JSON Facet API help in this case? We have 5 nodes / a single
shard in our productio
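For reference, the kind of request I mean would look something like this
(collection and field names are made up):

curl http://localhost:8983/solr/mycollection/query -d '
{
  "query": "*:*",
  "facet": {
    "by_type": { "type": "terms", "field": "doctype_s", "limit": 10 }
  }
}'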
Hi Johan,
Once the backup is created successfully, Solr does not play any role in
managing the backup copies; it is left up to the user. You may want to
build a script which maintains the last N backup copies (and deletes old ones).
If you end up building such a script, see if you can submit a patch
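A minimal sketch of such a script, assuming the default snapshot.* directory
naming and a made-up backup location:

#!/bin/bash
# keep only the newest KEEP snapshots, delete the rest
BACKUP_DIR=/var/solr/backups
KEEP=5
cd "$BACKUP_DIR" || exit 1
ls -1dt snapshot.* | tail -n +$((KEEP + 1)) | xargs -r rm -rf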
Are you using docValues? Try that; it might help.
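Something along these lines in the schema, plus a full reindex (the field name
is just an example):

<field name="doctype_s" type="string" indexed="true" stored="true" docValues="true"/>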
Bill Bell
Sent from mobile
> On Jan 26, 2017, at 10:38 AM, Bhawna Asnani wrote:
>
> Hi,
> I am experiencing a similar issue. We have tried facet.method=uif, but that
> didn't help much. There is still some performance degradation.
> Perhaps some under
Alexandre,
Thanks.
I will refactor my schema to eliminate the period-separated values in field
names and try your suggestion.
I'll let you know how it goes.
Kris
- Original Message -
From: "Alexandre Rafalovitch"
To: "solr-user"
Sent: Thursday, January 26, 2017 11:40:49 AM
S
Running the latest crawl from Nutch to Solr 5.4.1, it seems that my copy fields
no longer work as expected.
Why would copyField ignore the default all of a sudden?
I've not made any significant changes to Solr and none at all to Nutch.
{
"response":{"numFound":699,"start":0,"do
Does anyone think this is normal behavior? To get random children?
And to have different (correct) behavior if you commit after sending each
document?
Fabien
-Original Message-
From: Fabien Renaud [mailto:fabien.ren...@findwise.com]
Sent: 24 January 2017 19:23
To: solr-user@l
Adding my anecdote:
I'm using heavily tuned ParNew/CMS. This is a SolrCloud collection, but
per-node I've got a 28G heap and a 200G index. The large heap turned out to be
necessary because certain operations in Lucene allocate memory based on things
other than result size (index size typica
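For anyone curious, ParNew/CMS is selected with JVM flags along these lines
(illustrative only, not my exact tuning; the heap sizes just match the numbers
above):

-Xms28g -Xmx28g
-XX:+UseConcMarkSweepGC -XX:+UseParNewGC
-XX:CMSInitiatingOccupancyFraction=50 -XX:+UseCMSInitiatingOccupancyOnly
-XX:+CMSParallelRemarkEnabled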
Hi All,
While I was testing out the transient feature of Solr cores, I found this
issue.
I had transientCacheSize set to "1" in solr.xml, and I created two
cores with these properties:
loadOnStartup=false
transient=true
I had my softCommit time set to 5 seconds and hardCommit to 10 seconds.
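For completeness, the setup described above boils down to something like this
(the core name is made up):

In solr.xml, inside the <solr> element:
  <int name="transientCacheSize">1</int>

In core.properties for each of the two cores:
  name=core1
  loadOnStartup=false
  transient=true

In solrconfig.xml:
  <autoSoftCommit>
    <maxTime>5000</maxTime>
  </autoSoftCommit>
  <autoCommit>
    <maxTime>10000</maxTime>
  </autoCommit>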
Hi Mikhail,
The per-row scenario would cater for queries that look at specific rows.
For example, I need the address and bank details of a member, which are stored
on a different core.
I guess what I am trying to do is get Solr search functionality that is similar
to a DB, something which I c
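To make it concrete, something like Solr's cross-core join syntax is the sort
of thing I am after (core and field names are made up):

q={!join fromIndex=members from=member_id to=member_id}id:12345

i.e. return documents in the current core whose member_id matches the
member_id of the matching document in the "members" core.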