On 1/24/13 11:22 PM, Fadi Mohsen wrote:
Thanks Per, would the first approach involve restarting Solr?
Of course ZK needs to be running in order to load the config into ZK. Solr nodes
do not need to be running. If they are, I can't imagine that they would need to be
restarted in order to take advantage of the new conf
Hello,
I've just tried to upgrade from 4.0 to 4.1 and I have the following
exception when reindexing my data:
Caused by: java.lang.UnsupportedOperationException
at java.util.Collections$UnmodifiableMap.put(Collections.java:1283)
at
org.apache.solr.handler.dataimport.VariableResolver.currentLevel
Hi,
I'd like to copy specific words from the keywords field to another field.
Because the data I get is all in one field, I'd like to extract the cities (they
are fixed, so I'll know them in advance) and put them in a separate field.
Can I generate a whitelist file and tell the copy field to check
Hi,
this is my spellcheck/autosuggest dictionary field and field type,
we are testing Solr 4.1 running inside Tomcat 7 and Java 7 with the following
options
JAVA_OPTS="-Xms256m -Xmx2048m -XX:MaxPermSize=1024m -XX:+UseConcMarkSweepGC
-XX:+CMSIncrementalMode -XX:+ParallelRefProcEnabled
-XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/home/ubuntu/OOM_HeapDump"
our source
I think the best way will be to pre-process the document (or use a custom
UpdateRequestProcessor). Another option, if you'll only use the "cities"
field for faceting/sorting/searching (you don't need the stored content),
would be to use a regular copyField and use a "KeepWordFilter" for the
"cities" f
Hi
Use the KeepWordFilter on the destination field:
http://wiki.apache.org/solr/AnalyzersTokenizersTokenFilters#solr.KeepWordFilterFactory
Cheers
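A minimal schema.xml sketch of that suggestion (the field names, the cities.txt filename, and the tokenizer choice are assumptions, not from the original thread):

```xml
<!-- Sketch: copy the raw "keywords" field into a "cities" field whose
     analyzer keeps only the terms listed in cities.txt (one city per line).
     Field and file names here are hypothetical. -->
<fieldType name="cities_keep" class="solr.TextField" positionIncrementGap="100">
  <analyzer>
    <tokenizer class="solr.WhitespaceTokenizerFactory"/>
    <filter class="solr.LowerCaseFilterFactory"/>
    <filter class="solr.KeepWordFilterFactory" words="cities.txt" ignoreCase="true"/>
  </analyzer>
</fieldType>

<field name="cities" type="cities_keep" indexed="true" stored="false" multiValued="true"/>
<copyField source="keywords" dest="cities"/>
```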
-Original message-
> From:b.riez...@pixel-ink.de
> Sent: Fri 25-Jan-2013 11:41
> To: solr-user@lucene.apache.org
> Subject: copyField - co
Possibly with Shingles before the KeepWord filter to deal with multi-word
situations (though I am not sure if KeepWord allows space-separated tokens
in the file): http://stackoverflow.com/questions/14479473/
Regards,
Alex.
Personal blog: http://blog.outerthoughts.com/
LinkedIn: http://www.linke
Can someone tell me if Solr 3.6.1 supports XML 1.1 or must I stick with XML 1.0?
Thanks!
-MJ
Hello,
I had some differences in solr score between solr 3.1 and solr 4.
I have a searchfield with the following type:
An example of fieldnorms:
SearchT
In case anyone was wondering, the solution is to HTML-encode the URL.
Solr didn't like the &'s; just convert them to &amp; and it works!
--
View this message in context:
http://lucene.472066.n3.nabble.com/error-initializing-QueryElevationComponent-tp4035194p4036261.html
Sent from the Solr - User ma
I think you really need to see a thread dump when it gets stuck to know what's
going on. My original thought was that this was a problem with the index-based
spellchecker and wouldn't affect direct- . (for DirectSolrSpellChecker,
spellcheck.build is a no-op as there is no separate index or dictionary t
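For reference, a minimal DirectSolrSpellChecker configuration sketch for solrconfig.xml (the component name and the "spell" field are assumptions, not taken from the poster's config):

```xml
<!-- Sketch: DirectSolrSpellChecker reads terms straight from the main index,
     so no separate spellcheck index is built (spellcheck.build is a no-op).
     The "spell" field name is hypothetical. -->
<searchComponent name="spellcheck" class="solr.SpellCheckComponent">
  <lst name="spellchecker">
    <str name="name">direct</str>
    <str name="classname">solr.DirectSolrSpellChecker</str>
    <str name="field">spell</str>
  </lst>
</searchComponent>
```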
On 1/25/2013 4:49 AM, Harish Verma wrote:
we are testing solr 4.1 running inside tomcat 7 and java 7 with following
options
JAVA_OPTS="-Xms256m -Xmx2048m -XX:MaxPermSize=1024m -XX:+UseConcMarkSweepGC
-XX:+CMSIncrementalMode -XX:+ParallelRefProcEnabled
-XX:+HeapDumpOnOutOfMemoryError -XX:HeapDum
Okay one last note... just for closure... looks like it was addressed in
solr 4.1+ (I was looking at 4.0).
On Thu, Jan 24, 2013 at 11:14 PM, Amit Nithian wrote:
> Okay so after some debugging I found the problem. While the replication
> piece will download the index from the master server and m
Actually I was mistaken. I thought we were running 4.1.0 but we were
actually running 4.0.0.
I will upgrade to 4.1.0 and see if this is still happening.
Thanks,
John
On Wed, Jan 23, 2013 at 9:39 PM, John Skopis (lists) wrote:
> Sorry for leaving that bit out. This is Solr 4.1.0.
>
> Thanks agai
This is a bug. Thank you for reporting it. I opened this ticket:
https://issues.apache.org/jira/browse/SOLR-4361
Until there is a fix, here are two workarounds:
1. If you do not need any 4.1 DIH functionality, use the 4.0 DIH jar with your
4.1 Solr.
-or-
2. Use request parameters without dots
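As an illustration of workaround 2, the idea is simply to avoid dots in custom request parameter names. The handler path and parameter names below are hypothetical:

```shell
# Hits the UnsupportedOperationException on 4.1 (dotted custom parameter):
curl 'http://localhost:8983/solr/dataimport?command=full-import&my.param=value'
# Workaround: rename the parameter so it contains no dots:
curl 'http://localhost:8983/solr/dataimport?command=full-import&myparam=value'
```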
I do something similar, but without the placeholders in db-data-config.xml. You
can define the entire datasource in solrconfig.xml, then leave out that element
entirely in db-data-config.xml. It seems really odd, but that is how the code
works.
This is working for me in 4.1, so it might be a wo
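A sketch of what Walter describes, with placeholder driver, URL, and credentials: the whole datasource lives in the DIH handler's defaults in solrconfig.xml, and the dataSource element is then left out of db-data-config.xml entirely:

```xml
<!-- solrconfig.xml: datasource defined entirely in the handler defaults.
     Driver/URL/credentials below are placeholders. -->
<requestHandler name="/dataimport"
                class="org.apache.solr.handler.dataimport.DataImportHandler">
  <lst name="defaults">
    <str name="config">db-data-config.xml</str>
    <lst name="datasource">
      <str name="driver">com.mysql.jdbc.Driver</str>
      <str name="url">jdbc:mysql://localhost/mydb</str>
      <str name="user">dbuser</str>
      <str name="password">dbpass</str>
    </lst>
  </lst>
</requestHandler>
```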
Thanks, it is working when using just a solr.xml for each node. I can't find
that anywhere in the docs.
As far as I can tell, the minimum config for a Zookeeper-based node is:
-Dzkhost=
-Dsolr.solr.home=... (directory containing solr.xml file)
I started out doing separate Zookeeper loads a
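Put together, a minimal launch of the example Jetty with those two properties might look like this (host names and paths are placeholders; note the camel-case -DzkHost):

```shell
# Minimal SolrCloud node startup (placeholders for hosts and paths):
java -DzkHost=zk1.example.com:2181,zk2.example.com:2181,zk3.example.com:2181 \
     -Dsolr.solr.home=/path/to/solr/home \
     -jar start.jar
```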
When I used 4.0 I could use my DIH on any shard and the documents would be
distributed based on the internal hashing algorithm and end up distributed
evenly across my three shards.
I have just upgraded to Solr 4.1 and I have noticed that my documents always
end up on the shard that I run the DIH o
Oops, that is -DzkHost, not -Dzkhost. --wunder
On Jan 25, 2013, at 10:56 AM, Walter Underwood wrote:
> Thanks, it is working when using just a solr.xml for each node. I can't find
> that anywhere in the docs.
>
> As far as I can tell, the minimum config for a Zookeeper-based node is:
>
> -Dzk
Hello all,
I have a one term query: "ocr:aardvark" When I look at the explain
output, for some matches the queryNorm and fieldWeight are shown and for
some matches only the "weight" is shown with no query norm. (See below)
What explains the difference? Shouldn't the queryNorm be applied to e
I found this: https://issues.apache.org/jira/browse/SOLR-2592
And tried adding the following to my solrconfig.xml
groupid
false
However I am still getting all documents added to the shard that I run the
DIH on.
According to this comment on the tracker the default sharding should b
: I have a one term query: "ocr:aardvark" When I look at the explain
: output, for some matches the queryNorm and fieldWeight are shown and for
: some matches only the "weight" is shown with no query norm. (See below)
It looks like this is from a distributed query, correct?
Explanation's gen
We are migrating our Solr index from single index to multiple shards with
solrcloud. I noticed that when I query solrcloud (to all shards or just one
of the shards), the response has a field of maxScore, but query of single
index does not include this field.
In both cases, we are using Solr 4.0.
Thanks Hoss,
Yes it is a distributed query.
Tom
On Fri, Jan 25, 2013 at 2:32 PM, Chris Hostetter
wrote:
>
> : I have a one term query: "ocr:aardvark" When I look at the explain
> : output, for some matches the queryNorm and fieldWeight are shown and for
> : some matches only the "weight" is
So I have quite a few cores already where this exact (as far as replication
is concerned) solrconfig.xml works. The other cores all replicate correctly
including conf files. And every core imports perfectly fine. The issue I am
having is that the master doesn't seem to acknowledge anything except
r
Also this doesn't seem like a problem with DIH specifically. I tried doing
JSON updates with the same result. Documents are only added to the shard
that I send the updates to.
--
View this message in context:
http://lucene.472066.n3.nabble.com/Solr-4-1-Custom-Hashing-DIH-tp4036316p4036332.html
I don't have any targeted advice at the moment, but just for kicks, you might
try using Solr 4.1.
- Mark
On Jan 25, 2013, at 2:47 PM, Sean Siefert wrote:
> So I have quite a few cores already where this exact (as far as replication
> is concerned) solrconfig.xml works. The other cores all repl
Yeah, I've noticed this too in some distrib search tests (it's not SolrCloud
related per se, I think, but just distrib search in general).
Want to open a JIRA issue about making this consistent?
- Mark
On Jan 25, 2013, at 2:39 PM, Mingfeng Yang wrote:
> We are migrating our Solr index from si
On Fri, Jan 25, 2013 at 1:56 PM, davers wrote:
> When I used 4.0 I could use my DIH on any shard and the documents would be
> distributed based on the internal hashing algorithm and end up distributed
> evenly across my three shards.
>
> I have just upgraded to Solr 4.1 and I have noticed that my
On Jan 25, 2013, at 1:56 PM, Walter Underwood wrote:
> I started out doing separate Zookeeper loads and linkconfigs, but backed off
> to bootstrapping after a total lack of success. I'll try that again (in my
> copious free time), because that seems like the right approach for
> production.
I am using the coreadmin to assign nodes to the cluster. How can I tell the
collection how many shards it has without using the collections api or
bootstrapping
--
View this message in context:
http://lucene.472066.n3.nabble.com/Solr-4-1-Custom-Hashing-DIH-tp4036316p4036340.html
Sent from the So
wow, I'm impressed: I have now 3 different solutions! Thank you for your
help.
Boris.
On Fri, Jan 25, 2013 at 7:54 PM, Walter Underwood wrote:
> I do something similar, but without the placeholders in
> db-data-config.xml. You can define the entire datasource in solrconfig.xml,
> then leave ou
On Jan 25, 2013, at 3:15 PM, davers wrote:
> I am using the coreadmin to assign nodes to the cluster. How can I tell the
> collection how many shards it has without using the collections api or
> bootstrapping
>
On the first CoreAdmin API call that creates a core, pass the numShards param.
The
Thanks for the response. That was my last resort attempt. I saw some
replication related fixes. I will reply if it works.
On Fri, Jan 25, 2013 at 12:10 PM, Mark Miller wrote:
> I don't have any targeted advice at the moment, but just for kicks, you
> might try using Solr 4.1.
>
> - Mark
>
> On
The code that auto detects the local address to use has been updated. It should
more often pick the address that is addressable from remote machines. If for
some reason this change is not desirable, perhaps open a JIRA issue that
optionally allows the old behavior (that might actually be an impo
Ok that worked thank you!
Now it seems like it is still using the default hashing function.
I put this in my solrconfig.xml under the tag
groupid
false
I want to shard on groupid instead of id but it doesn't seem to be working.
On Fri, Jan 25, 2013 at 3:59 PM, davers wrote:
> I want to shard on groupid instead of id but it doesn't seem to be working.
That's not yet implemented.
Currently you need to put the group in the ID. From the release notes:
* Simple multi-tenancy through enhanced document routing:
- The "comp
I'm not sure I understand. I thought ID had to be unique.
for example
I have the following
[
{ "id" : 1, "groupid" : 1 },
{ "id" : 2, "groupid" : 1 },
{ "id" : 3, "groupid" : 2 }
]
How would I currently ensure that id 1 & 2 end up on the same shard using
the ID ?
Unfortunately no such luck. If anyone thinks of anything to try I would
appreciate it.
On Fri, Jan 25, 2013 at 12:40 PM, Sean Siefert wrote:
> Thanks for the response. That was my last resort attempt. I saw some
> replication related fixes. I will reply if it works.
>
>
> On Fri, Jan 25, 2013 a
On Fri, Jan 25, 2013 at 4:09 PM, davers wrote:
> I'm not sure I understand. I thought ID had to be unique.
Right - the group becomes part of the ID (the prefix), not the whole ID.
> for example
> I have the following
>
> [
> { "id" : 1, "groupid" : 1 },
> { "id" : 2, "groupid" : 1},
> { "id" : 3
Well, it seems to have resolved itself. I fully wiped the configuration
directories and recreated the cores, and it seems to be fixed.
On Fri, Jan 25, 2013 at 1:11 PM, Sean Siefert wrote:
> Unfortunately no such luck. If anyone thinks of anything to try I would
> appreciate it.
>
>
> On Fri, Jan 25,
Hi
I created https://issues.apache.org/jira/browse/SOLR-4362 for this.
Ahmet
--- On Sun, 1/20/13, Ahmet Arslan wrote:
> From: Ahmet Arslan
> Subject: edismax, phrase query with slop, pf parameter
> To: solr-user@lucene.apache.org
> Date: Sunday, January 20, 2013, 6:13 PM
> Hello,
>
> Using ex
: Yeah, I've noticed this too in some distrib search tests (it's not
: SolrCloud related per se, I think, but just distrib search in general).
I'm pretty sure that the crux of the issue is that maxScore is only
*returned* if scores are requested.
When you do a single node request, you have to
Right now I have an index with four shards on a single EC2 server, each
running on different ports. Now I'd like to migrate three shards
to independent servers.
What should I do to safely accomplish this process?
Can I just
1. shutdown all four solr instances.
2. copy three shards (indexes) to d
You could do it that way.
I'm not sure why you are worried about the leaders. That shouldn't matter.
You could also start up new Solrs on the new machines as replicas of the cores
you want to move - then once they are active, unload the cores on the old
machine, stop the Solr instances and remo
As I understand it, you mean that we can just add replicas for the already
existing shards.
This step is straightforward. After the replicas are active, we can just shut down
the replaced shard nodes, and new leaders will be elected automatically, right?
At last we just remove the stuff left on those replaced s
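Mark's approach could be sketched with CoreAdmin calls along these lines (host, core, collection, and shard names are all hypothetical):

```shell
# 1. On the new machine, create a replica of the shard being moved:
curl 'http://newhost:8983/solr/admin/cores?action=CREATE&name=shard1_replica2&collection=mycollection&shard=shard1'
# 2. Once the new replica shows as active in the cluster state, unload the old core:
curl 'http://oldhost:8983/solr/admin/cores?action=UNLOAD&core=shard1_replica1'
# 3. Stop the old Solr instance and remove its leftover data directory.
```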
I am trying to update my config in zookeeper with the zkcli from the
solr example. This is 4.1. Here's the full command I am trying:
/opt/mbsolr4/cloud-scripts/zkcli.sh -d
/index/mbsolr4/bootstrap/mbbasecfg -n mbbasecfg -z
"mbzoo1.REDACTED.com:2181,mbzoo2.REDACTED.com:2181,mbzoo3.REDACTED.co
On 1/25/2013 8:52 PM, Shawn Heisey wrote:
I am trying to update my config in zookeeper with the zkcli from the
solr example. This is 4.1. Here's the full command I am trying:
/opt/mbsolr4/cloud-scripts/zkcli.sh -d
/index/mbsolr4/bootstrap/mbbasecfg -n mbbasecfg -z
"mbzoo1.REDACTED.com:2181,mbz
Hi Mark,
When I did testing with SolrCloud, I found the following.
1. I started 4 shards on the same host on port 8983, 8973, 8963, and 8953.
2. Index some data.
3. Shutdown all 4 shards.
4. Started 4 shards again, all pointing to the same data directory and use
the same configuration, except tha
Thanks for the quick reply and for addressing each point.
The additional information requested is below:
OS = Ubuntu 12.04 (64 bit)
Sun Java 7 (64 bit)
Total RAM = 8GB
SolrConfig.xml is available at http://pastebin.com/SEFxkw2R