Chris, can you post your complete clusterstate.json? Do all shards have a
null range? Also, did you issue any core admin CREATE commands apart from
the create collection API?
Primoz, I was able to reproduce this, but only by doing an illegal operation.
Suppose I create a collection with numShards=5 and
If I am not mistaken, the only way to create a new shard for a collection
in 4.4.0 was to use the cores API. That worked fine for me until I used
*other* cores API commands; those usually produced null ranges.
In 4.5.0 this is fixed with newly added commands "createshard" etc. to the
collections A
Shalin,
It is working for me. As you rightly pointed out, I had defined the UNIQUE_KEY
field in the schema but forgot to mention this field in the declaration.
After I added this, it started working.
One other question I have with regard to SPLITSHARD is that we are not able to
control which Tomcat nodes,
it is not indexing, it is saying there are no files indexed
--
View this message in context:
http://lucene.472066.n3.nabble.com/Indexing-XML-files-in-Solr-with-DataImportHandler-tp4095628p4095811.html
Sent from the Solr - User mailing list archive at Nabble.com.
On 16 October 2013 13:06, kujta1 wrote:
> it is not indexing, it is saying there are no files indexed
If you expect answers on the mailing list it might be best to provide
details here. From a quick glance at Stack Overflow, it looks like you
need a FileListEntityProcessor.
Searching Google turns
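For reference, a minimal FileListEntityProcessor data-config might look like this (a sketch; the baseDir, forEach expression, and field xpaths are placeholders you would adapt to your own XML layout):

```xml
<dataConfig>
  <dataSource type="FileDataSource" encoding="UTF-8"/>
  <document>
    <!-- FileListEntityProcessor enumerates the files on disk -->
    <entity name="files" processor="FileListEntityProcessor"
            baseDir="/path/to/xml" fileName=".*\.xml"
            recursive="true" rootEntity="false">
      <!-- XPathEntityProcessor then parses each file it found -->
      <entity name="doc" processor="XPathEntityProcessor"
              url="${files.fileAbsolutePath}" forEach="/doc">
        <field column="id" xpath="/doc/id"/>
        <field column="title" xpath="/doc/title"/>
      </entity>
    </entity>
  </document>
</dataConfig>
```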
Thanks Erick!
The version is 4.4.0.
I'm posting 100k-doc batches every 30-40 seconds from each indexing client, and
sometimes two or more clients post within a very small timeframe. That's when I
think the deadlock happens.
I'll try to replicate the problem and check the thread dump.
I ran an import last night, and this morning my cloud wouldn't accept
updates. I'm running the latest 4.6 snapshot. I was importing with latest
solrj snapshot, and using java bin transport with CloudSolrServer.
The cluster had indexed ~1.3 million docs before no further updates were
accepted, quer
Thanks for clearing that.
The way it is implemented, shard splitting must create the leaders of
sub-shards on the same node as the leader of the parent shard. The locations
of the other replicas of the sub-shards are chosen at random. Split shard
doesn't support a createNodeSet parameter yet but it
If the initial collection was created with a numShards parameter (and hence
the compositeId router) then there was no way to create a new logical shard. You
can add replicas with the core admin API but only to shards that already
exist. A new logical shard can only be created by splitting an existing on
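To make the "null range" discussion concrete, here is a small standalone sketch (not Solr's actual code) of how the 32-bit compositeId hash space is divided into contiguous ranges, one per shard; in a healthy clusterstate.json every shard carries a non-null range of this form:

```java
// Sketch: divide the signed 32-bit hash space into numShards contiguous ranges,
// the way the compositeId router assigns one range per shard.
public class ShardRanges {
    public static long[][] ranges(int numShards) {
        long min = Integer.MIN_VALUE, max = Integer.MAX_VALUE;
        long span = (max - min + 1) / numShards;  // size of each slice
        long[][] out = new long[numShards][2];
        long start = min;
        for (int i = 0; i < numShards; i++) {
            // last shard absorbs any rounding remainder
            long end = (i == numShards - 1) ? max : start + span - 1;
            out[i] = new long[]{start, end};
            start = end + 1;
        }
        return out;
    }

    public static void main(String[] args) {
        for (long[] r : ranges(2))
            System.out.printf("%x-%x%n", (int) r[0], (int) r[1]);
    }
}
```

For numShards=2 this prints 80000000-ffffffff and 0-7fffffff, the same style of hex ranges you see next to each shard in clusterstate.json.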
Hi,
If I recall correctly, this problem relates to the class loader path.
Make sure that the ./lib directory (in Solr home, where you've replaced the
jars) is not also part of the Tomcat class loader path.
(In other words, Solr and Tomcat cannot share the same ./lib directories.)
-Ariel
-Original Message--
hi,
Can I access TermVector information using SolrJ?
thx,
elfu
Yep, you are right - I only created extra replicas with the cores API. For a
new shard I had to use the "split shard" command.
My apologies.
Primož
From: Shalin Shekhar Mangar
To: solr-user@lucene.apache.org
Date: 16.10.2013 10:45
Subject:Re: Regarding Solr Cloud issue...
If the i
Hello Solr-Experts,
I am currently having a strange issue with my Solr queries. I am running
a small PHP/MySQL website that uses Solr for faster text searches in
name lists, movie titles, etc. Recently I noticed that the results on my
local development environment differ from those on my webserver.
I got the trace from jstack.
I found references to "semaphore" but not sure if this is what you meant.
Here's the trace:
http://pastebin.com/15QKAz7U
--
View this message in context:
http://lucene.472066.n3.nabble.com/Debugging-update-request-tp4095619p4095847.html
Here is my jstack output... Lots of blocked threads.
http://pastebin.com/1ktjBYbf
On 16 October 2013 10:28, michael.boom wrote:
> I got the trace from jstack.
> I found references to "semaphore" but not sure if this is what you meant.
> Here's the trace:
> http://pastebin.com/15QKAz7U
Hi there,
I want to boost a field, see below.
If I add defType=dismax I don't get any results at all anymore.
What am I doing wrong?
Regards
Uwe
true
text
AND
default
true
Hi,
My setup is
Zookeeper ensemble - running with 3 nodes
Tomcats - 9 Tomcat instances are brought up by registering with ZooKeeper.
Steps :
1) I uploaded the Solr configuration files like db_data_config, solrconfig, and
schema XMLs into ZooKeeper
2) Now, i am trying to create a collection with the col
Hi
I wrote a plugin to index content, reusing our DAO layer, which is developed
using Spring.
What I am doing now is putting the plugin jar and all the other jars the DAO
layer depends on into the shared lib folder under Solr home.
In the log, I can see all the jars are loaded through SolrResourceLoader
like
I have setup a SolrCloud system with: 3 shards, replicationFactor=3 on 3
machines along with 3 Zookeeper instances.
My web application makes queries to Solr specifying the hostname of one of
the machines. So that machine will always get the request and the other ones
will just serve as an aid.
So
If your web application is SolrJ/Java based, use a CloudSolrServer
instance with the zkHost string. It will take care of load balancing when
querying and indexing, and handle routing if a node goes down.
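For reference, a minimal sketch of that with the 4.x SolrJ API (the ZooKeeper host string and collection name are placeholders for your own setup):

```java
// Connects via ZooKeeper rather than a single Solr host,
// so requests are load-balanced across live nodes.
CloudSolrServer server = new CloudSolrServer("zk1:2181,zk2:2181,zk3:2181");
server.setDefaultCollection("collection1");
QueryResponse rsp = server.query(new SolrQuery("*:*"));
server.shutdown();
```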
On 16 October 2013 10:52, michael.boom wrote:
> I have setup a SolrCloud system with: 3 shard
Thanks!
I've read a lil' bit about that, but my app is php-based so I'm afraid I
can't use that.
--
View this message in context:
http://lucene.472066.n3.nabble.com/SolrCloud-Query-Balancing-tp4095854p4095857.html
Can some expert users please leave a comment on this ?
On Sun, Oct 6, 2013 at 2:54 AM, user 01 wrote:
> Using a single node Solr instance, I need to search for, let's say,
> electronics items & grocery items. But I never want to search both of them
> together. When I search for electronics I do
What you could do (and what we do) is to have a simple proxy in front of your
Solr instances. For example, we run Nginx in front of all of our Tomcats,
and use Nginx's upstream capabilities as a simple load balancer for our
SolrCloud cluster.
http://wiki.nginx.org/HttpUpstreamModule
I'm
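A minimal upstream block along those lines might look like this (a sketch; server names and ports are placeholders):

```nginx
upstream solrcloud {
    # one entry per Solr/Tomcat node; Nginx round-robins by default
    server solr1.example.com:8080;
    server solr2.example.com:8080;
    server solr3.example.com:8080;
}

server {
    listen 80;
    location /solr {
        proxy_pass http://solrcloud;
    }
}
```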
What does the debug output from debugQuery=true say between the two?
On Oct 16, 2013, at 5:16, Stavros Delisavas wrote:
> Hello Solr-Experts,
>
> I am currently having a strange issue with my solr querys. I am running
> a small php/mysql-website that uses Solr for faster text-searches in
Hi ,
Schema like this
external_id is a multivalued field.
I want to know whether the values of upc will be appended to the existing
values of external_id or override them.
For example if I send a document having values
upc:131
external_id:423
for indexing in Solr with the above-mentioned schema, what will be
Hi,
Please refer to the link below for clarification on fields having a null value.
http://stackoverflow.com/questions/7332122/solr-what-are-the-default-values-for-fields-which-does-not-have-a-default-value
logically it is better to have different collections for different domain
data. Having 2 colle
Hi,
Please find the clusterstate.json as below:
I have created a dev environment on one of my servers so that you can see
the issue live - http://64.251.14.47:1984/solr/
Also, there seems to be something wrong in ZooKeeper; when we try to add
documents using SolrJ, it works fine as long as load
My local solr gives me:
http://pastebin.com/Q6d9dFmZ
and my webserver this:
http://pastebin.com/q87WEjVA
I copied only the first few hundred lines (of more than 8000) because
the webserver output was too big even for pastebin.
On 16.10.2013 12:27, Erik Hatcher wrote:
> What does the debug outpu
Run jstack on the solr process (standard with Java) and
look for the word "semaphore". You should see your
servers blocked on this in the Solr code. That'll pretty
much nail it.
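For example (a sketch; substitute your actual Solr process id):

```
jstack <solr-pid> | grep -i semaphore
```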
There's an open JIRA to fix the underlying cause, see:
SOLR-5232, but that's currently slated for 4.6 which
won't be cut
oops, the actual url is - http://64.251.14.47:1981/solr/
Also, another issue that needs to be raised is that the creation of cores from
the "core admin" section of the GUI doesn't really work well; it creates
files but then they do not work (again, I am using 4.4).
On Wed, Oct 16, 2013 at 4:12 PM, Chris
Also, is there any easy way of upgrading to 4.5 without having to change most
of my plugins & configuration files?
On Wed, Oct 16, 2013 at 4:18 PM, Chris wrote:
> oops, the actual url is -http://64.251.14.47:1981/solr/
>
> Also, another issue that needs to be raised is the creation of cores from
>
Hi Erick, here is a paste from other thread (debugging update request) with
my input as I am seeing errors too:
I ran an import last night, and this morning my cloud wouldn't accept
updates. I'm running the latest 4.6 snapshot. I was importing with latest
solrj snapshot, and using java bin transpo
@Shrikanth: how do you manage multiple redundant configurations (isn't it?)?
I thought indexes would be separate when fields aren't shared. I don't need
to import any data or do re-indexing, if those are the only benefits of
separate collections. I just index when a request comes / a new item is added
>>> Also, another issue that needs to be raised is the creation of cores from
>>> the "core admin" section of the gui, doesnt really work well, it creates
>>> files but then they do not work (again i am using 4.4)
From my experience the "core admin" section of the GUI does not work well in
SolrClo
(13/10/16 17:47), elfu wrote:
hi,
can i access TermVector information using solrj ?
There is a TermVectorComponent to get term vector info:
http://wiki.apache.org/solr/TermVectorComponent
So yes, you can access it using SolrJ.
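For example, a request along these lines returns term vector data (a sketch; it assumes the /tvrh handler configured in the example solrconfig.xml, and id:1 is a placeholder query):

```
http://localhost:8983/solr/tvrh?q=id:1&tv=true&tv.tf=true&tv.df=true&tv.positions=true
```

From SolrJ you can send the same parameters and read the termVectors section of the raw response.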
koji
--
http://soleami.com/blog/automatically-acquiring-synonym-kno
Hi Otis,
Did you get a chance to look into the logs. Please let me know if you need
more information. Thank you.
Regards,
Bharat Akkinepalli
-Original Message-
From: Akkinepalli, Bharat (ELS-CON) [mailto:b.akkinepa...@elsevier.com]
Sent: Friday, October 11, 2013 2:16 PM
To: solr-user@
Hi Shahzad,
Personally I am of the same opinion as others who have replied, that you
are better off going back to your clients at this stage itself, with all
the newfound info/data points.
Further, to the questions that you put to me directly:
1) For option 1, as indicated earlier, you have to
Here's another jstack http://pastebin.com/8JiQc3rb
On 16 October 2013 11:53, Chris Geeringh wrote:
> Hi Erick, here is a paste from other thread (debugging update request)
> with my input as I am seeing errors too:
>
> I ran an import last night, and this morning my cloud wouldn't accept
> upda
Shawn,
It all makes sense, I'm just dealing with production servers here so I'm
trying to be very careful (shutting down one node at a time is OK, just
don't want to do something catastrophic.)
OK, so I should use that aliasing feature.
On index1 I have:
core1
core1new
core2
On index2 and index
oh great. Thanks Primoz.
Is there any simple way to do the upgrade to 4.5 without having to change
my configurations? Update a few jar files, etc.?
On Wed, Oct 16, 2013 at 4:58 PM, wrote:
> >>> Also, another issue that needs to be raised is the creation of cores
> from
> >>> the "core admin" sec
I installed Solr 4.5 on Windows. I launched the example with the Jetty web
server. I have no problem with the collection1 core. But when I want to
create my own core, the server sends me this error:
*
org.apache.solr.common.SolrException:org.apache.solr.common.SolrException:
Could not load config fi
Hm, good question. I haven't really done any upgrading yet, because I just
reinstall and reindex everything. I would replace the jars with the new ones
(if needed - check the release notes for versions 4.4.0 and 4.5.0, where the
versions of all external tools [Tika, Maven, etc.] are stated) and deploy the
Can you try with a directory path that contains *no* spaces?
Primoz
From: raige
To: solr-user@lucene.apache.org
Date: 16.10.2013 14:46
Subject:Error when i want to create a CORE
I install the version solr 4.5 on windows. I launch with Jetty web server
the
example. I have no
Get rid of the newlines before and after the value of the qf parameter.
-- Jack Krupansky
-Original Message-
From: uwe72
Sent: Wednesday, October 16, 2013 5:36 AM
To: solr-user@lucene.apache.org
Subject: Boosting a field with defType:dismax --> No results at all
Hi there,
i want to b
Assuming that you are using the Admin UI:
The instanceDir must already exist (in your case index1).
Inside it there should be a conf/ directory holding the configuration files.
In the config field, only insert the file name (like "solrconfig.xml"), which
should be found in the conf/ directory.
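In other words, the expected layout is something like this (names taken from this thread; paths are illustrative):

```
<solr-home>/
  index1/              <- instanceDir given in the Admin UI
    conf/
      solrconfig.xml   <- file name entered in the config field
      schema.xml
```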
Appended.
-- Jack Krupansky
-Original Message-
From: vishgupt
Sent: Wednesday, October 16, 2013 6:25 AM
To: solr-user@lucene.apache.org
Subject: Solr Copy field append values ?
Hi ,
Schema like this
external_id is multivalued field.
I want to know will values of upc will be append
The alias applies to the entire cloud, not a single core.
So you'd have your indexing application point to a "collection alias" named
'index'. And that alias would point to core1.
You'd have your query applications point to a "collection alias" named 'query',
and that would point to core1, as w
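Concretely, the aliases could be created like this (a sketch using the names from this thread; host and port are placeholders):

```
http://index1:8080/solr/admin/collections?action=CREATEALIAS&name=index&collections=core1
http://index1:8080/solr/admin/collections?action=CREATEALIAS&name=query&collections=core1
```

Later, repointing the 'index' alias at a new collection is just another CREATEALIAS call with the same name.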
Perfect!!! THANKS A LOT
That was the mistake.
From: Jack Krupansky-2 [via Lucene]
[mailto:ml-node+s472066n409590...@n3.nabble.com]
Sent: Wednesday, October 16, 2013 14:55
To: uwe72
Subject: Re: Boosting a field with defType:dismax --> No results at all
Get rid of the newlines before
Very well, I will try the same; maybe an auto-update tool should also be
put in the pipeline... just a thought.
On Wed, Oct 16, 2013 at 6:20 PM, wrote:
> Hm, good question. I haven't really done any upgrading yet, because I just
> reinstall and reindex everything. I would replace jars with the ne
Thanks!
Could you provide some examples or details of the configuration you use?
I think this solution would suit me also.
--
View this message in context:
http://lucene.472066.n3.nabble.com/SolrCloud-Query-Balancing-tp4095854p4095910.html
Dear all,
I am using SolrJ as a client for indexing and searching documents on the Solr
server.
My question:
How do I retrieve the query for a boolean keyword?
For example:
I have this query:
text:(“vacuna” AND “esteve news”) OR text:(“vacuna”) OR text:(“esteve news”)
And searching in:
text--> E
On 10/16/2013 3:52 AM, michael.boom wrote:
> I have setup a SolrCloud system with: 3 shards, replicationFactor=3 on 3
> machines along with 3 Zookeeper instances.
>
> My web application makes queries to Solr specifying the hostname of one of
> the machines. So that machine will always get the requ
I did not actually realize this; I apologize for my previous reply!
HAProxy would definitely be the right choice then for the poster's setup for
redundancy.
Den 16/10/2013 kl. 15.53 skrev Shawn Heisey :
> On 10/16/2013 3:52 AM, michael.boom wrote:
>> I have setup a SolrCloud system with: 3 shard
I have a small Solr setup, not even on a physical machine but a VMware
virtual machine with a single CPU, that reads data using DIH from a
database. The machine has no physical disks attached but stores data on a
NetApp NAS.
Currently this machine indexes 320 documents/sec, not bad, but we plan to
d
I think DIH uses only one core per instance. IMHO 300 docs/sec is quite
good. If you would like to use more cores you need to use SolrJ, or maybe
more than one DIH and more cores, of course.
Primoz
From: Giovanni Bricconi
To: solr-user
Date: 16.10.2013 16:25
Subject:howto incr
You might consider local disks. I once ran Solr with the indexes on an
NFS-mounted volume and the slowdown was severe.
wunder
On Oct 16, 2013, at 7:40 AM, primoz.sk...@policija.si wrote:
> I think DIH uses only one core per instance. IMHO 300 doc/sec is quite
> good. If you would like to use m
Thanks Shalin. Will post it there too.
-
Phani Chaitanya
--
View this message in context:
http://lucene.472066.n3.nabble.com/prepareCommit-vs-Commit-tp4095545p4095916.html
The only delete I see in the master logs is:
INFO - 2013-10-11 14:06:54.793;
org.apache.solr.update.processor.LogUpdateProcessor; [annotation]
webapp=/solr path=/update params={}
{delete=[change.me(-1448623278425899008)]} 0 60
When you commit, we have the following:
INFO - 2013-10-11 14:07:03.
We have just one more problem:
When we search explicitly, like *:* or partNumber:A32783627, we still don't
get any results.
What are we doing wrong here?
--
View this message in context:
http://lucene.472066.n3.nabble.com/Boosting-a-field-with-defType-dismax-No-results-at-all-tp4095850p4095918
Garth,
I think I get what you're saying, but I want to make sure.
I have 3 servers (index1, index2, index3), with Solr living on port 8080.
Each of those has 3 cores loaded with data:
core1 (old version)
core1new (new version)
core2 (unrelated to core1)
If I wanted to make it so that queries to
I'd suggest using the Collections API:
http://localhost:8983/solr/admin/collections?action=CREATEALIAS&name=alias&collections=collection1,collection2...
See the Collections Aliases section of http://wiki.apache.org/solr/SolrCloud.
BTW, once you make the aliases, Zookeeper will have entries in /al
Hi Shalin,
I am not sure why the log says "No uncommitted changes". The data
is available in Solr at the time I perform the delete.
Please find below the steps I have performed:
> Inserted a document in master (with id=change.me.1)
> Issued a commit on master
> Triggered replication o
On 10/16/2013 4:51 AM, Chris wrote:
> Also, is there any easy way upgrading to 4.5 without having to change most
> of my plugins & configuration files?
Upgrading is something that should be done carefully. If you can, it's
always recommended that you try it out on dev hardware with your real
inde
On 10/16/2013 8:01 AM, Henrik Ossipoff Hansen wrote:
> I did not actually realize this, I apologize for my previous reply!
>
> Haproxy would definitely be the right choice then for the posters setup for
> redundancy.
Any load balancer software, or even an appliance load balancer like
those made
On 10/16/2013 9:44 AM, Christopher Gross wrote:
> Garth,
>
> I think I get what you're saying, but I want to make sure.
>
> I have 3 servers (index1, index2, index3), with Solr living on port 8080.
>
> Each of those has 3 cores loaded with data:
> core1 (old version)
> core1new (new version)
> c
dismax doesn't support wildcard, fuzzy, or fielded terms. edismax does.
My e-book details differences between the query parsers.
-- Jack Krupansky
-Original Message-
From: uwe72
Sent: Wednesday, October 16, 2013 12:26 PM
To: solr-user@lucene.apache.org
Subject: AW: Boosting a field wi
I believe it is not possible, but you can easily split this into two query
statements.
First one:
text:(“vacuna” AND “esteve news”)
and the second:
(text:(“vacuna”) OR text:(“esteve news”)) AND -text:(“vacuna” AND
“esteve news”)
The minus "-" excludes all entries of the first statement. This
Hello,
Thank you all for your help. There was indeed a property which was not
set right in schema.xml:
omitTermFreqAndPositions="true"
After changing it to false phrase lookup started working OK.
Thanks,
M
On 10/15/13 12:01 PM, Jack Krupansky wrote:
Show us the field and field type from your
Ok, so I think I was confusing the terminology (still in a 3.x mindset, I
guess.)
From the Cloud->Tree view, I do see that I have "collections" for what I was
calling "core1", "core2", etc.
So, to redo the above,
Servers: index1, index2, index3
Collections: (on each) coll1, coll2
Collection (core?) on
Works like this?
edismax
SignalImpl.baureihe^1011 text^0.1
Another option:
How about just giving the desired fields a high boost factor while adding
the field to the document, using Solr?
Can this work?
--
View this message in context:
http://lucene.472066.n3.nabble.com/Boosting-a-fiel
On 10/16/2013 11:51 AM, Christopher Gross wrote:
> Ok, so I think I was confusing the terminology (still in a 3.X mindset I
> guess.)
>
> From the Cloud->Tree, I do see that I have "collections" for what I was
> calling "core1", "core2", etc.
>
> So, to redo the above,
> Servers: index1, index2,
Hi,
I'm in the process of transitioning to SolrCloud from a conventional
master-slave model. I'm using Solr 4.4 and have set up 2 shards with 1
replica each. I have a 3-node ZooKeeper ensemble. All the nodes are running
on AWS EC2 instances. Shards are on m1.xlarge and share a zookeeper instance
(mounte
Thanks Shawn, the explanations help bring me forward to the "SolrCloud"
mentality.
So it sounds like going forward that I should have a more complicated name
(ex: coll1-20131015) aliased to coll1, to make it easier to switch in the
future.
Now, if I already have an index (copied from one location
On 10/16/2013 4:46 AM, Stavros Delisavas wrote:
> My local solr gives me:
> http://pastebin.com/Q6d9dFmZ
>
> and my webserver this:
> http://pastebin.com/q87WEjVA
>
> I copied only the first few hundret lines (of more than 8000) because
> the webserver output was to big even for pastebin.
Okay I understand,
here's the rawquerystring. It was at about line 3000:
title:(into AND the AND wild*)
title:(into AND the AND wild*)
+title:wild*
+title:wild*
At this place the debug output DOES differ from the one on my local
system. But I don't understand why...
This is the local deb
So, the stopwords.txt file is different between the two systems - the first
has stop words but the second does not. Did you expect stop words to be
removed, or not?
-- Jack Krupansky
-Original Message-
From: Stavros Delsiavas
Sent: Wednesday, October 16, 2013 5:02 PM
To: solr-user@lu
Hello,
I am trying to write some code to read rank data from an external db. I saw
an example done using a database -
http://sujitpal.blogspot.com/2011/05/custom-sorting-in-solr-using-external.html,
where they fetch the whole database during index searcher creation and cache it.
But is there any way to
Hey guys,
I am debugging some /select queries on my Solr tier and would like to see
if there is a way to tell Solr to skip the caches on a given /select query
even if it happens to ALREADY be in the cache. Live queries are being inserted
into and read from the caches, but I want my debug queries to bypass t
Not important, but I'm also curious why you would want SSL on Solr (adds
overhead, complexity, harder-to-troubleshoot, etc)?
To avoid the overhead, could you put Solr on a separate VLAN (with ACLs to
client servers)?
Cheers,
Tim
On 12 October 2013 17:30, Shawn Heisey wrote:
> On 10/11/2013 9
On Wed, Oct 16, 2013 at 6:18 PM, Tim Vaillancourt wrote:
> I am debugging some /select queries on my Solr tier and would like to see
> if there is a way to tell Solr to skip the caches on a given /select query
> if it happens to ALREADY be in the cache. Live queries are being inserted
> and read f
Thanks Bharat. This is a bug. I've opened LUCENE-5289.
https://issues.apache.org/jira/browse/LUCENE-5289
On Wed, Oct 16, 2013 at 9:35 PM, Akkinepalli, Bharat (ELS-CON) <
b.akkinepa...@elsevier.com> wrote:
> Hi Shalin,
> I am not sure why the log specifies "No uncommitted changes" appear. The
>
Unfortunately, I don't really know what stopwords are. I would like it to
not ignore any words of my query.
How/where can I change this stopwords behaviour?
Am 16.10.2013 23:45, schrieb Jack Krupansky:
So, the stopwords.txt file is different between the two systems - the
first has stop words bu
The query result cache hit rate might be low due to using NOW in bf. NOW is
always translated to the current time, and that of course changes from ms to
ms... :)
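One common mitigation (a sketch; the field name is a placeholder) is to round NOW so the parameter stays stable across requests and cache entries can be reused, e.g.:

```
bf=recip(ms(NOW/HOUR,pubdate_dt),3.16e-11,1,1)
```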
Primoz
From: Shamik Bandopadhyay
To: solr-user@lucene.apache.org
Date: 17.10.2013 00:14
Subject:SolrCloud Performance Issue
Hi