Hi,
I am facing an issue with phrase queries. I am entering 'Top of the world' as
my search criteria, and I expect it to return all the records in which one
field contains all of these words, in any order.
But it is treating the query as OR and returning all the records that contain
any of these words.
Thanks for your reply, Yonik:
On Thu, May 21, 2009 at 2:43 AM, Yonik Seeley
wrote:
>
> Some thoughts:
>
> #1) This is sort of already implemented in some form... see this
> section of solrconfig.xml and try uncommenting it:
> ...
> > On Wed, May 20, 2009 at 12:43 PM, Yonik Seeley
> > > wrote:
>
This problem is related to the default operator in dismax. Currently OR is
the default operator, and it behaves perfectly fine. I have changed the
default operator in schema.xml to AND, and I have also changed the minimum
match to 100%.
But it seems like AND as the default operator doesn't work with
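For reference, the dismax handler does not consult the schema default
operator at all; the mm (minimum match) parameter is what controls how many
query terms must match. A minimal illustrative solrconfig.xml sketch (the
handler name and qf field are examples only, not taken from this thread):

```xml
<!-- Illustrative sketch: dismax handler requiring every term via mm=100% -->
<requestHandler name="dismax" class="solr.SearchHandler">
  <lst name="defaults">
    <str name="defType">dismax</str>
    <!-- qf lists the fields queried; "text" here is an example -->
    <str name="qf">text</str>
    <!-- mm=100% means all query terms must match, in any order -->
    <str name="mm">100%</str>
  </lst>
</requestHandler>
```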
Hello all,
I'm using Solr 1.3.0, and when I query my index for "solr" using the admin
page, the query string in the address bar of my browser reads like this:
http://localhost:8080/solr/select/?q=solr&version=2.2&start=0&rows=10&indent=on
Now, I don't know what version=2.2 means, and the wiki or
It seems I can only search on the field 'text'. With the following URL:
http://localhost:8983/solr/select/?q=novel&qt=dismax&fl=title_s,id&version=2.2&start=0&rows=10&indent=on&debugQuery=on
I get results, but in the debug area it seems it's only searching on the
'text' field (with or without '
Hi,
I am facing a very strange issue in Solr; I am not sure if it is already a
known bug.
If I search for 'Top 500', it returns all the records which contain either
of these words anywhere, which is fine.
But if I search for 'Top 500 Companies' in any order, it gives me all those
records which contain
On Wed, May 20, 2009 at 11:18 AM, James X
wrote:
> Hi Mike, thanks for the quick response:
>
> $ java -version
> java version "1.6.0_11"
> Java(TM) SE Runtime Environment (build 1.6.0_11-b03)
> Java HotSpot(TM) 64-Bit Server VM (build 11.0-b16, mixed mode)
>
> I hadn't noticed the 268m trigger for
Another question: are there any other exceptions in your logs? E.g.,
problems adding certain documents, or anything like that?
Mike
On Wed, May 20, 2009 at 11:18 AM, James X
wrote:
> Hi Mike, thanks for the quick response:
>
> $ java -version
> java version "1.6.0_11"
> Java(TM) SE Runtime Environment (buil
If you're able to run a patched version of Lucene, can you apply the
attached patch, run it, get the issue to happen again, and post back
the resulting exception?
It only adds further diagnostics to that RuntimeException you're hitting.
Another thing to try is turning on assertions, which may ver
On Thu, May 21, 2009 at 3:30 AM, Kent Fitch wrote:
> > #2) Your problem might be able to be solved with field collapsing on
> > the "category" field in the future (but it's not in Solr yet).
> Sorry - I didn't understand this
A single relevancy search, but group or collapse results based on the
va
Just curious: what would be the disadvantages of a no-replication /
multi-master (no slave) setup?
The client code would have to do the updates for every master, of course, but
if one machine failed then I could immediately continue the indexing process,
and I could also query the index on any machine for a valid re
Hey there,
I have been testing the latest adjacent field collapsing patch in trunk and
it seems to work perfectly. I am trying to modify its behavior but don't
know exactly how to do it. What I would like to do is, instead of collapsing
the results, send them to the end of the results queue.
Apparently
This isn't much data to go on. Do you have any idea what your throughput is?
How many documents are you indexing? One 45G doc, or 4.5 billion 10-character
docs?
Have you looked at any profiling data to see how much memory is being
consumed?
Are you IO bound or CPU bound?
Best
Erick
On Thu, May 21,
Nothing else is in the lib directory but this one jar.
Additionally, the logs seem to say that it finds the lib as shown below
INFO: Solr home set to '/home/zetasolr/'
May 20, 2009 10:16:56 AM org.apache.solr.core.SolrResourceLoader
createClassLoader
INFO: Adding 'file:/home/zetasolr/lib/FacetCube
Indexing is usually much more expensive than replication, so it won't
scale well as you add more servers. Also, what would a client do if
it was able to send the update to only some of the servers because
others were down (for maintenance, etc)?
-Bryan
On May 21, 2009, at May 21, 6:04
Is adding QueryComponent to your SearchComponents an option? When
combined with the CollapseComponent this approach would return the
collapsed and the complete result set.
i.e.:
collapse
query
facet
mlt
highlight
Thomas
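A minimal sketch of what that components list might look like in
solrconfig.xml (the handler name is illustrative, and the collapse component
comes from the field-collapsing patch, so its registered name may differ):

```xml
<!-- Illustrative: a SearchHandler listing its components explicitly -->
<requestHandler name="/collapse" class="solr.SearchHandler">
  <arr name="components">
    <str>collapse</str>
    <str>query</str>
    <str>facet</str>
    <str>mlt</str>
    <str>highlight</str>
  </arr>
</requestHandler>
```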
Marc Sturlese schrieb:
Hey there,
I have bee
I was looking for an answer to the same question, and I have a similar
concern. It looks like any serious customization work requires developing a
custom SearchComponent, but it's not clear to me how the Solr designers
intended this to be done. I am more confident either doing it at the Lucene
level, or staying on the cli
Jeff Newburn wrote:
Nothing else is in the lib directory but this one jar.
Additionally, the logs seem to say that it finds the lib as shown below
INFO: Solr home set to '/home/zetasolr/'
May 20, 2009 10:16:56 AM org.apache.solr.core.SolrResourceLoader
createClassLoader
INFO: Adding 'file:/home/
On Wed, May 20, 2009 at 10:59 PM, Nick Bailey wrote:
> Hi,
>
> I am wondering if it is possible to basically add the distributed portion
> of a search query inside of a searchComponent.
>
> I am hoping to build my own component and add it as a first-component to
> the StandardRequestHandler. The
Also look at SOLR-565 and see if that helps you.
https://issues.apache.org/jira/browse/SOLR-565
On Thu, May 21, 2009 at 9:58 PM, Shalin Shekhar Mangar <
shalinman...@gmail.com> wrote:
>
> On Wed, May 20, 2009 at 10:59 PM, Nick Bailey <
> nicholas.bai...@rackspace.com> wrote:
>
>> Hi,
>>
>> I am
I was interested in this recently and also couldn't find anything on the
wiki. I found this in the list archive:
The version parameter determines the XML protocol used in the response.
Clients are strongly encouraged to ''always'' specify the protocol version,
so as to ensure that the format of th
Hi list,
We have deployed an experimental Solr 1.4 cluster (a master/slave setup,
with automatic promotion of the slave to master in case of failure) on
drupal.org, to manage our medium-size index (3GB, about 400K documents).
One of the problems we are facing is that there seems to be no sanity
You are right... I just don't like the idea of the indexing process stopping
when the master fails, until a new one is started (more or less by hand).
On Thu, May 21, 2009 at 6:49 PM, Bryan Talbot wrote:
> Indexing is usually much more expensive than replication so it won't scale
> well as you add m
Yes, I have tried it, but I see a couple of problems with doing that.
I will have to do more searches, so response time will increase.
The second thing is: imagine I show the results collapsed on page one and
put a button to see the non-collapsed results. If later results for the
second page are re
Hi,
You should be able to do the following.
Put the masters behind a load balancer (LB).
Create an LB VIP and a pool with 2 masters, masterA & masterB, with a rule
that all requests always go to A unless A is down; if A is down, they go to B.
Bring up master instances A and B on 2 servers and make
Hi Damien,
Interesting, this is similar to my suggestion to another person I just replied
to here on solr-user.
Have you actually run into this problem? I haven't tried it, but I'd think the
first next replication (copying index from s1 to s2) would not necessarily
fail, but would simply over
Hi,
I built Solr from SVN this morning. I am using the clustering example, and I
have added my own schema.xml.
The problem is that even though I change the carrot.snippet field from
features to filecontent, the clustering results do not change at all.
Please note the features field is also present in my document.
Hi Otis,
Thanks for your answer.
On Thu, May 21, 2009 at 7:14 PM, Otis Gospodnetic
wrote:
> Interesting, this is similar to my suggestion to another person I just
> replied to here on solr-user.
> Have you actually run into this problem? I haven't tried it, but I'd think
> the first next repl
Hi,
I'm not sure why the rest of the scoring explanation is not shown, but your
query *was* expanded to search on the text, title_s, and id fields, so I
think that expanded/rewritten query is what went to the index.
Otis
--
Sematext -- http://sematext.com/ -- Lucene - Solr - Nutch
- Ori
Amit,
Append &debugQuery=true to the search request URL and you'll see how your query
string was interpreted.
Otis
--
Sematext -- http://sematext.com/ -- Lucene - Solr - Nutch
- Original Message
> From: dabboo
> To: solr-user@lucene.apache.org
> Sent: Thursday, May 21, 2009 3:48:45
Aha, I see. Perhaps you can post the error message/stack trace?
As for the sanity check, I bet a call to
http://host:port/solr/replication?command=indexversion could be used to
ensure only newer versions of the index are being pulled. We'll see what Paul
says when he wakes up. :)
Otis
--
Semat
One additional note: we are on 1.4 trunk as of 5/7/2009. I'm just not sure
why it won't load, since it obviously works fine if inserted directly into
the WEB-INF directory.
--
Jeff Newburn
Software Engineer, Zappos.com
jnewb...@zappos.com - 702-943-7562
> From: Mark Miller
> Reply-To:
> Date: Thu, 2
Hi All,
I understand from the details provided at
http://wiki.apache.org/solr/DataImportHandler regarding delta-import that
there should be an additional column *last_modified* of timestamp type in
the table.
Is there any other way the same can be achieved without creating the
additiona
Can you share your full log (at least through startup) as well as the
config for both the component and the ReqHandler that is using it?
-Grant
On May 21, 2009, at 3:37 PM, Jeff Newburn wrote:
One additional note we are on 1.4 tunk as of 5/7/2009. Just not
sure why it
won't load since it
Hi.
> I built Solr from SVN today morning. I am using Clustering example. I
> have added my own schema.xml.
>
> The problem is the even though I change carrot.snippet field from
> features to filecontent the clustering results are not changed a bit.
> Please note features field is also there in m
Hi Mike,
Documents are web pages, about 20 fields: mostly strings, a couple of
integers, booleans, and one HTML field (for the document body content).
I do have a multi-threaded client pushing docs to Solr, so yes, I suppose
that means I have several active Solr worker threads.
The only exceptions
On May 20, 2009, at 4:33 AM, Shalin Shekhar Mangar wrote:
On Wed, May 20, 2009 at 1:31 PM, Plaatje, Patrick <
patrick.plaa...@getronics.com> wrote:
At the moment Solr does not have such functionality. I have written a
plugin for Solr though which uses a second Solr core to store/index
the
Hi,
I will try this, because when I tried it with a field declared by me there
was no change. I will check this out and let you know.
Is it possible to specify more than one snippet field, or should I use
copyField to copy two or three fields into a single field and specify that
as the snippet field?
Regar
Hello,
Is there a way to get all the results back from Solr when querying with the
SolrJ client? My gut feeling was that this might work:
query.setRows(-1)
One way is to change the configuration XML file, but that is like hard-coding
the configuration, and there I also have to set some valid number; I ca
Careful what you ask for... what if you have a million docs? Will you get an
OOM?
Maybe a better solution is to run a loop where you grab a bunch of docs and
then increase the "start" value, but you can always use:
query.setRows( Integer.MAX_VALUE )
ryan
On May 21, 2009, at 8:37 PM,
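To make the paging-loop idea concrete, here is a minimal self-contained
sketch. The fetchPage method is a hypothetical stand-in for a real SolrJ call
(query.setStart(start); query.setRows(rows); server.query(query)); only the
loop structure is the point:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Collections;
import java.util.List;

public class PagedFetch {
    // Stand-in corpus; in real use these would be documents in the index.
    static final List<String> ALL_DOCS =
        Arrays.asList("a", "b", "c", "d", "e", "f", "g");

    // Hypothetical stand-in for a SolrJ query with setStart/setRows.
    static List<String> fetchPage(int start, int rows) {
        if (start >= ALL_DOCS.size()) return Collections.emptyList();
        return ALL_DOCS.subList(start, Math.min(start + rows, ALL_DOCS.size()));
    }

    // Page through the whole result set in batches of `rows`,
    // stopping when a page comes back empty.
    static List<String> collect(int rows) {
        List<String> collected = new ArrayList<>();
        for (int start = 0; ; start += rows) {
            List<String> page = fetchPage(start, rows);
            if (page.isEmpty()) break;
            collected.addAll(page);
        }
        return collected;
    }

    public static void main(String[] args) {
        // Grabs pages of 3 until the source is exhausted.
        System.out.println(collect(3));
    }
}
```

This keeps memory bounded by the page size instead of the total result count.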
Ah, I see! Thank you so much for the response!
I'm using SolrJ, so I probably don't need to set the XML version, since the
wiki tells me that it uses the binary format by default!
On Thu, May 21, 2009 at 10:00 PM, Jay Hill wrote:
> I was interested in this recently and also couldn't find anything on the
>
Hi,
The scenario is: I have 2 different Solr instances running at different
locations concurrently. The data location for both instances is the same:
\\hostname\FileServer\CoreTeam\Research\data.
Both instances use EmbeddedSolrServer, and the lock type for both instances
is 'single'.
I am getting the following
The last_modified column is just one way. The query has to be intelligent
enough to detect the delta; it doesn't matter how you do it.
On Fri, May 22, 2009 at 1:32 AM, jayakeerthi s wrote:
> Hi All,
>
> I understand from the details provided under
> http://wiki.apache.org/solr/DataImportHandler r
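As a hedged sketch of that point: a DIH entity's deltaQuery can use any
condition that identifies changed rows, for example a separate changelog
table instead of a new column on the main table. Table and column names
below are made up; the ${dataimporter...} variables are standard DIH ones:

```xml
<!-- Illustrative only: entity, table, and column names are hypothetical -->
<entity name="item"
        query="SELECT id, name FROM item"
        deltaQuery="SELECT item_id AS id FROM item_changes
                    WHERE changed_at &gt; '${dataimporter.last_index_time}'"
        deltaImportQuery="SELECT id, name FROM item
                          WHERE id = '${dataimporter.delta.id}'"/>
```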
Let us see what the desired behavior is.
When s1 comes back online, s2 must download a fresh copy of the index from
s1, because s1 is the slave and s2 has a newer version of the index than s1.
Are you suggesting that s2 downloads the index files and then the commit
fails? The code is written as follows
b
Check the status page of DIH and see if it is working properly, and if yes,
what is the rate of indexing?
On Thu, May 21, 2009 at 11:48 AM, Jianbin Dai wrote:
>
> Hi,
>
> I have about 45GB xml files to be indexed. I am using DataImportHandler. I
> started the full import 4 hours ago, and it's sti
Hi Paul,
Thank you so much for answering my questions. It really helped.
After some adjustment, basically setting mergeFactor to 1000 from the default
value of 10, I could finish the whole job in 2.5 hours. I checked that during
run time only around 18% of memory is being used, and VIRT is
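For reference, that knob lives in the index settings of solrconfig.xml; a
minimal fragment using the value from this thread (a high mergeFactor speeds
bulk indexing but leaves many segments on disk, so an optimize at the end
may be needed):

```xml
<indexDefaults>
  <!-- default is 10; 1000 favors bulk-indexing speed over merged segments -->
  <mergeFactor>1000</mergeFactor>
</indexDefaults>
```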
What is the total no. of docs created? I guess it may not be memory-bound;
indexing is mostly an IO-bound operation. You may be able to get better
performance if an SSD (solid state disk) is used.
On Fri, May 22, 2009 at 10:46 AM, Jianbin Dai wrote:
>
> Hi Paul,
>
> Thank you so much for answering my
On Fri, May 22, 2009 at 3:22 AM, Grant Ingersoll wrote:
>
> I think you will want some type of persistence mechanism otherwise you will
> end up consuming a lot of resources keeping track of all the query strings,
> unless I'm missing something. Either a Lucene index (Solr core) or the
> option o