Hello Alex,
You're right. But I ran into a problem with the Solr engine: when I reload
the dataimport configuration and run an update, the indexing goes wrong. I had to
reload Solr and restart the import to get the values.
Thanks for your help.
David
On 09/07/2013 17:19, Alexandre Rafalovitch wrote:
Jack, due to 'some' reason my Nutch is returning index-time boost = 0.0,
and just for a moment suppose that Nutch is and always will return boost = 0.
Now my simple question was: why is Solr showing me a document score = 0?
Why does it depend upon the index-time boost value? Why, or how can I make Solr
Can we get a sample fieldType and field definition?
Thanks.
On Mon, Jul 8, 2013 at 8:40 AM, Jack Krupansky wrote:
> Yes, you should be able to use nested query parsers to mix the queries.
> Solr 4.1(?) made it easier.
>
> -- Jack Krupansky
>
> -Original Message- From: Abeygunawardena,
I have a field that has omitNorms=true, but when I look at debugQuery I see
that
the field is being normalized for the score.
What can I do to turn off normalization in the score?
I want a simple way to do 2 things:
boost geodist() highest at 1 mile and lowest at 100 miles.
plus add a boost for
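For the first of those two goals, one common approach (an assumption here, since the message is cut off) is a reciprocal boost function such as bf=recip(geodist(),1,100,100). A quick sketch of its math:

```python
# Sketch of Solr's recip(x, m, a, b) = a / (m*x + b) function query,
# often used as a distance boost. The constants 1, 100, 100 are an
# illustrative choice, not taken from the thread.

def recip(x: float, m: float, a: float, b: float) -> float:
    """Solr's reciprocal function: a / (m*x + b)."""
    return a / (m * x + b)

def distance_boost(miles: float) -> float:
    # 100/(d+100): ~0.99 at 1 mile, 0.5 at 100 miles, decaying smoothly.
    return recip(miles, 1, 100, 100)

print(round(distance_boost(1), 3))    # near the maximum -> 0.99
print(round(distance_boost(100), 3))  # half the maximum -> 0.5
```

Tuning m, a, and b shifts how quickly the boost falls off between the 1-mile and 100-mile marks.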
Hi there:
In the Solr 4.3 source code, I found that the Overseer uses 3 queues to handle all
SolrCloud management requests:
1: /overseer/queue
2: /overseer/queue-work
3: /overseer/collection-queue-work
ClusterStateUpdater uses the 1st & 2nd queues to handle SolrCloud shard and
state requests
Oops... I misread and confused your "q" and "fq" params.
-- Jack Krupansky
-Original Message-
From: Jack Krupansky
Sent: Tuesday, July 09, 2013 7:47 PM
To: solr-user@lucene.apache.org
Subject: Re: join not working with UUIDs
Your join is requesting to use the "join_id" field ("from") of documents
matching the query of "cor_parede:branca", but the join_id field of that
document is empty.
Maybe you intended to search in the other direction, like
"acessorio1:Teclado".
-- Jack Krupansky
-Original Message-
Hello,
I am trying to create a POC to test query joins. However, I was
surprised to see my test work with some ids; when my document ids
are UUIDs, it doesn't work.
An example follows, using SolrJ:
SolrInputDocument doc = new SolrInputDocument();
doc.addField("id", "bcbaf9eb-0da
Hi Jed,
This is really with Solr 4.0? If so, it may be wiser to jump to 4.4,
which is about to be released. We did not have fun working with 4.0 in
SolrCloud mode a few months ago. You will save time, hair, and money
if you convince your manager to let you use Solr 4.4. :)
Otis
--
Solr & Elastic
Look at the speed and time remaining on this one, pretty funny:
Master http://ssbuyma01:8983/solr/1/replication
Latest Index Version: null, Generation: null
Replicatable Index Version: 1276893670202, Generation: 127213
Poll Interval: 00:05:00
Local Index Index Version: 1276893670108, G
On 7/9/2013 3:38 PM, Katie McCorkell wrote:
I am curious about the "Deleted Docs:" statistic on the solr/#/collection1
Overview page. Does Solr remove docs while indexing? I thought it only did
that when Optimizing, however my instance had 726 Deleted Docs, but then
after adding some documents th
Solr (Lucene, actually) will be doing segment merge operations in the
background, continually, so generally you won't need to do optimize
operations.
Generally, an explicit delete and a replace of an existing document are the
only two ways that you would get a deleted document.
-- Jack Krupa
Hello,
I am curious about the "Deleted Docs:" statistic on the solr/#/collection1
Overview page. Does Solr remove docs while indexing? I thought it only did
that when Optimizing, however my instance had 726 Deleted Docs, but then
after adding some documents that number decreased, eventually to 18
On 7/9/2013 2:02 PM, Andy Lester wrote:
What error do you get? Never say "I get an error." Always say "I get
this error: ."
This is the actual error when trying "*:*" :
Can't locate object method "_struct_" via package
"WebService::Solr::Query" at
/usr/local/share/perl/5.14.2/WebSer
Hi Shawn,
I have been trying to duplicate this problem without success for the last 2
weeks, which is one reason I'm getting flustered. It seems reasonable to be
able to duplicate it, but I can't.
We do have a story to upgrade, but that is still weeks if not months before
that gets rolled out
Hi
My Solr 3.6.1 slave farm is suddenly getting stuck during replication. It
seems to stop on a random file on various slaves (not all) and not continue.
I've tried stopping and restarting Tomcat etc. but some slaves just can't get the
index pulled down. Note there is plenty of space on the h
On Jul 9, 2013, at 2:48 PM, Shawn Heisey wrote:
> This is primarily to Andy Lester, who wrote the WebService::Solr module
> on CPAN, but I'll take a response from anyone who knows what I can do.
>
> If I use the following Perl code, I get an error.
What error do you get? Never say "I get an e
This is primarily to Andy Lester, who wrote the WebService::Solr module
on CPAN, but I'll take a response from anyone who knows what I can do.
If I use the following Perl code, I get an error. If I try to build
some other query besides *:* to request all documents, the script runs,
but the query
Other than using futures and callables? Runnables ;-) Other than that you
will need an async request (i.e., an async client).
But in case somebody else is looking for an easy recipe for server-side async:
public void handleRequestBody(SolrQueryRequest req, SolrQueryResponse rsp) {
  if (isBusy()) {
    rsp.add("message", "Batch processing is already r
Thanks Erick, I made a private patch to the CoreContainer until the real deal lands.
C
On Jul 9, 2013, at 4:35 AM, Erick Erickson wrote:
> There's been a lot of action around this recently, this is
> a known issue in 4.3.1.
>
> The short form is "it should all be better in Solr 4.4" which
> may be ou
On 7/9/2013 10:37 AM, adityab wrote:
Is staggered replication possible in Solr through configuration?
You wouldn't be able to do this directly without switching to completely
manually triggered replication, but the concept of a repeater may
interest you.
http://wiki.apache.org/solr/SolrRepl
Hi,
Is staggered replication possible in Solr through configuration?
We are concerned about the CPU spike (80%) and GC pauses on all the slaves when
they try to replicate the updated index from repeaters. We haven't observed this
behavior in v3.5 (max spikes were 50% during replication).
In our case we hav
On 7/9/2013 9:50 AM, Jed Glazner wrote:
I'll give you the high level before delving deep into setup etc. I have been
struggling at work with a seemingly random problem where Solr hangs for
10-15 minutes during updates. This outage always seems to be immediately
preceded by an EOF excepti
If you call /solr/zookeeper on a specific node, that servlet would tell you -
output is a bit verbose for what you want though.
- Mark
On Jul 9, 2013, at 10:36 AM, Robert Stewart wrote:
> I would like to be able to do it without consulting Zookeeper. Is there some
> variable or API I can call
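As a rough sketch of what consuming that kind of cluster-state data could look like (the JSON below is invented for illustration and is not actual /solr/zookeeper output):

```python
import json

# Hypothetical clusterstate.json-style structure; collection, shard, and
# node names are made up. SolrCloud marks the shard leader replica with
# a "leader":"true" property.
cluster_state = json.loads("""
{
  "collection1": {
    "shards": {
      "shard1": {
        "replicas": {
          "core_node1": {"node_name": "hostA:8983_solr", "leader": "true"},
          "core_node2": {"node_name": "hostB:8983_solr"}
        }
      }
    }
  }
}
""")

def is_leader(state, collection, shard, node_name):
    """Return True if the given node hosts the leader replica of the shard."""
    replicas = state[collection]["shards"][shard]["replicas"].values()
    return any(r.get("leader") == "true" and r["node_name"] == node_name
               for r in replicas)

print(is_leader(cluster_state, "collection1", "shard1", "hostA:8983_solr"))  # True
print(is_leader(cluster_state, "collection1", "shard1", "hostB:8983_solr"))  # False
```

A cron job could run a check like this locally and only trigger the backup when it returns True for the current node.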
I'll give you the high level before delving deep into setup etc. I have been
struggling at work with a seemingly random problem where Solr hangs for
10-15 minutes during updates. This outage always seems to be immediately
preceded by an EOF exception on the replica. Then 10-15 minutes la
On Tue, Jul 9, 2013 at 6:29 AM, It-forum wrote:
> However when I use an edismax query with the following details, I'm not able
> to retrieve the field "tag". And it seems that it is not taken into the match
> score either.
>
You seem to have two problems here. One not matching (use debug flags for
that) and
> We are going to use solr in production. There are chances that the machine
> itself might shutdown due to power failure or the network is disconnected
> due to manual intervention. We need to address those cases as well to
> build
> a robust system..
The latest version of Solr is 4.3.1, and 4.4
We are going to use solr in production. There are chances that the machine
itself might shutdown due to power failure or the network is disconnected
due to manual intervention. We need to address those cases as well to build
a robust system..
The same scenario happens if the network to any one of the machines is
unavailable (i.e., if we manually disconnect the network cable, the status of
the node does not get updated immediately).
Please help me with this issue.
On 7/9/2013 5:43 AM, Ranjith Venkatesan wrote:
> I am new to Solr. Currently I'm using Solr 4.3.0. I set up a SolrCloud
> cluster on 3 machines. If I kill a node running on any of the machines using
> "kill -9", the status of the killed node is not updated immediately in the web
> console of Solr. I tak
I would like to be able to do it without consulting Zookeeper. Is there some
variable or API I can call on a specific Solr cloud node to know if it is
currently a shard leader? The reason I want to know is I want to perform index
backup on the shard leader from a cron job *only* if that node is
Something is wrong if it actually takes 20 minutes.
- Mark
On Jul 9, 2013, at 7:43 AM, Ranjith Venkatesan wrote:
> Hi,
>
> I am new to Solr. Currently I'm using Solr 4.3.0. I set up a SolrCloud
> cluster on 3 machines. If I kill a node running on any of the machines using
> "kill -9", statu
On 7/8/2013 11:10 PM, Learner wrote:
>
> I wrote a custom data import handler to import data from files. I am trying
> to figure out a way to make asynchronous call instead of waiting for the
> data import response. Is there an easy way to invoke asynchronously (other
> than using futures and cal
I have another field in my schema: *url*. When I get results as a facet,
I see that there are 3107206 documents for *lev* (3107206). However, what are the urls of those
3107206 documents? I tried grouping instead of faceting:
/solr/select/?q=*:*&group=true&group.field=lang&wt=xml&fl=url
and I get only
Hey, thanks.
It somewhat works for me.
I don't quite follow the question. Give us an example.
-- Jack Krupansky
-Original Message-
From: Furkan KAMACI
Sent: Tuesday, July 09, 2013 9:37 AM
To: solr-user@lucene.apache.org
Subject: Re: Document count mismatch
Ok, one more question. I have another field at my schema: *url*
Usually a car term and a car part term will look radically different. So,
simply use the edismax query parser and set "qf" to be both the car and car
part fields. If either matches, the document will be selected. And if you
have a "type" field, you can check that to see if a car or part was matc
Ok, one more question. I have another field in my schema: *url*. How can I
get the urls for each facet?
2013/7/9 Jack Krupansky
> 1. Try facet.missing=true to count the number of documents that do not
> have a value for that field.
>
> 2. Try facet.limit=n to set the number of returned facet values t
Simple math: x times zero equals zero.
That's why the default document boost is 1.0 - score times 1.0 equals score.
Any particular reason you wanted to zero out the document score from the
document level?
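Jack's "x times zero" point can be sketched in two lines (the numbers below are invented for illustration):

```python
# The index-time document boost multiplies into the final score, so a
# boost of 0.0 wipes out the TF-IDF contribution entirely, while the
# default boost of 1.0 leaves the score unchanged.

def final_score(tfidf_score: float, doc_boost: float) -> float:
    return tfidf_score * doc_boost

print(final_score(2.37, 1.0))  # default boost: score unchanged -> 2.37
print(final_score(2.37, 0.0))  # zero boost: score is always -> 0.0
```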
-- Jack Krupansky
-Original Message-
From: Tony Mullins
Sent: Tuesday, July 0
I am passing the boost value (via Nutch), i.e., boost = 0.0.
But my question is why Solr is showing me score = 0.0 when my boost (index-time
boost) = 0.0?
Shouldn't Solr calculate its document scores on the basis of TF-IDF? And
if not, how can I make Solr consider only TF-IDF while calculating
doc
1. Try facet.missing=true to count the number of documents that do not have
a value for that field.
2. Try facet.limit=n to set the number of returned facet values to a larger
or smaller value than the default of 100.
3. Try reading the Faceting chapter of my book!
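For example, items 1 and 2 can be combined into a request like this (host, core, and field names below are made up, not from the thread):

```python
from urllib.parse import urlencode

# Assemble the facet parameters from items 1 and 2 above into a query
# string; "category" is a hypothetical faceted field.
params = {
    "q": "*:*",
    "rows": 0,
    "facet": "on",
    "facet.field": "category",
    "facet.missing": "true",  # item 1: count docs with no value in the field
    "facet.limit": 200,       # item 2: raise the default limit of 100 values
}
query_string = urlencode(params)
print("http://localhost:8983/solr/select?" + query_string)
```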
-- Jack Krupansky
-Or
Any suggestions?
On 09/07/2013 12:29, It-forum wrote:
Hello to all,
I load solr by data-import.
I add in db_data_config.xml inside the product entity the tag entity
as follow :
|
|
WHERE id_product='${product.id_product}'"
parentDel
Hi,
I solved it by copying the field into a string field type
and querying on this field only.
Regards,
David
On 09/07/2013 11:03, Parul Gupta (Knimbus) wrote:
Hi solr-user!!!
I have an issue.
I want to know whether it is possible to use StopwordFilterFactory with
KeywordTokenizer.
example
On 05.07.2013 at 16:36, Shalin Shekhar Mangar wrote:
> Okay so just for the rest of the people who dig up this thread. You
> had to put all the extra jar files required by typo3 into WEB-INF/lib
> to make this work. Is that right?
Maybe this works as well, but I'd put it in a directory called "lib
I've run a command to find term counts at my index:
solr/select/?q=*:*&rows=0&facet=on&facet.field=teno&wt=xml&indent=on
it gives me a result like that:
...
...
3107206
59821
...
when I sum those numbers (3107206 + 59821 + ...) I get *3245074*; however,
*numFound="3245092"*. How does that come about?
*PS:*
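One likely explanation (an assumption, since the schema isn't shown in the message): documents with no value in the faceted field count toward numFound but appear under no facet value, so the facet counts sum short. A toy illustration with invented data:

```python
from collections import Counter

# Toy documents; one lacks the faceted field, mirroring how a facet-count
# sum can fall short of numFound.
docs = [
    {"id": 1, "teno": "a"},
    {"id": 2, "teno": "a"},
    {"id": 3, "teno": "b"},
    {"id": 4},  # no value for the faceted field "teno"
]

num_found = len(docs)
facet_counts = Counter(d["teno"] for d in docs if "teno" in d)

print(num_found)                   # 4
print(sum(facet_counts.values()))  # 3 -- short by the one missing document
```

Adding facet.missing=true would surface that missing-value count directly.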
Hi,
I am new to Solr. Currently I'm using Solr 4.3.0. I set up a SolrCloud
cluster on 3 machines. If I kill a node running on any of the machines using
"kill -9", the status of the killed node is not updated immediately in the web
console of Solr. It takes nearly 20+ minutes to mark it as a "Gone" node.
No, there's no good way to make Solr return
numFound=120 when there are 540 (or
whatever) records. Why do you care?
If you need to stop at 120, just stop at 120 and ignore
the numFound.
If you need to display the 120 to the end user even if there
are more docs, just do that.
Best
Erick
On Tue, J
My guess is that you're not really passing on the boost field's value
and getting the default. Don't quite know how I'd track that down though
Best
Erick
On Tue, Jul 9, 2013 at 4:09 AM, imran khan wrote:
> Greetings,
>
> I am using nutch 2.x as my datasource for Solr 4.3.0. And nutch passes
I think Jack was mostly thinking in "slam dunk" terms. I know of
SolrCloud demo clusters with 500+ nodes, and at that point
people said "it's going to work for our situation, we don't need
to push more".
As you start getting into that kind of scale, though, you really
have a bunch of ops considera
According to the code, at least in Solr 4.2, getParams of CoreAdminRequest.Unload
returns a locally created ModifiableSolrParams.
It means that parameters set that way won't be received by
CoreAdminHandler.
I'm going to open an issue in Jira and provide a patch for this.
Best regards,
Lyuba
There's been a lot of action around this recently, this is
a known issue in 4.3.1.
The short form is "it should all be better in Solr 4.4" which
may be out in the next couple of weeks, assuming we
can get agreement.
But look at SOLR-4862, 4910, 4982 and related if you want
to see the ugly details
Hi Parul,
You might find this useful : https://github.com/cominvent/exactmatch/
From: Parul Gupta(Knimbus)
To: solr-user@lucene.apache.org
Sent: Tuesday, July 9, 2013 12:03 PM
Subject: Phrase search without stopwords
Hi solr-user!!!
I have an issue
I wa
Hi solr-user!!!
I have an issue.
I want to know whether it is possible to use StopwordFilterFactory with
KeywordTokenizer.
example I have multiple title :
1)title:Canadian journal of information and library science
2)title:Canadian information of science
3)title:Southern information and lib
Hi Jack,
Thanks for your answer.
I upgraded Solr from 4.0.0 (LUCENE_40) to 4.3.0 (LUCENE_43), and later to
Solr 4.3.1. As a result, the pivot queries I already had running against Solr
4.0.0, which were taking a few milliseconds (100 ms, 150 ms), are now, with Solr 4.3.1,
taking around 13 seconds.
An index o
Hi Erick,
thanks for the reply, I am doing the same thing already. But for the paging
calculation I am depending on the numFound="120" value. That is the result I want.
thanks
aniljayanti
Hello to all,
I load solr by data-import.
I add in db_data_config.xml inside the product entity the tag entity as
follow :
|
|
WHERE id_product='${product.id_product}'"
parentDeltaQuery="select id_product as id from
ps_product where id_p
Greetings,
I am using Nutch 2.x as my datasource for Solr 4.3.0. And Nutch passes
its own field on to my Solr schema.
Now, due to some reason, I always get = 0.0, and because of this my Solr
document score is also always 0.0.
Is there any way in Solr to make it ignore the field's value for its
docu
> 5. No more than 32 nodes in your SolrCloud cluster.
I hope this isn't too OT, but what tradeoffs is this based on? Would have
thought it easy to hit this number for a big index and high load (hence
with the view of both the number of shards and replicas horizontally
scaling..)
> 6. Don't return
I am migrating from solr 3.6 to 4.3.1. Using the core create rest call,
something like:
http://10.1.10.150:8090/solr/admin/cores?action=CREATE&name=foo&instanceDir=/home/solrdata/foo&persist=true&wt=json&dataDir=/home/solrdata/foo
I am able to add data to the index it creates within th