Hello,
It seems to me that there is no way I can use the dismax handler for
searching in both tokenized and untokenized fields while searching for a
phrase.
Consider the following example. I have two fields in the index: product_name and
product_name_un. The schema looks like:
On Sat, Oct 10, 2009 at 6:34 AM, Alex Baranov wrote:
>
> Hello,
>
> It seems to me that there is no way I can use the dismax handler for
> searching in both tokenized and untokenized fields while searching for a
> phrase.
>
> Consider the following example. I have two fields in the index: product_name
I guess this is a bug that should be added to JIRA (if it is not there
already). Should I add it?
> Hmmm, right. This is due to the fact that the Lucene query parser
> (still actually used in dismax) breaks things up by whitespace
> *before* analysis (so the analyzer for the untokenized field ne
I do this:

ModifiableSolrParams p = new ModifiableSolrParams();
p.add("qt", "/dataimport");
p.add("command", "full-import");
server.query(p, METHOD.POST);

But it starts giving me this exception:
SEVERE: Full Import failed
java.util.concurrent.Rejecte
I can't wait...
--
"Good Enough" is not good enough.
To give anything less than your best is to sacrifice the gift.
Quality First. Measure Twice. Cut Once.
This is pretty unstable... anyone have any clue? Sometimes it even creates the
index, sometimes it does not.
But every time I do get this exception.
Regards
Rohan
On Sat, Oct 10, 2009 at 6:07 PM, rohan rai wrote:
> ModifiableSolrParams p = new ModifiableSolrParams();
> p.add("qt
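One pattern that can help with overlapping full-imports is to poll the DataImportHandler status command and only start a new import once the previous one has finished. A minimal sketch using only the JDK, triggering the same full-import over plain HTTP; the Solr URL is an assumption (default standalone location), and the "busy" status string is the value DIH reports while an import is running:

```java
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;
import java.net.URLEncoder;
import java.nio.charset.StandardCharsets;

public class DihTrigger {
    // Build the request URL for a DataImportHandler command.
    static String commandUrl(String solrBase, String command) {
        return solrBase + "/dataimport?command="
                + URLEncoder.encode(command, StandardCharsets.UTF_8);
    }

    // Issue the command over plain HTTP and return the raw response body.
    static String send(String url) throws Exception {
        HttpURLConnection conn = (HttpURLConnection) new URL(url).openConnection();
        StringBuilder body = new StringBuilder();
        try (BufferedReader in = new BufferedReader(
                new InputStreamReader(conn.getInputStream(), StandardCharsets.UTF_8))) {
            String line;
            while ((line = in.readLine()) != null) body.append(line).append('\n');
        }
        return body.toString();
    }

    public static void main(String[] args) throws Exception {
        String base = "http://localhost:8983/solr"; // assumption: default Solr location
        // Wait until any previous import has finished before starting a new one.
        while (send(commandUrl(base, "status")).contains("busy")) {
            Thread.sleep(1000);
        }
        System.out.println(send(commandUrl(base, "full-import")));
    }
}
```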
> Hi
> I would appreciate it if someone could throw some light on the
> following point
> regarding proximity search.
> I have a search box, and if a user comes and types in "honda
> car" WITHOUT any
> double quotes, I want to get all documents with matches,
> and also they
> should be ranked based on pro
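For the unquoted "honda car" case, the usual dismax approach is the pf (phrase fields) and ps (phrase slop) parameters, which boost documents where the query terms occur close together without requiring the user to quote the phrase. A sketch of the handler defaults; the handler name and field names are placeholders, not from the original mail:

```xml
<requestHandler name="dismax" class="solr.SearchHandler">
  <lst name="defaults">
    <str name="defType">dismax</str>
    <!-- fields matched for the individual terms (placeholders) -->
    <str name="qf">name description</str>
    <!-- extra boost for docs where the terms form a (near-)phrase -->
    <str name="pf">name^2 description^2</str>
    <!-- terms may be up to 3 positions apart and still get the phrase boost -->
    <str name="ps">3</str>
  </lst>
</requestHandler>
```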
Hi all,
I am trying to use SpanQueries to save *all* hits for a custom query type
(e.g. defType=fooSpanQuery), along with token positions. I have this working
in straight Lucene, so my challenge is to implement it half-intelligently in
solr. At the moment, I can't figure out where and how to cust
Anyone know why you would see a transfer speed of just 10-20MB over a
gigabit network connection?
Even with standard drives, I would expect to at least see around 40MB.
Has anyone seen over 10-20 using replication?
Any ideas on what the bottleneck could be? I think even a standard
drive can do wr
Hi,
Simple question! I have a nightly cron job to send the optimize command
to Solr on our master instance. Is this also required on Solr replicated
slaves to optimise their indexes?
Thanks,
Matt
No. The slaves will copy the current index, optimized or not. --wunder
On Oct 10, 2009, at 4:33 PM, Matthew Painter wrote:
Hi,
Simple question! I have a nightly cron job to send the optimize
command
to Solr on our master instance. Is this also required on Solr
replicated
slaves to optimis
My apologies; I've just found the answer (that optimisation should be done on
the master server only).
From: Matthew Painter
Sent: Sunday, 11 October 2009 12:34 p.m.
To: 'solr-user@lucene.apache.org'
Subject: Optimize on slaves?
Hi,
Simple question! I have a nightl
A drive that can do 40+ but is also getting query load might have its
writes knocked down to that?
- Mark
http://www.lucidimagination.com (mobile)
On Oct 10, 2009, at 6:41 PM, Mark Miller wrote:
Anyone know why you would see a transfer speed of just 10-20MB over a
gigabit network connecti
Folks:
I have a corpus of approx 6M documents, each of approx 4K bytes.
Currently, the way indexing is set up, I read documents from a database and
issue Solr POST requests in batches (batches are sized so that the
maxPostSize of Tomcat, which is set to 2MB, is adhered to). This means that
in
Oh, and one more thing... for historical reasons our apps run on Microsoft
technologies, so using SolrJ would be next to impossible at the present
time.
Thanks in advance for your help!
-- Bill
--
From: "William Pierce"
Sent: Saturday, October 1
A few things off the bat:
1) do not commit until the end.
2) use the DataImportHandler - it runs inside Solr and reads the
database. This cuts out the HTTP transfer/XML translation overheads.
3) examine your schema. Some of the text analyzers are quite slow.
Solr tips:
http://wiki.apache.org/solr/Solr
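For tip 2, a minimal data-config.xml sketch for the DataImportHandler, assuming a JDBC source; the driver, connection details, table, and column names are all placeholders:

```xml
<dataConfig>
  <!-- connection details are placeholders -->
  <dataSource driver="com.mysql.jdbc.Driver"
              url="jdbc:mysql://localhost/products"
              user="solr" password="..."/>
  <document>
    <!-- one Solr document per row of the query -->
    <entity name="doc" query="select id, body from docs">
      <field column="id"   name="id"/>
      <field column="body" name="text"/>
    </entity>
  </document>
</dataConfig>
```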
In Solr a facet is assigned one number: the number of documents in
which it appears. The facets are sorted by that number. Would your
use case be solved with a second number that is formulated from the
relevance of the associated documents? For example:
facet relevance = count * sum(scores of
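That weighting (count times the sum of the matching documents' scores) can be sketched as a toy computation outside Solr; the Hit type and the sample values are made up for illustration:

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class FacetRelevance {
    // One matching document: its facet value and its relevance score.
    record Hit(String facetValue, double score) {}

    // relevance(facet) = count(facet) * sum(scores of docs carrying that facet)
    static Map<String, Double> relevance(List<Hit> hits) {
        Map<String, Integer> counts = new HashMap<>();
        Map<String, Double> sums = new HashMap<>();
        for (Hit h : hits) {
            counts.merge(h.facetValue(), 1, Integer::sum);
            sums.merge(h.facetValue(), h.score(), Double::sum);
        }
        Map<String, Double> out = new HashMap<>();
        for (String f : counts.keySet()) out.put(f, counts.get(f) * sums.get(f));
        return out;
    }

    public static void main(String[] args) {
        List<Hit> hits = List.of(new Hit("red", 0.9), new Hit("red", 0.5),
                                 new Hit("blue", 2.0));
        // "red" appears twice: 2 * (0.9 + 0.5); "blue" once: 1 * 2.0
        System.out.println(relevance(hits));
    }
}
```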
If you don't want to do a pure negative query and just want to boost a few
documents down based on a matching criterion, try using the linear function (one
of the functions available as boost functions) with a negative m (slope).
We could solve our problem this way.
We wanted to negatively boost some d
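As a sketch of that approach: Solr's linear(x,m,c) function computes m*x + c, so applying it with a negative slope to a numeric field pushes matching documents down via an additive boost function. The field name here is a placeholder:

```
bf=linear(downrank,-100,0)
```

With this, documents whose downrank field is 1 have 100 subtracted from their boost, while documents with downrank 0 are unaffected.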
On Fri, Oct 9, 2009 at 3:33 AM, Patrick Jungermann <
patrick.jungerm...@googlemail.com> wrote:
> Hi Bern,
>
> the problem is the character sequence "--". A query is not allowed to
> contain consecutive minus characters. Remove one minus
> character and the query will be parsed wi
On Fri, Oct 9, 2009 at 7:56 PM, Michael wrote:
> Hm... still no success. Can anyone point me to a doc that explains
> how to define and reference core properties? I've had no luck
> searching Google.
>
> Shalin, I gave an identical '' tag to
> each of my cores, and referenced ${solr.core.shards
On Fri, Oct 9, 2009 at 9:39 PM, Michael wrote:
> For posterity...
>
> After reading through http://wiki.apache.org/solr/SolrConfigXml and
> http://wiki.apache.org/solr/CoreAdmin and
> http://issues.apache.org/jira/browse/SOLR-646, I think there's no way
> for me to make only one core specify &sha
On Fri, Oct 9, 2009 at 9:49 PM, Moshe Cohen wrote:
> Hi,
> I am using SOLR 1.4 (July 23rd nightly build), with a master-slave setup.
> I have encountered twice an occurrence of the slave recreating the indexes
> over and over again.
> Couldn't find any pointers in the log.
> Any help would be ap
Hi,
I am creating facets on a field of type
The field can contain any number of dates, even 0. I am making a facet
query on the field with the following query parameters:
facet.date=daysForFilter
facet.date.gap=%2B1DAY
facet.date.end=2009-10-16T00:00:00Z
facet=true
facet.date.start=