Hi Jon,
Yes, it is possible already :)
Just add a request parameter "entity". For example,
command=full-import&entity=entity1&entity=entity2 will run a full
import for entity1 and entity2. Note that this works for root
(top-level) entities only.
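For reference, the full request might look like the following (assuming Solr on localhost:8983 and the DataImportHandler registered at /dataimport, which are just the usual example values):

```
http://localhost:8983/solr/dataimport?command=full-import&entity=entity1&entity=entity2
```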
On Sat, Jul 12, 2008 at 10:39 AM, Jo
Ahhh, very cool, I did not realize that. I was actually able to use
a db entity over the http entity, so I pulled a list of subdomains and
included it that way.
One small *possible* feature request (or is it possible already) is to
load entities by name? For example if I wanted to cron up
On Fri, Jul 11, 2008 at 11:46 PM, Jon Baer <[EMAIL PROTECTED]> wrote:
> Hi,
>
> On the wiki it says that the url attribute can be templatized but I'm not sure
> how that happens; do I need to create something read from a database
> column in order to use that type of function? i.e. I'd like to run ove
On Fri, 11 Jul 2008 15:22:35 +
sundar shankar <[EMAIL PROTECTED]> wrote:
> I was recently looking for details of the 1.3-specific analysers and filters
> in the Solr wiki and was unable to find any. Could anyone please point me to a
> place where I can find some documentation on them?
>
Hello.
I have a question about morphology.
Currently I'm storing multiple forms of words, i.e. if word 'N' in the sequence 'M
N K' leads to two normal forms 'Nf1' and 'Nf2', then I'm storing 'Mf Nf1 Nf2
Kf'.
That allows me to search if the user entered N2 (or any of its forms) as well as
any form of N1. May
Yeah, I guess I was optimizing every 1000 records while indexing. I changed that
now and it doesn't seem to be happening, at least on the dev box. But the
question still remains as to why Solr ran out of max warmers. I was running just
one thread that was pulling up data from the DB and indexing it. T
Hi,
On the wiki it says that the url attribute can be templatized but I'm not
sure how that happens; do I need to create something read from a
database column in order to use that type of function? i.e. I'd like to
run over some RSS feeds for multiple URLs (~ 30), do I need to copy 1
per URL
thanks,
I will give it a try and get back to you
> Date: Fri, 11 Jul 2008 20:14:11 +0530
> From: [EMAIL PROTECTED]
> To: solr-user@lucene.apache.org
> Subject: Re: Solr searching issue..
>
> You can use EdgeNGramTokenizer available with Solr 1.3 to achi
Thanks for the clarification - I understand now.
On Fri, Jul 11, 2008 at 12:30 PM, Shalin Shekhar Mangar
<[EMAIL PROTECTED]> wrote:
> Sorry for not being clear. I meant that you can use one OR the other
> as and when necessary. You can't use both the styles in one request.
>
> On Fri, Jul 11, 2008
Sorry for not being clear. I meant that you can use one OR the other
as and when necessary. You can't use both the styles in one request.
On Fri, Jul 11, 2008 at 9:55 PM, Ian Connor <[EMAIL PROTECTED]> wrote:
> Could you give me an example how combining standard with dismax would
> look like in qu
Thanks Yonik. I will try it out. Btw, what cache should we use for
multivalued, untokenised fields with a large number of terms? Faceted search
on these fields seems to be noticeably slower even if I have allocated enough
filterCache. There seem to be a lot of cache lookups for each query.
On Sat, Ju
Could you give me an example how combining standard with dismax would
look like in query string or URL?
I thought you had to set qt=dismax in the URL and it applied to the
whole query like:
http://solrserver:8983/solr/select?indent=on&version=2.2&q=nik+gene+cluster&start=0&rows=10&fl=*%2Cscore&qt
You're trying to commit too fast and warming searchers are stacking up.
Do less warming of caches, or space out your commits a little more.
-Yonik
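A sketch of the solrconfig.xml settings this advice points at (the values are illustrative, not recommendations):

```xml
<!-- Fewer autowarmed entries means each new searcher is ready sooner -->
<filterCache class="solr.LRUCache" size="512" initialSize="512" autowarmCount="0"/>
<queryResultCache class="solr.LRUCache" size="512" initialSize="512" autowarmCount="0"/>

<!-- And/or allow fewer overlapping warming searchers -->
<maxWarmingSearchers>2</maxWarmingSearchers>
```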
On Fri, Jul 11, 2008 at 11:56 AM, sundar shankar
<[EMAIL PROTECTED]> wrote:
> Hi ,
>I am getting the "Error opening new searcher. exceeded limit o
See ExternalFileField and BoostedQuery
-Yonik
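For the archives, a sketch of what an ExternalFileField setup might look like (the field and type names here are hypothetical):

```xml
<!-- schema.xml: popularity values are read from an external file,
     not from the index, so they can change without reindexing -->
<fieldType name="pviews" class="solr.ExternalFileField"
           keyField="id" defVal="0" valType="float"/>
<field name="popularity" type="pviews"/>
```

The values would then live in a file named external_popularity in the index data directory, one id=value line per document, so page-view counts can be refreshed periodically without touching the documents themselves.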
On Fri, Jul 11, 2008 at 11:47 AM, climbingrose <[EMAIL PROTECTED]> wrote:
> Hi all,
> Has anyone tried to factor rating/popularity into Solr scoring? For example,
> I want documents with more page views to be ranked higher in the search
> results. Fro
Hi,
I am getting the "Error opening new searcher. exceeded limit of
maxWarmingSearchers=4, try again later." My configuration sets
coldSearchers to true and maxWarmingSearchers to 4. We expect
a max of 40 concurrent users but an average of 5-10 at most times. W
Note that you can use both the standard and dismax styles as and when you
need more control vs. searching all fields.
On Fri, Jul 11, 2008 at 9:14 PM, Ian Connor <[EMAIL PROTECTED]> wrote:
> So it might be a neat solution. If I am reading this right:
>
> 1. would make the index larger
> 2. means you ar
What was the type of the field that you are using? I guess you could achieve it
by a simple swap of text and string.
> From: [EMAIL PROTECTED]
> To: solr-user@lucene.apache.org
> Subject: Solr searching issue..
> Date: Fri, 11 Jul 2008 11:28:50 +0100
>
> Hi solr-users,
>
> version type: nigh
Hi all,
Has anyone tried to factor rating/popularity into Solr scoring? For example,
I want documents with more page views to be ranked higher in the search
results. From what I can see, the most difficult thing is that we have to
update the number of page views for each document. With Solr-139, do
So it might be a neat solution. If I am reading this right:
1. would make the index larger
2. means you are searching all fields (or a weighted list) but cannot
also search specific fields at the same time
3. would make your query string longer but give you the most control
without bloating your i
Hi
I was recently looking for details of the 1.3-specific analysers and
filters in the Solr wiki and was unable to find any. Could anyone please point me
to a place where I can find some documentation on them?
Thanks
Sundar
H... The Analyzer shows me *almost* what I am expecting to see. When I
make it verbose with debug info, I can see exactly what is going on,
which is great. Thanks for the tip.
What's happening (for most of my test cases) is that some of the synonyms
are multiple words (and it's a big sy
Have you tried [ulimit -n 65536]? I don't think it relates to files
marked for deletion...
==
http://www.linkedin.com/in/liferay
Sooner or later, the system crashes with the message "Too many open files"
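Besides raising the descriptor limit, keeping the index in Lucene's compound file format cuts the number of files per segment sharply, at the cost of somewhat slower indexing. A sketch of the relevant solrconfig.xml fragment:

```xml
<!-- solrconfig.xml, inside <mainIndex>: one .cfs file per segment
     instead of roughly ten separate files -->
<useCompoundFile>true</useCompoundFile>
```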
Hi Ian,
No, the * syntax you mentioned is not supported. However, there are a
couple of ways of achieving this.
1. Create a copyField which has all the content and search on that --
as you mentioned
2. You can use dismax and add all the fields as the search field
parameter either in the request o
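As a sketch of option 2, a dismax request with the fields listed in qf might look like this (the field names and boosts are hypothetical):

```
http://localhost:8983/solr/select?qt=dismax&q=television&qf=title^2.0+body+tags
```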
You can use EdgeNGramTokenizer available with Solr 1.3 to achieve this. But
I'd think again about introducing this kind of search as n-grams can bloat
your index size.
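A sketch of a field type using it, in case it helps (n-grams on the index side only, plain tokens at query time; the gram sizes are illustrative):

```xml
<fieldType name="edgytext" class="solr.TextField">
  <analyzer type="index">
    <!-- "joh" indexes as j, jo, joh -->
    <tokenizer class="solr.EdgeNGramTokenizerFactory"
               minGramSize="1" maxGramSize="15" side="front"/>
    <filter class="solr.LowerCaseFilterFactory"/>
  </analyzer>
  <analyzer type="query">
    <tokenizer class="solr.WhitespaceTokenizerFactory"/>
    <filter class="solr.LowerCaseFilterFactory"/>
  </analyzer>
</fieldType>
```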
On Fri, Jul 11, 2008 at 3:58 PM, dudes dudes <[EMAIL PROTECTED]> wrote:
>
> Hi solr-users,
>
> version type: nightly build solr-2
There are no errors in my log, just a list of GET, HEAD and POST entries; it
looks just like an Apache access log.
There are a few entries in the log file that have " " and "-" in them, but
as far as I can see that isn't a problem.
Is there a way to make Solr's logging a bit more verbose to help de
Are there any errors in your logs? Have you tried looking at the
admin analysis page to see how text gets treated on that field?
Are you sure the large synonym file is formatted correctly?
-Grant
On Jul 11, 2008, at 7:23 AM, matt connolly wrote:
I'm setting up Solr to run on a web site I'
I discovered that moving the synonym expansion to index time rather than
query time works just fine with my synonym list.
I'd still like to know why it doesn't work when expanding at query time,
though :(
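For anyone finding this thread later, index-time expansion is just a matter of where the SynonymFilterFactory sits; a sketch (attribute values illustrative):

```xml
<analyzer type="index">
  <tokenizer class="solr.WhitespaceTokenizerFactory"/>
  <!-- expand="true" writes all synonyms of a term into the index -->
  <filter class="solr.SynonymFilterFactory" synonyms="synonyms.txt"
          ignoreCase="true" expand="true"/>
  <filter class="solr.LowerCaseFilterFactory"/>
</analyzer>
```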
Is it possible to have all fields searched by default?
* or something like that.
An alternative is if there is some Lucene query that will give me all
fields, "*:term".
My final, more redundant alternative is to create a field called
"all_fields" and put everything in there at the end to be
I'm setting up Solr to run on a web site I'm working on.
Basically, if I use no synonym file, Solr works really well for
finding text; the Porter stemmer filter is great.
It also works with a small synonym file, like the one in the example, which
defines Television,TV.
But when I add
Hi solr-users,
version type: nightly build solr-2008-07-07
If I search for the name John, it finds it without any issues. On the other
hand, if I search for Joh*, it also finds all the possible matches. However, if
I search for "Joh" it doesn't find any possible match; in other words, it
Hi *,
we are optimizing Lucene indexes automatically every night.
This reduces the number of index files, but the rest of the files are not
really deleted, only marked as 'deleted' (RedHat VM). Sooner or
later, the system crashes with the message "Too many open files".
This behavior occurs onl
a small improvement:
I just managed to get relative paths for the test jetty working, so now you just
have to:
cd testsolr
java -jar start.jar
open http://localhost:8983/solrjs/test/testServerside.html
regards,
matthias
Matthias Epheser wrote:
Hi,
I just made a commit to http://solrstuff.
On Thu, 10 Jul 2008 17:55:55 -0600
"Galen Pahlke" <[EMAIL PROTECTED]> wrote:
> Could this perhaps be because a date field has so many possible unique
> values? I don't know how to find out exactly, but I'd guess there are
> at least a few million unique dates in the index. Would increasing the
>