Hi,
I realize this might be too simple - can someone tell me where to look? I'm
new to Solr and have to fix this for a demo ASAP.
If my search query is "the", all 91 objects are returned as search results.
I expect 0 results.
--
Sanjay Suri
Videocrux Inc.
http://videocrux.com
+91 99102 66626
Set up an extra filter before SolrDispatchFilter to do authentication.
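Something along these lines, mapped in web.xml ahead of SolrDispatchFilter (an untested sketch; the credential check below is only a placeholder):

import java.io.IOException;
import javax.servlet.*;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

// Sketch only: a Basic-auth gate in front of SolrDispatchFilter.
// checkCredentials() is a placeholder for whatever user store you have.
public class CoreAuthFilter implements Filter {
  public void init(FilterConfig config) {}
  public void destroy() {}

  public void doFilter(ServletRequest req, ServletResponse res, FilterChain chain)
      throws IOException, ServletException {
    HttpServletRequest request = (HttpServletRequest) req;
    HttpServletResponse response = (HttpServletResponse) res;
    String auth = request.getHeader("Authorization");
    if (auth == null || !checkCredentials(request.getRequestURI(), auth)) {
      response.setHeader("WWW-Authenticate", "Basic realm=\"solr\"");
      response.sendError(HttpServletResponse.SC_UNAUTHORIZED);
      return;
    }
    chain.doFilter(req, res);
  }

  private boolean checkCredentials(String coreUri, String basicAuthHeader) {
    // decode the Basic auth header and validate it per core (hypothetical)
    return false;
  }
}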
On Thu, Nov 20, 2008 at 12:28 PM, RaghavPrabhu <[EMAIL PROTECTED]> wrote:
>
> Hi all,
>
> I'm using multiple cores, and all I need to do is make each core
> secure. If I am accessing a particular core via URL,
Hi Solr Community,
I was wondering if it is possible to access and modify the charTypeTable
of the WordDelimiterFilter.
The use case is that I do not want to split on a '*' char, which the
filter currently does. If I could modify the charTypeTable I could
change the behaviour of the filter. Or a
Hi all!
For a school project in the master's program in LIS at Oslo University
College, I am trying to index MARC records to make a faceted browser
of digital books.
So far, I've transformed the MARC records into Solr-friendly records. I
am now trying to update my index (for the first time)
Hey there,
I found a couple of solutions that work fine for my case (not exactly what
I was looking for at the beginning, but I could adapt them).
First one:
Always use quantum=1 and minTokenLen=2.
Instead of ordering the tokens by frequency, I order them alphabetically; doing
this I am a little more p
Hi again!
I've figured it out. Hadn't reloaded Solr after updating the schema. Doh.
Regards,
Elise
Quoting Elise Dawn Conradi <[EMAIL PROTECTED]>:
Hi all!
For a school project in the master's program in LIS at Oslo
University College, I am trying to index MARC records to make a
faceted
Thanks for sharing, Marc, that's very nice to know. I'll take your
experience as a starting point for some wiki recommendations.
Sounds like we should add a switch to order alphabetically as well.
Marc Sturlese wrote:
Hey there,
I found a couple of solutions that work fine for my case (not exactly what
Hey there,
I have started working with an index divided into 3 shards. When I did a
distributed search, I got an error with the fields that were not string or
text. I read that the error was caused by BinaryResponseWriter and empty
fields that were not string/text.
I found the solution in an old thread of this f
Hi
We have a problem with slow deleteById, where one delete can take up to 30
minutes and the thread which initiated the deleteById is waiting for the
method to return. The problem is not that the delete takes so much time; the
problem is that the application that initiates the deletes is halted that l
I'd suggest aggregating those three columns into a string that can
serve as the Solr uniqueKey field value.
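For example, with SolrJ (the column names below are hypothetical):

import org.apache.solr.common.SolrInputDocument;

// Sketch: concatenate the three view columns into one uniqueKey value.
public class CompositeKeyExample {
  static SolrInputDocument toDoc(String colA, String colB, String colC) {
    SolrInputDocument doc = new SolrInputDocument();
    doc.addField("id", colA + "-" + colB + "-" + colC);
    return doc;
  }
}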
Erik
On Nov 20, 2008, at 1:10 AM, Raghunandan Rao wrote:
Basically, I am working on two views. The first one has an ID column; the
second view has no unique ID column. What to do
1.3.0 final release.
Erik
On Nov 20, 2008, at 2:03 AM, Shalin Shekhar Mangar wrote:
Eric, which Solr version is that stack trace from?
On Thu, Nov 20, 2008 at 7:57 AM, Erik Hatcher <[EMAIL PROTECTED]
>wrote:
In analyzing a client's Solr logs from Tomcat, I came across the
except
On Nov 20, 2008, at 3:31 AM, Sanjay Suri wrote:
Hi,
I realize this might be too simple - can someone tell me where to
look? I'm
new to Solr and have to fix this for a demo ASAP.
If my search query is "the", all 91 objects are returned as search
results.
I expect 0 results.
Add &debugQu
On Nov 20, 2008, at 6:22 AM, Elise Dawn Conradi wrote:
For a school project at the master's program in LIS at Oslo
University College, I am trying to index Marc-records to make a
faceted browser of digital books.
Are you aware of the library projects that leverage Solr? Blacklight,
VuFin
Hi Fergus,
Were you overwriting the existing index, or did you clean out the
Solr data directory, too? In other words, was it a fresh index, or an
existing one? And was that also the case for the 22 minute time?
Would it be possible to profile the two instances and see if you notice
We are about to release field collapsing on our production site, but the
index size is not as big as yours.
Collapsing is definitely an added overhead. You can do some load testing and
benchmarking on a dataset like the one you expect in your production project, as
SOLR-236 is currently available only
gurudev wrote:
One thing that you can go with is using "adjacent" field collapsing rather
than simple collapsing, as internally Solr would first sort on the collapse
field to use simple collapsing, which is not the case with "adjacent"
collapsing.
This is something that I think could be improved
It's pretty nuts, because the null check protecting against that appears to
have been in well before 1.3. How the heck does a null get past a null check?
Erik Hatcher wrote:
1.3.0 final release.
Erik
On Nov 20, 2008, at 2:03 AM, Shalin Shekhar Mangar wrote:
Eric, which Solr version is that s
Hello Grant,
>Were you overwriting the existing index, or did you clean out the
>Solr data directory, too? In other words, was it a fresh index, or an
>existing one? And was that also the case for the 22 minute time?
No, in each case it was a new index. I store the indexes (the "data" d
Hi,
I'm trying to index some content that has things like 'java/J2EE' but with
solr.WordDelimiterFilterFactory and parameters [generateWordParts="1"
generateNumberParts="0" catenateWords="0" catenateNumbers="0"
catenateAll="0" splitOnCaseChange="0"] this ends up tokenized as
'java', 'j', '2', 'EE'
Do
Just use the query analysis link with appropriate values. It will show how
each filter factory and analyzer breaks the terms at the various analysis
stages. Especially check the EnglishPorterFilterFactory analysis
Jeff Newburn wrote:
>
> I am trying to figure out how the synonym filter processe
Mark Miller wrote:
Thanks for sharing, Marc, that's very nice to know. I'll take your
experience as a starting point for some wiki recommendations.
Sounds like we should add a switch to order alphabetically as well.
On the general note of near-duplicate detection ... I found this paper
in the proceedin
if only I could magic all these damn PDFs I have into some code :)
+1
I want some of that magic too!
Could you provide your schema and the exact query that you issued?
Things to consider... If you just searched for "the", it used the
default search field, which is declared in your schema. The filters
associated with that default field are what determine whether or not the
stopword list is invoked
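If it helps, a quick way to see what the query actually parsed to is debugQuery; in SolrJ that looks roughly like this (a sketch; the server class name is the 1.3-era SolrJ one):

import org.apache.solr.client.solrj.SolrQuery;
import org.apache.solr.client.solrj.impl.CommonsHttpSolrServer;
import org.apache.solr.client.solrj.response.QueryResponse;

// Sketch: run the query with debugging on and print the parsed query,
// which shows the field and analysis the query actually hit.
public class DebugTheQuery {
  public static void main(String[] args) throws Exception {
    CommonsHttpSolrServer server = new CommonsHttpSolrServer("http://localhost:8983/solr");
    SolrQuery q = new SolrQuery("the");
    q.set("debugQuery", "true");   // same as adding &debugQuery=true to the URL
    QueryResponse rsp = server.query(q);
    System.out.println(rsp.getDebugMap().get("parsedquery"));
  }
}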
I've found that creating a custom filter and filter factory isn't too
burdensome when the filter doesn't "quite" do what I need. You could
grab the source and create your own version.
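Roughly this shape (an untested skeleton; the exact base classes and factory hooks differ between Lucene/Solr versions):

import java.io.IOException;
import org.apache.lucene.analysis.TokenFilter;
import org.apache.lucene.analysis.TokenStream;
import org.apache.lucene.analysis.tokenattributes.CharTermAttribute;

// Skeleton of a custom filter; put the per-token behaviour you need here.
public final class MyCustomFilter extends TokenFilter {
  private final CharTermAttribute termAtt = addAttribute(CharTermAttribute.class);

  protected MyCustomFilter(TokenStream input) {
    super(input);
  }

  @Override
  public boolean incrementToken() throws IOException {
    if (!input.incrementToken()) {
      return false;
    }
    // e.g. inspect or rewrite termAtt here instead of relying on WordDelimiterFilter
    return true;
  }
}

// The matching factory referenced from schema.xml; the base class and constructor
// signature vary by version (older Solr used BaseTokenFilterFactory with init()).
class MyCustomFilterFactory extends org.apache.lucene.analysis.util.TokenFilterFactory {
  public MyCustomFilterFactory(java.util.Map<String, String> args) {
    super(args);
  }

  @Override
  public TokenStream create(TokenStream input) {
    return new MyCustomFilter(input);
  }
}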
-Todd Feak
-Original Message-
From: Jerven Bolleman [mailto:[EMAIL PROTECTED]
Sent: Thursday, November 2
On Thu, 2008-11-20 at 07:30 -0800, Feak, Todd wrote:
> I've found that creating a custom filter and filter factory isn't too
> burdensome when the filter doesn't "quite" do what I need. You could
> grab the source and create your own version.
>
I will have to do so anyway. As a test I used reflect
I am trying to use the API for the Solr cores. Reload works great, but when
I try to UNLOAD I get a massive IOException. It seems to
unload the core but doesn't remove it from the configuration file. The
solr.xml file is fully readable and writable, but it still errors. Any ideas?
Solr.xml
s
: 1.3.0 final release.
that stack trace doesn't jibe with 1.3.0 ...
: > > java.lang.NullPointerException
: > > at
: > > org.apache
: > > .solr.servlet.SolrDispatchFilter.destroy(SolrDispatchFilter.java:123)
: > > at
SolrDispatchFilter.java:123 in 1.3 (and 1.2, and trunk) is in the d
Thanks for the help Ryan!
Using the start.jar with 1.3 and added the slf4j jar to the classpath. When
it comes to setting up
log4j, I wonder which method is better: to put the redirect to the log
server in the jetty.xml file,
or to put a log4j.properties file in the web library, and if it'
OK, just FYI: Solr replaces the file instead of editing it. This means that the
web server needs permissions in the directory to delete and create the
solr.xml file. Once I fixed that, it no longer gave IOException errors.
On 11/20/08 8:29 AM, "Jeff Newburn" <[EMAIL PROTECTED]> wrote:
> I am trying t
unsubscribe
> Date: Thu, 20 Nov 2008 08:29:20 -0800
> Subject: Solr Core Admin
> From: [EMAIL PROTECTED]
> To: solr-user@lucene.apache.org
>
> I am trying to use the API for the Solr cores. Reload works great, but when
> I try to UNLOAD I get a massive IOException. It seems to
> un
On Nov 20, 2008, at 11:57 AM, Erik Holstad wrote:
Thanks for the help Ryan!
Using the start.jar with 1.3 and added the slf4j jar to the
classpath. When
with 1.3 -- the logging is java.util.logging --
The slf4j advice only applies to 1.4-dev
ryan
Ok, thanks Ryan!
On Thu, Nov 20, 2008 at 9:03 AM, Ryan McKinley <[EMAIL PROTECTED]> wrote:
>
> On Nov 20, 2008, at 11:57 AM, Erik Holstad wrote:
>
> Thanks for the help Ryan!
>> Using the start.jar with 1.3 and added the slf4j jar to the classpath.
>> When
>>
>
> with 1.3 -- the logging is java.
I use the 'dismax handler' for my phrase matching, and I have 'mm' set
this way:
up to 3 words, match all;
up to 4, match 3;
and so on.
It's been working fine, but for certain phrases like 'san diego drunk driving
defense attorney', it brings up DUI attorneys for other cities first
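For reference, the relevant dismax parameters look roughly like this in SolrJ (the field names and the mm expression below are placeholders, not the poster's real config):

import org.apache.solr.client.solrj.SolrQuery;

// Sketch: dismax with a phrase-field boost so documents containing the whole
// phrase outrank documents that merely contain the scattered words.
public class DismaxExample {
  static SolrQuery build(String userInput) {
    SolrQuery q = new SolrQuery(userInput);
    q.set("qt", "dismax");        // or defType=dismax, depending on solrconfig.xml
    q.set("qf", "body^1 title^2");
    q.set("pf", "body^10");       // whole-phrase matches score higher
    q.set("ps", "2");             // small phrase slop
    q.set("mm", "3<-1 6<-2");     // e.g. >3 clauses: one may miss; >6: two may miss
    return q;
  }
}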
Greetings all,
I'm having trouble tracking down why a particular query is not
working. A user is trying to do a search for
alternate_form_title_text:"three films by louis malle" specifically to
find the 4 records that contain the phrase "Three films by Louis Malle"
in their alternate_form_
Thanks. I understand what Amazon is doing. The original question is how to
achieve this with Solr. And to be more specific, how to achieve this within
Solr and not involve multiple search queries to Solr.
Nguyen, Joe-2 wrote:
>
>
> Seemed like its first search required match all terms.
>
I'm also hitting some threading issues with autocommit -- JConsole
does not show deadlock, but it shows some threads 'BLOCKED' on
scheduleCommitWithin.
Perhaps this has something to do with the changes we made for SOLR-793.
I am able to fix this (at least I don't see the blocking with the dat
Hi,
I want to fetch only the documents which have a certain
field.
For this I am using an fq query like this:
fq=rev.comments:[* TO *]
The rev.comments field is of type string.
The functionality works correctly, but I am seeing a performance
degradation.
Without the above fq, the QTim
On 20-Nov-08, at 12:23 PM, Manepalli, Kalyan wrote:
Hi,
I want to fetch only the documents which have a certain
field.
For this I am using an fq query like this:
fq=rev.comments:[* TO *]
The rev.comments field is of type string.
The functionality works correctly, but I am seeing a p
I actually am looking for the same answer.
I have worked around it by indexing 'empty' fields with a dummy value,
but this isn't an ideal situation.
Thijs
On 11/19/08 10:38 PM, Geoffrey Young wrote:
Lance Norskog wrote:
Try: Type:blue OR -Type:[* TO *]
You can't have a negative clause at
Hi,
I have set the default operator to AND (solrQueryParser defaultOperator="AND" in schema.xml),
but it is not taking effect. It continues to treat it as OR. I am working
with the latest nightly build 11/20/2008
For a query like
term1 term2
Debug shows
content:term1 content:term2
Bug?
Thanks
- ashok
Hi Mike,
Thanks for the suggestion, I will test it out and post the
results
Thanks,
Kalyan Manepalli
-Original Message-
From: Mike Klaas [mailto:[EMAIL PROTECTED]
Sent: Thursday, November 20, 2008 2:38 PM
To: solr-user@lucene.apache.org
Subject: Re: Filtering on blank fields
On
Hoss,
There were a few comments about schema files on MarkMail between you
and Grant a couple of months ago; there was no big demand for an XSD for the
schema.xml file. Before I drop this, would you consider taking a look at the
XSD file below for schema.xml and perhaps submitting the XSD file to the
SVN system? I c
On 20-Nov-08, at 6:20 AM, Daniel Rosher wrote:
Hi,
I'm trying to index some content that has things like 'java/J2EE'
but with
solr.WordDelimiterFilterFactory and parameters [generateWordParts="1"
generateNumberParts="0" catenateWords="0" catenateNumbers="0"
catenateAll="0" splitOnCaseChange
Is it possible to boost a document by the contents of a field? Given the
query:
text field:value
I want to return all documents matching 'text', with documents where 'field = value'
boosted over documents where 'field = some other value'.
This query does it:
(text field:value)^100 (text -fiel
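For comparison, a dismax boost query (bq) achieves the same effect without duplicating the clauses; a rough SolrJ sketch (field names are placeholders):

import org.apache.solr.client.solrj.SolrQuery;

// Sketch: keep the match set as just 'text', and apply the boost separately via bq.
public class BoostByFieldValue {
  static SolrQuery build() {
    SolrQuery q = new SolrQuery("text");
    q.set("qt", "dismax");            // or defType=dismax, depending on solrconfig.xml
    q.set("qf", "body");
    q.set("bq", "field:value^100");   // boosts matching docs without changing recall
    return q;
  }
}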
On 20-Nov-08, at 11:40 AM, Caligula wrote:
Thanks. I understand what Amazon is doing. The original question
is how to
achieve this with Solr. And to be more specific, how to achieve
this within
Solr and not involve multiple search queries to Solr.
There isn't a way. The best way to
I could not manage to use it yet. :confused:
My doubts are:
- must I download Solr from svn trunk?
- then, must I apply the SolrJS and Velocity patches and unzip the files,
or is this already in trunk?
Because trunk contains velocity and javascript in contrib,
but does not find the ve
On Wed, 19 Nov 2008 22:58:52 -0800 (PST)
RaghavPrabhu <[EMAIL PROTECTED]> wrote:
> I'm using multiple cores, and all I need to do is make each core
> secure. If I am accessing a particular core via URL, it should ask for
> and validate credentials, say a username & password, for each
The problem with a zero-length string "" is that it is also returned by:
field:[* TO *]. So you don't know if you're doing this right or not. For
those of us who cannot reindex at the drop of a hat, this is a big deal. We
went with -1.
Lance
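A rough SolrJ sketch of that sentinel approach (the field name and values are just illustrations):

import org.apache.solr.client.solrj.SolrQuery;
import org.apache.solr.common.SolrInputDocument;

// Sketch: index a sentinel (-1) when the value is missing, then filter on a real range,
// so you never need to distinguish "" from a missing field.
public class SentinelExample {
  static SolrInputDocument toDoc(String id, Integer price) {
    SolrInputDocument doc = new SolrInputDocument();
    doc.addField("id", id);
    doc.addField("price", price != null ? price : -1);  // never leave the field empty
    return doc;
  }

  static SolrQuery docsWithARealValue() {
    SolrQuery q = new SolrQuery("*:*");
    q.addFilterQuery("price:[0 TO *]");  // excludes the -1 sentinel
    return q;
  }
}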
-Original Message-
From: Manepalli, Kalyan [ma
Thanks Erik.
If I convert that to a string, then the id field defined in schema.xml would
fail, as I have that as an integer. If I change that to a string, then the first
view would fail, as it is an integer there. What to do in such scenarios? Do
I need to define multiple schema.xml files or multiple unique key definition