Schema.xml
Have you edited schema.xml since building a full index from scratch? If
so, try rebuilding the index.
People often get the behavior you describe if the 'id' is a 'text' field.
ryan
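For reference, a minimal sketch of the usual uniqueKey setup in schema.xml;
the field and type names here are the common defaults, not necessarily what
this particular schema uses:

  <fieldType name="string" class="solr.StrField" sortMissingLast="true" omitNorms="true"/>

  <field name="id" type="string" indexed="true" stored="true"/>

  <uniqueKey>id</uniqueKey>

If the uniqueKey field is a tokenized "text" type rather than "string", the
overwrite-by-key step can fail to match the existing document, which is how
duplicates creep in.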
I'm currently indexing all documents using the update XML. I have always used
the following when posting the documents to Solr:
I've never had the allowDups flag set to true... I'm assuming this is false by
default?
We did have Tomcat crash once (JVM OutOfMem) during an indexing process,
could this be related?
: Hey all, I have a fairly odd case of duplicate documents in our solr index
: (See attached xml sample). The index is roughly 35k documents. The only
How did you index those documents?
Any chance you inadvertently set the "allowDups=true" attribute when
sending them to Solr (possibly becu
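(For context, assuming the stock XML update syntax: allowDups does default to
false, so a plain add overwrites by uniqueKey. Written out explicitly, with
placeholder field values:

  <add allowDups="false">
    <doc>
      <field name="id">doc-1</field>
      <field name="title">Example title</field>
    </doc>
  </add>

Only allowDups="true" would let two documents with the same uniqueKey coexist.)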
For the MultiCore experts, is there an acceptable or approved way to
close and unregister a single SolrCore? I'm interested in stopping
cores, manipulating the solr directory tree, and reregistering them.
Thanks,
-John R.
Hi, I'm new to Solr but very familiar with Lucene.
Is there a way to have Solr search in more than one index, much like the
MultiSearcher in Lucene?
If so, how do I configure the location of the indexes?
I have the following custom field defined for author names. After indexing
the 2 documents below, the admin analysis tool looks right for field-name=au and
field-value=Schröder, Jürgen. The highlight matching also seems right.
However, if I search for au:Schröder, Jürgen using the admin tool
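(The message is cut off above, but one general query-parser point that often
bites here, and may or may not be the actual problem: without quotes, only the
first token is searched against au; the rest falls back to the default field.
A phrase query keeps both tokens on au:

  au:"Schröder, Jürgen"

whereas the unquoted form is parsed roughly as au:Schröder plus
defaultfield:Jürgen.)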
I haven't made any changes to the schema since the initial full index. Do you
know if there is a way to rebuild the full index in the background, without
having to take down the current live index?
Dan
ryantxu wrote:
>
>>
>> Schema.xml
>>
>
> Have you edited schema.xml since building a full index from scratch? If so,
> try rebuilding the index.
Hi Brian,
Found the SVN location, will download from there and give it a try.
Thanks for the help.
On 07/11/2007, Mike Davies <[EMAIL PROTECTED]> wrote:
>
> I'm using 1.2, downloaded from
>
> http://apache.rediris.es/lucene/solr/
>
> Where can i get the trunk version?
>
>
>
>
> On 07/11/2007,
On Nov 7, 2007, at 10:00 AM, Mike Davies wrote:
java -Djetty.port=8521 -jar start.jar
However when I run this it seems to ignore the command and still start on
the default port of 8983. Any suggestions?
Are you using trunk solr or 1.2? I believe 1.2 still shipped with an
older version
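If the Jetty bundled with 1.2 really does ignore -Djetty.port, one workaround
(an assumption about the bundled config, so check your own example/etc/jetty.xml)
is to wire the connector's port to that system property, Jetty 6 style, or
simply hardcode the port there:

  <Call name="addConnector">
    <Arg>
      <New class="org.mortbay.jetty.nio.SelectChannelConnector">
        <Set name="port"><SystemProperty name="jetty.port" default="8983"/></Set>
      </New>
    </Arg>
  </Call>

Presumably trunk's example config already reads the property this way, which
is why the flag behaves there.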
On 11/6/07, Kristen Roth <[EMAIL PROTECTED]> wrote:
> Yonik - thanks so much for your help! Just to clarify; where should the
> regex go for each field?
Each field should have a different FieldType (referenced by the "type"
XML attribute). Each fieldType can have its own analyzer. You can use
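A rough sketch of the shape this takes in schema.xml; the type name, field
name, regex, and the choice of PatternReplaceFilterFactory are illustrative
assumptions, not taken from the actual schema in this thread:

  <fieldType name="facetLevel1" class="solr.TextField">
    <analyzer>
      <tokenizer class="solr.KeywordTokenizerFactory"/>
      <!-- keep only the text before the first "::" delimiter -->
      <filter class="solr.PatternReplaceFilterFactory"
              pattern="::.*" replacement="" replace="first"/>
    </analyzer>
  </fieldType>

  <field name="Category_1" type="facetLevel1" indexed="true" stored="true"/>

One fieldType (and therefore one regex) per Category_# field, each referenced
through the field's type="..." attribute. Worth noting: analysis only changes
the indexed terms, which is what faceting sees; the stored value returned in
search results will still be the full copied string.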
Hi,
I'm trying to change the port number that the start.jar application runs
on. I have found examples on the web that use the -Djetty.port= option,
i.e.
java -Djetty.port=8521 -jar start.jar
However when I run this it seems to ignore the command and still start on
the default port of 8983.
On 11/7/07, Ryan McKinley <[EMAIL PROTECTED]> wrote:
> Yonik - do we want to keep this checking for 'null', or should we change
> QueryParser.parseSort( ) to always return a valid sortSpec?
In Lucene, a null sort is not equal to "score desc"... they result in
the same documents being returned, but
Does anyone know what could be the problem?
looks like it was a problem in the new query parser. I just fixed it in
trunk:
http://svn.apache.org/viewvc?view=rev&revision=592740
Yonik - do we want to keep this checking for 'null', or should we change
QueryParser.parseSort( ) to always return a valid sortSpec?
Yonik Seeley wrote:
On 11/7/07, Ryan McKinley <[EMAIL PROTECTED]> wrote:
Yonik - do we want to keep this checking for 'null', or should we change
QueryParser.parseSort( ) to always return a valid sortSpec?
In Lucene, a null sort is not equal to "score desc"... they result in
the same documents
I fixed this problem by returning this: "return super.getPrefixQuery(field,
termStr);"
in solr.search.SolrQueryParser and it worked for me.
-Kamran
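For anyone hitting the same thing, the override Kamran describes would look
roughly like this inside solr.search.SolrQueryParser (a sketch of just that
method, not the whole class):

  @Override
  protected Query getPrefixQuery(String field, String termStr) throws ParseException {
    // Fall straight back to Lucene QueryParser's stock prefix handling.
    return super.getPrefixQuery(field, termStr);
  }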
Mike Klaas wrote:
>
> On 7-Jun-07, at 5:27 PM, Frédéric Glorieux wrote:
>
>> Hoss,
>>
>> Thanks for all your information and pointers. I know that
Pardon how basic these questions are, but I'm just getting started
with SOLR and have a couple of confusions regarding sorting that I
couldn't resolve based on the docs or an archive search.
1. There appear to be (at least) two ways to specify sorting, one
involving an append to the q param and
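(The snippet is cut off above, but for anyone searching the archives later,
the two forms usually being compared are roughly as follows; the query term
and sort field here are made up:

  /select?q=ipod;price+desc          <-- sort spec appended to q after a semicolon
  /select?q=ipod&sort=price+desc     <-- separate sort parameter

Both should give the same ordering with the standard request handler.)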
So, I think I have things set up correctly in my schema, but it doesn't
appear that any logic is being applied to my Category_# fields - they
are being populated with the full string copied from the Category field
(facet1::facet2::facet3...facetn) instead of just facet1, facet2, etc.
I have severa
I'm using 1.2, downloaded from
http://apache.rediris.es/lucene/solr/
Where can i get the trunk version?
On 07/11/2007, Brian Whitman <[EMAIL PROTECTED]> wrote:
>
>
> On Nov 7, 2007, at 10:00 AM, Mike Davies wrote:
> > java -Djetty.port=8521 -jar start.jar
> >
> > However when I run this it seems to ignore the command and still start on
> > the default port of 8983.
On Nov 7, 2007, at 2:04 AM, James liu wrote:
I just reduced the information in the response... and you will see my result
(full, not partial)
*before unserialize*
string(433)
"a:2:{s:14:"responseHeader";a:3:{s:6:"status";i:0;s:5:"QTime";i:
0;s:6:"params";a:7:{s:2:"fl";s:5:"Title";s:6:"indent";s:2:"on";s:
Hi,
I'm sending a local csv file to Solr via remote streaming, and constantly
get the "500 read timeout" message. The csv file is about 200MB in size, and
Solr is running on Tomcat 5.5. What timeout-related Tomcat params can I
adjust to fix this?
Thanks in advance.
- Guangwei
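If the timeout really is on the Tomcat side (an assumption, since the thread
doesn't show the config), the first thing worth checking is connectionTimeout
on the HTTP Connector in conf/server.xml, which the stock config sets to
20000 ms and which a 200MB stream can easily exceed:

  <Connector port="8080"
             connectionTimeout="600000"
             redirectPort="8443"/>

connectionTimeout is in milliseconds, so 600000 here means 10 minutes.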
Thanks Erik. That helps.
-Original Message-
From: Erik Hatcher [mailto:[EMAIL PROTECTED]
Sent: Wednesday, November 07, 2007 11:36 AM
To: solr-user@lucene.apache.org
Subject: Re: Analysis / Query problem
On Nov 7, 2007, at 10:26 AM, Wagner,Harry wrote:
> I have the following custom field defined for author names.
On Nov 7, 2007, at 10:26 AM, Wagner,Harry wrote:
I have the following custom field defined for author names. After
indexing the 2 documents below the admin analysis tool looks right
for field-name=au and field-value=Schröder, Jürgen. The highlight
matching also seems right. However, if
On Nov 7, 2007, at 10:07 AM, Mike Davies wrote:
I'm using 1.2, downloaded from
http://apache.rediris.es/lucene/solr/
Where can i get the trunk version?
svn, or http://people.apache.org/builds/lucene/solr/nightly/
I need to perform a search against a limited set of documents. I have the
set of document ids, but was wondering what is the best way to formulate the
query to SOLR?
On 7-Nov-07, at 2:27 PM, briand wrote:
I need to perform a search against a limited set of documents. I
have the
set of document ids, but was wondering what is the best way to
formulate the
query to SOLR?
add fq=docId:(id1 id2 id3 id4 id5...)
cheers,
-Mike
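Spelled out as a full request (field name and ids are placeholders, not real
values from the index):

  /select?q=your+query&fq=docId:(101+102+103+104+105)

The fq result is kept in Solr's filterCache, so reusing the same id set across
several searches is cheap. Very large id lists can run into the
maxBooleanClauses limit in solrconfig.xml, though.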
Jeryl Cook /^\ Pharaoh /^\ http://pharaohofkush.blogspot.com/ "..Act your
age, and not your shoe size.." -Prince(1986)
> From: [EMAIL PROTECTED]
> Subject: Re: start.jar -Djetty.port= not working
> Date: Wed, 7 Nov 2007 10:13:22 -0500
> To: solr-user@lucene.apache.org
>
> On Nov 7, 2007,
If you really, really need to preserve the XML structure, you'll
be doing a LOT of work to make Solr do that. It might be cheaper
to start with software that already does that. I recommend
MarkLogic -- I know the principals there, and it is some seriously
fine software. Not free or open, but very,
I am sure this is a 101 question, but I am a bit confused about indexing XML
data using SOLR.
I have rich XML content (books) that needs to be searched at granular levels
(specifically paragraph and sentence levels very accurately, no
approximations). My source text has exact paragraph and sentence tags for this
purpose.
Hmm,
I found the error... it is my error, not about php and phps.
I used an old config to test, so the config had a problem:
for Title I used double as its type... it should use text.
On Nov 8, 2007 10:29 AM, James liu <[EMAIL PROTECTED]> wrote:
> php now is ok..
>
> but phps failed
>
> my
php now is ok...
but phps failed.
My code:
> $url = 'http://localhost:8080/solr1/select/?q=2&version=2.2&fl=Title&start=0&rows=10&indent=on&wt=phps';
> $a = file_get_contents($url);
> //eval('$solrResults = ' .$serializedSolrResults . ';');
> echo 'before unserialize';
> var_dump($a);
> $solrResults = unserialize($a);
On Wed, 7 Nov 2007 20:18:25 -0800 (PST)
David Neubert <[EMAIL PROTECTED]> wrote:
> I am sure this is a 101 question, but I am a bit confused about indexing XML
> data using SOLR.
>
> I have rich XML content (books) that needs to be searched at granular levels
> (specifically paragraph and sentence levels very accurately, no approximations).
I'm not sure I fully understand your ultimate goal or Yonik's
response. However, in the past I've been able to represent
hierarchical data as a simple enumeration of delimited paths:
root
root/region
root/region/north america
root/region/south america
Then, at response time, you can walk th
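A small sketch of how that can look in practice; the field name, ids, and
values are illustrative, and the path field is assumed to be a multivalued
string field in the schema:

  <add>
    <doc>
      <field name="id">doc-1</field>
      <field name="path">root</field>
      <field name="path">root/region</field>
      <field name="path">root/region/north america</field>
    </doc>
  </add>

  /select?q=*:*&facet=true&facet.field=path

The facet counts come back keyed by the enumerated paths, and the client can
split on "/" to rebuild the tree at whatever depth it needs.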
> > Does anyone know what could be the problem?
> >
>
> looks like it was a problem in the new query parser. I just fixed it
> in trunk:
> http://svn.apache.org/viewvc?view=rev&revision=592740
Thanks, it works now.
Cheers,
Michael
--
Michael Thessel <[EMAIL PROTECTED]>
Gossamer Threads Inc. h
Nothing yet... but check:
https://issues.apache.org/jira/browse/SOLR-350
ryan
John Reuning wrote:
For the MultiCore experts, is there an acceptable or approved way to
close and unregister a single SolrCore? I'm interested in stopping
cores, manipulating the solr directory tree, and reregistering them.
Thanks to everybody who gave me help.
Especially Dave, thank you.
On Nov 8, 2007 11:21 AM, James liu <[EMAIL PROTECTED]> wrote:
> Hmm,
>
> I found the error... it is my error, not about php and phps.
>
> I used an old config to test, so the config had a problem:
>
> for Title I used double as its type... it
Thanks Walter --
I am aware of MarkLogic -- and agree -- but I have a very low budget on
licensed software in this case (near 0) --
have you used eXist or Xindice?
Dave
- Original Message
From: Walter Underwood <[EMAIL PROTECTED]>
To: solr-user@lucene.apache.org
Sent: Wednesday,
I was hoping that a feature was lurking about and not yet added to the
patch. How about something like this? Should it throw an exception if
the core isn't found in the map?
Thanks,
-jrr
--- MultiCore.java.orig 2007-11-07 23:09:32.0 -0500
+++ MultiCore.java 2007-11-07 23:14:08
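(The patch body is cut off above, so purely as a sketch of the idea being
discussed; the method and field names are assumed, not taken from SOLR-350:

  // Hypothetical core-removal method on the multicore registry.
  public void remove(String name) {
    SolrCore core;
    synchronized (cores) {        // assumes a Map<String, SolrCore> field named 'cores'
      core = cores.remove(name);
    }
    if (core == null) {
      // The open question above: return quietly, or throw because no such core is registered?
      return;
    }
    core.close();                 // shut the core down once it is out of the registry
  }

Either behaviour for the missing-core case seems defensible; throwing at least
makes a typo in the core name visible.)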
On Nov 7, 2007, at 12:10 PM, Chris Hostetter wrote:
: Hey all, I have a fairly odd case of duplicate documents in our
solr index
: (See attached xml sample). The index is roughly 35k
documents. The only
How did you index those documents?
Any chance you inadvertently set the "allowDups=