Hi,
When I give a query like the following, why does it become a phrase query
as shown below?
The field type is the default text field in the schema.
volker-blanz
PhraseQuery(content:"volker blanz")
Also, when I have special characters in the query, such as SCHÖLKOPF, I am not
able to convert the "o"
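For illustration, here is a minimal sketch (recent Lucene API; StandardAnalyzer stands in for whatever analysis chain the field actually uses) of why this happens: the hyphenated term is split into two tokens, so the query parser builds a PhraseQuery from them, and an ASCIIFoldingFilter is one way to fold Ö down to plain ASCII. This is an assumption-laden example, not the exact schema in question.

import org.apache.lucene.analysis.Analyzer;
import org.apache.lucene.analysis.TokenStream;
import org.apache.lucene.analysis.miscellaneous.ASCIIFoldingFilter;
import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.analysis.tokenattributes.CharTermAttribute;

public class AnalysisSketch {
    public static void main(String[] args) throws Exception {
        Analyzer analyzer = new StandardAnalyzer();

        // "volker-blanz" is split on the hyphen, so the query parser sees two
        // tokens for a single query term and turns them into a phrase query.
        // ASCIIFoldingFilter folds accented characters, e.g. SCHÖLKOPF -> scholkopf.
        try (TokenStream ts = analyzer.tokenStream("content", "volker-blanz SCHÖLKOPF")) {
            TokenStream folded = new ASCIIFoldingFilter(ts);
            CharTermAttribute term = folded.addAttribute(CharTermAttribute.class);
            folded.reset();
            while (folded.incrementToken()) {
                System.out.println(term.toString()); // volker, blanz, scholkopf
            }
            folded.end();
        }
    }
}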
Hi All,
I would like to provide an admin interface (in a different system) that
would update the synonyms.txt file and automatically inform a set of Solr
instances that are being replicated to update their synonyms.txt file too.
This discussion shows a possible solution:
http://www.nabble.com/Ref
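Not an official recipe, but a minimal sketch of the second half of that idea, assuming the admin tool has already copied the new synonyms.txt into each core's conf directory (or pushed it by whatever means replication uses): hit the CoreAdmin RELOAD command on every instance so the file is re-read. The hosts and core name below are placeholders.

import java.net.HttpURLConnection;
import java.net.URL;

public class ReloadCores {
    public static void main(String[] args) throws Exception {
        // Hypothetical list of Solr instances that just received the new synonyms.txt.
        String[] hosts = { "http://solr1:8983", "http://solr2:8983" };
        String core = "collection1"; // placeholder core name

        for (String host : hosts) {
            // CoreAdmin RELOAD makes the core re-read its config files, including synonyms.txt.
            URL url = new URL(host + "/solr/admin/cores?action=RELOAD&core=" + core);
            HttpURLConnection conn = (HttpURLConnection) url.openConnection();
            System.out.println(host + " -> HTTP " + conn.getResponseCode());
            conn.disconnect();
        }
    }
}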
Hi Mike, I don't see a patch file here?
Could another explanation be that the fdx file doesn't exist yet / has been
deleted from underneath Lucene?
I'm constantly CREATE-ing and UNLOAD-ing Solr cores, and more importantly,
moving the bundled cores around between machines. I find it much more likel
Hi,
I'd like to include a data version in my index, and it looks like
dataimport.properties would be a nice place for it. Is there a way to add a
custom name-value pair to that file?
Thanks,
Wojtek
--
View this message in context:
http://www.nabble.com/Custom-Values-in-dataimport.properties-
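For what it's worth, dataimport.properties is an ordinary Java properties file, so a sketch like the one below can read and write a custom key from outside Solr. Whether DataImportHandler preserves an extra key across its own updates is not something this thread confirms, and the "data.version" key name is made up for illustration.

import java.io.FileInputStream;
import java.io.FileOutputStream;
import java.util.Properties;

public class DataVersionStamp {
    public static void main(String[] args) throws Exception {
        String path = "conf/dataimport.properties"; // adjust to your core's conf dir
        Properties props = new Properties();
        try (FileInputStream in = new FileInputStream(path)) {
            props.load(in);
        }
        // "data.version" is a hypothetical key used only as an illustration.
        props.setProperty("data.version", "2009-05-29-001");
        try (FileOutputStream out = new FileOutputStream(path)) {
            // note: Properties.store() rewrites the file without its original comments/ordering
            props.store(out, "updated by import tooling");
        }
        System.out.println("data.version = " + props.getProperty("data.version"));
    }
}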
I know, but the FieldCache is not in the solrconfig.xml
-Original Message-
From: Yonik Seeley [mailto:ysee...@gmail.com]
Sent: Friday, May 29, 2009 10:47 AM
To: solr-user@lucene.apache.org
Subject: Re: Java OutOfmemory error during autowarming
On Fri, May 29, 2009 at 1:44 PM, Francis Ya
On Fri, May 29, 2009 at 1:44 PM, Francis Yakin wrote:
>
> There are no "FieldCache" entries in solrconfig.xml (BTW we are running
> version 1.2.0)
Lucene FieldCache entries are created when you sort on a field or when
you use a field in a function query.
-Yonik
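To make that concrete, here is a small sketch of the kind of request that triggers the allocation Yonik describes: a search sorted on a field. It uses the current Lucene API (doc values) rather than the 1.2-era FieldCache, but the triggering operation, sorting on a field, is the same.

import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.document.Document;
import org.apache.lucene.document.Field;
import org.apache.lucene.document.NumericDocValuesField;
import org.apache.lucene.document.StringField;
import org.apache.lucene.index.DirectoryReader;
import org.apache.lucene.index.IndexWriter;
import org.apache.lucene.index.IndexWriterConfig;
import org.apache.lucene.search.IndexSearcher;
import org.apache.lucene.search.MatchAllDocsQuery;
import org.apache.lucene.search.Sort;
import org.apache.lucene.search.SortField;
import org.apache.lucene.search.TopDocs;
import org.apache.lucene.store.ByteBuffersDirectory;
import org.apache.lucene.store.Directory;

public class SortedSearchSketch {
    public static void main(String[] args) throws Exception {
        Directory dir = new ByteBuffersDirectory();
        try (IndexWriter w = new IndexWriter(dir, new IndexWriterConfig(new StandardAnalyzer()))) {
            for (long price : new long[] { 30, 10, 20 }) {
                Document doc = new Document();
                doc.add(new StringField("id", "doc-" + price, Field.Store.YES));
                // Per-document sort value; in Lucene 2.x / Solr 1.2 this role was
                // filled by the FieldCache, built lazily on the first sorted search.
                doc.add(new NumericDocValuesField("price", price));
                w.addDocument(doc);
            }
        }
        try (DirectoryReader reader = DirectoryReader.open(dir)) {
            IndexSearcher searcher = new IndexSearcher(reader);
            // Sorting on a field (or using it in a function query) is exactly what
            // allocates the big per-field structures discussed in this thread.
            TopDocs top = searcher.search(new MatchAllDocsQuery(), 10,
                    new Sort(new SortField("price", SortField.Type.LONG)));
            System.out.println("hits, sorted by price: " + top.totalHits);
        }
    }
}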
There are no "FieldCache" entries in solrconfig.xml (BTW we are running version
1.2.0)
-Original Message-
From: Yonik Seeley [mailto:ysee...@gmail.com]
Sent: Friday, May 29, 2009 9:12 AM
To: solr-user@lucene.apache.org
Subject: Re: Java OutOfmemory error during autowarming
It's probabl
What are you really trying to accomplish here? Because index time boosting is
a way of saying "I care about matches in this field of this document
X times more than other documents" whereas search time boosting
expresses "elevate the relevance of any document where this term matches"
From your ex
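A tiny sketch of the search-time side of that distinction, using the current Lucene query API (field names are placeholders); index-time boosts were set on documents/fields at indexing time in that era and are not shown here.

import org.apache.lucene.index.Term;
import org.apache.lucene.search.BooleanClause;
import org.apache.lucene.search.BooleanQuery;
import org.apache.lucene.search.BoostQuery;
import org.apache.lucene.search.Query;
import org.apache.lucene.search.TermQuery;

public class BoostSketch {
    public static void main(String[] args) {
        // Search-time boost: "elevate the relevance of any document where this term matches".
        Query title = new BoostQuery(new TermQuery(new Term("title", "solr")), 4.0f);
        Query body  = new TermQuery(new Term("body", "solr"));

        Query q = new BooleanQuery.Builder()
                .add(title, BooleanClause.Occur.SHOULD)
                .add(body, BooleanClause.Occur.SHOULD)
                .build();

        // Roughly the same idea as a dismax qf of "title^4 body".
        System.out.println(q); // (title:solr)^4.0 body:solr
    }
}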
I have been able to create my custom field. The problem is that I have loaded
into the Solr core a couple of HashMaps from a DB
with values that will influence the sort. My problem is that I don't know
how to let my custom sort have access to these HashMaps.
I am a bit confused now. I think that w
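Leaving the Solr plugin wiring aside, the access pattern being asked about is essentially a comparator that consults an externally loaded map. A plain-Java sketch of just that pattern follows (all names are placeholders; this is not a FieldComparatorSource implementation).

import java.util.ArrayList;
import java.util.Comparator;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class ExternalMapSortSketch {
    public static void main(String[] args) {
        // Stand-in for the HashMap loaded from the database at core startup.
        Map<String, Double> weightByKey = new HashMap<>();
        weightByKey.put("doc-a", 0.9);
        weightByKey.put("doc-b", 0.1);
        weightByKey.put("doc-c", 0.5);

        List<String> docKeys = new ArrayList<>(List.of("doc-a", "doc-b", "doc-c"));

        // The sort consults the external map; missing keys fall back to 0.0.
        Comparator<String> byWeight =
                Comparator.comparingDouble(k -> weightByKey.getOrDefault(k, 0.0));
        docKeys.sort(byWeight.reversed());

        System.out.println(docKeys); // [doc-a, doc-c, doc-b]
    }
}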
It's probably not the size of the query cache, but the size of the
FieldCache entries that are used for sorting and function queries
(that's the only thing that should be allocating huge arrays like
that).
What fields do you sort on or use function queries on? There may be a
way to decrease the m
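A back-of-the-envelope sketch of why those entries get huge: a numeric FieldCache entry is roughly one slot per document per field, so the cost scales with maxDoc. The numbers below are purely illustrative.

public class FieldCacheEstimate {
    public static void main(String[] args) {
        long maxDoc = 50_000_000L;  // illustrative index size
        int sortFields = 3;         // illustrative number of sorted/function-query fields

        // An int/float FieldCache entry is ~4 bytes per document per field;
        // String sort fields cost considerably more (unique terms plus an ord array).
        long bytesPerNumericEntry = maxDoc * 4L;
        long total = bytesPerNumericEntry * sortFields;

        System.out.printf("~%d MB just for %d numeric sort fields%n",
                total / (1024 * 1024), sortFields);
    }
}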
Ah... yes, you removed defType=dismax, which means you need to provide
a query. Use q=*:* if you want to find all records (and display all
facets).
You can also remove q.alt from the parameters, since that is only used
with the dismax parser. And likewise the qf parameter.
Erik
On May 28, 2009, at 9:46 PM, Eric Pugh wrote:
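For reference, a SolrJ sketch of the q=*:* request described above, just printing the parameters it would send; the facet field name is a placeholder.

import org.apache.solr.client.solrj.SolrQuery;

public class MatchAllWithFacets {
    public static void main(String[] args) {
        // q=*:* matches every document, so all facet counts are shown.
        SolrQuery q = new SolrQuery("*:*");
        q.setFacet(true);
        q.addFacetField("cat"); // placeholder facet field
        q.setRows(0);           // only interested in the facet counts here

        // Prints the parameters that would be sent,
        // e.g. q=*%3A*&facet=true&facet.field=cat&rows=0
        System.out.println(q);
    }
}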
Updating to latest and greatest added that data, thank you for the
pointer. Too many copies of Solr 1.4 trunk, and I'd neglected to
update.
However, the issue with the mapping not working with the
ext.metadata.prefix seems to remain:
budap
HTTP ERROR: 500
null
java.lang.NullPointerException
at java.io.StringReader.<init>(StringReader.java:33)
at org.apache.lucene.queryParser.QueryParser.parse(QueryParser.java:169)
at
org.apache.solr.search.LuceneQParser.parse(LuceneQParserPlugin.java:78)
at org.apache.sol
Jorg - the rest of that exception would be mighty handy! Please share
the entire details.
Erik
On May 29, 2009, at 7:38 AM, Jörg Agatz wrote:
Also,
after using the nightly build from today, 29.05.2009,
I still get the same error...
HTTP ERROR 500
null
java.lang.NullPointerException
Also,
after using the nightly build from today, 29.05.2009,
I still get the same error...
HTTP ERROR 500
null
java.lang.NullPointerException
...
The URL is: .../solr/itas?fq=cat:"test"
When I tried .../solr/itas?q=SEARCHWORD&cat:"test" it worked, but the links don't
work
Hey there,
I am testing the MoreLikeThis feature (with the MoreLikeThis component and with
the MoreLikeThis handler) and I am getting lots of duplicates: many of the
similar documents returned are duplicates of each other. To avoid that I
have tried to use the field collapsing patch, but it's not taking
Very interesting: FieldsWriter thinks it's written 12 bytes to the fdx
file, yet the directory says the file does not exist.
Can you re-run with this new patch? I'm suspecting that FieldsWriter
wrote to one segment, but somehow we are then looking at the wrong
segment. The attached patch prints
What would be the URL to ping to trigger replication,
like http://slave_host:port/solr/replication?command=enablepoll ?
Thanks
--
View this message in context:
http://www.nabble.com/replication-solr-1.4-tp23777206p23777272.html
Sent from the Solr - User mailing list archive at Nabble.com.
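For illustration, a small sketch of pinging those ReplicationHandler commands from Java; host, port, and path are placeholders, and which command you need (enablepoll vs. an explicit fetchindex) depends on how polling was turned off.

import java.net.HttpURLConnection;
import java.net.URL;

public class ReplicationPing {
    static int ping(String baseUrl, String command) throws Exception {
        // e.g. http://slave_host:8983/solr/replication?command=enablepoll
        URL url = new URL(baseUrl + "/replication?command=" + command);
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        int status = conn.getResponseCode();
        conn.disconnect();
        return status;
    }

    public static void main(String[] args) throws Exception {
        String slave = "http://slave_host:8983/solr"; // placeholder slave URL
        System.out.println("enablepoll -> HTTP " + ping(slave, "enablepoll"));
        // "fetchindex" asks the slave to pull a snapshot right away instead of
        // waiting for the next poll interval.
        System.out.println("fetchindex -> HTTP " + ping(slave, "fetchindex"));
    }
}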
Hi guys,
I haven't annoyed you for ages now ... hope everybody is fine ... I have an issue
with my replication.
I was wondering ... after a while replication doesn't work anymore ...
we have a script which enables or disables replication every 2 hours, and this
morning it didn't pull anything,
and it's maybe bec
I think you're asking if the (very temporary on trunk) faceting bug is
fixed. The answer is yes.
Erik
On May 29, 2009, at 3:10 AM, Jörg Agatz wrote:
Is the bug fixed in the new nightly builds?
Is the bug fixed in the new nightly builds?