If I add 10 documents to solrServer as in solrServer.addIndex(docs) (using
Embedded) and then I commit, and the commit fails for some reason, can
I retry the commit, say after some time, or are the added documents
lost?
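A minimal SolrJ sketch of the retry pattern being asked about, assuming an embedded SolrServer named server and an already-prepared list of documents; whether the uncommitted documents survive a failed commit is exactly the open question here, so this only illustrates the mechanics of retrying the commit:

import java.util.Collection;
import org.apache.solr.client.solrj.SolrServer;
import org.apache.solr.common.SolrInputDocument;

public class CommitRetry {
    // Add the documents once, then commit, retrying the commit a few times.
    static void addAndCommit(SolrServer server, Collection<SolrInputDocument> docs) throws Exception {
        server.add(docs);                 // buffer the documents in the server
        int maxAttempts = 3;
        for (int attempt = 1; attempt <= maxAttempts; attempt++) {
            try {
                server.commit();          // flush the added documents
                return;                   // commit succeeded
            } catch (Exception e) {       // SolrServerException or IOException
                if (attempt == maxAttempts) throw e;   // give up after the last attempt
                Thread.sleep(5000);       // wait a bit before retrying the commit
            }
        }
    }
}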
Hi Peter,
Thank you very much for your quick reply.
I tried the CheckIndex method, but it can't work on my crashed index.
The error message says the segments file in the directory is missing,
and when I use the -fix param a new segments file still can't be written.
I even tried CheckIndex witho
Thanks. That explains it! I'll set termVector to true and give it a try again.
On Mon, May 25, 2009 at 7:41 AM, Koji Sekiguchi wrote:
> MLT uses termVector if it exists for the field. If termVector is not
> available,
> MLT tries to get stored field data. If stored field is not available, MLT
> d
>
> I just want to mix it up a "little"
>
Sounds very subjective and open.
Give this a thought: you can try a multi-field sort, with the first sort being on
the score (so that the more relevant results appear first), and the
second being a sort on the random field (which shuffles the order of resu
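A rough SolrJ sketch of that two-level sort, assuming the schema has a random_* dynamic field backed by solr.RandomSortField (similar to Solr's example schema) and a SolrServer named server; the seed in the field name random_1234 is arbitrary:

import org.apache.solr.client.solrj.SolrQuery;
import org.apache.solr.client.solrj.SolrServer;
import org.apache.solr.client.solrj.response.QueryResponse;

public class MixedSortExample {
    static QueryResponse mixItUpALittle(SolrServer server, String userQuery) throws Exception {
        SolrQuery q = new SolrQuery(userQuery);
        // Primary sort on relevance, so the best matches stay near the top...
        q.addSortField("score", SolrQuery.ORDER.desc);
        // ...secondary sort on a RandomSortField to shuffle ties a "little".
        // Changing the number in the field name changes the shuffle.
        q.addSortField("random_1234", SolrQuery.ORDER.asc);
        return server.query(q);
    }
}

Note that the random field only breaks ties in score, so results with distinct scores keep their relevance order.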
Hi Avlesh
No, as I was trying to explain, I obviously don't want a totally random
result. I just want to mix it up a "little". Is there a way to achieve this
with solr?
Bjorn
Avlesh Singh wrote:
>
> If simply getting random results (matching your query) from Solr is your
> requirement, then
If simply getting random results (matching your query) from Solr is your
requirement, then a dynamic RandomSortField is what you need. Details here
-
http://lucene.apache.org/solr/api/org/apache/solr/schema/RandomSortField.html
Cheers
Avlesh
On Tue, May 26, 2009 at 6:54 AM, yaymicro_bjorn
wrote:
On May 25, 2009, at 11:15 AM, Jörg Agatz wrote:
I will use Solritas for Solr..
Yay! Our first customer ;)
At the moment, Solritas presents all fields in the results, but I must
change it to present only some, like id, name, cat and inStock
(example documents).
I think that is the code to
Hi
I'm responsible for the search engine at yaymicro.com. yaymicro.com is a
microstock agency (sells images). We are using the excellent solr search
engine, but I have a problem with series of similar images showing up. I'll
try to explain:
A search for dog for example
http://yaymicro.com/searc
Hi Thomas,
In a 5-24-09 nightly build, I applied the patch:
cd apache-solr-nightly
patch -p0 < ~/Projects/apache-solr-patches/SOLR-236_collapsing.patch
patching file src/common/org/apache/solr/common/params/CollapseParams.java
patching file
src/java/org/apache/solr/handler/component/CollapseComp
On Mon, May 25, 2009 at 3:09 AM, Reza Safari wrote:
> One little question: is there any utility that can convert a core Lucene query
> (any type, e.g. TermQuery etc.) to a Solr query? It is really a lot of work
> for me to rewrite existing code.
Solr internal APIs take Lucene query types.
I guess per
Thanks Otis. I added termVector="true" for those fields, but there isn't a
noticeable difference. So, just to be a little more clear, the dynamic
fields I'm adding... there might be hundreds. Do you see this as a problem?
Thanks,
Matt
On Fri, May 15, 2009 at 7:48 PM, Otis Gospodnetic <
otis_gospo
Hello, I'm using Solr 1.3 and having some problems when I search with the
shards parameter.
For example:
*shards=localhost:9090/isearch*
*I'm using 9090 as the default port*
I get this error:
INFO: Filter queries (object): [null]
> 25/05/2009 17:06:33 org.apache.solr.core.SolrCore execute
> INFO:
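For reference, a small SolrJ sketch of setting the shards parameter programmatically; the host, port, and core path are just the placeholders from the example above, and each shard entry is given as host:port/path without an http:// prefix:

import org.apache.solr.client.solrj.SolrQuery;
import org.apache.solr.client.solrj.SolrServer;
import org.apache.solr.client.solrj.response.QueryResponse;

public class ShardedQueryExample {
    static QueryResponse queryShards(SolrServer server, String userQuery) throws Exception {
        SolrQuery q = new SolrQuery(userQuery);
        // Comma-separated list of shards; each entry is host:port/path.
        q.set("shards", "localhost:9090/isearch,localhost:9091/isearch");
        return server.query(q);
    }
}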
Hello Matt,
the patch should work with trunk and after a small fix with 1.3 too (see
my comment in SOLR-236). I just made a successful build to be sure.
Do you see any error messages?
Thomas
Matt Mitchell schrieb:
Thanks guys. I looked at the dedup stuff, but the documents I'm adding
aren't r
Thanks guys. I looked at the dedup stuff, but the documents I'm adding
aren't really duplicates. They're very similar, but different.
I checked out the field collapsing feature patch, applied the patch but
can't get it to build successfully. Will this patch work with a nightly
build?
Thanks!
On
Salaam,
Sorry for this; here is the big picture.
We actually use Solr to index all the mail that comes to us so that we can
allow for faster lookups.
We have seen that after our mail server accepts, say, a GB of mail, the index
size goes up to 800MB.
I hope that this time I am clear in conveying
Again, indexing becomes extremely slow after indexing 8M documents (about 25G of
original file size). Here is the memory usage info of my computer. Does this
have anything to do with the Tomcat settings? Thanks.
top - 08:09:53 up 7:22, 1 user, load average: 1.03, 1.01, 1.00
Tasks: 78 total, 2
You can use the Lucene jar that ships with Solr to invoke the CheckIndex method -
this will possibly allow you to recover if you pass the -fix param.
You may lose some docs, however, so this is only viable if you can,
for example, query to check what's missing.
The command looks like (from the root of the
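The command line is cut off above, so here is only a rough sketch of the programmatic equivalent, assuming a Lucene release in which CheckIndex exposes checkIndex() and fixIndex(Status) (the exact API varies between versions) and a made-up index path:

import java.io.File;
import org.apache.lucene.index.CheckIndex;
import org.apache.lucene.store.Directory;
import org.apache.lucene.store.FSDirectory;

public class CheckIndexExample {
    public static void main(String[] args) throws Exception {
        // Path to the index directory (placeholder).
        Directory dir = FSDirectory.open(new File("/home/solr/data/index"));
        CheckIndex checker = new CheckIndex(dir);
        CheckIndex.Status status = checker.checkIndex();   // diagnose the index
        if (!status.clean) {
            // The programmatic counterpart of the -fix option: it drops broken
            // segments, which is why documents can be lost.
            checker.fixIndex(status);
        }
        dir.close();
    }
}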
>
> Also, most (none?) Query objects do not have a parseable toString
> representation so it may not even work at all.
>
IMO, this behavior is limited to subclasses of SpanQuery.
Anyways, I understand the general notion here.
Cheers
Avlesh
On Mon, May 25, 2009 at 9:30 PM, Shalin Shekhar Mang
On Mon, May 25, 2009 at 9:16 PM, Avlesh Singh wrote:
> Point taken, Erik. But is there really a downside to using
> Query.toString() if someone is not using any of the complex Query
> subclasses
> (like a SpanQuery)?
>
Well, you will be relying on undocumented behavior that might change in
Point taken, Erik. But is there really a downside to using
Query.toString() if someone is not using any of the complex Query subclasses
(like a SpanQuery)?
Cheers
Avlesh
On Mon, May 25, 2009 at 5:38 PM, Erik Hatcher wrote:
> Warning: toString on a Query object is *NOT* guaranteed to be par
On Mon, May 25, 2009 at 3:53 PM, Muhammed Sameer wrote:
>
> We are using apache-solr to index our files for faster searches. All things
> happen without a problem; my only concern is the size of the cache.
>
> It seems that the trend is that if I cache 1 GB of files the index goes
> to 800MB i
Hello...
I have a problem...
I will use Solritas for Solr...
But I have a problem...
At the moment, Solritas presents all fields in the results, but I must change
it to present only some, like id, name, cat and inStock (example documents).
I think that is the code to post all fields..
#foreach($
jlist9 wrote:
The wiki page (http://wiki.apache.org/solr/MoreLikeThis) says:
mlt.fl: The fields to use for similarity. NOTE: if possible, these
should have a stored TermVector
I didn't set termVector to true, yet MoreLikeThis with the StandardRequestHandler seems
to work fine. The first question is, is
Peter - I posted this to the solr-dev list this morning also. The
thread to follow is over there.
Erik
On May 25, 2009, at 9:05 AM, Peter Wolanin wrote:
Building Solr last night from updated svn, I'm now getting the
exception below when I use any fq parameter searching a pre-existin
Building Solr last night from updated svn, I'm now getting the
exception below when I use any fq parameter searching a pre-existing
index. So far, I cannot fix it by tweaking config files; I had to
delete and re-index.
I note that Solr was recently updated to the latest Lucene build, so
maybe so
Warning: toString on a Query object is *NOT* guaranteed to be parsable
back into the same Query. Don't use Query.toString() in this manner.
What you probably want to do is create your own QParserPlugin for Solr
that creates the Query however you need from textual parameters from
the client
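A bare-bones sketch of what such a QParserPlugin might look like, assuming Solr 1.3/1.4-era APIs; the class name, the field name, and the trivial TermQuery built from the request string are all invented for illustration:

import org.apache.lucene.index.Term;
import org.apache.lucene.search.Query;
import org.apache.lucene.search.TermQuery;
import org.apache.solr.common.params.SolrParams;
import org.apache.solr.common.util.NamedList;
import org.apache.solr.request.SolrQueryRequest;
import org.apache.solr.search.QParser;
import org.apache.solr.search.QParserPlugin;

// Registered in solrconfig.xml with <queryParser name="myparser" class="MyQParserPlugin"/>
// and selected with defType=myparser or {!myparser} local params.
public class MyQParserPlugin extends QParserPlugin {
    public void init(NamedList args) {
    }

    public QParser createParser(String qstr, SolrParams localParams,
                                SolrParams params, SolrQueryRequest req) {
        return new QParser(qstr, localParams, params, req) {
            public Query parse() {
                // Build whatever Lucene Query you need from the client's textual
                // parameters; a single TermQuery keeps the sketch short.
                return new TermQuery(new Term("text", qstr));
            }
        };
    }
}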
That's the standard request handler. You have to create a mapping in
solrconfig.xml to the MoreLikeThisHandler (not
MoreLikeThis*Request*Handler) in order to use that. It is not mapped
in the default example config (at least on trunk).
Erik
On May 24, 2009, at 11:08 PM, jlist9 w
Hi Jeff,
Look at these lines in the log:
May 22, 2009 7:38:25 AM org.apache.solr.core.SolrResourceLoader
INFO: Solr home set to '/home/zetasolr/'
May 22, 2009 7:38:25 AM org.apache.solr.core.SolrResourceLoader
createClassLoader
INFO: Adding 'file:/home/zetasolr/lib/FacetCubeComponent.jar' to Solr
jlist9 wrote:
Thanks. Will that still be the MoreLikeThisRequestHandler?
Or the StandardRequestHandler with mlt option?
Yes, StandardRequestHandler. MoreLikeThisComponent is
available by default. Set mlt=on when you want to get MLT results.
Koji
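A short SolrJ illustration of turning the MoreLikeThisComponent on through the standard request handler, assuming a SolrServer named server; the mlt.fl field list is a placeholder and, as noted elsewhere in the thread, those fields ideally have termVector enabled:

import org.apache.solr.client.solrj.SolrQuery;
import org.apache.solr.client.solrj.SolrServer;
import org.apache.solr.client.solrj.response.QueryResponse;

public class MltComponentExample {
    static QueryResponse queryWithMlt(SolrServer server, String userQuery) throws Exception {
        SolrQuery q = new SolrQuery(userQuery);
        q.set("mlt", true);              // enable the MoreLikeThisComponent
        q.set("mlt.fl", "name,cat");     // fields used for similarity (placeholder)
        q.set("mlt.count", 5);           // similar docs to return per result
        return server.query(q);
    }
}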
Salaam,
We are using apache-solr to index our files for faster searches. All things
happen without a problem; my only concern is the size of the cache.
It seems that the trend is that if I cache 1 GB of files the index goes to
800MB, i.e. we are seeing an 80% cache size.
Is this normal or am
Hi all,
I created a script that uses a Solr Search Component, which hooks into the main
Solr core and catches the searches being done. After this it tokenizes the
search and sends both the tokenized and the original query to another
Solr core. I have not written a factory for this, but if
You missed the point, Reza. toString *has to be implemented* by all
Query objects in Lucene. All you have to do is compose the right
Lucene query
matching your needs (all combinations of TermQueries, BooleanQueries,
RangeQueries etc.) and just do a luceneQuery.toString() when performing a
Solr
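To make that concrete, a small sketch that composes a Lucene BooleanQuery and hands its toString() to SolrJ, assuming the Lucene 2.x/3.x-era BooleanQuery constructor and invented field names; note the warning elsewhere in the thread that toString() is not guaranteed to be re-parsable:

import org.apache.lucene.index.Term;
import org.apache.lucene.search.BooleanClause;
import org.apache.lucene.search.BooleanQuery;
import org.apache.lucene.search.TermQuery;
import org.apache.solr.client.solrj.SolrQuery;
import org.apache.solr.client.solrj.SolrServer;
import org.apache.solr.client.solrj.response.QueryResponse;

public class LuceneToSolrQueryExample {
    static QueryResponse search(SolrServer server) throws Exception {
        // Compose the Lucene query the existing code already builds...
        BooleanQuery luceneQuery = new BooleanQuery();
        luceneQuery.add(new TermQuery(new Term("title", "dog")), BooleanClause.Occur.MUST);
        luceneQuery.add(new TermQuery(new Term("cat", "images")), BooleanClause.Occur.SHOULD);

        // ...and send its textual form to Solr.
        SolrQuery solrQuery = new SolrQuery();
        solrQuery.setQuery(luceneQuery.toString());
        return server.query(solrQuery);
    }
}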
Hi,
I tested the new filters' configuration and it works fine.
The problem with ISOLatin1AccentFilterFactory was not due to Solr, but to a
core-dependent configuration
Hmmm, overriding toString() can work wonders. I will try as you
suggested. Thanks for the quick reply.
Gr, Reza
On May 25, 2009, at 9:34 AM, Avlesh Singh wrote:
If you use SolrJ client to perform searches, does this not work for
you?
SolrQuery solrQuery = new SolrQuery();
solrQuery.setQuery(*m
Hi everyone,
I have 8M docs to index, and each doc is around 50KB. Solr crashed in
the middle of indexing. The error message said that one of the files in the data
directory is missing. I don't know why this happened.
So right now I have to find a way to recover the index and avoid re-indexing. Is
Hi Erik,
This mail just got into my junk folder so it was left unread. Well, after I turned debug query
on and fired the same query, I am getting some different vibes from it.
Suppose the query is Content: xyz AND Ticket_Id: (123 OR 1234), and here the
search query, i.e. “xyz”, was in the stop word list even t
I am using DIH to do indexing. After I indexed about 8M documents (it took about
1hr40m), it used up almost all memory (4GB), and the indexing became extremely
slow. If I delete the whole index and shut down Tomcat, it still shows over 3GB of
memory used. Is it a memory leak? If it is, then the lea
Hello,
I wish to send an MLT request to Solr and filter the result by a list of values
for a specific field. The problem is that sometimes the list can include
thousands of values and it's impossible to send such a GET request.
Sending this request as POST didn't work well... Is POST supporte
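One way around an over-long URL is to let SolrJ send the parameters as a POST body; a minimal sketch assuming a SolrServer named server, with a placeholder query, MLT field, and a filter-query string standing in for the thousands of values:

import org.apache.solr.client.solrj.SolrQuery;
import org.apache.solr.client.solrj.SolrRequest;
import org.apache.solr.client.solrj.SolrServer;
import org.apache.solr.client.solrj.response.QueryResponse;

public class MltPostExample {
    static QueryResponse mltWithBigFilter(SolrServer server, String bigFilterQuery) throws Exception {
        SolrQuery q = new SolrQuery("id:12345");
        q.set("mlt", true);
        q.set("mlt.fl", "description");
        // Filter query that may contain thousands of values,
        // e.g. "group_id:(1 OR 2 OR 3 OR ...)".
        q.addFilterQuery(bigFilterQuery);
        // Send the parameters in the POST body instead of the URL.
        return server.query(q, SolrRequest.METHOD.POST);
    }
}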
If you use SolrJ client to perform searches, does this not work for you?
SolrQuery solrQuery = new SolrQuery();
solrQuery.setQuery(*myLuceneQuery.toString()*);
QueryResponse response = mySolrServer.query(solrQuery);
Cheers
Avlesh
On Mon, May 25, 2009 at 12:39 PM, Reza Safari wrote:
> Hello,
>
On Mon, May 25, 2009 at 10:56 AM, nk 11 wrote:
> Hello
> Interesting thread. One request please, because I don't have much experience
> with solr, could you please use full terms and not DIH, RES etc.?
nk11.
DIH = DataImportHandler
RES=?
It is unavoidable that we end up using short names becaus
Hello,
One little question: is there any utility that can convert a core Lucene
query (any type, e.g. TermQuery etc.) to a Solr query? It is really a
lot of work for me to rewrite existing code.
Thanks,
Reza
--
Reza Safari
LUKKIEN
Copernicuslaan 15
6716 BM Ede
The Netherlands