I want to cache the full text in memory to improve performance.
The full text is only used for highlighting in my application (but it's very
time-consuming; my avg query time is about 250ms, and I guess it would be
about 50ms if I just fetched the top 10 full texts). Things get worse when fetching
more full text because i
Ya, I checked the extraction request handler but couldn't get the
info... I installed Tika 0.7 and copied the jar files into the Solr
home library. I started sending the pdf/html files, and then I get a lazy
error. I am using Tomcat and Solr 1.4.
hi,
yes, I followed the wiki; can you now tell me the procedure for it?
regards,
swaroop
Hi all,
I am using Solr dismax to search over my books in the DB. I indexed them all
using Solr.
The problem I noticed today is, everything starts when I want to search for the book
"The Girl Who Kicked the Hornet's Nest"
but nothing is returned. I'm sure I have this book in the DB. So I stripped so
brilliant! thanks very much for your help :)
On 13 July 2010 21:47, Jonathan Rochkind wrote:
>> i'm hoping that -- faceting simply calculates+returns the counts for docs that
>> have the field present while results may still contain documents that don't
>> have the facet field (i.e. the field faceted on)?
I had the same problem; the correction differs depending on which application
server you are using.
If it's Tomcat, try here: http://wiki.apache.org/solr/SolrTomcat near the URI
charset section.
I use glassfish, and I added this entry to the wiki after getting help from
this group: http://wiki.apache.org/solr/
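For the Tomcat case, what the wiki describes is setting the URIEncoding attribute on the HTTP connector in Tomcat's server.xml. A minimal sketch (port and protocol are whatever your install already uses):

```xml
<!-- Tomcat server.xml: decode request URIs (and thus query terms) as UTF-8 -->
<Connector port="8080" protocol="HTTP/1.1" URIEncoding="UTF-8"/>
```

Without this, Tomcat decodes query-string bytes as ISO-8859-1 and accented search terms arrive garbled.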
: What could cause a facet query on a field (say 'name') differ in count
: from a basic query on the field using the same value?
lots of things are possible -- largely related to how your field is
analyzed.
you need to show us the relevant sections from your schema.xml (the field
and fieldtype)
: The idea of a default date type came while reading this on
...
: Arguments may be numerically indexed date fields such as TrieDate (the
: default in 1.4), or date math (examples in SolrQuerySyntax) based on a
: constant date or NOW.
ahh.. yeah, that wording was misleading. i've fixed
I'll try it out and let you know!
@tommychheng
Programmer and UC Irvine Graduate Student
Find a great grad school based on research interests: http://gradschoolnow.com
On 7/13/10 1:41 PM, Erik Hatcher wrote:
Tommy,
It's not committed to trunk or any other branch at the moment, so no
future released version until then.
Tommy,
It's not committed to trunk or any other branch at the moment, so no
future released version until then.
Have you tested it out? Any feedback we should incorporate?
When I can carve out some time over the next week or so I'll review
and commit if there are no issues brought up.
Hi,
Which next version of solr is the csv response writer set to be included
in?
https://issues.apache.org/jira/browse/SOLR-1925
--
@tommychheng
Programmer and UC Irvine Graduate Student
Find a great grad school based on research interests: http://gradschoolnow.com
: Yup that did it.
To clarify what you're seeing...
http://wiki.apache.org/solr/SolrConfigXml#Request_Dispatcher
: I think you're looking for the statistics for the standard request handler.
...
: I am looking at the stats.jsp page in the SOLR admin panel.
:
: I do not see statistic
I am trying to add the following synonyms while indexing/searching:
swimsuit, bañadores, bañador
I tested searching for "bañadores" but it didn't return any results.
After further inspection I noticed in the field analysis admin that swimsuit
gets expanded to ba�adores. Not sure if it will sh
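One thing worth checking, assuming the analyzer chain itself is fine: the replacement character suggests an encoding problem, since I believe Solr reads synonyms.txt as UTF-8, so a file saved as Latin-1/Windows-1252 produces exactly this kind of corruption. A sketch of the entry:

```
# synonyms.txt -- save this file as UTF-8, not Latin-1/Windows-1252
swimsuit, bañadores, bañador
```

Re-saving the file as UTF-8 and reloading the core is worth trying before anything else.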
Seems that coreContainer.shutdown() solves the problem.
Anyone doing it in a different way?
--
View this message in context:
http://lucene.472066.n3.nabble.com/ending-a-java-app-that-uses-EmbeddedSolrServer-tp963573p964013.html
Sent from the Solr - User mailing list archive at Nabble.com.
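For context, a sketch of that shutdown in the Solr 1.4 embedded API. Apart from CoreContainer.shutdown() itself, the surrounding setup is illustrative and needs the solr-core jars on the classpath:

```java
// Sketch: shut down the CoreContainer so its non-daemon threads stop
// and the JVM can exit after commit/optimize.
CoreContainer coreContainer = new CoreContainer.Initializer().initialize();
EmbeddedSolrServer server = new EmbeddedSolrServer(coreContainer, "");
try {
    // ... add documents, server.commit(), server.optimize() ...
} finally {
    coreContainer.shutdown();  // releases the write lock, stops core threads
}
```

Putting the shutdown in a finally block also ensures the index lock is released if indexing throws.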
Great, thanks!
On Tue, Jul 13, 2010 at 2:55 AM, Fornoville, Tom
wrote:
> If you're only adding documents you can also have a go with
> StreamingUpdateSolrServer instead of the CommonsHttpSolrServer.
> Couple that with the suggestion of master/slave so the searches don't
> interfere with the index
Thank you so much guys. You solved my problem :) :)
The problem was I was using stemming; I removed that and it works perfectly now.
--
View this message in context:
http://lucene.472066.n3.nabble.com/Problem-with-Wildcard-searches-in-Solr-tp961448p963744.html
Sent from the Solr - User mailing list archive at Nabble.com.
Hey there,
I've done some tests with a custom java app using EmbeddedSolrServer to
create an index.
It works ok and I am able to build the index, but I've noticed that after the
commit and optimize are done, the app never terminates.
How should I end it? Is there any way to tell the EmbeddedSolrServer to
I think you're going to have trouble doing this with separate cores.
With separate cores, you'll need to issue two queries to Solr, one for
each core. And then to intermingle results from the different cores like
that is going to require difficult (esp. to do at all efficiently)
client side
Hi,
Are you sure you followed the wiki [1] on this subject? There is an example
there, but you need Solr 1.4.0 or higher. I'm unsure if just patching 1.3.0 will
really do the trick. The patch must then also include Apache Tika, which sits
under the hood, extracting content and meta data from various
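For reference, the wiki's registration of the extraction handler in solrconfig.xml looks roughly like this in Solr 1.4 (the fmap.content target field is an assumption about the schema):

```xml
<!-- solrconfig.xml: register the Tika-backed extraction handler -->
<requestHandler name="/update/extract"
                class="org.apache.solr.handler.extraction.ExtractingRequestHandler">
  <lst name="defaults">
    <!-- map Tika's extracted body into an indexed field -->
    <str name="fmap.content">text</str>
  </lst>
</requestHandler>
```

The Tika and extraction jars must also be visible to Solr (e.g. in the core's lib directory) or the handler fails to load lazily at first use.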
> i'm hoping that -- faceting simply calculates+returns the counts for docs that
> have the field present while results may still contain documents that don't
> have the facet field (i.e. the field faceted on)?
Yes, that's exactly what happens. You can use facet.missing to get a count for
documents that don't have the field.
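A minimal request illustrating this (host, core path, and field name are placeholders):

```
http://localhost:8983/solr/select?q=*:*&facet=true&facet.field=name&facet.missing=true
```

With facet.missing=true the facet block gains one extra unlabeled count: the number of matching documents with no value in the faceted field.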
Hey all!
I have a problem with how SolrJ makes beans from a Solr response. Very
simplified, my business objects are as follows:
enum E {
    ENUM1, ENUM2;
}
public class A {
    @Field("a_id")
    private String id;
    @Field("a_type")
    private E objectType;
    public void setId(String id) {
        this.id = id;
    }
}
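A common workaround, if the binder can't convert the stored string into the enum directly, is to expose a String-accepting setter and convert with Enum.valueOf. This is a self-contained sketch: the @Field annotations from the question are shown only as comments so it compiles without the SolrJ jars, and it assumes a_type is stored as the enum constant's name:

```java
// Sketch of the enum-binding workaround. In the real bean the setters
// would carry @Field("a_id") and @Field("a_type") from
// org.apache.solr.client.solrj.beans.Field; they are omitted here so
// the example compiles without SolrJ on the classpath.
enum E {
    ENUM1, ENUM2;
}

public class A {
    private String id;
    private E objectType;

    public void setId(String id) {
        this.id = id;
    }

    public String getId() {
        return id;
    }

    // Accept the raw string Solr returns and convert it ourselves,
    // assuming the field holds the enum constant's name (e.g. "ENUM1").
    public void setObjectType(String objectType) {
        this.objectType = E.valueOf(objectType);
    }

    public E getObjectType() {
        return objectType;
    }
}
```

Because SolrJ invokes annotated setters with the value from the response, routing the conversion through a String setter sidesteps the type mismatch.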
> i'm hoping that -- faceting simply calculates+returns the counts for docs that
> have the field present while results may still contain documents that don't
> have the facet field (i.e. the field faceted on)?
http://wiki.apache.org/solr/SimpleFacetParameters#facet.missing ?
On 07/13/2010 02:11 PM, satya swaroop wrote:
Hi all,
I am new to Solr and followed the wiki and got the Solr admin
running successfully. It works well for xml files, but I am unable to index
the rich documents. I followed the wiki for the richer documents too, but I
didn't get it. The error comes when I send a pdf/html
Hi All,
I have a specific requirement as stated below. Kindly suggest if this can
be achieved or not, and the steps to achieve it.
I have 2 cores storing different kinds of data.
My search query should return results in the order given below:
1) Exact match results from core1
2) Exact match res
Hi all,
I am new to Solr and followed the wiki and got the Solr admin
running successfully. It works well for xml files, but I am unable to index
the rich documents. I followed the wiki for the richer documents too, but I
didn't get it. The error comes when I send a pdf/html
hi,
has anyone had experience with faceting over a field where the field
is not present in all documents within the index?
i'm hoping that -- faceting simply calculates+returns the counts for docs that
have the field present while results may still contain documents that don't
have the facet field (i.e. the field faceted on)?
Thanks a lot for the reply.
Is it independent of the merge factor?
My index size reduced a lot (almost by 40%) after optimization and I am
worried that I might have lost data. I have no deletes at all but a high
merge factor. Any suggestions?
Thanks,
Karthik
> I'm using solr 1.4 and only one core.
> The elevate xml file is quite big, and
> i wonder can solr handle that? How to reload the core?
Markus Jelsma's suggestion is more robust. You don't need to restart or reload
anything. Put elevate.xml under the data directory. It will be reloaded
automatically.
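For reference, the relevant solrconfig.xml piece, taken loosely from the example config (the component name is arbitrary). When config-file resolves to a file in the data directory, Solr reloads it for each new searcher:

```xml
<!-- solrconfig.xml: elevate.xml in the data dir is re-read per new searcher -->
<searchComponent name="elevator"
                 class="org.apache.solr.handler.component.QueryElevationComponent">
  <str name="queryFieldType">string</str>
  <str name="config-file">elevate.xml</str>
</searchComponent>
```

So after editing elevate.xml, a commit (which opens a new searcher) is enough to pick up the changes; no restart needed.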
It will most likely be smaller, but the new size is highly dependent on
the number of documents that you have deleted (because optimize actually
removes data instead of only flagging it).
-Original Message-
From: Karthik K [mailto:karthikkato...@gmail.com]
Sent: Tuesday, 13 July 2010 11:3
Hi,
What will be the size of the index after optimizing? I know it increases
during the process, but what will the size be after the optimization is done?
Is it dependent on the merge factor during indexing? Please reply.
Thanks,
Karthik
I'm using solr 1.4 and only one core. The elevate xml file is quite big, and
I wonder, can Solr handle that? How do I reload the core?
On Tue, Jul 13, 2010 at 4:12 PM, Ahmet Arslan wrote:
> > The problem is that every time I
> > update the elevate.xml, I need to restart
> > solr tomcat service. Thi
> The problem is that every time I
> update the elevate.xml, I need to restart
> solr tomcat service. This feature needs to be updated
> frequently. How would
> i handle that?
You can reload the core, without restarting Tomcat, if you are using a
multi-core setup. Which version of Solr are you using?
No, it can be reloaded for each new searcher [1].
[1]: http://wiki.apache.org/solr/QueryElevationComponent#config-file
On Tuesday 13 July 2010 11:02:10 Chamnap Chhorn wrote:
> The problem is that every time I update the elevate.xml, I need to restart
> solr tomcat service. This feature needs to be upd
The problem is that every time I update the elevate.xml, I need to restart
the Solr Tomcat service. This feature needs to be updated frequently. How would
I handle that?
Any ideas or other solutions?
On Mon, Jul 12, 2010 at 5:45 PM, Ahmet Arslan wrote:
> > I wonder there is a proper way to
> > fulfi
Ok,
I did it.
Thank you
-Original Message-
From: Rebecca Watson [mailto:bec.wat...@gmail.com]
Sent: Tuesday, July 13, 2010 11:51 AM
To: solr-user@lucene.apache.org
Subject: Re: Locked Index files
shut down your solr server first... if it's not important! :)
On 13 July 2010 16:47, ZAROGKIKA
Hi again,
I changed the search options to decrease my query size, and now I'm past
the "URI too long" error from the other thread.
I already added:
819200
819200
to the Jetty config, but now I'm stuck again on:
13/Jul/2010 9:41:38 org.apache.solr.common.SolrException log
SEVERE: java.lang.Null
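Guessing from the two bare 819200 values above, the Jetty fragment involved is probably the headerBufferSize setting; a sketch in Jetty 6 syntax (the connector class and port follow the example jetty.xml shipped with Solr, so treat them as assumptions):

```xml
<!-- jetty.xml: enlarge the header buffer so very long GET URLs fit -->
<Call name="addConnector">
  <Arg>
    <New class="org.mortbay.jetty.bio.SocketConnector">
      <Set name="port">8983</Set>
      <Set name="headerBufferSize">819200</Set>
    </New>
  </Arg>
</Call>
```

An alternative that avoids the limit entirely is sending the query as a POST instead of a GET.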
shut down your solr server first... if it's not important! :)
On 13 July 2010 16:47, ZAROGKIKAS,GIORGOS wrote:
> I found it but I can not delete
> Any suggestion???
>
> -Original Message-
> From: Yuval Feinstein [mailto:yuv...@answers.com]
> Sent: Tuesday, July 13, 2010 11:39 AM
> To: solr
Is the Solr process still running?
Also what OS are you using?
-Original Message-
From: ZAROGKIKAS,GIORGOS [mailto:g.zarogki...@multirama.gr]
Sent: Tuesday, 13 July 2010 10:47
To: solr-user@lucene.apache.org
Subject: RE: Locked Index files
I found it but I cannot delete it.
Any suggestions??
I found it but I cannot delete it.
Any suggestions???
-Original Message-
From: Yuval Feinstein [mailto:yuv...@answers.com]
Sent: Tuesday, July 13, 2010 11:39 AM
To: solr-user@lucene.apache.org
Subject: RE: Locked Index files
Hi Giorgos.
Try looking for write.lock files and deleting them.
Cheers,
Yuval
Hi Giorgos.
Try looking for write.lock files and deleting them.
Cheers,
Yuval
-Original Message-
From: ZAROGKIKAS,GIORGOS [mailto:g.zarogki...@multirama.gr]
Sent: Tuesday, July 13, 2010 11:28 AM
To: solr-user@lucene.apache.org
Subject: Locked Index files
Hi
My solr Index files are locked and I can’t index anything
Hi,
My solr index files are locked and I can't index anything.
How can I remove the lock file?
I can't delete it.
hi,
sorry, realised I had a typo:
> of course, none of this is going to sort out trying to match against the query
> "co?mput?r" because you've probably stemmed "computer" to "comput" or
> something
> at index time -- but if you add in a copyfield to an extra field that
> isn't stemmed
> at query
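The copyfield idea mentioned above, sketched in schema.xml terms (the field and type names are illustrative; the analyzer simply lowercases without stemming):

```xml
<!-- an unstemmed companion field for wildcard queries -->
<fieldType name="text_unstemmed" class="solr.TextField">
  <analyzer>
    <tokenizer class="solr.WhitespaceTokenizerFactory"/>
    <filter class="solr.LowerCaseFilterFactory"/>
  </analyzer>
</fieldType>
<field name="body_unstemmed" type="text_unstemmed" indexed="true" stored="false"/>
<copyField source="body" dest="body_unstemmed"/>
```

Wildcard queries can then be directed at body_unstemmed, where terms survive unstemmed, while ordinary queries keep hitting the stemmed field.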
Hi,
earlier this week I started messing with getting wildcard queries to
be analysed.
I've got some weird analysers doing stemming/lowercasing, and writing
the same rules into a custom queryparser didn't seem logical given
I just want the analysers to apply as they do at index time.
i ca
Hi,
to use leading wildcards you may have a look at this one:
http://mail-archives.apache.org/mod_mbox/lucene-solr-user/200910.mbox/%3c4ac4b71c.6040...@gmail.com%3e
Basically you just put a ReversedWildcardFilterFactory in your config and you
can use leading wildcards.
Good luck!
-Original Message-
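That configuration looks roughly like the fieldType below; names and the tuning attributes follow the Solr 1.4 example schema, so treat them as illustrative:

```xml
<!-- index-time only: also index reversed tokens so leading wildcards work -->
<fieldType name="text_rev" class="solr.TextField">
  <analyzer type="index">
    <tokenizer class="solr.WhitespaceTokenizerFactory"/>
    <filter class="solr.LowerCaseFilterFactory"/>
    <filter class="solr.ReversedWildcardFilterFactory" withOriginal="true"
            maxPosAsterisk="3" maxPosQuestion="2"/>
  </analyzer>
  <analyzer type="query">
    <tokenizer class="solr.WhitespaceTokenizerFactory"/>
    <filter class="solr.LowerCaseFilterFactory"/>
  </analyzer>
</fieldType>
```

The filter is applied only at index time; the query parser detects the reversed terms automatically when it sees a leading wildcard.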
If you're only adding documents you can also have a go with
StreamingUpdateSolrServer instead of the CommonsHttpSolrServer.
Couple that with the suggestion of master/slave so the searches don't
interfere with the indexing and you should have a pretty responsive
system.
-Original Message-
F
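A sketch of that swap, using the Solr 1.4 SolrJ API (the URL, queue size, and thread count are illustrative, and it needs solrj on the classpath):

```java
// Sketch: StreamingUpdateSolrServer buffers adds and streams them over
// a few connections; a good fit for add-only indexing workloads.
SolrServer server = new StreamingUpdateSolrServer(
        "http://localhost:8983/solr",  // master URL -- illustrative
        100,  // internal buffer queue size
        3);   // background writer threads
SolrInputDocument doc = new SolrInputDocument();
doc.addField("id", "1");
server.add(doc);
server.commit();
```

Because adds are queued and written asynchronously, errors surface later than with CommonsHttpSolrServer, which is the usual trade-off to keep in mind.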
hi,
I installed Tika and put its jar files into the Solr home library, and also
gave the path to the Tika configuration file. But the error is the same. The
Tika config file is as follows:
http://purl.org/dc/elements/1.1/
application/xml
So what is the right XML code to call Solr 1.4.1 through Maven?
Regards,
Thibaut
On 06/27/2010 02:18 AM, Jason Chaffee wrote:
Was this change intentional or a mistake? If it was a mistake, can someone
please fix it in maven's central repository?
Thanks,
I am really looking forward to the SolrInfoMBeanHandler.
As of now I am working with the parsing idea.
--
View this message in context:
http://lucene.472066.n3.nabble.com/Cache-hits-exposed-by-API-tp930602p962654.html
Sent from the Solr - User mailing list archive at Nabble.com.