Hi all,
Currently in my project, most of the core configurations are the
same (solrconfig.xml, dataimport.properties, ...), and each core keeps its own
duplicated copy in its own folder.
I am wondering how I could put the common ones in one folder that each core
could share, and keep the different ones in their
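One possible layout, as an untested sketch (the per-core config/schema attributes and the relative shared paths are assumptions to check against your Solr version):

```xml
<!-- solr.xml: both cores point at one shared conf directory (hypothetical paths) -->
<solr persistent="true">
  <cores adminPath="/admin/cores">
    <core name="core1" instanceDir="core1"
          config="../shared/conf/solrconfig.xml"
          schema="../shared/conf/schema.xml"/>
    <core name="core2" instanceDir="core2"
          config="../shared/conf/solrconfig.xml"
          schema="../shared/conf/schema.xml"/>
  </cores>
</solr>
```

Core-specific files (e.g. each core's dataimport.properties) would stay under each instanceDir.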
Hello list,
we have the problem that old searchers often do not close
after an optimize (on the master) or replication (on the slaves), and we
therefore end up with huge index volumes.
The only solution so far is to stop and start Solr, which cleans
everything up successfully, but this can only be a workaround.
Is the
I would like to influence the score, but I would rather not mess with the q=
field since I want the query to use dismax for q.
Something like:
fq={!type=dismax qf=$qqf v=$qspec}&
fq={!type=dismax qt=dismaxname v=$qname}&
q=_val_:"{!type=dismax qf=$qqf v=$qspec}" _val_:"{!type=dismax
qt=dismaxname v=$
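For what it's worth, a sketch of one pattern that fits this (qqf, qspec and qname are the placeholder params from the mail above; qname_qf is invented here): the fq clauses match without scoring, while nested dismax queries inside q contribute the score:

```
fq={!dismax qf=$qqf v=$qspec}
&q=_query_:"{!dismax qf=$qqf v=$qspec}" _query_:"{!dismax qf=$qname_qf v=$qname}"
&qqf=title^2 description
&qname_qf=name
```

This assumes the default (lucene) parser for q, so the `_query_` hook can dispatch each clause to dismax.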
Wanted to share this, as I've seen a couple of discussions on different boards.
The solution has been either:
1. use the SolrJ client
2. import as CSV
3. use the StreamingUpdateSolrServer
The barrier I have is that I need to build this offline (without using a
solr server, solrconfig.xml, or
As I understand it, sorting by field is what caches are all
about. You have a big list in memory of all of the terms for
a field, indexed by Lucene doc ID so fetching the term to
compare by doc ID is fast, and also why the caches need
to be warmed, and why sort fields should be single-valued.
If y
On 4/19/2011 1:43 PM, Jan Høydahl wrote:
Hi,
Not possible :)
Lucene compares each matching document against the query and produces a score
for each.
Documents are not compared to each other like a normal sort, that would be way too
costly.
That might be true for sort by 'score' (although even i
You could create a new Similarity class plugin that takes into account every
parameter you need:
http://wiki.apache.org/solr/SolrPlugins?highlight=%28similarity%29#Similarity
but, as Jan said, be careful with the cost of the similarity function.
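For reference, registering such a plugin in schema.xml looks roughly like this (the class name is a made-up example):

```xml
<!-- schema.xml, top level: hypothetical custom Similarity registration -->
<similarity class="com.example.solr.MyCustomSimilarity"/>
```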
Ludovic.
2011/4/19 Jan Høydahl / Cominvent
Hi,
Not possible :)
Lucene compares each matching document against the query and produces a score
for each.
Documents are not compared to each other like a normal sort, that would be way too
costly.
But if you explain your use case, I'm sure we can find ways to express your
needs in other ways
P
Hi,
I want to be able to have a custom sorting algorithm such that for each comparison
of document results (A vs. B) I can rank them, i.e. writing a comparator like I
would normally do in Java (compares its two arguments for order; returns a
negative integer, zero, or a positive integer as the first
I don't know, will ask him.
On Tue, Apr 19, 2011 at 7:02 PM, Li wrote:
> Looks like dependencies. Did you or he include the dependencies in the
> solrconfig?
>
> Sent from my iPhone
>
> On Apr 19, 2011, at 8:35 AM, Oleg Tikhonov wrote:
>
> >> Hello everybody,
> >>
> >> Recently, I got a mess
Yes, but as the default! I don't want to have to set it to false myself.
I don't need an optimize after every commit, and I want default=false!
Hmm, so you don't want to add '&optimize=false' every time you call DIH.
DIH is a request handler so you can define defaults in solrconfig.xml
data-config.
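A sketch of what that could look like in solrconfig.xml (handler name and config-file name are the usual example values; verify against your setup):

```xml
<requestHandler name="/dataimport"
                class="org.apache.solr.handler.dataimport.DataImportHandler">
  <lst name="defaults">
    <str name="config">data-config.xml</str>
    <!-- assumption: DIH reads request params from defaults, so this
         makes optimize=false the default for every import -->
    <str name="optimize">false</str>
  </lst>
</requestHandler>
```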
Hello everybody,
Recently, I got a message from a guy who was asking about
TikaEntityProcessor.
He uses Solr 1.4 and Tika 0.8.
Here is a stack:
SEVERE: Full Import failed
org.apache.solr.handler.dataimport.DataImportHandlerException: Unable to load EntityProcessor implementation for entity:99464
Looks like dependencies. Did you or he include the dependencies in the
solrconfig?
Sent from my iPhone
On Apr 19, 2011, at 8:35 AM, Oleg Tikhonov wrote:
>> Hello everybody,
>>
>> Recently, I got a message from a guy who was asking about
>> TikaEntityProcessor.
>> He uses Solr 1.4 and Tika 0
> Hello everybody,
>
> Recently, I got a message from a guy who was asking about
> TikaEntityProcessor.
> He uses Solr 1.4 and Tika 0.8.
> Here is a stack:
> SEVERE: Full Import failed
> org.apache.solr.handler.dataimport.DataImportHandlerException: Unable to load
> EntityProcessor implementati
that looks like a good starting point,
thanks,
bryan rasmussen
2011/4/19 François Schiettecatte :
> I would start here:
>
> http://snowball.tartarus.org/
>
> François
>
> On Apr 19, 2011, at 11:15 AM, bryan rasmussen wrote:
>
>> Hi,
>>
>> I was wondering if I have a large number of queries
Maybe not a library, but a command-line tool would be good: something
I can write code against or drive from a script to test that when I
ask for the word virksomhed in Danish, I can then see
that it would also return virksomhederne and other variations.
I guess I was hoping
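For quick offline experiments, a toy suffix stripper can show the shape of such a script. This is NOT the real Snowball Danish stemmer (which applies region rules this sketch omits); the suffix list and minimum-stem-length guard are invented for illustration:

```python
# Toy Danish suffix stripper -- purely illustrative, not the Snowball algorithm.
SUFFIXES = ["ernes", "erens", "erne", "eren", "erer", "ene", "er", "en", "e", "s"]

def toy_stem(word: str) -> str:
    # Strip the longest matching suffix, keeping at least 4 characters of stem.
    for suf in sorted(SUFFIXES, key=len, reverse=True):
        if word.endswith(suf) and len(word) - len(suf) >= 4:
            return word[: -len(suf)]
    return word

# Both inflections collapse to the same stem, so a script like this can
# check that query variants would meet in the index.
print(toy_stem("virksomhederne"))
print(toy_stem("virksomheder"))
```

For real checks, the Snowball distribution's command-line stemmer (or Solr's analysis admin page) would give the actual stems.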
I would start here:
http://snowball.tartarus.org/
François
On Apr 19, 2011, at 11:15 AM, bryan rasmussen wrote:
> Hi,
>
> I was wondering if I have a large number of queries I want to test
> stemming on if there is a free standing library I can just run it
> against without having to d
I'm not sure what a "free standing library" would look like. Do you
want it to check that all the terms in your index are stemmed
correctly (or at least as expected)?
You have a bunch of queries. How would such a library test them
against your corpus?
There's not enough information here to give a
Hi,
I was wondering, if I have a large number of queries I want to test
stemming on, whether there is a free-standing library I can just run them
against without all the overhead of an HTTP request?
Thanks,
Bryan Rasmussen
Hello.
My optimize is taking too much time, and sometimes I start an optimize
that I don't actually want ... :/ stupid, I know.
Is it possible to abort a running optimize?
-
--- System
One Server, 12 GB RAM, 2 Solr Instances, 7
Yes, but as the default! I don't want to have to set it to false myself.
I don't need an optimize after every commit, and I want default=false!
How can I change the default value of optimize in DIH to false?
You mean? solr/dataimport?command=delta-import&optimize=false
Hello.
How can I change the default value of optimize in DIH to false?
-
--- System
One Server, 12 GB RAM, 2 Solr Instances, 7 Cores,
1 Core with 31 Million Documents other Cores < 100.000
- Solr1 for Search-Requests -
I think you could approximate this with some empirical measurements, i.e. index
1,000 'typical' documents and see what the resulting index size is. Of course
you may need to adjust this number upwards if there is a lot of variability in
document size.
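As a worked example of that extrapolation (every measured number below is a hypothetical placeholder; substitute your own sample measurement):

```python
# Back-of-envelope index-size extrapolation from a 1,000-document sample.
sample_docs = 1_000
sample_index_bytes = 50 * 1024 * 1024   # index directory size after the sample load
total_docs = 31_000_000                 # target corpus size

bytes_per_doc = sample_index_bytes / sample_docs
estimated_total_bytes = bytes_per_doc * total_docs
print(f"~{estimated_total_bytes / 1024**3:.1f} GiB estimated")
```

The linear scaling assumes document sizes in the sample are representative; skewed corpora need a larger or stratified sample.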
When I built the search engine that ran fe
There's no way I know of to do this.
Why is this important to you? Because I'm not
sure what actionable information this gives you.
The number will vary based on whether the fields
are stored or not. And storing the fields has
very little effect on search memory requirements.
What are you hoping
Hello,
I want to do distributed search with Solr for embedded "servers" via SolrJ.
For now, I use the MultiCore features (as in the tutorial) by initializing 10
different cores. But in the end I get the last core initialized 10 times.
I don't know where the problem is (or a misconfiguration --"solr.
Hi Vignesh,
Are you working from the provided example? If not, did you copy the
solr-cell libraries to your Solr deployment?
You can follow the instructions here:
http://wiki.apache.org/solr/ExtractingRequestHandler#Configuration
Regards,
*Juan*
On Tue, Apr 19, 2011 at 3:47 AM, Vignesh Raj w
Hi,
Is there a way to find out the Solr index size for a particular document? I
am using SolrJ to index the documents.
Assume I am indexing multiple fields like title, description, content, and
a few integer fields in schema.xml; once I index the content, is there a
way to identify the index
Hi,
Nutch 1.3-dev seems to have changed its tstamp field from a long to a properly
formatted Solr-readable date/time, but the example Solr schema for Nutch still
configures the tstamp field as a long. This results in a formatted date/time
in a long field, which I think should not be allowed in t
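If the fix is to align the example schema with Nutch's new output, the tstamp definition would presumably change along these lines (a sketch; the attribute values are assumptions):

```xml
<!-- before -->
<field name="tstamp" type="long" indexed="true" stored="true"/>
<!-- after: matches the formatted date/time Nutch 1.3-dev now writes -->
<field name="tstamp" type="date" indexed="true" stored="true"/>
```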
Will
On Mon, Apr 18, 2011 at 9:51 PM, Will Milspec wrote:
> Does the lucene-solr git repository have a tag that marks the 3.1 release?
what's wrong with the "lucene_solr_3_1" branch, which you already mentioned?
The branch sets your repo to this commit
(https://github.com/apache/lucene-solr/com
Thanks
On 19 Apr 2011, at 08:21, Tommaso Teofili wrote:
> Hello Dave,
> the LukeRequestHandler [1] and the Analysis service [2] should help you :
> Regards,
> Tommaso
>
> [1] : http://wiki.apache.org/solr/LukeRequestHandler
> [2] :
> http://wiki.apache.org/solr/FAQ#My_search_returns_too_many_.2BA
Hello Dave,
the LukeRequestHandler [1] and the Analysis service [2] should help you:
Regards,
Tommaso
[1] : http://wiki.apache.org/solr/LukeRequestHandler
[2] :
http://wiki.apache.org/solr/FAQ#My_search_returns_too_many_.2BAC8_too_little_.2BAC8_unexpected_results.2C_how_to_debug.3F
2011/4/19 Da
Hi Isha
2011/4/18 Isha Garg
> Can anyone explain to me what the runtimeParameters specified in
> http://wiki.apache.org/solr/SolrUIMA are? Also, could you tell me
> how to integrate our own analysis engine into Solr? I am new to this.
the runtimeParameters contain parameter settings that
Hi,
I am testing index-time synonyms, stemming, etc., and it would be great to be
able to view the raw indexed data. Is there a way to do this using either Lucene
tools or the Solr admin interface?
Regards,
David
Hi All!
I want to integrate UIMA with Solr. I followed the steps in the
README file. I am using Apache Solr 3.1. The jar starts fine, but I
don't know the exact syntax in SolrJ to index my documents for the
UIMA-Solr integration. I got the following exception. Can anyone help me out