The accepted logLevel values are:
error, debug, warn, trace, info
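For reference, the thread below suggests logLevel is set per entity in the DIH data-config; a minimal sketch, where the dataSource, entity name, and query are hypothetical:

```xml
<!-- hypothetical DIH data-config sketch; entity name and query are made up -->
<dataConfig>
  <dataSource driver="org.hsqldb.jdbcDriver" url="jdbc:hsqldb:/tmp/example" user="sa" />
  <document>
    <!-- logLevel on the entity, as described in the thread -->
    <entity name="item" query="select * from item" logLevel="debug">
      <field column="id" name="id" />
    </entity>
  </document>
</dataConfig>
```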
2009/10/18 Noble Paul നോബിള് नोब्ळ् :
> On Sun, Oct 18, 2009 at 4:16 AM, Lance Norskog wrote:
>> I had this problem also, but I was using the Jetty example. I fail at
>> logging configurations about 90% of the time, so I assumed it w
Hi, I browsed through the Solr docs and user forums, and what I infer is
that we can't use Solr to store relational mappings (foreign keys).
But I just want to know if there is any chance of doing the same.
I have two tables: a user table (with 100,000 entries) and a project table
(with 200 entries).
User t
Hi,
here's what you could do:
* Use multivalued fields instead of 'comma separated values', so you
won't need a separator.
* Store project identifiers in the user index.
Denormalised project information in a user entry will inevitably require
re-indexing a lot of user entries whenever project info changes
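The multivalued-field suggestion above would look roughly like this in schema.xml; a sketch, where the field name "projects" is hypothetical:

```xml
<!-- hypothetical field holding one project identifier per value,
     instead of a comma-separated string -->
<field name="projects" type="string" indexed="true" stored="true" multiValued="true"/>
```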
Hi,
I'm using the terms component for an autosuggest feature and it works
well, but I've hit an issue with truncation:
Take the following query:
http://localhost:8983/solr/terms?terms.fl=meta_name_t&terms.prefix=switch
This is the response:
0
1
35
7
In this case the word 'switch
On Oct 19, 2009, at 6:23 AM, Paul Forsyth wrote:
Hi,
I'm using the terms component for an autosuggest feature and it
works well but i've hit an issue with truncation:
Take the following query:
http://localhost:8983/solr/terms?terms.fl=meta_name_t&terms.prefix=switch
This is the response:
Hello,
First, sorry for my English :)
Since last Friday I have been trying to define in schema.xml a new field
that is the concatenation of two other fields.
So in schema.xml I have these fields:
field3
In my .csv file, data are stored like this:
field1 ; field2
toto ; titi
In my mind field3
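For what it's worth, the usual schema.xml mechanism for combining two fields is copyField; note that it copies each source value into the destination field (which therefore generally needs multiValued="true") rather than producing a single concatenated string. A sketch, reusing the field names from the mail and assuming a tokenized "text" field type:

```xml
<!-- field3 receives the values of both field1 and field2 -->
<field name="field3" type="text" indexed="true" stored="true" multiValued="true"/>
<copyField source="field1" dest="field3"/>
<copyField source="field2" dest="field3"/>
```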
Lance, Noble:
I set logLevel="debug" in my dihconfig.xml at the entity level. Got no
output! I then gave up digging into this further because I was pressed for
time to figure out how to increase the speed of importing into Solr with
DIH...
Cheers,
- Bill
--
From what I've read/found, MoreLikeThis doesn't support the dismax
parameters that are available in the StandardRequestHandler (such as bq). Is
it possible that we might get support for those parameters some time? What
are the issues with MLT Handler inheriting from the StandardRequestHandler
inst
Thanks Grant,
I'm still a bit of a newbie with Solr :)
I was able to add a new non-stemming field along with a copyField, and
that seems to have done the trick :)
Until I tried this I didn't quite realise what copyFields did...
Thanks again,
Paul
On 19 Oct 2009, at 11:23, Paul Forsyth wrote:
The boost (index time) does not work when I am searching for a word with
a wildcard appended to the end.
I stumbled onto this "feature" and it's pretty much a show stopper for me.
I am implementing a live search feature where I always have a wildcard
in the last word that is currently being wri
awesome. Thanks for figuring this out guys
wojtekpia wrote:
>
> Good catch. I was testing on a nightly build from mid-July. I just tested
> on a similar deployment with nightly code from Oct 5th and everything
> seems to work.
>
> My mid-July deployment breaks on sints, integers, sdouble, d
On Oct 19, 2009, at 7:21 AM, sophSophie wrote:
Hello,
firstly sorry for my english :)
Since last Friday I try to define in shema.xml a new field that is the
concatenation of two other fields.
So in schemal.xml I have these fields :
field3
In my .csv file date are stored like that
Hi,
My application indexes a huge number of documents (in the millions). Below
is a snapshot of my code where I add all documents to Solr, and then
at the end issue a commit command. I use SolrJ. I find that the last few
documents are not committed to Solr. Is this because adding documents
to Solr took lo
A few questions to help the troubleshooting.
Solr version #?
Is there just 1 commit through Solrj for the millions of documents?
Or do you do it on a regular interval (every 100k documents for example) and
then one at the end to be sure?
How are you observing that the last few didn't make it
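The batching pattern described above (a commit every 100k documents, plus one final commit to catch the tail) can be sketched in plain Java. This is a sketch only: StubSolrClient is a hypothetical stand-in for a real SolrJ client so the example is self-contained, and BATCH_SIZE matches the interval mentioned in the thread.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical stand-in for a SolrJ client: just counts adds and commits.
class StubSolrClient {
    int added = 0;
    int commits = 0;
    void add(String doc) { added++; }
    void commit() { commits++; }
}

public class BatchIndexer {
    static final int BATCH_SIZE = 100_000;

    static void index(StubSolrClient solr, List<String> docs) {
        int sinceCommit = 0;
        for (String doc : docs) {
            solr.add(doc);
            if (++sinceCommit >= BATCH_SIZE) {
                solr.commit();   // periodic commit every BATCH_SIZE docs
                sinceCommit = 0;
            }
        }
        if (sinceCommit > 0) {
            solr.commit();       // final commit so the tail is not lost
        }
    }

    public static void main(String[] args) {
        StubSolrClient solr = new StubSolrClient();
        List<String> docs = new ArrayList<>();
        for (int i = 0; i < 250_000; i++) docs.add("doc" + i);
        index(solr, docs);
        // 250,000 docs -> commits at 100k and 200k, plus one final commit
        System.out.println(solr.added + " docs, " + solr.commits + " commits");
    }
}
```

The final commit outside the loop is the key detail: without it, any documents added after the last full batch are never committed, which matches the symptom reported above.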
It seems like no, and should be an easy change. I'm putting newlines
after the commas so the large shards list doesn't scroll off the
screen.
If a filter query matches nothing, then no additional query should be
performed and no results returned? I don't think we have this today?
I have been trying to integrate the WordNet dictionary with Solr. I used the
link below to generate indexes using the Prolog package from WordNet.
http://chencer.com/techno/java/lucene/wordnet.html
And here are the changes I did in solr :
Schema.xml changes:
word
dict
solr.Ind
I was wondering if anyone might have any insight on the following
problem. I'm using the latest Solr code from SVN and indexing around 17m
XML records via DIH. With perfect replicability, the following exception
is thrown on the same aggregate file (#236, and each XML file has ~50k
records), al
On Mon, Oct 19, 2009 at 2:55 PM, Jason Rutherglen
wrote:
> If a filter query matches nothing, then no additional query should be
> performed and no results returned? I don't think we have this today?
No, but this is a fast operation anyway (In Solr 1.4 at least).
Another thing to watch out for
Thanks for the report Aaron, this definitely looks like a Lucene bug,
and I've opened
https://issues.apache.org/jira/browse/LUCENE-1995
Can you follow up there (I asked about your index settings).
-Yonik
http://www.lucidimagination.com
On Mon, Oct 19, 2009 at 3:04 PM, Aaron McKee wrote:
> I wa
: I won't have access to the code until monday, but i'm pretty sure this
: should be a fairly trivial change (just un-set the estimator on the
: CacheEntry objects)
done, see notes in SOLR-1292
-Hoss
Solr version is 1.3
I am indexing total of 1.4 million documents. Yes, I commit(waitFlush="true"
waitSearcher="true") every 100k documents and then one at the end.
I have a counter next to the addDoc(SolrDocument) statement to keep track of
the number of documents added. When I query Solr after commit,
Yonik,
> this is a fast operation anyway
Can you elaborate on why this is a fast operation?
Basically there's a distributed query with a filter, where on a
number of the servers the filter query isn't matching anything;
however, I'm seeing load on those servers (where nothing
matches), so I'm as
Version 0.9.3 of the PECL extension for Solr has just been released.
Some of the methods have been updated and more get* methods have been added
to the Query builder classes.
The user level documentation was also updated to make the installation
instructions a lot clearer.
The latest documentati
On Mon, Oct 19, 2009 at 4:45 PM, Jason Rutherglen
wrote:
> Yonik,
>
>> this is a fast operation anyway
>
> Can you elaborate on why this is a fast operation?
The scorers will never really be used.
The query will be weighted and scorers will be created, but the filter
will be checked first and ret
I have a small core performing deltas quickly (core00), and a large core
performing deltas slowly (core01), both on the same set of documents. The
delta core is cleaned nightly. As you can imagine, at times there are two
versions of a document, one in each core. When I execute a query that
matches
> The boost (index time) does not work
> when i am searching for a word with a wildcard appended to
> the end.
> I stumbled on to this "feature" and its pretty much a show
> stopper for me.
> I am implementing a live search feature where i always have
> an wildcard in the last word that is currentl
Distributed Search is designed only for disjoint cores.
The document list from each core is returned sorted by the relevance
score. The distributed searcher merges these sorted lists. Solr does
not implement "distributed IDF", which essentially means distributed
coordinated scoring. All scoring ha
commit(waitFlush="true", waitSearcher="true") waits for the entire
operation and when it finishes, all 1 million documents should be
searchable.
Please try this same test with Solr 1.4 and post your results. To make
it easier, here is the first release candidate:
http://people.apache.org/~gsinge
We have an indexing script which has been running for a couple of weeks
now without problems. It indexes documents and then periodically commits
(which is a tad redundant, I suppose), both via the HTTP interface.
All documents are indexed to a master and a slave rsyncs them off using
the standard
Ok, thanks, new Lucene 2.9 features.
On Mon, Oct 19, 2009 at 2:33 PM, Yonik Seeley
wrote:
> On Mon, Oct 19, 2009 at 4:45 PM, Jason Rutherglen
> wrote:
>> Yonik,
>>
>>> this is a fast operation anyway
>>
>> Can you elaborate on why this is a fast operation?
>
> The scorers will never really be us
Hi,
Is it possible to get the matching terms from your query for each document
returned, without using highlighting?
For example if you have the query "aaa bbb ccc" and one of the documents has
the term "aaa" and another document has the term "bbb" and "ccc".
To have Solr return:
Document 1: "
On Mon, Oct 19, 2009 at 7:39 PM, Lance Norskog wrote:
> commit(waitFlush="true", waitSearcher="true") waits for the entire
> operation and when it finishes, all 1 million documents should be
> searchable.
That waits for the commit to complete, but not any adds that may be
happening in parallel (
Although shards should be disjoint, Solr "tolerates" duplication
(won't return duplicates in the main results list, but doesn't make
any effort to correct facet counts, etc).
Currently, whichever shard responds first wins.
The relevant code is around line 420 in QueryComponent.java:
Str
If your query looks like this -
q=(myField:aaa myField:bbb myField:ccc)
you would get the desired results for any tokenized field (e.g. text) called
myField.
Cheers
Avlesh
On Tue, Oct 20, 2009 at 6:28 AM, angry127 wrote:
>
> Hi,
>
> Is it possible to get the matching terms from your query for ea
Hi Jerome,
Thanks for your response.
I never knew about multivalued fields.
I will give them a try and see if they suit my need.
But I don't understand this:
* You could have a mixed index with user and project entries in the
same index, so if you search for a name, you'd find users and proje