RE: Deletion of indexes.

2009-01-12 Thread sundar shankar
When I asked the user group about the same problem some time back, one other solution I got was to have a soft-delete column (a column where you maintain a delete flag). Sundar > Date: Mon, 12 Jan 2009 11:03:52 -0700 > From: rgra...@dollardays.com > To: solr-user@lucene.apache.org > Subject: Re
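
A minimal sketch of what such a soft-delete flag could look like on the Solr side; the field name "deleted" and the filter query are illustrative assumptions, not taken from the thread:

    <!-- schema.xml: hypothetical boolean flag marking logically deleted docs -->
    <field name="deleted" type="boolean" indexed="true" stored="true" default="false"/>
    <!-- queries would then exclude soft-deleted docs with a filter query, e.g. -->
    <!-- /select?q=course_name:java&fq=deleted:false -->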

RE: Best way to prevent max warmers error

2008-10-21 Thread sundar shankar
Thanks for the reply, Hoss. As far as our application goes, commits and reads are done to the index during normal business hours. However, we observed the max warmers error happening during a nightly job when the only operation is 4 parallel threads committing data to the index and optimizing it fina

Best way to prevent max warmers error

2008-10-10 Thread sundar shankar
Hi, We have an application with more than 2.5 million docs currently. It is hosted on a single box with 8 GB of memory. The number of warmers configured is 4, and a cold searcher is allowed too. The application is based on data entry, and a commit happens as often as data is entered. We optim
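
For reference, the two settings described (4 warmers, cold searcher allowed) live in the <query> section of solrconfig.xml; a sketch, assuming the standard element names:

    <!-- solrconfig.xml <query> section: cap concurrent warming searchers and
         allow requests to be served before the first searcher is warmed -->
    <maxWarmingSearchers>4</maxWarmingSearchers>
    <useColdSearcher>true</useColdSearcher>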

RE: Best practice advice needed!

2008-09-25 Thread sundar shankar
when traffic is lowest). SOLR > won't be in-sync with database, but you can always retrieve PKs from > SOLR, check database for those PKs, and 'filter' output... > > -- > Thanks, > > Fuad Efendi > 416-993-2060(cell) > Tokenizer Inc. > ==

RE: Best practice advice needed!

2008-09-25 Thread sundar shankar
Great, thanks. > Date: Thu, 25 Sep 2008 11:54:32 -0700 > Subject: Re: Best practice advice needed! > From: [EMAIL PROTECTED] > To: solr-user@lucene.apache.org > > That should be "flag it in a boolean column". --wunder > > > On 9/25/08 11:51 AM, "Walter Underwood" <[EMAIL PROTECTED]> wrote: >

Best practice advice needed!

2008-09-25 Thread sundar shankar
Hi, We have an index of courses (about 4 million docs in prod), and we have a nightly job that picks up newly added courses and updates the index accordingly. There is another enterprise system that shares the same table and could delete data from the table too. I just want to know w
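
One possible way to reconcile such external deletes (the replies in this thread suggested filtering results against the database PKs instead) is to post a delete message to Solr for rows that have disappeared; a sketch with a purely illustrative id value:

    <!-- posted to the /update handler; the id shown is hypothetical -->
    <delete><id>course-12345</id></delete>
    <commit/>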

RE: Question on how index works - runs out of disk space!

2008-09-11 Thread sundar shankar
It totally helps. Thanks, Jason! Hoss, are the parameters you mentioned available in the sample solrconfig.xml that comes with the nightly build? My schema and config files are about a year old (1.2.x versions), and I am not sure if the 1.3 files for the same have some default options li
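
The preview does not say which parameters were meant; the usual knobs for segment growth sit in the <indexDefaults>/<mainIndex> sections of solrconfig.xml, so a sketch along those lines (values are just the example-config defaults) would be:

    <!-- solrconfig.xml: merge-related settings that influence how many
         segments accumulate between optimizes -->
    <mergeFactor>10</mergeFactor>
    <maxBufferedDocs>1000</maxBufferedDocs>
    <maxMergeDocs>2147483647</maxMergeDocs>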

RE: Question on how index works - runs out of disk space!

2008-09-10 Thread sundar shankar
That's brilliant. I am just starting to wonder if there is anything at all that you guys haven't thought about ;) Thanks, that setting should be really useful. > Date: Wed, 10 Sep 2008 15:26:57 -0700 > From: [EMAIL PROTECTED] > To: solr-user@lucene.apache.org > Subject: RE: Question on how index works

RE: Question on how index works - runs out of disk space!

2008-09-10 Thread sundar shankar
"optimize"? Solr doesn't seem to fully > integrate all updates into a single index until an optimize is performed. > > Jason > > On Wed, Sep 10, 2008 at 1:05 PM, sundar shankar <[EMAIL PROTECTED]>wrote: > > > Hi All, > > We have a clus

RE: Question on how index works - runs out of disk space!

2008-09-10 Thread sundar shankar
ize is performed. > > Jason > > On Wed, Sep 10, 2008 at 1:05 PM, sundar shankar <[EMAIL PROTECTED]>wrote: > > > Hi All, > > We have a cluster of 4 servers for the application and Just one > > server for Solr. We have just about 2 million docs to ind

Question on how index works - runs out of disk space!

2008-09-10 Thread sundar shankar
Hi All, We have a cluster of 4 servers for the application and just one server for Solr. We have just about 2 million docs to index, and we never bothered to make the Solr environment clustered as Solr was delivering performance with the current setup itself. Of late we just discovered

RE: Question about autocomplete feature

2008-09-03 Thread sundar shankar
Did you reindex after the change? > Date: Wed, 27 Aug 2008 23:43:05 +0300 > From: [EMAIL PROTECTED] > To: solr-user@lucene.apache.org > Subject: Question about autocomplete feature > > > Hello. > > I'm trying to implement autocomplete feature using the snippet posted > by Dan. > (http://mail-ar

RE: How do I configure commit to run after updates

2008-08-07 Thread sundar shankar
rs for a new document or > a *deleted* document to take effect? > > Or am I missing the point? > Thanks, > Jacob > sundar shankar wrote: > > Yes commits are very expensive and optimizes are even expensive. > > Coming to your question of numdocs and 0's

RE: How do I configure commit to run after updates

2008-08-07 Thread sundar shankar
Yes, commits are very expensive, and optimizes are even more expensive. Coming to your question of numDocs and 0's in the update handler section: the numDocs that you see on top are the ones that are committed. The ones you see below are the ones you have updated but not committed. update handlers ==com

RE: How do I configure commit to run after updates

2008-08-07 Thread sundar shankar
Look at the update handlers section of the Solr stats page. I guess the URL is /admin/stats.jsp. This would give you an idea of how many docs are pending commit. > Date: Thu, 7 Aug 2008 14:53:02 -0700> From: [EMAIL PROTECTED]> To: solr-user@lucene.apache.org> Subject: Re: How do I configure comm

RE: How do I configure commit to run after updates

2008-08-07 Thread sundar shankar
Or time. 1 1 > Date: Thu, 7 Aug 2008 18:42:30 -0300> From: [EMAIL PROTECTED]> To: > solr-user@lucene.apache.org> Subject: Re: How do I configure commit to run > after updates> > You can configure the autocommit feature in solrconfig.xml > to get commit to>
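
The stripped-out values above ("1 1") appear to come from an autocommit block; a sketch of the solrconfig.xml section being discussed, with illustrative thresholds rather than the ones from the original mail:

    <!-- solrconfig.xml: commit automatically after N pending docs or N milliseconds -->
    <updateHandler class="solr.DirectUpdateHandler2">
      <autoCommit>
        <maxDocs>1000</maxDocs>
        <maxTime>60000</maxTime>
      </autoCommit>
    </updateHandler>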

RE: Unlock on startup

2008-08-07 Thread sundar shankar
On Thu, Aug 7, 2008 at 12:17 PM, sundar shankar <[EMAIL PROTECTED]> wrote: > > I had the war created from the July 8th nightly. Do you want me to take the latest and try it out?? > > Yes, please. > > -Yonik

RE: Unlock on startup

2008-08-07 Thread sundar shankar
mmitted a patch for this on Jul 12th.> > -Yonik> > On > Tue, Aug 5, 2008 at 1:38 PM, sundar shankar <[EMAIL PROTECTED]> wrote:> > Hi > All,> > I am having to test solr indexing quite a bit on my local and dev > environments. I had the> >> > true

RE: Unlock on startup

2008-08-07 Thread sundar shankar
Nope, I don't see any error logs when my JBoss starts up. I haven't added the Solr classes to my log4j.xml though. Should I add them and try again? What does single do, btw? Do I need to use it in conjunction with the unlock-on-startup setting, or do I use it separately?? > Date: Wed, 6 Aug 2008 16:57:23 -0700> From: [EMAIL

RE: Out of memory on Solr sorting

2008-08-05 Thread sundar shankar
Oh wow, I didn't know that was the case. I am completely baffled now. Back to square one, I guess. :) > Date: Tue, 5 Aug 2008 14:31:28 -0700> From: [EMAIL PROTECTED]> To: solr-user@lucene.apache.org> Subject: RE: Out of memory on Solr sorting> > > Sundar, very strange that increase of size

RE: Out of memory on Solr sorting

2008-08-05 Thread sundar shankar
Yes, this is what I did. I got an out-of-memory error while executing a query with a sort param: 1. Stopped the JBoss server. 2. In these 3 params, I changed "size" from 512 to 2048. 3. Restarted the server. 4. Ran the query again. It worked just fine after that. I am currently reindexing, repl
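
The "3 params" are presumably the cache entries in solrconfig.xml; a sketch of the kind of change described, assuming the standard caches from the example config:

    <!-- solrconfig.xml: size raised from 512 to 2048 on each cache -->
    <filterCache class="solr.LRUCache" size="2048" initialSize="512" autowarmCount="256"/>
    <queryResultCache class="solr.LRUCache" size="2048" initialSize="512" autowarmCount="256"/>
    <documentCache class="solr.LRUCache" size="2048" initialSize="512" autowarmCount="0"/>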

RE: Out of memory on Solr sorting

2008-08-05 Thread sundar shankar
ng LRU cache helps you:> - you are probably using 'tokenized' field > for sorting (could you > confirm please?)...> > ...you should use > 'non-tokenized single-valued non-boolean' for better > performance of > sorting...> > > Fuad Efendi>
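
A minimal sketch of the advice quoted above: sort on an untokenized string copy of the field instead of the tokenized field itself (the field names here are made up for illustration):

    <!-- schema.xml: keep the tokenized field for searching, add a string copy for sorting -->
    <field name="course_name" type="text" indexed="true" stored="true"/>
    <field name="course_name_sort" type="string" indexed="true" stored="false"/>
    <copyField source="course_name" dest="course_name_sort"/>
    <!-- then sort on the untokenized copy: ...&sort=course_name_sort asc -->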

RE: Out of memory on Solr sorting

2008-08-05 Thread sundar shankar
od if you could do some profiling on your > Solr app.> > I've done it during the indexing process so I could figure out > what > > was going on in the OutOfMemoryErrors I was getting.> >> > But you > won't definitelly need to have as much memory as your

Unlock on startup

2008-08-05 Thread sundar shankar
Hi All, I am having to test Solr indexing quite a bit on my local and dev environments. I had the unlock-on-startup option set to true, but restarting my server still doesn't seem to remove the write lock file. Is there some other configuration I might have to do to get this fixed? My Configurations: Solr
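
The setting lost from the preview above is presumably unlockOnStartup; a sketch of that part of the <mainIndex> section of solrconfig.xml, together with the lockType option asked about elsewhere in this thread ("single" tells Solr it is the only process that will ever modify the index):

    <!-- solrconfig.xml <mainIndex>: clear a stale write lock when the server starts -->
    <unlockOnStartup>true</unlockOnStartup>
    <lockType>single</lockType>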

RE: Out of memory on Solr sorting

2008-07-23 Thread sundar shankar
ext); you need not tokenize > > this field; you need not store TermVector. > > > > for 2 000 000 documents with simple untokenized text field such as > > title of book (256 bytes) you need probably 512 000 000 bytes per > > Searcher, and as Mark mentioned you should limi

RE: Out of memory on Solr sorting

2008-07-22 Thread sundar shankar
SEVERE: java.lang.OutOfMemoryError: allocLargeObjectOrArray - Object size: 100767936, Num elements: 25191979 at org.apache.lucene.search.F

RE: Out of memory on Solr sorting

2008-07-22 Thread sundar shankar
you can get by. > Have you checked out all the solr stats on > the admin page? Maybe you are > trying to load up to many searchers at a > time. I think there is a setting > to limit the number of searchers that > can be on deck...> > sundar shankar > wrote:> > Hi

RE: Out of memory on Solr sorting

2008-07-22 Thread sundar shankar
Hi Mark, I am still getting an OOM even after increasing the heap to 1024 MB. The docset I have is numDocs: 1138976, maxDoc: 1180554. Not sure how much more I would need. Is there any other way out of this? I noticed another interesting behavior. I have a Solr setup on a personal B

RE: Out of memory on Solr sorting

2008-07-22 Thread sundar shankar
.have you upped your xmx > setting? I think you can roughly say a 2 million doc index would need > 40-50 MB (depending and rough, but to give an idea) per field your > sorting on.> > - Mark> > sundar shankar wrote:> > Thanks Fuad.> > But why does just sorting

RE: Out of memory on Solr sorting

2008-07-22 Thread sundar shankar
ugh, but to give an idea) per field your > sorting on. > > - Mark > > sundar shankar wrote: > > Thanks Fuad. > > But why does just sorting provide an OOM. I executed the > > query without adding the sort clause it executed perfectly. In fa

RE: Out of memory on Solr sorting

2008-07-22 Thread sundar shankar
and it ensures that 1024M is available at startup)> > > OOM happens also with fragmented memory, when the application requests a big > > contiguous fragment and GC is unable to optimize; looks like your > > application requests a little and memory is not available...> > > Quoti

RE: Out of memory on Solr sorting

2008-07-22 Thread sundar shankar
> From: [EMAIL PROTECTED] > To: solr-user@lucene.apache.org > Subject: Out of memory on Solr sorting > Date: Tue, 22 Jul 2008 19:11:02 + > > > Hi, > Sorry again, fellows. I am not sure what's happening. The day with Solr is bad > for me I guess. EZMLM didn't let me send any mails this morning

Out of memory on Solr sorting

2008-07-22 Thread sundar shankar
Hi, Sorry again, fellows. I am not sure what's happening. The day with Solr is bad for me, I guess. EZMLM didn't let me send any mails this morning. It asked me to confirm my subscription, and when I did, it said I was already a member. Now my mails are all coming out bad. Sorry for troubling y'all this ba

RE: OOM on Solr Sort

2008-07-22 Thread sundar shankar
Sorry for that. I didn't realise how my mail had finally arrived. Sorry!!! From: [EMAIL PROTECTED] To: solr-user@lucene.apache.org Subject: OOM on Solr Sort Date: Tue, 22 Jul 2008 18:33:43 + Hi, We are developing a product in an agile manner and the current implementation has data of size ju

OOM on Solr Sort

2008-07-22 Thread sundar shankar
Hi, We are developing a product in an agile manner and the current implementation has data of size just about 800 megs in dev. The memory allocated to Solr on dev (dual-core Linux box) is 128-512 MB. My config = true My Field ===

RE: Wiki for 1.3

2008-07-15 Thread sundar shankar
THANKS!!! > Date: Tue, 15 Jul 2008 11:38:06 -0700> From: [EMAIL PROTECTED]> To: > solr-user@lucene.apache.org> Subject: RE: Wiki for 1.3> > > : Thanks. Do we > expect the same some time soon. I agree that the user > : community have shed > light in with a lot of examples. Just wanna know if > :

RE: Wiki for 1.3

2008-07-15 Thread sundar shankar
useful in the past for me. > Date: Tue, 15 Jul 2008 11:26:16 +1000> From: [EMAIL PROTECTED]> To: > solr-user@lucene.apache.org> Subject: Re: Wiki for 1.3> > On Mon, 14 Jul 2008 > 23:25:25 +> sundar shankar <[EMAIL PROTECTED]> wrote:> > > Thanks

RE: Wiki for 1.3

2008-07-14 Thread sundar shankar
official&hs=fUX&q=EdgeNGramFilterFactory+solr++wiki&btnG=Search -S > Date: Tue, 15 Jul 2008 07:54:27 +1000> From: [EMAIL PROTECTED]> To: > solr-user@lucene.apache.org> Subject: Re: Wiki for 1.3> > On Mon, 14 Jul 2008 > 15:52:35 +> sundar shankar

RE: Solr searching issue..

2008-07-14 Thread sundar shankar
Copy field dest="text". I am not sure if you can copy into text or something like that. We copy it into a field of type text or string, etc. Plus, what is your query string? What gives you no results? How do you index it?? Need more clues to figure out the answer, dude :) > From: [EMAIL PROTECTED]> To: solr
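
For context, the catch-all copy being described follows the standard copyField pattern from the example schema; a sketch, with a hypothetical source field name:

    <!-- schema.xml: copy individual fields into the catch-all "text" field
         that is used as the default search field -->
    <field name="title" type="string" indexed="true" stored="true"/>
    <field name="text" type="text" indexed="true" stored="false" multiValued="true"/>
    <copyField source="title" dest="text"/>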

RE: Max Warming searchers error

2008-07-14 Thread sundar shankar
Oops, sorry about that. > Date: Sun, 13 Jul 2008 18:13:51 -0700> From: [EMAIL PROTECTED]> To: solr-user@lucene.apache.org> Subject: Re: Max Warming searchers error> > > : > Subject: Max Warming searchers error> : In-Reply-To: <[EMAIL PROTECTED]>> : > References: <[EMAIL PROTECTED]>> > > h

RE: Wiki for 1.3

2008-07-14 Thread sundar shankar
Hi Hoss, I was talking about classes like EdgeNGramFilterFactory, PatternReplaceFilterFactory, etc. I didn't find these in the 1.2 jar. Where do I find the wiki for these and the specific classes introduced in 1.3? -Sundar > Date: Sun, 13 Jul 2008 09:44:20 -0700 > From: [EMAIL PROTECTED] >

RE: Max Warming searchers error

2008-07-11 Thread sundar shankar
Re: Max Warming searchers error> > > You're trying to commit too fast and warming searchers are stacking up.> Do > less warming of caches, or space out your commits a little more.> > -Yonik> > > On Fri, Jul 11, 2008 at 11:56 AM, sundar shankar> <[EMAIL

Max Warming searchers error

2008-07-11 Thread sundar shankar
Hi, I am getting the "Error opening new searcher. exceeded limit of maxWarmingSearchers=4, try again later." error. My configuration includes enabling cold searchers (useColdSearcher set to true) and having the number of maxWarmingSearchers as 4. We expect a max of 40 concurrent users but an average of 5-10 at most times. W

RE: Solr searching issue..

2008-07-11 Thread sundar shankar
What was the type of the field that you were using? I guess you could achieve it with a simple swap of text and string. > From: [EMAIL PROTECTED]> To: solr-user@lucene.apache.org> Subject: Solr > searching issue..> Date: Fri, 11 Jul 2008 11:28:50 +0100> > > Hi solr-users, > > > version type: nigh

Wiki for 1.3

2008-07-11 Thread sundar shankar
Hi, I was recently looking for details of 1.3-specific analyzers and filters on the Solr wiki and was unable to find them. Could anyone please point me to a place where I can find some documentation on the same? Thanks, Sundar

RE: Auto complete

2008-07-10 Thread sundar shankar
pattern="([^a-z0-9])" replacement="" replace="all" /> class="solr.EdgeNGramFilterFactory" maxGramSize="100" minGramSize="1" /> class="solr.KeywordTokeni
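
The preview above preserves only attribute fragments of the field type being quoted; a hedged reconstruction of an edge-n-gram autocomplete type using those same attribute values (the element structure, the lowercase filter, and the type/field names are assumptions):

    <!-- schema.xml: hypothetical "autocomplete" field type rebuilt from the quoted fragments -->
    <fieldType name="autocomplete" class="solr.TextField">
      <analyzer type="index">
        <tokenizer class="solr.KeywordTokenizerFactory"/>
        <filter class="solr.LowerCaseFilterFactory"/>
        <filter class="solr.PatternReplaceFilterFactory"
                pattern="([^a-z0-9])" replacement="" replace="all"/>
        <filter class="solr.EdgeNGramFilterFactory" minGramSize="1" maxGramSize="100"/>
      </analyzer>
      <analyzer type="query">
        <tokenizer class="solr.KeywordTokenizerFactory"/>
        <filter class="solr.LowerCaseFilterFactory"/>
        <filter class="solr.PatternReplaceFilterFactory"
                pattern="([^a-z0-9])" replacement="" replace="all"/>
      </analyzer>
    </fieldType>
    <field name="ac" type="autocomplete" indexed="true" stored="true"/>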

RE: Auto complete

2008-07-08 Thread sundar shankar
e to release.> > On Tue, Jul 8, 2008 at 10:38 PM, sundar > shankar <[EMAIL PROTECTED]>> wrote:> > > Hi Daniel,> > Thanks for the code. I > just did observe that you have> > EdgeNGramFilterFactory. I didnt find it in > the 1.2 Solr version. Which>

RE: Auto complete

2008-07-08 Thread sundar shankar
replacement="" replace="all" />> class="solr.PatternReplaceFilterFactory"> pattern="^(.{20})(.*)?" > replacement="$1" replace="all" />> > > ...> name="ac" type="autocomplete" indexed="t

Auto complete

2008-07-07 Thread sundar shankar
Hi All, I have been using Solr for some time and am having trouble with an auto-complete feature that I have been trying to incorporate. I am indexing via a database column to Solr field mapping. I have tried various configs that were mentioned in the Solr user community suggestions and