When I asked the user group some time back about the same problem, one other
solution I got was to have a soft-delete column (a column where you maintain a
delete flag).
Sundar
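The soft-delete idea above can be sketched as a schema change plus a filter query; the field name `deleted` and its attributes here are illustrative assumptions, not from the thread:

```xml
<!-- schema.xml (sketch): a delete-flag column mirrored into the index -->
<field name="deleted" type="boolean" indexed="true" stored="true" default="false"/>
```

A delete then becomes a re-index of the doc with `deleted=true`, and every search adds `fq=deleted:false` so flagged docs never reach users.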
> Date: Mon, 12 Jan 2009 11:03:52 -0700
> From: rgra...@dollardays.com
> To: solr-user@lucene.apache.org
> Subject: Re
Thanks for the reply, Hoss.
As far as our application goes, commits and reads are done to the index during
normal business hours. However, we observed the max warmers error happening
during a nightly job when the only operation is 4 parallel threads committing data
to the index and optimizing it fina
Hi,
We have an application with more than 2.5 million docs currently. It is hosted
on a single box with 8 GB of memory. The number of warmers configured is 4 and
a cold searcher is allowed too. The application is based on data entry, and a
commit happens as often as data is entered. We optim
when traffic is lowest). SOLR
> won't be in-sync with database, but you can always retrieve PKs from
> SOLR, check database for those PKs, and 'filter' output...
>
> --
> Thanks,
>
> Fuad Efendi
> 416-993-2060(cell)
> Tokenizer Inc.
> ==
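Fuad's "retrieve PKs from Solr, check the database, filter" suggestion above can be sketched in plain Python; the function name and the stubbed DB lookup are illustrative, not from the thread:

```python
def filter_stale_hits(solr_pks, live_db_pks):
    """Drop Solr hits whose primary keys no longer exist in the database.

    solr_pks: ordered list of PKs returned by a Solr query.
    live_db_pks: set of PKs still present in the database table.
    Returns the Solr ordering with stale (deleted) rows filtered out.
    """
    return [pk for pk in solr_pks if pk in live_db_pks]

# Example: Solr still returns 102 and 104, but they were deleted from the DB.
hits = [101, 102, 103, 104, 105]
live = {101, 103, 105}
print(filter_stale_hits(hits, live))  # → [101, 103, 105]
```

In practice `live_db_pks` would come from a `SELECT pk FROM table WHERE pk IN (...)` against the hit list, so the index can stay slightly out of sync without showing deleted rows.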
Great, thanks.
> Date: Thu, 25 Sep 2008 11:54:32 -0700
> Subject: Re: Best practice advice needed!
> From: [EMAIL PROTECTED]
> To: solr-user@lucene.apache.org
>
> That should be "flag it in a boolean column". --wunder
>
>
> On 9/25/08 11:51 AM, "Walter Underwood" <[EMAIL PROTECTED]> wrote:
>
Hi,
We have an index of courses (about 4 million docs in prod) and we have a
nightly job that picks up newly added courses and updates the index
accordingly. There is another enterprise system that shares the same table and
could delete data from the table too.
I just want to know w
It totally helps. Thanks, Jason!
Hoss,
Are the parameters you mentioned available in the sample solrconfig.xml
that comes with the nightly build? My schema and config files are about a year
old (from the 1.2.x version) and I am not sure if the 1.3 files for the same have some
default options li
That's brilliant. I am just starting to wonder if there is anything at all
that you guys haven't thought about ;) Thanks, that setting should be
really useful.
> Date: Wed, 10 Sep 2008 15:26:57 -0700
> From: [EMAIL PROTECTED]
> To: solr-user@lucene.apache.org
> Subject: RE: Question on how index works
"optimize"? Solr doesn't seem to fully
> integrate all updates into a single index until an optimize is performed.
>
> Jason
>
> On Wed, Sep 10, 2008 at 1:05 PM, sundar shankar <[EMAIL PROTECTED]>wrote:
>
> > Hi All,
> > We have a clus
Hi All,
We have a cluster of 4 servers for the application and just one
server for Solr. We have just about 2 million docs to index and we never
bothered to make the Solr environment clustered, as Solr was delivering
performance with the current setup itself. Of late we just discovered
Did you reindex after the change?
> Date: Wed, 27 Aug 2008 23:43:05 +0300
> From: [EMAIL PROTECTED]
> To: solr-user@lucene.apache.org
> Subject: Question about autocomplete feature
>
>
> Hello.
>
> I'm trying to implement autocomplete feature using the snippet posted
> by Dan.
> (http://mail-ar
rs for a new document or
> a *deleted* document to take effect?
> Or am I missing the point?
> Thanks,
> Jacob
> sundar shankar wrote:
> > Yes commits are very expensive and optimizes are even expensive.
> > Coming to your question of numdocs and 0's
Yes, commits are very expensive and optimizes are even more expensive.
Coming to your question of numdocs and the 0's in the update handler section:
the numdocs that you see on top are the ones that are committed. The ones you see
below are the ones you have updated but not committed.
Look at the update handlers section of the Solr stats page. I guess the URL is
/admin/stats.jsp. This would give you an idea of how many docs are pending commit.
> Date: Thu, 7 Aug 2008 14:53:02 -0700
> From: [EMAIL PROTECTED]
> To: solr-user@lucene.apache.org
> Subject: Re: How do I configure comm
Or time.
> Date: Thu, 7 Aug 2008 18:42:30 -0300
> From: [EMAIL PROTECTED]
> To: solr-user@lucene.apache.org
> Subject: Re: How do I configure commit to run after updates
>
> You can configure the autocommit feature in solrconfig.xml
> to get commit to
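For reference, the autocommit feature mentioned in this reply is configured on the update handler in solrconfig.xml; the threshold values below are illustrative, not taken from the thread:

```xml
<!-- solrconfig.xml (sketch): commit automatically by doc count or elapsed time -->
<updateHandler class="solr.DirectUpdateHandler2">
  <autoCommit>
    <maxDocs>1000</maxDocs>   <!-- commit once this many docs are pending -->
    <maxTime>60000</maxTime>  <!-- ...or after this many milliseconds -->
  </autoCommit>
</updateHandler>
```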
n Thu, Aug 7, 2008 at 12:17 PM, sundar shankar <[EMAIL PROTECTED]> wrote:
> > I had the war created from the July 8th nightly. Do you want me to take the
> > latest and try it out??
>
> Yes, please.
>
> -Yonik
mmitted a patch for this on Jul 12th.
>
> -Yonik
>
> On Tue, Aug 5, 2008 at 1:38 PM, sundar shankar <[EMAIL PROTECTED]> wrote:
> > Hi All,
> > I am having to test solr indexing quite a bit on my local and dev
> > environments. I had the
> >
> > true
Nope, I don't see any error logs when my JBoss starts up. I haven't added Solr
classes to my log4j.xml though. Should I add them and try again?
What does single do, btw? Do I need to use this in
conjunction with or do I use it separately??
> Date: Wed, 6 Aug 2008 16:57:23 -0700
> From: [EMAIL
Oh wow, I didn't know that was the case. I am completely baffled now. Back
to square one, I guess. :)
> Date: Tue, 5 Aug 2008 14:31:28 -0700
> From: [EMAIL PROTECTED]
> To: solr-user@lucene.apache.org
> Subject: RE: Out of memory on Solr sorting
>
> Sundar, very strange that increase of size
Yes, this is what I did. I got an out-of-memory error while executing a query with a
sort param:
1. Stopped the JBoss server.
2. In these 3 params, I changed "size" from 512 to 2048.
3. Restarted the server.
4. Ran the query again.
It worked just fine after that. I am currently reindexing, repl
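The "3 params" whose size was raised are presumably the standard caches in the <query> section of solrconfig.xml; a sketch with the changed value (the class and the other attributes are illustrative assumptions):

```xml
<!-- solrconfig.xml (sketch): the three caches, size raised from 512 to 2048 -->
<filterCache class="solr.LRUCache" size="2048" initialSize="512" autowarmCount="256"/>
<queryResultCache class="solr.LRUCache" size="2048" initialSize="512" autowarmCount="256"/>
<documentCache class="solr.LRUCache" size="2048" initialSize="512" autowarmCount="0"/>
```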
ng LRU cache helps you:
> - you are probably using a 'tokenized' field for sorting (could you
> confirm please?)...
> ...you should use 'non-tokenized single-valued non-boolean' for better
> performance of sorting...
>
> Fuad Efendi
od if you could do some profiling on your Solr app.
> I've done it during the indexing process so I could figure out what
> was going on in the OutOfMemoryErrors I was getting.
>
> But you definitely won't need to have as much memory as your
Hi All,
I am having to test Solr indexing quite a bit on my local and dev
environments. I had the
true.
But restarting my server still doesn't seem to remove the write lock file. Is
there some other configuration that I might have to do to get this fixed?
My configuration:
Solr
ext); you need not tokenize
> > this field; you need not store TermVector.
> >
> > for 2 000 000 documents with simple untokenized text field such as
> > title of book (256 bytes) you need probably 512 000 000 bytes per
> > Searcher, and as Mark mentioned you should limi
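The per-searcher estimate quoted above is straightforward arithmetic; a quick sanity check (the MB conversion is added here for illustration):

```python
docs = 2_000_000
bytes_per_value = 256  # untokenized text field, e.g. a book title, per the quote
per_searcher = docs * bytes_per_value

print(per_searcher)                 # 512000000 bytes
print(round(per_searcher / 2**20))  # roughly 488 MB per Searcher
```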
>>>>>> SEVERE: java.lang.OutOfMemoryError: allocLargeObjectOrArray - Object
>>>>>> size: 100767936, Num elements: 25191979
>>>>>> at org.apache.lucene.search.F
you can get by.
> Have you checked out all the Solr stats on the admin page? Maybe you are
> trying to load up too many searchers at a time. I think there is a setting
> to limit the number of searchers that can be on deck...
>
> sundar shankar wrote:
> > Hi
Hi Mark,
I am still getting an OOM even after increasing the heap to 1024 MB.
The docset I have is
numDocs : 1138976 maxDoc : 1180554
Not sure how much more I would need. Is there any other way out of this? I
noticed another interesting behavior. I have a Solr setup on a personal B
.have you upped your xmx setting? I
think you can roughly say a 2 million doc index would need 40-50 MB
(depending and rough, but to give an idea) per field you're sorting on.
>
> - Mark
>
> sundar shankar wrote:
> > Thanks Fuad.
> > But why does just sorting
ugh, but to give an idea) per field you're
> sorting on.
>
> - Mark
>
> sundar shankar wrote:
> > Thanks Fuad.
> > But why does just sorting produce an OOM? I executed the
> > query without the sort clause and it executed perfectly. In fa
and it ensures that 1024M is available at startup)
>
> OOM happens also with fragmented memory, when the application requests a big
> contiguous fragment and GC is unable to optimize; looks like your
> application requests a little and memory is not available...
>
> Quoti
> From: [EMAIL PROTECTED]
> To: solr-user@lucene.apache.org
> Subject: Out of memory on Solr sorting
> Date: Tue, 22 Jul 2008 19:11:02 +
>
>
> Hi,
> Sorry again, fellows. I am not sure what's happening. The day with Solr is bad
> for me, I guess. EZMLM didn't let me send any mails this morning
Hi,
Sorry again, fellows. I am not sure what's happening. The day with Solr is bad for
me, I guess. EZMLM didn't let me send any mails this morning. It asked me to confirm
my subscription and when I did, it said I was already a member. Now my mails are
all coming out bad. Sorry for troubling y'all this ba
Sorry for that. I didn't realise how my mail had finally arrived. Sorry!!!
From: [EMAIL PROTECTED]
To: solr-user@lucene.apache.org
Subject: OOM on Solr Sort
Date: Tue, 22 Jul 2008 18:33:43 +
Hi,
We are developing a product in an agile manner and the current
implementation has data of size ju
Hi, we are developing a product in an agile manner and the current
implementation has data of just about 800 MB in dev. The memory
allocated to Solr on dev (dual-core Linux box) is 128-512 MB. My config =
true. My Field ===
THANKS!!!
> Date: Tue, 15 Jul 2008 11:38:06 -0700
> From: [EMAIL PROTECTED]
> To: solr-user@lucene.apache.org
> Subject: RE: Wiki for 1.3
>
> : Thanks. Do we expect the same some time soon. I agree that the user
> : community have shed light in with a lot of examples. Just wanna know if
> :
useful in the past
for me.
> Date: Tue, 15 Jul 2008 11:26:16 +1000
> From: [EMAIL PROTECTED]
> To: solr-user@lucene.apache.org
> Subject: Re: Wiki for 1.3
>
> On Mon, 14 Jul 2008 23:25:25 +, sundar shankar <[EMAIL PROTECTED]> wrote:
> > Thanks
official&hs=fUX&q=EdgeNGramFilterFactory+solr++wiki&btnG=Search
-S
> Date: Tue, 15 Jul 2008 07:54:27 +1000
> From: [EMAIL PROTECTED]
> To: solr-user@lucene.apache.org
> Subject: Re: Wiki for 1.3
>
> On Mon, 14 Jul 2008 15:52:35 +, sundar shankar
Copy field dest="text". I am not sure if you can copy into text or something like
that. We copy it into a field of type text or string, etc. Plus, what is your
query string? What gives you no results? How do you index it?
We need more clues to figure out the answer, dude :)
> From: [EMAIL PROTECTED]
> To: solr
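A minimal copyField sketch for what is being described above; the field names and attributes are placeholders, not from the thread:

```xml
<!-- schema.xml (sketch): copy a stored source field into a searchable catch-all -->
<field name="title" type="string" indexed="true" stored="true"/>
<field name="text" type="text" indexed="true" stored="false" multiValued="true"/>
<copyField source="title" dest="text"/>
```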
Oops,
sorry about that.
> Date: Sun, 13 Jul 2008 18:13:51 -0700
> From: [EMAIL PROTECTED]
> To: solr-user@lucene.apache.org
> Subject: Re: Max Warming searchers error
>
> : Subject: Max Warming searchers error
> : In-Reply-To: <[EMAIL PROTECTED]>
> : References: <[EMAIL PROTECTED]>
> h
Hi Hoss,
I was talking about classes like EdgeNGramFilterFactory,
PatternReplaceFilterFactory, etc. I didn't find these in the 1.2 jar. Where do I
find the wiki for these and the specific classes introduced in 1.3?
-Sundar
> Date: Sun, 13 Jul 2008 09:44:20 -0700
> From: [EMAIL PROTECTED]
>
Re: Max Warming searchers error
>
> You're trying to commit too fast and warming searchers are stacking up.
> Do less warming of caches, or space out your commits a little more.
>
> -Yonik
>
> On Fri, Jul 11, 2008 at 11:56 AM, sundar shankar <[EMAIL
Hi,
I am getting the "Error opening new searcher. exceeded limit of
maxWarmingSearchers=4, try again later" error. My configuration includes enabling
cold searchers (true) and setting maxWarmingSearchers to 4. We expect
a max of 40 concurrent users but an average of 5-10 at most times. W
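The two settings described in this message live in the <query> section of solrconfig.xml; a sketch matching the quoted values (placement and spelling per the standard sample config):

```xml
<!-- solrconfig.xml (sketch): searcher-warming settings from the message above -->
<query>
  <useColdSearcher>true</useColdSearcher>
  <maxWarmingSearchers>4</maxWarmingSearchers>
</query>
```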
What was the type of the field that you are using? I guess you could achieve it
by a simple swap of text and string.
> From: [EMAIL PROTECTED]
> To: solr-user@lucene.apache.org
> Subject: Solr searching issue..
> Date: Fri, 11 Jul 2008 11:28:50 +0100
>
> Hi solr-users,
>
> version type: nigh
Hi,
I was recently looking for details of the 1.3-specific analyzers and
filters in the Solr wiki and was unable to find them. Could anyone please point me
to a place where I can find some documentation for these?
Thanks
Sundar
ot;
> <filter class="solr.PatternReplaceFilterFactory" pattern="([^a-z0-9])"
> replacement="" replace="all" />
> <filter class="solr.EdgeNGramFilterFactory" maxGramSize="100" minGramSize="1" />
>
> <tokenizer class="solr.KeywordTokeni
e to release.
>
> On Tue, Jul 8, 2008 at 10:38 PM, sundar shankar <[EMAIL PROTECTED]> wrote:
> > Hi Daniel,
> > Thanks for the code. I just observed that you have
> > EdgeNGramFilterFactory. I didn't find it in the 1.2 Solr version. Which
replacement="" replace="all" />
> <filter class="solr.PatternReplaceFilterFactory" pattern="^(.{20})(.*)?"
> replacement="$1" replace="all" />
> ...
> <field name="ac" type="autocomplete" indexed="t
Hi All,
I have been using Solr for some time and am having trouble with an
autocomplete feature that I have been trying to incorporate. I am indexing
a database column to Solr field mapping. I have tried various configs that were
mentioned in the solr-user community suggestions and
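Piecing together the analyzer fragments quoted earlier in this thread, an autocomplete field type might look like the sketch below. The fieldType name, the LowerCaseFilterFactory (implied by the lowercase-only pattern), and the index/query analyzer split are assumptions:

```xml
<!-- schema.xml (sketch): edge-ngram prefix matching for autocomplete -->
<fieldType name="autocomplete" class="solr.TextField">
  <analyzer type="index">
    <tokenizer class="solr.KeywordTokenizerFactory"/>
    <filter class="solr.LowerCaseFilterFactory"/>
    <filter class="solr.PatternReplaceFilterFactory"
            pattern="([^a-z0-9])" replacement="" replace="all"/>
    <filter class="solr.EdgeNGramFilterFactory" minGramSize="1" maxGramSize="100"/>
  </analyzer>
  <analyzer type="query">
    <tokenizer class="solr.KeywordTokenizerFactory"/>
    <filter class="solr.LowerCaseFilterFactory"/>
    <filter class="solr.PatternReplaceFilterFactory"
            pattern="([^a-z0-9])" replacement="" replace="all"/>
  </analyzer>
</fieldType>
```

A field such as the `name="ac" type="autocomplete"` one quoted above would then match user input as a prefix of the indexed value.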