Solr will not give any exceptions; at least, there is no code which
checks for that. Choose names which are valid characters in a URL.
On Mon, May 18, 2009 at 11:08 AM, KK wrote:
> Thank you Otis.
> One silly question, how would I know that a particular character is
> forbidden, I think Solr will giv
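If the names in question are core names (an assumption on my part; the same advice
applies to any name that ends up in a request URL), sticking to letters, digits,
hyphens and underscores keeps the admin URLs clean. A made-up example (host, port
and instanceDir are placeholders):

  http://localhost:8983/solr/admin/cores?action=CREATE&name=user_docs_01&instanceDir=user_docs_01

A name containing spaces, slashes or '?' would have to be URL-encoded on every
request and is easy to get wrong, so it is better avoided even though Solr itself
will not complain.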
That's correct - you can paginate/offset MLT results only through the
MoreLikeThisHandler, rather than the method you're using
(StandardRequestHandler with MLT enabled).
Cheers,
--bemansell
On May 9, 2009 10:42 AM, wrote:
Hi. I'm using the StandardRequestHandler for MoreLikeThis queries.
I find
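For illustration only (the /mlt handler path and the field names below are
examples, not taken from the original thread): once the dedicated handler is
registered, start and rows apply to the list of similar documents, which is
what makes pagination possible.

  <!-- solrconfig.xml -->
  <requestHandler name="/mlt" class="solr.MoreLikeThisHandler" />

  http://localhost:8983/solr/mlt?q=id:12345&mlt.fl=title,body&start=10&rows=10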
Thank you Otis.
One silly question, how would I know that a particular character is
forbidden, I think Solr will give me exceptions saying that some characters
are not allowed, right?
Thanks,
KK.
On Sun, May 17, 2009 at 3:12 AM, Otis Gospodnetic <
otis_gospodne...@yahoo.com> wrote:
>
> KK,
>
> That s
Patric -
See the "documents in facets" results for a creative method for handling
this need with xslt transformations.
Cheers,
--bemansell
On May 16, 2009 2:11 AM, wrote:
Hello,
I've got a little problem. My index contains a formatid which I count in my
queries with the facet.field
select?q=
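As a sketch (the field and stylesheet names are placeholders), the facet counts
plus an XSLT-transformed response could be requested like this:

  http://localhost:8983/solr/select?q=*:*&rows=0&facet=true&facet.field=formatid&wt=xslt&tr=facets.xsl

The stylesheet referenced by tr= lives under conf/xslt/ and can reshape the
facet counts into whatever output the application needs.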
Chris,
As far as I know, AOL is using Solr with lots of cores. What I don't know is
how they are handling shutting down of idle cores, which is something you'll
need to do if your machine can't handle all cores being open and their data
structures being populated at all times. I know I had t
On Mon, May 18, 2009 at 8:18 AM, Otis Gospodnetic
wrote:
>
> Chris,
>
> As far as I know, AOL is using Solr with lots of cores. What I don't know is
> how they are handling shutting down of idle cores, which is something you'll
> need to do if your machine can't handle all cores being open and
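A hedged sketch of the unload step (the core name is a placeholder; deciding when
a core is "idle" has to happen in your own code or a cron job, Solr will not do it
for you):

  http://localhost:8983/solr/admin/cores?action=UNLOAD&core=user_12345

The index stays on disk, so the core can be re-registered later with action=CREATE
pointing at the same instanceDir.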
A few questions:
1) What is the frequency of inserts?
2) How many cores need to be up and running at any given point
On Mon, May 18, 2009 at 3:23 AM, Chris Cornell wrote:
> Trying to create a search solution for about 20k users at a company.
> Each person's documents are private and different (
On Sun, May 17, 2009 at 8:38 PM, Otis Gospodnetic
wrote:
>
> Chris,
>
> Yes, disk space is cheap, and with so little overlap you won't gain much by
> putting everything in a single index. Plus, when each user has a separate
> index, it's easy to split users and distribute over multiple machi
Chris,
Yes, disk space is cheap, and with so little overlap you won't gain much by
putting everything in a single index. Plus, when each user has a separate
index, it's easy to split users and distribute over multiple machines if you
ever need to do that, it's easy and fast to completely r
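For reference, a per-user multicore layout in solr.xml might look like this sketch
(the names and paths are invented):

  <solr persistent="true">
    <cores adminPath="/admin/cores">
      <core name="user_alice" instanceDir="users/alice" />
      <core name="user_bob" instanceDir="users/bob" />
    </cores>
  </solr>

Splitting users across machines is then mostly a matter of moving instanceDir
directories along with their solr.xml entries.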
Thanks for helping Ryan,
On Sun, May 17, 2009 at 7:17 PM, Ryan McKinley wrote:
> how much overlap is there with the 20k user documents?
There are around 20k users but each one has anywhere from zero to
thousands of documents. The final overlap is unknown because there is
a current set of docume
how much overlap is there with the 20k user documents?
if you create a separate index for each of them will you be indexing
90% of the documents 20K times? How many total documents could an
individual user typically see? How many total distinct documents are
you talking about? Is the ind
Trying to create a search solution for about 20k users at a company.
Each person's documents are private and different (some overlap... it
would be nice to not have to store/index copies).
Is multicore something that would work, or should we auto-insert a
facet into each query generated by the pers
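The single-index alternative mentioned here is usually a filter query rather than
a facet: give every document an owner field and have the application append
fq=owner:<user> to each query. A sketch (the field name and user name are
examples):

  schema.xml:
    <field name="owner" type="string" indexed="true" stored="false" />

  Query built by the application for user jsmith:
    /solr/select?q=contract+terms&fq=owner:jsmith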
Thanks Mike. I'm running it in a few environments that do not have
post-commit hooks and so far have not seen any issues. A white-box review
will be helpful in catching things that may rarely occur, or any
misuse of internal data structures that I do not know well enough to
measure.
--j
Hi Jayson,
It is on my list of things to do. I've been having a very busy week
and am also working all weekend. I hope to get to it next week
sometime, if no-one else has taken it.
cheers,
-mike
On 8-May-09, at 10:15 PM, jayson.minard wrote:
First cut of updated handler now in:
ht
I've never paid attention to the post/commit ratio. I usually do a commit
after maybe 100 posts. Is there a guideline about this? Thanks.
On Wed, May 13, 2009 at 1:10 PM, Otis Gospodnetic
wrote:
> 2) ramBufferSizeMB dictates, more or less, how much Lucene/Solr will consume
> during indexing. Ther
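For reference, ramBufferSizeMB and an automatic commit policy both live in
solrconfig.xml; the values below are arbitrary examples, not recommendations:

  <indexDefaults>
    <ramBufferSizeMB>64</ramBufferSizeMB>
  </indexDefaults>

  <updateHandler class="solr.DirectUpdateHandler2">
    <autoCommit>
      <maxDocs>10000</maxDocs>
      <maxTime>60000</maxTime> <!-- milliseconds -->
    </autoCommit>
  </updateHandler>

With autoCommit configured, the client does not have to count posts at all;
Solr commits on its own once either threshold is reached.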
I think that if your index already contains documents with norms, those
fields will still use norms even if the schema is changed
later. Did you wipe and re-index after all your schema changes?
-Peter
On Fri, May 15, 2009 at 9:14 PM, vivek sar wrote:
> Some more info,
>
> Profiling the
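For context, norms are turned off per field in schema.xml, e.g. (the field name
is an example):

  <field name="body" type="text" indexed="true" stored="false" omitNorms="true" />

As noted above, documents indexed before this change still carry norms, so the
index needs to be wiped and rebuilt for the change to fully take effect.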
Hi,
you may not need that enclosing entity if you only wish to index one file.
baseDir is not required if you give an absolute path in the fileName.
There is no need to specify forEach or fields if you set useSolrAddSchema="true".
On Sat, May 16, 2009 at 1:23 AM, jayakeerthi s wrote:
> Hi All,
>
> I am try
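A minimal data-config.xml along those lines might look like this sketch (the file
path is a placeholder, and it assumes the XML is already in Solr's <add><doc>
format, which is what useSolrAddSchema implies):

  <dataConfig>
    <dataSource type="FileDataSource" />
    <document>
      <entity name="f"
              processor="XPathEntityProcessor"
              url="/absolute/path/to/docs.xml"
              useSolrAddSchema="true" />
    </document>
  </dataConfig>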
>Something that would be interesting is to share solr configs for
>various types of indexing tasks. From a solr configuration aimed at
>indexing web pages to one doing large amounts of text to one that
>indexes specific structured data. I could see those being posted on
>the wiki and help