Hello Otis,
https://issues.apache.org/jira/browse/SOLR-236 has links for
a lot of files. I figure this is what I need:
10. solr-236.patch (24 kb)
So I downloaded the patch file, and also downloaded the 2008/06/16
nightly build, then ran this and got an error:
$ patch -p0 -i solr-236.patch --dry-r
Short-circuit attempt. Why put 3 shards on a single server in the first place?
If you are working with a large index and need to break it into smaller shards,
break it into shards where each shard fully utilizes the server it is on.
Other than my thought above, I think you hit the main difference
I'm sitting here looking over some ideas and one thing just occurred to
me: what would be the benefits of using a MultiCore approach for
sharding vs. multiple instances of Solr?
That is, say I wanted to have 3 shards on a single piece of hardware,
what would be the advantages / disadvantages of
On Mon, Jun 16, 2008 at 5:24 PM, Chris Hostetter <[EMAIL PROTECTED]>
wrote:
: When I do the search vio*, I get the correct results, but no highlighting.
this is because for "prefix" queries Solr uses a PrefixFilter
that is guaranteed to work (no matter how many docs match) instead of a
PrefixQuery (which might generate an exception, but can be highlighted)
see SOLR-195
: Does anyone know a revision number from svn that might work, and where
: setAllowLeadingWildcard is settable?
it's not currently a user-settable option ... at the moment you need to
modify the code to do this.
if you know Java and would like to help work on a general patch there is
an
: I can certainly do: search for the unique key or combination of other
: fields, then put the rest of this document's fields plus the new fields
: back into it.
:
: I know this is not a very smart way; before I do that, is there any Solr
: guru out there who can think of a better way?
That is really, the
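The read-modify-write workaround described in the quoted message can be sketched as follows. Note this is only an illustration of the merge logic: `fetch_doc` and `add_doc` are hypothetical stand-ins operating on an in-memory dict, not real Solr select/update calls.

```python
# Sketch of the "partial update" workaround: read all stored fields,
# merge in the new values, then re-add the whole document. In Solr an
# add with the same uniqueKey replaces the existing document.

index = {"doc1": {"id": "doc1", "title": "old title", "views": 10}}

def fetch_doc(doc_id):
    # stand-in for a /select?q=id:doc1 query returning the stored fields
    return dict(index[doc_id])

def add_doc(doc):
    # stand-in for posting <add><doc>...</doc></add> followed by a commit
    index[doc["id"]] = doc

def partial_update(doc_id, new_fields):
    doc = fetch_doc(doc_id)   # 1. read every stored field
    doc.update(new_fields)    # 2. merge in the new values
    add_doc(doc)              # 3. re-add the full document

partial_update("doc1", {"views": 11})
print(index["doc1"])  # {'id': 'doc1', 'title': 'old title', 'views': 11}
```

The caveat, of course, is that this only works if every field is stored; any unstored field is lost on the re-add.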
: Maybe I could contribute that, but I don't really know the code well. I
: found the Lucene switch someone described in an earlier discussion on this
: and changed it, but that doesn't seem to be the way you would want to handle
: it.
I've updated SOLR-218 with some tips on how (I think
Hoss,
I'm sure the keys are unique. I'm generating them myself before adding. Only
a handful of items have gone in with duplicate keys.
Here is what the update handler is reporting (I assume since I last
restarted Solr on 6/13):
commits : 17429
autocommits : 0
optimizes : 0
docsPending : 3
del
: want to be sure as it seems that some are being dropped. I just need to know
: if this can happen during commits or if I should be looking elsewhere to
: resolve my dropped record problem.
are you sure you aren't adding documents with identical uniqueKeys to
existing documents? what does "doc
The version of 1.2 I'm using does use the update servlet and would return 400
(or similar) if something went wrong, and 200 if OK, but, like you
suggested, perhaps a 200 does not entirely mean it completely worked.
It sounds like 1.3 is the way to go. I will start with the 1.3 config and
schema
On Mon, Jun 16, 2008 at 6:07 PM, dls1138 <[EMAIL PROTECTED]> wrote:
> I'm getting all 200 return codes from Solr on all of my batches.
IIRC, Solr 1.2 uses the update servlet and always returns 200 (you need
to look at the response body to see if there was an error or not).
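A small sketch of checking the response body rather than the HTTP status. The `<result status="...">` shape below is an assumption about the legacy update-servlet format (0 meaning success), and both sample bodies are made up for illustration:

```python
# Parse a legacy Solr update response body and report success/failure
# based on the numeric "status" attribute instead of the HTTP code.
import xml.etree.ElementTree as ET

def update_succeeded(body):
    root = ET.fromstring(body)
    return root.get("status") == "0"

ok   = '<result status="0"></result>'
fail = '<result status="1">missing required field</result>'

print(update_succeeded(ok))    # True
print(update_succeeded(fail))  # False
```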
> I skimmed the logs for
: This works well when the number of fields is small, but what are the
: performance ramifications when the number of fields is more than 1000?
: Is this a serious performance killer? If yes, what would we need to
: counter act it, more RAM or faster CPU's? Or both?
the performance characteristi
I'm getting all 200 return codes from Solr on all of my batches.
I skimmed the logs for errors, but I didn't try to grep for "Exception". I
will take your advice and look there for some clues.
Incidentally I'm running solr 1.2 using Jetty. I'm not on 1.3 because I read
it wasn't released yet. Is th
No records should be dropped, regardless of whether a commit or optimize is going on.
Are you checking the return codes (HTTP return codes for Solr 1.3)?
Some updates could be failing for some reason.
Also grep for "Exception" in the solr log file.
-Yonik
On Mon, Jun 16, 2008 at 4:02 PM, dls1138 <[EMA
I'm able to get the fields specified in my schema with this query:
/solr/admin/luke?show=schema&numTerms=0
But it doesn't show me dynamic fields that I've created. Is there a way to
get dynamic fields as well?
Yonik Seeley wrote:
>
> On Dec 20, 2007 8:47 PM, Edward Zhang <[EMAIL PROTECTED]> w
I've been sending data in batches to Solr with no errors reported, yet after
a commit, over 50% of the records I added (before the commit) do not show
up, even after several subsequent commits down the road.
Is it possible that Solr/Lucene could be disregarding or dropping my add
queries if those
On Mon, Jun 16, 2008 at 10:46 AM, Norberto Meijome <[EMAIL PROTECTED]> wrote:
> I just wanted to confirm that dynamic fields cannot be used with dismax
There are two levels of dynamic field support.
Specific dynamic fields can be queried with dismax, but you can't
wildcard the "qf" or other field
Hi!
How can I apply a stylesheet to the search results? I mean, where can I
define which stylesheet to use?
Thanks,
Kesava
Hi John,
The output from the statistics page is in XML format, to which an XSL
stylesheet is applied to make it more presentable. You can directly call the
statistics page from your programs and parse out all the data you need.
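For example, the per-handler numbers can be pulled out with a standard XML parser. The sample document below is purely illustrative (the real stats page layout varies between Solr versions, so the element names here are assumptions to adjust):

```python
# Extract named counters from a statistics-page-style XML document.
import xml.etree.ElementTree as ET

sample = """
<solr>
  <entry>
    <name>updateHandler</name>
    <stat name="commits">17429</stat>
    <stat name="docsPending">3</stat>
  </entry>
</solr>
"""

# Map each <stat name="..."> to its integer value.
stats = {s.get("name"): int(s.text) for s in ET.fromstring(sample).iter("stat")}
print(stats)  # {'commits': 17429, 'docsPending': 3}
```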
On Mon, Jun 16, 2008 at 8:19 PM, McBride, John <[EMAIL PROTECTED]>
wro
Hello,
I have noticed that the solr/admin page pulls in XML status information
from add on modules in solr eg DataImportHandler.
Is the core SOLR statistical data exposed through an XML API, such that
I could collate all SOLR Slave status pages into one consolidated admin
panel?
Thanks,
John
Hi everyone,
I just wanted to confirm that dynamic fields cannot be used with dismax.
By this I mean that the following:
schema.xml
[...]
[..]
solrconfig.xml
[..]
<str name="echoParams">explicit</str>
<float name="tie">0.01</float>
<str name="qf">field1^10.0 dyn_1_*^5.0</str>
[...]
will never take dyn_1_* fields into co
Could you expand on what you want to do? Do you mean you want
language detection? Or, you just need different analyzers for
different languages?
Either way, probably the best thing to do is to search the archives
here for multilingual search
-Grant
On Jun 15, 2008, at 11:48 PM, sherin
Hi!
How can I apply a stylesheet to the search results? I mean, where can I
define which stylesheet to use?
Regards, Mihails
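One built-in option worth checking is to ask Solr itself to transform the response via the XSLT response writer, assuming it is enabled in your setup and a stylesheet (here the hypothetical `example.xsl`) exists under `conf/xslt/`:

```python
# Build a select URL that requests the XSLT response writer (wt=xslt)
# with a named stylesheet (tr=example.xsl). Host/port are assumptions.
from urllib.parse import urlencode

params = {"q": "solr", "wt": "xslt", "tr": "example.xsl"}
url = "http://localhost:8983/solr/select?" + urlencode(params)
print(url)  # http://localhost:8983/solr/select?q=solr&wt=xslt&tr=example.xsl
```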
Hi Yonik,
I've tried to change the "documentCache" to 60 as you told me, but the problem
persists.
If I set "hl=off" the memory never passes 65,000 KB. But if I set it to "on" in
my first search, using a common word such as "a", the memory increases to
763,000 KB. If after this I search for other common wor