Nice, great help. I have added the following fields to hold tokens.
Hi!!!
I have already realized the mistake. My "id" field was generated as a copy of another
field called "url". In other words,
it seems that things did not work well when the "id" field was generated
as a copy of another one.
Now I have changed the schema.xml so that this does not
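(For anyone hitting the same thing: the problematic pattern is a copyField feeding the
uniqueKey field. A minimal sketch in schema.xml terms, assuming string-typed fields; the
declarations are a reconstruction, not the poster's actual schema:

  <field name="url" type="string" indexed="true" stored="true"/>
  <field name="id"  type="string" indexed="true" stored="true"/>
  <copyField source="url" dest="id"/>  <!-- the line to remove -->
  <uniqueKey>id</uniqueKey>

Sending an explicit id value with each document avoids the problem.)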
Hello Group,
I have a field named "prefix1", which is a copy of another
field called "content". The field type of prefix1 is
My data, found with Solr, needs to be tested against a matching regular
expression formed at query time. To avoid sending big data chunks via HTTP,
I've suggested that results can be verified on the Solr side before they are
sent to the client.
I've heard that we can assign a custom Java function for filtering, but
Otis,
May I ask how you go about handling user access privileges? I mean,
you need some mechanism to get user privileges from the corporate
environment (LDAP, for example) and filter the returned hits using a document
access policy. You may also be caching this information for
performance
Hi all,
we send XML add-document messages to Solr and we notice something very strange.
We autocommit at 10 documents. Starting from a totally clean index (we removed
the data folder), when we start uploading we notice that docsPending is
going up, but deletesPending is going up as well.
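(For reference, an autocommit threshold like the one described lives in
solrconfig.xml; a sketch, not the poster's actual config:

  <updateHandler class="solr.DirectUpdateHandler2">
    <autoCommit>
      <maxDocs>10</maxDocs>
    </autoCommit>
  </updateHandler>
)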
Noble--
You should probably include SOLR-505 in your DataImportHandler patch.
-Sean
Noble Paul നോബിള് नोब्ळ् wrote:
It is caused by the new caching feature in Solr. The caching is done
at the browser level; Solr just sends the appropriate headers. We had
raised an issue to disable that.
BTW Th
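(The knob that came out of that issue is the httpCaching element in
solrconfig.xml's requestDispatcher section; a sketch from memory of Solr 1.3,
worth double-checking:

  <requestDispatcher handleSelect="true">
    <httpCaching never304="true"/>
  </requestDispatcher>
)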
That is well within the boundaries of what Solr/Lucene can handle.
But, of course, it depends on what you're doing with those fields
too. Putting 200 fields into a dismax qf specification, for example,
would surely be bad for performance :) But querying on only a
handful of fields or less
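(To illustrate, a dismax handler that queries only a handful of fields might be
configured like this in solrconfig.xml -- the field names here are made up:

  <requestHandler name="dismax" class="solr.DisMaxRequestHandler">
    <lst name="defaults">
      <str name="qf">title^2.0 description author</str>
    </lst>
  </requestHandler>
)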
On Apr 25, 2008, at 3:02 AM, Rantjil Bould wrote:
Nice, great help. I have added the following fields to hold tokens.
class="solr.KeywordTokenizerFactory"/>
Hi,
Is there any way to tell Solr to load in a kind of reindexing mode, which
won't open a new searcher after every commit, etc.? This is just when you
don't have it available to query because you just want to reindex all the
information.
What do you think?
Jonathan
I don't think so. But you can reindex on the master and query on the slave. If your
concern is that the index will be sent to the search slave while you are still
reindexing, just don't commit until you are done.
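(In update-message terms, that just means holding the single commit until the
end, e.g.:

  <add><doc><field name="id">1</field><field name="content">first doc</field></doc></add>
  <add><doc><field name="id">2</field><field name="content">second doc</field></doc></add>
  <commit/>

where the field names are only illustrative.)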
Otis
--
Sematext -- http://sematext.com/ -- Lucene - Solr - Nutch
Yes, we are waiting for the patch to get committed.
--Noble
On Fri, Apr 25, 2008 at 5:36 PM, Sean Timm <[EMAIL PROTECTED]> wrote:
> Noble--
>
> You should probably include SOLR-505 in your DataImportHandler patch.
>
> -Sean
>
>
>
> Noble Paul നോബിള് नोब्ळ् wrote:
>
> > It is caused by the new
Help required with external value source SOLR-351
I'm trying to get this new feature to work, without much success. I've
completed the following steps:
1) downloaded the latest nightly build
2) added the external field declarations to schema.xml (along the lines sketched below), and
3) created a file in the Solr index folder, "external_cpc", with t
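(For reference, SOLR-351's external file field is declared in schema.xml along
these lines, and the external_<fieldname> file holds one key=value line per
document; a sketch with made-up values, worth verifying against the patch:

  <fieldType name="file" class="solr.ExternalFileField"
             keyField="id" defVal="0" valType="float"/>
  <field name="cpc" type="file" indexed="false" stored="false"/>

and external_cpc would contain lines such as:

  doc1=1.5
  doc2=3.0
)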
In our setup, snapshooter is triggered on optimize, not commit.
We can commit all we want on the master without making a
snapshot. That only happens when we optimize.
The new Searcher is the biggest performance impact for us.
We don't have that many documents (~250K), so copying an
entire index is
You're right. But I'm concerned about the "max number of searchers reached" error
that I usually get when reindexing every once in a while.
On Fri, Apr 25, 2008 at 12:28 PM, Otis Gospodnetic <
[EMAIL PROTECTED]> wrote:
> Don't think so. But you reindex on the master and query on the slave. If
> your co
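(That error corresponds to the maxWarmingSearchers limit in solrconfig.xml;
raising it -- or, better, committing less often while reindexing -- is the usual
remedy. A sketch:

  <maxWarmingSearchers>2</maxWarmingSearchers>
)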
Like Wunder said, you can reindex every once in a while all you want; just
don't create index snapshots when you commit (disable the postCommit hook in
solrconfig.xml), or don't commit at all until you are done. Or call optimize at
the end and enable the postOptimize hook.
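(In solrconfig.xml terms, the hooks Otis mentions are RunExecutableListener
entries; a sketch of snapshooting on optimize only, adapted from the stock
example config:

  <!-- no postCommit listener, so plain commits don't trigger snapshooter -->
  <listener event="postOptimize" class="solr.RunExecutableListener">
    <str name="exe">snapshooter</str>
    <str name="dir">solr/bin</str>
    <bool name="wait">true</bool>
  </listener>
)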
Otis
--
Sematext -- http://sematext.com/ -- Lucene - Solr - Nutch
What Erik said ;)
200 fields is not a problem. Things to watch out for are:
- more index files, and thus more open file descriptors, if you use the non-compound
Lucene index format and are working with non-optimized indices (on the master --
optimize your index before it gets to the slaves)
- slower merging (I
The GSA -> Solr conversion I mentioned has not yet happened, and may not even
include doc access rights functionality.
However, when I implemented things like that in the past, I used custom
trickery, not a general open framework.
Otis
--
Sematext -- http://sematext.com/ -- Lucene - Solr - Nutch
Custom trickery is pretty standard for access controls in search.
A couple of the high points from deploying Ultraseek: three incompatible
"single sign-on" systems in one company, and a system that controlled
which links were shown instead of access to the docs themselves.
The latter amazed me. If y
Hello,
I was looking at DisMax and playing with its "pf" parameter. I created a
sample index with field "content". I set "pf" to: content^2.0 and expected to
see (content:"my query here")^2.0 in the query (debugQuery=true). However, I
only got (content:"my query here") -- no boost.
Is this a
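(For context, the setup described corresponds to dismax defaults along these
lines in solrconfig.xml -- a sketch using the field name from the message:

  <lst name="defaults">
    <str name="qf">content</str>
    <str name="pf">content^2.0</str>
  </lst>
)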
This is what the spellchecker does. It makes a separate Lucene index of letter
n-grams and searches those. It works pretty well, and it is outside the main
index. I did an experimental variation indexing word pairs as phrases, and
it worked well too.
Lance Norskog
On 25-Apr-08, at 7:05 AM, Jonathan Ariel wrote:
Hi,
Is there any way to tell Solr to load in a kind of reindexing mode, which
won't open a new searcher after every commit, etc.? This is just when you
don't have it available to query because you just want to reindex all the
information.
Ar
On 25-Apr-08, at 4:27 AM, Tim Mahy wrote:
Hi all,
we send XML add-document messages to Solr and we notice something very strange.
We autocommit at 10 documents. Starting from a totally clean index (we removed
the data folder), when we start uploading we notice that the docsPending is going
On 24-Apr-08, at 2:57 PM, oleg_gnatovskiy wrote:
Hello. I was wondering if Solr has some kind of a multi-threaded document
loader? I've been using post.sh (curl) to post documents to my Solr server,
and it's pretty slow. I know it should be pretty easy to write one up, but I
was just wond
I am frustrated that I have to pick between the two, because I want both. The
way I look at it, there should be a more configurable query handler which
allows me to use dismax if I want to, and to pick a parser for the user's query
(like the flexible one used by the standard query handler, or the more
res