I was typing this on the go from my phone; I meant LuceneQParserPlugin, of
course.
On Sat, Jul 9, 2011 at 6:39 PM, Dmitry Kan wrote:
> you can try extending LuceneQParser. In its createParser method
> (Lucene 2.9.3 and Solr 1.4) you can analyze the input query in the
> q param and modify it accordingly.
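For illustration, a minimal sketch of that approach. Only LuceneQParserPlugin
and its createParser signature come from the thread; the class name
RewritingQParserPlugin and the trim() rewrite step are placeholders:

    import org.apache.solr.common.params.SolrParams;
    import org.apache.solr.request.SolrQueryRequest;
    import org.apache.solr.search.LuceneQParserPlugin;
    import org.apache.solr.search.QParser;

    // Rewrites the raw q parameter before delegating to the stock Lucene parser.
    public class RewritingQParserPlugin extends LuceneQParserPlugin {
        @Override
        public QParser createParser(String qstr, SolrParams localParams,
                                    SolrParams params, SolrQueryRequest req) {
            // Hypothetical rewrite step: inspect/modify the query string here.
            String modified = (qstr == null) ? null : qstr.trim();
            return super.createParser(modified, localParams, params, req);
        }
    }

Such a plugin would then be registered as a queryParser in solrconfig.xml and
selected per request with the defType parameter.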
Dear all,
In schema.xml I had the following fieldType definition:
The length of the string value I am indexing exceeds the default length (256).
How do I override the default length in my schema?
Thanks and best regards,
Engy Morsy
Project Manager
ICT Department
Bibliotheca Alexandrina
P.O.B
Hey Hannes,
The simplest solution here is maybe to use a second field that is for
highlighting only. This field would then store your content without
the payloads. The other way would be stripping off the payloads during
rendering, which is not a nice option, I guess. Since I am not a
highlighter exp
Currently there is no easy way to do this. I would need to think about how
you can force the index to drop those, so the answer here is: no, you
can't!
simon
On Sat, Jul 9, 2011 at 11:11 AM, Gabriele Kahlout wrote:
> I've stored the contents of some pages I no longer need. How can I now
> delete the stor
Hello,
IndexWriter writer = new IndexWriter(
    FSDirectory.open(new File(req.getCore().getDataDir(), "index")),
    req.getSchema().getAnalyzer(),
    IndexWriter.MaxFieldLength.LIMITED);
updateSolrIndex(writer);
But this is what I get (I know that RequestHandlers are not intended to
Hi Lance,
Thanks for the detailed advice. I was just reading the Weka manual and it does
have many classification algorithms. I will start with it and try to follow
the two-part process. I will post again if I run into difficulties. Thanks again.
On Sun, Jul 10, 2011 at 7:56 AM, Lance Norskog wrote:
> T
There are such RequestHandlers. Look at CSVRequestHandler, for example.
IndexWriter writer = new IndexWriter(
    FSDirectory.open(new File(req.getCore().getDataDir(), "index")),
    req.getSchema().getAnalyzer(),
    IndexWriter.MaxFieldLength.LIMITED);
updateSolrIndex(writer);
D
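For reference, a self-contained sketch of the IndexWriter snippet being passed
around in this thread: the same constructor calls, plus the imports it needs
and the writer closed when done. The class name and the updateSolrIndex() body
are placeholders, not from the original mails:

    import java.io.File;
    import java.io.IOException;
    import org.apache.lucene.index.IndexWriter;
    import org.apache.lucene.store.FSDirectory;
    import org.apache.solr.request.SolrQueryRequest;

    public class DirectIndexAccess {
        // Opens an IndexWriter directly on the core's index directory
        // (Lucene 2.9 / Solr 1.4 style API).
        void rebuild(SolrQueryRequest req) throws IOException {
            IndexWriter writer = new IndexWriter(
                FSDirectory.open(new File(req.getCore().getDataDir(), "index")),
                req.getSchema().getAnalyzer(),
                IndexWriter.MaxFieldLength.LIMITED);
            try {
                updateSolrIndex(writer);   // the poster's own method, assumed here
            } finally {
                writer.close();            // release the write lock either way
            }
        }

        private void updateSolrIndex(IndexWriter writer) throws IOException {
            // placeholder for the poster's indexing logic
        }
    }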
Thanks for the helpful hints!
The debugQuery didn't work in combination with MLT for me.
If I'm using MLT in distributed mode, how would that work? Let's assume
I have 5 shards and I'm executing an MLT query which will run against all
shards. How will the responses from each shard be conso
On Sun, Jul 10, 2011 at 6:21 PM, Koji Sekiguchi wrote:
> There are such RequestHandlers. Look at CSVRequestHandler, for example.
>
>> IndexWriter writer = new IndexWriter(FSDirectory.open(new
>> File(req.getCore().getDataDir(), "index")),
>> req.getSchema().getAnalyzer(),
>> Ind
This was my problem:
I had taken my cue from Nutch's schema:
On Sat, Jul 9, 2011 at 4:55 PM, Yonik Seeley wrote:
> Something is wrong with your indexing.
> Is "wc" an indexed field? If not, change it so it is, then re-index your
> data.
>
> If so, I'd recommend starting with the exa
(11/07/11 4:45), Gabriele Kahlout wrote:
On Sun, Jul 10, 2011 at 6:21 PM, Koji Sekiguchi wrote:
There are such RequestHandlers. Look at CSVRequestHandler, for example.
IndexWriter writer = new IndexWriter(FSDirectory.open(new
File(req.getCore().getDataDir(), "index")),
(11/07/11 4:26), Marcus Paradies wrote:
Thanks for the helpful hints!
The debugQuery didn't work in combination with MLT for me.
If I'm using MLT in distributed mode, how would that work? Let's assume
I have 5 shards and I'm executing an MLT query which will run against all
shards. How
Hi all,
I have a field which is a text type:
And my problem: I use the stringValue method to get the value of this field
and check for any matches with the query, but somehow some words do not
match.
E.g., let's say the field value is: Jonas is from Germany, but now he lives in
Russia.
If
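For context, one thing worth keeping in mind here: stringValue() returns the
stored, unanalyzed text, while matching happens against the analyzed tokens, so
the two can disagree. A small sketch of how the tokens a field's analyzer
actually produces could be inspected; this assumes Lucene 3.1+ (where
CharTermAttribute exists) and is not taken from the original mail:

    import java.io.IOException;
    import java.io.StringReader;
    import org.apache.lucene.analysis.Analyzer;
    import org.apache.lucene.analysis.TokenStream;
    import org.apache.lucene.analysis.tokenattributes.CharTermAttribute;

    public class TokenDump {
        // Prints the tokens the analyzer would index for the given field text.
        static void printTokens(Analyzer analyzer, String fieldName, String text)
                throws IOException {
            TokenStream ts = analyzer.tokenStream(fieldName, new StringReader(text));
            CharTermAttribute term = ts.addAttribute(CharTermAttribute.class);
            ts.reset();
            while (ts.incrementToken()) {
                System.out.println(term.toString());
            }
            ts.end();
            ts.close();
        }
    }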
Saïd,
The misunderstanding you have is that you are confusing the user query (the
q parameter) with the URL that Solr sees. Well, actually with the part of the URL
that is called the query string, that is, whatever comes after the "?". SolrQuery
has various setters for well-known parameters, and others just us
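To make that concrete, a small SolrJ sketch. The field name and parameter
values below are made up; setQuery(), setRows(), and the generic set() are
standard SolrQuery / ModifiableSolrParams methods:

    import org.apache.solr.client.solrj.SolrQuery;

    public class QueryStringDemo {
        public static void main(String[] args) {
            SolrQuery query = new SolrQuery();
            query.setQuery("title:lucene");   // this fills only the q parameter
            query.setRows(10);                // a well-known parameter with its own setter
            query.set("mlt.fl", "body");      // any other parameter via the generic setter
            // The query string Solr sees (after the "?") contains all of them,
            // e.g. q=title%3Alucene&rows=10&mlt.fl=body
            System.out.println(query);
        }
    }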
Hi folks.
I've been keeping a categorized list of software that relates/integrates
with Solr in some way. I've dubbed this the "Solr Ecosystem". This Solr
ecosystem concept was going to be an appendix in an upcoming 2nd edition of
my book http://www.packtpub.com/apache-solr-3-enterprise-search-se
I installed Solr using:
java -jar start.jar
However, I had downloaded the source code and didn't compile it (didn't pay
attention). And the error using:
http://localhost:8983/solr/admin/ was:
HTTP ERROR: 404 Problem accessing /solr/admin/. Reason: NOT_FOUND
I realized that it was not configuring bec
On 7/10/11 2:33 PM, Simon Willnauer wrote:
Currently there is no easy way to do this. I would need to think about how
you can force the index to drop those, so the answer here is: no, you
can't!
simon
On Sat, Jul 9, 2011 at 11:11 AM, Gabriele Kahlout wrote:
I've stored the contents of some pages I no