When I try without the adaptive parameter I get an OOME:
HTTP Status 500 - Java heap space java.lang.OutOfMemoryError: Java heap
space
Shalin Shekhar Mangar wrote:
>
> On Mon, Sep 22, 2008 at 9:19 PM, sunnyfr <[EMAIL PROTECTED]> wrote:
>
>>
>> Hi,
>> There is something weird:
>> I've planned a cron job every 5 min which hits the delta-import URL, and
>> it works fine:
>
Hi Otis,
Currently I am creating indexes from a standalone Java program.
I am preparing the data by using a query and have built the documents to index.
Can we write a function as below?
I have a large number of products, and we want to use this at production level.
Please provide me a sample or tutorials.
/**
*
On 23.09.2008 00:30 Chris Hostetter wrote:
> : Here is what I was able to get working with your help.
> :
> : (productId:(102685804)) AND liveDate:[* TO NOW] AND ((endDate:[NOW TO *]) OR
> : ((*:* -endDate:[* TO *])))
> :
> : the *:* is what I was missing.
>
> Please, PLEASE ... do yourself a f
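One detail in that exchange deserves spelling out: a purely negative clause like -endDate:[* TO *] matches nothing on its own, so *:* supplies the full document set to subtract from. A small sketch (Python; the helper name is mine) that assembles the same filter string:

```python
def availability_filter(product_id, now="NOW"):
    """Build the query from the thread: the product must be live
    (liveDate in the past) and either not yet ended or have no endDate.
    The (*:* -endDate:[* TO *]) clause matches documents where the
    endDate field is missing entirely."""
    return (
        "(productId:(%s)) AND liveDate:[* TO %s] "
        "AND ((endDate:[%s TO *]) OR ((*:* -endDate:[* TO *])))"
        % (product_id, now, now)
    )

print(availability_filter("102685804"))
```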
Hi Dinesh,
Your code is hardly useful to us since we don't know what you are trying to
achieve or what all those Dao classes do.
Look at the Solr tutorial first -- http://lucene.apache.org/solr/
Use the SolrJ client for communicating with Solr server --
http://wiki.apache.org/solr/Solrj
Also take
Hi,
Currently we are using the Lucene API to create the index.
It creates the index in a directory with three files, like
xxx.cfs, deletable & segments.
If I create Lucene indexes from Solr, will these files be created or not?
Please give me an example using a MySQL database instead of HSQLDB.
Regards,
Dinesh
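For what it's worth, the DataImportHandler example that ships with Solr uses HSQLDB, and switching to MySQL is mostly a matter of the dataSource element. A sketch of a data-config.xml (the JDBC URL, credentials, and table/column names below are placeholders):

```xml
<!-- Sketch: MySQL instead of HSQLDB; URL, credentials, and
     table/column names are hypothetical. -->
<dataConfig>
  <dataSource type="JdbcDataSource"
              driver="com.mysql.jdbc.Driver"
              url="jdbc:mysql://localhost:3306/mydb"
              user="dbuser" password="dbpass"/>
  <document>
    <entity name="product"
            query="select id, name, description from product">
      <field column="id" name="id"/>
      <field column="name" name="name"/>
      <field column="description" name="description"/>
    </entity>
  </document>
</dataConfig>
```

The MySQL Connector/J driver jar also has to be on Solr's classpath (e.g. in the lib directory).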
Hello everyone, I'm new to Solr (have been using Lucene for a few years
now). We are looking into Solr and have heard many good things about the
project:)
I have a few questions regarding the EmbeddedSolrServer in Solrj and the
MultiCore features... I've tried to find answers to this in the
On Tue, Sep 23, 2008 at 5:33 PM, Dinesh Gupta <[EMAIL PROTECTED]>wrote:
>
> Hi,
> Current we are using Lucene api to create index.
>
> It creates index in a directory with 3 files like
>
> xxx.cfs , deletable & segments.
>
> If I am creating Lucene indexes from Solr, these file will be created or
Hi Shalin Shekhar,
Let me explain my issue.
I have some tables in my database like
Product
Category
Catalogue
Keywords
Seller
Brand
Country_city_group
etc.
I have a class that represents the product document as
Document doc = new Document();
// Keywords which can be used directly for sear
Hi,
Probably a stupid question with the obvious answer, but if I am
running a Solr master and accepting updates, do I have to stop the
updates when I start the optimise of the index? Or will optimise just
take the latest snapshot and work on that independently of the
incoming updates?
Really enjo
Yes, indeed it was a problem with the path, thanks a lot.
I just didn't get this part: "If you turn up your logging to FINE". What does
that mean?
Huge thanks for your answer,
hossman wrote:
>
>
> : And I did change my config file :
> :
> :
Hi,
I don't know why, when I start a commit manually, it doesn't fire snapshooter.
I did it manually because no snapshot was created, and when I run snapshooter
by hand it works.
So my autocommit is activated (I think):
<autoCommit>
  <maxDocs>1</maxDocs>
  <maxTime>1000</maxTime>
</autoCommit>
My snapshooter too:
./data/solr/book/l
Hi Dinesh,
This seems straightforward for Solr. You can use the embedded jetty server
for a start. Look at the tutorial on how to get started.
You'll need to modify the schema.xml to define all the fields that you want
to index. The wiki page at http://wiki.apache.org/solr/SchemaXml is a good
sta
On Tue, Sep 23, 2008 at 7:06 PM, Geoff Hopson <[EMAIL PROTECTED]>wrote:
>
> Probably a stupid question with the obvious answer, but if I am
> running a Solr master and accepting updates, do I have to stop the
> updates when I start the optimise of the index? Or will optimise just
> take the latest
On Tue, Sep 23, 2008 at 7:36 PM, sunnyfr <[EMAIL PROTECTED]> wrote:
>
> My snapshooter too:
>
>
> <listener event="postCommit" class="solr.RunExecutableListener">
>   <str name="exe">./data/solr/book/logs/snapshooter</str>
>   <str name="dir">data/solr/book/bin</str>
>   <bool name="wait">true</bool>
>   <arr name="args"><str>arg1</str> <str>arg2</str></arr>
>   <arr name="env"><str>MYVAR=val1</str></arr>
> </listener>
>
> and everything is in the right place I think, my paths are good ...
>
Right, my bad, it was the bin directory. But even when I fire a commit, no
snapshot is created??
Does it check the number of documents even when I fire it? And another
question: I don't remember having put the path to commit in the conf file, but
even manually it doesn't work:
[EMAIL PROTECTED]:/# ./data/solr/book/b
Hi,
I'm quite new to Solr and I'm looking for a way to extend the list of
synonyms used at query time without having to reload the config. What I've
found so far are the two threads linked to below, of which neither really
helped me out.
Especially the MultiCore solution seems a little bit
This is probably not useful because synonyms work better at index time
than at query time. Reloading synonyms also requires reindexing all
the affected documents.
wunder
On 9/23/08 7:45 AM, "Batzenmann" <[EMAIL PROTECTED]> wrote:
>
> Hi,
>
> I'm quite new to solr and I'm looking for a way to e
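For context, an index-time synonym setup is configured in the field type's analyzer chain in schema.xml; a sketch (the field type name and synonyms file name are placeholders, and as noted above, editing the synonyms file still requires reindexing the affected documents):

```xml
<!-- Sketch: synonyms applied at index time only; type name and
     synonyms file are hypothetical. -->
<fieldType name="text_syn" class="solr.TextField" positionIncrementGap="100">
  <analyzer type="index">
    <tokenizer class="solr.WhitespaceTokenizerFactory"/>
    <filter class="solr.SynonymFilterFactory" synonyms="synonyms.txt"
            ignoreCase="true" expand="true"/>
    <filter class="solr.LowerCaseFilterFactory"/>
  </analyzer>
  <analyzer type="query">
    <tokenizer class="solr.WhitespaceTokenizerFactory"/>
    <filter class="solr.LowerCaseFilterFactory"/>
  </analyzer>
</fieldType>
```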
If I have Solr up and running and do something like this:
query.set("shards", "localhost:8080/solr/core0,localhost:
8080/solr/core1");
I will get the results from both cores, obviously...
But is there a way to do this without using shards and accessing the
cores through http?
I pr
Thanks for your response, Chris.
I do see the reviewid in the index through Luke. I guess what I am
confused about is the field cumulative_delete. Does this have any
significance to whether the delete was a success or not? Also, shouldn't
the method deleteByQuery return a different status code based on i
I have searched the forum and the internet at large to find an answer to my
simple problem, but have been unable. I am trying to get a simple dataimport
to work, and have not been able to. I have Solr installed on an Apache
server on Unix. I am able to commit and search for files using the usual
S
I've got a small configuration question. When posting docs via SolrJ, I get
the following warning in the Solr logs:
WARNING: The @Deprecated SolrUpdateServlet does not accept query parameters:
wt=xml&version=2.2
If you are using solrj, make sure to register a request handler to /update
rather th
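The fix the warning suggests is a one-line addition to solrconfig.xml, mapping /update to a request handler (Solr 1.3-era class name):

```xml
<!-- Register the XML update handler at /update so SolrJ's query
     parameters (wt, version, ...) are handled by a request handler
     instead of the deprecated SolrUpdateServlet. -->
<requestHandler name="/update" class="solr.XmlUpdateRequestHandler"/>
```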
On Sep 23, 2008, at 12:35 PM, Gregg wrote:
I've got a small configuration question. When posting docs via
SolrJ, I get
the following warning in the Solr logs:
WARNING: The @Deprecated SolrUpdateServlet does not accept query
parameters:
wt=xml&version=2.2
If you are using solrj, make sure
Are there any exceptions in the log file when you start Solr?
On Tue, Sep 23, 2008 at 9:31 PM, KyleMorrison <[EMAIL PROTECTED]> wrote:
>
> I have searched the forum and the internet at large to find an answer to my
> simple problem, but have been unable. I am trying to get a simple
> dataimport
>
Problem with the spam filter - removing some text - re-posting.
water4u99 wrote:
>
> Hi,
>
> Some additional clue as to where the issue is: the computed number changes
> when there is an additional query in the query request.
>
> Ex1: .../select/?q=_val_:%22sum(stockPrice_f,10.00)%22&fl=*,s
Simply set "text" to be multivalued (one for each *_t field).
Erik
On Sep 22, 2008, at 1:08 PM, Jon Drukman wrote:
I have a dynamicField declaration:
I want to copy any *_t's into a text field for searching with
dismax. As it is, it appears you can't search dynamicfields this way
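In schema.xml terms, the suggestion looks roughly like this (attribute values are a guess at typical settings):

```xml
<!-- Sketch: a *_t dynamic field, a multiValued catch-all "text" field,
     and a copyField wiring them together for dismax searching. -->
<dynamicField name="*_t" type="text" indexed="true" stored="true"/>
<field name="text" type="text" indexed="true" stored="false"
       multiValued="true"/>
<copyField source="*_t" dest="text"/>
```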
Thank you for the help. The problem was actually just stupidity on my part: it
seems I was running the wrong startup and shutdown scripts for the server,
and thus the server wasn't actually getting restarted. I restarted it and I can
at least access those pages. I'm getting some wonky output, but I assu
Try adding a debugQuery=true parameter on to see if that helps you
decipher what is going on.
FWIW, the _val_ boost is a factor in scoring, but it isn't the only
factor. Perhaps you're seeing the document score factor in as well?
-Grant
On Sep 22, 2008, at 6:37 PM, water4u99 wrote:
Hi
This turned out to be a fairly pedestrian bug on my part: I had "/update"
appended to the Solr base URL when I was adding docs via SolrJ.
Thanks for the help.
--Gregg
On Tue, Sep 23, 2008 at 12:42 PM, Ryan McKinley <[EMAIL PROTECTED]> wrote:
>
> On Sep 23, 2008, at 12:35 PM, Gregg wrote:
>
> I
Ok, I'm very frustrated. I've tried every configuration and parameter I can,
and I cannot get fragments to show up in the highlighting in Solr (no
fragments at the bottom or highlights in the text). I must be
missing something, but I'm just not sure what it is.
/select/?qt=standard&q=crayon&hl=tru
Make sure the fields you're trying to highlight are stored in your schema
(e.g. stored="true" on the field definition).
David Snelling-2 wrote:
>
> Ok, I'm very frustrated. I've tried every configuraiton I can and
> parameters
> and I cannot get fragments to show up in the highlighting in solr. (no
> fragments at the bottom or hig
This is the configuration for the two fields I have tried on
On Tue, Sep 23, 2008 at 1:59 PM, wojtekpia <[EMAIL PROTECTED]> wrote:
>
> Make sure the fields you're trying to highlight are stored in your schema
> (e.g. )
>
>
>
> David Snelling-2 wrote:
> >
> > Ok, I'm very frustrated. I've
Hi-
I'm new to Solr, and I'm trying to figure out the best way to configure it to
use BoostingTermQuery in the scoring mechanism. Do I need to create a custom
query parser? All I want is the default parser behavior except to get the
custom term boost from the Payload data. Thanks!
-Ken
Try a query where you're sure to get something to highlight in one of your
highlight fields, for example:
/select/?qt=standard&q=synopsis:crayon&hl=true&hl.fl=synopsis,shortdescription
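Requests like that can also be assembled programmatically; a sketch using only Python's standard library (the helper name is mine, field names are from the thread):

```python
from urllib.parse import urlencode

def highlight_url(q, fields):
    """Build a /select URL with highlighting enabled on the given fields."""
    params = {
        "qt": "standard",
        "q": q,
        "hl": "true",
        "hl.fl": ",".join(fields),
    }
    return "/select/?" + urlencode(params)

print(highlight_url("synopsis:crayon", ["synopsis", "shortdescription"]))
```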
David Snelling-2 wrote:
>
> This is the configuration for the two fields I have tried on
>
> stored="t
At this point, it's roll your own. I'd love to see the BTQ in Solr
(and Spans!), but I wonder if it makes sense w/o better indexing side
support. I assume you are rolling your own Analyzer, right? Spans
and payloads are this huge untapped area for better search!
On Sep 23, 2008, at 5:12
It may be too early to say this but I'll say it anyway :)
There should be a juicy case study that includes payloads, BTQ, and Spans in
the upcoming Lucene in Action 2. I can't wait to see it, personally.
Otis
--
Sematext -- http://sematext.com/ -- Lucene - Solr - Nutch
- Original Mes
> At this point, it's roll your own.
That's where I'm getting bogged down - I'm confused by the various queryparser
classes in lucene and solr and I'm not sure exactly what I need to override.
Do you know of an example of something similar to what I'm doing that I could
use as a reference?
>
Hmmm. That doesn't actually return anything, which is odd because I know
the term is in the field if I do a query without specifying the field.
http://qasearch.donorschoose.org/select/?q=synopsis:students
returns nothing
http://qasearch.donorschoose.org/select/?q=students
returns items with query in s
Your fields are all of string type. String fields aren't tokenized or
analyzed, so you have to match the entire text of those fields to actually
get a match. Try the following:
/select/?q=firstname:Kathryn&hl=on&hl.fl=firstname
The reason you're seeing results with just q=students, but not
q=syn
Ok, thanks, that makes a lot of sense now.
So, how should I be storing the text for the synopsis or shortdescription
fields so it would be tokenized? Should it be text instead of string?
Thank you very much for the help by the way.
On Tue, Sep 23, 2008 at 2:49 PM, wojtekpia <[EMAIL PROTECTED]>
Yes, you can use text (or some custom derivative of it) for your fields.
David Snelling-2 wrote:
>
> Ok, thanks, that makes a lot of sense now.
> So, how should I be storing the text for the synopsis or shortdescription
> fields so it would be tokenized? Should it be text instead of string?
>
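In schema.xml terms the change being suggested is the field's type (field name from the thread; other attributes are a guess at the original definition):

```xml
<!-- Before: a string field is one verbatim token, so only an exact
     full-value match works. -->
<field name="synopsis" type="string" indexed="true" stored="true"/>

<!-- After: a text field is tokenized and analyzed, so
     q=synopsis:students can match a word inside the value. -->
<field name="synopsis" type="text" indexed="true" stored="true"/>
```

After changing the type, the documents have to be reindexed for the new analysis to take effect.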
Hi,
I am using snappuller to sync my slave with the master; I am not using the
rsync daemon, I am doing the rsync over a remote shell.
When I am serving requests from the master while the snappuller is running
(after optimization the total index is around 4 GB, and it transfers the
whole index), the performance
On Sep 23, 2008, at 5:39 PM, Ensdorf Ken wrote:
At this point, it's roll your own.
That's where I'm getting bogged down - I'm confused by the various
queryparser classes in lucene and solr and I'm not sure exactly what
I need to override. Do you know of an example of something similar
Hi,
Can't tell with certainty without looking, but my guess would be slow disk,
high IO, and a large number of processes waiting for IO (run vmstat and look at
the "wa" column).
Otis
--
Sematext -- http://sematext.com/ -- Lucene - Solr - Nutch
- Original Message
> From: rahul_k123 <
Hi,
How do I give more weight to frequently searched words in Solr?
What is the functionality in the Apache Solr module?
I have a list of the most frequently searched words on my site, and I need to
highlight those words. From the net I found out that 'score' is used for this
purpose. Isn't that true?
Anybody knows a
Hi,
Thanks for the reply.
I am not using Solr for indexing and serving search requests; I am using
only the scripts for replication.
Yes, it looks like I/O, but my question is how to handle this problem and
whether there is an optimal way to achieve it.
Thanks.
Otis Gospodnetic wrote:
>
> Hi,
Hi guys,
I am trying to fetch values by joining two tables. My data-config.xml
looks like:
Since you have not given any information about your schema, we cannot help
with the queries.
What do you mean by error running query? Do you get an exception or no
values for the inner entity's fields?
On Wed, Sep 24, 2008 at 11:34 AM, con <[EMAIL PROTECTED]> wrote:
>
> Hi Guys
> I am trying to
Hi!
I am already using Solr 1.2 and am happy with it.
In a new project with a very tight deadline (10 development days from
today) I need to set up a more ambitious system in terms of scale.
Here is the spec:
* I need to index about 60,000,000
documents
* E
Which version of Tomcat is required?
I installed JBoss 4.0.2, which has Tomcat 5.5.9.
The JSP pages are not compiling;
it's giving a syntax error.
I can't move from JBoss 4.0.2.
Please help.
Regards,
Dinesh Gupta
> Date: Tue, 23 Sep 2008 19:36:22 +0530
> From: [EMAIL PROTECTED]
> T