Hi
This is a PHP problem; you need to increase your per-thread memory
limit in your php.ini. The field name is memory_limit.
Regards
David
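For reference, a minimal php.ini change might look like the fragment below (the 256M value is only an example; pick a limit that fits your result sizes):

```ini
; php.ini
memory_limit = 256M
```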
On 11 Nov 2009, at 07:56, Jörg Agatz wrote:
Hello,
I have a problem with the memory size, but I don't know how I can
fix it.
Maybe it is a PHP
Depends on the number of rows being fetched from Solr, the PHP configuration,
and the Solr response writer you are using (json, xml, etc.).
Rgds,
Ritesh Gurung
David Stuart wrote:
> Hi
> This is a PHP problem; you need to increase your per-thread memory
> limit in your php.ini. The field name is memory_limit.
>
> Regards
>
>
In SolrJ, there is a method called setAllowLeadingWildcard(true). I need to
call the same method in the SolrSharp API as well. But I can't find the class
"SolrQueryParser.cs" in the SolrSharp API. Can anyone suggest how to call
that method, or whether I can use a provided namespace such as
"org.apache.solr.SolrS
I have changed the php.ini; now it works...
It was a problem in PHP: because I group the results in PHP, when I have
many results I need more memory.
Thanks for the help.
AFAIK this needs to be set in the config in your case, which is still an
open issue: http://issues.apache.org/jira/browse/SOLR-218
On Wed, Nov 11, 2009 at 9:25 AM, theashik wrote:
>
> In Solrj, there is a method called setAllowLeadingWildcard(true). I need to
> call the same method in SolrSharp
Hi folks,
I'm getting this error while committing after a dataimport of only 12 docs!!!
Exception while solr commit.
java.io.IOException: background merge hit exception: _3kta:C2329239
_3ktb:c11->_3ktb into _3ktc [optimize] [mergeDocStores]
at org.apache.lucene.index.IndexWriter.optimize(IndexWr
2009/11/11 Licinio Fernández Maurelo
> Hi folks,
>
> i'm getting this error while committing after a dataimport of only 12 docs
> !!!
>
> Exception while solr commit.
> java.io.IOException: background merge hit exception: _3kta:C2329239
> _3ktb:c11->_3ktb into _3ktc [optimize] [mergeDocStores]
>
Thanks Israel, I've done a successful import using optimize=false.
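For reference, passing that parameter on the DIH command looks like this (assuming the default /dataimport handler name and a local host/port):

```
http://localhost:8983/solr/dataimport?command=full-import&optimize=false
```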
2009/11/11 Israel Ekpo
> 2009/11/11 Licinio Fernández Maurelo
>
> > Hi folks,
> >
> > i'm getting this error while committing after a dataimport of only 12
> docs
> > !!!
> >
> > Exception while solr commit.
> > java.io.IOExceptio
Anyone?
I have done more reading and testing and it seems like I want to:
Use SolrJ and embed Solr in my webapp, but I want to disable HTTP
access to Solr, meaning force all calls through the SolrJ interface I
am building (no admin access, etc.).
Is there a simple way to do this?
Am I be
Hi,
I have an interesting issue...
Basically I am trying to run delta imports on Solr 1.4 against a PostgreSQL 8.3
database.
When I run a delta import with the entity below I get an exception (see
below the entity definition) showing the query it's trying to run, and you
can see that it's n
Hi,
I have added a PayloadTermQueryPlugin after reading
https://issues.apache.org/jira/browse/SOLR-1485?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
my class is :
*/
*import org.apache.solr.common.params.SolrParams;*
*import org.apache.solr.common.util.NamedList;*
*import org
I have 2 entities from the root node, not sure if that makes a difference!
On Wed, Nov 11, 2009 at 4:49 PM, Mark Ellul wrote:
> Hi,
>
> I have a interesting issue...
>
> Basically I am trying to delta imports on solr 1.4 on a postgresql 8.3
> database.
>
> Basically when I am running a delta imp
Yes. I believe the "is the index already optimized" check is in the guts of Lucene.
Otis
--
Sematext is hiring -- http://sematext.com/about/jobs.html?mls
Lucene, Solr, Nutch, Katta, Hadoop, HBase, UIMA, NLP, NER, IR
- Original Message
> From: William Pierce
> To: solr-user@lucene.apache.or
That's actually easy to explain/understand.
If the min n-gram size is 3, a query term with just 2 characters will never
match any terms that originally had more than 2 characters, because longer
terms never get tokenized into tokens below 3 characters.
Take the term: house
house => hou ous use
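To make that cutoff concrete, here is a small self-contained sketch (not Solr's actual tokenizer, just an illustration of fixed-size character n-grams; the class and method names are made up):

```java
import java.util.ArrayList;
import java.util.List;

public class NGrams {
    // Emits the character n-grams of a fixed size n, the way an n-gram
    // tokenizer with minGramSize = maxGramSize = n would. A term shorter
    // than n produces no grams at all, so a 2-character query term can
    // never share a 3-gram with a longer indexed term.
    static List<String> grams(String term, int n) {
        List<String> out = new ArrayList<>();
        for (int i = 0; i + n <= term.length(); i++) {
            out.add(term.substring(i, i + n));
        }
        return out;
    }

    public static void main(String[] args) {
        System.out.println(grams("house", 3)); // [hou, ous, use]
        System.out.println(grams("ho", 3));    // [] -- nothing to match on
    }
}
```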
Rather than start a new thread, I'd like to follow up on this. I'm going to
oversimplify but the basic question should be straightforward.
I currently have one very large Solr index, and 5 small ones which contain
filtered subsets of the big one and are used for faceting in one area of
our s
I am trying to post a document with the following content using SolrJ:
content
I need the xml/html tags to be ignored. Even though this works fine in
analysis.jsp, this does not work with SolrJ, as the client escapes the
< and > with &lt; and &gt;, and HTMLStripCharFilterFactory does not
strip those escap
Either way works, but running Solr as a server means that you have an
admin interface. That can be very useful. You will want it as soon as
someone asks why some document is not the first hit for their favorite
query.
wunder
On Nov 11, 2009, at 7:26 AM, Joel Nylund wrote:
Anyone?
I have
Is it possible to index on one server and copy the files over?
thanks
Joel
Hello!
> is it possible to index on one server and copy the files over?
> thanks
> Joel
Yes, it is possible, look at the CollectionDistribution wiki page
(http://wiki.apache.org/solr/CollectionDistribution).
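As a rough sketch (assuming the stock Solr 1.x replication scripts described on that wiki page are installed), the flow is:

```
# on the master, after committing:
snapshooter      # takes a snapshot of the index
# on each slave, typically from cron:
snappuller       # pulls the latest snapshot from the master
snapinstaller    # installs it and triggers a commit
```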
--
Regards,
Rafał Kuć
I'd go with just broadcasting the delete. If I remember correctly, that's what
we did at one place where we used vanilla Lucene with RMI (pre-Solr) and we
didn't see any problems due to that (RMI, on the other hand). Whether this
will work for you depends on how often you'll need to do that, a
It looks like our core admin wiki doesn't cover the persist action?
http://wiki.apache.org/solr/CoreAdmin
I'd like to be able to persist the cores to solr.xml, even if <solr persistent="false">. It seems like the persist action does this?
Noble,
Noble Paul wrote:
> DIH imports are really long running. There is a good chance that the
> connection times out or breaks in between.
Yes, you're right, I missed that point (in my case imports take no longer
than a minute).
> how about a callback?
Thanks for the hint. There was a discussio
Hi,
I'm using Solr 1.4 (from nightly build about 2 months ago) and have this
defined in solrconfig:
and following code that get executed once every night:
CommonsHttpSolrServer solrServer = new CommonsHttpSolrServer("http://...");
solrServer.setRequestWriter(new BinaryRequestWriter());
It looks like the CJK one actually does 2-grams plus a little
separate processing on Latin text.
That's kind of interesting - in general can I build a custom tokenizer
from existing tokenizers that treats different parts of the input
differently based on the utf-8 range of the character
Hi all,
I'm using the DIH in a parameterized way by passing request parameters
that are used inside of my data-config. All imports end up in the same
index.
1. Is it considered as good practice to set up several DIH request
handlers, one for each possible parameter value?
2. In case the range of
Hey Guys,
How do I add HTML/XML documents using SolrJ such that they do not
bypass the HTML char filter?
SolrJ escapes the HTML/XML value of a field, and that makes it bypass
the HTML char filter. For example, content, if added to
a field with HTMLStripCharFilter on the field using SolrJ, is not
str
Is it possible to configure Solr to fully load indexes in memory? I
wasn't able to find any documentation about this on either their site or
in the Solr 1.4 Enterprise Search Server book.
Hi Erik,
Is it possible to feed the result of one Solr query into another Solr query?
The issue I am facing right now is:
I am getting results from one query and I just need 2 indexed attribute values.
These attribute values are used to form a new query to Solr.
Since Solr gives result
Peter, here is a project that does this:
http://issues.apache.org/jira/browse/LUCENE-1488
> That's kind of interesting - in general can I build a custom tokenizer
> from existing tokenizers that treats different parts of the input
> differently based on the utf-8 range of the characters? E.g. us
The HTMLStripCharFilter will strip the html for the *indexed* terms,
it does not affect the *stored* field.
If you don't want html in the stored field, can you just strip it out
before passing to solr?
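A minimal way to do that stripping client-side, assuming simple markup with no '>' characters inside attribute values (the class name here is made up):

```java
public class StripHtml {
    // Naive tag stripper: deletes everything between '<' and the next '>'.
    // Good enough for simple markup; a real HTML parser is safer when
    // attributes may contain '>', comments, or CDATA sections.
    static String strip(String html) {
        return html.replaceAll("<[^>]*>", "");
    }

    public static void main(String[] args) {
        System.out.println(strip("<p>some <b>content</b></p>")); // some content
    }
}
```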
On Nov 11, 2009, at 8:07 PM, aseem cheema wrote:
Hey Guys,
How do I add HTML/XML docum
Ohhh... you are a life saver... thank you so much.. it makes sense.
Aseem
On Wed, Nov 11, 2009 at 7:40 PM, Ryan McKinley wrote:
> The HTMLStripCharFilter will strip the html for the *indexed* terms, it does
> not affect the *stored* field.
>
> If you don't want html in the stored field, can you
Alright. It turns out that escapedTags is not for what I thought it was for.
The problem that I am having with HTMLStripCharFilterFactory is that
it strips the HTML while indexing the field, but not while storing the
field. That is why what I see in analysis.jsp, which is index
analysis, does not m
>
> 1. Is it considered as good practice to set up several DIH request
> handlers, one for each possible parameter value?
>
Nothing wrong with this. My assumption is that you want to do this to speed
up indexing. Each DIH instance would block all others, once a Lucene commit
for the former is perfo
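One way to sketch that setup in solrconfig.xml, assuming two data-config files (the handler and file names are illustrative):

```xml
<requestHandler name="/dataimport-a"
                class="org.apache.solr.handler.dataimport.DataImportHandler">
  <lst name="defaults">
    <str name="config">data-config-a.xml</str>
  </lst>
</requestHandler>
<requestHandler name="/dataimport-b"
                class="org.apache.solr.handler.dataimport.DataImportHandler">
  <lst name="defaults">
    <str name="config">data-config-b.xml</str>
  </lst>
</requestHandler>
```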
I think not out of the box, but look at SOLR-243 issue in JIRA.
You could also put your index on ram disk (tmpfs), but it would be useless for
writing to it.
Note that when people ask about loading the whole index in memory explicitly,
it's often a premature optimization attempt.
Otis
--
Semat
Try changing:
to:
Then watch the logs for errors during indexing.
Otis
--
Sematext is hiring -- http://sematext.com/about/jobs.html?mls
Lucene, Solr, Nutch, Katta, Hadoop, HBase, UIMA, NLP, NER, IR
- Original Message
> From: siping liu
> To: solr-user@lucene.apache.org
> Sent: Wed
Yes, open an issue. This is a trivial change.
On Thu, Nov 12, 2009 at 5:08 AM, Sascha Szott wrote:
> Noble,
>
> Noble Paul wrote:
>> DIH imports are really long running. There is a good chance that the
>> connection times out or breaks in between.
> Yes, you're right, I missed that point (in my
Are you sure the data comes back under the same name? Some DBs return the
field names in ALL CAPS.
You may also try doing a delta import using a full-import:
http://wiki.apache.org/solr/DataImportHandlerFaq#My_delta-import_goes_out_of_memory_._Any_workaround_.3F
On Wed, Nov 11, 2009 at 9:55 PM, Mark Ell
On Thu, Nov 12, 2009 at 3:13 AM, Jason Rutherglen
wrote:
> It looks like our core admin wiki doesn't cover the persist action?
> http://wiki.apache.org/solr/CoreAdmin
>
> I'd like to be able to persist the cores to solr.xml, even if <solr persistent="false">. It seems like the persist action does this