I'm having trouble getting the core CREATE command to work with relative
paths in the solr.xml configuration.
I'm working with a layout like this:
/opt/solr [this is solr.solr.home: $SOLR_HOME]
/opt/solr/solr.xml
/opt/solr/core0/ [this is the "template" core]
/opt/solr/core0/conf/schema.xml [etc.]
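For reference, a minimal solr.xml sketch for that layout, with instanceDir
relative to solr.solr.home (core name and attributes are illustrative, and
exact relative-path resolution may depend on your Solr version):

  <solr persistent="true">
    <cores adminPath="/admin/cores">
      <!-- the "template" core; instanceDir resolves relative to $SOLR_HOME -->
      <core name="core0" instanceDir="core0"/>
    </cores>
  </solr>

A new core could then be created from the template via the CoreAdmin API, e.g.
http://localhost:8983/solr/admin/cores?action=CREATE&name=core1&instanceDir=core1
(host and names illustrative).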
Put that line in your startup script, or you can set it as an environment variable:
export CATALINA_OPTS="-Xms256m -Xmx1024m"
You can set it up in the Tomcat startup script.
Hello There,
Even I am facing the same errors...
@Grijesh, where exactly do I need to make these changes to increase the JVM
heap space? I mean, where do I need to specify them?
I had already changed the Tomcat config's Java (JVM) initial memory pool and
maximum memory pool to 256-1024 MB, yet the error persists.
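For reference, on Tomcat these are usually set in a bin/setenv.sh file next to
catalina.sh, which catalina.sh sources automatically if present. A minimal
sketch, with sizes that are only illustrative:

  # $CATALINA_HOME/bin/setenv.sh
  # initial heap 256 MB, maximum heap 1 GB -- tune to your hardware
  export CATALINA_OPTS="$CATALINA_OPTS -Xms256m -Xmx1024m"

On Windows the equivalent is a setenv.bat that sets CATALINA_OPTS.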
hi all,
the error I got is "Unexpected RuntimeException from
org.apache.tika.parser.pdf.PDFParser@8210fc" when I indexed a file similar
to the samplerequestform.pdf attached to
https://issues.apache.org/jira/browse/PDFBOX-709
Can't we index those types of files in Solr?
regards,
satya
Increase your JVM heap space by using params like these:
-Xms1024m
-Xmx4096m
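For illustration, if you run the Solr example with the bundled Jetty, the same
flags go on the java command line (sizes illustrative):

  java -Xms1024m -Xmx4096m -jar start.jar

For Tomcat, put them in CATALINA_OPTS as described elsewhere in this thread.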
I got this error; could anyone explain and help solve it?
SEVERE: Exception invoking periodic operation:
java.lang.OutOfMemoryError: Java heap space
        at org.apache.catalina.util.LifecycleSupport.fireLifecycleEvent(LifecycleSupport.java:114)
        at org.apache.catalina.core.ContainerBase.bac
Hello Everyone,
Please help me understand the logic behind the lock file that is generated
while indexing data in Solr!
The trouble I am facing is as follows:
The data that I indexed numbers in the millions. At the initial stages of
indexing I find no errors until it crosses about 10 lakh (1 million) documents
The current implementation of distributed search uses the unique key in the
STAGE_EXECUTE_QUERY stage:
public int distributedProcess(ResponseBuilder rb) throws IOException {
  ...
  if (rb.stage == ResponseBuilder.STAGE_EXECUTE_QUERY) {
    createMainQuery(rb);
    return ResponseBuilder.STAGE_
On Mon, Aug 16, 2010 at 5:23 PM, Steven A Rowe wrote:
> Hi Robert,
>
> You wrote in response to Hoss:
> > Maybe for once your argument isn't completely bogus
>
> Attacking people here is really uncalled for.
>
>
actually, he asked for it:
> you're right, we should just fix the bug that the query
Analysis of the indexed data has to be done with your own custom queries.
The Solr logs include the query string, number of documents found and
query time. You'll have to code your own tool for this.
On Mon, Aug 16, 2010 at 1:08 AM, Karthik K wrote:
> Lucidgaze might help.
>
> Karthik
>
--
L
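For what it's worth, a minimal sketch of such a tool in Java, assuming the
default request-log lines contain q=..., hits=N and QTime=N (the regex and
log-path handling are illustrative, not a tested implementation):

import java.io.BufferedReader;
import java.io.FileReader;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Scans a Solr log and prints the query string, hit count and query time
// for each request line it can parse.
public class SolrLogStats {
  private static final Pattern LINE =
      Pattern.compile("q=([^&}\\s]+).*hits=(\\d+).*QTime=(\\d+)");

  public static void main(String[] args) throws Exception {
    BufferedReader in = new BufferedReader(new FileReader(args[0]));
    String line;
    while ((line = in.readLine()) != null) {
      Matcher m = LINE.matcher(line);
      if (m.find()) {
        System.out.println("q=" + m.group(1)
            + " hits=" + m.group(2) + " qtime=" + m.group(3) + "ms");
      }
    }
    in.close();
  }
}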
I've no idea if it's possible, but I'd at least try to return an ArrayList of
rows instead of just a single row. And if it doesn't work, which is probably
the case, how about filing an issue in Jira?
Reading the docs on the matter, I think it should be (made) possible to
return multiple ro
Still stuck on this - any hints on how to write the JavaScript to split a
document? Thanks!
-Pete
On Aug 5, 2010, at 8:10 PM, Lance Norskog wrote:
> You may have to write your own javascript to read in the giant field
> and split it up.
>
> On Thu, Aug 5, 2010 at 5:27 PM, Peter Spam wrote:
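For what it's worth, a rough sketch of a DataImportHandler script transformer
that splits one row into several, along the ArrayList-of-rows lines suggested
upthread (the field names, split rule, and query are illustrative; I haven't
verified this against a specific Solr version):

<dataConfig>
  <!-- dataSource details omitted; query and field names are illustrative -->
  <script><![CDATA[
    function splitRow(row) {
      // split the giant field on blank lines and emit one row per chunk
      var parts = row.get('giant').split('\n\n');
      var rows = new java.util.ArrayList();
      for (var i = 0; i < parts.length; i++) {
        var r = new java.util.HashMap(row);   // copy the original columns
        r.put('id', row.get('id') + '_' + i); // keep the unique key unique
        r.put('chunk', parts[i]);
        rows.add(r);
      }
      return rows;
    }
  ]]></script>
  <document>
    <entity name="doc" query="select id, giant from docs"
            transformer="script:splitRow"/>
  </document>
</dataConfig>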
: issue resolve. problem was that solr.war was silently not being overwritten
: by new version.
:
: will try to spend more time debugging before posting.
Glad you were able to figure it out.
For future reference: problems like these are pretty much impossible for
people to help you with unless
: > I think your problem may be that StreamingUpdateSolrServer buffers up
: > commands and sends them in batches in a background thread. if you want to
: > send individual updates in real time (and time them) you should just use
: > CommonsHttpSolrServer
:
: My goal is to batch updates. My cont
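For illustration, a minimal SolrJ sketch of the distinction (the URL and queue
sizes are illustrative; the class names are from the Solr 1.4-era SolrJ API):

import org.apache.solr.client.solrj.impl.CommonsHttpSolrServer;
import org.apache.solr.client.solrj.impl.StreamingUpdateSolrServer;
import org.apache.solr.common.SolrInputDocument;

public class UpdateTiming {
  public static void main(String[] args) throws Exception {
    String url = "http://localhost:8983/solr";

    // Sends each update immediately, so wall-clock timing is meaningful.
    CommonsHttpSolrServer direct = new CommonsHttpSolrServer(url);

    // Buffers commands and sends them from background threads:
    // good for batch throughput, bad for timing individual adds.
    StreamingUpdateSolrServer streaming =
        new StreamingUpdateSolrServer(url, 20, 4); // queue size 20, 4 threads

    SolrInputDocument doc = new SolrInputDocument();
    doc.addField("id", "1");

    long start = System.currentTimeMillis();
    direct.add(doc);
    System.out.println("add took " + (System.currentTimeMillis() - start) + " ms");

    streaming.add(doc);             // returns quickly; send happens in background
    streaming.blockUntilFinished(); // wait for the background queue to drain
  }
}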
Hi Robert,
You wrote in response to Hoss:
> Maybe for once your argument isn't completely bogus
Attacking people here is really uncalled for.
-1 from me.
Steve
: > page. Use &lt; instead of <. When I first ran into this, I was
: > surprised that &gt; was not required as well, but it's probably a good
: > idea to use it, just in case things tighten up in the future.
: >
:
: Thanks, confirming this worked using '&lt;' instead of <. It would help
: to no
Ugh I should have checked there first! Thanks for the reply.. that helps a
lot.
Sincerely
Amit
On Mon, Aug 16, 2010 at 10:57 AM, Gora Mohanty wrote:
> On Mon, 16 Aug 2010 10:43:38 -0700
> Amit Nithian wrote:
>
> > I am not sure if this is the best approach to this problem but I
> > was curious
: "We will only provide a conversion tool that can convert indexes from
: the last "branch_3x" up to this trunk (4.0) release, so they can be
: read later, but may not contain terms with all current analyzers, so
: people need mostly reindexing. Older indexes will not be able to be
: read natively
On Mon, Aug 16, 2010 at 4:20 PM, Chris Hostetter
wrote:
>
> Even if you convince folks to make every change you think should be made
> to the Lucene QueryParser (again: please take that up in a separate
> thread) it won't change the fact that people using analysis.jsp should
> understand the disti
: > even if you change the Lucene QueryParser so that whitespace isn't a meta
: > character it doesn't affect the underlying issue: analysis.jsp is agnostic
: > about QueryParsers.
: analysis.jsp isn't agnostic about queryparsers, it's ignorant of them, and
: your default queryparser is actually a
: Maybe, separate from analysis.jsp (showing only how text is analyzed),
: Solr needs a debug page showing the steps the field's QueryParser goes
: through on a given query, to debug such tricky QueryParser/Analyzer
: interactions?
As mentioned earlier in this thread, I set out to build something
: Perhaps fold it into the pf/pf2 syntax?
:
: pf=text^2// current syntax... makes phrases with a boost of 2
: pf=text~1^2 // proposed syntax... makes phrases with a slop of 1 and
: a boost of 2
:
: That actually seems pretty natural given the lucene query syntax - an
: actual boosted sloppy
No
http://wiki.apache.org/solr/SimpleFacetParameters#Facet_by_Range
https://issues.apache.org/jira/browse/SOLR-1240
-Original message-
From: Peng, Wei
Sent: Mon 16-08-2010 20:25
To: solr-user@lucene.apache.org;
Subject: RE: help on facet range
The solr version that I am using is 1
On 8/16/10 1:55 AM, Yatir Ben Shlomo wrote:
> Hi!
> I am using SolrCloud with Tomcat 5.5.
> In my setup every language has its own index and its own Solr filters, so
> it needs separate Solr configuration files.
>
> In the SolrCloud examples posted here: http://wiki.apache.org/solr/SolrCloud
> I n
The solr version that I am using is 1.4.0.
Does it support facet.range?
Wei
-Original Message-
From: Peng, Wei [mailto:wei.p...@xerox.com]
Sent: Monday, August 16, 2010 2:12 PM
To: solr-user@lucene.apache.org
Subject: help on facet range
I have been trying to use facet by range.
Howeve
I have been trying to use facet by range.
However, no matter what I tried, I did not get anything from facet range
(I do get results from facet fields: topic and author).
The query is
http://localhost:8983/solr/select/?facet.range=timestamp&facet.range.start=0&facet.range.end=1277942270&facet.ran
On Mon, 16 Aug 2010 10:43:38 -0700
Amit Nithian wrote:
> I am not sure if this is the best approach to this problem but I
> was curious if a single solr server could be both a master and a
> slave without causing index corruption? It seems that you could
> setup multiple replication handlers in t
I am not sure if this is the best approach to this problem but I was curious
if a single solr server could be both a master and a slave without causing
index corruption? It seems that you could set up multiple replication
handlers in the Solr config, /replication and /replication2, and have one be
master
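For illustration, a rough sketch of what the two handlers might look like in
solrconfig.xml, one configured as master and one as slave (the handler names,
master URL, and poll interval are illustrative; whether this setup avoids
corruption is exactly the open question):

  <requestHandler name="/replication" class="solr.ReplicationHandler">
    <lst name="master">
      <str name="replicateAfter">commit</str>
      <str name="confFiles">schema.xml,stopwords.txt</str>
    </lst>
  </requestHandler>

  <requestHandler name="/replication2" class="solr.ReplicationHandler">
    <lst name="slave">
      <str name="masterUrl">http://upstream-host:8983/solr/replication</str>
      <str name="pollInterval">00:00:60</str>
    </lst>
  </requestHandler>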
You can append it in your middleware, or try the EdgeNGramTokenizer [1]. If
you're going for the latter, don't forget to reindex and expect a larger index.
[1]:
http://lucene.apache.org/java/2_9_0/api/all/org/apache/lucene/analysis/ngram/EdgeNGramTokenizer.html
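For illustration, a rough schema.xml sketch using the filter variant,
solr.EdgeNGramFilterFactory, applied only at index time (the field type name
and gram sizes are illustrative):

  <fieldType name="text_prefix" class="solr.TextField">
    <analyzer type="index">
      <tokenizer class="solr.WhitespaceTokenizerFactory"/>
      <filter class="solr.LowerCaseFilterFactory"/>
      <!-- indexes "news" as n, ne, new, news so prefix queries
           match without the user typing a wildcard -->
      <filter class="solr.EdgeNGramFilterFactory" minGramSize="1" maxGramSize="15"/>
    </analyzer>
    <analyzer type="query">
      <tokenizer class="solr.WhitespaceTokenizerFactory"/>
      <filter class="solr.LowerCaseFilterFactory"/>
    </analyzer>
  </fieldType>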
-Original message-
From
Is it possible to set up Lucene to treat a keyword search such as
title:News
implicitly like
title:News*
so that any title that begins with News will be returned without the
user having to throw in a wildcard?
Also, are there any common filters and such that are generally
considered a good pra
Hi
We have a news portal built on a CMS that heavily uses solr for indexing.
Going ahead we will be migrating all our other portals to the same platform
and are not sure how to work with Solr for multiple websites.
The options are:
1) Using multiple publications/indexes within solr for each s
Thank you very much.
I know the feeling, I've definitely had times when I just got busy and
didn't reply, but I've had plenty to do that didn't require that to be done
first, so no worries.
Thanks,
Thomas
On Mon, Aug 16, 2010 at 9:14 AM, Mark Allan wrote:
> Hi Thomas,
>
> Sorry for not replyi
Hi Thomas,
Sorry for not replying before now - I've had your email flagged in my
mail client to remind me to reply, but I've been so busy recently I
never got round to it.
I'll package up the necessary java files and send you the attachment
directly instead of posting a zip file to the ma
Sorry to bother you, but since I haven't had a reply in a week, I figured
I'd try asking again...
What build of Solr are you using personally? Are you just using a nightly
build, or is there a specific build that you are using? Has it had any
major screw-ups for you?
And I still would love to s
Thank you for your reply.
I'll apply your patch and try this new feature to see if it meets my needs.
If I understand correctly, your solution is to have a field by type and to
select the field to use depending on the value of another field.
Ideally, I would apply a different pre-treatment to my
Use a tool to download a site to local disk, and ship the resulting HTML as a
folder or ZIP.
If that is not good enough, consider shipping the Reference Guide by
LucidImagination. It is one PDF and contains most of what you need. The
customer may be confused by LucidWorks specific chapters but i
On 2010-08-16 10:06, Damien Dudognon wrote:
Hi all,
I want to use a specific stopword list depending on a field's value. For example, if type
== 1 then I use stopwords1.txt to index "text" field, else I use stopwords2.txt.
I thought of several solutions but no one really satisfied me:
1) use o
As far as I know, the higher you set the value, the faster the indexing
process will be (because more things are kept in memory). But depending on
what your needs are, it may not be the best option. If you set a high
mergeFactor and you want to optimize the index once the process is done,
this op
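For reference, mergeFactor is set in solrconfig.xml; a minimal sketch (the
value 25 is just an example):

  <indexDefaults>
    <!-- higher = faster indexing but more segments on disk;
         lower = slower indexing but a cheaper optimize -->
    <mergeFactor>25</mergeFactor>
  </indexDefaults>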
Lucidgaze might help.
Karthik
Hi all,
I want to use a specific stopword list depending on a field's value. For
example, if type == 1 then I use stopwords1.txt to index "text" field, else I
use stopwords2.txt.
I thought of several solutions but no one really satisfied me:
1) use one Solr instance by type, and therefore a di