Assuming that you're doing this in a Windows environment, you could define
your spreadsheet as an ODBC data source and define a datasource for it in
DIH. Then, you would extract the main documents from your database, and the
keywords from the ODBC datasource layered on top of your spreadsheet.
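A rough sketch of what that layering could look like in data-config.xml, assuming the old JDBC-ODBC bridge driver and a DSN named keywords_dsn (all names here are illustrative, not from the original setup):

<dataConfig>
  <!-- main documents from the database -->
  <dataSource name="db" driver="com.mysql.jdbc.Driver"
              url="jdbc:mysql://localhost/shop" user="solr" password="secret"/>
  <!-- keywords from the spreadsheet, exposed as an ODBC data source -->
  <dataSource name="ods" driver="sun.jdbc.odbc.JdbcOdbcDriver"
              url="jdbc:odbc:keywords_dsn"/>
  <document>
    <entity name="item" dataSource="db" query="SELECT id, title FROM items">
      <entity name="kw" dataSource="ods"
              query="SELECT keywords FROM [Sheet1$] WHERE item_id='${item.id}'"/>
    </entity>
  </document>
</dataConfig>

The [Sheet1$] table-name convention is the Excel ODBC driver's; adjust to whatever your DSN exposes.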
No
Hi all,
We have a requirement in our ecommerce site: a keywords string for items
is required, but only for searching purposes. Since keywords will be long and
only used for searching, we just want them indexed and don't need them
to persist in the DB. Keywords will be there in the spreadsheet in
Thanks for the explanation !
On 8/8/13 4:52 AM, Shawn Heisey wrote:
On 8/6/2013 8:49 PM, manju16832003 wrote:
My confusion: is it feasible to choose many cores, or to use shards? I do not
have much experience with how shards work and what they are used for. I would
like to know the suggestions :-) for
Hi All,
Our web application (e-commerce) requires primary and secondary categories
for items. Based on this requirement I have the following queries:
1) How are category and subcategory handled in Solr version 4.4? I have used
apache-solr-1.3.0 previously, but facets have undergone many big changes
I noticed the example solrconfig.xml has event listeners for commit. I
wonder if they could be useful here:
I am not sure how they work with hard/soft commits though.
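For reference, the stock example in solrconfig.xml looks like this (it runs an external snapshot script after each hard commit):

<listener event="postCommit" class="solr.RunExecutableListener">
  <str name="exe">solr/bin/snapshooter</str>
  <str name="dir">.</str>
  <bool name="wait">true</bool>
</listener>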
Regards,
Alex.
P.s. Just to make things complicated, UpdateRequestProcessors
have a processCommit() method. But these see
The problem I saw was that the styles were all reset in a funny way, so it
was hard to just say H1/H2/div/ul and have reasonable content showing up.
It was all small undifferentiated text. So, one had to inject a whole
bootstrap/CSS reset to do something useful. And, of course, even that was
non-
Hi Eric,
Thanks for your reply.
I was thinking of using it to provide example queries in collections I give
as examples. I also tested using it to inject a pop-out page that pulled
bootstrap/angular from a CDN to provide a fancy interface to a local
instance. It could have been useful if - say - the example distribution also
had a couple
of ex
: Didn't somebody once say this is used for customization of admin pages?
it can be, yes - that's why it originally existed -- Stefan's question was
whether anyone was actually using it for that.
I used it quite a bit back in the day at CNET as a way to "self document"
what an instance was for and ho
Oh yeah. I have seen that processor in the book and I was not able to
remember it. Thanks a lot.
And thanks a lot for your solution. It works :)
On Aug 8, 2013, at 1:52 AM, "Jack Krupansky" wrote:
> Here's the actual update processor I used (and tested):
> main_s
> final_s
Here's the actual update processor I used (and tested):
main_s
final_s
backup_s
final_s
final_s
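(The archive stripped the XML tags from the chain above, leaving only the field names. A plausible reconstruction, assuming the stock clone/first-value processors - my guess at the structure, not a verbatim restore:)

<updateRequestProcessorChain name="copy-if-missing">
  <!-- append main_s and backup_s values onto final_s -->
  <processor class="solr.CloneFieldUpdateProcessorFactory">
    <str name="source">main_s</str>
    <str name="dest">final_s</str>
  </processor>
  <processor class="solr.CloneFieldUpdateProcessorFactory">
    <str name="source">backup_s</str>
    <str name="dest">final_s</str>
  </processor>
  <!-- keep only the first value, so an existing final_s value wins -->
  <processor class="solr.FirstFieldValueUpdateProcessorFactory">
    <str name="fieldName">final_s</str>
  </processor>
  <processor class="solr.RunUpdateProcessorFactory"/>
</updateRequestProcessorChain>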
-- Jack Krupansky
-Original Message-
From: Jack Krupansky
Sent: Wednesday, August 07, 2013 8:20 PM
To: solr-user@lucene.apache.org
Subject: Re: SOLR Copy fi
Sorry, I am unable to untangle the logic you are expressing, but I can
assure you that JavaScript and the StatelessScriptUpdate processor have full
support for implementing spaghetti code logic as tangled as desired!
Simpler forms of logic can be implemented directly using non-script update
Yes, it is possible to copy from a field to another field that has no value.
In fact, that is the only kind of copy you should be doing unless the field is
multivalued.
IOW, copy field is not “replace field”.
-- Jack Krupansky
From: Luís Portela Afonso
Sent: Wednesday, August 07, 2013 7:22 PM
No and No...
Commit has a life of its own. Autocommit can occur based on time and number
of documents, independent of the update processor chain. For example, you
can send a few updates with "commit within" and sit there idle doing no
commands and then suddenly after the commitWithin interval
Ok. So, running the update processor chain *is* the commit process?
In answer to Erick's question: my habit, an old and apparently bad
one, has been to call a hard commit at the end of each update. My
question had to do with allowing soft commits to be controlled by
settings in solrconfig.xml, say
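For reference, that control typically looks like this in solrconfig.xml (the intervals are illustrative):

<updateHandler class="solr.DirectUpdateHandler2">
  <!-- hard commit: flush segments to disk, but don't open a new searcher -->
  <autoCommit>
    <maxTime>60000</maxTime>
    <openSearcher>false</openSearcher>
  </autoCommit>
  <!-- soft commit: make new documents visible to searches -->
  <autoSoftCommit>
    <maxTime>1000</maxTime>
  </autoSoftCommit>
</updateHandler>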
Block-quoting and plagiarism are two different questions.
Block-quoting is simple: break the text apart into sentences or even
paragraphs and make them separate documents. Make facets of the
post-analysis text. Now just pull counts of facets and block quotes will
be clear.
Mahout has a scala
Hi Dmitry,
The command seems good. Are you sure your shell is not doing something
funny with the params? You could try:
python solrjmeter.py -C "g1,foo" -c hour -x ./jmx/SolrQueryTest.jmx -a
where g1 and foo are results of the individual runs, ie. something that was
started and saved with '-R g1'
Hi,
Is it possible to copy the value of a field to another if the destination
doesn't have a value?
An example:
Indexing an RSS feed.
The feed has the fields link and guid, but sometimes guid may not be present
in the feed.
I have a field named finalLink that I will copy values into.
Now I want to copy guid
On 8/6/2013 8:49 PM, manju16832003 wrote:
My confusion: is it feasible to choose many cores, or to use shards? I do not
have much experience with how shards work and what they are used for. I would
like to know the suggestions :-) for the design like this.
What are the implications if I were to choose t
Most update processor chains will be configured with the Run Update
processor as the last processor of the chain. That's where the Lucene index
update and optional commit would be done.
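In other words, a typical chain ends like this (a minimal sketch):

<updateRequestProcessorChain name="mychain">
  <!-- custom processors go before the run-update step -->
  <processor class="solr.LogUpdateProcessorFactory"/>
  <!-- performs the actual Lucene index update (and any commit) -->
  <processor class="solr.RunUpdateProcessorFactory"/>
</updateRequestProcessorChain>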
-- Jack Krupansky
-Original Message-
From: Jack Park
Sent: Wednesday, August 07, 2013 1:04 PM
To: s
How are you allowing for a soft commit? IOW how are you triggering it?
And what do you speculate the updateRequestProcessorChain has to do with
soft commit?
Best
Erick
On Wed, Aug 7, 2013 at 1:04 PM, Jack Park wrote:
> If one allows for a soft commit (rather than a hard commit on each
> reque
Something smells fishy here... why do you think you need to do this using
nested queries and parameter names?
Sounds like you're engaging in "premature complication". Try simpler
approaches first.
-- Jack Krupansky
-Original Message-
From: Noob
Sent: Wednesday, August 07, 2013 6:45
On 8/7/2013 2:45 PM, Torsten Albrecht wrote:
I would like to run ZooKeeper externally on my old master server.
So I have two ZooKeepers to control my cloud. The third and fourth ZooKeeper
will be virtual machines.
For true HA with zookeeper, you need at least three instances on
separate physic
Hi,
I am currently forming the query by passing the values to my search
component.
For ex:
http://localhost:8983/solr/select?firstname=charles&lastname=dawson&qt=person
Person search component is configured to accept the values and form the
query
(
_query_:"{!wp_dismax qf=f
The ends of a range query are indeed single terms - they are not queries or
any term that would analyze into multiple terms.
In some cases you might want composite values as strings so that you can do
a range on terms.
For example, city + ", " + state as a string.
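For example, with a string field holding such composite values (citystate_s is an invented name), a range across terms becomes possible:

citystate_s:["Charlotte, NC" TO "Denver, CO"]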
-- Jack Krupansky
-Ori
Didn't somebody once say this is used for customization of admin pages?
Otis
--
SOLR Performance Monitoring -- http://sematext.com/spm
Solr & ElasticSearch Support -- http://sematext.com/
On Thu, Aug 8, 2013 at 12:24 AM, Stefan Matheis
wrote:
> Hmmm .. Didn't get at least one answer (except fro
Is it possible to obtain the shard routing key from within an
UpdateRequestProcessor when a document is being inserted?
Many thanks,
Terry
Hmmm .. Didn't get at least one answer (except from Shawn in #solr, telling me
he's using a 0 byte file to avoid errors :p) - does that mean that really no
one is using it?
Don't be afraid .. tell me, one way or another :)
- Stefan
On Wednesday, July 17, 2013 at 8:50 AM, Stefan Matheis wro
I am trying to use range queries to take advantage of having constant scores
in a multivalued field, but I am not sure if range queries support phrase
queries.
Ex:
The below range query works fine.
_query_:"address:([Charlotte TO Charlotte])"^5.5
The below query doesn't work,
_query_:"address:([
I run solr on tomcat with configured JUL to log solr to separate file:
org.apache.solr.level = INFO
org.apache.solr.handlers = 4solrerr.org.apache.juli.FileHandler
I've noticed that the logging UI stops working. Is it normal behavior or is
it a bug?
(When cores are initialized JulWatcher is registere
>A multivalued text field is directly equivalent to concatenating the
>values,
>with a possible position gap between the last and first terms of adjacent
>values.
That, in a nutshell, would be the problem. Maybe the discussion is over at
this point.
It could be I dumbed down the problem a b
Hi,
I have question regarding suggester component.
Can we filter suggestion results depending on a particular value of a field,
like fq=column1:value1?
Hi Jack,
I would like to run ZooKeeper externally on my old master server.
So I have two ZooKeepers to control my cloud. The third and fourth ZooKeeper
will be virtual machines.
Torsten
From: Jack Krupansky
Sent: Wednesday, August 7, 2013 20:05
To: solr-user@lucene.apache.org
Thre
A multivalued text field is directly equivalent to concatenating the values,
with a possible position gap between the last and first terms of adjacent
values.
Term frequency is driven by the terms from the query, not the terms from the
field (tf(query-term), not tf(field-term)). Your "max" form
"before an update chain"
Really? Why?
And if so, then you will definitely have to deal with it before handing the
data to Solr since the update chain is where preprocessing of input data
normally happens for updates in Solr.
Be specific as to what processing you want to occur. Provide an exa
This might end up being more of a Lucene question, but anyway...
For a multivalued field, it appears that term frequency is calculated as
something a little like:
sum(tf(value1), ..., tf(valueN))
I'd rather my score not give preference based on how *many* of the values
in the multivalued field
Hi,
I'm currently using solr 4.0 final with Manifoldcf v1.3 dev.
I have multivalued titles (the names are all the same so far) that must go
into a single valued field.
Can a transformer do this?
Can anyone show me how to do it?
And this has to fire off before an update chain takes place.
Thanks,
Hello Lee,
Unfortunately no. It's possible to read a CSV field via
http://wiki.apache.org/solr/DataImportHandler#FieldReaderDataSource but
there is no CSV-like EntityProcessor that can break a line into entities.
Transformers cannot emit new entities.
On Wed, Aug 7, 2013 at 8:10 PM, Lee Carroll wrot
It does seem that the Lucene42DocValuesProducer has changed its internal
version and that is what it's complaining about.
Cheers Shawn. OK, my misunderstanding on the codec stuff then; as I said,
probably not a common occurrence, but good to know.
On 7 August 2013 17:32, Shawn Heisey wrote:
> On 8
I suppose you can use Substring and Charindex to perform your task at the
SQL level, then use the value in another entity in DIH.
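Something along these lines, assuming SQL Server syntax (the table and column names are invented):

<entity name="firstTreatment" dataSource="db"
        query="SELECT id,
                      SUBSTRING(treatment_list, 1,
                                CHARINDEX(',', treatment_list) - 1) AS treatment
               FROM table1"/>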
I started looking into what I might have missed while upgrading to Solr 4.4,
and I noticed that solr.xml in Solr 4.4 has this:
${host:}
${jetty.port:8983}
${hostContext:solr}
${zkClientTimeout:15000}
${genericCoreNodeNames:true}
${socketTimeout:0}
${connTim
Three zookeepers give you bare minimum high availability - one can go down.
But... I would personally assert that running embedded zookeeper is
inherently not "high availability", just by definition (okay, by MY
definition.)
You didn't say whether you were running embedded zookeeper or not.
Hi Everyone,
I'm facing an issue in which my solr query is returning highlighted snippets
for some, but not all results. For reference, I'm searching through an index
that contains web crawls of human-rights-related websites. I'm running solr as
a webapp under Tomcat and I've included the que
While testing Solr's new ability to store data and transaction directories in
HDFS I added an additional core to one of my testing servers that was
configured as a backup (active but not leader) core for a shard elsewhere. It
looks like this extra core copies the data into its own directory rath
Hi,
I use a system with solr 3 and 20 shards (3 million docs per shard).
On a test system with one shard (60 million docs) I get 750 requests per second.
On my live system (20 shards) I get 200 requests per second.
Is the internal communication between the 20 shards a performance killer?
Anothe
If one allows for a soft commit (rather than a hard commit on each
request), when does the updateRequestProcessorChain fire? Does it fire
after the commit?
Many thanks
Jack
On 8/7/2013 7:44 AM, Parul Gupta(Knimbus) wrote:
I am trying to use solr.ISOLatin1AccentFilterFactory in Solr 4.3.1, but it's
giving the error
"Error loading class 'solr.ISOLatin1AccentFilterFactory'".
However, it's working fine in Solr 3.6.
This filter is deprecated. Here's the actual javadoc for this c
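(For reference, the usual replacement is solr.ASCIIFoldingFilterFactory, which covers the old Latin-1 accent folding and more - a minimal sketch:)

<fieldType name="text_folded" class="solr.TextField">
  <analyzer>
    <tokenizer class="solr.StandardTokenizerFactory"/>
    <!-- folds accented characters to their ASCII equivalents -->
    <filter class="solr.ASCIIFoldingFilterFactory"/>
    <filter class="solr.LowerCaseFilterFactory"/>
  </analyzer>
</fieldType>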
Thanks for this inquiry; as a result I have added a "round" JavaScript
script for the StatelessScriptUpdate processor to Early Access Release #5 of
my Solr 4.x Deep Dive book in the chapter on update processors.
The script takes a field name, a number of decimal digits to round to
(default is
On Aug 7, 2013, at 18:10 , Lee Carroll wrote:
> Hi
>
> I've 2 tables with the following data
>
> table 1
> id treatment_list
> 1 a,b
> 2 b,c
>
> table 2
> treatment id, name
> a name1
> b name 2
> c name 3
>
> Using DIH can you create an index
Thanks for the inquiry about “append two fields”; as a result I have added it
as an example in Early Access Release #5 of my Solr 4.x Deep Dive book in the
chapter on update processors. Actually, there are several examples:
- Append One Field to Another with Comma and Space as Delimiter:
On 8/7/2013 3:50 AM, Spadez wrote:
My issue is in the data-config.xml I have put two datasources, however, I am
stuck on what to put for the driver values and the urls.
" url=""
user="" password="" />
"
url="" user="" password="pass>"/>
Is anyone able to tell me what I should be putting for thes
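For what it's worth, the standard values for those two databases look like this (host, port, database name, and credentials are placeholders):

<dataSource name="mysql" driver="com.mysql.jdbc.Driver"
            url="jdbc:mysql://localhost:3306/mydb" user="user" password="pass"/>
<dataSource name="pgsql" driver="org.postgresql.Driver"
            url="jdbc:postgresql://localhost:5432/mydb" user="user" password="pass"/>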
On 8/7/2013 3:33 AM, Daniel Collins wrote:
I had been running a Solr 4.3.0 index, which I upgraded to 4.4.0 (but
hadn't changed LuceneVersion, so it was still using the LUCENE_43 codec).
I then had to back-out and return to a 4.3 system, and got an error when it
tried to read the index.
Now, it
The answer will be further down in the stack trace. It will relate to an
error that occurred when initializing the filter.
One possibility is that you have a garbage attribute name in your token
filter XML - 4.4 checks for that kind of thing now.
-- Jack Krupansky
-Original Message-
Hi
I've 2 tables with the following data
table 1
id treatment_list
1 a,b
2 b,c
table 2
treatment id, name
a name1
b name 2
c name 3
Using DIH can you create an index of the form
id treatment-id name
1 a name1
1 b
Hey Dmitry
That sounds a bit odd .. those are more like notices instead of real errors ..
sure that those are stopping the UI from working? if so .. we should see more
reports like those.
Can you verify the problem by using another browser?
I mean .. that is really a basic javascript handler .
Good point. Copying to a separate field that applied synonyms could help.
Filtering out the original countries could be tricky. The Javadoc mentions a
keepOrig flag, but the Solr docs do not. If you could set keepOrig=false, that
would do the trick.
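A sketch of that setup (the synonyms file and its contents are invented); note that explicit "=>" mapping rules replace the left-hand term at index time, which sidesteps the keepOrig question:

<fieldType name="text_continent" class="solr.TextField">
  <analyzer type="index">
    <tokenizer class="solr.StandardTokenizerFactory"/>
    <filter class="solr.LowerCaseFilterFactory"/>
    <!-- continents.txt, e.g.:
         usa => continent-na
         canada => continent-na
         france => continent-eu -->
    <filter class="solr.SynonymFilterFactory" synonyms="continents.txt"
            ignoreCase="true"/>
  </analyzer>
</fieldType>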
wunder
On Aug 7, 2013, at 5:13 AM, Erick Er
Thanks Erick, our index is relatively static. I think the deletes must be
coming from 'reindexing' the same documents so definitely handy to recover
the space. I've seen that video before. Definitely very interesting.
Brendan
On Wed, Aug 7, 2013 at 8:04 AM, Erick Erickson wrote:
> The general
Hi Mark,
Setting properly in my solrconfig.xml did it.
Thanks!
Greg Walters | Operations Team
530 Maryville Center Drive, Suite 250
St. Louis, Missouri 63141
t. 314.225.2745 | c. 314.225.2797
gwalt...@sherpaanalytics.com
www.sherpaanalytics.com
Hi Stefan,
I was able to debug the second click scenario (it was tricky to catch, since
on click a redirect happens and the log statements from the previous page are
gone; I worked around it by setting break-points in plugins.js) and got these
errors (firefox 23.0 ubuntu):
[17:20:00.731] TypeError: anonymous function d
Thank you so much for the suggestion. Is the same recommended for querying
too? I found it very slow when I query using CloudSolrServer.
Kalyan
> Date: Tue, 6 Aug 2013 13:25:37 -0600
> From: s...@elyograg.org
> To: solr-user@lucene.apache.org
> Subject: Re: SolrCloud Indexing question
>
> On 8/6
FYI, I am now using docValues for facet fields with somewhat better
performance, at least more consistent performance (especially with frequent
commits). Also, I see my main bottleneck seems to be ec2 servers - I am now
running on m3.xlarge with provisioned EBS 4000 IOPS and it is looking much
A data structure like:
fieldId -> BitArray (for fields with docFreq > 1/9 of total docs)
fieldId -> VIntList (variable byte encoded array of ints, for fields with
docFreq < 1/9 of total docs)
And the list is sorted top to bottom with most frequent fields at the top
(highest doc freqs at the to
Hi Roman,
One more question. I tried to compare different runs (g1 vs cms) using the
command below, but get an error. Should I attach some other param(s)?
python solrjmeter.py -C g1,foo -c hour -x ./jmx/SolrQueryTest.jmx
**ERROR**
File "solrjmeter.py", line 1427, in
main(sys.argv)
File
Hi,
I am trying to use solr.ISOLatin1AccentFilterFactory in Solr 4.3.1, but it's
giving the error
"Error loading class 'solr.ISOLatin1AccentFilterFactory'".
However, it's working fine in Solr 3.6.
...
Can anybody suggest how to remove this error? Or is there any new
FilterFactory I h
It shouldn't .. but from your description it sounds as if the
javascript-onclick handler doesn't work on the second click (which would do a
page reload).
if you use chrome, firefox or safari .. can you open the "developer tools" and
check if they report any javascript error? which would explain why ..
(a bit late, I know)
On 07/23/2013 02:09 PM, Erick Erickson wrote:
First a minor nit. The server.add(doc, time) is a hard commit, not a soft one.
By default, no, commitWithin is indeed a soft commit.
As per
http://lucene.472066.n3.nabble.com/near-realtime-search-and-dih-td494.html#a4000133
Hi Roman,
Finally, this has worked! Thanks for quick support.
The graphs look awesome. At least on the index sample :) It is quite easy
to setup and run + possible to run directly on the shard server in
background mode.
my test run was:
python solrjmeter.py -a -x ./jmx/SolrQueryTest.jmx -q
./qu
We have all 6 instances in the zkHost parameter.
-Original Message-
From: Raymond Wiker [mailto:rwi...@gmail.com]
Sent: Wednesday, August 07, 2013 8:29 AM
To: solr-user@lucene.apache.org
Subject: Re: external zookeeper with SolrCloud
You said earlier that you had 6 zookeeper instances, but
On the first click the values are refreshed. On the second click the page
gets redirected:
from: http://localhost:8983/solr/#/statements/plugins/cache
to: http://localhost:8983/solr/#/
Is this intentional?
Regards,
Dmitry
I went through Admin page -> Dashboard of all 10 nodes and verified that each
one is using solr-spec 4.4.0.
solr-spec 4.4.0
solr-impl 4.4.0 1504776 - sarowe - 2013-07-19 02:58:35
lucene-spec 4.4.0
lucene-impl 4.4.0 1504776 - sarowe - 2013-07-19 02:53:42
Is there anything else I can check to ve
You're explicitly asking for only 10 search results - that's what the
"rows=10" parameter does.
If you want to see all results, you can either increase "rows", or run
multiple queries, increasing "offset" each time.
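(Note that in Solr the offset parameter is literally called "start", e.g.:)

http://localhost:8983/solr/select?q=*:*&start=0&rows=100
http://localhost:8983/solr/select?q=*:*&start=100&rows=100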
On Wed, Aug 7, 2013 at 12:21 PM, Kamaljeet Kaur wrote:
> Hello,
> I am a newbi
You said earlier that you had 6 zookeeper instances, but the zkHost param
only shows 5 instances... is that correct?
On Tue, Aug 6, 2013 at 11:23 PM, Joshi, Shital wrote:
> Machines are definitely up. Solr4 node and zookeeper instance share the
> machine. We're using -DzkHost=zk1,zk2,zk3,zk4,zk
I have explained in the above post with screenshots. Indexing fails when any
node is down while shard splitting is in progress.
Shards are "special" cores, usually hosted on separate machines
that comprise one single large (logical) index. Shards need to
have the same schema, config, etc usually.
So unless you have a corpus that's too large to fit on a single piece
of your hardware, you'll always be using "cores". And sinc
Walter:
Oooh, nice! One could even use a copyField if one wanted to
keep them separate...
Erick
On Tue, Aug 6, 2013 at 12:38 PM, Walter Underwood wrote:
> Would synonyms help? If you generate the query terms for the continents,
> you could do something like this:
>
> usa => continent-na
> cana
Hmmm, shouldn't be happening. How sure are you that the upgrade to 4.4
was carried out on all machines?
Erick
On Tue, Aug 6, 2013 at 5:23 PM, Joshi, Shital wrote:
> Machines are definitely up. Solr4 node and zookeeper instance share the
> machine. We're using -DzkHost=zk1,zk2,zk3,zk4,zk5 to le
Any suggestions pls?
On Wed, Aug 7, 2013 at 5:17 PM, Prasi S wrote:
> Hi,
> I have setup solr 4.4 with cloud. When i start solr, I get an exception as
> below,
>
> *ERROR [CoreContainer] Unable to create core: mycore_sh1:
> org.apache.solr.common.SolrException: Plugin init failure for [schema.x
The general advice is to not merge (optimize) unless your
index is relatively static. You're quite correct, optimizing
simply recovers the space from deleted documents, otherwise
it won't change much (except having fewer segments).
Here's a _great_ video that Mike McCandless put together:
http://b
Well, at least it's not throwing an error.
Sorting on a tokenized field is not supported,
or rather the behavior is undefined. Your Name
field is tokenized if it's the stock text_en field.
Best
Erick
On Tue, Aug 6, 2013 at 11:03 AM, Mysurf Mail wrote:
> I don't see how it is sorted.
> this i
Hi,
I have setup solr 4.4 with cloud. When i start solr, I get an exception as
below,
*ERROR [CoreContainer] Unable to create core: mycore_sh1:
org.apache.solr.common.SolrException: Plugin init failure for [schema.xml]
fieldType "text_shingle": Plugin init failure for [schema.xml]
analyzer/filter:
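For comparison, a working shingle type is usually declared like this (a sketch; since 4.4 validates filter attributes, a misspelled attribute on the shingle filter is a likely culprit):

<fieldType name="text_shingle" class="solr.TextField" positionIncrementGap="100">
  <analyzer>
    <tokenizer class="solr.StandardTokenizerFactory"/>
    <filter class="solr.LowerCaseFilterFactory"/>
    <!-- emit two-word shingles alongside the original unigrams -->
    <filter class="solr.ShingleFilterFactory" minShingleSize="2"
            maxShingleSize="2" outputUnigrams="true"/>
  </analyzer>
</fieldType>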
On Tue, 2013-07-30 at 21:48 +0200, Robert Stewart wrote:
[Custom facet structure]
> Then we sorted those sets of facet fields by total document frequency
> so we enumerate the more frequent facet fields first, and we stop
> looking when we find a facet field which has less total document
> matche
Hello,
I am a newbie to solr. I have installed and configured it with my django
project. I am using the following versions:
>>> django-haystack - 2.0.0
>>> ApacheSolr - 3.5.0
>>> Django - 1.4
>>> mysql - 5.5.32-0
Here is the model, whose data I want to index: http://tny.cz/422c5fb7
Here is search_
Hi,
I'm looking for a bit of guidance in implementing a data import handler for
mongodb.
I am using
https://github.com/sucode/solrMongoDBImporter/blob/master/README.md as a
starting point, and I can get full imports working properly with a few
adjustments to the source. The problem comes in whe
Hi Ranjith,
Here are a few things to note about shard split:
1. The command auto-retries. Also, if there's something that went wrong
during a split, you should wait for it to complete.
2. In case of a failure, the parent shard is supposed to be intact and the
new sub-shards wouldn't replace the pa
For the data import handler I have moved the mysql and postgresql jar files to
the solr lib directory (/opt/solr/lib).
My issue is in the data-config.xml I have put two datasources, however, I am
stuck on what to put for the driver values and the urls.
" url=""
user="" password="" />
"
url="" user
Hi Erick,
I have a question. Suppose an error occurs during a shard split - is there
any approach to revert the split action? This is seriously breaking my head.
For me, documents are getting lost when any node for that shard is dead
while the shard split is in progress.
Thanks
Ran
I had been running a Solr 4.3.0 index, which I upgraded to 4.4.0 (but
hadn't changed LuceneVersion, so it was still using the LUCENE_43 codec).
I then had to back-out and return to a 4.3 system, and got an error when it
tried to read the index.
Now, it was only a dev system, so not a problem, and
On 8/7/13 9:04 AM, Shawn Heisey wrote:
On 8/7/2013 12:13 AM, Per Steffensen wrote:
Is there a way I can configure Solr so that it handles its shards
completely in memory? If yes, how? No writing to disk - neither
transactionlog nor lucene indices. Of course I accept that data is lost
if the Sol
With SOLR-5115 there's support for forcing ZkResourceLoader to use
SolrResourceLoader using file:/// prefix in your schema. This forces Solr to
load files from the FS as usual.
https://issues.apache.org/jira/browse/SOLR-5115
-Original message-
> From:Markus Jelsma
> Sent: Friday 2nd Au
Yes, you can copyField the source's contents to another field, use the
KeepWordTokenFilter to keep only those words you really care about. Using
(e)dismax you can then apply a heavy boost on the field. All special words in
that field will show up higher if queried for.
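A sketch of that approach (field, type, and file names are invented):

<!-- schema.xml -->
<field name="special_words" type="keep_special" indexed="true" stored="false"/>
<copyField source="content" dest="special_words"/>

<fieldType name="keep_special" class="solr.TextField">
  <analyzer>
    <tokenizer class="solr.StandardTokenizerFactory"/>
    <filter class="solr.LowerCaseFilterFactory"/>
    <!-- discard every token not listed in keepwords.txt -->
    <filter class="solr.KeepWordFilterFactory" words="keepwords.txt"
            ignoreCase="true"/>
  </analyzer>
</fieldType>

Then boost it in your (e)dismax handler, e.g. qf=content special_words^10.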
-Original message--
On 8/7/2013 12:13 AM, Per Steffensen wrote:
> Is there a way I can configure Solr so that it handles its shards
> completely in memory? If yes, how? No writing to disk - neither
> transactionlog nor lucene indices. Of course I accept that data is lost
> if Solr crashes or is shut down.
The luce
On Thu, 2013-08-01 at 15:24 +0200, Grzegorz Sobczyk wrote:
> Today I found in solr logs exception: java.lang.OutOfMemoryError: Requested
> array size exceeds VM limit.
> At that time memory usage was ~200MB / Xmx3g
[...]
> Caused by: java.lang.OutOfMemoryError: Requested array size exceeds VM lim