Hi,
If you like, you can open a JIRA issue on this and provide as much info as
possible. Someone can then look into (potential) memory optimization of this
part of the code.
--
Jan Høydahl, search solution architect
Cominvent AS - www.cominvent.com
Solr Training - www.solrtraining.com
28. sep.
Hi Jan.
Thank you very much for your advice.
So I understand that Solr needs more memory to parse the files.
If parsing a file of size x needs double memory (2x), then how much heap
should I allocate relative to the file size? 8x? 16x?
Regards,
Shigeki
2012/9/28 Jan Høydahl
Please try to increase -Xmx and see how much RAM you need for it to succeed.
I believe it is simply a case where this particular file needs double memory
(480 MB) to parse, and you have only allocated 1 GB (which is not particularly
much). Perhaps the code could be optimized to avoid the Arrays.cop…
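As a sketch of the first suggestion: the heap ceiling is set with the JVM's -Xmx flag when starting Solr. The start.jar invocation below assumes the example Jetty layout shipped with Solr at the time; adjust the path and sizes for your own install.

```shell
# Raise the JVM heap ceiling to 2 GB (and start at 512 MB) when launching Solr.
# start.jar location is an assumption; use your own install path.
java -Xms512m -Xmx2g -jar start.jar
```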
These are very large files and this is not enough memory. Do you upload these
as files?
If the CSV file is one document per line, you can split it up. Unix has a
'split' command which does this very nicely.
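The split-and-reattach approach above can be sketched like this; the sample data, chunk size of 25 lines, and file names are placeholders for the real CSV. Note that a plain `split` would leave the header row only in the first chunk, so the header is re-attached to every piece:

```shell
# A one-document-per-line CSV (header + 100 rows) standing in for the real file
{ echo "id,name"; seq 1 100 | awk '{print $1",doc"$1}'; } > docs.csv

# Keep the header aside, then split the body into 25-line chunks:
# part_aa, part_ab, part_ac, part_ad
head -n 1 docs.csv > header.csv
tail -n +2 docs.csv | split -l 25 - part_

# Re-attach the header to each chunk so every piece is a valid CSV for Solr
for f in part_*; do
  cat header.csv "$f" > "chunk_$f.csv"
done
```

Each resulting chunk_part_*.csv can then be uploaded separately, keeping the per-request memory footprint small.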