On 2/19/2016 8:47 PM, Brian Wright wrote:
> Here's the fundamental issue when talking about use of software like
> Solr in a corporate environment. As a systems engineer at Marketo, my
> best practices recommendations and justifications are based on the
> documentation provided by the project owner
Hi Shawn,
Thanks for the information.
On 2/19/16 3:32 PM, Shawn Heisey wrote:
You will use fewer resources if you only run one Solr instance on each
machine. You can still have different considerations for different
hardware with one instance -- on the servers with more resources,
configure a
With collapse and expand the facet counts will be calculated for the
collapsed data set. You can also use the tag/exclude feature to exclude the
collapse filter query and generate facets on the uncollapsed set.
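
As a rough SolrJ sketch of the tag/exclude approach described above (the field
names groupId and category are made up for illustration, and the setup details
are assumed):

import org.apache.solr.client.solrj.SolrQuery;

public class CollapseFacetSketch {
    static SolrQuery build() {
        SolrQuery q = new SolrQuery("*:*");
        // Collapse on a hypothetical groupId field; tag the filter so facets can exclude it
        q.addFilterQuery("{!tag=collapsing}{!collapse field=groupId}");
        q.set("expand", "true");   // also return the expanded groups
        q.setFacet(true);
        // By default facet counts reflect the collapsed set; {!ex=collapsing}
        // excludes the collapse filter so this facet counts the uncollapsed set
        q.addFacetField("{!ex=collapsing}category");
        return q;
    }
}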
Joel Bernstein
http://joelsolr.blogspot.com/
On Wed, Feb 17, 2016 at 10:44 AM, Anil w
On 2/19/2016 3:40 PM, Brian Wright wrote:
> Without going into excessive detail on our design, I won't be able to
> sufficiently justify an answer to your question as to the why of it.
> Suffice it to say we plan to deploy this indexing for our entire
> customer base. Because of size these document
Hi Shawn,
Without going into excessive detail on our design, I won't be able to
sufficiently justify an answer to your question as to the why of it.
Suffice it to say we plan to deploy this indexing for our entire
customer base. Because of the size of these document collections and the way
that they
Hi Ed,
Did you look into the ExternalFileField type (for example, with the name
position_external_field in your schema)? It can be used to map to your
field (for example, position; hopefully these don't change very often), and
you can then use position_external_field in your boost function.
This can be used if you can
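
A small SolrJ sketch of the query side of this idea, assuming the schema
already defines position_external_field as an ExternalFileField (the field
name, the query, and the edismax boost usage here are illustrative):

import org.apache.solr.client.solrj.SolrQuery;

public class ExternalBoostSketch {
    static SolrQuery build() {
        SolrQuery q = new SolrQuery("description:holder");
        q.set("defType", "edismax");
        // Multiplicative boost read from the external file; the values can be
        // refreshed on disk without reindexing the documents
        q.set("boost", "position_external_field");
        return q;
    }
}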
Hi,
We are implementing a solution on SolrCloud involving parent/child documents.
We had a few questions:
1. In a nested document structure, are parent and child documents always indexed on
the same shard?
2. Are there any limitations on the number of child documents in a nested structure?
3. Any other limitati
Thanks Shawn!
Best,
Mark.
On Wed, Feb 17, 2016 at 7:48 PM, Shawn Heisey wrote:
> On 2/17/2016 3:49 PM, Mark Robinson wrote:
> > I have around 121 fields, out of which 12 are indexed and almost all
> > 121 are stored.
> > Average size of a doc is 10KB.
> >
> > I was checking for start=0
Hello,
I am using Solr 5.4.0, one collection, multiple shards with replication.
Sample documents:
{
  "item_id": "30d1e667",
  "date": "2014-01-01",
  "position": "5",
  "description": "automobile license plate holder"
}
{
  "item_id": "3cf18028",
  "date": "2013-01-01",
  "position": "23",
  "description": "din
On 2/19/2016 3:08 AM, Clemens Wyss DEV wrote:
> The logic is somewhat this:
>
> SolrClient solrClient = new HttpSolrClient( coreUrl );
> while ( got more elements to index )
> {
> batch = create 100 SolrInputDocuments
> solrClient.add( batch )
> }
How much data is going into each of those Sol
Clemens,
What I understand from your emails above is that you are creating
SolrInputDocuments in a batch inside a loop, and those objects get created on the heap.
SolrJ/SolrClient doesn't have any control over removing those objects from
the heap; that is controlled by garbage collection. So your program may end up
in
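
A minimal sketch of the batching pattern being discussed, assuming SolrJ 5.x
and a hypothetical id source; the point is that each batch list is replaced
after add(), so the previous documents become eligible for garbage collection:

import java.io.IOException;
import java.util.ArrayList;
import java.util.Iterator;
import java.util.List;
import org.apache.solr.client.solrj.SolrClient;
import org.apache.solr.client.solrj.SolrServerException;
import org.apache.solr.client.solrj.impl.HttpSolrClient;
import org.apache.solr.common.SolrInputDocument;

public class BatchIndexSketch {
    // "ids" stands in for whatever produces the data to index
    static void index(Iterator<String> ids, String coreUrl)
            throws SolrServerException, IOException {
        SolrClient client = new HttpSolrClient(coreUrl);
        int batchSize = 100;
        List<SolrInputDocument> batch = new ArrayList<>(batchSize);
        while (ids.hasNext()) {
            SolrInputDocument doc = new SolrInputDocument();
            doc.addField("id", ids.next());
            batch.add(doc);
            if (batch.size() == batchSize) {
                client.add(batch);                  // hand the batch to Solr
                batch = new ArrayList<>(batchSize); // drop references so GC can reclaim them
            }
        }
        if (!batch.isEmpty()) {
            client.add(batch);
        }
        client.commit();
        client.close();
    }
}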
I'm out of the office now so I don't have the numbers to hand but from memory I
think there are probably around 800-1000 fields or so. I will confirm on Monday.
If I have time over the weekend I will try and recreate the problem at home and
see if I can post up a sample.
I am getting the below error while indexing in SolrCloud; I am using the implicit
router.
null:org.apache.solr.common.SolrException: Error trying to proxy request for
url: http:/localhost:8984/solr/Restaurant_Restaurant_2_replica1/update
at
org.apache.solr.servlet.HttpSolrCall.remoteQuery(HttpSolr
Thanks Susheel,
but I am having problems in, and am talking about, SolrJ, i.e. the "client-side
of Solr" ...
-----Original Message-----
From: Susheel Kumar [mailto:susheel2...@gmail.com]
Sent: Friday, 19 February 2016 17:23
To: solr-user@lucene.apache.org
Subject: Re: OutOfMemory when
Hello,
We are trying to boost exact matches to improve relevance.
We followed this article:
http://everydaydeveloper.blogspot.fr/2012/02/solr-improve-relevancy-by-boosting.html
and this:
http://stackoverflow.com/questions/29103155/solr-exact-match-boost-over-text-containing-the-exact-match
but it d
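
For reference, a hedged SolrJ sketch of the pattern those pages describe: a
hypothetical description_exact field is populated via copyField and analyzed
with a KeywordTokenizer (plus lowercasing), so the edismax pf boost only fires
when the whole field value equals the query; all names and weights here are
made up:

import org.apache.solr.client.solrj.SolrQuery;

public class ExactMatchBoostSketch {
    static SolrQuery build(String userQuery) {
        SolrQuery q = new SolrQuery(userQuery);
        q.set("defType", "edismax");
        // Ordinary per-term matches come from the tokenized field
        q.set("qf", "description");
        // pf analyzes the whole query against description_exact; with a
        // KeywordTokenizer-based analyzer that yields a single term, so only
        // documents whose entire value equals the query receive the boost
        q.set("pf", "description_exact^10");
        return q;
    }
}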
On Fri, Feb 19, 2016 at 8:51 AM, Adam Neal [Extranet] wrote:
> I've recently upgraded from 4.10.2 to 5.3.1 and I've hit an issue with slow
> commits on one of my cores. The core in question is relatively small (56k
> docs) and the issue only shows when committing after a number of deletes,
> com
Clemens,
First, allocating a higher or the right amount of heap memory is not a workaround;
it becomes a requirement, depending on how much heap memory your Java
program needs.
Please read about why Solr needs heap memory at
https://wiki.apache.org/solr/SolrPerformanceProblems
Thanks,
Susheel
On Fri, F
And is there a workaround?
That JIRA issue is full of ZooKeeper problems. Doubt it will be
solved anytime soon.
On 02/19/2016 04:38 PM, Binoy Dalal wrote:
There's a JIRA ticket regarding this, and as of yet it is unresolved.
https://issues.apache.org/jira/browse/SOLR-3274
On Fri, Feb 19, 201
max heap on both instances is the same at 8gig, and only using around 1.5gig at
the time of testing.
5.3.1 index size is 90MB
4.10.2 index size is 125MB
From: Shawn Heisey [apa...@elyograg.org]
Sent: 19 February 2016 15:45
To: solr-user@lucene.apache.o
On 2/19/2016 6:51 AM, Adam Neal [Extranet] wrote:
> I've recently upgraded from 4.10.2 to 5.3.1 and I've hit an issue with slow
> commits on one of my cores. The core in question is relatively small (56k
> docs) and the issue only shows when committing after a number of deletes,
> committing after
There's a JIRA ticket regarding this, and as of yet it is unresolved.
https://issues.apache.org/jira/browse/SOLR-3274
On Fri, Feb 19, 2016 at 2:11 PM Bogdan Marinescu <
bogdan.marine...@awinta.com> wrote:
> Hi,
>
> From time to time I get org.apache.solr.common.SolrException: Cannot
> talk to ZooKe
> increase heap size
this is a "workaround"
Doesn't SolrClient free part of its buffer? At least documents it has sent to
the Solr-Server?
-----Original Message-----
From: Susheel Kumar [mailto:susheel2...@gmail.com]
Sent: Friday, 19 February 2016 14:42
To: solr-user@lucene.apache.o
Just some additional information, the problem is mainly when the dynamic fields
are stored. Just having them indexed reduces the commit time to around 20
seconds. Unfortunately I need them stored.
From: Adam Neal [Extranet] [an...@mass.co.uk]
Sent: 19
I've recently upgraded from 4.10.2 to 5.3.1 and I've hit an issue with slow
commits on one of my cores. The core in question is relatively small (56k docs)
and the issue only shows when committing after a number of deletes; committing
after additions is fine. As an example, committing after deleting
And if it is on the Solr side, please increase the heap size on the Solr side:
https://cwiki.apache.org/confluence/display/solr/JVM+Settings
On Fri, Feb 19, 2016 at 8:42 AM, Susheel Kumar
wrote:
> When you run your SolrJ Client Indexing program, can you increase heap
> size similar below. I guess it may
When you run your SolrJ client indexing program, can you increase the heap size
similar to below? I guess it may be on your client side that you are running into
OOM... or please share the exact error if the below doesn't work / isn't the issue.
java -Xmx4096m
Thanks,
Susheel
On Fri, Feb 19, 2016 at 6:25 AM,
Guessing on ;) :
Must I commit after every "batch" in order to force a flushing of
org.apache.solr.client.solrj.request.RequestWriter$LazyContentStream et al.?
OTOH, it is advised NOT to "commit" from a (SolrJ) client:
https://lucidworks.com/blog/2013/08/23/understanding-transaction-logs-softco
Ok Binoy, now it is clearer :)
Yes, if you add sorting and faceting as additional optional requirements, doing
2 queries could be a perilous path!
Cheers
On 19 February 2016 at 09:24, Ere Maijala wrote:
> If he needs faceting or something (I didn't see that specified), doing two
> queries won't do
The char[] which occupies 180MB has the following "path to root"
char[87690841] @ 0x7940ba658 shopproducts#...
|- java.lang.Thread @ 0x7321d9b80 SolrUtil executorService for
core 'fust-1-fr_CH_1' -3-thread-1 Thread
|- value java.lang.String @ 0x79e804110 shopproducts#...
| '- str org.apache.
If he needs faceting or something (I didn't see that specified), doing
two queries won't do, of course...
--Ere
On 19.2.2016 at 2.22, Binoy Dalal wrote:
Hi Alessandro,
Don't get me wrong. Using mm, ps and pf can and absolutely will solve his
problem.
Like I said above, my solution is meant to b
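
As a hedged illustration of the mm/ps/pf parameters mentioned here (field
names and values are invented for the example):

import org.apache.solr.client.solrj.SolrQuery;

public class MmPsPfSketch {
    static SolrQuery build(String userQuery) {
        SolrQuery q = new SolrQuery(userQuery);
        q.set("defType", "edismax");
        q.set("qf", "title description");
        q.set("mm", "100%");        // require every query term to match
        q.set("pf", "description"); // boost docs where the terms occur together as a phrase
        q.set("ps", "2");           // allow up to 2 positions of slop in that phrase boost
        return q;
    }
}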
Hi,
The only way I can imagine is to create that auxiliary field and perform
the facet on it. It means that you have to know "a priori" the kind of
report (facet field) you need.
For example, if your current data (SolrDocument) is:
{
  "id": 3757,
  "country": "CountryX",
  "state
Hi,
From time to time I get org.apache.solr.common.SolrException: Cannot
talk to ZooKeeper - Updates are disabled.
This happens most likely when Solr receives a lot of documents. My question is, why
is this happening, and how do I get around it?
Stacktrace:
org.apache.solr.common.SolrException: Cannot ta
Environment: Solr 5.4.1
I am facing OOMs when batch updating via SolrJ. I am seeing approx. 30,000 (!)
SolrInputDocument instances, although my batch size is 100. I.e., I call
solrClient.add( documents ) for every 100 documents only. So I'd expect to see
at most 100 SolrInputDocuments in memory at any