Hi there,
I recently upgraded a Solr instance to version 6.0.1, and while trying out
the new streaming expressions feature I discovered what I think might be a
bug in the HTTP interface.
I tried to create two simple streaming expressions, as described below:
innerJoin(
search(collec
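That is, an innerJoin of two search streams, roughly of this documented
shape (the collection and field names below are placeholders, not the real
ones):

  innerJoin(
    search(people, q=*:*, fl="personId,name", sort="personId asc"),
    search(pets, q=*:*, fl="ownerId,petName", sort="ownerId asc"),
    on="personId=ownerId"
  )

Note that both inner searches must be sorted on the join keys.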
EDIT: I'll keep testing with other stream sources/decorators. So far only the
search endpoint works in both the Java and cURL implementations.
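For the cURL case I am POSTing the expression to the collection's /stream
handler, along these lines (host, port, and collection name are
placeholders):

  curl --data-urlencode 'expr=search(people, q=*:*, fl="personId,name", sort="personId asc")' \
       "http://localhost:8983/solr/people/stream"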
Cheers
Hi,
Actually, there were errors in the expression syntax; examining the logs
allowed me to see what the error was.
Thanks
Hi,
While trying to create an example with the select stream decorator, I
stumbled upon a bug in the Solr 6.0.1 core.
The expression I was trying to run via HTTP was:
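Schematically, a select wrapped around a search, POSTed to the collection's
/stream handler (the collection and field names here are placeholders, not
the real ones):

  select(
    search(collection1, q=*:*, fl="id,price_f", sort="id asc"),
    id,
    price_f as price
  )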
The request returned an error message, so I looked at the full stack trace
in the server log:
After examining the file org.apache.solr.
Hi Sweta,
I recently adapted that patch to a Solr instance running version 6.4. If
memory serves, the only changes I had to make were updating the package
imports for the latest OpenNLP version (I am using OpenNLP 1.8):
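Something along these lines; a minimal smoke test against the 1.8 package
layout (the class name and model path are just examples, not part of the
patch):

  // Sanity check that the OpenNLP 1.8 jar and package layout resolve.
  // en-token.bin is one of the stock OpenNLP tokenizer models.
  import java.io.FileInputStream;
  import opennlp.tools.tokenize.TokenizerME;
  import opennlp.tools.tokenize.TokenizerModel;

  public class OpenNlpSmokeTest {
      public static void main(String[] args) throws Exception {
          try (FileInputStream in = new FileInputStream("en-token.bin")) {
              TokenizerME tokenizer = new TokenizerME(new TokenizerModel(in));
              for (String token : tokenizer.tokenize("OpenNLP 1.8 resolves fine.")) {
                  System.out.println(token);
              }
          }
      }
  }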
What problem are you struggling with, exactly?
Best,
Hello guys,
I manage a Solr cluster and I am experiencing some problems with dynamic
schemas.
The cluster has 16 nodes and 1500 collections with 12 shards per collection
and 2 replicas per shard. The nodes can be divided into 2 major tiers:
- tier1 is composed of 12 machines with 4 physical cores
Dorian Hoxha wrote
> Isn't 18K lucene-indexes (1 for each shard, not counting the replicas) a
> little too much for 3TB of data ?
> Something like 0.167GB for each shard ?
> Isn't that too much overhead (i've mostly worked with es but still lucene
> underneath) ?
I don't have only 3TB, I have 3TB
The way the data is spread across the cluster is not really uniform. Most
shards are well below 50GB; I would say only about 15% of the shards exceed
50GB.
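Just to put Dorian's numbers in context: 1500 collections x 12 shards is the
18,000 Lucene indexes he mentions, and 3TB / 18,000 gives the ~0.167GB
per-shard average. Counting the 2 replicas per shard (assuming that figure
includes the leader), that is 36,000 cores over 16 nodes, so about 2,250
cores per node. The per-shard average is misleading, though, precisely
because of the skew I described above.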
Dorian Hoxha wrote
> Each shard is a lucene index which has a lot of overhead.
And what does this overhead depend on, exactly? I mean,
Hi,
Any updates on this issue? I am using Solr 6.3 and I have hit this same
bug...
Thanks