Does anyone have a clue?
On 11/29/2016 8:40 AM, halis Yılboğa wrote:
> It is not normal to get that many errors, actually. The main problem is likely
> with your index. It seems to me your index is corrupted.
>
> On Tue, 29 Nov 2016 at 14:40, Furkan KAMACI
> wrote:
>
>> On the other hand, my Solr instance stops frequently due to such errors:
On 11/29/2016 6:40 AM, tesm...@gmail.com wrote:
> The Solr server is running in an Ubuntu VM on Azure. PHP pages using PHPSolarium are
> hosted as a web app on the same VM as the Solr server.
>
> After deployment, I am getting the following HTTP request timeout error:
>
> Fatal error: Uncaught exception 'Sola
On 11/29/2016 4:47 AM, Srinivas Kashyap wrote:
> Can somebody guide me how to resolve this issue?
>
> Some of the parameters set for Tomcat are:
>
> maxWait="15000" maxActive="1000" maxIdle="50".
A broken pipe error usually means that the TCP connection was broken,
but you didn't include enough o
At present, CDCR does not work with HDFS. The update log implementation is
different and incompatible at the moment. Please open a jira issue to
support cdcr under hdfs -- patches are always welcome!
On Tue, Nov 29, 2016 at 8:36 PM, ZHOU Ran (SAFRAN IDENTITY AND SECURITY) <
ran.z...@safrangroup.co
On Tue, Nov 29, 2016 at 10:22 PM, Kevin Risden wrote:
> If using CloudSolrClient or another zookeeper aware client, then a request
> gets sent to Zookeeper to determine the live nodes. If indexing,
> CloudSolrClient can find the leader and send documents directly there. The
> client then uses that information to query the correct nodes directly.
Hello,
Thanks for reading this, but it has been resolved. I honestly don't know
what was happening, but restarting my shell and running the exact same
commands today instead of yesterday seems to have fixed it.
Best,
James
On Mon, Nov 28, 2016 at 8:07 PM, James Muerle wrote:
> Hello,
>
> I am
Let's break this down:
'stmt=SELECT TextSize from main LIMIT 10' fails
This fails because CloudSolrStream does not currently support aliases. I
believe this is fixed in 6.4.
'stmt=SELECT avg(TextSize) from UNCLASS' fails
This surprises me. I read through the StatsStream and don't see any reason
wh
This slideshow / presentation may give you some idea of the complexity
involved... No, nothing like this in Solr itself.
At least one approach is to mine your logs for user behavior and use that
information as a starting point for either an external machine learning
piece, or for just fine-tuning
Thank you Simon.
On Tue, Nov 29, 2016 at 11:25 AM, simon wrote:
> You might want to take a look at
> https://issues.apache.org/jira/browse/SOLR-4722
> ( 'highlighter which generates a list of query term positions'). We used it
> a while back, though it doesn't appear to have been used in any Solr > 4.10.
You might want to take a look at
https://issues.apache.org/jira/browse/SOLR-4722
( 'highlighter which generates a list of query term positions'). We used it
a while back, though it doesn't appear to have been used in any Solr > 4.10.
-Simon
On Tue, Nov 29, 2016 at 11:43 AM, John Bickerstaff wrote:
> A
Hello,
{!lucene%20type=payloadQueryParser just seems doubtful to me. The token
right after ! is a shortcut for the type= param, so the behavior when type
is specified twice may be undefined.
Regarding wrong output, it might help to check how it behaves as part of q,
check explain, and prob
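To make the local-params point concrete, here is a rough SolrJ sketch. Note that
"payloadQueryParser" is just the name taken from your query string; whether such a
parser is actually registered in solrconfig.xml is an assumption here.

import org.apache.solr.client.solrj.SolrQuery;

public class LocalParamsSketch {
    public static void main(String[] args) {
        // Parser named right after '!' (the bare token is shorthand for type=).
        SolrQuery direct = new SolrQuery("{!payloadQueryParser}text:foo");
        // Same parser selected via the explicit type= local param.
        SolrQuery viaType = new SolrQuery("{!type=payloadQueryParser}text:foo");
        // Writing {!lucene type=payloadQueryParser} names two parsers at once,
        // which is why the behavior may be undefined.
        System.out.println(direct.getQuery());
        System.out.println(viaType.getQuery());
    }
}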
Beautiful!
Thank you all - that is exactly what I needed to be sure where I stood on
this before going into a meeting today.
On Tue, Nov 29, 2016 at 11:03 AM, Kevin Risden
wrote:
> For #2 you might be able to get away with the following:
>
> https://cwiki.apache.org/confluence/display/solr/The+Term+Vector+Component
For #2 you might be able to get away with the following:
https://cwiki.apache.org/confluence/display/solr/The+Term+Vector+Component
The Term Vector component can return offsets and positions. Not sure how
useful they would be to you, but it is at least a starting point. I'm assuming
this requires on
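As a starting point, here is a rough SolrJ sketch of calling the Term Vector
component. It assumes the component is wired into a /tvrh request handler (as in
the reference-guide example), that the field was indexed with termVectors,
termPositions, and termOffsets enabled, and that a SolrJ version with
HttpSolrClient.Builder is on the classpath; host, collection, and field names are
hypothetical.

import org.apache.solr.client.solrj.SolrQuery;
import org.apache.solr.client.solrj.impl.HttpSolrClient;
import org.apache.solr.client.solrj.response.QueryResponse;

public class TermVectorProbe {
    public static void main(String[] args) throws Exception {
        HttpSolrClient client = new HttpSolrClient.Builder(
                "http://localhost:8983/solr/collection1").build();
        SolrQuery q = new SolrQuery("id:1");
        q.setRequestHandler("/tvrh");   // handler backed by TermVectorComponent
        q.set("tv", true);
        q.set("tv.fl", "body");         // field must store term vectors
        q.set("tv.positions", true);
        q.set("tv.offsets", true);
        QueryResponse rsp = client.query(q);
        // The component adds a "termVectors" section to the response.
        System.out.println(rsp.getResponse().get("termVectors"));
        client.close();
    }
}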
For #3 specifically, I've always found this page useful:
https://cwiki.apache.org/confluence/display/solr/Field+Properties+by+Use+Case
It lists out what properties are necessary on each field based on a use
case.
Kevin Risden
On Tue, Nov 29, 2016 at 11:49 AM, Erick Erickson
wrote:
> (1) No th
I think they believe this because of what you say above -- they are
misinterpreting the call for topology as a handoff where zookeeper "does
the rest"...
This info will allow me to straighten out that misunderstanding... Thanks
all!
On Tue, Nov 29, 2016 at 10:49 AM, Walter Underwood
wrote:
> T
This is easy to prove. Shut down Zookeeper, then send search requests to
different hosts in the Solr Cloud cluster. If they work, then the requests are
not going through Zookeeper.
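A rough SolrJ version of that test, with hypothetical host and collection names
(and assuming a SolrJ version where HttpSolrClient.Builder is available): a plain
HttpSolrClient talks to one node directly, so no ZooKeeper lookup happens on the
client side.

import org.apache.solr.client.solrj.SolrQuery;
import org.apache.solr.client.solrj.impl.HttpSolrClient;

public class DirectNodeQuery {
    public static void main(String[] args) throws Exception {
        // Point at one Solr node directly; with ZooKeeper shut down, this still
        // works if requests are not routed through ZooKeeper.
        HttpSolrClient node = new HttpSolrClient.Builder(
                "http://solr-host-1:8983/solr/collection1").build();
        long numFound = node.query(new SolrQuery("*:*")).getResults().getNumFound();
        System.out.println("numFound=" + numFound);
        node.close();
    }
}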
wunder
Walter Underwood
wun...@wunderwood.org
http://observer.wunderwood.org/ (my blog)
> On Nov 29, 2016, at 9:44
(1) None that I have readily at hand. And to make it
worse, there's the UnifiedHighlighter coming out soon.
I don't think there's a good way for (2).
For (3), at least, yes. The reason is simple. For analyzed text,
the only thing in the index is what has made it through the
analysis chains. So stopw
bq: My new place tells me they're sending requests to Zookeeper - and those are
getting sent on to Solr by Zookeeper -- this is news to me if it's true...
Have you seen the code? Because if they're using SolrJ, then they do indeed
connect to Zookeeper (i.e., CloudSolrClient takes a ZK ensemble).
Is there any way the Solr engine can “learn” based on user behaviours and offer
intelligent suggestions?
How can it be implemented?
If using CloudSolrClient or another zookeeper aware client, then a request
gets sent to Zookeeper to determine the live nodes. If indexing,
CloudSolrClient can find the leader and send documents directly there. The
client then uses that information to query the correct nodes directly.
Zookeeper is
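For illustration, a rough SolrJ sketch of that flow, assuming Solr 6.x (with the
CloudSolrClient.Builder API available) and hypothetical ZooKeeper/collection names:
the client is handed the ZooKeeper ensemble rather than Solr URLs, and then talks
to the Solr nodes directly.

import org.apache.solr.client.solrj.SolrQuery;
import org.apache.solr.client.solrj.impl.CloudSolrClient;
import org.apache.solr.common.SolrInputDocument;

public class ZkAwareClientSketch {
    public static void main(String[] args) throws Exception {
        // ZooKeeper ensemble (not Solr URLs); the client reads live nodes and
        // shard leaders from the cluster state it finds there.
        CloudSolrClient client = new CloudSolrClient.Builder()
                .withZkHost("zk1:2181,zk2:2181,zk3:2181")
                .build();
        client.setDefaultCollection("collection1");

        SolrInputDocument doc = new SolrInputDocument();
        doc.addField("id", "1");
        doc.addField("title", "hello");
        client.add(doc);      // sent to the shard leader, not through ZooKeeper
        client.commit();

        // Queries likewise go straight to the Solr nodes listed in cluster state.
        System.out.println(client.query(new SolrQuery("*:*")).getResults().getNumFound());
        client.close();
    }
}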
All,
I thought I understood that Solr search requests are made to the Solr
servers and NOT to Zookeeper directly (i.e., Zookeeper doesn't decide which
Solr server responds to requests; requests are made directly to Solr).
My new place tells me they're sending requests to Zookeeper - and those
are getting sent on to Solr by Zookeeper.
Just some data points.
main is an alias for the collection UNCLASS.
'stmt=SELECT TextSize from main LIMIT 10' fails
'stmt=SELECT TextSize from UNCLASS LIMIT 10' succeeds
'stmt=SELECT avg(TextSize) from UNCLASS' fails
'stmt=SELECT avg(TextSize) from main' succeeds
'stmt=SELECT like_count, Document
All,
One of the questions I've been asked to answer / prove out is about
highlighting query matches in responses.
BTW - One assumption I'm making is that highlighting is basically a
function of storing offsets for terms / tokens at index time. If that's
not right, I'd be gratefu
It is not normal to get that many errors, actually. The main problem is likely
with your index. It seems to me your index is corrupted.
On Tue, 29 Nov 2016 at 14:40, Furkan KAMACI
wrote:
> On the other hand, my Solr instance stops frequently due to such errors:
>
> 2016-11-29 12:25:36.962 WAR
Hello all,
could someone help on this?
Best Regards
Ran
From: ZHOU Ran (SAFRAN IDENTITY AND SECURITY)
Sent: Friday, 25 November 2016 15:37
To: 'solr-user@lucene.apache.org'
Subject: Using Solr CDCR with HdfsDirectoryFactory
Hello
Hi All,
I have followed the guide "Cross Data Center Replication"
Created an alias called 'main' and now it works:
curl --data-urlencode 'stmt=SELECT avg(TextSize) from main'
http://cordelia:9100/solr/UNCLASS/sql?aggregationMode=map_reduce
{"result-set":{"docs":[
{"avg(TextSize)":6024.222616504568},
{"EOF":true,"RESPONSE_TIME":1391}]}}
Thank you Damian and
I'll take a look at the StatsStream and see what the issue is.
Joel Bernstein
http://joelsolr.blogspot.com/
On Mon, Nov 28, 2016 at 8:32 PM, Damien Kamerman wrote:
> Aggregated selects only work with lower-case collection names (and no
> dashes). (Bug in StatsStream I think)
>
> I assume 'SOLR-
Hi,
I am deploying a search engine on Azure. The following is my configuration:
The Solr server is running on an Ubuntu VM (hosted on Azure).
The PHP web app is hosted on Azure using the same VM that hosts the Solr server.
Are there any best-practice/approach guidelines?
I am getting the following exception:
Fat
Hi,
I am deploying Solr + PHPSolarium on Azure.
The Solr server is running in an Ubuntu VM on Azure. PHP pages using PHPSolarium are
hosted as a web app on the same VM as the Solr server.
After deployment, I am getting the following HTTP request timeout error:
Fatal error: Uncaught exception 'Solarium\E
Hello,
After starting the Solr application and running full imports, I run into the
error below after a while:
null:org.apache.catalina.connector.ClientAbortException: java.io.IOException:
Broken pipe
at
org.apache.catalina.connector.OutputBuffer.realWriteBytes(OutputBuffer.java:393)
at org
On the other hand, my Solr instance stops frequently due to such errors:
2016-11-29 12:25:36.962 WARN (qtp1528637575-14) [ x:collection1]
o.a.s.h.a.LukeRequestHandler Error getting file length for [segments_c]
java.nio.file.NoSuchFileException: data/index/segments_c
at sun.nio.fs.UnixException.
I use Solr 6.3 and get too many warnings like the one below. Is this usual?
WARN true LukeRequestHandler Error getting file length for [segments_1l]
java.nio.file.NoSuchFileException:
/home/server/solr/collection1/data/index/segments_1l
at sun.nio.fs.UnixException.translateToIOException(UnixException.java:86)
at
On 28/11/2016 13:11, Jaroslaw Rozanski wrote:
Hi,
Thanks for the elaborate response. I missed the link to the duplicate JIRA. Makes
sense.
On the 5.x front I wasn't expecting a 5.6 release now that we have 6.x, but
was simply surprised to see a fix for 4.x and not for 5.x.
As for adoption levels, it was my s