So we have been running LucidWorks for Solr for about a week now and
have seen no problems - so I believe it was due to that buffering
issue in Jetty 6.1.3, as suggested here:
>>> It really looks like you're hitting a lower-level IO buffering bug
>>> (esp when you see a response starting off with the
Yonik has a point; when I ran into this I also upgraded to the latest
stable Jetty. I'm using Jetty 6.1.18.
On 08/28/2009 04:07 PM, Rupert Fiasco wrote:
I deployed LucidWorks with my existing solrconfig / schema and
re-indexed my data into it and pushed it out to production, we'll see
how it stacks up over the weekend. Already queries that were breaking
on the prior Jetty/stock Solr setup are now working - but I have seen
it before where upon an in
Yes, I am hitting the Solr server directly (medsolr1.colo:9007)
Versions / architectures:
Jetty(6.1.3)
o...@medsolr1 ~ $ uname -a
Linux medsolr1 2.6.18-xen-r12 #9 SMP Tue Mar 3 15:34:08 PST 2009
x86_64 Intel(R) Xeon(R) CPU L5420 @ 2.50GHz GenuineIntel GNU/Linux
o...@medsolr1 ~ $ java -version
j
On Mon, Aug 24, 2009 at 6:30 PM, Rupert Fiasco wrote:
> If I run these through curl on the command line it's
> truncated and if I run the search through the web-based admin panel
> then I get an XML parse error.
Are you running curl directly against the solr server, or going
through a load balancer? Cu
I know in my last message I said I was having issues with "extra
content" at the start of a response, resulting in an invalid document.
I still am having issues with documents getting truncated (yes, I have
problems galore).
I will elaborate on why it's so difficult to track down an actual
document
I had a similar issue with text from past requests showing up; this was
on a 1.3 nightly. I switched to the Lucid build of 1.3 and the
problem went away. I'm using a nightly of 1.4 right now, also without
problems. Then again, your mileage may vary, as I also made a bunch of schema
changes that mi
Firstly, to everyone who has been helping me, thank you very much. All
this feedback is helping me narrow down these issues.
I deleted the index and re-indexed all the data from scratch, and for a
couple of days we were OK, but now it seems to be erring again.
It happens on different input documents
: We are running an instance of MediaWiki so the text goes through a
: couple of transformations: wiki markup -> html -> plain text.
: Its at this last step that I take a "snippet" and insert that into Solr.
...
: doc.addField("text_snippet_t", article.getSnippet(1000));
ok, well first of
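Since unescaped or garbage characters in the indexed snippet are one suspected cause of the broken responses, one defensive option is to filter out characters that are not legal in XML 1.0 before the `addField` call quoted above. This is a minimal sketch; the class and method names are hypothetical, not from the thread.

```java
// Sketch: strip characters that are illegal in XML 1.0 before indexing,
// on the assumption that stray control/garbage characters in the snippet
// are what corrupts the XML response. Hypothetical helper, not from the thread.
public class SnippetSanitizer {
    /** Returns a copy of s with XML-1.0-illegal characters removed. */
    public static String sanitizeForXml(String s) {
        StringBuilder out = new StringBuilder(s.length());
        for (int i = 0; i < s.length(); ) {
            int cp = s.codePointAt(i);
            i += Character.charCount(cp);
            // Legal XML 1.0 Char ranges: #x9 | #xA | #xD | #x20-#xD7FF
            // | #xE000-#xFFFD | #x10000-#x10FFFF
            boolean legal =
                cp == 0x9 || cp == 0xA || cp == 0xD
                || (cp >= 0x20 && cp <= 0xD7FF)
                || (cp >= 0xE000 && cp <= 0xFFFD)
                || (cp >= 0x10000 && cp <= 0x10FFFF);
            if (legal) out.appendCodePoint(cp);
        }
        return out.toString();
    }
}
```

If this were the cause, the indexing line from the thread would become something like `doc.addField("text_snippet_t", SnippetSanitizer.sanitizeForXml(article.getSnippet(1000)))`.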
> 1. Exactly which version of Solr / SolrJ are you using?
Solr Specification Version: 1.3.0
Solr Implementation Version: 1.3.0 694707 - grantingersoll - 2008-09-12 11:06:47
Latest SolrJ that I downloaded a couple of days ago.
> Can you put the original (pre solr, pre solrj, raw untouched, etc..
1. Exactly which version of Solr / SolrJ are you using?
2. ...
: I am using the SolrJ client to add documents to my index. My field
: is a normal "text" field type and the text itself is the first 1000
: characters of an article.
Can you put the original (pre solr, pre solrj
So I whipped up a quick SolrJ client and ran it against the document
that I referenced earlier. When I retrieve the doc and just print its
field/value pairs to stdout it ends like this:
http://brockwine.com/images/output1.png
It appears to be some kind of garbage characters.
-Rupert
On Tue, Aug
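One way to identify what those trailing garbage characters actually are is to hex-dump the tail of the retrieved field value instead of printing it as text. A minimal sketch, with a hypothetical helper name:

```java
// Sketch: dump the last few characters of a field value as code points,
// to identify the garbage characters at the end of the retrieved snippet.
// Hypothetical helper, not from the thread.
public class TailDump {
    /** Returns the last n chars of s as space-separated U+XXXX code units. */
    public static String tailHex(String s, int n) {
        StringBuilder out = new StringBuilder();
        for (int i = Math.max(0, s.length() - n); i < s.length(); i++) {
            if (out.length() > 0) out.append(' ');
            out.append(String.format("U+%04X", (int) s.charAt(i)));
        }
        return out.toString();
    }
}
```

Running something like `TailDump.tailHex(doc.getFieldValue("text_snippet_t").toString(), 16)` would show whether the tail is control characters, broken surrogates, or mis-decoded bytes.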
Hi,
This is very strange behavior, and the fact that it is caused by one
specific field, again, leads me to believe it's still a data issue. Did
you try using SolrJ to query the data as well? If the same thing happens
when using the binary protocol, then it's probably not a data issue. On
the
The text file at:
http://brockwine.com/solr.txt
Represents one of these truncated responses (this one in XML). It
starts out great, then look at the bottom, boom, game over. :)
I found this document by first running our bigger search which breaks
and then zeroing in a specific broken document by
Can you copy-paste the source data indexed in this field which causes the
error?
Cheers
Avlesh
On Tue, Aug 25, 2009 at 10:01 PM, Rupert Fiasco wrote:
Using wt=json also yields an invalid document. So after more
investigation it appears that I can always "break" the response by
pulling back a specific field via the "fl" parameter. If I leave off a
field then the response is valid, if I include it then Solr yields an
invalid document - a truncated
It can very well be an issue with the data itself - for example, if the
data contains un-escaped characters which invalidate the response. I
don't know much about Ruby, but what do you get with wt=json?
Rupert Fiasco wrote:
I am seeing our responses getting truncated if and only if I search on