So we have been running LucidWorks for Solr for about a week now and
have seen no problems, so I believe it was due to that buffering
issue in Jetty 6.1.3, suggested here:
>>> It really looks like you're hitting a lower-level IO buffering bug
>>> (esp when you see a response starting off with the
Yonik has a point; when I ran into this I also upgraded to the latest
stable Jetty. I'm using Jetty 6.1.18.
On 08/28/2009 04:07 PM, Rupert Fiasco wrote:
I deployed LucidWorks with my existing solrconfig / schema and
re-indexed my data into it and pushed it out to production, we'll see
how it stacks up over the weekend. Already queries that were breaking
on the prior Jetty/stock Solr setup are now working - but I have seen
it before where upon an in
Yes, I am hitting the Solr server directly (medsolr1.colo:9007)
Versions / architectures:
Jetty(6.1.3)
o...@medsolr1 ~ $ uname -a
Linux medsolr1 2.6.18-xen-r12 #9 SMP Tue Mar 3 15:34:08 PST 2009
x86_64 Intel(R) Xeon(R) CPU L5420 @ 2.50GHz GenuineIntel GNU/Linux
o...@medsolr1 ~ $ java -version
j
On Mon, Aug 24, 2009 at 6:30 PM, Rupert Fiasco wrote:
> If I run these through curl on the command line it's
> truncated, and if I run the search through the web-based admin panel
> then I get an XML parse error.
Are you running curl directly against the solr server, or going
through a load balancer? Cu
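One way to tell those two cases apart (a diagnostic sketch, not from the thread): compare the Content-Length header the server declares against the bytes that actually arrive at the client. The host/port below are the ones mentioned in this thread (medsolr1.colo:9007); the path, core layout, and query are assumptions, so adjust them to your setup.

```java
import java.io.IOException;
import java.io.InputStream;
import java.net.HttpURLConnection;
import java.net.URL;

public class TruncationCheck {
    // Count the bytes actually readable from a stream; -1 if it dies mid-read.
    public static long countBytes(InputStream in) {
        byte[] buf = new byte[8192];
        long total = 0;
        int n;
        try {
            while ((n = in.read(buf)) != -1) total += n;
        } catch (IOException e) {
            return -1; // a mid-stream error is itself evidence of truncation
        }
        return total;
    }

    public static void main(String[] args) throws IOException {
        // Hypothetical URL; medsolr1.colo:9007 is the host from this thread.
        URL url = new URL("http://medsolr1.colo:9007/solr/select?q=text_t:arthritis");
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        long declared = conn.getContentLengthLong(); // -1 if chunked or absent
        long actual = countBytes(conn.getInputStream());
        System.out.println("declared=" + declared + " actual=" + actual);
        // declared != actual alongside an HTTP 200 points at truncation in transit.
    }
}
```

Running this once against the Solr port and once through the load balancer VIP, and comparing the two, would localize where the bytes go missing.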
I know in my last message I said I was having issues with "extra
content" at the start of a response, resulting in an invalid document.
I still am having issues with documents getting truncated (yes, I have
problems galore).
I will elaborate on why it's so difficult to track down an actual
document
I had a similar issue with text from past requests showing up; this was
on a 1.3 nightly. I switched to using the Lucid build of 1.3 and the
problem went away. I'm using a nightly of 1.4 right now, also without
problems. Then again, your mileage may vary, as I also made a bunch of schema
changes that mi
Firstly, to everyone who has been helping me, thank you very much. All
this feedback is helping me narrow down these issues.
I deleted the index and re-indexed all the data from scratch, and for a
couple of days we were OK, but now it seems to be erroring again.
It happens on different input documen
: We are running an instance of MediaWiki so the text goes through a
: couple of transformations: wiki markup -> html -> plain text.
: It's at this last step that I take a "snippet" and insert that into Solr.
...
: doc.addField("text_snippet_t", article.getSnippet(1000));
ok, well first of
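For what it's worth, one defensive option at indexing time (a sketch only; `SnippetUtil` and its method are hypothetical names, not from the thread): build the 1000-character snippet so that it never splits a surrogate pair and drops characters that are illegal in XML 1.0, since such characters are a common cause of unparseable XML responses.

```java
public class SnippetUtil {
    // Return at most maxChars chars of text, walking by code point so a
    // surrogate pair is never split, and skipping characters that are
    // illegal in XML 1.0 (which can break the XML response writer's output).
    public static String snippet(String text, int maxChars) {
        StringBuilder sb = new StringBuilder();
        int i = 0;
        while (i < text.length()) {
            int cp = text.codePointAt(i);
            i += Character.charCount(cp);
            boolean legal = cp == 0x9 || cp == 0xA || cp == 0xD
                    || (cp >= 0x20 && cp <= 0xD7FF)
                    || (cp >= 0xE000 && cp <= 0xFFFD)
                    || (cp >= 0x10000 && cp <= 0x10FFFF);
            if (!legal) continue;                 // drop control chars etc.
            if (sb.length() + Character.charCount(cp) > maxChars) break;
            sb.appendCodePoint(cp);
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        // NUL is illegal in XML 1.0 and gets stripped here.
        System.out.println(SnippetUtil.snippet("arthritis\u0000 pain", 1000));
    }
}
```

Something like this could sit in front of the existing `doc.addField("text_snippet_t", ...)` call.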
> 1. Exactly which version of Solr / SolrJ are you using?
Solr Specification Version: 1.3.0
Solr Implementation Version: 1.3.0 694707 - grantingersoll - 2008-09-12 11:06:47
Latest SolrJ that I downloaded a couple of days ago.
> Can you put the original (pre solr, pre solrj, raw untouched, etc..
1. Exactly which version of Solr / SolrJ are you using?
2. ...
: I am using the SolrJ client to add documents to my index. My field
: is a normal "text" field type and the text itself is the first 1000
: characters of an article.
Can you put the original (pre solr, pre solrj
s that all escaping is taking
>>>> place. The core problem seems to be that the document is just
>>>> truncated: it just plain ends (premature end of file). Jetty's log says it's sending
>>>> back an HTTP 200, so all is well.
>>>>
>>>> Any ideas on how I ca
4:31 PM, Uri Boness wrote:
It can very well be an issue with the data itself. For example, if the
data contains un-escaped characters which invalidates the response. I
don't know much about ruby, but what do you get with wt=json?
Rupert Fiasco wrote:
I am seeing our responses getting truncated if and only if I search on
our main text field.
E.g. I just do some basic like
title_t:arthritis
Then I get a valid document back. But if I add in our larger text field:
title_t:arthritis OR text_t:arthritis
then the resultant document is NOT valid
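A quick way to act on the wt=json suggestion above (a sketch; the class name is hypothetical): capture the raw response body from each writer and run the wt=xml one through a standard parser. If the XML fails well-formedness while the JSON looks intact, bad characters in the stored data are the more likely culprit than the transport.

```java
import javax.xml.parsers.DocumentBuilderFactory;
import java.io.ByteArrayInputStream;
import java.nio.charset.StandardCharsets;

public class XmlCheck {
    // True if the captured response body parses as well-formed XML.
    public static boolean wellFormed(String body) {
        try {
            DocumentBuilderFactory.newInstance().newDocumentBuilder()
                    .parse(new ByteArrayInputStream(body.getBytes(StandardCharsets.UTF_8)));
            return true;
        } catch (Exception e) {
            return false; // truncated or invalid-character responses land here
        }
    }
}
```

A truncated document like the ones described in this thread fails this check immediately, which makes it easy to script the comparison across many queries.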