On 14.04.2011 09:53, Ralf Kraus wrote:
Hello,
I just updated to Solr 3.1 and am wondering whether the phpnative response
writer plugin is part of it?
( https://issues.apache.org/jira/browse/SOLR-1967 )
When I try to compile the source files I get some errors:
PHPNativeResponseWriter.java:57:
or
A thread with this same subject from 2008/2009 is here:
http://search-lucene.com/m/jkBgXnSsla
We're seeing customers being bitten by this "bug" now and then, and normally my
workaround is to simply not use stopwords at all.
However, is there an actual fix in the 3.1 eDisMax parser which solves t
You're possibly getting hit by server caching. Are you by chance
submitting the exact same query after your commit? What
happens if you change your query to one you haven't used before?
Turning off http caching might help. Solr should be searching
the new contents after a commit (and any attendant
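(If it does turn out to be HTTP caching, a minimal solrconfig.xml sketch for
turning it off, using the stock never304 attribute:)
<requestDispatcher handleSelect="true">
  <httpCaching never304="true" />
</requestDispatcher>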
Hi,
I want to filter a search result so it doesn't return all fields (the
default), but I don't know which field my hits will be in. How can I do that?
This is basically for unstructured document type data, for example
large HTML or DOCBOOK documents.
thanks,
Bryan Rasmussen
Hi
There may be better ways but as far as my knowledge goes, I'd try to use
the highlighting component; with hl.requireFieldMatch the highlighting
response only includes fields where highlights were applied (a match was
found), which is probably what you want.
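A rough request sketch, untested (query and field list invented):
q=your+terms&hl=true&hl.fl=*&hl.requireFieldMatch=true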
Best
Marek Tichy
> Hi,
>
> If I wan
Hi,
I am importing a number of XML documents from the filesystem. The
dataimporthandler finds them, but returns an undeclared general entity
error - even though my DTD is present and findable by other parsers.
The DTD declaration in the XML file references allartikel.dtd, which sits in
the same folder.
Thanks,
B
Hi everybody,
Recently I implemented an autocomplete mechanism for my website using a
custom TermsComponent. I was quite happy with that because it also enables
me to do a Google-like feature where complete sentences were suggested to
the user when he typed in the search field. I used Shingles t
Hi.
I've got a strange result from the DisMax search function. I might have
understood the functionality wrong, but after I read the manual my
understanding was that it is used to produce ranked results from simple
search terms.
Solr Version 1.4.0
I've got the setup
Schema fields
---
If you haven't modified your schema.xml, you'll find that the
<defaultSearchField> is set to the text field. So when
you issue q=term you're going against your default
search field.
Assuming you've changed the default search field to
"defaultSearch", then the problem is probably that your
analysis chain for defau
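(For reference, the schema.xml element in question, using the field name from
above:)
<defaultSearchField>defaultSearch</defaultSearchField>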
Using solr 3.1.
When I do:
sort=score desc
it works.
sort=product(typeId,2) desc (typeId is a valid attribute in document)
it works.
sort=product(score,typeId) desc
fails with a 400 error? Also "sort=product(score,2) desc" fails.
Must be something basic I'm missing? Tried a
Thanks everyone.
I updated the wiki. If you have a chance please take a look and check to make
sure I got it right on the wiki.
http://wiki.apache.org/solr/DisMaxQParserPlugin#tie_.28Tie_breaker.29
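For reference, a request sketch showing where tie goes (field names and
boosts invented):
q=ipod&defType=dismax&qf=title^2+description&tie=0.1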
Tom
-Original Message-
From: Chris Hostetter [mailto:hossman_luc...@fucit.org]
Sent:
On Fri, Apr 15, 2011 at 11:50 AM, Michael Owen
wrote:
>
> Using solr 3.1.
> When I do:
> sort=score desc
> it works.
> sort=product(typeId,2) desc (typeId is a valid attribute in document)
> it works.
> sort=product(score,typeId) desc
> fails on 400 error? Also "sort=product(s
I know I'm late to the party, but I recently learned that field compression was
removed as of Solr 1.4.1. I think a lot of sites were relying on that feature,
so I'm curious what people are doing now that it's gone. Specifically, what are
people doing to efficiently store *and highlight* large f
I was just hoping someone might be able to point me in the right direction
here. We just upgraded from Solr 1.4 to Solr 3.1 this past week and we're
having issues running out of disk space on our Master servers. Our Master
has dozens of cores. We have a script that kicks off once per day to do a
Hello,
I want to split my string when it contains "(". Example:
spurs (London)
Internationale (milan)
to
spurs
(london)
Internationale
(milan)
What tokenizer can I use to achieve this?
thanks!
It seems the file count in the index directory is segment# * 8 in my dev
environment...
I see there are .fnm .frq .fdt .fdx .nrm .prx .tii .tis (8) file extensions,
and each extension has as many files as there are segments.
Is it always safe to calculate the file count as the segment count multiplied
by 8?
Yeah, I can figure out the segment number by going to the stats page of
Solr... but my question was how to figure out the exact total number of files
in the 'index' folder for each core.
Like I mentioned in my previous message, I currently have 8 files per segment
(.prx, .tii, etc.), but it seems this might change i
Hi,
I want to evaluate (and probably use in production) facet pivoting -
what is the best approach to get an "as-stable-as-can-be" version of Solr
which is able to do facet pivoting? I was hoping to see this in Solr
3.1, but apparently it is only in the dev versions/nightlies...
Is it possible to
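(For reference, the parameter syntax on trunk looks roughly like this; field
names invented:)
facet=true&facet.pivot=category,subcategory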
Hi, I have a question: how do I combine the Deduplication and Elevation
implementations in Solr? Currently I've managed to implement only one or the
other.
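A sketch of how the two pieces are wired in solrconfig.xml, trimmed from the
stock examples (field names illustrative); whether they cooperate is exactly
the open question here:
<updateRequestProcessorChain name="dedupe">
  <processor class="solr.processor.SignatureUpdateProcessorFactory">
    <bool name="enabled">true</bool>
    <str name="signatureField">signature</str>
    <bool name="overwriteDupes">true</bool>
    <str name="fields">name,features</str>
    <str name="signatureClass">solr.processor.Lookup3Signature</str>
  </processor>
  <processor class="solr.RunUpdateProcessorFactory" />
</updateRequestProcessorChain>
<searchComponent name="elevator" class="solr.QueryElevationComponent">
  <str name="queryFieldType">string</str>
  <str name="config-file">elevate.xml</str>
</searchComponent>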
Hello,
We just tried core reloading on a freshly installed Solr 3.1.0 with
RamDirectoryFactory.
It doesn't seem to happen.
With the FSDirectoryFactory everything works fine.
It looks like the RamDirectoryFactory implementation caches the directory
and, if it's available, doesn't really reopen it, thus n
Hi everyone,
We are using Solr 1.4.1 at my company and we need to make backups of the
indexes.
After some googling, I'm quite confused about the different ways of backing
up the index.
First, I tried the scripts provided in the Solr distribution, without
success:
I untarred the apache-solr-1
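(Another option besides the shell scripts: the ReplicationHandler in 1.4 can
take a backup on demand; host and port invented:)
http://localhost:8983/solr/replication?command=backup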
Hi,
Thanks for your response. I am currently working on this issue.
When I run the test_utf8.sh script, I got the following result.
Solr server is up.
HTTP GET is accepting UTF-8
HTTP POST is accepting UTF-8
HTTP POST defaults to UTF-8
ERROR: HTTP GET is not accepting UTF-8 beyond the basic multi
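(One thing worth checking: if you run under Tomcat rather than the bundled
Jetty, GET parameters are only decoded as UTF-8 when the connector says so; a
sketch, port illustrative:)
<Connector port="8080" protocol="HTTP/1.1" URIEncoding="UTF-8" />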
Hi everybody,
I have the following problem/question:
In our system we have some categories and products in those categories. Our
structure looks a bit like this:
product X belongs to category: cat1_subcat1 (10)
product X belongs to category: cat2_subcat1 (20)
product Y belongs to category:
Hello,
I am new to Solr. My requirements are:
1. At regular intervals, Solr needs to fetch data from a SQL Server database
and index it.
2. Fetch only those records which are not yet indexed.
3. For each record there is one associated file, so along with the database
table fields I also want to index the cont
Hi Victor,
I have the same questions about the new Suggest component.
I can't really help you, as I didn't really manage to understand how it
worked.
Sometimes I had more results, sometimes fewer.
Even so, I would really be interested in your resources using Terms and
shingles to implement auto-co
What you've shown would be handled with WhitespaceTokenizer, but you'd have to
prevent filters from stripping the parens. If you have to handle things like
blah ( stuff )
WhitespaceTokenizer wouldn't work.
PatternTokenizerFactory might work for you, see:
http://lucene.apache.org/solr/api/org/apach
Hi Quentin, we'll stick with this thread; I will try to see how it works and
get input from other people.
Here is the link to my blog post that shows how to do it:
http://www.victorkabdebon.net/archives/16
Note that I used Tomcat + SolR, but it can easily be done with PHP. Also solrj
in 1.4.1 didn't have
Why do you care? You haven't outlined why having the precise numbers
here is necessary. Perhaps with a higher-level statement of the problem
you're trying to solve, we could make some better suggestions.
Best
Erick
On Wed, Apr 13, 2011 at 5:23 PM, Renee Sun wrote:
> yeah, I can figure out the
This pattern splits tokens *only* in the presence of parentheses with
adjoining whitespace, and includes the parentheses in the tokens:
(?<=\))\s+|\s+(?=\()
So you'll get this kind of behavior:
Tottenham Hotspur (London)
F.C. Internationale (milan)
FC Midtjylland (Herning) (Ikast)
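A hedged schema.xml sketch wiring that pattern into a field type (the type
name is invented; note the XML-escaped lookbehind):
<fieldType name="text_paren" class="solr.TextField">
  <analyzer>
    <tokenizer class="solr.PatternTokenizerFactory"
               pattern="(?&lt;=\))\s+|\s+(?=\()"/>
  </analyzer>
</fieldType>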
Sorry if this comes through twice, but my first message got rejected (this
one is plain text, so it should come through better).
Part of this is solved by the Data Import Handler (DIH) see:
http://wiki.apache.org/solr/DataImportHandler
And think about a "database" data source. This can be combined
with the "Ti
I can reproduce this with the example server w/ your deletionPolicy
and replicationHandler configs.
I'll dig further to see what's behind this behavior.
-Yonik
http://www.lucenerevolution.org -- Lucene/Solr User Conference, May
25-26, San Francisco
On Fri, Apr 15, 2011 at 1:14 PM, Trey Grainger
Sorry, I should have elaborated on that earlier...
In our production environment we have multiple cores, and they ingest
continuously all day long; we only optimize periodically, once a day at
midnight.
So sometimes we see a 'too many open files' error. To prevent it from
happening, in
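(The usual mitigations, sketched for the indexDefaults section of
solrconfig.xml, values illustrative; raising the OS file-descriptor limit
with ulimit -n is the other common fix:)
<useCompoundFile>true</useCompoundFile>
<mergeFactor>10</mergeFactor>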
Hi John,
> How can I split the file of the solr index into multiple files?
Actually, the index is organized as a set of files called segments. It's not
just a single file, unless you tell Solr to make it one.
> That's because some "file systems only support a maximum
> amount of space in a single file"
Thank you, Yonik!
I see the Jira issue you created and am guessing it's due to this issue.
We're going to remove replicateAfter="startup" in the meantime to see if
that helps (assuming this is the issue the Jira ticket described).
I appreciate you taking a look at this.
Thanks
-Trey
On Fri,
Specifically on file size support: all the file systems on current releases
of Linux (and the Unixes too) support large files with 64-bit offsets, and I
am pretty sure the Java VM supports 64-bit offsets in files, so there is no
2GB file size limit anymore.
François
On Apr 15, 2011, at 4:31 P
Looks good, thanks Tom.
-Jay
On Fri, Apr 15, 2011 at 8:55 AM, Burton-West, Tom wrote:
> Thanks everyone.
>
> I updated the wiki. If you have a chance please take a look and check to
> make sure I got it right on the wiki.
>
> http://wiki.apache.org/solr/DisMaxQParserPlugin#tie_.28Tie_breaker.2
On Fri, Apr 15, 2011 at 5:28 PM, Trey Grainger wrote:
> Thank you, Yonik!
> I see the Jira issue you created and am guessing it's due to this issue.
> We're going to remove replicateAfter="startup" in the mean-time to see if
> that helps (assuming this is the issue the jira ticket described).
Ye