On 08.07.2009 00:50 Jan Høydahl wrote:
> itself and do not need to know the query language. You may then want
> to do a copyfield from all your text_ -> text for convenient one-
> field-to-rule-them-all search.
Would that really help? As I understand it, copyField takes the raw, not
yet analyzed value.
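For reference, a minimal sketch of what the suggested catch-all copy might look like in schema.xml, assuming per-language fields named with a text_ prefix (all names here are hypothetical):

  <field name="text" type="text" indexed="true" stored="false" multiValued="true"/>
  <!-- copy every per-language field into the catch-all field -->
  <copyField source="text_*" dest="text"/>

Note that copyField does pass along the raw, pre-analysis value; the destination "text" field then applies its own single analyzer to everything, so the per-language analysis is lost there, which is exactly the limitation raised above.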
>If Perl is your choice:
>http://search.cpan.org/~bricas/WebService-Solr-0.07/lib/WebService/Solr.pm
>
Ah. Very interesting; I had not seen this!
>Otis
>--
>Sematext -- http://sematext.com/ -- Lucene - Solr - Nutch
>
>
>
>- Original Message
>> From: Francis Yakin
>> To: "solr-user@luc
Otis,
What is the difference or advantage of using Solr.pm?
http://search.cpan.org/~garafola/Solr-0.03/lib/Solr.pm
Thanks
Francis
-Original Message-
From: Otis Gospodnetic [mailto:otis_gospodne...@yahoo.com]
Sent: Tuesday, July 07, 2009 10:34 PM
To: solr-user@lucene.apache.org
Subject:
If Perl is your choice:
http://search.cpan.org/~bricas/WebService-Solr-0.07/lib/WebService/Solr.pm
Otis
--
Sematext -- http://sematext.com/ -- Lucene - Solr - Nutch
- Original Message
> From: Francis Yakin
> To: "solr-user@lucene.apache.org"
> Sent: Wednesday, July 8, 2009 1:16:04 AM
Jay,
Thanks. The test case was not enough. I have attached a new patch; I
guess that should solve this.
On Wed, Jul 8, 2009 at 3:48 AM, solr jay wrote:
> I guess in this case it doesn't matter whether the two directories
> tmpIndexDir and indexDir are the same or not. It looks like the index
> directory is switched to tmpIndexDir and then it is deleted inside "finally".
I have the following "curl" commands to update and commit to Solr (I have 10
XML files just for testing):
curl http://solr00:7001/solr/update --data-binary @xml_Artist-100170.txt -H
'Content-type:text/plain; charset=utf-8'
curl http://solr00:7001/solr/update --data-binary @xml_Artist-101062.
I am using Solr 1.3.
> Date: Wed, 8 Jul 2009 01:12:02 +0900
> From: k...@r.email.ne.jp
> To: solr-user@lucene.apache.org
> Subject: Re: Query on the updation of synonym and stopword file.
>
> Sagar,
>
> > I am facing a problem here that even after the core reload and
> re-indexing
> >
Try with fl=* or fl=*,score added to your request string.
-Yao
Yang Lin-2 wrote:
>
> Hi,
> I have some problems.
> For my Solr program, I want to type only the Query String and get all
> field results that include the Query String. But now I can't get any
> result without a specified field. For
use Solr's Filter Query parameter "fq":
fq=x:[10 TO 100]&fq=y:[20 TO 300]&fl=title
-Yao
huenzhao wrote:
>
> Hi all:
>
> Suppose that my index have 3 fields: title, x and y.
>
> I know one range (10 < x < 100) can be queried like this:
>
> http://localhost:8983/solr/select?q=x:[10 TO 100]&fl=title
Hi all:
Suppose that my index have 3 fields: title, x and y.
I know one range (10 < x < 100) can be queried like this:
http://localhost:8983/solr/select?q=x:[10 TO 100]&fl=title
If I want to query two ranges (10 < x < 100 AND 20 < y < 300) like
SQL (select title where x > 10 and x < 100 and y > 20 and y < 300), how can I do it?
There is an alternative to knowing the language at query time:
process the text multiple times, producing stems or lemmas for all the possible languages.
This may well be a cure much worse than the disease.
Yes, LI can sell you our lemma-production capability.
--benson margulies
basis technology
On Tue, Jul 7, 2009 at 6
When using stemming, you have to know the query language.
For your project, perhaps you should look into switching to a
lemmatizer instead. I believe Lucid can provide integration with a
commercial lemmatizer. This way you can expand the document field
itself and do not need to know the query language.
: When indexing or querying text, I'm using the solr.StopFilterFactory; it
seems to work just fine...
:
: But I want to use the text field as a facet, and get all the commonly
: used words in a set of results, without the stopwords. As far as I
: tried, I always get stopwords, and numerical
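One common way around this, as a sketch only (the field, type, and file names below are hypothetical), is to facet on a separate field whose index-time analyzer strips the stopwords, and to fill it via copyField:

  <fieldType name="text_facet" class="solr.TextField" positionIncrementGap="100">
    <analyzer>
      <tokenizer class="solr.WhitespaceTokenizerFactory"/>
      <filter class="solr.LowerCaseFilterFactory"/>
      <filter class="solr.StopFilterFactory" words="stopwords.txt" ignoreCase="true"/>
    </analyzer>
  </fieldType>
  <field name="text_for_facets" type="text_facet" indexed="true" stored="false"/>
  <copyField source="text" dest="text_for_facets"/>

Facet values come from the indexed terms, so terms removed by the StopFilterFactory at index time will not show up as facet values on that field.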
I guess in this case it doesn't matter whether the two directories
tmpIndexDir and indexDir are the same or not. It looks like the index
directory is switched to tmpIndexDir and then it is deleted inside
"finally".
On Tue, Jul 7, 2009 at 12:31 PM, solr jay wrote:
> In fact, I saw the directory was created and then deleted.
Hi,
I have some problems.
For my Solr program, I want to type only the Query String and get all field
results that include the Query String. But now I can't get any result without
a specified field. For example, querying with "tina" gets nothing, but
"Sentence:tina" works.
I have adjusted the *schema.xml*
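A sketch of the usual fix (the "Sentence" field name is taken from the question; the catch-all "text" field is hypothetical): copy the searchable fields into one field, make that the default search field in schema.xml, and re-index.

  <field name="text" type="text" indexed="true" stored="false" multiValued="true"/>
  <copyField source="Sentence" dest="text"/>
  <defaultSearchField>text</defaultSearchField>

After re-indexing, a bare query like "tina" is searched against the "text" field by default.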
Norberto,
You said last week:
"why not generate your SQL output directly into your oracle server as a file,
upload the file to your SOLR server? Then the data file is local to your SOLR
server , you will bypass any WAN and firewall you may be having. (or some
variation of it, sql -> SOLR server
: But I have a problem like this; when i call
:
http://localhost:8983/solr/select/?qt=cfacet&q=%2BitemTitle:nokia%20%2BcategoryId:130&start=0&limit=3&fl=id,
: itemTitle
: I'm getting all fields instead of only id and itemTitle.
Your custom handler is responsible for checking the fl and setting
And here is my code :)
If you need some explanation feel free to ask :)
You can test it on the first test file I gave you when I opened the thread.
At the moment it works only on one file; I have to change it a bit to make
it work on a directory with lots of XML files.
See you later guys :-)
Faceting on MLT requires the use of MoreLikeThisHandler. The standard request
handler, while providing support for MLT via a search component, does not
return facets on MLT results. To enable the MLT handler, add an entry like the
one below to your solrconfig.xml.
The query parameter syntax for facet
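A sketch of a typical MoreLikeThisHandler registration (the handler name and the mlt.fl field list are placeholders; adjust them to your schema):

  <requestHandler name="/mlt" class="solr.MoreLikeThisHandler">
    <lst name="defaults">
      <str name="mlt.fl">title,body</str>
      <int name="mlt.mintf">1</int>
      <int name="mlt.mindf">1</int>
    </lst>
  </requestHandler>

Requests to /mlt can then carry the usual facet, facet.field, and fq parameters alongside the mlt.* ones.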
The answer to my own question:
...
...
would work.
-Yao
Yao Ge wrote:
>
> I am not sure about the parameters for the MLT requestHandler plugin. Can
> one of you share the solrconfig.xml entry for MLT? Thanks in advance.
> -Yao
>
>
> Bill Au wrote:
>>
>> I have been using the StandardRequestHandler (ie /solr/select).
thanks for your genuine advice, I got it.
On Tue, Jul 7, 2009 at 12:27 PM, Chris Hostetter
wrote:
>
> duplicate post?
>
> http://www.nabble.com/how-to-shuffle-the-result-while-follow-some-priority-rules-at-the--same-time-to24282025.html#a24282025
>
> FYI: reposting the same question twice doesn't
In fact, I saw the directory was created and then deleted.
On Tue, Jul 7, 2009 at 12:29 PM, solr jay wrote:
> OK, here is the problem. In the function, the two directories tmpIndexDir
> and indexDir are the same (in this case only?), and then at the end of the
> function, the directory tmpIndexD
OK, here is the problem. In the function, the two directories tmpIndexDir
and indexDir are the same (in this case only?), and then at the end of the
function, the directory tmpIndexDir is deleted, which deletes the new index
directory.
    } finally {
      // tmpIndexDir is deleted unconditionally, even when it is the same
      // directory as indexDir, which removes the newly installed index
      delTree(tmpIndexDir);
    }
On
duplicate post?
http://www.nabble.com/how-to-shuffle-the-result-while-follow-some-priority-rules-at-the--same-time-to24282025.html#a24282025
FYI: reposting the same question twice doesn't tend to get responses
faster, it just increases the total volume of mail and slows down
everyone's ability t
I'm sorry, I've almost finished my script to format my XML into Solr's XML.
I'll give it to you later; I think it can help some people like me in the
future :)
I just need to format my output text and everything will be fine :)
Cheers for your help guys ;)
On Tue, Jul 7, 2009 at 7:06 PM, Jay Hill wr
: http://projecte01.development.barcelonamedia.org/fonetic/
: you will see a "Top Words" list (in Spanish and stemmed) in the list there
: is the word "si" which is in 20649 documents.
: If you click at this word, the system will perform the query
: (x) content:si, with no answers at all
:
I see. So I tried it again. Now index.properties has
#index properties
#Tue Jul 07 12:13:49 PDT 2009
index=index.20090707121349
but there is no such directory index.20090707121349 under the data
directory.
Thanks,
J
On Tue, Jul 7, 2009 at 11:50 AM, Shalin Shekhar Mangar <
shalinman...@gmail.co
On Tue, Jul 7, 2009 at 11:50 PM, solr jay wrote:
> It seemed that the patch fixed the symptom, but not the problem itself.
>
> Now the log messages look good. After it downloaded and installed the
> index once,
> it printed out
>
> *Jul 7, 2009 10:35:10 AM org.apache.solr.handler.SnapPuller
> fetchLat
: I want to implement the effect that the results should differ from each
: other within one page, but I want to show some results first, like those
: that contain more attributes.
There is a RandomSortField that you can use as a "tie breaker" when all
other fields are equal. Info about using that
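A sketch of the usual setup (the dynamic field name is illustrative): declare a random field type and a matching dynamic field in schema.xml, then sort on it last, e.g. sort=score desc, random_1234 desc, where the number in the field name seeds the ordering.

  <fieldType name="random" class="solr.RandomSortField" indexed="true"/>
  <dynamicField name="random_*" type="random"/>

Changing the suffix (random_1234, random_5678, ...) gives a different but repeatable shuffle for ties.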
On Tue, Jul 7, 2009 at 6:45 PM, gateway0 wrote:
>
> Thank you, that was it.
>
> Why is the preserveOriginal="1" option nowhere documented?
>
>
A simple case of oversight :)
I've added a note on preserveOriginal and splitOnNumerics (another omission)
to the wiki page http://wiki.apache.org/solr/A
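For context, a sketch of where the option goes; the surrounding attribute values are just common defaults and should be adapted to your own analyzer chain:

  <filter class="solr.WordDelimiterFilterFactory"
          generateWordParts="1" generateNumberParts="1"
          catenateWords="1" catenateNumbers="1" catenateAll="0"
          splitOnCaseChange="1" preserveOriginal="1"/>

With preserveOriginal="1" the filter emits the original, unsplit token (e.g. ABCD3456) in addition to the generated parts, which is what makes wildcard searches like the one in the original question behave as expected.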
: 5-6 days after a fresh index, the index size suddenly increased (no optimization in
: between) by 150GB, and then queries take a long time and a Java heap error comes.
: I ran optimize on this index. It takes a long time and results in the
: index size increasing to more than 200GB, and it didn't show that the optimize
It seemed that the patch fixed the symptom, but not the problem itself.
Now the log messages look good. After it downloaded and installed the index once,
it printed out
*Jul 7, 2009 10:35:10 AM org.apache.solr.handler.SnapPuller fetchLatestIndex
INFO: Slave in sync with master.*
but the files inside
On Tue, Jul 7, 2009 at 1:50 PM, Francis Yakin wrote:
> Yeah, it works now.
>
> How can I verify that the new CSV file got uploaded?
point your browser at
http://localhost:8983/solr/admin/stats.jsp
Check out the "UPDATE HANDLERS" section
-Yonik
http://www.lucidimagination.com
Yeah, it works now.
How can I verify that the new CSV file got uploaded?
Thanks
Francis
-Original Message-
From: ysee...@gmail.com [mailto:ysee...@gmail.com] On Behalf Of Yonik Seeley
Sent: Tuesday, July 07, 2009 10:49 AM
To: solr-user@lucene.apache.org
Cc: Norberto Meijome
Subject: Re:
The double quotes around the ampersand don't belong there.
I think that UTF8 should also be the default, so the following should also work:
curl
'http://localhost:8983/solr/update/csv?stream.file=/opt/apache-1.2.0/example/exampledocs/test.csv'
-Yonik
http://www.lucidimagination.com
On Tue, Jul
With
curl
'http://localhost:8983/solr/update/csv?stream.file=/opt/apache-1.2.0/example/exampledocs/test.csv&stream.contentType=text/plain;charset=utf-8'
No errors now.
But, how can I verify that the update is happening?
Thanks
Francis
-Original Message-
From: Francis Yakin [mailto:fya..
I did try:
curl
'http://localhost:8983/solr/update/csv?stream.file=/opt/apache-1.2.0/example/exampledocs/test.csv"&"stream.contentType=text/plain;charset=utf-8'
It doesn't work
Francis
-Original Message-
From: ysee...@gmail.com [mailto:ysee...@gmail.com] On Behalf Of Yonik Seeley
Sent
Hi buddy,
I am working on a music search project and I have a special requirement
about the ranking when querying the artist name.
Ex: When I query the artist "ne yo", there are 500 results and maybe 100 song
names are repeated. So the ideal thing is to let users get more different
songs in one page.
Hi,
I am interested in creating a test environment where I can
make use of Solr/Lucene. My objective is to be able to test
various features of Solr (replication, performance, indexing, searching
and so on).
I wanted someone to give me a start on the above. I am well versed
with Lucene/Solr basics.
Mathieu, have a look at Solr's DataImportHandler. It provides a
configuration-based approach to indexing different types of data sources,
including relational databases and XML files. In particular, have a look at
the XpathEntityProcessor (
http://wiki.apache.org/solr/DataImportHandler#head-f1502b1ed71d9
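As a rough sketch (the entity name, file path, and XPath expressions below are hypothetical placeholders), a data-config.xml for indexing XML files with the XPathEntityProcessor looks roughly like this; it is referenced from a DataImportHandler entry in solrconfig.xml:

  <dataConfig>
    <dataSource type="FileDataSource" encoding="UTF-8"/>
    <document>
      <entity name="records"
              processor="XPathEntityProcessor"
              url="/path/to/records.xml"
              forEach="/records/record">
        <field column="id"    xpath="/records/record/@id"/>
        <field column="title" xpath="/records/record/title"/>
      </entity>
    </document>
  </dataConfig>

Each <field> maps an XPath in the source document to a column, which in turn maps to a field in schema.xml.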
I am not sure about the parameters for the MLT requestHandler plugin. Can one
of you share the solrconfig.xml entry for MLT? Thanks in advance.
-Yao
Bill Au wrote:
>
> I have been using the StandardRequestHandler (ie /solr/select). fq does
> work with the MoreLikeThisHandler. I will switch to
Hi Otis,
Thanks for replying to my query.
My question is: if multiple values are provided for a custom field, then how can
it be represented in a Solr query? So if my field is fileID and its values
are 111, 222 and 333, and my search string is ‘product’, then how can this be
represented in a Solr query
Sagar,
> I am facing a problem here that even after the core reload and re-indexing
> of the documents, the newly updated synonyms or stop words are not loaded.
> It seems the filters are not aware that these files were updated, so the
> only solution for me is to restart the whole container in which I have
Hi,
I want to try KStem. I'm following the instructions on this page:
http://wiki.apache.org/solr/AnalyzersTokenizersTokenFilters/Kstem
... but the download link doesn't work.
Does anyone know the new location to download KStem?
Hi.
I am currently using Solr Cell to extract content from binary files, and I
am passing along some additional metadata with ext.literal params. Sample
below:
curl
"http://localhost:8983/solr/update/extract?ext.literal.id=2&ext.literal.some_code1=code1&ext.literal.some_code2=code2&ext.idx.attr=
solr jay wrote:
Hi,
I am looking at this piece of configuration in solrconfig.xml
<admin>
  <defaultQuery>solr</defaultQuery>
  <gettableFiles>solrconfig.xml schema.xml</gettableFiles>
  <pingQuery>q=solr&amp;version=2.0&amp;start=0&amp;rows=0</pingQuery>
  <healthcheck type="file">server-enabled</healthcheck>
</admin>
I've never used this feature before, but reading source code...
It wasn't cl
I have been using the StandardRequestHandler (ie /solr/select). fq does
work with the MoreLikeThisHandler. I will switch to using that. Thanks.
Bill
On Tue, Jul 7, 2009 at 11:02 AM, Marc Sturlese wrote:
>
> At least in trunk, if you request for:
> http://localhost:8084/solr/core_A/mlt?q=id:7468
You can use facet.prefix to match the beginning of a given word:
http://wiki.apache.org/solr/SimpleFacetParameters#head-579914ef3a14d775a5ac64d2c17a53f3364e3cf6
Bill
On Tue, Jul 7, 2009 at 11:02 AM, Pierre-Yves LANDRON
wrote:
>
> Hello,
>
> Here is what I would like to achieve : in an indexed d
Disclaimer: I'm no expert here.
Well, are you working with multi-word synonyms? If so, then yes, I'd say
that makes sense to do it at index time. Otherwise, it really depends on
a host of factors.
In terms of your settings, if you expand synonyms at index time, what
would be the point of red
At least in trunk, if you request:
http://localhost:8084/solr/core_A/mlt?q=id:7468365&fq=price:[100 TO 200]
it will filter the MoreLikeThis results.
Bill Au wrote:
>
> I think fq only works on the main response, not the mlt matches. I found
> a
> couple of releated jira:
>
> http://issues.a
Hello,
Here is what I would like to achieve: in an indexed document there's a
fulltext indexed field; I'd like to browse the terms in this field, i.e. get
all the terms that match the beginning of a given word, for example.
I can get all the field's facets for this document, but that's a lot o
Yep, that makes sense.
But I was afraid it was the only solution. Since I finished writing my
email I have started to create a PHP script to create the same file but
compatible with Solr.
thx for your quick answer ;)
On Tue, Jul 7, 2009 at 4:40 PM, Matt Mitchell wrote:
> Saeli,
>
> Solr expects a
Anyone?
PS: my apologies if you guys think it's spamming, but I really need some help
here.
Thanks!
mani
On Sun, Jul 5, 2009 at 12:49 PM, Mani Kumar wrote:
> Hi all,
>
> I am a bit confused about how to use synonym filter configs. I am using
> Solr 1.4.
>
> The default config is like:
>
> for query
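For reference, a sketch of the index-time expansion setup that is usually recommended for multi-word synonyms (the field type name and tokenizer choice here are illustrative, not the poster's actual config):

  <fieldType name="text" class="solr.TextField" positionIncrementGap="100">
    <analyzer type="index">
      <tokenizer class="solr.WhitespaceTokenizerFactory"/>
      <filter class="solr.SynonymFilterFactory" synonyms="synonyms.txt"
              ignoreCase="true" expand="true"/>
      <filter class="solr.LowerCaseFilterFactory"/>
    </analyzer>
    <analyzer type="query">
      <tokenizer class="solr.WhitespaceTokenizerFactory"/>
      <filter class="solr.LowerCaseFilterFactory"/>
    </analyzer>
  </fieldType>

Expanding at index time (expand="true") and leaving the synonym filter out of the query analyzer avoids the well-known problems with multi-word synonyms at query time; the trade-off is that changing synonyms.txt requires re-indexing.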
Hello,
I've recently started using this handler to index MS Word and PDF files.
When I set ext.extract.only=true, I get back all the metadata that is
associated with that file.
If I want to index, I need to set ext.extract.only=false. If I want to index
all that metadata along with the contents,
Saeli,
Solr expects a certain XML structure when adding documents. You'll need to
come up with a mapping that translates the original structure to one that
Solr understands. You can then search Solr and get those Solr documents
back. If you want to keep the original XML, you can store it in a field.
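For illustration, this is the general shape of the XML that Solr's /update handler expects; the field names below are placeholders for whatever mapping you choose:

  <add>
    <doc>
      <field name="id">doc-001</field>
      <field name="title">Example title</field>
      <field name="original_xml"><![CDATA[ ...the original record, stored verbatim... ]]></field>
    </doc>
  </add>

The "original_xml" field here would be declared stored="true" (and typically indexed="false") in schema.xml so the source record can be returned with search results.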
I think fq only works on the main response, not the mlt matches. I found a
couple of related JIRA issues:
http://issues.apache.org/jira/browse/SOLR-295
http://issues.apache.org/jira/browse/SOLR-281
If I am reading them correctly, I should be able to use DisMax and
MoreLikeThis together. I will give that a try.
Hello.
I'm a new user of Solr; I already used Lucene to index files and search.
But my program was too slow, which is why I was looking for another solution,
and I thought I found it.
I say "I thought" because I don't know if it's possible to use Solr with
this kind of XML file.
http://ltsc.ieee
Also make sure you don't have any autocommit rules enabled in solrconfig.xml.
How many documents are in the 400MB CSV file, and how long does it
take to index now?
-Yonik
http://www.lucidimagination.com
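If autocommit is enabled, it will look roughly like the block below in solrconfig.xml (the thresholds shown are arbitrary examples); commenting it out and issuing a single commit at the end of the load is usually faster for bulk CSV imports:

  <updateHandler class="solr.DirectUpdateHandler2">
    <!-- comment out (or remove) this block to disable automatic commits -->
    <autoCommit>
      <maxDocs>10000</maxDocs>  <!-- commit after this many added documents -->
      <maxTime>60000</maxTime>  <!-- or after this many milliseconds -->
    </autoCommit>
  </updateHandler>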
On Tue, Jul 7, 2009 at 10:03 AM, Anand Kumar
Prabhakar wrote:
>
> Hi Yonik,
>
> Currently ou
Hi Yonik,
Currently our schema has very few fields and we don't have any copy fields
either. Please find below the schema.xml we are using:
On Tue, Jul 7, 2009 at 9:14 AM, Anand Kumar
Prabhakar wrote:
> I want to know whether there is any method to do
> it much faster; we have overcome the OutOfMemoryException by increasing heap
> space.
Optimize your schema - eliminate all unnecessary copyFields and
default values. The current example schema
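To make the advice concrete, these are the kinds of schema.xml entries to look for and remove when bulk loading (both examples below are hypothetical):

  <!-- every copyField adds analysis and indexing work for each document -->
  <copyField source="body" dest="spell"/>
  <!-- a default value is filled in for every document that omits the field -->
  <field name="timestamp" type="date" indexed="true" stored="true" default="NOW"/>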
Thank you, that was it.
Why is the preserveOriginal="1" option nowhere documented?
Shalin Shekhar Mangar wrote:
>
> On Tue, Jul 7, 2009 at 2:10 PM, gateway0 wrote:
>
>>
>> I indexed my data and defined a defaultSearchField named "text" (<field name="text" type="text" indexed="true" stored="fal
Thank you for the reply Yonik. I have already tried with smaller CSV files;
currently we are trying to load a CSV file of 400 MB, but this is taking too
much time (more than half an hour). I want to know whether there is any method to do
it much faster; we have overcome the OutOfMemoryException by increasing heap space.
On Tue, Jul 7, 2009 at 8:41 AM, Anand Kumar
Prabhakar wrote:
> Is there any way so that we can read the data from the
> CSV file and load it into the Solr database without using "/update/csv"
That *is* the right way to load a CSV file into Solr.
How many records are in the CSV file, and how much h
I'm loading a CSV file into Solr. Since the CSV file contains a huge amount
of data, it's taking a very long time to load and sometimes resulting in an
OutOfMemoryException. Is there any way we can read the data from the
CSV file and load it into the Solr database without using "/update/csv" or
Look at the error - it's bash (your command line shell) complaining.
The '&' terminates one command and puts it in the background.
Surrounding the command with quotes will get you one step closer:
curl
'http://localhost:8983/solr/update/csv?stream.file=/opt/apache-1.2.0/example/exampledocs/test.c
Hi.
I'm writing my custom faceted request handler.
But I have a problem like this; when i call
http://localhost:8983/solr/select/?qt=cfacet&q=%2BitemTitle:nokia%20%2BcategoryId:130&start=0&limit=3&fl=id,
itemTitle
I'm getting all fields instead of only id and itemTitle.
Also I'm getting no res
Jay,
I am opening an issue, SOLR-1264:
https://issues.apache.org/jira/browse/SOLR-1264
I have attached a patch as well. I guess that is the fix. Could you
please confirm that?
On Tue, Jul 7, 2009 at 12:59 AM, solr jay wrote:
> It looks that the problem is here or before that in
> SnapPuller.fetc
Hi all,
I'm still trying to tune my spellchecker to get the results I expect.
I've created a dictionary and currently I want to get a special behaviour
from the spellchecker.
The fact is that when I introduce the query 'Fernandox Alonso' I get what
I expect:
false
Fernando Alonso
but when I tr
Hey, I've been testing MMapDirectory on a Debian box with just a couple of
cores (1G each index), and it seems to give a bit better performance
with a cold index. But I don't think it's good enough (not in my use case)
given the memory it requires.
When doing a snapinstaller (using MMap) tom
On Tue, Jul 7, 2009 at 2:10 PM, gateway0 wrote:
>
> I indexed my data and defined a defaultSearchField named "text" (<field name="text" type="text" indexed="true" stored="false"
> multiValued="true"/>).
>
> Lets say I have 2 values indexed
> 1.value "ABCD"
> 2.value "ABCD3456"
>
> Now when I do a wild
Using MoreLikeThisHandler you can use fq to filter your results. As far as I
know bq is not allowed.
Bill Au wrote:
>
> I have been trying to restrict MoreLikeThis results without any luck also.
> In additional to restricting the results, I am also looking to influence
> the
> scores similar t
Hi,
I indexed my data and defined a defaultSearchField named "text" (<field name="text" type="text" indexed="true" stored="false" multiValued="true"/>).
I copied all my other field values into that field. Now my problem:
Lets say I have 2 values indexed
1.value "ABCD"
2.value "ABCD3456"
Now when I do a wildcard search over those two values, the following happens:
- query:"
On Tue, Jul 7, 2009 at 11:37 AM, Rakhi Khatwani wrote:
> Hi,
> How do we tag Solr indexes and search on those indexes? There is not
> much information on the wiki. All I could find is this:
> http://wiki.apache.org/solr/UserTagDesign
>
> Has anyone tried it? (using the Solr API)
>
That page was crea