Hi to all,
I have an issue while I am working with Solr.
I am working on a blog module, where the user can create multiple blogs,
post to them, and have several comments per post. For implementing this
module I am using Solr 1.4.
When I get the blog details of a particular user, it brings the
ok this was committed on July 15, 2009
before that backupAfter was called "snapshot"
On Thu, Sep 10, 2009 at 10:14 PM, wojtekpia wrote:
>
> I'm using trunk from July 8, 2009. Do you know if it's more recent than that?
>
>
> Noble Paul നോബിള് नोब्ळ्-2 wrote:
>>
>> which version of Solr are you
There are a lot of company names whose correct spelling people are
uncertain about. A few examples are:
1. best buy, bestbuy
2. walmart, wal mart, wal-mart
3. Holiday Inn, HolidayInn
What TokenizerFactory and/or TokenFilterFactory should I use so that
somebody typing "wal mart" (quotes no
Changing basic defaults like this makes it very confusing to work with
successive solr releases, to read the wiki, etc.
You can make custom search requesthandlers - an example:
customparser
http://localhost:8983/solr/custom?q=string_in_my_custom_language
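The "customparser" line above appears to have lost its surrounding XML; a registration along these lines is presumably what was meant (the handler name, parser name, and plugin class are assumptions, and the parser must be registered as a queryParser plugin):

```xml
<!-- com.example.CustomQParserPlugin is hypothetical -->
<queryParser name="customparser" class="com.example.CustomQParserPlugin"/>

<requestHandler name="/custom" class="solr.SearchHandler">
  <lst name="defaults">
    <str name="defType">customparser</str>
  </lst>
</requestHandler>
```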
On 9/10/09, Stephen Dunca
On Fri, Sep 11, 2009 at 6:48 AM, venn hardy wrote:
>
> Hi Fergus,
>
> When I debugged in the development console
> http://localhost:9080/solr/admin/dataimport.jsp?handler=/dataimport
>
> I had no problems. Each category/item seems to be only indexed once, and no
> parent fields are available (ex
Facets are not involved here. These are only simple searches.
The DisMax parser does not use field names in the query. DisMax creates a
nice simple syntax for people to type into a web browser search field. The
various parameters let you sculpt the relevance in order to tune the user
experience.
In the schema.xml file, "*_i" is defined as a wildcard type for integer.
If a name-value pair is an integer, use: name_i as the field name.
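For reference, the declaration in the example schema.xml looks roughly like this (the indexed/stored flags shown are the usual defaults, not quoted verbatim from any particular release):

```xml
<!-- any field whose name ends in _i is treated as an integer -->
<dynamicField name="*_i" type="int" indexed="true" stored="true"/>
```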
On 9/10/09, nourredine khadri wrote:
>
> Thanks for the quick reply.
>
> Ok for dynamicFields but how can i rename fields during indexation/search
> to ad
You are right.
I ran into the same thing. Windows curl gave me an error, but cygwin ran
without any issues.
Thanks
Lance Norskog-2 wrote:
>
> It is a windows problem (or curl, whatever). This works with
> double-quotes.
>
> C:\Users\work\Downloads>\cygwin\home\work\curl-7.19.4\curl.exe
> http://loc
It is best to start off with Solr by playing around with the example in the
example/ directory. Index the data in the example/exampledocs directory, do
some searches, look at the index with the admin/luke page. After that, this
will be much easier.
To bring your Lucene under Solr, you have to exa
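For the playing-around step, the usual sequence with a Solr 1.x distribution is roughly the following (run from the distribution root; the port and paths are the example defaults):

```
cd example
java -jar start.jar
# then, in a second shell, index the sample documents:
cd example/exampledocs
java -jar post.jar *.xml
# browse http://localhost:8983/solr/admin/ and try some queries
```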
There is only one index. The index has newer "segments" which represent new
records and deletes to old records (sort of). Incremental replication copies
new segments; putting the new segments together with the previous index
makes the new index.
Incremental replication under rsync does work; perha
Yes, but in this case the query that I'm executing doesn't have any facets. I
mean, for this query I'm not using any filter cache. What does it mean that
"operating system cache can be significant"? That my first query loads a
big chunk of the index into memory (maybe even the entire index)?
On Thu, Se
Another, slower way is to create a spell checking dictionary and do spelling
requests on the first few characters the user types.
http://wiki.apache.org/solr/SpellCheckerRequestHandler?highlight=%28spell%29%7C%28checker%29
Another way is to search against facet values with the facet.prefix feature
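For the facet.prefix approach, the request can be as simple as this (the field name "name_untokenized" is made up; it should be an untokenized, lowercased copy of the field you want to suggest from, and facet.prefix carries the characters the user has typed so far):

```
/select?q=*:*&rows=0&facet=true&facet.field=name_untokenized&facet.prefix=wal
```

The matching values come back under facet_counts/facet_fields in the response.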
A QueryParser is a Lucene class that parses a string into a tree of query
objects.
A request handler in solrconfig.xml describes a Solr RequestHandler object.
This object binds strings into http parameter strings. If a request handler
name is "/abc" then it is called by
http://localhost:8983/solr/a
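A sketch of such a registration in solrconfig.xml (the defaults section is just illustrative):

```xml
<!-- reachable at http://localhost:8983/solr/abc -->
<requestHandler name="/abc" class="solr.SearchHandler">
  <lst name="defaults">
    <str name="echoParams">explicit</str>
  </lst>
</requestHandler>
```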
datefield:[X TO* Y] for X to Y-0....1
This would be backwards-compatible. {} are used for other things and lexing
is a dying art. Using a * causes mistakes to trigger wildcard syntaxes,
which will fail loudly.
On Tue, Sep 8, 2009 at 5:20 PM, Chris Hostetter wrote:
>
> : I ran into that p
Hi Fergus,
When I debugged in the development console
http://localhost:9080/solr/admin/dataimport.jsp?handler=/dataimport
I had no problems. Each category/item seems to be only indexed once, and no
parent fields are available (except the category name).
I am not entirely sure how the forEach
This has to be done by an UpdateRequestProcessor
http://wiki.apache.org/solr/UpdateRequestProcessor
On Tue, Sep 8, 2009 at 3:34 PM, Villemos, Gert wrote:
> I would like to build the value of a field based on the value of multiple
> other fields at submission time. I.e. I would like to submit
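A sketch of what wiring this up might look like in solrconfig.xml, assuming you write a custom UpdateRequestProcessorFactory (com.example.ConcatFieldsFactory is hypothetical; it would compute the new field from the other fields on each incoming document before the normal update runs):

```xml
<updateRequestProcessorChain name="buildField">
  <!-- hypothetical custom factory that adds the computed field -->
  <processor class="com.example.ConcatFieldsFactory"/>
  <processor class="solr.RunUpdateProcessorFactory"/>
</updateRequestProcessorChain>
```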
At 12M documents, operating system cache can be significant.
Also, the first time you sort or facet on a field, a field cache
instance is populated which can take a lot of time. You can prevent
slow first queries by configuring a static warming query in
solrconfig.xml that includes the common sort
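A minimal sketch of such a warming query in solrconfig.xml (the "price" and "category" fields are placeholders for whatever you commonly sort and facet on):

```xml
<listener event="firstSearcher" class="solr.QuerySenderListener">
  <arr name="queries">
    <lst><str name="q">*:*</str><str name="sort">price asc</str></lst>
    <lst>
      <str name="q">*:*</str>
      <str name="facet">true</str>
      <str name="facet.field">category</str>
    </lst>
  </arr>
</listener>
```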
It is a windows problem (or curl, whatever). This works with double-quotes.
C:\Users\work\Downloads>\cygwin\home\work\curl-7.19.4\curl.exe
http://localhost:8983/solr/update --data-binary "" -H
"Content-type:text/xml; charset=utf-8"
Single-quotes inside double-quotes should work: ""
On Tue, Sep
Hi! Why would the first query that I execute take almost 60 seconds to
run, and after that no more than 50ms? I disabled all my caching to check if
it is the reason for the subsequent fast responses, but the same thing happens.
I'm using Solr 1.3.
Something really strange is that it doesn't happen w
Hello,
I have a task where my user gives me 20 English dictionary words, and
I have to run a program and generate a report with all the stemmed words.
I have to use EnglishPorterFilterFactory and SnowballPorterFilterFactory to
check which one is faster and gives the best results.
Should I write a
Thanks Yonik
I have a task where my user gives me 20 English dictionary words, and
I have to run a program and generate a report with all the stemmed words.
I have to use EnglishPorterFilterFactory and SnowballPorterFilterFactory to
check which one is faster and gives the best results.
Should I
Thanks! I don't think I can use an unreleased version of Solr even if it's
stable enough (crazy infrastructure guys), but I might be able to apply the 2
patches mentioned in the link you sent. I will try it in my local copy of
Solr, see if it improves, and let you know.
Thanks!
On Thu, Sep 10, 20
Yes, it seems like I don't need to split. I could use different commit
times. In my use case it is too often, and I could have a different commit
time on a country basis. Your questions made me rethink the need to split
into cores.
Thanks
On Fri, Sep 4, 2009 at 5:38 AM, Shalin Shekhar Mangar <
If I set snippets to 9 and "mergeContiguous" to true, will I get
the entire contents of the field with all the search terms replaced?
I don't see what good it would be just getting one line out of the
whole field as a snippet.
On Thu, Sep 10, 2009 at 7:45 PM, Jay Hill wrote:
> Set up the quer
Set up the query like this to highlight a field named "content":
SolrQuery query = new SolrQuery();
query.setQuery("foo");
query.setHighlight(true).setHighlightSnippets(1); //set other params as
needed
query.setParam("hl.fl", "content");
QueryResponse queryResponse = getSolrServer().query(query);
Hi Zhong,
For #2 the existing patch SOLR-1395 is a good start. It should be
fairly simple to deploy indexes and distribute them to Solr Katta
nodes/servers.
-J
On Wed, Sep 9, 2009 at 11:41 PM, Zhenyu Zhong wrote:
> Jason,
>
> Thanks for the reply.
>
> In general, I would like to use katta to h
Can somebody point me to some sample code for using highlighting in
SolrJ? I understand the highlighted versions of the field comes in a
separate NamedList? How does that work?
--
http://www.linkedin.com/in/paultomblin
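In SolrJ the highlighted fragments come back from QueryResponse.getHighlighting() as a nested map keyed by document id, then by field name, to a list of snippets. A standalone sketch of that shape (the id, field name, and snippet are made up, and no Solr server is involved; it only shows how you would walk the structure):

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class HighlightShape {
    // Mimics the structure returned by QueryResponse.getHighlighting():
    // docId -> fieldName -> list of highlighted snippets
    static Map<String, Map<String, List<String>>> fakeHighlighting() {
        Map<String, Map<String, List<String>>> hl =
                new HashMap<String, Map<String, List<String>>>();
        Map<String, List<String>> fields = new HashMap<String, List<String>>();
        List<String> snippets = new ArrayList<String>();
        snippets.add("a <em>foo</em> in the content");
        fields.put("content", snippets);
        hl.put("doc-1", fields);
        return hl;
    }

    public static void main(String[] args) {
        Map<String, Map<String, List<String>>> hl = fakeHighlighting();
        // Typical usage: for each doc id in the results, look up its snippets
        for (Map.Entry<String, Map<String, List<String>>> doc : hl.entrySet()) {
            List<String> snips = doc.getValue().get("content");
            System.out.println(doc.getKey() + ": " + snips.get(0));
        }
    }
}
```

With a real response you would iterate the ids of the returned documents instead of the whole map.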
Hi again,
I've mostly gotten the multicore working except for one detail.
(I'm using solr 1.3 and solr-ruby 0.0.6 in a rails project.)
I've done a few queries and I appear to be able to get hits from either
core. (yeah!)
I'm forming my request like this:
req = Solr::Request::Standard.new(
If I recall correctly, in Solr 1.3 there was an issue where filters
didn't really behave as they should have. Basically, if you had a query
and filters defined, the query would execute normally and only
after that would the filter be applied. AFAIK this is fixed in 1.4 where
now the docu
Try 1.4
http://www.lucidimagination.com/blog/2009/05/27/filtered-query-performance-increases-for-solr-14/
-Yonik
http://www.lucidimagination.com
On Thu, Sep 10, 2009 at 4:35 PM, Jonathan Ariel wrote:
> Hi all!
> I'm trying to measure the query response time when using just a query and
> when u
Hi all!
I'm trying to measure the query response time when using just a query and
when using some filter queries. From what I read and understand, adding a
filter query should improve the query response time. I used Luke to figure out
which fields I should use filter queries on (those that have few uniq
On Thursday 10 September 2009 01:47:38 pm Walter Underwood wrote:
> What kind of storage is used for the Solr index files? When I tested it, NFS
> was 100X slower than local disk.
I'm sorry - I misunderstood your question. The Solr indexes themselves are
stored on local disk. The documents are r
What kind of storage is used for the Solr index files? When I tested it, NFS
was 100X slower than local disk.
wunder
-Original Message-
From: Dan A. Dickey [mailto:dan.dic...@savvis.net]
Sent: Thursday, September 10, 2009 11:15 AM
To: solr-user@lucene.apache.org
Cc: Walter Underwood
Sub
If using {!type=customparser} is the only way now, should I file an issue to
make the default configurable?
--
Stephen Duncan Jr
www.stephenduncanjr.com
On Thu, Sep 3, 2009 at 11:23 AM, Stephen Duncan Jr wrote:
> We have a custom query parser plugin registered as the default for
> searches, an
On Thursday 10 September 2009 09:10:27 am Walter Underwood wrote:
> How big are your documents?
For the most part, I'm just indexing metadata that has been pulled from
the documents. I think I have currently about 40 or so fields that I'm setting.
When the document is an actual document - pdf, do
On Thursday 10 September 2009 08:39:38 am Yonik Seeley wrote:
> On Thu, Sep 10, 2009 at 9:13 AM, Dan A. Dickey wrote:
> > I'm posting documents to Solr using http (curl) from
> > C++/C code and am seeing approximately 3.3 - 3.4
> > documents per second being posted. Is this to be expected?
>
> N
On Thu, Sep 10, 2009 at 1:28 PM, Joe Calderon wrote:
> i have field called text_stem that has a kstemmer on it, im having
> trouble matching wildcard searches on a word that got stemmed
>
> for example i index the word "america's", which according to
> analysis.jsp after stemming gets indexed as "
So, do you think increasing the JVM heap will help? We also have
500 in solrconfig.xml
Originally was set to 200
Currently we give solr 1.5GB for Xms and Xmx, we use jrockit version 1.5.0_15
4 S root 12543 12495 16 76 0 - 848974 184466 Jul20 ? 8-11:12:03
/opt/bea/jrmc-3.0.3-1.5.0/bin/ja
I have a field called text_stem that has a KStemmer on it; I'm having
trouble matching wildcard searches on a word that got stemmed.
For example, I index the word "america's", which according to
analysis.jsp after stemming gets indexed as "america".
When matching I do a query like myfield:(ame*) which
Thanks for the pointer. Definitely appreciate the help.
Todd
On Thu, Sep 10, 2009 at 11:10 AM, Jay Hill wrote:
> If you need an alternative to using the TermsComponent for auto-suggest,
> have a look at this blog on using EdgeNGrams instead of the TermsComponent.
>
>
> http://www.lucidimaginat
If you need an alternative to using the TermsComponent for auto-suggest,
have a look at this blog on using EdgeNGrams instead of the TermsComponent.
http://www.lucidimagination.com/blog/2009/09/08/auto-suggest-from-popular-queries-using-edgengrams/
-Jay
http://www.lucidimagination.com
On Wed, S
All you have to do is use the "start" and "rows" parameters to get the
results you want. For example, the query for the first page of results might
look like this,
?q=solr&start=0&rows=10 (other params omitted). So you'll start at the
beginning (0) and get 10 results. The next page would be
?q=sol
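The arithmetic behind start/rows paging can be sketched in a few lines of plain Java (the Paging class and its helper are made up for illustration; page numbers here are 0-based):

```java
public class Paging {
    // Compute the value of Solr's "start" parameter for a given page.
    static int startFor(int page, int rows) {
        return page * rows;
    }

    public static void main(String[] args) {
        int rows = 10;
        // prints the query string for the first three pages
        for (int page = 0; page < 3; page++) {
            System.out.println("?q=solr&start=" + startFor(page, rows)
                    + "&rows=" + rows);
        }
    }
}
```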
It looks like parseArg was added on Aug 20, 2009. I'm working with slightly
older code. Thanks!
Noble Paul നോബിള് नोब्ळ्-2 wrote:
>
> did you implement your own ValueSourceParser . the
> FunctionQParser#parseArg() method supports strings
>
> On Wed, Sep 9, 2009 at 12:10 AM, wojtekpia wrote:
I'm using trunk from July 8, 2009. Do you know if it's more recent than that?
Noble Paul നോബിള് नोब्ळ्-2 wrote:
>
> which version of Solr are you using? the "backupAfter" name was
> introduced recently
>
https://issues.apache.org/jira/browse/SOLR-1421
2009/9/10 Noble Paul നോബിള് नोब्ळ् :
> I guess there is a bug. I shall raise an issue.
>
>
>
> 2009/9/10 Noble Paul നോബിള് नोब्ळ् :
>> everything looks fine and it beats me completely. I guess you will
>> have to debug this
>>
>> On Thu, Sep 10,
I guess there is a bug. I shall raise an issue.
2009/9/10 Noble Paul നോബിള് नोब्ळ् :
> everything looks fine and it beats me completely. I guess you will
> have to debug this
>
> On Thu, Sep 10, 2009 at 6:17 PM, nourredine khadri
> wrote:
>> Some fields are null but not the one parsed by XPat
The current patch definitely supports facet before and after the collapsing.
Stephen Weiss wrote:
I just noticed this and it reminded me of an issue I've had with
collapsed faceting with an older version of the patch in Solr 1.3.
Would it be possible, if we can get the terms for all the collap
I'm trying to understand the DisMax query handler. I originally
configured it to ensure that the query was mapped onto different fields
in the documents and a boost assigned if the fields match. And that
works pretty smoothly.
However, when it comes to faceted searches the results perplex me.
Co
you do not have to make 3 copies of conf dir even in Solr1.3
you can try this
${./solr/${solr.core.name}/data}
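The line above seems to have lost its XML tags in the archive; in Solr 1.4, per-core data directories are set with a dataDir element, so the suggestion was presumably along these lines (a reconstruction, not verbatim):

```xml
<!-- in a shared solrconfig.xml; solr.core.name is substituted per core -->
<dataDir>./solr/${solr.core.name}/data</dataDir>
```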
On Thu, Sep 10, 2009 at 7:55 PM, Paul Rosen wrote:
> Ok. I have a workaround for now. I've duplicated the conf folder three times
> and changed this line in solrconfig.xml in each fo
everything looks fine and it beats me completely. I guess you will
have to debug this
On Thu, Sep 10, 2009 at 6:17 PM, nourredine khadri
wrote:
> Some fields are null but not the one parsed by XPathEntityProcessor (named
> XML)
>
> 10 sept. 2009 14:40:34 org.apache.solr.handler.dataimport.LogTra
All work and progress on this patch is done under the JIRA issue:
https://issues.apache.org/jira/browse/SOLR-236
R. Tan wrote:
The patch which will be committed soon will add this functionality.
Where can I follow the progress of this patch?
On Mon, Sep 7, 2009 at 3:38 PM, Uri Boness
Ok. I have a workaround for now. I've duplicated the conf folder three
times and changed this line in solrconfig.xml in each folder:
${solr.data.dir:./solr/exhibits/data}
I can't wait for solr 1.4!
Noble Paul നോബിള് नोब्ळ् wrote:
the dataDir is a Solr1.4 feature
On Thu, Sep 10, 2009 at 1:
How big are your documents? Is your index on local disk or network-
mounted disk?
wunder
On Sep 10, 2009, at 6:39 AM, Yonik Seeley wrote:
On Thu, Sep 10, 2009 at 9:13 AM, Dan A. Dickey
wrote:
I'm posting documents to Solr using http (curl) from
C++/C code and am seeing approximately 3.3 -
On Thu, Sep 10, 2009 at 9:13 AM, Dan A. Dickey wrote:
> I'm posting documents to Solr using http (curl) from
> C++/C code and am seeing approximately 3.3 - 3.4
> documents per second being posted. Is this to be expected?
No, that's very slow.
Are you using libcurl, or actually forking a new proc
>Hi Paul,
>The forEach="/document/category/item | /document/category/name" didn't work
>(no categoryname was stored or indexed).
>However forEach="/document/category/item | /document/category" seems to work
>well. I am not sure why category on its own works, but not category/name...
>But thanks f
I'm posting documents to Solr using http (curl) from
C++/C code and am seeing approximately 3.3 - 3.4
documents per second being posted. Is this to be expected?
Granted - I understand that this depends somewhat on the
machine running Solr. By the way - I'm running Solr inside JBoss.
I was hoping
Thanks for the quick reply.
Ok for dynamicFields but how can i rename fields during indexation/search to
add suffix corresponding to the type ?
What is the best way to do this?
Nourredine.
De : Yonik Seeley
À : solr-user@lucene.apache.org
Envoyé le : Jeu
In my tests both seem to be working. I had misspelt the column as
"catgoryname"; is that why?
Keep in mind that you get extra docs for each "category" also
On Thu, Sep 10, 2009 at 5:53 PM, venn hardy wrote:
>
> Hi Paul,
> The forEach="/document/category/item | /document/category/name" didn't w
Some fields are null but not the one parsed by XPathEntityProcessor (named XML)
10 sept. 2009 14:40:34 org.apache.solr.handler.dataimport.LogTransformer
transformRow
FIN: Map content : {KEYWORDS=pub, SPECIFIC=null, FATHERSID=, CONTAINERID=,
ARCHIVEDDATE=0, SITE=12308, LANGUAGE=null, ARCHIVESTATE
What do you see if you keep logTemplate="${document}"? I'm trying
to figure out the contents of the map.
Thanks for your reply
> On Sep 10, 2009, at 6:41 AM, busbus wrote:
> Solr defers to Lucene on reading the index. You just need to tell
> Solr whether the index is a compound file or not and make sure the
> versions are compatible.
>
This part seems to be the point.
How to make solr to r
On Thu, Sep 10, 2009 at 5:58 AM, nourredine khadri
wrote:
> I want to index my fields dynamically.
>
> DynamicFields don't suit my need because I don't know fields name in advance
> and fields type must be set > dynamically too (need strong typage).
This is what dynamic fields are meant for - yo
Hi Giovanni,
I am facing the same issue. Can you share some info on how you solved this
puzzle?
hossman wrote:
>
>
> : Even setting everything to INFO through
> : http://localhost:8080/solr/admin/logging didn't help.
> :
> : But considering you do not see any bad issue here, at this time I
Hi Paul,
The forEach="/document/category/item | /document/category/name" didn't work (no
categoryname was stored or indexed).
However forEach="/document/category/item | /document/category" seems to work
well. I am not sure why category on its own works, but not category/name...
But thanks for ti
That's the case. The field is not null.
10 sept. 2009 14:10:54 org.apache.solr.handler.dataimport.LogTransformer
transformRow
FIN: id : 5040052 - Xml content :
Empty
Subtitle - Click Here to edit Empty Title -
Click Here to edit Empty Chap¶ - Click Here
to edit Empty Autor - Click
On Sep 10, 2009, at 6:41 AM, busbus wrote:
Hello All,
I have a set of files indexed by Lucene. Now I want to use the
indexed files
in Solr. The files .cfx and .cfs are not readable by Solr, as it
supports only
.fds and .fdx.
Solr defers to Lucene on reading the index. You just need to te
On Thu, Sep 10, 2009 at 4:52 PM, dharhsana wrote:
>
> Hi to all,
> when i try to execute my query i get Connection refused ,can any one please
> tell me what should be done for this ,to make my solr run.
>
> org.apache.solr.client.solrj.SolrServerException:
> java.net.ConnectException:
> Connectio
Hi to all,
when I try to execute my query I get Connection refused. Can anyone please
tell me what should be done to make my Solr run?
org.apache.solr.client.solrj.SolrServerException: java.net.ConnectException:
Connection refused: connect
org.apache.solr.client.solrj.SolrServerExcepti
Hello All,
I have a set of files indexed by Lucene. Now I want to use the indexed files
in Solr. The files .cfx and .cfs are not readable by Solr, as it supports only
.fds and .fdx.
So I decided to add/update the index by just loading an XML file using the
post.jar function.
java -jar post.jar newFi
Can you just confirm that the field is not null by adding a
LogTransformer to the entity "document"?
On Thu, Sep 10, 2009 at 3:54 PM, nourredine khadri
wrote:
> But why that occurs only for delta import and not for the full ?
>
> I've checked my data : no xml field is null.
>
> Nourredine.
>
I just committed the fix https://issues.apache.org/jira/browse/SOLR-1420
But it does not solve your problem; it will just prevent the import
from throwing an exception and failing
2009/9/10 Noble Paul നോബിള് नोब्ळ् :
> I guess there was a null field and the xml parser blows up
>
>
> On Thu, Sep 1
But why does that occur only for delta-import and not for the full import?
I've checked my data : no xml field is null.
Nourredine.
Noble Paul wrote :
>
>I guess there was a null field and the xml parser blows up
I guess there was a null field and the xml parser blows up
On Thu, Sep 10, 2009 at 3:06 PM, nourredine khadri
wrote:
> Hi,
>
> I'm new solR user and for the moment it suits almost all my needs :)
>
> I use a fresh nightly release (09/2009) and I index a
> database table using dataImportHandler.
Hello,
I want to index my fields dynamically.
DynamicFields don't suit my need because I don't know fields name in advance
and fields type must be set dynamically too (need strong typage).
I think the solution is to handle this programmatically but what is the best
way to do this? Which custo
In testing Solr 1.4 today with Weblogic it looks like the filters issue
still exists. Adding the appropriate entries in weblogic.xml still
resolves it.
On first look, the header.jsp changes don't appear to be required anymore.
Would it make sense to include a weblogic.xml in the distribution t
Hi,
I'm a new Solr user and for the moment it suits almost all my needs :)
I use a fresh nightly release (09/2009) and I index a
database table using dataImportHandler.
I try to parse an xml content field from this table using XPathEntityProcessor
and FieldReaderDataSource. Everything works fi
Thanks Hossman
As per my understanding and investigation, if we disable STDERR in the
JBoss configs, we will not be able to see any STDERR coming from any of the
APIs, which can be real error messages.
So if we know the exact reason why this message from Solr is showing up, we
can block thi
Hi,
I tried MoreLikeThis (StandardRequestHandler with mlt arguments)
with a single solr server and it works fine. However, when I tried
the same query with sharded servers, I don't get the moreLikeThis
key in the results.
So my question is: is MoreLikeThis with StandardRequestHandler
supported on
I'd look into faceting and run a test.
Create a schema, index the data, and then run a query for *:* faceted by
hotel to get a list of all the hotels you want, followed by a query that
returns all documents matching that hotel for your 2nd use case.
You're probably still going to want a SQL dat