On Thu, Sep 17, 2009 at 11:19 AM, Rahul R wrote:
> Shalin,
> Can you please elaborate a little more on the third response
> *You can send the location's value directly as the value of the text
> field.*
> I don't follow. I am adding 'name' and 'age' to the 'text' field through the
> schema. If I a
Shalin,
Can you please elaborate a little more on the third response
*You can send the location's value directly as the value of the text field.*
I don't follow. I am adding 'name' and 'age' to the 'text' field through the
schema. If I add the 'location' from the program, will either one copy
(schem
2009/9/17 Noble Paul നോബിള് नोब्ळ् :
> It is possible to have a sub-entity which uses XPathEntityProcessor
> and which can use the link as the url.
This may not be a good solution.
But you can use the $hasMore and $nextUrl options of
XPathEntityProcessor to recursively loop if there are more links
>
I would use the RSS feed (hopefully in Atom format) as a source of
links, then use a regular web spider to fetch the content.
I seriously doubt that DIH is up to the task of general fetching from
the Wild Wild Web. That is a dirty and difficult job, and DIH is
designed for cooperating data sources.
I have opened an issue SOLR-1440
On Thu, Sep 17, 2009 at 2:46 AM, wojtekpia wrote:
>
> Note that if I change my import file to explicitly list all my files (instead
> of using the FileListEntityProcessor) as below then everything works as I
> expect.
>
> basePath="d:\my\directory\"/>
>
>
It is possible to have a sub-entity which uses XPathEntityProcessor
and which can use the link as the url.
On Thu, Sep 17, 2009 at 8:57 AM, Grant Ingersoll wrote:
> Many RSS feeds contain a link to some full article. How can I have the
> DIH get the RSS feed and then have it go and fetch the content at the link?
Many RSS feeds contain a link to some full article. How can I have
the DIH get the RSS feed and then have it go and fetch the content at
the link?
Thanks,
Grant
On a quick look, it looks like this was caused (or at least triggered) by
https://issues.apache.org/jira/browse/SOLR-1427
Registering the bean in the SolrCore constructor causes it to
immediately turn around and ask for the stats which asks for a
searcher, which blocks.
-Yonik
http://www.lucidima
Hi,
I am testing EmbeddedSolrServer vs StreamingUpdateSolrServer for my
crawlers using more or less recent Solr code and everything was fine
till today when I took the latest trunk code.
When I start my crawler I see a number of INFO outputs
2009-09-16 21:08:29,399 INFO Adding
component:org.apa
Note that if I change my import file to explicitly list all my files (instead
of using the FileListEntityProcessor) as below then everything works as I
expect.
...
Hi,
I am a newbie to Solr and request you all to kindly excuse any rookie
mistakes.
I have the following questions:
We use the Synonym Filter on one of the indexed fields. It is specified in
schema.xml as one of the filters (for the analyzer of type "index") for that
field. I believe that this
Fergus McMenemie-2 wrote:
>
>
> Can you provide more detail on what you are trying to do? ...
> You seem to be listing all files "d:\my\directory\.*WRK". Do
> these WRK files contain lists of files to be indexed?
>
>
That is my complete data config file. I have a directory containing a bunch
On Wed, Sep 16, 2009 at 1:28 PM, Germán Biozzoli
wrote:
> Hello everybody
>
> I think this is a dumb question, but I can't find the way to obtain
> some facets without a specific query. What I need is what is
> implemented, for instance, in Blacklight's OPAC: a first screen that
> shows most common
>Hi,
>
>I'm trying to import data from a list of files using the
>FileListEntityProcessor. Here is my import configuration:
>
>
>
>baseDir="d:\my\directory\" fileName=".*WRK" recursive="false"
>rootEntity="false">
> processor="LineEntityProcessor"
>url="${f.fileAbsolute
Hello everybody
I think this is a dumb question, but I can't find the way to obtain
some facets without a specific query. What I need is what is
implemented, for instance, in Blacklight's OPAC: a first screen that
shows the most common document types, the most common subjects, and so
on. I'm planning
Hi,
Does anybody know how to get the highlighted field when the q term matches in a
stemmed or n-grammed filtered field?
When matching in a normal field (not stemmed or n-grammed), highlighting works
perfectly as expected. But in stemmed matching cases, no highlighting fields
are returned, and in n-gramming
I just started using the terms/autosuggest service. The application needs the
document details along with the result items. What params do I need to use to
fetch the document details?
Hi,
I'm trying to import data from a list of files using the
FileListEntityProcessor. Here is my import configuration:
If I have only one file in d:\my\directory\ then everything works correctly.
If I have multiple files then I get the following exception:
Sep
Thank you Ahmet.
I forgot to encapsulate the searched string in quotation marks.
On Sep 15, 2009, at 5:19 PM, AHMET ARSLAN wrote:
--- On Tue, 9/15/09, Jonathan Vanasco wrote:
From: Jonathan Vanasco
Subject: faceted query not working as i expected
To: solr-user@lucene.apache.org
Date: Tuesday,
> You can enable/disable stemming per field type in the schema.xml, by
> removing the stemming filters from the type definition.
>
> Basically, copy your preferred type, rename it to something like
> 'text_nostem', remove the stemming filter from the type, and use your
> 'text_nostem' type for your
This could be achieved purely client-side if all you're talking about
is a stored field (not indexed/searchable). The client-side could
encrypt and encode the encrypted bits as text that Solr/Lucene can
store. Then decrypt client-side.
Erik
On Sep 16, 2009, at 10:39 AM, Jay Hill
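For illustration, here is a minimal sketch of the client-side approach Erik
describes, assuming an AES key managed entirely by the client; the class name,
key handling (a raw 16-byte key), and cipher mode are placeholders rather than
anything Solr provides. Only the Base64-encoded ciphertext is sent to Solr as
the value of a stored-only field, and it is decrypted again after retrieval:

import javax.crypto.Cipher;
import javax.crypto.spec.SecretKeySpec;
import org.apache.commons.codec.binary.Base64;

// Sketch: encrypt a sensitive value before indexing and decrypt it after
// retrieval. Solr only ever sees the Base64-encoded ciphertext.
public class FieldCrypter {
    private final SecretKeySpec key;

    public FieldCrypter(byte[] rawAesKey) {      // assumes a 16-byte AES key
        this.key = new SecretKeySpec(rawAesKey, "AES");
    }

    // Returns Base64 text that is safe to store in a stored-only Solr field.
    public String encrypt(String plainText) throws Exception {
        Cipher cipher = Cipher.getInstance("AES");
        cipher.init(Cipher.ENCRYPT_MODE, key);
        byte[] encrypted = cipher.doFinal(plainText.getBytes("UTF-8"));
        return new String(Base64.encodeBase64(encrypted), "UTF-8");
    }

    // Reverses encrypt() on the client after the document is retrieved.
    public String decrypt(String base64CipherText) throws Exception {
        Cipher cipher = Cipher.getInstance("AES");
        cipher.init(Cipher.DECRYPT_MODE, key);
        byte[] decoded = Base64.decodeBase64(base64CipherText.getBytes("UTF-8"));
        return new String(cipher.doFinal(decoded), "UTF-8");
    }
}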
That's certainly something that is doable with a filter. I am not aware of
any available.
Bill
On Wed, Sep 16, 2009 at 10:39 AM, Jay Hill wrote:
> For security reasons (say I'm indexing very sensitive data, medical records
> for example) is there a way to encrypt data that is stored in Solr? S
Thanks guys...
Yonik and Grant commented on this thread in the dev group.
Dan
Chris Hostetter wrote:
: I would like to add an additional name:value pair for every line, mapping the
: sku field to my schema's id field:
:
: .map={sku.field}:{id}
the map param is for replacing a *value* with a
For security reasons (say I'm indexing very sensitive data, medical records
for example) is there a way to encrypt data that is stored in Solr? Some
businesses I've encountered have such needs and this is a barrier to them
adopting Solr to replace other legacy systems. Would it require a
custom-wri
Just FYI - you can put Solr plugins in the Solr home's lib directory as JAR
files rather than messing with solr.war
Erik
On Sep 16, 2009, at 10:15 AM, Alexey Serba wrote:
Hi Aaron,
You can override the default Lucene Similarity and disable the tf and
lengthNorm factors in the scoring formula ( see
http://lucene.apache
Hi Aaron,
You can override the default Lucene Similarity and disable the tf and
lengthNorm factors in the scoring formula ( see
http://lucene.apache.org/java/2_4_1/api/org/apache/lucene/search/Similarity.html
and http://lucene.apache.org/java/2_4_1/api/index.html )
You need to
1) compile the following clas
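For reference, a minimal sketch of such a Similarity against the Lucene 2.4
API linked above; the class name is a placeholder, and this is only one way to
neutralize tf and lengthNorm:

import org.apache.lucene.search.DefaultSimilarity;

// Sketch: a Similarity in which term frequency and field length no longer
// influence the score.
public class NoTfNoLengthNormSimilarity extends DefaultSimilarity {

    // Any number of occurrences of a term counts the same as one.
    public float tf(float freq) {
        return freq > 0 ? 1.0f : 0.0f;
    }

    // Short fields are not favored over long ones.
    public float lengthNorm(String fieldName, int numTokens) {
        return 1.0f;
    }
}

The compiled class can then be registered globally in schema.xml with a
<similarity class="..."/> element pointing at it.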
Also, Solr simplifies the process of implementing the client-side interface:
you can use the same indices with clients written in virtually any programming
language of your choosing.
If you were to work directly with Lucene, that would not be the case.
You will need to get SolrIndexSearcher.java and modify the following:
public static final int GET_SCORES = 0x01;
--Rajan
On Wed, Sep 16, 2009 at 6:58 PM, Shashikant Kore wrote:
> No, I don't wish to put a custom Similarity. Rather, I want an
> equivalent of HitCollector where I
balaji.a wrote:
> Hi All,
> I am aware that Solr internally uses Lucene for search and indexing. But
> it would be helpful if somebody could explain the Solr features that are not
> provided by Lucene.
>
> Thanks,
> Balaji.
>
Any advanced Lucene application generally goes down the same path:
Buil
Comparing Solr to Lucene is not exactly an apples-to-apples comparison.
Solr is a superset of Lucene. It uses the Lucene engine to index and process
requests for data retrieval.
Start here first:
http://lucene.apache.org/solr/features.html#Solr+Uses+the+Lucene+Search+Library+and+Extends+it
On Sep 16, 2009, at 9:26 AM, balaji.a wrote:
Hi All,
I am aware that Solr internally uses Lucene for search and indexing. But
it would be helpful if somebody could explain the Solr features that are
not provided by Lucene.
Solr is a server, Lucene is an API
Faceting
Distributed search
Rep
http://people.apache.org/builds/lucene/solr/nightly/
On Wed, Sep 16, 2009 at 6:42 PM, KirstyS wrote:
>
> mmm..can't seem to find the link..could you help?
>
>
> Noble Paul നോബിള് नोब्ळ्-2 wrote:
>>
>> yeah, not yet released but going to be released pretty soon
>>
>> On Wed, Sep 16, 2009 at 6:32
No, I don't wish to put a custom Similarity. Rather, I want an
equivalent of HitCollector where I can bypass the scoring altogether.
And I prefer to do it by changing the configuration.
--shashi
On Wed, Sep 16, 2009 at 6:36 PM, rajan chandi wrote:
> You might be talking about modifying the simi
Hi All,
I am aware that Solr internally uses Lucene for search and indexing. But
it would be helpful if somebody could explain the Solr features that are not
provided by Lucene.
Thanks,
Balaji.
mmm..can't seem to find the link..could you help?
Noble Paul നോബിള് नोब्ळ्-2 wrote:
>
> yeah, not yet released but going to be released pretty soon
>
> On Wed, Sep 16, 2009 at 6:32 PM, KirstyS wrote:
>>
>> I thought 1.4 was not released yet?
>>
>>
>> Noble Paul നോബിള് नोब्ळ्-2 wrote:
>>>
yeah, not yet released but going to be released pretty soon
On Wed, Sep 16, 2009 at 6:32 PM, KirstyS wrote:
>
> I thought 1.4 was not released yet?
>
>
> Noble Paul നോബിള് नोब्ळ्-2 wrote:
>>
>> I vaguely remember there was an issue with delta-import in 1.3. could
>> you try it out with Solr1.4
You might be talking about modifying the Similarity object to change the
scoring formula in Lucene:
searcher.setSimilarity(similarity);
writer.setSimilarity(similarity);
This can very well be done in Solr, as SolrIndexWriter inherits from the
Lucene IndexWriter class.
You might want to download
I thought 1.4 was not released yet?
Noble Paul നോബിള് नोब्ळ्-2 wrote:
>
> I vaguely remember there was an issue with delta-import in 1.3. could
> you try it out with Solr1.4
>
> On Wed, Sep 16, 2009 at 6:14 PM, KirstyS wrote:
>>
>> I hope this is the correct place to post this issue and if
I vaguely remember there was an issue with delta-import in 1.3. Could
you try it out with Solr 1.4?
On Wed, Sep 16, 2009 at 6:14 PM, KirstyS wrote:
>
> I hope this is the correct place to post this issue and if so, that someone
> can help.
> I am using the DIH with Solr 1.3
> My data-config.xml fil
I hope this is the correct place to post this issue and if so, that someone
can help.
I am using the DIH with Solr 1.3
My data-config.xml file looks like this:
I have tried casting the dataimporter.last_index_time and the other date
fields, to no avail. My full import works perfectly but I cannot
On Tue, Sep 15, 2009 at 8:11 PM, Paul Rosen wrote:
>
> The second issue was detailed in an email last week "shards and facet
> count". The facet information is lost when doing a search over two shards,
> so if I use multicore, I can no longer have facets.
>
If both cores' schemas are the same and a uni
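In case it helps, a distributed request is just a normal query with a shards
parameter, and facet.field counts are merged across the shards when the cores
share a schema. A hedged SolrJ sketch, where the host, core names, and the
facet field are placeholders:

import org.apache.solr.client.solrj.SolrQuery;
import org.apache.solr.client.solrj.SolrServer;
import org.apache.solr.client.solrj.impl.CommonsHttpSolrServer;
import org.apache.solr.client.solrj.response.QueryResponse;

// Sketch: a faceted distributed search across two cores of one multicore
// instance. The URL, core names, and the facet field are placeholders.
public class ShardedFacets {
    public static void main(String[] args) throws Exception {
        SolrServer server = new CommonsHttpSolrServer("http://localhost:8983/solr/core0");

        SolrQuery q = new SolrQuery("*:*");
        q.set("shards", "localhost:8983/solr/core0,localhost:8983/solr/core1");
        q.setFacet(true);
        q.addFacetField("genre");

        QueryResponse rsp = server.query(q);
        System.out.println(rsp.getFacetField("genre").getValues());
    }
}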
Fergus,
Implementing wildcard (//tagname) is definitely possible. I would love
to see it working. But if you wish to take a dig at it I shall do
whatever I can to help.
>What is the use case that makes flow-through so useful?
We do not know which forEach xpath a given field is associated with
Thanks, Abhay.
Can someone please throw light on how to disable scoring?
--shashi
On Wed, Sep 16, 2009 at 11:55 AM, abhay kumar wrote:
> Hi,
>
> 1) Solr has various types of caches. We can specify how many documents a
> cache can hold at a time.
> e.g. if windowsize=50
> 50 results
Hi Chantal,
Chantal Ackermann wrote:
>
> Have you had a look at the facet query? Not sure but it might just do
> what you are looking for.
>
> http://wiki.apache.org/solr/SolrFacetingOverview
> http://wiki.apache.org/solr/SimpleFacetParameters
>
I still don't really understand faceting. Bu
Have you had a look at the facet query? Not sure but it might just do
what you are looking for.
http://wiki.apache.org/solr/SolrFacetingOverview
http://wiki.apache.org/solr/SimpleFacetParameters
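One common way to get facets for the whole index (the "first screen" case
described above) is to facet on a match-all query with rows=0, i.e.
q=*:*&rows=0&facet=true&facet.field=... A hedged SolrJ sketch, where the Solr
URL and the field names are placeholders:

import org.apache.solr.client.solrj.SolrQuery;
import org.apache.solr.client.solrj.SolrServer;
import org.apache.solr.client.solrj.impl.CommonsHttpSolrServer;
import org.apache.solr.client.solrj.response.QueryResponse;

// Sketch: facet counts over every document, without any user query.
public class AllDocsFacets {
    public static void main(String[] args) throws Exception {
        SolrServer server = new CommonsHttpSolrServer("http://localhost:8983/solr");

        SolrQuery q = new SolrQuery("*:*");
        q.setRows(0);                      // only the facet counts are needed
        q.setFacet(true);
        q.addFacetField("doc_type", "subject");
        q.setFacetMinCount(1);

        QueryResponse rsp = server.query(q);
        System.out.println(rsp.getFacetFields());
    }
}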
Hi All,
Should I create a plugin for this, or is there some functionality in Solr that
can help me?
I'll try, thanks Martijn
2009/9/16 Martijn v Groningen
> Hi Licinio,
>
> You can use ClientUtils.toSolrInputDocument(...), that converts a
> SolrDocument to a SolrInputDocument.
>
> Martijn
>
> 2009/9/16 Licinio Fernández Maurelo :
> > Hi there,
> >
> > currently i'm working on a small app which
Hi Licinio,
You can use ClientUtils.toSolrInputDocument(...), which converts a
SolrDocument into a SolrInputDocument.
Martijn
2009/9/16 Licinio Fernández Maurelo :
> Hi there,
>
> currently i'm working on a small app which creates an Embedded Solr Server,
> reads all documents from one core and put
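A minimal sketch of the core-to-core copy Martijn describes, assuming an
already-initialized CoreContainer and cores named "core0" and "core1"
(placeholders); note that only stored fields survive this round trip:

import org.apache.solr.client.solrj.SolrQuery;
import org.apache.solr.client.solrj.SolrServer;
import org.apache.solr.client.solrj.embedded.EmbeddedSolrServer;
import org.apache.solr.client.solrj.util.ClientUtils;
import org.apache.solr.common.SolrDocument;
import org.apache.solr.core.CoreContainer;

// Sketch: copy every document from one core into another using embedded servers.
public class CoreCopy {
    public static void copyAll(CoreContainer container) throws Exception {
        SolrServer source = new EmbeddedSolrServer(container, "core0");
        SolrServer target = new EmbeddedSolrServer(container, "core1");

        SolrQuery all = new SolrQuery("*:*");
        all.setRows(Integer.MAX_VALUE);   // fine for a small index; page for big ones
        for (SolrDocument doc : source.query(all).getResults()) {
            target.add(ClientUtils.toSolrInputDocument(doc));
        }
        target.commit();
    }
}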
After re-indexing it works very well ! Thanks a lot !
Vincent
2009/9/16 Vincent Pérès
>
> Hello,
>
> I tried to replace the class as you suggested, but I still get the same
> result (and not results that start with the given query).
>
>
Make sure you re-index your documents after changing the schema.
--
Regards,
Shalin Shekhar Mangar.
Hi there,
Currently I'm working on a small app which creates an EmbeddedSolrServer,
reads all documents from one core, and puts these docs into another one.
The purpose of this app is to apply (small) schema.xml changes to indexed
data (offline), resulting in a new index with documents updated to
Hello,
I tried to replace the class as you suggested, but I still get the same
result (and not results that start with the given query).
Instead of NGramFilterFactory, use EdgeNGramFilterFactory.
Cheers
Avlesh
2009/9/16 Vincent Pérès
>
> Hello,
>
> I'm using the following code for my autocomplete feature :
>
> The field type :
>
>
>
>
>
> minGramSize="2" />
>
>
>
>
> pattern="^(.{20})(.*)?" replacemen
Hello,
I'm using the following code for my autocomplete feature:
The field type:
The field:
The query:
?q=*:*&fq=query_ac:harry*&wt=json&rows=15&start=0&fl=*&indent=on&fq=model:SearchQuery
It gi
On Mon, Sep 14, 2009 at 5:12 PM, Rahul R wrote:
> Hello,
> I have a few questions regarding the copyField directive in schema.xml
>
> 1. Does the destination field store a reference or the actual data ?
>
It makes a copy. Storing or indexing of the field depends on the field
configuration.
> I
Hi All,
Should I create a plugin for this, or is there some functionality in Solr that
can help me?
I basically already have part of what I want: the search response gives me a
result list with (in my situation) 20 results and the attached morelikethis
NamedList. Filtering based on the morelikethis