: I suspect this has something to do with waiting for the searcher to
: warm and switch over (?). Though, I'm confused because when I print
: out /solr/admin/registry.jsp, the hashcode of the Searcher changes
: immediately (as the commit docs say, the commit operation blocks by
: default until a new searcher is opened and registered).
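For reference, a commit posted to the update handler can set these flags explicitly (shown here with their default values; this is a sketch, not the poster's actual request):

```xml
<!-- POSTed to /solr/update; waitSearcher="true" blocks the request until
     the new searcher is opened and registered, which matches the hashcode
     changing immediately in /solr/admin/registry.jsp -->
<commit waitFlush="true" waitSearcher="true"/>
```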
On Fri, Mar 6, 2009 at 10:13 AM, Ashish P wrote:
>
> What document types (MIME types) are supported for indexing
> and searching in Solr?
Solr tries to be agnostic about how the content being indexed is being
created. For Solr everything is a string (or number). It is up to you t
hmm. I think I will just do that.
Thanks for clearing my doubt...
-Ashish
Shalin Shekhar Mangar wrote:
>
> On Fri, Mar 6, 2009 at 10:53 AM, Ashish P
> wrote:
>
>>
>> OK. so basically what you are saying is when you use copyField, it will
>> copy
>> all the data from one field to many other
: But actually, something like that only works for text field types that
: you can specify an analyzer for. To sort by the integer value, you
: need an integer field.
we should really fix that so any FieldType can have an analyzer and treat
the Tokens produced just like multivalued fields are treated.
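Until then, the usual workaround is a sortable integer field fed by copyField; a sketch against a 1.3-style schema.xml (the field names are invented for illustration):

```xml
<!-- tokenized field for searching, sortable integer field for sorting -->
<field name="size" type="text" indexed="true" stored="true"/>
<field name="size_sort" type="sint" indexed="true" stored="false"/>

<!-- copyField copies the raw input value, before analysis -->
<copyField source="size" dest="size_sort"/>
```

Sorting then uses sort=size_sort while queries still hit the analyzed field.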
: 1. I am trying to customize the search with the q query parameter, so that
: it can support wildcards and field boosting. I customized QueryParser and
: created the wildcard query the same way as it does for non-wildcard. But
: even with this changed query, the results are not showing up.
:
: I f
On Fri, Mar 6, 2009 at 10:53 AM, Ashish P wrote:
>
> OK. so basically what you are saying is when you use copyField, it will
> copy
> all the data from one field to many other fields but it cannot copy part
> of the data to another field.
Yes, it will try to copy all the data.
>
> Because within
On Fri, Mar 6, 2009 at 11:04 AM, Sagar Khetkade
wrote:
>
> I have a multi-core scenario where the schemas are different and I have to
> search these cores as per the use case. I am using the distributed search
> approach here for getting the search results for the query from these cores.
Distribut
Hi Hoss,
But I cannot find any documentation about the integration of Nutch and Solr
anywhere. Could you give me some clue? thanks
Tony
On Thu, Mar 5, 2009 at 11:14 PM, Chris Hostetter
wrote:
>
> : with Solr. What crawler do you guys use? Or did you code one yourself? I
>
> neither -- i've never ind
: with Solr. What crawler do you guys use? Or did you code one yourself? I
neither -- i've never indexed "crawled" data with Solr, i only ever index
structured data in one form or another.
(the closest i've ever come to using a crawler with Solr is some ant tasks
i whipped up one day to recursi
Erik,
Thanks for the information. I understand that it is revolving around
q/q.alt/dismax but as per my need, I have to do some customization and I
have to use dismax for the same. That's the reason I keep asking different
questions about the same thing.
Below is the dismax configuration from solrconfig.xml
Hi,
I have a multi-core scenario where the schemas are different and I have to
search these cores as per the use case. I am using the distributed search
approach here for getting the search results for the query from these cores.
But there is an obstacle. I have used the EmbeddedSolrServer class of
OK. so basically what you are saying is when you use copyField, it will copy
all the data from one field to many other fields but it cannot copy part
of the data to another field.
Because within the same tokenizing (when I am tokenizing the "condition"
field) I want part of the data to go into the content field an
It works.
Thanks,
Ashish
Shalin Shekhar Mangar wrote:
>
> On Fri, Mar 6, 2009 at 7:03 AM, Ashish P wrote:
>
>>
>> I want to search on a single date field
>> e.g. q=creationDate:2009-01-24T15:00:00.000Z&rows=10
>>
>> But I think the query gets terminated after T15 as ':' (colon) is taken
>> as
>
But I think this question should remain on the Solr user mailing list, as I
am interested in finding a crawler that works with Solr, and it doesn't
necessarily have to be Nutch.
Tony
On Thu, Mar 5, 2009 at 9:07 PM, Otis Gospodnetic wrote:
>
> Tony,
>
> I suggest you pick one place to get help with Nu
Thanks Otis.
On Thu, Mar 5, 2009 at 9:07 PM, Otis Gospodnetic wrote:
>
> Tony,
>
> I suggest you pick one place to get help with Nutch+Solr, since it looks
> like you are jumping between lists and not having much luck. :)
> I suggest you stick to the Nutch list, since that's where the integration
> is coming from.
What document types (MIME types) are supported for indexing and searching
in Solr?
--
View this message in context:
http://www.nabble.com/supported-document-types-tp22366114p22366114.html
Sent from the Solr - User mailing list archive at Nabble.com.
Tony,
I suggest you pick one place to get help with Nutch+Solr, since it looks like
you are jumping between lists and not having much luck. :)
I suggest you stick to the Nutch list, since that's where the integration is
coming from.
Otis
--
Sematext -- http://sematext.com/ -- Lucene - Solr -
On Fri, Mar 6, 2009 at 7:40 AM, Ashish P wrote:
>
> I have a multi-valued field as follows:
>
> <field name="condition" ...>
>
> I want to index the data from this field into the following fields
>
>
>
> How can this be done?? Any ideas...
Use a copyField (look at the schema shipped with Solr for an example).
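As a rough sketch (the destination field names are invented for illustration; the real ones depend on your schema), copyField declarations in schema.xml look like:

```xml
<!-- copy the raw value of "condition" into two other fields at index time -->
<copyField source="condition" dest="content"/>
<copyField source="condition" dest="condition_text"/>
```

Note that copyField always copies the entire input value to each destination; it cannot route parts of a value to different fields.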
On Fri, Mar 6, 2009 at 7:03 AM, Ashish P wrote:
>
> I want to search on a single date field
> e.g. q=creationDate:2009-01-24T15:00:00.000Z&rows=10
>
> But I think the query gets terminated after T15 as ':' (colon) is taken
> as
> a termination character.
>
> Any ideas on how to search on a single date
That's exactly what I'm doing, but I'm explicitly replicating, and
committing. Even under these circumstances, what could explain the
delay after commit before the new index becomes available?
On Thu, Mar 5, 2009 at 10:55 AM, Shalin Shekhar Mangar
wrote:
> On Thu, Mar 5, 2009 at 10:30 PM, Steve
Hi Nick -
Could you please teach me a little bit how to make Nutch work for Solr?
Thanks a lot!
Tony
On Thu, Mar 5, 2009 at 5:01 PM, Nick Tkach wrote:
> Yes, Nutch works quite well as a crawler for Solr.
>
> - Original Message -
> From: "Tony Wang"
> To: solr-user@lucene.apache.org
>
I have a multi-valued field as follows:
I want to index the data from this field into the following fields
How can this be done?? Any ideas...
I want to search on a single date field
e.g. q=creationDate:2009-01-24T15:00:00.000Z&rows=10
But I think the query gets terminated after T15 as ':' (colon) is taken as
a termination character.
Any ideas on how to search on a single date, or, for that matter, if the
query data contains a colon, how to sea
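One common fix is to backslash-escape the query-parser metacharacters before building q. SolrJ's ClientUtils.escapeQueryChars does essentially this; the helper below is a hypothetical stand-in covering a representative subset of the special characters:

```java
public class QueryEscape {
    // Backslash-escape characters the Lucene/Solr query parser treats
    // specially (a representative subset, including ':')
    static String escape(String value) {
        StringBuilder sb = new StringBuilder();
        for (char c : value.toCharArray()) {
            if ("\\+-!():^[]\"{}~*?|&;".indexOf(c) >= 0) {
                sb.append('\\');
            }
            sb.append(c);
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        // colons (and hyphens) come out escaped, so the term parses as one unit
        System.out.println("creationDate:" + escape("2009-01-24T15:00:00.000Z"));
    }
}
```

Wrapping the value in quotes, q=creationDate:"2009-01-24T15:00:00.000Z", is also commonly suggested for untokenized fields.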
On 5-Mar-09, at 6:47 AM, Yonik Seeley wrote:
This morning, an apparently over-zealous marketing firm, on behalf of
the company I work for, sent out a marketing email to a large number
of subscribers of the Lucene email lists. This was done without my
knowledge or approval, and I can assure you
Yes, Nutch works quite well as a crawler for Solr.
- Original Message -
From: "Tony Wang"
To: solr-user@lucene.apache.org
Sent: Thursday, March 5, 2009 5:32:57 PM GMT -06:00 US/Canada Central
Subject: what crawler do you use for Solr indexing?
Hi,
I wonder if there's any open source cra
We are using Heritrix, the Internet Archive’s open source crawler, which is
very easy to extend. We have augmented it with a custom parser to crawl some
specific data formats and coded our own processors (Heritrix’s terminology for
extensions) to link together different data sources as well as t
Hi,
I wonder if there's any open source crawler product that could be integrated
with Solr. What crawler do you guys use? Or did you code one yourself? I
have been trying to find solutions for Nutch/Solr integration, but
haven't had any luck yet.
Could someone shed some light on this?
thanks!
Tony
Thanks. Can you recommend a build I can try?
On Thu, Mar 5, 2009 at 3:09 PM, Marc Sturlese wrote:
>
> I am not sure if RollBackUpdateCommand was yet developed in the official solr
> 1.3 release. I think it's just in the nightly builds. Looks like your
> dataimport package is too new. I think you should try to use that dataimport
> release with a solr nightly or try to grab an older dataimport release.
I am not sure if RollBackUpdateCommand was yet developed in the official solr
1.3 release. I think it's just in the nightly builds. Looks like your
dataimport package is too new. I think you should try to use that dataimport
release with a solr nightly or try to grab an older dataimport release.
If you want this to switch groups, let me know...
...but it would be good to know benchmarking like in ActiveRecord so that you
know, for a request, how much time was spent where.
ActiveRecord logs look like:
JournalAccess Columns (0.107016) SHOW FIELDS FROM `journal_accesses`
Transform Load (0.1
I tried updating the solr instance I'm testing DIH with, adding the
dataimport and slf4j jar files to solr.
When I start solr, I get the following error. Is there something else
which needs to be installed for the nightly build version of DIH to
work in solr release 1.3?
Thanks,
Tim
java.l
Hi,
At Pubget we are also happy with jetty (distributed over a number of shards
and just adding more this week).
Just search around for a good init.d script to start it up, and we use monit
to keep it up:
init.d snippet:
START_COMMAND="java -Dsolr.data.dir=/solr8983 -Djetty.port=8983
-DSTOP.POR
I agree that remapping params is not that much fun. I certainly vote for
just passing them through and it will be easier to keep up with the latest
as well.
I created:
https://issues.apache.org/jira/browse/SOLR-1047
Let me know if there is something else I can do to help.
On Thu, Mar 5, 2009 at
First, note we have a ruby-...@lucene.apache.org list which focuses
primarily on the solr-ruby library, flare, and other Ruby specific
things. But this forum is as good as any, though I'm CC'ing ruby-dev
too.
On Mar 5, 2009, at 12:59 PM, Ian Connor wrote:
Is there a way to specify the face
Performance comparison link:
- "Jetty vs Tomcat: A Comparative Analysis". prepared by Greg Wilkins
- May, 2008.
http://www.webtide.com/choose/jetty.jsp
2009/3/5 Erik Hatcher :
> That being said... I don't think there is a strong reason to go out of your
> way to install Tomcat and do the addition
That being said... I don't think there is a strong reason to go out of
your way to install Tomcat and do the additional config. I'd say just
use Jetty until you have some other reason not to.
http://www.lucidimagination.com/search is currently powered by Jetty,
and we have no plans to swit
The jetty vs tomcat vs resin vs whatever question pretty much comes
down to what you are comfortable running/managing.
Solr tries its best to stay container agnostic.
On Mar 5, 2009, at 1:55 PM, Jonathan Haddad wrote:
Is there any compelling reason to use tomcat instead of jetty if all
we're doing is using solr? We don't use tomcat anywhere else.
On Thu, Mar 5, 2009 at 11:45 PM, Erik Hatcher wrote:
>
> On Mar 5, 2009, at 1:07 PM, Suryasnat Das wrote:
>
>> I have some queries on Solr for which I need immediate resolution. Fast
>> help would be greatly appreciated.
>>
>> a.) We know that fields are also indexed. So can we index some specifi
Is there any compelling reason to use tomcat instead of jetty if all
we're doing is using solr? We don't use tomcat anywhere else.
--
Jonathan Haddad
http://www.rustyrazorblade.com
On Thu, Mar 5, 2009 at 10:30 PM, Steve Conover wrote:
> Yep, I notice the default is true/true, but I explicitly specified
> both those things too and there's no difference in behavior.
>
Perhaps you are indexing on the master and then searching on the slaves? It
may be the delay introduced by replication.
On Thu, Mar 5, 2009 at 10:09 PM, Radha C. wrote:
> I want to fully understand what is configured and how to configure it, so I
> tried to index my local MySQL DB directly with one simple table called
> persons in it.
> But I am getting a lot of errors. The following are the steps I did:
> 1. downlo
On Mar 5, 2009, at 1:07 PM, Suryasnat Das wrote:
I have some queries on Solr for which I need immediate resolution. Fast
help would be greatly appreciated.
a.) We know that fields are also indexed. So can we index some specific
fields (like author, id, etc.) first and then do the indexing fo
Hi,
I have some queries on Solr for which I need immediate resolution. Fast
help would be greatly appreciated.
a.) We know that fields are also indexed. So can we index some specific
fields (like author, id, etc.) first and then do the indexing for the rest
of the fields (like creation date etc.) at a l
Hi,
Is there a way to specify the facet.method using solr-ruby? I tried to add
it like this:
hash["facet.method"] = @params[:facets][:method] if
@params[:facets][:method]
to line 78 of standard.rb and it works when you add it to the facets Hash.
However, if there is another place that I co
We will be using SQLite for the DB. This can be used for a CD version where
we need to provide search.
On 3/5/09, Grant Ingersoll wrote:
>
>
> On Mar 5, 2009, at 3:10 AM, revas wrote:
>
> Hi,
>>
>> I have a requirement where i need to search offline. We are thinking of
>> doing
>> this by storing the
yes, the dataimport.properties file is present in the conf directory
from previous imports. I'll try the trunk version as you suggested to
see if the problem persists.
Thanks,
Tim
On Wed, Mar 4, 2009 at 7:54 PM, Noble Paul നോബിള് नोब्ळ्
wrote:
> the dataimport.properties is created only after
Hi,
Index titles as "string" type. But this will completely prevent you from being
able to match "stop watch" when you search for "stop". This is where field
copying can help.
Otis
--
Sematext -- http://sematext.com/ -- Lucene - Solr - Nutch
- Original Message
> From: stockm
>
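The field copying Otis mentions can be sketched as a schema.xml fragment (the field names here are invented for illustration):

```xml
<!-- tokenized field for word matches, untokenized copy for exact matches -->
<field name="title" type="text" indexed="true" stored="true"/>
<field name="title_exact" type="string" indexed="true" stored="false"/>
<copyField source="title" dest="title_exact"/>
```

Queries against title will match "stop" inside "stop watch", while title_exact only matches the whole "stop watch" string.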
Yep, I notice the default is true/true, but I explicitly specified
both those things too and there's no difference in behavior.
On Wed, Mar 4, 2009 at 7:39 PM, Shalin Shekhar Mangar
wrote:
> On Thu, Mar 5, 2009 at 6:06 AM, Steve Conover wrote:
>
>> I'm doing some testing of a solr master/slave c
Shalin,
I did not run the examples because in the demo everything is already
configured and built in, so the demo will run properly.
So I am not clear about how the example works and what configuration was
done for it, or where to start if I write a small new database
search.
I want to
On Mar 5, 2009, at 3:10 AM, revas wrote:
Hi,
I have a requirement where i need to search offline. We are thinking
of doing
this by storing the index terms in a db .
I'm not sure I follow. How is it that Solr would be offline, but your
DB would be online? Can you explain a bit more the
On Thu, Mar 5, 2009 at 4:50 AM, Fouad Mardini wrote:
> Thanks for your help, but I am not really sure I follow.
> It is possible to use the PatternTokenizerFactory with pattern = (\d+) and
> group = 0 to tokenize the input correctly
> But I don't see how to use the copyField to achieve sorting
>
On Mar 5, 2009, at 8:31 AM, dabboo wrote:
: I am implementing column-specific search with the q query parameter. I
: have
: achieved the same, but field boosting is not working in it.
Below is the query which is getting formed for this URL:
/?q=productURL_s:amit%20OR
%20prdMainTitle_s:amitg
&versio
You have to download the source, replace the Lucene .jars and recompile it.
But be careful, I tried to run a recent solr nightly build with the official
lucene release 2.4 and got compilation errors. It was due to some new
features of the IndexDeletionPolicies that are just available in lucene
This sounds like something that should be done with SQL on
a relational database. --wunder
On 3/5/09 1:41 AM, "Ron Chan" wrote:
> Hi
>
> I'm looking to build summary reports, something like
>
>jan feb mar total
> branch A
> branch B
> branch C
>
> should I search for the raw d
Yonik,
Thank you for your email. I appreciate and accept your apology.
Indeed the spam was annoying, but I think that you and your colleagues
have significant social capital in the Lucene and Solr communities, so
this minor but unfortunate incident should have minimal impact.
That said, you and
This morning, an apparently over-zealous marketing firm, on behalf of
the company I work for, sent out a marketing email to a large number
of subscribers of the Lucene email lists. This was done without my
knowledge or approval, and I can assure you that I'll make all efforts
to prevent it from happening
Hi,
I am implementing column-specific search with the q query parameter. I have
achieved the same, but field boosting is not working in it.
Below is the query which is getting formed for this URL:
/?q=productURL_s:amit%20OR%20prdMainTitle_s:amitg&version=2.2&start=0&rows=10&indent=on&qt=dismaxrequ
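For dismax, per-field boosts normally go in the qf parameter rather than inside q; a hedged sketch of a 1.3-style handler definition (the boost values are illustrative, and only the handler name is taken from the qt=dismaxrequest in the URL above):

```xml
<requestHandler name="dismaxrequest" class="solr.SearchHandler">
  <lst name="defaults">
    <str name="defType">dismax</str>
    <!-- boost title matches over URL matches; values are made up -->
    <str name="qf">prdMainTitle_s^2.0 productURL_s^0.5</str>
  </lst>
</requestHandler>
```

With this in place, q can be plain user terms; dismax expands them across the qf fields with the given boosts.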
I'm really puzzled about *what* you're reporting. Could you add
some detail?
Best
Erick
On Thu, Mar 5, 2009 at 4:41 AM, Ron Chan wrote:
> Hi
>
> I'm looking to build summary reports, something like
>
> jan feb mar total
> branch A
> branch B
> branch C
>
> should I search for the raw
On Thu, Mar 5, 2009 at 4:42 PM, Radha C. wrote:
>
> Hi,
>
> I am a newbie to the solr search engine. I don't find any juicy information
> on how
> to configure the ORACLE database to index the tables using the solr search
> engine. There are huge documents spread over wiki pages. I need some core
> informa
Running multiple webapps looks like a bad idea. This is the very reason
solr has the multicore feature.
PermGen size is a JVM option; I guess it would be something like
-XX:MaxPermSize
On Wed, Feb 25, 2009 at 1:38 PM, revas wrote:
> Hi
>
> I am sure this question has been repeated many times over
Thanks, my client will send the query to solr and retrieve the result from
solr, so SolrJ has these facilities and I can use this API.
-Original Message-
From: Noble Paul നോബിള് नोब्ळ् [mailto:noble.p...@gmail.com]
Sent: Thursday, March 05, 2009 4:55 PM
To: solr-user@lucene.apac
Hi,
How do I get the info on the current setting of MaxPermSize?
Regards
Sujahta
On 2/27/09, Alexander Ramos Jardim wrote:
>
> Another simple solution for your requirement is to use multicore. This way
> you will have only one Solr webapp loaded with as many indexes as you need.
>
> See more at
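For the MaxPermSize question above, one programmatic way to inspect the current limit is the standard JMX memory pool beans (a sketch; on 2009-era HotSpot the pool is named "Perm Gen", newer JVMs report "Metaspace" instead, and a max of -1 means unbounded):

```java
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryPoolMXBean;
import java.util.ArrayList;
import java.util.List;

public class PermGenInfo {
    // Collect "poolName=maxBytes" for every JVM memory pool; on JVMs with
    // a permanent generation, the "Perm Gen" entry reflects -XX:MaxPermSize
    static List<String> poolMaxima() {
        List<String> out = new ArrayList<>();
        for (MemoryPoolMXBean pool : ManagementFactory.getMemoryPoolMXBeans()) {
            if (pool.getUsage() != null) {  // skip pools with no usage data
                out.add(pool.getName() + "=" + pool.getUsage().getMax());
            }
        }
        return out;
    }

    public static void main(String[] args) {
        for (String line : poolMaxima()) {
            System.out.println(line);
        }
    }
}
```

On a running process, jinfo -flag MaxPermSize <pid> reports the flag directly on HotSpot JDKs that still support PermGen.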
If you wish to search the content in your DB, you will need to index
it into Solr first.
The search can be done on Solr, and if you are using Java, you can use
SolrJ for that.
--Noble
On Thu, Mar 5, 2009 at 4:29 PM, Radha C. wrote:
>
> Hi,
>
> We are planning to use solr search server for our data
Hi,
I am a newbie to the solr search engine. I don't find any juicy information
on how to configure the ORACLE database to index the tables using the solr
search engine. There are huge documents spread over wiki pages. I need some
core information.
I am using Apache Tomcat 5.5.26 and Oracle 9i. Can you p
Hi,
We are planning to use solr search server for our database content search,
so we have a plan to create our own java client.
Does the SolrJ API provide the facilities to create a Java client for our
database search? Where can I get information about the integration of oracle
content +solr sear
It depends on how fast your DB can give out data through JDBC. The
best thing is to just run it and see.
--Noble
On Thu, Mar 5, 2009 at 1:13 PM, Venu Mittal wrote:
> Does anybody have any stats to share on how much time DataImportHandler
> takes to index a given set of data?
>
> I am currentl
Hi,
I use Solr 1.3 through SolrJ. I want to access the statistics which are
displayed at /admin/ in the default Solr install. Is there some way of
getting those statistics through SolrJ? I tried
query.setQueryType("admin"); in code and renamed the "/admin/"
requesthandler in solrconfig.xml to
Hello Yonik,
Thanks for your help, but I am not really sure I follow.
It is possible to use the PatternTokenizerFactory with pattern = (\d+) and
group = 0 to tokenize the input correctly
But I don't see how to use the copyField to achieve sorting
I read the documentation and this does not seem
Hi
I'm looking to build summary reports, something like
jan feb mar total
branch A
branch B
branch C
should I search for the raw data and build the table at the client end?
or is this better done inside a custom search component?
thanks
Ron
Hi,
Can somebody please give me an example of how to achieve this?
Thanks,
Amit Garg
dabboo wrote:
>
> Hi,
>
> I am implementing a column-specific query with the q query parameter, e.g.
>
> ?q=prdMainTitle_product_s:math & qt=dismaxrequest
>
> The above query doesn't work, while if I use the same
Hi,
If I need to change the Lucene version of Solr, then how can we do this?
Regards
Revas
Hi,
I have a requirement where I need to search offline. We are thinking of doing
this by storing the index terms in a db.
Is there a way of accessing the index tokens in solr 1.3?
The other way is to use Zend_Lucene to read the index file of solr, as
Zend_Lucene has a method for doing this. But Ze