The configuration is not an issue...
But how do I invoke it...
I have only known a URL way to invoke it and thus import the data into
the index...
like http://localhost:8983/solr/db/dataimport?command=full-import
But with embedded I haven't been able to figure it out
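Not from the thread, but a hedged sketch of one way this is often done with 1.4-era SolrJ: an EmbeddedSolrServer can route a request to the DIH handler via the qt parameter. The core name "db" and the handler being registered at /dataimport are assumptions carried over from the URL above.

```java
import org.apache.solr.client.solrj.SolrServer;
import org.apache.solr.client.solrj.embedded.EmbeddedSolrServer;
import org.apache.solr.client.solrj.request.QueryRequest;
import org.apache.solr.common.params.ModifiableSolrParams;
import org.apache.solr.core.CoreContainer;

public class EmbeddedDihImport {
    public static void main(String[] args) throws Exception {
        // Boot the embedded container from solr.home (reads solr.xml).
        CoreContainer.Initializer init = new CoreContainer.Initializer();
        CoreContainer container = init.initialize();
        SolrServer server = new EmbeddedSolrServer(container, "db");

        // A qt value starting with '/' is treated as the handler path,
        // so this reaches /dataimport just like the HTTP URL would.
        ModifiableSolrParams params = new ModifiableSolrParams();
        params.set("qt", "/dataimport");
        params.set("command", "full-import");
        server.request(new QueryRequest(params));

        container.shutdown();
    }
}
```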
Regards
Rohan
2009/10/10 No
I guess it should be possible... what are the problems you encounter?
On Sat, Oct 10, 2009 at 10:56 AM, rohan rai wrote:
> Have been unable to use DIH for Embedded Solr
>
> Is there a way??
>
> Regards
> Rohan
>
--
-
Noble Paul | Principal E
Have been unable to use DIH for Embedded Solr
Is there a way??
Regards
Rohan
There is another option: pass the table name as a request parameter
and make your SQL query templatized.
Example:
query="select * from ${table}"
and pass the value of table as a request parameter.
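A hedged sketch of what the data-config entity might look like; note that depending on the DIH version, request parameters may need to be referenced with the dataimporter.request prefix rather than a bare ${table}:

```xml
<!-- data-config.xml: table name substituted from the request -->
<entity name="item"
        query="select * from ${dataimporter.request.table}"/>
```

Invoked with something like http://localhost:8983/solr/db/dataimport?command=full-import&table=table_a (core name assumed).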
On Sat, Oct 10, 2009 at 3:52 AM, solr.searcher wrote:
>
> Hmmm. Interesting line of thought. Th
it would be better to add a command to change the master at runtime.
But Solr is planning to move to a ZooKeeper-based system where this
can automatically be taken care of.
On Sat, Oct 10, 2009 at 6:06 AM, wojtekpia wrote:
>
> Hi,
> I'm trying to change the masterUrl of a search slave at runt
Hi,
I'm trying to change the masterUrl of a search slave at runtime. So far I've
found 2 ways of doing it:
1. Change solrconfig_slave.xml on master, and have it replicate to
solrconfig.xml on the slave
2. Change solrconfig.xml on slave, then issue a core reload command. (a side
note: can I issue
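For option 2, a hedged example of the CoreAdmin reload call (host and core name are placeholders):

```
http://localhost:8983/solr/admin/cores?action=RELOAD&core=core0
```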
Hi
I would appreciate if someone can throw some light on the following point
regarding proximity search.
I have a search box and if a user comes and types in "honda car" WITHOUT any
double quotes, I want to get all documents with matches, and also they
should be ranked based on proximity. i.e. the m
Thanks Hoss. Yes, in a separate thread on the list I reported that
doing a multi-stage optimize worked around the out of space problem. We
use mergefactor=10, maxSegments = 16, 8, 4, 2, 1 iteratively starting at
the closest power of two below the number of segments to merge. Works
nicely s
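A hedged sketch of what one pass of that iterative optimize looks like as an update message (host assumed; repeat with maxSegments 8, 4, 2, and finally 1):

```shell
curl 'http://localhost:8983/solr/update' \
     -H 'Content-Type: text/xml' \
     --data-binary '<optimize maxSegments="16"/>'
```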
Hi Wojtek:
Sorry for the late, late reply. I haven't implemented this yet, but it is
on the (long) list of my todos. Have you made any progress?
Asif
On Thu, Aug 13, 2009 at 5:42 PM, wojtekpia wrote:
>
> Hi Asif,
>
> Did you end up implementing this as a custom sort order for facets? I'm
> f
Hmmm. Interesting line of thought. Thanks a lot Jay. Will explore this
approach. There are a lot of duplicate tables though :).
I was about to try a different approach - set up two Solr cores, keep
reloading config and updating one, merge with the bigger index ...
But your approach is worth expl
Yep! Thanks!
On Fri, Oct 9, 2009 at 1:42 PM, Jay Hill wrote:
> After checking out the latest revision did you do a build? I've made that
> mistake myself a few times: check out the latest revision and then fire up
> jetty before running "ant example" - could that be it?
>
> -Jay
> http://www.luc
Thanks for the reply hossman.
I have successfully created a custom QParserPlugin that implements some
special syntax to help me handle my many dynamic fields.
The syntax looks something like this:
http://127.0.0.1:8994/solr/select...@fn:(1,2,3,4)
Had to create java code to extract the virtua
After checking out the latest revision did you do a build? I've made that
mistake myself a few times: check out the latest revision and then fire up
jetty before running "ant example" - could that be it?
-Jay
http://www.lucidimagination.com
On Fri, Oct 9, 2009 at 1:38 PM, Jason Rutherglen wrote
Lance,
I tried "java -Dsolr.solr.home=example-DIH/solr -jar start.jar" and
the browser returns 404.
On Fri, Oct 9, 2009 at 1:30 PM, Lance Norskog wrote:
> "java -jar start.jar" spews a lot of log when you run it in the
> example/ directory.
> That should show the problem. Is "core" in the struct
Jay,
I tried that as well, still nothing.
When I run: java -Dsolr.solr.home=solr -jar start.jar
I see:
2009-10-09 13:37:04.887::INFO: Logging to STDERR via org.mortbay.log.StdErrLog
2009-10-09 13:37:05.051::INFO: jetty-6.1.3
2009-10-09 13:37:05.096::INFO: Started SocketConnector @ 0.0.0.0:8983
"java -jar start.jar" spews a lot of log when you run it in the
example/ directory.
That should show the problem. Is "core" in the structure
"core/solr/conf"? If it has multiple subcores, there is no
solr/admin.jsp. Instead there is a main solr/ and a
solr/core1/admin.jsp etc.
Try running -Dsolr.s
Shouldn't that be: java -Dsolr.solr.home=multicore -jar start.jar
and then hit url: http://localhost:8983/solr/core0/admin/ or
http://localhost:8983/solr/core1/admin/
-Jay
http://www.lucidimagination.com
On Fri, Oct 9, 2009 at 1:17 PM, Jason Rutherglen wrote:
> I have a fresh checkout from t
You could use separate DIH config files for each of your three tables. This
might be overkill, but it would keep them separate. The DIH is not limited
to one request handler setup, so you could create a unique handler for each
case with a unique name:
table1-config.xml
Use copyField to copy to a field with a field type like this:
This works for your example; I can't be sure it will work for all
of your content, but give it a try and see.
-Jay
http://www.lucid
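The field type in Jay's mail didn't survive the archive; a hedged reconstruction of the general copyField pattern he describes (field and type names are placeholders):

```xml
<fieldType name="keyword_lower" class="solr.TextField">
  <analyzer>
    <tokenizer class="solr.KeywordTokenizerFactory"/>
    <filter class="solr.LowerCaseFilterFactory"/>
  </analyzer>
</fieldType>

<field name="name_exact" type="keyword_lower" indexed="true" stored="false"/>
<copyField source="name" dest="name_exact"/>
```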
I have a fresh checkout from trunk, "cd example", after running "java
-Dsolr.solr.home=core -jar start.jar",
http://localhost:8983/solr/admin yields a 404 error.
Hi,
Does anyone know when exactly the dih.last_index_time in
dataimport.properties is captured? E.g. at the start of issuing SQL to the data
source, at the end of executing SQL to fetch the list of IDs that have
changed since the last index, or at the end of indexing all changed/new documents? The
name seems t
Thank you Yonik, that worked (I added :method => :enum to the :facets hash).
And it seems to work really fast, too.
Yonik Seeley wrote:
Hi Paul,
The new faceting method is faster in the general case, but doesn't
work well for faceting full text fields (which tends not to work well
regardless
Hi Paul,
The new faceting method is faster in the general case, but doesn't
work well for faceting full text fields (which tends not to work well
regardless of the method).
You can get the old behavior by adding either of the parameters
"facet.method=enum" or "f.content.facet.method=enum"
We'll
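A hedged example of those parameters on a request URL (host and field name assumed):

```
http://localhost:8983/solr/select?q=*:*&facet=true&facet.field=content&facet.method=enum
```

or, for that one field only, replace the last parameter with f.content.facet.method=enum.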
(using solr 1.4 nightly; solr-ruby 0.0.7)
I am attempting to do an auto-complete with the following statement:
req = Solr::Request::Standard.new(
:start => 0, :rows => 0, :shards => [ 'resources', 'exhibits'],
:query => "*:*", :filter_queries => [ ],
:facets => {:fields => [ "content" ], :min
Thanks Shalin. Patch works well for me too.
Michael
Shalin Shekhar Mangar wrote:
>
> On Thu, Oct 8, 2009 at 1:38 AM, michael8 wrote:
>
>>
>> 2 things I noticed that are different from 1.3 to 1.4 for DataImport:
>>
>> 1. there are now 2 datetime values (per my specific schema I'm sure) in
>>
I think you want "indexed='true' and stored='false'".
If the field is not marked "required=true" then, yes, there can be
"null" fields.
BTW, to search for documents where a value is not set, do this:
*:* -field:[* TO *]
On Tue, Oct 6, 2009 at 1:46 AM, Avlesh Singh wrote:
>>
>> I am defining
On Fri, Oct 9, 2009 at 3:14 PM, Lance Norskog wrote:
> So if I have a unique value for every document, then delete some
> documents, Lucene will count the values from the deleted documents and
> refuse? Does the counter check for "this term has no documents any
> more"?
The count is done while fi
So if I have a unique value for every document, then delete some
documents, Lucene will count the values from the deleted documents and
refuse? Does the counter check for "this term has no documents any
more"?
On Tue, Oct 6, 2009 at 12:10 PM, Yonik Seeley
wrote:
> Lucene's test for multi-valued f
Marc,
What do you mean by Katta's ranking algorithm? If you use
SOLR-1395's search request system that traverses Hadoop RPC,
it's simply using what Solr offers today in terms of distributed
search (i.e. no distributed IDF). Instead of requests being
serialized into an HTTP call, they are serialize
Hi all,
First of all, please accept my apologies if this has been asked and answered
before. I tried my best to search and couldn't find anything on this.
The problem I am trying to solve is as follows. I have multiple tables with
identical schema - table_a, table_b, table_c ... and I am trying
: Reverse alphabetical ordering. The option "index" provides alphabetical
: ordering.
be careful: "index" doesn't mean "alphabetical" -- it means the natural
ordering of terms as they exist in the index. for non-ASCII characters
this is not necessarily something that could be considered alp
Hi,
What's the canonical way to pass an update request to another handler?
I'm implementing a handler that has to dispatch its result to different
update handlers based on its internal processing.
Getting a handler from SolrCore.getRequestHandler(handlerName) makes the
implementation depende
: That theory did not work because the error log showed that solr was trying to
: merge into the _1j37 segment files showing as deleted in the lsof above when
: it ran out of space so those are a symptom not a cause of the lost space:
Right, you have to keep in mind Solr is always maintaining a
: business needs. Our problem is that we have a catalog schema with
: products and skus, one to many. The most relevant content being indexed
: is at the product level, in the name and description fields. However we
: are interested in filtering by sku attributes, and in particular making
:
: For example, lets say my unmodified query is
: "http://localhost:8994/solr/select?q=xxx:[* TO 3] AND yyy:[3 TO
: *]&defType=myQParser" and (JUST for the sake of argument) lets say I want to
: rewrite it as "http://localhost:8994/solr/select?q=aaa:[1 TO 2] AND bbb:[3
: TO 10]&defType=myQParser".
: Subject: How to determine the size of the index?
: Date: Wed, 7 Oct 2009 12:46:40 -0600
: Message-ID:
:
: In-Reply-To:
: References:
: <25783936.p...@talk.nabble.com>
:
: <25788406.p...@talk.nabble.com>
:
http://people.apache.org/~hossman/#threadhijack
Thread Hijacking
: I have a field. The field has a sentence. If the user types in a word
: or a phrase, how can I return the index of this word or the index of
: the first word of the phrase?
: I tried to use &bf=ord..., but it does not work as i expected.
for basic queries (term, phrase, etc...) position informa
Hi all,
I'm using solr-ruby 0.0.7 and am having trouble getting Sort to work.
I have the following statement:
req = Solr::Request::Standard.new(:start => start, :rows => max,
:sort => [ :title_sort => :ascending ],
:query => query, :filter_queries => filter_queries,
:field_list => @field_lis
Hi,
I am using SOLR 1.4 (July 23rd nightly build), with a master-slave setup.
I have encountered twice an occurrence of the slave recreating the indexes
over and over again.
Couldn't find any pointers in the log.
Any help would be appreciated
Moshe Cohen
The data directory listing is:
total 25
For posterity...
After reading through http://wiki.apache.org/solr/SolrConfigXml and
http://wiki.apache.org/solr/CoreAdmin and
http://issues.apache.org/jira/browse/SOLR-646, I think there's no way
for me to make only one core specify &shards=foo, short of duplicating
my solrconfig.xml for that cor
As always, you guys rock!
Thanks,
-Jay
http://www.lucidimagination.com
On Fri, Oct 9, 2009 at 2:57 AM, Shalin Shekhar Mangar <
shalinman...@gmail.com> wrote:
> FYI - This is fixed in trunk.
>
> 2009/10/9 Noble Paul നോബിള് नोब्ळ्
>
> > I have raised an issue http://issues.apache.org/jira/brows
On Fri, Oct 9, 2009 at 10:26 AM, Michael wrote:
> Hm... still no success. Can anyone point me to a doc that explains
> how to define and reference core properties? I've had no luck
> searching Google.
OK, definition is described here:
http://wiki.apache.org/solr/CoreAdmin#property -- a page I'v
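For reference, a hedged sketch of the pattern that wiki page describes: define the property on the core in solr.xml, then reference it in solrconfig.xml (names and values are assumptions):

```xml
<!-- solr.xml -->
<core name="core0" instanceDir="core0">
  <property name="shardsParam"
            value="localhost:8983/solr/core1,localhost:8983/solr/core2"/>
</core>

<!-- solrconfig.xml: the colon gives an empty default -->
<str name="shards">${solr.core.shardsParam:}</str>
```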
Hm... still no success. Can anyone point me to a doc that explains
how to define and reference core properties? I've had no luck
searching Google.
Shalin, I gave an identical '' tag to
each of my cores, and referenced ${solr.core.shardsParam} (with no
default specified via a colon) in solrconfig
On Fri, Oct 9, 2009 at 6:03 AM, Shalin Shekhar Mangar
wrote:
> Michael, the last line does not seem right. The tag has nothing
> called shardParam. If you want to add a core property called shardParam, you
> need to add something like this:
>
>
>
>
> value="localhost:9990/core1,localhost:999
Hi Koji,
the problem is that this doesn't fit all of our requirements. We have
some Solr documents that must not be matched by "foo" or "bar" but by
"foo bar" as part of the query. Also, we have some other documents that
could be matched by "foo" and "foo bar" or "bar" and "foo bar".
The best wa
On Fri, Oct 9, 2009 at 6:10 PM, Pravin Karne
wrote:
> Hi,
> I am new to solr. I have configured solr successfully and its working
> smoothly.
>
> I have one query:
>
> I want index large data(around 100GB).So can we store these indexes on
> different machine as distributed system.
>
>
Are you talk
On Fri, Oct 9, 2009 at 4:37 PM, Pravin Karne
wrote:
> Hi
> I have index data with Lucene. I want to deploy this indexes on solr for
> search.
>
> Generally we index and search data with Solr, but now I want to just
> search with Lucene indexes.
>
> How can we do this ?
>
It is possible but you h
Hi Patrick,
Why don't you define:
foo bar, foo_bar (and expand="true")
instead of:
foo bar=>foo_bar
on only the indexing side? Doesn't it make a change for the better?
Koji
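A hedged sketch of the index-side configuration Koji describes (file name and filter placement are assumptions):

```xml
<!-- schema.xml, index analyzer chain only; synonyms.txt would contain
     a line such as:  foo bar, foo_bar  -->
<filter class="solr.SynonymFilterFactory" synonyms="synonyms.txt"
        ignoreCase="true" expand="true"/>
```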
Patrick Jungermann wrote:
Hi Koji,
using phrase queries is no alternative for us, because all query parts
have to be opt
Hi,
I am new to Solr. I have configured Solr successfully and it's working smoothly.
I have one query:
I want to index large data (around 100GB). So can we store these indexes on
different machines as a distributed system?
So there will be one master and more slaves. Also we have to keep these data in
Hi
I have indexed data with Lucene. I want to deploy these indexes on Solr for search.
Generally we index and search data with Solr, but now I want to just search
with the Lucene indexes.
How can we do this ?
-Pravin
Hey there,
I am trying to set up the Katta integration plugin. I would like to know if
Katta's ranking algorithm is used when searching among shards. If yes,
would it mean it solves the problem with IDFs of distributed Solr?
--
View this message in context:
http://www.nabble.com/SOLR-1395-
Hi,
some more queries:
QUERY: buzz:base* => returns the desired result.
QUERY: buzz:baseWord => returns the desired result.
QUERY: buzz:baseWord* => returns nothing.
QUERY: buzz:base*khatwani => returns nothing
QUERY: buzz:base*khat* => returns nothing
finding it kinda weird... ny poi
Hi,
well, I have a schema where I store a JSON string.
Consider a small example schema shown below:
-
-
{"word":"words","baseWord":"word","pos":"noun","phrase":"following are the
list of words","frequency":7}
-
{"word":"heroes","baseWord":"hero","pos":"noun","phrase":"have you watched
the m
Hi,
I'm trying to open an existing local index with the following code:
/**
 * Get the SolrServer.
 */
public static SolrServer startEmbeddedSolr(String indexdir)
    throws IOException, ParserConfigurationException,
           SAXException, SolrServerException
{
    final CoreContainer
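The rest of the method was cut off by the archive; a hedged 1.4-era completion might look like this (the solr.xml location and core name are assumptions, not the original poster's code):

```java
// inside startEmbeddedSolr(indexdir), continued:
final CoreContainer container = new CoreContainer();
// Load the container from solr.xml under the index directory.
container.load(indexdir, new File(indexdir, "solr.xml"));
// Wrap the named core; "core0" is a placeholder.
return new EmbeddedSolrServer(container, "core0");
```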
On Fri, Oct 9, 2009 at 11:35 AM, Pravin Karne wrote:
> Thanks for your reply.
> I have one more query regarding solr distributed environment.
>
> I have configured solr on to machine as per
> http://wiki.apache.org/solr/DistributedSearch
>
> But I have following test case -
>
> Suppose I have two
On Wed, Oct 7, 2009 at 11:16 PM, Michael wrote:
> I'd like to have 5 cores on my box. core0 should automatically shard to
> cores 1-4, which each have a quarter of my corpus.
> I tried this in my solrconfig.xml:
>
>
>
> ${solr.core.shardsParam:}
>
>
>
> and this in my solr.x
Hi list,
is there any possibility to get highlighting also for the query string?
Example:
Query: fooo bar
Tokens after query analysis: foo[0,4], bar[5,8]
Token "foo" matches a token of one of the queried fields.
-> Query highlighting: "fooo"
Thanks, Patrick
FYI - This is fixed in trunk.
2009/10/9 Noble Paul നോബിള് नोब्ळ्
> I have raised an issue http://issues.apache.org/jira/browse/SOLR-1501
>
> On Fri, Oct 9, 2009 at 6:10 AM, Jay Hill wrote:
> > In the past setting rows=n with the full-import command has stopped the
> DIH
> > importing at the nu
FYI - This is fixed in trunk.
2009/10/9 Noble Paul നോബിള് नोब्ळ्
> raised an issue
> https://issues.apache.org/jira/browse/SOLR-1500
>
> On Fri, Oct 9, 2009 at 7:10 AM, jayakeerthi s
> wrote:
> > Hi All,
> >
> > I tried Indexing data and got the following error., Used Solr nightly
> Oct5th
> >
Hi Chantal,
yes, I'm using the SynonymFilter in both the index and query chains. Using it
only at query time or only at index time was part of former considerations,
but neither fits all of our requirements.
But as I wrote in my first mail, it works only within the
/admin/analysis.jsp view and not at "re
Hi Patrick,
have you added that SynonymFilter to the index chain and the query
chain? You have to add it to both if you want to have it replaced at
index and query time. It might also be enough to add it to the query
chain only. Then your index still preserves the original data.
Cheers,
Chan
Hi Koji,
using phrase queries is no alternative for us, because all query parts
have to be optional. The phrase query workaround will work for a
query "foo bar", but only for this exact query. If the user queries for
"foo bar baz", it will be changed to "foo_bar baz", but it will not
match th
On Fri, Oct 9, 2009 at 12:51 AM, Ryan McKinley wrote:
> Hello-
>
> I have an application that can run in the background on a user Desktop --
> it will go through phases of being used and not being used. I want to be
> able to free as many system resources when not in use as possible.
>
> Current
Hi Joe,
WordDelimiterFilter removes different delimiters, and creates several
token strings from the input. It can also concatenate and add that as
additional token to the stream. Though, it concatenates without space.
But maybe you can tweak it to your needs?
You could also use two different
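A hedged sketch of the WordDelimiterFilter tweak Chantal mentions, with catenateWords turned on so split parts are also joined back together (attribute values are illustrative):

```xml
<filter class="solr.WordDelimiterFilterFactory"
        generateWordParts="1" generateNumberParts="1"
        catenateWords="1" catenateNumbers="0" catenateAll="0"
        splitOnCaseChange="1"/>
```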
I ended up with the same set of results earlier, but I don't get results such
as "the champion", I think because of the EdgeNGram filter.
With NGram, I'm back to the same problem:
Result for q=ca
0.8717008
Blu Jazz Cafe
0.8717008
Café in the Pond
On Fri, Oct 9, 2009 at 4:02 PM, R. Tan wrote:
Hi Bernadette,
Bernadette Houghton schrieb:
Thanks for this Patrick. If I remove one of the hyphens, solr doesn't throw up
the error, but still doesn't find the right record. I see from marklo's
analysis page that solr is still parsing it with a hyphen. Changing this part
of our schema.xml -
Hi Koji,
thanx a ton ... now it worked :)
On Fri, Oct 9, 2009 at 6:02 AM, Koji Sekiguchi wrote:
> Hi Rakhi,
>
> Use multiValued (capital V), not multivalued. :)
>
> Koji
>
>
> Rakhi Khatwani wrote:
>
>> Hi,
>> i have a small schema with some of the fields defined as:
>> > multiV
How do these filters help the autosuggest?
On Fri, Oct 9, 2009 at 3:59 PM, Avlesh Singh wrote:
> >
> > What are the replacements for, the special character and 20 char?
> >
> I had no time to diff between your definitions and mine. Copy-pasting mine
> was easier :)
>
> Also, do you get resul
>
> What are the replacements for, the special character and 20 char?
>
I had no time to diff between your definitions and mine. Copy-pasting mine
was easier :)
Also, do you get results such as " formula"?
>
The "autocomplete" field would definitely not match this query, but the
"tokenized aut
Thanks, I'll give this a go. What are the replacements for, the special
character and 20 char? Also, do you get results such as " formula"?
On Fri, Oct 9, 2009 at 3:45 PM, Avlesh Singh wrote:
> I have a very similar set-up for my auto-suggest (I am sorry that it can't
> be viewed from an ext
I have a very similar set-up for my auto-suggest (I am sorry that it can't
be viewed from an external network).
I am sending you my field definitions, please use them and see if it works
out correctly.
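Avlesh's definitions aren't visible in the archive; a hedged reconstruction of a typical auto-suggest field type from that era, with the 20-character limit mentioned in the thread showing up as maxGramSize (all names are placeholders):

```xml
<fieldType name="autocomplete" class="solr.TextField" positionIncrementGap="100">
  <analyzer type="index">
    <tokenizer class="solr.KeywordTokenizerFactory"/>
    <filter class="solr.LowerCaseFilterFactory"/>
    <filter class="solr.EdgeNGramFilterFactory" minGramSize="1" maxGramSize="20"/>
  </analyzer>
  <analyzer type="query">
    <tokenizer class="solr.KeywordTokenizerFactory"/>
    <filter class="solr.LowerCaseFilterFactory"/>
  </analyzer>
</fieldType>
```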
Yeah, I do get results. Anything else I missed out?
I want it to work like this site's auto suggest feature.
http://www.sematext.com/demo/ac/index.html
Try the keyword 'formula'.
Thanks,
Rih
On Fri, Oct 9, 2009 at 3:24 PM, Avlesh Singh wrote:
> Can you just do q=autoCompleteHelper2:caf to se
Can you just do q=autoCompleteHelper2:caf to see you get results?
Cheers
Avlesh
On Fri, Oct 9, 2009 at 12:53 PM, R. Tan wrote:
> Yup, it is. Both are copied from another field called name.
>
> On Fri, Oct 9, 2009 at 3:15 PM, Avlesh Singh wrote:
>
> > Lame question, but are you populating data
Yup, it is. Both are copied from another field called name.
On Fri, Oct 9, 2009 at 3:15 PM, Avlesh Singh wrote:
> Lame question, but are you populating data in the autoCompleteHelper2
> field?
>
> Cheers
> Avlesh
>
> On Fri, Oct 9, 2009 at 12:36 PM, R. Tan wrote:
>
> > The problem is, I'm getti
Lame question, but are you populating data in the autoCompleteHelper2 field?
Cheers
Avlesh
On Fri, Oct 9, 2009 at 12:36 PM, R. Tan wrote:
> The problem is, I'm getting equal scores for this:
> Query:
> q=(autoCompleteHelper2:caf^10.0 autoCompleteHelper:caf)
>
> Partial Result:
>
>
> 0.7821733
The problem is, I'm getting equal scores for this:
Query:
q=(autoCompleteHelper2:caf^10.0 autoCompleteHelper:caf)
Partial Result:
0.7821733
Bikes Café
0.7821733
Cafe Feliy
I'm using the standard request handler with this.
Thanks,
Rih
On Fri, Oct 9, 2009 at 3:02 PM, R. Tan wrote:
> Avle
Avlesh,
I don't see anything wrong with the data from analysis.
KeywordTokenized:
term position:  1    2    3     4    5    6    7    8    9    10   ...
term text:      th   he   "e "  c    ch   ha   am   mp   pi   io   o