On Fri, Sep 23, 2011 at 11:59 AM, nagarjuna wrote:
> Yeah Gora, I set up an RSS feed for my blog and I have the following URL for
> the RSS feed of my blog
It would be best if you stated your exact problem up front, rather than
leaving readers to dig through the message to find where exactly the issue
lies.
> http://nagar
Seems to be a rather innocent network issue based on your stacktrace:
Caused by: java.sql.SQLException: Network error IOException: Address
already in use: connect
Can you recheck connections and retry?
Sent from my iPhone
On Sep 23, 2011, at 3:34 PM, "Vazquez, Maria (STM)" wrote:
> I tried
Hi,
thanks for the details. I will look into the XSL suggestion.
Any idea how I would send a parameter to the script?
As I understand it, that's the syntax for the script transformer.
If relevance ranking is working well, in theory it doesn't matter how many hits
you get as long as the best results show up in the first page of results.
However, the default in choosing which facet values to show is to show the
facets with the highest count in the entire result set. Is there
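For reference, a minimal SolrJ sketch (not from this thread; the field name "category" and the server URL are assumptions) showing where the facet-value ordering is controlled:

import org.apache.solr.client.solrj.SolrQuery;
import org.apache.solr.client.solrj.SolrServer;
import org.apache.solr.client.solrj.impl.CommonsHttpSolrServer;
import org.apache.solr.client.solrj.response.QueryResponse;

public class FacetSortExample {
    public static void main(String[] args) throws Exception {
        SolrServer server = new CommonsHttpSolrServer("http://localhost:8983/solr");
        SolrQuery q = new SolrQuery("ipod");
        q.setFacet(true);
        q.addFacetField("category");   // hypothetical facet field
        q.setFacetSort("count");       // default: values with the highest counts first
        // q.setFacetSort("index");    // alternative: lexicographic order instead
        q.setFacetLimit(10);
        QueryResponse rsp = server.query(q);
        System.out.println(rsp.getFacetField("category").getValues());
    }
}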
The first thing I'd try is just tweaking the Xmx parameter on the invocation,
java -Xmx2048M -jar start.jar
Second option: Play with your options in solrconfig.xml
and lower it substantially, although I'm not quite sure how DIH interacts
with that.
Gotta rush, so sorry this is so terse.
Best
Er
On 9/23/2011 6:00 PM, hadi wrote:
> I index my files with SolrJ and crawl my sites with Nutch 1.3. As you
> know, I have to overwrite the Solr schema with the Nutch schema in order to
> view the results in solr/browse. In this case I should define two
> cores, but I want a single result set, or the user
Hi all,
I'd like to know what the specific disadvantages are of using dynamic
fields in my schema. About half of my fields are dynamic, but I could
move all of them to static fields. Will my searches run faster? If there
are no disadvantages, can I just make all my fields dynamic?
J
I index my files with SolrJ and crawl my sites with Nutch 1.3. As you
know, I have to overwrite the Solr schema with the Nutch schema in order to
view the results in solr/browse. In this case I should define two
cores, but I want a single result set, or for the user to be able to search
both core indexes at the
On Sep 23, 2011, at 2:03pm, hadi wrote:
> I have two cores with separate schemas and indexes but I want to have a single
> result set in solr/browse,
If they have different schemas, how would you combine results from the two?
If they have the same schemas, then you can define a third core with a
diff
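If the schemas do match, one way to combine the two cores is Solr's distributed search (the shards parameter); a minimal sketch, with hypothetical core names, host, and query:

import org.apache.solr.client.solrj.SolrQuery;
import org.apache.solr.client.solrj.SolrServer;
import org.apache.solr.client.solrj.impl.CommonsHttpSolrServer;
import org.apache.solr.client.solrj.response.QueryResponse;

public class TwoCoreSearch {
    public static void main(String[] args) throws Exception {
        // Send the query to one core, but ask it to fan the request out to both.
        SolrServer server = new CommonsHttpSolrServer("http://localhost:8983/solr/core0");
        SolrQuery q = new SolrQuery("title:solr");
        q.set("shards", "localhost:8983/solr/core0,localhost:8983/solr/core1");
        QueryResponse rsp = server.query(q);
        System.out.println("Merged hits: " + rsp.getResults().getNumFound());
    }
}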
When I create a query like "something&fl=content" in solr/browse, the "&" and
"=" in the URL are converted to %26 and %3D and no results are returned, but it
works in solr/admin advanced search and also directly in the URL bar. How can I
solve this problem? Thanks
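One possible explanation, illustrated with a small self-contained sketch (the URL is hypothetical): the /browse search box sends everything you type as the single q parameter, so "&" and "=" get percent-encoded inside it instead of acting as parameter separators.

import java.net.URLEncoder;

public class EncodingExample {
    public static void main(String[] args) throws Exception {
        String typed = "something&fl=content";
        // What the browse form effectively sends: one q value, fully encoded.
        System.out.println("q=" + URLEncoder.encode(typed, "UTF-8"));
        // -> q=something%26fl%3Dcontent  (fl never reaches Solr as a separate parameter)

        // To pass fl for real, keep it outside the q value when building the URL:
        String url = "http://localhost:8983/solr/browse?q="
                + URLEncoder.encode("something", "UTF-8")
                + "&fl=" + URLEncoder.encode("content", "UTF-8");
        System.out.println(url);
    }
}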
I tried the patch
(https://issues.apache.org/jira/secure/attachment/12481497/SOLR-2233-001.patch)
And now I get these errors. Am I doing something wrong? Using MS SQL Server
23 Sep 2011 12:26:14,418
[org.apache.solr.handler.dataimport.ThreadedEntityProcessorWrapper]
Exception in entity : keyword
conf/velocity by default. See Solr's example configuration.
Erik
On Sep 23, 2011, at 12:37, Fred Zimmerman wrote:
> ok, answered my own question, found velocity rw in solrconfig.xml. next
> question:
>
> where does velocity look for its templates?
>
> --
Hi,
In working through some updates for the Solr Size Estimator, I have
found a number of gaps in the Solr Wiki. I've Google'd to a fair degree
on each of these and either found nothing or an insufficient explanation.
In particular, for each of the following I'm looking for:
A) An explanation
I am using Solr 3.1.
But you can surely try the patch with 3.3.
On Fri, Sep 23, 2011 at 1:35 PM, Vazquez, Maria (STM) <
maria.vazq...@dexone.com> wrote:
> Thanks Rahul.
> Are you using 3.3 or 3.4? I'm on 3.3 right now
> I will try the patch today
> Thanks again,
> Maria
>
>
> -Original Messag
Few thoughts:
1) If you place the script transformer method on the entity named "x"
and then pass the ${topic_tree.topic_id} to that as an argument, then
shouldn't you have everything you need to work with x's row? Even if
you can't look up at the parent, all you needed to know was the
topic_id an
Just another point worth mentioning here, though it's related to Nutch and
not Solr: if you want to re-crawl and get new data into the index, you have to
remove the data from Nutch's crawl folder (the default for Nutch) too. Only
then will you get freshly crawled data (not to be confused with
Thanks Rahul.
Are you using 3.3 or 3.4? I'm on 3.3 right now
I will try the patch today
Thanks again,
Maria
-Original Message-
From: Rahul Warawdekar [mailto:rahul.warawde...@gmail.com]
Sent: Thursday, September 22, 2011 12:46 PM
To: solr-user@lucene.apache.org
Subject: Re: JdbcDataSourc
ok, answered my own question, found velocity rw in solrconfig.xml. next
question:
where does velocity look for its templates?
-
Subscribe to the Nimble Books Mailing List http://eepurl.com/czS- for
monthly updates
On Fri, Sep 23, 2011 at 11
Hi,
I have indexed some 1M documents, just for performance testing. I have written
a query parser plugin; when I add it to the Solr lib folder under the Tomcat
webapps folder and try to load the Solr admin page, it keeps on loading, and
when I delete the query parser plugin's jar file from lib it works fine. But
This seems to be out of date. I am running Solr 3.4
* the file structure of apachehome/contrib is different and I don't see
velocity anywhere underneath
* the page referenced below only talks about Solr 1.4 and 4.0
?
On Thu, Sep 22, 2011 at 19:51, Markus Jelsma wrote:
> Hi,
>
> Solr support the
I tried that with the same results. You would think I would get the
exception back from Solr so I could trap it; instead I lose all other
requests after it.
On Fri, Sep 23, 2011 at 8:33 AM, Gunther, Andrew wrote:
> All the solr methods look like they should throw those 2 exceptions.
> Have you t
Yes, it is possible:
http://www.medihack.org/2011/03/01/autocompletion-autosuggestion-using-solr/
Since I am looking into autosuggest, I came across that info while doing
research.
Hi,
my requirement is: I have a list of popular search terms in a database:
searchterm | count
------------------
mango      | 100
Consider I have only one term in that table, mango. I use edgengram and put
that in the auto_complete field in the Solr index along with the count.
If the user starts typing "m" I will show
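A minimal SolrJ sketch of the lookup side under those assumptions (the field names auto_complete, searchterm, and count come from the description above; the server URL and the exact query strategy are illustrative, not prescriptive):

import org.apache.solr.client.solrj.SolrQuery;
import org.apache.solr.client.solrj.SolrServer;
import org.apache.solr.client.solrj.impl.CommonsHttpSolrServer;
import org.apache.solr.client.solrj.response.QueryResponse;
import org.apache.solr.client.solrj.util.ClientUtils;
import org.apache.solr.common.SolrDocument;

public class AutoCompleteLookup {
    public static void main(String[] args) throws Exception {
        SolrServer server = new CommonsHttpSolrServer("http://localhost:8983/solr");
        String prefix = "m";  // what the user has typed so far
        SolrQuery q = new SolrQuery("auto_complete:" + ClientUtils.escapeQueryChars(prefix));
        q.addSortField("count", SolrQuery.ORDER.desc);  // most popular suggestions first
        q.setFields("searchterm", "count");
        q.setRows(10);
        QueryResponse rsp = server.query(q);
        for (SolrDocument doc : rsp.getResults()) {
            System.out.println(doc.getFieldValue("searchterm") + " (" + doc.getFieldValue("count") + ")");
        }
    }
}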
Nicolas,
A text or ngram field should do it.
Otis
Sematext :: http://sematext.com/ :: Solr - Lucene - Nutch
Lucene ecosystem search :: http://search-lucene.com/
- Original Message -
> From: Nicolas Martin
> To: solr-user@lucene.apache.org
> Cc:
> Sent: Friday, September 23, 2011
Hi Roland,
I did this:
http://search-lucene.com/?q=sort+by+function&fc_project=Solr&fc_type=wiki
Which took me to this:
http://wiki.apache.org/solr/FunctionQuery#Sort_By_Function
And further on that page you'll find strdist function documented:
http://wiki.apache.org/solr/FunctionQuery#strdist
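For example, a hedged sketch of sorting by strdist via SolrJ (the field name "city" and the reference string are made up for illustration):

import org.apache.solr.client.solrj.SolrQuery;
import org.apache.solr.client.solrj.SolrServer;
import org.apache.solr.client.solrj.impl.CommonsHttpSolrServer;

public class StrdistSortExample {
    public static void main(String[] args) throws Exception {
        SolrServer server = new CommonsHttpSolrServer("http://localhost:8983/solr");
        SolrQuery q = new SolrQuery("*:*");
        // strdist returns higher values for more similar strings,
        // so sort descending to get the closest matches first.
        q.addSortField("strdist(\"haarlem\",city,edit)", SolrQuery.ORDER.desc);
        System.out.println(server.query(q).getResults());
    }
}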
Hi,
I am not getting the exception anymore; I had an issue with the database.
But now the real problem I always have:
now that I can fetch IDs from the database, how would I fetch the corresponding
data for each ID from the XML file?
So after getting the DB info from the JDBC source I use the XPath processor like
this, but it does not work.
Hi Ahmad,
Ah, that's a FAQ! :)
http://search-lucene.com/?q=delete+all+documents&fc_project=Solr&fc_type=wiki
Otis
Sematext :: http://sematext.com/ :: Solr - Lucene - Nutch
Lucene ecosystem search :: http://search-lucene.com/
- Original Message -
> From: ahmad ajiloo
> To: solr-us
Roy,
Use something other than Nabble, or quote the previous email, to help people
keep track of what your problem is/was about.
Yes, with edge ngrams you won't be able to do infix searches but are you
sure you want that? People typically don't miss/skip the beginning of a word...
Otis
Semat
All the solr methods look like they should throw those 2 exceptions.
Have you tried the DirectXmlRequest method?
up.process(solrServer);
public UpdateResponse process( SolrServer server ) throws
SolrServerException, IOException
{
long startTime = System.currentTimeMillis();
UpdateRes
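A hedged sketch of that suggestion; the XML payload, server URL, and error handling are assumptions, not the original poster's code:

import java.io.IOException;
import org.apache.solr.client.solrj.SolrServer;
import org.apache.solr.client.solrj.SolrServerException;
import org.apache.solr.client.solrj.impl.CommonsHttpSolrServer;
import org.apache.solr.client.solrj.request.DirectXmlRequest;

public class PostXmlExample {
    public static void main(String[] args) throws Exception {
        SolrServer solrServer = new CommonsHttpSolrServer("http://localhost:8983/solr");
        String xml = "<add><doc><field name=\"id\">1</field></doc></add>";
        DirectXmlRequest up = new DirectXmlRequest("/update", xml);
        try {
            up.process(solrServer);  // declared to throw both exceptions below
        } catch (SolrServerException e) {
            System.err.println("Solr rejected the document: " + e.getMessage());
        } catch (IOException e) {
            System.err.println("I/O problem talking to Solr: " + e.getMessage());
        }
    }
}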
On 9/23/2011 1:45 AM, Pranav Prakash wrote:
Maybe I am wrong, but my intention in using both of them is this: first, I
want to use phrase queries, so I used CommonGramsFilterFactory. Secondly, I
don't want those stopwords in my index, so I have used StopFilterFactory to
remove them.
CommonGrams is n
On Sat, Sep 3, 2011 at 1:29 AM, Chris Hostetter wrote:
>
> : I am not sure if current version has this, but DIH used to reload
> : connections after some idle time
> :
> : if (currTime - connLastUsed > CONN_TIME_OUT) {
> : synchronized (this) {
> :
*
I have a Java program which sends thousands of Solr XML files up to Solr
using the following code. It works fine until there is a problem with one of
the Solr XML files. The code fails on the solrServer.request(up) line, but
it does not throw an exception, so my application cannot catch it
On Sun, Sep 18, 2011 at 11:47 AM, abhayd wrote:
> hi gora,
> Query works and if i remove xml data load indexing works fine too
>
> Problem seem to be with this
>
> baseDir="${solr.solr.home}" fileName=".xml"
>recursive="false" rootEntity="true"
> dataSource="video_datasource">
>
Hi,
OK, if SOLR-2403, being related to the bug I described, has been fixed in
Solr 3.4, then we are safe, since we are in the process of migrating. Is it
possible to verify this somehow? Is the FacetComponent class the one I should
start checking from? Can you give any other pointers?
OK, for t
Erik,
I tried your solution, but it still does not open the files in the Solr
results. I am pasting my files; please take a look and see if something can be
corrected:
data-config.xml:
OK, I found the problem was in our new interface.
Your feedback made me look deeper. Thanx.
(11/09/23 20:03), O. Klein wrote:
The regex fragmenter showed that there was enough content to show multiple
snippets.
The number of snippets has no effect on any of the types of breakIterator;
only fragsize has an effect.
Or does this highlighter not support multiple snippets?
This highlighter
The regex fragmenter showed that there was enough content to show multiple
snippets.
The number of snippets has no effect on any of the types of breakIterator;
only fragsize has an effect.
Or does this highlighter not support multiple snippets?
Hi Solr users!
I'd like to search my client database; in particular, I need
to find clients by their address (e.g. "100 avenue des champs élysée").
Does anyone know a good fieldType for storing my addresses that would let me
search clients by address easily?
Thank you all
I'm using EdgeNgrams to do the same thing rather than wildcard searches.
More info here:
http://www.lucidimagination.com/blog/2009/09/08/auto-suggest-from-popular-queries-using-edgengrams/
Make sure your search phrase is enclosed in quotes as well, so it's
treated as a phrase rather than two words.
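A tiny sketch of that last point (the field name ac_field and the input are assumptions):

import org.apache.solr.client.solrj.SolrQuery;

public class PhraseSuggestQuery {
    public static void main(String[] args) {
        String input = "manchester uni";  // what the user typed so far
        // Quote the whole input so the EdgeNGram field is matched as a phrase,
        // not as two independent words. (Embedded quotes would need escaping.)
        SolrQuery q = new SolrQuery("ac_field:\"" + input + "\"");
        System.out.println(q.getQuery());  // ac_field:"manchester uni"
    }
}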
Thanks Otis,
I am able to show the results such that the last match (500 characters around
the match) in the log file is shown highlighted. I can try creating multiple
documents from one log file to see if it improves the performance.
Can anything else be done to reduce the heap size?
Anand Ni
> You've got CommonGramsFilterFactory and StopFilterFactory both using
> stopwords.txt, which is a confusing configuration. Normally you'd want one
> or the other, not both ... but if you did legitimately have both, you'd want
> them to each use a different wordlist.
>
Maybe I am wrong. But my in
Hi, I suppose that this isn't what you mean, but I'll leave it here because
it could help you.
Is this what you need?
Using SolrJ, I delete all the rows of the index with this command:
solr.deleteByQuery("id:*");
But you need to delete all the rows inserted from Nutch; maybe this helps
you
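A hedged SolrJ sketch along those lines (the server URL is an assumption; *:* matches every document, and a narrower query on some Nutch-specific field would remove only the Nutch rows):

import org.apache.solr.client.solrj.SolrServer;
import org.apache.solr.client.solrj.impl.CommonsHttpSolrServer;

public class DeleteAllExample {
    public static void main(String[] args) throws Exception {
        SolrServer solr = new CommonsHttpSolrServer("http://localhost:8983/solr");
        solr.deleteByQuery("*:*");  // match-all query: removes every document
        solr.commit();              // make the deletion visible to searchers
    }
}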
Thanks for helping me so far.
Yes, I have seen the EdgeNGrams possibility. Correct me if I'm wrong, but I
thought it isn't possible to do infix searches with EdgeNGrams? Like "chest"
giving the suggestion "manchester".
Hi all
I sent my data from Nutch to Solr for indexing and searching. Now I want to
delete all of the indexed data sent from Nutch. Can anyone help me?
thanks
Hi all!
We are working with Solr for the first time and have a simple data model:
Entity Person (column surname) has 1:n Attribute (column name) has 1:n
Value (column text).
We need faceted search on the content of Attribute:name, not on Attribute:name
itself, e.g. if an Attribute of a person has name=hobby, w