Check out
http://wiki.apache.org/solr/HighlightingParameters
and the hl.simple.pre/hl.simple.post options
You may also be able to control the display of the default <em> tag via CSS,
but whether this is feasible will depend on your rendering context.
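For example, to get bold markers you can pass something like this at query time (hl.fl here is just a placeholder for whichever field you highlight; the angle brackets are URL-encoded):

  &hl=true&hl.fl=text&hl.simple.pre=%3Cb%3E&hl.simple.post=%3C%2Fb%3E

The same parameters can also be set as defaults in your request handler.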
Scott.
On 1/10/10 7:54 AM, efr...@gmail.com
Your solrconfig has a highlighting section. You can make that CDATA
thing whatever you want. I changed it to <b>.
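For reference, that section of the stock solrconfig.xml looks roughly like this with the change applied (from memory, so double-check the exact layout in your file):

  <highlighting>
    <formatter name="html" class="org.apache.solr.highlight.HtmlFormatter" default="true">
      <lst name="defaults">
        <str name="hl.simple.pre"><![CDATA[<b>]]></str>
        <str name="hl.simple.post"><![CDATA[</b>]]></str>
      </lst>
    </formatter>
  </highlighting>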
On Thu, Sep 30, 2010 at 2:54 PM, efr...@gmail.com wrote:
> Hi all -
>
> Does anyone know how to produce solr results where the match term is
> highlighted in bold rather than italic?
>
Hi all -
Does anyone know how to produce solr results where the match term is
highlighted in bold rather than italic?
thanks in advance,
Brad
Hi Yonik,
thanks for your reply.
I entered a bug for this at :
https://issues.apache.org/jira/browse/SOLR-2138
to answer your questions here:
- do you have any warming queries configured?
> no, all autowarmCount values are set to 0 for all caches
- do the cores have documents already, and i
Hi,
With the TermVector component, is there a means of limiting/filtering
the returned information to only those terms found in a query?
Scott.
On Thu, Sep 30, 2010 at 10:49 PM, Sharma, Raghvendra
wrote:
> I have been able to load around a million rows/docs in around 5+ minutes.
> The schema contains around 250+ fields. For the moment, I have kept
> everything as string.
> I am sure there are ways to get better loading speeds than this.
Thanks, but I still need to "store" text at any rate in order to get
the highlighted snippets for the search results list. This isn't a
problem. The issue is how to obtain correct offsets or other mechanisms
for being able to display the original HTML text plus term highlighting
when navigating
On Fri, 2010-10-01 at 12:00 +1000, Scott Yeadon wrote:
> Hi,
>
> The problem is that the article text is HTML and Solr appears to strip
> the HTML by default.
I think what you need to look at is how the fields are defined by
default in your schema. If data sent as HTML is being added to the
sta
Hi,
I have inherited an application which uses Solr search and the PHP Solr
API (http://pecl.php.net/package/solr). While the list of search results
with appropriate highlighting is all good, when selecting a result that
navigates to an individual article the users want to have all the hits
: it turns the plus(es) into spaces. Is this a tomcat setting or a solr
: one to stop this happening? How can I get the plus into solr so it
: actually means a required word.
It's part of the URL specification -- all of your query params (not just
the query string) need to be properly URL escaped.
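For example, a literal plus that is meant as a required-term operator has to be encoded as %2B, so a query for +solr +highlighting would be sent as something like (values illustrative):

  q=%2Bsolr+%2Bhighlighting

where the bare + (or %20) is an encoded space and %2B is the literal plus that actually reaches Solr.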
Recent versions support sharding and handle distribution of your query and
result-set merging. The problem is, it won't help you join across separate
`tables`. The fields you query need to be present in each shard or you'll end
up with an HTTP 400 - undefined field error.
Indeed, there is no e
We cannot really give an answer without knowing your fieldType and query. We
can see that blackberry => blackberri is caused by a stemmer you have,
perhaps a Porter or Snowball stemmer. Anyway, that's normal.
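For example, an analyzer along these lines (a sketch, not necessarily your actual schema) produces exactly that blackberry => blackberri behaviour:

  <fieldType name="text" class="solr.TextField" positionIncrementGap="100">
    <analyzer>
      <tokenizer class="solr.WhitespaceTokenizerFactory"/>
      <filter class="solr.LowerCaseFilterFactory"/>
      <filter class="solr.PorterStemFilterFactory"/>
    </analyzer>
  </fieldType>

The stemmer is applied at both index and query time, so matching still works; only the debug output looks odd.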
-Original message-
From: abhayd
Sent: Thu 30-09-2010 20:32
To: solr-user@
Anyone ever see this error on an import?
Caused by: java.lang.NullPointerException
at
oracle.jdbc.driver.DBConversion._CHARBytesToJavaChars(DBConversion.java:1015)
The Oracle column being converted is VARCHAR2(4000 Char) and there are NULLs
present in the record set.
Environment: Solr
I have also tried using SolrJ to hit my index, and I get this error:
2010-09-30 16:23:14,406 [pool-2-thread-1] DEBUG
org.apache.commons.httpclient.params.DefaultHttpParams - Set parameter
http.useragent = Jakarta Commons-HttpClient/3.0
2010-09-30 16:23:14,406 [pool-2-thread-1] DEBUG
org.apache.com
Almost, you can define an updateRequestProcessorChain that houses multiple
update processors.
  <updateRequestProcessorChain name="dedupe">
    <processor class="org.apache.solr.update.processor.SignatureUpdateProcessorFactory">
      <bool name="enabled">true</bool>
      <str name="signatureField">title_signature</str>
      <bool name="overwriteDupes">true</bool>
      <str name="fields">title</str>
      <str name="signatureClass">org.apache.solr.update.processor.Lookup3Signature</str>
    </processor>
    <processor class="org.apache.solr.update.processor.SignatureUpdateProcessorFactory">
      <bool name="enabled">true</bool>
      <str name="signatureField">content_signature</str>
      <bool name="overwriteDupes">true</bool>
      <str name="fields">content</str>
      <str name="signatureClass">org.
You can add a default setting to your request handler. Read about defaults,
appends and invariants in request handlers defined in your solrconfig.xml.
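For the XSLT case you asked about, that could look something like this (handler and stylesheet names are placeholders; the stylesheet goes under conf/xslt/):

  <requestHandler name="/select" class="solr.SearchHandler">
    <lst name="defaults">
      <str name="wt">xslt</str>
      <str name="tr">example.xsl</str>
    </lst>
  </requestHandler>

Clients can still override these because they are defaults rather than invariants.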
-Original message-
From: Sharma, Raghvendra
Sent: Thu 30-09-2010 19:17
To: solr-user@lucene.apache.org;
Subject: Automatic xslt to resp
You can also sort on a field by using a function query instead of the
"sort=field+desc" parameter. This will not eat up memory. Instead, it
will be slower. In short, it is a classic speed vs. space trade-off.
You'll have to benchmark and decide which you want, and maybe some
fields need the fast
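A rough sketch of the function-query variant (field name is hypothetical, Solr 1.4-style _val_ syntax, untested):

  q=_val_:"price_sort"&fq=<your real query>&fl=*,score

Here the score is simply the field value, so the default score ordering gives you the descending field sort without a separate sort parameter.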
Updates will not show up if they weren't committed, either through a manual
commit or auto commit.
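You either send an explicit <commit/> to the update handler after posting, or configure autoCommit in solrconfig.xml along these lines (values are illustrative):

  <updateHandler class="solr.DirectUpdateHandler2">
    <autoCommit>
      <maxDocs>10000</maxDocs>
      <maxTime>60000</maxTime> <!-- milliseconds -->
    </autoCommit>
  </updateHandler>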
-Original message-
From: Vicedomine, James (TS)
Sent: Thu 30-09-2010 21:04
To: solr-user@lucene.apache.org;
Subject: updating the solr index
Sometimes when I update the solr index (for
Now I feel dumb, it was right there. Thanks! :)
-- Chris
On Thu, Sep 30, 2010 at 3:04 PM, Allistair Crossley wrote:
> it's in the dist folder with the name provided by the wiki page you refer
> to
>
> On Sep 30, 2010, at 3:01 PM, Christopher Gross wrote:
>
> > Where can I get SolrJ? The wiki
I have been reading through all the jira issues and patches, as well as the
wikis and I still have a few things that are not clear to me.
I am currently running with Solr 1.4.1 and using Nutch for my crawling.
Everything is working great, I am using a Nutch plugin to add lat long
information, I
it's in the dist folder with the name provided by the wiki page you refer to
On Sep 30, 2010, at 3:01 PM, Christopher Gross wrote:
> Where can I get SolrJ? The wiki makes reference to it, and says that it is
> a part of the Solr builds that you download, but I can't find it in the jars
> that co
Sometimes when I update the Solr index (for example, post new DOCs with
the same id), old DOC ATTRIBUTE VALUES appear to be available to queries,
but are not visible when the DOC ATTRIBUTE VALUES are listed. In other
words, queries sometimes return results based upon old attribute values?
Thank you in
Where can I get SolrJ? The wiki makes reference to it, and says that it is
a part of the Solr builds that you download, but I can't find it in the jars
that come with it. Can anyone shed some light on this for me?
Thanks!
-- Chris
I think you've probably nailed it Chris, thanks for that, I think I can get
by with a different approach than this.
Do you know if I will get the same memory consumption using the
RandomFieldType vs the TrieInt?
-Jeff
On Thu, Sep 30, 2010 at 12:36 PM, Chris Hostetter
wrote:
>
> : There are 14,6
: There are 14,696,502 documents, we are doing a lot of funky stuff but I'm
: not sure which is most likely to cause an impact. We're sorting on a dynamic
: field there are about 1000 different variants of this field that look like
: "priority_sort_for_", which is an integer field. I've heard that
hi
I am searching for blackberry and for some reason parsedquery shows up as
blackberri.
I checked synonyms but I don't see it anywhere.
rawquerystring: text:blackberry
querystring: text:blackberry
parsedquery: text:blackberri
parsedquery_toString: text:blackberri
Not sure if it's related, but query results are showing up when matched with
"black".
Any help or dire
So how would one set it up to use multiple nodes for building an index? I
see a document for solr + hadoop (http://wiki.apache.org/solr/HadoopIndexing)
and it says it has an example but the example is missing.
Thanks,
Steve Cohen
On Thu, Sep 30, 2010 at 10:58 AM, Jak Akdemir wrote:
> If you wan
I'm really sorry - thank you for the note.
-Original Message-
From: Chris Hostetter [mailto:hossman_luc...@fucit.org]
Sent: Tuesday, September 28, 2010 05:12
To: solr-user@lucene.apache.org
Subject: Re: Grouping in solr ?
: References:
:
: In-Reply-To:
:
: Subject: Grouping in
On Thu, Sep 30, 2010 at 10:47 PM, Sharma, Raghvendra
wrote:
> Is there a way to specify a xslt at the server side, and make it default,
> i.e. whenever a response is returned, that xslt is applied to the response
> automatically...
This should be of help: http://wiki.apache.org/solr/XsltResponseWriter
Thanks for the ideas.
I think after reading enough documentation and articles around solr and xml
indexing in general, I have come around to understand that there is no escaping
denormalization.
However, one tiny thought remains... perhaps my last shot at avoiding
denormalization (of course it
On Thu, Sep 30, 2010 at 10:41 AM, Renee Sun wrote:
>
> Hi -
> I posted this problem but no response, I guess I need to post this in the
> Solr-User forum. Hopefully you will help me on this.
>
> We were running Solr 1.3 for long time, with 130 cores. Just upgrade to Solr
> 1.4, then when we start
Hi,
I ran into this problem again the other night. I've looked through my log
files in more detail, and nothing seems out of place (I stripped user
queries out and included it below). I have the following setup:
1. Indexer has 2 cores. One core gets incremental updates, the other is for
full re-sy
On Thu, Sep 30, 2010 at 1:48 PM, Yonik Seeley
wrote:
> Dynamic field types. You can configure it such that anything ending
> with _latlon is of type LatLonType.
> Perhaps we should do this in the example schema.
Looks like we already have it:
So you should be able to add stuff like home_p
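For reference, the dynamic field in the trunk example schema is roughly this (from memory; double-check the exact names and the location fieldType in your copy):

  <dynamicField name="*_p" type="location" indexed="true" stored="true"/>
  <fieldType name="location" class="solr.LatLonType" subFieldSuffix="_coordinate"/>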
I'm writing some code that pushes data into a Solr instance. I have my
Tomcat (5.5.28) set up to use 2 indexes; I'm hitting the second one for
this.
I try to issue the basic command to clear out the index
(<delete><query>*:*</query></delete>), and I get the error posted below
back.
Does anyone have an idea of what I'm missing o
On Thu, Sep 30, 2010 at 1:40 PM, webdev1977 wrote:
> Or.. do you mean each field must have a unique name, but both be of type
> latLon(solr.LatLonType).
> x,y
> x,y
Yes.
> If the statement directly above is true (I hope that it is not), how does
> one dynamically create fields when adding geota
So it is still possible to do this in the index:
x,y
x,y
But not this:
[0] x,y[0]
[1]x,y[1]
x,y
x,y
If the statement directly above is true (I hope that it is not), how does
one dynamically create fields when adding geotags? They would each have to
get a distinct name, something like a numbered suffix per point.
I think the indexing will be fine. We are looking to use multi-select
faceting, spelling suggestions, and highlighting to name a few. On the front
end (and on separate machines) are .NET web applications that issue queries via
HTTP requests to our searchers.
I can't think of anything else th
On Thu, Sep 30, 2010 at 1:09 PM, webdev1977 wrote:
> 1. I noticed that it said that the type of LatLongType can not be
> mulitvalued. Does that mean that I can not have multiple lat/lon values for
> one document.
That means that if you want to have multiple points per document, each
point must b
I have been able to load around a million rows/docs in around 5+ minutes. The
schema contains around 250+ fields. For the moment, I have kept everything as
string.
I am sure there are ways to get better loading speeds than this.
Will the data type matter for loading speed? Or anything else
Is there a way to specify an XSLT stylesheet at the server side and make it the
default, i.e. whenever a response is returned, that XSLT is applied to the
response automatically...
>
>Two things, one are your DB column uppercase as this would effect the out.
>
>
Interesting, I was under the impression that case does not matter.
From http://wiki.apache.org/solr/DataImportHandler#A_shorter_data-config :
"It is possible to totally avoid the field entries in entities if the
There are 14,696,502 documents; we are doing a lot of funky stuff but I'm
not sure which is most likely to cause an impact. We're sorting on a dynamic
field; there are about 1000 different variants of this field that look like
"priority_sort_for_", which is an integer field. I've heard that
sorting
Hi All,
This is more of an FYI for those wanting to filter and sort by distance, and
have the values returned in the result set after determining a way to do
this with existing code.
Using solr 4.0 an example query would contain the following parameters:
/select?
q=stevenage^0.0
+_val_:"ghhsin(
You need to be able to query the database with the 'Mother of all Queries',
i.e. one that completely flattens all tables into each row.
In other words, the JOIN section of the query will have EVERY table in it, and
depending on your schema, some of them twice or more.
Trying to do that with CS
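As a sketch (table and column names are invented), a DIH entity for such a flattened query looks something like:

  <entity name="item"
          query="SELECT i.id, i.name, c.name AS category
                 FROM item i
                 LEFT JOIN item_category ic ON ic.item_id = i.id
                 LEFT JOIN category c ON c.id = ic.category_id">
    <field column="id" name="id"/>
    <field column="name" name="name"/>
    <field column="category" name="category"/>
  </entity>

The JOIN pulls the child tables' columns into each parent row, which is the denormalization being described.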
Hi -
I posted this problem but got no response; I guess I need to post it in the
Solr-User forum. Hopefully you will help me on this.
We were running Solr 1.3 for a long time, with 130 cores. We just upgraded to
Solr 1.4, and when we started Solr, it took about 45 minutes. The catalina.log
shows Solr
> One strategy that I like, but haven't found in discussion lists is
> auto-limiting cache size/warming based on available resources (similar
> to the way file system caches use free memory). This would allow
> caches to adjust to their memory environment as indexes grow.
I've written such a cache
I don't know if this is a bug or not, but when I write this in
solrconfig.xml:
CustomRank
dedupe
only the first update.processor works; why is the second not working?
If you want to use both of your nodes for building the index (which means
two masters), it makes them unified and collapses the master-slave
relation.
Would you take a look at the link below for the index snapshot problem?
http://wiki.apache.org/solr/SolrCollectionDistributionScripts
On Thu, Sep 30, 2010 at 11:0
I am new to Solr and search technologies. I am playing around with
multiple indexes. I configured Solr for Tomcat, created two tomcat fragments
so that two solr webapps listen on port 8080 in tomcat. I have created two
separate indexes using each webapp successfully.
My documents are very prim
On Thu, Sep 30, 2010 at 8:09 PM, Nicholas Swarr wrote:
>
> Our index is about 10 gigs in size with about 3 million documents. The
> documents range in size from dozens to hundreds of kilobytes. Per week, we
> only get about 50k queries.
> Currently, we use lucene and have one box for our index
Our index is about 10 gigs in size with about 3 million documents. The
documents range in size from dozens to hundreds of kilobytes. Per week, we
only get about 50k queries.
Currently, we use lucene and have one box for our indexer that has 32 gigs of
memory and an 8 core CPU. We have a pair
Hi there solr experts
I have a Solr cluster with two nodes and separate index files for each
node.
Node1 is the master.
Node2 is the slave.
Node1 is the one where I index my data and replicate it to Node2.
How can I index my data at both nodes simultaneously?
Is there any specific setup
Two things: one, are your DB columns uppercase, as this would affect the output?
Second, what does your db-data-config.xml look like?
Regards,
Dave
On 30 Sep 2010, at 03:01, harrysmith wrote:
>
> Looking for some clarification on DIH to make sure I am interpreting this
> correctly.
>
> I have a wide