If you build your index in Hadoop, read this (it is about Cloudera
Search, but to my understanding it should also work with the Solr Hadoop
contrib since 4.7):
http://www.cloudera.com/content/cloudera-content/cloudera-docs/Search/latest/Cloudera-Search-User-Guide/csug_batch_index_to_solr_servers_using_g
I think the key message here is:
"simplistic XSLT caching mechanism is not appropriate for high load scenarios".
As in, maybe this is not really a production-level component. One
exception is given, and it requires not just a sufficient cache lifetime
but also a single transform.
Are you satisfying both of those conditions?
On 5/1/2014 7:30 PM, Christopher Gross wrote:
> The message implies that there is a better way of having XSLT
> transformations. Is that the case, or is there just this perpetual warning
> for normal operations?
When I was using XSLT, I got a warning for every core, even though I had
a cached lif
The message implies that there is a better way of having XSLT
transformations. Is that the case, or is there just this perpetual warning
for normal operations?
-- Chris
On Thu, May 1, 2014 at 5:08 PM, Ahmet Arslan wrote:
> Hi Chris,
>
> Looking at the source code reveals that the warning message is always printed.
My apologies!!
On May 1, 2014 6:56 PM, "Chris Hostetter" wrote:
>
> https://people.apache.org/~hossman/#threadhijack
> Thread Hijacking on Mailing Lists
>
> When starting a new discussion on a mailing list, please do not reply to
> an existing message, instead start a fresh email. Even if you ch
https://people.apache.org/~hossman/#threadhijack
Thread Hijacking on Mailing Lists
When starting a new discussion on a mailing list, please do not reply to
an existing message, instead start a fresh email. Even if you change the
subject line of your email, other mail headers still track which
Shamik:
I'm not sure what the cause of this is, but it definitely seems like a bug
to me. I've opened SOLR-6039 and noted a workaround for folks who don't
care about the new "track" debug info and just want the same debug info
that was available before 4.7...
https://issues.apache.org/jira/
Thanks for the reply, Anshum. Please see my answers to your questions below.
* Why do you want to do a full index everyday?
Not sure I understand what you mean by full index. Every day we want to
import additional documents to the existing ones. Of course, we want to
remove older ones as well,
On 5/1/2014 3:03 PM, Aman Tandon wrote:
> Please check that link
> http://wiki.apache.org/solr/SimpleFacetParameters#facet.method there is
> something mentioned in facet.method wiki
>
> *The default value is fc (except for BoolField which uses enum) since it
> tends to use less memory and is faster
Hi Chris,
Looking at the source code reveals that the warning message is always
printed, independent of the xsltCacheLifetimeSeconds value.
/** singleton */
private TransformerProvider() {
  // tell'em: currently, we only cache the last used XSLT transform,
  // and blindly recompile it once cacheLif
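For reference, the cache lifetime that comment refers to can be configured on the XSLT response writer in solrconfig.xml. A sketch; the 3600-second value is purely illustrative:

```xml
<!-- solrconfig.xml: raise the XSLT cache lifetime so the single cached
     transform is recompiled less often (value here is illustrative) -->
<queryResponseWriter name="xslt"
                     class="org.apache.solr.response.XSLTResponseWriter">
  <int name="xsltCacheLifetimeSeconds">3600</int>
</queryResponseWriter>
```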
Hi Shawn,
Please check that link
http://wiki.apache.org/solr/SimpleFacetParameters#facet.method there is
something mentioned in facet.method wiki
*The default value is fc (except for BoolField which uses enum) since it
tends to use less memory and is faster than the enumeration method when a
fiel
Hi Costi,
I'd recommend SolrJ; parallelize the inserts. Also, it helps to set
reasonable commit intervals.
Just to get a better perspective
* Why do you want to do a full index everyday?
* How much of data are we talking about?
* What's your SolrCloud setup like?
* Do you already have some be
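The advice above (SolrJ plus parallelized inserts) can be sketched as a batching loop feeding a thread pool. This is a hedged illustration only: `sendBatch` is a hypothetical stand-in for a real SolrJ `solrServer.add(batch)` call, and the batch size and thread count are arbitrary.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class ParallelIndexSketch {

    // Split the document list into fixed-size batches so each worker sends
    // one request per batch instead of one request per document.
    static <T> List<List<T>> batches(List<T> docs, int size) {
        List<List<T>> out = new ArrayList<>();
        for (int i = 0; i < docs.size(); i += size) {
            out.add(new ArrayList<>(docs.subList(i, Math.min(i + size, docs.size()))));
        }
        return out;
    }

    // Hypothetical stand-in for solrServer.add(batch) via SolrJ.
    static int sendBatch(List<String> batch) {
        return batch.size();
    }

    public static void main(String[] args) throws Exception {
        List<String> docs = new ArrayList<>();
        for (int i = 0; i < 10; i++) docs.add("doc" + i);

        ExecutorService pool = Executors.newFixedThreadPool(4);
        List<Future<Integer>> pending = new ArrayList<>();
        for (List<String> batch : batches(docs, 3)) {
            pending.add(pool.submit(() -> sendBatch(batch)));
        }
        int sent = 0;
        for (Future<Integer> f : pending) sent += f.get();
        pool.shutdown();
        // A real indexer would issue a single commit here (or rely on
        // autoCommit) rather than committing per batch.
        System.out.println("sent=" + sent);
    }
}
```

The design point is the one from the email: fewer, larger requests in parallel, and infrequent commits.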
Hi guys,
What would you say is the fastest way to import data into SolrCloud?
Our use case: each day, do a single import of a big number of documents.
Should we use SolrJ/DataImportHandler/other? Or perhaps there is a bulk
import feature in Solr? I came upon this promising link:
http://wiki.apache
I found the following discussion very helpful from back in 2012.
http://markmail.org/thread/lkl7ffi77w7hpv6n
Probably the best description I've seen for how facets are actually calculated
in Solr Cloud. Thanks. I presume this is for the most part still accurate.
But, I have a slightly differ
I get this warning when Solr (4.7.2) starts:
WARN org.apache.solr.util.xslt.TransformerProvider - The
TransformerProvider's simplistic XSLT caching mechanism is not appropriate
for high load scenarios, unless a single XSLT transform is used and
xsltCacheLifetimeSeconds is set to a sufficiently high value
When using query screen:
1. chocolate cake results in following:
+(((Category2Name:chocol^40.0 |
ManfProdNum:chocolate | ProductNumber:chocolate | ProductName:chocol^100.0 |
Category3Name:chocol^80.0 | Category4Name:chocol^80.0 | Keywords:chocol^300.0 |
ProductNameGrams:chocolate^100.0 | Catego
On 5/1/2014 10:59 AM, Bob Laferriere wrote:
> I have set q.op=AND in solrconfig.xml and use edismax. I see the match as I
> would expect except when I explicitly try to add binary logic. When I type
>
> termA OR term B
>
> I am still getting the results of termA AND termB.
>
> Am I being stupid or is this just not possible?
I'm working on upgrading our Solr 3 applications to Solr 4. The last piece
of the puzzle involves the change in how fuzzy matching works in the new
version. I'll have to rework how a key feature of our application is
implemented to get the same behavior with the new FuzzyQuery as I did in
the old v
Hi Bob,
Can you paste output of debugQuery=true?
On Thursday, May 1, 2014 8:00 PM, Bob Laferriere wrote:
I have set q.op=AND in solrconfig.xml and use edismax. I see the match as I
would expect except when I explicitly try to add binary logic. When I type
termA OR term B
I am still getting the results of termA AND termB.
I have set q.op=AND in solrconfig.xml and use edismax. I see the match as I
would expect except when I explicitly try to add binary logic. When I type
termA OR term B
I am still getting the results of termA AND termB.
Am I being stupid or is this just not possible?
Thanks,
-Bob
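For anyone debugging a case like this, comparing the parsed queries with debugQuery=true usually shows whether the OR survived query parsing. A hypothetical request sketch; the host, core name, and terms are made up:

```
http://localhost:8983/solr/collection1/select
    ?defType=edismax
    &q=termA OR termB
    &q.op=AND
    &debugQuery=true
```

The parsedquery section of the debug output shows whether both clauses ended up mandatory (+) despite the explicit OR.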
For those Tomcat fans out there, we've released HDS 4.8.0_01,
based on Solr 4.8.0 of course. HDS is pretty much just Apache Solr,
with the addition of a Tomcat based server.
Download: http://heliosearch.com/heliosearch-distribution-for-solr/
HDS details:
- includes a pre-configured (threads, log
Hi Erick,
thank you for your response. You are right, I changed alphaOnlySort to keep
letters and numbers and to remove some articles (a, an, the).
This is the fieldType definition:
Then, I tested each name with the admin UI on each s
What version are you running? This was fixed in a recent release. It can happen
if you hit add core with the defaults on the admin page in older versions.
--
Mark Miller
about.me/markrmiller
On May 1, 2014 at 11:19:54 AM, ryan.cooke (ryan.co...@gmail.com) wrote:
I saw an overseer queue clogged
I saw an overseer queue clogged as well due to a bad message in the queue.
Unfortunately this went unnoticed for a while until there were 130K messages
in the overseer queue. Since it was a production system we were not able to
simply stop everything and delete all Zookeeper data, so we manually de
I’ve added you Keith, go ahead :)
-Stefan
On Thursday, May 1, 2014 at 4:42 PM, Keith Thoma wrote:
> my wiki username is KeithThoma
>
> Please add me to the list so I will be able to make updates to the Solr
> Wiki.
>
>
> Keith Thoma
On 4/30/2014 5:53 PM, Aman Tandon wrote:
> Shawn -> Yes we have some plans to move to SolrCloud, Our total index size
> is 40GB with 11M of Docs, Available RAM 32GB, Allowed heap space for solr
> is 14GB, the GC tuning parameters using in our server
> is -XX:+UseConcMarkSweepGC -XX:+PrintGCApplicat
my wiki username is KeithThoma
Please add me to the list so I will be able to make updates to the Solr
Wiki.
Keith Thoma
Hello All,
I am having a query issue I cannot seem to find the correct answer for. I am
searching against a list of items and returning facets for that list of items.
I would like to group the result set on a field such as a “parentItemId”.
parentItemId maps to other documents within the same c
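A grouped request for a case like this might look like the following; the field names and values are illustrative only:

```
/select?q=itemId:(101 102 103)
    &group=true
    &group.field=parentItemId
```

Each group in the response is then keyed by a parentItemId value, with the matching child items listed under it.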
Hi Yetkin, welcome!
I think Lucene's StandardAnalyzer is the problem you are facing.
Why don't you add another field using StandardAnalyzer and see how it
tokenizes CRD_PROD in the Solr admin GUI?
I forgot to mention the detail, but we can use Lucene's Analyzer in
schema.xml, something like this:
Hi Yetkin,
You are on the right track by examining the analysis page. How is your
query analyzed using the query analyzer?
According to what you pasted, q=CRD should return your example document.
Did you change something in schema.xml and forget to re-start Solr and
re-index?
By the way simple letter
Hello,
Score support is addressed at
https://issues.apache.org/jira/browse/SOLR-5882.
Highlighting is another story. Be aware of
http://heliosearch.org/expand-block-join/ ; it might be somewhat useful for
your problem.
On Thu, May 1, 2014 at 11:32 AM, StrW_dev wrote:
> Hello,
>
> I am trying out bl
Hello everyone,
I am new to Solr and this is my first post on this list.
I have been working on this problem for a couple of days. I tried
everything I found on Google, but it looks like I am missing something.
Here is my problem:
I have a field called: DBASE_LOCAT_NM_TEXT
It contains valu
On 5/1/2014 1:49 AM, Sohan Kalsariya wrote:
> Hello All.
> How do I add a new core in Solr?
> My solr directory is:
> /usr/share/solr-4.6.1/example/solr
> And it has only one collection, i.e. collection1.
> Now to add the new core I added a directory collection2 and in that I
> created 2 more
Hi Ahmet,
Thanks for the information! But as per Solr documentation, group.truncate
is not supported in distributed searches and I am looking for a solution
that can work on SolrCloud.
--
Varun Gupta
On Thu, May 1, 2014 at 4:12 PM, Ahmet Arslan wrote:
> Hi Varun,
>
> I think you can use group.
Hi Varun,
I think you can use group.truncate=true with stats component
http://wiki.apache.org/solr/StatsComponent
If true, facet counts are based on the most relevant document of each group
matching the query. Same applies for StatsComponent. Default is false.
Supported from Solr 3.4.
Hi,
I am using SolrCloud for getting results grouped by a particular field.
Now, I also want to get min and max value for a particular field for each
group. For example, if I am grouping results by city, then I also want to
get the minimum and maximum price for each city.
Is this possible to do w
Hello All.
How do I add a new core in Solr?
My solr directory is:
/usr/share/solr-4.6.1/example/solr
And it has only one collection, i.e. collection1.
Now to add the new core I added a directory collection2 and in that I
created 2 more directories:
/conf & /lib
Now what should be the entry i
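With the example layout in 4.6, the new core also has to be registered in solr.xml (legacy, pre-core-discovery style). A sketch, assuming the directory names above:

```xml
<!-- example/solr/solr.xml: register both cores (legacy 4.x style) -->
<solr persistent="true">
  <cores adminPath="/admin/cores">
    <core name="collection1" instanceDir="collection1" />
    <core name="collection2" instanceDir="collection2" />
  </cores>
</solr>
```

After editing, restart Solr (or use the CoreAdmin API) so the new core is picked up.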
Hello,
I am trying out block joins for my index at the moment as I have many
documents that are mainly variations of the same search content. In my case
denormalization is not an option, so I am using nested documents.
The structure looks like this:
content
filter
boost
requ