Hi all.
Sorry about the title, but I don't know how to be more explicit than
that. I am updating a Solr 1.4 install to Solr 5.1. I went through all
the changes, updated my schema.xml, etc. Everything works (I
re-indexed instead of migrating the existing one). I can search for
documents, no problem
Try this
http://localhost:8983/solr/collection1/suggest?suggest=true&suggest.dictionary=suggest&suggest.build=true&wt=xml&suggest.q=mater
On Thu, Jun 4, 2015 at 11:53 AM, Zheng Lin Edwin Yeo
wrote:
> I've tried to use the solr.SuggestComponent as described on the website, but
> it didn't work.
>
This is the result that I get from the query URL you mentioned. I'm still not
able to get any output:
status: 0
QTime: 0
params: suggest=true, suggest.q=mater, suggest.build=true,
suggest.dictionary=suggest, wt=xml
Regards,
Edwin
On 4 June 2015 at 15:26, Dhanesh Radhakrishnan wrote:
> Try this
>
>
> http://localhost:8983/solr/collect
Let me try to clarify things…
Because you are using Solr 5.1, I cannot see any reason to use the
old spellcheck approach.
If you take a look at the page Erick and I quoted, there is a simple config
example:
>
> mySuggester
> FuzzyLookupFactory
> suggester_fuzzy_dir
>
> DocumentDicti
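For reference, the suggester config the quoted snippet refers to looks roughly
like this in solrconfig.xml (the field name and analyzer type below are
illustrative, not taken from the original message):

  <searchComponent name="suggest" class="solr.SuggestComponent">
    <lst name="suggester">
      <str name="name">mySuggester</str>
      <str name="lookupImpl">FuzzyLookupFactory</str>
      <str name="storeDir">suggester_fuzzy_dir</str>
      <str name="dictionaryImpl">DocumentDictionaryFactory</str>
      <str name="field">title</str>
      <str name="suggestAnalyzerFieldType">string</str>
    </lst>
  </searchComponent>

  <requestHandler name="/suggest" class="solr.SearchHandler" startup="lazy">
    <lst name="defaults">
      <str name="suggest">true</str>
      <str name="suggest.count">10</str>
      <str name="suggest.dictionary">mySuggester</str>
    </lst>
    <arr name="components">
      <str>suggest</str>
    </arr>
  </requestHandler>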
Hi Rob,
Reading your use case, I cannot understand why the Query Time Join is not a
fit for you!
The documents returned by the Query Time Join will be from core1, so
faceting and filter querying on that core would definitely be possible!
Honestly, I can't see your problem!
Cheers
2015-06-04 1:4
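For reference, the general shape of a query-time join (the core and field
names here are placeholders, not the actual schema under discussion):

  /solr/<toCore>/select?q={!join from=<fromField> to=<toField> fromIndex=<fromCore>}<query on fromCore>
      &facet=true&facet.field=<field in toCore>

The query inside the braces runs against the FROM core; the matching documents
are joined back to the core you call /select on, so faceting and filtering
apply to that core.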
Hi!
I have one more question about atomic updates in Solr (Solr 4.4.0).
Is it possible to generate an atomic update by query?
I mean, I want to update those documents whose IDs contain some string.
For example, index has:
Doc1, id="123|a,b"
Doc2, id="123|a,c"
Doc3, id="345|a,b"
Doc4, id="345|a,c,d".
I think I was confused by the old spellcheck approach, which came up more
frequently during my research.
Just to confirm, do I need to re-index the data in order for this new
approach to work if I'm using an existing field?
Regards,
Edwin
On 4 June 2015 at 16:58, Alessandro Benedetti
wrote:
>
Hi,
I've successfully used procrun (see
http://commons.apache.org/proper/commons-daemon/procrun.html) to wrap Solr 5.1
solr.cmd script as a Windows service (I’ve only tested on Windows 2008 R2).
Previously, I was using Procrun to manage Jetty services running the Solr.war
from older versions
Hi,
Please provide your inputs on optimize and commit running in the background.
Your suggestion will be really helpful.
Thanks,
Modassar
On Tue, Jun 2, 2015 at 6:05 PM, Modassar Ather
wrote:
> Erick! I could not find any underlying setting of 10 minutes.
> It is not only optimize but commit is al
If you are using an existing indexed field to provide suggestions, you
simply need to build the suggester and start using it!
No re-indexing needed.
Cheers
2015-06-04 11:01 GMT+01:00 Zheng Lin Edwin Yeo :
> I think I'm confused with the old spellcheck approach that came out more
> frequently d
I have an indexing issue. While indexing, IOwait is high on the Solr server,
and so is the load.
On Thu, 2015-06-04 at 16:45 +0530, Midas A wrote:
> I have some indexing issue . While indexing IOwait is high in solr server
> and load also.
Might be because you commit too frequently. How often do you do that?
- Toke Eskildsen, State and University Library, Denmark
I think this mail is really poor in terms of details.
Which version of Solr are you using?
Architecture?
Load expected?
Indexing approach?
When does your problem happen?
The more detail you give, the easier it will be to provide help.
Cheers
2015-06-04 12:19 GMT+01:00 Toke Eskildsen :
> On Thu, 2015-0
Thanks for replying. Below is the commit frequency:
6 false
60
On Thu, Jun 4, 2015 at 4:49 PM, Toke Eskildsen
wrote:
> On Thu, 2015-06-04 at 16:45 +0530, Midas A wrote:
> > I have some indexing issue . While indexing IOwait is high in solr server
> > and load also.
>
> Might be because
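For reference, the commit settings under discussion live in solrconfig.xml and
typically look like this (the values below are illustrative, not the poster's
actual configuration):

  <autoCommit>
    <maxDocs>10000</maxDocs>
    <maxTime>60000</maxTime>
    <openSearcher>false</openSearcher>
  </autoCommit>

  <autoSoftCommit>
    <maxTime>5000</maxTime>
  </autoSoftCommit>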
Thanks Alessandro,
Please find the info inline.
Which version of Solr are you using: 4.2.1
Architecture: master-slave
Load expected: currently it is 7-15, should be below 1
Indexing approach: using DIH
When does your problem happen: we run a delta import every 10 mins, full
index onc
Honestly, your auto-commit configuration does not seem alarming at all!
Can you give me more details regarding:
Load expected: currently it is 7-15, should be below 1
What does this mean? Without a unit of measure I find it hard to understand
plain numbers :)
I was expecting the number of documents per
Dear Erick,
That document helped me to build multiple suggesters.
But there is still one problem that I faced.
When I use both suggesters with the same lookupImpl,
AnalyzingInfixLookupFactory
AnalyzingInfixLookupFactory
Solr throws an error:
Caused by: java.lang.RuntimeException at
org.apache.
Hi there,
We have installed Apache Solr 3.6.2 for our Magento Enterprise sales
platform (unfortunately it's the only version Enterprise supports),
however, when navigating the admin interface we keep stumbling across
stack traces:
PWC6033: Unable to compile class for JSP
PWC6197: An error oc
Try it for yourself and see if it works, Alessandro. Not only can't I get
facets, but I even get field errors when I run such join queries:
select?fl=title&q={!join from=id to=id fromIndex=Tags}titleNormalized:pdf
undefined field titleNormalized
400
On Thu, Jun 4, 2015 at 5:19 AM, Alessandro Be
Thanks Erick.
What about at query time? If I index my Boolean and it has one of the
variations of "t", "T" or "1", what should my query be to get a hit on
"true"? q=MyBoolField: ? What should the value of be when I
want to check if the field has a "true" and when I need to check if it has
a "f
Hi Alessandro,
On Thu, Jun 4, 2015 at 5:19 PM, Alessandro Benedetti <
benedetti.ale...@gmail.com> wrote:
> Honestly your auto-commit configuration seems not alarming at all!
> Can you give me more details regarding :
>
> Load expected : currently it is 7- 15 should be below 1
> *[Abhishek] : s
On 6/4/2015 1:22 AM, Wouter Admiraal wrote:
> When I turn on debug, I get the following:
>
> "debug": {
> "rawquerystring": "Food",
> "querystring": "Food",
> "parsedquery": "(+DisjunctionMaxQuery((label:Food^3.0)) ())/no_coord",
> "parsedquery_toString": "+(label:Food^3.0) ()",
> "expla
On 6/4/2015 5:32 AM, Adam Hall wrote:
> We have installed apache solr 3.6.2 for our Magento Enterprise sales
> platform (unfortunately its the only version Enterprise supports),
> however, when navigating the admin interface we keep stumbling across
> stack traces:
>
> PWC6033: Unable to comp
On 6/4/2015 5:15 AM, Midas A wrote:
> I have some indexing issue . While indexing IOwait is high in solr server
> and load also.
My first suspect here is that you don't have enough RAM for your index size.
* How many total docs is Solr handling (all cores)?
* What is the total size on disk of all
Let's try to make some points clear:
Index TO: the one you are using to call the select request handler.
Index FROM: Tags.
Is titleNormalized present in the "Tags" index? Because that is where the
query will run.
The documents in Tags satisfying the query will be joined with the index TO.
Th
Hi Shawn,
Please find comments inline.
On Thu, Jun 4, 2015 at 6:48 PM, Shawn Heisey wrote:
> On 6/4/2015 5:15 AM, Midas A wrote:
> > I have some indexing issue . While indexing IOwait is high in solr server
> > and load also.
>
> My first suspect here is that you don't have enough RAM for your
My requirement is to join core1 onto core0. Restating the requirements
again, I have 2 cores:
core0
  field: id
  field: text
core1
  field: id
  field: tag
I want to
1) query the text field of core0, together with filters
2) use the {id} of matches (which can be >>10K) to retrieve the docs
Hi, thanks for the response.
Label field:
I can surely optimize the above config a bit, maybe only use one
for both query and index. But for now, this is what it
does.
Just
Hi Rob,
According to your use case, you have to
call the /select from *core1* in this way:
*core1*/select?fl=title&q={!join from=id to=id fromIndex=*core0*}
titleNormalized:pdf&facet=true&facet.field=tags
Hope this clarifies your problem.
Cheers
2015-06-04 15:00 GMT+01:00 Robust Links :
> m
Please remember this:
"to be used as the basis for a suggestion, the field must be stored"
From the official guide.
Cheers
2015-06-04 11:19 GMT+01:00 Alessandro Benedetti
:
> If you are using an existing indexed field to provide suggestions, you
> simply need to build the suggester and star
Thank you so much for your advice.
Regards,
Edwin
On 4 June 2015 at 22:30, Alessandro Benedetti
wrote:
> Please remember this :
>
> "to be used as the basis for a suggestion, the field must be stored"
>
> From the official guide.
>
> Cheers
>
> 2015-06-04 11:19 GMT+01:00 Alessandro Benedetti <
Hi,
Would like to check, are we able to use the Collection API or any other
method to list all the collections in the cluster together with the number
of records in each of the collections in one output?
Currently, I only know of the List Collections
/admin/collections?action=LIST. However, this
The empty parentheses in the parsed query say something odd is going on
with query-time analysis, which is essentially generating an empty term.
That may not be the cause of your specific issue, but at least it says
that something is unexplained here.
Generally, there is an asymmetry between the
That worked, but I seem unable to run
1) phrase queries, i.e.
*core1*/select?fl=title&q={!join from=id to=id fromIndex=*core0*}
titleNormalized:"*text pdf*"&facet=true&facet.field=tags
or 2) run filters on core0:
*core1*/select?fl=title&q={!join from=id to=id fromIndex=*core0*}
titleNormalized:"*te
Hi,
Based on a few hours of googling, I concluded that there is no class in Solr 5.1
that can parse the JSON response of the Term Vector Component.
I am not sure if it is fine to create an issue on the Solr JIRA website and
make a patch to address it.
I would be grateful for any advice on that.
Thanks for the reply.
So, as an aside, should I remove the solr.WhitespaceTokenizerFactory
and solr.WordDelimiterFilterFactory from the query analyzer part?
Any idea in which direction I should poke around? I deactivated dismax
for now, but would really like to use it.
Wouter Admiraal
2015-06
The debug parsed queries for the various ways you've tried it would be helpful.
dismax uses the query analysis of each of the fields, and the fact that label
does not appear lowercased indicates something fishy, like changing the
definition after indexing. Try the admin analysis UI for tha
There is no equivalent of, say, a SQL UPDATE ... WHERE ..., so no, atomic
updates by query...
Best,
Erick
On Thu, Jun 4, 2015 at 2:49 AM, Ксения Баталова wrote:
> Hi!
>
> I have one more question about atomic updates in Solr (Solr 4.4.0).
> Is it posible to generate atomic update by query?
> I mean I
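For reference, atomic updates are addressed per document id; a minimal sketch
of the XML form (the field name "someField" is illustrative, the id is taken
from the poster's example):

  <add>
    <doc>
      <field name="id">123|a,b</field>
      <field name="someField" update="set">new value</field>
    </doc>
  </add>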
On 6/4/2015 7:38 AM, Midas A wrote:
> On Thu, Jun 4, 2015 at 6:48 PM, Shawn Heisey wrote:
>
>> On 6/4/2015 5:15 AM, Midas A wrote:
>>> I have some indexing issue . While indexing IOwait is high in solr server
>>> and load also.
>> My first suspect here is that you don't have enough RAM for your in
Can't get any failures to happen on my end so I really haven't a clue.
Best,
Erick
On Thu, Jun 4, 2015 at 3:17 AM, Modassar Ather wrote:
> Hi,
>
> Please provide your inputs on optimize and commit running as background.
> Your suggestion will be really helpful.
>
> Thanks,
> Modassar
>
> On Tue,
There shouldn't be any limitation. You haven't provided the full stack trace,
so there's not a lot to say.
Do be a little careful, though, since the parameters are slightly different
for AnalyzingInfix, i.e. indexPath rather than storeDir.
Best,
Erick
On Thu, Jun 4, 2015 at 4:55 AM, Dhanesh Radh
Have you tried it? Really, it should take you 2 minutes to add a doc and see.
I'd guess it follows the same rules.
Best,
Erick
On Thu, Jun 4, 2015 at 5:29 AM, Steven White wrote:
> Thanks Erick.
>
> What about at query time? If I index my Boolean and it has one of the
> variations of "t", "T"
Not in a single call that I know of. These are really orthogonal
concepts. Getting the cluster status merely involves reading the
Zookeeper clusterstate whereas getting the total number of docs for
each would involve querying each collection, i.e. going to the Solr
nodes themselves. I'd guess it's
Hi folks -
Quick question:
Is Tomcat needed on Windows Server 2012 before I install Solr 5.1?
Thanks
Doug
Erick,
Thank you so much. It became a bit clearer.
It was decided to upgrade Solr to 5.2 and use SolrCloud in our next release.
I think I'll write here about it again :)
_ _
Batalova Kseniya
I have to ask then why you're not using SolrCloud with multiple shards? It
seems to me that that gives
Sorry Shawn,
a) Total docs Solr is handling is 3 million.
b) Index size is only 5 GB.
On Thu, Jun 4, 2015 at 9:35 PM, Shawn Heisey wrote:
> On 6/4/2015 7:38 AM, Midas A wrote:
> > On Thu, Jun 4, 2015 at 6:48 PM, Shawn Heisey
> wrote:
> >
> >> On 6/4/2015 5:15 AM, Midas A wrote:
> >>> I have
Is it planned soon?
Or maybe not so soon...
_ _ _
Batalova Kseniya
There is no equivalent of, say a SQL update...where... so no, atomic
updates by query...
Best,
Erick
On Thu, Jun 4, 2015 at 2:49 AM, Ксения Баталова
wrote:
> Hi!
>
> I have one more question about atomic updates in Solr (Solr 4
NP. It's something of a step when moving to SolrCloud to "let go" of the
details you've had to (painfully) pay attention to, but worth it. The price is,
of course, learning to do things a new way ;)...
Best,
Erick
On Thu, Jun 4, 2015 at 10:04 AM, Ксения Баталова wrote:
> Erick,
>
> Thank you so
On 6/4/2015 11:12 AM, Midas A wrote:
> sorry Shawn ,
>
> a) Total docs solr is handling is 3 million .
> b) index size is only 5 GB
If your total index size is only 5GB, then there should be no need for a
30GB heap. For that much index, I'd start with 4GB, and implement GC
tuning.
A high iowait
Shawn,
Please find the log. Give me some sense of what is happening.
On Thu, Jun 4, 2015 at 10:56 PM, Shawn Heisey wrote:
> On 6/4/2015 11:12 AM, Midas A wrote:
> > sorry Shawn ,
> >
> > a) Total docs solr is handling is 3 million .
> > b) index size is only 5 GB
>
> If your total index size is on
We are indexing around 5 docs per 10 min.
On Thu, Jun 4, 2015 at 11:02 PM, Midas A wrote:
> Shwan,
>
> Please find the log . give me some sense what is happening
>
> On Thu, Jun 4, 2015 at 10:56 PM, Shawn Heisey wrote:
>
>> On 6/4/2015 11:12 AM, Midas A wrote:
>> > sorry Shawn ,
>> >
>> >
I thought splitshard was supposed to get rid of the original shard, shard1,
in this case. Am I missing something? I was expecting the only two
remaining shards to be shard1_0 and shard1_1.
The REST call I used was
/admin/collections?collection=default-collection&shard=shard1&action=SPLITSHARD
if t
No servlet container is needed at all. We're moving away from
distributing a war file in the future; Solr 5.x still distributes a war
for back-compat reasons. The preferred method is now to use the
bin/solr start script.
Under the covers this still uses Jetty, but that is now an
"implementation det
Hi Mike,
Once the SPLITSHARD call completes, it just marks the original shard as
inactive, i.e. it no longer accepts requests. So yes, you would have to use
DELETESHARD (
https://cwiki.apache.org/confluence/display/solr/Collections+API#CollectionsAPI-api7)
to clean it up.
As far as what you see on
: I took a quick look at the code and it _looks_ like any string
: starting with "t", "T" or "1" is evaluated as true and everything else
: as false.
correct and documented...
https://cwiki.apache.org/confluence/display/solr/Field+Types+Included+with+Solr
: sortMissingLast determines sort order
Thanks. I thought it worked like that, but didn't want to jump to
conclusions.
On Thu, Jun 4, 2015 at 1:42 PM, Anshum Gupta wrote:
> Hi Mike,
>
> Once the SPLITSHARD call completes, it just marks the original shard as
> Inactive i.e. it no longer accepts requests. So yes, you would have to use
>
Not to my knowledge. In Solr terms this would be a _very_ heavyweight
operation, potentially re-indexing millions and millions of documents.
Imagine if your q were id:* for instance. Plus routing that to all
shards and dealing with other updates coming in would be a nightmare.
Best,
Erick
On Thu,
: What about at query time? If I index my Boolean and it has one of the
: variations of "t", "T" or "1", what should my query be to get a hit on
: "true"? q=MyBoolField: ? What should the value of be when I
: want to check if the field has a "true" and when I need to check if it has
: a "false
Hi,
I am trying to upgrade Solr 4.x to Solr 5.1.0. I am using Maven to compile
Solr.
Previously, I was packaging my custom-plugin jars with solr.war and
everything worked fine.
With Solr 5.X, however, I am finding it hard to include my custom-plugin
jars in the classpath. I am using bin/solr scrip
Hope I'll succeed :)
Anyway, the solr-user community surprised me in a good way.
Thanks again.
_ _
Batalova Kseniya
NP. It's something of a step when moving to SolrCloud to "let go" of the
details you've had to (painfully) pay attention to, but worth it. The price is,
of course, learning to do thin
On 6/4/2015 12:01 PM, Vaibhav Bhandari wrote:
> With Solr 5.X, however, I am finding it hard to include my custom-plugin
> jars in the classpath. I am using bin/solr script to start up Solr.
>
> I have tried a few things:
>
>1. Passing the path of custom jars as -Djava.class.path=
>2. Passi
Hi!
Need help with Solr 4.4.0 + Tomcat 7 + cURL.
I send many elementary select queries to a Solr core:
http://localhost/solr/Core1/select?q=id:("TestDoc1")&wt=xml&indent=true
Maybe this is a Tomcat or cURL problem: after a couple of seconds of regular
queries it returns an empty response.
Requests are sen
Oh, I see.
Maybe it's not such a good idea :)
Thanks.
_ _
Batalova Kseniya
Not to my knowledge. In Solr terms this would be a _very_ heavyweight
operation, potentially re-indexing millions and millions of documents.
Imagine if your q were id:* for instance. Plus routing that to all
shards and
A follow-up question: I see docValues has been there since Lucene 4.0, so can
I use docValues with my current SolrCloud version of 4.8.x?
The reason I am asking is because I have the deployment mechanism and index
security (using Tomcat Valves) all built out based on Tomcat, which I need to
figure o
: passed in as a Properties object to the CD constructor. At the moment,
: you can't refer to a property defined in solrcore.properties within your
: core.properties file.
but if you look at it from a historical context, that doesn't really
matter for the purpose that solrcore.properties was
Thanks for the quick response Shawn. That worked!
I wish this was documented though.
-Vaibhav
On Thu, Jun 4, 2015 at 11:22 AM, Shawn Heisey wrote:
> On 6/4/2015 12:01 PM, Vaibhav Bhandari wrote:
> > With Solr 5.X, however, I am finding it hard to include my custom-plugin
> > jars in the classp
Hi,
I am seeing some unexpected behavior when adding a new machine to my cluster. I
am running 4.10.3.
My setup has multiple collections, each collection has a single shard. I am
using core auto discovery on the hosts (my deployment mechanism ensures that
the directory structure is created and
The reason we wanted to do a single call is to improve performance,
as our application needs to list the total number of records in each of
the collections, and the number of records that match the query in each of
the collections.
Currently we are querying each collection one by one to r
Why do you stop the cluster while adding a node? This is the reason why
this is happening. When the first node of a solr cluster starts up, it
waits for some time to see other nodes but if it finds none then it goes
ahead and becomes the leader. If other nodes were up and running then peer
sync and
Since you are moving to Solr 5.x, have you seen
https://cwiki.apache.org/confluence/display/solr/Adding+Custom+Plugins+in+SolrCloud+Mode
?
On Fri, Jun 5, 2015 at 4:03 AM, Vaibhav Bhandari <
vaibhav.bhandar...@gmail.com> wrote:
> Thanks for the quick response Shawn. That worked!
>
> I wish this wa
Hi,
Would like to check: do all the collections in Solr have an ID that is
stored internally which we can refer to?
Currently I believe we are using the name of the collection when we are
querying the collection, and this can be modified as and when required.
Whenever this is changed,
Have you considered spawning a bunch of threads, one per collection
and having them all run in parallel?
Best,
Erick
On Thu, Jun 4, 2015 at 4:52 PM, Zheng Lin Edwin Yeo
wrote:
> The reason we wanted to do a single call is to improve on the performance,
> as our application requires to list the t
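A rough SolrJ sketch of that suggestion, querying every collection in parallel
and collecting the counts (the collection names, ZooKeeper address and thread
pool size are assumptions for illustration; this targets the 5.x SolrJ API):

  import java.util.*;
  import java.util.concurrent.*;
  import org.apache.solr.client.solrj.SolrQuery;
  import org.apache.solr.client.solrj.impl.CloudSolrClient;

  public class CollectionCounts {
      public static void main(String[] args) throws Exception {
          List<String> collections = Arrays.asList("collection1", "collection2");
          ExecutorService pool = Executors.newFixedThreadPool(collections.size());
          try (CloudSolrClient client = new CloudSolrClient("localhost:2181")) {
              Map<String, Future<Long>> counts = new LinkedHashMap<>();
              for (String c : collections) {
                  counts.put(c, pool.submit(() -> {
                      // rows=0: we only need numFound, not the documents themselves
                      SolrQuery q = new SolrQuery("*:*");
                      q.setRows(0);
                      return client.query(c, q).getResults().getNumFound();
                  }));
              }
              for (Map.Entry<String, Future<Long>> e : counts.entrySet()) {
                  System.out.println(e.getKey() + ": " + e.getValue().get());
              }
          } finally {
              pool.shutdown();
          }
      }
  }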
And to pile on Shalin's comments, there is absolutely no reason
to try to pre-configure the replica on the new node, and quite
a bit of downside as you are finding. Just add the new node
without any cores and use the ADDREPLICA command to create
replicas.
Best,
Erick
On Thu, Jun 4, 2015 at
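For reference, adding a replica to the new node is a single Collections API
call along these lines (the collection, shard and node values are placeholders):

  /admin/collections?action=ADDREPLICA&collection=collection1&shard=shard1&node=newhost:8983_solr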
In a word, no. Why do you change the collection name? If you're doing
some sort of collection switching, consider collection aliasing.
Best,
Erick
On Thu, Jun 4, 2015 at 8:53 PM, Zheng Lin Edwin Yeo
wrote:
> Hi,
>
> Would like to check, does all the collections in Solr have an ID that is
> s
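For reference, a collection alias is created with the Collections API; the
names below are placeholders:

  /admin/collections?action=CREATEALIAS&name=myalias&collections=collection1

Queries then use "myalias", and the alias can be repointed to a different
collection later without clients having to change anything.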
I'm trying to write a SolrJ program in Java to read and consolidate all the
information into a JSON file. The client will just need to call this SolrJ
program and read this JSON file to get the details. But the problem is we
are still querying Solr once for each collection, just that this time
Hi Erick,
The reason is we want to allow flexibility to change the collection name
based on the needs of the users.
For the collection aliasing, does this mean that the user will reference
the collection by the alias name instead of the collection name, but at the
backend we will still reference
I see docValues has been there since Lucene 4.0. so can I use docValues with
my current solr cloud version of 4.8.x
The reason I am asking is because, I have deployment mechanism and securing
the index (using Tomcat valve) all built out based on Tomcat which I need
figure out all the way again wi
On 6/4/2015 11:39 PM, Zheng Lin Edwin Yeo wrote:
> The reason is we want to allow flexibility to change the collection name
> based on the needs of the users.
>
> For the collection aliasing, does this mean that the user will reference
> the collection by the alias name instead of the collection n
On 6/4/2015 11:42 PM, pras.venkatesh wrote:
> I see docValues has been there since Lucene 4.0. so can I use docValues with
> my current solr cloud version of 4.8.x
>
> The reason I am asking is because, I have deployment mechanism and securing
> the index (using Tomcat valve) all built out based
Hi,
According to the wiki, I learned that integrating Solr (starting with
release 5.0.0) with Tomcat cannot be done. Should I run Solr as a
standalone server?
Thanks,
Chandima
On 6/5/2015 12:32 AM, Chandima Dileepa wrote:
> According to the wiki, I got to know that integrating Solr (starting
> release 5.0.0) with tomcat cannot be done. Should I run Solr as a
> standalone server?
Yes.
There's a lot more detail, but read this first:
https://wiki.apache.org/solr/WhyNoWar