Hi,
We created the new phonetic filter and it is working great on our products.
Most of our suppliers are Indian, so it is quite helpful for us to provide
exact results, e.g.
1) rikshaw still finds the suppliers of rickshaw
2) telefone still finds the suppliers of telephone
We also
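For reference, a minimal Lucene-level sketch of the kind of phonetic analysis
chain that produces matches like the ones above, assuming DoubleMetaphone
encoding; the field and class names are illustrative, not the poster's actual
schema.

import java.io.IOException;
import org.apache.commons.codec.language.DoubleMetaphone;
import org.apache.lucene.analysis.Analyzer;
import org.apache.lucene.analysis.TokenStream;
import org.apache.lucene.analysis.core.LowerCaseFilter;
import org.apache.lucene.analysis.phonetic.PhoneticFilter;
import org.apache.lucene.analysis.standard.StandardTokenizer;
import org.apache.lucene.analysis.tokenattributes.CharTermAttribute;

public class PhoneticDemo {
  public static void main(String[] args) throws IOException {
    Analyzer analyzer = new Analyzer() {
      @Override
      protected TokenStreamComponents createComponents(String fieldName) {
        StandardTokenizer source = new StandardTokenizer();
        TokenStream chain = new LowerCaseFilter(source);
        // inject=false replaces each term with its phonetic code instead of adding it
        chain = new PhoneticFilter(chain, new DoubleMetaphone(), false);
        return new TokenStreamComponents(source, chain);
      }
    };
    for (String word : new String[] {"rickshaw", "rikshaw", "telephone", "telefone"}) {
      try (TokenStream ts = analyzer.tokenStream("supplier_name", word)) {
        CharTermAttribute term = ts.addAttribute(CharTermAttribute.class);
        ts.reset();
        while (ts.incrementToken()) {
          System.out.println(word + " -> " + term); // each misspelling should share a code with the correct spelling
        }
        ts.end();
      }
    }
    analyzer.close();
  }
}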
Hi Erick,
In that issue you forwarded to me, they want to make one token from all
tokens received from the token stream, but in my case I want to keep the
tokens the same and create an extra new token which is a concatenation of
all the tokens.
> I'd guess, is the case
> here. I mean do you really want to concate
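A hedged sketch of one way to do that: a custom TokenFilter that passes every
token through unchanged and, once the stream is exhausted, appends a single
extra token that is the concatenation of everything it saw. The class name is
made up; this is not an existing Solr/Lucene filter.

import java.io.IOException;
import org.apache.lucene.analysis.TokenFilter;
import org.apache.lucene.analysis.TokenStream;
import org.apache.lucene.analysis.tokenattributes.CharTermAttribute;
import org.apache.lucene.analysis.tokenattributes.PositionIncrementAttribute;

public final class ConcatAppendingFilter extends TokenFilter {

  private final CharTermAttribute termAtt = addAttribute(CharTermAttribute.class);
  private final PositionIncrementAttribute posIncAtt =
      addAttribute(PositionIncrementAttribute.class);
  private final StringBuilder concat = new StringBuilder();
  private boolean inputExhausted = false;

  public ConcatAppendingFilter(TokenStream input) {
    super(input);
  }

  @Override
  public boolean incrementToken() throws IOException {
    if (!inputExhausted) {
      if (input.incrementToken()) {
        concat.append(termAtt.buffer(), 0, termAtt.length()); // remember the term
        return true;                                          // and emit it unchanged
      }
      inputExhausted = true;
      if (concat.length() > 0) {
        clearAttributes();
        termAtt.setEmpty().append(concat);   // the one extra concatenated token
        posIncAtt.setPositionIncrement(1);
        return true;
      }
    }
    return false;
  }

  @Override
  public void reset() throws IOException {
    super.reset();
    concat.setLength(0);
    inputExhausted = false;
  }
}

Wrapped in a small TokenFilterFactory, it could sit at the end of the field's
analysis chain, so the original tokens keep their positions and only one extra
token is added.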
I'm using the FreeTextLookupFactory in my implementation now.
Yes, now it can suggest part of the field from the middle of the content.
I read that this implementation is able to consider the previous tokens
when making the suggestions. However, when I try to enter a search phrase,
it seems that
This kind of feedback is _very_ valuable, many thanks to all.
I may be the one committing this, but Upayavira is doing all the work
so hats off to him.
And it's time for anyone who likes UI work to step up and contribute ;).
I'll be happy to commit changes. Just link any JIRAs (especially ones
wi
Is there any API to upload a file for ExternalFileField to the /data/
directory, or any good practice for this?
My application and the Solr server are physically separated in two places.
The application will calculate a score and generate a file for
ExternalFileField.
Thanks for any input.
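For what it's worth, a hedged sketch of the file ExternalFileField reads: a
plain text file named external_<fieldName> in the core's data directory, one
key=value pair per line. The path and field name below are assumptions; as far
as I know there is no dedicated upload API, so getting the file onto the Solr
host (scp, rsync, a shared mount) is a separate step.

import java.io.BufferedWriter;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;

public class WriteExternalScores {
  public static void main(String[] args) throws Exception {
    // Assumed location: <core>/data/external_rank for a field named "rank".
    Path target = Paths.get("/var/solr/mycore/data/external_rank");
    try (BufferedWriter w = Files.newBufferedWriter(target, StandardCharsets.UTF_8)) {
      w.write("doc1=0.75\n");   // uniqueKey=score, one document per line
      w.write("doc2=1.25\n");
    }
  }
}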
There's no a-priori reason you should need to do this. What's your
evidence here? What
behaviors do you see when you try this? Details matter as Hoss would
say. Give us an
example of what changes in the XML file (and/or schema) you see that
you think require
re-indexing.
Of course if you're adding
This one's going to be confusing to explain.
The ability of filters to operate on wildcarded terms at query time is limited
to some specific filters. If you're going into the code, see
MultiTermAware-derived
filters.
Most generally, MultiTermAware filters are only valid for filters
that d
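For reference, a rough sketch of the opt-in pattern as of the 4.x/5.x analysis
API; the factory name is made up, but built-in factories such as
LowerCaseFilterFactory follow the same shape.

import java.util.Map;
import org.apache.lucene.analysis.TokenStream;
import org.apache.lucene.analysis.core.LowerCaseFilter;
import org.apache.lucene.analysis.util.AbstractAnalysisFactory;
import org.apache.lucene.analysis.util.MultiTermAwareComponent;
import org.apache.lucene.analysis.util.TokenFilterFactory;

public class MyLowercaseFilterFactory extends TokenFilterFactory
    implements MultiTermAwareComponent {

  public MyLowercaseFilterFactory(Map<String, String> args) {
    super(args);
    if (!args.isEmpty()) {
      throw new IllegalArgumentException("Unknown parameters: " + args);
    }
  }

  @Override
  public TokenStream create(TokenStream input) {
    return new LowerCaseFilter(input);
  }

  // Returning a factory here is the opt-in: the query parser will apply it to
  // wildcarded terms because it never adds, removes, or splits tokens.
  @Override
  public AbstractAnalysisFactory getMultiTermComponent() {
    return this;
  }
}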
Hello,
Could anyone receive my email? I'm new to Solr and I have some questions;
could anyone help me with some answers?
I index files directly by extracting the file content using the Tika
embedded in Solr. There is no problem with normal files. But when I index a
Word document that embeds an anot
My requirement is to read the XML from a CLOB field and parse it to get the
entity.
The data config is as shown below. I am trying to map two fields 'event' and
'policyNumber' for the entity 'catreport'.
I am g
On Wed, Jun 17, 2015, at 02:49 PM, TK Solr wrote:
> On 6/17/15, 2:35 PM, Upayavira wrote:
> > Do you have a managed-schema file, or such?
> >
> > You may have used the configs that have a managed schema, i.e. one that
> > allows you to change the schema via HTTP.
> I do see a file named "managed-
The intention very much is to do a collections API pane. In fact, I've
got a first pass made already that can create/delete collections, and
show the details of a collection and its replicas. But I want to focus
on getting the feature-for-feature replacement working first. If we
don't do that, then
We can get things like this in. If you want, feel free to have a go. As
much as I want to work on funky new stuff, I really need to focus on
finishing stuff first.
Upayavira
On Wed, Jun 17, 2015, at 02:53 PM, Anshum Gupta wrote:
> Also, while you are at it, it'd be good to get SOLR-4777 in so the
Also, while you are at it, it'd be good to get SOLR-4777 in so the Admin UI
is correct when users look at the SolrCloud graph after an operation that
can leave a slice INACTIVE, e.g. shard split.
On Wed, Jun 17, 2015 at 2:50 PM, Anshum Gupta
wrote:
> This looks good overall and thanks for migrat
This looks good overall and thanks for migrating it to something that more
developers can contribute to.
I started solr (trunk) in cloud mode using the bin scripts and opened the
new admin UI. The section for 'cores' says 'No cores available. Go and
create one'.
Starting with Solr 5.0, we officially st
On 6/17/15, 2:35 PM, Upayavira wrote:
Do you have a managed-schema file, or such?
You may have used the configs that have a managed schema, i.e. one that
allows you to change the schema via HTTP.
I do see a file named "managed-schema" without ".xml" extension in the conf
directory.
Its content
I will check with Henry about this problem again.
Best,
Soonho
From: Ramkumar R. Aiyengar [andyetitmo...@gmail.com]
Sent: Wednesday, June 17, 2015 5:08 PM
To: solr-user@lucene.apache.org
Subject: Re: Please help test the new Angular JS Admin UI
I started wi
Do you have a managed-schema file, or such?
You may have used the configs that have a managed schema, i.e. one that
allows you to change the schema via HTTP.
Upayavira
On Wed, Jun 17, 2015, at 02:33 PM, TK Solr wrote:
> With Solr 5.2.0, I ran:
> bin/solr create -c foo
> This created solrconfig.x
With Solr 5.2.0, I ran:
bin/solr create -c foo
This created solrconfig.xml in server/solr/foo/conf directory.
Other configuration files such as synonyms.txt are found in this directory too.
But I don't see schema.xml. Why is schema.xml handled differently?
I am guessing
server/solr/configsets/sam
Thanks Ramkumar, will dig into these next week.
Upayavira
On Wed, Jun 17, 2015, at 02:08 PM, Ramkumar R. Aiyengar wrote:
> I started with an empty Solr instance and Firefox 38 on Linux. This is
> the
> trunk source..
>
> There's a 'No cores available. Go and create one' button available in the
>
I started with an empty Solr instance and Firefox 38 on Linux. This is the
trunk source..
There's a 'No cores available. Go and create one' button available in the
old and the new UI. In the old UI, clicking it goes to the core admin, and
pops open the dialog for Add Core. The new UI only goes to
We regularly create a SOLR index from XML files, using the DIH with a suitably
edited xml-data-config.xml. However, whenever new XML files become available, it
seems like we have to rebuild the entire index again using the Data Import Handler.
Are we missing something? Should it be possible to add new
Hi! I am a Solr user having an issue with matches on searches using the
wildcard operators, specifically when the searches include a wildcard
operator with a number. Here is an example.
My query will look like (productTitle:*Sidem2*) and match nothing, when it
should be matching the productTitle Si
Check mapreduce.task.classpath.user.precedence and its equivalent property
in different Hadoop versions.
HADOOP_OPTS needs to work with this property set to true.
I ran into a problem like yours, and playing with these parameters solved
it.
On Wed, Jun 17, 2015 at 12:28 AM, adfel70 wrote
Yep, 4.3.1. The API changed after that, so it's a matter of finding the time
to rewrite the entire backend that uses it.
On 17/06/2015 16:55, "Shalin Shekhar Mangar"
wrote:
>You must be using an old version of Solr. Since Solr 4.8 and beyond,
>the <fields> and <types> tags have been deprecated and you can place
>the field a
You must be using an old version of Solr. Since Solr 4.8 and beyond,
the <fields> and <types> tags have been deprecated and you can place
the field and field type definitions anywhere in the schema.xml.
See http://issues.apache.org/jira/browse/SOLR-5228
On Wed, Jun 17, 2015 at 9:09 PM, Alistair Young
wrote:
>
Working in a tiny tmux window does have some disadvantages, such as losing
one's place in the file! The subject_autocomplete definition wasn't inside
<fields>. Now that it is, everything is working. Thanks for listening.
Alistair
--
mov eax,1
mov ebx,0
int 80h
On 17/06/2015 15:17, "Alistair Young" w
Oh, I see you already did :) - thanks. - Steve
> On Jun 17, 2015, at 11:10 AM, Steve Rowe wrote:
>
> Hi Mike,
>
> Looks like a bug to me - would you please create a JIRA?
>
> Thanks,
> Steve
>
>> On Jun 17, 2015, at 10:29 AM, Mike Thomsen wrote:
>>
>> We're running Solr 4.10.4 and getting t
Hi Mike,
Looks like a bug to me - would you please create a JIRA?
Thanks,
Steve
> On Jun 17, 2015, at 10:29 AM, Mike Thomsen wrote:
>
> We're running Solr 4.10.4 and getting this...
>
> Caused by: java.lang.IllegalArgumentException: Unknown parameters:
> {ignoreCase=true}
>at
> org.ap
Hi Folks,
We are seeing the following in our logs on our Solr nodes, after which the Solr
nodes go into multiple full GCs and eventually run out of heap. We saw this
ticket - https://issues.apache.org/jira/browse/SOLR-7338 - and are wondering
whether that's the one causing it. We are currently on 4.10.0.
INFO -
We're running Solr 4.10.4 and getting this...
Caused by: java.lang.IllegalArgumentException: Unknown parameters:
{ignoreCase=true}
at
org.apache.solr.rest.schema.analysis.BaseManagedTokenFilterFactory.<init>(BaseManagedTokenFilterFactory.java:46)
at
org.apache.solr.rest.schema.analysis.M
Looking at the schema browser, subject_autocomplete has a type of text_en
rather than text_auto, and all the terms are stemmed. Its contents are the
same as the field it was copied from, dc.subject, which is text_en and
stemmed.
On 17/06/2015 14:58, "Erick Erickson" wrote:
>Hmmm, shouldn't be happe
Edwin,
Spellcheck is one thing; the Suggester is another.
If you need to provide auto-suggestion to your users, the Suggester is the
right thing to use.
But I really doubt it is useful to select the entire content as the
suggester field; it is going to be quite expensive.
In that case I would agai
Is ZK healthy? Can you try the following from the server on which Solr
is running:
echo ruok | nc zk1 2181
On Wed, Jun 17, 2015 at 7:25 PM, shacky wrote:
> 2015-06-17 15:34 GMT+02:00 Shalin Shekhar Mangar :
>> You are asking telnet to connect to zk1 on port 2181 but you have not
>> specified the
For sure there are a few rough edges here
On Wed, Jun 17, 2015 at 12:28 AM, adfel70 wrote:
> We cannot downgrade httpclient in solrj5 because its using new features and
> we dont want to start altering solr code, anyway we thought about upgrading
> httpclient in hadoop but as Erick said its s
I think there is some better classpath isolation options in the works for
Hadoop. As it is, there is some harmonization that has to be done depending
on versions used, and it can get tricky.
- Mark
On Wed, Jun 17, 2015 at 9:52 AM Erick Erickson
wrote:
> For sure there are a few rough edges here
Hmmm, shouldn't be happening that way. Spellcheck is supposed to be
looking at indexed terms. If you go into the admin/schema browser
page and look at the new field, what are the terms in the index? They
shouldn't be stemmed.
And I always get confused where this "suggest" is supposed to point. Do
2015-06-17 15:34 GMT+02:00 Shalin Shekhar Mangar :
> You are asking telnet to connect to zk1 on port 2181 but you have not
> specified the port to Solr. You should set
> ZK_HOST="zk1:2181,zk2:2181,zk3:2181" instead.
I modified the ZK_HOST setting to include the port, but the problem is not solved.
Do y
Yep, did both of those things. Getting the same results as using dc.subject.
On 17/06/2015 14:44, "Shalin Shekhar Mangar"
wrote:
>Did you change the SpellCheckComponent's configuration to use
>subject_autocomplete instead of dc.subject? After you made that
>change, did you invoke spellcheck.build=
Thanks Yonik.
If you used the JIRA I linked, vote for it, add any improvements, etc.
Anyone can attach a patch to a JIRA; you just have to create a login.
That said, this may be too rare a use-case to deal with. I just thought
of shingling, which I should have suggested before; it will work for
concatenating sma
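A quick sketch of the shingling idea, assuming the stock ShingleFilter: with an
empty token separator it keeps the original tokens and also emits concatenations
of adjacent ones.

import java.io.IOException;
import java.io.StringReader;
import org.apache.lucene.analysis.core.WhitespaceTokenizer;
import org.apache.lucene.analysis.shingle.ShingleFilter;
import org.apache.lucene.analysis.tokenattributes.CharTermAttribute;

public class ShingleDemo {
  public static void main(String[] args) throws IOException {
    WhitespaceTokenizer tokenizer = new WhitespaceTokenizer();
    tokenizer.setReader(new StringReader("solr train"));
    ShingleFilter shingles = new ShingleFilter(tokenizer, 2, 2); // pairs of adjacent tokens
    shingles.setTokenSeparator("");     // join them with nothing: "solrtrain"
    shingles.setOutputUnigrams(true);   // keep the original tokens too
    CharTermAttribute term = shingles.addAttribute(CharTermAttribute.class);
    shingles.reset();
    while (shingles.incrementToken()) {
      System.out.println(term.toString()); // solr, solrtrain, train
    }
    shingles.end();
    shingles.close();
  }
}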
Did you change the SpellCheckComponent's configuration to use
subject_autocomplete instead of dc.subject? After you made that
change, did you invoke spellcheck.build=true to re-build the
spellcheck index?
On Wed, Jun 17, 2015 at 7:06 PM, Alistair Young
wrote:
> copyField doesn¹t seem to fix the s
copyField doesn't seem to fix the suggestion stemming. Copying the field
to another field of this type:
but I'm still getting stemmed suggestions after rebuilding the index.
Alistair
--
mov eax,1
mov ebx,0
int 80h
On 17/06/2015 11:28, "Alistair Young" wrote:
>ah look
You are asking telnet to connect to zk1 on port 2181 but you have not
specified the port to Solr. You should set
ZK_HOST="zk1:2181,zk2:2181,zk3:2181" instead.
On Wed, Jun 17, 2015 at 3:53 PM, shacky wrote:
> Hi.
> I have a SolrCloud cluster with 3 nodes Solr + Zookeeper.
>
> My solr.in.sh file is
Comments inline:
On Wed, Jun 17, 2015 at 3:18 PM, Markus.Mirsberger
wrote:
> Hi,
>
> I am trying to use the dedupe feature to detect and mark near duplicate
> content in my collections.
> I dont want to prevent duplicate content. I woud like to detect it and keep
> it for further processing. That
On Wed, Jun 17, 2015 at 6:44 AM, Alok Bhandari
wrote:
> Is it guaranteed that stored multivalued fields maintain order of insertion.
Yes.
-Yonik
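A quick SolrJ illustration of that guarantee; the URL, core, and field names
are assumptions (SolrJ 5.x-era API).

import java.util.Collection;
import org.apache.solr.client.solrj.SolrQuery;
import org.apache.solr.client.solrj.impl.HttpSolrClient;
import org.apache.solr.common.SolrDocument;
import org.apache.solr.common.SolrInputDocument;

public class MultiValuedOrderDemo {
  public static void main(String[] args) throws Exception {
    HttpSolrClient client = new HttpSolrClient("http://localhost:8983/solr/mycollection");

    SolrInputDocument doc = new SolrInputDocument();
    doc.addField("id", "1");
    doc.addField("tags", "first");   // values added in this order...
    doc.addField("tags", "second");
    doc.addField("tags", "third");
    client.add(doc);
    client.commit();

    SolrDocument stored = client.query(new SolrQuery("id:1")).getResults().get(0);
    Collection<Object> tags = stored.getFieldValues("tags");
    System.out.println(tags);        // ...come back as [first, second, third]
    client.close();
  }
}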
On Wed, Jun 17, 2015 at 2:44 PM, Sreekant Sreedharan
wrote:
> I have a requirement to make SOLR a turnkey replacement for our legacy
> search
> engine. To do this, the queries supported by the legacy search engine has
> to
> be supported by SOLR.
>
> To do this, I have implemented a QueryParser.
I have a requirement to make SOLR a turnkey replacement for our legacy search
engine. To do this, the queries supported by the legacy search engine have to
be supported by SOLR.
To do this, I have implemented a QueryParser. I've implemented it several
ways:
1. I've copied the implementation in Lu
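For reference, a minimal sketch of the Solr-side plumbing for a custom parser,
assuming the 5.x plugin API. The class name and field are illustrative, and the
legacy-syntax translation is reduced to a trivial TermQuery.

import org.apache.lucene.index.Term;
import org.apache.lucene.search.Query;
import org.apache.lucene.search.TermQuery;
import org.apache.solr.common.params.SolrParams;
import org.apache.solr.common.util.NamedList;
import org.apache.solr.request.SolrQueryRequest;
import org.apache.solr.search.QParser;
import org.apache.solr.search.QParserPlugin;
import org.apache.solr.search.SyntaxError;

public class LegacyQParserPlugin extends QParserPlugin {

  @Override
  public void init(NamedList args) {
    // no configuration needed for this sketch
  }

  @Override
  public QParser createParser(String qstr, SolrParams localParams,
                              SolrParams params, SolrQueryRequest req) {
    return new QParser(qstr, localParams, params, req) {
      @Override
      public Query parse() throws SyntaxError {
        // Translate the legacy syntax into a Lucene Query here;
        // a trivial stand-in that searches a single field.
        return new TermQuery(new Term("title", qstr));
      }
    };
  }
}

It would then be registered in solrconfig.xml with a queryParser entry and
selected per request with a local param such as {!legacy} (the name "legacy"
is an assumption).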
Hi all,
Maybe someone could be interested in this.
I have created a suite of Docker images, Dockerfiles, and bash scripts
useful to deploy a ZooKeeper ensemble with 3 or more instances and a
SolrCloud (v4 or v5) cluster. The SolrCloud 4 cluster is based on Tomcat 7.
https://github.com/freedev/solrcl
Hello,
I am using Solr 5.10 and I have a use case to fit in.
Let's say I define 2 fields, group-name and group-id, both multivalued and stored.
1) Now I add the following values to each of them:
group-name {a,b,c} and group-id {1,2,3}.
2) Now I want to add a new value to each of these 2 fields, {d} and {4}; my
re
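A hedged SolrJ sketch of step 2 using atomic updates, assuming the fields are
stored (as stated) and an updateLog is configured; the URL and id are made up.

import java.util.HashMap;
import java.util.Map;
import org.apache.solr.client.solrj.impl.HttpSolrClient;
import org.apache.solr.common.SolrInputDocument;

public class AtomicAddDemo {
  public static void main(String[] args) throws Exception {
    HttpSolrClient client = new HttpSolrClient("http://localhost:8983/solr/mycollection");

    SolrInputDocument doc = new SolrInputDocument();
    doc.addField("id", "group-doc-1");      // the existing document's unique key

    Map<String, Object> addName = new HashMap<>();
    addName.put("add", "d");                // append "d" to group-name
    doc.addField("group-name", addName);

    Map<String, Object> addId = new HashMap<>();
    addId.put("add", 4);                    // append 4 to group-id
    doc.addField("group-id", addId);

    client.add(doc);                        // atomic update: existing values are kept
    client.commit();
    client.close();
  }
}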
Ah, looks like I need to use copyField to get a non-stemmed version of the
suggester field.
Alistair
--
mov eax,1
mov ebx,0
int 80h
On 17/06/2015 11:15, "Alistair Young" wrote:
>I was wondering if there's a way to get the suggester to return whole
>words. Instead of returning 'technology' ,
Hi.
I have a SolrCloud cluster with 3 nodes Solr + Zookeeper.
My solr.in.sh file is configured as following:
ZK_HOST="zk1,zk2,zk3"
All worked well, but now I cannot start the Solr nodes and the command exits
with the following errors:
root@index1:~# service solr restart
Sending stop command to Solr ru
I was wondering if there's a way to get the suggester to return whole words.
Instead of returning 'technology' , 'temperature' and 'tutorial', it's
returning 'technolog' , 'temperatur' and 'tutori'
using this config:
suggest
org.apache.solr.spelling.suggest.Suggester
org
Hi,
I am trying to use the dedupe feature to detect and mark near-duplicate
content in my collections.
I don't want to prevent duplicate content. I would like to detect it and
keep it for further processing. That's why I'm using an extra field and
not the document's unique field.
Here is how I ad
It doesn't work by design; you have to remove and re-write the whole block.
https://issues.apache.org/jira/browse/SOLR-6596
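In SolrJ terms, "re-write the whole block" means re-sending the parent together
with all of its children, including the new one; ids, field names, and the URL
below are illustrative.

import org.apache.solr.client.solrj.impl.HttpSolrClient;
import org.apache.solr.common.SolrInputDocument;

public class ReindexBlockDemo {
  public static void main(String[] args) throws Exception {
    HttpSolrClient client = new HttpSolrClient("http://localhost:8983/solr/mycollection");

    SolrInputDocument parent = new SolrInputDocument();
    parent.addField("id", "parent-1");
    parent.addField("doc_type", "parent");

    SolrInputDocument existingChild = new SolrInputDocument();
    existingChild.addField("id", "child-1");

    SolrInputDocument newChild = new SolrInputDocument();
    newChild.addField("id", "child-2");

    parent.addChildDocument(existingChild);  // every existing child must be re-sent
    parent.addChildDocument(newChild);       // plus the new one

    client.add(parent);                      // replaces the whole block in the index
    client.commit();
    client.close();
  }
}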
On Wed, Jun 17, 2015 at 11:44 AM, Maya G wrote:
> Hey,
>
> I'm trying to add a new child to an existing document.
> When I query for the child doc it doesn't return it .
>
> I'm using sole 4.105
Hey,
I'm trying to add a new child to an existing document.
When I query for the child doc, it isn't returned.
I'm using Solr 4.10.5.
Thank you,
Maya
On Tue, 2015-06-16 at 09:54 -0700, Shenghua(Daniel) Wan wrote:
> Hi, Toke,
> Did you try MapReduce with solr? I think it should be a good fit for your
> use case.
Thanks for the suggestion. Improved logistics, such as starting build of
a new shard while the previous shard is optimizing, would work
Hi,
Found the best way to do it (for those who will read this in the future).
Starting from Solr 4.8, nested documents can be used, so for the document we
can create a child document with the key & value as fields for each key;
using block join queries will close the loop and give the ability to se
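A hedged example of the block-join side of that via SolrJ; the field names
(doc_type, key, value) and the URL are assumptions.

import org.apache.solr.client.solrj.SolrQuery;
import org.apache.solr.client.solrj.impl.HttpSolrClient;
import org.apache.solr.client.solrj.response.QueryResponse;

public class BlockJoinQueryDemo {
  public static void main(String[] args) throws Exception {
    HttpSolrClient client = new HttpSolrClient("http://localhost:8983/solr/mycollection");
    // Match the child documents holding the key/value pair, return their parents.
    SolrQuery q = new SolrQuery("{!parent which=\"doc_type:parent\"}key:color AND value:red");
    QueryResponse rsp = client.query(q);
    System.out.println(rsp.getResults().getNumFound() + " parent documents matched");
    client.close();
  }
}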
We cannot downgrade httpclient in solrj5 because it's using new features and
we don't want to start altering Solr code. Anyway, we thought about upgrading
httpclient in Hadoop, but as Erick said it sounds like more work than just
putting the jar on the data nodes.
About that flag, we tried it; Hadoop even has
Dear Erick,
e.g. "Solr training"
Porter:        "solr"  "train"
Position:       1       2
Concatenated:  "solr"  "train"  "solrtrain"
Position:       1       2
I did implement the filter as