Hello,
I am aware that Solr (I am using 5.2) does not support joins in distributed
search when the documents to be joined reside on different shards/collections.
My use case is that I want to fetch the uuid of documents that are the result
of a search, and also those docs which are outside this se
Hi Dennis,
Thanks for your reply. As I want this for a production system, I may not be
able to upgrade to an under-development branch of Solr,
but thanks a lot for pointing me to this possible approach.
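For reference, the stock {!join} query parser does work when both sides of the join live in the same core/collection; it is only the cross-shard/cross-collection case that is unsupported here. A minimal SolrJ sketch of the single-core form (the field names parent_id and id and the child query are hypothetical, not taken from this thread):

    import org.apache.solr.client.solrj.SolrQuery;

    public class SingleCoreJoinSketch {
        public static void main(String[] args) {
            // Join restricted to one core/collection: match parents whose id
            // appears in the parent_id field of documents matching the inner query.
            SolrQuery q = new SolrQuery("{!join from=parent_id to=id}type:child");
            q.setFields("uuid");   // only return the uuid field
            System.out.println(q); // prints the encoded request parameters
        }
    }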
Hello,
I have a field which is defined as a TextField with a PatternTokenizer that
splits on ";".
Now, for one of the use cases, I need to use the /export handler to export this
field. As the /export handler needs the field to support docValues, if I try to
mark that field as docValues="true" it says tha
Thanks Erick.
Yes, I was not clear in my question, but I want it to remain searchable as a
TextField.
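One workaround that is often used for this (not necessarily the option discussed later in this thread) is to keep the analyzed TextField for searching and add a docValues-enabled string copy of it purely for /export. A rough sketch of streaming that copy field out over plain HTTP; the collection URL and the field name items_str are hypothetical:

    import java.io.BufferedReader;
    import java.io.InputStreamReader;
    import java.net.URL;
    import java.net.URLEncoder;

    public class ExportSketch {
        public static void main(String[] args) throws Exception {
            // /export requires a sort and only returns docValues-backed fields;
            // "items_str" stands in for a string copyField of the analyzed field.
            String base = "http://localhost:8983/solr/mycollection/export";
            String params = "?q=" + URLEncoder.encode("*:*", "UTF-8")
                    + "&sort=" + URLEncoder.encode("id asc", "UTF-8")
                    + "&fl=id,items_str";
            try (BufferedReader in = new BufferedReader(
                    new InputStreamReader(new URL(base + params).openStream(), "UTF-8"))) {
                String line;
                while ((line = in.readLine()) != null) {
                    System.out.println(line); // raw JSON stream from the /export handler
                }
            }
        }
    }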
Thanks Markus.
Hello,
I am using the Solr /export handler to export search results and it is
performing well.
Today I faced an issue: there are 2 multivalued fields I am fetching, let's say
one which holds a list of items and another which holds a list of sellers.
Here I am storing the information such that the seller for the 1st i
Thanks Joel.
Hello,
I am using Solr 5.2.
I have a field defined with the "string" field type. It has some values in it
like
DOC-1 => abc ".. I am " not ? test
DOC-2 => abc "..
The first value is a single string; I want to query all documents which exactly
match this string, i.e. it should return only DOC-1 when I qu
Hi Binoy, thanks.
But does it matter which query parser I use? Should I use the "lucene" parser
or the "edismax" parser?
Thanks Erick for your reply. Because of a medical reason I was out of the
office for a week.
Should the ClientUtils.escapeQueryChars method from the SolrJ client be used,
or do you think it is better to escape only the quote (") character?
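In case it helps later readers, this is roughly what escaping the whole value with ClientUtils.escapeQueryChars looks like in SolrJ 5.x; the field name raw_text and the collection URL are hypothetical, and whether escaping only the quote is enough is exactly the open question above:

    import org.apache.solr.client.solrj.SolrQuery;
    import org.apache.solr.client.solrj.impl.HttpSolrClient;
    import org.apache.solr.client.solrj.response.QueryResponse;
    import org.apache.solr.client.solrj.util.ClientUtils;

    public class ExactMatchSketch {
        public static void main(String[] args) throws Exception {
            // escapeQueryChars backslash-escapes quotes, whitespace, ?, *, etc.,
            // so the whole value is matched as a single term on the string field.
            String value = "abc \".. I am \" not ? test";
            String escaped = ClientUtils.escapeQueryChars(value);

            HttpSolrClient solr = new HttpSolrClient("http://localhost:8983/solr/mycollection");
            QueryResponse rsp = solr.query(new SolrQuery("raw_text:" + escaped));
            System.out.println("hits: " + rsp.getResults().getNumFound());
            solr.close();
        }
    }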
Hello Binoy,
I found that if I am using a StringField and index it using Java code or the
Solr admin, it adds a \ before ",
i.e. let's say I have the string ==> test " , then it gets indexed as test \".
For all other special chars it does not do anything, so the trick which worked
for me is,
while searchin
Hello Harry,
Sorry for the delayed reply. I have taken another approach by giving the user a
different usability, as I did not have a solution for this. But your option
looks great; I will try it out.
Hello,
I am using Solr 5.10 and have a use case to fit in.
Let's say I define 2 fields, group-name and group-id, both multivalued and stored.
1) Now I add the following values to each of them:
group-name {a,b,c} and group-id {1,2,3}.
2) Now I want to add a new value to each of these 2 fields, {d} and {4}; my
re
Thanks Yonik.
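For what it's worth, the append in step 2 can be expressed as a SolrJ atomic "add" update roughly like this (a sketch assuming a 5.x client; the unique-key value and collection URL are made up, and whether the stored order of the two fields stays aligned is the question this thread is about):

    import java.util.Collections;
    import org.apache.solr.client.solrj.impl.HttpSolrClient;
    import org.apache.solr.common.SolrInputDocument;

    public class AppendValuesSketch {
        public static void main(String[] args) throws Exception {
            HttpSolrClient solr = new HttpSolrClient("http://localhost:8983/solr/mycollection");

            // Atomic "add" update appending one value to each multivalued field.
            SolrInputDocument doc = new SolrInputDocument();
            doc.addField("id", "doc-1");
            doc.addField("group-name", Collections.singletonMap("add", "d"));
            doc.addField("group-id", Collections.singletonMap("add", 4));
            solr.add(doc);
            solr.commit();
            solr.close();
        }
    }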
Hello All,
I am using Solr 4.0. I have data in my Solr index where, on each review of a
document, a new entry for that document is added in Solr; each document also
has a field which holds the employee_id, and each entry also holds the
timestamp of when that record was added.
Now I want to query this index
Thanks Jack.
It may be the case that I was unable to explain the query correctly.
Actually, I don't want it for a single employee; I want it for all the
employees that are updated in that time range. So if, let's say, 10 employees'
data is updated in the given time range, and that also multiple times, the
Thanks Erick.
It seems the approach you suggested is the one I was looking for;
thanks a lot for the reply.
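Since the thread cuts off before the details, here is roughly what a grouping query that keeps only the newest entry per employee inside a time window looks like in SolrJ 4.x. I am assuming this is the kind of approach meant above; the field names employee_id and record_time, the date range, and the URL are all hypothetical:

    import org.apache.solr.client.solrj.SolrQuery;
    import org.apache.solr.client.solrj.impl.HttpSolrServer;
    import org.apache.solr.client.solrj.response.Group;
    import org.apache.solr.client.solrj.response.GroupCommand;
    import org.apache.solr.client.solrj.response.QueryResponse;

    public class LatestPerEmployeeSketch {
        public static void main(String[] args) throws Exception {
            HttpSolrServer solr = new HttpSolrServer("http://localhost:8983/solr/collection1");

            // One group per employee, keeping only the newest record in each group.
            SolrQuery q = new SolrQuery("*:*");
            q.addFilterQuery("record_time:[2013-07-01T00:00:00Z TO 2013-07-31T23:59:59Z]");
            q.set("group", true);
            q.set("group.field", "employee_id");
            q.set("group.limit", 1);                 // only the top doc per group
            q.set("group.sort", "record_time desc"); // newest first inside each group

            QueryResponse rsp = solr.query(q);
            for (GroupCommand command : rsp.getGroupResponse().getValues()) {
                for (Group group : command.getValues()) {
                    System.out.println(group.getGroupValue() + " -> "
                            + group.getResult().get(0).getFieldValue("record_time"));
                }
            }
            solr.shutdown();
        }
    }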
Hello,
I need to know about the same thing, as I am also stuck on the decision of
using grouping, and with a smaller data set it seems to perform well.
Also, is there any way to specify that group caching should be used depending
on the RAM allocated to it?
Hello,
I need some functionality for which I found that grouping is the most suitable
feature. I want to know about the performance issues associated with it. In
some posts I found that performance is a bottleneck, but I want to know: if I
have 3 million records with 0.5 million distinct values
Hello all,
I am using Solr 4.x. I have a requirement where I need a field which holds
data from 2 other fields concatenated using _. So, for example, I have 2
fields, firstName and lastName, and I want a third field which should hold
firstName_lastName. Is there any existing concatenating component
Thanks for the reply.
But I don't want to introduce any scripting in my code, so I want to know
whether there is any Java component available for the same.
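A minimal sketch of such a Java-side component, written as a custom update request processor (the class name is hypothetical, and this is not the exact code referred to later in the thread):

    import java.io.IOException;
    import org.apache.solr.common.SolrInputDocument;
    import org.apache.solr.request.SolrQueryRequest;
    import org.apache.solr.response.SolrQueryResponse;
    import org.apache.solr.update.AddUpdateCommand;
    import org.apache.solr.update.processor.UpdateRequestProcessor;
    import org.apache.solr.update.processor.UpdateRequestProcessorFactory;

    // Writes firstName + "_" + lastName into fullName at index time.
    public class ConcatFieldsProcessorFactory extends UpdateRequestProcessorFactory {

        @Override
        public UpdateRequestProcessor getInstance(SolrQueryRequest req,
                                                  SolrQueryResponse rsp,
                                                  UpdateRequestProcessor next) {
            return new UpdateRequestProcessor(next) {
                @Override
                public void processAdd(AddUpdateCommand cmd) throws IOException {
                    SolrInputDocument doc = cmd.getSolrInputDocument();
                    Object first = doc.getFieldValue("firstName");
                    Object last = doc.getFieldValue("lastName");
                    if (first != null && last != null) {
                        doc.setField("fullName", first + "_" + last);
                    }
                    super.processAdd(cmd); // pass the document down the chain
                }
            };
        }
    }

It would then be registered in an updateRequestProcessorChain in solrconfig.xml and attached to the update handler.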
Hi all,
Thanks for your replies. I have managed to do this by writing a custom update
processor and configuring it with the source fields firstName and lastName,
the destination field fullName, and the delimiter "_".
Federico Chiacchiaretta, I have tried the option mentioned by you but o
Did you find any solution to this? I am also facing the same issue.
Hello,
I am using Solr 4.0. I want to group entries on one of the int fields; I need
all the groups and group.limit is 1. I am getting very slow performance and
sometimes I am also getting OutOfMemory errors.
My index has 20 million records, out of which my search result
returns 1 million do
Thanks for the reply, Sandro.
My requirement is that I need all the groups and then build compact data from
them to send to the server. I am not sure how much RAM should be allocated to
the JVM instance to make it serve requests faster; any inputs on that are
welcome.
Hello,
I am using Solr 4.0. I want to send a list of objects to Solr as a request
parameter. Is it possible? Please let me know.
Hello,
I have a tree data structure like
t1
|-t2
|-t3
t4
|-t5
and so on.
There is no limit on the tree depth, nor on the number of children of each
node.
What I want is that when I do faceting for the parent node t1, it should also
include the count of all of its children (t2 and
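No answer is preserved here, but one common pattern for this kind of rolled-up count (an assumption on my part, not something stated in the thread) is to index each node's full ancestor path with PathHierarchyTokenizer and then facet on that field, so the count for t1 automatically includes t2 and t3. A rough SolrJ sketch with a hypothetical node_path field:

    import org.apache.solr.client.solrj.SolrQuery;
    import org.apache.solr.client.solrj.impl.HttpSolrServer;
    import org.apache.solr.client.solrj.response.FacetField;
    import org.apache.solr.client.solrj.response.QueryResponse;

    public class TreeFacetSketch {
        public static void main(String[] args) throws Exception {
            HttpSolrServer solr = new HttpSolrServer("http://localhost:8983/solr/collection1");

            // Assumes each document carries its full ancestor path (e.g. "t1/t2")
            // in a "node_path" field analyzed with PathHierarchyTokenizer, so the
            // term "t1" is indexed for every descendant of t1.
            SolrQuery q = new SolrQuery("*:*");
            q.setRows(0);          // only the facet counts are needed
            q.setFacet(true);
            q.addFacetField("node_path");
            q.setFacetMinCount(1);

            QueryResponse rsp = solr.query(q);
            for (FacetField.Count c : rsp.getFacetField("node_path").getValues()) {
                System.out.println(c.getName() + " -> " + c.getCount());
            }
            solr.shutdown();
        }
    }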
Hello,
I have been working on Solr for the last few months and am stuck somewhere
with the analyzer in a field definition:
In: "Please, email john@foo.com by 03-09, re: m37-xq."
Expected Out: "Please", "email", "john@foo.com", "by", "03-09", "re",
"m37-xq"
but I am not getting this. Is something wro
Just to make sure that there is no ambiguity: the In: "Please, email
john@foo.com by 03-09, re: m37-xq." is the input given to this field for
indexing, and the Expected Out: "Please", "email", "john@foo.com", "by",
"03-09", "re", "m37-xq" is the expected set of output tokens.
Hello,
I need to know what I will lose if I use ClassicTokenizer instead of
StandardTokenizer. Is it the case that ClassicTokenizer will be deprecated in
future Solr versions, or that development on ClassicTokenizer is going to halt?
Please let me know.
Thanks for the reply. Yes, I had started with the admin/analysis page before
you suggested it, but I just wanted to know whether, out of the box, anything
specific is not supported/supported by the tokenizers mentioned.
Hello,
The requirement I have is that on the Solr side we have indexed data for
multiple customers, and for each customer we have at least a million documents.
After executing a search, the end user wants to sort on some fields in a data
grid, let's say subject, title, date, etc.
Now, as the sorting on text fields
Thanks for the inputs.
Erick, yes, I was referring to the String data type. The reason I was asking
this is that for a single customer we have multiple users, and each user may
apply different search criteria before sorting on the field, so if we can
cache the sorted results then it may improve the us
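For completeness, sorting is normally done on an untokenized copy of the text field rather than on the analyzed field itself. A tiny SolrJ sketch under that assumption; the fields customer_id and subject_sort (e.g. a string/docValues copyField of subject) are hypothetical, and addSort needs SolrJ 4.2 or later:

    import org.apache.solr.client.solrj.SolrQuery;

    public class SortSketch {
        public static void main(String[] args) {
            // Restrict to one customer, then sort on the untokenized copy field.
            SolrQuery q = new SolrQuery("customer_id:42");
            q.addSort("subject_sort", SolrQuery.ORDER.asc);
            q.setRows(50);
            System.out.println(q); // prints the encoded request parameters
        }
    }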
Hello,
Is there any provision in Solr so that, while querying the Solr server using
the SolrJ API, I can send an Object as a request parameter, so that in my
request handler on the Solr side I can read that object? Thanks.
Thanks for the reply.
If I check the debug query through solr-admin, I can see that the lowercase
filter is applied:
"rawquerystring":"em_to_name:Follett'.*",
"querystring":"em_to_name:Follett'.*",
"parsedquery":"+em_to_name:follett'.*",
"parsedquery_toString":"+em_to_name:follett'.
Yes, I am using Solr 3.6.
Thanks for the link; it is very useful.
From the link I could make out that if the analyzer includes any one of the
following, then they are applied, and any other elements specified under the
analyzer are not applied, as they are not multi-term aware:
ASCIIFoldingFilterFactory
Lo
Hello,
If I search for abc AND *foo*, why does it return all docs for abc which do
not have foo? I suspect that if the * is present on both sides of a word then
that word is ignored. Is that the correct interpretation? I am using Solr 3.6
and the field uses StandardTokenizer.
It was my mistake: the field I was referring to did not exist, so this effect
was shown. Sorry for the stupid question I asked :-)
Hello,
This is how the field is declared in schema.xml.
When I query this field with the input
"M:/Users/User/AppData/Local/test/abc.txt",
it searches for documents containing any of the generated tokens (M, Users,
User, etc.), but I want to search for the exact file
Hello Koji,
Thanks for the reply. Yes, one way I can try is to use copyField, with one of
the copies using PathHierarchyTokenizerFactory and the other using
KeywordTokenizerFactory, and depending on whether the input entered is a
directory path or an exact file path, switch between these 2 fields. Thanks.
Modifying the field definition as described at the link below solves the
purpose. I got it from:
http://stackoverflow.com/questions/6920506/solr-pathhierarchytokenizerfactory-facet-query
Hello,
I am using the eDismax parser, and the query submitted by the application is
of the format
price:1000 AND ( NOT ( launch_date:[2007-06-07T00:00:00.000Z TO
2009-04-07T23:59:59.999Z] AND product_type:electronic)).
Solr gives an unexpected result while executing it. I suspect it is because
of the AND (
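The rest of the message is cut off, but a frequent culprit with clauses like this (an assumption on my part, not something stated in the thread) is the purely negative sub-query: a NOT (...) alone inside parentheses matches nothing unless *:* is added to give it something to subtract from. A hedged sketch of the rewritten query:

    import org.apache.solr.client.solrj.SolrQuery;

    public class NegativeClauseSketch {
        public static void main(String[] args) {
            // Same query with *:* added in front of the negated clause.
            SolrQuery q = new SolrQuery(
                "price:1000 AND (*:* NOT (launch_date:[2007-06-07T00:00:00.000Z TO "
                + "2009-04-07T23:59:59.999Z] AND product_type:electronic))");
            q.set("defType", "edismax");
            System.out.println(q); // prints the encoded request parameters
        }
    }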
Hello,
I am using Solr 3.6.0 and have observed many connections in the CLOSE_WAIT
state after using the Solr server for some time. On further analysis and
googling, I found that I need to close the idle connections from the client
which connects to Solr to query data, and it does reduce the number of CLO
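One way to do that reaping, sketched under the assumption that the client is the commons-httpclient based CommonsHttpSolrServer from SolrJ 3.6 (the URL and the timeout values are hypothetical):

    import org.apache.commons.httpclient.HttpClient;
    import org.apache.commons.httpclient.MultiThreadedHttpConnectionManager;
    import org.apache.commons.httpclient.util.IdleConnectionTimeoutThread;
    import org.apache.solr.client.solrj.impl.CommonsHttpSolrServer;

    public class IdleConnectionSketch {
        public static void main(String[] args) throws Exception {
            // Share one connection manager with the Solr client so idle
            // connections can be closed before they pile up in CLOSE_WAIT.
            MultiThreadedHttpConnectionManager manager = new MultiThreadedHttpConnectionManager();
            HttpClient httpClient = new HttpClient(manager);
            CommonsHttpSolrServer solr =
                    new CommonsHttpSolrServer("http://localhost:8983/solr", httpClient);

            // Background thread that closes connections idle for more than 30s.
            IdleConnectionTimeoutThread reaper = new IdleConnectionTimeoutThread();
            reaper.addConnectionManager(manager);
            reaper.setConnectionTimeout(30000); // idle threshold in ms
            reaper.setTimeoutInterval(10000);   // check every 10s
            reaper.start();

            // ... use solr.query(...) as usual; call reaper.shutdown() and
            // manager.shutdown() when the application stops.
        }
    }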
Hello All,
We are using Solr 6.2. In the schema that we use we have an integer field. For
a given query we want to know how many documents have a duplicate value for
the field, for example how many documents have the same doc_id=10.
So to find this information we fire a query to SolrCloud with the follow
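The query itself is cut off, but one way to get this kind of answer is a plain facet on the integer field with facet.mincount=2, so only values appearing in more than one document come back. A rough SolrJ 6.x sketch; the field name doc_id is from the message above, the collection URL is hypothetical:

    import org.apache.solr.client.solrj.SolrQuery;
    import org.apache.solr.client.solrj.impl.HttpSolrClient;
    import org.apache.solr.client.solrj.response.FacetField;
    import org.apache.solr.client.solrj.response.QueryResponse;

    public class DuplicateValuesSketch {
        public static void main(String[] args) throws Exception {
            HttpSolrClient solr = new HttpSolrClient("http://localhost:8983/solr/mycollection");

            SolrQuery q = new SolrQuery("*:*");
            q.setRows(0);               // only the facet counts are needed
            q.setFacet(true);
            q.addFacetField("doc_id");
            q.setFacetMinCount(2);      // a value with count >= 2 is duplicated
            q.setFacetLimit(-1);        // return all duplicated values

            QueryResponse rsp = solr.query(q);
            for (FacetField.Count c : rsp.getFacetField("doc_id").getValues()) {
                System.out.println("doc_id=" + c.getName() + " appears in "
                        + c.getCount() + " docs");
            }
            solr.close();
        }
    }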