Thanks, everyone, for the responses.
Yes, the way Eric described would work for trivial debugging, but when I
actually need to debug something in production this would be a big hassle
;-)
For now I am going to mark the field as stored="true" to get around this
problem. We are migrating away from
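A minimal sketch of that stored="true" workaround in schema.xml; the field
name and type here are placeholders:

  <field name="myDebugField" type="string" indexed="true" stored="true"/>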
Hi Everyone,
Sorry if the subject was too vague. What I am trying to do is this:
So basically I am trying to copy one of the destination fields of a copy
field to another field. The question I have is whether the field E will get
populated properly, i.e., by the time E is getting constr
Thanks Jamie. I missed that one on the wiki.
Thanks,
Karthik
On Thu, Sep 1, 2011 at 3:38 PM, Jamie Johnson wrote:
> This won't work, according to
> http://wiki.apache.org/solr/SchemaXml#Copy_Fields
>
> "This is provided as a convenient way to ensure that data
ot;
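In other words, a chain like the following (hypothetical field names) will
not populate E from data copied into D, because copyField destinations are
not themselves re-copied:

  <copyField source="A" dest="D"/>
  <copyField source="D" dest="E"/> <!-- E only receives data sent directly to D in the input document -->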
that is intended to include edge cases. I see the option "before" would
somewhat solve my problem, but I would like to see if there are any alternatives.
Thanks
-- karthik
explanation.
-- karthik
On Sun, Sep 4, 2011 at 7:54 PM, Chris Hostetter wrote:
>
> : Can solr take the earliest date from the result set to be the value for
> : "facet.date.start"? I dont want to have the value 1/1/1995 hardcoded in
> my
> : application since a new data fee
the TermsComponent and tried to see how many docs match the
synonym term & I don't see the term at all.
So I am not sure how to check whether this is working or not.
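A sketch of such a check, assuming the example /terms handler is enabled
(field name and prefix are placeholders):

  http://localhost:8983/solr/terms?terms.fl=myfield&terms.prefix=mysynonym&terms.limit=10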
Thanks,
Karthik
On Mon, Apr 2, 2012 at 3:41 PM, karthik wrote:
> Hi,
>
> I am trying to view what terms are getting indexed for a
(analysis_jsp.java:696)
... 29 more
-
I verified that the webapps/<>/WEB-INF/lib has all the latest 3.1.0 JAR
files in it. Any pointers to fix this issue would be great.
Thanks,
Karthik
would require a lot more help in the latter scenario ;-)
Thanks in advance.
-- karthik
Thanks Erick. Will certainly take a look.
I am looking to do this for binary objects, since that is what I have started with.
-- karthik
On Mon, Jun 13, 2011 at 8:52 PM, Erick Erickson wrote:
> Take a look at SOLR-445, I started down this road a while
> ago but then got distracted. If you
Look at SOLR-2272. It might help in your situation. You can have a separate
core & join using the document's unique id.
This way the separate core can hold just the document id & the view
stats, and you can keep updating those 2 fields alone instead of the
entire document.
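A sketch of that SOLR-2272 style cross-core join, assuming a stats core
named "viewstats" keyed on the same unique id (field names are placeholders):

  q=title:foo
  &fq={!join from=id to=id fromIndex=viewstats}views:[100 TO *]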
--
Any help on this would be really appreciated.
I just set up a totally brand new Solr instance & still got this exception.
I can see that this would be something to do with the classpath, but I am
not able to figure out exactly what is causing this issue.
-- karthik
On Mon, Jun 13, 2011 at 4:2
started working fine as it was before.
I will compare the 2 tomcat folders to see what was different and respond
back with my findings.
-- karthik
On Wed, Jun 22, 2011 at 11:48 AM, Stefan Matheis <
matheis.ste...@googlemail.com> wrote:
> Karthik,
>
> could you attach/pastebin your sche
Hi Everyone,
I am trying to see what's the best way to view the entire document as it's
indexed within Solr/Lucene. I have tried to use Luke, but it still shows
only the fields I have configured to be returned back [i.e., stored=true],
unless I am missing some option in the tool.
Is there
We are trying to update Field A.
-Karthik
On Thu, Apr 21, 2016 at 10:36 PM, John Bickerstaff wrote:
> Which field do you try to atomically update? A or B or some other?
> On Apr 21, 2016 8:29 PM, "Tirthankar Chatterjee" <
> tchatter...@commvault.com>
> wrot
started working.
Attached is the modified file.
With Thanks & Regards
Karthik Ramachandran
CommVault
Please don't print this e-mail unless you really need to
-----Original Message-----
From: Karthik Ramachandran [mailto:mrk...@gmail.com]
Sent: Friday, April 22, 2016 12:08 AM
To:
Eric,
I have created a JIRA id (kramachand...@commvault.com). Once I get
access I will create the JIRA and submit the patch.
With Thanks & Regards
Karthik Ramachandran
CommVault
Direct: (732) 923-2197
Please don't print this e-mail unless you really need to
On 4/22/16, 8:04 P
I have opened JIRA
https://issues.apache.org/jira/browse/SOLR-9034
I will upload the patch soon.
With Thanks & Regards
Karthik Ramachandran
CommVault
Direct: (732) 923-2197
Please don't print this e-mail unless you really need to
-----Original Message-----
From: Erick
some point in time. If that is not desired, you can run
optimize in 6.x, which will bring the index version to 6.x:
https://cwiki.apache.org/confluence/display/solr/IndexUpgrader+Tool
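The IndexUpgrader invocation looks roughly like this (paths and versions
are placeholders; the backward-codecs jar is needed to read older segments):

  java -cp lucene-core-6.6.0.jar:lucene-backward-codecs-6.6.0.jar \
    org.apache.lucene.index.IndexUpgrader -delete-prior-commits /path/to/index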
-Karthik
On Tue, Aug 30, 2016 at 9:27 PM, Reth RM wrote:
> >>Is there any way through which I can mi
"buckets":[{"val":"filename1","count":5,"sum":5.0},
{"val":"filename2","count":4,"sum":4.0},
{"val":"filename3","count":3,"sum":3.0},
{"val":"filename4","count":2,"sum":2.0}]}}}
Can someone help me understand?
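A request of roughly this shape (collection, field names, and the mincount
value are guesses pieced together from this thread) would produce buckets
like those above:

  json.facet={
    "files": {
      "type": "terms",
      "field": "filename",
      "mincount": 2,
      "allBuckets": true,
      "facet": { "sum": "sum(size)" }
    }
  }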
With Thanks & Regards
Karthik Ramachandran
From: Karthik Ramachandran
Sent: Tuesday, September 27, 2016 12:21 PM
To: solr-user
Subject: JSON Facet "allBuckets" behavior
While performing json faceting with "allBuckets" and "m
So if I cannot use allBuckets, since it's not filtering, how can I achieve
this?
On Fri, Sep 30, 2016 at 7:19 PM, Yonik Seeley wrote:
> On Tue, Sep 27, 2016 at 12:20 PM, Karthik Ramachandran
> wrote:
> > While performing json faceting with "allBuckets" and "minc
I start Solr in Eclipse for small tests.
I have made some changes to the ant build script to copy the webapp to the
required location, and also added Eclipse launchers in this commit (
https://github.com/mrkarthik/lucene-solr/commit/d793a9b8ac0b1b4969aace4329ea5a6ddc22de16
)
Run "ant eclipse" from a shell.
I am also seeing 2 threads loading the cores. I am using Solr 6.6.0.
On Sat, Aug 26, 2017 at 11:53 AM, Erick Erickson
wrote:
> Setting loadOnStartup=false won't work for you in the long run,
> although it does provide something of a hint. Setting this to false
> means the core at that location s
JIRA already exists, https://issues.apache.org/jira/browse/SOLR-11622.
On Mon, Nov 13, 2017 at 5:55 PM, Zheng Lin Edwin Yeo
wrote:
> Hi Erick,
>
> I have added the apache-mime4j-core-0.7.2.jar in the Java Build Path of the
> Eclipse, but it is also not working.
>
> Regards,
> Edwin
>
> On 13 No
If you want to remove all the data in the field, then use "null" in set:
curl . . . -d '[{"id":"docId","someField":{"set":null}}]'
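A fuller version of that command, with a placeholder URL and collection name:

  curl 'http://localhost:8983/solr/mycollection/update?commit=true' \
    -H 'Content-Type: application/json' \
    -d '[{"id":"docId","someField":{"set":null}}]'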
-Karthik
On Wed, Dec 7, 2016 at 1:31 PM, Richard Bergmann
wrote:
> Hello,
>
> I am new to this an
desc","facet": {"sum":"sum(size)"}}}
Create 2 cores named fileduplicate01 and fileduplicate02 with the same schema,
and run the attached java to populate the data and run the query.
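Pieced together from the fragment above, the full query presumably looked
something like this (the field names are assumptions):

  curl http://localhost:8983/solr/fileduplicate01/select -d 'q=*:*&rows=0&json.facet={
    "duplicates": {
      "type": "terms",
      "field": "filename",
      "numBuckets": true,
      "sort": "sum desc",
      "facet": { "sum": "sum(size)" }
    }
  }'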
Any help is appreciated.
With Thanks & Regards
Karthik Ramachandran
// drill into the json.facet response
NamedList facets = (NamedList)
queryResponse.getResponse().get("facets");
NamedList duplicates = (NamedList)
facets.get("duplicates");
numBuckets = ((Number) duplicates.get("numBuckets")).longValue();
buckets = (List) duplicates.get("buckets");
System.out.println(numBuckets + " " + buckets);
We are using Solr 6.4.2. Can anyone tell me if this is a bug for which I can
open a JIRA?
With Thanks & Regards
Karthik Ramachandran
Direct: (732) 923-2197
Please don't print this e-mail unless you really need to
From: Karthik Ramachandran
Sent: Tuesday, April 4, 2017 8:35 PM
To:
e of the above
behavior. This sometimes results in none of the segment-related files
(.si, .pos, etc.) being removed when all the documents are removed from the
index.
Thanks
Karthik
Hi,
What will be the size of the index after optimizing? I know it increases
during the process, but what will the size be after the optimization is done?
Is it dependent on the merge factor during indexing? Please reply.
Thanks,
Karthik
Thanks a lot for the reply.
Is it independent of the merge factor?
My index size reduced a lot (almost by 40%) after optimization, and I am
worried that I might have lost data. I have no deletes at all but a high
merge factor. Any suggestions?
Thanks,
Karthik
Yeah, that happened :( ... lost a lot of data because of it.
Can someone explain the terms numDocs and maxDoc? Will the difference
indicate the duplicates?
Thank you,
karthik
Hi,
Is numDocs in the Solr statistics equal to the total number of documents
that are searchable in Solr? I find that this number is very low in my case
compared to the total number of documents indexed. Please let me know the
possible reasons for this.
Thanks,
Karthik
I need to merge multiple Solr indexes into one big index. The process is
very slow. Please share any tips to speed it up. Will optimizing the indexes
before merging help?
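One option, assuming these are plain Lucene indexes and the jar versions
match the index format, is Lucene's contrib IndexMergeTool:

  java -cp lucene-core.jar:lucene-misc.jar \
    org.apache.lucene.misc.IndexMergeTool /path/to/merged /path/to/index1 /path/to/index2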
Thanks,
Karthik
Hi,
I have about 20 indexes, each around 30-35 GB in size. All of it is on one
machine and I want to make it searchable.
I can have about 5 Solr servers, each with 2-3 indexes merged, and search on
different shards, or use katta.
Please let me know which is the better option.
Thanks,
karthik
Hi,
I thought katta should come into the picture only when there is around a TB
of data to search.
Thanks a lot. Can you please point me to, or elaborate more on, how to
manage a growing index? Any standard strategies?
Can someone please answer this.
Is there a way of creating/adding a core and starting it without having to
reload Solr?
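The CoreAdmin API can do this without a restart; a sketch, assuming the
core's directory and config already exist on disk:

  http://localhost:8983/solr/admin/cores?action=CREATE&name=newcore&instanceDir=/path/to/newcore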
Lucidgaze might help.
Karthik
for a workaround with facet.prefix, but it cannot give the desired
result.
Thanks,
Karthik
your desired ranges as one facet
query for each range.
http://wiki.apache.org/solr/SimpleFacetParameters#Facet_by_Range says it's
not implemented yet.
Please let me know if there is any workaround in my case.
Thanks,
Karthik
Adding
facet.query=timestamp:[20100601+TO+201006312359]&facet.query=timestamp:[20100701+TO+201007312359]...
to the query should give the desired response without changing the schema
or re-indexing.
close();
    USER_IDX_CONTAINER.shutdown();
} catch (Exception e) {
    e.printStackTrace();
}
I get a similar POSSIBLE RESOURCE LEAK!!! message that says the SolrCore
wasn't closed.
I am calling this code via a message queue, and no concurrent calls happen.
Thanks,
Karthik.
all content to
just one field, I believe I have to create a custom RequestHandler (possibly
reusing existing SolrCell classes).
Is this approach right?
Thanks
Karthik
In case the exact problem was not clear to somebody:
The problem with FileUpload interpreting file data as regular form fields is
that Solr thinks there are no content streams in the request and throws a
"missing_content_stream" exception.
On Thu, Mar 10, 2011 at 10:59 AM, Karth
his core
> to refresh the changes.
>
> NOW I SOMETIMES get my Exception =(
> Anybody got an idea?
>
> here is a part of my solrconfig.xml (updater AND searcher)
> ->
> [solrconfig.xml excerpt: the element names were stripped by the archive;
> only the values survive: true, 128, 2, single, 1000, 10000, false, true,
> false, 1, 0]
Regards,
Karthik
[facet results: four other actors, each with count 2; the actor names were
stripped by the archive]
The other actors in the above results are obviously not what we expect to
see, since they do not match the original query (i.e. malkovich).
Is there any other way I can approach this for multi-valued fields?
Thanks,
karthik c
http://cantspellathing.blogspot.com
approach, I will have to pre-compute and index
the number of movies associated with each actor as well. I will need to do
this for the other fields as well. Do let me know if you have any other
suggestions/approaches.
Thanks,
karthik c
http://cantspellathing.blogspot.com
On Mon, Mar 16, 2009 at 12
Thanks Otis... What kind of post-processing are you talking about here? Is
there any mechanism in Solr to identify which of the facet results match the
query?
karthik c
http://cantspellathing.blogspot.com
On Mon, Mar 16, 2009 at 6:57 PM, Otis Gospodnetic <
otis_gospodne...@yahoo.com>
Thanks Erik... Can we enable highlighting for facet results as well? I am
using Solr's faceting feature to get a unique set of results for the field
with counts, so unless highlighting works for facet results, it will not
really be useful.
karthik c
http://cantspellathing.blogspot.com
O
to fix the problem though.
My concerns about using a single core are:
1. The schema will now contain fields for all types, so most fields will be
empty in most documents.
2. Will searching within a type be slower when compared to having the type
in a separate core?
Thanks,
kart
Thanks Otis. Will try out using a single index.
karthik c
http://cantspellathing.blogspot.com
On Thu, Mar 19, 2009 at 11:24 PM, Otis Gospodnetic <
otis_gospodne...@yahoo.com> wrote:
>
> You can really go either way. Empty fields are OK. Having lots of cores
> seems har
Hi Folks,
I am unable to get highlighting to work when searching for exact phrases in
Solr 1.4.
A discussion about the exact same issue can be found here:
http://www.mail-archive.com/solr-user@lucene.apache.org/msg27872.html
Can someone please tell me how to fix this?
I am using the parameter hl.u
Please add me to the mailing list
l="id, name, other, details",
sort="name asc"
on="id=id")
With Thanks & Regards
Karthik Ramachandran
P Please don't print this e-mail unless you really need to
shortestPath to return a stream of tuples so it can work with
> fetch
> > and other expressions.
> >
> > Are you getting good performance with the shortestPath expression?
> >
> > Joel Bernstein
> > http://joelsolr.blogspot.com/
> >
> >
Joel,
Should I create a JIRA for making shortestPath return stream of tuples?
On Sun, Jan 14, 2018 at 11:52 PM, Karthik Ramachandran
wrote:
> Joel,
> Thanks, I did try using cartesianProduct then fetch, it is working as
> expected. My dataset has only 5 or 6 levels, for that sho
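A rough sketch of the combination under discussion, with placeholder
collection and field names (shortestPath, cartesianProduct, and fetch are
real streaming expressions; composing them this way assumes shortestPath's
multi-valued "path" field is flattened first):

  fetch(mycollection,
        cartesianProduct(
          shortestPath(mycollection,
                       from="node_a",
                       to="node_z",
                       edge="from_id=to_id",
                       maxDepth="6"),
          path,
          productSort="path asc"),
        fl="id, name, other, details",
        on="path=id")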
> ... don't
> know how to do that.
>
> Maybe I'll just have to install an earlier Solr version that doesn't
> have this bug - could someone tell me what version that might be?
>
> Regards,
>
> Terry
>
>
>
--
With Thanks & Regards
Karthik Ramachandran
Please don't print this e-mail unless you really need to
Thanks & Regards
Karthik
Instead of reload, I restarted the Solr
instance and the collection was up and running.
Do you think it would be worth putting the support in the Solr collection API?
With Thanks & Regards
Karthik
"params":{
> > > "q":"*:*",
> > > "distrib":"false",
> > > "indent":"on",
> > > "fl":"id",
> > > "fq":"id:mid531281",
> > >
Could you try something like this?
deltaQuery="select CONCAT(cast(col1 as varchar(10)), name) as ID FROM
MyTable WHERE UPDATED_DATE > '${dih.last_index_time}'"
deltaImportQuery="select CONCAT(cast(col1 as varchar(10)), name) as ID, *
from MyTable WHERE CONCAT(cast(col1 as varchar(10)), name) = '${dih.delta.ID}'"
I have also faced this problem when there is a composite primary key in the
table; below is the workaround I went with.
The deltaQuery retrieves the concatenated value with the time criterion (so
that it retrieves only modified rows), and that value is used in the
deltaImportQuery's where clause.
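Put together as a DIH entity, the workaround looks roughly like this (table
and column names follow the snippet above; ${dih.delta.ID} is the standard
delta placeholder for a pk named ID):

  <entity name="MyTable" pk="ID"
          query="select CONCAT(cast(col1 as varchar(10)), name) as ID, * from MyTable"
          deltaQuery="select CONCAT(cast(col1 as varchar(10)), name) as ID
                      from MyTable where UPDATED_DATE > '${dih.last_index_time}'"
          deltaImportQuery="select CONCAT(cast(col1 as varchar(10)), name) as ID, *
                            from MyTable
                            where CONCAT(cast(col1 as varchar(10)), name) = '${dih.delta.ID}'"/>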
On Sun, Jul 8, 2
me":"Testing line
1","name1":"Testing line 1"},{"id":"2","name":"Testing line 2","name1":"Testing
line 2"},{"id":"3","name":"Testing line 3","name1":"
if there isn't one already
>
> On Fri, Aug 10, 2018, 19:49 Karthik Ramachandran <
> kramachand...@commvault.com> wrote:
>
> > We are using Solr 7.2.1; highlighting is not working with a docValues-only
> > String field.
> >
> > Should I open a JIRA
Hello Team,
How are you? This is Karthik Reddy and I am working as a Software
Developer. I have one question regarding Solr scores. In one of the projects
I am working on, we are using Apache Lucene/Solr.
We were using Solr 5.4.1 initially and then migrated to Solr 8.4.1. After
migration, I do
zk nodes
Oct 08 21-22: 180 connections from solr nodes, 240 connections from zk nodes
Oct 08 22-23: 180 connections from solr nodes, 240 connections from zk nodes
Oct 08 23-24: 180 connections from solr nodes, 240 connections from zk nodes
Thank you,
Karthik
sole and
was looking for the metrics, but all I could see was the collection time and
not the last GC duration, as attached in the screenshot. Can you please help
me find the correct metric? I strongly believe we are not
capturing this information. Please correct me if I am wrong.
Thanks & Regards,
Karthik
collection.
Please let me know if I can go ahead and create an open JIRA to track the
SolrCloud backup deletion.
Thanks,
Karthik