Tom
On Sat, Aug 27, 2016 at 12:23 PM, Shawn Heisey wrote:
> On 8/26/2016 10:22 AM, D'agostino Victor wrote:
> > Do you know in which version index format changes and if I should
> > update to a higher version ?
>
> In version 6.0, and again in the just-released 6.2,
"foo_2"
9) Remove "foo_1" collection once happy
This avoids indexing overwhelming the performance of the cluster (or
any nodes in the cluster that receive queries), and can be performed
with zero downtime or config changes on the clients.
Cheers
Tom
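The alias-swap approach above can be sketched with the Collections API CREATEALIAS call; `solr_url` is an assumption, and the "foo" names come from the thread:

```python
from urllib.parse import urlencode

solr_url = "http://localhost:8983/solr"  # assumption: a local SolrCloud node

def create_alias_url(alias, collections):
    """Build the Collections API URL that (re)points `alias` at `collections`."""
    params = urlencode({
        "action": "CREATEALIAS",
        "name": alias,
        "collections": collections,
    })
    return f"{solr_url}/admin/collections?{params}"

# Point queries at the rebuilt collection; "foo_1" can then be removed
# with action=DELETE once you are happy with "foo_2".
print(create_alias_url("foo", "foo_2"))
```

Because clients only ever talk to the alias "foo", the swap needs no client config changes.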
The user gets fed up with the lack of response, so reloads the page,
re-submitting the analysis and
bringing down the next server in the cluster.
Lather, rinse, repeat - and then you get to have a meeting to discuss
why we invest so much in HA infrastructure that can be made non-HA by
one user with a complex query. In those meetings it is much harder to
justify not restarting.
Cheers
Tom
On Wed, Oct 26, 2016 at 8:03 AM, Prasanna S. Dhakephalkar
wrote:
> Hi,
>
>
>
> This may be a very rudimentary question
>
>
>
> There is an integer field in a core: "cost"
>
> I need to build a query that will return documents where 0 <
> "cost" - given_number < 500
>
cost:[given_number TO (500+given_number)]. Do the addition
before constructing the query!
You might be able to do this with function queries, but why bother? If
the number is fixed, then fix it in the query; if it varies, then there
must be some code executing on your client that can do a simple
addition.
Cheers
Tom
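The client-side addition above can be sketched like this; curly braces make both endpoints exclusive, matching the strict inequalities in the question:

```python
def cost_range_query(given_number, width=500):
    # 0 < cost - given_number < 500  =>  given_number < cost < given_number + 500
    # In Lucene range syntax, {a TO b} excludes both endpoints.
    return f"cost:{{{given_number} TO {given_number + width}}}"

print(cost_range_query(1000))  # cost:{1000 TO 1500}
```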
to which you would provide the user's preferred sort
ordering (which you would retrieve from wherever you store such
information) and the field that you want sorted. It would look
something like this:
usersortorder("category_id", "3,5,1,7,2,12,14,58") DESC,
usersortorder("source_id", "5,2,1,4,3") DESC, date DESC, title DESC
Cheers
Tom
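The `usersortorder` function query above is hypothetical (it would need a custom plugin); the same ordering can be emulated client-side, as a sketch:

```python
def user_sort_key(doc, order_csv, field):
    """Rank a doc by the user's preferred ordering of field values;
    values not in the list sort last."""
    order = {int(v): i for i, v in enumerate(order_csv.split(","))}
    return order.get(doc[field], len(order))

docs = [{"category_id": 1}, {"category_id": 3}, {"category_id": 5}]
docs.sort(key=lambda d: user_sort_key(d, "3,5,1,7,2,12,14,58", "category_id"))
print([d["category_id"] for d in docs])  # [3, 5, 1]
```

This only works when the page of results can be fetched and reordered on the client; a server-side ValueSource is needed to sort the whole result set.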
g next 50 objects. We are noticing that few objects which were
> returned before are being returned again in the second page. Is this a known
> issue with Solr?
Are you using paging (page=N) or deep paging (cursorMark=*)? Do you
have a deterministic sort order (i.e., not simply by score)?
Cheers
Tom
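A minimal sketch of the cursorMark approach; it assumes the uniqueKey field is `id`, which breaks score ties so that pages never overlap:

```python
from urllib.parse import urlencode

def next_page_params(cursor="*", rows=50):
    """Build deep-paging request params: cursorMark plus a deterministic
    sort (the uniqueKey tie-breaker is what prevents repeated docs)."""
    return urlencode({
        "q": "*:*",
        "rows": rows,
        "sort": "score desc, id asc",  # assumption: uniqueKey is `id`
        "cursorMark": cursor,
    })

# The first request uses cursorMark=*; each response carries nextCursorMark,
# which becomes the cursor for the next request until it stops changing.
print(next_page_params())
```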
The best way to answer performance questions for your schema
and data is to try it out. Generate 10 million docs, store them in a
file (e.g. as CSV), and then use the post tool to try different schema
and query options.
Cheers
Tom
t one core and not the
second.
What are we likely to be doing wrong in our config or update to prevent the
replication?
Thanks
Tom
Thanks Erick!
As I said, user error! ;)
Tom
On 21/11/17 22:41, Erick Erickson wrote:
I think you're confusing shards with replicas.
numShards is 2, each with one replica. Therefore half of your docs
will wind up on one replica and half on the other. If you're adding a
sing
I'm running into an issue with the initial CDCR bootstrapping of an existing
index. In short, after turning on CDCR, only the leader replica in the target
data center will have the documents replicated, and they will not exist in any of
the follower replicas in the target data center. All subsequent
the solr instance running
on that server. Not sure if this information helps at all.
> On Nov 30, 2017, at 11:22 AM, Amrit Sarkar wrote:
>
> Hi Tom,
>
> I see what you are saying and I too think this is a bug, but I will confirm
> once on the code. Bootstrapping should h
leader node
otherwise it will replicate from one of the replicas which is missing the
index).
> On Nov 30, 2017, at 12:16 PM, Amrit Sarkar wrote:
>
> Tom,
>
> This is very useful:
>
>> I found a way to get the follower replicas to receive the documents from
>>
y will not receive the
initial index.
> On Dec 1, 2017, at 12:52 AM, Amrit Sarkar wrote:
>
> Tom,
>
> (and take care not to restart the leader node otherwise it will replicate
>> from one of the replicas which is missing the index).
>
> How is this possible? Ok I will loo
https://cwiki.apache.org/confluence/display/solr/Post+Tool
Or by doing it manually however you wish:
https://cwiki.apache.org/confluence/display/solr/Uploading+Data+with+Index+Handlers#UploadingDatawithIndexHandlers-CSVFormattedIndexUpdates
Cheers
Tom
osite-latlon
Cheers
Tom
t;
>
> This same script worked as expected on a single solr node (i.e. not in
> SolrCloud mode).
>
> Thanks,
> Chris
>
Hey Chris
We hit the same problem moving from non-cloud to cloud, we had a
collection that loaded its DIH config from various XML files listing
the DB queries to run. We wrote a simple DataSource plugin function to
load the config from Zookeeper instead of local disk to avoid having
to distribute those config files around the cluster.
https://issues.apache.org/jira/browse/SOLR-8557
Cheers
Tom
he cluster but empty, ready for you to assign new
replicas to it using the Collections API.
You can also use what are called "snitches" to define rules for how
you want replicas/shards allocated amongst the nodes, eg to avoid
placing all the replicas for a shard in the same rack.
Cheers
Tom
[1]
https://github.com/django-haystack/pysolr/commit/366f14d75d2de33884334ff7d00f6b19e04e8bbf
", "defType": "dismax", "indent": "true", "qf":
"title^2000 content", "pf": "pf=title^4000 content^2", "sort": "score
desc", "wt": "json", but that was not better. if I
Well, I've tried much larger values than 8, and it still doesn't seem to
do the job?
For now, assume my users are searching for exact sub strings of a real
title.
Tom
On 13/01/17 16:22, Walter Underwood wrote:
I use a boost of 8 for title with no boost on the content. Both In
ame.
I don't understand what you mean. If you have these three documents in
your index, what data do you want in the facet?
[
{itemId: 1, itemName: "Apple"},
{itemId: 2, itemName: "Android"},
{itemId: 3, itemName: "Android"}
]
Cheers
Tom
I 'solved' this by removing some of the 'AND's from my full query. AND
should be optional, with no effect if present, right? But for me it
was forcing the score to 0.
Which might be the same as saying nothing matched?
Tom
On 13/01/17 15:10, Tom Chiverton wrote:
nk you for your help.
You do have to follow the correct syntax:
json.facet={name_of_facet_in_output:{type:terms, field:name_of_field}}
It is documented in confluence:
https://cwiki.apache.org/confluence/display/solr/Faceted+Search
Also by yonik:
http://yonik.com/json-facet-api/
Cheers
Tom
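Building the json.facet parameter from a dict keeps the quoting correct; the facet and field names below are from the thread's example:

```python
import json

# A terms facet over itemName, named name_counts in the response.
facet = {"name_counts": {"type": "terms", "field": "itemName"}}
params = {"q": "*:*", "rows": 0, "json.facet": json.dumps(facet)}
print(params["json.facet"])
```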
You can actually do your example, however:
json.facet={height_facet:{type:range, gap:20, start:160, end:190,
hardend:true, field:height}}
If you do require arbitrary bucket sizes, you will need to do it by
specifying query facets instead, I believe.
Cheers
Tom
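The range facet above as a runnable sketch, alongside arbitrary buckets expressed as query facets; the bucket edges in the query facets are made up for illustration:

```python
import json

# Fixed-gap buckets 160-180 and 180-190; hardend pins the last bucket at `end`.
range_facet = {"height_facet": {
    "type": "range", "field": "height",
    "start": 160, "end": 190, "gap": 20, "hardend": True,
}}

# Arbitrary bucket sizes need one query facet per bucket instead.
query_facets = {
    "short": {"type": "query", "q": "height:[* TO 160}"},
    "tall": {"type": "query", "q": "height:[185 TO *]"},
}
print(json.dumps(range_facet))
```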
On Wed, Feb 8, 2017 at 11:26 PM, deniz wrote:
> Tom Evans-2 wrote
>> I don't think there is such a thing as an interval JSON facet.
>> Whereabouts in the documentation are you seeing an "interval" as JSON
>> facet type?
>>
>>
>> You want a ra
ghtly better performance.
Cheers
Tom
On Thu, Feb 9, 2017 at 11:58 AM, Bryant, Michael
wrote:
> Hi all,
>
> I'm converting my legacy facets to JSON facets and am seeing much better
> performance, especially with high cardinality facet fields. However, the one
> issue I can'
by adding
replicas of that shard on other nodes - perhaps even removing it from
the node that did the indexing. We have a node that solely does
indexing; before the collection is queried for anything, it is added to
the querying nodes.
You can do this manually, or you can automate it using the collections API.
Cheers
Tom
In your solrconfig.xml you have:
<dataDir>data</dataDir>
It should be
<dataDir>${solr.data.dir:}</dataDir>
Which is still in your config, you've just got it commented out :)
Cheers
Tom
Hi,
I wonder how to secure Solr with Kerberos.
We can Kerberos secure Solr by configuring the AuthenticationFilter from
the hadoop-auth.jar that is packaged in solr.war.
But after we do that,
1) How does a SolrJ client connect to the secured Solr server?
2) In SolrCloud environment, how one Sol
xecutor.java:897)
at
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:919)
at java.lang.Thread.run(Thread.java:738)
Any way to setup SolrCloud to write index to local file system, while
allowing the Solr MapReduceIndexerTool's GoLive to merge index generated on
HDFS to the SolrCloud?
Thanks,
Tom
\
-Dsolr.hdfs.home=file:///opt/test/solr/node/solr \
-jar start.jar
With that, the go-live works fine.
Any comment on this approach?
Tom
On Wed, Jul 2, 2014 at 9:50 PM, Erick Erickson
wrote:
> How would the MapReduceIndexerTool (MRIT for short)
> find the local disk to write from
, so
please forgive any naivety. Any ideas of what to do next would be greatly
appreciated. I don't currently have details of the VM implementation but
can get hold of this if it's relevant.
thanks,
Tom
Boogie, Shawn,
Thanks for the replies. I'm going to try out some of your suggestions
today. Although, without more RAM I'm not that optimistic..
Tom
On 21 October 2013 18:40, Shawn Heisey wrote:
> On 10/21/2013 9:48 AM, Tom Mortimer wrote:
>
>> Hi everyone,
>>
Just tried it with no other changes than upping the RAM to 128GB total, and
it's flying. I think that proves that RAM is good. =) Will implement
suggested changes later, though.
cheers,
Tom
On 22 October 2013 09:04, Tom Mortimer wrote:
> Boogie, Shawn,
>
> Thanks for the repl
e" returns the same results as "oscar
wilde" (2000 hits).
Is it possible to configure eDisMax to do case-sensitive parsing, so that
"AND" is an operator but "and" is just another term?
thanks,
Tom
Oh, good grief - I was just reading that page, how did I miss that? *derp*
Thanks Shawn!!!
Tom
On 6 November 2013 18:59, Shawn Heisey wrote:
> On 11/6/2013 11:46 AM, Tom Mortimer wrote:
>
>> I'm using eDisMax query parser, and need to support Boolean operators AND
>>
the only solution is to have a minimal, shared stopwords
list for all languages I want to support. Is this correct, or is there a
way of supporting this kind of searching with per-language stopword lists?
Thanks for any ideas!
Tom
Ah, thanks Markus. I think I'll just add the Boolean operators to the
stopwords list in that case.
Tom
On 7 November 2013 12:01, Markus Jelsma wrote:
> This is an ancient problem. The issue here is your mm-parameter, it gets
> confused because for separate fields different amoun
elps a bit!
Tom
On 7 November 2013 14:50, Palmer, Eric wrote:
> Sorry if this is obvious (because it isn't for me)
>
> I want to build a solr (4.5.1) + nutch (1.7.1) environment. I'm doing
> this on amazon linux (I may put nutch on a separate server eventually).
>
>
nded behavior.
Why is edismax splitting (title:Michigan) and (Corporate Income Tax) while
determining what to use for proximity boost?
Thanks, Tom
it process.
Regards,
Tom
ottleneck remains the same. Having said that, we have an
> ingestion tool in the works that will take advantage of data locality for
> splitable files as well.
>
> Wolfgang.
>
> On Sep 24, 2014, at 9:38 AM, Tom Chen wrote:
>
> > Hi,
> >
> > The MRIT (MapReduceInde
I wonder if Solr has InputFormat and OutputFormat like the EsInputFormat
and EsOutputFormat that are provided by Elasticsearch for Hadoop
(es-hadoop).
Is it possible for Solr to provide such integration with Hadoop?
Best,
Tom
make the Solr index data available to
Hadoop MapReduce. Along the same line, we can also make Solr index data
available to Hive, Spark, etc., just as es-hadoop can.
Best,
Tom
On Thu, Sep 25, 2014 at 10:26 AM, Michael Della Bitta <
michael.della.bi...@appinions.com> wrote:
>
eger other than disabling each integer field in
turn?
Cheers
Tom
Exception while processing: variant document :
SolrInputDocument(fields: [(removed)]):
org.apache.solr.handler.dataimport.DataImportHandlerException:
java.lang.ClassCastException: java.lang.Long cannot be cast to
java.lang.
ired effect.
The source database is mysql, the source column for "country_id" is
"`country_id` smallint(6) NOT NULL default '0'".
Again, I'm not 100% sure that it is even the "country" field that
causes this, there are several SortedMapBackedCache sub-entities (but
they are all analogous to this one).
Thanks in advance
Tom
On Fri, Oct 3, 2014 at 3:13 PM, Tom Evans wrote:
> I tried converting the selected data to SIGNED INTEGER, eg
> "CONVERT(country_id, SIGNED INTEGER) AS country_id", but this did not
> have the desired effect.
However, changing them to be cast to CHAR change
On Fri, Oct 3, 2014 at 3:24 PM, Tom Evans wrote:
> On Fri, Oct 3, 2014 at 3:13 PM, Tom Evans wrote:
>> I tried converting the selected data to SIGNED INTEGER, eg
>> "CONVERT(country_id, SIGNED INTEGER) AS country_id", but this did not
>> have the desired effect.
&
e API for this
framework...
Kind regards,
Tom
Thank you, embarrassingly I had not looked at that doc.
And thank you to the other repliers.
From: Chris Hostetter
Sent: 22 October 2014 20:38
To: solr-user@lucene.apache.org
Subject: Re: StatelessScriptUpdateProcessorFactory Access to Solr
Core/schema/a
Brian and I are working together to diagnose this issue so I can chime in
quickly here as well. These values are defined as part of the defaults
section of the config.
Tom
Thanks for the links. The dzone link was nice and concise, but unfortunately
makes use of the now deprecated CJK tokenizer. Does anyone out there have
some examples or experience working with the recommended replacement for
CJK?
Thanks,
TZ
on of an ICU
Tokenizer that might be well suited to Chinese text, but may be intended
for a multilingual field? (
https://cwiki.apache.org/confluence/display/solr/Tokenizers#Tokenizers-ICUTokenizer).
Anyone have any familiarity with ICU vs Standard for a field that will store
only Chinese text?
m the scoring algorithm.
Ideally I'd like an operator like ANY:
( ANY ANY ANY )
that has the purpose: return documents, sorted by the score of the highest
scoring term.
Any thoughts about how to achieve this?
_____
Tom Burgmans
xample.EmailAddress that are not simple strings but itself objects?
I made an image of the CAS Visual Debugger with this AE and the sentence to
show which fields I mean, I hope this makes it more clear:
http://tinypic.com/view.php?pic=34rud1s&s=8#.VN5bF7s2cWN
Does anyone know how to a
I'm on 4.1 and I have a similar problem. Except for the version number
everything else seems to be fine. Is that what other people are seeing?
--
View this message in context:
http://lucene.472066.n3.nabble.com/solr-4-2-1-still-has-problems-with-index-version-and-index-generation-tp405p40
&sort=last_updated_date desc
Maybe adding %20 will help:
&sort=last_updated_date%20desc
--
View this message in context:
http://lucene.472066.n3.nabble.com/Sorting-results-by-last-update-date-tp4066692p4066986.html
Sent from the Solr - User mailing list archive at Nabble.com.
UniqueId avoids entries with the same id.
--
View this message in context:
http://lucene.472066.n3.nabble.com/is-there-any-attribute-in-schema-xml-to-avoid-duplication-in-solr-tp3392408p3393085.html
Does anyone know if you can just copy the index from a 1.4 Solr instance to a
3.X Solr instance? And be mostly done with the upgrade?
--
View this message in context:
http://lucene.472066.n3.nabble.com/upgrading-1-4-to-3-x-tp3415044p3415790.html
Getting a solid-state drive might help
--
View this message in context:
http://lucene.472066.n3.nabble.com/millions-of-records-problem-tp3427796p3431309.html
y dynamic fields that contain LatLonType data?
Thanks,
Tom
ot; so it will take
precedence.
-----Original Message-
From: Tom Cooke [mailto:tom.co...@gossinteractive.com]
Sent: 26 October 2011 20:06
To: solr-user@lucene.apache.org
Subject: Can dynamic fields defined by a prefix be used with LatLonType?
Hi,
I'm adding support for lat/lon data
Interesting info.
You should look into using Solid State Drives. I moved my search engine to
SSD and saw dramatic improvements.
--
View this message in context:
http://lucene.472066.n3.nabble.com/Huge-Performance-Solr-distributed-search-tp3530627p346.html
isn't an error in itself so much as a side effect of a
larger problem (like why the operation is taking so long).
--
Tom Lianza
CTO, Wishpot.com
skype: tlianza
Hi Lukas,
Have you tried setting the debug mode (debugQuery=on)?
It provides very detailed info about the scoring, it might even be too
much for a regular user but for us it was very helpful at times.
Regards,
Tom
-Original Message-
From: Lukas Kahwe Smith [mailto:m...@pooteeweet.org
Have you already looked at Hibernate Search?
It combines Hibernate ORM with indexing/searching functionality of
Lucene.
The latest version even comes with the Solr analyzers.
http://www.hibernate.org/subprojects/search.html
Regards,
Tom
-Original Message-
From: fachhoch [mailto:fachh
If you're only adding documents you can also have a go with
StreamingUpdateSolrServer instead of the CommonsHttpSolrServer.
Couple that with the suggestion of master/slave so the searches don't
interfere with the indexing and you should have a pretty responsive
system.
-Original Message-
F
Is the Solr process still running?
Also what OS are you using?
-Original Message-
From: ZAROGKIKAS,GIORGOS [mailto:g.zarogki...@multirama.gr]
Sent: dinsdag 13 juli 2010 10:47
To: solr-user@lucene.apache.org
Subject: RE: Locked Index files
I found it but I can not delete
Any suggestion??
It will most likely be smaller but the new size is highly dependent on
the number of documents that you have deleted (because optimize actually
removes data instead of only flagging it).
-Original Message-
From: Karthik K [mailto:karthikkato...@gmail.com]
Sent: dinsdag 13 juli 2010 11:3
Is there any reason why you have to limit each instance to only 1M
documents?
If you could put more documents in the same core I think it would
dramatically improve your response times.
-Original Message-
From: marship [mailto:mars...@126.com]
Sent: donderdag 15 juli 2010 6:23
To: solr-us
This question has come up several times over the past weeks.
The cause is probably all your fields being of type "string".
This is only good for exact matches like id's etc.
Try using "text" or another type that tokenizes.
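The distinction as a schema.xml sketch; the field names are illustrative:

```xml
<!-- "string" is indexed verbatim: good for ids and exact-match filters -->
<field name="id" type="string" indexed="true" stored="true"/>
<!-- "text" is tokenized: matches on the individual words in the value -->
<field name="description" type="text" indexed="true" stored="true"/>
```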
-Original Message-
From: Hando420 [mailto:hando...@gmail.com]
Sen
differ from
opening excellent AND presentation_id:294 AND type:blob
So I wouldn't use either of the last two.
Tom
p.s. Not sure what is going on with the last lines of your debug
output for the query. Is that really what shows up after presentation
ID? I see Euro, hash mark, zero, semi-colon, and
Delete all docs with the dynamic fields, and then optimize.
On Wed, Sep 22, 2010 at 1:58 PM, Moiz Bhukhiya wrote:
> Hi All:
>
> I had used dynamic fields for some of my fields and then later decided to
> make it static. I removed that dynamic field from the schema but I still see
> it on admin in
Maybe process the city name as a single token?
On Mon, Sep 27, 2010 at 3:25 PM, Savannah Beckett
wrote:
> Hi,
> I have city name as a text field, and I want to do spellcheck on it. I use
> setting in http://wiki.apache.org/solr/SpellCheckComponent
>
> If I setup city name as text field and do
rop excess docs that way, but the potential
overhead is large.
Is there any way of doing this in Solr without hacking in a custom Lucene
Collector? (which doesn't look all that straightforward).
cheers,
Tom
Sounds like it's worth a try! Thanks Andre.
Tom
On 5 Dec 2012, at 17:49, Andre Bois-Crettez wrote:
> If you do grouping on source_id, it should be enough to request 3 times
> more documents than you need, then reorder and drop the bottom.
>
> Is a 3x overhead acceptable ?
&g
Thanks, but even with group.main=true the results are not in relevancy (score)
order; they are in group order. Which is why I can't use it as is.
Tom
On 6 Dec 2012, at 19:00, Way Cool wrote:
> Grouping should work:
> group=true&group.field=source_id&group.limit=3&grou
the implicit AND turns into an implicit OR
when an explicit OR is added to the query expression. The parsedquery
information confirms this behavior.
Why is edismax doing this?
Tested on a Solr 4.0.0 instance.
Thanks, Tom
--
Tom Burgmans
Search Specialist
T
y
search terms with a +.
-Original Message-
From: Shawn Heisey [mailto:s...@elyograg.org]
Sent: Wednesday 12 December 2012 05:46
To: solr-user@lucene.apache.org
Subject: Re: edismax: implicit AND changes into implicit OR
On 12/12/2012 5:51 AM, Burgmans, Tom wrote:
> I have some doc
implicit AND changes into implicit OR
On 12/12/2012 10:27 AM, Burgmans, Tom wrote:
> I have set in the schema (and
> restarted Solr), and tested again with
>
> http://localhost:8983/solr/collection1/browse?defType=edismax&q=(Thomas+Michael)+OR+xxxmatchesnothingxxx&q.op=AND
&g
In our case it's the opposite. For our clients it is very important that every
synonym gets equal chances in the relevancy calculation. The fact that "nol"
scores higher than "net operating loss", simply because its document frequency
is lower, is unacceptable and a reason to look for ways to di
ous
Look at the EXPLAIN information to see how the final score is calculated.
Tom
-Original Message-
From: Sangeetha [mailto:sangeetha...@gmail.com]
Sent: Thursday 13 December 2012 08:33
To: solr-user@lucene.apache.org
Subject: score calculation
I want to know how score is calculated
Tom Polak
IT Recruiter
Experis IT Staffing
1122 Oberlin Road
Raleigh, NC 27605
T:
919 755 5838
F:
919 755 5828
C:
919 457 8530
tom.po...@experis.com<mailto:tom.po...@experis.com>
www.experis.co
Thanks, Tom
--
Tom Burgmans
Search Special
earch terms,
while the query is (whitespace) tokenized for "body" but search as a phrase for
"valueadd"?
Thanks,
Tom Burgmans
This email and any attachments may contain confidential or privileged
information
and is intended for the addressee only. If you are not the intended re
kenized and non-tokenized fields. What a field type's analyzer does with
its value is irrelevant to query parsing.
-- Jack Krupansky
-Original Message-
From: Burgmans, Tom
Sent: Thursday, February 28, 2013 10:48 AM
To: solr-user@lucene.apache.org
Subject: Search in String and Text_en
t)~1.0 | (body_fr:a body_fr:result)~1.0)
How should I interpret this? Is it a bug in edismax? Is it intended and if yes:
why?
Thanks for any hint,
Tom
d stop_fr.txt. Use same set of stop words for all
> fields that you search on.
>
> You might find this useful :
> http://bibwild.wordpress.com/2010/04/14/solr-stop-wordsdismax-gotcha/
>
> --- On Wed, 3/13/13, Burgmans, Tom wrote:
>
>> From: Burgmans, Tom
>> Subject:
the => syntax. You've already got it. Add the lines
pretty => scenic
text => words
to synonyms.txt, and it will do what you want.
Tom
> On 7 Dec 2010 01:28, "Erick Erickson" wrote:
>> See:
>>
> http://wiki.apache.org/solr/AnalyzersTokenizersTokenFilters#solr.KeepWordFilterFactory
I'd put the synonym filter first in your configuration for the field,
then the keep words filter factory.
Tom
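Putting both suggestions together, a sketch of the analyzer chain with the synonym filter ahead of the keep-words filter; the field type name and file names are assumptions:

```xml
<fieldType name="text_syn" class="solr.TextField">
  <analyzer type="index">
    <tokenizer class="solr.WhitespaceTokenizerFactory"/>
    <!-- expand synonyms first, so the synonym outputs exist... -->
    <filter class="solr.SynonymFilterFactory" synonyms="synonyms.txt" expand="true"/>
    <!-- ...before the keep-words filter drops everything not listed -->
    <filter class="solr.KeepWordFilterFactory" words="keepwords.txt"/>
  </analyzer>
</fieldType>
```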
On Tue, Dec 7, 2010 at 12:06 PM, lee carroll
wrote:
> ok thanks for your response
>
> To summarise the solution then:
>
> To only index synonym
y
SolrCore. But it's trivial to do.
So, I wouldn't recommend it, but it was fun to play around with. :)
It's probably easier to fix the load balancer, which is almost
certainly just looking for any string you specify. Just change what
it's expecting. They are built so you can con
copy from the resulting URL.
>> 2) Do I need the double quotes around "San Francisco"?
Yes. Else it will be
(city:San) (Francisco)
Probably not what you want.
>> 3) Will complex boolean filters like this substantially
>> slow down query performance?
That's not very complex, and the filter may be cached. Probably won't
be a problem.
Tom
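Quoting and URL-encoding that filter can be sketched like this, with `city` as the field from the thread:

```python
from urllib.parse import urlencode

# The double quotes keep "San Francisco" together as a phrase;
# urlencode percent-encodes them so nothing is lost in the URL.
params = urlencode({"q": "*:*", "fq": 'city:"San Francisco"'})
print(params)
```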
>>
>> Thanks
sion. What happens if you configure your slave
as a master, also? Does that get the behavior you want?
Tom
On Tue, Dec 7, 2010 at 8:16 AM, Markus Jelsma
wrote:
> Yes, i read that too in the replication request handler's source comments. But
> i would find it convenient if it would just u
If you can benchmark before and after, please post the results when
you are done!
Things like your index's size, and the amount of RAM in your computer
will help make it meaningful. If all of your index can be cached, I
don't think fragmentation is going matter much, once you get warme
s (id and title), and they deleted
quickly (17 milliseconds).
Maybe if you post your delete code? Are you doing anything else (like
commit/optimize?)
Tom
On Wed, Dec 8, 2010 at 12:55 PM, Ravi Kiran wrote:
> Hello,
>
> Iam using solr 1.4.1 when I delete by query or Id from solr
e you can tell which ones have failed, if any do, if you
delete with a list, but you are not using "unsuccessful" now anyway.
Tom
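Deleting by a list of IDs (so individual failures stay attributable) can be sketched as a JSON update body; the IDs are placeholders:

```python
import json

# Body for POST /solr/<collection>/update with Content-Type: application/json;
# add commit=true to the URL when appropriate.
body = json.dumps({"delete": ["doc-1", "doc-2", "doc-3"]})
print(body)
```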
On Thu, Dec 9, 2010 at 7:55 AM, Ravi Kiran wrote:
> Thank you Tom for responding. On an average the docs are around 25-35 KB.
> The code is as follows, K
modified, core will be reloaded");
logReplicationTimeAndConfFiles(modifiedConfFiles,
successfulInstall);//write to a file time of replication and conf
files.
reloadCore();
}
And I tested it awhile ago, and it seemed to be working.
Check your logs for errors, perhaps?
Tom
, can you reduce the number of unique
terms? You might check your faceting algorithms, and see if you could
use enum, instead of fc for some of them.
Check your statistics page, what's your insanity count?
Tom
On Fri, Dec 10, 2010 at 12:17 PM, John Russell wrote:
> I have been load te