Below is my query
http://localhost:8983/solr/select/?q=subject:session management in php&fq=category:[*%20TO%20*]&fl=category,score,subject
The result is like below:
status: 0
QTime: 983
fq: category:[* TO *]
q: subject:session management in php
Thanks. I added a copy field and that fixed the issue.
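For reference, the usual shape of that fix is a copyField in schema.xml into an analyzed catch-all field; a sketch (field and type names here are assumptions, not taken from the original schema):

```xml
<!-- schema.xml: copy the subject into a catch-all searchable field -->
<field name="subject" type="text_general" indexed="true" stored="true"/>
<field name="text" type="text_general" indexed="true" stored="false" multiValued="true"/>
<copyField source="subject" dest="text"/>
```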
On Wed, Apr 3, 2013 at 12:29 PM, Gora Mohanty-3 [via Lucene] <
ml-node+s472066n4053412...@n3.nabble.com> wrote:
> On 3 April 2013 10:52, amit wrote:
when I use the copy field destination as "text" it works fine.
I get a boost for exact match.
But if I use some other field the score is not boosted for exact match.
Not sure if I am headed in the right direction; I am new to Solr, so please bear
with me.
I checked this link http://wiki.apache.org/solr/So
Thanks Jack and Andre
I am trying to use edismax, but I am stuck with a NoClassDefFoundError:
org/apache/solr/response/QueryResponseWriter
I am using Solr 3.6.
I have followed the steps here
http://wiki.apache.org/solr/VelocityResponseWriter#Using_the_VelocityResponseWriter_in_Solr_Core
Just the jars
I am using Solr 3.6 and trying to use the edismax handler.
The config has a /browse requestHandler, but it doesn't work because of a
NoClassDefFoundError for VelocityResponseWriter.
I have copied the jars to solr/lib following the steps here, but no luck:
http://wiki.apache.org/solr/VelocityRes
I have a simple system. I put the title of web pages into the "name" field and
the content of the web pages into the "Description" field.
I want to search both fields and give the name a little more boost.
A search on the name field or the description field returns close to hundreds
of records.
http://localhost:
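A hedged sketch of what such a boosted two-field query could look like with the dismax parser (the query text is hypothetical; the field names are from the post above):

```python
from urllib.parse import urlencode

# dismax query that searches both fields and weights "name" higher
params = {
    "defType": "dismax",
    "q": "solr tutorial",          # hypothetical query text
    "qf": "name^2.0 Description",  # name boosted over Description
    "fl": "name,score",
}
query_string = urlencode(params)
```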
I am installing Solr on Tomcat 7 in AWS using the Bitnami Tomcat stack. My Solr
server is not starting; below is the error:
INFO: Starting service Catalina
May 15, 2013 7:01:51 AM org.apache.catalina.core.StandardEngine startInternal
INFO: Starting Servlet Engine: Apache Tomcat/7.0.39
May 15, 2013 7:01:
t is wrong. I am using a Windows 7 machine, dual core with 4 GB RAM.
Thanks
Amit
--
View this message in context:
http://lucene.472066.n3.nabble.com/solr-indexing-slows-down-after-few-minutes-tp4004337.html
Sent from the Solr - User mailing list archive at Nabble.com.
that reduces the search speed, but so far so
good.
Thanks
Amit
On Thu, Aug 30, 2012 at 10:53 PM, pravesh [via Lucene] <
ml-node+s472066n4004421...@n3.nabble.com> wrote:
> Did you checked wiki:
> http://wiki.apache.org/lucene-java/ImproveIndexingSpeed
>
> Do you commit often? Do
core. This is the solr.xml
I have double-checked a lot of steps by searching on the net, but no luck.
If anyone has faced this please suggest.
Thanks
Amit
--
View this message in context:
http://lucene.472066.n3.nabble.com/solr-3-6-1-tomcat-7-0-missing-core-name-in-path-tp4005868.html
Is it possible to index (add/update) Solr using jQuery AJAX?
I am trying with JSONP first, but no luck.
try {
    $.ajax({
        type: "POST",
        url: "http://192.168.10.113:8080/solr/update/json?commit=true",
        data: { "add": { "doc": { "id": "22"
I changed as per your feedback. Added quotes and escaped them before id and
name.
Still not able to insert.
data: "20trailblazers",
The tomcat log says bad request.
192.168.11.88 - - [01/Nov/2012:17:10:35 +0530] "OPTIONS
/solr/update?commit=true HTTP/1.1" 400 1052
In google chrome there are
Hi Luis
I tried sending an array too, but no luck.
This is how the request looks:
$.ajax({
    url: "http://192.168.10.113:8080/solr/update/json?commit=true",
    type: "POST",
    contentType: "application/json; charset=utf-8",
    data: [ { "id": "22", "name": "Seatt
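For contrast with the snippets in this thread: the body of a POST to /solr/update/json must be a JSON *string* sent with Content-Type: application/json; passing a plain object or array to jQuery gets it form-encoded instead, which is one way to end up with a 400. A sketch (document values hypothetical):

```python
import json

# Build the kind of body /solr/update/json expects: a JSON array of docs.
docs = [{"id": "1", "name": "example"}]
body = json.dumps(docs)
```

In jQuery terms, the equivalent fix is passing data: JSON.stringify(docs).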
first irrespective of
score.
Thanks in advance any kind reply.
Regards,
Amit
No virus found in this outgoing message.
Checked by AVG.
Version: 7.5.549 / Virus Database: 270.8.2/1740 - Release Date: 22-10-2008
19:24
Hi All,
Is there any way to sort the facet values by our own ranking value instead of
only by count?
Thanks and Regards,
Amit
Hi Shalin,
Thanks for reply.
Actually we have a ranking associated with the field on which we are faceting,
and we want to show only the top 10 facet values. They are now sorted by count,
but we want to sort them by our ranking.
Regards,
Amit
-Original Message-
From: Shalin Shekhar Mangar [mailto
Cat4
Cat3
Cat2
Cat1
Hope this will convey what we want.
Have a great day. :)
Thanks and Regards,
Hi All,
In my case I am using DIH to index the data, and the query has 2 joins.
To index 70K documents it is taking 3-4 hours. Document size would
be around 10-20KB. The DB is MSSQL, and I am using Solr 4.2.10 in cloud mode.
Rgds
AJ
> On 21-Mar-2016, at 05:23, Erick Erickson wrote:
>
> In my
Yes, I do have multiple nodes in my SolrCloud setup.
Rgds
AJ
> On 21-Mar-2016, at 22:20, fabigol wrote:
>
> Amit Jha,
> do you have several sold server with solr cloud?
>
>
>
>
> --
> View this message in context:
> http://lucene.472066.n3.nabble.com/Ho
Rgds
AJ
> On 22-Mar-2016, at 05:32, Shawn Heisey wrote:
>
>> On 3/20/2016 6:11 PM, Amit Jha wrote:
>> In my case I am using DIH to index the data and Query is having 2 join
>> statements. To index 70K documents it is taking 3-4Hours. Document size
>> would be ar
got me slightly concerned. During these periods I didn't notice any issues
with the cluster and everything looks healthy in the cloud summary. All of
the instances are hosted on AWS.
Any idea what may be causing this issue and what I can do to mitigate?
Thanks
Amit
Are there any known network issues?
> * Do you have any idea about the GC on those replicas?
>
>
> On Mon, Apr 27, 2015 at 1:25 PM, Amit L wrote:
>
> > Hi,
> >
> > A few days ago I deployed a solr 4.9.0 cluster, which consists of 2
> > collections. Each collection
Hi,
In my use case, I am adding a document to Solr through a Spring application using
spring-data-solr. This setup works well with a single Solr, but it
is a single point of failure. So we decided to use Solr replication, because we
also need centralized search. Therefore we set up two ins
I want to have realtime index and realtime search.
Rgds
AJ
> On Jun 5, 2015, at 10:12 PM, Amit Jha wrote:
>
> Hi,
>
> In my use case, I am adding a document to Solr through spring application
> using spring-data-solr. This setup works well with single Solr. In current
&g
For instance, in the
> master/slave setup you won't see docs on the slave until after the
> polling interval is expired and the index is replicated.
> 2> In SolrCloud you aren't committing appropriately.
>
> You might review: http://wiki.apache.org/solr/UsingMailingLists
&
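For the near-real-time goal discussed above, the usual recipe is infrequent hard commits for durability plus frequent soft commits for visibility; a hedged solrconfig.xml sketch (the intervals are illustrative, not from the thread):

```xml
<autoCommit>
  <maxTime>60000</maxTime>          <!-- hard commit every 60s, for durability -->
  <openSearcher>false</openSearcher>
</autoCommit>
<autoSoftCommit>
  <maxTime>1000</maxTime>           <!-- soft commit every 1s: docs become searchable -->
</autoSoftCommit>
```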
ne on both. If I set up
replication between 2 servers and configure both as repeaters, then both can act
as master and slave for each other. Therefore writing can be done on both.
Rgds
AJ
> On Jun 6, 2015, at 1:26 AM, Shawn Heisey wrote:
>
>> On 6/5/2015 1:38 PM, Amit Jha wrote:
>>
ather than once for each slave in
> DC2.
>
> But even in this situation you are only ever indexing to the master
> on DC1.
>
> Best,
> Erick
>
>> On Fri, Jun 5, 2015 at 1:20 PM, Amit Jha wrote:
>> Thanks Shawn, for reminding CloudSolrServer, yes I have moved to
Hi,
I set up a SolrCloud with 2 shards, each having 2 replicas, with a 3-node
ZooKeeper ensemble.
We add and update documents from a web app. While updating, we delete the
document and add the same document with updated values and the same unique id.
I am facing a very strange issue where sometimes 2 documents ha
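One note on the delete-then-add flow described above: a single add with the same uniqueKey already overwrites the old document, whereas a separate delete plus add opens a window in which the document is missing and interleaved requests can behave surprisingly. A sketch of the single-command form (field values hypothetical):

```python
import json

# One JSON "add" command; Solr replaces any existing doc with this id.
update = json.dumps({"add": {"doc": {"id": "doc-42", "title": "updated"}}})
```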
It was because of the issues
Rgds
AJ
> On Jun 29, 2015, at 6:52 PM, Shalin Shekhar Mangar
> wrote:
>
>> On Mon, Jun 29, 2015 at 4:37 PM, Amit Jha wrote:
>> Hi,
>>
>> I setup a SolrCloud with 2 shards each is having 2 replicas with 3
>> zookeeper ensem
possible
to get the SegmentInfo that I might be missing, If I am in
the SearchComponent.prepare/process.
Many thanks,
Amit
at
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at
org.apache.tomcat.util.threads.TaskThread$WrappingRunnable.run(TaskThread.java:61)
at java.lang.Thread.run(Thread.java:745)
best,
Amit
Hello All,
Can someone explain the following snippet of solrConfig.xml to me in terms of
the Solr Java API (pseudo code), for better understanding.
[the solrconfig.xml snippet was lost in archiving]
Here I want to know:
1. What is "update
>
> > Regards,
> >Alex.
> >
> > Personal website: http://www.outerthoughts.com/
> > LinkedIn: http://www.linkedin.com/in/alexandrerafalovitch
> > - Time is the quality of nature that keeps events from happening all at
> > once. Lately, it doesn't
How do you start your other project? If it is Maven or Ant then you can
use the antrun plugin to start Solr. Otherwise you can write a small shell
script to start Solr.
On 27-Oct-2013 9:15 PM, "giridhar" wrote:
> Hi friends,Iam giridhar.please clarify my doubt.
>
> we are using solr for our pr
It depends. One core means one schema file and one solrconfig.xml.
So if you want only one core, then put all the required fields for both searches
in one schema file and carry out your searches. Otherwise make two cores
with two schema files and perform searches accordingly.
On 27-Oct-2013 7:22 AM, "
Lol ... Unsubscribe from this mailing list .
On 27-Oct-2013 5:02 PM, "veena rani" wrote:
> I want to stop the mail
>
>
> On Sun, Oct 27, 2013 at 4:37 PM, Rafał Kuć wrote:
>
> > Hello!
> >
> > Could you please write more about what you want to do? Do you need to
> > stop running Solr process. If
Try this:
http://hokiesuns.blogspot.com/2010/01/setting-up-apache-solr-in-eclipse.html
I use this today and it still works. If anything is outdated (as it's a
relatively old post) let me know.
I wrote this so ping me if you have any questions.
Thanks
Amit
On Sun, Oct 27, 2013 at 7:33 PM,
field in the qf
not in the pf or vice versa.
My understanding from the docs is that qf is a term-wise hard filter, while
pf is a phrase-wise boost of documents that made it past the qf filter.
Thanks!
Amit
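The qf/pf split described above can be sketched as request parameters: qf decides which fields the individual terms must match in, while pf only adds a phrase boost to documents that already matched (field names here are hypothetical):

```python
from urllib.parse import urlencode

# edismax: terms match in title or body; docs with the whole phrase in
# title get an extra boost
params = urlencode({
    "defType": "edismax",
    "q": "session management",
    "qf": "title^2 body",
    "pf": "title^10",
})
```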
Hello All,
I have a requirement where I have to conect to Solr using SolrJ client and
documents return by solr to SolrJ client have to returned to PHP.
I know its simple to get document from Solr to SolrJ
But how do I return documents from SolrJ to PHP ?
Thanks
Amit Aggarwal
Thanks Erick. Numeric fields make sense, and I guess strictly-string
fields would too since it's one term? In the normal text-searching case, though,
does it make sense to have qf and pf differ?
Thanks
Amit
On Oct 28, 2013 3:36 AM, "Erick Erickson" wrote:
> The facetious answer is
Agreed with Doug
On 12-Nov-2013 6:46 PM, "Doug Turnbull"
wrote:
> As an aside, I think one reason people feel compelled to deviate from the
> distributed jetty distribution is because the folder is named "example".
> I've had to explain to a few clients that this is a bit of a misnomer. The
> IT
tegory:Action)* makes sense?
What are some techniques people use to boost documents based on discrete
things like category, manufacturer, genre etc?
Thanks!
Amit
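One common technique for the question above: additive boost queries (bq) on the discrete fields, layered on top of the main relevance query. Names and weights here are hypothetical and need tuning:

```python
# edismax params with additive per-category boosts
params = {
    "defType": "edismax",
    "q": "batman",
    "bq": ["category:Action^5", "manufacturer:Acme^2"],
}
```

(The multiplicative boost= parameter is the other option when additive bq skews scores too much.)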
()) {
returnMe.add(searcher.doc(it.next()));
}
Ques 1 -> What does the FLAG argument represent in the getDocList method?
Ques 2 -> How can I ensure that the searcher.getDocList method gives me the
score along with each document?
--
Amit Aggarwal
8095552012
are the results one day in a
meetup as I think it'll be kinda interesting.
Thanks again
Amit
On Thu, Nov 14, 2013 at 11:11 AM, Chris Hostetter
wrote:
>
> : I have a question around boosting. I wanted to use the &boost= to write a
> : nested query that will boost a document
.
>
> On Tue, Nov 19, 2013 at 8:08 AM, Amit Aggarwal
> wrote:
> > Hello All,
> >
> > I am trying to develop a custom request handler.
> > Here is the snippet :
> >
> > // returnMe is nothing but a list of Document going to return
> >
>
edgehammer
approach of category exclusion through filters.
Thanks
Amit
On Nov 19, 2013 8:51 AM, "Chris Hostetter" wrote:
> : My approach was something like:
> : 1) Look at the categories that the user has preferred and compute the
> : z-score
> : 2) Pick the top 3 among
Hello All,
I am using defType=edismax.
So will boosting work like this in solrConfig.xml?
value_search^2.0 desc_search country_search^1.5
state_search^2.0 city_search^2.5 area_search^3.0
I think it is not working.
If yes, then what should I do?
rom there it's a matter
> of adjusting the boosts to get the results you want.
>
>
> Best,
> Erick
>
>
> On Sat, Nov 23, 2013 at 9:17 AM, Amit Aggarwal >wrote:
>
> > Hello All ,
> >
> > I am using defType=edismax
> > So will boosting will w
rmFreq=1.0
3.3084502 = idf(docFreq=33, maxDocs=342)
0.125 = fieldNorm(doc=327)
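The idf line above can be reproduced from Lucene's classic similarity, idf = 1 + ln(maxDocs / (docFreq + 1)):

```python
import math

# Plugging in docFreq=33 and maxDocs=342 from the explain output:
idf = 1 + math.log(342 / (33 + 1))
# idf is approximately 3.3084502, matching the explain line
```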
Any links where this explanation is documented?
Thanks
--
Amit Aggarwal
8095552012
Because in your solrconfig, against /select, DirectUpdateHandler is
mentioned. It should be solr.SearchHandler.
On 11-Dec-2013 3:11 PM, "Nutan" wrote:
> I have indexed 9 docs.
> this my* schema.xml*
>
>
>
>
> multiValued="false"/>
> required="true"
> multiValued="false"/>
> multiVal
When you start Solr, do you find any error or exception?
Run java -jar start.jar and see if there is any problem.
Otherwise take the stock solrconfig.xml and try to run; it should run.
On 11-Dec-2013 5:41 PM, "Nutan" wrote:
> default="true">
>
>
>explicit
>20
>
Hi,
"Wish You All a Very Happy New Year".
We have an index where a date field has the default value 'NOW'. We are using
solrj to query Solr, and when we try to convert the query
response (response.getResponse) to a JSON object in Java, the JSON
API (org.json) throws an 'invalid json string' exception. API say
Tue, Jan 7, 2014 at 12:28 AM, Amit Jha wrote:
> Hi,
>
> "Wish You All a Very Happy New Year".
>
> We have index where date field have default value as 'NOW'. We are using
> solrj to query solr and when we try to convert query
> response(response.ge
I am using it. But the timestamp, which has ':' characters in it, causes the
issue. Please help.
On Tue, Jan 7, 2014 at 11:46 AM, Ahmet Arslan wrote:
> Hi Amit,
>
> If you want json response, Why don't you use wt=json?
>
> Ahmet
>
>
> On Tuesday, January 7, 2014 7:34 AM,
Hey Hoss,
Thanks for replying back..Here is the response generated by solrj.
*SolrJ Response*: ignore the braces; I have copied it from a big chunk.
Response:
{responseHeader={status=0,QTime=0,params={lowercaseOperators=true,sort=score
desc,cache=false,qf=content,wt=javabin,rows=100,defType=e
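The fragment above is SolrJ's NamedList.toString() format, with unquoted keys and '=' instead of ':', so it is not valid JSON, which is why org.json rejects it. A small demonstration:

```python
import json

# NamedList-style text is not JSON; a strict parser must reject it.
namedlist_style = "{responseHeader={status=0,QTime=0}}"
try:
    json.loads(namedlist_style)
    parsed = True
except ValueError:
    parsed = False
```

Requesting wt=json from Solr directly, as suggested in the thread, sidesteps the conversion entirely.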
Hi,
I would like to know: if I index a file, e.g. a PDF of 100KB, what would the
size of the index be? What factors should be considered to determine the disk size?
Rgds
AJ
case?
Regards
Amit
at I have soon and post back. If
there is feedback or other thoughts let me know!
Cheers
Amit
On Fri, Nov 22, 2013 at 11:38 AM, Chris Hostetter
wrote:
>
> : I thought about that but my concern/question was how. If I used the pow
> : function then I'm still boosting the bad ca
Chris,
Sounds good! Thanks for the tips.. I'll be glad to submit my talk to this
as I have a writeup pretty much ready to go.
Cheers
Amit
On Tue, Jan 28, 2014 at 11:24 AM, Chris Hostetter
wrote:
>
> : The initial results seem to be kinda promising... of course there are
>
overwriteDupes? I have
checked the existing Wiki; there is very little explanation of the flag
there.
Thanks,
-Amit
Solr will complain only if you bring down both the replica and the leader of the
same shard. It would be difficult to have a highly available environment if you
have a small number of physical servers.
Rgds
AJ
> On 18-Feb-2014, at 18:35, Vineet Mishra wrote:
>
> Hi All,
>
> I want to have clear idea about the Faul
I would say use dismax query parser and set boost factor in qf params.
Following link may help
http://wiki.apache.org/solr/DisMaxQParserPlugin#qf_.28Query_Fields.29
https://wiki.apache.org/solr/SolrRelevancyFAQ#Solr_Relevancy_FAQ
Rgds
AJ
> On 18-Feb-2014, at 20:49, "EXTERNAL Taminidi Ravi (ETI
Hi Mike,
What is exact your use case?
What do you mean by "controlling the fields used for phrase queries"?
Rgds
AJ
> On 12-Dec-2014, at 20:11, Michael Sokolov
> wrote:
>
> Doug - I believe pf controls the fields that are used for the phrase queries
> *generated by the parser*.
>
> What I
I am trying to find out duplicate records based on distance and phonetic
algorithms. Can I utilize solr for that? I have following fields and
conditions to identify exact or possible duplicates.
1. Fields
prefix
suffix
firstname
lastname
email(primary_email1, email2, email3)
phone(primary_phone1,
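For the phonetic side of the matching described above, a standard algorithm like Soundex can also be computed outside Solr for comparison; a minimal sketch (this is classic American Soundex, not Solr's DoubleMetaphone filter):

```python
def soundex(name: str) -> str:
    """American Soundex; assumes an ASCII alphabetic, non-empty input."""
    groups = {"BFPV": "1", "CGJKQSXZ": "2", "DT": "3",
              "L": "4", "MN": "5", "R": "6"}

    def code(ch):
        return next((d for letters, d in groups.items() if ch in letters), "")

    name = name.upper()
    result = name[0]          # the first letter is kept verbatim
    prev = code(name[0])
    for ch in name[1:]:
        d = code(ch)
        if d and d != prev:   # adjacent duplicate codes collapse to one digit
            result += d
        if ch not in "HW":    # H/W do not break a run; vowels do
            prev = d
    return (result + "000")[:4]
```

For example, "Smith" and "Smyth" both map to S530, which is the kind of possible-duplicate signal described above.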
y/solr/De-Duplication
>
>
> -- Jack Krupansky
>
> On Sat, Jan 3, 2015 at 2:54 AM, Amit Jha wrote:
>
> > I am trying to find out duplicate records based on distance and phonetic
> > algorithms. Can I utilize solr for that? I have following fields and
> > condit
Hi,
I need to know how I can retrieve phonetic codes. Does Solr provide them as
part of the result? I need the codes for record matching.
*following is schema fragment:*
Hi,
Thanks for response, I can see generated MetaPhone codes using Luke. I am
us
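To get the codes out of Solr itself, the FieldAnalysisRequestHandler (mounted at /analysis/field in the stock solrconfig) returns the tokens each analysis stage emits, including phonetic codes. A sketch of such a request; the field type name here is hypothetical:

```python
from urllib.parse import urlencode

params = urlencode({
    "analysis.fieldtype": "text_phonetic",  # hypothetical phonetic field type
    "analysis.fieldvalue": "Smith",
    "wt": "json",
})
url = "http://localhost:8983/solr/analysis/field?" + params
```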
,
why can't solr
On Thu, Jan 22, 2015 at 7:54 PM, Amit Jha wrote:
> Hi,
>
> I need to know how can I retrieve phonetic codes. Does solr provide it as
> part of result? I need codes for record matching.
>
> *following is schema fragment:*
>
g/solr/4_10_2/solr-core/org/apache/solr/handler/FieldAnalysisRequestHandler.html
> and in solrconfig.xml
>
>
> -- Jack Krupansky
>
>> On Thu, Jan 22, 2015 at 8:42 AM, Amit Jha wrote:
>>
>> Hi,
>>
>> I need to know how can I retrieve phonetic codes. Doe
I'm looking to search (in the solr admin search screen) a certain field
for:
*youtube*
I know that leading wildcards takes a lot of resources but I'm not worried
with that
My only question is about the syntax, would this work:
field:"*youtube*" ?
Thanks,
I'm using Solr 3.6.2
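One way to avoid the leading wildcard, along the lines of the ngram suggestion: index grams of each string so that "youtube" matches inside longer URLs. A hedged schema.xml sketch (type name and gram sizes are assumptions):

```xml
<fieldType name="text_ngram" class="solr.TextField">
  <analyzer type="index">
    <tokenizer class="solr.StandardTokenizerFactory"/>
    <filter class="solr.LowerCaseFilterFactory"/>
    <!-- emit 3..10-character grams of every token -->
    <filter class="solr.NGramFilterFactory" minGramSize="3" maxGramSize="10"/>
  </analyzer>
  <analyzer type="query">
    <tokenizer class="solr.StandardTokenizerFactory"/>
    <filter class="solr.LowerCaseFilterFactory"/>
  </analyzer>
</fieldType>
```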
> No, you cannot use wildcards within a quoted term.
>
> Tell us a little more about what your strings look like. You might want to
> consider tokenizing or using ngrams to avoid the need for wildcards.
>
> -- Jack Krupansky
>
> -----Original Message- From: Amit Sela
>
"youtube" or even
> "something" that is a component of the URL path. No wildcard required.
>
>
> -- Jack Krupansky
>
> -Original Message- From: Amit Sela
> Sent: Thursday, June 27, 2013 8:37 AM
> To: solr-user@lucene.apache.org
> Subject: Re: Solr
Hi,
I would suggest the following:
1. Create custom search connectors for each individual source.
2. The connector will be responsible for querying the source of any type (web,
gateways, etc.), getting the results, and writing the top N results to Solr.
3. Query the same keyword against Solr and display the resu
You can use DB for storing user preferences and later if you want you can flush
them to solr as an update along with userid.
Or you may add a result pipeline filter
Rgds
AJ
On 13-Feb-2013, at 17:50, Á_o wrote:
> Hi:
>
> I am working on a proyect where we want to recommend our users pr
Hi,
As per my knowledge, any number of requests can be issued in parallel to index
documents. Any commit request will write them to the index.
So if P1 issues a commit, then all documents of P2 that are eligible get
committed, and the remaining documents will get committed on another commit request.
Hi Baskar,
Just create a single schema.xml which contains the required fields from the 3
tables.
Add a status column to the child table, i.e.:
1 = add
2 = update
3 = delete
4 = indexed
Etc.
Write a program using solrj which will read the status and act
accordingly.
Rgds
AJ
On 15-Sep-2013, at
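The status-driven sync described above can be sketched roughly like this (the Solr and DB calls are stand-in callables, not real client code):

```python
# Hypothetical status codes, matching the list above
ADD, UPDATE, DELETE, INDEXED = 1, 2, 3, 4

def sync(rows, solr_add, solr_delete, mark_indexed):
    """Push dirty rows to Solr, then flag them as indexed."""
    for row in rows:
        if row["status"] in (ADD, UPDATE):
            solr_add(row)
        elif row["status"] == DELETE:
            solr_delete(row["id"])
        if row["status"] != INDEXED:
            mark_indexed(row["id"])
```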
Add a field called "source" in schema.xml and value would be your table names.
Rgds
AJ
On 15-Sep-2013, at 5:38, Baskar Sikkayan wrote:
> Hi,
> I am new to Solr and trying to use Solr java client instead of using the
> Data handler.
> Is there any configuration i need to do for this?
>
> I
The question is not clear to me. Please be more elaborate in your query. Why do
you want to store the index in DB tables?
Rgds
AJ
On 15-Sep-2013, at 7:20, Baskar Sikkayan wrote:
> How to add index to 3 diff tables from java ...
>
>
> On Sun, Sep 15, 2013 at 6:49 AM, Amit Jha wrote:
ional
ZooKeeper ensemble ? I don't want to use the HBase ZooKeeper because, well
first of all HBase manages it so I'm not sure it's possible and second I
have HBase working pretty hard at times and I don't want to create any
connection issues by overloading ZooKeeper.
Thanks,
Amit.
Trouble in what way? If I have enough memory - HBase RegionServer 10GB and
maybe 2GB for Solr ? - or you mean CPU / disk ?
On Wed, Apr 3, 2013 at 5:54 PM, Michael Della Bitta <
michael.della.bi...@appinions.com> wrote:
> Hello, Amit:
>
> My guess is that, if HBase is working har
get this from your container access logs after the fact? I
may be misunderstanding something but why wouldn't mining the Jetty/Tomcat
logs for the response size here suffice?
Thanks!
Amit
On Thu, Apr 4, 2013 at 1:34 AM, xavier jmlucjav wrote:
> A custom QueryResponseWriter...this makes sense
acing website (where
low latency is prob desired).
Again, I like to start simple and use one server until it dies then expand
from there.
Cheers
Amit
On Thu, Apr 4, 2013 at 7:58 AM, imehesz wrote:
> hello,
>
> I'm using a single server setup with Nutch (1.6) and Solr (4.2)
>
&
Hi all,
I'm trying to run a nutch crawler and index to Solr.
I'm running Nutch 1.6 and Solr 4.2.
I managed to crawl and index with that Nutch version into Solr 3.6.2 but I
can't seem to manage to run it with Solr 4.2
I re-built Nutch with the schema-solr4.xml and copied that file to
SOLR_HOME/ex
ortantly, make sure you use a Solr 4.1 solrconfig and
> merge in any of your application-specific changes.
>
> -- Jack Krupansky
>
> -Original Message- From: Amit Sela
> Sent: Friday, April 05, 2013 12:57 PM
> To: solr-user@lucene.apache.org
> Subject: unknown field er
I don't understand why this would be more performant.. seems like it'd be
more memory and resource intensive as you'd have multiple class-loaders and
multiple cache spaces for no good reason. Just have a single core with
sufficiently large caches to handle your response needs.
If you want to load
If you generate the maven pom files you can do this I think by doing mvn
-DskipTests=true.
On Sat, Apr 6, 2013 at 7:25 AM, Erick Erickson wrote:
> Don't know a good way to skip compiling the tests, but there isn't
> any harm in compiling them...
>
> changing to the solr directory and just issui
attribute values
on a notional "current" token. One obvious attribute is the term text
itself and perhaps any positional information. The best place to start is
to pick a fairly simple example from the Solr Source (maybe
lowercasefilter) and try and mimic that.
Cheers!
Amit
On Mon, May 13,
t's a
post-filter implementation so that you are doing heavy computation on a
presumably small set of data only to filter out the corner cases around the
radius circle that results.
I haven't looked at Solr's spatial querying in a while to know if this is
possible or not.
Cheers
Am
Hossman did a presentation on something similar to this using spatial data
at a Solr meetup some months ago.
http://people.apache.org/~hossman/spatial-for-non-spatial-meetup-20130117/
May be helpful to you.
On Thu, May 23, 2013 at 9:40 AM, rajh wrote:
> Thank you for your answer.
>
> Do you m
ult isn't UTF-8 or if there is a good reason, can
the DIH wiki be made more clear that this encoding attribute can affect the
indexing of international characters? If I can get access to edit this wiki
page, I can add a section to that effect.. perhaps under a troubleshooting
section?
Thanks!
Amit
That's probably the most efficient way to do it... I believe the line you
are referring allows you to have sub-entities which , in the RDBMS, would
execute a separate query for each parent given a primary key. The downside
to this though is that for each parent you will be executing N separate
quer
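When the sub-entity pattern above is too slow, DIH can cache the child query so it runs once rather than once per parent row; a hedged data-config.xml sketch (table and column names are hypothetical):

```xml
<entity name="parent" query="SELECT id, title FROM parent">
  <entity name="child"
          processor="CachedSqlEntityProcessor"
          query="SELECT parent_id, tag FROM child"
          cacheKey="parent_id"
          cacheLookup="parent.id"/>
</entity>
```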
Thanks Otis. I went ahead and added this section. I hope that others can add
to this too but of course the list should be short :-)
- Amit
On Sun, Aug 1, 2010 at 12:00 AM, Otis Gospodnetic <
otis_gospodne...@yahoo.com> wrote:
> Hi Amit,
>
> Anyone can edit any Solr Wiki page -
ngly.
Any pitfalls to this?
Thanks
Amit
Ugh I should have checked there first! Thanks for the reply.. that helps a
lot.
Sincerely
Amit
On Mon, Aug 16, 2010 at 10:57 AM, Gora Mohanty wrote:
> On Mon, 16 Aug 2010 10:43:38 -0700
> Amit Nithian wrote:
>
> > I am not sure if this is the best approach to this proble
real
world traffic which usually has more predictable behavior.
hope that helps!
Amit
On Wed, Aug 25, 2010 at 7:50 PM, scott chu (朱炎詹) wrote:
> We're currently building a Solr index with over 1.2 million documents. I
> want to do a good stress test of it. Does anyone know if there's a
to increase capacity, and because the warranty is set to expire on our old
servers, I was curious, before asking for a certain spec, what others run and
at what point having more cores ceases to matter. Mainly looking at somewhere
between 4-12 cores per server.
Thanks!
Amit
Lance,
Thanks for your help. What do you mean by that the OS can keep the index in
memory better than Solr? Do you mean that you should use another means to
keep the index in memory (i.e. ramdisk)? Is there a generally accepted heap
size/index size that you follow?
Thanks
Amit
On Mon, Aug 30
what I remember
there are synchronization points that could be a bottleneck where adding
more cores won't help this problem? Or am I completely missing something.
Thanks again
Amit
On Mon, Aug 30, 2010 at 8:28 PM, scott chu (朱炎詹) wrote:
> I am also curious as Amit does. Can you make an ex
I am curious about this too.. are you talking about using HBase/Cassandra as
an aux store of large data or using Cassandra to store the actual lucene
index (as in LuCandra)?
On Mon, Aug 30, 2010 at 11:06 PM, Siju George wrote:
> Thanks a million Nick,
>
> We are currently debating whether we sho
other SolrCore via the
CoreContainer?
Thanks
Amit