if you're using Jetty you can use the standard realms mechanism for Basic
Auth, and it works the same on Windows or UNIX. There's plenty of docs on
the Jetty site about getting this working, although it does vary somewhat
depending on the version of Jetty you're running (N.B. I would suggest
using
Try adding the "start" call in your jetty.xml:
Realm Name
/etc/realm.properties
5
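The three values above look like the parameters of Jetty's HashLoginService. Depending on your Jetty version, a jetty.xml realm block along these lines (an illustrative sketch, not verbatim from this thread) would be:

```xml
<!-- Illustrative only: a HashLoginService realm in jetty.xml.
     "Realm Name" must match the realm-name in web.xml, and the
     properties file holds lines like:  user: password,role  -->
<Call name="addBean">
  <Arg>
    <New class="org.eclipse.jetty.security.HashLoginService">
      <Set name="name">Realm Name</Set>
      <Set name="config">/etc/realm.properties</Set>
      <Set name="refreshInterval">5</Set>
    </New>
  </Arg>
</Call>
```

The exact setters vary a bit between Jetty versions, so check the docs for the release you're on.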
On Wed, Jul 22, 2015 at 2:53 PM, O. Klein wrote:
> Yeah I can't get it to work on Jetty 9 either on Linux.
>
> Just trying to password protect the admin pages.
>
>
> org.eclipse.jetty.server.Server - Started @478ms
>
> Does anyone know where / what logs I should turn on to debug this? Should
> I be posting this issue on the Jetty mailing list?
>
> Steve
>
>
> On Wed, Jul 22, 2015 at 10:34 AM, Peter Sturge
> wrote:
>
>
> Not Found
> Powered by Jetty://
>
> In a previous post, I asked if anyone has set up Solr 5.2.1 or any 5.x with
> Basic Auth and got it working; I have not heard back. Either this feature
> is not tested or not in use. If it is not in use, how do folks secu
Hello Solr Forum,
Been trying to coerce Group faceting to give some faceting back for each
group, but maybe this use case isn't catered for in Grouping? :
So the Use Case is this:
Let's say I do a grouped search that returns say, 9 distinct groups, and in
these groups are various numbers of uniqu
http://yonik.com/solr-subfacets/
> >
> > Regards,
> >Alex.
> >
> > Solr Analyzers, Tokenizers, Filters, URPs and even a newsletter:
> > http://www.solr-start.com/
> >
> > On 11 October 2015 at 09:51, Peter Sturge
> wrote:
> > > Been
ting works with the classic flat document structure, while the sub
> facets work with any nested structure.
> So be careful with pivot faceting on a flat document with multi-valued
> fields, because you lose the relation across the different fields' values.
>
> Cheers
>
> On 13
[3] http://yonik.com/solr-subfacets/
>
> On 14 October 2015 at 22:12, Peter Sturge wrote:
>
> > Yes, you are right about that - I've used pivots before and they do need
> to
> > be used judiciously.
> > Fortunately, we only ever use single-value fields, as it give
*And insults are not something I'd like to see in this mailing list, at all*
+1
Everyone is entitled to their opinion..
Solr can and does work extremely well as a database - it depends on your db
requirements.
For distributed/replicated search via REST API that is read heavy, Solr is
a great choic
Hi,
We've been using JProfiler (www.ej-technologies.com) for years now.
Without a doubt, the most comprehensive and useful profiler for Java.
Works very well, supports remote profiling and includes some very neat heap
walking/gc profiling.
Peter
On Tue, Dec 5, 2017 at 3:21 PM, Walter Underwood
w
Hi Solr Group,
Got an interesting use case (to me, at least), perhaps someone could give
some insight on how best to achieve this?
I've got a core that has about 7 million entries, with a field called 'addr'.
By definition, every entry has a unique 'addr' value, so there are 7 million
unique values f
ess it's like taking two
facet lists (1 for addr, 1 for dest), intersecting them and returning the
result:
List 1:
a
b
c
d
e
f
List 2:
a
a
g
z
c
c
c
e
Resultant intersection:
a (2)
c (3)
e (1)
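For what it's worth, the intersect-and-count step itself is straightforward outside Solr. A self-contained Java sketch (plain Java, not Solr code) of the example above:

```java
import java.util.*;

public class FacetIntersect {
    // Count each List-2 value that also appears in List 1,
    // preserving first-seen order of the intersected values.
    static Map<String, Integer> intersectCounts(List<String> list1, List<String> list2) {
        Set<String> allowed = new HashSet<>(list1);
        Map<String, Integer> counts = new LinkedHashMap<>();
        for (String v : list2) {
            if (allowed.contains(v)) {
                counts.merge(v, 1, Integer::sum);
            }
        }
        return counts;
    }

    public static void main(String[] args) {
        List<String> list1 = Arrays.asList("a", "b", "c", "d", "e", "f");
        List<String> list2 = Arrays.asList("a", "a", "g", "z", "c", "c", "c", "e");
        System.out.println(intersectCounts(list1, list2)); // {a=2, c=3, e=1}
    }
}
```

Doing this efficiently over 7 million unique values inside Solr is the hard part, of course.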
Thanks,
Peter
On Wed, Nov 19, 2014 at 7:16 PM, Toke Eskildsen
wrote:
> Peter Sturge [pe
.sort=count&rows=0
Thanks,
Peter
On Wed, Nov 19, 2014 at 9:23 PM, Toke Eskildsen
wrote:
> Peter Sturge [peter.stu...@gmail.com] wrote:
> > I guess you mean take the 1k or so values and build a boolean query from
> > them?
>
> Not really. Let me try again:
>
>
Hi Forum,
Is it possible for a Solr query to return the term(s) that matched a
particular field/query?
For example, let's say there's a field like this:
raw="This is a raw text field that happens to contain some text that's also
in the action field value..."
And another field in a different inde
ust from
> result page)?
> On 09.12.2014 at 1:23, "Peter Sturge" wrote:
>
> > Hi Forum,
> >
> > Is it possible for a Solr query to return the term(s) that matched a
> > particular field/query?
> >
> > For example, let's say t
Yes, totally agree. We run 500m+ docs in a (non-cloud) Solr4, and it even
performs reasonably well on commodity hardware with lots of faceting and
concurrent indexing! Ok, you need a lot of RAM to keep faceting happy, but
it works.
++1 for the automagic shard creator. We've been looking into doing
SHARD doesn't seem ideal. If a shard reaches a
> certain
> > size, it would be better for us to simply add an extra shard, without
> > splitting.
> >
>
> True, and you can do this if you take explicit control of the document
> routing, but...
> that's quite t
Hello Utkarsh,
This may or may not be relevant for your use-case, but the way we deal with
this scenario is to retrieve the top N documents 5, 10, 20 or 100 at a time
(user selectable). We can then page the results, changing the start
parameter to return the next set. This allows us to 'retrieve' milli
> >> > This works, I still don't like the reload of the whole core, but it
> >> seems
> >> > like the easiest thing to do now.
> >> >
> >> > -- roman
> >> >
> >> >
> >> > On Wed, Jun 5, 2013 at 12:07 PM,
t I have deactivated *all* components that write into index, so unless
> there is something deep inside, which automatically calls the commit, it
> should never happen.
>
> roman
>
>
> On Tue, Jul 2, 2013 at 2:54 PM, Peter Sturge
> wrote:
>
> > Hmmm, single lock sounds
.google.com/appinions
> w: appinions.com <http://www.appinions.com/>
>
>
> On Tue, Jul 2, 2013 at 5:05 PM, Peter Sturge
> wrote:
>
> > The RO instance commit isn't (or shouldn't be) doing any real writing,
> just
> > an empty commit to force new sear
Hi,
If you mean adding up numeric values stored in fields - no, Solr doesn't do
this by default.
We had a similar requirement for this, and created a custom SearchComponent
to handle sum, average, stats etc.
There are a number of things you need to bear in mind, such as:
* Handling errors when a
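As a rough illustration (the Solr SearchComponent wiring and the error handling are the real work, and are omitted here), the aggregation step itself boils down to something like:

```java
import java.util.Arrays;

public class FieldStats {
    // Toy stand-in for the aggregation a custom stats component performs
    // over a numeric field's values (Solr plumbing omitted).
    // Returns { sum, average, min, max }.
    static double[] stats(double[] values) {
        double sum = 0;
        double min = Double.POSITIVE_INFINITY;
        double max = Double.NEGATIVE_INFINITY;
        for (double v : values) {
            sum += v;
            min = Math.min(min, v);
            max = Math.max(max, v);
        }
        double avg = values.length == 0 ? 0 : sum / values.length;
        return new double[] { sum, avg, min, max };
    }

    public static void main(String[] args) {
        System.out.println(Arrays.toString(stats(new double[] { 2, 4, 6, 8 })));
        // [20.0, 5.0, 2.0, 8.0]
    }
}
```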
2c worth,
We do lots of facet lookups to allow 'prettyprint' versions of facet names.
We do this on the client-side, though. The reason is then the lookups can
be different for different locations/users etc. - makes it easy for
localization.
It's also very easy to implement such a lookup, without h
Hi,
This question could possibly be about rare facet counting - i.e. return
the facet counts with the lowest values.
I remember doing a patch for this years ago, but then it broke when some
UninvertedField facet optimization came in around the ~3.5 timeframe.
It's a neat idea though to have an option t
desc to the sort option like facet.sort=index,desc
> to get the following result
>
>
>
> 200
> 23
> 12
>
>
>
> Bests Sandro
>
>
> -----Original Message-----
> From: Peter Sturge [mailto:peter.stu...@gmail.com]
> Sent
Hi,
We have run Solr in VM environments extensively (3.6 not Cloud, but the
issues will be similar).
There are some significant things to be aware of when running Solr in a
virtualized environment (these can be equally true with Hyper-V and Xen as
well):
If you're doing heavy indexing, the network
Hello Milen,
We do something very similar to this, except we use separate processes on
the same machine for the writer and reader. We do this so we can tune
caches etc. to optimize for each, and still use the same index files. On MP
machines, this works very well.
If you've got 2 separate machines
Hi,
We use this very same scenario to great effect - 2 instances using the same
dataDir with many cores - 1 is a writer (no caching), the other is a
searcher (lots of caching).
To get the searcher to see the index changes from the writer, you need the
searcher to do an empty commit - i.e. you invok
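The empty commit is just an ordinary update request with no documents - e.g. something along these lines (core name illustrative):

```
# Issue an empty commit so the read-only searcher reopens and sees
# segments written by the writer instance (core name is illustrative):
curl 'http://localhost:8983/solr/mycore/update?commit=true'
```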
Hi,
Been wrestling with a question on highlighting (or not) - perhaps
someone can help?
The question is this:
Is it possible, using highlighting or perhaps another more suited
component, to return words/tokens from a stored field based on a
regular expression's capture groups?
What I was kind of
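Outside of Solr, the capture-group extraction described above is plain java.util.regex. A minimal self-contained sketch (this is not an existing Solr highlighter feature):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class CaptureTokens {
    // Return the text of capture group 1 for every match
    // found in a stored field's value.
    static List<String> captures(String fieldValue, String regex) {
        Matcher m = Pattern.compile(regex).matcher(fieldValue);
        List<String> out = new ArrayList<>();
        while (m.find()) {
            out.add(m.group(1));
        }
        return out;
    }

    public static void main(String[] args) {
        // Pull the number following "error" out of a stored log line.
        System.out.println(captures("error 42 then error 7", "error (\\d+)"));
        // [42, 7]
    }
}
```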
Hi,
There are a couple ways of handling this.
One is to do it from the 'client' side - i.e. do a Solr ping to each
shard beforehand to find out which/if any shards are unavailable. This
may not always work if you use forwarders/proxies etc.
What we do is add the name of all failed shards to the
Hi,
It's quite coincidental that I was just about to ask this very
question to the forum experts.
I think this is the same sort of thing Jamie was asking about. (the
only difference in my question is that the values won't be known at
query time)
Is it possible to create a request that will return
Hi,
I was wondering if anyone was aware of any existing functionality where
clients/server components could register some search criteria and be
notified of newly committed data matching the search when it becomes
available
- a 'push/streaming' search, rather than 'pull'?
Thanks!
Yes, you don't want to hard code permissions into your index - it will give
you headaches.
You might want to have a look at SOLR 1872:
https://issues.apache.org/jira/browse/SOLR-1872 .
This patch provides doc level security through an external ACL mechanism (in
this case, an XML file) controlling
Informational
Hi,
This information is for anyone who might be running into problems when
performing explicit periodic backups of Solr indexes. I encountered this
problem, and hopefully this might be useful to others.
A related Jira issue is: SOLR-1475.
The issue is: When you execute a 'command=b
Hi,
I'm sure there are good reasons for the decision to no longer support 2.9
format indexes in 4.0, and not have an automatic upgrade as in previous
versions.
Since Lucene 3.0.2 is 'out there', does this mean the format is nailed
down, and some sort of porting is possible?
Does anyone kn
If a tool exists for converting 2.9->3.0.x, it would likely be faster.
Do you know if such a tool exists?
Remaking the index, in my case, can only be done from the existing
index because the original data is no longer available (it is
transient network data).
I suppose an index 'remaker' might be s
Could be a solrj .jar version compat issue. Check that the client and
server's solrj version jars match up.
Peter
On Sun, Sep 12, 2010 at 1:16 PM, h00kpub...@gmail.com
wrote:
> hi... currently i am integrating nutch (release 1.2) into solr (trunk). if
> i indexing to solr index with nutch i g
Hi,
Below are some notes regarding Solr cache tuning that should prove
useful for anyone who uses Solr with frequent commits (e.g. <5min).
Environment:
Solr 1.4.1 or branch_3x trunk.
Note the 4.x trunk has lots of neat new features, so the notes here
are likely less relevant to the 4.x environmen
g, SOLR-1617? That could help
> your situation.
>
> On Sun, Sep 12, 2010 at 12:26 PM, Peter Sturge wrote:
>> Hi,
>>
>> Below are some notes regarding Solr cache tuning that should prove
>> useful for anyone who uses Solr with frequent commits (e.g. <5min).
>&
w includes a partial optimize option, so you can do
> larger controlled merges.
>
> Peter Sturge wrote:
>>
>> Hi,
>>
>> Below are some notes regarding Solr cache tuning that should prove
>> useful for anyone who uses Solr with frequent commits (e.g.<5min).
1. You can run multiple Solr instances in separate JVMs, with both
having their solr.xml configured to use the same index folder.
You need to be careful that one and only one of these instances will
ever update the index at a time. The best way to ensure this is to use
one for writing only,
and the
>
> Best
> Erick
>
> On Sun, Sep 12, 2010 at 12:26 PM, Peter Sturge wrote:
>
>> Hi,
>>
>> Below are some notes regarding Solr cache tuning that should prove
>> useful for anyone who uses Solr with frequent commits (e.g. <5min).
>>
>> Environment:
h... The
>> Lucene version shouldn't matter. The distributed
>> faceting
>> theoretically can easily be applied to multiple segments,
>> however the
>> way it's written for me is a challenge to untangle and
>> apply
>> successfully to a wo
1 as a way of
minimizing the number of onDeckSearchers. This is a prudent move --
thanks Chris for bringing this up!
All the best,
Peter
On Tue, Sep 14, 2010 at 2:00 PM, Peter Karich wrote:
> Peter Sturge,
>
> this was a nice hint, thanks again! If you are here in Germany anytime I
&
:
>>
>> > BTW, what is NRT?
>> >
>> > Dennis Gearon
>> >
>> > Signature Warning
>> >
>> > EARTH has a Right To Life,
>> > otherwise we all die.
>> >
>> > Read 'Hot, Flat, and Cro
Hi,
Are you going to generate a report with 30k records in it? That will
be a very large report - will anyone really want to read through that?
If you want/need 'summary' reports - i.e. stats on the 30k records,
it is much more efficient to setup faceting and/or server-side
analysis to do thi
because like you pointed out, for those
> reports there will be very little data transfer but its the full data dump
> reports that I am trying to figure out the best way to handle.
>
> Thanks for your help
> Adeel
>
>
>
> On Thu, Sep 23, 2010 at 11:43 AM, Peter Sturg
Hi Ahson,
You'll really want to store an additional date field (make it a
TrieDateField type) that has only the date, and in the reverse order
from how you've shown it. You can still keep the one you've got, just
use it only for 'human viewing' rather than sorting.
Something like:
20080205 if you
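A quick sketch of producing that sortable form from a display date (pure Java; the input format here is just an assumed example):

```java
import java.time.LocalDate;
import java.time.format.DateTimeFormatter;
import java.util.Locale;

public class SortableDate {
    // Convert a human-readable date into a lexicographically sortable
    // yyyyMMdd string suitable for a separate sort-only field.
    // The "dd-MMM-yyyy" input pattern is an assumption for illustration.
    static String toSortable(String displayDate) {
        DateTimeFormatter display =
                DateTimeFormatter.ofPattern("dd-MMM-yyyy", Locale.ENGLISH);
        return DateTimeFormatter.ofPattern("yyyyMMdd")
                .format(LocalDate.parse(displayDate, display));
    }

    public static void main(String[] args) {
        System.out.println(toSortable("05-Feb-2008")); // 20080205
    }
}
```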
Hi,
We've used iSCSI SANs with 6x1TB 15k SAS drives RAID10 in production
environments, and this works very well for both reads and writes. We
also have FibreChannel environments, and this is faster as you would
expect. It's also a lot more expensive.
The performance bottleneck will have more to d
Is it possible to get an index to span multiple disk volumes - i.e.
when its 'primary' volume fills up (or optimize needs more room), tell
Solr/Lucene to use a secondary/tertiary/quaternary et al volume?
I've not seen any configuration that would allow this, but maybe
others have a use case for su
Hi,
See SOLR-1872 for a way of providing access control, whilst placing
the ACL configuration itself outside of Solr, which is generally a
good idea.
http://www.lucidimagination.com/search/out?u=http://issues.apache.org/jira/browse/SOLR-1872
There are a number of ways to approach Access Contr
Many thanks, Peter K. for posting up on the wiki - great!
Yes, fc = field cache. Field Collapsing is something very nice indeed,
but is entirely different.
As Erik mentions in the wiki post, using per-segment faceting can be a
huge boon to performance. It does require the latest Solr trunk build
Hi Peter,
First off, many thanks for putting together the NRT Wiki page!
This may have changed recently, but the NRT stuff - e.g. per-segment
commits etc. is for the latest Solr 4 trunk only.
If your setup uses the 3x Solr code branch, then there's a bit of work
to do to move to the new version.
* I believe the NRT patches are included in the 4.x trunk. I don't
think there's any support as yet in 3x (uses features in Lucene 3.0).
* For merging, I'm talking about commits/writes. If you merge while
commits are going on, things can get a bit messy (maybe on source
cores this is ok, but I hav
> Maybe I didn't fully understood what you explained: but doesn't this mean
> that you'll have one index per day?
> Or are you overwriting, via replicating, every shard and the number of shard
> is fixed?
> And why are you replicating from the local replica to the next shard? (why
> not directly fr
> no, I only thought you use one day :-)
> so you don't or do you have 31 shards?
>
No, we use 1 shard per month - e.g. 7 shards will hold 7 months of data.
It can be set to 1 day, but you would need to have a huge amount of
data in a single day to warrant doing that.
On Thu, Nov 18, 2010 at 8
Hi,
This problem is usually because your custom Transformer is in the
solr/lib folder, when it needs to be in the webapps .war file (under
WEB-INF/lib of course).
Place your custom Transformer in a .jar in your .war and you should be
good to go.
Thanks,
Peter
Yes, as mentioned in the above link, there's SOLR-1872 for maintaining
your own document-level access control. Also, if you have access to
the file system documents and want to use their existing ACL, have a
look at SOLR-1834.
Document-level access control can be a real 'can of worms', and it can
be
Hi,
With the advent of new Windows versions, there are increasing
instances of system blue-screens, crashes, freezes and ad-hoc
failures.
If a Solr index is running at the time of a system halt, this can
often corrupt a segments file, requiring the index to be -fix'ed by
rewriting the offending fi
10 at 5:33 PM, Yonik Seeley
wrote:
> On Mon, Nov 29, 2010 at 10:46 AM, Peter Sturge wrote:
>> If a Solr index is running at the time of a system halt, this can
>> often corrupt a segments file, requiring the index to be -fix'ed by
>> rewriting the offending file.
>
We do a lot of precisely this sort of thing. Ours is a commercial
product (Honeycomb Lexicon) that extracts behavioural information from
logs, events and network data (don't worry, I'm not pushing this on
you!) - only to say that there are a lot of considerations beyond base
Solr when it comes to h
this seem right? I don't remember seeing so many corruptions in
the index - maybe it is the world of Win7 dodgy drivers, but it would
be worth investigating if there's something amiss in Solr/Lucene when
things go down unexpectedly...
Thanks,
Peter
On Tue, Nov 30, 2010 at 9:19 AM, Peter
nks,
Peter
On Thu, Dec 2, 2010 at 4:07 AM, Lance Norskog wrote:
> Is there any way that Windows 7 and disk drivers are not honoring the
> fsync() calls? That would cause files and/or blocks to get saved out
> of order.
>
> On Tue, Nov 30, 2010 at 3:24 PM, Peter Sturge wrote:
>
oblem on Windows 6.x (i.e. Server 2008 or
Win7)?
Mike, are there any diagnostics/config etc. that I could try to help
isolate the problem?
Many thanks,
Peter
On Thu, Dec 2, 2010 at 9:28 AM, Michael McCandless
wrote:
> On Thu, Dec 2, 2010 at 4:10 AM, Peter Sturge wrote:
>> The Win7 crash
In order for the 'read-only' instance to see any new/updated
documents, it needs to do a commit (since it's read-only, it is a
commit of 0 documents).
You can do this via a client service that issues periodic commits, or
use autorefresh from within solrconfig.xml. Be careful that you don't
do anyth
There are, as you would expect, a lot of factors that impact the
amount of fragmentation that occurs:
commit rate, mergeFactor, updates/deletes vs. 'new' data, etc.
Having run reasonably large indexes on NTFS (>25GB), we've not found
fragmentation to be much of a hindrance.
I don't have any definitiv
Hi Lee,
Perhaps Solr's clustering component might be helpful for your use case?
http://wiki.apache.org/solr/ClusteringComponent
On Fri, Dec 10, 2010 at 9:17 AM, lee carroll
wrote:
> Hi Chris,
>
> Its all a bit early in the morning for this mined :-)
>
> The question asked, in good faith, was
IMAP has no intrinsic functionality for logging in as one user and then
'impersonating' someone else.
What you can do is setup your email server so that your administrator
account or similar has access to other users via shared folders (this
is supported in imap2 servers - e.g. Exchange).
This is done al
Hi,
One of the things about Document Security is that it never involves
just one thing. There are a lot of things to consider, and
unfortunately, they're generally non-trivial.
Deciding how to store/hold/retrieve permissions is certainly one of
those things, and you're right, you should avoid att
Hi,
We use this scenario in production where we have one write-only Solr
instance and 1 read-only, pointing to the same data.
We do this so we can optimize caching/etc. for each instance for
write/read. The main performance gain is in cache warming and
associated parameters.
For your Index W, it's
Hello,
I've been wrestling with a query use case, perhaps someone has done this
already?
Is it possible to write a query that excludes results based on another
query?
Scenario:
I have an index that holds:
'customer' (textgen)
'product' (textgen)
'saledate' (date)
I'm looking to ret
?
>
> e.g. taking today as an example your upper limit would be
> 2011-02-04T00:00:00Z
> and so your query would be something like:
> q=products:Dog AND saledate:[* TO 2011-02-04T00:00:00Z]
>
>
> On 4 March 2011 11:40, Peter Sturge wrote:
>
> > Hello,
> >
Hi,
You need to put your password in as well. You should use protocol="imap"
unless your gmail is set for imaps (I don't believe the free gmail gives you
this).
HTH
Peter
On Fri, Mar 4, 2011 at 4:42 PM, Gora Mohanty wrote:
> On Fri, Mar 4, 2011 at 9:20 PM, Matias Alonso
> wrote:
> > H
nge I posted.
>
> Matias.
>
>
>
>
> 2011/3/4 Peter Sturge
>
> > Hi,
> >
> > You need to put your password in as well. You should use protocol="imap"
> > unless your gmail is set for imaps (I don't believe the free gmail gives
> > you
"true"
> processAttachments="false"
> includeOtherUserFolders="false"
> includeSharedFolders="false"
> batchSize="100"
> processor="MailEntityProcessor"
> protocol="imaps" />
>
>
>
PM, Matias Alonso wrote:
> Hi Peter,
>
> I test with deltaFetch="false", but doesn't work :(
> I'm using "DataImportHandler Development Console" to index (
> http://localhost:8983/solr/mail/admin/dataimport.jsp?handler=/dataimport);
> I'm working wi
> Now, I execute "
> http://localhost:8983/solr/mail/dataimport?command=full-import" but
> nothing
> happens; no index; no errors.
>
> thks...
>
> Matias.
>
>
>
> 2011/3/4 Peter Sturge
>
> > Hi Mataias,
> >
> >
> >
> http:/
's lib/logging.properties file).
(you won't see any errors unless you run the status command - that's where
they're stored)
HTH
Peter
On Sat, Mar 5, 2011 at 12:46 AM, Matias Alonso wrote:
> I'm using the trunk.
>
> Thanks Peter for your preoccupation!
>
&g
>
> -
>
> -
>
> data-config.xml
>
>
> -
>
> status<http://localhost:8983/solr/mail/dataimport?command=full-import>
>
> idle
>
>
> -
>
> This response format is experimental. It is likely to change in the future.
>
>
>
>
>
>
11 11:52:03 org.apache.solr.update.processor.LogUpdateProcessor
> finish
> INFO: {deleteByQuery=*:*,optimize=} 0 0
> 09/03/2011 11:52:03 org.apache.solr.handler.dataimport.DocBuilder execute
> INFO: Time taken = 0:0:2.359
>
>
>
> 09/03/2011 11:54:58 org.apache.solr.core.Sol
Hi,
I was wondering if it is possible during a query to create a returned
field 'on the fly' (like function query, but for concrete values, not
score).
For example, if I input this query:
q=_val_:"product(15,3)"&fl=*,score
For every returned document, I get score = 45.
If I change it slightl
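(As an aside for readers on newer versions: Solr 4 and later can return function values directly as aliased pseudo-fields in fl, which covers exactly this - e.g.:)

```
q=*:*&fl=*,calc:product(15,3)
```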
; On Wed, Mar 9, 2011 at 10:06 PM, Peter Sturge wrote:
>> Hi,
>>
>> I was wondering if it is possible during a query to create a returned
>> field 'on the fly' (like function query, but for concrete values, not
>> score).
>>
>> For example, if
Could possibly be your original xml file was in unicode (with a BOM
header - FFFE or FEFF) - xml will see it as content if the underlying
file system doesn't handle it.
On Tue, Mar 15, 2011 at 10:00 PM, sivaram wrote:
> I got rid of the problem by just copying the other schema and config files(
Hi Viswa,
This patch was originally built for the 3x branch, and I don't see any
ported patch revision or testing for trunk. A lot has changed in
faceting from 3x to trunk, so it will likely need a bit of adjusting
to cater for these changes (e.g. deprecation of date range in favour
of range). Have
This happens almost always because you're sending from a 'free' mail
account (gmail, yahoo, hotmail, etc), and your message contains words
that spam filters don't like.
For me, it was the use of the word 'remplica' (deliberately
mis-spelled so this mail gets sent).
It can also happen from 'non-fre
The best way to add your own fields is to create a custom Transformer sub-class.
See:
http://www.lucidimagination.com/search/out?u=http%3A%2F%2Fwiki.apache.org%2Fsolr%2FDataImportHandler
This will guide you through the steps.
Peter
2011/5/5 方振鹏 :
>
>
>
> I’m using Data Import Handler for index
[X] I always use the JDK logging as bundled in solr.war, that's perfect
[ ] I sometimes use log4j or another framework and am happy with
re-packaging solr.war
[ ] Give me solr.war WITHOUT an slf4j logger binding, so I can
choose at deploy time
[ ] Let me choose whether to bundle a binding o
Hi,
SOLR-1834 is good when the original documents' ACL is accessible.
SOLR-1872 is good where the usernames are persistent - neither of
these really fit your use case.
It sounds like you need more of an 'in-memory', transient access
control mechanism. Does the access have to exist beyond the user'
tic solution as in solr
> 1872.
>
> Am I right in my assumption that SOLR-1872 is the same as the solution that
> we currently have where we add a filter query of the products to the original
> query and hence (SOLR-1872) will also run into the TOO many boolean clause
> expansion error?
>
ould also try to dig
> deep into JAVA.
>
> What is meant by in-memory? Is it RAM memory? So if I have
> concurrent users, each having products subscribed, what would be the
> impact on memory?
>
>
>
> Regards
> Sujatha
>
>
> On Tue, Jun 14, 2011 at 5:43 PM
You'll need to be a bit careful using joins, as the performance hit
can be significant if you have lots of cross-referencing to do, which
I believe you would given your scenario.
Your table could be setup to use the username as the key (for fast
lookup), then map these to your own data class or co
Hi,
When you get this exception with no other error or explanation in
the logs, this is almost always because the JVM has run out of memory.
Have you checked/profiled your mem usage/GC during the stream operation?
On Thu, Aug 11, 2011 at 3:18 AM, Naveen Gupta wrote:
> Hi,
>
> We are doing st
s taking 3 mins 20 secs time,
>
> after deleting the index data, it is taking 9 secs
>
> What would be approach to have better indexing performance as well as index
> size should also at the same time.
>
> The index size was around 4.5 GB
>
> Thanks
> Naveen
>
> On Th
It's worth noting that the fast commit rate is only an indirect part
of the issue you're seeing. As the error comes from cache warming - a
consequence of committing, it's not the fault of commiting directly.
It's well worth having a good close look at exactly what your caches
are doing when they
Just to add a few cents worth regarding SSD...
We use Vertex SSD drives for storing indexes, and wow, they really
scream compared to SATA/SAS/SAN. As we do some heavy commits, it's the
commit times where we see the biggest performance boost.
In tests, we found that locally attached 15k SAS drives
ctive alternative as well.
Peter
On Tue, Aug 23, 2011 at 3:29 PM, Gerard Roos wrote:
> Interesting. Do you make a symlink to the indexes or is the whole Solr
> directory on SSD?
>
> thanks,
> Gerard
>
> Op 23 aug. 2011, om 12:53 heeft Peter Sturge het volgende geschrev
Tue, Aug 23, 2011 at 5:34 PM, Sanne Grinovero
wrote:
> Indeed I would never actually use it, but symlinks do exist on Windows.
>
> http://en.wikipedia.org/wiki/NTFS_symbolic_link
>
> Sanne
>
> 2011/8/23 Peter Sturge :
>> The Solr index directory lives directly on the SSD (running
I can access the rar fine with WinRAR, so should be ok, but yes, it might
be in zip format.
In any case, better to use the slightly later version --> SolrACLSecurity.java
26kb 12 Apr 2010 10:35
Thanks,
Peter
On Mon, Jul 30, 2012 at 7:50 PM, Sujatha Arun wrote:
> I am unable to use the rar file
enamed to zip and worked fine,thanks
> >
> > Regards
> > Sujatha
> >
> >
> > On Tue, Jul 31, 2012 at 9:15 AM, Sujatha Arun
> wrote:
> >
> >> thanks ,was looking to the rar file for instructions on set up .
> >>
> >> Regards
> >&g