some attribute(s) to get higher scores, can you filter out items
that don't have those attributes? Also, maybe setting mm to require more
terms to match could cut out unwanted results (that's not useful for the
"dog" query of course).
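Concretely, an edismax request along these lines (field and attribute names are invented here) combines both ideas, with mm requiring both terms to match and an fq dropping items that lack the attribute:

```text
q=toy dog&defType=edismax&qf=title description&mm=2&fq=has_badge_b:true
```

With mm=2 a single-term query like "dog" is unaffected, which is why mm doesn't help that case.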
Andy
On Fri, 4 Dec 2020 at 06:43,
I am adding a new float field to my index that I want to perform range
searches and sorting on. It will only contain a single value.
I have an existing dynamic field definition in my schema.xml that I wanted
to use to avoid having to update the schema:
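For reference, a typical stock dynamic field definition for floats (the names and type here are illustrative, not the poster's actual schema) looks like:

```xml
<!-- illustrative only: matches any field name ending in _f -->
<dynamicField name="*_f" type="pfloat" indexed="true" stored="true"/>
```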
I went ahead and implemented th
uot;: "solr.PatternReplaceFilterFactory",
"pattern": "(.)\\1+",
"replacement": "$1"
}
]
}
}
]
}
This finds a match...
http://localhost:8983/solr/
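The pattern and replacement in the PatternReplaceFilterFactory config behave like plain Java regex replacement, which is an easy way to sanity-check them outside Solr:

```java
public class CollapseRepeats {
    public static void main(String[] args) {
        // Same pattern/replacement as the filter config:
        // "(.)\\1+" matches any run of a repeated character,
        // "$1" replaces the run with a single occurrence
        String collapsed = "aabbbcc".replaceAll("(.)\\1+", "$1");
        System.out.println(collapsed); // prints "abc"
    }
}
```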
I added the maxQueryLength option to DirectSolrSpellchecker in
https://issues.apache.org/jira/browse/SOLR-14131 - that landed in 8.5.0 so
should be available to you.
Andy
On Wed, 7 Oct 2020 at 23:53, gnandre wrote:
> Is there a way to truncate spellcheck.q param value from Solr side?
>
Don't know if this is an option for you, but the SolrJ Java client library
has support for uploading a config set. If the config set already exists it
will overwrite it and automatically RELOAD the dependent collection.
See
https://lucene.apache.org/solr/8_5_0/solr-solrj/org/apache/solr/common/clo
Thanks David, I'll set up the techproducts schema and see what happens.
Kind regards,
Andy
On 4/09/20 4:09 pm, David Smiley wrote:
Hi,
I looked at the code at those line numbers and it seems simply impossible
that an ArrayIndexOutOfBoundsException could be thrown there because it'
; I haven't been able to find
anything similar reported online so I'm thinking it's a config issue and
would be grateful for any pointers & solutions. Many thanks in advance,
--
Andy Dopleach
/Director/
*BlueFusion <https://www.bluefusion.co.nz/>*
p: 03 328
dragons!
Andy
> On 20 Aug 2020, at 07:25, Prashant Jyoti wrote:
>
> Hi Joe,
> These are the errors I am running into:
>
> org.apache.solr.common.SolrException: Error CREATEing SolrCore
> 'newcollsolr2_shard1_replica_n1': Unable to create core
> [newcoll
hi Jörn - something's decoding a UTF-8 sequence using the legacy ISO-8859-1
character set:
Jörn is J%C3%B6rn in UTF-8
J%C3%B6rn misinterpreted as ISO-8859-1 is JÃ¶rn
JÃ¶rn is J%C3%83%C2%B6rn in UTF-8
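The double-encoding round trip can be reproduced in a few lines of Java:

```java
import java.nio.charset.StandardCharsets;

public class MojibakeDemo {
    public static void main(String[] args) {
        String name = "J\u00F6rn"; // "Jörn"
        // The correct UTF-8 encoding: 4A C3 B6 72 6E
        byte[] utf8 = name.getBytes(StandardCharsets.UTF_8);
        // Decoding those bytes with the wrong charset yields the mojibake
        String garbled = new String(utf8, StandardCharsets.ISO_8859_1);
        System.out.println(garbled); // prints "JÃ¶rn"
    }
}
```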
I hope this helps track down the problem!
Andy
On Fri, 7 Aug 2020 at 12:08, Jörn Franke
o cause problems with the class loader,
as I start getting a NoClassDefFoundError:
org/apache/lucene/analysis/util/ResourceLoaderAware exception.
Any suggestions?
Thanks,
- Andy -
Dave,
You don't mention what query parser you are using, but with the default
query parser you can field-qualify all the terms entered in a text box by
surrounding them with parentheses. So if you want to search against the
'title' field and they entered:
train OR dragon
You could generate the S
Bernd,
I recently asked a similar question about Solr 7.3 and Zookeeper 3.4.11.
This is the response I found most helpful:
https://www.mail-archive.com/solr-user@lucene.apache.org/msg138910.html
- Andy -
On Fri, Dec 14, 2018 at 7:41 AM Bernd Fehling <
bernd.fehl...@uni-bielefeld.de>
update about 3000 docs per minute, but the other Solr instance is running
normally
--
Sent from: http://lucene.472066.n3.nabble.com/Solr-User-f472068.html
We use SolrCloud 5.2.1 with OpenJDK 1.8. We have multiple Solr machines,
and one of them keeps increasing its thread count. On the
http://xx:xxx/solr/#/~threads page I found a lot of
[java.util.concurrent.locks.ReentrantReadWriteLock$FairSync@x] threads;
the details look like this:
=
Optional.empty(); (as you have done).
The intent of my code was to support both using a chroot and not using
one. The value of _zkChroot is read from a config file in code not shown.
- Andy -
= Optional.empty();
}
CloudSolrClient client = new CloudSolrClient.Builder(_zkHostList,
chrootOption).build();
Adapted from code I found somewhere (unit test?). Intent is to support the
option of configuring a chroot or not (stored in "_zkChroot")
- Andy -
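A minimal sketch of that conditional (the field name stands in for the value read from the config file, which isn't shown in the thread):

```java
import java.util.Optional;

public class ChrootConfig {
    // Hypothetical stand-in for the config value; null means no chroot configured
    static String zkChroot = null;

    public static void main(String[] args) {
        // Wrap the possibly-absent chroot in an Optional, as the thread describes
        Optional<String> chrootOption = (zkChroot == null)
                ? Optional.empty()
                : Optional.of(zkChroot);
        System.out.println(chrootOption.isPresent()); // prints "false"
    }
}
```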
On Mon, Jun 18, 201
Shawn,
Why are range searches more efficient than wildcard searches? I would have
expected that they just provide different mechanisms for defining the
range of unique terms that are of interest, and that the merge
processing would be identical.
Would a search such as:
field:c*
be more e
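For comparison, the range form covering roughly the same terms as that prefix query would be written with an exclusive upper bound (curly brace):

```text
field:[c TO d}
```

i.e., all terms greater than or equal to "c" and lexicographically before "d", which is approximately the term set that c* expands to.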
Thanks Shawn. That makes sense.
On Wed, May 9, 2018 at 5:10 PM, Shawn Heisey wrote:
> On 5/9/2018 2:38 PM, Andy C wrote:
> > Was not quite sure from reading the JIRA why the Zookeeper team felt the
> > issue was so critical that they felt the need to pull the release from
>
invert the dataDir and
dataLogDir directories.
It does present something of a PR issue for us, if we tell our customers to
use a ZK version that has been pulled from the mirrors. Any plans to move
to ZK 3.4.12 in future releases?
Thanks,
- Andy -
On Wed, May 9, 2018 at 4:09 PM, Erick Erickson
leases?
Would appreciate any guidance.
Thanks,
- Andy -
grams it's not _necessary_
>
> Best,
> Erick
>
>
> On Fri, Mar 16, 2018 at 2:02 PM, Andy Tang wrote:
> > Erik,
> >
> > Thank you for reminding.
> > javac -cp
> > .:/opt/solr/solr-6.6.2/dist/*:/opt/solr/solr-6.6.2/dist/solrj-lib/*
> >
.6.2/dist/solrj-lib in
> your classpath.
>
> Best,
> Erick
>
> On Fri, Mar 16, 2018 at 12:14 PM, Andy Tang
> wrote:
> > I have the code to add a document to Solr. I tested it in both Solr 6.6.2
> and
> > Solr 7.2.1 and it failed.
> >
> >
l nodes to reflect
the new IP address of the VMs. But will that be sufficient?
Appreciate any guidance.
Thanks
- Andy -
4)
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:338)
at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
... 2 more
What is wrong with it? Is this urlString correct?
Any help is appreciated!
Andy Tang
We were able to locate the exact issue after some more digging. We added a
query to another collection that runs alongside the job we were executing
and we were missing the collection reference in the URL. If the below query
is run by itself in at least Solr 7, the error will be reproduced.
http:
Erick Erickson wrote
> Maybe your remote job server is using a different set of jars than
> your local one? How does the remote job server work?
The remote job server is running the same code as our local one, and both
are making queries against the same
We are receiving an UnsupportedOperationException after making certain
requests. The requests themselves do not seem to be causing the issue:
when we run the job that makes these requests locally, against the same
SolrCloud cluster where the errors are being thrown, there are no errors.
These er
Hi,
I use Solr 5.5. I recently noticed a process, ./fs-manager, running
under user solr that takes quite high CPU usage. I don't think I've seen
such a process before.
Is that a legitimate process from Solr?
Thanks.
lue is
present. And then changing the filter query to:
fq=ctindex_populated:false OR ctindex:myId
Would this be more efficient than your proposed filter query?
Thanks again,
- Andy -
On Mon, May 1, 2017 at 10:19 AM, Shawn Heisey wrote:
> On 4/26/2017 1:04 PM, Andy C wrote:
> > I'm looking at upgra
e q.op setting?
More details:
- Using the standard query parser
- The fieldType of the ctindex field is "string"
- I upgraded to 6.5 by copying my 5.3 config files over, updating the
schema version to 1.6 in the schema.xml, updating the luceneMatchVersion to
6.5.0 in the solrconfig.xml, and building a brand new index.
Thanks,
- Andy -
last 24 hours
("[NOW-1DAYS,NOW]"), but be aware that when you subsequently restrict your
query using one of these intervals, using NOW without rounding has a
negative impact on the filter query cache (see
https://dzone.com/articles/solr-date-math-now-and-filter for a better
explanation than I
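For example (field name invented), rounding both endpoints to the day keeps the fq string identical across requests, so the filterCache entry can be reused:

```text
fq=timestamp:[NOW/DAY-1DAY TO NOW/DAY+1DAY]
```

An unrounded NOW changes every millisecond, so each request produces a brand-new cache entry.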
e the same issue will occur
in 6.2.1.
How should I proceed from here?
Thanks,
- Andy -
private int compareStart(FacetInterval o1, FacetInterval o2) {
    if (o1.start == null) {
        if (o2.start == null) {
            return 0;
        }
        return -1;
    }
    i
just converted to a range query internally), and it fails to show up in the
Negative or Positive intervals either.
Any ideas what is going on, and if there is anything I can do to get this
to work correctly? I am using Solr 5.3.1. I've pasted the output from the
Solr Admin UI query b
Hello,
I am guessing that what I am looking for is probably going to require extending
StandardTokenizerFactory or ClassicTokenizerFactory. But I thought I would ask
the group here before attempting this. We are indexing documents from an
eclectic set of sources. There is, however, a heavy inter
evant
here. Are there other reasons not to use the embedded Zookeeper?
More generally, are there downsides to using SolrCloud with a single
Zookeeper node and single Solr node?
Would appreciate any feedback.
Thanks,
Andy
Hello,
I am using the eDisMax parser and have the following question.
With the eDisMax parser we can pass a query, q="brown and mazda", and
configure a bunch of fields in a solrconfig.xml SearchHandler to query on as
"qf". Let's say I have a SOLR schema.xml with the following fields:
and the
Hello,
I am somewhat of a novice when it comes to using SOLR in a distributed
SolrCloud environment. My team and I are doing development work with a SOLR
core. We will shortly be transitioning over to a SolrCloud environment.
My question specifically has to do with Facets in a SOLR cloud/c
I have run into the same problem; could you tell me why? Thanks
--
View this message in context:
http://lucene.472066.n3.nabble.com/Storing-positions-and-offsets-vs-FieldType-IndexOptions-DOCS-AND-FREQS-AND-POSITIONS-AND-OFFSETS-tp4061354p4208875.html
Sent from the Solr - User mailing list archive a
Hi folks,
I have a DelegatingCollector installed via a PostFilter (kind of like an
AnalyticsQuery) that needs the document score to a) add to a collection of
score-based stats, and b) decide whether to keep the document based on the
score.
If I keep the document, I call super.collect() (where sup
suggestion based on
negative performance implications of having to read and rewrite all
previous fields for a document when doing atomic updates? Or are there
additional inherent negatives to using lots of dynamic fields?
Andy
On Fri, Jun 27, 2014 at 11:46 AM, Jared Whiklo
wrote:
> This is pro
Hi folks,
My application requires tracking a daily performance metric for all
documents. I start tracking for an 18 month window from the time a doc is
indexed, so each doc will have ~548 of these fields. I have in my schema a
dynamic field to capture this requirement:
Example:
metric_2014_06_
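A dynamic field declaration for that naming pattern might look like the following (the type name is a guess, not the poster's actual schema):

```xml
<!-- illustrative only: one numeric value per metric_YYYY_MM_DD field -->
<dynamicField name="metric_*" type="tfloat" indexed="true" stored="true"/>
```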
Congrats! Any idea when native faceting & off-heap fieldcache will be
available for multivalued fields? Most of my fields are multivalued, so
that's the big one for me.
Andy
On Thursday, June 19, 2014 3:46 PM, Yonik Seeley wrote:
FYI, for those who want to try out the new na
I am trying to pass a string of Japanese characters to an Apache Solr
query. The string in question is '製品'.
When a search is passed without any arguments, it brings up all of the
indexed information, including all of the documents that have this
particular string in them; however, when this parame
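When a string like that is passed as a URL parameter it must be percent-encoded as UTF-8; the expected encoding can be checked in plain Java:

```java
import java.net.URLEncoder;
import java.nio.charset.StandardCharsets;

public class EncodeQuery {
    public static void main(String[] args) {
        // Percent-encode the Japanese query term as UTF-8 for use in a URL
        String encoded = URLEncoder.encode("\u88FD\u54C1", StandardCharsets.UTF_8);
        System.out.println(encoded); // prints "%E8%A3%BD%E5%93%81"
    }
}
```

If the client sends anything other than this byte sequence, the server is likely receiving a differently-encoded string.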
Hi folks,
Using Solr 4.6.0 in a cloud configuration, I'm developing a SearchComponent
that generates a custom score for each document. Its operational flow
looks like this:
1. The score is derived from an analysis of search results coming out of
the QueryComponent. Therefore, the component is i
ld "#Month:July", even though it's
included in the highlighting section. I've tried changing various
highlighting parameters to no avail. Could someone help me know where to
look for why the pre/post aren't being applied?
Thanks,
Andy Pickler
results,
which tells me obviously it is having no effect.
Is there a change to the join query behavior between these releases, or
could I have configured something differently in my 4.5.1 install?
Thanks,
Andy Pickler
On Thu, Oct 24, 2013 at 2:42 PM, Andy Pickler wrote:
> We're attempting
different (and
expected) results, while the query doesn't affect the results at all in
4.5. Is there any known join query behavior differences/fixes between 4.2
and 4.5 that might explain this, or should I be looking at other factors?
Thanks,
Andy Pickler
On Aug 6, 2013, at 9:55 AM, Steven Bower wrote:
> Is there an easy way in code / command line to lint a solr config (or even
> just a solr schema)?
No, there's not. I would love there to be one, especially for the DIH.
--
Andy Lester => a...@petdance.com => www.petdance.com => AIM:petdance
ht" way to do it. Every approach you take will involve
tradeoffs. Read up on this already well-discussed topic and decide what answer
is best for you in your case.
xoa
--
Andy Lester => a...@petdance.com => www.petdance.com => AIM:petdance
On Jul 9, 2013, at 2:48 PM, Shawn Heisey wrote:
> This is primarily to Andy Lester, who wrote the WebService::Solr module
> on CPAN, but I'll take a response from anyone who knows what I can do.
>
> If I use the following Perl code, I get an error.
What error do you get? Ne
That's exactly what turned out to be the problem. We thought we had
already tried that permutation but apparently hadn't. I know it's obvious
in retrospect. Thanks for the suggestion.
Thanks,
Andy Pickler
On Wed, Jul 3, 2013 at 2:38 PM, Alexandre Rafalovitch wrote:
> On T
n the Solr index.
I don't think any of this helps you identify my problem, but I tried to
address your questions.
Thanks,
Andy
On Tue, Jul 2, 2013 at 9:14 AM, Gora Mohanty wrote:
> On 2 July 2013 20:29, Andy Pickler wrote:
> > Solr 4.1.0
> >
> > We've been using the
sub-entity column
in different nest levels of the XML to no avail. I'm curious if we're
trying something that is just not supported or whether we are just trying
the wrong things.
Thanks,
Andy Pickler
ted you to do that a decade and a half ago
> But either way, that's a pretty ridiculous solution.
> I don't know of any other server product that disregards security so
> willingly.
Why are you wasting your time with such an inferior project? Perhaps
ElasticSearch is
set to "true".
It all seems redundant just to allow for partial word
matching/highlighting but I didn't know of a better way. Does anything
stand out to you that could be the culprit? Let me know if you need any
more clarification.
Thanks!
- Andy
-Original Message
Could anyone help me see why the Solritas page fails?
I can go to http://localhost:8080/solr without problem, but fail to go to
http://localhost:8080/solr/browse
Below is the status report! Any help is appreciated.
Thanks!
Andy
type: Status report
This is very interesting. Thanks for sharing the benchmark.
One question I have is: did you precondition the SSD
(http://www.sandforce.com/userfiles/file/downloads/FMS2009_F2A_Smith.pdf)? SSD
performance tends to take a very deep dive once all blocks are written at least
once and the garbage c
pelling?
Yes, definitely.
Thanks for the ticket. I am looking at the effects of turning on
spellcheck.onlyMorePopular to true, which reduces the number of collations it
seems to do, but doesn't affect the underlying question of "is the spellchecker
doing FQs properly?"
Thank
le FQ and it becomes 62038ms.
> But I think you're just setting maxCollationTries too high. You're asking it
> to do too much work in trying tens of combinations.
The results I get back with 100 tries are about twice as many as I get with 10
tries. That's a big difference to the user where it's trying to figure
misspelled phrases.
Andy
--
Andy Lester => a...@petdance.com => www.petdance.com => AIM:petdance
ionTries and how many FQs are at the end.
Am I doing something wrong? Do the collation internals not handle
FQs correctly? The lookup/hit counts on filterCache seem to be
increasing just fine. It will do N lookups, N hits, so I'm not
thinking that caching is the problem.
We'd really
'm getting results...and they indeed are relevant.
Thanks,
Andy Pickler
On Wed, May 22, 2013 at 12:20 PM, Andy Pickler wrote:
> I'm a developing a recommendation feature in our app using the
> MoreLikeThisHandler <http://wiki.apache.org/solr/MoreLikeThisHandler>,
>
ave setup another request handler that
only searches the whole word fields and it returns in 850 ms with
highlighting.
Any ideas?
- Andy
-Original Message-
From: Bryan Loofbourrow [mailto:bloofbour...@knowledgemosaic.com]
Sent: Monday, May 20, 2013 1:39 PM
To: solr-user@lucene.apa
to not be "significant". I
realize that I may not understand how the MLT Handler is doing things under
the covers...I've only been guessing until now based on the (otherwise
excellent) results I've been seeing.
Thanks,
Andy Pickler
P.S. For some additional information, th
true
true
true
true
breakIterator
2
name name_par description description_par
content content_par
162
simple
default
Cheers!
- Andy
ays "This is
how the dev process works."
--
Andy Lester => a...@petdance.com => www.petdance.com => AIM:petdance
ewhere that describes the process set
up?
--
Andy Lester => a...@petdance.com => www.petdance.com => AIM:petdance
e what is currently pending to go in 4.3?
--
Andy Lester => a...@petdance.com => www.petdance.com => AIM:petdance
On May 2, 2013, at 3:36 AM, "Jack Krupansky" wrote:
> RC4 of 4.3 is available now. The final release of 4.3 is likely to be within
> days.
How can I see the Changelog of what will be in it?
Thanks,
xoa
--
Andy Lester => a...@petdance.com => www.petdance.com => AIM:petdance
othing of timezones. Solr expects everything to be in UTC. If you
want time zone support, you'll have to convert local time to UTC before
importing, and then convert back from UTC to local time when you read from Solr.
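A sketch of that conversion in Java (the timestamp and zone here are made up):

```java
import java.time.ZoneId;
import java.time.ZoneOffset;
import java.time.ZonedDateTime;
import java.time.format.DateTimeFormatter;

public class LocalToUtc {
    public static void main(String[] args) {
        // A made-up local timestamp in a made-up zone
        ZonedDateTime local = ZonedDateTime.of(2013, 4, 9, 15, 30, 0, 0,
                ZoneId.of("America/Chicago"));
        // Shift to UTC and render in the ISO-8601 form Solr date fields expect
        String solrDate = local.withZoneSameInstant(ZoneOffset.UTC)
                .format(DateTimeFormatter.ofPattern("yyyy-MM-dd'T'HH:mm:ss'Z'"));
        System.out.println(solrDate); // prints "2013-04-09T20:30:00Z"
    }
}
```

The reverse trip is the same call with the zones swapped.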
xoa
--
Andy Lester => a...@petdance.com => www.petdance.com => AIM:petdance
ilds up "that day's top terms" in a
table or something.
Thanks,
Andy Pickler
On Tue, Apr 2, 2013 at 7:16 AM, Tomás Fernández Löbbe wrote:
> Oh, I see, essentially you want to get the sum of the term frequencies for
> every term in a subset of documents (instead of the docum
oach doesn't work, because if a word
occurs more than one time in a document it needs to be counted that many
times. That seemed to rule out faceting like you mentioned as well as the
TermsComponent (which as I understand also only counts "documents").
Thanks,
Andy Pickler
On
ocument I need each term counted however many times it is entered
(content of "I think what I think" would report 'think' as used twice).
Does anyone have any insight as to whether I'm headed in the right
direction and then what my query would be?
Thanks,
Andy Pickler
rsGroup page - this is a one-time step.
Please add my username, AndyLester, to the approved editors list. Thanks.
--
Andy Lester => a...@petdance.com => www.petdance.com => AIM:petdance
solr-user@lucene.apache.org
Sent: Thursday, March 21, 2013 9:04 AM
Subject: Re: Facets with 5000 facet fields
as was said below, add facet.method=fcs to your query URL.
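That is, something along these lines (field name invented):

```text
q=*:*&facet=true&facet.field=category&facet.method=fcs
```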
Upayavira
On Thu, Mar 21, 2013, at 09:41 AM, Andy wrote:
> What do I need to do to use this new per segment fa
What do I need to do to use this new per segment faceting method?
From: Mark Miller
To: solr-user@lucene.apache.org
Sent: Wednesday, March 20, 2013 1:09 PM
Subject: Re: Facets with 5000 facet fields
On Mar 20, 2013, at 11:29 AM, Chris Hostetter wrote:
> No
at problem?
From: Toke Eskildsen
To: "solr-user@lucene.apache.org" ; Andy
Sent: Wednesday, March 20, 2013 4:06 AM
Subject: Re: Facets with 5000 facet fields
On Wed, 2013-03-20 at 07:19 +0100, Andy wrote:
> What about the case where there's only a small number of fields (a
>
Hoss,
What about the case where there's only a small number of fields (a dozen or
two) but each field has hundreds of thousands or millions of values? Would Solr
be able to handle that?
From: Chris Hostetter
To: solr-user@lucene.apache.org
Sent: Tuesday, Ma
RCHAR would be sad.
What you'll need to do is use a date formatting function in your SELECT out of
the MySQL database to get the date into the format that Solr likes.
See
https://dev.mysql.com/doc/refman/5.5/en/date-and-time-functions.html#function_date-format
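For example (table and column names invented), DATE_FORMAT can emit the ISO-8601 form directly:

```sql
SELECT id,
       DATE_FORMAT(created_at, '%Y-%m-%dT%H:%i:%sZ') AS created_utc
FROM documents;
```

Note this just reformats the stored value; it assumes the column already holds UTC times.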
xoa
--
Andy Lester => a...@petdance.com => www.petdance.com => AIM:petdance
people expect that their next search-within-a-list
will have those new results.
Andy
--
Andy Lester => a...@petdance.com => www.petdance.com => AIM:petdance
o help on?) but I hope that you'll get some ideas.
xoa
--
Andy Lester => a...@petdance.com => www.petdance.com => AIM:petdance
lr
*
http://lucene.472066.n3.nabble.com/Filtered-search-for-subset-of-ids-td502245.html
*
http://lucene.472066.n3.nabble.com/Search-within-a-subset-of-documents-td1680475.html
Thanks,
Andy
--
Andy Lester => a...@petdance.com => www.petdance.com => AIM:petdance
Thanks man
--
View this message in context:
http://lucene.472066.n3.nabble.com/How-to-use-SolrCloud-in-multi-threaded-indexing-tp4037641p4038482.html
Sent from the Solr - User mailing list archive at Nabble.com.
Hi,
I am going to upgrade to Solr 4.1 from version 3.6, and I want to set up two
shards.
I use ConcurrentUpdateSolrServer to index the documents in Solr 3.6.
I saw the CloudSolrServer API in 4.1, but
1: CloudSolrServer uses LBHttpSolrServer to issue requests, but "*
LBHttpSolrServer should NOT be
Thank you guys, I got the reason now: there is something wrong with the
compareBottom method in my source; it's not consistent with the compare method
--
View this message in context:
http://lucene.472066.n3.nabble.com/custom-solr-sort-tp4031014p4031444.html
Sent from the Solr - User mailing list arch
lain why you want to implement a different sort first? There
> may be other ways of achieving the same thing.
>
> Upayavira
>
> On Sun, Jan 6, 2013, at 01:32 AM, andy wrote:
>> Hi,
>>
>> Maybe this is an old thread or maybe it's different with previous one.
&g
eturn 1.0f;
}
}
@Override
public int compareDocToValue(int arg0, Object arg1)
throws IOException {
// TODO Auto-generated method stub
return 0;
}
}
}
}
and solrcon
ously be a must sooner or later.
--
Andy D'Arcy Jewell
SysMicro Limited
Linux Support
T: 0844 9918804
M: 07961605631
E: andy.jew...@sysmicro.co.uk
W: www.sysmicro.co.uk
maybe just solr)
2. Remove all files under /var/lib/solr/data/index/
3. Move/copy files from /tmp/snapshot.20121220155853703/ to
/var/lib/solr/data/index/
4. Restart Tomcat (or just solr)
Thanks everyone who's pitched in on this! Once I've got this working,
I'll document it.
-An
ill leaves open the question of how to
*pause* SolR or prevent commits during the backup (otherwise we have a
potential race condition).
-Andy
--
Andy D'Arcy Jewell
SysMicro Limited
Linux Support
E: andy.jew...@sysmicro.co.uk
W: www.sysmicro.co.uk
ed to a web-app, which accepts uploads and will be available
24/7, with a global audience, so "pausing" it may be rather difficult
(tho I may put this to the developer - it may for instance be possible
if he has a small number of choke points for input into SolR).
Thanks.
--
And
t, the web app will have to gracefully handle unavailability
of SolR, probably by displaying a "down for maintenance" message, but
this should preferably be only a very short amount of time.
Can anyone comment on my proposed solutions above, or provide any
additional ones?
Thanks for a
aggregate about queries over time? Or for giving
statistics about individual queries, like time breakouts for benchmarking?
For the latter, you want "debugQuery=true" and you get a raft of stats down in
.
xoa
--
Andy Lester => a...@petdance.com => www.petdance.com => AIM:petdance
it updates (sort of copy-on-write style)? So we
are relying on the principle that as long as you have at least one
remaining reference to the data, it's not deleted...
Thanks once again!
-Andy
--
Andy D'Arcy Jewell
SysMicro Limited
Linux Support
E: andy.jew...@sysmicro.co.uk
W: www.sysmicro.co.uk
ge against the down-time which would be required to
regenerate the indexes from scratch?
Regards,
-Andy
--
Andy D'Arcy Jewell
SysMicro Limited
Linux Support
E: andy.jew...@sysmicro.co.uk
W: www.sysmicro.co.uk
but
I'm not sure if that will cut it. I gather merely rsyncing the data
files won't do...
Can anyone give me a pointer to that "easy-to-find" document I have so
far failed to find? Or failing that, maybe some sound advice on how to
proceed?
Regards,
-Andy
--
Andy D
er VM (build 23.5-b02, mixed mode)
We are currently running more tests but it takes a while before the issues
become apparent.
Andy Kershaw
On 29 November 2012 18:31, Walter Underwood wrote:
> Several suggestions.
>
> 1. Adjust the traffic load for about 75% CPU. When you hit 100%, you
Thanks for responding Shawn.
Annette is away until Monday so I am looking into this in the meantime.
Looking at the times of the Full GC entries at the end of the log, I think
they are collections we started manually through jconsole to try and reduce
the size of the old generation. This only seem
yourself. There are not really any sensible
defaults for stopwords, so Solr doesn't provide them.
Just add them to the stopwords.txt and reindex your core.
xoa
--
Andy Lester => a...@petdance.com => www.petdance.com => AIM:petdance
t we must have it in-house on our own servers, for
monitoring internal dev systems, and we'd like it to be open source.
We already have Cacti up and running, but it's possible we could use something
else.
--
Andy Lester => a...@petdance.com => www.petdance.com => AIM:petdance