That is not 100% true. I would think RDBMS and XML would be the most common
importers, but the real flexibility is with the TikaEntityProcessor [1] that
comes w/ DIH ...
http://wiki.apache.org/solr/TikaEntityProcessor
I'm pretty sure it would be able to handle any type of serde (in the case of
I was playing around w/ Sqoop the other day; it's a simple Cloudera tool for
imports (mysql -> hdfs) @ http://www.cloudera.com/developers/downloads/sqoop/
It seems to me it would be pretty efficient to dump to HDFS and have
something like the Data Import Handler be able to read from hdfs:// directly
Are you using Ubuntu by any chance?
It's a somewhat common problem ...
@http://stackoverflow.com/questions/2854356/java-classpath-problems-in-ubuntu
I'm unsure if this has been resolved, but a similar thing happened to me on a
recent VMware image in a dev environment; it worked everywhere else.
You should already get this out of the box ... just tack wt=json onto the
params, i.e.:
http://localhost:8983/solr/select/?q=*%3A*&version=2.2&start=0&rows=10&indent=on&qt=tvrh&tv=true&tv.tf=true&tv.df=true&tv.positions&tv.offsets=true&wt=json
If you look @ /apache-solr-1.4.0/contrib/velocity
>> ...ating such extra information requires a reasonable
>> amount of computation.
>>
>> -Sascha
Does the standard debug component (?debugQuery=on) give you what you need?
http://wiki.apache.org/solr/SolrRelevancyFAQ#Why_does_id:archangel_come_before_id:hawkgirl_when_querying_for_.22wings.22
- Jon
On May 14, 2010, at 4:03 PM, Tim Garton wrote:
> All,
> I've searched around for help with
IIRC, what we ended up doing in a project was to use the
VelocityResponseWriter to write the JSON and set echoParams to read the
handler setup (looping through the variables).
In the template you can grab it w/ something like
$request.params.get("facet_fields") ... I don't remember
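A rough sketch of the idea (from memory, so treat the template as hypothetical):

## emit whatever params the handler echoed back (echoParams=all)
#foreach($name in $request.params.parameterNamesIterator)
  "$name": "$request.params.get($name)",
#end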
To follow up on it ... it seems dumping to Solr is common ...
http://highscalability.com/how-rackspace-now-uses-mapreduce-and-hadoop-query-terabytes-data
- Jon
Good question, +1 on finding answer, my take ...
Depending on how large the log files you are talking about are, it might be
better to do this w/ HDFS / Hadoop (and a scripting language like Pig), or Amazon EMR
http://developer.amazonwebservices.com/connect/entry.jspa?externalID=873
Theoretically y
Thanks, I'm looking @ the atomic broadcast messaging protocol of ZooKeeper and
think I have found what I was looking for ...
- Jon
On Apr 28, 2010, at 11:27 PM, Yonik Seeley wrote:
> On Wed, Apr 28, 2010 at 2:23 PM, Jon Baer wrote:
>> From what I understand Cassandra uses a gener
All that stuff happens in the JDBC driver associated w/ the DataSource so
probably not unless there is something which can be set in the Oracle driver
itself.
One thing that might have helped in this case is if
readFieldNames() in the JdbcDataSource dumped its return to a debug log f
Does a "sort=field5+desc" on the query param not work?
- Jon
On Apr 29, 2010, at 9:32 AM, Doddamani, Prakash wrote:
> Hi,
>
> I am using the boost factor as below:
>
> field1^20.0 field2^5 field3^2.5 field4^.5
>
> Where it searches first in field1 then field2 and
You should end up w/ a file like "conf/dataimport.properties" @ full-import
time; might it be that it did not get written out?
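For reference, DIH writes it via java.util.Properties, so it should look
something like this (timestamp is just an example):

#Wed Apr 28 15:05:33 EDT 2010
last_index_time=2010-04-28 15\:05\:33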
- Jon
On Apr 28, 2010, at 3:05 PM, safl wrote:
> Hello,
>
> I'm just new on the list.
> I searched a lot on the list, but I didn't find an answer to my question.
Just a general theory question ...
From what I understand Cassandra uses a generic gossip protocol for node
discovery (custom); will the Solr-Cloud have something similar?
I was looking through both projects and it seems like this "protocol" type can
be ripped from the org.apache.cassandra.gms package
Correct me if I'm wrong, but I think the problem here is that while there is a
"fetchindex" command in replication, the handler and the master/slave setup
pertain to the core config.
For example, for this to work properly the solr.xml configuration would need to
setup some type of "global" replication
I would not use this layout; you are putting important Solr config files
outside onto the docroot (presuming we are looking @ the webapps folder) ...
here is my current Tomcat project (if it helps):
[507][jonbaer.MBP: tomcat]$ pwd
/Users/jonbaer/WORKAREA/SVN_HOME/my-project/tomcat
[508][jonbaer
Ugh, I just got bit hard by this on a Tomcat project ...
https://issues.apache.org/jira/browse/SOLR-1238
Is there any way to get access to that RequestEntity w/o patching? Also, are
there security implications w/ using the repeatable payloads?
Thanks.
- Jon
I don't think there is anything low-level in Lucene that will specifically
output anything like lastOptimized() to you, since it can be set up a few ways.
Your best bet is probably adding a postOptimize hook and dumping it to log /
file / monitor / etc., probably something like ...
lastO
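The snippet got cut off above, but the hook would be along these lines in
solrconfig.xml (the logging script itself is hypothetical):

<listener event="postOptimize" class="solr.RunExecutableListener">
  <str name="exe">log-last-optimize.sh</str>
  <str name="dir">bin</str>
  <bool name="wait">false</bool>
</listener>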
Hi,
It looks like I'm trying to do the same thing as this open JIRA here ...
https://issues.apache.org/jira/browse/SOLR-975
I noticed in index.jsp it has a reference to:
<%
  // a quick hack to get rid of get-file.jsp -- note this still spits out
  // invalid HTML
  out.write(
    org.apache.solr.handler
How large is the index? There is probably a lot of work in getting Solr and
its dependencies ported (for example Lucene / RMI, from what I have read) ...
Interestingly enough there is a Jetty container for it ...
http://code.google.com/p/i-jetty/
I think Solr itself would be OK to port to Dalvik just the L
You should maybe scan your db for bad data ...
This bit ...
at sun.nio.cs.UTF_8$Decoder.decodeLoop(UTF_8.java:324)
at java.nio.charset.CharsetDecoder.decode(CharsetDecoder.java:561)
is probably happening on a specific record somewhere; in the query, limit the id
range and try to narrow down which
There is the LuSQL tool, which I've used a few times.
http://lab.cisti-icist.nrc-cnrc.gc.ca/cistilabswiki/index.php/LuSql
http://www.slideshare.net/eby/lusql-quickly-and-easily-getting-your-data-from-your-dbms-into-lucene
- Jon
On Apr 7, 2010, at 11:26 PM, bbarani wrote:
>
> Hi,
>
> I am curr
Before digging through src ...
Docs say ... "Every component can have an extra attribute enable which can be
set as true/false."
It doesn't seem that listeners are part of the PluginInfo scheme though ... for
example, is this possible?
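i.e. something like this (the enable flag on components is documented; whether
a listener honors it is exactly the question):

<searchComponent name="tvComponent"
    class="org.apache.solr.handler.component.TermVectorComponent" enable="false"/>

<listener event="postCommit" class="solr.RunExecutableListener" enable="false"/>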
rch component.
>
> What's the use case for controlling a handler's enabled flag on the fly?
>
> Erik
This is just something that seems to come up now and then ...
* - I'd like to write a last-component which does something specific for a
particular declared handler (/handler1 for example) and there is no way to
determine which handler it came from @ the moment (or can it?)
* - It would be nice if
Just throwing this out there ... I recently saw something I found pretty
interesting from CMU ...
http://csunplugged.org/activities
The search algorithm exercise was focused on a Battleship lookup I think.
- Jon
On Mar 24, 2010, at 10:40 AM, Erik Hatcher wrote:
> I've got a couple of quest
...e-mail generation of search results. I'm in the process of baking
> VrW into the main Solr example (it's there on trunk, basically) and more
> examples are better.
It's also possible to try the Velocity contrib response writer and
page it w/ the sitemap elements.
BTW generating a sitemap was a big reason for our switch from GSA to Solr,
because (for some reason) the map took way too long to generate (even for
simple requests).
If you page through
I am interested in this as well ... I'm also having the issue of understanding
whether a result has been elevated by the QueryElevation component. It sounds
like SolrJ would need to know about some type of metadata contained within the
docs, but I haven't seen SolrJ dealing w/ payloads specifically yet
Maybe some things to try:
* make sure your uniqueKey is a string field type (i.e. if using int it will
not work)
* forceElevation to true (if sorting)
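i.e. roughly (field name is just an example):

<!-- schema.xml: uniqueKey should be a string type -->
<field name="id" type="string" indexed="true" stored="true" required="true"/>
<uniqueKey>id</uniqueKey>

and on the query: &enableElevation=true&forceElevation=true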
- Jon
On Mar 9, 2010, at 12:34 AM, Ryan Grange wrote:
> Using Solr 1.4.
> Was using the standard query handler, but needed the boost by field
> fu
Isn't this what Lucene/Solr payloads are theoretically for?
i.e.:
http://www.lucidimagination.com/blog/2009/08/05/getting-started-with-payloads/
- Jon
On Mar 8, 2010, at 11:15 PM, Lance Norskog wrote:
> This is an interesting idea. There are other projects to make the
> analyzer/filter chain mor
oost. You would need to re-index.
> Moreover, there is no way to "search by boost".
>
> Cheers
> Avlesh
>
> On Fri, Nov 13, 2009 at 8:17 PM, Jon Baer wrote:
>
>> Hi,
>>
>> I'm trying to figure out if there is an easy way to basically "reset"
Hi,
I'm trying to figure out if there is an easy way to basically "reset" any
doc boosts which you have made (for analytical purposes) ... for example, if I
run an index, gather a report, boost docs based on the report, and reset the
boosts @ the time of the next index ...
It would seem to be from just k
For this list I usually end up @ http://solr.markmail.org (which I believe also
uses Lucene under the hood)
Google is such a black box ...
Pros:
+ 1 Open Source (enough said :-)
There also seems to always be the notion that "crawling" lends itself to
producing the best results, but that is rarely
I think it could be as simple as: if you have 1+ entities in the param,
then clean=false as well (because you are specifically interested in
just targeting that entity import) ...
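i.e. something like (entity name is just an example):
http://localhost:8983/solr/dataimport?command=full-import&entity=news&clean=false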
- Jon
On Mar 15, 2009, at 3:07 AM, Shalin Shekhar Mangar wrote:
On Fri, Mar 13, 2009 at 9:56 PM, Jon Baer
I have a few general questions re: caching ...
1. The FastLRUCache in 1.4 seems promising, but is there a more
comprehensive list of its benefits? Is there a huge speed boost from using
this type of cache?
2. What are the possibilities of using external caches for scaling out,
like memcachedb
Bear in mind (and correct me if I'm wrong) but a "full-import" is still
a "full-import" no matter what entity you tack onto the param.
Thus I think clean=false should be appended (a friend starting off in
Solr was really confused by this + could not understand why it did a
delete on all documents)
hooks for the entire import, just nothing on the
entity level?
- Jon
On Mar 9, 2009, at 2:48 PM, Noble Paul നോബിള്
नोब्ळ् wrote:
it is really not available. Probably we can have a LogTransformer
which can log using slf4j
On Mon, Mar 9, 2009 at 11:55 PM, Jon Baer wrote:
Hi,
Is
Are you using the replication feature by any chance?
- Jon
On Mar 10, 2009, at 2:28 PM, Matthew Runo wrote:
We're currently using 1.4 in production right now, using a recent
nightly. It's working fine for us.
Thanks for your time!
Matthew Runo
Software Engineer, Zappos.com
mr...@zappos.com
I'd suggest what someone else mentioned: just do a full clean up of
the index. Sounds like you might have kill -9'd or stopped the process
manually while indexing (that would be the only reason for a left-over lock).
- Jon
On Mar 11, 2009, at 5:16 AM, Ashish P wrote:
I added single in indexDefault
Hi,
Is there currently anything in DIH to allow for more verbose logging
(something more than status)? Was there a way to hook in your own
for debugging purposes? I can't seem to locate the options in the
Wiki or remember if it was available.
Thanks.
- Jon
This part:
The part of Zoie that enables real-time searchability is the fact that
ZoieSystem contains three IndexDataLoader objects:
* a RAMLuceneIndexDataLoader, which is a simple wrapper around a
RAMDirectory,
* a DiskLuceneIndexDataLoader, which can index directly to the
FSDire
you've done so far? It sounds like
you have tried out some function query stuff, but can you share what
you did there?
-Grant
I don't think "general" discussion forums really help ... it would be
great if every major page in the Solr wiki had a discuss link off to
somewhere though, +1 for that ...
i.e.:
http://wiki.apache.org/solr/SolrRequestHandler
http://wiki.apache.org/solr/SolrReplication
etc.
For me even panning
I've spent a few months trying different techniques w/ regards to
searching just news articles w/ players and can't seem to find the
perfect setup.
Normally I take into consideration date (frequency + recently
published), title (which boosts on relevancy) and general mm in body
text (and s
teTransformer,
com.nhl.solr.EnumeratedEntityTransformer">
I guess what I'm looking for is the snippet which shows how it is set up (the
initial counter) ...
- Jon
On Mon, Feb 2, 2009 at 12:39 PM, Noble Paul നോബിള് नोब्ळ् <
noble.p...@gmail.com> wrote:
> On Mon, Feb 2, 2009 at 11:01
le from the configuration.
Hi,
Sorry I know this exists ...
"If an API supports chunking (when the dataset is too large) multiple calls
need to be made to complete the process. XPathEntityprocessor supports this
with a transformer. If transformer returns a row which contains a field *
$hasMore* with a the value "true" the
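The quote is cut off, but based on that description the transformer would be
along these lines (a sketch; field names and URL are hypothetical):

import java.util.Map;
import org.apache.solr.handler.dataimport.Context;
import org.apache.solr.handler.dataimport.Transformer;

// signal XPathEntityProcessor to keep fetching while pages remain
public class PagingTransformer extends Transformer {
  @Override
  public Object transformRow(Map<String, Object> row, Context context) {
    int page = Integer.parseInt(String.valueOf(row.get("currentpage")));
    int total = Integer.parseInt(String.valueOf(row.get("totalpages")));
    if (page < total) {
      row.put("$hasMore", "true");
      row.put("$nextUrl", "http://example.com/feed?page=" + (page + 1)); // hypothetical
    }
    return row;
  }
}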
Hi,
I've just had a bump in the night where some feeds have disappeared. I'm
wondering, since I'm running the base 1.3 copy, would patching it w/
https://issues.apache.org/jira/browse/SOLR-842
break anything? Has anyone done this yet?
Thanks.
- Jon
Could it be the framework you are using around it? I know some IoC
containers will auto-pool objects underneath as a service without you really
knowing it is being done, or it has to be explicitly turned off. Just a
thought. I use a single server for all requests behind a Hivemind setup ...
umm not
I think DIH would have to support JNDI, which it currently does not (I
think). I'd also be interested in this (or in having the credentials come
from the db itself).
- Jon
On Jan 18, 2009, at 11:37 AM, con wrote:
Hi all
Currently i am defining database parameters like the url, username and
passw
Hi,
Anyone have a quick, clever way of dealing w/ paged XML for
DataImportHandler? I have metadata like this:
1
3
15
I unfortunately can not get all the data in one shot, so I need to
make a number of requests based on the paging metadata
This sounds a little like my original problem of deltaQuery imports
per entity ...
https://issues.apache.org/jira/browse/SOLR-783
I wonder if those 2 hacks could be combined to fix the issue.
- Jon
On Dec 6, 2008, at 12:29 PM, Marc Sturlese wrote:
Hey there,
I am doing some hacks to some
something similar to a VPS ...
http://www.sun.com/bigadmin/content/zones/
- Jon
On Dec 5, 2008, at 10:58 AM, Kashyap, Raghu wrote:
Jon,
What do you mean by off a "Zone"? Please clarify
-Raghu
-Original Message-
From: Jon Baer [mailto:[EMAIL PROTECTED]
Sent: Thursday, December
Just curious, is this off a "zone" by any chance?
- Jon
On Dec 4, 2008, at 10:40 PM, Kashyap, Raghu wrote:
We are running solr on a solaris box with 4 CPU's(8 cores) and 3GB
Ram.
When we try to index sometimes the HTTP Connection just hangs and the
client which is posting documents to solr
Sorry, missed that (and probably a dumb question): does that -D flag work
for setting a RAMDirectory as well?
- Jon
On Nov 30, 2008, at 8:42 PM, Yonik Seeley wrote:
OK, the development version of Solr should now be fixed (i.e. NIO
should be the default for non-Windows platforms). The next n
HadoopEntityProcessor for the DIH?
I've wondered about this, as they make Hadoop cluster LiveCDs and EC2 has
images, but the best way to make use of them is always a challenge.
- Jon
On Nov 29, 2008, at 3:34 AM, Erik Hatcher wrote:
On Nov 28, 2008, at 8:38 PM, Yonik Seeley wrote:
Or, it would b
This sounds exactly like the issue I had when going from 1.3 to 1.4 ... it
sounds like DIH is trying to automagically figure out the columns :-\
- Jon
On Nov 25, 2008, at 6:37 AM, Joel Karlsson wrote:
Hello,
I get an Unknown field error when I'm indexing an Oracle DB. I've
reduced the
number of
https://issues.apache.org/jira/secure/attachment/12394282/solr2_maho_impression.png
https://issues.apache.org/jira/secure/attachment/12394266/apache_solr_b_red.jpg
Maybe another template idea ... I just started playing around w/ this
plugin:
http://malsup.com/jquery/taconite/
Would be pretty neat to have that as a response (or @ least the
technique), not sure how well known it is or if there is something W3C-
based in the pipeline that is similar. Pr
. right?
we did some refactoring to minimize the object creation for
case-insensitive comparisons.
I guess it should be rectified soon.
Thanks for bringing it to our notice.
--Noble
On Thu, Nov 20, 2008 at 10:05 AM, Jon Baer <[EMAIL PROTECTED]> wrote:
Schema:
DIH:
The col
its fields must be case-sensitive.
Could you tell me the usecase or can you paste the data-config?
--Noble
On Thu, Nov 20, 2008 at 8:55 AM, Jon Baer <[EMAIL PROTECTED]> wrote:
Sorry I should have mentioned this is from using the
DataImportHandler ...
it seems case insensitive
if that resolves the problem. Thanks.
- Jon
On Nov 19, 2008, at 6:44 PM, Ryan McKinley wrote:
schema fields should be case sensitive... so DOCTYPE != doctype
is the behavior different for you in 1.3 with the same file/schema?
On Nov 19, 2008, at 6:26 PM, Jon Baer wrote:
Hi,
I wanted to
Hi,
I wanted to try the TermVectorComponent w/ my current schema setup, and I
did a build off trunk, but it's giving me something like ...
org.apache.solr.common.SolrException: ERROR:unknown field 'DOCTYPE'
Even though it is declared in schema.xml (lowercase), before I grep
replace the entire f
I've also had the same issues here, but when trying to switch to
HTMLStripWhitespaceTokenizerFactory I found that it only removes the
tags; when it comes to all forms of javascript includes in a
document it keeps them all intact, so I ended up w/ scripts in the
document text. Is there any easy
Hi Lance,
I guess I got your problem.
So you wish to create docs for both entities (as suggested by Jon
Baer). The best solution would be to create two root entities. The
first one should be the outer one; write a transformer to store all the
urls into the db. The JdbcDataSource can do
Another idea is to create the logic you need and dump it to a temp
MySQL table and then fetch the feeds; that has worked pretty nicely
for me, and it removes the need for the outer feed to do the work. @
first I could not figure out if this was a bug or feature ...
Something like ...
proce
On Nov 1, 2008, at 1:16 PM, Grant Ingersoll wrote:
How do you propose to distinguish those words from the other ones?
** They are field values from other documents
The problem you are addressing is often called keyword extraction.
In general, it's a difficult problem, but you may have d
own SolrTermVectorMapper and
have it customize the TV output...
Thanks,
Grant
Hi,
So I'm looking to either use this or build a component which might do
what I'm looking for. I'd like to figure out if it's possible to use a
single doc to get tag generation based on the matches within that
document, for example:
1 News Doc -> contains 5 Players and 8 Teams (show them as pos
Is that right? I find the wording of "clean" a little confusing. I
would have thought this is what I had needed earlier but the topic
came up regarding the fact that you can not deleteByQuery for an
entity you want to flush w/ delta-import.
I just noticed that the original JIRA request sa
I'd like to say that deal is part of https://issues.apache.org/jira/browse/SOLR-783
but looking @ it closely it might be different.
I think the issue is that delta-import does not have anything to match
its last_index_time against when doing feeds. I'm also interested in
that type of merge
If that is the case you should look @ the DataImportHandler examples
as they can already index RSS; I'm doing it now for ~ a dozen feeds on
an hourly basis. (This also works for any XML-based feed: XHTML, XML,
etc.) I find Nutch more useful for plain vanilla HTML (something that
was built
Hi,
I'm pretty intrigued by the Ocean search stuff and the Lucene patch. I'm
wondering if it's something that a tweaked Solr w/ a modded Lucene can run
now? Has anyone tried merging that patch and running it w/ Solr? I'm
sure there is more to it than just swapping out the libs, but the real
time in
Is there a way to prevent this from occurring (or a way to nail down
the doc which is causing it?):
INFO: [news] webapp=/solr path=/admin/dataimport
params={command=status} status=0 QTime=0
Exception in thread "Thread-14" java.lang.StackOverflowError
at java.util.regex.Pattern$Single
Hi,
What is the proper behavior supposed to be between SolrJ and caching?
I'm proxying through a framework and wondering if it is possible to
turn caching on / off programmatically depending on the type of
query (or if this will have no effect whatsoever) ... since SolrJ uses
Apache HT
What is your <uniqueKey> set to? Could it be you have duplicates in
your uniqueKey setup (thus producing only 10 rows in the index)?
- Jon
On Oct 12, 2008, at 1:30 PM, con wrote:
I wrote a jdbc program to implement the same query. But it is
returning all
the responses, 25 nos.
But the solr is still in
we do not really
have any knowledge on how to delete specific rows.
How about passing a deleteQuery=type:x in the request params
or having a deleteByQuery on each top-level entity which can be used
when that entity is doing a full-import
--Noble
On Fri, Oct 3, 2008 at 4:32 AM, Jon Baer <[EMAIL P
Just curious,
Currently a full-import call does a delete-all even when appending an
entity param ... wouldn't it be possible to pick up the param and just
delete on that entity somehow? It would be nice if there was
something involved w/ having an entity field name that worked w/ DIH
to
If I understand your question right ... you would not need a
transformer; basically you nest entities under each other ... i.e.:
<dataSource driver="com.mysql.jdbc.Driver"
    url="jdbc:mysql://localhost/nhldb?connectTimeout=0&autoReconnect=true"
    user="root" password="" batchSize="-1"/>
process
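The entity markup was stripped above, but the nesting is basically this
(table / column names are made up):

<entity name="team" query="select id, name from teams">
  <entity name="player" query="select name from players where team_id='${team.id}'"/>
</entity>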
Why even do any of the work :-)
I'm not sure any of the free analytics apps (a la Google) can, but the
paid ones do; just drop the query into one of those and let them
analyze ...
http://www.google.com/analytics/
Then just parse the reports.
- Jon
On Sep 25, 2008, at 8:39 AM, Mark Miller wro
le to read it back using
context.getPersistedProperty("key");
This would be generic enough for users to get going.
Thoughts?
--Noble
Actually how does ${dataimporter.last_index_time} know which entity I'm
specifically updating? I feel like I'm missing something; can it
work like that?
Thanks.
- Jon
Question -
So if I issued a dataimport?command=delta-import&entity=one,two,three
Would this also hit items w/o a delta-import, like four,five,six, etc.?
I'm trying to set something up and I ended up with 28k+ documents, which
seems more like a full import, so do I need to do something like delt
Another +1 for Shalin and Noble for DIH ...
On Sep 16, 2008, at 9:50 PM, Erik Hatcher wrote:
+1 for Grant's efforts! He put a lot of sweat into making this
release a reality.
Erik
On Sep 16, 2008, at 9:29 PM, Grant Ingersoll wrote:
The Apache Solr team is happy to announce the av
That was it, thanks Shalin.
On Sep 16, 2008, at 1:41 PM, Shalin Shekhar Mangar wrote:
On Tue, Sep 16, 2008 at 10:41 PM, Jon Baer <[EMAIL PROTECTED]> wrote:
For some reason my XPath attribute keeps failing to get picked up
here (is
that the proper format?):
Put a slash between no
Hi,
For some reason my XPath attribute keeps failing to get picked up here
(is that the proper format?):
- Jon
it into a JSON Array?
Thanks
** julio
-----Original Message-----
From: Jon Baer [mailto:[EMAIL PROTECTED]
Sent: Sunday, September 14, 2008 9:01 PM
To: solr-user@lucene.apache.org
Subject: Re: SolrJ and JSON in Solr -1.3
Hmm am I missing something but isn't the real point of SolrJ to be
able to use the binary (javabin) format to keep it small / tight /
compressed? I have had to proxy Solr recently and found just throwing
a SolrDocumentList as a JSONArray (via json.org libs) works pretty
well (YMMV). I was
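Roughly what I did (a sketch; assumes the json.org lib is on the classpath):

import org.apache.solr.common.SolrDocument;
import org.apache.solr.common.SolrDocumentList;
import org.json.JSONArray;
import org.json.JSONObject;

public class SolrJsonProxy {
  // one JSON object per document, built straight from the field map
  public static JSONArray toJson(SolrDocumentList docs) {
    JSONArray arr = new JSONArray();
    for (SolrDocument doc : docs) {
      arr.put(new JSONObject(doc.getFieldValueMap()));
    }
    return arr;
  }
}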
Hi,
Was wondering if there was an update on the push for a final 1.3?
I wanted to build a final .war but am wondering about status and whether I
should hold off ... everything in trunk seems promising; any major issues?
Thanks.
- Jon
Yeah I think the snapshot techniques that ZFS provides would be very
nice for handling indexes, although it remains to be seen, as I have not
seen too much info pertaining to it.
I'm hoping to have a chance to put Solr on OpenSolaris soon and will
see what works / what doesn't. (BTW this combo
Thanks ... on a somewhat related note, does having the index on ZFS
buy me anything? Has anyone toyed w/ ZFS snapshots / send / recv to
automount? Does it work?
- Jon
On Aug 21, 2008, at 6:43 PM, Alexander Ramos Jardim wrote:
You need to setup one snapshooter for each index
2008/8/21 Jon
Hi,
I've started putting together a small cluster and am going through the
setup of some of the scripts; do they have any awareness of a
multicore setup? It seems like I can only snapshot a single master
directory. I'm assuming these tools are compatible with that type of
setup but just want
Hi,
(I'm sure this was asked before but I found nothing on MarkMail) ...
Wondering if Solr can handle this on its own or if something needs to
be written ... would like to handle recognizing date inputs to a
search box for news articles, items such as "August 1", "August 1st" or
"08/01/2008"
It seems that the spellchecker works great, except all the "7 words you
can't say on TV" resolve to very important people; is there a way to
exclude certain words so they don't resolve?
Thanks.
- Jon
This is *exactly* my issue ... very nicely worded :-)
I would have thought facet.query=*:* would have been the solution but
it does not seem to work. I'm interested in getting these *total*
counts for UI display.
- Jon
On Jul 22, 2008, at 6:05 AM, Stefan Oestreicher wrote:
Hi,
I have a
Hi,
I can't seem to locate any info on how to get SolrJ + spellcheck
working together. I'd like to query the spellchecker if 0 items were
matched; is SolrJ "generic" enough to pick apart added component
results from the bottom of a query?
Thanks.
- Jon
I've gone from a complex multicore setup back to a single solrconfig
setup and am using a doctype field (since the index is pretty small);
however there are a few spots where items are laid out in tabs and
each tab has a count of docs associated, ie:
News (123) | Images (345) | Video (678) | Bl