I have the same problem. This happens only for some documents in the index.
Like sharadgaur, the problem ceased when I removed
ReversedWildcardFilterFactory from my analysis chain,
HTMLStripCharFilterFactory has been there before and after.
I am running branch-3.6 r1238628, and as far as I can tell I was able to
create a reproducible test case.
We are querying ranges of documents. When I tried to isolate the document
that causes trouble, I found that a single-document query fails on exactly
every second request (it fails constantly when requesting a range of
documents that includes that document).
I posted the files here: http://www.mediafire.com/?z43a5qyfvz4zxp1
Robert, I just tried with 3.6-SNAPSHOT 1296203 from svn - the problem is
still there.
I am just about to leave for a vacation. I'll try to open a JIRA issue this
evening.
Ah, ok - thank you for looking at it.
But - the wiki page has a footnote that says "a tokenizer must be defined
for the field, but it doesn't need to be indexed". The body field has the
type "dcx_text" which has a tokenizer.
Is the documentation wrong here or am I misunderstanding something?
true
16
192
true
/etc/solr/conf/solr.keytab
solr/
sandbox.hortonworks....@hortonworks.com
Thanks,
Andrew Bumstead
ven.org/maven2/org.apache.hadoop/hadoop-common/2.7.1/org/apache/hadoop/security/UserGroupInformation.java#UserGroupInformation.spawnAutoRenewalThreadForUserCreds%28%29
>
> [1] - https://issues.apache.org/jira/browse/HADOOP-6656
>
> On Fri, Jan 8, 2016 at 10:21 PM, Andrew Bumstead <
No experience with this personally, but it seems like you are describing
https://cwiki.apache.org/confluence/display/solr/Language+Analysis#LanguageAnalysis-UnicodeCollation
- Andy -
-Original Message-
From: Shawn Heisey [mailto:apa...@elyograg.org]
Sent: Monday, March 14, 2016 10:51 AM
Are you sorting against an untokenized field (either defined using the 'string'
fieldType or a fieldType that is configured with KeywordTokenizerFactory)?
Solr will let you sort against a tokenized field. I'm not sure what happens
internally when you do this, but the results will not be what you expect.
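For reference, a minimal schema sketch of the two untokenized options
described above (field and type names are invented for illustration):

    <!-- option 1: the built-in string type, indexed verbatim -->
    <field name="title_sort" type="string" indexed="true" stored="false"/>

    <!-- option 2: single-token text, so the value can still be lowercased -->
    <fieldType name="sortable_text" class="solr.TextField">
      <analyzer>
        <tokenizer class="solr.KeywordTokenizerFactory"/>
        <filter class="solr.LowerCaseFilterFactory"/>
      </analyzer>
    </fieldType>

    <copyField source="title" dest="title_sort"/>

You would then sort on title_sort while still searching the tokenized
original.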
Dear user and dev lists,
We are loading files from a directory and would like to index a portion of
each file path as a field as well as the text inside the file.
E.g., on HDFS we have this file path:
/user/andrew/1234/1234/file.pdf
And we would like the "1234" token parsed from the path and indexed as its
own field.
I'm not sure, it's a remote team but will get more info. For now, assuming
that a certain directory is specified, like "/user/andrew/", and a regex is
applied to capture anything two directories below matching "*/*/*.pdf".
Would there be a way to capture the wild-card matches?
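If the capture ends up being done in client-side indexing code, a minimal
sketch (the path layout, regex, and class name are assumptions based on the
example above):

    import java.util.regex.Matcher;
    import java.util.regex.Pattern;

    public class PathToken {
        // /user/<name>/<dir1>/<dir2>/<file>.pdf -- capture the first directory level
        private static final Pattern DIR =
            Pattern.compile("^/user/[^/]+/([^/]+)/[^/]+/[^/]+\\.pdf$");

        public static String dirToken(String path) {
            Matcher m = DIR.matcher(path);
            return m.matches() ? m.group(1) : null;  // "1234" for the example path
        }
    }

The returned token would then be added as its own field before posting the
document.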
I followed the instructions here
https://wiki.apache.org/lucene-java/HowtoConfigureIntelliJ, including `ant
idea`, but I'm still not getting the links in solr classes and methods; do
I need to add libraries, or am I missing something else?
Thanks!
provide literal.filename=blah/blah
>
> Upayavira
>
>
> On Tue, Jul 21, 2015, at 07:37 PM, Andrew Musselman wrote:
> > I'm not sure, it's a remote team but will get more info. For now,
> > assuming
> > that a certain directory is specified, like "/user/a
Which can only happen if I post it to a web service, and won't happen if I
do it through config?
On Tue, Jul 21, 2015 at 2:19 PM, Upayavira wrote:
> yes, unless it has been added consciously as a separate field.
>
> On Tue, Jul 21, 2015, at 09:40 PM, Andrew Musselman wrote:
>
n "ant idea" and re-open it
> if switching between too diverged branches (e.g., 4.10 and 5_x).
>
> On Tue, 21 Jul 2015 at 21:53, Andrew Musselman wrote:
>
> > I followed the instructions here
> > https://wiki.apache.org/lucene-java/HowtoConfigureIntelliJ, including
Fwding to user..
-- Forwarded message --
From: Andrew Musselman
Date: Wed, Jul 22, 2015 at 8:54 AM
Subject: Re: Parsing and indexing parts of the input file paths
To: d...@lucene.apache.org
Thanks - and I should tell it to index the "id" field, which eventually
contains the parsed path?
On Wed, Jul 22, 2015 at 9:42 AM, Erick Erickson
wrote:
> Don't understand your question. If you're talking two different
> fields, use copyField.
>
> On Wed, Jul 22, 2015 at 8:55 AM, Andrew Musselman
> wrote:
> > Fwding to user..
> >
> > --
> Best,
> Erick
>
> On Wed, Jul 22, 2015 at 9:47 AM, Andrew Musselman
> wrote:
> > Trying to figure out how to parse the file path, which when I run the
> > "cloud" instance becomes the "id" for each PDF document.
> >
> > Is that "id" f
We had a similar issue; when this happened we did a fetch index on each core
that was out of sync to put them back right again.
> On 5 Mar 2015, at 14:40, Martin de Vries wrote:
>
> Hi,
>
> We have index corruption on some cores on our Solrcloud running version
> 4.8.1. The index
happen less often (allowing
it to recover from new documents added and only send the changes with a wider
gap) - but I can't remember what those were.
-Original Message-
From: Andrew Butkus [mailto:andrew.but...@c6-intelligence.com]
Sent: 05 March 2015 14:42
To:
Subject: Re: Solrcloud Index
handler.dataimport.DocBuilder.doDelta(DocBuilder.java:338)
at
org.apache.solr.handler.dataimport.DocBuilder.execute(DocBuilder.java:223)
... 24 more
Caused by: java.lang.NullPointerException
at
org.apache.solr.handler.dataimport.JdbcDataSource$ResultSetIterator.<init>(JdbcDataSource.java:277)
... 32 more
Andrew Gilbertson
Don't know if this is what you are looking for, but we had a similar
requirement. In our case each folder had a unique identifier associated with it.
When generating the Solr input document our code populated 2 fields,
parent_folder and folder_hierarchy (multi-valued), and for a document in a
given folder the hierarchy field held that folder plus each of its ancestors.
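A sketch of that population step (field names taken from the description
above; the path handling is illustrative only):

    import org.apache.solr.common.SolrInputDocument;

    SolrInputDocument doc = new SolrInputDocument();
    String dir = "/user/andrew/1234";  // folder containing the file
    doc.addField("parent_folder", dir);
    // multi-valued: the folder itself plus every ancestor
    String prefix = dir;
    while (!prefix.isEmpty()) {
        doc.addField("folder_hierarchy", prefix);
        int cut = prefix.lastIndexOf('/');
        prefix = (cut > 0) ? prefix.substring(0, cut) : "";
    }

Filtering on folder_hierarchy then matches a document from any ancestor
folder.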
Based on his example, it sounds like Naresh not only wants the tags field to
contain at least one of the values [T1, T2, T3] but also wants to exclude
documents that contain a tag other than T1, T2, or T3 (Doc3 should not be
retrieved).
If the set of possible values in the tags field is limited, you can exclude
the unwanted values explicitly.
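If so, the request can say it explicitly -- match at least one wanted tag and
exclude every other known value (T4/T5 stand in for the rest of the
vocabulary):

    q=tags:(T1 OR T2 OR T3)
    fq=-tags:(T4 OR T5)

Docs carrying only T1/T2/T3 match, while a doc that also has an excluded tag
is filtered out.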
If I understand you correctly you want to boost the score of documents where
the contents of the product_name field match exactly (other than case) the
query string.
I think what you need is for the dummy_name field to be non-tokenized (indexed
as a single string rather than parsed into individual tokens).
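A sketch of that setup -- a lowercased single-token copy of the field, queried
alongside the tokenized one (names are placeholders):

    <fieldType name="string_lc" class="solr.TextField">
      <analyzer>
        <tokenizer class="solr.KeywordTokenizerFactory"/>
        <filter class="solr.LowerCaseFilterFactory"/>
      </analyzer>
    </fieldType>
    <field name="product_name_exact" type="string_lc" indexed="true" stored="false"/>
    <copyField source="product_name" dest="product_name_exact"/>

and then query with something like

    defType=edismax&q=canon camera&qf=product_name product_name_exact^10

so that a whole-string match on the untokenized copy lifts the score.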
It is possible to get the original facet counts for the field you are filtering
on (we have been using this since Solr 3.6). Don't know if this can be extended
to get the original counts for all fields however.
This syntax is described here:
https://cwiki.apache.org/confluence/display/solr/Fac
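In short: tag the filter, then exclude that tag when faceting (field and tag
names are placeholders):

    fq={!tag=tagsFilter}category:books
    facet=true
    facet.field={!ex=tagsFilter}category

The counts for the excluded field are computed as if the tagged filter were
not applied.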
Thanks Markus, Scott, and Erick, I appreciate the input.
Scott, I am not clear what you meant by "One reason is that zkServer.cmd tells
the process that runs ZooKeeper by judging the DOS window title. However,
depending on what version of Windows you use and how you start the DOS
window, it could b
autoCommit to 10 we committed less often per the log messages, but I
expected the bigger initial segments to result in less merging and thus lower
disk activity. Testing showed no significant change in disk writing, however.
Thanks for any help.
Andrew
the disk writes. Good savings but not enough to significantly change
the load we are putting on the SAN.
Andrew
On Fri, Nov 4, 2016 at 12:00 PM, Erick Erickson
wrote:
> Every time your ramBufferSizeMB limit is exceeded, a segment is
> created that's eventually merged. In terms of _th
I have an existing solr installation which uses the mysql jdbc driver to
access a remote database and index some complex data structures. This
has been serving me very well for a good long time, but I now need to
expand it a bit. I need to add some additional data from another source
via json, and
could you explore in a bit more detail about what format the
> json is in..
>
> Best,
> Erick
>
> On Wed, Dec 13, 2017 at 7:24 AM, Andrew Kelly wrote:
>> I have an existing solr installation which uses the mysql jdbc driver to
>> access a remote database and index som
0Z":965,
"gap":"+1MONTH",
"start":"2013-12-01T14:00:00Z",
"end":"2015-02-01T14:00:00Z"}
}
Now I want to do the same thing, but instead of record counts for each date
group I want the total of seatingCapacity, which is an integer field. Does
anyone know how to do this?
--Andrew Shumway
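If the Solr version allows it, the JSON Facet API (Solr 5.1+) can attach an
aggregation to each range bucket; a sketch, with the date field name guessed
from the output above:

    json.facet={
      byMonth : {
        type  : range,
        field : eventDate,
        start : "2013-12-01T14:00:00Z",
        end   : "2015-02-01T14:00:00Z",
        gap   : "+1MONTH",
        facet : { totalSeats : "sum(seatingCapacity)" }
      }
    }

Each bucket then carries a totalSeats sum alongside its count.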
Is port 8984 open in your ec2's security settings?
>From the ec2 instance, can you curl localhost:8984/solr? Do you see
anything?
On Tue, Jul 29, 2014 at 9:15 AM, pushkar sawant
wrote:
> Hi Team,
> I have done a Solr 4.9.0 setup on an Ubuntu 12.04 instance on AWS
> with Java 7. When I start the
(see
https://issues.apache.org/jira/browse/SOLR-788)
Looking at the issue, it seems this has been (largely?) resolved since
Solr 4.1 and 5.0. Can I update the text to reflect that?
Thanks for your time.
Best wishes,
Andy Jackson
--
Dr Andrew N Jackson
Web Archiving Technical Lead
T
Hey Hoss,
I would be interested in being a moderator.
Thanks,
Andrew
On Sun, Oct 20, 2013 at 7:09 AM, Jeevanandam M. wrote:
> Hello Hoss -
>
> My pleasure, kindly accept my moderator nomination.
>
> Regards,
> Jeeva
>
> -- Original Message --
h the source code and I see that we can specify an
HttpClient when we create a new instance of an HttpSolrServer. I can set
the header there, but that seems slightly hacky to me. I'd prefer to use a
servlet filter if possible.
Do you have any other suggestions?
Thanks!
-- Andrew Doyle
The Co-location section of this document
http://searchhub.org/2013/06/13/solr-cloud-document-routing/ might be of
interest to you. It mentions the need for using Solr Cloud routing to group
documents in the same core so that grouping can work properly.
--Andrew Shumway
-Original
Hi, we have 8 solr servers, split 4x4 across 2 data centers.
We have a collection of around ½ billion documents, split over 100 shards, each
is replicated 4 times on separate nodes (evenly distributed across both data
centers).
The problem we have is that when we use cursormark (and also when w
Hi Shawn,
Thank you for your reply
>The part about memory usage is not clear. That 4GB and 16GB could refer to
>the operating system view of memory, or the view of memory within the JVM.
>I'm curious about how much total RAM each machine has, how large the Java
>heap is, and what the total
> Extrapolating what Jack was saying on his reply ... with 100 shards and
> 4 replicas, you have 400 cores that are each about 2.8GB. That results in a
> total index size of just over a terabyte, with 140GB of index data on each of
> the eight servers.
> Assuming you have only one Solr instanc
We decided to downgrade to 20 shards again, as we kept having the query time
spikes. If it were a memory issue, I would assume we would have the same
performance issues with 20 shards, so I think this is maybe a problem in Solr
rather than in our configuration / amount of RAM.
In any case, we have
&shard.info=true
> On 17 Jan 2015, at 04:23, Naresh Yadav wrote:
>
> Hi all,
>
> We have a single solr index with 3 fixed fields (one of them tokenized on
> whitespace) and the rest dynamic fields (string fields, in the range of 10-20).
>
> Current size of index is 2 GB with around 12
I would switch the order of those. Add the new fields and *then* index to
solr.
We do something similar when we create SolrInputDocuments that are pushed
to solr. Create the SID from the existing doc, add any additional fields,
then add to solr.
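A sketch of that flow in SolrJ (getById needs a reasonably recent SolrJ; on
older versions fetch the doc with a normal query instead):

    import org.apache.solr.common.SolrDocument;
    import org.apache.solr.common.SolrInputDocument;

    SolrDocument existing = server.getById("doc-1");
    SolrInputDocument sid = new SolrInputDocument();
    for (String name : existing.getFieldNames()) {
        sid.addField(name, existing.getFieldValue(name));
    }
    sid.removeField("_version_");            // avoid optimistic-locking clashes
    sid.addField("new_field", "new value");  // the additional data
    server.add(sid);
    server.commit();

Note this only round-trips stored fields; anything indexed but not stored is
lost when the document is re-added.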
On Wed, Jan 28, 2015 at 11:56 AM, Mark wrote:
> I
or if the document is a binary, are you suggesting
>
> 1) curl to upload/extract passing docID
> 2) obtain a SID based off docID
> 3) add additional fields to SID & commit
>
> I know I'm possibly wandering into schemaless territory here as well
>
>
> On 28 January 2015
Using Solr 4.6.0 on linux with Java 6 (Oracle JRockit 1.6.0_75
R28.3.2-14-160877-1.6.0_75)
We are seeing these issues when doing a restart on a SolrCloud
configuration. After restarting each server in sequence, none of them
will come up. The servers start up after a long time but the cloud
status
Hi,
My wiki username is AndyMacKinlay. Can I please be added to the
ContributorsGroup?
Thanks,
Andy
--
Dr Andrew N Jackson
Web Archiving Technical Lead
The British Library
Tel: 01937 546602
Mobile: 07765 897948
Web: www.webarchive.org.uk <http://www.webarchive.org.uk/>
Twitter: @UKWebArchive
The default facet.limit is 10, but it's set to 50 for most of the
facets. I've included the query parameters below. In case it makes any
difference, there are quite a lot of facet fields with large numbers of
terms, and the queries are being generated by the Sarnia Drupal module.
Thanks,
Andy
---
    (System.currentTimeMillis() - start) + ", query=" +
        QueryParsing.toString(rb.getQuery(), rb.req.getSchema()) +
        ", indexIds=" + getIndexIds(rb));
-- Jack Krupansky
-Original Message-
From: Andrew Lundgren
Sent: Tuesday, March 19, 2013 11:52 AM
To: solr-user@lucene.ap
Is there any functionality for a blocked-fl?
Thank you!
--
Andrew
Hmm... Just found this JIRA: https://issues.apache.org/jira/browse/SOLR-3191
I think I have answered my question.
-Original Message-
From: Andrew Lundgren [mailto:lundg...@familysearch.org]
Sent: Thursday, April 18, 2013 1:21 PM
To: solr-user@lucene.apache.org
Subject: Making fields
I've been trying to get into how distributed field facets do their work but
I haven't been able to uncover how they deal with this issue.
Currently distrib pivot facets do a getTermCounts(first_field) to
populate a list at the level it's working on.
When putting together the data structure we se
All the solr methods look like they should throw those 2 exceptions.
Have you tried the DirectXmlRequest method?

    up.process(solrServer);

    public UpdateResponse process( SolrServer server ) throws
        SolrServerException, IOException
    {
        long startTime = System.currentTimeMillis();
        UpdateRes
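For reference, a usage sketch (the path and XML body are placeholders):

    DirectXmlRequest req = new DirectXmlRequest("/update",
        "<add><doc><field name=\"id\">1</field></doc></add>");
    UpdateResponse rsp = req.process(solrServer);  // throws both exceptions
    solrServer.commit();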
Hi,
A Solr search for "request" gives me hits on documents containing
"requests", "requesting", and "requester". How can I turn this feature off
so Solr will return only those documents containing "request"?
Thanks,
Andrew
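Stemming comes from the analysis chain of the field's type, so the usual fix
is a type with no stemming filter -- a sketch (the type name is made up, and a
reindex is required after the change):

    <fieldType name="text_nostem" class="solr.TextField" positionIncrementGap="100">
      <analyzer>
        <tokenizer class="solr.StandardTokenizerFactory"/>
        <filter class="solr.LowerCaseFilterFactory"/>
        <!-- no PorterStemFilterFactory / SnowballPorterFilterFactory here -->
      </analyzer>
    </fieldType>

A search for "request" against a field of this type then matches only
"request" (case-insensitively), not "requests" or "requesting".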
the amount
of memory that will be set aside for the cache?
How do you determine how much cache each fq will consume?
Thank you!
--
Andrew Lundgren
lundg...@familysearch.org
Hi everyone,
We have a large product catalogue (currently 9 million products, but soon to
inflate to around 25 million) with each product having a unicode title. We're
offering the facility to sort by title, but often within quite large result
sets, e.g. 1 million fiction books (we are correctly using filter queries).
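The collation pointer given elsewhere in this digest boils down to a schema
sketch like the following (Solr 4.x; ICUCollationField needs the
analysis-extras contrib, and the names are placeholders):

    <fieldType name="collated_en" class="solr.ICUCollationField"
               locale="en" strength="primary"/>
    <field name="title_sort" type="collated_en" indexed="true" stored="false"/>
    <copyField source="title" dest="title_sort"/>

Sorting on title_sort then follows locale rules rather than raw code-point
order.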
Thank you for your reply.
One clarification, is the maxdocs the max docs in the set, or the matched docs
from the set?
If there are 1000 docs and 19 of them match, is the maxdocs 1000, or 19?
--
Andrew
-Original Message-
From: Erick Erickson [mailto:erickerick...@gmail.com]
Sent
s is sufficiently small
(eg less than 1000). I'll feed this back to the powers that be.
Regards,
Andrew Ingram
why are these 4 products all being given the same score? Is
the document boosting not being considered correctly?
Additionally I'm sorting by "can_purchase+desc,+score+desc", where can_purchase
is a boolean field.
I would greatly appreciate any help with this.
Regards,
Andrew
> For Filter cache
>
> size in memory = size in solrconfig.xml * WHAT (the size of an id) ???
> (I
> don't use facet.enum method)
>
As I understand it, size is the number of queries that will be cached. My
short experience suggests that the memory consumed will be data dependent. If
you have a huge index, each cached entry will be correspondingly large.
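As a rough rule of thumb (standard back-of-envelope numbers, not measurements
from this setup): a filter cached as a bitset costs about maxDoc/8 bytes, so
for example

    maxDoc                  = 100,000,000 documents
    one cached fq bitset    = 100,000,000 / 8 bytes = 12.5 MB
    filterCache size of 512 = up to ~6.4 GB worst case

Sparse results may be stored as sorted id lists instead, which is why actual
usage is data dependent.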
s, then AND the resulting doc sets and then
once that is done score the query based on the resulting subset of documents?
--
Andrew Lundgren
lundg...@familysearch.org
Is it possible to configure Solr so that filter queries are treated as
fq={!cache=false} by default?
--
Andrew Lundgren
lundg...@familysearch.org
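For reference, the per-request form of what is being asked about (field and
value are placeholders):

    fq={!cache=false}category:books

Adding cost=100 to the local params additionally requests post-filtering for
query types that support it.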
where this should be handled?
We have several clients and would like to protect the server from this field
being queried on even if they make a mistake.
Thank you.
--
Andrew Lundgren
lundg...@familysearch.org
> We've done similar query rewriting in a custom SearchComponent that
> runs before QueryComponent.
>
> Otis
>
> Sematext :: http://sematext.com/ :: Solr - Lucene - Nutch
> Lucene ecosystem search :: http://search-lucene.com/
>
>
Hi Marc,
I'd probably have another field called "keywords" (or something) that I copy
all the values into using copyfields, then just facet (and therefore filter) on
that field instead.
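i.e. something along these lines (the source field names are invented):

    <field name="keywords" type="string" indexed="true" stored="false"
           multiValued="true"/>
    <copyField source="color" dest="keywords"/>
    <copyField source="brand" dest="keywords"/>

and then facet.field=keywords serves both the faceting and the filtering.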
If there were a way to do it the way you're asking (there might be, I don't
know), there's no guarantee that
With Solr 4.0 you could use relevance functions to give a query time boost if
you don't have the information at index time.
Alternatively you could do term facet based autocomplete which would mean you
could sort by count rather than any other input.
Andrew
We found that optimising too often killed our slave performance. An optimise
will cause you to merge and ship the whole index rather than just the relevant
portions when you replicate.
The change on our slaves in terms of IO and CPU as well as RAM was marked.
Andrew
doc IDs are received, Solr chooses the first doc and
discards subsequent ones
Does 'doc ID' in the second point refer to the unique key in the first
point, or does it refer to the internal Lucene document ID?
Cheers,
Andrew.
Mark Miller-3 wrote:
>
> The 'doc ID' in the second point refers to the unique key in the first
> point.
>
I thought so but thanks for clarifying. Maybe a wording change on the wiki
would be good?
Cheers,
Andrew.
'current', and bring it up in Solr as
a separate core? Will this be safe, as long as all index writing happens via
the 'current' core?
Or will it cause Solr to get confused and do horrible things to the index?
Thanks!
Andrew.
Mark Miller-3 wrote:
>
> On 7/4/10 12:49 PM, Andrew Clegg wrote:
>> I thought so but thanks for clarifying. Maybe a wording change on the
>> wiki
>
> Sounds like a good idea - go ahead and make the change if you'd like.
>
That page seems to be marked immutable.
bulletproof way.
Cheers,
Andrew.
from what I gather, I
need to create the core folder and edit the solr.xml first before loading
the core with action=CREATE. Is that correct?
Regards
Andrew
I will be looking into that
eventually.
Regards
Andrew
On 20 July 2010 10:32, Peter Karich wrote:
> Hi Andrew,
>
> I didn't correctly understand what you are trying to do with 'copying'?
> Just use one core as a template or use it to replicate data?
>
> You can relo
Hi Peter
We are using the packaged Ubuntu Server (10.04 LTS) versions of Tomcat6 and
Solr1.4 and running a single instance of Solr with multiple cores.
Regards
Andrew
On 20 July 2010 19:47, Peter Karich wrote:
> Hi Andrew,
>
> the whole tomcat shouldn't fail on restart if only
Is anyone using ZooKeeper-based Solr Cloud in production yet? Any war
stories? Any problematic missing features?
Thanks,
Andrew.
I'm attempting to make use of PatternReplaceCharFilterFactory, but am running
into issues on both 1.4.1 (I ported it) and on nightly (4.0-2010-07-27). It
seems that on a real query the charFilter isn't executed prior to the
tokenizer.
I modified the example configuration included in the distribution.
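For context, the general shape of such a configuration (the pattern and type
name are placeholders):

    <fieldType name="text_pr" class="solr.TextField">
      <analyzer>
        <charFilter class="solr.PatternReplaceCharFilterFactory"
                    pattern="(\d+)-(\d+)" replacement="$1$2"/>
        <tokenizer class="solr.WhitespaceTokenizerFactory"/>
      </analyzer>
    </fieldType>

A charFilter is declared ahead of the tokenizer and is supposed to rewrite the
raw character stream first, which is exactly the behavior being reported as
missing at query time.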
the wiki.
Thanks!
Andrew.
Okay, thanks Marc. I don't really have any complaints about performance
(yet!) but I'm still wondering how the mechanics work, e.g. when you have a
number of segments equal to mergeFactor, and each contains maxMergeDocs
documents.
The docs are a bit fuzzy on this...
suggests that IndexMergeTool can result in dupes, unless I'm
misinterpreting.
Thanks!
Andrew.
I'm quite new to SOLR and wondering if the following is possible: in
addition to normal full text search, my users want to have the option to
search only HTML heading innertext, i.e. content inside of <h1>, <h2>, or
<h3> tags.
Thank you,
Andy Cogan
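Solr itself doesn't know about HTML structure, so the usual approach is to
extract heading text at index time into a dedicated field. A sketch using the
jsoup HTML parser (an assumption; any parser works), with hypothetical field
names:

    import org.jsoup.Jsoup;
    import org.jsoup.nodes.Document;

    Document html = Jsoup.parse(rawHtml);
    String headingText = html.select("h1, h2, h3").text();
    doc.addField("headings", headingText);  // heading-only searches hit this field
    doc.addField("content", html.text());   // full-text searches hit this one

Heading-restricted queries then become headings:(some terms).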
tter, as I have
no prior Java experience.
Thank you,
Andrew Cogan
I'm a total Lucene/SOLR newbie, and I'm surprised to see that when there are
multiple search terms, term proximity isn't part of the scoring process. Has
anyone on the list done custom scoring that weights proximity?
Andy Cogan
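Proximity can influence scoring with standard query syntax rather than custom
code -- e.g. a sloppy phrase query, or the dismax pf/ps parameters (terms and
field below are examples):

    q="heart attack"~10
    defType=dismax&q=heart attack&pf=text&ps=3

Within a sloppy phrase Lucene scores closer matches higher, so this often
covers the need without a custom Similarity.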
-Original Message-
From: kenf_nc [mailto:ken.fos...@realestat
arches for "foo" and we have source text that
looks like “foo, then the highlighting markup gets inserted between
the ampersand and the ldquo, i.e. “Foo. How can we configure
the highlighting formatter to not split HTML named entities?
Thanks,
Andrew
A filter that could accept a list of SOLR document IDs as articulated by Tom
Burton-West would enable some important features for our application. So if
anyone is wondering if this would be a useful feature, consider this a yes
vote.
-Original Message-
From: Jonathan Rochkind [mailto:roch
each field from the schema and creating my
own print function?
Thanks!
--
Andrew
at 3:24 PM, Andrew Lundgren
wrote:
> We use the toString call on the query in our logs. For some numeric
> types, the encoded form of the number is being printed instead of the
> readable form.
>
> This makes tail and some other tools very unhappy...
>
> Here is a partial e
I think you're saying), which has
nothing at all to do with Terms, it's just the query string passed in. So I'm
really puzzled as to what you're doing to get this kind of output, it almost
looks like you're trying to print out the _results_ of a query, not the query.
So some clarification would be helpful...
Best
Erick
On Mon, Mar 18, 2013 at 12:01 PM, Andrew Lundgren wrote:
> I am sorry, I don't follow what you mean by debug=query
worried about the performance impact of keeping them floating around.
Thanks,
Andrew Ingram
you're seeing are correct results. You are indexing 6 documents, as you
said before. You actually only want to index one document with multi-valued
fields.
Hope that's somehow helpful,
Andrew
On 10/04/2012, at 3:01, "Robert Petersen" wrote:
> You *could* do it by making one and
> them, with the exact field given a higher weight. This works great and
> performs well.
>
> It is what we did at Netflix and what I'm doing at Chegg.
>
> wunder
>
> On Apr 23, 2012, at 12:21 PM, Andrew Wagner wrote:
>
> > So I just realized the other day that stemmin
I'm sorry, I'm missing something. What's the difference between "storing"
and "indexing" a field?
On Tue, Apr 24, 2012 at 10:28 AM, Paul Libbrecht wrote:
>
> Le 24 avr. 2012 à 17:16, Otis Gospodnetic a écrit :
> > This would not necessarily increase the size of your index that much -
> you don't
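The short version, as schema attributes (the example fields are invented):
indexed controls whether a field is searchable, stored controls whether its
original value can be returned with results.

    <field name="title"     type="text"   indexed="true"  stored="true"/>  <!-- search + display -->
    <field name="body"      type="text"   indexed="true"  stored="false"/> <!-- searchable, not returned -->
    <field name="thumb_url" type="string" indexed="false" stored="true"/>  <!-- returned, not searchable -->

Stored-only values add to index size on disk but not to the searchable term
data.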
"/response/result/doc[position()=4]/int[text()=4]");
assertQ("position 4 should have score 99.7", req,
"/response/result/doc[position()=4]/float[text()=99.7]");
assertQ("id 1 should be in position 5", req,
"/response/result/doc[position()=5]/int[text()=1]");
assertQ("position 5 should have score 99.6", req,
"/response/result/doc[position()=5]/float[text()=99.6]");
}
Andrew Morrison | Software Engineer | Etsy
e I'm hoping for, is to find that SOLR has
some built-in support for this kind of thing.
- Andrew
Let me know.
Thanks.
Andrew Davidoff
Hi,
Apologies for making this my first email to the list but I am seeking a UK
Lucene contractor to join my team here in Manchester for a possible short
term contract around Q1 2011.
The mix of skills that I am most interested in is experience working
with Solr, Nutch, and Hadoop, and a real bonus
Hi
You could use Solr's php serialized object output (wt=phps) and then convert
it to json in your php (unserialize the response, then json_encode it).
Regards
Andrew McCombe
On 15 December 2010 17:49, Dennis Gearon wrote:
> I want to just pass the JSON through after qualifying the user's access to
> the
> site.
>
30-second sleep after the snapshot and before the
tar, but the error occurred again anyway.
There was a message from Lance N. with a similar error, years ago:
http://www.mail-archive.com/solr-user@lucene.apache.org/msg06104.html
but that would be pre-replication anyway, right?
This is on Ubuntu 1