Dear Solr users/developers,
Hi,
I have tried to implement the Page and Post relation in a single Solr schema.
In my use case each page has multiple posts. The Page and Post fields are as
follows:
Post:{post_content, owner_page_id, document_type}
Page:{page_id, document_type}
Suppose I want to query
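For a page/post layout like this, Solr's block join query parsers are the usual tool. A sketch, assuming pages and their posts are indexed together as one block and document_type distinguishes them (the query values are hypothetical):

```
q={!parent which="document_type:page"}post_content:solr
q={!child of="document_type:page"}page_id:42
```

The first form returns pages whose posts match; the second returns the posts of matching pages.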
construct the original stream, which very possibly would take
> up as much space as if you'd just stored the values anyway. _And_ it
> would burden everyone else who didn't want to do this with a bloated
> index.
>
> Best,
> Erick
>
> On Sun, May 8, 2016 at 4:25 AM, A
Dear all,
Hi,
I was wondering, is it possible to re-index Solr 6.0 data in the case of
stored=false? I am using Solr as a secondary datastore, and for the sake of
space efficiency all fields (except id) are set to stored=false.
Currently, due to some changes in the application's business logic, the Solr schem
Dear Solr Users/Developers,
Hi,
I was wondering what the correct query syntax is for searching for a sequence
of terms with a blank character in the middle of the sequence. Suppose I am
looking for a query syntax using the fq parameter. For example, suppose I want
to search for all documents having "hello wor
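For matching a multi-word phrase in fq, quoting the phrase (or using the field query parser) is the standard syntax; a sketch with a hypothetical field name and value:

```
fq=title:"hello world"
fq={!field f=title}hello world
```

The {!field} parser treats the whole value as one analyzed phrase, which avoids escaping the space by hand.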
Dear all,
Hi,
I am wondering, is there any way to introduce and add a custom function for
the facet gap parameter? I already know there are some Date Math units that
can be used (such as DAY, MONTH, etc.). I want to add some functions and try
to use them as the gap in a range facet; is that possible?
Sincerely,
Ali.
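For reference, the built-in date math already composes into gaps; a range facet sketch over a hypothetical date field:

```
facet=true&facet.range=stat_date&facet.range.start=NOW/DAY-30DAYS&facet.range.end=NOW/DAY&facet.range.gap=%2B1DAY
```

(%2B is the URL-encoded +.) As far as I know the gap parameter is parsed as date math, not as a function query, so a gap computed by an arbitrary function would need a custom component.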
acet=true&json.facet={result: {
type: range,
field: stat_date,
start: 146027158386,
end: 1460271583864,
gap: 1
}}
Sincerely,
On Sun, Apr 10, 2016 at 4:56 PM, Yonik Seeley wrote:
> On Sun, Apr 10, 2016 at 3:47 AM, Ali Nazemian
> wrote:
> > Dear all Solr users/develo
Dear all Solr users/developers,
Hi,
I am going to use a Solr JSON facet range on a date field which is stored as
long millis. Unfortunately I get a Java heap space exception no matter how
much memory I assign to the Solr Java heap! I already tested that with a 2g
heap for a Solr core with 50k documents!! I
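The numbers in the quoted request above suggest why the heap blows up: with gap: 1 over an epoch-millis range, the facet has to materialize one bucket per millisecond. A quick check of the bucket count, using the start/end/gap values copied from that request:

```java
public class FacetBucketCount {
    public static void main(String[] args) {
        long start = 146027158386L;    // start value from the request
        long end = 1460271583864L;     // end value from the request
        long gap = 1L;                 // gap: 1 (one millisecond)
        long buckets = (end - start) / gap;
        System.out.println(buckets);   // over a trillion buckets
    }
}
```

No heap size survives that many buckets; a gap on the order of days (e.g. 86400000 millis) keeps the count sane.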
g bulk size?
> How many indexing threads?
>
> Thanks,
> Emir
>
>
> On 11.12.2015 10:06, Ali Nazemian wrote:
>
>> I really appreciate if somebody can help me to solve this problem.
>> Regards.
>>
>> On Tue, Dec 8, 2015 at 9:22 PM, Ali Nazemian
>>
I would really appreciate it if somebody could help me solve this problem.
Regards.
On Tue, Dec 8, 2015 at 9:22 PM, Ali Nazemian wrote:
> I did that already. The situation was worse. The autocommit part makes
> solr unavailable.
> On Dec 8, 2015 7:13 PM, "Emir Arnautovic"
> wrot
I did that already. The situation was worse. The autocommit part makes Solr
unavailable.
On Dec 8, 2015 7:13 PM, "Emir Arnautovic"
wrote:
> Hi Ali,
> Can you try without explicit commits and see if threads will still be
> blocked.
>
> Thanks,
> Emir
>
> On 08
analyzing part I think it would be acceptable).
- The concurrent Solr client is used in all of the indexing/updating cases.
Regards.
On Tue, Dec 8, 2015 at 6:36 PM, Ali Nazemian wrote:
> Dear Emir,
> Hi,
> There are some cases that I have soft commit in my application. However,
> the bulk upd
--
> Monitoring * Alerting * Anomaly Detection * Centralized Log Management
> Solr & Elasticsearch Support * http://sematext.com/
>
>
>
> On 08.12.2015 08:16, Ali Nazemian wrote:
>
>> Hi,
>> There is a while since I have had problem with Solr 5.2.1 and I could not
>
Hi,
It has been a while since I have had a problem with Solr 5.2.1 and I could not
fix it yet. The only thing that is clear to me is that when I send a bulk
update to Solr, the commit thread gets blocked! Here is the thread dump output:
"qtp595445781-8207" prio=10 tid=0x7f0bf68f5800 nid=0x5785 waiting f
Dear Midas,
Hi,
AFAIK, Solr currently uses virtual memory (memory-mapped files) for the index.
Therefore using 36GB out of 48GB of RAM for the Java heap is not recommended.
As a rule of thumb, do not allocate more than 25% of your total memory to the
Solr JVM in usual situations.
About your main question, setting softcommi
rs.
On Mon, Oct 12, 2015 at 12:29 PM, Ali Nazemian
wrote:
> Thank you very much.
>
> Sincerely yours.
>
> On Mon, Oct 12, 2015 at 6:15 AM, Susheel Kumar
> wrote:
>
>> Yes, Ali. These are targeted for Solr 6 but you have the option download
>> source from trunk,
el
>
> On Sun, Oct 11, 2015 at 10:01 AM, Ali Nazemian
> wrote:
>
> > Dear Susheel,
> > Hi,
> >
> > I did check the jira issue that you mentioned but it seems its target is
> > Solr 6! Am I correct? The patch failed for Solr 5.3 due to class not
> foun
SolrDocument or IndexableField is from? Seems we'd have to add an
> > attribute for that.
> >
> > The other possibly simpler thing to do is execute the join at index time
> > with an update processor.
> >
> > Ryan
> >
> > On Tuesday, October 6, 2015
ot;
wrote:
> thus, something like [child]
>
> https://cwiki.apache.org/confluence/display/solr/Transforming+Result+Documents
> can be developed.
>
> On Tue, Oct 6, 2015 at 6:45 PM, Ali Nazemian
> wrote:
>
> > Dear Mikhail,
> > Hi,
> > I want to enric
Dear Mikhail,
Hi,
I want to enrich the result.
Regards
On Oct 6, 2015 7:07 PM, "Mikhail Khludnev"
wrote:
> Hello,
>
> Why do you need sibling core fields? do you facet? or just want to enrich
> result page with them?
>
> On Tue, Oct 6, 2015 at 6:04 PM, Ali Na
I was wondering how I can satisfy this query requirement in Solr 5.2.1:
I have two different Solr cores, referred to as "core1" and "core2". core1 has
some fields such as field1, field2 and field3, and core2 has some other
fields such as field1, field4 and field5. I am looking for a Solr query which
can r
Hi,
I am going to implement a SearchComponent for Solr to return a document's main
keywords using the MoreLikeThis interesting terms. The main part of the
implemented component, which uses mlt.retrieveInterestingTerms by Lucene
docID, does not work for all of the documents. I mean for some of the
doc
gt; the filterCache to create a bitset for each unique term. Which is totally
> > incompatible with the uninverted field error you're reporting, so I
> > clearly don't
> > understand something about your setup. Are you _sure_?
> >
> > Best,
> > E
Dear Yonik,
Hi,
Thanks a lot for your response.
Best regards.
On Tue, Jul 21, 2015 at 5:42 PM, Yonik Seeley wrote:
> On Tue, Jul 21, 2015 at 3:09 AM, Ali Nazemian
> wrote:
> > Dear Erick,
> > I found another thing, I did check the number of unique terms for this
> > fi
Dears,
Hi,
I know that there are lots of tips on how to make Solr indexing faster.
Probably some of the most important client-side ones are batch indexing and
multi-threaded indexing. There are other important factors on the server
side which I don't want to
m
st regards.
On Tue, Jul 21, 2015 at 10:00 AM, Ali Nazemian
wrote:
> Dear Erick,
>
> Actually faceting on this field is not a user wanted application. I did
> that for the purpose of testing the customized normalizer and charfilter
> which I used. Therefore it just used for the pur
're reporting, so I
> clearly don't
> understand something about your setup. Are you _sure_?
>
> Best,
> Erick
>
> On Mon, Jul 20, 2015 at 9:32 PM, Ali Nazemian
> wrote:
> > Dear Toke and Davidphilip,
> > Hi,
> > The fieldtype text_fa has some custom
try facet.method=enum and it works fine. Did you mean that
applying a facet on an analyzed field is actually wrong?
Best regards.
On Mon, Jul 20, 2015 at 8:07 PM, Toke Eskildsen
wrote:
> Ali Nazemian wrote:
> > I have a collection of 1.6m documents in Solr 5.2.1.
> > [...
Dears,
Hi,
I have a collection of 1.6M documents in Solr 5.2.1. When I facet on the
content field, this error appears after around 30s of trying to
return the results:
null:org.apache.solr.common.SolrException: Exception during facet.field: content
at org.apache.solr.request.SimpleFa
Dear Lucene/Solr developers,
Hi,
I decided to develop a plugin for Solr in order to extract the main keywords
from an article. Since Solr has already done the hard work of calculating
tf-idf scores, I decided to use those for the sake of better performance. I
know that UpdateRequestProcessor is the best suit
bution.
> > It should take in input a date format and a field and give in response
> > the
> > new formatted Date.
> >
> > The would be simple to use it :
> >
> > fl=id,persian_date:dateFormat("/mm/dd",gregorian_Date)
> >
> > Th
:
> I'm not sure what you're asking for, give us an example input/output pair?
>
> Best,
> Erick
>
> On Tue, Jun 9, 2015 at 8:47 AM, Ali Nazemian
> wrote:
> > Dear all,
> > Hi,
> > I was wondering is there any function query for converting date fo
Dear all,
Hi,
I was wondering, is there any function query for converting the date format in
Solr? If not, how can I implement such a function query myself?
--
A.Nazemian
s the best solution doesn't involve "Y" at all?
> See Also: http://www.perlmonks.org/index.pl?node_id=542341
>
>
>
>
> : Date: Thu, 9 Apr 2015 01:02:16 +0430
> : From: Ali Nazemian
> : Reply-To: solr-user@lucene.apache.org
> : To: "solr-user@lucene.ap
sults, but the actual searching is
> done via the main query with the q parameter.
>
> -- Jack Krupansky
>
> On Tue, Apr 14, 2015 at 4:17 AM, Ali Nazemian
> wrote:
>
> > Dears,
> > Hi,
> > I have strange problem with Solr 4.10.x. My problem is when I do
> sea
Dears,
Hi,
I have a strange problem with Solr 4.10.x. My problem is that when I search
on the Solr zero date, which is "0002-11-30T00:00:00Z", if more than one
filter is applied, the results become invalid. For example, consider this
scenario:
When I search for a document with fq=p_date:"0002-11-30T00:0
Dear all,
Hi,
As a part of my code I have to update a Lucene document. For this purpose I
used the writer.updateDocument() method. My problem is that the update does
not affect the index until Solr is restarted. Would you please tell me what
part of my code is wrong? Or what should I add in order to apply the
IndexWriter.
On Tue, Apr 7, 2015 at 6:13 PM, Ali Nazemian wrote:
> Dear Upayavira,
> Hi,
> It is just the part of my code in which caused the problem. I know
> searchComponent is not for changing the index, but for the purpose of
> extracting document keywords I was forced to hack sear
for
> updating the index, so it really doesn’t surprise me that you aren’t
> seeing updates.
>
> I’d suggest you describe the problem you are trying to solve before
> proposing solutions.
>
> Upayavira
>
>
> On Tue, Apr 7, 2015, at 01:32 PM, Ali Nazemian wrote:
> > I im
I implemented a small piece of code for the purpose of extracting some
keywords out of a Lucene index. I implemented it using a search component. My
problem is that when I try to update the Lucene IndexWriter, the Solr index
which is placed on top of it is not affected. As you can see, I did the commit part.
Bool
Dear all,
Hi,
I am looking for a way to filter a Lucene index with multiple conditions.
For this purpose I checked two different methods of filtering search; neither
of them works for me:
Using BooleanQuery:
BooleanQuery query = new BooleanQuery();
String lower = "*";
String upper = "*";
on previously indexed but not post-processed
> documents.
>
> Regards,
>Alex.
>
> Solr Analyzers, Tokenizers, Filters, URPs and even a newsletter:
> http://www.solr-start.com/
>
>
> On 23 March 2015 at 15:07, Ali Nazemian wrote:
> > Dear All,
>
Dear All,
Hi,
I wrote a custom UpdateProcessorFactory for the purpose of extracting
interesting terms at index time and putting them in a new field. Since I use
MLT interesting terms for this purpose, I have to make sure whether the added
document already exists in the index or not. If it was indexed before the
Hi,
I was wondering, is it possible to restrict the tfq() function query to a
specific selection of the collection? Suppose I want to count all occurrences
of the term "test" in documents with fq=category:2; how can I handle such a
query with the tfq() function query? It seems applying fq=category:2 in a "select" query
w
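If the goal is a per-document term frequency restricted by a filter, Solr's documented termfreq function (assuming that is what tfq() refers to) combines naturally with fq, since fq restricts the result set while fl functions are evaluated per returned document:

```
q=*:*&fq=category:2&fl=id,tf:termfreq(text,'test')
```

Summing those per-document counts across the filtered set would still need client-side aggregation or a stats/facet request on top.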
Dear all,
Hi,
I was wondering, is there any performance comparison available for different
Solr queries?
I mean, what is the cost of different Solr queries from the memory and CPU
points of view? I am looking for a report that could help me in the case of
having different alternatives for sending a single que
Dear all,
Hi,
I was wondering how I can extract the "k" highest-ranking tf-idf terms for
each document at query time. If such information is not available in Solr
by default, how can I implement it? (Any example or similar scenario would
be appreciated.)
Best regards.
--
A.Nazemian
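Solr does not expose a ready-made "top-k tf-idf terms per document" response, so people usually recompute the ranking from term statistics (e.g. term vectors plus docFreq). The ranking itself is simple; a minimal in-memory sketch, not Solr API:

```java
import java.util.*;

public class TopTfIdfTerms {
    // Returns the k terms of docs.get(docId) with the highest tf-idf,
    // where idf = ln(N / df) over the tiny corpus "docs".
    static List<String> topK(List<List<String>> docs, int docId, int k) {
        Map<String, Integer> df = new HashMap<>();
        for (List<String> d : docs)
            for (String t : new HashSet<>(d)) df.merge(t, 1, Integer::sum);
        Map<String, Integer> tf = new HashMap<>();
        for (String t : docs.get(docId)) tf.merge(t, 1, Integer::sum);
        List<String> terms = new ArrayList<>(tf.keySet());
        terms.sort(Comparator.comparingDouble((String t) ->
                -tf.get(t) * Math.log((double) docs.size() / df.get(t))));
        return terms.subList(0, Math.min(k, terms.size()));
    }

    public static void main(String[] args) {
        List<List<String>> docs = Arrays.asList(
                Arrays.asList("solr", "index", "index", "facet"),
                Arrays.asList("solr", "query"));
        System.out.println(topK(docs, 0, 2)); // prints [index, facet]
    }
}
```

In real use the tf and df values would come from the index rather than from re-tokenizing documents.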
Dear Markus,
Would you please explain more about the maxqt parameter and the methodology
for choosing the best number of terms for this value?
Best regards.
On Wed, Feb 4, 2015 at 2:46 PM, Markus Jelsma
wrote:
> Well, maxqt is easy, it is just the number of terms that compose your
> query. MinTF is a s
h as
> > an answer (you can't use scores to compare separate sets of search
> > results).
> >
> > Upayavira
> >
> > On Tue, Feb 3, 2015, at 08:01 PM, Ali Nazemian wrote:
> > > Dear Markus,
> > > Hi,
> > > Thank you very much for yo
eparate sets of search
> results).
>
> Upayavira
>
> On Tue, Feb 3, 2015, at 08:01 PM, Ali Nazemian wrote:
> > Dear Markus,
> > Hi,
> > Thank you very much for your response. I did check the reason why it is
> > not
> > recommended to filter by score in se
Hi,
I am looking for best practices on the More Like This parameters. I would
really appreciate it if somebody could tell me the best values for these
parameters in an MLT query, or at least the proper methodology for finding the
best value for each of these parameters:
mlt.mintf
mlt.mindf
mlt.maxqt
Thank y
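For reference, these parameters prune the "interesting terms" that MLT builds its query from: mlt.mintf drops terms with a low frequency in the source document, mlt.mindf drops terms rare in the corpus, and mlt.maxqt caps how many terms end up in the generated query. A sketch against the MLT handler with illustrative, not tuned, values:

```
/mlt?q=id:1234&mlt.fl=content&mlt.mintf=2&mlt.mindf=5&mlt.maxqt=25&mlt.interestingTerms=details
```

mlt.interestingTerms=details echoes the chosen terms and their boosts, which is the practical way to tune these values against your own corpus.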
Dear Markus,
Hi,
Thank you very much for your response. I did check the reason why it is not
recommended to filter by score in a search query. But I think it is
reasonable to filter by score in the case of finding similar documents. I know
that in both of them (a simple search query and an MLT query) the VSM of tf-idf
Hi,
I was wondering how I can limit the results of a MoreLikeThis query by the
score value instead of filtering them by document count.
Thank you very much.
--
A.Nazemian
e/lucene/
> search/similarities/TFIDFSimilarity.html
>
> Koji
> --
> http://soleami.com/blog/comparing-document-classification-functions-of-
> lucene-and-mahout.html
>
>
> On 2015/02/03 5:39, Ali Nazemian wrote:
>
>> Dear Erik,
>> Thank you for your response. Would
is computed, use Solr’s debug=true mode to see the explain details in the
> response.
>
> Erik
>
> > On Feb 2, 2015, at 10:49 AM, Ali Nazemian wrote:
> >
> > Hi,
> > I was wondering what is the range of score is brought by more like this
> > query
Hi,
I was wondering what the range of the score returned by a More Like This
query in Solr is. I know that Lucene uses cosine similarity in the vector
space model for calculating the similarity between two documents. I also know
that cosine similarity is between -1 and 1, but the fact that I don't
understand
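One step often missed here: tf-idf weights are never negative, so the cosine between two such vectors cannot go below 0; the [-1, 1] range only applies when components may be negative (and Lucene's practical scoring is not a pure normalized cosine anyway, so scores are not capped at 1 either). A small illustration:

```java
public class CosineRange {
    static double cosine(double[] a, double[] b) {
        double dot = 0, na = 0, nb = 0;
        for (int i = 0; i < a.length; i++) {
            dot += a[i] * b[i];
            na += a[i] * a[i];
            nb += b[i] * b[i];
        }
        return dot / (Math.sqrt(na) * Math.sqrt(nb));
    }

    public static void main(String[] args) {
        // nonnegative tf-idf-style vectors: the cosine lands in [0, 1]
        System.out.println(cosine(new double[]{1, 0, 2}, new double[]{0, 3, 1}));
    }
}
```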
very much.
On Tue, Jan 13, 2015 at 4:21 PM, Jack Krupansky
wrote:
> A function query or an update processor to create a separate field are
> still your best options.
>
> -- Jack Krupansky
>
> On Tue, Jan 13, 2015 at 4:18 AM, Ali Nazemian
> wrote:
>
> > Dear Markus,
&
> jack.krupan...@gmail.com> wrote:
> > Could you clarify what you mean by "Lucene reverse index"? That's not a
> > term I am familiar with.
> >
> > -- Jack Krupansky
> >
> >
> > On Mon, Jan 12, 2015 at 1:01 AM, Ali Nazemian
> wrote:
re by a function of the term frequency of arbitrary terms,
> using the tf, mul, and add functions.
>
> See:
> https://cwiki.apache.org/confluence/display/solr/Function+Queries
>
> -- Jack Krupansky
>
> On Sun, Jan 11, 2015 at 10:55 AM, Ali Nazemian
> wrote:
>
> &
g stage. Have
> you tried UpdateRequestProcessors? They have access to the full
> document when it is sent and can do whatever they want with it.
>
> Regards,
>Alex.
>
> Sign up for my Solr resources newsletter at http://www.solr-start.com/
>
>
> On 11 January 2015 at 10:55, Ali
ene.apache.org/core/4_10_3/core/org/apache/lucene/search/similarities/TFIDFSimilarity.html
>
> And to use your custom similarity class in Solr:
>
> https://cwiki.apache.org/confluence/display/solr/Other+Schema+Elements#OtherSchemaElements-Similarity
>
>
> -- Jack Krupansky
&g
Hi everybody,
I am going to add some analysis to Solr at index time. Here is what I have
in mind:
Suppose I have two different fields in my Solr schema, field "a" and field
"b". I am going to use the created reverse (inverted) index in a way that some
terms are considered as important ones and
Hi,
I was wondering what the hardware requirements are for indexing 500 million
documents in Solr. Suppose the maximum number of concurrent users at peak
time would be 20.
Thank you very much.
--
A.Nazemian
' specifically on the business level?
>
> Regards,
> Alex
> On 22/10/2014 7:27 am, "Ali Nazemian" wrote:
>
> > The problem is when I partially update some fields of document. The
> > signature becomes useless! Even if the updated fields are not included i
r your usecase,
> just don't configure the signatureField to be the same as your uniqueKey
> field.
>
> configure some othe fieldname (ie "signature") instead.
>
>
> : Date: Tue, 14 Oct 2014 12:08:26 +0330
> : From: Ali Nazemian
> : Reply-To
Hi,
I was wondering how I can have both Solr deduplication and partial updates.
I found out that due to some reasons you cannot rely on Solr deduplication
when you try to update a document partially! It seems that when you do a
partial update on some field, even if that field is not considered as
dup
Dear all,
Hi,
I was wondering how I can mark some documents as duplicates (just marking
for future usage, not deleting) based on a hash combination of some fields.
Suppose I have 2 fields named "url" and "title"; I want to create a
hash based on url+title and store it in another field named "signature".
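This marking-only behavior matches Solr's documented deduplication setup when overwriteDupes is false: the signature goes into a separate field and duplicates are kept. A solrconfig.xml sketch (the chain name is arbitrary):

```xml
<updateRequestProcessorChain name="dedupe">
  <processor class="solr.processor.SignatureUpdateProcessorFactory">
    <bool name="enabled">true</bool>
    <str name="signatureField">signature</str>
    <bool name="overwriteDupes">false</bool>
    <str name="fields">url,title</str>
    <str name="signatureClass">solr.processor.Lookup3Signature</str>
  </processor>
  <processor class="solr.LogUpdateProcessorFactory" />
  <processor class="solr.RunUpdateProcessorFactory" />
</updateRequestProcessorChain>
```

Documents sharing a url+title hash then share a signature value, which can later be faceted or filtered on to find the duplicates.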
; lucene-solr-trunk
> > cd lucene-solr-trunk
> > ant eclipse
> >
> > ... And then, from your Eclipse "import existing java project", and
> select
> > the directory where you placed lucene-solr-trunk
> >
> > On Sun, Oct 12, 2014 at 7:09 AM, Ali Nazemian
Hi,
I am going to import the Solr source code into Eclipse for some development
purposes. Unfortunately every tutorial that I found for this purpose is
outdated and did not work. So would you please give me some hints about how
I can import the Solr source code into Eclipse?
Thank you very much.
--
A.Nazemian
outerthoughts.com/ and @arafalov
> Solr resources and newsletter: http://www.solr-start.com/ and @solrstart
> Solr popularizers community: https://www.linkedin.com/groups?gid=6713853
>
>
> On 6 October 2014 11:23, Ali Nazemian wrote:
> > Dear Alex,
> > Hi,
> > LOL,
y: https://www.linkedin.com/groups?gid=6713853
>
>
> On 6 October 2014 03:40, Ali Nazemian wrote:
> > Dear all,
> > Hi,
> > I am going to do partial update on a field that has not any value.
> Suppose
> > I have a document with document id (unique key) '1234
Dear all,
Hi,
I am going to do a partial update on a field that does not have any value.
Suppose I have a document with document id (unique key) '12345' and a field
"read_flag" which was not indexed in the first place. So the read_flag field
for this document has no value. After I did a partial update to t
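An atomic update with the set operation is the usual way to write a value into a field that was empty so far; a sketch using the id from the example above (core name and flag value are hypothetical):

```
curl -X POST -H 'Content-Type: application/json' \
  'http://localhost:8983/solr/collection1/update?commit=true' \
  -d '[{"id":"12345","read_flag":{"set":true}}]'
```

Note that atomic updates require all other fields to be stored (apart from copyField destinations), otherwise their unstored values are lost on update.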
Has anybody tested that?
Best regards.
On Mon, Sep 29, 2014 at 2:05 PM, Ali Nazemian wrote:
> I also check both solr log and solr console. There is no error inside
> that, it seems that every thing is fine! But actually there is not any
> child document after executing process.
>
>
Tue, Sep 30, 2014 at 7:07 PM, Ali Nazemian wrote:
> Dear Koji,
> Hi,
> Thank you very much.
> Do you know any example code for UpdateRequestProcessor? Anything would be
> appreciated.
> Best regards.
>
> On Tue, Sep 30, 2014 at 3:41 AM, Koji Sekiguchi
> wrote:
>
uery.
>
> Koji
> --
> http://soleami.com/blog/comparing-document-classification-functions-of-
> lucene-and-mahout.html
>
>
> (2014/09/29 4:25), Ali Nazemian wrote:
>
>> Dear all,
>> Hi,
>> I was wondering how can I implement solr boosting words fr
I also checked both the Solr log and the Solr console. There is no error
there; it seems that everything is fine! But actually there is no child
document after executing the process.
On Mon, Sep 29, 2014 at 1:47 PM, Ali Nazemian wrote:
> Dear all,
> Hi,
> Right now I face with th
Dear all,
Hi,
Right now I am facing a strange problem related to the SolrJ client:
When I use only incremental partial updates, they work fine. When I use only
adding child documents, it works perfectly and the child documents are added
successfully. But when I have both of them
Dear all,
Hi,
I was wondering how I can implement Solr boosting of words from a specific
list of important words. I mean, I want to have a list of important words and
tell Solr to score documents based on the weighted sum of these words. For
example, let the word "school" have a weight of 2 and the word "president"
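One stock way to approximate a weighted sum over a fixed word list is a boost query, with each listed word carrying its weight as a boost; a sketch using the example word above plus a hypothetical weight of 5 for "president" (the original message is cut off before giving it) and a hypothetical content field:

```
q=...&defType=edismax&bq=content:school^2 content:president^5
```

For large lists, index-time payloads or an update processor that precomputes a score field scale better than a long bq.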
Dear all,
Hi,
I was wondering how I can use SolrJ to send nested documents to Solr.
Unfortunately I did not find any tutorial for this purpose. I would really
appreciate it if you could guide me through it. Thank you very much.
Best regards.
--
A.Nazemian
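In SolrJ the nesting is expressed with SolrInputDocument.addChildDocument. A sketch in SolrJ 5.x style; the URL, core name, ids and field values are hypothetical, and it needs a running Solr plus the solrj dependency, so it is not self-contained:

```java
import org.apache.solr.client.solrj.impl.HttpSolrClient;
import org.apache.solr.common.SolrInputDocument;

public class NestedDocExample {
    public static void main(String[] args) throws Exception {
        HttpSolrClient client =
                new HttpSolrClient("http://localhost:8983/solr/collection1");

        SolrInputDocument parent = new SolrInputDocument();
        parent.addField("id", "page-1");
        parent.addField("document_type", "page");

        SolrInputDocument child = new SolrInputDocument();
        child.addField("id", "post-1");
        child.addField("document_type", "post");
        child.addField("post_content", "hello");

        parent.addChildDocument(child); // nests the post under the page
        client.add(parent);
        client.commit();
        client.close();
    }
}
```

Parents and children must be sent together like this for block join queries to work later.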
e HDFS world, you can scale
> > > pretty linearly
> > > with the number of nodes you can rack together.
> > >
> > > Frankly though, if your data set is small enough to fit on a single
> > machine
> > > _and_ you can get
> > > through your analysis in a reasonable
set is small enough to fit on a single machine
> _and_ you can get
> through your analysis in a reasonable time (reasonable here is up to you),
> then HDFS
> is probably not worth the hassle. But in the big data world where we're
> talking petabyte scale,
> having HDFS as
sletter: http://www.solr-start.com/ and @solrstart
> Solr popularizers community: https://www.linkedin.com/groups?gid=6713853
>
>
> On Wed, Aug 6, 2014 at 11:18 AM, Ali Nazemian
> wrote:
> > Dear Gora,
> > I think you misunderstood my problem. Actually I used nutch for c
these
comments? What is the document granularity?
Best regards.
On Wed, Aug 6, 2014 at 1:29 PM, Gora Mohanty wrote:
> On 6 August 2014 14:13, Ali Nazemian wrote:
> >
> > Dear all,
> > Hi,
> > I was wondering how can I mange to index comments in solr? suppose I am
&g
Dear all,
Hi,
I was wondering how I can manage to index comments in Solr. Suppose I am
going to index a web page that has news content plus some comments that
people have posted at the end of the page. How can I index these
comments in Solr, considering the fact that I am going to do some ana
purposes
such as analysis. So why do we go for HDFS in the case of analysis if we want
to use SolrJ for this purpose? What is the point?
Regards.
On Wed, Aug 6, 2014 at 8:59 AM, Ali Nazemian wrote:
> Dear Erick,
> Hi,
> Thank you for you reply. Yeah I am aware that SolrJ is my last option
iles by raw I/O
> operations, good luck! I'm 99.99% certain that's going to cause
> you endless grief.
>
> Best,
> Erick
>
>
> On Tue, Aug 5, 2014 at 9:55 AM, Ali Nazemian
> wrote:
>
> > Actually I am going to do some analysis on the solr data using map
Actually I am going to do some analysis on the Solr data using MapReduce.
For this purpose it might be necessary to change some parts of the data or
add new fields from outside Solr.
On Tue, Aug 5, 2014 at 5:51 PM, Shawn Heisey wrote:
> On 8/5/2014 7:04 AM, Ali Nazemian wrote:
> > I changed
Dear all,
Hi,
I changed Solr 4.9 to write its index and data on HDFS. Now I am going to
connect to that data from outside of Solr to change some of the
values. Could somebody please tell me how that is possible? Suppose I am
using HBase over HDFS to make these changes.
Best regards.
--
A.Nazem
Just highlighting the
> challenge of such a task.
>
> Just to be clear, you are referring to "sync mode" and not mere "ETL",
> which people do all the time with batch scripts, Java extraction and
> ingestion connectors, and cron jobs.
>
> Give it a shot a
have to define
something (probably with an Accumulo iterator) to import into Solr when
inserting new data.
Regards.
On Fri, Jul 25, 2014 at 12:59 PM, Ali Nazemian
wrote:
> Dear Jack,
> Actually I am going to do benefit-cost analysis for in-house developement
> or going for sqrrl support.
>
curiosity, why are you not using that integrated Lucene support of
> Sqrrl Enterprise?
>
>
> -- Jack Krupansky
>
> -Original Message- From: Ali Nazemian
> Sent: Thursday, July 24, 2014 3:07 PM
>
> To: solr-user@lucene.apache.org
> Subject: Re: integrating Accumulo
the right direction. And
> it has Hadoop and Spark integration as well.
>
> See:
> http://www.datastax.com/what-we-offer/products-services/
> datastax-enterprise
>
> -- Jack Krupansky
>
> -Original Message- From: Ali Nazemian
> Sent: Thursday, July 24, 2014 1
ccordingly in
> the QParserPlugin.
>
> This will give you true row level security in Solr and Accumulo, and it
> performs quite well in Solr.
>
> Let me know if you have any other questions.
>
> Joe
>
>
> On Thu, Jul 24, 2014 at 4:07 AM, Ali Nazemian
> wrote:
&g
there a reason you're thinking
> of using both databases in particular?
>
>
> On Wed, Jul 23, 2014 at 5:17 AM, Ali Nazemian
> wrote:
>
> > Dear All,
> > Hi,
> > I was wondering is there anybody out there that tried to integrate Solr
> > with Accumul
Dear All,
Hi,
I was wondering, is there anybody out there who has tried to integrate Solr
with Accumulo? I was thinking about using Accumulo on top of HDFS and using
Solr to index the data inside Accumulo. Do you have any idea how I can do
such an integration?
Best regards.
--
A.Nazemian
side this is what solr
> do
> : to documents with duplicated uniquekey.
> : Regards.
> :
> :
> : On Tue, Jul 8, 2014 at 12:29 PM, Himanshu Mehrotra <
> : himanshu.mehro...@snapdeal.com> wrote:
> :
> : > Please look at https://wiki.apache.org/solr/Atomic_
lds.
>
> Thanks,
> Himanshu
>
>
> On Tue, Jul 8, 2014 at 1:09 PM, Ali Nazemian
> wrote:
>
> > Dears,
> > Hi,
> > According to my requirement I need to change the default behavior of Solr
> > for overwriting the whole document on unique-key duplication. I am
Dears,
Hi,
According to my requirements I need to change the default behavior of Solr
of overwriting the whole document on unique-key duplication. I am going to
change it so that only part of the document (some fields) is overwritten and
the other parts of the document (other fields) remain unchanged. First of all I
I think this will not improve indexing performance, but it would probably be
a solution for using HDFS HA with a replication factor. But I am not sure
about that.
On Mon, Jul 7, 2014 at 12:53 PM, search engn dev
wrote:
> Currently i am exploring hadoop with solr, Somewhere it is written as
ds,
>Alex.
> Personal website: http://www.outerthoughts.com/
> Current project: http://www.solr-start.com/ - Accelerating your Solr
> proficiency
>
>
> On Mon, Jul 7, 2014 at 4:48 PM, Ali Nazemian
> wrote:
> > Dear Alexande,
> > What if I use ExternalFileFiled
g your Solr
> proficiency
>
>
> On Mon, Jul 7, 2014 at 4:32 PM, Ali Nazemian
> wrote:
> > Updating documents will add some extra time to indexing process. (I send
> > the documents via apache Nutch) I prefer to make indexing as fast as
> > possible.
>
nt project: http://www.solr-start.com/ - Accelerating your Solr
> proficiency
>
>
> On Mon, Jul 7, 2014 at 2:08 PM, Ali Nazemian
> wrote:
> > Dears,
> > Is there any way that I can do that in other way?
> > I mean if you look at my main problem again you will find out