Hi guys,
I've been working with Solr for several months now, and you really do provide quick answers
... and you're very nice to work with.
But I've got a huge issue that I couldn't fix despite lots of posts.
My indexing takes one to two days to complete, for 8 GB of data indexed and 1.5M
docs (ok, I've got plenty of links i
What would be the URL to ping to trigger replication,
like http://slave_host:port/solr/replication?command=enablepoll ?
Thanks
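For reference, the stock Solr 1.4 ReplicationHandler on the slave understands these poll-related commands (host, port and core path here are placeholders):
  http://slave_host:port/solr/replication?command=enablepoll
  http://slave_host:port/solr/replication?command=disablepoll
  http://slave_host:port/solr/replication?command=fetchindex    (force an immediate pull)
  http://slave_host:port/solr/replication?command=details       (current replication status)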
Hi guys,
I haven't bothered you for ages now ... hope everybody is fine ... I have an issue
with my replication.
I was wondering ... after a while, replication doesn't work anymore ...
We have a script which enables or disables replication every 2 hours, and this
morning it didn't pull anything,
and it's maybe bec
2009 at 9:28 PM, sunnyfr wrote:
>
>>
>> Hi,
>>
>> I would like to create a field without a tokenizer, but I get an error,
>>
>
> You can use KeywordTokenizer which does not do any tokenization.
>
> --
> Regards,
> Shalin Shekhar Mangar.
>
>
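A minimal sketch of such a field type in schema.xml, using the stock solr.KeywordTokenizerFactory (the type and field names below are just placeholders); the built-in "string" type (solr.StrField) is another option when no analysis at all is wanted:
  <fieldType name="text_exact" class="solr.TextField">
    <analyzer>
      <tokenizer class="solr.KeywordTokenizerFactory"/>
      <filter class="solr.LowerCaseFilterFactory"/>
    </analyzer>
  </fieldType>
  <field name="title_exact" type="text_exact" indexed="true" stored="true"/>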
Hi,
I would like to create a field without a tokenizer, but I get an error.
I tried:
But I get:
May 4 17:49:41 solr-test jsvc.exec[5786]: May 4, 2009 5:49:41 PM
org.apache.solr.common.SolrException log SEVERE:
org.apache.solr.common.SolrException: analyz
Hi,
I would like to know how /autoSuggest works.
I do get results when I hit:
/autoSuggest?terms=true&indent=true&terms.fl=title&terms.rows=5&terms.lower=simp&omitHeader=true
I get:
74
129
2
2
1
How can I ask it to suggest first the expressions which are most frequent in the
database?
How can
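Assuming /autoSuggest is backed by the TermsComponent, sorting by term frequency is its default (terms.sort=count), and prefix suggestions are usually done with terms.prefix rather than terms.lower; a sketch:
  /autoSuggest?terms=true&terms.fl=title&terms.prefix=simp&terms.limit=5&terms.sort=count&omitHeader=true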
Hi,
Just to know if there is a quick way to get the information without hitting
replication?command=details,
like =isReplicating.
Thanks,
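As far as I know there is no isReplicating command in the stock handler, but command=indexversion is much lighter than command=details; comparing the version the slave reports with the master's is one way to tell whether a pull is still pending (a sketch):
  http://slave_host:port/solr/replication?command=indexversion
  http://master_host:port/solr/replication?command=indexversion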
How can I get the weight of a field and use it in bf??
Thanks a lot
sunnyfr wrote:
>
> Hi Hoss,
> Thanks for this answer; is there a way to get the weight of a field
> like that and use it in the bf? queryWeight
>
>
> 0.14232224 = (MATCH) weight(text
Hi Hoss,
Thanks for this answer; is there a way to get the weight of a field
like that and use it in the bf? queryWeight
0.14232224 = (MATCH) weight(text:chien^0.2 in 9412049), product of:
  0.0813888 = queryWeight(text:chien^0.2), product of:
    0.2 = boost
    6.5946517 = idf
than 5
> 2. Then call commit, and verify that the size is more than 5
>
> If the original size was > 5, then you should have size > 5 after
> autowarming too.
>
> On Wed, Apr 22, 2009 at 2:57 PM, sunnyfr wrote:
>
>>
>> still the same ?
>>
>> S
It looks like it doesn't warm up, no?
sunnyfr wrote:
>
> still the same ?
>
> Seems done :
> lookups : 0
> hits : 0
> hitratio : 0.00
> inserts : 0
> evictions : 0
> size : 5
> warmupTime : 20973
> cumulative_lookups : 0
> cumulative_hits : 0
> cumul
Still the same?
Seems done:
lookups : 0
hits : 0
hitratio : 0.00
inserts : 0
evictions : 0
size : 5
warmupTime : 20973
cumulative_lookups : 0
cumulative_hits : 0
cumulative_hitratio : 0.00
cumulative_inserts : 0
cumulative_evictions : 0
Apr 22 11:09:29 search-01 jsvc.exec[31908]: Apr 22, 20
Yes, but let me check again ... the delta-import was idle and I think the
warmup was done, according to the log ... I will check it again now and let you know.
Shalin Shekhar Mangar wrote:
>
> On Wed, Apr 22, 2009 at 2:05 PM, sunnyfr wrote:
>
>>
>> thanks Shalin,
>&g
Thanks Shalin,
How come it's just 5 if my autowarmCount=500?
Hi,
Is it possible to have autowarmCount=500 with warmupTime=2751 and size=5?
Where can I check whether the cache is full or not, because it really looks
empty still??? And the commit is done.
solr1.4
Thanks for your help,
sunny
name:queryResultCache
class: org.apache.solr.search.FastL
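For reference, the queryResultCache is declared in solrconfig.xml roughly like this (the sizes below are placeholders). Note that autowarmCount is only an upper bound: autowarming copies entries from the previous searcher's cache, so a cache that never saw any lookups or inserts stays nearly empty no matter how high autowarmCount is.
  <queryResultCache class="solr.FastLRUCache"
                    size="16384"
                    initialSize="4096"
                    autowarmCount="500"/>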
Hi,
I don't get why and how to change this: underscores are parsed only as
spaces, meaning that a search for user "ejekt_festival" will return zero
results, while "ejekt festival" will return the user "ejekt_festival".
Thanks for your help,
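Assuming the field uses something like the stock "text" type, the splitting on underscores usually comes from WordDelimiterFilterFactory; one option (a sketch, not tested against this schema) is preserveOriginal, so the unsplit token "ejekt_festival" is indexed as well:
  <filter class="solr.WordDelimiterFilterFactory"
          generateWordParts="1" generateNumberParts="1"
          catenateWords="1" catenateNumbers="1"
          splitOnCaseChange="1" preserveOriginal="1"/>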
Hi Christophe,
Did you find a way to fix your problem? Because even with replication you will
have this problem: lots of updates mean clearing the cache and managing that.
I have the same issue; I'm just wondering whether I should turn off servers during
updates???
How did you fix that?
Thanks,
sunny
christophe-2
Hi,
I would like to know where you are with your script which takes the slave
out of the load balancer??
I have no choice but to do that during updates on the slave server.
Thanks,
Yu-Hui Jin wrote:
>
> Thanks, guys.
>
> Glad to know the scripts work very well in your experience. (well, indeed
>
Hi Oleg,
Did you find a way to get past this issue??
Thanks a lot,
oleg_gnatovskiy wrote:
>
> Can you expand on this? Mirroring delay on what?
>
>
>
> zayhen wrote:
>>
>> Use multiple boxes, with a mirroring delay from one to another, like a
>> pipeline.
>>
>> 2009/1/22 oleg_gnatovskiy
Hi Hossman,
I would love to know how you manage this?
Thanks,
Shalin Shekhar Mangar wrote:
>
> On Fri, Mar 6, 2009 at 8:47 AM, Steve Conover wrote:
>
>> That's exactly what I'm doing, but I'm explicitly replicating, and
>> committing. Even under these circumstances, what could
Do you have an idea?
sunnyfr wrote:
>
> Hi Noble,
>
> Yes, exactly that.
> I would like to know what people do during a replication?
> Do they turn off servers and put a high autowarmCount, which turns off the
> slave for a while? Like in my case, 10mn to bring back the
OK, but what do people do for frequent updates on a large database with lots of
queries on it?
Do they turn off the slave during the warmup??
Noble Paul നോബിള് नोब्ळ् wrote:
>
> On Thu, Apr 9, 2009 at 8:51 PM, sunnyfr wrote:
>>
>> Hi Otis,
>> How did you manage that? I
Hi Walter,
Did you find a way to sort out your issue? I would be very interested.
Thanks a lot,
Walter Underwood wrote:
>
> We've had some performance problems while Solr is indexing and also when
> it
> starts with a cold cache. I'm still digging through our own logs, but I'd
> like to get mo
Hi Otis,
How did you manage that? I have an 8-core machine with 8GB of RAM and an 11GB index
for 14M docs, with 5 updates every 30mn, but my replication kills everything.
My segments are merged too often, so the full index is replicated and the cache is lost, and
I have no idea what I can do now.
Some help would be br
Hi Otis,
OK about that, but still, when it merges segments it changes the file names, and I have
no choice but to replicate all the segments, which is bad for the replication and
CPU??
Thanks
Otis Gospodnetic wrote:
>
> Lower your mergeFactor and Lucene will merge segments(i.e. fewer index
> files) and purge
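The mergeFactor Otis mentions lives in the <indexDefaults>/<mainIndex> section of solrconfig.xml; a lower value means smaller, more frequent merges and fewer segment files on disk (the value below is only an example):
  <mainIndex>
    <mergeFactor>10</mergeFactor>
  </mainIndex>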
Do you have an idea?
sunnyfr wrote:
>
> Hi,
>
> I have title, description and tag fields ... Depending on where the
> searched word is found, I would like to boost other fields like nb_views
> or rating differently.
>
> If the word is found in title, then nb_views^10 and r
ueries Solr replication is
> not performing too badly. The queries are inherently slow and you wish
> to optimize the query performance itself.
> am I correct?
>
> On Tue, Apr 7, 2009 at 7:50 PM, sunnyfr wrote:
>>
>> Hi,
>>
>> So I did two test on two servers;
>>
0, 2009 at 10:31 PM, sunnyfr wrote:
>
>>
>> So except commit/optimize or replicate with a time poll less often, I
>> can't
>> change this ???
>> So replication when you have loads of data updated every 30mn is not
>> advised.
>> Or I must replica
Do you have an idea?
sunnyfr wrote:
>
> Hi,
>
> So I did two tests on two servers.
>
> First server: with just replication every 20mn, as you can see:
> http://www.nabble.com/file/p22930179/cpu_without_request.png
> cpu_without_request.png
> http://ww
Hi,
So I did two tests on two servers.
First server: with just replication every 20mn, as you can see:
http://www.nabble.com/file/p22930179/cpu_without_request.png
cpu_without_request.png
http://www.nabble.com/file/p22930179/cpu2_without_request.jpg
cpu2_without_request.jpg
Second server
ee on the graph, the first
part>>
http://www.nabble.com/file/p22925561/cpu_.jpg cpu_.jpg
On this graph, the first part of the graph (the blue part) is just
replication, no requests at all.
Normally I have 20 requests per second.
What would you reckon?
Noble Paul നോബിള് नोब्ळ् w
Hi,
Sorry, I can't find the issue: during my replication my query response time
gets very slow.
I'm using the replication handler; is there a way to slow down the transfer rate, or ???
11G index size
8G ram
20 requests/sec
Java HotSpot(TM) 64-Bit Server VM
10.0-b22
Java HotSpot(TM) 64-Bit Server VM
4
-Xms4G
-
Hi,
I would like to know whether it uses less memory to facet on, or put weight on, a field
at index time rather than when I make a dismax request.
Thanks,
Hi,
I have title, description and tag fields ... Depending on where the searched word
is found, I would like to boost other fields like nb_views or
rating differently.
If the word is found in title, then nb_views^10 and rating^10;
if the word is found in description, then nb_views^2 and rating^2.
Thanks a lot for y
This is my config:
http://www.nabble.com/file/p22847570/solrconfig.xml solrconfig.xml
And this is my delta-import cron entry:
*/20 * * * * /usr/bin/wget -q --output-document=/home/video_import.txt
"http://master.com:8180/solr/video/dataimport?command=delta-import&optimize=false"
>>> If it does should be some bug
>>>
>>> On Thu, Apr 2, 2009 at 6:00 PM, wrote:
>>> > I think it's the same problem, tune the JVM for multi-threading ... 20 requests
>>> > per second.
>>> > No??
>>> >
>>> >
>>> &
st not add the qtimes of ReplicationHandler
> --Noble
>
> On Thu, Apr 2, 2009 at 5:34 PM, sunnyfr wrote:
>>
>> Hi,
>>
>> Just applied replication by requestHandler.
>> And since this the Qtime went mad and can reach long time > name="QTime">9068
>
Hi,
I just applied replication via the requestHandler,
and since then the QTime has gone mad and can reach a long time: QTime 9068.
Without this replication, QTime is around 1 sec.
I have 14M docs stored, for 11G, so not a lot of data stored.
My servers have 8G and Tomcat uses 7G.
I'm updating every 30mn, which is ab
web:1
status_official:1^1.5+OR+status_creative:1^1+OR+language:en^0.5
title^0.2+description^0.2+tags^1+owner_login^0.5
Shalin Shekhar Mangar wrote:
>
> On Thu, Apr 2, 2009 at 2:13 PM, sunnyfr wrote:
>>
>> Hi Hoss,
>>
>>
Hi Hoss,
Do I need autowarming > 0 to have newSearcher and firstSearcher fired?
Thanks a lot,
hossman wrote:
>
>
> : Subject: autowarm static queries
>
> A minor followup about terminology:
>
> "auto-warming" describes what Solr does when it opens a new cache, and
> seeds it with key/val
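For what it's worth, the newSearcher/firstSearcher listeners are configured independently of the caches' autowarmCount, so they should fire even with autowarming set to 0; a sketch of a static warming query in solrconfig.xml (the query values are placeholders):
  <listener event="newSearcher" class="solr.QuerySenderListener">
    <arr name="queries">
      <lst> <str name="q">some static query</str> <str name="start">0</str> <str name="rows">10</str> </lst>
    </arr>
  </listener>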
ach query in each
> thread.
>
> Hope it helps.
>
>
>
> sunnyfr wrote:
>>
>> Hi,
>>
>> I'm trying as well to stress test solr. I would love some advice to
>> manage it properly.
>> I'm using solr 1.3 and tomcat55.
>> Thanks a lot,
Hi,
How can I be sure that my IndexReaders are in read-only mode?
Thanks a lot,
So except committing/optimizing or replicating with a longer poll interval, I can't
change this???
So replication when you have loads of data updated every 30mn is not
advised.
Or must I replicate once a day??? Or ..?
Shalin Shekhar Mangar wrote:
>
> On Mon, Mar 30, 2009 at 8:17 PM, sun
I have about 30,000 docs updated every 20mn.
I just store id and text, which is (title + description).
My index is about 11G.
Hi,
Can you explain to me a bit more about this replication script in solr 1.4?
It does work, but it always replicates everything from the master, so every
cache is lost each time it replicates.
I don't really get how it works?
Thanks a lot,
Hi,
I would like to know more about keepOptimizedOnly.
My problem is that the slave servers are a bit slow after a replication,
and I would like to automate an optimize after every commit. How can I
do that? Is it this keepOptimizedOnly option?
Thanks a lot,
Hi,
I would like to know if you leave your slave available for searching during a
replication.
Every time a replication runs ... polling is enabled and it starts to bring
back files, my slave has very poor performance and can take 5 sec to bring back
a result; as soon as it's done, everything is back prop
Sorry, but which one should I take??
Where exactly?
Noble Paul നോബിള് नोब्ळ् wrote:
>
> this fix is there in the trunk ,
> you may not need to apply the patch
>
> On Fri, Mar 27, 2009 at 6:02 AM, sunnyfr wrote:
>>
>> Hi,
>>
>> It doesn't seem
Hi,
It doesn't seem to work for me. I also changed this part below; is it OK??
> -List copiedfiles = new ArrayList();
> +Set filesToCopy = new HashSet();
http://www.nabble.com/file/p22734005/ReplicationHandler.java
ReplicationHandler.java
Thanks a lot,
Noble Paul നോബിള് नोब्ळ्
I just applied this patch:
http://www.nabble.com/Solr-Replication%3A-disk-space-consumed-on-slave-much-higher-than-on--master-td21579171.html#a21622876
It seems to work well now. Do I have to do something else?
Do you recommend anything for my configuration?
Thanks a lot
sunnyfr wrote:
>
> Hi,
>
> Since I turned this functionality on, it sometimes takes my servers a
> long time to respond to a select:
> sometimes QTime = 4 sec, other times 200 msec?
>
> Do you know why? and when I look at my servers graph, users part is very
&
Hi,
Since I turned this functionality on, it sometimes takes my servers a long
time to respond to a select:
sometimes QTime = 4 sec, other times 200 msec?
Do you know why? When I look at my server graphs, the user CPU part is heavily
used since I applied these two patches.
Thanks for your help.
I
Hi,
I don't understand how my index folder can go from 11G to 45G?
Is it a problem with my segments?
For information, I'm using solr 1.4 and I have 14M docs. The first full import
or optimize brings the size down to 11G.
I'm updating data (delta-import) every 30 mn, for about 50,000 docs updated
each time.
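One thing worth checking (a sketch, assuming the stock SolrDeletionPolicy in solrconfig.xml): how many commit points are kept, since files belonging to old commit points stay on disk until they are cleaned up.
  <deletionPolicy class="solr.SolrDeletionPolicy">
    <str name="maxCommitsToKeep">1</str>
    <str name="maxOptimizedCommitsToKeep">0</str>
  </deletionPolicy>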
search-01 jsvc.exec[22812]: Mar 24, 2009 11:02:44 PM
org.apache.solr.update.DirectUpdateHandler2 commit INFO: start
commit(optimize=true,waitFlush=false,waitSearcher=true)
thanks a lot for your help
sunnyfr wrote:
>
> As you can see, I did that and I have no information in my DIH, but
voke it yourself just like
> commits.
>
> Take a look at the following for examples:
> http://wiki.apache.org/solr/UpdateXmlMessages
>
> On Thu, Oct 2, 2008 at 2:03 PM, sunnyfr wrote:
>
>>
>>
>> Hi,
>>
>> Can somebody explain me a bit how wor
How can I stop this?
Noble Paul നോബിള് नोब्ळ् wrote:
>
> if the DIH status does not say that it optimized, it is lucene
> mergeing the segments
>
> On Mon, Mar 23, 2009 at 8:15 PM, sunnyfr wrote:
>>
>> I checked this out but it doesn't say anything ab
's code again. It won't optimize if
> optimize=false is specified.
>
> On Mon, Mar 23, 2009 at 12:43 AM, sunnyfr wrote:
>
>>
>> Do you have any idea ???
>> :(
>>
>> cheer,
>>
>>
>> sunnyfr wrote:
>> >
>> > Hi ever
se is specified.
>
> On Mon, Mar 23, 2009 at 12:43 AM, sunnyfr wrote:
>
>>
>> Do you have any idea ???
>> :(
>>
>> cheer,
>>
>>
>> sunnyfr wrote:
>> >
>> > Hi everybody ... still me :)
>> > hoo happy day :)
>
>> I checked DataImportHandler's code again. It won't optimize if
>> optimize=false is specified.
>>
>> On Mon, Mar 23, 2009 at 12:43 AM, sunnyfr wrote:
>>
>>>
>>> Do you have any idea ???
>>> :(
>>>
>>> cheer,
>&g
Do you have any idea???
:(
Cheers,
sunnyfr wrote:
>
> Hi everybody ... still me :)
> hoo happy day :)
>
> Just, I don't get where I'm missing something; I will try to be clear.
>
> this is my index folder (and we can notice the evolution according to the
> delta imp
he last import. It can tell you whether a commit/optimize
> was performed
>
> On Fri, Mar 20, 2009 at 7:07 PM, sunnyfr wrote:
>>
>> Thanks I gave more information there :
>> http://www.nabble.com/Problem-for-replication-%3A-segment-optimized-automaticly-td22601442.ht
ays B.V. wrote:
>
>
>
> On Fri, 2009-03-20 at 03:41 -0700, sunnyfr wrote:
>
>> Hi
>>
>> I've an issue, I've some data which come up but I've applied a filtre on
>> it
>> and it shouldnt, when I check in my database mysql I've obviously
>
> 2009/3/20 Noble Paul നോബിള് नोब्ळ् :
>> you have set autoCommit every x minutes . it must have invoked commit
>> automatically
>>
>>
>> On Thu, Mar 19, 2009 at 4:17 PM, sunnyfr wrote:
>>>
>>> Hi,
>>>
>>> Even if I hit comma
Hi,
I have an issue: I have some data which comes up even though I've applied a filter on it,
and it shouldn't. When I check in my MySQL database I obviously have the
document which has been updated, so I would like to see how it is in Solr.
If I do /solr/video/select?q=id:8582006 I will just see the field which has
Hi everybody ... still me :)
hoo happy day :)
Just, I don't get where I'm missing something; I will try to be clear.
This is my index folder (and you can notice the evolution according to the
delta import every 30mn):
r...@search-01:/data/solr# ls video/data/index/
_2bel.fdt _2bel.fnm _2bel.nrm _2
.00,cumulative_inserts=0,cumulative_evictions=240}
^IdocumentCache{lookups=0,hits=0,hitratio=0.00,inserts=0,evictions=38,size=2,warmupTime=0,cumulative_lookups=0,cumulative_hits=0,cumulative_hitratio=0.00,cumulative_inserts=0,cumulative_evictions=4560}
sunnyfr wrote:
>
> Hi,
>
> Even
Hi,
Even if I hit command=delta-import&commit=false&optimize=false,
I still see commits in my logs and sometimes even optimize=true.
About the optimize, I wonder if it comes from commits that are too close together so one is not
done, but I still don't really know.
Any idea?
Thanks a lot,
Hi,
I have optimize=true in my log after a commit, but I didn't allow it in my
solrconfig???
/data/solr/video/bin/snapshooter
/data/solr/video/bin
-c
true
Do you have an idea where it comes from??
Thanks a lot,
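For context, the four values above look like the parameters of the stock RunExecutableListener snapshooter hook in solrconfig.xml; a reconstruction (the XML tags were stripped from the message, so this is only a guess at the original):
  <listener event="postCommit" class="solr.RunExecutableListener">
    <str name="exe">/data/solr/video/bin/snapshooter</str>
    <str name="dir">/data/solr/video/bin</str>
    <bool name="wait">true</bool>
    <arr name="args"> <str>-c</str> </arr>
  </listener>
Note that this listener only runs snapshooter after a commit; it does not by itself turn optimize on.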
Maybe I'm missing something in solrconfig.xml???
sunnyfr wrote:
>
> Hi
>
> I have a little problem with optimization, which is very useful but only
> once per day, otherwise replication takes ages to bring back the index
> hard links.
>
> So my cron is every 3
Hi,
I have a little problem with optimization, which is very useful but only once
per day, otherwise replication takes ages to bring back the index hard
links.
So my cron runs every 30mn:
/solr/user/dataimport?command=delta-import&optimize=false&commit=false
otherwise I have a cron for optimizing ever
Hi,
I want to commit without optimizing,
because I have this: start
commit(optimize=true,waitFlush=false,waitSearcher=true)
but I don't want to optimize, otherwise my replication will fetch the full
index folder every time.
Thanks a lot guys for your help,
ryantxu wrote:
>
> yes. optimize also
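For reference, commit and optimize are separate update messages (see http://wiki.apache.org/solr/UpdateXmlMessages, linked elsewhere in this thread); posting a plain commit never implies an optimize:
  <commit waitFlush="false" waitSearcher="true"/>
  <optimize/>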
Hi,
Noticing a relevant latency during search, I tried to turn off the cronjob and
test it manually.
It was obvious that during snappuller on a slave server, the query time
was a lot longer than the rest of the time.
Even snapinstaller didn't affect the query time.
Without any action, around 200
Hi Hoss,
How come, if bq doesn't influence what matches -- that's q -- and bq only
influences the scores of existing matches if they also match the bq,
that when I put
bq=(country:FR)^2 (status_official:1 status_new:1)^2.5
I get no results,
but if I put just bq=(country:FR)^2 or bq=(status_official:1 stat
Thanks Yonik,
Yonik Seeley-2 wrote:
>
> On Thu, Feb 26, 2009 at 11:25 AM, sunnyfr wrote:
>> How can I tell it to put a lot of more weight for the book which has
>> exactly
>> the same title.
>
> A sloppy phrase query should work.
> See the "pf" pa
Hi guys,
I'm looking for the parameter or the way to boost on the order of the words in
the query.
Let's imagine people look for the "rich & famous" book ... so in the search they
will just write rich & famous,
and let's imagine a book with a better rating and lots of views is called
famous & very rich; is there
Hi,
How come if I put q=+wow-kill in my query
wow-kill
dismax
I get books which contain wow and kill instead of books which have wow
in the title and not kill???
Thanks a lot,
I've actually added (status_official:1 OR status_creative:1)^2.5
sunnyfr wrote:
>
> Hi
>
> I don't get where I'm wrong.
> I would like to boost some types of my books.
>
> So If I do : &bq=status_official:0^1.5+status_creative:0^1.5
> I've one res
Hi,
I don't get where I'm wrong.
I would like to boost some types of my books.
So if I do &bq=status_official:0^1.5+status_creative:0^1.5
I get one result.
If I do &bq=status_official:1^1.5+status_creative:1^1.5,
nothing. I think the result should still come up even if it doesn't have
this status.
Hello everybody,
Little question:
status_official:true^1,5
How come this doesn't show any data, while if I remove status_official it
will show data?
I tried to add status_official:false^1 but nothing comes up, and if I remove
this param I get some values.
I would like to boost some statuses.
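One thing that may be worth checking: boosts need a decimal point, so ^1,5 is not parsed as a boost of 1.5. Something like
  bq=status_official:true^1.5
may already behave differently.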
Yes, thanks a lot Koji,
Koji Sekiguchi-2 wrote:
>
> sunnyfr wrote:
>> Hi
>>
>> Sorry I dont remember what is the parameter which show up every
>> parameters
>> stores in my solrconfig.xml file for the dismax query ? thanks a
>> lot,
>>
>
Hi,
Sorry, I don't remember: what is the parameter which shows every parameter
stored in my solrconfig.xml file for the dismax query? Thanks a lot,
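The parameter being remembered here is most likely echoParams: echoParams=all echoes back every parameter applied to the request, including the defaults set in solrconfig.xml, e.g.
  /select?qt=dismax&q=test&rows=0&echoParams=all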
up, but I have no warmup, maybe just 30, so almost nothing.
Noble Paul നോബിള് नोब्ळ् wrote:
>
> I was referring to the DIH debug page.
>
> But apparently, in some cases it seems to be working for you. can you
> elaborate , when does it work and when it doesn't?
>
> On
It looks like books which have no link with the entity are not taken into
consideration???
Part of my data-config.xml:
Is it normal?
sunnyfr wrote:
>
> Hi,
> Thanks Paul
>
> I did that :
> book/dataimport?command=full-import&clean=false&start=9327553&r
ght in the MySQL database I have the row, which comes up properly.
What else can I do ... check??
Thanks a lot,
Noble Paul നോബിള് नोब्ळ् wrote:
>
> the start and rows is supposed to work . If you put it into debug you
> may see what is happening
>
> On Thu, Feb 19, 2009 at 3:47 PM
Hi,
I looked for a book that I couldn't find in Solr's database.
How can I update just this one by a command in the URL ... I tried:
dataimport?command=full-import&clean=false&start=11289500&rows=100
but it doesn't seem to work??
Is there another way??? Maybe the book can't be updated, but
Hi,
I don't get it: I added a bq boost.
The point is I have some books which are normal, some which are type_roman or
type_comedy, and other types,
but I would like to boost both of these types for every book indexed.
So if I do:
&bq=type_roman:true^1,5+type_comedy:true^1,5
no video comes up
but if I do
Obviously it should be qb and not bf; it looks better.
Is everything in the wiki? I read it but I'm still a bit
confused about it.
sunnyfr wrote:
>
> Hi,
>
> I don't get really, I try to boost a field according to another one but
> I've a huge
Hi,
I don't really get it: I try to boost a field according to another one, but I get
a huge weight when I'm using a qf boost like:
/select?qt=dismax&fl=*&q="obama
meeting"&debugQuery=true&qf=title&bf=product(title,stat_views)
I get:
5803681.0 = (MATCH) sum of:
4.9400806 = weight(title:"obam
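One note (a sketch, not a review of the actual schema): function queries such as product() expect numeric fields, so product(title,stat_views) over a text field is probably not doing what you expect, which would explain the huge scores. The usual additive form is something like
  bf=log(stat_views)
and a multiplicative boost can be written with the boost query parser, e.g. q={!boost b=log(stat_views)}obama meeting (field names here are taken from the message; whether the boost parser is available depends on the Solr version).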
them to multiply
> in scoring instead of adding.
>
> See http://wiki.apache.org/solr/FunctionQuery
>
>
>
> On Feb 12, 2009, at 10:17 AM, sunnyfr wrote:
>
>>
>> Hi Grant,
>>
>> Thanks for your quick answer.
>>
>> So there is not a real qu
ki.apache.org/solr/FunctionQuery
>
>
>
> On Feb 12, 2009, at 10:17 AM, sunnyfr wrote:
>
>>
>> Hi Grant,
>>
>> Thanks for your quick answer.
>>
>> So there is not a real quick way to increase one field in particular
>> according to another one if
> So the diskspace is consumed for unused index files also. You may need
> to delete unused snapshots from time to time
> --Noble
>
> On Tue, Feb 17, 2009 at 5:24 AM, sunnyfr wrote:
>>
>> Hi Noble,
>>
>> I maybe don't get something
>> Ok if it's h
a/spellchecker1
1.1M  /data/solr/book/data/snapshot.20090216202502
30G   /data/solr/book/data
thanks a lot,
Noble Paul നോബിള് नोब्ळ् wrote:
>
> they are just hardlinks. they do not consume space on disk
>
> On Mon, Feb 16, 2009 at 10:34 PM, sunnyfr wrote:
>>
>> Hi,
>
ace on disk
>
> On Mon, Feb 16, 2009 at 10:34 PM, sunnyfr wrote:
>>
>> Hi,
>>
>> OK, but can I use it more often than every day, like every three hours,
>> because the snapshots are quite big.
>>
>> Thanks a lot,
>>
>>
>> Bill Au wrote:
&
that you
> can use the snapcleaner on the master and/or slave.
>
> Bill
>
> On Fri, Feb 13, 2009 at 10:15 AM, sunnyfr wrote:
>
>>
>> root 26834 16.2 0.0 19412 824 ?S16:05 0:08 rsync
>> -Wa
>> --delete rsync://##.##.##.##:18180/sol
4.0K  book/data/snapshot.20090216154819
4.0K  book/data/snapshot.20090216154820
15M   book/data/snapshot.20090216153759
12G   book/data/
sunnyfr wrote:
>
> Hi,
>
> Is it normal or did I miss something ??
> 5.8G book/data/snapshot.20090216153346
> 12K book/data/sp
Hi,
Is it normal or did I miss something ??
5.8G  book/data/snapshot.20090216153346
12K   book/data/spellchecker2
4.0K  book/data/index
12K   book/data/spellcheckerFile
12K   book/data/spellchecker1
5.8G  book/data/
Last update ?
92562
45492
0
2009-02-16 15:20:01
2009-02-16 15:20:0
, it should not be a problem
> --Noble
>
> On Mon, Feb 16, 2009 at 3:28 PM, sunnyfr wrote:
>>
>> Hi Hoss,
>>
>> Is it a problem if the snappuller miss one snapshot before the last one
>> ??
>>
>> Cheer,
>> Have a nice day,
>>
>>
>
Hi,
I would like to know if a snapshot is automatically created even if there is
no document updated or added?
Thanks a lot,