best guess on how that
should be done. Does that sound right?
enjoy,
-jeremy
--
====
Jeremy Hinegardner jer...@hinegardner.org
--
> Jan Høydahl, search solution architect
> Cominvent AS - www.cominvent.com
>
> On 28. okt. 2010, at 03.04, Jeremy Hinegardner wrote:
>
> > Hi all,
> >
> > I see that as of r1022188 Solr Cloud has been committed to trunk.
> >
> > I was wondering about the
wondering if we should use trunk as it is now that SOLR-1873 is applied, or
should we take SOLR-1873 and apply it to Solr 1.4.1.
Has anyone used 1.4.1 + SOLR-1873? In production?
Thanks,
-jeremy
--
Jeremy Hinegardner
your environment, and those would be the
instanceDir's and such on disk. Then you have logical core aliases of 'staging'
and 'production' etc. Sort of like symlinks on the file system.
I have not done a deployment like this yet, just thought about it a few times.
And I
> 2009/10/28 Noble Paul:
> > hi,
> > Looks like a bug. open an issue.
> >
> >
> > On Wed, Oct 28, 2009 at 4:04 AM, Jeremy Hinegardner
> > wrote:
> >> Hi all,
> >>
> >> I was try
ld do the trick. That is
evidently not the way it works.
What is the way that shareSchema=true is supposed to work?
enjoy,
-jeremy
--
====
Jeremy Hinegardner jer...@hinegardner.org
ate the new cores.
curl 'http://production.example.com:8080/solr/admin/cores?rest-of-request'
enjoy,
-jeremy
--
========
Jeremy Hinegardner jer...@hinegardner.org
'default' Core which does the dispatch and aggregation of
requests between the 4*N total data cores. We then use HAproxy to load balance
the requests between the dispatch Cores.
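For illustration, a minimal haproxy.cfg fragment for that dispatch tier might look something like this (hostnames, ports, and the health-check path are made up for the sketch):

```
listen solr_dispatch
    bind *:8080
    mode http
    balance roundrobin
    option httpchk GET /solr/admin/ping
    server dispatch1 10.0.0.11:8080 check
    server dispatch2 10.0.0.12:8080 check
```

Each `server` line is one of the dispatch Cores; HAProxy round-robins requests across whichever ones pass the ping check.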
enjoy,
-jeremy
--
========
Jeremy Hinegardner [EMAIL PROTECTED]
> I am not able to figure out the facet results. It does not contain any
> result for Universe. It also removes characters from mathematics and shape.
>
> Please help me understanding the issue and let me know if any change in
> schema / solrConfig can solve the issue.
-0700, Erik Hatcher wrote:
> Jeremy,
>
> Great troubleshooting! You were spot on.
>
> I've posted a new patch that fixes the issue.
>
> Erik
>
>
> On Oct 16, 2008, at 9:53 PM, Jeremy Hinegardner wrote:
>
>> After a bit more investigating, it app
either facet.tree=cat,inStock or
facet.tree=inStock,cat. Whereas before it would only work in the former.
enjoy,
-jeremy
On Thu, Oct 16, 2008 at 10:55:49AM -0600, Jeremy Hinegardner wrote:
> Erik,
>
> After some more experiments, I can get it to perform incorrectly using the
> sample s
st you're making to Solr?
>
> Do you get values when you facet normally on date_id and type?
> &facet.field=date_id&facet.field=type
>
> Erik
>
> p.s. this e-mail is not on the list (on a hotel net connection blocking
> outgoing mail) - feel free to reply
,
-jeremy
--
====
Jeremy Hinegardner [EMAIL PROTECTED]
standard search handler:

  <lst name="defaults">
    <str name="echoParams">explicit</str>
    <str name="fl">id,name,score</str>
  </lst>

This sets the default 'fl' query parameter to id,name,score, which makes the
default fields returned be the id, the name, and the score.
enjoy,
-jeremy
--
====
1) rm -rf ${data_dir}/${index} &&
2) mv -f ${data_dir}/${index}.tmp$$ ${data_dir}/${index}
Any other thoughts / conjectures ?
enjoy,
-jeremy
--
====
Jeremy Hinegardner [EMAIL PROTECTED]
ed
Does this indicate there is a race condition between the info beans and the
closing of searchers?
enjoy,
-jeremy
--
========
Jeremy Hinegardner [EMAIL PROTECTED]
> java version "1.6.0_07"
> Java(TM) SE Runtime Environment (build 1.6.0_07-b06)
> Java HotSpot(TM) 64-Bit Server VM (build 10.0-b23, mixed mode)
>
I use the JPackage rpm specs and build the Java version I need. This has
worked out pretty well for us.
enjoy,
-jeremy
--
==
x and such), along
with the init.d script to the community if anyone wants it as part of the
example application.
Or if people are interested in the rpm spec I can make that available as well.
enjoy,
-jeremy
--
========
Jeremy Hinegardner [EMAIL PROTECTED]
acting as a natural throttling mechanism.
The route we've taken is to use <autoCommit> with 25000 docs or 15 minutes in
the solrconfig.xml, and continually add new data. Then once a night we stop
adding new data and run an optimize. For us this takes about 90 minutes across all
our cores.
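For reference, a sketch of what that solrconfig.xml setting might look like (values taken from the numbers above; maxTime is in milliseconds, so 15 minutes is 900000):

```xml
<updateHandler class="solr.DirectUpdateHandler2">
  <autoCommit>
    <maxDocs>25000</maxDocs>
    <maxTime>900000</maxTime>
  </autoCommit>
</updateHandler>
```

Whichever threshold is hit first triggers the commit.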
So far this
ce?
enjoy,
-jeremy
--
====
Jeremy Hinegardner [EMAIL PROTECTED]
m disk instead of transmitting
the file over http.
You have to set enableRemoteStreaming="true" in the solrconfig.xml and then your
curl request would I think be:
curl -d stream.file=/tmp/post.xml http://localhost:8983/solr/update
A similar approach works pretty well for me.
enjoy,
--Noble
>
> On Tue, Jul 29, 2008 at 12:33 PM, Jeremy Hinegardner
> <[EMAIL PROTECTED]> wrote:
> > Hi all,
> >
> > I'm using the DataImportHandler, and it's working great. What I would
> > like to do is configure the <dataSource> to use a pooled connection from
the servlet container.
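A sketch of how that could look in the DIH config, assuming the container exposes the pool under a JNDI name (the JNDI name here is made up) and a DIH version whose JdbcDataSource supports the jndiName attribute:

```xml
<dataSource type="JdbcDataSource" jndiName="java:comp/env/jdbc/solr" />
```

With jndiName set, DIH looks the DataSource up in the container instead of opening its own JDBC connections from url/user/password.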
enjoy,
-jeremy
--
========
Jeremy Hinegardner [EMAIL PROTECTED]
'id' column in the database to know
if new values have been added.
Or if there is another method that might have the same result, that
would be good to know too.
enjoy,
-jeremy
--
============
Jeremy Hinegardner [EMAIL PROTECTED]
r.
Those names will change every time an 'optimize' command is run.
Someone with more knowledge than I can probably go further, but this is what I
think you could do with what I know about Solr at this point.
enjoy,
-jeremy
--
Jeremy Hinegardner [EMAIL PROTECTED]
easiest and most maintainable with multicore?
Any other thoughts people have on the subject? What considerations
would you use to decide between multicore vs. multiple instances?
enjoy,
-jeremy
--
========
Jer
the index with lower JVM settings
> > each round and when response times get too slow or you hit OOE then you get
> > a rough estimate of the bare minimum X RAM needed for Y entries.
> >
> > I think we will do with something like 2G per 50M docs but I will need to
> > te
response times get too slow or you hit OOE then you get
> a rough estimate of the bare minimum X RAM needed for Y entries.
>
> I think we will do with something like 2G per 50M docs but I will need to
> test it out.
>
> If you get an answer in this matter please let me know.
>
&
ms that we can contribute back to Solr we will. If nothing
else there will be a nice article of how we manage TB of data with Solr.
enjoy,
-jeremy
--
====
Jeremy Hinegardner [EMAIL PROTECTED]
imagine the full result list is a
> big array, and you are asking for a slice of that array starting at
> "1", skipping the 0th result.
D'oh! Of course, results are 0-indexed.
enjoy,
-jeremy
--
Jeremy Hinegardner [EMAIL PROTECTED]
result['response']['start'] => 1
result['response']['docs'].size => 179
Is this normal? I'm surprised I didn't notice this earlier. I know in some
search engines the 'number of results found' is actually an estimate and not
necessarily a verified count. I'm just wondering if this is the case here too.
enjoy,
-jeremy
--
Jeremy Hinegardner [EMAIL PROTECTED]
.
enjoy,
-jeremy
--
====
Jeremy Hinegardner [EMAIL PROTECTED]
Hi all,
I was wondering if there is a reason the solrj source is not in the nightly
build. I've started playing around with the nightly build and unfortunately
since the solrj source code is not shipped, "ant compile" does not work out
of the box:
BUILD FAILED
/Users/jeremy/pkgs/solr/dist/ap