I can ask about this. If folks there are okay with it, I can produce the
dump, but it is likely to be a one-off rather than an ongoing service.

Upayavira

On Sun, Dec 30, 2012, at 06:34 AM, Otis Gospodnetic wrote:
> Hi,
> 
> Sorry, by infra I meant ASF infrastructure people. There's a mailing list
> and a JIRA project for infra stuff.
> 
> Otis
> Solr & ElasticSearch Support
> http://sematext.com/
> On Dec 29, 2012 8:45 PM, "Alexandre Rafalovitch" <arafa...@gmail.com>
> wrote:
> 
> > Sorry,
> >
> > What's Infra? A mailing list? Demand is probably low for Solr alone, but
> > may be sufficient across all of Apache's individual projects combined. I
> > guess one way to check is to see in the Apache logs whether there are a
> > lot of scrapers running (identified by user agent).
> >
> > Anyway, for Solr specifically, an acceptable substitute could be the manual
> > version from Lucid Imagination:
> > http://lucidworks.lucidimagination.com/display/home/PDF+Versions
> >
> > Regards,
> >    Alex.
> > P.s. I am getting the feeling that people from Lucid (and other
> > commercial companies) are not allowed to mention their products on this list.
> >
> > Personal blog: http://blog.outerthoughts.com/
> > LinkedIn: http://www.linkedin.com/in/alexandrerafalovitch
> > - Time is the quality of nature that keeps events from happening all at
> > once. Lately, it doesn't seem to be working.  (Anonymous  - via GTD book)
> >
> >
> > On Sun, Dec 30, 2012 at 12:17 PM, Otis Gospodnetic <
> > otis.gospodne...@gmail.com> wrote:
> >
> > > I'd take it to Infra, although I think demand for this is so low...
> > >
> > > Otis
> > > Solr & ElasticSearch Support
> > > http://sematext.com/
> > > On Dec 29, 2012 8:14 PM, "Alexandre Rafalovitch" <arafa...@gmail.com>
> > > wrote:
> > >
> > > > Should that be set up as a public service then (like the Wikipedia
> > > > dump)? Because I need one too, and I don't think DDoSing the wiki with
> > > > crawlers is a good idea. And I bet there will be some 'challenges'
> > > > during scraping.
> > > >
> > > > Regards,
> > > >     Alex.
> > > > P.s. In fact, it would make an interesting example to have an offline
> > > > copy with a Solr index, etc.
> > > >
> > > > Personal blog: http://blog.outerthoughts.com/
> > > > LinkedIn: http://www.linkedin.com/in/alexandrerafalovitch
> > > > - Time is the quality of nature that keeps events from happening all at
> > > > once. Lately, it doesn't seem to be working.  (Anonymous - via GTD book)
> > > >
> > > >
> > > > On Sun, Dec 30, 2012 at 9:15 AM, Otis Gospodnetic <
> > > > otis.gospodne...@gmail.com> wrote:
> > > >
> > > > > Hi,
> > > > >
> > > > > You can easily crawl it with wget to get a local copy.
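> > > > >
> > > > > Something along these lines should work (untested against this
> > > > > particular wiki, so treat the flag set and the --wait value as a
> > > > > starting point rather than a definitive recipe):
> > > > >
> > > > >   # mirror the wiki politely; --wait throttles to one request/second
> > > > >   wget --mirror --convert-links --adjust-extension \
> > > > >        --page-requisites --no-parent --wait=1 \
> > > > >        http://wiki.apache.org/solr/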
> > > > >
> > > > > Otis
> > > > > Solr & ElasticSearch Support
> > > > > http://sematext.com/
> > > > > On Dec 29, 2012 4:54 PM, "d_k" <mail...@gmail.com> wrote:
> > > > >
> > > > > > Hello,
> > > > > >
> > > > > > I'm setting up Solr inside an intranet without internet access, and
> > > > > > I was wondering if there is a way to obtain a data dump of the Solr
> > > > > > wiki (http://wiki.apache.org/solr/) for offline viewing and
> > > > > > searching.
> > > > > >
> > > > > > I understand MoinMoin has an export feature one can use
> > > > > > (http://moinmo.in/MoinDump and
> > > > > > http://moinmo.in/HelpOnMoinCommand/ExportDump), but I'm afraid it
> > > > > > needs to be executed from within the MoinMoin server.
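> > > > > >
> > > > > > (If someone with access to the server could run it: judging from the
> > > > > > ExportDump page, the invocation looks roughly like the line below.
> > > > > > The config and target dirs are placeholder paths, not the real ASF
> > > > > > layout.)
> > > > > >
> > > > > >   # paths below are placeholders, not the actual server layout
> > > > > >   moin --config-dir=/path/to/config --wiki-url=wiki.apache.org/solr/ \
> > > > > >       export dump --target-dir=/tmp/solr-wiki-dump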
> > > > > >
> > > > > > Is there a way to obtain the result of that command?
> > > > > > Is there another way to view the Solr wiki offline?
> > > > > >
> > > > >
> > > >
> > >
> >
