Should that be set up as a public service then (like the Wikipedia dumps)?
Because I need one too, and I don't think it is a good idea to DDoS the wiki
with crawlers. And I bet there will be some 'challenges' during scraping.

Regards,
    Alex.
P.S. In fact, it would make an interesting example to have an offline copy
with a Solr index, etc.
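
Something along these lines could be a starting point (just a rough,
untested sketch of the wget approach Otis mentions below; adjust the flags
and rate limiting to whatever the wiki admins are comfortable with):

    wget --mirror --convert-links --page-requisites --no-parent \
         --wait=1 http://wiki.apache.org/solr/

The --wait=1 is there to be polite to the server; the result is a browsable
local copy that could then be fed into a Solr index.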

Personal blog: http://blog.outerthoughts.com/
LinkedIn: http://www.linkedin.com/in/alexandrerafalovitch
- Time is the quality of nature that keeps events from happening all at
once. Lately, it doesn't seem to be working.  (Anonymous  - via GTD book)


On Sun, Dec 30, 2012 at 9:15 AM, Otis Gospodnetic <
otis.gospodne...@gmail.com> wrote:

> Hi,
>
> You can easily crawl it with wget to get a local copy.
>
> Otis
> Solr & ElasticSearch Support
> http://sematext.com/
> On Dec 29, 2012 4:54 PM, "d_k" <mail...@gmail.com> wrote:
>
> > Hello,
> >
> > I'm setting up Solr inside an intranet without internet access and
> > I was wondering if there is a way to obtain a data dump of the Solr
> > Wiki (http://wiki.apache.org/solr/) for offline viewing and searching.
> >
> > I understand MoinMoin has an export feature one can use
> > (http://moinmo.in/MoinDump and
> > http://moinmo.in/HelpOnMoinCommand/ExportDump), but I'm afraid it needs
> > to be executed from within the MoinMoin server.
> >
> > Is there a way to obtain the result of that command?
> > Is there another way to view the solr wiki offline?
> >
>
