I have a huge amount of data in FIX format (
https://en.wikipedia.org/wiki/Financial_Information_eXchange)
I want to give the data users the most flexibility in their searches,
typically by trading date range, order ID or type, amount, and so on.
Can anyone share any experience with that?
Thanks.
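One common approach, sketched below in Java: FIX messages are flat tag=value
pairs separated by the SOH (\u0001) delimiter, so you can parse each message
and index only the tags you want to filter on. The tag numbers are standard
FIX (37=OrderID, 38=OrderQty, 40=OrdType, 52=SendingTime); the field names
are placeholders, not anything prescribed:

import java.util.HashMap;
import java.util.Map;

public class FixToFields {
    // Split a raw FIX message on the SOH delimiter into a tag -> value map.
    public static Map<Integer, String> parse(String msg) {
        Map<Integer, String> tags = new HashMap<>();
        for (String pair : msg.split("\u0001")) {
            int eq = pair.indexOf('=');
            if (eq > 0) {
                tags.put(Integer.parseInt(pair.substring(0, eq)),
                         pair.substring(eq + 1));
            }
        }
        return tags;
    }

    public static void main(String[] args) {
        String msg = "8=FIX.4.2\u000135=D\u000137=ORD123\u0001"
                   + "38=100\u000140=2\u000152=20180427-10:48:00";
        Map<Integer, String> t = parse(msg);
        // These would become the searchable fields (names are placeholders):
        System.out.println("trading_date=" + t.get(52));
        System.out.println("order_id="     + t.get(37));
        System.out.println("order_type="   + t.get(40));
        System.out.println("amount="       + t.get(38));
    }
}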
--
OK, I made a couple of changes; however, I don't think the server is
starting. When I invoke the start command:
$ ./bin/zkServer.sh start
ZooKeeper JMX enabled by default
Using config:
/c/Users/thclotworthy/DevBase/zookeeper/zookeeper-3.4.10/bin/../conf/zoo.cfg
Starting zookeeper ... STARTED
, it says i
On 4/27/2018 10:48 AM, THADC wrote:
I am new to ZooKeeper. I created my zoo.cfg and attempted to start an
instance (DOS shell). My command is:
.\bin\zkServer.cmd start zoo.cfg
and I am getting the following error:
"C:\Users\thclotworthy\DevBase\zookeeper\zookeeper-3.4.10\bin\..\build\classes;C:\Users\thclot
Hi,
Look in the "data" folder - there has to be a file named "myid". In this
file, the ID of your ZooKeeper node has to be defined.
Start with 1.
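A minimal example, assuming dataDir is the "data" folder your zoo.cfg points
at (adjust the path to yours):

  echo 1 > /path/to/dataDir/myid

On Windows it is safest to create the myid file in a text editor and put
just the single digit in it, with nothing else.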
--
Sent from: http://lucene.472066.n3.nabble.com/Solr-User-f472068.html
Thanks for responding; below are the contents of my zoo.cfg file. I took it
directly from the Solr Ref Guide 7.0 instructions for setting up an external
ZooKeeper ensemble
(https://lucene.apache.org/solr/guide/7_0/setting-up-an-external-zookeeper-ensemble.html#setting-up-an-external-zookeeper-ensemble).
It did not specify t
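For reference, a minimal sketch of a three-node ensemble zoo.cfg along the
lines of that guide (hostnames and paths are placeholders, not copied from
the guide):

  tickTime=2000
  dataDir=/var/lib/zookeeper
  clientPort=2181
  initLimit=5
  syncLimit=2
  server.1=zk1.example.com:2888:3888
  server.2=zk2.example.com:2888:3888
  server.3=zk3.example.com:2888:3888

The server.N lines are what make ZooKeeper insist on a matching myid file in
dataDir, which ties back to the advice above.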
What do your ZooKeeper configurations look like? My guess is that the
"myid" file isn't formed correctly or is absent.
Best,
Erick
On Fri, Apr 27, 2018 at 9:48 AM, THADC
wrote:
> Hi,
>
> I am new to ZooKeeper. I created my zoo.cfg and attempted to start an
> instance (DOS shell). My command is:
>
>
In the past, I’ve recommended seaurchin.io, a great tool. But they were
acquired by Algolia and the service will be shut down this month.
As far as I know, there is nothing close to SeaUrchin.
wunder
Walter Underwood
wun...@wunderwood.org
http://observer.wunderwood.org/ (my blog)
> On Apr 27,
Hi,
I am new to ZooKeeper. I created my zoo.cfg and attempted to start an
instance (DOS shell). My command is:
.\bin\zkServer.cmd start zoo.cfg
and I am getting the following error:
"C:\Users\thclotworthy\DevBase\zookeeper\zookeeper-3.4.10\bin\..\build\classes;C:\Users\thclotworthy\DevBase\zookeeper\zookeep
Any document that has the same value in the "id" field (or whatever
you've defined in your schema) will replace any older
documents with the same value. So my guess is that your data has some
duplicate keys.
A simple way to check is to watch maxDoc vs. numDocs in the admin UI
for a particular
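If you'd rather script it than watch the admin UI, the Luke request handler
reports both counters (the core name here is a placeholder):

  curl "http://localhost:8983/solr/yourcore/admin/luke?numTerms=0&wt=json"

numDocs counts live documents while maxDoc also counts deleted (i.e.
overwritten) ones, so a large gap between the two points at duplicate keys
being replaced.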
Exactly, Alessandro -
I can totally build something, but there's not a good open source
solution for:
- Gathering queries / user / session metadata at search time from your app
- Gathering the returned result set and their display posn (just doc ids
would be fine)
- Gathering the clicks/c
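To make that wish list concrete, the three event shapes could be as simple
as newline-delimited JSON (a sketch; the field names are mine):

  {"type": "query",   "session": "s42", "user": "u7", "q": "red shoes"}
  {"type": "results", "session": "s42", "q": "red shoes", "docs": ["d1", "d9", "d3"]}
  {"type": "click",   "session": "s42", "q": "red shoes", "doc": "d9", "posn": 2}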
Michal,
Doug was referring to an open source solution that is ready out of the box
and just pluggable (a sort of plug and play).
Of course you can implement your own solution, and using ELK or Kafka is
absolutely a valid option.
Cheers
--
Alessandro Benedetti
Search Consultant, R
Hi
We've finished importing 40 million documents into a 3-node Solr cluster.
After injecting all the data via a Java program, we noticed that the number
of documents was less than expected (by about 10 rows).
No exception, no error.
Some config details:
To add to it further: in 6.5.1, while indexing, sometimes one of the Solr
nodes goes down for a while and comes back up automatically. During those
periods all our calls to index fail. Even in the Solr admin UI, we can see
the node not being active for a while and then coming up again.
All this happens in 4 cor
Hi Markus,
Can you give an idea of what your filter queries look like? Any custom
plugins or things we should be aware of? Simply indexing artificial docs,
querying, and committing doesn't seem to reproduce the issue for me.
On Thu, Apr 26, 2018 at 10:13 PM, Markus Jelsma
wrote:
> Hello,
>
> We
Hi,
you have plenty of options. Without any special effort there is ELK: parse
Solr logs with Logstash, feed Elasticsearch with the data, then analyze it
in Kibana.
Another option is to send every relevant search request to Kafka; then you
can do more sophisticated data analytics using the Kafka Streams API. T
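A bare-bones sketch of the Logstash leg of that pipeline; the grok pattern
below is a guess at a typical Solr request-log line (webapp=/solr
path=/select params={...} hits=... status=... QTime=...), so adjust it to
your actual log format:

  input  { file { path => "/var/solr/logs/solr.log" } }
  filter {
    grok {
      match => { "message" => "path=%{NOTSPACE:path} params=\{%{DATA:params}\} hits=%{NUMBER:hits:int} status=%{NUMBER:status:int} QTime=%{NUMBER:qtime:int}" }
    }
  }
  output { elasticsearch { hosts => ["localhost:9200"] } }

From there, Kibana dashboards over the hits and qtime fields get you basic
search analytics with almost no code.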