Raymond,
May I suggest you take a look at the examples given in the Solr package?
Essentially you need to understand which fields need to be searchable by the
application and which do not. This FIX data can be represented in JSON or XML.
To parse and upload the data to Solr, you can use different libraries.
Hi,
In case you want to round up, you can use negative numbers and a percentage of
failed matches, so 75% of matches rounded up can be written as -25%.
HTH,
Emir
--
Monitoring - Log Management - Alerting - Anomaly Detection
Solr & Elasticsearch Consulting Support Training - http://sematext.com/
I have used a custom filter provided by a jar in schema.xml in standalone
Solr like below
And for this,
I have loaded the jar in solrconfig.xml like below
It works fine, but when I tried to use it in SolrCloud with external
ZooKeeper mode I got an 'IO exception' error, maybe for upl
If you want to delete all items from the Solr index, use the query
*:*
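For example, as a delete-by-query command posted to the update handler (the collection name is a placeholder):

```xml
<!-- POST this body to /solr/<collection>/update?commit=true -->
<delete><query>*:*</query></delete>
```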
-
Development Center Toronto
--
Sent from: http://lucene.472066.n3.nabble.com/Solr-User-f472068.html
Raymond
There is a default field parameter, normally called df. You would normally use
copyField to copy all searchable fields into the default field.
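A minimal schema.xml sketch of that idea (the field names title, body, and alltext are hypothetical):

```xml
<!-- a catch-all field, not stored, that df can point at -->
<field name="alltext" type="text_general" indexed="true" stored="false" multiValued="true"/>
<copyField source="title" dest="alltext"/>
<copyField source="body" dest="alltext"/>
```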
Cheers -- Rick
On April 1, 2018 11:34:07 PM EDT, Raymond Xie wrote:
>Hi Rick,
>
>I sorted it out half:
>
>I should have specified the field in the search q
Ray
Have you looked around for an existing FIX-to-Solr conduit? If FIX is a common
standard then I would expect that someone has done some work on this and
github'd it.
Even just FIX to JSON.
Cheers -- Rick
On April 2, 2018 12:34:44 AM EDT, Raymond Xie wrote:
>Thank you, Shawn, Rick and other
Google "fix to json"; there are a few interesting leads.
On April 2, 2018 12:34:44 AM EDT, Raymond Xie wrote:
>Thank you, Shawn, Rick and other readers,
>
>To Shawn:
>
>For *8=FIX.4.4 9=653 35=RIO* as an example, in the FIX standard: 8
>means BeginString; in this example, its value is FIX.4.4.
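That tag=value structure is simple to parse. A minimal sketch (the tiny tag dictionary below is illustrative, not a complete FIX data dictionary; real FIX delimits fields with the SOH character, \x01):

```python
import json

# Map a few FIX tag numbers to names, per the example above (8=BeginString).
TAG_NAMES = {"8": "BeginString", "9": "BodyLength", "35": "MsgType"}

def fix_to_json(message, delimiter="\x01"):
    """Parse a tag=value FIX message into a JSON string."""
    fields = {}
    for pair in message.strip(delimiter).split(delimiter):
        tag, _, value = pair.partition("=")
        # Fall back to the raw tag number when the name is unknown.
        fields[TAG_NAMES.get(tag, tag)] = value
    return json.dumps(fields)

print(fix_to_json("8=FIX.4.4\x019=653\x0135=RIO\x01"))
# -> {"BeginString": "FIX.4.4", "BodyLength": "653", "MsgType": "RIO"}
```

The resulting JSON can then be posted to Solr's /update handler.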
Thank you, Rick, for the enlightenment.
I will get the FIX message parsed first and come back here later.

Sincerely yours,
Raymond
On Mon, Apr 2, 2018 at 9:15 AM, Rick Leir wrote:
> Google
>fix to json,
> there are a few interesting le
Hi Arturas,
Both Erick and I had a go at improving the documentation here. I hope it's
clearer.
https://builds.apache.org/job/Solr-reference-guide-master/javadoc/highlighting.html
The docs for hl.fl, hl.q, hl.qparser were all updated. The meat of the
change was a new note in hl.fl including an e
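As a quick sketch of how those parameters combine in a request (the field names and collection name are made up; hl.fl takes a comma- or space-separated field list):

```python
from urllib.parse import urlencode

# Build a select query that highlights two (hypothetical) fields.
params = {
    "q": "content:solr",
    "hl": "true",
    "hl.fl": "title,content",  # fields to highlight
    "hl.q": "solr",            # optional: highlight against a different query
}
query_string = urlencode(params)
print("/solr/mycollection/select?" + query_string)
```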
We noticed this issue in our Solr clusters right after the cluster is
restarted, or after it has been live for some time. Based on my research so
far... I am not seeing ZooKeeper connection issues from the zk server side. It
seems to be on the Solr (zk client) side. This issue is pretty constant now
Hi murugesh,
This error normally happens when you are in long GC pauses. Try raising the heap
memory.
The only way to recover from this is restarting the affected node.
Regards.
--
Yago Riveiro
On 2 Apr 2018 15:39 +0100, murugesh karmegam , wrote:
> We noticed this issue in our solr clusters r
ZK as used by Solr defaults to a max of 1M file sizes specifically so
you _know_ when you are pushing large files around. You can change
that with setting jute.maxbuffer, see the ZooKeeper admin guide.
But if you put the jar file in the right place, it should have been
found. I did note that you p
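A sketch of raising that limit (the value is illustrative, in bytes, and must be set consistently on both the ZooKeeper servers and the Solr JVMs):

```
# in solr.in.sh (and correspondingly in ZooKeeper's JVM flags)
SOLR_OPTS="$SOLR_OPTS -Djute.maxbuffer=10485760"   # ~10 MB, example value
```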
Not located in the /server/logs/ folder.
Have these files instead
solr-8983-console.log
solr_gc.log.0.current
I can see logs from the Solr dashboard. Where is the solr.log file going
to? A search of "solr.log" in the system did not find the file.
Is the file called something else for solrcloud
Over the weekend one of our dev SolrCloud clusters ran out of disk space. Examining
the problem, we found one collection that had 2 months of uncommitted tlog
files. Unfortunately the Solr logs rolled over, so I cannot see the
commit behavior during the last time data was loaded to it.
The solrconfig.xml
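(For reference, the kind of hard autoCommit setting usually checked in this situation; the values here are illustrative:

```xml
<!-- solrconfig.xml: a hard commit closes tlogs; openSearcher=false keeps it cheap -->
<autoCommit>
  <maxTime>60000</maxTime>
  <openSearcher>false</openSearcher>
</autoCommit>
```
)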
Webster:
Do you by any chance have CDCR configured? If so, ensure that
buffering is disabled. Buffering was intended to be enabled
_temporarily_ during, say, a maintenance window, and was conceived
before the bootstrapping capability was added to CDCR.
But I don't recall your other e-mails mention
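Buffering can be disabled through the CDCR API, e.g. (collection name is a placeholder):

```
/solr/<collection>/cdcr?action=DISABLEBUFFER
```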
Technically, Solr doesn't name the file at all, that's in your log4j
config, this line:
log4j.appender.file.File=${solr.log}/solr.log
so it's weird that you can't find it on your machine at all. How do
you _start_ Solr? In particular, do you define a system variable
"-Dsolr.log=some_path"?
And a
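A sketch of pinning the location explicitly at startup (the path is an example; -a passes additional JVM parameters to bin/solr):

```
./bin/solr start -a "-Dsolr.log=/var/solr/logs"
```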
Wow life is complicated :)
Since I am using this to start solr, I am assuming the one in
/server/scripts/cloud-scripts is being used:
./bin/solr start -cloud -s /usr/local/bin/solr-7.2.1/server/solr/node1/solr
-p 8983 -z zk0-esohad:2181,zk1-esohad:2181,zk5-esohad:2181 -m 10g
So, I guess I need to
Erick,
Thanks. Normally our dev environment does not use CDCR, except when we're
doing active development on it. As it happens, the collection in question
was one we used to test CDCR. Or rather, the configuration for it was, as
the specific collection has been deleted and created many times. Even
Raymond,
You can specify the default behavior in solrconfig.xml under each handler.
For instance, for /browse you can specify it should look into name, and for
/query you can default it to a different field.
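A sketch of such per-handler defaults in solrconfig.xml (the field names name and description are hypothetical):

```xml
<requestHandler name="/browse" class="solr.SearchHandler">
  <lst name="defaults">
    <str name="df">name</str>
  </lst>
</requestHandler>
<requestHandler name="/query" class="solr.SearchHandler">
  <lst name="defaults">
    <str name="df">description</str>
  </lst>
</requestHandler>
```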
On Mon, Apr 2, 2018 at 9:04 PM, Rick Leir wrote:
> Raymond
> There is a default field norm
Hello Markus,
It appears you are not familiar with PreAnalyzedUpdateProcessor? Using
that is much more flexible -- you could have different URP chains for your
use-cases. IMO PreAnalyzedField ought to go away. I argued for the URP
version and thus its superiority to the FieldType here:
https://
Hi All - when building machine learning models using information gain, I
sometimes get this error when the number of iterations is high. I'm
using about 20k news articles in my training set (about 10k positive,
and 10k negative), and (for this particular run) am using 500 terms and
25,000 iter
Homer:
Yeah, the buffering bits are trappy, and in fact are being removed from
CDCR going forward.
Too bad you fell into that trap, there's hope going forward though...
Erick
On Mon, Apr 2, 2018 at 11:42 AM, Webster Homer wrote:
> Erick,
>
> Thanks, Normally our dev environment does not use CDCR,
Hi Yago Riveiro,
Thanks for the reply. We have a heap size of 64G. Any more is not recommended,
right? Except one time, I was not able to correlate "updates disabled" with a
GC pause. Also, the zk timeout is 120 seconds; even with a long GC pause (more than
10 seconds normally) we should recover, right?
JVM se
Actually, 64G is on the high side, GC pauses can kill you pretty
easily in that range.
If it's at all possible to cut that down, it would be A Good Thing.
Best,
Erick
On Mon, Apr 2, 2018 at 12:56 PM, murugesh karmegam wrote:
> Hi Yago Riveiro ,
>
> Thanks for the reply. We have heap size 64G.
Hi Ilay,
I am still on Solr 6.6.0 and did not patch the grouping fix.
I implemented a temporary workaround: the web application makes two async
requests, the 1st with grouping and the 2nd without, and merges the
results.
This solution worked for my case as we were getting grouping results f
Thanks Erick for the reply. We even had a 92G heap size for some time at one
point. We have been able to run and survive with 64G for the last several months,
although with some issues, mainly this one: "Can not talk to ZK. Updates are
disabled". We have a dedicated zk quorum. When we reduced to 32G we ran
On 4/2/2018 2:43 PM, murugesh karmegam wrote:
> So given all of that wondering is there any options
> like G1 GC tuning ?
Targeted reply.
I've put some G1 information out there for Solr.
https://wiki.apache.org/solr/ShawnHeisey
Thanks,
Shawn
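For what it's worth, a G1 sketch along those lines for solr.in.sh (the flags are real HotSpot options, but the values are examples, not a recommendation for every workload):

```
GC_TUNE=" \
  -XX:+UseG1GC \
  -XX:+ParallelRefProcEnabled \
  -XX:G1HeapRegionSize=8m \
  -XX:MaxGCPauseMillis=200 \
"
```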
It looks like it is accessing a replica that's down. Are the logs from
http://vesta:9100/solr/MODEL1024_1522696624083_shard20_replica_n75 reporting
any issues? When you go to that url is it back up and running?
Joel Bernstein
http://joelsolr.blogspot.com/
On Mon, Apr 2, 2018 at 3:55 PM, Joe Obernber
Hi Joel - thank you for your reply. Yes, the machine (Vesta) is up, and
I can access it. I don't see anything specific in the log, apart from
the same error, but this time to a different server. We have constant
indexing happening on this cluster, so if one went down, the indexing
would stop
On 4/2/2018 1:55 PM, Joe Obernberger wrote:
> The training data was split across 20 shards - specifically created with:
> http://icarus.querymasters.com:9100/solr/admin/collections?action=CREATE&name=MODEL1024_1522696624083&numShards=20&replicationFactor=2&maxShardsPerNode=5&collection.configName=T
We are experimenting with a text classifier for determining query intent.
Anybody have a favorite (or anti-favorite) Java implementation? Speed and ease
of implementation are important.
Right now, we’re mostly looking at Weka and the Stanford Classifier.
wunder
Walter Underwood
wun...@wunderwood
Hi all,
It would be nice if org.apache.solr.common.SolrInputDocument#addField threw
an exception when the field name is 'id' and the method detects that the indexed
id is not unique, just like the post.jar tool does.
I was confident that both had the same behavior, so...
Thanks.
Hello Wunder,
If you are particular about Java, Stanford and Weka are both good choices.
OpenNLP also has a document classifier.
You can even explore beyond Java, I mean Python, and consume the intent as
a REST service.
Regards,
Dikshant
On Tue 3 Apr, 2018, 4:48 AM Walter Underwood, wrote:
> W
Thanks, Rick and Adhyan.
I see there is "/browse" in solrconfig.xml :
explicit
and name="defaults" with one item of "df" as shown below:
_text_
My understanding is I can put whatever fields I want to enable indexing and
searching here, in parallel with _te