On 03/11/2017 15:32, Admin eLawJournal wrote:
Hi,
I have read that we can use Tesseract with Solr to index image files. I
would like some guidance on setting this up.
Currently, I am using Solr to search my WordPress installation via the
WPSOLR plugin.
I have Solr 6.6 installed on Ubuntu 14.
Hi Wael,
Can you provide your field definition and a sample query?
Thanks,
Emir
--
Monitoring - Log Management - Alerting - Anomaly Detection
Solr & Elasticsearch Consulting Support Training - http://sematext.com/
> On 6 Nov 2017, at 08:30, Wael Kader wrote:
>
> Hello,
>
> I am having an index
Hi Charlie,
Thanks for the reply. You're right. I haven't got my hands dirty with solr
yet. I am not from an IT background and learnt everything I know through
lots of reading online. However, all the documentation on solr assumes that
the reader has advanced IT knowledge. In fact, it took me a we
Anand,
As Charlie says you should have a separate process for this. Also, if you go
back about ten months in this mailing list you will see some discussion about
how OCR can take minutes of CPU per page, and needs some preprocessing with
Imagemagick or Graphicsmagick. You will want to do some fi
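A separate OCR process like the one Rick describes might look roughly like this; the file names, core name, and flags below are assumptions for illustration, not a tested recipe:

```shell
# Sketch of a standalone OCR pipeline, kept outside Solr itself.
# 1) Preprocess the scan so Tesseract gets clean input (ImageMagick).
convert scan.png -colorspace Gray -normalize page.tif

# 2) OCR to plain text -- this is the slow, CPU-heavy step.
#    "tesseract page.tif page" writes page.txt
tesseract page.tif page

# 3) Post the extracted text to Solr as an ordinary document.
bin/post -c mycore -params "literal.id=doc1" page.txt
```

Running this in a queue or cron job keeps the minutes-per-page OCR cost off the indexing path.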
Hi,
I am using a custom field. Below is the field definition.
I am using this because I don't want stemming.
Regards,
Hi Wael,
You are faceting on an analyzed field. This results in the field being
uninverted - the fieldValueCache being built - on the first call after every
commit. This is both time and memory consuming (you can check in the admin
console stats how much memory it took).
What you need to do is to create multiv
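The cut-off advice presumably continues toward a separate multivalued string field for faceting. A sketch of the usual schema.xml pattern (field and type names here are assumptions):

```xml
<!-- Analyzed field for searching, as before -->
<field name="text" type="text_general" indexed="true" stored="true"/>
<!-- Untokenized copy for faceting; docValues avoids uninverting -->
<field name="text_facet" type="string" indexed="true" stored="false"
       multiValued="true" docValues="true"/>
<copyField source="text" dest="text_facet"/>
```

Faceting would then go against text_facet rather than the analyzed field.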
Thanks Rick, minutes of CPU is definitely going to break my site. I'm
looking for someone to hire as I have no coding knowledge. Please let me
know if you are up for it.
On Mon, Nov 6, 2017 at 8:05 PM, Rick Leir wrote:
> Anand,
> As Charlie says you should have a separate process for this. Also,
Dr. Krell
You could look at your /select query handler, and compare it with the /query
query handler in the Admin config.
Did you upgrade from a previous version of Solr? Or change your config ( no,
you must have thought of that). If it is a bug related to the Java upgrade then
you need to sho
_Why_ do you want to get the word counts? Faceting on all of the
tokens for 100M docs isn't something Solr is ordinarily used for. As
Emir says it'll take a huge amount of memory. You can use one of the
function queries (termfreq IIRC) that will give you the count of any
individual term you have an
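A sketch of the termfreq() function query Erick mentions; the core name, field, and term are placeholders:

```shell
# Returns, per matching document, the raw count of the term "solr"
# in the "text" field alongside the id.
curl 'http://localhost:8983/solr/mycore/select?q=*:*&fl=id,termfreq(text,solr)&wt=json'
```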
Hi Rick, Hi Solr Experts,
Thank you for this reply!
My solr database is supposed to be(come) open source. Hence, I am willing to
share any information. Since I am new to solr, I just did not know what to
share. But in the mean time, I put some of the information online.
My current configuratio
He said that he's using it to get a word cloud; if it's not related to the
search and it's a generic word cloud of the whole index, using the Luke request
handler to get the first 250 or 500 terms could work.
http://localhost:8983/solr/core/admin/luke?fl=text&numTerms=500&wt=json
On Mon, Nov 6, 2017 at 4:4
Hi Guys,
I was playing with payloads example as I had a possible use case of alternate
product titles for a product.
https://lucidworks.com/2017/09/14/solr-payloads/
bin/solr start
bin/solr create -c payloads
bin/post -c payloads -type text/csv -out yes -d $'id,vals_dpf\n1,one|1.0
two|2.0 thr
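For reading the payloads back, Solr 6.6+ has a payload() function query, described in the same Lucidworks post; a sketch, assuming the "payloads" collection created above:

```shell
# payload(field, term) returns the numeric payload stored for that
# term, e.g. the weight attached to the alternate title "one".
curl 'http://localhost:8983/solr/payloads/select?q=*:*&fl=id,p:payload(vals_dpf,one)'
```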
PS I knew sematext would have to chime in here! 😊
Is there a non-expiring dev version I could experiment with? I think I did sign
up for a trial years ago from a different company... I was actually wondering
about hooking it up to my personal AWS based solr cloud instance.
Thanks
Rob
Hi Robert,
We use the following stack:
- Prometheus to scrape metrics (https://prometheus.io/)
- Prometheus node exporter to export "machine metrics" (Disk, network
usage, etc.) (https://github.com/prometheus/node_exporter)
- Prometheus JMX exporter to export "Solr metrics" (Cache usage, QPS,
Res
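One common way to attach the Prometheus JMX exporter to Solr is as a Java agent in solr.in.sh; the jar path, port, and config file below are assumptions:

```shell
# solr.in.sh: expose Solr's JMX beans on :9404 for Prometheus to scrape.
SOLR_OPTS="$SOLR_OPTS -javaagent:/opt/jmx_prometheus_javaagent.jar=9404:/opt/solr-jmx.yaml"
```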
Interesting! Finally a Grafana user... Thanks Daniel, I will follow your links.
That looks promising.
Is anyone using Grafana over Graphite?
Thanks
Robi
From: Daniel Ortega
Sent: Monday, November 6, 2017 11:19:10 AM
To: solr-user@lucene.apache.org
Subject: R
Look back down the string to my post. We use Grafana.
wunder
Walter Underwood
wun...@wunderwood.org
http://observer.wunderwood.org/ (my blog)
> On Nov 6, 2017, at 11:23 AM, Petersen, Robert (Contr)
> wrote:
>
> Interesting! Finally a Grafana user... Thanks Daniel, I will follow your
> links
Hi Robert,
There is a free plan with a limited number of nodes and 30 min retention. It
should be straightforward to install it on an AWS-based Solr Cloud instance,
but if you run into any issues you can use the built-in chat to get in touch
with somebody to help you set it up.
Regards,
Emir
I see where this was an issue w/ 6.4 and fixed. I keep getting this error w/
7.0.1 and 7.1.0. Works fine up until 6.6.2. Could this issue have been
reintroduced? Is there somewhere to check what might be going on? I don't
see anything in the error logs.
Hi Walter,
Yes, now I see it. I'm wondering about using Grafana and New Relic at the same
time, since New Relic has a dashboard and also costs money for corporate use. I
guess after a reread, you are using Grafana to visualize the InfluxDB data and
New Relic just for the JVM, right? Did this give yo
We use New Relic across the site, but it doesn’t split out traffic to different
endpoints. It also cannot distinguish between search traffic to the cluster and
intra-cluster traffic. With four shards, the total traffic is 4X bigger than
the incoming traffic.
We have a bunch of business metrics
Hi Walter,
OK now that sounds really interesting. I actually just turned on logging in
Jetty and yes did see all the intra-cluster traffic there. I'm pushing our ELK
team to pick out the get search requests across the cluster and aggregate them
for me. We'll see how that looks but that would j
Hi Guys,
Anyone else been noticing this msg when starting up solr with Java 9?
(This is just an FYI and not a real question)
Java HotSpot(TM) 64-Bit Server VM warning: Option UseConcMarkSweepGC was
deprecated in version 9.0 and will likely be removed in a future release.
Java HotSpot(TM)
I have used Java Melody for this purpose on past Java based servers, but I
haven't tried to embed it in Jetty.
-Original Message-
From: Petersen, Robert (Contr) [mailto:robert.peters...@ftr.com]
Sent: Monday, November 06, 2017 4:50 PM
To: solr-user@lucene.apache.org
Subject: Re: Anyone h
: Anyone else been noticing this this msg when starting up solr with java 9?
(This is just an FYI and not a real question)
: Java HotSpot(TM) 64-Bit Server VM warning: Option UseConcMarkSweepGC was
deprecated in version 9.0 and will likely be removed in a future release.
: Java HotSpot(TM) 64-
On 11/6/2017 3:07 PM, Petersen, Robert (Contr) wrote:
> Anyone else been noticing this this msg when starting up solr with java 9?
> (This is just an FYI and not a real question)
>
> Java HotSpot(TM) 64-Bit Server VM warning: Option UseConcMarkSweepGC was
> deprecated in version 9.0 and will like
On 11/6/2017 1:47 PM, richardg wrote:
> I see where this was an issue w/ 6.4 and fixed. I keep getting this error w/
> 7.0.1 and 7.1.0. Works fine up until 6.6.2. Could this issue have been
> reintroduced? Is there somewhere to check what might be going on? I don't
> see anything in the error
On 11/6/2017 4:26 PM, Shawn Heisey wrote:
> If I start Solr "normally" or with the cloud example, then the
> dataimport tab shows that error -- which is exactly as expected.
I have opened an improvement issue so that particular error message
isn't as vague. It's been labeled with "newdev" because
: We recently discovered issues with solr with converting utf8 code in the
search. One or two month ago everything was still working.
:
: - What might have caused it is a Java update (Java 8 Update 151).
: - We are using firefox as well as chrome for displaying results.
: - We tested it with So
Actually I can't believe they're deprecating UseConcMarkSweepGC; that was the
one that finally made Solr 'sing' with no OOMs!
I guess they must have found something better; I'll have to look into that...
Robi
From: Chris Hostetter
Sent: Monday, November 6, 2017 3
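For what it's worth, Solr's GC flags are set via GC_TUNE in solr.in.sh, so a move off CMS might look like this (the flag values are illustrative only, not a recommendation):

```shell
# solr.in.sh: replace the default CMS settings with G1.
GC_TUNE="-XX:+UseG1GC -XX:MaxGCPauseMillis=250"
```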
OK, no faceting, no filtering, I just want the hierarchy to come back in the
results. Can't quite get it... googled all over the place too.
Doc:
{ id : asdf, type_s:customer, firstName_s:Manny, lastName_s:Acevedo,
address_s:"123 Fourth Street", city_s:Gotham, tn_s:1234561234,
_childDocuments_:
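The document above is cut off, but assuming parents carry type_s:customer and the children were indexed under _childDocuments_, the [child] doc transformer is the usual way to get the hierarchy back in the results (the core name is a placeholder):

```shell
# -g stops curl from globbing the [] in the transformer syntax.
# Each matching parent comes back with its child documents nested.
curl -g 'http://localhost:8983/solr/mycore/select?q=type_s:customer&fl=*,[child%20parentFilter=type_s:customer]'
```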
Hi Daniel,
What version of Solr are you using?
We gave Prometheus + Jolokia + InfluxDB + Grafana a try, and that came out
well.
With Solr 6.6 the metrics are exposed through the /metrics API, but how do
we go about it for earlier versions? Please guide.
Specifically the cache monitoring.
Than
Hoss
Clearly it is
U+00FC ü c3 bc LATIN SMALL LETTER U WITH DIAERESIS
As in Tübingen
"With the Yahoo Flickr Creative Commons 100 Million (YFCC100m) dataset, a great
novel dataset was introduced to the computer vision and multimedia research
community." -- cool
I think it is strange th
Hi,
I'm using Solr 6.5.1, and I'm facing an issue of incorrect ngroup counts
after I have grouped by the signature field.
Usually, the number of records returned is more than what is shown in
ngroups. For example, I may get an ngroups of 22, but 25 records are
returned.
Below is the pa
Hi,
thank you for your time and trying to narrow down my problem.
1) When looking for Tübingen in the title, I am expecting the 3092484 results.
That sounds like a reasonable result. Furthermore, when looking at some of the
results, they are exactly what I am looking for.
2) I am testing them
Hi,
I want to use more than one SSD in each server of the Solr cluster, but I
don't know how to set multiple drives in the solr.xml configuration.
I set one SSD path in solr.xml:
/media/ssd
but I can't set more than one SSD.
How should I do it?
Thanks.
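As far as I know, solr.xml takes a single data path, so the usual workaround is to spread cores across drives, e.g. with a per-core dataDir in each core's core.properties (the paths below are assumptions):

```properties
# core.properties for core1 -- data on the first SSD
name=core1
dataDir=/media/ssd1/core1/data
```

A second core would point its dataDir at /media/ssd2 in the same way; symlinking core directories onto different drives is another common approach.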