I still see this issue with Chrome and the admin console. I am using Solr 7.3.
In the Chrome console I see an error: "style.css:1 Failed to load resource:
the server responded with a status of 404 (Not Found)"
This used to work.
It is unusably slow, even with a simple query like *:*
-Origi
Maybe Solr isn't using enough of your available memory (a rough check is
provided by 'solr status'). Did you know you can start Solr with a
'-m xx' parameter? (For me, xx = 1g.)
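For example (a minimal sketch; size the heap to your own hardware, the 2g
here is just an illustration):

  bin/solr start -m 2g   # sets both the min and max JVM heap to 2g
  bin/solr status        # rough check of the memory actually in use
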
Terry
On Mon, Jan 13, 2020 at 3:11 PM Gael Jourdan-Weil <gael.jourdan-w...@kelkoogroup.com> wrote:
> Hello,
>
> If you are talking about "physical memory" as the bar displayed in the Solr
> UI, that is the actual RAM your host has.
> If you need more, you need more RAM; it's not related to Solr.
>
>
Thank
Hello,
If you are talking about "physical memory" as the bar displayed in the Solr UI,
that is the actual RAM your host has.
If you need more, you need more RAM; it's not related to Solr.
Gaël
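A quick way to check the figure that bar is scaled against, assuming a Linux
host:

  free -h   # 'total' is the physical RAM of the host; 'available' is headroom
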
From: rhys J
Sent: Monday, January 13, 2020 20:46
To: solr-user@lucene.ap
Hi,
I was able to connect my IDE to Solr running on a container by using the
following command:
command: >
  bash -c "solr start -c -f -a
  -agentlib:jdwp=transport=dt_socket,server=y,suspend=n,address=5005;"
It starts SolrCloud (-c) and runs in the foreground (-f), so you don't
need to r
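Note that for an IDE outside the container to attach to port 5005, the port
also has to be published. A minimal docker-compose sketch (service name and
image tag are assumptions):

  services:
    solr:
      image: solr:8
      ports:
        - "8983:8983"   # Solr admin UI / API
        - "5005:5005"   # JDWP debug port the IDE attaches to
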
I am trying to figure out how to increase the physical memory available to Solr.
I see how to increase the JVM heap size, and I've done that. But my physical
memory usage is at 97% of 7.79G, and I'm trying to index a
lot more documents as I move this live.
Is there any documentation that I've miss
Thanks for your answer Erick.
Just to clarify something: we are not returning 1000 docs per request, we are
only returning 100.
We get 10 requests to Solr, querying for docs 1 to 100, then 101 to 200, ...
until 901 to 1000.
But all of that happens within the exact same second.
But I understand that to retrieve
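The access pattern described above, expressed as plain paged requests (a
sketch; host and core name are invented):

  curl 'http://localhost:8983/solr/mycore/select?q=keyword&start=0&rows=100'
  curl 'http://localhost:8983/solr/mycore/select?q=keyword&start=100&rows=100'
  ...
  curl 'http://localhost:8983/solr/mycore/select?q=keyword&start=900&rows=100'
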
Thanks for your helpful replies, guys.
@Edward: you were correct. I forgot to expose port 5005 in the YAML. After
exposing this port, I am at least able to see the process with the following
command (I was not able to see it before):
gnandre@gnandre-deb9-64:/sandbox/gnandre/mw-ruby-development-server$ s
Hi Erick,
I am using JSON facets and was able to achieve the desired result using them. I
was looking for some way to do the same with groups;
doing it with JSON facets is my last option.
Thank
Saurabh Sharma
On Mon, Jan 13, 2020, 7:17 PM Erick Erickson wrote:
This might help: https://lucene.apache.org/solr/guide/7_4/json-facet-api.html
Basically, if you can construct facets that correspond to your groups, you can
run some statistical functions on them.
Best,
Erick
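A sketch of that idea (core name, grouping field, and metric field are all
invented for illustration): a terms facet over the grouping field, with
aggregations computed inside each bucket:

  curl http://localhost:8983/solr/mycore/query -d '
  {
    "query": "*:*",
    "facet": {
      "per_group": {
        "type": "terms",
        "field": "group_field",
        "facet": {
          "avg_price": "avg(price)",
          "max_price": "max(price)"
        }
      }
    }
  }'
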
> On Jan 13, 2020, at 2:18 AM, Saurabh Sharma wrote:
>
> Hi All,
>
> I have a requ
To return stored values, Lucene must
1> read the stored values from disk
2> decompress a minimum 16K block
3> assemble the return packet.
So if you're returning 500-1,000 documents per request, it may just be the above
set of steps. Solr was never designed to _return_ large result sets. Search
them
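When a client genuinely needs to walk deep into a result set, cursor-based
paging is the usual alternative to ever-growing start= offsets; a minimal
sketch (host, core, and uniqueKey field name are assumptions):

  # the sort must include the uniqueKey field (assumed here to be 'id')
  curl 'http://localhost:8983/solr/mycore/select?q=keyword&rows=100&sort=id+asc&cursorMark=*'
  # pass the returned nextCursorMark back in as cursorMark for the next page
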
Hello,
We are experiencing some performance issues on Solr that seem related to
requests querying multiple pages of results for the same keyword at the same
time.
For instance, querying 10 pages of results (with 50 or 100 results per page) in
the same second for a given keyword, and doing that