Hi Livy dev team,

*Context*
I want to enable live interaction with our Spark cluster: users request some
data through our web/mobile app, the backend runs a computation on Spark, and
the result is returned in the API response. The response time I am aiming for
is under 500 ms.

I have a few doubts about implementing this with Livy:
1. Our Spark job uses the latest Spark version (3.3.0), while the Livy
documentation says it requires at least Spark 1.6 and supports both the Scala
2.10 and 2.11 builds of Spark; however, Spark 3.3.0 is built with Scala 2.12.
Is there any way around this?

2. When using the batches API to initiate a Spark job, I want to specify the
jar files that need to be executed, but I found one open issue (Not able to
submit jars to Livy via batch/session to an already running session/context.
<https://issues.apache.org/jira/projects/LIVY/issues/LIVY-869?filter=allopenissues>).
Is there any way around this?
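For reference, here is a sketch of what I am trying to do with the batches
API. The endpoint URL, jar paths, and class name below are placeholders for
illustration, not our actual values:

```python
import json

# Hypothetical Livy server URL and job artifacts -- placeholders only.
LIVY_BATCHES_URL = "http://livy-server:8998/batches"

# Request body for POST /batches: "file" is the main application jar,
# "className" is the entry point, and "jars" lists extra dependency jars
# that I would like Livy to ship with the job.
payload = {
    "file": "hdfs:///jobs/my-spark-job.jar",          # assumed main jar path
    "className": "com.example.Main",                   # assumed entry class
    "jars": ["hdfs:///jobs/dependency-1.jar"],         # extra jars to submit
    "conf": {"spark.submit.deployMode": "cluster"},
}

body = json.dumps(payload)
# The actual submission would be an HTTP POST, e.g. with requests:
#   requests.post(LIVY_BATCHES_URL, data=body,
#                 headers={"Content-Type": "application/json"})
print(body)
```

The question is essentially whether the "jars" field here is honored when the
batch targets an already running session/context, which LIVY-869 suggests it
is not.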

It would be great if you could also share your thoughts on this overall
architecture: would it be possible to guarantee a response time under 500 ms?

Thanks & Regards
Shubhang Arora
