xunxunmimi5577 commented on issue #1563:
URL:
https://github.com/apache/datafusion-ballista/issues/1563#issuecomment-4301331200
> something like
>
> use ballista_executor::executor_process::{
>     ExecutorProcessConfig, start_executor_process,
> };
> use datafusion::execution::{memory_pool::FairSpillPool, runtime_env::RuntimeEnvBuilder};
> use std::sync::Arc;
>
> #[tokio::main]
> async fn main() -> ballista_core::error::Result<()> {
>     let memory_pool = Arc::new(FairSpillPool::new(2_000_000));
>
>     let runtime_env_producer: ballista_core::RuntimeProducer = Arc::new(move |_config| {
>         let runtime_env = RuntimeEnvBuilder::new()
>             .with_memory_pool(memory_pool.clone())
>             .build()?;
>
>         Ok(Arc::new(runtime_env))
>     });
>
>     let config: ExecutorProcessConfig = ExecutorProcessConfig {
>         override_runtime_producer: Some(runtime_env_producer),
>         ..Default::default()
>     };
>
>     start_executor_process(Arc::new(config)).await
> }
Thanks for your answer! Previously, I set the memory limit for the
MemoryPool too high because I assumed each executor shared a single
MemoryPool; I later discovered that a MemoryPool is actually created
each time a task is executed.
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]