Can you share the YARN logs? Which Spark version are you on?
You can enable debug mode for extra verbosity.
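For example (a hedged sketch: the application ID and file paths are placeholders, adjust to your cluster), you can pull the aggregated logs with the `yarn logs` CLI and point spark-submit at a more verbose log4j configuration:

```shell
# 1) Collect the aggregated YARN logs for the failing application
#    (placeholder application ID):
#      yarn logs -applicationId application_1628000000000_0001 > app.log

# 2) Create a log4j config with the root logger at DEBUG so the
#    localization / RPC chatter shows up:
cat > /tmp/log4j-debug.properties <<'EOF'
log4j.rootCategory=DEBUG, console
log4j.appender.console=org.apache.log4j.ConsoleAppender
log4j.appender.console.layout=org.apache.log4j.PatternLayout
log4j.appender.console.layout.ConversionPattern=%d{yy/MM/dd HH:mm:ss} %p %c{1}: %m%n
EOF

# 3) Hand it to the driver at submit time:
#      spark-submit \
#        --driver-java-options "-Dlog4j.configuration=file:/tmp/log4j-debug.properties" \
#        ...
grep rootCategory /tmp/log4j-debug.properties   # → log4j.rootCategory=DEBUG, console
```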
When you say it is stuck localizing, a few things to check:
- Is there a firewall between the nodes?
- Is there enough disk space on each node?
- What is the spark job trying to do? Is it working with large files?
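The checks above can be run quickly from a shell (a hedged sketch: the local-dirs path, HDFS input path, hostname, and port are placeholders you should take from your own yarn-site.xml and topology):

```shell
# Disk space on the NodeManager local dirs (the value of
# yarn.nodemanager.local-dirs in yarn-site.xml; /tmp is a placeholder):
df -h /tmp

# Size of the source data Spark is asked to read
# (large inputs make localization slow):
#   hdfs dfs -du -s -h /path/to/input

# Basic connectivity between nodes (8042 is the default
# NodeManager web port; host is a placeholder):
#   nc -z -w 5 worker-node-1 8042 && echo reachable
```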
It looks like you are providing a large amount of source data to be read by
the Spark client, and localization is timing out (default: 600 secs).
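If slow localization really is the cause, one option is to give YARN more time before it expires the container. A hedged sketch follows: which property the 600-second figure corresponds to is an assumption on my part (both liveness-monitor settings below default to 600000 ms in Hadoop 2.7), so verify against your yarn-default.xml before changing anything:

```shell
# Example yarn-site.xml fragment raising the expiry from the
# 10-minute default to 20 minutes (values are illustrative):
cat > /tmp/yarn-site-snippet.xml <<'EOF'
<property>
  <name>yarn.am.liveness-monitor.expiry-interval-ms</name>
  <value>1200000</value>
</property>
<property>
  <name>yarn.nm.liveness-monitor.expiry-interval-ms</name>
  <value>1200000</value>
</property>
EOF
grep -c '<property>' /tmp/yarn-site-snippet.xml   # prints 2
```

Restart the ResourceManager/NodeManagers after merging such a change into the real yarn-site.xml.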
On Tue, Aug 10, 2021 at 4:36 PM Amit Chavan <[email protected]> wrote:
> Hi,
> I am Amit Chavan, currently working at Sattrix Software Solution Pvt.
> Ltd. We are using open-source Apache Hadoop 2.7.1. For the last five
> months we have been hitting the error "Container is not running, current
> state is localizing". We have tried every possible solution, including
> configuration changes and increasing Spark cores and memory; we have even
> deleted production data, yet the error has not gone away.
> Could you please suggest a possible solution for this issue?
>