Yes, let me try that. Changing the max mapper slots actually requires changing the Hadoop config, since I just found that it's a "final" param.
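For reference, limiting a TaskTracker to one map slot means editing mapred-site.xml on that node and restarting the TT. A minimal sketch (MR1-era property name; the `<final>` flag is what prevents a job's own job.xml from overriding the value):

```xml
<!-- mapred-site.xml on the TaskTracker node -->
<property>
  <name>mapred.tasktracker.map.tasks.maximum</name>
  <value>1</value>
  <!-- remove any <final>true</final> here if you want jobs
       to be able to override this per-job via job.xml -->
</property>
```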
On Wed, Jul 11, 2012 at 10:05 PM, Harsh J <[email protected]> wrote:
> Your problem is more from the fact that you are running > 1 map slot
> per TT, and multiple mappers are getting run at the same time, all
> trying to bind to the same port. Limit your TT's max map tasks to 1
> when you're relying on such techniques to debug, or use the
> LocalJobRunner/Apache MRUnit instead.
>
> On Thu, Jul 12, 2012 at 9:16 AM, Yang <[email protected]> wrote:
> > I set the following params to be false in my pig script (0.10.0)
> >
> > SET mapred.map.tasks.speculative.execution false;
> > SET mapred.reduce.tasks.speculative.execution false;
> >
> > I also verified in the jobtracker UI in the job.xml that they are indeed
> > set correctly.
> >
> > when the job finished, jobtracker UI shows that there is only one attempt
> > for each task (in fact I have only 1 task too).
> >
> > but when I went to the tasktracker node and looked under the
> > /var/log/hadoop/userlogs/job_id_here/
> > dir, there are 3 attempt dirs:
> > job_201207111710_0024 # ls
> > attempt_201207111710_0024_m_000000_0  attempt_201207111710_0024_m_000001_0
> > attempt_201207111710_0024_m_000002_0  job-acls.xml
> >
> > so 3 attempts were indeed fired??
> >
> > I have to get this controlled correctly because I'm trying to debug the
> > mappers through eclipse,
> > but if more than 1 mapper process is fired, they all try to connect to the
> > same debugger port, and the end result is that nobody is able to
> > hook to the debugger.
> >
> > Thanks
> > Yang
>
> --
> Harsh J
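For context, the debug setup being discussed usually looks something like this in the Pig script (a sketch, not taken from the thread; port 8000 is an arbitrary choice). With `server=y`, each child JVM tries to listen on the given port, so two concurrent mappers on one TaskTracker cannot both bind it; with `server=n` they instead all dial out to the same Eclipse listener. Either way, more than one mapper per node breaks the session, hence the advice to cap map slots at 1:

```pig
-- make each map/reduce child JVM stop and wait for a debugger (sketch)
SET mapred.child.java.opts '-agentlib:jdwp=transport=dt_socket,server=y,suspend=y,address=8000';
-- server=y: the child JVM binds port 8000 and waits for Eclipse to attach,
-- so only one child JVM per node can hold the port at a time
```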
