" ,"IS_EXCEPTIONAL" ,"AGE" ,"SEMESTER" ,"DIGITAL_SIGNATURE"
,"CGPA" ,"PERCENTAGE" ,"HEIGHT" ,"ENROLMENT_DATE" ,"ENROLMENT_TIME"
,"JOINING_DATE_TIME" ,"YEARS_SPENT" ,"ROLL_NUMBER&
I consulted a Hadoop expert, and he told me he uses a value of 100
for merging segments. I will rerun the tests with 100 to check.
I found in stress tests that the default of 32 is way too high.
The Hadoop folks use a value of 10 during merge sorts so as not to stress the IO
that much. I also discovered that filesystems like ZFS use a default IO
queue depth of 10 per drive.
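For context, the setting discussed here caps the merge fan-in: how many sorted runs get combined in a single pass. A lower value means more passes but fewer runs read concurrently, so less IO pressure at any moment. A minimal sketch of that trade-off (class and constant names are hypothetical, not the project's actual API):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.PriorityQueue;

public class MergeFactorDemo {
    // Hypothetical knob mirroring the setting under discussion: how many
    // sorted runs are merged per pass. 10 is the value the Hadoop folks
    // reportedly use; 32 would merge more runs at once per pass.
    static final int MERGE_FACTOR = 10;

    // Repeatedly merge up to MERGE_FACTOR runs at a time until one remains.
    static List<Integer> mergeAll(List<List<Integer>> runs) {
        List<List<Integer>> pending = new ArrayList<>(runs);
        while (pending.size() > 1) {
            List<List<Integer>> next = new ArrayList<>();
            for (int i = 0; i < pending.size(); i += MERGE_FACTOR) {
                int end = Math.min(i + MERGE_FACTOR, pending.size());
                next.add(mergeOnce(pending.subList(i, end)));
            }
            pending = next;
        }
        return pending.isEmpty() ? new ArrayList<>() : pending.get(0);
    }

    // One k-way merge pass over at most MERGE_FACTOR runs, via a min-heap.
    // Each heap entry is {value, runIndex, positionInRun}.
    static List<Integer> mergeOnce(List<List<Integer>> runs) {
        PriorityQueue<int[]> heap = new PriorityQueue<>((a, b) -> a[0] - b[0]);
        for (int r = 0; r < runs.size(); r++) {
            if (!runs.get(r).isEmpty()) {
                heap.add(new int[]{runs.get(r).get(0), r, 0});
            }
        }
        List<Integer> out = new ArrayList<>();
        while (!heap.isEmpty()) {
            int[] top = heap.poll();
            out.add(top[0]);
            int pos = top[2] + 1;
            if (pos < runs.get(top[1]).size()) {
                heap.add(new int[]{runs.get(top[1]).get(pos), top[1], pos});
            }
        }
        return out;
    }

    public static void main(String[] args) {
        List<List<Integer>> runs = List.of(
            List.of(1, 4, 7), List.of(2, 5, 8), List.of(3, 6, 9));
        // Prints [1, 2, 3, 4, 5, 6, 7, 8, 9]
        System.out.println(mergeAll(runs));
    }
}
```

With a real on-disk merge, each run in a pass is an open file being read, which is why a fan-in of 32 queues far more concurrent reads per drive than a 10-deep IO queue can absorb.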
I tried running tests with 10, 15 and 32 and there is