Robert G. Brown wrote:

>> The 2GB DIMMs emit about the same heat per module as the 1GB DIMMs. So if
>> you have a 1000-node cluster and you use the larger (slightly more
>> expensive) 2GB DIMMs vs. the 1GB DIMMs, you will emit somewhat less heat
>> overall. I haven't done the analysis, but I bet it would be close to a
>> good tradeoff for TCO.

> ask and ye shall receive ...


> The analysis is easy using Unka Rob's Foolproof Power Cost Estimate
> Rate: $1/watt/year.  This is just a ballpark number, deliberately
>
> ...


> Anyway, if a DIMM draws an average power of (say) 10W and is expected
> to be on for 3 years, that means it costs roughly $30 over its lifetime.
> So if the marginal cost of one 2 GB DIMM vs. two 1 GB DIMMs is $30 or
> less, it is break-even to a win to buy 2 GB DIMMs from a TCO point of
> view.  Similar

Using Kingston 1GB DIMMs and their 2GB counterparts as a metric; both are single-rank, x4 DDR2-667 registered ECC RAM.

From a random reseller:

        1GB KVR667D2S4P5/1G $77.91USD
        2GB KVR667D2S4P5K2/2G $150.41USD

so 2x 1GB DIMMs are $155.82USD, a $5.41USD higher cost than the 1x 2GB DIMM.

According to the datasheet for the 1 GB part (http://www.valueram.com/datasheets/KVR667D2S4P5_1G.pdf) it consumes 3.8W operating.

According to the datasheet for the 2GB part (http://www.valueram.com/datasheets/KVR667D2S4P5_2G.pdf) it consumes 4W operating.

Assuming 8 GB ram in each node:

        8x 1GB will consume about 30.4W,
        4x 2GB will consume about 16W

A 14.4W difference, lower with the less expensive configuration (you also save $21.64/node up front: 8 x $77.91 = $623.28 vs. 4 x $150.41 = $601.64).

So, over 1000 nodes, you save 14.4kW and about $21.6k up front.  Effectively for "free".
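The arithmetic above is easy to check; here is a quick back-of-the-envelope sketch in Python (the constants simply restate the Kingston prices and datasheet wattages quoted above, plus rgb's $1/W/year rule and a 3-year service life):

```python
# Prices and operating power from the figures quoted above.
PRICE_1GB = 77.91      # KVR667D2S4P5/1G, USD
PRICE_2GB = 150.41     # KVR667D2S4P5K2/2G, USD
WATTS_1GB = 3.8        # operating, per datasheet
WATTS_2GB = 4.0        # operating, per datasheet
DOLLARS_PER_WATT_YEAR = 1.0   # Unka Rob's ballpark rate
YEARS = 3                     # assumed service life

# An 8 GB node: eight 1 GB DIMMs vs. four 2 GB DIMMs.
cost_8x1 = 8 * PRICE_1GB
cost_4x2 = 4 * PRICE_2GB
watts_8x1 = 8 * WATTS_1GB
watts_4x2 = 4 * WATTS_2GB

upfront_saving = cost_8x1 - cost_4x2        # ~$21.64/node
power_saving_w = watts_8x1 - watts_4x2      # 14.4 W/node
lifetime_saving = upfront_saving + power_saving_w * DOLLARS_PER_WATT_YEAR * YEARS

print(f"upfront saving : ${upfront_saving:.2f}/node")
print(f"power saving   : {power_saving_w:.1f} W/node")
print(f"3-year TCO win : ${lifetime_saving:.2f}/node")
```

Per node, the 4x 2GB configuration wins both up front and on power, so over a 3-year life the combined saving is on the order of $65/node before you even multiply by the node count.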

This says that for carefully constructed nodes, you will save money. Most cluster vendors simply try to get the lowest-cost parts they can, so if 8x 1GB costs them less than 4x 2GB, they will use the 8x 1GB parts. They are largely driven to this by price competition in the market and by trying to scrape by on very thin margins. Try convincing a purchasing agent at a university sometime that you are actually saving them money over the long haul with something that *appears* to have a slightly higher upfront cost ... it doesn't fly.

Note: FBDIMMs add about 4W/unit above DDR2 RAM. This is a shame, as Clovertown is pretty nice for a number of codes (those that are not memory-bandwidth bound). This gives Clovertown (and related) systems about a 16-32W additional load per node, which translates into a large additional cooling/power bill and design burden.
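To put a rough number on that FBDIMM penalty (using the ~4W/module figure above, the same $1/W/year rate, and an assumed 4-8 populated slots per node):

```python
# Extra power drawn by FBDIMMs relative to DDR2, per the ~4W/module
# figure above; node slot counts of 4 and 8 are assumptions.
EXTRA_W_PER_FBDIMM = 4
YEARS = 3

for dimms in (4, 8):
    extra_w = dimms * EXTRA_W_PER_FBDIMM
    print(f"{dimms} FBDIMMs: +{extra_w} W/node, "
          f"~${extra_w * YEARS} per node over {YEARS} years at $1/W/year")
```

Across a 1000-node cluster that is an extra 16-32 kW of load that has to be powered and cooled, before the compute parts are even considered.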

> considerations hold for high- vs low-power CPUs, LCD vs CRT monitors, and
> so on.  Just assume $1/W/year x (expected number of years of service),
> with a bit of Kentucky windage depending on power cost and AC efficiency
> in your area relative to the baseline assumptions of $0.08/kWh and an
> average CoP of 2.5 (given your average outdoor ambient temperature and
> the temperature you keep the server room at).  If you have load
> measurements and actual bills to use you can do better, but this will
> get you within a factor of 1.5 (either way).
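The $1/W/year rate checks out against those stated baselines; a quick sketch (nothing here beyond the $0.08/kWh and CoP 2.5 assumptions quoted above):

```python
# Derive the cost of running 1 W continuously for a year, including
# the A/C energy needed to remove that watt of heat again.
HOURS_PER_YEAR = 24 * 365      # 8760
PRICE_PER_KWH = 0.08           # USD, baseline assumption
COP = 2.5                      # heat removed per watt of A/C input

watt_year_kwh = 1 * HOURS_PER_YEAR / 1000.0   # 8.76 kWh to run 1 W for a year
cooling_kwh = watt_year_kwh / COP             # extra A/C energy to remove that heat
total_cost = (watt_year_kwh + cooling_kwh) * PRICE_PER_KWH
print(f"${total_cost:.2f} per watt-year")     # ~$0.98, i.e. roughly $1
```

So the rule really is a $0.98/W/year figure rounded to $1, which is why it scales linearly with local electricity price and inversely with cooling efficiency.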

That, and fewer parts means a lower absolute number of failures, but that is another issue.

> Which I do not address either.  And larger DIMMs give you more slots

This should be an interesting (electrical) analysis. More noise sources and sinks. More "moving" parts. We try to err on the side of fewer moving parts.

> for future expansion should you want to scale up a calculation besides,
> for that matter, which might be worth something even if the marginal
> cost of bigger DIMMs is a slight TCO loss (as it might be in a region
> where power/cooling are really cheap).
>
>      rgb




--

Joseph Landman, Ph.D
Founder and CEO
Scalable Informatics LLC,
email: [EMAIL PROTECTED]
web  : http://www.scalableinformatics.com
       http://jackrabbit.scalableinformatics.com
phone: +1 734 786 8423
fax  : +1 866 888 3112
cell : +1 734 612 4615

_______________________________________________
Beowulf mailing list, Beowulf@beowulf.org
To change your subscription (digest mode or unsubscribe) visit 
http://www.beowulf.org/mailman/listinfo/beowulf
