On Mar 11, 2008, at 4:51 AM, Adam C Powell IV wrote:

[ Jeff/Tim: This is about the outstanding Debian bug report(s) regarding the architectures without atomic ops -- where we cannot build Open MPI. It has been suggested in the past to try libatomic-ops-dev, which seems to lack one
or two instructions needed by Open MPI. ]

Good context; thanks.

On 8 March 2008 at 09:12, Adam C Powell IV wrote:
| Hi Dirk,
|
| I'm afraid I don't have a lot of time just now, but to me next steps
| seem like:
|      1. Install libatomic-ops-dev.
|      2. Try building openmpi without the included atomic ops and with
|         this lib.
|      3. If it works, great!  If it doesn't, try to adjust the calls
|         and/or ask on the openmpi mailing list.
|      4. If they suggest a workaround, great!  If not, wishlist
|         libatomic-ops-dev to add the needed functionality.
|      5. When everything works, push the change upstream.
|
| If you don't get to it first, I can do 1-2 in about 2-3 weeks...
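
[ For concreteness, steps 1-2 above could start with a smoke test like
the one below on each candidate arch. This is only a rough sketch, not
Open MPI code: it exercises a few of libatomic-ops' documented AO_
primitives, and whether those are the exact ones Open MPI needs is
still to be verified. Build with something like
"cc ao_test.c -o ao_test -latomic_ops". ]

    /* ao_test.c -- minimal libatomic-ops smoke test.
     * Each AO_ primitive below is only defined on targets where
     * libatomic-ops can implement it, so a failed compile is
     * itself a useful result.
     */
    #include <stdio.h>
    #include <atomic_ops.h>

    int main(void)
    {
        volatile AO_t counter = 0;

        /* Atomic increment; returns the value before the add. */
        AO_t old = AO_fetch_and_add1(&counter);

        /* Compare-and-swap: succeeds only if counter still holds 1. */
        int swapped = AO_compare_and_swap(&counter, 1, 42);

        printf("fetch_and_add1 returned %lu, CAS %s, counter = %lu\n",
               (unsigned long) old,
               swapped ? "succeeded" : "failed",
               (unsigned long) AO_load(&counter));
        return 0;
    }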

Do you have access to any of the arches that are missing atomic ops
support upstream?

No, but if it works on the existing arches, it should work on the
missing ones, right?  Furthermore, if the ops happen to be the same,
who's to say upstream didn't get them from this lib in the first place?
[Jeff/Tim, I welcome your comments...]

The guy who did the majority of the atomic assembly is no longer on the project :-(, so I can't say for sure where they came from. It was probably a mixture of many different sources, such as instruction manuals for the chipsets involved, etc.

What platforms in particular are not supported that Debian wants?

Or are you in fact suggesting that we supplant what upstream has with
libatomic-ops-dev? I would hesitate a great deal before doing that -- I
tend to trust upstream in these matters.

That's fair. On the other hand, it can't hurt to try it, and we have a
good bit of time now for users to test it before the lenny release. If
nothing else, it's worth giving it a go in experimental. As I said, if
you don't get to it in a couple of weeks, I'll give it a go.

I'm afraid I know nothing about libatomic-ops-dev -- I did a few
quick/lame Google searches and couldn't turn up a home page for this
project (including on debian.org). Could someone point me in the right
direction?

We can investigate it and see if it meets our needs.

Debian has generally emphasized sharing code wherever possible. So for example, ffmpeg and mplayer have been strongly urged to do what it takes to share the decoder libraries, which are developed together, though the
release schedules of those two "front ends" have been very different.
Likewise, I shoehorned the pysparse Python sparse solver front end to
fit with Debian's superlu and umfpack (suitesparse package) libraries
in place of the versions provided in the upstream source. I think the
same principle is at work here.

Without knowing anything about libatomic-ops-dev, I can see three
possible roadblocks to integrating it into Open MPI:

1. license: Open MPI is BSD -- what's libatomic-ops-dev's license?

2. portability: does it work outside of Linux? Does it work with
non-gcc compilers? The first is surmountable (see below), but the
second would be quite difficult to fix -- we would likely need fixes
from the libatomic-ops-dev maintainers. (A rough compile-time probe
for this is sketched just after this list.)

3. distribution: we have a core philosophy of aggressively trying to decrease the number of dependencies of Open MPI to enable simple download/install by novice users (we can't always succeed in this, but we do try). To this point, we have embedded a few "core" dependencies in the Open MPI source code distribution itself so that you don't have to have them installed to build/run Open MPI (e.g., particularly on platforms that may not have them already installed, such as OS X or Solaris). The atomic operations likely fit into this category such that the OMPI community may be resistant to requiring a 3rd party library just to be able to install/run.
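
Regarding the probe I mentioned in point 2: from a quick look at the
library's header, libatomic-ops appears to advertise each primitive it
can implement on the current compiler/arch via an AO_HAVE_* macro, so
a configure-time check could be as simple as a compile test. A rough
sketch only -- the particular primitives listed are my guess at what
we'd need, not a vetted list:

    /* Compile-time probe: fails to compile wherever libatomic-ops
     * cannot supply these primitives.  libatomic-ops defines
     * AO_HAVE_<op> for each operation it can implement on the
     * current compiler/target.
     */
    #include <atomic_ops.h>

    #if !defined(AO_HAVE_fetch_and_add1) || \
        !defined(AO_HAVE_compare_and_swap) || \
        !defined(AO_HAVE_nop_full)
    #error "libatomic-ops lacks needed primitives on this target"
    #endif

    int main(void) { return 0; }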

One *possibility* is that we could use the included atomics unless
specifically directed to use libatomic-ops (e.g., via a configure
option such as --with-libatomic-ops=/foo). There are lots of "if"s in
there, though -- if the license is compatible, if the library meets our
needs, ...etc. So we would need to investigate a few things first.
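
Purely to illustrate the shape of that option: if configure defined
some symbol when given --with-libatomic-ops, the included atomics
could be swapped for AO_ equivalents behind a thin shim. Everything
below is hypothetical -- OMPI_USE_LIBATOMIC_OPS is an invented symbol
and the opal_atomic_* names are only illustrative, not actual Open MPI
code:

    /* Hypothetical shim mapping Open MPI-style atomic primitives
     * onto libatomic-ops.  OMPI_USE_LIBATOMIC_OPS is an invented
     * configure-time define, not a real Open MPI symbol. */
    #ifdef OMPI_USE_LIBATOMIC_OPS

    #include <atomic_ops.h>

    static inline int opal_atomic_cmpset(volatile AO_t *addr,
                                         AO_t oldval, AO_t newval)
    {
        /* Nonzero iff *addr was oldval and has been set to newval. */
        return AO_compare_and_swap(addr, oldval, newval);
    }

    static inline void opal_atomic_mb(void)
    {
        AO_nop_full();   /* full memory barrier */
    }

    #else
    /* ...fall back to the atomics included in the Open MPI tree... */
    #endif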

Generally speaking, it may be worthwhile to go to upstream now. So
lemme CC them, as well as co-maintainer Manuel.

Good idea. I often work with non-responsive upstream developers, so my
first recourse is usually to "just do it", but thankfully that is not
the case with openmpi.

We're not always quick to reply, but we try.  :-)

I should point out to Jeff and Tim that we do get quasi-constant grief
about Open MPI not spanning the Debian universe of arches. I personally
fall back to LAM where Open MPI is missing, but that is indeed somewhat
cheesy. Longer-term, full support across all hardware platforms would
be great. But I am talking way out of my area of expertise here -- what
do you upstream guys think? Let us have it, and don't hold back.


The biggest problem is maintenance. I can't easily justify to my employer spending time on maintenance of platforms that we officially don't care about. Such is true with many of the other Open MPI developers -- we don't really have someone who is "batting cleanup" to pick up all the loose odds and ends on platforms that aren't officially supported. :-\

If libatomic-ops-dev can fix some of these problems (by automatically picking up [atomic ops] support on some platforms that we don't actively support, especially since our main assembly developer is now gone), that would be great. But it'll require some investigation first.

I should also point out that we're gearing up for branching for the v1.3 series and are rushing to finish some critical features before deadline. To be honest and not inflate expectations, I kinda doubt that we'll have the time to be able to perform due diligence on libatomic-ops-dev and/or integrate it before 1.3 branching. :-(

--
Jeff Squyres
Cisco Systems


