[bug#27850] gnu: mpi: openmpi: Don't enable thread-multiple
From: Dave Love
Subject: [bug#27850] gnu: mpi: openmpi: Don't enable thread-multiple
Date: Tue, 01 Aug 2017 21:10:23 +0100
User-agent: Gnus/5.13 (Gnus v5.13) Emacs/24.5 (gnu/linux)
Ludovic Courtès <address@hidden> writes:
>> Maybe, but what about the non-ABI compatibility I expect there is? (I
>> don't know whether there's still any penalty from thread-multiple
>> anyhow; I guess not, as I see it's not the default.)
>
> I propose this because you had written that the “performance penalty for
> thread-multiple is supposed to be mitigated in the most recent openmpi.”
> If it’s not, then fine.
I don't know the value of "mitigated". I could ask or, better, measure
when I get back from holiday (at least micro-benchmarks over
Infiniband).
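[For reference, a sketch of how the option in question surfaces in Open MPI builds; the configure invocation below is illustrative, not taken from the Guix package definition:

```shell
# Build-time: the flag this patch proposes dropping from the package.
# (Open MPI's configure option for enabling MPI_THREAD_MULTIPLE support.)
./configure --enable-mpi-thread-multiple

# After installation, check whether a given build has it enabled:
ompi_info | grep -i "thread support"
```

Micro-benchmarks comparing builds with and without the flag would then show whatever penalty remains.]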
>> If anyone's using it seriously, I'd have thought effort would be better
>> spent on support for SLURM (as it's in Guix) and supporting
>> high-performance fabrics (which I started on).
>
> You already mentioned openfabrics a couple of times I think. Mentioning
> it more won’t turn it into an actual package. :-) It’s on my to-do
> list, I guess it’s on yours too, so we’ll get there.
Sure. It's just what seems important to me. I'll post what I've got,
but if someone else is already doing it, fine, and I won't duplicate
effort.
> What do you have in mind for SLURM?
There's integration with SLURM (--with-slurm), PBS/Torque, and LSF (or,
I guess, OpenLava in the free world). I don't know much about them,
but they build MCA modules. Unlike the gridengine support, they link
against libraries for the resource managers, so you want them to be
add-ons which are only installed when required (not like the Fedora
packaging).
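[A sketch of the configure flags involved; the paths are illustrative placeholders, not actual Guix store paths:

```shell
# Open MPI resource-manager integration (illustrative paths):
#   --with-slurm      SLURM support (builds plm/ras MCA components)
#   --with-tm=DIR     PBS/Torque, via the tm library in DIR
#   --with-lsf=DIR    LSF, via the libraries in DIR
./configure --with-slurm --with-tm=/opt/torque --with-lsf=/opt/lsf

# Afterwards, list which launcher (plm) components were actually built:
ompi_info | grep plm
```

Since --with-tm and --with-lsf link against the resource managers' own libraries, building them unconditionally would drag those libraries into the closure, hence the suggestion to make them optional add-ons.]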
> As for “using it seriously”, I think this is a needlessly aggressive way
> to express your frustration.
I'm sorry I'm mis-communicating trans-Manche, at least. It wasn't meant
like that at all and I'll try to be more careful. Please assume I'm a
friendly hacker, even if I have strong opinions, which I hope I can
justify!
> People *are* using Guix “seriously” in HPC
I meant openmpi, not Guix generally. "Seriously" meant applications
which are communication-intensive (like the latency-sensitive DFT
applications).
> already, but (1) different application domains emphasize different
> aspects of “HPC”, and (2) there’s on-going work to improve Guix for HPC
> and your feedback is invaluable here.
I hope I can give useful feedback, and any criticism is meant
constructively. However, I'm not representative of UK HPC people --
happier to use functional Scheme than Python, and believing in packaging
for a start!
Happy hacking.