guix-devel

Re: Open MPI keeps references to GCC, GFortran, etc.


From: Ludovic Courtès
Subject: Re: Open MPI keeps references to GCC, GFortran, etc.
Date: Mon, 31 Jul 2017 15:57:23 +0200
User-agent: Gnus/5.13 (Gnus v5.13) Emacs/25.2 (gnu/linux)

Hello Dave,

Dave Love <address@hidden> skribis:

> Ludovic Courtès <address@hidden> writes:

[...]

>> Interesting.  It’s not a “should” though IMO, in the sense that we add
>> additional inputs only when we have a good reason to do so.
>
> I think I was misunderstanding.  Is the intention actually to get rid of
> dependencies on the compilers?  (I assume that should apply to C as well
> as Fortran.)  I guess that's arguable, but at least the compilers are
> used by mpicc etc., and Fedora and Debian development packages depend
> on, or recommend, the compilers.

My intent was to remove the *run-time* dependency of openmpi on gcc &
co. (as returned by ‘guix gc --references’ or ‘guix size openmpi’).

> Looking at the packaging more closely, I think it needs, or should have,
> various changes.  --enable-static clobbers dynamically-loaded MCA
> components, which I think is a non-starter.  One question I have is
> why are builtin atomics turned on?  They normally aren't, and I don't
> know what the consequences are.

No idea, you probably know better than me.

That said, I suggest addressing one problem at a time.  :-)
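
If/when we do drop ‘--enable-static’, something along these lines might
do the trick (untested sketch: the variant name is made up, and I’m
assuming the flag is currently passed via #:configure-flags):

  (use-modules (guix packages) (guix utils) (gnu packages mpi))

  ;; Hypothetical variant of Open MPI built without "--enable-static",
  ;; so the dynamically-loaded MCA components are preserved.
  (define openmpi-shared-mca
    (package
      (inherit openmpi)
      (name "openmpi-shared-mca")
      (arguments
       (substitute-keyword-arguments (package-arguments openmpi)
         ((#:configure-flags flags ''())
          `(delete "--enable-static" ,flags))))))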

> You can reduce the closure with other changes I made to bring it roughly
> into line with Debian and Fedora and how I'd build it otherwise.
> Removing the obsolete vampirtrace support, and the devel headers (which
> are only for building external MCA components, which I've never seen
> done), and replacing the valgrind integration with the library wrapper,
> brings the "size" output down to:
>
>   store item                                                       total    self
>   /gnu/store/dws3a11p4s2qhnmapc4p1nm7g36hr3p4-openmpi-1.10.7       438.6     9.7   2.2%

Sounds good!

> I don't understand why gfortran and gfortran-lib are so large anyway --
> they seem to duplicate C stuff, and gcc-lib bundles things I wouldn't
> expect it to.  The RHEL gcc-5 equivalents are ~10 and 2 MB intrinsically.

Yeah, gfortran is actually a GCC with support for C/C++/Fortran.  That
also deserves to be optimized, but it’s not trivial, I think.

> I assume the store is intended to be on a shared filesystem which
> compute nodes don't duplicate, which helps with space, but I don't think
> that should be required.  The stateless systems I've set up used a
> separate compute node image which was much smaller than the login node
> one by omitting non-runtime rpms.

Yeah, the store is typically meant to be shared over NFS or similar.  On
the topic of setting up Guix on a cluster, you might want to check:

  https://elephly.net/posts/2015-04-17-gnu-guix.html
  https://hal.inria.fr/hal-01161771/en

More on that later…

[...]

> I think it's fine to remove the path from the point of view of (not)
> breaking things, and other information strings could go, like overall
> romio configure options.  (The relevant info about romio from ompi_info
> is just the filesystem types supported.)

OK.

[...]

>>> I was intending to look at parameterizing the build on gfortran version,
>>
>> I suppose you could do:
>>
>>   (define openmpi-with-gfortran7
>>     (package
>>       (inherit openmpi)
>>       (name "openmpi-gfortran7")
>>       (inputs `(("gfortran" ,gfortran-7)
>>                 ,@(alist-delete "gfortran" (package-inputs openmpi))))))
>
> Right.
>
>> (That said, if the .mod files are compatible among gfortran versions, it
>> probably doesn’t make sense to do this.)
>
> But they're not compatible, which is a real problem.

It shouldn’t be a problem if you do something like shown above, then.
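
To avoid repeating that boilerplate for each compiler version, a small
procedure could do it (untested sketch; the procedure name and the
naming scheme are made up):

  (use-modules (srfi srfi-1) (guix packages)
               (gnu packages gcc) (gnu packages mpi))

  ;; Hypothetical helper: build an Open MPI variant against the given
  ;; gfortran package.
  (define (openmpi-with-gfortran gfortran suffix)
    (package
      (inherit openmpi)
      (name (string-append "openmpi-gfortran" suffix))
      (inputs `(("gfortran" ,gfortran)
                ,@(alist-delete "gfortran" (package-inputs openmpi))))))

  (define openmpi-gfortran7
    (openmpi-with-gfortran gfortran-7 "7"))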

> By the way, I don't want to be an HPC bigot, but HPC requirements seem
> to be largely a superset of most others, and applicable in other areas.

Agreed!  And I think Guix also makes it easier to meet some of the HPC
requirements, from what I’ve seen.

Thanks,
Ludo’.


