help-octave

Re: Octave Forge MPI package on Sungrid cluster


From: c.
Subject: Re: Octave Forge MPI package on Sungrid cluster
Date: Tue, 13 May 2014 20:30:20 +0200

On 13 May 2014, at 19:01, Timo <address@hidden> wrote:

> Dear Carlo,
> 
> I am now able to use the Octave MPI package on the Sun Grid cluster.
> I am writing now because the system administrator of the cluster has
> contacted me with the following comment:
> 
> "However, we noticed that the nodes you are using are overoccupied, with a 
> load near to 45, instead of the optimum value 16. This is because of the use 
> of threads, that are competing for the same resources. I do not know your 
> code, but it seems that it is used/designed for being executed in 
> hyperthreading machines, which is disabled in our nodes. 
> 
> This doesn't affect other users, and there is no harm in it, but the nodes 
> are busy switching between threads, so almost 50% of the CPU load is due to 
> this constant context switching, and I believe the calculation is in fact 
> running at half the speed it would reach with no extra threads open. 
> 
> Please consider whether the code allows running exactly 16 processes on 16 
> cores, with no additional threads, and if it is possible, change it so that 
> the resources are used entirely for the computation. "
> 
> The MPI functionality I use simply divides the work into equal pieces and 
> calls the client nodes for computation, similar to, and inspired by, the 
> "helloworld" script that ships with the MPI package.
> 
> I am afraid I cannot answer the administrator's questions adequately, but I 
> thought it might be interesting for you to read, and perhaps you could 
> comment on it or propose a way to address this issue. 
> 
> I hope I am not stealing too much of your time, 
> thanks and greetings
> Timo

Timo,

It would be better to keep the conversation on the list.

Anyway, I am afraid there is very little I can do to help: neither
Octave (*) nor the MPI package uses any multithreading directly,
so this is again just a problem with your local setup.
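
For reference, the equal-split pattern you describe starts exactly one
single-threaded Octave process per MPI rank. A minimal sketch of that
pattern, along the lines of the package's "helloworld" example (the
problem size, the per-rank computation, and the message tag below are
just placeholders):

MPI_Init ();
CW     = MPI_Comm_Load ("NEWORLD"); # load the communicator, as in helloworld.m
myrank = MPI_Comm_rank (CW);
nproc  = MPI_Comm_size (CW);

N     = 1600;                       # total number of work items (placeholder)
chunk = N / nproc;                  # equal pieces (assumes nproc divides N)
idx   = (myrank * chunk + 1):((myrank + 1) * chunk);

local_result = sum (idx .^ 2);      # stand-in for the real computation

tag = 100;
if (myrank != 0)
  MPI_Send (local_result, 0, tag, CW);  # workers report to rank 0
else
  total = local_result;
  for src = 1:(nproc - 1)
    [res, info] = MPI_Recv (src, tag, CW);
    total += res;
  endfor
  printf ("total = %g\n", total);
endif

MPI_Finalize ();

Nothing in there spawns threads, so any extra threads on the nodes must
be coming from libraries loaded underneath Octave.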

Maybe you are linking to a multithreaded version of one of the dependencies?
FFTW or OpenBLAS, perhaps, or another multithreaded BLAS/LAPACK?

I agree with your administrator that it is better not to use more
threads than cores if you want your code to run efficiently, but
essentially you have to put the question back to your sysadmin:
find out which of the installed libraries are configured to run
on multiple threads.

Without knowing the details of your cluster configuration I can only 
"shoot in the dark" by suggesting you try something like
export OMP_NUM_THREADS=1
export OPENBLAS_NUM_THREADS=1

This may help if OpenMP or OpenBLAS is involved.
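
In a Grid Engine submission script the whole thing might look like this
(a sketch only: the parallel environment name, the core count, and the
script name are placeholders for whatever your site uses):

#!/bin/bash
#$ -cwd
#$ -pe mpi 16                    # site-specific parallel environment (placeholder)
export OMP_NUM_THREADS=1         # one thread per MPI process for OpenMP code
export OPENBLAS_NUM_THREADS=1    # likewise for OpenBLAS
mpirun -np 16 octave -q --eval "myscript"   # runs myscript.m on each rank (placeholder)

That way each of the 16 cores gets exactly one single-threaded process,
which is what your administrator is asking for.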

HTH,
c.
        

(*) unless you are starting Octave in GUI mode, but that would make no sense
for MPI applications, which can only run in batch mode ...

