
Re: mpi 1.1.1 released


From: Sukanta Basu
Subject: Re: mpi 1.1.1 released
Date: Fri, 3 Jan 2014 08:04:14 -0500

On Fri, Jan 3, 2014 at 7:57 AM, c. <address@hidden> wrote:
> On 3 Jan 2014, at 13:42, c. <address@hidden> wrote:
>
>> Sukanta,
>>
>> Could you please avoid top-posting in your messages?
>> It makes it really hard to follow the discussion.
>> In particular I must have missed some message as I don't
>> understand what you refer to in your mail above.
>>
>> c.
>
>
> OK,
>
> I now found Michael's message (which for some reason
> was downloaded by my client AFTER your reply) and I understand
> what you are talking about.
>
>> On 3 Jan 2014, at 13:25, Sukanta Basu <address@hidden> wrote:
>>
>>> Hi Michael,
>>>
>>> Thanks for catching the typo in the sample file. I quickly created
>>> this sample code for testing. The outer loop should have a different
>>> index.
>
> I'm going to reformat the code and post it in the bug tracker in order
> to keep track of the progress.
>
>>> I have to send multi-dimensional arrays in my work. There is no
>>> alternative.
>
> well, you could actually try to reshape your arrays before MPI_Send and
> after MPI_Recv. This causes only a very small overhead and, if it avoids
> the memory leak, it could help identify the origin of the problem.
>
>>> Best regards,
>>> Sukanta
>
> Thanks,
> c.

Hi Carlo,

Thanks in advance for posting the code on the bug tracker. My code deals
with many arrays; reshaping them for every communication would
definitely slow down the computation. Today, I am going to check whether
using one-dimensional arrays solves the problem.
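For reference, the reshape workaround Carlo described would look roughly
like the sketch below (this is not code from the thread; it assumes the
usual calling sequence of the octave-forge mpi package with MPI_Comm_Load,
MPI_Send and MPI_Recv, and the ranks and tags are made up for illustration):

  ## Minimal sketch: flatten a 3-D array before MPI_Send, restore it after MPI_Recv.
  ## Ranks and tags below are hypothetical, chosen only for this example.
  MPI_Init ();
  CW  = MPI_Comm_Load ("NEWORLD");   # world communicator
  rnk = MPI_Comm_rank (CW);

  TAG_DIMS = 1;                      # hypothetical tag for the size vector
  TAG_DATA = 2;                      # hypothetical tag for the flattened data

  if (rnk == 0)
    A = rand (4, 5, 6);                        # example 3-D array
    MPI_Send (size (A), 1, TAG_DIMS, CW);      # send the dimensions first
    MPI_Send (A(:),     1, TAG_DATA, CW);      # send the data as a column vector
  elseif (rnk == 1)
    dims = MPI_Recv (0, TAG_DIMS, CW);         # receive the dimensions
    v    = MPI_Recv (0, TAG_DATA, CW);         # receive the flattened data
    A    = reshape (v, dims);                  # restore the original shape
  endif

  MPI_Finalize ();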

Cheers,
Sukanta

-- 
Sukanta Basu
Associate Professor
North Carolina State University
http://www4.ncsu.edu/~sbasu5/

