octave-maintainers

Re: [fem-fenics] MPI parallelisation


From: Marco Vassallo
Subject: Re: [fem-fenics] MPI parallelisation
Date: Mon, 28 Jul 2014 11:21:53 +0200




On Mon, Jul 28, 2014 at 10:22 AM, Eugenio Gianniti <address@hidden> wrote:

On 28 Jul 2014, at 08:53, Marco Vassallo <address@hidden> wrote:




On Mon, Jul 28, 2014 at 1:13 AM, Eugenio Gianniti <address@hidden> wrote:

On 16 Jul 2014, at 14:46, Eugenio Gianniti <address@hidden> wrote:

> Dear all,
>
> I am currently stuck in the implementation of the MPI parallelisation in fem-fenics. I wrote some code for the parallel assembly of matrices, but if I try to run the examples it crashes with DOLFIN internal errors even before this new code is executed. I quickly recall what I know about the issue:
>

Moreover, I also tried to fix the problem of meshes defined with the msh package. DOLFIN provides a handy method to distribute the mesh, but marked subdomains are not supported in version 1.3.0. Indeed, Mesh.oct does mark a subdomain, but I cannot work out why it does, so I would need Marco to explain some details of that, please.

Hi Eugenio,

we mark the subdomains in the mesh.oct file in order to be consistent with the mesh representation in the msh package. In fact, the (p, e, t) representation contains this information, so we keep it in fem-fenics as well. I agree with you that it is not widely used, but, for example, the msh_refine function needs it in order to give back to Octave a refined mesh with all the subdomains available (if they were present in the non-refined mesh).
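[For readers unfamiliar with the (p, e, t) format: a minimal sketch of how the last row of the t (triangle) matrix can carry the subdomain label. The struct and array contents below are illustrative stand-ins, not the actual msh or fem-fenics data structures.]

```cpp
#include <array>
#include <vector>

// Illustrative (p, e, t) triangle matrix: each column is one triangle.
// Rows 0-2 hold the vertex indices; the last row holds the subdomain
// label, mirroring how the msh package stores region information.
struct PetMesh {
  // 4 x num_triangles, stored column-wise for simplicity.
  std::vector<std::array<int, 4>> t;
};

// Collect the subdomain label of every triangle.
std::vector<int> subdomain_labels(const PetMesh& mesh) {
  std::vector<int> labels;
  for (const auto& column : mesh.t)
    labels.push_back(column[3]);  // last row: subdomain marker
  return labels;
}
```

With, say, two triangles in subdomain 1 and one in subdomain 2, subdomain_labels would return {1, 1, 2}; msh_refine needs to preserve exactly this row when refining.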

What do you mean by "they are not supported in FEniCS 1.3.0"? By the way, FEniCS 1.4.0 is now available, so we should later check whether fem-fenics is compliant with it.

If I try to distribute the mesh with marked subdomains I get:


I have now tried to add a check so that subdomains are not marked during parallel execution, but the solution is still not good. An example with fully Neumann conditions shows the strange behaviour I just mentioned: its parallel solution is the serial one divided by the number of processes. Instead, an example with a DirichletBC leads to an outright wrong solution. Perhaps there is something I am missing that I should ask about on the FEniCS list.
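[Editorial note: one way such a 1/P scaling can arise, offered only as a guess and not as a diagnosis of the fem-fenics code: if the system matrix ends up fully reduced across processes while the right-hand side keeps only the local contribution, then with a uniform partition each local RHS is roughly the global one divided by the number of processes, and so is the solution. A toy sketch with a scalar "system":]

```cpp
// Toy scalar "linear system" a * x = b, assembled from per-process
// pieces. If the matrix contributions are summed (reduced) across
// processes but the RHS contributions are not, the computed solution
// is the true one divided by the number of processes.
double solve_with_unreduced_rhs(double a_local, double b_local,
                                int nprocs) {
  double a_global = a_local * nprocs;  // matrix correctly reduced
  double b_unreduced = b_local;        // RHS left as the local piece
  return b_unreduced / a_global;       // = (b_global/a_global) / nprocs
}
```

Here the true solution would be b_local / a_local, so the returned value is exactly that divided by nprocs, reproducing the symptom described above.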

Concerning release 1.4.0, it has been out for a while, but at first glance it is not compatible with the previous one. Ubuntu, for instance, still ships 1.3.0 in its official repositories, so I figured this could wait a bit.

Eugenio


Hi, the error came from here [1]

void LocalMeshData::extract_mesh_data(const Mesh& mesh)
{
  if (!mesh.domains().is_empty())
  {
    dolfin_error("LocalMeshData.cpp",
                 "extract local mesh data",
                 "Marked subdomains are not yet supported");
  }

  // Clear old data
  clear();
  // ...

extract_mesh_data() is an internal DOLFIN function which extracts the mesh data on the main process and splits it among the processes.
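[Conceptually, the splitting step amounts to partitioning the global list of cells into roughly equal blocks, one per process. The sketch below shows a simple contiguous-block partition, assuming nothing about DOLFIN's actual graph-based partitioning.]

```cpp
#include <algorithm>
#include <cstddef>
#include <utility>

// Return the [begin, end) range of global cell indices owned by `rank`
// when `num_cells` cells are split into contiguous blocks over `nprocs`
// processes, spreading any remainder over the first processes.
std::pair<std::size_t, std::size_t>
local_cell_range(std::size_t num_cells, int nprocs, int rank) {
  std::size_t base = num_cells / nprocs;
  std::size_t rem = num_cells % nprocs;
  std::size_t begin =
      static_cast<std::size_t>(rank) * base +
      std::min<std::size_t>(static_cast<std::size_t>(rank), rem);
  std::size_t end =
      begin + base + (static_cast<std::size_t>(rank) < rem ? 1 : 0);
  return {begin, end};
}
```

For 10 cells over 3 processes this gives ranges [0, 4), [4, 7), [7, 10): every cell is owned by exactly one process, which is the invariant the real distribution code has to maintain as well.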

As you already noticed, marked subdomains are not supported. We should find a simple workaround to deal with this while we wait for it to be supported in FEniCS.
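[One possible stopgap, sketched under the assumption that the process count is queried before marking: skip the marking step whenever more than one process is running, so that LocalMeshData never sees a non-empty domains() object. should_mark_subdomains below is a hypothetical helper, not an existing fem-fenics function.]

```cpp
// Hypothetical guard for Mesh.oct: only mark subdomains in serial
// runs, so that DOLFIN's LocalMeshData::extract_mesh_data() does not
// hit the "Marked subdomains are not yet supported" error in parallel.
bool should_mark_subdomains(int num_processes) {
  return num_processes == 1;  // skip marking when running in parallel
}
```

The price, of course, is that subdomain information from the (p, e, t) representation is silently dropped in parallel runs until DOLFIN supports distributing it.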


marco




Eugenio



