
Re: [ESPResSo-users] iccp3m question


From: Stefan Kesselheim
Subject: Re: [ESPResSo-users] iccp3m question
Date: Tue, 7 Oct 2014 22:17:53 +0200

Hi,

On Oct 6, 2014, at 6:21 AM, Xikai Jiang <address@hidden> wrote:

> I have tested the case using the same P3M parameters on 1 core and 8 cores, 
> but they still give different net charges in the system.
> 
> One way I found to reduce the net charge is to shift all atoms in the system 
> in the x- and y-directions by a small amount (~0.1 nm). I am wondering whether 
> this is related to the wall atoms that sit right on a domain decomposition boundary.

That is most likely right, but quite unfortunate. I'm quite sure I have 
already identified the problem.

Here is how the parallelism of ICCP3M works: the algorithm assumes that the 
electrostatics solver, in this case P3M, calculates the forces on all particles, 
including ghost particles, and that they are stored in the force property of 
each particle. The electric field is obtained by dividing these forces by the 
particle charge. In each iteration, the algorithm then calculates the new 
surface charge density from the electric field projected onto the normal vector. 
Multiplied by the area of the corresponding discretisation element, this yields 
the new charge to assign to the particle. As the forces on real particles and 
ghost particles are identical (to the bit), the charge update does not have to 
be communicated, but is consistent on all nodes. 
Then, a new iteration step can be performed. 
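
Very roughly, the per-element update looks like the sketch below. This is not 
the actual ESPResSo code: the function name, the relaxation parameter lambda 
and the omitted physical prefactors (dielectric contrast etc.) are my own 
simplifications.

/* Sketch of the ICCP3M charge update for one boundary element.
 * Illustrative only: prefactors and convergence checks are left out. */
static void icc_update_element(const double force[3],  /* force from P3M on this particle */
                               const double normal[3], /* outward normal of the element */
                               double area,            /* area of the discretisation element */
                               double lambda,          /* relaxation (mixing) parameter */
                               double *q)              /* in/out: induced charge */
{
  /* Electric field at the element: the force divided by the current charge. */
  double E[3] = { force[0] / *q, force[1] / *q, force[2] / *q };

  /* New surface charge density from the field component along the normal. */
  double sigma = E[0] * normal[0] + E[1] * normal[1] + E[2] * normal[2];

  /* Charge = density times element area, mixed with the old value so the
   * iteration converges. Identical forces on a real particle and its ghost
   * give bitwise identical new charges, so nothing needs to be communicated. */
  *q = (1.0 - lambda) * (*q) + lambda * sigma * area;
}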

In iccp3m_iteration(), the loop is executed over the local cells. This would be 
correct if the local cells included the ghost cells. It is, however, possible 
that this assumption is wrong. In that case the magnitude of the charges 
becomes inconsistent between the nodes, as it is not communicated; a sketch of 
what covering the ghosts as well would look like follows below. One of the 
other ESPResSo developers should know. Can you help, guys?
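
Schematically, keeping the nodes consistent would mean running the update over 
the ghost cells as well, so that every ghost copy is assigned exactly the same 
new charge as its real counterpart. A minimal sketch, assuming a cell system 
along the lines of ESPResSo's local/ghost cell lists (the types and the helper 
update_cell_charges() are placeholders):

/* Minimal placeholder types standing in for ESPResSo's cell system;
 * the real structures differ in detail. */
typedef struct { int n_particles; /* ... particle data ... */ } Cell;
typedef struct { int n; Cell **cell; } CellPList;

extern CellPList local_cells;  /* cells owned by this node */
extern CellPList ghost_cells;  /* copies of neighbouring nodes' cells */

void update_cell_charges(Cell *cell);  /* applies the update above per particle */

void iccp3m_update_all_cells(void)
{
  /* What the iteration currently does: loop over the local cells only. */
  for (int c = 0; c < local_cells.n; c++)
    update_cell_charges(local_cells.cell[c]);

  /* What would keep all nodes consistent if the ghosts are not already
   * covered: update the ghost copies too, using their identical forces. */
  for (int c = 0; c < ghost_cells.n; c++)
    update_cell_charges(ghost_cells.cell[c]);
}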

I am very surprised that I have not found this problem earlier, but possibly 
the distance from the boundary was always large enough in my cases. If you 
prepare a smaller test case that makes it possible to check things faster, I 
can try to fix the problem. On the other hand, it is probably easy to check 
yourself: just add a few printfs to iccp3m_iteration(); a sketch of what to 
print follows below. But I clearly recommend a smaller system for that. 
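
Something like the helper below should already be revealing. I am assuming the 
usual MPI rank variable (this_node) and the particle fields for identity and 
charge, so treat the exact names as illustrative.

#include <stdio.h>

/* Debug helper: call it for every ICCP3M particle right after its new charge
 * has been assigned, e.g. icc_debug_print(this_node, part[i].p.identity,
 * part[i].p.q), then compare the sorted output of a 1-core and an 8-core run
 * of the same configuration. */
static void icc_debug_print(int node, int id, double q)
{
  printf("ICC: node %d, part %d, q = %.17g\n", node, id, q);
}

Particles that show up with different charges on different ranks should be the 
ones sitting on or right next to a domain decomposition boundary.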

Cheers
Stefan

