Re: [Help-gsl] finite element assembly and calling UMFPACK for linear system solution


From: Brijesh Upadhaya
Subject: Re: [Help-gsl] finite element assembly and calling UMFPACK for linear system solution
Date: Thu, 20 Jun 2019 00:28:37 +0300

Hi,

Yes, I remember now that I tried, two years ago in Fortran, what you
describe. I used to write FEM code in Fortran until I switched to GSL. The
main problem was the data dependency (see the old code below). I used a
dense matrix for the assembly and compressed it before calling UMFPACK. The
problem size was small, < 1000 unknowns, so it was not that time consuming;
however, the triangle vertices stored in tmat (the triangle matrix) are not
contiguous, so triangles that share a vertex write to the same entries of
the stiffness matrix, which creates a dependency in the assembly loop. I
think there is probably a better way to do it, but I didn't want to spend
more time on it (now you already have it in GSL :) ).

!==================================================================================================!

! (dNu . Curl Ni, Curl Nj) => Stiffness Matrix {dNu should be available beforehand}
!==================================================================================================!

  subroutine DNu_CurlUV(self, this)

   implicit none
   type(Assembly), intent(inout) :: self
   type(Element), intent(in) :: this
   real(dp) :: Nj(this%dof, 2), Sj(this%dof, this%dof)
   integer :: k

!  Zero the stiffness matrix before accumulation
!   !$OMP PARALLEL DO PRIVATE(k)
   do k = 1, this%N
     self%S(:,k) = 0.0_dp
   end do
!   !$OMP END PARALLEL DO

   !TODO: This loop is not trivial to parallelize: triangles sharing a vertex
   !      update the same entries of S, so the iterations are not independent
   do k = 1, this%trinum
     Nj = transpose(matmul(self%dNu(:,:,k), this%curl(:,:,k)))
     Sj = matmul(Nj, this%curl(:,:,k)) * this%detAW(k)
     Sj = Sj + self%S(this%tmat(1:this%dof, k), this%tmat(1:this%dof, k))
     self%S(this%tmat(1:this%dof, k), this%tmat(1:this%dof, k)) = Sj
   end do

  end subroutine DNu_CurlUV
!==================================================================================================!


I used to follow a book, and I think it describes some ways to fix that
problem:
1. Hager, Georg and Wellein, Gerhard, "Introduction to High Performance
Computing for Scientists and Engineers", CRC Press, 2011.

It would be nice to try out the new feature first. I'll update you.

BR,
Brijesh

On Wed, Jun 19, 2019 at 11:31 PM Patrick Alken <address@hidden> wrote:

> For sorting the indices, take a look at csr_sort_indices from SciPy:
>
> https://github.com/scipy/scipy/blob/v1.3.0/scipy/sparse/sparsetools/csr.h
> (line 358)
>
> I'll probably end up doing something similar in GSL when I find the time.
> You'll need to convert everything to CSC (sorting row indices instead of
> column indices).
>
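> A minimal per-column sort could look like this (just a sketch: the name
> csc_sort_indices is illustrative, and p, i, x are the CSC column pointers,
> row indices, and values of an n-column matrix; an insertion sort per
> column serves the same purpose as SciPy's csr_sort_indices, but for CSC):
>
> static void
> csc_sort_indices(const int n, const int *p, int *i, double *x)
> {
>   int j, k, l;
>
>   for (j = 0; j < n; ++j)
>     {
>       /* insertion sort of the slice [p[j], p[j+1]) by row index,
>          moving the matching values along with the indices */
>       for (k = p[j] + 1; k < p[j + 1]; ++k)
>         {
>           int row = i[k];
>           double val = x[k];
>
>           for (l = k - 1; l >= p[j] && i[l] > row; --l)
>             {
>               i[l + 1] = i[l];
>               x[l + 1] = x[l];
>             }
>
>           i[l + 1] = row;
>           x[l + 1] = val;
>         }
>     }
> }
>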
> For the parallel assembly, you'll need to clone the git repository and
> compile/install it.
>
> Then, you'll do something like this:
>
> gsl_spmatrix * A = gsl_spmatrix_alloc(M, N);
>
> // non-parallel assembly so GSL can learn the sparsity pattern
> for (i = 0; i < M; ++i)
>   for (j = 0; j < N; ++j)
>     gsl_spmatrix_set(A, i, j, Aij);
>
> // at this point, GSL has built a binary tree structure of the sparse
> // pattern of A; now set a flag to tell gsl_spmatrix_set not to allow
> // modifications of that tree structure
> A->spflags |= GSL_SPMATRIX_FLG_FIXED;
>
> // now you can safely call gsl_spmatrix_set from multiple threads (taking
> // care not to set the same element from different threads, of course)
> #pragma omp parallel for
> for (i = 0; i < M; ++i) {
>   for (j = 0; j < N; ++j) {
>     gsl_spmatrix_set(A, i, j, Aij); // different threads are changing
>                                     // different elements of A at the same time
>   }
> }
>
> Note that this feature is still experimental and may change slightly
> before the next release. I would be quite interested to hear your
> experiences with it, and whether you have any suggestions for improvement.
> I have not looked in detail at how other sparse matrix libraries do
> parallel assembly, so there may be better ways of doing it, but so far I
> have used this in my work with good success.
>
> Patrick
>
> On 6/19/19 2:12 PM, Brijesh Upadhaya wrote:
>
> This would be awesome.
>
> I have a nonlinear magnetic problem, so for now I have reduced the number
> of unknowns (coarse FE mesh). As far as I know, the non-zero pattern of the
> nonlinear part of the stiffness matrix should remain fixed, while the
> matrix itself gets reassembled in every Newton iteration. I think the new
> multi-threaded feature will come in handy. I can then increase the number
> of unknowns as well (use a denser FE mesh :) ).
>
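> As a rough sketch, I imagine each Newton iteration would look something
> like this (assemble_serial(), row[], col[], new_value() and the sizes are
> placeholders for my own code; the spflags / GSL_SPMATRIX_FLG_FIXED part is
> the git feature you described):
>
> gsl_spmatrix *J = gsl_spmatrix_alloc(M, N);
>
> assemble_serial(J);                     /* first pass: GSL learns the pattern */
> J->spflags |= GSL_SPMATRIX_FLG_FIXED;   /* lock the binary tree */
>
> for (iter = 0; iter < max_iter; ++iter) {
>   /* refill the fixed non-zero pattern in parallel */
>   #pragma omp parallel for
>   for (k = 0; k < n_entries; ++k)
>     gsl_spmatrix_set(J, row[k], col[k], new_value(k));
>
>   /* then compress, solve with UMFPACK, update, and test convergence */
> }
>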
> I'll ask you once I get hold of a few things. For now I need to make
> UMFPACK work, so I am trying to sort the column indices and the
> corresponding values.
>
> BR,
> Brijesh
>
> On Wed, Jun 19, 2019 at 10:45 PM Patrick Alken <address@hidden> wrote:
>
>> A new feature on the git is the ability to parallelize the assembly of
>> the sparse matrix. This feature is not yet documented, but I can give you
>> more details if interested.
>>
>> Essentially you do a first assembly (non-parallelized) so GSL can learn the
>> non-zero pattern of the matrix. Then, if you want to change the matrix
>> entries later (without changing the non-zero pattern), you can essentially
>> "lock" the binary tree structure so that no new nodes will be inserted.
>> Then you can change existing matrix entries from multiple threads without
>> worrying.
>>
>> This is useful if you need to build many sparse matrices quickly which
>> all share the same non-zero pattern. Let me know if you want more details.
>>
>> Patrick
>>
>> On 6/19/19 1:40 PM, Brijesh Upadhaya wrote:
>>
>> Thanks, Patrick, for your prompt response.
>>
>> I saw lots of new updates in the GSL git. And yes, now I assemble the
>> entire matrix and add the boundary conditions before compressing it. For
>> now I'll just clone and compile the newer version.
>>
>> BR,
>> Brijesh
>>
>> On Wed, Jun 19, 2019 at 7:41 PM Patrick Alken <address@hidden> wrote:
>>
>>> Hello,
>>>
>>> On 6/19/19 9:15 AM, Brijesh Upadhaya wrote:
>>> > Hi everyone,
>>> >
>>> > I am working on a finite element problem and wanted to ask if any of you
>>> > have tried using UMFPACK for the linear algebra. I have the following
>>> > findings when using the sparse matrix module from gsl-2.5.
>>> >
>>> > 1. In CCS the row indices are not sorted, so UMFPACK gives an error (see
>>> > the attached example).
>>> It's on my todo list to add a "sorted CSC" and "sorted CSR" spmatrix
>>> type. Hopefully it will get done for the next release.
>>> > 2. Adding boundary conditions was also difficult once the matrix was
>>> > compressed, so I couldn't use gsl_spmatrix_add() to add the mass matrix
>>> > to the stiffness matrix. Instead I had to do it in triplet form, calling
>>> > the get/set routines. I could only add the boundary conditions at the
>>> > triplet level, compress at the end, and pass the pointers to the UMFPACK
>>> > routines.
>>> Yes, the triplet (COO) format is designed for easy set/get operations,
>>> using a binary tree for fast lookup and insertion. Once you compress the
>>> matrix, the binary tree disappears, and you can only modify existing
>>> matrix entries - you cannot add new ones. Can't you assemble the entire
>>> matrix, including boundary conditions, before compressing it?
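>>>
>>> For example, something along these lines should work (a sketch only:
>>> accum(), apply_dirichlet(), conn, n_elements and dof are placeholders for
>>> your own assembly data; gsl_spmatrix_get/set/ccs are the actual GSL
>>> calls):
>>>
>>> /* accumulate stiffness, mass and boundary terms in triplet (COO)
>>>    form, then compress to CCS once the pattern is complete */
>>> gsl_spmatrix *T = gsl_spmatrix_alloc(n, n);      /* triplet format */
>>>
>>> for (e = 0; e < n_elements; ++e) {               /* element loop */
>>>   for (a = 0; a < dof; ++a) {
>>>     for (b = 0; b < dof; ++b) {
>>>       int i = conn[e][a], j = conn[e][b];
>>>       double old = gsl_spmatrix_get(T, i, j);
>>>       gsl_spmatrix_set(T, i, j, old + accum(e, a, b));
>>>     }
>>>   }
>>> }
>>>
>>> apply_dirichlet(T);               /* boundary conditions, still triplet */
>>>
>>> gsl_spmatrix *A = gsl_spmatrix_ccs(T);   /* compress at the very end */
>>>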
>>> > 3. Another thing (not related to GSL) is that UMFPACK doesn't use
>>> > (size_t *); instead it takes (int *) or (long *) for the column pointers
>>> > and row indices. I tried to cast (size_t *) to (long *) to reuse the
>>> > pointers of the gsl_spmatrix object.
>>> I have decided to change the size_t pointers to int in gsl_spmatrix, for
>>> the reason you cite. Many external libraries use integer arrays instead
>>> of size_t. This has already been done on the git repository (you can
>>> just clone the gsl git).
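>>>
>>> Until then, with the gsl-2.5 size_t arrays, you can simply copy them into
>>> int arrays before calling UMFPACK, roughly like this (a sketch: A is the
>>> CCS gsl_spmatrix, x and b are the solution and right-hand-side vectors,
>>> error checks are omitted, and the problem is assumed small enough for int
>>> indices):
>>>
>>> /* needs <stdlib.h>, <gsl/gsl_spmatrix.h>, <umfpack.h> */
>>> int *Ap = malloc((A->size2 + 1) * sizeof(int));
>>> int *Ai = malloc(A->nz * sizeof(int));
>>> void *Symbolic, *Numeric;
>>> size_t k;
>>>
>>> for (k = 0; k <= A->size2; ++k)
>>>   Ap[k] = (int) A->p[k];          /* column pointers */
>>> for (k = 0; k < A->nz; ++k)
>>>   Ai[k] = (int) A->i[k];          /* row indices (sorted per column) */
>>>
>>> umfpack_di_symbolic((int) A->size1, (int) A->size2, Ap, Ai, A->data,
>>>                     &Symbolic, NULL, NULL);
>>> umfpack_di_numeric(Ap, Ai, A->data, Symbolic, &Numeric, NULL, NULL);
>>> umfpack_di_solve(UMFPACK_A, Ap, Ai, A->data, x, b, Numeric, NULL, NULL);
>>>
>>> umfpack_di_free_symbolic(&Symbolic);
>>> umfpack_di_free_numeric(&Numeric);
>>> free(Ap); free(Ai);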
>>> >
>>> > I am a bit curious to know if anyone has had a better experience. I
>>> > wanted to use a direct solver instead of an iterative solver.
>>> I would also like to include a direct solver in GSL itself, though I
>>> don't know when I'll have time to implement it.
>>> >
>>> > Kind regards,
>>> > Brijesh
>>>
>>>
>>>
>>
>

