help-gsl

Re: Memory limit allocation


From: Patrick Alken
Subject: Re: Memory limit allocation
Date: Wed, 9 Dec 2020 08:38:43 -0700
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101 Thunderbird/68.10.0

Hi Pablo,

  The function gsl_spmatrix_alloc needs to allocate space for the
non-zero entries of the matrix. By default, it assumes a density of 10%.
So if N = 100,000, it will try to allocate N*N*0.1 = 1e9 elements,
each of size sizeof(double), so the total allocation will be 8 GB. But
in the triplet representation it needs not only the data values but
also arrays tracking the row and column indices, so it will need
another 2*1e9*sizeof(int) = 8 GB. So about 16 GB total.
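
For concreteness, here is a quick back-of-the-envelope sketch of that
arithmetic (the 10% figure is GSL's default; typical 64-bit sizes are
assumed, and the small fixed overhead of the gsl_spmatrix struct itself
is ignored):

  #include <stdio.h>

  int main(void)
  {
      const double N = 100000.0;            /* matrix dimension */
      const double density = 0.1;           /* GSL's default assumed density */
      const double nnz = N * N * density;   /* ~1e9 non-zero slots */

      /* triplet storage: one double per value plus two ints for the
         row and column indices of each non-zero entry */
      const double bytes = nnz * (sizeof(double) + 2 * sizeof(int));

      printf("approx. allocation: %.0f GB\n", bytes / 1e9);  /* ~16 GB */
      return 0;
  }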

If you don't have 16 GB of RAM available, and if your matrix has much
less than 10% density, then you can instead use the function:

gsl_spmatrix_alloc_nzmax(n, n, nzmax, GSL_SPMATRIX_TRIPLET)

which lets you state precisely how many non-zero entries will be in
the matrix, so you can reduce the amount of memory allocated.
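
A minimal sketch of that approach, using a hypothetical nzmax of 5*N
(roughly five non-zeros per row; substitute whatever bound fits your
actual matrix):

  #include <gsl/gsl_spmatrix.h>

  int main(void)
  {
      const size_t N = 100000;
      const size_t nzmax = 5 * N;   /* hypothetical bound on non-zero entries */

      /* triplet-format storage sized for nzmax entries instead of the
         default 10% density -- roughly 8 MB here rather than ~16 GB */
      gsl_spmatrix *m = gsl_spmatrix_alloc_nzmax(N, N, nzmax,
                                                 GSL_SPMATRIX_TRIPLET);

      gsl_spmatrix_set(m, 0, 0, 1.0);   /* insert entries as needed */

      gsl_spmatrix_free(m);
      return 0;
  }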

Patrick

On 12/9/20 6:22 AM, Pablo wrote:
> Hi,
>
> I've been searching the web to solve this problem but I haven't
> found any solutions. My problem is related to the allocation limit of
> a program using sparse matrices. My project needs very large sparse
> matrices, with dimensions up to, e.g., 100,000x100,000, and the
> program returns:
>
> gsl: init_source.c:389: ERROR: failed to allocate space for memory block
> Default GSL error handler invoked.
>
> I've read about things like a memory leak in a loop, but my code
> couldn't be simpler:
>
> gsl_spmatrix* m = gsl_spmatrix_alloc(100000, 100000);
>
> How could I remove the limitation that prevents me from allocating
> space for a large matrix like that? With dimensions 10,000x10,000
> it still works, and with other libraries such as Eigen3 I'm also
> able to build large matrices.
>
> Pablo
>
>



