RE: memory exhausted when reading 129M file
From: Zheng, Xin (NIH) [C]
Subject: RE: memory exhausted when reading 129M file
Date: Tue, 14 Aug 2012 10:01:13 -0400
Thank you! Your idea works even without preallocating memory. The whole data
set should occupy ~100 MB (assuming 4-byte ints and 1-byte chars). I have no
idea about Octave's internals; in Matlab, the cell array would take 1 GB.
So it seems there is some room for 'textscan' in Octave to be improved, and
the same goes for 'textread'. 'dlmread' in Octave, on the other hand, reads
the same file fast, except that it replaces all strings with 0.
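The dlmread behavior described above can be demonstrated with a small made-up file (the filename and contents here are illustrative, not the original 129M data set):

```octave
% Write a small mixed string/number file (hypothetical example data).
fid = fopen("mixed.txt", "w");
fprintf(fid, "abc 1.5 def 2.5\n");
fprintf(fid, "ghi 3.5 jkl 4.5\n");
fclose(fid);

% dlmread parses only numeric fields; non-numeric fields come back as 0,
% which is the behavior mentioned in the message above.
m = dlmread("mixed.txt", " ");
```

Here `m(1,:)` would be `[0 1.5 0 2.5]`: the string columns survive only as zeros, so dlmread is fast but lossy for this kind of mixed file.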
Sorry for raising the problem without being able to resolve it myself.
Xin1
-----Original Message-----
From: Przemek Klosowski [mailto:address@hidden]
Sent: Monday, August 13, 2012 5:46 PM
To: address@hidden
Subject: Re: memory exhausted when reading 129M file
On 08/13/2012 04:25 PM, Zheng, Xin (NIH) [C] wrote:
> fid=fopen('filename')
> data=textscan(fid, '%s%f%s%f%s%f%s%f%s%f%s%f%s%f%s%f%s%f%s%f%s%f');
I haven't checked your example with larger data, but since your file is 120 MB
and the sample data you posted runs about 190 characters per line, you should
have roughly 660,000 lines of 11 numbers each, i.e. about 7 million numbers;
at 8 bytes per double that should take about 55 MB of memory. Is that about
right?
If so, then it's the textscan implementation that somehow uses more memory than
it needs to. Could you try simpler formats, e.g.
data=textscan(fid, "%s%f")
and try preallocating the data array
data={zeros(7e6,1),zeros(7e6,1)}
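Another way to keep peak memory bounded, as a sketch building on the suggestion above (the filename and chunk size are illustrative, and the format is simplified to one string/number pair per line), is to pass textscan a repeat count and accumulate only the columns you need:

```octave
% Create a small sample file (contents are made-up illustration data).
fid = fopen("big.txt", "w");
for i = 1:10
  fprintf(fid, "label%d %f\n", i, i * 0.5);
endfor
fclose(fid);

% Read it back in fixed-size chunks so only one chunk's worth of
% parsed cells is live at a time.
fid = fopen("big.txt", "r");
chunklines = 4;            % illustrative chunk size
nums = [];
while !feof(fid)
  c = textscan(fid, "%s %f", chunklines);  % read up to 4 lines per call
  nums = [nums; c{2}];     % keep only the numeric column
endwhile
fclose(fid);
% nums now holds all 10 numbers: 0.5, 1.0, ..., 5.0
```

For a real 129M file you would pick a much larger chunk (and ideally preallocate `nums`), but the pattern is the same: the string cells from each chunk can be discarded as soon as the chunk is processed.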
_______________________________________________
Help-octave mailing list
address@hidden
https://mailman.cae.wisc.edu/listinfo/help-octave