From: Dmitri A. Sergatskov
Subject: Re: Working patch for FFTW 3.0.x and Nd FFT's
Date: Wed, 18 Feb 2004 17:47:55 -0700
User-agent: Mozilla/5.0 (X11; U; Linux i686; en-US; rv:1.6) Gecko/20040115
John W. Eaton wrote:
On 18-Feb-2004, Dmitri A. Sergatskov <address@hidden> wrote:

| Being bitten by a ~1 GB octave-core a few times myself, could we have some
| equivalent of "ulimit -c" which limits the size of octave-core?

Saving an incomplete (truncated) file would not be very useful. Should Octave try to save as much data as possible by sorting the variables by total size and then skipping the largest (if any) until the total save size is less than the limit?
... I guess that would be ideal. In case it has significant overhead, another option would be to save variables in chronological order, so the newest variables are the most likely to be lost. But that is what we would want anyway: we want to save the older work, and since the newest variables either caused the crash or appeared during it, they have low fidelity anyway.

But even a truncated core would be useful. E.g., currently if I mistype rand(1000000,1) as rand(1000000), octave will exhaust all memory and die, dumping 4 GB into the core file some 20 minutes later. Since I do not have the patience to wait those 20 minutes, I go and kill -9 the octave process, ending up with a truncated core. Having a ulimit would accomplish that with higher efficiency :).

I did a few simple tests, and it seems that octave can partially recover variables from truncated ASCII or octave-binary files, and the variables seem to appear in chronological order automatically...

Sincerely,
Dmitri.