>> I know that you've made a decision, but regarding "Creating small new
>> files for each message is unfortunately not an option on SSD drives" -
>> it's not an issue. First of all, modern SSD drives have a pretty long
>> lifespan, and even the swap area is located on them.
>Continuously creating/deleting 10 files a second for the purpose of
>communication is simply bad engineering. Modern SSD drives may be more
>resilient, but people use old ones, and some of my systems run on CF
>cards. Those are definitely not designed for this type of (useless) load.
>> IIRC the UNIX default interval after which files are flushed to disk
>> is 25 seconds.
>For ext4 it is 5 seconds. You would not want to lose 25 seconds of data
>after a power loss. I could not find the figure for NTFS.
As far as I know, on Unix short-lived files (and pipe data) are never
physically written to disk; they just live in the fs buffers in memory.
I imagine this is still true in all current operating systems that are
capable enough to run Octave.
Sure, you get all the fs overhead, so I agree that in principle it is
bad engineering, but I do not think that the mass memory would suffer
from this kind of load.
Francesco Potortì (ricercatore) Voice: +39.050.621.3058
ISTI - Area della ricerca CNR Mobile: +39.348.8283.107
via G. Moruzzi 1, I-56124 Pisa Skype: wnlabisti
(entrance 20, 1st floor, room C71) Web: http://fly.isti.cnr.it
Regarding bad engineering - it all depends. For example, files used as
semaphores are trivially easy to debug using just shell commands.
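As a minimal sketch of the semaphore-file idea (in Python, with a
hypothetical lock path - the original discussion names no specific code),
atomic acquisition can rely on O_CREAT|O_EXCL, which guarantees that only
one process succeeds in creating the file:

```python
import os
import tempfile

# Hypothetical semaphore-file path; any path on a local filesystem works.
SEM_PATH = os.path.join(tempfile.gettempdir(), "octave_msg.lock")

def acquire(path=SEM_PATH):
    """Try to take the semaphore by creating its file atomically.

    O_CREAT | O_EXCL makes creation fail if the file already exists,
    so exactly one process can win the race.  Returns True on success,
    False if another process currently holds the semaphore.
    """
    try:
        fd = os.open(path, os.O_CREAT | os.O_EXCL | os.O_WRONLY)
        os.close(fd)
        return True
    except FileExistsError:
        return False

def release(path=SEM_PATH):
    """Drop the semaphore by removing the file."""
    os.remove(path)
```

The debugging point above then amounts to plain shell commands: `ls` shows
whether the semaphore is held, and `rm` clears a stale one by hand.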
We may also discuss speed. Creating a file involves a system call: the
current task is suspended and the OS is called in to perform the file
operation, which implies a context switch. So this won't be fast when
many short variables have to be transferred from process to process.
I was using the file approach for, say, 10 transfers per second, and I
needed to transfer blocks of about 8 Kbytes.