Hi list,
I'm currently running BER measurements for an IEEE 802.15.4 system. To that end, I have created a flowgraph (in GRC/Python) that is executed once per SNR value until a certain number of bits has been processed. After each run, I call tb.stop() followed by tb.wait(); a sketch of the driver loop follows the error output below. Unfortunately, after a few runs (around 20), I get the following error message:
gr::vmcircbuf_sysv_shm: shmget (2): No space left on device
gr::vmcircbuf_sysv_shm: shmget (2): No space left on device
gr::vmcircbuf_sysv_shm: shmget (2): No space left on device
gr::buffer::allocate_buffer: failed to allocate buffer of size 64 KB
gr::vmcircbuf_sysv_shm: shmget (2): No space left on device
gr::vmcircbuf_sysv_shm: shmget (2): No space left on device
gr::vmcircbuf_sysv_shm: shmget (2): No space left on device
gr::buffer::allocate_buffer: failed to allocate buffer of size 64 KB
terminate called after throwing an instance of 'std::bad_alloc'
what(): std::bad_alloc
Aborted (core dumped)
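
For reference, the driver loop looks roughly like this. It's a minimal sketch: ber_flowgraph stands in for my GRC-generated class, and the actual 802.15.4 TX/channel/RX chain is replaced by placeholder blocks.

from gnuradio import gr, blocks

class ber_flowgraph(gr.top_block):
    # Stand-in for the GRC-generated flowgraph; the real 802.15.4
    # TX -> channel(snr_db) -> RX chain replaces the placeholder blocks.
    def __init__(self, snr_db, target_bits):
        gr.top_block.__init__(self, "ber_sim")
        src = blocks.vector_source_b([0, 1], repeat=True)  # placeholder bit source, one bit per byte item
        head = blocks.head(gr.sizeof_char, target_bits)    # ends the run after target_bits items
        sink = blocks.null_sink(gr.sizeof_char)            # placeholder for the error-counting sink
        self.connect(src, head, sink)

for snr_db in range(-5, 11):            # example SNR sweep in dB
    tb = ber_flowgraph(snr_db, 1000000)
    tb.run()                            # start() and wait for the head block to end the run
    tb.stop()                           # then stop() followed by wait(), as described above
    tb.wait()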
I don't really understand why this happens, as my memory usage is stable and below 20% of total RAM throughout the simulation. In my understanding, calling tb.stop() and tb.wait() should free everything the flowgraph allocated before. I also noticed that the block numbers (which are printed because I call set_min_output_buffer()) increase monotonically during the simulation, even though the top block is stopped multiple times. This might be completely unrelated, but it suggests that my understanding may not be correct.
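
In other words, my assumption was that this teardown at the end of each iteration releases everything before the next run starts:

tb.stop()   # ask the scheduler to stop all blocks
tb.wait()   # wait until all scheduler threads have exited
tb = None   # drop the last Python reference; my assumption was that this
            # destroys the top block and frees its circular buffers

(The explicit tb = None is only there to make the intent clear; in the loop above, the reference is dropped by the next assignment anyway.)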