
Re: Parallelization with live stream of data


From: Olaf Till
Subject: Re: Parallelization with live stream of data
Date: Sun, 16 Sep 2012 19:17:49 +0200
User-agent: Mutt/1.5.20 (2009-06-14)

On Sun, Sep 16, 2012 at 02:28:04PM +0300, Jose wrote:
> Hello Olaf.
> 
> On 14/09/12 23:49, Olaf Till wrote:
> >On Fri, Sep 14, 2012 at 05:26:06PM +0300, Jose wrote:
> >>Hello all.
> >>
> >>I have a live stream of measurements that I need to process with
> >>Octave in real time. The processing basically consists of the
> >>execution of a script when a new measurement arrives, with the
> >>measurement as argument. The problem is that the time between
> >>measurements might be shorter than the time required to execute the
> >>script, and I would lose data, as the measurements come at a
> >>constant rate.
> >>
> >>One solution that I can think of is parallelization: as I have
> >>several cores in my machine, I could assign the processing of
> >>incoming measurements to the free cores (Octave instances). Once a
> >>measurement has been processed, the core is ready to process a new
> >>measurement.
> >>
> >>To me this sounds like a classical problem that many others might
> >>have faced before.
> >>
> >>In this sense, I think I cannot use parcellfun, as I need to assign
> >>cores to measurements dynamically. I have also seen the packages
> >>multicore and parallel, and the MPITB toolbox. Before diving
> >>headfirst into code, documentation and so on, I'd appreciate some
> >>comments/suggestions from experienced users, as I am totally new to
> >>this world of parallelization.
> >>
> >>BR
> >>Jose
> >
> >Although I have not yet dived into how parcellfun works, you can
> >very probably use it. The helptext says the (maximum) recommended
> >number of processes is the number of cores (or one less, but I would
> >not take one less), but surely the author has provided scheduling so
> >that each process takes a new job from the pool of remaining jobs as
> >soon as the previous one is finished. Be sure to give it well more
> >jobs in one call than the number of cores, since there will of
> >course be no full usage of all cores when there are fewer unfinished
> >jobs left than there are cores.
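
As a concrete sketch of such a call (with a hypothetical
process_measurement function standing in for the real script, and a
cell array 'batch' of measurements collected so far), it could look
roughly like this:

  pkg load general   # or 'parallel', whichever provides parcellfun here

  ## hypothetical stand-in for the real processing script
  process_measurement = @(m) sum (m .^ 2);

  ## 'batch' stands for a cell array of measurements collected so far
  batch = num2cell (rand (32, 1));

  ## one process per core, and well more jobs than cores, so that every
  ## core stays busy until the pool runs low
  ncores = 4;                       # adjust to the machine
  results = parcellfun (ncores, process_measurement, batch);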
> 
> Yes, I guess this would work, but it is not optimal in the sense
> that it might keep some cores unused while there are jobs waiting in
> the queue. Another drawback is that feeding a large number of jobs
> will add delay to the real-time response that I am looking for, as
> one has to wait until the complete batch of jobs has finished before
> starting a new one. But it is definitely something to try.
> 
> Thanks for the comments
> Jose

Jose, you should always CC the list so that others can comment ...
For a real-time task, it might be better to do the scheduling manually
(but that's not exactly easy).

There is an openmpi_ext package, but I'm not the right person to
comment on this.

Alternatively, one can use Octave's fork(), waitpid(), pipe(), and
close() functions. The packages 'general' and 'parallel' both provide
functions for sending and receiving Octave variables over pipes
(fsave()/fload() and __bw_psend__()/__bw_prcv__(), respectively). They
require the additional use of Octave's fflush(). Both packages provide
a wrapper around Unix's _exit for child termination (__exit__() and
__internal_exit__(), respectively). The package 'parallel' also
provides a function select(), a wrapper around Unix's select. All of
these together allow parallelization with Octave without writing C or
C++ code.
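
A very rough single-worker sketch of that route, just to show its
shape (it assumes fsave()/fload() and __exit__() from 'general' behave
as described above, and uses a placeholder computation instead of the
real script):

  pkg load general                  # for fsave(), fload() and __exit__()

  [job_r, job_w] = pipe ();         # parent -> child: measurements
  [res_r, res_w] = pipe ();         # child -> parent: results

  pid = fork ();
  if (pid == 0)
    ## ----- child (worker) -----
    fclose (job_w);                 # close the ends this side doesn't use
    fclose (res_r);
    m = fload (job_r);              # block until a measurement arrives
    result = sum (m .^ 2);          # placeholder for the real processing
    fsave (res_w, result);
    fflush (res_w);
    __exit__ (0);                   # terminate the child immediately
  else
    ## ----- parent -----
    fclose (job_r);
    fclose (res_w);
    fsave (job_w, rand (1, 100));   # hand one measurement to the worker
    fflush (job_w);
    result = fload (res_r);         # blocks until the worker has answered
    waitpid (pid);                  # reap the finished child
  endif

With several such workers, select() on the result pipes can then tell
the scheduler which worker has finished and is free for the next
measurement.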

The latter approach, and possibly also the former (if it turns out to
be feasible for the multicore/single-machine case), require you to
write a scheduler.
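
In the simplest case (forking one new child per measurement, but never
more children at once than there are cores) the scheduler can be
little more than a loop that polls waitpid() with WNOHANG and forks a
worker whenever a core is free and a measurement is queued. A
hypothetical outline, leaving out the code that feeds 'queue' from the
live stream and that collects the results:

  ncores = 4;                       # number of cores, adjust to the machine
  running = 0;                      # number of live worker processes
  while (true)
    ## reap finished children without blocking
    [pid, status] = waitpid (-1, WNOHANG);
    if (pid > 0)
      running--;
    endif
    ## start a new worker if a core is free and a measurement is waiting
    if (running < ncores && ! isempty (queue))
      m = queue{1};
      queue(1) = [];
      if (fork () == 0)
        process_measurement (m);    # placeholder for the real script
        __exit__ (0);               # child exits when its job is done
      endif
      running++;
    endif
    pause (0.01);                   # don't spin at full speed
  endwhile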

Olaf

-- 
public key id EAFE0591, e.g. on x-hkp://pool.sks-keyservers.net

