Re: map-par slower than map
From: Zelphir Kaltstahl
Subject: Re: map-par slower than map
Date: Fri, 11 Nov 2022 12:25:53 +0000
Hi!
On 11/11/22 11:26, Damien Mattei wrote:
I rewrote a complete threaded routine, in both Guile and Racket, creating
threads and waiting for them to finish:
;; run the parallel code
{threads <+ (map (λ (seg) (call-with-new-thread
                            (λ () (proc-unify-minterms-seg seg))))
                 segmts)}

(nodebug
  (display-nl "waiting for threads to finish..."))

;; wait for threads to finish
(map (λ (thread) (join-thread thread)) ;;(+ start-time max-sleep)))
     threads)
It does not seem to block, but it is a bit slower than on a single CPU.
I have this config:
Chip: Apple M1
Total number of cores: 8 (4 performance and 4 efficiency)
It is better on a single CPU than with all the cores...
I read all the POSIX threads documentation. I admit Scheme is not C; I read a
bit on forums comparing C and Scheme.
In any thread, local variables should be on the stack of that thread.
The only doubt I have is, in Scheme (though the same question exists in C):
which portions of code are or are not copied into each thread? For this reason
I tried to encapsulate all the procedures used in parallel as internal defines
in the procedure passed to call-with-new-thread, hoping they are copied into
each thread. I hope the basic Scheme procedures are reentrant...
I have no explanation for why it is even a bit slower on multiple cores than on one.
Regards,
Damien
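For reference, stripped of the Scheme+ {...}/<+ notation, the create-and-join
pattern quoted above corresponds roughly to the following plain Guile sketch;
proc-unify-minterms-seg and segmts are only stand-ins here for your real
procedure and segment list:

(use-modules (ice-9 threads)) ; call-with-new-thread, join-thread

;; stand-in for the real per-segment work
(define (proc-unify-minterms-seg seg)
  (length seg))

;; stand-in for the real segment list
(define segmts (list (iota 10) (iota 10 10) (iota 10 20)))

;; run the parallel code: one POSIX thread per segment
(define threads
  (map (λ (seg)
         (call-with-new-thread
          (λ () (proc-unify-minterms-seg seg))))
       segmts))

;; wait for the threads to finish and collect their return values
(define results (map join-thread threads))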
Note that threads in Guile and Racket are different:
https://docs.racket-lang.org/reference/eval-model.html#%28part._thread-model%29:
> Racket supports multiple threads of evaluation. Threads run concurrently, in
the sense that one thread can preempt another without its cooperation, but
threads currently all run on the same processor (i.e., the same underlying
operating system process and thread).
https://www.gnu.org/software/guile/manual/html_node/Threads.html:
> The procedures below manipulate Guile threads, which are wrappers around the
system’s POSIX threads. For application-level parallelism, using higher-level
constructs, such as futures, is recommended (see Futures).
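As a minimal sketch of what the manual suggests (expensive-work and inputs are
just placeholder names, not from your code): futures hand work to a pool of
worker threads sized to the number of cores, and touch waits for each result:

(use-modules (ice-9 futures)) ; future, touch

;; placeholder for a CPU-bound computation
(define (expensive-work n)
  (let loop ((i 0) (acc 0))
    (if (= i n)
        acc
        (loop (+ i 1) (+ acc (* i i))))))

(define inputs (make-list 8 5000000))

;; schedule one future per input on the built-in worker pool
(define futures
  (map (λ (n) (future (expensive-work n))) inputs))

;; touch blocks until a future's value is available
(define results (map touch futures))

There is also par-map in (ice-9 threads), which packages the same idea as a
parallel drop-in for map.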
I believe another word for Racket's threads is "green threads". They are like
(or more like?) Python threads and do not run on another core. If you start
multiple Racket threads on the same Racket VM, they will all run on the same
core. No speedup is to be expected, unless without threads you would have been
waiting for IO or something similar. Racket threads are concurrent, but not
parallel.
I think the nature of Racket's threads is the answer to why it is slower than
single-threaded execution.
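In Guile, where call-with-new-thread does create real POSIX threads, it may
still be that the per-segment work is too small to pay for the thread creation
and synchronization overhead. A rough way to check on a given machine (spin and
the sizes below are arbitrary placeholders) is to time a CPU-bound toy function
with map versus par-map:

(use-modules (ice-9 threads)) ; par-map, current-processor-count

;; CPU-bound toy workload
(define (spin n)
  (let loop ((i 0) (acc 0))
    (if (= i n)
        acc
        (loop (+ i 1) (+ acc (* i i))))))

;; wall-clock time of a thunk, in seconds
(define (timed thunk)
  (let ((start (get-internal-real-time)))
    (thunk)
    (exact->inexact (/ (- (get-internal-real-time) start)
                       internal-time-units-per-second))))

(format #t "cores:   ~a~%" (current-processor-count))
(format #t "map:     ~a s~%" (timed (λ () (map spin (make-list 8 10000000)))))
(format #t "par-map: ~a s~%" (timed (λ () (par-map spin (make-list 8 10000000)))))

If par-map wins here but not on the real workload, then per-segment work that
is too small, or contention on shared data, could be what is eating the speedup
on the Guile side.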
Regards,
Zelphir
--
repositories: https://notabug.org/ZelphirKaltstahl