guile-devel

Re: Extremely slow for format & string-join


From: Daniel Hartwig
Subject: Re: Extremely slow for format & string-join
Date: Mon, 1 Apr 2013 15:40:48 +0800

On 1 April 2013 14:59, Daniel Llorens <address@hidden> wrote:
>
> Hello,
>
>> From: Daniel Hartwig <address@hidden>
>>
>> (define (str* str n)
>>  (call-with-output-string
>>    (lambda (p)
>>      (let lp ((n n))
>>        (unless (zero? n)
>>          (display str p)
>>          (lp (1- n)))))))
>>
>> Out of curiosity, how do the performance figures you showed compare
>> to Python's string repetition operator for similarly large values of N?
>
> I attempted a method that I thought should surely be faster using
> https://gitorious.org/guile-ploy
>
> (import (util ploy))
> (define str*-as-array (lambda (s n) (ravel (reshape s n (string-length s)))))
>
> ravel is essentially
>
> (define (ravel a)
>   (or (array-contents a) (array-contents (array-copy (array-type a) a))))
>
>
> reshape is more complicated but in this particular case it resolves
> to make-shared-array, so it's O(1).
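The O(1) sharing step can be sketched directly with make-shared-array (a sketch only: repeat-view is a hypothetical name, not ploy's actual reshape, and it relies on Guile strings being usable as rank-1 arrays, which they are):

```scheme
;; Build an n × len view of the string s without copying anything.
;; The index-mapping closure drops the row index, so every row of the
;; view reads the same underlying characters.  O(1) in n.
(define (repeat-view s n)
  (make-shared-array s
                     (lambda (i j) (list j))  ; row i is ignored
                     n (string-length s)))
```

Because every row of the view aliases the same characters, the view is not laid out contiguously in row-major order, so array-contents returns #f on it — which is why ravel must fall back to the copying branch.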
>
> Here's a full trace:
>
> scheme@(guile-user)> ,trace (string-length (str*-as-array "1234567890" 1000000))

>
> It is in fact quite a bit slower than your solution using
> call-with-output-string + display:
>
> scheme@(guile-user)> ,time (string-length (str* "1234567890" 1000000))
> $4 = 10000000
> ;; 0.528000s real time, 0.530000s run time.  0.000000s spent in GC.
> scheme@(guile-user)> ,time (string-length (str*-as-array "1234567890" 1000000))
> $5 = 10000000
> ;; 1.745000s real time, 1.750000s run time.  0.000000s spent in GC.
> scheme@(guile-user)>
>
> The profile is interesting, I think:
>
> scheme@(guile-user)> ,profile (string-length (str*-as-array "1234567890" 1000000))
> %     cumulative   self
> time   seconds     seconds      name
> 100.00      1.74      1.74  make-typed-array
>   0.00      1.74      0.00  call-with-prompt
>   0.00      1.74      0.00  start-repl
>   0.00      1.74      0.00  catch
>   0.00      1.74      0.00  #<procedure 1161a37c0 at ice-9/top-repl.scm:31:6 (thunk)>
>   0.00      1.74      0.00  apply-smob/1
>   0.00      1.74      0.00  run-repl
>   0.00      1.74      0.00  statprof
>   0.00      1.74      0.00  array-copy
>   0.00      1.74      0.00  #<procedure 117762d80 at statprof.scm:655:4 ()>
>   0.00      1.74      0.00  #<procedure 117b05e80 at <current input>:5:0 ()>
>   0.00      1.74      0.00  ravel
>   0.00      1.74      0.00  #<procedure 1161a36c0 at ice-9/top-repl.scm:66:5 ()>
>
> How can it be slower to allocate the result at once?
>

Shrug.  I do not know much about array internals.  You probably have
much more experience there than I do.

Except for the curious profile output, I suspect the overhead is due
to such factors as the repeated application of MAPFUNC and the
consequent arithmetic needed to access the shared array's contents.

I see no reason to expect O(1) allocation of storage to be a
significant factor here.  I have not checked, but suspect that
‘call-with-output-string’ is very efficient with its storage
allocation.  Of course, comparing either of these to the
original implementations using ‘string-join’ and ‘format’ I
certainly would expect the allocation performance to be
significant.
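For reference, the original implementations named in the subject line are not quoted in this thread, but they presumably looked something like the following sketches (reconstructions, not the original poster's code):

```scheme
(use-modules (ice-9 format))

;; string-join: first allocates an n-element list of references to s,
;; then performs a single O(total-length) concatenation.
(define (str*-join s n)
  (string-join (make-list n s) ""))

;; format: the ~{...~} iteration directive drives format's directive
;; interpreter once per list element, adding per-repetition overhead.
(define (str*-format s n)
  (format #f "~{~a~}" (make-list n s)))
```

Both allocate an intermediate n-element list before any characters are copied, which is where the allocation performance mentioned above would matter.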


