[Gnash-commit] gnash ChangeLog doc/C/internals.xml


From: Tomas Groth
Subject: [Gnash-commit] gnash ChangeLog doc/C/internals.xml
Date: Fri, 01 Sep 2006 22:24:15 +0000

CVSROOT:        /sources/gnash
Module name:    gnash
Changes by:     Tomas Groth <tgc>       06/09/01 22:24:15

Modified files:
        .              : ChangeLog 
        doc/C          : internals.xml 

Log message:
        Small doc fixes.

CVSWeb URLs:
http://cvs.savannah.gnu.org/viewcvs/gnash/ChangeLog?cvsroot=gnash&r1=1.799&r2=1.800
http://cvs.savannah.gnu.org/viewcvs/gnash/doc/C/internals.xml?cvsroot=gnash&r1=1.21&r2=1.22

Patches:
Index: ChangeLog
===================================================================
RCS file: /sources/gnash/gnash/ChangeLog,v
retrieving revision 1.799
retrieving revision 1.800
diff -u -b -r1.799 -r1.800
--- ChangeLog   1 Sep 2006 21:17:13 -0000       1.799
+++ ChangeLog   1 Sep 2006 22:24:15 -0000       1.800
@@ -1,3 +1,7 @@
+2006-09-01 Tomas Groth Christensen <address@hidden>
+
+        * doc/C/internals.xml: Small doc fixes.
+
 2006-09-01 Sandro Santilli  <address@hidden>
 
        * server/swf/ASHandlers.cpp (CommonGetUrl): support

Index: doc/C/internals.xml
===================================================================
RCS file: /sources/gnash/gnash/doc/C/internals.xml,v
retrieving revision 1.21
retrieving revision 1.22
diff -u -b -r1.21 -r1.22
--- doc/C/internals.xml 14 Aug 2006 16:25:15 -0000      1.21
+++ doc/C/internals.xml 1 Sep 2006 22:24:15 -0000       1.22
@@ -1719,22 +1719,22 @@
   </sect2>
 
   <sect2 id="soundhandlers">
-    <title>Soundhandling in Gnash</title>
+    <title>Sound handling in Gnash</title>
 
     <para>
-      When a SWF-files being played in Gnash contains audio Gnash uses its
-      soundshandlers to play it. At the moment there is 2 soundhandlers, but it
-      is likely that there will come more.
+      When a SWF file contains audio, Gnash uses its sound handlers to play
+      it. At the moment there are two sound handlers, but more are likely to
+      be added.
     </para>
 
     <sect3 id="soundtypes">
-      <title>Soundtypes</title>
+      <title>Sound types</title>
       <para>
         Sounds can be divided into two groups: event-sounds and soundstreams.
        Event-sounds are contained in a single SWF frame, but the playtime can
        span multiple frames. Soundstreams can be (and normally are) divided
-       over the SWF frames the soundstreams spans. This means that if a
-       gotoframe goes to a frame which contains data for a soundstream,
+       between the SWF frames the soundstream spans. This means that if a
+       gotoframe-action jumps to a frame that contains data for a soundstream,
        playback of the stream can be picked up from there. 
       </para>
     </sect3>
@@ -1743,7 +1743,7 @@
       <title>Sound parsing</title>
       <para>
         When Gnash parses a SWF-file, it hands over the sounds to the
-       soundhandler. Since the event-sounds are contained in one frame, the
+       sound handler. Since the event-sounds are contained in one frame, the
        entire event-sound is retrieved at once, while a soundstream may not
        be completely retrieved before the entire SWF-file has been parsed. But
        since the entire soundstream doesn't need to be present when playback
@@ -1754,11 +1754,11 @@
     <sect3 id="soundplayback">
       <title>Sound playback</title>
       <para>
-       When Gnash plays a SWF-file and a sound is to be played it calls the
-       soundhandler, which starts to play the sound and return. All the
-       playing is done by threads (in both SDL_mixer and Gstreamer), so once
+       When a sound is about to be played, Gnash calls the sound handler,
+       which then starts playing the sound and returns. All the playing is
+       done by threads (in both SDL_mixer and Gstreamer), so once
        started, the audio and graphics are not sync'ed with each other, which
-       means that we have to trust both the graphic renderer and the audio
+       means that we have to trust both the graphic backend and the audio
        backend to play at the correct speed. 
       </para>
     </sect3>
@@ -1785,8 +1785,9 @@
       <para>
        The Gstreamer backend, though not complete, supports both soundstreams
        and event-sounds. When receiving sounddata it stores it uncompressed,
-       though it does decode ADPCM event sounds in the same manner that the
-       SDL_mixer backend does. When the playback starts, the backend setups a
+       except for ADPCM event-sounds, which it decodes in the same manner
+       as the SDL_mixer backend does. When the playback starts, the backend
+       sets up a
        Gstreamer bin containing a decoder (and other things needed) and places
        it in a Gstreamer pipeline, which plays the audio. All the sounddata is
        not passed at once, but in small chunks, and via callbacks the
@@ -1803,7 +1804,7 @@
       <para>
        It would probably be desirable to make more backends in the future,
        either because other and better backend systems are brought to our
-       attention, or perhaps because an internal soundhandling is better
+       attention, or perhaps because internal sound handling is better
        suited for embedded platforms with limited software installed. 
       </para>
     </sect3>
@@ -1811,7 +1812,7 @@
     <sect3 id="gstreamer-details">
       <title>Detailed description of the Gstreamer backend</title>
       <para>
-       Gstreamer works with pipelines, bins and elements. Pipelines are the
+       Gstreamer uses pipelines, bins and elements. Pipelines are the
        main bin, where all other bins or elements are placed. Visually the
        audio pipeline in gnash looks like this: 
       </para>

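Similarly, the pipeline/bin/element structure described in the
gstreamer-details section maps onto Gstreamer code along these lines. This
is a minimal, hypothetical sketch rather than Gnash's backend: a stock
audiotestsrc element stands in for the callback-fed sound data, and error
handling is omitted.

// Minimal Gstreamer sketch: build a pipeline, place elements in it,
// link them, and let Gstreamer's threads play the audio.
#include <gst/gst.h>

int main(int argc, char* argv[])
{
    gst_init(&argc, &argv);

    // The pipeline is the top-level bin; the elements live inside it.
    GstElement* pipeline = gst_pipeline_new("audio-pipeline");
    GstElement* source   = gst_element_factory_make("audiotestsrc",  "source");
    GstElement* convert  = gst_element_factory_make("audioconvert",  "convert");
    GstElement* sink     = gst_element_factory_make("autoaudiosink", "sink");

    gst_bin_add_many(GST_BIN(pipeline), source, convert, sink, NULL);
    gst_element_link_many(source, convert, sink, NULL);

    // Setting the state to PLAYING returns at once; from here on the
    // audio is produced by Gstreamer's own threads.
    gst_element_set_state(pipeline, GST_STATE_PLAYING);

    g_usleep(2 * G_USEC_PER_SEC);   // let it play for two seconds

    gst_element_set_state(pipeline, GST_STATE_NULL);
    gst_object_unref(pipeline);
    return 0;
}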


