lilypond-user

Re: Animated SVG Percussion Music


From: Jay Anderson
Subject: Re: Animated SVG Percussion Music
Date: Fri, 30 Sep 2011 19:07:07 -0700

On Fri, Sep 30, 2011 at 1:20 AM, Tim Sawyer
<address@hidden> wrote:
> On 30/09/11 07:12, Jay Anderson wrote:
> It's not nice...took me quite a while to do.
>
> ...
>
> I then create an object which wraps the .ly file and the .svg file and
> links them together.  I'm parsing the music line from my .ly file so that I
> know which notes are involved and in what order.  We first work out a music
> resolution (i.e. the shortest note required to accurately play back the
> score), and then we produce a python dictionary that has positions in the
> stave (in units of the music resolution) pointing to objects that hold
> details of the note.  If it's a chord (i.e. snare drum and bass drum
> together), the note holds a chordList for the other notes.
>
> On the SVG side, we scan the svg file produced for known shapes (i.e. a tag
> with attribute d="M217 139c67 0 112 -38 112 -94c0 -90 -111
> -184 -217 -184c-67 0 -112 38 -112 94c0 90 111 184 217 184z" is a solid
> notehead).  We order this structure by position (from the value in the
> transform attribute) and then match up the notes between the svg and the
> lilypond object structure.  If we get a different number of notes, then we
> have a problem somewhere!
>
> We then write out a new SVG file with some additional attributes on the
> nodes - an id for position, a delay for length and an audio for sounds at
> playback.  Rests are just notes with no sound.
>
> Note that this currently only works for single-line staves - I've only done
> this for my short exercises.  I'd have to expand the matching code to cope
> with multi-line music, and there's no point for what I'm doing with it.
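
For anyone following along, here is roughly how I picture the .ly side of
that pipeline.  This is only a sketch of my understanding, not Tim's actual
code: the Note class, the field names and build_position_dict are my own
guesses at the structure he describes.

from fractions import Fraction

class Note:
    """One event parsed out of the .ly music line."""
    def __init__(self, name, duration, position):
        self.name = name            # e.g. 'snare', 'bass' or 'rest'
        self.duration = duration    # length as a fraction of a whole note
        self.position = position    # offset from the start of the stave
        self.chord_list = []        # simultaneous notes (snare + bass together)

def build_position_dict(notes):
    """Map stave position (in units of the music resolution) to a Note."""
    # Shortest duration in the score; a fuller version might need the gcd of
    # all durations and offsets to get the true resolution.
    resolution = min(n.duration for n in notes)
    positions = {}
    for note in notes:
        index = int(note.position / resolution)
        if index in positions:
            positions[index].chord_list.append(note)   # chord: attach to existing note
        else:
            positions[index] = note
    return positions

# Example: an eighth-note snare+bass chord followed by a snare on the next eighth.
notes = [Note('snare', Fraction(1, 8), Fraction(0)),
         Note('bass',  Fraction(1, 8), Fraction(0)),
         Note('snare', Fraction(1, 4), Fraction(1, 8))]
positions = build_position_dict(notes)                  # keys 0 and 1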

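The SVG side I imagine going roughly like this.  Again a sketch: the notehead
path string is the one quoted above, but the attribute names (data-delay,
data-audio), the use of xml.etree and the assumption that the transform sits
directly on the notehead path are all mine, not Tim's.

import re
import xml.etree.ElementTree as ET

SVG_NS = 'http://www.w3.org/2000/svg'
ET.register_namespace('', SVG_NS)        # keep the plain svg namespace on output

# Path data for a solid notehead, as quoted above; rests and other glyphs
# would need their own entries in a lookup table like this.
SOLID_NOTEHEAD = ('M217 139c67 0 112 -38 112 -94c0 -90 -111 -184 -217 -184'
                  'c-67 0 -112 38 -112 94c0 90 111 184 217 184z')

def translate_x(elem):
    """Horizontal offset from the transform attribute, for left-to-right order."""
    m = re.search(r'translate\(\s*([-\d.]+)', elem.get('transform', ''))
    return float(m.group(1)) if m else 0.0

def annotate(svg_in, positions, svg_out):
    tree = ET.parse(svg_in)
    heads = [e for e in tree.iter('{%s}path' % SVG_NS)
             if e.get('d') == SOLID_NOTEHEAD]
    heads.sort(key=translate_x)

    events = [positions[k] for k in sorted(positions)]
    if len(heads) != len(events):
        raise ValueError('note count mismatch between .ly and .svg')

    for i, (head, note) in enumerate(zip(heads, events)):
        head.set('id', 'note-%d' % i)               # position in the stave
        head.set('data-delay', str(note.duration))  # how long until the next event
        head.set('data-audio', note.name)           # which sample to trigger
    tree.write(svg_out)

Rests would presumably get the same treatment, just with an empty audio
attribute ("notes with no sound" as Tim puts it).
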
This doesn't seem feasible for multiple voices and multiple
staves. Could the svg backend be changed to optionally add in some
metadata about the pitch, moment, duration, voice etc.? The dump-path
function in output-svg.scm is where note heads (and a few other things
like clefs) are added, but I'm having a hard time tracing it backwards
from there. It looks like at some point the music is put into an
intermediary format and is parsed with a bunch of regular expressions.
Can someone point me to where this intermediary is created or provide
some more information on how the svg back-end is called?
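
To make this concrete: if the backend could emit attributes on each notehead
path, the shape matching in my sketch above would collapse to a straight
read.  The attribute names below (data-moment, data-pitch, data-duration,
data-voice) are invented for illustration - nothing in the current backend
writes them.

from fractions import Fraction
import xml.etree.ElementTree as ET

SVG_NS = 'http://www.w3.org/2000/svg'

def events_from_metadata(svg_path):
    """Collect playback events from hypothetical per-notehead attributes."""
    events = []
    for elem in ET.parse(svg_path).iter('{%s}path' % SVG_NS):
        if 'data-moment' in elem.attrib:
            events.append({
                'moment':   Fraction(elem.get('data-moment')),  # e.g. '3/8'
                'pitch':    elem.get('data-pitch'),
                'duration': elem.get('data-duration'),
                'voice':    elem.get('data-voice'),
            })
    return sorted(events, key=lambda e: e['moment'])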

On Fri, Sep 30, 2011 at 1:25 AM, address@hidden
<address@hidden> wrote:
> Lemme know off-list if you'd like to see more (I have to spruce up my script
> to make it human-readable).  And, on-list, one of my projects that I'll be
> taking on soon is incorporating some type of grouping mechanism into SVGs
> generated by LilyPond so that less python trickery is needed.

This sounds great. Would this be for grouping beams with notes for
instance? Or are you planning on grouping even more (staff lines,
voice, etc.)?

-----Jay


