what we talk about

…when we talk about beethoven:  I finished a piece about Beethoven!  You should try it out.  Find someone with Opinions about the Seventh Symphony, and have at it.

[Image: Beethoven, portrait by Mähler, 1815]

As you type, music plays:  as you discuss the piece, chunks of it play back.  The relationships between theoretical discourse about Beethoven and the actual sonic components of the Seventh Symphony are exposed in a new way.  To paraphrase my friend Hollas Longton, “we move within Beethoven, not through him” – and the graph of possible paths depends on what you talk about and how you discuss it.

[Audio: Beethoven 7, excerpt 6]

So how does this work?  Basically, each word makes a noise.

I struggled with how best to map each word.  I was originally going to have a painstakingly long list of trigger words, metacontrol words, and so on:  chord names would play the appropriate chord, ‘coda’ would move to the coda, ‘melody’ would play a very melodic passage, and so on.

This was too much work.  Not only was it too much work, it was too arbitrary.  Obviously, a word like ‘harmony’ is important, but what does it do?  I decided to keep the idea of moving between movements and sections, but map each word to the next beat or beats as it first appears, and keep that mapping from then on.  This means that every performance of the piece is unique, that things start out sounding normal-ish, and that the listeners slowly figure out that the word ‘yak’ always triggers a certain sound.
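
In rough JavaScript terms, one way to read that mapping is the sketch below – the function names and the `beats` array are placeholders for illustration, not the piece’s actual code:

```js
// Sketch of the word-to-beat mapping (illustrative names only).
// `beats` is whatever array of playable segments the beat analysis produced.
function makeWordMapper(beats) {
  const wordToBeat = new Map();
  let nextBeat = 0;

  return function beatForWord(word) {
    const key = word.toLowerCase();
    // The first time a word appears it claims the next beat in line; after
    // that the same word always maps to the same segment, which is why
    // listeners can learn that 'yak' has a fixed sound.
    if (!wordToBeat.has(key)) {
      wordToBeat.set(key, nextBeat);
      nextBeat = (nextBeat + 1) % beats.length;
    }
    return beats[wordToBeat.get(key)];
  };
}
```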

In addition to the per-word audio, certain letters also trigger audio as they are typed.  This audio is filtered and run through a delay, resulting in a background haze of Beethoven-ish noise.  The mapping here is simple:  I organized all the segments by their loudest note, and the piece plays back a random ‘A’ segment when the user types an ‘a’, and so on.
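
With the Web Audio API, the letter path could be wired up roughly like this; the cutoff frequency, delay settings, and the `segmentsByPitch` structure are my assumptions, not the piece’s actual values:

```js
// Rough sketch of the per-letter path: pick a random segment whose loudest
// note matches the typed letter, then run it through a lowpass filter and a
// feedback delay so it smears into a background haze.
const ctx = new AudioContext();

const filter = ctx.createBiquadFilter();
filter.type = 'lowpass';
filter.frequency.value = 800;      // soften the letter segments (assumed value)

const delay = ctx.createDelay(5.0);
delay.delayTime.value = 0.4;       // assumed value

const feedback = ctx.createGain();
feedback.gain.value = 0.5;         // feed the delay back into itself

filter.connect(ctx.destination);   // the filtered hit itself
filter.connect(delay);             // plus its echoes
delay.connect(feedback);
feedback.connect(delay);
delay.connect(ctx.destination);

// segmentsByPitch: e.g. { a: [AudioBuffer, ...], b: [...], ... },
// grouped by the loudest note in each segment (assumed structure).
function playLetter(letter, segmentsByPitch) {
  const pool = segmentsByPitch[letter.toLowerCase()];
  if (!pool || pool.length === 0) return;
  const source = ctx.createBufferSource();
  source.buffer = pool[Math.floor(Math.random() * pool.length)];
  source.connect(filter);
  source.start();
}
```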

[Audio: beethoven_sym_7_mvmt4_stoko]

Technically, the piece uses remix.js both to analyze the file into its component beats and to play back the appropriate chunks.  I did my own analysis to segment each movement into theme / coda / etc. – hopefully that analysis is not totally wrong.  The Web Audio API provides both the filter and the delay, via convolution reverb, because doing convolution in JavaScript is totally reasonable.  The Canvas API provides the visual distortion – all I am doing is copying a chunk of the image and pasting it in a new place.
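
The copy-and-paste distortion is simple enough to sketch with the 2D canvas context – the canvas id and chunk size here are placeholders, not the piece’s actual values:

```js
// Sketch of the Canvas distortion: grab a rectangular chunk of the current
// frame and stamp it back down somewhere else.
const canvas = document.getElementById('viz');   // assumed canvas element
const g = canvas.getContext('2d');

function smear() {
  const w = 80, h = 60;                          // size of the copied chunk
  const sx = Math.random() * (canvas.width - w); // where to copy from
  const sy = Math.random() * (canvas.height - h);
  const dx = Math.random() * (canvas.width - w); // where to paste it
  const dy = Math.random() * (canvas.height - h);
  // Using the canvas itself as the image source copies the chunk in place.
  g.drawImage(canvas, sx, sy, w, h, dx, dy, w, h);
}
```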

I am mostly happy with this one:  it is not the masterpiece of in-jokes and references that it could have been, but I think it is a stronger piece for working in a more generalized way.