week 69.5: elvis by elvis

Ryan Groves and I presented a piece of music at ISMIR this year:  he sang two Elvis Costello tunes, and I synthesized the backing music from beats of Elvis Presley songs.  It was…interesting.  Ryan nailed the performance of the first song, we got a good reception, and it was a far cry from the usual po-faced academic/computer music things that get played at conferences…and then we committed the cardinal sin of overstaying our welcome on the second song.  But the first song!  Ah, good times.

So how did we do it?  By combining our respective small pieces of music tech magic.  I maintain the Echo Nest Remix API, which gives us access to timing data for each beat in a song, and Ryan builds harmonizers that can pick chords and chord progressions based on vocal input.

Ryan built his side in SuperCollider and C++:  it listens to his voice, quantizes the harmonic content to the nearest note, and then picks a chord to go with it – based on both the input and the previous chord (Hidden Markov models are involved, if that sort of thing is your jam – his paper is here).  This chord is sent to my machine as a string, over OSC.
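If you want a feel for the shape of that decision, here is a rough sketch in Python: the real harmonizer is SuperCollider and C++, and its probabilities come from a trained model, so the transition table, chord names, address, and port below are all invented for illustration.

```python
import random
from pythonosc.udp_client import SimpleUDPClient

# Toy first-order transition table: how likely each chord is to follow the
# previous one. Ryan's real model is an HMM trained on actual progressions;
# these numbers are made up.
TRANSITIONS = {
    "g_major": {"c_major": 0.4, "d_major": 0.4, "e_minor": 0.2},
    "c_major": {"g_major": 0.5, "f_major": 0.3, "a_minor": 0.2},
}

# Which chords plausibly harmonize each (quantized) sung pitch class.
CHORDS_FOR_NOTE = {
    "g": ["g_major", "c_major", "e_minor"],
    "c": ["c_major", "f_major", "a_minor"],
}

def pick_chord(prev_chord, sung_note):
    """Choose a chord that fits the sung note, weighted by how likely it is
    to follow the previous chord."""
    candidates = CHORDS_FOR_NOTE.get(sung_note, [])
    if not candidates:
        return prev_chord
    weights = [TRANSITIONS.get(prev_chord, {}).get(c, 0.01) for c in candidates]
    return random.choices(candidates, weights=weights, k=1)[0]

# Send the chosen chord to the playback machine as a plain string.
client = SimpleUDPClient("192.168.1.20", 9000)  # address and port are placeholders
client.send_message("/chord", pick_chord("g_major", "c"))
```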

On my side, I went through the McGill Billboard dataset and found audio for six Elvis Presley songs that also had timing data for each of their chords.  I then wrote a script using Remix to segment out each beat of each song into folders, one per chord – so I ended up with g_major, g_minor, and so on.  Then, for playback, I made a controller in Python that would read OSC from Ryan, validate the chord, and then send another OSC message to ChucK, which would play back the audio.  It sounds a bit like this.
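The controller itself is only a few dozen lines.  Here is a sketch of the idea using python-osc (not necessarily the library I used at the time; the ports, OSC addresses, and chord list are placeholders):

```python
from pythonosc.dispatcher import Dispatcher
from pythonosc.osc_server import BlockingOSCUDPServer
from pythonosc.udp_client import SimpleUDPClient

# Chords we actually have beat folders for (names as in the post).
VALID_CHORDS = {"g_major", "g_minor", "c_major", "c_minor", "d_major", "e_minor"}

# ChucK running locally, listening for OSC; port is a placeholder.
chuck = SimpleUDPClient("127.0.0.1", 6449)

def on_chord(address, chord):
    """Validate the incoming chord name and forward it to ChucK."""
    if chord in VALID_CHORDS:
        chuck.send_message("/play", chord)
    # Unknown chords are dropped rather than crashing mid-performance.

dispatcher = Dispatcher()
dispatcher.map("/chord", on_chord)

# Listen for Ryan's chord strings coming in over the network.
server = BlockingOSCUDPServer(("0.0.0.0", 9000), dispatcher)
server.serve_forever()
```

On the other end, ChucK listens for those /play messages and plays the beat files from the matching chord folder.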

In terms of long-distance collaboration, this worked really well:  we only had one point where our systems had to meet, and that was made simple by OSC – just send or read a chord string, and let the other guy worry about the details.  No word of a lie, we got it working on the second try.  (And then we almost died when it came time to perform it in the main venue.  If you’re using OSC, always test your work on the network you will be performing on.)

The songs we did were Alison and Pump It Up.  I ran levels on both of them, partly acting as a live human compressor, and partly trying to provide a rhythm that would match Ryan’s singing, as opposed to the rather…confusing rhythm of the concatenated music.  It sounded like this, and I’m pretty happy with it, really.

The technical side was a resounding success:  we did exactly what we claimed to do, and the network stood up for the entire performance.  The artistic side could use some work:  it would be nice to follow the singer’s rhythm, take volume and timbre into account, and so on.  I also think that I am over ChucK:  I am tired of doing without useful string manipulation and other things that I take for granted in Python.

Hilariously, all of my side could be done really easily in remix.js – so maybe there will be a JavaScript version soon!