June 22nd, 2011
In the UK I did a song from the new record called “Nobody Loves You Like Me.” A few people have been asking about the technology – it looks like what’s happening is that I’m singing into a microphone and fiddling with my iPhone and something weird comes out. That’s an accurate technical description, but here’s a little more detail.
The microphone goes into my laptop through an audio interface. The laptop is running Ableton Live. I've got an audio track in there that's listening to the mic input and running a plugin called The Mouth. That plugin does a lot of awesome things, but in this case it takes the audio and, um, I don't know exactly what it does. It sounds to me like it's taking the audio input and using some algorithm to retune it to a single pitch at several different octaves, with the relative volumes of those octaves determined by the frequency content of the input. You know, robot voice. Kind of a vocoder I guess? But more juicy. I've listened to just the 100% wet effect, and it's almost like it's carving out space for whatever the input note is – it's like you can hear the shadow of the melody as it shifts up and down the octaves.
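To make that guess concrete, here's a toy sketch in Python. This is not The Mouth's actual algorithm (Native Instruments doesn't publish it) – it's just the behavior described above made runnable: each frame of input gets replaced by a fixed target pitch plus octave copies, scaled by how loud that frame of the input was.

```python
import math

SAMPLE_RATE = 44100  # samples per second

def midi_to_hz(note):
    # Standard equal-temperament conversion, A4 (midi note 69) = 440 Hz
    return 440.0 * 2.0 ** ((note - 69) / 12.0)

def robot_voice(samples, note, frame_size=512, octaves=(-1, 0, 1)):
    """Toy resynthesis: replace each input frame with the target pitch
    (plus octave copies), scaled by the frame's RMS loudness so the
    output follows the input's dynamics. NOT The Mouth's real DSP."""
    base = midi_to_hz(note)
    out = []
    for start in range(0, len(samples), frame_size):
        frame = samples[start:start + frame_size]
        # How loud was the singer during this frame?
        rms = math.sqrt(sum(s * s for s in frame) / len(frame))
        for i in range(len(frame)):
            t = (start + i) / SAMPLE_RATE
            # Sum the target pitch at each octave, then scale by input level
            s = sum(math.sin(2 * math.pi * base * (2.0 ** o) * t) for o in octaves)
            out.append(rms * s / len(octaves))
    return out
```

The real plugin is obviously doing something much smarter with the spectrum of the input (that's the "shadow of the melody" part), but this is the basic shape: the output pitch comes from a control signal, and only the energy comes from the voice.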
Anyway, put that all in a box and say the effect is weirdifying the input and outputting a repitched copy of what I'm singing. That pitch is determined by midi messages, so I also have a midi track in Ableton Live. The iPhone is running an app called TouchOSC, which sends OSC data over wifi to an app on the laptop called OSCulator. OSCulator is set up to translate certain OSC messages into midi note events and send them to the midi track in Ableton Live, which is then routed to the midi input of The Mouth on track 1. I'm playing a little onscreen keyboard, and that changes the note that The Mouth plays when I sing.
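For the curious, that translation step is simple enough to sketch. Here's a minimal Python version of what OSCulator is doing for me – parse an incoming OSC message (address string, type tags, one float argument) and turn it into a raw midi note message. The addresses and the note mapping here are made up for illustration; the real ones depend on how your TouchOSC layout is configured.

```python
import struct

def pad4(n):
    # OSC strings are null-terminated and padded with nulls to a 4-byte boundary
    return (n + 4) & ~3

def parse_osc_message(packet):
    """Parse a minimal OSC message: an address pattern plus one float argument."""
    end = packet.index(b"\x00")
    address = packet[:end].decode("ascii")
    ofs = pad4(end)
    tend = packet.index(b"\x00", ofs)
    typetags = packet[ofs:tend].decode("ascii")  # e.g. ",f" for one float
    ofs = pad4(tend)
    value = None
    if typetags == ",f":
        (value,) = struct.unpack(">f", packet[ofs:ofs + 4])  # big-endian float32
    return address, value

# Hypothetical mapping from onscreen-keyboard addresses to midi note numbers;
# a real TouchOSC layout defines its own addresses
KEY_TO_NOTE = {"/1/key1": 60, "/1/key2": 62, "/1/key3": 64}

def osc_to_midi(packet, channel=0):
    """Translate an OSC key press/release into a raw 3-byte midi message."""
    address, value = parse_osc_message(packet)
    note = KEY_TO_NOTE.get(address)
    if note is None or value is None:
        return None
    status = 0x90 if value > 0 else 0x80  # note-on when pressed, note-off on release
    velocity = int(value * 127)
    return bytes([status | channel, note, velocity])
```

So a tap on the iPhone becomes an OSC packet over wifi, which becomes a note-on, which tells The Mouth what pitch to sing at.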
I am also texting three wives and two girlfriends at the same time!
Hope that explains it. It’s probably more than you wanted to know, huh?