TweetSinger was a project that came to us out of left field, beginning with a surprise meeting with our CTO and a couple of developers. He was assembling us as a sort of special-operations group to build a rapid prototype of a proposed text-to-speech, auto-tune, and audio-visualization project. We had a week and a half cleared from our schedules to prove it was feasible and functional on smartphones as well as IE9 at a minimum, with reasonable performance across platforms.
Working closely with the design team, we iterated through several visualization concepts before arriving at the orbiting arcs seen in the project. I investigated the HTML5 Audio Data API but found it too immature at the time, so we decided to have the back end process the audio stream (which it was constructing anyway, using a third-party text-to-speech service and a third-party auto-tune service) and provide a JSON file of audio data to drive the visualization.
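The actual JSON format isn't documented here, but the approach can be sketched as precomputed per-frame amplitude data that the front end indexes by playback position. The `fps` and `levels` fields below are hypothetical stand-ins, not the real schema:

```javascript
// Hypothetical shape of the audio-analysis JSON delivered by the back end;
// the real TweetSinger format is not reproduced here.
const analysis = {
  fps: 30,                            // analysis frames per second of audio
  levels: [0.0, 0.2, 0.8, 0.5, 0.1]   // normalized amplitude per frame
};

// Map a playback position (in milliseconds) to the amplitude for that
// moment, clamping to the first/last frame at the edges of the clip.
function levelAt(analysis, positionMs) {
  const frame = Math.floor((positionMs / 1000) * analysis.fps);
  const clamped = Math.min(frame, analysis.levels.length - 1);
  return analysis.levels[Math.max(0, clamped)];
}
```

A lookup like this keeps the client free of any signal processing: the visualization just samples the precomputed data at the current audio position on every animation tick.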
The visualization itself is built with Processing.js and Canvas to achieve fast, fluid rendering across multiple devices, and the audio is controlled by SoundManager2, which provides a Flash fallback for audio in IE.
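The production Processing.js sketch isn't reproduced here, but the orbiting-arc idea can be sketched on a raw 2D canvas: each concentric arc rotates over time while its sweep follows the current amplitude level (which SoundManager2's `whileplaying` callback, reporting playback position in milliseconds, would supply on each tick). The radii, arc count, and minimum sweep below are illustrative assumptions:

```javascript
// Map a normalized amplitude (0..1) to an arc sweep angle in radians,
// keeping a small minimum so quiet frames still render something.
function arcSweep(level) {
  const MIN = 0.1; // assumed floor, in radians
  return MIN + level * (2 * Math.PI - MIN);
}

// Draw one frame of concentric orbiting arcs. `level` is the current
// audio amplitude and `t` is elapsed time driving the rotation.
function drawFrame(ctx, level, t) {
  const { width: w, height: h } = ctx.canvas;
  ctx.clearRect(0, 0, w, h);
  for (let i = 1; i <= 3; i++) {
    ctx.beginPath();
    // Each ring orbits at its own speed; sweep shrinks on outer rings.
    ctx.arc(w / 2, h / 2, i * 20, t * i, t * i + arcSweep(level) / i);
    ctx.stroke();
  }
}
```

Because the drawing is plain Canvas math, the same frame function works whether it is called from a Processing.js `draw()` loop or a bare `requestAnimationFrame` callback.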