EyeSynth 1 – Background

The hardware for this project will be based on one of the open-source Eyegaze projects mentioned on the EyeGaze page. The challenge in this particular project, however, is not the hardware but the software. There are two interdependent strands to the EyeSynth development: the music and the interface (user interaction). Over the week we spent at the CTM Hacklab in Berlin we became convinced that Pure Data was the best option for creating the music back-end, and that it might also offer a way to build the interface.

To play music using gaze alone, the interface will need to be intelligent and dynamic: the software will have to help the user by guessing what their next selection might be. To do this, the application needs the ability to predict the next note to be played. This prediction could be based initially on a style of music, and subsequently on learning the style of an individual musician. It is important, however, that the musician retains the ability to make their own decisions; this is an instrument to be played, not just a sequence of preprogrammed steps the user is led through. The challenge is to create a music prediction engine that doesn't produce predictable music!
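One simple way such a prediction engine could work is a Markov model that counts which note tends to follow which in the player's own phrases, then surfaces the most likely candidates for the gaze interface to highlight, while still leaving every note selectable. The sketch below is purely illustrative (the class and note names are hypothetical, not part of the actual EyeSynth design, and a real version would run inside or alongside Pure Data):

```python
from collections import Counter, defaultdict

class NotePredictor:
    """First-order Markov model: learns which note tends to follow which."""

    def __init__(self):
        # For each note, a Counter of the notes that have followed it.
        self.transitions = defaultdict(Counter)

    def learn(self, notes):
        # Record each consecutive pair of notes from a played sequence.
        for prev, nxt in zip(notes, notes[1:]):
            self.transitions[prev][nxt] += 1

    def suggest(self, current, k=3):
        # Return up to k most likely next notes, for the UI to emphasise;
        # the player can still pick anything else.
        return [note for note, _ in self.transitions[current].most_common(k)]

predictor = NotePredictor()
predictor.learn(["C", "E", "G", "E", "C", "E", "G", "C"])
print(predictor.suggest("E"))  # → ['G', 'C']
```

Because the model keeps updating as the musician plays, the suggestions gradually reflect that individual's style rather than a fixed, preprogrammed sequence.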
There is also the possibility that this could be a very visual experience for an audience. Although the spectacle of someone playing music with their eyes may not have the appeal of a performer like Jimi Hendrix, a projection of the interface could be quite engaging, especially for a knowledgeable audience.