current research: Hybridizing speech and music through machine learning
Eight Piano Etudes Speaks the Moody Machine was created via machine learning with the goal of hybridizing the musical qualities of three emotional states of human speech with four styles of piano music. Through this process, amused, angry, and neutral speech were melded with piano music by Thelonious Monk, James Booker, Mara Gibson, and William Thompson. The etudes are intended to be performed on a single Yamaha Disklavier piano by two performers: one human pianist and one machine player piano. The etudes intended for the human pianist are titled after notable events from the Industrial Revolution, while those intended for machine performance are titled after possible future events in the rise of artificial intelligence as predicted by the author Ray Kurzweil. In the final etude, human and machine perform together, representing the singularity. The contrast between events of the Industrial Revolution and predicted future events is intended to entice listeners to reconsider each from a different perspective in time.
Composition Through Machine Learning
The Danger of Home
This piece was created via machine learning with the goal of hybridizing the musical qualities of human speech with different styles of piano music, using a recurrent neural network trained on scores (in MIDI format) created by the author. The piece was realized on a Disklavier piano.
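As an illustration of the general approach, a recurrent network over MIDI pitch sequences can be sketched as below. This is a minimal, untrained toy in NumPy; the hidden size, vocabulary, and weights are placeholders and do not reflect the network actually used for the piece.

```python
import numpy as np

# Toy vanilla RNN that maps a sequence of MIDI pitches to a probability
# distribution over the next pitch. All sizes and weights are illustrative.
rng = np.random.default_rng(0)
VOCAB = 128   # MIDI pitch numbers 0-127
HIDDEN = 32

Wxh = rng.normal(0, 0.1, (HIDDEN, VOCAB))   # input-to-hidden weights
Whh = rng.normal(0, 0.1, (HIDDEN, HIDDEN))  # recurrent weights
Why = rng.normal(0, 0.1, (VOCAB, HIDDEN))   # hidden-to-output weights

def predict_next(pitches):
    """Run the RNN over a pitch sequence; return a distribution over the next pitch."""
    h = np.zeros(HIDDEN)
    for p in pitches:
        x = np.zeros(VOCAB)
        x[p] = 1.0                        # one-hot encode the current pitch
        h = np.tanh(Wxh @ x + Whh @ h)    # recurrent state update
    logits = Why @ h
    e = np.exp(logits - logits.max())
    return e / e.sum()                    # softmax over the 128 pitches

probs = predict_next([60, 62, 64, 65])    # a C-D-E-F motif
```

In practice such a model would be trained on the author's MIDI scores so that sampling from `probs` note by note generates material in the hybridized style.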
Infastain (acoustic Piano augmentation demo)
This is a demonstration of an acoustic piano augmentation that allows for infinite sustain of one or many notes. The result is a natural-sounding piano sustain that lasts for an unnaturally long time. Built from a tactile shaker, a contact microphone, and an amplitude-activated FFT-freeze Max patch, the system is easily assembled and creates an infinitely sustaining piano.
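The core idea of an amplitude-activated FFT freeze can be sketched offline as follows. This is only an analogy to the Max patch: the real system drives a tactile shaker on the instrument in real time, whereas this sketch just captures and resynthesizes a spectrum; the threshold and frame size are assumed values.

```python
import numpy as np

SR = 44100        # sample rate
FRAME = 2048      # analysis frame length (assumed)
THRESHOLD = 0.1   # RMS level above which the freeze engages (assumed)

def freeze(signal, seconds):
    """When a frame's RMS exceeds THRESHOLD, capture its magnitude spectrum
    and resynthesize it with randomized phases for the requested duration."""
    rng = np.random.default_rng(0)
    for start in range(0, len(signal) - FRAME, FRAME):
        frame = signal[start:start + FRAME]
        if np.sqrt(np.mean(frame ** 2)) > THRESHOLD:
            mags = np.abs(np.fft.rfft(frame))          # the "frozen" spectrum
            out = []
            for _ in range(int(seconds * SR / FRAME)):
                phases = rng.uniform(0, 2 * np.pi, mags.shape)
                out.append(np.fft.irfft(mags * np.exp(1j * phases), FRAME))
            return np.concatenate(out)
    return np.zeros(0)    # nothing loud enough to trigger the freeze

t = np.arange(SR) / SR
tone = 0.5 * np.sin(2 * np.pi * 440 * t)   # a loud A4 triggers the freeze
sustained = freeze(tone, 2.0)               # two seconds of frozen spectrum
```

Randomizing the phases on each frame keeps the frozen spectrum from sounding like a static loop, which is one common way FFT-freeze effects achieve a natural sustain.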
#speak baghdad music journal
#Speak BMJ is a composition for live electronics and projection art that both sonifies and visualizes text relevant to the composer’s experience as a soldier and counterintelligence agent in the Iraq War during 2004 and 2005. This text is collected in real time during performance from Twitter search queries and processed digitally. In addition to sound art, #Speak BMJ showcases sound-reactive visual projection art that abstracts video filmed by the composer in Baghdad in 2005.
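One simple way incoming text could be sonified is to map each letter to a pitch in a fixed scale. This sketch is purely illustrative; it does not reproduce the live-electronics processing used in #Speak BMJ, and the pentatonic mapping is an assumption for the example.

```python
# Map letters to MIDI pitches in a pentatonic scale (illustrative mapping only).
PENTATONIC = [0, 2, 4, 7, 9]   # scale degrees in semitones above the root
ROOT = 60                      # middle C

def text_to_pitches(text):
    """Convert alphabetic characters to MIDI pitch numbers, spreading the
    alphabet across five octaves of the pentatonic scale."""
    pitches = []
    for ch in text.lower():
        if ch.isalpha():
            idx = ord(ch) - ord('a')
            octave, degree = divmod(idx % 25, 5)
            pitches.append(ROOT + 12 * octave + PENTATONIC[degree])
    return pitches

melody = text_to_pitches("baghdad")   # -> [62, 60, 74, 76, 67, 60, 67]
```

In a live setting, the resulting pitch stream would be fed to a synthesizer or sampler as the search-query text arrives.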
The Joystuck
The Joystuck is a new instrument designed and built by William Thompson. It is realized using a Pure Data patch that manipulates the speed and playback direction of audio recordings. The Pd patch runs on a Raspberry Pi single-board computer housed in a custom laser-cut pine box with its own built-in speaker.
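The essence of variable-speed, variable-direction playback is a fractional read pointer moving through a recorded buffer at a signed rate. This sketch is a rough analogue of what such a Pure Data patch does internally, not a transcription of the Joystuck's actual patch; the rates and buffer are illustrative.

```python
import numpy as np

def varispeed(buffer, rate, n_out):
    """Read n_out samples from buffer at a signed speed, with linear
    interpolation: rate=1.0 is normal, 0.5 is half speed, -1.0 is reversed."""
    out = np.empty(n_out)
    pos = 0.0 if rate >= 0 else len(buffer) - 1.0   # start at the matching end
    for i in range(n_out):
        j = int(pos)
        frac = pos - j
        nxt = min(j + 1, len(buffer) - 1)
        out[i] = buffer[j] * (1 - frac) + buffer[nxt] * frac  # linear interpolation
        pos = (pos + rate) % (len(buffer) - 1)                # wrap around the loop
    return out

loop = np.sin(np.linspace(0, 2 * np.pi, 1000))   # a one-cycle test recording
slow = varispeed(loop, 0.5, 500)                  # half speed
reverse = varispeed(loop, -1.0, 500)              # backwards playback
```

Mapping a joystick axis to `rate` gives continuous control over both speed and direction, which matches the kind of gestural playback the instrument's name suggests.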