Monday, January 25, 2010


Experiment #1:

Starting tomorrow (1/26/10) at Fremont, I will begin experiment #1: set up a basic flex sensor, attach it to a ruler, and sample its output with an Arduino to see what resolution I can retain from the flex sensor and how I can apply it to simple synthdefs.
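As a back-of-the-envelope sketch of what the Arduino side of this experiment might look like, here is a small Python mock-up of the mapping from a 10-bit ADC flex reading to a synthdef frequency. The calibration endpoints and frequency range are placeholder assumptions, not measured values.

```python
# Hypothetical sketch: estimate usable resolution from a 10-bit flex-sensor
# reading and map it onto a synth parameter range. All numbers here are
# illustrative placeholders to be replaced by measured calibration values.

ADC_BITS = 10
ADC_MAX = (1 << ADC_BITS) - 1  # an Arduino analogRead() returns 0-1023

# Assume the flex sensor only swings over part of the ADC range when the
# ruler is bent; these endpoints would be measured during calibration.
FLAT_READING = 300
FULL_BEND_READING = 700

def normalize(reading):
    """Clamp and scale a raw ADC reading to 0.0-1.0."""
    span = FULL_BEND_READING - FLAT_READING
    value = (reading - FLAT_READING) / span
    return max(0.0, min(1.0, value))

def to_freq(norm, lo=100.0, hi=1000.0):
    """Map the normalized bend exponentially onto a frequency in Hz."""
    return lo * (hi / lo) ** norm

# Usable discrete steps between flat and fully bent:
steps = FULL_BEND_READING - FLAT_READING
print(steps, to_freq(normalize(500)))
```

Counting the discrete steps between the flat and fully-bent readings gives a quick estimate of how much control resolution actually survives the sensor and ADC, which is exactly what this experiment is meant to find out.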

Experiment #2:

Attach a basic Darlington-pair touch sensor to the end of the ruler as a simple on/off switch to use in SC. Then, add a radio receiver to the mix and define a simple synthdef that fires a burst of frequency content from the receiver whenever the touch sensor is activated.
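The logic for turning the touch line into burst triggers can be sketched quickly. This is a hypothetical Python mock-up of the edge detection that would, in practice, live on the Arduino (with SuperCollider doing the synthesis); the threshold and sample stream are illustrative only.

```python
# Hypothetical sketch of the touch-sensor logic: detect a rising edge on
# the digital touch line and treat each edge as one burst trigger.

def rising_edges(samples, threshold=1):
    """Return the indices where the touch line goes from off to on."""
    edges = []
    prev = 0
    for i, s in enumerate(samples):
        on = 1 if s >= threshold else 0
        if on and not prev:
            edges.append(i)
        prev = on
    return edges

# A simulated stream of touch readings (0 = open, 1 = touched);
# two separate touches should yield two burst triggers.
line = [0, 0, 1, 1, 1, 0, 0, 1, 0]
print(rising_edges(line))
```

Detecting only the off-to-on transition (rather than the raw level) means holding a finger on the sensor fires one burst, not a continuous stream, which matches the on/off-switch behavior described above.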

Basic Gestures

I have attached a doodle of how the basic gestures will play out for my piece. Note that in the second graph, the number corresponds to the gesture, where "1" is when the rulers are hit with the hand hard, "2" is where the rulers are bent back and forth slowly as a control, and "3" indicates when the touch sensors are activated and firing pulses of radio frequency content.

The Instrument!

So after much thought, I have come to a preliminary design for my instrument. The basic design integrates two physical actions that everyone who has gone through grade school should know.

The interface is built around the simple production of sound by clamping one end of a ruler to the end of a table and slapping the freestanding end. At this point, I would like to build a prototype of one first and then see how beneficial it would be to add more, but I am thinking there will be about 5 of these devices in a row. Each would have a contact mic at the end reporting to SuperCollider, where the frequency content generated by each "ruler" could be tracked and processed.

The beauty of this interface is that it really has a lot of potential for control, and with the addition of the flex and touch sensors (see the design to the right), it allows for three types of interaction that can be explored, where each would be primarily explored in its own gesture, with a final fourth gesture involving a combination of each type of interaction.

The first interaction produces sound physically: the device generates an analog audio signal the way most people know, namely by smacking the freestanding end and changing the pitch by dragging the ruler up and down the table. Two aspects of this form are enticing. First, in a piece where I would like to contrast action and aural experience, the slap and oscillation are associated with a very strong, loud auditory response. However, this response, coupled with a simple amplification system (via the contact mic at the top of the setup as shown), could be controlled and its intensity modulated, perhaps in an interesting way. Second, this action can serve as the unabstracted, initial "value" that the piece can then springboard from (no pun intended).

The addition of the flex sensors adds a new dimension of physical control to the piece. When using the instrument in the "analog" way outlined earlier, it is seen as more of a physical sound-producing mechanism, whose interaction is merely being amplified to be heard by a live audience. But in utilizing the flex sensors as a "digital control signal," a more subtle bend can be used to vastly alter the sound's shape, expanding a small movement into a larger auditory gesture.
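One simple way to get this "small movement, big gesture" behavior is a nonlinear mapping from bend amount to control value. Here is a minimal sketch; the exponent is an assumption that would be tuned by ear.

```python
# A minimal sketch of the "small movement, big gesture" mapping: raising
# the normalized bend value to a power less than 1 makes subtle bends near
# rest produce disproportionately large control changes. The exponent 0.3
# is a placeholder to be tuned in rehearsal.

def expand(bend, exponent=0.3):
    """Map a 0.0-1.0 bend amount onto 0.0-1.0 with early emphasis."""
    return max(0.0, min(1.0, bend)) ** exponent

# A 10% bend already drives the control past the halfway point:
print(round(expand(0.1), 3))
```

With an exponent below 1 the curve is steepest near zero, so the barely-visible beginning of a bend is where most of the auditory change happens, which is exactly the contrast between action and sound described above.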

The final section of the interface is the touch sensor at the tip of each "ruler," which, when pressed, fires a burst of sound drawn from a collection of radio frequencies. The length of ruler hanging off the end of the surface could determine further real-time parameters, such as what pitch the device plays and for how long. This simple control setup, coupled with the third and most abstracted audio source, allows for the most extreme loss of physical interaction: the subtlest of movements, a simple touch, changes the audio output completely, using frequency content not generated on site but very much present and chaotic, depending on where the performance is located.
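The overhang-length mapping described above can be sketched as a pair of linear ranges. The lengths, frequency band, and durations here are placeholder assumptions, not measurements from the actual rulers.

```python
# Illustrative sketch of the overhang-length mapping: the length of ruler
# hanging off the table sets both the pitch and the duration of the radio
# burst. All ranges are placeholders to be calibrated on the real setup.

def burst_params(overhang_cm, min_cm=5.0, max_cm=25.0):
    """Map overhang length to a (frequency_hz, duration_s) pair."""
    clamped = min(max(overhang_cm, min_cm), max_cm)
    t = (clamped - min_cm) / (max_cm - min_cm)
    freq = 200.0 + t * 1800.0  # longer overhang -> higher band (assumed)
    dur = 0.1 + t * 1.9        # and a longer burst
    return freq, dur

print(burst_params(15.0))
```

Clamping the measured length to the calibrated range keeps a wildly short or long reading from sending the synth out of bounds mid-performance.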

Sunday, January 17, 2010

Instrument Inspirations

Instrumental Inspiration #1:

My dad has always said, "You stand on the shoulders of those who came before you." In the spirit of what my father has always told me, and which I firmly believe, I have always been a huge fan of the concept of the remix. Don't get me wrong, there are plenty of terrible remixes out there, mostly because the secondary artist ends up relying on the underlying material to carry their half-thought-out idea, and it ends up dead in its tracks.

Instrumental Inspiration #2:

It may sound cliche, but the fact is music really is all around us. As far as the electromagnetic spectrum is concerned, consider the FM broadcast band from 88 MHz to 108 MHz, which is packed with music, good and bad, as well as many other forms of communication. What if one could tap into these bands and use this spectral information to feed a performance? I envision using this information to power various frequency bands, where most of the original content would remain jumbled and mostly ambiguous, but the addition of the content in a new, redefined way could be extremely powerful.


If the design and execution of this instrument go well, I feel it could be very successful in communicating the central theme of this project: if I can collect the sounds of many voices through the airwaves into a cohesive, single, and powerful voice, this sonification of a significantly broad spectrum narrowed into one well-defined, bounded stream would amplify the unifying theory of "anti-expectation."

Saturday, January 16, 2010

Having 3rd Thoughts

Well, here I go again. I started testing with creating animated content for this second idea, and I just can't get myself to feel passionate enough about my current plan, perhaps for two reasons. The first is that every time I see something associated with the audio I wish to manipulate, it is never an animation that I associate with the action. It is always a real, tangible motion from the real world. No matter the abstraction in the sound, my mind tries to fit it to reality, to make sense of it. I love this interaction. Though many musicians and artists want to believe that their audio is not supposed to represent something tangible, but rather to elicit a response as abstract as the emissions they create, I'm starting to realize that I love creating abstraction and seeing what tangible actions the sounds evoke.

In a practical sense, every time I think about devising a well-rounded animation AND an audio piece that coincide cohesively, I lose my breath. I firmly believe I don't have the time to create something this large in scope in the time allotted (not well, anyway). In the past I have worked on creating solid animated films with teams of 5+ people, and those took more time to develop than the time I have at hand! Plus, I have been working with music and audio most of my life. Animation I have only picked up in the last year and a half or so, and I don't feel that I have enough life experience with it to tackle an entire project based around animated motion. In the end, I want to create a single well-designed piece, not anything remotely half-assed.

With these practical limitations in mind, I have reached deep into my heart and mind and have come to realize exactly what Toby McKes told me last year at his thesis exhibition. He mentioned that he was first thinking of creating an elaborate sound installation for his thesis and ran some experiments, but in the end he went back to what had him jazzed up when he first joined DXArts, namely creating a well-thought-out audio piece, no install strings attached. In my case, when I entered DXArts, I was enthralled by the possibilities of live performance and audio with the modern tools at hand. The thing that gets me excited is thinking about how modern technology allows us to better express ourselves as artists in live performance.

I have been having meetings with Nicolas and have been very inspired by his SPK project, as well as shooting the breeze with a buddy who works at Guitar Center. In both cases, I share a passion not only for the design of audio but for its transmission. Over the last week, I have found myself pulled toward creating and playing with audio interfaces in new and exciting ways, and am totally back to how I felt when I entered the program: simply excited. When I excitedly show people a new interactive setup, they keep asking me, "What class is that for?" I just stare blankly and think, yeah, I should get back to my school work. There's no reason I can't marry this passion with my thesis! (In fact, I feel it'd be a crime NOT to!)


I am officially changing my thesis to be a performance-based piece that involves the design and fabrication of a device. The premise is simple (and will hopefully flourish in its simplicity): a single instrument, perhaps composed of many software, hardware, and traditional instrumental design components, but first and foremost something I can control in real time, with enormous potential for live control and interaction. Over the next day or two, I will better define what intellectual premise the design of the piece will involve, but I am leaning toward an interaction similar to the one I was interested in with my second proposal, that is, visual information and motion that yield unexpected auditory results. For example, small movements, BIG sounds. Common motion that yields completely unexpected results. Stay tuned... I think I'm finally at peace with where my thesis idea is going!

Tuesday, January 12, 2010

Possible Experiments

First things first, what possible experiments would go down to begin to realize this piece?

Experiment #1:

Experiment with various forms of audio and video which evoke a strong response. Find various forms and strip each from its corresponding media. Watch it, listen to it, do each over and over alone and together, then with other sets of audio and video.

Experiment #2:

With basic motions established, experiment with how they can transform over time and see if it is possible to achieve a situation where an observer is filling in the gaps and is on some level conscious of doing so.

Thesis Premise: The Threshold of Common Media

In thinking about my past works that have dealt with how sound interacts with the world, and more importantly, how the world is transformed by sound, I have yet to play with the common visual media, namely film and animation, and how we associate audio with a given visual.

In my piece "Paranous," I experimented with sound developed in the space both beyond the listener and within the listener, using bone conduction headphones and an ambisonic speaker array. The aspect of this piece I most enjoyed was how the audio physically interacted with a listener. Music was going on literally both inside and outside them.

Another inspiration for this piece is Steve Reich's "Come Out." What would "Come Out" look like? How could this theory of phase be extrapolated to an audio/visual combination?

With the piece in development, I wish to deal with the audio/visual associative properties defined by our everyday existence. What common social interactions and media that we utilize every day can be used as means of creation and destruction of this audio/visual associative property?

The initial idea is to create a piece that involves having a small assortment of animated forms that one would commonly associate with given audio. Each source would be rich in active forms, where some may have quick punchy animations and associated audio, and others slow sweeping, long drawn out motion. The piece would use this material to explore motion and our understanding of it, quickly cutting from the visual realm back to the audio realm and vice versa, jumbling the audio and video and testing how one may sound with another, etc.

The various streams of interaction could then begin to move slowly forward, pushing the presented audio and video beyond what has already been seen. If successful, it is my hope that the observer would begin to formulate an experience that may or may not be physically present, with their ears filling in the gaps where there is silence, and their eyes filling in gaps where the video is a void, defining the experience in a different manner each time the piece is experienced.

Thursday, January 7, 2010

The New Blog

Welcome! I will post here about all the updates of my DXArts thesis each week.