Wednesday, May 19, 2010

So I am very pleased with the progress of my last few iterations over the last week. Take a listen here and please give me some constructive feedback for the home stretch!

Still needed: a touch more stochasticism at the beginning. There is still a bit too much zipping coming from the resonator bank, but only in one spot. I need to iron that out by reducing the effect of a turn of my RQ swipe knob (a single turn currently spans too much bandwidth; reducing the sweep to a quarter or an eighth of its current range should really help). Much more to come in the near future!
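The knob fix amounts to a simple rescaling of the mapping. Here is a Python illustration only (the real control lives in SuperCollider, and the RQ range and `span` values here are invented): one full turn sweeping a quarter of the original bandwidth gives four times the resolution.

```python
def knob_to_rq(turn, rq_min=0.01, rq_max=1.0, span=1.0):
    """Map a normalized knob turn (0..1) onto an RQ value.

    `span` scales how much of the full RQ range one complete turn
    covers; span=0.25 means a full turn sweeps only a quarter of
    the range, reducing audible 'zipping' from coarse sweeps.
    """
    turn = max(0.0, min(1.0, turn))
    return rq_min + turn * span * (rq_max - rq_min)

# Full-range sweep vs. the proposed quarter-range sweep:
full = knob_to_rq(1.0)                # reaches rq_max
quarter = knob_to_rq(1.0, span=0.25)  # reaches only a quarter of the range
```

The same turn of the wrist then covers a quarter of the bandwidth, so the one zippy spot gets four times finer control.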


Thursday, May 13, 2010


Currently fixing a few things to add a more dynamic flow to my piece, as well as to make the beginning "cards-on-the-table" section more interesting to listen to:

1.  Gate the beginning with randomly assigned frequencies within a given range. Run to a noisy climax that then subsides into reverberant dust as the beginning of the second movement.
2.  Make the beginning section shorter.
3.  Break up the resonating banks by adding a random parameter that adds variance in each band's given amplitude.
4.  Practice, practice, practice... record, record, record!
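Items 1 and 3 above are both stochastic assignments. A quick Python sketch of the idea (the actual work happens in SuperCollider synthdefs; the frequency range and variance depth here are placeholders):

```python
import random

def gate_frequencies(n, lo=200.0, hi=2000.0, seed=None):
    """Item 1: assign each gate a random frequency within a given range."""
    rng = random.Random(seed)
    return [rng.uniform(lo, hi) for _ in range(n)]

def vary_band_amps(base_amps, depth=0.2, seed=None):
    """Item 3: jitter each resonator band's amplitude by up to
    +/- depth (proportional), breaking up the uniform bank sound."""
    rng = random.Random(seed)
    return [a * (1.0 + rng.uniform(-depth, depth)) for a in base_amps]

freqs = gate_frequencies(8, seed=42)
amps = vary_band_amps([1.0] * 8, depth=0.2, seed=42)
```

Re-seeding (or not seeding at all) per performance is what keeps each run of the opening section from sounding identical.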

Stay tuned, the radio's about to rock your world!

Sunday, May 2, 2010

At Long Last!

So I was able to update my interface, get my final basic movement functioning, and get an additional recording done this weekend after much work and debugging.

Updates include:

- Removed the second xy pad on the touchOSC side to allow more room for the resonator banks and for the first xy pad (I originally thought I would use both pads, but since the pad is tied to the second movement, which controls parameters that combine both radios, only one is necessary).

- Removed pops from the resonating banks when the RQ rolled over

- First iteration of the "frozen grain" effect completed (the third movement that I have been working on for a while now)

- Experimented with the full frequency spectrum versus the FM band alone to see how this affects the sound. I feel I was relying too much on static in the first iteration, so adding this limitation forced me to use more recognizable bands (hence the awesome surprise ending).

- Added more lag parameters to allow ramping to occur without needing additional limbs and fingers on multiple knobs for long, smooth gestures

Still Needed:

- The resonating banks sound too much like bad slot machines or video games. I am going to work on a new design where I have more control over specific bands that fade in and out at semi-stochastic center frequencies, rather than rolling them all at once in Shepard-tone fashion.

- Add more stochastic and random processes to the piece. The overall design is to give the illusion that the content moves from chaotic to controlled (from recognizable radio to more abstract content) while the control parameters move from controlled to chaotic (frequencies become more honed, and my involvement in shaping the sound becomes more evident to the audience).

- Add fade-in/fade-out to the delay lines. The cut is really obvious in the middle of the piece where a delay line is killed.
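Two of the items above, the resonator-bank redesign (bands fading in and out at semi-stochastic center frequencies) and the delay-line fade-in/fade-out, come down to the same mechanism: envelopes instead of abrupt starts and kills. The piece itself lives in SuperCollider; this is a rough Python sketch of the scheduling idea, with all names, ranges, and fade times invented for illustration:

```python
import random

def band_schedule(n_bands, dur, fade=2.0, f_lo=100.0, f_hi=4000.0, seed=None):
    """Give each resonator band its own semi-stochastic center frequency
    and its own fade-in/fade-out window, instead of rolling all bands
    at once in Shepard-tone fashion."""
    rng = random.Random(seed)
    events = []
    for _ in range(n_bands):
        start = rng.uniform(0.0, dur - 2.0 * fade)
        length = rng.uniform(2.0 * fade, dur - start)
        events.append({
            "freq": rng.uniform(f_lo, f_hi),  # semi-stochastic center frequency
            "start": start,
            "end": start + length,
            "fade": fade,
        })
    return events

def envelope(event, t):
    """Linear fade-in/fade-out amplitude for one band (or for a delay
    line being killed) at time t; it never jumps, so no audible cut."""
    if t <= event["start"] or t >= event["end"]:
        return 0.0
    rise = min(1.0, (t - event["start"]) / event["fade"])
    fall = min(1.0, (event["end"] - t) / event["fade"])
    return min(rise, fall)
```

Killing a delay line would then mean setting its `end` a couple of seconds in the future rather than freeing it outright.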

Go here to hear the latest version.

Finally, a snapshot of my new touch design:

Friday, April 30, 2010

The last couple days have been insane and I still haven't had a chance to run another performance with the updates to my synths, but there will be one for sure tomorrow!

Thursday, April 22, 2010

First Recording (ROUGH!)

So I was able to get my first recording done, which was very rough. It is still missing a portion of the second movement due to a synthdef that is not yet functioning. This was a great experience, as it showed me what I need to fine-tune to get a better-functioning system.

My top priorities for this week are as follows:

- Complete second movement "Freeze" synth

- Implement balance, randomness, and lag where needed

- Remove filter pops

Listen to the first version here

Wednesday, April 14, 2010

Updated Interface

So here is the most recent interface for the pad:

I combined the faders into a single bank, which will control resonant filters for each radio, allowing me to better control the amount of random behavior behind each radio's raw content in real time. The knobs will be associated with the Q value of each of these resonant filters. The 2D pads still control the freeze movement introduced around 3-4 minutes into the piece. Lastly, the buttons select which delay line is currently active.

Wednesday, April 7, 2010


My timeline for the remainder of the project's development is as follows:

Thesis Timeline
4/7 Have hardware set up and begin testing synthdefs within the context of the final hardware setup
4/14 Have a first iteration of a performance done on the new hardware setup (however rough) for review
4/21 Iterate on the performance and work on complexity and the amount of automation vs. live control
4/28 Create multiple iterations of the performance at various times (late night, afternoon, early morning) to get a feel for how much this varies the performance
5/5 MT Review: Demo as many of my iterations as possible and get a solid review of what works / what doesn't
5/12 Finalize reel of various versions of the performance and documentation for the exhibition portion of the piece
5/19 Final reel and readied performance due
5/28 BFA Install
6/4 Finish Documentation

New Developments

After much thought, I have stripped two of the radios, simplifying to two radios, each with four streams: live, 1:25 behind, 2:50 behind, and 5:00 behind. Two streams will be devoted to each of the four channels on my mixer as follows:

Channel 1: Addition of live and 2:50 delay (each being on its own separate touchOSC fader and toggle) for radio 1
Channel 2: Addition of 1:25 and 5:00 delay (each being on its own separate touchOSC fader and toggle) for radio 1
Channel 3: Addition of live and 2:50 delay (each being on its own separate touchOSC fader and toggle) for radio 2
Channel 4: Addition of 1:25 and 5:00 delay (each being on its own separate touchOSC fader and toggle) for radio 2
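The routing above is a fixed table; just to make the mapping explicit, here is a small Python sketch (the stream labels are the delay offsets listed above, and the function names are mine, not part of the actual SuperCollider patch):

```python
# Each mixer channel sums two delayed streams of one radio,
# each stream with its own touchOSC fader and toggle.
ROUTING = {
    1: {"radio": 1, "streams": ("live", "2:50")},
    2: {"radio": 1, "streams": ("1:25", "5:00")},
    3: {"radio": 2, "streams": ("live", "2:50")},
    4: {"radio": 2, "streams": ("1:25", "5:00")},
}

def channel_mix(channel, levels, toggles):
    """Mix one channel: sum each active stream scaled by its fader.
    `levels` and `toggles` map stream name -> fader value / on-off."""
    return sum(levels[s] * (1.0 if toggles[s] else 0.0)
               for s in ROUTING[channel]["streams"])
```

Writing it out this way makes it clear that each radio's four streams are split pairwise across two channels, so any single fader move touches exactly one stream of one radio.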

I have set up my final hardware design, which now uses touchOSC (as opposed to OSCemote). After picking up the iPad, I found that touchOSC had updated their app to support the device, and it is extremely easy to set up and beautifully done. OSCemote's multi-touch was too hard to manipulate, in that it forced me to place previously mapped touches before editing a new value (i.e., putting down the three initial fingers on the multi-touch before editing the fourth touch, for example).

Below are pictures of my basic hardware layout for the performance (the picture is missing the 2nd channel radio, and I won't have the second monitor) and the interface I designed on the iPad.

Sunday, March 14, 2010

Possible Interface

I have been developing multi-touch patches between OSCemote and SC over the past few weeks and playing with how they could potentially be used in my compositions. I pre-ordered an iPad and, funds permitting, am hoping I may be able to use this device with the patches I have developed to interact with my streams of radio in real time.

The current interface idea is a touchpad (the iPad, though I want to keep it from becoming a gimmick that defaces the meaning behind the piece) and a 4-channel mixer running four separate radios (ideally tuned by a digital line to SC using an Arduino), which are routed through two delay lines and stored in buffers, allowing each radio to have three separate feeds: one in real time, one from 5 minutes earlier, and one from 10 minutes earlier. This would allow me more flexibility with material.

Missing post

Just realized that a post I tried to put up last week, with a sample of my prototype, never made it onto the blog. For some reason e-blogger only accepts pictures and video... annoying. An update nonetheless: I was able to get some great feedback on my prototype, including constructive thoughts on how the structure and sound of the mixes should feel and how I could potentially exhibit my work in a gallery setting. Headphones were suggested, but my preference would be a speaker setup. We'll see as the show comes nearer and I get a better feel for how others would like it presented. Over the next two weeks, I will be working out the kinks of interfacing and writing as much of the SuperCollider software needed to support my piece as possible, allowing me to run many 9-10 minute mixes over the course of next quarter and fine-tune a few select compositions, based on the time and material experimented with, to be ready for the gallery. In essence, from the start of next quarter it will be a matter of iterating and getting feedback weekly to get the best possible sound and experience out of the show.

Wednesday, February 24, 2010

Typical Thesis Meltdown

Today, I was trying to work on my interface in the wood shop with little luck. Nothing that worked in theory came out nicely when fabricated, wood was splitting, and my hopes were getting shattered. It took a few weeks of frustration and a few good talks with Tivon and Roxie to realize that I was spending way too much time and effort designing an interface and not designing an experience worth remembering.

In the end, I have no fabrication experience when it comes to designing instruments in the physical realm, and that inexperience should not be built into a piece that is supposed to highlight the skills I have become proficient in over my undergraduate career.

Collaborating with Roxie, we came up with a few great points in the form of well-directed questions, which I will now answer, both in the hope of preserving good thought processes from my earlier thesis explorations and to begin looking more holistically at what I want to accomplish with this piece.

- Who inspires me and what have they taught me?

I'm not going to hold back on this question. Not just "experimental artists," but contemporary artists and everything in between. At the same time, I could go on forever, so I will just write down my influences as they come to me without rambling on too much.

"Wish I was there to see" works:

Christian Marclay - There is no such thing as "bad" sounds. Sound on its own has little weight; it's the context of sound, its placement in time and space, and how it is shaped that give it life.

Stelios' "Fantasia on a Single Number" - You don't have to throw away popular reference and rhythm, or use an intricate interface, to make an elegant, provocative, experimental piece

"I was there and it was amazing!" pieces:

Nicolas' "Speaker Performing Kiosk (Cube)" - The design of a great space and its vast implications can be inherently an instrument that can create a compelling and dynamic experience.

Juan, Ensu, and Joel's "Entanglement"- It was so fascinating to see a piece where sound had a physical response that was so tactile. I learned that sound should be thought of as something beyond an "effect" of a physical interaction, but rather "entangled" with physical interaction. I loved how my body became a part of the piece, and how personal the experience felt.

"I wasn't born pieces:"

Cage's "4:33" - This piece taught me two things (rather its documentation). First, it taught me that silence is unattainable, and that truly listening to the nuances of life can be rewarding on its own. So many go without respecting the power of this false sense of silence. Second, it taught me how important silence is in the context of composition, how most composers think in terms of what sound is present and not necessarily what sound is missing.

"It isn't 'experimental' enough for Digital Arts but I love it" music:

Amon Tobin's CD "Foley Room", Glitch Mob - It's been done: natural sounds, foley sounds, glitch, noise, etc., composed in the form of dance music. But the truth is, it's executed so damn well. Perhaps it's just my opinion (well, iTunes shows that many people love it, so maybe it isn't), but it's a staple of the creative, dance-oriented music that I listen to day in, day out. It may not be as intellectually deep as some of the previous pieces, but it's the music I find myself listening to in my car and when I'm working, music I emotionally respond to no matter my disposition.

- What am I passionate about?

So much... too much. But in the end, it's just three things.
1) User experience - If I don't communicate well, my art is useless.
2) "We stand on the shoulders of those who came before us" - The quote of my life. It is why I am so fascinated by remix, by the animation industry, by Systems Art. It's all based on the collective whole, not on the artist (singular).
3) Impromptu performance - It is my belief that a truly great impromptu performance is not completely left up to chance. It is a common misconception that an impromptu performance is completely unprepared. To me, a great impromptu performance has years of experience and preparation coming into the piece. The only things left up to the spirit of the moment, so to speak, are the aspects that the musician leaves to "chance." This spirit, what I consider the impromptu spirit, is such a human notion. In essence, it encompasses the fact of life that we prepare as much as possible for the future, but there is always a bit that is left up to hope.

- What can I strip and what can I keep in my interface?
Strip everything that is too performance specific. Sure, I want to perform this piece; that is pivotal to the impromptu spirit I spoke of earlier. But Tivon made the good point that this piece should hold some significance beyond a site-specific performance. It is not about the interface, but rather the experience.

More specifically, I liked the bendable notion of my interface. This aspect has a continuous nature that may prove useful.

- Roxie: "When was the last time I just sat down at your computer and played with sound?"
Okay... something's seriously wrong. It's been way too long. This leads to the next question...

- Where do I go from here?
I am going to spend a week playing and see what blossoms. I am not going to hold myself to a physical interface (though I will continue to think hard about what the final performance may look like), but really focus on experimenting with what the sound and aural experience will feel like.

In the nature of impromptu performance and leaving a certain amount to chance, maybe something can be said for my less-researched thoughts on collecting sound from radio and experimenting with what can be done with this spectral information. This is alluring for another reason: when a DJ or radio host compiles audio, it carries a certain level of thought about how the audio may play out. As this information is collected and put into context with the holistic view of "radio," it becomes more stochastic in the context of the larger whole. This plays into thoughts of what really is random, what is chance, what is providence, what is the impromptu?

In short, I'm going to play around for a week with these thoughts in the back of my mind, present a starting point next week, and get people's reactions. Let's see what becomes of it.

Wednesday, February 17, 2010

Goal for the end of the Quarter

My final goal is to have an updated version of my prototype with 3-4 tines created and a first iteration of the performance created by the end of the quarter. It is ambitious, but I feel it will put me in a good position coming into next quarter.


So I haven't had a chance to test it, and I can honestly say it's not a very robust prototype, but I have a first iteration of my interface where I can test the basic sensing systems I will be using in my device. The pictures are below.

Wednesday, February 10, 2010

Hard Week

So first off, I want to apologize for not posting before today's class; it's been one hell of a week. I came in on Tuesday hoping to start my prototype, only to hang out the entire day without anyone around to teach me how to use some of the equipment. Thus, I ended up working on my research project for Nicolas instead. I was able to flag Nate down (thanks again, Nate!) to cut a few pieces of aluminum, and picked up all the necessary supplies to construct my prototype with the exception of the pressure sensor, which is taking FOREVER to get here! For now, I'm in a static state waiting for things to progress.

The one thing I was able to check out was using SuperCollider to collect more low-frequency (below the hearing threshold) information from the tines, which is really useful. I am still working on getting the input signal well conditioned, but it's coming along nicely. Perhaps with the addition of the pressure sensor, it will be feasible to sense all the amplitude and frequency information coming in from the tines.

Lastly, I found another work online that uses "Rulers" as an instrument. First thought... dammit. Second thought: wow, their version is really limited! No sliding capability, and the dynamics (at least from the recording) seem very muted and not very interesting. The use of an infrared sensor is interesting, but I am definitely thinking that with the use of a contact mic and pressure sensor, more useful information could come out of the interface, and I'm not just saying that because of my slight disappointment that this instrument has been "done." Frankly, I'm not surprised by this. My design will also be much more discreet, without the likes of elaborate (and honestly cheesy) drawings on it, and I will be incorporating longer tines that will have longer dampening times and generate lower frequencies. Lastly, I am looking at it as more of a control surface to amplify my ideas of transformation in aural and visual gesture rather than an instrument in the traditional sense.

Check out the "Rulers" here:

Tuesday, February 2, 2010

Experiment Findings

So, in short, I have been able to find out the following in my experiments.

1) The flex sensors are far too unreliable and not an accurate way to get good control information when the metal slaps quickly.

2) The contact mic offers a great wealth of information, more so than I initially gave it credit for. As should be expected, the amplitude gives a great idea of how hard the membrane is slapped, and the frequency correlates to how much length hangs off the end of the table. The majority of my time thus far has been devoted to cleaning up the signal to extract the frequency information, but I believe that with a bit more work and research I will be able to get something more proportional to the physical situation at hand.

3) Radio frequency content evaluation... still to come
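On finding #2: the correlation between pitch and overhang length follows, to a first approximation, the ideal clamped-cantilever relation, where the fundamental scales as 1/L². A Python sketch under that idealized assumption (the reference length and frequency below are made-up placeholders, not measurements from my tines):

```python
def tine_frequency(length, ref_length=0.15, ref_freq=40.0):
    """Estimate a tine's fundamental from its overhanging length.

    An ideal clamped cantilever's fundamental scales as 1/L^2, so a
    single measured reference pair (ref_length, ref_freq) calibrates
    the whole curve. Both reference values here are hypothetical.
    """
    return ref_freq * (ref_length / length) ** 2

half = tine_frequency(0.075)  # halving the overhang quadruples the pitch
```

Inverting this relation is what would let the tracked frequency report the overhang length back to SC as a control value.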

Lastly, I have created a 3D model of what the device will look like and will begin a first prototype of a single slap tomorrow in shop.

Monday, January 25, 2010


Experiment #1:

Starting tomorrow (1/26/10) at Fremont, I will begin experiment #1: set up a basic flex sensor attached to a ruler, sample the information with an Arduino, and see to what resolution I can retain information from the flex sensor and how I can apply it to simple synthdefs.

Experiment #2:

Attach a basic Darlington-pair touch sensor at the end of the ruler as a simple on/off switch to use in SC. Then, add a radio receiver to the mix and define a simple synthdef that fires bursts of given frequency content from the receiver when the touch sensor is pressed.

Basic Gestures

I have attached a doodle of how the basic gestures will play out for my piece. Note that in the second graph, the number corresponds to the gesture, where "1" is when the rulers are hit with the hand hard, "2" is where the rulers are bent back and forth slowly as a control, and "3" indicates when the touch sensors are activated and firing pulses of radio frequency content.

The Instrument!

So after much thought, I have come to a preliminary design for the instrument I will be building. The basic design is based on the integration of two physical actions that everyone who has gone through grade school should know.

The interface is built around the simple production of sound by clamping one end of a ruler to the end of a table and slapping the freestanding end. At this point, I would like to build a prototype first of one and then see how beneficial it would be to add more, but I am thinking there will be about 5 of these devices in a row. Each would have a contact mic at the end that would be reporting to SuperCollider, where the frequency content generated by each "ruler" could be tracked and processed.

The beauty of this interface is that it really has a lot of potential for control, and with the addition of the flex and touch sensors (see the design to the right), it allows for three types of interaction that can be explored, where each would be primarily explored in its own gesture, with a final fourth gesture involving a combination of each type of interaction.

The first interaction involves producing sound more physically in that the device is producing an analog audio signal the way that most know, namely smacking the freestanding end and changing the pitch by dragging the ruler up and down the table. There are two aspects of this form that are enticing. First, in a piece where I would like to contrast action and aural experience, the slap and oscillation is associated with a very strong, loud auditory response. However, this response coupled with a simple amplification system (via the contact mic at the top of the setup as shown) could be controlled and its intensity modulated, perhaps in an interesting way. Second, this action can serve as the unabstracted, initial "value" that the piece can then springboard from (no pun intended).

The addition of the flex sensors adds a new dimension of physical control to the piece. When using the instrument in the "analog" way as outlined earlier, it is seen as more of a physical sound producing mechanism, where its interaction is merely being amplified to be heard by a live audience. But in utilizing the flex sensors as a "digital control signal," a more subtle bend can be used to vastly alter the shape, expanding the presence of a small movement into a larger, auditory gesture.

The final section of the interface is the touch sensor at the tip of each of the "rulers," which, when pressed, fires a burst of sound from a collection of radio frequencies, where the length of the ruler hanging off the end of the surface could determine further real-time parameters, such as what pitch the device plays and for how long. This simple control setup, coupled with the third, most abstracted source of audio, allows for the most extreme loss of physical interaction: the subtlest of movements, a simple touch, changes the audio output completely, using frequency content not generated on site but very much present and chaotic, depending on where the performance is located.

Sunday, January 17, 2010

Instrument Inspirations

Instrumental Inspiration #1:

My dad has always said, "You stand on the shoulders of those who came before you." In the spirit of what my father has always told me, and which I firmly believe, I have always been a huge fan of the concept of the remix. Don't get me wrong, there are plenty of terrible remixes out there, mostly because the secondary artist ends up relying on the underlying material to carry their half-thought-out idea, and it ends up dead in its tracks.

Instrumental Inspiration #2:

It may sound cliche, but the fact is music really is all around us. As far as the electromagnetic spectrum is concerned, consider the 88-108 MHz range, which is packed with music, good and bad, as well as many other forms of communication. What if one could tap into these bands and use this spectral information to feed a performance? I envision using this information to power various frequency bands, where most of the original content would remain jumbled and mostly ambiguous, but the addition of the content in a new, redefined way could be extremely powerful.


If the design and execution of this instrument go well, I feel it could be very successful in communicating the central theme of this project: if I could collect the sounds of many voices through the airwaves into a cohesive, single, and powerful voice, this sonification of a significantly broad spectrum, narrowed into a single, well-defined and bounded stream, would amplify the unifying theory of "anti-expectation."

Saturday, January 16, 2010

Having 3rd Thoughts

Well, here I go again. I started testing with creating animated content for this second idea, and I just can't get myself to feel passionate enough about my current plan, perhaps for two reasons. The first is that every time I see something associated with the audio I wish to manipulate, it is never an animation that I associate with the action; it is always a real, tangible motion from the real world. No matter the abstraction in the sound, my mind tries to fit it to reality, to make sense of it. I love this interaction. Though many musicians and artists want to believe their audio is not supposed to represent something tangible, but rather elicit a response as abstract as the emissions they create, I'm realizing that I love creating abstraction and seeing what tangible actions the sounds evoke.

In a practical sense, every time I think about devising a well-rounded animation AND an audio piece that coincide cohesively, I lose my breath. I firmly believe I don't have the time to create something this large in scope in the time allotted (not well, anyway). I have worked on animated films with teams of 5+ people in the past, and those took more time to develop than the time I have at hand! Plus, I have been working with music and audio most of my life; animation I have only picked up in the last year and a half or so, and I don't feel I have enough experience with it to tackle an entire project based around animated motion. In the end, I want to create a single well-designed piece, not anything remotely half-assed.

With these practical limitations in mind, I have reached deep into my heart and mind and come to realize exactly what Toby McKes told me last year at his thesis exhibition. He mentioned that he first thought of creating an elaborate sound installation for his thesis and ran some experiments, but in the end he went back to what had jazzed him up when he first joined DXArts: creating a well-thought-out audio piece, no install strings attached. In my case, when I entered DXArts, I was enthralled by the possibilities of live performance and audio with the modern tools at hand. The thing that gets me excited is thinking about how modern technology allows us to better express ourselves as artists in live performance.

I have been having meetings with Nicolas and have been very inspired by his SPK project, as well as shooting the breeze with a buddy who works at Guitar Center. In both cases, I share a passion not only for the design of audio but for the design and transmission of audio. Over the last week, I have found myself pulled toward creating and playing with audio interfaces in new and exciting ways, and I am totally back to how I felt when I entered the program: simply excited. When I excitedly show people a new interactive setup, they keep asking, "What class is that for?" I just stare blankly and think, yeah, I should get back to my schoolwork. There's no reason I can't marry this passion with my thesis! (In fact, I feel it'd be a crime NOT to!)


I am officially changing my thesis to a performance-based piece that involves the design and fabrication of a device. The premise is simple (and will hopefully flourish in its simplicity): a single instrument, perhaps comprising many software, hardware, and traditional instrument-design components, but first and foremost something I can control in real time, with enormous potential for live control and interaction. Over the next day or two, I will better define the intellectual premise behind the design of the piece, but I am leaning toward the interaction that interested me in my second proposal: visual information and motion that yields unexpected auditory results. For example, small movements, BIG sounds. Common motion that yields completely unexpected results. Stay tuned... I think I'm finally at peace with where my thesis idea is going!

Tuesday, January 12, 2010

Possible Experiments

First things first, what possible experiments would go down to begin to realize this piece?

Experiment #1:

Experiment with various forms of audio and video which evoke a strong response. Find various forms and strip each from its corresponding media. Watch it, listen to it, do each over and over alone and together, then with other sets of audio and video.

Experiment #2:

With basic motions established, experiment with how they can transform over time and see if it is possible to achieve a situation where an observer is filling in the gaps and is on some level conscious of doing so.

Thesis Premise: The Threshold of Common Media

In thinking about my past works that have dealt with how sound interacts with the world, and more importantly how the world is transformed by sound, I have yet to play with the common visual medium, namely film and animation, and how we associate audio with a given visual.

In my piece "Paranous," I enjoyed experimenting with sound that is developed in the space somewhere beyond the listener and within the listener using bone conduction headphones and an ambisonic speaker array. The one aspect of this piece I enjoyed was how the audio physically interacted with a listener. Music was going on literally both inside and outside them.

Another inspiration for this piece is Steve Reich's "Come Out." What would "Come Out" look like? How could this theory of phase be extrapolated to an audio/visual combination?

With the piece in development, I am wishing to deal with the audio/visual associative properties defined by our everyday existence. What common social interactions and mediums that we utilize everyday can be used as means of creation and destruction of this audio/visual associative property?

The initial idea is to create a piece that involves having a small assortment of animated forms that one would commonly associate with given audio. Each source would be rich in active forms, where some may have quick punchy animations and associated audio, and others slow sweeping, long drawn out motion. The piece would use this material to explore motion and our understanding of it, quickly cutting from the visual realm back to the audio realm and vice versa, jumbling the audio and video and testing how one may sound with another, etc.

The various streams of interaction could then begin to move slowly forward, pushing the presented audio and video beyond what has already been seen. If successful, it is my hope that the observer would begin to formulate an experience that may or may not be physically present, with their ears filling in the gaps where there is silence and their eyes filling in the gaps where the video is a void, defining the experience differently each time the piece is experienced.

Thursday, January 7, 2010

The New Blog

Welcome! I will post here about all the updates of my DXArts thesis each week.