learning the ropes

things I made at ITP and after: sketches, prototypes, and other documentation

Tuesday, April 21, 2009

Second Drawing

Last night, after integrating the code I worked on during the day, I made another drawing.

Here are some video samples showing how it is currently working.  Both examples are taken from the full-length drawing session, in which I used a Max/MSP patch to control audio synthesis in Reason.  The overall magnitude of the gestures I make controls the volume of the sound, and the separate horizontal and vertical magnitudes of the gestures control filtering parameters.
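The patch itself lives in Max/MSP, but the mapping it performs can be sketched in a few lines of Python. Everything here is illustrative: the window of (x, y) positions, the scaling constant, and the 0–127 (MIDI CC-style) output range are my assumptions, not values from the actual patch.

```python
import math

def gesture_to_params(positions):
    """Map a short window of (x, y) drawing positions to synth parameters.

    Returns (volume, filter_h, filter_v), each clamped to 0..127.
    The window size, gain, and output range are illustrative guesses.
    """
    # Per-axis displacement magnitudes summed over the window.
    dx = sum(abs(b[0] - a[0]) for a, b in zip(positions, positions[1:]))
    dy = sum(abs(b[1] - a[1]) for a, b in zip(positions, positions[1:]))
    overall = math.hypot(dx, dy)  # overall gesture magnitude

    scale = 0.5  # assumed gain; in practice this would be tuned by ear
    clamp = lambda v: max(0, min(127, int(v)))
    return clamp(overall * scale), clamp(dx * scale), clamp(dy * scale)

# Example: a mostly horizontal gesture drives the horizontal filter
# much harder than the vertical one.
print(gesture_to_params([(0, 0), (40, 2), (80, 4), (120, 6)]))
```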

I’m trying to understand the correlation between the drawing tool and the sound it makes (both in terms of texture and value). This drawing tool is a stick of hard graphite. I chose an airy sound which augments the natural sound of the tool on the drawing surface.

Here, the drawing tool is compressed charcoal. It makes much darker marks. Just for the record, my goal here is not to make light-saber sounds as I draw.

My initial reactions to last night’s drawing:

  • The sounds the system produces are not expressive enough.  It’s only entertaining for a short period of time.  I want to be able to shape them more over time.
  • The interface needs to be more sensitive at lower speeds (when I’m drawing slowly, I don’t hear any variation in the sound).
  • The combination of the drawing tool and the sound it produces is important to me.
  • I need a way to iterate through ideas a bit more rapidly — my software “toolbox” in this iteration of the patch seems a bit more limited — or perhaps I’m feeling more squeezed for time than I was when I was in school.
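On the low-speed sensitivity problem: a common fix is to run the raw gesture speed through a nonlinear response curve before it hits the synth, so small movements still produce audible variation. A minimal sketch, where the maximum speed and the exponent are assumed tuning values, not measurements from my setup:

```python
def shape_speed(speed, max_speed=500.0, exponent=0.4):
    """Expand the low end of the speed range with a power curve.

    speed: raw gesture speed. max_speed and exponent are assumed
    tuning values. Returns a 0..1 control value; an exponent below 1
    boosts small inputs so slow drawing still varies the sound.
    """
    normalized = min(speed / max_speed, 1.0)
    return normalized ** exponent

# With these assumed values, a gesture at 2% of full speed maps to
# roughly 21% of the control range instead of 2% with a linear curve.
```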

Today, I’ve been researching ways to enhance the expressiveness of the sounds as I draw.

  • Currently I’m using a foot pedal to turn the sound on when I start drawing, but I would really like the sound to start and stop automatically when I touch the drawing surface.  This morning, I was experimenting with a piezo element to detect the sound of a drawing tool on the drawing surface, but didn’t get good results.
  • I’m also interested in finding ways to differentiate sonically between long flowing lines and short jagged lines.  This afternoon I played a bit with a computer vision library for Jitter (cv.jit) in the hope that I might be able to interpret the position data from the ropes graphically.  The library contains some pattern-learning features, but I’ve only played with it for a few hours so far.  Eventually, I would like to be able to distinguish areas of loops as well.
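On the start/stop problem: a standard way to turn a noisy contact sensor like a piezo into a clean gate is an envelope follower with hysteresis (separate on and off thresholds), so the sound doesn’t chatter on and off. This is a generic sketch, not what my piezo experiment actually did; the thresholds and decay rate are guesses that would need calibrating against real pencil-on-paper levels.

```python
def contact_detector(samples, on_threshold=0.2, off_threshold=0.05, decay=0.99):
    """Track an amplitude envelope over sensor samples and report contact.

    Hysteresis (on_threshold > off_threshold) keeps the gate from
    chattering near the trigger point. All constants are illustrative.
    """
    envelope = 0.0
    touching = False
    states = []
    for s in samples:
        # Envelope rises instantly with the signal, decays slowly.
        envelope = max(abs(s), envelope * decay)
        if touching and envelope < off_threshold:
            touching = False
        elif not touching and envelope > on_threshold:
            touching = True
        states.append(touching)
    return states
```

The gate state could then start and stop the synth in place of the foot pedal.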
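On telling flowing lines from jagged ones: one simple metric is total turning angle divided by path length, computed over the stroke’s position data. Long flowing lines score near zero; short zig-zags score high. The metric is my own assumption here, not something cv.jit provides directly.

```python
import math

def jaggedness(points):
    """Total turning angle per unit path length over a stroke.

    points: list of (x, y) positions. A straight or gently curving
    stroke scores near 0; a sawtooth scores much higher. This metric
    is an illustrative assumption, not part of cv.jit.
    """
    length = 0.0
    for a, b in zip(points, points[1:]):
        length += math.hypot(b[0] - a[0], b[1] - a[1])
    turning = 0.0
    for a, b, c in zip(points, points[1:], points[2:]):
        h1 = math.atan2(b[1] - a[1], b[0] - a[0])
        h2 = math.atan2(c[1] - b[1], c[0] - b[0])
        # Wrap the heading change into (-pi, pi] before accumulating.
        turning += abs(math.atan2(math.sin(h2 - h1), math.cos(h2 - h1)))
    return turning / length if length else 0.0

flowing = [(x, 0) for x in range(10)]     # straight line
zigzag = [(x, x % 2) for x in range(10)]  # sawtooth
```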
posted by Michael at 4:22 pm  
