Thursday, July 29, 2010
I received word that 4MP (a reincarnation of my rope&pulley system) has been accepted for exhibition at Maker Faire New York. Come out and join me for some improvised dance music.
Tuesday, June 2, 2009
One of the motions I’ve been trying to represent physically is the selection of a region of audio. There are any number of screen-based ways to do this, but I wanted to find a mechanical way.
Thursday, May 28, 2009
I’ve been thinking about machines: machines that perform and people who perform with machines. These are some visual references that I’ve found interesting in considering this subject matter.
Wednesday, May 6, 2009
This idea came out of a discussion Shlomit and I were having last Friday evening about whether I was making a performance or an installation. She pointed out that I really needed to consider the performance from the audience’s perspective, so I considered flipping the whole thing around — facing the audience through a sheet of glass rather than standing with my back to them. During the performance, I could scratch at the back of a sheet of glass covered with black paint. The drawing tool would continue to generate some sort of audio. This performance concerns perception and revelation: I scratch away paint to reveal the audience and the space, which I cannot see at the beginning of the performance. If I trace the outlines I see, I will also be rendering a mirror of the audience and simultaneously revealing my own image through the scratch marks.
I created this mockup so I could see what it looked like at full size from the audience perspective.
Monday, May 4, 2009
On Monday evening, Wendy Richmond visited to see what I’ve been working on. One thing she questioned, both in watching me and in using the mechanism herself, was whether I was bothered by the way the ropes constrained me to a section of the board. I hadn’t been terribly bothered by it, but at the same time, I was already working out a way to get past it with the infrared-detection scheme. The question still remains – what place do ropes have in this work? If they do belong in this work, is the tension of the constraint also part of the work or something I need to remove?
Later in the week, though, I sketched one possible solution to the rope loop constraint.
By replacing the rope loops with a simple system of counterweights, I would be able to move the drawing tool freely across the entire drawing surface.
On Saturday, I put together a prototype by cutting the rope loops and adding water-filled soda bottles as counterweights.
At first, the system had too much friction, but after adjusting the pulley locations and changing the amount of weight in the soda bottles, I found a good balance.
Wednesday, April 29, 2009
One thing I’ve been trying to work out is how to turn the sound on and off. For testing purposes, I’ve been using a footswitch that I step on whenever I start drawing. As I’ve been making larger drawings, the ropes have pulled the footswitch away from me.
Originally, I was planning to use a piezo-based contact mic mounted to the drawing surface to pick up the sound of the drawing tool, but my initial experiments with this technique did not yield good results. Although the techniques I found for interfacing a piezo element with Arduino don’t include an amplifier, I have a feeling it will be necessary to boost the signal coming out of the piezo. I tried several piezo elements I had lying around — including a known good one — but I couldn’t get reliable data from any of them.
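For what it’s worth, the detection side of this idea can be sketched independently of the weak-signal problem. The following is a minimal host-side sketch, not my actual patch: it assumes the Arduino streams raw `analogRead` values (0–1023) for the piezo, that quiet readings sit near zero, and that the threshold and window size would need tuning against the real sensor.

```python
def detect_contact(samples, threshold=30, window=8):
    """Return True once the smoothed piezo signal rises above a noise floor.

    samples: iterable of raw ADC readings (0-1023), assumed near zero at rest.
    threshold and window are hypothetical values that need per-setup tuning.
    """
    buf = []
    for s in samples:
        buf.append(s)
        if len(buf) > window:
            buf.pop(0)  # keep only the most recent readings
        # average over the window to ignore single-sample spikes
        if sum(buf) / len(buf) > threshold:
            return True
    return False
```

Averaging over a short window is one way to keep single noisy samples from triggering the sound; a hardware amplifier would still be needed if the scratching itself barely registers.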
Hal suggested in his response to my post the other day that I might want to consider a pressure sensor, so today I’ve been prototyping a drawing tool holder which uses a force-sensitive resistor to determine how hard I’m pressing the drawing tool against the drawing surface.
My first attempt needs some refinement, though. There needs to be a spring mechanism inside (like a click ballpoint pen has) to keep the drawing tool off the sensor when I’m not drawing. In this first prototype, I used some packaging foam to hold the drawing tool in place. The undesired side effect is that the foam also holds the drawing tool securely against the force-sensing resistor, so this prototype would never be able to tell me when the drawing tool is lifted.
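Once the spring mechanism exists, the software side of pen-up/pen-down detection is straightforward. Here’s a sketch (not my actual patch) using two thresholds instead of one, so a reading hovering near a single cutoff doesn’t rapidly toggle the sound; the threshold values are assumptions that would need tuning against the real sensor.

```python
class PenState:
    """Debounced pen-down detector for a force-sensing resistor.

    Two thresholds (hysteresis): the pen must press past down_threshold
    to count as "down", and fall below up_threshold to count as "up".
    Values are hypothetical, to be tuned against the actual FSR readings.
    """

    def __init__(self, down_threshold=300, up_threshold=150):
        self.down_threshold = down_threshold
        self.up_threshold = up_threshold
        self.down = False

    def update(self, reading):
        if not self.down and reading > self.down_threshold:
            self.down = True   # tool pressed firmly: start the sound
        elif self.down and reading < self.up_threshold:
            self.down = False  # tool lifted: stop the sound
        return self.down
```

Readings between the two thresholds simply keep the previous state, which is what makes the on/off transitions stable.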
Thursday, April 23, 2009
This week I have focused on building the software which synthesizes audio. The software I used last year was very rudimentary; it could only map the position and direction of the pulleys’ movements onto audio file playback position and volume.
What I’ve been trying to do at the beginning of this residency is get the software to a point where I feel it has adequate expressive capabilities so that I can begin to try out the different mappings — and do more drawings.
At the beginning of this week, I worked towards mapping the existing data from the pulleys onto several different parameters in the Reason software synthesis environment. As I wrote earlier, there just wasn’t enough variety in the sounds I was generating to hold my interest.
To find another way to move forward, I turned my attention to the data — and came up with a number of ways to derive dynamic values from my existing sensing system.
Properties of Lines
Instead of simply treating the coordinates I receive from the sensors as changes in position, I could begin to record data about the emerging line — and its relationship to previously drawn lines.
- Current line length – since I know how far the pen has moved horizontally and vertically, I know the distance it has travelled.
- Running average of past line lengths – maybe it will be useful to know how the lengths of the lines have been changing over time.
- Current line drawing time – it’s easy to figure out how long the pen was down. Once I know how long the pen has been down, I can also compute the speeds at which lines are being drawn. Maybe averaged data about these properties will also yield useful control values for generating audio.
- “Not drawing” time – how long do I pause between drawing lines? Is my drawing proceeding at a “furious” pace or am I stopping to think a bit between making marks?
- Line slopes – I want some way to tell if I’m drawing similar lines. I want visual repetition to translate into sonic repetition.
- Line shapes: Loops – Am I drawing over the same line in order to darken it on the paper? The software should be able to sense this. I don’t know of a good computational approach to this, but I had the idea to look at the current line (pen-down to pen-up) as video.
If I rendered my digitized lines in a transparent color, they would darken over time where they overlap, and I could probably find a computer vision algorithm which tests for an enclosed shape (not sure if this is blob tracking or something else).
- Line shapes: Similar forms – There are computer vision techniques for sensing the statistical similarity of an image to a reference image. I tried some experiments with this using cv.jit, but these haven’t yielded useful results yet.
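The simpler properties in the list above (length, drawing time, speed, slope, running averages) can be sketched directly from the pen coordinates. This is an illustration in Python rather than my actual Max/MSP patch; the window size and the dictionary of returned values are my own assumptions.

```python
import math

def line_properties(points, timestamps):
    """Derive control values from one pen-down-to-pen-up line.

    points: list of (x, y) pen positions sampled while drawing.
    timestamps: matching sample times in seconds.
    Returns length, drawing time, average speed, and the slope of the
    start-to-end segment (as an angle, so vertical lines don't blow up).
    """
    length = sum(
        math.hypot(x2 - x1, y2 - y1)
        for (x1, y1), (x2, y2) in zip(points, points[1:])
    )
    duration = timestamps[-1] - timestamps[0]
    speed = length / duration if duration > 0 else 0.0
    (x0, y0), (xn, yn) = points[0], points[-1]
    slope = math.atan2(yn - y0, xn - x0)  # radians
    return {"length": length, "duration": duration,
            "speed": speed, "slope": slope}

def running_average(history, value, window=5):
    """Track a running average of past values, e.g. line lengths."""
    history.append(value)
    if len(history) > window:
        history.pop(0)
    return sum(history) / len(history)
```

Comparing each new line’s slope and length against the running averages would be one crude way to detect the “visual repetition” I want to translate into sonic repetition.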
Tuesday, April 21, 2009
Last night, after integrating the code I worked on during the day, I made another drawing.
Here are some video samples showing how it is currently working. Both examples are taken from the full-length drawing session, in which I used a Max/MSP patch to control audio synthesis in Reason. The overall magnitude of the gestures I make controls the volume of the sound, and the separate horizontal and vertical magnitudes of the gestures control filtering parameters.
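For readers who don’t use Max/MSP, the mapping described above amounts to something like the following sketch; the normalization constant and the specific filter parameters are my own stand-ins, not what the patch actually sends to Reason.

```python
import math

def gesture_to_controls(dx, dy, max_step=50.0):
    """Map one pen movement to normalized synth controls (0.0-1.0).

    dx, dy: change in pen position since the last sensor reading.
    Overall magnitude drives volume; horizontal and vertical magnitudes
    drive two filter parameters. max_step is an assumed calibration value.
    """
    magnitude = math.hypot(dx, dy)
    volume = min(magnitude / max_step, 1.0)
    filter_cutoff = min(abs(dx) / max_step, 1.0)
    filter_resonance = min(abs(dy) / max_step, 1.0)
    return volume, filter_cutoff, filter_resonance
```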
I’m trying to understand the correlation between the drawing tool and the sound it makes (both in terms of texture and value). This drawing tool is a stick of hard graphite. I chose an airy sound which augments the natural sound of the tool on the drawing surface.
Here, the drawing tool is compressed charcoal. It makes much darker marks. Just for the record, my goal here is not to make light-saber sounds as I draw.
My initial reactions to last night’s drawing:
- The sounds the system produces are not expressive enough. It’s only entertaining for a short period of time. I want to be able to shape them more over time.
- The interface needs to be more sensitive at lower speeds (when I’m drawing slowly, I don’t hear any variation in the sound).
- The combination of the drawing tool and the sound it produces is important to me.
- I need a way to iterate through ideas a bit more rapidly — my software “toolbox” in this iteration of the patch seems a bit more limited — or perhaps I’m feeling more squeezed for time than I was when I was in school.
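One possible fix for the low-speed sensitivity problem noted above is a nonlinear transfer curve: raising the normalized speed to an exponent below 1 stretches the quiet low end so slow strokes still produce audible variation. A minimal sketch, with entirely assumed constants:

```python
def sensitize(speed, max_speed=100.0, exponent=0.4):
    """Expand the low end of the speed range with a power curve.

    A linear mapping leaves slow drawing nearly inaudible; an exponent
    below 1 makes small speeds map to proportionally larger control
    values. max_speed and exponent are assumptions to tune by ear.
    """
    normalized = min(speed / max_speed, 1.0)
    return normalized ** exponent
```

With these constants, a speed of 10 (one tenth of the maximum) maps to roughly 0.4 instead of 0.1, which is the kind of boost slow drawing needs.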
Today, I’ve been researching ways to enhance the expressiveness of the sounds as I draw.
- Currently I’m using a foot pedal to turn the sound on when I start drawing, but I would really like the sound to start and stop automatically when I touch the drawing surface. This morning, I was experimenting with a piezo element to detect the sound of a drawing tool on the drawing surface, but didn’t get good results.
- I’m also interested in finding ways to differentiate sonically between long flowing lines and short jagged lines. This afternoon I played a bit with a computer vision library for Jitter (cv.jit) in the hopes that I might be able to interpret the position data from the ropes graphically. The library contains some pattern learning features, but I’ve only played with it for a few hours so far. Eventually, I would like to be able to distinguish areas of loops as well.
Thursday, April 9, 2009
As I’ve been preparing for my residency at DPI, I’ve been investigating alternative ways to produce sonic drawings. Several weeks ago, I saw (again) demonstration videos from Johnny Chung Lee’s Wiimote Projects and began thinking about whether my rope&pulley system was the ideal input for the audio drawings I’m going to be working on. I envision drawing on a surface with “traditional” drawing tools (graphite sticks, maybe chalk, or markers) and mapping those large drawing gestures in realtime to audio in order to create sonic marks. The infrared LED pens described by Lee could be used as a starting point for a sleeve which I could put around my drawing tools. Although this departs from the rugged physicality of the rope&pulley, I think it will make the drawing process more intuitive.
Over the past two weeks, I’ve been experimenting with a Wiimote. While it was fairly straightforward to get the Wiimote connected to my PC and run some of the demo programs, I’m going to be writing my software in Max/MSP/Jitter, so I’ve been trying to figure out how to transform the coordinates I get back from the camera in the Wiimote into coordinates on a rectangular screen. I don’t think I’ll be projecting any visuals from my computer, but I want to resolve the pen position onto a regular rectangle so I can create a mapping to the sound generation component of my software.
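The coordinate transform I’m after is a projective transform (homography): four calibration points as seen by the Wiimote camera define a 3×3 matrix that maps any camera coordinate onto the target rectangle. This is a sketch of the standard direct-linear setup in Python, not Lee’s code and not my eventual Jitter implementation.

```python
import numpy as np

def find_homography(src, dst):
    """Solve for the 3x3 projective transform mapping src points to dst.

    src: four (x, y) points as seen by the Wiimote camera.
    dst: the four corresponding corners of the target rectangle.
    Builds the standard 8x8 linear system (h33 fixed to 1) and solves it.
    """
    A, b = [], []
    for (x, y), (u, v) in zip(src, dst):
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y])
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y])
        b.extend([u, v])
    h = np.linalg.solve(np.array(A, float), np.array(b, float))
    return np.append(h, 1.0).reshape(3, 3)

def warp_point(H, x, y):
    """Map one camera coordinate through the homography."""
    u, v, w = H @ np.array([x, y, 1.0])
    return u / w, v / w  # divide out the projective scale
```

Calibration would mean touching the IR pen to the four corners of the drawing surface once, then running every subsequent pen position through `warp_point` before it reaches the sound-generation patch.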
Earlier this morning, I looked back at the Wiimote projects page and found that one of Lee’s example programs includes source code — and provided I can follow it, I think I can use what I’ve been learning about matrix operations in Jitter to implement it myself.
- Johnny Chung Lee Wiimote Projects
- Smarter Presentations: Exploiting Homography in Camera-Projector Systems (academic paper)
- Projective Transform (google search)
- Wii IR (search through Cycling ’74 archives for examples of porting Johnny Lee’s Wiimote whiteboard to Max/MSP/Jitter)
- warpPerspective code: example from OpenCV library used in OpenFrameworks. A little difficult to follow, though.
- Keystoning Forum Discussion (cycling74): example in Jitter demonstrating a method for using textured shaders to apply drastic keystoning to movies.
- Johnny Chung Lee’s Wiimote Whiteboard source code: Contains C# calibration and warping code, though no explanation of how it works.